Skeptoid #933: COVID-19 and the Lab Leak

Skeptoid Feed - 17 hours 17 min ago

Was the SARS-CoV-2 virus of natural origin, or was it engineered in a Chinese research lab?

Categories: Critical Thinking, Skeptic

Indigenous Knowledge

neurologicablog Feed - Mon, 04/22/2024 - 5:05am

I recently received the following question to the SGU e-mail:

“I have had several conversations with friends/colleagues lately regarding indigenous beliefs/stories. They assert that not believing these based on oral histories alone is morally wrong and ignoring a different cultures method of knowledge sharing. I do not want to be insensitive, and I would never argue this point directly with an indigenous person (my friends asserting these points are all white). But it really rubs me the wrong way to be told to believe things without what I would consider more concrete evidence. I’m really not sure how to comport myself in these situations. I would love to hear any thoughts you have on this topic, as I don’t have many skeptical friends.”

I also frequently encounter this tension between a philosophical dedication to scientific methods and respect for indigenous cultures. Similar tensions come up in other contexts, such as indigenous cultures that hunt endangered species. These tensions are sometimes framed in terms of “decolonization,” defined as “the process of freeing an institution, sphere of activity, etc. from the cultural or social effects of colonization.” Here is a more detailed description:

“Decolonization is about “cultural, psychological, and economic freedom” for Indigenous people with the goal of achieving Indigenous sovereignty — the right and ability of Indigenous people to practice self-determination over their land, cultures, and political and economic systems.”

I completely understand this concept and think the project is legitimate. To “decolonize” an indigenous culture you have to do more than just physically remove foreign settlers. Psychological and cultural colonization is harder to remove. And often cultural colonization was very deliberate, such as missionaries spreading the “correct” religion to “primitive” people.

But like all good ideas, it can be taken too far. People tend to prefer the moral clarity of simplistic dichotomies. What the e-mailer is referring to is the view that science is part of colonization, and something that indigenous people should free themselves of. Further, the argument goes, we need to respect their cultural freedom from science and accept their historical view of reality as just as legitimate as a science-based one. But I think this approach is completely misguided, even if it is well-intentioned (well-intentioned but misguided is often a dangerous combination).

There are a couple of ways to look at this. One is that science is not a cultural belief. Science (and philosophy, for that matter) is something that transcends culture. The purpose of science is to transcend culture, to use a set of methods that are as objective as possible, and to eliminate bias as much as possible. In fact, scientists often have to make a deliberate effort to think outside of the biases of their own culture and world view.

Logic and facts are not cultural. Reality does not care about our own belief systems, whatever their origin; it is what it is regardless. Respecting an indigenous culture does not mean we must surrender respect for facts and logic.

Another important perspective, I think, is that as a species we have some shared culture and knowledge. This is actually a very useful and even beautiful thing – there is human culture and knowledge that we can all share, and I would put science at the top of that list. There are objective methods we can use to come to mutual agreement despite our differing cultures and histories. We can have the commonality of a shared reality, because that reality actually exists (whether we believe in it or not) and because the scientific methods we use to understand that reality are transcultural. Science, therefore, is not one culture colonizing another, but all cultures placing something objective and verifiable above their own history, culture, and parochial perspectives.

We can make similar arguments for certain basic aspects of ethics and morality, although universal objectivity is more difficult to achieve there. But as a species we can conclude that certain things are objectively ethically wrong, such as slavery. If an indigenous culture believed in and actively practiced human slavery, would we be compelled to respect that and look the other way?

Yet another layer to this discussion is consideration of the methods that are used by one society to convince another to adopt its norms. If it is done by force, that is colonization. If it is done by intellectual persuasion and adopted freely, that is just one group sharing its knowledge with another.

And finally, I think we can respect the mythology and beliefs of another culture without accepting those beliefs as objectively true, or abandoning all concept of “truth” and pretending that all knowledge is equal and relative. Pretending the ancient cultural beliefs of a group are “true” is actually infantilizing and racist, in my opinion. It assumes that they are incapable of reconciling what every culture has had to reconcile to some degree – the difference between historical beliefs and objective evidence. Every society has its narratives and its view of history, and facts invariably push up against those narratives.

I know that in practice these principles are very complex and there is a lot of gray zone. Science is an ideal, and people have a tendency to exploit ideals to promote their own agenda. Just labeling something science doesn’t mean we can bulldoze over other considerations, and science is often corrupted by corporate interests and cultural promotion, even to the point of hegemony. This is because the people who execute science are flawed and biased. But that does not change the ideal itself. Science and philosophy (examining arguments for internal logical consistency) are methods we can use to arrive at transcultural human beliefs and institutions.

Let’s take the World Health Organization (WHO), for example, which is an international organization dedicated to promoting health around the world. I would argue, as an international organization, they should be relying on objective science-based methods as much as possible. Also, since their goal is to improve the health of humanity, science is the best way to do that. They should not, in my opinion, bow down before any individual culture’s pre-scientific beliefs about health for the purpose of cultural sensitivity. It is not their mission to promote local cultures or to right the wrongs of past colonizers. They should unapologetically take the position that they will only promote and use interventions that are based on objective scientific evidence. They can still do this in a culturally sensitive way. All physicians need to practice “culturally competent” medicine, which does not have to include endorsing or using treatments that do not work.

So in practice this is all very messy, but I think it’s important to at least follow legitimate guiding principles. Science is something that all of humanity owns, and it strives toward the ideal of being transcultural and objective. This is not incompatible with respecting local cultures and self-determination.

 

The post Indigenous Knowledge first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #980 - Apr 20 2024

Skeptics Guide to the Universe Feed - Sat, 04/20/2024 - 9:00am
What's the Word: Anhedonia; News Items: New Scams, Reconductoring, ISS Space Junk, Zombie Cicadas, Death by Wellness; Who's That Noisy; Your Questions and E-mail: AI Drug Development Correction; Science or Fiction
Categories: Skeptic

Annie Jacobsen — What Happens Minutes After a Nuclear Launch?

Skeptic.com feed - Sat, 04/20/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss424_Annie_Jacobsen_2024_04_20.mp3 Download MP3

Every generation, a journalist has looked deep into the heart of the nuclear military establishment: the technologies, the safeguards, the plans, and the risks. These investigations are vital to how we understand the world we really live in—where one nuclear missile will beget one in return, and where the choreography of the world’s end requires massive decisions made on seconds’ notice with information that is only as good as the intelligence we have.

Pulitzer Prize finalist Annie Jacobsen’s Nuclear War: A Scenario explores this ticking-clock scenario, based on dozens of exclusive new interviews with military and civilian experts who have built the weapons, have been privy to the response plans, and have been responsible for those decisions should they have needed to be made. Nuclear War: A Scenario examines the handful of minutes after a nuclear missile launch. It is essential reading, and unlike any other book in its depth and urgency.

Annie Jacobsen is an investigative journalist, Pulitzer Prize finalist, and New York Times bestselling author. Her books include: Area 51, Operation Paperclip, The Pentagon’s Brain, Phenomena, First Platoon, and Surprise, Kill, Vanish. Her book Nuclear War: A Scenario has been optioned to be made into a dramatic film.

Shermer and Jacobsen discuss:

  • So much has been written on this subject…what is new? (Richard Rhodes’s nuclear tetralogy (The Making of the Atomic Bomb, Dark Sun, Arsenals of Folly, Twilight of the Bombs), Eric Schlosser’s Command and Control, Fred Kaplan’s The Bomb, Martin Sherwin’s Gambling with Armageddon, Daniel Ellsberg’s The Doomsday Machine, Carl Sagan’s and Richard Turco’s A Path Where No Man Thought)
  • How much more is classified that we still do not know?
  • Why we have a nuclear triad (land missiles, submarine missiles, bombers)
  • Competition among military forces and increasing budgets for more weapons
  • How many types of nuclear weapons are there now, and how many total?
  • Why humans engage in aggression, violence and war
  • The Prisoner’s dilemma, Hobbesian trap, Security Dilemma, the “other guy” problem
  • Balance of Terror, Mutual Assured Destruction, Logic of Deterrence
  • Close calls: Cuban Missile Crisis, Nuclear sub/Vasily Arkhipov (1962), Damascus Titan missile explosion (1980), Able Archer 83 war exercise in Europe, Stanislav Petrov, etc.
  • Surviving a nuclear explosion/war
  • What happens in a nuclear bomb explosion
  • Short terms and long term consequences of a nuclear exchange
  • Nuclear Winter
  • Nuclear protests & films (On the Beach, Fail Safe, Dr. Strangelove, War Games, The Day After)
  • Getting to Nuclear Zero: Stockpile reduction, No First Use, No Launch on Warning, shift the taboo from not using them to not owning them
  • Reagan and Gorbachev and arms reductions
  • North Korea, China/Taiwan
  • Göbekli Tepe and the post-apocalyptic world

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

New Generation of Electric Robots

neurologicablog Feed - Fri, 04/19/2024 - 5:10am

Boston Dynamics (now owned by Hyundai) has revealed its electric version of its Atlas robot. These robot videos always look impressive, but at the very least we know that we are seeing the best take. We don’t know how many times the robot failed to get the one great video. There are also concerns about companies presenting what the full working prototype might look like, rather than what it actually currently does. The state of CGI is such that it’s possible to fake robot videos that are indistinguishable to the viewer from real ones.

So it’s understandable that these robot reveal videos are always looked at with a bit of skepticism. But it does seem that pushback does have an effect, and there is pressure on robotics companies to be more transparent. The video of the new Atlas robot does seem to be real, at least. Also, these are products for the real world. At some point the latest version of Atlas will be operating on factory floors, and if it didn’t work Boston Dynamics would not be sustainable as a company.

What we are now seeing, not just with Atlas but also Tesla’s Optimus Gen 2, and others, is conversion to all-electric robots. This makes them smaller, lighter, and quieter than the previous hydraulic versions. They are also not tethered to cables as previous versions were.

My first question was – what is the battery life? Boston Dynamics says they are “targeting” a four hour battery life for the commercial version of the Atlas. I love that corporate speak. I could not find a more direct answer in the time I had to research this piece. But four hours seems reasonable – the prior version from 2015 had about a 90 minute battery life depending on use. Apparently the new Atlas can swap out its own battery.

In addition to being electric, the Atlas is faster and more nimble. It can rotate its joints to give it more flexibility than a human, as demonstrated in the video. The goal is to allow it to flexibly operate in narrow work spaces.

Tesla has also unveiled its Optimus Gen 2 robot, which is a bit more oriented around personal rather than factory use. Tesla hypes that it could theoretically go shopping and then come home and cook you dinner. By way of demonstration, it released a video of Optimus delicately handling eggs. To be clear, Optimus is a prototype, not ready for commercialization. Tesla knows it needs to make continued improvement before this product is ready for prime time. Musk claims he is aiming for a sub $20,000 price tag for the commercial version of Optimus – but of course that does not mean much until they are actually for sale.

There is no question that the latest crop of electric robots is a significant improvement on earlier robots – they are more agile and lighter, with longer battery life. These robots can also benefit from recent advances in AI technology. Currently there are estimated to be 3.4 million industrial robots at work in the world, and this number is growing. The question is – are we really on the cusp of robots transitioning to non-industrial work and residential spaces? As is often the case – it’s hard to say.

As a general rule it’s good to assume that technology hype tends to be premature, and real-world applications often take longer than we anticipate. But then, the technology crosses the finish line and suddenly appears. All the hype of personal digital assistants merging with cell phones and the internet lasted for at least a decade before the iPhone suddenly changed the world. There is a hype, a post-hype, and then a reality phase to such technologies. Of course, the reality may be that the technology fails. Right now, for example, we appear to be in the post-hype phase of self-driving cars. But we also seem to be rapidly transitioning to self-driving cars as a reality, at least to some extent.

It still feels like we are in the hype phase of residential robots. It’s hard to say how long it will be before all-purpose robots are common in work spaces and the home. The difference, I think, with this technology is that it already does exist, for industrial use. This is more of a transition to a new use, rather than developing the technology itself. But on the other hand, the transition from factory floor to home is a massive one, and does require new technology to some extent.

There is also the issue of cost. Are people going to pay 20k for a robot? What’s the “killer app” that will make the purchase worth it? Where is the price point at which people will feel it is a worthwhile appliance, worth the cost? When will robots become the new microwave oven?

On the encouraging side is the fact that these robots are already very capable, and steady incremental advances will add up quickly (as they already have). On the down side, it’s hard to see how such an appliance will be worth the cost anytime soon. They will need to become either incredibly useful, or much cheaper. Will they really provide 20k worth of convenience, and be more cost-effective than just hiring people to do the jobs you don’t want to do? There is a threshold, but we still may be years away from crossing it.

The post New Generation of Electric Robots first appeared on NeuroLogica Blog.

Categories: Skeptic

Bayesian Balance: How a Tool for Bayesian Thinking Can Guide Us Between Relativism and the Truth Trap

Skeptic.com feed - Fri, 04/19/2024 - 12:00am

On October 17, 2005, the talk show host and comedian Stephen Colbert introduced the word “truthiness” in the premiere episode of his show The Colbert Report:1 “We’re not talking about truth, we’re talking about something that seems like truth—the truth we want to exist.”2 Since then the word has become entrenched in our everyday vocabulary, but we’ve largely lost Colbert’s satirical critique of “living in a post-truth world.” Truthiness has become our truth. Kellyanne Conway opened the door to “alternative facts”3 while Oprah Winfrey exhorted you to “speak your truth.”4 And the co-founder of Skeptic magazine, Michael Shermer, has begun to regularly talk to his podcast guests about objective external truths and subjective internal truths, inside of which are historical truths, political truths, religious truths, literary truths, mythical truths, scientific truths, empirical truths, narrative truths, and cultural truths.5 It is an often-heard complaint to say that we live in a post-truth world, but what we really have is far too many claims to truth. Instead, we propose that the vital search for truth is actually best continued when we drop our assertions that we have something like an absolute Truth with a capital T.

Why is that? Consider one of our friends who is a Young Earth creationist. He believes the Bible is inerrant. He is convinced that every word it contains, including the six days of creation story of the universe, is Truth (spelled with a capital T because it is unquestionably, eternally true). From this position, he has rejected evidence brought to him from multiple disciplines that all converge on a much older Earth and universe. He has rejected evidence from fields such as biology, paleontology, astronomy, glaciology, and archeology, all of which should reduce his confidence in the claim that the formation of the Earth and every living thing on it, together with the creation of the sun, moon, and stars, all took place in literally six Earth days. Even when it was pointed out to him that the first chapter of Genesis mentions liquid water, light, and every kind of vegetation before there was a sun or any kind of star whatsoever, he claimed not to see a problem. His reply to such doubts is to simply say, “with God, all things are possible.”6

Lacking any uncertainty about the claim that “the Bible is Truth,” this creationist has only been able to conclude two things when faced with tough questions: (1) we are interpreting the Bible incorrectly, or (2) the evidence that appears to undermine a six-day creation is being interpreted incorrectly. These are inappropriately skeptical responses, but they are the only options left to someone who has decided beforehand that their belief is Truth. And, importantly, we have to admit that this observation could be turned back on us too. As soon as we become absolutely certain about a belief—as soon as we start calling something a capital “T” Truth—then we too become resistant to any evidence that could be interpreted as challenging it. After all, we are not absolutely certain that the account in Genesis is false. Instead, we simply consider it very, very unlikely, given all of the evidence at hand. We must keep in mind that we sample a tiny sliver of reality, with limited senses that only have access to a few of possibly many dimensions, in but one of quite likely multiple universes. Given this situation, intellectual humility is required.

Some history and definitions from philosophy are useful to examine all of this more precisely. Of particular relevance is the field of epistemology, which studies what knowledge is or can be. A common starting point is Plato’s definition of knowledge as justified true belief (JTB).7 According to this JTB formulation, all three of those components are necessary for our notions or ideas to rise to the level of being accepted as genuine knowledge as opposed to being dismissible as mere opinion. And in an effort to make this distinction clear, definitions for all three of these components have been developed over the ensuing millennia. For epistemologists, beliefs are “what we take to be the case or regard as true.”8 For a belief to be true, it doesn’t just need to seem correct now; “most philosophers add the further constraint that a proposition never changes its truth-value in space or time.”9 And we can’t just stumble on these truths; our beliefs require some reason or evidence to justify them.10

Readers of Skeptic will likely be familiar with skeptical arguments from Agrippa (the problem of infinite regress11), David Hume (the problem of induction12), Rene Descartes (the problem of the evil demon13), and others that have chipped away at the possibility of ever attaining absolute knowledge. In 1963, however, Edmund Gettier fully upended the JTB theory of knowledge by demonstrating—in what has come to be called “Gettier problems”14—that even if we managed to actually have a justified true belief, we may have just gotten there by a stroke of good luck. And the last 60 years of epistemology have shown that we can seemingly never be certain that we are in receipt of such good fortune.

This philosophical work has been an effort to identify an essential and unchanging feature of the universe—a perfectly justified truth that we can absolutely believe in and know. This Holy Grail of philosophy surely would be nice to have, but it makes sense that we don’t. Ever since Darwin demonstrated that all of life could be traced back to the simplest of origins, it has slowly become obvious that all knowledge is evolving and changing as well. We don’t know what the future will reveal and even our most unquestioned assumptions could be upended if, say, we’ve actually been living in a simulation all this time, or Descartes’ evil demon really has been viciously deluding us. It only makes sense that Daniel Dennett titled one of his recent papers, “Darwin and the Overdue Demise of Essentialism.”15

So, what is to be done after this demise of our cherished notions of truth, belief, and knowledge? Hold onto them and claim them anyway, as does the creationist? No. That path leads to error and intractable conflict. Instead, we should keep our minds open, and adjust and adapt to evidence as it becomes available. This style of thinking has become formalized and is known as Bayesian reasoning. Central to Bayesian reasoning is a conditional probability formula that helps us revise our beliefs to be better aligned with the available evidence. The formula is known as Bayes’ theorem. It is used to work out how likely something is, taking into account both what we already know as well as any new evidence. As a demonstration, consider a disease diagnosis, derived from a paper titled, “How to Train Novices in Bayesian Reasoning:”

10 percent of adults who participate in a study have a particular medical condition. 60 percent of participants with this condition will test positive for the condition. 20 percent of participants without the condition will also test positive. Calculate the probability of having the medical condition given a positive test result.16

Most people, including medical students, get the answer to this type of question wrong. Some would say the accuracy of the test is 60 percent. However, the answer must be understood in the broader context of false positives and the relative rarity of the disease.

Simply putting actual numbers on the face of these percentages will help you visualize this. For example, since the rate of the disease is only 10 percent, that would mean 10 in 100 people have the condition, and the test would correctly identify six of these people. But since 90 of the 100 people don’t have the condition, yet 20 percent of them would also receive a positive test result, that would mean 18 people would be incorrectly flagged. Therefore, 24 total people would get positive test results, but only six of those would actually have the disease. And that means the answer to the question is only 25 percent. (And, by the way, a negative result would only give you about 95 percent likelihood that you were in the clear. Four of the 76 negatives would actually have the disease.)
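
To make that counting argument concrete, here is a minimal Python sketch of the same calculation (our own illustration, not from the cited paper; the numbers are the ones given in the example above):

```python
# Bayes' theorem by counting, using the numbers from the example above.
population = 100            # imagine 100 study participants
prevalence = 0.10           # 10% have the condition
sensitivity = 0.60          # 60% of those with the condition test positive
false_positive_rate = 0.20  # 20% of those without the condition also test positive

with_condition = population * prevalence         # 10 people
without_condition = population - with_condition  # 90 people

true_positives = with_condition * sensitivity               # 6 people
false_positives = without_condition * false_positive_rate  # 18 people

p_condition_given_positive = true_positives / (true_positives + false_positives)
print(p_condition_given_positive)  # 0.25 -> only a 25% chance of having the condition

# The flip side: how reassuring is a negative result?
true_negatives = without_condition * (1 - false_positive_rate)  # 72 people
false_negatives = with_condition * (1 - sensitivity)            # 4 people
p_clear_given_negative = true_negatives / (true_negatives + false_negatives)
print(round(p_clear_given_negative, 3))  # ~0.947 -> roughly 95% likelihood of being in the clear
```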

Now, most usages of Bayesian reasoning won’t come with such detailed and precise statistics. We will very rarely be able to calculate the probability that an assertion is correct by using known weights of positive evidence, negative evidence, false positives, and false negatives. However, now that we are aware of these factors, we can try to weigh them roughly in our minds, starting with the two core norms of Bayesian epistemology: thinking about beliefs in terms of probability and updating one’s beliefs as conditions change.17 We propose it may be easier to think in this Bayesian way using a modified version of a concept put forward by the philosopher Andy Norman, called Reason’s Fulcrum.18

Figure 1. A Simple Lever. Balancing a simple lever can be achieved by moving the fulcrum so that the ratio of the beam is the inverse of the ratio of mass. Here, an adult who is three times heavier than the child is balanced by giving the child three times the length of beam. The mass of the beam is ignored. Illustrations in this article by Jim W.W. Smith

Like Bayes, Norman asserts that our beliefs ought to change in response to reason and evidence, or as David Hume said, “a wise man proportions his belief to the evidence.”19 These changes could be seen as the movement of the fulcrum lying under a simple lever. Picture a beam or a plank (the lever) with a balancing point (the fulcrum) somewhere in the middle, such as a playground teeter-totter. As in Figure 1, you can balance a large adult with a small child just by positioning the fulcrum closer to the adult. And if you know their weight, then the location of that fulcrum can be calculated ahead of time because the ratio of the beam length on either side of the fulcrum is the inverse of the ratio of mass between the adult and child (e.g., a three times heavier person is balanced by a distance having a ratio of 1:3 units of distance).
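
For readers who prefer code to diagrams, here is a tiny sketch (our own illustration, not from Norman’s book) of where the fulcrum must sit on a beam of unit length for it to balance:

```python
def fulcrum_position(mass_left, mass_right):
    """Fraction of a unit-length, weightless beam (measured from the left end)
    at which it balances, since mass_left * d_left == mass_right * d_right."""
    return mass_right / (mass_left + mass_right)

# An adult three times heavier than a child, with the adult on the left:
print(fulcrum_position(3, 1))  # 0.25 -> the beam is split 1:3, fulcrum nearer the adult
```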

If we now move to the realm of reason, we can imagine replacing the ratio of mass between an adult and a child with the ratio of how likely the evidence is to be observed under a claim versus under its counterclaim. Note how the term in italics captures not just the absolute quantity of evidence but the relative quality of that evidence as well. Once this is considered, then the balancing point at the fulcrum gives us our level of credence in each of our two competing claims.

Figure 2. Ratio of 90–10 for People Without–With the Condition. A 10 percent chance of having a condition gives a beam ratio of 1:9. The location of the fulcrum shows the credence that a random person should have about their medical status.

To see how this works for the example previously given about a test for a medical condition, we start by looking at the balance point in the general population (Figure 2). Not having the disease is represented by 90 people on the left side of the lever, and having the disease is represented by 10 people on the right side. This is a ratio of 9:1. So, to get our lever to balance, we must move the fulcrum so that the length of the beam on either side of the balancing point has the inverse ratio of 1:9. This, then, is the physical depiction of a 10 percent likelihood of having the medical condition in the general population. There are 10 units of distance between the two populations and the fulcrum is on the far left, 1 unit away from all the negatives.

Figure 3. Ratio of 18 False Positives to 6 True Positives. A 1 to 3 beam ratio illustrates a 25 percent chance of truly having this condition. The location of the fulcrum shows the proper level of credence for someone if they receive a positive test.

Next, we want to see the balance point after a positive result (Figure 3). On the left: the test has a 20 percent false positive rate, so 18 of the 90 people stay on our giant seesaw even though they don’t actually have the condition. On the right: 60 percent of the 10 people who have the condition would test positive, so this leaves six people. Therefore, the new ratio after the test is 18:6, or 3:1. This means that in order to restore balance, the fulcrum must be shifted to the inverse ratio of 1:3. There are now four total units of distance between the left and right, and the fulcrum is 1 unit from the left. So, after receiving a positive test result, the probability of having the condition (being in the group on the right) is one in four or 25 percent (the portion of beam on the left). This confirms the answer we derived earlier using abstract mathematical formulas, but many may find the concepts easier to grasp based on the visual representation.

To recap, the position of the fulcrum under the beam is the balancing point of the likelihood of observing the available evidence for two competing claims. This position is called our credence. As we become aware of new evidence, our credence must move to restore a balanced position. In the example above, the average person in the population would have been right to hold a credence of 10 percent that they had a particular condition. And after getting a positive test, this new evidence would shift their credence, but only to a likelihood of 25 percent. That’s worse for the person, but actually still pretty unlikely. Of course, more relevant evidence in the future may shift the fulcrum further in one direction or another. That is the way Bayesian reasoning attempts to wisely proportion one’s credence to the evidence.
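
Seen this way, moving the fulcrum is just Bayes’ theorem in odds form: prior odds multiplied by the likelihood ratio gives the posterior odds, and the fulcrum position is the corresponding probability, or credence. A minimal sketch, reusing the medical-test numbers (the function name is ours):

```python
def update_credence(prior, p_evidence_if_true, p_evidence_if_false):
    """Odds-form Bayesian update: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_evidence_if_true / p_evidence_if_false
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)  # back to a probability, i.e. the fulcrum position

# 10% prior; a positive test that catches 60% of true cases but false-alarms 20% of the time:
print(update_credence(0.10, 0.60, 0.20))  # 0.25 -> the fulcrum shifts from 10% to 25%
```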

Figure 4. Breaking Reason’s Fulcrum. Absolute certainty makes Bayes’ theorem unresponsive to evidence in the same way that a simple lever is unresponsive to mass when it becomes a ramp.

What about our Young Earth creationist friend? When using Bayes’ theorem, the absolute certainty he holds starts with a credence of zero percent or 100 percent and always results in an end credence of zero percent or 100 percent, regardless of what any possible evidence might show. To guard against this, the statistician Dennis Lindley proposed “Cromwell’s Rule,” based on Oliver Cromwell’s famous 1650 quip: “I beseech you, in the bowels of Christ, think it possible that you may be mistaken.”20 This rule simply states that you should never assign a probability of zero percent or 100 percent to any proposition. Once we frame our friend’s certainty in the Truth of biblical inerrancy as setting his fulcrum to the extreme end of the beam, we get a clear model for why he is so resistant to counterevidence. Absolute certainty breaks Reason’s Fulcrum. It removes any chance for leverage to change a mind. When beliefs reach the status of “certain truth” they simply build ramps on which any future evidence effortlessly slides off (Figure 4).
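
Cromwell’s Rule is easy to demonstrate numerically: plug a prior of exactly zero or one into Bayes’ theorem and no evidence, however strong, can move it. A small self-contained sketch (our own illustration):

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Even overwhelming evidence (99% likely if the claim is true, 1% if false) cannot budge certainty:
for prior in (0.0, 1.0, 0.5):
    print(prior, "->", posterior(prior, 0.99, 0.01))
# 0.0 -> 0.0   absolute disbelief never moves
# 1.0 -> 1.0   absolute belief never moves
# 0.5 -> 0.99  an open mind moves a great deal
```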

So far, this is the standard way of treating evidence in Bayesian epistemology to arrive at a credence. The lever and fulcrum depictions provide a tangible way of seeing this, which may be helpful to some readers. However, we also propose that this physical model might help with a common criticism of Bayesian epistemology. In the relevant academic literature, Bayesians are said to “hardly mention” sources of knowledge, the justification for one’s credence is “seldom discussed,” and “Bayesians have hardly opened their ‘black box’, E, of evidence.”21 We propose to address this by first noting it should be obvious from the explanations above that not all evidence deserves to be placed directly onto the lever. In the medical diagnosis example, we were told exactly how many false negatives and false positives we could expect, but this is rarely known. Yet, if ten drunken campers over the course of a few decades swear they saw something that looked like Bigfoot, we would treat that body of evidence differently than if it were nine drunken campers and footage from one high-definition camera of documentarians working for the BBC. How should we depict this difference between the quality of evidence versus the quantity of evidence?

We don’t yet have firm rules or “Bayesian coefficients” for how to precisely treat all types of evidence, but we can take some guidance from the history of the development of the scientific method. Evidential claims can start with something very small, such as one observation under suspect conditions given by an unreliable observer. In some cases, perhaps that’s the best we’ve got for informing our credences. Such evidence might feel fragile, but…who knows? The content could turn out to be robust. How do we strengthen it? Slowly, step by step, we progress to observations with better tools and conditions by more reliable observers. Eventually, we’re off and running with the growing list of reasons why we trust science: replication, verification, inductive hypotheses, deductive predictions, falsifiability, experimentation, theory development, peer review, social paradigms, incorporating a diversity of opinions, and broad consensus.22

We can also bracket these various knowledge-generating activities into three separate categories of theories. The simplest type of theory we have explains previous evidence. This is called retrodiction. All good theories can explain the past, but we have to be aware that this is also what “just-so stories” do, as in Rudyard Kipling’s entertaining theory for how Indian rhinoceroses got their skin—cake crumbs made them so itchy they rubbed their skin until it became raw, stretched, and all folded up.23

Even better than simply explaining what we already know, good theories should make predictions. Newton’s theories predicted that a comet would appear around Christmastime in 1758. When this unusual sight appeared in the sky on Christmas day, the comet (named for Newton’s close friend Edmund Halley) was taken as very strong evidence for Newtonian physics. Theories such as this can become stronger the more they explain and predict further evidence.

This article appeared in Skeptic magazine 28.4

Finally, beyond predictive theories, there are ones that can bring forth what William Whewell called consilience.24 Whewell coined the term scientist and he described consilience as what occurs when a theory that is designed to account for one type of phenomenon turns out to also account for another completely different type. The clearest example is Darwin’s theory of evolution. It accounts for biodiversity, fossil evidence, geographical population distribution, and a huge range of other mysteries that previous theories could not make sense of. And this consilience is no accident—Darwin was a student of Whewell’s and he was nervous about sharing his theory until he had made it as robust as possible.

Figure 5. The Bayesian Balance. Evidence is sorted by sieves of theories that provide retrodiction, prediction, and consilience. Better and better theories have lower rates of false positives and require a greater movement of the fulcrum to represent our increased credence. Evidence that does not yet conform to any theories at all merely contributes to an overall skepticism about the knowledge we thought we had.

Combining all of these ideas, we propose a new way (Figure 5) of sifting through the mountains of evidence the world is constantly bombarding us with. We think it is useful to consider the three different categories of theories, each dealing with different strengths of evidence, as a set of sieves by which we can first filter the data to be weighed in our minds. In this view, some types of evidence might be rather low quality, acting like a medical test with false positives near 50 percent. Such poor evidence goes equally on each side of the beam and never really moves the fulcrum. However, other evidence is much more likely to be reliable and can be counted on one side of the beam at a much higher rate than the other (although never with 100 percent certainty). And evidence that does not fit with any theory whatsoever really just ought to make us feel more skeptical about what we think we know until and unless we figure out a way to incorporate it into a new theory.

We submit that this mental model of a Bayesian Balance allows us to adjust our credences more easily and intuitively. Also, it never tips the lever all the way over into unreasonable certainty. To use it, you don’t have to delve into the history of philosophy, epistemology, skepticism, knowledge, justified true beliefs, Bayesian inferences, or difficult calculations using probability notation and unknown coefficients. You simply need to keep weighing the evidence and paying attention to which kinds of evidence are more or less likely to count. Remember that observations can sometimes be misleading, so a good guiding principle is, “Could my evidence be observed even if I’m wrong?” Doing so fosters a properly skeptical mindset. It frees us from the truth trap, yet enables us to move forward, wisely proportioning our credences as best as the evidence allows us.

About the Author

Zafir Ivanov is a writer and public speaker focusing on why we believe and why it’s best we believe as little as possible. His lifelong interests include how we form beliefs and why people seem immune to counterevidence. He collaborated with the Cognitive Immunology Research Initiative and The Evolutionary Philosophy Circle. Watch his TED talk.

Ed Gibney writes fiction and philosophy while trying to bring an evolutionary perspective to both of those pursuits. He has previously worked in the federal government trying to make it more effective and efficient. He started a Special Advisor program at the U.S. Secret Service to assist their director with this goal, and he worked in similar programs at the FBI and DHS after business school and a stint in the Peace Corps. His work can be found at evphil.com.

References
  1. https://rb.gy/ms7xw
  2. https://rb.gy/erira
  3. https://rb.gy/pjkay
  4. https://rb.gy/yyqh0
  5. https://rb.gy/96p2g
  6. https://rb.gy/f9rj3
  7. https://rb.gy/5sdni
  8. https://rb.gy/zdcqn
  9. https://rb.gy/3gke6
  10. https://rb.gy/1no1h
  11. https://rb.gy/eh2fl
  12. https://rb.gy/2k9xa
  13. Gillespie, M. A. (1995). Nihilism Before Nietzsche. University of Chicago Press.
  14. https://rb.gy/4iavf
  15. https://rb.gy/crv9j
  16. https://rb.gy/zb862
  17. https://rb.gy/dm5qc
  18. Norman, A. (2021). Mental Immunity: Infectious Ideas, Mind-Parasites, and the Search for a Better Way to Think. Harper Wave.
  19. https://rb.gy/2k9xa
  20. Jackman, S. (2009). The Foundations of Bayesian Inference. In Bayesian Analysis for the Social Sciences. John Wiley & Sons.
  21. Hajek, A., & Lin, H. (2017). A Tale of Two Epistemologies? Res Philosophica, 94(2), 207–232.
  22. Oreskes, N. (2019). Why Trust Science? Princeton University Press.
  23. https://rb.gy/2us27
  24. Whewell, W. (1847). The Philosophy of the Inductive Sciences, Founded Upon Their History. London: J.W. Parker.
Categories: Critical Thinking, Skeptic

Evolution and Copy-Paste Errors

neurologicablog Feed - Tue, 04/16/2024 - 5:07am

Evolution deniers (I know there is a spectrum, but generally speaking) are terrible scientists and logicians. The obvious reason is that they are committing the primary mortal sin of pseudoscience – working backwards from a desired conclusion rather than following evidence and logic wherever it leads. They therefore latch onto arguments that are fatally flawed because they feel they can use them to support their position. One could literally write a book using bad creationist arguments to demonstrate every type of poor reasoning and pseudoscience (I should know).

A classic example is an argument mainly promoted as part of so-called “intelligent design”, which is just evolution denial desperately seeking academic respectability (and failing). The argument goes that natural selection cannot increase information, only reduce it. It does not explain the origin of complex information. For example:

big obstacle for evolutionary belief is this: What mechanism could possibly have added all the extra information required to transform a one-celled creature progressively into pelicans, palm trees, and people? Natural selection alone can’t do it—selection involves getting rid of information. A group of creatures might become more adapted to the cold, for example, by the elimination of those which don’t carry the genetic information to make thick fur. But that doesn’t explain the origin of the information to make thick fur.

I am an educator, so I can forgive asking a naive question. Asking it in a public forum in order to defend a specific position is more dodgy, but if it were done in good faith, that could still propel public understanding forward. But evolution deniers continue to ask the same questions over and over, even after they have been definitively answered by countless experts. That demonstrates bad faith. They know the answer. They cannot respond to the answer. So they pretend it doesn’t exist, or when confronted directly, respond with the equivalent of, “Hey, look over there.”

The answer is right in the formulation of their position – “Natural selection alone can’t do it…”. I can quibble with the notion that natural selection only removes information, but even if we accept this premise, it doesn’t matter, because natural selection is not acting alone. Evolution is better understood as a two-step process, generating new information and then selecting the subset of that new information which provides an immediate survival advantage. There are multiple mechanisms for generating new information. These include point mutations, where one nucleotide is swapped out for another (which can change an amino acid in the resulting protein). But they also include “copy paste” errors, in which entire genes, or sets of genes, or entire chromosomes, and sometimes entire genomes are copied. It is difficult to argue that adding new genes to the total set of genes in a genome is not adding more information.

That is where evolution deniers play a logical game of three-card monte. They say – Ah, but mutations are random. They are “mistakes” that can only degrade the information. They are not directed or cumulative. This is the equivalent of arguing that a car cannot work because the engine cannot steer the car, and the steering column cannot propel the car. But of course, it’s the other way around. Similarly, mutations are not directed but they do add more information, and selection does not add more raw information but it can be directed and cumulative. The combination can add more specific information over time – new genes that make new proteins that have new functions.

The other major unstated assumption in this evolution-denying argument is that there is some essential perfect state of a gene and any mutation is a degradation. But this is not correct. All genes are mutants, and there is no “correct” state or preferred state. There are only different states with different functionality. Functionality is also not objectively or essentially better or worse, just different. But some states may provide selective advantages under some conditions. Also, it is better to think of different functional states as having different sets of tradeoffs. The statistically advantageous tradeoffs are more likely to survive and persist.

This is all logically sound, but what does the empirical evidence say? If intelligent design were true, then we would expect to see a pattern in biology that suggests top-down de-novo design. Genes would all be their own entities, made to purpose, without any remnants of a deep past history – at least, if you are willing to admit to a testable version of intelligent design. Proponents usually dodge any such tests by arguing, essentially, that – whatever we find, that’s what the designer intended.

In any case, if evolution were true we would expect to find a pattern in biology that suggests a nested branching relationship among all things, including genes. Genes did not come from nowhere, wholly perfect and complete. Genes must have evolved from ancestral genes, which further suggests that occasionally there are duplications of genes. That is how the total number of genes can potentially increase over evolutionary history.

Guess what we find when we look at the genomes of multicellular creatures. We find evidence of gene duplications and a branching pattern of relationships. A recent study adds to the mountain of evidence for this pattern. Researchers looked at the genomes of 20 bilaterian species – these include vertebrates and insects that have a basically bilaterally symmetrical body plan. What they found is that core genes and sets of genes that are involved with basic body anatomy are preserved across the bilaterian spectrum. Further, many of these core genes were the result of gene duplication, with multiple whole genome duplication events. They further found that when genes are duplicated, different cell lines can have different patterns of gene expression. This can even result in the evolution of new specialized cell types.

Gene expression refers to the fact that not all genes are expressed to the same degree in all cells. Liver cells express liver genes, while brain cells express brain genes (to put it simply). You can therefore have evolutionary change in a gene without mutating the amino acid sequence of the protein the gene codes for, but rather by altering the regulation of gene expression.

Gene duplication also allows for an important process in evolution – experimentation. When genes are duplicated, one copy can continue its original function. This, of course, is critical for genes that have core functions that are necessary for the organism to be alive. One copy continues this core function, while another copy (or more) is free to mutate and alter its function. This could lead to advantages in the core functionality, or to taking on entirely new functions. Any mutations that happen to provide even the slightest advantage will tend to be preserved, allowing for endless evolutionary tweaking and cumulative change that can ultimately lead to entirely new cell lines, tissues, anatomy, and functions. That certainly sounds like adding new information to me.

Not all changes, by the way, have to be immediately directed by natural selection. There is also random genetic drift. A redundant gene, unmoored from selective pressures, can endlessly “drift”, accumulating many genetic changes. If at any point, in any individual of any descendant line, that gene produces a protein that can be exploited for some immediate advantage, it will then gain a toe-hold on natural selection, and we’re off to the races.

When we look at the genomes of many different species, it’s pretty clear this is what has actually happened, many times, throughout evolutionary history. We can even map out a branching relationship of these events. Evolutionary lineages that are related have the same history of gene evolution (up to their last common ancestor). The quirky details of their genes line up in a way that can only be explained by a shared history. A shared function by a common designer doesn’t cut it. Many of these quirky details are not related to function, or there would be countless functional options. One would have to propose that the intelligent designer deliberately created life to look exactly as if it has evolved. That is yet another unfalsifiable notion that keeps intelligent design outside the boundaries of science.

The post Evolution and Copy-Paste Errors first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #932: Is Recycling for Real?

Skeptoid Feed - Tue, 04/16/2024 - 2:00am

A close look at where recycling of some common materials is actually at these days.

Categories: Critical Thinking, Skeptic

Nick Bostrom — Life and Meaning in a Solved World

Skeptic.com feed - Tue, 04/16/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss423_Nick_Bostrom_2024_04_16.mp3 Download MP3

Nick Bostrom’s previous book, Superintelligence: Paths, Dangers, Strategies, changed the global conversation on AI and became a New York Times bestseller. It focused on what might happen if AI development goes wrong. But what if things go right? Suppose that we develop superintelligence safely, govern it well, and make good use of the cornucopian wealth and near magical technological powers that this technology can unlock. If this transition to the machine intelligence era goes well, human labor becomes obsolete. We would thus enter a condition of “post-instrumentality” in which our efforts are not needed for any practical purpose. Furthermore, at technological maturity, human nature becomes entirely malleable. Here we confront a challenge that is not technological but philosophical and spiritual. In such a solved world, what is the point of human existence? What gives meaning to life? What do we do all day?

Bostrom’s new book, Deep Utopia, shines new light on these old questions and gives us glimpses of a different kind of existence, which might be ours in the future.

Nick Bostrom is a Professor at Oxford University, where he is the founding director of the Future of Humanity Institute. Bostrom is the world’s most cited philosopher aged 50 or under. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked a global conversation about the future of AI. His work has pioneered many of the ideas that frame current thinking about humanity’s future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist’s curse, etc.), while some of his recent work concerns the moral status of digital minds. His writings have been translated into more than 30 languages; he is a repeat main-stage TED speaker; and he has been interviewed more than 1,000 times by media outlets around the world. He has been on Foreign Policy’s Top 100 Global Thinkers list twice and was included in Prospect’s World Thinkers list, the youngest person in the top 15. He has an academic background in theoretical physics, AI, and computational neuroscience as well as philosophy.

Bostrom and Shermer discuss:

  • The Future of Life Institute’s Open Letter calling for a pause on “giant AI experiments”
  • Eliezer Yudkowsky Time OpEd: “Shut It All Down” — “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”
  • Utopia, Dystopia, Protopia
  • Would it be boring to live in a perfect world?
  • If we lived forever with everything we need, what would be the purpose of life?
  • Trekonomics, post-scarcity economics
  • The hedonic treadmill and positional wealth values—will people never be satisfied with “enough”?
  • Overpopulation of the 1960s and today’s birth dearth
  • Colonizing the galaxy (von Neumann probes, O’Neill cylinders, Dyson spheres)
  • The Fermi paradox: where is everyone?
  • Mind uploading and immortality
  • Examples of Technological Maturity
  • Google’s Gemini AI debacle
  • Large Language Models
  • ChatGPT, GPT-4, GPT-5 and beyond
  • The alignment problem
  • What set of values should AI be aligned with, and what legal and ethical status should it have?
  • The hard problem of consciousness
  • How would we know if an AI system was sentient?
  • Can AI systems be conscious?
On Mind Uploading and Replicating / Resurrecting Everyone Who Ever Lived

(An excerpt from Michael Shermer’s 2018 book Heavens on Earth.)

The sums involved in achieving immortality through the duplication or resurrection scenarios are not to be underestimated. There are around 85 billion neurons in a human brain, each with about a thousand synaptic links, for a total of 100 trillion connections to be accurately preserved and replicated. This is a staggering level of complexity made all the more so by the additional glial cells in the brain, which provide support and insulation for neurons and can change the actions of firing neurons, so these cells had better be preserved as well in any duplication or resurrection scenario, just in case. Estimates of the ratio of glial cells to neurons in a brain vary from 1:1 to 10:1. If you’re not a lightning calculator, that computes to a total brain cell count of somewhere between 170 billion and 850 billion. Then factor in the hundreds or thousands of synaptic connections between each of the 85 billion neurons, adding approximately 100 trillion synaptic connections total for each brain. That’s not all. There are around ten billion proteins per neuron, which affect how memories are stored, plus the countless extracellular molecules in between those tens of billions of brain cells.
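
As a rough, back-of-the-envelope check on those magnitudes, here is a short sketch of the arithmetic (our own illustration, using the round numbers quoted in the excerpt):

```python
neurons = 85e9               # ~85 billion neurons in a human brain
synapses_per_neuron = 1_000  # "about a thousand synaptic links" each
synapses = neurons * synapses_per_neuron
print(f"{synapses:.1e} synaptic connections")  # 8.5e+13, on the order of 100 trillion

# Glia-to-neuron ratio estimates range from roughly 1:1 to 10:1
print(f"{neurons * 1:.1e} to {neurons * 10:.1e} glial cells")  # 8.5e+10 to 8.5e+11

proteins_per_neuron = 10e9   # ~10 billion proteins per neuron
print(f"{neurons * proteins_per_neuron:.1e} neuronal proteins")  # 8.5e+20

# And that is a single brain; the resurrection scenario multiplies it by everyone who ever lived:
people = 108e9
print(f"{people * synapses:.1e} synapses across ~108 billion people")  # 9.2e+24
```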

These estimates are just for the brain and do not even include the rest of the nervous system outside of the skull—what neuroscientists call the “embodied brain” or the “extended mind” and which many philosophers of mind believe is necessary for normal cognition. So you might want to have this extended mind resurrected or uploaded along with your mind. After all, you are not just your internal thoughts and emotions disconnected from your body. Many of your thoughts and emotions are intimately entwined with how your body interacts with its environment, so any preserved connectome, to be fully operational as recreating the experience of what it is like to be a sentient being, would also need to be housed in a body. So we would need a warehouse of brainless clones or very sophisticated robots prepared to have these uploaded mind neural units installed. How many? Well, to avoid the charge of elitism, it’s only fair that everyone who ever lived be resurrected, so that means multiplying the staggering data package for one person by 108 billion.

Then there’s the relationship between memory and life history. Our memory is not like a videotape that can be played back on the viewing screen of our minds. When an event happens to us, a selective impression of it is made on the brain through the senses. As that sense impression wends its way through different neural networks, where it ends up depends on what type of memory it is. As a memory is processed and prepared for long-term storage we rehearse it and in the process it is changed. This editing process depends on previous memories, subsequent events and memories, and emotions. This process recurs trillions of times in the course of a lifetime, to the point where we have to wonder if we have memories of actual events, or memories of the memories of those events, or even memories of memories of memories…. What’s the “true” memory? There is no such thing. Our memories are the product of trillions of synaptic neuronal connections that are constantly being edited, redacted, reinforced, and extinguished, such that a resurrection of a human with memories intact will depend on when in the individual’s life history the replication or resurrection is implemented.

In his book The Physics of Immortality the physicist Frank Tipler calculates that an Omega Point computer in the far future will contain 10 to the power of 10 to the power of 123 bits (a 1 followed by 10^123 zeros), powerful enough, he says, to resurrect everyone who ever lived. That may be—it is a staggeringly large number—but is even an Omega Point computer powerful enough to reconstruct all of the historical contingencies and necessities in which a person lived, such as the weather, climate, geography, economic cycles, recessions and depressions, social trends, religious movements, wars, political revolutions, paradigm shifts, ideological revolutions, and the like, on top of duplicating our genome and connectome? It seems unlikely, but if so GOSH would also need to duplicate all of the individual conjunctures and interactions between that person and all other persons as they intersect with and influence each other in each of those lifetimes. Then multiply all that by the 108 billion people who ever lived or are currently living. Whatever the number, it would have to be even larger than the famed Googolplex (10 to the power of a googol, with a googol being 10^100, making a googolplex 10^(10^100)), from which Google and its Googleplex headquarters derived their names. Even a googol of googolplexes would not suffice. In essence, it would require the resurrection of the entire universe and its many billions of years of history. Inconceivable.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

Using AI To Create Virtual Environments

neurologicablog Feed - Mon, 04/15/2024 - 4:57am

Generative AI applications seem to be on the steep part of the development curve – not only is the technology getting better, but people are finding more and more uses for it. It’s a powerful new tool with broad applicability, and so there are countless startups and researchers exploring its potential. The last time a new technology had this type of explosion was, I think, the smartphone and the rapid introduction of millions of apps.

Generative AI applications have been created to generate text, pictures, video, songs, and imitate specific voices. I have been using most of these apps extensively, and they are continually improving. Now we can add another application to the list – generating virtual environments. This is not a public use app, but was developed by engineers for a specific purpose – to train robots.

The application is called Holodeck, after the Star Trek holodeck. You can use natural language to direct the application to build a specific type of virtual 3D space, such as “build me a three-bedroom, single-floor apartment” or “build me a music studio”. The application uses generative AI technology to build the space, with walls, floor, and ceiling, and then pulls from a database of objects to fill the space with appropriate things. It also has a set of rules for where things go, so it doesn’t put a couch on the ceiling.
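
To make the idea concrete, here is a minimal sketch of that kind of pipeline—prompt in, furnished room out, with simple placement rules. This is illustrative only; the function and object names are hypothetical and are not Holodeck’s actual code or API:

```python
# A toy sketch of prompt-driven scene generation (hypothetical names, not Holodeck's API).
import random

# A small stand-in for the object database the text describes.
OBJECT_DB = {
    "music studio": ["piano", "mixing desk", "speaker", "stool", "rug"],
    "apartment": ["couch", "bed", "lamp", "table", "fridge"],
}

# Placement rules so objects end up somewhere sensible (no couches on the ceiling).
PLACEMENT_RULES = {
    "speaker": "wall",   # may be mounted partway up a wall
    # everything else defaults to the floor
}

def propose_layout(prompt: str) -> dict:
    """Stand-in for the generative step: map a natural-language request to a room
    type and footprint. A real system would call a language model here."""
    room_type = "music studio" if "music" in prompt.lower() else "apartment"
    return {"type": room_type, "width": 6.0, "depth": 4.0, "height": 2.7}

def populate(layout: dict) -> list:
    """Fill the room with objects from the database, applying the placement rules."""
    placed = []
    for name in OBJECT_DB[layout["type"]]:
        surface = PLACEMENT_RULES.get(name, "floor")
        x = round(random.uniform(0.0, layout["width"]), 2)
        y = round(random.uniform(0.0, layout["depth"]), 2)
        z = layout["height"] / 2 if surface == "wall" else 0.0
        placed.append({"object": name, "surface": surface, "pos": (x, y, z)})
    return placed

if __name__ == "__main__":
    layout = propose_layout("build me a music studio")
    for item in populate(layout):
        print(item)
```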

The purpose of the app is to be able to generate lots of realistic and complex environments in which to train robot navigation AI. Such robotic AIs need to be trained on virtual spaces so they can learn how to navigate out there in the real world. Like any AI training, the more data the better. This means the trainers need millions of virtual environments, and they just don’t exist. In an initial test, Holodeck was compared to an earlier application called ProcTHOR and performed significantly better. For example, when asked to find a piano in a music studio, a ProcTHOR-trained robot succeeded 6% of the time while a Holodeck-trained robot succeeded 30% of the time.

That’s great, but let’s get to the fun stuff – how can we use this technology for entertainment? The ability to generate a 3D virtual space is a nice addition to the list above, all of which is contributing to a specific application that I have in mind – generative video games. Of course there are companies already working on this. It’s a no-brainer. But let’s talk about what this can mean.

In the short run generative AI can be used to improve the currently clunky AI behind most video games. For avid gamers, it is a cliche that video game AI is not very good, although some are better than others. Responses from NPCs are canned and often nonsensical, missing a lot of context about the evolution of the plot in the game. The reaction of NPCs and creatures in the world is also ultimately simplistic and predictable. This makes it possible for gamers to quickly learn how to hack the limitations of the game’s AI in order to exploit it.

Now let’s imagine our favorite video games powered by generative AI. We could have a more natural conversation with a major NPC in the game. The world can remember the previous actions of the player and adapt accordingly. AI combat can be more adaptive and therefore unpredictable and challenging.

But there is another layer here – generative AI can be used to generate the video game itself, or at least parts of it. This was referenced in the Black Mirror episode “USS Callister,” in which the world of the game was an infinite generated space. In many ways this is an easier task than real-world applications, at least potentially. Think of a major title, like Fallout. The number of objects in the game, including every item, weapon, monster, and character, is finite. It’s much less than a real-world environment. The same is true for the elements of the environment itself. A generative AI could therefore use the database of objects that already exists for the game and generate new locations. The game could become literally infinite.

Of course, generative AI could be used to create the game in the first place, decreasing the development time, which is years for major titles. Such games famously use a limited set of recorded voices for the characters, which means you hear the same canned phrases over and over again. Now you don’t have to get actors into studios to record scripts (although you still might want to do this for major characters), you can just generate voices as needed.

This means that video game production can focus on creating the objects, the artistic feel, the backbone plot, the rules and physics for the world, and then let generative AI create infinite iterations of it. This can be done as part of game development. Or it can be done on a server that is hosting one instance of the game (which is how massively multiplayer games work), or eventually it can be done for one player’s individual instance of the game, just like using ChatGPT on your personal computer.

This could further mean that each player’s experience of a game can be unique, and will depend greatly on the actions of the player. In fact, players may be able to generate their own gaming environments. What I mean is, for example (sticking with Fallout), you could sign into a Bethesda Fallout website, choose the game you want, enter in the variables you want, and generate some additional content to add to your game. There could be lots of variables – how developed the area is, how densely populated, how dangerous are the people, how dangerous are the monsters, how challenging is the environment itself, what is the resource availability, etc. This already exists for the game Minecraft, which generates new unique environments as you go and allows players to tweak lots of variables, but the game is extremely graphically limited.
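
As a sketch of what those player-facing knobs might look like under the hood (illustrative only—the parameter names and generate_region function are hypothetical, not Bethesda’s or anyone else’s actual tooling):

```python
# Illustrative only: player-tunable settings for a generated game region.
# The parameter names and generate_region() are hypothetical, not any studio's tooling.
from dataclasses import dataclass
import random

@dataclass
class RegionSettings:
    seed: int = 42
    development: float = 0.5        # 0 = untouched wilderness, 1 = dense urban ruins
    population_density: float = 0.3
    hostile_npc_level: float = 0.6
    monster_threat: float = 0.7
    environmental_hazard: float = 0.4
    resource_abundance: float = 0.2

def generate_region(cfg: RegionSettings) -> dict:
    """Turn the settings into a coarse region description; a real pipeline would hand
    this to content-generation models and the game engine to flesh out."""
    rng = random.Random(cfg.seed)
    return {
        "settlements": int(cfg.development * cfg.population_density * 20),
        "raider_camps": int(cfg.hostile_npc_level * 10 + rng.random() * 3),
        "monster_spawns": round(cfg.monster_threat * 30),
        "hazard_zones": round(cfg.environmental_hazard * 15),
        "loot_caches": int(cfg.resource_abundance * 50),
    }

# Example: a resource-rich but dangerous region, reproducible from its seed.
print(generate_region(RegionSettings(seed=7, resource_abundance=0.8, monster_threat=0.9)))
```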

So far I have just been thinking of using AI to recreate the current style of video games but faster, better, and with unlimited content. Game developers, however, may think of ways to leverage generative AI to create new genres of video games – doing new things that are not possible without generative AI.

It seems inevitable that this is where we are headed. I am just curious how long it will take. I think the first crop of generative video games will come in the form of new content for existing games. Then we will see entirely new games developed with and for generative AI. This may also give a boost to VR gaming, with the ability to generate 3D virtual spaces.

And of course gaming is only one of many entertainment possibilities for generative AI. How long will it be before we have fully generated video, with music, voices, and a storyline? All the elements are there, now it’s just a matter of putting them all together with sufficient quality.

I am focusing on the entertainment applications, because it’s fun, but there are many practical applications as well, such as the original purpose of Holodeck, to train navigation AI for robots. But often technology is driven by entertainment applications, because that is where the money is. More serious applications then benefit.

The post Using AI To Create Virtual Environments first appeared on NeuroLogica Blog.

Categories: Skeptic

Robert Zubrin — How What We Can Create on the Red Planet Informs Us on How Best to Live on the Blue Planet

Skeptic.com feed - Sat, 04/13/2024 - 10:40am
https://traffic.libsyn.com/secure/sciencesalon/mss422_Robert_Zubrin_2024_04_13.mp3 Download MP3

When Robert Zubrin published his classic book The Case for Mars a quarter century ago, setting foot on the Red Planet seemed a fantasy. Today, manned exploration is certain, and as Zubrin affirms in The New World on Mars, so too is colonization. From the astronautical engineer venerated by NASA and today’s space entrepreneurs, here is what we will achieve on Mars and how.

SpaceX, Blue Origin, and Virgin Galactic are building fleets of space vehicles to make interplanetary travel as affordable as Old-World passages to America. We will settle on Mars, and with our knowledge of the planet, analyzed in depth by Dr. Zubrin, we will utilize the resources and tackle the challenges that await us. What will we build? Populous Martian city-states producing air, water, food, power, and more. Zubrin’s Martian economy will pay for necessary imports and generate income from varied enterprises, such as real estate sales—homes that are airtight and protect against cosmic space radiation, with fish-farm aquariums positioned overhead, letting in sunlight and blocking cosmic rays while providing fascinating views. Zubrin even predicts the Red Planet’s customs, social relations, and government—of the people, by the people, for the people, with inalienable individual rights—that will overcome traditional forms of oppression to draw Earth immigrants. After all, Mars needs talent.

With all of this in place, Zubrin’s Red Planet will become a pressure cooker for invention in bioengineering, synthetic biology, robotics, medicine, nuclear energy, and more, benefiting humans on Earth, Mars, and beyond. We can create this magnificent future, making life better, less fatalistic. The New World on Mars proves that there is no point killing each other over provinces and limited resources when, together, we can create planets.

Robert Zubrin is the former president of the aerospace R&D company Pioneer Astronautics, which performs advanced space research for NASA, the US Air Force, the US Department of Energy, and private companies. He is the founder and president of the Mars Society, an international organization dedicated to furthering the exploration and settlement of Mars, leading the Society’s successful effort to build the first simulated human Mars exploration base in the Canadian Arctic and growing the organization to include 7,000 members in 40 countries. A nuclear and astronautical engineer, Zubrin began his career with Martin Marietta (later Lockheed Martin) as a Senior Engineer involved in the design of advanced interplanetary missions. His “Mars Direct” plan for near-term human exploration of Mars was commended by NASA Administrator Dan Goldin and covered in The Economist, Fortune, Air and Space Smithsonian, Newsweek (cover story), Time, The New York Times, The Boston Globe, as well as on BBC, PBS TV, CNN, the Discovery Channel, and National Public Radio. Zubrin is also the author of twelve books, including The Case for Mars: The Plan to Settle the Red Planet and Why We Must, with more than 100,000 copies in print in America alone and now in its 25th Anniversary Edition. He lives with his wife, Hope, a science teacher, in Golden, Colorado. His latest book is The New World on Mars: What We Can Create on the Red Planet. The next big Mars Society conference is in Seattle, August 8–11.

Read Zubrin’s discussion of his paper on panspermia for seeding life on Earth.

Shermer and Zubrin discuss:

  • Why not start with the moon?
  • What’s it like on Mars? Like the top of Mt. Everest?
  • Was Mars ever like Earth? Water, life, etc.?
  • How much will it cost to go to Mars?
  • How to get people to Mars: food, water, radiation, boredom?
  • Where on Mars should people settle?
  • What are “natural resources”?
  • Resources on Mars already there vs. need to be produced
  • Analogies with Europeans colonizing North America
  • Public vs. private enterprise for space exploration
  • Economics on Mars
  • Politics on Mars
  • Lessons from the Red Planet for the Blue Planet
  • Ingersoll’s insight: free speech & thought > science & technology > machines as our slaves > moon landing. “This is something that free people can do.”
  • Liberty in space: won’t the most powerful people on Mars threaten to shut off your air if you don’t obey?
  • Independent City-States on Mars
  • Direct vs. representative democracy
  • America as a model for what we can create on Mars
  • Are new frontiers needed for civilization to continue?
  • The worst idea ever: that the total amount of potential resources is fixed.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

The Skeptics Guide #979 - Apr 13 2024

Skeptics Guide to the Universe Feed - Fri, 04/12/2024 - 5:00am
Live from Dallas with special guest Dustin Bates of Starset; Eclipse Science; News Items: AI Designed Drugs, AI Music, Music Getting Simpler, Aphantasia Spectrum, Nova and Comet Compete with Eclipse; Science or Fiction
Categories: Skeptic

Pain & Profit: Who’s Responsible for the Opioid Crisis?

Skeptic.com feed - Fri, 04/12/2024 - 12:00am

In 2021 the CDC issued a grim statistic: more than one million Americans had died from overdoses since 1999 when it started tracking an opioid epidemic that began with prescription painkillers and is now dominated by fentanyl.1 Since that sobering milestone, another 300,000 have died.2 That is roughly the same number of Americans who died in all wars the United States has entered (1.3 million) combined, including the First and Second World Wars and the Civil War.3 The opioid epidemic is, aside possibly from obesity, the biggest health crisis of our time.

Most know about the frenzy of finger pointing, lawsuits, and bankruptcy filings among pharmaceutical companies, drug distributors, national pharmacy chains, medical associations, and the Food and Drug Administration. There is plenty of blame to go around. What is not often discussed in the extensive media coverage about the epidemic is how we got here.

The story of how the opioid crisis got underway and who is responsible is a tale of greed, poor government regulation, and many missed opportunities. It began with good intentions based on bad data and later became a movement in which profits took precedence over morals. It is a tragedy that was largely preventable and, as such, one of the most infuriating chapters in modern U.S. history.

History of Pain

Chronic pain affects 50 million Americans, more than those with high blood pressure, diabetes, or depression.4 Developing a medication that alleviates pain without too many side effects has been one of the drug industry’s holy grails. The market is enormous, and most people are long-term patients. Opiates were isolated as effective pain killers in the 1800s. At the turn of the 20th century—the drug industry’s Wild West days—they were dispensed over the counter. Over time, opiates earned a notorious and deserved reputation for addiction. German giant Bayer patented and marketed Heroin as, incredible as it now sounds, a cure for morphine addiction.

Congress did not pass a law requiring prescriptions for narcotic-based medications until 1938.5 It took another 33 years before the federal government created the Controlled Substances Act in 1971, listing oxycodone and fentanyl (along with cocaine and methamphetamine) as Schedule II drugs. That meant they had a risk of “severe psychological or physical dependence” but had medical and therapeutic uses. Doctors were supposed to balance the risks of opioids against the needs of patients who required them for short-term use after surgery or an accident, or longer treatment for disabling chronic pain.

Throughout the 1970s and early 1980s, drug companies spent a lot of money searching for a nonaddictive painkiller. Every effort ended in failure. In a Science article, a pharmacologist and a chemist at the National Institutes of Health concluded that it was unlikely such a medication was possible.

This was the same time, however, when a few physicians were about to upend traditional medical views about pain and how to treat it. Until the early 1980s, medical schools taught that pain was only a symptom of some underlying physical condition. Physicians did not treat it as a stand-alone ailment but instead searched for what caused it. The specialty of “pain management” did not exist. An anesthesiologist, John Bonica, whom Time dubbed “pain relief’s founding father,” questioned the conventional wisdom. Bonica suffered chronic shoulder and hip pain from his pre-medical career, first as a professional wrestler, then a carnival strongman, and finally the light heavyweight world wrestling champion.6 Bonica contended that underdiagnosing pain meant millions of patients suffered needlessly. He cofounded the International Association for the Study of Pain (its journal, Pain, is the field’s leading publication) in 1974, and three years later the American Pain Society (APS).7

The incipient movement to prioritize pain was not long underway when a five-sentence “letter to the editor” in the January 10, 1980, New England Journal of Medicine (NEJM) kicked off a parallel revolution in reconsidering established medical views about the risks of opioids. A doctor, Hershel Jick, and a grad student, Jane Porter, had examined 39,946 records of Boston University Hospital patients to determine adverse reactions and potential abuse for widely used medications. Almost a third (11,882) had “received at least one narcotic preparation” but they found only “four cases of reasonably well-documented addiction in patients who had no history of addiction.” Their conclusion was as unorthodox as it was decisive: “Despite widespread use of narcotic drugs in hospitals, the development of addiction is rare.”8

The letter cited two previous studies, both of which involved only hospitalized patients given small doses of opioids in a controlled setting. Very few had had them dispensed for more than five days. None were given painkillers after they were discharged from the hospital.

No one could have predicted the impact that letter had on the reassessment of using opioids to treat pain. During the next two decades it was cited over 1,600 times in textbooks, medical journals, and other publications. More than 80 percent of those who mentioned it left out that it only studied hospitalized patients who took opioids for a few days. Instead, that 99-word letter was widely cited to support far broader conclusions about the safety profile of opioids.9 (In 2017 the NEJM published a rare “Editor’s Note,” adding it to its webpage with the original Jick-Porter letter: “For reasons of public health, readers should be aware that this letter has been ‘heavily and uncritically’ cited as evidence that addiction is rare with opioid therapy.”)

The World Health Organization (WHO) cited the Jick-Porter letter in 1986 as a cornerstone for challenging decades of medical dogma that “the risks of widely prescribing opioids far outweighed any benefits.” Six weeks after the WHO publication, Pain published a startling report, the “Chronic Use of Opioid Analgesics in Non-Malignant Pain.” The lead author was Russell Portenoy, a 31-year-old Memorial Sloan Kettering physician specializing in anesthesiology, neurology, pain control, and pharmacology. His coauthor was Kathleen Foley, a top pain management specialist.

Portenoy and Foley had studied 38 patients who had been administered narcotic analgesics—a third took oxycodone—for up to seven years. Two thirds reported significant or total pain relief. There was “no toxicity,” the two doctors reported, and only two patients had a problem with addiction, both of whom had “a history of prior drug abuse.” They concluded that “opioid maintenance therapy can be a safe, salutary and more humane alternative to the options of surgery or no treatment in those patients with intractable non-malignant pain and no history of drug abuse.”10

Pain as the Fifth Vital Sign

That paper kicked off a contentious and at times rancorous debate over whether opioids had been unfairly branded for decades and underutilized in pain management. The charismatic Portenoy emerged as the unofficial spokesman for the embryonic movement to reassess opioids. He saw himself as a pioneer in reexamining outdated views about opioids. If he could convince doctors not to fear dispensing opioids, it could help millions of patients suffering from chronic pain.

A diverse, informal network of physicians contributed to the emerging reevaluation. Doctors specializing in pain management formed The American Academy of Pain Medicine and the American Society of Addiction Medicine (its slogan is “Addiction is a chronic brain disease”). They in turn encouraged patients suffering from chronic pain to form advocacy groups and petition the FDA to loosen opioid dispensing restrictions.

In 1990, American Pain Society president Dr. Mitchell Max wrote a widely read editorial lamenting how little progress had been made in treating pain. “Unlike ‘vital signs,’ pain isn’t displayed in a prominent place on the chart or at the bedside or nursing station,” he wrote.11 Max’s fix was to have physicians ask patients on every visit whether they were in pain. Doctors had for decades kept watch of four vital signs when examining patients: blood pressure, pulse, temperature, and breathing. The American Pain Society suggested “Pain as the 5th Vital Sign.”

There was no reliable diagnostic test, as there was for blood pressure or cholesterol. Pain was a subjective assessment based on the doctor’s observations and the patient’s descriptions of symptoms. What one patient described as moderate pain that restricted mobility might be excruciating and disabling for someone else. The first rudimentary measurements were developed around this time. One of them, the McGill Pain Index, had 78 words related to pain divided into 20 sections. Patients picked the words that best described their pain. Another, called the Memorial Pain Assessment Card, had eight simplified descriptions and patients selected the one that best matched their pain’s intensity. Yet another was developed by a pediatric nurse and child life specialist in Oklahoma—a chart for children with 10 hand-drawn faces ranging from happy and laughing to angry and crying. Variations of that scale soon became a 1 to 10 rating for adults, 1 being “very mild, barely noticeable,” and 10 signifying “unspeakable pain.”

Those tools meant that differing pain tolerances among patients were no longer important. What mattered was tracking whether a patient’s pain was getting better or worse. The Joint Commission, an independent, not-for-profit organization responsible for accrediting 96 percent of all U.S. hospitals and clinics, became the first major group to endorse pain as the fifth vital sign. After the Veterans Administration embraced it, it was adopted quickly in the private sector.12

Over the next few years, a series of other small trials published in medical journals reinforced Portenoy’s 1986 study. They uniformly concluded that opioids did not deserve their terrible reputation and that they were extremely “effective in treating long-term chronic pain.” Buried in the scientific footnotes was the fact that “long-term” usually meant 12 to 16 weeks and “effective in treating” meant “superior to placebo.”13

An anesthesiologist and dentist, J. David Haddox, pushed the limits of the reevaluation movement. Haddox, who later became the American Academy of Pain Medicine president and went to work for Purdue Pharma, reported in Pain about the failure to treat the pain of a 17-year-old leukemia patient. That failure, wrote Haddox, had “led to changes similar to those seen with idiopathic opioid psychologic dependence (addiction).” “Pseudoaddiction” was a syndrome, he theorized, that doctors unintentionally caused when they failed to provide their patients with sufficient opioid painkillers. The “behavioral changes” that many doctors concluded constituted addiction, argued Haddox, were only evidence of how undertreated the patient was in terms of narcotic painkillers.14

America’s three major pain associations embraced pseudoaddiction.15 (It took a quarter century before a comprehensive study revealed that in the 224 scientific articles that cited pseudoaddiction, only 18 provided even the sketchiest anecdotal data to support the theory. The study concluded that pseudoaddiction was itself “fake addiction.”)

The same month that Haddox introduced pseudoaddiction, a dozen prominent doctors published “The Physician’s Responsibility Toward Hopelessly Ill Patients” in the New England Journal of Medicine. Although the study was limited to terminally ill patients, pain management advocates enthusiastically applied its conclusion to all patients: “The proper dose of pain medication is the dose that is sufficient to relieve pain and suffering.… To allow a patient to experience unbearable pain or suffering is unethical medical practice.”16

New Jersey became the first state to adopt an “intractable pain treatment” law that recognized patients had a right to treat their pain. The statute shielded doctors from criminal or civil liability if the narcotics dispensed caused an addiction; 18 other states soon followed.

Enter Big Pharma

Portenoy and colleagues contended that opioids should be the first treatment option for chronic nonmalignant pain if the patient had no history of addiction. Instead of setting a maximum dose, the emerging standard of care was that opioids should be dispensed until the patient’s pain was relieved. The twin themes—that not treating pain was negligent and that opioids were safe for almost everyone—reinforced one another. The Sackler family, owners of a small drug company, Purdue Pharma, would have been hard pressed to plan a better lead-in to their release a decade later of OxyContin, their blockbuster opioid-based painkiller.

When the pain reevaluation movement had begun in the mid-1980s, OxyContin was not even on the drawing board. It was in early development when pain was on its way to becoming the fifth vital sign. In the following decade, Purdue did what every other drug company with an opioid-based product did: spent millions underwriting and subsidizing the doctors, advocacy organizations, and pain societies who were at the vanguard of the reevaluation movement. Many pioneering doctors reaped big fees as company lecturers. Purdue and other drug firms subsidized courses at medical schools, professional conferences and conventions, and continuing education classes. And, similar to what happened with the launch of other major drugs, some government officials (even a few key FDA officials) eventually went to work for Purdue and other firms selling opioids. Purdue and its competitors spent lots of money on the pain advocates precisely because they were promoting ideas about pain treatment that the drug manufacturers enthusiastically embraced.

The opioid reevaluation movement might not have had such an impact had it not been for the development of a time-release opioid painkiller, OxyContin. Purdue, and its aggressive marketing of OxyContin, came at a time when doctors were more willing to believe that opioids could be safely prescribed.

Three psychiatrist brothers, Arthur, Mortimer, and Raymond Sackler had bought Purdue in 1952. It was then a tiny New York drug company whose product line consisted mostly of natural laxatives, earwax removers, and tonics that claimed to boost brain function and metabolism. A decade after purchasing Purdue, the Sacklers added a distressed British manufacturer, Napp Pharmaceuticals. The Sacklers had not thought about developing a painkiller until Napp took advantage of an opportunity in the United Kingdom.

Cicely Saunders, a British nurse-turned-physician, had opened the world’s first hospice in London in 1967. Her biggest obstacle in alleviating patients’ terminal discomfort was the need to dose painkillers intravenously every few hours. The patients got little sleep, and it was not possible to send them home to spend their last days surrounded by friends and family.

Morphine, Saunders found, was not as effective in alleviating pain as diamorphine (a brand name for heroin). Heroin’s biggest drawback, she concluded, was that “it may be rather short in action.”17 She experimented by adding sedatives and tranquilizers to extend the time pain was relieved, but she was stymied at every turn by intolerable side effects.

Still, Saunders had a permissive view of opioids and their addictive power. She did not think heroin had a “greater tendency to cause addiction than any other similar drug.… We have several patients in the wards at the moment who have come off completely without any withdrawal symptoms.”18

What she wanted was a revolutionary narcotic painkiller. In a single dose, it had to provide long relief from intense pain without causing sleepiness, motor coordination problems, and memory lapses. Several independent British pharmaceutical companies accepted her challenge. Smith & Nephew developed Narphen, a synthetic opioid it claimed was 10 times more powerful than morphine, quicker acting, and had a milder side effects profile. Although Saunders acknowledged that Narphen was a better end-of-life drug, it was not her holy grail for terminal cancer pain.

Smith & Nephew’s stumble handed the Sacklers an opportunity. Napp launched a significant research effort to find the new painkiller. When the breakthrough came in 1980, it not only promised to revolutionize pain care for the terminally ill but also unwittingly provided the technology that would later fuel America’s opioid crisis. Napp introduced a morphine painkiller with a revolutionary, invisible-to-the-human-eye, sustained-release coating. That chemical layer consisted of a dual-action polymer mix that turned to a gel when exposed to stomach acid. Napp claimed the drug, MST Continus (continuous), released pure morphine at a steady rate over 12 hours. They could adjust the release rate by fine-tuning the density of the coating’s water-based polymer. It was the breakthrough painkiller for which Cicely Saunders had been searching since the late 1960s.

MST Continus carved out a market in the UK, but it was limited to end-of-life cancer and hospice patients. It took the Sacklers seven years (until 1987) to get FDA approval for that drug in the U.S. (which they renamed MS-Contin). The FDA had slowed the approval process since its active ingredient, morphine, was a Schedule II controlled substance. By the time it went on sale in America, Portenoy had published the first of his studies concluding that opioids were not as addictive as previously thought and that they should be prescribed liberally to treat pain.

Purdue, now run by two of the surviving Sackler brothers, Mortimer and Raymond, and some of their children, took note of the burgeoning pain management movement. Raymond’s son, Richard Sackler, also a doctor, led a company effort to find an improved painkiller, or at least one with much broader commercial appeal than MS-Contin. Richard Sackler thought that any new painkiller should not use morphine since it had a notorious reputation as an end-of-life medication. Purdue’s science team picked oxycodone, a chemical cousin of heroin. While there were some oxycodone-based painkillers on the market—Percodan (oxycodone and aspirin) and Percocet (oxycodone and acetaminophen)—they were immediate-release pills. If Purdue could master an extended-release oxycodone pill, it would be the first of its kind.

Their oxycodone-based drug was still an unnamed product. Its first clinical trial was only completed in 1989. It took until 1992 for Purdue to apply for a patent. In 1995, the company finally got FDA approval. And it also won an extraordinary concession from the government regulator. Although Purdue had not conducted clinical trials to determine whether OxyContin was less likely to be addictive or abused than other opioid painkillers, the FDA had approved wording requested by the company: “Delayed absorption as provided by OxyContin tablets, is believed to reduce the abuse liability of a drug.”19 (Curtis Wright, the FDA officer who oversaw the OxyContin label approval, soon left the agency to work at Purdue as its medical officer for risk assessment).

Marketing Pain

Purdue’s sales team highlighted that extraordinary sentence to convince physicians that it was a safer narcotic than its rivals. Purdue prepared an unprecedented marketing launch for OxyContin. The late Arthur Sackler was a marketing genius, widely acknowledged as having introduced aggressive Madison Avenue advertising tactics to selling pharmaceuticals. Arthur had handled the promotion for Hoffmann-La Roche’s 1960s blockbuster drugs, Librium and Valium, and had made them the biggest-selling drugs in the world for a record 17 years.

Purdue laid out a sales strategy for OxyContin straight from Arthur’s playbook. Its twin sales pitches were that OxyContin relieved pain longer than any other opioid painkiller, and because it was a time-release product, it was less likely to be addictive.

Purdue sales reps raised “concerns about addiction” before physicians did. It was, they said, understandable that no matter how wonderful a drug, “a small minority” of patients “may not be reliable or trustworthy” for narcotic painkillers. If the doctors were still skeptical at that stage, the reps showed them the FDA-approved label that stated if OxyContin was used as prescribed for treating moderate to serious pain, addiction was “very rare.” What constitutes “very rare”? Less than one percent, according to the sales reps. To tilt the odds in favor of its “low risk of addiction” sales strategy, Purdue underwrote several studies that reported addiction rates from long-term opioid treatment between only 0.2 percent and 3.27 percent. However, those company-sponsored reports were never confirmed by independent studies.

Purdue also got help in promoting the “low risk of addiction” from the American Pain Society and the American Academy of Pain Medicine. Purdue and other opioid drug manufacturers were generous funders of both organizations. The groups issued a consensus statement emphasizing that opioids were effective for treating nonmalignant chronic pain and reiterating that it was “established” that there was a “less than 1 percent” probability of addiction.

Purdue sales reps hammered home that OxyContin released oxycodone into the bloodstream at a steady rate over 12 hours. That, Purdue claimed, made it impossible for addicts to get the rush they chased. Without a high, patients would not want more of the drug as it wore off. The company knew that was not true—its own clinical trials demonstrated that for some patients up to 40 percent of oxycodone was released into the bloodstream in the first hour or two. That was fast enough to cause a high and a resulting crash that required another pill in order to feel better.

Purdue revised its compensation packages for its sales team, especially top performers, in time for the OxyContin launch. Large bonuses could double a sales rep’s salary. In an internal memo to the “Entire Field Force,” Purdue used a Wizard of Oz analogy to promise the reps who sold the most that “A pot of gold awaits you ‘Over the Rainbow.’” Two months after Oxy went on sale, another memo titled, “$$$$$$$$$$$$$ It’s Bonus Time in the Neighborhood!”, urged the sales team to push doctors to prescribe the higher-dose pills.

There was far greater profit for Purdue, and more money for the sales team, by pushing higher doses. There were three strengths when it went on sale: 10, 20, and 40 milligrams. An 80 mg tablet was released a month later (15, 30, 60, and 160 mg pills would arrive in a few years). Purdue’s production costs were virtually the same for each since oxycodone, the active ingredient, was inexpensive to manufacture. However, Purdue charged more for each additional strength. On average, a bottle of 20 mg pills cost twice as much as the 10 mg variety, and 80 mg pills were about seven times more expensive. If a patient took 20 mg pills twice a week, Purdue made less than $40 in profit. The same patient prescribed 80 mg pills twice a week returned $200 to Purdue, a 450 percent increase (that profit exceeded $600 a bottle in another five years).
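
For readers checking the arithmetic, the percentage works out if the 20 mg bottle returned roughly $36 in profit—an assumed figure, consistent with the “less than $40” stated above:

```latex
% Illustrative check; the $36 figure is an assumption consistent with "less than $40"
\[
\frac{\$200 - \$36}{\$36} \approx 4.5 \quad\Longrightarrow\quad \text{roughly a 450 percent increase in profit}
\]
```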

Dispensing physicians had no idea what Oxy cost, nor did most care. Since they did not pay for the drugs, they let patients and their insurance companies worry about that.

Purdue created “Individualize the Dose,” a campaign designed to push the strongest doses. Sales reps told doctors that the company’s studies showed it was best to start patients on a medium to higher dose. The stronger doses, Purdue assured physicians, could be dispensed even to people who had never used opioids, all without adverse effects. The field reps contended that the higher-dose pills were no more likely to cause addiction. That was not true. Internal documents later revealed that Purdue’s sales team knew that stronger doses carried a significantly higher likelihood of dependence, addiction, and even potentially lethal respiratory suppression. While the company’s press releases claimed “dose was not a risk factor for opioid overdose,” internal communications are replete with references to the dangers of “dose-related overdose.”

OxyContin was instantly the most successful drug Purdue ever released. By 2001, only five years after it had gone on the market, its cumulative sales had passed a billion dollars, a first for Purdue. Although a lucrative hit for the Sacklers, OxyContin was less than ten percent of the opioid market. Johnson & Johnson, Janssen, Cephalon, and Endo Pharmaceuticals had their own narcotic painkillers. Their sales teams pitched them as aggressively as Purdue pushed Oxy, and all the companies subsidized the same nonprofits and patient advocacy groups. Janssen managed to get FDA approval in 1990 for the first fentanyl patch to treat severe pain. Fentanyl was then the most potent synthetic opioid, one hundred times stronger than morphine and 1.5 times more powerful than oxycodone. Two years after the FDA had given a green light to OxyContin, it approved Cephalon’s Actiq, a fentanyl “lollipop,” for cancer patients whose intense pain did not respond to other narcotics. Fentanyl patches and Actiq pops were diverted illicitly for big profits and sometimes with lethal side effects. There were widespread industry rumors that Cephalon’s sales team pushed its lollipops off-label as “ER on a stick” for chronic pain.

Still, by 2001, it was OxyContin that was in the crosshairs of some angry patients, the media, and the DEA. Small towns throughout Appalachia seemed overrun by a deluge of OxyContin, locally called “Hillbilly Heroin.” The DEA, meanwhile, was investigating diversion of the drug from the manufacturing plant Purdue used in New Jersey. It was also compiling evidence that Oxy contributed to overdose deaths by examining autopsy reports from across the country. The DEA wanted the FDA to put strict restrictions on the number of refills allowed for the painkiller.

In February 2001, OxyContin appeared for the first time in the New York Times, a front-page story—“Cancer Painkillers Pose New Abuse Threat”—about how it had become an abused drug in at least seven states.20 The Times raised the issue of whether Purdue’s hard-hitting marketing was partially responsible for the growing problems.

Purdue went all out to battle the bad press and its regulatory headaches. It hired big name legal talent. Rudy Giuliani, fresh off being America’s Mayor after his handling of the city in the aftermath of the 9/11 attacks, had just opened a private office and he began lobbying government officials on Purdue’s behalf. The company dispatched its medical officers and top executives to meet with the FDA and DEA. It assured both that it was working to control any abuse and diversion and it contested the findings about Oxy’s role in overdoses by pointing to the cocktail of illicit drugs in most of the autopsy reports. At that stage, the DEA could not find a death in which the victim had only Oxy, without alcohol, benzos, heroin, cocaine, cannabis, or some other drug. In the same month as the Times story, Richard Sackler sent an internal Purdue email that said, “We have to hammer on the abusers in every way possible. They are the culprits and the problem. They are reckless criminals.”

Purdue emerged mostly unscathed from all the extra scrutiny. Although the FDA did require changes to OxyContin’s label, it was far less than what activists wanted. The FDA ordered the addition of a so-called black box warning. The bold-font warning was a reminder to doctors that OxyContin was “a Schedule II controlled substance with an abuse liability similar to morphine.” No drug company liked having a black box warning on its label, but as I learned in my reporting, Purdue was not upset since it considered the language a good compromise. One marketing executive remarked later, “It is black box lite.” It merely reiterated what most physicians knew already about OxyContin.

In 2004, OxyContin officially earned the dubious distinction of being the most abused drug in America.21 Parents who had lost children to OxyContin were trying to raise awareness about the drug’s dangers. The biggest concern for Purdue, however, was an ongoing investigation into Oxy’s marketing by the West Virginia U.S. Attorney, John Brownlee, who started his probe in 2002. West Virginia was one of the states hardest hit by OxyContin.

In 2006, Brownlee was ready to bring a case. He forwarded a six-page memo to the DOJ’s Criminal Division to get authorization to file felony charges against Purdue and its top executives for money laundering, wire and mail fraud, and conspiracy.22 Brownlee got bad news from headquarters. The Criminal Division vetoed all the serious felony counts and instead gave him permission only to bring less serious charges around misbranding the drug. That was a clean and straightforward prosecution.

In May 2007, Purdue and three non-Sackler executives accepted a plea agreement. The company and officers pled guilty to a scheme “to defraud or mislead, marketed and promoted OxyContin as less addictive, less subject to abuse and diversion, and less likely to cause tolerance and withdrawal than other pain medications.”23 Purdue’s fine was $634.5 million, and the three executives paid a combined $34.5 million.

Purdue signed both Consent and Corporate Integrity agreements. It agreed not to make “any written or oral claim that is false, misleading, or deceptive” in marketing OxyContin and to report immediately any signs of false or deceptive marketing. The strict terms of those agreements should have been the end of Oxy’s nationwide trail of devastation. Instead, the ink was barely dry before Purdue started flagrantly disregarding the rules. The deadliest years and record abuse with OxyContin came after the 2007 guilty pleas.

And Then It Got Even Worse

Purdue went on a hiring binge that eventually doubled its sales force. It unleashed them to push Oxy with a renewed vigor. The company also paid millions to the “key physician opinion leaders” so they would convince doctors that OxyContin should be their first choice whenever a patient presented with serious pain. The results were impressive. In the year that Purdue pled guilty, sales passed $1 billion annually and profits exceeded $600 million. OxyContin provided 90 percent of Purdue’s profits.

When Purdue faced the possibility of generic competition in 2010, the company devised a “new and improved” coating that it said was more difficult to crush, snort or inject. Although Purdue’s two small studies showed the new version had “no effect” in reducing the addiction and overdose potential, the FDA still approved tamper-resistant OxyContin. (It took ten years before an FDA advisory panel ruled that the tamper-resistant Oxy had failed to reduce opioid overdoses).

With the FDA approval, Purdue spent millions on a splashy ad campaign directed to physicians. Titled “Opioids with Abuse Deterrent Properties,” Purdue touted its crush-resistant formulation as the first ever narcotic pain reliever that reduced the chances for abuse and slashed the addiction rate. The campaign worked. Many doctors believed it and increased their prescribing pace.

In 2011, four years after Purdue’s criminal guilty plea, OxyContin surpassed heroin and cocaine to become the nation’s most deadly drug. Sales also set a new record each year, each breaking the previous year’s. When there was a slowdown in 2013, the Sacklers brought in McKinsey & Company consultants, who laid out a plan to “supercharge” sales. The results were almost immediate. In 2015, Forbes listed the Sackler family on its “Richest Families” list for the first time. The Sacklers, with an estimated net worth of $14 billion, had jumped ahead of the Rockefellers, Mellons, and Busches, among many others. Forbes dubbed the family “the OxyContin Clan.”24

The news about the Sacklers’ great fortune was lost under a deluge of news about the national toll from OxyContin. By 2015, for the first time, opioids killed more people than guns and car crashes combined, and lethal overdoses even surpassed the peak year of HIV/AIDS deaths. Statisticians blamed OxyContin for the first decline in two decades in the life expectancy of Americans. And a CDC report confirmed what some doctors suspected: prescription opioid users were 40 times more likely to become heroin addicts, making Oxy the most effective gateway drug into heroin. The CDC urged doctors either to “carefully justify” or “avoid” prescribing more than 60 mg daily. Still, the guidelines were voluntary. Only seven states passed legislation to limit the number of prescriptions.

In 2016, OxyContin and the opioid epidemic became a presidential campaign issue. The Joint Commission, responsible for accrediting hospitals and clinics, reversed its 2001 position that pain should be the fifth vital sign. Even the FDA was slowly recognizing the extent of the problem. Parents who lost children to opioids had submitted a citizen’s petition to the FDA, pleading with the regulators to classify Oxy for severe pain only. After eight years on the back burner, the agency was seriously considering it.

It’s Only Money

Suddenly, the Sacklers and Purdue, and their competitors, were on the defensive. The Trump administration declared the opioid epidemic a public health emergency in 2017. That action freed up extra federal resources for treatment. A few months later, forty-one state attorneys general subpoenaed internal Purdue marketing and promotion documents. Purdue announced plans to slash its sales force by half and that it would no longer market Oxy directly to individual physicians, instead concentrating on hospitals and clinics.

In 2019, a judicial panel decided to streamline the more than 2,500 pending lawsuits under the jurisdiction of a single federal judge in Ohio. The consolidated lawsuit was called the National Prescription Opiate Litigation. The following month, the Massachusetts Attorney General filed an amended complaint that was different from all others. It relied on Purdue’s internal records to conclude that eight of the Sackler-family directors had “created the epidemic and profited from it through a web of illegal deceit.” The New York Attorney General filed a similar action a few weeks later and added that the Sacklers had personally transferred hundreds of millions in assets to offshore tax havens.

To drive home how much the Sacklers had profited from OxyContin, court documents filed by the attorneys general revealed that the family directors had voted payments of $12 to $13 billion in profits since OxyContin went on sale. By the end of 2019, OxyContin had $35 billion in sales since its launch, while America recorded its 200,000th death since the government had begun tracking them.25

In the end, it was lawyers, state prosecutors, and the nation’s top class action litigators who pried some financial justice from the many parties that shared responsibility for the national tragedy. Purdue filed for bankruptcy protection in late 2019, and the Sacklers sought protection from all the civil litigation so long as they contributed a lot of money to an overall settlement. In 2022, the family agreed to pay $6 billion toward a settlement, and a bankruptcy judge signed off on a plan that freed them from civil litigation.26 (I co-wrote two New York Times opinion pieces that argued the judge had exceeded his bankruptcy court authority by discharging all actions pending against the Sacklers, who had not themselves filed bankruptcy. That issue and the complex bankruptcy plan are now pending before the Supreme Court.) Under the bankruptcy plan, Purdue became a public entity that continued to sell OxyContin, with any proceeds going to treatment and public health.

This article appeared in Skeptic magazine 28.4

In 2022, Johnson & Johnson paid $5 billion to settle the litigation pending against it. J&J also announced it was quitting the opioid painkiller business. The country’s three largest wholesale drug distributors—AmerisourceBergen, Cardinal Health, and McKesson—reached a settlement in the tsunami of litigation pending against them by paying a combined $21 billion.27 Another $13.8 billion came from the big three pharmacy chains, Walmart, Walgreens, and CVS. Rite Aid filed for bankruptcy protection. The litigation has produced about $55 billion in total settlements.28

Still, none of that matters to many families who lost loved ones to the overzealous marketing of prescription painkillers. The many families I have interviewed note that no one has gone to prison for having made such enormous profits off the deaths of several hundred thousand Americans. Many who helped fuel the epidemic, such as overprescribing doctors, owners of pill mills, and lax regulators at the FDA and in state health agencies, got away without so much as a slap on the wrist.

An unnamed plaintiff’s lawyer told The Guardian in 2018 that the Sacklers were “essentially a crime family… drug dealers in nice suits and dresses.” No prosecutors, however, had the courage to bring a criminal action against the Sacklers and other opioid kingpins.

What a shame.

About the Author

Gerald Posner is an award-winning journalist and author of thirteen books, including New York Times nonfiction bestsellers Why America Slept (about 9/11) and God’s Bankers (about the Vatican), and the Pulitzer Prize finalist Case Closed (about the JFK assassination). His latest, Pharma, is a withering and encyclopedic indictment of a drug industry that often seems to prioritize profits over patients. A graduate of the University of California at Berkeley, he was a litigation associate at a Wall Street law firm. Before turning to journalism, he spent several years providing pro bono legal representation on behalf of survivors of Nazi experiments at Auschwitz.

References
  1. https://rb.gy/6a7hv
  2. https://rb.gy/8pyh2
  3. https://rb.gy/h7pop
  4. https://rb.gy/lviuy; https://rb.gy/wrekp
  5. Cavers, D.F. (1939). The Food, Drug, and Cosmetic Act of 1938: Its Legislative History and Its Substantive Provisions. Law & Contemp. Probs., 6, 2.
  6. “John Bonica, Pain’s Champion and the Multidisciplinary Pain Clinic,” Relief of Pain and Suffering, John C. Liebeskind History of Pain Collection, Box 951798, History & Special Collections, UCLA Louise M. Darling Biomedical Library, Los Angeles, CA.
  7. Brennan, F. (2015). The U.S. Congressional “Decade on Pain Control and Research” 2001– 2011: A Review. Journal of Pain & Palliative Care Pharmacotherapy, 29(3), 212–227.; https://rb.gy/zmifj
  8. Porter, J., & Jick, H. (1980). Addiction Rare in Patients Treated With Narcotics. New England Journal of Medicine, 302(2), 123.
  9. https://rb.gy/zmg4c; https://rb.gy/leawh. In 2017, six researchers published in the NEJM the results of their review of all subsequent citations to the 1980 letter. “In conclusion, we found that a five-sentence letter published in the Journal in 1980 was heavily and uncritically cited as evidence that addiction was rare with long-term opioid therapy. We believe that this citation pattern contributed to the North American opioid crisis by helping to shape a narrative that allayed prescribers’ concerns about the risk of addiction associated with long-term opioid therapy.” Dr. Jick told the Associated Press in 2017: “I’m essentially mortified that that letter to the editor was used as an excuse to do what these drug companies did.”
  10. Portenoy, R.K., & Foley, K.M. (1986). Chronic Use of Opioid Analgesics in Non-Malignant Pain: Report of 38 Cases. Pain, 25(2), 171–186.
  11. Max quoted in Schottenfeld, J.R., Waldman, S.A., Gluck, A.R., & Tobin, D.G. (2018). Pain and Addiction in Specialty and Primary Care: The Bookends of a Crisis. Journal of Law, Medicine & Ethics, 46(2), 220–237.
  12. Morone, N.E., & Weiner, D.K. (2013). Pain as the Fifth Vital Sign: Exposing the Vital Need for Pain Education. Clinical Therapeutics, 35(11), 1728–1732.
  13. Sullivan, M.D., & Howe, C.Q. (2013). Opioid Therapy for Chronic Pain in the United States: Promises and Perils. Pain, 154, S94–S100.
  14. Weissman, D.E., & Haddox, J.D. (1989). Opioid Pseudoaddiction—an Iatrogenic Syndrome. Pain, 36(3), 363–366.
  15. “Definitions Related to the Use of Opioids for the Treatment of Pain,” Consensus Statement of the American Academy of Pain Medicine, the American Pain Society, and the American Society of Addiction Medicine, approved by the American Academy of Pain Medicine Board of Directors on February 13, 2001, the American Pain Society Board of Directors on February 14, 2001, and the American Society of Addiction Medicine Board of Directors on February 21, 2001 (replacing the original ASAM Statement of April 1997), published 2001.
  16. Wanzer, S.H., Federman, D.D., Adelstein, S.J., Cassel, C.K., Cassem, E.H., Cranford, R.E., … & Van Eys, J. (1989). The Physician’s Responsibility Toward Hopelessly Ill Patients. A Second Look.
  17. Saunders, C. (1965). The Last Stages of Life. The American Journal of Nursing, 70–75.
  18. Saunders, C. (1963). The Treatment of Intractable Pain in Terminal Cancer. Proceedings of the Royal Society of Medicine, 56, 195–197.
  19. https://rb.gy/l7kvh
  20. https://rb.gy/tzwla
  21. Cicero, T. J., Inciardi, J. A., & Muñoz, A. (2005). Trends in Abuse of OxyContin and Other Opioid Analgesics in the United States: 2002–2004. The Journal of Pain, 6(10), 662–672.
  22. https://rb.gy/xdv0m
  23. 2007-05-09 Agreed Statement of Facts, Para 20.
  24. https://rb.gy/qi6ph
  25. https://rb.gy/67baw
  26. https://rb.gy/580po
  27. https://rb.gy/hz79m
  28. https://rb.gy/ma2m8
Categories: Critical Thinking, Skeptic

Reconductoring our Electrical Grid

neurologicablog Feed - Thu, 04/11/2024 - 5:26am

Over the weekend when I was in Dallas for the eclipse, I ran into a local businessman who works in the energy sector, mainly involved in new solar projects. This is not surprising, as Texas is second only to California in solar installation. I asked him if he is experiencing a backlog in connections to the grid and his reaction was immediate – a huge backlog. This aligns with official reports – there is a huge backlog and it’s growing.

In fact, the various electrical grids may be the primary limiting factor in transitioning to greener energy sources. As I wrote recently, energy demand is increasing faster than previously projected. Our grid infrastructure is aging and mainly uses 100-year-old technology. There are also a number of regulatory hurdles to expanding and upgrading the grid. There is good news in this story, however. We have at our disposal the technology to virtually double the capacity of our existing grid, while reducing the risk of sparking fires and weather-induced power outages. This can be done cheaper and faster than building new power lines.

The process is called reconductoring, which just means replacing existing power lines with more advanced power lines. I have to say, I falsely assumed that all this talk about upgrading the electrical grid included replacing existing power lines and other infrastructure with more advanced technology, but it really doesn’t. It is mainly about building new grid extensions to accommodate new energy sources and demand. Every resource I have read, including this Forbes article, gives the same primary reason why this is the case. Utility companies make more money from expensive expansion projects, for which they can charge their customers. Cheaper reconductoring projects make them less money.

Other reasons are given as well. The utility companies may be unfamiliar with the technology, not want to retrain their workers, see this as "new technology" that should be approached as a pilot project, and have some misconceptions about its safety. However, the newer power lines have been used for over two decades, and Europe is way ahead of the US in installing them. These are hurdles that can all be solved with a little money and regulation.

Traditional power lines have a steel core surrounded by aluminum wires. Newer power lines have a carbon composite core surrounded by annealed aluminum. The newer cables are stronger, sag less, and have up to twice the carrying capacity of the older lines. Upgrading to the newer cables is a no-brainer.
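
To put the "up to twice the capacity" claim in concrete terms, here is a minimal back-of-the-envelope sketch in Python (my illustration, not something from the original post). The voltage, ampacity, and power factor values are assumptions for a hypothetical 230 kV three-phase circuit; the point is only that, at a fixed voltage, doubling the conductor's allowable current roughly doubles the power the line can deliver.

  # Illustrative only: assumed ratings for a hypothetical circuit, not utility data.
  import math

  def line_capacity_mw(voltage_kv, ampacity_amps, power_factor=0.95):
      """Approximate three-phase power transfer: P = sqrt(3) * V_line * I * pf."""
      return math.sqrt(3) * voltage_kv * 1e3 * ampacity_amps * power_factor / 1e6

  old_mw = line_capacity_mw(230, 900)    # assumed rating for a steel-core (ACSR-style) conductor
  new_mw = line_capacity_mw(230, 1800)   # assumed rating for a composite-core replacement (~2x ampacity)

  print(f"steel-core line:     {old_mw:.0f} MW")   # roughly 341 MW with these assumptions
  print(f"composite-core line: {new_mw:.0f} MW")   # roughly 681 MW
  print(f"capacity gain:       {new_mw / old_mw:.1f}x")

In practice the real ratings depend on conductor temperature limits, weather, and sag constraints, which is exactly why the composite-core designs, which sag less when hot, can be rated so much higher on the same route.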

The electrical grids are now the primary limiting factor in getting new clean energy online. But adding new power lines is a slow process. There is no single agency that can do it, so new permits have to go through a maze of local jurisdictions. Utility companies also fight with each other over who has to pay for what. And local residents create a NIMBY problem, pushing back against new power lines.

Reconductoring bypasses all of those issues, because it uses the existing routes, towers, and infrastructure. There are no new permits to obtain – you just do it.

In a way, we can take advantage of our past negligence. We have essentially been building new power lines to add more capacity, rather than updating lines. This means we have left ourselves an easy way to massively expand our grid capacity. There is already some money in the infrastructure bill and the IRA for grid upgrades, but the consensus seems to be that this is not enough. We likely need a new bill, one that provides the regulation and funding necessary for a massive reconductoring project in the US. And again, the best part about this approach is that it can be done fast. We can get ahead of our increasing energy demand, and make the grid more resilient and safer.

This will not solve all problems. Some new additions will still need to be made to the grid, not only to expand overall capacity but also to bring new locations onto the grid, both sources and users of electricity. Those necessary grid expansions, however, can take priority, as we won't need to build new towers just to add capacity to existing routes.

Yet again it seems we have the technology we need to successfully make the transition to a much greener energy sector. We just need to get our act together. We need to make some strategic investments and change regulations and how we do things. There are about 3,000 electric utility companies in the US that are responsible for grid upgrades. There are also many state and local jurisdictions. This is an impossible patchwork of entities that need to work together to improve, update, and expand the grid, and so the result is a slow bureaucratic mess (which should come as a surprise to no one). There are also some perverse incentives, such as the way utility companies are reimbursed for capital expenditures.

Again I am reminded of my experience with telehealth – we had the technology, and the advantages were all there. But we could not seem to make it happen because of bureaucratic hurdles. Then COVID hit, and literally overnight we made it happen. If we see the threat of climate change with the same urgency, we can similarly remove logistical hurdles and make a green transition happen.

The post Reconductoring our Electrical Grid first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #931: Error Erasure Extravaganza

Skeptoid Feed - Tue, 04/09/2024 - 2:00am

It's time once again for Skeptoid to correct another round of errors in previous shows.

Categories: Critical Thinking, Skeptic

Eve Herold — Robots and the People Who Love Them

Skeptic.com feed - Tue, 04/09/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss421_Eve_Herold_2024_04_09.mp3 Download MP3

If there’s one universal trait among humans, it’s our social nature. The craving to connect is universal, compelling, and frequently irresistible. This concept is central to Robots and the People Who Love Them. Socially interactive robots will soon transform friendship, work, home life, love, healthcare, warfare, education, and nearly every nook and cranny of modern life. This book is an exploration of how we, the most gregarious creatures in the food chain, could be changed by social robots. On the other hand, it considers how we will remain the same, and asks how human nature will express itself when confronted by a new class of beings created in our own image.

Drawing upon recent research in the development of social robots, including how people react to them, how in our minds the boundaries between the real and the unreal are routinely blurred when we interact with them, and how their feigned emotions evoke our real ones, science writer Eve Herold takes readers through the gamut of what it will be like to live with social robots and still hold on to our humanity. This is the perfect book for anyone interested in the latest developments in social robots and the intersection of human nature and artificial intelligence and robotics, and what it means for our future.

Eve Herold is an award-winning science writer and consultant in the scientific and medical nonprofit space. A longtime communications and policy executive for scientific organizations, she currently serves as Director of Policy Research and Education for the Healthspan Action Coalition. She has written extensively about issues at the crossroads of science and society, including stem cell research and regenerative medicine, aging and longevity, medical implants, transhumanism, robotics and AI and bioethical issues in leading-edge medicine. Previous books include Stem Cell Wars and Beyond Human, and her work has appeared in the Wall Street Journal, Vice, the Washington Post and the Boston Globe, among others. She’s a frequent contributor to the online science magazine, Leaps, and is the recipient of the 2019 Arlene Eisenberg Award from the American Society of Journalists and Authors.

Shermer and Herold discuss:

  • What happened to our flying cars and jetpacks from The Jetsons?
  • What is a robot, anyway? And what are social robots?
  • Oskar Kokoschka, Alma Mahler, and the female doll
  • Robot nannies, friends, therapists, caregivers, and lovers
  • Sex robots
  • The uncanny valley: roboticist Masahiro Mori in 1970
  • Robots in science fiction
  • Psychological states: anthropomorphism, effectance (the need to interact effectively with one’s environment), theory of mind (onto robots), social connectedness
  • “Personal, social, emotional, home robots”
  • Emotions, animism, mind
  • Emotional intelligence
  • Turing Test
  • Artificial intelligence and natural intelligence
  • What is AI and AGI?
  • The alignment problem
  • Large Language Models
  • ChatGPT, GPT-4, GPT-5 and beyond
  • Robopocalypse
  • Robo soldiers
  • What is “mind”, “thinking”, and “consciousness”, and how do molecules and matter give rise to such nonmaterial processes?
  • Westworld: Robot sentience?
  • The hard problem of consciousness
  • The self and other minds
  • How would we know if an AI system was sentient?
  • Can AI systems be conscious?
  • Does Watson know that it beat the great Ken Jennings in Jeopardy!?
  • Self-driving cars
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

Eclipse 2024

neurologicablog Feed - Mon, 04/08/2024 - 5:37am

I am currently in Dallas, Texas, waiting to see, hopefully, the 2024 total solar eclipse. This would be my first total eclipse, and everything I have heard indicates that it is an incredible experience. Unfortunately, the forecast calls for some clouds, although it has been getting a little better over the past few days, with the clouds being delayed. Hopefully there will be a break in the clouds during totality.

Actually, there is another reason to hope for a good viewing. During totality the temperature will drop rapidly. This cooling suppresses convection, which can temporarily disperse some types of clouds.

I am prepared with eclipse glasses, a pair of solar binoculars, and one of my viewing companions has a solar telescope. These are all certified and safe, and I have already used the glasses and binoculars extensively. You can use them to view the sun even when there is not an eclipse. With the binoculars you can see sunspots – it’s pretty amazing.

While in Dallas, we (the SGU crew, including George Hrab and our tech guru, Ian) put on three shows over the weekend, including recording two live episodes of the SGU. These were our biggest crowds ever for a live event, and the audiences were mostly people not from Texas. People from all over the world are here to see the eclipse.

I have to add, just because there is so much talk about this in the media, a clarification about the dangers of viewing solar eclipses. You can view totality without protection and without danger. Also, during most of the partial eclipse, viewing the eclipse is no different from viewing the sun in general: it is dangerous to look directly at the sun, and you should not do it, as it can damage your retina.

But – we all live our lives without fearing accidentally staring at the sun, because it hurts and we naturally don’t do it. The only real danger of an eclipse is when most of the sun is covered, so that only a crescent of sun is visible. In this case the remaining amount of sun is not bright enough to trigger pain and cause us to look away. But that sliver of sun is still bright enough to damage your retina. So don’t look directly at a partial eclipse even if it is not painful. This includes locations out of the path of totality that will have a high degree of sun cover, or just before or after totality. That is when you want to use certified eclipse glasses (that are in good condition). During totality you do not need eclipse glasses, and you would see nothing but black anyway.

I will add updates here, and hopefully some pictures, once the eclipse happens.

Update: Well, despite weeks of bad weather reports and angst, we had clear skies in Dallas, and got to see the entire eclipse, including all of totality. Absolutely amazing. It is one of those wondrous natural phenomena that you have to experience in person.

During totality we were able to see multiple prominences, including one big one. Essentially this was a huge arc of red gas extending from the surface of the sun. Beautiful.

I would definitely recommend planning a trip to a future total solar eclipse. It will be worth it.

The post Eclipse 2024 first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #978 - Apr 6 2024

Skeptics Guide to the Universe Feed - Sat, 04/06/2024 - 5:00am
Guest Rogue: Andrea Jones Rooy; Quickie with Bob: Silicon Spikes; News Items: Havana Syndrome, Robo Taxis in New York, Rebellions - Cultural Memory - and Eclipses, Gravitational Waves and Human Life; Your Questions and E-mails: Evolution of Gullibility; Who's That Noisy; Science or Fiction
Categories: Skeptic

Lance Grande — The Formation, Diversification, and Extinction of World Religions

Skeptic.com feed - Sat, 04/06/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss420_Lance_Grande_2024_04_06.mp3 Download MP3

Thousands of religions have adherents today, and countless more have existed throughout history. What accounts for this astonishing diversity?

This extraordinarily ambitious and comprehensive book demonstrates how evolutionary systematics and philosophy can yield new insight into the development of organized religion. Lance Grande―a leading evolutionary systematist―examines the growth and diversification of hundreds of religions over time, highlighting their historical interrelationships. Combining evolutionary theory with a wealth of cultural records, he explores the formation, extinction, and diversification of different world religions, including the many branches of Asian cyclicism, polytheism, and monotheism.

Grande deploys an illuminating graphic system of evolutionary trees to illustrate historical interrelationships among the world’s major religious traditions, rejecting colonialist and hierarchical “ladder of progress” views of evolution. Extensive and informative illustrations clearly and vividly indicate complex historical developments and help readers grasp the breadth of interconnections across eras and cultures.

The Evolution of Religions marshals compelling evidence, starting far back in time, that all major belief systems are related, despite the many conflicts that have taken place among them. By emphasizing these broad historical interconnections, this book promotes the need for greater tolerance and deeper, unbiased understanding of cultural diversity. Such traits may be necessary for the future survival of humanity.

Lance Grande is the Negaunee Distinguished Service Curator, Emeritus, of the Field Museum of Natural and Cultural History in Chicago. He is a specialist in evolutionary systematics, paleontology, and biology who has a deep interest in the interdisciplinary applications of scientific method and philosophy. His many books include Curators: Behind the Scenes of Natural History Museums (2017) and The Lost World of Fossil Lake: Snapshots from Deep Time (2013). His new book is The Evolution of Religions: A History of Related Traditions.

Shermer and Grande discuss:

  • Why is a paleontologist and evolutionary theorist interested in religion?
  • Evolutionary systematics and comparativism in evolutionary biology, linguistics, and the history of religion
  • What is a comparative systematist?
  • E. O. Wilson’s consilience approach
  • Agnostic approach: not addressing the truth value of any one religion
  • What is religion?
  • Variety: 10,000 different religions: Christianity (33%), Islam (23%), Hinduism/Buddhism (23%), Judaism (0.2%), Other (10%), Agnosticism (10%), Atheism (2%)
  • Evolutionary trees of religion
  • Biological vs. cultural evolution & diversification: Lamarckian vs. Darwinian
  • Historical colonialist progressivism and social Darwinism
  • Franz Boas, Margaret Mead, historical particularism
  • Rather than focusing on differences, focus on similarities
  • Nature/Nurture & The Blank Slate in anthropology & the social sciences
  • Early evolutionary origins of religion: the cognitive revolution, agenticity, patternicity, theory of mind, animism, spiritism, polytheism
  • Gobekli Tepe as the earliest religious ceremonial structure
  • Machu Picchu and Inca religion
  • Human sacrifice and religion
  • Apocalypto
  • Pizarro, Atahualpa, and Spanish/European colonialism & the eradication of New World religions
  • Time’s arrow and Time’s cycle: Asian Cyclicism
  • Dharmic religion (India), Taoism, Buddhism, Jainism, Sikhism, Shintoism (Hirohito)
  • Old World Hard Polytheism (vs. Soft?) & New World Hard Polytheism (Mesopotamian, Egyptian, Celtic, Greek, Old Norse, Siberian totemism, Alaskan totemism)
  • Colonialism and missionaries extinguished many polytheistic religions
  • Linear Monotheism: Atenism, Zoroastrianism, El, Yahweh, Jehovah, Monad, Allah (linear time: one birth, one life, one death, one eternal afterlife; dualistic cosmology: good vs. evil, light vs. dark, heaven vs. hell); proselytic: conversion efforts
  • Abrahamic Monotheism 6th century BCE Second Temple Judaism and Samaritanism
  • Included prophets: Noah, Abraham, Moses (60% of all religious people today)
  • Tanakh sacred scripture 6th century BCE: Hebrew Bible, Old Testament, Quran
  • Jesu-venerationism (1st century CE): Ebionism (Jesus as prophet but not divine), Traditional Christianity, Biblical Demiurgism (primal good god Monad, evil creator spirit Demiurge; saw Jesus as the spiritual emanation of the Monad), Islam
  • Reformation: Catholicism split into Protestantism, Anglicanism
  • Islam: revered 25 prophets from Adam to Jesus, ending with Muhammad
  • Expansion of Islam through conquests in the 7th and 8th centuries CE
  • 4 Generalizations:

    • Organized Religions are historically related at one ideological level or another (illustrated by trees);
    • Largest major branches today were historically intertwined with major political powers;
    • Authority of women declined with the rise of male dominated pantheons, empires, clergies, caliphates;
    • Religion played a role in our species’ early ability to adapt to its social and physical environment: tribalism was a competitive advantage for early humans in which communal societies that developed agriculture, commerce, educational facilities, and armies out-competed less communitarian groups.
Show Notes
How We Believe

In my 2000 book How We Believe: Science, Skepticism, and the Search for God, I defined religion as “a social institution that evolved as an integral mechanism of human culture to create and promote myths, to encourage altruism and reciprocal altruism, and to reveal the level of commitment to cooperate and reciprocate among members of the community.” That is, there are two primary purposes of religion:

  1. The creation of stories and myths that address the deepest questions we can ask ourselves: Where did we come from? Why are we here? What does our ultimate future hold?
  2. The production of moral systems to provide social cohesion for the most social of all the social primates. God figures prominently in both these modes as the ultimate subject of mythmaking and the final arbiter of moral dilemmas and enforcer of ethical precepts.
From Shermer’s book Truth

“Jesus was a great spiritual teacher who had a profound effect on many people,” writes Lance Grande in his magisterial The Evolution of Religions, admitting that “he became what is probably the most influential person in history.” But this says nothing about the verisimilitude of the miracle claims made in Jesus’s name. In fact, as Grande notes, miracle claims were not made in Jesus’s name during his own lifetime (4 BC–30 CE), nor in the earliest writings of the New Testament by Paul. Even Paul’s mention of the resurrection of Christ, in 1 Corinthians (15:44), describes it as a spiritual event rather than a literal one: “It is sown a natural body; it is raised a spiritual body. There is a natural body, and there is a spiritual body.” In Paul’s writings about Christ, says Grande, “he speaks of him in a mystical sense, as a spiritual entity of human consciousness.” Many contemporary groups, in fact, “saw Christ as a spirit that possessed the man Jesus at his baptism and left him before his death at the crucifixion” (called “separationism”). But since political monarchs in the first century CE were treated as divine, Christian proselytizers began to refer to Jesus as the “King of Kings,” and so came to pass the deification of an otherwise mortal man. Here is how Grande recaps the transformation:

Reports of specific miracles only began to appear several decades after the death of Jesus, in the Gospel of Mark (65–70 CE) and in later gospels (80–100 CE). This suggests that stories of miracles (e.g., controlling the weather, creating loaves and fishes out of nothing, turning water into wine, healing the sick, and raising the physical dead) were layered into the story of Jesus as expressions of an ultimate God experience.

And as is typical of myths in the making, in the retelling across peoples, spaces, and generations, layers of improbability are added as a test of faith:

Once the stories of miracles began to appear in early Christianity, they were retold repeatedly, until they became ingrained beliefs. More stories were added, such as miracles about singing angels, stars announcing earthly happenings, and even a fetus (that of John the Baptist in his mother Elizabeth’s womb) leaping to acknowledge the anticipated power of another fetus (that of Jesus in his mother Mary’s womb). These details, many of which probably began as metaphorical lessons, gradually became accepted by many followers as literal historic truths. It is probable that some of these stories were never intended as documents of historical fact.

From metaphorical lessons to historic truths. Perhaps this is what the author of the Gospel of John meant when he wrote (John 20:31): “But these are written, that ye might believe that Jesus is the Christ, the Son of God; and that believing ye might have life through his name.”

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

Maggie Jackson — Uncertain: The Wisdom and Wonder of Being Unsure

Skeptic.com feed - Tue, 04/02/2024 - 1:39pm
https://traffic.libsyn.com/secure/sciencesalon/mss419_Maggie_Jackson_2024_04_02.mp3 Download MP3

In an era of terrifying unpredictability, we race to address complex crises with quick, sure algorithms, bullet points, and tweets. How could we find the clarity and vision so urgently needed today by being unsure? Uncertain is about the triumph of doing just that. A scientific adventure tale set on the front lines of a volatile era, this epiphany of a book by award-winning author Maggie Jackson shows us how to skillfully confront the unexpected and the unknown, and how to harness not-knowing in the service of wisdom, invention, mutual understanding, and resilience.

Long neglected as a topic of study and widely treated as a shameful flaw, uncertainty is revealed to be a crucial gadfly of the mind, jolting us from the routine and the assumed into a space for exploring unseen meaning. Far from luring us into inertia, uncertainty is the mindset most needed in times of flux and a remarkable antidote to the narrow-mindedness of our day. In laboratories, political campaigns, and on the frontiers of artificial intelligence, Jackson meets the pioneers decoding the surprising gifts of being unsure. Each chapter examines a mode of uncertainty-in-action, from creative reverie to the dissent that spurs team success. Step by step, the art and science of uncertainty reveal being unsure as a skill set for incisive thinking and day-to-day flourishing.

Maggie Jackson is an award-winning author and journalist known for her pioneering writings on social trends, particularly technology’s impact on humanity. Winner of the 2020 Dorothy Lee Book Award for excellence in technology criticism, her book Distracted was compared by FastCompany.com to Silent Spring for its prescient critique of technology’s excesses, named a Best Summer Book by the Seattle Post-Intelligencer, and was a prime inspiration for Google’s 2018 global initiative to promote digital well-being. Jackson is also the author of Living with Robots and The State of the American Mind. Her expertise has been featured in The New York Times, Business Week, Vanity Fair, Wired.com, O Magazine, and The Times of London; on MSNBC, NPR’s All Things Considered, Oprah Radio, The Takeaway, and on the Diane Rehm Show and the Brian Lehrer Show; and in multiple TV segments and film documentaries worldwide. Her speaking career includes appearances at Google, Harvard Business School, and the Chautauqua Institute. Jackson lives with her family in New York and Rhode Island.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic
