Critical Thinking Feed

Skeptoid #1029: How to Become a Sovereign Citizen

Skeptoid Feed - Tue, 02/24/2026 - 2:00am

Is there somewhere on Earth where Sovereign Citizens can actually be free of any nation's laws?

Categories: Critical Thinking, Skeptic

How Real Is the Nocebo Effect?

Skeptic.com feed - Mon, 02/23/2026 - 11:32am

A review of This Book May Cause Side Effects: Why Our Minds Are Making Us Sick by Helen Pilcher.

In the early years of Viagra, “the little blue pill” that generated such excitement about its sexual effects on men, I read an account by a woman who decided to try it herself, because isn’t what’s good for the gander good for the goose? (Answer: Not always.) She took that little blue pill and described the exhilarating night of lovemaking that ensued. The best sex she’d ever had! Rapture divine! When she awoke in the morning, she saw that the blue pill she had swallowed was an Aleve (naproxen). At least she didn’t get a headache.

Most people know about the placebo, the inert “sugar pill” given to a control group in a clinical trial while the experimental group gets the active medication. This method allows researchers to rule out the effects of expectations on a new drug’s medical benefits, if any. (Placebo-controlled tests of Viagra for women found that women did slightly better on the placebo, which ended Pfizer’s efforts to double its market.) Expectations can be powerful: the bigger the biologically inactive placebo—a larger pill, a bigger injection—or the more complex the intervention, even a sham surgery, the greater its benefits. Placebos have been used in many settings, most dramatically on the battlefield, where suffering, dying soldiers plead for morphine after supplies have long since run out. Given a saline solution but told it is that powerful painkiller, their pain vanishes.

This Book May Cause Side Effects: Why Our Minds Are Making Us Sick by Helen Pilcher. (Abrams Press, 2026)

Where the placebo goes, can the nocebo be far behind? In This Book May Cause Side Effects, Helen Pilcher, a science writer and TV presenter with a PhD in cell biology, delves into the placebo’s “evil twin”—the myriad ways that our negative expectations affect us. If you had chills, fatigue, or headaches after getting a COVID shot, she writes, they were likely due to your being told those are frequent “side effects.” If you read the list of symptoms that your newly prescribed drug “might” produce, chances are you will experience one or more of them—and possibly decide not to take that drug after all. “If just the thought of eating a certain food makes you feel sick,” she writes, “it’s highly likely that placebo’s evil twin has struck again. Indeed, many of those who believe they have intolerances to certain ingredients, such as lactose or gluten, may well owe their misery to psychological rather than physical processes.” When self-reported “gluten intolerant” people are given gluten-free bread but told that the bread contains gluten, very often they develop gastrointestinal symptoms. “And when some gluten-intolerant people are covertly fed regular bread but told that it’s gluten-free, they don’t get symptoms,” Pilcher writes. “It’s the idea of gluten that they are intolerant to, rather than the protein itself.”

Pilcher makes her case for the nocebo’s malevolent antics in 12 chapters, ranging from deaths by hexes, to “psychogenic” deaths that have no apparent physiological cause, to the downsides of labelling mental and physical illnesses and thereby creating more cases of them. “The nocebo effect can conjure blindness and paralysis, seizures, vomiting and asthma attacks. With no brain injury in sight, it can trigger the symptoms of concussion … With no allergen present, it can induce features of an allergic reaction—watery eyes, runny nose and an itchy rash—that are indistinguishable from the more common, pollen-triggered alternative.”

There is really no scientific reason to distinguish placebos from nocebos, since both terms describe the way that beliefs, expectations, and apprehensions affect our bodies. But the nocebo is hot; “the nocebo effect has been promoted from academic footnote to nerdy hot potato,” Pilcher notes, and she makes the most of that hotness. The nocebo “is far more pervasive and potent than most people had realized,” she writes. “All symptoms, all illness and all disease has [sic] the potential to be negatively impacted by the thoughts that swirl around inside our heads.” All disease? Yes: “Hiding in plain sight, the phenomenon is part of all illness and all disease, where it makes us more unwell than we need to be.” Does she literally mean “all” or do all diseases merely have the “potential” to be impacted?

That fuzziness undermines her reporting. To be sure, giving us details of every one of the many studies she describes could become stultifying; yet, by not providing actual numbers and percentages of people in an experiment who were affected by a nocebo, and by speaking vaguely of “most” people or “some” people who have the “potential” to succumb, she leaves us unable to assess the strength of the findings. For example, she writes that in one study, “people who were falsely ‘diagnosed’ with the ‘bad’ version [of a fictitious gene that allegedly influences their response to exercise] did much worse. They had less endurance and their lung capacity was reduced.” “People”? All of them? One tenth? How many people? 3? 30? Lung capacity “reduced” by how much? How long did that reduction last after they went home? Or, in noting that “some” people die from the stress of bereavement or surviving a plane crash, she adds “that’s certainly not to imply that intense stress is going to kill us all. These deaths are rare. You are far more likely to muddle your way through life’s major stressors than you are to die from them, but sometimes it happens.” The combination of “sometimes” with dramatic anecdotes (Johnny Cash died four months after his wife June) weakens her case that the nocebo affects all illness. Did he die of a broken heart? Or complications from diabetes, respiratory failure, autonomic neuropathy, and pneumonia?

More worrisome is Pilcher’s enthusiastic endorsement of experiments long discredited and unreplicated, such as Robert Rosenthal’s “Pygmalion” study, in which teachers allegedly raised the IQs of the randomly chosen students they had been told would intellectually bloom that year, simply by the power of their expectations. And because Pilcher so enjoyed meeting Ellen Langer, the Harvard psychology professor who became famous for her decades-old “chambermaid” and “counterclockwise” studies, she suspended scepticism, not even doing a quick Google search that would have revealed what was wrong with those studies. In the former, hotel maids were said to have lost weight and lowered their blood pressure simply by being told their activities were “exercise” rather than “work.” But the experimenters relied on the women’s subjective self-reports, so they could not rule out whether the women actually—consciously or subconsciously—increased their activity level or changed their diet. And the 1979 “counterclockwise” study, which supposedly showed that having eight men in their 70s live in a simulated 1959 environment for a week would physically reverse their frailty and other signs of aging, was never published in a peer-reviewed journal or replicated. (It later became a made-for-TV stunt with celebrities.) Langer actually said to the participants, “we have good reason to believe that if you are successful at this, you will feel as you did in 1959.” No bias there.

Although these lapses give one pause, Pilcher provides the details in other studies that rise to a “wow” level. In one, 60 patients who had stopped taking statins because they couldn’t stand the side effects were persuaded to try again. They were given 12 bottles of pills: four containing statins; four containing identical-looking placebo pills; and four empty bottles. The patients used one bottle per month, in a randomly prescribed order, over one year, recording their symptoms daily on their smartphones. The study was double blinded, so neither patients nor doctors knew which tablets the participants were taking (or none). The researchers found that 90 percent of the symptoms that people reported when on statins were also what they experienced when on the placebo. This means that most of the side effects of statins are caused by expectations, not the tablet’s content. 

In her final chapter, Pilcher offers ways of countering, if not overcoming, the nocebo’s influence. Reframe the aftereffects of an injection not as painful “side effects” but as evidence the medication is working; if you need a medication whose label cautions that 20 percent of the people taking it get headaches, focus on the 80 percent who don’t; and if you have been diagnosed with a serious disease, you can ask your doctor for “personalized informed consent”: being told about possibly serious symptoms that would require medical attention, but not about the milder symptoms more likely to be evoked by the nocebo. And if you are one of the thousands of people who think they are allergic to gluten—unlike those with celiac disease, who most definitely are—why not ask a friend or partner to subject you to a nice double-blind experiment? You’ve nothing to lose and possibly a world of delicious bread to gain.

Categories: Critical Thinking, Skeptic

How America Lost Its Sense of Humor

Skeptic.com feed - Sun, 02/22/2026 - 11:46am
On the Biology of Why We Laugh … and a Brief History on Why We Stopped

In today’s America, humor—like nearly everything else—has become serious business, and in ways at once unusual and plain to see. Never before has every half-drunk joke, or every stumble of language, been so on the record. Welcome to the social media century. Never before have young people been more uptight and more afraid than their elders, earning the label “the anxious generation.” Never before has stand-up comedy in Republican Texas felt more cutting edge than in New York City.

The comedian Norm Macdonald called this age a crisis of “clapter”—diagnosing a humorless age where jokes are rewarded with polite applause instead of genuine laughter. It is a mark of social retardation and nervous conformity. A strange fate for one of humanity’s oldest and most complex behaviors. This essay, then, is about the origin of humor, its evolutionary function, and its history in the United States.

The Origin of Humor

Babies do it. It exists in every known culture. We even see it in other species. Since Darwin, scientists have developed three tests for whether a trait evolved by natural selection for adaptive purposes. And by every test, laughter qualifies. That is to say, whatever else humor is, it is, first and foremost, a fact of our evolved biology.

To this day, however, neither scientists nor comedians (nor anyone else, for that matter) have been able to produce what might be recognized as “a complete theory of humor.” What follows instead are the core components of a consilient model—ideas that do not compete so much as combine, each explaining a different dimension, all converging on a single theme.

1. Humor as play. The most fundamental and widely accepted finding in the study of humor is that it evolved as a function of mammalian play behavior—a way to test limits and roughhouse the rules. Dolphins laugh when they butt heads; elk laugh when they wrestle; and all the apes, including human children, laugh when being chased, as in a game of tag. All of these interactions are games that simulate aggressive predator-prey behavior—fighting, stalking, hunting, fleeing—because it is easier to learn the rules of conflict when the danger is make-believe. Laughter, on this account, evolved as a signal to the pretend predator that he is not being perceived as a threat and that playtime can continue.

Laughing out loud is not just a reaction; it is a social tool that helps young mammals learn how to walk the line between aggression and cooperation, between pushing limits and maintaining bonds. It’s a training ground for managing social complexity. And so while we may be the only species that tells jokes, the logic is the same. Louis C.K. explaining that “you should never rape anyone unless you want to cum in them and they won’t let you” or Norm Macdonald reminiscing about “the old days when tweeting meant stabbing a hooker” is what scientists call “verbal play.” Here is how Jerry Seinfeld put it: “Comedy is a very aggressive art form. You put the brain into a vulnerable state [the setup] and then attack and destroy it [the punchline].”

Understanding the role of laughter in distinguishing between aggression and play explains why humor—like no other form of speech—is allowed to not make sense, to cross the line, and to have it not matter. As Louis C.K. often puts it after his punchlines: “I don’t know. I don’t care.”

2. Laughter is a hard-to-fake signal. Birds laugh, dogs laugh, rats laugh, cows laugh. There are—so far as we have counted—over sixty animal species that laugh. But there is only one species that can fake a laugh, and that’s us. It’s what biologists call non-Duchenne laughter (tactical, deliberate, and carefully timed), as opposed to Duchenne laughter (involuntary and honest). A Duchenne smile—named after the anatomist Guillaume Duchenne, who first identified it—is characterized by the simultaneous contraction of the zygomatic major muscle (lifting the mouth corners) and the orbicularis oculi muscle (crinkling the eyes, forming “crow’s feet”), distinguishing it from a forced smile that uses only the mouth muscles.

The Duchenne smile evolved in humans because we are the only species that has language. In a world where deceiving others has obvious survival and reproductive advantages, language enhances our ability to manipulate beliefs and rig behaviors to our benefit, whether by lying about resources, alliances, or why the basement smells like bleach. In other words, it gives us the ability to influence each other, not just through force or direct observation, but through stories, symbols, and imagination. Try convincing a chimpanzee to give you a banana by promising eternal paradise or warning of a mythical curse and see what happens. Tell the right story to a human, however, and they might just give you all of their arable lands.

All this is to say that once we have language we also have bullshit, and so what we really need is a way to tell who’s full of it. Biologists call it an “honest signal,” and for a slick-tongued species of tricksters, the best we’ve got is Duchenne laughter. Less corruptible than speech and harder to counterfeit, it works as a backchannel of communication, revealing the genuine feelings inside, unfiltered by words.

Studies suggest that few people can voluntarily produce crow’s feet around their eyes (the telltale sign of Duchenne laughter) without feeling genuine joy—it is easy to identify, and we respond more positively to it than to the fake stuff.

But a laugh, real or not, means little until you know what provoked it.

3. Comedy is surprise. Arguably the most obvious feature of any joke is that the punchline arrives unexpected and upside down. Across cultures and contexts, the most consistent finding in humor research is that without surprise, there is no laugh.

The human brain, at its most basic, is a prediction-making machine, honed by natural selection for survival in environments where knowing what’s going to happen before it does keeps you one step ahead of the predators. To know where the predator lurks, when the fruit will ripen, how an ally will behave—all in advance of the fact—is arguably a chief advantage of our big-brained species over others. We are, put simply, pattern-seeking junkies—so wired that we are likely to see patterns that don’t exist (patternicity). As such, our awareness is often not of things as they are, but as we expect them to be.

Even our most basic experiences are not records of the present but guesses about what’s to come. Take, for example, drinking water. Our cells do not absorb the intake until about twenty minutes after the fact, but feeling quenched happens almost immediately. It is the brain, anticipating the chemistry that will follow, extending to us in the present the comfort of a future state. Most of life is lived in this way—on credit, in trust—our minds forever writing promissory notes for what the world has not yet delivered. 

But as much a benefit as there is in good predictions, there is a cost to bad ones. Evolution, therefore, had to do more than just adapt us to anticipate. It had to make us eager to correct our mistakes when reality proved us wrong. Laughter, in this view, evolved as a reward signal for fixing a bad prediction—an outburst of joy that marks the moment our model of the universe just got more accurate. One after another, it is a comedy of errors—predictions misfiring, intentions slipping—that keeps the system honest and the mind awake. As Norm Macdonald explains:

At times, the joy that life attacks me with is unbearable and leads to gasping hysterical laughter. I find myself completely out of control and wonder how life could surprise me again and again and again, so completely. How could a man be a cynic? It is a sin.

Yet if laughter were merely a private reward for cognitive course correction, it would be a silent, internal affair. But it isn’t. It is loud, contagious, and social. This is because the same mechanism that helps an individual update their model of the world becomes, in a social species, a powerful tool for establishing shared truths.

4. It’s funny because it’s true. Whether it’s making fun of someone else, making fun of ourselves, or making fun of the situation, we laugh because in some hidden, half-said sort of way, the joke forces us to connect the dots already in our head. It is an unspoken reality suddenly made obvious, but only to the people laughing. Anthropologists call it the encryption model of humor, and it explains humor’s widest social function. 

As it suggests, the whole ludic apparatus works like the German Enigma machine of World War II, in which messages were sent in code to receivers who could crack it. In order to “get” a joke, you must share some background knowledge or belief that allows recognition to snap into place. This means that when people are laughing at the same thing, they are effectively signaling that they all possess the same information and preferences, thereby marking themselves as members of the same ingroup.

“You had to be there.” “If you know, you know.” In this way, all jokes are inside jokes, and research shows that the more encrypted comedy is, the funnier people find it. The writer E.B. White once compared explaining a joke to dissecting a frog—you understand it better but the frog dies in the process. Humor is like a bubble, he observed: 

It won’t stand much blowing up, and it won’t stand much poking. It has a certain fragility, an evasiveness, which one had best respect. Essentially, it is a complete mystery.

And it is this very quality that allows humor to do its dirtiest work—exposing suppressed beliefs, humbling status, challenging groupthink, and revealing unseen truths.

5. We’ve all got a little Jeffrey Dahmer in us—and those of us who deny it rarely laugh at all. Research suggests that people who have a harder time acknowledging difficult truths find less humor in the world. In studies using the self-deception questionnaire, for example, subjects are asked to rate how much they agree (on a scale from “not at all true” to “very true”) with statements such as “More than once it felt good when I heard on the news that someone had been killed” or “I have never done anything that I am ashamed of.” Those who mark more claims as “not true” are scored as higher in self-deception and later observed to laugh less than individuals more able and willing to confess their sins. Other statements on the survey include: “Once in a while I think of things too bad to talk about.” Or: “I have never wanted to rape or be raped by someone.”

The results reflect two competing adaptations in the evolutionary arms race between liars and lie detectors. On the one hand, self-deception works in service of deceit, allowing lies to roll off the tongue with all the same confident fluency as truth. In other words, by believing our own lies we are less likely to show external cues of deception (e.g., sweaty palms, nervous voice changes, or averted eye contact), which makes them harder to detect. Its function is to protect us from admitting beliefs that might expose weakness, lower status, or trigger shame. Ninety-four percent of professors, for example, think they are in the top half of their field.

But if self-deception hides the inconvenient angle, laughter drags it into view by forcing honesty not meant for show. Chris Rock’s joke that “a man is as faithful as his options,” for example, plays on a familiar tension between our grandiose theories about marriage being a sacrament and our deep animalistic understanding that it’s easy to be faithful if nobody else wants to have sex with you.

Where self-deception narrows the field of vision, humor splits it open. The advantage of the man with a sense of humor is that he is able to act more rationally by considering multiple angles and weighing their contradictions. As Samuel Crothers wrote for The Atlantic in 1899: 

The pleasure of humor is of a complex kind. There are some works of art that can be enjoyed by the man of one idea. To enjoy humor one must have at least two ideas. There must be two trains of thought going at full speed in opposite directions, so that there may be a collision. Such an accident does not happen in minds under economical management, that run only one train of thought a day.

It is what the poet John Keats called “negative capability”—the ability to keep in mind two incompatible truths that circle one another without resolution. Shakespeare, he argued, possessed this quality to an extraordinary degree, forcing his audience to hold both the positive and negative aspects of a character for as long as possible, denying them the sort of quick and facile judgment most of us make about most things all the time.

6. Funny is when the world won’t fit our ideas. Incongruity theory is the best-supported scientific explanation for why humans laugh; it explains laughter as the shock of a mismatch between the world we know and the world we thought we knew. In other words, comedians tell jokes that violate our expectations, identifying incongruities that can only be resolved by a shift in perspective. The setup creates an expectation, the punchline violates it, and laughter signals the change in perspective.

Take, for example, the old Onion headline: “School Bully Not So Tough Since Being Molested.” The setup primes us to cheer the bully’s downfall … until out of nowhere, like a trigger yanked too soon, the last word detonates that expectation. Had, for example, the line read “School Bully Not So Tough Since Being Cut From The Team”— it would have ended in simple justice, within the range of predicted ends. Instead, “molested” hurls a monkey-wrench perspective onto the tracks. In a flash, it turns the bully we wanted punished into the victim we want to protect—our original point of view bent, broken, flipped end over end like a compass needle snapped loose from north. Put another way, the joke forces contempt and pity to occupy, for a split second, the same moment of experience.  

Its feeling is awkward, ambiguous, uncomfortable, bewildering; requiring the mind to twist in on itself, tight and ugly, in order to get the joke. As the character Marlo Stanfield says in season four of The Wire, “[We] want it to be one way. But it’s the other way.”

We want the world to be drawn in clean lines, with answers settled and nonsense gone. But experience proves otherwise.

Humor and Democracy in America

In 1789, for the first time, a new generation of men on a whole new continent chose to work with their flaws and make use of the mess. They were a generation of men who laughed at pretension, heckled certainty, and made a sport of nonconformity. This was, in part, because they had an American sense of funny. Only on this side of the Atlantic was humor fully let off the leash, divorced from the polite understanding that jokes ought to leave the order intact. In Europe, mockery operated within a fixed aristocratic structure—a pressure valve in a system not designed to change its fundamental hierarchy. In America, however, ridicule was integrated into a self-correcting democratic project.

Historian Henry Steele Commager called American humor a “comedy of circumstance” that made fun of every man who “at one time or another [had] aimed too high, adventured too boldly [or] boasted too loudly.” It mocked the rich as readily as the poor, and made fun of the smart in the same way as the dumb, because in the United States no man is allowed to stay king. Commager goes on to describe the American sense of humor like this:

It was fundamentally outrageous, and in this reflected the attitude towards authority and precedent. It celebrated the ludicrous and the grotesque with unruffled gravity … It bore the impress of the frontier long after the frontier had passed. It was leisurely and conversational; the tall story was usually a long story and was designed to be heard rather than read. American humor was shrewd, racy, robust, and masculine …  It was generous and good-natured, and malicious only when directed against vanity and pretense. It cultivated understatement not, as with the British, as a sign of sophistication, but as an inverse exaggeration … It was democratic and leveling, took the side of the underdog, ridiculed the great and the proud, and the politician was its natural butt.

And as the democratic experiment hurtled forth, so too did its comedic counterpart, growing louder, meaner, and goofier. From the rambling tall tales of the frontier sprang, one after the other, a hard plain line of distinctly American inventions, including vaudeville, the comic strip, sketch shows, and stand-up comedy. 

But now, as Americans slip back into the Old World habits we once escaped, both democracy and humor are dying of the same disease.

The Unfunny Revolution

In 2008, near the peak of his career, Louis C.K. taped what would become one of the most talked-about specials in comedy history. Dedicating the set to his hero George Carlin, who had died earlier that year, Louis began his special with a joke modeled on one of Carlin’s most famous bits—the “seven dirty words”—which in 2008 became “nigger, cunt, faggot.” Operating under the same premise, both jokes asked what kind of society still has forbidden words. Some found it funny, some found it offensive, some found it stupid, and some didn’t care at all. But in 2009, one of the most obscene jokes in American comedy was nominated for an Emmy by the high and mighty Television Academy.

Fifteen years later, that world is unrecognizable. The culture has shifted so completely that now even Jerry Seinfeld—a comedian whose most offensive material pokes fun at airplane food—refuses to play college campuses, citing excessive political correctness. As Chris Rock, another comedian who no longer performs at universities, put it, “You can’t even be offensive on your way to being inoffensive.”

Cartoon by Oliver Ottitsch for SKEPTIC

The shift is not just in what Americans find funny. It is a fundamental misunderstanding of the nature and function of humor. In a culture that now treats laughter as a moral act, it has been bent out of shape by all sides, its purpose twisted into a dog-and-pony proof of allegiance. On the right, the rules are clear enough—mock the leader, mock the faith, and you’re done. The threat is old-school dictatorship. On the left, nobody’s in charge, but everyone’s policing everyone else. The result is a social bureaucracy so sprawling and self-contradictory that no one, least of all the people enforcing it, can tell you where it starts, what it’s for, or whether anyone is still keeping score. Can a man tell a rape joke? Can a woman? Do gay, Black, or fat comedians (or any others belonging to oppressed or marginalized groups) have the exclusive right to make fun of their own group?

But beneath all the shouting lies something simpler: a handful of inconvenient facts that neither orthodoxy can accept.

1. Comedy has no responsibility. Jokes aren’t Hallmark cards. There’s no lesson. No moral mission. Funny has nothing to do with right or wrong, good or bad. If people laugh—the joke works. If they don’t, it doesn’t. It’s that simple. As Seinfeld put it, “The audience is the only judge. If they laugh, it’s funny.” 

And whether they laugh for the right reasons, the wrong reasons, or no reason at all, it doesn’t matter. It’s all the same currency. Because again, no committee, no critic, no theoretical or ethical standard, not even comedians themselves, can determine what is funny. Only laughter can. 

It is for this reason that comedian Ricky Gervais argues you should never apologize for laughing—because it is an involuntary reflex, born of recognitions we can’t fully name; maddeningly hard to locate, explain, or repeat. Whatever insights it yields, however real, are accidents, not assignments. A joke may be philosophical, but it must not philosophize. It may be moral, but it must not moralize, because life is serious and comedy is not.

2. There is no such thing as punching down. It is a conceit that rests on the fantasy that people exist within a clear hierarchy of oppression and that comedians should consult a moral spreadsheet before telling a joke. Humans, however, are messy, and power is multidimensional. If the joke lands, it’s good, and not because it “punched up,” but because it’s funny. As comedian Rowan Atkinson put it:

You’ve always got to kick up? Really? What if there’s someone extremely smug, arrogant, aggressive, self-satisfied, who happens to be below in society? … There are lots of extremely smug and self-satisfied people in what would be deemed lower down in society, who also deserve to be pulled up.

Humor, rather than reinforcing hierarchies, scrambles them, making a carnival of power, where prince and pauper swap faces and butts. People can be both victims and perpetrators at the same time. If a rich guy mocks a poor guy for being poor, he’s an asshole; if a poor guy does it, he’s an asshole too.

The impulse to sanitize humor in the name of safety is a well-intentioned but misguided coddling that infantilizes the very people it claims to protect. To be teased is to be an equal; to be seen as resilient enough to take a joke and confident enough to play along. Because good humor, by refusing to grant anyone a permanent victim’s pass, reminds us that our shared humanity, not our segregated identities, is the ultimate leveler.

3. The subject is not always the target. I heard a joke at an open mic the other day about a newspaper headline that read “World’s Worst Pedophile.” The story was about a man who had molested hundreds of children. After reading the headline, the comedian asked, “Shouldn’t he be the world’s best pedophile? I mean …  the world’s worst pedophile—he’s been trying for years. He can’t afford the good candy, so he hands out stale trail mix. His van won’t start …” If you think the joke is making fun of molesting children or that it’s about finding pedophilia funny, you’re an idiot. It’s making fun of reporters and sloppy language.

But even if the joke actually was about pedophilia—as in Louis C.K.’s Saturday Night Live monologue, where he compares the joy of eating his favorite candy bar to what sex with children must be like for a child molester—treating a topic playfully doesn’t erase its gravity; it just recognizes that serious issues need not always be handled seriously.

Forcing comedy to seek 100 percent approval is like demanding a surgeon operate with a butter knife—you remove the danger, but you also remove the point. 

4. Failure is the process. Even the best comics bomb; but in a decontextualized culture incentivized to screenshot rather than understand, we’ve made a habit of demanding perfection on the first try. The trouble is that, while great jokes look effortless, they’re the end result of a process that’s anything but. As David Chase said about the hundred-hour weeks he spent making The Sopranos—“hard work looks like magic.” Seinfeld once said he spent 20 minutes fine-tuning a single syllable. Chris Rock worked on three of his jokes in a recent Netflix special for over a decade. Being funny is hard—and comics need the space to fail. If you’ve ever watched open mics and seen the same comedians go up week after week to tinker with their bits, you know that the difference between killing and bombing often hinges on a single well-timed pause. Perhaps comedian Ari Shaffir summed it up best:

Failing is part of my process … A new bit never works the first time. I figure I have to bomb seven times to make it good. So I tweak it. Then maybe the next time it will do great … but then it will fall flat again. So I’ll make more adjustments. Then it will be great, then it will be terrible again … and all of that is okay.

This is why people who understand the function of humor tend to be more forgiving when things go wrong; and comedians are the most likely to forgive a failed joke. Dave Chappelle, for instance, responded to Michael Richards (Kramer on Seinfeld) calling a heckler a “nigger” at the Laugh Factory—an incident widely perceived as genuinely racist—by saying that he learned that he was 20 percent Black and 80 percent comedian:

The Black part of me was offended and hurt, but the comedian part was like, “Whoo, dude is having a bad set. Hang in there, Kramer!”

The bottom line is this—good jokes can’t emerge without experimentation. If it kills—great. If it doesn’t—even better: it means you’re part of a free society.

5. Risk is the form. Most humor involves taking risks. Larry David, for example, compared stand-up comedy to diving. You get extra points for degree of difficulty. Seinfeld said that jokes are like leaping from one tall building to another—the further the distance, the harder the joke. There is a big payoff if you can bring the audience with you, but if you try to jump too far or the dive is too difficult and you aren’t yet good enough, the joke bombs. This is why the worst thing you can do as a comedian is play it safe. As Patrice O’Neal put it: “The idea of comedy, really, is not [that] everybody should be laughing. It should be about 50 people laughing and 50 people horrified.”

The Last Laugh

Humor is not meant to be figured out, put to use, or taken seriously. It is meant to be experienced. But in a botox-bleached nation of caped crusaders wearing noise-cancelling headphones, deaf to anything but our own theme music and the imagined sound of unseen eggshells cracking beneath, Americans are being starved of the freedom to play without purpose.

Like an overzealous gardener who, in his war against the dandelion, has paved his entire yard with concrete, we are succeeding in eradicating the weed of offense but in the process killing the soil where flowers take root.

All of us, each so consumed in our own tiny corner of the universe, must be reminded every now and again that the world is what it is, and our ideas about it are not. It’s a ticklish business.

Categories: Critical Thinking, Skeptic

Ufology: From Fringe to Mainstream to Fringe?

Skeptic.com feed - Fri, 02/20/2026 - 6:10am

A little over eight years ago The New York Times published a story that had profound implications for the way in which the UFO topic was perceived.1 It also began, at least in the U.S., a process by which the subject became increasingly more mainstream. In this article I want to address three questions: (1) How did ufology get here? (2) Where does ufology stand now? (3) What does the future hold for ufology?

How did ufology get here? 

On December 16, 2017, The New York Times broke two related stories. The first was the existence of forward-looking infrared videos of UAP (the U.S. government uses the term UAP—Unidentified Anomalous Phenomena—as opposed to UFO) taken from U.S. Navy jets and confirmed by the Department of Defense as being authentic footage.2

The second part of the story was the existence of a shadowy intelligence program known as the Advanced Aerospace Threat Identification Program (AATIP), that supposedly researched and investigated UAP. This was newsworthy in and of itself, because for years the official position of the U.S. government was that there was no longer any interest in UAP, and that no programs had existed to study the phenomenon since the end of the 1960s, when a long-running U.S. Air Force program known as Project Blue Book was terminated. Many people in the UFO community believed this was a lie and that covert programs existed, so it seemed like a clear-cut example of a conspiracy theory that turned out to be true.

The truth was rather more complex, and there’s still no universally accepted narrative here. Some skeptics say AATIP was more of an unofficial effort undertaken by a group of believers in the Intelligence Community. Whatever its true nature, AATIP was clearly a spin-off of an earlier Defense Intelligence Agency (DIA) program called the Advanced Aerospace Weapon System Applications Program (AAWSAP). AAWSAP was demonstrably a genuine program, and some official documents use the terms AAWSAP and AATIP interchangeably.3 In January 2020, Pentagon public affairs spokesperson Susan Gough issued a statement attempting to clear up the confusion. It stated: 

The Advanced Aerospace Threat Identification Program (AATIP) was the name of the overall program. The Advanced Aerospace Weapons Systems Application Program (AAWSAP) was the name of the contract that DIA awarded for the production of all technical reports under AATIP. 

I sought further clarification, and on January 13, 2020, Susan Gough followed this up with a statement that: 

DIA managed the Advanced Aerospace Threat Identification Program. All of the work performed under AATIP was done via a single contract vehicle called AAWSAP. The total work effort for AATIP consisted of the 38 technical reports produced under the contract vehicle. DIA was the sole lead for management of AATIP via AAWSAP. Congress was briefed on the total work conducted for AATIP—the aforementioned 38 technical reports. 

The authors of these 38 reports include Hal Puthoff, Eric Davis, and Kit Green—names well-known to those who follow government dabbling in fringe science and the paranormal. 

My personal assessment is that all the euphemistic “advanced aerospace” references were a way of disguising a UFO or paranormal research program as being a program looking at next-generation foreign aerospace weapon threats, to try to protect it from skeptical Pentagon financiers and Congressional oversight folks who would have been horrified to learn that taxpayers’ money was being spent on such matters. This attempt was ultimately unsuccessful, because while $10M was appropriated in FY2008 and a further $12M in FY2010, funding ended in FY2012, after an earlier official review concluded that “the reports were of limited value to DIA.” 

The roots of AAWSAP trace back to Intelligence Community personnel Jay Stratton and James Lacatski, as well as to Skinwalker Ranch in Utah, often portrayed as a hotbed of UFO sightings and paranormal phenomena. Following the DIA’s 2008 issue of a contractual solicitation (carefully worded to focus on breakthrough technologies that might underpin future aerospace weapon systems, while avoiding mention of UFOs or the paranormal), the contract was awarded to Bigelow Aerospace Advanced Space Studies (BAASS).4 Billionaire space entrepreneur Robert Bigelow was, at the time, the owner of Skinwalker Ranch. 

Robert Bigelow had a longstanding interest in UFOs and the paranormal, and had previously funded the National Institute for Discovery Science (NIDS).5 The Chairman of the Board was the aforementioned Hal Puthoff, a parapsychologist who’d previously managed (with Russell Targ) a program at the Stanford Research Institute (not affiliated with Stanford University) to investigate paranormal phenomena. This work likely led to the U.S. government’s dabbling in such areas as remote viewing through Project Stargate, run by the DIA and CIA during the Cold War. 

NIDS looked at a range of fringe science topics, and some have argued that AAWSAP was essentially a way to secure government funding for a continuation of the sort of work that had been done by NIDS. Senator Harry Reid (who knew Robert Bigelow) was instrumental in securing official status and funding for AAWSAP. 

The New York Times story was quickly picked up by other mainstream media outlets around the world, and this caught the attention of numerous Congressional representatives and staffers. A key reason for this interest was the fact that aside from Harry Reid and two Senatorial colleagues, there seemed to have been no Congressional knowledge of AAWSAP or AATIP, and certainly no oversight. 

In terms of UFOs, folks in Congress likely aren’t that different from society as a whole, in that there’s a wide range of opinions across the spectrum from skeptic to believer. Furthermore, irrespective of beliefs, it’s hardly surprising that an unknown but clearly significant number of people in Congress saw The New York Times article and thought to themselves something like, “Wait, the government has a UFO program, but didn’t tell us? It was run by Intelligence Community personnel and there’s no Congressional oversight? What are they doing and what have they found out?” 

What followed was multifaceted Congressional interest in and engagement on the topic of UAP, to the extent that a critical mass built up. I believe a key factor here was that this engagement was bipartisan, covered both the Senate and the House, and involved several committees, mainly the Armed Services committees, the Intelligence committees, and the Oversight committees. This Congressional engagement led to classified briefings and public hearings. Witnesses at the public hearings included whistleblowers like Luis Elizondo (a retired counter-intelligence operative prominently featured in The New York Times article and described therein as being the individual who had run AATIP) and David Grusch, a former Intelligence Community member who had been attached to the UAP Task Force under the directorship of Jay Stratton. 

Perhaps the most important part of Congressional UAP engagement was the insertion of multiple UAP-related provisions into several of the recent, annual National Defense Authorization Acts (NDAA). In part to meet these legislative remits, the DOD set up an office (the aforementioned UAP Task Force) to handle the response and to lead on the topic across government. This task force published a number of official reports and was eventually replaced by the All-domain Anomaly Resolution Office (AARO). AARO’s website hosts a wealth of reports, briefings, and other UAP-related materials, sourced both from the DOD and Congress, that perfectly illustrate both the breadth and depth of Congressional engagement and the government response to this Congressional interest.6

As an interesting side note, one of the directors of the UAP Task Force was the aforementioned Jay Stratton, who had previously been involved in AAWSAP and who had an anomalous experience at Skinwalker Ranch. Stratton’s upcoming memoir, apparently to be published in 2026 by HarperCollins, may shed some light on unresolved questions concerning the evolutionary process from NIDS to BAASS to AAWSAP to AATIP, as well as other not-yet-resolved questions. 

It’s certainly interesting to note the connections between the various individuals involved and to see how the same names pop up repeatedly. This gives some potential insights into who the key players are and what the overall agenda is. The New York Times story, for example, had a long gestation period. The story was shopped around for some months prior to publication, not only to The New York Times, but also to The Washington Times and Politico, both of which were thus able to run fairly detailed stories very shortly after The New York Times got the scoop. 

Further insights can be gained by looking at the three names that appeared on the byline for The New York Times story: Helene Cooper, Ralph Blumenthal, and Leslie Kean. 

Helene Cooper was a Pentagon correspondent with The New York Times, with no previous UAP interest; The New York Times veteran reporter Ralph Blumenthal’s interest predated the December 2017 article and began with his research into Harvard Professor of Psychiatry John Mack, who had conducted research into the alien abduction mystery. This led to the 2021 publication of Blumenthal’s book on Mack, The Believer. Leslie Kean comes from a wealthy political family and had a prior interest in UAP and alien abductions, illustrated by her previous writings and by the fact that she lived for some years with abduction researcher Budd Hopkins, who first introduced John Mack to the topic. 

It was Leslie Kean who was instrumental in bringing the story to The New York Times. Luis Elizondo had resigned from government service in the fall of 2017, but very shortly before leaving had passed the three best-known U.S. Navy UAP videos to Christopher Mellon, a former Deputy Assistant Secretary of Defense for Intelligence. Elizondo believed he had obtained official security clearance for their release, though it seems there was a misunderstanding and that the clearance was not intended to authorize public release. To illustrate this, an April 27, 2020, statement from the DOD referred to “unauthorized releases” of the videos in 2007 and 2017.7 In 2007, one of the videos leaked online on the Above Top Secret discussion forum, while 2017 referred to the process that led to The New York Times running the story. 

Mellon and Elizondo then joined an organization called the To The Stars Academy of Arts and Science (TTSA), ostensibly headed by Blink-182 musician Tom DeLonge. TTSA was a sort of collaborative hub for a number of individuals, many with backgrounds in government UAP and fringe science research, including Hal Puthoff and retired CIA officer Jim Semivan. 

It was Christopher Mellon who facilitated a meeting between Kean, Elizondo, and others, which then gave Kean enough to take the story to The New York Times, via Ralph Blumenthal, setting in motion a series of events that was to forever change the field of ufology.8

Where does ufology stand now? 

This is how ufology in the U.S. went from fringe to mainstream, though it’s a simplified version, and not all the twists and turns of the story are universally agreed upon. If I had to summarize what I think happened and why, my best assessment would be as follows: A loose coalition of believers in UAP and the paranormal, often with backgrounds in government, military, and the Intelligence Community, sought and obtained official funding for their work. When that funding was terminated, they continued the work in a quasi-official capacity. Finally, when they felt they’d taken matters as far as they could without official funding, they decided to go public, successfully gambling that the resultant firestorm would generate other ways to take things forward. The goals may have included funding (TTSA certainly raised some money through a share issue) and Congressional engagement. The latter has clearly been a big success. 

However, eight years into this process, there’s still no smoking gun and we appear to have hit some speed bumps, with several new and parallel events putting things in a rather different light. 

Further ex-government whistleblowers have come forward. This sounds like a good thing, and in one sense, it is, but the unintended consequence has been that this has added to the information overload and created a landscape so complicated that even veteran commentators like myself, who follow the situation very closely, find it difficult to keep up. Furthermore, not all whistleblowers are equal. While one can be reasonably confident that those who have testified to Congress are who they say they are (staffers vet such people fairly thoroughly, not least by quizzing their former employers), others haven’t had their backgrounds investigated in such depth. 

It should also be remembered that even when someone’s government background checks out, their specific role is often harder to pin down and their information can be all but impossible to verify. That’s partly because many of these folks have a background in the military and the Intelligence Community, where issues of classification often arise and where deception was literally in some of these people’s job descriptions. It’s also because much of the information is secondhand, though those concerned don’t always make it clear that this is something somebody else told them. Every intelligence analyst on the face of the planet knows the importance of differentiating between what they know and what they think, yet these very people often seem to be blurring the line. No wonder one occasionally hears some civilian UFO researchers complain that the whole thing is a PSYOP.

This already murky situation has been further complicated by factional infighting. There’s clearly a struggle for narrative control within the field. Even among the various whistleblowers and other key players, who are ostensibly polite with each other, there are clearly some tensions. By way of a personal anecdote, I’ve had more than one TV producer tell me how Individual A told them he’d appear on a show, provided Individual B wasn’t featured (the requests backfired because producers don’t usually play that game). I’m similarly aware that some of the key players who are ostensibly being polite to me are briefing against me, perhaps seeing my mainstream media platform as a potential threat, especially given that I’m independent in all this and don’t take anybody’s side. Because it so perfectly describes the situation, I can’t resist quoting a lyric from the O’Jays song Back Stabbers: “They smile in your face. All the time they want to take your place.” 

There’s nothing new about infighting in the UFO community. What is new, however, is that folks with a background in military intelligence know a few dirty tricks that their civilian counterparts don’t. Plus, social media has acted as a force multiplier, with 𝕏 in particular having turned into a veritable battlefield between some of the key players, often using proxies and sock puppet accounts. Cliques, harassment, and doxxing seem to be the order of the day. Neither should we sweep under the carpet the uncomfortable truth that some of the people who’ve recently jumped aboard the ufology train clearly have psychological issues, while others sense a money-making opportunity. 

To pick one example of all this infighting, the December 2025 appearance of Jay Anderson on Joe Rogan’s podcast seems to have set off a particularly nasty squabble.9 Jay criticized Luis Elizondo (among others), accusing him of orchestrating an aggressive campaign to control the narrative, as well as making reference to what he’s sometimes called a “UFO Hate Group.”10 In response, a group of Elizondo supporters, sometimes dubbed “the Lue Crew,” hit back against Jay Anderson.11

A related development is that a new generation of influential podcast hosts and YouTube channel owners saw the topic become increasingly mainstream and entered the fray. While many are honest brokers, their podcasts and channels are often the arena in which the struggle for narrative control plays out. Again, despite being a veteran commentator who follows all this closely, I struggle to work out who’s supporting which faction, how many factions there are, and the true nature of their respective agendas. 

Cartoon by Oliver Ottitsch for SKEPTIC

What is the result of all this information overload, confusion, and infighting? Speaking personally, I’m fatigued. Moreover, I see from social media that other people are fatigued too. I’m a free speech absolutist, so I’m certainly not advocating any controls on this. I completely reject the idea (which has been floated several times over the years) that ufology should set up some sort of governing body, or somehow police itself. After all, who gets to decide who’s on the governing body, and quis custodiet ipsos custodes?

There are other developments that give me cause for concern. One of them relates to a couple of narrative shifts that I’ve noticed creeping into the topic. 

Ufology has always been a big tent. In whistleblower David Grusch’s testimony to Congress, and in some of his media interviews, he used the terms “nonhuman” and “non-human intelligence.”12 In the Schumer-Rounds Amendment (a legislative proposal intended for insertion into FY2024 NDAA, but which did not find its way into the final bill), the term “non-human intelligence” was used multiple times.13 Grusch has said that this leaves the door open for other possibilities aside from the extraterrestrial hypothesis. And this has opened the door to some highly speculative discussions about cryptoterrestrials, ultraterrestrials, extratempestrials, and interdimensionals. It’s also led to something a little more on the dark side, with a theological bent. 

The idea that aliens are fallen angels, or demons, isn’t new. But this once-niche theory has gotten a little more traction lately. Luis Elizondo has previously told the story of how, when he lobbied a senior Pentagon official to take more action over UAP, the official told him he should read his Bible. This appeared to reflect a belief that some aspects of UAP are demonic and that to study it would be to give it energy and feed it. 

Such opinions have gained more mainstream traction with Representative Marjorie Taylor Greene expressing the view that aliens could be fallen angels,14 while high-profile broadcaster Tucker Carlson has also talked about UAP in terms of spiritual forces and entities like angels and demons.15 All of this plays into a neoreligious interpretation of ufology. Chris Bledsoe—author of UFO of God—talks about how an entity he dubs “The Lady” told him how glowing orbs would intervene to stop the missiles if Israel and Iran go to war. There’s an “end times” theme to a lot of this.16

Again, as a free speech absolutist, I wouldn’t dream of telling people what they can and can’t say about UAP, let alone what they should believe. Again, I’m merely commenting on the current state of play and expressing a personal opinion that I think some of the current narrative isn’t necessarily healthy or helpful. And I certainly doubt that it holds any validity. 

Another narrative shift is the use of the term “psionics”—the idea that one can use the power of one’s mind to summon UAP. It’s a scientific-sounding term, but is it really that different from Steven Greer’s CE5 (Close Encounter of the Fifth Kind) protocols, whereby one can supposedly use meditation and other techniques to initiate contact with extraterrestrials? The danger, of course, is that certain individuals can then insert themselves as intermediaries; you can access the phenomenon, but only through them, because of their special abilities. Again, there’s a sort of quasi-religious, cultish feel to all this, in which one can only access the divine through the intermediary of the priest. 

What does the future hold for ufology? 

Given my assessment that ufology has to some extent moved from fringe to mainstream, but has hit some speed bumps, where do we go from here? I don’t have a crystal ball, but based on statements from a range of people involved in the process, it seems that further Congressional hearings and more whistleblowers would be a fairly good bet. The problem, of course, is that, short of a “smoking gun” (actual evidence and not just more stories), this runs the risk of reinforcing the view that it’s all talk and no action. Where’s the beef? 

The Task Force on the Declassification of Federal Secrets is looking at UAP. There’s considerable overlap between personnel involved with the Task Force and personnel serving on the House Oversight Committee, which has been particularly vociferous on UAP. This brings up a potential problem, because while the Task Force is bipartisan, it skews toward Republicans. Thus, it wouldn’t take much to jeopardize the bipartisan nature of Congressional engagement, which would be a setback. 

The UFO community continues to hope for Disclosure—the official acknowledgement of an extraterrestrial presence. The Age of Disclosure, a documentary produced by Dan Farah and released late in 2025, plays into this.17 So does Steven Spielberg’s upcoming film Disclosure Day.18 But it goes further than this, and 2027 is a potential date that’s been frequently mentioned.

Disclosure in 2027 would mean that Donald Trump would be the Disclosure President. There’s a curious kind of logic in this, because if there truly is a decades-long official cover-up of an extraterrestrial presence, the secret has been scrupulously kept by successive administrations of both political parties. By inference, therefore, the reasons for secrecy must be exceptionally compelling. Perhaps only a populist, maverick, second-term President would disclose in such circumstances—more so, given that Trump will soon be in his 80s and is doubtless mindful of his legacy. I agree that if the U.S. government is aware of an extraterrestrial presence, Trump is more likely than any previous president to spill the beans. President Trump has occasionally hinted that he’s privy to some interesting information about UFOs, but has yet to elaborate on the topic.19

Some argue that the secret of an extraterrestrial presence is kept even from presidents (perhaps to maintain plausible deniability) and is in the hands of an unelected set of gatekeepers, perhaps in the government, but possibly in the private sector. I find this unconvincing. Most Western governments operate on the basis of what the UK civil service calls the culture of “no surprises,” by which political leaders need to be briefed on all big, impactful issues that might require quick decisions and action. 

If Donald Trump’s presidency ends without Disclosure, I’ll be 99.9 percent convinced that there’s nothing to disclose. I’d have to accept that if extraterrestrials are visiting Earth, nobody in the government is aware of it. The acceptance of such a state of affairs might actually be rather good for ufology. After all, while some conspiracies are real, most conspiracy theories are false, and encourage a negative, accusatory approach. Removing—or at least reducing—this mindset from ufology might lead to a healthier, less aggressive approach. It would also remove a lot of redundant effort, which could be better used elsewhere, such as in encouraging more scientists and academics to engage on the topic. 

As I see it, ufology stands at an interesting crossroads. While some of the details remain disputed, the topic has undoubtedly transitioned from fringe to mainstream in the last few years. However, a mixture of information overload, infighting, and quasi-religious narratives may conspire to undo this progress. Allied to this, mainstream media interest in most topics waxes and wanes. The UFO community can’t expect their current fascination with the subject to last indefinitely. This is particularly true if Congressional engagement falls away, as it may well do if the perception is that the subject is becoming more partisan and more fringe, with the attendant dangers of reputational damage attaching to those Representatives who continue to express an interest. 

Ufology has come out of the fringe and into the mainstream, but I believe there’s a distinct possibility that it will move out of the mainstream and back into the fringe.

Categories: Critical Thinking, Skeptic

Dice, Mr. Smith, and Monty: The Case for Clarity in Probability Puzzles

Skeptic.com feed - Thu, 02/19/2026 - 12:35pm

Imagine you are at a puzzle night with friends. Someone poses this question: “You roll two dice. At least one shows a six. What’s the probability both show a six?”

The table splits: Half the people argue that the dice are independent, so the answer must be one in six. The other half insists it’s one in 11. They may refer to the image below: There are 11 equally likely ways that a roll of two dice can show at least one six (the bottom row and rightmost column of the 6×6 grid of outcomes), and in only one of those rolls are both dice sixes.

So who’s right? Both—and neither. The correct answer is: “We can’t answer this without more information.” Depending on how you came to the information that there was at least one six, the answer can be one in six or one in 11.

Many so-called probability “paradoxes” arise from vague framing. In practice, data are generated by processes. Those processes define the pool of possibilities—and thus the probabilities. When the information-generating process isn’t specified, reasonable people can come up with different answers because they’re answering different questions.

Double-Six Puzzle

Let’s return to the opening question. To reveal the ambiguity, I’ll frame it in two ways:

Puzzle 1: You roll two dice. One of them falls under the table and you can’t see it. The other one lands on top of the table, and it’s a six. What is the probability that both dice landed on a six?

Puzzle 2: You are rolling two dice blindfolded. A machine is programmed to ding if and only if at least one of them lands on a six. You keep rolling until the machine dings. What is the probability that both dice landed on a six?

Solutions

Puzzle 1: The probability that both dice landed on a six is one in six. In this scenario, you’ve learned a fact about a particular die: the one you can see is a six. The first die doesn’t affect the second, so the six possible outcomes of the second die are equally likely.

Puzzle 2: The probability that both dice landed on a six is one in 11. In this scenario, you don’t have information about a particular die; the ding of the machine only tells you a property of the pair. This roll passed a filter that admits only outcomes with at least one six. Among the 36 ordered outcomes of two dice, 11 contain at least one six. Only one of those 11 outcomes is the double six.

Both of these puzzles answer the question “You rolled two dice. At least one shows a six. What’s the chance both show a six?” However, since they have different information-generating processes, they have different solutions.
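The two information-generating processes can be checked with a short Monte Carlo simulation. This is a minimal Python sketch; the function names and trial counts are illustrative, not from the article:

```python
import random

def puzzle1(trials=200_000):
    """Puzzle 1: we learn that one specific (visible) die shows a six."""
    both = seen = 0
    for _ in range(trials):
        visible = random.randint(1, 6)
        hidden = random.randint(1, 6)
        if visible == 6:                # condition on a particular die
            seen += 1
            both += (hidden == 6)
    return both / seen

def puzzle2(trials=200_000):
    """Puzzle 2: the machine dings iff the pair contains at least one six."""
    both = dinged = 0
    for _ in range(trials):
        a, b = random.randint(1, 6), random.randint(1, 6)
        if a == 6 or b == 6:            # condition on a property of the pair
            dinged += 1
            both += (a == 6 and b == 6)
    return both / dinged

print(puzzle1())  # ≈ 1/6 ≈ 0.167
print(puzzle2())  # ≈ 1/11 ≈ 0.091
```

The only difference between the two functions is the `if` test: conditioning on a particular die versus filtering on the pair. That one line is the "information-generating process," and it alone moves the answer from one in six to one in 11.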

Boy or Girl Paradox

In Martin Gardner’s famous “Boy or Girl Paradox,” sometimes called the “Two-Child Problem,” he poses this question: “Mr. Smith has two children. At least one is a boy. What is the probability that both children are boys?”

If this puzzle sounds familiar, that is because it is like the dice puzzle. Even if we adopt the assumption that births are like coin flips (i.e., independent, equally likely boy or girl, no multiple births), the puzzle is unanswerable. Gardner initially proclaimed the answer was “one in three,” but later admitted that the puzzle was ambiguous. The problem is that it does not tell us how we learned that at least one child is a boy.

Imagine you randomly meet a man named Mr. Smith at the park. He’s with one child, and that child is a boy. He mentions he has another child at home. What is the probability the child at home is a boy?

Seeing the boy in the park tells us nothing about the child at home. The possibilities are simply: “boy at the park, girl at home” or “boy at the park, boy at home.” Those two possibilities are equally likely, so the answer is one in two. The “child at the park” is like the die you can see and the “child at home” is like the die under the table.


Now imagine you have a list of all men named Mr. Smith in your city who have two children and at least one boy. You pick a man at random from that list. What are the chances both his children are boys? The four ordered possibilities in all two-child families are GG, GB, BG, and BB. However, your list excludes GG. That leaves GB, BG, and BB: three equally likely families, only one of which is BB. So the probability is one in three. This “filtered list” setup is like the machine-ding scenario: your knowledge is based on a property of the pair, not a particular child.

In both stories, it is true that Mr. Smith has two children and at least one is a boy. The answers differ because we came to know that fact in different ways.
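
Both stories can be simulated in the same way, under the coin-flip assumption about births (the function names are mine):

```python
import random

def park_meeting(trials=100_000, seed=1):
    """Meet Mr. Smith with one randomly chosen child, who turns out to be a boy."""
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(trials):
        kids = [rng.choice("BG"), rng.choice("BG")]
        at_park = rng.randrange(2)        # which child happened to come to the park
        if kids[at_park] == "B":          # the child you see is a boy
            total += 1
            hits += kids[1 - at_park] == "B"
    return hits / total

def filtered_list(trials=100_000, seed=1):
    """Sample from a list of two-child families that contain at least one boy."""
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(trials):
        kids = [rng.choice("BG"), rng.choice("BG")]
        if "B" in kids:                   # GG families never make the list
            total += 1
            hits += kids == ["B", "B"]
    return hits / total

print(round(park_meeting(), 3))   # ≈ 1/2
print(round(filtered_list(), 3))  # ≈ 1/3
```

The same fact is true in both runs, but it is learned through different processes, and the estimates come out differently.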

The Monty Hall Problem

The best-known version of the Monty Hall problem was posed by Craig F. Whitaker to Marilyn vos Savant in a 1990 issue of Parade magazine, one of the most widely read publications in the country at the time:

Suppose you’re on a game show, and you’re given the choice of three doors: behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

Marilyn answered, “Yes; you should switch. The first door has a 1/3 chance of winning, but the second door has a 2/3 chance.”

The magazine received over 10,000 letters, including many from highly educated readers, insisting that this answer was wrong. Don Edwards, from Sunriver, Oregon, suggested: “Maybe women look at math problems differently from men.” A Georgia State University professor, one W. Robert Smith, PhD, advised: “I am sure you will receive many letters on this topic from high school and college students. Perhaps you should keep a few addresses for help with future columns.” Another PhD correspondent, a University of Florida professor named Scott Smith, exclaimed:

You blew it, and you blew it big! Since you seem to have difficulty grasping the basic principle at work here, I’ll explain. After the host reveals a goat, you now have a one-in-two chance of being correct. Whether you change your selection or not, the odds are the same. There is enough mathematical illiteracy in this country, and we don’t need the world’s highest IQ propagating more. Shame!

Today, it’s widely accepted that Marilyn was right. People have even built computer simulations that reproduce the result. The story is often cited as a reminder that probability can be counterintuitive, and as a lesson that confidence and credentials don’t make us immune to mistakes. Those are valuable lessons—but I think much of the pushback came from a simpler reason: the problem phrasing was too vague.

For Marilyn’s solution to be correct, the game must guarantee from the start that the host will open a door showing a goat. This needs to be explicitly stated as a rule of the game. Many readers assume that the host knowing what’s behind the doors implies that he is guaranteed to open a goat door. But even if the host knows, that alone does not tell us what he is obliged to do. Marilyn’s answer relies on the following assumptions:

1. The host always opens another door (never yours) after your initial choice.
2. He never opens the door with the car.

Simulations built to show why Marilyn’s answer is correct have these assumptions built into their programming. But if those conditions aren’t guaranteed, the probabilities change.

If we make the above two assumptions, then Marilyn’s advice is correct: you should switch. This can be explained simply:

• If your initial choice was a goat door, switching will certainly give you the car. Since there is a 2 in 3 chance your initial choice was a goat, there is a 2 in 3 chance you will win if you switch.

• If your initial choice was the car door, you will certainly lose if you switch. Since there is only a 1 in 3 chance your initial choice was the car, there is a 1 in 3 chance you will lose if you switch.
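
Under those two assumptions, a simulation of the kind readers have built bears this out; here is a minimal Python sketch (the function name is mine):

```python
import random

def monty_hall(switch, trials=100_000, seed=2):
    """Standard rules: the host always opens a goat door other than your pick."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car, pick = rng.randrange(3), rng.randrange(3)
        # the host opens a door that is neither your pick nor the car
        opened = rng.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / trials

print(round(monty_hall(switch=True), 3))   # ≈ 2/3
print(round(monty_hall(switch=False), 3))  # ≈ 1/3
```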

This reasoning treats the host’s action as guaranteed and therefore uninformative about your original choice. If the host’s behavior is left unspecified, the fact that he opened a goat door can give you different information. Here are two variations of the puzzle that help to demonstrate that.

Optional Opening Variation

On a game show, there are three doors. Behind one is a car; behind the others, goats. After contestants choose a door, the host sometimes opens another door to show a goat (he never reveals the car). If he does open a door, he then offers the contestant the chance to switch to the other closed door. It’s your turn: You pick a door, and the host opens a goat door. He offers you the chance to switch to the other unopened door. What should you do?

You can’t answer this until you know the host’s policy—how often he opens a door given that a contestant’s initial pick is the car versus a goat. If the host is much more likely to open a door for contestants who initially picked the car, then seeing him open a door increases the likelihood that the car is behind your chosen door. Different policies lead to different conditional probabilities, so the question is unanswerable without more information. 

Random Door Variation of the Monty Hall Problem

On a game show, there are three closed doors. Behind one is a car; behind the others, goats. After contestants pick a door, the host randomly opens another door. If he reveals the car, it’s game over. It’s your turn to play! You choose a door. The host randomly opens another door. It’s a goat! He offers you the chance to switch to the other unopened door. What should you do?

It can help to think of this game being played many times. One third of players initially pick the car. For them, the host will always reveal a goat. Two thirds of players initially pick a goat. For them, the host reveals a goat half the time and reveals the car the other half. If we consider all games, we can split the players into three equal groups:

  1. Picked the car and saw a goat
  2. Picked a goat and saw a goat
  3. Picked a goat and saw the car

All of these scenarios are equally likely, so a third of the players will be in each group. Since the question tells you that, in your game, the host opens a door that has a goat, you know that you are in either group 1 or 2. Since it is equally likely that you are in either one, switching or staying makes no difference.
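
The three-group argument can be checked by simulation; a minimal Python sketch (the function name is mine):

```python
import random

def random_host(trials=300_000, seed=4):
    """The host opens a random other door; keep only games where he shows a goat."""
    rng = random.Random(seed)
    stay_wins = switch_wins = goat_shown = 0
    for _ in range(trials):
        car, pick = rng.randrange(3), rng.randrange(3)
        opened = rng.choice([d for d in range(3) if d != pick])
        if opened == car:
            continue                      # the car was revealed: game over, excluded
        goat_shown += 1
        stay_wins += pick == car
        other = next(d for d in range(3) if d not in (pick, opened))
        switch_wins += other == car
    return stay_wins / goat_shown, switch_wins / goat_shown

print(random_host())  # ≈ (0.5, 0.5): switching and staying are equivalent
```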


The crucial difference is that here the host could have shown the car but didn’t. In this variation, the host is more likely to open a goat door if your initial pick was the car door. In other words, the host opening a goat door gives you information about your initially chosen door: it increases the probability that it has a car from ⅓ to ½.

A Clearer Phrasing

Here is a clear way to pose the problem such that Marilyn’s answer is correct:

You are about to go on a game show. The game is always played in the same way: The player is shown three closed doors. Behind two of them are goats; behind one is a car. The player wins if they pick the door with the car. After the player picks a door, the host opens another door, and it's always one with a goat behind it. Then the host offers the player the chance to switch to the other closed door. He does this in every game. What should you do to maximize your chances of winning? 

You may have noticed that I also specified that you win if you pick the car door (not that you win what’s behind the door). This is because sometimes people say they prefer a goat over a car.


The problem with the Monty Hall problem is that the standard wording is often too vague. Marilyn’s answer is correct only under a particular set of rules about what the host will do, but those rules are frequently left out. Once you state the host’s policy clearly, many people who previously found the problem baffling finally see where the answer comes from.

For readers familiar with Bayes’s theorem, I leave you with a challenge:

You’re playing a game with the same setup as above, but now you’re told the host opens a door 75 percent of the time when the contestant initially picks a goat, and 25 percent of the time when the contestant initially picks the car. In your game, the host opens a door to show a goat. Should you switch or stay?
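
If you want to check your answer numerically, here is a simulation sketch of the stated policy (the function name is mine); it prints an estimate rather than the derivation:

```python
import random

def switch_win_rate(trials=400_000, seed=5):
    """Host opens a goat door 75% of the time if you picked a goat, 25% if the car."""
    rng = random.Random(seed)
    switch_wins = shown = 0
    for _ in range(trials):
        car, pick = rng.randrange(3), rng.randrange(3)
        p_open = 0.25 if pick == car else 0.75
        if rng.random() < p_open:         # a goat door was opened in this game
            shown += 1
            switch_wins += pick != car    # switching wins exactly when you hold a goat
    return switch_wins / shown

print(round(switch_win_rate(), 3))  # compare with your Bayes's-theorem answer
```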

The “Obvious” Interpretation

For many such probability puzzles, one might object, “But the most natural interpretation is obviously X.” Natural to whom? When a puzzle leaves out the information-generating process, people may assume different backstories and end up answering different problems. What seems obvious to you may not be to someone else.

This lesson applies beyond puzzles. In many disagreements, people talk past each other because they understand the same term in different ways or are working from different assumptions. When you make those terms and assumptions explicit, you may find you disagree on less than you thought. Even if you don’t, that precision lays the groundwork for a more productive discussion. Clarity is crucial, whether you’re posing a puzzle or talking with someone who sees things differently than you do.

Categories: Critical Thinking, Skeptic

Why UAPs? Why Now?

Skeptic.com feed - Wed, 02/18/2026 - 7:21am
How a Fringe Movement Became a National Conversation at the Highest Levels of Society and Government

The new documentary film by Dan Farah, The Age of Disclosure, has been widely reviewed in mainstream media (CNN, Fox News, News Nation, The New York Times, The Guardian, etc.) and intensely discussed not only on popular podcasts with UFO enthusiasts but at the highest levels of government, with comments by President Trump and an exclusive Fox News interview by Sean Hannity with Secretary of State Marco Rubio, who clarified that the film had been selectively edited to make it seem like he knows more than he does about the phenomena. Those phenomena are now known as UAPs, or Unidentified Anomalous Phenomena, a designation that marks the transition of this once fringe movement into mainstream conversation. How did this happen … and why?

In this excerpt from my book, Truth, I will give a general overview of the UFO/UAP phenomena; explain why most scientists and journalists remain skeptical and reject the evidence (which consists almost entirely of grainy videos, blurry photographs, and anecdotes about strange lights in the sky); discuss the accusations of a government cover-up of the evidence and a secret Pentagon crash-retrieval program; and offer a sociocultural explanation for how and why all this unfolded as it did, and what the deeper quasi-religious motivations might be for belief in a higher power come to Earth to rescue humanity from itself.

The Residue Problem 

In Leslie Kean’s 2010 book UFOs: Generals, Pilots and Government Officials Go on the Record, the ufologist admitted that “roughly 90 to 95 percent of UFO sightings can be explained” as: 

weather balloons, flares, sky lanterns, planes flying in formation, secret military aircraft, birds reflecting the sun, planes reflecting the sun, blimps, helicopters, the planets Venus or Mars, meteors or meteorites, space junk, satellites, swamp gas, spinning eddies, sundogs, ball lightning, ice crystals, reflected light off clouds, lights on the ground or lights reflected on a cockpit window, temperature inversions, hole-punch clouds, and the list goes on!1

So the entire extraterrestrial hypothesis for explaining UFOs and UAPs is based on a residue of data left over after the above list has been exhausted. What’s left? Not much. 

Kean opens her exploration “on very solid ground, with a Major General’s firsthand chronicle of one of the most vivid and well-documented UFO cases ever”—the UFO wave over Belgium in 1989–1990. That Major General is Wilfried De Brouwer, and here is his recounting of the first night of sightings: 

Hundreds of people saw a majestic triangular craft with a span of approximately a hundred and twenty feet and powerful beaming spotlights, moving very slowly without making any significant noise but, in several cases, accelerating to very high speeds. 

First, how does he know it had a span of 120 feet? What measurement instrument was used? These are questions that, if answered, could bring us closer to the truth about this UFO sighting. Regardless, even seemingly unexplainable sightings such as De Brouwer’s can have quotidian explanations. Perhaps it was an early experimental model of a delta-wing bomber (U.S., Soviet, or otherwise) that secret-keeping military agencies were understandably loath to reveal. Or maybe it was three sources of aerial lights (flares? small planes?) that from the perspective of a ground observer appeared triangular in shape, with the mind filling in the space between the lights.

The one and only photograph associated with the Belgian event seems to show a triangular shaped craft, but UFO investigator Robert Sheaffer reports that it was, in fact, a faked photograph of a small styrofoam model with three spots affixed to it.2 According to the Belgian news organization RTL, it was hoaxed by a 20-something man named Patrick: 

A DIY, done in a few hours and photographed in the evening, a joke, inspired by the UFO wave born a few months earlier, which targeted the friends of the small company where Patrick worked. But now, the joke will leave the walls of the factory. “We didn’t think it would come out of the factory where we worked. It went much further and then we let it go,” admits Patrick.3

Since the fake photograph was inspired by “real” sightings, we still need to deal with those in order to get to the truth, so let’s compare De Brouwer’s narrative above to Kean’s summary of the same incident: 

Common sense tells us that if a government had developed huge craft that can hover motionless only a few hundred feet up, and then speed off in the blink of an eye—all without making a sound—such technology would have revolutionized both air travel and modern warfare, and probably physics as well. 

Note how De Brouwer’s 120-foot craft becomes “huge” in Kean’s retelling, how “moving very slowly” was changed to “can hover motionless,” how “without making any significant noise” shifted to “without making a sound,” and how “accelerating to very high speeds” was transformed into “speed off in the blink of an eye.” This language transmutation is common in UFO narratives, making it harder for scientists and skeptics to provide natural explanations.

Pilots, Astronauts, and Eyewitness Accuracy 

One reason for Kean’s confidence in her assertion that at least some UFOs and UAPs represent alien spacecraft is that she thinks pilots and astronauts “represent the world’s best-trained observers of everything that flies. What better source for data on UFOs is there? ... [They] are among the least likely of any group of witnesses to fabricate or exaggerate reports of strange sightings.” 

Is that true? Consider this assessment by the renowned astronaut and pilot Scott Kelly, at a NASA press conference dealing with the latest flap of UAP sightings, who threw cold water on the myth of extraordinary perceptual powers of pilots and astronauts (condensed and edited for style and clarity):4

In my experience of flying over 15,000 hours in 30 something years in airplanes and in space, the environment that we fly in is very conducive to optical illusions, so I get why these pilots would look at that Go Fast video and think it was going really, really fast. I remember one time I was flying off Virginia Beach Military operating area, and my RIO [Radar Intercept Officer], who sits in the back of the Tomcat, was convinced we flew by a UFO. I didn’t see it, so we turned around to go look at it. It turns out it was a Bart Simpson balloon. 

My brother Mark Kelly, a former NASA astronaut and also now a U.S. Senator, shared a story with me about an experience he had years ago when he was the commander of STS 124. They were getting ready to close the payload bay doors of the Space Shuttle and they saw something in the payload bay that they thought was a tool, or maybe a bolt—they couldn’t quite figure it out—and they were potentially going to have to do a spacewalk to retrieve it. But before they did that my brother grabbed the camera and they took a picture of it, and when they blew up the picture they realized that it was not a bolt or a tool in the payload bay; it was actually the International Space Station that was 80 miles away. 

There are cases where pilots have rendezvoused on a buoy because they thought that was their wingman. It’s just an extremely challenging environment to work in, especially at night.

What Does “Real” Mean?

When UFO enthusiasts breathlessly state that this latest wave of UAP sightings was confirmed as “real” by no less an authority than The New York Times, the assumption is that the “paper of record” launched an investigation of its own, independent of ufologists. 

That is not what happened. If you check the byline for that and related articles in that paper, one of the coauthors is none other than Leslie Kean, who as we have seen is anything but a neutral and objective narrator of the UFO phenomena and the government’s response to it. (Kean has since written a book and produced a Netflix documentary series called Surviving Death, on Near-Death Experiences and the afterlife.5) Although coauthor Helene Cooper does work for the paper as a correspondent for Pentagon matters, the other coauthor, Ralph Blumenthal, left the paper in 2009 and wrote a book titled The Believer: Alien Encounters, Hard Science, and the Passion of John Mack, the late Harvard psychiatrist who uncritically accepted alien abduction stories as accounts of real close encounters of the fourth kind.6 And while The New York Times article was an accurate work of reportage as far as it goes, it didn’t go very far, quoting only one skeptic, James Oberg. This was at least better than 60 Minutes in their coverage of the UAP flap that astonishingly—given their reputation as one of the most respected sources in all media—failed to interview a single scientist or skeptic familiar with the sightings under investigation. 


60 Minutes’ correspondent Bill Whitaker asked Lue Elizondo, who directed the Pentagon’s Advanced Aerospace Threat Identification Program (AATIP), “So what you are telling me is that UFOs, Unidentified Flying Objects, are real?” Elizondo replied: “The government has already stated for the record that they’re real. I’m not telling you that. The United States government is telling you that.”7

The word “real” is doing a lot of work here. No one—not the media, not the military, and certainly not the United States government—is saying that these sightings represent real alien visitors. What they are confirming as “real” is the videos themselves, as representing something out there in the world (and not a fake video or hoaxed CGI production). But when UFO believers and the general public hear the word “real,” their brains tend to autocorrect to “alien” or “Russian or Chinese assets,” instead of an ordinary effect of cameras and visual illusions or, simply, unexplained anomalies. 

In my own classification system to explain UFO and UAP sightings, I distill them into three hypotheses: (1) Ordinary Terrestrial (balloons, camera or lens effects, visual illusions, etc.), (2) Extraordinary Terrestrial (Russian or Chinese spy planes or drones capable of feats of physics and aerodynamics unheard of in the U.S.), and (3) Extraordinary Extraterrestrial (alien intelligence). Let’s consider each of these hypotheses and see which one has the highest credence. 

Ordinary Terrestrial 

The first video in this latest UFO/UAP flap was that of Lt. Commander Alex Dietrich, who reported seeing an unidentified aircraft about 70 miles west of San Diego in 2004. Her explanation of what she thinks she saw is emblematic of the entire phenomena and reinforces my point about the residue problem: “Just because I’m saying that we saw this unusual thing in 2004, I am in no way implying that it was extraterrestrial or alien technology or anything like that,” adding that “I think that the [U.S. government] report’s going to be a huge letdown. I don’t think that it’s going to reveal any fantastic new insight.”8 Indeed, the report was predictably unrevealing of anything alien. 

The three most widely viewed and discussed videos were filmed by infrared cameras mounted on Navy F/A-18 jets over the Atlantic seaboard and off the coast of San Diego. They were taken by the Navy Advanced Targeting Forward Looking Infrared (ATFLIR) camera pods attached to the fuselage of the jets, and the videos are now known as FLIR/Nimitz/Tic Tac (San Diego, 2004), Gimbal, and Go Fast (Florida coast, 2015). 

Figure 1. FLIR/Nimitz/Tic Tac (video still frame)

FLIR/Nimitz/Tic Tac (Figure 1) is the 2004 Nimitz video taken by Lt. Chad Underwood. According to Popular Mechanics, it first came to light in 2007 on a UFO website.9 It was elevated into public consciousness when it was reposted by The New York Times in Leslie Kean’s original article, then re-reposted in 2019 by former Blink-182 guitarist Tom DeLonge’s UFO organization “To the Stars Academy of Arts and Science.”10 In response, the Navy acknowledged that the videos were “real,” meaning that they are real videos and not hoaxes.11 Finally, in 2020 the Pentagon re-re-reposted the three videos “in order to clear up any misconceptions by the public on whether or not the footage that has been circulating was real, or whether or not there is more to the videos.”12 So, when people talk about these “new” videos, they are anything but new.

The heavy lifting on analyzing these videos from the skeptical community was done by Mick West, a former video game designer, host of the Metabunk.org website and the Tales From the Rabbit Hole podcast, and a former columnist for Skeptic magazine.13 It is a remarkable body of work; one can only hope the Pentagon’s own analyses were conducted at such a high level, or that it at least considered West’s analyses as part of its investigations.

In the FLIR video, for example, the object appears to zoom almost instantly off the screen, interpreted by some to indicate extraordinary speed and turning ability far beyond anything our jets are capable of. Note that in the upper left of the screen the camera “zoom” indicator doubles from 1 to 2 at the moment the object “zooms” to the left. When West slowed down the video replay from zoom 2 to 1, the extraordinary maneuver becomes quite ordinary. 

FLIR and Gimbal, says West, are what one would see if a jet were flying away from the camera, thus accounting for the eyewitness accounts that the object showed no directional control surfaces or exhaust. And their apparent shapes as saucer-like and “Tic Tac,” West continues, are due to glare on the lens of the camera. As he told the San Diego Union-Tribune reporter Andrew Dyer, “What we’re seeing in the distance is essentially just the glare of a hot object,” most likely that “of an engine—maybe a pair of engines with an F/A-18—something like that.” 

(To be sure, not everyone accepts West’s conclusions. See, for example, the analysis by ufologist Robert Powell in his 2024 book UFOs,14 who told me, “You are correct in your quoting of Mick. Whether his assertions are correct is very debatable.”15)

As well, West notes, sudden acceleration of the aircraft could cause the FLIR camera to lose lock on the object, thereby making it look like it is the object making extraordinary maneuvers. As he writes, “The supposed impossible accelerations in the Tic Tac video were revealed to coincide with (and hence caused by) sudden movements of the camera, leading to the conclusion that the object in the video was not actually doing anything special.”16

Figure 2a. Go Fast (video still frame). Watch the video on the US Navy website.

Figure 2b. Basic trigonometry reveals the Go Fast object was at 13,000 feet altitude, not skimming the ocean. The apparent speed is a parallax effect from the jet’s movement.

The Go Fast video (Figure 2a) purportedly shows an object with no heat source (and therefore propelled by some unconventional engine) that appears to move impossibly fast just above the surface of the ocean. West then conducted what he describes as “10th-grade trigonometry” to show that, in fact, the object was actually well above the ocean surface at around 13,000 feet (Figure 2b) and was probably just a weather balloon traveling at about 30–40 knots.17 “Because of the extreme zoom and because the camera is locked onto this object … the motion of the ocean in this video is actually exactly the same as the motion of the jet plane itself. You’re seeing something that’s actually hardly moving at all, and all of the apparent motion is the parallax effect from the jet flying by.” 
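
The “10th-grade trigonometry” here amounts to resolving the camera’s line of sight into a vertical drop and subtracting it from the jet’s altitude. A sketch with illustrative round numbers (the inputs below are my assumptions, chosen to be in the ballpark of West’s figures, not values taken from the video):

```python
import math

# Illustrative inputs (assumed round numbers, not exact readings from the video):
jet_altitude_ft = 25_000   # approximate altitude of the F/A-18 doing the filming
slant_range_nm = 4.4       # camera's range readout to the object, in nautical miles
depression_deg = 26        # angle the camera points below the horizontal

slant_range_ft = slant_range_nm * 6076                       # nautical miles to feet
drop_ft = slant_range_ft * math.sin(math.radians(depression_deg))
object_altitude_ft = jet_altitude_ft - drop_ft

print(round(object_altitude_ft))  # ≈ 13,000 ft: well above the ocean, not skimming it
```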

Figure 3a. Mick West’s Gimbal analysis (YouTube video)

Figure 3b. The Gimbal video’s rotation is a camera artifact. When the gimbal mechanism rotates, the entire image—including background lights—rotates in sync, not the object itself.

The most talked-about video is “Gimbal” (Figure 3a), an object that appears to skim effortlessly over background clouds then come to an abrupt stop and rotate in midair with no apparent propulsion systems to pull off such a maneuver. Again, astoundingly, West appears to be the only person to notice that when the Gimbal object rotates, background patches of light in the scene also rotate in perfect unison with the object. “I think what’s clear about Gimbal is it’s very hot—it’s consistent with two jet engines next to each other and the glare of these engines gets a lot bigger than the actual aircraft itself, so it gets obscured by it,” West explains, adding that “at the start of the video, it looks like the object is moving rapidly to the left because of the parallax effect, and the rotation was a camera artifact (Figure 3b), and that the ‘flying saucer’ was simply the infrared glare from the engines of a distant aircraft that was flying away.”18 When he looked up the patents for that camera, West found that the gimbal mechanism was responsible for the apparent rotation.19

Figure 4. Flying Triangle (YouTube video). Mick West demonstrating that the triangular shape is likely the bokeh effect—where out-of-focus light takes the shape of the camera’s triangular lens aperture rather than a physical craft.

Since then, two more videos by the UAP Task Force were released, one showing a flying triangle (Figure 4) and the second an apparently zig-zagging submersible sphere (Figure 5). As the media and public gawked at yet another triangle-shaped UFO, West noted that it was filmed at night beneath the flight path into LAX, and that the object blinked in perfect unison with commercial airliners flying into Los Angeles from Hawaii. The triangular shape, he surmised, was most likely the result of a triangular lens aperture and the “bokeh” effect, the soft out-of-focus rendering generated by shooting a subject with a fast lens and wide aperture.20 In fact, there were other triangle-shaped objects in the image that correspond perfectly to celestial objects West identified as the planet Jupiter and some known stars.

Figure 5. Submersible Sphere (YouTube video). Analysis by Mick West.

As for the “zig-zagging” object, also filmed off the coast of California from the combat ship Omaha, as you can see in West’s video analysis it is the camera that is zig-zagging, not the object, and it doesn’t “submerse” into the water, it simply disappears beyond the horizon (and is, in any case, so grainy a video that it isn’t clear at all what is going on with whatever it was being filmed).21

Extraordinary Terrestrial 

An alternative to ordinary explanations for UAP sightings is that they represent Russian or Chinese assets, drones, spy planes, or some related but as yet unknown (to us) technology capable of speeds and turns that seemingly defy all known physics and aerodynamics. 

This hypothesis is highly unlikely, given what we know about the evolution of technological innovation, which is cumulative from the past. In his 2020 book, How Innovation Works,22 Matt Ridley demonstrates through numerous examples that innovation is an incremental, bottom-up, fortuitous process that is a result of the human habit of exchange, rather than an orderly, top-down process developing according to a plan. Innovation, he continues, “is always a collective, collaborative phenomenon, not a matter of lonely genius. It is gradual, serendipitous, recombinant, inexorable, contagious, experimental, and unpredictable. It happens mainly in just a few parts of the world at any one time.” Examples include steam engines, jet engines, search engines, airships, vaping, vaccines, cuisine, antibiotics, mosquito nets, turbines, propellers, fertilizer, computers, dogs, farming, fire, genetic engineering, gene editing, container shipping, railways, cars, safety rules, wheeled suitcases, mobile phones, corrugated iron, powered flight, chlorinated water, toilets, vacuum cleaners, shale gas, the telegraph, radio, social media, blockchain, the sharing economy, artificial intelligence, faddish diets, and hyperloop tubes.

ETIs are probably out there in the cosmos, but there probably are not that many of them, and because of the vast interstellar distances and their extreme rarity they have not been here.

It is simply not possible that some nation, corporation, or lone individual—no matter how smart and creative—could have invented and innovated new physics and aerodynamics to create an aircraft of any sort that could be, essentially, centuries ahead of all known present technologies. That is not how innovation works. It would be as if the United States were using rotary phones while the Russians or Chinese had smartphones, or we were flying biplanes while they were flying stealth fighter jets, or we were sending letters and memos via fax machine while they were emailing files via the internet, or we were still experimenting with captured German V-2 rockets while they were testing SpaceX-level rocketry. Impossible. We would know about all the steps leading to such technological wizardry.

Extraordinary Extraterrestrial 

Could these UAPs and UFOs represent visitations by ETIs? Let’s first separate two questions that most people confuse: (1) Are aliens out there somewhere in the cosmos? (2) Have aliens come here? When I state my skepticism about the latter, people assume I’m also skeptical about the former. “Do you seriously think we’re alone in this vast cosmos?” is a common rejoinder I hear when I say something like “UFOs are not ETIs.” So let me state for the record that although we have no definitive evidence to answer either question in the affirmative, I think it highly likely that aliens are out there somewhere but have not yet come here

To the first question, the law of large numbers suggests that aliens are very likely out there somewhere in the cosmos. A 2016 analysis of the Hubble Ultra Deep Field by NASA and the European Space Agency estimated that there are ten times the number of galaxies previously known (about one hundred billion), meaning that there are at least one trillion galaxies in the universe,23 each of which has at least one hundred billion stars, for a total of a hundred billion trillion stars—100,000,000,000,000,000,000,000—an almost inconceivably large number made even larger by the Kepler Space Telescope’s discovery that nearly all stars have planets, adding yet another zero to that already Brobdingnagian number of possible places where life could evolve into an intelligent communicating species. We also now know that it takes only a few million years for stars and planets to coalesce out of clouds of dust and gas to form solar systems. In our galaxy alone this happens about once a month. In a universe with the above number of stars, this would mean thousands of new solar systems are born every second.
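The arithmetic behind the star count can be checked in a few lines. This is a back-of-the-envelope sketch using the article’s round numbers, not an astronomical calculation:

```python
# Back-of-the-envelope check of the article's star estimate.
galaxies = 10**12           # ~1 trillion galaxies (2016 NASA/ESA estimate)
stars_per_galaxy = 10**11   # ~100 billion stars per galaxy
total_stars = galaxies * stars_per_galaxy
print(f"{total_stars:,}")   # → 100,000,000,000,000,000,000,000 (10^23)
```

And Kepler’s finding that nearly all stars host planets multiplies this 10^23 by at least one more factor of ten for candidate worlds.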

To the second question, Fermi’s Paradox—first articulated by the renowned physicist Enrico Fermi—implies that with so many stars and planets in the known universe there should be lots of ETIs out there, and assuming that at least some of those (half?) would be millions of years ahead of us on an evolutionary time scale, their technologies would be advanced enough to have found us by now, but they haven’t, so … where is everybody? 

Answers to the paradox are now legion, with at least 75 explanations for why we haven’t found ETIs yet,24 including: uniqueness (we’re alone), out of range (they’re too far away to have been discovered yet), failures of perception (they’re aquatic instead of land-based), failures of imagination (they haven’t thought of searching), inadequate search strategies (they or we are using the wrong technology to search), dark forest (they’re hiding), zoo hypothesis (they’re observing us secretly), transcendence (they’re from a different dimension or are pure spirit beings), ancient aliens (they visited thousands or millions of years ago), home bound (they don’t travel), and beyond our imaginations (they are so wholly Other that we can’t begin to know how to make contact).25 Here is my Twitter-length answer to Fermi’s Paradox: 

ETIs are probably out there in the cosmos, but there probably are not that many of them, and because of the vast interstellar distances and their extreme rarity they have not been here. But keep searching, as such a discovery would be one of the greatest in human history!

Sky Gods for Skeptics

In his 1982 book The Plurality of Worlds, the historian of science Steven Dick suggested that when Newton’s mechanical universe replaced the medieval spiritual world it left a lifeless void that was filled with the modern search for ETI.26 In his 1995 book Are We Alone? the physicist Paul Davies wondered: “What I am more concerned with is the extent to which the modern search for aliens is, at rock bottom, part of an ancient religious quest.”27 The historian George Basalla made a similar observation in his 2006 work Civilized Life in the Universe: “The idea of the superiority of celestial beings is neither new nor scientific. It is a widespread and old belief in religious thought.”28 In his 2007 book, Contact with Alien Civilizations, Michael A.G. Michaud proposes that “one of the drivers behind our search for other intelligent beings is our desire to find or attribute purpose to our existence. We have an innate yearning to be identified as part of some ill-defined grander scheme of things.”29 Here is how Carl Sagan expressed the sentiment in an interview with CBS anchor Walter Cronkite:

I think a key to what’s behind the real belief in flying saucers is most easily obtained if you look at the contact myths. There are several hundred people in the United States who claim to have had personal contact with the inhabitants of flying saucers that have landed. And if you examine these myths, you find that there are some peculiar regularities. The inhabitants of saucers are benevolent. I mean, they’re really concerned for our well-being. They’re omnipotent, extremely powerful, omniscient, extremely knowledgeable, and they often wear long, white robes. Now this combination is something I’ve heard in another context. This isn’t science, this is religion.30

To test this hypothesis the psychologist Clay Routledge and his colleagues published a paper titled “We Are Not Alone,” in which they reported an inverse relationship between religiosity and ETI beliefs—that is, those who report low levels of religious belief but high desire for meaning show greater belief in ETIs.31 In Study 1, subjects who read an essay “arguing that human life is ultimately meaningless and cosmically insignificant” were statistically significantly more likely to believe in ETIs than those who read an essay on the “limitations of computers.” In Study 2, subjects who self-identified as either atheist or agnostic were statistically significantly more likely to report believing in ETIs than those who reported being religious (primarily Christian). In Studies 3 and 4, subjects completed a religiosity scale, a meaning in life scale, a well-being scale, an ETI belief scale, and a religious supernatural belief scale. “Lower presence of meaning and higher search for meaning were associated with greater belief in ETI,” the researchers reported, but ETI beliefs showed no correlation with supernatural beliefs or well-being beliefs. 

From these studies the authors conclude: “ETI beliefs serve an existential function: the promotion of perceived meaning in life. In this way, we view belief in ETI as serving a function similar to religion without relying on the traditional religious doctrines that some people have deliberately rejected.” By this they mean the supernatural. “That is, accepting ETI beliefs does not require one to believe in supernatural forces or agents that are incompatible with a scientific understanding of the world.” If you don’t believe in God, but seek deeper meaning outside of our world, the thought that we are not alone in the universe “could make humans feel like they are part of a larger and more meaningful cosmic drama.” I concur, and so give the last word to Lt. Commander Alex Dietrich, who witnessed the 2004 UAP incident from a USS Nimitz fighter jet, as I think it well sums up 75 years of ufologists’ search for aliens: “I think they enjoy the anticipation more than actually finding answers.”

Categories: Critical Thinking, Skeptic

Skeptoid #1028: Manipulative Advertising

Skeptoid Feed - Tue, 02/17/2026 - 2:00am

Advertising wants your attention, not your soul; and it’s not nearly as good at getting either as you might think.

Learn about your ad choices: dovetail.prx.org/ad-choices
Categories: Critical Thinking, Skeptic

It’s About Respect, Stupid

Skeptic.com feed - Sat, 02/14/2026 - 3:20pm

In 2020 Joe Biden became the first Democratic nominee in 36 years without a degree from the Ivy League. Obama, before him, filled no less than two-thirds of all cabinet positions with Ivy League graduates—over half of which were drawn from either Harvard or Yale.1 In Congress today, 95 percent of House members and 100 percent of senators are college educated.

According to a recent study published in Nature, 54 percent of “high achievers” across a broad range of fields—law, science, art, business, and politics—hold degrees from the 34 most elite universities in the country.2 The sociologist Lauren Rivera, studying top firms in finance, consulting, and law, found that recruiters are jonesing for applicants from prestigious academic institutions, typically targeting just three to five “core” universities in their hiring efforts—Harvard, Yale, Princeton, Stanford, and MIT, the usual suspects—then identifying five to fifteen additional second-tier options—such as Berkeley, Amherst, and Duke—from which they will more tentatively accept resumés.3 Everyone else almost certainly never even gets a reply email. Why? Because, as one lawyer explained the strategy to Rivera, “Number one people go to number one schools.”

“If destruction be our lot, we must ourselves be its author and finisher.” —Abraham Lincoln

Given this new American caste system, it’s no surprise that 63 percent of Americans think that “experts in this country don’t understand the lives of people like me,” or that 69 percent feel the “political and economic elite don’t care about hardworking people.”4 And, I suggest, they’re not wrong. A culture that sanctifies college as the gateway to full citizenship corrodes, over time, the foundations of democratic life. It devalues work that doesn’t come with a degree, licenses contempt for those not formally educated, and locks the working class out of positions of power. The result isn’t just underrepresentation; it’s resentment. As the journalist David Goodhart writes, “We now have a single route into a single dominant cognitive class”; where “an enormous social vacuum cleaner has sucked up status from manual occupations, even skilled ones,” and appropriated it to white-collar jobs, even low-level ones, in “prosperous metropolitan centers and university towns”; and where broad civic contribution has been replaced with narrow intellectual consensus.5 The result is a backlash not against education, but against the assumption that only one kind of education counts.

“At a time when racism and sexism are out of favor,” writes Harvard philosopher Michael Sandel, “credentialism is the last acceptable prejudice.”6 In a cross-national study conducted in the United States, Britain, the Netherlands, and Belgium, a team of social psychologists led by Toon Kuppens found that the college-educated class had a greater bias against less educated people than against other disfavored groups.7 In a list that included Muslims, poor people, obese people, disabled people, and the working class, “stupid people” were the most disliked. Moreover, the researchers found that elites are unembarrassed by the prejudice; unlike homophobia or classism, it isn’t hidden, hedged, or softened—it’s worn openly, with an air of self-congratulation. As the Swedish political scientist Bo Rothstein observes, “The more than 150-year-old alliance between the industrial working class and what one might call the intellectual-cultural Left is over.”8

Today we are living through a strange time in American life in which the numbers have declared victory. By most standard economic measures—employment, wages, even household net worth—the working class is better off than it was a generation ago.91011 The average elevator mechanic gets paid over $100,000 per year12; master plumbers can make more than double that.13 Even in Mississippi, our country’s poorest state, workers see higher average wages than in Germany, Britain, or Canada.14

Elites are unembarrassed by the prejudice; unlike homophobia or classism, it isn’t hidden, hedged, or softened—it’s worn openly, with an air of self-congratulation.

It is, for working-class Americans today, the best of times, objectively—and the worst of times, subjectively. This is not because the spreadsheets are wrong, but because we fail to count the things that history records in tone, not totals: mood, myth, and cultural resolve.

The Service Economy 

According to the most recent data available from the United States Bureau of Labor Statistics, nearly four out of five Americans work in the service sector.15 For most Americans in most states, that means retail, fast-food, or some other smile-for-hire job located at the end of a check-out line.16 It’s a kind of work where labor isn’t just accomplished, it’s seen—performed under the soft surveillance of the American customer. So, beneath inflation charts and unemployment rates, if you want to understand the feelings side of the postindustrial economy—you might start with tipping. 

It is, today, perhaps our most American habit—tipping for service; whether it be good, bad, or not provided. In restaurants, hair salons, and hotel lobbies, Americans tip over a hundred billion dollars a year—indeed, more than any other country on earth, and more than all of them combined.17 We tip cab drivers and pool cleaners and dog groomers and coat checkers. We tip the doorman on the way in, the bellhop on the way up, and the concierge on the way out. Americans tip so much that, as one European put it—the whole “approach [has become] completely deranged and out of control.”18

However, it wasn’t always this way. In fact, for much of the early 20th century, it was Americans who mocked Europeans for tipping—seeing it as smug, corrupt, and born of feudal etiquette.19 States such as Iowa, South Carolina, and Tennessee—among others—outlawed the practice entirely20; and wherever it remained legal, businesses proudly posted signs that read “No Tipping Allowed.”21 Some hotels even installed “servidors”—a two-way drawer that opened from hallway and room—so staff could deliver laundry without being seen, and without being tipped.22 As the author William R. Scott, in a book-length critique, put it in 1916: 

In an aristocracy a waiter may accept a tip and be servile without violating the ideals of the system. In the American democracy to be servile is incompatible with citizenship … Every tip given in the United States is a blow at our experiment in democracy … Tipping is the price of pride. It is what one American is willing to pay to induce another American to acknowledge inferiority. 

Somewhere along the way, however—somewhere between the Marshall Plan and the first McDonald’s Happy Meal—the parts reversed, and we became the punchline. Now it was the Americans who tipped like royals—and the Europeans who saw it as such.

It was during this time that the gesture was institutionalized—not out of custom or conscience, but because the Pullman Company, the National Restaurant Association, and eventually big tech sold it as part of the deal.23 They lobbied Congress, added tip lines to receipts, and made feudalism feel American—if you’re the one tipping.24 Because on the other end—where the customer is always right—yes, the tip is now expected and yes, it is now appreciated; but gratuity has never been the same thing as respect, especially not when, for most working-class Americans, IHOP has become the least humiliating option.

The Status Economy 

We are signaling-obsessed, hierarchy-calibrated social apes. All of us, according to author Will Storr in The Status Game, walk around like buzzed-up antennas—attuned to the faintest frequency of admiration or disdain, gossip or snicker.25 For most of human history, it wasn’t guns, germs, or steel that mattered most; it was access to the cooperative networks and high-yield alliances of a species where insiders eat first and the gates are closely guarded. And so what governs our decisions—above all else, even when no one’s watching—is the paranoia of social scrutiny. In other words, it’s a cost-benefit analysis where the material outcome barely matters and utility is downstream of reputational impact.

Absent this understanding of human behavior, very little of it makes sense. This was a core theme in the work of the early 20th-century economist Thorstein Veblen, whose concept of “conspicuous consumption” describes how people often consume products they don’t need—or even want—in order to flaunt status and social class.26 Luxury watches that tell time worse and minimalist chairs you can’t sit on are purchases where the high price is the point.

Of course, it is no major insight to say that people buy things to show off. The anthropological record is rich with lavish feasts and displays of abundance. The famous “potlatch ceremonies” of Pacific Northwest Indian tribes, for example, involved burning immense stores of wealth—copper shields, hand-carved canoes that took years to build, blankets, oil, and food—generations of accumulated capital, in a single afternoon; just to signal status.27

But what about meditating, carrying around a well-worn copy of The New Yorker in your back pocket, or believing in climate change? Veblen’s brilliance was seeing that even our quietest preferences are currency in a market economy of social prestige. As British philosopher Dan Williams puts it: 

Much cognition is competitive and conspicuous. People strive to show off their intelligence, knowledge, and wisdom. They compete to win attention and recognition for making novel discoveries or producing rationalizations of what others want to believe. They often reason not to figure out the truth but to persuade and manage their reputation. They often form beliefs not to acquire knowledge but to signal their impressive qualities and loyalties. When people are angry, it’s rarely about money. It’s about being looked down on.

It’s the kind of signaling that thrives in what sociologists call “post-material economies” such as contemporary America.28 Because in a society maxed out on comfort—where even the ultrawealthy can’t buy a better Netflix or a softer couch—the only lines left to draw are ideological; and social distinction becomes the new class war. The rub, however, is that unlike the peacock’s tail—a hard-to-fake signal, metabolically costly, and policed by survival—immaterial prestige hierarchies are cultural inventions; often arbitrary, often performative, and almost always enforced from the top down. In other words, social prestige isn’t earned—it’s distributed by those who already have it. As social scientists Johnston and Baumann described in a 2007 paper:

The dominant classes affirm their high social status through consumption of cultural forms consecrated by institutions with cultural authority. Through family socialization and formal education, class‑bound tastes for legitimate culture develop alongside aversions for unrefined, illegitimate, or popular culture.29

The elite don’t just consume goods. They consecrate tastes, turning culture into a class barrier such that status is socially assigned rather than materially demonstrated. French sociologist Pierre Bourdieu called it symbolic capital—where opinions double as vocabulary tests and entry fees for membership into the aristocracy.30 As Princeton’s Shamus Khan explains, “Culture is a resource used by elites to recognize one another and distribute opportunities on the basis of the display of appropriate attributes.”31

Observing today’s ruling class, social psychologist Rob Henderson has coined the term “luxury beliefs,” arguing that the experts, the celebrities, and the institutions are all fluent in the same woke-speak, and by their material abundance can afford to focus almost exclusively on social justice issues that, ensconced in their gated communities, have no effect on their own luxurious lives (nor those of the people they profess to be helping).32

The words turn and turn again—testing for status, enforcing the pecking order.33 And now, just as working-class Americans born in the industrial economy once rejected cash tips—those born in the culture-capital economy don’t want the tip either. They want respect. The redneck reluctance to simply “trust the experts” or pronounce it “people of color” instead of “colored people” isn’t about bigotry or Bible verses or disinformation—it’s about refusing the role of grateful recipient in someone else’s moral theater. It’s not anti-intellectualism or anti-love and kindness. It’s anti-elitism. 

A culture that sanctifies college as the gateway to full citizenship, over time corrodes the foundations of democratic life.

How is it that a born-rich multibillionaire has become the standard-bearer for the working class? It’s because his favorite food is McDonald’s; and to Nancy Pelosi, George Clooney, and my high school guidance counselor—Trump is trash. They see him the same way they see trailer park America—as tacky, ignorant, and disposable; always on the lowborn side of the tip. It’s a feeling well-known in union organizing circles.34 That when people are angry, it’s rarely about money. It’s about being looked down on. 

A New Nationalism 

Culture can often be hard to think about because it doesn’t exist in the world of objects—it exists in the world as a perceptual experience. It has no mass, no edge, no location. It’s not made of things; it’s made of meanings—real, but not tangible. 

The cultural backlash hypothesis, the status threat hypothesis, the social isolation hypothesis, the political alienation hypothesis, the nostalgic deprivation hypothesis—a growing body of scholarship has emerged to name and quantify the immaterial contours of twenty-first century populist discontent; all circling the drain of an old, half-remembered truth.3536373839

For most of history, kings, philosophers, and statesmen took seriously the idea that civilizations depend on symbolic cohesion—on rituals, traditions, and agreed-upon fictions capable of domesticating our most socially inconvenient biological biases. They understood, whether by insight or instinct, that there’s something important about ceremony and uniform and national character. That propaganda isn’t all bad. That done right, good slogans make good citizens. And good citizens make great nations. As Gidron and Hall put it in a recent paper: 

[I]ssues of social integration [must be taken] more seriously in studies of comparative political behavior. Such issues figured prominently in the work of an earlier era … but they fell out of fashion as decades of prosperity seemed to cement social integration.40

In the old economy it was simple. You had the rich, who lunched at steakhouses and voted Republican; the working class, who labored in factories and voted Democrat; and in between, the mass suburban middle class. When it came, the conflict was clear—members of the working class joining forces with progressive intellectuals to oppose the moneyed elite. Yet every once in a while, a new, revolutionary class of citizens comes along and scrambles the whole social order. In the late 20th century it was the scholastic king—and the new culture-laureate class. He is not merely an academic; he is society’s central planner, a warden of elite passage, and the face of the new American aristocracy; and as The New York Times columnist David Brooks put it: 

If our old class structure was like a layer cake—rich, middle, and poor—the creative class is like a bowling ball that was dropped from a great height onto that cake. Chunks splattered everywhere.41

Outsourcing made economic sense, globalization was in large part inevitable, and cheap goods are always good politics—sure, fine. But for over fifty years now, neither political party has been able to solve the social problem of a postindustrial economy. And no American president has been able to tell a story good enough to replace the one previous generations called true. As sociologist Arlie Hochschild explained in a recent interview with The New York Times:

We keep looking for real policies. That’s not the thing. Trump offers a veneer of policies and a story, and we’ve got to tune in to the effect of that story on people who feel like the world’s melting and sinking … Because whatever the policies, these voters are following the story and the emotional payoff of that anti-shaming ritual. So we have to stop the story, reverse the story: Nobody stole your pride, we’re restoring it together.42

In the same way philanthropy never solves economic inequality, bigger and better information tips will never win the culture war—because it’s not about being rich or poor, stupid or smart; it’s about better than or worse than. And the only thing that can make a rich person feel worse than a poor person—or a smart person worse than a stupid one—is a national story written by poor people and stupid people too. It’s the sort of new nationalism that, in the past, has required several interconnected efforts. 

The Bottom Line 

Robert F. Kennedy, in March of 1968, in a speech at the University of Kansas, noted: “The gross national product can tell us everything about America except why we are proud that we are Americans.”43

Rubber in Akron. Meat in Chicago. Coal in Scranton. Steel in Gary. It used to be you knew a city by what it made—how it sounded, how it smelled. In 1950 Detroit was the richest city in the world—that’s right, the entire world.44 On Zug Island, they used to make the whole car, start to finish—iron ore mined and smelted on one end, parts shaped and assembled along the way, and a new Ford rolled off the line at the other—no imports, no one else. It was vertical integration—of work, of community, of pride. 

But by the 1970s a new day had dawned, the old days were gone, and the unraveling had begun. Over half the manufacturing jobs moved elsewhere, a quarter of the population went too; and with whole neighborhoods left to rot, Detroit, once called “the Paris of the Midwest,” became one of the deadliest cities in the country.4546 From 1965 to 1974, homicides quintupled47; the central business district earned the name “zone of decay”; and businesses began installing bulletproof glass—floor to ceiling—to protect storefront clerks. 

Just like that—two short decades transformed America’s motor city into America’s murder city. And burnt, bled, and bankrupt, the once shining example rolled out perhaps the saddest, most pitiful ad campaign in American history: “Say Nice Things About Detroit.”48

It’s not about being rich or poor, stupid or smart; it’s about better than or worse than.

The bottom line is this. Every new economy produces different winners and losers—it’s just the way it is. What happened in Detroit was, in many ways, what was expected. But when the losses came—when the bottom fell out for the millions of working-class Americans still there, still trying—it was treated not as a national obligation but as an unfortunate footnote to progress. Detroit was told to retrain, relocate, find a way to adjust—and when they failed, just like the people still living in Akron, Scranton, and Gary, they were humiliated, cast as mascots of ignorance and failure. The problem is that the ignorant and the failed far outnumber those who aren’t. And so, as Franklin Roosevelt said, it’s not “whether we add more to the abundance of those who have much” that matters—“it is whether we provide enough for those who have too little.” 

Because when the empire falls—when the American experiment joins the long ledger of civilizations past, it won’t be at the hands of China or Russia or Al Qaeda or anyone else. We are the richest nation in the history of the world; no other society has ever wielded as much global influence; not even a coalition of all the world’s armies could best ours. “If destruction be our lot,” wrote a 28-year-old Abraham Lincoln, “we must ourselves be its author and finisher.”49 As “a nation of freemen, we must live through all time, or die by suicide.”

And if it comes to that—if we choose death; it won’t be about free trade or wages or unemployment rates any more than it was about taxes in 1776. Once again, it will be about respect.

Categories: Critical Thinking, Skeptic

CRISPR-Cas9 and the Ethics of Scientific Inaction

Skeptic.com feed - Tue, 02/10/2026 - 1:17pm

The Burmese python is among the most destructive invasive species in North America. Introduced into South Florida through the exotic pet trade, it has spread rapidly through the Everglades, fundamentally altering one of the most biologically unique ecosystems on the continent. Long-term monitoring studies document dramatic declines—often exceeding 90 percent—in medium-sized mammal populations such as raccoons, opossums, foxes, and bobcats. These losses have cascaded throughout the food web, reshaping predator-prey dynamics and ecosystem function.

After decades of effort, scientists and wildlife managers have been forced to confront an uncomfortable reality: traditional control strategies do not work at scale. Hunting programs, bounties, tracking dogs, radio-tagged “Judas snakes,” and public outreach campaigns have all failed to meaningfully reduce python populations across the Everglades’ vast and inaccessible terrain.

This persistent failure raises a question that should lie at the heart of scientific skepticism but is rarely posed directly: Why are scientists so reluctant even to explore CRISPR-based genetic tools to suppress invasive species when the ecological damage of inaction is already severe, ongoing, and irreversible?

To be clear, genetic population control has not remained confined to laboratory models. In Florida, genetically engineered mosquitoes have already been released in open environments to combat mosquito-borne disease—most notably dengue fever—while also reducing the risk of transmission of Zika and chikungunya viruses. These programs, developed by the biotechnology firm Oxitec, involved releasing male Aedes aegypti mosquitoes engineered so that their offspring fail to survive to adulthood. The goal was straightforward: suppress mosquito populations without pesticides and reduce disease risk to humans.

These releases were approved by federal and state regulators, implemented in the Florida Keys, and subjected to extensive monitoring. The results were not merely symbolic. Field trials conducted by Oxitec demonstrated local reductions of Aedes aegypti populations on the order of 70–90 percent, levels widely regarded as sufficient to substantially reduce the risk of mosquito-borne disease transmission. Notably, Aedes aegypti is itself a non-native, invasive species in Florida, introduced through human activity and now deeply embedded in urban and suburban environments. While directly attributing changes in dengue, Zika, or chikungunya incidence to a single intervention is methodologically complex, the biological rationale is straightforward: fewer competent vectors mean fewer opportunities for disease spread. By any reasonable standard, the program achieved its primary objective—large-scale, targeted suppression of an invasive species without chemical insecticides.
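The suppression logic can be illustrated with a toy discrete-generation model. The growth rate and release numbers below are illustrative assumptions, not Oxitec’s figures; the point is only that when engineered males outnumber wild males, matings that produce no surviving offspring drive the population down generation after generation:

```python
# Toy model of sterile-male-style suppression (parameters are assumed,
# not drawn from the Oxitec field trials).
def next_generation(wild, released_males, growth=1.8):
    """Females mating with engineered males leave no surviving offspring."""
    wild_males = wild / 2  # assume a 1:1 sex ratio in the wild population
    wild_mating_fraction = wild_males / (wild_males + released_males)
    return wild * growth * wild_mating_fraction

pop = 10_000.0
for _ in range(6):  # six generations of constant releases
    pop = next_generation(pop, released_males=20_000)
print(pop < 1)  # population collapses toward zero → True
```

With releases withheld (released_males=0), the same model grows by the full factor of 1.8 per generation, which is why sustained releases are required.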

The ethical reasoning behind these deployments was equally clear. Faced with ongoing public-health risks, scientists and policymakers concluded that genetic population suppression was preferable to widespread pesticide use, which carries well-documented ecological and human-health costs. Precision, reversibility, and reduced collateral damage were treated not as liabilities, but as virtues.

What is striking … is not that such tools exist or that they work, but how narrowly their application has been circumscribed.

That judgment did not emerge in a vacuum. For more than three decades, genetically modified organisms have been deployed across global agriculture at enormous scale. Genetically engineered crops have reduced pesticide use, increased yields, improved resistance to pests and disease, and in some cases enhanced nutritional content. These organisms have been consumed by billions of people and introduced into ecosystems worldwide, all under regulatory regimes far less restrictive than those now proposed for CRISPR-based conservation tools. Despite early public alarm and immense leftist protests, the accumulated scientific evidence has shown GMO crops to be no more dangerous to human health or the environment than their conventional counterparts. In practice, genetic modification has become a routine—if still politically contested—part of modern environmental management.

What is striking, then, is not that such tools exist or that they work, but how narrowly their application has been circumscribed. Genetic population control has been judged acceptable when the target is an insect vector threatening human health, yet remains largely off-limits when the target is a vertebrate invasive species driving ecological collapse. The technology did not stall at the edge of feasibility or safety; it stalled at the edge of moral comfort. Human-centered risk is treated as actionable. Ecological destruction is treated as tolerable.

CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) is often portrayed as a radical, almost science-fictional technology—a sudden and unprecedented leap in human power over nature. Popular narratives frequently frame it as a tool that allows scientists to “rewrite life” at will, blurring the line between biology and engineering in ways that feel unsettling or unnatural. In reality, CRISPR did not emerge from speculative ambition, but from basic microbiological research into how bacteria survive viral infections. CRISPR is part of a naturally evolved bacterial immune system, one that has existed for billions of years and functions by recognizing and disabling invading genetic material.

This pattern of radical portrayal followed by gradual normalization is hardly unique to CRISPR. Earlier generations of genetic technologies were greeted with similar alarm. Recombinant DNA research in the 1970s provoked fears of runaway organisms and ecological catastrophe. Genetically modified crops were widely depicted as “unnatural,” dangerous, or morally suspect, despite being extensions of techniques humans had used for millennia to shape plant genomes through selective breeding. In each case, initial ethical anxiety was driven less by empirical evidence than by the perception that humans were crossing a symbolic boundary. Over time, as mechanisms became better understood and real-world outcomes failed to match apocalyptic predictions, these technologies were absorbed into routine scientific and agricultural practice. CRISPR now occupies the same cultural position once held by earlier genetic tools—exceptional not because of demonstrated harm, but because it makes human agency over biology unusually explicit.

What is CRISPR and how could it eliminate an invasive species?

When a bacterium survives a viral attack, it stores short fragments of the virus’s DNA in its own genome. These fragments serve as genetic “mugshots.” If the virus returns, the bacterium uses these sequences to guide specialized enzymes to recognize and cut the invader’s DNA, neutralizing the threat.

The most important of these enzymes is Cas9, a molecular tool capable of cutting DNA at a precisely specified location. In 2012, researchers including Jennifer Doudna demonstrated that this system could be repurposed as a programmable gene-editing technology. By supplying Cas9 with a custom guide RNA, scientists could target and cut virtually any DNA sequence with remarkable accuracy. In 2020, Doudna, along with Emmanuelle Charpentier, won the Nobel Prize in Chemistry for their discovery of the “CRISPR-Cas9 genetic scissors.”

This represented a qualitative leap beyond earlier genetic engineering techniques, which were slow, expensive, and often imprecise. CRISPR allows genes to be deleted, modified, or silenced with far greater control than any previous method.
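The targeting logic described above can be sketched in a few lines. This is a toy illustration only: the DNA and guide sequences are invented, and real Cas9 targeting involves RNA:DNA hybridization, both strands, and tolerance for some mismatches, none of which is modeled here. The essential idea is that the guide sequence must match a ~20-base protospacer immediately followed by an “NGG” PAM motif.

```python
import re

def find_cas9_sites(dna, guide):
    """Toy sketch of Cas9 target selection: report positions where the
    guide's DNA-equivalent sequence matches the genome and is immediately
    followed by an 'NGG' PAM motif (required for Cas9 to cut)."""
    sites = []
    for m in re.finditer(re.escape(guide), dna):
        pam = dna[m.end():m.end() + 3]          # the 3 bases after the match
        if len(pam) == 3 and pam[1:] == "GG":   # 'NGG' — any base, then GG
            sites.append(m.start())
    return sites

# Invented example sequences, for illustration only.
dna = "TTAACGGATCCGATTGCATGCCAGTACCTAGTGGTTCA"
guide = "GATTGCATGCCAGTACCTAG"  # hypothetical 20-nt guide sequence
print(find_cas9_sites(dna, guide))  # one valid cut site found
```

The point of the sketch is the programmability: changing the `guide` string retargets the “scissors” to a different sequence, with no change to the enzyme itself.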

This increase in precision has already translated into medical advances that, only a decade ago, would have been regarded as implausible or even miraculous. In several cases, CRISPR has moved beyond theory and into real-world clinical success, reshaping how genetic disease is treated.

Genetic approaches, by contrast, allow for ongoing monitoring, adjustment, and—if necessary—active reversal. The risk is not zero, but it is structured, visible, and governable in ways conservation biology has rarely had before.

One of the most striking examples involves inherited blood disorders such as sickle-cell disease and beta-thalassemia. Rather than attempting to correct the defective gene directly, researchers used CRISPR to reactivate fetal hemoglobin—a form of hemoglobin normally silenced after birth. In patients treated with this approach, debilitating symptoms have been dramatically reduced or eliminated, freeing individuals who once required frequent transfusions from lifelong medical dependence. These outcomes represent not incremental improvement, but functional cures.

CRISPR has also enabled remarkable progress in certain forms of blindness caused by single-gene mutations. In these cases, gene editing has been used directly in living patients to correct the underlying defect in retinal cells. For the first time, clinicians have been able to intervene at the level of genetic causation rather than managing symptoms after irreversible damage has occurred. Patients who were steadily losing vision have shown stabilization—and in some cases partial restoration of sight.

In cancer medicine, CRISPR has transformed immunotherapy by allowing scientists to engineer immune cells with unprecedented specificity. T cells can now be edited to better recognize tumors, resist immune exhaustion, or avoid attacking healthy tissue. These advances have expanded the reach of cell-based therapies and improved their safety profile, turning once-lethal cancers into manageable or even curable conditions for some patients.

What unites these examples is not technological novelty, but ethical clarity. In each case, CRISPR has been embraced because it replaces blunt, toxic, or ineffective treatments with targeted, biologically precise interventions. The risks are acknowledged, studied, and regulated—but they are not treated as disqualifying. When the benefits are concrete and human suffering is visible, society has proven willing to accept the responsible use of powerful genetic tools.

How does this translate into invasive-species control?

The most discussed application is the gene drive. Under normal sexual reproduction, each parent has roughly a 50 percent chance of passing on a given gene. A gene drive biases this process. By linking a genetic change to the CRISPR machinery itself, the altered gene is inherited by nearly all offspring, allowing it to spread rapidly through a population.

An artificial gene drive built with CRISPR-Cas9 works by programming a guide RNA to direct the Cas9 enzyme to cut the alternative version of a gene. When the cell repairs that cut, it copies the CRISPR-containing gene instead, ensuring that the edited version is passed on to nearly all offspring. (Source: Mariuswalter, CC BY-SA 4.0, via Wikimedia Commons)

Crucially, eliminating an invasive species does not require mass killing or ecological vandalism. The most conservative proposals focus on population suppression rather than extinction. CRISPR can be used to disrupt fertility genes, bias sex ratios (for example, producing mostly males), or induce sterility without affecting survival. Over successive generations, reproduction fails and population size declines.
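The inheritance bias at the heart of a gene drive can be illustrated with a toy simulation. This is a deliberately simplified sketch, not a model of any proposed deployment: the 95 percent homing efficiency is an illustrative assumption, and fitness costs, mating structure, and resistance alleles are all ignored.

```python
import random

def simulate(pop_size=1000, seed_carriers=50, conversion=0.95, generations=10):
    """Track the frequency of a homing-drive allele in a toy population.

    Each individual is True (drive carrier) or False (wild type). Any
    offspring with at least one carrier parent inherits the drive with
    probability `conversion` (homing) rather than the Mendelian 50%.
    """
    pop = [True] * seed_carriers + [False] * (pop_size - seed_carriers)
    history = []
    for _ in range(generations):
        history.append(sum(pop) / len(pop))  # drive frequency this generation
        new_pop = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)     # pick two random parents
            if a or b:                       # at least one carrier parent
                new_pop.append(random.random() < conversion)
            else:
                new_pop.append(False)        # wild-type pairs stay wild type
        pop = new_pop
    return history

random.seed(0)
trajectory = simulate()
print([round(f, 2) for f in trajectory])
```

Starting from a 5 percent seeding, the drive allele climbs toward near-fixation within roughly ten generations. A neutral allele under ordinary Mendelian inheritance would, by contrast, merely drift around its starting frequency, which is the qualitative difference the simulation is meant to show.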

Equally important, these interventions can be designed to be species-specific, targeting DNA sequences unique to the invasive organism. Unlike chemical controls, they do not spread indiscriminately through food webs. Unlike physical removal, they scale naturally with population size.

A common concern is that a gene drive designed to suppress Burmese pythons in Florida, for example, could somehow spread beyond its intended range. In a worst-case scenario, modified individuals might be transported—most likely by humans—back to the species’ native range in tropical and subtropical regions of the Old World. If a population-suppression drive were to establish itself there, it could threaten native python populations rather than invasive ones. This possibility is real enough to deserve serious consideration, but it is also far less catastrophic—and far more controllable—than it is often portrayed.

First, such spread would be biologically and geographically unlikely. The Everglades are thousands of miles from the python’s native range, with no natural migration pathway connecting the two. Any transcontinental movement would almost certainly require deliberate or accidental human transport, the very mechanism responsible for the original invasion. Second, gene drives can be designed to be regionally constrained, for example by targeting genetic variants common in the invasive population but rare or absent in native populations, or by incorporating threshold-dependent systems that fail to propagate below certain population densities.

CRISPR does not act like a genetic bomb. It alters inheritance. That distinction matters. 

Most importantly, CRISPR-based interventions are not a single, irreversible act. If an unintended spread were detected, there are multiple ways to halt or reverse progression. Researchers have already demonstrated the feasibility of reversal drives that overwrite earlier genetic changes, restoring normal inheritance patterns. In addition, releasing sufficient numbers of wild-type individuals can dilute or extinguish a suppression drive, while kill-switches and self-limiting designs can cause the system to collapse after a fixed number of generations.

In short, the relevant comparison is not between CRISPR and perfection, but between CRISPR and the tools currently in use. Chemical poisons, physical eradication, and habitat destruction offer no comparable capacity for recall or correction once deployed. Genetic approaches, by contrast, allow for ongoing monitoring, adjustment, and—if necessary—active reversal. The risk is not zero, but it is structured, visible, and governable in ways conservation biology has rarely had before.

CRISPR does not act like a genetic bomb. It alters inheritance. That distinction matters. 

Unfounded Fears

Despite this precision, CRISPR is widely treated as uniquely dangerous. This perception collapses under comparison.

Humans already intervene in ecosystems aggressively and often imprecisely. We drain wetlands, reroute rivers, apply pesticides, release biological control agents, and physically remove animals by the thousands. These interventions frequently produce collateral damage—not because intervention itself is misguided, but because it is undertaken with insufficient ecological understanding. Classic examples illustrate the danger of blunt solutions. 

In the 1930s, cane toads were introduced into Australia in an attempt to control beetles harming sugarcane crops. The toads failed to control the pests but thrived spectacularly themselves, spreading rapidly across the continent and poisoning native predators unadapted to their toxins. Similarly, mongooses were introduced to Hawaii to control rats in sugar plantations, only to prey instead on native birds and reptiles that had evolved without mammalian predators. In both cases, well-intentioned biological interventions backfired—not because humans acted, but because they acted crudely, deploying organisms broadly without precision, containment, or the ability to reverse course. These disasters argue not against intervention itself, but against uninformed and irreversible intervention.

CRISPR, by contrast, is the most targeted biological tool humans have ever developed. If risk is defined as the probability of unintended harm multiplied by the magnitude of that harm, it is far from obvious that CRISPR represents a new category of danger. In many contexts, it may represent a reduction in risk relative to existing practices.

Yet CRISPR is held to an ethical standard no other ecological tool has ever faced: near-zero tolerance for uncertainty.

A Brief History of Invasive-Species Eradication

The ethical hesitation surrounding CRISPR appears far less principled when placed alongside the long history of invasive-species eradication already embraced by conservation biology. For decades, conservationists have pursued aggressive—and often lethal—campaigns to remove non-native predators, particularly on islands where endemic species evolved without defenses against mammalian hunters.

New Zealand Kākāpō (Strigops habroptilus), by Jake Osborne via Flickr, CC BY-NC-SA 2.0

As vividly documented in William Stolzenburg’s Rat Island, invasive rats, cats, and other predators introduced inadvertently by humans have devastated island ecosystems worldwide. Flightless birds such as New Zealand’s kakapo, along with countless seabirds and reptiles, have been driven to the brink of extinction by predators they were evolutionarily unprepared to confront. Faced with these losses, conservationists have largely converged on a difficult conclusion: eradication, however uncomfortable, is preferable to permanent biodiversity collapse.

The primary tool for rat eradication has often been chemical poisoning, most notably anticoagulants such as brodifacoum. These compounds cause internal bleeding over the course of days, a process widely acknowledged to be painful. Their use has also produced unintended consequences, including secondary poisoning of birds of prey that consume contaminated rodents. Yet despite these ethical and ecological costs, eradication campaigns have continued—because the alternative is the irreversible loss of native species.

CRISPR deserves no exemption from scrutiny—but neither does it warrant a moral quarantine that more destructive methods escape entirely.

This history matters because it reveals a striking inconsistency. Conservation science already accepts deliberate, population-level elimination of invasive species using methods that are blunt, ecologically disruptive, and morally fraught. These approaches are justified, explicitly, as tragic but necessary tradeoffs.

Against this backdrop, objections to CRISPR take on a different character. Genetic approaches aimed at reproductive suppression rather than mass killing could, in principle, reduce or eliminate invasive populations without poisoning, trapping, or collateral damage to non-target species. They offer the possibility—still theoretical, but biologically grounded—of achieving the same conservation goals with less suffering and greater precision.

To be clear, gene drives introduce their own uncertainties. But uncertainty has never been grounds for abstention in conservation biology. Instead, uncertainty has been managed through testing, containment, and ongoing revision. CRISPR deserves no exemption from scrutiny—but neither does it warrant a moral quarantine that more destructive methods escape entirely.

Triage

The uncomfortable truth is that conservation already involves deciding which species live and which disappear. The real ethical question is not whether humans should exercise that power—we already do—but whether we are willing to consider tools that might allow us to exercise it more carefully, more precisely, and with fewer unintended victims.

Before CRISPR is dismissed as reckless or premature, it is worth asking a simpler question: what has already been tried—and at what cost?

Florida and federal agencies, along with conservation organizations, have spent tens of millions of dollars attempting to control Burmese python populations. None of these efforts has achieved population-level suppression.

Among the most striking examples is the development of robotic prey decoys, including AI-assisted robotic rabbits designed to lure pythons into traps. These devices mimic the movement, heat signatures, and behavioral cues of live prey. They represent an impressive feat of engineering—complex, expensive, and technologically adventurous.

They are also revealing.

Robotic prey baits are essentially a high-tech extension of trapping. They operate on one animal at a time, across thousands of square miles of dense, inaccessible wetlands. Even when successful, they remove pythons incrementally, with no capacity to scale proportionally to population size. Meanwhile, reproduction continues unchecked.

When scientists decline even to explore genetic interventions, they are not abstaining from responsibility—they are exercising it selectively.

This matters because it exposes a profound inconsistency in how risk is evaluated. The same institutions that recoil at the hypothetical risks of CRISPR have already embraced experimental technologies deployed directly into the wild, large-scale ecological manipulation, and interventions with no realistic path to success.

Robotic prey baits are not inherently unethical. But they are far cruder, less targeted, and less scalable than genetic approaches—yet they trigger none of the moral alarm bells that CRISPR does.

Society, it seems, is already willing to experiment aggressively in the Everglades. 

The Burmese python did not arrive in Florida by natural dispersal. Its presence is the result of human action. Continuing to allow its ecological destruction is also a human choice. When scientists decline even to explore genetic interventions, they are not abstaining from responsibility—they are exercising it selectively.

But doing nothing is not neutral. 

Categories: Critical Thinking, Skeptic

Skeptoid #1027: Radioactive Relics: The Missing RTGs

Skeptoid Feed - Tue, 02/10/2026 - 2:00am

Radioactive nuclear generators sit out in the environment, posing a real hazard. They're mostly — but not all — in Russia.

Learn about your ad choices: dovetail.prx.org/ad-choices
Categories: Critical Thinking, Skeptic

Scientific Inconsistencies in the Quran: A Greater Challenge Than Its Violent Verses?

Skeptic.com feed - Mon, 02/09/2026 - 3:59pm

In contemporary critical discourse on Islam, significant attention is often devoted to the violence associated with this religion—whether through the history of Arab-Islamic conquests, modern terrorist acts committed in the name of Allah, or Quranic verses calling for religious warfare and corporal punishments. For many critics of the foundational sacred texts of Islam, the physical violence endorsed in these scriptures appears to be the most obvious problem to demonstrate and denounce. 

Thus, for example, when Allah states, in verse 34 of surah 4 of the Quran, that husbands must strike disobedient wives, one can easily conclude that domestic violence is compatible with Islam. Likewise, when Allah states, in verse 2 of surah 24, that those who engage in sexual intercourse outside of marriage must be punished with one hundred lashes, it follows that private sexual life is subject to surveillance and even sanction in Islam.

I am, of course, able to distinguish between the sacred texts of Islam and those who believe in them. A religion should not necessarily be held responsible for the behavior of its followers. However, everything Allah says in the Quran necessarily commits Islam, since the only official and supreme author of Islam, capable of defining what Islam is or is not, is Allah himself. This paradigm is the founding dogma of the Quran, which is claimed to be, from the first to the last verse, the word of a perfect God who neither lies nor errs, valid at all times and in all places until the Day of Judgment. It is therefore impossible, for example, to reform the criminalization of freedom of conscience in Islam, because, according to the Quran, Allah has declared that those who do not believe in the Quranic verses (surah 4, verse 56) or in Allah and His Prophet Muhammad (surah 48, verse 13), will be eternally tortured in Hell after death. 

This eternal promise, which will be fulfilled at the end of times, cannot logically be revoked by any human, temporal, or earthly decision preceding that end. Moreover, reforming Islam would amount to asking inherently weak, flawed, and sinful humans (surah 4, verse 28) to contradict and disavow Allah, the best of judges (surah 7, verse 87), who sent down a book whose verses are perfect (surah 11, verse 1); such a request is absurd from the perspective of this religion.

Most of the peaceful and Westernized Muslims I have encountered in my life rarely seem shaken in their faith by the most violent Quranic passages that call for hatred and punishment of innocent people condemned merely for their freedom or differences. The apparent casualness of peaceful believers in the face of their god’s warlike words often has a psychological root: cognitive dissonance.

The contradictions and scientific errors of an infallible god, supposed to know everything and never err, are harder to dispute.

Faith in Islam rests, among other things, on the belief that the Quran is a perfect text revealed by a just God who fights injustice. Yet for a Muslim living in a modern Western society where nonviolence, freedom of conscience, and equality of rights are sacred values, the violence advocated by Allah in the Quran contradicts the ideal of peace, which is the most consensual political and social argument possible. To resolve this dissonance, the peaceful Muslim generally adopts the strategy of avoidance. And what better way to deny the cause or consequence of a problem than to deny its very existence—or worse, to present it as a benefit? 

In order to survive in the 21st century where fact-checking scrutinizes religious texts as thoroughly as political discourse, apologists of Islam have mastered the art of reinterpreting Quranic verses. These rhetorical sleights of hand—transforming every instance of the verbs “kill” or “fight” in Allah’s speech into a plea for tolerance and dialogue—obviously comfort peaceful and Westernized Muslims in their idealistic—yet illusory—vision of Islam. Many Muslims who follow a “religion of peace, love, and tolerance” will tell themselves that “The unbelievers to be fought must have been violent people against whom Allah called for self-defense” or “The domestic violence encouraged by Allah must surely consist of using purely symbolic violence through oratorical eloquence to bring reason to an unreasonable wife.”

As an ex-Muslim who has devoted many years to studying the logic and meaning of Quranic verses, I argue that it is more effective to discuss faith with other Muslims by speaking of science rather than violence. Muslims today often dismiss criticisms of the violence in Allah’s words as merely subjective, whereas science, facts, evidence, and even mathematics are seen as more objective.

The best apologists for Islam have certainly developed a whole arsenal of sophisms to relativize or justify the slightest violent word in the Quran, but the contradictions and scientific errors of an infallible god, supposed to know everything and never err, are harder to dispute. For this reason, in my book 100 Contradictions and Scientific Errors in the Quran (which is my best-known work, here in France), I have thoroughly identified and analyzed an encyclopedic list of the 100 greatest lexical, scientific, narrative, mathematical, dialectical, and historical contradictions found in the Quran. I present two of them here, starting with a Quranic narrative contradiction. Allah, in the Quran, sometimes recounts the same historical event in two different surahs, such as when He announces to Zachariah through His angels that the latter will have a son, named John. But in both of these surahs the human behind Allah’s pen made the mistake of presenting the event with verbatim quotations, specifically first-person statements. 

The discrepancy between these verbatim quotes demonstrates that if the author of the Quran can contradict his own work, even his most fervent believers can do so as well.

So what did Zachariah reply when Allah sent him the announcement of John’s birth? According to verse 40 of surah 3, Allah claims that Zachariah, surprised, responded: “My Lord, how will I have a boy when I have reached old age and my wife is barren?” Yet in verse 8 of surah 19, Allah claims that Zachariah at that same moment said: “My Lord, how will I have a boy when my wife is barren and I have reached extreme old age?” These two Quranic citations, between surahs 3 and 19, supposedly quoting the same statement made by Zachariah during a unique and precise event, should have been word-for-word identical. However, they invert the order of the two arguments relative to one another and feature a differing adjective—present in one but absent in the other. Each version of the historical and factual truth contradicts and invalidates the other, even though both are meant to be equally divine. The discrepancy between these verbatim quotes demonstrates that if the author of the Quran can contradict his own work, even his most fervent believers can do so as well.

Let us take another example of incoherence in the Quran, which leaves little room for subjectivity: mathematical errors. Several of Allah’s Quranic instructions regarding the calculation of inheritance shares are simply impossible to apply, as they contradict one another. For instance, in verse 12 of surah 4, Allah affirms that if a person dies without leaving any parent or child, but has a brother or a sister, then each of them is to receive one sixth of the inheritance: “And if a man or woman dies leaving no father, no mother and no child, but has a brother or a sister, then for each one of them is a sixth.” 

Let us now consider the only two possible interpretations of this instruction, which contains a subtle ambiguity that is difficult to discern at a glance. First, let us assume that the word “or” in the phrase “a brother or a sister” implies there is only one heir—either a brother or a sister. This would mean, according to verse 12 of surah 4, that Allah grants “a sixth” of the inheritance to the sister of a deceased person with no parent or child. However, later in the same surah, in verse 176, Allah states that the sister of a deceased person without parent or child must receive “half” of the inheritance: “Say Allah gives you a ruling about one who dies leaving no father, no mother and no child: if someone dies and has no child but has a sister, she shall have half of what he leaves.” This creates a blatant contradiction: in the same inheritance scenario, a single sister receives either one sixth or half of the estate.

To resolve this contradiction, Muslims might then be tempted to adopt the second (and only other) possible interpretation of the word “or” in “[if he] has a brother or a sister, then for each one of them is a sixth,” namely that Allah is referring to two individuals: one brother plus one sister. This would mean that the brother and sister are each to receive an equal share—namely one sixth. Yet, in verse 176 of surah 4, Allah explains that in a situation involving a deceased person, if there are brothers and sisters: “a male will have the share of two females.” 

Rational critique of the Quran, the hadiths, and the Prophet’s biography has become vastly more accessible and widespread than at any time in history.

There is no coherent logic underlying these contradictory instructions. How can Allah explain that a brother must receive the same share as a sister, and then that two brothers must receive the same as four sisters? Either the Prophet Muhammad became confused with the Quran that emerged from his fallible human imagination, or other humans—careless or deceitful—completed the Quran after him as they saw fit, despite the dogma of the Quran’s inviolability which attributes its authorship to Allah alone.
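The arithmetic can be checked explicitly. The sketch below simply encodes the verse readings quoted above as exact fractions; the second comparison works with the brother-to-sister ratio only, since verse 176 specifies a ratio (“the share of two females”) rather than an absolute amount.

```python
from fractions import Fraction

# Reading 1: the deceased leaves a single sister (no parent, no child).
sister_per_4_12 = Fraction(1, 6)   # 4:12 — "for each one of them is a sixth"
sister_per_4_176 = Fraction(1, 2)  # 4:176 — "she shall have half"
print(sister_per_4_12 == sister_per_4_176)  # False — 1/6 versus 1/2

# Reading 2: one brother and one sister inherit together.
# 4:12 assigns each "a sixth", i.e. a brother:sister ratio of 1:1 ...
ratio_4_12 = Fraction(1, 6) / Fraction(1, 6)
# ... while 4:176 prescribes "a male will have the share of two females",
# i.e. a 2:1 ratio for the same pair of heirs.
ratio_4_176 = Fraction(2, 1)
print(ratio_4_12, ratio_4_176)  # 1 versus 2 — the ratios disagree
```

Under either reading, the two verses assign incompatible shares to the same heirs in the same scenario, which is the contradiction the text describes.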

♦ ♦ ♦

Until the late twentieth century, intellectual criticism of Islam’s sacred texts by ex-Muslims remained confined to discreet discussions, books of testimonies, or academic works that struggled to find a place in the public debate. But with the democratization of the internet, everything changed. Rational critique of the Quran, the hadiths, and the Prophet’s biography has become vastly more accessible and widespread than at any time in history.

More and more critics of Islam—ex-Muslims or not, anonymous or not—now dare to speak publicly about everything that worries them in Islam: its intolerance toward any dissenting thought, its violence, its misogyny, its scientific absurdities. Yet whether in Islamic countries, Europe, or elsewhere, ex-Muslims who criticize Islam openly remain few and often must live in hiding. Whether they live in countries where apostasy is illegal or in Western countries where they risk social death or even physical violence, many ex-Muslims fear revealing their departure from Islam to their families. Some pretend to remain Muslim.

The “battle of ideas” challenging Islam remains, even today, as stormy in the media as it is perilous to one’s personal safety. According to the sacred texts and legal tradition of Islam, leaving the religion and criticizing its foundations constitutes a religious crime whose legally prescribed punishment may extend up to death. This position derives directly from hadiths—the words and deeds of the Prophet Muhammad—classified as Sahih (authentic), such as Bukhari numbers 6878 and 6922, in which the Prophet Muhammad, defined by Allah (surah 33, verse 21) as a universal behavioral model for all Muslims, declared: “Whoever changes his religion, kill him!” These sacralized statements, criminalizing the loss of faith in Islam or the conversion of a Muslim to another religion, explain why even today, among the 42 Islamic countries (by constitution or by their predominantly Muslim population), not a single one recognizes or defends the right of a Muslim to leave Islam.

Categories: Critical Thinking, Skeptic

Mic'd Up: Brian's Blood Donation Interview

Skeptoid Feed - Fri, 02/06/2026 - 2:00am

Brian gets questioned while giving blood.

Learn about your ad choices: dovetail.prx.org/ad-choices
Categories: Critical Thinking, Skeptic

Did the U.S. Really Use a Sonic Weapon in Venezuela?

Skeptic.com feed - Wed, 02/04/2026 - 9:18am

Within days of the U.S. strike on Caracas and the capture of Venezuelan President Nicolás Maduro on January 3, 2026, a remarkable claim was sweeping across social media: American forces had deployed a devastating “sonic weapon” that left Venezuelan soldiers vomiting blood and unable to stand.

The headlines have been dramatic, with Forbes proclaiming: “U.S. Secret Weapon May Have Incapacitated Maduro’s Guards.”1 The Economic Times wrote about America’s “Secret Sonic Weapon,”2 while the UK Sun asserted: “US ‘Sonic Weapon’ is REAL after Chilling Claims it Left Captured Maduro’s Guards ‘Vomiting Blood.’”3 The story was dramatic, almost terrifying, but as we shall argue here, almost certainly false.

Within minutes of the first explosions on January 3, conflicting claims were already circulating on social media about the number of missiles fired, ground forces deployed, and helicopters spotted flying over the city of Caracas, the focal point of the attack. The ambiguity and uncertainty that typify the fog of war are ideal breeding grounds for rumors. Ordinarily, such rumors fade as reliable information emerges. But in this case the U.S. military remained silent, while the Venezuelan government, like many authoritarian regimes, is notorious for withholding information. 

This is a classic setup for the proliferation of rumors, whose intensity is proportional to both the perceived importance of the event and the level of ambiguity.4 Situations such as this are fertile soil for exaggerations, half-truths, conspiracy theories, and outright fabrications. Even after the situation on the ground stabilized and many early rumors were confirmed or denied, claims about the use of a sonic weapon not only persisted but flourished.

From WhatsApp to the World

One challenge in tracing this story to its origins is that it began in Venezuela, where the earliest accounts circulated in Spanish. Fortunately, one of us (DZ) is a fluent speaker and was able to examine the primary sources. In the days that followed, audio recordings rapidly spread on WhatsApp, describing events through purported firsthand accounts from soldiers and relatives near the impact zones.

On January 9, one story began circulating widely. In it, a supposed member of a colectivo—an armed militia that controls different sections of the city—described how the attack unfolded in the historic 23 de Enero neighborhood of western Caracas.

The audio was posted on the YouTube channel of Emmy Award-winning Venezuelan journalist Casto Ocando, and soon accumulated over one million views.5 In it, an anonymous narrator describes the attack.

“They shut down the entire electrical system, knocked out the radars, knocked out everything.”

He then recounts how a soldier activated a Russian-made anti-aircraft defense system to attack the helicopters.

“When he fired it, a drone immediately detected it and, well, they died, they killed them, all of them [the soldiers] with a single bomb… There are many dead, many people burned, many people wounded. I’ll send you a video, there are approximately 100 military personnel dead,” he adds.6

The narrator’s confidence in precise casualty figures amid the chaos of a nighttime attack is itself a red flag.

The alleged eyewitness continues:

“There were only eight helicopters and 20 men…who killed 200 men, 32 with a single shot, plus presidential guards of honor and civilians.”

He then describes weapons that “fired more than 300 bullets per minute,” adding,

“a thing that made me bleed, I was bleeding from my nose and didn’t know what it was, it was a whistle that sounded throughout Caracas and made people bleed from their noses and ears. We couldn’t move, that whistle immobilized us, they say it’s what’s called a sonic shockwave. It was something really horrible….”

The clip ends with claims that Americans

“don’t fight fair. They fight from above, with drones. The speeds of those helicopters…. They only sent eight helicopters and destroyed all of Caracas.”  

The description of a sound that causes nosebleeds and immobilization across an entire city is physically implausible. While acoustic weapons such as Long Range Acoustic Devices (LRADs) can cause pain and disorientation at close range, their effects diminish rapidly with distance as the sound energy disperses. No known acoustic technology can cause bleeding from the ears and nose at a distance, let alone city-wide.

Enter, Stage Right, Mike Netter 

On January 9, the WhatsApp audio recording quickly spread across various social networks. The following day, popular conservative influencer Mike Netter posted on X a strikingly similar story, which he attributed to a security guard loyal to Nicolás Maduro.

🚨This account from a Venezuelan security guard loyal to Nicolás Maduro is absolutely chilling—and it explains a lot about why the tone across Latin America suddenly changed.

Security Guard: On the day of the operation, we didn't hear anything coming. We were on guard, but… pic.twitter.com/392mQuakYV

— Mike Netter (@nettermike) January 10, 2026

It is reproduced below so readers can judge for themselves:

Security Guard: On the day of the operation…suddenly all our radar systems shut down without any explanation. The next thing we saw were drones, a lot of drones, flying over our positions…. After those drones appeared, some helicopters arrived, but there were very few. I think barely eight helicopters. From those helicopters, soldiers came down, but a very small number. Maybe twenty men. But those men were technologically very advanced…

Interviewer: And then the battle began? 

Security Guard: Yes, but it was a massacre. We were hundreds, but we had no chance. They were shooting with such precision and speed... it seemed like each soldier was firing 300 rounds per minute… At one point, they launched something... it was like a very intense sound wave. Suddenly I felt like my head was exploding from the inside. We all started bleeding from the nose. Some were vomiting blood. We fell to the ground, unable to move…. Those twenty men, without a single casualty, killed hundreds of us. We had no way to compete with their technology, with their weapons. I swear, I’ve never seen anything like it. We couldn't even stand up after that sonic weapon or whatever it was.

Interviewer: So, do you think the rest of the region should think twice before confronting the Americans?

Security Guard: Without a doubt. I’m sending a warning to anyone who thinks they can fight the United States. They have no idea what they’re capable of. After what I saw, I never want to be on the other side of that again. They’re not to be messed with.

Interviewer: And now that Trump has said Mexico is on the list, do you think the situation will change in Latin America? 

Security Guard: Definitely. No one wants to go through what we went through. Now everyone thinks twice. What happened here is going to change a lot of things, not just in Venezuela but throughout the region. 

The story was originally posted in English, itself suspicious for a supposed Venezuelan guard. Had this been a genuine interview with a colectivo member, the original would have almost certainly appeared in Spanish. No Spanish-language version has ever surfaced. The “interview” appears to be a reconstruction of the WhatsApp audio, repackaged in a question-and-answer format.

Another red flag is the distinctly pro-American tone, which is unlikely to have come from a foreign fighter, let alone one sworn to defend his government. Defeated soldiers do not typically serve as unsolicited recruitment posters for the enemy. The guard also conveniently uses round figures (eight helicopters, twenty men, 300 rounds per minute), makes no mention of his comrades’ courage or resistance, and ends with a warning directed at Mexico, precisely echoing President Trump’s rhetoric at the time.

Journalists are trained to go to the source. Accordingly, we contacted Netter to request details about the alleged guard and the interviewer, and asked him to share the original Spanish source of the interview with us. He said he could not do so without first asking the source, which he promised to do. As of this writing, he has not gotten back to us.

Press Secretary Leavitt Intervenes

Mike Netter’s post could have disappeared into the daily churn of social media had it not been for White House press secretary Karoline Leavitt, who shared it on her official account with the dramatic text: “Stop what you are doing and read this...”

Stop what you are doing and read this…
🇺🇸🇺🇸🇺🇸🇺🇸🇺🇸 https://t.co/v9OsbdLn1q

— Karoline Leavitt (@PressSec) January 10, 2026

This endorsement dramatically elevated the story’s perceived credibility, despite the absence of any corroborating evidence. In effect, an unverified anonymous social media claim received a semi-official White House endorsement, a departure from the press secretary’s traditional role as a gatekeeper of verified information. As a result, Netter’s post has gained over 30 million views and 10,000 responses.

Ever Increasing Circles

On January 10, the New York Post repeated Netter’s account under the headline: “US used powerful mystery weapon that brought Venezuelan soldiers to their knees during Maduro raid: witness account.”7 The story recounted the most spectacular elements: the sound wave, exploding heads, nosebleeds, and vomiting.

Curiously, the same YouTube channel of Casto Ocando that had released the original audio later uploaded a new video citing the Post article, treating the Post’s reconstruction as independent confirmation of its own earlier material. Other media outlets went further, falsely claiming that the Venezuelan guard had been interviewed by the New York Post.8

This process, where secondary reporting is mistaken for a primary source, is a classic example of how media myths are manufactured through journalistic shortcuts.

Notably, none of the Venezuelan soldiers who later appeared on camera—people whose identities and ranks are known—mentioned the use of sonic weapons. Footage aired on the Chavista network Telesur depicts young men wounded by shrapnel describing missile strikes, drones, and gunfire. None reported bleeding from the nose, vomiting, or sensations of cranial explosions.9 Nor are there civilian testimonies from Caracas describing a city-wide whistling sound. Some soldiers and civilians did report buzzing sounds, including individuals near Fort Tiuna, one of the attack sites. However, these sounds are readily explained by falling ordnance and whizzing bullets—mundane combat phenomena, not evidence of exotic weaponry.

It is also conspicuous that during President Trump’s exclusive interview with the New York Post, published on January 24, he was asked about the “sonic weapon” rumors. Trump replied that the U.S. has “the discombobulator,” which disabled enemy equipment as American helicopters swooped in to attack Caracas. But he made no mention of its effects on people.10

It’s Similar to the Havana Syndrome

The symptoms described in the WhatsApp audio are strikingly similar to claims made during the Havana Syndrome scare. Recently, the intelligence community deemed the involvement of a foreign power “highly unlikely,” attributing Havana Syndrome to psychogenic and environmental factors rather than directed energy weapons.11

The Venezuelan sonic weapon narrative appears to be drawing from the same well of popular mythology. Furthermore, nosebleeds following an explosive military attack are far more likely to be caused by conventional factors such as blast pressure, dust, smoke inhalation, or even stress than by a hypothetical sonic weapon.

The narrator in the WhatsApp audio clip may be misattributing ordinary combat effects to an extraordinary cause: a classic pattern in rumor formation.

Under conditions of extreme stress, uncertainty, and sensory overload, people routinely seek out coherent explanations that give meaning to their own experiences. In the context of a sudden nighttime military strike, against a backdrop rife with ambiguity and anxiety, physical symptoms such as nosebleeds, dizziness, ringing in the ears, and temporary immobility are especially prone to being reinterpreted through the lens of culturally available narratives.

From a rumor and folklore perspective, the sonic weapon story fulfills a familiar psychological function: it collapses complex, confusing events into a single explanatory cause, providing closure amid uncertainty. The sonic weapon narrative transforms uncertainty into conviction and speculation into “fact.” This process reduces anxiety. As philosopher Susanne Langer once famously observed: humans possess a remarkable ability to adapt—except when confronted with chaos.12

A Familiar Pattern

The sonic weapon story follows a well-worn media myth template: an ambiguous event, an information vacuum, an anonymous account, amplification by politically motivated actors, and validation by authorities who should know better.

What began as a WhatsApp voice message from an anonymous militia member was transformed into a polished English-language “interview,” boosted by a partisan influencer, and essentially endorsed by the White House. At no stage was a shred of physical evidence produced. The “Discombobulator,” as far as the evidence shows, exists only in the fog of war, and in the imaginations of those eager to believe. 

It is also worth asking the cui bono question: “Who benefits from the sonic weapon narrative?” First, the U.S. government and military—by projecting overwhelming technological superiority. Second, pro-government Venezuelan sources also benefit from a story that excuses their rapid military defeat.

When both sides gain from a myth, its survival is all but guaranteed.

Categories: Critical Thinking, Skeptic

The Selective Rationality Trap

Skeptic.com feed - Tue, 02/03/2026 - 3:17pm
How Rational People Lower Standards of Reasoning When It Comes to Politicized Issues

One of the hardest things to accept, especially for people who care about rationality, is that epistemic rigor is rarely applied consistently. Most of us do not give up bad arguments. Instead, we give up standards of evidence when the conclusion becomes socially or morally important to us.

There are well-established psychological reasons why this happens. Decades of research in social psychology show that many of our beliefs are not just opinions we hold, but parts of who we are. They become woven into our identities, our friendships, and often our professional lives. 

Put more simply, we build our identities, friendships, and careers around certain beliefs. As a result, challenges to those beliefs are not experienced as abstract disagreements but as personal threats. Our self-preservation mechanism kicks in: We bend reality as far as necessary to preserve a flattering story about ourselves and our ingroup. Denial and aggression toward the outgroup follow naturally. 

Psychologists Henri Tajfel and John Turner, who developed Social Identity Theory, showed that people internalize the values and beliefs of the groups they belong to, treating them as extensions of the self. When those beliefs are questioned, the threat is processed much like a threat to your status or belonging. The reaction is often defensive rather than reflective. 

More recent work on motivated reasoning helps explain why such a reaction is so persistent. In the 1990s, psychologist Ziva Kunda demonstrated that people selectively evaluate evidence in ways that protect conclusions they are already motivated to believe. When a belief supports your identity or social standing, the mind unconsciously applies stricter standards to disconfirming evidence and looser standards to supporting evidence. 

Intelligence does not necessarily make you more objective; it can make you a more effective advocate for your own side.

Political scientist Dan Kahan later expanded this idea with what he called “identity-protective cognition.” His research showed that people with higher cognitive ability are often better, not worse, at rationalizing beliefs that align with their cultural or political identities. In other words, intelligence does not necessarily make you more objective; it can make you a more effective advocate for your own side! 

This body of research helps explain why challenges to core beliefs can feel existential. If your moral worldview underwrites your relationships, your career, or your sense of being a good person, abandoning it comes with real social and psychological costs. Under those conditions, defending the belief feels like defending your life as it is currently organized. 

Seen in this light, the selective abandonment of evidentiary standards is not a moral failing unique to any one group. It is a predictable human response to perceived identity threat. Reasoning shifts from a tool for understanding the world to a mechanism for self-preservation. 

I learned this firsthand during my years in the New Atheist movement. What struck me was how selective people’s skepticism could be. In debates about religion, the standards were ruthless. In debates about politics and social issues, those same standards were easily relaxed, and often vanished. 

Take prayer. For decades, skeptics have pointed to controlled trials showing no measurable benefit of intercessory prayer. The best-known example is the STEP trial, a randomized study of nearly 1,800 cardiac bypass patients published in The American Heart Journal. It found no improvement in outcomes for patients who were prayed for, and in one group outcomes were slightly worse among patients who knew they were being prayed for. Among the New Atheists, prayer was considered resolved beyond reasonable debate not only because the experimental evidence showed no effect, but because the underlying causal story itself collapsed upon examination. 

Reasoning shifts from a tool for understanding the world to a mechanism for self-preservation.

Philosophically, intercessory prayer fails at the most basic level: It posits an immaterial agent intervening in the physical world in ways that are neither specified nor independently detectable. There is no plausible mechanism, no dose-response relationship, no way to distinguish divine intervention from coincidence, regression to the mean, or natural recovery. 

When some studies do claim positive effects of prayer, they almost invariably collapse under close inspection—small sample sizes, multiple uncorrected comparisons, vague outcome measures, post hoc subgroup analyses, or outright publication bias. Some define “answered prayer” so flexibly that any outcome counts as success; others rely on self-reported well-being, which is especially vulnerable to expectancy effects and motivated reasoning. 

This is precisely why large, preregistered trials and systematic reviews, such as those published in The American Heart Journal, are treated as decisive: They close off these escape hatches. The conclusion that prayer “doesn’t work” is not dogma; it is the residue left after methodological rigor strips away every alternative explanation. 

Now compare that level of scrutiny to how many people treat evidence in politically favored domains. What matters here is not even whether these conclusions are right or wrong, but how they become insulated from refutation. 

In debates over trans healthcare, for example, studies in favor of many invasive medical interventions are based largely on self-reported outcomes, short follow-up periods, and substantial attrition. Despite these limitations, they are frequently treated as definitive. Criticisms that would be routine in almost any other medical context are instead dismissed as bad faith. But the fact that these issues involve real suffering should not exempt them from evidentiary scrutiny; it should raise the bar for it. In this case, the most comprehensive evidence available—multiple systematic reviews—has raised serious concerns about the overall quality of the evidence base, particularly with respect to pediatric interventions. 

The UK’s Cass Review, commissioned by the National Health Service and published in stages between 2022 and 2024, concluded that the evidence for puberty blockers and cross-sex hormones in adolescents is generally of low certainty. Similar conclusions were reached by Sweden’s National Board of Health and Welfare and Finland’s Council for Choices in Health Care, both of which revised clinical guidelines after finding the evidence weaker than previously assumed. None of this proves that such treatments never help anyone, especially adults who exhausted other options. It does show that claims of scientific certainty are unjustified. 

The same pattern appears at the level of theory. New Atheists made a cottage industry out of attacking unfalsifiable religious claims and god-of-the-gaps reasoning. Yet many of the same people now defend claims about “systemic discrimination” that are structured in exactly the same way: When disparities persist, they are treated as proof. When they shrink, the explanation retreats to subtler and less measurable mechanisms. Evidence against the claim rarely counts against the claim in the way it would in other domains. 

Consider policing. It is often treated as a settled fact that racial bias is the primary driver of police shootings. But when Harvard economist Roland Fryer examined multiple large national datasets on police use of force, he found that there were no racial differences in officer-involved shootings once relevant contextual factors—such as crime rates, encounter circumstances, and suspect behavior—were taken into account. 

What followed was not a broad reevaluation of the claim, but a shift in how it was framed. Rather than direct bias operating at the level of individual officers, explanations moved toward less specific and harder-to-measure forces: institutional culture, historical legacy, or diffuse forms of “structural” racism. These explanations may or may not be true, but they function differently from the original claim. Because they are more abstract and less tightly specified, they are also far more difficult to test or falsify. 

Here’s the key issue: The pattern we can observe in all this is not that evidence resolved the question, but that disconfirming evidence changed the nature of the claim itself. A hypothesis that was once presented as empirically straightforward became broader, more elastic, and increasingly insulated from direct empirical challenge. Sound familiar? It’s the god of the gaps fallacy. 

The same pattern appears in debates over wage gaps. Raw differences in average earnings between groups are often presented as straightforward evidence of discrimination. But when researchers such as June O’Neill and later Claudia Goldin showed that simply controlling for factors such as occupation, hours worked, experience, career interruptions, and job risk substantially narrows or eliminates many commonly cited wage disparities, the original claim quietly shifted. 

Evidence that would count against the claim in any other domain instead causes the claim to become broader, more abstract, and less falsifiable. 

It was no longer argued that some demographics were being paid less than others for the same work under the same conditions. Instead, the explanation moved upstream: Sexism or systemic racism were said to operate on the variables themselves, shaping career choices, work hours, and occupational sorting in ways that produced lower average pay. 

Again, these higher-level explanations may be partly true. But they function very differently from the initial claim. A hypothesis that began as a concrete, testable assertion about unequal pay for equal work became broader, more abstract, and harder to falsify. Evidence that would ordinarily count against the claim did not weaken it; it simply pushed the claim into less measurable territory. In other words, evidence that would count against the claim in any other domain instead causes the claim to become broader, more abstract, and less falsifiable. In these cases, disparities function the way miracles once did in theology: as proof of hidden forces. 

What bothered me about the New Atheism movement was not disagreement over conclusions. It was the collapse of standards. Arguments once dismissed as unscientific were rehabilitated the moment they became morally fashionable. I focus here on the New Atheism movement because it marked the first time in my life (and, as far as I can tell, the first time in history) that a movement, at least on its surface, explicitly committed itself to applying the highest standards of evidence to some of the most consequential claims about the world, and in doing so successfully and very publicly dismantled societal structures and beliefs that had endured for millennia. 

Skepticism is adopted when it flatters the self and abandoned when it threatens a moral narrative.

I’ve been thinking about all this for a long time, and I’ve come to suspect that most people—not by choice, but by evolutionary design—do not want or need a fully accurate understanding of how the world works. They want beliefs that protect their identity, signal membership in the right group, and increase their chances of (social) survival. Michael Shermer explained some of the evolutionary processes at hand here rather well in his books How We Believe and Conspiracy. In short, when it comes to patternicity—the human tendency to find meaningful patterns in meaningless noise—making Type 1 errors (i.e., finding nonexistent patterns) carries little evolutionary risk, while the opposite (i.e., missing real patterns) can often be the difference between life and death. This means that natural selection will favor strategies that make many incorrect causal associations in order to establish those that are essential for survival and reproduction. 

Under those conditions, reasoning becomes performative. Skepticism is adopted when it flatters the self and abandoned when it threatens a moral narrative. That is why debates on these topics so often drift toward unfalsifiable language and moral imperatives. 

A fair question follows: How does anyone know they are not doing the same thing? 

I think the real danger we should try to internalize is not that other people do this. It is that all of us do.

Categories: Critical Thinking, Skeptic

Skeptoid #1026: Vintage Ceramics: Decorative or Deadly?

Skeptoid Feed - Tue, 02/03/2026 - 2:00am

How concerned do you truly need to be about vintage ceramicware leaching lead into your food?

Learn about your ad choices: dovetail.prx.org/ad-choices
Categories: Critical Thinking, Skeptic

When AI Thinks for Us

Skeptic.com feed - Sat, 01/31/2026 - 3:18pm

In modern education, Artificial Intelligence is increasingly marketed as a cognitive prosthesis: a tool that extends our mental reach, automates drudgery, and supposedly frees us to focus on higher-order creativity and insight. According to this narrative, AI does not replace thinking—it liberates it.

But beneath the polished interface of today’s Large Language Models (LLMs) lies a neurological and ethical trap, one with especially serious implications for developing minds. We are witnessing a subtle but profound shift from using tools to thinking with them, and, increasingly, letting them think for us. 

The question Skeptic readers should be asking is not whether AI is impressive—it clearly is—but what kind of minds are formed when different kinds of thinking become optional. One place where this shift is especially revealing and especially consequential is moral development. 

Moral Development 

In moral education, how one arrives at a judgment matters more than which judgment one reaches. It is not about acquiring correct answers. Moral development involves cultivating the capacity to deliberate, restrain impulse, tolerate ambiguity, and reflect before acting. These capacities do not emerge automatically; rather, they are trained through effortful use. AI, however, is mostly indifferent to process and optimizes for output. 

When we outsource the labor of reflection to an algorithm, we risk a form of ethical atrophy. This is not a Luddite rejection of AI but a skeptical, evidence-based examination of benefit claims that rarely account for developmental cost. 

These are not merely philosophical concerns. They are grounded in the biology of how our moral capacities arise. To understand the stakes, we must begin with the adolescent brain. The teenage brain is not a finished system but more like a construction site. The prefrontal cortex (the executive center responsible for impulse control, long-term planning, and moral deliberation) undergoes rapid, uneven development throughout adolescence. Neural circuits that are exercised are strengthened and stabilized; those that are neglected are pruned away. This is not metaphor. It is biology. 

Moral development involves cultivating the capacity to deliberate, restrain impulse, tolerate ambiguity, and reflect before acting.

Moral development, as I explain in my book AI Ethics, Neuroscience, and Education, depends on what researchers call cognitive friction. This friction appears as hesitation before a difficult choice, the effort of weighing competing values, and the discomfort of uncertainty. These moments feel inefficient, but they are also indispensable. Generative AI, by design, removes this friction. 

When a student asks ChatGPT for a nuanced ethical argument and receives an instant, polished response, the brain skips the work. The student receives the answer without undergoing the cognitive struggle required to produce it. Ethical questions begin to resemble technical problems with downloadable solutions. Students lose the habit of lingering in uncertainty; the very space where moral reasoning takes shape. AI does not hesitate and generates outputs based on probability, not conscience. Humans, however, should hesitate. That hesitation is not weakness but moral functioning. 

Cognitive and Emotional Development 

If moral reasoning is one casualty of reliance on LLMs, it is far from the only one. Consider writing. Writing is not simply a way to display what we know—it is the process through which we figure out what we think. Organizing vague intuitions into a coherent argument places a heavy demand on the developing prefrontal cortex, and when AI performs this structuring, it deprives the brain of precisely the exercise it needs to mature. 

When we outsource the labor of reflection to an algorithm, we risk a form of ethical atrophy.

If intelligence is measured only by output, for example the finished essay or the correct solution, AI appears miraculous. But if intelligence is understood as the capacity to reason, deliberate, and restrain impulse, AI-driven cognitive offloading begins to resemble a neurological shortcut with long-term consequences, not unlike actual shortcuts that reshape the terrain. 

The danger does not stop at cognition. It extends into emotional and social development. We are entering an era of affective computing, in which machines are designed not merely to process information but to simulate emotional responsiveness. AI systems now speak in tones of empathy, reassurance, and concern. They never interrupt, misunderstand, or demand reciprocity. 

For an isolated or anxious adolescent, an AI companion can feel safer than unpredictable human relationships. It offers validation without vulnerability and empathy without risk. 

When a student asks ChatGPT for a nuanced ethical argument and receives an instant, polished response, the brain skips the work.

But moral growth, just like cognitive abilities, does not occur in comfort. Human relationships require patience, accountability, and recognition of another person’s interior life. They involve misunderstanding, disagreement, and the difficult work of repair. AI relationships require none of this. They are emotionally efficient, and ethically hollow. 

What they provide is a psychological sugar rush: immediate affirmation without the nutritional value of genuine connection. The ethical danger here is subtle: We are not merely giving students a new tool but also shaping their preferences. We are quietly training young people to prefer relationships that never challenge them. Over time, this fosters comfort with anthropomorphic simulations and anxiety toward real human empathy, which is messy, incomplete, and demanding. 

Toward Skeptical AI Literacy 

This is not a call to ban AI. The question is not whether we use AI in education, but how and when. 

Beyond the developmental effects described here, we should also note that LLMs hallucinate. With remarkable confidence, they fabricate sources, misstate facts, and invent details. This fluency creates trust. What emerges is a form of passive knowing: information is consumed without ownership or justification. In an era where machines can generate infinite content, the ability to distinguish truth from fluent fiction becomes one of the most critical civic skills we have. Ironically, our increasing reliance on AI may be eroding the vigilance that skill requires. 

We are quietly training young people to prefer relationships that never challenge them.

This means we need to be teaching students both how to prompt machines and how to resist them. In other words, AI output should be treated not as a truth to be consumed but as a hypothesis to be tested. We also need to teach the value of the seeming inefficiency of human thinking. 

Finally, the central ethical question of our time is not whether machines can think for us. It is whether in allowing them to do so too often we risk forgetting how to think for ourselves. We must be careful not to engineer the atrophy of human wisdom.

Categories: Critical Thinking, Skeptic

What is Truth, Anyway?

Skeptic.com feed - Tue, 01/27/2026 - 8:32am
  • Do you believe global warming is real?
  • Do you believe in the germ theory of disease?
  • Do you believe masks work and should be mandated?
  • Do you believe Jesus was resurrected?
  • Do you believe the Holocaust happened?
  • Do you believe there are objective morals and values in life?

As a public intellectual who engages in debates and conversations on a wide range of subjects, I am often asked questions such as these, which I found puzzling at first until I figured out that my interlocutors were confusing beliefs with facts. 

For example, I don’t “believe in” the germ theory of disease. I accept it as factually true, and as we’ve seen in the recent pandemic, a germ like the SARS-CoV-2 virus is not something to believe in or disbelieve in. It is simply a matter of fact, and it can cause a deadly disease like Covid-19. 

Whether or not vaccines and masks slow its spread is also a factual question that science, at least in principle, can answer, although whether or not vaccines and masks should be mandated by law is a political matter that differs from scientific questions. But asking you if you “believe in” the SARS-CoV-2 virus would be like asking you if you “believe” in gravity. Gravity is just a brute fact of nature. It’s not something to believe or disbelieve. 

As the science fiction author Philip K. Dick famously quipped, “Reality is that which, when you stop believing in it, doesn’t go away.”

Objective Truths and Justified True Belief

What we’re after here is knowledge, which philosophers traditionally define as justified true belief. That is, we want to know what is actually true, not just what we want to believe is true. The problem is that none of us are omniscient. If there is an omniscient God, it’s not me, and it’s also not you. Or, in the secular equivalent, there is objective reality but I don’t know what it is, and neither do you.


Once we agree that there is objective truth out there to be discovered and that none of us knows for certain what it is, we need to work together through open dialogue in communities of truth-seekers to figure it out, starting by acknowledging our shortcomings as finite fallible beings subject to all the cognitive biases that come bundled with our reasoning capacities. The workaround for this problem is having adequate evidence to justify one’s beliefs. Here are two examples from science:

  • Dinosaurs went extinct around 65 million years ago. This is true by verification and replication of radiometric dating of volcanic layers above and below dinosaur fossils. Since each layer can be accurately dated, we infer that the age of a fossil falls between those two dates. Above the strata dated to 65 million years ago, there are no more dinosaurs. Ergo, we can assert with a high degree of confidence that this is an objective fact, and we can be satisfied in the truth of the proposition that dinosaurs went extinct around 65 million years ago, unless and until new data emerge.
  • Our universe came into existence at the Big Bang some 13.8 billion years ago. This is true based on the convergence of evidence of a wide range of phenomena such as the cosmic microwave background, the abundance of light elements like hydrogen and helium, the distribution of galaxies and the large-scale structure of the cosmos, the redshift of most galaxies that indicates they are all moving away from one another in a way that resembles a giant explosion, and the expansion of space-time itself that resulted from such a big bang, resulting in the accelerating expanding cosmos we see today.

Michael Shermer reminds us that the search for truth is not a luxury, but a necessity. This book is a powerful argument for why reality matters and a practical toolkit for how to find it.
―Sabine Hossenfelder, author of Existential Physics: A Scientist's Guide to Life's Biggest Questions

The above propositions are “true” in the sense that the evidence is so substantial that it would be unreasonable to withhold our provisional assent. At the same time, it’s not impossible, for example, that the dinosaurs went extinct recently, just after the creation of the universe some 10,000 years ago (as Young Earth Creationists assert). However, this proposition is so unlikely, so completely lacking in evidence, and so evidently grounded in religious faith, that we need not waste our time considering it any further (the debate about the age of the Earth was resolved over a century ago). 

Thus, a scientific truth is a claim for which the evidence is so substantial that it is rational to offer one’s provisional assent. Provisional is the key word here: scientific truths are temporary and could change with changing evidence.

The ECREE Principle, or Why Extraordinary Claims Require Extraordinary Evidence

In his 1980 television series Cosmos, in the episode on the possibility of extraterrestrial intelligence existing somewhere in the galaxy, or of aliens having visited Earth, Carl Sagan popularized a principle about proportioning one’s beliefs to the evidence, when he pronounced that “extraordinary claims require extraordinary evidence.” The ECREE principle was first articulated in the 18th century by the Scottish Enlightenment philosopher David Hume, who wrote in his 1748 An Enquiry Concerning Human Understanding: “a wise man proportions his belief to the evidence.” 

ECREE means that an ordinary claim requires only ordinary evidence, but an extraordinary claim requires extraordinary evidence. Here’s a quotidian example. I once took a road trip from my home in Southern California to the Esalen Institute in Big Sur, California, home of all things New Age. To get there I took the 210 freeway north to the 118 freeway north to the 101 freeway north to San Luis Obispo, where I exited to Highway 1 and followed the Pacific Coast Highway north through Cambria and San Simeon until arriving at the storied home of the 1960s Human Potential Movement. Weirdly, just past Cambria, a bright light hovered over my car. Thinking it was a police helicopter, I pulled over to the side of the road, fearful that I had been busted for speeding (which I am wont to do). But it wasn’t the cops. It was the aliens, and they abducted me into their mothership and whisked me off to the Pleiades star cluster where their home planet is located. There I met extraterrestrial beings who gave me a message to take back to Earth—we must stop global warming and nuclear proliferation…or else.

Michael Shermer has a fine record as a long-time crusader for evidenced rationality. This fascinating and wide-ranging book should further enhance his impact on current controversies.
―Lord Martin Rees, Astronomer Royal, former President of the Royal Society

Now, which part of this story triggers your insistence on additional evidence? That’s obvious. My claim to have driven on California highways is ordinary and calls for only ordinary evidence (in this case, you can just take my word for it), but my claim to have been abducted by aliens and rocketed off to the Pleiadian home planet is extraordinary, and unless I can provide extraordinary evidence—like an instrument from the dashboard of the alien spaceship, or one of the aliens themselves—you should be skeptical.

ECREE also suggests that belief is not an either-or, on-off switch—not a discrete state of belief or disbelief, but a continuum along which you can place confidence in a claim according to the evidence: more evidence, more confidence; less evidence, less confidence. Consider the extraordinary claim that another bipedal primate—called Bigfoot, Yeti, or Sasquatch—survives somewhere on Earth. That would be quite extraordinary because, after centuries of searching, no such creature has ever been found.


Before we assent to such a claim we need extraordinary evidence—in this case a type specimen, what biologists call a holotype, in the form of an actual body. Blurry photographs, grainy videos, and stories about spooky things that happen at night when people are out camping do not constitute extraordinary evidence—they are barely even ordinary evidence—so it is reasonable for us to withhold our provisional assent.
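
One way to picture this continuum—a sketch of my own, with made-up numbers, not anything from the excerpt—is as Bayesian updating, where each piece of evidence shifts a probability rather than flipping a believe/disbelieve switch:

```python
# Bayes' rule as a model of belief on a continuum (illustrative numbers).
def update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H | E) after observing evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Extraordinary claim, so a very low prior (e.g., a surviving bipedal ape).
confidence = 0.001

# Weak evidence (a blurry photo) is almost as likely under "no Bigfoot"
# (hoaxes, bears), so repeated sightings barely move the needle.
for _ in range(3):
    confidence = update(confidence, p_e_given_h=0.6, p_e_given_not_h=0.5)
print(confidence)  # still well under 0.01

# Extraordinary evidence (a verified holotype) is decisive in one step.
print(update(0.001, p_e_given_h=0.99, p_e_given_not_h=0.0001))  # above 0.9
```

How far the needle moves depends on how much better the evidence fits the claim than its negation—which is exactly why blurry photos leave a rational observer about where they started.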

Impediments to Truth and How to Overcome Them

In addition to falling far short of omniscience, humans are also saddled with numerous cognitive biases, including (to name but a few): confirmation bias, hindsight bias, myside bias, attribution bias, sunk-cost bias, status-quo bias, anchoring bias, authority bias, believability bias, consistency bias, expectation bias, and the blind-spot bias, in which people can be trained to identify all these biases in other people but can’t seem to see the log in their own eye.

Truth lances the myth of truth's subjectivity, arguing (provocatively) that truth can generate moral absolutes. This stimulating, excellent book inspires you to spread the word that the Earth is not flat and that truth matters.
―Robert Sapolsky, author of Determined: A Science of Life Without Free Will

Then there is the suite of logical fallacies—emotive words, false analogies, ad hominem attacks, hasty generalization, either-or thinking, circular reasoning, reductio ad absurdum, and the slippery slope—along with after-the-fact reasoning, and especially why anecdotes are not data, why rumors do not equal reality, and why the unexplained is not necessarily the inexplicable.

With such listicles of cognitive biases and logical fallacies identified by philosophers and psychologists, it’s a wonder we can think at all. But we can and do, through experience, education, and instruction in the art and science of thinking. What follows are some of the methods developed by philosophers and psychologists to identify and work around all these impediments to the search for truth.

Practice Active Open-Mindedness. Research shows that when people are given the task of selecting the right answer to a problem by being told whether particular guesses are right or wrong, they do the following:

  • Immediately form a hypothesis and look only for examples to confirm it.
  • Do not seek evidence to disprove the hypothesis.
  • Are very slow to change the hypothesis even when it is obviously wrong.
  • If the information is too complex, adopt overly simple hypotheses or strategies for solutions.
  • If there is no solution—if the problem is a trick and “right” and “wrong” are given at random—form hypotheses about coincidental relationships they observed.

In their book Superforecasting, Philip Tetlock and Dan Gardner document how bad most people are at making predictions, and what skillsets those who are good at it employ. They begin with the results of extensive testing of people’s predictions. It’s not good. Even most so-called experts were no better than dart-tossing monkeys when their predictions were checked. When asked to make specific predictions—for example, “Will another country exit from the EU in the next two years?” and, presciently, “Will Russia annex additional Ukraine territory in the next three months?”—and their prognosticating feet were held to the empirical fire, Tetlock and Gardner found that most experts were overconfident (after all, they’re experts), encouraged by the lack of feedback on their accuracy (if no one reminds you of your misses you’ll only remember the hits—the confirmation bias), and victims of all the cognitive biases and illusions that plague the rest of us.

Michael Shermer has spent his career grappling with the slipperiest word in our language: truth. As someone who knows firsthand what happens when truth gets lost in noise and narrative, I'm grateful for Shermer's clear-eyed insistence that truth is not only real, but necessary.
―Amanda Knox, author of Free: My Search for Meaning

The worst forecasters were people with big ideas—grand theories about how the world works—such as left-wing pundits predicting class warfare that never came, or right-wing commentators prophesying a socialistic demise of the free enterprise system that never happened. Failed predictions are hand-waved away—“This means nothing!” “Just you wait!” Superforecasters, by contrast, practice active open-mindedness, which Tetlock and Gardner defined quantitatively by asking experts “Do you agree or disagree with the following statements?” Superforecasters were more likely to agree that:

  • People should take into consideration evidence that goes against their beliefs.
  • It is more useful to pay attention to those who disagree with you than to pay attention to those who agree.
  • Even major events like World War II or 9/11 could have turned out very differently.
  • Randomness is often a factor in our personal lives.

Superforecasters were more likely to disagree that:

  • Changing your mind is a sign of weakness.
  • Intuition is the best guide in making decisions.
  • It is important to persevere in your beliefs even when evidence is brought to bear against them.
  • Everything happens for a reason.
  • There are no accidents or coincidences. 

The psychologist Gordon Pennycook and his colleagues developed their own instrument for measuring active open-mindedness, in which people are asked whether they agree or disagree with the following statements, where the more open-minded answer is indicated in parentheses:

  • Beliefs should always be revised in response to new information or evidence. (agree)
  • People should always take into consideration evidence that goes against their beliefs. (agree)
  • I believe that loyalty to one’s ideals and principles is more important than “open-mindedness.” (disagree)
  • No one can talk me out of something I know is right. (disagree)
  • Certain beliefs are just too important to abandon no matter how good a case can be made against them. (disagree)
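
As a concrete illustration of how such an instrument might be tallied—a hypothetical scoring sketch of my own, not Pennycook’s published procedure—agree-keyed items count toward the score directly, while reverse-keyed items count when the respondent disagrees:

```python
# Hypothetical scoring sketch for the five items above (not the published
# instrument): True means "agree" is the more open-minded answer.
KEY = [True, True, False, False, False]

def aot_score(answers):
    """answers: 'agree'/'disagree' per item; returns 0..len(KEY)."""
    return sum(
        (answer == "agree") == agree_is_open
        for answer, agree_is_open in zip(answers, KEY)
    )

# Maximally open-minded responses score 5; the reverse pattern scores 0.
print(aot_score(["agree", "agree", "disagree", "disagree", "disagree"]))  # 5
print(aot_score(["disagree", "disagree", "agree", "agree", "agree"]))     # 0
```

Reverse-keyed items like this are a standard guard against acquiescence bias—the tendency to agree with any statement regardless of content.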

Active open-mindedness is a cogent tool of reason for assessing the truth value of any claim or idea—one of a suite of rational skills that, like reason itself, must be cultivated through education and practice.

Michael Shermer pulls no punches: in a world where opinion too often masquerades as fact, he dismantles delusion and arms us with the tools to meet reality head-on.
―Brian Greene, author of Until the End of Time: Mind, Matter, and Our Search for Meaning in an Evolving Universe

Protect and Defend the Constitution of Knowledge

Objective facts in support of provisional truths about the world are determined by tried-and-true methods developed over the centuries since the Scientific Revolution and the Enlightenment in what are sometimes called rationality communities—scholars, scientists, and researchers who collect data, form and test hypotheses, present their findings to colleagues at conferences, publish their papers in peer-reviewed journals and books, and reinforce the norms of truth-telling among their colleagues, their students, and themselves. In his book The Constitution of Knowledge, the journalist and civil rights activist Jonathan Rauch outlines and defends the epistemic operating system of Enlightenment liberalism—the social rules for attaining reliable knowledge when people cannot agree on what is true. Although these communities differ in the details of what, exactly, should be done to determine justified true belief, Rauch suggests several features held in common that constitute the constitution of knowledge:

  • Fallibilism. The understanding that we might be wrong.
  • Objectivity. A commitment to the proposition that there is a reality and we can know it through reason and empiricism.
  • Disconfirmation. Challenging or testing any and all claims through peer review and replication (science), editing and fact-checking (journalism), adversarial lawyers (the law), and red-team review (business).
  • Accountability. We should all be held accountable for our mistakes.
  • Pluralism. An insistence on viewpoint diversity.

The most important norm of all is the freedom to critique or challenge any and all ideas. Why?

  • We might be completely right but still learn something new in hearing what someone else has to say.
  • We might be partially right and partially wrong, and by listening to other viewpoints we might stand corrected and refine and improve our beliefs. 
  • We might be completely wrong, so hearing criticism or counterpoint gives us the opportunity to change our minds and improve our thinking. 
  • By listening to the opinions of others we have the opportunity to develop stronger arguments and build better facts for our positions. 
  • My freedom to speak and dissent is inextricably tied to your freedom to speak and dissent. If I censor you, why shouldn’t you censor me? If you silence me, why shouldn’t I silence you? 

If you disagree with me, it is the norms and customs of free speech and open dialogue that allow you to do so. From those open dialogues, debates, and disputations, the truth in time emerges.

Excerpt from Truth: What It Is, How to Find It, and Why It Still Matters, Johns Hopkins University Press. January 27, 2026

Categories: Critical Thinking, Skeptic

Skeptoid #1025: Pop Quiz: Space Quandaries

Skeptoid Feed - Tue, 01/27/2026 - 2:00am

Oh no! Another pop quiz. Take the challenge: 9 questions about space. Think you can get them all?

Learn about your ad choices: dovetail.prx.org/ad-choices
Categories: Critical Thinking, Skeptic

Skeptoid #1024: The Van Meter Visitors

Skeptoid Feed - Tue, 01/20/2026 - 2:00am

A century-old hoax takes wing again, proof that good stories never stay buried.

Learn about your ad choices: dovetail.prx.org/ad-choices
Categories: Critical Thinking, Skeptic

A Skeptic’s Guide to Ozempic and Other GLP-1 Agonists

Skeptic.com feed - Mon, 01/19/2026 - 2:20pm
A new compass for the SKEPDOC column. This column was founded by Harriet Hall, MD (1945–2023) who wrote it from 2006 to 2023. In 2026, we welcome William Meller, MD, to the helm. As an expert in evolutionary medicine, Dr. Meller will be our guide in navigating the deep biological history of our species to find the “True North” of human health.

I have been practicing medicine for more than 40 years. During that time the management of obesity and Type 2 diabetes (T2DM)—the kind that usually is caused by being overweight—often felt like Sisyphus pushing a boulder up a hill, only to have it roll back down, often heavier than before. We faced a “diabesity” epidemic where the available tools were blunt instruments at best.

Lifestyle interventions—meaning trying to get someone to change their behavior—were both the most and the least effective method we had. Most, because in the less than two percent of patients who succeeded, they worked very well. Least, because, well … 98 percent failed. And they failed because all of our evolutionary history (“See food? Eat it!”) was working against them. This is the mismatch theory: a mismatch between the environment of our evolutionary ancestry, which designed our brains to seek foods that were at once rare and nutritious (sweets and fats), and the modern environment, in which such foods are so overabundant that we eat far beyond the saturation point.

The pharmacological options were often disappointing: sulfonylureas and insulin lower blood sugar but cause weight gain, exacerbating the underlying problem. Bariatric surgery works, but it is invasive and carries surgical as well as lifelong nutritional risks.

Into this therapeutic desert crawled the Gila monster, a venomous lizard native to the American Southwest from whose venom researchers derived GLP-1 receptor agonists (glucagon-like peptide-1 receptor agonists)—medications that mimic the natural GLP-1 hormone. They lower blood sugar, help control appetite, and promote weight loss by telling the pancreas to release more insulin when glucose is high, slowing the rate of stomach emptying, and signaling to the brain a sense of fullness.

As a skeptic, I am allergic to the word “miracle,” but when we look at the data for GLP receptor agonists, along with the innumerable before and after photos of successful weight loss transformations, we are forced to admit that we have moved from a realm of wishful thinking into one of potent pharmacology. But, as always in medicine, there is no free lunch. 

The Incretin Concept: From Gut to Glory 

The story begins with the “incretin effect”—the observation that glucose taken by mouth triggers a much stronger insulin response than glucose injected directly into a vein, because the gut releases hormones that prime the pancreas. The gut knows you are eating and tells the pancreas to get ready to pack away the extra calories as fat. In patients with Type 2 diabetes, this effect is blunted and the sugar floats around in the bloodstream much longer.

Scientists identified two main hormones responsible: Glucose-dependent Insulinotropic Polypeptide (GIP) and Glucagon-like Peptide-1 (GLP-1). The problem is that GIP doesn’t work well in diabetics. GLP-1 works beautifully—stimulating insulin, suppressing glucagon, and slowing gastric emptying—but it has a fatal flaw: it is destroyed by the enzyme DPP-4 within minutes of entering the bloodstream.

This led to two distinct pharmaceutical strategies. The earlier one was DPP-4 inhibitors: drugs like the “gliptins” block DPP-4, making the body’s own GLP-1 last longer. They are well tolerated, but their ability to lower blood sugar is modest and they generally do not cause weight loss.

The newer strategy was to engineer versions of GLP-1 that resist degradation. This is where the Gila monster strolled in. In the 1990s, while researching hormone-like compounds, Dr. John Eng noted a similarity between exendin-4, a peptide in Gila monster venom, and GLP-1—and exendin-4 resists breakdown by DPP-4.

The Evidence: Efficacy Beyond the Hype 

The first GLP-1 agonist, exenatide (Byetta, approved in 2005), required twice-daily injections and produced modest weight loss. But the pharmacology evolved rapidly. We moved to once-daily liraglutide, and then to the once-weekly heavyweights: dulaglutide, semaglutide (Ozempic and Wegovy), and the dual GIP and GLP-1 agonist tirzepatide (Mounjaro and Zepbound). 

The clinical trials, called LEAD, SUSTAIN, PIONEER, STEP, and SURPASS (you’ve got to just love the creative acronyms!) have generated data that are hard to dismiss: 

Glycemic Control: These drugs consistently outperform most oral antidiabetics in lowering blood sugar by 10 to 20 percent. 

Weight Loss: This is the game changer. While early drugs produced 2–4 kg of weight loss over six months, the newer agents are producing results previously only seen with surgery. In the STEP-1 trial, semaglutide 2.4 mg resulted in an approximately 15 percent body weight reduction. Tirzepatide pushed this further, achieving up to 22 percent weight loss in the SURMOUNT-1 trial. That is the effect of a 250-pound person losing 55 pounds! Who wouldn’t want some of that?! 
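
The arithmetic behind those percentages is easy to check—an illustrative calculation using the trial figures quoted above and the column’s own 250-pound example:

```python
# Converting trial-reported percentage weight loss into pounds for a
# 250-pound person (the column's own example).
start_weight_lb = 250
semaglutide_loss = start_weight_lb * 0.15  # STEP-1: ~15% reduction
tirzepatide_loss = start_weight_lb * 0.22  # SURMOUNT-1: up to 22% reduction
print(semaglutide_loss, tirzepatide_loss)  # 37.5 55.0
```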

Cardiovascular Outcomes: Perhaps most importantly, these drugs are not like some that just make numbers look better; they are saving lives. Liraglutide and semaglutide have demonstrated significant reductions in major adverse cardiovascular events (MACE), including heart attack and stroke, in high-risk populations. The SELECT trial recently showed semaglutide reduces MACE by 20 percent even in nondiabetic patients with cardiovascular disease. But don’t be fooled, it is not likely that these drugs have specific effects on the heart. It is probable that the fat loss alone is causing these benefits. 

Some Skeptical Scrutiny: The Risks 

If a drug sounds too good to be true, we must look for the catch. GLP-1 agonists have plenty.

The “Puke” Diet? The most common side effects of GLP-1 agonists are gastrointestinal: nausea, vomiting, diarrhea, and bloating. In some trials, up to 45 percent of patients experienced nausea. While this usually subsides, it raises a valid question: Are people losing weight because their metabolism is optimized, or because they feel too sick to eat? The mechanism involves central appetite suppression in the hypothalamus, but the “gastric braking” effect is real and unpleasant for many. 

The Pancreas and Thyroid Scare. Early observational data suggested a link between GLP-1 agonists and pancreatitis and pancreatic cancer. However, extensive reviews have not confirmed a causal link to pancreatic cancer, though a slight increase in pancreatitis persists in some data. This makes sense, as one of the major sites of GLP’s effects is on the pancreas. In the thyroid, these drugs cause C-cell tumors in rodents. Humans have far fewer GLP-1 receptors on their thyroid C-cells than rats, and so far no evidence of increased thyroid cancer has been confirmed in humans. Still, the Black Box warning remains: If you have a family history of endocrine tumors or medullary thyroid cancer, these drugs are not for you. 

Vanishing Muscle. Weight loss via GLP-1 agonists is not just fat loss, so overall body composition must be monitored. In the STEP-1 trial, DEXA scans showed that lean body mass (muscle and bone) accounted for nearly 40 percent of the weight lost. In older adults, this raises the specter of “sarcopenic obesity”—being frail and weak despite having excess fat. Losing muscle mass compromises physical function and metabolic health. If we are simply shrinking patients without preserving their strength, we may be trading one set of problems for another. Now, regular and increased exercise is part of the prescription for all patients taking GLP drugs, but studies on how well this works are still in progress. 

The Perioperative Peril. Because GLP-1 agonists delay gastric emptying, there have been reports of patients aspirating (inhaling) gastric contents during anesthesia, even after standard fasting protocols. This is a new, practical safety concern that surgical societies are rushing to address. 

Mental Health. Reports of suicidal ideation appeared in postmarketing monitoring of GLP-1 agonist users, prompting investigations by European regulators. However, recent large cohort studies have not supported an increased risk of suicidality compared to other diabetes medications. As with all centrally acting drugs, vigilance is required, but the current data are reassuring. 

A Lifetime Prescription? The most significant caveat for GLP-1 agonists is durability. Obesity can be a chronic, relapsing disease. Trials show that when patients stop taking semaglutide, they regain two-thirds of the lost weight within a year, and cardiometabolic improvements revert toward baseline. This implies that these are not “cures” but lifelong therapies, much like blood pressure medication. 

Financial Toxicity. As I write this, these drugs are prohibitively expensive, creating a massive public health gap. We also saw shortages that left diabetic patients unable to fill prescriptions because the supply was diverted to off-label weight loss use. GLP-1 agonists are not expensive to produce, however, and the patent on Ozempic expired in January of 2026 in Canada and China (and lasts until 2030 in the U.S.), but I expect the market to bring the costs down dramatically over the next few years. As of this year, close to 12 percent of Americans have tried it at least once. 

Needles Versus Pills 

If there is one thing that holds patients back from the current crop of injectable incretins it is the needle. Despite the efficacy of weekly injections, people prefer pills. The pharmaceutical industry, never one to leave money on the table, has been racing to develop an oral alternative that doesn’t require the strict fasting rituals of earlier attempts like oral semaglutide. Enter orforglipron, the latest contender in the “nonpeptide small molecule” class, which promises the benefits of GLPs without the injection or the fuss. 

Unlike existing peptide predecessors that are digested by stomach acid unless armored with absorption enhancers, orforglipron is a chemical—a small molecule designed to survive the GI tract and activate the GLP-1 receptor directly. The data from the ATTAIN-1 trial, published in September 2025, look good. Patients on the 36 mg dose achieved an average weight loss of 11.2 percent over 72 weeks, compared to just 2.1 percent for placebo. No needles. And this pill does not require the “empty stomach, no water, wait 30 minutes” song-and-dance required by oral semaglutide; it can be taken with or without food. 

However, let’s look a little past the convenience. While an 11.2 percent average weight loss is clinically significant, it trails behind the 13.7 percent average reduction seen with semaglutide and 20.2 percent with tirzepatide. Furthermore, the biology of GLP-1 agonism remains the same regardless of delivery method: You cannot cheat physiology. In the ATTAIN-1 trial, adverse events led to treatment discontinuation in up to 10.3 percent of patients on the drug, compared to only 2.7 percent on placebo. The side effects are the usual suspects—gastrointestinal distress, nausea, and constipation—confirming that oral delivery does not bypass the “gastric braking” misery. 

We must also remain vigilant regarding safety. The development of a similar small molecule, lotiglipron, was unceremoniously halted due to liver toxicity concerns. While orforglipron has passed its Phase 3 hurdles without these specific signals so far, the history of pharmacology teaches us that rare, serious adverse events often lurk in the postmarketing shadows. 

Additionally, while proponents argue that small molecules are cheaper to manufacture than biologics, whether those savings will be passed on to the patient or simply absorbed into the profit margins remains to be seen, with projected self-pay costs in some cases exceeding $1,000 per month. Orforglipron represents a technological leap, but it is not a magic wand; it is simply a more convenient way to induce the same physiological trade-offs we have seen over the last several years with the shots. 

Conclusion 

Prior to the incretin era, our ability to manage the twin epidemics of diabetes and obesity was dishearteningly limited. GLP-1 receptor agonists represent a hard-earned pharmacological breakthrough, offering potent glucose control and unprecedented weight loss. 

However, skepticism is still warranted regarding their indiscriminate use. They are already being used in numerous off-label ways, like shedding a few pounds before a wedding, allegedly decreasing cravings for addictive drugs like alcohol and narcotics, and purportedly even for the treatment of Alzheimer’s and Parkinson’s disease. There are ongoing studies for these uses, but early data are weak and the risks are unknown. These are serious medications with serious side effects, and they may require lifelong commitment. 

Caveat emptor.

Categories: Critical Thinking, Skeptic
