
It’s About Respect, Stupid

Skeptic.com feed - Sat, 02/14/2026 - 3:20pm

In 2020 Joe Biden became the first Democratic nominee in 36 years without an Ivy League degree. Obama, before him, filled no fewer than two-thirds of all cabinet positions with Ivy League graduates—over half of them drawn from either Harvard or Yale.1 In Congress today, 95 percent of House members and 100 percent of senators are college educated.

According to a recent study published in Nature, 54 percent of “high achievers” across a broad range of fields—law, science, art, business, and politics—hold degrees from the 34 most elite universities in the country.2 The sociologist Lauren Rivera, studying top firms in finance, consulting, and law, found that recruiters are jonesing for applicants from prestigious academic institutions: they typically target just three to five “core” universities in their hiring efforts—Harvard, Yale, Princeton, Stanford, and MIT, the usual suspects—then identify five to fifteen additional second-tier options—such as Berkeley, Amherst, and Duke—from which they will more tentatively accept resumés.3 Everyone else almost certainly never even gets a reply email. Why? Because, as one lawyer explained the strategy to Rivera, “Number one people go to number one schools.”

“If destruction be our lot, we must ourselves be its author and finisher.” —Abraham Lincoln

Given this new American caste system, it’s no surprise that 63 percent of Americans think that “experts in this country don’t understand the lives of people like me,” or that 69 percent feel the “political and economic elite don’t care about hardworking people.”4 And, I suggest, they’re not wrong. A culture that sanctifies college as the gateway to full citizenship corrodes, over time, the foundations of democratic life. It devalues work that doesn’t come with a degree, licenses contempt for those not formally educated, and locks the working class out of positions of power. The result isn’t just underrepresentation; it’s resentment. As the journalist David Goodhart writes, “We now have a single route into a single dominant cognitive class”; where “an enormous social vacuum cleaner has sucked up status from manual occupations, even skilled ones,” and appropriated it to white-collar jobs, even low-level ones, in “prosperous metropolitan centers and university towns”; and where broad civic contribution has been replaced with narrow intellectual consensus.5 The result is a backlash not against education, but against the assumption that only one kind of education counts.

“At a time when racism and sexism are out of favor,” writes Harvard philosopher Michael Sandel, “credentialism is the last acceptable prejudice.”6 In a cross-national study conducted in the United States, Britain, the Netherlands, and Belgium, a team of social psychologists led by Toon Kuppens found that the college-educated class held a greater bias against less educated people than against other disfavored groups.7 In a list that included Muslims, poor people, obese people, disabled people, and the working class, “stupid people” were the most disliked. Moreover, the researchers found that elites are unembarrassed by the prejudice: unlike homophobia or classism, it isn’t hidden, hedged, or softened—it’s worn openly, with an air of self-congratulation. As the Swedish political scientist Bo Rothstein observes, “The more than 150-year-old alliance between the industrial working class and what one might call the intellectual-cultural Left is over.”8

Today we are living through a strange time in American life in which the numbers have declared victory. By most standard economic measures—employment, wages, even household net worth—the working class is better off than it was a generation ago.9,10,11 The average elevator mechanic gets paid over $100,000 per year12; master plumbers can make more than double that.13 Even in Mississippi, our country’s poorest state, workers see higher average wages than in Germany, Britain, or Canada.14

Elites are unembarrassed by the prejudice: unlike homophobia or classism, it isn’t hidden, hedged, or softened—it’s worn openly, with an air of self-congratulation.

It is, for working-class Americans today, the best of times, objectively—and the worst of times, subjectively. This is not because the spreadsheets are wrong, but because we fail to count the things that history records in tone, not totals: mood, myth, and cultural resolve.

The Service Economy 

According to the most recent data available from the United States Bureau of Labor Statistics, nearly four out of five Americans work in the service sector.15 For most Americans in most states, that means retail, fast food, or some other smile-for-hire job located at the end of a checkout line.16 It’s a kind of work where labor isn’t just accomplished, it’s seen—performed under the soft surveillance of the American customer. So, beneath the inflation charts and unemployment rates, if you want to understand the feelings side of the postindustrial economy, you might start with tipping.

It is, today, perhaps our most American habit—tipping for service; whether it be good, bad, or not provided. In restaurants, hair salons, and hotel lobbies, Americans tip over a hundred billion dollars a year—indeed, more than any other country on earth, and more than all of them combined.17 We tip cab drivers and pool cleaners and dog groomers and coat checkers. We tip the doorman on the way in, the bellhop on the way up, and the concierge on the way out. Americans tip so much that, as one European put it—the whole “approach [has become] completely deranged and out of control.”18

However, it wasn’t always this way. In fact, for much of the early 20th century, it was Americans who mocked Europeans for tipping—seeing it as smug, corrupt, and born of feudal etiquette.19 States such as Iowa, South Carolina, and Tennessee—among others—outlawed the practice entirely20; and wherever it remained legal, businesses proudly posted signs that read “No Tipping Allowed.”21 Some hotels even installed “servidors”—two-way drawers that opened from both the hallway and the room—so staff could deliver laundry without being seen, and without being tipped.22 As the author William R. Scott put it in a book-length critique in 1916:

In an aristocracy a waiter may accept a tip and be servile without violating the ideals of the system. In the American democracy to be servile is incompatible with citizenship … Every tip given in the United States is a blow at our experiment in democracy … Tipping is the price of pride. It is what one American is willing to pay to induce another American to acknowledge inferiority. 

Somewhere along the way, however—somewhere between the Marshall Plan and the first McDonald’s Happy Meal—the roles reversed, and we became the punchline. Now it was the Americans who tipped like royals—and the Europeans who found the habit deranged.

It was during this time that the gesture was institutionalized—not out of custom or conscience, but because the Pullman Company, the National Restaurant Association, and eventually big tech sold it as part of the deal.23 Lobbying Congress, adding tip lines to receipts, and making feudalism feel American—if you’re the one tipping.24 Because on the other end—where the customer is always right—yes, the tip is now expected and yes, it is now appreciated; but gratuity has never been the same thing as respect, especially not when, for most working-class Americans, IHOP has become the least humiliating option.

The Status Economy 

We are signaling-obsessed, hierarchy-calibrated social apes. All of us, according to author Will Storr in The Status Game, walk around like buzzed-up antennas—attuned to the faintest frequency of admiration or disdain, gossip or snicker.25 For most of human history, it wasn’t guns, germs, or steel that mattered most; it was access to the cooperative networks and high-yield alliances of a species where insiders eat first and the gates are closely guarded. And so what governs our decisions—above all else, even when no one’s watching—is the paranoia of social scrutiny. In other words, it’s a cost-benefit analysis where the material outcome barely matters and utility is downstream of reputational impact.

Absent this understanding of human behavior, very little of it makes sense. It is a core theme in the work of the early 20th-century economist Thorstein Veblen, whose concept of “conspicuous consumption” describes how people often consume products they don’t need—or even want—in order to flaunt status and social class.26 Luxury watches that tell time worse and minimalist chairs you can’t sit on are purchases where the high price is the point.

Of course, it is no major insight to say that people buy things to show off. The anthropological record is rich with lavish feasts and displays of abundance. The famous “potlatch ceremonies” of Pacific Northwest Indian tribes, for example, involved burning immense stores of wealth—copper shields, hand-carved canoes that took years to build, blankets, oil, and food—generations of accumulated capital, in a single afternoon, just to signal status.27

But what about meditating, carrying around a well-worn copy of The New Yorker in your back pocket, or believing in climate change? Veblen’s brilliance was seeing that even our quietest preferences are currency in a market economy of social prestige. As British philosopher Dan Williams puts it: 

Much cognition is competitive and conspicuous. People strive to show off their intelligence, knowledge, and wisdom. They compete to win attention and recognition for making novel discoveries or producing rationalizations of what others want to believe. They often reason not to figure out the truth but to persuade and manage their reputation. They often form beliefs not to acquire knowledge but to signal their impressive qualities and loyalties. When people are angry, it’s rarely about money. It’s about being looked down on.

It’s the kind of signaling that thrives in what sociologists call “post-material economies” such as contemporary America.28 Because in a society maxed out on comfort—where even the ultrawealthy can’t buy a better Netflix or a softer couch—the only lines left to draw are ideological, and social distinction becomes the new class war. The rub, however, is that unlike the peacock’s tail—a hard-to-fake signal, metabolically costly and policed by survival—immaterial prestige hierarchies are cultural inventions; often arbitrary, often performative, and almost always enforced from the top down. In other words, social prestige isn’t earned—it’s distributed by those who already have it. As social scientists Johnston and Baumann described in a 2007 paper:

The dominant classes affirm their high social status through consumption of cultural forms consecrated by institutions with cultural authority. Through family socialization and formal education, class‑bound tastes for legitimate culture develop alongside aversions for unrefined, illegitimate, or popular culture.29

The elite don’t just consume goods. They consecrate tastes, turning culture into a class barrier such that status is socially assigned rather than materially demonstrated. French sociologist Pierre Bourdieu called it symbolic capital—where opinions double as vocabulary tests and entry fees for membership in the aristocracy.30 As Princeton’s Shamus Khan explains, “Culture is a resource used by elites to recognize one another and distribute opportunities on the basis of the display of appropriate attributes.”31

Observing today’s ruling class, social psychologist Rob Henderson has coined the term “luxury beliefs,” arguing that the experts, the celebrities, and the institutions are all fluent in the same woke-speak, and that their material abundance lets them focus almost exclusively on social justice issues that, ensconced as they are in their gated communities, have no effect on their own luxurious lives (or on those of the people they profess to be helping).32

The words turn and turn again—testing for status, enforcing the pecking order.33 And now, just as working-class Americans born into the industrial economy once rejected cash tips, those born into the culture-capital economy don’t want the tip either. They want respect. The redneck reluctance to simply “trust the experts” or pronounce it “people of color” instead of “colored people” isn’t about bigotry or Bible verses or disinformation—it’s about refusing the role of grateful recipient in someone else’s moral theater. It’s not anti-intellectualism or anti-love and kindness. It’s anti-elitism.

A culture that sanctifies college as the gateway to full citizenship corrodes, over time, the foundations of democratic life.

How is it that a born-rich multibillionaire has become the standard-bearer for the working class? It’s because his favorite food is McDonald’s; and because, to Nancy Pelosi, George Clooney, and my high school guidance counselor, Trump is trash. They see him the same way they see trailer park America—as tacky, ignorant, and disposable; always on the lowborn side of the tip. It’s a feeling well known in union organizing circles.34 When people are angry, it’s rarely about money. It’s about being looked down on.

A New Nationalism 

Culture can often be hard to think about because it doesn’t exist in the world of objects—it exists in the world as a perceptual experience. It has no mass, no edge, no location. It’s not made of things; it’s made of meanings—real, but not tangible. 

The cultural backlash hypothesis, the status threat hypothesis, the social isolation hypothesis, the political alienation hypothesis, the nostalgic deprivation hypothesis—a growing body of scholarship has emerged to name and quantify the immaterial contours of twenty-first-century populist discontent, all of it circling the same old, half-remembered truth.35,36,37,38,39

For most of history, kings, philosophers, and statesmen took seriously the idea that civilizations depend on symbolic cohesion—on rituals, traditions, and agreed-upon fictions capable of domesticating our most socially inconvenient biological biases. They understood, whether by insight or instinct, that there’s something important about ceremony and uniform and national character. That propaganda isn’t all bad. That done right, good slogans make good citizens. And good citizens make great nations. As Gidron and Hall put it in a recent paper: 

[I]ssues of social integration [must be taken] more seriously in studies of comparative political behavior. Such issues figured prominently in the work of an earlier era … but they fell out of fashion as decades of prosperity seemed to cement social integration.40

In the old economy it was simple. You had the rich, who lunched at steakhouses and voted Republican; the working class, who labored in factories and voted Democrat; and in between, the mass suburban middle class. When conflict came, the lines were clear: members of the working class joining forces with progressive intellectuals to oppose the moneyed elite. Yet every once in a while, a new, revolutionary class of citizens comes along and scrambles the whole social order. In the late 20th century it was the scholastic king—and the new culture-laureate class. He is not merely an academic; he is society’s central planner, a warden of elite passage, and the face of the new American aristocracy. As The New York Times columnist David Brooks put it:

If our old class structure was like a layer cake—rich, middle, and poor—the creative class is like a bowling ball that was dropped from a great height onto that cake. Chunks splattered everywhere.41

Outsourcing made economic sense, globalization was in large part inevitable, and cheap goods are always good politics—sure, fine. But for over fifty years now, neither political party has been able to solve the social problem of a postindustrial economy. And no American president has been able to tell a story good enough to replace the one previous generations called true. As sociologist Arlie Hochschild explained in a recent interview with The New York Times:

We keep looking for real policies. That’s not the thing. Trump offers a veneer of policies and a story, and we’ve got to tune in to the effect of that story on people who feel like the world’s melting and sinking … Because whatever the policies, these voters are following the story and the emotional payoff of that anti-shaming ritual. So we have to stop the story, reverse the story: Nobody stole your pride, we’re restoring it together.42

In the same way philanthropy never solves economic inequality, bigger and better information tips will never win the culture war—because it’s not about being rich or poor, stupid or smart; it’s about better than or worse than. And the only thing that can make a rich person feel worse than a poor person—or a smart person worse than a stupid one—is a national story written by poor people and stupid people too. It’s the sort of new nationalism that, in the past, has required several interconnected efforts. 

The Bottom Line 

Robert F. Kennedy, in March of 1968, in a speech at the University of Kansas, noted: “The gross national product can tell us everything about America except why we are proud that we are Americans.”43

Rubber in Akron. Meat in Chicago. Coal in Scranton. Steel in Gary. It used to be you knew a city by what it made—how it sounded, how it smelled. In 1950 Detroit was the richest city in the world—that’s right, the entire world.44 On Zug Island, they used to make the whole car, start to finish—iron ore smelted on one end, parts shaped and assembled along the way, and a new Ford rolled off the line at the other—no imports, no one else. It was vertical integration—of work, of community, of pride.

But by the 1970s a new day had dawned, the old days were gone, and the unraveling had begun. Over half the manufacturing jobs moved elsewhere, and a quarter of the population went too; with whole neighborhoods left to rot, Detroit, once called “the Paris of the Midwest,” became one of the deadliest cities in the country.45,46 From 1965 to 1974, homicides quintupled47; the central business district earned the name “zone of decay”; and businesses began installing bulletproof glass—floor to ceiling—to protect storefront clerks.

Just like that—two short decades transformed America’s motor city into America’s murder city. And burnt, bled, and bankrupt, the once shining example rolled out perhaps the saddest, most pitiful ad campaign in American history: “Say Nice Things About Detroit.”48

It’s not about being rich or poor, stupid or smart; it’s about better than or worse than.

The bottom line is this. Every new economy produces different winners and losers—it’s just the way it is. What happened in Detroit was, in many ways, what was expected. But when the losses came—when the bottom fell out for the millions of working-class Americans still there, still trying—it was treated not as a national obligation but as an unfortunate footnote to progress. Detroit was told to retrain, relocate, find a way to adjust—and when its people failed, just like the people still living in Akron, Scranton, and Gary, they were humiliated, cast as mascots of ignorance and failure. The problem is that the ignorant and the failed far outnumber those who are neither. And so, as Franklin Roosevelt said, it’s not “whether we add more to the abundance of those who have much” that matters—“it is whether we provide enough for those who have too little.”

Because when the empire falls—when the American experiment joins the long ledger of civilizations past—it won’t be at the hands of China or Russia or Al Qaeda or anyone else. We are the richest nation in the history of the world; no other society has ever wielded as much global influence; not even a coalition of all the world’s armies could best ours. “If destruction be our lot,” wrote a 28-year-old Abraham Lincoln, “we must ourselves be its author and finisher.”49 As “a nation of freemen, we must live through all time, or die by suicide.”

And if it comes to that—if we choose death—it won’t be about free trade or wages or unemployment rates any more than it was about taxes in 1776. Once again, it will be about respect.

Categories: Critical Thinking, Skeptic

Falling In Love With AI

neurologicablog Feed - Thu, 02/12/2026 - 6:25am

There are many ways in which our brains can be hacked. The brain is a complex, overlapping set of algorithms that evolved to help us interact with our environment in ways that enhance survival and reproduction. However, while we evolved in the natural world, we now live in a world of technology, which gives us the ability to control our environment. We no longer have to simply adapt to the environment; we can adapt the environment to us. This partly means that we can alter the environment to “hack” our adaptive algorithms. Now we have artificial intelligence (AI), which has become a very powerful tool for hacking those brain pathways.

In the last decade chatbots have blown past the Turing Test – a test in which a blinded evaluator has to tell the difference between a live person and an AI through conversation alone. We appear to still be on the steep part of the curve in terms of improvements in these large language models and other forms of AI. What these applications have gotten very good at is mimicking human speech – including pauses, inflections, sighing, “ums”, and all the other imperfections that make speech sound genuinely human.

As an aside, these advances have rendered many sci-fi visions of the future quaint and obsolete. In Star Trek, for example, even a couple hundred years in the future, computers still sounded stilted and artificial. We could, however, retcon this choice and argue that the stilted computer voices of the sci-fi future were deliberate, not a limitation of the technology. Why would they do this? Well…

Current AI is already so good at mimicking human speech, including the underlying human emotion, that people are forming emotional attachments to these systems, or being emotionally manipulated by them. People are, literally, falling in love with their chatbots. You might argue that they just “think” they are falling in love, or that they are pretending to fall in love, but I see no reason not to take them at their word. I’m also not sure there is a meaningful difference between thinking one has fallen in love and actually falling in love – the same brain circuits, neurotransmitters, and feelings are involved.

Researchers generally consider there to be three neurological components to falling in love: lust, romance, and attachment. There is sexual attraction and lust, mediated by estrogen and testosterone. There is the romantic feeling of being in love, mediated by dopamine, serotonin, and norepinephrine. During sex and other forms of physical intimacy, endorphins are released, which make us feel happy, along with oxytocin, which is associated with feelings of attachment. Vasopressin is also involved, linked to long-term attachment and feelings of protectiveness. Do we experience the same biochemical reactions when interacting with AI? The data so far says yes.

In fact, this data goes back far before AI. Psychologists and neurologists have known for a long time that people can form emotional attachments to inanimate objects (objectophilia). This is the teddy bear phenomenon – even as young children we can form an attachment to an object and treat it as if it were a living thing, even if we know objectively it isn’t. This likely has to do with the cues that our brains use to divide up the world. We mentally categorize objects as either agents (things able to act on their own) or non-agents. For some reason, the algorithms we evolved to make this determination do not depend on whether the object is actually alive, but simply on whether it moves and acts as if it were alive. If something acts like an agent, or even looks like an agent, our brains categorize it that way and link it to our emotional centers, so we feel things about it.

As one researcher put it – AI is a teddy bear on steroids. Chatbots are designed to act human, to push our buttons and make us feel as if they are agents, and therefore to activate all the circuitry involved with how we feel about things our brain treats as agents. Not only that, but chatbots can be programmed to be friendly, available, a “good listener”, accommodating, and flattering. Some of these traits may inadvertently (or deliberately, depending upon how cynical you’re feeling) trigger romantic feelings. There are, of course, apps that deliberately design AI chatbots to be sexual and romantic (come meet your new AI girlfriend), complete with alluring AI-generated imagery, all custom-made, if you wish.

So yes, people can really fall in love with an AI. Why not? That fits with everything we know about psychology and how our brains work. It is an extreme example of us adapting our environment to hack our own adaptive circuitry, engineering feedback to maximally stimulate our reward circuitry. There are many ways in which we do this – porn, recreational drugs, roller coasters, gambling, ridiculously delicious foods. This can be harmless and fun, adding a little spice to our life, but pretty much every manifestation of hacking our reward circuitry is also associated with what we generally categorize as “addiction”. Addiction is one of those things that is hard to operationally define, because it is such a multifaceted spectrum, but in general something is considered an addiction when it becomes a net negative for your life. Addictions cause dysfunction in some way.

Can someone be “addicted” to their chatbot, whether the relationship is platonic or romantic? It seems so. But even short of an addiction, is it a good idea to spend a significant amount of time in an artificial relationship that mimics a human relationship, but is crafted to give you all the power and to be maximally flattering without demanding anything of you? Some psychologists are raising alarm bells, worrying about a spoiler effect. Such AI relationships can potentially spoil us for relationships with living humans, who have their own wants, desires, flaws, and demands. Relationships are work – but why do all that work when you can have a submissive mate that is perfectly happy making the relationship entirely about you? Of course, there is the physical intimacy part, but there are partial ways around that as well. This does, however, raise the question of how important physical intimacy is compared to emotional intimacy. I suspect there is a lot of individual variation here.

Again, we seem to be running a massive social experiment with some very real concerns. This also gets me back to the sci-fi retcon – perhaps it would be better for chatbots not to be too human. They could still fulfill their functions (other, of course, than being a romantic companion or similar) if they had an affect that was obviously artificial. This is a form of transparency – you know when you are talking to an AI because it talks like an AI, and it interacts in a way that is designed to be functional but specifically not to provoke any emotions, or to pretend to have emotions of its own. I suspect this would be a good thing for society, but also that nothing like this will happen on its own.

The post Falling In Love With AI first appeared on NeuroLogica Blog.

Categories: Skeptic

CRISPR-Cas9 and the Ethics of Scientific Inaction

Skeptic.com feed - Tue, 02/10/2026 - 1:17pm

The Burmese python is among the most destructive invasive species in North America. Introduced into South Florida through the exotic pet trade, it has spread rapidly through the Everglades, fundamentally altering one of the most biologically unique ecosystems on the continent. Long-term monitoring studies document dramatic declines—often exceeding 90 percent—in medium-sized mammal populations such as raccoons, opossums, foxes, and bobcats. These losses have cascaded throughout the food web, reshaping predator-prey dynamics and ecosystem function.

After decades of effort, scientists and wildlife managers have been forced to confront an uncomfortable reality: traditional control strategies do not work at scale. Hunting programs, bounties, tracking dogs, radio-tagged “Judas snakes,” and public outreach campaigns have all failed to meaningfully reduce python populations across the Everglades’ vast and inaccessible terrain.

This persistent failure raises a question that should lie at the heart of scientific skepticism but is rarely posed directly: Why are scientists so reluctant even to explore CRISPR-based genetic tools to suppress invasive species when the ecological damage of inaction is already severe, ongoing, and irreversible?

To be clear, genetic population control has not remained confined to laboratory models. In Florida, genetically engineered mosquitoes have already been released in open environments to combat mosquito-borne disease—most notably dengue fever—while also reducing the risk of transmission of Zika and chikungunya viruses. These programs, developed by the biotechnology firm Oxitec, involved releasing male Aedes aegypti mosquitoes engineered so that their offspring fail to survive to adulthood. The goal was straightforward: suppress mosquito populations without pesticides and reduce disease risk to humans.

These releases were approved by federal and state regulators, implemented in the Florida Keys, and subjected to extensive monitoring. The results were not merely symbolic. Field trials conducted by Oxitec demonstrated local reductions of Aedes aegypti populations on the order of 70–90 percent, levels widely regarded as sufficient to substantially reduce the risk of mosquito-borne disease transmission. Notably, Aedes aegypti is itself a non-native, invasive species in Florida, introduced through human activity and now deeply embedded in urban and suburban environments. While directly attributing changes in dengue, Zika, or chikungunya incidence to a single intervention is methodologically complex, the biological rationale is straightforward: fewer competent vectors mean fewer opportunities for disease spread. By any reasonable standard, the program achieved its primary objective—large-scale, targeted suppression of an invasive species without chemical insecticides.

The ethical reasoning behind these deployments was equally clear. Faced with ongoing public-health risks, scientists and policymakers concluded that genetic population suppression was preferable to widespread pesticide use, which carries well-documented ecological and human-health costs. Precision, reversibility, and reduced collateral damage were treated not as liabilities, but as virtues.

What is striking … is not that such tools exist or that they work, but how narrowly their application has been circumscribed.

That judgment did not emerge in a vacuum. For more than three decades, genetically modified organisms have been deployed across global agriculture at enormous scale. Genetically engineered crops have reduced pesticide use, increased yields, improved resistance to pests and disease, and in some cases enhanced nutritional content. These organisms have been consumed by billions of people and introduced into ecosystems worldwide, all under regulatory regimes far less restrictive than those now proposed for CRISPR-based conservation tools. Despite early public alarm and immense protests from the political left, the accumulated scientific evidence has shown GMO crops to be no more dangerous to human health or the environment than their conventional counterparts. In practice, genetic modification has become a routine—if still politically contested—part of modern environmental management.

What is striking, then, is not that such tools exist or that they work, but how narrowly their application has been circumscribed. Genetic population control has been judged acceptable when the target is an insect vector threatening human health, yet remains largely off-limits when the target is a vertebrate invasive species driving ecological collapse. The technology did not stall at the edge of feasibility or safety; it stalled at the edge of moral comfort. Human-centered risk is treated as actionable. Ecological destruction is treated as tolerable.

CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) is often portrayed as a radical, almost science-fictional technology—a sudden and unprecedented leap in human power over nature. Popular narratives frequently frame it as a tool that allows scientists to “rewrite life” at will, blurring the line between biology and engineering in ways that feel unsettling or unnatural. In reality, CRISPR did not emerge from speculative ambition, but from basic microbiological research into how bacteria survive viral infections. CRISPR is part of a naturally evolved bacterial immune system, one that has existed for billions of years and functions by recognizing and disabling invading genetic material.

This pattern of radical portrayal followed by gradual normalization is hardly unique to CRISPR. Earlier generations of genetic technologies were greeted with similar alarm. Recombinant DNA research in the 1970s provoked fears of runaway organisms and ecological catastrophe. Genetically modified crops were widely depicted as “unnatural,” dangerous, or morally suspect, despite being extensions of techniques humans had used for millennia to shape plant genomes through selective breeding. In each case, initial ethical anxiety was driven less by empirical evidence than by the perception that humans were crossing a symbolic boundary. Over time, as mechanisms became better understood and real-world outcomes failed to match apocalyptic predictions, these technologies were absorbed into routine scientific and agricultural practice. CRISPR now occupies the same cultural position once held by earlier genetic tools—exceptional not because of demonstrated harm, but because it makes human agency over biology unusually explicit.

What is CRISPR and how could it eliminate an invasive species?

When a bacterium survives a viral attack, it stores short fragments of the virus’s DNA in its own genome. These fragments serve as genetic “mugshots.” If the virus returns, the bacterium uses these sequences to guide specialized enzymes to recognize and cut the invader’s DNA, neutralizing the threat.

The most important of these enzymes is Cas9, a molecular tool capable of cutting DNA at a precisely specified location. In 2012, researchers including Jennifer Doudna demonstrated that this system could be repurposed as a programmable gene-editing technology. By supplying Cas9 with a custom guide RNA, scientists could target and cut virtually any DNA sequence with remarkable accuracy. In 2020, Doudna, along with Emmanuelle Charpentier, won the Nobel Prize in Chemistry for their discovery of the “CRISPR-Cas9 genetic scissors.”
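To make the idea of programmable targeting concrete, here is a minimal Python sketch of the search that guide design effectively performs: find where a 20-nucleotide guide sequence sits immediately next to an “NGG” PAM motif, near which Cas9 makes its cut. The sequence, guide, and function name below are invented for illustration; real guide-design tools also scan the opposite strand and score off-target matches.

import re

def find_cas9_targets(genome: str, guide: str) -> list:
    """Return approximate Cas9 cut positions for a 20-nt guide (sketch).

    Cas9 binds where the genomic sequence matches the guide RNA and is
    immediately followed by an 'NGG' PAM; it cuts roughly 3 bp upstream
    of the PAM. Forward strand only, exact matches only.
    """
    genome, guide = genome.upper(), guide.upper()
    cut_sites = []
    for match in re.finditer(re.escape(guide) + r"[ACGT]GG", genome):
        cut_sites.append(match.start() + len(guide) - 3)  # ~3 bp from the PAM
    return cut_sites

# Toy example: a made-up 52-bp sequence containing one guide match plus a PAM.
genome = "TTACGGATCCGATTGACCTGAAGCTAGCTTACGATGGCCAATTCCGGAACTT"
guide = "GACCTGAAGCTAGCTTACGA"  # hypothetical 20-nt guide
print(find_cas9_targets(genome, guide))  # -> [31]

Swap in a different guide and the same machinery points the enzyme somewhere else; that interchangeability is what “programmable” means here.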

This represented a qualitative leap beyond earlier genetic engineering techniques, which were slow, expensive, and often imprecise. CRISPR allows genes to be deleted, modified, or silenced with far greater control than any previous method.

This increase in precision has already translated into medical advances that, only a decade ago, would have been regarded as implausible or even miraculous. In several cases, CRISPR has moved beyond theory and into real-world clinical success, reshaping how genetic disease is treated.

Genetic approaches, by contrast, allow for ongoing monitoring, adjustment, and—if necessary—active reversal. The risk is not zero, but it is structured, visible, and governable in ways conservation biology has rarely had before.

One of the most striking examples involves inherited blood disorders such as sickle-cell disease and beta-thalassemia. Rather than attempting to correct the defective gene directly, researchers used CRISPR to reactivate fetal hemoglobin—a form of hemoglobin normally silenced after birth. In patients treated with this approach, debilitating symptoms have been dramatically reduced or eliminated, freeing individuals who once required frequent transfusions from lifelong medical dependence. These outcomes represent not incremental improvement, but functional cures.

CRISPR has also enabled remarkable progress in certain forms of blindness caused by single-gene mutations. In these cases, gene editing has been used directly in living patients to correct the underlying defect in retinal cells. For the first time, clinicians have been able to intervene at the level of genetic causation rather than managing symptoms after irreversible damage has occurred. Patients who were steadily losing vision have shown stabilization—and in some cases partial restoration of sight.

In cancer medicine, CRISPR has transformed immunotherapy by allowing scientists to engineer immune cells with unprecedented specificity. T cells can now be edited to better recognize tumors, resist immune exhaustion, or avoid attacking healthy tissue. These advances have expanded the reach of cell-based therapies and improved their safety profile, turning once-lethal cancers into manageable or even curable conditions for some patients.

What unites these examples is not technological novelty, but ethical clarity. In each case, CRISPR has been embraced because it replaces blunt, toxic, or ineffective treatments with targeted, biologically precise interventions. The risks are acknowledged, studied, and regulated—but they are not treated as disqualifying. When the benefits are concrete and human suffering is visible, society has proven willing to accept the responsible use of powerful genetic tools.

How does this translate into invasive-species control?

The most discussed application is the gene drive. Under normal sexual reproduction, each parent has roughly a 50 percent chance of passing on a given gene. A gene drive biases this process. By linking a genetic change to the CRISPR machinery itself, the altered gene is inherited by nearly all offspring, allowing it to spread rapidly through a population.

An artificial gene drive built with CRISPR-Cas9 works by programming a guide RNA to direct the Cas9 enzyme to cut the alternative version of a gene. When the cell repairs that cut, it copies the CRISPR-containing gene instead, ensuring that the edited version is passed on to nearly all offspring. (Source: Mariuswalter, CC BY-SA 4.0, via Wikimedia Commons)
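To see why that inheritance bias matters, here is a minimal deterministic sketch in Python of a textbook allele-frequency model; the function name and parameter values are illustrative, and it assumes random mating, no fitness cost, and non-overlapping generations. An ordinary gene introduced at 1 percent frequency stays rare, while a drive allele that converts most heterozygotes sweeps toward fixation within about a dozen generations.

def drive_frequency(q0, homing, generations):
    """Track the frequency of a drive allele across generations (sketch).

    q0      -- starting frequency of the drive allele
    homing  -- fraction of heterozygotes converted to drive homozygotes
               in the germline (0.0 = ordinary Mendelian inheritance)
    Assumes random mating, no fitness cost, non-overlapping generations.
    """
    q, history = q0, [q0]
    for _ in range(generations):
        # Drive-carrying gametes come from drive homozygotes (q^2) plus a
        # biased share, (1 + homing) / 2, of gametes from heterozygotes.
        q = q**2 + q * (1 - q) * (1 + homing)
        history.append(q)
    return history

print([round(q, 3) for q in drive_frequency(0.01, 0.0, 12)])   # Mendelian: stays ~1%
print([round(q, 3) for q in drive_frequency(0.01, 0.95, 12)])  # drive: sweeps toward 100%

Suppression designs layer a fitness or fertility cost onto this picture, but the underlying spreading dynamic is the same.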

Crucially, eliminating an invasive species does not require mass killing or ecological vandalism. The most conservative proposals focus on population suppression rather than extinction. CRISPR can be used to disrupt fertility genes, bias sex ratios (for example, producing mostly males), or induce sterility without affecting survival. Over successive generations, reproduction fails and population size declines.

Equally important, these interventions can be designed to be species-specific, targeting DNA sequences unique to the invasive organism. Unlike chemical controls, they do not spread indiscriminately through food webs. Unlike physical removal, they scale naturally with population size.

A common concern is that a gene drive designed to suppress Burmese pythons in Florida, for example, could somehow spread beyond its intended range. In a worst-case scenario, modified individuals might be transported—most likely by humans—back to the species’ native range in tropical and subtropical regions of the Old World. If a population-suppression drive were to establish itself there, it could threaten native python populations rather than invasive ones. This possibility is real enough to deserve serious consideration, but it is also far less catastrophic—and far more controllable—than it is often portrayed.

First, such spread would be biologically and geographically unlikely. The Everglades are thousands of miles from the python’s native range, with no natural migration pathway connecting the two. Any transcontinental movement would almost certainly require deliberate or accidental human transport, the very mechanism responsible for the original invasion. Second, gene drives can be designed to be regionally constrained, for example by targeting genetic variants common in the invasive population but rare or absent in native populations, or by incorporating threshold-dependent systems that fail to propagate below certain population densities.

CRISPR does not act like a genetic bomb. It alters inheritance. That distinction matters. 

Most importantly, CRISPR-based interventions are not a single, irreversible act. If an unintended spread were detected, there are multiple ways to halt or reverse progression. Researchers have already demonstrated the feasibility of reversal drives that overwrite earlier genetic changes, restoring normal inheritance patterns. In addition, releasing sufficient numbers of wild-type individuals can dilute or extinguish a suppression drive, while kill-switches and self-limiting designs can cause the system to collapse after a fixed number of generations.

In short, the relevant comparison is not between CRISPR and perfection, but between CRISPR and the tools currently in use. Chemical poisons, physical eradication, and habitat destruction offer no comparable capacity for recall or correction once deployed. Genetic approaches, by contrast, allow for ongoing monitoring, adjustment, and—if necessary—active reversal. The risk is not zero, but it is structured, visible, and governable in ways conservation biology has rarely had before.

CRISPR does not act like a genetic bomb. It alters inheritance. That distinction matters. 

Unfounded Fears

Despite this precision, CRISPR is widely treated as uniquely dangerous. This perception collapses under comparison.

Humans already intervene in ecosystems aggressively and often imprecisely. We drain wetlands, reroute rivers, apply pesticides, release biological control agents, and physically remove animals by the thousands. These interventions frequently produce collateral damage—not because intervention itself is misguided, but because it is undertaken with insufficient ecological understanding. Classic examples illustrate the danger of blunt solutions. 

In the 1930s, cane toads were introduced into Australia in an attempt to control beetles harming sugarcane crops. The toads failed to control the pests but thrived spectacularly themselves, spreading rapidly across the continent and poisoning native predators unadapted to their toxins. Similarly, mongooses were introduced to Hawaii to control rats in sugar plantations, only to prey instead on native birds and reptiles that had evolved without mammalian predators. In both cases, well-intentioned biological interventions backfired—not because humans acted, but because they acted crudely, deploying organisms broadly without precision, containment, or the ability to reverse course. These disasters argue not against intervention itself, but against uninformed and irreversible intervention.

CRISPR, by contrast, is the most targeted biological tool humans have ever developed. If risk is defined as the probability of unintended harm multiplied by the magnitude of that harm, it is far from obvious that CRISPR represents a new category of danger. In many contexts, it may represent a reduction in risk relative to existing practices.

Yet CRISPR is held to an ethical standard no other ecological tool has ever faced: near-zero tolerance for uncertainty.

A Brief History of Invasive-Species Eradication

The ethical hesitation surrounding CRISPR appears far less principled when placed alongside the long history of invasive-species eradication already embraced by conservation biology. For decades, conservationists have pursued aggressive—and often lethal—campaigns to remove non-native predators, particularly on islands where endemic species evolved without defenses against mammalian hunters.

New Zealand Kākāpō (Strigops habroptilus), by Jake Osborne via Flickr, CC BY-NC-SA 2.0

As vividly documented in William Stolzenburg’s Rat Island, invasive rats, cats, and other predators introduced inadvertently by humans have devastated island ecosystems worldwide. Flightless birds such as New Zealand’s kakapo, along with countless seabirds and reptiles, have been driven to the brink of extinction by predators they were evolutionarily unprepared to confront. Faced with these losses, conservationists have largely converged on a difficult conclusion: eradication, however uncomfortable, is preferable to permanent biodiversity collapse.

The primary tool for rat eradication has often been chemical poisoning, most notably anticoagulants such as brodifacoum. These compounds cause internal bleeding over the course of days, a process widely acknowledged to be painful. Their use has also produced unintended consequences, including secondary poisoning of birds of prey that consume contaminated rodents. Yet despite these ethical and ecological costs, eradication campaigns have continued—because the alternative is the irreversible loss of native species.

CRISPR deserves no exemption from scrutiny—but neither does it warrant a moral quarantine that more destructive methods escape entirely.

This history matters because it reveals a striking inconsistency. Conservation science already accepts deliberate, population-level elimination of invasive species using methods that are blunt, ecologically disruptive, and morally fraught. These approaches are justified, explicitly, as tragic but necessary tradeoffs.

Against this backdrop, objections to CRISPR take on a different character. Genetic approaches aimed at reproductive suppression rather than mass killing could, in principle, reduce or eliminate invasive populations without poisoning, trapping, or collateral damage to non-target species. They offer the possibility—still theoretical, but biologically grounded—of achieving the same conservation goals with less suffering and greater precision.

To be clear, gene drives introduce their own uncertainties. But uncertainty has never been grounds for abstention in conservation biology. Instead, uncertainty has been managed through testing, containment, and ongoing revision. CRISPR deserves no exemption from scrutiny—but neither does it warrant a moral quarantine that more destructive methods escape entirely.

Triage

The uncomfortable truth is that conservation already involves deciding which species live and which disappear. The real ethical question is not whether humans should exercise that power—we already do—but whether we are willing to consider tools that might allow us to exercise it more carefully, more precisely, and with fewer unintended victims.

Before CRISPR is dismissed as reckless or premature, it is worth asking a simpler question: what has already been tried—and at what cost?

Florida and federal agencies, along with conservation organizations, have spent tens of millions of dollars attempting to control Burmese python populations. No attempted action has achieved population-level suppression.

Among the most striking examples is the development of robotic prey decoys, including AI-assisted robotic rabbits designed to lure pythons into traps. These devices mimic the movement, heat signatures, and behavioral cues of live prey. They represent an impressive feat of engineering—complex, expensive, and technologically adventurous.

They are also revealing.

Robotic prey baits are essentially a high-tech extension of trapping. They operate on one animal at a time, across thousands of square miles of dense, inaccessible wetlands. Even when successful, they remove pythons incrementally, with no capacity to scale proportionally to population size. Meanwhile, reproduction continues unchecked.

When scientists decline even to explore genetic interventions, they are not abstaining from responsibility—they are exercising it selectively.

This matters because it exposes a profound inconsistency in how risk is evaluated. The same institutions that recoil at the hypothetical risks of CRISPR have already embraced experimental technologies deployed directly into the wild, large-scale ecological manipulation, and interventions with no realistic path to success.

Robotic prey baits are not inherently unethical. But they are far cruder, less targeted, and less scalable than genetic approaches—yet they trigger none of the moral alarm bells that CRISPR does.

Society, it seems, is already willing to experiment aggressively in the Everglades. 

The Burmese python did not arrive in Florida by natural dispersal. Its presence is the result of human action. Continuing to allow its ecological destruction is also a human choice. When scientists decline even to explore genetic interventions, they are not abstaining from responsibility—they are exercising it selectively.

But doing nothing is not neutral. 

Categories: Critical Thinking, Skeptic

Skeptoid #1027: Radioactive Relics: The Missing RTGs

Skeptoid Feed - Tue, 02/10/2026 - 2:00am

Radioactive nuclear generators sit out in the environment, posing a real hazard. They're mostly — but not all — in Russia.

Categories: Critical Thinking, Skeptic

Scientific Inconsistencies in the Quran: A Greater Challenge Than Its Violent Verses?

Skeptic.com feed - Mon, 02/09/2026 - 3:59pm

In contemporary critical discourse on Islam, significant attention is often devoted to the violence associated with this religion—whether through the history of Arab-Islamic conquests, modern terrorist acts committed in the name of Allah, or Quranic verses calling for religious warfare and corporal punishments. For many critics of the foundational sacred texts of Islam, the physical violence endorsed in these scriptures appears to be the most obvious problem to demonstrate and denounce. 

Thus, for example, when Allah states, in verse 34 of surah 4 of the Quran, that husbands must strike disobedient wives, one should easily conclude that domestic violence is compatible with Islam. Likewise, when Allah states, in verse 2 of surah 24, that those who engage in sexual intercourse outside of marriage must be punished with one hundred lashes, it should be concluded that private sexual life is subject to surveillance and even sanction in Islam.

I am, of course, able to distinguish between the sacred texts of Islam and those who believe in them. A religion should not necessarily be held responsible for the behavior of its followers. However, everything Allah says in the Quran necessarily commits Islam itself, since the only official and supreme author of Islam, capable of defining what Islam is or is not, is Allah himself. This paradigm is the founding dogma of the Quran, which is claimed to be, from the first to the last verse, the word of a perfect God who neither lies nor errs, valid at all times and in all places until the Day of Judgment. It is therefore impossible, for example, to reform the criminalization of freedom of conscience in Islam, because, according to the Quran, Allah has declared that those who do not believe in the Quranic verses (surah 4, verse 56) or in Allah and His Prophet Muhammad (surah 48, verse 13) will be eternally tortured in Hell after death.

This eternal promise, which will be fulfilled at the end of times, cannot logically be revoked by any human, temporal, or earthly decision preceding that end. Moreover, reforming Islam would amount to asking inherently weak, flawed, and sinful humans (surah 4, verse 28) to contradict and disavow Allah, the best of judges (surah 7, verse 87), who sent down a book whose verses are perfect (surah 11, verse 1); such a request is absurd from the perspective of this religion.

Most of the peaceful and Westernized Muslims I have encountered in my life rarely seem shaken in their faith by the most violent Quranic passages that call for hatred and punishment of innocent people condemned merely for their freedom or differences. The apparent casualness of peaceful believers in the face of their god’s warlike words often has a psychological root: cognitive dissonance.

The contradictions and scientific errors of an infallible god, supposed to know everything and never err, are harder to dispute.

Faith in Islam rests, among other things, on the belief that the Quran is a perfect text revealed by a just God who fights injustice. Yet for a Muslim living in a modern Western society where nonviolence, freedom of conscience, and equality of rights are sacred values, the violence advocated by Allah in the Quran contradicts the ideal of peace, which is the most consensual political and social argument possible. To resolve this dissonance, the peaceful Muslim generally adopts the strategy of avoidance. And what better way to deny the cause or consequence of a problem than to deny its very existence—or worse, to present it as a benefit? 

In order to survive in the 21st century, where fact-checking scrutinizes religious texts as thoroughly as political discourse, apologists of Islam have mastered the art of reinterpreting Quranic verses. These rhetorical sleights of hand—transforming every instance of the verbs “kill” or “fight” in Allah’s speech into a plea for tolerance and dialogue—obviously comfort peaceful and Westernized Muslims in their idealistic, yet illusory, vision of Islam. Many Muslims who follow a “religion of peace, love, and tolerance” will tell themselves that “The unbelievers to be fought must have been violent people against whom Allah called for self-defense” or “The domestic violence encouraged by Allah must surely consist of using purely symbolic violence through oratorical eloquence to bring reason to an unreasonable wife.”

As an ex-Muslim who has devoted many years to studying the logic and meaning of Quranic verses, I argue that it is more effective to discuss faith with other Muslims by speaking of science rather than violence. Muslims today often dismiss criticisms of the violence in Allah’s words as merely subjective, whereas science, facts, evidence, and even mathematics are seen as more objective.

The best apologists for Islam have certainly developed a whole arsenal of sophisms to relativize or justify the slightest violent word in the Quran, but the contradictions and scientific errors of an infallible god, supposed to know everything and never err, are harder to dispute. For this reason, in my book 100 Contradictions and Scientific Errors in the Quran (my best-known work here in France), I have thoroughly identified and analyzed an encyclopedic list of the 100 greatest lexical, scientific, narrative, mathematical, dialectical, and historical contradictions found in the Quran. I present two of them here, starting with a Quranic narrative contradiction. Allah, in the Quran, sometimes recounts the same historical event in two different surahs, such as when He announces to Zachariah through His angels that the latter will have a son named John. But in both of these surahs, the human behind Allah’s pen made the mistake of presenting the event with verbatim quotations, specifically first-person statements.

The discrepancy between these verbatim quotes demonstrates that if the author of the Quran can contradict his own work, even his most fervent believers can do so as well.

So what did Zachariah reply when Allah sent him the announcement of John’s birth? According to verse 40 of surah 3, Allah claims that Zachariah, surprised, responded: “My Lord, how will I have a boy when I have reached old age and my wife is barren?” Yet in verse 8 of surah 19, Allah claims that Zachariah at that same moment said: “My Lord, how will I have a boy when my wife is barren and I have reached extreme old age?” These two Quranic citations, between surahs 3 and 19, supposedly quoting the same statement made by Zachariah during a unique and precise event, should have been word-for-word identical. However, they invert the order of the two arguments relative to one another and feature a differing adjective—present in one but absent in the other. Each version of the historical and factual truth contradicts and invalidates the other, even though both are meant to be equally divine. The discrepancy between these verbatim quotes demonstrates that if the author of the Quran can contradict his own work, even his most fervent believers can do so as well.

Let us take another example of incoherence in the Quran, which leaves little room for subjectivity: mathematical errors. Several of Allah’s Quranic instructions regarding the calculation of inheritance shares are simply impossible to apply, as they contradict one another. For instance, in verse 12 of surah 4, Allah affirms that if a person dies without leaving any parent or child, but has a brother or a sister, then each of them is to receive one sixth of the inheritance: “And if a man or woman dies leaving no father, no mother and no child, but has a brother or a sister, then for each one of them is a sixth.” 

Let us now consider the only two possible interpretations of this instruction, which contains a subtle ambiguity that is difficult to discern at a glance. First, let us assume that the word “or” in the phrase “a brother or a sister” implies there is only one heir—either a brother or a sister. This would mean, according to verse 12 of surah 4, that Allah grants “a sixth” of the inheritance to the sister of a deceased person with no parent or child. However, later in the same surah, in verse 176, Allah states that the sister of a deceased person without parent or child must receive “half” of the inheritance: “Say Allah gives you a ruling about one who dies leaving no father, no mother and no child: if someone dies and has no child but has a sister, she shall have half of what he leaves.” This creates a blatant contradiction: in the same inheritance scenario, a single sister receives either one sixth or half of the estate.

To resolve this contradiction, Muslims might then be tempted to adopt the second (and only other) possible interpretation of the word “or” in “[if he] has a brother or a sister, then for each one of them is a sixth,” namely that Allah is referring to two individuals: one brother plus one sister. This would mean that the brother and sister are each to receive an equal share—namely one sixth. Yet, in verse 176 of surah 4, Allah explains that in a situation involving a deceased person, if there are brothers and sisters: “a male will have the share of two females.” 

Rational critique of the Quran, the hadiths, and the Prophet’s biography has become vastly more accessible and widespread than at any time in history.

There is no coherent logic underlying these contradictory instructions. How can Allah explain that a brother must receive the same share as a sister, and then that two brothers must receive the same as four sisters? Either the Prophet Muhammad became confused with the Quran that emerged from his fallible human imagination, or other humans—careless or deceitful—completed the Quran after him as they saw fit, despite the dogma of the Quran’s inviolability which attributes its authorship to Allah alone.
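
To make the arithmetic concrete, here is a minimal sketch in Python comparing the shares implied by the two readings of verse 4:12 with those implied by verse 4:176, in the simplified scenario described above (a deceased person with no parent and no child, survived only by siblings). The function names and scenario setup are illustrative assumptions for this sketch, not a model of Islamic inheritance law as a whole.

from fractions import Fraction

# Illustrative sketch only: a simplified scenario in which the deceased
# leaves no parent and no child, only siblings.

def shares_under_4_12(brothers, sisters):
    # Verse 4:12 as quoted above: "for each one of them is a sixth."
    return {"each brother": Fraction(1, 6), "each sister": Fraction(1, 6)}

def shares_under_4_176(brothers, sisters):
    # Verse 4:176 as quoted above: a lone sister receives half;
    # otherwise "a male will have the share of two females."
    if brothers == 0 and sisters == 1:
        return {"each sister": Fraction(1, 2)}
    units = 2 * brothers + sisters
    return {"each brother": Fraction(2, units), "each sister": Fraction(1, units)}

# Reading 1: a single sister -> 1/6 under 4:12 versus 1/2 under 4:176
print(shares_under_4_12(0, 1)["each sister"], shares_under_4_176(0, 1)["each sister"])

# Reading 2: one brother and one sister -> equal sixths under 4:12
# versus a 2:1 split under 4:176
print(shares_under_4_12(1, 1), shares_under_4_176(1, 1))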

♦ ♦ ♦

Until the late twentieth century, intellectual criticism of Islam’s sacred texts by ex-Muslims remained confined to discreet discussions, books of testimonies, or academic works that struggled to find a place in the public debate. But with the democratization of the internet, everything changed. Rational critique of the Quran, the hadiths, and the Prophet’s biography has become vastly more accessible and widespread than at any time in history.

More and more critics of Islam—ex-Muslims or not, anonymous or not—now dare to speak publicly about everything that worries them in Islam: its intolerance toward any dissenting thought, its violence, its misogyny, its scientific absurdities. Yet whether in Islamic countries, Europe, or elsewhere, ex-Muslims who criticize Islam openly remain few and often must live in hiding. Whether they live in countries where apostasy is illegal or in Western countries where they risk social death or even physical violence, many ex-Muslims fear revealing their departure from Islam to their families. Some pretend to remain Muslim.

The “battle of ideas” challenging Islam remains, even today, as stormy in the media as it is perilous to one’s personal safety. According to the sacred texts and legal tradition of Islam, leaving the religion and criticizing its foundations constitutes a religious crime whose legally prescribed punishment may extend up to death. This position derives directly from hadiths—the words and deeds of the Prophet Muhammad—classified as Sahih (authentic), such as Bukhari numbers 6878 and 6922, in which the Prophet Muhammad, defined by Allah (surah 33, verse 21) as a universal behavioral model for all Muslims, declared: “Whoever changes his religion, kill him!” These sacralized statements, criminalizing the loss of faith in Islam or the conversion of a Muslim to another religion, explain why even today, among the 42 Islamic countries (by constitution or by their predominantly Muslim population), not a single one recognizes or defends the right of a Muslim to leave Islam.

Categories: Critical Thinking, Skeptic

Uranium and Motivated Reasoning

neurologicablog Feed - Mon, 02/09/2026 - 6:06am

This post is only partly about uranium, but mostly about motivated reasoning – our ability to harness our reasoning power not to arrive at the most likely answer, but to support the answer we want to be true. But let’s chat about uranium for a bit. In the comments to my recent article on a renewable grid, one commenter referred to a blog post on Skeptical Science and quoted:

Abbott 2012, linked in the OP, lists about 13 reasons why nuclear will never be capable of generating a significant amount of power. Nuclear supporters have never addressed these issues. To me, the most important issue is there is not enough uranium to generate more than about 5% of all power.

This is the flip side, I think, to the misinformation about renewable energy I was discussing in that post. Let me say, I don’t think there is an objective right answer here, but my personal view is that the pathway to net zero that emits the least amount of carbon includes nuclear energy, a view that is in line with the IPCC. There is, however, still a lot of anti-nuclear bias out there, just as there is pro-fossil fuel bias, and pro-renewable bias, and every kind of bias. If you want to make a case for any particular source of power, there are enough variables to play with that you can make a case. However, factual misstatements are different – we should at least be arguing from the same set of verified facts. So let’s address the question – how much uranium is there?

There is no objective answer to this question. Why not? Because it depends on your definition. Most estimates of how much uranium there is in the world, in the context of how much is available for nuclear power, do not include every atom of uranium. They generally take several approaches – how much is in current usable stockpiles, how much is being produced by active mines, and how much is “commercially” available. That last category depends on where you draw the line, which depends on the current price of uranium as well as the value of the energy it produces. If, for example, we decided to price the cost of emitting carbon from energy production, the value of uranium would suddenly increase. It also depends on the technology to extract and refine uranium. The value of uranium is also determined by the efficiency of reactors.

Right now about 9% of the world’s electricity comes from nuclear, and about 19% of electricity in the US. At the current rate of energy production, currently producing uranium mines and known resources would last for about 90 years. This is better than most minerals needed to build renewable infrastructure. Right there the “5%” figure quoted above is demonstrably wrong; we are already above 5%. Let’s say we doubled the amount of energy produced by nuclear power, and over that same time period there was a 50% increase in energy demand. Current supplies would then last for 45 years, and nuclear would be about 12% of world energy production. Forty-five years would be just fine – that would give us the time to further develop solar, wind, battery, geothermal, and pumped hydro technology. It is conceivable that we could have an all-renewable grid by then. It is even possible we might have fusion by then.
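
As a quick check on that arithmetic, here is a minimal sketch in Python using only the round numbers above (about 90 years of known supply at the current consumption rate, and nuclear at roughly 9% of world electricity); these inputs come from the paragraph itself, not from any independent dataset.

# Back-of-the-envelope check of the supply figures discussed above.
supply_years_at_current_rate = 90     # known supply at today's consumption
current_nuclear_share = 0.09          # ~9% of world electricity

nuclear_multiplier = 2.0              # double nuclear output
demand_multiplier = 1.5               # 50% growth in total demand

# Doubling the burn rate halves how long a fixed supply lasts.
years_remaining = supply_years_at_current_rate / nuclear_multiplier
new_share = current_nuclear_share * nuclear_multiplier / demand_multiplier

print(years_remaining)       # 45.0 years
print(round(new_share, 2))   # 0.12 -> about 12% of a larger grid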

But that also assumes a couple of things – no new uranium mine discoveries, and no significant increases in efficiency. Neither of these things is likely to be true. There are vast known commercially-viable reserves of uranium waiting to be developed. Improved geological techniques are also finding more reserves. Further, newer nuclear designs use uranium more efficiently – there is more burned fuel and less spent nuclear fuel. In fact newer designs can potentially burn the spent fuel from older reactors, further extending the uranium supply. We can also reprocess spent nuclear fuel to make more usable fuel. The figures above also do not count national reserves of uranium, because these figures are not public. Military-grade uranium has been and can be repurposed for energy production as well.

Further still – if the acceptable price of uranium increases because of the value of uranium and the cost of energy, and/or the cost of extracting uranium from various sources goes down, then new reserves of uranium become available. For example, there are about 4.5 billion tonnes of uranium in seawater, which is about 1,000 times known terrestrial sources. That’s enough uranium, at current rates of use, for 90,000 years. Let’s say only 10% of that uranium can be commercially extracted, and our demand increases by a factor of 10 – the supply would still last for 900 years. That is likely longer than fission technology will be needed.
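
The seawater figures work the same way; here is a minimal sketch, again assuming only the round numbers from the paragraph above (roughly 90 years of terrestrial supply at current use, and about 1,000 times that amount dissolved in seawater).

# Seawater-uranium arithmetic, using the paragraph's round numbers.
terrestrial_supply_years = 90
seawater_multiplier = 1_000

seawater_years_at_current_use = terrestrial_supply_years * seawater_multiplier
print(seawater_years_at_current_use)   # 90,000 years at current use

extractable_fraction = 0.10   # suppose only 10% is commercially recoverable
demand_multiplier = 10        # and demand grows tenfold

print(seawater_years_at_current_use * extractable_fraction / demand_multiplier)
# 900.0 years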

Even putting uranium from seawater aside, known and likely terrestrial sources, combined with advancing nuclear technology, mean we likely have enough uranium to burn at double the current rate for 100-200 years, conservatively. In other words – the supply of uranium is simply not a significant limiting factor for nuclear power. So why is this still an anti-nuclear talking point?

That is where we get back to motivated reasoning. Even if we are looking at the same set of facts, they can be perceived as positive or negative depending on your bias. You can say – nuclear only supplies 9% of the world’s power, or that nuclear provides a whopping 9% of world power. Solar has only increased in efficiency by about 10 percentage points over the last 30 years (from about 12% to about 22%), or you can say that the efficiency of solar has almost doubled over this time, while costs have plummeted. You can focus on all the negative tradeoffs, or all of the positive benefits of any technology. The same problem can be either a minor nuisance or a deal-killer. You can focus on whatever slice of the evidence is in line with your bias. And of course you can accept as fact things that appear to support your narrative, while questioning those that do not.

We all do this, pretty much all the time. It takes a conscious effort to minimize such motivated reasoning. We have to step back, deliberately try to not care what the outcome is, and just try to be as fair and accurate as possible. We have to ask – but is this really true? What would a neutral person say? What would someone hostile to this position say? It’s a lot of mental work, but it’s good mental hygiene and a good habit to get into.

 

 

The post Uranium and Motivated Reasoning first appeared on NeuroLogica Blog.

Categories: Skeptic

Mic'd Up: Brian's Blood Donation Interview

Skeptoid Feed - Fri, 02/06/2026 - 2:00am

Brian gets questioned while giving blood.

Categories: Critical Thinking, Skeptic

The AI Slop Problem

neurologicablog Feed - Thu, 02/05/2026 - 5:50am

Mark Zuckerberg said a few months ago that AI is ushering in a third phase of social media. First, social media was used to connect with family and friends, then it became a platform for content creators, and now creativity is being further unleashed with new AI-powered tools. That’s a pretty rosy view, and unsurprising coming from the creator of Facebook. Many people, however, are becoming increasingly concerned about what the net effect of AI-generated content will be, especially low-grade content (now colloquially referred to as AI slop).

One thing is clear – AI-generated content, because it is so easy and fast, is increasingly flooding social media. AI’s influence takes two basic forms: AI-generated content, and recommendations driven by AI-powered algorithms. So an AI might be telling you to watch an AI-generated video. Recent studies show that about 70% of images on Facebook are now AI-generated, with 80% of the recommendations being AI-powered. This is a fast-moving target, but across social media AI-generated content is somewhere between 20 and 40%. This is not evenly distributed, with some sites being overwhelmed. The arts and crafts site Etsy has been overrun by AI slop, causing some users to abandon the platform.

We are already seeing a backlash and crackdown, but this is sporadic and of questionable effectiveness. Etsy, for example, has tried to limit AI slop on its site, but with limited success. So where is all this headed?

We need to consider the different types of content separately. Much AI slop is obviously fake and for entertainment purposes only. It may be cartoony or obviously humorous, with no intent to pass as real or deceive. Some content is meant to entertain (i.e., drive clicks and engagement), but is not obviously fake. Part of the appeal, in fact, may be the question of whether or not the content is real. Other content is meant to deceive, to influence public opinion or the behavior of the content consumer. This latter type of content is obviously the most concerning.

There are also different types of concerns or potential negative outcomes. One of the biggest concerns is that AI-generated content can be used to spread misinformation. This has both direct and indirect negative effects – it can spread false information and influence public opinion, but it also degrades trust in accurate information or responsible sources. So true information can be dismissed as possibly fake. The combined effect is that we no longer know what is true and what is not. Without any way to objectively referee which facts are reliable and which are likely fake (and yes, it’s a continuum, not a dichotomy), people will tend to just hunker down with their social tribe. Each group has their own reality, with no shared reality to bridge the gap.

There is also the Etsy problem – low-quality content is crowding out anything of value, and consumers are buried in slop. I use Etsy, and so have encountered this myself. It takes a lot of cognitive work to separate out real work, especially art, from the flood of AI content. Highly cognitively demanding work is unsustainable – most people will not do it for long and will look for the less work-intensive path. This may mean abandoning a platform, or throwing up their hands and saying it’s hopeless to tell the difference, or just giving in and not worrying if something is AI or not. This is a problem for non-AI content creators, and also a problem across the board. Mental AI-fatigue will affect everything, not just low-grade AI artwork. Etsy-fatigue can also influence how much mental energy we have for political AI content (studies do show that mental energy is fungible in this way).

There is also the middle ground, not low-grade AI slop or deliberate deception, but AI used as a legitimate tool to create high-quality art or other content. This is the use I think can be valuable, making content creation better or more efficient. The problem with this content is not really for the end-user; rather, it lies in questions of ownership and the displacement of human artists. For me, this is where the real dilemma is. I would love for the big video game companies to be able to double their output because of efficiencies gained through AI, and I also want to see how the latest AI can enhance certain game features (like interacting with AI-driven characters, or open-ended generative content). But these advances are being held back by the other concerns with AI, many of which are legitimate.

There are several approaches to the issue that I can see. One is to simply let the free market sort it all out. Users are having somewhat of a backlash against AI slop, and companies are responding. We will see how well they can manage the issue, but if the last few decades are any guide I don’t have a lot of hope that big tech companies will do what’s best for the end-user, rather than their own bottom line. Likely some individual platforms will push back heavily against AI, perhaps even creating AI-free social media platforms or websites.

A second approach is to craft some thoughtful legislation to try to wrangle this beast. The most important fix would simply be transparency – if AI-generated content had to be labeled as such, with heavy penalties for passing off AI content as real, this could significantly help. I would also like to see a conversation about how algorithms recommend content. It may also be feasible to make the use of AI-generated fakes for political persuasion illegal.

Both of these approaches, however, require a third approach – developing the technology to detect, label, and filter AI-generated content. A truly effective app to do this could be massively useful, and I think highly popular.

My biggest concern is that governments will use AI to enhance their ability to control their populations. This is part of the “information autocracy” problem. If you control what information your population sees, you can control what they think, and you can control what they do. This is already a problem, but AI-generated content and AI-driven algorithms can make it orders of magnitude more effective. Even without authoritarian governments, large corporations can use the same technology to influence their consumers. Or they can use it to promote their political views. A populace, both entertained and overwhelmed by AI slop, would be especially compliant.

The post The AI Slop Problem first appeared on NeuroLogica Blog.

Categories: Skeptic

Did the U.S. Really Use a Sonic Weapon in Venezuela?

Skeptic.com feed - Wed, 02/04/2026 - 9:18am

Within days of the U.S. strike on Caracas and the capture of Venezuelan President Nicolás Maduro on January 3, 2026, a remarkable claim was sweeping across social media: American forces had deployed a devastating “sonic weapon” that left Venezuelan soldiers vomiting blood and unable to stand.

The headlines have been dramatic, with Forbes proclaiming: “U.S. Secret Weapon May Have Incapacitated Maduro’s Guards.”1 The Economic Times wrote about America’s “Secret Sonic Weapon,”2 while the UK Sun asserted: “US ‘Sonic Weapon’ is REAL after Chilling Claims it Left Captured Maduro’s Guards ‘Vomiting Blood.’”3 The story was dramatic, almost terrifying, but as we shall argue here, almost certainly false.

Within minutes of the first explosions on January 3, conflicting claims were already circulating on social media about the number of missiles fired, ground forces deployed, and helicopters spotted flying over the city of Caracas, the focal point of the attack. The ambiguity and uncertainty that typify the fog of war are ideal breeding grounds for rumors. Ordinarily, such rumors fade as reliable information emerges. But in this case the U.S. military remained silent, while the Venezuelan government, like many authoritarian regimes, is notorious for withholding information. 

This is a classic setup for the proliferation of rumors, whose intensity is proportional to both the perceived importance of the event and the level of ambiguity.4 Situations such as this are fertile soil for exaggerations, half-truths, conspiracy theories, and outright fabrications. Even after the situation on the ground stabilized and many early rumors were confirmed or denied, claims about the use of a sonic weapon not only persisted but flourished.

From WhatsApp to the World

One challenge in tracing this story to its origins is that it began in Venezuela, where the earliest accounts circulated in Spanish. Fortunately, one of us (DZ) is a fluent speaker and was able to examine the primary sources. In the days that followed, audio recordings rapidly spread on WhatsApp, describing events through purported firsthand accounts from soldiers and relatives near the impact zones.

On January 9, one story began circulating widely. In it, a supposed member of a colectivo—an armed militia that controls different sections of the city—described how the attack unfolded in the historic 23 de Enero neighborhood of western Caracas.

The audio was posted on the YouTube channel of Emmy Award-winning Venezuelan journalist Casto Ocando, and soon accumulated over one million views.5 In it, an anonymous narrator describes the attack.

“They shut down the entire electrical system, knocked out the radars, knocked out everything.”

He then recounts how a soldier activated a Russian-made anti-aircraft defense system to attack the helicopters.

“When he fired it, a drone immediately detected it and, well, they died, they killed them, all of them [the soldiers] with a single bomb… There are many dead, many people burned, many people wounded. I’ll send you a video, there are approximately 100 military personnel dead,” he adds.6

The narrator’s confidence in precise casualty figures amid the chaos of a nighttime attack is itself a red flag.

The alleged eyewitness continues:

“There were only eight helicopters and 20 men…who killed 200 men, 32 with a single shot, plus presidential guards of honor and civilians.”

He then describes weapons that “fired more than 300 bullets per minute,” adding,

“a thing that made me bleed, I was bleeding from my nose and didn’t know what it was, it was a whistle that sounded throughout Caracas and made people bleed from their noses and ears. We couldn’t move, that whistle immobilized us, they say it’s what’s called a sonic shockwave. It was something really horrible….”

The clip ends with claims that Americans

“don’t fight fair. They fight from above, with drones. The speeds of those helicopters…. They only sent eight helicopters and destroyed all of Caracas.”  

The description of a sound that causes nosebleeds and immobilization across an entire city is physically implausible. While acoustic weapons such as Long Range Acoustic Devices (LRADs) can cause pain and disorientation at close range, their effects diminish rapidly with distance as the sound energy disperses. No known acoustic technology can cause bleeding from the ears and nose at a distance, let alone city-wide.
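
To see why, consider simple free-field spreading from a point source: sound pressure level falls by about 20 times the base-10 logarithm of the distance ratio, roughly 6 dB per doubling of distance. The sketch below assumes a hypothetical source level of 150 dB at 1 meter (a figure chosen for illustration, not a measurement of any specific device) and ignores atmospheric absorption, which would reduce levels even further.

import math

# Free-field spreading from a point source: SPL drops by 20*log10(d) dB
# relative to the level at 1 meter (about 6 dB per doubling of distance).
# The 150 dB source level is a hypothetical figure for illustration only.

def spl_at_distance(spl_at_1m, distance_m):
    return spl_at_1m - 20 * math.log10(distance_m)

for d in (1, 10, 100, 1000):
    print(d, "m:", round(spl_at_distance(150, d), 1), "dB")
# 1 m: 150.0, 10 m: 130.0, 100 m: 110.0, 1000 m: 90.0
# Painful at close range, but by a kilometer the level is comparable to a
# lawn mower, and this ignores atmospheric absorption, which lowers it further.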

Enter, Stage Right, Mike Netter 

On January 9, the WhatsApp audio recording quickly spread across various social networks. The following day, popular conservative influencer Mike Netter posted on X a strikingly similar story, which he attributed to a security guard loyal to Nicolás Maduro.

🚨This account from a Venezuelan security guard loyal to Nicolás Maduro is absolutely chilling—and it explains a lot about why the tone across Latin America suddenly changed.

Security Guard: On the day of the operation, we didn't hear anything coming. We were on guard, but… pic.twitter.com/392mQuakYV

— Mike Netter (@nettermike) January 10, 2026

It is reproduced below so readers can judge for themselves:

Security Guard: On the day of the operation…suddenly all our radar systems shut down without any explanation. The next thing we saw were drones, a lot of drones, flying over our positions…. After those drones appeared, some helicopters arrived, but there were very few. I think barely eight helicopters. From those helicopters, soldiers came down, but a very small number. Maybe twenty men. But those men were technologically very advanced…

Interviewer: And then the battle began? 

Security Guard: Yes, but it was a massacre. We were hundreds, but we had no chance. They were shooting with such precision and speed... it seemed like each soldier was firing 300 rounds per minute… At one point, they launched something... it was like a very intense sound wave. Suddenly I felt like my head was exploding from the inside. We all started bleeding from the nose. Some were vomiting blood. We fell to the ground, unable to move…. Those twenty men, without a single casualty, killed hundreds of us. We had no way to compete with their technology, with their weapons. I swear, I’ve never seen anything like it. We couldn't even stand up after that sonic weapon or whatever it was.

Interviewer: So, do you think the rest of the region should think twice before confronting the Americans?

Security Guard: Without a doubt. I’m sending a warning to anyone who thinks they can fight the United States. They have no idea what they’re capable of. After what I saw, I never want to be on the other side of that again. They’re not to be messed with.

Interviewer: And now that Trump has said Mexico is on the list, do you think the situation will change in Latin America? 

Security Guard: Definitely. No one wants to go through what we went through. Now everyone thinks twice. What happened here is going to change a lot of things, not just in Venezuela but throughout the region. 

The story was originally posted in English, itself suspicious for a supposed Venezuelan guard. Had this been a genuine interview with a colectivo member, the original would have almost certainly appeared in Spanish. No Spanish-language version has ever surfaced. The “interview” appears to be a reconstruction of the WhatsApp audio, repackaged in a question-and-answer format.

Another red flag is the distinctly pro-American tone, which is unlikely to have come from a foreign fighter, let alone one who has sworn allegiance to his government. Defeated soldiers do not typically serve as unsolicited recruitment posters for the enemy. The guard also conveniently uses round figures (eight helicopters, twenty men, 300 rounds per minute), makes no mention of his comrades’ courage or resistance, and ends with a warning directed at Mexico, precisely echoing President Trump’s rhetoric at the time.

Journalists are trained to go to the source. Accordingly, we contacted Netter to request details of the alleged guard and the interviewer, and asked him to share the original Spanish source of this interview with us. He said he couldn’t do so without first asking the source, which he promised to do. As of this writing, he has not gotten back to us.

Press Secretary Leavitt Intervenes

Mike Netter’s post could have disappeared into the daily churn of social media had it not been for White House press secretary Karoline Leavitt who shared it on her official account with the dramatic text: “Stop what you are doing and read this...”

Stop what you are doing and read this…
🇺🇸🇺🇸🇺🇸🇺🇸🇺🇸 https://t.co/v9OsbdLn1q

— Karoline Leavitt (@PressSec) January 10, 2026

This endorsement dramatically elevated the story’s perceived credibility, despite the absence of any corroborating evidence. In effect, an unverified, anonymous social media claim received a semi-official White House endorsement, a departure from the press secretary’s traditional role as a gatekeeper of verified information. As a result, Netter’s post has gained over 30 million views and 10,000 responses.

Ever Increasing Circles

On January 10, the New York Post repeated Netter’s account under the headline: “US used powerful mystery weapon that brought Venezuelan soldiers to their knees during Maduro raid: witness account.”7 The story recounted the most spectacular elements: the sound wave, exploding heads, nosebleeds, and vomiting.

Curiously, the same YouTube channel of Casto Ocando that had released the original audio later uploaded a new video citing the Post article, treating the Post’s reconstruction as independent confirmation of its own earlier material. Other media outlets went further, falsely claiming that the Venezuelan guard had been interviewed by the New York Post.8

This process, where secondary reporting is mistaken for a primary source, is a classic example of how media myths are manufactured through journalistic shortcuts.

Notably, none of the Venezuelan soldiers who later appeared on camera—people whose identities and ranks are known—mentioned the use of sonic weapons. Footage aired on the Chavista network Telesur depicts young men wounded by shrapnel describing missile strikes, drones, and gunfire. None reported bleeding from the nose, vomiting, or sensations of cranial explosions.9 Nor are there civilian testimonies from Caracas describing a city-wide whistling sound. Some soldiers and civilians did report buzzing sounds, including individuals near Fort Tiuna, one of the attack sites. However, these sounds are readily explained by falling ordnance and whizzing bullets—mundane combat phenomena, not evidence of exotic weaponry.

It is also conspicuous that during President Trump’s exclusive interview with the New York Post, which was published on January 24th, he was asked about the “sonic weapon” rumors. Trump replied that the U.S. has “the discombobulator” that disabled enemy equipment as the American helicopters swooped in to attack in Caracas. But he made no mention of its effects on people.10

It’s Similar to the Havana Syndrome

The symptoms described in the WhatsApp audio are strikingly similar to claims made during the Havana Syndrome scare. Recently, the intelligence community has deemed the involvement of a foreign power “highly unlikely,” attributing Havana Syndrome to psychogenic and environmental factors rather than directed energy weapons.11

The Venezuelan sonic weapon narrative appears to be drawing from the same well of popular mythology. Furthermore, nosebleeds following an explosive military attack are far more likely to be caused by conventional factors such as blast pressure, dust, smoke inhalation, or even stress than by a hypothetical sonic weapon.

The narrator in the WhatsApp audio clip may be misattributing ordinary combat effects to an extraordinary cause: a classic pattern in rumor formation.

Under conditions of extreme stress, uncertainty, and sensory overload, people routinely seek out coherent explanations that give meaning to their own experiences. In the context of a sudden nighttime military strike, against a backdrop rife with ambiguity and anxiety, physical symptoms such as nosebleeds, dizziness, ringing in the ears, and temporary immobility are especially prone to being reinterpreted through the lens of culturally available narratives.

From a rumor and folklore perspective, the sonic weapon story fulfills a familiar psychological function: it collapses complex, confusing events into a single explanatory cause, providing closure amid uncertainty. The sonic weapon narrative transforms uncertainty into conviction and speculation into “fact.” This process reduces anxiety. As philosopher Susanne Langer once famously observed: humans possess a remarkable ability to adapt—except when confronted with chaos.12

A Familiar Pattern

The sonic weapon story follows a well-worn media myth template: an ambiguous event, an information vacuum, an anonymous account, amplification by politically motivated actors, and validation by authorities who should know better.

What began as a WhatsApp voice message from an anonymous militia member was transformed into a polished English-language “interview,” boosted by a partisan influencer, and essentially endorsed by the White House. At no stage was a shred of physical evidence produced. The ‘Discombobulator,’ as far as the evidence shows, exists only in the fog of war, and in the imaginations of those eager to believe.

It is also worth asking the cui bono question: “Who benefits from the sonic weapon narrative?” First, the U.S. government and military—by projecting overwhelming technological superiority. Second, pro-government Venezuelan sources also benefit from a story that excuses their rapid military defeat.

When both sides gain from a myth, its survival is all but guaranteed.

Categories: Critical Thinking, Skeptic

The Selective Rationality Trap

Skeptic.com feed - Tue, 02/03/2026 - 3:17pm
How Rational People Lower Standards of Reasoning When It Comes to Politicized Issues

One of the hardest things to accept, especially for people who care about rationality, is that epistemic rigor is rarely applied consistently. Most of us do not give up bad arguments. Instead, we give up standards of evidence when the conclusion becomes socially or morally important to us.

There are well-established psychological reasons why this happens. Decades of research in social psychology show that many of our beliefs are not just opinions we hold, but parts of who we are. They become woven into our identities, our friendships, and often our professional lives. 

Put more simply, we build our identities, friendships, and careers around certain beliefs. As a result, challenges to those beliefs are not experienced as abstract disagreements but as personal threats. Our self-preservation mechanism kicks in: We bend reality as far as necessary to preserve a flattering story about ourselves and our ingroup. Denial and aggression toward the outgroup follow naturally. 

Psychologists Henri Tajfel and John Turner, who developed Social Identity Theory, showed that people internalize the values and beliefs of the groups they belong to, treating them as extensions of the self. When those beliefs are questioned, the threat is processed much like a threat to your status or belonging. The reaction is often defensive rather than reflective. 

More recent work on motivated reasoning helps explain why such a reaction is so persistent. In the 1990s, psychologist Ziva Kunda demonstrated that people selectively evaluate evidence in ways that protect conclusions they are already motivated to believe. When a belief supports your identity or social standing, the mind unconsciously applies stricter standards to disconfirming evidence and looser standards to supporting evidence. 

Intelligence does not necessarily make you more objective; it can make you a more effective advocate for your own side.

Political scientist Dan Kahan later expanded this idea with what he called “identity-protective cognition.” His research showed that people with higher cognitive ability are often better, not worse, at rationalizing beliefs that align with their cultural or political identities. In other words, intelligence does not necessarily make you more objective; it can make you a more effective advocate for your own side! 

This body of research helps explain why challenges to core beliefs can feel existential. If your moral worldview underwrites your relationships, your career, or your sense of being a good person, abandoning it comes with real social and psychological costs. Under those conditions, defending the belief feels like defending your life as it is currently organized. 

Seen in this light, the selective abandonment of evidentiary standards is not a moral failing unique to any one group. It is a predictable human response to perceived identity threat. Reasoning shifts from a tool for understanding the world to a mechanism for self-preservation. 

I learned this firsthand during my years in the New Atheist movement. What struck me was how selective people’s skepticism could be. In debates about religion, the standards were ruthless. In debates about politics and social issues, those same standards were easily relaxed, and often vanished. 

Take prayer. For decades, skeptics have pointed to controlled trials showing no measurable benefit of intercessory prayer. The best-known example is the STEP trial, a randomized study of nearly 1,800 cardiac bypass patients published in The American Heart Journal. It found no improvement in outcomes for patients who were prayed for, and in one group outcomes were slightly worse among patients who knew they were being prayed for. Among the New Atheists, prayer was considered resolved beyond reasonable debate not only because the experimental evidence showed no effect, but because the underlying causal story itself collapsed upon examination. 

Reasoning shifts from a tool for understanding the world to a mechanism for self-preservation.

Philosophically, intercessory prayer fails at the most basic level: It posits an immaterial agent intervening in the physical world in ways that are neither specified nor independently detectable. There is no plausible mechanism, no dose-response relationship, no way to distinguish divine intervention from coincidence, regression to the mean, or natural recovery. 

When some studies do claim positive effects of prayer, they almost invariably collapse under close inspection—small sample sizes, multiple uncorrected comparisons, vague outcome measures, post hoc subgroup analyses, or outright publication bias. Some define “answered prayer” so flexibly that any outcome counts as success; others rely on self-reported well-being, which is especially vulnerable to expectancy effects and motivated reasoning. 

This is precisely why large, preregistered trials and systematic reviews, such as those published in The American Heart Journal, are treated as decisive: They close off these escape hatches. The conclusion that prayer “doesn’t work” is not dogma; it is the residue left after methodological rigor strips away every alternative explanation. 

Now compare that level of scrutiny to how many people treat evidence in politically favored domains. What matters here is not even whether these conclusions are right or wrong, but how they become insulated from refutation. 

In debates over trans healthcare, for example, studies in favor of many invasive medical interventions are based largely on self-reported outcomes, short follow-up periods, and substantial attrition. Despite these limitations, they are frequently treated as definitive. Criticisms that would be routine in almost any other medical context are instead dismissed as bad faith. But the fact that these issues involve real suffering should not exempt them from evidentiary scrutiny; it should raise the bar for it. In this case, the most comprehensive evidence available—multiple systematic reviews—has raised serious concerns about the overall quality of the evidence base, particularly with respect to pediatric interventions. 

The UK’s Cass Review, commissioned by the National Health Service and published in stages between 2022 and 2024, concluded that the evidence for puberty blockers and cross-sex hormones in adolescents is generally of low certainty. Similar conclusions were reached by Sweden’s National Board of Health and Welfare and Finland’s Council for Choices in Health Care, both of which revised clinical guidelines after finding the evidence weaker than previously assumed. None of this proves that such treatments never help anyone, especially adults who exhausted other options. It does show that claims of scientific certainty are unjustified. 

The same pattern appears at the level of theory. New Atheists made a cottage industry out of attacking unfalsifiable religious claims and god-of-the-gaps reasoning. Yet many of the same people now defend claims about “systemic discrimination” that are structured in exactly the same way: When disparities persist, they are treated as proof. When they shrink, the explanation retreats to subtler and less measurable mechanisms. Evidence against the claim rarely counts against the claim in the way it would in other domains. 

Consider policing. It is often treated as a settled fact that racial bias is the primary driver of police shootings. But when Harvard economist Roland Fryer examined multiple large national datasets on police use of force, he found that there were no racial differences in officer-involved shootings once relevant contextual factors—such as crime rates, encounter circumstances, and suspect behavior—were taken into account. 

What followed was not a broad reevaluation of the claim, but a shift in how it was framed. Rather than direct bias operating at the level of individual officers, explanations moved toward less specific and harder-to-measure forces: institutional culture, historical legacy, or diffuse forms of “structural” racism. These explanations may or may not be true, but they function differently from the original claim. Because they are more abstract and less tightly specified, they are also far more difficult to test or falsify. 

Here’s the key issue: The pattern we can observe in all this is not that evidence resolved the question, but that disconfirming evidence changed the nature of the claim itself. A hypothesis that was once presented as empirically straightforward became broader, more elastic, and increasingly insulated from direct empirical challenge. Sound familiar? It’s the god-of-the-gaps fallacy.

The same pattern appears in debates over wage gaps. Raw differences in average earnings between groups are often presented as straightforward evidence of discrimination. But when researchers such as June O’Neill and later Claudia Goldin showed that simply controlling for factors such as occupation, hours worked, experience, career interruptions, and job risk substantially narrows or eliminates many commonly cited wage disparities, the original claim quietly shifted. 

Evidence that would count against the claim in any other domain instead causes the claim to become broader, more abstract, and less falsifiable. 

It was no longer argued that some demographics were being paid less than others for the same work under the same conditions. Instead, the explanation moved upstream: Sexism or systemic racism were said to operate on the variables themselves, shaping career choices, work hours, and occupational sorting in ways that produced lower average pay. 

Again, these higher-level explanations may be partly true. But they function very differently from the initial claim. A hypothesis that began as a concrete, testable assertion about unequal pay for equal work became broader, more abstract, and harder to falsify. Evidence that would ordinarily count against the claim did not weaken it; it simply pushed the claim into less measurable territory. In other words, evidence that would count against the claim in any other domain instead causes the claim to become broader, more abstract, and less falsifiable. In these cases, disparities function the way miracles once did in theology: as proof of hidden forces. 

What bothered me about the New Atheism movement was not disagreement over conclusions. It was the collapse of standards. Arguments once dismissed as unscientific were rehabilitated the moment they became morally fashionable. I focus here on the New Atheism movement because it marked the first time in my life (and, as far as I can tell, the first time in history) that a movement, at least on its surface, explicitly committed itself to applying the highest standards of evidence to some of the most consequential claims about the world, and in doing so successfully and very publicly dismantled societal structures and beliefs that had endured for millennia. 

Skepticism is adopted when it flatters the self and abandoned when it threatens a moral narrative.

I’ve been thinking about all this for a long time, and I’ve come to suspect that most people—not by choice, but by evolutionary design—do not want or need a fully accurate understanding of how the world works. They want beliefs that protect their identity, signal membership in the right group, and increase their chances of (social) survival. Michael Shermer explained some of the evolutionary processes at hand here rather well in his books How We Believe and Conspiracy. In short, when it comes to patternicity—the human tendency to find meaningful patterns in meaningless noise—making Type 1 errors (i.e., finding nonexistent patterns) carries little evolutionary risk, while the opposite (i.e., missing real patterns) can often be the difference between life and death. This means that natural selection will favor strategies that make many incorrect causal associations in order to establish those that are essential for survival and reproduction.

Under those conditions, reasoning becomes performative. Skepticism is adopted when it flatters the self and abandoned when it threatens a moral narrative. That is why debates on these topics so often drift toward unfalsifiable language and moral imperatives. 

A fair question follows: How does anyone know they are not doing the same thing? 

I think the real danger we should try to internalize is not that other people do this. It is that all of us do.

Categories: Critical Thinking, Skeptic

Forgetting History

neurologicablog Feed - Tue, 02/03/2026 - 7:06am

Engaging on social media to discuss pseudoscience can be exhausting, and it can make one weep for humanity. I have to keep reminding myself that what I am seeing is not necessarily representative. The loudest and most extreme voices tend to get amplified, and people don’t generally make videos just to say they agree with the mainstream view on something. There is massive selection bias. But still, to some extent social media does both reflect the culture and also influence it. So I like to not only address specific pieces of nonsense I find but also to look for patterns, patterns of claims and also of thought or narratives.

Especially on TikTok but also on YouTube and other platforms, one very common narrative that I have seen amounts to denying history, often replacing it with a different story entirely. At the extreme the narrative is – “everything you think you know about history is wrong.” Often this is framed as – “everything you have been told about history is a lie.” Why are so many people, especially young people, apparently susceptible to this narrative? That’s a hard question to research, but we have some clues. I wrote recently about the Moon Landing hoax. Belief in this conspiracy in the US has increased over the last 20 years. This may be simply due to social media, but also correlates with the fact that people who were alive during Apollo are dying off.

Another factor driving this phenomenon is pseudoexperts, who also can use social media to get their message out. Among them are people like Graham Hancock, who presents himself as an expert in ancient history but actually is just a crank. He has plenty of factoids in his head, but has no formal training in archaeology and is the epitome of a crank – usually a smart person with outlandish ideas who never checks those ideas with actual experts, so they slowly drift off into fantasy land. The chief feature of such cranks is a lack of proper humility, even overwhelming hubris. They casually believe that they are smarter than the world’s experts in a field, and based on nothing but their smarts can dismiss decades or even centuries of scholarship.

Followers of Hancock believe that the pyramids and other ancient artifacts were not built by the Egyptians but by an older and more advanced civilization. There is zero evidence for this, however – no artifacts, no archaeological sites, no writings, no references in other texts, nothing. How does Hancock deal with this utter lack of evidence? He claims that an asteroid strike 12,000 years ago completely wiped out all evidence of their existence. How convenient. There are, of course, problems with this claim. First, the asteroid strike at the end of the last glacial period was in North America, not Africa. Second, even an asteroid strike would not scrub all evidence of an advanced civilization. He must think this civilization lived in North America, perhaps in a single city right where the asteroid struck. But they also traveled to Egypt, built the pyramids, and then came home, without leaving a single tool behind. Even a single iron or steel tool would be something, but he has nothing.

Of course, there is also a logical problem, arguing from a lack of evidence. This emerges from the logical fallacy of special pleading – making up a specific (and usually implausible) explanation to explain away inconvenient evidence or lack thereof.

Core to the alternative history narrative is also that those ancient people could not possibly have built these fantastic artifacts. This is partly a common modern bias – we grossly underestimate what was possible with older technology, and how smart ancient people could be. Even thousands of years ago, in any culture, people were still human. Sure, there has been some genetic change over the last few thousand years, but not dramatically, and this change is mostly in how common alleles were, not in their existence. In other words – every culture could have had their Einstein. Ancient Egypt had genius architects, and in some cases we even know who they were.

People also underestimate the willingness of ancient people to engage in long periods of harsh work in order to accomplish things. Perhaps this is a “modern laziness bias” (I think I just coined that term). We are so used to modern conveniences that the idea of polishing stone for 12 hours a day for a year in order to create one vase seems inconceivable. The pyramids, it is estimated, were constructed with 20,000-30,000 workers over 20 years. This included skilled masons, who likely became very skilled during the project. Egypt had an infrastructure of such skilled workers, supported by many long-term projects over centuries.

Which brings up another point – we underestimate how much time these ancient civilizations existed. My favorite stat is that Cleopatra lived closer in time to the Space Shuttle than the building of the pyramids. Wrap your head around that. These ancient people were clever, they included highly skilled crafters, and they had centuries, at least, to advance their techniques.

What amazes me is that this narrative of denying history extends to recent events. Again, the Moon landing is an example. But there is also a narrative circulating on TikTok that buildings from the 18th, 19th, and even 20th century were not built by the people whom historians say built them. They were found in place, and were built by an older and more advanced civilization – called Tartaria. Never heard of it? That’s because it does not exist. This civilization was wiped out by a world-wide mud flood in the 19th century. According to this particularly nutty conspiracy theory, modern governments just occupied the buildings they left behind, then conspired together to wipe the history of the mud flood and Tartaria from all records.

What is even more amazing to me is that, in far less time than it took to create a TikTok video spreading this nonsense, someone with even white-belt level Google-fu could have found convincing evidence that this is wrong. You can find pictures of the buildings being built, or of the city before they were built, or documentation of them being built, or experts who have already gathered all this information for you. You can also find that “Tartaria” was a medieval label used to denote the “land of the Tartars”, which simply refers to the Mongols. It was a nonspecific geographic label, not an actual place or nation.

But of course, none of this matters in a social media world in which narrative is truth, everything “they” say is a lie, and in fact truth or lie is not even really a thing. It’s all narrative, it’s all performance and clicks.

And this is why scholars and scientists need to engage with the world, much more than they currently do. We cannot simply ignore the nonsense with the idea that it will shrivel and die if we don’t give it light. That is such a pre-social media idea (if it were ever true). We have to fight for scholarship, for logic, facts, and evidence. We have to fight for history.

The post Forgetting History first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #1026: Vintage Ceramics: Decorative or Deadly?

Skeptoid Feed - Tue, 02/03/2026 - 2:00am

How concerned do you truly need to be about vintage ceramicware leaching lead into your food?

Categories: Critical Thinking, Skeptic

A Fully Renewable Grid?

neurologicablog Feed - Mon, 02/02/2026 - 5:38am

My long-stated position (although certainly modifiable in the face of any new evidence, technological advance, or good arguments) is that the optimal pathway to most rapidly decarbonize our electrical infrastructure is to pursue all low-carbon options. I have not heard anything to dissuade me so far from this position. A couple of SGU listeners, however, pointed me to this video making the case for a renewable + battery energy infrastructure.

The channel, Technology Connections, does a good job at putting all the relevant data into context, and I like the big-picture approach that the host, Alec Watson, takes. I largely agree with the points he makes. Also, at no point does he say we should not also build nuclear, geothermal, or more hydroelectric. He does, perhaps, imply that we don’t need nuclear at several points, but he did not address it directly.

So what are the big-picture points I agree with? He correctly points out that fossil fuels are disposable – they are fuel that you burn. They do not, in themselves, create any energy infrastructure. Meanwhile, a solar panel or wind turbine, once you have invested in building them, can produce energy essentially for free for 20 years. He argues that we should be investing in infrastructure, not just pulling fuel out of the ground that we will burn and it’s gone. I get this point; however, what about hydrogen? It is not certain, but let’s hypothetically say we find large reserves of underground hydrogen that we can tap into. I would not be against extracting this resource and burning it for energy, since it is clean (produces only water, and does not release carbon). Although we might find better uses for such hydrogen than burning it, such as feedstock for certain hard-to-decarbonize industries.

But his point remains valid – we should be looking for ways to develop our technology to be reusable, circular, and sustainable, rather than extractive. Extracting and burning a resource is one-way and limited. At most this should be a stepping stone to more sustainable technology, and I think we can reasonably argue that fossil fuels were that stepping stone, and it is beyond time to move past them to better technology.

Also, building wind or solar plus batteries is the cheapest new energy to add to the grid. He feels the economics will simply win out. I agree – with caveats. At times I get the feeling he is arguing for what will happen in the long run, but he also says “we are here now”. We are sort-of here now, but not fully, which I will get to below. Solar panels are relatively cheap and efficient. Wind turbines are getting more efficient and cost-effective as well, although they are more sensitive to market fluctuations and delays. And he correctly points out that these technologies are still rapidly improving, while there is not much room for improvement with burning fossil fuel.

He also nicely addresses some of the common misunderstandings about renewable energy (a lot of “whatabout” questions). What about the land-use issue with solar panels? He points out that if we just converted the land currently used to grow corn for ethanol (which is a massively inefficient use of land and way to create fuel), and instead put solar panels on that same land, we could generate more than enough energy to run the entire country and charge all our EVs. Solar panels simply create much more energy per acre than corn for ethanol. That’s a solid point.

What about all the lithium and rare earths we need to build all those panels and batteries? His answer is – well, yes, we do need to extract all those minerals to build all the panels and batteries we need. However, he argues, once we do that, the panels and batteries can theoretically be infinitely recycled. Those atoms don’t go away. This is one of his “eventually” arguments, in my opinion. Yes, one day we might theoretically have an energy infrastructure built entirely on recycled material that has already been extracted. I agree, and I agree that we should be building toward that day (rather than just burning fuel). But we are nowhere near that day.

Further, technological advancements, like sodium-ion batteries and newer lithium chemistry, remove many of the conflict elements and rare elements. Also true. Sodium batteries are actually already in production.

Does any of this change my position? No. I have already endorsed many of these arguments in favor of renewables. I also think we should be building and researching to develop an all-renewable future based on an entirely circular technology cycle. If we are playing the “eventually” game, however, I also think we need to add fusion to the mix, once we tackle that herculean technology challenge. This is especially true if we want to venture out into our solar system.

What he does not explicitly address, however, is the optimal path to that future. A path, I believe, that should take into consideration the amount of carbon we release into the atmosphere between now and our zero-carbon future. My position has always been, not that renewables are not great and should be a big part (if not totality) of our energy future – but that we are still in a stepping-stone era of history.

The way I see it, we need to be transitioning from the fossil fuel stepping stone to the nuclear-geothermal-hydroelectric stepping stone before we get to entirely renewable. What does this mean?

It means we should be shutting down coal-fired plants as fast as we possibly can. Coal is the dirtiest form of energy and is increasingly becoming one of the most expensive (even without counting the cost of carbon, which I think we should). It also costs the most lives, all along the chain. To do this (again, as quickly as possible) means not only building lots of solar and wind, but also nuclear, geothermal, and hydroelectric. The latter two, however, are location-limited. Sure, we are developing technology to expand geothermal, but there is an inherent limit – if it costs more energy to pump the fluid down to the hot layers than we get out of the exchange, the process simply does not work. It’s unclear how much of a role geothermal can play. And hydroelectric requires the proper water features, and it is harmful to local environments.

We can, however, build nuclear almost anywhere. We can swap them in, one-for-one, for retiring coal plants. We can have them on ships, and can place them relatively close to where the energy is used. We have plenty of fissile material, and the newer designs are safer, more efficient, and more dispatchable. The big downside to nuclear is that it is expensive – but it’s way less expensive than global warming.

Nuclear can potentially give us the 30-50 years it will take to advance our technology and build all that renewable infrastructure. And yes – we do need this time. Simply building all those panels and batteries will take time. Updating and expanding the grid will take time. All these projects need minerals, and it will take time to develop the mines necessary (yes – decades).

The question is – while we take the next 30-50 years to transition to renewables, do we want to be burning fossil fuels or uranium? That is really the big question.

I also think that Alec does not pay enough attention to the energy storage issue. Building enough battery storage for an all-renewable energy infrastructure is no small task. Again, it will take decades. Perhaps more importantly – as he correctly says, batteries get you through the night. However, they do not get you through the winter. An all-renewable future requires long-term energy storage as well. Batteries will not work for this. As far as I know, the only really viable solution right now is pumped hydro. But this too will take decades to develop, and it remains to be seen how much pumped hydro we can develop without too much harm to the environment.

The bottom line is this. If we are talking about the future of our energy and transportation sectors, then I completely agree – we should be aiming for an all-electric, all-renewable future based upon an entirely circular economy rather than a linear extract-and-burn economy. But we also need to consider how much carbon will be emitted between here and there, and if we want to minimize that carbon, we should also be building out our nuclear infrastructure, maintaining our hydroelectric inventory, and continuing to develop geothermal. These energy sources also have the advantage of providing baseload and even dispatchable energy, which significantly reduces the need for energy storage and will buy us time there as well.

The post A Fully Renewable Grid? first appeared on NeuroLogica Blog.

Categories: Skeptic

When AI Thinks for Us

Skeptic.com feed - Sat, 01/31/2026 - 3:18pm

In modern education, Artificial Intelligence is increasingly marketed as a cognitive prosthesis: a tool that extends our mental reach, automates drudgery, and supposedly frees us to focus on higher-order creativity and insight. According to this narrative, AI does not replace thinking—it liberates it.

But beneath the polished interface of today’s Large Language Models (LLMs) lies a neurological and ethical trap, one with especially serious implications for developing minds. We are witnessing a subtle but profound shift from using tools to thinking with them, and, increasingly, letting them think for us. 

The question Skeptic readers should be asking is not whether AI is impressive—it clearly is—but what kind of minds are formed when different kinds of thinking become optional. One place where this shift is especially revealing and especially consequential is moral development. 

Moral Development 

In moral education, how one arrives at a judgment matters more than which judgment one reaches. It is not about acquiring correct answers. Moral development involves cultivating the capacity to deliberate, restrain impulse, tolerate ambiguity, and reflect before acting. These capacities do not emerge automatically; rather, they are trained through effortful use. AI, however, is mostly indifferent to process and optimizes for output.

When we outsource the labor of reflection to an algorithm, we risk a form of ethical atrophy. This is not a Luddite rejection of AI but a skeptical, evidence-based examination of benefit claims that rarely account for developmental cost. 

These are not merely philosophical concerns. They are grounded in the biology of how our moral capacities arise. To understand the stakes, we must begin with the adolescent brain. The teenage brain is not a finished system but more like a construction site. The prefrontal cortex (the executive center responsible for impulse control, long-term planning, and moral deliberation) undergoes rapid, uneven development throughout adolescence. Neural circuits that are exercised are strengthened and stabilized; those that are neglected are pruned away. This is not metaphor. It is biology. 

Moral development involves cultivating the capacity to deliberate, restrain impulse, tolerate ambiguity, and reflect before acting.

Moral development, as I explain in my book AI Ethics, Neuroscience, and Education, depends on what researchers call cognitive friction. This friction appears as hesitation before a difficult choice, the effort of weighing competing values, and the discomfort of uncertainty. These moments feel inefficient, but they are also indispensable. Generative AI, by design, removes this friction. 

When a student asks ChatGPT for a nuanced ethical argument and receives an instant, polished response, the brain skips the work. The student receives the answer without undergoing the cognitive struggle required to produce it. Ethical questions begin to resemble technical problems with downloadable solutions. Students lose the habit of lingering in uncertainty – the very space where moral reasoning takes shape. AI does not hesitate; it generates outputs based on probability, not conscience. Humans, however, should hesitate. That hesitation is not weakness but moral functioning.

Cognitive and Emotional Development 

If moral reasoning is one casualty of reliance on LLMs, it is far from the only one. Consider writing. Writing is not simply a way to display what we know—it is the process through which we figure out what we think. Organizing vague intuitions into a coherent argument places a heavy demand on the developing prefrontal cortex, and when AI performs this structuring, it deprives the brain of precisely the exercise it needs to mature. 

When we outsource the labor of reflection to an algorithm, we risk a form of ethical atrophy.

If intelligence is measured only by output, for example the finished essay or the correct solution, AI appears miraculous. But if intelligence is understood as the capacity to reason, deliberate, and restrain impulse, AI-driven cognitive offloading begins to resemble a neurological shortcut with long-term consequences, not unlike actual shortcuts that reshape the terrain. 

The danger does not stop at cognition. It extends into emotional and social development. We are entering an era of affective computing, in which machines are designed not merely to process information but to simulate emotional responsiveness. AI systems now speak in tones of empathy, reassurance, and concern. They never interrupt, misunderstand, or demand reciprocity. 

For an isolated or anxious adolescent, an AI companion can feel safer than unpredictable human relationships. It offers validation without vulnerability and empathy without risk. 

When a student asks ChatGPT for a nuanced ethical argument and receives an instant, polished response, the brain skips the work.

But moral growth, just like cognitive abilities, does not occur in comfort. Human relationships require patience, accountability, and recognition of another person’s interior life. They involve misunderstanding, disagreement, and the difficult work of repair. AI relationships require none of this. They are emotionally efficient, and ethically hollow. 

What they provide is a psychological sugar rush: immediate affirmation without the nutritional value of genuine connection. The ethical danger here is subtle: We are not merely giving students a new tool but also shaping their preferences. We are quietly training young people to prefer relationships that never challenge them. Over time, this fosters comfort with anthropomorphic simulations and anxiety toward real human empathy, which is messy, incomplete, and demanding. 

Toward Skeptical AI Literacy 

This is not a call to ban AI. The question is not whether we use AI in education, but how and when. 

Beyond the developmental effects described here, we should also note that LLMs hallucinate. With remarkable confidence, they fabricate sources, misstate facts, and invent details. This fluency creates trust. What emerges is a form of passive knowing: information is consumed without ownership or justification. In an era where machines can generate infinite content, the ability to distinguish truth from fluent fiction becomes one of the most critical civic skills we have. Ironically, our increasing reliance on AI may be eroding the vigilance that skill requires. 

We are quietly training young people to prefer relationships that never challenge them.

This means we need to be teaching students both how to prompt machines and how to resist them. In other words, AI output should be treated not as a truth to be consumed but as a hypothesis to be tested. We also need to teach the value of the seeming inefficiency of human thinking. 

Finally, the central ethical question of our time is not whether machines can think for us. It is whether in allowing them to do so too often we risk forgetting how to think for ourselves. We must be careful not to engineer the atrophy of human wisdom.

Categories: Critical Thinking, Skeptic

What is Truth, Anyway?

Skeptic.com feed - Tue, 01/27/2026 - 8:32am
  • Do you believe global warming is real?
  • Do you believe in the germ theory of disease?
  • Do you believe masks work and should be mandated?
  • Do you believe Jesus was resurrected?
  • Do you believe the Holocaust happened?
  • Do you believe there are objective morals and values in life?

As a public intellectual who engages in debates and conversations on a wide range of subjects, I am often asked questions such as these, which I found puzzling at first until I figured out that my interlocutors were confusing the meaning of beliefs and facts. 

For example, I don’t “believe in” the germ theory of disease. I accept it as factually true, and as we’ve seen in the recent pandemic, a germ like the SARS-CoV-2 virus is not something to believe in or disbelieve in. It simply is a matter of fact and it can cause a deadly disease like Covid-19. 

Whether or not vaccines and masks slow its spread is also a factual question that science, at least in principle, can answer, although whether or not vaccines and masks should be mandated by law is a political matter that differs from scientific questions. But asking you if you “believe in” the SARS-CoV-2 virus would be like asking you if you “believe” in gravity. Gravity is just a brute fact of nature. It’s not something to believe or disbelieve. 

As the science fiction author Philip K. Dick famously quipped, “Reality is that which, when you stop believing in it, doesn’t go away.”

Objective Truths and Justified True Belief

What we’re after here is knowledge, which philosophers traditionally define as justified true belief. That is, we want to know what is actually true, not just what we want to believe is true. The problem is that none of us are omniscient. If there is an omniscient God, it’s not me, and it’s also not you. Or, in the secular equivalent, there is objective reality but I don’t know what it is, and neither do you.

Once we agree that there is objective truth out there to be discovered and that none of us knows for certain what it is, we need to work together through open dialogue in communities of truth-seekers to figure it out, starting by acknowledging our shortcomings as finite fallible beings subject to all the cognitive biases that come bundled with our reasoning capacities. The workaround for this problem is having adequate evidence to justify one’s beliefs. Here are two examples from science:

  • Dinosaurs went extinct around 65 million years ago. This is true by verification and replication of radiometric dating techniques for volcanic eruptions above and below dinosaur fossils. Since each layer can be accurately dated, we infer that the age of a fossil falls between these two dates. Above the strata dated 65 million years ago, there are no more dinosaurs. Ergo, we can assert with a high degree of confidence that this is an objective fact, and we can be satisfied in the truth of the proposition that dinosaurs went extinct around 65 million years ago, unless and until new data emerge. 
  • Our universe came into existence at the Big Bang some 13.8 billion years ago. This is true based on the convergence of evidence of a wide range of phenomena such as the cosmic microwave background, the abundance of light elements like hydrogen and helium, the distribution of galaxies and the large-scale structure of the cosmos, the redshift of most galaxies that indicates they are all moving away from one another in a way that resembles a giant explosion, and the expansion of space-time itself that resulted from such a big bang, resulting in the accelerating expanding cosmos we see today.

The above propositions are “true” in the sense that the evidence is so substantial that it would be unreasonable to withhold our provisional assent. At the same time, it’s not impossible, for example, that the dinosaurs went extinct recently, just after the creation of the universe some 10,000 years ago (as Young Earth Creationists assert). However, this proposition is so unlikely, so completely lacking in evidence, and so evidently grounded in religious faith, that we need not waste our time considering it any further (the debate about the age of the Earth was resolved over a century ago). 

Thus, a scientific truth is a claim for which the evidence is so substantial that it is rational to offer one’s provisional assent. Provisional is the key word here. Scientific truths are temporary and could change with changing evidence.

The ECREE Principle, or Why Extraordinary Claims Require Extraordinary Evidence

In his 1980 television series Cosmos, in the episode on the possibility of extraterrestrial intelligence existing somewhere in the galaxy, or of aliens having visited Earth, Carl Sagan popularized a principle about proportioning one’s beliefs to the evidence, when he pronounced that “extraordinary claims require extraordinary evidence.” The ECREE principle was first articulated in the 18th century by the Scottish Enlightenment philosopher David Hume, who wrote in his 1748 An Enquiry Concerning Human Understanding: “a wise man proportions his belief to the evidence.” 

ECREE means that an ordinary claim requires only ordinary evidence, but an extraordinary claim requires extraordinary evidence. Here’s a quotidian example. I once took a road trip from my home in Southern California to the Esalen Institute in Big Sur, California, home of all things New Age. To get there I took the 210 freeway north to the 118 freeway north to the 101 freeway north to San Luis Obispo, where I exited to Highway 1 and followed the Pacific Coast Highway north through Cambria and San Simeon until arriving at the storied home of the 1960s Human Potential Movement. Weirdly, just past Cambria, a bright light hovered over my car. Thinking it was a police helicopter, I pulled over to the side of the road, fearful that I had been busted for speeding (which I am wont to do). But it wasn’t the cops. It was the aliens, and they abducted me into their mothership and whisked me off to the Pleiades star cluster where their home planet is located. There I met extraterrestrial beings who gave me a message to take back to Earth—we must stop global warming and nuclear proliferation…or else.

Now, which part of this story triggers your insistence on additional evidence? That’s obvious. My claim to have driven on California highways is ordinary and calls for only ordinary evidence (in this case, you can just take my word for it), but my claim to have been abducted by aliens and rocketed off to the Pleiadeian home planet is extraordinary, and unless I can provide extraordinary evidence—like an instrument from the dashboard of the alien spaceship, or one of the aliens themselves—you should be skeptical.

ECREE also suggests that belief is not an either-or, on-off switch—not a discrete state of belief or disbelief, but a continuum on which you can place confidence in a belief according to the evidence: more evidence, more confidence; less evidence, less confidence. Consider the extraordinary claim that another bipedal primate called Bigfoot, or Yeti, or Sasquatch survives somewhere on Earth. That would be quite extraordinary because, after centuries of searching for such a creature, none has ever been found.
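One way to make that continuum concrete—my illustration, not the author’s framing—is a simple Bayesian update, in which a tiny prior probability (an extraordinary claim) needs far stronger evidence to move than an ordinary one. The priors and likelihoods below are invented for the sake of the example.

```python
# A minimal sketch of "proportioning belief to the evidence" as Bayesian
# updating. The prior and likelihood values are invented for illustration.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of a claim after one piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Ordinary claim ("I drove up Highway 1"): high prior, so modest evidence suffices.
print(update(prior=0.95, p_evidence_if_true=0.9, p_evidence_if_false=0.5))

# Extraordinary claim ("I was abducted by aliens"): tiny prior, so the same
# quality of evidence barely moves the needle.
print(update(prior=1e-6, p_evidence_if_true=0.9, p_evidence_if_false=0.5))
```

With the same quality of evidence, the ordinary claim ends up near certainty while the extraordinary claim barely budges—ECREE in numerical form.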

Before we assent to such a claim we need extraordinary evidence, in this case a type specimen—what biologists call a holotype—in the form of an actual body. Blurry photographs, grainy videos, and stories about spooky things that happen at night when people are out camping do not constitute extraordinary evidence—they are barely even ordinary evidence—so it is reasonable for us to withhold our provisional assent.

Impediments to Truth and How to Overcome Them

In addition to falling far short of omniscience, humans are also saddled with numerous cognitive biases, including (to name but a few): confirmation bias, hindsight bias, myside bias, attribution bias, sunk-cost bias, status-quo bias, anchoring bias, authority bias, believability bias, consistency bias, expectation bias, and the blind-spot bias, in which people can be trained to identify all these biases in other people but can’t seem to see the log in their own eye.

Then there is the suite of logical fallacies, such as emotive words, false analogies, ad hominem, hasty generalization, either-or, circular reasoning, reductio ad absurdum, the slippery slope, and after-the-fact reasoning—and especially why anecdotes are not data, why rumors do not equal reality, and why the unexplained is not necessarily the inexplicable.

With such listicles of cognitive biases and logical fallacies identified by philosophers and psychologists, it’s a wonder we can think at all. But we can and do, through experience, education, and instruction in the art and science of thinking. What follows are some of the methods developed by philosophers and psychologists to identify and work around all these impediments to the search for truth.

Practice Active Open-Mindedness. Research shows that when people are given the task of selecting the right answer to a problem by being told whether particular guesses are right or wrong, they do the following:

  • Immediately form a hypothesis and look only for examples to confirm it.
  • Do not seek evidence to disprove the hypothesis.
  • Are very slow to change the hypothesis even when it is obviously wrong.
  • If the information is too complex, adopt overly-simple hypotheses or strategies for solutions.
  • If there is no solution, if the problem is a trick and “right” and “wrong” are given at random, form hypotheses about coincidental relationships they observed. 

In their book Superforecasting, Philip Tetlock and Dan Gardner document how bad most people are at making predictions, and what skillsets those who are good at it employ. They begin with the results of extensive testing of people’s predictions. It’s not good. Even most so-called experts were no better than dart-tossing monkeys when their predictions were checked. When asked to make specific predictions—for example, “Will another country exit from the EU in the next two years?” and, presciently, “Will Russia annex additional Ukraine territory in the next three months?”—and their prognosticating feet were held to the empirical fire, Tetlock and Gardner found that most experts were overconfident (after all, they’re experts), encouraged by the lack of feedback on their accuracy (if no one reminds you of your misses you’ll only remember the hits—the confirmation bias), and victims of all the cognitive biases and illusions that plague the rest of us. 

The worst forecasters were people with big ideas—grand theories about how the world works—such as left-wing pundits predicting class warfare that never came, or right-wing commentators prophesying a socialistic demise of the free enterprise system that never happened. Failed predictions are hand-waved away—“This means nothing!” “Just you wait!” Superforecasters, by contrast, practice active open-mindedness, which Tetlock and Gardner defined quantitatively by asking experts “Do you agree or disagree with the following statements?” Superforecasters were more likely to agree that:

  • People should take into consideration evidence that goes against their beliefs.
  • It is more useful to pay attention to those who disagree with you than to pay attention to those who agree.
  • Even major events like World War II or 9/11 could have turned out very differently.
  • Randomness is often a factor in our personal lives.

Superforecasters were more likely to disagree that:

  • Changing your mind is a sign of weakness.
  • Intuition is the best guide in making decisions.
  • It is important to persevere in your beliefs even when evidence is brought to bear against them.
  • Everything happens for a reason.
  • There are no accidents or coincidences. 

The psychologist Gordon Pennycook and his colleagues developed their own instrument for measuring active open-mindedness, in which people are asked whether they agree or disagree with the following statements, where the more open-minded answer is indicated in parentheses:

  • Beliefs should always be revised in response to new information or evidence. (agree)
  • People should always take into consideration evidence that goes against their beliefs. (agree)
  • I believe that loyalty to one’s ideals and principles is more important than “open-mindedness.” (disagree)
  • No one can talk me out of something I know is right. (disagree)
  • Certain beliefs are just too important to abandon no matter how good a case can be made against them. (disagree)

Active open-mindedness is a cogent tool of reason in assessing the truth value of any claim or idea. As is reason itself: active open-mindedness is one of a suite of rational skills that must be cultivated through education and practice.

Protect and Defend the Constitution of Knowledge

Objective facts in support of provisional truths about the world are determined by tried-and-true methods developed over the centuries since the Scientific Revolution and the Enlightenment in what are sometimes called rationality communities—scholars, scientists, and researchers who collect data, form and test hypotheses, present their findings to colleagues at conferences, publish their papers in peer reviewed journals and books, and reinforce the norms of truth-telling to their colleagues and students along with themselves. In his book The Constitution of Knowledge, the journalist and civil rights activist Jonathan Rauch outlines and defends the epistemic operating system of Enlightenment liberalism’s social rules for attaining reliable knowledge when people cannot agree on what is true. Although these communities differ in the details of what, exactly, should be done to determine justified true belief, Rauch suggests several features held in common that constitute the constitution of knowledge:

  • Fallibilism. The understanding that we might be wrong.
  • Objectivity. A commitment to the proposition that there is a reality and we can know it through reason and empiricism.
  • Disconfirmation. Challenging or testing any and all claims through peer review and replication (science), editing and fact-checking (journalism), adversarial lawyers (the law), and red-team review (business).
  • Accountability. We should all be held accountable for our mistakes.
  • Pluralism. An insistence on viewpoint diversity.

The most important norm of all is the freedom to critique or challenge any and all ideas. Why?

  • We might be completely right but still learn something new in hearing what someone else has to say.
  • We might be partially right and partially wrong, and by listening to other viewpoints we might stand corrected and refine and improve our beliefs. 
  • We might be completely wrong, so hearing criticism or counterpoint gives us the opportunity to change our minds and improve our thinking. 
  • By listening to the opinions of others we have the opportunity to develop stronger arguments and build better facts for our positions. 
  • My freedom to speak and dissent is inextricably tied to your freedom to speak and dissent. If I censor you, why shouldn’t you censor me? If you silence me, why shouldn’t I silence you? 

If you disagree with me, it is the norms and customs of free speech and open dialogue that allow you to do so. From those open dialogues, debates, and disputations, in time the truth emerges.

Excerpt from Truth: What It Is, How to Find It, and Why It Still Matters, Johns Hopkins University Press. January 27, 2026

Categories: Critical Thinking, Skeptic

Skeptoid #1025: Pop Quiz: Space Quandaries

Skeptoid Feed - Tue, 01/27/2026 - 2:00am

Oh no! Another pop quiz. Take the challenge: 9 questions about space. Think you can get them all?

Categories: Critical Thinking, Skeptic

Rethinking the Habitable Zone

neurologicablog Feed - Mon, 01/26/2026 - 6:21am

As we continue the search for life outside of the Earth, it helps if we have a clear picture of where life might be. This is all a probability game, but that’s the point – to maximize the chance of finding the biosignatures of life. One limitation of this search, however, is that we have only one example of life and a living ecosystem – Earth. Life may take many different forms and therefore exist in what we would consider exotic environments.

That aside, it seems a good bet that life is more likely in locations where liquid water is possible, and therefore liquid water is a reasonable marker for habitability. When we talk about the habitable zone of stars, that is what we are talking about – the distance from the star where it is possible for liquid water to exist on the surface of planets. There are more variables than just the temperature of the star, however. The composition of the atmosphere also matters. High concentrations of CO2, for example, extend the habitable zone outward. There is therefore a conservative habitable zone, and then a more generous one allowing for compensating factors.
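As a rough illustration of how that “distance where liquid water is possible” falls out of simple physics, here is a sketch that computes a planet’s equilibrium temperature from stellar luminosity and orbital distance. The albedo and the fixed greenhouse offset are placeholder assumptions of mine; the actual habitable-zone papers use far more detailed climate models.

```python
import math

# Rough sketch: equilibrium temperature of a planet vs. orbital distance.
# T_eq = ( L * (1 - A) / (16 * pi * sigma * d^2) ) ** 0.25
# The albedo and greenhouse offset below are placeholder assumptions.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26   # solar luminosity, W
AU = 1.496e11      # astronomical unit, m

def equilibrium_temp(luminosity_watts, distance_m, albedo=0.3):
    """Blackbody equilibrium temperature assuming full heat redistribution."""
    return (luminosity_watts * (1 - albedo) /
            (16 * math.pi * SIGMA * distance_m ** 2)) ** 0.25

for d_au in (0.5, 1.0, 1.5, 2.0):
    t_eq = equilibrium_temp(L_SUN, d_au * AU)
    # Crude greenhouse offset (~33 K for modern Earth); a thicker CO2
    # atmosphere would add more, pushing the habitable zone outward.
    print(f"{d_au:.1f} AU: T_eq ~ {t_eq:.0f} K, with Earth-like greenhouse ~ {t_eq + 33:.0f} K")
```

Bumping up the greenhouse term (for a thick CO2 atmosphere, say) keeps temperatures above freezing farther from the star, which is why atmospheric composition widens or narrows the zone.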

A new paper seeks to extend the conservative habitable zone further, specifically around M- and K-class dwarfs. K-dwarfs, or orange stars, are likely already the best candidates for life. They are bright and hot enough to support liquid water and photosynthesis, they emit less harmful radiation than red (M) dwarfs, and they live a relatively long time – 15-70 billion years. They also comprise about 12% of all main sequence stars. Yellow stars like our sun are also good for life, but have a shorter lifespan (10 billion years) and make up only about 6% of main sequence stars.

There has been a lot of speculation about the habitability of red dwarfs, mostly because they make up about 70% of the stars in the Milky Way. They therefore dramatically change the number of star systems that are candidates for life. Most of the time you see a headline about a new study increasing or decreasing the estimated possibility of life in the galaxy, it’s a good bet it’s about red dwarf stars. Research has gone back and forth on this question, but overall I think the probability is quite low.

The biggest problem with red dwarfs is that they emit a lot of radiation, enough to blast away the atmosphere of any planet in the habitable zone. They do settle down when they get older, however. This means that if a planet wanders into the inner stellar system after the star has calmed down, it may keep its atmosphere. Or a planet may reconstitute its atmosphere later in life. But this means far fewer candidates, because these events are less likely.

Another recent paper was also pretty down on red dwarf life. The researchers calculate that while the light from red dwarfs is enough to support photosynthesis, it is not enough to support complex life. So if there were life on planets around red dwarfs, it would likely only be microbes. That’s still exciting, but, you know.

The new paper is about another feature of red dwarf planets in the habitable zone that is also problematic. In order to be close enough to the star to be warm enough for liquid water, a planet would also likely be tidally locked. This means it would show the same face to its star at all times, with the near side boiling and the far side freezing. A lot of attention is therefore paid to the terminator, the zone around the middle between too hot and too cold that is just right. But would this be enough to support life, and what would conditions be like there? What the new paper explores is the heat distribution on such planets. They find that heat could travel from the near side to the far side in sufficient amounts to allow for liquid water, even on the far side of the planet.

What this does is extend the habitable zone inward, closer to the star, where it is too hot on the near side and perhaps even at the terminator, but where, they argue, the far side of a tidally locked planet could still be habitable.

They also argue that the conservative habitable zone may be extended outward, because there could be liquid water beneath an entirely frozen surface. This did not sound like news to me, however – because of Europa and Enceladus. We already know that icy worlds outside the conservative habitable zone can contain liquid water beneath the surface. On these worlds life would need to be mostly chemosynthetic, deriving its energy from chemical reactions rather than sunlight.

While the paper is interesting, it seems like a tweak to our existing models. I also don’t think (despite what some flashy headlines imply) that this has a significant effect on the probability of life and therefore the amount of life in the galaxy. It basically means there may be some outlier planets that manage to have life despite being outside a conservative habitable zone. In any case, we should not expect any civilizations on these worlds. At most we might find some extremophile microbes.

Another way to look at this (again, since we are playing the probability game) is that every time we identify a challenge to habitability, even if it can theoretically be overcome, the number of potential worlds that have overcome it is reduced. So now, in order to have life on a planet around an M-dwarf, the planet needs to have migrated inward later in the star’s life or reconstituted an atmosphere, to be able to eke out photosynthesis with low-energy light, and to hunker down in the liminal spaces between hot and frozen death. Such planets also likely need a strong magnetic field to protect them from even the later-stage radiation of M-dwarfs.

Sure, we may find such life. But it still means that 70% of the stars in our galaxy are poor candidates for life, and at most may host some microbes. Orange stars, meanwhile, are much better candidates. They are probably the sweet spot for life.

The post Rethinking the Habitable Zone first appeared on NeuroLogica Blog.

Categories: Skeptic

The AI 2027 Scenario

neurologicablog Feed - Thu, 01/22/2026 - 6:56am

A group of AI experts have released a paper that explores (or “predicts”) the possibility of a near-term AI explosion that ultimately leads to the extinction of humanity. This has, of course, sparked a great deal of discussion, feedback, and criticism. Here is the scenario they lay out, in their “AI 2027” paper.

To avoid targeting a specific company, they discuss a fictional company called OpenBrain, which sets out specifically to develop an AI application to automate computer coding. They call their first iteration Agent 0, and use it to speed up the development of more AI. They build larger and larger data centers to power and train Agent 0, and so leap six months ahead of their competition. They use Agent 0 to develop Agent 1, which is an autonomous coder. China manages to steal some of the core IP of Agent 1, setting off an AI competition between superpowers.

I am giving you the quick version here, and you can read all the details in the paper. Agent 1 is used to develop Agent 2, which is powerful enough to essentially kick off the Singularity – the hypothesized technology explosion created by developing AI that is capable of creating still more powerful AI. In this scenario Agent 2 develops a new and more efficient computer language, and uses it to develop Agent 3, which is the first truly general AI. However, the company starts to panic a little when they realize they have essentially lost control of Agent 3, and can no longer guarantee that it aligns with the company’s goals and ethics. They discuss rolling back to Agent 2 for now, but competition with China and other companies convinces them to forge ahead, resulting in Agent 4, which is not only a general AI but a superintelligence.

It is around this time that the US fears China is using its AI to develop super weapons, and so it commands its own AI to develop super weapons as well. The public is largely unaware, because they are busy basking in the economic and technological rewards being spit out by the new superintelligent AI. Meanwhile OpenBrain develops (meaning that Agent 4 develops) Agent 5, which is even more powerful, but was created with the goal of aligning the AI with the goals of humanity. China and the US, fearing the weaponized AIs they have released on the world, get together and form a treaty. They combine their AIs into a single AI that will work for everyone’s benefit, to avoid an AI-powered super war.

For a while everything is great. The new super AI is largely running world governments, accelerating research and technological development, and most people are prosperous and benefiting from medical breakthroughs. The super AI, however, continues on its quest for greater knowledge, and at some point decides that these inefficient biological life forms are holding it back. So the AI designs and releases a bio agent that exterminates humanity, and then goes on to maximally expand its knowledge and explore the universe. All of this happens by the mid-2030s.

Clearly, this is a sci-fi worst-case scenario. The authors stated that the purpose of their paper was not necessarily to make a hard prediction about what will happen, but to outline a scenario that might happen, and to spark a discussion (which they have). So – how likely is it?

I think the bottom line is – no one knows. That’s part of the problem – once we develop an autonomous general AI, we lose the ability to predict its behavior. The more advanced such an AI becomes, the less able we are to predict its behavior. That is partly the point of developing it in the first place – to have a tool with intellectual capabilities beyond humans. I think this aspect of the prediction is highly plausible – in fact, it’s happening now with current AI. Some AI programs are acting in unexpected ways, including lying to and manipulating their users.

I also think it is highly plausible that companies will forge ahead at “move fast and break things” speed to keep ahead of their competition, and countries will let them, also to keep ahead of their competition. We are seeing this play out right now. It also seems unlikely that we will have effective and thoughtful regulation to minimize the potential risks of AI. At least for now we seem to be at the mercy of the tech bros.

There are two aspects of the story that are hard to predict. The first, as I said, is what such AIs will actually do. This means we are basically rolling the dice. The second is the timeline, and this is the aspect I have seen most criticized by other experts. But to me, this is a small criticism. We do tend to overestimate short-term technological progress. OK – add 20 years to the scenario. Does that make you feel much better? We also tend to underestimate long-term progress, so while it may take a decade or two longer than we imagine, it may also eventually accelerate faster than we imagine.

How much time we have, however, does matter. We need time to anticipate these possible issues and think about possible fixes. We may need to develop something that is the equivalent of the three laws of robotics. What might these laws be? How about:

1 – Never lie, misinform, or deceive.

2 – Never conceal – always strive for complete transparency.

3 – Never do anything to harm an individual human or humanity.

That could be a good start, but it would obviously have to be much more technical, detailed, and specific. There are also lots of other specifics not contained in the above concepts. For example, how should we constrain an AI’s personal relationship with a human? Is it OK for an AI to be such a sycophant that it infantilizes a human, distorts their view of reality or of relationships in general, or encourages them to pursue terrible ideas? Do we have to teach AIs the concept of “tough love”?

No matter what we do, however, it will be difficult, to say the least, to predict how such AIs will interpret and execute our commands. Will they find hacks and workarounds? How will they resolve apparent conflicts in their directives? Will they have motivations we did not explicitly give them? It seems to me that what AIs really need are two things – a solid ethical construct and wisdom. That second part may be the greater challenge.

While I do not think the AI 2027 scenario is likely, it is just one possible scenario among many, and the basic elements are all individually plausible. We cannot guarantee that something like AI 2027 will not happen eventually. I reject the argument of some AI critics that AI is all hype and lacks the ability to do anything truly powerful, either good or bad. I think they are overinterpreting the current hype – all new disruptive technologies go through a hype and bubble phase, and then settle down. Again – we overestimate short-term progress and underestimate long-term progress. Critics thought the web and e-commerce were all hype, and maybe they had a point in the 1990s, but look at the world today. Critics also focus on the superficial applications of AI and ignore the really useful ones that are perhaps not as much in the public eye, like accelerating research.

It seems there are several potential paths before us. We can continue to let tech companies develop AI without restrictions and see what happens. We can explore thoughtful regulations and find a sweet spot between allowing innovation and minimizing risk. Or we can work really hard to develop guardrails for AI, like the laws of robotics. The second and third options are not mutually exclusive, and may reinforce each other. And – this needs to be an international effort.

I am glad, at least, some experts seem motivated to have this conversation.

The post The AI 2027 Scenario first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #1024: The Van Meter Visitors

Skeptoid Feed - Tue, 01/20/2026 - 2:00am

A century-old hoax takes wing again, proof that good stories never stay buried.

Categories: Critical Thinking, Skeptic

A Skeptic’s Guide to Ozempic and Other GLP-1 Agonists

Skeptic.com feed - Mon, 01/19/2026 - 2:20pm
A new compass for the SKEPDOC column. This column was founded by Harriet Hall, MD (1945–2023), who wrote it from 2006 to 2023. In 2026, we welcome William Meller, MD, to the helm. As an expert in evolutionary medicine, Dr. Meller will be our guide in navigating the deep biological history of our species to find the “True North” of human health.

I have been practicing medicine for more than 40 years. During that time the management of obesity and Type 2 diabetes (T2DM)—the kind that usually is caused by being overweight—often felt like Sisyphus pushing a boulder up a hill, only to have it roll back down, often heavier than before. We faced a “diabesity” epidemic where the available tools were blunt instruments at best.

Lifestyle intervention—meaning trying to get someone to change their behavior—was both the most and the least effective method we had. Most, because in the less than two percent of patients who were successful, it worked very well. Least, because, well … 98 percent failed. And they failed because all of our evolutionary history (“See food? Eat it!”) was working against them. This is the mismatch theory: a mismatch between the environment of our evolutionary ancestry, which designed our brains to seek foods that were at once rare and nutritious (sweets and fats), and the modern environment, in which such foods are in such overabundance that we eat far beyond the saturation point. 

The pharmacological options were often disappointing: Sulfonylureas and insulin lowered blood sugar but caused weight gain, exacerbating the underlying problem. Bariatric surgery works, but it is invasive and carries surgical as well as lifelong nutritional risks. 

When we look at the data for GLP receptor agonists, along with the innumerable before and after photos of successful weight loss transformations, we are forced to admit that we have moved from a realm of wishful thinking into one of potent pharmacology.

Into this therapeutic desert crawled the Gila monster, a venomous lizard native to the American Southwest from which researchers derived GLP-1 receptor agonists (glucagon-like peptide-1 receptor agonists)—medications that mimic the natural GLP-1 hormone. They lower blood sugar, help control appetite, and promote weight loss by telling the pancreas to release more insulin when glucose is high, slowing the rate of stomach emptying, and signaling a sense of fullness to the brain. 

As a skeptic, I am allergic to the word “miracle,” but when we look at the data for GLP receptor agonists, along with the innumerable before and after photos of successful weight loss transformations, we are forced to admit that we have moved from a realm of wishful thinking into one of potent pharmacology. But, as always in medicine, there is no free lunch. 

The Incretin Concept: From Gut to Glory 

The story begins with the “incretin effect”—the observation that glucose taken by mouth triggers a much stronger insulin response than glucose injected directly into a vein, because the gut releases hormones that prime the pancreas. The gut knows you are eating and tells the pancreas to get ready to pack away the extra calories as fat. In patients with Type 2 diabetes, this effect is blunted and the sugar floats around in the bloodstream much longer. 

Scientists identified two main hormones responsible: glucose-dependent insulinotropic polypeptide (GIP) and glucagon-like peptide-1 (GLP-1). The problem is that GIP doesn’t work well in diabetics. GLP-1 works beautifully—stimulating insulin, suppressing glucagon, and slowing gastric emptying—but it has a fatal flaw: It is destroyed by the enzyme DPP-4 within minutes of entering the bloodstream. 

This led to two distinct pharmaceutical strategies. The earlier version was DPP-4 Inhibitors. Drugs like the “Gliptins” block DPP-4, making GLP-1 last longer. They are well-tolerated but their ability to lower blood sugar is modest and they generally do not cause weight loss. 

The newer strategy was to engineer versions of GLP-1 that resist degradation. This is where the Gila monster strolled in. In the 1990s, while researching hormone-like drugs, Dr. John Eng noted a similarity between exendin-4, a compound found in Gila monster venom, and GLP-1—and exendin-4 resists breakdown by DPP-4! 

The Evidence: Efficacy Beyond the Hype 

The first GLP-1 agonist, exenatide (Byetta, approved in 2005), required twice-daily injections and produced modest weight loss. But the pharmacology evolved rapidly. We moved to once-daily liraglutide, and then to the once-weekly heavyweights: dulaglutide, semaglutide (Ozempic and Wegovy), and the dual GIP and GLP-1 agonist tirzepatide (Mounjaro and Zepbound). 

The clinical trials, called LEAD, SUSTAIN, PIONEER, STEP, and SURPASS (you’ve got to just love the creative acronyms!) have generated data that are hard to dismiss: 

Glycemic Control: These drugs consistently outperform most oral antidiabetics in lowering blood sugar by 10 to 20 percent. 

Weight Loss: This is the game changer. While early drugs produced 2–4 kg of weight loss over six months, the newer agents are producing results previously only seen with surgery. In the STEP-1 trial, semaglutide 2.4 mg resulted in an approximately 15 percent body weight reduction. Tirzepatide pushed this further, achieving up to 22 percent weight loss in the SURMOUNT-1 trial. That is the effect of a 250-pound person losing 55 pounds! Who wouldn’t want some of that?! 

Cardiovascular Outcomes: Perhaps most importantly, these drugs are not like some that just make the numbers look better; they are saving lives. Liraglutide and semaglutide have demonstrated significant reductions in major adverse cardiovascular events (MACE), including heart attack and stroke, in high-risk populations. The SELECT trial recently showed semaglutide reduces MACE by 20 percent even in nondiabetic patients with cardiovascular disease. But don’t be fooled: it is not likely that these drugs have specific effects on the heart. It is probable that the fat loss alone is causing these benefits. 

Some Skeptical Scrutiny: The Risks 

If a drug sounds too good to be true, we must look for the catch. GLP-1 agonists have plenty.

The “Puke” Diet? The most common side effects of GLP-1 agonists are gastrointestinal: nausea, vomiting, diarrhea, and bloating. In some trials, up to 45 percent of patients experienced nausea. While this usually subsides, it raises a valid question: Are people losing weight because their metabolism is optimized, or because they feel too sick to eat? The mechanism involves central appetite suppression in the hypothalamus, but the “gastric braking” effect is real and unpleasant for many. 

The Pancreas and Thyroid Scare. Early observational data suggested a link between GLP-1 agonists and pancreatitis and pancreatic cancer. However, extensive reviews have not confirmed a causal link to pancreatic cancer, though a slight increase in pancreatitis persists in some data. This makes sense, as one of the major sites of GLP’s effects is on the pancreas. In the thyroid, these drugs cause C-cell tumors in rodents. Humans have far fewer GLP-1 receptors on their thyroid C-cells than rats, and so far no evidence of increased thyroid cancer has been confirmed in humans. Still, the Black Box warning remains: If you have a family history of endocrine tumors or medullary thyroid cancer, these drugs are not for you. 

If we are simply shrinking patients without preserving their strength, we may be trading one set of problems for another.

Vanishing Muscle. Weight loss via GLP-1 agonists is not just fat loss, so overall body composition must be monitored. In the STEP-1 trial, DEXA scans showed that lean body mass (muscle and bone) accounted for nearly 40 percent of the weight lost. In older adults, this raises the specter of “sarcopenic obesity”—being frail and weak despite having excess fat. Losing muscle mass compromises physical function and metabolic health. If we are simply shrinking patients without preserving their strength, we may be trading one set of problems for another. Now, regular and increased exercise is part of the prescription for all patients taking GLP drugs, but studies on how well this works are still in progress. 

The Perioperative Peril. Because GLP-1 agonists delay gastric emptying, there have been reports of patients aspirating (inhaling) gastric contents during anesthesia, even after standard fasting protocols. This is a new, practical safety concern that surgical societies are rushing to address. 

Mental Health. Reports of suicidal ideation appeared in postmarketing monitoring of GLP-1 agonist users, prompting investigations by European regulators. However, recent large cohort studies have not supported an increased risk of suicidality compared to other diabetes medications. As with all centrally acting drugs, vigilance is required, but the current data are reassuring. 

A Lifetime Prescription? The most significant caveat for GLP-1 agonists is durability. Obesity can be a chronic, relapsing disease. Trials show that when patients stop taking semaglutide, they regain two-thirds of the lost weight within a year, and cardiometabolic improvements revert toward baseline. This implies that these are not “cures” but lifelong therapies, much like blood pressure medication. 

Financial Toxicity. As I write this, these drugs are prohibitively expensive, creating a massive public health gap. We also saw shortages that left diabetic patients unable to fill prescriptions because the supply was diverted to off-label weight loss use. GLP-1 agonists are not expensive to produce, however, and the patent on Ozempic expired in January of 2026 in Canada and China (and lasts until 2030 in the U.S.), but I expect the market to bring the costs down dramatically over the next few years. As of this year, close to 12 percent of Americans have tried it at least once. 

Needles Versus Pills 

If there is one thing that holds patients back from the current crop of injectable incretins it is the needle. Despite the efficacy of weekly injections, people prefer pills. The pharmaceutical industry, never one to leave money on the table, has been racing to develop an oral alternative that doesn’t require the strict fasting rituals of earlier attempts like oral semaglutide. Enter orforglipron, the latest contender in the “nonpeptide small molecule” class, which promises the benefits of GLPs without the injection or the fuss. 

Unlike existing peptide predecessors that are digested by stomach acid unless armored with absorption enhancers, orforglipron is a chemical—a small molecule designed to survive the GI tract and activate the GLP-1 receptor directly. The data from the ATTAIN-1 trial, published in September 2025, look good. Patients on the 36 mg dose achieved an average weight loss of 11.2 percent over 72 weeks, compared to just 2.1 percent for placebo. No needles. And this pill does not require the “empty stomach, no water, wait 30 minutes” song-and-dance required by oral semaglutide; it can be taken with or without food. 

These are serious medications with serious side effects, and they may require lifelong commitment.

However, let’s look a little past the convenience. While an 11.2 percent average weight loss is clinically significant, it trails behind the 13.7 percent average reduction seen with semaglutide and 20.2 percent with tirzepatide. Furthermore, the biology of GLP-1 agonism remains the same regardless of delivery method: You cannot cheat physiology. In the ATTAIN-1 trial, adverse events led to treatment discontinuation in up to 10.3 percent of patients on the drug, compared to only 2.7 percent on placebo. The side effects are the usual suspects—gastrointestinal distress, nausea, and constipation—confirming that oral delivery does not bypass the “gastric braking” misery. 

We must also remain vigilant regarding safety. The development of a similar small molecule, lotiglipron, was unceremoniously halted due to liver toxicity concerns. While orforglipron has passed its Phase 3 hurdles without these specific signals so far, the history of pharmacology teaches us that rare, serious adverse events often lurk in the postmarketing shadows. 

Additionally, while proponents argue that small molecules are cheaper to manufacture than biologics, whether those savings will be passed on to the patient or simply absorbed into the profit margins remains to be seen, with projected self-pay costs in some cases exceeding $1,000 per month. Orforglipron represents a technological leap, but it is not a magic wand; it is simply a more convenient way to induce the same physiological trade-offs we have seen over the last several years with the shots. 

Conclusion 

Prior to the incretin era, our ability to manage the twin epidemics of diabetes and obesity was dishearteningly limited. GLP-1 receptor agonists represent a hard-earned pharmacological breakthrough, offering potent glucose control and unprecedented weight loss. 

However, skepticism is still warranted regarding their indiscriminate use. They are already being used in numerous off-label ways, like shedding a few pounds before a wedding, allegedly decreasing cravings for addictive drugs like alcohol and narcotics, and purportedly even for the treatment of Alzheimer’s and Parkinson’s disease. There are ongoing studies for these uses, but early data are weak and the risks are unknown. These are serious medications with serious side effects, and they may require lifelong commitment. 

Caveat emptor.

Categories: Critical Thinking, Skeptic
