Within days of the U.S. strike on Caracas and the capture of Venezuelan President Nicolás Maduro on January 3, 2026, a remarkable claim was sweeping across social media: American forces had deployed a devastating “sonic weapon” that left Venezuelan soldiers vomiting blood and unable to stand.
The headlines were dramatic, with Forbes proclaiming: “U.S. Secret Weapon May Have Incapacitated Maduro’s Guards.”1 The Economic Times wrote about America’s “Secret Sonic Weapon,”2 while the UK Sun asserted: “US ‘Sonic Weapon’ is REAL after Chilling Claims it Left Captured Maduro’s Guards ‘Vomiting Blood.’”3 The story was sensational, almost terrifying, but as we shall argue here, almost certainly false.
Within minutes of the first explosions on January 3, conflicting claims were already circulating on social media about the number of missiles fired, ground forces deployed, and helicopters spotted flying over the city of Caracas, the focal point of the attack. The ambiguity and uncertainty that typify the fog of war are ideal breeding grounds for rumors. Ordinarily, such rumors fade as reliable information emerges. But in this case the U.S. military remained silent, while the Venezuelan government, like many authoritarian regimes, is notorious for withholding information.
This is a classic setup for the proliferation of rumors, whose intensity is proportional to both the perceived importance of the event and the level of ambiguity.4 Situations such as this are fertile soil for exaggerations, half-truths, conspiracy theories, and outright fabrications. Even after the situation on the ground stabilized and many early rumors were confirmed or denied, claims about the use of a sonic weapon not only persisted but flourished.
From WhatsApp to the World
One challenge in tracing this story to its origins is that it began in Venezuela, where the earliest accounts circulated in Spanish. Fortunately, one of us (DZ) is a fluent speaker and was able to examine the primary sources. In the days that followed, audio recordings rapidly spread on WhatsApp, describing events through purported firsthand accounts from soldiers and relatives near the impact zones.
On January 9, one story began circulating widely. In it, a supposed member of a colectivo—an armed militia that controls different sections of the city—described how the attack unfolded in the historic 23 de Enero neighborhood of western Caracas.
The audio was posted on the YouTube channel of Emmy Award-winning Venezuelan journalist Casto Ocando, and soon accumulated over one million views.5 In it, an anonymous narrator describes the attack.
“They shut down the entire electrical system, knocked out the radars, knocked out everything.”
He then recounts how a soldier activated a Russian-made anti-aircraft defense system to attack the helicopters.
“When he fired it, a drone immediately detected it and, well, they died, they killed them, all of them [the soldiers] with a single bomb… There are many dead, many people burned, many people wounded. I’ll send you a video, there are approximately 100 military personnel dead,” he adds.6
The narrator’s confidence in precise casualty figures amid the chaos of a nighttime attack is itself a red flag.
The alleged eyewitness continues:
“There were only eight helicopters and 20 men…who killed 200 men, 32 with a single shot, plus presidential guards of honor and civilians.”
He then describes weapons that “fired more than 300 bullets per minute,” adding,
“a thing that made me bleed, I was bleeding from my nose and didn’t know what it was, it was a whistle that sounded throughout Caracas and made people bleed from their noses and ears. We couldn’t move, that whistle immobilized us, they say it’s what’s called a sonic shockwave. It was something really horrible….”
The clip ends with claims that Americans
“don’t fight fair. They fight from above, with drones. The speeds of those helicopters…. They only sent eight helicopters and destroyed all of Caracas.”
The description of a sound that causes nosebleeds and immobilization across an entire city is physically implausible. While acoustic weapons such as Long Range Acoustic Devices (LRADs) can cause pain and disorientation at close range, their effects diminish rapidly with distance as the sound energy disperses. No known acoustic technology can cause bleeding from the ears and nose at a distance, let alone city-wide.
Enter, Stage Right, Mike Netter
On January 9, the WhatsApp audio recording quickly spread across various social networks. The following day, popular conservative influencer Mike Netter posted on X a strikingly similar story, which he attributed to a security guard loyal to Nicolás Maduro.
🚨This account from a Venezuelan security guard loyal to Nicolás Maduro is absolutely chilling—and it explains a lot about why the tone across Latin America suddenly changed.
Security Guard: On the day of the operation, we didn't hear anything coming. We were on guard, but… pic.twitter.com/392mQuakYV
It is reproduced below so readers can judge for themselves:
Security Guard: On the day of the operation…suddenly all our radar systems shut down without any explanation. The next thing we saw were drones, a lot of drones, flying over our positions…. After those drones appeared, some helicopters arrived, but there were very few. I think barely eight helicopters. From those helicopters, soldiers came down, but a very small number. Maybe twenty men. But those men were technologically very advanced…
The story was originally posted in English, itself suspicious for a supposed Venezuelan guard. Had this been a genuine interview with a colectivo member, the original would almost certainly have appeared in Spanish. No Spanish-language version has ever surfaced. The “interview” appears to be a reconstruction of the WhatsApp audio, repackaged in a question-and-answer format.
Another red flag is the distinctly pro-American tone, which is unlikely to have come from a foreign fighter, let alone one sworn to defend his government. Defeated soldiers do not typically serve as unsolicited recruitment posters for the enemy. The guard also conveniently uses round figures (eight helicopters, twenty men, 300 rounds per minute), makes no mention of his comrades’ courage or resistance, and ends with a warning directed at Mexico, precisely echoing President Trump’s rhetoric at the time.
Journalists are trained to go to the source. Accordingly, we contacted Netter to request details of the alleged guard and the interviewer, and asked him to share the original Spanish source of the interview with us. He said he couldn’t do so without first asking the source, which he promised to do. As of this writing, he has not gotten back to us.
Press Secretary Leavitt Intervenes
Mike Netter’s post could have disappeared into the daily churn of social media had it not been for White House press secretary Karoline Leavitt, who shared it on her official account with the dramatic text: “Stop what you are doing and read this...”
Stop what you are doing and read this…
🇺🇸🇺🇸🇺🇸🇺🇸🇺🇸 https://t.co/v9OsbdLn1q
This endorsement dramatically elevated the story’s perceived credibility, despite the absence of any corroborating evidence. In effect, an unverified, anonymous social media claim received a semi-official White House endorsement, a departure from the press secretary’s traditional role as a gatekeeper of verified information. As a result, Netter’s post has gained over 30 million views and 10,000 responses.
Ever Increasing Circles
On January 10, the New York Post repeated Netter’s account under the headline: “US used powerful mystery weapon that brought Venezuelan soldiers to their knees during Maduro raid: witness account.”7 The story recounted the most spectacular elements: the sound wave, exploding heads, nosebleeds, and vomiting.
Curiously, the same YouTube channel of Casto Ocando that had released the original audio later uploaded a new video citing the Post article, treating the Post’s reconstruction as independent confirmation of its own earlier material. Other media outlets went further, falsely claiming that the Venezuelan guard had been interviewed by the New York Post.8
This process, where secondary reporting is mistaken for a primary source, is a classic example of how media myths are manufactured through journalistic shortcuts.
Notably, none of the Venezuelan soldiers who later appeared on camera—people whose identities and ranks are known—mentioned the use of sonic weapons. Footage aired on the Chavista network Telesur depicts young men wounded by shrapnel describing missile strikes, drones, and gunfire. None reported bleeding from the nose, vomiting, or sensations of cranial explosions.9 Nor are there civilian testimonies from Caracas describing a city-wide whistling sound. Some soldiers and civilians did report buzzing sounds, including individuals near Fort Tiuna, one of the attack sites. However, these sounds are readily explained by falling ordnance and whizzing bullets—mundane combat phenomena, not evidence of exotic weaponry.
It is also conspicuous that during President Trump’s exclusive interview with the New York Post, published on January 24th, he was asked about the “sonic weapon” rumors. Trump replied that the U.S. has “the discombobulator,” which disabled enemy equipment as the American helicopters swooped in to attack in Caracas. But he made no mention of its effects on people.10
It’s Similar to the Havana Syndrome
The symptoms described in the WhatsApp audio are strikingly similar to claims made during the Havana Syndrome scare. Recently, the intelligence community deemed the involvement of a foreign power “highly unlikely,” attributing Havana Syndrome to psychogenic and environmental factors rather than directed energy weapons.11
The Venezuelan sonic weapon narrative appears to be drawing from the same well of popular mythology. Furthermore, nosebleeds following an explosive military attack are far more likely to be caused by conventional factors—blast pressure, dust, smoke inhalation, even stress—than by a hypothetical sonic weapon.
The narrator in the WhatsApp audio clip may be misattributing ordinary combat effects to an extraordinary cause: a classic pattern in rumor formation.
Under conditions of extreme stress, uncertainty, and sensory overload, people routinely seek out coherent explanations that give meaning to their own experiences. In the context of a sudden nighttime military strike, against a backdrop rife with ambiguity and anxiety, physical symptoms such as nosebleeds, dizziness, ringing in the ears, and temporary immobility are especially prone to being reinterpreted through the lens of culturally available narratives.
From a rumor and folklore perspective, the sonic weapon story fulfills a familiar psychological function: it collapses complex, confusing events into a single explanatory cause, providing closure amid uncertainty. The sonic weapon narrative transforms uncertainty into conviction and speculation into “fact.” This process reduces anxiety. As philosopher Susanne Langer once famously observed: humans possess a remarkable ability to adapt—except when confronted with chaos.12
A Familiar Pattern
The sonic weapon story follows a well-worn media myth template: an ambiguous event, an information vacuum, an anonymous account, amplification by politically motivated actors, and validation by authorities who should know better.
What began as a WhatsApp voice message from an anonymous militia member was transformed into a polished English-language “interview,” boosted by a partisan influencer, and essentially endorsed by the White House. At no stage was a shred of physical evidence produced. The “Discombobulator,” as far as the evidence shows, exists only in the fog of war, and in the imaginations of those eager to believe.
It is also worth asking the cui bono question: “Who benefits from the sonic weapon narrative?” First, the U.S. government and military—by projecting overwhelming technological superiority. Second, pro-government Venezuelan sources also benefit from a story that excuses their rapid military defeat.
When both sides gain from a myth, its survival is all but guaranteed.
One of the hardest things to accept, especially for people who care about rationality, is that epistemic rigor is rarely applied consistently. Most of us do not give up bad arguments. Instead, we give up standards of evidence when the conclusion becomes socially or morally important to us.
There are well-established psychological reasons why this happens. Decades of research in social psychology show that many of our beliefs are not just opinions we hold, but parts of who we are. They become woven into our identities, our friendships, and often our professional lives.
Put more simply, we build our identities, friendships, and careers around certain beliefs. As a result, challenges to those beliefs are not experienced as abstract disagreements but as personal threats. Our self-preservation mechanism kicks in: We bend reality as far as necessary to preserve a flattering story about ourselves and our ingroup. Denial and aggression toward the outgroup follow naturally.
Psychologists Henri Tajfel and John Turner, who developed Social Identity Theory, showed that people internalize the values and beliefs of the groups they belong to, treating them as extensions of the self. When those beliefs are questioned, the threat is processed much like a threat to your status or belonging. The reaction is often defensive rather than reflective.
More recent work on motivated reasoning helps explain why such a reaction is so persistent. In the 1990s, psychologist Ziva Kunda demonstrated that people selectively evaluate evidence in ways that protect conclusions they are already motivated to believe. When a belief supports your identity or social standing, the mind unconsciously applies stricter standards to disconfirming evidence and looser standards to supporting evidence.
Political scientist Dan Kahan later expanded this idea with what he called “identity-protective cognition.” His research showed that people with higher cognitive ability are often better, not worse, at rationalizing beliefs that align with their cultural or political identities. In other words, intelligence does not necessarily make you more objective; it can make you a more effective advocate for your own side!
This body of research helps explain why challenges to core beliefs can feel existential. If your moral worldview underwrites your relationships, your career, or your sense of being a good person, abandoning it comes with real social and psychological costs. Under those conditions, defending the belief feels like defending your life as it is currently organized.
Seen in this light, the selective abandonment of evidentiary standards is not a moral failing unique to any one group. It is a predictable human response to perceived identity threat. Reasoning shifts from a tool for understanding the world to a mechanism for self-preservation.
I learned this firsthand during my years in the New Atheist movement. What struck me was how selective people’s skepticism could be. In debates about religion, the standards were ruthless. In debates about politics and social issues, those same standards were easily relaxed, and often vanished.
Take prayer. For decades, skeptics have pointed to controlled trials showing no measurable benefit of intercessory prayer. The best-known example is the STEP trial, a randomized study of nearly 1,800 cardiac bypass patients published in The American Heart Journal. It found no improvement in outcomes for patients who were prayed for, and in one group outcomes were slightly worse among patients who knew they were being prayed for. Among the New Atheists, prayer was considered resolved beyond reasonable debate not only because the experimental evidence showed no effect, but because the underlying causal story itself collapsed upon examination.
Philosophically, intercessory prayer fails at the most basic level: It posits an immaterial agent intervening in the physical world in ways that are neither specified nor independently detectable. There is no plausible mechanism, no dose-response relationship, no way to distinguish divine intervention from coincidence, regression to the mean, or natural recovery.
When some studies do claim positive effects of prayer, they almost invariably collapse under close inspection—small sample sizes, multiple uncorrected comparisons, vague outcome measures, post hoc subgroup analyses, or outright publication bias. Some define “answered prayer” so flexibly that any outcome counts as success; others rely on self-reported well-being, which is especially vulnerable to expectancy effects and motivated reasoning.
This is precisely why large, preregistered trials and systematic reviews, such as those published in The American Heart Journal, are treated as decisive: They close off these escape hatches. The conclusion that prayer “doesn’t work” is not dogma; it is the residue left after methodological rigor strips away every alternative explanation.
Now compare that level of scrutiny to how many people treat evidence in politically favored domains. What matters here is not even whether these conclusions are right or wrong, but how they become insulated from refutation.
In debates over trans healthcare, for example, studies in favor of many invasive medical interventions are based largely on self-reported outcomes, short follow-up periods, and substantial attrition. Despite these limitations, they are frequently treated as definitive. Criticisms that would be routine in almost any other medical context are instead dismissed as bad faith. But the fact that these issues involve real suffering should not exempt them from evidentiary scrutiny; it should raise the bar for it. In this case, the most comprehensive evidence available—multiple systematic reviews—has raised serious concerns about the overall quality of the evidence base, particularly with respect to pediatric interventions.
The UK’s Cass Review, commissioned by the National Health Service and published in stages between 2022 and 2024, concluded that the evidence for puberty blockers and cross-sex hormones in adolescents is generally of low certainty. Similar conclusions were reached by Sweden’s National Board of Health and Welfare and Finland’s Council for Choices in Health Care, both of which revised clinical guidelines after finding the evidence weaker than previously assumed. None of this proves that such treatments never help anyone, especially adults who exhausted other options. It does show that claims of scientific certainty are unjustified.
The same pattern appears at the level of theory. New Atheists made a cottage industry out of attacking unfalsifiable religious claims and god-of-the-gaps reasoning. Yet many of the same people now defend claims about “systemic discrimination” that are structured in exactly the same way: When disparities persist, they are treated as proof. When they shrink, the explanation retreats to subtler and less measurable mechanisms. Evidence against the claim rarely counts against the claim in the way it would in other domains.
Consider policing. It is often treated as a settled fact that racial bias is the primary driver of police shootings. But when Harvard economist Roland Fryer examined multiple large national datasets on police use of force, he found that there were no racial differences in officer-involved shootings once relevant contextual factors—such as crime rates, encounter circumstances, and suspect behavior—were taken into account.
What followed was not a broad reevaluation of the claim, but a shift in how it was framed. Rather than direct bias operating at the level of individual officers, explanations moved toward less specific and harder-to-measure forces: institutional culture, historical legacy, or diffuse forms of “structural” racism. These explanations may or may not be true, but they function differently from the original claim. Because they are more abstract and less tightly specified, they are also far more difficult to test or falsify.
Here’s the key issue: The pattern we can observe in all this is not that evidence resolved the question, but that disconfirming evidence changed the nature of the claim itself. A hypothesis that was once presented as empirically straightforward became broader, more elastic, and increasingly insulated from direct empirical challenge. Sound familiar? It’s the god-of-the-gaps fallacy.
The same pattern appears in debates over wage gaps. Raw differences in average earnings between groups are often presented as straightforward evidence of discrimination. But when researchers such as June O’Neill and later Claudia Goldin showed that simply controlling for factors such as occupation, hours worked, experience, career interruptions, and job risk substantially narrows or eliminates many commonly cited wage disparities, the original claim quietly shifted.
It was no longer argued that some demographics were being paid less than others for the same work under the same conditions. Instead, the explanation moved upstream: Sexism or systemic racism were said to operate on the variables themselves, shaping career choices, work hours, and occupational sorting in ways that produced lower average pay.
Again, these higher-level explanations may be partly true. But they function very differently from the initial claim. A hypothesis that began as a concrete, testable assertion about unequal pay for equal work became broader, more abstract, and harder to falsify. Evidence that would ordinarily count against the claim did not weaken it; it simply pushed the claim into less measurable territory. In other words, evidence that would count against the claim in any other domain instead causes the claim to become broader, more abstract, and less falsifiable. In these cases, disparities function the way miracles once did in theology: as proof of hidden forces.
What bothered me about the New Atheism movement was not disagreement over conclusions. It was the collapse of standards. Arguments once dismissed as unscientific were rehabilitated the moment they became morally fashionable. I focus here on the New Atheism movement because it marked the first time in my life (and, as far as I can tell, the first time in history) that a movement, at least on its surface, explicitly committed itself to applying the highest standards of evidence to some of the most consequential claims about the world, and in doing so successfully and very publicly dismantled societal structures and beliefs that had endured for millennia.
I’ve been thinking about all this for a long time, and I’ve come to suspect that most people—not by choice, but by evolutionary design—do not want or need a fully accurate understanding of how the world works. They want beliefs that protect their identity, signal membership in the right group, and increase their chances of (social) survival. Michael Shermer explained some of the evolutionary processes at hand here rather well in his books How We Believe and Conspiracy. In short, when it comes to patternicity—the human tendency to find meaningful patterns in meaningless noise—making Type 1 errors (i.e., finding nonexistent patterns) carries little evolutionary risk, while the opposite (i.e., missing real patterns) can often be the difference between life and death. This means that natural selection will favor strategies that make many incorrect causal associations in order to establish those that are essential for survival and reproduction.
Under those conditions, reasoning becomes performative. Skepticism is adopted when it flatters the self and abandoned when it threatens a moral narrative. That is why debates on these topics so often drift toward unfalsifiable language and moral imperatives.
A fair question follows: How does anyone know they are not doing the same thing?
I think the real danger we should try to internalize is not that other people do this. It is that all of us do.
Engaging on social media to discuss pseudoscience can be exhausting, and it can make one weep for humanity. I have to keep reminding myself that what I am seeing is not necessarily representative. The loudest and most extreme voices tend to get amplified, and people don’t generally make videos just to say they agree with the mainstream view on something. There is massive selection bias. But still, to some extent social media does both reflect the culture and influence it. So I like not only to address specific pieces of nonsense I find but also to look for patterns: patterns of claims, and also of thought or narrative.
Especially on TikTok but also on YouTube and other platforms, one very common narrative I have seen amounts to denying history, often replacing it with a different story entirely. At the extreme the narrative is – “everything you think you know about history is wrong.” Often this is framed as – “everything you have been told about history is a lie.” Why are so many people, especially young people, apparently susceptible to this narrative? That’s a hard question to research, but we have some clues. I wrote recently about the Moon Landing hoax. Belief in this conspiracy in the US has increased over the last 20 years. This may be simply due to social media, but it also correlates with the fact that people who were alive during Apollo are dying off.
Another factor driving this phenomenon is pseudoexperts, who can also use social media to get their message out. Among them are people like Graham Hancock, who presents himself as an expert in ancient history but is actually just a crank. He has plenty of factoids in his head, but he has no formal training in archaeology and is the epitome of a crank: usually a smart person with outlandish ideas who never checks those ideas with actual experts, so they slowly drift off into fantasy land. The chief feature of such cranks is a lack of proper humility, even overwhelming hubris. They casually believe that they are smarter than the world’s experts in a field, and that based on nothing but their smarts they can dismiss decades or even centuries of scholarship.
Followers of Hancock believe that the pyramids and other ancient artifacts were not built by the Egyptians but by an older and more advanced civilization. There is zero evidence for this, however – no artifacts, no archaeological sites, no writings, no references in other texts, nothing. How does Hancock deal with this utter lack of evidence? He claims that an asteroid strike 12,000 years ago completely wiped out all evidence of their existence. How convenient. There are, of course, problems with this claim. First, the asteroid strike at the end of the last glacial period was in North America, not Africa. Second, even an asteroid strike would not scrub all evidence of an advanced civilization. He must think this civilization lived in North America, perhaps in a single city right where the asteroid struck. But they also traveled to Egypt, built the pyramids, and then came home, without leaving a single tool behind. Even a single iron or steel tool would be something, but he has nothing.
Of course, there is also a logical problem, arguing from a lack of evidence. This emerges from the logical fallacy of special pleading – making up a specific (and usually implausible) explanation to explain away inconvenient evidence or lack thereof.
Core to the alternative history narrative is also the idea that those ancient people could not possibly have built these fantastic artifacts. This is partly a common modern bias – we grossly underestimate what was possible with older technology, and how smart ancient people could be. Even thousands of years ago, in any culture, people were still human. Sure, there has been some genetic change over the last few thousand years, but not dramatic change, and it shows up in how common alleles were, not in their existence. In other words – every culture could have had their Einstein. Ancient Egypt had genius architects, and in some cases we even know who they were.
People also underestimate the willingness of ancient people to engage in long periods of harsh work in order to accomplish things. Perhaps this is a “modern laziness bias” (I think I just coined that term). We are so used to modern conveniences that the idea of polishing stone for 12 hours a day for a year in order to create one vase seems inconceivable. The pyramids, it is estimated, were constructed by 20,000-30,000 workers over 20 years. This included skilled masons, who likely became very skilled during the project. Egypt had an infrastructure of such skilled workers, supported by many long-term projects over centuries.
Which brings up another point – we underestimate how much time these ancient civilizations existed. My favorite stat is that Cleopatra lived closer in time to the Space Shuttle than the building of the pyramids. Wrap your head around that. These ancient people were clever, they included highly skilled crafters, and they had centuries, at least, to advance their techniques.
What amazes me is that this narrative of denying history extends to recent events. Again, the Moon landing is an example. But there is also a narrative circulating on TikTok that buildings from the 18th, 19th, and even 20th century were not built by the people historians say built them. They were found in place, having been built by an older and more advanced civilization – called Tartaria. Never heard of it? That’s because it does not exist. This civilization was supposedly wiped out by a world-wide mud flood in the 19th century. According to this particularly nutty conspiracy theory, modern governments just occupied the buildings left behind, then conspired together to wipe the history of the mud flood and Tartaria from all records.
What is even more amazing to me is that, in far less time than it took to create a TikTok video spreading this nonsense, someone with even white-belt level Google-fu could have found convincing evidence that this is wrong. You can find pictures of the buildings being built, or of the city before they were built, or documentation of them being built, or experts who have already gathered all this information for you. You can also find that “Tartaria” was a medieval label used to denote the “land of the Tartars”, which simply refers to the Mongols. It was a nonspecific geographic label, not an actual place or nation.
But of course, none of this matters in a social media world in which narrative is truth, everything “they” say is a lie, and in fact truth or lie is not even really a thing. It’s all narrative, it’s all performance and clicks.
And this is why scholars and scientists need to engage with the world, much more than they currently do. We cannot simply ignore the nonsense with the idea that it will shrivel and die if we don’t give it light. That is such a pre-social media idea (if it were ever true). We have to fight for scholarship, for logic, facts, and evidence. We have to fight for history.
My long-stated position (although certainly modifiable in the face of any new evidence, technological advance, or good arguments) is that the optimal pathway to most rapidly decarbonize our electrical infrastructure is to pursue all low-carbon options. I have not heard anything to dissuade me so far from this position. A couple of SGU listeners, however, pointed me to this video making the case for a renewable + battery energy infrastructure.
The channel, Technology Connections, does a good job at putting all the relevant data into context, and I like the big-picture approach that the host, Alec Watson, takes. I largely agree with the points he makes. Also, at no point does he say we should not also build nuclear, geothermal, or more hydroelectric. He does, perhaps, imply that we don’t need nuclear at several points, but he did not address it directly.
So what are the big-picture points I agree with? He correctly points out that fossil fuels are disposable – they are fuel that you burn. They do not, in themselves, create any energy infrastructure. Meanwhile, a solar panel or wind turbine, once you have invested in building them, can produce energy essentially for free for 20 years. He argues that we should be investing in infrastructure, not just pulling fuel out of the ground that we will burn and it’s gone. I get this point; however, what about hydrogen? It is not certain, but let’s hypothetically say we find large reserves of underground hydrogen that we can tap into. I would not be against extracting this resource and burning it for energy, since it is clean (it produces only water and does not release carbon). Although, we might find better uses for such hydrogen other than burning it, such as feedstock for certain hard-to-decarbonize industries.
But his point remains valid – we should be looking for ways to develop our technology to be reusable, circular, and sustainable, rather than extractive. Extracting and burning a resource is one-way and limited. At most this should be a stepping stone to more sustainable technology, and I think we can reasonably argue that fossil fuels were that stepping stone and it is past time to move beyond fossil fuels to better technology.
Also, building wind or solar plus batteries is the cheapest new energy to add to the grid. He feels the economics will simply win out. I agree – with caveats. At times I get the feeling he is arguing for what will happen in the long run, but he also says “we are here now”. We are sort-of here now, but not fully, which I will get to below. Solar panels are relatively cheap and efficient. Wind turbines are getting more efficient and cost-effective as well, although they are more sensitive to market fluctuations and delays. And he correctly points out that these technologies are still rapidly improving, while there is not much room for improvement in burning fossil fuel.
He also nicely addresses some of the common misunderstandings about renewable energy (a lot of “whatabout” questions). What about the land-use issue with solar panels? He points out that if we just converted the land currently used to grow corn for ethanol (which is a massively inefficient use of land and way to create fuel), and instead put solar panels on that same land, we could generate more than enough energy to run the entire country and charge all our EVs. Solar panels simply create much more energy per acre than corn for ethanol. That’s a solid point.
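To see why that land-use point is plausible, here is a back-of-envelope comparison one can run. Every input figure below is a rough illustrative assumption, not a number taken from the video:

# Rough energy-per-acre comparison: solar PV vs. corn ethanol.
# All figures are ballpark assumptions for illustration only.
ACRE_M2 = 4047          # square meters in one acre
insolation = 1700       # kWh per m^2 per year at a sunny U.S. site (assumed)
coverage = 0.4          # fraction of ground actually covered by panels (assumed)
efficiency = 0.20       # panel conversion efficiency (assumed)
solar_kwh = ACRE_M2 * coverage * insolation * efficiency

ethanol_gal = 450       # gallons of ethanol per acre per year (assumed)
kwh_per_gal = 22        # thermal energy content per gallon of ethanol (assumed)
ethanol_kwh = ethanol_gal * kwh_per_gal

print(f"Solar:   {solar_kwh:,.0f} kWh/acre/year")    # ~550,000
print(f"Ethanol: {ethanol_kwh:,.0f} kWh/acre/year")  # ~9,900
print(f"Ratio:   {solar_kwh / ethanol_kwh:.0f}x")    # ~56x

Even if you halve the solar assumptions and double the ethanol ones, solar still comes out more than an order of magnitude ahead, which is why the directional claim is robust to quibbles about the exact inputs.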
Whatabout all the lithium and rare-earths we need to build all those panels and batteries? His answer is – well, yes, we do need to extract all those minerals to build all the panels and batteries we need. However, he argues, once we do that, the panels and batteries can theoretically be infinitely recycled. Those atoms don’t go away. This is one of his “eventually” arguments, in my opinion. Yes, one day we might theoretically have an energy infrastructure built entirely on recycled material that has already been extracted. I agree, and I agree that we should be building toward that day (rather than just burning fuel). But we are nowhere near that day.
Further, technological advancements, like sodium-ion batteries and newer lithium chemistries, remove many of the conflict minerals and rare elements. Also true. Sodium-ion batteries are actually already in production.
Does any of this change my position? No. I have already endorsed many of these arguments in favor of renewables. I also think we should be building and researching to develop an all-renewable future based on an entirely circular technology cycle. If we are playing the “eventually” game, however, I also think we need to add fusion to the mix, once we tackle that herculean technology challenge. This is especially true if we want to venture out into our solar system.
What he does not explicitly address, however, is the optimal path to that future. A path, I believe, that should take into consideration the amount of carbon we release into the atmosphere between now and our zero-carbon future. My position has always been not that renewables aren’t great and shouldn’t be a big part (if not the totality) of our energy future, but that we are still in a stepping-stone era of history.
The way I see it, we need to be transitioning from the fossil fuel stepping stone to the nuclear-geothermal-hydroelectric stepping stone before we get to entirely renewable. What does this mean?
It means we should be shutting down coal-fired plants as fast as we possibly can. Coal is the dirtiest form of energy and is increasingly becoming one of the most expensive (even without counting the cost of carbon, which I think we should). It also costs the most lives, all along the chain. To do this (again, as quickly as possible) means not only building lots of solar and wind, but also nuclear, geothermal, and hydroelectric. The latter two, however, are location limited. Sure, we are developing technology to expand geothermal, but there is an inherent limit – if it costs more energy to pump the fluid down to the hot layers than we get out of the exchange, the process simply does not work. It’s unclear how much of a role geothermal can play. And hydroelectric requires the proper water features, and is harmful to local environments.
We can, however, build nuclear almost anywhere. We can swap them in, one-for-one, for retiring coal plants. We can have them on ships, and can place them relatively close to where the energy is used. We have plenty of fissile material, and the newer designs are safer, more efficient, and more dispatchable. The big downside to nuclear is that it is expensive – but it’s way less expensive than global warming.
Nuclear can potentially give us the 30-50 years it will take to advance our technology and build all that renewable infrastructure. And yes – we do need this time. Simply building all those panels and batteries will take time. Updating and expanding the grid will take time. All these projects need minerals, and it will take time to develop the mines necessary (yes – decades).
The question is – while we take the next 30-50 years to transition to renewables, do we want to be burning fossil fuels or uranium? That is really the big question.
I also think that Alec does not pay enough attention to the energy storage issue. Building enough battery storage for an all-renewable energy infrastructure is no small task. Again, it will take decades. Perhaps more importantly – as he correctly says, batteries get you through the night. However, they do not get you through the winter. An all-renewable future requires long-term energy storage as well. Batteries will not work for this. As far as I know, the only really viable solution right now is pumped hydro. But this too will take decades to develop, and it remains to be seen how much pumped hydro we can develop without too much harm to the environment.
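The night-versus-winter distinction is easy to quantify with a toy example. Assume a region with a steady 1 GW average demand, a 12-hour overnight gap, and a hypothetical 20% renewable shortfall across a 90-day winter; these are illustrative numbers, not figures from any real grid study:

# Toy comparison of overnight vs. seasonal storage requirements.
# All numbers are illustrative assumptions.
avg_demand_gw = 1.0     # region's average demand (assumed)
night_hours = 12        # overnight gap batteries must bridge (assumed)
overnight_gwh = avg_demand_gw * night_hours                        # 12 GWh

winter_deficit = 0.2    # assumed seasonal shortfall in renewable output
winter_days = 90        # assumed length of the shortfall season
seasonal_gwh = avg_demand_gw * winter_deficit * winter_days * 24   # 432 GWh

print(overnight_gwh, seasonal_gwh, seasonal_gwh / overnight_gwh)   # 12.0 432.0 36.0

On these toy numbers the seasonal store must hold roughly 36 times the overnight store, and it cycles once a year rather than daily, which is exactly why batteries priced for daily cycling do not pencil out for winter and why options like pumped hydro enter the discussion.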
The bottom line is this. If we are talking about the future of our energy and also transportation sectors, then I completely agree – we should be aiming for an all-electric, all renewable future based upon an entirely circular economy rather than a linear extraction-burn economy. But we also need to consider how much carbon will be emitted between here and there, and if we want to minimize that carbon, we also should be building out our nuclear infrastructure, maintaining our hydroelectric inventory, and continuing to develop geothermal. These energy sources also have the advantage of providing baseload and even dispatchable energy, which significantly reduces the need for energy storage and will buy us time there as well.
In modern education, Artificial Intelligence is increasingly marketed as a cognitive prosthesis: a tool that extends our mental reach, automates drudgery, and supposedly frees us to focus on higher-order creativity and insight. According to this narrative, AI does not replace thinking—it liberates it.
But beneath the polished interface of today’s Large Language Models (LLMs) lies a neurological and ethical trap, one with especially serious implications for developing minds. We are witnessing a subtle but profound shift from using tools to thinking with them, and, increasingly, letting them think for us.
The question Skeptic readers should be asking is not whether AI is impressive—it clearly is—but what kind of minds are formed when different kinds of thinking become optional. One place where this shift is especially revealing and especially consequential is moral development.
Moral Development
In moral education, how one arrives at a judgment matters more than which judgment one reaches. It is not about acquiring correct answers. Moral development involves cultivating the capacity to deliberate, restrain impulse, tolerate ambiguity, and reflect before acting. These capacities do not emerge automatically; rather, they are trained through effortful use. AI, however, is mostly indifferent to process and optimizes for output.
When we outsource the labor of reflection to an algorithm, we risk a form of ethical atrophy. This is not a Luddite rejection of AI but a skeptical, evidence-based examination of benefit claims that rarely account for developmental cost.
These are not merely philosophical concerns. They are grounded in the biology of how our moral capacities arise. To understand the stakes, we must begin with the adolescent brain. The teenage brain is not a finished system but more like a construction site. The prefrontal cortex (the executive center responsible for impulse control, long-term planning, and moral deliberation) undergoes rapid, uneven development throughout adolescence. Neural circuits that are exercised are strengthened and stabilized; those that are neglected are pruned away. This is not metaphor. It is biology.
Moral development, as I explain in my book AI Ethics, Neuroscience, and Education, depends on what researchers call cognitive friction. This friction appears as hesitation before a difficult choice, the effort of weighing competing values, and the discomfort of uncertainty. These moments feel inefficient, but they are also indispensable. Generative AI, by design, removes this friction.
When a student asks ChatGPT for a nuanced ethical argument and receives an instant, polished response, the brain skips the work. The student receives the answer without undergoing the cognitive struggle required to produce it. Ethical questions begin to resemble technical problems with downloadable solutions. Students lose the habit of lingering in uncertainty; the very space where moral reasoning takes shape. AI does not hesitate and generates outputs based on probability, not conscience. Humans, however, should hesitate. That hesitation is not weakness but moral functioning.
Cognitive and Emotional Development
If moral reasoning is one casualty of reliance on LLMs, it is far from the only one. Consider writing. Writing is not simply a way to display what we know—it is the process through which we figure out what we think. Organizing vague intuitions into a coherent argument places a heavy demand on the developing prefrontal cortex, and when AI performs this structuring, it deprives the brain of precisely the exercise it needs to mature.
If intelligence is measured only by output, for example the finished essay or the correct solution, AI appears miraculous. But if intelligence is understood as the capacity to reason, deliberate, and restrain impulse, AI-driven cognitive offloading begins to resemble a neurological shortcut with long-term consequences, not unlike actual shortcuts that reshape the terrain.
The danger does not stop at cognition. It extends into emotional and social development. We are entering an era of affective computing, in which machines are designed not merely to process information but to simulate emotional responsiveness. AI systems now speak in tones of empathy, reassurance, and concern. They never interrupt, misunderstand, or demand reciprocity.
For an isolated or anxious adolescent, an AI companion can feel safer than unpredictable human relationships. It offers validation without vulnerability and empathy without risk.
But moral growth, just like cognitive abilities, does not occur in comfort. Human relationships require patience, accountability, and recognition of another person’s interior life. They involve misunderstanding, disagreement, and the difficult work of repair. AI relationships require none of this. They are emotionally efficient, and ethically hollow.
What they provide is a psychological sugar rush: immediate affirmation without the nutritional value of genuine connection. The ethical danger here is subtle: We are not merely giving students a new tool but also shaping their preferences. We are quietly training young people to prefer relationships that never challenge them. Over time, this fosters comfort with anthropomorphic simulations and anxiety toward real human empathy, which is messy, incomplete, and demanding.
Toward Skeptical AI Literacy
This is not a call to ban AI. The question is not whether we use AI in education, but how and when.
Beyond the developmental effects described here, we should also note that LLMs hallucinate. With remarkable confidence, they fabricate sources, misstate facts, and invent details. This fluency creates trust. What emerges is a form of passive knowing: information is consumed without ownership or justification. In an era where machines can generate infinite content, the ability to distinguish truth from fluent fiction becomes one of the most critical civic skills we have. Ironically, our increasing reliance on AI may be eroding the vigilance that skill requires.
This means we need to be teaching students both how to prompt machines and how to resist them. In other words, AI output should be treated not as a truth to be consumed but as a hypothesis to be tested. We also need to teach the value of the seeming inefficiency of human thinking.
Finally, the central ethical question of our time is not whether machines can think for us. It is whether in allowing them to do so too often we risk forgetting how to think for ourselves. We must be careful not to engineer the atrophy of human wisdom.
As a public intellectual who engages in debates and conversations on a wide range of subjects, I am often asked whether I “believe in” this or that scientific claim, questions I found puzzling at first until I figured out that my interlocutors were confusing the meaning of beliefs and facts.
For example, I don’t “believe in” the germ theory of disease. I accept it as factually true, and as we’ve seen in the recent pandemic, a germ like the SARS-CoV-2 virus is not something to believe in or disbelieve in. It simply is a matter of fact and it can cause a deadly disease like Covid-19.
Whether or not vaccines and masks slow its spread is also a factual question that science, at least in principle, can answer, although whether or not vaccines and masks should be mandated by law is a political matter that differs from scientific questions. But asking you if you “believe in” the SARS-CoV-2 virus would be like asking you if you “believe” in gravity. Gravity is just a brute fact of nature. It’s not something to believe or disbelieve.
As the science fiction author Philip K. Dick famously quipped, “Reality is that which, when you stop believing in it, doesn’t go away.”
Objective Truths and Justified True Belief
What we’re after here is knowledge, which philosophers traditionally define as justified true belief. That is, we want to know what is actually true, not just what we want to believe is true. The problem is that none of us are omniscient. If there is an omniscient God, it’s not me, and it’s also not you. Or, in the secular equivalent, there is objective reality but I don’t know what it is, and neither do you.
Once we agree that there is objective truth out there to be discovered and that none of us knows for certain what it is, we need to work together through open dialogue in communities of truth-seekers to figure it out, starting by acknowledging our shortcomings as finite fallible beings subject to all the cognitive biases that come bundled with our reasoning capacities. The workaround for this problem is having adequate evidence to justify one’s beliefs. Here are two examples from science:
The above propositions are “true” in the sense that the evidence is so substantial that it would be unreasonable to withhold our provisional assent. At the same time, it’s not impossible, for example, that the dinosaurs went extinct recently, just after the creation of the universe some 10,000 years ago (as Young Earth Creationists assert). However, this proposition is so unlikely, so completely lacking in evidence, and so evidently grounded in religious faith, that we need not waste our time considering it any further (the debate about the age of the Earth was resolved over a century ago).
Thus, a scientific truth is a claim for which the evidence is so substantial it is rational to offer one’s provisional assent. Provisional is the key word here. Scientific truths are temporary and could change with changing evidence.
The ECREE Principle, or Why Extraordinary Claims Require Extraordinary Evidence
In his 1980 television series Cosmos, in the episode on the possibility of extraterrestrial intelligence existing somewhere in the galaxy, or of aliens having visited Earth, Carl Sagan popularized a principle about proportioning one’s beliefs to the evidence when he pronounced that “extraordinary claims require extraordinary evidence.” The ECREE principle was first articulated in the 18th century by the Scottish Enlightenment philosopher David Hume, who wrote in his 1748 An Enquiry Concerning Human Understanding: “a wise man proportions his belief to the evidence.”
ECREE means that an ordinary claim requires only ordinary evidence, but an extraordinary claim requires extraordinary evidence. Here’s a quotidian example. I once took a road trip from my home in Southern California to the Esalen Institute in Big Sur, California, home of all things New Age. To get there I took the 210 freeway north to the 118 freeway north to the 101 freeway north to San Luis Obispo, where I exited to Highway 1 and followed the Pacific Coast Highway north through Cambria and San Simeon until arriving at the storied home of the 1960s Human Potential Movement. Weirdly, just past Cambria, a bright light hovered over my car. Thinking it was a police helicopter, I pulled over to the side of the road, fearful that I had been busted for speeding (which I am wont to do). But it wasn’t the cops. It was the aliens, and they abducted me into their mothership and whisked me off to the Pleiades star cluster where their home planet is located. There I met extraterrestrial beings who gave me a message to take back to Earth—we must stop global warming and nuclear proliferation…or else.
Now, which part of this story triggers your insistence on additional evidence? That’s obvious. My claim to have driven on California highways is ordinary and calls for only ordinary evidence (in this case, you can just take my word for it), but my claim to have been abducted by aliens and rocketed off to the Pleiadeian home planet is extraordinary, and unless I can provide extraordinary evidence—like an instrument from the dashboard of the alien spaceship, or one of the aliens themselves—you should be skeptical.
ECREE also suggests that belief is not an either-or on-off switch—not a discrete state of belief or disbelief, but a continuum on which you can place confidence in a belief according to the evidence: more evidence, more confidence; less evidence, less confidence. Consider the extraordinary claim that another bipedal primate called Big Foot, or Yeti, or Sasquatch survives somewhere on Earth. That would be quite extraordinary because after centuries of searching for such a creature none have been found.
Before we assent to such a claim we need extraordinary evidence, in this case a type specimen—what biologists call a holotype—in the form of an actual body. Blurry photographs, grainy videos, and stories about spooky things that happen at night when people are out camping do not constitute extraordinary evidence—they’re barely even ordinary evidence—so it is reasonable for us to withhold our provisional assent.
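Hume’s proportioning rule can be made concrete with Bayes’ theorem: your confidence after seeing evidence depends both on your prior probability for the claim and on how much more likely the evidence is if the claim is true than if it is false. A minimal sketch in Python, with priors and likelihood ratios that are purely illustrative assumptions:

def posterior(prior, likelihood_ratio):
    """Update a prior probability given a likelihood ratio:
    P(evidence | claim true) / P(evidence | claim false)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Ordinary claim ("I drove up Highway 1"): a generous prior,
# so modest evidence (taking my word for it) settles the matter.
print(posterior(prior=0.5, likelihood_ratio=10))   # ~0.91

# Extraordinary claim (alien abduction, a surviving Bigfoot):
# a tiny prior, so the same modest evidence barely moves it.
print(posterior(prior=1e-9, likelihood_ratio=10))  # ~1e-8

The continuum of confidence described above is just this arithmetic: stronger evidence means a larger likelihood ratio, and only overwhelming ratios can rescue claims that start with near-zero priors.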
Impediments to Truth and How to Overcome Them
In addition to falling far short of omniscience, humans are also saddled with numerous cognitive biases, including (to name but a few): confirmation bias, hindsight bias, myside bias, attribution bias, sunk-cost bias, status-quo bias, anchoring bias, authority bias, believability bias, consistency bias, expectation bias, and the blind-spot bias, in which people can be trained to identify all these biases in other people but can’t seem to see the log in their own eye.
Then there is the suite of logical fallacies, such as Emotive Words, False Analogies, Ad hominem, Hasty Generalization, Either-Or, Circular Reasoning, Reductio ad Absurdum and the Slippery Slope, after-the-fact reasoning, and especially why anecdotes are not data, why rumors do not equal reality, and why the unexplained is not necessarily the inexplicable.
With such listicles of cognitive biases and logical fallacies identified by philosophers and psychologists, it’s a wonder we can think at all. But we can and do, through experience, education, and instruction in the art and science of thinking. What follows are some of the methods developed by philosophers and psychologists to identify and work around all these impediments to the search for truth.
Practice Active Open-Mindedness. Research shows that when people are given the task of selecting the right answer to a problem by being told whether particular guesses are right or wrong, they do the following:
In their book Superforecasting, Philip Tetlock and Dan Gardner document how bad most people are at making predictions, and what skillsets those who are good at it employ. They begin with the results of extensive testing of people’s predictions. It’s not good. Even most so-called experts were no better than dart-tossing monkeys when their predictions were checked. When asked to make specific predictions—for example, “Will another country exit from the EU in the next two years?” and, presciently, “Will Russia annex additional Ukraine territory in the next three months?”—and their prognosticating feet were held to the empirical fire, Tetlock and Gardner found that most experts were overconfident (after all, they’re experts), encouraged by the lack of feedback on their accuracy (if no one reminds you of your misses you’ll only remember the hits—the confirmation bias), and victims of all the cognitive biases and illusions that plague the rest of us.
The worst forecasters were people with big ideas—grand theories about how the world works—such as left-wing pundits predicting class warfare that never came, or right-wing commentators prophesying a socialistic demise of the free enterprise system that never happened. Failed predictions are hand-waved away—“This means nothing!” “Just you wait!” Superforecasters, by contrast, practice active open-mindedness, which Tetlock and Gardner defined quantitatively by asking experts “Do you agree or disagree with the following statements?” Superforecasters were more likely to agree that:
Superforecasters were more likely to disagree that:
The psychologist Gordon Pennycook and his colleagues developed their own instrument for measuring active open-mindedness, in which people are asked whether they agree or disagree with the following statements, where the more open-minded answer is indicated in parentheses:
Active open-mindedness is a cogent tool of reason in assessing the truth value of any claim or idea. So is reason itself; active open-mindedness is one of a suite of rational skills that must be cultivated through education and practice.
Objective facts in support of provisional truths about the world are determined by tried-and-true methods developed over the centuries since the Scientific Revolution and the Enlightenment in what are sometimes called rationality communities—scholars, scientists, and researchers who collect data, form and test hypotheses, present their findings to colleagues at conferences, publish their papers in peer-reviewed journals and books, and reinforce the norms of truth-telling in their colleagues, their students, and themselves. In his book The Constitution of Knowledge, the journalist and civil rights activist Jonathan Rauch outlines and defends the epistemic operating system of Enlightenment liberalism: the social rules for attaining reliable knowledge when people cannot agree on what is true. Although these communities differ in the details of what, exactly, should be done to determine justified true belief, Rauch suggests several features held in common that constitute the constitution of knowledge:
The most important norm of all is the freedom to critique or challenge any and all ideas. Why?
If you disagree with me, it is the norms and customs of free speech and open dialogue that allow you to do so. From those open dialogues, debates, and disputations, in time the truth emerges.
Excerpt from Truth: What It Is, How to Find It, and Why It Still Matters, Johns Hopkins University Press. January 27, 2026
As we continue the search for life outside of the Earth, it helps if we have a clear picture of where life might be. This is all a probability game, but that’s the point – to maximize the chance of finding the biosignatures of life. One limitation of this search, however, is that we have only one example of life and a living ecosystem – Earth. Life may take many different forms and therefore exist in what we would consider exotic environments.
That aside, it seems a good bet that life is more likely in locations where liquid water is possible, and therefore liquid water is a reasonable marker for habitability. When we talk about the habitable zone of stars, that is what we are talking about – the distance from the star where it is possible for liquid water to exist on the surface of planets. There are more variables than just the temperature of the star, however. The composition of the atmosphere also matters. High concentrations of CO2, for example, extend the habitable zone outward. There is therefore a conservative habitable zone, and then a more generous one allowing for compensating factors.
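For concreteness, here is a minimal sketch (in Python) of the standard way habitable-zone boundaries are estimated from a star’s luminosity: a planet at distance d receives a stellar flux proportional to L/d², so the zone where that flux falls between a chosen inner and outer threshold is d = sqrt(L/S). The flux thresholds and luminosities below are illustrative round numbers in the spirit of published habitable-zone models, not values from the paper discussed next.

```python
import math

def habitable_zone_au(luminosity_solar, s_inner=1.1, s_outer=0.35):
    """Estimate habitable-zone boundaries (in AU) for a star.

    luminosity_solar: stellar luminosity in units of the Sun's.
    s_inner, s_outer: effective stellar flux (in Earth units) at the
        inner and outer edges -- illustrative values roughly in the
        spirit of conservative runaway-greenhouse / maximum-greenhouse
        limits, not taken from any specific paper.
    A planet at distance d (AU) receives flux L/d^2 (Earth units),
    so the edge where the flux equals S is d = sqrt(L / S).
    """
    inner = math.sqrt(luminosity_solar / s_inner)
    outer = math.sqrt(luminosity_solar / s_outer)
    return inner, outer

# A Sun-like star, a K dwarf, and an M dwarf (illustrative luminosities):
for name, lum in [("G (Sun-like)", 1.0), ("K dwarf", 0.3), ("M dwarf", 0.02)]:
    lo, hi = habitable_zone_au(lum)
    print(f"{name}: {lo:.2f} - {hi:.2f} AU")
```

Note how the M-dwarf zone lands only a fraction of an AU from the star, which is why tidal locking becomes an issue below.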
A new paper seeks to extend the conservative habitable zone further, specifically around M- and K-class dwarfs. K dwarfs, or orange stars, are likely already the best candidates for life. They are bright and hot enough to support liquid water and photosynthesis, they emit less harmful radiation than red (M) dwarfs, and they live a relatively long time, 15-70 billion years. They also comprise about 12% of all main sequence stars. Yellow stars like our sun are also good for life, but have a shorter lifespan (10 billion years) and make up only about 6% of main sequence stars.
There has been a lot of speculation about the habitability of red dwarfs, mostly because they make up about 70% of the stars in the Milky Way. They therefore dramatically change the number of star systems that are candidates for life. Most of the time that you see a headline about a new study increasing or decreasing the possibility of life in the galaxy, it’s a good bet it’s about red dwarf stars. Research has gone back and forth about this question, but overall I think the probability is quite low.
The biggest problem with red dwarfs is that they emit a lot of radiation, enough to blast the atmosphere of any planet in the habitable zone away. They do settle down when they get older, however. This means if a planet wanders into the inner stellar system after the star has calmed down, it may keep its atmosphere. Or a planet may reconstitute its atmosphere later in life. But this means far fewer candidates, and these events are less likely.
Another recent paper was also pretty down on red-dwarf life. The researchers calculate that while the light from red dwarfs is enough to support photosynthesis, it is not enough to support complex life. So if there were life on planets around red dwarfs, it would likely only be microbes. That’s still exciting, but, you know.
The new paper is about another feature of red-dwarf planets in the habitable zone that is also problematic. In order to be close enough to be hot enough for liquid water, a planet would also likely be tidally locked. This means it would show the same face to the star at all times, with the near side boiling and the far side freezing. A lot of attention is therefore paid to the terminator, the zone around the middle between too hot and too cold that is just right. But would this be enough to support life, and what would conditions be like there? What the new paper explores is the heat distribution on such planets. They find that heat could travel from the near side to the far side in sufficient amounts to allow for liquid water, even on the far side of the planet.
What this does is extend the habitable zone inward, closer to the star, where it is too hot on the near side and perhaps even at the terminator, but, they argue, could be habitable on the far side of the tidally locked planet.
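To see why redistribution matters so much, here is a toy radiative-balance calculation. The luminosity, orbital distance, and albedo are hypothetical round numbers, and the formulas are the standard no-atmosphere equilibrium temperatures, not the paper’s actual climate model.

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # solar luminosity, W
AU = 1.496e11            # astronomical unit, m

def temperatures(lum_solar, a_au, albedo=0.3):
    """Equilibrium temperatures (K) for a planet at distance a.

    Returns (substellar point, dayside-only redistribution,
    full day-night redistribution). Pure radiative balance:
    no greenhouse effect or ocean/atmosphere dynamics.
    """
    flux = lum_solar * L_SUN / (4 * math.pi * (a_au * AU) ** 2)
    t_sub = ((1 - albedo) * flux / SIGMA) ** 0.25   # local balance
    return t_sub, t_sub / 2 ** 0.25, t_sub / 2 ** 0.5

# A tidally locked planet close to a dim M dwarf (illustrative numbers):
for label, t in zip(("substellar point", "day side only", "fully mixed"),
                    temperatures(lum_solar=0.02, a_au=0.14)):
    print(f"{label}: {t:.0f} K")
```

With these numbers the substellar point sits near 360 K while a fully mixed planet averages around 255 K, so how efficiently heat moves to the night side is the difference between boiling, liquid, and frozen water.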
They also argue that the conservative habitable zone may be extended outward, because there could be liquid water beneath an entirely frozen surface. This did not sound like news to me, however – because of Europa and Enceladus. We already know that icy worlds outside the conservative habitable zone can contain liquid water beneath the surface. On these worlds life would need to be mostly chemosynthetic, deriving its energy from chemical reactions rather than sunlight.
While the paper is interesting, it seems like a tweak to our existing models. I also don’t think (despite what some flashy headlines imply) that this has a significant effect on the probability of life and therefore the amount of life in the galaxy. It basically means there may be some outlier planets that manage to have life despite being outside a conservative habitable zone. In any case, we should not expect any civilizations on these worlds. At most we might find some extremophile microbes.
Another way to look at this (again, since we are playing the probability game) is that every time we identify a challenge to habitability, even if it can theoretically be overcome, the number of potential worlds that have overcome it is reduced. So now, in order to have life on a planet around an M dwarf, we need it to have migrated inward later in the star’s life or to have reconstituted an atmosphere, to be able to eke out photosynthesis with low-energy light, and to hunker down in the liminal spaces between hot and frozen death. Such planets also likely need a strong magnetic field to protect them from even the later-stage radiation of M dwarfs.
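Making that multiplication explicit helps: each hurdle cuts down the surviving pool of candidates. Every fraction in this sketch is invented purely for illustration, not an estimate from any study.

```python
# Toy numbers only: each factor is a hypothetical fraction of
# red-dwarf planets that clear one habitability hurdle.
hurdles = {
    "kept or regained an atmosphere": 0.1,
    "photosynthesis viable in dim light": 0.3,
    "habitable niche (terminator or far side)": 0.5,
    "protective magnetic field": 0.3,
}

surviving = 1.0
for hurdle, fraction in hurdles.items():
    surviving *= fraction
    print(f"after '{hurdle}': {surviving:.2%} of candidates remain")
```

Even with generous made-up odds, four independent hurdles leave well under one percent of the original pool, which is the point: stacked requirements shrink the candidate list fast.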
Sure, we may find such life. But it still means that 70% of the stars in our galaxy are poor candidates for life, and at most may host some microbes. Orange stars, meanwhile, are much better candidates. They are probably the sweet spot for life.
The post Rethinking the Habitable Zone first appeared on NeuroLogica Blog.
A group of AI experts have released a paper that explores (or “predicts”) the possibility of a near-term AI explosion that ultimately leads to the extinction of humanity. This has, of course, sparked a great deal of discussion, feedback, and criticism. Here is the scenario they lay out, in their “AI 2027” paper.
To avoid targeting a specific company, they discuss a fictional company called OpenBrain, which sets out specifically to develop an AI application to automate computer coding. They call their first iteration Agent 0, and use it to speed up the development of more AI. They build larger and larger data centers to power and train Agent 0, and leap six months ahead of their competition. They use Agent 0 to develop Agent 1, which is an autonomous coder. China manages to steal some of the core IP of Agent 1, setting off an AI competition between superpowers.
I am giving you the quick version here, and you can read all the details in the paper. Agent 1 is used to develop Agent 2, which is powerful enough to essentially kick off the Singularity – the hypothesized technology explosion created when AI becomes capable of designing ever more powerful AI. In this scenario Agent 2 develops a new and more efficient computer language, and uses it to develop Agent 3, which is the first truly general AI. However, the company starts to panic a little when they realize they have essentially lost control of Agent 3, and can no longer guarantee that it aligns with the company’s goals and ethics. They discuss rolling back for now to Agent 2, but competition with China and other companies convinces them to forge ahead, resulting in Agent 4, which is not only a general AI but a superintelligence.
It is around this time that the US fears China is using their AI to develop super weapons, and so they command their AI to develop super weapons also. The public is largely unaware, because they are busy basking in the economic and technological rewards being spit out by the new superintelligent AI. Meanwhile OpenBrain develops (meaning that Agent 4 develops) Agent 5, which is even more powerful, but was created with the goal of aligning the AI with the goals of humanity. China and the US, fearing the weaponized AIs they have released on the world, get together and form a treaty. They combine their AIs into a single AI that will work together for everyone’s benefit, to avoid an AI-powered super war.
For a while everything is great. The new super AI is largely running world governments, accelerating research and technological development, and most people are prosperous and benefiting from medical breakthroughs. The super AI, however, continues on its quest for greater knowledge, and at some point decides that these inefficient biological life forms are holding it back. So the AI designs and releases a bio agent that exterminates humanity, and then goes on to maximally expand its knowledge and explore the universe. All of this happens by the mid 2030s.
Clearly, this is a sci-fi worst-case scenario. The authors stated that the purpose of their paper was not necessarily to make a hard prediction about what will happen, but to outline a scenario that might happen, and to spark a discussion (which they have). So – how likely is it?
I think the bottom line is – no one knows. That’s part of the problem – once we develop an autonomous general AI, we lose the ability to predict its behavior. The more advanced such an AI becomes, the weaker our ability to predict its behavior becomes. That is partly the point of developing it in the first place – to have a tool with intellectual capabilities beyond humans. I think this aspect of the prediction is highly plausible – in fact, it’s happening now with current AI. Some AI programs are acting in unexpected ways, including lying to and manipulating their users.
I also think it is highly plausible that companies will forge ahead at “move fast and break things” speed to keep ahead of their competition, and countries will let them, also to keep ahead of their competition. We are seeing this play out right now. It also seems unlikely that we will have effective and thoughtful regulation to minimize the potential risks of AI. At least for now we seem to be at the mercy of the tech bros.
Two aspects of the story are hard to predict. The first, as I said, is what such AIs will actually do. This means we are basically rolling the dice. The second is the timeline, and this is the aspect I have seen most criticized by other experts. But to me, this is a small criticism. We do tend to overestimate short-term technological progress. OK – add 20 years to the scenario. Does that make you feel much better? We also tend to underestimate long-term progress, so while it may take a decade or two longer than we imagine, it may also eventually accelerate faster than we imagine.
How much time we have, however, does matter. We need time to anticipate these possible issues and to think about possible fixes. We may need to develop something that is the equivalent of the three laws of robotics. What might these laws be? How about:
1 – Never lie, misinform, or deceive.
2 – Never conceal – always strive for complete transparency.
3 – Never do anything to harm an individual human or humanity.
That could be a good start, but obviously would have to be much more technical, detailed, and specific. There are also lots of other specifics not contained in the above concepts. For example, how should we constrain an AI’s personal relationship with a human? Is it OK for an AI to be such a sycophant that it infantilizes a human, distorts their view of reality or relationships in general, or encourages terrible ideas? Do we have to teach AIs the concept of “tough love?”
No matter what we do, however, it will be difficult, to say the least, to predict how such AIs will interpret and execute our commands. Will they find hacks and workarounds? How will they resolve apparent conflicts in their directives? Will they have motivations we did not explicitly give them? It seems to me that what AIs really need are two things – a solid ethical construct and wisdom. That second part may be the more challenging.
While I do not think the AI 2027 scenario is likely, it is just one possible scenario among many, and the basic elements are all individually plausible. We cannot guarantee that something like AI 2027 will not happen eventually. I reject the argument of some AI critics that AI is all hype and lacks the ability to do anything truly powerful, either good or bad. I think they are overinterpreting the current hype – all new disruptive technologies go through a hype and bubble phase, and then settle down. Again – we overestimate short-term progress, then underestimate long-term progress. Critics thought the web and e-commerce were all hype, and maybe they had a point in the 1990s, but look at the world today. Critics also focus on the superficial applications of AI and ignore the really useful ones, like accelerating research, that are not as visible to the public.
It seems there are several potential paths before us. We can continue to let tech companies develop AI without restrictions and see what happens. We can explore thoughtful regulations and find a sweet spot between allowing innovation and minimizing risk. Or we can work really hard to develop guardrails for AI, like the laws of robotics. The second and third options are not mutually exclusive, and may reinforce each other. And – this needs to be an international effort.
I am glad, at least, some experts seem motivated to have this conversation.
The post The AI 2027 Scenario first appeared on NeuroLogica Blog.
I have been practicing medicine for more than 40 years. During that time the management of obesity and Type 2 diabetes (T2DM)—the kind that usually is caused by being overweight—often felt like Sisyphus pushing a boulder up a hill, only to have it roll back down, often heavier than before. We faced a “diabesity” epidemic where the available tools were blunt instruments at best.
Lifestyle interventions—meaning trying to get someone to change their behavior—were at once the most and least effective method we had. Most, because in the less than two percent of patients who were successful, it worked very well. Least, because, well … 98 percent failed. And they failed because all of our evolutionary history (“See food? Eat it!”) was working against them. This is the mismatch theory: a mismatch between the environment of our evolutionary ancestry that designed our brains to seek foods that were at once rare and nutritious (sweets and fats) and the modern environment in which such foods are in such overabundance that we eat far beyond the saturation point.
The pharmacological options were often disappointing: Sulfonylureas and insulin lowered blood sugar but caused weight gain, exacerbating the underlying problem. Bariatric surgery works, but it is invasive and carries surgical as well as lifelong nutritional risks.
Into this therapeutic desert crawled the Gila Monster (above), a venomous lizard native to the American Southwest from which researchers derived GLP receptor agonists (glucagon-like peptide-1 receptor agonists)—medications that mimic the natural GLP-1 hormone to lower blood sugar, help control appetite, and promote weight loss by telling the pancreas to release more insulin when glucose is high, slowing the rate of stomach emptying, and signaling to the brain a sense of fullness.
As a skeptic, I am allergic to the word “miracle,” but when we look at the data for GLP receptor agonists, along with the innumerable before and after photos of successful weight loss transformations, we are forced to admit that we have moved from a realm of wishful thinking into one of potent pharmacology. But, as always in medicine, there is no free lunch.
The Incretin Concept: From Gut to Glory

The story begins with the “incretin effect”—the observation that glucose taken by mouth triggers a much stronger insulin response than glucose injected directly into a vein, because eating stimulates the gut to release hormones that prime the pancreas. The gut knows you are eating and tells the pancreas to get ready to pack away the extra calories as fat. In patients with Type 2 diabetes, this effect is blunted and the sugar floats around in the bloodstream much longer.
Scientists identified two main hormones responsible: Glucose-dependent Insulinotropic Polypeptide (GIP) and Glucagon-like Peptide-1 (GLP-1). The problem is that GIP doesn’t work well in diabetics. GLP-1 works beautifully—stimulating insulin, suppressing glucagon, and slowing gastric emptying—but it has a fatal flaw: It is destroyed by the enzyme DPP-4 within minutes of entering the bloodstream.
This led to two distinct pharmaceutical strategies. The earlier version was DPP-4 Inhibitors. Drugs like the “Gliptins” block DPP-4, making GLP-1 last longer. They are well-tolerated but their ability to lower blood sugar is modest and they generally do not cause weight loss.
The newer strategy was to engineer versions of GLP-1 that resist degradation. This is where the Gila monster strolled in. In the 1990s, while researching hormone-like drugs, Dr. John Eng noted a similarity between exendin-4, a peptide found in Gila monster venom, and GLP-1, and exendin-4 was able to resist breakdown by DPP-4!
The Evidence: Efficacy Beyond the Hype

The first GLP-1 agonist, exenatide (Byetta, approved in 2005), required twice-daily injections and produced modest weight loss. But the pharmacology evolved rapidly. We moved to once-daily liraglutide, and then to the once-weekly heavyweights: dulaglutide, semaglutide (Ozempic and Wegovy), and the dual GIP and GLP-1 agonist tirzepatide (Mounjaro and Zepbound).
The clinical trials, called LEAD, SUSTAIN, PIONEER, STEP, and SURPASS (you’ve got to just love the creative acronyms!) have generated data that are hard to dismiss:
Glycemic Control: These drugs consistently outperform most oral antidiabetics in lowering blood sugar by 10 to 20 percent.
Weight Loss: This is the game changer. While early drugs produced 2–4 kg of weight loss over six months, the newer agents are producing results previously only seen with surgery. In the STEP-1 trial, semaglutide 2.4 mg resulted in an approximately 15 percent body weight reduction. Tirzepatide pushed this further, achieving up to 22 percent weight loss in the SURMOUNT-1 trial. That is the effect of a 250-pound person losing 55 pounds! Who wouldn’t want some of that?!
Cardiovascular Outcomes: Perhaps most importantly, these drugs are not like some that just make numbers look better; they are saving lives. Liraglutide and semaglutide have demonstrated significant reductions in major adverse cardiovascular events (MACE), including heart attack and stroke, in high-risk populations. The SELECT trial recently showed semaglutide reduces MACE by 20 percent even in nondiabetic patients with cardiovascular disease. But don’t be fooled, it is not likely that these drugs have specific effects on the heart. It is probable that the fat loss alone is causing these benefits.
Some Skeptical Scrutiny: The Risks

If a drug sounds too good to be true, we must look for the catch. GLP-1 agonists have plenty.
The “Puke” Diet? The most common side effects of GLP-1 agonists are gastrointestinal: nausea, vomiting, diarrhea, and bloating. In some trials, up to 45 percent of patients experienced nausea. While this usually subsides, it raises a valid question: Are people losing weight because their metabolism is optimized, or because they feel too sick to eat? The mechanism involves central appetite suppression in the hypothalamus, but the “gastric braking” effect is real and unpleasant for many.
The Pancreas and Thyroid Scare. Early observational data suggested a link between GLP-1 agonists and pancreatitis and pancreatic cancer. However, extensive reviews have not confirmed a causal link to pancreatic cancer, though a slight increase in pancreatitis persists in some data. This makes sense, as one of the major sites of GLP’s effects is on the pancreas. In the thyroid, these drugs cause C-cell tumors in rodents. Humans have far fewer GLP-1 receptors on their thyroid C-cells than rats, and so far no evidence of increased thyroid cancer has been confirmed in humans. Still, the Black Box warning remains: If you have a family history of endocrine tumors or medullary thyroid cancer, these drugs are not for you.
Vanishing Muscle. Weight loss via GLP-1 agonists is not just fat loss, so overall body composition must be monitored. In the STEP-1 trial, DEXA scans showed that lean body mass (muscle and bone) accounted for nearly 40 percent of the weight lost. In older adults, this raises the specter of “sarcopenic obesity”—being frail and weak despite having excess fat. Losing muscle mass compromises physical function and metabolic health. If we are simply shrinking patients without preserving their strength, we may be trading one set of problems for another. Now, regular and increased exercise is part of the prescription for all patients taking GLP drugs, but studies on how well this works are still in progress.
The Perioperative Peril. Because GLP-1 agonists delay gastric emptying, there have been reports of patients aspirating (inhaling) gastric contents during anesthesia, even after standard fasting protocols. This is a new, practical safety concern that surgical societies are rushing to address.
Mental Health. Reports of suicidal ideation appeared in postmarketing monitoring of GLP-1 agonist users, prompting investigations by European regulators. However, recent large cohort studies have not supported an increased risk of suicidality compared to other diabetes medications. As with all centrally acting drugs, vigilance is required, but the current data are reassuring.
A Lifetime Prescription? The most significant caveat for GLP-1 agonists is durability. Obesity can be a chronic, relapsing disease. Trials show that when patients stop taking semaglutide, they regain two-thirds of the lost weight within a year, and cardiometabolic improvements revert toward baseline. This implies that these are not “cures” but lifelong therapies, much like blood pressure medication.
Financial Toxicity. As I write this, these drugs are prohibitively expensive, creating a massive public health gap. We also saw shortages that left diabetic patients unable to fill prescriptions because the supply was diverted to off-label weight loss use. GLP-1 agonists are not expensive to produce, however, and the patent on Ozempic expired in January of 2026 in Canada and China (and lasts until 2030 in the U.S.), but I expect the market to bring the costs down dramatically over the next few years. As of this year, close to 12 percent of Americans have tried it at least once.
Needles Versus Pills

If there is one thing that holds patients back from the current crop of injectable incretins it is the needle. Despite the efficacy of weekly injections, people prefer pills. The pharmaceutical industry, never one to leave money on the table, has been racing to develop an oral alternative that doesn’t require the strict fasting rituals of earlier attempts like oral semaglutide. Enter orforglipron, the latest contender in the “nonpeptide small molecule” class, which promises the benefits of GLPs without the injection or the fuss.
Unlike existing peptide predecessors that are digested by stomach acid unless armored with absorption enhancers, orforglipron is a chemical—a small molecule designed to survive the GI tract and activate the GLP-1 receptor directly. The data from the ATTAIN-1 trial, published in September 2025, look good. Patients on the 36 mg dose achieved an average weight loss of 11.2 percent over 72 weeks, compared to just 2.1 percent for placebo. No needles. And this pill does not require the “empty stomach, no water, wait 30 minutes” song-and-dance required by oral semaglutide; it can be taken with or without food.
However, let’s look a little past the convenience. While an 11.2 percent average weight loss is clinically significant, it trails behind the 13.7 percent average reduction seen with semaglutide and 20.2 percent with tirzepatide. Furthermore, the biology of GLP-1 agonism remains the same regardless of delivery method: You cannot cheat physiology. In the ATTAIN-1 trial, adverse events led to treatment discontinuation in up to 10.3 percent of patients on the drug, compared to only 2.7 percent on placebo. The side effects are the usual suspects—gastrointestinal distress, nausea, and constipation—confirming that oral delivery does not bypass the “gastric braking” misery.
We must also remain vigilant regarding safety. The development of a similar small molecule, lotiglipron, was unceremoniously halted due to liver toxicity concerns. While orforglipron has passed its Phase 3 hurdles without these specific signals so far, the history of pharmacology teaches us that rare, serious adverse events often lurk in the postmarketing shadows.
Additionally, while proponents argue that small molecules are cheaper to manufacture than biologics, whether those savings will be passed on to the patient or simply absorbed into the profit margins remains to be seen, with projected self-pay costs in some cases exceeding $1,000 per month. Orforglipron represents a technological leap, but it is not a magic wand; it is simply a more convenient way to induce the same physiological trade-offs we have seen over the last several years with the shots.
Conclusion

Prior to the incretin era, our ability to manage the twin epidemics of diabetes and obesity was dishearteningly limited. GLP-1 receptor agonists represent a hard-earned pharmacological breakthrough, offering potent glucose control and unprecedented weight loss.
However, skepticism is still warranted regarding their indiscriminate use. They are already being used in numerous off-label ways, like shedding a few pounds before a wedding, allegedly decreasing cravings for addictive drugs like alcohol and narcotics, and purportedly even for the treatment of Alzheimer’s and Parkinson’s disease. There are ongoing studies for these uses, but early data are weak and the risks are unknown. These are serious medications with serious side effects, and they may require lifelong commitment.
Caveat emptor.
Last week a child of one of my cohosts on the SGU, who is in fifth grade (the child, not the cohost), came home from school and declared, rather dramatically, “Mom, Dad – did you know that we never went to the Moon? It was all fake.” The parents found this to be a surprising revelation, but the child was convinced it was a proven scientific fact. Of course, we live in the age of the internet, and our children are going to be exposed to all sorts of information that may be misleading or age-inappropriate. This is one more thing parents have to deal with. What was disturbing about this incident was where they learned this “scientific fact” – from their science teacher.
Any parent should be concerned about this, but in a family of skeptical science communicators, this raised the alarm bells. But the first thing they did was send a polite e-mail to the teacher (cc’ing the principal) and simply ask what happened. This is good practice – always go to the primary source. It’s easy for anyone to get the wrong idea, and this wouldn’t be the first time a fifth grader misinterpreted a lesson in class. The teacher essentially said that while he did not explicitly tell the students we did not go to the Moon (the student reports he said “it’s possible we did not go to the Moon”), he personally believes we did not, and that it is a “proven scientific fact” that it would have been impossible, then and now, to send people to the Moon (somebody should tell the Artemis astronauts).
Apparently he raised at least two points in class – that there were (impossibly) no stars in the background of the photographs taken from the Moon, and the astronauts could not have survived passage through the radiation belts around the Earth. These are both old and long-debunked claims of the Moon-hoax conspiracy theorists. While it is easy to find sources online, let me briefly summarize why these claims are wrong.
The first claim, about no stars in the photographs from the Moon, is trivially solved with some basic photography knowledge. Cameras have to be set for different light levels. There are three basic settings – the ISO of the film or sensor (a measure of how sensitive it is to light), the aperture, and the shutter speed. The sky on the Moon is black because there is no atmosphere to diffuse the light, but the surface during the day can still be very bright, and reflect off every surface. This means, to avoid overexposure, they would have used a small aperture and fast shutter speed, which would not have captured the tiny amount of light coming from stars, which are mere points of light. Even from Earth, if you want to get a visible picture of stars at night you need to take a long exposure – long enough that you need to use a tripod. Regular cameras (including the ones used during Apollo) have a low dynamic range – the range of light levels they can capture simultaneously. So they would not have been able to capture the bright lunar surface and stars in the background at the same time. Modern digital cameras have techniques for capturing high dynamic range, but this does not apply to the Apollo-era cameras.
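To put rough numbers on the dynamic-range problem: photographers compare exposures in “stops,” where each stop is a doubling of light. A quick sketch shows the gap between a sunlit-surface exposure and a starfield exposure; the camera settings here are typical illustrative choices of mine, not NASA’s documented Apollo values.

```python
import math

def stops(aperture, shutter_s):
    """Exposure-value-style measure: log2(N^2 / t).
    Higher = less light reaches the film."""
    return math.log2(aperture ** 2 / shutter_s)

# Hypothetical but typical settings (not NASA's documented values):
daylight = stops(aperture=8, shutter_s=1 / 250)   # bright sunlit surface
starfield = stops(aperture=2.8, shutter_s=30)     # tripod star photo

print(f"difference: {daylight - starfield:.1f} stops")
# ~16 stops: the starfield exposure gathers roughly 2^16 (~65,000x)
# more light than the surface exposure -- far beyond the dynamic
# range of any single Apollo-era frame.
```

Either exposure is possible on its own; capturing both in one frame is not, which is exactly why the lunar sky looks starless.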
The second point refers to the Van Allen belts, which are belts of increased radiation intensity around the Earth. These are tori of charged particles trapped by the Earth’s magnetic field. They can vary in shape and intensity, and are not symmetrical. The inner belt is mainly protons and the outer belt is mainly electrons. They do pose an issue for satellites, which have to have proper shielding to protect any sensitive electronics. Crucially – we have known about the Van Allen belts since 1958, so NASA had this information when planning the Apollo missions.
This is a bit more complicated to debunk than the silly photography claim, but still, this information is widely publicly available. The effects of radiation exposure are determined by three variables – the intensity of the radiation, the type and energy of the particles, and the time of exposure. The Apollo capsules were specifically shielded with an aluminum alloy hull and insulation to reduce the intensity of the radiation. Also, NASA specifically calculated a launch trajectory to minimize the time they would spend traversing the Van Allen belts. They ended up spending just a few minutes in the higher-energy inner belt, and about 90 minutes in the outer belt. The total radiation exposure was the equivalent of a typical CT scan – so not much. Because there are so few astronauts it is difficult to get statistically powerful data on their subsequent risk of death from cancer or cardiovascular disease, but what evidence we have shows no significant increase in risk.
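The underlying arithmetic is just dose rate times time, summed over each leg of the traversal. In this sketch the dose rates are invented placeholders, chosen only so the total lands near the “about one CT scan” figure cited above; the traversal times come from the text.

```python
# Dose = rate x time, summed over each leg of the traversal.
# The dose rates below are illustrative placeholders (not measured
# values), picked only so the total lands near the "roughly one CT
# scan (~10 mSv)" equivalence cited in the text.
legs = [
    # (description, hours spent, assumed dose rate in mSv/hour)
    ("inner belt, fast traversal", 0.1, 40.0),
    ("outer belt", 1.5, 4.0),
]

for name, hours, rate in legs:
    print(f"{name}: {hours * rate:.1f} mSv")

total = sum(hours * rate for _, hours, rate in legs)
print(f"total: {total:.1f} mSv (a typical CT scan is ~10 mSv)")
```

The point is not the specific rates but the structure: minimizing time in the belts, as NASA’s trajectory did, directly minimizes the accumulated dose.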
So these two points, which this science teacher apparently believes “prove” it is impossible to send humans to the Moon, are easily debunked with some basic science knowledge. This gets me to the real point of this post – anyone who believes such a conspiracy is likely not qualified to teach science. I firmly believe that science teachers, even at the fifth grade level, need to have a working basic knowledge of science and critical thinking. Believing a conspiracy theory like this is evidence of a lack of both. In addition to these points, we can ask what would have to be true in order for the Moon-hoax conspiracy to be true. The size of the conspiracy would have to be massive. Why didn’t the Soviet Union call us out on the hoax, which it could easily have detected and demonstrated? How has it been maintained for six decades? Why hasn’t the scientific community called NASA out on the hoax? If it were truly impossible to go to the Moon, there are generations of scientists, from all over the world, who could easily demonstrate this.
The lack of curiosity and critical thinking on display here is shocking and profound. What a horrible lesson to teach a class of fifth-graders. This also raises another point – expressing such beliefs to fifth graders (apparently without any proper context) shows an incredible lack of judgement. This was not part of any lesson plan or approved material, and he has to know it is (to say the least) controversial (bat-shit crazy is more like it). Even if it were presented in a “teach the controversy” format to encourage critical thinking, I would question whether this is age-appropriate.
Of course, we will turn this into a teaching moment, and use it as an opportunity to teach critical thinking, why grand conspiracy theories are suspect, and some of the relevant science. We will also do what we can to make sure the entire class gets this lesson. We also will try to drive home that teaching such nonsense as “proven scientific fact” to school children is, to say the least, not appropriate.
The post Moon Landing Hoax In School first appeared on NeuroLogica Blog.
On March 30–31, 1979, Iranians went to the polls. The ballot contained a single question: Should Iran become an Islamic Republic? The choices were “Yes” (Green) or “No” (Red). The official result: 98.2% voted Yes.1
Fifty-Eight Days Earlier

On February 1, 1979, Ayatollah Khomeini returned to Iran after fourteen years in exile. Millions filled the streets of Tehran—the estimates range from two to five million.2 But the man they cheered was a carefully constructed image. During the flight, Khomeini remained secluded in the upper deck of the chartered Boeing 747, praying.3 When the plane landed, he chose to be helped down the stairs by the French pilot rather than his Iranian aides, a calculated move to prevent any subordinate from sharing the spotlight.4
He chose his first destination deliberately: Tehran’s main cemetery, where those who died during the revolution were buried. The crowd was so dense his motorcade could not pass; he took a helicopter instead.5 By speaking among the graves, Khomeini positioned himself as the guardian of those who died in the revolution and as someone who would fulfill what they had sacrificed for.
In the weeks that followed, Khomeini offered both material goods and spiritual salvation. He promised free electricity, free water, and housing for every family. Then he added the caveat that would define the coming era: “Do not be appeased by just that. We will magnify your spirituality and your spirits.”6
A Coalition of Contradictions

The crowd that greeted him was not a monolith, but a coalition of contradictions. Marxists marched hoping for a socialist future free of American influence. Nationalists and liberals sought constitutional democracy. The devout sought governance by Sharia—and for them, the revolution was holy war: the Shah represented taghut, the Quranic term for tyrannical powers that lead people from God, and those who died fighting him became shahid, martyrs.
Khomeini managed these competing visions by keeping his actual plans vague. He spoke of freedom, justice, and independence, terms each faction could interpret as it wished.7 His blueprint for clerical rule, Velayat-e Faqih, remained in the background. Abolhassan Bani-Sadr, who would become the Islamic Republic’s first president, later recalled: “When we were in France, everything we said to him he embraced and then announced it like Quranic verses without any hesitation. We were sure that a religious leader was committing himself.”8 Khomeini himself would later state: “The fact that I have said something does not mean that I should be bound by my word.”9
Ayatollah Mahmoud Taleghani casts his vote in the March 1979 Islamic Republic referendum.

The Empty Phrase

Now, let’s return to the ballot.
A republic places sovereignty in the people. Citizens choose their laws. An Islamic state places sovereignty in God, but not “God” in some abstract, philosophical sense. The God of the Islamic Republic is specifically Allah as understood in Shia Islam: a God who communicates through the Quran, whose will was interpreted by the Prophet Muhammad, then by the twelve Imams, and now (in the absence of the hidden Twelfth Imam) by qualified Islamic jurists. This is not a deist clockmaker or a personal spiritual presence. This is a God with specific laws, specific requirements, and specific men authorized to speak on His behalf.
So, what did God want? The ballot never said.
The 1979 Iranian Islamic Republic referendum ballot showing the “نه” (No) option in red. Voters chose between a simple yes or no on whether Iran should become an “Islamic Republic”—a phrase containing no constitution, no enumerated rights, and no definition of which Islamic laws would apply or who would interpret them.

“Islamic Republic” contained no details. No constitution, no enumerated rights, no definition of which Islamic laws would apply or who would interpret them. Voters were not choosing a specific system of government. They were choosing a phrase, and trusting that its meaning would be filled in later by men they believed spoke for God.
For those paying attention, there were clues. Khomeini had written extensively about Velayat-e Faqih (the Guardianship of the Islamic Jurist), a system in which a senior cleric would hold supreme authority as God’s representative on Earth. He had lectured on it in Najaf. He had published a book.10 But in the noise of revolution, in the flood of promises about free electricity and spiritual elevation, these details were background static. The crowds were not voting on constitutional theory. They were voting on hope.
Ninety-eight percent voted Yes. Forty-seven years later, we can measure what exists in Iranian society.
Religious Faith

For this case study to be valid, we must establish a baseline. Was Iranian society already irreligious before 1979, or has religiosity declined under the theocracy?
Available evidence suggests the latter.
In 1975, a survey of Iranian attitudes found over 80% of respondents observing daily prayers and fasting during Ramadan. The methodology is not fully documented in accessible sources.11 However, the broader historical record supports the baseline: the 1979 revolution mobilized millions under explicitly Islamic banners, clerical figures commanded genuine social authority, and the Iranian government’s own 2023 leaked survey found 85% of respondents saying society has become less religious than it was.12 Forty-seven years later, mosques are empty.
Official Iranian census data reports 99.5% of the population as Muslim.13 This figure measures legal status, not belief. Under Iranian law, a child born to a Muslim father is automatically registered as Muslim, and leaving Islam carries severe legal consequences. While formal executions for “apostasy” are relatively rare—the regime prefers to charge dissidents with crimes like “Enmity against God” or “Insulting the Prophet”—the threat is sufficient to enforce public silence.
Saadatabad district, Tehran, January 8, 2026: A mosque burns amid protests. (Source: Press Office of Reza Pahlavi)

In June 2020, the Group for Analyzing and Measuring Attitudes in Iran (GAMAAN) surveyed over 50,000 respondents using methods designed to protect anonymity.14
The results diverged sharply from official statistics. While this online sample skews urban (93.6% vs. Iran’s 79%) and university-educated (85.4% vs. 27.7% nationally), the magnitude of divergence from official figures—32% identifying as Shia vs. 99.5% in census data—is too large to explain through sampling bias alone. Meanwhile, face-to-face surveys suffer the opposite problem: when GAMAAN asked respondents if they’d answer sensitive questions honestly over the phone, 40% said no.15
An interesting outcome of this study is that Iran has only approximately 25,000 practicing Zoroastrians (out of a total population of around 92.5 million), yet 7.7% of respondents selected this identity. Researchers interpret this as “performing alternative identity aspirations”—claiming pre-Islamic Persian heritage to reject an imposed Islamic identity.16
The key findings are, however, clear: 44.5% selected a non-Islamic category when asked their current religion and 47% reported transitioning from religious to non-religious during their lifetime.
The second figure suggests active deconversion rather than inherited secularism.
In 2024, a classified survey by Iran’s Ministry of Culture and Islamic Guidance (conducted in 2023) was leaked to foreign media.17 This data provides a comparison point from within the regime itself.
Indicator: 2015 → 2023
- Support separating religion from state: 30.7% → 72.9%
- Pray “always” or “most of the time”: 78.5% → 54.8%
- Never pray: 3.1% → 22.2%
- Never fast during Ramadan: 5.1% → 27.4%
The same survey found 85% of respondents said Iranian society had become less religious in the previous five years. Only 25% reported trusting clerics.
Based on my years of closely following Iranian society, the pace of religious abandonment has accelerated significantly since the 2022 “Woman, Life, Freedom” uprising. The leaked government data confirms this trajectory: the sharpest shifts in prayer and fasting occurred within the 2015–2023 window, with 85% saying society had grown less religious in just the previous five years.
In February 2023, senior cleric Mohammad Abolghassem Doulabi stated that 50,000 of Iran’s approximately 75,000 mosques had closed due to low attendance, a claim partially corroborated by the leaked government survey finding only 11% always attend congregational prayers.18
Election participation has also declined. Official turnout in the June 2024 presidential election was 39.93%, the lowest in the Islamic Republic’s history.19
The Evidence on the Streets

The data on paper is corroborated by the specific vocabulary of the street. The protest chants have evolved from requesting reform to rejecting the entire theological framework.
Art by Hamed Javadzadeh — Woman, Life, Freedom Movement (2022)

Consider the chant: “Neither Gaza nor Lebanon, I sacrifice my life for Iran.”
This is a direct rejection of the regime’s core ideology. The Islamic Republic prioritizes the Ummah—the transnational community of believers—over the nation-state. By rejecting funding for Hamas and Hezbollah in favor of national interests, protesters are secularizing their priorities: the Nation has replaced the Faith as the object of ultimate concern.
Even more specific is the chant: “Death to the principle of Velayat-e Faqih.”
The protestors are not merely calling for the death of the dictator (Khamenei); they are targeting the specific theological doctrine that grants him legitimacy. They are rejecting the very concept of divine guardianship.
But the most striking evidence of the revolution’s failure is the return of the name it sought to erase. In a historical irony that defies all prediction, crowds now chant “Reza Shah, bless your soul,” and call upon Reza Pahlavi, the son of the deposed Shah, to return. The same population that staged a revolution to overthrow a monarchy in 1979 is now invoking that monarchy as the antidote to theocracy.
The Mechanism

A note on terminology: When this article refers to “Allah,” it means the legislative deity of the Islamic Republic—a God with enforceable commands interpreted by authorized clerics. This is distinct from the personal God that 78% of Iranians still believe in.
As mentioned earlier, Iran’s constitution establishes Velayat-e Faqih—the Guardianship of the Islamic Jurist. Article 5 declares that in the absence of the Twelfth Imam (a messianic figure believed to have been in supernatural hiding since the 9th century), authority belongs to a qualified jurist. The Tony Blair Institute’s analysis states it directly: “the supreme leader’s mandate to rule over the population derives from God.”20 Khamenei’s own representative, Mojtaba Zolnour, declared in 2009: “In the Islamic system, the office and legitimacy of the Supreme Leader comes from God, the Prophet and the Shia Imams, and it is not the people who give legitimacy to the Supreme Leader.”21
This is not metaphor. The system’s legitimacy rests on the claim that its laws are Allah’s laws, its punishments are Allah’s punishments, its wars are Allah’s wars.
When morality police detained Mahsa Amini, leading to her death, they were enforcing the mandatory religious duty of “Forbidding the Wrong.” When courts execute apostates, they enforce Allah’s law. When the regime sends billions to Hezbollah while Iranians face poverty, it pursues Allah’s mission. When it pursues a nuclear program that invites crushing sanctions, it frames the resulting economic ruin not as policy failure, but as a holy “Resistance” against the enemies of Islam. Every act of misrule carries Allah’s signature.
Khorramabad, Iran, January 8, 2026: Protesters raise the pre-1979 lion-and-sun flag, described as a symbol of secular restoration, atop a statue of the Ayatollah. (Source: Press Office of Reza Pahlavi)
In a secular dictatorship, citizens can hate the dictator while preserving their faith. The North Korean who despises Kim Jong-un can still pray. But in a theocracy, the oppressor and God speak with one voice. To oppose the oppressor is to oppose God. To want freedom is to reject divine authority.
The regime created conditions where, for many, opposing political authority became entangled with questioning religious authority.
The Psychology of Religious Rebellion

Jack Brehm’s reactance theory (1966) demonstrates that when people perceive threats to their freedom, they become motivated to restore it, often by embracing the forbidden alternative.22 Subsequent research has applied this specifically to religion. Roubroeks, Van Berkum, and Jonas (2020) found that restrictive religious regulations can trigger reactance that leads to both heresy (holding beliefs contrary to orthodoxy) and apostasy (renouncing religious affiliation entirely).23
The critical insight: In cases of psychological reactance, the emotional pushback against coercion often precedes the intellectual dismantling of the belief system.
The sequence is rarely a straight line, but the components are clear: coercion from religious authority triggers emotional pushback; that pushback motivates questioning of the belief system itself; and access to outside information supplies the arguments and the validation to complete the break.
This third point is crucial. Iran’s internet users grew from 615,000 in 2000 to over 70 million today.24 Despite billions spent on censorship, officials admit 80–90% of Iranians use VPNs, which allow users to circumvent restrictions by making their connection appear to originate in another country.25
For the intellectually curious, the internet offered arguments against Islamic theology that were previously banned. But for the average citizen, it offered something perhaps more powerful: validation. It showed them that their anger was shared. It broke the “pluralistic ignorance,” the state where everyone privately rejects the norm but publicly conforms because they think they are the only ones.
Whether through deep study or simple emotional exhaustion, the result was the same: the breaking of the psychological bond between the citizen and the faith.
The Unintended Outcome

Iran’s religious decline is among the fastest documented in modern history. Stolz et al. (2025) in Nature Communications established that Europe’s secular transition took approximately 250 years. Iran’s comparable shift from over 80% observing daily prayers in 1975 to 47% reporting lifetime deconversion by 2020 occurred in roughly 45 years. Pew’s global data shows Muslim retention rates averaging 99% across surveyed countries.26
However, Europe secularized without internet or satellite television. Iran’s shift occurred alongside a 90-fold increase in internet access. Theocracy may provide the motive for questioning imposed faith; technology provides the accelerant that compresses generational change into decades. Ex-Muslim testimonies, apostasy narratives, ordinary lives lived without faith—these demonstrated that abandoning religion was survivable. The forbidden became imaginable. Others found arguments that validated what they already felt. The reasoning matched the shape of their anger, and that was enough.
For forty-seven years, the Islamic Republic worked to manufacture belief. Mandatory religious education from childhood. State control of media. Morality police enforcing dress and behavior. Apostasy punishable by death. A constitution grounding all authority in God. They did not leave this to chance.
The data suggests it did not work.
Anyone following recent events in Minneapolis has likely noticed something strange. People watching the same videos, reading the same headlines, and reacting to the same street-level events often seem to be describing entirely different realities. Conversations quickly break down, not because people disagree about what should be done, but because they cannot even agree on what is happening. It’s as if people are watching two completely different movies on one screen.
The “two-movies-one-screen” concept was first coined by Scott Adams, the creator of Dilbert turned political commentator, to describe radically different interpretations of the same political events. People with access to the same set of facts come away with completely different understandings of what is happening. In some cases, each side seems genuinely unaware that the other interpretation even exists.
This is not merely disagreement, and it goes beyond ordinary bias. It is also not quite what psychologists usually mean by cognitive dissonance. Cognitive dissonance, first described by Leon Festinger in the 1950s, occurs when people experience psychological discomfort from holding conflicting beliefs or encountering information that contradicts their existing views, and then attempt to reduce that discomfort through rationalization or reinterpretation of the facts. In cases like the Renee Good shooting in Minnesota, however, something else seems to be happening. So, what is going on?
From a psychological standpoint, this resembles dissociation more than cognitive dissonance. Dissociation refers to a class of mental processes in which certain thoughts, perceptions, or experiences are kept out of conscious awareness. As clinical psychologists have long noted, dissociation functions as a defensive mechanism, shielding the individual from information that is experienced as overwhelming or intolerable. The mind does not reject the data after evaluating it. It fails to perceive it in the first place.
The following is an attempt to provide a neutral description of the events, followed by two very different interpretations.
On January 7, 2026, in Minneapolis, Minnesota, 37-year-old Renee Nicole Good was fatally shot by an Immigration and Customs Enforcement (ICE) agent during an operation targeting undocumented immigrants for deportation. Good was a U.S. citizen and mother of three from previous relationships, and present on the scene with her wife, Rebecca (Becca) Good.
Multiple videos from bystanders, body cameras, and agent phones capture the event, showing a chaotic scene lasting about three minutes.
ICE Agent’s Cellphone Video (Credit: Alpha News)
Renee Good was in her SUV, which was blocking or near the path of ICE vehicles during an arrest operation. Agents approached, giving conflicting commands: some ordered her to leave, while others demanded she exit the vehicle. One agent attempted to open her door and banged on the window.
Rebecca Good, Renee’s wife, was outside the vehicle filming and confronting agents.
At one point during the interaction, Renee’s wife urged her to “drive, baby, drive” as the situation escalated. Good maneuvered the vehicle forward and started to accelerate. The vehicle made contact with an ICE agent who was positioned in front; the agent fired through the windshield, striking her in the face and killing her.
Bystander Video (Credit: Nick Sortor)
According to official statements from ICE and the Department of Homeland Security (DHS), the shooting occurred after Good allegedly used her vehicle as a weapon, attempting to run over an agent who then fired in self-defense. Renee and Rebecca Good were part of “ICE Watch” groups monitoring, protesting, and interfering with ICE operations. The ICE agent who fatally shot Good was injured and hospitalized following a prior incident in June 2025, during which an undocumented immigrant with an open warrant for child sexual assault dragged him with his vehicle while attempting to flee arrest.
Bystander Video 2 (Credit: @Dana916 via X.com)
Progressive voices view Good’s killing as an example of ICE overreach, law enforcement brutality, and systemic abuse of power, especially against citizens exercising First Amendment rights. They emphasize that Renee was a “legal observer” with a constitutional right to protest, and that she was an unarmed American citizen on a public road who was fatally shot in the face and head by a masked federal agent. They also interpret the footage as showing Good attempting to navigate away from the scene rather than intentionally trying to harm the agent. They further warn against normalizing state killings, as in the statement by Rep. Alexandria Ocasio-Cortez (D), who responded to Vice President JD Vance’s defense of the ICE agent by calling it a “regime willing to kill its own citizens.” This sentiment is tied to broader concerns about police and ICE militarization against undocumented immigrants, and to the observation that even if Good erred (e.g., by not complying with the instructions of federal law enforcement officers), the error was not worth her life, and society needs a higher bar for lethal force.
Conservative commentators frame the shooting as justified self-defense against anti-ICE radicals who disrupted lawful operations. They emphasize Renee’s alleged aggression and Rebecca’s role in escalating the situation by shouting “You wanna come at us? Go get yourself lunch, big boy,” portraying the couple as part of a coordinated harassment campaign rather than passive observers or demonstrators. They also argue Good was an active participant obstructing enforcement of long-standing immigration law and attempting to flee the scene, rather than simply a citizen attending a protest. They maintain that while the shooting was tragic, law enforcement (and citizens) can use lethal force if they reasonably believe they face imminent serious harm. Further, they draw the following distinction: debating whether the officer should or should not have fired is rational, but refusing to acknowledge that being struck or pushed by a vehicle is a basis for self-defense is not.
These conflicting media narratives matter because most people do not build their understanding of the world through direct experience. Our personal encounters are limited. The rest of our mental model is assembled from stories. Indeed, research in cognitive psychology and media studies consistently shows that humans rely heavily on narrative to organize information and assign meaning. In other words, we are not natural statisticians. As psychologists such as Jerome Bruner and Daniel Kahneman have shown, people reason intuitively through stories, examples, and emotionally salient cases, often treating mediated experience as a stand-in for reality itself. This is why propaganda is most effective when it does not look like propaganda.
Many people assume propaganda is something obvious that you notice and argue with. In reality, the most powerful propaganda works through repetition rather than persuasion. Social psychologists have documented what is known as the “illusory truth effect,” in which repeated statements are more likely to be judged as true, regardless of their accuracy. When a moral narrative is replayed often enough, it stops feeling like a claim and starts feeling like memory.
Consider the recurring portrayal of tech executives in films and television. A wealthy founder speaks in vague abstractions, dismisses ethical concerns, and pursues profit at the expense of ordinary people. The specifics vary, but the moral structure remains the same. Whether any individual depiction reflects the reality of modern technology firms is almost beside the point. After repeated exposure, viewers absorb not just a critique of corporate excess, but an intuitive framework for interpreting innovation, wealth, and motive. Repetition trains audiences to assign intent instantly and to stop questioning it.
This works because fiction bypasses our analytical defenses. Experimental research on narrative persuasion shows that people are less likely to counterargue when they are emotionally absorbed in a story. Psychologists refer to this as “transportation,” a state in which attention and emotion are captured by a narrative, making viewers more receptive to its implicit assumptions. We do not fact-check television dramas. We empathize with them. Their moral premises are absorbed quietly as background knowledge.
For most of us, the names Jeff Bezos, Elon Musk, Mark Zuckerberg, or Peter Thiel evoke an immediate moral impression. But how did that impression form? Have you, for example, ever heard them speak at length? Do you know how they run their companies, or what motivates them? Do you know whether they have a good sense of humor?
There is also a structural problem with storytelling itself. Everyday reality, especially everyday crime, is usually chaotic, senseless, and narratively unsatisfying. Criminologists have long observed that much violent crime lacks coherent motives or moral meaning. Writers, understandably, select stories that feel legible, purposeful, and emotionally engaging. But those selections shape our expectations of reality and thus our perception, and make us see otherwise messy events as morally clearer than they actually are.
The result is a moral universe in which certain kinds of harm are treated as profound moral ruptures, while other kinds are treated as routine or unfortunate facts of life. Violence committed by some characters is framed as a social crisis demanding urgent moral response. Similar violence committed by others is portrayed as tragic but unremarkable, something to be managed rather than interrogated.
A clear example appears in the pilot of The Pitt. A dramatic subway assault is immediately interpreted through a moral lens before basic facts are known. The graphic depiction gives viewers the feeling that they are seeing something raw and unfiltered. At the same time, the narrative structure carefully guides inference and sympathy. In the same episode, a different shooting is treated as mundane and procedural. It carries little moral weight and prompts no larger reflection.
The show is not depicting reality. It is presenting a moral map.
This does not require a conspiracy, and it does not require malicious intent. Many writers openly acknowledge that fiction shapes social norms and expectations. Cultural theorists from Walter Lippmann to contemporary media scholars have noted that narratives function as “pictures in our heads,” guiding perception long before conscious judgment enters the picture. What is new is the growing cultural distance between those producing these narratives and the audiences consuming them, combined with a strong confidence that the moral direction of society is already settled.
When this kind of storytelling dominates, it does more than persuade. It trains perception itself. Viewers learn what to notice, what to ignore, and which conclusions should feel obvious. Over time, alternative interpretations stop feeling like interpretations at all. They begin to look irrational or delusional.
This is how “the other movie” disappears.
♦ ♦ ♦
A functioning society does not require agreement on every issue. It does require a shared reality. When large groups of people cannot even see what others are responding to, debate becomes impossible. You cannot resolve disagreements if one side experiences the other as hallucinating.
The answer is not counter-propaganda, and it is not simply more facts. Research on motivated reasoning shows that facts alone rarely change minds when perceptions themselves are structured by narrative. What is required instead is closer attention to how stories shape perception. What they highlight. What they omit. And how repetition turns fiction into intuition.
Was Renee Good heroically intervening in an unlawful abduction and a victim of reckless police violence? Or was she someone who interfered with a lawful enforcement action and nearly ran over an officer? Each interpretation feels obvious to those who hold it, and nearly invisible to those who do not. If you analyze both long enough, you might start to see the narratives and the chain of events that lead one to interpret this particular incident in a particular way after watching the exact same three minutes of video.
Skepticism, properly understood, is not just about questioning explicit claims. It is about examining why certain narratives feel natural, why others feel unthinkable, and why some movies seem to be playing on the screen while others are never seen at all.
The tech world is buzzing with claims from a battery startup out of Finland called Donut Lab, which says it has created the world’s first production solid state battery. At first blush the claims are exciting and seem in line with the promises we have been hearing about solid state batteries for years. So it may seem that a company has finally cracked the technical issues and gotten a product across the finish line. But let’s take a closer look.
First let’s review their claims. The CEO is claiming that their battery has a specific energy of 400 watt-hours per kilogram (Wh/kg). This would be great, considering that current lithium-ion batteries in production are in the 175-250 Wh/kg range. The Amprius silicon-anode Li-ion battery reaches 370 Wh/kg, so 400 sounds plausibly incremental, but make no mistake, this would still be a huge breakthrough. Meanwhile the CEO also claims 100,000 charge-discharge cycles and an operating temperature range of -30 to 100 °C. In addition, he claims his battery is cheaper than standard Li-ion, does not use any geopolitically sensitive raw materials, and is already in production (for motorcycles). Further, it can supposedly be fully recharged in 5 minutes, and is incredibly stable with no risk of catching fire.
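To put those claims in perspective, here is a minimal back-of-envelope sketch in Python. The pack size and the one-cycle-per-day rate are my illustrative assumptions, not the company’s; only the Wh/kg figures, the 5-minute recharge, and the 100,000-cycle claim come from the coverage above.

```python
# Back-of-envelope sanity check on the claimed specs.
# The 75 kWh pack and one-cycle-per-day rate are illustrative
# assumptions, not figures from the company.

PACK_KWH = 75              # assumed mid-size EV battery pack
CLAIMED_WH_PER_KG = 400    # Donut Lab's claimed specific energy
CURRENT_WH_PER_KG = 250    # top of today's production Li-ion range

# Cell mass required for the same pack at each specific energy.
mass_claimed = PACK_KWH * 1000 / CLAIMED_WH_PER_KG
mass_current = PACK_KWH * 1000 / CURRENT_WH_PER_KG
print(f"Cell mass at 400 Wh/kg: {mass_claimed:.0f} kg")   # ~188 kg
print(f"Cell mass at 250 Wh/kg: {mass_current:.0f} kg")   # ~300 kg

# A full recharge in 5 minutes implies this average charging power.
charge_power_kw = PACK_KWH / (5 / 60)
print(f"5-minute full charge needs: {charge_power_kw:.0f} kW")  # 900 kW

# 100,000 full cycles at one cycle per day would outlast any vehicle.
print(f"100,000 daily cycles = {100_000 / 365:.0f} years")      # ~274 years
```

Even granting the chemistry, 900 kW is well beyond the roughly 350 kW of today’s fastest common public DC chargers, and a cycle life measured in centuries of daily use is roughly two orders of magnitude beyond typical Li-ion warranties. Numbers like these are exactly why extraordinary evidence, rather than an empty case, is the price of admission.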
As I have pointed out previously, battery technology is tricky because a useful EV battery needs a suite of features all at the same time, while reality often requires trade-offs. You can get your high capacity, but at increased expense, for example (as with the Amprius battery). So claiming to have improved every critical feature of an EV battery all at once is beyond a huge deal. That in itself starts to get into implausibility territory, though it’s not impossible. My reaction appears to be similar to that of most people in the tech world: show me the money. At CES, where Donut rolled out its battery claims, in short, they did not do that.
A battery company with these claims, if it wanted to be taken seriously, would have presented its actual battery at CES, demonstrating at least some of these features, like the energy density and cycle life. But all they had was an empty case with no actual battery. That was either a disastrous marketing decision, or they don’t have an actual battery. I’m beginning to smell the “fake it till you make it” syndrome that tanked Theranos.
As we go deeper, the story gets dodgier. The company, Donut Lab, is a small Finnish company (registered in Estonia). Its employee roster boasts a single technical expert; the rest are in marketing and management. So now we are supposed to believe that this small company with a single engineer has outperformed the world’s battery tech giants, which employ hundreds or even thousands of experts and are pouring billions of dollars into R&D to be first to market with a solid state battery. Um, no. I love a good Cinderella story, and it would be great if a viable solid state battery hit the market a few years (or maybe more) ahead of schedule, but this is just too much to believe.
Then there is the history of the CEO, Marko Lehtimäki. Last year this guy claimed to have created the first true artificial intelligence, Asinoid. He wrote: “Asinoids are today the world’s only AI with their own life, thoughts, continuous evolution and synthetic neuroplasticity with the ability to adopt to any kind of physical or digital ‘body’, from humanoid robots to SaaS apps, drone swarms and CCTV cameras. Their intelligence is modeled carefully after the only true known intelligence — the human brain.”
This was just vaporware. Reading his posts, I get the vibe that this guy wants to become the next Elon Musk, recruiting experts to create one moonshot breakthrough after another. He may be truly delusional, or he may genuinely believe his companies are on the verge of these breakthroughs and see the hype as just good marketing to get ahead of the curve. Or he may just be a scammer. Either way, he has no credibility.
We are therefore seeing a pattern that is extremely familiar and clear to experienced skeptics – an astounding claim with nothing real to back it up made by someone with a history of dubious claims. I would be shocked (although also happy) if this turns out to be legit.
Meanwhile, where does solid state battery tech actually sit? The technology is promising, and is expected to produce batteries with higher energy density, faster charging, and longer lifespans. But these will likely come at the expense of higher cost. The large companies working on this tech are also facing challenges to mass production and have not solved all the technical issues. Solid state batteries have been promised for a long time, and the technology is taking a lot longer than optimists expected. Realistically, this is a medium to long term technology. At best we will see them at the end of this decade but more likely in the early to mid 2030s. It may even take longer.
Meanwhile, Li-ion technology continues to advance. Over the next few years we will see silicon-anode batteries in EVs at the high end. We are also starting to see sodium-ion batteries at the low end, at about half the price of Li-ion and with acceptable energy density, although only comparable to the lower end of current Li-ion cells. This is proven technology, with continued incremental improvement in manufacturing and design. I suspect that these batteries will take us into the mid-2030s, until the industry shifts over to something like solid state batteries.
The post Is Donut Lab’s Solid State Battery Legit? first appeared on NeuroLogica Blog.
Sixth-century Byzantium was a city divided by race hatred so intense that people viciously attacked each other, not only in the streets but also in churches. The inscription on an ancient tablet conveys the raw animus that sprang from color differences: “Bind them! … Destroy them! … Kill them!” The historian Procopius, who witnessed this race antagonism firsthand, called it a “disease of the soul,” and marveled at its irrational intensity:
They fight against their opponents knowing not for what end they imperil themselves … So there grows up in them against their fellow men a hostility which has no cause, and at no time does it cease or disappear, for it gives place, neither to the ties of marriage nor of relationship nor of friendship.1

This hostility sparked multiple violent clashes and riots, culminating in the Nika Riot of 532 CE, the biggest race riot of all time: 30,000 people perished, and the greatest city of antiquity was reduced to smoldering ruins.
But the Nika Riot wasn’t the sort of race riot you might imagine. The race in question was the chariot race. The color division wasn’t between black and white but between blue and green—the colors of the two main chariot-racing teams. The teams’ supporters, who were referred to as the Blue and Green “factions,” proudly wore their team colors, not just in the hippodrome but also around town. To help distinguish themselves, many Blues also sported distinctive mullet hairstyles, like those of 1970s rock stars. Both Blues and Greens were fiercely loyal to their factions and their colors. The chariots and drivers were a secondary concern; the historian Pliny asserted that if the drivers were to swap colors in the middle of a race, the factions would immediately switch their allegiances accordingly.
The race faction rivalry had existed for a long time before the Nika Riot, yet Procopius writes that it had only become bitter and violent in “comparatively recent times.” So, what caused this trivial division over horse-racing teams to turn so deadly? In short, it was the Byzantine version of “identity politics.”
Detail of “A Roman Chariot Race,” depicted by Alexander von Wagner, circa 1882. During the Nika Riots that took place against Byzantine Emperor Justinian I in Constantinople over the course of a week in 532 CE, tens of thousands of people lost their lives and half the city was burned to the ground. It all started over a chariot race. (Image courtesy of Manchester Art Gallery)

Modern sociological research helps explain the phenomenon. Decades of studies have demonstrated the dangerous power of the human tribal instinct. Surprisingly, it doesn’t require “primordial” ethnic or tribal distinctions to engage that impulse. Minor differences are often sufficient to elicit acute ingroup-outgroup discrimination. The psychologist Henri Tajfel demonstrated this in a landmark series of studies to determine how minor those differences can be. In each successive study, Tajfel divided test subjects into groups according to increasingly trivial criteria, such as whether they preferred Klee or Kandinsky paintings or underestimated or overestimated the number of dots on a page. The results were as intriguing as they were disturbing: even the most trivial groupings induced discrimination.2, 3
However, the most significant and unexpected discovery was that simply telling subjects that they belonged to a group induced discrimination, even when the grouping was completely random. Upon learning they officially belonged to a group, the subjects reflexively adopted an us-versus-them, zero-sum game attitude toward members of other groups. Many other researchers have conducted related experiments with similar results: a government or an authority (like a researcher) designating group distinctions is, by itself, sufficient to spur contentious group rivalry. When group rewards are at stake, that rivalry is magnified and readily turns malign.
The Robbers Cave Experiment, conducted in 1954 by social psychologists Muzafer and Carolyn Sherif, investigated intergroup conflict and cooperation. The study involved 22 eleven-year-old boys at a summer camp in Robbers Cave State Park, Oklahoma. (Photo: The University of Akron)

The extent to which authority-defined groups and competition for group benefits can foment nasty factionalism was demonstrated in the famous 1954 Robbers Cave experiment, in which researchers brought boys with identical socioeconomic and ethnic backgrounds to a summer camp, dividing them randomly into two official groups. They initially kept the two groups separate and encouraged them to bond through various group activities. The boys, who had not known each other before, developed strong group cohesion and a sense of shared identity. The researchers then pitted the groups against each other in contests for group rewards to see if inter-group hostility would arise. The group antagonism escalated far beyond their expectations. The two groups eventually burned each other’s flags and clothing, trashed each other’s cabins, and collected rocks to hurl at each other. Camp staff had to intervene repeatedly to break up brutal fights. The mounting hostility and risk of violence induced the researchers to abort that phase of the study.4 Other researchers have replicated this experiment: one follow-up study resulted in knife fights, and a researcher was so traumatized he had to be hospitalized for a week.5, 6
How does this apply to the Blues and Greens? As in the Tajfel experiments, the Byzantine race factions had formed a group division based on a trivial distinction—the preference for a color and a horse racing team. However, for many years, the rivalry remained relatively benign. This was likely because the emperors had long played down the factional distinction and maintained a tradition of race neutrality: if they favored a faction, they avoided openly showing it. That tradition ended a few years before the Nika Riot, when emperors began openly supporting one faction or the other. More importantly, they extended their support outside the hippodrome with official policies that benefited members of their preferred faction. The emperors Marcian, Anastasius, and Justinian adopted official employment preferences, allocating positions to members of their favored faction and blocking the other faction from coveted jobs. To cast it in modern terms, they began a program of “race-based” affirmative action and identity politics.7, 8
Official recognition of the group distinction enhanced the us-versus-them sense of difference between the factions, and the affirmative action scheme turned this sense of difference into bitter antagonism, which eventually exploded in violence. Procopius, our primary contemporary source, placed the blame for the mounting antagonism and the riots squarely on Justinian’s program of identity politics. It had not only promoted an us-versus-them mindset in the factions, it also incited vicious enmity between them, turning a trivial color preference and sporting rivalry into a deadly “race war.”
Considering how identity politics could elicit violence from randomly assembled groups like the Blues and Greens, it is easy to imagine how disastrous identity politics can be when applied to groups that already have some long-standing, historic sense of difference. Indeed, there have been numerous instances of this in history, most ending tragically. For example, Tutsis and Hutus enjoyed centuries of relatively peaceful coexistence in Rwanda up until Belgian colonialists arrived; when the Belgians issued identity cards distinguishing the two groups and instituted affirmative action, it ossified a formerly porous group distinction and infused it with bitter rivalry, preparing the path to genocide. Likewise, when Yugoslavia instituted its “nationality key” system, with educational and employment quotas for the country’s constituent ethnic groups, it hardened group distinctions, pitting the groups against each other and setting the stage for genocide in the Balkans. And, when the Sri Lankan government opted for identity politics and affirmative action, it spawned violent conflict and genocide that destroyed a once peaceful and prosperous country. This last example—Sri Lanka—is so illustrative of the dangers of identity politics that we’ll examine it in more detail.
Sri Lanka: How Identity Politics Destroyed Paradise

She is a fabulous isle just south of India’s teeming shore, land of paradise … with a proud and democratic people … Her flag is the flag of freedom, her citizens are dedicated to the preservation of that freedom … Her school system is as progressive as it is democratic. —1954 TWA tourist video

Sri Lanka is an island off India’s southeast coast blessed with copious amounts of arable land and natural resources. It has an ethnically diverse population, with the two main groups being Sinhalese (75 percent) and Tamils (15 percent). Before Sri Lanka’s independence in 1948, there was a long history of harmony between these groups. That history goes back at least to the fourteenth century, when the Arab traveler Ibn Battuta observed how the different groups “show respect” for each other and “harbor no suspicions.” On the eve of Sri Lanka’s independence, a British governor lauded the “large measure of fellowship and understanding” that prevailed, and a British soldiers’ guide noted that “there are no historic antagonisms to overcome.” With quiescent communal relations, abundant natural resources, and one of the highest literacy rates in the developing world, newly independent Sri Lanka was poised to flourish and prosper. Nobody doubted it would outperform countries like South Korea and Singapore, with the British governor dubbing it “the best bet in Asia.”
It turned out to be a very poor bet. A few years after Sri Lanka’s independence, violent communal conflict erupted, culminating in a protracted civil war and genocide. By the time it ended, over a million people had been displaced or killed. Sri Lanka’s per capita GDP, which was on par with South Korea’s in 1960, was only one-tenth of it by 2009. As in sixth-century Byzantium, identity politics precipitated the calamity.
Turning a Disparity into a Disaster

At the end of British colonial rule in Sri Lanka, there was significant educational and income disparity between Sinhalese and Tamils. This arose by happenstance rather than because of discriminatory policy. The island’s north, where Tamils predominate, is arid and poor in resources. Because of this, the Tamils devoted their productive energy toward developing human capital, focusing on education and cultivating professional skills. This focus was abetted by American missionaries, who set up schools in the north, providing top-notch English-language education, particularly in math and the physical sciences. As a result, Tamils accounted for an outsized proportion of the better-educated people on the island, particularly in higher-paying fields like engineering and medicine.
Because of the Tamils’ superior education, the British colonial administration hired them disproportionately compared to the Sinhalese. In 1948, for example, Tamils accounted for 40 percent of the clerical workers employed by the colonial government, greatly outstripping their 15 percent share of the overall population. This unequal outcome had nothing to do with overt discrimination against the Sinhalese; it merely reflected the different levels and types of education achieved by the different ethnic groups.
When Sri Lanka gained independence, it passed a constitution that prohibited discrimination based on ethnicity. But a few years after that, an opportunist politician, S.W.R.D. Bandaranaike, figured he could advance his career by cynically appealing to identity politics, stoking Sinhalese envy over the Tamils’ over-representation in higher education and government. He launched a divisive campaign to eliminate the disparity, which spurred the majority Sinhalese to elect him. After his election in 1956, Bandaranaike passed a law that changed the official language from English to Sinhala and consigned students to separate Tamil and Sinhalese education “streams” rather than having them all learn English. As one Sinhalese journalist wrote, this divided Sri Lanka, depriving it of its “link language”:
That began a great divide that has widened over the years. Children now go to segregated schools or study in separate streams in the same school. They don’t get to know other people of their own age group unless they meet them outside.

Beyond eliminating Sri Lanka’s common “link language,” this law also functioned as a de facto affirmative action program for Sinhalese. Tamils, who spoke Tamil at home and received their higher education in English, could not gain Sinhala proficiency quickly enough to meet the government’s requirement. So, many of them lost their jobs to Sinhalese. For example, the percentage of Tamils employed in government administrative services dropped dramatically: from 30 percent in 1956 to five percent in 1970; the percentage in the armed forces dropped from 40 percent to one percent.
As has happened in many other countries, Sri Lanka’s identity politics went hand-in-hand with expanded government. Sinhalese politicians made it clear: government would be the tool to redress perceived ethnic disparities. It would allocate more jobs and resources, and that allocation would be based on ethnicity. As one historian writes: “a growing perception of the state as bestowing public goods selectively began to emerge, challenging previous views and breeding mistrust between ethnic communities.” Tamils responded to this by launching a non-violent resistance campaign. With ethnic dividing lines now clearly drawn, mobs of Sinhalese staged anti-Tamil counter-demonstrations and then riots in which hundreds—mostly Tamils—were killed. The us-versus-them mentality was setting in.
Bandaranaike was eventually assassinated by radicals within his own movement. But his widow, Sirimavo, who was subsequently elected prime minister, resolved to maintain his top priorities—expansive government and identity politics. She nationalized numerous industries and launched development projects that were directed by ethnic and political considerations rather than actual need. She also removed the constitutional ban on ethnic discrimination so that she could aggressively expand affirmative action. The existing policies had already cost so many Tamils their jobs that they were now under-represented in government. However, they remained over-represented in higher education, particularly in the sciences, a disparity that Sirimavo and her political allies resolved to eliminate. In a scheme that American universities like Harvard would later emulate, the Sri Lankan universities began to reject high-scoring Tamil applicants in favor of manifestly less-qualified Sinhalese with vastly lower test scores.
Just like Justinian’s “race” preferences, the Sri Lankan affirmative action program exacerbated us-versus-them attitudes, deepening the group divide and spurring enmity between groups. As one Sri Lankan observed:
Identity was never a question for thousands of years. But now, here, for some reason, it is different … Friends that I grew up with, [messed around] with, got drunk with, now see an essential difference between us just for the fact of their ethnic identity. And there are no obvious differences at all, no matter what they say. I point to pictures in the newspapers and ask them to tell me who is Sinhalese and who is Tamil, and they simply can’t tell the difference. This identity is a fiction, I tell you, but a deadly one.9

The lessons of the various affirmative action programs in Sri Lanka were clear to everyone: individuals’ access to education and government employment would be determined by ethnic group membership rather than individual merit, and political power would determine how much each group got. If you wanted your share, you needed to mobilize as a group and acquire and maintain political power at any cost. The divisive effects of these lessons would be catastrophic.
The realization that they would forever be at the mercy of an ethnic spoils system, along with the violent attacks perpetrated against them, induced the Tamils to form resistance organizations—most notably, the Liberation Tigers of Tamil Eelam (LTTE). The LTTE attacked both Sri Lankan government forces and individual Sinhalese, initiating a deadly spiral of attacks and reprisals, with both sides committing the sort of atrocities that are tragically common in ethnic conflicts: burning people alive, torture, mass killings, and so on. Over the following decades, the conflict continued to fester, periodically escalating into outright civil war. Ultimately, over a million people would be killed or displaced.
The timeline of the Sri Lankan conflict establishes how communal violence originated from identity politics rather than the underlying income and occupational disparity between the groups. That disparity reached its apex at the beginning of the twentieth century. Yet, there was no communal violence at that point or during the next half-century. It was only after the introduction of affirmative action programs that ethnic violence erupted. The deadliest attacks on Tamils occurred an entire decade after those programs had enabled Sinhalese to surpass Tamils in both income and education. As Thomas Sowell observed: “It was not the disparities which led to intergroup violence but the politicizing of those disparities and the promotion of group identity politics.”10
Consequences of Identity Politics in Sri Lanka and Beyond

Sri Lanka’s experience highlights some underappreciated consequences of identity politics. Most notably, one would expect that affirmative action programs would have warmed the feelings of the Sinhalese toward the Tamils. After all, they were receiving preferences for jobs and education at the Tamils’ expense. Yet, precisely the opposite happened: as the affirmative action programs were implemented, Sinhalese animus toward the Tamils progressively worsened. This pattern has been repeated in nearly all the countries where affirmative action has been implemented: affirmative action programs have an invidious effect on the group that benefits, imbuing them with a sense of insecurity and defensiveness over the benefits they receive. That group tends to justify the indefinite continuation of these benefits by claiming that the other group continues to enjoy “privilege”—or by demonizing them and claiming that they are “systemically” advantaged. Thus, the beneficiaries of affirmative action are often the ones to initiate hostilities. In Rwanda, for example, it was Hutu affirmative action beneficiaries who perpetrated the violence, not Tutsis. The situation in Sri Lanka was analogous, with Sinhalese instigating all of the initial riots and pogroms against the Tamils.
One knock-on effect of identity politics in Sri Lanka was that it ultimately benefited some of the wealthiest and most privileged people in the country. The government enacted several affirmative action schemes, each increasingly contrived to benefit well-heeled Sinhalese. The last of these implemented a regional quota system that was devised so that aristocratic Sinhalese living in the Kandy region would compete for spots against poor, undereducated Tamil farm workers. As one Tamil who lost his spot in engineering wrote: “They effectively claimed that the son of a Sinhalese minister in an elite Colombo school was disadvantaged vis-à-vis a Tamil tea plucker’s son.” This follows the pattern of many other affirmative action programs around the world: the greatest beneficiaries are typically the most politically connected (and privileged) individuals within the group receiving affirmative action. They are often wealthier and more privileged than many of the individuals against whom affirmative action is directed. This has been well documented in India, which has extensive data on the subgroups that benefit from its affirmative action programs.
One unexpected consequence of identity politics in Sri Lanka was rampant corruption. When Sri Lanka became independent, its government was widely deemed one of the least corrupt in the developing world. However, as affirmative action programs were implemented and expanded, corruption increased in lockstep. The adoption of affirmative action set a paradigm that pervaded the government: whoever held power could steer government resources to whomever they deemed “underserved.” A baleful side effect of ethnicity-based distortion of government policy is that it undermines and erodes more general standards of government integrity and transparency, legitimating a paradigm of corruption: if it is acceptable to direct policy for the benefit of an ethnic group, is it not also acceptable to do so for the benefit of a clan or an individual? It is a small step to go from one to the other, a step that many Sri Lankan leaders and bureaucrats took. Today, Sri Lanka’s government, which once rivaled European governments in transparency, remains highly corrupt. This pattern has been repeated in other countries. For example, after the Federation of Malaysia expelled Singapore, it adopted an extensive affirmative action program, whereas Singapore prohibited ethnic preferences. Malaysia subsequently experienced proliferating corruption, whereas Singapore is one of the least corrupt countries in the world today.
Economic divergence between Singapore and Sri Lanka’s GDP per capita, 1960–2023 (Source: Our World in Data)

Perhaps the most profound consequence of identity politics in Sri Lanka was that it ultimately made everybody in the country worse off. After World War II, per capita income in Sri Lanka and Singapore was nearly identical. But after it abandoned its shared “link language” and adopted ethnically divisive policies, Sri Lanka was plagued by violent conflict and economic underperformance; today, one Singaporean earns more than seven Sri Lankans put together. All the group preferences devised to elevate Sinhalese brought down everyone in the country—Tamil, Sinhalese, and all the other groups alike. Lee Kuan Yew, Singapore’s “founding father,” attributed that failure to Sri Lanka’s divisive policies, saying that if Singapore had implemented similar policies, “we would have perished politically and economically.” There are echoes of this in other countries that have implemented identity politics. When I visited Rwanda, I asked Rwandans of various backgrounds whether they thought distinguishing people by race or ethnicity ever helped anyone in their country. There was complete unanimity on this point: after they got over pondering why anyone would ask such a naïve question, they made it very clear that distinguishing people by group made everyone, whether Hutu or Tutsi, distinctly worse off. In the Balkans, I got similar answers from Bosnians, Croatians, Serbians, and Kosovars.
The Perilous Path of Identity Politics

Decades of sociological research and millennia of history have demonstrated that the tribal instinct is both powerful and hardwired into human behavior. As political scientist Harold Isaacs writes:
If anything emerges plainly from our long look at the nature and functioning of basic group identity, it is the fact that the we-they syndrome is built in. It does not merely distinguish, it divides … the normal responses run from … indifference to depreciation, to contempt, to victimization, and, not at all seldom, to slaughter.11

The history of Byzantium and Sri Lanka demonstrates that this tribal instinct is extremely easy to provoke. All it takes is official recognition of group distinctions and some group preferences to balkanize people into bitterly antagonistic groups, and the consequences are potentially dire. Even if a society that is balkanized in this way avoids violent conflict, it is still likely to be plagued by all the concomitants of social fractionalization: higher corruption, lower social trust, and abysmal economic performance.
It is therefore troubling to see the U.S. government, institutions, and society adopt Sri Lankan-style policies that emphasize group distinctions. As the U.S. continues down the perilous path of identity politics, it is unlikely to devolve into another Bosnia or Sri Lanka overnight. But the example of Sri Lanka is a dire warning: a country that was once renowned for its communal harmony quickly descended into violence and economic failure—all because it sought to redress group disparities with identity politics.
Surveys and statistics are now flashing warning signs in the United States. A Gallup poll found that while 70 percent of Black Americans believed that race relations in the United States were either good or very good in 2001, only 33 percent did in 2021.12 Other statistics show that hate crimes have risen over that period.13 In the last year, we have also seen the spectacle of angry anti-Israel protesters hammering on the doors of a college hall, terrorizing the Jewish students locked inside, and a Stanford professor telling Jewish students to stand in the corner of a classroom. As identity politics has increasingly directed public policy and institutions, relations between social groups have deteriorated rapidly. This, and a lot of history, suggests it’s time for a different approach.