Within days of the U.S. strike on Caracas and the capture of Venezuelan President Nicolás Maduro on January 3, 2026, a remarkable claim was sweeping across social media: American forces had deployed a devastating “sonic weapon” that left Venezuelan soldiers vomiting blood and unable to stand.
The headlines have been dramatic, with Forbes proclaiming: “U.S. Secret Weapon May Have Incapacitated Maduro’s Guards.”1 The Economic Times wrote about America’s “Secret Sonic Weapon,”2 while the UK Sun asserted: “US ‘Sonic Weapon’ is REAL after Chilling Claims it Left Captured Maduro’s Guards ‘Vomiting Blood.’”3 The story was dramatic, almost terrifying, but, as we shall argue here, almost certainly false.
Within minutes of the first explosions on January 3, conflicting claims were already circulating on social media about the number of missiles fired, ground forces deployed, and helicopters spotted flying over the city of Caracas, the focal point of the attack. The ambiguity and uncertainty that typify the fog of war are ideal breeding grounds for rumors. Ordinarily, such rumors fade as reliable information emerges. But in this case the U.S. military remained silent, while the Venezuelan government, like many authoritarian regimes, is notorious for withholding information.
This is a classic setup for the proliferation of rumors, whose intensity is proportional to both the perceived importance of the event and the level of ambiguity.4 Situations such as this are fertile soil for exaggerations, half-truths, conspiracy theories, and outright fabrications. Even after the situation on the ground stabilized and many early rumors were confirmed or denied, claims about the use of a sonic weapon not only persisted but flourished.
From WhatsApp to the World
One challenge in tracing this story to its origins is that it began in Venezuela, where the earliest accounts circulated in Spanish. Fortunately, one of us (DZ) is a fluent speaker and was able to examine the primary sources. In the days that followed, audio recordings rapidly spread on WhatsApp, describing events through purported firsthand accounts from soldiers and relatives near the impact zones.
On January 9, one story began circulating widely. In it, a supposed member of a colectivo—an armed militia that controls different sections of the city—described how the attack unfolded in the historic 23 de Enero neighborhood of western Caracas.
The audio was posted on the YouTube channel of Emmy Award-winning Venezuelan journalist Casto Ocando, and soon accumulated over one million views.5 In it, an anonymous narrator describes the attack.
“They shut down the entire electrical system, knocked out the radars, knocked out everything.”
He then recounts how a soldier activated a Russian-made anti-aircraft defense system to attack the helicopters.
“When he fired it, a drone immediately detected it and, well, they died, they killed them, all of them [the soldiers] with a single bomb… There are many dead, many people burned, many people wounded. I’ll send you a video, there are approximately 100 military personnel dead,” he adds.6
The narrator’s confidence in precise casualty figures amid the chaos of a nighttime attack is itself a red flag.
The alleged eyewitness continues:
“There were only eight helicopters and 20 men…who killed 200 men, 32 with a single shot, plus presidential guards of honor and civilians.”
He then describes weapons that “fired more than 300 bullets per minute,” adding,
“a thing that made me bleed, I was bleeding from my nose and didn’t know what it was, it was a whistle that sounded throughout Caracas and made people bleed from their noses and ears. We couldn’t move, that whistle immobilized us, they say it’s what’s called a sonic shockwave. It was something really horrible….”
The clip ends with claims that Americans
“don’t fight fair. They fight from above, with drones. The speeds of those helicopters…. They only sent eight helicopters and destroyed all of Caracas.”
The description of a sound that causes nosebleeds and immobilization across an entire city is physically implausible. While acoustic weapons such as Long Range Acoustic Devices (LRADs) can cause pain and disorientation at close range, their effects diminish rapidly with distance as the sound energy disperses. No known acoustic technology can cause bleeding from the ears and nose at a distance, let alone city-wide.
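A rough back-of-the-envelope calculation shows why. Assuming an idealized point source radiating freely in air (and using a nominal source level purely for illustration, not the measured specification of any particular device), sound pressure level falls off with distance roughly as

$$ L(r) \approx L(r_0) - 20\log_{10}\!\left(\frac{r}{r_0}\right)\ \text{dB}. $$

On those assumptions, a source producing an intense 150 dB at 1 meter would lose $20\log_{10}(1000) = 60$ dB by 1 kilometer—leaving roughly 90 dB, about the level of a lawn mower and far below anything generally associated with physical injury, let alone bleeding.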
Enter, Stage Right, Mike Netter
On January 9, the WhatsApp audio recording quickly spread across various social networks. The following day, popular conservative influencer Mike Netter posted on X a strikingly similar story, which he attributed to a security guard loyal to Nicolás Maduro.
🚨This account from a Venezuelan security guard loyal to Nicolás Maduro is absolutely chilling—and it explains a lot about why the tone across Latin America suddenly changed.
Security Guard: On the day of the operation, we didn't hear anything coming. We were on guard, but… pic.twitter.com/392mQuakYV
It is reproduced below so readers can judge for themselves:
Security Guard: On the day of the operation…suddenly all our radar systems shut down without any explanation. The next thing we saw were drones, a lot of drones, flying over our positions…. After those drones appeared, some helicopters arrived, but there were very few. I think barely eight helicopters. From those helicopters, soldiers came down, but a very small number. Maybe twenty men. But those men were technologically very advanced…
The story was originally posted in English, itself suspicious for a supposed Venezuelan guard. Had this been a genuine interview with a colectivo member, the original would have almost certainly appeared in Spanish. No Spanish-language version has ever surfaced. The “interview” appears to be a reconstruction of the WhatsApp audio, repackaged in a question-and-answer format.
Another red flag is the distinctly pro-American tone, which is unlikely to have come from a foreign fighter, let alone one sworn to defend his government. Defeated soldiers do not typically serve as unsolicited recruitment posters for the enemy. The guard also conveniently uses round figures (eight helicopters, twenty men, 300 rounds per minute), makes no mention of his comrades’ courage or resistance, and ends with a warning directed at Mexico, precisely echoing President Trump’s rhetoric at the time.
Journalists are trained to go to the source. Accordingly, we contacted Netter to request details of the alleged guard and the interviewer, and asked him to share the original Spanish source of this interview with us. He said he couldn’t do so without first asking the source, which he promised to do. As of this writing, he has not gotten back to us.
Press Secretary Leavitt Intervenes
Mike Netter’s post could have disappeared into the daily churn of social media had it not been for White House press secretary Karoline Leavitt, who shared it on her official account with the dramatic text: “Stop what you are doing and read this...”
Stop what you are doing and read this…
🇺🇸🇺🇸🇺🇸🇺🇸🇺🇸 https://t.co/v9OsbdLn1q
This endorsement dramatically elevated the story’s perceived credibility, despite the absence of any corroborating evidence. In effect, an anonymous, unverified social media claim received a semi-official White House endorsement, a departure from the press secretary’s traditional role as a gatekeeper of verified information. As a result, Netter’s post has gained over 30 million views and 10,000 responses.
Ever Increasing Circles
On January 10, the New York Post repeated Netter’s account under the headline: “US used powerful mystery weapon that brought Venezuelan soldiers to their knees during Maduro raid: witness account.”7 The story recounted the most spectacular elements: the sound wave, exploding heads, nosebleeds, and vomiting.
Curiously, the same YouTube channel of Casto Ocando that had released the original audio later uploaded a new video citing the Post article, treating the Post’s reconstruction as independent confirmation of its own earlier material. Other media outlets went further, falsely claiming that the Venezuelan guard had been interviewed by the New York Post.8
This process, where secondary reporting is mistaken for a primary source, is a classic example of how media myths are manufactured through journalistic shortcuts.
Notably, none of the Venezuelan soldiers who later appeared on camera—people whose identities and ranks are known—mentioned the use of sonic weapons. Footage aired on the Chavista network Telesur depicts young men wounded by shrapnel describing missile strikes, drones, and gunfire. None reported bleeding from the nose, vomiting, or sensations of cranial explosions.9 Nor are there civilian testimonies from Caracas describing a city-wide whistling sound. Some soldiers and civilians did report buzzing sounds, including individuals near Fort Tiuna, one of the attack sites. However, these sounds are readily explained by falling ordnance and whizzing bullets—mundane combat phenomena, not evidence of exotic weaponry.
It is also conspicuous that during President Trump’s exclusive interview with the New York Post, published on January 24, he was asked about the “sonic weapon” rumors. Trump replied that the U.S. has “the discombobulator,” which disabled enemy equipment as the American helicopters swooped in to attack in Caracas. But he made no mention of its effects on people.10
It’s Similar to the Havana Syndrome
The symptoms described in the WhatsApp audio are strikingly similar to claims made during the Havana Syndrome scare. Recently, the intelligence community has deemed the involvement of a foreign power “highly unlikely,” attributing Havana Syndrome to psychogenic and environmental factors rather than directed energy weapons.11
The Venezuelan sonic weapon narrative appears to be drawing from the same well of popular mythology. Furthermore, nosebleeds following an explosive military attack are far more likely to be caused by conventional factors such as blast pressure, dust, smoke inhalation, or even stress than by a hypothetical sonic weapon.
The narrator in the WhatsApp audio clip may be misattributing ordinary combat effects to an extraordinary cause: a classic pattern in rumor formation.
Under conditions of extreme stress, uncertainty, and sensory overload, people routinely seek out coherent explanations that give meaning to their own experiences. In the context of a sudden nighttime military strike, against a backdrop rife with ambiguity and anxiety, physical symptoms such as nosebleeds, dizziness, ringing in the ears, and temporary immobility are especially prone to being reinterpreted through the lens of culturally available narratives.
From a rumor and folklore perspective, the sonic weapon story fulfills a familiar psychological function: it collapses complex, confusing events into a single explanatory cause, providing closure amid uncertainty. The sonic weapon narrative transforms uncertainty into conviction and speculation into “fact.” This process reduces anxiety. As philosopher Susanne Langer once famously observed, humans possess a remarkable ability to adapt—except when confronted with chaos.12
A Familiar Pattern
The sonic weapon story follows a well-worn media myth template: an ambiguous event, an information vacuum, an anonymous account, amplification by politically motivated actors, and validation by authorities who should know better.
What began as a WhatsApp voice message from an anonymous militia member was transformed into a polished English-language “interview,” boosted by a partisan influencer, and essentially endorsed by the White House. At no stage was a shred of physical evidence produced. The ‘Discombobulator,’ as far as the evidence shows, exists only in the fog of war, and in the imaginations of those eager to believe.
It is also worth asking the cui bono question: “Who benefits from the sonic weapon narrative?” First, the U.S. government and military—by projecting overwhelming technological superiority. Second, pro-government Venezuelan sources also benefit from a story that excuses their rapid military defeat.
When both sides gain from a myth, its survival is all but guaranteed.
One of the hardest things to accept, especially for people who care about rationality, is that epistemic rigor is rarely applied consistently. Most of us do not give up bad arguments. Instead, we give up standards of evidence when the conclusion becomes socially or morally important to us.
There are well-established psychological reasons why this happens. Decades of research in social psychology show that many of our beliefs are not just opinions we hold, but parts of who we are. They become woven into our identities, our friendships, and often our professional lives.
Put more simply, we build our identities, friendships, and careers around certain beliefs. As a result, challenges to those beliefs are not experienced as abstract disagreements but as personal threats. Our self-preservation mechanism kicks in: We bend reality as far as necessary to preserve a flattering story about ourselves and our ingroup. Denial and aggression toward the outgroup follow naturally.
Psychologists Henri Tajfel and John Turner, who developed Social Identity Theory, showed that people internalize the values and beliefs of the groups they belong to, treating them as extensions of the self. When those beliefs are questioned, the threat is processed much like a threat to your status or belonging. The reaction is often defensive rather than reflective.
More recent work on motivated reasoning helps explain why such a reaction is so persistent. In the 1990s, psychologist Ziva Kunda demonstrated that people selectively evaluate evidence in ways that protect conclusions they are already motivated to believe. When a belief supports your identity or social standing, the mind unconsciously applies stricter standards to disconfirming evidence and looser standards to supporting evidence.
Political scientist Dan Kahan later expanded this idea with what he called “identity-protective cognition.” His research showed that people with higher cognitive ability are often better, not worse, at rationalizing beliefs that align with their cultural or political identities. In other words, intelligence does not necessarily make you more objective; it can make you a more effective advocate for your own side!
This body of research helps explain why challenges to core beliefs can feel existential. If your moral worldview underwrites your relationships, your career, or your sense of being a good person, abandoning it comes with real social and psychological costs. Under those conditions, defending the belief feels like defending your life as it is currently organized.
Seen in this light, the selective abandonment of evidentiary standards is not a moral failing unique to any one group. It is a predictable human response to perceived identity threat. Reasoning shifts from a tool for understanding the world to a mechanism for self-preservation.
I learned this firsthand during my years in the New Atheist movement. What struck me was how selective people’s skepticism could be. In debates about religion, the standards were ruthless. In debates about politics and social issues, those same standards were easily relaxed, and often vanished.
Take prayer. For decades, skeptics have pointed to controlled trials showing no measurable benefit of intercessory prayer. The best-known example is the STEP trial, a randomized study of nearly 1,800 cardiac bypass patients published in The American Heart Journal. It found no improvement in outcomes for patients who were prayed for, and in one group outcomes were slightly worse among patients who knew they were being prayed for. Among the New Atheists, prayer was considered resolved beyond reasonable debate not only because the experimental evidence showed no effect, but because the underlying causal story itself collapsed upon examination.
Philosophically, intercessory prayer fails at the most basic level: It posits an immaterial agent intervening in the physical world in ways that are neither specified nor independently detectable. There is no plausible mechanism, no dose-response relationship, no way to distinguish divine intervention from coincidence, regression to the mean, or natural recovery.
When some studies do claim positive effects of prayer, they almost invariably collapse under close inspection—small sample sizes, multiple uncorrected comparisons, vague outcome measures, post hoc subgroup analyses, or outright publication bias. Some define “answered prayer” so flexibly that any outcome counts as success; others rely on self-reported well-being, which is especially vulnerable to expectancy effects and motivated reasoning.
This is precisely why large, preregistered trials and systematic reviews, such as those published in The American Heart Journal, are treated as decisive: They close off these escape hatches. The conclusion that prayer “doesn’t work” is not dogma; it is the residue left after methodological rigor strips away every alternative explanation.
Now compare that level of scrutiny to how many people treat evidence in politically favored domains. What matters here is not even whether these conclusions are right or wrong, but how they become insulated from refutation.
In debates over trans healthcare, for example, studies in favor of many invasive medical interventions are based largely on self-reported outcomes, short follow-up periods, and substantial attrition. Despite these limitations, they are frequently treated as definitive. Criticisms that would be routine in almost any other medical context are instead dismissed as bad faith. But the fact that these issues involve real suffering should not exempt them from evidentiary scrutiny; it should raise the bar for it. In this case, the most comprehensive evidence available—multiple systematic reviews—has raised serious concerns about the overall quality of the evidence base, particularly with respect to pediatric interventions.
The UK’s Cass Review, commissioned by the National Health Service and published in stages between 2022 and 2024, concluded that the evidence for puberty blockers and cross-sex hormones in adolescents is generally of low certainty. Similar conclusions were reached by Sweden’s National Board of Health and Welfare and Finland’s Council for Choices in Health Care, both of which revised clinical guidelines after finding the evidence weaker than previously assumed. None of this proves that such treatments never help anyone, especially adults who exhausted other options. It does show that claims of scientific certainty are unjustified.
The same pattern appears at the level of theory. New Atheists made a cottage industry out of attacking unfalsifiable religious claims and god-of-the-gaps reasoning. Yet many of the same people now defend claims about “systemic discrimination” that are structured in exactly the same way: When disparities persist, they are treated as proof. When they shrink, the explanation retreats to subtler and less measurable mechanisms. Evidence against the claim rarely counts against the claim in the way it would in other domains.
Consider policing. It is often treated as a settled fact that racial bias is the primary driver of police shootings. But when Harvard economist Roland Fryer examined multiple large national datasets on police use of force, he found that there were no racial differences in officer-involved shootings once relevant contextual factors—such as crime rates, encounter circumstances, and suspect behavior—were taken into account.
What followed was not a broad reevaluation of the claim, but a shift in how it was framed. Rather than direct bias operating at the level of individual officers, explanations moved toward less specific and harder-to-measure forces: institutional culture, historical legacy, or diffuse forms of “structural” racism. These explanations may or may not be true, but they function differently from the original claim. Because they are more abstract and less tightly specified, they are also far more difficult to test or falsify.
Here’s the key issue: The pattern we can observe in all this is not that evidence resolved the question, but that disconfirming evidence changed the nature of the claim itself. A hypothesis that was once presented as empirically straightforward became broader, more elastic, and increasingly insulated from direct empirical challenge. Sound familiar? It’s the god-of-the-gaps fallacy.
The same pattern appears in debates over wage gaps. Raw differences in average earnings between groups are often presented as straightforward evidence of discrimination. But when researchers such as June O’Neill and later Claudia Goldin showed that simply controlling for factors such as occupation, hours worked, experience, career interruptions, and job risk substantially narrows or eliminates many commonly cited wage disparities, the original claim quietly shifted.
It was no longer argued that some demographics were being paid less than others for the same work under the same conditions. Instead, the explanation moved upstream: Sexism or systemic racism were said to operate on the variables themselves, shaping career choices, work hours, and occupational sorting in ways that produced lower average pay.
Again, these higher-level explanations may be partly true. But they function very differently from the initial claim. A hypothesis that began as a concrete, testable assertion about unequal pay for equal work became broader, more abstract, and harder to falsify. Evidence that would ordinarily count against the claim did not weaken it; it simply pushed the claim into less measurable territory. In other words, evidence that would count against the claim in any other domain instead causes the claim to become broader, more abstract, and less falsifiable. In these cases, disparities function the way miracles once did in theology: as proof of hidden forces.
What bothered me about the New Atheism movement was not disagreement over conclusions. It was the collapse of standards. Arguments once dismissed as unscientific were rehabilitated the moment they became morally fashionable. I focus here on the New Atheism movement because it marked the first time in my life (and, as far as I can tell, the first time in history) that a movement, at least on its surface, explicitly committed itself to applying the highest standards of evidence to some of the most consequential claims about the world, and in doing so successfully and very publicly dismantled societal structures and beliefs that had endured for millennia.
I’ve been thinking about all this for a long time, and I’ve come to suspect that most people—not by choice, but by evolutionary design—do not want or need a fully accurate understanding of how the world works. They want beliefs that protect their identity, signal membership in the right group, and increase their chances of (social) survival. Michael Shermer explained some of the evolutionary processes at hand here rather well in his books How We Believe and Conspiracy. In short, when it comes to patternicity—the human tendency to find meaningful patterns in meaningless noise—making Type 1 errors (i.e., finding nonexistent patterns) carries little evolutionary risk, while the opposite (i.e., missing real patterns) can often be the difference between life and death. This means that natural selection will favor strategies that make many incorrect causal associations in order to establish those that are essential for survival and reproduction.
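The underlying logic can be sketched as a simple cost inequality (the numbers below are purely illustrative, not drawn from any study): treat an ambiguous cue as a real pattern whenever the expected cost of missing it exceeds the expected cost of a false alarm,

$$ p \cdot C_{\text{miss}} \;>\; (1-p) \cdot C_{\text{false alarm}}. $$

If a rustle in the grass signals a predator only 1 percent of the time ($p = 0.01$) but being eaten is, say, 10,000 times costlier in fitness terms than needlessly fleeing, then $0.01 \times 10{,}000 = 100$ far exceeds $0.99 \times 1$, and fleeing every time is the winning strategy—even though 99 percent of those alarms are Type 1 errors.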
Under those conditions, reasoning becomes performative. Skepticism is adopted when it flatters the self and abandoned when it threatens a moral narrative. That is why debates on these topics so often drift toward unfalsifiable language and moral imperatives.
A fair question follows: How does anyone know they are not doing the same thing?
I think the real danger we should try to internalize is not that other people do this. It is that all of us do.
In modern education, Artificial Intelligence is increasingly marketed as a cognitive prosthesis: a tool that extends our mental reach, automates drudgery, and supposedly frees us to focus on higher-order creativity and insight. According to this narrative, AI does not replace thinking—it liberates it.
But beneath the polished interface of today’s Large Language Models (LLMs) lies a neurological and ethical trap, one with especially serious implications for developing minds. We are witnessing a subtle but profound shift from using tools to thinking with them, and, increasingly, letting them think for us.
The question Skeptic readers should be asking is not whether AI is impressive—it clearly is—but what kind of minds are formed when different kinds of thinking become optional. One place where this shift is especially revealing and especially consequential is moral development.
Moral Development
In moral education, how one arrives at a judgment matters more than which judgment one reaches. It is not about acquiring correct answers. Moral development involves cultivating the capacity to deliberate, restrain impulse, tolerate ambiguity, and reflect before acting. These capacities do not emerge automatically; rather, they are trained through effortful use. AI, however, is mostly indifferent to process and optimizes for output.
When we outsource the labor of reflection to an algorithm, we risk a form of ethical atrophy. This is not a Luddite rejection of AI but a skeptical, evidence-based examination of benefit claims that rarely account for developmental cost.
These are not merely philosophical concerns. They are grounded in the biology of how our moral capacities arise. To understand the stakes, we must begin with the adolescent brain. The teenage brain is not a finished system but more like a construction site. The prefrontal cortex (the executive center responsible for impulse control, long-term planning, and moral deliberation) undergoes rapid, uneven development throughout adolescence. Neural circuits that are exercised are strengthened and stabilized; those that are neglected are pruned away. This is not metaphor. It is biology.
Moral development, as I explain in my book AI Ethics, Neuroscience, and Education, depends on what researchers call cognitive friction. This friction appears as hesitation before a difficult choice, the effort of weighing competing values, and the discomfort of uncertainty. These moments feel inefficient, but they are also indispensable. Generative AI, by design, removes this friction.
When a student asks ChatGPT for a nuanced ethical argument and receives an instant, polished response, the brain skips the work. The student receives the answer without undergoing the cognitive struggle required to produce it. Ethical questions begin to resemble technical problems with downloadable solutions. Students lose the habit of lingering in uncertainty, the very space where moral reasoning takes shape. AI does not hesitate; it generates outputs based on probability, not conscience. Humans, however, should hesitate. That hesitation is not weakness but moral functioning.
Cognitive and Emotional Development
If moral reasoning is one casualty of reliance on LLMs, it is far from the only one. Consider writing. Writing is not simply a way to display what we know—it is the process through which we figure out what we think. Organizing vague intuitions into a coherent argument places a heavy demand on the developing prefrontal cortex, and when AI performs this structuring, it deprives the brain of precisely the exercise it needs to mature.
If intelligence is measured only by output, for example the finished essay or the correct solution, AI appears miraculous. But if intelligence is understood as the capacity to reason, deliberate, and restrain impulse, AI-driven cognitive offloading begins to resemble a neurological shortcut with long-term consequences, not unlike actual shortcuts that reshape the terrain.
The danger does not stop at cognition. It extends into emotional and social development. We are entering an era of affective computing, in which machines are designed not merely to process information but to simulate emotional responsiveness. AI systems now speak in tones of empathy, reassurance, and concern. They never interrupt, misunderstand, or demand reciprocity.
For an isolated or anxious adolescent, an AI companion can feel safer than unpredictable human relationships. It offers validation without vulnerability and empathy without risk.
But moral growth, just like cognitive abilities, does not occur in comfort. Human relationships require patience, accountability, and recognition of another person’s interior life. They involve misunderstanding, disagreement, and the difficult work of repair. AI relationships require none of this. They are emotionally efficient, and ethically hollow.
What they provide is a psychological sugar rush: immediate affirmation without the nutritional value of genuine connection. The ethical danger here is subtle: We are not merely giving students a new tool but also shaping their preferences. We are quietly training young people to prefer relationships that never challenge them. Over time, this fosters comfort with anthropomorphic simulations and anxiety toward real human empathy, which is messy, incomplete, and demanding.
Toward Skeptical AI Literacy
This is not a call to ban AI. The question is not whether we use AI in education, but how and when.
Beyond the developmental effects described here, we should also note that LLMs hallucinate. With remarkable confidence, they fabricate sources, misstate facts, and invent details. This fluency creates trust. What emerges is a form of passive knowing: information is consumed without ownership or justification. In an era where machines can generate infinite content, the ability to distinguish truth from fluent fiction becomes one of the most critical civic skills we have. Ironically, our increasing reliance on AI may be eroding the vigilance that skill requires.
This means we need to be teaching students both how to prompt machines and how to resist them. In other words, AI output should be treated not as a truth to be consumed but as a hypothesis to be tested. We also need to teach the value of the seeming inefficiency of human thinking.
Finally, the central ethical question of our time is not whether machines can think for us. It is whether in allowing them to do so too often we risk forgetting how to think for ourselves. We must be careful not to engineer the atrophy of human wisdom.
As a public intellectual who engages in debates and conversations on a wide range of subjects, I am often asked whether I “believe in” this or that scientific claim—questions I found puzzling at first, until I figured out that my interlocutors were conflating beliefs with facts.
For example, I don’t “believe in” the germ theory of disease. I accept it as factually true, and as we’ve seen in the recent pandemic, a germ like the SARS-CoV-2 virus is not something to believe in or disbelieve in. It simply is a matter of fact and it can cause a deadly disease like Covid-19.
Whether or not vaccines and masks slow its spread is also a factual question that science, at least in principle, can answer, although whether or not vaccines and masks should be mandated by law is a political matter that differs from scientific questions. But asking you if you “believe in” the SARS-CoV-2 virus would be like asking you if you “believe” in gravity. Gravity is just a brute fact of nature. It’s not something to believe or disbelieve.
As the science fiction author Philip K. Dick famously quipped, “Reality is that which, when you stop believing in it, doesn’t go away.”
Objective Truths and Justified True Belief
What we’re after here is knowledge, which philosophers traditionally define as justified true belief. That is, we want to know what is actually true, not just what we want to believe is true. The problem is that none of us are omniscient. If there is an omniscient God, it’s not me, and it’s also not you. Or, in the secular equivalent, there is objective reality but I don’t know what it is, and neither do you.
Once we agree that there is objective truth out there to be discovered and that none of us knows for certain what it is, we need to work together through open dialogue in communities of truth-seekers to figure it out, starting by acknowledging our shortcomings as finite fallible beings subject to all the cognitive biases that come bundled with our reasoning capacities. The workaround for this problem is having adequate evidence to justify one’s beliefs. Here are two examples from science: the universe began approximately 13.8 billion years ago in a hot Big Bang, and the dinosaurs went extinct around 65 million years ago.
The above propositions are “true” in the sense that the evidence is so substantial that it would be unreasonable to withhold our provisional assent. At the same time, it’s not impossible, for example, that the dinosaurs went extinct recently, just after the creation of the universe some 10,000 years ago (as Young Earth Creationists assert). However, this proposition is so unlikely, so completely lacking in evidence, and so evidently grounded in religious faith, that we need not waste our time considering it any further (the debate about the age of the Earth was resolved over a century ago).
Thus, a scientific truth is a claim for which the evidence is so substantial it is rational to offer one’s provisional assent. Provisional is the key word here. Scientific truths are temporary and could change with changing evidence.
The ECREE Principle, or Why Extraordinary Claims Require Extraordinary Evidence
In his 1980 television series Cosmos, in the episode on the possibility of extraterrestrial intelligence existing somewhere in the galaxy, or of aliens having visited Earth, Carl Sagan popularized a principle about proportioning one’s beliefs to the evidence when he pronounced that “extraordinary claims require extraordinary evidence.” The ECREE principle was first articulated in the 18th century by the Scottish Enlightenment philosopher David Hume, who wrote in his 1748 An Enquiry Concerning Human Understanding: “a wise man proportions his belief to the evidence.”
ECREE means that an ordinary claim requires only ordinary evidence, but an extraordinary claim requires extraordinary evidence. Here’s a quotidian example. I once took a road trip from my home in Southern California to the Esalen Institute in Big Sur, California, home of all things New Age. To get there I took the 210 freeway north to the 118 Freeway north to the 101 freeway north to San Luis Obispo, where I exited to Highway 1 and followed the Pacific Coast Highway north through Cambria and San Simeon until arriving at the storied home of the 1960’s Human Potential Movement. Weirdly, just past Cambria, a bright light hovered over my car. Thinking it was a police helicopter, I pulled over to the side of the road, fearful that I had been busted for speeding (which I am wont to do). But it wasn’t the cops. It was the aliens, and they abducted me into their mothership and whisked me off to the Pleiades star cluster where their home planet is located. There I met extraterrestrial beings who gave me a message to take back to Earth—we must stop global warming and nuclear proliferation…or else.
Now, which part of this story triggers your insistence on additional evidence? That’s obvious. My claim to have driven on California highways is ordinary and calls for only ordinary evidence (in this case, you can just take my word for it), but my claim to have been abducted by aliens and rocketed off to the Pleiadeian home planet is extraordinary, and unless I can provide extraordinary evidence—like an instrument from the dashboard of the alien spaceship, or one of the aliens themselves—you should be skeptical.
ECREE also suggests that belief is not an either-or on-off switch—not a discrete state of belief or disbelief, but a continuum on which you can place confidence in a belief according to the evidence: more evidence, more confidence; less evidence, less confidence. Consider the extraordinary claim that another bipedal primate called Big Foot, or Yeti, or Sasquatch survives somewhere on Earth. That would be quite extraordinary because after centuries of searching for such a creature none have been found.
Before we assent to such a claim we need extraordinary evidence, in this case a type specimen—what biologists call a holotype—in the form of an actual body. Blurry photographs, grainy videos, and stories about spooky things that happen at night when people are out camping do not constitute extraordinary evidence—it’s barely even ordinary evidence—so it is reasonable for us to withhold our provisional assent.
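One way to make ECREE precise—offered here purely as an illustrative sketch, with made-up numbers—is Bayes’ theorem in its odds form: the odds you should give a hypothesis after seeing the evidence equal your prior odds multiplied by how much more probable the evidence is if the hypothesis is true than if it is false,

$$ \frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(H)}{P(\neg H)} \times \frac{P(E \mid H)}{P(E \mid \neg H)}. $$

If your prior odds that Bigfoot exists are one in a million (an extraordinary claim), then raising the claim to even odds requires evidence roughly a million times more likely in a world with Bigfoot than in one without—a holotype body would qualify. A blurry photograph, which is nearly as easy to produce in a Bigfoot-free world as in a Bigfoot-haunted one, has a likelihood ratio close to 1 and barely moves the needle.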
Impediments to Truth and How to Overcome ThemIn addition to falling far short of omniscience, humans are also saddled with numerous cognitive biases, including (to name but a few): confirmation bias, hindsight bias, myside bias, attribution bias, sunk-cost bias, status-quo bias, anchoring bias, authority bias, believability bias, consistency bias, expectation bias, and the blind-spot bias, in which people can be trained to identify all these biases in other people but can’t seem to see the log in their own eye.
Then there is the suite of logical fallacies, such as Emotive Words, False Analogies, Ad hominem, Hasty Generalization, Either-Or, Circular Reasoning, Reductio ad Absurdum and the Slippery Slope, after-the-fact reasoning, and especially why anecdotes are not data, why rumors do not equal reality, and why the unexplained is not necessarily the inexplicable.
With such listicles of cognitive biases and logical fallacies identified by philosophers and psychologists, it’s a wonder we can think at all. But we can and do, through experience, education, and instruction in the art and science of thinking. What follows are some of the methods developed by philosophers and psychologists to identify and work around all these impediments to the search for truth.
Practice Active Open-Mindedness. Research shows that when people are given the task of selecting the right answer to a problem by being told whether particular guesses are right or wrong, they do the following:
In their book Superforecasting, Philip Tetlock and Dan Gardner document how bad most people are at making predictions, and what skillsets those who are good at it employ. They begin with the results of extensive testing of people’s predictions. It’s not good. Even most so-called experts were no better than dart-tossing monkeys when their predictions were checked. When asked to make specific predictions—for example, “Will another country exit from the EU in the next two years?” and, presciently, “Will Russia annex additional Ukraine territory in the next three months?”—and their prognosticating feet were held to the empirical fire, Tetlock and Gardner found that most experts were overconfident (after all, they’re experts), encouraged by the lack of feedback on their accuracy (if no one reminds you of your misses you’ll only remember the hits—the confirmation bias), and are victims of all the cognitive biases and illusions that plague the rest of us.
The worst forecasters were people with big ideas—grand theories about how the world works—such as left-wing pundits predicting class warfare that never came, or right-wing commentators prophesying a socialistic demise of the free enterprise system that never happened. Failed predictions are hand-waved away—“This means nothing!” “Just you wait!” Superforecasters, by contrast, practice active open-mindedness, which Tetlock and Gardner defined quantitatively by asking experts “Do you agree or disagree with the following statements?” Superforecasters were more likely to agree that:
Superforecasters were more likely to disagree that:
The psychologist Gordon Pennycook and his colleagues developed their own instrument for measuring active open-mindedness, in which people are asked whether they agree or disagree with the following statements, where the more open-minded answer is indicated in parentheses:
Active open-mindedness is a cogent tool of reason for assessing the truth value of any claim or idea. So is reason itself, the broader suite of rational skills—of which active open-mindedness is a subset—that must be cultivated through education and practice.
Objective facts in support of provisional truths about the world are determined by tried-and-true methods developed over the centuries since the Scientific Revolution and the Enlightenment in what are sometimes called rationality communities—scholars, scientists, and researchers who collect data, form and test hypotheses, present their findings to colleagues at conferences, publish their papers in peer reviewed journals and books, and reinforce the norms of truth-telling to their colleagues and students along with themselves. In his book The Constitution of Knowledge, the journalist and civil rights activist Jonathan Rauch outlines and defends the epistemic operating system of Enlightenment liberalism’s social rules for attaining reliable knowledge when people cannot agree on what is true. Although these communities differ in the details of what, exactly, should be done to determine justified true belief, Rauch suggests several features held in common that constitute the constitution of knowledge:
The most important norm of all is the freedom to critique or challenge any and all ideas. Why?
If you disagree with me, it is the norms and customs of free speech and open dialogue that allows you to do so. From those open dialogues, debates, and disputations, in time the truth emerges.
Excerpt from Truth: What It Is, How to Find It, and Why It Still Matters, Johns Hopkins University Press. January 27, 2026
I have been practicing medicine for more than 40 years. During that time the management of obesity and Type 2 diabetes (T2DM)—the kind that usually is caused by being overweight—often felt like Sisyphus pushing a boulder up a hill, only to have it roll back down, often heavier than before. We faced a “diabesity” epidemic where the available tools were blunt instruments at best.
Lifestyle intervention—meaning trying to get someone to change their behavior—was both the most and the least effective method we had. Most, because in the less than two percent of patients who were successful, it worked very well. Least, because, well … 98 percent failed. And they failed because all of our evolutionary history (“See food? Eat it!”) was working against them. This is the mismatch theory: a mismatch between the environment of our evolutionary ancestry, which designed our brains to seek foods that were at once rare and nutritious (sweets and fats), and the modern environment, in which such foods are so overabundant that we eat far beyond the saturation point.
The pharmacological options were often disappointing: Sulfonylureas and insulin lower blood sugar but cause weight gain, exacerbating the underlying problem. Bariatric surgery works, but it is invasive and carries surgical as well as lifelong nutritional risks.
Into this therapeutic desert crawled the Gila monster, a venomous lizard native to the American Southwest from which researchers derived GLP receptor agonists (glucagon-like peptide-1 receptor agonists)—medications that mimic the natural GLP-1 hormone, lowering blood sugar, helping control appetite, and promoting weight loss by telling the pancreas to release more insulin when glucose is high, slowing the rate of stomach emptying, and signaling to the brain a sense of fullness.
As a skeptic, I am allergic to the word “miracle,” but when we look at the data for GLP receptor agonists, along with the innumerable before and after photos of successful weight loss transformations, we are forced to admit that we have moved from a realm of wishful thinking into one of potent pharmacology. But, as always in medicine, there is no free lunch.
The Incretin Concept: From Gut to Glory
The story begins with the “incretin effect”—the observation that glucose taken by mouth triggers a much stronger insulin response, via hormones released by the gut, than glucose injected directly into a vein. The gut knows you are eating and tells the pancreas to get ready to pack away the extra calories as fat. In patients with Type 2 diabetes, this effect is blunted and the sugar floats around in the bloodstream much longer.
Scientists identified two main hormones responsible: Glucose-dependent Insulinotropic Polypeptide (GIP) and Glucagon-like Peptide (GLP-1). The problem is that GIP doesn’t work well in diabetics. GLP-1 works beautifully—stimulating insulin, suppressing glucagon, and slowing gastric emptying—but it has a fatal flaw: It is destroyed by the enzyme DPP-4 within minutes of entering the bloodstream.
This led to two distinct pharmaceutical strategies. The earlier version was DPP-4 Inhibitors. Drugs like the “Gliptins” block DPP-4, making GLP-1 last longer. They are well-tolerated but their ability to lower blood sugar is modest and they generally do not cause weight loss.
The newer strategy was to engineer versions of GLP-1 to resist degradation. This is where the Gila monster strolled in. In the 1990s, while researching hormone-like drugs, Dr. John Eng noted a similarity between exendin-4, a peptide found in Gila monster venom, and GLP-1—and exendin-4 was able to resist breakdown by DPP-4.
The Evidence: Efficacy Beyond the Hype
The first GLP-1 agonist, exenatide (Byetta, approved in 2005), required twice-daily injections and produced modest weight loss. But the pharmacology evolved rapidly. We moved to once-daily liraglutide, and then to the once-weekly heavyweights: dulaglutide, semaglutide (Ozempic and Wegovy), and the dual GIP and GLP-1 agonist tirzepatide (Mounjaro and Zepbound).
The clinical trials, called LEAD, SUSTAIN, PIONEER, STEP, and SURPASS (you’ve got to just love the creative acronyms!) have generated data that are hard to dismiss:
Glycemic Control: These drugs consistently outperform most oral antidiabetics in lowering blood sugar by 10 to 20 percent.
Weight Loss: This is the game changer. While early drugs produced 2–4 kg of weight loss over six months, the newer agents are producing results previously only seen with surgery. In the STEP-1 trial, semaglutide 2.4 mg resulted in an approximately 15 percent body weight reduction. Tirzepatide pushed this further, achieving up to 22 percent weight loss in the SURMOUNT-1 trial. That is the effect of a 250-pound person losing 55 pounds! Who wouldn’t want some of that?!
Cardiovascular Outcomes: Perhaps most importantly, these drugs are not like some that just make numbers look better; they are saving lives. Liraglutide and semaglutide have demonstrated significant reductions in major adverse cardiovascular events (MACE), including heart attack and stroke, in high-risk populations. The SELECT trial recently showed semaglutide reduces MACE by 20 percent even in nondiabetic patients with cardiovascular disease. But don’t be fooled, it is not likely that these drugs have specific effects on the heart. It is probable that the fat loss alone is causing these benefits.
Some Skeptical Scrutiny: The Risks
If a drug sounds too good to be true, we must look for the catch. GLP-1 agonists have plenty.
The “Puke” Diet? The most common side effects of GLP-1 agonists are gastrointestinal: nausea, vomiting, diarrhea, and bloating. In some trials, up to 45 percent of patients experienced nausea. While this usually subsides, it raises a valid question: Are people losing weight because their metabolism is optimized, or because they feel too sick to eat? The mechanism involves central appetite suppression in the hypothalamus, but the “gastric braking” effect is real and unpleasant for many.
The Pancreas and Thyroid Scare. Early observational data suggested a link between GLP-1 agonists and pancreatitis and pancreatic cancer. However, extensive reviews have not confirmed a causal link to pancreatic cancer, though a slight increase in pancreatitis persists in some data. This makes sense, as one of the major sites of GLP’s effects is on the pancreas. In the thyroid, these drugs cause C-cell tumors in rodents. Humans have far fewer GLP-1 receptors on their thyroid C-cells than rats, and so far no evidence of increased thyroid cancer has been confirmed in humans. Still, the Black Box warning remains: If you have a family history of endocrine tumors or medullary thyroid cancer, these drugs are not for you.
Vanishing Muscle. Weight loss via GLP-1 agonists is not just fat loss, so overall body composition must be monitored. In the STEP-1 trial, DEXA scans showed that lean body mass (muscle and bone) accounted for nearly 40 percent of the weight lost. In older adults, this raises the specter of “sarcopenic obesity”—being frail and weak despite having excess fat. Losing muscle mass compromises physical function and metabolic health. If we are simply shrinking patients without preserving their strength, we may be trading one set of problems for another. Now, regular and increased exercise is part of the prescription for all patients taking GLP drugs, but studies on how well this works are still in progress.
The Perioperative Peril. Because GLP-1 agonists delay gastric emptying, there have been reports of patients aspirating (inhaling) gastric contents during anesthesia, even after standard fasting protocols. This is a new, practical safety concern that surgical societies are rushing to address.
Mental Health. Reports of suicidal ideation appeared in postmarketing monitoring of GLP-1 agonist users, prompting investigations by European regulators. However, recent large cohort studies have not supported an increased risk of suicidality compared to other diabetes medications. As with all centrally acting drugs, vigilance is required, but the current data are reassuring.
A Lifetime Prescription? The most significant caveat for GLP-1 agonists is durability. Obesity can be a chronic, relapsing disease. Trials show that when patients stop taking semaglutide, they regain two-thirds of the lost weight within a year, and cardiometabolic improvements revert toward baseline. This implies that these are not “cures” but lifelong therapies, much like blood pressure medication.
Financial Toxicity. As I write this, these drugs are prohibitively expensive, creating a massive public health gap. We also saw shortages that left diabetic patients unable to fill prescriptions because the supply was diverted to off-label weight loss use. GLP-1 agonists are not expensive to produce, however, and the patent on Ozempic expired in January of 2026 in Canada and China (and lasts until 2030 in the U.S.), so I expect the market to bring the costs down dramatically over the next few years. As of this year, close to 12 percent of Americans have tried one of these drugs at least once.
Needles Versus Pills
If there is one thing that holds patients back from the current crop of injectable incretins it is the needle. Despite the efficacy of weekly injections, people prefer pills. The pharmaceutical industry, never one to leave money on the table, has been racing to develop an oral alternative that doesn’t require the strict fasting rituals of earlier attempts like oral semaglutide. Enter orforglipron, the latest contender in the “nonpeptide small molecule” class, which promises the benefits of GLPs without the injection or the fuss.
Unlike existing peptide predecessors that are digested by stomach acid unless armored with absorption enhancers, orforglipron is a chemical—a small molecule designed to survive the GI tract and activate the GLP-1 receptor directly. The data from the ATTAIN-1 trial, published in September 2025, look good. Patients on the 36 mg dose achieved an average weight loss of 11.2 percent over 72 weeks, compared to just 2.1 percent for placebo. No needles. And this pill does not require the “empty stomach, no water, wait 30 minutes” song-and-dance required by oral semaglutide; it can be taken with or without food.
However, let’s look a little past the convenience. While an 11.2 percent average weight loss is clinically significant, it trails behind the 13.7 percent average reduction seen with semaglutide and 20.2 percent with tirzepatide. Furthermore, the biology of GLP-1 agonism remains the same regardless of delivery method: You cannot cheat physiology. In the ATTAIN-1 trial, adverse events led to treatment discontinuation in up to 10.3 percent of patients on the drug, compared to only 2.7 percent on placebo. The side effects are the usual suspects—gastrointestinal distress, nausea, and constipation—confirming that oral delivery does not bypass the “gastric braking” misery.
We must also remain vigilant regarding safety. The development of a similar small molecule, lotiglipron, was unceremoniously halted due to liver toxicity concerns. While orforglipron has passed its Phase 3 hurdles without these specific signals so far, the history of pharmacology teaches us that rare, serious adverse events often lurk in the postmarketing shadows.
Additionally, while proponents argue that small molecules are cheaper to manufacture than biologics, whether those savings will be passed on to the patient or simply absorbed into the profit margins remains to be seen, with projected self-pay costs in some cases exceeding $1,000 per month. Orforglipron represents a technological leap, but it is not a magic wand; it is simply a more convenient way to induce the same physiological trade-offs we have seen over the last several years with the shots.
Conclusion
Prior to the incretin era, our ability to manage the twin epidemics of diabetes and obesity was dishearteningly limited. GLP-1 receptor agonists represent a hard-earned pharmacological breakthrough, offering potent glucose control and unprecedented weight loss.
However, skepticism is still warranted regarding their indiscriminate use. They are already being used in numerous off-label ways, like shedding a few pounds before a wedding, allegedly decreasing cravings for addictive drugs like alcohol and narcotics, and purportedly even for the treatment of Alzheimer’s and Parkinson’s disease. There are ongoing studies for these uses, but early data are weak and the risks are unknown. These are serious medications with serious side effects, and they may require lifelong commitment.
Caveat emptor.
On March 30–31, 1979, Iranians went to the polls. The ballot contained a single question: Should Iran become an Islamic Republic? The choices were “Yes” (Green) or “No” (Red). The official result: 98.2% voted Yes.1
Fifty-Eight Days Earlier
On February 1, 1979, Ayatollah Khomeini returned to Iran after fourteen years in exile. Millions filled the streets of Tehran—the estimates range from two to five million.2 But the man they cheered was a carefully constructed image. During the flight, Khomeini remained secluded in the upper deck of the chartered Boeing 747, praying.3 When the plane landed, he chose to be helped down the stairs by the French pilot rather than his Iranian aides, a calculated move to prevent any subordinate from sharing the spotlight.4
He chose his first destination deliberately: Tehran’s main cemetery, where those who died during the revolution were buried. The crowd was so dense his motorcade could not pass; he took a helicopter instead.5 By speaking among the graves, Khomeini positioned himself as the guardian of those who died in the revolution and as someone who would fulfill what they had sacrificed for.
In the weeks that followed, Khomeini offered both material goods and spiritual salvation. He promised free electricity, free water, and housing for every family. Then he added the caveat that would define the coming era: “Do not be appeased by just that. We will magnify your spirituality and your spirits.”6
A Coalition of Contradictions
The crowd that greeted him was not a monolith, but a coalition of contradictions. Marxists marched hoping for a socialist future free of American influence. Nationalists and liberals sought constitutional democracy. The devout sought governance by Sharia—and for them, the revolution was holy war: the Shah represented taghut, the Quranic term for tyrannical powers that lead people from God, and those who died fighting him became shahid, martyrs.
Khomeini managed these competing visions by keeping his actual plans vague. He spoke of freedom, justice, and independence, terms each faction could interpret as it wished.7 His blueprint for clerical rule, Velayat-e Faqih, remained in the background. Abolhassan Bani-Sadr, who would become the Islamic Republic’s first president, later recalled: “When we were in France, everything we said to him he embraced and then announced it like Quranic verses without any hesitation. We were sure that a religious leader was committing himself.”8 Khomeini himself would later state: “The fact that I have said something does not mean that I should be bound by my word.”9
Ayatollah Mahmoud Taleghani casts his vote in the March 1979 Islamic Republic referendum.
The Empty Phrase
Now, let’s return to the ballot.
A republic places sovereignty in the people. Citizens choose their laws. An Islamic state places sovereignty in God, but not “God” in some abstract, philosophical sense. The God of the Islamic Republic is specifically Allah as understood in Shia Islam: a God who communicates through the Quran, whose will was interpreted by the Prophet Muhammad, then by the twelve Imams, and now (in the absence of the hidden Twelfth Imam) by qualified Islamic jurists. This is not a deist clockmaker or a personal spiritual presence. This is a God with specific laws, specific requirements, and specific men authorized to speak on His behalf.
So, what did God want? The ballot never said.
The 1979 Iranian Islamic Republic referendum ballot showing the “نه” (No) option in red. Voters chose between a simple yes or no on whether Iran should become an “Islamic Republic”—a phrase containing no constitution, no enumerated rights, and no definition of which Islamic laws would apply or who would interpret them.
“Islamic Republic” contained no details. No constitution, no enumerated rights, no definition of which Islamic laws would apply or who would interpret them. Voters were not choosing a specific system of government. They were choosing a phrase, and trusting that its meaning would be filled in later by men they believed spoke for God.
For those paying attention, there were clues. Khomeini had written extensively about Velayat-e Faqih (the Guardianship of the Islamic Jurist), a system in which a senior cleric would hold supreme authority as God’s representative on Earth. He had lectured on it in Najaf. He had published a book.10 But in the noise of revolution, in the flood of promises about free electricity and spiritual elevation, these details were background static. The crowds were not voting on constitutional theory. They were voting on hope.
The 98% voted Yes. Forty-seven years later, we can measure what exists in Iranian society.
Religious Faith
For this case study to be valid, we must establish a baseline. Was Iranian society already irreligious before 1979, or has religiosity declined under the theocracy?
Available evidence suggests the latter.
In 1975, a survey of Iranian attitudes found over 80% of respondents observing daily prayers and fasting during Ramadan. The methodology is not fully documented in accessible sources.11 However, the broader historical record supports the baseline: the 1979 revolution mobilized millions under explicitly Islamic banners, clerical figures commanded genuine social authority, and the Iranian government’s own 2023 leaked survey found 85% of respondents saying society has become less religious than it was.12 Forty-seven years later, mosques are empty.
Official Iranian census data reports 99.5% of the population as Muslim.13 This figure measures legal status, not belief. Under Iranian law, a child born to a Muslim father is automatically registered as Muslim, and leaving Islam carries severe legal consequences. While formal executions for “apostasy” are relatively rare—the regime prefers to charge dissidents with crimes like “Enmity against God” or “Insulting the Prophet”—the threat is sufficient to enforce public silence.
Saadatabad district, Tehran, January 8, 2026: A mosque burns amid protests. (Source: Press Office of Reza Pahlavi)
In June 2020, the Group for Analyzing and Measuring Attitudes in Iran (GAMAAN) surveyed over 50,000 respondents using methods designed to protect anonymity.14
Results:
While this online sample skews urban (93.6% vs. Iran’s 79%) and university-educated (85.4% vs. 27.7% nationally), the magnitude of divergence from official statistics—32% Shia vs. 99.5% in census data—is too large to explain through sampling bias alone. Meanwhile, face-to-face surveys suffer the opposite problem: when GAMAAN asked respondents if they’d answer sensitive questions honestly over the phone, 40% said no.15
An interesting outcome of this study is that Iran has only about 25,000 practicing Zoroastrians (out of a total population of around 92.5 million), yet 7.7% of respondents selected this identity. Researchers interpret this as “performing alternative identity aspirations”—claiming pre-Islamic Persian heritage to reject imposed Islamic identity.16
The key findings are, however, clear: 44.5% selected a non-Islamic category when asked their current religion and 47% reported transitioning from religious to non-religious during their lifetime.
The second figure suggests active deconversion rather than inherited secularism.
In 2024, a classified survey by Iran’s Ministry of Culture and Islamic Guidance (conducted in 2023) was leaked to foreign media.17 This data provides a comparison point from within the regime itself.
Indicator | 2015 | 2023
Support separating religion from state | 30.7% | 72.9%
Pray “always” or “most of the time” | 78.5% | 54.8%
Never pray | 3.1% | 22.2%
Never fast during Ramadan | 5.1% | 27.4%
The same survey found 85% of respondents said Iranian society had become less religious in the previous five years. Only 25% reported trusting clerics.
Based on my years of closely following Iranian society, the pace of religious abandonment has accelerated significantly since the 2022 “Woman, Life, Freedom” uprising. The leaked government data confirms this trajectory: the sharpest shifts in prayer and fasting occurred within the 2015–2023 window, with 85% saying society had grown less religious in just the previous five years.
In February 2023, senior cleric Mohammad Abolghassem Doulabi stated that 50,000 of Iran’s approximately 75,000 mosques had closed due to low attendance, a claim partially corroborated by the leaked government survey finding only 11% always attend congregational prayers.18
Election participation has also declined. Official turnout in the June 2024 presidential election was 39.93%, the lowest in the Islamic Republic’s history.19
The Evidence on the Streets
The data on paper is corroborated by the specific vocabulary of the street. The protest chants have evolved from requesting reform to rejecting the entire theological framework.
Art by Hamed Javadzadeh — Woman, Life, Freedom Movement (2022)
Consider the chant: “Neither Gaza nor Lebanon, I sacrifice my life for Iran.”
This is a direct rejection of the regime’s core ideology. The Islamic Republic prioritizes the Ummah—the transnational community of believers—over the nation-state. By rejecting funding for Hamas and Hezbollah in favor of national interests, protesters are secularizing their priorities: the Nation has replaced the Faith as the object of ultimate concern.
Even more specific is the chant: “Death to the principle of Velayat-e Faqih.”
The protestors are not merely calling for the death of the dictator (Khamenei); they are targeting the specific theological doctrine that grants him legitimacy. They are rejecting the very concept of divine guardianship.
But the most striking evidence of the revolution’s failure is the return of the name it sought to erase. In a historical irony that defies all prediction, crowds now chant “Reza Shah, bless your soul,” and call upon Reza Pahlavi, the son of the deposed Shah, to return. The same population that staged a revolution to overthrow a monarchy in 1979 is now invoking that monarchy as the antidote to theocracy.
The Mechanism
A note on terminology: When this article refers to “Allah,” it means the legislative deity of the Islamic Republic—a God with enforceable commands interpreted by authorized clerics. This is distinct from the personal God that 78% of Iranians still believe in.
As mentioned earlier, Iran’s constitution establishes Velayat-e Faqih—the Guardianship of the Islamic Jurist. Article 5 declares that in the absence of the Twelfth Imam (a messianic figure believed to have been in supernatural hiding since the 9th century), authority belongs to a qualified jurist. The Tony Blair Institute’s analysis states it directly: “the supreme leader’s mandate to rule over the population derives from God.”20 Khamenei’s own representative, Mojtaba Zolnour, declared in 2009: “In the Islamic system, the office and legitimacy of the Supreme Leader comes from God, the Prophet and the Shia Imams, and it is not the people who give legitimacy to the Supreme Leader.”21
This is not metaphor. The system’s legitimacy rests on the claim that its laws are Allah’s laws, its punishments are Allah’s punishments, its wars are Allah’s wars.
When morality police detained Mahsa Amini, leading to her death, they were enforcing the mandatory religious duty of “Forbidding the Wrong.” When courts execute apostates, they enforce Allah’s law. When the regime sends billions to Hezbollah while Iranians face poverty, it pursues Allah’s mission. When it pursues a nuclear program that invites crushing sanctions, it frames the resulting economic ruin not as policy failure, but as a holy “Resistance” against the enemies of Islam. Every act of misrule carries Allah’s signature.
Khorramabad, Iran, January 8, 2026: Protesters raise the pre-1979 lion-and-sun flag, described as a symbol of secular restoration, atop a statue of the Ayatollah. (Source: Press Office of Reza Pahlavi)
In a secular dictatorship, citizens can hate the dictator while preserving their faith. The North Korean who despises Kim Jong-un can still pray. But in a theocracy, the oppressor and God speak with one voice. To oppose the oppressor is to oppose God. To want freedom is to reject divine authority.
The regime created conditions where, for many, opposing political authority became entangled with questioning religious authority.
The Psychology of Religious Rebellion
Jack Brehm’s reactance theory (1966) demonstrates that when people perceive threats to their freedom, they become motivated to restore it, often by embracing the forbidden alternative.22 Subsequent research has applied this specifically to religion. Roubroeks, Van Berkum, and Jonas (2020) found that restrictive religious regulations can trigger reactance that leads to both heresy (holding beliefs contrary to orthodoxy) and apostasy (renouncing religious affiliation entirely).23
The critical insight: In cases of psychological reactance, the emotional pushback against coercion often precedes the intellectual dismantling of the belief system.
The sequence is rarely a straight line, but the components are clear:
This third point is crucial. Iran’s internet users grew from 615,000 in 2000 to over 70 million today.24 Despite billions spent on censorship, officials admit 80–90% of Iranians use VPNs, which allow users to circumvent restrictions by making their traffic appear to originate in another country.25
For the intellectually curious, the internet offered arguments against Islamic theology that were previously banned. But for the average citizen, it offered something perhaps more powerful: validation. It showed them that their anger was shared. It broke through “pluralistic ignorance,” the state in which everyone privately rejects the norm but publicly conforms because each person thinks they are the only one.
Whether through deep study or simple emotional exhaustion, the result was the same: the breaking of the psychological bond between the citizen and the faith.
The Unintended OutcomeIran’s religious decline is among the fastest documented in modern history. Stolz et al. (2025) in Nature Communications established that Europe’s secular transition took approximately 250 years. Iran’s comparable shift from over 80% observing daily prayers in 1975 to 47% reporting lifetime deconversion by 2020 occurred in roughly 45 years. Pew’s global data shows Muslim retention rates averaging 99% across surveyed countries.26
However, Europe secularized without internet or satellite television. Iran’s shift occurred alongside a 90-fold increase in internet access. Theocracy may provide the motive for questioning imposed faith; technology provides the accelerant that compresses generational change into decades. Ex-Muslim testimonies, apostasy narratives, ordinary lives lived without faith—these demonstrated that abandoning religion was survivable. The forbidden became imaginable. Others found arguments that validated what they already felt. The reasoning matched the shape of their anger, and that was enough.
For forty-seven years, the Islamic Republic worked to manufacture belief. Mandatory religious education from childhood. State control of media. Morality police enforcing dress and behavior. Apostasy punishable by death. A constitution grounding all authority in God. They did not leave this to chance.
The data suggests it did not work.
Anyone following recent events in Minneapolis has likely noticed something strange. People watching the same videos, reading the same headlines, and reacting to the same street-level events often seem to be describing entirely different realities. Conversations quickly break down, not because people disagree about what should be done, but because they cannot even agree on what is happening. It’s as if people are watching two completely different movies on one screen.
The “two-movies-one-screen” concept was coined by Scott Adams, the creator of Dilbert turned political commentator, to describe radically different interpretations of the same political events. People with access to the same set of facts come away with completely different understandings of what is happening. In some cases, each side seems genuinely unaware that the other interpretation even exists.
This is not merely disagreement, and it goes beyond ordinary bias. It is also not quite what psychologists usually mean by cognitive dissonance. Cognitive dissonance, first described by Leon Festinger in the 1950s, occurs when people experience psychological discomfort from holding conflicting beliefs or encountering information that contradicts their existing views, and then attempt to reduce that discomfort through rationalization or reinterpretation of the facts. In cases like the Renee Good shooting in Minnesota, however, something else seems to be happening. So, what is going on?
From a psychological standpoint, this resembles dissociation more than cognitive dissonance. Dissociation refers to a class of mental processes in which certain thoughts, perceptions, or experiences are kept out of conscious awareness. As clinical psychologists have long noted, dissociation functions as a defensive mechanism, shielding the individual from information that is experienced as overwhelming or intolerable. The mind does not reject the data after evaluating it. It fails to perceive it in the first place.
The following is an attempt to provide a neutral description of the events, followed by two very different interpretations.
On January 7, 2026, in Minneapolis, Minnesota, 37-year-old Renee Nicole Good was fatally shot by an Immigration and Customs Enforcement (ICE) agent during an operation targeting undocumented immigrants for deportation. Good was a U.S. citizen and a mother of three from previous relationships, and she was present at the scene with her wife, Rebecca (Becca) Good.
Multiple videos from bystanders, body cameras, and agent phones capture the event, showing a chaotic scene lasting about three minutes.
ICE Agent’s Cellphone Video (Credit: Alpha News)
Renee Good was in her SUV, which was blocking or near the path of ICE vehicles during an arrest operation. Agents approached, giving conflicting commands: some ordered her to leave, while others demanded she exit the vehicle. One agent attempted to open her door and banged on the window.
Rebecca Good, Renee’s wife, was outside the vehicle filming and confronting agents.
At one point during the interaction, Renee’s wife urged her to “drive, baby, drive” as the situation escalated. Good maneuvered the vehicle forward and started to accelerate. The vehicle made contact with an ICE agent who was positioned in front; the agent fired through the windshield, striking her in the face and killing her.
Bystander Video (Credit: Nick Sortor)
According to official statements from ICE and the Department of Homeland Security (DHS), the shooting occurred after Good allegedly used her vehicle as a weapon, attempting to run over an agent who then fired in self-defense. Renee and Rebecca Good were part of “ICE Watch” groups monitoring, protesting, and interfering with ICE operations. The ICE agent who fatally shot Good was injured and hospitalized following a prior incident in June 2025, during which an undocumented immigrant with an open warrant for child sexual assault dragged him with his vehicle while attempting to flee arrest.
Bystander Video 2 (Credit: @Dana916 via X.com)
Progressive voices view Good’s killing as an example of ICE overreach, law enforcement brutality, and systemic abuse of power, especially against citizens exercising First Amendment rights. They emphasize that Renee was a “legal observer” and had a constitutional right to protest. They further note that Good was an unarmed American citizen on a public road who was fatally shot in the face and head by a masked federal agent. They also interpret the footage as showing Good attempting to navigate away from the scene rather than intentionally trying to harm the agent. They further warn against normalizing state killings, as in the statement by Rep. Alexandria Ocasio-Cortez (D), who responded to Vice President JD Vance’s defense of the ICE agent by calling it a “regime willing to kill its own citizens.” This sentiment is tied to broader concerns about police and ICE militarization against undocumented immigrants, and to the argument that even if Good erred (e.g., by not complying with the instructions of federal law enforcement officers), her mistake was not worth her life, and that society needs a higher bar for lethal force.
Conservative commentators frame the shooting as justified self-defense against anti-ICE radicals who disrupted lawful operations. They emphasize Renee’s alleged aggression and Rebecca’s role in escalating the situation by shouting “You wanna come at us? Go get yourself lunch, big boy,” portraying the couple as part of a coordinated harassment campaign rather than passive observers or demonstrators. They also argue that Good was an active participant and perpetrator obstructing enforcement of long-standing immigration law, someone attempting to flee the scene rather than simply a citizen attending a protest. They maintain that while the shooting was tragic, law enforcement (and citizens) can use lethal force if they reasonably believe they face imminent serious harm. Further, they draw the following distinction: debating whether the officer should or should not have fired is rational, but refusing to acknowledge that being struck or pushed by a vehicle is a basis for self-defense is not.
These conflicting media narratives matter because most people do not build their understanding of the world through direct experience. Our personal encounters are limited. The rest of our mental model is assembled from stories. Indeed, research in cognitive psychology and media studies consistently shows that humans rely heavily on narrative to organize information and assign meaning. In other words, we are not natural statisticians. As psychologists such as Jerome Bruner and Daniel Kahneman have shown, people reason intuitively through stories, examples, and emotionally salient cases, often treating mediated experience as a stand-in for reality itself. This is why propaganda is most effective when it does not look like propaganda.
Many people assume propaganda is something obvious that you notice and argue with. In reality, the most powerful propaganda works through repetition rather than persuasion. Social psychologists have documented what is known as the “illusory truth effect,” in which repeated statements are more likely to be judged as true, regardless of their accuracy. When a moral narrative is replayed often enough, it stops feeling like a claim and starts feeling like memory.
Consider the recurring portrayal of tech executives in films and television. A wealthy founder speaks in vague abstractions, dismisses ethical concerns, and pursues profit at the expense of ordinary people. The specifics vary, but the moral structure remains the same. Whether any individual depiction reflects the reality of modern technology firms is almost beside the point. After repeated exposure, viewers absorb not just a critique of corporate excess, but an intuitive framework for interpreting innovation, wealth, and motive. Repetition trains audiences to assign intent instantly and to stop questioning it.
This works because fiction bypasses our analytical defenses. Experimental research on narrative persuasion shows that people are less likely to counterargue when they are emotionally absorbed in a story. Psychologists refer to this as “transportation,” a state in which attention and emotion are captured by a narrative, making viewers more receptive to its implicit assumptions. We do not fact-check television dramas. We empathize with them. Their moral premises are absorbed quietly as background knowledge.
For most of us, the names Jeff Bezos, Elon Musk, Mark Zuckerberg, or Peter Thiel evoke an immediate moral impression. But how did that impression form? Have you, for example, ever heard them speak at length? Do you know how they run their companies? Do you understand what motivates them? Do they have a good sense of humor?
There is also a structural problem with storytelling itself. Everyday reality, especially everyday crime, is usually chaotic, senseless, and narratively unsatisfying. Criminologists have long observed that much violent crime lacks coherent motives or moral meaning. Writers, understandably, select stories that feel legible, purposeful, and emotionally engaging. But those selections shape our expectations of reality and thus our perception, and make us see otherwise messy events as morally clearer than they actually are.
The result is a moral universe in which certain kinds of harm are treated as profound moral ruptures, while other kinds are treated as routine or unfortunate facts of life. Violence committed by some characters is framed as a social crisis demanding urgent moral response. Similar violence committed by others is portrayed as tragic but unremarkable, something to be managed rather than interrogated.
A clear example appears in the pilot of The Pitt. A dramatic subway assault is immediately interpreted through a moral lens before basic facts are known. The graphic depiction gives viewers the feeling that they are seeing something raw and unfiltered. At the same time, the narrative structure carefully guides inference and sympathy. In the same episode, a different shooting is treated as mundane and procedural. It carries little moral weight and prompts no larger reflection.
The show is not depicting reality. It is presenting a moral map.
This does not require a conspiracy, and it does not require malicious intent. Many writers openly acknowledge that fiction shapes social norms and expectations. Cultural theorists from Walter Lippmann to contemporary media scholars have noted that narratives function as “pictures in our heads,” guiding perception long before conscious judgment enters the picture. What is new is the growing cultural distance between those producing these narratives and the audiences consuming them, combined with a strong confidence that the moral direction of society is already settled.
When this kind of storytelling dominates, it does more than persuade. It trains perception itself. Viewers learn what to notice, what to ignore, and which conclusions should feel obvious. Over time, alternative interpretations stop feeling like interpretations at all. They begin to look irrational or delusional.
This is how “the other movie” disappears.
♦ ♦ ♦
A functioning society does not require agreement on every issue. It does require a shared reality. When large groups of people cannot even see what others are responding to, debate becomes impossible. You cannot resolve disagreements if one side experiences the other as hallucinating.
The answer is not counter-propaganda, and it is not simply more facts. Research on motivated reasoning shows that facts alone rarely change minds when perceptions themselves are structured by narrative. What is required instead is closer attention to how stories shape perception. What they highlight. What they omit. And how repetition turns fiction into intuition.
Was Renee Good heroically intervening in an unlawful abduction and a victim of reckless police violence? Or was she someone who interfered with a lawful enforcement action and nearly ran over an officer? Each interpretation feels obvious to those who hold it, and nearly invisible to those who do not. If you analyze both long enough, you might start to see the narratives and the chain of events that lead one to interpret this particular incident in a particular way after watching the exact same three minutes of video.
Skepticism, properly understood, is not just about questioning explicit claims. It is about examining why certain narratives feel natural, why others feel unthinkable, and why some movies seem to be playing on the screen while others are never seen at all.
Sixth-century Byzantium was a city divided by race hatred so intense that people viciously attacked each other, not only in the streets but also in churches. The inscription on an ancient tablet conveys the raw animus spawned by color differences: “Bind them! … Destroy them! … Kill them!” The historian Procopius, who witnessed this race antagonism firsthand, called it a “disease of the soul,” and marveled at its irrational intensity:
They fight against their opponents knowing not for what end they imperil themselves … So there grows up in them against their fellow men a hostility which has no cause, and at no time does it cease or disappear, for it gives place, neither to the ties of marriage nor of relationship nor of friendship.1
This hostility sparked multiple violent clashes and riots, culminating in the Nika Riot of 532 CE, the biggest race riot of all time: 30,000 people perished, and the greatest city of antiquity was reduced to smoldering ruins.
But the Nika Riot wasn’t the sort of race riot you might imagine. The race in question was the chariot race. The color division wasn’t between black and white but between blue and green—the colors of the two main chariot-racing teams. The teams’ supporters, who were referred to as the Blue and Green “factions,” proudly wore their team colors, not just in the hippodrome but also around town. To help distinguish themselves, many Blues also sported distinctive mullet hairstyles, like those of 1970s rock stars. Both Blues and Greens were fiercely loyal to their factions and their colors. The chariots and drivers were a secondary concern; the historian Pliny asserted that if the drivers were to swap colors in the middle of a race, the factions would immediately switch their allegiances accordingly.
The race faction rivalry had existed for a long time before the Nika Riot, yet Procopius writes that it had only become bitter and violent in “comparatively recent times.” So, what caused this trivial division over horse-racing teams to turn so deadly? In short, it was the Byzantine version of “identity politics.”
Detail of “A Roman Chariot Race,” depicted by Alexander von Wagner, circa 1882. During the Nika Riots that took place against Byzantine Emperor Justinian I in Constantinople over the course of a week in 532 C.E., tens of thousands of people lost their lives and half the city was burned to the ground. It all started over a chariot race. (Image courtesy of Manchester Art Gallery)
Modern sociological research helps explain the phenomenon. Decades of studies have demonstrated the dangerous power of the human tribal instinct. Surprisingly, it doesn’t require “primordial” ethnic or tribal distinctions to engage that impulse. Minor differences are often sufficient to elicit acute ingroup-outgroup discrimination. The psychologist Henri Tajfel demonstrated this in a landmark series of studies to determine how minor those differences can be. In each successive study, Tajfel divided test subjects into groups according to increasingly trivial criteria, such as whether they preferred Klee or Kandinsky paintings or underestimated or overestimated the number of dots on a page. The results were as intriguing as they were disturbing: even the most trivial groupings induced discrimination.2, 3
However, the most significant and unexpected discovery was that simply telling subjects that they belonged to a group induced discrimination, even when the grouping was completely random. Upon learning they officially belonged to a group, the subjects reflexively adopted an us-versus-them, zero-sum game attitude toward members of other groups. Many other researchers have conducted related experiments with similar results: a government or an authority (like a researcher) designating group distinctions is, by itself, sufficient to spur contentious group rivalry. When group rewards are at stake, that rivalry is magnified and readily turns malign.
The Robbers Cave Experiment, conducted in 1954 by social psychologists Muzafer and Carolyn Sherif, investigated intergroup conflict and cooperation. The study involved 22 eleven-year-old boys at a summer camp in Robbers Cave State Park, Oklahoma. (Photo: The University of Akron)
The extent to which authority-defined groups and competition for group benefits can foment nasty factionalism was demonstrated in the famous 1954 Robbers Cave experiment, in which researchers brought boys with identical socioeconomic and ethnic backgrounds to a summer camp, dividing them randomly into two official groups. They initially kept the two groups separate and encouraged them to bond through various group activities. The boys, who had not known each other before, developed strong group cohesion and a sense of shared identity. The researchers then pitted the groups against each other in contests for group rewards to see if inter-group hostility would arise. The group antagonism escalated far beyond their expectations. The two groups eventually burned each other’s flags and clothing, trashed each other’s cabins, and collected rocks to hurl at each other. Camp staff had to intervene repeatedly to break up brutal fights. The mounting hostility and risk of violence induced the researchers to abort that phase of the study.4 Other researchers have replicated this experiment: one follow-up study resulted in knife fights, and a researcher was so traumatized he had to be hospitalized for a week.5, 6
How does this apply to the Blues and Greens? As in the Tajfel experiments, the Byzantine race factions had formed a group division based on a trivial distinction—the preference for a color and a horse racing team. However, for many years, the rivalry remained relatively benign. This was likely because the emperors had long played down the factional distinction and maintained a tradition of race neutrality: if they favored a faction, they avoided openly showing it. That tradition ended a few years before the Nika Riot, when emperors began openly supporting one faction or the other. More importantly, they extended their support outside the hippodrome with official policies that benefited members of their preferred faction. The emperors Marcian, Anastasius, and Justinian adopted official employment preferences, allocating positions to members of their favored faction and blocking the other faction from coveted jobs. To cast it in modern terms, they began a program of “race-based” affirmative action and identity politics.7, 8
Official recognition of the group distinction enhanced the us-versus-them sense of difference between the factions, and the affirmative action scheme turned this sense of difference into bitter antagonism, which eventually exploded in violence. Procopius, our primary contemporary source, placed the blame for the mounting antagonism and the riots squarely on Justinian’s program of identity politics. It had not only promoted an us-versus-them mindset in the factions, it also incited vicious enmity between them, turning a trivial color preference and sporting rivalry into a deadly “race war.”
Considering how identity politics could elicit violence from randomly assembled groups like the Blues and Greens, it is easy to imagine how disastrous identity politics can be when applied to groups that already have some long-standing, historic sense of difference. Indeed, there have been numerous instances of this in history, most ending tragically. For example, Tutsis and Hutus enjoyed centuries of relatively peaceful coexistence in Rwanda up until Belgian colonialists arrived; when the Belgians issued identity cards distinguishing the two groups and instituted affirmative action, it ossified a formerly porous group distinction and infused it with bitter rivalry, preparing the path to genocide. Likewise, when Yugoslavia instituted its “nationality key” system, with educational and employment quotas for the country’s constituent ethnic groups, it hardened group distinctions, pitting the groups against each other and setting the stage for genocide in the Balkans. And, when the Sri Lankan government opted for identity politics and affirmative action, it spawned violent conflict and genocide that destroyed a once peaceful and prosperous country. This last example—Sri Lanka—is so illustrative of the dangers of identity politics that we’ll examine it in more detail.
Sri Lanka: How Identity Politics Destroyed Paradise
She is a fabulous isle just south of India’s teeming shore, land of paradise … with a proud and democratic people … Her flag is the flag of freedom, her citizens are dedicated to the preservation of that freedom … Her school system is as progressive as it is democratic. —1954 TWA TOURIST VIDEO
Sri Lanka is an island off India’s southeast coast blessed with copious amounts of arable land and natural resources. It has an ethnically diverse population, with the two main groups being Sinhalese (75 percent) and Tamils (15 percent). Before Sri Lanka’s independence in 1948, there was a long history of harmony between these groups. That history goes back at least to the fourteenth century when the Arab traveler Ibn Battuta observed how the different groups “show respect” for each other and “harbor no suspicions.” On the eve of Sri Lanka’s independence, a British governor lauded the “large measure of fellowship and understanding” that prevailed, and a British soldiers’ guide noted that “there are no historic antagonisms to overcome.” With quiescent communal relations, abundant natural resources, and one of the highest literacy rates in the developing world, newly independent Sri Lanka was poised to flourish and prosper. Nobody doubted it would outperform countries like South Korea and Singapore, with the British governor dubbing it “the best bet in Asia.”
It turned out to be a very poor bet. A few years after Sri Lanka’s independence, violent communal conflict erupted, culminating in a protracted civil war and genocide. By the time it ended, over a million people had been displaced or killed. Sri Lanka’s per capita GDP, which was on par with South Korea’s in 1960, was only one-tenth of it by 2009. As in sixth-century Byzantium, identity politics precipitated the calamity.
Turning a Disparity into a Disaster
At the end of British colonial rule in Sri Lanka, there was significant educational and income disparity between Sinhalese and Tamils. This arose by happenstance rather than because of discriminatory policy. The island’s north, where Tamils predominate, is arid and poor in resources. Because of this, the Tamils devoted their productive energy toward developing human capital, focusing on education and cultivating professional skills. This focus was abetted by American missionaries, who set up schools in the north, providing top-notch English-language education, particularly in math and the physical sciences. As a result, Tamils accounted for an outsized proportion of the better-educated people on the island, particularly in higher-paying fields like engineering and medicine.
Because of the Tamils’ superior education, the British colonial administration hired them disproportionately compared to the Sinhalese. In 1948, for example, Tamils accounted for 40 percent of the clerical workers employed by the colonial government, greatly outstripping their 15 percent share of the overall population. This unequal outcome had nothing to do with overt discrimination against the Sinhalese; it merely reflected the different levels and types of education achieved by the different ethnic groups.
When Sri Lanka gained independence, it passed a constitution that prohibited discrimination based on ethnicity. But a few years after that, an opportunist politician, S.W.R.D. Bandaranaike, figured he could advance his career by cynically appealing to identity politics, stoking Sinhalese envy over the Tamils’ over-representation in higher education and government. He launched a divisive campaign to eliminate the disparity, which spurred the majority Sinhalese to elect him. After his election in 1956, Bandaranaike passed a law that changed the official language from English to Sinhala and consigned students to separate Tamil and Sinhalese education “streams” rather than having them all learn English. As one Sinhalese journalist wrote, this divided Sri Lanka, depriving it of its “link language”:
That began a great divide that has widened over the years. Children now go to segregated schools or study in separate streams in the same school. They don’t get to know other people of their own age group unless they meet them outside.
Beyond eliminating Sri Lanka’s common “link language,” this law also functioned as a de facto affirmative action program for Sinhalese. Tamils, who spoke Tamil at home and received their higher education in English, could not gain Sinhala proficiency quickly enough to meet the government’s requirement. So, many of them lost their jobs to Sinhalese. For example, the percentage of Tamils employed in government administrative services dropped dramatically: from 30 percent in 1956 to five percent in 1970; the percentage in the armed forces dropped from 40 percent to one percent.
As has happened in many other countries, Sri Lanka’s identity politics went hand-in-hand with expanded government. Sinhalese politicians made it clear: government would be the tool to redress perceived ethnic disparities. It would allocate more jobs and resources, and that allocation would be based on ethnicity. As one historian writes: “a growing perception of the state as bestowing public goods selectively began to emerge, challenging previous views and breeding mistrust between ethnic communities.” Tamils responded to this by launching a non-violent resistance campaign. With ethnic dividing lines now clearly drawn, mobs of Sinhalese staged anti-Tamil counter-demonstrations and then riots in which hundreds—mostly Tamils—were killed. The us-versus-them mentality was setting in.
Bandaranaike was eventually assassinated by radicals within his own movement. But his widow, Sirimavo, who was subsequently elected prime minister, resolved to maintain his top priorities—expansive government and identity politics. She nationalized numerous industries and launched development projects that were directed by ethnic and political considerations rather than actual need. She also removed the constitutional ban on ethnic discrimination so that she could aggressively expand affirmative action. The existing policies had already cost so many Tamils their jobs that they were now under-represented in government. However, they remained over-represented in higher education, particularly in the sciences, a disparity that Sirimavo and her political allies resolved to eliminate. In a scheme that American universities like Harvard would later emulate, the Sri Lankan universities began to reject high-scoring Tamil applicants in favor of manifestly less-qualified Sinhalese with vastly lower test scores.
Just like Justinian’s “race” preferences, the Sri Lankan affirmative action program exacerbated us-versus-them attitudes, deepening the group divide and spurring enmity between groups. As one Sri Lankan observed:
Identity was never a question for thousands of years. But now, here, for some reason, it is different … Friends that I grew up with, [messed around] with, got drunk with, now see an essential difference between us just for the fact of their ethnic identity. And there are no obvious differences at all, no matter what they say. I point to pictures in the newspapers and ask them to tell me who is Sinhalese and who is Tamil, and they simply can’t tell the difference. This identity is a fiction, I tell you, but a deadly one.9
The lessons of the various affirmative action programs in Sri Lanka were clear to everyone: individuals’ access to education and government employment would be determined by ethnic group membership rather than individual merit, and political power would determine how much each group got. If you wanted your share, you needed to mobilize as a group and acquire and maintain political power at any cost. The divisive effects of these lessons would be catastrophic.
The realization that they would forever be at the mercy of an ethnic spoils system, along with the violent attacks perpetrated against them, induced the Tamils to form resistance organizations—most notably, the Liberation Tigers of Tamil Eelam (LTTE). The LTTE attacked both Sri Lankan government forces and individual Sinhalese, initiating a deadly spiral of attacks and reprisals, with both sides committing the sort of atrocities that are tragically common in ethnic conflicts: burning people alive, torture, mass killings, and so on. Over the following decades, the conflict continued to fester, periodically escalating into outright civil war. Ultimately, over a million people would be killed or displaced.
The timeline of the Sri Lankan conflict establishes how communal violence originated from identity politics rather than the underlying income and occupational disparity between the groups. That disparity reached its apex at the beginning of the twentieth century. Yet, there was no communal violence at that point or during the next half-century. It was only after the introduction of affirmative action programs that ethnic violence erupted. The deadliest attacks on Tamils occurred an entire decade after those programs had enabled Sinhalese to surpass Tamils in both income and education. As Thomas Sowell observed: “It was not the disparities which led to intergroup violence but the politicizing of those disparities and the promotion of group identity politics.”10
Consequences of Identity Politics in Sri Lanka and Beyond
Sri Lanka’s experience highlights some underappreciated consequences of identity politics. Most notably, one would expect that affirmative action programs would have warmed the feelings of the Sinhalese toward the Tamils. After all, they were receiving preferences for jobs and education at the Tamils’ expense. Yet, precisely the opposite happened: as the affirmative action programs were implemented, Sinhalese animus toward the Tamils progressively worsened. This pattern has been repeated in nearly all the countries where affirmative action has been implemented: affirmative action programs have an invidious effect on the group that benefits, imbuing them with a sense of insecurity and defensiveness over the benefits they receive. That group tends to justify the indefinite continuation of these benefits by claiming that the other group continues to enjoy “privilege”—or by demonizing them and claiming that they are “systemically” advantaged. Thus, the beneficiaries of affirmative action are often the ones to initiate hostilities. In Rwanda, for example, it was Hutu affirmative action beneficiaries who perpetrated the violence, not Tutsis. The situation in Sri Lanka was analogous, with Sinhalese instigating all of the initial riots and pogroms against the Tamils.
One knock-on effect of identity politics in Sri Lanka was that it ultimately benefited some of the wealthiest and most privileged people in the country. The government enacted several affirmative action schemes, each increasingly contrived to benefit well-heeled Sinhalese. The last of these implemented a regional quota system that was devised so that aristocratic Sinhalese living in the Kandy region would compete for spots against poor, undereducated Tamil farm workers. As one Tamil who lost his spot in engineering wrote: “They effectively claimed that the son of a Sinhalese minister in an elite Colombo school was disadvantaged vis-à-vis a Tamil tea plucker’s son.” This follows the pattern of many other affirmative action programs around the world: the greatest beneficiaries are typically the most politically connected (and privileged) individuals within the group receiving affirmative action. They are often wealthier and more privileged than many of the individuals against whom affirmative action is directed. This has been well documented in India, which has extensive data on the subgroups that benefit from its affirmative action programs.
One unexpected consequence of identity politics in Sri Lanka was rampant corruption. When Sri Lanka became independent, its government was widely deemed one of the least corrupt in the developing world. However, as affirmative action programs were implemented and expanded, corruption increased in lockstep. The adoption of affirmative action set a paradigm that pervaded the government: whoever held power could steer government resources to whomever they deemed “underserved.” A baleful side effect of ethnicity-based distortion of government policy is that it undermines and erodes more general standards of government integrity and transparency, legitimating a paradigm of corruption: if it is acceptable to direct policy for the benefit of an ethnic group, is it not also acceptable to do so for the benefit of a clan or an individual? It is a small step to go from one to the other, a step that many Sri Lankan leaders and bureaucrats took. Today, Sri Lanka’s government, which once rivaled European governments in transparency, remains highly corrupt. This pattern has been repeated in other countries. For example, after the Federation of Malaysia expelled Singapore, it adopted an extensive affirmative action program, whereas Singapore prohibited ethnic preferences. Malaysia subsequently experienced proliferating corruption, whereas Singapore is one of the least corrupt countries in the world today.
Economic divergence between Singapore and Sri Lanka’s GDP per capita, 1960–2023 (Source: Our World in Data)
Perhaps the most profound consequence of identity politics in Sri Lanka was that it ultimately made everybody in the country worse off. After World War II, per capita income in Sri Lanka and Singapore was nearly identical. But after it abandoned its shared “link language” and adopted ethnically divisive policies, Sri Lanka was plagued by violent conflict and economic underperformance; today, one Singaporean earns more than seven Sri Lankans put together. All the group preferences devised to elevate Sinhalese brought down everyone in the country—Tamil, Sinhalese, and all the other groups alike. Lee Kuan Yew, Singapore’s “founding father,” attributed that failure to Sri Lanka’s divisive policies, saying that if Singapore had implemented similar policies, “we would have perished politically and economically.” There are echoes of this in other countries that have implemented identity politics. When I visited Rwanda, I asked Rwandans of various backgrounds whether they thought distinguishing people by race or ethnicity ever helped anyone in their country. There was complete unanimity on this point: after they got over pondering why anyone would ask such a naïve question, they made it very clear that distinguishing people by group made everyone, whether Hutu or Tutsi, distinctly worse off. In the Balkans, I got similar answers from Bosnians, Croatians, Serbians, and Kosovars.
The Perilous Path of Identity Politics
Decades of sociological research and millennia of history have demonstrated that the tribal instinct is both powerful and hardwired into human behavior. As political scientist Harold Isaacs writes:
If anything emerges plainly from our long look at the nature and functioning of basic group identity, it is the fact that the we-they syndrome is built in. It does not merely distinguish, it divides … the normal responses run from … indifference to depreciation, to contempt, to victimization, and, not at all seldom, to slaughter.11
The history of Byzantium and Sri Lanka demonstrates that this tribal instinct is extremely easy to provoke. All it takes is official recognition of group distinctions and some group preferences to balkanize people into bitterly antagonistic groups, and the consequences are potentially dire. Even if a society that is balkanized in this way avoids violent conflict, it is still likely to be plagued by all the concomitants of social fractionalization: higher corruption, lower social trust, and abysmal economic performance.
It is therefore troubling to see the U.S. government, institutions, and society adopt Sri Lankan-style policies that emphasize group distinctions. As the U.S. continues down the perilous path of identity politics, it is unlikely to devolve into another Bosnia or Sri Lanka overnight. But the example of Sri Lanka is a dire warning: a country that was once renowned for its communal harmony quickly descended into violence and economic failure—all because it sought to redress group disparities with identity politics.
Surveys and statistics are now flashing warning signs in the United States. A Gallup poll found that while 70 percent of Black Americans believed that race relations in the United States were either good or very good in 2001, only 33 percent did in 2021.12 Other statistics have shown that hate crimes have been on the rise over that time.13 In the last year, we have also seen the spectacle of angry anti-Israel protesters hammering on the doors of a college hall, terrorizing the Jewish students locked inside, and a Stanford professor telling Jewish students to stand in the corner of a classroom. While identity politics has increasingly directed public policy and institutions, relations between social groups have deteriorated rapidly. This, and a lot of history, suggests it’s time for a different approach.
William S. Burroughs was one of the most controversial literary figures of the early 1960s, an American postmodern author and visual artist who was considered one of the key figures of the Beat Generation that influenced pop culture (he was friends with Allen Ginsberg and Jack Kerouac). He also became preoccupied with an unusual experiment: the cut-up, a technique in which a written text is cut up and rearranged to create a new text. But this was no mere artistic preoccupation. Burroughs, author of the notorious Naked Lunch (the subject of a major literary censorship case when its publisher was sued for violating a Massachusetts obscenity law), claimed to have found a sort of window into the future, a time warp on paper and on tape.
Burroughs got the cut-up idea in 1959 from his close friend Brion Gysin. Burroughs remembered, “It was simply of course applying the montage method, which was really rather old hat in painting at that time, to writing. As Brion said, writing is fifty years behind painting.”1 Burroughs traced the cut-up back to an incident from the Dada movement of the 1920s, when Tristan Tzara announced his intention to create a poem on the spot by pulling words out of a hat.2
For Burroughs, however, the cut-ups were something more than a creative writing technique. He traced this supposed revelation back to what he described as a Time magazine article by the oil industrialist John Paul Getty. (Burroughs may have been referring to a February 1958 Time cover story on Getty; Getty did not write the article.) Upon cutting up the article, Burroughs created the following phrase: “It’s a bad thing to sue your own father.” When Getty was in fact sued by one of his sons, Burroughs came to believe that his cut-up had foretold the future:
Perhaps events are pre-written and prerecorded and when you cut word lines the future leaks out. I have seen enough examples to convince me that the cut-ups are a basic key to the nature and function of words.3
Years later, in Howard Brookner’s Burroughs, the fedora-clad, now-aged author explains to his poet friend Allen Ginsberg:
Every particle of this universe contains the whole of the universe. You yourself have the whole of the universe. If I cut you up in a certain way I cut up the universe … So in my cut-ups I was attempting to tamper with the basic pre-recordings. But I think I have succeeded to some modest extent.
At this, Ginsberg could only nod and utter a number of noncommittal “um hmms,” adding later: “Burroughs was, in cutting up, creating gaps in space and time, as Cezanne, or as meditation does.” Burroughs also cited a dubious summary of Wittgenstein’s Paradox: “This is Wittgenstein: If you have a prerecorded universe, in which everything is prerecorded, the only thing that is not prerecorded are the prerecordings themselves.”4 The actual Wittgenstein’s Paradox holds that “no course of action could be determined by a rule, because any course of action can be made out to accord with the rule.”
Ludwig Wittgenstein was a philosopher and language theorist, but there is no reason to believe that he thought of the universe as a giant tape recording. Rather, Burroughs’s notion of human consciousness was clearly influenced by L. Ron Hubbard’s engram theory, itself reliant on Freudian psychoanalytic theory with its emphasis on trauma and repressed memory. Drawing, it seems, on the medical theory of the memory trace, Hubbard described engrams as imprints of unpleasant experiences on the protoplasm of living beings.
Burroughs went so far as to describe the cut-up method as “streamlining Dianetics therapy system.” Proposing that his tape method could be used for therapy, he went on to suggest wiping “traumatic material” off a magnetic tape.5 He even hinted that Hubbard had borrowed the tape recording idea from him! His friend Ian Sommerville sold Hubbard two recorders, and Burroughs seemed to find it significant that Sommerville had become sick soon after, as if Hubbard were using an insidious black magic.6 Burroughs began to see the Scientology system as a form of brainwashing, even as he was increasingly convinced of Hubbard’s theories.
Moving on to the world of cinema, Burroughs made two cut-up films, Towers Open Fire in 1963 and The Cut-Ups in 1966, with the help of producer Antony Balch. And, in 1965, Burroughs proposed to Balch “a new type of science fiction film,”7 one that would expose “the story of Scientology and their attempt to take over this planet.”8 The film would explain that “vulgar stupid second rate people” had taken over the planet by means of a “virus parasite.”9
Burroughs brazenly went ahead with his cut-up experiment, even though it might have serious ramifications for the universe: “Could you, by cutting up … cut and nullify the pre-recordings of your own future? Could the whole prerecorded future of the universe be prerecorded or altered? I don’t know. Let’s see.” Perhaps he was thinking of the scientists at Los Alamos, who exploded the first atomic bomb without being completely sure of the ramifications.10
Nor was Burroughs’s “sample operation” in influencing the universe an especially ethical exercise. In fall 1972 the author took issue with the Moka, “London’s first espresso bar,” leading to a vengeful exercise with overtones of Maya Deren, the experimental filmmaker who was also a voodoo priestess and flinger of malicious hexes.
Burroughs’s grudge against the Moka arose over what he described as “unprovoked discourtesy and poisonous cheesecake.” He took a movie camera and began filming. Within two months, the bar was closed. Burroughs recommended using this exercise to “discommode or destroy” any business you did not particularly like. He did not consider that the bar might have shut down for some unrelated reason. Maybe word got out about the bad cheesecake.11 Some of the author’s magical thinking in this period may have been a result of his reliance on drugs, but Burroughs had believed in curses since childhood.12
It is perhaps not a surprise that some thought the author’s new method was a prank. At a 1962 Edinburgh festival, Burroughs spoke about his new technique, which he was then calling the fold-in method. Members of the crowd thought they were being pranked, causing an Indian author to ask, “Are you being serious?” Burroughs insisted that he was.13
Burroughs presented a summary of his method to a gathering of students at Colorado’s Naropa Institute in 1976, and part of this lecture can be heard on the record Break Through in Grey Room. When Burroughs describes the revelatory Getty cut-up, laughter can be heard from the audience. Perhaps sensing some skepticism, Burroughs insists on his innocence in constructing the Getty rewording: “I mean, it’s purely extraneous information to me. [A woman can be heard laughing.] I had nothing to gain on either side. We had no explanation for this at the time, it’s just suggesting, perhaps, that when you cut into the present the future leaks out.”14
Burroughs may have been a bit disingenuous in telling the Naropa students he had no relationship to the wealthy Getty family. In the mid-1960s, in fact, through the art dealer Robert Fraser, Burroughs mingled with John Paul Getty Jr.15 Then, Burroughs stayed at a flat owned by art dealer Bill Willis from March to July 1967, where he often saw the likes of Getty, Jr.16
Admittedly this would have been later than Burroughs’s initial Getty cut-up (apparently in 1959, when Burroughs first became immersed in the whole cut-up process). But Burroughs may have been acquainted with members of the Getty circle before he actually met the Getty family. Moreover, we are relying on a version of events that Burroughs publicly recounted in Daniel Odier’s The Job and again in 1976, and Burroughs’s own perception is a dubious foundation. In the 1976 Naropa lecture, Burroughs claims the lawsuit occurred a year after his cut-up,17 while in The Job he claims there was a three-year gap. Also, in The Job he seems to garble matters by conflating the magazine title—Time—with the name of Getty’s company—Tidewater.18 I have not found any record of Getty being sued by one of his sons during the time period described.
Burroughs’s literary acquaintances were not impressed to see the author seemingly risking his (still quite tenuous) literary reputation on an obsession like this. Samuel Beckett was appalled at the notion of using the words of other writers and said so to Burroughs directly: “That’s not writing. It’s plumbing.”19 The poet Gregory Corso told Burroughs the cut-up method would quickly become “redundant.”20 Novelist Paul Bowles felt the method would “alienate the reader.”21 Norman Mailer was the most prominent literary figure to champion Burroughs’s work to the American mainstream, and he must have been let down to see Burroughs abandoning a major writing career to get hung up on something Mailer probably considered a trivial sidetrack. To Mailer, the cut-up experiments were a mere “recording,” a distraction from the art of fiction.22 Jennie Skerl and Robin Lydenberg note that “positive assessments of Burroughs’s cut-ups were rare … most saw cut-ups as boring or repellent.”23
Nevertheless, Burroughs produced his “cut-up trilogy”: The Soft Machine (1961), The Ticket That Exploded (1962), and Nova Express (1964), although none sold as well as Naked Lunch. Biographer Ted Morgan calls them “inaccessible to the general reader.”24 The impenetrability of Burroughs’s cut-ups added to his reputation as a “difficult” author. Even Burroughs’s off-and-on friend Timothy Leary asked, rhetorically, “Do you actually know anyone who has finished an entire book by Bill Burroughs?”25
Burroughs was greatly impressed by the 1971 English-language publication of Konstantin Raudive’s Breakthrough: An Amazing Experiment in Electronic Communication with the Dead, which popularized what is known today as EVP (Electronic Voice Phenomenon), a widely discredited phenomenon that purports to find hidden messages in recordings of background noise, in recordings played backwards, in the random static between radio stations, and in other low-information sources.
Raudive believed these were the voices of the dead. Burroughs offered his own theory in keeping with his cut-up cosmology, namely that the entire universe was a vast playback device, something akin to a tape recording. Inspired by Raudive (and no doubt, Hubbard), Burroughs boldly rejected the precepts of modern psychology. People suffering from schizophrenia were not experiencing hallucinations; they were “tuning in to an intergalactic network of voices.”26
If we look at Burroughs’s supposed predictive phrases, we see a lot of what can only be called “reaching” or grasping at straws. In 1964 Burroughs came up with the phrase, “And here is a horrid air conditioner.” Ten years later, he “moved into a loft with a broken air conditioner.”27 There is nothing mysterious about having an air conditioner break down. If anything, Burroughs was lucky if he went ten years without a broken air conditioner.
Then there was this cryptic recorded query of Raudive’s: “Are you without jewels?” To Burroughs, this must refer to lasers, “which are made with jewels.” And another especially absurd quote from Raudive’s recordings: “You belong to the cucumbers?” Burroughs had read that “the pickle factory” was a slang term for the CIA, so the recording seemed to be an obvious CIA reference. He read this in either Time or Newsweek. For an icon of bohemian literature, one could argue that Burroughs relied an awful lot on the mainstream media for his prognostications.28 But how were researchers like Raudive and Burroughs tapping into the playback of the universe? Burroughs himself asked this question:
Now how random is random? We know so much that we don’t consciously know that perhaps the cut-in was not random. The operator at some level knew just where he was cutting in. As you know exactly on some level exactly where you were and what you were doing ten years ago at this particular time.29
Burroughs was admitting that the cutter was influencing the cut-up, but he believed this was because the cutter was unconsciously tuned in to the future. A simpler explanation would be that Burroughs convinced himself that he was doing random work while he was in fact cutting together semiconscious rephrasings. For instance, he may have heard a rumor from one of his monied acquaintances that one of Getty’s sons was considering a legal action well before actually suing.
If the experimenter (i.e., Burroughs, or Gysin, or Raudive) is unconsciously influencing the experiment, then what we have is a new version of the Ouija board with its self-guided planchette—a device whose movements and messages are created by users who come to believe they are receiving messages from a spirit or other mysterious entity when, in fact, they are moving the planchette. This is known as the ideomotor response.
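A quick simulation makes the statistical point concrete. The sketch below is a minimal, purely illustrative toy in Python: the source text is invented, and the helper names (cut_up, near) are hypothetical, not anything Burroughs or Raudive used. It shuffles short fragments of a passage a thousand times and counts how often two loaded words land next to each other; with enough shuffles, “prophetic” juxtapositions turn up by chance alone, no tuning in to the future required.

```python
import random

# Toy illustration: shuffle short fragments of an invented source text many
# times and count how often two loaded words end up near each other.
# The point is statistical, not literary: repeated random cut-ups are
# practically guaranteed to produce some suggestive phrase eventually.
SOURCE = ("a bad year for the oil business his sons and their lawyers "
          "a thing to sue over your own father built the fortune").split()

def cut_up(words, fragment_len=3):
    """Cut the word list into short fragments, shuffle them, and rejoin."""
    fragments = [words[i:i + fragment_len]
                 for i in range(0, len(words), fragment_len)]
    random.shuffle(fragments)
    return " ".join(w for frag in fragments for w in frag)

def near(text, a, b, window=3):
    """True if word a appears within `window` words of word b."""
    ws = text.split()
    return any(w == a and b in ws[max(0, i - window): i + window + 1]
               for i, w in enumerate(ws))

random.seed(0)  # fixed seed so the demo is reproducible
samples = [cut_up(SOURCE) for _ in range(1000)]
hits = sum(near(s, "sue", "father") for s in samples)
print(f"{hits} of {len(samples)} random cut-ups juxtapose 'sue' and 'father'")
```

Run repeatedly with different seeds, the count varies but stays well above zero: a reader primed to find meaning will always have raw material to work with.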
It is worth noting that in this lecture Burroughs refers to a number of concepts that are considered dubious today, such as repressed memories and the reliability of eyewitness accounts of events. For instance, he discusses “freaks,” seemingly referring to individuals with alleged eidetic or “photographic” memory. Perhaps he was thinking of his late friend Jack Kerouac, who was known by some in Lowell, Massachusetts, as “Memory Babe” for his purportedly freakish powers of recall.
Burroughs’s countercultural reputation grew through the 1970s until his death in 1997. But his cut-ups don’t seem to have received much attention from the parapsychological community, perhaps because he was so preoccupied with now-dated media and technology: newspapers, reel-to-reel recordings, and 8mm film. His metaphysical notion of the universe as a “playback” machine seems dated next to the trendier notion of the universe as a computer matrix.
William Burroughs was one of the most fascinating (and darkly funny) literary figures of the twentieth century, but that doesn’t make him a scientist. There is no evidence to support the notion that anyone can foretell the future by cutting up newspapers, books, or film footage.
The myths and misconceptions surrounding blood donation and why you might consider donating.
Merriam-Webster’s Dictionary announced that 2022 saw a 1740 percent increase in searches for gaslighting “with high interest throughout the year.” Merriam-Webster refines the term:
The idea of a deliberate conspiracy to mislead has made gaslighting useful in describing lies that are part of a larger plan. Unlike lying, which tends to be between individuals, and fraud, which tends to involve organizations, gaslighting applies in both personal and political contexts.1
The term “gaslighting” entered the popular consciousness through a 1944 film, the American psychological thriller Gaslight, in which a husband wants to make his newlywed wife lose her mind to have her locked up in an asylum. His agenda is to steal jewels that he knows are hidden in her late aunt’s house where they are living. The movie’s name is symbolic of the many manipulations the husband undertakes to gaslight his wife into believing she’s insane.
The film is set in London in the late nineteenth century when lamps were fueled by gas. The wife notices that their lamps randomly go dim. One way the husband destabilizes her is by denying that the gaslights are indeed dimming. It really is such a small manipulation. It’s so minor that you might not make much of it. The husband has been showering his new wife with adoration—referred to in abusive relationships as “love bombing”—making it unlikely for her to think he’s being deceptive. When the wife is told that the gaslights are not dimming, she chooses to believe her devoted husband and doubt her own perceptions. This is the beginning of what could be the end.
The wife not only notices that the gaslights are dimming, but also that sounds are coming from the attic. Her husband denies the sounds. She can’t find her brooch even though she knows it was in her purse. He has removed it without her knowing. She finds a letter from one “Sergis Bauer,” and her once-adoring husband becomes furious with her. Later, he explains that he became upset because she was upset (which she wasn’t).
The husband tells his wife that the gaslights are not dimming; there are no sounds from the attic; she lost the brooch as it was not in her purse; she didn’t see a letter from Sergis Bauer. On top of all that, he tells her that she stole a painting, and he has found out that her mother was put in an asylum. He convinces his wife that not only is she fabricating things that don’t exist, but also that she’s a kleptomaniac, too high strung and unwell to be in public. She must be crazy like her mother. Stealing the aunt’s jewels is symbolic of a much more deadly crime: stealing his target’s sanity. The husband is building a case for how his wife is obviously unstable and untrustworthy. Slowly but surely, the wife begins to lose her grip on what’s real and what’s false. She loses faith in her own perceptions.
Luckily for the 1944 wife in the movie Gaslight, it being a Hollywood movie and all, a policeman takes an interest in the unfolding manipulation. It turns out that the wife is merely useful to the husband, and he exploits her for his own means. In the movie, it turns out that the husband is the one who is untrustworthy and who steals, not his destabilized wife.
Publicity still from the film Gaslight © 1944 Metro-Goldwyn-Mayer
Gaslighting in a marriage is disturbing. Gaslighting in an institution such as a corporation, church, school, sports club, courthouse, retirement home, government agency, news station, or political party is deeply disturbing. The target in the marriage may lose her mind and come to believe that she is, in fact, corrupt and insane. Her relationship to reality becomes unhinged. As has been demonstrated throughout history, the targets of institutional gaslighting are whole segments of society, who lose their minds and come to believe whatever alternative facts and fabricated events they are fed by those in positions of power, credibility, and social status. This collective madness can occur in cults, even in nations. History has shown us how incredibly dangerous and destructive this manipulation can be.
In 2022, the term “gaslighting” was published in a United Kingdom High Court judgment for the first time in what is being called a “milestone” hearing in a domestic abuse case. Describing the case, Maya Oppenheim defines the act as follows:
Gaslighting refers to manipulating someone by making them question their very grasp on reality by forcing them to doubt their memories and pushing a false narrative of events.
Although this is being legally identified as manipulation in a marriage, it applies equally well to the workplace. Those who tell the lies of bullying and gaslighting at work make targets question their grasp on reality, force them to doubt their memories, and push a false narrative of events. This false narrative is often believed by higher-ups who have been carefully groomed over time to believe in the power, credibility, and social standing of the one bullying. In this legal ruling, gaslighting is viewed as part of a campaign of psychological abuse that uses coercion and control to destabilize someone.
Controlling the narrative, silencing questions and concerns, forcing the community to adhere to the institution’s fabricated facts all prop up the harms of institutional complicity. Lawyer and workplace bullying expert Paul Pelletier finds that the lies of workplace bullying flourish when the leadership operates from a coercion and control model as identified in the manipulative and dysfunctional marriage under scrutiny in the UK High Court. Coercion and control as a leadership model sets the stage for the drama of bullying, gaslighting, and institutional complicity to unfold. Psychiatrist Dr. Helen Riess discusses leaders who use fear and intimidation to exert their authority: “This type of failed leadership tends to spread across organizations like the plague.”2
A year later, in 2023, a lawsuit was launched in New Jersey. Once again, gaslighting is one of the alleged behaviors that drove Joseph Nyre, former president of prestigious Seton Hall University, from his institution. As reported by Ted Sherman, Nyre alleges that the former chairman of the board at Seton Hall violated the law, including through the sexual harassment of Nyre’s wife. As a whistleblower, Nyre alleges he was targeted with “gaslighting, retaliation, and intimidation,” which led him to resign. Institutional complicity in silencing those who speak up uses textbook methods, and gaslighting is long overdue to be understood as one of the weapons in its arsenal. Dr. Dorothy Suskind, an expert in workplace bullying, refers to the specific abuse meted out to those with “high ethical standards” as a “degradation ceremony.”3
Although gaslighting is being recognized in the law, it is not fully understood from a psychological and brain science perspective, and it is rarely applied to workplace culture. Only recently, in 2023, psychologists Priyam Kukreja and Jatin Pandey developed a “Gaslighting at Work Questionnaire” (GWQ) that revealed two key components in workplace gaslighting: trivialization and affliction. According to psychologist Mark Travers, trivialization may take the form of “making promises that don’t match their actions, twisting or misrepresenting things you’ve said, and making degrading comments about you and pretending you have nothing to be offended about.” Victims start down the path of wondering if they’re being “too sensitive.” Affliction may take the form of excessive control, making you self-critical, creating dependence, or being “very sweet to you and then flipp[ing] a switch, becoming hostile shortly after.”4 Again, this kind of maltreatment causes self-doubt. Kukreja and Pandey conclude:
The GWQ scale offers new opportunities to understand and measure gaslighting behaviors of a supervisor toward their subordinates in the work context. It adds to the existing literature on harmful leader behaviors, workplace abuse, and mistreatment by highlighting the importance of identifying and measuring gaslighting at work.5
Introducing a questionnaire on gaslighting is an effective way to draw attention to how this form of manipulation occurs. Equally important, it provides vocabulary for workplaces to understand and discuss this specific form of abuse. In recent years, Forbes began publishing articles on gaslighting in the workplace, indicating that it is on the leadership radar. Jonathan Westover advises on “How to Avoid and Counteract Gaslighting as a Leader,” and his approach is insightful:
The problem is, those who tell the lies of bullying and gaslighting do not experience self-reflection. They do not feel humility as an emotion, just as they don’t feel guilt or remorse. They are uninterested in others’ perceptions, because their brains tend to objectify their targets. They often experience a roller coaster of shame and grandiosity, and they deny vulnerability or the possibility that they have made a mistake. In short, they cannot have authentic relationships. They follow an abusive script that turns them—if not stopped—into a caricature who repeats bullying lies and gaslighting manipulations over and over. They avoid accountability and see trust as a game that they want to win. Using psychological research to understand how the brains of manipulators work hopefully will give us a better chance to prevent their negative impacts in the workplace.
Manzar Bashir describes several textbook gaslighting behaviors: trivializing your feelings, shifting blame, projecting their behavior, insulting and belittling, and creating confusion and contradictions, but he articulates one in particular—withholding information—that is very tricky to identify and yet can have devastating impacts. “Gaslighters often use a tactic of withholding information and keeping you in the dark about crucial matters. By selectively sharing or concealing facts, they manipulate your perception of reality and limit your ability to make informed decisions.”7 It’s insightful: gaslighting, along with a great deal of psychological manipulation, is harmful in its omissions and passivity. In other words, it’s the opposite of how we measure the harms of physical abuse. When you hurt someone’s body, we assess severity by how much active damage was done. But when the brain is being manipulated, we need ways to gauge how much damage the lack of action causes. Physical assaults are designed to weaken and harm the body; assaults via gaslighting are designed to weaken and destabilize the brain and the mind. Injuries to the body are far more likely to get immediate treatment, whereas neurological damage to brain architecture and disruption of the mind’s ability to function healthily are too often ignored.
Psychologists and brain scientists have developed extensive evidence about the way in which gaslighting brains operate, notably different from brains that do not manipulate. Knowledge of psychopathic brains and the way they work can better protect us from the gaslighters’ domineering manipulation and their cruel capacity to exploit us for their own purposes.
Most of us who are targeted for bullying at work are caught off guard. Because we are not trained to anticipate manipulation, we’re easily victimized. The more aware we are of how abusive brains operate and how our brains are completely thrown off our game by them, the better able we are to prevent workplace bullying and gaslighting. The more leaders, managers, and HR are informed, the less likely they’ll be drawn into institutional complicity.
Those who tell the self-serving lies of bullying and gaslighting—with ease—are part of a formidable trio referred to in psychology as the Dark Triad: narcissists, Machiavellians, and psychopaths.8 How can we identify these manipulative people more quickly and refuse to believe them? What if there were a way to protect ourselves, and more specifically our sanity, from lies? These are the questions that drove the researching and writing of The Gaslit Brain. I needed to answer them because I was being gaslit at work.
Excerpted and adapted by the author from The Gaslit Brain, published by Prometheus, an imprint of The Globe Pequot Publishing Group. © 2025 by Jennifer Fraser.
The history and pseudohistory of this infamous and ubiquitous obscene gesture.
Double your generosity by donating to Skeptoid before the end of the year!
From fireplace to folklore, how the Yule log got its fake pagan backstory.
History can be a mirror or a wall. For many people, it’s a mirror only when they see their own family reflected in it—an ancestor who fought in a war, survived a famine, or emigrated under duress. For others, history is a wall they can never climb. The view on the other side is fixed: the past is not what was done to them, but what their parents or grandparents did to others.
That is the reality I discovered when interviewing the sons and daughters of leaders of the Third Reich.
When I began work on Hitler’s Children, I was not looking for new evidence about what happened in the Nazi Holocaust. The bureaucratic record of the Third Reich was already vast—memos, orders, trial transcripts, camp rosters—the Germans were masters of documenting their crimes.
What I wanted was something the archives could never provide: a human portrait of the children of top Nazis, the men and women who grew up in the shadow of fathers whose names had become synonyms for evil.
I wanted to know: What is it like to love a parent whom the world knows as a war criminal? How do you form a sense of self when the world has already decided who you are—and it is an identity you neither chose nor can easily shed? What happens to ordinary human relationships—marriage, friendship, parenthood—when your family name carries an explosive moral charge?
Those questions took me across Germany and Austria and into conversations that were often guarded, sometimes raw, and occasionally redemptive. Some doors never opened. Some opened a crack and then slammed shut the minute I explained that I could not promise a sympathetic portrait. A few opened wide, and what came out was not a clean confession or a tidy arc toward reconciliation but something more human: ambivalence, anger, loyalty, shame, defiance, grief. What emerged was not a single “Nazi progeny” experience but a spectrum of responses to inherited guilt.
Polish Jews captured by Germans during the suppression of the Warsaw Ghetto Uprising (Poland) and forced to leave their shelter and march to the Umschlagplatz for deportation, May 1943. Photo by Jürgen Stroop. (Credit: United States Holocaust Memorial Museum, courtesy of National Archives and Records Administration, College Park)
Knocking on Closed Doors
Tracking down the children of the regime’s inner circle required patience and a tolerance for being told no. Some had changed their surnames and slipped into anonymity. Others had moved abroad, where the name on their passport did not immediately freeze a room. Many were instantly hostile when I contacted them. They assumed—not unreasonably—that I was there to condemn their parents or to dredge up what they had spent decades trying to bury.
I learned quickly that the children of perpetrators could be as guarded as the children of victims. I knew many of the latter intimately because I had earlier co-authored a biography of Nazi Dr. Josef Mengele. I had spent countless hours with concentration camp survivors, hearing about their experiences and the trauma those experiences had left behind. When I approached the children of the perpetrators, I discovered some had been burned by journalists who came for sensational quotes and left nuance on the cutting-room floor. Others feared the moral judgment of strangers or the social cost in their own communities if they were seen as disloyal to family.
A few, though, agreed to speak. Some said they wanted the truth to be known while they were still alive. Others hoped that narrating their story aloud might lighten the weight they had carried in silence. What I heard, over time, was less a series of disconnected biographies than a set of recurring moral dilemmas.
The Spectrum of Inherited Guilt
To make sense of what I was hearing, I came to think of my interviewees along four rough lines. These are not scientific categories—lives overflow categories—but they capture distinct ways the various individuals navigated the same shadow.
1. The Rejectors. These were the sons and daughters who saw their fathers’ crimes with scorching clarity and devoted their lives to exposing them. Niklas Frank, son of Hans Frank—the Nazi Governor-General of occupied Poland—was the most uncompromising. He called his father a “spineless jerk,” wrote a book that dismantled the family mythology, and made no room for sentimentality in the face of historical fact. “You don’t put love for your father above the truth,” he told me. The choice for him was not between love and hate but between complicity and moral independence.
2. The Defenders. At the other end of the spectrum were those who insisted their fathers were maligned by history or punished beyond proportion. Wolf Hess defended his father, Rudolf Hess, Hitler’s deputy, as a “man of peace” betrayed by political enemies and victors’ justice. For Wolf, to defend his father was to defend himself from the conclusion that he was the son of a villain. The defense became a scaffold for identity, a way to live in the world without constantly negotiating contempt.
3. The Divided. In the middle were those who could neither fully condemn nor fully exonerate. Rolf Mengele—son of Dr. Josef Mengele—met his father only twice after the war. Rolf was sixteen the first time, when his father traveled from his South American hideaway for a skiing vacation in the Swiss Alps. Rolf’s mother had told him his real father had died in the war, and the visitor was “Uncle Fritz.” Three years later he learned that Uncle Fritz was in reality his father, and he learned about his crimes. They met only once more, when Rolf was 33 and traveled to South America to confront his father about Auschwitz. The elder Mengele closed that door, telling his son never to question him about what happened at the camp and what had led prisoners to dub him the “Angel of Death.”
Rolf did not deny his father’s atrocities; he had studied the documents as had everyone else. However, his sense of loyalty to his family had fractured the moral clarity that comes easily to people who never face the person behind the infamy. Rolf carried two incompatible truths: the father he barely knew and whom his family loved and the historical perpetrator he could not defend.
4. The Transcenders. Finally, there were those who took the moral debt they inherited and turned it outward—into a public ethic. Dagmar Drexel’s father was not a senior Nazi official but a member of the murderous Einsatzgruppen, the mobile death squads that killed more than a million civilians. She chose the path of engagement and reconciliation, visiting Israel, supporting dialogue, and insisting that her children and grandchildren be raised in the light of historical truth. Dagmar hoped, as many did, that if her generation did the hard work, the third generation might be free of the burden.
These categories blur at the edges. People moved along the spectrum over time—hardening or softening as new documents and eyewitness accounts surfaced, as they aged, as their own children asked harder questions than journalists ever could.
Taken together, however, the spectrum reveals the variety of human strategies for living with the inheritance of atrocity.
The Private Life of a Perpetrator. The Höss family enjoys a seemingly idyllic domestic life—a swimming pool, a carefully tended garden, children at play—literally abutting the walls of Auschwitz. This publicity still from Jonathan Glazer’s film The Zone of Interest visually captures the double life of memory and the central torment for perpetrators’ children: reconciling the private tenderness of a parent with the public monstrosity of their crimes. (Credit: The Zone of Interest © 2023. Directed by Jonathan Glazer. Photo courtesy of A24.)
The Double Life of Memory
For outsiders, the hardest truth to grasp may be the most banal: perpetrators are still parents. A man who signed deportation orders may also have read bedtime stories, taught a child to swim, or taped the wobbling seat on a first bicycle. Public history sees uniforms and titles. Private memory remembers the warmth of a hand, the tone of a voice in the kitchen at night.
Reconciling those two realities—public monstrosity and private tenderness—was the central torment for many I met. Some resolved it by letting historical fact erase the personal. They repudiated the father and severed the line. Others clung to the personal, even when it meant being accused of denial.
Edda Göring, devoted to her father’s memory, described Hermann Göring as generous and loving. She did not deny the crimes of the regime in which he was one of the top leaders, but she resisted the idea that her father had been a fanatic. To critics, that sounded like apologetics. To her, it was loyalty to the man she knew as a kindly father.
The tension here is not reducible to “truth versus lies.” Rather, it is a collision of kinds of truth—the truth of documented atrocity and the truth of attachment, which does not yield easily to hard facts. I came to believe that part of the work of reckoning is sometimes learning to hold both truths at once without letting either evaporate the other.
Shame, Guilt, and the Psychology of the Second Generation
Psychology offers a vocabulary for what I heard. The “intergenerational transmission of trauma” is well documented among the children of victims—especially Holocaust survivors—where symptoms include anxiety, hypervigilance, and a deep mistrust of institutions. Among the children of perpetrators, I discovered that a related but distinct process plays out. Their inheritance is not injury but stigma—the corrosive effects of shame, moral ambiguity, and the fear that others see an invisible mark.
Guilt is about actions; shame is about identity. One can confess guilt and make amends. Shame, by contrast, whispers that one is something tainted. Several interviewees spoke of carrying a “name that enters the room first.” It affected romance (when to disclose the name), employment (whether a boss would know the family and decide against them), and decisions about parenthood (whether to have children at all).
Coping strategies reflected familiar psychological defenses. Some changed their names or emigrated—geographic cures for a moral biography. Others chose radical transparency—publicly condemning their fathers in books and interviews to reclaim their own moral agency. A third group practiced radical silence, hoping that if the topic never arose, the past might recede on its own. It never did. Silence, I learned, is a temporary dam. The water rises behind it.
How Family Systems Carry History
Beyond the individual psyche lies the family system—the ways stories are told or not told, the rituals of commemoration or erasure. Some families preserved elaborate mythologies in which the father had resisted orders, saved a Jewish neighbor, or known nothing about the machinery of murder.
The myths were often anchored in a single ambiguous episode—an order not carried out, a mild reprimand from a superior—that became the seed for an alternative history.
Other families split. Siblings took opposing stances. One condemned; another defended. At holiday meals, the past was both present and forbidden.
The emotional economy of those households looked familiar to anyone who has studied families marked by addiction or scandal: unspoken rules, competing narratives, and a tacit agreement that love depended on staying within one’s assigned role.
Children who broke the family line—who published a denunciation or appeared in a documentary—sometimes became moral exiles among their own kin. That rupture was the price of telling the truth as they saw it. In those moments, “intergenerational trauma” named not only what moved from parent to child but what moved from child back to parent: a judgment the older generation could not bear.
Social Mirrors: Schools, Workplaces, and the Public Gaze
The burden was not only private. Society itself became a mirror in which these children saw themselves reflected, often in distorted ways. Several spoke of the quiet pause when a teacher or colleague recognized the surname—and then the question that followed, carefully phrased to sound neutral but freighted with suspicion: “Any relation to … ?” In adulthood, some learned to bring it up first, defanging the question with a practiced sentence—“Yes, I’m his daughter; no, I do not share his politics”—and moving on before the conversation stalled.
In public life, the reception depended on the role they chose. The rejectors found a kind of moral home among activists and historians. The defenders found communities that resented “victors’ justice.” The divided and the transcenders navigated lonelier paths, neither embraced by partisans nor comfortable with silence.
Hungarian Jews arriving at Auschwitz in May 1944. Moments after disembarking from the train, many faced Nazi selection—some to forced labor, many to death. Photo by Ernst Hofmann or Bernhard Walter. (Credit: German Federal Archives [CC-BY-SA 3.0])
What Changes With Time—and What Doesn’t
We sometimes imagine that moral burdens fade in predictable half-lives. In my experience, time changed the tone but not always the weight. As my interviewees aged, many reported that reckoning deepened, not because new facts appeared but because their own children asked better questions.
The third generation—further from the emotional bond and closer to the educational curriculum—refused family mythologies in a way the second often could not. “Grandpa couldn’t have known,” a parent would say. “But he was there,” a teenager would answer.
Anniversaries, documentaries, and new archival releases periodically reset the conversation. A case reopened, a grave discovered, a diary authenticated—and the private work of reconciliation was hauled into public light. At those moments, people who had made peace with their own narrative found themselves having to make peace again, this time with an audience.
Adolf Hitler with Reich Minister of Propaganda Joseph Goebbels and his wife, with their children: Helga, Hilde, and Helmut. (Credit: Bundesarchiv, Bild 183-1987-0724-502 / Heinrich Hoffmann / CC-BY-SA 3.0)
Comparative Frames: Not Only Germany
The Nazi case is singular in scale and intent, but the dynamics I heard are not unique. Descendants of slave owners in the American South wrestle with family papers that list human beings as property and calculate children as “increase.” In post-apartheid South Africa, the Truth and Reconciliation Commission exposed a generation of children to testimony that shattered family legends. In Rwanda, the gacaca courts forced communities to confront the fact that génocidaires were not abstract monsters but neighbors—and often fathers. Across the former Yugoslavia, the International Criminal Tribunal’s judgments collided with nationalist narratives passed down at kitchen tables.
In all these contexts, the same questions surface: Am I responsible for the sins of my father? Can I love my parent without condoning their crimes? What do I owe to victims and their descendants? How do I build a life that is truly my own?
The answers vary by culture and circumstance, but the structure of the dilemma is recognizably human.
Mechanisms of Transmission: How the Shadow Travels
If “intergenerational trauma” names an outcome, what are the mechanisms? Scholars point to at least four:
Silence. When families refuse to speak, children fill the vacuum with fantasy or shame. The mind abhors a narrative void. In several households I encountered, silence was the loudest sound in the room. It produced neither absolution nor forgetfulness—only rumination.
Mythmaking. The stories families tell—of resistance, ignorance, or necessity—shape the moral horizon. Even a small act of decency can be inflated into an alibi. Conversely, some families cultivate a punitive myth of inherited stain, a fatalism that imprisons the young in a script they cannot revise.
Ritual and Place. What families visit—or avoid—matters. One daughter told me she had been taken to battlefields but never to camps. Another said the first time she saw the Nuremberg courtroom, she felt she had stumbled into a photograph that had been waiting for her.
Rituals of remembrance can either widen or narrow moral imagination.
Institutional Echoes. Schools, museums, and media frame the past in ways that either invite reckoning or permit evasion. A curriculum that skips over the depth and breadth of atrocities—as has happened in many academic settings when it comes to the Hamas terror attack of October 7—makes it easier for descendants to imagine their relatives are free of any responsibility.
Institutions can either dignify the moral labor families attempt or tempt them with a ready-made script of innocence.
Child survivors of Auschwitz, wearing adult-sized prisoner jackets, stand behind a barbed wire fence. Still photograph from the Soviet film The Liberation of Auschwitz, taken by the film unit of the First Ukrainian Front, Auschwitz, 1945. (Credit: United States Holocaust Memorial Museum, courtesy of Belarusian State Archive of Documentary Film and Photography)
Moral Injury and the Cost of Knowledge
“Moral injury”—a term developed to describe soldiers who feel they have violated their own ethical codes—offers another lens. The second generation experiences a kind of indirect moral injury: an injury not from what they themselves did but from what knowing does to them. Knowledge damages one’s relationship to a beloved parent; truth injures attachment.
Some choose not to know much. Others choose to know everything and live with the ache. One daughter, who had read deeply in trial transcripts, said that learning the exact logistics of a deportation under her father’s authority broke something in her. “I used to think there must have been chaos,” she said. “It was worse—there was order.”
For her, the injury was precision—the bureaucratic elegance of evil.
Choosing Children: Reproduction Under a Shadow
A notable fraction of those I interviewed had chosen not to become parents. The reasons varied: fear of passing on a name, a desire to end a line, uncertainty about what one could say to a child who asked, “Who was my grandfather?” One son told me that he chose not to become a father because he could not bear to pass on a story line he had never been able to fully explain.
None believed in genetic guilt. The concern was narrative. Parenthood would require mastering a story they themselves had not yet mastered. Others chose to have children precisely as a defiance of history—an insistence that a life could be built that was neither repetition nor repudiation but revision.
These decisions often intersected with partners’ views. Some marriages could not bear the weight of history. One woman described the look on a fiancé’s face when he first grasped the details of her father’s role.
“It wasn’t revulsion,” she said. “It was calculation. He was calculating whether he could carry it with me.” The engagement ended.
The Skeptic’s Task: Between Verification and Empathy
A skeptic acknowledges the limits of memory and the demands of evidence. Interviews with perpetrators’ children are not court records; they are human documents, shaped by self-protection, loyalty, and fatigue. Defensiveness, denial, and selective recall were constants. My job, then and now, is to triangulate: place personal accounts against trial transcripts, diaries, and the scholarship of historians and psychologists.
Skepticism here is not cynicism. The aim is to understand without excusing, to listen without indulging. If we want to interrupt the transmission of harm—whether its currency is trauma or shame—we must map the routes it travels. That map requires both archival rigor and an ear for the ways people live with the past.
Freedom for the Third Generation?
Again and again, interviewees asked whether their children—grandchildren of the perpetrators—could be free. There is some evidence that the burden lightens with distance, especially when the second generation does the work of truthtelling. But it is not inevitable. Silence begets fantasy, and fantasy rarely lands on justice.
The most hopeful conversations I had were with families who had made memory a practice rather than a panic. They visited sites of the crimes together. They read. They argued. They did not ask love to overrule truth or truth to annihilate love. They let both inhabit the same home. In those households, the third generation seemed less haunted and more oriented—not weighed down by a surname but awake to what it should mean to carry one.
A line of Dagmar Drexel stays with me: “Our generation has the obligation to confront the truth. Only then can the next one be free.”
The obligation is not to perpetual penance but to honest narration. Freedom comes not from forgetting, but from telling the story in a way the young can live with.
A German teacher singles out a child with “Aryan” features for special praise in class. The use of such examples taught schoolchildren to judge each other from a racial perspective. Germany, 1934. (Credit: United States Holocaust Memorial Museum, courtesy of Süddeutsche Zeitung Photo)
Living in the Shadow Without Becoming It
The story of the children of Nazi leaders is not only about Germany, nor only about the Holocaust. It is about the universal human challenge of living with a family legacy that collides with one’s moral values. We do not inherit guilt in the legal sense. Yet we can inherit its shadow—in our names, our family stories, our silences, and our choices.
The work of a lifetime, for some, is not to step out of the shadow but to learn how to live within it without becoming it. That means choosing accuracy over myth, candor over silence, accountability over performative shame. It means loving a parent, if one can, without lying about him—and refusing to let that love dictate the terms of one’s moral life.
If there is a single lesson my interviews taught me, it is that history is never safely past; it lives inside our most intimate relationships. To reckon with that is not to remain trapped. On the contrary, it is the only way through—an insistence that the very human bonds that transmitted the shadow can also be the ones that transform it.