Last week a child of one of my cohosts on the SGU, who is in fifth grade (the child, not the cohost), came home from school and declared, rather dramatically, “Mom, Dad – did you know that we never went to the Moon? It was all fake.” They found this to be a surprising revelation, but were convinced it was a proven scientific fact. Of course, we live in the age of the internet, and our children are going to be exposed to all sorts of information that may be misleading or age-inappropriate. This is one more thing parents have to deal with. What was disturbing about this incident was where they learned this “scientific fact” – from their science teacher.
Any parent should be concerned about this, but in a family of skeptical science communicators, this set off alarm bells. But the first thing they did was send a polite e-mail to the teacher (cc’ing the principal) and simply ask what happened. This is good practice – always go to the primary source. It’s easy for anyone to get the wrong idea, and this wouldn’t be the first time a fifth grader misinterpreted a lesson in class. The teacher essentially said that while he did not explicitly tell the students we did not go to the Moon (the student reports he said “it’s possible we did not go to the Moon”), he personally believes we did not, and that it is a “proven scientific fact” that it would have been impossible, then and now, to send people to the Moon (somebody should tell the Artemis astronauts).
Apparently he raised at least two points in class – that there were (impossibly) no stars in the background of the photographs taken from the Moon, and the astronauts could not have survived passage through the radiation belts around the Earth. These are both old and long-debunked claims of the Moon-hoax conspiracy theorists. While it is easy to find sources online, let me briefly summarize why these claims are wrong.
The first claim, about no stars in the photographs from the Moon, is trivially solved with some basic photography knowledge. Cameras have to be set for different light levels. There are three basic settings – the ISO of the film or sensor (a measure of how sensitive it is to light), the aperture, and the shutter speed. The sky on the Moon is black because there is no atmosphere to diffuse the light, but the surface during the day can still be very bright, and reflect off every surface. This means, to avoid overexposure, they would have used a small aperture and fast shutter speed, which would not capture the tiny amount of light coming from stars, each of which is just a point of light. Even from Earth, if you want to get a visible picture of stars at night you need to take a long exposure – long enough that you need to use a tripod. Regular cameras (including the ones used during Apollo) have a low dynamic range – the range of light levels they can capture simultaneously. So they would not have been able to capture the bright lunar surface and stars in the background at the same time. Modern digital cameras have techniques for capturing high dynamic range, but this does not apply to the Apollo-era cameras.
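To make the exposure gap concrete, here is a minimal sketch using the standard exposure value formula, EV = log2(N²/t) at ISO 100. The specific f-numbers and shutter speeds are my own illustrative ballpark values, not documented Apollo camera settings:

```python
import math

def exposure_value(f_number: float, shutter_seconds: float) -> float:
    """Exposure value at ISO 100: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_seconds)

# Hypothetical daylight settings for a bright sunlit surface (sunny-16 ballpark).
ev_surface = exposure_value(f_number=8, shutter_seconds=1 / 250)

# Hypothetical long exposure needed to record faint stars (tripod required).
ev_stars = exposure_value(f_number=2.8, shutter_seconds=30)

print(f"Sunlit surface: EV ~{ev_surface:.0f}")     # ~EV 14
print(f"Star field:     EV ~{ev_stars:.0f}")       # ~EV -2
print(f"Gap: ~{ev_surface - ev_stars:.0f} stops")  # ~16 stops
```

A gap of roughly 16 stops is a brightness ratio of about 2^16 (around 65,000 to 1), far beyond the dynamic range of Apollo-era film, which is why a correctly exposed surface shot leaves the stars invisible.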
The second point refers to the Van Allen belts, which are belts of increased radiation intensity around the Earth. These are tori of ionic radiation trapped by the Earth’s magnetic field. They can vary in shape and intensity, and are not symmetrical. The inner belt is mainly protons and the outer belt is mainly electrons. They do pose an issue for satellites, which have to have proper shielding to protect any sensitive electronics. Crucially – we knew about the Van Allen belts since 1958, so NASA had this information when planning the Apollo missions.
This is a bit more complicated to debunk than the silly photography claim, but still, this information is widely publicly available. The effects of radiation exposure are determined by three variables – the intensity of the radiation, the type and energy of the particles, and the time of exposure. The Apollo capsules were specifically shielded with an aluminum alloy hull and insulation to reduce the intensity of the radiation. Also, NASA specifically calculated a launch trajectory to minimize the time they would spend traversing the Van Allen belts. They ended up spending just a few minutes in the higher-energy inner belt, and about 90 minutes in the outer belt. The total radiation exposure was the equivalent of a typical CT scan – so not much. Because there are so few astronauts it is difficult to get statistically powerful data on their subsequent risk of death from cancer or cardiovascular disease, but what evidence we have shows no significant increase in risk.
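As a rough illustration of the dose-equals-rate-times-time logic, here is a back-of-envelope sketch. The dose rates below are assumed round numbers chosen only to show the arithmetic, not measured Apollo values:

```python
# Back-of-envelope sketch: total dose is roughly (average dose rate) x (time spent),
# summed over each leg of the belt transit. The dose rates are assumed, illustrative
# round numbers behind capsule shielding, NOT measured Apollo data.
legs = {
    "inner belt (fast transit)": {"rate_msv_per_hr": 20.0, "hours": 5 / 60},
    "outer belt": {"rate_msv_per_hr": 5.0, "hours": 1.5},
}

total_msv = 0.0
for name, leg in legs.items():
    dose = leg["rate_msv_per_hr"] * leg["hours"]
    total_msv += dose
    print(f"{name}: ~{dose:.1f} mSv")

# A typical CT scan is on the order of 10 mSv, for comparison.
print(f"Total belt-transit dose: ~{total_msv:.1f} mSv")
```

The point is simply that a brief, shielded transit keeps the time term small, so even elevated dose rates add up to something in the range of routine medical imaging.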
So these two points, which this science teacher apparently believes “prove” it is impossible to send humans to the Moon, are easily debunked with some basic science knowledge. This gets me to the real point of this post – anyone who believes such a conspiracy is likely not qualified to teach science. I firmly believe that science teachers, even at the fifth grade level, need to have a working basic knowledge of science and critical thinking. Believing a conspiracy theory like this is evidence for lack of both. In addition to these points, we can ask – what would have to be true in order for the Moon hoax conspiracy to be true? The size of the conspiracy would have to be massive. Why didn’t the Soviet Union call us out on the hoax, which they could easily have detected and demonstrated? How has it been maintained for six decades? Why hasn’t the scientific community called NASA out on the hoax? If it were truly impossible to go to the Moon, there are generations of scientists, from all over the world, who could easily demonstrate this.
The lack of curiosity and critical thinking on display here is shocking and profound. What a horrible lesson to teach a class of fifth-graders. This also raises another point – expressing such beliefs to fifth graders (apparently without any proper context) shows an incredible lack of judgement. This was not part of any lesson plan or approved material, and he has to know it is (to say the least) controversial (bat-shit crazy is more like it). Even if it were presented in a “teach the controversy” format to encourage critical thinking, I would question whether this is age-appropriate.
Of course, we will turn this into a teaching moment, and use it as an opportunity to teach critical thinking, why grand conspiracy theories are suspect, and some of the relevant science. We will also do what we can to make sure the entire class gets this lesson. We also will try to drive home that teaching such nonsense as “proven scientific fact” to school children is, to say the least, not appropriate.
The post Moon Landing Hoax In School first appeared on NeuroLogica Blog.
On March 30–31, 1979, Iranians went to the polls. The ballot contained a single question: Should Iran become an Islamic Republic? The choices were “Yes” (Green) or “No” (Red). The official result: 98.2% voted Yes.1
Fifty-Eight Days Earlier

On February 1, 1979, Ayatollah Khomeini returned to Iran after fourteen years in exile. Millions filled the streets of Tehran—the estimates range from two to five million.2 But the man they cheered was a carefully constructed image. During the flight, Khomeini remained secluded in the upper deck of the chartered Boeing 747, praying.3 When the plane landed, he chose to be helped down the stairs by the French pilot rather than his Iranian aides, a calculated move to prevent any subordinate from sharing the spotlight.4
He chose his first destination deliberately: Tehran’s main cemetery, where those who died during the revolution were buried. The crowd was so dense his motorcade could not pass; he took a helicopter instead.5 By speaking among the graves, Khomeini positioned himself as the guardian of those who died in the revolution and as someone who would fulfill what they had sacrificed for.
In the weeks that followed, Khomeini offered both material goods and spiritual salvation. He promised free electricity, free water, and housing for every family. Then he added the caveat that would define the coming era: “Do not be appeased by just that. We will magnify your spirituality and your spirits.”6
A Coalition of Contradictions

The crowd that greeted him was not a monolith, but a coalition of contradictions. Marxists marched hoping for a socialist future free of American influence. Nationalists and liberals sought constitutional democracy. The devout sought governance by Sharia—and for them, the revolution was holy war: the Shah represented taghut, the Quranic term for tyrannical powers that lead people from God, and those who died fighting him became shahid, martyrs.
Khomeini managed these competing visions by keeping his actual plans vague. He spoke of freedom, justice, and independence, terms each faction could interpret as it wished.7 His blueprint for clerical rule, Velayat-e Faqih, remained in the background. Abolhassan Bani-Sadr, who would become the Islamic Republic’s first president, later recalled: “When we were in France, everything we said to him he embraced and then announced it like Quranic verses without any hesitation. We were sure that a religious leader was committing himself.”8 Khomeini himself would later state: “The fact that I have said something does not mean that I should be bound by my word.”9
Ayatollah Mahmoud Taleghani casts his vote in the March 1979 Islamic Republic referendum.

The Empty Phrase

Now, let’s return to the ballot.
A republic places sovereignty in the people. Citizens choose their laws. An Islamic state places sovereignty in God, but not “God” in some abstract, philosophical sense. The God of the Islamic Republic is specifically Allah as understood in Shia Islam: a God who communicates through the Quran, whose will was interpreted by the Prophet Muhammad, then by the twelve Imams, and now (in the absence of the hidden Twelfth Imam) by qualified Islamic jurists. This is not a deist clockmaker or a personal spiritual presence. This is a God with specific laws, specific requirements, and specific men authorized to speak on His behalf.
So, what did God want? The ballot never said.
The 1979 Iranian Islamic Republic referendum ballot showing the “نه” (No) option in red. Voters chose between a simple yes or no on whether Iran should become an “Islamic Republic”—a phrase containing no constitution, no enumerated rights, and no definition of which Islamic laws would apply or who would interpret them.

“Islamic Republic” contained no details. No constitution, no enumerated rights, no definition of which Islamic laws would apply or who would interpret them. Voters were not choosing a specific system of government. They were choosing a phrase, and trusting that its meaning would be filled in later by men they believed spoke for God.
For those paying attention, there were clues. Khomeini had written extensively about Velayat-e Faqih (the Guardianship of the Islamic Jurist), a system in which a senior cleric would hold supreme authority as God’s representative on Earth. He had lectured on it in Najaf. He had published a book.10 But in the noise of revolution, in the flood of promises about free electricity and spiritual elevation, these details were background static. The crowds were not voting on constitutional theory. They were voting on hope.
Ninety-eight percent voted Yes. Forty-seven years later, we can measure what exists in Iranian society.
Religious Faith

For this case study to be valid, we must establish a baseline. Was Iranian society already irreligious before 1979, or has religiosity declined under the theocracy?
Available evidence suggests the latter.
In 1975, a survey of Iranian attitudes found over 80% of respondents observing daily prayers and fasting during Ramadan. The methodology is not fully documented in accessible sources.11 However, the broader historical record supports the baseline: the 1979 revolution mobilized millions under explicitly Islamic banners, clerical figures commanded genuine social authority, and the Iranian government’s own 2023 leaked survey found 85% of respondents saying society has become less religious than it was.12 Forty-seven years later, mosques are empty.
Official Iranian census data reports 99.5% of the population as Muslim.13 This figure measures legal status, not belief. Under Iranian law, a child born to a Muslim father is automatically registered as Muslim, and leaving Islam carries severe legal consequences. While formal executions for “apostasy” are relatively rare—the regime prefers to charge dissidents with crimes like “Enmity against God” or “Insulting the Prophet”—the threat is sufficient to enforce public silence.
Saadatabad district, Tehran, January 8, 2026: A mosque burns amid protests. (Source: Press Office of Reza Pahlavi)

In June 2020, the Group for Analyzing and Measuring Attitudes in Iran (GAMAAN) surveyed over 50,000 respondents using methods designed to protect anonymity.14
Results:
While this online sample skews urban (93.6% vs. Iran’s 79%) and university-educated (85.4% vs. 27.7% nationally), the magnitude of divergence from official statistics—32% Shia vs. 99.5% in census data—is too large to explain through sampling bias alone. Meanwhile, face-to-face surveys suffer the opposite problem: when GAMAAN asked respondents if they’d answer sensitive questions honestly over the phone, 40% said no.15
An interesting outcome of this study is that Iran has only about 25,000 practicing Zoroastrians (the total population of Iran is around 92.5 million), yet 7.7% selected this identity. Researchers interpret this as “performing alternative identity aspirations”—claiming pre-Islamic Persian heritage to reject imposed Islamic identity.16
The key findings are, however, clear: 44.5% selected a non-Islamic category when asked their current religion and 47% reported transitioning from religious to non-religious during their lifetime.
The second figure suggests active deconversion rather than inherited secularism.
In 2024, a classified survey by Iran’s Ministry of Culture and Islamic Guidance (conducted in 2023) was leaked to foreign media.17 This data provides a comparison point from within the regime itself.
Indicator | 2015 | 2023
Support separating religion from state | 30.7% | 72.9%
Pray “always” or “most of the time” | 78.5% | 54.8%
Never pray | 3.1% | 22.2%
Never fast during Ramadan | 5.1% | 27.4%
The same survey found 85% of respondents said Iranian society had become less religious in the previous five years. Only 25% reported trusting clerics.
Based on my years of closely following Iranian society, the pace of religious abandonment has accelerated significantly since the 2022 “Woman, Life, Freedom” uprising. The leaked government data confirms this trajectory: the sharpest shifts in prayer and fasting occurred within the 2015–2023 window, with 85% saying society had grown less religious in just the previous five years.
In February 2023, senior cleric Mohammad Abolghassem Doulabi stated that 50,000 of Iran’s approximately 75,000 mosques had closed due to low attendance, a claim partially corroborated by the leaked government survey finding only 11% always attend congregational prayers.18
Election participation has also declined. Official turnout in the June 2024 presidential election was 39.93%, the lowest in the Islamic Republic’s history.19
The Evidence on the Streets

The data on paper is corroborated by the specific vocabulary of the street. The protest chants have evolved from requesting reform to rejecting the entire theological framework.
Art by Hamed Javadzadeh — Woman, Life, Freedom Movement (2022)

Consider the chant: “Neither Gaza nor Lebanon, I sacrifice my life for Iran.”
This is a direct rejection of the regime’s core ideology. The Islamic Republic prioritizes the Ummah—the transnational community of believers—over the nation-state. By rejecting funding for Hamas and Hezbollah in favor of national interests, protesters are secularizing their priorities: the Nation has replaced the Faith as the object of ultimate concern.
Even more specific is the chant: “Death to the principle of Velayat-e Faqih.”
The protestors are not merely calling for the death of the dictator (Khamenei); they are targeting the specific theological doctrine that grants him legitimacy. They are rejecting the very concept of divine guardianship.
But the most striking evidence of the revolution’s failure is the return of the name it sought to erase. In a historical irony that defies all prediction, crowds now chant “Reza Shah, bless your soul,” and call upon Reza Pahlavi, the son of the deposed Shah, to return. The same population that staged a revolution to overthrow a monarchy in 1979 is now invoking that monarchy as the antidote to theocracy.
The Mechanism

A note on terminology: When this article refers to “Allah,” it means the legislative deity of the Islamic Republic—a God with enforceable commands interpreted by authorized clerics. This is distinct from the personal God that 78% of Iranians still believe in.
As mentioned earlier, Iran’s constitution establishes Velayat-e Faqih—the Guardianship of the Islamic Jurist. Article 5 declares that in the absence of the Twelfth Imam (a messianic figure believed to have been in supernatural hiding since the 9th century), authority belongs to a qualified jurist. The Tony Blair Institute’s analysis states it directly: “the supreme leader’s mandate to rule over the population derives from God.”20 Khamenei’s own representative, Mojtaba Zolnour, declared in 2009: “In the Islamic system, the office and legitimacy of the Supreme Leader comes from God, the Prophet and the Shia Imams, and it is not the people who give legitimacy to the Supreme Leader.”21
This is not metaphor. The system’s legitimacy rests on the claim that its laws are Allah’s laws, its punishments are Allah’s punishments, its wars are Allah’s wars.
When morality police detained Mahsa Amini, leading to her death, they were enforcing the mandatory religious duty of “Forbidding the Wrong.” When courts execute apostates, they enforce Allah’s law. When the regime sends billions to Hezbollah while Iranians face poverty, it pursues Allah’s mission. When it pursues a nuclear program that invites crushing sanctions, it frames the resulting economic ruin not as policy failure, but as a holy “Resistance” against the enemies of Islam. Every act of misrule carries Allah’s signature.
Khorramabad, Iran, January 8, 2026: Protesters raise the pre-1979 lion-and-sun flag, described as a symbol of secular restoration, atop a statue of the Ayatollah. (Source: Press Office of Reza Pahlavi)
In a secular dictatorship, citizens can hate the dictator while preserving their faith. The North Korean who despises Kim Jong-un can still pray. But in a theocracy, the oppressor and God speak with one voice. To oppose the oppressor is to oppose God. To want freedom is to reject divine authority.
The regime created conditions where, for many, opposing political authority became entangled with questioning religious authority.
The Psychology of Religious Rebellion

Jack Brehm’s reactance theory (1966) demonstrates that when people perceive threats to their freedom, they become motivated to restore it, often by embracing the forbidden alternative.22 Subsequent research has applied this specifically to religion. Roubroeks, Van Berkum, and Jonas (2020) found that restrictive religious regulations can trigger reactance that leads to both heresy (holding beliefs contrary to orthodoxy) and apostasy (renouncing religious affiliation entirely).23
The critical insight: In cases of psychological reactance, the emotional pushback against coercion often precedes the intellectual dismantling of the belief system.
The sequence is rarely a straight line, but the components are clear:
This third point is crucial. Iran’s internet users grew from 615,000 in 2000 to over 70 million today.24 Despite billions spent on censorship, officials admit 80–90% of Iranians use VPNs, which let users circumvent restrictions by making their traffic appear to originate in another country.25
For the intellectually curious, the internet offered arguments against Islamic theology that were previously banned. But for the average citizen, it offered something perhaps more powerful: validation. It showed them that their anger was shared. It broke the “pluralistic ignorance,” the state where everyone privately rejects the norm but publicly conforms because they think they are the only ones.
Whether through deep study or simple emotional exhaustion, the result was the same: the breaking of the psychological bond between the citizen and the faith.
The Unintended Outcome

Iran’s religious decline is among the fastest documented in modern history. Stolz et al. (2025) in Nature Communications established that Europe’s secular transition took approximately 250 years. Iran’s comparable shift from over 80% observing daily prayers in 1975 to 47% reporting lifetime deconversion by 2020 occurred in roughly 45 years. Pew’s global data shows Muslim retention rates averaging 99% across surveyed countries.26
However, Europe secularized without internet or satellite television. Iran’s shift occurred alongside a 90-fold increase in internet access. Theocracy may provide the motive for questioning imposed faith; technology provides the accelerant that compresses generational change into decades. Ex-Muslim testimonies, apostasy narratives, ordinary lives lived without faith—these demonstrated that abandoning religion was survivable. The forbidden became imaginable. Others found arguments that validated what they already felt. The reasoning matched the shape of their anger, and that was enough.
For forty-seven years, the Islamic Republic worked to manufacture belief. Mandatory religious education from childhood. State control of media. Morality police enforcing dress and behavior. Apostasy punishable by death. A constitution grounding all authority in God. They did not leave this to chance.
The data suggests it did not work.
Anyone following recent events in Minneapolis has likely noticed something strange. People watching the same videos, reading the same headlines, and reacting to the same street-level events often seem to be describing entirely different realities. Conversations quickly break down, not because people disagree about what should be done, but because they cannot even agree on what is happening. It’s as if people are watching two completely different movies on one screen.
The “two-movies-one-screen” concept was coined by Scott Adams, the creator of Dilbert turned political commentator, to describe radically different interpretations of the same political events. People with access to the same set of facts come away with completely different understandings of what is happening. In some cases, each side seems genuinely unaware that the other interpretation even exists.
This is not merely disagreement, and it goes beyond ordinary bias. It is also not quite what psychologists usually mean by cognitive dissonance. Cognitive dissonance, first described by Leon Festinger in the 1950s, occurs when people experience psychological discomfort from holding conflicting beliefs or encountering information that contradicts their existing views, and then attempt to reduce that discomfort through rationalization or reinterpretation of the facts. In cases like the Renee Good shooting in Minnesota, however, something else seems to be happening. So, what is going on?
From a psychological standpoint, this resembles dissociation more than cognitive dissonance. Dissociation refers to a class of mental processes in which certain thoughts, perceptions, or experiences are kept out of conscious awareness. As clinical psychologists have long noted, dissociation functions as a defensive mechanism, shielding the individual from information that is experienced as overwhelming or intolerable. The mind does not reject the data after evaluating it. It fails to perceive it in the first place.
The following is an attempt to provide a neutral description of the events, followed by two very different interpretations.
On January 7, 2026, in Minneapolis, Minnesota, 37-year-old Renee Nicole Good was fatally shot by an Immigration and Customs Enforcement (ICE) agent during an operation targeting undocumented immigrants for deportation. Good was a U.S. citizen and mother of three from previous relationships, and present on the scene with her wife, Rebecca (Becca) Good.
Multiple videos from bystanders, body cameras, and agent phones capture the event, showing a chaotic scene lasting about three minutes.
ICE Agent’s Cellphone Video (Credit: Alpha News)
Renee Good was in her SUV, which was blocking or near the path of ICE vehicles during an arrest operation. Agents approached, giving conflicting commands: some ordered her to leave, while others demanded she exit the vehicle. One agent attempted to open her door and banged on the window.
Rebecca Good, Renee’s wife, was outside the vehicle filming and confronting agents.
At one point during the interaction, Renee’s wife urged her to “drive, baby, drive” as the situation escalated. Good maneuvered the vehicle forward and started to accelerate. The vehicle made contact with an ICE agent who was positioned in front; the agent fired through the windshield, striking her in the face and killing her.
Bystander Video (Credit: Nick Sortor)
According to official statements from ICE and the Department of Homeland Security (DHS), the shooting occurred after Good allegedly used her vehicle as a weapon, attempting to run over an agent who then fired in self-defense. Renee and Rebecca Good were part of “ICE Watch” groups monitoring, protesting, and interfering with ICE operations. The ICE agent who fatally shot Good was injured and hospitalized following a prior incident in June 2025, during which an undocumented immigrant with an open warrant for child sexual assault dragged him with his vehicle while attempting to flee arrest.
Bystander Video 2 (Credit: @Dana916 via X.com)
Progressive voices view Good’s killing as an example of ICE overreach, law enforcement brutality, and systemic abuse of power, especially against citizens exercising First Amendment rights. They emphasize Renee was a “legal observer” and had a constitutional right to protest. They further note that Good was an unarmed American citizen on a public road who was fatally shot in the face and head by a masked federal agent. They also interpret the footage as showing Good attempting to navigate away from the scene rather than intentionally trying to harm the agent. They further warn against normalizing state killings, such as in statements made by Rep. Alexandria Ocasio-Cortez (D), who responded to Vice President JD Vance’s defense of the ICE agent by calling it a “regime willing to kill its own citizens.” This sentiment is tied to broader concerns about police/ICE militarization against undocumented immigrants, and to the observation that even if Good erred (e.g., by not complying with instructions of federal law enforcement officers), it was not worth her life, and that society needs a higher bar for lethal force.
Conservative commentators frame the shooting as justified self-defense against anti-ICE radicals who disrupted lawful operations. They emphasize Renee’s alleged aggression and Rebecca’s role in escalating the situation by shouting “You wanna come at us? Go get yourself lunch, big boy,” portraying the couple as part of a coordinated harassment campaign rather than passive observers or demonstrators. They also argue Good was an active participant and perpetrator obstructing enforcement of long-standing immigration law, and someone attempting to flee from the scene rather than simply a citizen attending a protest. They maintain that while the shooting was tragic, law enforcement (and citizens) can nevertheless use lethal force if they reasonably believe they face imminent serious harm. Further, they make the following distinction: debating whether the officer should or should not have fired is rational, but refusing to acknowledge that being struck or pushed by a vehicle is a basis for self-defense isn’t.
These conflicting media narratives matter because most people do not build their understanding of the world through direct experience. Our personal encounters are limited. The rest of our mental model is assembled from stories. Indeed, research in cognitive psychology and media studies consistently shows that humans rely heavily on narrative to organize information and assign meaning. In other words, we are not natural statisticians. As psychologists such as Jerome Bruner and Daniel Kahneman have shown, people reason intuitively through stories, examples, and emotionally salient cases, often treating mediated experience as a stand-in for reality itself. This is why propaganda is most effective when it does not look like propaganda.
Many people assume propaganda is something obvious that you notice and argue with. In reality, the most powerful propaganda works through repetition rather than persuasion. Social psychologists have documented what is known as the “illusory truth effect,” in which repeated statements are more likely to be judged as true, regardless of their accuracy. When a moral narrative is replayed often enough, it stops feeling like a claim and starts feeling like memory.
Consider the recurring portrayal of tech executives in films and television. A wealthy founder speaks in vague abstractions, dismisses ethical concerns, and pursues profit at the expense of ordinary people. The specifics vary, but the moral structure remains the same. Whether any individual depiction reflects the reality of modern technology firms is almost beside the point. After repeated exposure, viewers absorb not just a critique of corporate excess, but an intuitive framework for interpreting innovation, wealth, and motive. Repetition trains audiences to assign intent instantly and to stop questioning it.
This works because fiction bypasses our analytical defenses. Experimental research on narrative persuasion shows that people are less likely to counterargue when they are emotionally absorbed in a story. Psychologists refer to this as “transportation,” a state in which attention and emotion are captured by a narrative, making viewers more receptive to its implicit assumptions. We do not fact-check television dramas. We empathize with them. Their moral premises are absorbed quietly as background knowledge.
For most of us, the names Jeff Bezos, Elon Musk, Mark Zuckerberg, or Peter Thiel evoke an immediate moral impression. But how did that impression form? Have you, for example, ever heard them speak at length, or do you know how they run their companies? Do you understand what motivates them? Do they have a good sense of humor?
There is also a structural problem with storytelling itself. Everyday reality, especially everyday crime, is usually chaotic, senseless, and narratively unsatisfying. Criminologists have long observed that much violent crime lacks coherent motives or moral meaning. Writers, understandably, select stories that feel legible, purposeful, and emotionally engaging. But those selections shape our expectations of reality and thus our perception, and make us see otherwise messy events as morally clearer than they actually are.
The result is a moral universe in which certain kinds of harm are treated as profound moral ruptures, while other kinds are treated as routine or unfortunate facts of life. Violence committed by some characters is framed as a social crisis demanding urgent moral response. Similar violence committed by others is portrayed as tragic but unremarkable, something to be managed rather than interrogated.
A clear example appears in the pilot of The Pitt. A dramatic subway assault is immediately interpreted through a moral lens before basic facts are known. The graphic depiction gives viewers the feeling that they are seeing something raw and unfiltered. At the same time, the narrative structure carefully guides inference and sympathy. In the same episode, a different shooting is treated as mundane and procedural. It carries little moral weight and prompts no larger reflection.
The show is not depicting reality. It is presenting a moral map.
This does not require a conspiracy, and it does not require malicious intent. Many writers openly acknowledge that fiction shapes social norms and expectations. Cultural theorists from Walter Lippmann to contemporary media scholars have noted that narratives function as “pictures in our heads,” guiding perception long before conscious judgment enters the picture. What is new is the growing cultural distance between those producing these narratives and the audiences consuming them, combined with a strong confidence that the moral direction of society is already settled.
When this kind of storytelling dominates, it does more than persuade. It trains perception itself. Viewers learn what to notice, what to ignore, and which conclusions should feel obvious. Over time, alternative interpretations stop feeling like interpretations at all. They begin to look irrational or delusional.
This is how “the other movie” disappears.
♦ ♦ ♦
A functioning society does not require agreement on every issue. It does require a shared reality. When large groups of people cannot even see what others are responding to, debate becomes impossible. You cannot resolve disagreements if one side experiences the other as hallucinating.
The answer is not counter-propaganda, and it is not simply more facts. Research on motivated reasoning shows that facts alone rarely change minds when perceptions themselves are structured by narrative. What is required instead is closer attention to how stories shape perception. What they highlight. What they omit. And how repetition turns fiction into intuition.
Was Renee Good heroically intervening in an unlawful abduction and a victim of reckless police violence? Or was she someone who interfered with a lawful enforcement action and nearly ran over an officer? Each interpretation feels obvious to those who hold it, and nearly invisible to those who do not. If you analyze both long enough, you might start to see the narratives and the chain of events that lead one to interpret this particular incident in a particular way after watching the exact same three minutes of video.
Skepticism, properly understood, is not just about questioning explicit claims. It is about examining why certain narratives feel natural, why others feel unthinkable, and why some movies seem to be playing on the screen while others are never seen at all.
The tech world is buzzing with the claims of a startup battery company out of Finland called Donut Lab. They claim to have created the world’s first production solid state battery. At first blush the claims are exciting but seem in line with the promises that we have been hearing about solid state batteries for years. So it may seem that a company has finally cracked the technical issues with the technology and gotten a product across the finish line. But let’s take a closer look.
First let’s review their claims. The CEO is claiming that their battery has a specific energy of 400 watt hours per kilogram. This is great, considering the current lithium ion batteries in production are in the 175-250 range. The Amprius silicon anode Li-ion battery has 370 Wh/kg, so 400 sounds plausibly incremental, but make no mistake, this would still be a huge breakthrough. Meanwhile the CEO also claims 100,000 charge-discharge cycles, and an operating temperature range from -30 to 100°C. In addition he claims his battery is cheaper than standard Li-ion, does not use any geopolitically sensitive raw materials, and is already in production (for motorcycles). Further, it can be fully recharged in 5 minutes, and is incredibly stable with no risk of catching fire.
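To get a feel for the scale of those claims, here is a quick sanity-check sketch. The 75 kWh pack size is my own hypothetical EV-scale assumption, not a Donut Lab figure:

```python
# Claimed specs from the announcement, applied to a hypothetical 75 kWh EV pack
# (the pack size is an assumption for illustration, not a Donut Lab figure).
claimed_specific_energy_wh_per_kg = 400
claimed_full_charge_minutes = 5

pack_energy_kwh = 75
cell_mass_kg = pack_energy_kwh * 1000 / claimed_specific_energy_wh_per_kg
avg_charge_power_kw = pack_energy_kwh / (claimed_full_charge_minutes / 60)

print(f"Cell mass for a {pack_energy_kwh} kWh pack: ~{cell_mass_kg:.0f} kg")  # ~188 kg
print(f"Average power for a {claimed_full_charge_minutes}-minute full charge: ~{avg_charge_power_kw:.0f} kW")  # ~900 kW
```

Under 200 kg of cells for a 75 kWh pack would be impressively light, and sustaining roughly 900 kW of average charging power is well above what today's fastest common public chargers (around 350 kW) deliver, which gives a sense of how extraordinary the combined claims are.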
As I have pointed out previously, battery technology is tricky because a useful EV battery needs a suite of features all at the same time, while reality often requires trade-offs. So you can get your high capacity, but with increased expense, for example (like the Amprius battery). So claiming to have improved every critical feature of an EV battery all at once is beyond a huge deal. That in itself starts to get into the implausibility range, but it’s not impossible. My reaction appears to be similar to that of most people in the tech world – show me the money. At CES, where Donut rolled out its battery claims, they did not, in short, do that.
A battery company with these claims, if they wanted to be taken seriously, would have presented their actual battery at CES, demonstrating at least some of these features, like the energy density and cycle life. But all they had was an empty case – no actual battery. That was either a disastrous marketing decision, or they don’t have an actual battery. I’m beginning to smell the “fake it til you make it” syndrome that tanked Theranos.
As we go deeper the story gets more dodgy. The company, Donut Lab, is a small Finnish company (registered in Estonia). Their employee roster boasts a single technical expert; the rest are in marketing and management. So now we are supposed to believe that this small company with a single engineer has outperformed the world’s battery tech giants, which employ hundreds or even thousands of experts and are pouring billions of dollars into R&D to be the first to market with a solid state battery. Um, no. I love a good Cinderella story, and it would be great if a viable solid state battery hit the market a few years (or maybe more) ahead of schedule, but this is just too much to believe.
Then there is the history of the CEO, Marko Lehtimäki. Last year this guy claimed to have created the first true artificial intelligence, Asinoid. He wrote: “Asinoids are today the world’s only AI with their own life, thoughts, continuous evolution and synthetic neuroplasticity with the ability to adopt to any kind of physical or digital ”body”, from humanoid robots to SaaS apps, drone swarms and CCTV cameras. Their intelligence is modeled carefully after the only true known intelligence — the human brain.”
This was just vaporware. Reading his posts I get the vibe that this guy wants to become the next Elon Musk, grabbing experts to create one moonshot breakthrough after another. He may be truly delusional, or he may genuinely think his companies are on the verge of these breakthroughs and that it’s just good marketing to get ahead of the curve. Or he may just be a scammer. Either way, he has no credibility.
We are therefore seeing a pattern that is extremely familiar and clear to experienced skeptics – an astounding claim with nothing real to back it up made by someone with a history of dubious claims. I would be shocked (although also happy) if this turns out to be legit.
Meanwhile, where does solid state battery tech actually sit? The technology is promising, and is expected to produce batteries with higher energy density, faster charging, and longer lifespans. But these will likely come at the expense of higher cost. The large companies working on this tech are also facing challenges to mass production and have not solved all the technical issues. Solid state batteries have been promised for a long time, and the technology is taking a lot longer than optimists expected. Realistically, this is a medium to long term technology. At best we will see them at the end of this decade but more likely in the early to mid 2030s. It may even take longer.
Meanwhile, Li-ion technology continues to advance. Over the next few years we will see silicon anode batteries in EVs at the high end. We are also starting to see sodium ion batteries at the low end, at about half the price of Li-ion batteries and still with acceptable energy density, although at the low end of current Li-ion batteries. This is proven technology, with continued incremental improvement in manufacturing and design. I suspect that these batteries will take us into the mid-2030s, until the industry shifts over to something like solid state batteries.
The post Is Donut Lab’s Solid State Battery Legit? first appeared on NeuroLogica Blog.
If ghosts don't exist, then how do we account for all the ghost experiences that people have every day?
Sixth-century Byzantium was a city divided by race hatred so intense that people viciously attacked each other, not only in the streets but also in churches. The inscription on an ancient tablet conveys the raw animus that sprang from color differences: “Bind them! … Destroy them! … Kill them!” The historian Procopius, who witnessed this race antagonism firsthand, called it a “disease of the soul,” and marveled at its irrational intensity:
They fight against their opponents knowing not for what end they imperil themselves … So there grows up in them against their fellow men a hostility which has no cause, and at no time does it cease or disappear, for it gives place, neither to the ties of marriage nor of relationship nor of friendship.1

This hostility sparked multiple violent clashes and riots, culminating in the Nika Riot of 532 CE, the biggest race riot of all time: 30,000 people perished, and the greatest city of antiquity was reduced to smoldering ruins.
But the Nika Riot wasn’t the sort of race riot you might imagine. The race in question was the chariot race. The color division wasn’t between black and white but between blue and green—the colors of the two main chariot-racing teams. The teams’ supporters, who were referred to as the Blue and Green “factions,” proudly wore their team colors, not just in the hippodrome but also around town. To help distinguish themselves, many Blues also sported distinctive mullet hairstyles, like those of 1970s rock stars. Both Blues and Greens were fiercely loyal to their factions and their colors. The chariots and drivers were a secondary concern; the historian Pliny asserted that if the drivers were to swap colors in the middle of a race, the factions would immediately switch their allegiances accordingly.
The race faction rivalry had existed for a long time before the Nika Riot, yet Procopius writes that it had only become bitter and violent in “comparatively recent times.” So, what caused this trivial division over horse-racing teams to turn so deadly? In short, it was the Byzantine version of “identity politics.”
Detail of “A Roman Chariot Race,” depicted by Alexander von Wagner, circa 1882. During the Nika Riots that took place against Byzantine Emperor Justinian I in Constantinople over the course of a week in 532 C.E., tens of thousands of people lost their lives and half the city was burned to the ground. It all started over a chariot race. (Image courtesy of Manchester Art Gallery)

Modern sociological research helps explain the phenomenon. Decades of studies have demonstrated the dangerous power of the human tribal instinct. Surprisingly, it doesn’t require “primordial” ethnic or tribal distinctions to engage that impulse. Minor differences are often sufficient to elicit acute ingroup-outgroup discrimination. The psychologist Henri Tajfel demonstrated this in a landmark series of studies to determine how minor those differences can be. In each successive study, Tajfel divided test subjects into groups according to increasingly trivial criteria, such as whether they preferred Klee or Kandinsky paintings or underestimated or overestimated the number of dots on a page. The results were as intriguing as they were disturbing: even the most trivial groupings induced discrimination.2, 3
However, the most significant and unexpected discovery was that simply telling subjects that they belonged to a group induced discrimination, even when the grouping was completely random. Upon learning they officially belonged to a group, the subjects reflexively adopted an us-versus-them, zero-sum game attitude toward members of other groups. Many other researchers have conducted related experiments with similar results: a government or an authority (like a researcher) designating group distinctions is, by itself, sufficient to spur contentious group rivalry. When group rewards are at stake, that rivalry is magnified and readily turns malign.
The Robbers Cave Experiment, conducted in 1954 by social psychologists Muzafer and Carolyn Sherif, investigated intergroup conflict and cooperation. The study involved 22 eleven-year-old boys at a summer camp in Robbers Cave State Park, Oklahoma. (Photo: The University of Akron)

The extent to which authority-defined groups and competition for group benefits can foment nasty factionalism was demonstrated in the famous 1954 Robbers Cave experiment, in which researchers brought boys with identical socioeconomic and ethnic backgrounds to a summer camp, dividing them randomly into two official groups. They initially kept the two groups separate and encouraged them to bond through various group activities. The boys, who had not known each other before, developed strong group cohesion and a sense of shared identity. The researchers then pitted the groups against each other in contests for group rewards to see if inter-group hostility would arise. The group antagonism escalated far beyond their expectations. The two groups eventually burned each other’s flags and clothing, trashed each other’s cabins, and collected rocks to hurl at each other. Camp staff had to intervene repeatedly to break up brutal fights. The mounting hostility and risk of violence induced the researchers to abort that phase of the study.4 Other researchers have replicated this experiment: one follow-up study resulted in knife fights, and a researcher was so traumatized he had to be hospitalized for a week.5, 6
How does this apply to the Blues and Greens? As in the Tajfel experiments, the Byzantine race factions had formed a group division based on a trivial distinction—the preference for a color and a horse racing team. However, for many years, the rivalry remained relatively benign. This was likely because the emperors had long played down the factional distinction and maintained a tradition of race neutrality: if they favored a faction, they avoided openly showing it. But that tradition ended a few years before the Nika Riot when emperors began openly supporting either one faction or the other. But more importantly, they extended their support outside the hippodrome with official policies that benefited members of their preferred faction. The emperors Marcian, Anastasius, and Justinian adopted official employment preferences, allocating positions to members of their favored faction and blocking the other faction from coveted jobs. To cast it in modern terms, they began a program of “race-based” affirmative action and identity politics.7, 8
Official recognition of the group distinction enhanced the us-versus-them sense of difference between the factions, and the affirmative action scheme turned this sense of difference into bitter antagonism, which eventually exploded in violence. Procopius, our primary contemporary source, placed the blame for the mounting antagonism and the riots squarely on Justinian’s program of identity politics. It had not only promoted an us-versus-them mindset in the factions, it also incited vicious enmity between them, turning a trivial color preference and sporting rivalry into a deadly “race war.”
Considering how identity politics could elicit violence from randomly assembled groups like the Blues and Greens, it is easy to imagine how disastrous identity politics can be when applied to groups that already have some long-standing, historic sense of difference. Indeed, there have been numerous instances of this in history, most ending tragically. For example, Tutsis and Hutus enjoyed centuries of relatively peaceful coexistence in Rwanda up until Belgian colonialists arrived; when the Belgians issued identity cards distinguishing the two groups and instituted affirmative action, it ossified a formerly porous group distinction and infused it with bitter rivalry, preparing the path to genocide. Likewise, when Yugoslavia instituted its “nationality key” system, with educational and employment quotas for the country’s constituent ethnic groups, it hardened group distinctions, pitting the groups against each other and setting the stage for genocide in the Balkans. And, when the Sri Lankan government opted for identity politics and affirmative action, it spawned violent conflict and genocide that destroyed a once peaceful and prosperous country. This last example—Sri Lanka—is so illustrative of the dangers of identity politics that we’ll examine it in more detail.
Sri Lanka: How Identity Politics Destroyed Paradise

She is a fabulous isle just south of India’s teeming shore, land of paradise … with a proud and democratic people … Her flag is the flag of freedom, her citizens are dedicated to the preservation of that freedom … Her school system is as progressive as it is democratic. —1954 TWA TOURIST VIDEO

Sri Lanka is an island off India’s southeast coast blessed with copious amounts of arable land and natural resources. It has an ethnically diverse population, with the two main groups being Sinhalese (75 percent) and Tamils (15 percent). Before Sri Lanka’s independence in 1948, there was a long history of harmony between these groups. That history goes back at least to the fourteenth century when the Arab traveler Ibn Battuta observed how the different groups “show respect” for each other and “harbor no suspicions.” On the eve of Sri Lanka’s independence, a British governor lauded the “large measure of fellowship and understanding” that prevailed, and a British soldiers’ guide noted that “there are no historic antagonisms to overcome.” With quiescent communal relations, abundant natural resources, and one of the highest literacy rates in the developing world, newly independent Sri Lanka was poised to flourish and prosper. Nobody doubted it would outperform countries like South Korea and Singapore, with the British governor dubbing it “the best bet in Asia.”
It turned out to be a very poor bet. A few years after Sri Lanka’s independence, violent communal conflict erupted, culminating in a protracted civil war and genocide. By the time it ended, over a million people had been displaced or killed. Sri Lanka’s per capita GDP, which was on par with South Korea’s in 1960, was only one-tenth of it by 2009. As in sixth-century Byzantium, identity politics precipitated the calamity.
Turning a Disparity into a Disaster

At the end of British colonial rule in Sri Lanka, there was significant educational and income disparity between Sinhalese and Tamils. This arose by happenstance rather than because of discriminatory policy. The island’s north, where Tamils predominate, is arid and poor in resources. Because of this, the Tamils devoted their productive energy toward developing human capital, focusing on education and cultivating professional skills. This focus was abetted by American missionaries, who set up schools in the north, providing top-notch English-language education, particularly in math and the physical sciences. As a result, Tamils accounted for an outsized proportion of the better-educated people on the island, particularly in higher-paying fields like engineering and medicine.
Because of the Tamils’ superior education, the British colonial administration hired them disproportionately compared to the Sinhalese. In 1948, for example, Tamils accounted for 40 percent of the clerical workers employed by the colonial government, greatly outstripping their 15 percent share of the overall population. This unequal outcome had nothing to do with overt discrimination against the Sinhalese; it merely reflected the different levels and types of education achieved by the different ethnic groups.
When Sri Lanka gained independence, it passed a constitution that prohibited discrimination based on ethnicity. But a few years after that, an opportunist politician, S.W.R.D. Bandaranaike, figured he could advance his career by cynically appealing to identity politics, stoking Sinhalese envy over the Tamils’ over-representation in higher education and government. He launched a divisive campaign to eliminate the disparity, which spurred the majority Sinhalese to elect him. After his election in 1956, Bandaranaike passed a law that changed the official language from English to Sinhala and consigned students to separate Tamil and Sinhalese education “streams” rather than having them all learn English. As one Sinhalese journalist wrote, this divided Sri Lanka, depriving it of its “link language”:
That began a great divide that has widened over the years. Children now go to segregated schools or study in separate streams in the same school. They don’t get to know other people of their own age group unless they meet them outside.

Beyond eliminating Sri Lanka’s common “link language,” this law also functioned as a de facto affirmative action program for Sinhalese. Tamils, who spoke Tamil at home and received their higher education in English, could not gain Sinhala proficiency quickly enough to meet the government’s requirement. So, many of them lost their jobs to Sinhalese. For example, the percentage of Tamils employed in government administrative services dropped dramatically: from 30 percent in 1956 to five percent in 1970; the percentage in the armed forces dropped from 40 percent to one percent.
As has happened in many other countries, Sri Lanka’s identity politics went hand-in-hand with expanded government. Sinhalese politicians made it clear: government would be the tool to redress perceived ethnic disparities. It would allocate more jobs and resources, and that allocation would be based on ethnicity. As one historian writes: “a growing perception of the state as bestowing public goods selectively began to emerge, challenging previous views and breeding mistrust between ethnic communities.” Tamils responded to this by launching a non-violent resistance campaign. With ethnic dividing lines now clearly drawn, mobs of Sinhalese staged anti-Tamil counter-demonstrations and then riots in which hundreds—mostly Tamils—were killed. The us-versus-them mentality was setting in.
Bandaranaike was eventually assassinated by radicals within his own movement. But his widow, Sirimavo, who was subsequently elected prime minister, resolved to maintain his top priorities—expansive government and identity politics. She nationalized numerous industries and launched development projects that were directed by ethnic and political considerations rather than actual need. She also removed the constitutional ban on ethnic discrimination so that she could aggressively expand affirmative action. The existing policies had already cost so many Tamils their jobs that they were now under-represented in government. However, they remained over-represented in higher education, particularly in the sciences, a disparity that Sirimavo and her political allies resolved to eliminate. In a scheme that American universities like Harvard would later emulate, the Sri Lankan universities began to reject high-scoring Tamil applicants in favor of manifestly less-qualified Sinhalese with vastly lower test scores.
Just like Justinian’s “race” preferences, the Sri Lankan affirmative action program exacerbated us-versus-them attitudes, deepening the group divide and spurring enmity between groups. As one Sri Lankan observed:
Identity was never a question for thousands of years. But now, here, for some reason, it is different … Friends that I grew up with, [messed around] with, got drunk with, now see an essential difference between us just for the fact of their ethnic identity. And there are no obvious differences at all, no matter what they say. I point to pictures in the newspapers and ask them to tell me who is Sinhalese and who is Tamil, and they simply can’t tell the difference. This identity is a fiction, I tell you, but a deadly one.9

The lessons of the various affirmative action programs in Sri Lanka were clear to everyone: individuals’ access to education and government employment would be determined by ethnic group membership rather than individual merit, and political power would determine how much each group got. If you wanted your share, you needed to mobilize as a group and acquire and maintain political power at any cost. The divisive effects of these lessons would be catastrophic.
The realization that they would forever be at the mercy of an ethnic spoils system, along with the violent attacks perpetrated against them, induced the Tamils to form resistance organizations—most notably, the Liberation Tigers of Tamil Eelam (LTTE). The LTTE attacked both Sri Lankan government forces and individual Sinhalese, initiating a deadly spiral of attacks and reprisals, with both sides committing the sort of atrocities that are tragically common in ethnic conflicts: burning people alive, torture, mass killings, and so on. Over the following decades, the conflict continued to fester, periodically escalating into outright civil war. Ultimately, over a million people would be killed or displaced.
The timeline of the Sri Lankan conflict establishes how communal violence originated from identity politics rather than the underlying income and occupational disparity between the groups. That disparity reached its apex at the beginning of the twentieth century. Yet, there was no communal violence at that point or during the next half-century. It was only after the introduction of affirmative action programs that ethnic violence erupted. The deadliest attacks on Tamils occurred an entire decade after those programs had enabled Sinhalese to surpass Tamils in both income and education. As Thomas Sowell observed: “It was not the disparities which led to intergroup violence but the politicizing of those disparities and the promotion of group identity politics.”10
Consequences of Identity Politics in Sri Lanka and Beyond

Sri Lanka’s experience highlights some underappreciated consequences of identity politics. Most notably, one would expect that affirmative action programs would have warmed the feelings of the Sinhalese toward the Tamils. After all, they were receiving preferences for jobs and education at the Tamils’ expense. Yet, precisely the opposite happened: as the affirmative action programs were implemented, Sinhalese animus toward the Tamils progressively worsened. This pattern has been repeated in nearly all the countries where affirmative action has been implemented: affirmative action programs have an invidious effect on the group that benefits, imbuing them with a sense of insecurity and defensiveness over the benefits they receive. That group tends to justify the indefinite continuation of these benefits by claiming that the other group continues to enjoy “privilege”—or by demonizing them and claiming that they are “systemically” advantaged. Thus, the beneficiaries of affirmative action are often the ones to initiate hostilities. In Rwanda, for example, it was Hutu affirmative action beneficiaries who perpetrated the violence, not Tutsis. The situation in Sri Lanka was analogous, with Sinhalese instigating all of the initial riots and pogroms against the Tamils.
One knock-on effect of identity politics in Sri Lanka was that it ultimately benefited some of the wealthiest and most privileged people in the country. The government enacted several affirmative action schemes, each increasingly contrived to benefit well-heeled Sinhalese. The last of these implemented a regional quota system that was devised so that aristocratic Sinhalese living in the Kandy region would compete for spots against poor, undereducated Tamil farm workers. As one Tamil who lost his spot in engineering wrote: “They effectively claimed that the son of a Sinhalese minister in an elite Colombo school was disadvantaged vis-à-vis a Tamil tea plucker’s son.” This follows the pattern of many other affirmative action programs around the world: the greatest beneficiaries are typically the most politically connected (and privileged) individuals within the group receiving affirmative action. They are often wealthier and more privileged than many of the individuals against whom affirmative action is directed. This has been well documented in India, which has extensive data on the subgroups that benefit from its affirmative action programs.
One unexpected consequence of identity politics in Sri Lanka was rampant corruption. When Sri Lanka became independent, its government was widely deemed one of the least corrupt in the developing world. However, as affirmative action programs were implemented and expanded, corruption increased in lockstep. The adoption of affirmative action set a paradigm that pervaded the government: whoever held power could steer government resources to whomever they deemed “underserved.” A baleful side effect of ethnicity-based distortion of government policy is that it undermines and erodes more general standards of government integrity and transparency, legitimating a paradigm of corruption: if it is acceptable to direct policy for the benefit of an ethnic group, is it not also acceptable to do so for the benefit of a clan or an individual? It is a small step to go from one to the other, a step that many Sri Lankan leaders and bureaucrats took. Today, Sri Lanka’s government, which once rivaled European governments in transparency, remains highly corrupt. This pattern has been repeated in other countries. For example, after the Federation of Malaysia expelled Singapore, it adopted an extensive affirmative action program, whereas Singapore prohibited ethnic preferences. Malaysia subsequently experienced proliferating corruption, whereas Singapore is one of the least corrupt countries in the world today.
Economic divergence between Singapore and Sri Lanka: GDP per capita, 1960–2023 (Source: Our World in Data)

Perhaps the most profound consequence of identity politics in Sri Lanka was that it ultimately made everybody in the country worse off. After World War II, per capita income in Sri Lanka and Singapore was nearly identical. But after it abandoned its shared “link language” and adopted ethnically divisive policies, Sri Lanka was plagued by violent conflict and economic underperformance; today, one Singaporean earns more than seven Sri Lankans put together. All the group preferences devised to elevate Sinhalese brought down everyone in the country—Tamil, Sinhalese, and all the other groups alike. Lee Kuan Yew, Singapore’s “founding father,” attributed that failure to Sri Lanka’s divisive policies, saying that if Singapore had implemented similar policies, “we would have perished politically and economically.” There are echoes of this in other countries that have implemented identity politics. When I visited Rwanda, I asked Rwandans of various backgrounds whether they thought distinguishing people by race or ethnicity ever helped anyone in their country. There was complete unanimity on this point: after they got over pondering why anyone would ask such a naïve question, they made it very clear that distinguishing people by group made everyone, whether Hutu or Tutsi, distinctly worse off. In the Balkans, I got similar answers from Bosnians, Croatians, Serbians, and Kosovars.
The Perilous Path of Identity Politics

Decades of sociological research and millennia of history have demonstrated that the tribal instinct is both powerful and hardwired into human behavior. As political scientist Harold Isaacs writes:
If anything emerges plainly from our long look at the nature and functioning of basic group identity, it is the fact that the we-they syndrome is built in. It does not merely distinguish, it divides … the normal responses run from … indifference to depreciation, to contempt, to victimization, and, not at all seldom, to slaughter.11

The history of Byzantium and Sri Lanka demonstrates that this tribal instinct is extremely easy to provoke. All it takes is official recognition of group distinctions and some group preferences to balkanize people into bitterly antagonistic groups, and the consequences are potentially dire. Even if a society that is balkanized in this way avoids violent conflict, it is still likely to be plagued by all the concomitants of social fractionalization: higher corruption, lower social trust, and abysmal economic performance.
It is therefore troubling to see the U.S. government, institutions, and society adopt Sri Lankan-style policies that emphasize group distinctions. As the U.S. continues down the perilous path of identity politics, it is unlikely to devolve into another Bosnia or Sri Lanka overnight. But the example of Sri Lanka is a dire warning: a country that was once renowned for its communal harmony quickly descended into violence and economic failure—all because it sought to redress group disparities with identity politics.
Surveys and statistics are now flashing warning signs in the United States. A Gallup poll found that while 70 percent of Black Americans believed that race relations in the United States were either good or very good in 2001, only 33 percent did in 2021.12 Other statistics have shown that hate crimes have been on the rise over that time.13 In the last year, we have also seen the spectacle of angry anti-Israel protesters hammering on the doors of a college hall, terrorizing the Jewish students locked inside, and a Stanford professor telling Jewish students to stand in the corner of a classroom. While identity politics have increasingly directed public policy and institutions, relations between social groups have deteriorated rapidly. This—and a lot of history—suggests it’s time for a different approach.
Mediterranean archaeologist Dr. Flint Dibble will be our resident expert on the real history (and the fake history) at our ports of call when Skeptoid Adventures sails from Málaga, Spain to Nice, France this April. He is perhaps best known for his 2024 destruction of pseudo-archaeologist Graham Hancock on the Joe Rogan Experience.
William S. Burroughs was one of the most controversial literary figures of the early 1960s, an American postmodern author and visual artist who was considered one of the key figures of the Beat Generation that influenced pop culture (he was friends with Allen Ginsberg and Jack Kerouac). He also became preoccupied with an unusual experiment: the cut-up, a technique in which a written text is cut up and rearranged to create a new text. But this was no mere artistic preoccupation. Burroughs, author of the notorious Naked Lunch (the subject of a major literary censorship case when its publisher was sued for violating a Massachusetts obscenity law), claimed to have found a sort of window into the future, a time warp on paper and on tape.
Burroughs got the cut-up idea in 1959 from his close friend Brion Gysin. Burroughs remembered, “It was simply of course applying the montage method, which was really rather old hat in painting at that time, to writing. As Brion said, writing is fifty years behind painting.”1 Burroughs traced the cut-up back to an incident from the Dada movement of the 1920s, when Tristan Tzara announced his intention to create a poem on the spot by pulling words out of a hat.2
For Burroughs, however, the cut-ups were something more than a creative writing technique. He traced this supposed revelation back to a Time magazine article by the oil industrialist John Paul Getty. (Burroughs may have been referring to a February 1958 Time cover story on Getty. Getty did not write the article.) Upon cutting up the article, Burroughs created the following phrase: “It’s a bad thing to sue your own father.” When Getty was in fact sued by one of his sons, Burroughs came to believe that his cut-up had foretold the future:
Perhaps events are pre-written and prerecorded and when you cut word lines the future leaks out. I have seen enough examples to convince me that the cut-ups are a basic key to the nature and function of words.3

Years later, in Howard Brookner’s Burroughs, the fedora-clad, now-aged author explains to his poet friend Allen Ginsberg:
Every particle of this universe contains the whole of the universe. You yourself have the whole of the universe. If I cut you up in a certain way I cut up the universe … So in my cut-ups I was attempting to tamper with the basic pre-recordings. But I think I have succeeded to some modest extent.

At this, Ginsberg could only nod and utter a number of noncommittal “um hmms,” adding later: “Burroughs was, in cutting up, creating gaps in space and time, as Cezanne, or as meditation does.” Burroughs also cited a dubious summary of Wittgenstein’s Paradox: “This is Wittgenstein: If you have a prerecorded universe, in which everything is prerecorded, the only thing that is not prerecorded are the prerecordings themselves.”4 The actual Wittgenstein’s Paradox holds that “no course of action could be determined by a rule, because any course of action can be made out to accord with the rule.”
Ludwig Wittgenstein was a philosopher and language theorist, but there is no reason to believe that he thought of the universe as a giant tape recording. Rather, Burroughs’s notion of human consciousness was clearly influenced by L. Ron Hubbard’s engram theory, itself reliant on Freudian psychoanalytic theory with its emphasis on trauma and repressed memory. Seemingly derived from the medical theory of the memory trace, Hubbard described engrams as imprints of unpleasant experiences on the protoplasm of living beings.
Burroughs went so far as to describe the cut-up method as “streamlining Dianetics therapy system.” Proposing that his tape method could be used for therapy, he went on to suggest wiping “traumatic material” off a magnetic tape.5 He even hinted that Hubbard had borrowed the tape recording idea from him! His friend Ian Sommerville sold Hubbard two recorders, and Burroughs seemed to find it significant that Sommerville had become sick soon after, as if Hubbard were using an insidious black magic.6 Burroughs began to see the Scientology system as a form of brainwashing, even as he was increasingly convinced of Hubbard’s theories.
Moving on to the world of cinema, Burroughs made two cut-up films, Towers Open Fire in 1963 and The Cut-Ups in 1966, with the help of producer Antony Balch. And, in 1965, Burroughs proposed to Balch “a new type of science fiction film,”7 one that would expose “the story of Scientology and their attempt to take over this planet.”8 The film would explain that “vulgar stupid second rate people” had taken over the planet by means of a “virus parasite.”9
Burroughs brazenly went ahead with his cut-up experiment, even though it might have serious ramifications for the universe: “Could you, by cutting up … cut and nullify the pre-recordings of your own future? Could the whole prerecorded future of the universe be prerecorded or altered? I don’t know. Let’s see.” Perhaps he was thinking of the scientists at Los Alamos, who exploded the first atomic bomb without being completely sure of the ramifications.10
Nor was Burroughs’s “sample operation” in influencing the universe an especially ethical exercise. In fall 1972 the author took issue with the Moka, “London’s first espresso bar,” leading to a vengeful exercise with overtones of Maya Deren, the experimental filmmaker who was also a voodoo priestess and flinger of malicious hexes.
Burroughs’s grudge against the Moka arose over what he described as “unprovoked discourtesy and poisonous cheesecake.” He took a movie camera and began filming. Within two months, the bar was closed. Burroughs recommended using this exercise to “discommode or destroy” any business you did not particularly like. He did not consider that the bar might have shut down for some unrelated reason. Maybe word got out about the bad cheesecake.11 Some of the author’s magical thinking in this period may be a result of reliance on drugs, but Burroughs had been a believer in curses since childhood.12
It is perhaps not a surprise that some thought the author’s new method was a prank. At a 1962 Edinburgh festival, Burroughs spoke about his new technique, which he was then calling the fold-in method. Members of the crowd thought they were being pranked, causing an Indian author to ask, “Are you being serious?” Burroughs insisted that he was.13
Burroughs presented a summary of his method to a gathering of students at Colorado’s Naropa Institute in 1976, and part of this lecture can be heard on the record Break Through in Grey Room. When Burroughs describes the revelatory Getty cut-up, laughter can be heard from the audience. Perhaps sensing some skepticism, Burroughs insists on his innocence in constructing the Getty rewording: “I mean, it’s purely extraneous information to me. [A woman can be heard laughing.] I had nothing to gain on either side. We had no explanation for this at the time, it’s just suggesting, perhaps, that when you cut into the present the future leaks out.”14
Burroughs may have been a bit disingenuous in telling the Naropa students he had no relationship to the wealthy Getty family. In the mid-1960s, in fact, through the art dealer Robert Fraser, Burroughs mingled with John Paul Getty Jr.15 Then, Burroughs stayed at a flat owned by art dealer Bill Willis from March to July 1967, where he often saw the likes of Getty, Jr.16
Admittedly this would have been later than Burroughs’s initial Getty cut-up (apparently in 1959, when Burroughs first became immersed in the whole cut-up process). But Burroughs may have been acquainted with members of the Getty circle before he actually met the Getty family. Plus, we are relying on a version of events that Burroughs publicly recounted in Daniel Odier’s The Job and later in 1976, and relying on Burroughs’s perception is a dubious proposition. In the 1976 Naropa lecture, Burroughs claims the lawsuit occurred a year after his cut-up,17 while in Daniel Odier’s The Job he claims it was a three-year gap. Also, in The Job he seems to garble matters by conflating the magazine title—Time—with the name of Getty’s company—Tidewater.18 I have not found any record of Getty being sued by one of his sons during the time period described.
Burroughs’s literary acquaintances were not impressed to see the author seemingly risking his (still quite tenuous) literary reputation on an obsession like this. Samuel Beckett was appalled at the notion of using the words of other writers and said so to Burroughs directly: “That’s not writing. It’s plumbing.”19 The poet Gregory Corso told Burroughs the cut-up method would quickly become “redundant.”20 Novelist Paul Bowles felt the method would “alienate the reader.”21 Norman Mailer was the most prominent literary figure to champion Burroughs’s work to the American mainstream, and he must have been let down to see Burroughs abandoning a major writing career to get hung up on something Mailer probably considered a trivial sidetrack. To Mailer, the cut-up experiments were a mere “recording,” a distraction from the art of fiction.22 Jennie Skerl and Robin Lydenberg note that “positive assessments of Burroughs’s cut-ups were rare … most saw cut-ups as boring or repellent.”23
Nevertheless, Burroughs produced his “cut-up trilogy”: The Soft Machine (1961), The Ticket That Exploded (1962), and Nova Express (1964), although none sold as well as Naked Lunch. Biographer Ted Morgan calls them “inaccessible to the general reader.”24 The impenetrability of Burroughs’s cut-ups added to his reputation as a “difficult” author. Even Burroughs’s off-and-on friend Timothy Leary asked, rhetorically, “Do you actually know anyone who has finished an entire book by Bill Burroughs?”25
Burroughs was greatly impressed by the 1971 English-language publication of Konstantin Raudive’s Breakthrough: An Amazing Experiment in Electronic Communication with the Dead, which popularized what is known today as EVP (Electronic Voice Phenomenon), a widely discredited phenomenon that purports to find hidden messages in audio recordings of background noise, in recordings played backwards, in random static between radio stations, and in other low-information sources.
Raudive believed these were the voices of the dead. Burroughs offered his own theory in keeping with his cut-up cosmology, namely that the entire universe was a vast playback device, something akin to a tape recording. Inspired by Raudive (and no doubt, Hubbard), Burroughs boldly rejected the precepts of modern psychology. People suffering from schizophrenia were not experiencing hallucinations; they were “tuning in to an intergalactic network of voices.”26
If we look at Burroughs’s supposed predictive phrases, we see a lot of what can only be called “reaching” or grasping at straws. In 1964 Burroughs came up with the phrase, “And here is a horrid air conditioner.” Ten years later, he “moved into a loft with a broken air conditioner.”27 There is nothing mysterious about having an air conditioner break down. If anything, Burroughs was lucky if he went ten years without a broken air conditioner.
Then there was this cryptic recorded query of Raudive’s: “Are you without jewels?” To Burroughs, this must refer to lasers, “which are made with jewels.” And another especially absurd quote from Raudive’s recordings: “You belong to the cucumbers?” Burroughs had read that “the pickle factory” was a slang term for the CIA, so the recording seemed to be an obvious CIA reference. He read this in either Time or Newsweek. For an icon of bohemian literature, one could argue that Burroughs relied an awful lot on the mainstream media for his prognostications.28 But how were researchers like Raudive and Burroughs tapping into the playback of the universe? Burroughs himself asked this question:
Now how random is random? We know so much that we don’t consciously know that perhaps the cut-in was not random. The operator at some level knew just where he was cutting in. As you know exactly on some level exactly where you were and what you were doing ten years ago at this particular time.29

Burroughs was admitting that the cutter was influencing the cut-up, but he believed this was because the cutter was unconsciously tuned in to the future. A simpler explanation would be that Burroughs convinced himself that he was doing random work while he was in fact cutting together semiconscious rephrasings. For instance, he may have heard a rumor from one of his monied acquaintances that one of Getty’s sons was considering a legal action well before actually suing.
If the experimenter (i.e., Burroughs, or Gysin, or Raudive) is unconsciously influencing the experiment, then what we have is a new version of the Ouija board with its self-guided planchette—a device whose movements and messages are created by users who come to believe they are receiving messages from a spirit or other mysterious entity when, in fact, they are moving the planchette. This is known as the ideomotor response.
It is worth noting that in this lecture Burroughs refers to a number of concepts that are often considered dubious today, such as repressed memories and unreliable eyewitness accounts of events. For instance, he discusses “freaks,” seemingly referring to individuals with alleged eidetic or “photographic” memory. Perhaps he was thinking of his late friend Jack Kerouac, who was known by some in Lowell, Massachusetts, as “Memory Babe” due to his purportedly freakish recall powers.
Burroughs’s countercultural reputation grew through the 1970s until his death in 1997. But his cut-ups don’t seem to have received much attention from the parapsychological community, perhaps because he was so preoccupied with now-dated media and technology: newspapers, reel-to-reel recordings, and 8mm film. His metaphysical notion of the universe as a “playback” machine seems dated next to the trendier notion of the universe as a computer matrix.
William Burroughs was one of the most fascinating (and darkly funny) literary figures of the twentieth century, but that doesn’t make him a scientist. There is no evidence to support the notion that anyone can foretell the future by cutting up newspapers, books, or film footage.
The myths and misconceptions surrounding blood donation and why you might consider donating.
Links to resources from Bloodworks Northwest:
South Korean astronomers are challenging the notion that the universe’s expansion is accelerating, an observation made in the 1990s that led to the theory of dark energy. Their claim is currently very controversial, and it may simply fizzle away, or it may change our understanding of the fate of the universe.
In the 1990s astronomers used data from Type Ia supernovae to determine the rate of the expansion of the universe. Type Ias are known as standard candles because they put out essentially the same amount of light. The reason for this is the way they form. They are caused by white dwarfs in a double star system – the white dwarf pulls gas from its partner, and when the accumulated mass reaches a critical amount, the white dwarf explodes. Because the explosions occur at the same mass, the size of the explosion, and therefore its absolute brightness, is the same. If we know the absolute brightness of an object, and we can measure its apparent brightness, then we can calculate its distance.
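To make the standard-candle calculation concrete, here is a minimal sketch (my own illustration, not from the original post) using the standard distance-modulus relation. The peak absolute magnitude of roughly -19.3 for Type Ia supernovae is a textbook value, and the apparent magnitude of 22 is just an illustrative input.

```python
# A minimal sketch of the standard-candle calculation described above:
# from the distance modulus m - M = 5 * log10(d / 10 pc), the distance follows
# directly from the apparent magnitude m and the absolute magnitude M.

def distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
    """Return the distance in parsecs implied by the distance modulus."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Type Ia supernovae peak at an absolute magnitude of roughly M = -19.3.
# A supernova observed at apparent magnitude m = 22 (an illustrative value):
d_pc = distance_parsecs(22.0, -19.3)
print(f"{d_pc:.2e} parsecs (~{d_pc * 3.262e-9:.1f} billion light-years)")
```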
The astronomers used data from many Type Ia supernovae to essentially map the expansion of the universe over time. Remember – when we look out into space we are also looking back in time. They found that distant supernovae were dimmer, and therefore farther away, than expected for their redshift – the expansion was slower in the past than it is today, which means the expansion of the universe is accelerating. This discovery won its lead researchers the 2011 Nobel Prize in Physics. The problem was, we did not know what force would cause such an expansion, so astronomers hypothesized the existence of dark energy, as a placeholder for the force that is pushing galaxies away from each other. This dark energy force would have to be significant, stronger than the gravitational force pulling galaxies together.
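To see why “dimmer than expected” translates into acceleration, here is a rough sketch – not the researchers’ actual analysis – comparing the luminosity distance (and hence the apparent brightness) of a supernova at redshift 0.5 in a matter-only, decelerating universe versus a flat universe with dark energy. The parameter values (H0 = 70 km/s/Mpc, matter density 0.3) are conventional illustrative choices, not fitted numbers.

```python
# Rough sketch: luminosity distance in a flat FLRW universe, by numerical
# integration of D_C = (c/H0) * integral dz'/E(z'), with D_L = (1+z) * D_C.
# A universe with dark energy puts a supernova at a given redshift farther away
# (so it looks dimmer) than a matter-only, decelerating universe would.

import math

C_KM_S = 299_792.458  # speed of light, km/s

def luminosity_distance_mpc(z, h0=70.0, omega_m=0.3, omega_lambda=0.7, steps=10_000):
    def e(zp):  # dimensionless expansion rate E(z)
        return math.sqrt(omega_m * (1 + zp) ** 3 + omega_lambda)
    dz = z / steps
    integral = sum(0.5 * (1 / e(i * dz) + 1 / e((i + 1) * dz)) * dz for i in range(steps))
    return (1 + z) * (C_KM_S / h0) * integral

def distance_modulus(d_mpc):
    return 5 * math.log10(d_mpc * 1e6 / 10)  # convert Mpc to pc, then 5*log10(d/10pc)

z = 0.5
with_dark_energy = luminosity_distance_mpc(z)                             # accelerating
matter_only = luminosity_distance_mpc(z, omega_m=1.0, omega_lambda=0.0)   # decelerating
print(f"D_L at z=0.5: {with_dark_energy:.0f} Mpc vs {matter_only:.0f} Mpc")
print(f"appears {distance_modulus(with_dark_energy) - distance_modulus(matter_only):.2f} mag fainter")
```

The difference works out to a few tenths of a magnitude at this redshift, which gives a sense of how subtle the signal the 1990s teams had to detect really was.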
The South Korean astronomers, however, are challenging this conclusion. They hypothesize that perhaps Type Ia supernovae are not all created equal. Perhaps the age of the star affects the brightness, with older white dwarfs creating brighter supernovae than younger ones. To determine if this is correct they analyzed over 300 Type Ias using data from the Dark Energy Spectroscopic Instrument (DESI) in Arizona. They claim, with high statistical significance, that the data support the conclusion that older Type Ia supernovae are brighter. If you then plug their correction into the analysis of the expansion of the universe, it turns out that the universe is currently decelerating, not accelerating.
This would not necessarily mean that dark energy does not exist. Rather, they think that dark energy is weakening over time. We have already passed the point at which gravity became stronger than dark energy. If true, this means the universe will not expand forever, but will eventually come back together in what is called the “Big Crunch.”
However – the rest of the astronomy community is skeptical, to varying degrees. Some argue that, while statistically significant, the effect size is tiny and could easily be an artifact of the analysis. This same group has made similar claims before, and those prior claims did not stand up to scrutiny. So their track record does not instill confidence.
This kind of debate among scientists is healthy. One study should not be enough to reverse a longstanding conclusion. But at the same time scientists need to be open to such challenges. In the end, the evidence will reign supreme and will determine the consensus that emerges among astronomers – it’s hard to argue with the evidence.
The good thing about astronomy is that you can simply make more observations. This is what needs to happen: more and more detailed observations will either confirm or refute the conclusions of the South Korean researchers. Their paper will then either fade into obscurity or become a seminal paper, and perhaps even the basis of a future Nobel Prize.
Meanwhile, the debate about the ultimate fate of the universe continues. I have followed this question for decades, and it remains a fascinating one. There are no implications for us in the near term, of course – we are talking about what will happen billions or trillions of years in the future. But it is important for our understanding of the universe, and it is interesting to contemplate the ultimate fate of everything.
These are two very different visions of the future. In the Big Crunch scenario, the expansion of the universe continues to slow and eventually stops. And then the universe will slowly start coming back together. This process will accelerate until you have the opposite of the Big Bang – the entire universe collapses into a singularity. This, of course, raises the question of what happens next – will this lead to another Big Bang in an endless cycle? There is something intriguing about this.
The other possibility is that the universe simply continues to expand forever. Eventually we will experience the heat death of the universe, when there is no more energy to do anything. It is also possible that the accelerated expansion will get so great that even atoms come apart in a “Big Rip”. The big difference in this scenario is that there is no cycle – the universe is a one-off. Perhaps there are many universes, and there is a greater cycle, but our universe will die.
This question has gone back and forth over my lifetime, and perhaps it will again. This is partly because, when we look at the mass-energy of the universe it is very close to being right at the equilibrium point, the point at which expansion will slow asymptotically to zero, but not contract or rip apart. Perhaps this is because that is the actual fate of the universe – balanced right on the edge between endless expansion and the Big Crunch.
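For reference, the “equilibrium point” here corresponds to what cosmologists call the critical density. A quick back-of-the-envelope calculation (assuming a Hubble constant of 70 km/s/Mpc, an illustrative value) shows just how sparse that is:

```python
# Critical density of the universe: rho_c = 3 * H0^2 / (8 * pi * G).
# At this average density (in the classic matter-only picture the post invokes),
# expansion slows asymptotically toward zero rather than recollapsing.

import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.086e22  # 70 km/s/Mpc converted to 1/s

rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_crit:.1e} kg/m^3")
print(f"~ {rho_crit / 1.67e-27:.0f} hydrogen atoms per cubic meter")
```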
At this point I think it is reasonable to say that we don’t know. At least there is significant uncertainty, enough that subtle changes to our understanding of phenomena like Type Ia supernova can change the conclusion. But that also makes it an exciting science story to follow.
Merriam-Webster’s Dictionary announced that 2022 saw a 1740 percent increase in searches for gaslighting “with high interest throughout the year.” Merriam-Webster refines the term:
The idea of a deliberate conspiracy to mislead has made gaslighting useful in describing lies that are part of a larger plan. Unlike lying, which tends to be between individuals, and fraud, which tends to involve organizations, gaslighting applies in both personal and political contexts.1

The term “gaslighting” entered the popular consciousness through a 1944 film, the American psychological thriller Gaslight, in which a husband wants to make his newlywed wife lose her mind to have her locked up in an asylum. His agenda is to steal jewels that he knows are hidden in her late aunt’s house where they are living. The movie’s name is symbolic of the many manipulations the husband undertakes to gaslight his wife into believing she’s insane.
The film is set in London in the late nineteenth century when lamps were fueled by gas. The wife notices that their lamps randomly go dim. One way the husband destabilizes her is by denying that the gaslights are indeed dimming. It really is such a small manipulation. It’s so minor that you might not make much of it. The husband has been showering his new wife with adoration—referred to in abusive relationships as “love bombing”—making it unlikely for her to think he’s being deceptive. When the wife is told that the gaslights are not dimming, she chooses to believe her devoted husband and doubt her own perceptions. This is the beginning of what could be the end.
The wife not only notices that the gaslights are dimming, but also that sounds are coming from the attic. Her husband denies the sounds. She can’t find her brooch even though she knows it was in her purse. He has removed it without her knowing. She finds a letter from one “Sergis Bauer,” and her once-adoring husband becomes furious with her. Later, he explains that he became upset because she was upset (which she wasn’t).
The husband tells his wife that the gaslights are not dimming; there are no sounds from the attic; she lost the brooch as it was not in her purse; she didn’t see a letter from Sergis Bauer. On top of all that, he tells her that she stole a painting, and he has found out that her mother was put in an asylum. He convinces his wife that not only is she fabricating things that don’t exist, but also that she’s a kleptomaniac, too high strung and unwell to be in public. She must be crazy like her mother. Stealing the aunt’s jewels is symbolic of a much more deadly crime: stealing his target’s sanity. The husband is building a case for how his wife is obviously unstable and untrustworthy. Slowly but surely, the wife begins to lose her grip on what’s real and what’s false. She loses faith in her own perceptions.
Luckily for the 1944 wife in the movie Gaslight, it being a Hollywood movie and all, a policeman takes an interest in the unfolding manipulation. It turns out that the wife is merely useful to the husband, and he exploits her for his own means. In the movie, it turns out that the husband is the one who is untrustworthy and who steals, not his destabilized wife.
Publicity still from the film Gaslight © 1944 Metro-Goldwyn-Mayer

Gaslighting in a marriage is disturbing. Gaslighting in an institution such as a corporation, church, school, sports club, courthouse, retirement home, government agency, news station, or political party is deeply disturbing. The target in the marriage may lose her mind and come to believe that she is, in fact, corrupt and insane. Her relationship to reality becomes unhinged. As has been demonstrated throughout history, institutional gaslighting can lead whole segments of society to lose their minds and come to believe whatever alternative facts and fabricated events they are being fed by those in positions of power, credibility, and social status. This collective madness can occur in cults, even in nations. History has shown how incredibly dangerous and destructive this manipulation can be.
In 2022, the term “gaslighting” was published in a United Kingdom High Court judgment for the first time in what is being called a “milestone” hearing in a domestic abuse case. Describing the case, Maya Oppenheim defines the act as follows:
Gaslighting refers to manipulating someone by making them question their very grasp on reality by forcing them to doubt their memories and pushing a false narrative of events.

Although this is being legally identified as manipulation in a marriage, it applies equally well to the workplace. Those who tell the lies of bullying and gaslighting at work make targets question their grasp on reality, force them to doubt their memories, and push a false narrative of events. This false narrative is often believed by higher-ups who have been carefully groomed over time to believe in the power, credibility, and social standing of the one bullying. In this legal ruling, gaslighting is viewed as part of a campaign of psychological abuse that uses coercion and control to destabilize someone.
Controlling the narrative, silencing questions and concerns, forcing the community to adhere to the institution’s fabricated facts all prop up the harms of institutional complicity. Lawyer and workplace bullying expert Paul Pelletier finds that the lies of workplace bullying flourish when the leadership operates from a coercion and control model as identified in the manipulative and dysfunctional marriage under scrutiny in the UK High Court. Coercion and control as a leadership model sets the stage for the drama of bullying, gaslighting, and institutional complicity to unfold. Psychiatrist Dr. Helen Riess discusses leaders who use fear and intimidation to exert their authority: “This type of failed leadership tends to spread across organizations like the plague.”2
A year later, in 2023, a lawsuit was launched in New Jersey. Once again, gaslighting is one of the alleged behaviors that drove Joseph Nyre, former president of prestigious Seton Hall University, from his institution. As reported by Ted Sherman, Nyre alleges that the former chairman of the board at Seton Hall violated the law in multiple ways, including by sexually harassing Nyre’s wife. As a whistleblower, Nyre alleges he was targeted with “gaslighting, retaliation, and intimidation,” which led him to resign. Institutional complicity in silencing those who speak up uses textbook methods, and gaslighting is long overdue to be understood as one of the weapons in that arsenal. Dr. Dorothy Suskind, an expert in workplace bullying, refers to the specific abuse meted out to those with “high ethical standards” as a “degradation ceremony.”3
Although gaslighting is being recognized in the law, it is not fully understood from a psychological and brain science perspective, and it is rarely applied to workplace culture. Only recently, in 2023, psychologists Priyam Kukreja and Jatin Pandey developed a “Gaslighting at Work Questionnaire” (GWQ) that revealed two key components in workplace gaslighting: trivialization and affliction. According to psychologist Mark Travers, trivialization may take the form of “making promises that don’t match their actions, twisting or misrepresenting things you’ve said, and making degrading comments about you and pretending you have nothing to be offended about.” Victims start down the path of wondering if they’re being “too sensitive.” Affliction may take the form of excessive control, making you self-critical, creating dependence, or being “very sweet to you and then flipp[ing] a switch, becoming hostile shortly after.”4 Again, this kind of maltreatment causes self-doubt. Kukreja and Pandey conclude:
The GWQ scale offers new opportunities to understand and measure gaslighting behaviors of a supervisor toward their subordinates in the work context. It adds to the existing literature on harmful leader behaviors, workplace abuse, and mistreatment by highlighting the importance of identifying and measuring gaslighting at work.5

Introducing a questionnaire on gaslighting is an effective way to draw attention to how this form of manipulation occurs. Equally important, it provides vocabulary for workplaces to understand and discuss this specific form of abuse. In recent years, Forbes began publishing articles on gaslighting in the workplace indicating that it is on the leadership radar. Jonathan Westover advises on “How to Avoid and Counteract Gaslighting as a Leader,” and his approach is insightful:
The problem is, those who tell the lies of bullying and gaslighting do not experience self-reflection. They do not feel humility as an emotion, just like they don’t feel guilt or remorse. They are uninterested in others’ perceptions, as their brains tend to objectify their targets. They often experience a roller coaster of shame and grandiosity, and they deny vulnerability or the possibility that they have made a mistake. In short, they cannot have authentic relationships. They follow an abusive script that turns them—if not stopped—into a caricature who repeats bullying lies and gaslighting manipulations over and over. They avoid accountability and see trust as a game that they want to win. Using psychological research to understand how the brains of manipulators work hopefully will give us a better chance to prevent their negative impacts in the workplace.
Manzar Bashir describes several textbook gaslighting behaviors: trivializing your feelings, shifting blame, projecting their behavior, insulting and belittling, and creating confusion and contradictions, but he articulates one in particular—withholding information—that is very tricky to identify and yet can have devastating impacts. “Gaslighters often use a tactic of withholding information and keeping you in the dark about crucial matters. By selectively sharing or concealing facts, they manipulate your perception of reality and limit your ability to make informed decisions.”7 It’s insightful: gaslighting, along with a great deal of psychological manipulation, is harmful in its omissions and passivity. In other words, it’s the opposite of how we measure the harms of physical abuse. When you hurt someone’s body, we assess severity by how much active damage was done. But when the brain is being manipulated, we need to find ways to figure out how much lack of action causes damage. Physical assaults are designed to weaken and harm the body; assaults via gaslighting are designed to weaken and destabilize the brain and the mind. Injuries to the body are far more likely to get immediate treatment, whereas neurological damage to brain architecture and disruption of the mind’s ability to function healthily are too often ignored.
Psychologists and brain scientists have developed extensive evidence about the way in which gaslighting brains operate, notably different from brains that do not manipulate. Knowledge of psychopathic brains and the way they work can better protect us from the gaslighters’ domineering manipulation and their cruel capacity to exploit us for their own purposes.
Most of us who are targeted for bullying at work are caught off guard. Because we are not trained to anticipate manipulation, we’re easily victimized. The more aware we are of how abusive brains operate and how our brains are completely thrown off our game by them, the better able we are to prevent workplace bullying and gaslighting. The more leaders, managers, and HR are informed, the less likely they’ll be drawn into institutional complicity.
Those who tell the self-serving lies of bullying and gaslighting—with ease—are part of a formidable trio referred to in psychology as the Dark Triad: narcissists, Machiavellians, and psychopaths.8 How can we identify these manipulative people more quickly and refuse to believe them? What if there were a way to protect ourselves, and more specifically our sanity, from lies? These are the questions that drove the researching and writing of The Gaslit Brain. I needed to answer them because I was being gaslit at work.
Excerpted and adapted by the author from The Gaslit Brain, published by Prometheus, an imprint of The Globe Pequot Publishing Group. © 2025 by Jennifer Fraser.

The history and pseudohistory of this infamous and ubiquitous obscene gesture.
Definitely the most fascinating, and perhaps the most controversial, topic in neuroscience – and one of the most intense debates in all of science – is the ultimate nature of consciousness. What is consciousness, specifically, and what brain functions are responsible for it? Does consciousness require biology, and if not, what is the path to artificial consciousness? This is a debate that possibly cannot be fully resolved through empirical science alone (for reasons I have stated before and will repeat here shortly). We also need philosophy, and an intense collaboration between philosophy and neuroscience, informing each other and building on each other.
A new paper hopes to push this discussion further – On biological and artificial consciousness: A case for biological computationalism. Before we delve into the paper, let’s set the stage a little bit. By consciousness we mean not only the state of being wakeful and conscious, but the subjective experience of our own existence and at least a portion of our cognitive state and function. We think, we feel things, we make decisions, and we experience our sensory inputs. This itself provokes many deep questions, the first of which is – why? Why do we experience our own existence? Philosopher David Chalmers asked an extremely provocative question – could a creature have evolved that is capable of all of the cognitive functions humans have but not experience their own existence (a creature he termed a philosophical zombie, or p-zombie)?
Part of the problem with this question is: how could we know if an entity was experiencing its own existence? If a p-zombie could exist, then any artificial intelligence (AI), even one capable of duplicating human-level intelligence, could be a p-zombie. If so, what is the difference between the AI and biological consciousness? At this point we can only ask these questions; some of them may need to wait until we actually develop human-level AI.
What are the various current theories of consciousness? Any summary I give in a single blog post is going to be a massive oversimplification, but let me give the TLDR. First we have dualism vs pure naturalistic neuroscience. There are many flavors of dualism, but basically it is any philosophy that posits that consciousness is something more than just the biological function of the brain. We are actually not discussing dualism in this article. I have made my position on this clear in the past – there is no scientific basis for dualism, and the neuroscientific model is doing just fine without having to introduce anything non-naturalistic or other than biological function to explain consciousness. The new paper is essentially a discussion entirely within the naturalistic neuroscience model of consciousness (which is where I think the discussion should be).
Within neuroscience the authors summarize the current debate this way:
“Right now, the debate about consciousness often feels frozen between two entrenched positions. On one side sits computational functionalism, which treats cognition as something you can fully explain in terms of abstract information processing: get the right functional organization (regardless of the material it runs on) and you get consciousness. On the other side is biological naturalism, which insists that consciousness is inseparable from the distinctive properties of living brains and bodies: biology isn’t just a vehicle for cognition, it is part of what cognition is.”
They propose what they consider to be the new theory of “biological computationalism”. They write:
“For decades, it has been tempting to assume that brains “compute” in roughly the same way conventional computers do: as if cognition were essentially software, running atop neural hardware. But brains do not resemble von Neumann machines, and treating them as though they do forces us into awkward metaphors and brittle explanations. If we want a serious theory of how brains compute and what it would take to build minds in other substrates, we need to widen what we mean by “computation” in the first place.”
I mostly agree with this, but I think they are exaggerating the situation a bit. My reaction to reading this was – but, this was already my understanding for years. For example, in 2017 I wrote:
“For starters, the brain is neither hardware or software, it is both simultaneously – sometimes called “wetware.” Information is not stored in neurons, the neurons and their connections are the information. Further, processing and receiving information transforms those neurons, resulting in memory and learning.”
For the record, the idea that brains are simultaneously hardware and software, and that these two functions cannot be disentangled, goes back at least to the 1970s. Gerald Edelman, for example, stressed that the brain was neither software nor hardware but both simultaneously. Any meaningful discussion of this debate is a book-length task, and experts can argue about the exact details of the many formulations of these various theories over the years. Just know these ideas have all been hashed out over decades, without any clear resolution, but it has certainly been my understanding that the “wetware” model is dominant in neuroscience. Also – I think the debate is better understood as a spectrum from computationalism at one end to biological naturalism at the other. Even the original proponents of computationalism, for example, recognized the biological nature and constraints of that information processing. The debate is mainly about degree.
In any case, the authors do, I think, make a good contribution to the wetware side in this discussion, essentially reformulating it as their “biological computationalism” theory. This theory has three components. The first is that biological consciousness, and brain function more generally, is a hybrid between discrete events and continuous dynamics. Neurons spiking may be discrete events, but they occur on a background of chemical gradients, synaptic anatomy, voltage fields, and other aspects of brain biology. The discrete events affect the continuous dynamic state of the brain, which in turn affects the discrete events.
Second, the brain is “scale-inseparable”, which is just another way of saying that hardware and software cannot be separated. There is no algorithm running on brain hardware – the hardware is the algorithm and it is altered by the function of the algorithm – they are inseparable.
Third, brain function is constrained by the availability of energy and resources, or what they call “metabolically grounded”. This is fundamental to many aspects of brain function, which evolved to be energy and metabolically efficient. You cannot fully understand why the brain works the way it does without understanding this metabolic grounding.
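To make these three components a bit more tangible, here is a deliberately toy sketch – my own illustration, not anything from the paper – of a leaky integrate-and-fire style unit in which discrete spikes arise out of continuous membrane dynamics, the synaptic “program” is rewritten by the very activity it produces, and a metabolic budget constrains firing. All constants are made up for illustration.

```python
# Toy illustration of the three components of "biological computationalism":
# (1) discrete events (spikes) emerging from continuous dynamics,
# (2) the "software" (a synaptic weight) rewritten by its own activity,
#     so hardware and algorithm are inseparable, and
# (3) an energy budget that constrains how often the unit can fire.

def run(input_current=1.6, steps=500, dt=1.0):
    v, threshold = 0.0, 1.0   # membrane potential (continuous) and spike threshold
    weight = 0.5              # synaptic "program", modified by its own activity
    energy = 10.0             # metabolic budget; each spike costs energy
    spikes = 0

    for _ in range(steps):
        # continuous dynamics: leaky integration of the weighted input
        v += dt * (-0.1 * v + 0.15 * weight * input_current)

        # discrete event: a spike, allowed only if the energy budget permits
        if v >= threshold and energy >= 1.0:
            spikes += 1
            v = 0.0           # reset after the spike
            energy -= 1.0     # spiking is metabolically expensive
            weight *= 1.02    # activity rewrites the "program" (a crude Hebbian bump)

        energy = min(10.0, energy + 0.02)  # slow metabolic replenishment

    return spikes, round(weight, 3), round(energy, 2)

print(run())  # spike count, final weight, remaining energy
```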
I fully agree with the first two points, and that this is a good way of framing the “wetware” side of this debate. I think the brain is metabolically grounded, but that may be incidental to the question of consciousness. An AI, for example, may be grounded by other physical constraints, or may be functionally unlimited, and I don’t see how that would matter to whether or not it could generate consciousness.
What does all this say about the ability to create artificial intelligence? That remains to be seen. I think what it means is that it is possible we will not be able to create true AI self-aware consciousness with software alone. We may need to create a physical computational system that functions more like biology, with hardware and software being inseparable, and with discrete events and continuous dynamics also being entangled. I don’t think the authors answer this question so much as provide a framework for discussing it.
It may be true that these aspects of brain function are not necessary for, but are incidental to, the phenomenon of consciousness. It may also be true that there is more than one way to achieve consciousness, and the fact that human brains do it in one way does not mean it is the only possible way. Further, even if their theory is correct, I don’t think this answers the question of whether or not a virtual brain would be conscious.
In other words – if we have a powerful enough computer to create a virtual human brain – so all the aspects of brain function are simulated virtually rather than built into the hardware – could that virtual brain generate consciousness? I personally think it would, but it’s a fascinating question. And again, we still have the problem of – how would we really know for sure?
The good news is I think we are on a steady road to incremental advances in the question of consciousness. We have a collaboration among philosophers, neuroscientists, and computational scientists each contributing their bit from their own perspective, and the discussion has been slowly grinding forward. It has been incredible, and challenging, to follow and I can’t wait to see where it goes.
From fireplace to folklore, how the Yule log got its fake pagan backstory.
History can be a mirror or a wall. For many people, it’s a mirror only when they see their own family reflected in it—an ancestor who fought in a war, survived a famine, or emigrated under duress. For others, history is a wall they can never climb. The view on the other side is fixed: the past is not what was done to them, but what their parents or grandparents did to others.
That is the reality I discovered when interviewing the sons and daughters of leaders of the Third Reich.
When I began work on Hitler’s Children, I was not looking for new evidence about what happened in the Nazi Holocaust. The bureaucratic record of the Third Reich was already vast—memos, orders, trial transcripts, camp rosters—the Germans were masters of documenting their crimes.
What I wanted was something the archives could never provide: a human portrait of the children of top Nazis, the men and women who grew up in the shadow of fathers whose names had become synonyms for evil.
I wanted to know: What is it like to love a parent whom the world knows as a war criminal? How do you form a sense of self when the world has already decided who you are—and it is an identity you neither chose nor can easily shed? What happens to ordinary human relationships—marriage, friendship, parenthood—when your family name carries an explosive moral charge?
Those questions took me across Germany and Austria and into conversations that were often guarded, sometimes raw, and occasionally redemptive. Some doors never opened. Some opened a crack and then slammed shut the minute I explained that I could not promise a sympathetic portrait. A few opened wide, and what came out was not a clean confession or a tidy arc toward reconciliation but something more human: ambivalence, anger, loyalty, shame, defiance, grief. What emerged was not a single “Nazi progeny” experience but a spectrum of responses to inherited guilt.
Polish Jews captured by Germans during the suppression of the Warsaw Ghetto Uprising (Poland) and forced to leave their shelter and march to the Umschlagplatz for deportation, May 1943. Photo by Jürgen Stroop. (Credit: United States Holocaust Memorial Museum, courtesy of National Archives and Records Administration, College Park)

Knocking on Closed Doors

Tracking down the children of the regime’s inner circle required patience and a tolerance for being told no. Some had changed their surnames and slipped into anonymity. Others had moved abroad, where the name on their passport did not immediately freeze a room. Many were instantly hostile when I contacted them. They assumed—not unreasonably—that I was there to condemn their parents or to dredge up what they had spent decades trying to bury.
I learned quickly that the children of perpetrators could be as guarded as the children of victims. I knew many of the latter intimately because I had earlier co-authored a biography of Nazi Dr. Josef Mengele. I had spent countless hours talking with concentration camp survivors about their experiences and the trauma those experiences had left them with. When I approached the children of the perpetrators, I discovered some had been burned by journalists who came for sensational quotes and left nuance on the cutting-room floor. Others feared the moral judgment of strangers or the social cost in their own communities if they were seen as disloyal to family.
A few, though, agreed to speak. Some said they wanted the truth to be known while they were still alive. Others hoped that narrating their story aloud might lighten the weight they had carried in silence. What I heard, over time, was less a series of disconnected biographies than a set of recurring moral dilemmas.
The Spectrum of Inherited Guilt

To make sense of what I was hearing, I came to think of my interviewees along four rough lines. These are not scientific categories—lives overflow categories—but they capture distinct ways the various individuals navigated the same shadow.
1. The Rejectors. These were the sons and daughters who saw their fathers’ crimes with scorching clarity and devoted their lives to exposing them. Niklas Frank, son of Hans Frank—the Nazi Governor-General of occupied Poland—was the most uncompromising. He called his father a “spineless jerk,” wrote a book that dismantled the family mythology, and made no room for sentimentality in the face of historical fact. “You don’t put love for your father above the truth,” he told me. The choice for him was not between love and hate but between complicity and moral independence.
2. The Defenders. At the other end of the spectrum were those who insisted their fathers were maligned by history or punished beyond proportion. Wolf Hess defended his father, Rudolf Hess, Hitler’s deputy, as a “man of peace” betrayed by political enemies and victors’ justice. For Wolf, to defend his father was to defend himself from the conclusion that he was the son of a villain. The defense became a scaffold for identity, a way to live in the world without constantly negotiating contempt.
3. The Divided. In the middle were those who could neither fully condemn nor fully exonerate. Rolf Mengele—son of Dr. Josef Mengele—met his father only twice after the war. Rolf was sixteen the first time, when his father traveled from his South American hideaway for a skiing vacation in the Swiss Alps. Rolf’s mother had told him his real father had died in the war and that the visitor was “Uncle Fritz.” Three years later Rolf learned that Uncle Fritz was in reality his father, and he learned about his crimes. They met only once more, when Rolf was 33 and traveled to South America to confront his father about Auschwitz. The elder Mengele closed that door, telling his son never to question him about what happened at the camp and what led the prisoners to dub him the “Angel of Death.”
Rolf did not deny his father’s atrocities; he had studied the documents as had everyone else. However, his sense of loyalty to his family had fractured the moral clarity that comes easily to people who never face the person behind the infamy. Rolf carried two incompatible truths: the father he barely knew and whom his family loved, and the historical perpetrator he could not defend.
4. The Transcenders. Finally, there were those who took the moral debt they inherited and turned it outward—into a public ethic. Dagmar Drexel’s father was not a senior Nazi official but a member of one of the murderous Einsatzgruppen, the mobile death squads that killed more than a million civilians. She chose the path of engagement and reconciliation, visiting Israel, supporting dialogue, and insisting that her children and grandchildren be raised in the light of historical truth. Dagmar hoped, as many did, that if her generation did the hard work, the third generation might be free of the burden.
These categories blur at the edges. People moved along the spectrum over time—hardening or softening as new documents and eyewitness accounts surfaced, as they aged, as their own children asked harder questions than journalists ever could.
Taken together, however, the spectrum reveals the variety of human strategies for living with the inheritance of atrocity.
The Private Life of a Perpetrator. The Höss family enjoys a seemingly idyllic domestic life—a swimming pool, a carefully tended garden, children at play—literally abutting the walls of Auschwitz. This publicity still from Jonathan Glazer’s film The Zone of Interest visually captures the double life of memory and the central torment for perpetrators’ children: reconciling the private tenderness of a parent with the public monstrosity of their crimes. (Credit: The Zone of Interest © 2023. Directed by Jonathan Glazer. Photo courtesy of A24.)

The Double Life of Memory

For outsiders, the hardest truth to grasp may be the most banal: perpetrators are still parents. A man who signed deportation orders may also have read bedtime stories, taught a child to swim, or taped the wobbling seat on a first bicycle. Public history sees uniforms and titles. Private memory remembers the warmth of a hand, the tone of a voice in the kitchen at night.
Reconciling those two realities—public monstrosity and private tenderness—was the central torment for many I met. Some resolved it by letting historical fact erase the personal. They repudiated the father and severed the line. Others clung to the personal, even when it meant being accused of denial.
Edda Göring, devoted to her father’s memory, described Hermann Göring as generous and loving. She did not deny the crimes of the regime in which he was a top leader, but she resisted the idea that her father had been a fanatic. To critics, that sounded like apologetics. To her, it was loyalty to the man she knew as a kindly father.
The tension here is not reducible to “truth versus lies.” Rather, it is a collision of kinds of truth—the truth of documented atrocity and the truth of attachment, which does not yield easily to hard facts. I came to believe that part of the work of reckoning is sometimes learning to hold both truths at once without letting either evaporate the other.
Shame, Guilt, and the Psychology of the Second Generation

Psychology offers a vocabulary for what I heard. The “intergenerational transmission of trauma” is well documented among the children of victims—especially Holocaust survivors—where symptoms include anxiety, hypervigilance, and a deep mistrust of institutions. Among the children of perpetrators, I discovered that a related but distinct process plays out. Their inheritance is not injury but stigma—the corrosive effects of shame, moral ambiguity, and the fear that others see an invisible mark.
Guilt is about actions; shame is about identity. One can confess guilt and make amends. Shame, by contrast, whispers that one is something tainted. Several interviewees spoke of carrying a “name that enters the room first.” It affected romance (when to disclose the name), employment (whether a boss would know the family and decide against them), and decisions about parenthood (whether to have children at all).
Coping strategies reflected familiar psychological defenses. Some changed their names or emigrated—geographic cures for a moral biography. Others chose radical transparency—publicly condemning their fathers in books and interviews to reclaim their own moral agency. A third group practiced radical silence, hoping that if the topic never arose, the past might recede on its own. It never did. Silence, I learned, is a temporary dam. The water rises behind it.
How Family Systems Carry History

Beyond the individual psyche lies the family system—the ways stories are told or not told, the rituals of commemoration or erasure. Some families preserved elaborate mythologies in which the father had resisted orders, saved a Jewish neighbor, or known nothing about the machinery of murder.
The myths were often anchored in a single ambiguous episode—an order not carried out, a mild reprimand from a superior—that became the seed for an alternative history.
Other families split. Siblings took opposing stances. One condemned; another defended. At holiday meals, the past was both present and forbidden.
The emotional economy of those households looked familiar to anyone who has studied families marked by addiction or scandal: unspoken rules, competing narratives, and a tacit agreement that love depended on staying within one’s assigned role.
Children who broke the family line—who published a denunciation or appeared in a documentary—sometimes became moral exiles among their own kin. That rupture was the price of telling the truth as they saw it. In those moments, “intergenerational trauma” named not only what moved from parent to child but what moved from child back to parent: a judgment the older generation could not bear.
Social Mirrors: Schools, Workplaces, and the Public Gaze

The burden was not only private. Society itself became a mirror in which these children saw themselves reflected, often in distorted ways. Several spoke of the quiet pause when a teacher or colleague recognized the surname—and then the question that followed, carefully phrased to sound neutral but freighted with suspicion: “Any relation to … ?” In adulthood, some learned to bring it up first, defanging the question with a practiced sentence—“Yes, I’m his daughter; no, I do not share his politics”—and moving on before the conversation stalled.
In public life, the reception depended on the role they chose. The rejectors found a kind of moral home among activists and historians. The defenders found communities that resent “victors’ justice.” The divided and the transcenders navigated lonelier paths, neither embraced by partisans nor comfortable with silence.
Hungarian Jews arriving at Auschwitz in May 1944. Moments after disembarking from the train, many faced Nazi selection—some to forced labor, many to death. Photo by Ernst Hofmann or Bernhard Walter. (Credit: German Federal Archives [CC-BY-SA 3.0])

What Changes With Time—and What Doesn’t

We sometimes imagine that moral burdens fade in predictable half-lives. In my experience, time changed the tone but not always the weight. As my interviewees aged, many reported that the reckoning deepened, not because new facts appeared but because their own children asked better questions.
The third generation—further from the emotional bond and closer to the educational curriculum—refused family mythologies in a way the second often could not. “Grandpa couldn’t have known,” a parent would say. “But he was there,” a teenager would answer.
Anniversaries, documentaries, and new archival releases periodically reset the conversation. A case reopened, a grave discovered, a diary authenticated—and the private work of reconciliation was hauled into public light. At those moments, people who had made peace with their own narrative found themselves having to make peace again, this time with an audience.
Adolf Hitler with Reich Minister of Propaganda Joseph Goebbels and his wife, with their children Helga, Hilde, and Helmut. (Credit: Bundesarchiv, Bild 183-1987-0724-502 / Heinrich Hoffmann / CC-BY-SA 3.0)

Comparative Frames: Not Only Germany

The Nazi case is singular in scale and intent, but the dynamics I heard are not unique. Descendants of slave owners in the American South wrestle with family papers that list human beings as property and calculate children as “increase.” In post-apartheid South Africa, the Truth and Reconciliation Commission exposed a generation of children to testimony that shattered family legends. In Rwanda, the gacaca courts forced communities to confront the fact that génocidaires were not abstract monsters but neighbors—and often fathers. Across the former Yugoslavia, the International Criminal Tribunal’s judgments collided with nationalist narratives passed down at kitchen tables.
In all these contexts, the same questions surface: Am I responsible for the sins of my father? Can I love my parent without condoning their crimes? What do I owe to victims and their descendants? How do I build a life that is truly my own?
The answers vary by culture and circumstance, but the structure of the dilemma is recognizably human.
Mechanisms of Transmission: How the Shadow Travels

If “intergenerational trauma” names an outcome, what are the mechanisms? Scholars point to at least four:
Silence. When families refuse to speak, children fill the vacuum with fantasy or shame. The mind abhors a narrative void. In several households I encountered, silence was the loudest sound in the room. It produced neither absolution nor forgetfulness—only rumination.
Mythmaking. The stories families tell—of resistance, ignorance, or necessity—shape the moral horizon. Even a small act of decency can be inflated into an alibi. Conversely, some families cultivate a punitive myth of inherited stain, a fatalism that imprisons the young in a script they cannot revise.
Ritual and Place. What families visit—or avoid—matters. One daughter told me she had been taken to battlefields but never to camps. Another said the first time she saw the Nuremberg courtroom, she felt she had stumbled into a photograph that had been waiting for her.
Rituals of remembrance can either widen or narrow moral imagination.
Institutional Echoes. Schools, museums, and media frame the past in ways that either invite reckoning or permit evasion. A curriculum that skips over the depth and breadth of atrocities—as has happened in many academic settings when it comes to the Hamas terror attack of October 7—makes it easier for descendants to imagine their relatives are free of any responsibility.
Institutions can either dignify the moral labor families attempt or tempt them with a ready-made script of innocence.
Child survivors of Auschwitz, wearing adult-sized prisoner jackets, stand behind a barbed wire fence. Still photograph from the Soviet film The Liberation of Auschwitz, taken by the film unit of the First Ukrainian Front, Auschwitz, 1945. (Credit: United States Holocaust Memorial Museum, courtesy of Belarusian State Archive of Documentary Film and Photography)

Moral Injury and the Cost of Knowledge

“Moral injury”—a term developed to describe soldiers who feel they have violated their own ethical codes—offers another lens. The second generation experiences a kind of indirect moral injury: an injury not from what they themselves did but from what knowing does to them. Knowledge damages one’s relationship to a beloved parent; truth injures attachment.
Some choose not to know much. Others choose to know everything and live with the ache. One daughter, who had read deeply in trial transcripts, said that learning the exact logistics of a deportation under her father’s authority broke something in her. “I used to think there must have been chaos,” she said. “It was worse—there was order.”
For her, the injury was precision—the bureaucratic elegance of evil.
Choosing Children: Reproduction Under a Shadow

A notable fraction of those I interviewed had chosen not to become parents. The reasons varied: fear of passing on a name, a desire to end a line, uncertainty about what one could say to a child who asked, “Who was my grandfather?” One son told me that he chose not to become a father because he could not bear to pass on a story he had never been able to fully explain.
None believed in genetic guilt. The concern was narrative. Parenthood would require mastering a story they themselves had not yet mastered. Others chose to have children precisely as a defiance of history—an insistence that a life could be built that was neither repetition nor repudiation but revision.
These decisions often intersected with partners’ views. Some marriages could not bear the weight of history. One woman described the look on a fiancé’s face when he first grasped the details of her father’s role.
“It wasn’t revulsion,” she said. “It was calculation. He was calculating whether he could carry it with me.” The engagement ended.
The Skeptic’s Task: Between Verification and Empathy

A skeptic acknowledges the limits of memory and the demands of evidence. Interviews with perpetrators’ children are not court records; they are human documents, shaped by self-protection, loyalty, and fatigue. Defensiveness, denial, and selective recall were constants. My job, then and now, is to triangulate: place personal accounts against trial transcripts, diaries, and the scholarship of historians and psychologists.
Skepticism here is not cynicism. The aim is to understand without excusing, to listen without indulging. If we want to interrupt the transmission of harm—whether its currency is trauma or shame—we must map the routes it travels. That map requires both archival rigor and an ear for the ways people live with the past.
Freedom for the Third Generation?

Again and again, interviewees asked whether their children—grandchildren of the perpetrators—could be free. There is some evidence that the burden lightens with distance, especially when the second generation does the work of truth-telling. But it is not inevitable. Silence begets fantasy, and fantasy rarely lands on justice.
The most hopeful conversations I had were with families who had made memory a practice rather than a panic. They visited sites of the crimes together. They read. They argued. They did not ask love to overrule truth or truth to annihilate love. They let both inhabit the same home. In those households, the third generation seemed less haunted and more oriented—not weighed down by a surname but awake to what it should mean to carry one.
A line of Dagmar Drexel stays with me: “Our generation has the obligation to confront the truth. Only then can the next one be free.”
The obligation is not to perpetual penance but to honest narration. Freedom comes not from forgetting, but from telling the story in a way the young can live with.
A German teacher singles out a child with “Aryan” features for special praise in class. The use of such examples taught schoolchildren to judge each other from a racial perspective. Germany, 1934. (Credit: United States Holocaust Memorial Museum, courtesy of Süddeutsche Zeitung Photo)

Living in the Shadow Without Becoming It

The story of the children of Nazi leaders is not only about Germany, nor only about the Holocaust. It is about the universal human challenge of living with a family legacy that collides with one’s moral values. We do not inherit guilt in the legal sense. Yet we can inherit its shadow—in our names, our family stories, our silences, and our choices.
The work of a lifetime, for some, is not to step out of the shadow but to learn how to live within it without becoming it. That means choosing accuracy over myth, candor over silence, accountability over performative shame. It means loving a parent, if one can, without lying about him—and refusing to let that love dictate the terms of one’s moral life.
If there is a single lesson my interviews taught me, it is that history is never safely past; it lives inside our most intimate relationships. To reckon with that is not to remain trapped. On the contrary, it is the only way through—an insistence that the very human bonds that transmitted the shadow can also be the ones that transform it.
As human civilization spreads into every corner of the world, human and animal territories are butting up against each other more intensely. This often doesn’t end well for the animals. This is also causing evolutionary pressures that are adapting some species to living in close proximity to humans.
Humans cause significant changes to the environment – we may, for example, clear forests in order to plant crops. We also convert a lot of land to human living spaces. We alter the ecosystem with lots of light pollution. We are also now warming the planet.
Humans also produce a lot of food, and along with it a lot of food waste. One of the common rules of evolution is that if a resource exists, something will adapt to exploit it. Perhaps the most versatile species in terms of adapting to human sources of food is the rat. Rats follow humans everywhere we go and prosper in our shadow. New York City is experiencing this phenomenon firsthand – there is basically no effective way to deal with the rat problem in the city as long as it has a waste problem. The city will need to significantly reduce the availability of food waste if it wants to make any dent in the rat population.
There is another way that humans provide a selective pressure on the animals that live close to us – we kill aggressive animals. A recent study shows this effect in a population of brown bears that live in Italy, close to humans. This isolated population has become its own genetic subpopulation of brown bears with distinctive features, including a genetic profile associated with less aggressiveness. Make no mistake, these are still wild animals, and brown bears are dangerous animals. But they are less aggressive than other brown bears.
Another example comes from the golden jackals of Israel. They too have been living in close proximity to humans for years, resulting in “partial self-domestication”. This is likely very similar to the process of domestication of wolves into dogs. There are likely several selective pressures involved, not just humans having a higher tendency to kill very aggressive animals. Humans are also, as I said above, a source of food. Those animals that are less afraid of humans and willing to get a little closer to them have access to lots of calories, which is a massive survival advantage. At first human waste may simply be a calorie supplement, providing an advantage for calmer and less threatening-looking animals. Then, as they come to depend more and more on humans for food, the need to hunt decreases. Evolutionary pressures then favor a shift away from hunting – away from being large, muscular, and aggressive, and even away from camouflage. Selective pressures favor a friendlier demeanor and cuter physical characteristics.
The end stage of this process is full domestication, as happened with dogs, but this is a continuum. It is likely that most mammal species have the potential to be domesticated. There are the now-famous experiments with laboratory domestication of silver foxes. By selecting individuals with a calmer demeanor, researchers were able to produce a semi-domesticated fox breed in a matter of decades. Interestingly, by selecting for behavior, a suite of other features came along for the ride, including floppy ears, spotted coats, and a generally cuter appearance.
There is even a hypothesis that humans self-domesticated. This process may have begun with our split from Neanderthals 600,000 years or so ago, and continued into modern times. The idea is that we collectively will punish, in some way, members of our society that are very aggressive. Violent criminals may be punished in a way (execution, for example) that provides a negative selective pressure, so that over time genes for violence and aggression become less common in the population. In an intensely social setting, selective pressures may favor the ability to cooperate and get along. So the first species we domesticated may have been ourselves.
But to be clear, humans are not the sole agent of domestication. As I outlined above, the process starts with the species itself. Dogs likely self-domesticated much of the way, before humans took over and started breeding them. The trigger for this self-domestication was the availability of human waste food, but humans were not the direct agents of the process.
It is likely that nature will continue to adapt to the overwhelming presence of humans on the planet. For animals there is mostly one choice – if you want to live, you have to live with humans. There are still plenty of wild refuges in the world, but they are mostly hemmed in by civilization, and they are mostly managed parks. Eventually contact with humans may be sufficient to provide selective pressures on more and more species.
The brown bear example is extremely interesting, and makes me wonder about other bear populations. There is a large and growing black bear population in Connecticut, where I live. I have had black bears many times in my yard and even on my deck. They have come to associate humans with food, and are very adept at accessing human waste food or other sources (like bird feeders). It seems likely that the more contact these bears have with humans, the less aggressive they will become. They will learn to live on the edges of human space without getting killed.
Cars are another source of selective pressure. Many species may evolve behaviors to minimize their chance of being struck by a vehicle.
Humans are also learning to adapt to the animals they live near. This is more cultural than evolutionary, but people who live close to wildlife generally learn the rules, just as people in CT are learning to live with black bears. This means you cannot store your bird seed outside, you cannot leave your garbage out overnight, and you need to learn to stay out of the bears’ way. People in the western part of the US have similarly learned to live in proximity to mountain lions. These animals are also moving east (filling a niche left by the killing off of most wolves in the east), and so within a few decades easterners will have to learn to live with mountain lions as well.
Make no mistake – bears and lions are still dangerous wild animals. One risk is that as these species become a little less aggressive people will act as if they are not threatening, and will put themselves unnecessarily at risk. It may be a good thing that they are less aggressive, so that the risk of dangerous human-animal interactions is reduced, but that means we need to have high awareness that these are wild animals and we need to respect their space as well. Reducing the friction between humans and animals works both ways.