Robert Trivers, who died on March 12, 2026, was arguably the most important evolutionary theorist since Darwin. He had a rare gift for seeing through the messy clutter of life and revealing the underlying logic beneath it. E. O. Wilson called him “one of the most influential and consistently correct theoretical evolutionary biologists of our time.” Steven Pinker described him as “one of the great thinkers in the history of Western thought.”
I was Robert’s graduate student at Rutgers from 2006 to 2014. Long before I knew him personally, however, he had already established himself as one of the most original and insightful scientists of the twentieth century. In an astonishing series of papers in the early 1970s, he changed forever our understanding of evolution and social behavior.
The first, published while he was still a graduate student at Harvard, confronted one of the deepest problems in evolutionary theory: how can natural selection favor cooperation between non-relatives? In The Evolution of Reciprocal Altruism, Trivers proposed that cooperation could evolve when the same individuals interacted repeatedly, making it advantageous to help those who were likely to help in return while avoiding cheaters who took benefits without reciprocating—i.e., “you scratch my back, I’ll scratch yours.” The paper offered an elegant solution to the problem of how natural selection can “police the system” and has had enormous implications for human psychology, including our sense of justice, with parallels in other mammals such as capuchins and dogs.
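The repeated-interaction logic is easy to make concrete. Below is a minimal sketch using the iterated prisoner's dilemma (the later game-theoretic formalization popularized by Axelrod, not Trivers's own 1971 model) with illustrative payoff values; it shows why a reciprocator who punishes cheating holds its own against an unconditional cheater once the same pair meets repeatedly.

```python
# A minimal sketch of reciprocity under repeated interaction, assuming
# conventional (illustrative) prisoner's-dilemma payoffs -- not Trivers's
# own 1971 model. "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Reciprocator: cooperate first, then copy the partner's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Cheater: take the benefit, never reciprocate."""
    return "D"

def play(strat_a, strat_b, rounds=200):
    """Total payoffs when the same two individuals interact repeatedly."""
    a_moves, b_moves = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(b_moves)  # each conditions on the other's past behavior
        move_b = strat_b(a_moves)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        a_moves.append(move_a)
        b_moves.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): reciprocators prosper together
print(play(tit_for_tat, always_defect))  # (199, 204): cheater wins once, then is shut out
```

Pairs of reciprocators end up far better off than pairs of cheaters (600 versus 200 points under these toy payoffs), which is the sense in which repeated interaction lets selection “police the system.”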
The next year, in 1972, Trivers published his most cited paper, Parental Investment and Sexual Selection. Here he offered a unified explanation for something that had puzzled biologists since Darwin. Writing perhaps the most famous sentence in all of evolutionary biology—“What governs the operation of sexual selection is the relative parental investment of the sexes in their offspring”—Trivers threw down the gauntlet and revealed a deceptively simple principle that reorganized the field. From that insight flowed one of the most powerful and falsifiable ideas in modern science: the sex that invests more in offspring will tend to be choosier about mates, while the sex that invests less will compete more intensely for access to them.
Two years later, in 1974, Robert once again gave birth to an entirely new field of study with Parent-Offspring Conflict. In it, he built on William Hamilton’s theory of inclusive fitness to show that parents and children have divergent genetic interests. Because a parent is equally related to all of its offspring, while each offspring is related to itself more than to its siblings, conflict is built into the family from the beginning. With that insight, Trivers revealed that some of the most intimate and emotionally charged features of life—begging, weaning, sibling rivalry, tantrums, parental favoritism, even the distribution of love and attention within families—all could be understood as the product of natural selection acting on family members with conflicting evolutionary interests.
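The divergence can be written down in one line. In a standard textbook rendering of the 1974 argument (the notation here is mine: B is the benefit of continued parental investment to the current offspring, C the cost in future offspring forgone, and r the offspring's relatedness to those future siblings), parent and offspring simply disagree about when investment should stop:

```latex
% Weaning-conflict zone, textbook rendering of Trivers (1974).
% B: benefit to the current offspring; C: cost in future siblings forgone;
% r: the offspring's relatedness to those siblings.
\text{Parent favors continued investment while } B > C, \qquad
\text{offspring favors it while } B > rC .
```

With full siblings r = 1/2, so the offspring is selected to demand investment until C exceeds 2B, while the parent is selected to stop once C exceeds B; the interval B < C < 2B is the predicted zone of weaning conflict and tantrums.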
In other papers, Trivers made wide-ranging predictions about the conditions under which parents should produce or invest more in sons than daughters, how female mate choice can favor male traits that benefit daughters, why insect colonies are structured by conflicts over sex ratios, reproduction, and control, and how self-deception may have evolved as a way of more effectively deceiving others.
Each of these papers spawned entirely new research fields, and many researchers have dedicated their careers to unpacking and testing the implications of his ideas. As Harvard biologist David Haig put it, “I don’t know of any comparable set of papers. Most of my career has been based on exploring the implications of one of them.” Indeed, it is hardly an exaggeration to say that his ideas gave birth to the field of evolutionary psychology and the whole line of popular Darwinian books from Richard Dawkins and Robert Wright to David Buss and Steven Pinker.
To know Robert personally, however, was to confront a more uneven and less orderly organism—to use one of his favorite words—than the one revealed in his papers. The man who explained the hidden order in life often struggled to impose order in his own. “Genius” is one of the most overused words in the language, with “asshole” not far behind, and I have known few people who truly deserved either label. Robert deserved both. He could be genuinely funny, extraordinarily generous, and breathtakingly perceptive, but also moody, childish, and needlessly cruel.
Bob and other committee members after my dissertation defense (2014) | Bob with undergraduate students (Jamaica, 2010)
Robert taught me that writing meant endless revision and attention to the tiniest of details. He went through seven drafts of Parental Investment and Sexual Selection and frequently quoted Ernst Mayr telling him that papers are never finished, only abandoned. He used to call me “slovenly,” but more than once returned a draft of mine with a piece of his own dried lettuce stuck to it.
He had an uncanny ability to see the obvious. I used to joke that one reason he was so good at explaining behaviors the rest of us took for granted was that he was like an alien visiting our planet trying to make sense of our strange habits—why we invest in our children, why we are nice to our friends, why we lie to ourselves. He told me that conflict with his own father was part of the inspiration for parent-offspring conflict, and that one of the observations that led to his insight into parental investment came from watching male pigeons jockeying for position on a railing outside his apartment window in Cambridge.
Robert also had a respect for evidence and for correcting mistakes that I’ve rarely seen among academics, a group not known for their humility. He cared more about truth than about his reputation and retracted papers at great cost to himself and his career when he thought there were errors. He also knew that he was standing on the shoulders of the giants who had come before him. He wrote that “the scales fell from his eyes,” crediting Bateman’s 1948 Heredity paper on fruit flies, which showed that males differ more than females in reproductive success, for his insights into why males compete more for mates and females tend to be choosier, and he acknowledged that George Williams had already anticipated the importance of sex-role-reversed species in Parental Investment and Sexual Selection. Indeed, he once described most of his insights into social behavior as those of W. D. Hamilton plus fractions.
He was a lifelong learner with a willingness to do hard things. After his astonishing early success, he could have done what many academics do: stay in his lane, guard his territory, and spend the rest of his career commenting on ideas he had already had. Instead, in the early 1990s he saw that genetics mattered and spent the next fifteen years trying to master it. The result was Genes in Conflict, the 2006 book he wrote with Austin Burt, which pushed his interest in conflict down to the level of selfish genetic elements. Few scientists, after making contributions as important as he had, would have had the curiosity, humility, and stamina to begin again in an entirely new area.
Trivers was a great teacher, though not always in the ways he intended. He often asked dumb questions (“What does cytosine bind to again?” in the middle of a genetics seminar) and made obvious observations (“Did you know that running the air-conditioner in the car uses gas?”). But as he liked to say, “I might be ignorant, but I ain’t gonna be for long.”
He could also be volatile and aggressive, and there were many times when he threatened to kick my ass. I may have been the only graduate student who ever had to wonder whether he could take his advisor in a fight. Once, over lunch at Rutgers, I asked about a cut on his thumb after he had returned from one of his frequent trips to Jamaica. He matter-of-factly told me that he had just survived a home invasion in which two men armed with machetes held him hostage. He escaped by jumping from a second-story window, rolling downhill, and stabbing both men with the eight-inch knife he carried everywhere he went. He was 67 at the time.
Bob, evolutionary biologist Virpi Lummaa, me (Robert Lynch). Finland, January 2020.
The benefits of being Trivers’s only graduate student were obvious. He was a brilliant man and nobody else could speak with such clarity about the impact of operational sex ratios on parental investment and male mortality while rolling a joint. The costs were obvious too. He could be erratic and often seemed either indifferent to, or unaware of, the social consequences of what he said. This often left him professionally isolated and left me with few academic relationships I could count on when it came time to find a job.
One of the last times I spoke with Robert, a fall had left his right arm nearly useless. He described it as “two sausages connected by an elbow.” He was a chaotic and deeply imperfect man, but also one of the few people whose ideas permanently changed how we understand evolution, animal behavior, and ourselves. Steven Pinker wrote that “it would not be too much of an exaggeration to say that [Trivers] provided a scientific explanation for the human condition: the intricately complicated and endlessly fascinating relationships that bind us to one another.” That seems just about right to me.
His ideas are some of the deepest insights we have into human nature, animal behavior, and our place in the web of life. The mark of a great person is someone who never reminds us of anyone else. I have never known anyone like him.
I’ll miss you, Robert. You asshole.
Bob rolling a joint in NYC, 2012.
Robert Ludlow “Bob” Trivers, one of the most consequential evolutionary biologists of the twentieth century, died on March 12, 2026, at the age of 83. In an extraordinary burst of intellectual creativity between 1971 and 1974, he published four papers that permanently altered how evolutionary biologists—and eventually the public—understood cooperation, conflict, selfishness, and deception in the natural world. These papers presented original theories of reciprocal altruism (1971), parental investment and sexual selection (1972), facultative sex ratio adjustment (1973), and parent-offspring conflict (1974). Each paper addressed a deep puzzle in evolutionary theory; together they laid much of the foundation for what would become the field of sociobiology and, later, evolutionary psychology.
His paper on parental investment and sexual selection (1972) proposed that the sex that invests more in offspring becomes the choosier mate. This theory explained with elegant simplicity why males and females so often behave differently across the animal kingdom. The paper arose from watching male and female pigeons out the window of his third-floor apartment in Cambridge, Massachusetts, a reminder that transformative science can begin with simple, careful observation.
Robert Trivers (photo courtesy of Alelia Trivers Doctor) | A younger Robert Trivers
He was also among the first to explain self-deception as an adaptive evolutionary strategy, describing the concept in 1976—arguing that we deceive ourselves in order to deceive others more convincingly, a counterintuitive idea that has since attracted enormous attention across psychology, philosophy, and the social sciences.
Robert’s books included Social Evolution (1985), widely praised as among the clearest accounts of sociobiological theory; Natural Selection and Social Theory (2002), a collection of his early influential papers outlined above; Genes in Conflict (with Austin Burt, 2006), which makes the central argument that genomes are not harmonious but instead sites of constant struggle; and The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life (2011), which brought his ideas about self-deception to a popular audience. He also chose to be the author of his own story in his memoir, Wild Life (2015).
Robert Trivers was born on February 19, 1943, in Washington, D.C., the son of Howard Trivers, an American diplomat, and the renowned poet Mildred Raynolds Trivers. Growing up in a diplomatic household, Robert attended schools in Washington, D.C., Copenhagen, and Berlin before enrolling at Phillips Academy and later Harvard, where he initially studied American history before making an important pivot to biology.
He studied evolutionary theory with Ernst Mayr and William Drury at Harvard from 1968 to 1972, earning his PhD in biology. While a graduate student at Harvard, Robert accompanied Ernest Williams on an expedition to study the green lizard in Jamaica’s countryside. Robert met his first wife, Lorna Staple, in Jamaica; he fell in love with her and the island at the same time. Robert and Lorna wed in 1974 in Cambridge, Massachusetts, and they had four children together: a son, Jonny, twin girls, Natasha and Natalia, and another daughter, Alelia.
Robert was on the faculty at Harvard University from 1973 to 1978, then moved to the University of California, Santa Cruz, where he remained until 1994, before joining the faculty at Rutgers University. Robert was named one of the greatest scientists and thinkers of the 20th century by TIME magazine in 1999. In 2008–09 he was a Fellow at the Berlin Institute for Advanced Study. He was awarded the 2007 Crafoord Prize in Biosciences by the Royal Swedish Academy of Sciences for his fundamental analysis of social evolution, conflict, and cooperation—widely considered the highest honor in evolutionary biology and a prize often mentioned alongside the Nobel in scientific prestige.
His life outside the laboratory was as unconventional as his science. Robert met Huey P. Newton, co-founder of the Black Panther Party, in 1978, when Newton applied from prison to do a reading course with Robert as part of a graduate degree at UC Santa Cruz. The two became close friends and Robert joined the Black Panther Party in 1979. He and Newton later co-authored an analysis of the role of self-deception in the 1982 crash of Air Florida Flight 90.
After Robert and Lorna divorced in 1988, Robert maintained a close relationship with her and with the whole Staple family in Jamaica. He also built a home in Southfield, St. Elizabeth, and spent several months a year in Jamaica for decades. His favorite pastime at his home in Jamaica was to sit on the front veranda and observe the wildlife around him, often joking that the same group of animals would pull up a chair each evening and join him for a glass of red wine, marveling with him at the beauty of the sunset. He made lifelong friends in Jamaica and conducted research from the island on lizards, symmetry, and honor killings over the years. Robert married his second wife, Debra Dixon, in 1997 and they had one child together, a son—Aubrey. They divorced in 2004 but also remained friends until his passing.
Robert Trivers with his five children | With grandson, Lucas Malcolm Howard | With ex-wife Debra, stepson, Diego, and son Aubrey | With three children and seven grandchildren | With granddaughter, Jonisha, and his great grandson, Masiah
Robert Trivers was, by any measure, a complicated man. He was first diagnosed with schizophrenia at the age of 21, a diagnosis later revised to bipolar disorder. He could be generous and brilliant in one breath, reckless and destructive in the next. But he was always a loving father, a dynamic teacher, and a caring friend, often listening to loved ones for hours and providing valuable guidance and needed moments of levity. He loved life with tenacity—both studying it and living it.
Towards the end of his life, Robert found the greatest joy spending time with his children, grandchildren, and his great grandson, Masiah. His eyes would light up the moment he saw him.
Robert’s work remained deeply important to him throughout his life. He wanted to make a significant contribution to scientific thought in his lifetime. The theories Robert produced reshaped how we understand the deep logic of living things. His brilliant contributions to our collective understanding—and his family—are his legacy and will spur important scientific research for years to come.
He is survived by his siblings, Jonathan Trivers (Karen), Ruth Ann Mekitarian, Milly Palmer (David), Howard Trivers (Cathy), and brother-in-law, Souham Harati. Robert was predeceased by his parents, his brother, Aylmer Trivers, and his sister, Kate Harati. He is also survived by five children: Jonathan Trivers (Carline), Natasha Trivers Howard (Jonathan), Natalia Barnes (Jovan), Alelia Trivers Doctor, and Aubrey Trivers; ten grandchildren; and one great grandson.
Neuroscience terms are everywhere. If you log into social media, you’re likely to be bombarded with advice on how to “increase neuroplasticity.” You might be told to “stop chasing the dopamine” or given instructions on how to “regulate your nervous system.” Meditation works because it “rewires your brain.”
Self-help gurus and productivity coaches love these terms. They signal depth. They suggest that beneath the surface of our messy behavior lie precise, well-identified mechanisms that can answer our problems, whatever those problems may be.
The trouble is, despite their suggestion of a mechanism, most of these terms are used in a way that offers no explanatory value. When a wellness blog tells you going for a walk will “regulate your nervous system,” they’re just saying a walk may reduce stress. Whether it actually does reduce stress doesn’t hinge on whether we can describe it in neural terms. Similarly, when an influencer says meditation “changes the brain,” this doesn’t tell you anything new. Anything from practicing a motor skill to remembering this sentence changes your brain. The question is whether it changes it in a way that’s helpful. For that, the neuroscience doesn’t provide an answer.
Neuroscience terms used in these ways are decorative—a way to jazz up tired old advice and make it seem fresh and new again. By decorative neuroscience, I mean the use of irrelevant or oversimplified brain-based concepts to rhetorically bolster some claim, explanation, or intervention.
Why do we continue to see so much decorative neuroscience? A study published in 2008 found that laypeople rate explanations that contain irrelevant neuroscience as better than those that lack neuroscience. This has been termed “the seductive allure of neuroscience explanations.” People without neuroscience training interpret the presence of brain-based explanations as meaning we have a much firmer grasp on a concept than we do. When influencers throw in neuroscience terms, it ends up being interpreted as more authoritative.
Many of the uses of decorative neuroscience are innocuous enough. Influencers have discovered a new rhetorical trick to ply their trade, but much of what they’re saying is the same old thing. What’s more worrying is the way decorative neuroscience has started to influence public discourse.
Dopamine talk has become ubiquitous. California psychiatrist Dr. Cameron Sepah recommends “dopamine fasting,” which involves taking a break from things like smartphones and social media. Individuals following his protocol talk about being “addicted to dopamine.” From a neuroscience perspective, these terms make little sense. You can’t take a “fast” from dopamine; it’s a naturally occurring molecule in your brain and critical for movement and motivation. While addictive substances alter dopamine signaling, you can’t be addicted to dopamine itself.
Instead, the term dopamine in “dopamine fasting” is decorative, something Dr. Sepah himself admits: “Dopamine is just a mechanism that explains how addictions can become reinforced, and makes for a catchy title. The title’s not to be taken literally.”
But when the catchy title is taken away, we see the dopamine fast for what it is: advice to take a break from technology to reconnect with ourselves and others. This may be good advice, but it certainly isn’t a new idea, and it has little to do with neuroscience.
More significantly, the term dopamine has become a catch-all for sinful pleasurable activities. The bestselling book Dopamine Nation by Anna Lembke claims anything pleasurable, even reading a book, is potentially addictive because it releases dopamine.
While it’s true that pleasurable activities stimulate dopamine release, superficial similarities don’t mean two things are the same. The reward system of the brain responds to everything from love to video games to chocolate to methamphetamine. The involvement of the same brain regions doesn’t mean they have the same impact on us. Both addictive drugs and video games stimulate the release of dopamine, but addictive drugs stimulate much more.
But again, the neuroscience is largely irrelevant—we should just look at the behaviors associated with these activities. The majority of methamphetamine users develop a use disorder, resulting in severe health and behavioral problems. Despite how widespread technology use is, technology use disorder is rare; it’s estimated that around 3 percent of video game players develop any kind of behavioral problem associated with gaming (like neglecting schoolwork to the point of harming grades), and most of those problems are mild.
Part of the trouble here is pushing our understanding of neural mechanisms beyond their scope and assuming they provide a more solid basis for understanding than simple psychology. But often, the psychological level is much closer to the level of explanation we need than neuroscience. Take the classic misunderstanding of the brain hemispheres: the idea that the left hemisphere is analytical while the right hemisphere is creative. This isn’t just bad neuroscience, it’s bad psychology to boot.
First the neuroscience: it’s true there are hemispheric differences. Some functions occur more in the right or left hemisphere, something neuroscientists refer to as lateralization. Language production is a classic example—for most people, language production mostly happens in the left hemisphere. While you can find some functional differences between the hemispheres, nearly every complex activity involves both sides. Even for analytical tasks like solving math problems, there’s substantial involvement from both hemispheres. The left-brain right-brain personality theory goes further, claiming that some people (the logical type) are “left-brained” and others (the creative type) are “right-brained.” This, too, doesn’t hold—people don’t predominantly “use” one hemisphere over the other.
But again, the neuroscience here is largely irrelevant. We should instead look at psychology. Is it true that people are either logical or creative? Without looking at the brain, we can determine that no, it isn’t. Far from there being two categories of people (left-brained and right-brained), people fall in different parts of the distribution for each. Classic measures of intuitive versus analytical thinking styles have found they’re largely independent. If anything, there may be a positive association between analytical thinking ability and creativity, as scoring higher on an IQ test makes one more likely to score high on a test of creativity. A bad psychological model can’t be bolstered by bad neuroscience. You don’t need a neuroscience mechanism to explain something that doesn’t exist.
If you have a theory of personality types, how to study better, be more productive, or strengthen self-control, that’s great. It should be put to the test to see if it works. What’s important is whether there’s actually an effect. Does reading books often lead to addiction? Are people either analytical or creative? Does going for walks lower stress? These are straightforward questions about behavior. Pointing to possible neural mechanisms doesn’t help—the brain is complex and has many mechanisms. You can come up with all sorts of post hoc possible neural mechanisms to explain theoretical relationships between an activity and an outcome.
It would be nice if we had some specific, clear mechanism like right brain versus left brain to explain differences between people, but neuroscience can rarely offer something like this. Neuroscience is messy. Looking to neuroscience for wellness or productivity advice is like looking to cell biology for dietary advice. It might provide constraints and guidance for nutrition research, but what you really want is to have people eat stuff to see what happens.
Moving from behavior to neurons might feel like it’s digging down a level, getting rid of the messy complexities of psychology and leaving something more precise and scientific. But our understanding of the brain isn’t clearer or more complete than our understanding of behavior. Neuroscience is full of uncertainty, indirect measures, and interpretive gaps. More importantly, it operates one level down from the level of explanation we generally care about in our everyday lives: observable behavior and experience.
The human brain is a wonderfully complex organ. It’s arguably the most complex thing we’ve discovered in the universe. Neuroscience is a young science with a gargantuan task, made all the harder by the ethics of studying the living brain and the modesty of our tools for probing it. It has enriched our understanding of behavior, perception, and ourselves as biological beings. It’s helped clarify neurological and psychiatric pathologies, and offers hope for a future for treating them. Neuroscience can illuminate constraints and underlying processes, and work alongside psychological research to triangulate how cognition works in different domains. But positing a neural mechanism is no substitute for direct evidence that an intervention actually changes behavior, experience, or well-being.
This article, presented here in abridged form, was originally published in Skeptic magazine Vol. 20 No. 4.
For a scientist there is the act of studying life and the process of living it, and I have never wanted the one to overwhelm the other. Yet that is exactly what a life devoted to science will tempt you into—a life of studying and, otherwise, not much living. Yes, you may have a family and a few good friends, but most scientists embrace a sedentary life, often solitary and intensely internal. You concentrate on experiments and theory and perpetual reading. Your small area of study is the focus of your life and it is a focus you share with only a few others.
This kind of life never appealed to me. I was an out-breeder by nature, raised in a diplomat’s home. Foreign countries and languages were part of my upbringing. Since my father served in Europe, I walked through more cathedrals, museums, and art galleries than was healthy for any child. I had no interest whatsoever in European culture, nor in the academic disciplines based on it, but I did know five foreign languages and enjoyed meeting people in their own land, speaking their language, learning about their area of expertise.
When I finally found my intellectual home in evolutionary biology, it offered me exactly the right kind of foreign travel—in the rural, the bush, the exotic, and the wild. Evolutionary biology would take me around the world. And it would show me how to carve knowledge from everything I experienced in these travels with a single, very general logic—what would natural selection favor? How would one best survive and reproduce in these conditions? In short, I signed on to a system of thought that allowed me to study life and live it, sometimes very intensively.
Early Scientific Stirrings
When I was 12 years old I knew I wanted to be a scientist because it was obvious upon inspection (this was 1955) that none of the other intellectual areas—history, religion, English literature, or the social so-called sciences—provided much hope of actual, sustained intellectual advance. Initially I was attracted to astronomy, with the vastness and beauty of space and the billions of years it had been forming. I got a telescope, read Hoyle’s standard Astronomy text, and came up with the bi-stellar hypothesis for the origin of the solar system.
I liked that astronomy was a science. These people were not fooling around. They measured things and did so carefully. They tested assertions against data, and were capable of changing either, and they continually attempted to improve the precision of their measurements. When Einstein’s theory that gravity bent light was tested by the apparent change in position of a background star during an eclipse, we had dramatic evidence, measured with great precision, of exactly how large that bending was. But astronomy was not a discipline you could pursue in the 8th grade, so I soon turned to mathematics.
My father happened to have a large number of math books and out of sheer boredom one day I picked out one entitled Differential Calculus. I was 13 and it took me two months to master the book. It then took me two more to master the book next to it, Integral Calculus. It was a thrill to see that the algebra I knew could generate fields with real predictive and analytic power. That was only part of the beauty of mathematics and its scientific twin: you could learn the whole thing from the bottom up, if you were willing to put in the necessary concentration and time. The methodology was strictly anti-self-deception. Everything was explicit. Experiments, for example, were described so that others could attempt to replicate them exactly to see if duplicate results were achieved. Mathematical proofs were entirely explicit, every variable and every transformation exactly described.
Harvard and Psychosis
I mastered other corners of mathematics, mainly number theory, infinite series, irrationals, limit theory, and so on. I entered Harvard as a sophomore in pure mathematics but halfway through the year I saw the end of the whole enterprise and it was nowhere I wanted to be—at best, producing work of solid utility but far delayed, perhaps until the year 2250, and of no immediate use. Physics was for me no better, because, for one thing, I had no physical intuition at all. When they raised an object off the ground and told us they had thereby given it “negative energy” I headed for the door. And of chemistry and biology I knew nothing, having never taken a course in either at any level.
So I decided to give up truth for justice and become a lawyer. I would fight the good fights—early 1960s civil rights, poverty law, criminal law where you hoped the criminal was not too guilty, and so on. I asked people what you studied if you wished to pursue law and they said there was no such thing as “pre-law” at Harvard, so I should study the history of the United States. I declared that as my major and spent the next years learning about The Federalist Papers, the Constitution, Supreme Court decisions, and the like.
I developed an almost immediate distaste for the subject because it was obvious from the outset that U.S. history, as it was studied then, was not so much an intellectual discipline as an exercise in self-deception. The major question U.S. historians were tackling at that time was: why are we the greatest society ever created and the greatest people ever to stride the face of the earth? The major competing theories were answers to this question. One such theory cited the benefits of having a society designed by upper-class Englishmen; another, the benefits of an ever-receding frontier—that is, the increasing extermination of Amerindians from East Coast to West. The larger field of history was somewhat more interesting but still consisted of stories from the past, inevitably biased and lacking critical information—and I saw little hope of correcting either defect.
In April of 1964—my junior year at Harvard—I suffered a mental breakdown and was hospitalized for two and a half months. Prior to the breakdown I went through a five-week manic phase, with increasing mental excitation, decreasing sleep, and near-certainty that I was the first person to understand what Ludwig Wittgenstein was actually saying in the Tractatus, even though I was enrolled in my first-ever philosophy course. (Luckily, I was not taking it for credit.) I remember very little else from the manic phase except that I tried self-hypnosis to put myself to sleep. It did not work, and lack of sleep is what brings on a full breakdown. Finally, one night my friends, who had become increasingly concerned, deposited me at the Harvard Infirmary, where I could not answer the elementary question, “Who are you?” “A pregnant woman?” “A new-born baby?” But not “a thoroughly confused Harvard junior.”
Then came eleven weeks of self-admitted incarceration at three hospitals for treatment of my psychosis. Incarceration—even when voluntary and in a hospital—is never fun. You are locked in, no longer permitted to move about as you like. But by that time biochemists had come up with compounds that would knock the psychosis right out of you, and then hold it down afterwards to give you time to sleep and recover. After my final release in mid-June I spent the summer reading novels, one a day, and I have always blessed novelists since that summer. As a scientist, I scarcely even read the science I am supposed to, never mind a novel, but that summer novels allowed me to leave my own life and dwell in the lives of others, while my own self relaxed and repaired.
Harvard readmitted me in the fall. I spent most of that semester playing gin rummy all night long—in other words, still resting my brain. But I also decided to take a course in psychology, since my mental breakdown suggested it might be a useful subject to know. It soon became apparent that psychology was not yet a science, but rather a set of competing guesses about what was important in human development—stimulus-response learning, the Freudian system, or social psychology. None were integrated with each other and none could form the basis for an actual science of psychology, so I paid no attention to this subject.
The two law schools I had applied to—alleged to be among the most progressive—turned me down so I graduated with a degree in a field I had little respect for and no intention of pursuing. I returned home to live with my parents, unemployed, and with only vague hope of finding a job.
The Man Who Taught Me How to Think
I did get a job soon enough upon graduating, and in Cambridge, MA, at that. The company itself was a Harvard off-shoot—Educational Services Incorporated—set up to attract funding from the National Science Foundation for the purpose of developing new courses for school children. Just as there would be the “new math,” so there would be the “new social sciences.” We would teach five million 5th graders about hunter-gatherers, baboon behavior, the social life of herring gulls, and evolutionary logic, or so we thought.
For the first six weeks my employers had me read in various subjects and attend meetings. One day they called me in and asked me if I knew anything about humans, by which they meant anthropology, sociology, or psychology. I assured them I did not. “Do you know anything about animals?” No indeed. “In that case, you are going to work on animals.” This was because they cared less about the animal material. On such minor, chance events, one’s entire life may turn. I might have discovered biology later in life, but I doubt it and I doubt I would have ever again been in as good a position to exploit its many benefits.
Trivers (right) with evolutionary biologist William “Bill” Hamilton.
They assigned me a biologist to guide my reading and sign off on my work. His name was William Drury, the research director at the Massachusetts Audubon Society. For two years, my employer paid him to be my private tutor in biology. It was perhaps the greatest stroke of luck in my life. Before Bill Drury, I knew no biology. After working with him for two years, I knew its very core. He introduced me to animal behavior and taught me many facts about the social and psychological lives of other creatures. More to the point, he taught me how to interact with them as equals, as fellow living organisms. But he could have taught me all of that and still I could have left his charge without becoming a biologist. The key to my future, which he alone could supply, was his insight that natural selection referred to individual reproductive success, that it applied to every living thing and trait, and that thinking along the lines of species advantage and group selection—the then-popular vogue—had little or nothing going for it. From then on I was a theoretical biologist. I had wanted to be a scientist since age 13. Now at age 22, I had discovered my discipline—evolutionary biology.
The thrill I felt when I first learned the whole system of evolutionary logic at the individual level, applied to all of life, was similar to the feeling I’d had when I first fell in love with astronomy as a twelve-year-old. Astronomy gave you inorganic creation and evolution over a 15-billion-year period. Evolutionary logic gave you the comparable story over 4 billion years. Astronomy spoke of the vastness of time and space, while evolutionary biology did the same thing for the vast variety of living creatures. Living creatures have been forming over a 4-billion-year period, with natural selection knitting together adaptive traits all through that time, so living creatures are expected to be organized functionally in exquisite and ever-counterintuitive forms. As I had when I was first discovering astronomy, I felt a sense of religious awe upon encountering this way of viewing the world around me.
This is not to say it was all fun and games. Bill was a hard teacher. When you were wrong, he was sure to point it out—not cruelly, no overkill, just the simple truth. If you argued back, he was up to the challenge. That was how I learned what natural selection was and was not. Bill wasn’t interested in cradling your self-esteem. He was only interested in teaching you the truth. I liked that. I’ve always preferred knowledge over self-esteem. When I brought him population-advantage arguments for the existence of male antlers in caribou, he gently took me through the entire fallacy and then had me read two short pieces on opposite sides of the issue. Three days later I was a complete convert, willing to stop people on the subway and yell, “Do you know what is wrong with group selection thinking? Do you?”
One day I was watching a herring gull through binoculars side by side with Bill. In those days, a herring gull could not scratch itself without one of us asking why natural selection favored that behavior. In any case, I offered as an explanation for the ongoing gull behavior something that was nonfunctional and suggested that the animal was not capable of acting in its own self-interest. Bill replied, “Never assume the animal you are studying is as stupid as the one studying it.” I remember looking sideways at him and saying to myself “Yes sir! I like this person. I can learn from him.”
Bill taught me to think outside of the mainstream in many areas. You think monotheism is superior to polytheism? Bill would say, what do you know about polytheism, or for that matter monotheism? You assume monotheism is superior because it presumes to have a single order to the world, a single unifying logic and force, but what does this force represent? Bill taught me that polytheistic religions often had a better attitude toward nature than did the monotheistic ones. In Amerindian religions, there were spirits of the forest, of the canopy, of the deep woods, of the gurgling spring, and each captured aspects unique to these ecological zones. For someone like Bill, who had literally lived 15 to 20 years of his life in the woods, these distinctions were so much closer to his own view than that emerging from monotheism, which basically boiled down to a form of species-advantage reasoning.
On another occasion, Bill and I were discussing racial prejudice and the possible biological components thereof, and he said to me, “Bob, once you’ve learned to think of a herring gull as an equal, the rest is easy.” What a welcome approach to the problem, especially from within biology. We are all living organisms—make discriminatory comments about others at your own risk. In Bill’s view, it was always better to try to see the world from the view of the other creature.
The Greatest American Evolutionist I Ever Met
Ernst Mayr was the greatest U.S. evolutionist I ever met, possessing a very broad and deep knowledge of almost all of biology. He also had perhaps the strongest phenotype of any organism I have ever encountered. He lived to be 100 and published more books after age 90 than most scientists do in a lifetime, and not trivial ones either. He was strong in character, personality, and mode of expression.
I first met Ernst Mayr in the spring of 1966, in his office at Harvard’s Museum of Comparative Zoology. I was brought to him by Bill Drury, himself a former student of Mayr’s. The visit with Ernst Mayr was meant to reinforce my conviction that I could become a biologist and to offer me help along the way. Mayr was a short man, with a clear, piercing gaze and a warm countenance. After an initial discussion, Ernst told me that it was not at all impossible to become a biologist at my age and with my lack of background. “Where would you like to do your graduate work?” Ernst asked. I suggested that it would be nice to work with Konrad Lorenz. “No!” Ernst said. “He’s too Austrian for you, too authoritarian. Who else?” I suggested that it might be a good idea to work with Niko Tinbergen. “No,” Ernst said, less emphatically. “He is only repeating now in the ’60s what he already showed in the ’50s. Where else?” It was clearly time for some fresh input, so I asked him, “What would you suggest?” Ernst then flung his arms in a short arc and said in his German accent, “What about Haaarvard?” Dummkopf, I thought, striking the side of my head with my hand. Harvard indeed!
Robert Trivers on The Michael Shermer Show, discussing evolutionary theory and human nature.
The first class I ever audited in biology couldn’t have been better. It was a graduate course taught in 1966 by Ernst Mayr and George Gaylord Simpson, the famous vertebrate paleontologist, who was quite a spectacle himself. A short man, but much softer-looking than Mayr, he wore thick glasses and his eyes often seemed to shake, along with his hands. Yet when he stood up to speak, he spoke in clean, clear paragraphs, no editing required. At times one felt there should be someone at his side chiseling his words into stone, so well were they chosen.
I remember one discussion involving Mayr and Simpson about sickle cell anemia. After various parts of the evolutionary story had been reviewed—the frequency of the sickling gene in natural populations being associated with the spread of malaria—they had occasion to refer to the molecular mechanism by which the sickling gene worked. I believe it was Simpson who referred to a paper that had just come out in a cellular/molecular journal showing that the change to a sickle-shaped blood cell literally crushed the malarial parasite within the cell. However that may be, there was a glorious feeling coming from that class that evolutionary biologists at their best were the true biologists, those who mastered biology at all its levels, right down to the molecular details when these became interesting.
What made the moment so special was the use of molecular biology, for molecular biologists treated evolutionary biology with open contempt. They thought that evolutionary biology had all the intellectual excitement of a cross between stamp collecting and the study of dead languages. At their worst, they were insufferably arrogant and ignorant. While they could cow most evolutionists, they could not do so with Ernst Mayr. His expertise was the entire subject—biology itself—and when needed he took it upon himself to master every section and subsection. It did not hurt that he was physically and verbally dominant as well. Best way to put it, nobody fucked with Ernst Mayr. That gave us evolutionary graduate students support and backing, the value of which we were only dimly aware.
Jane Goodall and the Meaning of Death
As part of a seven-week expedition to East Africa in the summer of 1972, we took a two-hour boat ride across Lake Tanganyika from Kigoma in order to reach the famous Gombe Stream Reserve. The Reserve was a series of base camp buildings on the shore of the lake, and student sleeping quarters dotting the hills, within which roamed chimpanzees, three groups of baboons, and some leopards.
Within minutes of our arrival I was standing next to Jane Goodall and her husband Hugo van Lawick, watching a chimpanzee and her son on the hillside among some trees. This wasn’t just any primate. Flo was the most famous living chimpanzee, having been studied by Jane for more than ten years. She was a matriarch whose clan had formed the backbone of Jane’s writings and films. Flo was far past her prime when I saw her and, in fact, was afflicted with continual diarrhea. As we watched, she took a fruit and tried to smash it against a tree but she missed and struck her own leg. “I have never seen her miss like that,” said Jane. “I don’t give her two weeks to live.” My young postgraduate heart leapt: I had just arrived for a two-week visit and according to Jane I would be witness to history!
Jane knew her chimpanzees. Several days later I was watching a “waterfall display,” in which chimpanzees, especially adult males, work themselves into a frenzy in the presence of a waterfall, swinging back and forth on vines, hooting, hair erected, and so on. One can almost see, but not quite define, a religious sentiment, an elemental force on which later might be built something as huge as the Catholic Church. While our chimpanzees were starting to work themselves up, we were interrupted by the arrival of the shocking news that Flo was dead. I was with two graduate students at the time, and we turned, as if one, and padded back down the paths toward the hillside near the base camp. Turning off the main path we went through undergrowth and reached the bank of the small river that flowed down toward camp. Flo lay half in the water. Next to her knelt Jane. And capturing this moment for posterity was one of the largest cameras I had ever seen, on a tripod with Hugo behind the lens, just across the river. Flint, Flo’s son, meanwhile lay depressed in a tree 20 feet above his mother.
Thus began the human drama of Flo’s death. At the beginning, Jane appeared intent upon seeing a chimpanzee funeral. At the very least she hoped that one or more of Flo’s grown children might happen upon the body and give some interesting reaction. In fact, it never happened. Instead, the first night Flo remained where she’d died but Jane sat up the whole night nearby, with many of us for company, in order to deter scavengers such as bush pigs from carting off Flo’s body (one reason one would not expect to see many chimpanzee funerals). Jane was nostalgic, remembering the early days, nearly alone with the chimpanzees, enjoying the quiet beauty of the forest, coming to know Flo almost as well as her own mother.
In her response to the death of a member of a closely related species, Jane Goodall revealed the curious ambivalence we display toward the dead bodies of members of our own species. It is as if the body too sharply erodes the living creature for us to leave it alone. Yet from the standpoint of parasites alone, we surely should: any living creature carries a number of parasites and may have died from an ongoing parasite attack. The parasites can be expected to flee the dead body in search of living tissue—if any are there, they should swarm out of a corpse. This immediately suggests the value of burial. From the archaeological record we know that humans have practiced this custom for at least 75,000 years. But a sentimental component shows up from the beginning, as well, since even in ancient burials the deceased is interred along with various artifacts, such as utensils, weapons, and other items of value.
The effects of lingering memory are notoriously strong in various monkey mothers for their recently dead offspring; in some species they carry around the body of an infant in a clinging posture for as long as two days after its death. A much stronger attachment occurs in our own species, as when the exact spot of burial is preserved in memory, often with a marker, so that the desecration of such places by others is taken as an attack on the living relatives. Consider the outrage that recent attacks on Jewish cemeteries have evoked. The attackers, who dug up corpses and assaulted some of these, were regarded as more depraved and anti-Semitic than those who do harm to living Jews, as indeed they may be, since if they are that eager to desecrate burial grounds, God knows what else they are eager to do.
Richard Dawkins and the Concorde Fallacy
In 1975 I was in Jamaica on sabbatical when I received a letter from one Richard Dawkins enclosing a paper written by himself and Tamsin Carlisle pointing out that I had committed the Concorde Fallacy in my paper on Parental Investment and Sexual Selection, as indeed I had. The Concorde Fallacy is the notion that because you have wasted $10 billion on a bad idea—the exceedingly expensive supersonic plane Concorde—you owe it to the $10 billion to throw in another $4 billion in hopes of making it work. In poker, the rule is, “Don’t throw good money after bad.” Good money is money you still have; bad money is already in the pot; it is no longer yours. Just because you have $300 in a large poker pot (money gone) does not mean that you owe it to that money to lose another $200, with odds stacked against you. Every decision should be rationally calibrated to future pay-offs only, not past sunk costs.
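The poker rule can be made exact with standard pot-odds algebra (a textbook illustration, not Trivers's or Dawkins's notation): let p be your probability of winning, P the current pot (your earlier $300 sits in there, but it belongs to no one until the hand ends), and C the cost of the call. Folding is worth zero, so:

```latex
% Sunk costs drop out: calling beats folding only on future expectation.
% p: probability of winning; P: current pot; C: cost of the call.
\mathbb{E}[\text{call}] = p\,P - (1-p)\,C > 0
\quad\Longleftrightarrow\quad
p > \frac{C}{P + C}.
```

For example, with P = $700 and C = $200 you should call only if p > 200/900, roughly 0.22. The $300 already contributed affects the decision only as money sitting in the pot that anyone can win, never as a debt you owe it.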
I had argued in my paper that since females almost always begin with greater investment in offspring than do males, this committed them to further investment—they would be less likely to desert their offspring. Simple Concorde Fallacy; only future payoff is relevant. I consoled myself with the thought that there probably was a sex bias similar to the one I’d proposed, but only because past investment had constrained future opportunities. In any case, I wrote back that I agreed with them right down the line.
I soon received a second letter from Richard, saying that his actual purpose in writing me was, in part, to find out if I might be willing to write the Foreword for a new book he had written called The Selfish Gene. This was especially appropriate, he told me, because my work, more than anyone else’s, was featured in his book. What the hell, I thought, and he sent the manuscript along. There were indeed chapters based on individual papers of mine—“Battle of the Generations” (parent-offspring conflict), “Battle of the Sexes” (parental investment and sexual selection), “You Scratch My Back, I’ll Ride on Yours” (reciprocal altruism). I never deluded myself that my work was more fundamental than Bill Hamilton’s, nor did Richard, but we both knew that if you wanted to get some of the fun details filled in on a variety of subjects—not ants, fig wasps, or life under bark, but social topics relevant to ourselves—my work was a better bet than Bill’s.
Better than finding my own work given such a high billing, though, was discovering that Richard had a most pleasing combination of absolute mastery of the material with a wonderful way of expressing it—funny, precise, vivid. Let me give one example. He presented Bill Hamilton’s idea that a gene—or a tightly linked cluster of genes—could evolve if it could spot itself in another individual and then transfer a benefit based on the phenotypic similarity. But Richard added a vivid image, calling this “the green beard effect.” The name soon caught on in the scientific literature, so that everyone today refers to “green beard” genes, thereby summing up a complicated idea in a way that actually makes it easier to think through. The phenotypic trait is obvious: you have a green beard. And the genetic bias is obvious: you favor green-bearded individuals. Genes spread apace. Except what about a mutant that leaves your green beard intact but takes away your bias toward green-bearded individuals? Not at all obvious, yet Richard’s vivid way of writing facilitated thinking through the complexities.
So I said to myself, yes I will write you your Foreword, though I don’t know you from Adam. I wrote a good five-paragraph foreword, but it consumed about a month of my life, partly because I actually like to think before I write, which does slow down writing.
In any case, once I was finished, I looked at the essay and thought, why not slip in the concept of self-deception, whose function by that time I had linked to deceiving others? This I regarded as the solution to a major puzzle that had bedeviled human minds for millennia. And Dawkins, bless his soul, could hardly have set me up more nicely: “…if [as Dawkins argues] deceit is fundamental to animal communication, then there must be strong selection to spot deception and this ought, in turn, to select for a degree of self-deception, rendering some facts and motives unconscious so as not to betray—by the subtle signs of self-knowledge—the deception being practiced. Thus, the conventional view that natural selection favors nervous systems which produce ever more accurate images of the world must be a very naïve view of mental evolution.” Perfect set-up and not even in a paper of my own but in someone else’s book and an incredible bestseller at that.
Robert Trivers’ lecture for the Skeptics Society: Why does deception play such a prominent role in our everyday lives?
When I learned that Dawkins had taken on religion in the name of science and atheism, I felt he had finally found his true intellectual niche. No way could religion keep up with Richard. On June 13, 2011, I was about to begin delivering the Tinbergen lecture at Oxford, when as usual I misplaced something on the lectern. “Jesus Christ,” I muttered, and the microphone amplified it to the 400 people in attendance. I looked up and said, “I hope Richard Dawkins isn’t here.” Richard raised his hand. Before launching into my lecture I added, “I regard Richard Dawkins as a minor prophet sent from God to torture the credulous and the weak-minded, for which he has a unique talent,” as indeed he does. One nice concept in The God Delusion is that since most people dismiss all religions except one, why not go the final step?
Hanging with Huey and the Panthers
One of the few benefits of moving from Harvard to the University of California at Santa Cruz in 1978 was the chance to meet the legendary founder of the Black Panther Party, Huey Newton. Indeed he was waved in front of me as a reason to come to Santa Cruz. He was a graduate student in “History of Social Consciousness”—roughly equivalent to Western Civilization—who had the wit to see that “social consciousness” started long before the Greeks and, in some form, by the time of the insects. He had gotten his undergraduate degree from Santa Cruz in 1974 and befriended Dr. Burney Le Boeuf, the celebrated student of elephant seals. Burney had been preaching the beauties of evolutionary biology—my own work in particular—to Huey, and so I had the good fortune of meeting him after he had already been well-primed.
Trivers with founder of the Black Panther Party Huey Newton.
The Panthers began with patrolling the police. They would follow police at night or patrol until they came across police-citizen interactions. Huey might then emerge from a car with a law book in his hand and read out in a loud voice that, by law, “excessive” force cannot be used during an arrest. The police would invariably answer, “Our force isn’t excessive.” Huey would read them the legal evidence on that point. They would say, “Get the fuck out of here.” He would answer that a citizen is allowed to remain within a reasonable distance of an arrest. They would say, “Your distance is unreasonable.” He would flip to the relevant page and read the appellate ruling that declared a reasonable distance was ten yards or whatever, and it would go on like this.
Huey was armed. He knew he had the right to be armed and he knew he had the courage. So when he emerged from the car, there was usually a gun beneath the law book so that, should the interaction turn hostile or threatening, he could be ready with a response. All this was legal back then, riding shotgun, in effect, on the police themselves. During the war the Panthers waged between 1967 and 1973, roughly 15 officers died for every 35 Panthers. I believe the Panthers had the largest single effect on integrating police forces in this country. The reasoning being: hey, if Black people are firing at our officers, let’s have some Black officers firing back.
In the fall of 1978 I was informed that Huey, who was then in prison, charged with beating up a tailor in his home for calling him “boy,” wanted to take a reading course from me. I said that was fine but I wanted a paragraph from him on what he wanted to read. Before he could reply he was released from lock-up and traveled to Santa Cruz to meet me. We met.
We decided to do a reading course on deceit and self-deception, a subject I was eager to develop and on which Huey turned out to be a master. He was a master at propagating deception, at seeing through deception in others, and at beating your self-deception out of you. He fell down, as do we all, when it came to his own self-deception. Huey Newton was certainly one of the five or six brightest human beings I have ever met. Each of them has had a different sort of intelligence, and Huey’s forte was aggressive logic. And he moved his logical sentences as if they were chess pieces meant to trap you and render you impotent. “Oh, so if that is the case, then this must be true.” If you moved away from where he was pushing you, he would say, “Well, if that is true, then surely so-and-so must be true.” So he was maneuvering you via logic into an indefensible position. The argument often had a double-or-nothing quality about it where, in effect, he was doubling the stakes for each logical alternative, giving you the unpleasant sensation that you were losing more heavily as the argument wore on, making more and more costly mistakes.
According to Huey, the Black Panther Party started as a simple, old-fashioned robbery, which he was planning with a number of confederates. Problem was he was reading Frantz Fanon and becoming politically conscious. So he decided to use the robbery to start a new political party, as radical as its start-up funds. The hard part was selling it to his fellow robbers. They didn’t like the idea. “They almost killed me,” Huey told me, but finally he got them to sign off on it, and some of them even became Party members later.
Once, when he and I were driving through West Oakland, near Berkeley, Huey pointed out the site of the Party’s first political act. There was a particularly dangerous street corner at which local African-American children were run over nearly every year while attempting to cross on their way to school. Numerous requests had been submitted for a stop sign and a proper street crossing to protect the children. Nothing had been done. One day the Panthers appeared at the street crossing at the appropriate time, dressed in their leather jackets and berets and each carrying a rifle or shotgun. They proceeded to direct traffic, standing in the highway to permit safe passage for the children. Six weeks later the city put up, not a stop sign, but a stoplight at that very corner. Nothing like armed Black men to stir civic activity.
When the California legislature was meeting to decide whether to pass the “Huey Newton law,” as it was popularly called, which stated that you could no longer “ride shotgun” but instead had to keep your loaded gun in your locked trunk, Huey and 35 other Panthers showed up in Sacramento on the day of the vote, most of them carrying rifles. They tried to enter the legislature with their guns, which was allowed by law at the time. Police stopped them from entering, ordered them out of the building, and then shortly thereafter arrested them. Huey told me that many Black people argued against the public display: “Now they’re sure to pass the bill, why don’t you ease up the pressure?” Huey’s response was simple: they were going to pass the bill anyway, and he wanted to show Black people that they had the right to show up in front of the legislature with guns and confront a mass of armed police. That was one of the main points of the Party—to encourage African Americans to use their right to bear arms in self-defense. In 1948, in response to a lynching, President Harry Truman made the first and key decision in favor of equal gun rights for the Black man in the U.S., when he integrated the armed services. Before then, most Black soldiers sliced the carrots and did the dishes.
Many African Americans of more recent times have a strong ambivalence or hostility toward Huey and the Panthers because they believe he helped spawn the culture of Black gun violence among the urban young. There is probably some truth to the charge, but I think harsh drug penalties take a larger part of the blame. With the stakes so high for being caught selling illicit drugs, the chances of internecine war and murder inevitably rise as well.
A final point on Huey’s legacy: though people tend to assume that Huey was anti-police in principle, in fact he saw obvious value in community surveillance and organized protection. That’s why he regarded himself and Party members as on a par with the official police. He used to joke, “I’ve got nothing against the police as long as we are firing in the same direction.”
Looking Back and Looking Forward
I am 72 years old now, having devoted 50 years to the study of evolutionary biology, a combination of social theory based on natural selection wedded to genetics—the very backbone of all of life. I have had the good fortune to help lay the foundation for a variety of flourishing subdisciplines, from reciprocal altruism and parent-offspring conflict, to within-individual genetic conflict, and self-deception. Through this work, I have met many extraordinary individuals, several of whom were my teachers. I have also gotten to know up close and personal many non-human animals. I have “enjoyed” an unusual number of near-death experiences—due in part to my tendency toward intense interpersonal disagreements late at night.
Yet when I look back on this show, there is one thing I regret, and it is the absence of self-reflection. Yes I would live life and study it, but would I study my own life? Time and time again, the answer comes back “no.” Yet exactly whose life is more important to you: others’ or your own? “You self-deceptionist,” my first wife would sneer. “You talk a lot about parent-offspring conflict, yet you neglect your own son.” Guilty as charged. Too much ambition and too little thought about my family: wife, children, and myself.
Robert Trivers’ lecture for the Skeptics Society, based on a ground-breaking study that examines honor killings, which seem to make no evolutionary sense. Why would a father kill his own daughter and thereby eliminate half of his own genes from propagating into the next generation?
Major decisions, such as where to go when I decided to leave Harvard in 1978, were made without any serious thought at all—how about a name professorship at the University of New Mexico or a major offer from the University of Rochester with its powerful biology department? These were brushed aside with scarcely a glance. Instead I simply trotted off to the University of California at Santa Cruz because my wife and I had enjoyed a pleasant weekend with Burney Le Boeuf, his wife, and his elephant seals. I even remember mumbling to myself at one point, “Oh we’ll let autopilot handle this or that problem.” Autopilot? As a means of choosing which of three universities and cities you should live in for the next 15 years? By definition autopilot is the opposite of careful conscious introspection and evaluation—it is what you do when the path forward is obvious and no rational reflection is needed.
What is the way forward? There is one obstacle and there is one hope. The obstacle is self-deception, a powerful force that reasserts itself again and again. The hope is that after becoming more deeply conscious of one’s own self-deceptions and of the possible means of ameliorating them, one can make some real progress against this strong negative force.
A more costly form of self-deception involves my spiteful side. If you say something insulting, I want to strike back. If I fail to because I am slow or inhibited, trust me—whenever the event recurs in my mind, I will torture myself, sometimes for years, with the rant I should have delivered and may do so now at full volume alone in my apartment far away. And yet very often a spiteful response is not the best one. It can easily generate spite in return and down the staircase the two of you descend. Inside me there are two voices. One cries out, “Bob, you have made this mistake 630 times in the past and regretted every single one. Why not forgo it this time?” Then comes a stronger voice, “No, Bob, this time is different,” and there goes 631.
It was an eye-opener to me to discover recently the value of friends in breaking this cycle. I was telling a good friend about a nasty message I had gotten and my intended nasty response. He wanted to know why. Because, I said, she said this, that, and the third thing and it hurt. That was the key. He was unmoved by this argument. He’d suffered none of my internal hurt and was indifferent to it. Only three things were relevant to him: the message, my possible response, and its likely consequences. The likeliest consequence would be that she would write back an even nastier note and I would be further estranged for no good reason. Why would I want to do that? Why indeed. The Concorde Fallacy all over again—you owe it to your past spite, despite it being a sunk cost, to double down. Better, of course, to do nothing.
I went to a chiropractor in the 1980s for a stiff neck that had not improved after a month. A coworker praised him with the evangelical certainty usually reserved for miracle diets, used car salesmen, and people who have just read one book on nutrition. I was skeptical but adventurous, which is how most regrettable life decisions begin.
The adjustment worked. My neck improved. Worse still, my chronic asthma improved as well.
At the time, I was deeply unhappy in my first professional job after earning a bachelor’s degree in psychology and a master’s degree in applied behavioral science at Wright State University in Dayton, Ohio. I worked for a personnel-testing firm that marketed itself as scientific while relying on psychological instruments invented—without irony—in-house. Their psychometric rigor consisted largely of confidence, clipboards, and an aggressive font choice.
These tests produced false positives and false negatives with impressive symmetry, giving employers either a false sense of security or a convenient scapegoat. Qualified people quietly lost livelihoods. Chiropractic, by contrast, seemed refreshingly concrete. Hands. Spines. Patients who said they felt better. I imagined self-employment, ethical work, relief of pain, and perhaps even improved health. Compared with the pseudoscientific theater I was being paid to defend, chiropractic felt almost wholesome. In retrospect, this should have been a warning sign.
Why Chiropractic Made Sense at First
I had been trained in program evaluation, a discipline shaped by people obsessed with how to infer causality in the messy real world where randomization is often impossible and people insist on behaving like people. This was the era of stress research—Hans Selye, Thomas Holmes, and Richard Rahe—demonstrating that belief, expectation, and circumstance could predict outcomes as dramatic as Navy pilots crashing jets on aircraft carriers.
Chiropractic appeared to offer a humane alternative: a hands-on profession marginalized by a medical establishment overly confident in pharmaceuticals and procedures. Like many, I believed useful treatments had been discarded not because they failed, but because they threatened professional turf. I believed science had limits, and that those limits had been selectively enforced, preferably against someone else.
So I decided to become one myself, and in 1987 I graduated from the San Jose campus of Palmer College of Chiropractic and joined the ranks of doctors of chiropractic—eager, idealistic, and spectacularly unaware of the epistemic ecosystem I had entered.
Inside the Bubble
The dominant narrative was simple: conventional medicine had unfairly dismissed us. Scientific objections were cherry-picked. Our methods worked; medicine simply refused to look properly, or long enough, or with an open heart and an open mind liberated from all that oppressive critical thinking.
On weekends, I studied at Stanford’s Green Medical Library and noticed something curious: the library did not carry chiropractic’s premier scientific journal. I proposed that Palmer purchase a subscription for Stanford. We did. Stanford thanked us politely, in the tone such institutions reserve for unsolicited fruit baskets.
Old-guard chiropractors complained that we risked spilling our secrets to scientific medicine. The truth is, chiropractic education exists in a parallel universe. Its founding figure, D.D. Palmer, died in 1913, but his metaphysical afterlife remains active. Subtle vital forces, innate intelligence, and spinal “subluxations” hover just beneath the surface of even the most modern curricula, like software that never quite finishes installing.
The 1990s brought chiropractic its brief flirtation with legitimacy. The NIH’s Office of Alternative Medicine was established, fueled in part by philanthropic enthusiasm from abroad.
I interviewed for a position at an English health estate owned by Sir Maurice Laing, who had both an interest in alternative medicine and the resources to indulge it. I declined the offer, tethered as I was to America, but not before inserting myself into meetings with leaders of British complementary medicine.
To the British Committee on Complementary Medicine, I proposed a heresy: stop arguing about putative mechanisms; first determine what works, for whom, and under what conditions. Program evaluation before explanation. My suggestion was politely ignored. Before assuming the throne, King Charles quietly stepped away from his advocacy of complementary medicine. One suspects reality intervened, possibly with charts.
The Cracks Appear
After years of practice and research involvement, my discomfort grew. Chiropractic diagnostics increasingly failed a basic test: face validity.
My practice partner believed she could diagnose disease by testing the strength of specific muscles, a method known as applied kinesiology (AK). Patients loved it. The ritual was impressive. They asked why I did not perform AK, as though I were withholding a party trick. I asked her once how often her diagnoses were correct. “About half the time,” she said, without irony.
This is precisely the accuracy one would expect from a fair coin flip, except coins do not bill insurance companies or require continuing education credits. These tests were never compared to gold standards, so strictly speaking they were never correct or incorrect at all. They simply were.
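Her figure is easy to benchmark. Here is a minimal, purely illustrative sketch (my own, not from any chiropractic study) showing why a random yes/no “diagnosis” hovers at about 50 percent accuracy no matter how common the condition actually is:

```python
import random

# Illustrative sketch only: a "diagnostician" who flips a fair coin is
# right about half the time, regardless of the condition's prevalence.
def coin_flip_accuracy(prevalence: float, n: int = 100_000) -> float:
    correct = 0
    for _ in range(n):
        has_condition = random.random() < prevalence  # patient's true status
        diagnosis = random.random() < 0.5             # fair-coin "test"
        correct += (has_condition == diagnosis)
    return correct / n

for p in (0.1, 0.5, 0.9):
    print(f"prevalence {p:.0%}: accuracy ~ {coin_flip_accuracy(p):.2f}")
# Each line prints roughly 0.50: "about half the time," with no spines,
# muscle tests, or insurance billing involved.
```

The expected accuracy of a coin-flip test is 0.5 × p + 0.5 × (1 − p) = 0.5 for any prevalence p, so a reported hit rate of “about half the time” conveys no diagnostic information at all.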
What finally broke me was not only the epistemology—it was the economics. Chiropractic education devotes astonishing energy to practice management. Seminars, workshops, and consultants descend with the same message delivered in different fonts: sell care plans, sell frequency, sell fear. Some consultants you paid for one-on-one counsel even offered referral fees for steering fellow chiropractors their way. My millionaire business coach promised me $1,000 per referral that signed up—but always called a few weeks later with a sad reason not to pay.
The mantra was explicit: ABC—Always Be Closing. The bottom line of all the chiropractic continuing education and coaching programs was the lie that chiropractic is crucial for overall health, and the bottom-bottom line was that advising chiropractors is much more profitable than being one.
Patients were no longer people with problems to be evaluated; they were “cases” to be converted. Thirty-six-visit plans were praised. Lifetime care was normalized. Preventive adjustments were marketed with the confidence of seatbelts and vaccines—minus the evidence, testing, and regulatory oversight.
Those who questioned this model were told they lacked confidence, commitment, and the proper chiropractic spirit. Skepticism itself became a personal failure. Success was measured not in clinical outcomes, but in collections. The resemblance to the psychometric firm I had fled years earlier was no longer subtle. With a quiet corruption of Avedis Donabedian’s classic framework—structure, process, and outcome—chiropractic leaders instead sold belief, structure, and certainty. And certainty, I learned, is a remarkably precious commodity in the chiropractic world.
Indeed, one of the central problems with chiropractic is its frank comfort with ignoring evidence in favor of belief systems that “just make sense.” Plausibility substitutes for proof. Confidence substitutes for outcomes.
In practice, chiropractic operates at two largely disconnected levels of knowledge. At the top sit researchers, faculty, and administrators—those who define the profession’s identity—yet who typically know very little about the day-to-day realities of practice. At the bottom are practicing chiropractors, submerged in diagnosis codes, billing rules, collections, hiring and firing staff, training front-desk help, negotiating with insurers, and keeping the lights on.
The irony is that the most influential voices shaping chiropractic practice are almost entirely those who do not practice. These are the “paycheck chiropractors,” whose authority is inversely related to their proximity to the trenches. They do not argue with insurers. They do not explain denied claims. They do not rehire front-desk staff every six months. Yet none of this has ever impaired their confidence in advising clinicians how to act, what to treat, and what to expect from every imaginable or unimaginable combination of symptoms.
Practicing chiropractors, for their part, are remarkably comfortable with this arrangement. When things wobble or fail, blame flows inward. The practitioner assumes personal deficiency: insufficient belief, insufficient technique, insufficient commitment. It functions like a built-in self-protection virus for the profession—very convenient for avoiding collective accountability.
This arrangement is also useful when graduates eventually notice three inconvenient facts:
Chiropractic does not compete well with medicine—or even with itself. When studied carefully, its apparent effectiveness dissolves into non-specific factors: expectation, attention, ritual, and natural history. When chiropractic researchers properly control for placebo and natural recovery, the specific effect of spinal manipulation reliably shrinks or disappears altogether. Paradoxically, better science makes chiropractic look worse.
Structurally, the profession is a two-tiered, one-directional system that rarely improves, because the real problems are invisible at the top and permanently personalized at the bottom. Some leaders continue selling early-20th-century dogma, steering chiropractic safely away from medicine by avoiding diagnosis and disease altogether.
At some point, the pattern became impossible to ignore. When a profession cannot hear its own failures, cannot correct its own assumptions, and cannot tolerate honest uncertainty, leaving stops feeling like betrayal and starts feeling like hygiene. That was when I knew I was done.
Many of my former classmates reached the same conclusion, some more quickly than I did. Privately, several admitted that much of what we had been taught was baloney. They were not amused. A $200,000–$400,000 investment over four years had produced clinicians who knew just enough medicine to realize how little they could safely treat. The coping mechanism was predictable: at least we help 50 percent of patients—better than nothing.
Some eventually realized that 50 percent accuracy in a two-outcome probability space is not success at all.
Do any of these statements resonate? Make you angry? Do some not even merit a response?
I can’t tell you exactly how I would respond to someone who defended Hitler, but I know what I would not do: stalk him on social media, contact his employer to try to get him fired, or ask my government representative to help criminalize such talk.
Does this make me a free speech absolutist? Not quite. Like Robert Jensen, a professor emeritus at the University of Texas at Austin and prolific blogger, I suspect that most people who call themselves free speech absolutists don’t actually mean it. They wouldn’t countenance speech like “let’s go kill a few Germans this morning. Here, have a gun.” Instead, Jensen writes, they’re prepared to “impose a high standard in evaluating any restriction on speech. In complex cases where there are conflicts concerning competing values, [they] will default to the most expansive space possible for speech.”
In other words, they’re free speech maximalists. A more contemporary and nuanced variant of absolutism, the maximalist position grants special status to free speech and puts the burden of proof on those who wish to curtail it. While accepting some restrictions in time, place, and manner, free speech maximalism defaults to freedom of content. It aligns with the litmus test developed by U.S. Supreme Court Justices Hugo Black and William O. Douglas, which holds that government should limit its regulation of speech to speech that dovetails with lawless action:
Let’s go kill a few Germans? Not kosher.
The only good German is a dead one? Fair game.
Some pundits view this position as misguided. A 2025 Dispatch article titled “Is Free Speech Too Sacred?” laments America’s descent into an era of “free speech supramaximalism,” in which “not only must speech prevail over other regulation, but nearly everything is sooner or later described and defended as speech.” A New Statesman essay about Elon Musk, written a few months before he acquired Twitter (now X), decries Musk’s “maximalist conception of free speech usually adopted by teenage boys and libertarian men in their early 20s, before they realise its limitations and grow out of it.” The implication: free speech maximalism is an unserious pit stop on the way to more mature thinking. Only testosterone-soaked young men, drunk on their first taste of freedom, would spend more than a minute on such a naïve view.
This 69-year-old woman disagrees. I grew into my passion for free speech during the early months of the COVID-19 pandemic, when the pressure to conform in both word and deed reached an intensity I had never witnessed before. Any concerns about the labyrinthine lockdown rules elicited retorts like “moral degenerate” or “mouth-breathing Trumptard.” (Ask me how I know.)
Unexpectedly jolted into awareness of free speech principles, I began reading John Stuart Mill and Jean-Paul Sartre and writing essays about freedom of expression in the COVID era. One thing led to another, and in 2025 the newly minted Free Speech Union of Canada found a spot for me on its organizing committee. What most of us in the group shared, along with age spots and facial wrinkles, was a maximalist position on free speech. Perhaps we’re all immature. Or maybe we’ve lived long enough to understand exactly what we lose when free speech goes AWOL.
But but … critics sputter … what about hate speech? Free speech maximalism posits that you can’t regulate an inherently subjective concept. As Greg Lukianoff and Rikki Schlott note in their 2023 book The Canceling of the American Mind, “as soon as you start legislating based on a concept as loosely defined and subjective as offense, you open the floodgates to every group and individual claim of offense.” This argument may well explain why Canada’s proposed Bill C-9—the Combatting Hate Act—remains stalled after protracted parliamentary debate.
Is “you cannot change sex” hate speech or merely opinion? Is “you have a big Black butt” an offensive remark? It depends on who says it, how it’s said, and who hears it. One person may react to the big butt comment with reflexive outrage, while another may simply shrug. When said tenderly to a lover, the statement may elicit a full-throated laugh. Offense is in the eye of the beholder.
A case in point: the U.S. Patent and Trademark Office refused to register the name “The Slants” (an Asian rock band) because of its derogatory, or hateful, connotations. The bandleader sued, and in 2017 the Supreme Court ultimately agreed that “giving offense is a particular viewpoint” and a law restricting expression on the basis of viewpoint violated the First Amendment.
Here’s the thing: when you embrace viewpoint diversity as an ideal, you tend to get less offended about things. You may profoundly disagree with a statement, but it won’t cause you to puff up in outrage. Someone can tell you that the sky is green, or that women can’t think logically, or that Hitler was right about some things, and you allow the words to bounce off your emotional core. It’s a liberating habit of mind.
And if you do get offended? Big whoop. You’ll survive. During a recent bus trip from Whistler to Vancouver my seatmate, a doctor, took it upon himself to share his candid opinions about women with me: they can’t take a raunchy joke, they make poor leaders, they’re responsible for cancel culture, and society would work better if they stayed home. Ugh. Seriously? But I survived. I wasn’t traumatized. Truth be told, I quite enjoyed our conversation. He listened as much as he spoke. I even found a few grains of value in his arguments, and perhaps a couple of my retorts gave him pause. And that’s what it’s all about, isn’t it? Humans of all stripes challenging and learning from each other.
Here I must pause to express disappointment in my own sex. Women, I have found, value free speech less than men do, and studies corroborate my perception. In one survey, 71 percent of men said they gave priority to free speech over social cohesion, while 59 percent of women held the opposite view. An article reporting on the survey affirmed that “across decades, topics, and studies, women are more censorious than men.” Boo.
Even with carte blanche to express ourselves, it’s impossibly difficult for us humans to lay bare our true thoughts. Self-censorship is baked into our DNA. Free speech maximalism serves as a counterweight to this force. It allows us to rise, even if timidly, above the lead blanket of social conformity flung over us by the finger-wagging classes. By exposing little bits of our true selves, we shed light on the glorious contradictions in the human condition—a benefit that serves not just angry young men, but women with age spots and everyone else.
To those concerned about the dangers of loosening our tongues, I offer Greg Lukianoff’s bracing maxim: “You are not safer for knowing less about what people really think.”
Practically everyone has heard of the tick-borne infection known as Lyme disease, even if they don’t live in a high-risk area. Some are aware of long-standing controversies about the consequences of infection or how best to treat it. Our concern here is with a newly emerging controversy about Lyme disease—namely, the theory that it originated as part of a bioweapons program. As U.S. Representative Chris Smith of New Jersey can be heard saying while participating in a Department of Health and Human Services roundtable on Lyme disease: “They were weaponizing Ixodes burgdorferi [sic], as we all know.”1
Part of this theory is that Lyme disease’s origins can be traced to the United States Department of Agriculture’s (USDA) Plum Island Animal Disease Laboratory, where it allegedly was developed as a biological weapon, either as a genetically modified organism or by “weaponizing” native ticks to carry a secret pathogen. Plum Island, in fact, would seem to be a good place to center these hypothetical activities, because it has exclusively been the site of a restricted-access USDA facility since 1954. The facility has long conducted research on foreign animal diseases that would devastate the livestock industry in the United States if they were ever introduced accidentally or purposefully as a biological weapon. This research is essential for developing vaccines and measures to prevent potential outbreaks of animal diseases, such as foot-and-mouth disease, African swine fever, and other diseases of domesticated animals.
Plum Island is located off the eastern end of Long Island and about seven miles across the water from the town of Lyme, Connecticut, where what seemed (at the time) to be a new tick-borne disease was identified in the 1970s. Over the past five decades, Lyme disease—as that illness is now called—has been documented in several other states in the northeastern, mid-Atlantic, and north-central U.S., as well as parts of states in the Far West. It is a tick-borne infectious disease affecting tens of thousands of people each year and at an enormous cost to the public’s health and people’s well-being.
The issue of whether the emergence of Lyme disease is the consequence of natural processes or might have originated from humans—namely, as a designed bioweapon, subsequently inadvertently or intentionally released—has become a hot topic in the news, social media, and podcasts. It has prompted calls for an investigation from members of Congress, and an amendment from Representative Smith is now part of the recently passed defense authorization bill signed by the White House. It would seem more convenient to have somebody or some government institution to blame for an emerging infectious disease, rather than natural events. But in reality, nature poses a greater threat than human design or error as a source of new infectious diseases and epidemics for humans and other animals.
Plum Island is a high-containment facility only reachable by boat from Long Island and Connecticut for the daily transport of authorized personnel. Visitors are not allowed, and any intruders are promptly escorted off the island. Deer and other wildlife that may be susceptible to infections and occasionally swim to the island are immediately culled by sharpshooters from helicopters. Such high security has long led to rumors and suspicion among neighboring communities that something nefarious must be going on at Plum Island. The island undeservedly gained notoriety in The Silence of the Lambs book (1988) and film (1991), in which Hannibal Lecter refers to it as “Anthrax Island.”
One of us (DF) worked on Plum Island during the 1990s, conducting research on African swine fever under a USDA research contract with Yale University. African swine fever is a tick-borne disease native to Africa, and it is highly infectious among pigs even without ticks. Access to infected animals required two changes of clothing and a shower before passing through each of two air-tight chambers. But no protective gear was required for personnel, as these animal diseases do not have the capacity to infect humans. If they did, self-contained spacesuits would be required, as are used for Ebola and other dangerous human pathogens in BSL-4 labs. The Plum Island facility had no capacity to work with human pathogens, and there is no evidence that scientists there ever worked on Lyme disease.
The second of us (AGB) participated in the early 1980s in the discovery and then isolation of the bacterium that causes Lyme disease. The team accomplished this using ticks collected at the far end of Long Island, not far from Plum Island. That sounds suspiciously like an escape from the Plum Island lab. But Long Island and Lyme, Connecticut, were not the only places where Lyme disease was occurring at the time. The availability of cultured bacteria led to diagnostic assays that were quickly developed and implemented. Application of these blood tests for laboratory diagnosis in many other places in the United States revealed that the infection was not limited to a small area near Plum Island and had not been so restricted for many years.
Besides New York and Connecticut in the early 1980s, cases were soon identified in other northeastern states, north-central states like Minnesota and Wisconsin, and even across the country in northern California. This is a disease only transmitted by ticks, which crawl and, unlike mosquitoes, do not fly. Even if attached to a deer, mouse, or bird, it would have taken decades for the infection to spread so widely if it had been released from a single place at the continent’s end.
Evidence that the bacteria were already present in the area long before any theorized release from Plum Island came from museum specimens of preserved ticks and field mice that had been collected in the northeastern U.S. in the 19th or early 20th century. In retrospect, cases of Lyme disease in different parts of the country had been described by physicians in medical case reports from the 1960s.
Further justification for rejecting a Plum Island bioweapon release theory was recognition that Lyme disease, under other names, had clearly been occurring in Europe since at least the early 20th century, decades before it was first named as a new disease in North America. In Sweden, the Lyme disease agent was recovered from chronic skin rashes that had started years before it was found in some New York ticks. Subsequently, the causes of Lyme disease were identified in ticks and mammals, as well as in patients in China, Japan, Korea, and Russia. Why would there be a need for a new bioweapon delivered by ticks if the infection was already occurring in many parts of the world?
The bacterium that was isolated from those ticks from Long Island was the first example of what was soon recognized to be a species meriting its own name. But there was nothing strange about it at the time or since, even after intensive study. There is nothing to indicate that it was a genetically modified organism or was constructed from parts of other bacteria, as has been suggested. Genetic analysis of Lyme disease bacteria shows that they originated on the Eurasian continent and spread to North America thousands of years ago.
That first isolate was representative of but one strain out of several that were occurring then and now in the northeastern U.S. There are other strains in the Midwest and another set in the Far West. Europe has its own strains of the bacteria. This pattern of differences is what would be expected for bacteria that have been widely distributed for millennia and evolved to adapt to their unique local circumstances over time. If the Lyme disease agent were some kind of Frankenstein germ, malignly created and released upon the world, one might as well invoke space aliens that had visited the Earth thousands of years ago.
What’s the more plausible explanation for the increase in numbers and distribution of Lyme disease that began in the last half of the 20th century? It is clear to us that Lyme disease is a product of nature and has been present for millennia throughout the continents of Eurasia and North America. What has changed to cause it to become recently epidemic is the reestablishment of forests and deer, which has led to a proliferation of ticks over the past half-century. Massive deforestation in the Northeast and upper Midwest before 1900 for agriculture and manufacturing resulted in the near extermination of deer, the natural host of the deer tick that is responsible for transmitting Lyme disease in these areas. Long Island is the only known location in the Northeastern U.S. where white-tailed deer and deer ticks have persisted since colonial times.
Another refugium was in northern Wisconsin, where a case of Lyme disease from the 1960s was retrospectively identified. From these two ancient refugia, Lyme disease has slowly spread to neighboring states as forests regenerated, and as deer and ticks returned to their former ranges. This spread has been well documented since the original discovery of the Lyme disease agent more than 40 years ago. The same history of reforestation of areas previously used for agriculture and industry accounts for the increase and spread of the Lyme disease bacteria and the ticks that transmit them in Europe.
Can we call this increase in Lyme disease in various parts of the world the result of “human activities”? Of course. Without human population growth and the concomitant advances in agriculture and industry, Lyme disease would still be but one of many infections that ticks have transmitted among mammals, birds, and reptiles in woodlands for eons. But the resurgence of Lyme disease is just one aspect of a broader process of demographic, environmental, and social change occurring in developed countries of North America, Europe, and parts of Asia. We need not attribute it to the intentional or inadvertent actions of some government workers in a high biosafety level laboratory off the coast of Long Island.
“Gretchen, I’m sorry I laughed at you that time you got diarrhea at Barnes & Noble. And I’m sorry for telling everyone about it. And I’m sorry for repeating it now.”
—Karen Smith in Mean Girls 1
Popular culture, including literature and film, often extols the value of friendship and the important emotional role it plays in the lives of women and girls. From The Divine Secrets of the Ya-Ya Sisterhood, Memoirs of a Geisha, and Anne of Green Gables to films such as Steel Magnolias, Thelma and Louise, and Bend It Like Beckham we see portrayals of female friendship that highlight social and emotional support as it occurs across the lifespan. Such tales are often centered on self-discovery, and the value of generous and loyal friends. And yet, popular culture has also given us products that focus on the dark side of female relationships in films such as Mean Girls (the theatrical release poster had the tag line “Watch Your Back”), the television show Gossip Girl, and numerous songs from artists like Taylor Swift with Better Than Revenge and Katseye with Mean Girls. These works emphasize the competition that can occur between women, even those who appear to be friends, over sexual partners and social status in one’s peer group. The ubiquitous nature of social media today has also raised concerns about this type of aggression between females. While there are substantial benefits to friendship,2, 3 there can also be significant costs.4 Our friends can be our most trusted allies but they can also betray us in the name of competition. Before delving into the depths of female friendship and fiendship, it is important to understand evolutionary forces that shaped same-sex friendships in general as well as how natural selection may have differentially influenced male versus female same-sex friendships.
In general, across our evolutionary past, same-sex friends would have played a crucial role in our survival and fitness. For example, potential benefits of friends would have included protection against rivals or other threats to survival, enhancing one’s status and access to mates or resources, transmission and development of culturally important skills, social support in raising children as well as navigating other relationships, and emotional support to help manage stress and social challenges.5 The number and quality of these same-sex relationships are associated with better mental well-being and physical health for both men and women.6 However, since men’s and women’s same-sex friendships evolved in different contexts to solve somewhat different adaptive problems, there are significant differences in their same-sex relationships.7, 8 Friendships between men evolved in a side-by-side group context. Historically, this would have been men forming alliances with one another for purposes of hunting, protection, and warfare. As such, they tend to center around a shared activity (e.g., sports in modern society). In addition, these friendships tend to be hierarchical in nature and often involve direct competition (including physical contests of strength, skill, or both). In contrast, women’s same-sex friendships evolved in a face-to-face, one-on-one context in which women formed alliances with one another for purposes of alloparenting (that is, the care of offspring by individuals who are not their biological parents, from feeding and grooming to protection and socialization), emotional support, and sharing of resources and social information. Historically, upon marriage, women typically left their own kin behind and relocated to their husband’s community.9 Therefore, in the absence of others who would be invested in their well-being, these social alliances between women would have played an important role in their own survival as well as that of their offspring (and therefore of the group they had joined). Today, friendships between women are more intimate than friendships between men and tend to center around mutual disclosure, trust, and empathy. Even in contexts where there is an activity involved (the popular “Stitch-n-Bitch” groups, for example), the shared activity typically tends to come second to the emotional bonding between the women. Compared to their male counterparts, competition between female friends tends to be more indirect and involves reputation-damaging gossip, social exclusion, and subtle undermining of each other’s interests.
In addition to differences in friendship interaction style, the structure of male and female same-sex friendships also influences how men and women react to interlopers who may threaten these friendships.10 Male same-sex friendships evolved in a context that historically included banding together to defend their group against threats from other groups. Consistent with this, men (compared to women) report greater feelings of friendship jealousy when primed with a threat of intergroup conflict. Furthermore, since a larger coalition of same-sex friends would mean greater benefits accrued from those relationships, men report greater friendship jealousy (compared to women) over the prospect of losing acquaintances. Women, on the other hand, tend to engage in one-on-one interactions with their same-sex friends, and report experiencing greater loss and friendship jealousy over the prospect of losing a best friend (compared to men). This loss is compounded by the fact that, compared to men, women invest more time and energy to develop their close, intimate relationships, thus making it harder to replace their close friends. The greater self-disclosure between female close friends also makes the dissolution of such close friendships potentially more damaging to one’s reputation if the ex-friend spreads rumors about them or shares their secrets. These features motivate women to protect their friendships.
The shift from friendship to fiendship comes into play when jealousy is triggered by the friend themselves versus an interloper. As indicated above, women tend to use indirect competition strategies. Specifically, while men are more likely to engage in direct physical aggression with their competitors, women are more likely to engage in relational aggression,11 which involves attempts to harm others by damaging their social ties.12 Often done covertly, this social sabotage involves behaviors such as excluding the so-called friend (e.g., giving them the silent treatment or intentionally leaving them out of some interaction), gossiping or spreading rumors about them (e.g., sharing their secrets), and attempting to turn others against them through public embarrassment. Relational aggression in female same-sex friendships seems to peak in adolescence.13 Since this aggression occurs between friends, not just rivals, it is often perceived as a personal betrayal. Relational aggression can also be subtle, though, making it hard for the so-called friend to detect. It could include backhanded compliments or manipulating the “friend”: setting them up for failure or public embarrassment by encouraging them to wear an unflattering outfit or approach a potential romantic interest knowing they’ll be turned down. Since intimacy and emotional closeness are prioritized in female same-sex friendships, being betrayed or excluded by someone one considers to be a close friend can be especially hurtful. Research suggests that this type of betrayal in adolescence is often associated with negative academic and psychosocial outcomes, including feelings of depression, anxiety, poor self-image, suicidal ideation, and social withdrawal as they find it hard to trust others.14, 15 Prospective longitudinal studies have found that girls’ peer victimization experiences of relational aggression between ages 7 and 10 were associated with an increased risk of self-harm behaviors in late adolescence.16 The observed self-harm behaviors included cutting themselves as well as swallowing pills, with roughly 27 percent of adolescents reporting they engaged in those behaviors with suicidal intent. In addition, other longitudinal studies suggest that girls who experience peer victimization in middle childhood are more likely to develop eating disorders by early adolescence.17
While it is clear that women engage in aggression, albeit commonly in a different form than men, it’s important to understand the motivation behind it as well as the forms it takes. In general, greater female aversion to risk of physical injury promotes the pursuit of low-risk and indirect strategies of same-sex competition. What are the drivers behind such competition between women and girls? Largely, intrasexual competition for social status and mates. For the majority of human history, women have lacked direct access to resources, relying on male provisioning and protection for themselves and for their children. As a result, same-sex peers are primary rivals for acquiring and retaining partners willing and able to invest and protect. We see echoes of this in the behavior of modern women, who dislike and work actively against rivals who threaten their romantic prospects, often directing their animosity toward physically attractive and sexually unrestricted peers. Cross-cultural research has demonstrated that men have a preference for physically attractive, youthful women as sexual partners,18 and studies examining female behavior, from online dating profiles to trends in cosmetic aesthetics, suggest that women compete with other women over their attractiveness to men, aiming to look more youthful and attractive than their competitors.19, 20, 21 It is worth pointing out that beautification can be seen as a tactic in competing for male attention22 but also as a vehicle for pursuing social status in social and workplace spaces.23 High status can also influence access to resources and valuable allies. High-status individuals are in demand as friends. It is also worth noting that high-status girls bully lower-status ones, though they do so using less overt strategies than boys, sometimes taking on an authority or maternal role for the group and enforcing equality among the rest under threat of social exclusion.24 A number of studies suggest that high social status in adolescent girls, especially when indexed by peer perceptions, is linked to dating success, sexual activity, and the use of indirect aggression. It is somewhat less clear whether status leads to increased aggression (due to lower costs) or whether covert aggression leads to increased popularity. However, some evidence suggests that physical attractiveness results in greater social status, which can be defended through indirect aggression—by keeping attractive rivals from one’s own social circle.25
A wide range of studies have examined aspects of intrasexual competition in women and how they play out in terms of friendship. Across several studies, April Bleske-Rechek and colleagues found that women are less willing to be friends with a woman who is sexually promiscuous; women perceive sexual promiscuity as undesirable in a same-sex friend, they deceive their friends about their own engagement in mate poaching, and they are more likely to be upset by imagined scenarios of a same-sex friend acting sexually available toward their partner, as well as by friends’ attractiveness enhancement.26 The researchers also found that attractiveness plays a role in the perceptions of rivalry within friendship dyads, with both members of a pair agreeing on who was the more attractive woman (outside judges agreed as well), and the less attractive women seeing more rivalry in the friendship than their more attractive friend.27 Interestingly, at least one study has also shown that these competitive tactics are sensitive to costs in that women are more likely to engage in clothing-based enhancement when with an acquaintance than with a close friend, but even then only when there was a desired male present. This again suggests that intrasexual competition mechanisms are sensitive to possible friend relationship costs and are more likely to be activated when a rival is seen as a legitimate threat (such as being more attractive).28 Despite being in possible conflict over mates or status, women rely on their cooperative friendships and there is a cost to jeopardizing them.
The underlying reason is that women rely on same-sex friends for help, information, and other forms of social support. As previously described, ancestral mating and residence patterns often created an environment where women needed to build close social relationships with other biologically unrelated women. As a result, women may not only be averse to open competition but also have strong friendship preferences that encourage them to avoid other women who are highly competitive or highly status driven in favor of those who show indications of being kind, committed allies in order to develop valuable cooperative supportive friendships. Our ancestral adaptations for forming friendship ties likely shaped preferences designed to acquire same-sex friends able to help women accomplish evolutionarily recurrent tasks such as competing for status among peers, access to social information and resources, as well as caring for offspring. Recent studies of friend preferences suggest that women (particularly in comparison to men) highly value female friends who provide emotional support, intimacy, and social information.29 And even though women may report that their friends compete with them for attention from desirable men, they also report substantial emotional support as well as mating advice and companionship in mating contexts (bars, clubs, etc.).30
However, success may be best achieved by pursuing both cooperative and competitive goals at the same time. Researchers such as the late Anne Campbell and more recently Tania Reynolds have highlighted how women can pursue both by cloaking their intrasexual competition in prosocial gossip or other relatively low-risk tactics that can do reputational damage to a rival while preserving one’s own reputation and status in the peer group. As discussed previously, the indirect aggression favored by women and girls focuses on social manipulation. In some cases, the victim would never know who the primary aggressor was if the tactics concentrated on social ostracization, stigmatization, and gossip. Rumors can be easily spread without the original source being singled out, protecting their reputation while damaging their target (through accusations of sexual promiscuity, disloyalty, and so on), and shielding them from retaliation. Women utilize their friends to gather and disseminate social information, including gossip about rivals, particularly when those rivals are perceived as a legitimate threat to their status or romantic opportunities. Experimental studies suggest that more attractive rivals wearing more provocative clothing increase women’s tendency to spread reputation-damaging information, even when women report liking the target of their damaging gossip, and more so for highly competitive women.31 Preliminary results seem to confirm what many women may have experienced, namely that reputation-damaging social information does cause harm to the target, in terms of how men and women may view and interact with them. Further, not all women are as likely to inflict such reputational harms, highlighting why less competitive women and those high in loyalty are seen as more valuable friends.
Cartoon by Oliver Ottitsch for SKEPTIC
This also highlights the possible costs of being seen as someone who engages in overtly malicious gossip. If women prefer friends who are kind and loyal, those who are seen as malicious gossips are less likely to be preferred as friends and may also be seen negatively by desirable romantic partners. The problem then is how to engage in damaging gossip without being seen as malicious. How can sharing such information perhaps be seen in a prosocial light? There are at least two different strategies that may achieve this, perhaps involving a degree of self-deception or lack of awareness of one’s own motivations. The first is to disclose one’s own victimization, which may not be perceived as gossip but rather as sharing a painful experience and a request for emotional support. There is evidence that women are more sensitive than men to friendship violations that suggest the friend is not a loyal and kind friend, as well as being more likely to disclose such treatment to others. In addition, research has found that first-person disclosures of mistreatment were more trusted than third-party reports, and female perpetrators of that mistreatment did suffer reputational damage as a result of the victim sharing that narrative.32 These covert victimization narratives can effectively damage the same-sex peers that are targeted for their perceived misdeeds in terms of desirability as a friend and social status. In addition, a number of women articulate that they are sharing this information out of concern—not malevolent intent—for the target of their gossip. Researchers have also explored such concern-based gossip, demonstrating that women endorse more concern versus harm-based motivations for engaging in gossip and that concerned gossipers were viewed more positively by social and romantic partners than were malicious gossipers. Interestingly, concerned gossip harmed perceptions of the target as much as did malicious gossip, indicating that negative commentary on an individual that is framed with concern harms the target’s reputation and insulates gossipers from reputation damage (due to lower perceptions of maliciousness).33 The tendency to engage in these forms of gossip may explain the fact that many women report being targeted by gossip while relatively few report spreading negative rumors. There is a degree of self-deception about one’s motivations that makes these effective tactics for covert female intrasexual competition.
The popular neologism for this type of close friend is “frenemy,” a term popularized over the last twenty years or so and defined as a “person with whom we outwardly show characteristics of friendship because of certain benefits that come with the façade.”34 Studies suggest that people maintain such “frenemyships” because relational benefits such as shared social networks, status, and information sharing may outweigh the cost of terminating the relationship, even though there may be high levels of covert competition and social manipulation.35 It is clear that same-sex friendships can be some of our most valued and rewarding relationships, ones that are lifelong and help us navigate the challenges of life. Yet they can also be damaging, with frenemies causing harm in the pursuit of their own goals. As a result, choosing same-sex friends wisely is an essential skill, as is the ability to engage in covert competition. In other words … keep your friends close but your frenemies closer.
“It is better to have an enemy who honestly says they hate you than a friend who’s putting you down secretly.” —Unknown
On his February 22, 2026 blog, the estimable evolutionary biologist, outspoken atheist, and (relevant here) staunch defender of determinism Jerry Coyne takes me to task for presenting “a muddled argument” in my case for compatibilism (in an excerpt in Quillette), which was based on a longer chapter in my book Truth: What it is, How to Find it, and Why it Still Matters.
First, let me acknowledge that this chapter in my book is in Part III, or “Known Unknowables.” The title follows Donald Rumsfeld’s famous epistemological trilemma of “Known Knowns” (things we know that we know) and “Known Unknowns” (things we know that we do not know), to which I add “Known Unknowables” (things that are not ultimately knowable).
In this section of the book I include consciousness (the easy problem is understanding the neural wiring; the hard problem that I claim to be unknowable is what it’s like to be the wiring), God (I know of no scientific experiments or rational arguments that can prove its existence one way or the other), and why there is something rather than nothing (what do you mean by nothing, anyway?). So, in a sense, Jerry’s determinist position is, in my understanding of the problem, no more or less likely to be true than my own, depending on how one defines the problem itself. I have defined it in a way that compatibilism works, whereas Jerry has defined it so that determinism works.
Second, this is why I reference the survey by David Chalmers, the philosopher who made famous the “hard problem of consciousness,” along with his colleague David Bourget. They asked 3,226 philosophy professors and graduate students to weigh in on 30 different subjects. Here is what they found regarding the free will issue:
Accept or lean toward: compatibilism, 59.1%; libertarianism, 13.7%; no free will, 12.2%; other, 14.9%.
Now, on one level, it is irrelevant how many people believe something, along the lines of what Philip K. Dick meant when he defined reality “as that which, when you stop believing in it, doesn’t go away.” Yet, as I argue, there is something revealing about these figures. Namely, if the most qualified people to assess a problem are not in agreement on an answer—and the free-will/determinism problem has been around for thousands of years—it may be that it is an insoluble one, a known unknowable.
Third, therefore, it is entirely possible that a highly qualified, educated, and intelligent thinker like Jerry Coyne can make a compelling case for determinism, while at the same time a highly qualified, educated, and intelligent thinker like the late Daniel Dennett can make an equally compelling case for compatibilism (and Coyne and Dennett have locked horns on this very matter).
I agree with Jerry and Dan that we live in a determined universe governed by laws of nature. But I disagree with Jerry that this eliminates free will, or if you prefer “volition” or “choice” (again, this entire field is, to use Jerry’s term, “muddled” with confusion of terminology). My compatibilist work-around is “self-determinism”: while we live under the causal net of a determined universe, we are ourselves part of that causal net, helping to determine the future as it unfolds before us. My compatibilist position is based on the best understanding of physics today. Let me explain.
Physicists tell us that the Second Law of Thermodynamics, or entropy, means that time flows forward, and therefore no future scenario can ever perfectly match one from the past. As Heraclitus’ idiom informs us, “you cannot step into the same river twice,” because you are different and the river is different. What you did in the past influences what you choose to do next in future circumstances, which are always different from the past. So, while the world is determined, we are active agents in determining our decisions going forward in a self-determined way, in the context of what already happened and what might happen. Thus, our universe is not pre-determined in a block-universe way (in which past, present, and future exist simultaneously) but rather post-determined (after the fact we can look back and trace the causal connections), and we are part of the causal net of myriad determining factors that create that post-determined world.
(Jerry inquires why I didn’t discuss quantum uncertainty in my analysis. The reason is that Dennett debunked this decades ago in Elbow Room: The Varieties of Free Will Worth Wanting, when he pointed out that any such quantum effects that alter other deterministic physical laws would not grant any type of free will or volition, for it would just mean that some percentage of your “decisions” are just random noise in the machine.)
Given the muddle of terminology here, let me bring in the philosopher Christian List and his three requirements of volition from his book Why Free Will is Real:
• Intentional agency: the action was performed intentionally.
• Alternative possibilities: the person could have done otherwise.
• Causal control: the person’s intentions caused the action.
As List explains in more detail:
Specifically, we need to know whether what the person did was freely performed, as characterized by the three bullet points above. Was it an intentional action? Could the person have done otherwise? Was the person in control? Or, if what the person did was not freely performed, we need to know whether the person’s free will was at least implicated in the run-up to it: Was there a free decision to get drunk in the first place, for instance? Of course, moral responsibility might well require more than that…but I do take the presence of free will somewhere along the relevant chain of events to be a necessary condition for a salient form of moral responsibility.

Of course, Jerry and other determinists like Robert Sapolsky and Sam Harris could just redefine the problem by saying that even the capacity to form an intention was pre-determined by atoms, molecules, and neurons, as is the capacity to consider several possibilities for action and the capacity to take such action. This is why I quoted Dan Dennett from my podcast conversation with him on this very challenge:
Determinism doesn’t tie your hands, nor does it prevent you from making and then reconsidering decisions, turning over a new leaf, learning from your mistakes. Determinism is not a puppeteer controlling you. If you’re a normal adult, you have enough self-control to maintain your autonomy, and hence responsibility, in a world full of seductions and distractions.

Since determinists often reference people suffering from extreme drug addiction or alcoholism, or those with a brain tumor that led to their bad behavior, like Charles Whitman in the University of Texas tower shooting, I asked Dan about Sam Harris’s quote that “it’s tumors all the way down,” and Robert Sapolsky’s descriptor that “it’s turtles all the way down.” Here Dennett identifies the error in this line of reasoning:
Well, I like the way you put it very much, Michael, because I think you put your finger on the mistake that Sapolsky is making there. And Sam Harris makes it too. No, it’s not tumors all the way down. It’s machinery all the way down. But there’s good machinery and there’s bad machinery. And if we have bad machinery, then yes, we’re disabled to some degree. But what about people who have good machinery? They’re not disabled. Why can’t we hold them responsible? Now, some people are, alas, through no fault of their own, not responsible for what they do. And that might well include people with terrible, terrible youths, who didn’t get a good upbringing, or who had a horrific upbringing. And so we have to decide, as society, given that this is a dangerous person, what’s the humane, good thing to do? I don’t think there’s an algorithm or a bright line for distinguishing somebody whose brain is good enough from somebody whose brain is a little too disabled. We just have to make the decision.

Dennett then brings home real-world examples:
We do it all the time. You’ve got to be 16 to get a driver’s license. Some 15-year-olds would be perfectly safe as drivers. Some 21-year-olds would not. But the law has to have a bright line and so it chooses one. We might argue whether we want to raise it or lower it, the way the drinking age has been raised or lowered, or the way the driving age has been raised or lowered. We have to have a policy and we have to stick to it and we can change it as we learn more and more. But what we don’t do is just say, “Oh, it’s disability all the way down.” No, you’re not disabled, I’m not disabled. I want to be held responsible. I think you want to be held responsible too.

Coyne is unhappy with my invoking “emergence” and says I’m being rude to him and Sapolsky and Harris in accusing them of “physics envy,” but that’s what it is! Here, for example, is Sapolsky defending his belief that free will does not exist because single neurons don’t have it: “Individual neurons don’t become causeless causes that defy gravity and help generate free will just because they’re interacting with lots of other neurons.”
In fact, billions of interacting neurons are exactly where self-determinism (or volition or free will) arises. This is why I like to ask determinists: Where is inflation in the laws and principles of physics, biology, or neuroscience? It is nowhere to be found, because inflation is an emergent property arising from millions of individuals in economic exchange, a subject properly described by economists, not physicists, biologists, or neuroscientists.
Rather than quoting myself again, I will invoke the geneticist and neuroscientist Kevin Mitchell from his book Free Agents, in which he shows that the determinist’s reductionistic approach to understanding human thought and behavior is not just wrong, but wrong-headed! How?
Basic laws of physics that deal only with energy and matter and fundamental forces cannot explain what life is or its defining property: living organisms do things, for reasons, as causal agents in their own right. They are driven not by energy but by information. And the meaning of that information is embodied in the structure of the system itself, based on its history. In short, there are fundamentally distinct types of causation at play in living organisms by virtue of their organization. That extension through time generates a new kind of causation that is not seen in most physical processes, one based on a record of history in which information about past events continues to play a causal role in the present.

Thus, I conclude that the free will/determinism issue is an insoluble problem because we may ultimately be talking past one another at different levels of causality: the reductionist’s atoms, molecules, and neurons versus the emergentist’s brains, people, and societies.
Choose a side. The choice is yours!
The question of whether or not we have free will has been pondered by philosophers, psychologists, theologians, neuroscientists, and by many of us in our own conversations and thoughts. Nearly two thousand years ago, the Stoic philosopher Epictetus declared, “You may fetter my leg; but not Zeus himself can get the better of my free will.”1 But Epictetus also believed in a deterministic world where each event is determined by preceding causes. How can this apparent contradiction be resolved?
In the 1940s, Bertrand Russell saw no reason that human volitions would not also be determined in the same way that inanimate processes are determined. Further, he saw the determined nature of volitions as incompatible with a person being the true source of his own actions. Russell supposed that an evil scientist could, by use of psychoactive drugs, manipulate a person to perform certain actions. And this hypothetical manipulation did not seem to Russell so different from normal life, where people are manipulated to do what they do by natural causes outside their own control.2
Fifty years after Russell published his critique of the Stoic notion of free will, several other philosophers made the same argument.3, 4, 5 Today, the continued quandary contributes to a sustained lack of consensus on free will. According to surveys, most people—including most philosophers—believe in some form of free will, most under the rubric of compatibilism.6, 7 Novelist and Nobel Laureate Isaac Bashevis Singer summed up the dilemma, “We must believe in free will, we have no choice.”
However, the debate still rages in the world of academic philosophy, in a broader audience reached by podcasts and popular books written by scientists, and among readers of Skeptic. Here I will try to convince you that free will is real and not an illusion. I’ll argue that far from being exemplars of rationality and skepticism, the main arguments against free will make unjustifiable logical leaps and are naïve in the light of cutting-edge scientific findings.
Throughout the philosophical literature,8 resolving the question of whether or not we have free will has often revolved around two criteria: being the true source of one’s own actions, and having the ability to do otherwise.
I argue that humans meet both criteria through two concepts: scale and undecidability.
Scale and the True Sources of Our Actions

In an article in The Journal of Mind and Behavior,9 I argued that many of our actions are caused by our wills; that is, by our conscious desires and intentions. This is not disputed by most (what I’ll term) free will deniers. They more often dispute that our wills are free, not that we have wills and that our actions often follow from our wills. Sam Harris, one such determinist with a large general audience, has said that the subjectively felt intention to act is the proximate cause of acting. Harris makes the same basic claim as renowned scientist Francis Crick,10 philosophers such as Bertrand Russell11 and Derk Pereboom,12 and many others. They claim that in addition to the proximate cause (the will), our actions have ultimate causes lurking behind them that are the relevant causes to consider when judging whether or not our wills are free. The ultimate causes beyond and beneath the surface of our wills, they argue, make them unfree. What are these ultimate causes? Harris identifies genetics and environmental influences as “the only things that contrive to produce” his particular will.13 Molecules beyond DNA have also been offered as ultimate causes of our decisions. Biologist Jerry Coyne argued that, “Our brains are made of molecules; those molecules must obey the laws of physics; our decisions derive from brain activity.”14 Robert Sapolsky, a prominent neuroendocrinologist, is publishing a book this year detailing many such mechanisms that, it is claimed, obviate the role of willed choices.15
What’s wrong with this line of reasoning? Consider the following question as an analogy: Are apples red? Suppose we all agree that apples have color. The question is whether the color is red or non-red. To answer the question, determinists would look beyond the proximate color of the apple. Realizing that the apple is nothing but atoms, they would examine many of the carbon atoms on the surface of the apple. They find that not a single carbon atom is red. Since none of the atoms are red, and the apple is nothing but atoms, they would conclude that the apple can’t be red. The error is that though they agree the apple has a color, they try to examine the nature of the color at a scale (a carbon atom is smaller than the wavelength of red light) where color is incoherent. The fact that they found no redness at that scale shouldn’t lead them to conclude anything about the color of the apple.
Likewise, the fact that determinists find no personal authorship or freedom in the actions of molecules shouldn’t lead them to conclude anything about the nature of the will. We agree that we have wills, that we have subjectively experienced intentions that influence our actions. The question is whether our will is free or unfree. To look at molecules for the answer is a scale mistake. DNA and neurotransmitters observed at the molecular scale exhibit no will whatsoever. With that knowledge, is it compelling that they exhibit no free will? No. That should tell us that determinists are looking at the wrong scale to find answers about the will, just as it would be a mistake to look for answers about redness at a scale where color is not meaningful.
The right scale for finding answers to the question of apple redness is the apple scale, not the atom scale. The right scale for finding answers to the question of freedom of the will is the agent scale, not the molecule scale. Searching the molecule scale is just one example of this error. There are many other wrong scales where a confused determinist might look for answers about the will. He or she may zoom out temporally into an irrelevant timescale, including the time before the will in question existed. In the above analogy, this would be like conceptualizing the apple as merely a step in a process of agricultural industry. Since agricultural industry is not red, should we conclude that the apple is not red? The question about the will can only find its answers from a scale where the will exists as a will. Expanding the timescale to include the time before the person was born renders the question incoherent.
If we keep our analysis in the scale where the individual agent exists, not zooming too far in nor too far out in space, time, or level of organization, then the primary and ultimate cause of my actions is me. The will emerges from the complex interactions of many small parts. It’s literally not true to say that it’s caused by any particular small part. It is caused by many small parts, but only when taken together all at once. And that’s the same thing as the whole person. So my thoughts and actions are deterministically caused by me. The molecules of which my brain is made are simply irrelevant to this fact. So I am the true source of my own actions, and there are no other “ultimate” causes. My mind does not exist as a molecule nor as a historical epoch, nor as a socioeconomic class. Yet my mind does exist. René Descartes’ “I think therefore I am” convinces me of this.16 In order to claim that my choices are really caused by a molecule or a historical epoch, one must refer to the dynamics of a scale where I (that is, my mind) cannot be found. Eliminating the mind from the analysis is not a valid way to answer a question about the mind.
The Ability to Do Otherwise

There is a temporal asymmetry in the question of whether I could have done otherwise. In the question’s typical form, it is backward-looking. It asks about what could have been in the past, and, at first, it seems like a coherent question. I did one thing yesterday, and we wonder if I could have done something else. But what if we wanted to figure out whether or not I’ll have free will tomorrow? From that temporal angle, the question of the ability to do otherwise stops making sense. In a forward-looking sense, the question becomes manifestly nonsensical. Can I do otherwise in the future? Otherwise? Other than what? Other than the thing I will do? The question stipulates that I will do a certain thing, and simultaneously asks whether or not I can avoid doing that thing. The stipulation contained within the question makes the answer trivial. No, of course I cannot do something other than the thing I will do. In order for the question to have any significance in the forward-looking tense, it must be modified. The question cannot directly stipulate that I will do a certain thing. The question must ask whether or not I can do something other than what I’m expected to do, not other than what I will do.
Human choice is temporally asymmetric and must be analyzed as such. This point could be missed without properly situating our analysis at the correct scale. An inappropriate focus on the dynamics of little particles could obscure the truth. The laws of physics that describe or govern the interactions of particles do not specify a direction of time. If we could watch a video of two protons colliding, we would have no way to know whether the video was being played forward or in reverse. This is called time reversal symmetry. This symmetry holds true in a wide variety of particle interactions.17 Time appears asymmetric only at scales where emergent phenomena transpire. Large collections of particles obey the second law of thermodynamics, which is not time reversal invariant. As astrophysicist Matt O’Dowd put it, “Zoom in to individual particle interactions and you see the perfect reversibility of the laws of physics. But zoom out, and time’s arrow emerges.”18 A consideration of scale leads to a recognition of temporal asymmetry in human choice.
In analyzing the ability to do otherwise, we should consider only a forward-looking ability because choices, by their nature, are forward-looking. We don’t deliberate or make choices about the past. Choices are always about something, and those objects of choice always lie in the future, thus choices are always forward-looking. At the time when a choice is actually made, there is as of yet no “what” as in “Could have done other than what?” I have not already made the choice, so there is no established action to have done otherwise. There can only be expectation of what I will do. If my actions are in principle perfectly predictable, then I do not have the ability to do otherwise in a forward-looking sense. If my choices are in principle not predictable, given total knowledge of the present world, then I do have the ability to do otherwise in a forward-looking sense, which is the only sense that makes any sense. Given the different dynamics found at different scales, the ability to do otherwise needs to be understood as temporally asymmetric; that is, as always forward-looking; as the ability to do something which is in principle not predictable. We do have that ability, and it derives from our self-referential nature.
Self-Reference and Undecidability

The fact that I am the relevant cause of my own actions comes with another important implication: I am a causally self-referencing entity. If a molecule were the relevant cause of my action, this would not be true in the same way. The molecule has no capacity for self-reflection, but I do. I can ask myself, “What will I do? What could I do? What should I do? What do I want to do? What would I do if I wanted to do X and should do Y?” Self-referential questions like these affect the choices that I make; and those choices change the self-referential questions that I ask.
At the relevant scale, self-reference is causally important. I am a system which analyzes its own inputs, character, and potential outputs; generates new outputs based on those analyses; and feeds those new outputs back into itself as inputs which affect the outputs, which affect the system’s character. I am an output of and an input for my own processing. Framing the human self-referential nature in this way brings us to the concept of undecidability.
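To make that loop concrete, here is a minimal sketch (my illustration, not the author’s) of a causally self-referential system: its outputs are fed back as inputs that reshape the very state that generates the outputs. The class name and update rule are arbitrary assumptions chosen only to display the feedback structure.

```python
# A toy causally self-referential system: its outputs feed back into the
# state ("character") that produced them, so the system is both an output
# of and an input for its own processing.
class Agent:
    def __init__(self):
        self.character = 0.5  # internal disposition that its own outputs reshape

    def step(self, sensory_input):
        # analyze the input in light of the current character...
        output = self.character * sensory_input
        # ...then feed the output back, altering the character itself
        self.character = 0.9 * self.character + 0.1 * output
        return output

agent = Agent()
for x in (1.0, 0.8, 1.2):
    print(agent.step(x))  # the same inputs given later would now yield different outputs
```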
A system that exhibits undecidable dynamics cannot be predicted, given complete knowledge of its present state. Computer scientists and mathematicians have proven that this fundamental unpredictability shows up in some algorithmic computations, mathematical systems, and dynamical systems (including physical systems).19 Though an unpredictable dynamical system may evoke the concept of chaos, undecidability is not chaos; it is a different sort of unpredictability. IBM research scientist Charles H. Bennett makes the difference clear:
For a dynamical system to be chaotic means that it exponentially amplifies ignorance of its initial condition; for it to be undecidable means that essential aspects of its long-term behaviour—such as whether a trajectory ever enters a certain region—though determined, are unpredictable even from total knowledge of the initial condition.20

If a system exhibits undecidability, then it is unpredictable even given total knowledge of all of its constituent parts. Undecidability makes deterministic systems fundamentally unpredictable in principle, not as a result of merely lacking precise measurements. If humans can exhibit undecidability, then we meet the second main criterion for free will: the forward-looking ability to do otherwise. Scientists recently made such an argument feasible by explicating what features of a system give rise to undecidable dynamics. In 2019, Mikhail Prokopenko and his colleagues conducted a comparative formal analysis of recursive mathematical systems, Turing machines, and cellular automata. They came to a clear conclusion:
As we have shown, the capacity to generate undecidable dynamics is based upon three underlying factors: (1) the program-data duality; (2) the potential to access an infinite computational medium; and (3) the ability to implement negation.21

If humans do have these three properties, then we meet the criteria for undecidable dynamics, which means we can take actions that are fundamentally unpredictable, which means we have the ability to do otherwise in a forward-looking sense, which means we have free will.
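Before examining those three factors, a small sketch may help fix Bennett’s distinction. The logistic map below (a standard textbook example, not anything from the cited papers) is chaotic but perfectly decidable: its unpredictability comes entirely from imprecise knowledge of the initial condition, which is exactly what undecidability is not.

```python
# Chaos, not undecidability: the logistic map exponentially amplifies
# ignorance of its initial condition (Bennett's first kind of unpredictability).
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-12  # two trajectories differing by one part in a trillion
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step}: divergence = {abs(a - b):.3e}")

# The tiny gap grows to order 1 within roughly 40 steps. Given the *exact*
# initial condition, though, the future is perfectly computable -- unlike an
# undecidable system, which resists prediction even from total knowledge.
```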
First, consider program-data duality, which in this context is the ability for self-reference. The word “duality” simply refers to the typical distinction between program and data with which we are all familiar. A human at time 1 has a certain overall state of mind, coinciding with a certain overall physical state. The state at time 1 is a program, in that it entails implicit rules about what the system would do, given certain types of data. The streams of perceptions taken in at time 2 are data, which get processed according to the implicit rules. In addition to processing basic sense data, this duality allows for a program (or implicit set of rules encoded in the state of a human) to process other programs as data. For example, a human can process ideas, hypothetical scenarios, mathematical operations, and representations of the self as data (thus self-reference).
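Program-data duality is easy to exhibit in code. The sketch below is my own minimal illustration (the function names are arbitrary): a program is passed to another program as ordinary data, and can even be handed itself, which is the germ of self-reference.

```python
# Program-data duality: a program (function) handled as ordinary data.
def apply_twice(program, data):
    """Receive a program as data, then run it on the data twice."""
    return program(program(data))

increment = lambda x: x + 1
print(apply_twice(increment, 40))  # 42

# Self-reference: a program can also take *itself* as input.
def describe(f):
    return f"a program named {f.__name__}"

print(describe(describe))  # "a program named describe"
```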
The next requirement for undecidability is the potential to access an infinite computational medium. The computational medium is the substrate on which the state of the system is represented. In a computer, the computational medium would be the memory and storage. The set of all possible states of the system is called the state-space. For example, the state-space of a computer would be the set of all possible configurations of its memory and storage. If we knew that a certain system had an infinite state-space, we could infer that the system has access to an infinite computational medium.
It can be informally proven that humans have an infinite state-space. How many different thoughts is it possible for a human to have? That question includes sub-questions, such as how many things is it possible for a human to see? The state of your visual perception is one small part of your overall state. Think of the number 74. Now think of the number 74 with your eyes closed. Those two occasions of thinking of 74 occupied two very different points in your state-space because of the difference in visual perception.
To roughly estimate how many overall states are possible while thinking of 74, we would need to do something like multiply the number of possible visual perceptions by the number of possible auditory perceptions by the number of possible sensations of heat and cold by the number of possible gradations of feeling sadness or happiness, and so on. Also, you may think of 74 while remembering, for example, the time you thought of 106 or 107. And the next time you think of 74, that will be yet another point in your state-space, since you’ll recall that you’ve thought of 74 before. There may be an infinite number of possible states in which you might think of 74. And there are many conceivable numbers other than 74, and many things to think about other than numbers.
An obvious objection might be that a human and his brain are physically finite. In what sense can an organ that fits inside a skull be infinite? As a starting point, consider the 100 billion neurons that make up the brain. As a simplification, a neuron can be considered to be “firing” or “not firing.” So a simplified brain has 100 billion binary cells. Such an array of cells could instantiate 2^100,000,000,000 distinct patterns of on-or-off activation. That’s a big number. For comparison, there are estimated to be roughly 10^80 atoms in the observable universe.22 The number of atoms in the universe is an infinitesimally small number compared to the number of activation patterns possible in a simplified brain. And what about a real brain? A real brain is made of neurons which are not simply on or off. Some neurons show gradations in voltage and neurotransmitter release, meaning that they have many possible states between “on” and “off.”23
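To get a feel for the arithmetic (a back-of-the-envelope check using only the figures quoted above), one can count the decimal digits of 2^100,000,000,000 without ever computing the number itself, since the digit count of 2^N is floor(N · log10 2) + 1:

```python
import math

# Digits of 2**(10**11), the number of on/off patterns of 100 billion
# binary cells, computed via logarithms rather than exponentiation.
neurons = 10**11
digits = math.floor(neurons * math.log10(2)) + 1
print(digits)  # 30,102,999,567 -- a number roughly 30 billion digits long

# For comparison, the ~10**80 atoms in the observable universe
# is a mere 81-digit number.
```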
Besides neurons, there are many other variables in the brain that are not captured by the simplified on/off variable. Each neuron can vary in the amount of neurotransmitter in its vesicles ready for release, and in the state of the receptors on its soma and dendrites (that is, to what degree they’re blocked by other molecules). There can also be variation in the amount of neurotransmitter that is floating free at any moment in the space between any two neurons. There are minute variables that will likely never be measured yet do, theoretically, make a causal difference. For example, in what spatial direction is each neurotransmitter molecule oriented? A neurotransmitter molecule must fit into a receptor in order to carry on a signal. For the molecule to fit, it must be facing a certain direction relative to the receptor. So the spatial orientation of the molecule before binding must have some nonzero effect on the binding affinity. How many different patterns of analog spatial orientation might trillions of neurotransmitter molecules be capable of? That alone may be infinite. The variable of “firing” or “not firing” does not capture any of these variables. So the actual number of possible overall brain states is vastly greater than 2^100,000,000,000, which is itself vastly greater than the number of atoms in the universe.
Whether the human state-space is technically infinite or merely practically infinite (larger than any other number computed for any purpose in all of science), it will not be exhausted in the meager 100 years of a human lifespan. This means that the self-referential loops of processing do not need to stop at any predetermined iteration or level of abstraction. So for the purpose of analyzing the choices of a human, the state-space and computational medium are functionally infinite.
The last element required for undecidability is the ability to implement negation. Negation in this context refers to the ability of a logical system to produce an output which is exactly contrary to the processing which led to the output. It is equivalent to the liar paradox, which is exemplified in a statement such as “everything I say is a lie,” or more formally, “this statement is unprovable.” The liar paradox is a self-referential statement which cannot be judged to be true or false without a contradiction. Self-reference is fundamental to this paradox because the statement refers to its own validity. If humans can implement this paradoxical logic into their thinking, then humans meet this requirement for producing undecidability. The fact that humans came up with the liar paradox thousands of years ago is evidence that humans can perform the logical operation of negation.
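These three ingredients are exactly what powers the classic diagonal argument behind the halting problem, sketched below. The `predict` function is hypothetical (the whole point is that no such predictor can exist), and the sketch is mine rather than Prokopenko’s formalism, but it shows duality, self-reference, and negation working together to defeat prediction.

```python
# Diagonalization sketch: self-reference plus negation defeats any
# would-be predictor of a program's behavior.
def build_contrarian(predict):
    """Given any claimed halting predictor, build the program it must mispredict."""
    def contrarian():
        # program-data duality: contrarian is passed to predict as data,
        # and the data in question is contrarian itself (self-reference)
        if predict(contrarian):  # predictor says "this program halts"...
            while True:          # ...so implement negation: loop forever
                pass
        # predictor says "this program loops forever"... so halt at once
        return
    return contrarian

# Whatever predict(program) -> bool claims about build_contrarian(predict),
# the contrarian does the opposite, so every predictor is wrong somewhere.
```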
Conclusion

All three factors underlying the capacity to generate undecidable dynamics are present in humans. First, we exhibit program-data duality when we process ideas, hypothetical scenarios, mathematical operations, and representations of ourselves as objects of thought. Next, we have the potential to access an infinite computational medium. This is demonstrated by the fact that we can think of any one of an infinite number of objects of thought, which implies an infinite state-space, which implies an infinite computational medium. Finally, we have the ability to implement negation, demonstrated by the inception of the liar paradox in the minds of humans. If these three elements are sufficient to generate undecidable dynamics, then humans are capable of generating undecidable dynamics, which means we cannot be accurately predicted. And that means we have the ability to do otherwise in the forward-looking sense.
Figure 1. Relational map of concepts. The truth of each concept supports the truth of the concepts downstream from it. This diagram illustrates how the concepts described throughout this article contribute to the overall reality of free will.

Figure 1 shows the relationships between the concepts discussed in this article. An understanding of the human agent at the scale where conscious humans actually exist leads to recognition of the self as the source of one’s actions, recognition of the relevance of temporal asymmetry to human choice, and recognition of self-reference as causally relevant to human actions. Self-reference, in combination with access to an infinite computational medium and the ability to implement negation, results in undecidable dynamics. This entails the ability to do otherwise in the forward-looking sense, which is the only sense that makes any sense when temporal asymmetry is taken into account. The resulting total picture is that we (humans) meet two criteria for real free will: the forward-looking ability to do otherwise and being the source of one’s own actions.
Viewing human agents as whole humans instead of as molecules makes it clear that humans are the cause of their own actions, and also leads to a focus on the human features, such as self-reference, that generate undecidable dynamics. The Stoic philosopher Epictetus was right. Neither Zeus, Bertrand Russell, nor the scientists recapitulating the latter’s argument 77 years later can diminish our free wills.
In what will certainly fail to go down as the news of the century, Imane Khelif, male boxer and women’s boxing Olympic medalist, has finally publicly admitted in a February 2026 interview that he is indeed biologically male. A large part of society specifically chose not to see. And another part chose not to care that eighteen months ago, two men were given a free pass to an abuser’s dream: the ability to not only assault women on an international stage, but the chance to be celebrated for it.
The 2024 Paris Olympics gold, silver, and bronze medals, as designed by Chaumet (Credit: LVMH)

Boxers Imane Khelif of Algeria and Lin Yu-ting of Taiwan entered the 2024 Olympics as a sex they were not, and they did it with the full knowledge of the IOC. According to an official release by the International Boxing Association in July of 2024, both men had failed more than one sex test for female eligibility in 2022 and 2023 and had been disqualified from female competition. For their fraud, both were rewarded with gold medals at the Olympics. One female boxer, Angela Carini, had to make the agonizing decision to forfeit rather than participate in the dangerous charade. How surreal it must have been to make that unbelievable yet necessary call, to go against not only everything one has trained for, but everything one stands for as an athlete, professional, and disciplined fighter.
For any inclined to give Khelif the benefit of the doubt that perhaps he just didn’t know: if one is raised as female and never begins menstruation at puberty, the reason why will absolutely be examined. Once illness and female conditions are ruled out, one is left with the “condition” of being male. In this case, a male with 46,XY 5-alpha reductase deficiency, as outlined in a medical report of his drafted back in 2023 and later leaked to Le correspondant.
To ignore such disorders of sexual development in order to adhere to traditional physical sex ideals is fairly common practice in conservative and religious countries, and African nations have a history of scouting such male individuals for the purpose of dominating women’s sports, to the overwhelming ignorance of the global athletics audience. As a result, most are still under the incorrect impression that athletes like Caster Semenya, the South African runner and two-time Olympic gold medalist, are simply women with higher testosterone, and remain unaware that these are athletes with a male karyotype. Semenya was confirmed in Court of Arbitration for Sport proceedings to have 5-ARD, a genetic condition resulting in the inability to develop typical external male genitalia.
These disorders are unbelievably unfortunate for a multitude of medical reasons, beyond being tokenized and weaponized through identity politics. However, no one’s personal condition is ever a legitimate reason to disadvantage or endanger another demographic.
Nevertheless, such practice also happens to explain why Khelif, a Muslim in a Muslim nation, was conveniently free from traditionally mandated female attire, and able to be so comfortably hands-on with his fellow male trainers. And beyond that undisguisable situation, one must also genuinely ask why he never chose to appeal the International Boxing Association’s 2023 disqualification for failing to meet female criteria, or why he refused to participate in subsequent female competition that requires testing for sex.
So he knew. His family and community knew. He just counted on larger society not bothering to care. And on that, he wagered well.
Because despite the protests of the female boxers, certain boxing association officials, and the few but genuine feminists against the unbelievable misogyny being broadcast globally, many decided to protest calling a spade a spade. Widespread social media commentary of the ideologically captured claimed that Khelif and Lin were simply masculine-looking women who shouldn’t be insulted for appearances beyond their control. That it was (stop me if you’ve heard this before) right-wing propaganda and Nazi TERF bigotry to suggest that such supposed gender nonconformity made them male. The pick-me cherry on top, of course, is that it was peak misogyny to call them men at all.
But this was only to be expected when the mainstream media “reporting” on such a farce fully fed this break from reality. During the 2024 games, legacy organizations at very best legitimized Khelif as the incorrect sex and at worst denigrated anyone pointing out the truth. From the official Olympics reporting that ignored the situation entirely, to BBC and NYT accounts that comfortably crowned Khelif a woman, to USA Today fluff that belittled a serious slap in the face to females as “unhinged controversy,” the overwhelming majority of outlets passively accepted or actively furthered the grotesque farce unfolding in front of the world.
Yet beyond entrenched media preferences is another incentive as well. This was, and is still, today’s gender misogyny in action. Ironically, those who consider truth too “offensive” for the prioritized male in question never seem to consider the unimaginable offense for the women, who must not only unfairly face a recognizable man, but are expected (as women usually are) to simply take it with grace and a smile. So, concessions will be made to spare male feelings in the name of “inclusion,” ultimately excluding women from their very own opportunities.
Chromosomes, anatomy, and human sight are disregarded in favor of false passport markers and old photos of pink dresses, because apparently that is the only acceptable (and desired) proof of what “woman” means. It is the inevitable outcome of a societal ideology riddled with complacency for female safety and dignity.
Fortunately, despite a seemingly ingrained forfeit of biological honesty, the tide is beginning to turn, with the release of necessary reports and a new, supportive political landscape. The once sacrosanct gender ideology is now beginning to be questioned as a whole in the mainstream, no longer only by brave feminists. We can see the effects of this in the athletic realm through changes in various governing organizations, including World Boxing itself, which is beginning to demonstrate the bare minimum of competition integrity by mandating sex testing for eligibility. And as the IOC relies on individual sport federations to set eligibility standards, this nightmare will hopefully one day all but completely fade into history.
As it tends to go, many who put on blinders then will now be miraculously blind to the harm they supported. Khelif’s unforgettable selfishness will get purposely memory holed, along with their own unforgivable enablement in this feint of reality. But as USA Today once wrote in support of Khelif and wild disregard for truth, this indeed “can never happen again” … just not in the way that they meant.
Imane is and was always exactly as his own name states. And now that the rest of the world can no longer pretend that they do not know, they will have to finally decide whether they still believe men are entitled to women's earned opportunities, or if they are truly for women after all.
A review of Parallel Lives of Astronomers: Percival Lowell and Edward Emerson Barnard by William Sheehan. (Cham, Switzerland: Springer, 2024. Hardcover, 687 pages)
Of the two astronomers whose lives and accomplishments are chronicled in William Sheehan’s Parallel Lives of Astronomers, Percival Lowell was far better known than Edward Barnard. Lowell is famous for having championed the idea that the canals on Mars were built by intelligent beings. The origins of the idea that there were canals on Mars lay in the Italian astronomer Schiaparelli’s report of “canali” on the red planet in 1877. The word is best translated as “channels” but was popularly mistranslated as “canals.” Since in the latter part of the 19th century canals were being built all over the world by intelligent humans, the implication was that the “canals” on Mars were built by intelligent aliens.
A major theme of the book is that Barnard and Lowell were in many ways opposites of each other. Barnard grew up in poverty in Nashville, Tennessee. He became interested in astronomy as a nine-year-old working in a photography studio. He received some academic training in astronomy and was a superb and objective observer. Unlike Lowell, his mathematical skills were comparatively weak. Lowell came from an extremely wealthy Boston family, and his interest in astronomy began in college. He graduated from Harvard in 1876 with honors in mathematics. The topic of his graduation speech was the nebular hypothesis of how solar systems came together from collections of gas and dust around a sun. These contrasts (and others) between Lowell and Barnard provide an intimate view not only of the two men, but of much of the history of astronomy of the late 19th and early 20th centuries, especially regarding Mars, because the two men were at opposite ends of a raging debate among astronomers and the general public over the nature of the canals.
From a skeptical point of view, the most interesting organizational concept that Sheehan uses is the distinction between top-down and bottom-up processing. He uses this to contrast the approaches used by Lowell and Barnard in their interpretations of what they saw through their telescopes and later in photographs. Lowell was a largely top-down man, starting with an idea and then searching for evidence to support it. Barnard continued to make observations until he believed he had enough data to come to a conclusion. Lowell focused his astronomical interests largely on the canal debate, while Barnard was one of the most productive observational astronomers of his day. The top-down versus bottom-up distinction allows Sheehan to use basic concepts in perception to explain the differences between the two men in their position on the reality of the canals.
Perception is a function of two very different processes that together usually lead to an accurate perceptual experience of the world. Bottom-up processing refers to the incoming sensory inputs from the various sensory systems. These, alone, are not sufficient to specify what is actually out there in the world. Top-down processing refers to the expectations, beliefs, and knowledge that we all have about the perceptual world. These are needed for the brain to interpret and make sense of the information that is brought in by bottom-up mechanisms. Almost always these two sources are in accord and the world is perceived accurately.
However, sometimes expectations, beliefs, and knowledge can be wrong, and the incoming sensory input may be distorted or incomplete. Under these rare circumstances, people can and do actually perceive things that are not there, even though they are not intoxicated or psychologically impaired. Thus, flying saucers, sea monsters, Bigfoot, and the like are perceived when the sensory input is minimal, often seen in fleeting glimpses at night and in the distance. The Loch Ness Monster never swims up the Inverness River through downtown Inverness at high noon on a pleasant sunny day for vacationers to witness. Final perceptual experiences are a function of the sensory inputs as well as expectations and beliefs. Thus, perception is said to be a constructive process and one that can produce incorrect experiences. The canals of Mars fall directly into this perceptual cognitive model.
Before reading the book, I had the mistaken impression that when looking through a telescope, one saw a fairly stable image of whatever object the instrument was focused on. Nothing could be further from the truth. The image of a planet as seen through a telescope is just a tiny disc of light. To make matters worse, that image is far from stable, especially for the telescopes in use in Lowell and Barnard’s time. The book makes clear how unstable those images could be. Momentary changes in the characteristics of the air above a telescope would make the image waver, fade in and out of focus, and change in other characteristics from moment to moment.
Even when “seeing” was excellent, all one saw were successive glimpses of the target object. Then those glimpses had to be constructed by the brain into a coherent impression of what the target was. Between the series of fleeting images hitting the retina of the observer and the final drawing or description of what the observer saw, the constructive nature of perception has ample room to create perceptual experiences of structures (i.e., canals) that were not there in reality.
Astronomers had known since the early 19th century that such non-sensory factors could influence perceptual judgments in their observations. Thus, different observers reported different times at which a planet or star crossed a line in a telescope reticule. These differences were recognized by the term “personal equation.” But the idea that perception was constructive in the sense that honest observers could perceive structures that were not present had to wait until at least the start of the 20th century before it was recognized.
Following his Harvard graduation, Lowell was expected to go into his family business of highly profitable textile mills. As an intelligent, curious young man, he found that prospect stultifying. To make matters worse, he was involved in a serious scandal. He had proposed marriage to a daughter of the sniffy Boston upper crust, but then withdrew the proposal, something that just wasn’t done in that time and place. As a result, Lowell was effectively banned from that elite circle. In response, in the early 1880s he travelled to Japan and Korea, wrote several books on Asian culture, and became part of the Korean government delegation to the United States in 1883. He continued to live in Asia until 1893.
That Lowell continued his interest in astronomy before actively pursuing the mystery of Mars was demonstrated by the “astronomical references and imagery [that] are scattered throughout the Far Eastern books and if gathered together would make a long list” (p. 97). That interest turned into a lifelong obsession in 1892 when he read French astronomer Camille Flammarion’s book La Planete Mars et ses Conditions d’habitabilite, in which the author argued that the “canals” were evidence of an advanced civilization. Lowell was wealthy enough to fund the creation of the Lowell Observatory in Flagstaff, Arizona, which opened in 1894.
In his autobiographical writings, Barnard noted that he became interested in the stars while walking home from work in the dark. One star “seemed to be slowly moving eastward among the other stars.” This struck him as unusual because the other stars “seemed all to keep to their same relative positions,” (p. 121) while this one did not. This was clear evidence of an early careful observer who had, unknowingly, seen not just another star but the planet Saturn. When he was 19 years old, Barnard was given a book written by the Reverend Thomas Dick, who believed that all the planets of the solar system were inhabited. The book included simple star charts that Barnard “rushed to compare with what he could make out in the small patch of sky visible from the open window of his apartment” (p. 126). The book, a later fellow astronomer and friend wrote, “awakened a thirst for astronomical knowledge which … never ceased to be controlling” (p. 126). Around 1880 or 1881, Barnard was given a simple telescope by an older friend at the photography studio where he was still working. He later received a scholarship to Vanderbilt University, but never finished his degree. Such things were less important in the late 19th century, and in 1887 he obtained a position at the Lick Observatory outside of San Jose, California, one of the earliest mountain-top observatories so positioned to rise above atmospheric turbulence and local city lights.
During their long careers, both Lowell and Barnard observed Mars. Their different approaches—top-down versus bottom-up—permeated how they interpreted and represented the image that fell on their respective retinas. Figure 1 (from page 291 in the book) shows this difference beautifully. On top is Lowell’s version of what he saw in 1894, while Barnard’s representation from the same year is below. Overall, the images are similar in general outline. However, Lowell has added to his drawing numerous lines, which he contended were the canals, and details not present in Barnard’s. This is a classic example of constructive perception. Lowell saw similar geometric patterns on Mercury and Venus, although he apparently did not attribute them to intelligent design.
Figure 1. Top: Lowell’s map of Mars from 1894, published in Mars (1895), Plate XXIV. A new projection by Joel Hagen, for comparison with the Barnard map below. Bottom: A map of Mars compiled on the basis of Barnard’s unpublished drawings from 1894, produced by astronomer-artist Joel Hagen. The projection has been chosen to match the map of Lowell on p. 227, so as to emphasize the striking differences. (Credit: Joel Hagen)

While Lowell was seeing things that didn’t exist, Barnard was busy with more fruitful astronomical activities. In 1895 he became a professor of astronomy at the University of Chicago, which gave him access to the Yerkes Observatory in Wisconsin. It was there that he spent the rest of his life and professional career. Wisconsin is not known for warm winters, and the observing platform of the telescope at Yerkes was not heated. Nonetheless, Barnard would observe almost compulsively, night after night, even in the bitter cold. He was famous for having extremely good eyesight, which made him an excellent observer. During his long career he was an active member of the astronomical community. He made numerous important discoveries, including over 15 comets and the fifth moon of Jupiter. Barnard’s Star, whose motion relative to the sun he determined in 1916, was officially named after him in 2017, although it had been recorded photographically in the 1880s. It is a red dwarf that is one of the four stars closest to Earth.
Perhaps Barnard’s most important contribution is the explanation for what are known as dark nebulae, sometimes called “Barnard objects.” When the Milky Way is looked at through a telescope, there are large dark areas that appear to contain no stars. Why certain areas of the galaxy didn’t seem to contain any stars was a mystery. In fact, these areas do contain stars, but their light is blocked by huge clouds of interstellar dust. The understanding of the nature of the dark nebulae provided an important insight into the evolution of stars and planets. Another major accomplishment was his photographic atlas of portions of the Milky Way. The work, which is stunningly beautiful, took years to compile and wasn’t published until 1927, four years after his death in 1923.
During his active career Barnard did not ignore the controversial issue of the canals on Mars. He photographed Mars through the great telescope at the Yerkes Observatory in 1909, when Mars was “in opposition” to the Earth, as close as it would be for many years and an ideal time for observation and photography. These photographs showed no canals. Barnard was not as vocal in the great canal debate as some other astronomers. It was the brilliant Greek-French astronomer Eugene Antoniadi (1870–1944) who became Lowell’s most serious detractor. Sheehan includes the often acrimonious debates between Lowell and Antoniadi in the story of the contrasts between Lowell and Barnard.
During the time that Barnard was active in astronomical research and writing, Lowell was not inactive. However, his activities and interests were heavily focused on the issue of the canals. He lectured frequently and wrote widely defending his view that the canals were real. He, too, took photographs of Mars through the telescopes at the Lowell Observatory in Flagstaff. But constructive perception works just as well with photographs as it does with images seen through a telescope.
Both Lowell and Barnard made contributions to astronomy; Barnard as a careful scientist and Lowell as a popularizer who inspired many to an interest in astronomy, including Robert Goddard and Carl Sagan. In terms of fiction, Lowell’s argument that the canals were the products of intelligent Martians led to the writings of H.G. Wells and Edgar Rice Burroughs. Sheehan’s book goes into great, but never boring, detail about the lives and work of both men. It is beautifully illustrated: There are pictures not only of the protagonists as they, to paraphrase Shakespeare, “strut and fret their hour upon the stage,” but also of their drawings and photographs of Mars and of the important locations in their stories. The book is handsomely produced, with copious references and notes. Unfortunately, the publisher did not provide an index. Still, with the 150th anniversary of Schiaparelli’s observations arriving in 2027, Sheehan’s book is especially resonant.
In the span of just weeks, two major U.S. releases captured the nation’s attention: Bugonia, Yorgos Lanthimos’s darkly playful alien tale, and The Age of Disclosure, a documentary staged like science fiction, where whistleblowers insist that nonhuman craft exist and the government is concealing the truth about alien contact. Their timing is not accidental. Both arrived on the heels of the first public congressional UFO hearings in over fifty years, in the middle of a nationwide spike in reported sightings. The All-domain Anomaly Resolution Office (AARO) documented 757 new UAP (Unidentified Anomalous Phenomena) incidents between May 2023 and June 2024—more than in many previous years combined—and some analysts now describe 2025 as the most active reporting year in history. We are not just witnessing reports of the unexplained; we are witnessing the psychic temperature of a country—its anxieties, conspiratorial hunger, and collective imagination—made visible.
At the end of Bugonia, when the alien empress finally speaks—exactly as the conspiracy theorist had foretold—she delivers her verdict to her crew, all of them dressed in strange, animal-like furred spacesuits: “We believe it is over. They have had their time. And in their time they have imperiled the life they share, and so we have decided their time will end.” The aliens then waddle away in eerie unison, and the empress punctures the protective Earth bubble. What follows is an instant apocalypse: humanity wiped out in a scene that resembles the visual language of the Rapture—a sudden and absolute religious experience.
Poster for Bugonia (2025), directed by Yorgos Lanthimos. (Image courtesy of Focus Features/CJ ENM)

But The Age of Disclosure, Dan Farah’s latest sci-fi-styled documentary production, framed as a serious exposé of government UFO secrecy, ultimately reveals nothing new. It offers no evidence, only a procession of interchangeable older men linked to government or aerospace who repeat secondhand stories from witnesses who claim to have back-engineered crashed spaceships and recovered “biologics” (the new fancy term for aliens), along with warnings of looming threats. At the watch party I attended, a few of us sat nonplussed at the end because, although the film insists danger is near, we wondered: danger from what, exactly?
Why are aliens capturing our cultural imagination now?

Most alien or UFO reports1 involve sightings of lights, orbs, or spheres that move oddly or swiftly and vanish silently—a pattern that has remained consistent over time. Some observers also report cigar-shaped objects or triangular craft. Many of these phenomena are reported worldwide. In 2025, the National UFO Reporting Center had already logged 2,174 UFO/UAP reports by midyear, a sharp increase from 1,492 reports during the same period in 2024. This rise may reflect the establishment of the AARO and renewed government attention, which have made reporting easier and less stigmatized, not to mention nudging people to look up more and notice what was previously missed (Starlink satellites are often reported as UAPs). Increased public awareness through media coverage, documentaries, and congressional hearings also encourages people to report sightings they might previously have ignored. This explanation, of course, presumes the alien sightings are real. Are they?
An alternative interpretation—commonly referred to as the Psychosocial UFO Hypothesis—traces back to Swiss psychologist Carl Jung, whose 1958 work Flying Saucers: A Modern Myth of Things Seen in the Sky proposed that UFOs reflect psychic and cultural realities, not extraterrestrial ones.2 Jung suggested that flying saucers emerge in the collective imagination during eras of social disorientation, technological upheaval, or existential threat, functioning as modern myths that carry the weight of collective anxiety and longing. Rather than evidence of literal beings from another world, UFOs become symbols of fear, hope, salvation, or invasion—a projection of what the psyche cannot resolve. From this view, alien encounters are psychologically real even if not physically tangible: They reveal something true about the human mind and the cultural moment, not necessarily the cosmos.
It is unsurprising that UFO sightings are on the rise today. Scholars have observed that UFO reports tend to increase during periods of societal crisis—such as existential uncertainty, geopolitical tension, or rapid technological change—reflecting collective anxieties rather than objective phenomena.3 In times of social distress and distrust, people are more likely to assign meaning or threat to ordinary or ambiguous events. Some psychological-cognitive theories suggest that ambiguous stimuli—lights in the sky, radar blips, or unexplained objects or events—are interpreted through cultural narratives and heightened pattern seeking.4 This is sometimes called the “low information zone,” in which blurry photographs and grainy videos stimulate the mind to fill in the missing spaces or connect the dots into meaningful patterns of an extraterrestrial nature.
We live in a time of deep distrust in politics, corporations, and the media, which makes people question what they are told. Heightened fears from draconian COVID policies (“they closed the schools, restaurants, and parks so the pandemic must be really bad!”), hypermediated climate collapse (“if we don’t do something in twelve years all is lost”), threats of rising fascism (“Trump, MAGA!”), threats of an AI takeover (“the singularity is near!”), and rising nihilistic political violence (“burn it all down and start over!”) have created a pervasive state of anxiety. This fear, combined with distrust of formerly trusted institutions, fuels conspiracy thinking, including beliefs about aliens. With few reliable frameworks to navigate uncertainty, many turn outward for explanations or as distractions from personal responsibility.
In Bugonia, Lanthimos suggests that conspiracy beliefs often emerge as a response to real pain and injustice. The film’s central conspiracist grew up with an addicted, neglectful mother and later lost her to a medical experiment. His belief in aliens and corporate malevolence is not baseless; it is rooted in trauma, exploitation such as pharmaceutical misconduct and corporate neglect, and social alienation. In this way, the film does not simply mock conspiracists as “crazy,” but explores the social and psychological conditions that give rise to such beliefs.
To these we can add two more conditions contributing to Americans’ increasing belief in UFOs: the decline of religious faith and a reduced reliance on instinct and common sense.
As traditional faith wanes, many turn to belief systems grounded not in evidence or instinct but in ideology and narrative—UFO conspiracies being a prime example. Belief is migrating from shared moral and religious frameworks to culturally mediated myths that promise meaning and belonging. In this sense, aliens function as a modern sacred avatar, a substitute for God, mystery, and existential structure.
The complexity of contemporary society has been linked to a reduced dependence on intuitive judgment and common sense, making individuals more susceptible to being drawn into ideology and conspiracy theories.5 This effect has been amplified over the last two decades by our deep immersion in the online world, coupled with persistent global political instabilities. These factors have ushered in an era of “alternative facts” (on the right) and “postmodernism” (on the left) for many Americans, where the core assumption is that there is more than one truth or no truth at all.
This mindset—that what you see may not be true, or what you don’t see is probably true—has fundamentally contributed to the widespread and enduring belief in a U.S. government cover-up of UFOs. Thus, even though most individuals have never personally seen or experienced a UFO firsthand, they are readily pulled into the conspiratorial narrative and accept it primarily because of the powerful surrounding cultural and ideological framework. It’s ideology over instinct.
Common Sense and Instinct

Evolutionarily, humans developed heuristics to make rapid decisions in uncertain environments—recognizing patterns, detecting threats, and navigating social hierarchies. These shared mental shortcuts form a basis of common knowledge, allowing groups to act cohesively, from identifying safe foods and interpreting emotional cues to cooperating in collective tasks. This intuitive knowledge also extends to social cognition: Humans can rapidly infer intentions, predict behavior, and synchronize actions with others, often without conscious reasoning. In this sense, common knowledge is not arbitrary but adaptive, providing a shared framework that increases survival, cooperation, and cultural stability. As Steven Pinker argues, common knowledge is foundational to human society because it enables social coordination and complementary decision making.6 Much of this understanding operates beneath awareness, signaled through involuntary behaviors like laughter, tears, blushing, eye contact, and blunt speech—embodied expressions of the intuitive knowledge that binds us.
Paradoxically, people often engage in elaborate efforts to obscure, ignore, or deliberately avoid acknowledging common sense and, tragically, their own instincts. The tendency to avoid recognizing widely shared knowledge is well-documented in psychology and sociology. This behavior, known as information avoidance, allows individuals to shield their happiness, preserve existing beliefs, or maintain social standing. Research also shows that information avoidance can serve as a coping mechanism in situations of uncertainty or threat, helping people reduce cognitive dissonance and emotional discomfort.7
People sometimes engage in information avoidance not merely to protect their beliefs or personal happiness, but to align with a group ideology and secure a vital sense of belonging. According to Social Identity Theory,8 individuals derive meaning, status, and self-esteem from the groups they belong to; consequently, they may reject information that threatens the group’s worldview. Specifically, people may set aside their personal instincts or empirical skepticism to be part of a community—be it political, spiritual, ideological, or conspiratorial—that claims to possess special, hidden, or insider knowledge. Aligning with a group that asserts access to deeper truths, secret insights, or a more “awakened” understanding often feels more meaningful and elevating to one’s identity than simply accepting one’s ordinary, concrete life.9
In addition, people often bypass common sense by relying on cognitively unfalsifiable ideas—claims that aliens are “trans-dimensional,” “telepathic,” or “unperceivable by ordinary minds”—which place the phenomenon in a realm where no evidence could ever contradict it. This creates epistemic shielding, where the claim becomes immune to challenge: Any lack of proof is simply reframed as expected, since the phenomenon supposedly exists beyond ordinary perception or logic.10 This often involves setting aside common-sense reasoning—such as the implausibility of coordinated alien visits, the immense logistical challenges of secrecy, or the extreme hazards of space travel. By suspending these rational doubts, individuals can fully engage with the group, strengthening both cohesion and commitment to shared beliefs like UFOs.
System Justification offers another cogent explanation for why people override instinct, even without empathy-driven motives. This psychological process leads individuals to defend and reinforce the prevailing system or worldview, even when it may run counter to their own interests.11 In the context of UFO belief, the dominant “system” is no longer governmental authority but rather the conspiratorial worldview itself. Institutional distrust has become the cultural status quo, so accepting the narrative of a cover-up functions as a way of justifying and maintaining that system.12 Believing the government hides alien knowledge signals social intelligence and alignment with the modern order of suspicion, whereas trusting official explanations can appear naïve or even irrational—suggesting that disbelief in conspiracy has become more deviant than belief itself.
A further reason that common sense is bypassed in UFO narratives stems from a psychological profile that makes the alien stories uniquely meaningful to the participants. The key players in The Age of Disclosure documentary, reflecting the wider UFO conspiracy community, are largely older White men, often from the Baby Boomer generation, including many former Cold War intelligence and military personnel. They were trained for decades to perceive patterns, secrets, and threats everywhere, interpreting anomalies like radar returns, classified flights, and black-project aircraft. This environment rewarded suspicion, dramatic interpretation, and assuming hidden motives—a mindset that doesn’t simply switch off upon retirement. Once retired, many lose their high status and sense of purpose; they miss being “in the know” and having a mission. UFOs restore all of that, allowing them to be relevant again by “exposing secrecy,” “protecting humanity,” and “warning people about what’s coming.” This powerful way of restoring identity and meaning creates a significant blind spot for rational facts or instinct, cementing a narrative where they matter again.
A more common-sense approach—one uninfluenced by ideology—would align closely with how neuroscientists are beginning to frame the perception of unidentified objects. A trio of researchers, for example, recently posed this question: How can we “explain why healthy, intelligent, honest, and psychologically normal people might easily misperceive lights in the sky as threatening or extraordinary objects, especially in the context of WEIRD (western, educated, industrial, rich, and democratic) societies”?13 These researchers draw on predictive-coding theories of perception, which suggest that the brain constantly generates top-down predictions based on prior experience. When sensory input is ambiguous or weak, such as distant lights in the sky or other celestial stimuli, perception becomes highly subject to existing beliefs and expectations. Frohlich, Christov-Moore, and Reggente argue that in Western contexts, where skepticism and distrust of institutions are amplified, psychologically normal people are more likely to interpret ordinary phenomena as potentially extraordinary, thereby reinforcing their mistaken beliefs and fostering the acceptance of conspiratorial explanations.14
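To make the precision-weighting idea concrete, here is a minimal sketch (my own toy illustration, not the researchers’ actual model) of Bayesian cue combination, the formal core of predictive-coding accounts: a percept is a weighted average of prior expectation and sensory evidence, with weights set by each source’s reliability. The numbers and the 0-to-1 “alienness” coding are invented for illustration.

```python
# A toy precision-weighted (Bayesian) sketch of predictive coding:
# the percept is a weighted average of prior expectation and sensory
# input, weighted by precision (1/variance). Noisy input -> the prior
# dominates. All values below are invented for illustration.

def percept(prior_mean, prior_var, sensory, sensory_var):
    """Posterior mean of two Gaussian cues (prior belief + sensory input)."""
    w_prior = 1.0 / prior_var
    w_sense = 1.0 / sensory_var
    return (prior_mean * w_prior + sensory * w_sense) / (w_prior + w_sense)

# Expectation "that light is an alien craft" coded as 1.0; mundane as 0.0.
# Crisp daytime viewing (low sensory noise) vs. a faint light at night.
print(percept(1.0, 0.5, 0.0, 0.1))   # clear input: percept ~0.17, near mundane
print(percept(1.0, 0.5, 0.0, 5.0))   # ambiguous input: percept ~0.91, near prior
```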
Illustration by Marco Lawrence for SKEPTIC

Decline of Traditional Faith

Another factor reinforcing the heightened interest and belief in UFOs is the dramatic decline of traditional faith systems in the U.S. and globally, especially in Europe.15 We are living through a moment of profound spiritual and cultural upheaval, marked by widespread secularization. Data from the Pew Research Center’s Religious Landscape Studies (2007–2024) clearly illustrate this shift in the United States: The share of Americans identifying as Christian has dropped significantly from 78 percent in 2007 to 62 percent in 2023–2024. Much of this shift is driven by the growth of the religiously unaffiliated—those identifying as atheist, agnostic, or “nothing in particular”—the “nones.” Furthermore, a stark generational divide exists, as only approximately 46 percent of younger Americans (ages 18–24) identify as Christian, contrasted with about 80 percent of older generations. Related measures of religious practice have also declined, including the share of Americans who believe in God “with absolute certainty,” pray daily, or attend regular services.
These trends are not isolated to the U.S., reflecting global secularization that affects major world religions, including Christianity, Islam, Judaism, Buddhism, and Hinduism. A 2023 analysis of World Values Survey data found that age and income are among the strongest predictors of declining religiosity, confirming that modern economic and demographic shifts correlate strongly with this decline.16 The consequence of the decline of traditional religious structures (churches, organized faith, and institutional religion) is the creation of a spiritual and cultural void. This vacuum can then be filled by alternative spiritualities, existential searches, or other belief systems that offer meaning, structure, and a sense of the transcendent—including UFOs, alien mythologies, “otherworldly” beliefs, and nature mysticism.
As younger generations grow up without strong religious roots, their search for meaning and a comprehensive moral framework often shifts toward political, psychiatric, or identity-based frameworks rather than centuries-old orthodox religions. While these new frames of belief are influenced by contemporary cultural anxieties, they tend to be less stabilizing and reassuring than traditional faith and wisdom. Studies of the culture wars indicate that, instead of offering equanimous guidance, these ideologies frequently contribute to an “us versus them” positionality, demanding allegiance to a specific side rather than fostering broad acceptance or spiritual integration.17, 18, 19
A Desire for Faith

When social anxieties intersect with waning religious practices, a spiritual void emerges, which faith, in its deepest sense, functions to fill. Paul Tillich described faith as the recognition of what is ultimately important in life, providing meaning and courage in the face of despair.20 Faith counters the secular demand to find fulfillment solely in the material present by offering a framework of ultimate value that extends beyond the empirical, fostering trust that reality holds order, purpose, and goodness beyond human comprehension. While it does not remove suffering, faith situates pain within a larger narrative of redemption or spiritual growth, offering hope, belonging, and the resources to endure the “unlivable self.” In this light, participation in alien beliefs can, in part, be interpreted as a search for a similarly powerful spiritual experience.
For Carl Jung, the emergence and widespread cultural interest in alien experiences and UFOs were a form of spiritual projection. He posited that this phenomenon arose from a collective longing for something transpersonal—a desire for meaning and connection beyond the material world—driven largely by the decline of traditional spiritual practice and the sociopolitical existential crisis in the West. Jung argued that, regardless of their physical reality, what UFOs primarily represent to people is the archetype of salvation or integration, serving as a potent symbol of hope that something external might save humanity from its own crises.
This powerful psychological need quickly spilled into the social sphere: By the early 1950s, the world saw the beginning of UFO religious communities, almost all of which were tied to the emerging New Age Movement.21 This established a distinct, if unconventional, religious community that has since expanded into a diverse landscape of cults, spiritual groups, and online movements. These modern mythologies offer their adherents not only an answer to the cosmic riddle but also a sense of belonging, a moral framework, and a promise of ultimate transformation—functions historically reserved for organized religion.
The world of UFOs deeply echoes religious communities, particularly in how the phenomenon inherently divides people into believers and nonbelievers, subsequently demanding an alignment with a collective ideology or community for those who accept the narrative. In particular, abduction narratives strongly resemble spiritual transformation stories, carrying powerful mythic, symbolic, and spiritual overtones that speak to a profound human need. These experiences often involve narratives of a calling, being chosen, initiation, and transformation, placing the individual in touch with a greater, transcendent, and mysterious unknowable power.22 In this way, both alien abduction and traditional spiritual experiences—such as deep prayer, apparitions, mystical visions, or spiritual possession—can be viewed as powerful modern myths. They serve as psychic containers for deeper psychological realities, suggesting they both function as potent cultural frameworks for expressing profound feelings of internal conflict, such as disconnection, trauma, or identity crisis, and a fundamental longing for transcendence or an escape from the confines of a prescribed self.
If participation in UFO belief systems satisfies a spiritual longing, what’s the harm? Perhaps none. However, when such belief requires individuals to suppress instinct, embodied perception, and common sense, the stakes shift. We risk creating tension with the fundamental architecture of evolutionary biology and psychology. To override these deeply ingrained perceptual systems in favor of a socially constructed narrative demands a significant cognitive sacrifice—one that erodes the innate trust in our instincts that has historically kept us alive. Over time, this override may dull the very intuition evolution shaped to help us discern reality from story.
We cannot expect young Americans to find faith in religious institutions, as many of those institutions are still working to repair the trust of congregants they have long disenchanted. Yet faith—faith in something, anything—is essential to begin filling the emptiness left by a lack of meaning. Without faith in a larger cosmic order—be it a sense of karma, a belief in something greater, or a feeling of being loved or held by a transcendent whole—our younger generations are far more likely to attach to an ideology introduced to them on social media, one that often leaves them unattached to an embodied instinctual reality.
Into this void step alien narratives.
Area 51 may want to dust off the welcome mat. Not one, not two, but three interstellar objects have drifted through our solar system, now referred to as “interstellar interlopers.” Astronomers labeled them as 1I/‘Oumuamua in 2017, 2I/Borisov in 2019, and 3I/Atlas in 2025 (the prefixes refer to the order of discovery of the interlopers). While most astronomers see unusual but ultimately natural cosmic debris, Harvard astronomer and Galileo Project head Avi Loeb has stepped up to suggest these anomalous interstellar visitors could be alien technologies, possibly even a threat to humanity. Before we start waving white flags at space rubble, it’s worth noting that the rest of the scientific community is responding with something far less dramatic: data. Most scientists, armed with models and common sense, see nothing more exotic than fast-moving rocks and comets with unusual chemical compositions.
Avi Loeb: Prophet, Seer, or Publicity Seeker?

Avi Loeb is no UFOlogist conspiracy theorist with an active imagination. He holds Harvard’s Frank B. Baird Jr. Professorship of Science and has spent most of his academic life developing rigorous theories about black holes, galaxy formation, and the early universe. So, when he started speculating about alien artifacts drifting through our solar system and writing several popular books about extraterrestrials, it’s no surprise that a bevy of UFOlogists treated his words as something akin to the “next coming.”
In recent years, he has become known less for his contributions to cosmology and more for a far more audacious proposition: that humanity may have already encountered extraterrestrial technology created somewhere beyond our solar system. The shift has turned him into a public figure with an unusually large following for an astrophysicist, even as it strains his standing among colleagues. Admirers see him as refreshingly fearless, and he has inspired my young students to go into the sciences (he regularly posts emails from them on his Medium blog); critics describe him as a man who has allowed publicity to eclipse prudence. The tension between those two views defines the controversy that now surrounds his work.
The ‘Oumuamua Puzzle and Loeb’s Radical Interpretation

When astronomers in Hawaii identified an unfamiliar object sweeping through the solar system in October 2017, they immediately realized it was something unprecedented. The object—later named ‘Oumuamua (Hawaiian for “messenger from afar”)—did not behave like the comets or asteroids astronomers routinely study. Its elongated appearance, lack of visible outgassing, and slight but measurable change in velocity puzzled researchers.
A large team of scientists, led by Karen Meech at the Institute for Astronomy in Hawaii, published a widely cited paper in Nature in 2017, concluding that ‘Oumuamua originated from outside our solar system. Building on the data from that paper, Avi Loeb and his graduate student Shmuel Bialy (now at the Israel Institute of Technology) proposed in a 2018 Astrophysical Journal Letters paper that ‘Oumuamua might be a “fully operational probe sent intentionally to Earth vicinity by an alien civilization.” That is, of course, a possibility—as is a cosmic teapot in orbit. But science does not require disproving every far-fetched alternative. The burden of proof lies squarely with Loeb and his collaborators.
In his boldly titled book Extraterrestrial: The First Sign of Intelligent Life Beyond Earth, Loeb offered a hypothesis that captured worldwide attention: perhaps ‘Oumuamua was not a natural relic at all but rather a fragment of engineered technology, possibly a thin, reflective structure propelled by starlight. He emphasized that he wasn’t announcing definitive proof (despite the book’s title), only pointing out that an artificial origin could not be ruled out. Nonetheless, his willingness to discuss this prospect publicly pushed the story far beyond the walls of academia.
Here are a few unique characteristics of ‘Oumuamua: its elongated or flattened shape; the absence of any detectable outgassing—no observable water, CO, CO2, or dust; and a slight nongravitational acceleration as it departed the solar system.
Occam’s razor, named after William of Ockham (1287–1347) by Libert Froidmont (1587–1653), suggests that scientific hypotheses should invoke the smallest possible set of elements. For example, while staying in an old English hotel room, the lights flicker, the floor creaks, and the room gets chilly. You could conclude it’s the ghost of a Victorian child with unresolved issues—or, per Occam’s razor, you could check the wiring, the floorboards, and maybe close a window. When in doubt, blame the insulation before the afterlife. Occam’s razor doesn’t prove the simpler explanation is correct—just that it’s preferable until better evidence arises. It’s a tool for model selection, not an avenue to absolute truth.
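For readers who want to see the razor in action, here is a toy sketch (mine, not the author’s) using the Bayesian Information Criterion, one standard way statisticians operationalize Occam’s preference for simpler models: it rewards goodness of fit but charges a penalty for every extra parameter.

```python
# A toy demonstration of Occam's razor as model selection: the Bayesian
# Information Criterion (BIC) rewards fit but penalizes extra parameters,
# so the simpler adequate model wins. Data and models are invented.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)  # data that is truly linear

def bic(degree: int) -> float:
    """BIC = n*ln(RSS/n) + k*ln(n) for a polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    n, k = x.size, degree + 1
    return n * np.log(rss / n) + k * np.log(n)

for degree in (1, 9):
    print(f"degree {degree}: BIC = {bic(degree):.1f}")  # lower is better; degree 1 wins
```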
Let’s examine the data for ‘Oumuamua in this light. The elongated or flat shape: In three research papers, Steven Desch and Alan Jackson proposed that ‘Oumuamua is a collisional fragment of nitrogen ice from an exoplanetary Pluto-like body. Not only does this explain the flat shape, but also the lack of observable H2O, CO, and CO2, the absence of dust, and especially the magnitude of the nongravitational acceleration. I asked Desch what he thought of Loeb’s ideas about ‘Oumuamua and he responded: “Suffice it to say he [Loeb] long ago stopped being a serious scientist making innocent inquiries, and now unstoppingly manufactures doubt in the service of positioning himself as some sort of science maverick.” Sebastian Lorek and Anders Johansen’s theoretical work demonstrates that flattened, disc-shaped planetesimals can form naturally through the gentle gravitational collapse of a rotating “pebble cloud” in a protoplanetary disk. Lorek and Johansen emphasized to me that “the formation of flattened objects like ‘Oumuamua is a completely natural outcome of planetesimal formation.”
By contrast, Loeb postulates that ‘Oumuamua may be a light sail—a thin, flat structure propelled by radiation pressure (i.e., the momentum of photons from starlight or sunlight). Photons carry no mass, but they do have momentum. When they hit a surface (especially a reflective one), they impart a tiny push. Over time, this small force accumulates, especially in the vacuum of space where there’s no friction. The challenge with using solar radiation for propulsion is that its force decreases with the square of the distance from the source (1/r²). This pressure is weak but usable near Earth’s orbit (1 AU); at interstellar distances it becomes vanishingly small. In the vast space between stars, the photon flux is so low that even the nearest stars provide no meaningful thrust—effectively leaving a light sail adrift with nothing to push it along.
AI-generated rendering of a hypothetical alien light sail, the type of technology Avi Loeb proposes could explain ‘Oumuamua’s unusual acceleration through solar radiation pressure.

As for the nongravitational acceleration of ‘Oumuamua out of our solar system, Loeb believes that it can’t be explained by outgassing, because no gas or dust was detected. He proposed instead that the acceleration was caused by solar radiation pressure hitting a light sail. If ‘Oumuamua were an ultra-thin object, just 0.3–0.9 mm thick and tens of meters wide, it could have experienced enough radiation pressure at its closest approach to the Sun—0.25 AU, or one-quarter of an Astronomical Unit (the distance from the Earth to the Sun)—to account for the motion without requiring any expelled material. However, in 2023, Jennifer Bergner and Darryl Seligman showed that entrapped molecular hydrogen (H2) in water ice could have been released from ‘Oumuamua’s body as it warmed, producing the observed nongravitational acceleration without a visible coma (the cloud of gas and dust that typically forms around a comet when it gets close to the Sun). This supports the view that ‘Oumuamua was a comet-like planetesimal rather than anything technological. Although the study centered on chemistry, a consequence is that ‘Oumuamua must have had a very high surface-area-to-mass ratio for H2 outgassing to be effective. Such a requirement is naturally met by a thin, sheet-like geometry (a flattened body), again consistent with the disc-like shape inferred from the light-curve analyses. In short, even its puzzling acceleration can be explained by natural processes acting on an unusually flat, icy object.
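The scaling Loeb relies on is easy to check with a few lines of arithmetic. The sketch below is my own back-of-the-envelope calculation, not anything from the article; the 1,000 kg/m³ density is an assumed placeholder (roughly water ice). It computes the radiation-pressure acceleration on a perfectly reflecting sheet, showing both the boost near perihelion and the 1/r² collapse far from the Sun.

```python
# Back-of-the-envelope radiation-pressure acceleration on a flat,
# perfectly reflecting sheet. Assumptions: solar flux S0 = 1361 W/m^2
# at 1 AU; density rho = 1000 kg/m^3 (placeholder); thicknesses of
# 0.3-0.9 mm taken from the text.

C = 299_792_458.0   # speed of light, m/s
S0 = 1361.0         # solar constant: flux at 1 AU, W/m^2

def sail_acceleration(r_au: float, thickness_m: float, rho: float = 1000.0) -> float:
    """Acceleration (m/s^2) of a sunlit reflecting sheet at r_au AU.

    Pressure on a perfect reflector is P = 2*S/c with S = S0/r^2.
    Acceleration = P * area / mass = 2*S / (c * rho * thickness);
    the sheet's area cancels, so only areal density matters.
    """
    flux = S0 / r_au**2
    return 2.0 * flux / (C * rho * thickness_m)

for t_mm in (0.3, 0.9):
    a_peri = sail_acceleration(0.25, t_mm * 1e-3)   # perihelion, 0.25 AU
    a_far = sail_acceleration(100.0, t_mm * 1e-3)   # deep in the outer system
    print(f"{t_mm} mm sheet: {a_peri:.1e} m/s^2 at 0.25 AU, {a_far:.1e} at 100 AU")
```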
The Galileo Project and Loeb’s Expanding Quest

Rather than retreat from public engagement after ‘Oumuamua’s exit from the scene, Loeb broadened his search. In 2021, he launched the Galileo Project—funded entirely through private donations—with the goal of systematically looking for physical evidence of extraterrestrial technology. The initiative includes specialized camera systems aimed at tracking unusual aerial phenomena and an expanded effort to locate interstellar debris.
One object in particular drew Loeb’s attention: a meteor that exploded over the Pacific Ocean in 2014. A U.S. Space Command memo suggested the meteor may have originated outside the solar system. Loeb seized upon the idea that remnants from this event might still rest on the ocean floor, potentially offering clues about materials forged beyond our stellar neighborhood. So in 2023 he orchestrated an expedition off the coast of Papua New Guinea to retrieve microscopic debris from the area where the meteor had disintegrated. Funded by a cryptocurrency entrepreneur, the mission blended scientific ambition with adventure-story drama—all captured by a documentary crew (to be aired in 2026).
The expedition recovered tiny metal beads—mere fractions of a millimeter in diameter. Laboratory analyses revealed unusual ratios of heavy elements that did not neatly align with common terrestrial or meteoritic compositions. Loeb interpreted the findings as suggestive of an exotic, possibly interstellar, origin. He stopped short of outright claiming discovery of alien technology (the tiny spherules were not exactly the dashboard of the Millennium Falcon), but he made clear that he considered the possibility worth exploring.
Many experts quickly objected. Planetary scientists noted that it is extremely unlikely for an object traveling at such high velocity to leave behind intact solid fragments. Others questioned whether the spherules could even be tied to the 2014 meteor, or whether the meteor itself was truly interstellar. Critics argued that uncertainties in the military data make firm conclusions impossible, and that Loeb was again presenting the most sensational interpretation well before the evidence justified it.
The interstellar comet 2I/Borisov streaks through our solar system in this 2019 image from ESO’s Very Large Telescope. Unlike ‘Oumuamua, Borisov behaved like a typical comet, showing a bright coma and tail. The telescope tracked the comet’s movement, causing the background stars to appear as colorful streaks of light—a result of combining observations in different wavelength bands that give the image some disco flair. (Credit: ESO/O. Hainaut)

2I/Borisov is considered interstellar because it entered the solar system on a hyperbolic trajectory—with an orbital eccentricity greater than 3—meaning it is not gravitationally bound to the Sun and must have originated from outside our solar system. Its inbound velocity (approximately 32 km/s) and trajectory indicate it came from the direction of the galactic plane, rather than from within the Oort Cloud or Kuiper Belt. Unlike ‘Oumuamua, which baffled astronomers with its lack of cometary features, Borisov behaved exactly like a typical comet, complete with a bright coma, a dust tail, and outgassing of familiar volatiles like water, carbon monoxide, and cyanide.
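The “eccentricity greater than 3” claim can be sanity-checked from first principles. In the sketch below (mine, not from the article), the perihelion distance of about 2 AU is an assumed round value for Borisov; with the quoted ~32 km/s excess speed it indeed yields an eccentricity near 3.3.

```python
# A sanity check that Borisov's numbers imply a strongly hyperbolic orbit.
# The perihelion distance q ~ 2.0 AU is an assumed round value; the
# ~32 km/s excess speed is quoted in the text.
MU_SUN = 1.32712440018e20   # GM of the Sun, m^3/s^2
AU = 1.495978707e11         # astronomical unit, m

def hyperbolic_eccentricity(v_inf_kms: float, q_au: float) -> float:
    """Eccentricity from excess speed and perihelion distance.

    An unbound orbit has semi-major axis a = -mu / v_inf^2 (negative),
    giving e = 1 + q * v_inf^2 / mu, which always exceeds 1.
    """
    v_inf = v_inf_kms * 1e3          # km/s -> m/s
    return 1.0 + (q_au * AU) * v_inf**2 / MU_SUN

print(f"e = {hyperbolic_eccentricity(32.0, 2.0):.2f}")   # ~3.31: unbound, as stated
```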
Avi Loeb has suggested that Borisov may still deserve scrutiny as a potential technological relic—noting that it was more pristine than expected for a comet traveling interstellar distances, possibly implying unusual origins. However, most scientists interpret Borisov as strong evidence that other planetary systems form comets much like our own does. Its ordinary composition, active sublimation, and typical behavior all suggest it is natural, and in fact, it reinforces the view that cometary bodies are common ejecta from planetary systems throughout the galaxy. In Galileo Project Zoom meetings of late, Loeb has conceded that 2I/Borisov is a comet (Skeptic magazine’s Michael Shermer is on the Galileo Project team and attends the Zoom meetings).
3I/Atlas: The Third Interloper

3I/Atlas’s inbound excess velocity was about 58–61 km/s, far above the escape velocity of the Sun, indicating an origin outside the solar system (that is, it is not gravitationally bound to our solar system). Astronomers traced its incoming direction to the constellation Sagittarius and predict it will depart toward Gemini. Unlike the enigmatic ‘Oumuamua (which showed no outgassing) and more like 2I/Borisov, 3I/Atlas immediately revealed a coma and dust activity, behaving in most respects like a typical comet. Its trajectory and motion suggest it may have originated from the Milky Way’s thick disk, making it plausibly older than our solar system.
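For scale, a quick calculation (my addition, not from the article) shows why that excess speed rules out a bound orbit: the Sun’s escape speed at Earth’s distance is only about 42 km/s, and an object arriving with roughly 58 km/s to spare would be moving past 70 km/s near 1 AU.

```python
# Escape speed at 1 AU, and 3I/Atlas's implied speed near Earth's orbit
# via vis-viva for an unbound trajectory: v^2 = v_inf^2 + v_esc(r)^2.
import math

MU_SUN = 1.32712440018e20   # GM of the Sun, m^3/s^2
AU = 1.495978707e11         # astronomical unit, m

v_esc = math.sqrt(2 * MU_SUN / AU)               # solar escape speed at 1 AU
v_inf = 58e3                                     # low end of the quoted excess speed
v_near_earth = math.sqrt(v_inf**2 + v_esc**2)

print(f"escape speed at 1 AU: {v_esc / 1e3:.1f} km/s")          # ~42.1
print(f"3I/Atlas speed near 1 AU: {v_near_earth / 1e3:.1f} km/s")  # ~71.7
```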
Hubble’s image of interstellar comet 3I/ATLAS (365 million kilometers from Earth, July 21, 2025) shows a bluish, teardrop cocoon against streaked stars. While Avi Loeb suggests its sunward jet may be artificial, the consensus confirms it behaves like a natural comet. (Credit: NASA, ESA, D. Jewitt (UCLA); Image Processing: J. DePasquale (STScI))

From the start, astronomers have viewed 3I/Atlas as a natural cometary body. Observatories around the world (including Hubble, the James Webb Space Telescope, and the Very Large Telescope in Chile) tracked its movement, noting that it started releasing gas and dust at large distances from the Sun—an unusual but not unprecedented behavior. Spectral studies revealed a coma rich in CO2, CO, and diatomic carbon (C2), while surprisingly low in water vapor, which typically dominates the outgassing of solar system comets. Polarimetry—the study of how light becomes polarized after it reflects off or scatters through materials like dust or gas, used in astronomy to infer the properties of cometary surfaces and comae—showed an unusually strong negative polarization signal, suggesting the coma’s dust grains are very fine or have unusual textures, possibly hinting at a unique interstellar origin or formation environment. These characteristics, while distinct, fall within the natural diversity of cometary compositions, especially for bodies formed in the ultra-cold outer regions of a planetary system.
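For the mathematically curious, polarimetrists define the degree of polarization relative to the scattering plane (a standard convention I am supplying here; the article does not give the formula):

$$ P \;=\; \frac{I_{\perp} - I_{\parallel}}{I_{\perp} + I_{\parallel}} $$

where $I_{\perp}$ and $I_{\parallel}$ are the intensities polarized perpendicular and parallel to the scattering plane. “Negative” polarization simply means $I_{\parallel} > I_{\perp}$, a behavior cometary dust commonly shows at small phase angles; in 3I/Atlas this negative branch was unusually deep.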
Researchers note that 3I/Atlas offers a unique opportunity to expand our understanding of planetary formation beyond the solar system. Its high CO2 content, early activity, and evolving tail structure suggest it likely formed in a cold, distant part of its home system—perhaps a region analogous to our own Kuiper Belt. Its compact nucleus (likely under 1 km in size) and slowly rotating, modestly active profile contrast with the wildly tumbling, inert ‘Oumuamua. Scientists have emphasized that 3I/Atlas aligns with the expected behavior of a comet ejected from another stellar system, and they see no need to invoke exotic explanations.
Nevertheless, Avi Loeb has once again challenged the consensus. In public commentary and academic preprints, Loeb has listed a set of anomalies that, in his view, warrant consideration that 3I/Atlas might be artificial in origin. Among the features he highlights: jets pointing sunward as well as antisunward; an acceleration seemingly too high for an object of its apparent size; and an elevated nickel-to-iron ratio in its coma.
Although intriguing, there is nothing alien about 3I/Atlas’s jets. The presence of multiple jets pointing in both sunward and antisunward directions suggests that 3I/Atlas has several active regions on its rotating nucleus. As different surface areas are exposed to sunlight, localized jets of gas and dust are released, sometimes curving due to the object’s motion or erupting from regions not directly facing the Sun. This directional variety is a hallmark of cometary activity and reflects a complex interplay between surface composition, thermal dynamics, and rotational orientation—a more likely explanation than the alien rocket thrusts and maneuvers that Loeb proposes.
The same can be said for other characteristics Loeb deems of alien origin. The high acceleration relative to 3I/Atlas’s apparent size can be explained naturally by low-density, volatile-rich materials like CO2 or CO ices producing sustained outgassing. Similarly, the elevated nickel-to-iron ratio in its coma may result from observational bias—nickel is more easily detected in cometary gas, while iron often remains locked in dust. Both features fall within known cometary behavior and don’t require invoking alien technology.
Loeb’s position, as with ‘Oumuamua, is that extraordinary anomalies merit open-minded hypotheses. He does not claim that 3I/Atlas is definitively artificial, but argues that its distinctive properties should not be dismissed. He has proposed that it could represent alien debris, a probe, or some unknown technological object using controlled outgassing or exotic materials. Critics in the scientific community largely disagree, emphasizing that all of 3I/Atlas’s features—from its CO2-rich chemistry to its sunward jet and trajectory—can be explained by known physics. Observations of other comets with similar jets or compositional profiles provide natural precedents.
In late 2025, NASA officials released detailed observations of 3I/Atlas, and their conclusion was unequivocal: “It looks and behaves like a comet, and all evidence points to it being a comet. But this one came from outside the solar system, which makes it fascinating,” said NASA Associate Administrator Amit Kshatriya. Indeed, high-resolution images from spacecraft showed 3I/Atlas with a normal cometary coma and tail—essentially indistinguishable from ordinary long-period comets aside from its hyperbolic orbit. In other words, 3I/Atlas is far more likely a natural interstellar comet than an extraterrestrial spacecraft.
In the end, 3I/Atlas has reinforced a key message: interstellar objects are not all alike, and some may appear quite strange by our standards. While most planetary scientists remain confident in a natural origin for 3I/Atlas, its detailed study is ongoing. Loeb’s speculations, while provocative, remain unsubstantiated. Whether the anomalies he flags prove to be outliers or just unfamiliar variations within a broad population of extrasolar comets, 3I/Atlas has already deepened our understanding of how planetary systems beyond our own may evolve—and what fragments they might fling into the void.
A Netflix documentary crew has followed Loeb’s work for several years, including his 2023 expedition to recover interstellar meteor fragments from the Pacific Ocean. The film, which Loeb has confirmed is in production, is expected to be released in 2026 and will chronicle his search for extraterrestrial technology. It reflects not only his scientific ambitions but also his increasingly prominent role in the public imagination.
Over the past decades, we have witnessed a quiet yet decisive transformation in the history of human beliefs: the apparent disappearance of major paranormal phenomena that for millennia fueled mythologies, religions, folklore, and countless reports of supposed extraordinary manifestations. UFOs hovered over mountains and deserts;1 colossal creatures such as Bigfoot, the Yeti, or the Sasquatch roamed remote forests;2 spirits, apparitions, and ectoplasmic entities materialized in abandoned mansions;3 miracles occurred before the eyes of the devout;4 demonic possessions defied rational explanation.5 Today, all these phenomena seem to have taken permanent leave, an intriguing coincidence emerging precisely at the moment humanity begins to carry in its pockets (or better yet, in its hands) ultra-high-definition cameras capable of recording every detail of daily life, or any anomaly, with unprecedented precision.6
Before examining the role of smartphones, it is important to distinguish beliefs from manifestations. National opinion polls show that belief in paranormal phenomena remains high. A 2005 Gallup survey indicated that roughly three in four Americans believed in at least one type of paranormal experience, including haunted houses, communication with the dead, and astrology.7 Trend analyses aggregating data from Gallup, Harris, Pew, and other institutes show that, despite recent technological advances, these beliefs have remained remarkably stable, with only small declines in some items and even increases in specific beliefs such as ghosts and haunted houses.8 A more recent Gallup synthesis, from 2025, shows that 48 percent of American adults believe in psychic or spiritual healing and 39 percent in ghosts, while between 24 percent and 29 percent endorse six other supernatural beliefs; compared to 2001, variations are modest, with declines of only 6 to 7 percentage points in phenomena such as telepathy and clairvoyance.9 Literature reviews indicate that, in different countries, beliefs in spirits, UFOs, and other extraordinary phenomena remain widely disseminated among modern populations.10, 11, 12, 13
In other words, beliefs persist and remain widespread, but the supposed phenomena that should generate clear and reproducible evidence seem increasingly absent precisely at a moment when we possess technology capable of recording them with great clarity.14, 15 This shift invites a skeptical exercise: Why have paranormal and supernatural apparitions disappeared exactly when it became possible to document them unequivocally? For centuries, human testimony was the primary source of such accounts. However, scientific literature consistently demonstrates that testimony, even when sincere, constitutes extremely weak evidence: It is susceptible to perceptual illusions, cognitive biases, cultural expectations, and reconstructed (and often false) memories.16, 17, 18
In recent decades, quantitative studies on spontaneous reports of “anomalous” experiences also reveal a telling pattern: Although belief remains high, the number of people claiming to have personally experienced paranormal and supernatural phenomena tends to decline or stabilize at low levels compared with previous decades. Population surveys in the United Kingdom, for example, indicate that around 25 percent of adults report having seen a ghost, a number smaller than the prevalence of belief in ghosts, which remains above 40 percent.19, 20, 21 The discrepancy between the high prevalence of belief and the lower prevalence of reported experiences suggests that direct accounts have not kept pace with the persistence of belief, a pattern compatible with the growing impact of recording technology.
Recent experimental evidence reinforces this fragility. Contemporary studies show that up to 30 percent of participants incorporate false details into memories of extraordinary events after minimal suggestions or exposure to ambiguous images.22, 23 This type of cognitive vulnerability helps explain why, even before photography, reports of supernatural phenomena were so abundant despite the absence of reliable physical documentation.
With the popularization of photography in the late nineteenth century, the first “records” of ghosts, materializations, and spiritualist phenomena emerged, almost always blurred, overexposed, composite, or manipulated.24 Skeptics of the time, from Darwin25 to Houdini,26 had already warned of fraud, lighting tricks, and honest mistakes. Even so, these images fueled a fertile social imagination that was poorly equipped for the kind of critical analysis we now consider trivial.
Yet something fundamental changed when next-generation smartphones became ubiquitously available. Never in human history has there been a moment when billions of people possessed cameras with optical stabilization, precise sensors, 4K recording capacity, and the ability to capture phenomena instantaneously and share them within seconds.
Paradoxically, this same technological infrastructure has fueled an entire subculture of “ghost hunters” and smartphone-based spirit-detection apps. Ethnographic research on ghost-hunting communities shows the intensive use of high-definition cameras, motion sensors, and apps that simulate paranormal measurements; yet despite millions of recordings, no verifiable fact regarding the existence of ghosts has been robustly established.27, 28 Independent assessments of these groups further show that most of the supposed evidence (shadows, electromagnetic noise, or video distortions) corresponds to optical or acoustic artifacts already extensively described in the technical literature and often replicable under controlled conditions.29 Even more rigorous investigative protocols, such as controlled-environment monitoring with multiple cameras, have never produced replicable or consistent results. In other words, the capacity to search for evidence has increased exponentially, but the quality of the “proof” remains trapped in artifacts, ambiguities, and wishful interpretations.
At the same time, astronomers equipped with powerful, high-definition telescopes that observe the sky 24 hours a day have never recorded a single robust piece of evidence for objects of nonhuman origin. By contrast, systematic surveys conducted by professional astronomers estimate that more than 95 percent of investigated UFO reports correspond to satellites, rocket re-entries, aircraft, balloons, or common atmospheric phenomena.30, 31 This pattern was already known before the widespread adoption of smartphones, but it has become even more evident as observational instruments have grown more precise. Curiously, alleged extraterrestrials seem to prefer deserted roads, swamps, or isolated campgrounds, and maintain a distinctly selective shyness: They systematically avoid sharp, high-resolution cameras while tolerating grainy footage captured with old cameras or shaky amateur recordings.
The same inexplicable selectivity affects the great mythical creatures. Bigfoot, a creature whose existence contradicts all biological logic (no hominid species could survive in absolute isolation for hundreds of thousands of years without leaving fossils, consistent tracks, feces, or reproductive communities), vanished abruptly with the advent of modern smartphones. Recent research in ecology and environmental DNA biomonitoring, now used to track rare species, has likewise detected no genetic trace compatible with large unknown primates in North America, even in extensively sampled regions.32, 33 This kind of negative evidence reinforces the biological implausibility of a hidden large-bodied hominid. Hunters, hikers, mountaineers, and rural residents, all equipped with sophisticated cameras, have ceased to report sightings of the once-elusive primate. What remains alive is only the echo of old stories, always sustained by isolated footprints or shaky video footage.
Ghosts and spirits, likewise, seem to have adapted poorly to technological advancement. For centuries, claims of apparitions spread globally, reinforcing the sense that the supernatural was a universal feature of human experience. However, the more we improved our ability to record images, the more these ectoplasmic entities retreated into the invisible, or into the past. Today, there are no sharp, verifiable, or even minimally convincing records. It is as if the very ontology of such beings were incompatible with high-precision sensors, as if the supernatural had vanished precisely when it could finally prove its existence to skeptics.
From a methodological standpoint, this persistent absence of records is consistent with analyses in the philosophy of science applied to paranormal claims: If a phenomenon supposedly interacts with the physical world, it should be detectable by physical instruments; if it never is, despite the exponential growth in instrument sensitivity, then its existence becomes an increasingly implausible hypothesis.34
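The philosophical point can be phrased in Bayesian terms. A toy calculation (my own, with invented numbers) shows how repeated non-detection erodes credence faster as instruments improve: each year the probability of capturing a real, physically interacting phenomenon rises, so each empty-handed year counts more heavily against its existence.

```python
# A toy Bayesian reading of "absence of evidence": each year of improved
# cameras with no clear recording lowers the probability that a recordable
# phenomenon exists. Prior and detection probabilities are invented.
prior = 0.5                                # assumed starting credence
p_detect = [0.01, 0.05, 0.2, 0.5, 0.9]     # assumed yearly detection prob. if real

for p in p_detect:
    # P(real | no detection) via Bayes' rule; the skeptical hypothesis
    # always predicts no detection (likelihood 1).
    prior = prior * (1 - p) / (prior * (1 - p) + (1 - prior) * 1.0)
    print(f"after a miss with p_detect={p:.2f}: P(real) = {prior:.3f}")
```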
The same decline affects miracles and exorcisms. Although religious videos showing supposed instantaneous healings still circulate, such recordings never exhibit high-definition imagery, verifiable continuity, or transparent documentation. Sociological research on healing rituals also shows that, although millions of people report subjective experiences of “spiritual healing,” there is no video documentation of instantaneous, verifiable cures that meet minimal clinical criteria, such as independent pre- and post-examinations or transparent medical history.35 Medical literature likewise documents that many such claims can be explained by imprecise diagnoses, spontaneous remissions, or confirmation biases.36 The more sophisticated our recording technology becomes, the more rarefied extraordinary events appear to be.
Demons, once so present in cultural narratives, seem to have developed a profound aversion to high-resolution equipment. Beings allegedly so powerful, capable of opposing gods, tormenting humans across civilizations, making people speak extinct languages and levitate, now seem terrified of ordinary individuals armed with devices that could finally reveal their true face.
Some may argue that these phenomena still occur, but people have simply stopped recording them, even while carrying cameras virtually 24 hours a day. However, such a hypothesis runs entirely counter to contemporary behavior: We live in an era in which trivial dance trends accumulate millions of views, minor accidents are filmed from multiple angles, and any unusual animal becomes viral within minutes. Studies on the psychology of digital sharing show that unusual, threatening, or extraordinary content is significantly more likely to go viral, especially when it includes clear visual elements.37 This pattern makes it even more improbable that supposedly extraordinary phenomena would occur without sharp recordings, or that someone would deliberately refrain from filming or disseminating them.
Within this context, suggesting that people witness aliens, mythical primates, miracles, ghosts, or demons and simply “forget” to record them is, at the very least, an exercise in involuntary humor. In a world so deeply connected and driven by the banal as well as the exceptional, a video that confirmed, and definitively proved, any one of these phenomena would generate an almost infinite number of likes and would instantly elevate its creators to the category of highly profitable, widely recognized influencers.
The pattern that emerges is clear and epistemologically eloquent: The massive availability of recording devices has not reduced the prevalence of paranormal beliefs, but it has made the absence of robust evidence even more striking. Opinion surveys indicate that beliefs in ghosts, haunted houses, UFOs, or astrology remain widespread and, in many cases, have been stable for decades.38, 39, 40 However, when everyone can document the world with near-forensic precision, the territory of the supernatural does not expand toward clear evidence; it remains confined to ambiguous accounts, grainy videos, and testimonies vulnerable to perceptual illusions and cognitive biases.41, 42 New cameras do more than capture reality: They make it increasingly difficult to sustain, without embarrassment, that which depends on shadows and low verifiability.
In this context, it makes little sense to speak of the “end” of paranormal beliefs; what we observe is a growing mismatch between persistent beliefs and absent evidence. On a planet where much of the population carries in their pockets, holds in their hands, or mounts on the dashboards of their cars, high-resolution cameras with immediate access to social media, one would reasonably expect an explosion of sharp recordings of ghosts, demons, intervening deities, UFOs, or mythical primates, if such entities truly interacted with the physical world in any minimally recurrent or plausible way.43, 44
Instead, what accumulates are decades of opinion inquiries showing stable beliefs and a colossal volume of “evidence” that collapses under the first skeptical examination. The coincidence remains striking: Just when these phenomena could finally verify themselves before omnipresent cameras, they remain invisible.
The most parsimonious explanation continues to be the same one skeptics have long articulated: It is not that the phenomena have decided to retire or hide themselves; rather, there were never any paranormal phenomena to be recorded, only human interpretations of natural events, illusions, and frauds.