On his February 22, 2026 blog, the estimable evolutionary biologist, outspoken atheist, and (relevant here) staunch defender of determinism, Jerry Coyne, takes me to task for presenting “a muddled argument” in my case for compatibilism (in an excerpt in Quillette), which was based on a longer chapter in my book Truth: What it is, How to Find it, and Why it Still Matters.
First, let me acknowledge that this chapter in my book is in Part III, “Known Unknowables.” The section title follows Donald Rumsfeld’s famous epistemological trilemma, which includes “Known Knowns” (things we know that we know), “Known Unknowns” (things we know that we do not know), and “Known Unknowables” (things that are not ultimately knowable).
In this section of the book I include consciousness (the easy problem is understanding the neural wiring; the hard problem that I claim to be unknowable is what it’s like to be the wiring), God (I know of no scientific experiments or rational arguments that can prove its existence one way or the other), and why there is something rather than nothing (what do you mean by nothing, anyway?). So, in a sense, Jerry’s determinist position is, in my understanding of the problem, no more or less likely to be true, depending on how one defines the problem itself. I have defined it in a way that compatibilism works, whereas Jerry has defined it so that determinism works.
Second, this is why I reference the survey by David Chalmers, the philosopher who made famous the “hard problem of consciousness,” along with his colleague David Bourget. They asked 3,226 philosophy professors and graduate students to weigh in on 30 different subjects. Here is what they found regarding the free will issue:
Accept or lean toward:
- Compatibilism: 59.1%
- Libertarianism: 13.7%
- No free will: 12.2%
- Other: 14.9%
Now, on one level, it is irrelevant how many people believe something, along the lines of what Philip K. Dick meant when he defined reality “as that which, when you stop believing in it, doesn’t go away.” Yet, as I argue, there is something revealing about these figures. Namely, if the most qualified people to assess a problem are not in agreement on an answer—and the free-will/determinism problem has been around for thousands of years—it may be that it is an insoluble one, a known unknowable.
Third, therefore, it is entirely possible that a highly qualified, educated, and intelligent thinker like Jerry Coyne can make a compelling case for determinism, while at the same time a highly qualified, educated, and intelligent thinker like the late Daniel Dennett can make an equally compelling case for compatibilism (and Coyne and Dennett have locked horns on this very matter).
I agree with Jerry and Dan that we live in a determined universe governed by laws of nature. But I disagree with Jerry that this eliminates free will, or if you prefer “volition” or “choice” (again, this entire field is, to use Jerry’s term, “muddled” with confusion of terminology). My compatibilist work-around is “self-determinism”: although we live under the causal net of a determined universe, we are part of that causal net ourselves, helping to determine the future as it unfolds before us. My compatibilist position is based on the best understanding of physics today. Let me explain.
Physicists tell us that the Second Law of Thermodynamics, the law of increasing entropy, gives time its forward direction, and therefore no future scenario can ever perfectly match one from the past. As Heraclitus’ aphorism has it, “you cannot step into the same river twice,” because you are different and the river is different. What you did in the past influences what you choose to do next in future circumstances, which are always different from the past. So, while the world is determined, we are active agents in determining our decisions going forward in a self-determined way, in the context of what already happened and what might happen. Thus, our universe is not pre-determined in a block-universe way (in which past, present, and future exist simultaneously) but rather post-determined (after the fact we can look back to trace the causal connections), and we are part of the causal net of the myriad determining factors that create that post-determined world.
(Jerry inquires why I didn’t discuss quantum uncertainty in my analysis. The reason is that Dennett debunked this decades ago in Elbow Room: The Varieties of Free Will Worth Wanting, when he pointed out that any such quantum effects that alter other deterministic physical laws would not grant any type of free will or volition, for it would just mean that some percentage of your “decisions” are just random noise in the machine.)
Given the muddled terminology here, let me bring in the philosopher Christian List and his three requirements of volition from his book Why Free Will is Real:
- Intentional agency: the action was performed intentionally.
- Alternative possibilities: the person could have done otherwise.
- Causal control: the person was in control of the action.
As List explains in more detail:
Specifically, we need to know whether what the person did was freely performed, as characterized by the three bullet points above. Was it an intentional action? Could the person have done otherwise? Was the person in control? Or, if what the person did was not freely performed, we need to know whether the person’s free will was at least implicated in the run-up to it: Was there a free decision to get drunk in the first place, for instance? Of course, moral responsibility might well require more than that…but I do take the presence of free will somewhere along the relevant chain of events to be a necessary condition for a salient form of moral responsibility.

Of course, Jerry and other determinists like Robert Sapolsky and Sam Harris could just redefine the problem by saying that even the capacity to form an intention was pre-determined by atoms, molecules, and neurons, as is the capacity to consider several possibilities for action and the capacity to take such action. This is why I quoted Dan Dennett from my podcast conversation with him on this very challenge:
Determinism doesn’t tie your hands, nor does it prevent you from making and then reconsidering decisions, turning over a new leaf, learning from your mistakes. Determinism is not a puppeteer controlling you. If you’re a normal adult, you have enough self-control to maintain your autonomy, and hence responsibility, in a world full of seductions and distractions.

Since determinists often reference people suffering from extreme drug addiction or alcoholism, or those with a brain tumor that led to their bad behavior, like Charles Whitman in the Texas school tower shooting incident, I asked Dan about Sam Harris’s quote that “it’s tumors all the way down,” and Robert Sapolsky’s descriptor that “it’s turtles all the way down.” Here Dennett identifies the error in this line of reasoning:
Well, I like the way you put it very much, Michael, because I think you put your finger on the mistake that Sapolsky is making there. And Sam Harris makes it too. No, it’s not tumors all the way down. It’s machinery all the way down. But there’s good machinery and there’s bad machinery. And if we have bad machinery, then yes, we’re disabled to some degree. But what about people who have good machinery? They’re not disabled. Why can’t we hold them responsible? Now, some people are, alas, through no fault of their own, not responsible for what they do. And that might well include people with terrible, terrible youths, who didn’t get a good upbringing, or who had a horrific upbringing. And so we have to decide, as society, given that this is a dangerous person, what’s the humane, good thing to do? I don’t think there’s an algorithm or a bright line for distinguishing somebody whose brain is good enough from somebody whose brain is a little too disabled. We just have to make the decision.

Dennett then brings home real-world examples:
We do it all the time. You’ve got to be 16 to get a driver’s license. Some 15-year-olds would be perfectly safe as drivers. Some 21-year-olds would not. But the law has to have a bright line and so it chooses one. We might argue whether we want to raise it or lower it, the way the drinking age has been raised or lowered, or the way the driving age has been raised or lowered. We have to have a policy and we have to stick to it and we can change it as we learn more and more. But what we don’t do is just say, “Oh, it’s disability all the way down.” No, you’re not disabled, I’m not disabled. I want to be held responsible. I think you want to be held responsible too.

Coyne is unhappy with my invocation of “emergence” and says I’m being rude to him and Sapolsky and Harris in accusing them of “physics envy,” but that’s what it is! Here, for example, is Sapolsky defending his belief that free will does not exist because single neurons don’t have it: “Individual neurons don’t become causeless causes that defy gravity and help generate free will just because they’re interacting with lots of other neurons.”
In fact, the interaction of billions of neurons is exactly where self-determinism (or volition, or free will) arises. This is why I like to ask determinists: Where is inflation in the laws and principles of physics, biology, or neuroscience? Nowhere, because inflation is an emergent property arising from millions of individuals engaged in economic exchange, a subject properly described by economists, not physicists, biologists, or neuroscientists.
Rather than quoting myself again, I will invoke the geneticist and neuroscientist Kevin Mitchell from his book Free Agents, in which he shows that the determinist’s reductionistic approach to understanding human thought and behavior is not just wrong, but wrong-headed! How?
Basic laws of physics that deal only with energy and matter and fundamental forces cannot explain what life is or its defining property: living organisms do things, for reasons, as causal agents in their own right. They are driven not by energy but by information. And the meaning of that information is embodied in the structure of the system itself, based on its history. In short, there are fundamentally distinct types of causation at play in living organisms by virtue of their organization. That extension through time generates a new kind of causation that is not seen in most physical processes, one based on a record of history in which information about past events continues to play a causal role in the present.

Thus, I conclude that the free will/determinism issue is an insoluble problem because we may be ultimately talking past one another at different levels of causality: the reductionist’s atoms, molecules, and neurons versus the emergentist’s brains, people, and societies.
Choose a side. The choice is yours!
The question of whether or not we have free will has been pondered by philosophers, psychologists, theologians, neuroscientists, and by many of us in our own conversations and thoughts. Nearly two thousand years ago, the Stoic philosopher Epictetus declared, “You may fetter my leg; but not Zeus himself can get the better of my free will.”1 But Epictetus also believed in a deterministic world where each event is determined by preceding causes. How can this apparent contradiction be resolved?
In the 1940s, Bertrand Russell saw no reason that human volitions would not also be determined in the same way that inanimate processes are determined. Further, he saw the determined nature of volitions as incompatible with a person being the true source of his own actions. Russell supposed that an evil scientist could, by use of psychoactive drugs, manipulate a person to perform certain actions. And this hypothetical manipulation did not seem to Russell so different from normal life, where people are manipulated to do what they do by natural causes outside their own control.2
Fifty years after Russell published his critique of the Stoic notion of free will, several other philosophers made the same argument.3, 4, 5 Today, the continued quandary contributes to a sustained lack of consensus on free will. According to surveys, most people—including most philosophers—believe in some form of free will, most under the rubric of compatibilism.6, 7 Novelist and Nobel Laureate Isaac Bashevis Singer summed up the dilemma, “We must believe in free will, we have no choice.”
However, the debate still rages in the world of academic philosophy, in a broader audience reached by podcasts and popular books written by scientists, and among readers of Skeptic. Here I will try to convince you that free will is real and not an illusion. I’ll argue that far from being exemplars of rationality and skepticism, the main arguments against free will make unjustifiable logical leaps and are naïve in the light of cutting-edge scientific findings.
Throughout the philosophical literature,8 resolving the question of whether or not we have free will has often revolved around two criteria for free will: being the true source of one’s own actions, and the ability to do otherwise.
I argue that humans meet both criteria through two concepts: scale and undecidability.
Scale and the True Sources of Our Actions

In an article in The Journal of Mind and Behavior,9 I argued that many of our actions are caused by our wills; that is, by our conscious desires and intentions. This is not disputed by most (what I’ll term) free will deniers. They more often dispute that our wills are free, not that we have wills and that our actions often follow from our wills. Sam Harris, one such determinist with a large general audience, has said that the subjectively felt intention to act is the proximate cause of acting. Harris makes the same basic claim as renowned scientist Francis Crick,10 philosophers such as Bertrand Russell11 and Derk Pereboom,12 and many others. They claim that in addition to the proximate cause (the will), our actions have ultimate causes lurking behind them that are the relevant causes to consider when judging whether or not our wills are free. The ultimate causes beyond and beneath the surface of our wills, they argue, make them unfree. What are these ultimate causes? Harris identifies genetics and environmental influences as “the only things that contrive to produce” his particular will.13 Molecules beyond DNA have also been offered as ultimate causes of our decisions. Biologist Jerry Coyne argued that “Our brains are made of molecules; those molecules must obey the laws of physics; our decisions derive from brain activity.”14 Robert Sapolsky, a prominent neuroendocrinologist, is publishing a book this year, detailing many such mechanisms that, it is claimed, obviate the role of willed choices.15
What’s wrong with this line of reasoning? Consider the following question as an analogy: Are apples red? Suppose we all agree that apples have color. The question is whether the color is red or non-red. To answer the question, determinists would look beyond the proximate color of the apple. Realizing that the apple is nothing but atoms, they would examine many of the carbon atoms on the surface of the apple. They find that not a single carbon atom is red. Since none of the atoms are red, and the apple is nothing but atoms, they would conclude that the apple can’t be red. The error is that though they agree the apple has a color, they try to examine the nature of the color at a scale (a carbon atom is smaller than the wavelength of red light) where color is incoherent. The fact that they found no redness at that scale shouldn’t lead them to conclude anything about the color of the apple.
Likewise, the fact that determinists find no personal authorship or freedom in the actions of molecules shouldn’t lead them to conclude anything about the nature of the will. We agree that we have wills, that we have subjectively experienced intentions that influence our actions. The question is whether our will is free or unfree. To look at molecules for the answer is a scale mistake. DNA and neurotransmitters observed at the molecular scale exhibit no will whatsoever. With that knowledge, is it compelling that they exhibit no free will? No. That should tell us that determinists are looking at the wrong scale to find answers about the will, just as it would be a mistake to look for answers about redness at a scale where color is not meaningful.
The right scale for finding answers to the question of apple redness is the apple scale, not the atom scale. The right scale for finding answers to the question of freedom of the will is the agent scale, not the molecule scale. Searching the molecule scale is just one example of this error. There are many other wrong scales where a confused determinist might look for answers about the will. He or she may zoom out temporally into an irrelevant timescale, including the time before the will in question existed. In the above analogy, this would be like conceptualizing the apple as merely a step in a process of agricultural industry. Since agricultural industry is not red, should we conclude that the apple is not red? The question about the will can only find its answers from a scale where the will exists as a will. Expanding the timescale to include the time before the person was born renders the question incoherent.
If we keep our analysis in the scale where the individual agent exists, not zooming too far in nor too far out in space, time, or level of organization, then the primary and ultimate cause of my actions is me. The will emerges from the complex interactions of many small parts. It’s literally not true to say that it’s caused by any particular small part. It is caused by many small parts, but only when taken together all at once. And that’s the same thing as the whole person. So my thoughts and actions are deterministically caused by me. The molecules of which my brain is made are simply irrelevant to this fact. So I am the true source of my own actions, and there are no other “ultimate” causes. My mind does not exist as a molecule nor as a historical epoch, nor as a socioeconomic class. Yet my mind does exist. René Descartes’ “I think therefore I am” convinces me of this.16 In order to claim that my choices are really caused by a molecule or a historical epoch, one must refer to the dynamics of a scale where I (that is, my mind) cannot be found. Eliminating the mind from the analysis is not a valid way to answer a question about the mind.
The Ability to Do Otherwise

There is a temporal asymmetry in the question of whether I could have done otherwise. In the question’s typical form, it is backward-looking. It asks about what could have been in the past, and, at first, it seems like a coherent question. I did one thing yesterday, and we wonder if I could have done something else. But what if we wanted to figure out whether or not I’ll have free will tomorrow? From that temporal angle, the question of the ability to do otherwise stops making sense. In a forward-looking sense, the question becomes manifestly nonsensical. Can I do otherwise in the future? Otherwise? Other than what? Other than the thing I will do? The question stipulates that I will do a certain thing, and simultaneously asks whether or not I can avoid doing that thing. The stipulation contained within the question makes the answer trivial. No, of course I cannot do something other than the thing I will do. In order for the question to have any significance in the forward-looking tense, it must be modified. The question cannot directly stipulate that I will do a certain thing. The question must ask whether or not I can do something other than what I’m expected to do, not other than what I will do.
Human choice is temporally asymmetric and must be analyzed as such. This point could be missed without properly situating our analysis at the correct scale. An inappropriate focus on the dynamics of little particles could obscure the truth. The laws of physics that describe or govern the interactions of particles do not specify a direction of time. If we could watch a video of two protons colliding, we would have no way to know whether the video was being played forward or in reverse. This is called time reversal symmetry. This symmetry holds true in a wide variety of particle interactions.17 Time appears asymmetric only at scales where emergent phenomena transpire. Large collections of particles obey the second law of thermodynamics, which is not time reversal invariant. As astrophysicist Matt O’Dowd put it, “Zoom in to individual particle interactions and you see the perfect reversibility of the laws of physics. But zoom out, and time’s arrow emerges.”18 A consideration of scale leads to a recognition of temporal asymmetry in human choice.
In analyzing the ability to do otherwise, we should consider only a forward-looking ability because choices, by their nature, are forward-looking. We don’t deliberate or make choices about the past. Choices are always about something, and those objects of choice always lie in the future, thus choices are always forward-looking. At the time when a choice is actually made, there is as yet no “what” as in “Could have done other than what?” I have not already made the choice, so there is no established action to have done otherwise. There can only be expectation of what I will do. If my actions are in principle perfectly predictable, then I do not have the ability to do otherwise in a forward-looking sense. If my choices are in principle not predictable, given total knowledge of the present world, then I do have the ability to do otherwise in a forward-looking sense, which is the only sense that makes any sense. Given the different dynamics found at different scales, the ability to do otherwise needs to be understood as temporally asymmetric; that is, as always forward-looking; as the ability to do something which is in principle not predictable. We do have that ability, and it derives from our self-referential nature.
Self-Reference and Undecidability

The fact that I am the relevant cause of my own actions comes with another important implication: I am a causally self-referencing entity. If a molecule were the relevant cause of my action, this would not be true in the same way. The molecule has no capacity for self-reflection, but I do. I can ask myself, “What will I do? What could I do? What should I do? What do I want to do? What would I do if I wanted to do X and should do Y?” Self-referential questions like these affect the choices that I make; and those choices change the self-referential questions that I ask.
At the relevant scale, self-reference is causally important. I am a system which analyzes its own inputs, character, and potential outputs; generates new outputs based on those analyses; and feeds those new outputs back into itself as inputs which affect the outputs, which affect the system’s character. I am an output of and an input for my own processing. Framing the human self-referential nature in this way brings us to the concept of undecidability.
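The feedback structure just described, in which a system's outputs are fed back as inputs that reshape its own state, can be sketched as a toy loop. This is my own minimal illustration of the general idea, not a model of any actual cognitive process:

```python
# Toy self-referential loop: each output is computed from the current state,
# then fed back in as the input that becomes the next state.
def step(state):
    return state * 0.5 + 1.0  # the system "processes" its own state

state = 1.0
history = []
for _ in range(5):
    output = step(state)   # output generated from the system's own state
    state = output         # ...and fed back as the next input
    history.append(output)

print(history)  # the loop settles toward a fixed point at 2.0
```

Even this trivial loop shows the formal shape of self-reference: the function's output at one iteration is its input at the next, so the system's trajectory depends on its own history of processing.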
A system that exhibits undecidable dynamics cannot be predicted, given complete knowledge of its present state. Computer scientists and mathematicians have proven that this fundamental unpredictability shows up in some algorithmic computations, mathematical systems, and dynamical systems (including physical systems).19 Though an unpredictable dynamical system may evoke the concept of chaos, undecidability is not chaos; it is a different sort of unpredictability. IBM research scientist Charles H. Bennett makes the difference clear:
For a dynamical system to be chaotic means that it exponentially amplifies ignorance of its initial condition; for it to be undecidable means that essential aspects of its long-term behaviour—such as whether a trajectory ever enters a certain region—though determined, are unpredictable even from total knowledge of the initial condition.20

If a system exhibits undecidability, then it is unpredictable even given total knowledge of all of its constituent parts. Undecidability makes deterministic systems fundamentally unpredictable in principle, not as a result of merely lacking precise measurements. If humans can exhibit undecidability, then we meet the second main criterion for free will: the forward-looking ability to do otherwise. Scientists recently made such an argument feasible by explicating what features of a system give rise to undecidable dynamics. In 2019, Mikhail Prokopenko and his colleagues conducted a comparative formal analysis of recursive mathematical systems, Turing machines, and cellular automata. They come to a clear conclusion:
As we have shown, the capacity to generate undecidable dynamics is based upon three underlying factors: (1) the program-data duality; (2) the potential to access an infinite computational medium; and (3) the ability to implement negation.21

If humans do have these three properties, then we meet the criteria for undecidable dynamics, which means we can take actions that are fundamentally unpredictable, which means we have the ability to do otherwise in a forward-looking sense, which means we have free will.
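Bennett's distinction between chaos and undecidability is worth seeing concretely. Chaos means exponential amplification of ignorance about the starting point; a standard textbook example (my own choice of illustration, not one used in the article) is the logistic map at its fully chaotic parameter:

```python
# Logistic map x -> 4x(1-x): fully deterministic, but it exponentially
# amplifies any ignorance of the initial condition (Bennett's chaos).
def logistic(x, steps):
    traj = [x]
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        traj.append(x)
    return traj

a = logistic(0.3, 60)
b = logistic(0.3 + 1e-12, 60)   # nearly identical starting point
diffs = [abs(p - q) for p, q in zip(a, b)]

print(diffs[3])    # still tiny: the two trajectories agree early on
print(max(diffs))  # later they diverge to order 1
```

Undecidability is a stronger kind of unpredictability: even with the initial condition known exactly (so the chaos above disappears), some questions about the system's long-term behavior remain unanswerable in advance.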
First, consider program-data duality, which in this context is the ability for self-reference. The word “duality” simply refers to the typical distinction between program and data with which we are all familiar. A human at time 1 has a certain overall state of mind, coinciding with a certain overall physical state. The state at time 1 is a program, in that it entails implicit rules about what the system would do, given certain types of data. The streams of perceptions taken in at time 2 are data, which get processed according to the implicit rules. In addition to processing basic sense data, this duality allows for a program (or implicit set of rules encoded in the state of a human) to process other programs as data. For example, a human can process ideas, hypothetical scenarios, mathematical operations, and representations of the self as data (thus self-reference).
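Program-data duality, in which the same object serves both as rules to execute and as material to be processed, is familiar from ordinary programming. A toy sketch of the idea (my own illustration, not drawn from the article or from Prokopenko's formalism):

```python
# A "program" represented as plain data: an ordered list of rule descriptions.
program = [
    ("add", 3),
    ("mul", 2),
    ("add", 1),
]

# An interpreter that treats that data as a program and executes it.
def run(program, x):
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    for name, arg in program:
        x = ops[name](x, arg)
    return x

print(run(program, 4))  # (4 + 3) * 2 + 1 = 15

# The same data can also be analyzed rather than executed:
print(len(program), "instructions, of which",
      sum(1 for name, _ in program if name == "add"), "are additions")
```

The list is a program when handed to `run` and plain data when counted or inspected, which is the duality in question: a system that can represent rules as data can also apply rules to other rules, including representations of itself.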
The next requirement for undecidability is the potential to access an infinite computational medium. The computational medium is the substrate on which the state of the system is represented. In a computer, the computational medium would be the memory and storage. The set of all possible states of the system is called the state-space. For example, the state space of a computer would be the set of all possible configurations of its memory and storage. If we knew that a certain system had an infinite state-space, we could infer that the system has access to an infinite computational medium.
It can be informally proven that humans have an infinite state-space. How many different thoughts is it possible for a human to have? That question includes sub-questions, such as how many things is it possible for a human to see? The state of your visual perception is one small part of your overall state. Think of the number 74. Now think of the number 74 with your eyes closed. Those two occasions of thinking of 74 occupied two very different points in your state-space because of the difference in visual perception.
To roughly estimate how many overall states are possible while thinking of 74, we would need to do something like multiply the number of possible visual perceptions by the number of possible auditory perceptions by the number of possible sensations of heat and cold by the number of possible gradations of feeling sadness or happiness, and so on. Also, you may think of 74 while remembering, for example, the time you thought of 106 or 107. And the next time you think of 74, that will be yet another point in your state-space, since you’ll recall that you’ve thought of 74 before. There may be an infinite number of possible states in which you might think of 74. And there are many conceivable numbers other than 74, and many things to think about other than numbers.
An obvious objection might be that a human and his brain are physically finite. In what sense can an organ that fits inside a skull be infinite? As a starting point, consider the 100 billion neurons that make up the brain. As a simplification, a neuron can be considered to be “firing” or “not firing.” So a simplified brain has 100 billion binary cells. Such an array of cells could instantiate 2^100,000,000,000 distinct patterns of on-or-off activation. That’s a big number. For comparison, there are estimated to be roughly 10^80 atoms in the observable universe.22 The number of atoms in the universe is an infinitesimally small number compared to the number of activation patterns possible in a simplified brain. And what about a real brain? A real brain is made of neurons which are not simply on or off. Some neurons show gradations in voltage and neurotransmitter release, meaning that they have many possible states between “on” and “off.”23
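The comparison in the paragraph above is easy to check with logarithms, since 2^100,000,000,000 is far too large to write out. A quick sketch using the article's figures (100 billion binary neurons, roughly 10^80 atoms):

```python
import math

neurons = 100_000_000_000                  # simplified binary neurons, per the article
log10_patterns = neurons * math.log10(2)   # log10 of 2**neurons
log10_atoms = 80                           # ~10**80 atoms in the observable universe

# The pattern count has about 30 billion digits; the atom count has 81.
print(f"2**{neurons} has roughly {log10_patterns:.4g} digits")
print(f"that exceeds the atom count by a factor of about 10**{log10_patterns - log10_atoms:.4g}")
```

The point survives any reasonable quibble over the inputs: the number of activation patterns has about thirty billion digits, while the atom count has eighty-one.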
Besides neurons, there are many other variables in the brain that are not captured by the simplified on/off variable. Each neuron can vary in the amount of neurotransmitter in its vesicles ready for release, and the state of the receptors on its soma and dendrites (that is, to what degree they’re blocked by other molecules). There can also be variation in the amount of neurotransmitter that is floating free at any moment in the space between any two neurons. There are minute variables that will likely never be measured yet do, theoretically, make a causal difference. For example, in what spatial direction is each neurotransmitter molecule oriented? A neurotransmitter molecule must fit into a receptor in order to carry on a signal. For the molecule to fit, it must be facing a certain direction relative to the receptor. So the spatial orientation of the molecule before binding must have some nonzero effect on the binding affinity. How many different patterns of analog spatial orientation might trillions of neurotransmitter molecules be capable of? That alone may be infinite. The variable of “firing” or “not firing” does not capture any of these variables. So the actual number of possible overall brain states is some large exponent greater than 2^100,000,000,000 which is a large exponent greater than the number of atoms in the universe.
Whether the human state-space is technically infinite or merely practically infinite (larger than any number ever computed for any purpose in all of science), it will not be exhausted in the meager 100 years of a human lifespan. This means that the self-referential loops of processing need not stop at any predetermined iteration or level of abstraction. So for the purpose of analyzing the choices of a human, the state-space and computational medium are functionally infinite.
The last element required for undecidability is the ability to implement negation. Negation in this context refers to the ability of a logical system to produce an output which is exactly contrary to the processing which led to the output. It is equivalent to the liar paradox, which is exemplified in a statement such as “everything I say is a lie,” or more formally, “this statement is unprovable.” The liar paradox is a self-referential statement, which cannot be judged to be true or false without a contradiction. Self-reference is fundamental to this paradox because the statement refers to its own validity. If humans can implement this paradoxical logic into their thinking, then humans meet this requirement for producing undecidability. The fact that humans came up with the liar paradox thousands of years ago is evidence that humans can perform the logical operation of negation.
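The way negation defeats prediction can be made concrete with the diagonal trick behind Turing's halting problem: an agent that consults any proposed predictor of its own behavior and then does the opposite. This is my own toy sketch of that standard argument, not code from the article:

```python
# Diagonalization: an "agent" that implements negation over any predictor
# handed to it. Whatever the predictor forecasts the agent will do,
# the agent does the opposite, so the forecast is always wrong.
def contrarian(predictor):
    forecast = predictor(contrarian)   # self-reference: predictor examines the agent
    return not forecast                # negation: act contrary to the forecast

def always_yes(agent):
    return True    # predicts the agent will output True

def always_no(agent):
    return False   # predicts the agent will output False

print(contrarian(always_yes))  # -> False (the prediction True was wrong)
print(contrarian(always_no))   # -> True  (the prediction False was wrong)
```

No matter how sophisticated the predictor, the same structure applies: self-reference plus negation guarantees the agent's output differs from the forecast. This is the formal engine behind the undecidability results the article invokes.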
Conclusion

All three factors underlying the capacity to generate undecidable dynamics are present in humans. First, we exhibit program-data duality when we process ideas, hypothetical scenarios, mathematical operations, and representations of ourselves as objects of thought. Next, we have the potential to access an infinite computational medium. This is demonstrated by the fact that we can think of any one of an infinite number of objects of thought, which implies an infinite state-space, which implies an infinite computational medium. Finally, we have the ability to implement negation, demonstrated by the inception of the liar paradox in the minds of humans. If these three elements are sufficient to generate undecidable dynamics, then humans are capable of generating undecidable dynamics, which means we cannot be accurately predicted. And that means we have the ability to do otherwise in the forward-looking sense.
Figure 1. Relational map of concepts. The truth of each concept supports the truth of the concepts downstream from it. This diagram illustrates how the concepts described throughout this article contribute to the overall reality of free will.

Figure 1 shows the relationships between the concepts discussed in this article. An understanding of the human agent at the scale where conscious humans actually exist leads to recognition of the self as the source of one’s actions, recognition of the relevance of temporal asymmetry to human choice, and recognition of self-reference as causally relevant to human actions. Self-reference, in combination with access to an infinite computational medium and the ability to implement negation, results in undecidable dynamics. This entails the ability to do otherwise in the forward-looking sense, which is the only sense that makes any sense when temporal asymmetry is taken into account. The resulting total picture is that we (humans) meet two criteria for real free will: the forward-looking ability to do otherwise and being the source of one’s own actions.
Viewing human agents as whole humans instead of as molecules makes it clear that humans are the cause of their own actions, and also leads to a focus on the human features, such as self-reference, that generate undecidable dynamics. The Stoic philosopher Epictetus was right. Neither Zeus, Bertrand Russell, nor the scientists recapitulating the latter’s argument 77 years later can diminish our free will.
A recent study shows pretty clearly that high schoolers benefit from a little extra sleep. We will get to the study in a bit, but first I want to note that this information is not new. Teenagers tend to stay up late, and yet we make them get up super early to be at class, often by 7:00 AM. This is not good for their health or their learning. So why do we do it?
The primary reason is logistical, and it is tied to cost. School systems have tiered start times for elementary, middle, and high school because this allows them to use the same fleet of buses and drivers for all three. Starting high school later, at the same time as middle school, would mean increasing the size of the fleet. There are other stated reasons, but honestly I think this is the real reason and everything else is post-hoc justification. The other reasons are really tradeoffs that benefit some people but not others. For example, a parent with a long commute can drop off their high schooler on the way to work. There is more time for after-school clubs, sports, and jobs. And some older teens can get home early to watch their younger siblings until their parents get home.
This all points to a main reason our civilization is frustratingly sub-optimal (to be polite). The default is to follow the pathway of least resistance – everyone just does what’s best for themselves, with people in power doing their best to solidify more power, with vested interests putting the most consistent effort into making the system work for their narrow interest. What is often lacking is any kind of systemic planning, and when that does occur (even with the best intentions) the law of unintended consequences often results in a net wash or even detriment. The world is complex, and we are just not very good at managing that level of complexity. What we need are institutions that can accumulate evidence-based institutional knowledge to incrementally make things work better. But that’s a lot of work, and it’s too easy for vested interests to sabotage such efforts.
I’m not trying to be nihilistic – nihilism is part of the problem, and is often used as a weapon by those vested interests to short circuit attempts to make things work better for everyone. But we have to understand the nature and scope of the problem, and we need the energy and dedication to sustain efforts to make things work better. Such efforts can work, and historically they have made things better. But it’s a constant struggle.
OK, back to the study. In this study, students were given the option to start class up to an hour later. For example, school would officially start at 8:30, with an optional module offered at 7:30 for those who wanted to come early and end early. They found:
“Under the flexible model, 95% of students used the later-start option. The median SST was delayed by 38 minutes (n = 711, β = .57, 95% confidence interval [.53, .62], p < .001, R2β = .52), with corresponding significant delays in wake times and increased sleep duration on school days. Among the paired subsample, SST delay was significantly associated with increased school day sleep duration (n = 205, β = .51 [.05, .94], p = .03, R2β = .02). No worsening was observed. Improvements included reduced problems falling asleep, fewer students with clinically low health-related quality of life, and higher scores in mathematics and English.”
Now that I am retired I have personally experienced (yes, this is just anecdotal) the benefits of sleeping in longer. I no longer even set an alarm – I wake up when I feel like it. I am still working basically full time doing all my science communication activities, but mostly on my own schedule. My sleep quality and daytime alertness have significantly improved. I highly recommend it. But more importantly – the evidence clearly shows that this is generally true – being able to sleep in longer results in better sleep and performance.
So it seems like a no-brainer – why can’t we do this? I think the key here is flexibility, which can be paired with increased flexibility at work, especially for parents. Flexible work start times and the ability to work from home, even if only 1-2 days a week, result in a huge improvement in life satisfaction. Then families will have the ability to make their schedules work. Let’s prioritize sleep, health, and educational effectiveness first, and make the system work for these goals. It makes no sense for a school system to sacrifice the well-being and education of its own students in order to meet its own logistical needs.
The obvious response to this question is – well, it’s all about money. We have to be realistic. School systems operate with limited budgets and have to make the most of the resources they have. If they have to maintain a larger bus fleet, where will that money come from? I get it. This is reality. My question is – who made this decision? Did we as a society, or even just the affected parents, make this decision collectively with adequate information to understand its implications? We may just have to accept the fact that running an effective school system is more expensive than we might want it to be, and that cutting costs in this way is simply not an acceptable option.
If we prioritize the health and education of students, I think we will find there are other elements of the system that can accommodate. This is where municipal planning becomes even more integrated. Investing in public transportation and subsidizing it for students, for example, will give students more options and reduce the strain on a dedicated school busing system. Facilitating carpooling among students is another option. More parental flexibility helps. Make schools more local and walkable/bikeable, and organize safe group walks to and from school. Optimize and disperse drop-off areas to limit bottlenecks and reduce congestion.
This requires thoughtful planning, but mostly an unwillingness to simply sacrifice students to simplify logistics and reduce costs.
The post Flexible School Start Time first appeared on NeuroLogica Blog.
In what will certainly fail to go down as the news of the century, Imane Khelif, male boxer and women’s boxing Olympic medalist, finally publicly admitted in a February 2026 interview that he is indeed biologically male. A large part of society specifically chose not to see it. And another part chose not to care that, eighteen months ago, two men were given a free pass to an abuser’s dream: the ability not only to assault women on an international stage, but the chance to be celebrated for it.
The 2024 Paris Olympics gold, silver and bronze medals, as designed by Chaumet (Credit: LVMH)

Boxers Imane Khelif of Algeria and Lin Yu-ting of Taiwan entered the 2024 Olympics as a sex they were not, and they did it with the full knowledge of the IOC. These were two men who, according to an official release by the International Boxing Association in July of 2024, had failed more than one sex test for female eligibility in 2022 and 2023 and had been disqualified from female competition. For their fraud, both were rewarded with gold medals at the Olympics. One female boxer, Angela Carini, had to make the agonizing decision to forfeit rather than participate in the dangerous charade. How surreal it must have been to make that unbelievable yet necessary call, to go against not only everything one has trained for, but everything one stands for as an athlete, professional, and disciplined fighter.
For any inclined to give Khelif the benefit of the doubt that perhaps he just didn’t know: if one is being raised as female and never begins menstruation at puberty, the reason why will absolutely be examined. Once illness and female conditions are ruled out, one is left with the “condition” of being male. In this case, a male with 46,XY 5-alpha-reductase deficiency, as outlined in a medical report of his drafted back in 2023 and later leaked to Le correspondant.
To ignore such disorders of sexual development in order to adhere to traditional physical sex ideals is fairly common practice in conservative and religious countries, and African nations have a history of scouting such male individuals for the purpose of dominating women’s sports, to the overwhelming ignorance of the global athletics audience. As a result, most are still under the incorrect impression that athletes like Caster Semenya, the South African runner and two-time Olympic gold medalist, are simply women with higher testosterone, and are absolutely unaware that these are athletes with a male karyotype. Semenya was confirmed in the Court of Arbitration for Sport to have 5-ARD, a genetic condition resulting in the inability to develop typical external male genitalia.
These disorders are unbelievably unfortunate for a multitude of medical reasons, beyond being tokenized and weaponized through identity politics. However, no one’s personal condition is ever a legitimate reason to disadvantage or endanger another demographic.
Nevertheless, such practice also happens to explain why Khelif, a Muslim in a Muslim nation, was conveniently free from traditionally mandated female attire, and able to be so comfortably hands-on with his fellow male trainers. And beyond that undisguisable situation, one must also genuinely ask why he never chose to appeal the International Boxing Association’s 2023 disqualification for failing to meet female criteria, or why he refused to participate in subsequent female competition that requires testing for sex.
So he knew. His family and community knew. He just counted on larger society not bothering to care. And on that, he wagered well.
Because despite the protests of the female boxers, certain boxing association officials, and the few but genuine feminists against the unbelievable misogyny being broadcast globally, many decided to protest calling a spade a spade. Widespread social media commentary of the ideologically captured claimed that Khelif and Lin were simply masculine-looking women who shouldn’t be insulted for appearances beyond their control. That it was (stop me if you’ve heard this before) right-wing propaganda and Nazi TERF bigotry to suggest that such supposed gender nonconformity made them male. The pick-me cherry on top, of course, is that it was peak misogyny to call them men at all.
But this was only to be expected when the mainstream media “reporting” on such a farce fully fed this break from reality. During the 2024 games, legacy organizations at very best legitimized Khelif as the incorrect sex, and at worst denigrated anyone pointing out the truth. From the official Olympics reporting that ignored the situation entirely, to BBC and NYT accounts that comfortably crowned Khelif a woman, to USA Today fluff that belittled a serious slap in the face to females as “unhinged controversy,” the overwhelming majority of outlets either passively accepted or actively furthered the grotesque farce unfolding in front of the world.
Yet beyond entrenched media preferences there is another incentive as well. This was, and still is, today’s gender misogyny in action. Ironically, those who consider the truth too “offensive” for the prioritized male in question never seem to consider the unimaginable offense to the women, who must not only unfairly face a recognizable man, but are expected (as women usually are) to simply take it with grace and a smile. So concessions will be made to spare male feelings in the name of “inclusion,” ultimately excluding women from their very own opportunities.
Chromosomes, anatomy, and human sight are disregarded in favor of false passport markers and old photos of pink dresses, because apparently that is the only acceptable (and desired) proof of what “woman” means. It is the inevitable outcome of a societal ideology riddled with complacency for female safety and dignity.
Fortunately, despite a seemingly ingrained forfeit of biological honesty, the tide is beginning to turn, with the release of necessary reports and a new, supportive political landscape. The once sacrosanct gender ideology is now beginning to be questioned as a whole in the mainstream, no longer only by brave feminists. We can see the effects of this in the athletic realm through changes in various governing organizations, including World Boxing itself, which is beginning to demonstrate the bare minimum of competition integrity by mandating sex testing for eligibility. And as the IOC relies on individual sport federations to set eligibility standards, this nightmare will hopefully one day all but completely fade into history.
As it tends to go, many who put on blinders then will now be miraculously blind to the harm they supported. Khelif’s unforgettable selfishness will get purposely memory-holed, along with their own unforgivable enablement in this feint of reality. But as USA Today once wrote in support of Khelif and wild disregard for truth, this indeed “can never happen again” … just not in the way that they meant.
Imane is and was always exactly as his own name states. And now that the rest of the world can no longer pretend that they do not know, they will have to finally decide whether they still believe men are entitled to women's earned opportunities, or if they are truly for women after all.
A review of Parallel Lives of Astronomers: Percival Lowell and Edward Emerson Barnard by William Sheehan (Cham, Switzerland: Springer, 2024. Hardcover, 687 pages).
Of the two astronomers whose lives and accomplishments are chronicled in William Sheehan’s Parallel Lives of Astronomers, Percival Lowell was far better known than Edward Barnard. Lowell is famous for having championed the idea that the canals on Mars were built by intelligent beings. The origins of the idea that there were canals on Mars lay in the Italian astronomer Schiaparelli’s report of “canali” on the red planet in 1877. The word is best translated as “channels” but was popularly mistranslated as “canals.” Since in the latter part of the 19th century canals were being built all over the world by intelligent humans, the implication was that the “canals” on Mars were built by intelligent aliens.
A major theme of the book is that Barnard and Lowell were in many ways opposites. Barnard grew up in poverty in Nashville, Tennessee. He became interested in astronomy as a nine-year-old working in a photography studio. He received some academic training in astronomy and was a superb and objective observer, though, unlike Lowell, his mathematical skills were comparatively weak. Lowell came from an extremely wealthy Boston family, and his interest in astronomy began in college. He graduated from Harvard in 1876 with honors in mathematics. The topic of his graduation speech was the nebular hypothesis of how solar systems come together from collections of gas and dust around a sun. These contrasts (and others) between Lowell and Barnard provide an intimate view not only of the two men, but of much of the history of astronomy of the late 19th and early 20th centuries. This is especially true regarding Mars, because the two men were at opposite ends of a raging debate, among astronomers and the general public alike, over the nature of the canals.
From a skeptical point of view, the most interesting organizational concept that Sheehan uses is the distinction between top-down and bottom-up processing. He uses this to contrast the approaches used by Lowell and Barnard in their interpretations of what they saw through their telescopes and later in photographs. Lowell was a largely top-down man, starting with an idea and then searching for evidence to support it. Barnard continued to make observations until he believed he had enough data to come to a conclusion. Lowell focused his astronomical interests largely on the canal debate, while Barnard was one of the most productive observational astronomers of his day. The top-down versus bottom-up distinction allows Sheehan to use basic concepts in perception to explain the differences between the two men in their position on the reality of the canals.
Perception is a function of two very different processes that together usually lead to an accurate perceptual experience of the world. Bottom-up processing refers to the incoming sensory inputs from the various sensory systems. These, alone, are not sufficient to specify what is actually out there in the world. Top-down processing refers to the expectations, beliefs, and knowledge that we all have about the perceptual world. These are needed for the brain to interpret and make sense of the information that is brought in by bottom-up mechanisms. Almost always these two sources are in accord and the world is perceived accurately.
However, sometimes expectations, beliefs, and knowledge can be wrong, and the incoming sensory input may be distorted or incomplete. Under these rare circumstances, people can and do actually perceive things that are not there even though they are not intoxicated or psychologically impaired. Thus flying saucers, sea monsters, Big Foot, and the like, are perceived when the sensory input is minimal, often seen in fleeting glimpses at night and in the distance. The Loch Ness Monster never swims up the Inverness River through downtown Inverness at high noon on a pleasant sunny day for vacationers to witness. Final perceptual experiences are a function of the sensory inputs as well as expectations and beliefs. Thus, perception is said to be a constructive process and one that can produce incorrect experiences. The canals of Mars fall directly into this perceptual cognitive model.
Before reading the book, I had the mistaken impression that when looking through a telescope, one saw a fairly stable image of whatever object the instrument was focused on. Nothing could be further from the truth. The image of a planet as seen through a telescope is just a tiny disc of light. To make matters worse, that image is far from stable, especially for the telescopes in use in Lowell and Barnard’s time. The book makes clear how unstable those images could be. Momentary changes in the characteristics of the air above a telescope would make the image waver, fade in and out of focus, and change in other characteristics from moment to moment.
Even when “seeing” was excellent, all one saw were successive glimpses of the target object. Then those glimpses had to be constructed by the brain into a coherent impression of what the target was. Between the series of fleeting images hitting the retina of the observer and the final drawing or description of what the observer saw, the constructive nature of perception has ample room to create perceptual experiences of structures (i.e., canals) that were not there in reality.
Astronomers had known since the early 19th century that such non-sensory factors could influence perceptual judgments in their observations. Thus, different observers reported different times at which a planet or star crossed a line in a telescope reticule. These differences were recognized by the term “personal equation.” But the idea that perception was constructive in the sense that honest observers could perceive structures that were not present had to wait until at least the start of the 20th century before it was recognized.
Following his Harvard graduation, Lowell was expected to go into his family’s business of highly profitable textile mills. As an intelligent, curious young man he found that prospect stultifying. To make matters worse, he was involved in a serious scandal: he had proposed marriage to a daughter of the sniffy Boston upper crust, but then withdrew the proposal, something that just wasn’t done in that time and place. As a result, Lowell was effectively banned from that elite circle. In response, in the early 1880s he travelled to Japan and Korea, wrote several books on Asian culture, and became part of the Korean government delegation to the United States (in 1883). He continued to live in Asia until 1893.
That Lowell continued his interest in astronomy before actively pursuing the mystery of Mars was demonstrated by the “astronomical references and imagery [that] are scattered throughout the Far Eastern books and if gathered together would make a long list” (p. 97). That interest turned into a lifelong obsession in 1892 when he read French astronomer Camille Flammarion’s book La Planète Mars et ses conditions d’habitabilité, in which the author argued that the “canals” were evidence of an advanced civilization. Lowell was wealthy enough to fund the creation of the Lowell Observatory in Flagstaff, Arizona, which opened in 1894.
In his autobiographical writings, Barnard noted that he became interested in the stars while walking home from work in the dark. One star “seemed to be slowly moving eastward among the other stars.” This struck him as unusual because the other stars “seemed all to keep to their same relative positions,” (p. 121) while this one did not. This was clear evidence of an early careful observer who had, unknowingly, seen not just another star but the planet Saturn. When he was 19 years old, Barnard was given a book written by the Reverend Thomas Dick, who believed that all the planets of the solar system were inhabited. The book included simple star charts that Barnard “rushed to compare with what he could make out in the small patch of sky visible from the open window of his apartment” (p. 126). The book, a later fellow astronomer and friend wrote, “awakened a thirst for astronomical knowledge which … never ceased to be controlling” (p. 126). Around 1880 or 1881, Barnard was given a simple telescope by an older friend at the photography studio where he was still working. He later received a scholarship to Vanderbilt University, but never finished his degree. Such things were less important in the late 19th century, and in 1887 he obtained a position at the Lick Observatory outside of San Jose, California, one of the earliest mountain-top observatories so positioned to rise above atmospheric turbulence and local city lights.
During their long careers, both Lowell and Barnard observed Mars. Their different approaches—top-down versus bottom-up—permeated how they interpreted and represented the image that fell on their respective retinas. Figure 1 (from page 291 in the book) shows this difference beautifully. On top is Lowell’s version of what he saw in 1894, while Barnard’s representation from the same year is below. Overall, the images are similar in general outline. However, Lowell has added to his drawing numerous lines, which he contended were the canals, and details not present in Barnard’s. This is a classic example of constructive perception. Lowell saw similar geometric patterns on Mercury and Venus, although he apparently did not attribute them to intelligent design.
Figure 1. Top: Lowell’s map of Mars from 1894, published in Mars (1895), Plate XXIV; a new projection by Joel Hagen, for comparison with the Barnard map below. Bottom: a map of Mars compiled on the basis of Barnard’s unpublished drawings from 1894, produced by astronomer-artist Joel Hagen. The projection has been chosen to match the map of Lowell on p. 227, so as to emphasize the striking differences. (Credit: Joel Hagen)

While Lowell was seeing things that didn’t exist, Barnard was busy with more fruitful astronomical activities. In 1895 he became a professor of astronomy at the University of Chicago, which gave him access to the Yerkes Observatory in Wisconsin. It was there that he spent the rest of his life and professional career. Wisconsin is not known for warm winters, and the observing platform of the telescope at Yerkes was not heated. Nonetheless, Barnard would observe almost compulsively, night after night, even in the bitter cold. He was famous for having extremely good eyesight, which made him an excellent observer. During his long career he was an active member of the astronomical community. He made numerous important discoveries, including over 15 comets and the fifth moon of Jupiter. Barnard’s Star, whose motion relative to the sun he determined in 1916, was formally named after him in 2017, although it had been recorded photographically in the 1880s. It is a red dwarf that is one of the four stars closest to Earth.
Perhaps Barnard’s most important contribution is the explanation for what are known as dark nebulae, sometimes called “Barnard objects.” When the Milky Way is viewed through a telescope, there are large dark areas that appear to contain no stars. Why certain areas of the galaxy seemed to contain no stars was a mystery. In fact, these areas do contain stars, but their light is blocked by huge clouds of interstellar dust. The understanding of the nature of the dark nebulae provided an important insight into the evolution of stars and planets. Another major accomplishment was his photographic atlas of portions of the Milky Way. The work, which is stunningly beautiful, took years to compile and wasn’t published until 1927, four years after his death in 1923.
During his active career Barnard did not ignore the controversial issue of the canals on Mars. He photographed Mars through the great telescope at the Yerkes Observatory in 1909, when Mars was “in opposition” to the Earth, as close as it would be for many years, making it an ideal time for observation and photography. These photographs showed no canals. Barnard was not as vocal in the great canal debate as some other astronomers. It was the brilliant Greek-French astronomer Eugène Antoniadi (1870–1944) who became Lowell’s most serious detractor. Sheehan includes the often acrimonious debates between Lowell and Antoniadi in the story of the contrasts between Lowell and Barnard.
During the time that Barnard was active in astronomical research and writing, Lowell was not inactive. However, his activities and interests were heavily focused on the issue of the canals. He lectured frequently and wrote widely defending his view that the canals were real. He, too, took photographs of Mars through the telescopes at the Lowell Observatory in Flagstaff. But constructive perception works just as well with photographs as it does with images seen through a telescope.
Both Lowell and Barnard made contributions to astronomy; Barnard as a careful scientist and Lowell as a popularizer who inspired many to an interest in astronomy, including Robert Goddard and Carl Sagan. In terms of fiction, Lowell’s argument that the canals were the products of intelligent Martians led to the writings of H.G. Wells and Edgar Rice Burroughs. Sheehan’s book goes into great, but never boring, detail about the lives and work of both men. The book is beautifully illustrated. There are pictures not only of the protagonists as they, to paraphrase Shakespeare, “strut and fret their hour upon the stage,” but of their drawings and photographs of Mars and important locations in their stories. It is beautifully produced with copious references and notes. Unfortunately, the publisher did not provide an index. Still, with the 150th anniversary of Schiaparelli’s observations coming in 2027, Sheehan’s book is especially resonant.
In the span of just weeks, two major U.S. releases captured the nation’s attention: Bugonia, Yorgos Lanthimos’s darkly playful alien tale, and The Age of Disclosure, a documentary staged like science fiction, where whistleblowers insist that nonhuman craft exist and the government is concealing the truth about alien contact. Their timing is not accidental. Both arrived on the heels of the first public congressional UFO hearings in over fifty years, in the middle of a nationwide spike in reported sightings. The All-domain Anomaly Resolution Office (AARO) documented 757 new UAP (Unidentified Anomalous Phenomena) incidents between May 2023 and June 2024—more than in many previous years combined—and some analysts now describe 2025 as the most active reporting year in history. We are not just witnessing reports of the unexplained; we are witnessing the psychic temperature of a country—its anxieties, conspiratorial hunger, and collective imagination—made visible.
At the end of Bugonia, when the alien empress finally speaks—exactly as the conspiracy theorist had foretold—she delivers her verdict to her crew, all of them dressed in strange, animal-like furred spacesuits: “We believe it is over. They have had their time. And in their time they have imperiled the life they share, and so we have decided their time will end.” The aliens then waddle away in eerie unison, and the empress punctures the protective Earth bubble. What follows is an instant apocalypse: humanity wiped out in a scene that resembles the visual language of the Rapture—a sudden and absolute religious experience.
Poster for Bugonia (2025), directed by Yorgos Lanthimos. Image courtesy of Focus Features/CJ ENM

But The Age of Disclosure, Dan Farah’s latest sci-fi-styled documentary production, framed as a serious exposé of government UFO secrecy, ultimately reveals nothing new. It offers no evidence, only a procession of interchangeable older men linked to government or aerospace who repeat secondhand stories about back-engineered crashed spaceships, recovered “biologics” (the new fancy term for aliens), and looming threats. At the watch party I attended, a few of us sat nonplussed at the end because, although the film insists danger is near, we wondered: danger from what, exactly?
Why are aliens capturing our cultural imagination now?
Most alien or UFO reports1 involve sightings of lights, orbs, or spheres that move oddly or swiftly and vanish silently—a pattern that has remained consistent over time. Some observers also report cigar-shaped objects or triangular craft. Many of these phenomena are reported worldwide. In 2025, the National UFO Reporting Center had already logged 2,174 UFO/UAP reports by midyear, a sharp increase from 1,492 reports during the same period in 2024. This rise may reflect the establishment of the AARO and renewed government attention, which have made reporting easier and less stigmatized, not to mention nudging people to look up more and notice what was previously missed (Starlink satellites are often reported as UAPs). Increased public awareness through media coverage, documentaries, and congressional hearings also encourages people to report sightings they might previously have ignored. This explanation, of course, presumes the alien sightings are real. Are they?
An alternative interpretation—commonly referred to as the Psychosocial UFO Hypothesis—traces back to Swiss psychologist Carl Jung, whose 1958 work Flying Saucers: A Modern Myth of Things Seen in the Sky proposed that UFOs reflect psychic and cultural realities, not extraterrestrial ones.2 Jung suggested that flying saucers emerge in the collective imagination during eras of social disorientation, technological upheaval, or existential threat, functioning as modern myths that carry the weight of collective anxiety and longing. Rather than evidence of literal beings from another world, UFOs become symbols of fear, hope, salvation, or invasion—a projection of what the psyche cannot resolve. From this view, alien encounters are psychologically real even if not physically tangible: They reveal something true about the human mind and the cultural moment, not necessarily the cosmos.
It is unsurprising that UFO sightings are on the rise today. Scholars have observed that UFO reports tend to increase during periods of societal crisis—such as existential uncertainty, geopolitical tension, or rapid technological change—reflecting collective anxieties rather than objective phenomena.3 In times of social distress and distrust, people are more likely to assign meaning or threat to ordinary or ambiguous events. Some psychological-cognitive theories suggest that ambiguous stimuli—lights in the sky, radar blips, or unexplained objects or events—are interpreted through cultural narratives and heightened pattern seeking.4 This is sometimes called the “low information zone,” in which blurry photographs and grainy videos stimulate the mind to fill in the missing spaces or connect the dots into meaningful patterns of an extraterrestrial nature.
We live in a time of deep distrust in politics, corporations, and the media, which makes people question what they are told. Heightened fears from draconian COVID policies (“they closed the schools, restaurants, and parks so the pandemic must be really bad!”), hypermediated climate collapse (“if we don’t do something in twelve years all is lost”), threats of rising fascism (“Trump, MAGA!”), threats of an AI takeover (“the singularity is near!”), and rising nihilistic political violence (“burn it all down and start over!”) have created a pervasive state of anxiety. This fear, combined with distrust of formerly trusted institutions, fuels conspiracy thinking, including beliefs about aliens. With few reliable frameworks to navigate uncertainty, many turn outward for explanations or as distractions from personal responsibility.
In Bugonia, Lanthimos suggests that conspiracy beliefs often emerge as a response to real pain and injustice. The film’s central conspiracist grew up with an addicted, neglectful mother and later lost her to a medical experiment. His belief in aliens and corporate malevolence is not baseless; it is rooted in trauma, exploitation such as pharmaceutical misconduct and corporate neglect, and social alienation. In this way, the film does not simply mock conspiracists as “crazy,” but explores the social and psychological conditions that give rise to such beliefs.
To these we can add two more conditions contributing to Americans’ increasing belief in UFOs: the decline of religious faith and a reduced reliance on instinct and common sense.
As traditional faith wanes, many turn to belief systems grounded not in evidence or instinct but in ideology and narrative—UFO conspiracies being a prime example. Belief is migrating from shared moral and religious frameworks to culturally mediated myths that promise meaning and belonging. In this sense, aliens function as a modern sacred avatar, a substitute for God, mystery, and existential structure.
The complexity of contemporary society has been linked to a reduced dependence on intuitive judgment and common sense, making individuals more susceptible to being drawn into ideology and conspiracy theories.5 This effect has been amplified over the last two decades by our deep immersion in the online world, coupled with persistent global political instabilities. These factors have ushered in an era of “alternative facts” (on the right) and “postmodernism” (on the left) for many Americans, where the core assumption is that there is more than one truth or no truth at all.
This mindset—that what you see may not be true, or what you don’t see is probably true—has fundamentally contributed to the widespread and enduring belief in a U.S. government cover-up of UFOs. Thus, even though most individuals have never personally seen or experienced a UFO firsthand, they are readily pulled into the conspiratorial narrative and accept it primarily because of the powerful surrounding cultural and ideological framework. It’s ideology over instinct.
Common Sense and Instinct
Evolutionarily, humans developed heuristics to make rapid decisions in uncertain environments—recognizing patterns, detecting threats, and navigating social hierarchies. These shared mental shortcuts form a basis of common knowledge, allowing groups to act cohesively, from identifying safe foods and interpreting emotional cues to cooperating in collective tasks. This intuitive knowledge also extends to social cognition: Humans can rapidly infer intentions, predict behavior, and synchronize actions with others, often without conscious reasoning. In this sense, common knowledge is not arbitrary but adaptive, providing a shared framework that increases survival, cooperation, and cultural stability. As Steven Pinker argues, common knowledge is foundational to human society because it enables social coordination and complementary decision making.6 Much of this understanding operates beneath awareness, signaled through involuntary behaviors like laughter, tears, blushing, eye contact, and blunt speech—embodied expressions of the intuitive knowledge that binds us.
Paradoxically, people often engage in elaborate efforts to obscure, ignore, or deliberately avoid acknowledging common sense and, tragically, their own instincts. The tendency to avoid recognizing widely shared knowledge is well-documented in psychology and sociology. This behavior, known as information avoidance, allows individuals to shield their happiness, preserve existing beliefs, or maintain social standing. Research also shows that information avoidance can serve as a coping mechanism in situations of uncertainty or threat, helping people reduce cognitive dissonance and emotional discomfort.7
People sometimes engage in information avoidance not merely to protect their beliefs or personal happiness, but to align with a group ideology and secure a vital sense of belonging. According to Social Identity Theory,8 individuals derive meaning, status, and self-esteem from the groups they belong to; consequently, they may reject information that threatens the group’s worldview. Specifically, people may set aside their personal instincts or empirical skepticism to be part of a community—be it political, spiritual, ideological, or conspiratorial—that claims to possess special, hidden, or insider knowledge. Aligning with a group that asserts access to deeper truths, secret insights, or a more “awakened” understanding often feels more meaningful and elevating to one’s identity than simply accepting one’s ordinary, concrete life.9
In addition, people often bypass common sense by relying on cognitively unfalsifiable ideas—using claims for aliens such as “trans-dimensional,” “telepathic,” or “unperceivable by ordinary minds,” which place the phenomenon in a realm where no evidence could ever contradict it. This creates epistemic shielding, where the claim becomes immune to challenge: Any lack of proof is simply reframed as expected, since the phenomenon supposedly exists beyond ordinary perception or logic.10 This often involves setting aside common-sense reasoning—such as the implausibility of coordinated alien visits, the immense logistical challenges of secrecy, or the extreme hazards of space travel. By suspending these rational doubts, individuals can fully engage with the group, strengthening both cohesion and commitment to shared beliefs like UFOs.
System Justification offers another cogent explanation for why people override instinct, even without empathy-driven motives. This psychological process leads individuals to defend and reinforce the prevailing system or worldview, even when it may run counter to their own interests.11 In the context of UFO belief, the dominant “system” is no longer governmental authority but rather the conspiratorial worldview itself. Institutional distrust has become the cultural status quo, so accepting the narrative of a cover-up functions as a way of justifying and maintaining that system.12 Believing the government hides alien knowledge signals social intelligence and alignment with the modern order of suspicion, whereas trusting official explanations can appear naïve or even irrational—suggesting that disbelief in conspiracy has become more deviant than belief itself.
A further reason that common sense is bypassed in UFO narratives stems from a psychological profile that makes the alien stories uniquely meaningful to the participants. The key players in The Age of Disclosure documentary, reflecting the wider UFO conspiracy community, are largely older White men, often from the Baby Boomer generation, including many former Cold War intelligence and military personnel. They were trained for decades to perceive patterns, secrets, and threats everywhere, interpreting anomalies like radar returns, classified flights, and black-project aircraft. This environment rewarded suspicion, dramatic interpretation, and assuming hidden motives—a mindset that doesn’t simply switch off upon retirement. Once retired, many lose their high status and sense of purpose; they miss being “in the know” and having a mission. UFOs restore all of that, allowing them to be relevant again by “exposing secrecy,” “protecting humanity,” and “warning people about what’s coming.” This powerful way of restoring identity and meaning creates a significant blind spot for rational facts or instinct, cementing a narrative where they matter again.
A more common-sense approach—one uninfluenced by ideology—would align closely with how neuroscientists are beginning to frame the perception of unidentified objects. A trio of researchers, for example, recently posed this question: How can we “explain why healthy, intelligent, honest, and psychologically normal people might easily misperceive lights in the sky as threatening or extraordinary objects, especially in the context of WEIRD (western, educated, industrial, rich, and democratic) societies”?13 These researchers draw on predictive-coding theories of perception, which suggest that the brain constantly generates top-down predictions based on prior experience. When sensory input is ambiguous or weak, such as distant lights in the sky or other celestial stimuli, perception becomes highly subject to existing beliefs and expectations. Frohlich, Christov-Moore, and Reggente argue that in Western contexts, where skepticism and distrust of institutions are amplified, psychologically normal people are more likely to interpret ordinary phenomena as potentially extraordinary, thereby reinforcing their mistaken beliefs and fostering the acceptance of conspiratorial explanations.14
Illustration by Marco Lawrence for SKEPTIC

Decline of Traditional Faith
Another factor reinforcing the heightened interest and belief in UFOs is the dramatic decline of traditional faith systems in the U.S. and globally, especially in Europe.15 We are living through a moment of profound spiritual and cultural upheaval, marked by widespread secularization. Data from the Pew Research Center’s Religious Landscape Studies (2007–2024) clearly illustrate this shift in the United States: The share of Americans identifying as Christian has dropped significantly from 78 percent in 2007 to 62 percent in 2023–2024. Much of this shift is driven by the growth of the religiously unaffiliated—those identifying as atheist, agnostic, or “nothing in particular”—the “nones.” Furthermore, a stark generational divide exists, as only approximately 46 percent of younger Americans (ages 18–24) identify as Christian, contrasted with about 80 percent of older generations. Related measures of religious practice have also declined, including the share of Americans who believe in God “with absolute certainty,” pray daily, or attend regular services.
These trends are not isolated to the U.S., reflecting global secularization that affects major world religions, including Christianity, Islam, Judaism, Buddhism, and Hinduism. A 2023 analysis of the World Values Survey data found that age and income are among the strongest predictors for decreasing religiosity, confirming that modern economic and demographic shifts correlate strongly with this decline.16 The consequence of the decline in traditional religious structures (churches, organized faith, and institutional religion) is the creation of a spiritual and cultural void. This vacuum can then be filled by alternative spiritualities, existential searches, or other belief systems that offer meaning, structure, and a sense of the transcendent—including UFOs, alien-mythologies, “otherworldly” beliefs, and nature mysticism.
As younger generations grow up without strong religious roots, their search for meaning and a comprehensive moral framework often shifts toward political, psychiatric, or identity-based frameworks rather than centuries-old orthodox religions. While these new frames of belief are influenced by contemporary cultural anxieties, they tend to be less stabilizing and reassuring than traditional faith and wisdom. Studies of the culture wars indicate that, instead of offering equanimous guidance, these ideologies frequently contribute to an “us versus them” positionality, demanding allegiance to a specific side rather than fostering broad acceptance or spiritual integration.17, 18, 19
A Desire for Faith
When social anxieties intersect with waning religious practices, a spiritual void emerges, which faith, in its deepest sense, functions to fill. Paul Tillich described faith as the recognition of what is ultimately important in life, providing meaning and courage in the face of despair.20 Faith counters the secular demand to find fulfillment solely in the material present by offering a framework of ultimate value that extends beyond the empirical, fostering trust that reality holds order, purpose, and goodness beyond human comprehension. While it does not remove suffering, faith situates pain within a larger narrative of redemption or spiritual growth, offering hope, belonging, and the resources to endure the “unlivable self.” In this light, participation in alien beliefs can, in part, be interpreted as a search for a similarly powerful spiritual experience.
For Carl Jung, the emergence and widespread cultural interest in alien experiences and UFOs were a form of spiritual projection. He posited that this phenomenon arose from a collective longing for something transpersonal—a desire for meaning and connection beyond the material world—driven largely by the decline of traditional spiritual practice and the sociopolitical existential crisis in the West. Jung argued that, regardless of their physical reality, what UFOs primarily represent to people is the archetype of salvation or integration, serving as a potent symbol of hope that something external might save humanity from its own crises.
This powerful psychological need quickly spilled into the social sphere: By the early 1950s, the world saw the beginning of UFO religious communities, almost all of which were tied to the emerging New Age Movement.21 This established a distinct, if unconventional, religious community that has since expanded into a diverse landscape of cults, spiritual groups, and online movements. These modern mythologies offer their adherents not only an answer to the cosmic riddle but also a sense of belonging, a moral framework, and a promise of ultimate transformation—functions historically reserved for organized religion.
The world of UFOs deeply echoes religious communities, particularly in how the phenomenon inherently divides people into believers and nonbelievers, subsequently demanding an alignment with a collective ideology or community for those who accept the narrative. In particular, abduction narratives strongly resemble spiritual transformation stories, carrying powerful mythic, symbolic, and spiritual overtones that speak to a profound human need. These experiences often involve narratives of a calling, being chosen, initiation, and transformation, placing the individual in touch with a greater, transcendent, and mysterious unknowable power.22 In this way, both alien abduction and traditional spiritual experiences—such as deep prayer, apparitions, mystical visions, or spiritual possession—can be viewed as powerful modern myths. They serve as psychic containers for deeper psychological realities, suggesting they both function as potent cultural frameworks for expressing profound feelings of internal conflict, such as disconnection, trauma, or identity crisis, and a fundamental longing for transcendence or an escape from the confines of a prescribed self.
If participation in UFO belief systems satisfies a spiritual longing, what’s the harm? Perhaps none. However, when such belief requires individuals to suppress instinct, embodied perception, and common sense, the stakes shift. We risk creating tension with the fundamental architecture of evolutionary biology and psychology. To override these deeply ingrained perceptual systems in favor of a socially constructed narrative demands a significant cognitive sacrifice—one that erodes the innate trust in our instincts that has historically kept us alive. Over time, this override may dull the very intuition evolution shaped to help us discern reality from story.
We cannot expect young Americans to find faith in religious institutions, many of which are still working to repair the trust of congregants they have long disenchanted. Yet faith—faith in something, anything—is essential to begin filling the emptiness left by a lack of meaning. Without faith in a larger cosmic order—be it a sense of karma, a belief in something greater, or a feeling of being loved or held by a transcendent whole—our younger generations are far more likely to attach to an ideology introduced to them on social media, which often leaves them unattached to an embodied instinctual reality.
Into this void step alien narratives.
The news is abuzz with talk of a potential universal respiratory vaccine. It’s definitely interesting research, but it may not be what you think. In this case, the reporting has been quite good on the whole, but the headlines can be misleading if you are not deeply steeped in the complexities of mammalian immunity. Let me start with the biggest caveat – this is a mouse study. It is therefore encouraging pre-clinical research, but we are still years away from translating this into an actual vaccine. Also, most interventions that are encouraging at the animal stage don’t make it through human testing. So don’t expect any revolution based on this treatment anytime soon. Having said that – there is great potential here.
To understand how this new approach works, let’s review some basics of immunity. (Note – the immune system is incredibly complex, and I can only give a very superficial summary here, but enough to understand what’s going on.) Mammalian immune systems have two basic components, innate immunity and adaptive immunity. The adaptive immune system is probably what most people think about when they think about the immune system and vaccines. Adaptive immunity targets and recognizes specific antigens (such as proteins) on pathogens like viruses, bacteria, or fungi. Antibodies attach to these antigens, flagging them to be targeted by immune cells like macrophages which then eat them. The macrophages in turn display the antibody-flagged antigens on their surface, triggering a greater and more specific reaction to those specific antigens. Adaptive immunity is considered slow (it takes days to ramp up), specific (it targets specific antigens on specific pathogens) and durable (it has memory, and will react more quickly and robustly to the same pathogen in the future).
By contrast, the innate immune system is fast, non-specific, and short-lived with no memory. The innate immune system consists of physical barriers, like skin and mucosa, and immune cells that target pathogens based on broad patterns that are not learned but are innate (hence the name). There are Toll-like receptors (TLRs – the name Toll comes from the German for “fantastic”, allegedly said by a researcher upon discovery). The Toll gene was first discovered in fruit flies and then similar genes were later discovered in mammals, hence “Toll-like”. TLRs detect pathogen-associated molecular patterns (PAMPs), which are highly conserved features of types of pathogens. In other words – a TLR might recognize a snippet of RNA as a pattern typical of RNA viruses, or proteins that tend to occur on pathogenic bacteria. “That looks like an RNA virus, so let’s attack it.”
While these are distinct and complementary parts of the immune system, they are also highly tied together. Components of the innate immune system trigger the adaptive immune system, which in turn stimulates innate immunity. In fact, many traditional vaccines contain adjuvants which stimulate innate immunity in order to boost adaptive immunity.
The new vaccine (technical name – GLA-3M-052-LS+OVA), which is a nasal spray given in three doses to the mice being studied, stimulates innate immunity, not adaptive immunity. Normally, after exposure to a pathogen or even an allergen, innate immunity will be heightened for a few days, then return to normal. The nasal vaccine extends this heightened innate immunity in the lungs and respiratory system for three months. It does this by containing synthetic molecules that bind to TLRs, tricking them into responding as if a pathogen is present. The vaccine also contains a protein called ovalbumin, which stimulates T-cells of the adaptive immune system, keeping them resident in the tissue. These T-cells help maintain the heightened state of activity of the innate immune system. According to the authors: “Protection was mediated by persistent ovalbumin-specific CD4+ and CD8+ memory T cells that imprinted alveolar macrophages (AMs), enhancing antigen presentation and antiviral immunity.”
The trick of stimulating innate immunity was partly borrowed from the tuberculosis BCG vaccine, which works by triggering adaptive immunity while also stimulating the innate immune system. Researchers studied how the BCG vaccine accomplishes this and applied that knowledge to this new vaccine.
In the study, the researchers compared mice treated with three doses of the nasal vaccine to untreated mice and found that the treated mice were protected for at least three months from SARS-CoV-2 and Staphylococcus aureus. In addition, the vaccine protected mice from other viruses (SARS, SCH014 coronavirus), bacteria (Acinetobacter baumannii), and allergens.
In the best-case-scenario where this vaccine technology is safe and effective in people, what can we expect? Well, I don’t think this would replace any traditional vaccines based on adaptive immunity. Like the two halves of the immune system itself, it will likely be complementary to traditional vaccines. Traditional vaccines can provide years and sometimes decades of specific protection from common pathogens, and there is no substitute for that. Also, this vaccine works on respiratory infections only, although it may be possible to adapt this approach to other types of infection.
What an innate immunity-based vaccine provides is a good first line of defense against an outbreak, epidemic, or seasonal infection. This would require many millions of doses (or even billions, in the context of a pandemic) being available at short notice to provide several months of resistance to an entire population at the beginning of an outbreak or a seasonal infection (like the flu). It remains to be seen if this vaccine reduces the risk of spread or just the severity of infection. If it reduces spread (which is plausible, if viruses, for example, don’t have a chance to reproduce in large numbers), it could short circuit many respiratory epidemics.
Imagine if this vaccine were available at the beginning of COVID. It could have provided significant protection, reducing death and morbidity, and allowed us time to study the virus and develop adaptive vaccines. That is one of the benefits – it provides broad spectrum non-specific defense. We don’t necessarily need to know anything about the pathogen for this vaccine to work, so it is ideal for novel respiratory outbreaks. It also means we don’t need to track new strains of a virus, and that pathogens cannot easily adapt to this immunity by simply mutating their proteins.
There is a lot of research ahead to study the safety and effectiveness of this vaccine in humans. Even once a vaccine is approved, more research is needed to study long-term effectiveness and potential side effects. One thing to consider, for example – there is likely a reason that evolutionary forces did not favor us having our innate immunity on high alert at all times. There is often a downside to immune activity, which is mostly why you feel like crap during an infection. It’s not the bug, it’s your body’s reaction to the bug. The worst-case scenario is that this approach increases the risk of autoimmunity.
Having said that – we are not living in the world in which we evolved. We are living in a globally connected world of over 8 billion people, often in close proximity to potential animal reservoirs of pathogens. The selective pressures are likely now different than they were when we were living in largely isolated tribes. But we don’t have to wait for evolution to work its slow grim task, we can tweak our immune systems with science and technology to provide some enhanced protection when and where we need it.
The post Universal Respiratory Vaccine first appeared on NeuroLogica Blog.
Area 51 may want to dust off the welcome mat. Not one, not two, but three interstellar objects have drifted through our solar system, now referred to as “interstellar interlopers.” Astronomers labeled them as 1I/‘Oumuamua in 2017, 2I/Borisov in 2019, and 3I/Atlas in 2025 (the prefixes refer to the order of discovery of the interlopers). While most astronomers see unusual but ultimately natural cosmic debris, Harvard astronomer and Galileo Project head Avi Loeb has stepped up to suggest these anomalous interstellar visitors could be alien technologies, possibly even a threat to humanity. Before we start waving white flags at space rubble, it’s worth noting that the rest of the scientific community is responding with something far less dramatic: data. Most scientists, armed with models and common sense, see nothing more exotic than fast-moving rocks and comets with unusual chemical compositions.
Avi Loeb: Prophet, Seer, or Publicity Seeker?
Avi Loeb is no UFOlogist conspiracy theorist with an active imagination. He holds Harvard’s Frank B. Baird Jr. Professorship of Science and has spent most of his academic life developing rigorous theories about black holes, galaxy formation, and the early universe. So, when he started speculating about alien artifacts drifting through our solar system and writing several popular books about extraterrestrials, it’s no surprise that a bevy of UFOlogists treated his words as something akin to the “next coming.”
In recent years, he has become known less for his contributions to cosmology and more for a far more audacious proposition: that humanity may have already encountered extraterrestrial technology created somewhere beyond our solar system. The shift has turned him into a public figure with an unusually large following for an astrophysicist, even as it strains his standing among colleagues. Admirers see him as refreshingly fearless and he has inspired my young students to go into the sciences (he regularly posts emails from them on his Medium blog); critics describe him as a man who has allowed publicity to eclipse prudence. The tension between those two views defines the controversy that now surrounds his work.
The ‘Oumuamua Puzzle and Loeb’s Radical Interpretation
When astronomers in Hawaii identified an unfamiliar object sweeping through the solar system in October 2017, they immediately realized it was something unprecedented. The object—later named ‘Oumuamua (Hawaiian for “messenger from afar”)—did not behave like the comets or asteroids astronomers routinely study. Its elongated appearance, lack of visible outgassing, and slight but measurable change in velocity puzzled researchers.
A large team of scientists, led by Karen Meech at the Institute for Astronomy in Hawaii, published a widely cited paper in Nature in 2017, concluding that ‘Oumuamua originated from outside our solar system. Building on the data from that paper, Avi Loeb and his graduate student Shmuel Bialy (now at the Israel Institute of Technology) proposed in a 2018 Astrophysical Journal Letters paper that ‘Oumuamua might be a “fully operational probe sent intentionally to Earth vicinity by an alien civilization.” That is, of course, a possibility—as is a cosmic teapot in orbit. But science does not require disproving every far-fetched alternative. The burden of proof lies squarely with Loeb and his collaborators.
In his boldly titled book Extraterrestrial: The First Sign of Intelligent Life Beyond Earth, Loeb offered a hypothesis that captured worldwide attention: perhaps ‘Oumuamua was not a natural relic at all but rather a fragment of engineered technology, possibly a thin, reflective structure propelled by starlight. He emphasized that he wasn’t announcing definitive proof (despite the book’s title), only pointing out that an artificial origin could not be ruled out. Nonetheless, his willingness to discuss this prospect publicly pushed the story far beyond the walls of academia.
Here are a few unique characteristics of ‘Oumuamua:

- An elongated, possibly flat or disc-like shape, inferred from its dramatic brightness variations.
- No visible coma: no detectable outgassing of H2O, CO, or CO2, and no dust.
- A slight but measurable nongravitational acceleration as it exited the solar system.
Occam’s razor, named after William of Ockham (1287–1347) by Libert Froidmont (1587–1653), suggests that scientific hypotheses should invoke the fewest entities and assumptions necessary. For example, suppose that while you are staying in an old English hotel, the lights flicker, the floor creaks, and the room gets chilly. You could conclude it’s the ghost of a Victorian child with unresolved issues—or, per Occam’s razor, you could check the wiring, the floorboards, and maybe close a window. When in doubt, blame the insulation before the afterlife. Occam’s razor doesn’t prove the simpler explanation is correct—just that it’s preferable until better evidence arises. It’s a tool for model selection, not an avenue to absolute truth.
Let’s examine the data for ‘Oumuamua in this light. The elongated or flat shape: In three research papers, Steven Desch and Alan Jackson proposed that ‘Oumuamua is a collisional fragment of nitrogen ice from an exoplanetary Pluto-like body. Not only does this explain the flat shape, but also the lack of observable H2O, CO, and CO2, the lack of dust, and especially the magnitude of the nongravitational acceleration. I asked Desch what he thought of Loeb’s ideas about ‘Oumuamua and he responded: “Suffice it to say he [Loeb] long ago stopped being a serious scientist making innocent inquiries, and now unstoppingly manufactures doubt in the service of positioning himself as some sort of science maverick.” Sebastian Lorek’s and Anders Johansen’s theoretical work demonstrates that flattened, disc-shaped planetesimals can form naturally through the gentle gravitational collapse of a rotating “pebble cloud” in a protoplanetary disk. Lorek and Johansen emphasized to me that “the formation of flattened objects like ‘Oumuamua is a completely natural outcome of planetesimal formation.”
By contrast, Loeb postulates that ‘Oumuamua may be a light-sail—a thin, flat structure propelled by radiation pressure (i.e., the momentum of photons from starlight or sunlight). Photons carry no mass, but they do have momentum. When they hit a surface (especially a reflective one), they impart a tiny push. Over time, this small force accumulates, especially in the vacuum of space where there’s no friction. The challenge with using solar radiation for propulsion is that its force decreases with the square of the distance from the source (1/r²). This pressure is weak but usable near Earth’s orbit (1 AU); at interstellar distances it becomes vanishingly small. In the vast space between stars, the photon flux is so low that even the nearest stars provide no meaningful thrust—effectively leaving a light sail adrift with nothing to push it along.
AI-generated rendering of a hypothetical alien light sail, the type of technology Avi Loeb proposes could explain ‘Oumuamua’s unusual acceleration through solar radiation pressure.

As for the nongravitational acceleration of ‘Oumuamua out of our solar system, Loeb believes that it can’t be explained by outgassing, because no gas or dust was detected. He proposed that the acceleration was caused by solar radiation pressure hitting a light sail. If ‘Oumuamua were an ultra-thin object, just 0.3–0.9 mm thick and tens of meters wide, it could have experienced enough radiation pressure at its closest approach to the Sun (0.25 AU, one-quarter of the Earth–Sun distance) to account for the motion—without requiring any expelled material. However, in 2023, Jennifer Bergner and Darryl Seligman showed that entrapped molecular hydrogen (H2) in water ice could have been released from ‘Oumuamua’s body as it warmed, producing the observed nongravitational acceleration without a visible coma (the cloud of gas and dust that typically forms around a comet when it gets close to the Sun). This supports the view that ‘Oumuamua was a comet-like planetesimal rather than anything technological. Although the study centered on chemistry, a consequence is that ‘Oumuamua must have had a very high surface-area-to-mass ratio for H2 outgassing to be effective. Such a requirement is naturally met by a thin, sheet-like geometry (a flattened body), again consistent with the disc-like shape inferred by the light-curve analyses. In short, even its puzzling acceleration can be explained by natural processes acting on an unusually flat, icy object.
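The inverse-square falloff and the thin-sheet geometry can be made concrete with a quick back-of-the-envelope calculation. The sketch below is illustrative only: the 0.5 mm thickness and ice-like density are hypothetical stand-ins, not measurements of ‘Oumuamua, and the sheet is assumed to face the Sun squarely.

```python
import math

L_SUN = 3.828e26  # solar luminosity, W (IAU nominal value)
C = 2.998e8       # speed of light, m/s
AU = 1.496e11     # astronomical unit, m

def radiation_acceleration(r_au, thickness_m, density_kg_m3, reflectivity=1.0):
    """Acceleration of a flat sheet facing the Sun at r_au astronomical units.

    Radiation pressure on a reflecting sheet is
    P = (1 + reflectivity) * L_sun / (4 * pi * r^2 * c).
    For a sheet of area A and mass A * thickness * density, the area
    cancels, so only thickness and density matter.
    """
    r_m = r_au * AU
    pressure = (1 + reflectivity) * L_SUN / (4 * math.pi * r_m**2 * C)
    return pressure / (thickness_m * density_kg_m3)  # m/s^2

# Hypothetical 0.5 mm sheet with ice-like density (1000 kg/m^3):
a_perihelion = radiation_acceleration(0.25, 0.5e-3, 1000.0)   # at 0.25 AU
a_interstellar = radiation_acceleration(1e5, 0.5e-3, 1000.0)  # ~0.5 parsec out

print(f"{a_perihelion:.1e} m/s^2 near perihelion")  # on the order of 1e-4 m/s^2
print(f"{a_interstellar:.1e} m/s^2 between stars")  # smaller by (0.25/1e5)^2
```

Because of the 1/r² law, the same sheet that feels a measurable push at 0.25 AU receives roughly eleven orders of magnitude less acceleration half a parsec from the Sun, which is the point made above: starlight cannot meaningfully drive a sail through interstellar space.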
The Galileo Project and Loeb’s Expanding Quest

Rather than retreat from public engagement after ‘Oumuamua’s exit from the scene, Loeb broadened his search. In 2021, he launched the Galileo Project—funded entirely through private donations—with the goal of systematically looking for physical evidence of extraterrestrial technology. The initiative includes specialized camera systems aimed at tracking unusual aerial phenomena and an expanded effort to locate interstellar debris.
One object in particular drew Loeb’s attention: a meteor that exploded over the Pacific Ocean in 2014. A U.S. Space Command memo suggested the meteor may have originated outside the solar system. Loeb seized upon the idea that remnants from this event might still rest on the ocean floor, potentially offering clues about materials forged beyond our stellar neighborhood. So in 2023 he orchestrated an expedition off the coast of Papua New Guinea to retrieve microscopic debris from the area where the meteor had disintegrated. Funded by a cryptocurrency entrepreneur, the mission blended scientific ambition with adventure-story drama—all captured by a documentary crew (to be aired in 2026).
The expedition recovered tiny metal beads—mere fractions of a millimeter in diameter. Laboratory analyses revealed unusual ratios of heavy elements that did not neatly align with common terrestrial or meteoritic compositions. Loeb interpreted the findings as suggestive of an exotic, possibly interstellar, origin. He stopped short of outright claiming discovery of alien technology (the tiny spherules were not exactly the dashboard of the Millennium Falcon), but he made clear that he considered the possibility worth exploring.
Many experts quickly objected. Planetary scientists noted that it is extremely unlikely for an object traveling at such high velocity to leave behind intact solid fragments. Others questioned whether the spherules could even be tied to the 2014 meteor, or whether the meteor itself was truly interstellar. Critics argued that uncertainties in the military data make firm conclusions impossible, and that Loeb was again presenting the most sensational interpretation well before the evidence justified it.
The interstellar comet 2I/Borisov streaks through our solar system in this 2019 image from ESO’s Very Large Telescope. Unlike ‘Oumuamua, Borisov behaved like a typical comet, showing a bright coma and tail. The telescope tracked the comet’s movement, causing the background stars to appear as colorful streaks of light—a result of combining observations in different wavelength bands that give the image some disco flair. Credit: ESO/O. Hainaut

2I/Borisov is considered interstellar because it entered the solar system on a hyperbolic trajectory—with an orbital eccentricity greater than 3—meaning it is not gravitationally bound to the Sun and must have originated from outside our solar system. Its inbound velocity (approximately 32 km/s) and trajectory indicate it came from the direction of the galactic plane, rather than from within the Oort Cloud or Kuiper Belt. Unlike ‘Oumuamua, which baffled astronomers with its lack of cometary features, Borisov behaved exactly like a typical comet, complete with a bright coma, a dust tail, and outgassing of familiar volatiles like water, carbon monoxide, and cyanide.
Avi Loeb has suggested that Borisov may still deserve scrutiny as a potential technological relic—noting that it was more pristine than expected for a comet traveling interstellar distances, possibly implying unusual origins. However, most scientists interpret Borisov as strong evidence that other planetary systems form comets much like our own does. Its ordinary composition, active sublimation, and typical behavior all suggest it is natural, and in fact, it reinforces the view that cometary bodies are common ejecta from planetary systems throughout the galaxy. In recent Galileo Project Zoom meetings, Loeb has conceded that 2I/Borisov is a comet (Skeptic magazine’s Michael Shermer is on the Galileo Project team and attends the Zoom meetings).
3I/Atlas: The Third Interloper

3I/Atlas’s inbound excess velocity was about 58–61 km/s, far above the escape velocity of the Sun, indicating an origin outside the solar system (that is, it is not gravitationally bound to our solar system). Astronomers traced its incoming direction to the constellation Sagittarius and predict it will depart toward Gemini. Unlike the enigmatic ‘Oumuamua (which showed no outgassing) and more like 2I/Borisov, 3I/Atlas immediately revealed a coma and dust activity, behaving in most respects like a typical comet. Its trajectory and motion suggest it may have originated from the Milky Way’s thick disk, making it plausibly older than our solar system.
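The link between excess velocity and interstellar origin follows directly from the vis-viva relation for unbound (eccentricity greater than 1) orbits. Here is a small illustrative check using the standard solar gravitational parameter; the 58 km/s figure is the excess velocity quoted above, not new data.

```python
import math

GM_SUN = 1.327e20  # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11      # astronomical unit, m

def escape_velocity(r_au):
    """Escape velocity from the Sun at heliocentric distance r_au, in m/s."""
    return math.sqrt(2 * GM_SUN / (r_au * AU))

def speed_on_hyperbola(v_inf_ms, r_au):
    """Heliocentric speed at distance r_au for an unbound object.

    For a hyperbolic (e > 1) orbit, vis-viva reduces to
    v^2 = v_inf^2 + v_esc(r)^2: the object always moves faster than
    the local escape velocity, so the Sun cannot capture it.
    """
    return math.sqrt(v_inf_ms**2 + escape_velocity(r_au)**2)

v_esc_1au = escape_velocity(1.0)         # about 42 km/s at Earth's orbit
v_atlas = speed_on_hyperbola(58e3, 1.0)  # about 72 km/s at 1 AU
print(f"escape velocity at 1 AU: {v_esc_1au / 1e3:.1f} km/s")
print(f"speed with 58 km/s excess, at 1 AU: {v_atlas / 1e3:.1f} km/s")
```

Any object whose speed at a given distance exceeds the local escape velocity is on a hyperbolic path: the Sun’s gravity can bend its trajectory but not retain it, which is why a 58–61 km/s excess velocity marks 3I/Atlas as interstellar.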
Hubble’s image of interstellar comet 3I/ATLAS (365 million kilometers from Earth, July 21, 2025) shows a bluish, teardrop cocoon against streaked stars. While Avi Loeb suggests its sunward jet may be artificial, the consensus confirms it behaves like a natural comet. Credit: NASA, ESA, D. Jewitt (UCLA); Image Processing: J. DePasquale (STScI)

From the start, astronomers have viewed 3I/Atlas as a natural cometary body. Observatories around the world (including Hubble, the James Webb Space Telescope, and the Very Large Telescope in Chile) tracked its movement, noting that it started releasing gas and dust at large distances from the Sun—an unusual but not unprecedented behavior. Spectral studies revealed a coma rich in CO2, CO, and diatomic carbon (C2), while surprisingly low in water vapor, which typically dominates solar system comet outgassing. Polarimetry (the study of how light becomes polarized after it reflects off or scatters through materials like dust or gas, used in astronomy to infer the properties of surfaces and comae) also showed an unusually strong negative polarization signal, meaning the light scattering off the coma’s dust was more directionally polarized than expected. This suggests its dust grains are very fine or have unusual textures, possibly hinting at a unique interstellar origin or formation environment. These characteristics, while distinct, are seen as falling within the natural diversity of cometary compositions, especially for bodies formed in ultra-cold outer regions of a planetary system.
Researchers note that 3I/Atlas offers a unique opportunity to expand our understanding of planetary formation beyond the solar system. Its high CO2 content, early activity, and evolving tail structure suggest it likely formed in a cold, distant part of its home system—perhaps a region analogous to our own Kuiper Belt. Its compact nucleus (likely under 1 km in size) and slowly rotating, modestly active profile contrast with the wildly tumbling, inert ‘Oumuamua. Scientists have emphasized that 3I/Atlas aligns with the expected behavior of a comet ejected from another stellar system, and they see no need to invoke exotic explanations.
Nevertheless, Avi Loeb has once again challenged the consensus. In public commentary and academic preprints, Loeb has listed a set of anomalies that, in his view, warrant consideration that 3I/Atlas might be artificial in origin. Among the features he highlights:

- Jets of gas and dust pointing sunward as well as antisunward.
- A nongravitational acceleration that seems high for the object’s apparent size.
- An elevated nickel-to-iron ratio in its coma.
Although intriguing, there is nothing alien about 3I/Atlas’s jets. The presence of multiple jets pointing in both sunward and antisunward directions suggests that 3I/Atlas has several active regions on its rotating nucleus. As different surface areas are exposed to sunlight, localized jets of gas and dust are released, sometimes curving due to the object’s motion or erupting from regions not directly facing the Sun. This directional variety is a hallmark of cometary activity and reflects a complex interplay between surface composition, thermal dynamics, and rotational orientation, a more likely explanation than the alien rocket thrusts and maneuvers that Loeb proposes.
The same can be said for other characteristics Loeb deems of alien origin. The high acceleration relative to 3I/Atlas’s apparent size can be explained naturally by low-density, volatile-rich materials like CO2 or CO ices producing sustained outgassing. Similarly, the elevated nickel-to-iron ratio in its coma may result from observational bias—nickel is more easily detected in cometary gas, while iron often remains locked in dust. Both features fall within known cometary behavior and don’t require invoking alien technology.
Loeb’s position, as with ‘Oumuamua, is that extraordinary anomalies merit open-minded hypotheses. He does not claim that 3I/Atlas is definitively artificial, but argues that its distinctive properties should not be dismissed. He has proposed that it could represent alien debris, a probe, or some unknown technological object using controlled outgassing or exotic materials. Critics in the scientific community largely disagree, emphasizing that all of 3I/Atlas’s features—from its CO2-rich chemistry to its sunward jet and trajectory—can be explained by known physics. Observations of other comets with similar jets or compositional profiles provide natural precedents.
In late 2025, NASA officials released detailed observations of 3I/Atlas, and their conclusion was unequivocal: “It looks and behaves like a comet, and all evidence points to it being a comet. But this one came from outside the solar system, which makes it fascinating,” said NASA Associate Administrator Amit Kshatriya. Indeed, high-resolution images from spacecraft showed 3I/Atlas with a normal cometary coma and tail—essentially indistinguishable from ordinary long-period comets aside from its hyperbolic orbit. In other words, 3I/Atlas is far more likely a natural interstellar comet than an extraterrestrial spacecraft.
In the end, 3I/Atlas has reinforced a key message: interstellar objects are not all alike, and some may appear quite strange by our standards. While most planetary scientists remain confident in a natural origin for 3I/Atlas, its detailed study is ongoing. Loeb’s speculations, while provocative, remain unsubstantiated. Whether the anomalies he flags prove to be outliers or just unfamiliar variations within a broad population of extrasolar comets, 3I/Atlas has already deepened our understanding of how planetary systems beyond our own may evolve—and what fragments they might fling into the void.
A Netflix documentary crew has followed Loeb’s work for several years, including his 2023 expedition to recover interstellar meteor fragments from the Pacific Ocean. The film, which Loeb has confirmed is in production, is expected to be released in 2026 and will chronicle his search for extraterrestrial technology. It reflects not only his scientific ambitions but also his increasingly prominent role in the public imagination.
Over the past decades, we have witnessed a quiet yet decisive transformation in the history of human beliefs: the apparent disappearance of major paranormal phenomena that for millennia fueled mythologies, religions, folklore, and countless reports of supposed extraordinary manifestations. UFOs hovered over mountains and deserts;1 colossal creatures such as Bigfoot, the Yeti, or the Sasquatch roamed remote forests;2 spirits, apparitions, and ectoplasmic entities materialized in abandoned mansions;3 miracles occurred before the eyes of the devout;4 demonic possessions defied rational explanation.5 Today, all these phenomena seem to have taken permanent leave, an intriguing coincidence emerging precisely at the moment humanity begins to carry in its pockets (or better yet, in its hands) ultra-high-definition cameras capable of recording every detail of daily life, or any anomaly, with unprecedented precision.6
Before examining the role of smartphones, it is important to distinguish beliefs from manifestations. National opinion polls show that belief in paranormal phenomena remains high. A 2005 Gallup survey indicated that roughly three in four Americans believed in at least one type of paranormal experience, including haunted houses, communication with the dead, and astrology.7 Trend analyses aggregating data from Gallup, Harris, Pew, and other institutes show that, despite recent technological advances, these beliefs have remained remarkably stable, with only small declines in some items and even increases in specific beliefs such as ghosts and haunted houses.8 A more recent Gallup synthesis, from 2025, shows that 48 percent of American adults believe in psychic or spiritual healing and 39 percent in ghosts, while between 24 percent and 29 percent endorse six other supernatural beliefs; compared to 2001, variations are modest, with declines of only 6 to 7 percentage points in phenomena such as telepathy and clairvoyance.9 Literature reviews indicate that, in different countries, beliefs in spirits, UFOs, and other extraordinary phenomena remain widely disseminated among modern populations.10, 11, 12, 13
In other words, beliefs persist and remain widespread, but the supposed phenomena that should generate clear and reproducible evidence seem increasingly absent precisely at a moment when we possess technology capable of recording them with great clarity.14, 15 This shift invites a skeptical exercise: Why have paranormal and supernatural apparitions disappeared exactly when it became possible to document them unequivocally? For centuries, human testimony was the primary source of such accounts. However, scientific literature consistently demonstrates that testimony, even when sincere, constitutes extremely weak evidence: It is susceptible to perceptual illusions, cognitive biases, cultural expectations, and reconstructed (and often false) memories.16, 17, 18
In recent decades, quantitative studies on spontaneous reports of “anomalous” experiences also reveal a telling pattern: Although belief remains high, the number of people claiming to have personally experienced paranormal and supernatural phenomena tends to decline or stabilize at low levels compared with previous decades. Population surveys in the United Kingdom, for example, indicate that around 25 percent of adults report having seen a ghost, a number smaller than the prevalence of belief in ghosts, which remains above 40 percent.19, 20, 21 The discrepancy between the high prevalence of belief and the lower prevalence of reported experiences suggests that direct accounts do not accompany the persistence of belief, a pattern compatible with the growing impact of recording technology.
Recent experimental evidence reinforces this fragility. Contemporary studies show that up to 30 percent of participants incorporate false details into memories of extraordinary events after minimal suggestions or exposure to ambiguous images.22, 23 This type of cognitive vulnerability helps explain why, even before photography, reports of supernatural phenomena were so abundant despite the absence of reliable physical documentation.
With the popularization of photography in the late nineteenth century, the first “records” of ghosts, materializations, and spiritualist phenomena emerged, almost always blurred, overexposed, composite, or manipulated.24 The skeptical science of the time, from Darwin25 to Houdini,26 had already warned of fraud, lighting tricks, and honest mistakes. Even so, these images fueled a fertile social imagination that was poorly equipped for the kind of critical analysis we now consider trivial.
Yet something fundamental changed when next-generation smartphones became ubiquitously available. Never in human history has there been a moment when billions of people possessed cameras with optical stabilization, precise sensors, 4K recording capacity, and the ability to capture phenomena instantaneously and share them within seconds.
Paradoxically, this same technological infrastructure has fueled an entire subculture of “ghost hunters” and smartphone-based spirit-detection apps. Ethnographic research on ghost-hunting communities shows the intensive use of high-definition cameras, motion sensors, and apps that simulate paranormal measurements, but despite millions of recordings, no verifiable fact regarding the existence of ghosts has been established in a robust manner.27, 28 Independent assessments of these groups further show that most of the supposed evidence (shadows, electromagnetic noise, or video distortions) corresponds to optical or acoustic artifacts already extensively described in the technical literature and often replicable under controlled conditions.29 Even more rigorous investigative protocols, such as controlled-environment monitoring with multiple cameras, have never produced replicable or consistent results. In other words, the capacity to search for evidence has increased exponentially, but the quality of the “proof” remains trapped in artifacts, ambiguities, and wishful interpretations.
At the same time, astronomers equipped with powerful, high-definition telescopes that observe the sky 24 hours a day have never recorded a single robust piece of evidence for objects of nonhuman origin. By contrast, systematic surveys conducted by professional astronomers estimate that more than 95 percent of investigated UFO reports correspond to satellites, rocket re-entries, aircraft, balloons, or common atmospheric phenomena.30, 31 This pattern was already known before the widespread adoption of smartphones, but it has become even more evident as observational instruments have grown more precise. Curiously, alleged extraterrestrials seem to prefer deserted roads, swamps, or isolated campgrounds, and maintain a distinctly selective shyness: They systematically avoid sharp, high-resolution cameras while tolerating grainy footage captured with old cameras or shaky amateur recordings.
The same inexplicable selectivity affects the great mythical creatures. Bigfoot (whose existence contradicts all biological logic, since no hominid species could survive in absolute isolation for hundreds of thousands of years without leaving fossils, consistent tracks, feces, or reproductive communities) vanished abruptly with the advent of modern smartphones. Recent research in ecology and environmental DNA biomonitoring, now used to track rare species, has likewise detected no genetic trace compatible with large unknown primates in North America, even in extensively sampled regions.32, 33 This kind of negative evidence reinforces the biological implausibility of a hidden large-bodied hominid. Hunters, hikers, mountaineers, and rural residents, all equipped with sophisticated cameras, have ceased to report sightings of the once-elusive primate. What remains alive is only the echo of old stories, always sustained by isolated footprints or shaky video footage.
Ghosts and spirits, likewise, seem to have adapted poorly to technological advancement. For centuries, claims of apparitions spread globally, reinforcing the sense that the supernatural was a universal feature of human experience. However, the more we improved our ability to record images, the more these ectoplasmic entities retreated into the invisible, or into the past. Today, there are no sharp, verifiable, or even minimally convincing records. It is as if the very ontology of such beings were incompatible with high-precision sensors, as if the supernatural had vanished precisely when it could finally prove its existence to skeptics.
From a methodological standpoint, this persistent absence of records is consistent with analyses in the philosophy of science applied to paranormal claims: If a phenomenon supposedly interacts with the physical world, it should be detectable by physical instruments; if it never is, despite the exponential growth in instrument sensitivity, then its existence becomes an increasingly implausible hypothesis.34
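This methodological point can be phrased as a simple Bayesian update. The toy model below is my own illustration, not drawn from the cited analyses: it assumes that if a phenomenon were real, each of n independent, well-instrumented recording opportunities would capture it with some small probability p, and that no capture ever occurs.

```python
def posterior_after_no_detections(prior, p_detect_if_real, n_opportunities):
    """Probability the phenomenon is real after n failed chances to record it.

    If real, each opportunity independently yields a detection with
    probability p_detect_if_real; if not real, detections never occur.
    Bayes' rule gives:
        P(real | no detections) =
            prior * (1 - p)^n / (prior * (1 - p)^n + (1 - prior))
    """
    likelihood_if_real = (1 - p_detect_if_real) ** n_opportunities
    numerator = prior * likelihood_if_real
    return numerator / (numerator + (1 - prior))

# Start generous: a 50% prior, and only a 1% chance per opportunity
# that a real phenomenon gets caught on camera.
for n in (0, 100, 1000):
    print(n, posterior_after_no_detections(0.5, 0.01, n))
```

With billions of cameras in circulation, n is enormous; even a tiny per-opportunity detection probability drives the posterior toward zero, which is the formal version of the claim that persistent absence of records makes the hypothesis increasingly implausible.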
The same decline affects miracles and exorcisms. Although religious videos showing supposed instantaneous healings still circulate, such recordings never exhibit high-definition imagery, verifiable continuity, or transparent documentation. Sociological research on healing rituals also shows that, although millions of people report subjective experiences of “spiritual healing,” there is no video documentation of instantaneous, verifiable cures that meet minimal clinical criteria, such as independent pre- and post-examinations or transparent medical history.35 Medical literature likewise documents that many such claims can be explained by imprecise diagnoses, spontaneous remissions, or confirmation biases.36 The more sophisticated our recording technology becomes, the more rarefied extraordinary events appear to be.
Demons, once so present in cultural narratives, seem to have developed a profound aversion to high-resolution equipment. Beings allegedly so powerful, capable of opposing gods, tormenting humans across civilizations, making people speak extinct languages and levitate, now seem terrified of ordinary individuals armed with devices that could finally reveal their true face.
Some may argue that these phenomena still occur, but people have simply stopped recording them, even while carrying cameras virtually 24 hours a day. However, such a hypothesis runs entirely counter to contemporary behavior: We live in an era in which trivial dance trends accumulate millions of views, minor accidents are filmed from multiple angles, and any unusual animal becomes viral within minutes. Studies on the psychology of digital sharing show that unusual, threatening, or extraordinary content is significantly more likely to go viral, especially when it includes clear visual elements.37 This pattern makes it even more improbable that supposedly extraordinary phenomena would occur without sharp recordings, or that someone would deliberately refrain from filming or disseminating them.
Within this context, suggesting that people witness aliens, mythical primates, miracles, ghosts, or demons and simply “forget” to record them is, at the very least, an exercise in involuntary humor. In a world so deeply connected and driven by the banal as well as the exceptional, a video that confirmed, and definitively proved, any one of these phenomena would generate an almost infinite number of likes and would instantly elevate its creators to the category of highly profitable, widely recognized influencers.
The pattern that emerges is clear and epistemologically eloquent: The massive availability of recording devices has not reduced the prevalence of paranormal beliefs, but it has made the absence of robust evidence even more striking. Opinion surveys indicate that beliefs in ghosts, haunted houses, UFOs, or astrology remain widespread and, in many cases, have been stable for decades.38, 39, 40 However, when everyone can document the world with near-forensic precision, the territory of the supernatural does not expand toward clear evidence; it remains confined to ambiguous accounts, grainy videos, and testimonies vulnerable to perceptual illusions and cognitive biases.41, 42 New cameras do more than capture reality: They make it increasingly difficult to sustain, without embarrassment, that which depends on shadows and low verifiability.
In this context, it makes little sense to speak of the “end” of paranormal beliefs; what we observe is a growing mismatch between persistent beliefs and absent evidence. On a planet where much of the population carries in their pockets, holds in their hands, or mounts on the dashboards of their cars, high-resolution cameras with immediate access to social media, one would reasonably expect an explosion of sharp recordings of ghosts, demons, intervening deities, UFOs, or mythical primates, if such entities truly interacted with the physical world in any minimally recurrent or plausible way.43, 44
Instead, what accumulates are decades of opinion inquiries showing stable beliefs and a colossal volume of “evidence” that collapses under the first skeptical examination. The coincidence remains striking: Just when these phenomena could finally verify themselves before omnipresent cameras, they remain invisible.
The most parsimonious explanation continues to be the same one skeptics have long articulated: It is not that the phenomena have decided to retire or hide themselves; rather, there were never any paranormal phenomena to be recorded, only human interpretations of natural events, illusions, and frauds.
A review of This Book May Cause Side Effects: Why Our Minds Are Making Us Sick by Helen Pilcher.
In the early years of Viagra, “the little blue pill” that generated such excitement about its sexual effects on men, I read an account by a woman who decided to try it herself, because isn’t what’s good for the gander good for the goose? (Answer: Not always.) She took that little blue pill and described the exhilarating night of lovemaking that ensued. The best sex she’d ever had! Rapture divine! When she awoke in the morning, she saw that the blue pill she had swallowed was an Aleve (naproxen). At least she didn’t get a headache.
Most people know about the placebo, the inert “sugar pill” given to a control group in a clinical trial when the experimental group gets the active medication. This method allows researchers to rule out the effects of expectations on a new drug’s medical benefits, if any. (Placebo-controlled tests of Viagra for women found that women did slightly better on the placebo, which ended Pfizer’s efforts to double their market.) Expectations can be powerful: the bigger the biologically inactive placebo—a larger pill, a bigger injection—or the more complex the intervention, even a sham surgery, the greater its benefits. Placebos have been used in many settings, most dramatically on the battlefield, where suffering, dying soldiers plead for morphine when supplies have long since run out. Given a saline solution but told it is that powerful painkiller, their pain vanishes.
This Book May Cause Side Effects: Why Our Minds Are Making Us Sick by Helen Pilcher (Abrams Press, 2026)

Where the placebo goes, can the nocebo be far behind? In This Book May Cause Side Effects, Helen Pilcher, a science writer and TV presenter with a PhD in cell biology, delves into the placebo’s “evil twin”—the myriad ways that our negative expectations affect us. If you had chills, fatigue, or headaches after getting a COVID shot, she writes, they were likely due to your being told those are frequent “side effects.” If you read the list of symptoms that your newly prescribed drug “might” produce, chances are you will experience one or more of them—and possibly decide not to take that drug after all. “If just the thought of eating a certain food makes you feel sick,” she writes, “it’s highly likely that placebo’s evil twin has struck again. Indeed, many of those who believe they have intolerances to certain ingredients, such as lactose or gluten, may well owe their misery to psychological rather than physical processes.” When self-reported “gluten intolerant” people are given gluten-free bread but told that the bread contains gluten, very often they develop gastrointestinal symptoms. “And when some gluten-intolerant people are covertly fed regular bread but told that it’s gluten-free, they don’t get symptoms,” Pilcher writes. “It’s the idea of gluten that they are intolerant to, rather than the protein itself.”
Pilcher makes her case for the nocebo’s malevolent antics in 12 chapters, ranging from deaths by hexes, to “psychogenic” deaths that have no apparent physiological cause, to the downsides of labelling mental and physical illnesses and thereby creating more cases of them. “The nocebo effect can conjure blindness and paralysis, seizures, vomiting and asthma attacks. With no brain injury in sight, it can trigger the symptoms of concussion … With no allergen present, it can induce features of an allergic reaction—watery eyes, runny nose and an itchy rash—that are indistinguishable from the more common, pollen-triggered alternative.”
There is really no scientific reason to distinguish placebos from nocebos, since both terms describe the way that beliefs, expectations, and apprehensions affect our bodies. But the nocebo is hot—“the nocebo effect has been promoted from academic footnote to nerdy hot potato,” she notes—and Pilcher makes the most of that hotness. The nocebo “is far more pervasive and potent than most people had realized,” she writes. “All symptoms, all illness and all disease has [sic] the potential to be negatively impacted by the thoughts that swirl around inside our heads.” All disease? Yes: “Hiding in plain sight, the phenomenon is part of all illness and all disease, where it makes us more unwell than we need to be.” Does she literally mean “all,” or do all diseases merely have the “potential” to be impacted?
That fuzziness undermines her reporting. To be sure, giving us details of every one of the many studies she describes could become stultifying; yet because she does not provide the actual numbers and percentages of people in an experiment who were affected by a nocebo, and speaks vaguely of “most” people or “some” people who have the “potential” to succumb, we cannot assess the strength of the findings. For example, she writes that in one study, “people who were falsely ‘diagnosed’ with the ‘bad’ version [of a fictitious gene that allegedly influences their response to exercise] did much worse. They had less endurance and their lung capacity was reduced.” “People”? All of them? One tenth? How many people? 3? 30? Lung capacity “reduced” by how much? How long did that reduction last after they went home? Or, in noting that “some” people die from the stress of bereavement or surviving a plane crash, she adds “that’s certainly not to imply that intense stress is going to kill us all. These deaths are rare. You are far more likely to muddle your way through life’s major stressors than you are to die from them, but sometimes it happens.” The combination of “sometimes” with dramatic anecdotes (Johnny Cash died four months after his wife June) weakens her case that the nocebo affects all illness. Did he die of a broken heart? Or of complications from diabetes, respiratory failure, autonomic neuropathy, and pneumonia?
More worrisome is Pilcher’s enthusiastic endorsement of experiments long discredited and unreplicated, such as Robert Rosenthal’s “Pygmalion” study, in which teachers allegedly raised the IQs of the randomly chosen students they had been told would intellectually bloom that year, simply by the power of their expectations. And because Pilcher so enjoyed meeting Ellen Langer, the Harvard psychology professor who became famous for her decades-old “chambermaid” and “counterclockwise” studies, she suspended scepticism, not even doing a quick Google search that would have revealed what was wrong with those studies. In the former, hotel maids were said to have lost weight and lowered their blood pressure simply by being told their activities were “exercise” rather than “work.” But the experimenters relied on the women’s subjective self-reports, so they could not rule out whether the women actually—consciously or subconsciously—increased their activity level or changed their diet. And the 1979 “counterclockwise” study, which supposedly showed that having eight men in their 70s live in a simulated 1959 environment for a week would physically reverse their frailty and other signs of aging, was never published in a peer-reviewed journal or replicated. (It later became a made-for-TV stunt with celebrities.) Langer actually said to the participants, “We have good reason to believe that if you are successful at this, you will feel as you did in 1959.” No bias there.
Although these lapses give one pause, Pilcher provides the details in other studies that rise to a “wow” level. In one, 60 patients who had stopped taking statins because they couldn’t stand the side effects were persuaded to try again. They were given 12 bottles of pills: four containing statins; four containing identical-looking placebo pills; and four empty bottles. The patients used one bottle per month, in a randomly prescribed order, over one year, recording their symptoms daily on their smartphones. The study was double blinded, so neither patients nor doctors knew which tablets the participants were taking (or none). The researchers found that 90 percent of the symptoms that people reported when on statins were also what they experienced when on the placebo. This means that most of the side effects of statins are caused by expectations, not the tablet’s content.
In her final chapter, Pilcher offers ways of countering, if not overcoming, the nocebo’s influence. Reframe the aftereffects of an injection not as painful “side effects” but as evidence the medication is working; if you need a medication whose label cautions that 20 percent of the people taking it get headaches, focus on the 80 percent who don’t; and if you have been diagnosed with a serious disease, you can ask your doctor for “personalized informed consent”: being told about the possibly serious symptoms that would require medical attention, but not about the milder symptoms more likely to be evoked by the nocebo. And if you are one of the thousands of people who think they are allergic to gluten—unlike those with celiac disease, who most definitely are—why not ask a friend or partner to subject you to a nice double-blind experiment? You’ve nothing to lose and possibly a world of delicious bread to gain.
Fascination with UFOs (unidentified flying objects) is endless. I get it – I was into the whole UFO narrative when I was a child, and didn’t shed it until I learned science and critical thinking and filtered the evidence through that lens. I credit Carl Sagan for initiating that change. In his excellent series, Cosmos (still worth a watch today), he summarized the skeptical position quite well. To paraphrase – after decades, there isn’t a single hard piece of evidence, not one unambiguous photo or video. He gave a couple of examples of evidence (widely cited at the time) that were completely useless. Now – four decades later – the situation is the same. The evidence, in a word, is crap. It is exactly what you would expect (if you were an experienced skeptic) from a psychocultural phenomenon, without any evidence that forces us to reject the null hypothesis.
So why does belief in UFOs (meaning that some of them are alien spacecraft) not only persist but experience a resurgence? Ostensibly this was triggered by the release of the Pentagon videos. I have already dealt with them – they are just more low-grade evidence. In fact, as I have argued, the low-grade quality of the images is the phenomenon. UFOs, or UAPs as the Pentagon now calls them, are not an alien phenomenon, they are an “unidentified” phenomenon. Mick West has arguably done the most thorough analysis of these videos. He convincingly shows how they are just misidentified birds, balloons, and planes. If you look at the videos you will see that they are blobs and shadows and lights. They are not clear and unambiguous images of spacecraft. Believers must infer that they are spacecraft by their apparent properties – and that is where the technical analysis comes in. A sprinkle of motivated reasoning, or simply lack of expertise, is enough to convince yourself that these are fast moving large objects. But a better analysis (again, see Mick West above) shows this is not the case. They are small, moving with the wind, or flying at the speed of a bird.
But the US military is taking UAPs seriously. This is actually not a surprise – unidentified anomalous phenomena might be Chinese spy balloons, or Russian fighter planes. This has always been at the core of the government’s interest. It is now policy to scramble fighter jets for visual confirmation of anything not identifiable on radar. And now that they are doing that – 100% of UAPs so far have been identified as mundane objects, mostly balloons. In fact, the US military is happy to encourage public belief in “UFOs” because it is a convenient cover for their own top secret projects. It is not a coincidence that UFO sightings tend to cluster around military bases.
Another factor in the recent upsurge in interest is the media. The media, of course, loves stories that generate a lot of interest, and UFOs fit the bill. However, they also know that UFO stories are fringe and often based on rumor or testimony from dubious sources, so they are often relegated to “fluff” stories. They are like the ghost stories that circulate every Halloween – journalists know they are nonsense, but they make great headlines. But now the media feels they have permission from the US government to take UFO stories seriously, so they gleefully are. Here is an example from the New York Times. The author, the regular columnist Ross Douthat, has four questions for the Trump administration. Do they have more videos, why are there so many apparent whistle-blowers, why are some US senators calling for disclosure, and is the US government pursuing research into UFO experiencers and paranormal phenomena (as it has in the past)?
These sound like serious questions, and so a serious journalist can write a column about them without looking silly. But the thing is – we already have the answers to these questions. The Pentagon has done a thorough analysis of all the evidence the US government has, and concluded – there is no evidence of aliens. As predicted, the whole thing is a giant nothing-burger. Except for the newer videos, most of the evidence is old and long-debunked nonsense from the same cast of characters who have been peddling this pseudoscience for decades. Why are people interested in this? Because other people are interested in it. But whenever you dig down, there is simply nothing there. I have been following the UFO story for literally 50 years, and nothing has changed.
This brings me to another reason we are seeing a resurgence in interest in UFOs – because that is the natural cycle. Each generation, since the 1940s, has a fascination with UFOs. This lasts for a decade or so, then wanes for a decade or so, then comes back. This is because people get hyped up about some apparently new evidence or claim, or a movie, or now some social media video, and we get another round of people learning about UFOs for the first time. This interest lasts for a while, with many people feeling as if some big disclosure is right around the corner. They see the recent activity as a trend, rather than just as the cycle it is, and expect some big government announcement, or the proverbial aliens landing on the White House lawn.
But of course – nothing happens. Eventually, nothing becomes boring. There are always die-hards who keep the flames going, or turn their UFO interest into a job, but public interest fades and turns to something else. UFO enthusiasts then wait for another generation to forget how boring the whole thing is, or who never experienced it before, and then fan the flames back into fire, which will also eventually burn itself out.
Meanwhile, skeptics like me, who have been at this for a while, see it coming a mile away. We can immediately respond because we have seen it all before – it’s the same tired arguments and the same lame evidence. But we still have to be careful not to seem dismissive. We are not – we’ve just been here before, so we have a head start. Also, we (collectively – there is a lot of dividing and conquering going on) do the detailed analysis, the hard work necessary to demonstrate convincingly that whatever new evidence is being put forward is what it is.
UFO believers reading this blog, at this point, are likely to leave in the comments – “well, what about this evidence?” Hit me. Give me your best evidence. I am happy to do a deep dive and see what we’ve got. But you should first look for skeptical analysis of the claim – be your own most dedicated skeptic first. If you still think the evidence is worthwhile, send it my way. (And don’t tell me to read thousands of pages of low-grade evidence – give me your best evidence.) Decades of making this challenge have not resulted in anything (for example), but I am willing to keep going. Also, keep in mind, if aliens were visiting the Earth, I would want to know, and if the evidence were compelling, I would have every motivation in the world to support and promote that conclusion. And I would have much to lose if I wrongfully denied a genuine phenomenon – arguably the most interesting and impactful phenomenon in human history. I would not want to be on the wrong side of that story. So yeah – convince me.
But you should be open to the possibility that you are wrong, that all the evidence is best explained as a psychocultural phenomenon without any need to invoke aliens. I strongly believe that is the case, and it would take compelling evidence to convince me otherwise. Such evidence does not exist, because if it did, we wouldn’t need to be debating this anymore. That is why believers have to invoke conspiracy theories or make the absurd claim that aliens are just teasing us with the possibility of their existence while withholding any solid evidence. Maybe that worked in the 1950s, but 75 years later it’s increasingly untenable.
The post Why UFOs Are Back first appeared on NeuroLogica Blog.
In today’s America, humor—like nearly everything else—has become serious business, and in ways at once unusual and plain to see. Never before has every half-drunk joke, or every stumble of language, been so on the record. Welcome to the social media century. Never before have young people been more uptight and more afraid than their elders—so much so that they are now labeled the anxious generation. Never before has stand-up comedy in Republican Texas felt more cutting edge than in New York City.
The comedian Norm Macdonald called this a crisis of “clapter”—diagnosing a humorless age where jokes are rewarded with polite applause instead of genuine laughter. It is a mark of social retardation and nervous conformity—a strange fate for one of humanity’s oldest and most complex behaviors. This essay, then, is about the origin of humor, its evolutionary function, and its history in the United States.
The Origin of Humor

Babies do it. It exists in every known culture. We even see it in other species. Since Darwin, scientists have developed three ways to test whether a trait evolved by natural selection for adaptive purposes. And by every test, laughter qualifies. That is to say, whatever else humor is, it is first and foremost a fact of our evolved biology.
To this day, however, neither scientists nor comedians (nor anyone else, for that matter) have been able to produce what might be recognized as “a complete theory of humor.” What follows instead are the core components of a consilient model. These are ideas that do not compete so much as combine, each explaining a different dimension; together they converge on a single theme.
1. Humor as play. The most fundamental and widely accepted finding in the study of humor is that it evolved as a function of mammalian play behavior—a way to test limits and roughhouse the rules. Dolphins laugh when they butt heads; elk laugh when they wrestle; and all the apes, including human children, laugh when they are being chased, as in a game of tag. All of these interactions are games that simulate aggressive predator-prey behavior; as with fighting, stalking, hunting, or fleeing, it’s easier to learn the rules of conflict when the danger is make-believe. Laughter, on this account, evolved as a signal to the predator-in-pretend that he is not being perceived as a threat and that playtime can continue.
Laughing out loud is not just a reaction; it is a social tool that helps young mammals learn how to walk the line between aggression and cooperation, between pushing limits and maintaining bonds. It’s a training ground for managing social complexity. And so while we may be the only species that tells jokes, the logic is the same. Louis C.K. explaining that “you should never rape anyone unless you want to cum in them and they won’t let you” or Norm Macdonald reminiscing about “the old days when tweeting meant stabbing a hooker” is what scientists call “verbal play.” Here is how Jerry Seinfeld put it: “Comedy is a very aggressive art form. You put the brain into a vulnerable state [the setup] and then attack and destroy it [the punchline].”
Understanding the role of laughter in distinguishing between aggression and play explains why humor—like no other form of speech—is allowed to not make sense, to cross the line, and to have it not matter. As Louis C.K. often puts it after his punchlines: “I don’t know. I don’t care.”
2. Laughter is a hard-to-fake signal. Birds laugh, dogs laugh, rats laugh, cows laugh. There are—so far as we have counted—over sixty animal species that laugh. But there is only one species that can fake a laugh, and that’s us. It’s what biologists call non-Duchenne laughter (tactical, deliberate, and carefully timed), as opposed to Duchenne laughter (involuntary and honest). A Duchenne smile—named after the anatomist Guillaume Duchenne, who first identified it—is characterized by the simultaneous contraction of the zygomatic major muscle (lifting the mouth corners) and the orbicularis oculi muscle (crinkling the eyes, forming “crow’s feet”), distinguishing it from a forced smile that uses only the mouth muscles.
The Duchenne smile evolved in humans because we are the only species that has language. In a world where deceiving others has obvious survival and reproductive advantages, language enhances our ability to manipulate beliefs and rig behaviors to our benefit, whether by lying about resources, alliances, or why the basement smells like bleach. In other words, it gives us the ability to influence each other, not just through force or direct observation, but through stories, symbols, and imagination. Try convincing a chimpanzee to give you a banana by promising eternal paradise or warning of a mythical curse and see what happens. Tell the right story to a human, however, and they might just give you all of their arable lands.
All this is to say that once we have language we also have bullshit, and so what we really need is a way to tell who’s full of it. Biologists call it an “honest signal,” and for a slick-tongued species of tricksters, the best we’ve got is Duchenne laughter. Less corruptible than speech and harder to counterfeit, it works as a backchannel of communication by revealing genuine and honest feelings inside, unfiltered by words.
Studies suggest that few people can voluntarily produce crow’s feet in their eyes (the telltale sign of Duchenne laughter) without feeling genuine joy—it is easy to identify, and we respond more positively to it than to the fake stuff.
But a laugh, real or not, means little until you know what provoked it.
3. Comedy is surprises. Arguably the most obvious feature of any joke is that the punchline arrives unexpected and upside down. Across cultures and contexts, the most consistent finding in humor research is that without surprise, there is no laugh.
The human brain, at its most basic, is a prediction-making machine, honed by natural selection for survival in environments where knowing what’s going to happen before it does keeps you one step ahead of the predators. To know where the predator lurks, when the fruit will ripen, how an ally will behave—all in advance of the fact—is arguably a chief advantage of our big-brained species over others. We are, put simply, pattern-seeking junkies—so wired that we are likely to see patterns that don’t exist (patternicity). As such, our awareness is often not of things as they are, but as we expect them to be.
Even our most basic experiences are not records of the present but guesses about what’s to come. Take, for example, drinking water. Our cells do not absorb the intake until about twenty minutes after the fact, but feeling quenched happens almost immediately. It is the brain, anticipating the chemistry that will follow, extending to us in the present the comfort of a future state. Most of life is lived in this way—on credit, in trust—our minds forever writing promissory notes for what the world has not yet delivered.
But as much a benefit as there is in good predictions, there is a cost to bad ones. Evolution, therefore, had to do more than just adapt us to anticipate. It had to make us eager to correct our mistakes when reality proved us wrong. Laughter, in this view, evolved as a reward signal for fixing a bad prediction—an outburst of joy that marks the moment our model of the universe just got more accurate. One after another, it is a comedy of errors—predictions misfiring, intentions slipping—that keeps the system honest and the mind awake. As Norm Macdonald explains:
At times, the joy that life attacks me with is unbearable and leads to gasping hysterical laughter. I find myself completely out of control and wonder how life could surprise me again and again and again, so completely. How could a man be a cynic? It is a sin.

Yet if laughter were merely a private reward for cognitive course correction, it would be a silent, internal affair. But it isn’t. It is loud, contagious, and social. This is because the same mechanism that helps an individual update their model of the world becomes, in a social species, a powerful tool for establishing shared truths.
4. It’s funny because it’s true. Whether it’s making fun of someone else, making fun of ourselves, or making fun of the situation, we laugh because in some hidden, half-said sort of way, the joke forces us to connect the dots already in our head. It is an unspoken reality suddenly made obvious, but only to the people laughing. Anthropologists call it the encryption model of humor, and it explains humor’s widest social function.
As it suggests, the whole ludic apparatus works like the German Enigma machine of World War II, in which messages were sent in code to receivers who could crack it. In order to “get” a joke, you must share some background knowledge or belief that allows recognition to snap into place. This means that when people are laughing at the same thing, they are effectively signaling that they all possess the same information and preferences, thereby marking themselves as members of the same ingroup.
“You had to be there.” “If you know, you know.” In this way, all jokes are inside jokes, and research shows that the more encrypted comedy is, the funnier people find it. The writer E.B. White once compared explaining a joke to dissecting a frog—you understand it better but the frog dies in the process. Humor is like a bubble, he observed:
It won’t stand much blowing up, and it won’t stand much poking. It has a certain fragility, an evasiveness, which one had best respect. Essentially, it is a complete mystery.

And it is this very quality that allows humor to do its dirtiest work—exposing suppressed beliefs, humbling status, challenging groupthink, and revealing unseen truths.
5. We’ve all got a little Jeffrey Dahmer in us—and those of us who deny it rarely laugh at all. Research suggests that people who have a harder time acknowledging difficult truths find less humor in the world. In studies using the self-deception questionnaire, for example, subjects are asked to rate how much they agree (on a scale from “not at all true” to “very true”) with statements such as “More than once it felt good when I heard on the news that someone had been killed” or “I have never done anything that I am ashamed of.” Those who mark more claims as “not true” are scored as higher in self-deception and later observed to laugh less than individuals more able and willing to confess their sins. Other statements on the survey include: “Once in a while I think of things too bad to talk about.” Or: “I have never wanted to rape or be raped by someone.”
The results reflect two competing adaptations in the evolutionary arms race between liars and lie detectors. On the one hand, self-deception works in service of deceit, allowing lies to roll off the tongue with all the same confident fluency as truth. In other words, by believing our own lies we are less likely to show external cues of deception (e.g., sweaty palms, nervous voice changes, or averted eye contact), which makes them harder to detect. Its function is to protect us from admitting beliefs that might expose weakness, lower status, or trigger shame. Ninety-four percent of professors, for example, think they are in the top half of their field.
But if self-deception hides the inconvenient angle, laughter drags it into view by forcing honesty not meant for show. Chris Rock’s joke that “a man is as faithful as his options,” for example, plays on a familiar tension between our grandiose theories about marriage being a sacrament and our deep animalistic understanding that it’s easy to be faithful if nobody else wants to have sex with you.
Where self-deception narrows the field of vision, humor splits it open. The advantage of the man with a sense of humor is that he is able to act more rationally by considering multiple angles and weighing their contradictions. As Samuel Crothers wrote for The Atlantic in 1899:
The pleasure of humor is of a complex kind. There are some works of art that can be enjoyed by the man of one idea. To enjoy humor one must have at least two ideas. There must be two trains of thought going at full speed in opposite directions, so that there may be a collision. Such an accident does not happen in minds under economical management, that run only one train of thought a day.

It is what the poet John Keats called “negative capability”—the ability to keep in mind two incompatible truths that circle one another without resolution. Shakespeare, he argued, possessed this quality to an extraordinary degree, forcing his audience to hold both the positive and negative aspects of a character for as long as possible, denying them the sort of quick and facile judgment most of us make about most things all the time.
6. Funny is when the world won’t fit our ideas. Incongruity theory is the best-supported scientific explanation for why humans laugh: it explains laughter as the shock of a mismatch between the world we know and the world we thought we knew. In other words, comedians tell jokes that violate our expectations, identifying incongruities that can only be resolved by a shift in perspective. The setup creates an expectation, the punchline violates it, and laughter signals the change in perspective.
Take, for example, the old Onion headline: “School Bully Not So Tough Since Being Molested.” The setup primes us to cheer the bully’s downfall … until out of nowhere, like a trigger yanked too soon, the last word detonates that expectation. Had, for example, the line read “School Bully Not So Tough Since Being Cut From The Team”— it would have ended in simple justice, within the range of predicted ends. Instead, “molested” hurls a monkey-wrench perspective onto the tracks. In a flash, it turns the bully we wanted punished into the victim we want to protect—our original point of view bent, broken, flipped end over end like a compass needle snapped loose from north. Put another way, the joke forces contempt and pity to occupy, for a split second, the same moment of experience.
Its feeling is awkward, ambiguous, uncomfortable, bewildering; requiring the mind to twist in on itself, tight and ugly, in order to get the joke. As the character Marlo Stanfield says in season four of The Wire, “[We] want it to be one way. But it’s the other way.”
We want the world to be drawn in clean lines, with answers settled and nonsense gone. But experience proves otherwise.
Humor and Democracy in America

In 1789, for the first time, a new generation of men on a whole new continent chose to work with their flaws and make use of the mess. They were a generation of men who laughed at pretension, heckled certainty, and made a sport of nonconformity. This was, in part, because they had an American sense of funny. Only on this side of the Atlantic was humor fully let off the leash, divorced from the polite understanding that jokes ought to leave the order intact. In Europe, mockery operated within a fixed aristocratic structure—a pressure valve in a system not designed to change its fundamental hierarchy. In America, however, ridicule was integrated into a self-correcting democratic project.
Historian Henry Steele Commager called American humor a “comedy of circumstance” that made fun of every man, who “at one time or another [had] aimed too high, adventured too boldly [or] boasted too loudly.” It mocked the rich as readily as the poor, the smart in the same way as the dumb; because in the United States, no man is allowed to stay king. Commager goes on to describe the American sense of humor like this:
It was fundamentally outrageous, and in this reflected the attitude towards authority and precedent. It celebrated the ludicrous and the grotesque with unruffled gravity … It bore the impress of the frontier long after the frontier had passed. It was leisurely and conversational; the tall story was usually a long story and was designed to be heard rather than read. American humor was shrewd, racy, robust, and masculine … It was generous and good-natured, and malicious only when directed against vanity and pretense. It cultivated understatement not, as with the British, as a sign of sophistication, but as an inverse exaggeration … It was democratic and leveling, took the side of the underdog, ridiculed the great and the proud, and the politician was its natural butt.

And as the democratic experiment hurtled forth, so too did its comedic counterpart, growing louder, meaner, and goofier. From the rambling tall tales of the frontier sprang, one after the other, a hard plain line of distinctly American inventions, including vaudeville, the comic strip, sketch shows, and stand-up comedy.
But now, as Americans slip back into the Old World habits we once escaped, both democracy and humor are dying of the same disease.
The Unfunny Revolution

In 2008, near the peak of his career, Louis C.K. taped what would become one of the most talked-about specials in the history of comedy. Dedicating the set to his hero George Carlin, who had died earlier that year, Louis began his special with a joke modeled on one of Carlin’s most famous bits—the “seven dirty words”—that in 2008 became “nigger, cunt, faggot.” Operating under the same premise, both jokes asked what kind of society still has forbidden words. Some found it funny, some found it offensive, some found it stupid, and some didn’t care at all. But in 2009, one of the most obscene jokes in American comedy was nominated for an Emmy by the high and mighty Television Academy.
Fifteen years later, that world is unrecognizable. The culture has shifted so completely that now even Jerry Seinfeld—a comedian whose most offensive material pokes fun at airplane food—refuses to play college campuses, citing excessive political correctness. As Chris Rock, another comedian who no longer performs at universities, put it, “You can’t even be offensive on your way to being inoffensive.”
Cartoon by Oliver Ottitsch for SKEPTIC

The shift is not just in what Americans find funny. It is a fundamental misunderstanding of the nature and function of humor. In a culture that now treats laughter as a moral act, it’s been bent out of shape by all sides; its purpose twisted into a dog-and-pony proof of allegiance. On the right, the rules are clear enough—mock the leader, mock the faith, and you’re done. The threat is old-school dictatorship. On the left, nobody’s in charge, but everyone’s policing everyone else. The result is a social bureaucracy so sprawling and self-contradictory that no one, least of all the people enforcing it, can tell you where it starts, what it’s for, or whether anyone is still keeping score. Can a man tell a rape joke? Can a woman? Do gay, Black, or fat comedians (or any others belonging to oppressed or marginalized groups) have the exclusive right to make fun of their own group?
But beneath all the shouting lies something simpler: a handful of inconvenient facts that neither orthodoxy can accept.
1. Comedy has no responsibility. Jokes aren’t Hallmark cards. There’s no lesson. No moral mission. Funny has nothing to do with right or wrong, good or bad. If people laugh—the joke works. If they don’t, it doesn’t. It’s that simple. As Seinfeld put it, “The audience is the only judge. If they laugh, it’s funny.”
And whether they laugh for the right reasons, the wrong reasons, or no reason at all, it doesn’t matter. It’s all the same currency. Because again, no committee, no critic, no theoretical or ethical standard, not even comedians themselves, can determine what is funny. Only laughter can.
It is for this reason that comedian Ricky Gervais argues you should never apologize for laughing—because it is an involuntary reflex, born of recognitions we can’t fully name; maddeningly hard to locate, explain, or repeat. Whatever insights it yields, however real, are accidents, not assignments. A joke may be philosophical, but it must not philosophize. It may be moral, but it must not moralize, because life is serious and comedy is not.
2. There is no such thing as punching down. It is a conceit that rests on the fantasy that people exist within a clear hierarchy of oppression and that comedians should consult a moral spreadsheet before telling a joke. Humans, however, are messy, and power is multidimensional. If the joke lands, it’s good, and not because it “punched up,” but because it’s funny. As comedian Rowan Atkinson put it:
You’ve always got to kick up? Really? What if there’s someone extremely smug, arrogant, aggressive, self-satisfied, who happens to be below in society? … There are lots of extremely smug and self-satisfied people in what would be deemed lower down in society, who also deserve to be pulled up.

Humor, rather than reinforcing hierarchies, scrambles them, making a carnival of power, where prince and pauper swap faces and butts. People can be both victims and perpetrators at the same time. If a rich guy mocks a poor guy for being poor, he’s an asshole; if a poor guy does it, he’s an asshole too.
The impulse to sanitize humor in the name of safety is a well-intentioned but misguided coddling that infantilizes the very people it claims to protect. To be teased is to be an equal; to be seen as resilient enough to take a joke and confident enough to play along. Because good humor, by refusing to grant anyone a permanent victim’s pass, reminds us that our shared humanity, not our segregated identities, is the ultimate leveler.
3. The subject is not always the target. I heard a joke at an open mic the other day about a newspaper headline that read “World’s Worst Pedophile.” The story was about a man who had molested hundreds of children. After reading the headline, the comedian asked, “Shouldn’t he be the world’s best pedophile? I mean … the world’s worst pedophile—he’s been trying for years. He can’t afford the good candy, so he hands out stale trail mix. His van won’t start …” If you think the joke is making fun of molesting children or that it’s about finding pedophilia funny, you’re an idiot. It’s making fun of reporters and sloppy language.
But even if the joke actually was about pedophilia—as in Louis C.K.’s Saturday Night Live monologue, where he compares the joy of eating his favorite candy bar to what sex with children must be like for a child molester—treating a topic playfully doesn’t erase its gravity; it just recognizes that serious issues need not always be handled seriously.
4. Failure is the process. Even the best comics bomb; but in a decontextualized culture incentivized to screenshot rather than understand, we’ve made a habit of demanding perfection on the first try. The trouble is that, while great jokes look effortless, they’re the end result of a process that’s anything but. As David Chase said about the hundred-hour weeks he spent making The Sopranos—“hard work looks like magic.” Seinfeld once said he spent 20 minutes fine-tuning a single syllable. Chris Rock worked on three of his jokes in a recent Netflix special for over a decade. Being funny is hard—and comics need the space to fail. If you’ve ever watched open mics and seen the same comedians go up week after week to tinker with their bits, you know that the difference between killing and bombing often hinges on a single well-timed pause. Perhaps comedian Ari Shaffir summed it up best:
Failing is part of my process … A new bit never works the first time. I figure I have to bomb seven times to make it good. So I tweak it. Then maybe the next time it will do great … but then it will fall flat again. So I’ll make more adjustments. Then it will be great, then it will be terrible again … and all of that is okay.

This is why people who understand the function of humor tend to be more forgiving when things go wrong, and comedians are the most likely to forgive a failed joke. Dave Chappelle, for instance, responded to Michael Richards (Kramer on Seinfeld) calling a heckler a “nigger” at the Laugh Factory—an incident widely perceived as genuinely racist—by saying that he learned that he was 20 percent Black and 80 percent comedian:
The Black part of me was offended and hurt, but the comedian part was like, “Whoo, dude is having a bad set. Hang in there, Kramer!”

The bottom line is this—good jokes can’t emerge without experimentation. If it kills—great. If it doesn’t, even better—it means you’re part of a free society.
5. Risk is the form. Most humor involves taking risks. Larry David, for example, compared stand-up comedy to diving. You get extra points for degree of difficulty. Seinfeld said that jokes are like leaping from one tall building to another—the further the distance, the harder the joke. There is a big payoff if you can bring the audience with you, but if you try to jump too far or the dive is too difficult and you aren’t yet good enough, the joke bombs. This is why the worst thing you can do as a comedian is play it safe. As Patrice O’Neal put it: “The idea of comedy, really, is not [that] everybody should be laughing. It should be about 50 people laughing and 50 people horrified.”
Forcing comedy to seek 100 percent approval is like demanding a surgeon operate with a butter knife—you remove the danger, but you also remove the point.
The Last Laugh

Humor is not meant to be figured out, put to use, or taken seriously. It is meant to be experienced. But in a botox-bleached nation of caped crusaders wearing noise-cancelling headphones, deaf to anything but our own theme music and the imagined sound of unseen eggshells cracking beneath, Americans are being starved of the freedom to play without purpose.
Like an overzealous gardener who, in his war against the dandelion, has paved his entire yard with concrete, we are succeeding in eradicating the weed of offense but in the process killing the soil where flowers take root.
All of us, each so consumed in our own tiny corner of the universe, must be reminded every now and again that the world is what it is, and our ideas about it are not. It’s a ticklish business.
A little over eight years ago, The New York Times published a story that had profound implications for the way in which the UFO topic was perceived.1 It also began, at least in the U.S., a process by which the subject became increasingly mainstream. In this article I want to address three questions: (1) How did ufology get here? (2) Where does ufology stand now? (3) What does the future hold for ufology?
How did ufology get here?

On December 16, 2017, The New York Times broke two related stories. The first was the existence of forward-looking infrared videos of UAP (the U.S. government uses the term UAP—Unidentified Anomalous Phenomenon—as opposed to UFO) taken from U.S. Navy jets and confirmed by the Department of Defense as being authentic footage.2
The second part of the story was the existence of a shadowy intelligence program known as the Advanced Aerospace Threat Identification Program (AATIP), which supposedly researched and investigated UAP. This was newsworthy in and of itself, because for years the official position of the U.S. government had been that there was no longer any interest in UAP, and that no programs had existed to study the phenomenon since the end of the 1960s, when a long-running U.S. Air Force program known as Project Blue Book was terminated. Many people in the UFO community believed this was a lie and that covert programs existed, so it seemed like a clear-cut example of a conspiracy theory that turned out to be true.
The truth was rather more complex, and there’s still no universally accepted narrative here. Some skeptics say AATIP was more of an unofficial effort undertaken by a group of believers in the Intelligence Community. Whatever its true nature, AATIP was clearly a spin-off of an earlier Defense Intelligence Agency (DIA) program called the Advanced Aerospace Weapon System Applications Program (AAWSAP). AAWSAP was demonstrably a genuine program, and some official documents use the terms AAWSAP and AATIP interchangeably.3 In January 2020, Pentagon public affairs spokesperson Susan Gough issued a statement attempting to clear up the confusion. It stated:
The Advanced Aerospace Threat Identification Program (AATIP) was the name of the overall program. The Advanced Aerospace Weapons Systems Application Program (AAWSAP) was the name of the contract that DIA awarded for the production of all technical reports under AATIP.

I sought further clarification, and on January 13, 2020, Susan Gough followed this up with a statement that:
DIA managed the Advanced Aerospace Threat Identification Program. All of the work performed under AATIP was done via a single contract vehicle called AAWSAP. The total work effort for AATIP consisted of the 38 technical reports produced under the contract vehicle. DIA was the sole lead for management of AATIP via AAWSAP. Congress was briefed on the total work conducted for AATIP—the aforementioned 38 technical reports.

The authors of these 38 reports include Hal Puthoff, Eric Davis, and Kit Green—names well-known to those who follow government dabbling in fringe science and the paranormal.
My personal assessment is that all the euphemistic “advanced aerospace” references were a way of disguising a UFO or paranormal research program as being a program looking at next-generation foreign aerospace weapon threats, to try to protect it from skeptical Pentagon financiers and Congressional oversight folks who would have been horrified to learn that taxpayers’ money was being spent on such matters. This attempt was ultimately unsuccessful, because while $10M was appropriated in FY2008 and a further $12M in FY2010, funding ended in FY2012, after an earlier official review concluded that “the reports were of limited value to DIA.”
The roots of AAWSAP trace back to Intelligence Community personnel Jay Stratton and James Lacatski, as well as to Skinwalker Ranch in Utah, often portrayed as a hotbed of UFO sightings and paranormal phenomena. Following the DIA’s 2008 issue of a contractual solicitation (carefully worded to focus on breakthrough technologies that might underpin future aerospace weapon systems, while avoiding mention of UFOs or the paranormal), the contract was awarded to Bigelow Aerospace Advanced Space Studies (BAASS).4 Billionaire space entrepreneur Robert Bigelow was, at the time, the owner of Skinwalker Ranch.
Robert Bigelow had a longstanding interest in UFOs and the paranormal, and had previously funded the National Institute for Discovery Science (NIDS).5 The Chairman of the Board was the aforementioned Hal Puthoff, a parapsychologist who’d previously managed (with Russell Targ) a program at the Stanford Research Institute (not affiliated with Stanford University) to investigate paranormal phenomena. This work likely led to the U.S. government’s dabbling in such areas as remote viewing through Project Stargate, run by the DIA and CIA during the Cold War.
NIDS looked at a range of fringe science topics, and some have argued that AAWSAP was essentially a way to secure government funding for a continuation of the sort of work that had been done by NIDS. Senator Harry Reid (who knew Robert Bigelow) was instrumental in securing official status and funding for AAWSAP.
The New York Times story was quickly picked up by other mainstream media outlets around the world, and this caught the attention of numerous Congressional representatives and staffers. A key reason for this interest was the fact that aside from Harry Reid and two Senatorial colleagues, there seemed to have been no Congressional knowledge of AAWSAP or AATIP, and certainly no oversight.
In terms of UFOs, folks in Congress likely aren’t that different from society as a whole, in that there’s a wide range of opinions across the spectrum from skeptic to believer. Furthermore, irrespective of beliefs, it’s hardly surprising that an unknown but clearly significant number of people in Congress saw The New York Times article and thought to themselves something like, “Wait, the government has a UFO program, but didn’t tell us? It was run by Intelligence Community personnel and there’s no Congressional oversight? What are they doing and what have they found out?”
What followed was multifaceted Congressional interest in and engagement on the topic of UAP, to the extent that a critical mass built up. I believe a key factor here was that this engagement was bipartisan, covered both the Senate and the House, and involved several committees, mainly the Armed Services committees, the Intelligence committees, and the Oversight committees. This Congressional engagement led to classified briefings and public hearings. Witnesses at the public hearings included whistleblowers like Luis Elizondo (a retired counter-intelligence operative prominently featured in The New York Times article and described therein as being the individual who had run AATIP) and David Grusch, a former Intelligence Community member who had been attached to the UAP Task Force under the directorship of Jay Stratton.
Perhaps the most important part of Congressional UAP engagement was the insertion of multiple UAP-related provisions into several of the recent, annual National Defense Authorization Acts (NDAA). In part to meet these legislative remits, the DOD set up an office (the aforementioned UAP Task Force) to handle the response and to lead on the topic across government. This task force published a number of official reports and was eventually replaced by the All-domain Anomaly Resolution Office (AARO). AARO’s website hosts a wealth of reports, briefings, and other UAP-related materials, sourced both from the DOD and Congress, that perfectly illustrate both the breadth and depth of Congressional engagement and the government response to this Congressional interest.6
As an interesting side note, one of the directors of the UAP Task Force was the aforementioned Jay Stratton, who had previously been involved in AAWSAP and who had an anomalous experience at Skinwalker Ranch. Stratton’s upcoming memoir, apparently to be published in 2026 by HarperCollins, may shed some light on unresolved questions concerning the evolutionary process from NIDS to BAASS to AAWSAP to AATIP, as well as other not-yet-resolved questions.
It’s certainly interesting to note the connections between the various individuals involved and to see how the same names pop up repeatedly. This gives some potential insights into who the key players are and what the overall agenda is. The New York Times story, for example, had a long gestation period. The story was shopped around for some months prior to publication, not only to The New York Times, but also to The Washington Times and Politico, both of which were thus able to run fairly detailed stories very shortly after The New York Times got the scoop.
Further insights can be gained by looking at the three names that appeared on the byline for The New York Times story: Helene Cooper, Ralph Blumenthal, and Leslie Kean.
Helene Cooper was a Pentagon correspondent with The New York Times who had no previous UAP interest. Veteran New York Times reporter Ralph Blumenthal’s interest predated the December 2017 article and began with his research into Harvard Professor of Psychiatry John Mack, who had conducted research into the alien abduction mystery. This led to the 2021 publication of Blumenthal’s book on Mack, The Believer. Leslie Kean comes from a wealthy political family and had a prior interest in UAP and alien abductions, illustrated by her previous writings and by the fact that she lived for some years with abduction researcher Budd Hopkins, who first introduced John Mack to the topic.
It was Leslie Kean who was instrumental in bringing the story to The New York Times. Luis Elizondo had resigned from government service in the fall of 2017, but very shortly before leaving had passed the three best-known U.S. Navy UAP videos to Christopher Mellon, a former Deputy Assistant Secretary of Defense for Intelligence. Elizondo believed he had obtained official security clearance for their release, though it seems there was a misunderstanding and that the clearance was not intended to authorize public release. To illustrate this, an April 27, 2020, statement from the DOD referred to “unauthorized releases” of the videos in 2007 and 2017.7 In 2007, one of the videos leaked online on the Above Top Secret discussion forum, while 2017 referred to the process that led to The New York Times running the story.
Mellon and Elizondo then joined an organization called the To The Stars Academy of Arts and Science (TTSA), ostensibly headed by Blink-182 musician Tom DeLonge. TTSA was a sort of collaborative hub for a number of individuals, many with backgrounds in government UAP and fringe science research, including Hal Puthoff and retired CIA officer Jim Semivan.
It was Christopher Mellon who facilitated a meeting between Kean, Elizondo, and others, which then gave Kean enough to take the story to The New York Times, via Ralph Blumenthal, setting in motion a series of events that was to forever change the field of ufology.8
Where does ufology stand now?

This is how ufology in the U.S. went from fringe to mainstream, though it’s a simplified version, and not all the twists and turns of the story are universally agreed upon. If I had to summarize what I think happened and why, my best assessment would be as follows: A loose coalition of believers in UAP and the paranormal, often with backgrounds in government, military, and the Intelligence Community, sought and obtained official funding for their work. When that funding was terminated, they continued the work in a quasi-official capacity. Finally, when they felt they’d taken matters as far as they could without official funding, they decided to go public, successfully gambling that the resultant firestorm would generate other ways to take things forward. The goals may have included funding (TTSA certainly raised some money through a share issue) and Congressional engagement. The latter has clearly been a big success.
However, eight years into this process, there’s still no smoking gun and we appear to have hit some speed bumps, with several new and parallel events putting things in a rather different light.
Further ex-government whistleblowers have come forward. This sounds like a good thing, and in one sense, it is, but the unintended consequence has been that this has added to the information overload and created a landscape so complicated that even veteran commentators like myself, who follow the situation very closely, find it difficult to keep up. Furthermore, not all whistleblowers are equal. While one can be reasonably confident that those who have testified to Congress are who they say they are (staffers vet such people fairly thoroughly, not least by quizzing their former employers), others haven’t had their backgrounds investigated in such depth.
It should also be remembered that even when someone’s government background checks out, their specific role is often harder to pin down and their information can be all but impossible to verify. That’s partly because many of these folks have a background in the military and the Intelligence Community, where issues of classification often arise and where deception was literally in some of these people’s job descriptions. It’s also because much of the information is second hand, but where those concerned don’t make it clear that this is something that somebody else told them. Every intelligence analyst on the face of the planet knows the importance of differentiating between what they know and what they think, yet these very people often seem to be blurring the line. No wonder one occasionally hears some civilian UFO researchers complain that the whole thing is a PSYOP.
This already murky situation has been further complicated by factional infighting. There’s clearly a struggle for narrative control within the field. Even among the various whistleblowers and other key players, who are ostensibly polite with each other, there are clearly some tensions. By way of a personal anecdote, I’ve had more than one TV producer tell me how Individual A told them he’d appear on a show, provided Individual B wasn’t featured (the requests backfired because producers don’t usually play that game). I’m similarly aware that some of the key players who are ostensibly being polite to me are briefing against me, perhaps seeing my mainstream media platform as a potential threat, especially given that I’m independent in all this and don’t take anybody’s side. Because it so perfectly describes the situation, I can’t resist quoting a lyric from the O’Jays song Back Stabbers: “They smile in your face. All the time they want to take your place.”
There’s nothing new about infighting in the UFO community. What is new, however, is that folks with a background in military intelligence know a few dirty tricks that their civilian counterparts don’t. Plus, social media has acted as a force multiplier, with 𝕏 in particular having turned into a veritable battlefield between some of the key players, often using proxies and sock puppet accounts. Cliques, harassment, and doxxing seem to be the order of the day. Neither should we sweep under the carpet the uncomfortable truth that some of the people who’ve recently jumped aboard the ufology train clearly have psychological issues, while others sense a money-making opportunity.
To pick one example of all this infighting, the December 2025 appearance of Jay Anderson on Joe Rogan’s podcast seems to have set off a particularly nasty squabble.9 Jay criticized Luis Elizondo (among others), accusing him of orchestrating an aggressive campaign to control the narrative, as well as making reference to what he’s sometimes called a “UFO Hate Group.”10 In response, a group of Elizondo supporters, sometimes dubbed “the Lue Crew,” hit back against Jay Anderson.11
A related development is that a new generation of influential podcast hosts and YouTube channel owners saw the topic become increasingly mainstream and entered the fray. While many are honest brokers, their podcasts and channels are often the arena in which the struggle for narrative control plays out. Again, despite being a veteran commentator who follows all this closely, I struggle to work out who’s supporting which faction, how many factions there are, and the true nature of their respective agendas.
Cartoon by Oliver Ottitsch for SKEPTIC

What is the result of all this information overload, confusion, and infighting? Speaking personally, I’m fatigued. Moreover, I see from social media that other people are fatigued too. I’m a free speech absolutist, so I’m certainly not advocating any controls on this. I completely reject the idea (which has been floated several times over the years) that ufology should set up some sort of governing body, or somehow police itself. After all, who gets to decide who’s on the governing body, and quis custodiet ipsos custodes?
There are other developments that give me cause for concern. One of them relates to a couple of narrative shifts that I’ve noticed creeping into the topic.
Ufology has always been a big tent. In whistleblower David Grusch’s testimony to Congress, and in some of his media interviews, he used the terms “nonhuman” and “non-human intelligence.”12 In the Schumer-Rounds Amendment (a legislative proposal intended for insertion into FY2024 NDAA, but which did not find its way into the final bill), the term “non-human intelligence” was used multiple times.13 Grusch has said that this leaves the door open for other possibilities aside from the extraterrestrial hypothesis. And this has opened the door to some highly speculative discussions about cryptoterrestrials, ultraterrestrials, extratempestrials, and interdimensionals. It’s also led to something a little more on the dark side, with a theological bent.
The idea that aliens are fallen angels, or demons, isn’t new. But this once-niche theory has gotten a little more traction lately. Luis Elizondo has previously told the story of how, when he lobbied a senior Pentagon official to take more action over UAP, the official told him he should read his Bible. This appeared to reflect a belief that some aspects of UAP are demonic and that to study it would be to give it energy and feed it.
Such opinions have gained more mainstream traction with Representative Marjorie Taylor Greene expressing the views that aliens could be fallen angels,14 while high-profile broadcaster Tucker Carlson has also talked about UAP in terms of spiritual forces and entities like angels and demons.15 All of this plays into a neoreligious interpretation of ufology. Chris Bledsoe—author of UFO of God—talks about how an entity he dubs “The Lady” told him how glowing orbs would intervene to stop the missiles if Israel and Iran go to war. There’s an “end times” theme to a lot of this.16
Again, as a free speech absolutist, I wouldn’t dream of telling people what they can and can’t say about UAP, let alone what they should believe. Again, I’m merely commenting on the current state of play and expressing a personal opinion that I think some of the current narrative isn’t necessarily healthy or helpful. And I certainly doubt that it holds any validity.
Another narrative shift is the use of the term “psionics”—the idea that one can use the power of one’s mind to summon UAP. It’s a scientific-sounding term, but is it really that different from Steven Greer’s CE5 (Close Encounter of the Fifth Kind) protocols, whereby one can supposedly use meditation and other techniques to initiate contact with extraterrestrials? The danger, of course, is that certain individuals can then insert themselves as intermediaries; you can access the phenomenon, but only through them, because of their special abilities. Again, there’s a sort of quasi-religious, cultish feel to all this, in which one can only access the divine through the intermediary of the priest.
What does the future hold for ufology?

Given my assessment that ufology has to some extent moved from fringe to mainstream, but has hit some speed bumps, where do we go from here? I don’t have a crystal ball, but based on statements from a range of people involved in the process, it seems that further Congressional hearings and more whistleblowers would be a fairly good bet. The problem, of course, is that, short of a “smoking gun” (actual evidence and not just more stories), this runs the risk of reinforcing the view that it’s all talk and no action. Where’s the beef?
The Task Force on the Declassification of Federal Secrets is looking at UAP. There’s considerable overlap between personnel involved with the Task Force and personnel serving on the House Oversight Committee, which has been particularly vociferous on UAP. This brings up a potential problem, because while the Task Force is bipartisan, it skews toward Republicans. Thus, it wouldn’t take much to jeopardize the bipartisan nature of Congressional engagement, which would be a setback.
The UFO community continues to hope for Disclosure—the official acknowledgement of an extraterrestrial presence. The Age of Disclosure, a documentary produced by Dan Farah and released late in 2025, plays into this.17 So does Steven Spielberg’s upcoming film Disclosure Day.18 But it goes further than this, and 2027 is a potential date that’s been frequently mentioned.
Disclosure in 2027 would mean that Donald Trump would be the Disclosure President. There’s a curious kind of logic in this, because if there truly is a decades-long official cover-up of an extraterrestrial presence, the secret has been scrupulously kept by successive administrations of both political parties. By inference, therefore, the reasons for secrecy must be exceptionally compelling. Perhaps only a populist, maverick, second-term President would disclose in such circumstances—more so, given that Trump will soon be in his 80s and is doubtless mindful of his legacy. I agree that if the U.S. government is aware of an extraterrestrial presence, Trump is more likely than any previous president to spill the beans. President Trump has occasionally hinted that he’s privy to some interesting information about UFOs, but has yet to elaborate on the topic.19
Some argue that the secret of an extraterrestrial presence is kept even from presidents (perhaps to maintain plausible deniability) and is in the hands of an unelected set of gatekeepers, perhaps in the government, but possibly in the private sector. I find this unconvincing. Most Western governments operate on the basis of what the UK civil service calls the culture of “no surprises,” by which political leaders need to be briefed on all big, impactful issues that might require quick decisions and action.
If Donald Trump’s presidency ends without Disclosure, I’ll be 99.9 percent convinced that there’s nothing to disclose. I’d have to accept that if extraterrestrials are visiting Earth, nobody in the government is aware of it. The acceptance of such a state of affairs might actually be rather good for ufology. After all, while some conspiracies are real, most conspiracy theories are false, and encourage a negative, accusatory approach. Removing—or at least reducing—this mindset from ufology might lead to a healthier, less aggressive approach. It would also remove a lot of redundant effort, which could be better used elsewhere, such as in encouraging more scientists and academics to engage on the topic.
As I see it, ufology stands at an interesting crossroads. While some of the details remain disputed, the topic has undoubtedly transitioned from fringe to mainstream in the last few years. However, a mixture of information overload, infighting, and quasi-religious narratives may conspire to undo this progress. Allied to this, mainstream media interest in most topics waxes and wanes. The UFO community can’t expect their current fascination with the subject to last indefinitely. This is particularly true if Congressional engagement falls away, as it may well do if the perception is that the subject is becoming more partisan and more fringe, with the attendant dangers of reputational damage attaching to those Representatives who continue to express an interest.
Ufology has come out of the fringe and into the mainstream, but I believe there’s a distinct possibility that it will move out of the mainstream and back into the fringe.
Imagine you are at a puzzle night with friends. Someone poses this question: “You roll two dice. At least one shows a six. What’s the probability both show a six?”
The table splits. Half the people argue that the dice are independent, so the answer must be one in six. The other half insist it’s one in 11, perhaps pointing to the image below: there are 11 equally likely ways that a roll of two dice can show at least one six (the bottom row and rightmost column), and in only one of those rolls are both dice sixes.
So who’s right? Both—and neither. The correct answer is: “We can’t answer this without more information.” Depending on how you came to the information that there was at least one six, the answer can be one in six or one in 11.
Many so-called probability “paradoxes” arise from vague framing. In practice, data are generated by processes. Those processes define the pool of possibilities—and thus the probabilities. When the information-generating process isn’t specified, reasonable people can come up with different answers because they’re answering different questions.
Let’s return to the opening question. To reveal the ambiguity, I’ll frame it in two ways:
Puzzle 1: You roll two dice. One of them falls under the table and you can’t see it. The other one lands on top of the table, and it’s a six. What is the probability that both dice landed on a six?
Puzzle 2: You are rolling two dice blindfolded. A machine is programmed to ding if and only if at least one of them lands on a six. You keep rolling until the machine dings. What is the probability that both dice landed on a six?
Solutions
Puzzle 1: The probability that both dice landed on a six is one in six. In this scenario, you’ve learned a fact about a particular die: the one you can see is a six. One die doesn’t affect the other, so the six possible outcomes of the hidden die are equally likely.
Puzzle 2: The probability that both dice landed on a six is one in 11. In this scenario, you don’t have information about a particular die; the ding of the machine only tells you a property of the pair. This roll passed a filter that admits only outcomes with at least one six. Among the 36 ordered outcomes of two dice, 11 contain at least one six. Only one of those 11 outcomes is the double six.
Both of these puzzles fit the original question: “You rolled two dice. At least one shows a six. What’s the chance both show a six?” But because they have different information-generating processes, they have different solutions.
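The two information-generating processes can be checked with a quick Monte Carlo simulation (a sketch; the function names are mine):

```python
import random

def puzzle_1(trials=100_000):
    """One die lands visible on the table showing a six; the other is unseen.
    Estimate P(both sixes | the visible die is a six)."""
    hits = total = 0
    for _ in range(trials):
        table, floor = random.randint(1, 6), random.randint(1, 6)
        if table == 6:            # condition on a particular die
            total += 1
            hits += (floor == 6)
    return hits / total

def puzzle_2(trials=100_000):
    """A machine dings if and only if at least one die shows a six.
    Estimate P(both sixes | at least one six)."""
    hits = total = 0
    for _ in range(trials):
        a, b = random.randint(1, 6), random.randint(1, 6)
        if a == 6 or b == 6:      # condition on a property of the pair
            total += 1
            hits += (a == 6 and b == 6)
    return hits / total

print(puzzle_1())  # hovers near 1/6 (about 0.167)
print(puzzle_2())  # hovers near 1/11 (about 0.091)
```

The only difference between the two functions is the `if` condition, i.e., the filter that the information passed through before reaching you.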
Boy or Girl Paradox
In Martin Gardner’s famous “Boy or Girl Paradox,” sometimes called the “Two-Child Problem,” he poses this question: “Mr. Smith has two children. At least one is a boy. What is the probability that both children are boys?”
If this puzzle sounds familiar, that is because it is like the dice puzzle. Even if we adopt the assumption that births are like coin flips (i.e., independent, equally likely boy or girl, no multiple births), the puzzle is unanswerable. Gardner initially proclaimed the answer was “one in three,” but later admitted that the puzzle was ambiguous. The problem is that it does not tell us how we learned that at least one child is a boy.
Imagine you randomly meet a man named Mr. Smith at the park. He’s with one child, and that child is a boy. He mentions he has another child at home. What is the probability the child at home is a boy?
Seeing the boy in the park tells us nothing about the child at home. The possibilities are simply: “boy at the park, girl at home” or “boy at the park, boy at home.” Those two possibilities are equally likely, so the answer is one in two. The “child at the park” is like the die you can see and the “child at home” is like the die under the table.
Now imagine you have a list of all men named Mr. Smith in your city who have two children and at least one boy. You pick a man at random from that list. What are the chances both his children are boys? The four ordered possibilities in all two-child families are GG, GB, BG, and BB. However, your list excludes GG. That leaves GB, BG, and BB: three equally likely families, only one of which is BB. So the probability is one in three. This “filtered list” setup is like the machine-ding scenario: your knowledge is based on a property of the pair, not a particular child.
In both stories, it is true that Mr. Smith has two children and at least one is a boy. The answers differ because we came to know that fact in different ways.
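The same simulation trick works here. Under the coin-flip assumption about births, a sketch of the two Mr. Smith stories (names are mine) looks like this:

```python
import random

def park_meeting(trials=200_000):
    """You meet a two-child father with one randomly-chosen child in tow,
    and that child is a boy.  Estimate P(the child at home is a boy)."""
    hits = total = 0
    for _ in range(trials):
        kids = [random.choice("BG"), random.choice("BG")]
        at_park = random.randrange(2)        # which child came to the park
        if kids[at_park] == "B":             # condition on a particular child
            total += 1
            hits += (kids[1 - at_park] == "B")
    return hits / total

def filtered_list(trials=200_000):
    """You pick a father at random from a list of two-child families
    with at least one boy.  Estimate P(both children are boys)."""
    hits = total = 0
    for _ in range(trials):
        kids = [random.choice("BG"), random.choice("BG")]
        if "B" in kids:                      # the list excludes GG families
            total += 1
            hits += (kids == ["B", "B"])
    return hits / total

print(park_meeting())   # hovers near 1/2
print(filtered_list())  # hovers near 1/3
```

Again, the code makes the ambiguity concrete: the two scenarios differ only in the conditioning line.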
The Monty Hall Problem
The best-known version of the Monty Hall problem was posed by Craig F. Whitaker to Marilyn vos Savant in a 1990 issue of Parade magazine, one of the most widely read publications in the country at the time:
Suppose you’re on a game show, and you’re given the choice of three doors: behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

Marilyn answered, “Yes; you should switch. The first door has a 1/3 chance of winning, but the second door has a 2/3 chance.”
The magazine received over 10,000 letters, including many from highly educated readers, insisting that this answer was wrong. Don Edwards, from Sunriver, Oregon, suggested: “Maybe women look at math problems differently from men.” A Georgia State University professor, one W. Robert Smith, PhD, advised: “I am sure you will receive many letters on this topic from high school and college students. Perhaps you should keep a few addresses for help with future columns.” Another PhD correspondent, a University of Florida professor named Scott Smith, exclaimed:
You blew it, and you blew it big! Since you seem to have difficulty grasping the basic principle at work here, I’ll explain. After the host reveals a goat, you now have a one-in-two chance of being correct. Whether you change your selection or not, the odds are the same. There is enough mathematical illiteracy in this country, and we don’t need the world’s highest IQ propagating more. Shame!

Today, it’s widely accepted that Marilyn was right. People have even built computer simulations that reproduce the result. The story is often cited as a reminder that probability can be counterintuitive, and as a lesson that confidence and credentials don’t make us immune to mistakes. Those are valuable lessons—but I think much of the pushback came from a simpler reason: the problem phrasing was too vague.
For Marilyn’s solution to be correct, the game must guarantee from the start that the host will open a door showing a goat. This needs to be explicitly stated as a rule of the game. Many readers assume that the host knowing what’s behind the doors implies that he is guaranteed to open a goat door. But even if the host knows, that alone does not tell us what he is obliged to do. Marilyn’s answer relies on the following assumptions:
1. The host always opens a door after your initial choice.
2. He never opens the door with the car.
Simulations built to show why Marilyn’s answer is correct have these assumptions built into their programming. But if those conditions aren’t guaranteed, the probabilities change.
If we make the above two assumptions, then Marilyn’s advice is correct: you should switch. This can be explained simply:
• If your initial choice was a goat door, switching will certainly give you the car. Since there is a 2 in 3 chance your initial choice was a goat, there is a 2 in 3 chance you will win if you switch.
• If your initial choice was the car door, you will certainly lose if you switch. Since there is only a 1 in 3 chance your initial choice was the car, there is a 1 in 3 chance you will lose if you switch.
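Under those two assumptions, the standard simulation is short (a sketch; the function name is mine):

```python
import random

def monty_hall(switch, trials=100_000):
    """Standard rules: the host always opens a door after your pick,
    and he never opens the door hiding the car.
    Returns the estimated win rate for the given strategy."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a random door that is neither your pick nor the car.
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=True))   # hovers near 2/3
print(monty_hall(switch=False))  # hovers near 1/3
```

Notice that both assumptions are hard-coded: the host’s door is always chosen from doors that are neither the pick nor the car. The result depends on that line.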
This reasoning treats the host’s action as guaranteed and therefore uninformative about your original choice. If the host’s behavior is left unspecified, the fact that he opened a goat door can give you different information. Here are two variations of the puzzle that help to demonstrate that.
Optional Opening Variation
On a game show, there are three doors. Behind one is a car; behind the others, goats. After contestants choose a door, the host sometimes opens another door to show a goat (he never reveals the car). If he does open a door, he then offers you the chance to switch to the other closed door. It’s your turn: You pick a door, and the host opens a goat door. He offers a choice to switch to the other unopened door. What should you do?
You can’t answer this until you know the host’s policy—how often he opens a door given that a contestant’s initial pick is the car versus a goat. If the host is much more likely to open a door for contestants who initially picked the car, then seeing him open a door increases the likelihood that the car is behind your chosen door. Different policies lead to different conditional probabilities, so the question is unanswerable without more information.
Random Door Variation of the Monty Hall Problem
On a game show, there are three closed doors. Behind one is a car; behind the others, goats. After contestants pick a door, the host randomly opens another door. If he reveals a car, it’s game over. It’s your turn to play! You choose a door. The host randomly opens another door. It’s a goat! He offers a choice to switch to the other unopened door. What should you do?
It can help to think of this game being played many times. One third of players initially pick the car. For them, the host will always reveal a goat. Two thirds of players initially pick a goat. For them, the host reveals a goat half the time and reveals the car the other half. If we consider all games, we can split the players into three equal groups:

1. Initially picked the car; the host reveals a goat.
2. Initially picked a goat; the host reveals the other goat.
3. Initially picked a goat; the host reveals the car (game over).

All of these scenarios are equally likely, so a third of the players will be in each group. Since the question tells you that, in your game, the host opens a door that has a goat, you know that you are in either group 1 or group 2. Since it is equally likely that you are in either one, switching or staying makes no difference.
The crucial difference is that here the host could have shown the car but didn’t. In this variation, the host is more likely to open a goat door if your initial pick was the car door. In other words, the host opening a goat door gives you information about your initially chosen door: it increases the probability that it has a car from ⅓ to ½.
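A simulation of the random-door variation makes the difference visible; the only change from the standard game is which doors the host may open, plus the conditioning on a goat having been revealed (a sketch; the function name is mine):

```python
import random

def random_host(switch, trials=300_000):
    """The host opens one of the two remaining doors at random and may
    reveal the car.  We estimate the win rate conditioned on the
    situation described in the puzzle: he happened to reveal a goat."""
    wins = games = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        opened = random.choice([d for d in range(3) if d != pick])
        if opened == car:
            continue  # car revealed: game over, not the case we're asked about
        games += 1
        if switch:
            final = next(d for d in range(3) if d != pick and d != opened)
        else:
            final = pick
        wins += (final == car)
    return wins / games

print(random_host(switch=True))   # hovers near 1/2
print(random_host(switch=False))  # hovers near 1/2
```

Same wording as the original puzzle, different host policy, and switching no longer helps.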
A Clearer Phrasing
Here is a clear way to pose the problem such that Marilyn’s answer is correct:
You are about to go on a game show. The game is always played in the same way: The player is shown three closed doors. Behind two of them are goats; behind one is a car. The player wins if they pick the door with the car. After the player picks a door, the host opens another door, and it's always one with a goat behind it. Then the host offers the player the chance to switch to the other closed door. He does this in every game. What should you do to maximize your chances of winning?
You may have noticed that I also specified that you win if you pick the car door (not that you win what’s behind the door). This is because sometimes people say they prefer a goat over a car.
The trouble with the Monty Hall problem is that the standard wording is too vague. Marilyn’s answer is correct only under a particular set of rules about what the host will do, but those rules are frequently left out. Once you state the host’s policy clearly, many people who previously found the problem baffling finally see where the answer comes from.
For readers familiar with Bayes’s theorem, I leave you with a challenge:
You’re playing a game with the same setup as above, but now you’re told the host opens a door 75 percent of the time when the contestant initially picks a goat, and 25 percent of the time when the contestant initially picks the car. In your game, the host opens a door to show a goat. Should you switch or stay?
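For those who want to check their work, the calculation can be set up directly with Bayes’s theorem (fair warning: this sketch gives the answer away):

```python
# Prior: the initial pick is a goat with probability 2/3, the car with 1/3.
p_goat, p_car = 2/3, 1/3

# The stated host policy: he opens a door 75 percent of the time when the
# contestant initially picked a goat, 25 percent of the time when the
# contestant picked the car (and, as before, he never reveals the car).
p_open_given_goat, p_open_given_car = 0.75, 0.25

# Bayes's theorem: P(initial pick was a goat | the host opened a door).
# Switching wins exactly when the initial pick was a goat.
p_open = p_goat * p_open_given_goat + p_car * p_open_given_car
p_goat_given_open = (p_goat * p_open_given_goat) / p_open

print(p_goat_given_open)  # 6/7, roughly 0.857: switching wins six times in seven
```

Because the host opens a door three times more often after a goat pick, his opening one is itself evidence that you picked a goat, so you should switch.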
The “Obvious” Interpretation
For many such probability puzzles, one might object, “But the most natural interpretation is obviously X.” Natural to whom? When a puzzle leaves out the information-generating process, people may assume different backstories and end up answering different problems. What seems obvious to you may not be to someone else.
This lesson applies beyond puzzles. In many disagreements, people talk past each other because they understand the same term in different ways or are working from different assumptions. When you make those terms and assumptions explicit, you may find you disagree on less than you thought. Even if you don’t, that precision lays the groundwork for a more productive discussion. Clarity is crucial, whether you’re posing a puzzle or talking with someone who sees things differently than you do.
The new documentary film by Dan Farah, The Age of Disclosure, has been widely reviewed in mainstream media (CNN, Fox News, News Nation, The New York Times, The Guardian, etc.) and intensely discussed, not only on popular podcasts by UFO enthusiasts but at the highest levels of government. President Trump has commented on it, and in an exclusive Fox News interview with Sean Hannity, Secretary of State Marco Rubio clarified that the film had been selectively edited to make it seem that he knows more than he does about the phenomena. Those phenomena are now known as UAPs, or Unidentified Anomalous Phenomena, a designation that marks the transition of this once-fringe movement into mainstream conversation. How did this happen … and why?
In this excerpt from my book, Truth, I will give a general overview of the UFO/UAP phenomena, explain why most scientists and journalists reject the evidence (that consists almost entirely of grainy videos, blurry photographs, and anecdotes about strange lights in the sky) and remain skeptical, discuss the accusations of a government cover-up of the evidence and a secret Pentagon crash-retrieval program, and offer a sociocultural explanation for how and why all this unfolded as it did and what the deeper quasi-religious motivations might be for belief in a higher power come to Earth to rescue humanity from itself.
The Residue Problem
In Leslie Kean’s 2010 book UFOs: Generals, Pilots and Government Officials Go on the Record, the ufologist admitted that “roughly 90 to 95 percent of UFO sightings can be explained” as:
weather balloons, flares, sky lanterns, planes flying in formation, secret military aircraft, birds reflecting the sun, planes reflecting the sun, blimps, helicopters, the planets Venus or Mars, meteors or meteorites, space junk, satellites, swamp gas, spinning eddies, sundogs, ball lightning, ice crystals, reflected light off clouds, lights on the ground or lights reflected on a cockpit window, temperature inversions, hole-punch clouds, and the list goes on!1

So the entire extraterrestrial hypothesis for explaining UFOs and UAPs is based on a residue of data left over after the above list has been exhausted. What’s left? Not much.
Kean opens her exploration “on very solid ground, with a Major General’s firsthand chronicle of one of the most vivid and well-documented UFO cases ever”—the UFO wave over Belgium in 1989–1990. That Major General is Wilfried De Brouwer, and here is his recounting of the first night of sightings:
Hundreds of people saw a majestic triangular craft with a span of approximately a hundred and twenty feet and powerful beaming spotlights, moving very slowly without making any significant noise but, in several cases, accelerating to very high speeds.

First, how does he know it had a span of 120 feet? What measurement instrument was used? These are questions that, if answered, could lead us to the truth about this UFO sighting. Regardless, even seemingly unexplainable sightings such as De Brouwer’s can have quotidian explanations. Perhaps it was an early experimental model of a delta-wing bomber (U.S., Soviet, or otherwise) that secret-keeping military agencies were understandably loath to reveal. Or maybe it was three sources of aerial lights (flares? small planes?) that from the perspective of a ground observer appeared triangular in shape, with the mind filling in the space between the lights.
The one and only photograph associated with the Belgian event seems to show a triangular shaped craft, but UFO investigator Robert Sheaffer reports that it was, in fact, a faked photograph of a small styrofoam model with three spots affixed to it.2 According to the Belgian news organization RTL, it was hoaxed by a 20-something man named Patrick:
A DIY, done in a few hours and photographed in the evening, a joke, inspired by the UFO wave born a few months earlier, which targeted the friends of the small company where Patrick worked. But now, the joke will leave the walls of the factory. “We didn’t think it would come out of the factory where we worked. It went much further and then we let it go,” admits Patrick.3

Since the fake photograph was inspired by “real” sightings, we still need to deal with those in order to get to the truth, so let’s compare De Brouwer’s narrative above to Kean’s summary of the same incident:
Common sense tells us that if a government had developed huge craft that can hover motionless only a few hundred feet up, and then speed off in the blink of an eye—all without making a sound—such technology would have revolutionized both air travel and modern warfare, and probably physics as well.

Note how De Brouwer’s 120-foot craft becomes “huge” in Kean’s retelling, how “moving very slowly” was changed to “can hover motionless,” how “without making any significant noise” shifted to “without making a sound,” and how “accelerating to very high speeds” was transformed into “speed off in the blink of an eye.” This language transmutation is common in UFO narratives, making it harder for scientists and skeptics to provide natural explanations.
Pilots, Astronauts, and Eyewitness Accuracy
One reason for Kean’s confidence in her assertion that at least some UFOs and UAPs represent alien spacecraft is that she thinks pilots and astronauts “represent the world’s best-trained observers of everything that flies. What better source for data on UFOs is there? ... [They] are among the least likely of any group of witnesses to fabricate or exaggerate reports of strange sightings.”
Is that true? Consider this assessment by the renowned astronaut and pilot Scott Kelly, who, at a NASA press conference dealing with the latest flap of UAP sightings, threw cold water on the myth of the extraordinary perceptual powers of pilots and astronauts (condensed and edited for style and clarity):4
In my experience of flying over 15,000 hours in 30 something years in airplanes and in space, the environment that we fly in is very conducive to optical illusions, so I get why these pilots would look at that Go Fast video and think it was going really, really fast. I remember one time I was flying off Virginia Beach Military operating area, and my RIO [Radar Intercept Officer], who sits in the back of the Tomcat, was convinced we flew by a UFO. I didn’t see it, so we turned around to go look at it. It turns out it was a Bart Simpson balloon.

When UFO enthusiasts breathlessly state that this latest wave of UAP sightings was confirmed as “real” by no less an authority than The New York Times, the assumption is that the “paper of record” launched an investigation of its own, independent of ufologists.
That is not what happened. If you check the byline for that and related articles in that paper, one of the coauthors is none other than Leslie Kean, who as we have seen is anything but a neutral and objective narrator of the UFO phenomena and the government’s response to it. (Kean has since written a book and produced a Netflix documentary series called Surviving Death, on Near-Death Experiences and the afterlife.5) Although coauthor Helene Cooper does work for the paper as a correspondent for Pentagon matters, the other coauthor, Ralph Blumenthal, left the paper in 2009 and wrote a book titled The Believer: Alien Encounters, Hard Science, and the Passion of John Mack, the late Harvard psychiatrist who uncritically accepted alien abduction stories as accounts of real close encounters of the fourth kind.6 And while The New York Times article was an accurate work of reportage as far as it goes, it didn’t go very far, quoting only one skeptic, James Oberg. This was at least better than 60 Minutes in their coverage of the UAP flap that astonishingly—given their reputation as one of the most respected sources in all media—failed to interview a single scientist or skeptic familiar with the sightings under investigation.
60 Minutes correspondent Bill Whitaker asked Lue Elizondo, who directed the Pentagon’s Advanced Aerospace Threat Identification Program (AATIP), “So what you are telling me is that UFOs, Unidentified Flying Objects, are real?” Elizondo replied: “The government has already stated for the record that they’re real. I’m not telling you that. The United States government is telling you that.”7
The word “real” is doing a lot of work here. No one—not the media, not the military, and certainly not the United States government—is saying that these sightings represent real alien visitors. What they are confirming as “real” is the videos themselves, as representing something out there in the world (and not a fake video or hoaxed CGI production). But when UFO believers and the general public hear the word “real,” their brains tend to autocorrect to “alien” or “Russian or Chinese assets,” instead of an ordinary effect of cameras and visual illusions or, simply, unexplained anomalies.
In my own classification system to explain UFO and UAP sightings, I distill them into three hypotheses: (1) Ordinary Terrestrial (balloons, camera or lens effects, visual illusions, etc.), (2) Extraordinary Terrestrial (Russian or Chinese spy planes or drones capable of feats of physics and aerodynamics unheard of in the U.S.), and (3) Extraordinary Extraterrestrial (alien intelligence). Let’s consider each of these hypotheses and see which one has the highest credence.
Ordinary Terrestrial
The first video in this latest UFO/UAP flap was that of Lt. Commander Alex Dietrich, who reported seeing an unidentified aircraft about 70 miles west of San Diego in 2004. Her explanation of what she thinks she saw is emblematic of the entire phenomenon and reinforces my point about the residue problem: “Just because I’m saying that we saw this unusual thing in 2004, I am in no way implying that it was extraterrestrial or alien technology or anything like that,” adding that “I think that the [U.S. government] report’s going to be a huge letdown. I don’t think that it’s going to reveal any fantastic new insight.”8 Indeed, the report was predictably unrevealing of anything alien.
The three most widely viewed and discussed videos were filmed by infrared cameras mounted on Navy F/A-18 jets over the Atlantic seaboard and off the coast of San Diego. They were taken by the Navy Advanced Targeting Forward Looking Infrared (ATFLIR) camera pods attached to the fuselage of the jets, and the videos are now known as FLIR/Nimitz/Tic Tac (San Diego, 2004), Gimbal, and Go Fast (Florida coast, 2015).
Figure 1. FLIR/Nimitz/Tic Tac (video still frame)
FLIR/Nimitz/Tic Tac (Figure 1) is the 2004 Nimitz video taken by Lt. Chad Underwood. According to Popular Mechanics, it first came to light in 2007 on a UFO website.9 It was elevated into public consciousness when it was reposted by The New York Times in Leslie Kean’s original article, then re-reposted in 2019 by the former Blink-182 frontman Tom DeLonge’s UFO organization “To the Stars Academy of Arts and Science.”10 In response, the Navy acknowledged that the videos were “real,” meaning that they are real videos and not hoaxes.11 Finally, in 2020 the Pentagon re-re-reposted the three videos “in order to clear up any misconceptions by the public on whether or not the footage that has been circulating was real, or whether or not there is more to the videos.”12 So, when people talk about these “new” videos, they are anything but new.
The heavy lifting on analyzing these videos from the skeptical community was done by Mick West, a former video game designer, host of the Metabunk.org website and the Tales From the Rabbit Hole podcast, and a former columnist for Skeptic magazine.13 It is a remarkable body of work; one can only hope the Pentagon has conducted analysis at a comparably high level on its own, or at the very least has considered West’s analyses as part of its investigations.
In the FLIR video, for example, the object appears to zoom almost instantly off the screen, interpreted by some to indicate extraordinary speed and turning ability far beyond anything our jets are capable of. Note that in the upper left of the screen the camera “zoom” indicator doubles from 1 to 2 at the moment the object “zooms” to the left. When West slowed the replay to compensate for the doubled zoom, the extraordinary maneuver became quite ordinary.
FLIR and Gimbal, says West, are what one would see if a jet were flying away from the camera, thus accounting for the eyewitness accounts that the object showed no directional control surfaces or exhaust. And their apparent shapes as saucer-like and “Tic Tac,” West continues, are due to glare on the lens of the camera. As he told the San Diego Union-Tribune reporter Andrew Dyer, “What we’re seeing in the distance is essentially just the glare of a hot object,” most likely that “of an engine—maybe a pair of engines with an F/A-18—something like that.”
(To be sure, not everyone accepts West’s conclusions. See, for example, ufologist Robert Powell’s analysis in his 2024 book UFOs,14 who told me “You are correct in your quoting of Mick. Whether his assertions are correct is very debatable.”15)
As well, West notes, sudden acceleration of the aircraft could cause the FLIR camera to lose lock on the object, thereby making it look like it is the object making extraordinary maneuvers. As he writes, “The supposed impossible accelerations in the Tic Tac video were revealed to coincide with (and hence caused by) sudden movements of the camera, leading to the conclusion that the object in the video was not actually doing anything special.”16
Figure 2a. Go Fast (video still frame). Watch the video on the US Navy website.
Figure 2b. Basic trigonometry reveals the Go Fast object was at 13,000 feet altitude, not skimming the ocean. The apparent speed is a parallax effect from the jet’s movement.
The Go Fast video (Figure 2a) purportedly shows an object with no heat source (and therefore propelled by some unconventional engine) that appears to move impossibly fast just above the surface of the ocean. West then conducted what he describes as “10th-grade trigonometry” to show that, in fact, the object was actually well above the ocean surface at around 13,000 feet (Figure 2b) and was probably just a weather balloon traveling at about 30–40 knots.17 “Because of the extreme zoom and because the camera is locked onto this object … the motion of the ocean in this video is actually exactly the same as the motion of the jet plane itself. You’re seeing something that’s actually hardly moving at all, and all of the apparent motion is the parallax effect from the jet flying by.”
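The “10th-grade trigonometry” amounts to subtracting the vertical drop along the camera’s line of sight from the jet’s altitude. As a sketch of the idea, here is the calculation with approximate round-number values of the kind shown on the jet’s targeting display (treat the specific figures as illustrative, not as the exact readings from the video):

```python
import math

# Illustrative, approximate inputs (not the exact on-screen readings):
jet_altitude_ft = 25_000           # altitude of the F/A-18
slant_range_ft = 4.4 * 6076        # displayed slant range, nautical miles -> feet
look_down_deg = 25                 # camera angle below the horizon

# Vertical drop from the jet to the object along the line of sight,
# then the object's altitude above the ocean.
drop_ft = slant_range_ft * math.sin(math.radians(look_down_deg))
object_altitude_ft = jet_altitude_ft - drop_ft

print(round(object_altitude_ft))   # on the order of 13,000 ft, not sea level
```

With numbers in this range, the object comes out miles above the water, which is the heart of West’s argument that the “skimming” is an illusion of parallax.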
Figure 3a. Mick West’s Gimbal analysis (YouTube video)
Figure 3b. The Gimbal video’s rotation is a camera artifact. When the gimbal mechanism rotates, the entire image—including background lights—rotates in sync, not the object itself.
The most talked-about video is “Gimbal” (Figure 3a), an object that appears to skim effortlessly over background clouds, then come to an abrupt stop and rotate in midair with no apparent propulsion systems to pull off such a maneuver. Again, astoundingly, West appears to be the only person to notice that when the Gimbal object rotates, background patches of light in the scene also rotate in perfect unison with it. “I think what’s clear about Gimbal is it’s very hot—it’s consistent with two jet engines next to each other and the glare of these engines gets a lot bigger than the actual aircraft itself, so it gets obscured by it,” West explains, adding that “at the start of the video, it looks like the object is moving rapidly to the left because of the parallax effect, and the rotation was a camera artifact (Figure 3b), and that the ‘flying saucer’ was simply the infrared glare from the engines of a distant aircraft that was flying away.”18 When he looked up the patents for that camera, West found that the gimbal mechanism was responsible for the apparent rotation.19
Figure 4. Flying Triangle (YouTube video). Mick West demonstrating that the triangular shape is likely the bokeh effect—where out-of-focus light takes the shape of the camera’s triangular lens aperture rather than a physical craft.
Since then, two more videos by the UAP Task Force were released, one showing a flying triangle (Figure 4) and the second an apparently zig-zagging submersible sphere (Figure 5). As the media and public gawked at yet another triangle-shaped UFO, West noted that it was filmed at night beneath the flight path into LAX, and that the object blinked in perfect unison with that of commercial airliners flying into Los Angeles from Hawaii. The triangular shape, he surmised, was most likely the result of a triangular shaped lens aperture, and the “bokeh” effect, or the soft out-of-focus background generated by shooting a subject with a fast lens and wide aperture.20 In fact, there were other triangle shaped objects in the image that correspond perfectly to celestial objects that West identified as the planet Jupiter and some known stars.
Figure 5. Submersible Sphere (YouTube video). Analysis by Mick West.
As for the “zig-zagging” object, also filmed off the coast of California from the combat ship Omaha, as you can see in West’s video analysis it is the camera that is zig-zagging, not the object, and it doesn’t “submerse” into the water; it simply disappears beyond the horizon (and the video is, in any case, so grainy that it isn’t clear at all what is going on with whatever was being filmed).21
Extraordinary Terrestrial

An alternative to ordinary explanations for UAP sightings is that they represent Russian or Chinese assets, drones, spy planes, or some related but as yet unknown (to us) technology capable of speeds and turns that seemingly defy all known physics and aerodynamics.
This hypothesis is highly unlikely, given what we know about the evolution of technological innovation, which builds cumulatively on the past. In his 2020 book, How Innovation Works,22 Matt Ridley demonstrates through numerous examples that innovation is an incremental, bottom-up, fortuitous process that results from the human habit of exchange, rather than an orderly, top-down process developing according to a plan. Innovation, he continues, “is always a collective, collaborative phenomenon, not a matter of lonely genius. It is gradual, serendipitous, recombinant, inexorable, contagious, experimental, and unpredictable. It happens mainly in just a few parts of the world at any one time.” Examples include steam engines, jet engines, search engines, airships, vaping, vaccines, cuisine, antibiotics, mosquito nets, turbines, propellers, fertilizer, computers, dogs, farming, fire, genetic engineering, gene editing, container shipping, railways, cars, safety rules, wheeled suitcases, mobile phones, corrugated iron, powered flight, chlorinated water, toilets, vacuum cleaners, shale gas, the telegraph, radio, social media, blockchain, the sharing economy, artificial intelligence, faddish diets, and hyperloop tubes.
It is simply not possible that some nation, corporation, or lone individual—no matter how smart and creative—could have invented and innovated new physics and aerodynamics to create an aircraft of any sort that is, essentially, centuries ahead of all known present technologies. That is not how innovation works. It would be as if the United States were using rotary phones while the Russians or Chinese had smartphones, or we were flying biplanes while they were flying stealth fighter jets, or we were sending letters and memos via fax machine while they were emailing files via the internet, or we were still experimenting with captured German V-2 rockets while they were testing SpaceX-level rocketry. Impossible. We would know about all the steps leading to such technological wizardry.
Extraordinary Extraterrestrial

Could these UAPs and UFOs represent visitations by ETIs? Let’s first separate two questions that most people confuse: (1) Are aliens out there somewhere in the cosmos? (2) Have aliens come here? When I state my skepticism about the latter, people assume I’m also skeptical about the former. “Do you seriously think we’re alone in this vast cosmos?” is a common rejoinder I hear when I say something like “UFOs are not ETIs.” So let me state for the record that although we have no definitive evidence to answer either question in the affirmative, I think it highly likely that aliens are out there somewhere but have not yet come here.
To the first question, the law of large numbers suggests that aliens are very likely out there somewhere in the cosmos. A 2016 analysis of the Hubble Ultra Deep Field by NASA and the European Space Agency estimated that there are ten times the number of galaxies previously known (about one hundred billion), meaning that there are at least one trillion galaxies in the universe,23 each of which has at least one hundred billion stars, for a total of a hundred billion trillion stars—100,000,000,000,000,000,000,000—an almost inconceivably large number made even larger by the Kepler Space Telescope’s discovery that nearly all stars have planets, adding yet another zero to that already Brobdingnagian number of possible places where life could evolve into an intelligent communicating species. We also now know that it takes only a few million years for stars and planets to coalesce out of clouds of dust and gas to form solar systems. In our galaxy alone this happens about once a month; across a trillion galaxies, that works out to hundreds of thousands of new solar systems born every second.
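The arithmetic in the paragraph above is easy to check. A minimal sketch, using the essay’s round numbers (one trillion galaxies, one hundred billion stars each, one new solar system per galaxy per month; these are illustrative figures, not precise astronomical values):

```python
# Check the star-count arithmetic using the essay's round numbers.
galaxies = 10**12          # at least one trillion galaxies
stars_per_galaxy = 10**11  # at least one hundred billion stars each

total_stars = galaxies * stars_per_galaxy
print(f"{total_stars:,}")  # 100,000,000,000,000,000,000,000 (10**23)

# One new solar system per galaxy per month, converted to a per-second rate.
seconds_per_month = 30 * 24 * 60 * 60
new_systems_per_second = galaxies / seconds_per_month
print(round(new_systems_per_second))  # 385802 -- hundreds of thousands per second
```

With these round numbers the total comes to 10^23 stars, and the birth rate works out to several hundred thousand new solar systems per second, not merely a thousand.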
To the second question, Fermi’s Paradox—first articulated by the renowned physicist Enrico Fermi—implies that with so many stars and planets in the known universe there should be lots of ETIs out there, and assuming that at least some of those (half?) would be millions of years ahead of us on an evolutionary time scale, their technologies would be advanced enough to have found us by now, but they haven’t, so … where is everybody?
Answers to the paradox are now legion, with at least 75 explanations for why we haven’t found ETIs yet,24 including: uniqueness (we’re alone), out of range (they’re too far away to have been discovered yet), failures of perception (they’re aquatic instead of land-based), failures of imagination (they haven’t thought of searching), inadequate search strategies (they or we are using the wrong technology to search), dark forest (they’re hiding), zoo hypothesis (they’re observing us secretly), transcendence (they’re from a different dimension or are pure spirit beings), ancient aliens (they visited thousands or millions of years ago), home bound (they don’t travel), and beyond our imaginations (they are so wholly Other that we can’t begin to know how to make contact).25 Here is my Twitter-length answer to Fermi’s Paradox:
ETIs are probably out there in the cosmos, but there probably are not that many of them, and because of the vast interstellar distances and their extreme rarity they have not been here. But keep searching, as such a discovery would be one of the greatest in human history!

Sky Gods for Skeptics

In his 1982 book The Plurality of Worlds, the historian of science Steven Dick suggested that when Newton’s mechanical universe replaced the medieval spiritual world it left a lifeless void that was filled with the modern search for ETI.26 In his 1995 book Are We Alone? the physicist Paul Davies wondered: “What I am more concerned with is the extent to which the modern search for aliens is, at rock bottom, part of an ancient religious quest.”27 The historian George Basalla made a similar observation in his 2006 work Civilized Life in the Universe: “The idea of the superiority of celestial beings is neither new nor scientific. It is a widespread and old belief in religious thought.”28 In his 2007 book, Contact with Alien Civilizations, Michael A.G. Michaud proposes that “one of the drivers behind our search for other intelligent beings is our desire to find or attribute purpose to our existence. We have an innate yearning to be identified as part of some ill-defined grander scheme of things.”29

Here is how Carl Sagan expressed the sentiment in an interview with CBS anchor Walter Cronkite:
I think a key to what’s behind the real belief in flying saucers is most easily obtained if you look at the contact myths. There are several hundred people in the United States who claim to have had personal contact with the inhabitants of flying saucers that have landed. And if you examine these myths, you find that there are some peculiar regularities. The inhabitants of saucers are benevolent. I mean, they’re really concerned for our well-being. They’re omnipotent, extremely powerful, omniscient, extremely knowledgeable, and they often wear long, white robes. Now this combination is something I’ve heard in another context. This isn’t science, this is religion.30

To test this hypothesis, the psychologist Clay Routledge and his colleagues published a paper titled “We Are Not Alone,” in which they reported an inverse relationship between religiosity and ETI beliefs—that is, those who report low levels of religious belief but high desire for meaning show greater belief in ETIs.31 In Study 1, subjects who read an essay “arguing that human life is ultimately meaningless and cosmically insignificant” were statistically significantly more likely to believe in ETIs than those who read an essay on the “limitations of computers.” In Study 2, subjects who self-identified as either atheist or agnostic were statistically significantly more likely to report believing in ETIs than those who reported being religious (primarily Christian). In Studies 3 and 4, subjects completed a religiosity scale, a meaning in life scale, a well-being scale, an ETI belief scale, and a religious supernatural belief scale. “Lower presence of meaning and higher search for meaning were associated with greater belief in ETI,” the researchers reported, but ETI beliefs showed no correlation with supernatural beliefs or well-being beliefs.
From these studies the authors conclude: “ETI beliefs serve an existential function: the promotion of perceived meaning in life. In this way, we view belief in ETI as serving a function similar to religion without relying on the traditional religious doctrines that some people have deliberately rejected.” By this they mean the supernatural. “That is, accepting ETI beliefs does not require one to believe in supernatural forces or agents that are incompatible with a scientific understanding of the world.” If you don’t believe in God, but seek deeper meaning outside of our world, the thought that we are not alone in the universe “could make humans feel like they are part of a larger and more meaningful cosmic drama.” I concur, and so give the last word to Lt. Commander Alex Dietrich, who witnessed the 2004 UAP incident from a USS Nimitz fighter jet, as I think it well sums up 75 years of ufologists’ search for aliens: “I think they enjoy the anticipation more than actually finding answers.”
It’s not easy being a futurist (which I guess I technically am, having written a book about the future of technology). It never was, judging by the predictions of past futurists, but it seems to be getting harder as the future is moving more and more quickly. Even if we don’t get to something like “The Singularity”, the pace of change in many areas of technology is speeding up. Actually it’s possible this may, paradoxically, be good for futurists. We get to see fairly quickly how wrong our predictions were, and so have a chance at making adjustments and learning from our mistakes.
We are now near the beginning of many transformative technologies – genetic engineering, artificial intelligence, nanotechnology, additive manufacturing, robotics, and brain-machine interface. Extrapolating these technologies into the future is challenging. How will they interact with each other? How will they be used and accepted? What limitations will we run into? And (the hardest question) what new technologies not on that list will disrupt the future of technology?
While we are dealing with these big questions, let’s focus on one specific technology – controllable robotic prosthetics. I have been writing about this for years, and it is an area that is advancing more quickly than I had anticipated. The reason for this is, briefly, AI. Recent advances in AI allow for far better brain-machine interface control than was previously achievable, because the technology is really good at picking out patterns from tons of noisy data. This includes picking out patterns in EEG signals from a noisy human brain.
This matters when the goal is having a robotic prosthetic limb controlled by the user through some sort of BMI (from nerves, muscles, or directly from the brain). There are always two components to this control – the software driving the robotic limb has to learn what the user wants, and the user has to learn how to control the limb. Traditionally this takes weeks to months of training, in order to achieve a moderate but usable degree of control. By adding AI to the computer-learning end of the equation, this training time is reduced to days, with far better results. This is what has accelerated progress by a couple of decades beyond where I thought it would be.
But it turns out this AI-assisted control can be a double-edged sword. To understand why, we need to quickly review how the human brain adapts to artificial bodies or body parts. The short answer is – quite well. The reason is that our sense of ownership and control is a constructed illusion of the brain in the first place. Circuits in our brain create the subjective sensation that each part of our body is part of us, that we own that body part (the sense of ownership) and that we control that body part (a sense of agency). We know about this largely from studying patients who have damage in one or more of these circuits, which causes them to feel that a body part is not theirs or that they don’t control it.
This means that this circuitry can be hacked to make the brain create the sensation that you own and control a robotic or virtual limb. Luckily, this hacking is actually pretty simple. The brain compares different sensory inputs to see if they match, while also comparing motor intentions with motor outputs. So – if you see and feel a limb being touched, your brain will interpret that as you owning the limb. It can be that simple. If you intend to make a movement, and you see and feel the limb make that movement, then you feel as if you control the limb. So a robotic limb with some sensation, with some haptic feedback, and that does what we want it to do, will feel as if it is naturally part of us. The research is moving now in this direction, to close these loops as much as possible.
This, however, is where we run into a snag with AI-controlled robotic limbs. Part of the advance is that AI can add fine motor control to an artificial hand, say. Briefly, robotic movement tends to fall into one of three categories: you can directly control the robot, the robot can carry out a pre-programmed sequence of movements, or the robot can determine its movements in real time based on sensory feedback. When seeing a robotic demonstration you should always ask – what type of control is being demonstrated?
For robotic limbs what we want is direct control of the robot. While this is advancing, it is still somewhat limited and clumsy. So we can refine the direct control by adding one or both of the other two types of control. This means to some extent the robotic limb is carrying out the desired movements of the user with internal control. This can greatly increase the functionality of the robotic limb, but it comes at a cost of the user’s sense of embodiment and agency. Imagine if your hand were executing movements all by itself. It would feel uncanny and unnerving.
This is a long windup to a new study that tries to address this issue. The researchers looked at the effect of the movement speed of an AI-controlled robotic limb on the user’s sense of ownership and agency. What they found was not surprising, but it is good to know that this variable matters and needs to be taken into consideration. They varied the execution time of an AI-controlled movement from 125 ms to 4 seconds. A moderate speed, about 1 second, resulted in the best sense of ownership and agency (or, we can say, the least interference with these senses). The closer you got to either extreme, the more the user felt an uncanny sense of unease, as if they did not own or control the robotic limb. This is a Goldilocks effect – too fast or too slow is no bueno, but just right results in a good outcome.
This result also makes sense in light of prior neurological research showing that our brains evaluate the world by how it moves. We separate agents from non-agents by how they move (the latter move in an inertial frame while the former do not). Neurologists also know this because movement disorders can often be diagnosed (sometimes at a glance) by how the patient moves. Our brains are finely tuned to what constitutes normal human movement. Too fast or too slow, hypokinetic or hyperkinetic, and our brains immediately register that something is wrong.
So if we see our robotic limb moving at a normal human pace, doing what we want it to do (even though the fine movements are enhanced by AI) that can still be good enough for us to accept the limb as belonging to us and that we control it. There is likely also a Goldilocks zone here as well – too much AI control will break the illusion of control, while too little is of no use, but just right will be the best compromise between functionality and acceptance.
The nuances of neurological control, through a brain-machine interface, of an AI-enhanced robotic limb are one of those futurism problems that would have been difficult to anticipate.
The post The Future of AI-Powered Prosthetics first appeared on NeuroLogica Blog.
In 2020 Joe Biden became the first Democratic nominee in 36 years without a degree from the Ivy League. Obama, before him, filled no less than two-thirds of all cabinet positions with Ivy League graduates—over half of which were drawn from either Harvard or Yale.1 In Congress today, 95 percent of House members and 100 percent of senators are college educated.
According to a recent study published in Nature, 54 percent of “high achievers” across a broad range of fields—law, science, art, business, and politics—hold degrees from the 34 most elite universities in the country.2 The sociologist Lauren Rivera, studying top firms in finance, consulting, and law, found that recruiters are jonesing for applicants from prestigious academic institutions; typically targeting just three to five “core” universities in their hiring efforts—Harvard, Yale, Princeton, Stanford, and MIT—the usual suspects; then identifying five to fifteen additional second-tier options—such as Berkeley, Amherst, and Duke—from which they will more tentatively accept resumés.3 Everyone else almost certainly never even gets a reply email. Why? Because, as one lawyer explained the strategy to Rivera, “Number one people go to number one schools.”
“If destruction be our lot, we must ourselves be its author and finisher.” —Abraham Lincoln

Given this new American caste system, it’s no surprise that 63 percent of Americans think that “experts in this country don’t understand the lives of people like me,” or that 69 percent feel the “political and economic elite don’t care about hardworking people.”4 And, I suggest, they’re not wrong. A culture that sanctifies college as the gateway to full citizenship, over time, corrodes the foundations of democratic life. It devalues work that doesn’t come with a degree, licenses contempt for those not formally educated, and locks the working class out of positions of power. The result isn’t just underrepresentation; it’s resentment. As the journalist David Goodhart writes, “We now have a single route into a single dominant cognitive class”; where “an enormous social vacuum cleaner has sucked up status from manual occupations, even skilled ones,” and appropriated it to white-collar jobs, even low-level ones, in “prosperous metropolitan centers and university towns”; and where broad civic contribution has been replaced with narrow intellectual consensus.5 The result is a backlash not against education, but against the assumption that only one kind of education counts.
“At a time when racism and sexism are out of favor,” writes Harvard philosopher Michael Sandel, “credentialism is the last acceptable prejudice.”6 In a cross-national study conducted in the United States, Britain, the Netherlands, and Belgium, a team of social psychologists led by Toon Kuppens found that the college-educated class had a greater bias against less educated people than against other disfavored groups.7 In a list that included Muslims, poor people, obese people, disabled people, and the working class, “stupid people” were the most disliked. Moreover, the researchers found that elites are unembarrassed by the prejudice; that unlike homophobia or classism, it isn’t hidden, hedged, or softened—it’s worn openly, with an air of self-congratulation. As the Swedish political scientist Bo Rothstein observes, “The more than 150-year-old alliance between the industrial working class and what one might call the intellectual-cultural Left is over.”8
Today we are living through a strange time in American life in which the numbers have declared victory. By most standard economic measures—employment, wages, even household net worth—the working class is better off than it was a generation ago.9, 10, 11 The average elevator mechanic gets paid over $100,000 per year12; master plumbers can make more than double that.13 Even in Mississippi, our country’s poorest state, workers see higher average wages than in Germany, Britain, or Canada.14
It is, for working-class Americans today, the best of times, objectively—and the worst of times, subjectively. This is not because the spreadsheets are wrong, but because they fail to count the things that history records in tone, not totals: mood, myth, and cultural resolve.
The Service Economy

According to the most recent data available from the United States Bureau of Labor Statistics, nearly four out of five Americans work in the service sector.15 For most Americans in most states, that means retail, fast-food, or some other smile-for-hire job located at the end of a check-out line.16 It’s a kind of work where labor isn’t just accomplished, it’s seen—performed under the soft surveillance of the American customer. So, beneath inflation charts and unemployment rates, if you want to understand the feelings side of the postindustrial economy—you might start with tipping.
It is, today, perhaps our most American habit—tipping for service; whether it be good, bad, or not provided. In restaurants, hair salons, and hotel lobbies, Americans tip over a hundred billion dollars a year—indeed, more than any other country on earth, and more than all of them combined.17 We tip cab drivers and pool cleaners and dog groomers and coat checkers. We tip the doorman on the way in, the bellhop on the way up, and the concierge on the way out. Americans tip so much that, as one European put it—the whole “approach [has become] completely deranged and out of control.”18
However, it wasn’t always this way. In fact, for much of the early 20th century, it was Americans who mocked Europeans for tipping—seeing it as smug, corrupt, and born of feudal etiquette.19 States such as Iowa, South Carolina, and Tennessee—among others—outlawed the practice entirely20; and wherever it remained legal, businesses proudly posted signs that read “No Tipping Allowed.”21 Some hotels even installed “servidors”—a two-way drawer that opened from both hallway and room—so staff could deliver laundry without being seen, and without being tipped.22 As the author William R. Scott, in a book-length critique, put it in 1916:
In an aristocracy a waiter may accept a tip and be servile without violating the ideals of the system. In the American democracy to be servile is incompatible with citizenship … Every tip given in the United States is a blow at our experiment in democracy … Tipping is the price of pride. It is what one American is willing to pay to induce another American to acknowledge inferiority.

Somewhere along the way, however—somewhere between the Marshall Plan and the first McDonald’s Happy Meal—the parts reversed, and we became the punchline. Now it was the Americans who tipped like royals—and the Europeans who saw it as such.
It was during this time that the gesture was institutionalized—not out of custom or conscience, but because the Pullman Company, the National Restaurant Association, and eventually big tech sold it as part of the deal.23 They lobbied Congress, added tip lines to receipts, and made feudalism feel American—if you’re the one tipping.24 Because on the other end—where the customer is always right—yes, the tip is now expected and yes, it is now appreciated; but gratuity has never been the same thing as respect, especially not when, for most working-class Americans, IHOP has become the least humiliating option.
The Status Economy

We are signaling-obsessed, hierarchy-calibrated social apes. All of us, according to author Will Storr in The Status Game, walk around like buzzed-up antennas—attuned to the faintest frequency of admiration or disdain, gossip or snicker.25 For most of human history, it wasn’t guns, germs, or steel that mattered most; it was access to the cooperative networks and high-yield alliances of a species where insiders eat first and the gates are closely guarded. And so what governs our decisions—above all else, even when no one’s watching—is the paranoia of social scrutiny. In other words, it’s a cost-benefit analysis where the material outcome barely matters and utility is downstream of reputational impact.
Absent this understanding of human behavior, very little of it makes sense—a core theme in the work of the early 20th century economist Thorstein Veblen, whose concept of “conspicuous consumption” describes how people often consume products they don’t need—or even want—in order to flaunt status and social class.26 Luxury watches that tell time worse and minimalist chairs you can’t sit on are purchases where the high price is the point.
Of course, it is no major insight to say that people buy things to show off. The anthropological record is rich with lavish feasts and displays of abundance. The famous “potlatch ceremonies” of Pacific Northwest Indian tribes, for example, involved burning immense stores of wealth—copper shields, hand-carved canoes that took years to build, blankets, oil, and food—generations of accumulated capital, in a single afternoon; just to signal status.27
But what about meditating, carrying around a well-worn copy of The New Yorker in your back pocket, or believing in climate change? Veblen’s brilliance was seeing that even our quietest preferences are currency in a market economy of social prestige. As British philosopher Dan Williams puts it:
Much cognition is competitive and conspicuous. People strive to show off their intelligence, knowledge, and wisdom. They compete to win attention and recognition for making novel discoveries or producing rationalizations of what others want to believe. They often reason not to figure out the truth but to persuade and manage their reputation. They often form beliefs not to acquire knowledge but to signal their impressive qualities and loyalties. When people are angry, it’s rarely about money. It’s about being looked down on.

It’s the kind of signaling that thrives in what sociologists call “post-material economies” such as contemporary America.28 Because in a society maxed out on comfort—where even the ultrawealthy can’t buy a better Netflix or a softer couch—the only lines left to draw are ideological; and social distinction becomes the new class war. The rub, however, is that unlike the peacock’s tail—a hard-to-fake signal, metabolically costly, and policed by survival—immaterial prestige hierarchies are cultural inventions; often arbitrary, often performative, and almost always enforced from the top down. In other words, social prestige isn’t earned—it’s distributed by those who already have it. As social scientists Johnston and Baumann described in a 2007 paper:
The dominant classes affirm their high social status through consumption of cultural forms consecrated by institutions with cultural authority. Through family socialization and formal education, class-bound tastes for legitimate culture develop alongside aversions for unrefined, illegitimate, or popular culture.29

The elite don’t just consume goods. They consecrate tastes, turning culture into a class barrier such that status is socially assigned rather than materially demonstrated. French sociologist Pierre Bourdieu called it symbolic capital—where opinions double as vocabulary tests and entry fees for membership into the aristocracy.30 As Princeton’s Shamus Khan explains, “Culture is a resource used by elites to recognize one another and distribute opportunities on the basis of the display of appropriate attributes.”31
Observing today’s ruling class, social psychologist Rob Henderson has coined the term “luxury beliefs,” arguing that the experts, the celebrities, and the institutions are all fluent in the same woke-speak, and by their material abundance can afford to focus almost exclusively on social justice issues that, ensconced in their gated communities, have no effect on their own luxurious lives (nor those of the people they profess to be helping).32
The words turn and turn again—testing for status, enforcing the pecking order.33 And now, just as working-class Americans born in the industrial economy once rejected cash tips—those born in the culture-capital economy don’t want the tip either. They want respect. The redneck reluctance to simply “trust the experts” or pronounce it “people of color” instead of “colored people” isn’t about bigotry or Bible verses or disinformation—it’s about refusing the role of grateful recipient in someone else’s moral theater. It’s not anti-intellectualism or anti-love and kindness. It’s anti-elitism.
How is it that a born-rich multibillionaire has become the standard-bearer for the working class? It’s because his favorite food is McDonald’s; and to Nancy Pelosi, George Clooney, and my high school guidance counselor—Trump is trash. They see him the same way they see trailer park America—as tacky, ignorant, and disposable; always on the lowborn side of the tip. It’s a feeling well-known in union organizing circles.34 That when people are angry, it’s rarely about money. It’s about being looked down on.
A New Nationalism

Culture can often be hard to think about because it doesn’t exist in the world of objects—it exists in the world as a perceptual experience. It has no mass, no edge, no location. It’s not made of things; it’s made of meanings—real, but not tangible.
The cultural backlash hypothesis, the status threat hypothesis, the social isolation hypothesis, the political alienation hypothesis, the nostalgic deprivation hypothesis—a growing body of scholarship has emerged to name and quantify the immaterial contours of twenty-first century populist discontent; all circling the drain of an old, half-remembered truth.35, 36, 37, 38, 39
For most of history, kings, philosophers, and statesmen took seriously the idea that civilizations depend on symbolic cohesion—on rituals, traditions, and agreed-upon fictions capable of domesticating our most socially inconvenient biological biases. They understood, whether by insight or instinct, that there’s something important about ceremony and uniform and national character. That propaganda isn’t all bad. That done right, good slogans make good citizens. And good citizens make great nations. As Gidron and Hall put it in a recent paper:
[I]ssues of social integration [must be taken] more seriously in studies of comparative political behavior. Such issues figured prominently in the work of an earlier era … but they fell out of fashion as decades of prosperity seemed to cement social integration.40

In the old economy it was simple. You had the rich, who lunched at steakhouses and voted Republican; the working class, who labored in factories and voted Democrat; and in between, the mass suburban middle class. When conflict came, it was clear—members of the working class joining forces with progressive intellectuals to oppose the moneyed elite. Yet every once in a while a new, revolutionary class of citizens comes along and scrambles the whole social order. In the late 20th century it was the scholastic king—and the new culture-laureate class. He is not merely an academic; he is society’s central planner, a warden of elite passage, and the face of the new American aristocracy. As The New York Times columnist David Brooks put it:
If our old class structure was like a layer cake—rich, middle, and poor—the creative class is like a bowling ball that was dropped from a great height onto that cake. Chunks splattered everywhere.41

Outsourcing made economic sense, globalization was in large part inevitable, and cheap goods are always good politics—sure, fine. But for over fifty years now, neither political party has been able to solve the social problem of a postindustrial economy. And no American president has been able to tell a story good enough to replace the one previous generations called true. As sociologist Arlie Hochschild explained in a recent interview with The New York Times:
We keep looking for real policies. That’s not the thing. Trump offers a veneer of policies and a story, and we’ve got to tune in to the effect of that story on people who feel like the world’s melting and sinking … Because whatever the policies, these voters are following the story and the emotional payoff of that anti-shaming ritual. So we have to stop the story, reverse the story: Nobody stole your pride, we’re restoring it together.42

In the same way philanthropy never solves economic inequality, bigger and better information tips will never win the culture war—because it’s not about being rich or poor, stupid or smart; it’s about better than or worse than. And the only thing that can make a rich person feel worse than a poor person—or a smart person worse than a stupid one—is a national story written by poor people and stupid people too. It’s the sort of new nationalism that, in the past, has required several interconnected efforts.
The Bottom Line

Robert F. Kennedy, in March of 1968, in a speech at the University of Kansas, noted: “The gross national product can tell us everything about America except why we are proud that we are Americans.”43
Rubber in Akron. Meat in Chicago. Coal in Scranton. Steel in Gary. It used to be you knew a city by what it made—how it sounded, how it smelled. In 1950 Detroit was the richest city in the world—that’s right, the entire world.44 On Zug Island, they used to make the whole car, start to finish—iron ore mined and smelted on one end, parts shaped and assembled along the way, and a new Ford rolled off the line at the other—no imports, no one else. It was vertical integration—of work, of community, of pride.
But by the 1970s a new day had dawned, the old days were gone, and the unraveling had begun. Over half the manufacturing jobs moved elsewhere, a quarter of the population went too; and with whole neighborhoods left to rot, Detroit, once called “the Paris of the Midwest,” became one of the deadliest cities in the country.45, 46 From 1965 to 1974, homicides quintupled47; the central business district earned the name “zone of decay”; and businesses began installing bulletproof glass—floor to ceiling—to protect storefront clerks.
Just like that—two short decades transformed America’s motor city into America’s murder city. And burnt, bled, and bankrupt, the once shining example rolled out perhaps the saddest, most pitiful ad campaign in American history: “Say Nice Things About Detroit.”48
The bottom line is this. Every new economy produces different winners and losers—it’s just the way it is. What happened in Detroit was, in many ways, what was expected. But when the losses came—when the bottom fell out for the millions of working-class Americans still there, still trying—it was treated not as a national obligation but as an unfortunate footnote to progress. Detroiters were told to retrain, relocate, find a way to adjust—and when they failed, just like the people still living in Akron, Scranton, and Gary, they were humiliated, cast as mascots of ignorance and failure. The problem is that the ignorant and the failed far outnumber those who aren’t. And so, as Franklin Roosevelt said, it’s not “whether we add more to the abundance of those who have much” that matters—“it is whether we provide enough for those who have too little.”
Because when the empire falls—when the American experiment joins the long ledger of civilizations past—it won’t be at the hands of China or Russia or Al Qaeda or anyone else. We are the richest nation in the history of the world; no other society has ever wielded as much global influence; not even a coalition of all the world’s armies could best ours. “If destruction be our lot,” wrote a 28-year-old Abraham Lincoln, “we must ourselves be its author and finisher.”49 As “a nation of freemen, we must live through all time, or die by suicide.”
And if it comes to that, if we choose death, it won’t be about free trade or wages or unemployment rates any more than it was about taxes in 1776. Once again, it will be about respect.