Robert Trivers (1943–2026) Reflects on His Life and Work

Skeptic.com feed - Mon, 03/16/2026 - 12:01pm

This article, presented here in abridged form, was originally published in Skeptic magazine Vol. 20 No. 4

For a scientist there is the act of studying life and the process of living it, and I have never wanted the one to overwhelm the other. Yet that is exactly what a life devoted to science will tempt you into—a life of studying and, otherwise, not much living. Yes, you may have a family and a few good friends, but most scientists embrace a sedentary life, often solitary and intensely internal. You concentrate on experiments and theory and perpetual reading. Your small area of study is the focus of your life and it is a focus you share with only a few others.

This kind of life never appealed to me. I was an out-breeder by nature, raised in a diplomat’s home. Foreign countries and languages were part of my upbringing. Since my father served in Europe, I walked through more cathedrals, museums, and art galleries than was healthy for any child. I had no interest whatsoever in European culture, nor in the academic disciplines based on them, but I did know five foreign languages and enjoyed meeting people in their own land, speaking their language, learning about their area of expertise.

When I finally found my intellectual home in evolutionary biology, it offered me exactly the right kind of foreign travel—in the rural, the bush, the exotic, and the wild. Evolutionary biology would take me around the world. And it would show me how to carve knowledge from everything I experienced in these travels with a single, very general logic—what would natural selection favor? How would one best survive and reproduce in these conditions? In short, I signed on to a system of thought that allowed me to study life and live it, sometimes very intensively.

Early Scientific Stirrings

When I was 12 years old I knew I wanted to be a scientist because it was obvious upon inspection (this was 1955) that none of the other intellectual areas—history, religion, English literature, or the social so-called sciences—provided much hope of actual, sustained intellectual advance. Initially I was attracted to astronomy, with the vastness and beauty of space and the billions of years it had been forming. I got a telescope, read Hoyle’s standard Astronomy text, and came up with the bi-stellar hypothesis for the origin of the solar system.

I liked that astronomy was a science. These people were not fooling around. They measured things and did so carefully. They tested assertions against data, were capable of changing either, and continually attempted to improve the precision of their measurements. When Einstein’s prediction that gravity bends light was tested against the apparent shift in position of a background star during an eclipse, we had dramatic evidence, measured with great precision, of exactly how large that bending was. But astronomy was not a discipline you could pursue in the 8th grade, so I soon turned to mathematics.

My father happened to have a large number of math books, and out of sheer boredom one day I picked out one entitled Differential Calculus. I was 13 and it took me two months to master the book. It then took me two more to master the book next to it, Integral Calculus. It was a thrill to see that the algebra I knew could generate fields with real predictive and analytic power. And that was only part of the beauty of mathematics, and of science more generally: you could learn the whole thing from the bottom up, if you were willing to put in the necessary concentration and time. The methodology was strictly anti-self-deception. Everything was explicit. Experiments, for example, were described so that others could attempt to replicate them exactly, to see if duplicate results were achieved. Mathematical proofs were entirely explicit, every variable and every transformation exactly described.

Harvard and Psychosis

I mastered other corners of mathematics, mainly number theory, the infinite, irrational numbers, limit theory, and so on. I entered Harvard as a sophomore in pure mathematics, but halfway through the year I saw the end of the whole enterprise and it was nowhere I wanted to be: at best, producing work of solid utility but far delayed, perhaps to the year 2250, and of no immediate use. Physics was for me no better because, for one thing, I had no physical intuition at all. When they raised an object off the ground and told us they had thereby given it “negative energy” I headed for the door. And of chemistry and biology I knew nothing, having never taken a course in either at any level.

So I decided to give up truth for justice and become a lawyer. I would fight the good fights—early 1960s civil rights, poverty law, criminal law where you hoped the criminal was not too guilty, and so on. I asked people what you studied if you wished to pursue law and they said there was no such thing as “pre-law” at Harvard, so I should study the history of the United States. I declared that as my major and spent the next years learning about The Federalist Papers, the Constitution, Supreme Court decisions, and the like.

I developed an almost immediate distaste for the subject because it was obvious from the outset that U.S. history, as it was studied then, was not so much an intellectual discipline as an exercise in self-deception. The major question U.S. historians were tackling at that time was: why are we the greatest society ever created and the greatest people ever to stride the face of the earth? The major competing theories were answers to this question. One cited the benefits of having a society designed by upper-class Englishmen; another, the benefits of an ever-receding frontier—that is, the increasing extermination of Amerindians from East Coast to West. The larger field of history was somewhat more interesting but still consisted of stories from the past, inevitably biased and lacking critical information—and I saw little hope of correcting either defect.

In April of 1964—my junior year at Harvard—I suffered a mental breakdown and was hospitalized for two and a half months. Prior to the breakdown I went through a five-week manic phase, with increasing mental excitation, decreasing sleep, and near-certainty that I was the first person to understand what Ludwig Wittgenstein was actually saying in the Tractatus, even though I was enrolled in my first-ever philosophy course. (Luckily, I was not taking it for credit.) I remember very little else from the manic phase except that I tried self-hypnosis to put myself to sleep. It did not work, and lack of sleep is what brings on a full breakdown. Finally, one night my friends, who had become increasingly concerned, deposited me at the Harvard Infirmary, where I could not answer the elementary question, “Who are you?” A pregnant woman? A newborn baby? But not a thoroughly confused Harvard junior.

Then came eleven weeks of self-admitted incarceration at three hospitals for treatment of my psychosis. Incarceration—even when voluntary and in a hospital—is never fun. You are locked in, no longer permitted to move about as you like. But by that time biochemists had come up with compounds that would knock the psychosis right out of you, and then hold it down afterwards to give you time to sleep and recover. After my final release in mid-June I spent the summer reading novels, one a day, and I have always blessed novelists since that summer. As a scientist, I scarcely even read the science I am supposed to, never mind a novel, but that summer novels allowed me to leave my own life and dwell in the lives of others, while my own self relaxed and repaired.

Harvard readmitted me in the fall. I spent most of that semester playing gin rummy all night long—in other words, still resting my brain. But I also decided to take a course in psychology, since my mental breakdown suggested it might be a useful subject to know. It soon became apparent that psychology was not yet a science, but rather a set of competing guesses about what was important in human development—stimulus-response learning, the Freudian system, or social psychology. None were integrated with each other and none could form the basis for an actual science of psychology, so I paid no attention to this subject.

The two law schools I had applied to—alleged to be among the most progressive—turned me down so I graduated with a degree in a field I had little respect for and no intention of pursuing. I returned home to live with my parents, unemployed, and with only vague hope of finding a job.

The Man Who Taught Me How to Think

I did get a job soon enough upon graduating, and in Cambridge, MA, at that. The company itself was a Harvard off-shoot—Educational Services Incorporated—set up to attract funding from the National Science Foundation for the purpose of developing new courses for school children. Just as there would be the “new math,” so there would be the “new social sciences.” We would teach five million 5th graders about hunter-gatherers, baboon behavior, the social life of herring gulls, and evolutionary logic, or so we thought.

For the first six weeks my employers had me read in various subjects and attend meetings. One day they called me in and asked me if I knew anything about humans, by which they meant anthropology, sociology, or psychology. I assured them I did not. “Do you know anything about animals?” No indeed. “In that case, you are going to work on animals.” This was because they cared less about the animal material. On such minor, chance events, one’s entire life may turn. I might have discovered biology later in life, but I doubt it and I doubt I would have ever again been in as good a position to exploit its many benefits.

Trivers (right) with evolutionary biologist William “Bill” Hamilton.

They assigned me a biologist to guide my reading and sign off on my work. His name was William Drury, the research director at the Massachusetts Audubon Society. For two years, my employer paid him to be my private tutor in biology. It was perhaps the greatest stroke of luck in my life. Before Bill Drury, I knew no biology. After working with him for two years, I knew its very core. He introduced me to animal behavior and taught me many facts about the social and psychological lives of other creatures. More to the point, he taught me how to interact with them as equals, as fellow living organisms. But he could have taught me all of that and still I could have left his charge without becoming a biologist. The key to my future, which he alone could supply, was his insight that natural selection referred to individual reproductive success, that it applied to every living thing and trait, and that thinking along the lines of species advantage and group selection—then the popular vogue—had little or nothing going for it. From then on I was a theoretical biologist. I had wanted to be a scientist since age 12. Now at age 22, I had discovered my discipline—evolutionary biology.

The thrill I felt when I first learned the whole system of evolutionary logic at the individual level, applied to all of life, was similar to the feeling I’d had when I first fell in love with astronomy as a twelve-year-old. Astronomy gave you inorganic creation and evolution over a 15-billion-year period. Evolutionary logic gave you the comparable story over 4 billion years. Astronomy spoke of the vastness of time and space, while evolutionary biology did the same thing for the vast variety of living creatures. Living creatures have been forming over a 4-billion-year period, with natural selection knitting together adaptive traits all through that time, so living creatures are expected to be organized functionally in exquisite and ever-counterintuitive forms. As I had when I was first discovering astronomy, I felt a sense of religious awe upon encountering this way of viewing the world around me.

This is not to say it was all fun and games. Bill was a hard teacher. When you were wrong, he was sure to point it out—not cruelly, no overkill, just the simple truth. If you argued back, he was up to the challenge. That was how I learned what natural selection was and was not. Bill wasn’t interested in cradling your self-esteem. He was only interested in teaching you the truth. I liked that. I’ve always preferred knowledge over self-esteem. When I brought him population-advantage arguments for the existence of antlers in male caribou, he gently took me through the entire fallacy and then had me read two short pieces on opposite sides of the issue. Three days later I was a complete convert, willing to stop people on the subway and yell, “Do you know what is wrong with group selection thinking? Do you?”

One day I was watching a herring gull through binoculars side by side with Bill. In those days, a herring gull could not scratch itself without one of us asking why natural selection favored that behavior. In any case, I offered as an explanation for the ongoing gull behavior something that was nonfunctional and suggested that the animal was not capable of acting in its own self-interest. Bill replied, “Never assume the animal you are studying is as stupid as the one studying it.” I remember looking sideways at him and saying to myself “Yes sir! I like this person. I can learn from him.”

Bill taught me to think outside of the mainstream in many areas. You think monotheism is superior to polytheism? Bill would say, what do you know about polytheism, or for that matter monotheism? You assume monotheism is superior because it presumes to have a single order to the world, a single unifying logic and force, but what does this force represent? Bill taught me that polytheistic religions often had a better attitude toward nature than did the monotheistic ones. In Amerindian religions, there were spirits of the forest, of the canopy, of the deep woods, of the gurgling spring, and each captured aspects unique to these ecological zones. For someone like Bill, who had literally lived 15 to 20 years of his life in the woods, these distinctions were so much closer to his own view than that emerging from monotheism, which basically boiled down to a form of species-advantage reasoning.

On another occasion, Bill and I were discussing racial prejudice and the possible biological components thereof, and he said to me, “Bob, once you’ve learned to think of a herring gull as an equal, the rest is easy.” What a welcome approach to the problem, especially from within biology. We are all living organisms—make discriminatory comments about others at your own risk. In Bill’s view, it was always better to try to see the world from the view of the other creature.

The Greatest American Evolutionist I Ever Met

Ernst Mayr was the greatest U.S. evolutionist I ever met, possessing a very broad and deep knowledge of almost all of biology. He also had perhaps the strongest phenotype of any organism I have ever encountered. He lived to be 100 and published more books after age 90 than most scientists do in a lifetime, and not trivial ones either. He was strong in character, personality, and mode of expression.

I first met Ernst Mayr in the spring of 1966, in his office at Harvard’s Museum of Comparative Zoology. I was brought to him by Bill Drury, himself a former student of Mayr’s. The visit was meant to reinforce my new conviction to become a biologist and to offer me help along the way. Mayr was a short man, with a clear, piercing gaze and a warm countenance. After an initial discussion, Ernst told me that it was not at all impossible to become a biologist at my age and with my lack of background. “Where would you like to do your graduate work?” Ernst asked. I suggested that it would be nice to work with Konrad Lorenz. “No!” Ernst said. “He’s too Austrian for you, too authoritarian. Who else?” I suggested that it might be a good idea to work with Niko Tinbergen. “No,” Ernst said, less emphatically. “He is only repeating now in the ’60s what he already showed in the ’50s. Where else?” It was clearly time for some fresh input, so I asked him, “What would you suggest?” Ernst then flung his arms in a short arc and said in his German accent, “What about Haaarvard?” Dummkopf, I thought, striking the side of my head with my hand. Harvard indeed!

Robert Trivers on The Michael Shermer Show, discussing evolutionary theory and human nature.

The first class I ever audited in biology couldn’t have been better. It was a graduate course taught in 1966 by Ernst Mayr and George Gaylord Simpson, the famous vertebrate paleontologist, who was quite a spectacle himself. A short man, but much softer-looking than Mayr, he wore thick glasses and his eyes often seemed to shake, along with his hands. Yet when he stood up to speak, he spoke in clean, clear paragraphs, no editing required. At times one felt there should be someone at his side chiseling his words into stone, so well were they chosen.

I remember one discussion involving Mayr, Simpson, and sickle cell anemia. After various parts of the evolutionary story had been reviewed—the frequency of the sickling gene in natural populations being associated with the spread of malaria—they had occasion to refer to the molecular mechanism by which the sickling gene worked. I believe it was Simpson who referred to a paper that had just come out in a cellular/molecular journal showing that the change to a sickle-shaped blood cell literally crushed the malarial parasite within the cell. However that may be, there was a glorious feeling coming from that class that evolutionary biologists at their best were the true biologists, those who mastered biology at all its levels, right down to the molecular details when these became interesting.

What made the moment so special was the use of molecular biology, for molecular biologists treated evolutionary biology with open contempt. They thought that evolutionary biology had all the intellectual excitement of a cross between stamp collecting and the study of dead languages. At their worst, they were insufferably arrogant and ignorant. While they could cow most evolutionists, they could not do so with Ernst Mayr. His expertise was the entire subject—biology itself—and when needed he took it upon himself to master every section and subsection. It did not hurt that he was physically and verbally dominant as well. Best way to put it, nobody fucked with Ernst Mayr. That gave us evolutionary graduate students support and backing, the value of which we were only dimly aware.

Jane Goodall and the Meaning of Death

As part of a seven-week expedition to East Africa in the summer of 1972, we took a two-hour boat ride across Lake Tanganyika from Kigoma in order to reach the famous Gombe Stream Reserve. The Reserve was a series of base camp buildings on the shore of the lake, and student sleeping quarters dotting the hills, within which roamed chimpanzees, three groups of baboons, and some leopards.

Within minutes of our arrival I was standing next to Jane Goodall and her husband Hugo van Lawick, watching a chimpanzee and her son on the hillside among some trees. This wasn’t just any primate. Flo was the most famous living chimpanzee, having been studied by Jane for more than ten years. She was a matriarch whose clan had formed the backbone of Jane’s writings and films. Flo was far past her prime when I saw her and, in fact, was afflicted with continual diarrhea. As we watched, she took a fruit and tried to smash it against a tree but she missed and struck her own leg. “I have never seen her miss like that,” said Jane. “I don’t give her two weeks to live.” My young postgraduate heart leapt: I had just arrived for a two-week visit and according to Jane I would be witness to history!

Jane knew her chimpanzees. Several days later I was watching a “waterfall display,” in which chimpanzees, especially adult males, work themselves into a frenzy in the presence of a waterfall, swinging back and forth on vines, hooting, hair erected, and so on. One can almost see, but not quite define, a religious sentiment, an elemental force on which later might be built something as huge as the Catholic Church. While our chimpanzees were starting to work themselves up, we were interrupted by the arrival of the shocking news that Flo was dead. I was with two graduate students at the time, and we turned, as one, and padded back down the paths toward the hillside near the base camp. Turning off the main path we went through undergrowth and reached the bank of the small river that flowed down toward camp. Flo lay half in the water. Next to her knelt Jane. And capturing this moment for posterity was one of the largest cameras I had ever seen, on a tripod with Hugo behind the lens, just across the river. Flint, Flo’s son, meanwhile lay depressed in a tree 20 feet above his mother.

Thus began the human drama of Flo’s death. At the beginning, Jane appeared intent upon seeing a chimpanzee funeral. At the very least she hoped that one or more of Flo’s grown children might happen upon the body and give some interesting reaction. In fact, it never happened. Instead, the first night Flo remained where she’d died but Jane sat up the whole night nearby, with many of us for company, in order to deter scavengers such as bush pigs from carting off Flo’s body (one reason one would not expect to see many chimpanzee funerals). Jane was nostalgic, remembering the early days, nearly alone with the chimpanzees, enjoying the quiet beauty of the forest, coming to know Flo almost as well as her own mother.

In her response to the death of a member of a closely related species, Jane Goodall revealed the curious ambivalence we display toward the dead bodies of members of our own species. It is as if the body too sharply erodes the living creature for us to leave it alone. Yet from the standpoint of parasites alone, we surely should: any living creature carries a number of parasites and may have died from an ongoing parasite attack. The parasites can be expected to flee the dead body in search of living tissue—if any are there, they should swarm out of a corpse. This immediately suggests the value of burial. From the archaeological record we know that humans have practiced this custom for at least 75,000 years. But a sentimental component shows up from the beginning, as well, since even in ancient burials the deceased is interred along with various artifacts, such as utensils, weapons, and other items of value.

A lingering attachment to recently dead offspring is notoriously strong in various monkey mothers; in some species they carry around the body of an infant in a clinging posture for as long as two days after its death. A much stronger attachment occurs in our own species, as when the exact spot of burial is preserved in memory, often with a marker, so that the desecration of such places by others is taken as an attack on the living relatives. Consider the outrage that recent attacks on Jewish cemeteries have evoked. The attackers, who dug up corpses and assaulted some of them, were regarded as more depraved and anti-Semitic than those who do harm to living Jews, as indeed they may be, since if they are that eager to desecrate burial grounds, God knows what else they are eager to do.

Richard Dawkins and the Concorde Fallacy

In 1975 I was in Jamaica on sabbatical when I received a letter from one Richard Dawkins enclosing a paper written by himself and Tamsin Carlisle pointing out that I had committed the Concorde Fallacy in my paper on Parental Investment and Sexual Selection, as indeed I had. The Concorde Fallacy is the notion that because you have already wasted $10 billion on a bad idea—the exceedingly expensive supersonic Concorde—you owe it to that $10 billion to throw in another $4 billion in hopes of making it work. In poker, the rule is, “Don’t throw good money after bad.” Good money is money you still have; bad money is already in the pot and no longer yours. Just because you have $300 in a large poker pot (money gone) does not mean that you owe it to that money to lose another $200 with the odds stacked against you. Every decision should be rationally calibrated to future payoffs only, not past sunk costs.
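
The poker version of this logic can be made explicit. Here is a minimal sketch (mine, not the article’s; the function name, the dollar figures, and the 20 percent win estimate are invented for illustration). The point is structural: the $300 already sunk into the pot never appears as a variable, only the pot as it stands, the cost to call, and the odds.

```python
# The Concorde fallacy in poker terms: a rational call depends only on
# future costs and payoffs. Money already in the pot is sunk, identical
# across both choices (call or fold), and so cancels out of the decision.

def ev_of_call(pot, cost_to_call, p_win):
    """Expected change in your stack if you call: with probability p_win
    you win the pot (and your call back), otherwise you lose the call."""
    return p_win * pot - (1 - p_win) * cost_to_call

# $300 of yours is already in a $500 pot; calling costs $200 more and you
# estimate a 20 percent chance of winning.
ev = ev_of_call(pot=500, cost_to_call=200, p_win=0.20)
# 0.20 * 500 - 0.80 * 200 = -60.0: fold, no matter how much of the
# pot was once yours.
```

Folding has an expected value of zero, so any negative number here means fold; the sunk $300 could be $3 or $3,000 without changing the answer.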

I had argued in my paper that since females almost always begin with greater investment in offspring than do males, this committed them to further investment—they would be less likely to desert their offspring. Simple Concorde Fallacy; only future payoff is relevant. I consoled myself with the thought that there probably was a sex bias similar to the one I’d proposed, but only because past investment had constrained future opportunities. In any case, I wrote back that I agreed with them right down the line.

I soon received a second letter from Richard, saying that his actual purpose in writing me was, in part, to find out if I might be willing to write the Foreword for a new book he had written called The Selfish Gene. This was especially appropriate, he told me, because my work, more than anyone else’s, was featured in his book. What the hell, I thought, and he sent the manuscript along. There were indeed chapters based on individual papers of mine—“Battle of the Generations” (parent-offspring conflict), “Battle of the Sexes” (parental investment and sexual selection), “You Scratch My Back, I’ll Ride on Yours” (reciprocal altruism). I never deluded myself that my work was more fundamental than Bill Hamilton’s, nor did Richard, but we both knew that if you wanted to get some of the fun details filled in on a variety of subjects—not ants, fig wasps, or life under bark, but social topics relevant to ourselves—my work was a better bet than Bill’s.

Better than finding my own work given such a high billing, though, was discovering that Richard had a most pleasing combination of absolute mastery of the material with a wonderful way of expressing it—funny, precise, vivid. Let me give one example. He presented Bill Hamilton’s idea that a gene—or a tightly linked cluster of genes—could evolve if it could spot itself in another individual and then transfer a benefit based on the phenotypic similarity. But Richard added a vivid image, calling this “the green beard effect.” The name soon caught on in the scientific literature, so that everyone today refers to “green beard” genes, thereby summing up a complicated idea in a way that actually makes it easier to think through. The phenotypic trait is obvious: you have a green beard. And the genetic bias is obvious: you favor green-bearded individuals. Genes spread apace. Except what about a mutant that leaves your green beard intact but takes away your bias toward green-bearded individuals? Not at all obvious, yet Richard’s vivid way of writing facilitated thinking through the complexities.
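
The fragility of the green-beard idea can be seen in a toy model. The following is my own illustration, not anything from The Selfish Gene: the payoff values, the starting frequencies, and the “false beard” label for the decoupling mutant are all assumptions chosen for clarity.

```python
# Toy replicator model of the green-beard logic: an honest green-beard
# type directs aid (benefit b, cost c) at bearded individuals, while a
# "false beard" mutant keeps the beard but drops the bias, collecting
# aid without ever paying the cost of giving it.

def step(freqs, b=0.5, c=0.1):
    gb, fb, plain = freqs              # honest helper, false beard, no beard
    bearded = gb + fb
    # Aid from helpers is spread over everyone wearing a beard, honest or not.
    aid = b * gb / bearded if bearded else 0.0
    w_gb, w_fb, w_plain = 1 - c + aid, 1 + aid, 1.0
    total = gb * w_gb + fb * w_fb + plain * w_plain
    return (gb * w_gb / total, fb * w_fb / total, plain * w_plain / total)

freqs = (0.10, 0.01, 0.89)   # a rare cheat introduced into a helper population
for _ in range(500):
    freqs = step(freqs)
# The false beard ends up commoner than the honest green beard: same beard,
# same aid received, but never the cost c of giving.
```

Because the cheat’s fitness exceeds the helper’s by exactly c in every generation, the honest allele is driven out and the aid it sustained collapses with it, which is the complexity Richard’s image makes easy to think through.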

So I said to myself, yes I will write you your Foreword, though I don’t know you from Adam. I wrote a good five-paragraph foreword, but it consumed about a month of my life, partly because I actually like to think before I write, which does slow down writing.

In any case, once I was finished, I looked at the essay and thought, why not slip in the concept of self-deception, whose function by that time I had linked to deceiving others? This I regarded as the solution to a major puzzle that had bedeviled human minds for millennia. And Dawkins, bless his soul, could hardly have set me up more nicely: “…if [as Dawkins argues] deceit is fundamental to animal communication, then there must be strong selection to spot deception and this ought, in turn, to select for a degree of self-deception, rendering some facts and motives unconscious so as not to betray—by the subtle signs of self-knowledge—the deception being practiced. Thus, the conventional view that natural selection favors nervous systems which produce ever more accurate images of the world must be a very naïve view of mental evolution.” Perfect set-up and not even in a paper of my own but in someone else’s book and an incredible bestseller at that.

Robert Trivers’ lecture for the Skeptics Society: Why does deception play such a prominent role in our everyday lives?

When I learned that Dawkins had taken on religion in the name of science and atheism, I felt he had finally found his true intellectual niche. No way could religion keep up with Richard. On June 13, 2011, I was about to begin delivering the Tinbergen lecture at Oxford when, as usual, I misplaced something on the lectern. “Jesus Christ,” I muttered, and the microphone amplified it to the 400 people in attendance. I looked up and said, “I hope Richard Dawkins isn’t here.” Richard raised his hand. Before launching into my lecture I added, “I regard Richard Dawkins as a minor prophet sent from God to torture the credulous and the weak-minded, for which he has a unique talent,” as indeed he does. One nice concept in The God Delusion is that since most people dismiss all religions except one, why not go the final step?

Hanging with Huey and the Panthers

One of the few benefits of moving from Harvard to the University of California at Santa Cruz in 1978 was the chance to meet the legendary founder of the Black Panther Party, Huey Newton. Indeed he was waved in front of me as a reason to come to Santa Cruz. He was a graduate student in “History of Consciousness”—roughly equivalent to Western Civilization—who had the wit to see that social consciousness started long before the Greeks and had appeared, in some form, by the time of the insects. He had gotten his undergraduate degree from Santa Cruz in 1974 and befriended Dr. Burney Le Boeuf, the celebrated student of elephant seals. Burney had been preaching the beauties of evolutionary biology—my own work in particular—to Huey, and so I had the good fortune of meeting him after he had already been well-primed.

Trivers with founder of the Black Panther Party Huey Newton.

The Panthers began with patrolling the police. They would follow police at night or patrol until they came across police-citizen interactions. Huey might then emerge from a car with a law book in his hand and read out in a loud voice that, by law, “excessive” force cannot be used during an arrest. The police would invariably answer, “Our force isn’t excessive.” Huey would read them the legal evidence on that point. They would say, “Get the fuck out of here.” He would answer that a citizen is allowed to remain within a reasonable distance of an arrest. They would say, “Your distance is unreasonable.” He would flip to the relevant page and read the appellate ruling that declared a reasonable distance was ten yards or whatever, and it would go on like this.

Huey was armed. He knew he had the right to be armed and he knew he had the courage. So when he emerged from the car, there was usually a gun beneath the law book so that, should the interaction turn hostile or threatening, he could be ready with a response. All this was legal back then, riding shotgun, in effect, on the police themselves. During the war the Panthers waged between 1967 and 1973, roughly 15 officers died for every 35 Panthers. I believe the Panthers had the largest single effect on integrating police forces in this country. The reasoning being: hey, if Black people are firing at our officers, let’s have some Black officers firing back.

In the fall of 1978 I was informed that Huey, who was then in prison, charged with beating up a tailor in his home for calling him “boy,” wanted to take a reading course from me. I said that was fine but I wanted a paragraph from him on what he wanted to read. Before he could reply, he was released from lock-up and traveled to Santa Cruz to meet me. We met.

We decided to do a reading course on deceit and self-deception, a subject I was eager to develop and on which Huey turned out to be a master. He was a master at propagating deception, at seeing through deception in others, and at beating your self-deception out of you. He fell down, as do we all, when it came to his own self-deception. Huey Newton was certainly one of the five or six brightest human beings I have ever met. Each of them has had a different sort of intelligence, and Huey’s forte was aggressive logic. He moved his logical sentences as if they were chess pieces meant to trap you and render you impotent. “Oh, so if that is the case, then this must be true.” If you moved away from where he was pushing you, he would say, “Well, if that is true, then surely so-and-so must be true.” He was maneuvering you via logic into an indefensible position. The argument often had a double-or-nothing quality about it: in effect, he was doubling the stakes with each logical alternative, giving you the unpleasant sensation that you were losing more heavily as the argument wore on, making more and more costly mistakes.

According to Huey, the Black Panther Party started as a simple, old-fashioned robbery, which he was planning with a number of confederates. The problem was that he was reading Frantz Fanon and becoming politically conscious. So he decided to use the robbery to start a new political party, as radical as its start-up funds. The hard part was selling it to his fellow robbers. They didn’t like the idea. “They almost killed me,” Huey told me, but finally he got them to sign off on it, and some of them even became Party members later.

Once, when he and I were driving through West Oakland, near Berkeley, Huey pointed out the site of the Party’s first political act. There was a particularly dangerous street corner at which local African-American children were run over nearly every year while attempting to cross on their way to school. Numerous requests had been submitted for a stop sign and a proper street crossing to protect the children. Nothing had been done. One day the Panthers appeared at the street crossing at the appropriate time, dressed in their leather jackets and berets and each carrying a rifle or shotgun. They proceeded to direct traffic, standing in the highway to permit safe passage for the children. Six weeks later the city put up, not a stop sign, but a stoplight at that very corner. Nothing like armed Black men to stir civic activity.

When the California legislature was meeting to decide whether to pass the “Huey Newton law,” as it was popularly called, which said that you could no longer “ride shotgun” but instead had to keep your loaded gun in your locked trunk, Huey and 35 other Panthers showed up in Sacramento on the day of the vote, most of them carrying rifles. They tried to enter the legislature with their guns, which was allowed by law at the time. Police stopped them from entering, ordered them out of the building, and then shortly thereafter arrested them. Huey told me that many Black people argued against the public display: “Now they’re sure to pass the bill, why don’t you ease up the pressure?” Huey’s response was simple: they were going to pass the bill anyway, and he wanted to show Black people that they had the right to show up in front of the legislature with guns and confront a mass of armed police. That was one of the main points of the Party—to encourage African Americans to use their right to bear arms in self-defense. In 1948, in response to a lynching, President Harry Truman made the first and key decision in favor of equal gun rights for the Black man in the U.S. when he integrated the armed services. Before then, most Black soldiers sliced the carrots and did the dishes.

Many African Americans of more recent times have a strong ambivalence or hostility toward Huey and the Panthers because they believe he helped spawn the culture of Black gun violence among the urban young. There is probably some truth to the charge, but I think harsh drug penalties take a larger part of the blame. With the stakes so high for being caught selling illicit drugs, the chances of internecine war and murder inevitably rise as well.

A final point on Huey’s legacy: though people tend to assume that Huey was anti-police in principle, in fact he saw obvious value in community surveillance and organized protection. That’s why he regarded himself and Party members as on a par with the official police. He used to joke, “I’ve got nothing against the police as long as we are firing in the same direction.”

Looking Back and Looking Forward

I am 72 years old now, having devoted 50 years to the study of evolutionary biology, a combination of social theory based on natural selection wedded to genetics—the very backbone of all of life. I have had the good fortune to help lay the foundation for a variety of flourishing subdisciplines, from reciprocal altruism and parent-offspring conflict, to within-individual genetic conflict, and self-deception. Through this work, I have met many extraordinary individuals, several of whom were my teachers. I have also gotten to know up close and personal many non-human animals. I have “enjoyed” an unusual number of near-death experiences—due in part to my tendency toward intense interpersonal disagreements late at night.

Yet when I look back on this show, there is one thing I regret, and it is the absence of self-reflection. Yes, I would live life and study it, but would I study my own life? Time and time again, the answer comes back “no.” Yet exactly whose life is more important to you: others’ or your own? “You self-deceptionist,” my first wife would sneer. “You talk a lot about parent-offspring conflict, yet you neglect your own son.” Guilty as charged. Too much ambition and too little thought about my family: wife, children, and myself.

Robert Trivers’ lecture for the Skeptics Society, based on a ground-breaking study that examines honor killings, which seem to make no evolutionary sense. Why would a father kill his own daughter and thereby eliminate half of his own genes from propagating into the next generation?

Major decisions, such as where to go when I decided to leave Harvard in 1978, were made without any serious thought at all—how about a name professorship at the University of New Mexico or a major offer from the University of Rochester with its powerful biology department? These were brushed aside with scarcely a glance. Instead I simply trotted off to the University of California at Santa Cruz because my wife and I had enjoyed a pleasant weekend with Burney Le Boeuf, his wife, and his elephant seals. I even remember mumbling to myself at one point, “Oh, we’ll let autopilot handle this or that problem.” Autopilot? As a means of choosing which of three universities and cities you should live in for the next 15 years? By definition, autopilot is the opposite of careful conscious introspection and evaluation—it is what you do when the path forward is obvious and no rational reflection is needed.

What is the way forward? There is one obstacle and there is one hope. The obstacle is self-deception, which is a powerful force with immense repetitive power. The hope is that after becoming more deeply conscious of one’s own self-deceptions and of the possible means of ameliorating them, one can make some real progress against this strong negative force.

A more costly form of self-deception involves my spiteful side. If you say something insulting, I want to strike back. If I fail to strike back because I am slow or inhibited, trust me—whenever the event recurs in my mind, I will torture myself, sometimes for years, with the rant I should have delivered, and may do so now at full volume alone in my apartment far away. And yet very often a spiteful response is not the best one. It can easily generate spite in return, and down the staircase the two of you descend. Inside me there are two voices. One cries out, “Bob, you have made this mistake 630 times in the past and regretted every single one. Why not forgo it this time?” Then comes a stronger voice, “No, Bob, this time is different,” and there goes 631.

It was an eye-opener to me to discover recently the value of friends in breaking this cycle. I was telling a good friend about a nasty message I had gotten and my intended nasty response. He wanted to know why. Because, I said, she said this, that, and the third thing, and it hurt. That was the key. He was unmoved by this argument. He’d suffered none of my internal hurt and was indifferent to it. Only three things were relevant to him: the message, my possible response, and its likely consequences. The likeliest consequence would be that she would write back an even nastier note and I would be further estranged for no good reason. Why would I want to do that? Why indeed. The Concorde Fallacy all over again—you owe it to your past spite, despite it being a sunk cost, to double down. Better, of course, to do nothing.

Life on Exomoons

neurologicablog Feed - Mon, 03/16/2026 - 5:42am

How common is life in the universe? This is one of the greatest scientific questions, with incredible implications, but we lack sufficient information to answer it. The main problem is the “N of 1” problem – we only have one example of life in all the universe. So we are left to speculate, which is still very useful when based on solid scientific evidence and reasoning. It helps guide our search for signs of life that arose independently from life on Earth.

One important question, therefore, is where is it possible for life to exist? We know life can arise on a rocky planet with a nitrogen and CO2 atmosphere in a temperature range that allows liquid water on the surface. We also know that such life may create and sustain large amounts of oxygen in the atmosphere. It therefore makes sense to focus our search on similar planets. But life does not have to be restricted to Earth-like life. Scientists, therefore, try to imagine what other conditions might also support some kind of life. It is possible, for example, that life arose in the vast oceans under the ice of moons like Europa or Enceladus. Such life would be very different than most life on Earth. It would be dependent on chemical processes for energy (chemosynthetic), rather than sunlight.

Knowing how many different kinds of places life could possibly exist affects our estimate of the number of locations in our galaxy that might harbor life. Current estimates of how many Earth-like exoplanets there are in the Milky Way galaxy range from 300 million to 40 billion, depending on various assumptions and how tightly you define “Earth-like”. There are 100-400 billion stars in the galaxy, but about a third of those stars are in multi-star systems, which means there are tens of billions to roughly 100 billion distinct stellar systems in the Milky Way. One estimate from observed multi-star systems is that about 89% of them could allow for a stable orbit of a rocky planet in the habitable zone.
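Taken together, those fractions imply a rough count of multi-star systems that could still host a habitable-zone rocky planet. Here is a back-of-envelope sketch, under the simplifying assumption (not made in the text) that every multi-star system is a binary:

```python
def stable_hz_multi_systems(n_stars, multi_frac=1 / 3, stars_per_multi=2,
                            stable_frac=0.89):
    """Rough count of multi-star systems that could host a rocky planet
    in a stable habitable-zone orbit, using the fractions quoted above.
    stars_per_multi=2 treats every multi-star system as a binary,
    which is a simplifying assumption, not an observed figure."""
    multi_systems = n_stars * multi_frac / stars_per_multi
    return multi_systems * stable_frac

# The quoted 100-400 billion star range brackets the estimate:
low = stable_hz_multi_systems(100e9)   # ~1.5e10 systems
high = stable_hz_multi_systems(400e9)  # ~5.9e10 systems
```

With the quoted star counts this yields roughly 15–60 billion candidate multi-star systems; the point is only that such systems are numerous, not that these digits are precise.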

But perhaps we should not limit the calculations of how many worlds in the galaxy may support life to Earth-like planets. I am not just talking about life in oceans under icy moons. Astronomers have also been considering the possibility of life on moons that orbit free floating gas giant planets. A free floating planet (FFP), also called a nomadic planet or rogue planet, does not orbit a star at all. At some point, likely early in the life of its parent star, it was flung out of its system and now wanders freely between the stars. Astronomers estimate there may be hundreds of billions of such planets in the Milky Way. But this means the planet is dark, without any sunlight to keep it warm or fuel life. What about the moons of an FFP, however?

It is possible that an FFP can retain some of its moons even once ejected from its system – they would not necessarily be stripped of their moons in the process. However, the orbits of those moons would likely become more eccentric. Astronomers imagine a large moon orbiting an FFP gas giant in an elliptical orbit. Tidal forces would constantly stretch and pull the moon, causing its interior to heat up. These forces can be immense. Io, a large moon of Jupiter, is close enough to Jupiter that tidal forces keep it constantly volcanic and molten, turning itself inside out through such activity. So there would be a tidal Goldilocks zone around such gas giants as well: a band of orbits in which a moon is heated enough to support life but not turned into a volcanic hellscape.

Such moons could therefore be like Europa, with an icy shell but enough internal heat from tidal forces to keep a liquid ocean. But astronomers also want to know if such a moon could have liquid water on its surface. This would require a thick enough atmosphere to keep the surface water from evaporating away into space. It would also require an atmosphere capable of trapping enough heat to keep the surface warm (in this case the heat would be coming from the moon itself through tidal forces, and not from starlight, but it doesn’t matter). Astronomers have previously considered CO2 as a heat trapping gas, and this would work. However, because the upper atmosphere faces the cold dark of space, without a star to warm it up, the CO2 would slowly condense out of the atmosphere. Astronomers estimate such a moon could maintain surface water for about 1.3 billion years before the system collapses. This is a long time, long enough for life to arise, but not as long as it took life on Earth to get to its current state of complexity.

In a recent paper astronomers propose another situation that might work better – a mostly hydrogen atmosphere. An H2 dominated atmosphere would also trap sufficient heat (if it were thick enough) to maintain liquid water on the surface, just from internal heat through tidal forces. Further, such an atmosphere would be more stable than a CO2 atmosphere, lasting up to 4.3 billion years – long enough for complex life to evolve. Such life would likely be very different than Earth life, lacking sunlight and therefore photosynthesis, but it could exist.

If this analysis pans out, it could mean that the number of potential locations for life in our galaxy is many times current estimates, which do not include such moons. But again – until we actually find such life, we can only speculate about possibilities. Obviously we have no way of traveling to such locations (at least not anytime soon, and likely not for a very long time), but we can look for biosignatures, such as the presence of large amounts of oxygen (or any molecule that is not stable and would have to be constantly replenished by living processes) in the atmosphere.

And of course the ultimate question – could such complex life become technological, in which case we might also look for technosignatures? What would an intelligent, technologically advanced species from a hydrogen exomoon around a rogue planet be like? Wouldn’t it be wonderful to find out one day?

Confessions of a Former Chiropractor

Skeptic.com feed - Fri, 03/13/2026 - 10:04am

I went to a chiropractor in the 1980s for a stiff neck that had not improved after a month. A coworker praised him with the evangelical certainty usually reserved for miracle diets, used car salesmen, and people who have just read one book on nutrition. I was skeptical but adventurous, which is how most regrettable life decisions begin.

The adjustment worked. My neck improved. Worse still, my chronic asthma improved as well.

At the time, I was deeply unhappy in my first professional job after earning a bachelor’s degree in psychology and a master’s degree in applied behavioral science at Wright State University in Dayton, Ohio. I worked for a personnel-testing firm that marketed itself as scientific while relying on psychological instruments invented—without irony—in-house. Their psychometric rigor consisted largely of confidence, clipboards, and an aggressive font choice.

These tests produced false positives and false negatives with impressive symmetry, giving employers either a false sense of security or a convenient scapegoat. Qualified people quietly lost livelihoods. Chiropractic, by contrast, seemed refreshingly concrete. Hands. Spines. Patients who said they felt better. I imagined self-employment, ethical work, relief of pain, and perhaps even improved health. Compared with the pseudoscientific theater I was being paid to defend, chiropractic felt almost wholesome. In retrospect, this should have been a warning sign.

Why Chiropractic Made Sense at First

I had been trained in program evaluation, a discipline shaped by people obsessed with how to infer causality in the messy real world where randomization is often impossible and people insist on behaving like people. This was the era of stress research—Hans Selye, Thomas Holmes, and Richard Rahe—demonstrating that belief, expectation, and circumstance could predict outcomes as dramatic as Navy pilots crashing jets on aircraft carriers.

Chiropractic appeared to offer a humane alternative: a hands-on profession marginalized by a medical establishment overly confident in pharmaceuticals and procedures. Like many, I believed useful treatments had been discarded not because they failed, but because they threatened professional turf. I believed science had limits, and that those limits had been selectively enforced, preferably against someone else.

So I decided to become one myself, and in 1987 I graduated from the San Jose campus of Palmer College of Chiropractic and joined the ranks of doctors of chiropractic—eager, idealistic, and spectacularly unaware of the epistemic ecosystem I had entered.

Inside the Bubble

The dominant narrative was simple: conventional medicine had unfairly dismissed us. Scientific objections were cherry-picked. Our methods worked; medicine simply refused to look properly, or long enough, or with an open heart and an open mind liberated from all that oppressive critical thinking.

On weekends, I studied at Stanford’s Green Medical Library and noticed something curious: the library did not carry chiropractic’s premier scientific journal. I proposed that Palmer purchase a subscription for Stanford. We did. Stanford thanked us politely, in the tone such institutions reserve for unsolicited fruit baskets.

Old-guard chiropractors complained that we risked spilling our secrets to scientific medicine. The truth is, chiropractic education exists in a parallel universe. Its founding figure, D.D. Palmer, died in 1910, but his metaphysical afterlife remains active. Subtle vital forces, innate intelligence, and spinal “subluxations” hover just beneath the surface of even the most modern curricula, like software that never quite finishes installing.

The 1990s brought chiropractic its brief flirtation with legitimacy. The NIH’s Office of Alternative Medicine was established, fueled in part by philanthropic enthusiasm from abroad.

I interviewed for a position at an English health estate owned by Sir Maurice Laing, who had both an interest in alternative medicine and the resources to indulge it. I declined the offer, tethered as I was to America, but not before inserting myself into meetings with leaders of British complementary medicine. 

To the British Committee on Complementary Medicine, I proposed a heresy: stop arguing about putative mechanisms; first determine what works, for whom, and under what conditions. Program evaluation before explanation. My suggestion was politely ignored. Before assuming the throne, the future King Charles quietly stepped away from his advocacy of complementary medicine. One suspects reality intervened, possibly with charts.

The Cracks Appear

After years of practice and research involvement, my discomfort grew. Chiropractic diagnostics increasingly failed a basic test: face validity. 

My practice partner believed she could diagnose disease by testing the strength of specific muscles, a method known as applied kinesiology (AK). Patients loved it. The ritual was impressive. They asked why I did not perform AK, as though I were withholding a party trick. I asked her once how often her diagnoses were correct. “About half the time,” she said, without irony.

This is precisely the accuracy one would expect from a fair coin flip, except coins do not bill insurance companies or require continuing education credits. These tests were never compared to gold standards, so strictly speaking they were never correct or incorrect at all. They simply were.
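The coin-flip comparison is easy to make concrete. A throwaway simulation (the patient count and base rate are invented for illustration) shows that guessing a two-outcome diagnosis at random lands near 50 percent accuracy no matter how common the condition is:

```python
import random

random.seed(1)

def coin_flip_accuracy(n_patients=100_000, base_rate=0.3):
    """Accuracy of a 'diagnosis' made by fair coin flip on a
    two-outcome question (condition present / absent). The base
    rate is arbitrary: a fair coin is right about half the time
    regardless of how common the condition is."""
    correct = 0
    for _ in range(n_patients):
        has_condition = random.random() < base_rate
        diagnosis = random.random() < 0.5  # the coin flip
        correct += (diagnosis == has_condition)
    return correct / n_patients

print(coin_flip_accuracy())                # ≈ 0.5
print(coin_flip_accuracy(base_rate=0.9))   # ≈ 0.5
```

Mathematically, P(correct) = base_rate × 0.5 + (1 − base_rate) × 0.5 = 0.5 exactly, which is why the base rate drops out.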

What finally broke me was not only the epistemology—it was the economics. Chiropractic education devotes astonishing energy to practice management. Seminars, workshops, and consultants descend with the same message delivered in different fonts: sell care plans, sell frequency, sell fear. Some consultants you pay for one-on-one counsel even offer referral fees for recruiting other chiropractors into their programs. My millionaire business coach promised me $1,000 per referral who signed up—but always called a few weeks later with a sad reason not to pay.

The mantra was explicit: ABC—Always Be Closing. The bottom line of all the chiropractic continuing education and coaching programs was to claim, falsely, that chiropractic is crucial for overall health; and the bottom-bottom line was that advising chiropractors is much more profitable than being one.

Patients were no longer people with problems to be evaluated; they were “cases” to be converted. Thirty-six-visit plans were praised. Lifetime care was normalized. Preventive adjustments were marketed with the confidence of seatbelts and vaccines—minus the evidence, testing, and regulatory oversight.

Those who questioned this model were told they lacked confidence, commitment, and the proper chiropractic spirit. Skepticism itself became a personal failure. Success was measured not in clinical outcomes, but in collections. The resemblance to the psychometric firm I had fled years earlier was no longer subtle. With a quiet corruption of Avedis Donabedian’s classic framework—structure, process, and outcome—chiropractic leaders instead sold belief, structure, and certainty. And certainty, I learned, is a remarkably precious commodity in chiropractic world.

Indeed, one of the central problems with chiropractic is its frank comfort with ignoring evidence in favor of belief systems that “just make sense.” Plausibility substitutes for proof. Confidence substitutes for outcomes.

In practice, chiropractic operates at two largely disconnected levels of knowledge. At the top sit researchers, faculty, and administrators—those who define the profession’s identity—yet who typically know very little about the day-to-day realities of practice. At the bottom are practicing chiropractors, submerged in diagnosis codes, billing rules, collections, hiring and firing staff, training front-desk help, negotiating with insurers, and keeping the lights on.

The irony is that the most influential voices shaping chiropractic practice are almost entirely those who do not practice. These are the “paycheck chiropractors,” whose authority is inversely related to their proximity to the trenches. They do not argue with insurers. They do not explain denied claims. They do not rehire front-desk staff every six months. Yet none of this has ever impaired their confidence in advising clinicians how to act, what to treat, and what to expect from every imaginable or unimaginable combination of symptoms.

Practicing chiropractors, for their part, are remarkably comfortable with this arrangement. When things wobble or fail, blame flows inward. The practitioner assumes personal deficiency: insufficient belief, insufficient technique, insufficient commitment. It functions like a built-in self-protection virus for the profession—very convenient for avoiding collective accountability.

This arrangement is also useful when graduates eventually notice three inconvenient facts:

  1. There are few jobs.
  2. There is no meaningful referral network within medicine.
  3. Fifty years of accumulating studies have failed to make a compelling case for chiropractic’s widespread clinical utility.

Chiropractic does not compete well with medicine—or even with itself. When studied carefully, its apparent effectiveness dissolves into non-specific factors: expectation, attention, ritual, and natural history. When chiropractic researchers properly control for placebo and natural recovery, the specific effect of spinal manipulation reliably shrinks or disappears altogether. Paradoxically, better science makes chiropractic look worse.

Structurally, the profession is a two-tiered, one-directional system that rarely improves, because the real problems are invisible at the top and permanently personalized at the bottom. Some leaders continue selling early-20th-century dogma, steering chiropractic safely away from medicine by avoiding diagnosis and disease altogether.

At some point, the pattern became impossible to ignore. When a profession cannot hear its own failures, cannot correct its own assumptions, and cannot tolerate honest uncertainty, leaving stops feeling like betrayal and starts feeling like hygiene. That was when I knew I was done.

Many of my former classmates reached the same conclusion, some more quickly than I did. Privately, several admitted that much of what we had been taught was baloney. They were not amused. A $200,000–$400,000 investment over four years had produced clinicians who knew just enough medicine to realize how little they could safely treat. The coping mechanism was predictable: at least we help 50 percent of patients—better than nothing.

Some eventually realized that 50 percent accuracy in a two-outcome probability space is not success at all.

Sailing the Skeptical Seas! New Orleans Escapade! Mysteries of the Maya Cruise!

Skeptoid Feed - Fri, 03/13/2026 - 2:00am

Time is running out to grab one of the few remaining cabins for the Málaga, Spain to Nice, France voyage on the SV Royal Clipper. And we’re announcing our next adventure: New Orleans, followed by the Mysteries of the Maya cruise to the Yucatán Peninsula!

The Case for Free Speech Maximalism

Skeptic.com feed - Thu, 03/12/2026 - 2:31pm
  • Any group differences in outcomes can be traced to systemic racism.
  • If systemic racism exists at all, it works against so-called privileged groups.
  • Abortion is murder, period.
  • The sanctity of human life is a made-up concept.
  • Jews have a biblical right to Israel.
  • Hitler was right about a few things.
  • Masculinity is inherently toxic.
  • If women ran the world, we would still be living in grass huts.
  • The colonialists need to give back the land they stole.
  • Indigenous people need to get over the fact that they were conquered.
  • Providing sex is an obligation within a marriage.
  • Any sexual coercion constitutes rape.

Do any of these statements resonate? Make you angry? Do some not even merit a response?

I can’t tell you exactly how I would respond to someone who defended Hitler, but I know what I would not do: stalk him on social media, contact his employer to try to get him fired, or ask my government representative to help criminalize such talk. 

Does this make me a free speech absolutist? Not quite. Like Robert Jensen, a professor emeritus at the University of Texas at Austin and prolific blogger, I suspect that most people who call themselves free speech absolutists don’t actually mean it. They wouldn’t countenance speech like “let’s go kill a few Germans this morning. Here, have a gun.” Instead, Jensen writes, they’re prepared to “impose a high standard in evaluating any restriction on speech. In complex cases where there are conflicts concerning competing values, [they] will default to the most expansive space possible for speech.”

In other words, they’re free speech maximalists. A more contemporary and nuanced variant of absolutism, the maximalist position grants special status to free speech and puts the burden of proof on those who wish to curtail it. While accepting some restrictions in time, place, and manner, free speech maximalism defaults to freedom of content. It aligns with the litmus test developed by U.S. Supreme Court Justices Hugo Black and William O. Douglas, which holds that government should limit its regulation of speech to speech that dovetails with lawless action:

Let’s go kill a few Germans? Not kosher. 
The only good German is a dead one? Fair game.

Some pundits view this position as misguided. A 2025 Dispatch article titled “Is Free Speech Too Sacred?” laments America’s descent into an era of “free speech supramaximalism,” in which “not only must speech prevail over other regulation, but nearly everything is sooner or later described and defended as speech.” A New Statesman essay about Elon Musk, written a few months before he acquired Twitter (now X), decries Musk’s “maximalist conception of free speech usually adopted by teenage boys and libertarian men in their early 20s, before they realise its limitations and grow out of it.” The implication: free speech maximalism is an unserious pitstop on the way to more mature thinking. Only testosterone-soaked young men, drunk on their first taste of freedom, would spend more than a minute on such a naïve view.

This 69-year-old woman disagrees. I grew into my passion for free speech during the early months of the COVID-19 pandemic, when the pressure to conform in both word and deed reached an intensity I had never witnessed before. Any concerns about the labyrinthine lockdown rules elicited retorts like “moral degenerate” or “mouth-breathing Trumptard.” (Ask me how I know.)

Unexpectedly jolted into awareness of free speech principles, I began reading John Stuart Mill and Jean-Paul Sartre and writing essays about freedom of expression in the COVID era. One thing led to another, and in 2025 the newly minted Free Speech Union of Canada found a spot for me on its organizing committee. What most of us in the group shared, along with age spots and facial wrinkles, was a maximalist position on free speech. Perhaps we’re all immature. Or maybe we’ve lived long enough to understand exactly what we lose when free speech goes AWOL.

But but … critics sputter … what about hate speech? Free speech maximalism posits that you can’t regulate an inherently subjective concept. As Greg Lukianoff and Ricki Schlott note in their 2024 book The Cancelling of the American Mind, “as soon as you start legislating based on a concept as loosely defined and subjective as offense, you open the floodgates to every group and individual claim of offense.” This argument may well explain why Canada’s proposed Bill C9—the Combatting Hate Act—remains stalled after protracted parliamentary debate.

Is “you cannot change sex” hate speech or merely opinion? Is “you have a big Black butt” an offensive remark? It depends on who says it, how it’s said, and who hears it. One person may react to the big butt comment with reflexive outrage, while another may simply shrug. When said tenderly to a lover, the statement may elicit a full-throated laugh. Offense is in the eye of the beholder. 

A case in point: The U.S. Patent and Trademark Office refused to register the name “The Slants” (an Asian American rock band) because of its derogatory, or hateful, connotations. The bandleader sued, and in 2017 the Supreme Court ultimately agreed that “giving offense is a viewpoint” and that a law restricting expression on the basis of viewpoint violates the First Amendment.

Here’s the thing: when you embrace viewpoint diversity as an ideal, you tend to get less offended about things. You may profoundly disagree with a statement, but it won’t cause you to puff up in outrage. Someone can tell you that the sky is green, or that women can’t think logically, or that Hitler was right about some things, and you allow the words to bounce off your emotional core. It’s a liberating habit of mind. 

And if you do get offended? Big whoop. You’ll survive. During a recent bus trip from Whistler to Vancouver my seatmate, a doctor, took it upon himself to share his candid opinions about women with me: they can’t take a raunchy joke, they make poor leaders, they’re responsible for cancel culture, and society would work better if they stayed home. Ugh. Seriously? But I survived. I wasn’t traumatized. Truth be told, I quite enjoyed our conversation. He listened as much as he spoke. I even found a few grains of value in his arguments, and perhaps a couple of my retorts gave him pause. And that’s what it’s all about, isn’t it? Humans of all stripes challenging and learning from each other. 

Here I must pause to express disappointment in my own sex. Women, I have found, value free speech less than men do, and studies corroborate my perception. In one survey, 71 percent of men said they gave priority to free speech over social cohesion, while 59 percent of women held the opposite view. An article reporting on the survey affirmed that “across decades, topics, and studies, women are more censorious than men.” Boo.

Even with carte blanche to express ourselves, it’s impossibly difficult for us humans to lay bare our true thoughts. Self-censorship is baked into our DNA. Free speech maximalism serves as a counterweight to this force. It allows us to rise, even if timidly, above the lead blanket of social conformity flung over us by the finger-wagging classes. By exposing little bits of our true selves, we shed light on the glorious contradictions in the human condition—a benefit that serves not just angry young men, but women with age spots and everyone else.

To those concerned about the dangers of loosening our tongues, I offer Greg Lukianoff’s bracing maxim: “You are not safer for knowing less about what people really think.” 

Categories: Critical Thinking, Skeptic

Creationists Don’t Understand Nested Hierarchies

neurologicablog Feed - Thu, 03/12/2026 - 6:30am

Creationism, in all its various manifestations, is sophisticated pseudoscience. This makes it a great teaching tool for demonstrating the difference between legitimate science and science denial dressed up as a cheap imitation of science. Creationist arguments are a great example of motivated reasoning, providing copious examples of all the ways logic and argumentation can go awry. It has also been interesting to see creationist arguments (at the leading edge) “adapt” and “evolve” into more complex forms, while maintaining their core feature of denying evolution at all costs.

I am going to focus in this article on young Earth creationists, specifically Answers in Genesis, and something that is a persistent element of their position. Essentially they do not understand the concept of nested hierarchies. I have a strong sense that this is because they are highly motivated not to understand it, because if they did the entire structure of their YEC arguments would collapse.

This AiG article is a great example – Speciation is Not Evolution. The article is more than a bit galling, given that the author seeks to lecture scientists about the use of precise definitions. It begins by patronizingly explaining the humor in the famous “Who’s on First” skit (gee, thanks for that), then accuses scientists of not being precise with their definitions. This is, of course, the opposite of the truth. Good science endeavors to be maximally precise in terminology (hence the jargon of science), and it is creationists who habitually use vague and shifting definitions – such as their abuse of the word “information” and for that matter “evolution”.

We see this right in the title of the article – speciation is not evolution – well, speciation is part of evolution. No one claims that by itself it encompasses evolution, but it’s a pretty critical part. They play this game frequently, claiming, for example, that natural selection does not increase “information”. Correct, it non-randomly selects information. But mutations, duplications, and recombinations demonstrably increase information. They then argue that mutations only “degrade” information, and duplications only copy what is already there. In fact, mutations change information in ways that can be neutral, positive, or negative, as judged by the context of the individual organism. Duplications absolutely increase the amount of information (again, what definition of information are they even using?), allowing one copy to maintain its original function while the new copy can mutate into new functions.

But let’s get to the core argument of this article, that speciation can occur within “kinds” but cannot turn one kind into another. In other words, dogs can evolve into new species of dogs, but a dog can never evolve into a cat. “Evolutionists”, they argue, don’t understand this difference, and so confuse speciation within a kind to “macroevolution” from one kind to another. Meanwhile, they do not have an operational precise definition of what a “kind” is. The word comes from the Bible (God created creatures each according to their own kind) and is not a scientific concept. The author states that a kind roughly correlates to a family level taxonomically. But that doesn’t help. A taxonomical “family” is also not a precise thing. It is simply a categorization convention, and varies tremendously across the tree of life. The same is true of macroevolution – this is not a scientific concept and has no operational definition.

The problem with both of these concepts – kind and macroevolution – is that they suffer from a fatal demarcation problem. There are lots of demarcation problems in science, anytime we try to categorize a messy continuum of nature. What’s a planet, or species, or continent? The difference is, the YEC argument is contingent on there being a sharp demarcation – evolution can produce this degree of change, but no further. The problem is, they never state any reason, based on any valid principle, why this should be so. They simply assert that kinds are inviolate.

But at the core of their claims is a complete misunderstanding of what evolutionary science actually claims. Ironically, when they say that dogs can only evolve into more dogs, and never into cats – they are correct. Evolutionary scientists agree with this statement, especially if you take a cladistic approach to taxonomy. By definition a clade is one species and all of its descendants. This is why it is cladistically correct to say that people are fish. Once the eukaryotic clade evolved, everything that descends from it is still a eukaryote. So humans are eukaryotes, and also animals, vertebrates, fish, lobe-finned fish, tetrapods, mammals, and primates. It is correct, for example, to say that all descendants of fish are still fish, as long as you count humans as fish. What you cannot ever do is go back up the cladistic tree. You cannot undo evolution. You also cannot make a lateral move to another unrelated clade. So an animal cannot evolve into a plant.

The YEC misunderstanding of this concept turns all of their arguments about why evolutionary scientists are wrong into straw men. No one ever said a dog can evolve into a cat – in fact scientists say this is impossible. It is not part of evolutionary thinking.

What creationists do is grossly underestimate how much change can occur within a clade, because they are stuck on the concept of “kinds”. Functionally, what is a kind? It’s one of those things that you vaguely sense – you know it when you see it. Everyone knows what dinosaurs look like – they have a dinosaurish vibe. This is why creationists falsely argue that birds could not have evolved from dinosaurs. Actually, it is more correct to simply say that birds are dinosaurs – they are a subclade within the dinosaur clade. Birds are also reptiles, because dinosaurs are a subclade within reptiles, which are a subclade within fish, etc. It’s nested hierarchies all the way down. But birds look like a different kind than dinosaurs, so this violates their vague sense of what a kind is. They then mock the idea by analogizing it to a dog evolving into a cat – but this is a false analogy. Dogs and cats are different subclades of mammals, and you cannot evolve from one clade into another, only into subclades within your existing clade.

Stephen Jay Gould also discussed this idea and zoomed in on an important and highly misunderstood concept. Over evolutionary time we expect that disparity (not diversity, the number of different species, but disparity, the range of difference among body plans) decreases. This seems counterintuitive, but it makes sense once you fully internalize the concept of nested hierarchies. Multicellular life achieved maximal morphological disparity soon after the Cambrian explosion, and from that point forward we only see variations on the various body plan themes. Over evolutionary time the nested hierarchy structure of the tree of life means that we see variations on progressively constrained themes. Evolution is constrained by its history, so the more evolutionary history a lineage has, the more constrained its future evolution. If we look at the entire history of evolution, we see this increasing constraint play out as decreasing disparity. At most disparity can stay the same, but extinction is like a ratchet slowly decreasing it.

To take an extreme example used by Gould to illustrate this, imagine a mass extinction where the only surviving land vertebrates are dogs. Eventually those dogs will adapt and fill all the empty niches – you will have herbivorous dogs, grazing dogs, dogs living in trees, predatory dogs, and more. But they will all be variations on dogs. A dog will not evolve into a giraffe, but it may evolve into a giraffe-like dog, while still retaining dog features. This is also why using modern extant examples (a dog evolving into a cat) makes no sense. The dog clade is evolutionarily constrained to forever be dogs, even though that can include a lot of diversity. But if you go back in time a few hundred million years, you find a less evolutionarily constrained mammal lineage that evolved into both cats and dogs.

We can also ask the question – what does the evidence show? The AiG article includes a picture illustrating speciation within kinds. The depiction of each kind’s clade is conceptually not bad (I don’t think it was meant to be literally accurate), but it artificially stops at an arbitrary line of “kinds”. Does the evidence support this view? What would we expect to see if each kind were created unto itself and separate from all other kinds? What would we expect to see if these nested hierarchies go all the way back to the beginning of life? You could fill a book reviewing the actual evidence, but let me give a quick summary.

If the YEC schematic is correct, then we would expect to see discrete clades that can be cleanly separated – morphologically, genetically, physiologically and biochemically. If the evolution schematic is correct then we would not expect any clean separation, but a continuum along all these features leading back as far as the evidence goes. The bottom line is that the evidence is a home run for the evolutionary prediction. Creationists deal with this devastating fact in a couple of ways. First, they often simply deny the evidence, saying things like “there are no transitional fossils”. They support this claim by mischaracterizing the evidence, ignoring evidence, and also by playing loose with the definition of “transitional”.

They also claim that any similarities between kinds are due to each kind having the same creator. Why would the creator reinvent the wheel with each kind? Of course he just used the same solutions over and over again. But this argument only goes so far. There are numerous connections between clades that go far beyond utility, such as viral insertions. The genetic material from a virus can get stuck in the genome of a creature, and then persist down throughout its clade. These are non-functional bits of viral residue in the genome, and they provide a map of nested hierarchies that respects clades but violates any notion of kinds.

We can also look at the fossil record temporally. In the YEC model, we should see all kinds appearing at the same time (creation), then going through a simultaneous bottleneck (the flood), followed by speciation into our current extant species. That is not what we see – not even close. Some will say – what about the Cambrian, that is the sudden appearance of all kinds. Um, no. There are no birds, dogs, triceratops, horses, or humans in the Cambrian. None of the roughly family-level kinds they propose were present in the Cambrian fauna. The Cambrian explosion resulted mainly in the multicellular phyla (basic body plans), including some that are now extinct. If creationists claimed that kinds were phyla and that they were created 500 million years ago, they would have a stronger case. But that is not what they say. Over time we then see increasing diversity within clades, with new subclades evolving and appearing over evolutionary time. We see basically exactly what we would predict if all life has a common ancestor, and not what we would expect if life were divided into family-level kinds created all at the same time.

Creationists cannot engage with what evolutionary scientists actually claim, so they have to invent ridiculous straw men to attack. They use loose and shifting definitions, and then have the gall to falsely accuse scientists of doing the same. They can’t explain the evidence, so they have to ignore and distort it beyond all recognition.

And to clarify my position, in case you are new to this blog, I am not against belief in God and essentially don’t care what anyone believes when it comes to metaphysical questions. But science follows methodological naturalism, and if you follow the methods of science there is only one logical, evidence-based, and scientific answer to the question of the origin of species. The evidence overwhelmingly shows that all life is descended from a common ancestor in a nested hierarchy of relationships.

The post Creationists Don’t Understand Nested Hierarchies first appeared on NeuroLogica Blog.

Categories: Skeptic

The Other Lab Leak Hypothesis: Is Lyme Disease Caused by an Escaped Bioweapon?

Skeptic.com feed - Tue, 03/10/2026 - 1:47pm

Practically everyone has heard of the tick-borne infection known as Lyme disease, even if they don’t live in a high-risk area. Some are aware of long-standing controversies about the consequences of infection or how best to treat it. Our concern here is for a newly emerging controversy about Lyme disease—namely, the theory that it originated as part of a bioweapons program. As U.S. Representative Chris Smith of New Jersey is heard to say while participating in a Department of Health and Human Services roundtable on Lyme disease: “They were weaponizing Ixodes burgdorferi [sic], as we all know.”1

Part of this theory is that Lyme disease’s origins can be traced to the United States Department of Agriculture’s (USDA) Plum Island Animal Disease Center, where it allegedly was developed as a biological weapon, either as a genetically modified organism or by “weaponizing” native ticks to carry a secret pathogen. Plum Island, in fact, would seem to be a good place to center these hypothetical activities, because it has exclusively been the site of a restricted-access USDA facility since 1954. The facility has long conducted research on foreign animal diseases that would devastate the livestock industry in the United States if they were ever introduced accidentally or purposefully as a biological weapon. This research is essential for developing vaccines and measures to prevent potential outbreaks of animal diseases, such as foot-and-mouth disease, African swine fever, and other diseases of domesticated animals.

Plum Island is located off the eastern end of Long Island and about seven miles across the water from the town of Lyme, Connecticut, where what seemed (at the time) to be a new tick-borne disease was identified in the 1970s. Over the past five decades, Lyme disease—as that illness is now called—has been documented in several other states in the northeastern, mid-Atlantic, and north-central U.S., as well as parts of states in the Far West. It is a tick-borne infectious disease affecting tens of thousands of people each year and at an enormous cost to the public’s health and people’s well-being. 

The issue of whether the emergence of Lyme disease is the consequence of natural processes or might have originated from humans—namely, as a designed bioweapon, subsequently inadvertently or intentionally released—has become a hot topic in the news, social media, and podcasts. It has prompted calls for an investigation from members of Congress, and an amendment from Representative Smith is now part of the recently passed and White House-signed defense authorization bill. It would seem more convenient to have somebody or some government institution to blame for an emerging infectious disease, rather than natural events. But in reality, nature poses a greater threat than human design or error as a source of new infectious diseases and epidemics for humans and other animals.

Plum Island is a high-containment facility only reachable by boat from Long Island and Connecticut for the daily transport of authorized personnel. Visitors are not allowed, and any intruders are promptly escorted off the island. Deer and other wildlife that may be susceptible to infections and occasionally swim to the island are immediately culled by sharpshooters from helicopters. Such high security has long fueled rumors and suspicion among neighboring communities that something nefarious must be going on at Plum Island. The island undeservedly gained notoriety as “Anthrax Island” in Hannibal Lecter’s telling in the Silence of the Lambs book (1988) and film (1991).

One of us (DF) worked on Plum Island during the 1990s, conducting research on African swine fever under a USDA research contract with Yale University. African swine fever is a tick-borne disease native to Africa, and it is highly infectious among pigs even without ticks. Access to infected animals required two changes of clothing and a shower before passing through each of two air-tight chambers. But no protective gear was needed for personnel, as these animal diseases do not have the capacity to infect humans. If they did, self-contained spacesuits would be required, as are used for Ebola and other dangerous human pathogens in BSL-4 labs. The Plum Island facility had no capacity to work with human pathogens, and there is no evidence that scientists there ever worked on Lyme disease.

The second of us (AGB) participated in the early 1980s in the discovery and then isolation of the bacterium that causes Lyme disease. The team accomplished this using ticks that were collected at the far end of Long Island, not far from Plum Island. This might sound like evidence of an escape from the Plum Island lab. But Long Island and Lyme, Connecticut, were not the only places where Lyme disease was occurring at the time. The availability of cultured bacteria led to diagnostic assays that were quickly developed and implemented. Application of these blood tests for laboratory diagnosis in many other places in the United States revealed that the infection was not limited to a small area near Plum Island and had not been so restricted for many years.

Besides New York and Connecticut in the early 1980s, cases were soon identified in other northeastern states, north-central states like Minnesota and Wisconsin, and even across the country in northern California. This is a disease only transmitted by ticks, which crawl and, unlike mosquitoes, do not fly. Even hitching rides on deer, mice, or birds, the infection would have taken decades to spread so widely if it had been released from a single place at the continent’s edge.

Evidence that the bacteria were already present in the area long before any theorized release from Plum Island came from museum specimens of preserved ticks and field mice that had been collected in the northeastern U.S. in the 19th and early 20th centuries. In retrospect, cases of Lyme disease in different parts of the country had been described by physicians in medical case reports from the 1960s.

Further justification for rejecting a Plum Island bioweapon release theory was recognition that Lyme disease, under other names, had clearly been occurring in Europe since at least the early 20th century, decades before it was first named as a new disease in North America. In Sweden, the Lyme disease agent was recovered from chronic skin rashes that had started years before it was found in some New York ticks. Subsequently, the causes of Lyme disease were identified in ticks and mammals, as well as in patients in China, Japan, Korea, and Russia. Why would there be a need for a new bioweapon delivered by ticks if the infection was already occurring in many parts of the world? 

The bacterium that was isolated from those ticks from Long Island was the first example of what was soon recognized to be a species meriting its own name. But there was nothing strange about it at the time or since, even after intensive study. There is nothing to indicate that it was a genetically modified organism or was constructed from parts of other bacteria, as has been suggested. Genetic analysis of Lyme disease bacteria shows that they originated on the Eurasian continent and spread to North America thousands of years ago. 

That first isolate was representative of but one strain out of several that were occurring then and now in the northeastern U.S. There are other strains in the Midwest and another set in the Far West. Europe has its own strains of the bacteria. This pattern of differences is what would be expected for bacteria that have been widely distributed for millennia and evolved to adapt to their unique local circumstances over time. If the Lyme disease agent were some kind of Frankenstein germ, malignly created and released upon the world, one might as well invoke space aliens that had visited the Earth thousands of years ago. 

What’s the more plausible explanation for the increase in numbers and distribution of Lyme disease that began in the last half of the 20th century? It is clear to us that Lyme disease is a product of nature and has been present for millennia throughout the continents of Eurasia and North America. What has changed to cause it to become recently epidemic is the reestablishment of forests and deer, which has led to a proliferation of ticks over the past half-century. Massive deforestation in the Northeast and upper Midwest before 1900 for agriculture and manufacturing resulted in the near extermination of deer, the natural host of the deer tick that is responsible for transmitting Lyme disease in these areas. Long Island is the only known location in the Northeastern U.S. where white-tailed deer and deer ticks have persisted since colonial times. 

Another refugium was in northern Wisconsin, where a case of Lyme disease from the 1960s was retrospectively identified. From these two ancient refugia, Lyme disease has slowly spread to neighboring states as forests regenerated, and as deer and ticks returned to their former ranges. This spread has been well documented since the original discovery of the Lyme disease agent more than 40 years ago. The same history of reforestation of areas previously used for agriculture and industry accounts for the increase and spread of the Lyme disease bacteria and the ticks that transmit them in Europe.

Can we call this increase in Lyme disease in various parts of the world the result of “human activities”? Of course. Without human population growth and the concomitant advances in agriculture and industry, Lyme disease would have remained just one of many infections that ticks have transmitted among mammals, birds, and reptiles in woodlands for eons. But the Lyme disease resurgence is just one aspect of a broader process of demographic, environmental, and social change occurring in developed countries of North America, Europe, and parts of Asia. We need not attribute it to the intentional or inadvertent actions of some government workers in a high biosafety level laboratory off the coast of Long Island.

Categories: Critical Thinking, Skeptic

Skeptoid #1031: Unearthing Ancient Advanced Civilizations

Skeptoid Feed - Tue, 03/10/2026 - 2:00am

An exploration of the validity of the Silurian hypothesis, which posits the existence of a pre-human intelligent race on Earth.

Learn about your ad choices: dovetail.prx.org/ad-choices
Categories: Critical Thinking, Skeptic

Improved Photosynthesis

neurologicablog Feed - Mon, 03/09/2026 - 6:18am

Researchers have recently published a discovery that could lead to more efficient photosynthesis in many crops. It’s hard to overstate how impactful this would be, as this could significantly increase crop yields while decreasing inputs. The growing human population makes such advances critical. Even without that factor, increasing yields decreases the land intensiveness of agriculture, which has a dramatic impact on our environment and sustainability. Improved photosynthesis would be a win across the board.

Before we get into the study there are a couple of points I want to explore. When I first learned of the various research efforts to improve photosynthesis, my first reaction was – why hasn’t evolution already optimized something so critical to all life? The first photosynthetic organisms evolved at least 3.4 billion years ago. That’s a lot of time for evolutionary tweaking. So why is efficiency still an issue? There are a couple of answers, but the primary one appears to be the constraints of evolutionary history. What this means is that evolution can only work with what it has, and it cannot undo its history. Once development leads down a certain path, evolution can make variations on the path but it cannot go back in time and take a completely different path. All vertebrates are variations on a basic body plan, for example.

So what are the evolutionary constraints on photosynthesis? Photosynthesis uses the energy from sunlight to combine carbon dioxide (CO2) with water (H2O) to make glucose and oxygen. Critical to this reaction is an enzyme, ribulose-1,5-bisphosphate carboxylase/oxygenase (RubisCO), which fixes the carbon from CO2 into organic compounds. RubisCO is responsible for fixing over 90% of all carbon in living things. It is the most common enzyme in the world and a cornerstone of living ecosystems, which mostly depend on energy from the sun.

RubisCO, however, is not very efficient. It does not catalyze the reaction very quickly or specifically. The most likely reason is that RubisCO evolved on the ancient Earth, before the “great oxidation event”, when the atmosphere had lots of CO2 but little or no oxygen. The enzyme therefore did not have to distinguish between the two gases, and there was no selective pressure for one that would catalyze a reaction with CO2 but not O2. RubisCO catalyzes both. By the time oxygen started to build up in the atmosphere, RubisCO was well established as the enzyme of photosynthesis. There is also a tradeoff between efficiency and specificity: the more specific RubisCO is for CO2 over O2, the slower the reaction, and the faster the reaction, the lower the specificity (the more “mistakes” the enzyme makes by catalyzing a side reaction with O2).

To be clear, scientists often use metaphors when discussing this situation. RubisCO does not really make “mistakes”, it just does what it does. And the reaction with O2 is only a “side” reaction from the perspective of what’s best for the organism and from evolutionary selective pressures (but that’s the context that matters). So evolution has tweaked RubisCO over billions of years to have the optimal balance between efficiency and specificity. It should also be noted that this side reaction with O2 is not just wasteful, it creates toxic compounds that have to be cleared. It is estimated that plants waste 30% of the energy captured from sunlight creating and then dealing with these O2 side reactions. But evolution was effectively “trapped” in this tradeoff. Organisms had been using RubisCO  for over a billion years prior to the great oxidation event and were too dependent on it to evolve a completely new method of photosynthesis.

How do we break out of this trap? For this we need another concept – stoichiometry. You remember the Bunsen burners from high school science class: you have to adjust the air intake to get the flame to go from a sputtering yellow flame to a steady blue one. You need just the right ratio of gas to air to optimize the efficiency of the reaction. The situation with RubisCO is similar, although simpler. We need to maximize the concentration of CO2 and minimize the concentration of O2 around the RubisCO, in order to simultaneously improve the efficiency and specificity of the reaction. Mechanisms that do this are called carbon concentrating mechanisms, or CCMs. The idea may be simple, but evolutionarily it is very difficult (judging by how rarely such CO2 concentrating mechanisms have evolved in nature).

Cyanobacteria and eukaryotic algae have evolved CCMs. Algae specifically evolved structures called pyrenoids which concentrate RubisCO in parts of the chloroplasts where CO2 can also be concentrated. Researchers have been trying to understand the genetics and physiology of these CCMs to see if they can be ported to land plants, specifically crops. Unfortunately, these CCM systems are complex, involving many genes working together. Plus the evolutionary distance between algae and land plants makes adapting these systems difficult.

This brings us to the latest study, which looks at the CCM in a specific type of land plant. Only about 8-15% of land plants have evolved some sort of CCM; most still use what is called traditional C3 photosynthesis. Perhaps the CCM in one of these branches of land plants could more easily be adopted in crops. Some plants use what is called C4 photosynthesis, which uses a biochemical pump to move CO2 into bundle sheath cells. This evolved only about 20-30 million years ago, and is found in maize, sorghum, sugarcane, and some tropical grasses. Another mechanism is found in CAM plants (Crassulacean Acid Metabolism), which take up CO2 at night and store it as acid, then use it during the day to increase CO2 concentrations during photosynthesis. Then there are the hornworts, which concentrate RubisCO using organelles similar to those of algae. The recent study looks at this third mechanism.

Here’s the good news – the researchers found that hornworts (which are small ground plants) use a very simple mechanism. There is an extra tail on the C terminus of one of the subunits of RubisCO. The researchers named this region RbcS-STAR, or the STAR region of the RubisCO. This extra tail acts like Velcro, causing RubisCO to stick together and clump, which is good if you want to concentrate CO2 and RubisCO in the same part of the cell. They added the STAR piece to a relative of hornwort, and it worked. They added it to Arabidopsis, an unrelated plant often used in research, and this also caused the RubisCO to clump. So they demonstrated that STAR works, even in unrelated species, which suggests that RbcS-STAR will likely work in a diverse range of plants.

However – the research is not done yet. Essentially they have only one half of the job done. Now they need to find a way to bring high concentrations of CO2 to the clumps of RubisCO. Perhaps they can borrow the biochemical pumps from C4 plants. There is already extensive research into porting C4 photosynthesis into C3 crops, like wheat and rice. These efforts have proved challenging, because they involve complex leaf restructuring (such as increasing the density of veins). It is possible that this discovery of RbcS-STAR could offer a simpler solution to making C4 work in these plants.

Making C4 wheat or rice could increase their yield by up to 50%. That would be transformative to agriculture, and is worth the extensive research into cracking this complex problem. While the current discovery is just one possible piece to the puzzle, it is very encouraging and hopefully moves us significantly closer to a solution.

 

The post Improved Photosynthesis first appeared on NeuroLogica Blog.

Categories: Skeptic

From Sisterhood to Mean Girls: Evolutionary Insights Into Friendship and Fiendship

Skeptic.com feed - Sun, 03/08/2026 - 11:20am

“Gretchen, I’m sorry I laughed at you that time you got diarrhea at Barnes & Noble. And I’m sorry for telling everyone about it. And I’m sorry for repeating it now.”
—Karen Smith in Mean Girls 1

Popular culture, including literature and film, often extols the value of friendship and the important emotional role it plays in the lives of women and girls. From The Divine Secrets of the Ya-Ya Sisterhood, Memoirs of a Geisha, and Anne of Green Gables to films such as Steel Magnolias, Thelma and Louise, and Bend It Like Beckham, we see portrayals of female friendship that highlight social and emotional support as it occurs across the lifespan. Such tales are often centered on self-discovery and the value of generous and loyal friends. And yet, popular culture has also given us products that focus on the dark side of female relationships in films such as Mean Girls (the theatrical release poster had the tag line “Watch Your Back”), the television show Gossip Girl, and numerous songs from artists like Taylor Swift with Better Than Revenge and Katseye with Mean Girls. These works emphasize the competition that can occur between women, even those who appear to be friends, over sexual partners and social status in one’s peer group. The ubiquitous nature of social media today has also raised concerns about this type of aggression between females. While there are substantial benefits to friendship,23 there can also be significant costs.4 Our friends can be our most trusted allies but they can also betray us in the name of competition. Before delving into the depths of female friendship and fiendship, it is important to understand the evolutionary forces that shaped same-sex friendships in general as well as how natural selection may have differentially influenced male versus female same-sex friendships.

In general, across our evolutionary past, same-sex friends would have played a crucial role in our survival and fitness. For example, potential benefits of friends would have included protection against rivals or other threats to survival, enhancing one’s status and access to mates or resources, transmission and development of culturally important skills, social support in raising children as well as navigating other relationships, and emotional support to help manage stress and social challenges.5 The number and quality of these same-sex relationships are associated with better mental well-being and physical health for both men and women.6 However, since men’s and women’s same-sex friendships evolved in different contexts to solve somewhat different adaptive problems, there are significant differences in their same-sex relationships.78 Friendships between men evolved in a side-by-side group context. Historically, this would have been men forming alliances with one another for purposes of hunting, protection, and warfare. As such, they tend to center around a shared activity (e.g., sports in modern society). In addition, these friendships tend to be hierarchical in nature and often involve direct competition (including physical contests of strength, skill, or both). In contrast, women’s same-sex friendships evolved in a face-to-face, one-on-one context in which women formed alliances with one another for purposes of alloparenting (that is, the care of offspring by individuals who are not their biological parents, from feeding and grooming to protection and socialization), emotional support, and sharing of resources and social information. 
Historically, upon marriage, women typically left their own kin behind and relocated to their husband’s community.9 Therefore, in the absence of others who would be invested in their well-being, these social alliances between women would have played an important role in their own survival as well as that of their offspring (and therefore of the group they had joined). Today, friendships between women are more intimate than friendships between men and tend to center around mutual disclosure, trust, and empathy. Even in contexts where there is an activity involved (the popular “Stitch-n-Bitch” groups, for example), the shared activity typically tends to come second to the emotional bonding between the women. Compared to their male counterparts, competition between female friends tends to be more indirect and involves reputation-damaging gossip, social exclusion, and subtle undermining of each other’s interests. 

In addition to differences in friendship interaction style, the structure of male and female same-sex friendships also influences how men and women react to interlopers who may threaten these friendships.10 Male same-sex friendships evolved in a context that historically included banding together to defend their group against threats from other groups. Consistent with this, men (compared to women) report greater feelings of friendship jealousy when primed with a threat of intergroup conflict. Furthermore, since a larger coalition of same-sex friends would mean greater benefits accrued from those relationships, men report greater friendship jealousy (compared to women) over the prospect of losing acquaintances. Women, on the other hand, tend to engage in one-on-one interactions with their same-sex friends, and report experiencing greater loss and friendship jealousy over the prospect of losing a best friend (compared to men). This loss is compounded by the fact that, compared to men, women invest more time and energy to develop their close, intimate relationships, thus making it harder to replace their close friends. The greater self-disclosure between female close friends also makes the dissolution of such close friendships potentially more damaging to one’s reputation if the ex-friend spreads rumors about them or shares their secrets. These features motivate women to protect their friendships. 

The shift from friendship to fiendship comes into play when jealousy is triggered by the friend themselves versus an interloper. As indicated above, women tend to use indirect competition strategies. Specifically, while men are more likely to engage in direct physical aggression with their competitors, women are more likely to engage in relational aggression,11 which involves attempts to harm others by damaging their social ties.12 Often done covertly, this social sabotage involves behaviors such as excluding the so-called friend (e.g., giving them the silent treatment or intentionally leaving them out of some interaction), gossiping or spreading rumors about them (e.g., sharing their secrets), and attempting to turn others against them through public embarrassment. Relational aggression in female same-sex friendships seems to peak in adolescence.13 Since this aggression occurs between friends, not just rivals, it is often perceived as a personal betrayal. Relational aggression can also be subtle, though, making it hard for the so-called friend to detect. It could include backhanded compliments or manipulating the “friend,” for example, setting them up for failure or public embarrassment by encouraging them to wear an unflattering outfit or approach a potential romantic interest knowing they’ll be turned down. Since intimacy and emotional closeness is prioritized in female same-sex friendships, being betrayed or excluded by someone one considers to be a close friend can be especially hurtful.
Research suggests that this type of betrayal in adolescence is often associated with negative academic and psychosocial outcomes, including feelings of depression, anxiety, poor self-image, suicidal ideation, and social withdrawal as they find it hard to trust others.1415 Prospective longitudinal studies have found that girls’ peer victimization experiences of relational aggression between ages 7 and 10 were associated with an increased risk of self-harm behaviors in late adolescence.16 The observed self-harm behaviors included cutting themselves as well as swallowing pills, with roughly 27 percent of adolescents reporting they engaged in those behaviors with suicidal intent. In addition, other longitudinal studies suggest that girls who experience peer victimization in middle childhood are more likely to develop eating disorders by early adolescence.17

While it is clear that women engage in aggression, albeit commonly in a different form than men, it’s important to understand the motivation behind it as well as the forms it takes. In general, greater female aversion to risk of physical injury promotes the pursuit of low risk and indirect strategies of same-sex competition. What are the drivers behind such competition between women and girls? They are largely intrasexual competition for social status and mates. For the majority of human history, women have lacked direct access to resources, relying on male provisioning and protection for themselves and for their children. As a result, same-sex peers are primary rivals for acquiring and retaining partners willing and able to invest and protect. We see echoes of this in the behavior of modern women, who dislike and work actively against rivals who threaten their romantic prospects, often directing their animosity toward physically attractive and sexually unrestricted peers. Cross-cultural research has demonstrated that men have a preference for physically attractive, youthful women as sexual partners18 and studies examining female behavior, from online dating profiles to trends in cosmetic aesthetics, suggest that women compete with other women over their attractiveness to men, aiming to look more youthful and attractive than their competitors.192021 It is worth pointing out that beautification can be seen as a tactic in competing for male attention22 but also a vehicle for pursuing social status in social and workplace spaces.23 High status can also influence access to resources and valuable allies. High status individuals are in demand as friends.
It is also worth noting that high status girls bully lower status ones, though they do so using less overt strategies than boys, sometimes taking on an authority or maternal role for the group, and enforcing equality among the rest at the risk of social exclusion.24 A number of studies suggest that high social status in adolescent girls, especially when indexed by peer perceptions, is linked to dating success, sexual activity, and the use of indirect aggression. It is somewhat less clear whether the status leads to increased aggression (due to lower costs) or that the covert aggression leads to increased popularity. However, some evidence suggests that physical attractiveness results in greater social status, which can be defended through indirect aggression—by keeping attractive rivals from one’s own social circle.25

A wide range of studies have examined aspects of intrasexual competition in women and how they play out in terms of friendship. Across several studies, April Bleske-Rechek and colleagues found that women are less willing to be friends with a woman who is sexually promiscuous; women perceive sexual promiscuity as undesirable in a same-sex friend, they deceive their friends about their own engagement in mate poaching, and they are more likely to be upset by imagined scenarios of a same-sex friend acting sexually available toward their partner, as well as attractiveness enhancement by friends.26 The researchers also found that attractiveness plays a role in the perceptions of rivalry within friendship dyads with pairs both agreeing on who was the more attractive woman (outside judges agreed as well), and the less attractive women seeing more rivalry in the friendship than their more attractive friend.27 Interestingly, at least one study has also shown that these competitive tactics are sensitive to costs in that women are more likely to engage in clothing-based enhancement when with an acquaintance than with a close friend, but even then only when there was a desired male present. This again suggests that intrasexual competition mechanisms are sensitive to possible friend relationship costs and are more likely to be activated when a rival is seen as a legitimate threat (such as being more attractive).28 Despite being in possible conflict over mates or status, women rely on their cooperative friendships and there is a cost to jeopardizing them. 

The underlying reason is that women rely on same-sex friends for help, information, and other forms of social support. As previously described, ancestral mating and residence patterns often created an environment where women needed to build close social relationships with other biologically unrelated women. As a result, women may not only be averse to open competition but also have strong friendship preferences that encourage them to avoid other women who are highly competitive or highly status driven in favor of those who show indications of being kind, committed allies in order to develop valuable cooperative supportive friendships. Our ancestral adaptations for forming friendship ties likely shaped preferences designed to acquire same-sex friends able to help women accomplish evolutionarily recurrent tasks such as competing for status among peers, access to social information and resources, as well as caring for offspring. Recent studies of friend preferences suggest that women (particularly in comparison to men) highly value female friends who provide emotional support, intimacy, and social information.29 And even though women may report that their friends compete with them for attention from desirable men, they also report substantial emotional support as well as mating advice and companionship in mating contexts (bars, clubs, etc.).30

However, success may be best achieved by pursuing both cooperative and competitive goals at the same time. Researchers such as the late Anne Campbell and more recently Tania Reynolds have highlighted how women can pursue both by cloaking their intrasexual competition in prosocial gossip or other relatively low risk tactics that can do reputational damage to a rival while preserving their own reputation and status in their peer group. As discussed previously, the indirect aggression favored by women and girls focuses on social manipulation. In some cases, the victim would never know who the primary aggressor was if the tactics concentrated on social ostracization, stigmatization, and gossip. Rumors can be easily spread without the original source being singled out, protecting their reputation while damaging their target (through accusations of sexual promiscuity, disloyalty, and so on), and shielding them from retaliation. Women utilize their friends to gather and disseminate social information, including gossip about rivals, particularly when those rivals are perceived as a legitimate threat to their status or romantic opportunities. Experimental studies suggest that more attractive rivals wearing more provocative clothing increase women’s tendency to spread reputation damaging information, even when women report liking the target of their damaging gossip, and more so for highly competitive women.31 Preliminary results seem to confirm what many women may have experienced, namely that reputation damaging social information does cause harm to the target, in terms of how men and women may view and interact with them. Further, not all women are as likely to inflict such reputational harms, highlighting why less competitive women and those high in loyalty are seen as more valuable friends.

Cartoon by Oliver Ottitsch for SKEPTIC

This also highlights the possible costs of being seen as someone who engages in overtly malicious gossip. If women prefer friends who are kind and loyal, those who are seen as malicious gossips are less likely to be preferred as friends and may also be seen negatively by desirable romantic partners. The problem then is how to engage in damaging gossip without being seen as malicious. How can sharing such information perhaps be seen in a prosocial light? There are at least two different strategies that may achieve this, perhaps involving a degree of self-deception or lack of awareness of one’s own motivations. The first is to disclose one’s own victimization, which may not be perceived as gossip but rather as sharing a painful experience and request for emotional support. There is evidence that women are more sensitive than men to friendship violations that suggest the friend is not a loyal and kind friend as well as being more likely to disclose such treatment to others. In addition, research has found that first person disclosures of mistreatment were more trusted than third party reports, and female perpetrators of that mistreatment did suffer reputational damage as a result of the victim sharing that narrative.32 These covert victimization narratives can effectively damage the same-sex peers that are targeted for their perceived misdeeds in terms of desirability as a friend and social status. In addition, a number of women articulate that they are sharing this information out of concern—not malevolent intent—for the target of their gossip. Researchers have also explored such concern-based gossip, demonstrating that women endorse more concern versus harm-based motivations for engaging in gossip and that concerned gossipers were viewed more positively by social and romantic partners than were malicious gossipers. 
Interestingly, concerned gossip harmed perceptions of the target as much as did malicious gossip, indicating that negative commentary on an individual that is framed with concern harms the target’s reputation and insulates gossipers from reputation damage (due to lower perceptions of maliciousness).33 The tendency to engage in these forms of gossip may explain the fact that many women report being targeted by gossip while relatively few report spreading negative rumors. There is a degree of self-deception about one’s motivations that makes these effective tactics for covert female intrasexual competition.

The popular neologism for this type of close friend is “frenemy,” a term popularized over the last twenty years or so and defined as a “person with whom we outwardly show characteristics of friendship because of certain benefits that come with the façade.”34 Studies suggest that people maintain such “frenemyships” because relational benefits such as shared social networks, status, and information sharing may outweigh the cost of terminating the relationship—though there may be high levels of covert competition and social manipulation.35 It is clear that same-sex friendships can be some of our most valued and rewarding relationships, ones that are lifelong and help us navigate the challenges of life. Yet they can also be damaging, with frenemies causing harm in the pursuit of their own goals. As a result, choosing same-sex friends wisely is an essential skill, as is the ability to engage in covert competition. In other words … keep your friends close but your frenemies closer.

“It is better to have an enemy who honestly says they hate you than a friend who’s putting you down secretly.”—Unknown
Categories: Critical Thinking, Skeptic

Gentle Reminder: Daylight Saving Time Starts Tomorrow

Skeptoid Feed - Sat, 03/07/2026 - 2:00am

As a gentle reminder that you will have an hour of sleep robbed from you tonight, enjoy this episode on Daylight Saving Time Myths from the archives!

Categories: Critical Thinking, Skeptic

Scientists Grow Chickpeas In Lunar(ish) Soil

neurologicablog Feed - Fri, 03/06/2026 - 5:07am

If we are going to have an enduring presence on either the Moon or Mars, or anyplace off of Earth, we will need to grow food there. It is simply too expensive, inconvenient, and fragile to be dependent on food entirely from Earth. In fact, any off-Earth habitat will need to be able to recycle most if not all of its resources. You basically need a reliable source of energy, sufficient food, water, and oxygen (consumables) to sustain all inhabitants, and the ability to endlessly recycle that food, water, and oxygen.

The ISS has achieved 98% recycling of water, which is what NASA claims is the threshold for sustainability of long space missions. The ISS also recycles about 40% of its oxygen. However, the ISS grows none of its food. It is all delivered from Earth, with a six-month supply aboard the ISS. There are experiments to grow plants on the ISS, and these have been successful, but this is not a significant source of nutrition for the astronauts.
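Some rough arithmetic shows why that 98% figure matters so much. If a fraction r of used water is recovered, each liter launched from Earth supports about 1/(1-r) liters of total use. The daily consumption figure below is a hypothetical illustration, not a NASA number:

```python
# Back-of-the-envelope water recycling arithmetic; figures are illustrative only.

def reuse_factor(recycle_rate):
    """Liters of use supported per liter shipped, if a fraction
    `recycle_rate` of used water is recovered each cycle."""
    return 1.0 / (1.0 - recycle_rate)

def resupply_per_year(daily_use_liters, recycle_rate, crew=4):
    """Liters that must still be shipped from Earth per year for a given crew."""
    daily_loss = crew * daily_use_liters * (1.0 - recycle_rate)
    return daily_loss * 365

print(round(reuse_factor(0.90)))  # 90% recycling: each liter is used ~10 times
print(round(reuse_factor(0.98)))  # 98% recycling: each liter is used ~50 times

# Assuming ~11 liters per astronaut per day (a hypothetical figure):
print(round(resupply_per_year(11, 0.98)))  # a few hundred liters/year to resupply
print(round(resupply_per_year(11, 0.0)))   # ~16,000 liters/year with no recycling
```

Note how nonlinear the payoff is: going from 90% to 98% recovery does not save another 8% of the water budget, it cuts the required resupply by a further factor of five.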

Doing the same on the Moon is not practical for long missions, although we will certainly be doing this for a time. But the goal, if we are to have a lunar base as NASA hopes (NASA plans a lunar base at the Moon’s south pole by 2030) is to grow food on the Moon (and eventually on Mars). On the ISS the big limiting factor is microgravity. The Moon has lower gravity than Earth, but it has some gravity and so that will likely not be a major problem, especially since we can grow plants on the ISS. We can also grow plants hydroponically pretty much anywhere, and I suspect this will happen on any lunar base. But a fully hydroponic system has its limits as well.

Hydroponics on the Moon would be challenging for several reasons. First, it is energy intensive, and energy may be at a premium on a lunar base, especially early on. Second, it requires a precise balance of nutrients in the water, and those nutrients would have to be sourced from Earth. So it doesn’t really solve the problem of dependence on Earth. And third, hydroponics requires a lot of equipment which would have to be shipped from Earth. We could theoretically leach nutrients from lunar regolith, and this might help a bit, but it is also energy intensive and would not be a source of nitrogen.

Therefore – NASA and others are looking into the possibility of growing plants in lunar regolith. This could have multiple advantages. It requires much less equipment, energy, and water than hydroponics. Many of the nutrients would come from the regolith itself. This would reduce dependence on supplies from Earth. A soil-based system can also more easily recycle nutrients from food waste and human waste. Likely, a lunar base would have a hybrid hydroponic and soil-based system. As a side benefit, if such a base grew enough food to feed its human inhabitants, this would also recycle CO2 and produce more than enough oxygen for them to breathe. In fact, they would have to figure out something to do with the extra oxygen to keep it from building up (likely not a problem – oxygen has many uses).

The major hurdle to growing food in lunar regolith is that – well, you can’t. Plants do not grow well in lunar regolith. It lacks nitrogen and other nutrients, it lacks organic matter, and it contains toxic compounds. Experimentally, plants will not grow sufficiently in simulated lunar regolith. But, we can treat the regolith to turn it into soil that can grow plants, and that is the focus of the current study mentioned in the headline. Scientists have used simulated regolith, modified by adding organic matter (vermicompost) created by red wiggler earthworms composting organic waste, and were able to grow chickpeas in the resulting soil. They tried various mixtures, and found that 75% regolith to 25% soil was the limit – more than 75% regolith and the plants would not survive. They also coated the chickpea seeds with arbuscular mycorrhizal fungi before planting. The fungus is symbiotic, increasing the uptake of some nutrients while decreasing the uptake of some toxins like heavy metals.

The experiment was considered a success – the chickpea plants grew, survived, and produced chickpeas. However, they have not yet tested the chickpeas to see if they are safe and edible. They need to be tested for any toxic compounds. This is also not the first such study; there have been dozens of others. They generally show that crops will grow in modified simulated Martian and lunar regolith. But questions remain about how good the simulated regoliths are.

There has also been one study using actual unmodified lunar regolith (brought back by the Apollo missions). In this study the plants grew, but showed signs of severe stress and were morphologically altered. That they grew at all, however, is amazing and encouraging.

What does all this mean for the future of lunar and Martian bases? They will very likely include some growing of food in modified regolith. The implication of the research is that we can likely develop a self-sustaining system in which plants are grown in modified soil using mostly native regolith. These plants produce food and oxygen while using CO2. The soil can then be fertilized using compost from any organic waste generated by the base, including humanure. You can even recycle urine in order to source nitrogen. In short, we can envision a system in which everything is recycled to locally produce food and air. We can also recycle 98% of the water in the system, perhaps eventually even more. You just need to kickstart the system with initial resources, and maybe need to top them off from time to time, but otherwise the system is self-sustaining.

It is also likely that the more the lunar or Martian regolith is used to grow food, the more it will look like Earth soil. The percentage of organic matter will increase, it will develop an ecosystem of microorganisms, and any toxins will be leached out over time. This high quality soil can then be used to expand the farm, and generate more modified soil from regolith.

It is also likely that such a lunar farm would exist underground, probably within a lava tube. This means that all the light will be artificial, but that’s not a big problem – we can do grow lights. Having a farm under a dome on the surface is likely not worth it. This would provide free sunlight, but only half the time, and not in a typical circadian cycle, but roughly 14 days of sunlight followed by 14 days of darkness. It would also be susceptible to radiation and micrometeors. Better to be in the safety of a lava tube, deep underground, and just use grow lights.

Finally, one factor I have not mentioned yet is the potential to alter the plants themselves to adapt them to growing on the Moon, or on Mars or on a space station. Through some combination of cultivation and genetic engineering, we may be able to adapt crops to the lower gravity and the modified lunar soil. This could optimize productivity, safety, and nutrition.

While there is a lot of work to be done, the research so far shows that farming the Moon or Mars is feasible, which is good if we plan to have long term bases on either.

The post Scientists Grow Chickpeas In Lunar(ish) Soil first appeared on NeuroLogica Blog.

Categories: Skeptic

Free Will, Determinism, and Compatibilism: Shermer Responds to Jerry Coyne

Skeptic.com feed - Tue, 03/03/2026 - 2:14pm

On his February 22, 2026 blog the estimable evolutionary biologist, outspoken atheist, and (relevant here) staunch defender of determinism, Jerry Coyne, takes me to task for presenting “a muddled argument” in my case for compatibilism (in an excerpt in Quillette), which was based on a longer chapter in my book Truth: What it is, How to Find it, and Why it Still Matters.

First, let me acknowledge that this chapter in my book is in Part III, “Known Unknowables,” following Donald Rumsfeld’s famous epistemological trilemma that includes “Known Knowns” (things we know that we know), “Known Unknowns” (things we know that we do not know), and “Known Unknowables” (things that are not ultimately knowable).

In this section of the book I include consciousness (the easy problem is understanding the neural wiring; the hard problem that I claim to be unknowable is what it’s like to be the wiring), God (I know of no scientific experiments or rational arguments that can prove its existence one way or the other), and why there is something rather than nothing (what do you mean by nothing, anyway?). So, in a sense, Jerry’s determinist position is, in my understanding of the problem, no more or less likely to be true, depending on how one defines the problem itself. I have defined it in a way that compatibilism works, whereas Jerry has defined it so that determinism works.

Second, this is why I reference the survey by David Chalmers, the philosopher who made famous the “hard problem of consciousness,” along with his colleague David Bourget. They asked 3,226 philosophy professors and graduate students to weigh in on 30 different subjects. Here is what they found regarding the free will issue:

Accept or lean toward:

- Compatibilism: 59.1%
- Libertarianism: 13.7%
- No free will: 12.2%
- Other: 14.9%

Now, on one level, it is irrelevant how many people believe something, along the lines of what Philip K. Dick meant when he defined reality “as that which, when you stop believing in it, doesn’t go away.” Yet, as I argue, there is something revealing about these figures. Namely, if the most qualified people to assess a problem are not in agreement on an answer—and the free-will/determinism problem has been around for thousands of years—it may be that it is an insoluble one, a known unknowable.

Third, therefore, it is entirely possible that a highly qualified, educated, and intelligent thinker like Jerry Coyne can make a compelling case for determinism, while at the same time a highly qualified, educated, and intelligent thinker like the late Daniel Dennett can make an equally compelling case for compatibilism (and Coyne and Dennett have locked horns on this very matter).

I agree with Jerry and Dan that we live in a determined universe governed by laws of nature. But I disagree with Jerry that this eliminates free will, or if you prefer “volition” or “choice” (again, this entire field is, to use Jerry’s term, “muddled” with confusion of terminology). My compatibilist work-around is “self-determinism,” in which while we live under the causal net of a determined universe, we are part of that causal net ourselves, helping to determine the future as it unfolds before us, and of which we are a part. My compatibilist position is based on the best understanding of physics today. Let me explain.

Physicists tell us that the Second Law of Thermodynamics, or entropy, means that time flows forward, and therefore no future scenario can ever perfectly match one from the past. As Heraclitus’ aphorism informs us, “you cannot step into the same river twice,” because you are different and the river is different. What you did in the past influences what you choose to do next in future circumstances, which are always different from the past. So, while the world is determined, we are active agents in determining our decisions going forward in a self-determined way, in the context of what already happened and what might happen. Thus, our universe is not pre-determined in a block-universe way (in which past, present, and future exist simultaneously) but rather post-determined (after the fact we can look back and trace the causal connections), and we are among the myriad determining factors that create that post-determined world.

(Jerry inquires why I didn’t discuss quantum uncertainty in my analysis. The reason is that Dennett debunked this decades ago in Elbow Room: The Varieties of Free Will Worth Wanting, when he pointed out that any such quantum effects that alter other deterministic physical laws would not grant any type of free will or volition, for it would just mean that some percentage of your “decisions” are just random noise in the machine.)

Given the muddle of terminology here, let me bring in the philosopher Christian List and his three requirements of volition from his book Why Free Will is Real:

  1. Intentional agency—the capacity to form an intention to pursue different possibilities;
  2. Alternative possibilities—the capacity to consider several possibilities for action (this is the “could have done otherwise” element);
  3. Causal control—the capacity to take action to move toward one of those possibilities.

As List explains in more detail:

Specifically, we need to know whether what the person did was freely performed, as characterized by the three bullet points above. Was it an intentional action? Could the person have done otherwise? Was the person in control? Or, if what the person did was not freely performed, we need to know whether the person’s free will was at least implicated in the run-up to it: Was there a free decision to get drunk in the first place, for instance? Of course, moral responsibility might well require more than that…but I do take the presence of free will somewhere along the relevant chain of events to be a necessary condition for a salient form of moral responsibility.

Of course, Jerry and other determinists like Robert Sapolsky and Sam Harris could just redefine the problem by saying that even the capacity to form an intention was pre-determined by atoms, molecules, and neurons, as is the capacity to consider several possibilities for action and the capacity to take such action. This is why I quoted Dan Dennett from my podcast conversation with him on this very challenge:

Determinism doesn’t tie your hands, nor does it prevent you from making and then reconsidering decisions, turning over a new leaf, learning from your mistakes. Determinism is not a puppeteer controlling you. If you’re a normal adult, you have enough self-control to maintain your autonomy, and hence responsibility, in a world full of seductions and distractions.

Since determinists often reference people suffering from extreme drug addiction or alcoholism, or those with a brain tumor that led to their bad behavior, like Charles Whitman in the Texas school tower shooting incident, I asked Dan about Sam Harris’s quote that “it’s tumors all the way down,” and Robert Sapolsky’s descriptor that “it’s turtles all the way down.” Here Dennett identifies the error in this line of reasoning:

Well, I like the way you put it very much, Michael, because I think you put your finger on the mistake that Sapolsky is making there. And Sam Harris makes it too. No, it’s not tumors all the way down. It’s machinery all the way down. But there’s good machinery and there’s bad machinery. And if we have bad machinery, then yes, we’re disabled to some degree. But what about people who have good machinery? They’re not disabled. Why can’t we hold them responsible? Now, some people are, alas, through no fault of their own, not responsible for what they do. And that might well include people with terrible, terrible youths, who didn’t get a good upbringing, or who had a horrific upbringing. And so we have to decide, as society, given that this is a dangerous person, what’s the humane, good thing to do? I don’t think there’s an algorithm or a bright line for distinguishing somebody whose brain is good enough from somebody whose brain is a little too disabled. We just have to make the decision.

Dennett then brings home real world examples:

We do it all the time. You’ve got to be 16 to get a driver’s license. Some 15-year-olds would be perfectly safe as drivers. Some 21-year-olds would not. But the law has to have a bright line and so it chooses one. We might argue whether we want to raise it or lower it, the way the drinking age has been raised or lowered, or the way the driving age has been raised or lowered. We have to have a policy and we have to stick to it and we can change it as we learn more and more. But what we don’t do is just say, “Oh, it’s disability all the way down.” No, you’re not disabled, I’m not disabled. I want to be held responsible. I think you want to be held responsible too.

Coyne is unhappy with my invoking of “emergence” and says I’m being rude to him and Sapolsky and Harris in accusing them of “physics envy,” but that’s what it is! Here, for example, is Sapolsky defending his belief that free will does not exist because single neurons don’t have it: “Individual neurons don’t become causeless causes that defy gravity and help generate free will just because they’re interacting with lots of other neurons.”

In fact, billions of interacting neurons are exactly where self-determinism (or volition or free will) arises. This is why I like to ask determinists: Where is inflation in the laws and principles of physics, biology, or neuroscience? It is nowhere to be found, because inflation is an emergent property arising from millions of individuals in economic exchange, a subject properly described by economists, not physicists, biologists, or neuroscientists.

Rather than quoting myself again, I will invoke the geneticist and neuroscientist Kevin Mitchell from his book Free Agents, in which he shows that the determinist’s reductionistic approach to understanding human thought and behavior is not just wrong, but wrong-headed! How?

Basic laws of physics that deal only with energy and matter and fundamental forces cannot explain what life is or its defining property: living organisms do things, for reasons, as causal agents in their own right. They are driven not by energy but by information. And the meaning of that information is embodied in the structure of the system itself, based on its history. In short, there are fundamentally distinct types of causation at play in living organisms by virtue of their organization. That extension through time generates a new kind of causation that is not seen in most physical processes, one based on a record of history in which information about past events continues to play a causal role in the present.

Thus, I conclude that the free will/determinism issue is an insoluble problem because we may be ultimately talking past one another at different levels of causality: the reductionist’s atoms, molecules, and neurons versus the emergentist’s brains, people, and societies.

Choose a side. The choice is yours!

Categories: Critical Thinking, Skeptic

Skeptoid #1030: Testing the Rossi E-Cat

Skeptoid Feed - Tue, 03/03/2026 - 2:00am

This secretive device has been promising to deliver clean, free energy for more than 15 years — but so far nobody's been allowed to examine it.

Categories: Critical Thinking, Skeptic

Free Will Is Real

Skeptic.com feed - Mon, 03/02/2026 - 2:15pm

The question of whether or not we have free will has been pondered by philosophers, psychologists, theologians, neuroscientists, and by many of us in our own conversations and thoughts. Nearly two thousand years ago, the Stoic philosopher Epictetus declared, “You may fetter my leg; but not Zeus himself can get the better of my free will.”1 But Epictetus also believed in a deterministic world where each event is determined by preceding causes. How can this apparent contradiction be resolved?

In the 1940s, Bertrand Russell saw no reason that human volitions would not also be determined in the same way that inanimate processes are determined. Further, he saw the determined nature of volitions as incompatible with a person being the true source of his own actions. Russell supposed that an evil scientist could, by use of psychoactive drugs, manipulate a person to perform certain actions. And this hypothetical manipulation did not seem to Russell so different from normal life, where people are manipulated to do what they do by natural causes outside their own control.2

Fifty years after Russell published his critique of the Stoic notion of free will, several other philosophers made the same argument.3,4,5 Today, the continued quandary contributes to a sustained lack of consensus on free will. According to surveys, most people—including most philosophers—believe in some form of free will, most under the rubric of compatibilism.6,7 Novelist and Nobel Laureate Isaac Bashevis Singer summed up the dilemma, “We must believe in free will, we have no choice.”

However, the debate still rages in the world of academic philosophy, in a broader audience reached by podcasts and popular books written by scientists, and among readers of Skeptic. Here I will try to convince you that free will is real and not an illusion. I’ll argue that far from being exemplars of rationality and skepticism, the main arguments against free will make unjustifiable logical leaps and are naïve in the light of cutting-edge scientific findings. 

Throughout the philosophical literature,8 resolving the question of whether or not we have free will has often revolved around two criteria for free will: 

  1. We must be the true sources of our own actions. 
  2. We must have the ability to do otherwise. 

I argue that humans meet both criteria through two concepts: scale and undecidability. 

Scale and the True Sources of Our Actions 

In an article in The Journal of Mind and Behavior,9 I argued that many of our actions are caused by our wills; that is, by our conscious desires and intentions. This is not disputed by most (what I’ll term) free will deniers. They more often dispute that our wills are free, not that we have wills and that our actions often follow from our wills. Sam Harris, one such determinist with a large general audience, has said that the subjectively felt intention to act is the proximate cause of acting. Harris makes the same basic claim as renowned scientist Francis Crick,10 philosophers such as Bertrand Russell11 and Derk Pereboom,12 and many others. They claim that in addition to the proximate cause (the will), our actions have ultimate causes lurking behind them that are the relevant causes to consider when judging whether or not our wills are free. The ultimate causes beyond and beneath the surface of our wills, they argue, make them unfree. What are these ultimate causes? Harris identifies genetics and environmental influences as “the only things that contrive to produce” his particular will.13 Molecules beyond DNA have also been offered as ultimate causes of our decisions. Biologist Jerry Coyne argued that, “Our brains are made of molecules; those molecules must obey the laws of physics; our decisions derive from brain activity.”14 Robert Sapolsky, a prominent neuroendocrinologist, is publishing a book this year, detailing many such mechanisms that, it is claimed, obviate the role of willed choices.15

My mind does not exist as a molecule nor as a historical epoch, nor as a socioeconomic class. Yet my mind does exist.

What’s wrong with this line of reasoning? Consider the following question as an analogy: Are apples red? Suppose we all agree that apples have color. The question is whether the color is red or non-red. To answer the question, determinists would look beyond the proximate color of the apple. Realizing that the apple is nothing but atoms, they would examine many of the carbon atoms on the surface of the apple. They find that not a single carbon atom is red. Since none of the atoms are red, and the apple is nothing but atoms, they would conclude that the apple can’t be red. The error is that though they agree the apple has a color, they try to examine the nature of the color at a scale (a carbon atom is smaller than the wavelength of red light) where color is incoherent. The fact that they found no redness at that scale shouldn’t lead them to conclude anything about the color of the apple. 

Likewise, the fact that determinists find no personal authorship or freedom in the actions of molecules shouldn’t lead them to conclude anything about the nature of the will. We agree that we have wills, that we have subjectively experienced intentions that influence our actions. The question is whether our will is free or unfree. To look at molecules for the answer is a scale mistake. DNA and neurotransmitters observed at the molecular scale exhibit no will whatsoever. With that knowledge, is it compelling that they exhibit no free will? No. That should tell us that determinists are looking at the wrong scale to find answers about the will, just as looking for answers about redness at a scale where color is not meaningful. 

The right scale for finding answers to the question of apple redness is the apple scale, not the atom scale. The right scale for finding answers to the question of freedom of the will is the agent scale, not the molecule scale. Searching the molecule scale is just one example of this error. There are many other wrong scales where a confused determinist might look for answers about the will. He or she may zoom out temporally into an irrelevant timescale, including the time before the will in question existed. In the above analogy, this would be like conceptualizing the apple as merely a step in a process of agricultural industry. Since agricultural industry is not red, should we conclude that the apple is not red? The question about the will can only find its answers from a scale where the will exists as a will. Expanding the timescale to include the time before the person was born renders the question incoherent. 

If we keep our analysis in the scale where the individual agent exists, not zooming too far in nor too far out in space, time, or level of organization, then the primary and ultimate cause of my actions is me. The will emerges from the complex interactions of many small parts. It’s literally not true to say that it’s caused by any particular small part. It is caused by many small parts, but only when taken together all at once. And that’s the same thing as the whole person. So my thoughts and actions are deterministically caused by me. The molecules of which my brain is made are simply irrelevant to this fact. So I am the true source of my own actions, and there are no other “ultimate” causes. My mind does not exist as a molecule nor as a historical epoch, nor as a socioeconomic class. Yet my mind does exist. René Descartes’ “I think therefore I am” convinces me of this.16 In order to claim that my choices are really caused by a molecule or a historical epoch, one must refer to the dynamics of a scale where I (that is, my mind) cannot be found. Eliminating the mind from the analysis is not a valid way to answer a question about the mind. 

The Ability to Do Otherwise 

There is a temporal asymmetry in the question of whether I could have done otherwise. In the question’s typical form, it is backward-looking. It asks about what could have been in the past, and, at first, it seems like a coherent question. I did one thing yesterday, and we wonder if I could have done something else. But what if we wanted to figure out whether or not I’ll have free will tomorrow? From that temporal angle, the question of the ability to do otherwise stops making sense. In a forward-looking sense, the question becomes manifestly nonsensical. Can I do otherwise in the future? Otherwise? Other than what? Other than the thing I will do? The question stipulates that I will do a certain thing, and simultaneously asks whether or not I can avoid doing that thing. The stipulation contained within the question makes the answer trivial. No, of course I cannot do something other than the thing I will do. In order for the question to have any significance in the forward-looking tense, it must be modified. The question cannot directly stipulate that I will do a certain thing. The question must ask whether or not I can do something other than what I’m expected to do, not other than what I will do.

The will emerges from the complex interactions of many small parts. It’s literally not true to say that it’s caused by any particular small part.

Human choice is temporally asymmetric and must be analyzed as such. This point could be missed without properly situating our analysis at the correct scale. An inappropriate focus on the dynamics of little particles could obscure the truth. The laws of physics that describe or govern the interactions of particles do not specify a direction of time. If we could watch a video of two protons colliding, we would have no way to know whether the video was being played forward or in reverse. This is called time reversal symmetry. This symmetry holds true in a wide variety of particle interactions.17 Time appears asymmetric only at scales where emergent phenomena transpire. Large collections of particles obey the second law of thermodynamics, which is not time reversal invariant. As astrophysicist Matt O’Dowd put it, “Zoom in to individual particle interactions and you see the perfect reversibility of the laws of physics. But zoom out, and time’s arrow emerges.”18 A consideration of scale leads to a recognition of temporal asymmetry in human choice. 

In analyzing the ability to do otherwise, we should consider only a forward-looking ability because choices, by their nature, are forward-looking. We don’t deliberate or make choices about the past. Choices are always about something, and those objects of choice always lie in the future, thus choices are always forward-looking. At the time when a choice is actually made, there is as of yet no “what” as in “Could have done other than what?” I have not already made the choice, so there is no established action to have done otherwise. There can only be expectation of what I will do. If my actions are in principle perfectly predictable, then I do not have the ability to do otherwise in a forward-looking sense. If my choices are in principle not predictable, given total knowledge of the present world, then I do have the ability to do otherwise in a forward-looking sense, which is the only sense that makes any sense. Given the different dynamics found at different scales, the ability to do otherwise needs to be understood as temporally asymmetric; that is, as always forward-looking; as the ability to do something which is in principle not predictable. We do have that ability, and it derives from our self-referential nature. 

Self-Reference and Undecidability 

The fact that I am the relevant cause of my own actions comes with another important implication: I am a causally self-referencing entity. If a molecule were the relevant cause of my action, this would not be true in the same way. The molecule has no capacity for self-reflection, but I do. I can ask myself, “What will I do? What could I do? What should I do? What do I want to do? What would I do if I wanted to do X and should do Y?” Self-referential questions like these affect the choices that I make; and those choices change the self-referential questions that I ask. 

At the relevant scale, self-reference is causally important. I am a system which analyzes its own inputs, character, and potential outputs; generates new outputs based on those analyses; and feeds those new outputs back into itself as inputs which affect the outputs, which affect the system’s character. I am an output of and an input for my own processing. Framing the human self-referential nature in this way brings us to the concept of undecidability. 

A system that exhibits undecidable dynamics cannot be predicted, given complete knowledge of its present state. Computer scientists and mathematicians have proven that this fundamental unpredictability shows up in some algorithmic computations, mathematical systems, and dynamical systems (including physical systems).19 Though an unpredictable dynamical system may evoke the concept of chaos, undecidability is not chaos; it is a different sort of unpredictability. IBM research scientist Charles H. Bennett makes the difference clear: 

For a dynamical system to be chaotic means that it exponentially amplifies ignorance of its initial condition; for it to be undecidable means that essential aspects of its long-term behaviour—such as whether a trajectory ever enters a certain region—though determined, are unpredictable even from total knowledge of the initial condition.20
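Bennett’s notion of chaos, the exponential amplification of ignorance about the initial condition, can be made concrete with a toy dynamical system. The sketch below is my own illustration, not from the article: it iterates the classic doubling map x → 2x (mod 1) using exact rational arithmetic, so the error growth is provable rather than a floating-point artifact.

```python
from fractions import Fraction

def doubling_orbit(x, steps):
    """Iterate the doubling map x -> 2x (mod 1), a textbook chaotic system."""
    for _ in range(steps):
        x = (2 * x) % 1
    return x

# Two initial conditions that differ by an astronomically small amount:
x0 = Fraction(1, 10)
eps = Fraction(1, 2**40)

gap_start = abs((x0 + eps) - x0)  # initial ignorance: 2**-40
gap_later = abs(doubling_orbit(x0 + eps, 30) - doubling_orbit(x0, 30))
# After 30 steps the gap has grown by exactly 2**30: exponential
# amplification of ignorance, which is Bennett's definition of chaos.
```

Undecidability is a different beast: even with the initial condition known exactly, as here, some long-term questions about a system’s trajectory admit no general prediction procedure.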

If a system exhibits undecidability, then it is unpredictable even given total knowledge of all of its constituent parts. Undecidability makes deterministic systems fundamentally unpredictable in principle, not as a result of merely lacking precise measurements. If humans can exhibit undecidability, then we meet the second main criterion for free will: the forward-looking ability to do otherwise. Scientists recently made such an argument feasible by explicating what features of a system give rise to undecidable dynamics. In 2019, Mikhail Prokopenko and his colleagues conducted a comparative formal analysis of recursive mathematical systems, Turing machines, and cellular automata. They come to a clear conclusion: 

As we have shown, the capacity to generate undecidable dynamics is based upon three underlying factors: (1) the program-data duality; (2) the potential to access an infinite computational medium; and (3) the ability to implement negation.21

If humans do have these three properties, then we meet the criteria for undecidable dynamics, which means we can take actions that are fundamentally unpredictable, which means we have the ability to do otherwise in a forward-looking sense, which means we have free will. 

First, consider program-data duality, which in this context is the ability for self-reference. The word “duality” simply refers to the typical distinction between program and data with which we are all familiar. A human at time 1 has a certain overall state of mind, coinciding with a certain overall physical state. The state at time 1 is a program, in that it entails implicit rules about what the system would do, given certain types of data. The streams of perceptions taken in at time 2 are data, which get processed according to the implicit rules. In addition to processing basic sense data, this duality allows for a program (or implicit set of rules encoded in the state of a human) to process other programs as data. For example, a human can process ideas, hypothetical scenarios, mathematical operations, and representations of the self as data (thus self-reference). 
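Program-data duality is easy to demonstrate in any language with first-class functions. In this minimal sketch (my own illustration, with hypothetical function names), the same Python function is first run as a program and then handed to another function that inspects it as a data object:

```python
def shout(text):
    # Run as a program: transforms its input according to its rules.
    return text.upper() + "!"

def describe(program):
    # Treated as data: another program inspects the function object itself.
    return f"{program.__name__} takes {program.__code__.co_argcount} argument(s)"
```

Calling `shout("hi")` executes it as a program, while `describe(shout)` processes it as data, returning "shout takes 1 argument(s)". A human doing self-reflection plays both roles at once: the rules being run and the object being examined.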

The question about the will can only find its answers from a scale where the will exists as a will. 

The next requirement for undecidability is the potential to access an infinite computational medium. The computational medium is the substrate on which the state of the system is represented. In a computer, the computational medium would be the memory and storage. The set of all possible states of the system is called the state-space. For example, the state space of a computer would be the set of all possible configurations of its memory and storage. If we knew that a certain system had an infinite state-space, we could infer that the system has access to an infinite computational medium. 

It can be informally proven that humans have an infinite state-space. How many different thoughts is it possible for a human to have? That question includes sub-questions, such as how many things is it possible for a human to see? The state of your visual perception is one small part of your overall state. Think of the number 74. Now think of the number 74 with your eyes closed. Those two occasions of thinking of 74 occupied two very different points in your state-space because of the difference in visual perception. 

To roughly estimate how many overall states are possible while thinking of 74, we would need to do something like multiply the number of possible visual perceptions by the number of possible auditory perceptions by the number of possible sensations of heat and cold by the number of possible gradations of feeling sadness or happiness, and so on. Also, you may think of 74 while remembering, for example, the time you thought of 106 or 107. And the next time you think of 74, that will be yet another point in your state-space, since you’ll recall that you’ve thought of 74 before. There may be an infinite number of possible states in which you might think of 74. And there are many conceivable numbers other than 74, and many things to think about other than numbers. 

An obvious objection might be that a human and his brain are physically finite. In what sense can an organ that fits inside a skull be infinite? As a starting point, consider the 100 billion neurons that make up the brain. As a simplification, a neuron can be considered to be “firing” or “not firing.” So a simplified brain has 100 billion binary cells. Such an array of cells could instantiate 2^100,000,000,000 distinct patterns of on-or-off activation. That’s a big number. For comparison, there are estimated to be roughly 10^80 atoms in the observable universe.22 The number of atoms in the universe is vanishingly small compared to the number of activation patterns possible in a simplified brain. And what about a real brain? A real brain is made of neurons which are not simply on or off. Some neurons show gradations in voltage and neurotransmitter release, meaning that they have many possible states between “on” and “off.”23
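The comparison of magnitudes can be checked directly. This back-of-the-envelope sketch (mine, using the article’s simplified on/off neuron model) works with base-10 exponents, since 2^100,000,000,000 itself is far too large to represent as a number:

```python
import math

NEURONS = 100_000_000_000  # 10^11 simplified binary (on/off) cells

# log10 of the number of activation patterns, i.e. the exponent of
# 2**NEURONS written as a power of ten (about 3.01e10):
log10_brain_states = NEURONS * math.log10(2)

log10_atoms_in_universe = 80  # ~10^80 atoms, a rough standard estimate
```

Even in the simplified model, the *exponent* of the brain’s state count (about thirty billion) dwarfs the exponent (80) for atoms in the observable universe.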

Undecidability makes deterministic systems fundamentally unpredictable in principle, not as a result of merely lacking precise measurements.

Besides neurons, there are many other variables in the brain that are not captured by the simplified on/off variable. Each neuron can vary in the amount of neurotransmitter in its vesicles ready for release, and the state of the receptors on its soma and dendrites (that is, to what degree they’re blocked by other molecules). There can also be variation in the amount of neurotransmitter that is floating free at any moment in the space between any two neurons. There are minute variables that will likely never be measured yet do, theoretically, make a causal difference. For example, in what spatial direction is each neurotransmitter molecule oriented? A neurotransmitter molecule must fit into a receptor in order to carry on a signal. For the molecule to fit, it must be facing a certain direction relative to the receptor. So the spatial orientation of the molecule before binding must have some nonzero effect on the binding affinity. How many different patterns of analog spatial orientation might trillions of neurotransmitter molecules be capable of? That alone may be infinite. The variable of “firing” or “not firing” does not capture any of these variables. So the actual number of possible overall brain states exceeds 2^100,000,000,000 by an enormous exponent, just as that number exceeds the number of atoms in the universe.

Whether the human state-space is technically infinite or merely practically infinite (larger than any other number computed for any purpose in all of science), it will not be exhausted in the meager 100 years of a human lifespan. This means that the self-referential loops of processing do not need to stop at any predetermined iteration or level of abstraction. So for the purpose of analyzing the choices of a human, the state-space and computational medium are functionally infinite. 

The last element required for undecidability is the ability to implement negation. Negation in this context refers to the ability of a logical system to produce an output which is exactly contrary to the processing which led to the output. It is equivalent to the liar paradox, which is exemplified in a statement such as “everything I say is a lie,” or more formally, “this statement is unprovable.” The liar paradox is a self-referential statement, which can not be judged to be true or false without a contradiction. Self-reference is fundamental to this paradox because the statement refers to its own validity. If humans can implement this paradoxical logic into their thinking, then humans meet this requirement for producing undecidability. The fact that humans came up with the liar paradox thousands of years ago is evidence that humans can perform the logical operation of negation. 
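The way negation plus self-reference defeats prediction is precisely the diagonal argument behind the halting problem, and it can be sketched in a few lines (my own illustration, with hypothetical names; the article gives no code). Given any would-be predictor of whether a program halts, we can build a “contrarian” program that consults the predictor about itself and then does the opposite:

```python
def contrarian_for(predictor):
    """Build a program that asks `predictor` about itself, then negates
    the prediction: self-reference plus negation in one move."""
    def contrarian():
        if predictor(contrarian):  # predicted to halt...
            while True:            # ...so loop forever instead
                pass
        return "halted"            # predicted to loop, so halt at once
    return contrarian
```

For example, a predictor that answers “loops forever” for every program is immediately refuted, because its contrarian simply halts. No predictor, however sophisticated, escapes the construction, which is why negation is the final ingredient of undecidability.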

Conclusion 

All three factors underlying the capacity to generate undecidable dynamics are present in humans. First, we exhibit program-data duality when we process ideas, hypothetical scenarios, mathematical operations, and representations of ourselves as objects of thought. Next, we have the potential to access an infinite computational medium. This is demonstrated by the fact that we can think of any one of an infinite number of objects of thought, which implies an infinite state-space, which implies an infinite computational medium. Finally, we have the ability to implement negation, demonstrated by the inception of the liar paradox in the minds of humans. If these three elements are sufficient to generate undecidable dynamics, then humans are capable of generating undecidable dynamics, which means we cannot be accurately predicted. And that means we have the ability to do otherwise in the forward-looking sense. 

Figure 1. Relational map of concepts. The truth of each concept supports the truth of the concepts downstream from it. This diagram illustrates how the concepts described throughout this article contribute to the overall reality of free will.

Figure 1 shows the relationships between the concepts discussed in this article. An understanding of the human agent at the scale where conscious humans actually exist leads to recognition of the self as the source of one’s actions, recognition of the relevance of temporal asymmetry to human choice, and recognition of self-reference as causally relevant to human actions. Self-reference, in combination with access to an infinite computational medium and the ability to implement negation results in undecidable dynamics. This entails the ability to do otherwise in the forward-looking sense, which is the only sense that makes any sense when temporal asymmetry is taken into account. The resulting total picture is that we (humans) meet two criteria for real free will: the forward-looking ability to do otherwise and being the source of one’s own actions. 

Viewing human agents as whole humans instead of as molecules makes it clear that humans are the cause of their own actions, and also leads to a focus on the human features such as self-reference, that generate undecidable dynamics. The Stoic philosopher Epictetus was right. Neither Zeus, Bertrand Russell, nor the scientists recapitulating the latter’s argument 77 years later can diminish our free wills.

Categories: Critical Thinking, Skeptic

Flexible School Start Time

neurologicablog Feed - Mon, 03/02/2026 - 6:06am

A recent study shows pretty clearly that high schoolers benefit from a little extra sleep. We will get to the study in a bit, but first I want to note that this information is not new. Teenagers tend to stay up late, and yet we make them get up super early to be at class, often by 7:00 AM. This is not good for their health or their learning. So why do we do it?

The primary reason is logistical, which is tied to cost. School systems have tiered start times for elementary, middle, and high school because this allows them to use the same fleet of buses and drivers for all three. Starting high school later, at the same time as middle school, would mean increasing the size of the fleet. There are other stated reasons, but honestly I think this is the real reason and everything else is post-hoc justification. The other reasons are really tradeoffs that benefit some people but not others: a parent with a long commute can drop off their high schooler on the way to work, there is more time for after-school clubs, sports, and jobs, and some older teens can get home early to watch their younger siblings until their parents get home.

This all points to a main reason our civilization is frustratingly sub-optimal (to be polite). The default is to follow the pathway of least resistance – everyone just does what’s best for themselves, with people in power doing their best to solidify more power, with vested interests putting the most consistent effort into making the system work for their narrow interest. What is often lacking is any kind of systemic planning, and when that does occur (even with the best intentions) the law of unintended consequences often results in a net wash or even detriment. The world is complex, and we are just not very good at managing that level of complexity. What we need are institutions that can accumulate evidence-based institutional knowledge to incrementally make things work better. But that’s a lot of work, and it’s too easy for vested interests to sabotage such efforts.

I’m not trying to be nihilistic – nihilism is part of the problem, and is often used as a weapon by those vested interests to short-circuit attempts to make things work better for everyone. But we have to understand the nature and scope of the problem, and we need the energy and dedication to sustain efforts to make things work better. Such efforts can work, and historically they have made things better. But it’s a constant struggle.

OK, back to the study. In this study they gave students the option to start class up to an hour later. For example, school would officially start at 8:30 but also offer an optional module at 7:30 for those who wanted to come early and end early. They found:

“Under the flexible model, 95% of students used the later-start option. The median SST was delayed by 38 minutes (n = 711, β = .57, 95% confidence interval [.53, .62], p < .001, R2β = .52), with corresponding significant delays in wake times and increased sleep duration on school days. Among the paired subsample, SST delay was significantly associated with increased school day sleep duration (n = 205, β = .51 [.05, .94], p = .03, R2β = .02). No worsening was observed. Improvements included reduced problems falling asleep, fewer students with clinically low health-related quality of life, and higher scores in mathematics and English.”

Now that I am retired I have personally experienced (yes, this is just anecdotal) the benefits of sleeping in longer. I no longer even set an alarm – I wake up when I feel like it. I am still working basically full time doing all my science communication activities, but mostly on my own schedule. My sleep quality and daytime alertness have significantly improved. I highly recommend it. But more importantly – the evidence clearly shows that this is generally true – being able to sleep in longer results in better sleep and performance.

So it seems like a no-brainer – why can’t we do this? I think the key here is flexibility, which can be paired with increased flexibility at work, especially for parents. Flexible work start times and the ability to work from home, even if only 1-2 days a week, results in a huge improvement in life satisfaction. Then families will have the ability to make their schedules work. Let’s prioritize sleep, health, and educational effectiveness first, and make the system work for these goals. It makes no sense for a school system to sacrifice the well-being and education of their own students in order to meet their own logistical needs.

The obvious response to this question is – well, it’s all about money. We have to be realistic. School systems operate with limited budgets and have to make the most of the resources they have. If they have to maintain a larger bus fleet, where will that money come from? I get it. This is reality. My question is – who made this decision? Did we as a society, or even just the affected parents, make this decision collectively, with adequate information to understand its implications? We may just have to accept the fact that running an effective school system is more expensive than we might want it to be, and that cutting costs in this way is simply not an acceptable option.

If we prioritize the health and education of students, I think we will find there are other elements of the system that can accommodate the change. This is where integrated municipal planning comes in. Investing in public transportation and subsidizing it for students, for example, will give students more options and reduce the strain on a dedicated school bussing system. Facilitating carpooling among students is another option. More parental flexibility helps. Make schools more local and walkable/bikeable, and organize safe group walks to and from school. Optimize and disperse drop-off areas to limit bottlenecks and reduce congestion.

This requires thoughtful planning, but mostly an unwillingness to simply sacrifice students to simplify logistics and reduce costs.

The post Flexible School Start Time first appeared on NeuroLogica Blog.

Categories: Skeptic

I, Man: Reflections on Boxer Imane Khelif’s Admission That He Is Male

Skeptic.com feed - Fri, 02/27/2026 - 12:10pm

In what will certainly fail to go down as the news of the century, Imane Khelif, male boxer and women’s boxing Olympic medalist, has finally publicly admitted in a February 2026 interview that he is indeed biologically male. A large part of society specifically chose not to see it. And another part chose not to care that, eighteen months ago, two men were given a free pass to an abuser’s dream: the ability not only to assault women on an international stage, but to be celebrated for it.

The 2024 Paris Olympics gold, silver and bronze medals, as designed by Chaumet (Credit: LVMH)

Boxers Imane Khelif of Algeria and Lin Yu-ting of Taiwan entered the 2024 Olympics as a sex they were not, and they did it with the full knowledge of the IOC. According to an official release by the International Boxing Association in July 2024, both men had failed more than one sex test for female eligibility in 2022 and 2023 and had been disqualified from female competition. For their fraud, both were rewarded with gold medals at the Olympics. One female boxer, Angela Carini, had to make the agonizing decision to forfeit rather than participate in the dangerous charade. How surreal it must have been to make that unbelievable yet necessary call: to go against not only everything one has trained for, but everything one stands for as an athlete, professional, and disciplined fighter.

For anyone inclined to give Khelif the benefit of the doubt that perhaps he just didn’t know: if one is raised as female and never begins menstruation at puberty, the reason why will absolutely be examined. Once illness and female conditions are ruled out, one is left with the “condition” of being male. In this case, a male with 46,XY 5-alpha reductase deficiency, as outlined in a medical report of his drafted back in 2023 and later leaked to Le correspondant.


Ignoring such disorders of sexual development in order to adhere to traditional physical sex ideals is fairly common practice in conservative and religious countries, and African nations have a history of scouting such male individuals for the purpose of dominating women’s sports, to the overwhelming ignorance of the global athletics audience. As a result, most are still under the incorrect impression that athletes like Caster Semenya, the South African runner and two-time Olympic gold medalist, are simply women with higher testosterone, and are absolutely unaware of the reality that these are athletes with a male karyotype. Semenya was confirmed in proceedings before the Court of Arbitration for Sport to have 5-ARD, a genetic condition resulting in the inability to develop typical external male genitalia.

These disorders are unbelievably unfortunate for a multitude of medical reasons, beyond being tokenized and weaponized through identity politics. However, no one’s personal condition is ever a legitimate reason to disadvantage or endanger another demographic.


Nevertheless, such practice also happens to explain why Khelif, a Muslim in a Muslim nation, was conveniently free from traditionally mandated female attire, and able to be so comfortably hands-on with his fellow male trainers. And beyond that undisguisable situation, one must also genuinely ask why he never chose to appeal the International Boxing Association’s 2023 disqualification for failing to meet female criteria, or why he refused to participate in subsequent female competition that requires testing for sex.

So he knew. His family and community knew. He just counted on larger society not bothering to care. And on that, he wagered well.


Because despite the protests of the female boxers, certain boxing association officials, and the few genuine feminists speaking out against the unbelievable misogyny being broadcast globally, many decided instead to protest calling a spade a spade. Widespread social media commentary from the ideologically captured claimed that Khelif and Lin were simply masculine-looking women who shouldn’t be insulted for appearances beyond their control. That it was (stop me if you’ve heard this before) right-wing propaganda and Nazi TERF bigotry to suggest that such supposed gender nonconformity made them male. The pick-me cherry on top, of course, is that it was peak misogyny to call them men at all.

But this was only to be expected when the mainstream media “reporting” on such a farce fully fed this break from reality. During the 2024 games, at very best, legacy organizations legitimized Khelif as the incorrect sex, and at worst denigrated anyone pointing out the truth. From the official Olympics reporting that ignored the situation entirely, to BBC and NYT accounts that comfortably crowned Khelif a woman, to USA Today fluff that reduced a serious slap in the face to females to “unhinged controversy,” the overwhelming majority of outlets at best passively accepted and at worst actively furthered the grotesque farce unfolding in front of the world.


Yet beyond entrenched media preferences is another incentive as well. This was, and is still, today’s gender misogyny in action. Ironically, those who consider truth too “offensive” for the prioritized male in question never seem to consider the unimaginable offense for the women, who must not only unfairly face a recognizable man, but are expected (as women usually are) to simply take it with grace and a smile. So, concessions will be made to spare male feelings in the name of “inclusion,” ultimately excluding women from their very own opportunities.

Chromosomes, anatomy, and human sight are disregarded in favor of false passport markers and old photos of pink dresses, because apparently that is the only acceptable (and desired) proof of what “woman” means. It is the inevitable outcome of a societal ideology riddled with complacency for female safety and dignity.

Fortunately, despite a seemingly ingrained forfeit of biological honesty, the tide is beginning to turn, with the release of necessary reports and a new, supportive political landscape. The once sacrosanct gender ideology is now beginning to be questioned as a whole in the mainstream, no longer only by brave feminists. We can see the effects of this in the athletic realm through changes in various governing organizations, including World Boxing itself, which is beginning to demonstrate the bare minimum of competition integrity by mandating sex testing for eligibility. And as the IOC relies on individual sport federations to set eligibility standards, this nightmare will hopefully one day all but completely fade into history.


As it tends to go, many who put on blinders then will now be miraculously blind to the harm they supported. Khelif’s unforgettable selfishness will get purposely memory-holed, along with their own unforgivable enablement in this feint of reality. But as USA Today once wrote in support of Khelif and wild disregard for truth, this indeed “can never happen again” … just not in the way that they meant.

Imane is and was always exactly as his own name states. And now that the rest of the world can no longer pretend that they do not know, they will have to finally decide whether they still believe men are entitled to women's earned opportunities, or if they are truly for women after all.

Categories: Critical Thinking, Skeptic

Martian Astronomers

Skeptic.com feed - Fri, 02/27/2026 - 7:25am

A review of Parallel Lives of Astronomers: Percival Lowell and Edward Emerson Barnard by William Sheehan (Cham, Switzerland: Springer, 2024. Hardcover, 687 pages).

Of the two astronomers whose lives and accomplishments are chronicled in William Sheehan’s Parallel Lives of Astronomers, Percival Lowell was far better known than Edward Barnard. Lowell is famous for having championed the idea that the canals on Mars were built by intelligent beings. The origins of the idea that there were canals on Mars lay in the Italian astronomer Schiaparelli’s report of “canali” on the red planet in 1877. The word is best translated as “channels” but was popularly mistranslated as “canals.” Since in the latter part of the 19th century canals were being built all over the world by intelligent humans, the implication was that the “canals” on Mars were built by intelligent aliens.

A major theme of the book is that Barnard and Lowell were in many ways opposites of each other. Barnard grew up in poverty in Nashville, Tennessee. He became interested in astronomy as a nine-year-old working in a photography studio. He received some academic training in astronomy and was a superb and objective observer, though, unlike Lowell, his mathematical skills were comparatively weak. Lowell came from an extremely wealthy Boston family, and his interest in astronomy began in college. He graduated from Harvard in 1876 with honors in mathematics; the topic of his graduation speech was the nebular hypothesis of how solar systems come together from collections of gas and dust around a sun. These contrasts (and others) between Lowell and Barnard provide an intimate view not only of the two men, but of much of the history of astronomy in the late 19th and early 20th centuries, especially regarding Mars, because the two men stood at opposite ends of a raging debate among astronomers and the general public over the nature of the canals.

From a skeptical point of view, the most interesting organizational concept that Sheehan uses is the distinction between top-down and bottom-up processing. He uses this to contrast the approaches used by Lowell and Barnard in their interpretations of what they saw through their telescopes and later in photographs. Lowell was a largely top-down man, starting with an idea and then searching for evidence to support it. Barnard continued to make observations until he believed he had enough data to come to a conclusion. Lowell focused his astronomical interests largely on the canal debate, while Barnard was one of the most productive observational astronomers of his day. The top-down versus bottom-up distinction allows Sheehan to use basic concepts in perception to explain the differences between the two men in their positions on the reality of the canals.

Perception is a function of two very different processes that together usually lead to an accurate perceptual experience of the world. Bottom-up processing refers to the incoming sensory inputs from the various sensory systems. These, alone, are not sufficient to specify what is actually out there in the world. Top-down processing refers to the expectations, beliefs, and knowledge that we all have about the perceptual world. These are needed for the brain to interpret and make sense of the information that is brought in by bottom-up mechanisms. Almost always these two sources are in accord and the world is perceived accurately. 


However, sometimes expectations, beliefs, and knowledge can be wrong, and the incoming sensory input may be distorted or incomplete. Under these rare circumstances, people can and do actually perceive things that are not there even though they are not intoxicated or psychologically impaired. Thus flying saucers, sea monsters, Bigfoot, and the like are perceived when the sensory input is minimal, often in fleeting glimpses, at night, and in the distance. The Loch Ness Monster never swims up the Inverness River through downtown Inverness at high noon on a pleasant sunny day for vacationers to witness. Final perceptual experiences are a function of the sensory inputs as well as expectations and beliefs. Thus, perception is said to be a constructive process and one that can produce incorrect experiences. The canals of Mars fall directly into this perceptual-cognitive model.

Before reading the book, I had the mistaken impression that when looking through a telescope, one saw a fairly stable image of whatever object the instrument was focused on. Nothing could be further from the truth. The image of a planet as seen through a telescope is just a tiny disc of light. To make matters worse, that image is far from stable, especially for the telescopes in use in Lowell and Barnard’s time. The book makes clear how unstable those images could be. Momentary changes in the characteristics of the air above a telescope would make the image waver, fade in and out of focus, and change in other characteristics from moment to moment. 

Even when “seeing” was excellent, all one saw were successive glimpses of the target object. Then those glimpses had to be constructed by the brain into a coherent impression of what the target was. Between the series of fleeting images hitting the retina of the observer and the final drawing or description of what the observer saw, the constructive nature of perception has ample room to create perceptual experiences of structures (i.e., canals) that were not there in reality. 


Astronomers had known since the early 19th century that such non-sensory factors could influence perceptual judgments in their observations. Thus, different observers reported different times at which a planet or star crossed a line in a telescope reticule. These differences were recognized by the term “personal equation.”  But the idea that perception was constructive in the sense that honest observers could perceive structures that were not present had to wait until at least the start of the 20th century before it was recognized.

Following his Harvard graduation, Lowell was expected to go into his family business of highly profitable textile mills. As an intelligent, curious young man he found that prospect stultifying. To make matters worse, he was involved in a serious scandal. He had proposed marriage to a daughter of the sniffy Boston upper crust, but then withdrew the proposal, something that simply wasn’t done in that time and place. As a result, Lowell was effectively banned from that elite circle. In response, in the early 1880s he travelled to Japan and Korea, wrote several books on Asian culture, and became part of the Korean government delegation to the United States (in 1883). He continued to live in Asia until 1893.

That Lowell continued his interest in astronomy before actively pursuing the mystery of Mars was demonstrated by the “astronomical references and imagery [that] are scattered throughout the Far Eastern books and if gathered together would make a long list” (p. 97). That interest turned into a lifelong obsession in 1892 when he read French astronomer Camille Flammarion’s book La Planete Mars et ses Conditions d’habitabilite, in which the author argued that the “canals” were evidence of an advanced civilization. Lowell was wealthy enough to fund the creation of the Lowell Observatory in Flagstaff, Arizona, which opened in 1894. 

In his autobiographical writings, Barnard noted that he became interested in the stars while walking home from work in the dark. One star “seemed to be slowly moving eastward among the other stars.” This struck him as unusual because the other stars “seemed all to keep to their same relative positions,” (p. 121) while this one did not. This was clear evidence of an early careful observer who had, unknowingly, seen not just another star but the planet Saturn. When he was 19 years old, Barnard was given a book written by the Reverend Thomas Dick, who believed that all the planets of the solar system were inhabited. The book included simple star charts that Barnard “rushed to compare with what he could make out in the small patch of sky visible from the open window of his apartment” (p. 126). The book, a later fellow astronomer and friend wrote, “awakened a thirst for astronomical knowledge which … never ceased to be controlling” (p. 126). Around 1880 or 1881, Barnard was given a simple telescope by an older friend at the photography studio where he was still working. He later received a scholarship to Vanderbilt University, but never finished his degree. Such things were less important in the late 19th century, and in 1887 he obtained a position at the Lick Observatory outside of San Jose, California, one of the earliest mountain-top observatories, positioned to rise above atmospheric turbulence and local city lights.

During their long careers, both Lowell and Barnard observed Mars. Their different approaches—top-down versus bottom-up—permeated how they interpreted and represented the image that fell on their respective retinas. Figure 1 (from page 291 in the book) shows this difference beautifully. On top is Lowell’s version of what he saw in 1894, while Barnard’s representation from the same year is below. Overall, the images are similar in general outline. However, Lowell has added to his drawing numerous lines, which he contended were the canals, and details not present in Barnard’s. This is a classic example of constructive perception. Lowell saw similar geometric patterns on Mercury and Venus, although he apparently did not attribute them to intelligent design. 

Figure 1. Top: Lowell’s map of Mars from 1894, published in Mars (1895), Plate XXIV; a new projection by Joel Hagen, for comparison with the Barnard map below. Bottom: A map of Mars compiled on the basis of Barnard’s unpublished drawings from 1894, produced by astronomer-artist Joel Hagen. The projection has been chosen to match the map of Lowell on p. 227, so as to emphasize the striking differences. (Credit: Joel Hagen)

While Lowell was seeing things that didn’t exist, Barnard was busy with more fruitful astronomical activities. In 1895 he became a professor of astronomy at the University of Chicago, which gave him access to the Yerkes Observatory in Wisconsin. It was there that he spent the rest of his life and professional career. Wisconsin is not known for warm winters, and the observing platform of the telescope at Yerkes was not heated. Nonetheless, Barnard would observe almost compulsively, night after night, even in the bitter cold. He was famous for having extremely good eyesight, which made him an excellent observer. During his long career he was an active member of the astronomical community. He made numerous important discoveries including over 15 comets and the fifth moon of Jupiter. Barnard’s Star, whose motion relative to the sun he determined in 1916, was named after him in 2017, although it had been recorded photographically in the 1880s. It is a red dwarf that is one of the four stars closest to Earth.

Perhaps Barnard’s most important contribution is the explanation for what are known as dark nebulae, sometimes called “Barnard objects.” When the Milky Way is looked at through a telescope, there are large dark areas that appear to contain no stars. Why certain areas of the galaxy didn’t contain any stars was a mystery. In fact, these areas do contain stars, but their light is blocked by huge clouds of interstellar dust. The understanding of the nature of the dark nebulae provided an important insight into the evolution of stars and planets. Another major accomplishment was his photographic atlas of portions of the Milky Way. The work, which is stunningly beautiful, took years to compile and wasn’t published until 1927, four years after his death in 1923.

During his active career Barnard did not ignore the controversial issue of the canals on Mars. He photographed Mars through the great telescope at the Yerkes Observatory in 1909, when Mars was “in opposition” to the Earth, as close as it would be for many years to come and an ideal time for observation and photography. These photographs showed no canals. Barnard was not as vocal in the great canal debate as some other astronomers. It was the brilliant Greek-French astronomer Eugene Antoniadi (1870–1944) who became Lowell’s most serious detractor. Sheehan includes the often acrimonious debates between Lowell and Antoniadi in the story of the contrasts between Lowell and Barnard.


During the time that Barnard was active in astronomical research and writing, Lowell was not inactive. However, his activities and interests were heavily focused on the issue of the canals. He lectured frequently and wrote widely defending his view that the canals were real. He, too, took photographs of Mars through the telescopes at the Lowell Observatory in Flagstaff. But constructive perception works just as well with photographs as it does with images seen through a telescope.  

Both Lowell and Barnard made contributions to astronomy: Barnard as a careful scientist and Lowell as a popularizer who inspired many to an interest in astronomy, including Robert Goddard and Carl Sagan. In terms of fiction, Lowell’s argument that the canals were the products of intelligent Martians led to the writings of H.G. Wells and Edgar Rice Burroughs. Sheehan’s book goes into great, but never boring, detail about the lives and work of both men. The book is beautifully illustrated. There are pictures not only of the protagonists as they, to paraphrase Shakespeare, “strut and fret their hour upon the stage,” but of their drawings and photographs of Mars and important locations in their stories. It is handsomely produced, with copious references and notes. Unfortunately, the publisher did not provide an index. Still, with the 150th anniversary of Schiaparelli’s observation coming in 2027, Sheehan’s book is especially resonant.

Categories: Critical Thinking, Skeptic

America’s Alien Problem: Why We Ignore Common Sense in Favor of Belief

Skeptic.com feed - Thu, 02/26/2026 - 1:34pm

In the span of just weeks, two major U.S. releases captured the nation’s attention: Bugonia, Yorgos Lanthimos’s darkly playful alien tale, and The Age of Disclosure, a documentary staged like science fiction, where whistleblowers insist that nonhuman craft exist and the government is concealing the truth about alien contact. Their timing is not accidental. Both arrived on the heels of the first public congressional UFO hearings in over fifty years, in the middle of a nationwide spike in reported sightings. The All-domain Anomaly Resolution Office (AARO) documented 757 new UAP (Unidentified Anomalous Phenomena) incidents between May 2023 and June 2024—more than in many previous years combined—and some analysts now describe 2025 as the most active reporting year in history. We are not just witnessing reports of the unexplained; we are witnessing the psychic temperature of a country—its anxieties, conspiratorial hunger, and collective imagination—made visible.

At the end of Bugonia, when the alien empress finally speaks—exactly as the conspiracy theorist had foretold—she delivers her verdict to her crew, all of them dressed in strange, animal-like furred spacesuits: “We believe it is over. They have had their time. And in their time they have imperiled the life they share, and so we have decided their time will end.” The aliens then waddle away in eerie unison, and the empress punctures the protective Earth bubble. What follows is an instant apocalypse: humanity wiped out in a scene that resembles the visual language of the Rapture—a sudden and absolute religious experience. 

Poster for Bugonia (2025), directed by Yorgos Lanthimos. Image courtesy of Focus Features/CJ ENM

But The Age of Disclosure, Dan Farah’s latest sci-fi-styled documentary production, framed as a serious exposé of government UFO secrecy, ultimately reveals nothing new. It offers no evidence, only a procession of interchangeable older men linked to government or aerospace who repeat secondhand stories about witnesses who said they back-engineered crashed spaceships, recovered “biologics” (the new fancy term for aliens), and looming threats. At the watch party I attended, a few of us sat nonplussed at the end because, although the film insists danger is near, we wondered: danger from what, exactly? 

Why are aliens capturing our cultural imagination now? 

Most alien or UFO reports1 involve sightings of lights, orbs, or spheres that move oddly or swiftly and vanish silently—a pattern that has remained consistent over time. Some observers also report cigar-shaped objects or triangular craft. Many of these phenomena are reported worldwide. In 2025, the National UFO Reporting Center had already logged 2,174 UFO/UAP reports by midyear, a sharp increase from 1,492 reports during the same period in 2024. This rise may reflect the establishment of the AARO and renewed government attention, which have made reporting easier and less stigmatized, not to mention nudging people to look up more and notice what was previously missed (Starlink satellites are often reported as UAPs). Increased public awareness through media coverage, documentaries, and congressional hearings also encourages people to report sightings they might previously have ignored. This explanation, of course, presumes the alien sightings are real. Are they? 

An alternative interpretation—commonly referred to as the Psychosocial UFO Hypothesis—traces back to Swiss psychologist Carl Jung, whose 1958 work Flying Saucers: A Modern Myth of Things Seen in the Sky, proposed that UFOs reflect psychic and cultural realities, not extraterrestrial ones.2 Jung suggested that flying saucers emerge in the collective imagination during eras of social disorientation, technological upheaval, or existential threat, functioning as modern myths that carry the weight of collective anxiety and longing. Rather than evidence of literal beings from another world, UFOs become symbols of fear, hope, salvation, or invasion—a projection of what the psyche cannot resolve. From this view, alien encounters are psychologically real even if not physically tangible: They reveal something true about the human mind and the cultural moment, not necessarily the cosmos. 

It is unsurprising that UFO sightings are on the rise today. Scholars have observed that UFO reports tend to increase during periods of societal crisis—such as existential uncertainty, geopolitical tension, or rapid technological change—reflecting collective anxieties rather than objective phenomena.3 In times of social distress and distrust, people are more likely to assign meaning or threat to ordinary or ambiguous events. Some psychological-cognitive theories suggest that ambiguous stimuli—lights in the sky, radar blips, or unexplained objects or events—are interpreted through cultural narratives and heightened pattern seeking.4 This is sometimes called the “low information zone,” in which blurry photographs and grainy videos stimulate the mind to fill in the missing spaces or connect the dots into meaningful patterns of an extraterrestrial nature. 

We live in a time of deep distrust in politics, corporations, and the media, which makes people question what they are told. Heightened fears from draconian COVID policies (“they closed the schools, restaurants, and parks so the pandemic must be really bad!”), hypermediated climate collapse (“if we don’t do something in twelve years all is lost”), threats of rising fascism (“Trump, MAGA!”), threats of an AI takeover (“the singularity is near!”), and rising nihilistic political violence (“burn it all down and start over!”) have created a pervasive state of anxiety. This fear, combined with distrust of formerly trusted institutions, fuels conspiracy thinking, including beliefs about aliens. With few reliable frameworks to navigate uncertainty, many turn outward for explanations or as distractions from personal responsibility. 

In Bugonia, Lanthimos suggests that conspiracy beliefs often emerge as a response to real pain and injustice. The film’s central conspiracist grew up with an addicted, neglectful mother and later lost her to a medical experiment. His belief in aliens and corporate malevolence is not baseless; it is rooted in trauma, exploitation such as pharmaceutical misconduct and corporate neglect, and social alienation. In this way, the film does not simply mock conspiracists as “crazy,” but explores the social and psychological conditions that give rise to such beliefs. 

To these we can add two more conditions contributing to Americans’ increasing belief in UFOs: the decline of religious faith and a reduced reliance on instinct and common sense. 

As traditional faith wanes, many turn to belief systems grounded not in evidence or instinct but in ideology and narrative—UFO conspiracies being a prime example. Belief is migrating from shared moral and religious frameworks to culturally mediated myths that promise meaning and belonging. In this sense, aliens function as a modern sacred avatar, a substitute for God, mystery, and existential structure. 

The complexity of contemporary society has been linked to a reduced dependence on intuitive judgment and common sense, making individuals more susceptible to being drawn into ideology and conspiracy theories.5 This effect has been amplified over the last two decades by our deep immersion in the online world, coupled with persistent global political instabilities. These factors have ushered in an era of “alternative facts” (on the right) and “postmodernism” (on the left) for many Americans, where the core assumption is that there is more than one truth or no truth at all. 

This mindset—that what you see may not be true, or what you don’t see is probably true—has fundamentally contributed to the widespread and enduring belief in a U.S. government cover-up of UFOs. Thus, even though most individuals have never personally seen or experienced a UFO firsthand, they are readily pulled into the conspiratorial narrative and accept it primarily because of the powerful surrounding cultural and ideological framework. It’s ideology over instinct. 

Common Sense and Instinct 

Evolutionarily, humans developed heuristics to make rapid decisions in uncertain environments—recognizing patterns, detecting threats, and navigating social hierarchies. These shared mental shortcuts form a basis of common knowledge, allowing groups to act cohesively, from identifying safe foods and interpreting emotional cues to cooperating in collective tasks. This intuitive knowledge also extends to social cognition: Humans can rapidly infer intentions, predict behavior, and synchronize actions with others, often without conscious reasoning. In this sense, common knowledge is not arbitrary but adaptive, providing a shared framework that increases survival, cooperation, and cultural stability. As Steven Pinker argues, common knowledge is foundational to human society because it enables social coordination and complementary decision making.6 Much of this understanding operates beneath awareness, signaled through involuntary behaviors like laughter, tears, blushing, eye contact, and blunt speech—embodied expressions of the intuitive knowledge that binds us. 

Paradoxically, people often engage in elaborate efforts to obscure, ignore, or deliberately avoid acknowledging common sense and, tragically, their own instincts. The tendency to avoid recognizing widely shared knowledge is well-documented in psychology and sociology. This behavior, known as information avoidance, allows individuals to shield their happiness, preserve existing beliefs, or maintain social standing. Research also shows that information avoidance can serve as a coping mechanism in situations of uncertainty or threat, helping people reduce cognitive dissonance and emotional discomfort.7

People sometimes engage in information avoidance not merely to protect their beliefs or personal happiness, but to align with a group ideology and secure a vital sense of belonging. According to Social Identity Theory,8 individuals derive meaning, status, and self-esteem from the groups they belong to; consequently, they may reject information that threatens the group’s worldview. Specifically, people may set aside their personal instincts or empirical skepticism to be part of a community—be it political, spiritual, ideological, or conspiratorial—that claims to possess special, hidden, or insider knowledge. Aligning with a group that asserts access to deeper truths, secret insights, or a more “awakened” understanding often feels more meaningful and elevating to one’s identity than simply accepting one’s ordinary, concrete life.9

In addition, people often bypass common sense by relying on cognitively unfalsifiable ideas—describing aliens as “trans-dimensional,” “telepathic,” or “unperceivable by ordinary minds”—which place the phenomenon in a realm where no evidence could ever contradict it. This creates epistemic shielding, where the claim becomes immune to challenge: Any lack of proof is simply reframed as expected, since the phenomenon supposedly exists beyond ordinary perception or logic.10 This often involves setting aside common-sense reasoning—such as the implausibility of coordinated alien visits, the immense logistical challenges of secrecy, or the extreme hazards of space travel. By suspending these rational doubts, individuals can fully engage with the group, strengthening both cohesion and commitment to shared beliefs like UFOs. 

System Justification offers another cogent explanation for why people override instinct, even without empathy-driven motives. This psychological process leads individuals to defend and reinforce the prevailing system or worldview, even when it may run counter to their own interests.11 In the context of UFO belief, the dominant “system” is no longer governmental authority but rather the conspiratorial worldview itself. Institutional distrust has become the cultural status quo, so accepting the narrative of a cover-up functions as a way of justifying and maintaining that system.12 Believing the government hides alien knowledge signals social intelligence and alignment with the modern order of suspicion, whereas trusting official explanations can appear naïve or even irrational—suggesting that disbelief in conspiracy has become more deviant than belief itself. 

A further reason that common sense is bypassed in UFO narratives stems from a psychological profile that makes the alien stories uniquely meaningful to the participants. The key players in The Age of Disclosure documentary, reflecting the wider UFO conspiracy community, are largely older White men, often from the Baby Boomer generation, including many former Cold War intelligence and military personnel. They were trained for decades to perceive patterns, secrets, and threats everywhere, interpreting anomalies like radar returns, classified flights, and black-project aircraft. This environment rewarded suspicion, dramatic interpretation, and assuming hidden motives—a mindset that doesn’t simply switch off upon retirement. Once retired, many lose their high status and sense of purpose; they miss being “in the know” and having a mission. UFOs restore all of that, allowing them to be relevant again by “exposing secrecy,” “protecting humanity,” and “warning people about what’s coming.” This powerful way of restoring identity and meaning creates a significant blind spot for rational facts or instinct, cementing a narrative where they matter again. 

A more common-sense approach—one uninfluenced by ideology—would align closely with how neuroscientists are beginning to frame the perception of unidentified objects. A trio of researchers, for example, recently posed this question: How can we “explain why healthy, intelligent, honest, and psychologically normal people might easily misperceive lights in the sky as threatening or extraordinary objects, especially in the context of WEIRD (western, educated, industrial, rich, and democratic) societies”?13 These researchers draw on predictive-coding theories of perception, which suggest that the brain constantly generates top-down predictions based on prior experience. When sensory input is ambiguous or weak, such as distant lights in the sky or other celestial stimuli, perception becomes highly subject to existing beliefs and expectations. Frohlich, Christov-Moore, and Reggente argue that in Western contexts, where skepticism and distrust of institutions are amplified, psychologically normal people are more likely to interpret ordinary phenomena as potentially extraordinary, thereby reinforcing their mistaken beliefs and fostering the acceptance of conspiratorial explanations.14

Illustration by Marco Lawrence for SKEPTIC

Decline of Traditional Faith 

Another factor reinforcing the heightened interest and belief in UFOs is the dramatic decline of traditional faith systems in the U.S. and globally, especially in Europe.15 We are living through a moment of profound spiritual and cultural upheaval, marked by widespread secularization. Data from the Pew Research Center’s Religious Landscape Studies (2007–2024) clearly illustrate this shift in the United States: The share of Americans identifying as Christian has dropped significantly from 78 percent in 2007 to 62 percent in 2023–2024. Much of this shift is driven by the growth of the religiously unaffiliated—those identifying as atheist, agnostic, or “nothing in particular”—the “nones.” Furthermore, a stark generational divide exists, as only approximately 46 percent of younger Americans (ages 18–24) identify as Christian, contrasted with about 80 percent of older generations. Related measures of religious practice have also declined, including the share of Americans who believe in God “with absolute certainty,” pray daily, or attend regular services. 

These trends are not isolated to the U.S., reflecting global secularization that affects major world religions, including Christianity, Islam, Judaism, Buddhism, and Hinduism. A 2023 analysis of the World Values Survey data found that age and income are among the strongest predictors for decreasing religiosity, confirming that modern economic and demographic shifts correlate strongly with this decline.16 The consequence of the decline in traditional religious structures (churches, organized faith, and institutional religion) is the creation of a spiritual and cultural void. This vacuum can then be filled by alternative spiritualities, existential searches, or other belief systems that offer meaning, structure, and a sense of the transcendent—including UFOs, alien-mythologies, “otherworldly” beliefs, and nature mysticism. 

As younger generations grow up without strong religious roots, their search for meaning and a comprehensive moral framework often shifts toward political, psychiatric, or identity-based frameworks rather than centuries-old orthodox religions. While these new frames of belief are influenced by contemporary cultural anxieties, they tend to be less stabilizing and reassuring than traditional faith and wisdom. Studies of the culture wars indicate that, instead of offering equanimous guidance, these ideologies frequently contribute to an “us versus them” positionality, demanding allegiance to a specific side rather than fostering broad acceptance or spiritual integration.17,18,19

A Desire for Faith 

When social anxieties intersect with waning religious practices, a spiritual void emerges, which faith, in its deepest sense, functions to fill. Paul Tillich described faith as the recognition of what is ultimately important in life, providing meaning and courage in the face of despair.20 Faith counters the secular demand to find fulfillment solely in the material present by offering a framework of ultimate value that extends beyond the empirical, fostering trust that reality holds order, purpose, and goodness beyond human comprehension. While it does not remove suffering, faith situates pain within a larger narrative of redemption or spiritual growth, offering hope, belonging, and the resources to endure the “unlivable self.” In this light, participation in alien beliefs can, in part, be interpreted as a search for a similarly powerful spiritual experience. 

For Carl Jung, the emergence and widespread cultural interest in alien experiences and UFOs were a form of spiritual projection. He posited that this phenomenon arose from a collective longing for something transpersonal—a desire for meaning and connection beyond the material world—driven largely by the decline of traditional spiritual practice and the sociopolitical existential crisis in the West. Jung argued that, regardless of their physical reality, what UFOs primarily represent to people is the archetype of salvation or integration, serving as a potent symbol of hope that something external might save humanity from its own crises. 

This powerful psychological need quickly spilled into the social sphere: By the early 1950s, the world saw the beginning of UFO religious communities, almost all of which were tied to the emerging New Age Movement.21 This established a distinct, if unconventional, religious community that has since expanded into a diverse landscape of cults, spiritual groups, and online movements. These modern mythologies offer their adherents not only an answer to the cosmic riddle but also a sense of belonging, a moral framework, and a promise of ultimate transformation—functions historically reserved for organized religion. 

We are not just witnessing reports of the unexplained; we are witnessing the psychic temperature of a country—its anxieties, conspiratorial hunger, and collective imagination—made visible.

The world of UFOs deeply echoes religious communities, particularly in how the phenomenon inherently divides people into believers and nonbelievers, subsequently demanding an alignment with a collective ideology or community for those who accept the narrative. In particular, abduction narratives strongly resemble spiritual transformation stories, carrying powerful mythic, symbolic, and spiritual overtones that speak to a profound human need. These experiences often involve narratives of a calling, being chosen, initiation, and transformation, placing the individual in touch with a greater, transcendent, and mysterious unknowable power.22 In this way, both alien abduction and traditional spiritual experiences—such as deep prayer, apparitions, mystical visions, or spiritual possession—can be viewed as powerful modern myths. They serve as psychic containers for deeper psychological realities, suggesting they both function as potent cultural frameworks for expressing profound feelings of internal conflict, such as disconnection, trauma, or identity crisis, and a fundamental longing for transcendence or an escape from the confines of a prescribed self. 

If participation in UFO belief systems satisfies a spiritual longing, what’s the harm? Perhaps none. However, when such belief requires individuals to suppress instinct, embodied perception, and common sense, the stakes shift. We risk creating tension with the fundamental architecture of evolutionary biology and psychology. To override these deeply ingrained perceptual systems in favor of a socially constructed narrative demands a significant cognitive sacrifice—one that erodes the innate trust in our instincts that has historically kept us alive. Over time, this override may dull the very intuition evolution shaped to help us discern reality from story. 

We cannot expect young Americans to find faith in religious institutions, many of which are still working to repair the trust of congregants they long ago disenchanted. Yet faith—faith in something, anything—is essential to begin filling the emptiness left by a lack of meaning. Without faith in a larger cosmic order—be it a sense of karma, a belief in something greater, or a feeling of being loved or held by a transcendent whole—our younger generations are far more likely to attach to an ideology introduced to them on social media, one that often leaves them unattached to an embodied, instinctual reality. 

Into this void step alien narratives.

Universal Respiratory Vaccine

neurologicablog Feed - Thu, 02/26/2026 - 7:34am

The news is abuzz with talk of a potential universal respiratory vaccine. It’s definitely interesting research, but may not be what you think. In this case, the reporting has been quite good on the whole, but the headlines can be misleading if you are not deeply steeped in the complexities of mammalian immunity. Let me start with the biggest caveat – this is a mouse study. This is therefore encouraging pre-clinical research, but we are still years away from translating this into an actual vaccine. Also, most interventions that are encouraging at the animal stage don’t make it through human testing. So don’t expect any revolution based on this treatment anytime soon. Having said that – there is great potential here.

To understand how this new approach works, let’s review some basics of immunity. (Note – the immune system is incredibly complex, and I can only give a very superficial summary here, but enough to understand what’s going on.) Mammalian immune systems have two basic components, innate immunity and adaptive immunity. The adaptive immune system is probably what most people think about when they think about the immune system and vaccines. Adaptive immunity targets and recognizes specific antigens (such as proteins) on pathogens like viruses, bacteria, or fungi. Antibodies attach to these antigens, flagging them to be targeted by immune cells like macrophages which then eat them. The macrophages in turn display the antibody-flagged antigens on their surface, triggering a greater and more specific reaction to those specific antigens. Adaptive immunity is considered slow (it takes days to ramp up), specific (it targets specific antigens on specific pathogens) and durable (it has memory, and will react more quickly and robustly to the same pathogen in the future).

By contrast, the innate immune system is fast, non-specific, and short-lived with no memory. The innate immune system consists of physical barriers, like skin and mucosa, and immune cells that target pathogens based on broad patterns that are not learned but are innate (hence the name). Among these pattern detectors are Toll-like receptors (TLRs – the name Toll comes from the German for “fantastic”, allegedly exclaimed by a researcher upon its discovery). The Toll gene was first discovered in fruit flies, and similar genes were later discovered in mammals, hence “Toll-like”. TLRs detect pathogen-associated molecular patterns (PAMPs), which are highly conserved features of types of pathogens. In other words, a TLR might recognize a snippet of RNA as a pattern typical of RNA viruses, or proteins that tend to occur on pathogenic bacteria. “That looks like an RNA virus, so let’s attack it.”

While these are distinct and complementary parts of the immune system, they are also highly tied together. Components of the innate immune system trigger the adaptive immune system, which in turn stimulates innate immunity. In fact, many traditional vaccines contain adjuvants which stimulate innate immunity in order to boost adaptive immunity.

The new vaccine (technical name – GLA-3M-052-LS+OVA), a nasal spray given in three doses to the mice being studied, stimulates innate immunity, not adaptive immunity. Normally, after exposure to a pathogen or even an allergen, innate immunity will be heightened for a few days, then return to normal. The nasal vaccine extends this heightened innate immunity in the lungs and respiratory system for three months. It does this by containing synthetic molecules that bind to TLRs, tricking them into responding as if a pathogen were present. The vaccine also contains a protein called ovalbumin, which stimulates T-cells of the adaptive immune system, keeping them resident in the tissue. These T-cells help maintain the heightened state of activity of the innate immune system. According to the authors: “Protection was mediated by persistent ovalbumin-specific CD4+ and CD8+ memory T cells that imprinted alveolar macrophages (AMs), enhancing antigen presentation and antiviral immunity.”

The trick of stimulating innate immunity was partly borrowed from the tuberculosis BCG vaccine, which works both by triggering adaptive immunity and by stimulating the innate immune system. Researchers studied how the BCG vaccine accomplishes this and applied that knowledge to the new vaccine.

In the study the researchers compared mice treated with three doses of the nasal vaccine to untreated mice and found that the treated mice were protected for at least three months from SARS-CoV-2 and Staphylococcus aureus. In addition, the vaccine protected mice from other coronaviruses (SARS, SCH014), another bacterium (Acinetobacter baumannii), and allergens.

In the best-case-scenario where this vaccine technology is safe and effective in people, what can we expect? Well, I don’t think this would replace any traditional vaccines based on adaptive immunity. Like the two halves of the immune system itself, it will likely be complementary to traditional vaccines. Traditional vaccines can provide years and sometimes decades of specific protection from common pathogens, and there is no substitute for that. Also, this vaccine works on respiratory infections only, although it may be possible to adapt this approach to other types of infection.

What an innate immunity-based vaccine provides is a good first line of defense against an outbreak, epidemic, or seasonal infection. This would require many millions of doses (or even billions, in the context of a pandemic) being available at short notice to provide several months of resistance to an entire population at the beginning of an outbreak or a seasonal infection (like the flu). It remains to be seen if this vaccine reduces the risk of spread or just the severity of infection. If it reduces spread (which is plausible, if viruses, for example, don’t have a chance to reproduce in large numbers), it could short circuit many respiratory epidemics.

Imagine if this vaccine were available at the beginning of COVID. It could have provided significant protection, reducing death and morbidity, and allowed us time to study the virus and develop adaptive vaccines. That is one of the benefits – it provides broad spectrum non-specific defense. We don’t necessarily need to know anything about the pathogen for this vaccine to work, so it is ideal for novel respiratory outbreaks. It also means we don’t need to track new strains of a virus, and that pathogens cannot easily adapt to this immunity by simply mutating their proteins.

There is a lot of research ahead to study the safety and effectiveness of this vaccine in humans. Even once a vaccine is approved, more research is needed to study long-term effectiveness and potential side effects. One thing to consider, for example – there is likely a reason that evolutionary forces did not favor keeping our innate immunity on high alert at all times. There is often a downside to immune activity, which is largely why you feel like crap during an infection. It’s not the bug, it’s your body’s reaction to the bug. The worst-case scenario is that this approach increases the risk of autoimmunity.

Having said that – we are not living in the world in which we evolved. We are living in a globally connected world of over 8 billion people, often in close proximity to potential animal reservoirs of pathogens. The selective pressures are likely now different than they were when we were living in largely isolated tribes. But we don’t have to wait for evolution to work its slow grim task, we can tweak our immune systems with science and technology to provide some enhanced protection when and where we need it.

The post Universal Respiratory Vaccine first appeared on NeuroLogica Blog.
