
Skeptoid #1035: How Disastrous Are Declining Birth Rates?

Skeptoid Feed - Tue, 04/07/2026 - 2:00am

Popular influencers claim birth rates are declining disastrously. How true is that, and is it a disaster?

Categories: Critical Thinking, Skeptic

What Is Your Favorite Color?

neurologicablog Feed - Mon, 04/06/2026 - 7:10am

Many people might find this to be an easy question and a simple concept – what is your favorite color? In fact, it was used as the quintessential easy question by the bridgekeeper in Monty Python and the Holy Grail. But it is a good rule of thumb that everything is much more complicated than it may at first appear, and this is no exception. We recently had a casual discussion about this topic on the SGU, and it left me unsatisfied, so I thought I would do a deeper dive. Perhaps there is a neuroscientific answer to this question.

The panel differed in their reactions to the question of favorite color (we were just giving our subjective feelings, not discussing research or evidence). Cara felt that “favorite color” is largely arbitrary. Kids are asked to pick a favorite color, which they do (under pressure), and then often just stick with that answer as they get older. She also felt the question was meaningless without context – are you referring to clothes, cars, house color, or something else? Jay was at the other end of the spectrum – he has a strong affinity for the color orange, which gives him a pleasant feeling. The rest were somewhere in between these two extremes.

I knew there had to be a science of “favorite color”, which I thought might be interesting. Indeed there is – and it is interesting.

First, what is the distribution of favorite color across the world and across demographic groups? Blue is, far and away, the most common favorite color in most countries across the world, so the preference seems to be very cross-cultural. It is also the favorite across age groups and genders. The second most common favorite is either green, red, or purple. Brown is almost universally the least favorite color. Gender has an effect on favorite color, with more women favoring pink, and reds in general (but still preferring blue overall). Republicans still prefer blue over red, but more Republicans prefer red than Democrats do. There are country-specific differences as well. Red is a higher preference in China than in many other countries, for example.

The demographics of favorite color are clues to potential underlying causes. Is favorite color purely a cultural phenomenon? It does not seem to be, though there are some minor cultural influences. Is it a neuro-biological phenomenon? It could be, but not purely. If it is partly neurological, what does it track with? How about personality? The evidence is, in short, mixed, and reveals the hidden complexity of seemingly straightforward questions.

Most people think of color preference as referring to hue, but saturation and brightness have just as much influence on color choice. When you consider all aspects of color, the picture becomes more complex. Extroverts, for example, prefer bright colors. Adults tend to prefer more saturated colors. The results of studies, therefore, depend on how the questions were asked. An overall summary: you can make some statistical predictions about the big five personality traits (extroversion, openness, agreeableness, neuroticism, and conscientiousness) from color choice. But this is one factor among many, and the predictions depend on multiple variables (the context, the object, and all three color attributes). There does seem to be an actual phenomenon here – an influence of personality on color choice – but it’s mixed and complicated.

So far we have mostly just been describing who has which color preferences, but not why or how. We have some clues from the demographics of color choice, but no answers. Given everything above, it is still possible that color choice is entirely learned, or partly learned but mostly an inherited trait. What does the evidence say about this question? Well, there is no definitive answer yet, but there is a strong theory that fits the evidence well – the ecological valence theory (EVT).

According to this theory, color preferences emerge from the totality of our life experience, mainly through emotional association. We have a partly associative memory, in that we tend to remember things partly by associating them with other things that occur together. This includes color. If green things tend to be associated with good experiences, then we will begin to associate the color green with good feelings. According to EVT, blue is the most common favorite color because we associate blue with clear skies and clean water, which tend to be associated with happy experiences. We tend to associate brown with feces or rotten food, so it is consistently the least favorite color.

The strength of EVT is that it allows for biological, cultural, experiential, and personality factors all at once. They all can affect our associations with colors and contribute to how they make us feel. Some associations may be natural, like blue skies, green vegetation, and putrid yellow and brown. Others can be purely cultural, like pink for girls or purple for royalty. Different personalities would be drawn to different colors that tend to be associated with congruent moods, like vibrant reds for extroverts, or calming blues for introverts. And then there are likely to be some quirky individual factors as well – extreme individual experiences, or social group sorting (which color wedge do you typically play in Trivial Pursuit?).

Does neuroscience add anything to this picture? So far, neuroscientific studies have elucidated some of the underlying brain regions that relate to color preference and processing, but don’t really provide any insight into why color preferences exist. Here is the most relevant study I could find:

These results demonstrate that brain activity is modulated by color preference, even when such preferences are irrelevant to the ongoing task the participants are engaged in. They also suggest that color preferences automatically influence our processing of the visual world. Interestingly, the effect in the PMC overlaps with regions identified in neuroimaging studies of preference and value judgements of other types of stimuli.

Sure – color preferences and experiences happen in the brain, and involve a brain region generally involved in value judgement. This is a piece of the puzzle, but it does not by itself address the cause of color preferences, just some of the neurological mechanisms.

There is still a lot to learn about color preferences. The evidence does not support the notion that color preference is a purely arbitrary phenomenon, but rather that it has a psychological, cultural, and neurological basis. Much research remains to be done, however, on the nature and causes of color preferences.

 

The post What Is Your Favorite Color? first appeared on NeuroLogica Blog.

Categories: Skeptic

The Good Side of Virtue Signaling

Skeptic.com feed - Fri, 04/03/2026 - 12:58pm

Humans have to signal just like birds have to sing, beavers have to build, bears have to hibernate, fish have to swim, and wolves have to howl. Such behaviors are how those animals make themselves legible to one another. Social life under uncertainty forces them to externalize what matters, like fitness, temperament, and willingness to cooperate. Humans face the same basic problem with more complicated traits like temperament, virtue, skill, and intelligence—traits that aren’t directly observable. So people must signal them to coordinate and to survive. Humans are a highly cooperative species that will work with almost anyone on almost any task if they are trustworthy and reliable enough as a partner—it is our evolutionary superpower.

The temptation, especially in the age of social media, is to treat signaling as a pathology of people who need attention and lack good taste—a symptom of moral decadence or attention addiction. So much so that, until recently, the term virtue signaling was a favored insult. But even if much of what gets called virtue signaling is shallow or cheap, the underlying practice is a structural feature of social life. If people never signaled their moral commitments, reliability, or competence, strangers would have no basis for trust, coalition, or cooperation. In such a world, hiring and romance, to give a couple of examples, would be harder and more expensive. Signaling is what we get instead of omniscience.


Start with the simplest case—other people—who are, at best, partial strangers to one another (and even to themselves). People do not directly observe the counterfactual behavior of other people—things they would have done under different conditions. People do not directly perceive the strength of their willpower, their long-run loyalty, or their competence once the training wheels are off. What we see are limited slices and outcomes. Under those conditions, reputations are a necessary compression device—a running summary of the signals someone has sent over time. And the more costly and stable those signals are, the more weight observers give them.

This is why temperament, virtue, intelligence, and skill are surrounded by behavioral scaffolding. Calmness under pressure is signaled by how people behave in cramped and stressful situations. Trustworthiness is signaled by patterns of keeping or breaking commitments when defection would have been tempting. Intelligence is signaled by the difficulty of problems one can reliably solve. Skill is signaled through portfolios, track records, and performances that are costly to fake and time-consuming to build. None of this guarantees accuracy, but it does allow for some sorting in a world where full information is off the table.


Less obvious, but crucial for understanding why signaling is inescapable, is that we are also partial strangers to ourselves. Introspection does not give us the same kind of access to our dispositions that we sometimes imagine. People often misjudge their own resolve, generosity, loyalty, and competence. They discover who they are by seeing what they actually do in situations that impose real costs. In that sense, signaling is a way of generating evidence for ourselves when first-person access is unreliable.

This is self-signaling. When people make public commitments, take on demanding projects, or voluntarily incur costs that close off tempting alternatives, they are creating a record that will constrain their future self. Once they have logged enough signals of a certain kind—being the colleague who always shows up prepared, the partner who follows through, the person who sees difficult tasks through to completion—it becomes psychologically and socially harder to act out of character. The signals help stabilize identity over time in the face of temptation and fatigue. They are, in effect, side bets placed against one’s own future wavering.

A great deal of moral psychology can be reinterpreted through that lens. Consider moral outrage, which at first glance looks like a purely internal reaction: an emotional upsurge in response to perceived wrongdoing. It does not feel strategic from the inside. But when researchers isolate outrage and punishment in controlled experiments, a different pattern appears. In a set of studies, Jillian Jordan and David Rand find that people express more outrage and are more willing to punish selfish behavior when they lack the opportunity to signal their virtue through direct helping. When opportunities to share resources or incur costs for others are blocked, participants “compensate” with condemnation instead.

The key twist is that these experiments are anonymous, one-shot interactions. No one in the subject pool can build a usable, long-term reputation off their choices. And yet people behave as if punishment and moral condemnation will function as signals of trustworthiness and moral commitment even when, in fact, they will not. This is what Jordan and Rand call a “reputation heuristics” account: our minds are calibrated for environments in which reputation usually is at stake, so those heuristics continue to operate even in artificially anonymous contexts. Moral outrage, on this picture, is one of the mechanisms by which we communicate that we can be counted on to side with the cooperative, norm-abiding majority.


The usual complaint is that this makes outrage “fake,” as if any reputational logic behind an emotion automatically discredits it. That assumes one either really cares or is performing for an audience. The data suggest that the impulse to signal one’s moral commitments and the felt experience of moral concern are tightly coupled. People want to be good and be seen as good, and the psychology that bundles those aims together is what actually enforces many norms in practice. That does not mean every expression of outrage is proportionate or wise. But it does mean that trying to strip all signaling out of moral life would be like trying to strip chirping from the life of birds.

The same work also helps explain why some moral signals function like moral junk food. In other writing, I have compared low-cost moral outrage to ultra-processed snacks: engineered to satisfy strong cravings with minimal nutritional value. Outrage, especially in online environments, is often cheap, fast, and highly visible. Donating significant time or money, bearing interpersonal costs to repair harm, or changing one’s own habits in light of a moral insight are expensive, slow, and often invisible. When opportunities for high-cost moral behavior are scarce or blocked, the cheaper substitute predictably fills the gap. People must still demonstrate that they care about fairness, harm, and loyalty. When costlier moral actions are constrained, cheaper signals in the form of moral outrage are often substituted.

Economically speaking, when the cost of supplying a valued good rises, people shift to substitutes. That is the structure behind the experimental results: when participants are denied the chance to help, they lean harder on condemnation. The signaling need remains, and the portfolio of available signals changes. Craving for reputational evidence is built deeply into how cooperation and trust function.


And not just in the moral domain. Employers face self-selection problems: applicants know far more about their own character and competence than hiring committees. In romantic settings, each person knows more about their own long-term intentions and vulnerabilities than the other. Friends, business partners, and political allies all confront versions of the same problem. Under those conditions, signals are one of the main ways both sides try to reduce the risk of pairing with the wrong person.

Degrees, certificates, job titles, grants, and publications are costly to accumulate and relatively hard to fake at scale. They are imperfect, often biased toward certain kinds of talents, but serve an indispensable sorting function in the absence of omniscience. Employers rely on them because the alternative is guessing. The same goes for how people signal temperament and character in everyday life. Someone who consistently reacts to provocation with restraint is signaling about their temperament.

Romantic life adds an extra layer because the signals here often involve foreclosing alternatives. A willingness to invest significant time, to endure periods of difficulty, or to incur costs for a partner’s sake are all signals that burn resources that could have gone elsewhere—what economists call opportunity costs. A promise that leaves all options open is cheap. A sacrifice that rules out other paths sends a clearer message about one’s priorities. This is a reminder that, absent signals, no one would know what sort of partner they were dealing with until it was too late, and the incentives against pairing up would be even stronger.

Seen in this light, the analogy with nonhuman animals reappears in a less sentimental form. Birds sing because individuals that failed to advertise themselves effectively left fewer descendants. Beavers that did not build or maintain dams paid the price. Social animals whose signals did not reliably track underlying traits found their cooperative arrangements collapsing. Humans occupy a different ecological and cultural niche, but the basic information problem is the same. Only the content of the signals has changed.


So when people insist that humans should stop virtue signaling and be authentic, it is worth noting how much that demand presupposes a world where others already know what we are like, a world without asymmetric information or risk, a world where employers, partners, and friends do not need to make educated guesses. That is not the world we inhabit. People must signal temperament, virtue, skill, and intelligence because they are partial strangers both to others and to themselves, and social life requires bets about who can be trusted with what. Signaling is the price we pay for cooperation under uncertainty.

Categories: Critical Thinking, Skeptic

Brain As Receiver Is Still Wrong

neurologicablog Feed - Thu, 04/02/2026 - 5:48am

I have a love-hate relationship with TikTok, as I do social media in general. It is a great communication tool and allows scientists and science communicators to get their content out to a larger audience cheaply and easily. If you know how to use the internet and social media as a resource, you can find a video about almost any topic. I particularly love the “how to” videos. And yet these applications are also used (mostly used) to spread nonsense and misinformation, or at least inaccurate, misleading, or overly generalized information. The low bar of entry cuts both ways.

As a result I spend part of my time as a communicator with my finger in the dike of social media pseudoscience and science denial. For example, this individual feels his insights into the workings of the human brain need to be shared with the world. His musings are based entirely on a false premise, his apparent misunderstanding of what neuroscientists understand about brain function. He begins with the nicely vague statement, “scientists have discovered”, followed by a completely incorrect statement – that thoughts come to our brain from outside the brain.

Before I get into this old “brain as receiver” claim, I want to point out that this format is extremely common on TikTok in particular and social media in general. This is more worrying than any individual claim – the culture is to present some random nonsense in the format of “isn’t this crazy”, or with a cynical tone implying something nefarious is going on. Such authors may or may not believe what they say; they may just be trying to amplify their engagement with a total disregard for whether what they are saying is true. They may even be a full Poe – knowing that what they say is nonsense. Either way, they feel it is appropriate to spend the time to record and upload a video without spending the few minutes that would be needed to check whether what they are saying is even true. The very platform they are using to spread their nonsense often has all the information they need to answer their alleged questions. The culture is profoundly incurious, intellectually vacuous, lacking all scholarship or quality control, and seems to value only engagement. Thrown into the mix are true believers, grifters, and those who display classic symptoms of some form of thought disorder. This is “infotainment” taken to its ultimate expression.

Back to the video at hand – the author begins with an unsourced vague claim, but one that is not uncommon in the “new age” subculture, that our brains are mostly just receivers for a vast intelligence that comes from somewhere outside the brain. He states this as if it is a scientific fact. He then goes on to muse about some new age nonsense regarding being on a higher or lower “frequency” and therefore attracting good thoughts or bad thoughts. Is there any plausibility or evidence for the notion that some of the information that comes to our brain originates somewhere outside the brain? By this I do not mean through the known senses, but that part or all of the “mind” is a non-physical phenomenon, and the brain is a conduit for the mind, interfacing it with the physical body.

This is one formulation of what is known as dualism, which I have written about here many times – that mind and brain are not entirely one phenomenon, but two. My position, which tracks with the consensus opinion of neuroscientists, is that the mind is what the brain does. There is only the brain. The mind is not software running on the brain – it is the brain, simply describing our perception of what the brain is doing. That sci-fi trope of a “consciousness” being transferred from one body to another, or into an object, is simply impossible. Just as you cannot “upload” yourself into a computer. At best you can make a copy that replicates some of your mental functions, but it is in no meaningful way you. You are your brain.

How do we know this is true? This is, far and away, the best inference from all available data. While the brain is incredibly complex and we are still learning many of the details, it is now entirely clear that the brain is a living information processing machine. Neurons connect to each other, forming circuits and networks that can store and process information. These networks correspond to specific functions, and those functions can be altered or destroyed by changes to the corresponding physical circuits in the brain. We have known this for over a century – if you have a stroke that damages part of the brain, you lose that part of your functionality. And this does not only relate to physical things like movement, but also to thought, such as the ability to understand language, to reason spatially or mathematically, to process visual information, etc. This can even have bizarre manifestations, like your ability to feel as if you own or control parts of your body. As our technology has improved we have been able to map the circuits in the brain to finer and finer detail – and throughout the entire process nothing has emerged to challenge this core understanding of neuroscience. The mind is the brain.

There are also many ways in which there is a lack of findings to support any alternative interpretation. For example – no part of the brain is an actual receiver for any kind of external signals, of any frequency. We perceive the world through our sensory organs, and there is no “extrasensory” perception. There is no functionality without a corresponding neurological cause. There does not appear to be any limit to our ability to alter mental function by altering brain function. There is no evidence for mental function outside of brain function. In short, when we look at the brain we find wetware, a living computer, not a receiver of any sort.

All of this information, often patiently explained by experts, is freely available on the internet. All someone has to do is, before they post a video of their incredible opinions, ask a very simple question – is what I am about to say actually true?

The post Brain As Receiver Is Still Wrong first appeared on NeuroLogica Blog.

Categories: Skeptic

Yes, We Have No Free Will

Skeptic.com feed - Wed, 04/01/2026 - 4:03pm

I have long argued that free will, as understood by most people, is simply an illusion, and I recently criticized Shermer’s view that it is not. In response, Shermer says I’m mistaken, but concludes that the issue of free will versus determinism is “an insoluble problem because we may be ultimately talking past one another at different levels of causality.”

In fact, the problem is not one of levels of causality, but of semantics: Shermer has made up a new definition of free will that’s very different from the one most people hold, and different as well from definitions offered by other “compatibilists”—people who argue that yes, human decisions and behavior are determined by the laws of physics, but we still have free will anyway. Here, I argue that Shermer’s compatibilist definition of free will is incoherent and incapable of refutation. In contrast, my form of determinism, adhering to purely physical causation of thoughts and behaviors free from any human “will,” is scientifically testable—and, so far, supported by lots of evidence.  

But first let’s look at our respective definitions. I adhere to biochemist Anthony Cashmore’s definition of free will:

… I believe that free will is better defined as a belief that there is a component to biological behavior that is something more than the unavoidable consequences of the genetic and environmental history of the individual and the possible stochastic laws of nature.

In this definition there’s a “will” that doesn’t involve physical processes but can alter decisions. Another way of saying this is the way most people understand free will: “If you could replay the tape of life and return to a moment of decision at which everything—every molecule—was in exactly the same position, you have free will if you could have decided differently—and that decision was up to you.” This in turn can be condensed to the view that “you could have done other than what you did.” This concept is called “libertarian free will” or “contra-causal free will.” 

Surveys in different countries show that most people indeed think we live in a world in which behavior is not deterministic, and our actions are controlled by an intangible, nonphysical “will.” The prevailing view is that we could have done other than what we did. 


This concept is rejected by physical determinists like Shermer and me. Determinism does, however, allow different outcomes in a moment of decision, but only insofar as the laws of physics are non-deterministic and inherently unpredictable. The only physical laws with such unpredictability are those of quantum mechanics (some physicists suggest that quantum events are deterministic in a way we don’t yet understand). For example, it is possible that you ordered a steak rather than salmon because, somewhere in the neurons of your brain, a quantum event took place when you gave your order. But most physicists and biologists think that quantum effects don’t apply on the macro scale of human behavior, where classical mechanics probably rules. And, at any rate, quantum effects cannot buttress free will, for we cannot will the movement of electrons. Libertarianism says the decision must be up to you, not up to probabilistic movements of particles. 

Like most compatibilists, Shermer is a determinist, asserting that, “I agree with Jerry and Dan [Dennett] that we live in a determined universe governed by laws of nature.” But he argues that this determinism still leaves us room for free will. 

How can that be? It’s because Shermer defines free will in such a way that even in a physics-determined universe we still have a “freedom to choose.” Although I find his definition somewhat confusing, here’s what he says:

So, while the world is determined, we are active agents in determining our decisions going forward in a self-determined way, in the context of what already happened and what might happen.

… Here, for example, is [Robert] Sapolsky defending his belief that free will does not exist because single neurons don’t have it: “Individual neurons don’t become causeless causes that defy gravity and help generate free will just because they’re interacting with lots of other neurons.” In fact, billions of interacting neurons is exactly where self-determinism (or volition or free will) arises.

Shermer adds that our behavior satisfies the three requirements for volition given by philosopher Christian List. We have:

  1. “the capacity to form an intention to pursue different possibilities,”
  2. “the capacity to consider several possibilities for this action (this is the ‘could have done otherwise’ element),” and
  3. causal control, “the capacity to take action to move toward one of these possibilities.” 

All this is puzzling because if we live in a universe governed by the laws of nature, then of course our bodies and brains are part of that physical nexus. Our brains, of course, are the meat computers that form intentions, weigh possibilities, and emit decisions. But this doesn’t answer the critical question: At any moment, could we have done other than what we did? If so, then there is something spooky going on whereby our brains are somehow exempt from the laws of physics. This seems to reside in Shermer’s claim that we are “active agents in determining our decisions going forward in a self-determined way.” What else can that mean but a form of dualism, or even magic?

This smuggled-in dualism becomes clear when Shermer claims that although the action of individual neurons may be determined, “billions of interacting neurons is exactly where self-determinism (or volition or free will) arises.” But how can one neuron be governed by the laws of physics while a group of interacting neurons is not? If they are so governed, then there is no freedom, no volition, no “willed” control of our behavior, and no ability to have done otherwise. Yet Shermer argues that when a group of neurons cooperates, some kind of “will” arises. This dilemma won’t be resolved until Shermer explains the relevant difference between the behavior of one neuron and that of a group of neurons.

This is not a semantic distinction, for the definition of free will I gave is testable while Shermer’s is not. There are many experiments and phenomena showing that our sense of agency can be altered by physically manipulating the brain (a big group of neurons), observing human behavior, or performing psychological tricks. For example, neurological experiments show that predictable binary “choices” occur in the brain well before they are consciously made by an individual—up to ten seconds in advance. Such decisions cannot come from conscious “will.” Various lesions in the brain can remove the illusion that we can make real choices (e.g., alien hand syndrome), and doctors, by electrically stimulating parts of the brain, can create intentions to do specific acts, like licking your lips or moving your arms. Given more electricity, patients report that they had indeed done those acts even when they didn’t.


Alternatively, computer games or Ouija boards show that humans can perform actions they attribute to external forces like spirits even though they’re actually, but unconsciously, moving their muscles. All of this suggests that our conscious intentions are not “free,” but are formed by the brain before we’re aware of them, and can be manipulated to either add or remove feelings of “intention.” “Will,” “volition,” or “agency” may well be post facto phenomena in which deterministic activity in the brain is brought into consciousness a bit later, so that what we think of as choice is really a neuronal newsreel screened after the events have already happened. To repeat, it’s useless to see freedom in groups of neurons if it doesn’t occur in single neurons. As Cashmore noted:

Some will argue that free will could be explained by emergent properties that may be associated with neural networks. This is almost certainly correct in reference to the phenomenon of consciousness. However, as admirably appreciated by Epicurus and Lucretius, in the absence of any hint of a mechanism that affects the activities of atoms in a manner that is not a direct and unavoidable consequence of the forces of GES [genes, environment, and stochastic processes], this line of thinking is not informative in reference to the question of free will.

The science suggests that our feeling that we could have acted differently is, pure and simple, an illusion. 

In contrast, Shermer’s definition of free will is untestable, precisely because he’s defined free will tautologically: because people feel and act like they have free will, they do have some form of it. We feel like we control our actions, weigh alternatives, and make “choices” among those alternatives. But if we couldn’t have done other than what we did—if, at bottom, all we think and do reflects physical law—then what exactly is “free” about our decisions and behaviors? 

As Shermer notes, 59 percent of surveyed philosophers are compatibilists while the rest are almost equally divided between libertarians, determinists, and those with no opinion. He deems philosophers the “most qualified people” to pronounce on the problem, but are philosophers more qualified than neuroscientists or physicists? As Sam Harris (a neuroscientist and a determinist) said:

[Compatibilism] ignores the very source of our belief in free will: the feeling of conscious agency. People feel that they are the authors of their thoughts and actions, and this is the only reason why there seems to be a problem of free will worth talking about.

… Compatibilism amounts to nothing more than an assertion of the following creed: A puppet is free as long as he loves his strings.

Importantly, the “folk” conception of free will—the libertarian version—is what most people think they have. It is that version that permeates society, the legal system, and, of course, religion, and is therefore the most important version to discuss. 

Frankly, I’m puzzled by the eagerness of intellectuals to embrace various forms of compatibilism, and I’ve concluded—Dennett said this explicitly—that this comes largely from the view that without some idea that we have free will, society would fall apart, with nobody being “morally responsible” for their actions. I don’t have space to rebut that claim, except to say that it’s an untested assertion. Further, it’s clear that most determinists are not running amok by flouting morality and the law, nor are we nihilists who see no point in getting out of bed. I’ll add that while we are “responsible” for our actions in the sense that we performed them, under determinism the concept of moral responsibility is incoherent, for it assumes we could have made either a moral or an immoral choice.                                                                                   

Finally, Shermer poses what he sees as an unassailable challenge to my determinism: 

In fact, billions of interacting neurons is exactly where self-determinism (or volition or free will) arises. This is why I like to ask determinists: Where is inflation [of the monetary sort] in the laws and principles of physics, biology, or neuroscience? It’s not, because inflation is an emergent property arising from millions of individuals in economic exchange, a subject properly described by economists, not physicists, biologists, or neuroscientists.

That is a red herring. Like all phenomena in human society, you won’t find monetary inflation in the laws of physics. Nor will you find academics, music, sports, or any other human endeavor. The question is not whether these phenomena are in the laws of physics, but whether they result from the laws of physics as emergent phenomena wholly compatible with underlying naturalism. And Shermer himself said yes, they do: “we live in a determined universe governed by laws of nature.”

The problem of free will is “insoluble” only insofar as Shermer, trying to retain an idea of self-control, and ignoring the massive body of data on affecting volition, has confected a new definition that simply redescribes human behavior. The important question is this: “Is there physical determinism of human behavior or not?” Both Shermer and I agree that there is. In the end, however, Shermer seems to argue that we have free will because we feel like it. One might as well say that there’s a God because we feel like there is one.

Categories: Critical Thinking, Skeptic

AI And Schools

neurologicablog Feed - Tue, 03/31/2026 - 6:25am

Many teachers are panicking over AI (artificial intelligence), and for good reason. This goes beyond students using AI to cheat on their homework or write their essays for them. If you have AI essentially think for you, then you will not learn to think. On the other hand optimists point out that AI can be a powerful tool to aid in learning. It all comes down to how we use, regulate, and manage our AI tools.

The cautionary approach was captured well, I think, by Mark Crislip in this SBM commentary, in which he worries about the effects of AI on doctor education. How will a new generation of physicians learn how to think like expert clinicians if they can have AIs do all their clinical thinking for them? My question is – is AI fundamentally different from all the other technological advances that have come before? Did calculators take away our ability to do math? The answer appears to be no. Students still gain basic math skills at the same rate with or without access to calculators. But there are lots of confounding factors here, and so some teachers still warn against allowing kids access to calculators too soon. Others point out that access to calculators has simply shifted our math abilities, away from basic operations toward more modeling, problem solving, and complex concepts. It seems we are in the middle of the exact same conversation about AI.

We can also think about things like GPS. My ability to navigate from point A to point B without GPS, or to navigate with maps, has definitely declined. But using GPS has also made my navigating to unfamiliar locations easier and more efficient. I would not want to go back to a world without it.

But is AI different because it is not about some narrow specific skill but about fundamental skills like writing, arguing, and thinking? I think the answer is – it could be. At the very least we cannot assume that it isn’t. We don’t want to look back in 20 years and realize we raised a generation that is intellectually crippled by previous standards. It does not seem prudent to just hope that this is not the case and it will all work out, like it did for calculators.

Part of the problem is that AI technology is developing very fast, and our culture and institutions do not have the time to adapt. Regulations, if any are needed or would be helpful, are also lagging behind. In fact it seems that the tech industry has been successful in cutting any serious regulation off at the knees. They have a point that sloppy regulations could hamper innovation and cede a vital emerging industry to our competitors. But they present this as a false choice, with the only other option being to just trust them and have essentially no regulation. They want us to replicate what happened with social media, or with crypto, where lack of effective regulations turned what could have been useful tools into…something else. It is no surprise that recent surveys find people are more nervous than optimistic about the net effects of AI.

It is hard to know what the long term effect of the recent judgment against Meta and Google will be, but a court did find that these companies were “negligent” in protecting children from their products. These products have been deliberately optimized for addictiveness. Algorithms provide a bottomless scroll of content designed to outrage people, or drag them down a rabbit hole of increasing radicalization – whatever maximizes their engagement. The effects on individuals and society do not seem to have been factored in.

As with so many complex and technological issues, we seem to be perpetually stuck between two extremes. On the one hand we have tech bros unfettered in their attempts to “move fast and break things” and then use their billions to buy up media outlets and politicians to fend off any regulations. On the other we have politicians who may or may not be well-meaning, but either way seem to lack the knowledge and expertise to effectively regulate these new technologies. So their clumsy attempts at regulation backfire, and are used to scuttle any further regulation attempts. This is happening during a time of intense political polarization and the collapse, in many ways, of effective legislating.

What we want is a third option – effective, narrow, targeted regulation informed by experts with meaningful metrics that prevent abuse and harm with a minimal effect on innovation. Of course, this is not easy. It requires hard work, lots of consultation and discussion, and rounds of experimentation, evaluation, and adjustment. But that is what our complex world requires. Perhaps we are just not up to it.

The academic world also needs a carefully calibrated and thoughtful response. I do think we can leverage AI as a tool to improve education, to make it more personal and adaptive. But at the same time we need to avoid or minimize the obvious potential downsides. I do think it is a good idea for young children to avoid certain technologies while their brains are still developing. We need to maximize their use of verbal, math, and cognitive skills so that their brains will maximally develop these abilities. Then we can phase in technologies as tools they can use to be more effective. Start too young, however, and technology becomes a crutch, and their skills not only atrophy – they never develop in the first place.

In fact we need to think carefully about this digital virtual world we are creating for ourselves. Yes, this technology provides amazing tools and opportunities for engagement and entertainment. But it is also a soporific, lulling us into contentment with a small and isolated existence. I worry about a generation that never knows anything else.

Education is an opportunity to prevent such a digital dystopia, by providing not only the opportunity but the necessity for children to do physical activities, get out into nature, communicate with actual people, and use every cognitive skill they have. We obviously have to introduce them to technology along the way, and in fact there is no way to avoid it. It is embedded out there in the world, and children do not live at school. So we also need to teach children to use technology responsibly and effectively. Meanwhile school is a place where they use and develop other abilities.

We have to be thoughtful about this. It is doubtful that just going with the flow down the path of least resistance (and maximal profits for the tech industry) will lead to the world we want to have.

 

The post AI And Schools first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #1034: The Colorado Martian

Skeptoid Feed - Tue, 03/31/2026 - 2:00am

The discovery of a Martian sarcophagus in Colorado in 1864: An oddball story, or consistent with the lore of the day?

Learn about your ad choices: dovetail.prx.org/ad-choices
Categories: Critical Thinking, Skeptic

NASA Unveils New Moon Plans

neurologicablog Feed - Mon, 03/30/2026 - 6:29am

As we anticipate the Artemis II launch, now slated for early April with plans to take four astronauts on a trip around the Moon and back to Earth, NASA has been unveiling some significant changes to its plans for returning to the Moon and beyond. If you have fallen behind these announcements, here is a summary of the important bits.

Artemis II will continue as planned, marking the first crewed deep space mission since 1972 (Apollo 17). The original plan was for Artemis III to land on the Moon in 2027, but this mission has been pushed to an Artemis IV mission in 2028. A new Artemis III mission has been inserted – this will go only to low Earth orbit (LEO) and will test the integration of all the systems necessary to land on the Moon. This will include docking with one or both of the two landers, one being built by SpaceX and one by Blue Origin. This sounds like a really good idea, and it did seem unusual that they were planning on going straight to the Moon without ever test docking with the lander.

Even though landing on the Moon will be delayed by at least a year, NASA says this will set them up to have at least annual landings on the Moon after that, with a goal of a landing every six months. The reason for this frequent pace is the more recent announcement NASA made last week – that they are pausing plans for a Lunar Gateway in lunar orbit and will instead focus on building a permanent Moon base near the lunar south pole.

In order to make this possible, and to support the future Moon base (no word yet on whether this will be called Moon Base Alpha, as it should), NASA plans about 30 uncrewed robotic landings on the Moon every year. They will be scoping out the location for the base and delivering equipment and supplies.

What about the Space Launch System (SLS)? When hearing these plans one of my first questions was – are they going to do this all with the SLS? Each SLS launch costs $4.1 billion, with the cost of the single-use ship itself being $2.2-2.5 billion. This is one of the biggest criticisms of the SLS system – they are designed as single-use rockets. Meanwhile, the rocket industry has moved on to reusable rockets, which dramatically reduces the cost. As of now, NASA has approved SLS launches through Artemis V. After that they have not committed to a specific plan. But – they have stated that their goal is to transition to “commercial hardware.” This almost certainly means SpaceX and Starship. I guess they cannot fully commit because Starship is still in development. But if it is ready in time, it seems likely NASA will start relying on Starships to get to the Moon.

This makes a lot of sense. SpaceX’s lander is really a modified Starship – it is stripped of anything it needs to land back on Earth and is optimized for landing on the airless lunar surface. So why go all the way to the Moon and then dock with a Starship lander to land on the Moon? Why not just dock with the Starship in LEO and then take the lunar-modified Starship all the way to the Moon and then down to the lunar surface? That seems to be what NASA is planning. For now they will use the SLS to get into LEO, then go the rest of the way on a modified Starship. After Artemis V they may take one Starship into LEO and another to the Moon. They are not fully committing to SpaceX because they don’t want to give them a de facto monopoly, so the door is open for other companies to compete for this service.

The apparent plan is for the base to be on the surface near the south pole. NASA has been investigating lunar lava tubes as a potential location for a Moon base, but there are no identified sites or specific plans right now. This means the surface base will have to be heavily shielded. Perhaps the permanent presence will allow them to build a future base inside a lava tube, which would be much better protected from radiation and micrometeors.

Once all this is worked out, and we have a lunar base serviced by a system to frequently land crew on the surface and return them to Earth, NASA plans to use that lunar base as a stepping stone to Mars. This makes great sense. Remember – 90% of the energy you need to get anywhere in the solar system you expend just getting into LEO. Getting off the lunar surface is relatively easy. This means that a lunar base is an excellent platform from which to launch ships throughout the solar system, including Mars. A ship launching from the Moon can use most of its fuel getting to Mars faster, by spending more of that fuel accelerating toward Mars and then decelerating to insert into Martian orbit. This is critical because getting to Mars fast is the best defense against radiation exposure by the astronauts.
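The intuition behind that 90% figure can be sketched with the Tsiolkovsky rocket equation, which says how much of a rocket’s launch mass must be propellant for a given change in velocity (delta-v). The delta-v values and engine specific impulse (Isp) below are assumed ballpark numbers for a generic chemical rocket, chosen purely for illustration – they are not NASA figures:

```python
import math

def propellant_fraction(delta_v_m_s: float, isp_s: float, g0: float = 9.81) -> float:
    """Tsiolkovsky rocket equation: the fraction of initial mass
    that must be propellant to achieve the given delta-v."""
    mass_ratio = math.exp(delta_v_m_s / (isp_s * g0))  # m0 / m_final
    return 1.0 - 1.0 / mass_ratio

ISP = 350  # seconds; rough value for a chemical bipropellant engine (assumed)

# ~9.4 km/s to reach LEO from Earth's surface, including drag/gravity losses (assumed)
earth_to_leo = propellant_fraction(9400, ISP)

# ~1.9 km/s from the lunar surface to low lunar orbit (assumed)
moon_to_orbit = propellant_fraction(1900, ISP)

print(f"Earth surface -> LEO: {earth_to_leo:.1%} of launch mass is propellant")
print(f"Moon surface -> lunar orbit: {moon_to_orbit:.1%} of launch mass is propellant")
```

With these assumed numbers, roughly 93% of a rocket leaving Earth must be propellant, versus well under half for a rocket leaving the Moon – which is why a lunar launch pad leaves so much more mass budget for payload and for fuel spent getting to Mars quickly.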

Along those lines NASA has also announced their plans to use nuclear power in space. This has two components – the first is using nuclear power for the Moon base itself. This is a great idea because you do not want to rely on fuel, which is expensive to ship to the Moon. Solar power on the Moon can be great, but you only see sunlight half the time. This is actually part of the reason to build the base at the south pole, where there are high peak regions that see sunlight 90% of the time. That will likely be an important source of power for the base. The other reason is that the poles also have deep craters that see light 0% of the time, which means there may be some frozen water there, which can be mined as a resource for the base. But even 90% sunlight still means 2-3 days with no sun, which would require significant battery backup. This is fine, but a mini nuclear plant (like the kind of thing you would have on a nuclear submarine) could provide years of reliable power for a lunar base.

The second use of nuclear power in space is for their planned nuclear electric propulsion spaceship. NASA plans for Space Reactor-1 (SR-1) Freedom, a ship propelled by a nuclear electric engine, to be launched in 2028. Nuclear propulsion has been long anticipated, and honestly we should have developed it long ago. This gets beyond the limits of chemical propulsion, and would cut the travel time to Mars. SR-1 Freedom will fly to Mars, and take 1 year to get there. This trip is optimized for efficiency, not speed, as it is a test mission. Once mature it is estimated that nuclear propulsion will reduce a typical trip to Mars from 7-9 months down to 3-4 months, with a theoretical advanced system getting to Mars in 45 days. Now we’re talking.

This also relates to why a lunar base is so important to Mars missions. Nuclear engines are efficient, but do not have the thrust to launch from Earth’s surface into orbit. You would have to launch any such vehicle with chemical rockets and then switch to nuclear for the trip to Mars. But if you are launching from the lunar surface, you still need chemical rockets, but only small boosters rather than something the size of SLS or Starship.

Taking all of this into account, it really does seem that NASA has a well-thought-out plan for developing the infrastructure to maintain a presence on the Moon and for missions to Mars (and potentially other solar system destinations). This is much better than the one-off (so-called flags and footprints) missions of the past. Honestly, this is what I naively expected would happen back in the 1970s or 80s to follow up the Apollo missions. It took 50 years longer than expected, but it’s good to see it happening now. I know not everyone agrees with the priority of sending any people into space, and would rather have an entirely robotic space program, but that is a discussion for another day.

The post NASA Unveils New Moon Plans first appeared on NeuroLogica Blog.

Categories: Skeptic

The Peptide Craze: Biohacking and Human Guinea Pigs

Skeptic.com feed - Fri, 03/27/2026 - 5:30pm
A new compass for the SKEPDOC column. This column was founded by Harriet Hall, MD (1945–2023) who wrote it from 2006 to 2023. In 2026, we welcome William Meller, MD, to the helm. As an expert in evolutionary medicine, Dr. Meller will be our guide in navigating the deep biological history of our species to find the “True North” of human health.

On February 27, 2026, Robert F. Kennedy Jr. appeared on Joe Rogan’s podcast and announced that the FDA is preparing to move approximately 14 experimental peptide compounds off its restricted list and back into the hands of compounding pharmacies. He called himself a “big fan” of peptides. He said he expected the announcement “within a couple of weeks.”

One industry executive responded with what he called a prediction: “We’re about to unleash one of the biggest medical experiments in the history of America onto Americans as the test subjects.” He meant it as a good thing. 

In response, first let me tell you about a patient of mine. Two weeks before I wrote this, a 40-year-old man came into my clinic in acute distress. He was intelligent, fit, and successful—and he was terrified. His throat was swelling. Hives covered his body. He was struggling to breathe. And … he had been injecting a peptide “stack” he’d ordered online. He’d been at it for exactly two weeks. 

That timing is not a coincidence. Two weeks is how long it takes for our immune systems to mount a full IgE-mediated allergic response to a new foreign substance—the same mechanism behind severe penicillin reactions. With a slightly higher dose, or a slightly longer drive to my clinic, he could have gone into full anaphylaxis. He responded quickly to epinephrine and antihistamines. He will be fine. But his immune system now has a permanent record of that peptide as a lethal enemy. Any future exposure risks a faster, more severe reaction. 

This is the experiment that is about to be released on the American public. 

A New Label on Old Snake Oil 

Quackery has long been handy with new names. “Remedies,” “tonics,” “panaceas,” and “snake oil” gave way to “complementary and alternative medicine,” which gave way to “integrative” and “functional” medicine. Today’s label is “biohacking”—and its latest product line is peptides. 

To be clear: some self-experimentation is entirely reasonable. Adjusting your diet, sleep schedule, or exercise routine can have rapid results and manageable risks. That is not what I am cautioning about. I am writing about people who order vials of white powder from overseas websites, mix them with water in their kitchens, and inject themselves based on advice from social media influencers and, now, the Secretary of Health and Human Services. 

If you spend any time in online “wellness” spaces, you have encountered the pitch. Coaches, longevity clinics, and podcasters hawking discount codes are aggressively marketing injectable grey-market chemicals that promise to “optimize your metabolic pathways,” “boost your immune system,” “detoxify your cellular matrix,” and “address the root cause of aging.” They claim these compounds will dramatically increase muscle mass, melt body fat, skyrocket libido, erase wrinkles, and heal injuries without the inconvenience of waiting for evidence. 

As I tell my patients: if a drug could genuinely do any of that, we would all know about it. It would be very hard to hide. You would not be buying it through an internet loophole labeled “not for human consumption.” Nor would there be proclamations about what “they” don’t want you to know about this new remedy. 

What Peptides Actually Are 

Peptides are real, biologically important, and increasingly valuable. They are short chains of amino acids—smaller versions of proteins—that often function as chemical messengers in the body. Insulin is a peptide. More than 40 peptide hormones are known in humans, governing everything from blood pressure to appetite to milk production. The body’s own peptides act quickly: released, delivered to a specific receptor, then broken down by enzymes within minutes. 

Medicine has successfully harnessed this biology. There are now more than 100 FDA-approved peptide drugs on the market. The GLP-1 medications—Ozempic, Wegovy, and their weight-loss relatives—have genuinely revolutionized the treatment of diabetes and obesity. Peptide pharmacology is good, productive science, and anyone who tells you the FDA is categorically hostile to peptides is simply wrong. 

The compounds being sold by anti-aging clinics and wellness websites are a different kettle of goo. These are unapproved, experimental, synthetic molecules manufactured in a regulatory and industrial grey zone. They are sold with legally evasive disclaimers—“for research purposes only,” “not for human consumption”—while being marketed with explicit instructions for human injection. Many are synthesized in foreign facilities and imported for sale online. The FDA does not approve them. Independent quality testing is essentially nonexistent.

The Appeal-to-Nature Fallacy, Wearing a Lab Coat 

Peptide sellers claim their products are “gentle” and “natural” because the body already produces similar molecules. This argument collapses on inspection. 

Because our natural peptides are removed by enzymes within minutes, lab-made versions must be chemically engineered to survive much longer in the bloodstream. This is why an Ozempic injection can last a week. The molecule is altered—designed to evade the very mechanisms that keep natural signaling tight, targeted, and controlled. Calling a chemically tweaked, enzyme-resistant synthetic compound ordered from an overseas supplier a “natural holistic remedy” is a remarkable feat of cognitive dissonance. 

The natural precedent proves nothing about safety or efficacy at supraphysiological doses. The dose, the duration, the delivery route, and the molecular structure all matter enormously. This is not ideology. It is pharmacology. 

The Wolverine Stack and Tooth Fairy Science 

One popular combination—BPC-157 and TB-500—is marketed as the “Wolverine Stack,” named after the X-Men character’s mutant regenerative ability. Sellers claim it heals torn ligaments, repairs damaged tissue, and accelerates recovery from virtually any injury. 

BPC-157 is a synthetic analog of a compound found in human stomach juice. In rats and cell cultures, it has shown interesting tissue-regeneration effects. There is no robust human clinical trial evidence that BPC-157 accelerates injury recovery, reduces inflammation, or supports gut health. A Phase I trial conducted in 2015 on 42 volunteers was discontinued and no results were ever published. The only human data in the published literature consist of a retrospective analysis of 12 patients and a pilot study with two participants. Based on this, influencers and longevity clinics sell it as a proven cure-all. At a MAHA (Make America Healthy Again) summit in Washington last November, a panelist told the audience his grandmother was taking it and that “it’s just one example of these products that can change people’s lives.” The audience clapped and whooped. 

Then there are the peptides that are alleged to pump up growth hormone—CJC-1295 and Ipamorelin—heavily marketed to men hoping to reclaim muscle and youth without effort. What rat data actually showed for Ipamorelin was increased body weight and increased fat. Its only significant human clinical trial, investigating bowel function after surgery, found it no more effective than placebo. As for CJC-1295: clinical trials investigating it as a treatment for HIV patients were permanently halted after a participant died of a heart attack. 

This is the actual evidence base: rodent studies, discontinued trials, and anecdotes from podcast guests with financial stakes in the outcome. The plural of anecdote is not data. 

Downplaying Risks 

The FDA’s 2023 decision to move many of these compounds to its Category 2 restricted list was not arbitrary bureaucratic overreach. It was grounded in specific, documented biology. 

BPC-157 promotes angiogenesis—the formation of new blood vessels. This sounds appealing for tendon repair. It is considerably less appealing when you consider that angiogenesis is also precisely what early-stage, undetected cancers need to grow and spread. (Oncologists have long sought anti-angiogenesis drugs to attenuate the growth of blood vessels to cancerous tumors.) A person injecting unapproved angiogenic compounds has no way of knowing whether they are healing a joint or feeding a tumor. Growth hormone secretagogues carry documented risks of acromegaly—the pathological and irreversible enlargement of bones and organs from excess growth hormone exposure. 

Then there is immunogenicity, the actual problem illustrated by my patient. Because synthetic peptides are engineered to persist in the bloodstream far longer than natural ones, the immune system frequently recognizes them as foreign invaders. It builds antibodies. In the best case, those antibodies simply neutralize the drug, rendering it ineffective. In worse cases, they trigger escalating allergic responses. In the worst cases, they cause anaphylaxis. 

Then there is contamination. Grey-market peptide vials from unregulated sources often contain chemical residues from synthesis, heavy metals, bacterial contamination, or simply the wrong compound entirely. There is no quality control. There is no chain of custody. The buyer has no reliable way to know what is actually in the vial. 

We are already seeing the collateral damage. Bad injections have produced hospitalizations for muscle paralysis, scarring, and sepsis. In Las Vegas, two women were hospitalized with swollen tongues, respiratory distress, and elevated heart rates—classic anaphylaxis—following peptide injections at an anti-aging festival. Medical journals have reported cases of necrotizing pancreatitis directly linked to unregulated peptide use. 

The MAHA Paradox 

Here is where the story becomes increasingly interesting, and particularly strange. 

Kennedy is not entirely wrong about one thing. When the FDA moved these compounds to Category 2 in 2023, it did not eliminate demand. It drove patients toward Chinese suppliers and grey-market “research chemical” vendors with no oversight whatsoever. Kennedy acknowledged this directly, stating that the restrictions “created the gray market.” There is a narrow, genuine point buried here: regulated compounding pharmacy access, with physician oversight and USP-compliant quality controls, is meaningfully safer than a vial of white powder ordered from an overseas website. 

But reclassification from Category 2 to Category 1 does not mean FDA approval. It does not mean these compounds are safe or effective. It means licensed compounding pharmacies would be permitted to prepare them under physician prescription for individual patients. The evidence base does not change. The angiogenesis risk does not change. The immunogenicity risk does not change. The absence of human clinical trial data does not change. What changes is the supply chain—and while that matters for contamination risk, it does nothing about the fundamental problem that we do not know what these compounds actually do in human beings at the doses being used. 

Meanwhile, the people celebrating the loudest have the most to gain financially. Brigham Buhler, a compounding pharmacy and wellness clinic owner who has Kennedy’s ear and has been loudly predicting regulatory liberation on podcasts, owns the businesses that would compound and sell these newly accessible peptides. At the MAHA summit last November, he moderated a discussion on compounding pharmacies and declared, “I think the future is bright with peptides.” The audience, again, clapped and cheered. The financial conflicts of interest here are not subtle.

Eric Topol, director of the Scripps Research Translational Institute, identified the deeper contradiction more sharply than I could: “These are the same people that won’t take a vaccine that’s been shown to work in millions of people.” 

Read that again. The MAHA movement—which has spent years amplifying vaccine hesitancy, questioning FDA-approved treatments, and casting pharmaceutical medicine as a corrupt conspiracy—is now enthusiastically championing the mass use of unapproved synthetic compounds based on rodent studies and podcast testimonials. They claim that the FDA was corrupt and captured when it approved vaccines backed by Phase III trials enrolling tens of thousands of participants. It is apparently now a liberating force when it opens the door to peptides with two-patient pilot studies. 

The standard of evidence, it turns out, is not a principle. It is a preference. 

A Multi-Million Dollar Experiment 

The market is already staggering. U.S. Customs data show that imports of peptide and hormone compounds reached $328 million in just the first three quarters of 2024—up from $164 million during the same period the year before. That was before a sitting cabinet secretary went on the most popular podcast in America to announce that the regulatory gates are opening. 

Wellness clinics function as middlemen, lending a veneer of medical legitimacy while requiring patients to sign waivers acknowledging the substances are experimental—a maneuver that transfers liability to the patient. The proponents of “functional” medicine who accuse conventional physicians of “just pushing pills” are simultaneously instructing patients to inject unapproved synthetic compounds mixed in their own kitchens. This is not a contradiction they appear to notice. 

Patients frustrated by the pace of conventional healing, or simply hoping to optimize bodies that are already healthy, are understandable targets for this marketing. But enthusiasm and financial interest are not substitutes for evidence. 

Caveat Emptor 

Peptide pharmacology is a burgeoning field of research. FDA-approved peptide drugs have produced genuine medical advances. The problem is not peptides. The problem is the systematic exploitation of public enthusiasm for that science to sell unproven, potentially dangerous compounds to people willing to self-inject in pursuit of a shortcut—and now, the prospect of that exploitation being scaled and legitimized by federal policy. 

Here is a simple test. If a compound genuinely possessed the ability to burn fat, build muscle, regenerate tissue, and reverse aging without meaningful adverse effects, it would not need to be endorsed on a podcast. It would not need a cabinet secretary to rescue it from regulatory scrutiny. It would survive clinical trials. It would earn FDA approval. It would be prescribed by physicians and covered by insurance. 

It would simply be called medicine. 

The man whose throat was swelling in my clinic was not a fool. He was a careful, educated person who trusted the wrong sources. He got lucky. As the regulatory gates open and the market expands, not everyone will.

Categories: Critical Thinking, Skeptic

What Media Consolidation Means for Our Reality

Skeptic.com feed - Wed, 03/25/2026 - 2:40pm

It’s been said that “He who controls the media controls the mind.” (Variously attributed to Jim Morrison of the rock band The Doors, along with Noam Chomsky.)

Whoever said it, billionaires seem to have taken it to heart. Elon Musk has made 𝕏 his “de facto public town square.” Jeff Bezos has The Washington Post. Rupert Murdoch continues to consolidate conservative media outfits via Fox and News Corp (which owns The Wall Street Journal, the New York Post, and HarperCollins). Mark Zuckerberg’s Meta has expanded from merely friending people on Facebook to Instagram, WhatsApp, and Threads. Brian Roberts’s Comcast is in charge of NBCUniversal, Sky News, Peacock, and Universal Pictures. And so on. Meanwhile, the Ellison family controls Paramount and CBS. 

Recent headlines read like a game of high-stakes Pac-Man. Most notably, David Ellison’s Skydance Media merged with Paramount Global, bringing CBS, Paramount Pictures, MTV, Nickelodeon, and other assets under its new entity, Paramount Skydance Corporation. Then Paramount Skydance proceeded to buy The Free Press for some $150 million—putting its founder, Bari Weiss, at the helm of CBS News as its new Editor-in-Chief (she also retains her role at The Free Press). 

Meanwhile, Netflix is in an $82.7 billion definitive agreement to acquire Warner Bros. Studios (subject to regulatory approvals), but not if Paramount Skydance can help it: the company has filed a lawsuit against the venerable studio alleging that the Netflix deal lacked transparency and that the Warner Bros. board ignored higher offers from Skydance (the board has repeatedly rejected Skydance’s offers in favor of the Netflix deal). The matter is currently in dispute, but if Paramount Skydance manages to win, it would control a giant piece of the media apparatus, including both CBS News and CNN.

And then there’s the recent forced sale of TikTok U.S. to an American entity. The deal creates a new U.S. joint venture where a consortium of investors led by Oracle Corporation, Silver Lake Technology Management, and MGX Fund Management Limited will hold a 50 percent stake, while ByteDance retains a 19.9 percent minority interest. 

This marks a fundamental restructuring of the media landscape. Is it good for the public? 

On the one hand, it’s possible that audiences will be pleased with having access to larger content libraries from a single provider, though Netflix is likely to raise its prices for the privilege of being able to share HBO’s “It’s not TV” content with them. Given its market share and massive content library, Netflix will sit firmly in the driver’s seat when negotiating acquisition costs and more. 

It also means that Netflix could control 30–40 percent of all paid streaming in the U.S., according to analysts. This move risks creating a content monoculture where data-driven algorithms, rather than creative risk, dictate what gets made, especially given Netflix’s streaming-forward model, rather than a focus on theatrical releases. This new layout also makes it incredibly difficult for mid-sized companies with less capital to acquire attractive content and compete with existing massive IP libraries, and creates a near monopoly on content: a few giants at the helm, with only smaller, niche creators, podcasters, and independent outlets left on the margins. It also means that filmmakers have fewer options for their projects. 

In fact, Netflix already offers a preview of what a fully consolidated media environment looks like in practice: it has become infamous for canceling series after one or two seasons, often despite strong critical reception or dedicated audiences. Shows like Mindhunter, 1899, Glow, and Archive 81 were all discontinued without narrative resolution. In several cases, creators later stated that the shows met or exceeded traditional benchmarks of success but failed to satisfy Netflix’s internal metrics for rapid audience growth and completion rates. The result is a cultural landscape littered with unfinished stories. Viewers learn, over time, that emotional investment is risky. Storytelling itself becomes provisional and disposable.


This incentive structure also shapes how stories are told. Former Netflix writers and executives have described internal guidelines that prioritize early engagement above all else. As a result, many Netflix originals front-load dramatic events—major chases, twists, or revelations often occur within the first five to ten minutes of an episode. Compare this to earlier television and feature films, where narrative tension was allowed to accumulate gradually, and climactic moments were often reserved for the end. 

Dialogue has changed as well. In series such as The Witcher or You, key plot points are frequently repeated verbally, sometimes multiple times within the same scene. This is not accidental. Matt Damon, while promoting his new Netflix film The Rip, mentioned that the filmmakers had discussions with the streamer about ensuring that the plot is restated “three or four times in the dialogue” to accommodate the many viewers who are simultaneously on their phones while watching.

A number of writers have also openly noted that scripts are being increasingly optimized for distracted viewing. In other words, they are designed to be intelligible even when audiences are scrolling on their phones or half-paying attention. Subtle visual storytelling gives way to explicit exposition, because ambiguity does not perform well in engagement data. And Netflix is quite data driven indeed. 

Over time, this produces a subtle form of cultural monoculture. Genres proliferate, aesthetics vary, but narrative structures converge. The result is a narrowing of how storytelling is constructed. Novelty is cosmetic and experimentation is constrained by metrics designed to optimize retention rather than meaning. 

For most of television history, this logic would have been alien. In the broadcast era, shows were often allowed to fail slowly or to grow into themselves. Series such as The Wire, Breaking Bad, and Mad Men all struggled initially to attract large audiences, despite being critically acclaimed. The Wire in particular was never a ratings success during its original run, yet it survived because executives believed in its long-term cultural value and its ability to enhance the network’s reputation. Success was measured over years, not weeks, and shows were allowed to develop complexity that only made sense in retrospect. Creative risk was tolerated because it signaled seriousness, ambition in storytelling, and—significantly—trust in the audiences. Initially a modestly performing niche show, Mad Men saw a 63 percent increase in viewership by its second season alone and went on to become a cultural phenomenon.

HBO famously framed itself not as television, but as something adjacent to cinema—summed up in its slogan, “It’s not TV.” The network accepted that certain shows would never be mass hits, but would instead function as prestige anchors, shaping brand identity and attracting subscribers indirectly. A series like The Sopranos justified risks taken elsewhere; Six Feet Under or Deadwood existed because the ecosystem allowed for uneven returns. FX’s The Americans showrunners, Joel Fields and Joe Weisberg, chose to end the show with its sixth season, something they announced during the fourth, which allowed them to plan their storytelling and provide a proper ending.

In that environment, creative autonomy was not merely tolerated but protected. Writers could trust that if an audience existed—even a modest one—it would be allowed to find the work. Today’s streaming platforms invert that logic. Instead of prestige underwriting experimentation, experimentation must justify itself instantly in data. What once functioned as cultural capital has been replaced by performance analytics, and patience has been redefined as inefficiency. 

Of course, the issue drawing the most attention and concern is how this consolidation will affect who controls the narrative and how it is shared. 


In particular, a lot of attention has surrounded the acquisition of CBS and the installment of Bari Weiss as its Editor-in-Chief. Proponents see this as a positive move that will help CBS become a more ideologically moderate—or centrist—outlet, creating a legacy broadcast network that appeals to and serves everyone on the political spectrum, not just those who lean left. 

Critics, meanwhile, are concerned that the outlet will reflect the ideological leanings of its new owner, sympathetic to the current U.S. administration. As evidence, they point to the last-minute pulling and postponement of a 60 Minutes segment on the Trump administration’s deportations of Venezuelan migrants to El Salvador’s CECOT prison, with reports of internal tension around the ongoing delay. When the segment did eventually run, some critics noted that it contained no additions that would have justified the delay and argued it was intentionally aired opposite an NFL playoff game.

To many of Weiss’s detractors, this seems to serve as a confirmation of what they believed all along—that Weiss is the mouthpiece of the Trump administration, intentionally put in place by Ellison to promote specific narratives. They point to her tenure at The Free Press, where sustained criticism of Trump has been less prominent. 

Her proponents disagree, and claim that she was merely ensuring the coverage was balanced and provided an opportunity for the administration to respond to various claims—as per journalistic standards that they feel have been replaced by bias and activism elsewhere. They also note that none of the recent hires brought into CBS under Weiss could reasonably be described as MAGA. 

It’s possible that Weiss is genuinely striving to bring a balanced perspective to CBS News, without ulterior motives or loyalties. Yet the network’s legacy audience is likely to remain skeptical, and many may drift away. Weiss’s goal appears to be attracting a more centrist, moderate audience—both left- and right-of-center—but in today’s polarized media landscape, many viewers seek content that aligns with their existing perspectives. In the first week under new editorial leadership, for example, CBS Evening News saw viewership drop 23 percent compared to last year, which signals, at the very least, a steep adjustment period. 

Mainstream media has generally leaned left, with exceptions such as The Wall Street Journal and the New York Post. Hollywood, too, has remained largely left-leaning, which makes the recent acquisitions all the more significant when it comes to shaping culture. The right-wing media ecosystem has expanded beyond Fox with a strong presence in the online world. 

In a recent article about Bari Weiss in The New Yorker, it was noted that her new role wasn’t necessarily a merely editorial choice. “Don’t think about it as David Ellison paying a hundred and fifty million dollars for The Free Press,” an unnamed industry exec said. “Think about it as a hundred and fifty million dollars on top of the price they paid for Paramount. It was basically the cost to get it to go through.” Whether that’s true will continue to be debated.


But as I mentioned earlier, whatever the ideology, what matters isn’t who owns which outlet, but that ownership itself is converging—across news, entertainment, and social platforms—into a single layer of influence. 

When ownership is diverse, multiple perspectives can still compete for public attention. But as more media outlets consolidate into the hands of a few, the number of voices shaping what we see and hear shrinks, from news and opinion reporting to entertainment in the case of Netflix, Paramount, etc. 

Our ability to understand the world from multiple perspectives diminishes, and our view of reality becomes narrower. When only a handful of entities control the information available to us about the world around us, how can we make informed decisions about its future?

Categories: Critical Thinking, Skeptic

What Happened to Comet 3I/Atlas

neurologicablog Feed - Tue, 03/24/2026 - 6:01am

Last year the inner solar system had an interstellar visitor – 3I/Atlas (the third interstellar object, discovered by the ATLAS telescope). The third-ever of anything is by definition a rare event, and so this was scientifically exciting. The comet came into the inner solar system, passing close to Jupiter and Mars but not to the Earth, went behind the sun, then emerged on its path away from the sun. It is now headed for the orbit of Jupiter and out of the solar system. At first 3I/Atlas displayed a number of minor anomalies. It was behaving sort of like a comet, but with some differences. This fits well, however, with the main hypothesis that it is an interstellar comet – so it is a comet, but may have a different composition from comets that formed in our own solar system. This is almost certainly the case – the comet comes from the thick disc of the galaxy, likely from a low-metallicity star system, and has likely been travelling through interstellar space for billions of years; it may be even older than our own star.

Now that it is passing out of the solar system we can look at all the data that NASA collected and make some fairly confident conclusions. There are a lot of sources of information, but Wikipedia actually has a pretty good summary and list of references. In the end, 3I/Atlas behaved mostly like a typical comet. It formed a tail heading away from the sun, brightened as it got close, then faded away as it moved away from the sun. Spectral analysis found that the comet was unusually rich in carbon dioxide (CO2), with small amounts of water ice, water vapor, carbon monoxide (CO), and carbonyl sulfide (OCS). It also had small amounts of cyanide and nickel gas, which is common in comets from our own solar system. In other words – it is a comet. It did originate from a part of the sky that we had previously calculated would have fewer such interstellar objects, which either makes it especially rare or means that our calculations are off.

Every time we encounter a new interstellar object we gather more data about such objects – how frequent they are, where they come from, and what their nature is. Right now we have just three data points. After the first one, Oumuamua, we had no idea how common they were because we had just one data point. Now we have enough instruments surveying the sky that we are better able to detect such objects, which are very fleeting. The question was – was Oumuamua a one-off, and did we just get lucky to detect something that happens very rarely, or are such objects common? With three data points we can conclude that they are fairly common, and we should detect one every few years or so, perhaps even more often as our surveys improve.

Interstellar objects are a fairly new astronomical phenomenon, and what typically happens to new astronomical phenomena is that someone asks – could this be an alien artifact? So far the answer has been universally, no. The universe is a very big and complex place with lots of unusual phenomena. Historically speaking we have only just started to examine the cosmos, and are still encountering new phenomena on a regular basis. We have yet, however, to detect anything demonstrably, or even likely, alien. No one would be more excited than me if we discovered a genuine technosignature of an alien civilization. That is precisely why we have to be very careful before leaping to any such conclusions. But sure, ask the question, just don’t leap off the deep end.

What I mean by that is – do not make bad arguments to prop up an alien hypothesis, do not mystery-monger, do not truck in conspiracy theories, and do not draw undue attention to such speculation or present it as anything other than speculation. Every generation seems to have someone, sometimes with a scientific background, who does all of these things. The allure of the alien hypothesis is just too great. It is genuinely fascinating. It is the fast track to fame and attention. You can portray yourself as just being open-minded, brave enough to ask the tough questions, and criticize your colleagues for being closed-minded. Of course, like many things, this is a continuum. A little of this is reasonable, more starts to get sketchy, and a lot makes you a crank.

An example of something which I consider to be in the sweet spot of good scientific exploration of the possibility of alien technosignatures is SETI. SETI essentially uses radioastronomy to survey for potential radio signals of alien origin. But they are not just doing this – they are also doing lots of ordinary good radio astronomy. But mixed in with their radio astronomy are methods to screen for signals that might be technosignatures. They are also extremely careful not to make any premature or overblown claims, and they are their own most dedicated skeptics.

At the other end of the spectrum, in my opinion, is Avi Loeb. He now seems to have made a career out of mystery-mongering anything unusual as a possible alien artifact. He claimed that all three interstellar objects might be alien craft. Why is he at the crank end of the spectrum? Because he elevated this possibility prematurely and with a series of really bad arguments, sometimes distorting the data or making bad calculations. He said that Oumuamua might be alien because it was coming close to the Earth, to observe it. He then argued that 3I/Atlas might be alien because it was not coming close to the Earth, to hide from us. He exaggerated its possible size, its apparent lack of a tail, and its composition. He made much of the fact that the comet’s trajectory is close to the ecliptic, about 5 degrees off, committing a classic lottery fallacy. He calculated how likely this specific feature is, but only after knowing it, and did not adjust for all the possible features that might be individually unlikely. He engaged in classic post-hoc reasoning. In the end, the predictions of NASA scientists all proved correct – 3I/Atlas is a comet, and displays all the features of a comet. Loeb attracted attention by saying 3I/Atlas might pivot toward the Earth once it emerged from behind the sun. When this prediction failed he did admit it was “most likely natural”, but is still emphasizing its apparent anomalies.
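The lottery fallacy here is easy to make concrete with a back-of-the-envelope calculation. The Python sketch below is purely illustrative and not from the original post: the pool of 20 candidate “anomalies” is an assumed number, chosen only to show the principle. A randomly oriented trajectory lands within 5 degrees of the ecliptic roughly 9 percent of the time, yet the chance that at least one of many comparably “unlikely” features turns up somewhere is high.

```python
import math

# Fraction of a sphere's surface lying within 5 degrees of a given plane:
# the probability that a randomly oriented trajectory sits that close to
# the ecliptic.
p_single = math.sin(math.radians(5))  # ~0.087

# Hypothetical: suppose there were 20 independent features one could have
# flagged after the fact (inclination, speed, brightness, timing, ...),
# each about as improbable as the ecliptic coincidence. The count of 20
# is an assumption for illustration only.
n_features = 20
p_at_least_one = 1 - (1 - p_single) ** n_features

print(f"P(this one feature)            = {p_single:.3f}")
print(f"P(at least one of {n_features} features) = {p_at_least_one:.3f}")
```

Under these assumed numbers, any single coincidence looks impressive (about 1 in 11), but finding at least one such coincidence among twenty candidates is the expected outcome, not a surprise. That is the post-hoc error: computing the odds of the feature you noticed instead of the odds of noticing some feature.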

What he is doing is playing coy, which is a common strategy for those who are pushing fringe ideas but trying to seem reasonable. All along he said – the most likely explanation is that it is natural. But then he follows up with – here are lots of (really bad) reasons why it is unusual and might be alien. This is a win-win for him – in the rare case that he turns out to be right, he is a genius and takes all the credit (keep in mind, if it were alien NASA would have found out all by themselves, without his prodding). If it turns out he is wrong, then he can claim he said all along it was likely to be natural. Either way he sucks up as much oxygen as possible from the media and distracts from the hard-working scientists at NASA doing good work. There is some great and interesting science here. The conclusion that this is almost certainly not an alien craft is a footnote at best, because there was never any good reason to hypothesize that it was.

Loeb is at it again (or still) with a recent post about a “mysterious” Mars cylinder (see the picture above the fold). This is also a common strategy of mystery mongers – comb through tons of data looking for anything unusual, then declare it a mystery. Again – looking for anomalies is a legitimate process of science. Blowing up apparent anomalies into a high-priority mystery is something that an attention-seeking crank would do. In this case others combed through NASA pictures from the rover and then sent them along to Loeb, as he is now a magnet for such things. And again – he admits this is most likely to be just a piece of debris from the rover itself, or its landing, or whatever. There is now debris on Mars from all the spacecraft we have sent from Earth, so when we encounter a bit of what looks like ordinary debris, that is most likely what it is.

But Loeb is saying that NASA should turn the rover around and travel a few days to go back and take a closer look at this debris. NASA has not responded to or commented on Loeb’s statement. This is actually a good operational definition of making too much of an apparent anomaly. Thinking that such anomalies, even when they are likely mundane, should take high priority and redirect our limited resources away from other scientific priorities, is worse than grabbing attention. It is trying to commandeer precious public resources to go on your own wild-goose chases, not because it is good science, but because it serves your own personal agenda. NASA is perfectly capable of determining the proper priorities for their own rover. They don’t have to go chasing after every piece of space junk because Loeb is trying to grab attention and justify his own dubious professional existence.

The post What Happened to Comet 3I/Atlas first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #1033: Skeptifying BBC's Uncanny

Skeptoid Feed - Tue, 03/24/2026 - 2:00am

Supplying some much-needed skepticism to an episode of the BBC podcast Uncanny.

Learn about your ad choices: dovetail.prx.org/ad-choices
Categories: Critical Thinking, Skeptic

Family Resemblance: Why Intelligent Extraterrestrials May Look Strangely Familiar

Skeptic.com feed - Mon, 03/23/2026 - 1:03pm

There’s a kind of storytelling tariff that sci-fi thrillers pay: the alien has to be visually—and physiologically—“other.” The more it resembles us, the less it feels like an invasion, and the less it sells popcorn. So, filmmakers crank the dials. Alien is the perfect example: a creature engineered for maximum dread—extra jaws, parasitic reproduction, and even acid for blood, a brilliant idea because it turns injury into a terrifying weapon. Great cinema. Bad biology.

Constraints, Not Monsters

But biology isn’t a special-effects studio. Evolution doesn’t get to pick any chemistry, any anatomy, any habitat, and call it a day. It’s boxed in by constraints: what molecules can build durable, information-rich structures; what solvents allow complex reactions; what temperatures keep chemistry running without shredding it; what gravity and atmosphere allow efficient movement; what energy sources are stable long enough for complexity to accumulate. And here’s the part science fiction usually skips: only a limited range of environments in the universe are likely to be hospitable to the long, fragile process that produces intelligent life at all. If that’s true, then the number of viable “starting conditions” shrinks—and the range of plausible outcomes shrinks with it. In other words, the universe may not be a boundless zoo of monster anatomies. It may be a narrower set of workable habitats repeatedly producing a narrower set of workable body plans—ones that, at a distance, start to look surprisingly familiar.

Carbon is the first and biggest constraint. If you want a system capable of building large, stable molecules that can both store information and do chemistry, carbon is the standout: it forms strong chains and rings, bonds flexibly with common elements (H, O, N, S, P), and supports the kind of combinatorial complexity life seems to require.1 Silicon gets invoked in sci-fi because it sits under carbon on the periodic table, but careful technical reviews conclude that silicon biochemistry faces steep hurdles compared with carbon—especially when you ask for the chemical diversity, solvent compatibility, and long-term stability you’d need for an evolving biosphere rather than a one-off laboratory curiosity.2 Carbon, by contrast, isn’t just “what we have”—it is the best scaffolding for life that the periodic table has to offer.

And carbon chemistry, at least as far as we understand it, almost certainly needs a liquid reaction medium. You can think of a solvent as evolution’s workshop: it transports reactants, buffers temperature swings, enables compartmentalization (membranes), and keeps chemistry running long enough for complexity to accumulate. NASA astrobiology treatments make the key point crisply: water is not merely “wet background”; its physical and chemical properties are unusually helpful for life-like chemistry.3 That doesn’t mean life must use water—serious work examines alternatives—but it does mean that when you ask where complex life is most likely to arise, you’re pulled toward a relatively narrow band of worlds with long-lived liquids, stable energy gradients, and conditions that support molecular complexity rather than constantly tearing it down.4


Once you accept those constraints, the “anything goes” alien starts to look less likely. A restricted set of workable environments tends to funnel evolution toward a restricted set of workable solutions—especially once organisms get big, mobile, and cognitively complex. From there, the argument becomes a cascade: mobility favors efficient body plans; efficient body plans often converge on bilateral symmetry for streamlined, directional movement; and bilateral movers tend to concentrate sensors and processing at the leading end—cephalization—because that’s the part that encounters the world first.5 

Finally, any lineage that’s going to build technology needs not just brains, but some way to manipulate the world with precision—one or more appendages capable of fine control. And Earth at least shows that “high intelligence” is not a one-time miracle: complex brains and sophisticated cognition have evolved multiple times in very different lineages, which is exactly what you’d expect if evolution keeps rediscovering similar solutions to similar problems.6

It Takes a Long Time

For most of Earth’s history, life was microbial. There are abundant signs of life by around 3.5 billion years ago, with plausible evidence reaching back to approximately 3.8 billion years ago and perhaps earlier, meaning single-celled organisms dominated the planet for the overwhelming majority of its existence.7 Complex multicellular life—and especially animals with nervous systems—arrives strikingly late by comparison: the Ediacaran record pushes recognizable multicellular complexity back to roughly 600 million years ago, and the Cambrian explosion (around 540 million years ago) is where diverse animal body plans and their organ systems, including nervous systems, become conspicuous in the fossil record.8 Even “brains,” in any familiar sense, are a comparatively recent product of animal history.

And yet, despite billions of years of evolutionary “experimentation” across oceans, lakes, microbial mats, reefs, forests, and ice ages, technological intelligence—the kind that builds radios, telescopes, and spacecraft—emerged only once, and only under a narrow set of ecological circumstances. That doesn’t prove intelligence is unique in the universe, but it strongly suggests that it’s constrained: not every habitable world is equally likely to produce it, and not every habitable environment on a given world is equally likely to nurture it. In other words, the universe may contain places where life is possible, but far fewer where the long chain of transitions to technology can reliably occur.


Long before our ancestors spent most of their time on the ground, their life was shaped in trees—an environment that rewards three-dimensional vision, fine depth perception, color discrimination, and exquisitely controlled hands, arms, and digits for climbing, grasping, and precise manipulation. When some of those primates began living in woodland–savanna mosaics, bipedal walking freed the already dexterous hands for carrying and tool use, effectively repurposing “arboreal skills” into a terrestrial, cumulative technology pathway. That transition—tree-built perception and manipulation deployed on open ground—may be a rare ecological combination, and it helps explain why large brains can evolve in many settings, yet only once has intelligence ratcheted up into an industrial civilization.9

If only a limited set of planetary and ecological conditions can support the long chain from chemistry to cognition, then evolution is repeatedly solving the same engineering problems under similar constraints. And once you narrow the environments where intelligence is even plausible, you also narrow the range of bodies that can thrive there. That doesn’t point to identical aliens—but it does make wildly un-Earthlike “monster designs” (think War of the Worlds with Tom Cruise) less likely, and a recognizable family resemblance—convergent, familiar motifs—more likely.

How the Ratchet Turns

As soon as hominins became more committed to life in woodland–savanna mosaics, a new class of problems moved to center stage: social problems. On open ground, survival often depends less on a single clever trick than on navigating alliances, rivalries, status, reciprocity, and betrayal inside a group—and sometimes between groups. That framing goes back to classic arguments that intellect evolved largely to manage social life.10 It’s also the logic behind the “social brain” tradition: as group life becomes more demanding, selection favors minds better at tracking relationships, intentions, and reputations at scale.11 

In that world, intelligence isn’t just tool-use; it’s the ability to detect cheaters and liars, anticipate others’ moves, and calibrate cooperation—exactly the kind of psychological machinery psychologists Leda Cosmides and John Tooby argued would be favored in repeated social exchange.12 And once you have minds built for social exchange, you have the psychological preconditions for reciprocal altruism—the willingness to help now in expectation of help later—which is one of the foundations of large-scale human cooperation that builds civilizations.13, 14 And when resources are patchy and competition is real, intergroup conflict can further raise the stakes, selecting for coordination, cohesion, and strategic behavior within coalitions. 


Language doesn’t merely label the world; it lets individuals coordinate plans, negotiate alliances, transmit know-how, and build reputations—turning individual cognition into group cognition.15 Most importantly, humans crossed a threshold into cumulative culture: shared intentions, teaching, and high-fidelity social learning allow useful innovations to persist and improve across generations, creating the technological “ratchet” that other smart animals rarely achieve. Humans are distinctive because our know-how doesn’t reset each generation; it accumulates—tools beget better tools in a cultural “ratchet.”16 But brains are expensive tissue, so any species that evolves them must solve an energy-budget problem—through diet quality, provisioning, and other tradeoffs that reliably pay the bill.17, 18

This is where fire and cooking matter: cooking increases the calories you can extract from food and reduces the time and gut investment needed to process it, freeing energy for a larger brain.19 Just as important, controlled fire is a gateway technology—warmth, protection, nighttime sociality, and eventually high-temperature chemistry.20 Intelligence exists in many lineages; an industrial pathway likely requires intelligence plus a controllable, high-energy lever and a dry-work environment where tools can persist, accumulate, and improve.

A skeptic might object that oceans already produce impressive intelligence—dolphins and whales, for example—so why didn’t technology take off there? The point isn’t that marine brains can’t be sophisticated; it’s that an industrial pathway needs more than cognition: it needs persistent tool chains and a controllable high-energy lever.


And that points to a subtle filter. Oceans can produce impressive cognition—on Earth in the form of cetaceans and, perhaps, the octopus—but water is hostile to the industrial ratchet: fire is hard to control, durable toolkits are harder to store and transport, and metallurgy is effectively off the table.21 On land—especially in variable, resource-patchy habitats—portable tools, teaching, and cooperative planning can compound. That’s why the story is less “savanna created intelligence” than “a particular ecological combination made technology cumulative.”

The decisive step wasn’t just smarter brains—it was solving the problem of memory across generations. Most animals, even very intelligent ones, learn largely within a lifetime. When the individual dies, much of that hard-won knowledge dies with it. Humans broke that bottleneck. We became a species whose best ideas can outlive their inventors, because we can store information—in other minds, in shared practices, and eventually in artifacts and symbols—and then transmit it with unusually high fidelity. That’s the ratchet: innovation that doesn’t evaporate.

This requires more than imitation. It requires teaching, joint attention, and shared goals—what some researchers call “shared intentionality”—so that skills can be transferred efficiently and improvements can accumulate rather than drift. Once a lineage crosses that threshold, technology starts to behave less like a set of clever tricks and more like a compounding system.22 

Language then acts as a compression algorithm for culture. It turns “watch me do this” into “here’s the rule,” making know-how portable, scalable, and teachable to people who never saw the original problem. It also enables coordination at scale—plans, roles, promises, reputations—so groups can build things no individual could.23, 24

And on land, cultural memory can be externalized. Tools can be cached, improved, standardized, and inherited. Eventually information migrates into marks, symbols, and writing—literal memory outside the brain. At that point, progress accelerates, because each generation starts not from scratch, but from a platform built by those before it.

So, What Might ET Look Like?

What does all of this imply about the appearance of extraterrestrial intelligence? Not that aliens will be “human,” as if evolution everywhere is destined to reproduce our exact anatomy. Evolution is too contingent for that. But it’s not completely random. If intelligence that builds technology is constrained by chemistry, physics, and ecology – and if similar constraints repeatedly force similar solutions—then truly alien intelligence may come with a surprisingly familiar set of design motifs.


Start with the big one: directional movement in a complex world. Once organisms become large, mobile, and behaviorally flexible, the “engineering problem” of getting around efficiently tends to favor bilateral symmetry—a front and a back, a left and a right—because it streamlines movement and organizes the body around a direction of travel.25 Bilateral movers also tend toward cephalization: concentrating senses and information processing at the leading end, because that’s the part that meets the environment first.26 In plain terms, if something is navigating the world and making decisions quickly, it’s likely to be built around a “front end” where sensing and control are concentrated (and, less glamorously, but no less practically, a “waste end” where, well, waste products are dispensed).

Then comes the key requirement for technology: manipulation. A brain can model the world all day, but technology requires a high-bandwidth interface between mind and matter: appendages capable of precise, repeatable control. On Earth, that role is played by hands and digits—originally honed for climbing and grasping in trees—later repurposed for shaping objects, carrying toolkits, and building cumulative tool traditions. This doesn’t mandate five fingers, or even “arms” in the human sense. But it strongly suggests that technological intelligence will be paired with one or more manipulators—structures evolved for fine control, not just locomotion.

Finally, technological intelligence requires culture that compounds. If each generation must rediscover the basics from scratch, there is no sustained trajectory toward industry. The transition to cumulative culture—high-fidelity social learning, teaching, shared intentions, and the ability to preserve and improve innovations—creates the technological ratchet.27, 28, 29 Once a lineage crosses that threshold, intelligence becomes more than cleverness; it becomes a system that accumulates, and that accumulation eventually externalizes into tools, structures, symbols, and records. In other words: even if the bodies vary, a technological species will likely have something analogous to language, teaching, and external memory—because without those, the ratchet stalls.30, 31

Put those pieces together and a rough “family resemblance” emerges: not humans exactly, of course (there’s contingency again), but mobile, bilateral organisms with front-loaded sensing/processing, manipulators, and a cultural transmission system that lets knowledge outlive individuals. That is the opposite of the cinematic monster. It’s less a nightmare creature and more a familiar engineering solution—built under unfamiliar skies.

Caveats and Conclusions

A skeptic’s first objection is an obvious one, namely that Earth is a sample size of one. Any story about extraterrestrial biology risks generalizing from the particular to the universal. That caution is warranted. Our lineage’s specific path—arboreal heritage, bipedalism, the woodland–savanna mosaic—may be historically contingent. Different worlds could produce intelligence by different routes (although it is not clear how), and even on Earth, high cognition appears in multiple lineages.32 So, the claim here should be modest: not “ET must look like us,” but “constraints bias evolution toward a limited menu of workable solutions.”

The Grey is a popular alien figure because it’s a humanoid distilled to a few cues: bilateral symmetry, a head-dominated body plan, and exaggerated eyes. Those broad motifs actually align with what a constraint-based view would predict. But the specific “Grey” is also a cultural icon with a traceable modern history—especially after Whitley Strieber’s Communion (1987) and its widely reproduced cover image. So, it’s better understood as a modern cultural meme than as a biologically derived prediction.

The “Grey” alien.

A second objection is this: what if technology doesn’t require fire and metallurgy? Perhaps some species develop a different high-energy lever or a different materials pathway. That’s possible. But the broader point still holds: industrial-scale technology requires some means of harnessing scalable energy and building durable tool chains. Whatever substitutes exist, they still must operate under the same physical logic: persistent artifacts, repeatable processes, and the ability to store and transmit complex know-how over long spans of time. 

For example, we know Earth’s atmosphere didn’t always permit fire because oxygen arrived late—and we can see that transition written in the rocks. For much of the Archean, oceans carried abundant dissolved ferrous iron (Fe²⁺); when oxygen produced by early photosynthesizers (cyanobacteria, traditionally called blue-green algae) began reaching surface waters, it oxidized Fe²⁺ to insoluble ferric iron (Fe³⁺) that precipitated in vast banded iron formations (BIFs), essentially recording oxygen’s first sustained appearance as it was “soaked up” by iron sinks. Around 2.4 to 2.3 billion years ago—during the Great Oxidation Event—atmospheric O₂ rose from trace levels to much more significant amounts, while BIF deposition eventually waned as the ocean’s iron sink diminished and broader oxygenation progressed.
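The iron-oxygen chemistry described above can be summarized in one net reaction (a standard textbook simplification; the actual mineralogy of BIFs, dominated by iron oxides such as hematite and magnetite, is more complicated):

```latex
% Dissolved ferrous iron oxidized by photosynthetic oxygen,
% precipitating as insoluble ferric hydroxide (later converted to
% the iron oxides preserved in banded iron formations):
4\,\mathrm{Fe^{2+}} + \mathrm{O_2} + 10\,\mathrm{H_2O}
  \;\longrightarrow\; 4\,\mathrm{Fe(OH)_3}\!\downarrow + 8\,\mathrm{H^+}
```

Each mole of O₂ removes four moles of dissolved Fe²⁺ from solution, which is why atmospheric oxygen could not accumulate until the ocean’s iron sink was largely exhausted.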

That history matters for our argument because recognizable, combustion-driven technology depends not just on brains, but on a planet reaching an oxygen state that reliably supports open-air fire and high-temperature chemistry. This is the “oxygen bottleneck” for technospheres: combustion-driven technology is not guaranteed by intelligence alone; it depends on planetary conditions that enable certain kinds of energy use.33

So, the claim is not inevitability, but probability. Constrain the environments, and you constrain the solutions. And that means the wildest designs of monster cinema are not the most realistic expectation. They are the least constrained.

Science fiction thrives on the alien as shock: the creature that breaks every rule and looks like nothing that ever walked, swam, or crawled on Earth. Alien is a masterpiece precisely because it is so unconstrained—a physiology engineered for dread. Great theater. But real evolution does not have that freedom. Biology is boxed in by chemistry, by solvents, by energy budgets, by gravity and materials, by the logic of movement and sensing, and by the requirements of cultural accumulation.


That’s why the best prediction for extraterrestrial intelligence is not a monster, but a constrained organism that has solved a familiar set of problems in a workable way: a body built for efficient movement, sensors and processing concentrated forward, appendages capable of precise manipulation, and a culture that can store and transmit information across generations so that technology compounds. The details will be alien. The motifs may not be.

If we ever detect a true technosignature—or one day meet its makers—the surprise may not be how strange they are. The surprise may be how recognizable the underlying design logic feels.

Categories: Critical Thinking, Skeptic

Another Bold Battery Claim

neurologicablog Feed - Mon, 03/23/2026 - 7:03am

In the decades before the Wright brothers’ historic 1903 flight at Kitty Hawk there were many claims of powered heavier-than-air flying machines. There were also many false sightings of “airships,” amounting to a form of mass delusion. But the false claims and false sightings do not change the fact that the technology for powered flight was right on the cusp, and that the Wright brothers crossed that threshold in 1903, leading ultimately to the massive industry we have today. This is not surprising. There is often a sense, in the industry and spreading to the public, that the technological pieces are in place for a significant application breakthrough. Today this is more true than ever, with a vibrant industry of tech news, showcases, conferences, blogs, podcasts, etc. I cover plenty of tech news here. It’s interesting to try to glimpse what technology is right around the corner. Any technology that is closely watched and much anticipated is likely to generate lots of premature hype and false claims.

This is definitely true for battery technology. We are arguably in the middle of a massive effort to electrify as much of our industry as possible, especially transportation. Maximizing intermittent renewable energy sources would also be greatly facilitated by advances in energy storage. Meanwhile, electronic devices are becoming increasingly integrated into our daily lives. Advances in battery technology can have a dramatic impact on all of these sectors, and batteries are likely to be a critical technology for the next century. So it’s no surprise that there is a lot of hype surrounding battery tech, some of it legitimate, some of it fake, and some just premature. But this hype does not change the fact that battery technology is rapidly improving, and the hype will become reality soon enough (just like the Wright flyer).

When it comes to EV batteries we all have a wish-list of features we would like to see. I now own two EVs, and they are the best cars I have ever owned. At least for my personal situation (I live in an exurb and own my own parking spots), EVs are great, and current battery technology is more than adequate for them. But sure, I live every day with the reality of how advances in battery tech will make EVs even more convenient and useful. I have detailed the wish-list before, but here it is again: increased capacity, in terms of both volume and especially weight (specific energy), to decrease the weight while increasing the potential range of EVs; faster charging (with the holy grail being the ability to fully recharge an EV as fast as you can fill a car with gas); a long charge-discharge cycle lifespan (longer than the lifespan of the car); usefulness across a wide range of temperatures; stability (does not spontaneously catch fire); and low cost, which is tied to being made from cheap and abundant elements. This last feature also means that the battery is not dependent on rare elements whose supply line is largely controlled by hostile or conflict-ridden countries.

Making a significant breakthrough in any one of these features is big news. This is why Donut Labs’ claim to have simultaneously improved all of these wish-list features at once was met with so much skepticism. (I will give a quick update on Donut Labs at the end of this post.) Now we have another bold claim, this one from a US company based in Dallas. Their claim focuses on just one feature of EV batteries, the recharge time, though they also claim a reduced need for cobalt, which is nice. The company is OMI, which claims to have innovated a new iron-based cathode that allows an EV to recharge from empty to full in 3 minutes. That would be huge – 3 minutes is the holy grail, about as long as it takes to fill a tank of gas. Technically they claim a 20C recharge rate. The “C” is based on a convention, with 1C meaning that a battery can fully charge in 1 hour. So a 20C battery, by definition, would recharge fully in 3 minutes. For reference, most fast-charging EV batteries today are rated at 8-12C, or a 7.5 to 5 minute recharge time. This is already pretty good, and as you can see there is a diminishing return with increased C rating when translated into recharge time. Of note, however, these ratings are under ideal conditions. In the real world we are still looking at 10-12 minute recharge times for the fastest recharging batteries.
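The C-rate convention above is just arithmetic: the ideal full-charge time in minutes is 60 divided by the C rating. A minimal sketch (a toy calculation reproducing the figures quoted here, not a model of real charging curves, which taper as the battery nears full):

```python
# C-rate convention: 1C means a full charge in one hour,
# so the ideal full-charge time scales as 60 / C minutes.

def recharge_minutes(c_rate: float) -> float:
    """Ideal full-charge time in minutes for a given C rate."""
    return 60.0 / c_rate

# The ratings discussed in this post:
for c in (1, 8, 12, 20):
    print(f"{c:>2}C -> {recharge_minutes(c):.1f} min")
# 1C -> 60 min, 8C -> 7.5 min, 12C -> 5 min, 20C -> 3 min
```

The diminishing return is visible in the 1/C shape: going from 1C to 8C saves over 50 minutes, while going from 12C to 20C saves only 2.
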

To me this is not a big deal at all. Even when I use a charger that requires 20 minutes to go from 20-80% charge, it’s rare I am doing that on the road (only during long trips), and it’s relatively easy to plan that around a pit stop anyway. Go to the restroom, get a snack, and by the time you get back to your car you are done or almost done. Any improvement from there is icing on the cake. Ten to twelve minutes would be fantastic. Three minutes is insane. Keep in mind, 99% of the time I am slow charging my EVs at home. But sure, that occasional time you are driving home late at night and you need a top off to make it home, and you have nothing to do but wait there while your car recharges, faster is definitely better.

So how reliable is this claim from OMI? It looks pretty credible. They are calling the technology LnFP (lithium nano-ferrophosphate). This is a variation on the established LMFP technology, which uses manganese in the cathode. Doping the cathode with manganese allows for faster charging. OMI is not revealing the exact chemistry of their new cathode (industry secrets and all), but will only say that it is nano-structured, hence the “nano.” Nothing there breaks the laws of physics, and this all seems reasonably incremental. But again, it is not uncommon to prematurely hype plausible incremental advances, especially ones that would give a company dominance in an industry. Claim unlimited free energy and you are just an obvious crank or a fraudster. Claim a plausible incremental advance, and you generate excitement in the industry. But that still leaves the question – did they really achieve this, or are they hyping a lab phenomenon, or are they pulling a “fake it till you make it” maneuver to goose funding?

The broader context here is that OMI is not one of the major players in battery technology, the companies investing billions in a global race to push the industry forward and grab market share. They are a small startup, although they have been providing components to large companies like Harley-Davidson. Are we seeing the democratization of battery tech, with spunky small startups leveraging creativity and innovation to challenge the major players? Or is this mostly small startups trying to make a quick score by making bold claims and either attracting big funding or getting snapped up by one of the big boys? OMI says its claims have been validated, but I cannot find any independent third-party validation. They also claim they will go into production in 2027. That is the ultimate test – can they mass produce these batteries at a competitive price, and do they actually work as advertised in products?

Speaking of which, two months ago Donut Labs announced to the world a dream solid-state battery with all the wish-list features. Now they are claiming independent testing and validation, but again it is not quite worthy of the hype they are putting out. Finland’s state-owned VTT Technical Research Centre has tested some of its features. It tested the rapid recharge time, revealing a 0-80% charge in 4.5 minutes, with a 5C rating. Testing has also demonstrated that their solid-state battery is not a supercapacitor, which was one of the theories. But that, so far, is it. The 400 Wh/kg specific energy has not been validated, and that is really the main feature. So far we have more of a glimpse than total verification. So I am still withholding ultimate judgement until all the evidence is in, but it still seems sketchy to me. I hope that everyone is wrong, and Donut Labs has really achieved what they claim. But that hope, I think, is the point.

The post Another Bold Battery Claim first appeared on NeuroLogica Blog.

Categories: Skeptic

Selling Fear and Half-Truths: The Latest 60 Minutes ‘Exposé’ on Havana Syndrome

Skeptic.com feed - Sat, 03/21/2026 - 3:09pm

“A brain biased toward seeing meaning rather than randomness is one of our greatest assets. The price we pay is occasionally connecting dots that don’t really belong together.”1 –Rob Brotherton

For nearly a decade, a mysterious ailment known as “Havana Syndrome” has been portrayed as proof that American diplomats and intelligence officers have been attacked by a foreign adversary using a secret energy weapon. Few outlets have promoted this narrative more forcefully than the CBS television news magazine 60 Minutes, which has presented the saga as a chilling geopolitical mystery. Yet after years of investigation, the U.S. intelligence community has concluded that such attacks are “highly unlikely.” So how did one of America’s most respected news programs become so invested in a story that the evidence increasingly contradicts? The answer tells us less about the shadowy world of spycraft and secret weapons, and more about the psychology of belief, the power of social contagion, and the media’s enduring fascination with invisible enemies.

60 Minutes is widely regarded as one of the most prestigious and successful news programs in American television history. For decades it has been the gold standard in investigative reporting and has won every major award in broadcast journalism since its inception in 1968.2 Over the past decade the program has aired four exposés on “Havana Syndrome,” a mysterious clustering of health complaints first noticed by U.S. government officials in Havana, Cuba in 2016 (hence the name).3 However, for the past three years its reputation has been tarnished by two separate intelligence assessments that have challenged and discredited key elements of its investigations.4

Immediately after their third report aired in March 2024, which claimed that an elite Russian military unit was targeting Americans with an energy weapon, the segment prompted calls for a renewed congressional investigation.5 Yet the CIA Director in the Biden Administration, William Burns, responded to the broadcast by issuing a firm assurance that the claims had been thoroughly investigated and were unfounded.6 This conclusion was reaffirmed in an updated intelligence assessment that was issued in 2025.7

On Sunday March 8, 2026, 60 Minutes aired its fourth investigation into “Havana Syndrome” in nine years, once again making dramatic claims that American spies, diplomats, and military personnel have been targeted by a mysterious weapon, first in Havana, and later around the world.8 The three previous segments were critiqued in the pages of Skeptic as they relied heavily on speculation with limited physical evidence, while largely excluding skeptical perspectives.9 The latest chapter in this saga is no different, repeating old, discredited claims and introducing a striking new allegation that the government purchased a Havana Syndrome-type device on the Russian black market.10

The “Attacks” on Chris and Heidi

In the latest segment, narrator Scott Pelley interviews Chris (last name withheld) who worked on top secret spy satellites near Washington DC, and claimed to have been attacked several times between August and December 2020. Pelley implies that Chris had been targeted with an energy weapon, describing him as having been “struck by an unseen force.” He said the first incident felt like someone punched him in the throat, his left ear was clogged, and a sharp pain shot down his left arm. During the second incident, in the kitchen of his Virginia home, he suddenly felt like a vice was squeezing his head, and he became disoriented, confused, and dizzy. A third episode occurred in his living room when he was stricken with a cramping of his back muscles “like a charley horse,” accompanied by a hot, sharp pain. In the final episode, he woke up feeling like a vice was gripping his brainstem and he experienced “a full body convulsion.” 

While the segment frames Chris’s experience as a targeted strike, his clinical presentation is consistent with common neurological and psychological conditions such as migraines and anxiety disorder. Migraines often cluster over several months and grow progressively worse before resolving. His description of vice-like pressure is commonly reported by migraine sufferers. Symptoms typically involve head pressure and pain, dizziness, confusion, disorientation, muscle spasms, and throat sensations. They often include unilateral symptoms (affecting one side of the body) such as the clogging of his left ear and the shooting pain down his left arm.  

That he experienced several distinct episodes with differing symptoms raises further questions about the likelihood of an attack. Why would the same weapon produce such different effects? Chris’s other symptoms such as throat tightness (globus) and muscle spasms that grew progressively worse, may reflect anxiety from someone who was working in an extreme stress environment (a classified spy satellite program). The least likely explanation for his symptoms is an attack by a directed energy weapon. 


His partner Heidi described waking up with joint pain that was concentrated in her left shoulder. Pelley said that “bones in her shoulder were dissolving,” and she was diagnosed with osteolysis, which required an operation. The implication was that she too had been struck with the same mysterious weapon. But osteolysis of the shoulder is a well-known condition that is becoming increasingly diagnosed in women. It is associated with repetitive strain injuries, weightlifting, trauma, and inflammation, not mysterious external agents.11 Heidi’s shoulder condition is an entirely different pathology from that of Chris. It is far more probable that two people living together simply developed two unrelated conditions.  

Pelley then mentions several other victims who supposedly had similar symptoms: an FBI agent who experienced a drilling sensation in her right ear; a Commerce Department official who reported severe head pressure and ear pain; and the wife of an official who felt a piercing pain and pressure in her left ear and a headache. He asserts that a striking aspect of these stories is that “people who never met tell it the same way.” A more plausible explanation is that they were suffering from vestibular disorders: conditions that affect the inner ear and parts of the brain that regulate balance and spatial awareness. The symptoms described in the 60 Minutes interviews include ear pain and pressure, headaches and head pressure, and unusual sounds and sensations in the ear. The descriptions of the victims would be familiar to any vestibular neurologist treating migraines and inner ear conditions including unusual ear sensations, stabbing pains, or a perception of drilling, pulsation, or vibrations.12 It is estimated that one-third of adults over 40 will experience vestibular dysfunction.13

The Omission of Key Information

The 60 Minutes narrative survives primarily through a strategic omission of key facts. It fails to mention that the foundational studies in the Journal of the American Medical Association (JAMA) that gave rise to the belief that a mysterious weapon had injured American personnel in Cuba, were mired in controversy. This included internal ethics complaints, the withdrawal of authors, and accusations of scientific misconduct. In doing so, the program presents a house of cards as a fortress of settled science. The first study appeared in JAMA in February 2018, and caused a sensation with claims that the patients suffered brain damage.14 Prior to its publication, UCLA neurologist Dr. Robert Baloh, who developed some of the tests that were used in the study, was asked by the editors to review the findings. He found the manuscript to be laden with inconsistencies, described the claims as “science fiction,” and recommended against acceptance.15

Three of the study’s original authors removed their names just prior to publication because they were refused access to the data or earlier revisions of the manuscript. One of them, Dr. Carey Balaban, an ear, nose, and throat specialist at the University of Pittsburgh, was so disturbed by this that he filed an ethics complaint over what he described as potential scientific misconduct.16 When the study appeared, there were calls by neurologists for its methods to be clarified or the study retracted.17 A later attempt to clarify the study’s findings was described by University of Edinburgh neurologist Sergio Della Sala as incomprehensible.18 Prior to publication, information had been leaked to the media that several of the patients suffered white matter tract changes in their brains, prompting dramatic headlines about brain damage. However, when the study appeared, the prevalence of white matter changes fell within the normal range.19

A second JAMA study, in 2019, was equally controversial. It found brain anomalies in a small group of victims, once again prompting sensational headlines about brain damage. The study’s lead author, Dr. Ragini Verma, even described the differences in brain images of “Havana Syndrome” victims and a control group as “jaw-dropping.”20 Yet such findings are common in small cohorts and are consistent with what one would expect to see in a group of people under prolonged stress. The authors even admitted that the anomalies were so minor that they could have been caused by individual variation.21 Another problem was that 12 of the Havana Syndrome patients had pre-existing histories of concussion, compared to none in the control group. Despite this, many media outlets had a field day citing a few rogue scientists who proclaimed that it was clear evidence of an attack by a microwave weapon.

Dubious Beginnings 

The 60 Minutes segment also failed to mention that social contagion may have played a role in the initial spread of “Havana Syndrome.” CIA analyst Fulton Armstrong would later reveal that the undercover intelligence agent in Havana who first reported the mysterious sounds and believed they were responsible for his health issues, had engaged in a vigorous campaign to persuade colleagues that the sounds were significant. “He was lobbying, if not coercing, people to report symptoms and connect the dots,” Armstrong said.22 The man, who has since been dubbed “patient zero,” later attended a gathering of embassy personnel and played the recording of his “attack,” encouraging them to report their symptoms as he was convinced that they too had been targeted. His recording was analyzed by government scientists and identified as crickets.23 In fact, eight of the first group of victims in Cuba who reported feeling unwell and hearing sounds, recorded their “attacks.” They were later identified as the mating call of the Indies short-tailed cricket.24

Soon American and Canadian diplomats stationed in Havana were on the lookout for strange sounds and health complaints. Eventually the U.S. government alerted all of its active military personnel and embassy staff around the world to be vigilant for mysterious sounds and “anomalous health incidents.” In response, there were over 1,500 reports of possible attacks. The problem with these alerts is that “Havana Syndrome” symptoms are common in the general population and include headaches, nausea, dizziness, forgetfulness, difficulty concentrating, tinnitus, fatigue, facial pressure, hearing loss, ear pain, trouble walking, depression, irritability, and even nose bleeds.

One study found that the average person experiences five different symptoms in any given week. Thirty-six percent noted fatigue; 35 percent reported headaches; nearly 30 percent said they had insomnia; 15 percent had difficulty concentrating; 13 percent reported memory problems; and roughly 8 percent noted nausea and dizziness.25 These symptoms overlap with those attributed to “Havana Syndrome.” When one eliminates claims of brain damage and hearing loss (which were never demonstrated), one is left with an array of exceedingly common symptoms.

A Fixation on David Relman

The 60 Minutes segment includes extensive interviews with Stanford University microbiologist David Relman, who headed two panels that both concluded that pulsed microwave radiation was likely involved in some cases. As with the earlier 60 Minutes investigations, the government intelligence assessments on “Havana Syndrome” have rejected his conclusions. One of Relman’s panels said it was not possible to assess the involvement of social contagion as there was no data on the early spread.26 Yet the spread from “patient zero” to fellow spies and diplomats in Havana has been well-documented and was widely known over a year before the panel issued its findings in December 2020.27 The same panel interviewed fringe figures such as Dr. Beatrice Golomb, a researcher at the University of California, San Diego, known for her extreme views on mass psychogenic illness, which she believes does not exist.28 Relman’s 2022 panel concluded that social contagion could not have affected spies and diplomats operating in Havana because they were highly educated and trained to deal with stress.29 This is a common fallacy.30 These conclusions may not be surprising given that Relman’s panels failed to interview a single prominent skeptic.

Scott Pelley complains that the panels’ conclusions have been ignored by the intelligence community. Relman told Pelley that it was embarrassing and insulting that the victims have been “dismissed as malingerers or people who are manufacturing things.” Pelley concurred by saying that the American government “has doubted their stories” and they have been labelled as “delusional.” These claims are misleading. In 2023, the Office of the Director of National Intelligence stated unequivocally that it was the consensus of the intelligence community that the symptoms exhibited by “Havana Syndrome” sufferers are real, but it was “highly unlikely” the stimulus was a directed energy weapon from a foreign adversary. Instead, they attributed the complaints to an array of factors including pre-existing conditions, conventional illnesses, environmental causes, and social factors (a clear reference to mass suggestion and social contagion). The intelligence assessment explicitly states that their findings “do not call into question the very real experiences and symptoms that our colleagues and their family members have reported.”31 A second intelligence assessment issued in 2025 reached a similar conclusion,32 while a recent study by the National Institutes of Health found no evidence of brain damage.33  

The Portable Microwave Device

The 60 Minutes segment also reported that in 2024 undercover U.S. government agents obtained a portable microwave weapon from a Russian criminal network and have tested it on animals. They said that the Pentagon-funded mission to obtain the weapon cost about $15 million. For the centerpiece of the story, they provide remarkably few details. Pelley said, “Our confidential sources tell us the still classified weapon has been tested in a U.S. military lab for more than a year. Tests on rats and sheep show injuries consistent with those seen in humans.” The problem with this claim is that there is no credible evidence that the victims of “Havana Syndrome” were injured by a weapon. 60 Minutes didn’t break this story; that distinction goes to CNN, which reported this year on its own investigation into the same device, but its perspective was in sharp contrast to the 60 Minutes claims. The CNN sources said there was ongoing debate and skepticism over attempts to link the device to “Havana Syndrome.”34

The claims by 60 Minutes are based on anonymous sources rather than technical reports, there are no test results, and they did not even obtain a picture of the device! Even after the device was acquired, the updated assessment on “Havana Syndrome” published in 2025 continued to maintain that the involvement of an adversarial weapon was highly unlikely. The U.S. and foreign governments have long conducted research on potential new weapons, so the existence of the Russian device should come as no surprise. Yet there is a big difference between testing a device and producing an effective, practical weapon, a major impediment being the laws of physics. The details surrounding the device and who created it are nebulous. For instance, how could a Russian criminal syndicate obtain such a highly classified device and offer it for sale on the black market without the knowledge of Russian intelligence, or U.S. intelligence for that matter?

A Media Zombie That Won’t Die

This is not the first claim of its kind. In February 2026, the Washington Post reported that a Norwegian government researcher had built a device that was purportedly behind the Havana Syndrome “attacks.”35 Unnamed sources claimed that after exposing himself to pulsed microwave radiation, he developed neurological symptoms consistent with the victims. The report stated that after the Norwegian government informed the CIA, officials from both the White House and Pentagon visited Norway on two occasions to learn more. However, the Norwegian government says they know nothing about it. An investigation by one of the country’s leading newspapers was unable to identify any such researcher, while a microwave expert at the Norwegian University of Science and Technology, Trym Holter, said any such study would have required ethics approval and been carried out in a controlled fashion with test subjects. He said for someone to have conducted such an experiment on themselves would have been “completely crazy” and he questioned whether any such experiment had ever occurred.36

This pattern of credulous reporting is not limited to CBS News or the Post. Recently British journalist Nicky Woolf wrote a sensational article in the Sunday Times claiming that the evidence for a directed energy weapon is now overwhelming, while omitting the US intelligence community’s own conclusions to the contrary.37 He stated (falsely) that “many of the early cases didn’t know about each other,” and repeated the debunked claim that during the recent US raid in Venezuela, the American military used a directed energy weapon to incapacitate enemy soldiers.38

Historical Precedents

Unfortunately, 60 Minutes has repeatedly focused on one side of the story instead of presenting competing perspectives. A key problem when evaluating controversial claims is that once investigators become convinced that a hidden adversary exists, the belief itself can shape how evidence is interpreted. History is replete with examples. During the Salem witch-hunts of 1692, an idea spread that witches were attacking members of the community. Before long, over 200 residents were accused of consorting with the devil. During the “Red Scare” of the 1950s, a belief spread that communist sympathizers had infiltrated communities across the United States. In response, scores of innocent people were blacklisted, often on the flimsiest of evidence.

The enduring lesson of “Havana Syndrome” is not secret weapons but the psychology of belief. The producers at 60 Minutes continue to focus on exotic explanations while ignoring mundane ones. The colloquial term for this is “doubling down”—stubbornly clinging to a discredited hypothesis in the face of compelling evidence to the contrary. In the case of CBS News, it may be a subconscious attempt to avoid the embarrassment of having to correct the record after having been mistaken. The continued advocacy by David Relman and Scott Pelley for the microwave weapon hypothesis despite intelligence assessments to the contrary exemplifies what psychologists refer to as “belief perseverance”: the well-documented tendency to maintain deeply held beliefs in the face of contrary evidence.

Perhaps the most troubling reason for this one-sided reporting is a glaring conflict of interest: the producers behind all four 60 Minutes segments are marketing a book on the subject. The Havana Syndrome: Secret Weapons, a Government cover-up, and the Greatest Spy Mystery of Our Time is scheduled to be published this fall, with an introduction by none other than Scott Pelley himself.39 By continuing to air these “exposés,” CBS News is effectively providing a multi-million-dollar infomercial for a product that relies on a spy mystery narrative to drive sales. The authors say their reason for writing the book is “to tell the whole story” including “the cover-up.” This is ironic given that their reports have consistently left out key parts of the narrative.40

Chasing Shadows

The histories of science and journalism are replete with examples of how institutions can cling to persuasive stories long after the evidence begins to unravel. In the 1840s Hungarian physician Ignaz Semmelweis produced strong empirical evidence that handwashing among midwives dramatically reduced the deaths of mothers from childbed fever, yet his findings were resisted for decades by the medical establishment.41 More recently, in the lead-up to the Iraq War many media outlets published erroneous stories that Saddam Hussein had obtained weapons of mass destruction (WMDs) even though United Nations weapons inspectors in the field insisted they had found no clear evidence.42 This led to an apology by The New York Times for publishing claims that were never independently verified, and the Washington Post acknowledging that skeptical stories were frequently “pushed to the back of the paper” while pro-WMD claims dominated the front pages.43

When investigators become convinced of the existence of a hidden adversary, ambiguous evidence can take on new meaning and be seen as patterns in a grand conspiracy. Anonymous sources become credible witnesses. Coincidences can appear to be coordinated acts of aggression, and mundane symptoms are redefined as signs of an attack. As physicist Richard Feynman famously warned: “The first principle is that you must not fool yourself—and you are the easiest person to fool.”44  Throughout history, when a seductive explanation takes root—whether in the form of germs, hidden arsenals, or mysterious attacks—ambiguous signs are reinterpreted as confirmation rather than treated with skepticism. 

The promotion of ghostly enemies while omitting key facts is a dangerous game because it expends valuable resources at a time of confirmed threats to our homeland. This pursuit of unicorns over horses is a cautionary tale of how fear, expectation, and sensational storytelling can create a phantom menace where there is no concrete evidence that one exists.

Beliefs Have Consequences 

Unfounded beliefs and pseudoscientific ideas can have serious consequences by distorting scientific understanding, propagating myths, and shaping public policy.

Shortly after the airing of the 60 Minutes episode, the House Intelligence Committee met on March 19th, with its chair, Republican Rick Crawford, asserting that the 2023 and 2025 assessments, which deemed the involvement of an energy weapon “highly unlikely,” were influenced by members of the Biden administration who have been covering up the ‘real’ cause – attacks by a foreign adversary.

National Intelligence Director Tulsi Gabbard, FBI Director Kash Patel, and National Security Agency acting director William Hartman all agreed that there was an urgent need to retract the current assessment.

The last major hearing on ‘Havana Syndrome’ was conducted by the House Committee on Homeland Security on March 8, 2024. The hearing was titled: “Silent Weapons: Examining Foreign Anomalous Health Incidents Targeting Americans in the Homeland and Abroad.”

The title reflects the biased nature of the hearing. Not surprisingly, the witnesses were all supporters of the energy weapon hypothesis.

I was originally asked if I would be willing to testify at this hearing, only to be later told my testimony was no longer required.

Remembering Robert Trivers

Skeptic.com feed - Fri, 03/20/2026 - 10:02am

Robert Trivers, who died on March 12, 2026, was arguably the most important evolutionary theorist since Darwin. He had a rare gift for seeing through the messy clutter of life and revealing the underlying logic beneath it. E. O. Wilson called him “one of the most influential and consistently correct theoretical evolutionary biologists of our time.” Steven Pinker described him as “one of the great thinkers in the history of Western thought.”

I was Robert’s graduate student at Rutgers from 2006 to 2014. Long before I knew him personally, however, he had already established himself as one of the most original and insightful scientists of the twentieth century. In an astonishing series of papers in the early 1970s, he changed forever our understanding of evolution and social behavior.

The first, published while he was still a graduate student at Harvard, confronted one of the deepest problems in evolutionary theory: how can natural selection favor cooperation between non-relatives? In The Evolution of Reciprocal Altruism, Trivers proposed that cooperation could evolve when the same individuals interacted repeatedly, making it advantageous to help those who were likely to help in return while avoiding cheaters who took benefits without reciprocating—i.e., ‘you scratch my back, I’ll scratch yours.’ The paper offered an elegant solution to the problem of how natural selection can “police the system” and has had enormous implications for human psychology, including our sense of justice, with parallels in other mammals such as capuchins and dogs.

The next year, in 1972, Trivers published his most cited paper, Parental Investment and Sexual Selection. Here he offered a unified explanation for something that had puzzled biologists since Darwin. Writing perhaps the most famous sentence in all of evolutionary biology—“What governs the operation of sexual selection is the relative parental investment of the sexes in their offspring”—Trivers threw down the gauntlet and revealed a deceptively simple principle that reorganized the field. From that insight flowed one of the most powerful and falsifiable ideas in modern science: the sex that invests more in offspring will tend to be choosier about mates, while the sex that invests less will compete more intensely for access to them.

Two years later, in 1974, Robert once again gave birth to an entirely new field of study with Parent-Offspring Conflict.  In it, he built on William Hamilton’s theory of inclusive fitness to show that parents and children have divergent genetic interests. Because a parent is equally related to all of its offspring, while each offspring is related to itself more than to its siblings, conflict is built into the family from the beginning. With that insight, Trivers revealed that some of the most intimate and emotionally charged features of life—begging, weaning, sibling rivalry, tantrums, parental favoritism, even the distribution of love and attention within families—all could be understood as the product of natural selection acting on family members with conflicting evolutionary interests.

In other papers, Trivers made wide-ranging predictions about the conditions under which parents should produce or invest more in sons than daughters, how female mate choice can favor male traits that benefit daughters, why insect colonies are structured by conflicts over sex ratios, reproduction, and control, and how self-deception may have evolved as a way of more effectively deceiving others.

Each of these papers spawned entirely new research fields, and many have dedicated their careers to unpacking and testing the implications of his ideas. As Harvard biologist David Haig put it, “I don’t know of any comparable set of papers. Most of my career has been based on exploring the implications of one of them.” Indeed, it is hardly an exaggeration to say that his ideas gave birth to the field of evolutionary psychology and the whole line of popular Darwinian books from Richard Dawkins and Robert Wright to David Buss and Steven Pinker. 

To know Robert personally, however, was to confront a more uneven and less orderly organism—to use one of his favorite words—than the one revealed in his papers. The man who explained the hidden order in life often struggled to impose order in his own. “Genius” is one of the most overused words in the language, with “asshole” not far behind, and I have known few people who truly deserved either label. Robert deserved both. He could be genuinely funny, extraordinarily generous, and breathtakingly perceptive, but also moody, childish, and needlessly cruel.

Bob and other committee members after my dissertation defense (2014) | Bob with undergraduate students (Jamaica, 2010)

Robert taught me that writing was endless revision and paying attention to the tiniest of details. He went through seven drafts of Parental Investment and Sexual Selection and frequently quoted Ernst Mayr telling him that papers are never finished, only abandoned. He used to call me “slovenly,” but more than once returned a draft of mine with a piece of his own dried lettuce stuck to it.

He had an uncanny ability to see the obvious. I used to joke that one reason he was so good at explaining behaviors the rest of us took for granted was that he was like an alien visiting our planet trying to make sense of our strange habits—why we invest in our children, why we are nice to our friends, why we lie to ourselves. He told me that conflict with his own father was part of the inspiration for parent-offspring conflict and one of the observations that led to his insight into parental investment came from watching male pigeons jockeying for position on a railing outside his apartment window in Cambridge.

Robert also had a respect for evidence and for correcting mistakes that I’ve rarely seen among academics, a group not known for their humility. He cared more about truth than about his reputation and retracted papers at great cost to himself and his career when he thought there were errors. He also knew that he was standing on the shoulders of the giants who had come before him. He wrote that “the scales fell from his eyes” upon reading Bateman’s 1948 Heredity paper on fruit flies, which showed that males vary more than females in reproductive success, crediting it for his insights into why males compete more for mates and females tend to be choosier; and he acknowledged that George Williams had already anticipated the importance of sex-role-reversed species in Parental Investment and Sexual Selection. Indeed, he once described most of his insights into social behavior as those of W. D. Hamilton plus fractions.

He was a lifelong learner with a willingness to do hard things. After his astonishing early success, he could have done what many academics do: stay in his lane, guard his territory, and spend the rest of his career commenting on ideas he had already had. Instead, in the early 1990s he saw that genetics mattered and spent the next fifteen years trying to master it. The result was Genes in Conflict, the 2006 book he wrote with Austin Burt, which pushed his interest in conflict down to the level of selfish genetic elements. Few scientists, after making contributions as important as he had, would have had the curiosity, humility, and stamina to begin again in an entirely new area.

Trivers was a great teacher, though not always in the ways he intended. He often asked dumb questions (‘What does cytosine bind to again?’ in the middle of a genetics seminar) and made obvious observations (‘Did you know that running the air-conditioner in the car uses gas?’). But as he liked to say, ‘I might be ignorant, but I ain’t gonna be for long.’

He could also be volatile and aggressive and there were many times when he threatened to kick my ass. I may have been the only graduate student who ever had to wonder whether he could take his advisor in a fight. Once, over lunch at Rutgers, I asked about a cut on his thumb after he had returned from one of his frequent trips to Jamaica. He matter-of-factly told me that he had just survived a home invasion in which two men armed with machetes held him hostage. He escaped by jumping from a second-story window, rolling downhill, and stabbing both men with the eight-inch knife he carried everywhere he went. He was 67 at the time.

Bob, evolutionary biologist Virpi Lummaa, me (Robert Lynch). Finland, January 2020.

The benefits of being Trivers’s only graduate student were obvious. He was a brilliant man and nobody else could speak with such clarity about the impact of operational sex ratios on parental investment and male mortality while rolling a joint. The costs were obvious too. He could be erratic and often seemed either indifferent to, or unaware of, the social consequences of what he said. This often left him professionally isolated and left me with few academic relationships I could count on when it came time to find a job.

One of the last times I spoke with Robert, a fall had left his right arm nearly useless. He described it as “two sausages connected by an elbow.” He was a chaotic and deeply imperfect man, but also one of the few people whose ideas permanently changed how we understand evolution, animal behavior, and ourselves. Steven Pinker wrote that “it would not be too much of an exaggeration to say that [Trivers] provided a scientific explanation for the human condition: the intricately complicated and endlessly fascinating relationships that bind us to one another.”  That seems just about right to me.

His ideas are some of the deepest insights we have into human nature, animal behavior, and our place in the web of life. The mark of a great person is someone who never reminds us of anyone else. I have never known anyone like him.

I’ll miss you, Robert. You asshole.

Bob rolling a joint in NYC, 2012.

Robert L. Trivers, Evolutionary Biologist Who Transformed the Science of Social Behavior, Dies at 83

Skeptic.com feed - Fri, 03/20/2026 - 10:02am

Robert Ludlow “Bob” Trivers, one of the most consequential evolutionary biologists of the twentieth century, died on March 12, 2026, at the age of 83. In an extraordinary burst of intellectual creativity between 1971 and 1974, he published four papers that permanently altered how evolutionary biologists—and eventually the public—understood cooperation, conflict, selfishness, and deception in the natural world. These papers presented original theories of reciprocal altruism (1971), parental investment and sexual selection (1972), facultative sex ratio adjustment (1973), and parent-offspring conflict (1974). Each paper addressed a deep puzzle in evolutionary theory; together they laid much of the foundation for what would become the field of sociobiology and, later, evolutionary psychology.

His paper on parental investment and sexual selection (1972) proposed that the sex which invests more in offspring becomes the choosier mate. This theory explained with elegant simplicity why males and females so often behave differently across the animal kingdom. The paper arose from watching male and female pigeons out the window of his third-floor apartment in Cambridge, Massachusetts, a reminder that transformative science can begin with simple, careful observation.

Robert Trivers (photo courtesy of Alelia Trivers Doctor) | A younger Robert Trivers

He was also among the first to explain self-deception as an adaptive evolutionary strategy, first describing the concept in 1976—arguing that we deceive ourselves in order to deceive others more convincingly, a counterintuitive idea that has since attracted enormous attention across psychology, philosophy, and the social sciences.

Robert’s books included Social Evolution (1985), widely praised as among the clearest accounts of sociobiological theory, Natural Selection and Social Theory (2002), a collection of his early influential papers outlined above, Genes in Conflict (with Austin Burt, 2006), which makes the central argument that genomes are not harmonious but instead sites of constant struggle, and The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life (2011), which brought his ideas about self-deception to a popular audience. He also chose to be the author of his own story in his memoir, Wild Life (2015).

Robert Trivers was born on February 19, 1943, in Washington, D.C., the son of Howard Trivers, an American diplomat, and the renowned poet Mildred Raynolds Trivers. Growing up in a diplomatic household, Robert attended schools in Washington, D.C., Copenhagen, and Berlin before enrolling at Phillips Academy and later Harvard, where he initially studied American history before making an important pivot to biology.

He studied evolutionary theory with Ernst Mayr and William Drury at Harvard from 1968 to 1972, earning his PhD in biology. While a graduate student at Harvard, Robert accompanied Ernest Williams on an expedition to study the green lizard in Jamaica's countryside. Robert met his first wife, Lorna Staple, in Jamaica; he fell in love with her and the island at the same time. Robert and Lorna wed in 1974 in Cambridge, Massachusetts, and they had four children together: a son, Jonny, twin girls, Natasha and Natalia, and another daughter, Alelia.

Robert was on the faculty at Harvard University from 1973 to 1978, then moved to the University of California, Santa Cruz, where he remained until 1994, before joining the faculty at Rutgers University. Robert was named one of the greatest scientists and thinkers of the 20th century by TIME magazine in 1999. In 2008–09 he was a Fellow at the Berlin Institute for Advanced Study. He was awarded the 2007 Crafoord Prize in Biosciences by the Royal Swedish Academy of Sciences for his fundamental analysis of social evolution, conflict, and cooperation—widely considered the highest honor in evolutionary biology and a prize often mentioned alongside the Nobel in scientific prestige.

His life outside the laboratory was as unconventional as his science. Robert met Huey P. Newton, Chairman of the Black Panther Party, in 1978, when Newton applied from prison to do a reading course with Robert as part of a graduate degree at UC Santa Cruz. The two became close friends and Robert joined the Black Panther Party in 1979. He and Newton later co-authored an analysis of the role of self-deception in the 1982 crash of Air Florida Flight 90.

After Robert and Lorna divorced in 1988, Robert maintained a close relationship with her and with the whole Staple family in Jamaica. He also built a home in Southfield, St. Elizabeth, and spent several months a year in Jamaica for decades. His favorite pastime at his home in Jamaica was to sit on the front veranda and observe the wildlife around him, often joking that the same group of animals would pull up a chair each evening and join him for a glass of red wine, marveling with him at the beauty of the sunset. He made lifelong friends in Jamaica and conducted research from the island on lizards, symmetry, and honor killings over the years. Robert married his second wife, Debra Dixon, in 1997 and they had one child together, a son—Aubrey. They divorced in 2004 but also remained friends until his passing.

Robert Trivers with his five children | With grandson, Lucas Malcolm Howard | With ex-wife Debra, stepson, Diego, and son Aubrey | With three children and seven grandchildren | With granddaughter, Jonisha, and his great grandson, Masiah

Robert Trivers was, by any measure, a complicated man. He was first diagnosed with schizophrenia at the age of 21; that diagnosis was later revised to bipolar disorder in adulthood. He could be generous and brilliant in one breath, reckless and destructive in the next. But he was always a loving father, a dynamic teacher, and a caring friend, often listening to loved ones for hours and providing valuable guidance and needed moments of levity. He loved life with tenacity—both studying it and living it.

Towards the end of his life, Robert found the greatest joy spending time with his children, grandchildren, and his great grandson, Masiah. His eyes would light up the moment he saw him.

Robert’s work throughout his life was also very important to him. He wanted to make a significant contribution to scientific thought in his lifetime. The theories Robert produced reshaped how we understand the deep logic of living things. His brilliant contributions to our collective understanding—and his family—are his legacy and will spur important scientific research for years to come.

He is survived by his siblings, Jonathan Trivers (Karen), Ruth Ann Mekitarian, Milly Palmer (David), Howard Trivers (Cathy), and brother-in-law, Souham Harati. Robert is predeceased by his parents, his brother, Aylmer Trivers, and sister, Kate Harati. He is also survived by five children: Jonathan Trivers (Carline), Natasha Trivers Howard (Jonathan), Natalia Barnes (Jovan), Alelia Trivers Doctor, and Aubrey Trivers; ten grandchildren; and one great grandson.

Federal Judge Partly Blocks RFK Jr’s Anti-Vaccine Wrecking Ball

neurologicablog Feed - Thu, 03/19/2026 - 7:32am

This is a tiny ray of light in what has been a gloomy year for science-based federal health policy. Recently U.S. District Court Judge Brian Murphy in Boston ruled that the actions of RFK Jr. as HHS Secretary to fire the entire Advisory Committee on Immunization Practices (ACIP) did not follow procedure and is therefore not valid. Further, he concluded that the new ACIP, packed with anti-vaxxers, made capricious and arbitrary decisions that did not follow established science-based procedure. His ruling is a preliminary injunction that has delayed meetings of the ACIP and stays the revised vaccine schedule. The ruling is in a case brought by a coalition of medical professional societies, including the American Academy of Pediatrics. They are celebrating the ruling as “a momentous step toward restoring science-based vaccine policymaking.”

There are a few layers to this story. The first is RFK Jr. himself and what he has been doing as HHS secretary. I have not written much about him here, because posts about him and other Trump health appointees have dominated the SBM blog over the last year. This has been an “extinction level event” for rational federal health policy, and we have documented it and analyzed it every step of the way. David Gorski has done a great job specifically documenting what RFK Jr. has done to vaccines in the US in his series – “RFK Jr. is definitely coming for your vaccines”, in which he just published part 8. He did a great job not only documenting all of RFK Jr’s harmful actions but actually predicting them. Essentially, RFK is systematically using every lever at his disposal to dismantle the vaccine infrastructure in the US and to reduce vaccination as much as possible. Given his actions, he clearly straight-up lied to the confirmation committee when he said he was not anti-vaccine and would not take away Americans’ vaccines.

We, of course, recognized exactly what RFK Jr was doing during the hearings, because we have been following his nonsense for 30 years. He said, for example: “If we want uptake of vaccines, we need a trustworthy government. That’s what I want to restore to the American people and the vaccine program. I want people to know that if the government says something, it’s true.” He then promised “gold standard science”. I would argue he has done the exact opposite. But this statement is classic denialism. Just claim you want to review the science, that everything is open to examination, and that you just want the highest standards of science. These principles are great, but they can be used as a weapon, not just a tool. You can deny well-established scientific conclusions by arbitrarily claiming we need yet higher standards. Also, claiming you want to “restore” faith in the vaccine program assumes there is currently a lack of faith, which is rich coming from the person who has done the most to undermine that faith with pseudoscience and false claims. That is another denialist strategy – make a well-established science seem controversial, then argue that because it’s controversial we need to reexamine it and call it into question.

This point requires further discussion. It may seem ironic that at SBM we are constantly calling for higher standards of medical science, yet here we are complaining about a call for higher standards. But again, this gets to using such calls as a weapon vs a tool. No conclusion in medical science is bullet-proof. All science is simply inference to the best current conclusion based on existing evidence. Medical science, because we are dealing with variable biological units (and not things like electrons), is especially complex. We are always making decisions with imperfect information, making our best extrapolation from what is known, and ultimately making a risk vs benefit decision. This requires constant review of the evidence by recognized experts to help establish and maintain a standard of care. But you can attack any medical practice as lacking sufficient evidence, if that is your agenda. This is why expert reviews need to be as free from bias and as transparent as possible. And the reviews need reviews. It’s a constant process.

The problem with what RFK Jr is doing is not that he is reviewing the science, it’s that he is putting a massive anti-scientific, conspiracy-addled, and biased thumb on the scale. He arbitrarily fired the entire ACIP, then packed it with known anti-vaxxers. Packing a review panel is one way to get the outcome you want.

David lays out what RFK Jr has already done and will likely do going forward to undermine vaccines. The most recent outrage – his MAHA institute is sponsoring a MEVI conference, which stands for Massive Epidemic of Vaccine Injury. Gee – I wonder what they will conclude. He’s not even pretending anymore.

The other big layer to this story, however, is how effective a court injunction will be in stopping the RFK Jr anti-vaccine wrecking ball. The court is correct – we have a process for a reason, to ensure that judgments about what the evidence says are objective and transparent. Bypassing that process and arbitrarily replacing it with one that is blatantly agenda-driven is not a valid process. But this gets into a tricky area – the “checks and balances” of the three equal branches of our federal government. How much oversight and veto power does and should the judicial branch have against overreach by the executive branch? Legal scholars can debate this – again, I just hope we have an objective and transparent process for making such decisions.

But executives can put their fat thumb on the scale of this process too – by packing the federal courts with ideologues who will follow their wishes rather than the law. They can also do it by judge-shopping: raising cases repeatedly until they get a friendly judge. Our rights and freedoms should not depend so heavily on “federal judge roulette”. Nor should they depend so much on the randomness of which executive gets to appoint the most Supreme Court justices. If the system gets too biased in one direction, the public starts to lose confidence in the objectivity of the courts, and the overall problem deepens. We seem to be digging ourselves deeper and deeper into a hole of affective polarization, lack of faith in the system, and justified extremism.

What saves us from bias, arbitrary decisions, extremism, and corruption are institutions with processes that maximize transparency, average out and minimize bias and conflicts of interest, and elevate genuine expertise. This is partly built on codified procedure, but also on democratic and professional culture and standards. RFK Jr is a blatant example of what happens when you ignore that culture of professionalism and let loose an ideologue to “go wild”.


The post Federal Judge Partly Blocks RFK Jr’s Anti-Vaccine Wrecking Ball first appeared on NeuroLogica Blog.

Categories: Skeptic

The Rise of Decorative Neuroscience

Skeptic.com feed - Wed, 03/18/2026 - 2:37pm

Neuroscience terms are everywhere. If you log into social media, you’re likely to be bombarded with advice on how to “increase neuroplasticity.” You might be told to “stop chasing the dopamine” or given instructions on how to “regulate your nervous system.” Meditation works because it “rewires your brain.”

Self-help gurus and productivity coaches love these terms. They signal depth. They suggest that beneath the surface of our messy behavior there are precise mechanisms that have been identified that can give us the answer to our problems, whatever those problems may be.

The trouble is, despite their suggestion of a mechanism, most of these terms are used in a way that offers no explanatory value. When a wellness blog tells you going for a walk will “regulate your nervous system” they’re just saying a walk may reduce stress. Whether it actually does reduce stress doesn’t hinge on whether we can describe it in neural terms. Similarly, when an influencer says meditation “changes the brain” this doesn’t tell you anything new. Anything from practicing a motor skill to remembering this sentence changes your brain. The question is whether it changes it in a way that’s helpful. For that, the neuroscience doesn’t provide an answer.

Neuroscience terms used in these ways are decorative—a way to jazz up tired old advice and make it seem fresh and new again. By decorative neuroscience, I mean the use of irrelevant or oversimplified brain-based concepts to rhetorically bolster some claim, explanation, or intervention.


Why do we continue to see so much decorative neuroscience? A study published in 2008 found that laypeople rate explanations that contain irrelevant neuroscience as better than those that lack neuroscience. This has been termed “the seductive allure of neuroscience explanations.” People without neuroscience training interpret the presence of brain-based explanations as meaning we have a much firmer grasp on a concept than we do. When influencers throw in neuroscience terms, it ends up being interpreted as more authoritative.

Many of the uses of decorative neuroscience are innocuous enough. Influencers have discovered a new rhetorical trick to ply their trades, but much of what they’re saying is the same old thing. What’s more worrying is the way decorative neuroscience has started to influence public discourse.

Dopamine talk has become ubiquitous. California psychiatrist Dr. Cameron Sepah recommends “dopamine fasting,” which involves taking a break from things like smartphones and social media. Individuals following his protocol talk about being “addicted to dopamine.” From a neuroscience perspective, these terms make little sense. You can’t take a “fast” from dopamine; it’s a naturally occurring molecule in your brain and critical for movement and motivation. While addictive substances alter dopamine signaling, you can’t be addicted to dopamine itself.

Instead, the term dopamine in “dopamine fasting” is decorative, something Dr. Sepah himself admits: “Dopamine is just a mechanism that explains how addictions can become reinforced, and makes for a catchy title. The title’s not to be taken literally.” 

But when the catchy title is taken away, we see the dopamine fast for what it is: advice to take a break from technology to reconnect with ourselves and others. This may be good advice, but it certainly isn’t a new idea, and it has little to do with neuroscience.

More significantly, the term dopamine has become a catch-all for sinful pleasurable activities. The bestselling book Dopamine Nation by Anna Lembke claims anything pleasurable, even reading a book, is potentially addictive because it releases dopamine.


While it’s true that pleasurable activities stimulate dopamine release, superficial similarities don’t mean two things are the same. The reward system of the brain responds to everything from love to video games to chocolate to methamphetamine. The involvement of the same brain regions doesn’t mean they have the same impact on us. Both addictive drugs and video games stimulate the release of dopamine, but addictive drugs stimulate much more.

But again, the neuroscience is largely irrelevant—we should just look at the behaviors associated with these activities. The majority of methamphetamine users develop a use disorder, resulting in severe health and behavioral problems. Despite how widespread technology use is, technology use disorder is rare; it’s estimated that around 3 percent of video game players develop any kind of behavioral problem associated with gaming (like neglecting schoolwork to the point of harming grades), and most of those problems are mild.

Part of the trouble here is pushing our understanding of neural mechanisms beyond their scope and assuming they provide a more solid basis for understanding than simple psychology. But often, the psychological level is much closer to the level of explanation we need than neuroscience. Take the classic misunderstanding of the brain hemispheres: the idea that the left hemisphere is analytical while the right hemisphere is creative. This isn’t just bad neuroscience, it’s bad psychology to boot. 

First the neuroscience: it’s true there are hemispheric differences. Some functions occur more in the right or left hemisphere, something neuroscientists refer to as lateralization. Language production is a classic example—for most people, language production happens mostly in the left hemisphere. But while you can find some functional differences between the hemispheres, nearly every complex activity involves both sides. Even for analytical tasks like solving math problems, there’s substantial involvement from both hemispheres. The left-brain right-brain personality theory goes further, claiming that some people (the logical type) are “left-brained” and others (the creative type) are “right-brained.” This, too, doesn’t hold—people don’t predominantly “use” one hemisphere over the other.


But again, the neuroscience here is largely irrelevant. We should instead look at psychology. Is it true that people are either logical or creative? Without looking at the brain, we can determine that no, it isn’t. Far from there being two categories of people (left-brained and right-brained), people fall in different parts of the distribution for each. Classic measures of intuitive versus analytical thinking styles have found they’re largely independent. If anything, there may be a positive association between analytical thinking ability and creativity, as scoring higher on an IQ test makes one more likely to score high on a test of creativity. A bad psychological model can’t be bolstered by bad neuroscience. You don’t need a neuroscience mechanism to explain something that doesn’t exist.

If you have a theory of personality types, how to study better, be more productive, or strengthen self-control, that’s great. It should be put to the test to see if it works. What’s important is whether there’s actually an effect. Does reading books often lead to addiction? Are people either analytical or creative? Does going for walks lower stress? These are straightforward questions about behavior. Pointing to possible neural mechanisms doesn’t help—the brain is complex and has many mechanisms. You can come up with all sorts of post hoc possible neural mechanisms to explain theoretical relationships between an activity and an outcome.


It would be nice if we had some specific, clear mechanism like right brain versus left brain to explain differences between people, but neuroscience can rarely offer something like this. Neuroscience is messy. Looking to neuroscience for wellness or productivity advice is like looking to cell biology for dietary advice. It might provide constraints and guidance for nutrition research, but what you really want is to have people eat stuff to see what happens.

Moving from behavior to neurons might feel like it’s digging down a level, getting rid of the messy complexities of psychology and leaving something more precise and scientific. But our understanding of the brain isn’t clearer or more complete than our understanding of behavior. Neuroscience is full of uncertainty, indirect measures, and interpretive gaps. More importantly, it operates one level down from the level of explanation we generally care about in our everyday lives: observable behavior and experience.

The human brain is a wonderfully complex organ. It’s arguably the most complex thing we’ve discovered in the universe. Neuroscience is a young science with a gargantuan task, made all the harder by the ethics of studying the living brain and the modesty of our tools for probing it. It has enriched our understanding of behavior, perception, and ourselves as biological beings. It’s helped clarify neurological and psychiatric pathologies and offers hope for treating them in the future. Neuroscience can illuminate constraints and underlying processes, and work alongside psychological research to triangulate how cognition works in different domains. But positing a neural mechanism is no substitute for direct evidence that an intervention actually changes behavior, experience, or well-being.

Categories: Critical Thinking, Skeptic

Skeptoid #1032: Is Germ Theory a Myth?

Skeptoid Feed - Tue, 03/17/2026 - 2:00am

Even today, some people cling to a pre-scientific belief that germs do not cause disease.

Learn about your ad choices: dovetail.prx.org/ad-choices
Categories: Critical Thinking, Skeptic
