It is long past time the US eliminated gerrymandering, the drawing of district lines specifically to favor one political party, across the board. Doing so requires either an agreement among all 50 states or action at the federal level. Gerrymandering has been a problem since near the beginning of our democracy, and it seems to be getting worse. We are now in the middle of a mid-decade, tit-for-tat rash of gerrymandering that is extremely anti-democratic, so this is a good time to raise it as an issue voters should understand and prioritize.
As a quick aside – this is not a “political” blog, which does not mean that I never discuss political issues or topics with a political dimension. It partly means that I try my best to be non-partisan, and to avoid purely political value-judgements. I recognize this is an impossible ideal – we all have biases and perspectives that color our thinking in subtle ways. But we can try. Also, this is not a strictly science blog; it covers science, critical thinking, and media savvy, which are all part of what we call scientific skepticism. Recently I started a video podcast, Political Reality, with co-host Andrea Jones Roy, a political scientist, for the purpose of applying scientific skepticism to political topics. This is also not a partisan show – it is part civics lesson and part fact-checking. With that in mind, I thought I would write about what science and critical thinking have to say about gerrymandering, given that it has been in the news recently, although not as much as I think it should be. We also covered this topic on Political Reality.
The term gerrymander dates back to 1812 when Massachusetts Governor Elbridge Gerry redistricted his state’s representative districts in order to favor his party, the Democratic-Republicans. One of the districts looked like a salamander, leading the Boston Gazette to quip that it was really a “Gerry-mander”, and the name stuck. (Ironically, the two parts of that term, gerry and mander, both kinda sound like they mean “rig”, but the word has nothing to do with that.) Since then all political parties have used gerrymandering to gain unfair advantage. This stems from some features of US politics.
First, we have single-representative districts in a winner-take-all system. (Senators are elected state-wide, so gerrymandering is not an issue there.) Many countries have multi-representative districts, with representatives apportioned according to the vote – if your party wins 40% of the votes, you get 40% of the representatives. This, by the way, is also part of why we have such a dominantly two-party system – you need to earn a plurality of votes in order to have any representation. A party representing 10% of voters, without a local power base, would have zero representation. Districting, in a fair world, would be designed to share power roughly according to the population. In a state that is 60% party A and 40% party B, it seems intuitively fair that party A, on average, should net about 60% of the representatives and party B about 40%. Districts can also be drawn to keep people with similar demographic interests together enough to have their interests represented. This would be partly geographic, but also partly urban vs rural, cultural, and racial.
Gerrymandering happens when one party controls the process of redistricting, usually because they control the state legislature. In our hypothetical 60/40 state, with let’s say 10 representatives, you could draw districts so that all 10 are 60/40, meaning party A would likely win all 10 representatives. You could also use redistricting to specifically disenfranchise specific demographics of voters. With modern data and computers you could theoretically do this with “surgical precision” (as one judge put it).
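The hypothetical above can be sketched in a few lines of Python (all numbers are illustrative: 10 districts of 1,000 voters each in a 60/40 state). Spreading the minority party's voters evenly across every district hands party A a 10-0 sweep; a map that groups like-minded voters yields a seat share close to the vote share:

```python
# Toy districting arithmetic for a hypothetical 60/40 state with 10 districts
# of 1,000 voters each. Each district is a (party_A_votes, party_B_votes) pair.

def seats_won(districts):
    """Count districts in which party A holds a majority."""
    return sum(1 for a, b in districts if a > b)

# Every district mirrors the statewide 60/40 split: A wins all 10 seats.
mirrored = [(600, 400)] * 10

# A map with 6 A-leaning and 4 B-leaning districts (same statewide totals):
# seats roughly track the statewide vote.
grouped = [(850, 150)] * 6 + [(225, 775)] * 4

print(seats_won(mirrored), seats_won(grouped))  # → 10 6

# Sanity check: both maps have identical statewide totals (6,000 vs 4,000).
assert sum(a for a, b in mirrored) == sum(a for a, b in grouped) == 6000
```

The point of the sketch is that the statewide vote is identical in both maps; only the way lines are drawn changes who wins, which is exactly why control of the line-drawing process is so valuable.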
Partisan gerrymandering causes several problems for democracy. It is often referred to as politicians choosing their voters, rather than voters choosing their politicians, and this is apt. It makes districts less competitive, and often non-competitive, which reduces voter choice. This shifts the real election battle to the primary, which tends to favor more extreme partisan candidates. There is then no incentive to appeal to the middle in the general election because the outcome of that election is all but predetermined. So gerrymandering disenfranchises voters, reduces voter choice, and favors more extreme partisan politicians. This results in greater political polarization among our politicians, which causes dysfunction in Congress. How do we stop this?
The 2019 SCOTUS decision in Rucho v. Common Cause determined that federal courts have no role to play in deciding questions of partisan redistricting, which should be left entirely to the states. This is a deep issue unto itself – in our federalist system, what powers do Congress and the federal courts have over how the states manage elections? Even under Rucho v. Common Cause, however, Congress retains the power to pass laws regulating redistricting. So it could be as simple as passing an anti-gerrymandering law. This would be ideal, rather than dealing with the issue state-by-state, which hasn’t worked. We are seeing what happens when this is left to the states. Some hold to principle and leave redistricting in the hands of non-partisan commissions, or some other reasonably fair process. But many states use their control to unfairly gerrymander, which then leads other states to do the same in retaliation. The best solution would therefore involve all 50 states at once.
Congress, however, has failed to pass anti-gerrymandering laws, most recently in 2025. This is typically blamed on political polarization, but also on the fact that many members of Congress, on both sides, benefit from gerrymandering and would not want to see their favorable district suddenly become competitive. About 85% of House seats are not competitive (and even fewer are competitive after the recent round of gerrymandering), so that describes most representatives. It is likely that only extreme pressure from voters will break this logjam and get us the anti-gerrymandering law we deserve. In fact, I would prefer a constitutional amendment. This is a higher bar to clear, but that’s the point – it would also be far more difficult to undo.
Gerrymandering makes America less democratic: it reduces voter choice, disenfranchises some voters, and increases political extremism and polarization. When asked, 70% of voters say that gerrymandering is bad and that we should do something to eliminate it. However, those same voters seem to be OK with it when it is done to the advantage of their own party, justifying it as necessary because the other side does it. This is another reason why action at the federal level is needed – it would affect everyone all at once. This is not going to happen, however, unless it comes from the bottom up. Voters need to take control of their own voting rights.
The post We Need to Ditch Gerrymandering first appeared on NeuroLogica Blog.
It’s an iconic image – a giant cephalopod with its tentacles wrapped around a sailing ship, tearing it apart as the crew panic. Eventually it drags the splintered remains down into the deep. In reality, the largest living octopus is the Giant Pacific octopus (Enteroctopus dofleini), averaging about 16 feet long, although an exceptionally large specimen about 30 feet long and weighing 600 pounds has been found. The largest squid is the Colossal Squid (Mesonychoteuthis hamiltoni), reaching roughly 1,100 pounds (490–500 kg) and lengths up to 46 feet (14 m). That’s huge – but it’s no Kraken.
What about in the past? Everything was bigger in the past, right? That’s obviously a trope, but there is some truth to it, in that there have been ages of gigantism in the evolutionary past. In some periods and locations there are rich resources allowing for the evolution of larger body size, which comes with a number of survival advantages. This can set off an arms race of size, with prey becoming larger to avoid predation, and predators becoming larger to hunt bigger prey. The age of the dinosaurs is the most iconic example of this. But that, of course, does not mean that all lineages were necessarily larger in the past. Whales are a good example – the largest whales (and animals) to have ever lived are extant. So what about cephalopods? Are the largest ones living now, like with whales, or were there even larger ones in the past?
A new study examines the fossil remains of 12 giant octopuses that lived 100-72 million years ago. These were discovered and examined using a grinding digital mining technique at Hokkaido University in Japan. The method grinds very thin (25-50 micrometer) layers from a rock specimen, then takes a high-resolution, full-color image of each layer. It completely destroys the specimen, but results in a high-resolution 3D image of any fossils within the rock, using AI models to reconstruct the fossils. The technique is used in cases where the fossils are invisible to X-rays, cannot be chemically separated from the surrounding rock, and are too fragile for ordinary extraction. All of these are true for the soft beaks of octopuses.
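As a rough sketch (not the researchers' actual pipeline, and with made-up dimensions), the core idea of serial grinding imaging can be expressed as stacking per-layer images into a 3D voxel volume:

```python
# Minimal sketch of serial grinding imaging: each ground-away layer yields a
# 2D photo, and stacking the photos reconstructs a 3D volume of the fossil.
# All sizes here are stand-ins; the real study uses 25-50 micrometer layers
# and AI models for the reconstruction.
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.random((64, 64)) for _ in range(200)]  # stand-in for 200 photos

volume = np.stack(layers, axis=0)  # shape: (depth, height, width)

# A naive threshold separating "fossil" voxels from matrix rock; the actual
# segmentation is far more sophisticated.
fossil_mask = volume > 0.9
print(volume.shape)  # → (200, 64, 64)
```

The tradeoff the article discusses falls out of this picture: the physical specimen is gone after grinding, but the digital volume can be resliced and measured indefinitely.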
Cephalopods are soft-bodied invertebrates, and so they rarely fossilize well. However, they do have chitinous jaws, or beaks, that they use for eating. These are like the exoskeletons of insects or shellfish, but with some structural differences. Crustacean exoskeletons are mineralized to make them hard, so they serve well as armor. The octopus jaws are not mineralized but rather are reinforced with specialized proteins. The edges are hard, forming a cutting surface, and the material becomes less hard but tougher as you move away from the edge. This way the jaws don’t crack under strain. They evolved as predatory crushing instruments. But they are also too soft for traditional fossil extraction methods, which is why the new technique was needed.
What did the paleontologists learn from examining these new specimens? They were able to infer the size of the creatures, which they estimate were up to 19 meters long – that is enormous. OK, it’s not quite Kraken size, but we are getting close. The wear patterns on the jaws also indicate that they were used to crush bones. This suggests that these cephalopods (Vampyronassa rhodanica) were predators, and given their size they may even have been top predators. That is an incredible claim, given that they shared the Cretaceous oceans with plesiosaurs and mosasaurs. Mosasaurs were giant reptilian (though not dinosaur) sea-dwelling predators up to 18 meters long. Could one of these invertebrate giants have taken on a mosasaur? Probably not, unless the mosasaur was a baby.
As a point of clarification – the mosasaur was an apex predator, which means it had no natural predators. The researchers are arguing that Vampyronassa rhodanica was a top predator, which means it occupied the top tier of the food chain, but could also have been prey itself. In a cage match between a mosasaur and a Vampyronassa rhodanica, my money is on the mosasaur.
But still, this means that there were cephalopods around 100 million years ago that were among the top predators of the ocean, competing with giant sharks and aquatic reptiles. This is the first invertebrate to join this group of top predators.
The researchers point out one more detail from the fossils – an asymmetric wear pattern, meaning that one side of the jaw was significantly more worn than the other. This may not sound like much, but it suggests these animals had a preference for one side over the other. This likely reflects what is known as lateralization – functional differences between the left and right sides of the central nervous system. This phenomenon tends to be seen only in species with fairly complex central nervous systems, and the authors put it forward as evidence of such complexity in this species. We know that modern cephalopods are highly intelligent, and this evidence suggests that these early cephalopods may have already evolved CNS sophistication. This is, overall, a rather weak line of inference – lateralization is not an iron-clad sign of intelligence, and it is context dependent – but in this case it is a reasonable inference given that we know cephalopods eventually did evolve in this direction.
Overall this is a pretty interesting study, using a new technique to get a window into ancient cephalopods that was not previously possible. As a result we have gained new insight into this branch of the tree of life. I do have mixed feelings about the new technique, grinding digital mining, because it is completely destructive. It does seem like these fossils would otherwise not be usable. But we do not know if we will eventually develop a non-destructive technique to examine such fossils, maybe even one that could yield more or better information. The researchers and the field are aware of these tradeoffs. Destructive techniques are therefore used sparingly, only when the scientific information gained outweighs the loss of physical evidence, which they judged to be the case here. Still, I hope this technique becomes obsolete quickly.
The post Release the Kraken first appeared on NeuroLogica Blog.
As I write this, the week of April 20, 2026, both mainstream media and social media are chockablock with coverage of the disappearance or death of eleven (and counting) U.S. scientists who worked on UFOs, nuclear weapons, military defense, propulsion systems, or other related fields (a category that keeps growing, as new deaths or disappearances are identified that are not associated with one of the original categories).
House Oversight Chair James Comer, for example, told Fox News “Congress is very concerned about this. Our committee is making this one of our priorities now because we view this as a national security threat,” adding “there’s a high possibility that something sinister is taking place here.”
Congressman Eric Burlison (R) told Fox News “This has all the hallmarks of a foreign operation,” and suggested to Elizabeth Vargas at NewsNation that it could be China, Russia, or Iran behind the cabal. Famed physicist Michio Kaku opined “If 10 scientists suddenly die or vanish who all have access to sensitive research, this is cause for national concern.” Even President Trump admitted that this is “pretty serious stuff…some of them were very important people,” but added “I hope it’s random.”
It’s random, Mr. President. Connecting a small cohort of individuals from a wide range of fields to deaths or disappearances is an example of what I call patternicity, or the tendency to find meaningful patterns in random noise. It is also a case study in what cognitive psychologists call base rate neglect, or the tendency to focus on specific, vivid, or anecdotal evidence and ignore statistical generalizations that better explain the phenomenon.
One of the eleven scientists, for example, Amy Eskridge, who was president of the Institute for Exotic Science (an organization she co-founded) and worked on anti-gravity propulsion and electrostatic propulsion systems, died by suicide of a self-inflicted gunshot wound to the head. How unusual is that? According to the Johns Hopkins University Center for Gun Violence Solutions, 27,300 people die each year by gun-inflicted suicide in the U.S. That’s the base rate, and Eskridge’s own non-conspiratorial family accepts the fact that Amy was another lamentable casualty of gun violence and suicidality and not the victim of a vicious UFO cabal. “Scientists die also, just like other people,” explained her father Richard.
Most of the other scientists have similarly prosaic (albeit heartbreaking) explanations. Monica Reza, who worked on orbital communication systems, for example, disappeared while hiking in the Angeles National Forest near Mount Waterman in California, a remote forested area near where I live in which people go missing every year. Although she was accompanied by two other experienced hikers who reported that she just dropped off the side of the trail, I have done a fair amount of hiking and mountain biking in those mountains and know well that there are countless precipitous cliffs off which one could easily fall and disappear into the thick brush below (which is how I broke my collarbone on a mountain bike ride in 1991).
A similar disappearance is that of retired Major General William Neil McCasland, former Director of the Air Force Research Laboratory, who worked on hypersonics, directed energy systems, and advanced propulsion technology, and who went missing during a wilderness hike on February 27, 2026 in New Mexico, apparently taking with him his wallet and a .38 caliber revolver in a leather holster (and leaving behind his phone and prescription glasses). According to his wife, McCasland had been experiencing short-term memory loss, medical issues, anxiety, and a lack of sleep; she suspected he “planned not to be found” and, in any case, “He retired from the [Air Force] almost 13 years ago and has had only very commonly held clearances since. It seems quite unlikely that he was taken to extract very dated secrets from him.”
Before we jump to conspiratorial speculations on these particular vanishings, consider the fact that somewhere between 1,200 and 1,600 people disappear in America’s National Parks annually, a stunning number that nevertheless shrinks by comparison to the over 500,000 people who go missing in the U.S. each year according to the FBI. That’s a base rate one should never neglect, and it is likely the explanation for the disappearance of 48-year-old government contractor Steven Garcia in August of 2025, also in New Mexico, who worked on nuclear and aerospace research and was carrying a handgun, also leaving behind his phone, keys, wallet, and car. Anecdotally weird? Sure. Statistically out of the ordinary for missing persons? No.
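To see how little work a conspiracy has to do here, consider a back-of-the-envelope base-rate calculation. The ~500,000 missing persons per year is the FBI figure cited above; the cohort size for "scientists in sensitive fields" is a purely hypothetical stand-in:

```python
# Base-rate sketch: given the national missing-persons rate, how surprising
# is it that 11 members of a large professional cohort go missing in a year?
# The 500,000/year figure is from the FBI (per the text); the cohort size
# and population figure are illustrative assumptions.
from math import comb

def prob_at_least(k, n, p):
    """P(at least k events) among n independent trials with per-trial prob p."""
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

us_pop = 335_000_000            # approximate US population (assumption)
missing_per_year = 500_000      # FBI figure cited in the text
p = missing_per_year / us_pop   # ~0.0015 per person per year

cohort = 100_000                # hypothetical: scientists in sensitive fields
expected = cohort * p           # ~149 expected missing per year

print(f"Expected missing per year in cohort: {expected:.0f}")
print(f"P(at least 11 go missing): {prob_at_least(11, cohort, p):.4f}")
```

Under these assumptions the "mysterious" eleven is not merely possible but statistically guaranteed; the surprise would be a year in which no such scientists went missing at all.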
The rest of the outcomes are equally unsurprising and not out of the ordinary. Michael Hicks’ “undisclosed cause of death” was in reality, according to the LA County Coroner, arteriosclerotic cardiovascular disease; the CDC and the American Heart Association document that over 900,000 Americans die each year from this and related heart diseases.
Plasma physicist Nuno Loureiro was murdered by a revenge-seeking ex-classmate from the 1990s, who confessed that he’d been planning it for years and that he was envious and resentful of Loureiro’s success. Disturbing, but not mysterious.
Astronomer Carl Grillmair, a 67-year-old Caltech professor who worked on exoplanets, stellar streams, and near-earth objects, was shot to death in February 2026 on the front porch of his rural home in Antelope Valley, CA (about a hundred miles from Caltech, out in the desert outside Los Angeles), by 29-year-old Freddy Snyder, a known criminal with a long rap sheet that included carjacking and burglary – including a burglary on Grillmair’s property months before, to which the astronomer responded by calling the police (as one rationally would). Again, troubling and tragic, but not inexplicable or grandly conspiratorial.
And so on.
The Internet, especially X, is rapidly filling up with additional confusions over these alleged cabals. One Dr. John Brandenburg, a self-identified “plasma physicist” who works on “fusion energy and advanced space propulsion,” with “Phd” in his X username, told his 22.2k followers (see screenshot below) that Dr. Ning Li, an “antigravity researcher” who was struck by a vehicle and sustained brain damage that would take her life many years later, was actually the victim of a murderous conspiracy:
Dear Friends, Like Dr. Ning Li, antigravity researcher, professor John Mack of Harvard, Pulitzer Prize winner, and a Psychiatrist researching UFO abductees, was also run over by a car. This happened in London in 2004. This must end, and whoever is responsible brought to justice.

In fact, Dr. Li died of Alzheimer’s disease in 2021 at the age of 78, following a long health decline after a 2014 automobile accident in which she was struck by a vehicle while crossing a street at the University of Alabama in Huntsville and sustained permanent brain damage. As I explained to Dr. Brandenburg in my response to his post on X:
In the US ~7,500 pedestrians are killed in traffic crashes annually. Globally, WHO reports ~1.19 million deaths/year. Before you concoct wild conspiracy theories about UFO people being run over, stop neglecting the base rate.

The tireless UFO disclosure activist and one-time government insider Lue Elizondo went on Chris Cuomo’s popular podcast to explain that UFO disclosure activists and former (and present) government insiders are being murdered – which, as I also pointed out on X (see screenshot below), is just what one would do if one didn’t actually believe one could be murdered oneself.
In the same vein, I also pointed out on X all the proponents of UFO and UAP disclosure who have not been murdered or disappeared, which again, as counterevidence, would seem to negate the claim on the table with this so-called mystery, namely that such people are being murdered by some nefarious “they” purportedly operating in the name of some government agency or private corporation.
More generally, this phenomenon is also emblematic of what I call the fallacy of excluded exceptions, an illustration of which can be seen in a 2x2 matrix of four cells (see figure below). Cell 1 represents our mystery, namely UFO and nuclear/military scientists who go missing or are found dead before old age. What about all the UFO and nuclear/military scientists who do not go missing or are not found dead before old age (Cell 2)? Or the non-UFO and non-nuclear/military scientists who go missing or are found dead before old age (Cell 3)? Or the non-UFO and non-nuclear/military scientists who do not go missing or are not found dead before old age (Cell 4)? Suddenly our mystery disappears. There is nothing unusual to explain in the broader context of everything else that could happen but is ignored in our focus on just the one combination we’re interested in exploring.
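The four-cell matrix can be made concrete with a toy calculation (all counts are illustrative, not real data; only the rough ~0.15% annual rate is loosely anchored to the base rates discussed above). Once cells 2-4 are filled in, the "mystery" rate in Cell 1 is indistinguishable from everyone else's:

```python
# Toy 2x2 "excluded exceptions" matrix. All numbers are illustrative
# assumptions, used only to show why Cell 1 alone proves nothing.

scientists = 100_000                    # hypothetical cohort size
everyone_else = 335_000_000 - scientists
rate = 0.0015                           # illustrative annual missing/early-death rate

cell1 = round(scientists * rate)        # scientists, missing or dead early
cell2 = scientists - cell1              # scientists, fine (the excluded exceptions)
cell3 = round(everyone_else * rate)     # everyone else, missing or dead early
cell4 = everyone_else - cell3           # everyone else, fine

# Cell 1 alone looks like a pattern; comparing the two row rates shows none.
print(f"Scientist rate:     {cell1 / (cell1 + cell2):.4%}")
print(f"Non-scientist rate: {cell3 / (cell3 + cell4):.4%}")
```

A real analysis would, of course, use actual counts for all four cells; the point is simply that a claim of a pattern requires comparing Cell 1 against the other three, not collecting more Cell 1 anecdotes.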
Keep this matrix of possibilities in mind as we hear about additional Cell 1 examples in the coming days and weeks, such as the one posted by Representative Anna Paulina Luna (R) on April 21, 2026 (see screenshot below), about “the tragic passing of David Wilcock,” citing the biblical passage of John 8:32, which reads “Then you will know the truth, and the truth will set you free.”
What truth is that? David Wilcock was an American paranormal writer and YouTube influencer (over 500,000 followers) deeply involved in the UFO “disclosure movement”, who suggested that he might be the reincarnation of the famed early 20th century psychic Edgar Cayce, that he is in telepathic contact with space aliens, and that reptilian aliens inhabit parts of Antarctica where they are preparing for an invasion to take over the world’s governments and banks.
Sadly, Wilcock died by suicide the morning of April 20, 2026. Although Luna suggests otherwise, according to the Boulder County Sheriff’s Office, “The emergency communications specialist who took the call suspected the caller was experiencing a mental health crisis.” Additional details noted that “officers reportedly reached around 11:02am and tried to make contact with the male who was outside his residence holding a weapon.”
Again, regretfully but necessarily, we must consider the base rate for this issue: according to the CDC, nearly 50,000 Americans die by suicide every year, around half of whom were struggling with mental health issues. As such, and woefully but realistically, I think most of us can agree that if you believe you are telepathically communicating with alien beings who may be trying to take over the world, you may not be fully sound of mind.
No doubt more deaths and disappearances will be announced in the coming weeks as believers go digging around for more examples of Cell 1, but keep the other cells in mind, along with these other principles of critical thinking, before jumping to unwarranted conspiratorial conclusions.
Zoologist by career, TV celebrity in the 1960s, renowned surrealist painter, and bestselling author of more than 70 books, Desmond Morris left a legacy that enlightened our species, answered taboo questions, and made audiences around the world look at behavior with renewed eyes. This is a tribute to one of the greatest observers of human behavior.
He never shied away from controversy. His first popular book, published in 1967, proclaimed on its cover what at the time was seen as offensive: that we humans are “naked apes.” The logic was compelling: if one were to place close to 400 primate species side by side, a quick visual inspection would reveal that the most conspicuous difference is the general lack of body hair in humans. Not intelligence, not language, not technology. That was the beginning of his effort to spoon-feed society a lesson in evolutionary humility: there is nothing insulting in seeing humans as animals; every species is extraordinary in its own way.
Going back to that book, in his 1979 autobiography Animal Days, Morris recounts the 30 days he took to write the whole manuscript for The Naked Ape on a typewriter, without editing—an astonishing result by any measure. The book spread fast not only because of its provocativeness, but because the world got to experience what descriptive, entertaining, and compelling writing can do when science merges with audience-centered prose. With over 20 million copies sold, it still stands among the 100 bestselling books in history.
Desmond’s curiosity was unstoppable, and it can be traced back to his unusual rise in academic science through the study of animal behavior. His Ph.D. began with small fish, sticklebacks. While his mentor Niko Tinbergen—the man who showed him, through ethology, that there was a path for studying animals without putting them in cages—was adamant about the importance of specializing in a single species, Desmond rebelled against that idea. That was his character. In his postdoctoral studies he expanded to birds, particularly small finches. By this time his basement at the university had become overcrowded with multiple species, and there was even an aviary on the department’s roof. No fewer than 84 species passed through his lab during this period at Oxford. He was able to dedicate three full years to the ten-spined stickleback while exploring variation in other species, fulfilling his tendency to be a “spreader”—to broaden his interests too much.
Out of academia, Morris became curator of the largest collection of mammals at the renowned London Zoo, sharpening his observations across more than 300 species. His insatiable curiosity pushed him to want to know everything there was to know about every mammal. He later focused on our closest relatives, non-human primates, such as Congo—the chimpanzee he taught to paint and whose works ended up in the hands of world-class painters like Picasso and Miró. Again, non-human primates were only a pitstop before the next stage, an obvious one to him: humans.
Once The Naked Ape skyrocketed, Morris moved to Malta, where he enjoyed the pleasure of spending his earnings and living a comfortable life. There he realized something that we may better understand from the flip side: “The city is not a concrete jungle, it is a human zoo.” Under that premise, he published what could be seen as a follow-up to The Naked Ape, called The Human Zoo (1969), where he revisits controversial topics of status, sex, and power. From this work, his commandments of dominance are priceless. He lists the behaviors that, in primate species, are associated with gaining and defending power and status, like “make changes even if no change is needed to demonstrate that you are in control” or “a leader should display his position in their demeanour.” All his work cultivated a unique view of the human animal through the lens of ethology, or through Desmond’s eyes.
Then, motivated by his book editor, Morris began the odyssey that he never finished. It started with a simple premise: a full description of the repertoire of human behavior. After a few months of work, his editor asked about his progress, and he said he was covering the eyebrows. To the editor’s surprise, he had started not from the feet but from the top of the head. That was a sign that his dedication to cataloging gestures was going to take him a lifetime, much like the Oxford English Dictionary (OED).
Not coincidentally, Morris moved to North Oxford, to the house of James Murray, one of the main lexicographic contributors to the OED, as if foreshadowing his own intentions. His book originally titled Manwatching (1977), later adapted to the zeitgeist of our times as Peoplewatching (2003), is still, to this day, the most exhaustive and profound description of human behavior. I believe it offers the highest rate of insight per sentence among all the books I’ve read, and I have called it the bible of human behavior. Eight years later Morris produced another version of that project, Bodywatching (1985), this time focused on areas of the body, covering each one through biology, anatomy, culture, and behavior. For the serious human observer, these two are indispensable guides.
But Morris knew that the journey was longer than a book. The human repertoire of behaviors cannot be compressed into a trade book. He kept collecting behaviors, labeling them one by one. He had to coin names for many of them, because code-to-elbow or nose-to-forehead behaviors are not commonly described in ordinary language. His approach aimed to solve the natural ambiguity of behavior, so he used descriptive labels to avoid subjective interpretations. His encyclopedia of human actions, titled The Human Ethogram, reached at least two thousand entries by the time he decided to let it go. Now those archives sit at the University of Porto, at the Museu de História Natural e da Ciência, where at some point they may be compiled into one of those posthumous manuscripts worthy of Desmond’s legacy.
Morris’s success transcended writing, probably inspired by the admiration he held for Julian Huxley, a trailblazing biologist who broke scientific etiquette by appearing in mass media. Desmond became a celebrity-like figure with his weekly TV show Zootime. Each week he introduced audiences to different species from the London Zoo, where he worked. The anecdotes are hilarious, and his descriptions of behavior glued audiences to topics they otherwise might have ignored. He developed a charismatic presence that evolved further in his documentaries.
Over his life Morris ended up writing three autobiographies, each time adding new elements, culminating in his more than 600-page 2006 memoir, Watching. This book is as funny as a comedy, and it has the depth and texture of stories that let you enjoy and learn in equal parts. In it, Desmond shares an observational palate so rich that he successfully predicts winners of sumo fights, accidentally receives a papal blessing from Paul VI, and is mistaken for British intelligence in Moscow.
Since 2017, I have had the great good fortune to be in regular contact with Desmond Morris. We exchanged ideas, discussed a few gesture interpretations, like the elbow clapping, and he revealed that his favorite animal was the chequered elephant shrew. He kindly wrote a letter of recommendation for my Ph.D., gave me a few signed books, and invited me to dinner with his family in Ireland. I conducted one of the last interviews with him.
Desmond Morris with the author, Alan Crawley.

Over these years I asked Morris many questions. Among them was: “If you have to give a single recommendation to those interested in studying nonverbal behavior, what would it be?” Here is Desmond Morris’s insightful response (personal communication, 03/03/2021):
With body language studies, it is my impression that there is often too much abstract theorizing and semantic debate, when we should be getting out in the street conducting field studies. The question I would ask any student of human behavior is “How many hours of field observation have you done?”, not “How many theoretical papers have you written?” How many riots, bar-fights, pop concerts, boxing matches, art auctions, festivals, law courts, beach parties, military parades, religious gatherings and sporting events, have you attended as an objective, body language observer?
Desmond had in mind Tinbergen’s warning about his tendency to spread too thin across multiple problems and numerous species, a signature of his identity. That tension lived in the two sides of his personality: scientific researcher and popularizer. Those identities wrestled within him, and both appear relentlessly in his work and demeanor. For example, in Oxford Morris bought the neighboring house to accommodate his collection of more than 20,000 books. Intrigued by how many of them he had actually read, I asked. His answer was revealing:
I can’t remember the last time I read a book cover to cover.
That line reveals the tradeoff between scope and depth. Morris consumed texts across domains, ages, and styles, allowing him to create unique compilations of facts organized under a single ethological framework, something that could only have been achieved by an unsatisfied curious mind that pursued one question and then moved on to the next. Such an approach may increase the likelihood of stating inaccurate claims, and some people use Desmond’s mistakes as a convenient excuse to discard the rest of his ideas. That is a dishonest and unfair approach. He was a prolific well of novel ideas: where others saw laughter, he saw an evolved mechanism of tension; where Freud saw sexual fixation, Morris described behavioral relics that increase in frequency under discomfort.
Awards and prizes were not his motivation. He was never interested in being knighted as a Sir. Someone of his accomplishments would have been a strong candidate for such recognition. I once asked him about this, to which he replied in his unique humorous manner:
I have made enough rude comments about the authorities and about politicians to ensure that my name is safe from that nonsense. And The Naked Ape won’t have helped.
Morris was well aware of the consequences brought on by the depiction he made of the human animal. Those depictions may have reached their widest audience through his TV documentaries, like The Human Animal, a fantastic visual portrayal of human behavior across more than 40 cultures.
Desmond enjoyed his competing interests—writing and painting—which occupied his mind deeply throughout the day. In his words:
There are two Desmond Morrises, and they are quite different people. I can easily pass from one to the other, but I cannot be both at the same time. When I'm Desmond Morris the painter, I am quite different.... There is rarely any clash between the two aspects. The one helps the other. I obey the two sides of my brain alternately.
Morris’s legacy is gigantic. Beyond more than 12 books on human behavior, he produced books on the behavior of dogs, cats, horses, primates, bison, leopards, and owls. Yet his impact on surrealism was far more than a hobby. Not only were books like The Lives of Surrealists (2018) influential, but, more importantly, in 1950 his paintings were exhibited in galleries alongside Joan Miró. He was an accomplished surrealist painter and filmmaker. If you have read Dawkins’ most famous book, The Selfish Gene, you may have encountered one of his paintings, since Richard himself chose one for the cover.
Until his last days he kept painting and writing. In perspective, he was an outlier who reached the highest level in two incredibly different professions through sheer excellence. And that excellence was cultivated over time, until the end.
For the past five years, he shared in his emails that he woke up with the desire to write and paint—a man in his late 90s who continued relentlessly to enjoy his daily work. Someone who, at the age of 95, published three books in a single year. This year he was also doing two gallery exhibitions of his paintings. That was Desmond: an unstoppable force of passion and curiosity.
Thanks, Desmond. We will continue watching for you.
In 1989, Bob Lazar told Las Vegas reporter George Knapp that he had worked at a secret facility called S4 near Area 51, where his job was to help reverse-engineer the propulsion system of a craft “not made by human hands.”
More than three decades later, despite other whistleblowers alleging the existence of such programs, Lazar remains a rare figure in claiming direct technical work on a purportedly non-human vehicle. And he is now back in the spotlight because a new documentary, S4: The Bob Lazar Story, directed by Luigi Vendittelli, was released on Amazon Prime in early April 2026, and Lazar then did a burst of media coverage, including Joe Rogan, Area52, and Jesse Michels.
Lazar is a contested figure. He has claimed to have earned two master’s degrees, one in physics from MIT and the other in engineering from Caltech. Skeptics, including ufologist Stanton Friedman, reported finding no record of him at either institution and have pointed to the absence of identifiable professors or classmates who could corroborate his attendance. Friedman also cited evidence that Lazar attended Pierce Junior College in Los Angeles, which he argued was difficult to reconcile with the timeline Lazar later described. Lazar has maintained that records connected to his work were altered or removed. He also pleaded guilty in 1990 to a felony pandering charge in Nevada. Taken together, these elements have remained central to skeptical assessments of his credibility.
But beyond these biographical facts lies a deeper disagreement about how his case should be evaluated at all. Part of the friction in the Lazar debate is about what kinds of evidence people are willing—or able—to perceive. When you listen to Lazar at length, you start to register how his claims are generated. Over time, this produces a strong impression that the account is being recalled rather than constructed. Notably, individuals who have spent extended time with Lazar without prior exposure to his story have described a similar shift: from initial skepticism to the sense that they were dealing with a person recounting, rather than constructing, an experience. For some observers, that distinction becomes difficult to ignore.
Many skeptics, however, operate with a different evidentiary filter. When claims are extraordinary, they tend to discount behavioral authenticity signals almost entirely, treating them as unreliable or irrelevant. Testimony, in this view, is flattened: people lie and misremember, and beyond that there is little to be extracted from the manner of delivery. This has the advantage of protecting against being misled by charismatic or deceptive individuals. But it also comes at a cost. It removes from consideration a set of cues that, while imperfect, are often central to how humans actually evaluate one another in real-world contexts.
So we are left with a perceptual mismatch. Where one person sees constraint, specificity, and resistance to fabrication, another sees only an unverified claim. One may register the difference between a narrative that is expanding versus bounded, while another treats both as functionally equivalent. On top of this, many skeptics place heavy weight on abstract priors—chief among them the assumption that non-human technology is so unlikely that no amount of testimonial evidence can meaningfully shift the balance. Once that prior is fixed, the rest of the evaluation becomes largely procedural.
This produces a kind of epistemic stalemate with asymmetrical risks. If behavioral signals are granted no weight, then no amount of constraint, consistency, or non-performative delivery can ever move the needle. Testimony collapses into a binary of verified or dismissed, and cases like Lazar’s are effectively decided in advance by prior assumptions. But if those signals are taken seriously, even provisionally, then the burden shifts: one can no longer dismiss the account wholesale without offering a comparably structured alternative explanation. The alternative explanations largely fall into two categories: 1) Bob Lazar fabricated the story, or 2) Bob Lazar is sincerely recounting a real experience that he fundamentally misinterpreted.
Before turning to those explanations, it is worth acknowledging that Lazar’s disputed credentials and legal history are real and relevant, and any serious assessment has to take them into account. They establish that he is not an unimpeachable witness and that elements of his biography invite skepticism. Whether they are sufficient, on their own, to resolve the case is far less obvious.
Bob Lazar is a Fabulist
Lazar’s central claim has not been proved, but several elements once dismissed as fantasy have since entered the documentary record. After he told his account to George Knapp, Area 51 was eventually acknowledged by the CIA, and federal litigation in the 1990s showed that the government was willing to invoke state-secrets doctrine and repeated presidential exemptions to shield information about the Groom Lake site. That does not prove Lazar worked on non-human craft, but it does mean one major plank of the old dismissive posture—that he had built an outlandish story around an imaginary place—has aged badly.
The same is true of the surrounding logistics and of Lazar himself. Beyond a secret base in the desert, his story concerned a tightly compartmented installation serviced through unusual access patterns, including shuttle flights out of Las Vegas. The CIA’s own history describes daily air shuttles moving personnel and cargo to the facility, and reporting from Las Vegas has since made the JANET system (or Janet Airlines—a highly classified airline operated for the United States Air Force) and its secure terminal common knowledge. Again, this proves far less than believers want. But it also proves more than skeptics used to allow. A fabulist could have been lucky once. He is harder to dismiss as a mere fabulist when elements of the practical architecture around his story keep turning out to be real.
It is also worth recalling the context in which these claims were first made. In 1989, even within UFO circles, the idea of intact craft in government possession—let alone reverse-engineering programs—sat at the fringe of an already fringe field. The involvement of the U.S. Navy in such matters was not part of the discourse at all. Whatever one ultimately makes of Lazar’s account, it did not emerge as a straightforward amplification of existing narratives.
Then there is Lazar himself. Whatever one makes of his grander claims, it is no longer serious to imply that he was simply invented out of whole cloth as a nobody pretending to have moved in scientific circles. A 1982 Los Alamos Monitor article identified him as a physicist at the Los Alamos Meson Physics Facility, years before the UFO story made him notorious. Even the skeptical archival work that has tried hardest to reduce that credential concedes the key point: Lazar was in the Los Alamos world, and the facility in question was a major user laboratory hosting large numbers of outside researchers and contractors. That does not settle what his precise status was, but it does narrow the space for the old picture of Lazar as a basement fantasist who conjured a scientific persona after the fact.
Taken together, these later confirmations vindicate enough of the external scaffolding of his story to make the pure-fabulist thesis look increasingly strained. Even the once-mocked reference to element 115 no longer belongs to the category of obvious fantasy, though its later recognition by IUPAC does not validate Lazar’s specific claims about a stable isotope or gravity propulsion. But the record increasingly undermines the idea that he spun his tale out of pure nonsense.
The most common objection to Lazar’s credibility concerns his lack of verifiable academic records, particularly his claim of having attended MIT. This is often treated as dispositive. But it only is if one assumes a normal career trajectory. Lazar has consistently maintained—publicly in broad terms, and in more detail in private conversations—that his presence in that environment was tied to recruitment into classified work. If that is even partially true, the absence of a standard paper trail is a predictable outcome. That explanation may be challenged, but it is not incoherent, and it is not obviously less plausible than the idea that an individual capable of navigating Los Alamos environments simply fabricated an MIT background without anticipating the most obvious line of scrutiny.
That is why the fabulist position now looks less like skepticism than inertia. That model asks us to believe that Lazar wrapped an elaborate falsehood around a secret aerospace world he happened, by chance or intuition, to sketch in several increasingly accurate ways before much of that world entered the public record. That is possible, but it is no longer the modest position. Too much of the story’s external scaffolding has since been independently corroborated to go on speaking as if we are dealing with a man who simply spun a science-fiction yarn out of thin air.
Bob Lazar is Sincere but Mistaken
Lazar may not be lying, this argument goes, but that does not mean he is reporting reality accurately. He may be recounting a real experience, interpreted incorrectly.
At first glance, this sounds like a reasonable position. It avoids the embarrassment of outright credulity while refusing the cheap certainty that he is simply a fraud. It lets one acknowledge the obvious fact that Lazar does not present like a conventional fabricator without having to follow that concession where it may lead.
The trouble is that this middle position is often treated as though it were self-supporting. It is not. “He believes what he is saying” has no explanatory power. It tells us something about Lazar, but almost nothing about the world. To get from there to a real account of events, one has to specify how a sincere man ended up with this particular story: a decades-long account of a highly unusual engineering environment, populated by sharply bounded details that do not behave like decorative embellishments.
A more concrete version of the “sincere but mistaken” hypothesis is sometimes proposed: that Lazar did have some level of access to classified environments, but in a limited or peripheral role—variously described as a technician, contractor, or even something as mundane as scanning badges—after which he constructed a far more elaborate narrative around fragmentary exposure. In this version, the expansion is not assumed to be deceptive, but the result of inference that gradually hardened into belief. This is, in many ways, the strongest non-fabulist alternative. It preserves sincerity, explains his familiarity with certain logistical details, and avoids the need to posit a decades-long fabrication.
But this refinement simply relocates the core difficulty. It still has to explain how limited, peripheral access could generate a highly specific, mechanically structured account of a system he would not have meaningfully interacted with. It must also explain why that account exhibits the same constraint, stability, and resistance to embellishment as a bounded recollection, rather than the looser, more adaptive structure one would expect from extrapolation. In other words, it replaces one explanatory burden with another, without clearly reducing the overall cost.
One striking thing is that Lazar describes initially drawing the ordinary conclusion. When he first saw the craft, he says the American flag on it made him think it belonged to the US, a top-secret breakthrough that would explain the UFO reports he had previously dismissed. He says he did not believe in flying saucers and thought those who did were nuts. Only later did he conclude that it was not human-made. In his account, the non-human inference was something he was pulled into by the structure of the work itself.
That is already a problem for the standard middle position. It means the “misinterpretation” in question cannot be a simple matter of a UFO-minded witness projecting his prior beliefs onto an ambiguous event. Lazar’s own account begins with the conservative interpretation and moves away from it only when the setting itself stops making sense under that frame. The skeptic who grants that Lazar is sincere now has to say more than “people can be mistaken.” Of course they can. The question is: mistaken about what, exactly?
That question becomes sharper once one notices the kind of details around which his account is built. The memorable parts are not the ones a hoaxer would obviously choose. Instead of dwelling on awe, he repeatedly says the dominant feeling when coming into contact with the craft was ominous, even creepy. The emotional tone is constraining.
The same is true of the physical details. Lazar describes the inside of the craft not in grandiose terms but in awkward, almost inconvenient ones: no seams, no stylized features, the same sheen and radius of curvature everywhere, light behaving strangely inside, halogen lamps illuminating where they were aimed but failing to brighten the surrounding interior the way one would expect. Luigi Vendittelli, director of the S4 documentary that recreated the facility in a VR environment, says that when they built the set, they ran into exactly this problem: the interior remained unexpectedly dark. He presents this as one of the moments that made him feel Lazar had not simply invented a cool image but was describing a physicality that does not lend itself easily to intuitive fabrication. One need not treat that as decisive. But it is exactly the sort of thing that makes the middle position harder. The details are bounded in ways that feel discovered rather than chosen.
That distinction is central. A constructed story tends to optimize for effect, and answers too many questions. Lazar’s account contains stubborn little irregularities. He says the craft turned into sky when he walked beneath it because the light bent around it, and that the weight was simply gone rather than transferred to the ground. He describes people working around a purportedly non-human craft in a surprisingly nonchalant, dusty hangar rather than in the kind of sterilized environment one might imagine from science fiction. These details raise the cost of the fallback explanation that he is sincere and simply mistaken.
We are also not in the presence of a private mythology floating free of the world. Lazar told Gene Huff first, then John Lear, and brought them out to see a Wednesday-night test flight because he had the schedule. He also describes intimidation tactics after going public: locked car doors and trunks found open, houses entered, George Knapp himself being followed. One can reject some or all of that. But once again, the middle position cannot simply wave it away with the generic proposition that sincere people can misread events. It has to say what kind of reality generates this pattern.
“He believes it” allows a skeptic to concede the very thing that gives the case its force while refusing to pay the price of that concession. But once sincerity is granted, the path to error is no longer cheap. It has to explain why Lazar’s account exhibits the structure of a constrained recollection of a specific environment, rather than that of an interpretation layered over an ambiguous experience.
In short, Lazar’s central claim—the custody and reverse-engineering of non-human craft—remains unproven, but the standard counterclaims do not carry the weight often assigned to them. Treating Lazar as a fabulist requires a level of sustained fabrication that sits uneasily with the structure of his account and its partial alignment with a once-hidden environment. Treating him as sincere but mistaken requires a chain of error that struggles to generate the specific, constrained features of the story. Neither path collapses under scrutiny, but neither settles the matter.
What remains is a less comfortable position: the case resists easy resolution, and the confidence with which it is often dismissed exceeds the explanatory work that has been done.
This interesting case was reported in the literature in 2007. For some reason it was then widely published in the mainstream media in 2015. Now it is making the rounds again on social media to support a false narrative about brain function. The story is of a 20-year-old German woman who suffered a traumatic brain injury in a car accident. Over the next several months she started to slowly lose her vision – an important detail: it was not a sudden loss resulting from the physical trauma. After evaluation she was diagnosed with psychogenic blindness, meaning that it was not due to any physical damage to her visual system but was rather due to psychological stress. This patient also has what is now called dissociative identity disorder, formerly known as multiple personality disorder, with 10 distinct personalities.
What makes the case even more interesting is that, with therapy, some of her personalities regained vision while others did not. Eventually eight of her ten personalities regained vision. This presented a rare, perhaps unique, opportunity to study the underlying neuroanatomical correlates of psychogenic blindness – what is happening in the brain when someone loses the ability for conscious sight despite their visual system working?
Psychogenic or functional neurological disorders are complex and poorly understood phenomena in which emotional stress and trauma present as physical neurological symptoms. Common presentations include paralysis, language difficulty, sensory loss, and blindness. The diagnosis is mostly one of exclusion, which means sufficient examination and study is done to rule out any demonstrable damage, lesion, or other physical cause. This does not mean the patient is faking (technically called malingering) – that is a distinct condition that can usually be distinguished from a functional disorder. Usually patients with a functional disorder are very distressed by their symptoms and want further examination to find out what is wrong. In addition to simply ruling out physical causes, the diagnosis of a functional disorder can be supported by some positive evidence from the neurological exam. With psychogenic blindness, for example, patients will have normal pupillary responses (assuming no separate baseline deficit), and will have a normal reaction to optokinetic testing. This involves moving vertical black and white stripes horizontally across their vision. This will cause an involuntary response of tracking the stripes with eye movements. If this happens then we know that visual information is getting in and making its way to the visual cortex.
With functional neurological disorders what we do not know is what specific pathways in the brain are causing the symptoms. The hypothesis is that higher brain functions are somehow interfering with or inhibiting more basic functions. Those higher brain functions, the ones responsible for our subjective awareness and consciousness, are extremely complex. There is a lot of emergent behavior there, where we experience the net effect of many processes in the brain. Also, the more we investigate brain function with the latest tools the more we are discovering that communication in the brain does not just flow from basic inputs (like vision) to the higher conscious centers of the brain, but also back down, meaning that our higher brain centers can influence the basic processing of information. When you think you hear something, your brain makes it sound more like what you think you are hearing. When you see a shape that your brain matches to a giraffe, your cortex then sends signals back down the chain to construct the image to make it look even more like a giraffe. This is critical for pulling signals out of noise and for our ability to make sense of all the information coming in, but it also tends to generate illusions.
We also have to note that there is a lot of neurodiversity when it comes to brain anatomy and function – some people literally have pathways in their brain that most other people do not, or the relative robustness of specific pathways may differ wildly. Some people, therefore, may simply have neurological abilities that others lack. This case is very unusual – the person in question is neurologically capable of having a dramatic functional disorder, which may not be true of everyone. She also has dissociative disorder, which again is extremely rare. It would not be reasonable to assume she is neurotypical, and that we can extrapolate from her to the general population.
With those caveats in mind, the doctors studying her did something interesting – they performed a visual evoked potential (VEP) on her while she was exhibiting a personality that was blind and again while she was exhibiting a personality that could see. What a rare opportunity to compare the two states. The VEP essentially is a test in which a flash of light is given to the patient while electrodes record the response from her visual cortex. There is typically a delay of about 100 ms. If this response is significantly delayed or absent, that could indicate a lesion in the visual pathway. This was a common test to evaluate patients with MS, for example, but is less common now due to more advanced MRI scans and other methods. They found that the VEP was present and normal while she expressed a personality that could see, but was absent when she had a personality with persistent psychogenic blindness. That is a rather incredible result, indicating that there is some process in her brain that is actually suppressing her visual system. To be clear, there is no conscious way to do this (again, at least not known, but I guess this could be the way in which she is very neuroatypical). So it seems that her psychogenic blindness was due to a reversible inhibition of her visual pathway, in a way that would block the VEP.
This was exactly what the researchers were looking for, trying to determine at which neurological level the psychogenic blindness originates, at least in this subject. This also means that VEPs cannot be used to reliably distinguish organic blindness from psychogenic blindness. I really want to know what her optokinetic testing found, but could not find this information in the report. However – a 2001 study of 72 subjects with psychogenic blindness found that every one had normal VEPs. VEPs are still used to assess these patients – a normal VEP does suggest a nonorganic cause of blindness, however it is recognized that an abnormal VEP does not rule out a psychogenic cause.
As interesting as all this is, this case is being used by some promoters of a particular type of dualism, specifically the notion that the brain is a receiver or filter for an external consciousness. The case is being misinterpreted as meaning that “experience determines neurological function” rather than the other way around. This, of course, is not true, for the reasons I outlined above. Experience is in the brain, and this just represents the brain affecting itself. I always find it sad and frustrating when truly interesting science is missed because it is being misused to promote pseudoscience or magic.
The post A Unique Case of Psychogenic Blindness and Multiple Personality first appeared on NeuroLogica Blog.
The latest social media buzz involves a list of scientists who have either died or gone missing over the last four years, with the implication that there must be something nefarious going on. The FBI is now investigating these cases to see if there is any connection, and the White House appears to be taking the case seriously. James Comer of the House Oversight Committee said: “It does appear that there’s a high possibility that something sinister is taking place here. It’s very unlikely that this is a coincidence. Congress is very concerned about this. Our committee is making this one of our priorities now because we view this as a national security threat.”
My initial reaction to stories like this is – these kinds of things crop up all the time and they always turn out to be just coincidences, or not even that. Sometimes they are just stories fabricated out of increasingly distorted information, almost always to serve some conspiracy narrative. So my reaction is the same as if someone claims to have seen Bigfoot or an alien spacecraft – initial skepticism is fully warranted, but sure, I am happy to take an objective look. This may be a rare case when there is a genuine phenomenon going on, and in any case this is what activist skeptics do – take a deep dive when these stories emerge.
Let’s first review the basic facts as presented. Here are the 11 scientists currently on the list:
Amy Eskridge—Scientist reportedly researching anti-gravity technology. Died: 2022
Michael David Hicks—Research scientist at NASA’s Jet Propulsion Laboratory; worked on the DART Project and Deep Space 1 mission. Died: July 2023.
Frank Maiwald—Principal researcher at NASA’s Jet Propulsion Laboratory. Died: July 2024.
Anthony Chavez—Former employee at Los Alamos National Laboratory. Missing since: May 2025.
Monica Reza—Director of Materials Processing at NASA’s Jet Propulsion Laboratory. Missing since: June 2025.
Melissa Casias—Administrative worker at Los Alamos National Laboratory. Missing since: June 2025.
Steven Garcia—Government contractor at a New Mexico facility for the Kansas City National Security Campus. Missing since: August 2025.
Nuno Loureiro—Director of MIT’s Plasma Science and Fusion Center. Died: December 2025.
Carl Grillmair—Caltech astrophysicist who worked on NASA’s NEOWISE and NEO Surveyor missions. Died: February 2026.
William “Neil” McCasland—Retired U.S. Air Force major general. Missing since: February 27, 2026.
Jason Thomas—Pharmaceutical researcher. Found dead: March 2026.
From a scientific (specifically epidemiological) perspective what we have here is called an apparent cluster. We encounter these in medicine all the time. I remember when I was a neurology resident in the 1990s there was an apparent cluster of cases of CJD (Creutzfeldt–Jakob disease, the human counterpart of mad cow disease) in New England where I was working (more specifically the Naugatuck Valley of Connecticut). I had a few cases myself, and it definitely seemed to be more than we would expect by chance. It is the job of the CDC to investigate all such apparent clusters and first determine if they are real. This is mostly a statistical analysis – is this just the random clumping that we expect in data, or are these cases truly outside the statistical noise? It was determined that the CJD cluster was not real – just statistical noise.
With a case like the dead or missing scientists, we can do a similar type of analysis. Is this really beyond what we would expect by chance? Remember that people are really good at pattern recognition, to the point that we see patterns that are not really there (a recognized phenomenon known as apophenia). We also feed these illusory patterns with other cognitive biases, such as confirmation bias, subjective validation, anomaly hunting, and post-hoc reasoning. In the case of apparent clusters like this, what that means is that people might decide after they see a potential data point that it is significant, rather than determining ahead of time what constitutes a “hit”. They also may stretch any definitions they are using to cast a deceptively wide net. Once an apparent cluster is noticed then confirmation bias kicks in. In today’s world this means that an army of social media “sleuths” can go hunting for any apparent cases that fit the cluster – again, casting a very wide net.
Without getting into the individual cases yet, the numbers do not seem impressive. Just eleven missing or dead over four years – but what’s the baseline? Well, there are about 2 million researchers in the US. There are about 25 deaths per million people per day in the US; that’s 50 scientists dying each day, or 73,000 over a four-year period. Finding 11 that have some vague connection does not seem unusual to me. I would be amazed if you couldn’t find far more convincing clusters than this one. When we look at the list this base-rate problem gets even worse. On the list is a retired US Air Force major general – not a scientist. There is also a government contractor, and an “employee” – the net widens. Also, we are including both deaths and people who have gone missing.
I should point out I am using numbers for the general population, which may not match the rate for scientists. However, since the list included non-scientists and people who have retired, the numbers are reasonable, at least to get a general idea of probability. But I also looked at CDC data – about 800,000 people in the US between 25 and 65 die each year, or 3,200,000 over a four year period. About 6% of the population work in the science field, which would be 192,000, or half that if you use a narrow definition of 3%, so close to the 73,000 figure I calculated the other way.
We can also look at the institutions – JPL has 4,500 employees. If we crunch the numbers, then we would expect about 41 JPL deaths each year, or 164 over four years. At the Los Alamos National Laboratory, the figure is 18,000 employees, or 164 people per year, 657 over four years. Even if you want to be super conservative – even one tenth of these deaths at JPL and LANL would still be 82 deaths over four years – so again, the five on that list are not impressive. Given these numbers I think it is reasonable to conclude this is not a real cluster. The observed count falls short of what random chance alone would predict by at least two orders of magnitude.
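The back-of-envelope arithmetic above can be written out explicitly. All of the inputs are the figures quoted in this post; note that the 25-per-million daily death rate is for the general US population, not for scientists specifically.

```python
DEATH_RATE_PER_DAY = 25 / 1_000_000   # general US population rate, per the post
FOUR_YEARS = 4 * 365                  # period of the alleged cluster, in days

def expected_deaths(population, days=FOUR_YEARS):
    """Expected deaths in a population over a period, at the general rate."""
    return population * DEATH_RATE_PER_DAY * days

scientists = expected_deaths(2_000_000)  # roughly 73,000 over four years
jpl = expected_deaths(4_500)             # roughly 164 over four years
lanl = expected_deaths(18_000)           # roughly 657 over four years
```

Even the "super conservative" one-tenth figure, `(jpl + lanl) / 10`, comes to about 82 deaths, which still dwarfs the eleven names on the list.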
The other approach to questions like this is to investigate the individual cases. The CDC, for example, would not only look at the numbers in a potential disease cluster, but would also review individual cases. If individuals with a foodborne illness all ate at the same restaurant, that would be significant, even if the overall numbers were not that impressive. So I don’t have a problem with the FBI doing some basic investigation to see if there is anything suspicious going on, but I would be really surprised if there were. It is not inherently implausible that one or more of these people were targeted because of their work or high security clearance, but looking through the list there doesn’t appear to be a real connection there.
Eskridge, for example, doesn’t seem to have anything connecting her to anyone else on the list, except her work was vaguely “sciencey”. I say this because she is on the list because she supported research into antigravity technology. I don’t think it’s fair to say she was an antigravity researcher. She had a bachelor’s degree in biology and chemistry, no masters or PhD. She had no published papers. She started the Institute for Exotic Science and had an interest in antigravity. This makes her more of a crank than anything else – given that it is extremely likely that antigravity is impossible (this goes way beyond this blog post, and perhaps I can do a deep dive on this later, but if you are interested just look it up). Until we have a theory of quantum gravity we have to keep the door slightly cracked open that maybe it’s not strictly impossible, but that is extremely unlikely. In any case, we don’t even have the beginning of a basic science to work from, and what we do have says it’s not possible. So unless you are a world-class theoretical physicist working specifically on uniting quantum mechanics and general relativity, you’re not worth killing if the goal is to prevent the emergence of antigravity technology.
Hicks worked on the DART project, the goal of which is to develop technology to deflect asteroids that might strike the Earth. Why is that connected to antigravity research? Why is that a threat to anyone? What is the connection to a pharmaceutical worker, a fusion researcher, or a materials scientist? Grillmair worked on the NEO telescope, which is a near Earth object scope, so there is a potential connection to DART, but not anyone else. The rest are mostly just administrators, workers, and employees, and one major general thrown in.
At first blush this seems to be a list of people put together by searching for anyone who has died or gone missing over the last few years with any vague connection to anything space related. I would be surprised if this turns into anything. I suspect that the FBI will do a preliminary investigation, find nothing, and the whole story will fade away. However, it will likely live on in the conspiracy subculture, morphing over time to make the details seem more impressive until there is a mostly false mythology about the dead scientists.
The post What’s With the Dead or Missing Scientists first appeared on NeuroLogica Blog.
This social media fad promises to supercharge your sleep and make you healthier than healthy, when in fact it probably does significantly more harm than good.
Learn about your ad choices: dovetail.prx.org/ad-choices
A feature length documentary film, released in December 2025, has revived an oft-touted claim of strong evidence of the supernatural. The Case For Miracles1 is based on the 2018 book of the same title by Christian evangelist Lee Strobel.2 Since the film has been criticized for being long on drama and short on evidence, I decided to look for documentation in the book. Unfortunately, when it comes to presenting specific cases of miraculous cures, this is limited to a single chapter, titled “A Tide of Miracles.”3
Among the dramatic cases cited by Strobel is that of a woman identified only as “Barbara” who was suffering from multiple sclerosis to the point that she had been confined to bed for seven years. She heard a voice telling her to rise and walk, which she did. She was sure this was the voice of Jesus. The documentation of this miracle, along with other claims in the chapter, however, is less than impressive, since they all consist entirely of testimonials; nor did the end notes to that chapter provide any medical documentation.4
Among the problems with testimonials as accurate histories are the imprecision of human memory, the tendency of a shared narrative to arise among a group witnessing the same event, and bias on the part of witnesses. For example, consider the testimony of Tim Ley and members of his family regarding the appearance of the Phoenix Lights (thought by some to be UFOs) in 1997. He, along with his wife Bobbi, his son Hal, and his grandson Damien Turnidge, initially saw them as five lights in an arc shape. They soon realized the lights were moving toward them. As they did so, over the next ten minutes the lights resolved into a V shape similar to a carpenter’s square, or like two sides of an equilateral triangle. They, like other witnesses, reported a huge object, discernible not only by five lights on its leading edge, but as well because it blotted out stars in the night sky as it passed silently over the city. Soon, the object appeared to be coming right down the street where they lived, only about 100 to 150 feet above them, traveling so slowly it appeared to hover.
Fortunately, in addition to the testimony of many witnesses, we have videos taken of the 1997 incident,5 which show a series of lights appearing in the sky, one by one, then winking out one at a time. In one of the videos, the man filming it exclaims, “Another one just showed up!” In that video the first three lights form a line, then a fourth appears in such a position as to make a shallow angle. In another video, this one without sound, one light appears, then another, then more, up to five, then six lights. These are first in shallow “V” shape, then in a more or less straight line. Then the lights wink out, one by one. None of the videos shows a solid V-shaped object blotting out the stars as it moves overhead. In fact, in most of them the lights simply hover, rather than moving in any discernible direction.6 It would appear that much of what witnesses saw resulted from the perceptual centers of their brains automatically filling in the spaces between the lights to create a whole object.
The images of the lights in these videos support the claim by the Air Force that the “Phoenix Lights” were not alien spaceships but military flares dropped by an Air Force reserve unit on a training mission. These flares are used in combat to illuminate a battlefield at night. As such, they were dropped by parachutes, which allowed them to hover for some time. They were dropped west of the Estrella Mountains, which lie west of Phoenix. They seemed to suddenly wink out as they slowly drifted downward, and their images were blocked by the darkened, hence invisible, mountains.
More to the point of miracle cures, consider the claim that the Indian mystic and holy man, Sathya Sai Baba, raised a devotee of his, Walter Cowan, from the dead on Christmas 1971. The narrative of this miraculous healing begins with Walter Cowan and his wife Elsie, followers of Sathya Sai Baba, arriving in Madras, India, on December 23, 1971. Walter, an elderly man, suffered a massive heart attack on Christmas Eve and was taken to a hospital, where he died. Then, on Christmas Day, Sai Baba entered the hospital room where Mr. Cowan’s body lay. After a time, he left. Then, friends of Cowan’s arrived and found him alive. This miracle was attested to by a medical doctor, Dr. John Hislop. His wife reported:
When we reached the hospital with the vibhuti, Mrs. Cowan said, “Walter took a very bad turn just a little while ago. I thought he was dead, and I was terrified. I at once called Baba in a loud voice. Now, Walter seems a little improved. When I called Baba I felt his presence at once.”7
The validity of this dramatic testimony is somewhat undone by Elsie’s statement that she thought her husband was dead and that he was then “a little improved.” In any case, she, Dr. Hislop, and his wife were all devotees of Sai Baba, rendering the objectivity of their testimonies suspect.
Since I wasn’t able to find more rigorous evidence than testimonies in Strobel’s book, I decided to look online for medical reports of miraculous healing, specifically healing attributed to the effect of intercessory prayer. In the medical journal Heliyon I found a 2023 article titled “The remote intercessory prayer, during the clinical evolution of patients with COVID-19, randomized double-blind clinical trial.”8 The article states the objective of the study as follows:
The objective of this study was to evaluate the effect of intercessory prayer performed by a group of spiritual leaders on the health outcomes of hospitalized patients with Novel Coronavirus (COVID-19) infection, specifically focusing on mortality and hospitalization rates. Design: This was a double-blinded, controlled, and randomized trial conducted at a private hospital in São Paulo, Brazil.
Here are the results of the study:
A total of 199 participants were randomly assigned to the groups. The primary outcome, in-hospital mortality, occurred in 8 out of 100 (8.0 percent) patients in the intercessory prayer group and 8 out of 99 (8.1 percent) patients in the control group […] The study found no evidence of an effect of intercessory prayer on the primary outcome of mortality or on the secondary outcomes of hospitalization time, ICU time, and mechanical ventilation time.
In another study, doctors measured the healing effects of intercessory prayer on patients recovering from cardiac bypass surgery:
Patients at 6 U.S. hospitals were randomly assigned to 1 of 3 groups: 604 received intercessory prayer after being informed that they may or may not receive prayer; 597 did not receive intercessory prayer also after being informed that they may or may not receive prayer; and 601 received intercessory prayer after being informed they would receive prayer. Intercessory prayer was provided for 14 days.9
The study yielded the following results and conclusions:
In the 2 groups uncertain about receiving intercessory prayer, complications occurred in 52 percent (315/604) of patients who received intercessory prayer versus 51 percent (304/597) of those who did not […] Complications occurred in 59 percent (352/601) of patients certain of receiving intercessory prayer compared with the 52 percent (315/604) of those uncertain of receiving intercessory prayer […] Major events and 30-day mortality were similar across the 3 groups.
Conclusions:
Intercessory prayer itself had no effect on complication-free recovery […] but certainty of receiving intercessory prayer was associated with a higher incidence of complications.
Another clinical double-blind study gave more positive results,10 in which intercessory prayers were made by a group that did not know the patient for whom they were praying, nor did any of the patients know whether or not they were the subjects of intercessory prayers. The researchers concluded that remote, intercessory prayer was associated with lower CCU scores (a metric used to evaluate severity of cardiac illness), suggesting that prayer may be an effective adjunct to standard medical care. While this study suggested that intercessory prayer aided recovery, the benefits gained were far from dramatic:
Using the unweighted MAHI-CCU score, which simply counted elements in the original scoring system without assigning point values, the prayer group had 10 percent fewer elements […] than the usual care group. There were no statistically significant differences between groups for any individual component of the MAHI-CCU score.
While a ten percent improvement sounds good, it hardly equals Strobel’s claimed miracle case of the woman with multiple sclerosis, bedridden for seven years, suddenly walking.
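As a rough sanity check on the bypass-surgery figures quoted above, one can run a normal-approximation two-proportion z-test on the reported counts. This is a back-of-envelope sketch only; the published study used its own statistical methods.

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """z statistic for a difference of two proportions (pooled, normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Uncertain-of-prayer arms: prayed-for (315/604) vs not prayed-for (304/597)
z_prayer = two_prop_z(315, 604, 304, 597)      # well below 1.96: no detectable effect
# Certain of receiving prayer (352/601) vs uncertain but prayed-for (315/604)
z_certainty = two_prop_z(352, 601, 315, 604)   # above 1.96: nominally significant
```

The first comparison is consistent with "no effect"; the second matches the study's own finding that certainty of receiving prayer was associated with more complications.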
Far more dramatic and positive results occurred in a notable Dutch study on the efficacy of intercessory prayer as an instrument of healing: “A Dutch Study of Remarkable Recoveries After Prayer: How to Deal with Uncertainties of Explanation.”11 The study encompasses in-depth interviews of 14 people selected from a group of 27 cases, which were evaluated by a medical assessment team at the Amsterdam University Medical Center. Each of the participants had experienced a remarkable recovery immediately after, or even during, intercessory prayer sessions. So, is this evidence of miraculous, supernatural healing? Not necessarily.
The article begins with a description of one of these healings, experienced by a woman named Julia who was diagnosed in 1990 with post-traumatic dystrophy, also known as Complex Regional Pain Syndrome (CRPS). She was wheelchair bound due to intense pain. In 2007, after 17 years of suffering, she and her husband took part in a prayer healing session led by a well-known Dutch evangelist. After the session, Julia stood up and started walking without a trace of pain. She was still free of pain 15 years later, when the study was conducted.
Julia’s CRPS began as acute pain caused by an injury, pain that persisted long after the injury had healed. Among the causes of this syndrome are psychological factors and a neurologically triggered autoimmune response.12 In autoimmune disorders, the immune system goes from attacking foreign invaders, such as viruses and bacteria, to attacking the person’s own body. Other patients in the study also suffered from autoimmune disorders. Among these were muscular dystrophy, psoriatic arthritis, ulcerative colitis, and Crohn’s Disease. Some of the patients also suffered from purely psychological problems, such as anorexia nervosa and alcoholism.
All of these diseases can be induced by malfunctioning of the nervous system. This is not to say these disorders are all in the patients’ heads. However, the effects of emotions or psychological states on the brain, such as taking part in a prayer session and states of belief, can result in the transmission of healing by way of the nervous system acting on the body through the endocrine system.
Three other patients suffered from brain injuries or malfunction. One patient had Parkinson’s Disease, which is caused by the failure of certain brain cells to produce dopamine. Another had suffered from a stroke. Another patient suffered from deafness. While the healing of these problems cannot be so simply assigned to the effect of a psychological state on the nervous system and transmission of these effects to the body by way of the endocrine system, they all do involve central nervous system functioning, which could be affected by an induced emotional state.
Only four of the patients suffered from complaints seemingly separate from the nervous system. One suffered from iatrogenic aortic dissection—an injury or scarring suffered during a surgical procedure, such as the insertion of a stent. This is usually treated with beta blockers. These medications block adrenaline, thus relaxing the heart and easing stress on the aorta. So, a changed psychological state could, likewise ease this stress.
Another patient suffered from pelvic instability, which often results from pregnancy and is caused by a weakening of the ligaments at the pelvic floor. This is a basin-shaped structure, consisting of the sacrum, pubis, and hip bones, all held together by ligaments. When these ligaments are overstretched or injured, the bones of the pelvic floor move excessively during physical activities, resulting in pain in the groin, hip, or back. This makes even simple activities difficult and painful. This condition is usually treated by various stretching exercises.
Another patient suffered from drug-induced hepatitis. This is inflammation of the liver caused by various medications, treated by simply stopping the use of these medications. Finally, one patient suffered from rotator cuff rupture. While this is caused by traumatic injury, its protracted pain results from inflammation. Thus, just as in Julia’s case, all four of these disorders involve chronic inflammation.
There are three problems with imputing the dramatic healings to divine intervention. One is that they all seem to stem, one way or another, from either chronic pain or nervous system dysfunction. We do not see in them people being healed of drastic infectious diseases, such as COVID-19. Nor do any of them involve permanent remission of metastasizing cancers.
Another problem is that of patient involvement. In both the study involving patients with COVID-19 and the one dealing with patients recovering from cardiac bypass surgery, the intercessory prayers were remote for the purposes of performing objective double-blind studies. Particularly in the case of Julia’s healing, the patients in the Dutch study were actively involved in the prayer sessions, thus clouding any clear evidence of cause and effect. Finally, it is too far a leap to extrapolate divine intervention from a few healings we can’t explain.
One last problem with seemingly miraculous cures as evidence of the Judeo-Christian God is that such a deity would seem to be acting in a rather haphazard manner, healing some people here and there, while not bothering to intervene in horrific atrocities, for example, either the Holocaust or the Armenian genocide. In the latter event, the Armenians were targeted specifically because they were Christians.13 Between one and two million of them perished at the hands of the Turks and others of their Muslim neighbors.
Thus, these now and again, possibly miraculous, healings hardly constitute proof of the God of the Bible.
Regeneration is one of the futuristic tropes of science-fiction, because it is both incredibly powerful and not theoretically impossible. Imagine the ability to regrow a lost limb, or simply to replace a diseased or worn out limb. There are about a million limb amputations worldwide every year, so it is a very common medical problem. What if we could regenerate organs? This would be a game-changer for medicine.
There are several approaches to addressing missing limbs or failing organs. One is the cyborg approach – make a mechanical version to replace the biological one. We are making progress here, with brain machine interfaces, mechanical hearts, and other advances. Or you could transplant the body part from another person, or even an animal that has been genetically modified to be compatible. You can also regrow the missing or failing body part from the intended recipient’s own tissues and then transplant that. Or you could inject stem cells programmed to regrow the needed part inside the recipient. All of these options are active research programs that have shown incredible promise, but they are also years or even decades away, especially in their mature form.
Let’s now add one more technology to the list – genetic therapy that triggers natural regeneration, meaning from the person’s own tissue. This has long been a target of potential therapy, inspired by the fact that there are many animals that can already naturally do this. Most extreme is the axolotl (a type of salamander that for some reason has become very popular with the younger generation), which can regenerate just about any of its body parts. They form a blastema of pluripotent stem cells at the site of injury that can quickly regrow a missing limb, heart, spinal cord, parts of the brain, etc. in weeks. There are also zebrafish, which can regrow their tail fins. Mice can also regrow missing digits, which is important because, as mammals, they show that regeneration can happen even within the mammal clade. You don’t have to be a salamander.
The amazing regenerating ability of the axolotl was first documented in 1768. Molecular and genetic studies of the regeneration process go back to the end of the 20th century. But now with modern genetics tools, like CRISPR, genetic research is really taking off. A recent study asked whether there are any genetic similarities in the regeneration abilities of axolotls, zebrafish, and mice. If these three animals share the same genetic basis for their regeneration, then this would suggest that these genetic abilities are highly conserved, all the way from fish to mammals. This would be good for the prospects of regeneration in humans, because we would likely share some of this same highly conserved genetic infrastructure. As you may have guessed, these researchers hit pay dirt (which is why I am writing about this today). They found that all three share the SP6 and SP8 transcription factors. They confirmed the relevance of these factors by making knockout mice missing SP6 and SP8, which impaired their ability to regenerate lost digits. Knocking out SP8 in the axolotl also impaired its ability to regenerate – so the same factor seems critical for both species.
They then took a factor from the zebrafish which has been shown to enhance regeneration – FGF8, whose gene is normally turned on by SP8. Replacing the missing FGF8 then partially restored the regeneration ability of mice missing SP6 and SP8.
Do humans have SP6 and SP8 genes? Yes, we do. Again, these are highly conserved genes with basic biological function. They are the Specificity Protein family of genes that are involved in regulating the development of limbs, teeth, skin, and even organs. That is essentially how development works – there is a suite of genes with all the information to make, for example, a human arm (or a bird’s wing, or the antennae of a moth). This suite of genes is turned on by a regulatory gene that essentially says – build an arm here. Regeneration in a creature like the axolotl essentially involves going back to this developmental stage, creating a blob of stem cells, and then saying – build a limb here.
Obviously this entire process is more complicated than just tweaking one gene or replacing one missing factor. It is very complex. Humans form scar tissue to repair a wound, they do not form a blastema. This is partly driven by differences in the availability and sensing of oxygen in the tissues. Further, scar formation is driven by the immune reaction, involving macrophages, which are actively suppressed in salamanders. And finally, the reason we do not already have the ability for unlimited regeneration is that there is a tradeoff between regenerative ability and cancer suppression. It is likely that our ancestors sacrificed regenerative abilities for cancer suppression mechanisms – this was the best evolutionary tradeoff. In other words – we simply went down a different evolutionary path than the axolotl. Gaining the ability to regenerate limbs or organs would therefore probably involve a complex coordination of multiple factors, while simultaneously preventing cancer formation.
Interestingly, at present there is nothing we know that would make it theoretically impossible to have full regeneration in humans. However, it is extremely complex. It is the perfect sci-fi technology – possible, but likely only in the distant future. I suspect it will take decades to perfect this technology.
The post The Prospect of Regenerating Limbs first appeared on NeuroLogica Blog.
Skeptoid is now going into production on our third feature film: Alien Echoes. Get all the details at skeptoid.org/alienechoes
I think the recent rapid advance in the capabilities of artificial intelligence (AI) applications qualifies as a disruptive technology. The term “disruptive technology” was popularized in 1997 by Clayton M. Christensen. To summarize, a disruptive technology is “an innovation that fundamentally alters the way industries operate, businesses function, or consumers behave, often rendering existing technologies, products, or services obsolete.” AI is potentially so powerful, and changing so quickly, that it is challenging to optimally regulate it. We are caught in a classic dilemma – we do not want to hamper our own competitiveness in a critical new technology, but we also don’t want to unwittingly create new vulnerabilities or unintended negative consequences. For now we seem to be erring on the side of not hampering competitiveness, which basically places us at the tender mercies of tech bros.
Which is partly why I found the conflict between Anthropic and the Department of Defense (still the legal name) so fascinating. In short, Anthropic’s powerful AI application, Claude, has at least two significant internal “red lines” or guardrails – it cannot be used for massive domestic surveillance, and it cannot be used for final military targeting, without a human in the loop. Anthropic CEO Dario Amodei has not backed down on this – he says that the first restriction on domestic surveillance is simply a matter of ethics. The second restriction, however, is mainly a matter of quality control – their system is still vulnerable to hallucinations and is not reliable enough to count on for final targeting decisions. Hegseth has criticized Amodei’s concerns as “woke” and a critical vulnerability for the US military. More charitably, he says essentially that the US military is using the application lawfully, and should not be restricted in any lawful use of the software. Others have also stated that in an emergency they have to know the software will do whatever they ask of it.
This conflict has many deep implications, and is beyond what I intend for this blog post. What I want to focus on is the fact that an AI application is creating this ethical dilemma, and forcing us to ask – who should control such awesome power, the CEO of a tech company or the Federal government? It seems that we are facing or about to face many similar questions provoked by the disruptive nature of recent AI applications.
Anthropic, in fact, is at the center of another similar discussion, involving the security of the internet. They have a new application, Mythos, which is an AI coding app. Mythos is potentially disruptive in two ways. The first is more mundane, and certainly not unique to Mythos – it allows non-coders to do what is called “vibe coding”: giving an AI coder a natural language description of the application you want, and letting the AI coder build it. This is disruptive because it takes coding out of the limited hands of a relatively few highly trained and skilled individuals and puts it in the hands of everybody. This can lead to the proliferation of code that has not gone through any rigorous safety testing for vulnerabilities.
But the feature of Mythos that has many experts (including those from Anthropic itself) very concerned is that the program turns out to be excellent at identifying security vulnerabilities in code. I mean – really good. It has found vulnerabilities that have been sitting there unnoticed for years, and can reliably exploit them. When Anthropic realized how good their software was at essentially cracking software security, they had an “Oh, shit” moment. We are at an “inflection point”. Anthropic estimates they are 12-18 months ahead of the competition, so very soon similarly powerful software will proliferate. If we do not lock down critical software infrastructure by then, the internet could be screwed. Much of the internet and many applications run on core software that is open source, maintained by volunteers with shoestring budgets. Mythos has already cracked open some of these core bits of code.
Turning the internet, and essentially the software infrastructure that increasingly runs our world, into a cybersecurity nightmare is, I would imagine, not good for business. So Anthropic has given a preview version of Mythos to a consortium of 40 software companies, including their competitors, to basically give them a head start in finding and fixing any vulnerabilities in their software (which they are calling Project Glasswing). They are also dedicating some money to fund the project, especially for open source software. This all sounds great, and maybe this will fix the problem. Hopefully we will eventually see this as a Y2K situation, the disaster that never happened because we prevented it.
What this affair highlights is how the disruptive nature of AI is creating the potential for significant problems, if we do not stay ahead of it with rational regulation and quality control. It seems that Anthropic is trying to be an ethical and responsible corporate citizen, and that it recognizes the power of its products. Thank goodness for that – imagine if the same tech were in the hands of a less scrupulous or responsible company? It’s pretty easy to imagine. This is happening at a time when the Federal government not only has no apparent interest in regulating AI, it is also trying to prevent the states from doing so. And it is throwing a temper tantrum when it cannot use its new toys without restrictions.
Going forward we should not rely on the noblesse oblige of tech CEOs. We need to make sure that security and ethical restrictions are baked into any new applications. I am all for vibe coding, for example, but such apps need to have rigorous quality control, so we don’t fill the world with the coding equivalent of AI slop, creating a tsunami of vulnerabilities. Perhaps this consortium of tech companies will evolve into something bigger – an organization dedicated to safely and securely developing this technology. This means, of course, we need to get buy-in from China, which means we need international standards to regulate this tech. I think of it like nuclear weapons. AI is a very different kind of threat, but it is also a powerful technology that would benefit from international agreements so that we don’t accidentally destroy our civilization.
The post AI May Disrupt The Internet first appeared on NeuroLogica Blog.
I am a firm believer in miracles—a confession that will be immediately off-putting to readers of Skeptic. Below I will offer a definition of miracles and attempt to justify belief in them, but for the moment I will focus on a fundamental distinction between two modes of causality. I call these because-of causal mode and so-that causal mode. We can think of these as two ways of explaining an event.
Because-of causal mode example: a man walks into a bank and we ask for an explanation. One explanation tells us about the neurons firing in the motor cortex of the brain that excited a cascade of additional neuron firings, and then muscle flexing. And, of course, there was the mass of the body, the friction of shoes against the sidewalk, the heft and leverage of the doorway, and so on. This mechanical explanation makes the event intelligible; it tells us how the event took place. It took place because of all these enabling factors.
So-that causal mode example: There’s another way of making the event intelligible, and that is to explain the purpose of the man’s actions—he went into the bank so-that he could deposit some money. This is a teleological explanation.
The scientific because-of explanation is concerned with immediate past events—facts about what things happened and theories about how they happened. Meanwhile, teleological explanations focus on future outcomes involving values. A teleological explanation tells us that an agent is acting for the sake of bringing about an intended state of affairs—causality guided by purpose. All living systems act with purpose; they seek beneficial outcomes; their behaviors are goal-directed, functional. They are about something.
Here we have two modes of causal explanation—both claiming to render events intelligible, but in different ways. There has been a long tradition of attempts to conciliate these two modes of causality, a tradition that I will now grossly oversimplify. Some people say that the so-that mode of causality is a mere illusion, or at best, a convenient pretense. They believe there is only one kind of causality, and that all genuine explanations can be reduced to the logic of because-of causality.
Others believe that teleological explanations are real, insisting that the universe has some sort of inherent or endowed purpose—it has a point, it is about something, for something. The entire universe behaves in the ways it does so-that an ultimate purpose in creation might be achieved. In one approach because-of causality is ultimately real and so-that causality is a fantasy. In the other approach so-that causality is ultimately real and the because-of causality of science is merely an instrument for working out an ultimate cosmic purpose.
Here’s the big question prompted by our encounter with contemporary science: is the grand epic of cosmic evolution in some way driven or guided so-that some destiny might be achieved, or is the cosmos, despite its awesome splendors, ultimately void of genuine meaning or purpose? As Steven Weinberg famously said, “the more we know about the universe the more it appears to be pointless.” There are difficulties with each of these views. If you claim there is genuine meaning somehow inherent in the cosmos, then you must tell us what it is and why we should accept it. But if the claim is that teleological dynamics are not genuinely real, then you are left with the problem of convincing us that meanings (e.g., values, expectations, the force of will) fail to have genuinely real consequences.
I wish to offer a third option, one that avoids both problems. This view says that all the elements of so-that causality (goal-directed behavior) are genuinely real phenomena, but they are recent and unintended emergents of because-of dynamics.
We might frame this emergence view in terms of two different perspectives on the nature of matter: the grunge theory and the glitz theory of matter. The grunge theory says that matter isn’t much—it’s just some sort of vague or chaotic and uninteresting stuff that becomes interesting only when the laws of nature or the will of God whip it into shape. So the grunge theory appears to assign matter to one domain, while relegating both natural law and divine purpose to another.
I want to reject the dualism of this view in favor of what I’m calling the glitz theory of matter, which holds that there are no independently real laws of nature. What we have are simply the properties of matter. A law of nature is just something we formulate as we observe regularities in the properties of matter. If we take this view then we can see that matter is not boring grunge, but wonderfully interesting and creative stuff. What makes it interesting: when certain properties of matter interact with other properties of matter, we find increasing probabilities that novel and unanticipated properties of matter will emerge spontaneously.
Here’s a simple illustration: Oxygen and hydrogen atoms have distinctive properties, and when they interact they can produce water molecules, which present new properties not found in either oxygen or hydrogen. And then the interaction of water properties with other properties of matter will increase the probability of even more novel properties. And, as proposed above, the emergence of new properties of matter may result in the formulation of completely new laws of nature. All of this follows the straightforward logic of because-of causality. As interactions continue the probability of getting large molecules will increase, and when you have interactions between large molecules, then the probability of emergent living systems will increase dramatically. And as living creatures arrive on the scene, so too does the visionary logic of so-that causality. In a fundamental sense, the story of creation is a story about shifting probabilities and how these result in the various entities, events, properties and relations that make up the natural world.
I want to suggest that the goal-directed causal dynamics of teleology amounts to an emergent property of living systems. Before the appearance of living systems causality was limited to because-of dynamics, but with life comes purpose and value. Now agency enters the picture and things begin to matter. Living systems behave in certain ways so-that they will survive and reproduce. Molecules don’t do this. Molecules are created and constrained entirely by the care-less dynamics of because-of causality. But when molecules get really complex and interactive then it becomes more and more probable that they will gang up and behave according to a completely new mode of causality. This does not mean that because-of causality becomes overruled or deactivated. It means only that the because-of dynamics have called into play additional sets of anticipatory, goal-directed algorithms.
Purposeful behavior and meaningfulness are real phenomena, not illusory; but they are also recent (~4 billion years ago) and localized (on Earth, at least). This suggests that the cosmos itself is essentially absurd—it has no meaning; it is not guided or coaxed by any agent or purpose. It is not about anything. However, without question, there are pockets of genuine meaning and purpose within the cosmos, as we are here to attest. The cosmic bus isn’t going anywhere that matters. It has no driver and no destination. But there are living beings on the bus, and they hustle here and there with all kinds of determination. My life, your life, all our lives, can be rich and full of meaning without having to claim they have cosmic significance. Life can be worth living even if we are not the point of some cosmic drama. The thing that impresses me most about the cosmic drama is that a meaningless universe has inadvertently, accidentally and aimlessly created the conditions for meaningfulness. This mysterious and wonderfully ironic accident—dare I say, “miracle”?—takes my breath away.
By “miracle” I do not mean an impossible event occurring at the behest of an all-powerful supernatural agent. I mean only this: any event, the occurrence of which is considered to be so radically improbable as to be virtually impossible. (I am excluding logically impossible events from discussion because they have a probability of zero—even gods cannot square circles). A miracle is an event having a probability value so close to zero that you cannot imagine any conditions under which it might occur. Given these terms, it might be said with good reason that many miracles have occurred in our universe—it’s just that they never occur before their time.
A thought experiment might help to clarify this. Suppose we place ourselves backward in time to some point immediately after the primordial Big Bang, when the universe was nothing but a raging inferno (no quarks, no atoms, just pure radiation) and consider the prospect of a supernova. Nothing that might have been known of the natural world at the time could possibly predict or explain the formation of stars, not to mention their fusion and expulsion of atoms. The very idea of such events would be considered so improbable as to be preposterous, impossible, and contrary to nature.
Or, let us go back a mere four billion years. Again, at that point we would be completely incredulous if faced with the notion that billions of tiny objects would soon be exploring about on our young planet and behaving in complex patterns that defy all that could possibly be known at the time about the natural order of things. And yet, lo and behold, living beings emerged, not because of some magic wand, and not because of necessity, but rather because a countless series of unpredictable probability-enhancing events brought forth the enabling conditions.
We have the meaning-bearing lives we do because they were made incrementally less improbable by the epic events of cosmic evolution, whereby matter was distilled out of radiant energy, segregated into galaxies, collapsed into stars, fused into atoms, swirled into planets, spliced into molecules, captured into cells, mutated into species, compromised into ecosystems, provoked into thought, and cajoled into cultures. Surely, there is nothing intellectually shameful about embracing the staggering beauty and the humbling fortuity of these events as … miraculous.
Remember The Last Starfighter from 1984? In that movie a trailer-park kid with limited prospects spends his time on an arcade-style video game, Starfighter. He plays the game so much that he beats the final level, and it turns out he is the first person to ever do so. He is heavily criticized for spending so much time playing a game, which is seen as a sign of boredom and lack of ambition – a waste of time. The twist (42-year-old spoiler incoming) is that the game was actually a test (the Excalibur test – a deliberate reference to King Arthur) to find a skilled pilot for an actual real-life starfighter. He goes on to save the galaxy from invasion.
The interesting premise of the movie is that playing a video game is not only a test of real-life skill, but can be used to train such skill. In 1984 this was kind of a new idea, and appealing to a generation of kids newly hooked on video games. Video games have been significantly mainstreamed over the last half century, but there is still a bit of a cultural stigma attached to them – they are seen as the realm of dorks and geeks, with inevitable jokes about how avid video gamers will “never get laid” (or something to that effect). Since the beginning of their popularity, parents have worried – with such worry being fed by a sensationalist media – that video games were going to “rot” their kids’ brains, turn them into losers who can never get a skilled job, and might even cause violent behavior. After every mass shooting, someone brings up violent video games.
But the evidence simply does not support these concerns. One big problem with the research is that it shows correlation only, not causation. Sure, people who play aggressive video games tend to be more aggressive, but that doesn’t mean the game is the cause. Further, there are many confounding factors, and more recent research shows that violence in the game is not the key feature. It has more to do with the level of difficulty and the resulting frustration that seems to raise aggression, not violence in the game. More competitive and difficult games tend to be more stimulating, regardless of the level of violence. The bottom line – after decades of research, systematic reviews conclude: “There is insufficient scientific evidence to support a causal link between violent video games and violent behavior.”
Now we seem to be going through the same cycle again, but this time with anxiety and depression. It is also not just video games being criticized, but social media and any screen time. And again there is evidence of some correlation, but without showing causation. It is very likely that people who feel socially isolated or depressed might seek out video games and social media as a distraction or to have some social connection. Taking away those outlets out of fear they are causing the symptoms can easily be counterproductive. A recent systematic review found:
“Scientific research investigating social media’s impact on adolescent mental health has failed to provide clarity. There is converging evidence for a small negative cross-sectional association between time spent on social media and well-being. However, longitudinal studies and those measuring social media use beyond time spent or mental health beyond general well-being show diverging results.”
In short, the evidence is weak and mixed, while better studies designed to control for likely confounding variables do not show any consistent effect. This does not mean there are no potential issues with excessive video game or social media use. It is one variable that we need to consider and carefully research, and there are likely some individuals in some contexts where it does exacerbate or cause problems. But are video games and social media the “one true cause” of all current adolescent ills, and basically responsible for the recent increase in mental health diagnoses? Probably not.
The current best inference is that video games and social media are filling a void of social support structures of various kinds, and that the solution is not to simply restrict or take away screens. Rather, we should be filling the void with more diverse support and activities.
On the flip side, there is evidence that video games and other interactions with digital technology increase some skills (just like in Starfighter). What we are seeing is not an atrophy of skills, but a shifting of skills from more analog to more digital activity. Since the industrial revolution it seems that each generation laments the fact that “these kids today” lack the skills that we older folks developed, while missing the fact that they are developing new skills for a new world. We may not get this new world they are creating, but they are not creating it for us. This is part of the reason it is difficult to predict the future use of technology, because we keep trying to imagine ourselves in this future. But we will not be in that future – new generations of people will, and they will be different in ways we cannot predict. To some extent, we have to trust that new generations will find their own way.
Meanwhile, it turns out that video games are a really good way to train certain skills. If anything, the technology is under-leveraged. Video gamers are better at endoscopic surgery, because certain kinds of games develop psychomotor skills like those used in this kind of surgery. Video games can improve more general cognitive skills as well: “Findings indicate that higher levels of videogaming proficiency are linked to improvements in visuospatial short-term and working memory, psychomotor speed, and attention.” Some of this data is correlational, but a lot of it is experimental, showing a causal effect with a dose-response.
But also, video games can train specific skills, not just improve cognitive function. They are great at keeping the level of difficulty just ahead of the user, advancing them at their own pace. You can also simulate situations that you cannot recreate in the physical world. The FAA is even trying to get in on the “Starfighter effect” – they are specifically recruiting video game players for jobs in air traffic control.
Video games definitely do not have the stigma they did when I was younger, but it is not gone completely, and many of the same instincts have migrated over to screen time in general and social media specifically. I do think we need to resist the temptation to simplistically blame the latest new technology our kids are using for whatever societal ills we are worried about. This does not mean we should not carefully consider and research the effects of new technology on society, especially to identify vulnerable individuals or potentials for abuse. But don’t panic or overreact. Just taking away screens is likely to be counterproductive. It’s better to fill kids’ lives with diverse experiences and opportunities (which is a lot more work than just demonizing video games and screens). We also risk losing out on the potential benefits of new technologies. Video games can build cognitive ability and are great at training specific skills, and there are many potential upsides to social media.
The post Do You Have Video Game Skilz? first appeared on NeuroLogica Blog.
Telegony is a long-discredited concept of sexual heredity that has been making a surprising comeback in recent years—particularly within digital filter bubbles, right-wing esoteric milieus, and so-called energy coaching scenes. But what does this tongue-twisting term actually mean?
Classical philologists will recognize Telegony as the title of a lost Greek epic recounting the story of Telegonus, the son of Odysseus and the sorceress Circe.1 This rare literary reference, however, has little to do with the way the term is used today.
In scientific-historical terms, telegony refers to the former belief that a woman’s previous sexual partner—often assumed to be the first—could permanently influence her body and thereby affect the traits of children conceived later with different partners. One dictionary definition calls it “a former belief that a sire can influence the characteristics of the progeny of the female parent by subsequent mates.”2
Derived from the Greek tēle (distant) and goneia (procreation), telegony literally means “remote reproduction.” According to this notion, an earlier partner leaves a lasting biological imprint that shapes a woman’s health and the genetic makeup of future offspring—even when those children are fathered by someone else.
This assumption has been decisively refuted for more than a century. Since the formulation of Mendel’s laws of inheritance, modern genetics has established beyond doubt that only the biological parents contribute to a child’s genetic constitution.3 Telegony has therefore long been classified as a pseudoscientific myth.
Curiously, contemporary dictionaries still cite prominent media outlets—Time, Newsweek, and The Guardian—as sources that allegedly support or discuss telegony. A closer examination, however, reveals persistent misinterpretations.
Both Time and Newsweek claim that Aristotle defended telegony.4 Not so. While Aristotle wrote extensively on biology and reproduction, his treatise, De generatione animalium, does not propose that former sexual partners influence future offspring. Instead, he advanced a speculative model in which male semen supplies form while the female body provides matter.5 This reflects a metaphysical conception of gender—associating masculinity with form and intellect, femininity with substance and passivity—rather than an empirical theory of heredity.
The remaining references stem from The Guardian and are often cited in sensational headlines.6 These articles report on field studies by Australian researchers suggesting that previous mates might influence offspring size.7 Crucially, however, the observed effect concerned houseflies only. What headlines obscure—but the articles themselves clarify—is that these findings have no relevance for mammals, let alone humans.
From Discredited Biology to Political Myth
Although Mendel’s laws relegated telegony to scientific error by the early twentieth century, ideas of genetic “imprinting” did not disappear entirely. They resurfaced in ideological form within National Socialist racial doctrine—though not under the explicit label of telegony.
The Nuremberg Laws did not claim that a woman’s first sexual partner permanently affected her later offspring. Yet the underlying logic of “Aryan bloodlines” and the notion of racial defilement through sexual contact relied on structurally similar assumptions: that sexual encounters could transmit lasting biological or moral contamination.8 Political theorists have long noted that myths become politicized when they resonate with prevailing cultural anxieties—whether about heredity, purity, or social order.
This recursive history did not end with the twentieth century. The contemporary revival of telegony occurs in milieus that generally reject any association with historical racism. Nevertheless, similar narrative patterns reappear—now reframed in spiritual, esoteric, or pseudotherapeutic language.
In October 2025, these developments reached a broader public audience. At a Skeptic Awards ceremony in Vienna, a European provider of so-called “telegony erasure” services placed third in a public vote for the most unscientific claim of the year.9 The Berlin-based proponent advertised the ability to remove alleged energetic imprints of former sexual partners from a person’s DNA through nonmedical “energetic healing,” and claimed to have trained a network of practitioners across Germany, Austria, and Switzerland.
Publicly available material reveals striking similarities across these offerings. Multiple providers use nearly identical language, concepts, and website structures when promoting telegony deletion services, suggesting not isolated belief but a loosely organized commercial ecosystem.
The ideological references invoked by these providers are revealing. Alongside esoteric concepts, they cite the so-called Rita Laws and Slavic-Aryan Vedas as foundational sources.10 These texts are largely dismissed within Slavic studies as modern fabrications, likely originating in the twentieth century. Today, they are frequently employed within strands of Slavic neopaganism (Rodnoverie) to mythologize ethnonationalist ideas such as hereditary purity and ancestral obligation—claims devoid of medical or historical foundation.11
In this context, the Anastasia movement also appears. Based on novels by Russian author Vladimir Megre, the movement centers on a fictional Siberian healer and promotes a social utopia grounded in “natural” living, ancestral land, and hereditary harmony.12 Telegony-like ideas—particularly notions of female purity, bodily contamination, and transgenerational burden—play a central role.13 Sect-monitoring bodies in several European countries have classified parts of the movement as sectarian and, in some cases, as promoting antisemitic and ethnonationalist motifs.
These environments often overlap with right-wing esotericism, purity cultures, and manosphere-related discourses. Blogs and forums within these spheres repeatedly—and incorrectly— reject Mendelian genetics, misattribute claims to Aristotle, and revive essentialist gender models in which women are framed as permanently passive and subordinate to male agency. What emerges is not a revival of science, but a repackaging of myth—adapted to digital platforms and marketed as personal transformation.
The Demand Behind the Myth
When a long-disproved concept resurfaces despite overwhelming refutation, a psychological question arises: Why do people adopt the myth rather than the evidence? The revival of telegony is driven by several overlapping dynamics.
Within Anastasia-related narratives, telegony is embedded in a closed worldview that promotes rigid gender hierarchies.14 Men are portrayed as active lineage bearers, women as passive vessels and spiritual caretakers. Within this framework, the idea that a woman is permanently “imprinted” by her first sexual partner functions as a mechanism of control, naturalizing female subordination.
Comparable patterns appear in manosphere-related online environments, where telegony is framed polemically as pseudobiological justification for moral judgments about women’s sexuality. In these filter bubbles, reductive gender stereotypes dominate.15
By contrast, telegony’s resonance in alternative medicine and energy-healing scenes follows a different logic. Here, the appeal lies less in authoritarian gender ideology than in the promise of liberation from perceived constraints of conventional medicine. Audiences range from curious experimentalists to resolute opponents of scientific institutions.16
Across these contexts, however, a more general motive may be discerned. The wish to “remove” traces of former sexual partners may reflect dissatisfaction with experiences of medicine and intimacy. Many people long for healthcare that feels meaningful rather than bureaucratic, and for sexuality that carries symbolic weight beyond the purely physical.17
Against this backdrop, telegony can appear to offer something else: the promise that sexual encounters matter, that they leave traces, that intimacy has depth and consequence. This emotional appeal helps explain why myths such as telegony persist despite scientific refutation.
Telegony’s modern revival is not a scientific rediscovery but a cultural repetition—a myth repackaged to meet contemporary anxieties about sexuality, identity, and control. Recognizing this pattern is essential to distinguishing legitimate meaning-making from the misuse of discredited science.
Last week I wrote about the possibilities of genetically engineering humans. The quickie version is this – we are already using genetic engineering (CRISPR) for somatic changes to treat diseases, and other applications are likely to follow. Engineering germline cells, which would get into the human gene pool, are legally and ethically fraught, but it’s hard to predict how this will play out. I have also written often about genetically engineering food. I think this is a great technology with many powerful applications, but it should be, and largely is, highly regulated to make sure that anything that gets into the human food chain is safe.
I haven’t written as much about genetically engineering pets, and this is likely to be the lowest hanging fruit. That is because pets are neither food nor are they a human medical intervention. But that does not mean they are not regulated – they are regulated in the US under the FDA and USDA. Genetic engineering is treated as an animal drug, and must be deemed safe to the animals being engineered. The USDA also can regulate engineered plants and animals to make sure they do not pose any risk to the environment, humans, or livestock. This makes sense. We would not want, for example, to allow a company to release a genetically engineered bee, pest, or predator into the environment without proper oversight.
Pets, as a category, are domesticated, are not intended to be used as food, nor are they intended to be released into the wild. I say “intended” because pets can become food for predators, and they can escape or be released into the wild, and even become feral. But these contingencies are much easier to prevent than with food or wild plants or animals. For example, if you get a rescue pet, it has likely already been spayed or neutered. One easy way to reduce risk would be to make any GE pet sterile, which is likely what the company would want to do anyway to prevent violation of their patents through breeding. In short, it seems that reasonable regulatory hurdles should not be a major problem for any effort to commercialize GE pets.
Unsurprisingly there are companies already working on this. One company, the Los Angeles Project, is working on making rabbits that glow in the dark. This is actually pretty easy (I bought some glow-in-the-dark petunias last year), as we already have isolated genes for green fluorescent protein and have put them in many types of plants and animals. Another company, Rejuvenate Bio, researches genetic treatments for chronic diseases in humans. This, of course, involves a lot of animal research, so they are also developing these treatments for pets, to increase their health and lifespan. Scoutbio is another company working on gene therapies for disease, but they are focusing on treatments for adults. There are also pet cloning companies, which is not the same thing, but there is a lot of overlap in this technology and it is not a big leap to start tweaking those embryos.
So where is all this likely to lead? First, I think GE pets will happen a lot faster than GE humans, because the ethical and therefore legal bar is likely to be a lot lower. What kinds of modifications are we likely to see? Some we will see simply because it is already possible to do, like the green fluorescent rabbits. We are doing it because we can. But as the tech evolves we can see pets with much longer lifespans. That raises an interesting question – how long would you want your dog or cat to live? Most people I talk to feel that 10-15 years for dogs and 15-20 years for cats is too short. I have owned many pets, and their brief lives always seem to go by too quickly. But at the other end of the spectrum I have also known people who own parrots, which is a lifelong commitment. Also, even though the loss of a pet can be heart-wrenching, you then get to experience a new kind of pet with their own personality and go through the puppy phase again. I also wonder how difficult it would be to lose a beloved pet you owned for 30 years, say. How much harder would that be? There is a sweet spot in there somewhere, perhaps 20-30 years. In any case, it would be interesting to be able to choose the longevity of your pet. And of course, it would be great to reduce the many chronic illnesses that plague our pets.
One other difference between pets and humans is that we have already, through conventional breeding, significantly altered our pets, especially dogs. Just think of all the different dog breeds. Some of them, I would argue, are unethical, like making dog breeds that have difficulty breathing. I seriously think that the institutions that regulate purebred dogs should place a much higher priority on the overall health of any recognized breeds, and not formally recognize any breeds with inherent health problems. It may be too late for this, but that would happen in my perfect world. In fact, genetically engineering pets may improve their overall health and happiness. The compromises that come with breeding cute traits may not be necessary with the power of genetic engineering. We could engineer new traits into baseline healthy and outbred populations, and would not have to use severe genetic restriction to create these extreme breeds.
And of course genetic engineering could create pets that would not otherwise exist. Superficial traits, like eye color and coat pattern, should be easy. Do you want long hair, short hair, or wire hair? What color? Short or long tail, straight or curly? Floppy ears or pointy? Non-shedding and hypoallergenic are a must. It would also be possible to engineer their personality – easy to train, family friendly, never bites, etc. We are not far from the age of designer pets. We could also go outside the bounds of existing traits to make exotic, even mythical-seeming, pets. This starts to get trickier the more ambitious we get, but is within the realm of possibility.
We could also use genetic engineering to domesticate species that would be difficult or impossible to turn into pets through breeding alone. Most people by now know about the Russian silver foxes bred to be friendly and tame. There is still some controversy about the research – how domesticated are they, and did they already have some of these traits before breeding? But regardless, they do not make good pets. They are difficult to train (they pee everywhere), are destructive, and are very high maintenance. But with some targeted genetic engineering, it would be easier to give them all the traits we love in dogs, for example. We could possibly do the same with raccoons and many other species – GE away their problematic traits and make them easy pets. This starts to get into trickier ethical territory, but I would argue that fully domesticating a population of wild animals through genetic engineering is ethically no different from doing it through breeding.
It seems very likely that all of this will happen eventually, with the main question being the timeline. Personally, I have no problem with it, and have to admit I would love an exotic pet – as long as it is properly regulated with the welfare of the animals being adequately considered. In fact, I would like to see a higher standard than currently exists for traditional animal breeding.
My final question, however, is what will eventually be more popular – GE pets or robotic pets? There are interesting arguments to be made for both, and perhaps people will have both, in different contexts and for different purposes. If you could have one or the other right now, in a mature form of the technology (say, 200 years from now), which would you pick? Maybe it won’t matter much because the technologies will both converge on your perfect pet.
The post Genetically Engineered Pets Are Coming first appeared on NeuroLogica Blog.
Are we getting close to the time when parents would have the option of genetically engineering their children at the embryo stage? If so, is this a good thing, a bad thing, or both? In order for this to happen such engineering would need to be technically, legally, and commercially viable. Let’s take these in order, and then discuss the potential implications.
The main reason this is even a topic for discussion is because genetic engineering is technically feasible. Obviously we do it to plants and animals all the time. We also have increasingly powerful and affordable technology for doing so, such as CRISPR. This is already powerful and practical enough for small startups to perform CRISPR as a service, if it were legal. We already have FDA-approved CRISPR treatments, and have performed personalized CRISPR therapy. CRISPR is fast and affordable enough to have made its way into the clinic. But there is a crucial difference between these treatments and germ-line modification – these treatments affect somatic cells, not germ-line cells. This means that whatever change is made will stay confined to that one individual, and cannot get into the human gene pool. What we are talking about now is genetically modifying an embryo at an early enough stage that it will affect all cells, including germ cells. This means that these changes can be passed down to the next generation, and effectively enter the human gene pool.
This difference is precisely why there is regulation dealing with such procedures in many countries, including the US. In the US the situation is a little complex. It is not explicitly illegal to perform germ-line gene editing on humans. However, there is a ban on federal funding for any such research. This does allow for private funding of such research, but any resulting treatment would still need FDA approval, which is highly unlikely in the current environment. Despite this, several startups are discussing exploring this idea. Why this is happening all at once is not clear, but it seems we have crossed some threshold and startups have noticed. With current regulation, where does that leave us regarding our three criteria?
Technically, a CRISPR-based germ-line treatment for humans is possible. We do have the technology. What needs to be worked out are the specific changes and their effects. This would require clinical trials, and that is the main stumbling block in the US and some other countries. It seems unlikely the FDA would approve such trials, and therefore there would be no way to even work towards FDA approval. A company could theoretically do privately funded studies that are not part of FDA approval, but they would still need ethical (IRB) approval for such studies, which may prove difficult (although not necessarily impossible). Such research could be carried out in countries with more lax regulations, however. Over 70 nations have such regulations, which means many do not. So we are technically close to having marketable treatments designed to change actual human genetic inheritance.
Legally, in most developed nations there does not appear to be any appetite for allowing human germ-line manipulation. However, such services could be offered in countries without hindering regulations, perhaps the same countries in which the translational research was done. We currently do not have any international bans: the WHO advises against germline engineering, but there are no legally binding international regulations. This is a technology that definitely requires not only an international consensus but enforceable regulations, because what happens in one country can affect the entire human population.
In short, there is a pathway to skirt any current regulations and make such treatments available. However, if startups start developing germline-altering treatments, that might motivate governments to find ways to regulate and effectively ban such treatments. Would such treatments be commercially viable? If by this you mean – would there be a customer base willing to pay enough to make it a profitable service, the answer is clearly yes. If you mean – are there companies currently offering such services, the answer is no. But that may be changing soon.
What could be the implications of this technology? It depends on how it is regulated and used (like so many advanced technologies). I will speculate on what I think are the best-case and worst-case scenarios. Best case, such technology would be used to minimize the burden of genetic disease. We already have techniques to sort sperm to avoid sex-linked mutations and to select more genetically healthy sperm. But what if we could do this down to the individual gene, and make sure that IVF occurs only with sperm that does not carry an allele for a genetic disease? I can’t see any downside to this.
The next step, however, would be altering genes, not just selecting them. But again, this could be limited to altering a gene that would result in a genetic disease into a healthy version. The resulting gene would be one that is already in the human population, and the only result would be the elimination of one disease-causing version of that gene. Again, hard to see a downside. Such treatments would almost certainly be more cost effective than managing the genetic disease itself. And if it were done to the germline, it would only have to be done once for that genetic line. I suspect that when such treatments become technically available, and confidence is high enough in the technology itself, they will become legal and available.
But there are at least two other categories of genetic alteration that become increasingly problematic. The category above we can call disease treating; the next is risk modifying. What if we could alter a gene from one version that conveys a high risk of ultimately developing Alzheimer’s disease to another version that has a relatively low risk? This would not be treating a genetic disease, but simply altering the genetic risk of developing a disease. We could potentially do the same for high cholesterol, diabetes, obesity, and high blood pressure. Again, we would not be introducing any new genes into the human gene pool, just giving people alleles that convey lower risk of specific diseases.
However, there is a potential downside here. If such treatments became common, they would potentially reduce genetic diversity in the human population. Many genes that convey a high risk in one area have other benefits. They just have different tradeoffs. We may be reducing disease risk in one area, but also reducing resilience to other diseases. In other words, there is a potential for unforeseen consequences. Also, the number of people who could potentially benefit from such genetic alterations is much higher than for genetic diseases, so the implications for the human gene pool are greater. The risk-benefit ratio is therefore harder to calculate. I think such treatments might be viable one day, but would require a lot of research to minimize the possibility of unforeseen negative consequences.
The final category I will call gain-of-function alterations. This might include introducing genes from other species or novel genetic alleles that provide a phenotype that does not currently exist in the human population. This category has the greatest potential for change, and therefore for both best-case and worst-case scenarios. Some people might think there is no best case in this category, and that is reasonable if you think that the risk will never be worth it, and that such changes could alter what it even means to be human. If we still want to imagine a best case, that might involve limiting such changes to ones for which there is a robust consensus that they would be good for humanity with little to no downside. This would also have to include some consideration of fair and just access to such changes. Perhaps this might include genes that help adapt people to living in space or on Mars, or that eliminate addiction. It’s hard to think of many examples outside of disease modification, however.
It is much easier to imagine worst-case scenarios. The common ones that are frequently raised include creating not just different classes of people, but different subspecies. Wealthy individuals could potentially afford a suite of upgrades to their children, making them smarter, stronger, and healthier, with longer lifespans. It’s hard to imagine such a thing ending well. Another classic doomsday scenario is the creation of genetic supersoldiers, creating an arms race among competitive nations to engineer the most deadly soldiers. Again, hard to see this ending well. Yet another common sci-fi scenario is the introduction of genes that will significantly alter the human phenotype, blurring the lines between human and non-human. And of course there is the ultimate worst-case scenario, an accidental (or perhaps not so accidental) genetic apocalypse. There is a range of possibilities here as well, with the absolute worst imagined in a Rick and Morty episode in which the entire planet was reduced to genetic monstrosities.
There are also some edge cases that have complex elements, including some truly horrific ones. What if, for example, genetic alteration could change someone’s apparent “race” or even their biological sex? What would be the social implications of an African family deciding they wanted a European-looking child, or vice versa? How common would this become? Would apparent race become a fad, shifting from generation to generation? It is now common among some Asian youth to seek eyelid cosmetic surgery. What if this could be accomplished with gene therapy? How accepting would society be towards pre-pubescent children wanting gene therapy to alter their biological sex so that they go through puberty as the other sex? How would the furry community react to the possibility of genetic furriness? What if parents wanted for their children a standard of beauty that is generally considered extreme, even freakish? What if a culture decides that women should be genetically prevented from having certain bodily functions?
Genetic alteration is a powerful technology, especially when applied to the germline. There is the potential for extreme good, extreme harm, and extreme weirdness. It sounds like an area that would benefit from thoughtful regulation, rather than being left to the whims of startup culture.
The post Are Genetically Engineered Humans Coming first appeared on NeuroLogica Blog.
Ostensibly, the reasons Donald Trump and his administration (particularly Secretary of War Pete Hegseth) went to war with Iran were to respond to the Iranian leadership’s brutal suppression of Iranian protesters, to put a stop to the activities of Iran’s network of proxy groups throughout the Middle East, and to destroy Iran’s ability to create a nuclear arsenal.1 President Trump specifically stated (emphasis in the original):
(…) if we didn’t do what we’re doing right now, you would have had a nuclear war, and they would have taken out many countries.2
He continued:
The regime already had missiles capable of hitting Europe and our bases, both local and overseas, and would soon have had missiles capable of reaching our beautiful America.3
Since then, the Trump administration has added “enriched uranium” as another reason to invade.
Iran’s religiously based autocratic regime has indeed brutally suppressed peaceful protest and does support a considerable number of violent proxies in the Middle East. However, there appears to be little or no support for the president’s assertions that Iran has a viable nuclear weapons program. He has previously stated that U.S. strikes on Iran’s nuclear facilities in June 2025 had “obliterated” that nation’s nuclear weapons program.4
So, if Iran’s military capabilities aren’t the rationale for the Trump administration’s war on Iran, did the administration prosecute this war to help pro-democracy groups in Iran bring down that country’s dictatorial regime? Apparently not. War Secretary Pete Hegseth said at a March 2 Pentagon press briefing, “This is not a so-called regime-change war, but the regime sure did change, and the world is better off for it.”5
That’s not quite correct. Iran’s new leader, Mojtaba Khamenei, the son of Ayatollah Ali Khamenei, Iran’s previous religious and political leader, recently killed in a U.S. air strike, isn’t likely to turn Iran into a secular democracy. U.S. air strikes have, if anything, hardened the anti-western, anti-democracy stance of the Iranian leadership.
So, if the United States isn’t intent on democratizing Iran, and Iran’s military capabilities aren’t an issue, what is our government’s motivation for attacking Iran, even bringing it to its knees in what President Trump characterized as “unconditional surrender”? While Trump’s motives may be a bit murky and unfocused, those of Secretary Hegseth are not.
Sporting on his chest, among his many other tattoos, is a Jerusalem cross—a favored emblem of the medieval crusaders. Hegseth, author of the 2020 book, American Crusade, told CBS reporter, Major Garrett: “I mean, obviously, we’re fighting religious fanatics who seek a nuclear capability in order for some religious Armageddon.”6 Troops, he later added, “need a connection with their almighty God in these moments.” A couple of days later, not long after returning from a dignified transfer of soldiers killed in action, Hegseth quoted Psalm 144 at a Pentagon press conference, “Blessed be the Lord, my rock, who trains my hands for war and my fingers for battle.”
This view—that we are involved in a holy war against Islam—is not Hegseth’s alone. The Military Religious Freedom Foundation (MRFF) has received over 110 complaints from enlisted personnel that their officers, referencing the Book of Revelation, have been essentially preaching to them, telling them this war was part of a divine plan. In one such complaint, a noncommissioned officer (NCO) explained that his commander even said President Trump was divinely anointed to carry out this plan: “This morning our commander opened up the combat readiness status briefing by urging us to not be ‘afraid’ as to what is happening with our combat operations in Iran right now,” the NCO wrote. “He said that ‘President Trump has been anointed by Jesus to light the signal fire in Iran to cause Armageddon and mark his return to Earth,’” the NCO continued. “He had a big grin on his face when he said all of this which made his message seem even more crazy.”7
This message reflects Hegseth’s own rhetoric, as expressed at a recent Pentagon Prayer Service (emphasis added):
Give them wisdom in every decision, endurance for the trial ahead, unbreakable unity, and overwhelming violence of action against those who deserve no mercy.8
One major source of evangelical Christian bias among officers in the military is the Air Force Academy. Evangelical Christian proselytizing and pressure to adhere to fundamentalist end-times rhetoric have long been a problem at the Academy. Consider this 2007 news item:
Three faculty members from United States Air Force Academy (USAFA) in Colorado Springs, Colorado–one of whom is also a former cadet–have gone public today with their criticisms of evangelical Christian proselytizing at the USAFA. They are joined by another former cadet now serving in Iraq. One faculty member has been reassigned to the Air Command and Staff College at Maxwell Air Force Base in Alabama.9
This is one of several news items I found reporting on this problem during the 2000s. Since I was unable to find any recent news stories on the present state of affairs at the Academy, I called the Military Religious Freedom Foundation and was privileged to speak with Michael Weinstein, founder and president of MRFF. I asked him whether, given that there had been some congressional scrutiny of the Air Force Academy’s religious policies, the Academy had reformed with respect to its religious bias. He told me that, unfortunately, the problem of evangelical Christian religious proselytizing was now worse than ever.10
Among the many instances of religious coercion posted on MRFF’s Air Force Academy “Wall of Shame” is the 2022 incident in which a training day was scheduled on Yom Kippur, perhaps the most solemn of Jewish religious holidays (emphasis in the original):
In its latest slap in the face to Jewish cadets, the ever-religious-diversity-challenged Air Force Academy this year scheduled its “Commandant’s Challenge” on October 5, perfectly timed to fall right smack on Yom Kippur, the most solemn of all Jewish holy days, forcing Jewish cadets to choose between their religion and joining their much-preferred Christian counterparts in the semester’s most important training day.11
This would seem to be an obvious violation of the separation of church and state. However, when the Air Force Academy invited the highly religious former Housing and Urban Development Secretary Ben Carson to speak, he answered a cadet’s question about the separation of church and state as follows:
[God] is the reason that our nation excelled the way that it does. And those people that like to criticize America—criticize people in America—and always talking about separation of church and state, which is not in the Constitution, by the way—do they realize that our founding document, the Declaration of Independence, talks about certain unalienable rights given to us by our creator, a.k.a. God—do they realize that the Pledge of Allegiance to our flag says we are one nation under God—in many courtrooms, on the wall, it says ‘In God we Trust’—every coin in our pocket, every bill in our wallet says ‘In God we Trust.’ So, if it’s in our founding documents, it’s in our Pledge, it’s on our courts, it’s on our money, but we’re not supposed to talk about it. What in the world is that? In medicine we call it schizophrenia.12
While “In God We Trust” is engraved on our coins, and while “under God” was inserted into the Pledge of Allegiance in the 1950s, this hardly constitutes the imposition of a state religion. In any case, Carson was wrong in saying separation of church and state is not in the Constitution. The First Amendment, possibly the most important portion of the Bill of Rights, opens with a prohibition against government involvement in religion:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
Further erosion of the separation of church and state may be found at the Air Force Academy, as evidenced by the recent appointment of Erika Kirk, conservative activist and widow of Charlie Kirk, to the academy’s Board of Visitors. A recent news report on this appointment described how this is in keeping with Secretary Hegseth’s framing of the current war in terms of a Christian end-times struggle between good and evil:
Records from the United States Air Force Academy’s oversight board show leaders dismantling diversity programs and reviewing curriculum as the board embraces what critics call a concerning ideological turn toward Christian nationalism and prepares to seat conservative activist Erika Kirk.13
The rhetoric voiced above by a military commander to his troops is ominous since it brings to mind the specter of nuclear war. The coupling of war-making with religious dogma also dredges up the specter of religious wars in the past, culminating in the Thirty Years War, and the creation of religious states such as Savonarola’s Florence, Calvin’s Geneva, and Oliver Cromwell’s England. Our more secular society grew out of the Enlightenment of the 18th century, itself engendered in reaction to the excesses of these religious wars and religious states.
Hegseth, in contrast, sees our nation not as one founded on the principles of the Enlightenment, but rather as a specifically Christian nation:
“America was founded as a Christian nation,” he said at a recent National Prayer Breakfast. “It remains a Christian nation in our DNA, if we can keep it,” he added, splicing some religion onto a famous Benjamin Franklin quip about whether the US was a republic or a monarchy.14
So, was America founded as a Christian nation? Not according to John Adams, the nation’s second president. The Treaty of Tripoli, negotiated in 1796 during the Washington administration to secure commercial shipping rights and to protect American ships in the Mediterranean from the Barbary pirates, and signed by Adams as president, declares:
As the government of the United States of America is not in any sense founded on the Christian Religion,—as it has in itself no character of enmity against the laws, religion or tranquility of Musselmen [Muslims],—and as the said States never have entered into any war or act of hostility against any Mehomitan nation, it is declared by the parties that no pretext arising from religious opinions shall ever produce an interruption of the harmony existing between the two countries.15
Hegseth is heavily influenced by Douglas Wilson, a conservative theologian and Christian Nationalist—one who advocates for Christian dominance over government and society. The sort of Christianity Wilson advocates is something few American Christians today would recognize as what they believe.16 Hegseth’s views would also seem to derive from the (now discredited) end-times scenario proposed by the late Hal Lindsey, which involved the building of the Third Temple, elucidated in a 2015 report from one of his websites:
Unbelieving religious Jews will rebuild the false temple and offer false animal sacrifices during the first part of the Tribulation. (Daniel 11:31). Then the “man of lawlessness”, the Antichrist, will desecrate that false temple of God by taking his seat in the Holy of Holies, displaying himself as being God. (2 Thessalonians 2:3-4 NASB) That event will start the last half of the Tribulation. That will start 3½ years of the greatest horrors yet known to mankind. It will end with the visible Coming of THE ALMIGHTY, the Lord Jesus Christ. He will rule for 1000 years of peace. Then is the last Judgment of all unbelievers of all Ages. He will then establish forever the New Heaven and Earth.17
In a 2018 speech, Hegseth rhapsodized about the possibility of building the Third Temple on the Temple Mount.18 Lindsey’s prophecies, first expressed in his book The Late, Great Planet Earth (the best-selling book of the 1970s), originally called for the Tribulation, the seven-year period leading up to the Battle of Armageddon, to begin within the generation (in his reckoning a period of 40 years) of the creation of the state of Israel. Since Israel became a state in 1948, that would have meant the Tribulation would have begun by 1988. However, as that year approached and it seemed unlikely to mark the beginning of the end, Lindsey recalculated the time in two different ways. First, he said that the beginning of Israel as a state perhaps should not be dated to 1948. Rather, it should be dated to 1967, when Israel captured the West Bank in the Six Day War. Thus, the Tribulation would begin by 2007. Next, he decided a generation might really mean 100 years, rather than 40. Thus, the Tribulation might well begin by 2048 (1948 + 100) or even 2067 (1967 + 100).
The event that will supposedly herald the Tribulation is the Rapture—the belief that, just before the horrific catastrophes of the end-times are about to take place, true believers will be taken up to heaven, thus saved from all the horrors specified in the Book of Revelation. This elaborate doctrine is based on just two verses from the Pauline epistle 1 Thessalonians (1 Thess. 4:16–17):
For the Lord himself will come down from heaven, with a loud command, with the voice of the archangel and with the trumpet call of God, and the dead in Christ will rise first. After that, we who are still alive and are left will be caught up together with them in the clouds to meet the Lord in the air. And so, we will be with the Lord forever.
The “we” Paul was referring to in these verses was quite literal, since the Christians of the first century believed the world would end with their generation. Consider, for example, the following passages from the Gospel of Matthew. First (Mt. 10:23):
When you are persecuted in one place, flee to another. Truly I tell you, you will not finish going through all the towns of Israel before the Son of Man comes.
This view that Christ would return to the earth in the generation of the first believers is made even more explicit in Mt. 16:27–28:
For the Son of man shall come in the glory of his Father with his angels; and then he shall requite every man according to his works. Verily I say unto you: There are some standing here, which shall not taste of death, till they see the Son of man coming in his kingdom.
Since Jesus didn’t return in the lifetimes of those to whom he was speaking, to requite everyone according to their works, i.e., the Last Judgement, and since Paul and the Christians of the first century did not rise to meet God in the air, how is it that end-times prognosticators see the verses above as applying to today, some two thousand years later? Christian apologists go to great lengths to explain these contradictions. One of these rationalizations is that “the Son of man coming in the glory of his father” refers to the Transfiguration, when, according to the Synoptic Gospels (Mark, Matthew, and Luke), Jesus was supernaturally transformed on a mountain in the presence of three of his disciples.19
While this interpretation is rather adroit, it fails to explain the allusion to the Last Judgment in Mt. 16:28. A less adroit rationalization is that Mt. 16:27–28 refers to the miracle of Pentecost (Acts 2:1–12), when the Holy Spirit supposedly descended upon the disciples, allowing them to speak in languages other than their own. Both rationalizations violate Occam’s Razor. The simplest and most direct interpretation of the verses above is that both Paul and the author of Matthew believed in the imminent return of Jesus, and that the verses were never intended to refer to events two thousand years in the future.20
The end-times scenarios that so animate Pete Hegseth and many of the proselytizers at the Air Force Academy aren’t really based that firmly on the Christian scriptures. They are, in fact, extrabiblical elaborations, wild fantasies based on teasing bizarre interpretations out of tenuous biblical passages. As an example of this, consider the Rapture, a mainstay of modern end-times narratives. As noted above, the entire biblical support for this is just two verses from a single Pauline epistle, 1 Thess. 4:16–17. In fact, the modern fundamentalist scenario of the Rapture, which has believers suddenly and mysteriously disappearing en masse as a prelude to the Tribulation, was the invention, in 1830, of a single maverick theologian of dubious credentials, John Nelson Darby (1800–1882).21
Perhaps Secretary of War Pete Hegseth, the proselytizers at the Air Force Academy, and those military officers who see the President as anointed by God to bring about Armageddon, and who reference the Bible to back up their views, should read one more Bible verse, purporting to be the words of Jesus, concerning when the end will come, Matthew 24:36 (KJV): “But of that day and hour knoweth no man, no, not the angels of heaven, but my Father only.”