I thought everyone needed one more thing to worry about, so here you go: evolving AI. When I hear this phrase I think of two things. The first is AI systems designed to simulate organic evolution. The second is artificially intelligent systems that are capable of evolving themselves. The latter is the type you need to worry about.
Systems that simulate evolution already exist – Avida, Biogenesis, Grovolve, Tierra, Framsticks, and others. They basically have some code that competes for some resource or to complete some task, and the code randomly mutates and reproduces. That’s it – that’s all you need for an evolution simulation. The code can compete for computer resources, or live in a physics simulator with digital creatures trying to move quickly across terrain. These are sometimes gamified for entertainment, but they are also used for serious research, to study patterns within evolutionary systems. I would love to see these kinds of systems get more and more sophisticated, even to the point of reasonably simulating living systems. Such systems could be used to test hypotheses about evolution – and would also disprove a lot of silly creationist talking points.
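To make the basic loop concrete, here is a minimal sketch in Python of the kind of algorithm these simulators run: digital “organisms” are just strings of bits, fitness is how well they exploit some resource or complete some task, and reproduction copies the winners with occasional random mutation. The fitness function here is a toy stand-in, not how Avida or Tierra actually score their organisms.

```python
import random

GENOME_LEN = 32
POP_SIZE = 100
MUTATION_RATE = 0.01
TARGET = [1] * GENOME_LEN  # toy "resource": fitness = how many bits match

def fitness(genome):
    # Toy stand-in for "competing for a resource" or "completing a task"
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome):
    # Each bit has a small chance of flipping -- random variation
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve(generations=200):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Selection: the fitter half reproduces, the rest die off
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        # Reproduction with mutation: copy survivors, imperfectly
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("Best fitness after 200 generations:", fitness(best))
```

Everything more interesting in the real simulators – competition, parasitism, arms races – emerges from elaborations of this same mutate-select-reproduce cycle.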
But now we are talking about evolvable AI – AI systems that are capable of developing themselves through evolutionary processes. A new paper in PNAS discusses the potential power and risks of such systems. They echo the kinds of issues that have been explored in science fiction for decades. The authors write: “Evolvable AI (eAI), i.e., AI systems whose components, learning rules, and deployment conditions can themselves undergo Darwinian evolution, may soon emerge from current trends in generative, agentic, and embodied AI.” The consequences, they argue, have not been adequately addressed in discussions of the risks of rapidly advancing AI.
The authors distinguish two types of evolving AI – breeder systems and ecological systems. In breeder scenarios the programmers are in control of the process, selecting which code to “breed” and evaluating the outcome. This process is like a digital version of domestication, and has the potential, if done wisely, to maintain control. In fact, systems can be bred for greater predictability and control. There are still risks here. So far humanity has not bred an animal to be more intelligent than humans. This could theoretically happen with AI, resulting in emergent behavior that was not specifically selected for and that could escape the control of the human programmers.
A far greater risk, however, is the ecosystem scenario, in which the program itself produces variation and selection, without external control. They argue that such systems lead to “selfish replication” which “reliably gives rise to cheating, parasitism, deception, and manipulation, even in very simple systems.” This echoes Dawkins’ “selfish gene,” in which evolutionary forces result in genes, essentially, doing whatever they can to maximize their chances of passing into the next generation, without consideration for the interests of the whole organism, the population, the species, or the ecosystem. That is how evolution works – it cannot really see the bigger picture; the selective feedback loop considers only survival and reproduction. There is still ongoing debate among evolutionary biologists about the extent to which selective pressures can operate at any level other than the individual creature. Dawkins argued selection is better understood at the gene level, which is why a parent, for example, would sacrifice themselves for their child – they may die, but the genes live on through their children.
In any case – this same “selfish” principle, when applied to AI, could lead to unpredictable and extremely bad behavior on the part of the AI. They too would not really see or understand the big picture, and would simply maximize whatever parameters they were given. Systems capable of independent evolution are likely to find unpredicted (perhaps unpredictable) solutions to problems, ones that might be anathema to human interests. Again, we are already seeing this in current AI systems (lying, cheating), but this phenomenon would be much greater with evolving systems.
One significant problem with evolving AI is that it would essentially be impossible to control. Any controls we put in place would simply become a selective pressure, with evolving AI systems finding creative ways around the controls. This would be exactly like the evolution of antibiotic resistance in bacteria. In fact, it could be a lot worse. Natural systems essentially have to wait for a fortuitous mutation to occur. The reason bacteria evolve resistance so quickly is that there are so many of them and their life cycle is so short. The opportunities for such mutations are therefore enormous. The same would be true of an AI system that could test billions of possibilities in moments. But also, AI systems do not have to wait for the right mutation to pop up – they can create it themselves. They can explore new possibilities, direct the course of their own evolution, and in fact evolve their ability to do so. They can learn how to balance randomness against directed changes, and learn which patterns predict successful evolution. If something doesn’t work, they can try something else. They could pass on acquired characteristics. Such systems would not only be evolutionary, they could be super-evolutionary.
These types of processes can function at multiple levels, not just the code itself. For example, programmers are already using evolutionary methods to evolve prompts for AI systems. Prompts themselves affect the behavior of AI, and when engineered in a sophisticated way can significantly improve an AI’s ability.
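Prompt evolution is the same loop as the earlier sketch, just one level up: the “genome” is the prompt text, mutation is rewording, and selection keeps whichever prompts score best on some benchmark. Here is a schematic sketch, where score_prompt() is a toy stand-in for a real evaluation harness that would actually run the prompt against a model and a test set.

```python
import random

FRAGMENT_POOL = [
    "Think step by step.",
    "Answer concisely.",
    "Cite your sources.",
    "Double-check your arithmetic.",
    "Use a formal tone.",
]

def score_prompt(fragments: list[str]) -> float:
    # Hypothetical stand-in: a real system would assemble the prompt, run it
    # against a benchmark of tasks, and return accuracy. Here we just reward
    # two fragments we pretend are helpful, and penalize prompt length.
    helpful = {"Think step by step.", "Cite your sources."}
    return sum(1.0 for f in set(fragments) if f in helpful) - 0.1 * len(fragments)

def mutate(fragments: list[str]) -> list[str]:
    # Random variation: add, drop, or swap one instruction fragment.
    new = list(fragments)
    op = random.choice(["add", "drop", "swap"])
    if op == "add" or not new:
        new.append(random.choice(FRAGMENT_POOL))
    elif op == "drop":
        new.pop(random.randrange(len(new)))
    else:
        new[random.randrange(len(new))] = random.choice(FRAGMENT_POOL)
    return new

def evolve_prompt(generations: int = 50, pop_size: int = 20) -> list[str]:
    population = [[random.choice(FRAGMENT_POOL)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score_prompt, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=score_prompt)

print(" ".join(evolve_prompt()))
```

The point is that nothing in the loop cares what a prompt “means” – it only cares what scores well, which is exactly the blindness to the bigger picture described above.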
The outcome of such systems would be essentially impossible to predict. There would be emergent behaviors that may even be hard to notice, or fully understand. The most predictable thing about such systems is that they will be “selfish”, because that seems to be inherent in evolving systems themselves. The end result is the creation of AI systems that are prone to cheating, lying, parasitism, and manipulation, that we cannot understand or control. If we make such systems powerful enough and give them enough resources, it seems likely that they will eventually become more intelligent (at least in some ways – even short of true sentience) than humans.
The authors also recognize that such systems would be incredibly powerful – and therefore they are coming, and they can produce useful products. We just have to do it wisely. For example, any such evolutionary AI should be run entirely in a sandbox, isolated from the outside world. It has to be truly isolated, so that it cannot find a way out of the sandbox. Once the result of such an evolutionary AI is sufficiently tested and understood, it can be released. But they warn against running evolutionary systems out in the world where their behavior cannot be controlled. This makes sense, but I wonder if the sandbox method is sufficient. If these systems are prone to deception and manipulation, might one such system trick its users into thinking it is safe, until it is released into the world? That sounds like the plot to a great sci-fi dystopian horror. We may be living through act I of such a horror story right now.
One final word – I get that there is a lot of AI hype out there. This is almost always the case with any new technology that is sufficiently disruptive or game-changing. The existence of hype is a given – it does not mean, however, that the technology is not truly disruptive. It often means that it will just take longer than the hype indicates, but in the long run the hype will not only be realized but exceeded. I do not buy the “AI is all hype” line, nor do I buy the “fund me” propaganda or the blithe reassurances of the tech bros. The truth is somewhere in the middle. What I mostly listen to are reasonable experts who are giving sober warnings, like the authors of the current paper. This technology is genuinely very powerful. That power needs to be respected, understood, and properly regulated. This requires anticipating what can potentially go wrong, and that is what this paper does. This is not a prediction – it is laying out potential worst-case scenarios so that we do not blindly walk into them.
The post Evolving AI first appeared on NeuroLogica Blog.
It is long past time the US eliminated gerrymandering – the drawing of district lines specifically for the purpose of favoring one political party – across the board. This requires either a 50-state agreement or action at the federal level. This has been a problem since near the beginning of our democracy, and seems to be getting worse. We are now in the middle of a mid-decade tit-for-tat rash of gerrymandering that is extremely anti-democratic, so it’s a good time to raise this as an issue voters should definitely understand and prioritize.
As a quick aside – this is not a “political” blog, which does not mean that I never discuss political issues or topics with a political dimension. It partly means that I try my best to be non-partisan, and to avoid purely political value-judgements. I recognize this is an impossible ideal – we all have our biases and perspectives that color our thinking on topics in subtle ways. But we can try. Also, this is not strictly a science blog – it covers science, critical thinking, and media savvy, which are part of what we call scientific skepticism. Recently I started a video podcast, Political Reality, with co-host Andrea Jones Roy, who is a political scientist, for the purpose of applying scientific skepticism to political topics. This is also not a partisan show – it is part civics lesson and part fact-checking. With that in mind, I thought I would write about what science and critical thinking have to say about gerrymandering, given that this is a topic in the news recently, although not as much as I think it should be. We also covered this topic on Political Reality.
The term gerrymander dates back to 1812, when Massachusetts Governor Elbridge Gerry redistricted his state’s representative districts in order to favor his party, the Democratic-Republicans. One of the districts looked like a salamander, leading the Boston Gazette to quip that it was really a “Gerry-mander,” and the name stuck. (Ironically, the two parts of that term, gerry and mander, both kinda sound like they mean “rig,” but the word has nothing to do with that.) Since then all political parties have used gerrymandering to gain unfair advantage. This stems from some features of US politics.
First, we have single-representative districts in a winner-take-all system. Senators are elected state-wide, so gerrymandering is not an issue there. Many countries have multi-representative districts, with representatives apportioned in proportion to the votes – if your party wins 40% of the votes, you get 40% of the representatives. This, by the way, is also part of why we have such a dominantly two-party system – you need to earn a plurality of votes in order to have any representation. A party representing 10% of voters, without a local power base, would have zero representation. Districting, in a fair world, would be designed to share power roughly according to the population. In a state that is 60% party A and 40% party B, it seems intuitively fair that party A, on average, should net about 60% of the representatives and party B 40%. Also, districts can be drawn to keep people with similar demographic interests together enough to have their interests represented. This would be partly geographic, but also partly urban vs rural, cultural, and racial.
Gerrymandering happens when one party controls the process of redistricting, usually because they control the state legislature. In our hypothetical 60/40 state, with let’s say 10 representatives, you could draw districts so that all 10 are 60/40, meaning party A would likely win all 10 representatives. You could also use redistricting to specifically disenfranchise specific demographics of voters. With modern data and computers you could theoretically do this with “surgical precision” (as one judge put it).
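The arithmetic is easy to see with a toy map. Below, the same hypothetical 60/40 state with 10 districts of 1,000 voters each (the vote counts are invented for illustration) is drawn two ways: a “cracked” map where every district mirrors the 60/40 statewide split, so party A wins every seat, and a map that keeps party B’s communities together, which lands on the roughly proportional 6–4 outcome.

```python
def seats(districts):
    # Each district is (party_A_votes, party_B_votes); the plurality winner takes the seat.
    a_seats = sum(1 for a, b in districts if a > b)
    return a_seats, len(districts) - a_seats

# Statewide: 6,000 party A voters, 4,000 party B voters (60/40), 10 districts of 1,000.
cracked   = [(600, 400)] * 10                      # B's voters spread thin: B wins nothing
community = [(150, 850)] * 4 + [(900, 100)] * 6    # B's voters kept together in 4 districts

print("Cracked map   (A seats, B seats):", seats(cracked))    # -> (10, 0)
print("Community map (A seats, B seats):", seats(community))  # -> (6, 4)
```

The statewide vote is identical in both maps; only the lines change – which is the whole point.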
Partisan gerrymandering causes several problems for democracy. It is often referred to as politicians choosing their voters, rather than voters choosing their politicians, and this is apt. It makes districts less competitive, and often non-competitive, which reduces voter choice. This shifts the real election battle to the primary, which tends to favor more extreme partisan candidates. There is then no incentive to appeal to the middle in the general election because the outcome of that election is all but predetermined. So gerrymandering disenfranchises voters, reduces voter choice, and favors more extreme partisan politicians. This results in greater political polarization among our politicians, which causes dysfunction in Congress. How do we stop this?
The 2019 SCOTUS decision in Rucho vs Common Cause determined that federal courts have no role to play in deciding questions of partisan gerrymandering, which should be left entirely to the states. This is a deep issue unto itself – in our federalist system, what rights do Congress and the federal courts have in controlling how the states manage elections? Under Rucho vs Common Cause, however, Congress still has the right to pass laws to regulate redistricting. So it could be as simple as passing an anti-gerrymandering law. This would be ideal, rather than dealing with this state-by-state, which hasn’t worked. We are seeing what happens when this is left to the states. Some hold to principles, and leave redistricting in the hands of non-partisan committees, or some other reasonably fair process. But many states use their control to unfairly gerrymander their state, which then leads other states to do the same in retaliation. The best solution would therefore involve all 50 states at once.
Congress, however, has failed to pass anti-gerrymandering laws, most recently in 2025. This is typically blamed on political polarization, but also on the fact that many congressmen benefit from gerrymandering, on both sides, and would not want to see their favorable district suddenly become competitive. About 85% of House seats are not competitive (even less after the recent round of gerrymandering), so that is most representatives. It is likely that only extreme pressure from voters will break this logjam and get us the anti-gerrymandering law we deserve. In fact, I would prefer a constitutional amendment. This is a higher bar to cross, but that’s the point – it would also be far more difficult to undo.
Gerrymandering makes America less democratic, it reduces voter choice, disenfranchises some voters, and increases political extremism and polarization. When asked, 70% of voters say that gerrymandering is bad and we should do something to eliminate it. However, those same voters seem to be OK with it when it is done to the advantage of their own party, justifying it by saying it is necessary because the other side does it. This is another reason why action at the federal level is needed, because that would affect everyone all at once. This is not going to happen, however, unless it comes from the bottom up. Voters need to take control of their own voting rights.
The post We Need to Ditch Gerrymandering first appeared on NeuroLogica Blog.
It’s an iconic image – a giant cephalopod with its tentacles wrapped around a sailing ship, tearing it apart as the crew panic. Eventually it drags the splintered remains down into the deep. In reality, the largest living octopus is the Giant Pacific octopus (Enteroctopus dofleini), averaging about 16 feet long; an exceptionally large specimen, about 30 feet long and weighing 600 pounds, has been found. The largest squid is the Colossal Squid (Mesonychoteuthis hamiltoni), reaching roughly 1,100 pounds (about 500 kg) and lengths up to 46 feet (14 m). That’s huge – but it’s no Kraken.
What about in the past? Everything was bigger in the past, right? That’s obviously a trope, but there is some truth to it, in that there have been ages of gigantism in the evolutionary past. In some periods and locations there are rich resources allowing for the evolution of larger body size, which comes with a number of survival advantages. This can set off an arms race of size, with prey becoming larger to avoid predation, and predators becoming larger to hunt bigger prey. The age of the dinosaurs is the most iconic example of this. But that, of course, does not mean that all lineages were necessarily larger in the past. Whales are a good example – the largest whales (and animals) to have ever lived are extant. So what about cephalopods? Are the largest ones living now, like with whales, or were there even larger ones in the past?
A new study examines the fossil remains of 12 giant octopuses that lived 100-72 million years ago. These were discovered and examined through grinding digital mining techniques at Hokkaido University in Japan. This method grinds very thin (25-50 micrometers) layers from a rock specimen, then takes a high resolution full color image of each layer. This method completely destroys the specimen, but results in a high resolution 3D image of any fossils within the rock. It uses AI models to reconstruct the fossils. The technique is used in cases where the fossils are too soft to X-ray (they are invisible to X-rays), cannot be chemically separated from the surrounding rock, and are too fragile for ordinary extraction. All of these are true for the soft beaks of octopuses.
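Conceptually, the digital part of the reconstruction is straightforward: stack thousands of aligned slice images into a 3D volume, then segment out the fossil. Here is a minimal sketch of that stacking idea, assuming a hypothetical folder of already-registered slice images; the real pipeline’s file formats, alignment steps, and AI segmentation models are far more involved than the crude brightness threshold used here.

```python
import glob
import numpy as np
from PIL import Image

def build_volume(slice_dir: str) -> np.ndarray:
    """Stack sequential ground-surface images into a 3D array (z, y, x)."""
    paths = sorted(glob.glob(f"{slice_dir}/*.png"))  # hypothetical, pre-aligned slice files
    slices = [np.asarray(Image.open(p).convert("L")) for p in paths]
    return np.stack(slices, axis=0)  # assumes every image has the same dimensions

def rough_fossil_mask(volume: np.ndarray, threshold: int = 180) -> np.ndarray:
    # Crude stand-in for the AI segmentation step: flag voxels whose brightness
    # differs from the surrounding rock matrix.
    return volume > threshold

if __name__ == "__main__":
    volume = build_volume("slices")   # e.g., thousands of 25-50 micrometer layers
    mask = rough_fossil_mask(volume)
    print("Volume shape (z, y, x):", volume.shape)
    print("Voxels flagged as possible fossil:", int(mask.sum()))
```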
Cephalopods are soft-bodied invertebrates, and so they rarely fossilize well. However, they do have chitinous jaws, or beaks, that they use for eating. These are like the exoskeletons of insects or shellfish, but with some structural differences. Crustacean exoskeletons are mineralized to make them hard, so they serve well as armor. The octopus jaws are not mineralized but rather are reinforced with specialized proteins. The edges are hard, forming a cutting surface, and the material becomes less hard but tougher as you move away from the edge. This way the jaws don’t crack under strain. They evolved to be predatory crushing instruments. But they are also too soft for traditional fossil extraction methods, which is why the new technique was needed.
What did the paleontologists learn from examining these new specimens? They were able to infer the size of the creatures, which they estimate were up to 19 meters long – that is enormous. OK, it’s not quite Kraken size, but we are getting close. The wear patterns on the jaws also indicate that they were used to crush bones. This means that these cephalopods (Vampyronassa rhodanica) were predators, and given their size they may even have been top predators. That is an incredible claim, given that they shared the Cretaceous oceans with plesiosaurs and mosasaurs. Mosasaurs were giant reptilian (but not dinosaur) sea-dwelling predators up to 18 meters long. Could one of these invertebrate giants have taken on a mosasaur? Probably not, unless the mosasaur was a baby.
As a point of clarification – the mosasaur was an apex predator, which means that it had no natural predators. The researchers are arguing that Vampyronassa rhodanica was a top predator, which means it occupied the top tier of the food chain but could also have been prey itself. In a cage match between a mosasaur and a Vampyronassa rhodanica, my money is on the mosasaur.
But still, this means that there were cephalopods around 100 million years ago that were among the top predators of the ocean, competing with giant sharks and aquatic reptiles. This is the first invertebrate to join this group of top predators.
The researchers point out one more detail from the fossils – they had an asymmetric wear pattern, meaning that one side was significantly more worn than the other. This may not sound like much, but it suggests they had a preference for one side over the other. This likely reflects what is known as lateralization – functional differences between the left and right sides of the central nervous system. This phenomenon tends to be seen only in species that have fairly complex central nervous systems, and the authors put this forward as evidence of such complexity in this species. We know that modern cephalopods are highly intelligent, and this evidence suggests that these early cephalopods may have already evolved CNS sophistication. But this is, overall, a rather weak line of inference. Lateralization is not an iron-clad sign of intelligence, and is context dependent, but in this case it is a reasonable inference given that we know cephalopods eventually did evolve in this direction.
Overall this is a pretty interesting study, using a new technique to get a window into ancient cephalopods that was not previously possible. As a result we have gained new insight into this branch of the tree of life. I do have mixed feelings about the new technique, grinding digital mining, because it is completely destructive. It does seem like these fossils would otherwise not be usable, however. But – we do not know if we will eventually develop a non-destructive technique to examine such fossils, maybe even ones that can yield more or better information. The researchers and the field are aware of these tradeoffs. Destructive techniques are therefore used sparingly and only when the scientific information gained outweighs the loss of physical evidence, which they thought was justified in this case. Still, I hope this technique becomes obsolete quickly.
The post Release the Kraken first appeared on NeuroLogica Blog.
This interesting case was reported in the literature in 2007. For some reason it was then widely published in the mainstream media in 2015. Now it is making the rounds again on social media to support a false narrative about brain function. The story is of a 20-year-old German woman who suffered a traumatic brain injury in a car accident. Over the next several months she started to slowly lose her vision – which is an important detail: it was not a sudden loss as a result of the physical trauma. After evaluation she was diagnosed with psychogenic blindness, meaning that it was not due to any physical damage to her visual system but was rather due to psychological stress. This patient also had what is now called dissociative identity disorder (formerly multiple personality disorder), with 10 distinct personalities.
What makes the case even more interesting is that, with therapy, some of her personalities regained vision while others did not. Eventually eight of her ten personalities regained vision. This presented a rare, perhaps unique, opportunity to study the underlying neuroanatomical correlates of psychogenic blindness – what is happening in the brain when someone loses the ability for conscious sight despite their visual system working?
Psychogenic or functional neurological disorders are a complex and poorly understood phenomenon in which emotional stress and trauma present as physical neurological symptoms. Common presentations include paralysis, language difficulty, sensory loss, and blindness. The diagnosis is mostly one of exclusion, which means sufficient examination and study is done to rule out any demonstrable damage, lesion, or other physical cause. This does not mean the patient is faking (technically called malingering) – that is a distinct condition that can usually be distinguished from a functional disorder. Usually patients with a functional disorder are very distressed by their symptoms and want further examination to find out what is wrong. In addition to simply ruling out physical causes, the diagnosis of a functional disorder can be supported by some positive evidence from the neurological exam. With psychogenic blindness, for example, patients will have normal pupillary responses (assuming no separate baseline deficit), and will have a normal reaction to optokinetic testing. This involves moving vertical black and white stripes horizontally across their vision, which causes an involuntary response of tracking the stripes with eye movements. If this happens then we know that visual information is getting in and making its way to the visual cortex.
With functional neurological disorders, what we do not know is which specific pathways in the brain are causing the symptoms. The hypothesis is that higher brain functions are somehow interfering with or inhibiting more basic functions. Those higher brain functions, the ones responsible for our subjective awareness and consciousness, are extremely complex. There is a lot of emergent behavior there, where we experience the net effect of many processes in the brain. Also, the more we investigate brain function with the latest tools, the more we discover that communication in the brain does not just flow from basic inputs (like vision) up to the higher conscious centers of the brain, but also back down, meaning that our higher brain centers can influence the basic processing of information. When you think you hear something, your brain makes it sound more like what you think you are hearing. When you see a shape that your brain matches to a giraffe, your cortex then sends signals back down the chain to construct the image to make it look even more like a giraffe. This is critical for pulling signals out of noise and for our ability to make sense of all the information coming in, but it also tends to generate illusions.
We also have to note that there is a lot of neurodiversity when it comes to brain anatomy and function – some people literally have pathways in their brain that most other people do not, or the relative robustness of specific pathways may differ wildly. Some people, therefore, may simply have neurological abilities that others lack. This case is very unusual – the person in question is neurologically capable of having a dramatic functional disorder, which may not be true of everyone. She also has dissociative disorder, which again is extremely rare. It would not be reasonable to assume she is neurotypical, and that we can extrapolate from her to the general population.
With those caveats in mind, the doctors studying her did something interesting – they performed a visual evoked potential (VEP) on her while she was exhibiting a personality that was blind and again while she was exhibiting a personality that could see. What a rare opportunity to compare the two states. The VEP is essentially a test in which a flash of light is given to the patient while electrodes record the response from her visual cortex. There is typically a delay of about 100 ms. If this is significantly slow or absent, that could indicate a lesion in the visual pathway. This was a common test to evaluate patients with MS, for example, but is less common now due to more advanced MRI scans and other methods. They found that the VEP was present and normal while she expressed a personality that could see, but was absent when she had a personality with persistent psychogenic blindness. That is a rather incredible result, indicating that there is some process in her brain that is actually suppressing her visual system. To be clear, there is no conscious way to do this (again, at least not known, but I guess this could be the way in which she is very neuroatypical). So it seems that her psychogenic blindness was due to a reversible inhibition of her visual pathway, in a way that would block the VEP.
This was exactly what the researchers were looking for, trying to determine at which neurological level the psychogenic blindness originates, at least in this subject. This also means that VEPs cannot be used to reliably distinguish organic blindness from psychogenic blindness. I really want to know what her optokinetic testing found, but could not find this information in the report. However – a 2001 study of 72 subjects with psychogenic blindness found that every one of them had normal VEPs. VEPs are still used to assess these patients – a normal VEP does suggest a nonorganic cause of blindness; however, it is recognized that an abnormal VEP does not rule out a psychogenic cause.
As interesting as all this is, this case is being used by some promoters of a particular type of dualism, specifically the notion that the brain is a receiver or filter for an external consciousness. The case is being misinterpreted as meaning that “experience determines neurological function” rather than the other way around. This, of course, is not true, for the reasons I outlined above. Experience is in the brain, and this just represents the brain affecting itself. I always find it sad and frustrating when truly interesting science is missed because it is being misused to promote pseudoscience or magic.
The post A Unique Case of Psychogenic Blindness and Multiple Personality first appeared on NeuroLogica Blog.
The latest social media buzz involves a list of scientists who have either died or gone missing over the last four years, with the implication that there must be something nefarious going on. The FBI is now investigating these cases to see if there is any connection, and the White House appears to be taking the case seriously. James Comer of the House Oversight Committee said: “It does appear that there’s a high possibility that something sinister is taking place here. It’s very unlikely that this is a coincidence. Congress is very concerned about this. Our committee is making this one of our priorities now because we view this as a national security threat.”
My initial reaction to stories like this is – these kinds of things crop up all the time and they always turn out to be just coincidences, or not even that. Sometimes they are just stories fabricated out of increasingly distorted information, almost always to serve some conspiracy narrative. So my reaction is the same as if someone claims to have seen Bigfoot or an alien spacecraft – initial skepticism is fully warranted, but sure, I am happy to take an objective look. This may be a rare case when there is a genuine phenomenon going on, and in any case this is what activist skeptics do – take a deep dive when these stories emerge.
Let’s first review the basic facts as presented. Here are the 11 scientists currently on the list:
Amy Eskridge—Scientist reportedly researching anti-gravity technology. Died: 2022.
Michael David Hicks—Research scientist at NASA’s Jet Propulsion Laboratory; worked on the DART Project and Deep Space 1 mission. Died: July 2023.
Frank Maiwald—Principal researcher at NASA’s Jet Propulsion Laboratory. Died: July 2024.
Anthony Chavez—Former employee at Los Alamos National Laboratory. Missing since: May 2025.
Monica Reza—Director of Materials Processing at NASA’s Jet Propulsion Laboratory. Missing since: June 2025.
Melissa Casias—Administrative worker at Los Alamos National Laboratory. Missing since: June 2025.
Steven Garcia—Government contractor at a New Mexico facility for the Kansas City National Security Campus. Missing since: August 2025.
Nuno Loureiro—Director of MIT’s Plasma Science and Fusion Center. Died: December 2025.
Carl Grillmair—Caltech astrophysicist who worked on NASA’s NEOWISE and NEO Surveyor missions. Died: February 2026.
William “Neil” McCasland—Retired U.S. Air Force major general. Missing since: February 27, 2026.
Jason Thomas—Pharmaceutical researcher. Found dead: March 2026.
From a scientific (specifically epidemiological) perspective, what we have here is called an apparent cluster. We encounter these in medicine all the time. I remember when I was a neurology resident in the 1990s there was an apparent cluster of cases of CJD (Creutzfeldt-Jakob disease, a human prion disease related to mad cow disease) in New England where I was working (more specifically the Naugatuck Valley of Connecticut). I had a few cases myself, and it definitely seemed to be more than we would expect by chance. It is the job of the CDC to investigate all such apparent clusters and first determine if they are real. This is mostly a statistical analysis – is this just the random clumping that we expect in data, or are these cases truly outside the statistical noise? It was determined that the CJD cluster was not real – just statistical noise.
With a case like the dead or missing scientists, we can do a similar type of analysis. Is this really beyond what we would expect by chance? Remember that people are really good at pattern recognition, to the point that we see patterns that are not really there (a recognized phenomenon known as apophenia). We also feed these illusory patterns with other cognitive biases, such as confirmation bias, subjective validation, anomaly hunting, and post-hoc reasoning. In the case of apparent clusters like this, what that means is that people might decide after they see a potential data point that it is significant, rather than determining ahead of time what constitutes a “hit”. They also may stretch any definitions they are using to cast a deceptively wide net. Once an apparent cluster is noticed then confirmation bias kicks in. In today’s world this means that an army of social media “sleuths” can go hunting for any apparent cases that fit the cluster – again, casting a very wide net.
Without getting into the individual cases yet, the numbers do not seem impressive. Just eleven missing or dead over four years – but what’s the baseline? Well, there are about 2 million researchers in the US. There are about 25 deaths per million people per day in the US – that’s about 50 scientists dying each day, or 73,000 scientists over a four-year period. Finding 11 that have some vague connection does not seem unusual to me. I would be amazed if you couldn’t find far more convincing clusters than this one. When we look at the list this base-rate problem gets even worse. On the list is a retired US Air Force major general – not a scientist. There is also a government contractor, and an “employee” – the net widens. Also, we are including both deaths and people who have gone missing.
I should point out I am using numbers for the general population, which may not match the rate for scientists. However, since the list included non-scientists and people who have retired, the numbers are reasonable, at least to get a general idea of probability. But I also looked at CDC data – about 800,000 people in the US between 25 and 65 die each year, or 3,200,000 over a four-year period. About 6% of the population work in the science field, which would be 192,000 deaths, or half that (96,000) if you use a narrower definition of 3% – so close to the 73,000 figure I calculated the other way.
We can also look at the institutions – JPL has 4,500 employees. If we crunch the numbers, we would expect about 41 JPL deaths each year, or 164 over four years. Los Alamos National Laboratory has 18,000 employees, which works out to about 164 deaths per year, or 657 over four years. Even if you want to be super conservative – even one tenth of these deaths at JPL and LANL would still be 82 deaths over four years – so again, the five on that list are not impressive. Given these numbers I think it is reasonable to conclude this is not a real cluster. The cases on the list fall below what random chance alone would produce by at least two orders of magnitude.
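Those back-of-the-envelope numbers are easy to check. Here is the same arithmetic in a short script, plus a Poisson estimate of how likely it is to see at least five deaths at JPL and LANL combined even under the deliberately conservative assumption of one tenth the general death rate. The rates are the rough approximations used above, not official statistics.

```python
import math

DEATH_RATE_PER_YEAR = 25e-6 * 365   # ~25 deaths per million people per day
YEARS = 4

def expected_deaths(population: int, scale: float = 1.0) -> float:
    """Expected deaths over the period; scale < 1 makes the estimate more conservative."""
    return population * DEATH_RATE_PER_YEAR * YEARS * scale

def prob_at_least(k: int, mean: float) -> float:
    # Poisson P(X >= k): the chance of seeing k or more events given an expected mean
    return 1 - sum(math.exp(-mean) * mean**i / math.factorial(i) for i in range(k))

print("US researchers (~2 million):", round(expected_deaths(2_000_000)))  # ~73,000
print("JPL (~4,500 employees):     ", round(expected_deaths(4_500)))      # ~164
print("LANL (~18,000 employees):   ", round(expected_deaths(18_000)))     # ~657

# Even at one tenth the general death rate, five or more deaths at JPL and LANL
# combined over four years is essentially a statistical certainty:
conservative_mean = expected_deaths(4_500 + 18_000, scale=0.1)
print(f"P(>=5 deaths | mean {conservative_mean:.0f}): {prob_at_least(5, conservative_mean):.6f}")
```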
The other approach to questions like this is to investigate the individual cases. The CDC, for example, would not only look at the numbers in a potential disease cluster, but would also review individual cases. If individuals with a foodborne illness all ate at the same restaurant, that would be significant, even if the overall numbers were not that impressive. So I don’t have a problem with the FBI doing some basic investigation to see if there is anything suspicious going on, but I would be really surprised if there were. It is not inherently implausible that one or more of these people were targeted because of their work or high security clearance, but looking through the list there doesn’t appear to be a real connection there.
Eskridge, for example, doesn’t seem to have anything connecting her to anyone else on the list, except that her work was vaguely “sciencey.” She is on the list because she supported research into antigravity technology, but I don’t think it’s fair to say she was an antigravity researcher. She had a bachelor’s degree in biology and chemistry, no master’s or PhD. She has no published papers. She started the Institute for Exotic Science and had an interest in antigravity. This makes her more of a crank than anything else – given that it is extremely likely that antigravity is impossible (this goes way beyond this blog post, and perhaps I can do a deep dive on this later, but if you are interested just look it up). Until we have a theory of quantum gravity we have to keep the door slightly cracked open that maybe it’s not strictly impossible, but that is extremely unlikely. In any case, we don’t even have the beginning of a basic science to work from, and what we do have says it’s not possible. So unless you are a world-class theoretical physicist working specifically on uniting quantum mechanics and general relativity, you’re not worth killing if the goal is to prevent the emergence of antigravity technology.
Hicks worked on the DART project, the goal of which is to develop technology to deflect asteroids that might strike the Earth. Why is that connected to antigravity research? Why is that a threat to anyone? What is the connection to a pharmaceutical worker, a fusion researcher, or a materials scientist? Grillmair worked on the NEO telescope, which is a near Earth object scope, so there is a potential connection to DART, but not anyone else. The rest are mostly just administrators, workers, and employees, and one major general thrown in.
At first blush this seems to be a list of people put together by searching for anyone who has died or gone missing over the last few years with any vague connection to anything space related. I would be surprised if this turns into anything. I suspect that the FBI will do a preliminary investigation, find nothing, and the whole story will fade away. However, it will likely live on in the conspiracy subculture, morphing over time to make the details seem more impressive until there is a mostly false mythology about the dead scientists.
The post What’s With the Dead or Missing Scientists first appeared on NeuroLogica Blog.
Regeneration is one of the futuristic tropes of science-fiction, because it is both incredibly powerful and not theoretically impossible. Imagine the ability to regrow a lost limb, or simply to replace a diseased or worn out limb. There are about a million limb amputations worldwide every year, so it is a very common medical problem. What if we could regenerate organs? This would be a game-changer for medicine.
There are several approaches to addressing missing limbs or failing organs. One is the cyborg approach – make a mechanical version to replace the biological one. We are making progress here, with brain-machine interfaces, mechanical hearts, and other advances. Or you could transplant the body part from another person, or even from an animal that has been genetically modified to be compatible. You could also regrow the missing or failing body part from the intended recipient’s own tissues and then transplant that. Or you could inject stem cells programmed to regrow the needed part inside the recipient. All of these options are active research programs that have shown incredible promise, but they are also years or even decades away, especially in their mature form.
Let’s now add one more technology to the list – genetic therapy that triggers natural regeneration, meaning from the person’s own tissue. This has long been a target of potential therapy, inspired by the fact that there are many animals that can already naturally do this. Most extreme is the axolotl (a type of salamander that for some reason has become very popular with the younger generation), which can regenerate just about any of its body parts. They form a blastema of pluripotent stem cells at the site of injury that can regrow a missing limb, heart, spinal cord, parts of the brain, etc. within weeks. There are also zebrafish, which can regrow their tail fins. Mice can also regrow missing digits, which is important because it shows that regeneration can happen even within the mammal clade. You don’t have to be a salamander.
The amazing regenerating ability of the axolotl was first documented in 1768. Molecular and genetic studies of the regeneration process go back to the end of the 20th century. But now, with modern genetic tools like CRISPR, this research is really taking off. A recent study looked for genetic similarities in the regeneration abilities of axolotls, zebrafish, and mice. If these three animals share the same genetic basis for their regeneration, then this would suggest that these genetic abilities are highly conserved, all the way from fish to mammals. This would be good for the prospects of regeneration in humans, because we would likely share some of this same highly conserved genetic infrastructure. As you may have guessed, these researchers hit pay dirt (which is why I am writing about this today). They found that all three share the SP6 and SP8 transcription factors. They confirmed the relevance of these factors by making knockout mice missing SP6 and SP8, which impaired their ability to regenerate lost digits. Knocking out sp8 in the axolotl also impaired its ability to regenerate – so the same factor seems critical for both species.
They then took a factor from the zebrafish which has been shown to enhance regeneration – FGF8, whose gene is normally turned on by SP8. Replacing the missing FGF8 then partially restored the regeneration ability of mice missing SP6 and SP8.
Do humans have SP6 and SP8 genes? Yes, we do. Again, these are highly conserved genes with basic biological functions. They are part of the Specificity Protein family of genes, which are involved in regulating the development of limbs, teeth, skin, and even organs. That is essentially how development works – there is a suite of genes with all the information to make, for example, a human arm (or a bird’s wing, or the antennae of a moth). This suite of genes is turned on by a regulatory gene that essentially says – build an arm here. Regeneration in a creature like the axolotl essentially involves going back to this developmental stage, creating a blob of stem cells, and then saying – build a limb here.
Obviously this entire process is more complicated than just tweaking one gene or replacing one missing factor. It is very complex. Humans form scar tissue to repair a wound; they do not form a blastema. This is partly driven by differences in the availability and sensing of oxygen in the tissues. Further, scar formation is driven by the immune reaction, involving macrophages, which are actively suppressed in salamanders. And finally, the reason we do not already have the ability for unlimited regeneration is that there is a tradeoff between regenerative ability and cancer suppression. It is likely that our ancestors sacrificed regenerative abilities for cancer suppression mechanisms – this was the best evolutionary tradeoff. In other words – we simply went down a different evolutionary path than the axolotl. Gaining the ability to regenerate limbs or organs would therefore probably involve a complex coordination of multiple factors, while simultaneously preventing cancer formation.
Interestingly, at present there is nothing we know that would make it theoretically impossible to have full regeneration in humans. However, it is extremely complex. It is the perfect sci-fi technology – possible, but likely only in the distant future. I suspect it will take decades to perfect this technology.
The post The Prospect of Regenerating Limbs first appeared on NeuroLogica Blog.
The recent rapid advance in the capabilities of artificial intelligence (AI) applications, I think, qualifies as a disruptive technology. The term “disruptive technology” was popularized in 1997 by Clayton M. Christensen. To summarize, a disruptive technology is “an innovation that fundamentally alters the way industries operate, businesses function, or consumers behave, often rendering existing technologies, products, or services obsolete.” AI is potentially so powerful, and changing so quickly, that it is challenging to regulate it optimally. We are caught in a classic dilemma – we do not want to hamper our own competitiveness in a critical new technology, but we also don’t want to unwittingly create new vulnerabilities or unintended negative consequences. For now we seem to be erring on the side of not hampering competitiveness, which basically places us at the tender mercies of the tech bros.
Which is partly why I found the conflict between Anthropic and the Department of Defense (still the legal name) so fascinating. In short, Anthropic’s powerful AI application, Claude, has at least two significant internal “red lines,” or guardrails – it cannot be used for massive domestic surveillance, and it cannot be used for final military targeting without a human in the loop. Anthropic CEO Dario Amodei has not backed down on this – he says that the first restriction, on domestic surveillance, is simply a matter of ethics. The second restriction, however, is mainly a matter of quality control – their system is still vulnerable to hallucinations and is not reliable enough to count on for final targeting decisions. Hegseth has criticized Amodei’s concerns as “woke” and a critical vulnerability for the US military. More charitably, he says essentially that the US military is using the application lawfully, and should not be restricted in any lawful use of the software. Others have also stated that in an emergency they have to know the software will do whatever they ask of it.
This conflict has many deep implications, and is beyond what I intend for this blog post. What I want to focus on is the fact that an AI application is creating this ethical dilemma, and forcing us to ask – who should control such awesome power, the CEO of a tech company or the Federal government? It seems that we are facing or about to face many similar questions provoked by the disruptive nature of recent AI applications.
Anthropic, in fact, is at the center of another similar discussion, involving the security of the internet. They have a new application, Mythos, which is an AI coding app. Mythos is potentially disruptive in two ways. The first is more mundane, and certainly not unique to Mythos – it allows non-coders to do what is called “vibe coding”: giving an AI coder a natural-language description of the application you want, and having the AI coder build it. This is disruptive because it takes coding out of the limited hands of relatively few highly trained and skilled individuals and puts it in the hands of everybody. This can lead to the proliferation of code that has not gone through any rigorous safety testing for vulnerabilities.
But the feature of Mythos that has many experts (including those from Anthropic itself) very concerned is that the program turns out to be excellent at identifying security vulnerabilities in code. I mean – really good. It has found vulnerabilities that have been sitting there unnoticed for years, and it can reliably exploit them. When Anthropic realized how good their software was at essentially cracking software security, they had an “Oh, shit” moment. We are at an “inflection point.” Anthropic estimates they are 12-18 months ahead of the competition, so very soon similarly powerful software will proliferate. If we do not lock down critical software infrastructure by then, the internet could be screwed. Much of the internet and many applications run on core software that is open source, maintained by volunteers on shoestring budgets. Mythos has already cracked open some of these core bits of code.
Turning the internet, and essentially the software infrastructure that increasingly runs our world, into a cybersecurity nightmare is, I would imagine, not good for business. So Anthropic has given a preview version of Mythos to a consortium of 40 software companies, including their competitors, to basically give them a head start in finding and fixing any vulnerabilities in their software (which they are calling Project Glasswing). They are also dedicating some money to fund the project, especially for open source software. This all sounds great, and maybe this will fix the problem. Hopefully we will eventually see this as a Y2K situation, the disaster that never happened because we prevented it.
What this affair highlights is how the disruptive nature of AI is creating the potential for significant problems, if we do not stay ahead of it with rational regulation and quality control. It seems that Anthropic is trying to be an ethical and responsible corporate citizen, and that it recognizes the power of its products. Thank goodness for that – imagine if the same tech were in the hands of a less scrupulous or responsible company. It’s pretty easy to imagine. This is happening at a time when the Federal government not only has no apparent interest in regulating AI, but is also trying to prevent the states from doing so. And they are throwing a temper tantrum when they cannot use their new toys without restrictions.
Going forward we should not rely on the noblesse oblige of tech CEOs. We need to make sure that security and ethical restrictions are baked into any new applications. I am all for vibe coding, for example, but such apps need to have rigorous quality control, so we don’t fill the world with the coding equivalent of AI slop, creating a tsunami of vulnerabilities. Perhaps this consortium of tech companies will evolve into something bigger – an organization dedicated to safely and securely developing this technology. This means, of course, we need to get buy-in from China, which means we need international standards to regulate this tech. I think of it like nuclear weapons. AI is a very different kind of threat, but it is also a powerful technology that would benefit from international agreements so that we don’t accidentally destroy our civilization.
The post AI May Disrupt The Internet first appeared on NeuroLogica Blog.
Remember The Last Starfighter from 1984? In that movie a trailer-park kid with limited prospects spends his time on an arcade-style video game, Starfighter. He plays the game so much that he beats the final level, and it turns out he is the first person to ever do so. He is heavily criticized for spending so much time playing a game, which is seen as a sign of boredom and lack of ambition – a waste of time. The twist (42 year old spoiler incoming) is that the game was actually a test (the Excalibur test – a deliberate reference to King Arthur) to find a skilled pilot for an actual real-life starfighter. He goes on to save the galaxy from invasion.
The interesting premise of the movie is that playing a video game is not only a test of real-life skill, but can be used to train such skill. In 1984 this was kind of a new idea, and appealing to a generation of kids newly hooked on video games. Video games have been significantly mainstreamed over the last half century, but there is still a bit of a cultural stigma attached to them – they are seen as the realm of dorks and geeks, with inevitable jokes about how avid video gamers will “never get laid” (or something to that effect). Since the beginning of their popularity, parents have worried – with such worry being fed by a sensationalist media – that video games were going to “rot” their kids’ brains, turn them into losers who can never get a skilled job, and might even cause violent behavior. After every mass shooting, someone brings up violent video games.
But the evidence simply does not support these concerns. One big problem with the research is that it shows correlation only, not causation. Sure, people who play aggressive video games tend to be more aggressive, but that doesn’t mean the game is the cause. Further, there are many confounding factors, and more recent research shows that violence in the game is not the key feature. It has more to do with the level of difficulty and the resulting frustration that seems to raise aggression, not violence in the game. More competitive and difficult games tend to be more stimulating, regardless of the level of violence. The bottom line – after decades of research, systematic reviews conclude: “There is insufficient scientific evidence to support a causal link between violent video games and violent behavior.”
Now we seem to be going through the same cycle again, but this time with anxiety and depression. It is also not just video games being criticized, but social media and any screen time. And again there is evidence of some correlation, but without showing causation. It is very likely that people who feel socially isolated or depressed might seek out video games and social media as a distraction or to have some social connection. Taking away those outlets out of fear they are causing the symptoms can easily be counterproductive. A recent systematic review found:
“Scientific research investigating social media’s impact on adolescent mental health has failed to provide clarity. There is converging evidence for a small negative cross-sectional association between time spent on social media and well-being. However, longitudinal studies and those measuring social media use beyond time spent or mental health beyond general well-being show diverging results.”
In short, the evidence is weak and mixed, while better studies designed to control for likely confounding variables do not show any consistent effect. This does not mean there are no potential issues with excessive video-game use or social media use. It is one variable that we need to consider and carefully research, and there are likely some individuals in some contexts for whom it does exacerbate or cause problems. But are video games and social media the “one true cause” of all current adolescent ills, and basically responsible for the recent increase in mental health diagnoses? Probably not.
The current best inference is that video games and social media are filling a void of social support structures of various kinds, and that the solution is not to simply restrict or take away screens. Rather, we should be filling the void with more diverse support and activities.
On the flip side, there is evidence that video games and other interactions with digital technology increase some skills (just like in Starfighter). What we are seeing is not an atrophy of skills, but a shifting of skills from more analog to more digital activity. Since the industrial revolution it seems that each generation laments the fact that “these kids today” lack the skills that we older folks developed, while missing the fact that they are developing new skills for a new world. We may not get this new world they are creating, but they are not creating it for us. This is part of the reason it is difficult to predict the future use of technology, because we keep trying to imagine ourselves in this future. But we will not be in that future – new generations of people will, and they will be different in ways we cannot predict. To some extent, we have to trust that new generations will find their own way.
Meanwhile, it turns out that video games are a really good way to train certain skills. If anything, the technology is under-leveraged. Video gamers are better at endoscopic surgery, because certain kinds of games develop psychomotor skills like those used in this kind of surgery. Video games can improve more general cognitive skills as well: “Findings indicate that higher levels of videogaming proficiency are linked to improvements in visuospatial short-term and working memory, psychomotor speed, and attention.” Some of this data is correlational, but a lot of it is experimental, showing a causal effect with a dose-response.
But also, video games can train specific skills, not just improve cognitive function. They are great at keeping the level of difficulty just ahead of the user, advancing them at their own pace. You can also simulate situations that you cannot recreate in the physical world. The FAA is even trying to get in on the “Starfighter effect” – they are specifically recruiting video game players for jobs in air traffic control.
Video games definitely do not have the stigma they did when I was younger, but it is not gone completely, and many of the same instincts have migrated over to screen time in general and social media specifically. I do think we need to resist the temptation to simplistically blame the latest new technology our kids are using for whatever societal ills we are worried about. This does not mean we should not carefully consider and research the effects of new technology on society, especially to identify vulnerable individuals or potential for abuse. But don’t panic or overreact. Just taking away screens is likely to be counterproductive. It’s better to fill kids’ lives with diverse experiences and opportunities (which is a lot more work than just demonizing video games and screens). Also, we risk losing out on the potential benefits of new technologies. Video games can build cognitive ability and are great at training specific skills, and there are many potential upsides to social media.
The post Do You Have Video Game Skilz? first appeared on NeuroLogica Blog.
Last week I wrote about the possibilities of genetically engineering humans. The quickie version is this – we are already using genetic engineering (CRISPR) for somatic changes to treat diseases, and other applications are likely to follow. Engineering germline cells, which would get into the human gene pool, is legally and ethically fraught, but it’s hard to predict how this will play out. I have also written often about genetically engineering food. I think this is a great technology with many powerful applications, but it should be, and largely is, highly regulated to make sure that anything that gets into the human food chain is safe.
I haven’t written as much about genetically engineering pets, and this is likely to be the lowest hanging fruit. That is because pets are neither food nor a human medical intervention. But that does not mean they are not regulated – they are regulated in the US by the FDA and USDA. Genetic engineering is treated as an animal drug, and must be deemed safe for the animals being engineered. The USDA can also regulate engineered plants and animals to make sure they do not pose any risk to the environment, humans, or livestock. This makes sense. We would not want, for example, to allow a company to release a genetically engineered bee, pest, or predator into the environment without proper oversight.
Pets, as a category, are domesticated, are not intended to be used as food, and are not intended to be released into the wild. I say “intended” because pets can become food for predators, and they can escape or be released into the wild, and even become feral. But these contingencies are much easier to prevent than with food or wild plants or animals. For example, if you get a rescue pet, it has likely already been spayed or neutered. One easy way to reduce risk would be to make any GE pet sterile, which is likely what the company would want to do anyway to prevent violation of their patents through breeding. In short, it seems that reasonable regulatory hurdles should not be a major problem for any effort to commercialize GE pets.
Unsurprisingly there are companies already working on this. One company, the Los Angeles Project, is working on making rabbits that glow in the dark. This is actually pretty easy (I bought some glow-in-the-dark petunias last year), as we have already isolated genes for green fluorescent protein and put them in many types of plants and animals. Another company, Rejuvenate Bio, researches genetic treatments for chronic diseases in humans. This, of course, involves a lot of animal research, so they are also developing these treatments for pets, to increase their health and lifespan. Scoutbio is another company working on gene therapies for disease, but they are focusing on treatments for adult animals. There are also pet cloning companies, which is not the same thing, but there is a lot of overlap in this technology and it is not a big leap to start tweaking those embryos.
So where is all this likely to lead? First, I think GE pets will happen a lot faster than GE humans, because the ethical and therefore legal bar is likely to be a lot lower. What kinds of modifications are we likely to see? Some we will see simply because they are already possible, like the green fluorescent rabbits. We are doing it because we can. But as the tech evolves we could see pets with much longer lifespans. That raises an interesting question – how long would you want your dog or cat to live? Most people I talk to feel that 10-15 years for dogs and 15-20 years for cats is too short. I have owned many pets, and their brief lives always seem to go by too quickly. But at the other end of the spectrum I have also known people who own parrots, which is a lifelong commitment. Also, even though the loss of a pet can be heart-wrenching, you then get to experience a new pet with their own personality and go through the puppy phase again. I also wonder how difficult it would be to lose a beloved pet you owned for 30 years, say. How much harder would that be? There is a sweet spot in there somewhere, perhaps 20-30 years. In any case, it would be interesting to be able to choose the longevity of your pet. And of course, it would be great to reduce the many chronic illnesses that plague our pets.
One other difference between pets and humans is that we have already, through conventional breeding, significantly altered our pets, especially dogs. Just think of all the different dog breeds. Some of them, I would argue, are unethical, like making dog breeds that have difficulty breathing. I seriously think that the institutions that regulate purebred dogs should place a much higher priority on the overall health of any recognized breeds, and not formally recognize any breeds with inherent health problems. It may be too late for this, but that would happen in my perfect world. In fact, genetically engineering pets may improve their overall health and happiness. The compromises that come with breeding cute traits may not be necessary with the power of genetic engineering. We could engineer new traits into baseline healthy and outbred populations, and would not have to use severe genetic restriction to create these extreme breeds.
And of course genetic engineering could create pets that would not otherwise exist. Superficial traits, like eye color and coat pattern, should be easy. Do you want long hair, short hair, or wire hair? What color? Short or long tail, straight or curly? Floppy ears or pointy? Non-shedding and hypoallergenic are a must. It would also be possible to engineer their personality – easy to train, family friendly, never bites, etc. We are not far from the age of designer pets. We could also go outside the bounds of existing traits, to make exotic, even mythical-seeming, pets. This starts to get trickier the more ambitious we get, but is within the realm of possibility.
We could also use genetic engineering to domesticate species that would be difficult or impossible to turn into pets through breeding alone. Most people by now know about the Russian silver foxes bred to be friendly and tame. There is still some controversy about the research – how domesticated are they, and did they already have some of these traits before breeding? But regardless, they do not make good pets. They are difficult to train (they pee everywhere), are destructive, and are very high maintenance. But with some targeted genetic engineering, it would be easier to give them all the traits we love in dogs, for example. We could possibly do the same with raccoons and many other species – GE away their problematic traits and make them easy pets. This starts to get into trickier ethical territory, but I would at least argue that fully domesticating a population of wild animals through genetic engineering is ethically no different than doing it through breeding.
It seems very likely that all of this will happen eventually, with the main question being the timeline. Personally, I have no problem with it, and have to admit I would love an exotic pet – as long as it is properly regulated with the welfare of the animals being adequately considered. In fact, I would like to see a higher standard than currently exists for traditional animal breeding.
My final question, however, is what will eventually be more popular – GE pets or robotic pets. There are interesting arguments to be made for both, and perhaps people will have both, in different contexts and for different purposes. If you could have one or the other right now, in a mature form of the technology (say, 200 years from now), which would you pick? Maybe it won’t matter much because the technologies will both converge on your perfect pet.
The post Genetically Engineered Pets Are Coming first appeared on NeuroLogica Blog.
Are we getting close to the time when parents would have the option of genetically engineering their children at the embryo stage? If so, is this a good thing, a bad thing, or both? In order for this to happen such engineering would need to be technically, legally, and commercially viable. Let’s take these in order, and then discuss the potential implications.
The main reason this is even a topic for discussion is that genetic engineering is technically feasible. Obviously we do it to plants and animals all the time. We also have increasingly powerful and affordable technology for doing so, such as CRISPR. This is already powerful and practical enough for small startups to perform CRISPR as a service, if it were legal. We already have FDA-approved CRISPR treatments, and have performed personalized CRISPR therapy. CRISPR is fast and affordable enough to have made its way into the clinic. But there is a crucial difference between these treatments and germline modification – these treatments affect somatic cells, not germ-line cells. This means that whatever change is made will stay confined to that one individual, and cannot get into the human gene pool. What we are talking about now is genetically modifying an embryo at an early enough stage that it will affect all cells, including germ cells. This means that these changes can be passed down to the next generation, and effectively enter the human gene pool.
This difference is precisely why there is regulation dealing with such procedures in many countries, including the US. In the US the situation is a little complex. It is not explicitly illegal to perform germ line gene editing on humans. However, there is a ban on federal funding for any such research. This does allow for private funding of such research, but any resulting treatment would still need FDA approval, which is highly unlikely in the current environment. Despite this, there is discussion among several startups to start exploring this idea. Why this is happening all at once is not clear, but it seems like we have crossed some threshold and startups have noticed. With current regulation, where does that leave us regarding our three criteria?
Technically, a CRISPR-based germ-line treatment for humans is possible. We do have the technology. What needs to be worked out are the specific changes and their results. This would require clinical trials, and that is the main stumbling block in the US and some other countries. It seems unlikely the FDA would approve such trials, and therefore there would be no way to even work towards FDA approval. A company could theoretically do privately funded studies that are not part of FDA approval, but they would still need ethical approval (IRB approval) for such studies, which may prove difficult (although not necessarily impossible). Such research could be carried out in countries with more lax regulations, however. Over 70 nations have such regulations, which means many do not. So, technically speaking, we are close to having marketable treatments designed to change actual human genetic inheritance.
Legally, in most developed nations there does not appear to be any appetite for allowing human germ-line manipulation. However, such services could be offered in countries without hindering regulations, perhaps the same countries in which the translational research was done. We currently do not have any international bans or regulations. The WHO advises against germline engineering, but there are no legally-binding international regulations. This is a technology that definitely requires not only an international consensus but enforceable regulations, because what happens in one country can affect the entire human population.
In short, there is a pathway to skirt any current regulations and make such treatments available. However, if startups start developing germline-altering treatments, that might motivate governments to find ways to regulate and effectively ban such treatments. Would such treatments be commercially viable? If by this you mean – would there be a customer base willing to pay enough to make it a profitable service, the answer is clearly yes. If you mean – are there companies currently offering such services, the answer is no. But that may be changing soon.
What could be the implications of this technology? It depends on how it is regulated and used (like so many advanced technologies). I will speculate on what I think is the best-case and worst-case scenarios. Best case, such technology would be used to minimize the burden of genetic disease. We already have treatments to sort sperm to avoid sex-linked mutations and to select more genetically healthy sperm. But what if we could do this down to the individual gene, and make sure the IVF occurs only with sperm that does not contain an allele for a genetic disease? I can’t see any downside to this.
The next step, however, would be altering genes, not just selecting them. But again, this could be limited to changing a disease-causing gene into a healthy version. The resulting gene would be one that is already in the human population, and the only result would be the elimination of one version of that gene that is disease-causing. Again, hard to see a downside. Such treatments would almost certainly be more cost effective than managing the genetic disease itself. And if it were done to the germline, it would only have to be done once for that genetic line. I suspect that when such treatments become technically available, and confidence is high enough in the technology itself, they will become legal and available.
Beyond this disease-treating category, however, there are at least two other categories of genetic alteration that become increasingly problematic. The first of these is risk modifying. What if we could alter a gene from one version that conveys a high risk of ultimately developing Alzheimer’s disease to another version that has a relatively low risk? This would not be treating a genetic disease, but simply altering the genetic risk of developing a disease. We could potentially do the same for high cholesterol, diabetes, obesity, and high blood pressure. Again, we would not be introducing any new genes into the human gene pool, just giving people alleles that convey a lower risk of specific diseases.
However, there is a potential downside here. If such treatments became common, they would potentially reduce genetic diversity in the human population. Many genes that convey a high risk in one area have other benefits. They just have different tradeoffs. We may be reducing disease risk in one area, but also reducing resilience to other diseases. In other words, there is a potential for unforeseen consequences. Also, the number of people who could potentially benefit from such genetic alterations is much higher than for genetic diseases, so the implications for the human gene pool are greater. The risk-benefit ratio is therefore harder to calculate. I think such treatments might be viable one day, but would require a lot of research to minimize the possibility of unforeseen negative consequences.
The final category I will call gain-of-function alterations. This might include introducing genes from other species, or novel alleles that provide a phenotype that does not currently exist in the human population. This category has the greatest potential for change, and therefore for both best-case and worst-case scenarios. Some people might think there is no best case in this category, and that is reasonable if you think the risk will never be worth it, and that such changes could alter what it even means to be human. If we still want to imagine a best case, that might involve limiting such changes to ones for which there is a robust consensus that they would be good for humanity with little to no downside. This would also have to include some consideration of fair and just access to such changes. Perhaps this might include genes that help adapt people to living in space or on Mars, or that eliminate addiction. It’s hard to think of many examples outside of disease modification, however.
It is much easier to imagine worst-case scenarios. The common ones that are frequently raised include creating not just different classes of people, but different subspecies. Wealthy individuals could potentially afford a suite of upgrades to their children, making them smarter, stronger, healthier, with a longer lifespan. It’s hard to imagine such a thing ending well. Another classic doomsday scenario is the creation of genetic supersoldiers, creating an arms race among competing nations to engineer the most deadly soldiers. Again, hard to see this ending well. Yet another common sci-fi scenario is the introduction of genes that significantly alter the human phenotype, blurring the lines between human and non-human. And of course there is the ultimate worst-case scenario, an accidental (or perhaps not so accidental) genetic apocalypse. There is a range of possibilities here as well, with the absolute worst imagined in a Rick and Morty episode where the entire planet was reduced to genetic monstrosities.
There are also some edge cases that have complex elements, including some truly horrific ones. What if, for example, genetic alteration could change someone’s apparent “race” or even their biological sex? What would be the social implications of an African family deciding they wanted a European-looking child, or vice versa? How common would this become? Would apparent race become a fad, shifting from generation to generation? It is now common among some Asian youth to seek eyelid cosmetic surgery. What if this could be accomplished with gene therapy? How accepting would society be toward pre-pubescent children wanting gene therapy to alter their biological sex so that they go through puberty as the other sex? How would the furry community react to the possibility of genetic furriness? What if parents wanted for their children a standard of beauty that is generally considered to be extreme, even freakish? What if a culture decides that women should be genetically prevented from having certain bodily functions?
Genetic alteration is a powerful technology, especially when applied to the germline. There is the potential for extreme good, extreme harm, and extreme weirdness. Sounds like an area that would benefit from thoughtful regulation, rather than being left to the whims of startup culture.
The post Are Genetically Engineered Humans Coming first appeared on NeuroLogica Blog.
Many people might find this to be an easy question and simple concept – what is your favorite color? In fact it was used as the quintessential easy question by the bridge guardian in Monty Python and the Holy Grail. But it is a good rule of thumb that everything is much more complicated than you think or than it may at first appear, and this is no exception. We recently had a casual discussion about this topic on the SGU, and it left me unsatisfied, so I thought I would do a deeper dive. Perhaps there is a neuroscientific answer to this question.
The panel differed in their reactions to the question of favorite color (we were just giving our subjective feelings, not discussing research or evidence). Cara felt that “favorite color” is largely arbitrary. Kids are asked to pick a favorite color, which they do (under pressure), and then often just stick with that answer as they get older. She also felt the question was meaningless without context – are you referring to clothes, cars, house color, or something else? Jay was at the other end of the spectrum – he has a strong affinity for the color orange, which gives him a pleasant feeling. The rest of us were somewhere in between these two extremes.
I knew there had to be a science of “favorite color”, which I thought might be interesting. Indeed there is – and it is interesting.
First, what is the distribution of favorite colors, across the world and demographically? Blue is, far and away, the most common favorite color, in most countries across the world, so it seems to be very cross-cultural. It is also the favorite across age groups and genders. The second-most favorite color is either green, red, or purple. Brown is almost universally the least favorite color. Gender has an effect on favorite color, with more women favoring pink, and reds in general (but still preferring blue overall). Republicans still prefer blue over red, but more Republicans prefer red than Democrats. There are country-specific differences as well. Red is a higher preference in China than in many other countries, for example.
The demographics of favorite color are clues as to potential underlying causes. Is favorite color purely a cultural phenomenon? It does not seem to be, but there are some minor cultural influences. Is it a neuro-biological phenomenon? It could be, but not purely. If it is partly neurological, what does it track with? How about personality? The evidence is, in short, mixed, and reveals the hidden complexity of seemingly straightforward questions.
Most people think of color preference as referring to hue, but saturation and brightness have just as much of an influence on color choice. When you consider all aspects of color, the picture becomes more complex. Extroverts, for example, prefer bright colors. Adults tend to prefer more saturated colors. The results of studies, therefore, depend on how the questions were asked. But an overall summary is – you can make some statistical predictions about the big five personality traits (extroversion, openness, agreeableness, neuroticism, and conscientiousness) from color choice. But this is one influence among many, and it depends on multiple factors (the context, the object, and all three color dimensions). There does seem to be an actual phenomenon here – an influence of personality on color choice – but it’s mixed and complicated.
So far we have mostly just been describing who has which color preferences, but not why or how. We have some clues from the demographics of color choice, but no answers. Given everything above, it is still possible that color choice is entirely learned, or partly learned but mostly an inherited trait. What does the evidence say about this question? Well, there is no current answer, but there is a strong theory that is a good fit to the evidence – the ecological valence theory.
According to this theory, color preferences emerge from the totality of our life experience, mainly through emotional association. We have a partly associative memory, in that we tend to remember things partly by associating them with other things that occur together. This includes color. If green things tend to be associated with good experiences, then we will begin to associate the color green with good feelings. According to EVT, blue is the most common favorite color because we associate blue with clear skies and clean water, which tend to be associated with happy experiences. We tend to associate brown with feces or rotten food, so that is consistently the least favorite color.
The strength of EVT is that it allows for biological, cultural, experiential, and personality factors all at once. They all can affect our associations with colors, and contribute to how they make us feel. Some associations may be natural, like blue skies, green vegetation, and putrid yellows and browns. Others can be purely cultural, like pink for girls or purple for royalty. Different personalities would be drawn to different colors that tend to be associated with congruent moods, like vibrant reds for extroverts, or calming blues for introverts. And then there are likely to be some quirky individual factors as well – extreme individual experiences, or social group sorting (which color wedge do you typically play in Trivial Pursuit?).
Does neuroscience add anything to this picture? So far, neuroscientific studies have elucidated some of the underlying brain regions that relate to color preference and processing, but don’t really provide any insight into why color preferences exist. Here is the most relevant study I could find:
These results demonstrate that brain activity is modulated by color preference, even when such preferences are irrelevant to the ongoing task the participants are engaged. They also suggest that color preferences automatically influence our processing of the visual world. Interestingly, the effect in the PMC overlaps with regions identified in neuroimaging studies of preference and value judgements of other types of stimuli.
Sure – color preferences and experiences happen in the brain, and involve a brain region generally involved in value judgement. This is a piece to the puzzle, but itself does not really address the cause of color preferences, just some of the neurological mechanisms.
There is still a lot to learn about color preferences. The evidence does not support the notion that color preference is a purely arbitrary phenomenon, but rather that it has a psychological, cultural, and neurological basis. But there is still a lot of research to be done in terms of the nature and causes of color preferences.
The post What Is Your Favorite Color? first appeared on NeuroLogica Blog.
I have a love-hate relationship with TikTok, as I do social media in general. It is a great communication tool and allows scientists and science communicators to get their content out to a larger audience cheaply and easily. If you know how to use the internet and social media as a resource, you can find a video about almost any topic. I particularly love the “how to” videos. And yet these applications are also used (mostly used) to spread nonsense and misinformation, or at least inaccurate, misleading, or overly generalized information. The low bar of entry cuts both ways.
As a result I spend part of my time as a communicator with my finger in the dike of social media pseudoscience and science denial. For example, this individual feels his insights into the workings of the human brain need to be shared with the world. His musings are based entirely on a false premise, his apparent misunderstanding of what neuroscientists understand about brain function. He begins with the nicely vague statement, “scientists have discovered”, followed by a completely incorrect statement – that thoughts come to our brain from outside the brain.
Before I get into this old “brain as receiver” claim, I want to point out that this format is extremely common on TikTok in particular and social media in general. This is more worrying than any individual claim – the culture is to present some random nonsense in the format of “isn’t this crazy”, or with a cynical tone implying something nefarious is going on. Such authors may or may not believe what they say; they may just be trying to amplify their engagement with a total disregard for whether what they are saying is true or not. They may even be a full Poe – knowing that what they say is nonsense. Either way, they feel it is appropriate to spend the time to record and upload a video without spending the few minutes that would be needed to check whether what they are saying is even true. The very platform they are using to spread their nonsense often has all the information they need to answer their alleged questions. The culture is profoundly incurious, intellectually vacuous, lacking all scholarship or quality control, and seems to value only engagement. Thrown into the mix are true believers, grifters, and those who display classic symptoms of some form of thought disorder. This is “infotainment” taken to its ultimate expression.
Back to the video at hand – the author begins with an unsourced vague claim, but one that is not uncommon in the “new age” subculture, that our brains are mostly just receivers for a vast intelligence that comes from somewhere outside the brain. He states this as if it is a scientific fact. He then goes on to muse about some new age nonsense regarding being on a higher or lower “frequency” and therefore attracting good thoughts or bad thoughts. Is there any plausibility or evidence for the notion that some of the information that comes to our brain originates somewhere outside the brain? By this I do not mean through the known senses, but that part or all of the “mind” is a non-physical phenomenon, and the brain is a conduit for the mind, interfacing it with the physical body.
This is one formulation of what is known as dualism, which I have written about here many times – that mind and brain are not entirely one phenomenon, but two. My position, which tracks with the consensus opinion of neuroscientists, is that the mind is what the brain does. There is only the brain. The mind is not software running on the brain – it is the brain, simply describing our perception of what the brain is doing. That sci-fi trope of a “consciousness” being transferred from one body to another, or into an object, is simply impossible. Just as you cannot “upload” yourself into a computer. At best you can make a copy that replicates some of your mental functions, but it is in no meaningful way you. You are your brain.
How do we know this is true? This is, far and away, the best inference from all available data. While the brain is incredibly complex and we are still learning lots of the details, it is now entirely clear that the brain is a living information-processing machine. Neurons connect to each other, forming circuits and networks that can store and process information. These networks correspond to specific functions, and those functions can be altered or destroyed by changes to the corresponding physical circuits in the brain. We have known this for over a century – if you have a stroke that damages part of the brain, you lose that part of your functionality. And this does not only relate to physical things like movement, but also to thought, such as the ability to understand language, to reason spatially or mathematically, to process visual information, etc. This can even have bizarre manifestations, like losing the sense that you own or control parts of your body. As our technology has improved we have been able to map the circuits in the brain to finer and finer detail – and throughout the entire process nothing has emerged to challenge this core understanding of neuroscience. The mind is the brain.
There are also many ways in which there is a lack of findings to support any alternative interpretation. For example – no part of the brain is an actual receiver for any kind of external signals, of any frequency. We perceive the world through our sensory organs, and there is no “extrasensory” perception. There is no functionality without a corresponding neurological cause. There does not appear to be any limit to our ability to alter mental function by altering brain function. There is no evidence for mental function outside of brain function. In short, when we look at the brain we find wetware, a living computer, not a receiver of any sort.
All of this information, often patiently explained by experts, is freely available on the internet. All someone has to do is, before they post a video of their incredible opinions, ask a very simple question – is what I am about to say actually true?
The post Brain As Receiver Is Still Wrong first appeared on NeuroLogica Blog.
Many teachers are panicking over AI (artificial intelligence), and for good reason. This goes beyond students using AI to cheat on their homework or write their essays for them. If you have AI essentially think for you, then you will not learn to think. On the other hand optimists point out that AI can be a powerful tool to aid in learning. It all comes down to how we use, regulate, and manage our AI tools.
The cautionary approach was captured well, I think, by Mark Crislip in this SBM commentary, in which he worries about the effects of AI on doctor education. How will a new generation of physicians learn how to think like expert clinicians if they can have AIs do all their clinical thinking for them? My question is – is AI fundamentally different from all the other technological advances that have come before? Did calculators take away our ability to do math? The answer appears to be no. Students still gain basic math skills at the same rate with or without access to calculators. But there are lots of confounding factors here, and so some teachers still warn against allowing kids access to calculators too soon. Others point out that access to calculators has simply shifted our math abilities, away from basic operations toward more modeling, problem solving, and complex concepts. It seems we are in the middle of the same exact conversation about AI.
We can also think about things like GPS. My ability to navigate from point A to point B without GPS, or to navigate with maps, has definitely declined. But using GPS has also made my navigating to unfamiliar locations easier and more efficient. I would not want to go back to a world without it.
But is AI different because it is not about some narrow specific skill but about fundamental skills like writing, arguing, and thinking? I think the answer is – it could be. At the very least we cannot assume that it isn’t. We don’t want to look back in 20 years and realize we raised a generation that is intellectually crippled by previous standards. It does not seem prudent to just hope that this is not the case and it will all work out, like it did for calculators.
Part of the problem is that AI technology is developing very fast, and our culture and institutions do not have the time to adapt. Regulations, if any are needed or would be helpful, are also lagging behind. In fact it seems that the tech industry has been successful in cutting off any serious regulations at the knees. They have a point that sloppy regulations could hamper innovation and cede a vital emerging industry to our competitors. But they present this as a false choice, with the only other option to just trust them and have essentially no regulation. They want us to replicate what happened with social media, or with crypto, where lack of effective regulations turned what could have been useful tools into…something else. It is no surprise that recent surveys find people are more nervous than optimistic about the net effects of AI.
It is hard to know what the long-term effect of the recent judgement against META and Google will be, but a court did find that these companies were “negligent” in protecting children from their products. These products have been deliberately optimized for addictiveness. Algorithms provide a bottomless scroll of content designed to outrage people, or drag them down a rabbit hole of increasing radicalization – whatever maximizes their engagement. The effects on individuals and society do not seem to have been factored in.
As with so many complex and technological issues, we seem to be perpetually stuck between two extremes. On the one hand we have tech bros unfettered in their attempts to “move fast and break things,” who then use their billions to buy up media outlets and politicians to fend off any regulations. On the other we have politicians who may or may not be well-meaning, but either way seem to lack the knowledge and expertise to effectively regulate these new technologies. So their clumsy attempts at regulation backfire, and are used to scuttle any further regulation attempts. This is happening during a time of intense political polarization and the collapse, in many ways, of effective legislating.
What we want is a third option – effective, narrow, targeted regulation informed by experts with meaningful metrics that prevent abuse and harm with a minimal effect on innovation. Of course, this is not easy. It requires hard work, lots of consultation and discussion, and rounds of experimentation, evaluation, and adjustment. But that is what our complex world requires. Perhaps we are just not up to it.
The academic world also needs a carefully calibrated and thoughtful response. I do think we can leverage AI as a tool to improve education, to make it more personal and adaptive. But at the same time we need to avoid or minimize the obvious potential downsides. I do think it is a good idea for young children to avoid certain technologies while their brains are still developing. We need to maximize their use of verbal, math, and cognitive skills so that their brains will maximally develop these abilities. Then we can phase in technologies as tools they can use to be more effective. Start too young, however, and technology becomes a crutch, and their skills not only atrophy – they never develop in the first place.
In fact we need to think carefully about this digital virtual world we are creating for ourselves. Yes, this technology provides amazing tools and opportunities for engagement and entertainment. But it is also a soporific, lulling us into contentment with a small and isolated existence. I worry about a generation that never knows anything else.
Education is an opportunity to prevent such a digital dystopia, by not only providing the opportunity but the necessity that children do physical activities, get out into nature, communicate with actual people, and use every cognitive skill they have. We obviously have to introduce them to technology along the way, and in fact there is no way to avoid it. It is embedded out there in the world, and children do not live at school. So we also need to teach children to use technology responsibly and effectively. Meanwhile school is a place where they use and develop other abilities.
We have to be thoughtful about this. It is doubtful that just going with the flow down the path of least resistance (and maximal profits for the tech industry) will lead to the world we want to have.
The post AI And Schools first appeared on NeuroLogica Blog.
As we anticipate the Artemis II launch, now slated for early April with plans to take four astronauts on a trip around the Moon and back to Earth, NASA has been unveiling some significant changes to its plans for returning to the Moon and beyond. If you have fallen behind on these announcements, here is a summary of the important bits.
Artemis II will continue as planned, marking the first crewed deep space mission since 1972 (Apollo 17). The original plan was for Artemis III to land on the Moon in 2027, but this mission has been pushed to an Artemis IV mission in 2028. A new Artemis III mission has been inserted – this will go only to low Earth orbit (LEO) and will test the integration of all the systems necessary to land on the Moon. This will include docking with one or both of the two landers, one being built by SpaceX and one by Blue Origin. This sounds like a really good idea, and it did seem unusual that they were planning on going straight to the Moon without ever test-docking with the lander.
Even though landing on the Moon will be delayed by at least a year, NASA says this will set them up to have at least annual landings on the Moon after that, with a goal of a landing every six months. The reason for this frequent pace is the more recent announcement by NASA last week – that they are putting plans for a Lunar Gateway in lunar orbit on pause and instead are going to focus on building a permanent Moon base near the lunar south pole.
In order to make this possible, and to support the future Moon base (no word yet on whether this will be called Moon Base Alpha, as it should be), NASA plans about 30 uncrewed robotic landings on the Moon every year. They will be scoping out the location for the base and delivering equipment and supplies.
What about the Space Launch System (SLS)? When hearing these plans one of my first questions was – are they going to do this all with the SLS? Each SLS launch costs $4.1 billion, with the cost of the single-use ship itself being $2.2-2.5 billion. This is one of the biggest criticisms of the SLS system – they are designed as single-use rockets. Meanwhile, the rocket industry has moved on to reusable rockets, which dramatically reduces the cost. As of now, NASA has approved SLS launches through Artemis V. After that they have not committed to a specific plan. But – they have stated that their goal is to transition to “commercial hardware.” This almost certainly means SpaceX and Starship. I guess they cannot fully commit because Starship is still in development. But if it is ready in time, it seems likely NASA will start relying on Starships to get to the Moon.
This makes a lot of sense. SpaceX’s lander is really a modified Starship – it is stripped of anything needed to land back on Earth and is optimized for landing on the airless lunar surface. So – why go all the way to the Moon and then dock with a Starship lander to land on the Moon? Why not just dock with the Starship in LEO, then take the lunar-modified Starship all the way to the Moon and then down to the lunar surface? That seems to be what NASA is planning. For now they will use the SLS to get into LEO, then go the rest of the way on a modified Starship. After Artemis V they may take one Starship into LEO and another to the Moon. They are not fully committing to SpaceX because they don’t want to give them a de-facto monopoly, so the door is open for other companies to compete for this service.
The apparent plan is for the base to be on the surface near the south pole. NASA has been investigating lunar lava tubes as a potential location for a Moon base, but there are no identified sites or specific plans right now. This means the surface base will have to be heavily shielded. Perhaps the permanent presence will allow them to build a future base inside a lava tube, which would be much better protected from radiation and micrometeors.
Once all this is worked out, and we have a lunar base serviced by a system to frequently land crew on the surface and return them to Earth, NASA plans to use that lunar base as a stepping stone to Mars. This makes great sense. Remember – 90% of the energy you need to get anywhere in the solar system you expend just getting into LEO. Getting off the lunar surface is relatively easy. This means that a lunar base is an excellent platform from which to launch ships throughout the solar system, including to Mars. A ship launching from the Moon can use most of its fuel to get to Mars faster, spending more of that fuel accelerating toward Mars and then decelerating to insert into Martian orbit. This is critical because getting to Mars fast is the best defense against radiation exposure for the astronauts.
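That “90% of the energy” figure is a rough heuristic, but the Tsiolkovsky rocket equation shows why the climb out of Earth’s gravity well dominates the propellant budget. Here is a minimal back-of-the-envelope sketch; the delta-v and exhaust-velocity numbers are my own illustrative assumptions, not figures from NASA or from this post.

```python
import math

def propellant_fraction(delta_v_km_s: float, exhaust_velocity_km_s: float) -> float:
    """Fraction of initial mass that must be propellant for a given delta-v
    (Tsiolkovsky rocket equation, single stage, no other losses)."""
    return 1 - math.exp(-delta_v_km_s / exhaust_velocity_km_s)

# Illustrative assumptions: ~9.4 km/s to reach LEO from Earth's surface
# (including gravity and drag losses), ~3.6 km/s from LEO to a trans-Mars
# injection, and an exhaust velocity of ~4.4 km/s (roughly a hydrogen/oxygen stage).
v_e = 4.4
print(f"Earth surface -> LEO: {propellant_fraction(9.4, v_e):.0%} propellant")  # ~88%
print(f"LEO -> trans-Mars:    {propellant_fraction(3.6, v_e):.0%} propellant")  # ~56%
```

Under those assumptions, roughly 88% of a single-stage vehicle’s mass would be propellant just to reach LEO, versus a bit over half for the push from orbit toward Mars – which is the whole argument for staging from orbit, or from the Moon, in the first place.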
Along those lines, NASA has also announced plans to use nuclear power in space. This has two components – the first is using nuclear power for the Moon base itself. This is a great idea because you do not want to rely on fuel, which is expensive to ship to the Moon. Solar power on the Moon can be great, but most locations see sunlight only half the time. This is actually part of the reason to build the base at the south pole, where there are high peak regions that see sunlight 90% of the time. That will likely be an important source of power for the base. The other reason is that the poles also have deep craters that never see sunlight, which means there may be some frozen water there, which can be mined as a resource for the base. But even 90% sunlight still means 2-3 days with no sun, which would require significant battery backup. This is fine, but a mini nuclear plant (like the kind of thing you would have on a nuclear submarine) could provide years of reliable power for a lunar base.
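As a sanity check on the “2-3 days with no sun” point, here is the arithmetic, assuming the ~10% of darkness at a polar peak comes in a single stretch per lunar day-night cycle; the 100 kW base load is a made-up number purely to show the scale of battery backup involved.

```python
# Rough arithmetic behind "90% sunlight still means 2-3 days with no sun".
# Assumptions (mine, not NASA's): the ~10% darkness is one contiguous stretch
# per lunar day-night cycle, and the base draws a constant 100 kW.
LUNAR_CYCLE_HOURS = 29.5 * 24          # one lunar day-night cycle, ~708 hours
dark_hours = 0.10 * LUNAR_CYCLE_HOURS  # ~71 hours, i.e. roughly 3 days
assumed_base_load_kw = 100             # hypothetical continuous load
storage_needed_mwh = dark_hours * assumed_base_load_kw / 1000

print(f"Dark stretch: ~{dark_hours / 24:.1f} days")
print(f"Battery backup at {assumed_base_load_kw} kW: ~{storage_needed_mwh:.1f} MWh")
```

Several megawatt-hours of storage is a lot of mass to ship to the Moon, which is the case for pairing solar with a small reactor rather than relying on batteries alone.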
The second use of nuclear power in space is for their planned nuclear electric propulsion spaceship. NASA plans for Space Reactor-1 (SR-1) Freedom, a ship propelled by a nuclear electric engine, to be launched in 2028. Nuclear propulsion has been long anticipated, and honestly we should have developed it long ago. This gets beyond the limits of chemical propulsion, and would cut the travel time to Mars. SR-1 Freedom will fly to Mars, and take 1 year to get there. This trip is optimized for efficiency, not speed, as it is a test mission. Once mature it is estimated that nuclear propulsion will reduce a typical trip to Mars from 7-9 months down to 3-4 months, with a theoretical advanced system getting to Mars in 45 days. Now we’re talking.
This also relates to why a lunar base is so important to Mars missions. Nuclear engines are efficient, but do not have the thrust to launch from Earth’s surface into orbit. You would have to launch any such vehicle with chemical fuel and then switch to nuclear for the trip to Mars. If you are launching from the lunar surface, however, you still need chemical rockets, but only small boosters rather than something the size of the SLS or Starship.
Taking all of this into account, it really does seem that NASA has a well-thought-out plan for developing the infrastructure to maintain a presence on the Moon and for missions to Mars (and potentially other solar system destinations). This is much better than the one-off (so-called “flags and footprints”) missions of the past. Honestly, this is what I naively expected would happen back in the 1970s or 80s as a follow-up to the Apollo missions. It took 50 years longer than expected, but it’s good to see it happening now. I know not everyone agrees with the priority of sending people into space at all, and would rather have an entirely robotic space program, but that is a discussion for another day.
The post NASA Unveils New Moon Plans first appeared on NeuroLogica Blog.
Last year the inner solar system had an interstellar visitor – 3I/Atlas (the “3I” indicating the third interstellar object ever discovered, found by the Atlas telescope). The third ever of anything is by definition a rare event, and so this was scientifically exciting. The comet came into the inner solar system, passing close to Jupiter and Mars, but not to the Earth, went behind the sun, then emerged on its path away from the sun. It is now headed for the orbit of Jupiter and out of the solar system. At first 3I/Atlas displayed a number of minor anomalies. It was behaving sort of like a comet, but with some differences. This fits well, however, with the main hypothesis that it is an interstellar comet – so it’s a comet, but may have a different composition from comets that formed in our own solar system. This is now almost certainly the case – the comet comes from the thick disc of the galaxy, likely from a low-metallicity star system, and has likely been travelling through interstellar space for billions of years, possibly being even older than our own star.
Now that it is passing out of the solar system we can look at all the data that NASA collected and make some fairly confident conclusions. There are a lot of sources of information, but Wikipedia actually has a pretty good summary and list of references. In the end, 3I/Atlas behaved mostly like a typical comet. It formed a tail heading away from the sun, brightened as it got close, then faded away as it moved away from the sun. Spectral analysis found that the comet was unusually rich in carbon dioxide (CO2), with small amounts of water ice, water vapor, carbon monoxide (CO), and carbonyl sulfide (OCS). It also had small amounts of cyanide and nickel gas, which is common in comets from our own solar system. In other words – it is a comet. It did originate from a part of the sky that we had previously calculated would have fewer such interstellar objects, which either makes it especially rare or means that our calculations are off.
Every time we encounter a new interstellar object we gather more data about such objects – how frequent they are, where they come from, and what their nature is. Right now we have just three data points. After the first one, Oumuamua, we had no idea how common they were because we had just one data point. Now we have enough instruments surveying the sky that we are better able to detect such objects, which are very fleeting. The question was – was Oumuamua a one-off, and we just got lucky to detect something that happens very rarely, or are such objects common? We now have three data points and can conclude that they are fairly common, and we should detect one every few years or so, perhaps even more often if we start looking harder.
Interstellar objects are a fairly new astronomical phenomenon, and what typically happens to new astronomical phenomena is that someone asks – could this be an alien artifact? So far the answer has been universally, no. The universe is a very big and complex place with lots of unusual phenomena. Historically speaking we have only just started to examine the cosmos, and are still encountering new phenomena on a regular basis. We have yet, however, to detect anything demonstrably, or even likely, alien. No one would be more excited than me if we discovered a genuine technosignature of an alien civilization. That is precisely why we have to be very careful before leaping to any such conclusions. But sure, ask the question, just don’t leap off the deep end.
What I mean by that is – do not make bad arguments to prop up an alien hypothesis, do not mystery-monger, do not truck in conspiracy theories, and do not draw undue attention to such speculation or present it as anything other than speculation. Every generation seems to have someone, sometimes with a scientific background, who does all of these things. The allure of the alien hypothesis is just too great. It is genuinely fascinating. It is the fast track to fame and attention. You can portray yourself as just being open-minded, brave enough to ask the tough questions, and criticize your colleagues for being closed-minded. Of course, like many things, this is a continuum. A little is reasonable, more starts to get sketchy, and a lot makes you a crank.
An example of something which I consider to be in the sweet spot of good scientific exploration of the possibility of alien technosignatures is SETI. SETI essentially uses radioastronomy to survey for potential radio signals of alien origin. But they are not just doing this – they are also doing lots of ordinary good radio astronomy. But mixed in with their radio astronomy are methods to screen for signals that might be technosignatures. They are also extremely careful not to make any premature or overblown claims, and they are their own most dedicated skeptics.
At the other end of the spectrum, in my opinion, is Avi Loeb. He seems to have made a career out of mystery-mongering anything unusual as a possible alien artifact. He claimed that all three interstellar objects might be alien craft. Why is he at the crank end of the spectrum? Because he elevated this possibility prematurely and with a series of really bad arguments, sometimes distorting the data or making bad calculations. He said that Oumuamua might be alien because it was coming close to the Earth, to observe it. He then argued that 3I/Atlas might be alien because it was not coming close to the Earth, to hide from us. He exaggerated its possible size, its apparent lack of a tail, and its composition. He made a lot of the fact that the comet’s trajectory is close to the ecliptic, about 5 degrees off, committing a classic lottery fallacy. He calculated how likely this specific feature is, but only after knowing it, and did not adjust for all possible features that might be individually unlikely. He engaged in classic post-hoc reasoning. In the end, the predictions of NASA scientists all proved correct – 3I/Atlas is a comet, and displays all the features of a comet. Loeb attracted attention by saying 3I/Atlas might pivot toward the Earth once it emerged from behind the sun. When this prediction failed he did admit it was “most likely natural”, but he is still emphasizing its apparent anomalies.
What he is doing is playing coy, which is a common strategy for those who are pushing fringe ideas but trying to seem reasonable. All along he said – the most likely explanation is that it is natural. But then he follows up with – here are lots of (really bad) reasons why it is unusual and might be alien. This is a win-win for him – in the rare case that he turns out to be right, he is a genius and takes all the credit (keep in mind, if it were alien NASA would have found out all by themselves, without his prodding). If it turns out he is wrong, then he can claim he said all along it was likely to be natural. Either way he sucks up as much oxygen as possible from the media and distracts from the hard-working scientists at NASA doing good work. There is some great and interesting science here. The conclusion that this is almost certainly not an alien craft is a footnote at best, because there was never any good reason to hypothesize that it was.
Loeb is at it again (or still) with a recent post about a “mysterious” Mars cylinder (see the picture above the fold). This is also a common strategy of mystery mongers – comb through tons of data looking for anything unusual, then declare it a mystery. Again – looking for anomalies is a legitimate process of science. Blowing up apparent anomalies into a high-priority mystery is something that an attention-seeking crank would do. In this case others combed through NASA pictures from the rover and then sent them along to Loeb, so he is now a magnet for such things. And again – he admits this is most likely to be just a piece of debris from the rover itself, or its landing, or whatever. There is now debris on Mars from all the spacecraft we have sent from Earth, so when we encounter a bit of what looks like ordinary debris, that is most likely what it is.
But Loeb is saying that NASA should turn the rover around and travel a few days back to take a closer look at this debris. NASA has not responded to or commented on Loeb’s statement. This is actually a good operational definition of making too much of an apparent anomaly. Thinking that such anomalies, even when they are likely mundane, should take high priority and redirect our limited resources away from other scientific priorities is worse than merely grabbing attention. It is trying to commandeer precious public resources to go on your own wild-goose chases, not because it is good science, but because it serves your own personal agenda. NASA is perfectly capable of determining the proper priorities for their own rover. They don’t have to go chasing after every piece of space junk because Loeb is trying to grab attention and justify his own dubious professional existence.
The post What Happened to Comet 3I/Atlas first appeared on NeuroLogica Blog.
In the decades before the Wright brothers’ historic 1903 flight at Kitty Hawk there were many claims of powered heavier-than-air flying machines. There were also many false sightings of “airships”, amounting to a form of mass delusion. But the false claims and false sightings do not change the fact that the technology for powered flight was right on the cusp, and that the Wright brothers crossed that threshold in 1903, leading ultimately to the massive industry we have today. This is not surprising. There is often a sense, in the industry and spreading to the public, that the technological pieces are in place for a significant application breakthrough. Today this is more true than ever, with a vibrant industry of tech news, showcases, conferences, blogs, podcasts, etc. I cover plenty of tech news here. It’s interesting to try to glimpse what technology is right around the corner. Any technology that is closely watched and much anticipated is likely to generate lots of premature hype and false claims.
This is definitely true for battery technology. We are arguably in the middle of a massive effort to electrify as much of our industry as possible, especially transportation. Maximizing intermittent renewable sources of energy would also be greatly facilitated by advances in energy storage. Meanwhile electronic devices are becoming increasingly integrated into our daily lives. Advances in battery technology can have a dramatic impact on all these sectors, and batteries are likely to be a critical technology for the next century. So it’s no surprise that there is a lot of hype surrounding battery tech, some of it legitimate, some of it fake, and some just premature. But this hype does not change the fact that battery technology is rapidly improving and the hype will become reality soon enough (just like the Wright flyer).
When it comes to EV batteries we all have a wish-list of features we would like to see. I now own two EVs, and they are the best cars I have ever owned. At least for my personal situation (I live in an exurb and own my own parking spots), EVs are great, and current battery technology is more than adequate. But sure, I live every day with the reality of how advances in battery tech will make EVs even more convenient and useful. I have detailed the wish-list before, but here it is again: increased capacity, by volume but especially by weight (specific energy), to decrease the weight while increasing the potential range of EVs; faster charging (with the holy grail being the ability to fully recharge an EV as fast as you can fill a car with gas); a long charge-discharge cycle lifespan (longer than the lifespan of the car); usability across a wide range of temperatures; stability (does not spontaneously catch fire); and low cost, which is tied to being made from cheap and abundant elements. This last feature also means that the battery is not dependent on rare elements whose supply line is largely controlled by hostile or conflict-ridden countries.
Making a significant breakthrough in any one of these features is big news. This is why Donut Labs’ claim to have simultaneously improved all of these wish-list features at once was met with so much skepticism. (I will give a quick update on Donut Labs at the end of this post.) Now we have another bold claim, this one from a US company based in Dallas. Their claim focuses on just one feature of EV batteries, the recharge time, although they also claim a reduced need for cobalt, which is nice. The company is OMI, which claims to have innovated a new iron-based cathode that allows an EV to recharge from empty to full in 3 minutes. That would be huge – 3 minutes is the holy grail, about as long as it takes to fill a tank of gas. Technically they claim a 20C recharge rate. The “C” rating is a convention: 1C means a battery can fully charge in one hour. So a 20C battery, by definition, would recharge fully in 3 minutes. For reference, most fast-charging EV batteries today are rated at 8-12C, or a 7.5 to 5 minute recharge time. This is already pretty good, and as you can see there is a diminishing return with increased C rating when translated into recharge time. Of note, however, these ratings are under ideal conditions. In the real world we are still looking at 10-12 minute recharge times for the fastest recharging batteries.
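For those who want the arithmetic spelled out, here is a minimal sketch (the function name and sample ratings are just illustrative) of how a C rating translates into an ideal full-charge time:

```python
def c_rate_to_minutes(c_rate: float) -> float:
    """Ideal full-charge time in minutes for a battery rated at c_rate.
    By convention, 1C means a full charge in one hour, so time = 60 / C."""
    return 60.0 / c_rate

# Illustrative ratings: 1C baseline, today's 8-12C fast chargers, and the claimed 20C
for c in (1, 8, 12, 20):
    print(f"{c}C -> {c_rate_to_minutes(c):.1f} minutes under ideal conditions")
```

Real-world charge times are longer, because charging slows as the battery fills and heats up.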
To me this is not a big deal at all. Even when I use a charger that requires 20 minutes to go from 20-80% charge, it’s rare that I am doing that on the road (only during long trips), and it’s relatively easy to plan that around a pit stop anyway. Go to the restroom, get a snack, and by the time you get back to your car you are done or almost done. Any improvement from there is icing on the cake. Ten to twelve minutes would be fantastic. Three minutes is insane. Keep in mind, 99% of the time I am slow charging my EVs at home. But sure, for that occasional time when you are driving home late at night, need a top-off to make it home, and have nothing to do but wait while your car recharges, faster is definitely better.
So how reliable is this claim from OMI? It looks pretty credible. They are calling the technology LnFP (lithium nano-ferrophosphate). This is a variation on the established LMFP technology, which uses manganese in the cathode. Doping the cathode with manganese allows for faster charging. OMI is not revealing the exact chemistry of their new cathode (industry secrets and all), saying only that it is nano-structured, hence the “nano”. Nothing there breaks the laws of physics, and this all seems reasonably incremental. But again, it is not uncommon to prematurely hype plausible incremental advances, the kind that would give a company dominance in an industry. Claim unlimited free energy and you are just an obvious crank or a fraudster. Claim a plausible incremental advance, and you generate excitement in the industry. But that still leaves the question – did they really achieve this, or are they hyping a lab phenomenon, or are they pulling a “fake it till you make it” maneuver to goose funding?
The broader context here is that OMI is not one of the major players in battery technology, the companies investing billions in a global race to push the industry forward and grab market share. They are a small startup, although they have been providing components to large companies like Harley Davidson. Are we seeing the democratization of battery tech, with spunky small startups leveraging creativity and innovation to challenge the major players? Or is this mostly small startups trying to make a quick score by making bold claims and either attracting big funding or getting snapped up by one of the big boys? OMI claims their battery has been validated, but I cannot find any independent third-party validation. They also claim they will go into production in 2027. That is the ultimate test: can they mass produce these batteries at a competitive price, and do they actually work as advertised in products?
Speaking of which, two months ago Donut Labs announced to the world a dream solid-state battery with all the wish-list features. Now they are claiming independent testing and validation, but again it is not quite worthy of the hype they are putting out. Finland’s state-owned VTT Technical Research Centre has tested some of its features. It tested the rapid recharge time, revealing a 0-80% charge in 4.5 minutes, with a 5C rating. Testing has also demonstrated that their solid-state battery is not a supercapacitor, which was one of the theories. But that, so far, is it. The 400 Wh/kg specific energy has not been validated, and that is really the main feature. So far we have more of a glimpse than total verification. So I am still withholding ultimate judgement until all the evidence is in, but it still seems sketchy to me. I hope the skeptics, myself included, are wrong, and Donut Labs has really achieved what they claim. But that hope, I think, is the point.
The post Another Bold Battery Claim first appeared on NeuroLogica Blog.
This is a tiny ray of light in what has been a gloomy year for science-based federal health policy. Recently U.S. District Court Judge Brian Murphy in Boston ruled that RFK Jr.’s action as HHS Secretary in firing the entire Advisory Committee on Immunization Practices (ACIP) did not follow required procedure and is therefore not valid. Further, he concluded that the new ACIP, packed with anti-vaxxers, made arbitrary and capricious decisions that did not follow established science-based procedure. His ruling is a preliminary injunction that delays meetings of the ACIP and stays the revised vaccine schedule. The ruling is in a case brought by a coalition of medical professional societies, including the American Academy of Pediatrics. They are celebrating the ruling as “a momentous step toward restoring science-based vaccine policymaking.”
There are a few layers to this story. The first is RFK Jr. himself and what he has been doing as HHS Secretary. I have not written much about him here, because posts about him and other Trump health appointees have dominated the SBM blog over the last year. This has been an “extinction level event” for rational federal health policy, and we have documented and analyzed it every step of the way. David Gorski has done a great job specifically documenting what RFK Jr. has done to vaccines in the US in his series – “RFK Jr. is definitely coming for your vaccines” – in which he just published part 8. He has not only documented all of RFK Jr.’s harmful actions but actually predicted them. Essentially, RFK Jr. is systematically using every lever at his disposal to dismantle the vaccine infrastructure in the US and reduce vaccine use as much as possible. Given his actions, he clearly straight-up lied to the confirmation committee when he said he was not anti-vaccine and would not take away Americans’ vaccines.
We, of course, recognized exactly what RFK Jr. was doing during the hearings, because we have been following his nonsense for 30 years. He said, for example: “If we want uptake of vaccines, we need a trustworthy government. That’s what I want to restore to the American people and the vaccine program. I want people to know that if the government says something, it’s true.” He then promised “gold standard science”. I would argue he has done the exact opposite. But this statement is classic denialism. Just claim you want to review the science, that everything is open to examination, and that you just want the highest standards of science. These principles are great, but they can be used as a weapon, not just a tool. You can deny well-established scientific conclusions by arbitrarily claiming we need yet higher standards. Also, claiming you want to “restore” faith in the vaccine program assumes there is currently a lack of faith, which is rich coming from the person who has done the most to undermine that faith with pseudoscience and false claims. That is another denialist strategy – make a well-established science seem controversial, then argue that because it’s controversial we need to reexamine it and call it into question.
This point requires further discussion. It may seem ironic that at SBM we are constantly calling for higher standards in medical science, but are now complaining about someone else calling for higher standards of science. But again, this gets to using such calls as a weapon vs a tool. No conclusion in medical science is bullet-proof. All science is simply inference to the best current conclusion based on existing evidence. Medical science, because we are dealing with variable biological units (and not things like electrons), is especially complex. We are always making decisions with imperfect information, making our best extrapolation from what is known, and ultimately making a risk vs benefit decision. This requires constant review of the evidence by recognized experts to help establish and maintain a standard of care. But you can attack any medical practice as lacking sufficient evidence, if that is your agenda. This is why expert reviews need to be as free from bias as possible, and as transparent as possible. And the reviews need reviews. It’s a constant process.
The problem with what RFK Jr is doing is not that he is reviewing the science, it’s that he is putting a massive anti-scientific, conspiracy-addled, and biased thumb on the scale. He arbitrarily fired the entire ACIP, then packed it with known anti-vaxxers. Packing a review panel is one way to get the outcome you want.
David lays out what RFK Jr has already done and will likely do going forward to undermine vaccines. The most recent outrage – his MAHA institute is sponsoring a MEVI conference, which stands for Massive Epidemic of Vaccine Injury. Gee – I wonder what they will conclude. He’s not even pretending anymore.
The other big layer to this story, however, is how effective a court injunction will be in stopping the RFK Jr. anti-vaccine wrecking ball. The court is correct – we have a process for a reason, to ensure that judgements about what the evidence says are objective and transparent. Bypassing that process and arbitrarily replacing it with one that is blatantly agenda-driven is not a valid process. But this gets into a tricky area – the “checks and balances” of the three equal branches of our federal government. How much oversight and veto power does and should the judicial branch have against overreach by the executive branch? Legal scholars can debate this – again, I just hope we have an objective and transparent process to make such decisions.
But executives can put their fat thumb on the scale of this process too – by packing the federal courts with ideologues who will follow their wishes rather than the law. They can also do it by judge-shopping, raising cases until they get a friendly judge. Our rights and freedoms should not depend so heavily on “federal judge roulette”. They should also not depend so much on the randomness of which executive gets to appoint the most Supreme Court justices. If the system gets too biased in one direction, then the public starts to lose confidence in the objectivity of the court, and the overall problem deepens. We seem to be digging ourselves deeper and deeper into a hole of affective polarization, lack of faith in the system, and the justification of extremism.
What saves us from bias, arbitrary decisions, extremism, and corruption are institutions that have a process to maximize transparency, average out and minimize bias and conflicts of interest, and elevate genuine expertise. This is partly built on codified procedure, but also on democratic and professional culture and standards. RFK Jr. is a blatant example of what happens when you ignore that culture of professionalism and let loose an ideologue to “go wild”.
The post Federal Judge Partly Blocks RFK Jr’s Anti-Vaccine Wrecking Ball first appeared on NeuroLogica Blog.
How common is life in the universe? This is one of the greatest scientific questions, with incredible implications, but we lack sufficient information to answer it. The main problem is the “N of 1” problem – we only have one example of life in all the universe. So we are left to speculate, which is still very useful when based on solid scientific evidence and reasoning. It helps guide our search for signs of life that arose independently from life on Earth.
One important question, therefore, is where is it possible for life to exist? We know life can arise on a rocky planet with a nitrogen and CO2 atmosphere in a temperature range that allows liquid water on the surface. We also know that such life may create and sustain large amounts of oxygen in the atmosphere. It therefore makes sense to focus our search on similar planets. But life does not have to be restricted to Earth-like life. Scientists, therefore, try to imagine what other conditions might also support some kind of life. It is possible, for example, that life arose in the vast oceans under the ice of moons like Europa or Enceladus. Such life would be very different than most life on Earth. It would be dependent on chemical processes for energy (chemosynthetic), rather than sunlight.
Knowing how many different kinds of places life could possibly exist affects our estimate of the number of locations in our galaxy that might harbor life. Current estimates for how many Earth-like exoplanets there are in the Milky Way galaxy range from 300 million to 40 billion, depending on various assumptions and how tightly you define “Earth-like”. There are 100-400 billion stars in the galaxy, but about a third of those stars are in multi-star systems, which means there are somewhere from tens of billions up to around 100 billion distinct stellar systems in the Milky Way. One estimate from observed multi-star systems is that about 89% of them could allow for a stable orbit of a rocky planet in the habitable zone.
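Just to illustrate how wide the uncertainty still is, here is a quick back-of-the-envelope calculation using the ranges quoted above (these are rough estimates, not measurements):

```python
# Rough figures quoted above -- broad estimates, not measurements
earth_like_low, earth_like_high = 300e6, 40e9   # estimated Earth-like exoplanets in the Milky Way
stellar_systems = 100e9                         # order-of-magnitude count of distinct stellar systems

print(f"Implied fraction of systems hosting an Earth-like planet: "
      f"{earth_like_low / stellar_systems:.1%} to {earth_like_high / stellar_systems:.0%}")
# roughly 0.3% to 40% -- more than a factor of 100 of uncertainty
```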
But perhaps we should not limit the calculations of how many worlds in the galaxy may support life to Earth-like planets. I am not just talking about life in oceans under icy moons. Astronomers have also been considering the possibility of life on moons that orbit free floating gas giant planets. A free floating planet (FFP), also called a nomadic planet or rogue planet, does not orbit a star at all. At some point, likely early in the life of its parent star, it was flung out of its system and now wanders freely between the stars. Astronomers estimate there may be hundreds of billions of such planets in the Milky Way. But this means the planet is dark, without any sunlight to keep it warm or fuel life. What about the moons of an FFP, however?
It is possible for an FFP to retain some of its moons even once ejected from its system – it would not necessarily be stripped of them in the process. However, the orbits of those moons would likely become more eccentric. Astronomers imagine a large moon orbiting an FFP gas giant in an elliptical orbit. Tidal forces would constantly stretch and squeeze the moon, causing its interior to heat up. These forces can be immense. Io, a large moon of Jupiter, is close enough to the planet that tidal forces keep it constantly volcanic and partially molten, turning itself inside out through that activity. So there would be a tidal Goldilocks zone around such gas giants as well, where a moon is heated enough to support life but does not become a volcanic hellscape.
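For the curious, a standard textbook expression (not taken from the paper discussed here) for the tidal heating rate of a synchronously rotating moon on a slightly eccentric orbit is:

$$\dot{E}_{\text{tidal}} \approx \frac{21}{2}\,\frac{k_2}{Q}\,\frac{G M_p^2 R_m^5 n e^2}{a^6}$$

where $M_p$ is the planet’s mass, $R_m$ the moon’s radius, $a$ and $e$ the orbital semi-major axis and eccentricity, $n$ the orbital mean motion, and $k_2/Q$ describes how easily the moon deforms and dissipates energy. The steep dependence on $a$ and the factor of $e^2$ are why a moderately eccentric orbit close to the planet can keep a moon warm, and why the tidal Goldilocks zone is relatively narrow.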
Such moons could therefore be like Europa, with an icy shell but enough internal heat from tidal forces to keep a liquid ocean. But astronomers also want to know if such a moon could have liquid water on its surface. This would require a thick enough atmosphere to keep the surface water from evaporating away into space. It would also require an atmosphere capable of trapping enough heat to keep the surface warm (in this case the heat would be coming from the moon itself through tidal forces, and not from starlight, but the source of the heat doesn’t matter for this purpose). Astronomers have previously considered CO2 as the heat-trapping gas, and this would work. However, because the upper atmosphere faces the cold dark of space, without a star to warm it, the CO2 would slowly condense out of the atmosphere. Astronomers estimate such a moon could maintain surface water for about 1.3 billion years before the system collapses. This is a long time, long enough for life to arise, but not as long as it took life on Earth to get to its current state of complexity.
In a recent paper astronomers propose another situation that might work better – a mostly hydrogen atmosphere. An H2 dominated atmosphere would also trap sufficient heat (if it were thick enough) to maintain liquid water on the surface, just from internal heat through tidal forces. Further, such an atmosphere would be more stable than a CO2 atmosphere, lasting up to 4.3 billion years – long enough for complex life to evolve. Such life would likely be very different than Earth life, lacking sunlight and therefore photosynthesis, but it could exist.
If this analysis pans out, it could mean that the number of potential locations for life in our galaxy is many times current estimates that do not include such moons. But again – until we actually find such life, we can only speculate about possibilities. Obviously we have no way of traveling to such locations (at least not anytime soon, and likely not for a very long time), but we can look for biosignatures, such as the presence of large amounts of oxygen (or any molecule that is not stable and would have to be constantly replenished by living processes) in the atmosphere.
And of course there is the ultimate question – could such complex life become technological? In that case we might also look for technosignatures. What would an intelligent, technologically advanced species from a hydrogen-shrouded exomoon around a rogue planet be like? Wouldn’t it be wonderful to find out one day?
The post Life on Exomoons first appeared on NeuroLogica Blog.
Creationism, in all its various manifestations, is sophisticated pseudoscience. This makes it a great teaching tool to demonstrate the difference between legitimate science and science denial dressing up as a cheap imitation of science. Creationist arguments are a great example of motivated reasoning, providing copious examples of all the ways logic and argumentation can go awry. It has also been interesting to see creationist arguments (at the leading edge) “adapt” and “evolve” into more complex forms, while maintaining their core feature of denying evolution at all costs.
I am going to focus in this article on young Earth creationists, specifically Answers in Genesis, and something that is a persistent element of their position. Essentially they do not understand the concept of nested hierarchies. I have a strong sense that this is because they are highly motivated not to understand it, because if they did the entire structure of their YEC arguments would collapse.
This AiG article is a great example – Speciation is Not Evolution. The article is more than a bit galling, given that the author seeks to lecture scientists about the use of precise definitions. It begins by patronizingly explaining the humor in the famous “Who’s on First” skit (gee, thanks for that), then accuses scientists of not being precise with their definitions. This is, of course, the opposite of the truth. Good science endeavors to be maximally precise in terminology (hence the jargon of science), and it is creationists who habitually use vague and shifting definitions – such as their abuse of the word “information” and for that matter “evolution”.
We see this right in the title of the article – speciation is not evolution. Well, speciation is part of evolution. No one claims that by itself it encompasses evolution, but it’s a pretty critical part. They play this game frequently, claiming, for example, that natural selection does not increase “information”. Correct – it non-randomly selects information. But mutations, duplications, and recombinations demonstrably increase information. They then argue that mutations only “degrade” information, and duplications only copy what is already there. Mutations change information in ways that can be neutral, positive, or negative, as judged in the context of the individual organism. Duplications absolutely increase the amount of information (again, what definition of information are they even using?), allowing one copy to maintain its original function while the new copy is free to mutate into new functions.
But let’s get to the core argument of this article, that speciation can occur within “kinds” but cannot turn one kind into another. In other words, dogs can evolve into new species of dogs, but a dog can never evolve into a cat. “Evolutionists”, they argue, don’t understand this difference, and so confuse speciation within a kind with “macroevolution” from one kind to another. Meanwhile, they do not have a precise operational definition of what a “kind” is. The word comes from the Bible (God created creatures each according to their own kind) and is not a scientific concept. The author states that a kind roughly correlates to the family level taxonomically. But that doesn’t help. A taxonomic “family” is also not a precise thing. It is simply a categorization convention, and varies tremendously across the tree of life. The same is true of “macroevolution” as creationists use the term – it has no operational definition.
The problem with both of these concepts – kind and macroevolution – is that they suffer from a fatal demarcation problem. There are lots of demarcation problems in science, anytime we are trying to categorize a messy continuum of nature. What’s a planet, or a species, or a continent? The difference is that the YEC argument is contingent on there being a sharp demarcation: evolution can account for this degree of change, but no further. The problem is, they never state any reason, based on any valid principle, why that should be so. They simply assert that kinds are inviolate.
But at the core of their claims is a complete misunderstanding of what evolutionary science actually claims. Ironically, when they say that dogs can only evolve into more dogs, and never into cats – they are correct. Evolutionary scientists agree with this statement, especially if you take a cladistic approach to taxonomy. By definition a clade is an ancestral species and all of its descendants. This is why it is cladistically correct to say that people are fish. Once the eukaryotic clade evolved, everything that descends from it is still a eukaryote. So humans are eukaryotes, and also animals, vertebrates, fish, lobe-finned fish, tetrapods, mammals, and primates. It is correct, for example, to say that all descendants of fish are still fish, but then you have to count humans as fish. What you cannot ever do is go back up the cladistic tree. You cannot undo evolution. You also cannot make a lateral move to another unrelated clade. So an animal cannot evolve into a plant.
The YEC misunderstanding of this concept renders all of their arguments about why evolutionary scientists are wrong into straw men. No one ever said a dog can evolve into a cat – in fact scientists say this is impossible. It is not part of evolutionary thinking.
What creationists do is grossly underestimate how much change can occur within a clade, because they are stuck on the concept of “kinds”. Functionally, what is a kind? It’s one of those things that you vaguely sense – you know it when you see it. Everyone knows what dinosaurs look like – they have a dinosaurish vibe. This is why they falsely argue that birds could not have evolved from dinosaurs. Actually, it is more correct to simply say that birds are dinosaurs – they are a subclade within the dinosaur clade. Birds are also reptiles, because dinosaurs are a subclade within reptiles, which are a subclade within fish, etc. It’s nested hierarchies all the way down. But birds look like a different kind than dinosaurs, so this violates their vague sense of what a kind is. They then mock this idea by analogizing it to a dog evolving into a cat – but this is a false analogy. Dogs and cats are different subclades of mammals, and you cannot evolve from one clade into another, only into subclades within your existing clade.
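To make the nested-hierarchy idea concrete, here is a minimal sketch (the tree is heavily abridged and the clade names simplified) showing that clade membership only ever accumulates – a lineage gains new, more specific clades but never leaves the ones it is already in:

```python
# A heavily abridged slice of the tree of life: each clade maps to its parent (None = root).
parent = {
    "eukaryotes": None,
    "animals": "eukaryotes",
    "vertebrates": "animals",
    "fish": "vertebrates",           # cladistically, every descendant of fish is still a fish
    "lobe-finned fish": "fish",
    "tetrapods": "lobe-finned fish",
    "mammals": "tetrapods",
    "primates": "mammals",
    "humans": "primates",
    "carnivorans": "mammals",
    "dogs": "carnivorans",
    "cats": "carnivorans",
    "reptiles": "tetrapods",
    "dinosaurs": "reptiles",
    "birds": "dinosaurs",
}

def clades_of(taxon: str) -> list[str]:
    """Walk up the tree, collecting every clade the taxon belongs to (including itself)."""
    chain = []
    while taxon is not None:
        chain.append(taxon)
        taxon = parent[taxon]
    return chain

print("fish" in clades_of("humans"))      # True: humans are (cladistically) fish
print("dinosaurs" in clades_of("birds"))  # True: birds are dinosaurs
print("cats" in clades_of("dogs"))        # False: dogs can never evolve into cats
```

You can only ever add branches deeper in the tree; there is no operation that removes an ancestor clade, which is the point about evolutionary constraint.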
Stephen Jay Gould also discussed this idea and zoomed in on an important concept that is highly misunderstood. Over evolutionary time we expect that disparity decreases – not diversity, the number of different species, but disparity, the degree of difference among body plans. This seems counterintuitive, but it makes sense once you fully internalize the concept of nested hierarchies. Multicellular life achieved maximal morphological disparity soon after the Cambrian explosion, and from that point forward we only see variations on the various body-plan themes. The nested hierarchy structure of the tree of life means that we see variations on progressively constrained themes. Evolution is constrained by its history, so the more evolutionary history a lineage has, the more constrained its future evolution. If we look at the entire history of evolution, we see this increasing constraint play out as decreasing disparity. At most disparity can stay the same, but extinction is like a ratchet slowly decreasing it.
To take an extreme example used by Gould to illustrate this, imagine a mass extinction where the only surviving land vertebrates are dogs. Eventually those dogs will adapt and fill all the empty niches – you will have herbivore dogs, grazing dogs, dogs living in trees, predator dogs, and more. But they will all be variations on dogs. A dog will not evolve into a giraffe, but it may evolve into a giraffe-like dog, while still retaining dog features. This is also why using modern extant examples (a dog evolving into a cat) makes no sense. The dog clade is evolutionarily constrained to forever be dogs, even though that can include a lot of diversity. But if you go back in time tens of millions of years, you can find a mammal that was less evolutionarily constrained and that evolved into both cats and dogs.
We can also ask the question – what does the evidence show? AiG illustrates its position with a picture of speciation within kinds, each kind depicted as its own separate little tree. The depiction of each clade is conceptually not bad (I don’t think it was meant to be literally accurate), but it artificially stops at an arbitrary line of “kinds”. Does the evidence support this view? What would we expect to see if each kind were created unto itself and separate from all other kinds? What would we expect to see if these nested hierarchies go all the way back to the beginning of life? You can fill a book reviewing the actual evidence, but let me give a quick summary.
If the YEC schematic is correct, then we would expect to see discrete clades that can be cleanly separated – morphologically, genetically, physiologically and biochemically. If the evolution schematic is correct then we would not expect any clean separation, but a continuum along all these features leading back as far as the evidence goes. The bottom line is that the evidence is a home run for the evolutionary prediction. Creationists deal with this devastating fact in a couple of ways. First, they often simply deny the evidence, saying things like “there are no transitional fossils”. They support this claim by mischaracterizing the evidence, ignoring evidence, and also by playing loose with the definition of “transitional”.
They also make the claim that any similarities between kinds are due to each kind having the same creator. Why would the creator reinvent the wheel with each kind? Of course he just used the same solutions over and over again. But this argument only goes so far. There are numerous connections between clades that go far beyond utility, such as viral insertions. The genetic material from a virus can get stuck in the genome of a creature, and then persist down through its clade. These are non-functional bits of viral residue in the genome, and they provide a map of nested hierarchies that follows clades but violates any notion of kinds.
We can also look at the fossil record temporally. In the YEC model, we should see all kinds appearing at the same time (creation), then going through a simultaneous bottleneck (the flood), followed by speciation into our current extant species. That is not what we see – not even close. Some will say: what about the Cambrian, the sudden appearance of all kinds? Um, no. There are no birds, dogs, triceratops, horses, or humans in the Cambrian. The family-level kinds they say exist were simply not present in the Cambrian fauna. The Cambrian explosion produced mainly the multicellular animal phyla (basic body plans), including some that are now extinct. If they claimed that kinds were phyla and that they were created 500 million years ago, they would have a stronger case. But that is not what they say. We then see increasing diversity within clades, with new subclades appearing over evolutionary time. We basically see exactly what we would predict if all life shares a common ancestor, and not what we would expect to see if life were divided into family-level kinds created all at the same time.
Creationists cannot engage with what evolutionary scientists actually claim, so they have to invent ridiculous straw men to attack. They use loose and shifting definitions, and then have the gall to falsely accuse scientists of doing exactly that. They can’t explain the evidence, so they have to ignore it and distort it beyond all recognition.
And to clarify my position, in case you are new to this blog, I am not against belief in God and essentially don’t care what anyone believes when it comes to metaphysical questions. But science follows methodological naturalism, and if you follow the methods of science there is only one logical, evidence-based, and scientific answer to the question of the origin of species. The evidence overwhelmingly shows that all life is descended from a common ancestor in a nested hierarchy of relationships.
The post Creationists Don’t Understand Nested Hierarchies first appeared on NeuroLogica Blog.
Researchers have recently published a discovery that could lead to more efficient photosynthesis in many crops. It’s hard to overstate how impactful this would be, as this could significantly increase crop yields while decreasing inputs. The growing human population makes such advances critical. Even without that factor, increasing yields decreases the land intensiveness of agriculture, which has a dramatic impact on our environment and sustainability. Improved photosynthesis would be a win across the board.
Before we get into the study there are a couple of points I want to explore. When I first learned of the various research efforts to improve photosynthesis my first reaction was – why hasn’t evolution already optimized something that is so critical to all life? The first photosynthetic organisms evolved at least 3.4 billion years ago. That’s a lot of time for evolutionary tweaking. So why is efficiency still an issue? There are a couple of answers, but the primary one appears to be the constraints of evolutionary history. What this means is that evolution can only work with what it has, and it cannot undo its history. Once development leads down a certain path, evolution can make variations on that path, but it cannot go back in time and take a completely different path. All vertebrates are variations on a basic body plan, for example.
So what are the evolutionary constraints of photosynthesis? Photosynthesis uses the energy of sunlight to combine carbon dioxide (CO2) with water (H2O) to make glucose and oxygen. Critical to this reaction is an enzyme, ribulose-1,5-bisphosphate carboxylase/oxygenase (RubisCO), which fixes the carbon from CO2 into organic compounds. This enzyme, RubisCO, is responsible for over 90% of all carbon in living things. It is the most abundant enzyme in the world and a cornerstone of living ecosystems, which mostly depend on energy from the sun.
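For reference, the standard textbook net reaction (glossing over the many intermediate steps) is:

$$6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{light energy} \rightarrow \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}$$

RubisCO handles the carbon-fixing step on the left side of this equation, attaching CO2 to a five-carbon sugar (ribulose-1,5-bisphosphate) so that carbon can enter the pathway at all.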
RubisCO, however, is not very efficient. It does not catalyze the reaction very quickly or very specifically. The most likely reason for this inefficiency is that RubisCO evolved on the ancient Earth, before the “great oxidation event”. It evolved when the atmosphere had lots of CO2 but little or no oxygen, and therefore it did not have to distinguish between the two – there was no selective pressure for an enzyme that would catalyze a reaction with CO2 but not O2. RubisCO catalyzes both. By the time oxygen started to build up in the atmosphere, RubisCO was well established as the enzyme of photosynthesis. There is also a tradeoff between efficiency and specificity: the more specific RubisCO is for CO2 over O2, the slower the reaction, and the faster the reaction, the lower the specificity (the more “mistakes” the enzyme makes by catalyzing the side reaction with O2).
To be clear, scientists often use metaphors when discussing this situation. RubisCO does not really make “mistakes”, it just does what it does. And the reaction with O2 is only a “side” reaction from the perspective of what’s best for the organism and of evolutionary selective pressures (but that is the context that matters). So evolution has tweaked RubisCO over billions of years to strike the optimal balance between efficiency and specificity. It should also be noted that this side reaction with O2 is not just wasteful, it creates toxic compounds that have to be cleared. It is estimated that plants waste 30% of the energy captured from sunlight creating and then dealing with these O2 side reactions (a process known as photorespiration). But evolution was effectively “trapped” in this tradeoff. Organisms had been using RubisCO for over a billion years prior to the great oxidation event and were too dependent on it to evolve a completely new method of photosynthesis.
How do we break out of this trap? For this we need another concept – stoichiometry. You may remember the Bunsen burners from high school science class. You have to adjust the air intake to get the flame to go from a sputtering yellow flame to a bright blue steady flame. You need just the right ratio of gas to air to optimize the efficiency of the reaction. The situation with RubisCO is similar, although simpler. We need to maximize the concentration of CO2 and minimize the concentration of O2 around the RubisCO, in order to simultaneously improve the efficiency and specificity of the reaction. The mechanisms that do this are called carbon concentrating mechanisms, or CCMs. The idea may be simple, but evolutionarily it is apparently very difficult, judging by how few lineages have evolved such CO2-concentrating mechanisms.
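As a rough sketch of why concentrating CO2 matters (the specific numbers here are illustrative assumptions, not measured values for any real plant), the balance between productive carboxylation and wasteful oxygenation scales with the enzyme’s CO2/O2 specificity and with the CO2:O2 ratio the enzyme actually sees:

```python
def carboxylation_fraction(specificity: float, co2: float, o2: float) -> float:
    """Fraction of RubisCO reactions that fix CO2 rather than react with O2,
    using the standard approximation  v_c / v_o = specificity * [CO2] / [O2]."""
    ratio = specificity * co2 / o2
    return ratio / (1.0 + ratio)

# Illustrative (assumed) values: a specificity of ~100, and relative gas concentrations
ambient  = carboxylation_fraction(100, co2=1.0, o2=50.0)   # CO2-poor, O2-rich conditions
with_ccm = carboxylation_fraction(100, co2=10.0, o2=10.0)  # CO2 concentrated, O2 kept low
print(f"without CCM: {ambient:.0%} productive, with CCM: {with_ccm:.0%} productive")
```

Pushing the local CO2:O2 ratio up shifts the enzyme strongly toward the productive reaction, which is the whole point of a CCM.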
Cyanobacteria and eukaryotic algae have evolved CCMs. Algae specifically evolved structures called pyrenoids which concentrate RubisCO in parts of the chloroplasts where CO2 can also be concentrated. Researchers have been trying to understand the genetics and physiology of these CCMs to see if they can be ported to land plants, specifically crops. Unfortunately, these CCM systems are complex, involving many genes working together. Plus the evolutionary distance between algae and land plants makes adapting these systems difficult.
This brings us to the latest study – which looks at the CCM in a specific type of land plant. About 8-15% of land plants have also evolved some sort of CCM, so most still use the traditional C3 pathway with no CCM. Perhaps the CCM in one of these branches of land plants could more easily be adapted to crops. Some plants use what is called C4 photosynthesis, which uses a biochemical pump to move CO2 into bundle sheath cells. This evolved only about 20-30 million years ago, and is found in maize, sorghum, sugarcane, and some tropical grasses. Another mechanism is CAM (crassulacean acid metabolism), in which plants take up CO2 at night, store it as an acid, then release it during the day to boost CO2 levels during photosynthesis. Then there are the hornworts, which concentrate RubisCO using organelles similar to those in algae. The recent study looks at this third mechanism.
Here’s the good news – the researchers found that hornworts (which are small ground plants) use a very simple mechanism. There is an extra tail on the C terminus of one of the subunits of RubisCO. The researchers named this region RbcS-STAR, or the STAR region of RubisCO. This extra tail acts like Velcro, causing RubisCO to stick together and clump, which is good if you want to concentrate CO2 and RubisCO in the same part of the cell. They added the STAR piece to a relative of hornwort, and it worked. They added it to Arabidopsis, an unrelated plant often used in research, and this also caused the RubisCO to clump. So they demonstrated that STAR works, even in unrelated species. This suggests that RbcS-STAR will likely work in a diverse range of plants.
However – the research is not done yet. Essentially they have only one half of the job done. Now they need to find a way to bring high concentrations of CO2 to the clumps of RubisCO. Perhaps they can borrow the biochemical pumps from C4 plants. There is already extensive research into porting C4 photosynthesis into C3 crops, like wheat and rice. These efforts have proved challenging, because they involve complex leaf restructuring (such as increasing the density of veins). It is possible that this discovery of RbcS-STAR could offer a simpler solution to making C4 work in these plants.
Making C4 wheat or rice could increase their yield by up to 50%. That would be transformative to agriculture, and is worth the extensive research into cracking this complex problem. While the current discovery is just one possible piece to the puzzle, it is very encouraging and hopefully moves us significantly closer to a solution.
The post Improved Photosynthesis first appeared on NeuroLogica Blog.