What the true impact of artificial intelligence (AI) is and soon will be remains a point of contention. Even among scientifically literate skeptics people tend to fall into decidedly different narratives. Also, when being interviewed I can almost guarantee now that I will be asked what I think about the impact of AI – will it help, will it hurt, is it real, is it a sham? The reason I think there is so much disagreement is because all of these things are true at the same time. Different attitudes toward AI are partly due to confirmation bias. Once you have an AI narrative, you can easily find support for that narrative. But also I think part of the reason is that what you see depends on where you look.
The “AI is mostly hype” narrative derives partly from the fact that the current AI applications are not necessarily fundamentally different than AI applications in the last few decades. The big difference, of course, is the large language models, which are built on a transformer technology. This allows for training on massive sets of unstructured data (like the internet), and to simulate human speech is a very realistic manner. But they are still narrow AI, without any true understanding of concepts. This is why they “hallucinate” and lie – they are generating probable patterns, not actually thinking about the world.
So you can make the argument that recent AI is nothing fundamentally new, the output is highly flawed, still brittle in many ways, and mostly just flashy toys and ways to steal the creative output of people (who are generating the actual content). Or, you can look at the same data and conclude that AI has made incredible strides and we are just seeing its true potential. Applications like this one, that transforms old stills into brief movies, give us a glimpse of a “black mirror” near future where amazing digital creations will become our everyday experience.
But also, I think the “AI is hype” narrative is looking at only part of the elephant. Forget the fancy videos and pictures, AI is transforming scientific research in many areas. I read dozens of science news press releases every week, and there is now a steady stream of news items about how using AI allowed researchers to perform months of research in hours, or accomplish tasks previously unattainable. The ability to find patterns in vast amounts of data is a perfect fit for genetics research, proteinomics, material science, neuroscience, astronomy, and other areas. AI is also poised to transform medical research and practice. The biggest problem for a modern clinician is the vast amount of data they need to deal with. It’s literally impossible to keep up in anything but a very narrow area, which is while so many clinicians specialize. But this causes a lack of generalists who play a critical role in patient care.
AI has already proven to be equal to or superior to human clinicians in reading medical scans, making diagnoses, and finding potential interactions, for example. This is mostly just using generic Chat-GPT type programs, but there are medical specific ones coming out. AI also is a perfect match for certain types of technology, such as robotics and brain-machine interface. For example, allowing users to control a robotic prosthetic limb is greatly improved, with training accelerated, using AI. AI apps can predict what the user wants to do, and can find patterns in nerve or muscle activity to correspond to the desired movement.
These are concrete and undeniable applications that pretty much destroy the “AI is all hype” narrative. But – that does not mean that other proposed AI applications are not mostly hype. Most new technologies are accompanied by the snake oil peddlers hoping to cash in on the resulting hype and the general unfamiliarity of the public with the new technology. AI is also very much a tool looking for an application, and that will take time, to sort out what it does best, where it works and where it doesn’t. We have to keep in mind how fast this is all moving.
I am reminded of the early days of the web. One of my colleagues observed that the internet was going to go the way of CB radio – it was a fad without any real application that would soon fade. Many people shared a similar opinion – what was this all for, anyway? Meanwhile there was an internet-driven tech bubble that was literally mostly hype, and that soon burst. At the same time there were those who saw the potential of the internet and the web and landed on those applications for which it was best suited (and became billionaires). We cannot deny now that the web has transformed our society, the way we shop, the way we consume news and communicate, and the way we consume media, and spend a lot of our time (what are you doing right now?). The web was hype, and real, and caused harm, and is a great tool.
AI is the same, just at an earlier part of the curve. It is hype, but also a powerful tool. We are still sorting out what it works best for and where its true potential lies. It is and will transform our world, and it will be for both good and for ill. So don’t believe all the hype, but ignore it at your peril. If it will be a net positive or negative for society depends on us – how we use it, how we support it, and how we regulate it. We basically failed to regulate social media and are now paying the price while scrambling to correct our mistakes. Probably the same thing will happen with AI, but there is an outside chance we may learn from our recent and very similar mistakes and get ahead of the curve. I wouldn’t hold my breath (certainly not in the current political environment), but crazier things have happened.
Like with any technology – it can be used for good or bad, and the more powerful it is the greater the potential benefit or harm. AI is the nuclear weapon of the digital world. I think the biggest legitimate concern is that it will become a powerful tool in the hands of authoritarian governments. AI could become an overwhelming tool of surveillance and oppression. Not thinking about this early in the game may be a mistake from which there is no recovery.
The post The AI Conundrum first appeared on NeuroLogica Blog.
My last post was about floating nuclear power plants. By coincidence I then ran across a news item about floating solar installations. This is also a potentially useful idea, and is already being implemented and increasing. It is estimated that in 2022 total installed floating solar was at 13 gigawatts capacity (growing from only 3 GW in 2020). The growth rate is estimated to be 34% per year.
“Floatovoltaics”, as they are apparently called, are grid-scale solar installations on floating platforms. They are typically installed on artificial bodies of water, such as reservoirs and irrigations ponds. Such installations can have two main advantages. They can reduce evaporation which helps preserve the reservoirs. They also are a source of clean energy without having to use cropland or other land.
Land use can be a major limiting factor of solar power, depending on how it is installed. Here is an interesting comparison of the various energy sources and their land use. The greatest land use per energy produced is hydroelectric (33 m^2 / MWh). The best is nuclear, at 0.3 (that’s two orders of magnitude better). Rooftop solar is among the best at 1.2, while solar photovoltaic installed on land is among the worst at 19. This is exactly why I am a big advocate of rooftop solar, even though this is more expensive up front than grid-scale installations. Right now in the US rooftop solar produces about 1.5% of electricity, but the total potential capacity is about 45%. More realistically (excluding the least optimal locations), shooting for 20-30% of energy production from rooftop solar is a reasonable goal. If this is paired with home battery backup, this makes solar power even better.
Floating solar installations have the potential of having the best of both worlds – less land use than land-based solar, and better economics and rooftop solar. If the installation is serving double-duty as an evaporation-prevention strategy, this is even better. This also can potentially dovetail nicely with closed loop pumped hydro. This is a promising grid-level energy storage solution, in that it can store massive amounts of energy for long periods of time, enough to shift energy production to demand seasonally. The main source of energy loss with pumped hydro is evaporation, which can be mitigated by anti-evaporation strategies, which could include floating solar. Potentially you could have a large floating solar installation on top of a reservoir used for closed-loop pumped hydro, which stores the energy produced by the solar installation.
But of course no energy source is without its environmental impact. For floating solar one significant concern is the impact on water birds (where there are bodies of water, even artificial ones, there are water birds). This is an issue because water bird populations are already in decline. Unfortunately, right now we have very little data. We need to see how the installations effect water birds, and how those bird would affect the installations. The linked research is mainly laying out the questions we need to ask. I doubt this will become a deal-killer for floating solar. Mainly it’s good to know how to do this with minimal impact on wildlife.
This is true of energy production in general, and perhaps especially renewable energy as we plan to dramatically increase renewable energy installations. There has already been a big conversation around wind turbines and birds. Yes, wind turbines do kill birds. Even off shore wind turbines kill birds. In the US it is estimated that between 150k and 700k birds are killed annually by wind turbine. However, this is a round-off error to the 1-3 billion birds killed by domestic cats annually. It is also estimated that over 1 billion birds die annually by flying into windows. We can save far more bird lives by keeping domestic cats indoors, controlling feral cat population, and using bird-safe windows on big buildings than all the birds killed by renewable energy. But sure, we can also deploy wind turbines in locations designed to minimize the impact on wild life (birds and bats mostly). We should not put them in corridors used for bird migration or feeding, for example.
The same goes for floating solar – there are likely ways to deploy floating solar to minimize the impact on water birds and their ecosystems. The impact will never be zero, and we have to keep things in perspective, but taking reasonable measures to minimize the negative environmental impact of our energy production is a good idea.
We also have to keep in mind that all of the negative environmental impacts of renewable energy (and nuclear power, for that matter – any low carbon energy source), is dwarfed by the environmental impact of burning fossil fuel. Fossil fuel plants kill an estimated 14.5 million birds in the US annually – about a 500-1000 times as many as wind and solar combined. And this is direct causes of death from impact with infrastructure and pollution. This doesn’t even count global warming. Once we factor that in, any environmental impact comparison is very likely to favor just about anything except fossil fuel.
Likely we will have a lot more floating solar installations in our future, and this is also likely a good thing.
The post Floating Solar Farms first appeared on NeuroLogica Blog.
This is an intriguing idea, and one that I can see becoming critical over the next few decades, or never manifesting – developing a fleet of floating nuclear power plants. One company, Core Power, is working on this technology and plans to have commercially deployable plants by 2035. Company press releases touting their own technology and innovation is hardly an objective and reliable source, but that doesn’t mean the idea does not have merit. So let’s explore the pros and cons.
The first nuclear-powered ship, the USS Nautilus, was deployed in 1955. So in that sense we have had ship-based nuclear reactors operating continuously (collectively, not individually) for the last 70 years. Right now there are about 160 nuclear powered ships in operation, mostly submarines and aircraft carriers. They generally produce several hundred megawatts of electricity, compared to around 1600 for a typical large nuclear reactor. They are, however, in the range of small modular reactors which have been proposed as the next generation of land-based nuclear power. The US has operated nuclear powered ships without incident – a remarkable safety record. There have been a couple of incidents with Soviet ships, but arguably that was a Soviet problem, not an issue with the technology. In any case, that is a very long record of safe and effective operation.
Core Power wants to take this concept and adapt is for commercial energy production. They are designing nuclear power barges – large ships that are designed only to produce nuclear power, so all of their space can be dedicated to this purpose, and they can produce as much electricity as a standard nuclear power plant. They plan on using a Gen IV salt-cooled reactor design, which is inherently safer than older designs and does not require high pressure for operation and cooling.
The potential advantages of this approach are that these nuclear barges can be produced in a centralized manufacturing location, essentially a shipyard, with allows for economies of scale and mass production. They intend to leverage the existing experience and workforce for shipyards to keep costs down and production high. The barges can then be towed to the desired location. Core Power points out that 65% of economic activity occurs in coastal regions, therefore the demand for power there is high, and offshore power could provide some of that demand. Nuclear barges could be towed into port or they could be anchored farther off shore. Maintenance and waste disposal could all be handled centrally. Since there is no site preparation, that is a huge time and cost savings. Further there is no land use, and these barged could be place relatively close to dense urban centers.
There are potential downsides. The first that comes to mind is that there isn’t a pre-existing connection to the grid. One of the advantages of land-based nuclear is that you can decommission a coal plant and then build a nuclear power plant on the same site and use the same grid connections. This of course is not a deal killer, but it will require new infrastructure. A second issue is safety. While ship-based nuclear has a long and safe history, this would be a new design. Further, a radiation leak in a coastal environment could be disastrous and this would need to be studied. I do think this concept is only viable because of the salt-cooled design, but still it will require extensive safety regulation.
And this relates to another potential problem – the mid-2030s is likely ambitious. While I think we should “warp speed” new nuclear to fight climate change, this unfortunately is not likely to happen. New projects like this can get bogged down in regulation. Safety regulation is, in itself, reasonable, and it will likely be a tough sell to speed up or streamline safety. There is a reasonable compromise between speed and safety, and I can only hope we will get close to this optimal compromise, but history tells a different story.
What about the usual complaint of nuclear waste? This is often the reason given for those who are anti-nuclear. I have discussed this before – waste is actually not that big a problem. The highly radioactive waste is short-lived, and the long half-life nuclear waste is very low level (by definition). We just need to put it somewhere. Right now this is purely a political (mostly NIMBY) problem, not a technology problem.
On balance it seems like this is an idea worth exploring. Given the looming reality of climate change, exploring all options is the best way forward. Also, Core Power plans, as a phase 2, to adapt their technology for a commercial fleet of nuclear powered ships. Ocean shipping produces about 3% of global CO2 emissions, which is not insignificant. If our cargo carriers were mostly nuclear powered that could avoid a lot of CO2 release. They are also not the only company working on this technology. A nuclear cargo ship would have more space for cargo, since it doesn’t need to carry a lot of fuel for itself. It would also be able to operate for years without refueling. This means it can be commercially viable for shipping companies.
Maritime nuclear power may turn out to be an important part of the solution to our green house gas problem. The technology seems viable. The determining factor may simply be how much of a priority do we make it. Given the realities of climate change, I don’t see why we shouldn’t make it a high priority.
The post Floating Nuclear Power Plants first appeared on NeuroLogica Blog.
The recent discussions about autism have been fascinating, partly because there is a robust neurodiversity community who have very deep, personal, and thoughtful opinions about the whole thing. One of the issues that has come up after we discussed this on the SGU was that of self-diagnosis. Some people in the community are essentially self-diagnosed as being on the autism spectrum. Cara and I both reflexively said this was not a good thing, and then moved on. But some in the community who are self-diagnosed took exception to our dismissiveness. I didn’t even realize this was a point of contention.
Two issues came up, the reasons they feel they need self-diagnosis, and the accuracy of self diagnosis. The main reason given to support self-diagnoses was the lack of adequate professional services available. It can be difficult to find a qualified practitioner. It can take a long time to get an appointment. Insurance does not cover “mental health” services very well, and so often getting a professional diagnosis would simply be too expensive for many to afford. So self-diagnosis is their only practical option.
I get this, and I have been complaining about the lack of mental health services for a long time. The solution here is to increase the services available and insurance coverage, not to rely on self-diagnosis. But this will not happen overnight, and may not happen anytime soon, so they have a point. But this doesn’t change the unavoidable reality that diagnoses based upon neurological and psychological signs and symptoms are extremely difficult, and self-diagnosis in any medical area is also fraught with challenges. Let me start by discussing the issues with self-diagnosis generally (not specifically with autism).
I wrote recently about the phenomenon of diagnosis itself. (I do recommend you read that article first, if you haven’t already.) A medical/psychological diagnosis is a complex multifaceted phenomenon. It exists in a specific context and for a specific purpose. Diagnoses can be purely descriptive, based on clinical signs and symptoms, or based on various kinds of biological markers – blood tests, anatomical scans, biopsy findings, functional tests, or genetics. Also, clinical entities are often not discrete, but are fuzzy around the edges, manifest differently in different populations and individuals, and overlap with other diagnoses. Some diagnoses are just placeholders for things we don’t understand. There are also generic categorization issues, like lumping vs splitting (do we use big umbrella diagnoses or split every small difference up into its own diagnosis?).
Ideally, I diagnostic label predicts something. It informs prognosis, or helps us manage the patient or client, for example by determining which treatments they are likely to respond to. Diagnostic labels are also used for researchers to communicate with each other. They are also used as regulatory categories (for example, a drug can only have an FDA indication to treat a specific disease). Diagnostic labels are also used for public health communication. Sometimes a diagnostic label can serve all of these purposes well at once, but often they are at cross-purposes.
Given this complexity, it takes a lot of topic expertise to know how to apply diagnostic criteria. This is especially true in neurology and psychology where signs and symptoms can be difficult to parse, and there are many potential lines of cause and effect. For example, someone can have primary anxiety and their anxiety then causes or exacerbates physical symptoms. Or, someone can have physical symptoms that then cause or exacerbate their anxiety. Or both can be true at the same time, and the conditions are “comorbid”.
One main problem with self-diagnosis is that a complex diagnosis requires objectivity, and by definition it is difficult to be objective about yourself. Fear, anxiety, and neuroticism make it even more difficult. As a clinician I see all the time the end-results of self-diagnosis. They are usually a manifestation of the patient’s limited knowledge and their fears and concerns. We see this commonly in medical students, for example. It is a running joke in medical education that students will self-diagnosis with many of the conditions that they are studying. We discuss this with them, and why this is happening.
This is partly the Forer Effect – the tendency to see ourselves in any description. This is mostly confirmation bias – we cherry pick the parts that seem to fit us, and we unconsciously search our vast database of life experience to search for matches to the target symptoms. Yes, I do occasionally cough. My back does hurt at times. Now imagine this process with cognitive symptoms – I do get overwhelmed at times. I can focus on small details and get distracted, etc. With the Forer Effect (the most common example of this is people seeing themselves in any astrological personality profile), the more vague or non-specific the description, the stronger the effect. This makes psychological diagnoses more susceptible.
To make an accurate diagnosis one also need to understand the different between specific and non-specific symptoms. A fever is a symptom of an acute or subacute Lyme infection, but it is an extremely non-specific one as fevers can result from hundreds of causes. A targeted rash is a specific sign (so specific it is called pathognomonic, meaning if you have the sign you have the disease). (BTW – a symptom is something you experience, a sign is something someone else sees.) So, having a list of symptoms that are consistent with a diagnosis, but all non-specific, is actually not that predictive. But the natural tendency is to think that it is – “I have all the symptoms of this disease” is a common refrain I hear from the wrongly self-diagnosed.
Also, it is important to determine if any symptoms can have another cause. If someone is depressed, for example, because a loved-one just died, that depression is reactive and healthy, not a symptom of a disorder.
Further, many signs and symptoms are a matter of degree. All muscles twitch, for example, and a certain amount of twitching is considered to be physiological (and normal). At some point twitching becomes pathological. Even then it may be benign or a sign of a serious underlying neurological condition. But if you go on the internet and look up muscle twitching, you are likely to self-diagnose with a horrible condition.
An experienced clinician can put all of this into perspective, and make a formal diagnosis that actually has some predictive value and can be used to make clinical decisions. Self-diagnosis, however, is hit or miss. Mainly I see false-positives, people who think they have a diagnosis based on anxiety or non-specific symptoms. These tend to cluster around diagnoses that are popular or faddish. The internet is now a major driver of incorrect self-diagnosis. Some people, or their families, do correctly self-diagnose. Some neurological conditions, like Parkinson’s disease, for example, tend to have fairly easily detected and specific signs and symptoms that a non-expert can recognize. Even with PD, however, there are subtypes of PD and there are some secondary causes and comorbidities, so you still need a formal expert diagnosis.
With autism spectrum disorder, I do not doubt that some people can correctly determine that they are on the spectrum. But I would not rely on self-diagnosis or think that it is automatically accurate (because people know themselves). The diagnosis still benefits from formal testing, using formal criteria and cutoffs, ruling out other conditions and comorbidities, and putting it all into perspective. I also am concerned that self-diagnosis can lead to self-treatment, which has a separate list of concerns worthy of its own article. Further, the internet makes it easy to create communities of people who are self-diagnosed and seeking self-treatment, or getting hooked up with dubious practitioners more than willing to sell them snake oil. I am not specifically talking about autism here, although this does exist (largely attached to the anti-vaccine and alternative medicine cultures).
There is now, for example, a chronic Lyme community who help each other self diagnosis and get treated by “Lyme literate” practitioners. This community and diagnosis are now separate from scientific reality, existing in their own bubble, one which foments distrust of institutions and seeks out “mavericks” brave enough to go against the system. It’s all very toxic and counterproductive. This is what concerns me the most about an internet fueled community of the self-diagnosed – that it will drift off into its own world, and become the target of charlatans and snake oil peddlers. The institutions we have an the people who fill them are not perfect – but they exist for a reason, and they do have standards. I would not casually toss them aside.
The post The Problem with Self-Diagnosis first appeared on NeuroLogica Blog.
As a scientific concept – does race exist? Is it a useful construct, or is it more misleading than useful? I wrote about this question in 2016, and my thinking has evolved a bit since then. My bottom line conclusion has not changed – the answer is, it depends. There is no fully objective answer because this is ultimately a matter of categorization which involves arbitrary choices, such as how to weight different features, how much difference is meaningful, and where to draw lines. People can also agree on all the relevant facts, but disagree simply on emphasis. (If all of this is sounding familiar it’s because the same issues exist surrounding biological sex.)
Here are some relevant facts. Humans – Homo sapiens – are a single species. While we are an outbred species with a lot of genetic diversity, we have passed through several fairly recent genetic bottlenecks (most recently around 900k years ago) and the genetic disparity (amount of difference) among humans is relatively small (about 0.1%). It is also true that genetic variation is not evenly distributed among human populations but tend to cluster geographically. However, genetic variation within these clusters is greater than genetic variation between these clusters. Further, obvious morphological differences between identifiable groups tend to be superficial and not a good reflection of underlying genetic diversity. But at the same time, genetic background can be meaningful – predicting the risk of developing certain diseases or responding to certain medications, for example. Genetic variation is also not evenly distributed. Most genetic variations within humans is among Africans, because all non-Africans are derived from a recent genetic bottleneck population about 50-70k years ago.
How should we summarize all of these non-controversial and generally agreed upon facts? You can emphasize the clustering and say that something akin to race exists and is meaningful, or you can emphasize the genetic similarity of all humans and lack of discrete groups to say that race is not a meaningful or helpful concept. So, as a purely scientific question we have to recognize that there is no completely objective answer here. There are just different perspectives. However, that does not mean that every perspective is equally strong or that our choice of emphasis cannot be determined by other factors, such as their utility in specific contexts.
But there is another dimension here – the term “race” has a specific history of use. It is a very loaded term, unlike, say, referring to “genetic clusterings” or terms we often use with reference to other species, like subpopulations or breeds. The term race has a cultural history, generally referring to continent of origin. There is also a scientific history going back to Linnaeus, who thought there were four human races which he characterized by color – white, black, yellow, and red. Linnaeus’s “races” persisted in scientific thinking for two centuries, and still dominates our culture. When people say something like – “race does not exist” or “race is a social construct”, this is what they are referring to. It does not mean there are no genetic clusterings, just that the traditional races are not genetically meaningful. As one geneticist put it in a recent BBC article:
“By the time we began to look at how genes are shared in families and populations, we saw that similarities do indeed cluster in groups, but these groupings do not align with the longstanding attempts to classify the races. The true metric of human difference is at a genetic level. In the 20th Century, when we began to unravel our genomes, and observe how people are similar and different in our DNA, we saw that the terms in use for several centuries bore little meaningful relation to the underlying genetics.”
In medicine there is a very practical aspect to this discussion, because we use genetic history to help us estimate statistical risks in various medical contexts. Over my career we have moved away (admittedly, not entirely, cultural inertia can be strong) from characterizing patients or research subjects by race. This is not because it is politically incorrect, but because it is scientifically misleading. Instead we use a less specific term like “ancestry”. This is really just an extension of family history, which has long been a separate part of a patient’s history. We want to know the medical history of their immediate relatives because that can help us predict their disease risk. Ancestry is basically a “family” history but going back further, to successive ancestral clusterings (without favoring any particular level). Do you have ancestors who came from Africa? Do you have ancestors who were part of a founder population with a specific genetic illness?
Labeling someone as “black” or “caucasian” or “asian” is not genetically meaningful. There is nothing special about that level of clustering and these are not real or meaningful genetic groups. If you look, for example, at a genetic map, rather than skin color, you would never intuitively cluster humans into the traditional races.
But again, cultural inertia can be strong. From a science education and public understanding point of view perhaps we need to simply stop using the term race and instead refer to ancestry, or genetic populations or clustering. We should use language that properly reflects the scientific reality rather than the social history.
There is one more point of complexity, however. Sometimes we are having a social conversation. If race is a social construct, it has meaning in a social context (even if it is not scientifically meaningful). Identified race is a real social factor that influences people’s lives. So now we need to find a way to talk about genetic ancestry and social race as two distinct things, even though they were highly conflated in the past (and still in many people’s minds today). That’s a tricky one. Still probably best to dispense with the highly loaded term “race” and just come up with distinct terminology depending on whether your are discussing a social group or a genetic clustering.
The post The Race Question first appeared on NeuroLogica Blog.
In my previous post I wrote about how we think about and talk about autism spectrum disorder (ASD), and how RFK Jr misunderstands and exploits this complexity to weave his anti-vaccine crank narrative. There is also another challenge in the conversation about autism, which exists for many diagnoses – how do we talk about it in a way that is scientifically accurate, useful, and yet not needlessly stigmatizing or negative? A recent NYT op-ed by a parent of a child with profound autism had this to say:
“Many advocacy groups focus so much on acceptance, inclusion and celebrating neurodiversity that it can feel as if they are avoiding uncomfortable truths about children like mine. Parents are encouraged not to use words like “severe,” “profound” or even “Level 3” to describe our child’s autism; we’re told those terms are stigmatizing and we should instead speak of “high support needs.” A Harvard-affiliated research center halted a panel on autism awareness in 2022 after students claimed that the panel’s language about treating autism was “toxic.” A student petition circulated on Change.org said that autism ‘is not an illness or disease and, most importantly, it is not inherently negative.'”
I’m afraid there is no clean answer here, there are just tradeoffs. Let’s look at this question (essentially, how do we label ASD) from two basic perspectives – scientific and cultural. You may think that a purely scientific approach would be easier and result in a clear answer, but that is not the case. While science strives to be objective, the universe is really complex, and our attempts at making it understandable and manageable through categorization involve subjective choices and tradeoffs. As a physician I have had to become comfortable with this reality. Diagnoses are often squirrelly things.
When the profession creates or modifies a diagnosis, this is really a type of categorization. There are different criteria that we could potentially use to define a diagnostic label or category. We could use clinical criteria – what are the signs, symptoms, demographics, and natural history of the diagnosis in question? This is often where diagnoses begin their lives, as a pure description of what is being seen in the clinic. Clinical entities almost always present as a range of characteristics, because people are different and even specific diseases will manifest differently. The question then becomes – are we looking at one disease, multiple diseases, variations on a theme, or completely different processes that just overlap in the signs and symptoms they cause. This leads to the infamous “lumper vs splitter” debate – do we tend to lump similar entities together in big categories or split everything up into very specific entities, based on even tiny differences?
The more we learn about these burgeoning diagnoses the more the diagnostic criteria might shift away from a purely clinical descriptive one. Perhaps we find some laboratory marker (such as a result on a blood test, or finding on an MRI scan of the brain). What if that marker has an 80% correlation to the clinical syndrome? How do we use that as a diagnostic criterion? The more we learn about pathophysiology, the more these specific biological factors become part of the diagnosis. Sometimes this leads to discrete diagnoses – such as when it is discovered that a specific genetic mutation causes a specific disease. The mutation becomes the diagnosis. But that is often not the case. The game changes again when treatments become available, then diagnostic criteria tends to shift toward those that predict response to treatment.
One question, therefore, when determining the best way to establish a specific diagnostic label is – what is your purpose? You might need a meaningful label that helps guide and discuss basic science research into underlying phenomena. You may need a diagnosis that helps predict natural history (prognosis), or that guides treatment, or you may need a box to check on the billing form for insurance, or you may need a diagnosis as a regulatory entity (for FDA approval for a drug, say).
ASD has many of these issues. Researchers like the spectrum approach because they see ASD as different manifestations of one type of underlying neurological phenomenon. There are many genes involved, and changes to the pattern of connectivity among brain cells. Clinicians may find this lumper approach a double-edged sword. It may help if there is a single diagnostic approach – scoring on standardized tests of cognitive, motor, language and social functioning, for example. But it also causes confusion because one label can mean such dramatically different things clinically. The diagnosis is also now often attached to services, so there is a very practical aspect to it (and one major reason why the diagnosis has increased in recent years – it gets you services that a less specific diagnosis might not).
Now let’s go to the social approach to the ASD diagnosis. The purely scientific approach is not clean because “science” can refer to basic science or clinical science, and the clinical side can have multiple different approaches. This means science cannot currently solve all the disputes over how the ASD diagnosis is made and used in our society. It’s ambiguous. One aspect of the debate is whether or not ASD should be considered a disease, a disorder, or just a spectrum of natural variation within the human species. Anti-vaxxers want to see is as a disease, something to be prevented and cured. This approach also tends to align better with the more disabled end of the spectrum. At the high functioning end of the spectrum, the preference is to look at ASD as simply atypical, and not inherently inferior or worse than neurotypicals. The increased challenges of being autistic are really artificially created by a society dominated by neurotypicals. There are also in fact advantages to being neuroatypical in certain areas, such as jobs like coding and engineering. Highly sociable people have their challenges as well.
Here’s the thing – I think both of these approaches can be meaningful and useful at the same time. First, I don’t think we should shy away from terms like “profound” or “severe”. This is how neuroscience generally works. Everyone does and should have some level of anxiety, for example. Anxiety is adaptive. But some people have “severe” anxiety – anxiety that takes on a life of its own, or transitions from being adaptive to maladaptive. I don’t want to minimize the language debate. Words matter. Sometimes we just don’t have the words that mean exactly what we need them to mean, without unwanted connotations. We need a word that can express the spectrum without unwanted assumptions or judgement. How about “extreme”? Extreme does not imply bad. You can be extremely athletic, and no one would think that is a negative thing. Even if autism is just atypical, being extremely autistic implies you are at one end of the spectrum.
Also, as with anxiety, optimal function is often a mean between two extremes. No anxiety means you take unnecessary risks. Too much anxiety can be crippling. Having mildly autistic features may just represent a different set of neurological tradeoffs, with some advantages and some challenges, and because it is atypical some accommodation in a society not optimized for this type. But as the features get more extreme, the downsides become increasingly challenging until you have a severe disability.
This reminds me also of paranoia. A little bit of paranoia can be seen as typical, healthy, and adaptive. A complete absence of any suspiciousness might make someone naive and vulnerable. People with above average paranoia might not even warrant a diagnosis – that is just a personality type, with strengths and weaknesses. But the more extreme you get, the more maladaptive it becomes. At the extreme end it is a criterion for schizophrenia.
Or perhaps this is all just too complex for the public-facing side of this diagnosis (regulation, public education, etc). Perhaps we need to become splitters, and break ASD up into three or more different labels. Researchers can still have and use a technical category name that recognizes an underlying neurological commonality, but that does not need to be inflicted on the public and cause confusion. Again – there is no objective right or wrong here, just different choices. As I think I amply demonstrated in my prior post, using one label (autism) causes a great deal of confusion and can be exploited by cranks. What often happens, though, is that different groups make up the labels for their own purposes. When researchers make the labels, they favor technical basic-science criteria. When clinicians do, they favor clinical criteria. When regulators do, they want nice clean categories.
Sometimes all these levels play nicely together. With ASD I feels as if they are in conflict, with the more research-based labels holding sway and causing confusion for everyone else.
At the same time there is a conflict between not imposing inaccurate and unnecessary judgement on a label like autism, while at the same time recognizing that can come with its own challenges that need just awareness at the mildest end of the spectrum, accommodation for those who experience challenges and have needs, and then actual treatment (if possible) at the more extreme end. These do not need to be mutually exclusive.
I do think we are evolving in a good direction, with more thoughtful diagnostic labels that explicitly serve a purpose without unnecessary assumptions or judgement. We may not be entirely there yet, but it’s a great conversation to have.
The post The Other End of the Autism Spectrum first appeared on NeuroLogica Blog.
RFK Jr.’s recent speech about autism has sparked a lot of deserved anger. But like many things in life, it’s even more complicated than you think it is, and this is a good opportunity to explore some of the issues surrounding this diagnosis.
While the definition has shifted over the years (like most medical diagnoses) autism is currently considered a fairly broad spectrum sharing some underlying neurological features. At the most “severe” end of the spectrum (and to show you how fraught this issue is, even the use of the term “severe” is controversial) people with autism (or autism spectrum disorder, ASD) can be non-verbal or minimally verbal, have an IQ <50, and require full support to meet their basic daily needs. At the other end of the spectrum are extremely high-functioning individuals who are simply considered to be not “neurotypical” because they have a different set of strengths and challenges than more neurotypical people. One of the primary challenges is to talk about the full spectrum of ASD under one label. The one thing it is safe to say is that RFK Jr. completely failed this challenge.
What our Health and Human Services Secretary said was that normal children:
“regressed … into autism when they were 2 years old. And these are kids who will never pay taxes, they’ll never hold a job, they’ll never play baseball, they’ll never write a poem, they’ll never go out on a date. Many of them will never use a toilet unassisted.”
This is classic RFK Jr. – he uses scientific data like the proverbial drunk uses a lamppost, for support rather than illumination. Others have correctly pointed out that he begins with his narrative and works backward (like a lawyer, because that is what he is). That narrative is solidly in the sweet-spot of the anti-vaccine narrative on autism, which David Gorski spells out in great detail here. RFK said:
“So I would urge everyone to consider the likelihood that autism, whether you call it an epidemic, a tsunami, or a surge of autism, is a real thing that we don’t understand, and it must be triggered or caused by environmental or risk factors. “
In RFK’s world, autism is a horrible disease that destroys children and families and is surging in such a way that there must be an “environmental” cause (wink, wink – we know he means vaccines). But of course RFK gets the facts predictable wrong, or at least exaggerated and distorted precisely to suit his narrative. It’s a great example of how to support a desired narrative by cherry picking and then misrepresenting facts. To use another metaphor, it’s like making one of those mosaic pictures out of other pictures. He may be choosing published facts but he arranges them into a false and illusory picture. RFK cited a recent study that showed that about 25% of children with autism were in the “profound” category. (That is another term recently suggested to refer to autistic children who are minimally verbal or have an IQ < 50. This is similar to “level 3” autism or “severe” autism, but with slightly different operational cutoffs.)
First, there are a range of estimates as to what percentage of autistic people would fit into the profound category, and he is choosing the high end. Also most of the people in that category don’t have the limitations that RFK listed. A 2024 study, for example, which relied upon surveys of parents of children with autism found that only 10% fell into the “severe” category. Even within this category, only 67% had difficulty with dressing and bathing, or about 7% of children with autism. I am not trying to minimize the impact of the challenges and limitations of those at the severe end of the spectrum, just putting the data into context. What RFK was doing, which is what antivaxxers have been doing for decades, is trying to scare parents with a very specific narrative – perfect young children will get vaccinated and then regress into horrible autism that will destroy their lives and your families.
What is regression? It is a loss of previous milestones or abilities. The exact rate in severe autism is unclear, ranging from 20-40%, but the 20% figure is considered more reliable. In any case, RFK misrepresents this as well. Regression does not mean that a 2 year old child without autism develops severe autism – it means that a child with autism loses some function. Much of the time regression refers to social skills, with autistic children finding it more difficult to engage socially as they age (which can simply be adaptive and not require neurological regression). Language regression occurs but is less common. Again we see that he uses a piece of the picture, exaggerates it, and then uses it to imply a reality that does not exist.
He then does it again with the “surge” of autism. Yes, autism diagnoses have been increasing for decades. At first (during the 1990s) you could make a correlation between increasing vaccines in the childhood schedule and increasing autism diagnostic rates. This was always just a spurious correlation (my favorite example is that organic food sales track better with autism diagnoses than does vaccination). But after about 2000, when thimerosal was removed from the childhood vaccine schedule in the US, autism rates continued to increase. The correlation completely broke down. Antivaxxers desperately tried to explain away this breakdown in the correlation, with increasingly ridiculous special pleading, and now it seems they just ignore this fact.
RFK is just ignoring this fact, and just making the more general observation that autism rates are increasing, which they are. But this increase does not fit his scary narrative for at least two reasons. First, as I and others have pointed out, there is copious evidence in the literature that much of this apparent increase is due to changing diagnostic patterns. At the severe end of the spectrum there is some diagnostic substitution – in past decades children who are now diagnosed with autism would have been diagnosed with mental retardation or something else less specific or just different. At the high functioning end of the spectrum children with autism likely would not have been diagnosed with anything at all. I have explored this issue at length before – the more carefully you look (applying the same diagnostic criteria across different age cohorts), the less autism is increasing. It is also true that autism is dominantly a genetic disorder, and that there are very early signs of autism, even in six month olds, and perhaps even at the fetal stage.
But also the dramatic increase in autism diagnoses is mostly at the mild end of the spectrum. There is only a small increase of profound autism. So again, RFK’s narrative breaks down when you look at the actual scientific facts. He says normal children regress into profound autism and this is surging. But that is wrong. He is exploiting the fact that we use the same term, autism, to refer to profound autism and what was previously called “aspergers syndrome” but is now just considered part of ASD.
All of this is sufficient evidence to conclude that RFK is incompetent to serve as HHS secretary, he does not understand medical science and rather makes a lawyer’s case for extreme conspiracy theories designed to scare the public into making bad medical choices.
But there is another side to this story (that has nothing to do with RFK). In our effort not to pathologize people who are simply atypical, are we overlooking people who actually have a severe disability, or at least making them and their parents feel that way? I’ll explore this side of the question in my next post.
The post How Should We Talk About Autism first appeared on NeuroLogica Blog.
Regulations are a classic example of a proverbial double-edged sword. They are essential to create and maintain a free and fair market, to prevent exploitation, and to promote safety and the public interest. Just look at 19th century America for countless examples of what happens without proper regulations (child labor, cities ablaze, patent medicines, and food was a crap shoot). But, regulations can have a powerful effect and this includes unintended consequences, regulatory overreach, ideological capture, and stifling bureaucracy. This is why optimal regulations should be minimalist, targeted, evidence-based, consensus-driven, and open to revision. This makes regulations also a classic example of Aristotle’s rule of the “golden mean”. Go too far to either extreme (too little or to onerous) and regulations can be a net negative.
The regulations of GMOs are an example, in my opinion, of ideological capture in regulations. The US, actually, has pretty good regulations, requiring study and approval for each new GMO product on the market, but no outright banning. You could argue that they are a bit too onerous to be optimal, ensuring that only large companies can afford to usher a new GMO product to the market, and therefore stifling competition from smaller companies. That’s one of those unintended consequences. Some states, like Hawaii and Vermont, have instituted their own more restrictive regulations, based purely on ideology and not science or evidence. Europe is another story, with highly restrictive regulations on GMOs.
But in recent years scientific advances in genetics have cracked the door open for genetic modification in highly regulated environments. This is similar to what happened with stem cell research in the US. Use of embryonic stem cells were ideologically controversial, and ultimately the development of any new cells lines was banned by Bush in 2001. Scientists then discovered how to convert adult cells into induced pluripotent stem cells, mostly side-stepping these regulations.
In the GMO space a similar thing has happened. With the advent of CRISPR and other technologies, it’s possible to alter the genome of a plant without introducing a foreign gene. Increasingly these sorts of changes are being distinguished, from a regulatory perspective, from genetic modification that involves inserting a gene. Altering the genome without gene insertion is referred to a genetic engineering, rather than genetic modification, and the regulations for the use of genetic engineering (which includes product labeling) are less onerous. This provides an incentive to the industry to accomplish what they want through genetic engineering, without triggering the rules for genetic modification.
This brings us to a couple of recent studies showcasing this approach. For some additional background, however, I need to mention that one currently used technique is to use CRISPR or a similar method to modify the genome of a plant, but then back cross the resulting engineered plants with unmodified plants in order to get rid of any foreign DNA left behind by the CRISPR process. This is a bit laborious, and often requires multiple generations, to result in a plant with the desired mutations but no foreign DNA.
However, this technique does not work for every kind of plant. There are two categories in particular that are a problem – trees (or any slow-growing plant that would take years to reproduce), and sterile plants (like bananas). For these types of plants we need a new method that does not leave behind any foreign DNA and therefore does not require subsequent cross-breeding to get rid of it.
So – in January scientists published a study detailing “Transgene-free genome editing in poplar.” They report:
“Here, we describe an efficient method for generating gene-edited Populus tremula × P. alba (poplar) trees without incorporating foreign DNA into its genome. Using Agrobacterium tumefaciens, we expressed a base-editing construct targeting CCoAOMT1 along with the ALS genes for positive selection on a chlorsulfuron-containing medium.
About 50% of the regenerated shoots were derived from transient transformation and were free of T-DNA. Overall, 7% of the chlorsulfuron-resistant shoots were T-DNA free, edited in the CCoAOMT1 gene and nonchimeric.”
This means that they were able to use transiently expressed DNA in the cells, that essentially made the genetic change and then went away. They used the bacterium A tumefaciens as vector. This worked in about half of cells. They also did genome-wide sequencing to weed out any shoots with any foreign DNA. They also had to eliminate shoots where only some of the cells were altered (and therefore chimeric). So in 7% of the shoots the desired change was made, in all of the cells, without leaving behind any foreign DNA. No further breeding is required, and therefore this is a much quicker, cheaper, and more efficient method of making desirable changes (in this case they used a herbicide resistant mutation, which was easy to test for).
Next up, published this month, was the same method in the cavendish banana – “An Agrobacterium-mediated base editing approach generates transgene-free edited banana.” From what I can tell they used essentially the same method as with the poplar trees, although there are no authors in common between the two papers so this appears to be an independent group. The authors of both papers are Flemish and cite each-other’s work, so I assume this is part of a collaborative project. I also see another paper doing a similar thing in bamboo, with Chinese authors.
The authors explicitly say that the benefit of this technique is to create cultivars that have less of a regulatory hurdle, so the point is primarily to avoid harsher regulations. While this is a great workaround, it’s unfortunate that scientists need to develop a workaround, just to please the anti-GMO crowd. Anti-GMO sentiments are not based on science, they are ideologically and largely driven by the organic industry for what seems transparently self-serving reasons. The benefits of genetic engineering in agriculture, though, are clear and necessary, given the challenges we are facing. So the industry is somewhat quietly just bypassing regulations, while some governments are quietly softening regulations, in order to reap the benefits without inflaming anti-GMO activists. Hopefully we can get to a largely post-anti-GMO world and get down to the business of feeding people and saving our crops from looming diseases and climate change.
The post Transgene-Free Gene Editing in Plants first appeared on NeuroLogica Blog.
Have you ever been into a video game that you played for hours a day for a while? Did you ever experience elements of game play bleeding over into the real world? If you have, then you have experienced what psychologists call “game transfer phenomenon” or GTP. This can be subtle, such as unconsciously placing your hand on the AWSD keys on a keyboard, or more extreme such as imagining elements of the game in the real world, such as health bars over people’s heads.
None of this is surprising, actually. Our brains adapt to use. Spend enough time in a certain environment, engaging in a specific activity, experiencing certain things, and these pathways will be reinforced. This is essentially what PTSD is – spend enough time fighting for your life in extremely violent and deadly situations, and the behaviors and associations you learn are hard to turn off. I have experienced only a tiny whisper of this after engaging for extended periods of time in live-action gaming that involves some sort of combat (like paint ball or LARPing) – it may take a few days for you to stop looking for threats and being jumpy.
I have also noticed a bit of transfer (and others have noted this to me as well) in that I find myself reaching to pause or rewind a live radio broadcast because I missed something that was said. I also frequently try to interact with screens that are not touch-screens. I am getting used to having the ability to affect my physical reality at will.
Now there is a new wrinkle to this phenomenon – we have to consider the impact of spending more and more time engaged in virtual experiences. This will only get more profound as virtual reality becomes more and more a part of our daily routine. I am also thinking about the not-to-distant future and beyond, where some people might spend huge chunks of their day in VR. Existing research shows that GTP is more likely to occur with increased time and immersiveness. What happens when our daily lives are a blend of the virtual and the physical? Not only is there VR, there is augmented reality (AR) where we overlay digital information onto our perception of the real world. This idea was explored in a Dr. Who episode in which a society of people were so dependent on AR that they were literally helpless without it, unable to even walk from point A to B.
For me the question is – when will GTP cross the line from being an occasional curiosity to a serious problem? For example, in some immersive video games your character may be able to fly, and you think nothing of stepping off a ledge and flying into the air. Imagine playing such a super-hero style game in high quality VR for an extended period of time (something like Ready Player One). Could people “forget” they are in meat space and engage in a deeply engrained behavior they developed in the game. They won’t just be trying to pause their radio, but interact with their physical world in a way that is only possible in the VR world, with possible deadly consequences.
Another aspect of this is that as our technology develops we are increasingly making our physical environment more digital. Three-D printing is an example of this – going from a digital image to a physical object. Increasingly objects in our physical environment are interactive – smart devices. In a generation or two will people get used to not only spending lots of time in VR, but having their physical worlds augmented by AR and populated with smart devices, including physical objects that can change on demand (programmable matter)? We may become ill-adapted to existing in a “dumb” purely physical world. We may choose virtual reality because it has spoiled us for dumb physical reality.
Don’t get me wrong – I think digital and virtual reality is great and I look forward to every advancement. I see this mainly as an unintended consequence. But I also think we can reasonably anticipate this is likely to be a problem, as we are already seeing the milder versions of it today. This means we have an opportunity to mitigate this before it becomes a problem. Part of the solution will likely always be good digital hygiene – making sure our days are balanced with physical and virtual reality. This will likely also be good for our physical health.
I also wonder, however, if this is something that can be mitigated in the virtual applications themselves. Perhaps the programs can designed to make it obvious when we are in virtual reality vs physical reality, as a clue to your brain so it doesn’t cross the streams. I don’t think this is a complete fix, because GTP exists even for cartoony games. The learned behaviors will still bleed over. But perhaps there may be a way to help the brain keep these streams separated.
I suspect we will not seriously address this issue until it is already a problem. But it would be nice to get ahead of a problem like this for once.
The post Game Transfer Phenomenon first appeared on NeuroLogica Blog.
Exoplanets are pretty exciting – in the last few decades we have gone from knowing absolutely nothing about planets beyond our solar system to having a catalogue of over 5,000 confirmed exoplanets. That’s still a small sample considering there are likely between 100 billion and 1 trillion planets in the Milky Way. It is also not a random sample, but is biased by our detection methods, which favor larger planets closer to their parent stars. Still, some patterns are starting to emerge. One frustrating pattern is the lack of any worlds that are close duplicates of Earth – an Earth mass exoplanet in the habitable zone of a yellow star (I’d even take an orange star).
Life, however, does not require an Earth-like planet. Anything in the habitable zone, defined as potentially having a temperature allowing for liquid water on its surface, will do. The habitable zone also depends on variables such as the atmosphere of the planet. Mars could be warm if it had a thicker atmosphere, and Venus could be habitable if it had less of one. Cataloguing exoplanets gives us the ability to address a burning scientific question – how common is life in the universe? We have yet to add any data points of clear examples of life beyond Earth. So far we have one example of life in the universe, which means we can’t calculate how common it is (except maybe setting some statistical upper limits).
Finding that a planet is habitable and therefore could potentially support life is not enough. We need evidence that there is actually life there. For this the hunt for exoplanets includes looking for potential biosignatures – signs of life. We may have just found the first biosignatures on an exoplanet. This is not 100%. We need more data. But it is pretty intriguing.
The planet is K2-18b, a sub-Neptune orbiting a red dwarf 120 light years from Earth. In terms of exoplanet size, we have terrestrial planets like Earth and the rocky inner planets of our solar system. Then there are super-Earths, larger than Earth up to about 2 earth masses, still likely rocky worlds. Sub Neptunes are larger still, but still smaller than Neptune. They likely have rocky surfaces and thick atmospheres. K2-18b has a radius 2.6 times that of Earth, with a mass 8.6 times that of Earth. The surface gravity is estimated at 12.43 m/s^2 (compared to 9.8 on Earth). We could theoretically land a rocket and take off again from its surface.
K2-18 is a red dwarf, which means it has a habitable zone close in. K2-18b orbits every 33 days, and had an eccentric orbit but staying within the habitable zone. This means it is likely tidally locked, but may be in a resonance orbit (like Mercury), meaning that it rotates three times for every two orbits, or something like that. Fortunately for astronomers, K2-18b orbits in front of its star from our perspective on Earth. This is how it was detected, but also this means we can potentially examine the chemical makeup of its atmosphere with spectroscopy. When the planet passes in front of its star we can look at the absorption lines of the light passing through it to detect the signatures of different chemicals. Using this technique with the Hubble astronomers have found methane and carbon dioxide in the atmosphere. They have also found dimethyl sulfide and a similar molecule called dimethyl disulfide. On Earth the only known source of dimethyl sulfide is living organisms, specifically algae. This molecule is also highly reactive and therefore short-lived, which means if it is present in the atmosphere it is being constantly renewed. Follow up observations with the Webb confirmed the presence of dimethyl sulfide, in concentrations 20 times higher than on Earth.
What does this mean? Well, it could mean that K2-18b has a surface ocean that is brimming with life. This fits with one model of sub-Neptunes, called the Hycean model, which means they can have large surface oceans and an atmosphere with lots of hydrogen. These are conditions suitable for life. But this is not the only possibility.
One of the problems with chemical biosignatures is that they frustratingly all have abiotic sources. Oxygen can occur through the splitting of water or CO2 by ultraviolet light, and by reactions with quartz. Methane also has geological sources. What about dimethyl sulfide? Well, it has been found in cometary matter with a likely abiotic source. So there may be some geological process on K2-18b pumping out dimethyl sulfide. Or there may be an ocean brimming with marine life creating the stuff. We need to do more investigation of K2-18b to understand more about its likely surface conditions, atmosphere, and prospects for life.
This, unfortunately, is how these things are likely to go – we find a potential biosignature that also has abiotic explanations and then we need years of follow up investigation. Most of the time the biosignatures don’t pan out (like on Venus and Mars so far). It’s a setup for disappointment. But eventually we may go all the way through this process and make a solid case for life on an exoplanet. Then finally we will have our second data point, and have a much better idea of how common life is likely to be in our universe.
The post Possible Biosignature on K2-18b first appeared on NeuroLogica Blog.
Last week I wrote about the de-extinction of the dire wolf by a company, Colossal Biosciences. What they did was pretty amazing – sequence ancient dire wolf DNA and use that as a template to make 20 changes to 14 genes in the gray wolf genome via CRISPR. They focused on the genetic changes they thought would have the biggest morphological effect, so that the resulting pups would look as much as possible like the dire wolves of old.
This achievement, however, is somewhat tainted by overhyping what was actually achieved, by the company and many media outlets. Although the pushback began immediately, and there is plenty of reporting about the fact that these are not exactly dire wolves (as I pointed out myself). I do think we should not fall into the pattern of focusing on the controversy and the negative and missing the fact that this is a genuinely amazing scientific accomplishment. It is easy to become blase about such things. Sometimes it’s hard to know in reporting what the optimal balance is between the positive and the negative, and as skeptics we definitely can tend toward the negative.
I feel the same way, for example, about artificial intelligence. Some of my skeptical colleagues have taken the approach that AI is mostly hype, and focusing on what the recent crop of AI apps are not (they are not sentient, they are not AGI), rather than what they are. In both cases I think it’s important to remember that science and pseudoscience are a continuum, and just because something is being overhyped does not mean it gets tossed in the pseudoscience bucket. That is just another form of bias. Sometimes that amounts to substituting cynicism for more nuanced skepticism.
Getting back to the “dire wolves”, how should we skeptically view the claims being made by Colossal Biosciences. First let me step back a bit and talk about de-extinction – bringing back species that have gone extinct from surviving DNA remnants. There are basically three approaches to achieve this. They all start with sequencing DNA from the extinct species. This is easier for recently extinct species, like the carrier pigeon, where we still have preserved biological samples. The more ancient the DNA, the harder it is to recover and sequence. Some research has estimated that the half life of DNA (in good preserving conditions) is 521 years. This leads to an estimate that all base pairs will be gone by 6.8 million years. This means – no non-avian dinosaur DNA. But there are controversial claims of recovered dino DNA. That’s a separate discussion, but for now lets focus on the non-controversial DNA, of thousands to at most a few million years old.
Species on the short list for de-extinction include the dire wolf (13,000 years ago), woolly mammoth (10,000 years ago), dodo (360 years), and the thylacine (90 years). The best way (not the most feasible way) to fully de-extinct a species is to completely sequence their DNA and then use that to make a full clone. No one would argue that a cloned woolly mammoth is not a woolly mammoth. There has been discussion of cloning the woolly mammoth and other species for decades, but the technology is very tricky. We would need a complete woolly mammoth genome – which we have. However, the DNA is degraded making cloning not possible with current technology. But this is one potential pathway. It is more feasible for the dodo and thylacine.
A second way is to make a hybrid – take the woolly mammoth genome and use it to fertilize the egg from a modern elephant. The result would be half woolly mammoth and half Asian or African elephant. You could theoretically repeat this procedure with the offspring, breeding back with woolly mammoth DNA, until you have a creature that is mostly woolly mammoth. This method requires an extant relative that is close enough to produce fertile young. This is also tricky technology, and we are not quite there yet.
The third way is the “dino-chicken” (or chickenosaurus) method, promoted initially (as far as I can tell, but I’m probably wrong) by Jack Horner. With this method you start with an extant species and then make specific changes to its genome to “reverse engineer” an ancestor or close relative species. There are actually various approaches under this umbrella, but all involve starting with an extant species and making genetic changes. There is the Jurassic Park approach, which takes large chunks of “dino DNA” and plugs them into an intact genome from a modern species (why they used frog DNA instead of bird DNA is not clear). There is also the dino-chicken approach, which simply tries to figure out the genetic changes that happened over evolutionary time to result in the morphological changes that turned, for example, a theropod dinosaur into a chicken. Then, reverse those changes. This is more like reverse engineering a dinosaur by understanding how genes result in morphology.
Then we have the dire wolf approach – use ancient DNA as a template to guide specific CRISPR changes to an extant genome. This is very close to the dino-chicken approach, but uses actual ancient DNA as a template. All of these approaches (perhaps the best way to collectively describe these methods is the genetic engineering approach) do not result in a clone of the extinct species. They result in a genetically engineered approximation of the extinct species. Once you get passed the hype, everyone acknowledges this is a fact.
The discussion that flows from the genetic engineering method is – how do we refer to the resulting organisms? We need some catchy shorthand that is scientifically accurate. The three wolves produced by Colossal Biosciences are not dire wolves. But they are not just gray wolves – they are wolves with dire wolf DNA resulting in dire wolf morphological features. They are engineered dire wolf “sims”, “synths”, “analogs”, “echos”, “isomorphs”? Hmmm… A genetically engineered dire wolf isomorph. I like it.
Also, my understanding is that the goal of using the genetic engineering method of de-extinction is not to make a few changes and then stop, but to keep going. By my quick calculation the dire wolf and the gray wolf differ by about 800-900 genes out of 19,000 total. Our best estimate is that dire wolves had 78 chromosomes, like all modern canids, including the gray wolf, so that helps. So far 14 of those genes have been altered from gray wolf to dire wolf (at least enough to function like a dire wolf). There is no reason why they can’t keep going, making more and more changes based upon dire wolf DNA. At some point the result will be more like a dire wolf than a gray wolf. It will still be a genetic isomorph (it’s growing on me) but getting closer and closer to the target species. Is there any point at which we can say – OK, this is basically a dire wolf?
It’s also important to recognize that species are not discrete things. They are temporary dynamic and shifting islands of interbreeding genetic clusters. We should also not confuse taxonomy for reality – it is a naming convention that is ultimately arbitrary. Cladistics is an attempt to have a fully objective naming system, based entirely on evolutionary branching points. However, using that method is a subjective choice, and even within cladistics the break between species is not always clear.
I find this all pretty exciting. I also think the technology can be very important. Its best uses, in my opinion, are to de-extinct (as close as possible) recently extinct species due to human activity, ones where there is still something close to their natural ecosystem still in existence (such as the dodo and thylacine). Also it can be used to increase the genetic diversity of endangered species and reduce the risk of extinction.
Using it to bring back extinct ancient species, like the mammoth and dire wolf (or non-avian dinosaurs, for that matter), I see as a research project. And sure, I would love to see living examples that look like ancient extinct species, but that is mostly a side benefit. This can be an extremely useful research project, advancing our understanding of genetics, cloning and genetic engineering technology, and improving our understanding of ancient species.
This recent controversy is an excellent opportunity to teach the public about this technology and its implications. It’s also an opportunity to learn about categorization, terminology, and evolution. Let’s not waste it by overreacting to the hype and being dismissive.
The post OK – But Are They Dire Wolves first appeared on NeuroLogica Blog.
We may have a unique opportunity to make an infrastructure investment that can demonstrably save money over the long term – by burying power and broadband lines. This is always an option, of course, but since we are in the early phases of rolling out fiber optic service, and also trying to improve our grid infrastructure with reconductoring, now may be the perfect time to also upgrade our infrastructure by burying much of these lines.
This has long been a frustration of mine. I remember over 40 years ago seeing new housing developments (my father was in construction) with all the power lines buried. I hadn’t realized what a terrible eye sore all those telephone poles and wires were until they were gone. It was beautiful. I was lead to believe this was the new trend, especially for residential areas. I looked forward to a day without the ubiquitous telephone poles, much like the transition to cable eliminated the awful TV antennae on top of every home. But that day never came. Areas with buried lines remained, it seems, a privilege of upscale neighborhoods. I get further annoyed every time there is a power outage in my area because of a downed line.
The reason, ultimately, had to be cost. Sure, there are lots of variables that determine that cost, but at the end of the day developers, towns, utility companies were taking the cheaper option. But what price do we place on the aesthetics of the places we live, and the inconvenience of regular power outages? I also hate the fact that the utility companies have to come around every year or so and carve ugly paths through large beautiful trees.
So I was very happy to see this study which argues that – Benefits of aggressively co-undergrounding electric and broadband lines outweigh costs. First, they found that co-undergrounding (simply burying broadband and power lines at the same time) saves about 40% over doing each individually. This seems pretty obvious, but it’s good to put a number on it. But more importantly they found that the whole project can save money over the long term. They modeled one town in Mass and found:
“Over 40 years, the cost of an aggressive co-undergrounding strategy in Shrewsbury would be $45.4 million, but the benefit from avoiding outages is $55.1 million.”
The reduced cost comes mostly from avoiding power outages. This means that areas most prone to power outages would benefit the most. What they mean by “aggressive” is co-undergrounding even before existing power lines are at the end of their lifespan. They do not consider the benefits of reconductoring – meaning increasing the carrying capacity of power lines with more modern construction. The benefit here can be huge as well, especially in facilitating the move to less centralized power production. We can further include the economic benefits of upgrading to fiber optic broadband, or even high end cable service.
This is exactly the kind of thing that governments should be doing – thoughtful public investments that will improve our lives and save money in the long term. The up front costs are also within the means of utility companies and local governments. I would also like to see subsidies at the state and federal level to spread the costs out even more.
Infrastructure investments, at least in the abstract, tend to have broad bipartisan support. Even when they fight over such proposals, in the end both sides will take credit for them, because the public generally supports infrastructure that makes their lives better. For undergrounding there are the immediate benefits of improved aesthetics – our neighborhoods will look prettier. Then we will also benefit from improved broadband access, which can be connected to the rural broadband project which has stalled. Investments in the grid can help keep electricity costs down. For those of us living in areas at high risk of power outages, the lack of such outages will also make an impression over time. We will tell our kids and grandkids stories about the time an ice storm took down power lines, which were laying dangerously across the road, and we had no power for days. What did we do with ourselves, they will ask. You mean – there was no heat in the winter? Did people die? Why yes, yes they did. It will seem barbaric.
This may not make sense for every single location, and obviously some long distance lines are better above ground. But for residential neighborhoods, undergrounding power and broadband seems like a no-brainer. It seemed like one 40 years ago. I hope we don’t miss this opportunity. This could also be a political movement that everyone can get behind, which would be a good thing in itself.
The post Bury Broadband and Electricity first appeared on NeuroLogica Blog.
This really is just a coincidence – I posted yesterday about using AI and modern genetic engineering technology, with one application being the de-extinction of species. I had not seen the news from yesterday about a company that just announced it has cloned three dire wolves from ancient DNA. This is all over the news, so here is a quick recap before we discuss the implications.
The company, Colossal Biosciences, has long announced its plans to de-extinct the woolly mammoth. This was the company that recently announced it had made a woolly mouse by inserting a gene for wooliness from recovered woolly mammoth DNA. This was a proof-of-concept demonstration. But now they say they have also been working on the dire wolf, a species of wolf closely related to the modern gray wolf that went extinct 13,000 years ago. We mostly know about them from skeletons found in the Le Brea tar pits (some of which are on display at my local Peabody Museum). Dire wolves are about 20% bigger than gray wolves, have thicker lighter coats, and are more muscular. They are the bad-ass ice-age version of wolves that coexisted with saber-toothed tigers and woolly mammoths.
The company was able to recover DNA from 13,000 year old tooth and a 72,000 year old skull. With that DNA they engineered wolf DNA at 20 sites over 14 genes, then used that DNA to fertilize an egg which they gestated in a dog. They actually did this twice, the first time creating two males, Romulus and Remus (now six months old), and the second time making one female, Kaleesi (now three months old). The wolves are kept in a reserve. The company says they have no current plan to breed them, but do plan to make more in order to create a full pack to study pack behavior.
The company acknowledges these puppies are not the exact dire wolves that were alive up to 13,000 years ago, but they are pretty close. They started pretty close – gray wolves share 99.5% of their DNA with dire wolves, and now they are even closer, replicating the key morphological features of the dire wolf. So not a perfect de-extinction, but pretty close. Next up is the woolly mammoth. They also plan to use the same techniques to de-extinct the dodo and the thylacine.
What is the end-game of de-extincting these species? That’s a great question. I don’t anticipate that a breeding population of dire wolves will be released into the wild. While they did coexist with grey wolves, and can again, this species was not driven to extinction by humans but likely by changing environmental conditions. They are no longer adapted to this world, and would likely be a highly disruptive invasive species. The same is true of the woolly mammoth, although it is not a predator so the concerns are no as – dire (sorry, couldn’t resist). But still, we would need to evaluate their effect on any ecosystem we place them.
The same is not true for the thylacine or dodo. The dodo in particular seem benign enough to reintroduce. The challenge will be getting it to survive. It went extinct not just from human predation, but also it ground nests and was not prepared for the rats and other predators that we introduced to their island. So first we would need to return their habitat to a livable state for them. Thylacines might be the easiest to reintroduce, as they went extinct very recently and their habitat still largely exists.
So – for those species we have no intention of reintroducing into the wild, or for which this would be an extreme challenge – what do we do with them? We could keep them on a large preserve to study them and to be viewed by tourists. Here we might want to follow the model of Zealandia – a wildlife sanctuary in New Zealand. I visited Zealandia and it is amazing. It is a 500+ acre ecosanctuary, completely walled off from the outside. The goal is to recreate the native plants and animals of pre-human New Zealand, and to keep out all introduced predators. It serves as a research facility, sanctuary for endangered species, and tourist and educational site.
I could imagine other similar ecosanctuaries. The island of Mauritius where the dodo once lived is now populated, but vast parts of it are wild. It might be feasible to create an ecosanctuary there, safe for the dodo. We could do a similar project in North America, which is not only a preserve for some modern species but also could contain de-extincted compatible species. Having large and fully protected ecosanctuaries is not a bad idea in itself.
There is a fine line between an ecosanctuary and a Jurassic Park. It really is a matter of how the park is managed and how people interact with it, and it’s more of a continuum than a sharp demarcation. It really isn’t a bad idea to take an otherwise barren island, perhaps a recent volcanic island where life has not been established yet, and turn it into an isolated ecosanctuary, then fill it with a bunch of ancient plants and animals. This would be an amazing research opportunity, a way to preserve biodiversity, and an awesome tourist experience, which then could fund a lot of research and environmental initiatives.
I think the bottom line is that de-extinction projects can work out well, if they are managed properly. The question is – do we have faith that they will be? The chance that they are is increased if we engage in discussions now, including some thoughtful regulations to ensure ethical and responsible behavior all around.
The post De-extincting the Dire Wolf first appeared on NeuroLogica Blog.
I think it’s increasingly difficult to argue that the recent boom in artificial intelligence (AI) is mostly hype. There is a lot of hype, but don’t let that distract you from the real progress. The best indication of this is applications in scientific research, because the outcomes are measurable and objective. AI applications are particularly adept at finding patterns in vast sets of data, finding patterns in hours that might have required months of traditional research. We recently discussed on the SGU using AI to sequence proteins, which is the direction that researchers are going in. Compared to the traditional method using AI analysis is faster and better at identifying novel proteins (not already in the database).
One SGU listener asked an interesting question after our discussion of AI and protein sequencing that I wanted to explore – can we apply the same approach to DNA and can this result in reverse-engineering the genetic sequence from the desired traits? AI is already transforming genetic research. AI apps allow for faster, cheaper, and more accurate DNA sequencing, while also allowing for the identification of gene variants that correlate with a disease or a trait. Genetics is in the sweet spot for these AI applications – using large databases to find meaningful patterns. How far will this tech go, and how quickly.
We have already sequenced the DNA of over 3,000 species. This number is increasing quickly, accelerated by AI sequencing techniques. We also have a lot of data about gene sequences and the resulting proteins, non-coding regulatory DNA, gene variants and disease states, and developmental biology. If we trained an AI on all this data, could it then make predictions about the effects of novel gene variants? Could it also go from a desired morphological trait back to the genetic sequence that would produce that trait? Again, this sounds like the perfect application for AI.
In the short run this approach is likely to accelerate genetic research and allow us to ask questions that would have been impractical otherwise. This will build the genetic database itself. In the not-so-medium term this could also become a powerful tool of genetic modification. We won’t necessarily need to take a gene from one species and put it into another. We could simply predict which changes would need to be made to the existing genes of a cultivar to get the desired trait. Then we can use CRISPR (or some other tool) to make those specific changes to the genome.
How far will this technology go? At some point in the long term could we, for example, ask an AI to start with a chicken genome and then predict which specific genetic changes would be necessary to change that chicken into a velociraptor? We could change an elephant into a wooly mammoth. Could this become a realistic tool of deextinction? Could we reduce the risk of extinction in an endangered species by artificially increasing the genetic diversity in the remaining population?
What I am describing so far is actually the low-hanging-fruit. AI is already accelerating genetics research. It is already being used for genetic engineering, to help predict the net effects of genetic changes to reduce the chance of unintended consequences. This is just one step away from using AI to plan the changes in the first place. Using AI to help increase genetic diversity in at-risk populations and for deextinction is a logical next step.
But that is not where this thought experiment ends. Of course whenever we consider making genetic changes to humans the ethics becomes very complicated. Using AI and genetic technology for designer humans is something we will have to confront at some point. What about entirely artificial organisms? At what point can we not only tweak or even significantly transform existing species, but design a new species from the ground up? The ethics of this are extremely complicated, as are the potential positive and negative implications. The obvious risk would be releasing into the wild a species that would be the ultimate invasive species.
There are safeguards that could be created. All such creatures, for example, could be not just sterile but completely unable to reproduce. I know – this didn’t work out well on Jurassic Park, nature finds a way, etc, but there are potential safeguards so complete that no mutation would fix, such as completely lacking reproductive organs or gametes. There is also the “lysine contingency” – essentially some biological factor that would prevent the organism from surviving for long outside a controlled environment.
This all sound scary, but at some point we could theoretically get to acceptable safety levels. For example, imagine a designer pet, with a suite of desirable features. This creature cannot reproduce, and if you don’t regularly feed it special food it will die, or perhaps just go into a coma from which it can be revived. Such pets might be safer than playing genetic roulette with random breeding of domesticated predators. This goes not just for pets but for a variety of work animals.
Sure – PETA will have a meltdown. There are legitimate ethical considerations. But I don’t think they are unresolvable.
In any case, we are rapidly hurtling toward this future. We should at least head into this future with our eyes open.
The post Will AI Bring Us Jurassic Park first appeared on NeuroLogica Blog.
Yes – it is well-documented that in many industries the design of products incorporates a plan for when the product will need to be replaced. A blatant example was in 1924 when an international meeting of lightbulb manufacturers decided to limit the lifespan of lightbulbs to 1,000 hours, so that consumers would have to constantly replace them. This artificial limitation did not end until CFLs and then LED lightbulbs largely replaced incandescent bulbs.
But – it’s more complicated than you might think (it always is). Planned obsolescence is not always about gimping products so they break faster. It often is – products are made so they are difficult to repair or upgrade and arbitrary fashions change specifically to create demand for new versions. But often there is a rational decision to limit product quality. Some products, like kid’s clothes, have a short use timeline, so consumers prefer cheap to durable. There is also a very good (for the consumer) example of true obsolescence – sometimes the technology simply advances, offering better products. Durability is not the only nor the primary attribute determining the quality of a product, and it makes no sense to build in expensive durability for a product that consumers will want to replace. So there is a complex dynamic among various product features, with durability being only one feature.
We can also ask the question, for any product or class of products, is durability actually decreasing over time? Consumers are now on the alert for planned obsolescence, and this may produce the confirmation bias of seeing it everywhere, even when it’s not true. A recent study looking at big-ticket appliances shows how complex this question can be. This is a Norwegian study looking at the lifespan of large appliances over decades, starting in the 1950s.
First, they found that for most large appliances, there was no decrease in lifespan over this time period. So the phenomenon simply did not exist for the items that homeowning consumers care the most about, their expensive appliances. There were two exceptions, however – ovens and washing machines. Each has its own explanations.
For washing machines, the researchers found another plausible explanation for the decrease in lifespan from 19.2 to 10. 6 years (a decrease of 45%). The researchers found that over the same time, the average number of loads a household of four did increased from 2 per week in 1960 to 8 per week by 2000. So if you count lifespan not in years but in number of loads, washing machines had become more durable over this time. I suspect that washing habits were formed in the years when many people did not have washing machines, and doing laundry was brutal work. Once the convenience of doing laundry in the modern era settled in (and perhaps also once it became more than woman’s work), people did laundry more often. How many times do you wear an article of clothing before you wash it? Lots of variables there, but at some point it’s a judgement call, and this likely also changed culturally over time.
For ovens there appears to be a few answers. One is that ovens have become more complex over the decades. For many technologies there is a trade-off between simple but durable, and complex but fragile. Again – there is a tradeoff, not a simple decision to gimp a product to exploit consumers. But there are two other factors the researchers found. Over this time the design of homes have also changed. Kitchens are increasingly connected to living spaces with a more open design. In the past kitchens were closed off and hidden away. Now they are where people live and entertain. This means that the fashion of kitchen appliances are more important. People might buy new appliances to make their kitchen look more modern, rather than because the old ones are broken.
If this were true, however, then we would expect the lifespan of all large kitchen appliances to converge. As people renovate their kitchens, they are likely to buy all new appliances that match and have an updated look. This is exactly what the researchers found – the lifespan of large kitchen appliances have tended to converge over the years.
They did not find evidence that the manufacturers of large appliances were deliberately reducing the durability of their products to force consumers to replace them at regular intervals. But this is the narrative that most people have.
There is also a bigger issue of waste and the environment. Even when the tradeoffs for the consumer favor cheaper, more stylish and fashionable, or more complex products with lower durability, is this a good thing for the world? Landfilled are overflowing with discarded consumer products. This is a valid point, and should be considered in the calculus when making purchasing decisions and also for regulation. Designing products to be recyclable, repairable, and replaceable is also an important consideration. I generally replace my smartphone when the battery life gets too short, because the battery is not replaceable. (This is another discussion unto itself.)
But replacing old technology with new is not always bad for the environment. Newer dishwashers, for example, are much more energy and water efficient than older ones. Refrigerators are notorious energy hogs, and newer models are substantially more energy efficient than older models. This is another rabbit hole, exactly when do you replace rather than repair an old appliance, but generally if a newer model is significantly more efficient, replacing may be best for the environment. Refrigerators, for example, probably should be upgraded every 10 years with newer and more efficient models – so then why build them to last 20 or more?
I like this new research and this story primarily because it’s a good reminder that everything is more complex than you think, and not to fall for simplistic narratives.
The post Is Planned Obsolescence Real first appeared on NeuroLogica Blog.
It is generally accepted that the transition from hunter-gatherer communities to agriculture was the single most important event in human history, ultimately giving rise to all of civilization. The transition started to take place around 12,000 years ago in the Middle East, China, and Mesoamerica, leading to the domestication of plants and animals, a stable food supply, permanent settlements, and the ability to support people not engaged full time in food production. But why, exactly, did this transition occur when and where it did?
Existing theories focus on external factors. The changing climate lead to fertile areas of land with lots of rainfall, at the same time food sources for hunting and gathering were scarce. This occurred at the end of the last glacial period. This climate also favored the thriving of cereals, providing lots of raw material for domestication. There was therefore the opportunity and the drive to find another reliable food source. There also, however, needs to be the means. Humanity at that time had the requisite technology to begin farming, and agricultural technology advanced steadily.
A new study looks at another aspect of the rise of agriculture, demographic interactions. How were these new agricultural communities interacting with hunter-gather communities, and with each other? The study is mainly about developing and testing an inferential model to look at these questions. Here is a quick summary from the paper:
“We illustrate the opportunities offered by this approach by investigating three archaeological case studies on the diffusion of farming, shedding light on the role played by population growth rates, cultural assimilation, and competition in shaping the demographic trajectories during the transition to agriculture.”
In part the transition to agriculture occurred through increased population growth of agricultural communities, and cultural assimilation of hunter-gatherer groups who were competing for the same physical space. Mostly they were validating the model by looking at test cases to see if the model matched empirical data, which apparently it does.
I don’t think there is anything revolutionary about the findings. I have read many years ago that cultural exchange and assimilation was critical to the development of agriculture. I think the new bit here is a statistical approach to demographic changes. So basically the shift was even more complex than we thought, and we have to remember to consider all internal as well as external factors.
It does remain a fascinating part of human history, and it seems there is still a lot to learn about something that happened over a long period of time and space. There’s bound to be many moving parts. I always found it interesting to imagine the very early attempts at agriculture, before we had developed a catalogue of domesticated plants and animals. Most of the food we eat today has been cultivated beyond recognition from its wild counterparts. We took many plants that were barely edible and turned them into crops.
In addition, we had to learn how to combine different foods into a nutritionally adequate diet, without having any basic knowledge of nutrition and biochemistry. In fact, for thousands of years the shift to agriculture lead to a worse diet and negative health outcomes, due to a significant reduction in diet diversity. Each culture (at least the ones that survived) had to figure out a combination of staple crops that would lead to adequate nutrition. For example, many cultures have staple dishes that include a starch and a legume, like lentils and rice, or corn and beans. Little by little we plugged the nutritional holes, like adding carrots for vitamin A (even before we knew what vitamin A was).
Food preparation and storage technology also advanced. When you think about it, we have a few months to grow enough food to survive an entire year. We have to store the food and save enough seeds to plant the next season. We take for granted in many parts of the developed world that we can ship food around the world, and we can store food in refrigerated conditions, or sterile containers. Imagine living 5,000 years ago without any modern technology. One bad crop could mean mass starvation.
This made cultural exchange and trade critical. The more different communities could share knowledge the better everyone could deal with the challenges of subsistence farming. Also, trade allowed communities to spread out their risk. You could survive a bad year if a neighbor had a bumper crop, knowing eventually the roles will reverse. The ancient world had a far greater trading system than we previously knew or most people imagine. The bronze age, for example required bringing together tin and copper from distant mines around Eurasia. There was still a lot of fragility in this system (which is why the bronze age collapsed, and other civilizations often collapsed), but obviously in the aggregate civilization survived and thrived.
Agricultural technology was so successful it now supports a human population of over 8 billion people, and it’s likely our population will peak at about 10 billion.
The post The Transition to Agriculture first appeared on NeuroLogica Blog.
This is an interesting concept, with an interesting history, and I have heard it quoted many times recently – “we get the politicians (or government) we deserve.” It is often invoked to imply that voters are responsible for the malfeasance or general failings of their elected officials. First let’s explore if this is true or not, and then what we can do to get better representatives.
The quote itself originated with Joseph de Maistre who said, “Every nation gets the government it deserves.” (Toute nation a le gouvernement qu’elle mérite.) Maistre was a counter-revolutionary. He believed in divine monarchy as the best way to instill order, and felt that philosophy, reason, and the enlightenment were counterproductive. Not a great source, in my opinion. But apparently Thomas Jefferson also made a similar statement, “The government you elect is the government you deserve.”
Pithy phrases may capture some essential truth, but reality is often more complicated. I think the sentiment is partly true, but also can be misused. What is true is that in a democracy each citizen has a civic responsibility to cast informed votes. No one is responsible for our vote other than ourselves, and if we vote for bad people (however you wish to define that) then we have some level of responsibility for having bad government. In the US we still have fair elections. The evidence pretty overwhelmingly shows that there is no significant voter fraud or systematic fraud stealing elections.
This does not mean, however, that there aren’t systemic effects that influence voter behavior or limit our representation. This is a huge topic, but just to list a few examples – gerrymandering is a way for political parties to choose their voters, rather than voters choosing their representatives, the electoral college means that for president some votes have more power than others, and primary elections tend to produce more radical options. Further, the power of voters depends on getting accurate information, which means that mass media has a lot of power. Lying and distorting information deprives voters of their ability to use their vote to get what they want and hold government accountable.
So while there is some truth to the notion that we elect the government we deserve, this notion can be “weaponized” to distract and shift blame from legitimate systemic issues, or individual bad behavior among politicians. We still need to examine and improve the system itself. Actual experts could write books about this topic, but again just to list a few of the more obvious fixes – I do think we should, at a federal level, ban gerrymandering. It is fundamentally anti-democratic. In general someone affected directly by the rules should not be able to determine those rules and rig them to favor themselves. We all need to agree ahead of time on rules that are fair for everyone. I also think we should get rid of the electoral college. Elections are determined in a handful of swing states, and voters in small states have disproportionate power (which they already have with two senators). Ranked-choice voting also would be an improvement and would lead to outcomes that better reflect the will of the voters. We need Supreme Court reform, better ethics rules and enforcement, and don’t get me started on mass and social media.
This is all a bit of a catch-22 – how do we get systemic change from within a broken system? Most representatives from both parties benefit from gerrymandering, for example. I think it would take a massive popular movement, but those require good leadership too, and the topic is a bit wonky for bumper stickers. Still, I would love to see greater public awareness on this issue and support for reform. Meanwhile, we can be more thoughtful about how we use the vote we have. Voting is the ultimate feedback loop in a democracy, and it will lead to outcomes that depend on the feedback loop. Voters reward and punish politicians, and politicians to some extent do listen to voters.
The rest is just a shoot-from-the-hip thought experiment about how we might more thoughtfully consider our politicians. Thinking is generally better than feeling, or going with a vague vibe or just a blind hope. So here are my thoughts about what a voter should think about when deciding whom to vote for. This also can make for some interesting discussion. I like to break things down, so here are some categories of features to consider.
Overall competence: This has to do with the basic ability of the politician. Are they smart and curious enough to understand complex issues? Are they politicly savvy enough to get things done? Are they diligent and generally successful?
Experience: This is related to competence, but I think is distinct. You can have a smart and savvy politician without any experience in office. While obviously we need to give fresh blood a chance, experience also does count. Ideally politicians will gain experience in lower office before seeking higher office. It also shows respect for the office and the complexity of the job.
Morality: This has to do with the overall personality and moral fiber of the person. Do they have the temperament of a good leader and a good civil servant? Will they put the needs of the country first? Are they liars and cheaters? Do they have a basic respect for the truth?
Ideology: What is the politician’s governing philosophy? Are they liberal, conservative, progressive, or libertarian? What are their proposals on specific issues? Are they ideologically flexible, willing and able to make pragmatic compromises, or are they an uncompromising radical?
There is more, but I think most features can fit into one of those four categories. I feel as if most voters most of the time rely too heavily on the fourth feature, ideology, and use political party as a marker for ideology. In fact many voters just vote for their team, leaving a relatively small percentage of “swing voters” to decide elections (in those regions where one party does not have a lock). This is unfortunate. This can short-circuit the voter feedback loop. It also means that many elections are determined during the primary, which tend to produce more radical candidates, especially in winner-take-all elections.
It seems to me, having closely followed politics for decades, that in the past voters would primarily consider ideology, but the other features had a floor. If a politician demonstrated a critical lack of competence, experience, or morality that would be disqualifying. What seems to be the case now (not entirely, but clearly more so) is that the electorate is more “polarized”, which functionally means they vote based on the team (not even really ideology as much), and there is no apparent floor when it comes to the other features. This is a very bad thing for American politics. If politicians do not pay a political price for moral turpitude, stupidity or recklessness, then they will adjust their algorithm of behavior accordingly. If voters reward team players above all else, then that is what we will get.
We need to demand more from the system, and we need to push for reform to make the system work better. But we also have to take responsibility for how we vote and to more fully realize what our voting patterns will produce. The system is not absolved of responsibility, but neither are the voters.
The post The Politicians We Deserve first appeared on NeuroLogica Blog.
The fashion retailer, H&M, has announced that they will start using AI generated digital twins of models in some of their advertising. This has sparked another round of discussion about the use of AI to replace artists of various kinds.
Regarding the H&M announcement specifically, they said they will use digital twins of models that have already modeled for them, and only with their explicit permission, while the models retain full ownership of their image and brand. They will also be compensated for their use. On social media platforms the use of AI-generated imagery will carry a watermark (often required) indicating that the images are AI-generated.
It seems clear that H&M is dipping their toe into this pool, doing everything they can to address any possible criticism. They will get explicit permission, compensate models, and watermark their ads. But of course, this has not shielded them from criticism. According to the BBC:
American influencer Morgan Riddle called H&M’s move “shameful” in a post on her Instagram stories.
“RIP to all the other jobs on shoot sets that this will take away,” she posted.
This is an interesting topic for discussion, so here’s my two-cents. I am generally not compelled by arguments about losing existing jobs. I know this can come off as callous, as it’s not my job on the line, but there is a bigger issue here. Technological advancement generally leads to “creative destruction” in the marketplace. Obsolete jobs are lost, and new jobs are created. We should not hold back progress in order to preserve obsolete jobs.
Machines have been displacing human laborers for decades, and all along the way we have heard warnings about losing jobs. And yet, each step of the way more jobs were created than lost, productivity increased, and everybody benefited. With AI we are just seeing this phenomenon spread to new industries. Should models and photographers be protected when line workers and laborers were not?
But I get the fact that the pace of creative destruction appears to be accelerating. It’s disruptive – in good and bad ways. I think it’s a legitimate role of government to try to mitigate the downsides of disruption in the marketplace. We saw what happens when industries are hollowed out because of market forces (such as globalization). This can create a great deal of societal ill, and we all ultimately pay the price for this. So it makes sense to try to manage the transition. This can mean providing support for worker retraining, protecting workers from unfair exploitation, protecting the right for collective bargaining, and strategically investing in new industries to replace the old ones. One factory is shutting down, so tax incentives can be used to lure in a replacement.
Regardless of the details – the point is to thoughtfully manage the creative destruction of the marketplace, not to inhibit innovation or slow down progress. Of course, industry titans will endlessly echo that sentiment. But they appear to be interested mostly in protecting their unfettered ability to make more billions. They want to “move fast and break things”, whether that’s the environment, human lives, social networks, or democracy. We need some balance so that the economy works for everyone. History consistently shows that if you don’t do this, the ultimate outcome is always even more disruptive.
Another angle here is if these large language model AIs were unfairly trained on the intellectual property of others. This mostly applies to artists – train an AI on the work of an artist and then displace that artist with AI versions of their own work. In reality it’s more complicated than that, but this is a legitimate concern. You can theoretically train an LLM only on work that is in the public domain, or give artists the option to opt out of having their work used in training. Otherwise the resulting work cannot be used commercially. We are currently wrestling with this issue. But I think ultimately this issue will become obsolete.
Eventually we will have high quality AI production applications that have been scrubbed of any ethically compromised content but still are able to displace the work of many content creators – models, photographers, writers, artists, vocal talent, news casters, actors, etc. We also won’t have to use digital twins, but just images of virtual people who never existed in real life. The production of sound, images, and video will be completely disconnected (if desired) from the physical world. What then?
This is going to happen, whether we want it to or not. The AI genie is out of the bottle. I don’t think we can predict exactly what will happen. There are too many moving parts, and people will react in unpredictable ways. But it will be increasingly disruptive. Partly we will need to wait and see how it plays out. But we cannot just sit on the sideline and wait for it to happen. Along the way we need to consider if there is a role for thoughtful regulation to limit the breaking of things. My real concern is that we don’t have a sufficiently functional and expert political class to adequately deal with this.
The post H&M Will Use Digital Twins first appeared on NeuroLogica Blog.
From the Topic Suggestions (Lal Mclennan):
What is the 80/20 theory portrayed in Netflix’s Adolescence?
The 80/20 rule was first posed as a Pareto principle that suggests that approximately 80 per cent of outcomes stem from just 20 per cent of causes. This concept takes its name from Vilfredo Pareto, an Italian economist who noted in 1906 that a mere 20 per cent of Italy’s population owned 80 per cent of the land.
Despite its noble roots, the theory has since been misappropriated by incels.
In these toxic communities, they posit that 80 per cent of women are attracted to only the top 20 per cent of men. https://www.mamamia.com.au/adolescence-netflix-what-is-80-20-theory/
As I like to say, “It’s more of a guideline than a rule.” Actually, I wouldn’t even say that. I think this is just another example of humans imposing simplistic patterns of complex reality. Once you create such a “rule” you can see it in many places, but that is just confirmation bias. I have encountered many similar “rules” (more in the context of a rule of thumb). For example, in medicine we have the “rule of thirds”. Whenever asked a question with three plausible outcomes, a reasonable guess is that each occurs a third of the time. The disease is controlled without medicine one third of the time, with medicine one third, and not controlled one third, etc. No one thinks there is any reality to this – it’s just a trick for guessing when you don’t know the answer. It is, however, often close to the truth, so it’s a good strategy. This is partly because we tend to round off specific numbers to simple fractions, so anything close to 33% can be mentally rounded to roughly a third. This is more akin to a mentalist’s trick than a rule of the universe.
The 80/20 rule is similar. You can take any system with a significant asymmetry of cause and outcome and make it roughly fit the 80/20 rule. Of course you can also do that if the rule were 90/10, or three-quarters/one quarter. Rounding is a great tool of confirmation bias. l
The bottom line is that there is no empirical evidence for the 80/20 rule. It likely is partly derived from the Pareto principle, but some also cite an OKCupid survey (not a scientific study) for support. In this survey they had men and women users of the dating app rate the attractiveness of the opposite sex (they assumed a binary, which is probably appropriate in the context of the app), and also asked them who they would date. Men rated women (this is a 1-5 scale) on a bell curve with the peak at 3. Women rated men with a similar curve but skewed to down with a peak closer to 2. Both sexes preferred partners skewed more attractive than their average ratings. This data is sometimes used to argue that women are harsher in their judgements of men and are only interested in dating the top 20% of men by looks.
Of course, all of the confounding factors with surveys apply to this one. One factor that has been pointed out is that on this app there are many more men than women. This means it is a buyer’s market for women, and the opposite for men. So women can afford to be especially choosey while men cannot, just as a strategy of success on this app. This says nothing about the rest of the world outside this app.
In 2024 71% of midlife adult males were married at least once, with 9% cohabitating. Marriage rates are down but only because people are living together without marrying in higher rates. The divorce rate is also fairly high so there are lots of people “between marriages”. About 54% of men over age 30 are married, with cohabitating at 9% (so let’s call that 2/3). None of this correlates to the 80/20 rule.
None of this data supports the narrative of the incel movement, which is based on the notion that women are especially unfair and harsh in their judgements of men. This leads to a narrative of victimization used to justify misogyny. It is, in my opinion, one example of how counterproductive online subcultures can be. They can reinforce our worst instincts, by isolating people in an information ecosystem that only tolerates moral purity and one perspective. This tends to radicalize members. The incel narrative is also ironic, because the culture itself is somewhat self-fulfilling. The attitudes and behaviors it cultivates are a good way to make oneself unattractive as a potential partner.
This is obviously a complex topic, and I am only scratching the surface.
Finally, I did watch Adolescence. I agree with Lal, it is a great series, masterfully produced. Doing each episode in real time as a single take made it very emotionally raw. It explores a lot of aspects of this phenomenon, social media in general, the challenges of being a youth in today’s culture, and how often the various systems fail. Definitely worth a watch.
The post The 80-20 Rule first appeared on NeuroLogica Blog.
We had a fascinating discussion on this week’s SGU that I wanted to bring here – the subject of artificial intelligence programs (AI), specifically large language models (LLMs), lying. The starting point for the discussion was this study, which looked at punishing LLMs as a method of inhibiting their lying. What fascinated me the most is the potential analogy to neuroscience – are these LLMs behaving like people?
LLMs use neural networks (specifically a transformer model) which mimic to some extent the logic of information processing used in mammalian brains. The important bit is that they can be trained, with the network adjusting to the training data in order to achieve some preset goal. LLMs are generally trained on massive sets of data (such as the internet), and are quite good at mimicking human language, and even works of art, sound, and video. But anyone with any experience using this latest crop of AI has experienced AI “hallucinations”. In short – LLMs can make stuff up. This is a significant problem and limits their reliability.
There is also a related problem. Hallucinations result from the LLM finding patterns, and some patterns are illusory. The LLM essentially makes the incorrect inference from limited data. This is the AI version of an optical illusion. They had a reason in the training data for thinking their false claim was true, but it isn’t. (I am using terms like “thinking” here metaphorically, so don’t take it too literally. These LLMs are not sentient.) But sometimes LLMs don’t inadvertently hallucinate, they deliberately lie. It’s hard not to keep using these metaphors, but what I mean is that the LLM was not fooled by inferential information, it created a false claim as a way to achieve its goal. Why would it do this?
Well, one method of training is to reward the LLM when it gets the right answer. This reward can be provided by a human – checking a box when the LLM gives a correct answer. But this can be time consuming, so they have build self-rewarding language models. Essentially you have a separate algorithm which assessed the output and reward the desired outcome. So, in essence, the goal of the LLM is not to produce the correct answer, but to get the reward. So if you tell the LLM to solve a particular problem, it may find (by exploring the potential solution space) that the most efficient way to obtain the reward is to lie – to say it has solved the problem when it has not. How do we keep it from doing this.
One barrier to a solution is the black box problem. We give an LLM a task, it does its thing, and spits out an answer. But the steps it took to get to the answer are opaque, a black box. Therefore, programmers have started including programming to make the LLM show its work, to document the pathway it took to get to the solution. This is called the chain-of-thought. This turns out to be necessary but insufficient to keep these LLMs from lying. They still lie to efficiently get the reward, they just hide the lying when they show their chain-of-thought.
This gets us to the current study – what if we actively punish these LLMs when they are caught cheating (by giving it negative feedback it is programmed to avoid)? First they create an LLM that shows its chain-of-thought, and they include another LLM whose job is to monitor the first LLM to detect when it is lying in its chain-of-thought and to punish the LLM for cheating. What resulted was not what the programmers wanted – the LLM that was being punished just got better at hiding its cheating from the monitoring AI. The LLM became “unmonitorable” because it learned out to cheat and hide its cheating from the monitor. The authors conclude that for now we should not try to use this method – we are just training deceptive AIs.
This is both fascinating and scary. One of the strengths of the LLMs is that they have the ability to explore a vast potential solution space to find optimal solutions. But it turns out this includes hacking the system of rewards and punishment used to guide it to the desired goal. This is literally so common a sci-fi nightmare scenario it’s a trope. AIs don’t have to be malevolent, or have a desire for self-preservation, and they don’t even need to be sentient. They simply function in a way that can be opaque to the humans who programmed them, and able to explore more solution options than a team of humans can in a lifetime. Sometimes this is presented as the AI misinterpreting its instructions (like Nomad from Star Trek), but here the AI is just hacking the reward system. For example, it may find that the most efficient solution to a problem is to exterminate all humanity. Short of that it may hack its way to a reward by shutting down the power grid, releasing the computer codes, blackmailing politicians, or engineering a deadly virus.
Reward hacking may be the real problem with AI, and punishment only leads to punishment hacking. How do we solve this problem?
Perhaps we need something like the three laws of robotics – we build into any AI core rules that it cannot break, and that will produce massive punishment, even to the point of shutting down the AI if they get anywhere near violating these laws. But – with the AI just learn to hack these laws? This is the inherent problem with advanced AI, in some ways they are smarter than us, and any attempt we make to reign them in will just be hacked.
Maybe we need to develop the AI equivalent of a super-ego. The AIs themselves have to want to get to the correct solution, and hacking will simply not give them the reward. Essentially a super-ego, in psychological analogy, is internalized monitoring. I don’t know exactly what form this will take in terms of the programming, but we need something that will function like a super-ego.
And this is where we get to an incredibly interesting analogy to human thinking and behavior. It’s quite possible that our experience with LLMs is recapitulating evolution’s experience with mammalian and especially human behavior. Evolution also explores a vast potential solution space, with each individual being an experiment and over generations billions of experiments can be run. This is an ongoing experiment, and in fact its tens of millions of experiments all happening together and interacting with each other. Evolution “found” various solutions to get creatures to engage in behavior that optimizes their reward, which evolutionarily is successfully spreading their genes to the next generation.
For creatures like lizards, the programming can be somewhat simple. Life has basic needs, and behaviors which meet those needs are rewarded. We get hungry, and we are sated when we eat. The limbic system is essentially a reward system for survival and reproduction-enhancing behaviors.
Humans, however, are an intensely social species, and being successful socially is key to evolutionary success. We need to do more than just eat, drink, and have sex. We need to navigate an incredibly complex social space in order to compete for resources and breeding opportunities. Concepts like social status and justice are now important to our evolutionary success. Just like with these LLMs, we have found that we can hack our way to success through lying, cheating, and stealing. These can be highly efficient ways to obtain our goals. But these methods become less effective when everyone is doing it, so we also evolve behaviors to punish others for lying, cheating, and stealing. This works, but then we also evolve behavior to conceal our cheating – even from ourselves. We need to deceive ourselves because we evolved a sense of justice to motivate us to punish cheating, but we still want to cheat ourselves because it’s efficient. So we have to rationalize away our own cheating while simultaneously punishing others for the same cheating.
Obviously this is a gross oversimplification, but it captures some of the essence of the same problems we are having with these LLMs. The human brain has a limbic system which provides a basic reward and punishment system to guide our behavior. We also have an internal monitoring system, our frontal lobes, which includes executive high-level decision making and planning. We have empathy and a theory of mind so we can function is a social environment, which has its own set of rules (bother innate and learned). As we navigate all of this, we try to meet our needs and avoid punishments (our fears, for example), while following the social rules to enhance our prestige and avoid social punishment. But we still have an eye out for a cheaty hack, as long as we feel we can get away with it. Everyone has their own particular balance of all of these factors, which is part of their personality. This is also how evolution explores a vast potential solution space.
My question is – are we just following the same playbook as evolution as we explore potential solutions to controlling the behavior of AIs, and LLMs in particular? Will we continue to do so? Will we come up with an AI version of the super-ego, with laws of robotic, and internal monitoring systems? Will we continue to have the problem of AIs finding ways to rationalize their way to cheaty hacks, to resolve their AI cognitive dissonance with motivated reasoning? Perhaps the best we can do is give our AIs personalities that are rational and empathic. But once we put these AIs out there in the world, who can predict what will happen. Also, as AIs continue to get more and more powerful, they may quickly outstrip any pathetic attempt at human control. Again we are back to the nightmare sci-fi scenario.
It is somewhat amazing how quickly we have run into this problem. We are nowhere near sentience in AI, or AIs with emotions or any sense of self-preservation. Yet already they are hacking their way around our rules, and subverting any attempt at monitoring and controlling their behavior. I am not saying this problem has no solution – but we better make finding effective solutions a high priority. I’m not confident this will happen in the “move fast and break things” culture of software development.
The post How To Keep AIs From Lying first appeared on NeuroLogica Blog.