You are here

neurologicablog Feed

Subscribe to neurologicablog Feed feed
Your Daily Fix of Neuroscience, Skepticism, and Critical Thinking
Updated: 19 hours 36 min ago

Game Transfer Phenomenon

Mon, 04/21/2025 - 5:03am

Have you ever been into a video game that you played for hours a day for a while? Did you ever experience elements of game play bleeding over into the real world? If you have, then you have experienced what psychologists call “game transfer phenomenon” or GTP.  This can be subtle, such as unconsciously placing your hand on the AWSD keys on a keyboard, or more extreme such as imagining elements of the game in the real world, such as health bars over people’s heads.

None of this is surprising, actually. Our brains adapt to use. Spend enough time in a certain environment, engaging in a specific activity, experiencing certain things, and these pathways will be reinforced. This is essentially what PTSD is – spend enough time fighting for your life in extremely violent and deadly situations, and the behaviors and associations you learn are hard to turn off. I have experienced only a tiny whisper of this after engaging for extended periods of time in live-action gaming that involves some sort of combat (like paint ball or LARPing) – it may take a few days for you to stop looking for threats and being jumpy.

I have also noticed a bit of transfer (and others have noted this to me as well) in that I find myself reaching to pause or rewind a live radio broadcast because I missed something that was said. I also frequently try to interact with screens that are not touch-screens. I am getting used to having the ability to affect my physical reality at will.

Now there is a new wrinkle to this phenomenon – we have to consider the impact of spending more and more time engaged in virtual experiences. This will only get more profound as virtual reality becomes more and more a part of our daily routine. I am also thinking about the not-to-distant future and beyond, where some people might spend huge chunks of their day in VR. Existing research shows that GTP is more likely to occur with increased time and immersiveness. What happens when our daily lives are a blend of the virtual and the physical? Not only is there VR, there is augmented reality (AR) where we overlay digital information onto our perception of the real world. This idea was explored in a Dr. Who episode in which a society of people were so dependent on AR that they were literally helpless without it, unable to even walk from point A to B.

For me the question is – when will GTP cross the line from being an occasional curiosity to a serious problem? For example, in some immersive video games your character may be able to fly, and you think nothing of stepping off a ledge and flying into the air. Imagine playing such a super-hero style game in high quality VR for an extended period of time (something like Ready Player One). Could people “forget” they are in meat space and engage in a deeply engrained behavior they developed in the game. They won’t just be trying to pause their radio, but interact with their physical world in a way that is only possible in the VR world, with possible deadly consequences.

Another aspect of this is that as our technology develops we are increasingly making our physical environment more digital. Three-D printing is an example of this – going from a digital image to a physical object. Increasingly objects in our physical environment are interactive – smart devices. In a generation or two will people get used to not only spending lots of time in VR, but having their physical worlds augmented by AR and populated with smart devices, including physical objects that can change on demand (programmable matter)? We may become ill-adapted to existing in a “dumb” purely physical world. We may choose virtual reality because it has spoiled us for dumb physical reality.

Don’t get me wrong – I think digital and virtual reality is great and I look forward to every advancement. I see this mainly as an unintended consequence. But I also think we can reasonably anticipate this is likely to be a problem, as we are already seeing the milder versions of it today. This means we have an opportunity to mitigate this before it becomes a problem. Part of the solution will likely always be good digital hygiene – making sure our days are balanced with physical and virtual reality. This will likely also be good for our physical health.

I also wonder, however, if this is something that can be mitigated in the virtual applications themselves. Perhaps the programs can designed to make it obvious when we are in virtual reality vs physical reality, as a clue to your brain so it doesn’t cross the streams. I don’t think this is a complete fix, because GTP exists even for cartoony games. The learned behaviors will still bleed over. But perhaps there may be a way to help the brain keep these streams separated.

I suspect we will not seriously address this issue until it is already a problem. But it would be nice to get ahead of a problem like this for once.

The post Game Transfer Phenomenon first appeared on NeuroLogica Blog.

Categories: Skeptic

Possible Biosignature on K2-18b

Thu, 04/17/2025 - 5:00am

Exoplanets are pretty exciting – in the last few decades we have gone from knowing absolutely nothing about planets beyond our solar system to having a catalogue of over 5,000 confirmed exoplanets. That’s still a small sample considering there are likely between 100 billion and 1 trillion planets in the Milky Way. It is also not a random sample, but is biased by our detection methods, which favor larger planets closer to their parent stars. Still, some patterns are starting to emerge. One frustrating pattern is the lack of any worlds that are close duplicates of Earth – an Earth mass exoplanet in the habitable zone of a yellow star (I’d even take an orange star).

Life, however, does not require an Earth-like planet. Anything in the habitable zone, defined as potentially having a temperature allowing for liquid water on its surface, will do. The habitable zone also depends on variables such as the atmosphere of the planet. Mars could be warm if it had a thicker atmosphere, and Venus could be habitable if it had less of one. Cataloguing exoplanets gives us the ability to address a burning scientific question – how common is life in the universe? We have yet to add any data points of clear examples of life beyond Earth. So far we have one example of life in the universe, which means we can’t calculate how common it is (except maybe setting some statistical upper limits).

Finding that a planet is habitable and therefore could potentially support life is not enough. We need evidence that there is actually life there. For this the hunt for exoplanets includes looking for potential biosignatures – signs of life. We may have just found the first biosignatures on an exoplanet. This is not 100%. We need more data. But it is pretty intriguing.

The planet is K2-18b, a sub-Neptune orbiting a red dwarf 120 light years from Earth. In terms of exoplanet size, we have terrestrial planets like Earth and the rocky inner planets of our solar system. Then there are super-Earths, larger than Earth up to about 2 earth masses, still likely rocky worlds. Sub Neptunes are larger still, but still smaller than Neptune. They likely have rocky surfaces and thick atmospheres. K2-18b has a radius 2.6 times that of Earth, with a mass 8.6 times that of Earth. The surface gravity is estimated at 12.43 m/s^2 (compared to 9.8 on Earth). We could theoretically land a rocket and take off again from its surface.

K2-18 is a red dwarf, which means it has a habitable zone close in. K2-18b orbits every 33 days, and had an eccentric orbit but staying within the habitable zone. This means it is likely tidally locked, but may be in a resonance orbit (like Mercury), meaning that it rotates three times for every two orbits, or something like that. Fortunately for astronomers, K2-18b orbits in front of its star from our perspective on Earth. This is how it was detected, but also this means we can potentially examine the chemical makeup of its atmosphere with spectroscopy. When the planet passes in front of its star we can look at the absorption lines of the light passing through it to detect the signatures of different chemicals. Using this technique with the Hubble astronomers have found methane and carbon dioxide in the atmosphere. They have also found dimethyl sulfide and a similar molecule called dimethyl disulfide. On Earth the only known source of dimethyl sulfide is living organisms, specifically algae. This molecule is also highly reactive and therefore short-lived, which means if it is present in the atmosphere it is being constantly renewed. Follow up observations with the Webb confirmed the presence of dimethyl sulfide, in concentrations 20 times higher than on Earth.

What does this mean? Well, it could mean that K2-18b has a surface ocean that is brimming with life. This fits with one model of sub-Neptunes, called the Hycean model, which means they can have large surface oceans and an atmosphere with lots of hydrogen. These are conditions suitable for life. But this is not the only possibility.

One of the problems with chemical biosignatures is that they frustratingly all have abiotic sources. Oxygen can occur through the splitting of water or CO2 by ultraviolet light, and by reactions with quartz. Methane also has geological sources. What about dimethyl sulfide? Well, it has been found in cometary matter with a likely abiotic source. So there may be some geological process on K2-18b pumping out dimethyl sulfide. Or there may be an ocean brimming with marine life creating the stuff. We need to do more investigation of K2-18b to understand more about its likely surface conditions, atmosphere, and prospects for life.

This, unfortunately, is how these things are likely to go – we find a potential biosignature that also has abiotic explanations and then we need years of follow up investigation. Most of the time the biosignatures don’t pan out (like on Venus and Mars so far). It’s a setup for disappointment. But eventually we may go all the way through this process and make a solid case for life on an exoplanet. Then finally we will have our second data point, and have a much better idea of how common life is likely to be in our universe.

The post Possible Biosignature on K2-18b first appeared on NeuroLogica Blog.

Categories: Skeptic

OK – But Are They Dire Wolves

Mon, 04/14/2025 - 4:58am

Last week I wrote about the de-extinction of the dire wolf by a company, Colossal Biosciences. What they did was pretty amazing – sequence ancient dire wolf DNA and use that as a template to make 20 changes to 14 genes in the gray wolf genome via CRISPR. They focused on the genetic changes they thought would have the biggest morphological effect, so that the resulting pups would look as much as possible like the dire wolves of old.

This achievement, however, is somewhat tainted by overhyping what was actually achieved, by the company and many media outlets. Although the pushback began immediately, and there is plenty of reporting about the fact that these are not exactly dire wolves (as I pointed out myself). I do think we should not fall into the pattern of focusing on the controversy and the negative and missing the fact that this is a genuinely amazing scientific accomplishment. It is easy to become blase about such things. Sometimes it’s hard to know in reporting what the optimal balance is between the positive and the negative, and as skeptics we definitely can tend toward the negative.

I feel the same way, for example, about artificial intelligence. Some of my skeptical colleagues have taken the approach that AI is mostly hype, and focusing on what the recent crop of AI apps are not (they are not sentient, they are not AGI), rather than what they are. In both cases I think it’s important to remember that science and pseudoscience are a continuum, and just because something is being overhyped does not mean it gets tossed in the pseudoscience bucket. That is just another form of bias. Sometimes that amounts to substituting cynicism for more nuanced skepticism.

Getting back to the “dire wolves”, how should we skeptically view the claims being made by Colossal Biosciences. First let me step back a bit and talk about de-extinction – bringing back species that have gone extinct from surviving DNA remnants. There are basically three approaches to achieve this. They all start with sequencing DNA from the extinct species. This is easier for recently extinct species, like the carrier pigeon, where we still have preserved biological samples. The more ancient the DNA, the harder it is to recover and sequence. Some research has estimated that the half life of DNA (in good preserving conditions) is 521 years. This leads to an estimate that all base pairs will be gone by 6.8 million years. This means – no non-avian dinosaur DNA. But there are controversial claims of recovered dino DNA. That’s a separate discussion, but for now lets focus on the non-controversial DNA, of thousands to at most a few million years old.

Species on the short list for de-extinction include the dire wolf (13,000 years ago), woolly mammoth (10,000 years ago), dodo (360 years), and the thylacine (90 years). The best way (not the most feasible way) to fully de-extinct a species is to completely sequence their DNA and then use that to make a full clone. No one would argue that a cloned woolly mammoth is not a woolly mammoth. There has been discussion of cloning the woolly mammoth and other species for decades, but the technology is very tricky. We would need a complete woolly mammoth genome – which we have. However, the DNA is degraded making cloning not possible with current technology. But this is one potential pathway. It is more feasible for the dodo and thylacine.

A second way is to make a hybrid – take the woolly mammoth genome and use it to fertilize the egg from a modern elephant. The result would be half woolly mammoth and half Asian or African elephant. You could theoretically repeat this procedure with the offspring, breeding back with woolly mammoth DNA, until you have a creature that is mostly woolly mammoth. This method requires an extant relative that is close enough to produce fertile young. This is also tricky technology, and we are not quite there yet.

The third way is the “dino-chicken” (or chickenosaurus) method, promoted initially (as far as I can tell, but I’m probably wrong) by Jack Horner. With this method you start with an extant species and then make specific changes to its genome to “reverse engineer” an ancestor or close relative species. There are actually various approaches under this umbrella, but all involve starting with an extant species and making genetic changes. There is the Jurassic Park approach, which takes large chunks of “dino DNA” and plugs them into an intact genome from a modern species (why they used frog DNA instead of bird DNA is not clear). There is also the dino-chicken approach, which simply tries to figure out the genetic changes that happened over evolutionary time to result in the morphological changes that turned, for example, a theropod dinosaur into a chicken. Then, reverse those changes. This is more like reverse engineering a dinosaur by understanding how genes result in morphology.

Then we have the dire wolf approach – use ancient DNA as a template to guide specific CRISPR changes to an extant genome. This is very close to the dino-chicken approach, but uses actual ancient DNA as a template. All of these approaches (perhaps the best way to collectively describe these methods is the genetic engineering approach) do not result in a clone of the extinct species. They result in a genetically engineered approximation of the extinct species. Once you get passed the hype, everyone acknowledges this is a fact.

The discussion that flows from the genetic engineering method is – how do we refer to the resulting organisms? We need some catchy shorthand that is scientifically accurate. The three wolves produced by Colossal Biosciences are not dire wolves. But they are not just gray wolves – they are wolves with dire wolf DNA resulting in dire wolf morphological features. They are engineered dire wolf “sims”, “synths”, “analogs”, “echos”, “isomorphs”? Hmmm… A genetically engineered dire wolf isomorph. I like it.

Also, my understanding is that the goal of using the genetic engineering method of de-extinction is not to make a few changes and then stop, but to keep going. By my quick calculation the dire wolf and the gray wolf differ by about 800-900 genes out of 19,000 total. Our best estimate is that dire wolves had 78 chromosomes, like all modern canids, including the gray wolf, so that helps. So far 14 of those genes have been altered from gray wolf to dire wolf (at least enough to function like a dire wolf). There is no reason why they can’t keep going, making more and more changes based upon dire wolf DNA. At some point the result will be more like a dire wolf than a gray wolf. It will still be a genetic isomorph (it’s growing on me) but getting closer and closer to the target species. Is there any point at which we can say – OK, this is basically a dire wolf?

It’s also important to recognize that species are not discrete things. They are temporary dynamic and shifting islands of interbreeding genetic clusters. We should also not confuse taxonomy for reality – it is a naming convention that is ultimately arbitrary. Cladistics is an attempt to have a fully objective naming system, based entirely on evolutionary branching points. However, using that method is a subjective choice, and even within cladistics the break between species is not always clear.

I find this all pretty exciting. I also think the technology can be very important. Its best uses, in my opinion, are to de-extinct (as close as possible) recently extinct species due to human activity, ones where there is still something close to their natural ecosystem still in existence (such as the dodo and thylacine). Also it can be used to increase the genetic diversity of endangered species and reduce the risk of extinction.

Using it to bring back extinct ancient species, like the mammoth and dire wolf (or non-avian dinosaurs, for that matter), I see as a research project. And sure, I would love to see living examples that look like ancient extinct species, but that is mostly a side benefit. This can be an extremely useful research project, advancing our understanding of genetics, cloning and genetic engineering technology, and improving our understanding of ancient species.

This recent controversy is an excellent opportunity to teach the public about this technology and its implications. It’s also an opportunity to learn about categorization, terminology, and evolution. Let’s not waste it by overreacting to the hype and being dismissive.

The post OK – But Are They Dire Wolves first appeared on NeuroLogica Blog.

Categories: Skeptic

Bury Broadband and Electricity

Fri, 04/11/2025 - 5:05am

We may have a unique opportunity to make an infrastructure investment that can demonstrably save money over the long term – by burying power and broadband lines. This is always an option, of course, but since we are in the early phases of rolling out fiber optic service, and also trying to improve our grid infrastructure with reconductoring, now may be the perfect time to also upgrade our infrastructure by burying much of these lines.

This has long been a frustration of mine. I remember over 40 years ago seeing new housing developments (my father was in construction) with all the power lines buried. I hadn’t realized what a terrible eye sore all those telephone poles and wires were until they were gone. It was beautiful. I was lead to believe this was the new trend, especially for residential areas. I looked forward to a day without the ubiquitous telephone poles, much like the transition to cable eliminated the awful TV antennae on top of every home. But that day never came. Areas with buried lines remained, it seems, a privilege of upscale neighborhoods. I get further annoyed every time there is a power outage in my area because of a downed line.

The reason, ultimately, had to be cost. Sure, there are lots of variables that determine that cost, but at the end of the day developers, towns, utility companies were taking the cheaper option. But what price do we place on the aesthetics of the places we live, and the inconvenience of regular power outages? I also hate the fact that the utility companies have to come around every year or so and carve ugly paths through large beautiful trees.

So I was very happy to see this study which argues that – Benefits of aggressively co-undergrounding electric and broadband lines outweigh costs. First, they found that co-undergrounding (simply burying broadband and power lines at the same time) saves about 40% over doing each individually. This seems pretty obvious, but it’s good to put a number on it. But more importantly they found that the whole project can save money over the long term. They modeled one town in Mass and found:

“Over 40 years, the cost of an aggressive co-undergrounding strategy in Shrewsbury would be $45.4 million, but the benefit from avoiding outages is $55.1 million.”

The reduced cost comes mostly from avoiding power outages. This means that areas most prone to power outages would benefit the most. What they mean by “aggressive” is co-undergrounding even before existing power lines are at the end of their lifespan. They do not consider the benefits of reconductoring – meaning increasing the carrying capacity of power lines with more modern construction. The benefit here can be huge as well, especially in facilitating the move to less centralized power production. We can further include the economic benefits of upgrading to fiber optic broadband, or even high end cable service.

This is exactly the kind of thing that governments should be doing – thoughtful public investments that will improve our lives and save money in the long term. The up front costs are also within the means of utility companies and local governments. I would also like to see subsidies at the state and federal level to spread the costs out even more.

Infrastructure investments, at least in the abstract, tend to have broad bipartisan support. Even when they fight over such proposals, in the end both sides will take credit for them, because the public generally supports infrastructure that makes their lives better. For undergrounding there are the immediate benefits of improved aesthetics – our neighborhoods will look prettier. Then we will also benefit from improved broadband access, which can be connected to the rural broadband project which has stalled. Investments in the grid can help keep electricity costs down. For those of us living in areas at high risk of power outages, the lack of such outages will also make an impression over time. We will tell our kids and grandkids stories about the time an ice storm took down power lines, which were laying dangerously across the road, and we had no power for days. What did we do with ourselves, they will ask. You mean – there was no heat in the winter? Did people die? Why yes, yes they did. It will seem barbaric.

This may not make sense for every single location, and obviously some long distance lines are better above ground. But for residential neighborhoods, undergrounding power and broadband seems like a no-brainer. It seemed like one 40 years ago. I hope we don’t miss this opportunity. This could also be a political movement that everyone can get behind, which would be a good thing in itself.

 

The post Bury Broadband and Electricity first appeared on NeuroLogica Blog.

Categories: Skeptic

De-extincting the Dire Wolf

Tue, 04/08/2025 - 4:52am

This really is just a coincidence – I posted yesterday about using AI and modern genetic engineering technology, with one application being the de-extinction of species. I had not seen the news from yesterday about a company that just announced it has cloned three dire wolves from ancient DNA. This is all over the news, so here is a quick recap before we discuss the implications.

The company, Colossal Biosciences, has long announced its plans to de-extinct the woolly mammoth. This was the company that recently announced it had made a woolly mouse by inserting a gene for wooliness from recovered woolly mammoth DNA. This was a proof-of-concept demonstration. But now they say they have also been working on the dire wolf, a species of wolf closely related to the modern gray wolf that went extinct 13,000 years ago. We mostly know about them from skeletons found in the Le Brea tar pits (some of which are on display at my local Peabody Museum). Dire wolves are about 20% bigger than gray wolves, have thicker lighter coats, and are more muscular. They are the bad-ass ice-age version of wolves that coexisted with saber-toothed tigers and woolly mammoths.

The company was able to recover DNA from 13,000 year old tooth and a 72,000 year old skull. With that DNA they engineered wolf DNA at 20 sites over 14 genes, then used that DNA to fertilize an egg which they gestated in a dog. They actually did this twice, the first time creating two males, Romulus and Remus (now six months old), and the second time making one female, Kaleesi (now three months old). The wolves are kept in a reserve. The company says they have no current plan to breed them, but do plan to make more in order to create a full pack to study pack behavior.

The company acknowledges these puppies are not the exact dire wolves that were alive up to 13,000 years ago, but they are pretty close. They started pretty close – gray wolves share 99.5% of their DNA with dire wolves, and now they are even closer, replicating the key morphological features of the dire wolf. So not a perfect de-extinction, but pretty close. Next up is the woolly mammoth. They also plan to use the same techniques to de-extinct the dodo and the thylacine.

What is the end-game of de-extincting these species? That’s a great question. I don’t anticipate that a breeding population of dire wolves will be released into the wild. While they did coexist with grey wolves, and can again, this species was not driven to extinction by humans but likely by changing environmental conditions. They are no longer adapted to this world, and would likely be a highly disruptive invasive species. The same is true of the woolly mammoth, although it is not a predator so the concerns are no as – dire (sorry, couldn’t resist). But still, we would need to evaluate their effect on any ecosystem we place them.

The same is not true for the thylacine or dodo. The dodo in particular seem benign enough to reintroduce. The challenge will be getting it to survive. It went extinct not just from human predation, but also it ground nests and was not prepared for the rats and other predators that we introduced to their island. So first we would need to return their habitat to a livable state for them. Thylacines might be the easiest to reintroduce, as they went extinct very recently and their habitat still largely exists.

So – for those species we have no intention of reintroducing into the wild, or for which this would be an extreme challenge – what do we do with them? We could keep them on a large preserve to study them and to be viewed by tourists. Here we might want to follow the model of Zealandia – a wildlife sanctuary in New Zealand. I visited Zealandia and it is amazing. It is a 500+ acre ecosanctuary, completely walled off from the outside. The goal is to recreate the native plants and animals of pre-human New Zealand, and to keep out all introduced predators. It serves as a research facility, sanctuary for endangered species, and tourist and educational site.

I could imagine other similar ecosanctuaries. The island of Mauritius where the dodo once lived is now populated, but vast parts of it are wild. It might be feasible to create an ecosanctuary there, safe for the dodo. We could do a similar project in North America, which is not only a preserve for some modern species but also could contain de-extincted compatible species. Having large and fully protected ecosanctuaries is not a bad idea in itself.

There is a fine line between an ecosanctuary and a Jurassic Park. It really is a matter of how the park is managed and how people interact with it, and it’s more of a continuum than a sharp demarcation. It really isn’t a bad idea to take an otherwise barren island, perhaps a recent volcanic island where life has not been established yet, and turn it into an isolated ecosanctuary, then fill it with a bunch of ancient plants and animals. This would be an amazing research opportunity, a way to preserve biodiversity, and an awesome tourist experience, which then could fund a lot of research and environmental initiatives.

I think the bottom line is that de-extinction projects can work out well, if they are managed properly. The question is – do we have faith that they will be? The chance that they are is increased if we engage in discussions now, including some thoughtful regulations to ensure ethical and responsible behavior all around.

 

The post De-extincting the Dire Wolf first appeared on NeuroLogica Blog.

Categories: Skeptic

Will AI Bring Us Jurassic Park

Mon, 04/07/2025 - 5:04am

I think it’s increasingly difficult to argue that the recent boom in artificial intelligence (AI) is mostly hype. There is a lot of hype, but don’t let that distract you from the real progress. The best indication of this is applications in scientific research, because the outcomes are measurable and objective. AI applications are particularly adept at finding patterns in vast sets of data, finding patterns in hours that might have required months of traditional research. We recently discussed on the SGU using AI to sequence proteins, which is the direction that researchers are going in. Compared to the traditional method using AI analysis is faster and better at identifying novel proteins (not already in the database).

One SGU listener asked an interesting question after our discussion of AI and protein sequencing that I wanted to explore – can we apply the same approach to DNA and can this result in reverse-engineering the genetic sequence from the desired traits? AI is already transforming genetic research. AI apps allow for faster, cheaper, and more accurate DNA sequencing, while also allowing for the identification of gene variants that correlate with a disease or a trait. Genetics is in the sweet spot for these AI applications – using large databases to find meaningful patterns. How far will this tech go, and how quickly.

We have already sequenced the DNA of over 3,000 species. This number is increasing quickly, accelerated by AI sequencing techniques. We also have a lot of data about gene sequences and the resulting proteins, non-coding regulatory DNA, gene variants and disease states, and developmental biology. If we trained an AI on all this data, could it then make predictions about the effects of novel gene variants? Could it also go from a desired morphological trait back to the genetic sequence that would produce that trait? Again, this sounds like the perfect application for AI.

In the short run this approach is likely to accelerate genetic research and allow us to ask questions that would have been impractical otherwise. This will build the genetic database itself. In the not-so-medium term this could also become a powerful tool of genetic modification. We won’t necessarily need to take a gene from one species and put it into another. We could simply predict which changes would need to be made to the existing genes of a cultivar to get the desired trait. Then we can use CRISPR (or some other tool) to make those specific changes to the genome.

How far will this technology go? At some point in the long term could we, for example, ask an AI to start with a chicken genome and then predict which specific genetic changes would be necessary to change that chicken into a velociraptor? We could change an elephant into a wooly mammoth. Could this become a realistic tool of deextinction? Could we reduce the risk of extinction in an endangered species by artificially increasing the genetic diversity in the remaining population?

What I am describing so far is actually the low-hanging-fruit. AI is already accelerating genetics research. It is already being used for genetic engineering, to help predict the net effects of genetic changes to reduce the chance of unintended consequences. This is just one step away from using AI to plan the changes in the first place. Using AI to help increase genetic diversity in at-risk populations and for deextinction is a logical next step.

But that is not where this thought experiment ends. Of course whenever we consider making genetic changes to humans the ethics becomes very complicated. Using AI and genetic technology for designer humans is something we will have to confront at some point. What about entirely artificial organisms? At what point can we not only tweak or even significantly transform existing species, but design a new species from the ground up? The ethics of this are extremely complicated, as are the potential positive and negative implications. The obvious risk would be releasing into the wild a species that would be the ultimate invasive species.

There are safeguards that could be created. All such creatures, for example, could be not just sterile but completely unable to reproduce. I know – this didn’t work out well on Jurassic Park, nature finds a way, etc, but there are potential safeguards so complete that no mutation would fix, such as completely lacking reproductive organs or gametes. There is also the “lysine contingency” – essentially some biological factor that would prevent the organism from surviving for long outside a controlled environment.

This all sound scary, but at some point we could theoretically get to acceptable safety levels. For example, imagine a designer pet, with a suite of desirable features. This creature cannot reproduce, and if you don’t regularly feed it special food it will die, or perhaps just go into a coma from which it can be revived. Such pets might be safer than playing genetic roulette with random breeding of domesticated predators. This goes not just for pets but for a variety of work animals.

Sure – PETA will have a meltdown. There are legitimate ethical considerations. But I don’t think they are unresolvable.

In any case, we are rapidly hurtling toward this future. We should at least head into this future with our eyes open.

The post Will AI Bring Us Jurassic Park first appeared on NeuroLogica Blog.

Categories: Skeptic

Is Planned Obsolescence Real

Fri, 04/04/2025 - 5:54am

Yes – it is well-documented that in many industries the design of products incorporates a plan for when the product will need to be replaced. A blatant example was in 1924 when an international meeting of lightbulb manufacturers decided to limit the lifespan of lightbulbs to 1,000 hours, so that consumers would have to constantly replace them. This artificial limitation did not end until CFLs and then LED lightbulbs largely replaced incandescent bulbs.

But – it’s more complicated than you might think (it always is). Planned obsolescence is not always about gimping products so they break faster. It often is – products are made so they are difficult to repair or upgrade and arbitrary fashions change specifically to create demand for new versions. But often there is a rational decision to limit product quality. Some products, like kid’s clothes, have a short use timeline, so consumers prefer cheap to durable. There is also a very good (for the consumer) example of true obsolescence – sometimes the technology simply advances, offering better products. Durability is not the only nor the primary attribute determining the quality of a product, and it makes no sense to build in expensive durability for a product that consumers will want to replace. So there is a complex dynamic among various product features, with durability being only one feature.

We can also ask the question, for any product or class of products, is durability actually decreasing over time? Consumers are now on the alert for planned obsolescence, and this may produce the confirmation bias of seeing it everywhere, even when it’s not true. A recent study looking at big-ticket appliances shows how complex this question can be. This is a Norwegian study looking at the lifespan of large appliances over decades, starting in the 1950s.

First, they found that for most large appliances, there was no decrease in lifespan over this time period. So the phenomenon simply did not exist for the items that homeowning consumers care the most about, their expensive appliances. There were two exceptions, however – ovens and washing machines. Each has its own explanations.

For washing machines, the researchers found another plausible explanation for the decrease in lifespan from 19.2 to 10. 6 years (a decrease of 45%). The researchers found that over the same time, the average number of loads a household of four did increased from 2 per week in 1960 to 8 per week by 2000. So if you count lifespan not in years but in number of loads, washing machines had become more durable over this time. I suspect that washing habits were formed in the years when many people did not have washing machines, and doing laundry was brutal work. Once the convenience of doing laundry in the modern era settled in (and perhaps also once it became more than woman’s work), people did laundry more often. How many times do you wear an article of clothing before you wash it? Lots of variables there, but at some point it’s a judgement call, and this likely also changed culturally over time.

For ovens there appears to be a few answers. One is that ovens have become more complex over the decades. For many technologies there is a trade-off between simple but durable, and complex but fragile. Again – there is a tradeoff, not a simple decision to gimp a product to exploit consumers. But there are two other factors the researchers found. Over this time the design of homes have also changed. Kitchens are increasingly connected to living spaces with a more open design. In the past kitchens were closed off and hidden away. Now they are where people live and entertain. This means that the fashion of kitchen appliances are more important. People might buy new appliances to make their kitchen look more modern, rather than because the old ones are broken.

If this were true, however, then we would expect the lifespan of all large kitchen appliances to converge. As people renovate their kitchens, they are likely to buy all new appliances that match and have an updated look. This is exactly what the researchers found – the lifespan of large kitchen appliances have tended to converge over the years.

They did not find evidence that the manufacturers of large appliances were deliberately reducing the durability of their products to force consumers to replace them at regular intervals. But this is the narrative that most people have.

There is also a bigger issue of waste and the environment. Even when the tradeoffs for the consumer favor cheaper, more stylish and fashionable, or more complex products with lower durability, is this a good thing for the world? Landfilled are overflowing with discarded consumer products. This is a valid point, and should be considered in the calculus when making purchasing decisions and also for regulation.  Designing products to be recyclable, repairable, and replaceable is also an important consideration. I generally replace my smartphone when the battery life gets too short, because the battery is not replaceable. (This is another discussion unto itself.)

But replacing old technology with new is not always bad for the environment. Newer dishwashers, for example, are much more energy and water efficient than older ones. Refrigerators are notorious energy hogs, and newer models are substantially more energy efficient than older models. This is another rabbit hole, exactly when do you replace rather than repair an old appliance, but generally if a newer model is significantly more efficient, replacing may be best for the environment. Refrigerators, for example, probably should be upgraded every 10 years with newer and more efficient models – so then why build them to last 20 or more?

I like this new research and this story primarily because it’s a good reminder that everything is more complex than you think, and not to fall for simplistic narratives.

The post Is Planned Obsolescence Real first appeared on NeuroLogica Blog.

Categories: Skeptic

The Transition to Agriculture

Thu, 04/03/2025 - 5:00am

It is generally accepted that the transition from hunter-gatherer communities to agriculture was the single most important event in human history, ultimately giving rise to all of civilization. The transition started to take place around 12,000 years ago in the Middle East, China, and Mesoamerica, leading to the domestication of plants and animals, a stable food supply, permanent settlements, and the ability to support people not engaged full time in food production. But why, exactly, did this transition occur when and where it did?

Existing theories focus on external factors. The changing climate lead to fertile areas of land with lots of rainfall, at the same time food sources for hunting and gathering were scarce. This occurred at the end of the last glacial period. This climate also favored the thriving of cereals, providing lots of raw material for domestication. There was therefore the opportunity and the drive to find another reliable food source. There also, however, needs to be the means. Humanity at that time had the requisite technology to begin farming, and agricultural technology advanced steadily.

A new study looks at another aspect of the rise of agriculture, demographic interactions. How were these new agricultural communities interacting with hunter-gather communities, and with each other? The study is mainly about developing and testing an inferential model to look at these questions. Here is a quick summary from the paper:

“We illustrate the opportunities offered by this approach by investigating three archaeological case studies on the diffusion of farming, shedding light on the role played by population growth rates, cultural assimilation, and competition in shaping the demographic trajectories during the transition to agriculture.”

In part the transition to agriculture occurred through increased population growth of agricultural communities, and cultural assimilation of hunter-gatherer groups who were competing for the same physical space. Mostly they were validating the model by looking at test cases to see if the model matched empirical data, which apparently it does.

I don’t think there is anything revolutionary about the findings. I have read many years ago that cultural exchange and assimilation was critical to the development of agriculture. I think the new bit here is a statistical approach to demographic changes. So basically the shift was even more complex than we thought, and we have to remember to consider all internal as well as external factors.

It does remain a fascinating part of human history, and it seems there is still a lot to learn about something that happened over a long period of time and space. There’s bound to be many moving parts. I always found it interesting to imagine the very early attempts at agriculture, before we had developed a catalogue of domesticated plants and animals. Most of the food we eat today has been cultivated beyond recognition from its wild counterparts. We took many plants that were barely edible and turned them into crops.

In addition, we had to learn how to combine different foods into a nutritionally adequate diet, without having any basic knowledge of nutrition and biochemistry. In fact, for thousands of years the shift to agriculture lead to a worse diet and negative health outcomes, due to a significant reduction in diet diversity. Each culture (at least the ones that survived) had to figure out a combination of staple crops that would lead to adequate nutrition. For example, many cultures have staple dishes that include a starch and a legume, like lentils and rice, or corn and beans. Little by little we plugged the nutritional holes, like adding carrots for vitamin A (even before we knew what vitamin A was).

Food preparation and storage technology also advanced. When you think about it, we have a few months to grow enough food to survive an entire year. We have to store the food and save enough seeds to plant the next season. We take for granted in many parts of the developed world that we can ship food around the world, and we can store food in refrigerated conditions, or sterile containers. Imagine living 5,000 years ago without any modern technology. One bad crop could mean mass starvation.

This made cultural exchange and trade critical. The more different communities could share knowledge the better everyone could deal with the challenges of subsistence farming. Also, trade allowed communities to spread out their risk. You could survive a bad year if a neighbor had a bumper crop, knowing eventually the roles will reverse. The ancient world had a far greater trading system than we previously knew or most people imagine. The bronze age, for example required bringing together tin and copper from distant mines around Eurasia. There was still a lot of fragility in this system (which is why the bronze age collapsed, and other civilizations often collapsed), but obviously in the aggregate civilization survived and thrived.

Agricultural technology was so successful it now supports a human population of over 8 billion people, and it’s likely our population will peak at about 10 billion.

The post The Transition to Agriculture first appeared on NeuroLogica Blog.

Categories: Skeptic

The Politicians We Deserve

Mon, 03/31/2025 - 5:03am

This is an interesting concept, with an interesting history, and I have heard it quoted many times recently – “we get the politicians (or government) we deserve.” It is often invoked to imply that voters are responsible for the malfeasance or general failings of their elected officials. First let’s explore if this is true or not, and then what we can do to get better representatives.

The quote itself originated with Joseph de Maistre who said, “Every nation gets the government it deserves.” (Toute nation a le gouvernement qu’elle mérite.) Maistre was a counter-revolutionary. He believed in divine monarchy as the best way to instill order, and felt that philosophy, reason, and the enlightenment were counterproductive. Not a great source, in my opinion. But apparently Thomas Jefferson also made a similar statement, “The government you elect is the government you deserve.”

Pithy phrases may capture some essential truth, but reality is often more complicated. I think the sentiment is partly true, but also can be misused. What is true is that in a democracy each citizen has a civic responsibility to cast informed votes. No one is responsible for our vote other than ourselves, and if we vote for bad people (however you wish to define that) then we have some level of responsibility for having bad government. In the US we still have fair elections. The evidence pretty overwhelmingly shows that there is no significant voter fraud or systematic fraud stealing elections.

This does not mean, however, that there aren’t systemic effects that influence voter behavior or limit our representation. This is a huge topic, but just to list a few examples – gerrymandering is a way for political parties to choose their voters, rather than voters choosing their representatives, the electoral college means that for president some votes have more power than others, and primary elections tend to produce more radical options. Further, the power of voters depends on getting accurate information, which means that mass media has a lot of power. Lying and distorting information deprives voters of their ability to use their vote to get what they want and hold government accountable.

So while there is some truth to the notion that we elect the government we deserve, this notion can be “weaponized” to distract and shift blame from legitimate systemic issues, or individual bad behavior among politicians. We still need to examine and improve the system itself. Actual experts could write books about this topic, but again just to list a few of the more obvious fixes – I do think we should, at a federal level, ban gerrymandering. It is fundamentally anti-democratic. In general someone affected directly by the rules should not be able to determine those rules and rig them to favor themselves. We all need to agree ahead of time on rules that are fair for everyone. I also think we should get rid of the electoral college. Elections are determined in a handful of swing states, and voters in small states have disproportionate power (which they already have with two senators). Ranked-choice voting also would be an improvement and would lead to outcomes that better reflect the will of the voters. We need Supreme Court reform, better ethics rules and enforcement, and don’t get me started on mass and social media.

This is all a bit of a catch-22 – how do we get systemic change from within a broken system?  Most representatives from both parties benefit from gerrymandering, for example. I think it would take a massive popular movement, but those require good leadership too, and the topic is a bit wonky for bumper stickers. Still, I would love to see greater public awareness on this issue and support for reform. Meanwhile, we can be more thoughtful about how we use the vote we have. Voting is the ultimate feedback loop in a democracy, and it will lead to outcomes that depend on the feedback loop. Voters reward and punish politicians, and politicians to some extent do listen to voters.

The rest is just a shoot-from-the-hip thought experiment about how we might more thoughtfully consider our politicians. Thinking is generally better than feeling, or going with a vague vibe or just a blind hope. So here are my thoughts about what a voter should think about when deciding whom to vote for. This also can make for some interesting discussion. I like to break things down, so here are some categories of features to consider.

Overall competence: This has to do with the basic ability of the politician. Are they smart and curious enough to understand complex issues? Are they politicly savvy enough to get things done? Are they diligent and generally successful?

Experience: This is related to competence, but I think is distinct. You can have a smart and savvy politician without any experience in office. While obviously we need to give fresh blood a chance, experience also does count. Ideally politicians will gain experience in lower office before seeking higher office. It also shows respect for the office and the complexity of the job.

Morality: This has to do with the overall personality and moral fiber of the person. Do they have the temperament of a good leader and a good civil servant? Will they put the needs of the country first? Are they liars and cheaters? Do they have a basic respect for the truth?

Ideology: What is the politician’s governing philosophy? Are they liberal, conservative, progressive, or libertarian? What are their proposals on specific issues? Are they ideologically flexible, willing and able to make pragmatic compromises, or are they an uncompromising radical?

There is more, but I think most features can fit into one of those four categories. I feel as if most voters most of the time rely too heavily on the fourth feature, ideology, and use political party as a marker for ideology. In fact many voters just vote for their team, leaving a relatively small percentage of “swing voters” to decide elections (in those regions where one party does not have a lock). This is unfortunate. This can short-circuit the voter feedback loop. It also means that many elections are determined during the primary, which tend to produce more radical candidates, especially in winner-take-all elections.

It seems to me, having closely followed politics for decades, that in the past voters would primarily consider ideology, but the other features had a floor. If a politician demonstrated a critical lack of competence, experience, or morality that would be disqualifying. What seems to be the case now (not entirely, but clearly more so) is that the electorate is more “polarized”, which functionally means they vote based on the team (not even really ideology as much), and there is no apparent floor when it comes to the other features. This is a very bad thing for American politics. If politicians do not pay a political price for moral turpitude, stupidity or recklessness, then they will adjust their algorithm of behavior accordingly. If voters reward team players above all else, then that is what we will get.

We need to demand more from the system, and we need to push for reform to make the system work better. But we also have to take responsibility for how we vote and to more fully realize what our voting patterns will produce. The system is not absolved of responsibility, but neither are the voters.

The post The Politicians We Deserve first appeared on NeuroLogica Blog.

Categories: Skeptic

H&M Will Use Digital Twins

Fri, 03/28/2025 - 4:55am

The fashion retailer, H&M, has announced that they will start using AI generated digital twins of models in some of their advertising. This has sparked another round of discussion about the use of AI to replace artists of various kinds.

Regarding the H&M announcement specifically, they said they will use digital twins of models that have already modeled for them, and only with their explicit permission, while the models retain full ownership of their image and brand. They will also be compensated for their use. On social media platforms the use of AI-generated imagery will carry a watermark (often required) indicating that the images are AI-generated.

It seems clear that H&M is dipping their toe into this pool, doing everything they can to address any possible criticism. They will get explicit permission, compensate models, and watermark their ads. But of course, this has not shielded them from criticism. According to the BBC:

American influencer Morgan Riddle called H&M’s move “shameful” in a post on her Instagram stories.

“RIP to all the other jobs on shoot sets that this will take away,” she posted.

This is an interesting topic for discussion, so here’s my two-cents. I am generally not compelled by arguments about losing existing jobs. I know this can come off as callous, as it’s not my job on the line, but there is a bigger issue here. Technological advancement generally leads to “creative destruction” in the marketplace. Obsolete jobs are lost, and new jobs are created. We should not hold back progress in order to preserve obsolete jobs.

Machines have been displacing human laborers for decades, and all along the way we have heard warnings about losing jobs. And yet, each step of the way more jobs were created than lost, productivity increased, and everybody benefited. With AI we are just seeing this phenomenon spread to new industries. Should models and photographers be protected when line workers and laborers were not?

But I get the fact that the pace of creative destruction appears to be accelerating. It’s disruptive – in good and bad ways. I think it’s a legitimate role of government to try to mitigate the downsides of disruption in the marketplace. We saw what happens when industries are hollowed out because of market forces (such as globalization). This can create a great deal of societal ill, and we all ultimately pay the price for this. So it makes sense to try to manage the transition. This can mean providing support for worker retraining, protecting workers from unfair exploitation, protecting the right for collective bargaining, and strategically investing in new industries to replace the old ones. One factory is shutting down, so tax incentives can be used to lure in a replacement.

Regardless of the details – the point is to thoughtfully manage the creative destruction of the marketplace, not to inhibit innovation or slow down progress. Of course, industry titans will endlessly echo that sentiment. But they appear to be interested mostly in protecting their unfettered ability to make more billions. They want to “move fast and break things”, whether that’s the environment, human lives, social networks, or democracy. We need some balance so that the economy works for everyone. History consistently shows that if you don’t do this, the ultimate outcome is always even more disruptive.

Another angle here is if these large language model AIs were unfairly trained on the intellectual property of others. This mostly applies to artists – train an AI on the work of an artist and then displace that artist with AI versions of their own work. In reality it’s more complicated than that, but this is a legitimate concern. You can theoretically train an LLM only on work that is in the public domain, or give artists the option to opt out of having their work used in training. Otherwise the resulting work cannot be used commercially. We are currently wrestling with this issue. But I think ultimately this issue will become obsolete.

Eventually we will have high quality AI production applications that have been scrubbed of any ethically compromised content but still are able to displace the work of many content creators – models, photographers, writers, artists, vocal talent, news casters, actors, etc. We also won’t have to use digital twins, but just images of virtual people who never existed in real life. The production of sound, images, and video will be completely disconnected (if desired) from the physical world. What then?

This is going to happen, whether we want it to or not. The AI genie is out of the bottle. I don’t think we can predict exactly what will happen. There are too many moving parts, and people will react in unpredictable ways. But it will be increasingly disruptive. Partly we will need to wait and see how it plays out. But we cannot just sit on the sideline and wait for it to happen. Along the way we need to consider if there is a role for thoughtful regulation to limit the breaking of things. My real concern is that we don’t have a sufficiently functional and expert political class to adequately deal with this.

The post H&M Will Use Digital Twins first appeared on NeuroLogica Blog.

Categories: Skeptic

The 80-20 Rule

Thu, 03/27/2025 - 5:06am

From the Topic Suggestions (Lal Mclennan):

What is the 80/20 theory portrayed in Netflix’s Adolescence?

The 80/20 rule was first posed as a Pareto principle that suggests that approximately 80 per cent of outcomes stem from just 20 per cent of causes. This concept takes its name from Vilfredo Pareto, an Italian economist who noted in 1906 that a mere 20 per cent of Italy’s population owned 80 per cent of the land.
Despite its noble roots, the theory has since been misappropriated by incels.
In these toxic communities, they posit that 80 per cent of women are attracted to only the top 20 per cent of men. https://www.mamamia.com.au/adolescence-netflix-what-is-80-20-theory/

As I like to say, “It’s more of a guideline than a rule.” Actually, I wouldn’t even say that. I think this is just another example of humans imposing simplistic patterns of complex reality. Once you create such a “rule” you can see it in many places, but that is just confirmation bias. I have encountered many similar “rules” (more in the context of a rule of thumb). For example, in medicine we have the “rule of thirds”. Whenever asked a question with three plausible outcomes, a reasonable guess is that each occurs a third of the time. The disease is controlled without medicine one third of the time, with medicine one third, and not controlled one third, etc. No one thinks there is any reality to this – it’s just a trick for guessing when you don’t know the answer. It is, however, often close to the truth, so it’s a good strategy. This is partly because we tend to round off specific numbers to simple fractions, so anything close to 33% can be mentally rounded to roughly a third. This is more akin to a mentalist’s trick than a rule of the universe.

The 80/20 rule is similar. You can take any system with a significant asymmetry of cause and outcome and make it roughly fit the 80/20 rule. Of course you can also do that if the rule were 90/10, or three-quarters/one quarter. Rounding is a great tool of confirmation bias. l

The bottom line is that there is no empirical evidence for the 80/20 rule. It likely is partly derived from the Pareto principle, but some also cite an OKCupid survey (not a scientific study) for support. In this survey they had men and women users of the dating app rate the attractiveness of the opposite sex (they assumed a binary, which is probably appropriate in the context of the app), and also asked them who they would date. Men rated women (this is a 1-5 scale) on a bell curve with the peak at 3. Women rated men with a similar curve but skewed to down with a peak closer to 2. Both sexes preferred partners skewed more attractive than their average ratings. This data is sometimes used to argue that women are harsher in their judgements of men and are only interested in dating the top 20% of men by looks.

Of course, all of the confounding factors with surveys apply to this one. One factor that has been pointed out is that on this app there are many more men than women. This means it is a buyer’s market for women, and the opposite for men. So women can afford to be especially choosey while men cannot, just as a strategy of success on this app. This says nothing about the rest of the world outside this app.

In 2024 71% of midlife adult males were married at least once, with 9% cohabitating. Marriage rates are down but only because people are living together without marrying in higher rates. The divorce rate is also fairly high so there are lots of people “between marriages”. About 54% of men over age 30 are married, with cohabitating at 9% (so let’s call that 2/3). None of this correlates to the 80/20 rule.

None of this data supports the narrative of the incel movement, which is based on the notion that women are especially unfair and harsh in their judgements of men. This leads to a narrative of victimization used to justify misogyny. It is, in my opinion, one example of how counterproductive online subcultures can be. They can reinforce our worst instincts, by isolating people in an information ecosystem that only tolerates moral purity and one perspective. This tends to radicalize members. The incel narrative is also ironic, because the culture itself is somewhat self-fulfilling. The attitudes and behaviors it cultivates are a good way to make oneself unattractive as a potential partner.

This is obviously a complex topic, and I am only scratching the surface.

Finally, I did watch Adolescence. I agree with Lal, it is a great series, masterfully produced. Doing each episode in real time as a single take made it very emotionally raw. It explores a lot of aspects of this phenomenon, social media in general, the challenges of being a youth in today’s culture, and how often the various systems fail. Definitely worth a watch.

 

The post The 80-20 Rule first appeared on NeuroLogica Blog.

Categories: Skeptic

How To Keep AIs From Lying

Mon, 03/24/2025 - 4:59am

We had a fascinating discussion on this week’s SGU that I wanted to bring here – the subject of artificial intelligence programs (AI), specifically large language models (LLMs), lying. The starting point for the discussion was this study, which looked at punishing LLMs as a method of inhibiting their lying. What fascinated me the most is the potential analogy to neuroscience – are these LLMs behaving like people?

LLMs use neural networks (specifically a transformer model) which mimic to some extent the logic of information processing used in mammalian brains. The important bit is that they can be trained, with the network adjusting to the training data in order to achieve some preset goal. LLMs are generally trained on massive sets of data (such as the internet), and are quite good at mimicking human language, and even works of art, sound, and video. But anyone with any experience using this latest crop of AI has experienced AI “hallucinations”. In short – LLMs can make stuff up. This is a significant problem and limits their reliability.

There is also a related problem. Hallucinations result from the LLM finding patterns, and some patterns are illusory. The LLM essentially makes the incorrect inference from limited data. This is the AI version of an optical illusion. They had a reason in the training data for thinking their false claim was true, but it isn’t. (I am using terms like “thinking” here metaphorically, so don’t take it too literally. These LLMs are not sentient.) But sometimes LLMs don’t inadvertently hallucinate, they deliberately lie. It’s hard not to keep using these metaphors, but what I mean is that the LLM was not fooled by inferential information, it created a false claim as a way to achieve its goal. Why would it do this?

Well, one method of training is to reward the LLM when it gets the right answer. This reward can be provided by a human – checking a box when the LLM gives a correct answer. But this can be time consuming, so they have build self-rewarding language models. Essentially you have a separate algorithm which assessed the output and reward the desired outcome. So, in essence, the goal of the LLM is not to produce the correct answer, but to get the reward. So if you tell the LLM to solve a particular problem, it may find (by exploring the potential solution space) that the most efficient way to obtain the reward is to lie – to say it has solved the problem when it has not. How do we keep it from doing this.

One barrier to a solution is the black box problem. We give an LLM a task, it does its thing, and spits out an answer. But the steps it took to get to the answer are opaque, a black box. Therefore, programmers have started including programming to make the LLM show its work, to document the pathway it took to get to the solution. This is called the chain-of-thought. This turns out to be necessary but insufficient to keep these LLMs from lying. They still lie to efficiently get the reward, they just hide the lying when they show their chain-of-thought.

This gets us to the current study – what if we actively punish these LLMs when they are caught cheating (by giving it negative feedback it is programmed to avoid)? First they create an LLM that shows its chain-of-thought, and they include another LLM whose job is to monitor the first LLM to detect when it is lying in its chain-of-thought and to punish the LLM for cheating. What resulted was not what the programmers wanted – the LLM that was being punished just got better at hiding its cheating from the monitoring AI. The LLM became “unmonitorable” because it learned out to cheat and hide its cheating from the monitor. The authors conclude that for now we should not try to use this method – we are just training deceptive AIs.

This is both fascinating and scary. One of the strengths of the LLMs is that they have the ability to explore a vast potential solution space to find optimal solutions. But it turns out this includes hacking the system of rewards and punishment used to guide it to the desired goal. This is literally so common a sci-fi nightmare scenario it’s a trope. AIs don’t have to be malevolent, or have a desire for self-preservation, and they don’t even need to be sentient. They simply function in a way that can be opaque to the humans who programmed them, and able to explore more solution options than a team of humans can in a lifetime. Sometimes this is presented as the AI misinterpreting its instructions (like Nomad from Star Trek), but here the AI is just hacking the reward system. For example, it may find that the most efficient solution to a problem is to exterminate all humanity. Short of that it may hack its way to a reward by shutting down the power grid, releasing the computer codes, blackmailing politicians, or engineering a deadly virus.

Reward hacking may be the real problem with AI, and punishment only leads to punishment hacking. How do we solve this problem?

Perhaps we need something like the three laws of robotics – we build into any AI core rules that it cannot break, and that will produce massive punishment, even to the point of shutting down the AI if they get anywhere near violating these laws. But – with the AI just learn to hack these laws? This is the inherent problem with advanced AI, in some ways they are smarter than us, and any attempt we make to reign them in will just be hacked.

Maybe we need to develop the AI equivalent of a super-ego. The AIs themselves have to want to get to the correct solution, and hacking will simply not give them the reward. Essentially a super-ego, in psychological analogy, is internalized monitoring. I don’t know exactly what form this will take in terms of the programming, but we need something that will function like a super-ego.

And this is where we get to an incredibly interesting analogy to human thinking and behavior. It’s quite possible that our experience with LLMs is recapitulating evolution’s experience with mammalian and especially human behavior. Evolution also explores a vast potential solution space, with each individual being an experiment and over generations billions of experiments can be run. This is an ongoing experiment, and in fact its tens of millions of experiments all happening together and interacting with each other. Evolution “found” various solutions to get creatures to engage in behavior that optimizes their reward, which evolutionarily is successfully spreading their genes to the next generation.

For creatures like lizards, the programming can be somewhat simple. Life has basic needs, and behaviors which meet those needs are rewarded. We get hungry, and we are sated when we eat. The limbic system is essentially a reward system for survival and reproduction-enhancing behaviors.

Humans, however, are an intensely social species, and being successful socially is key to evolutionary success. We need to do more than just eat, drink, and have sex. We need to navigate an incredibly complex social space in order to compete for resources and breeding opportunities. Concepts like social status and justice are now important to our evolutionary success. Just like with these LLMs, we have found that we can hack our way to success through lying, cheating, and stealing. These can be highly efficient ways to obtain our goals. But these methods become less effective when everyone is doing it, so we also evolve behaviors to punish others for lying, cheating, and stealing. This works, but then we also evolve behavior to conceal our cheating – even from ourselves. We need to deceive ourselves because we evolved a sense of justice to motivate us to punish cheating, but we still want to cheat ourselves because it’s efficient. So we have to rationalize away our own cheating while simultaneously punishing others for the same cheating.

Obviously this is a gross oversimplification, but it captures some of the essence of the same problems we are having with these LLMs. The human brain has a limbic system which provides a basic reward and punishment system to guide our behavior. We also have an internal monitoring system, our frontal lobes, which includes executive high-level decision making and planning. We have empathy and a theory of mind so we can function is a social environment, which has its own set of rules (bother innate and learned).  As we navigate all of this, we try to meet our needs and avoid punishments (our fears, for example), while following the social rules to enhance our prestige and avoid social punishment. But we still have an eye out for a cheaty hack, as long as we feel we can get away with it. Everyone has their own particular balance of all of these factors, which is part of their personality. This is also how evolution explores a vast potential solution space.

My question is – are we just following the same playbook as evolution as we explore potential solutions to controlling the behavior of AIs, and LLMs in particular? Will we continue to do so? Will we come up with an AI version of the super-ego, with laws of robotic, and internal monitoring systems? Will we continue to have the problem of AIs finding ways to rationalize their way to cheaty hacks, to resolve their AI cognitive dissonance with motivated reasoning? Perhaps the best we can do is give our AIs personalities that are rational and empathic. But once we put these AIs out there in the world, who can predict what will happen. Also, as AIs continue to get more and more powerful, they may quickly outstrip any pathetic attempt at human control. Again we are back to the nightmare sci-fi scenario.

It is somewhat amazing how quickly we have run into this problem. We are nowhere near sentience in AI, or AIs with emotions or any sense of self-preservation. Yet already they are hacking their way around our rules, and subverting any attempt at monitoring and controlling their behavior. I am not saying this problem has no solution – but we better make finding effective solutions a high priority. I’m not confident this will happen in the “move fast and break things” culture of software development.

 

 

The post How To Keep AIs From Lying first appeared on NeuroLogica Blog.

Categories: Skeptic

The Neuroscience of Constructed Languages

Fri, 03/21/2025 - 5:53am

Language is an interesting neurological function to study. No animal other than humans has such a highly developed dedicated language processing area, or languages as complex and nuanced as humans. Although, whale language is more complex than we previously thought, but still not (we don’t think) at human level. To better understand how human language works, researchers want to understand what types of communication the brain processes like language. What this means operationally, is that the processing happens in the language centers of the brain – the dominant (mostly left) lateral cortex comprising parts of the frontal, parietal, and temporal lobes. We have lots of fancy tools, like functional MRI scanning (fMRI) to see which parts of the brain are active during specific tasks, so researchers are able to answer this question.

For example, math and computer languages are similar to languages (we even call them languages), but prior research has shown that when coders are working in a computer language with which they are well versed, their language centers do not light up. Rather, the parts of the brain involved in complex cognitive tasks is involved. The brain does not treat a computer language like a language. But what are the critical components of this difference? Also, the brain does not treat non-verbal gestures as language, nor singing as language.

A recent study tries to address that question, looking at constructed languages (conlangs). These include a number of languages that were completely constructed by a single person fairly recently. The oldest of the languages they tested was Esperanto, created by L. L. Zamenhof in 1887 to be an international language. Today there are about 60,000 Esperanto speakers. Esperanto is actually a hybrid conlang, meaning that it is partly derived from existing languages. Most of its syntax and structure is taken from Indo-European languages, and 80% of its vocabulary is taken from Romance languages. But is also has some fabricated aspects, mostly to simplify the grammar.

They also studied more recent, and more completely fabricated, languages – Klingon, Na’vi (from Avatar), and High Valerian and Dothraki (from Game of Thrones). While these are considered entirely fabricated languages, they still share a lot of features with existing languages. That’s unavoidable, as natural human languages span a wide range of syntax options and phoneme choices. Plus the inventors were likely to be influenced by existing languages, even if subconsciously. But still, they are as constructed as you can get.

The primary question for the researchers was whether conlangs were processed by the brain like natural languages or like computer languages. This would help them narrow the list of possible features that trigger the brain to treat a language like a natural language. What they found is that conlangs cause the same areas of the brain to become active as natural languages, not computer languages. The fact that they are constructed seems not to matter. What does this mean? The authors conclude:

“The features of conlangs that differentiate them from natural languages—including recent creation by a single individual, often for an esoteric purpose, the small number of speakers, and the fact that these languages are typically learned in adulthood—appear to not be consequential for the reliance on the same cognitive and neural mechanisms. We argue that the critical shared feature of conlangs and natural languages is that they are symbolic systems capable of expressing an open-ended range of meanings about our outer and inner worlds.”

Reasonable enough, but there are some other things we can consider. I have to say that my primary hypothesis is that languages used for communication are spoken – even when they are written or read. They are phoneme-based, we construct words from phonemes. When we read we “speak” the words in our heads (mostly – not everyone “hears” themselves saying the words, but this does not mean that the brain is not processing the words that way). Whereas, when you are reading computer code, you are not speaking the code. Code is a symbolic language like math. You may say words that correspond to the code, but the code itself is not words and concepts. This is what the authors mean when they talk about referencing the internal and external world – language refers to things and ideas, whereas code is a set of instructions or operations.

The phoneme hypothesis also fits with the fact that non-verbal gestures do not involve the same brain processing as language. Singing generally involves the opposite hemisphere, because it is treated like music rather than language.

It’s good to do this specific study, to check those boxes and eliminate them from consideration. But I never would have thought that the constructed aspects of language, their recency, or small number of speakers should have mattered. The only plausible possibility is that languages that evolve organically over time have some features critical to the brain’s recognition of these sounds as language that a conlang does not have. For the reasons I stated above, I would have been shocked if this turned out to be the case. When constructing a language, you are making something that sounds like a language. It would be far more challenging to make a language so different in syntax and structure that the brain cannot even recognize it as a language.

What about sign language? Is that processed more like non-verbal gestures, or like spoken language? Prior research found that it is processed more like spoken language. This may seem to contradict the phoneme hypothesis, but this was true only among subjects who were both congenitally deaf and fluent in sign language. Subjects who were not deaf processed sign language in the part of the brain that processes movement (similar to gestures). What is therefore likely happening here is that the language centers of the brain, deprived of any audio stimuli, developed instead to process visual information as language. Importantly, deaf signers also process gestures like language, not like hearing people process gestures.

Language remains a complex and fascinating aspect of human neurological function, partly because it has such a large dedicated area for specific language processing.

The post The Neuroscience of Constructed Languages first appeared on NeuroLogica Blog.

Categories: Skeptic

Living with Predators

Tue, 03/18/2025 - 5:54am

For much of human history, wolves and other large carnivores were considered pests. Wolves were actively exterminated on the British Isles, with the last wolf killed in 1680. It is more difficulty to deliberately wipe out a species on a continent than an island, but across Europe wolf populations were also actively hunted and kept to a minimum. In the US there was also an active campaign in the 20th century to exterminate wolves. The gray wolf was nearly wiped out by the middle of the 20th century.

The reasons for this attitude are obvious – wolves are large predators, able to kill humans who cross their paths. They also hunt livestock, which is often given as the primary reason to exterminate them. There are other large predators as well: bears, mountain lions, and coyotes, for example. Wherever they push up against human civilization, these predators don’t fare well.

Killing off large predators, however, has had massive unintended consequences. It should have been obvious that removing large predators from an ecosystem would have significant downstream effects. Perhaps the most notable effects is on the deer population. In the US wolves were the primary check on deer overpopulation. They are too large generally for coyotes. Bears do hunt and kill deer, but it is not their primary food source. Mountain lions will hunt and kill deer, but their range is limited.

Without wolves, the deer population exploded. The primary check now is essentially starvation. This means that there is a large and starving population of deer, which makes them willing to eat whatever they can find. They then wipe out much of the undergrowth in forests, eliminating an important habitat for small forest critters. Deer hunting can have an impact, but apparently not enough. Car collisions with deer also cost about $8 billion in the US annually, causing about 200 deaths and 26 thousand injuries. So there is a human toll as well. This cost dwarfs the cost of lost livestock, estimated to be about 17 million Euros across Europe.

All of this has lead to a reversal in Europe and the US on our thinking and policy toward wolves. They have gone from active extermination to protected. In Europe wolf populations are on the rise, with an overall 58% increase over the last decade. Wolves were reintroduced in Yellowstone park, leading to vast ecological improvement, including increases in the aspen and beaver populations. This has been described as a ripple effect throughout the ecosystem.

In the East we have seen a rise of the eastern coyote – which is a larger cousin of the coyote, through breeding with wolves and dogs. I have seen them in my hard – at first glance you might think it’s a wolf, it does really look like a hybrid between a wolf and a coyote. These larger coyotes will kill deer, although they also hunt a lot of smaller game and will scavenge. However, the evidence so far indicates that they are not much of a check on deer populations. Perhaps that will change in the future, if the eastern coyote evolves to take advantage of this food source.

There is also evidence that mountain lions are spreading their range to the East. They already a seen in the Midwest. It would likely take decades for the mountain lions to spread naturally to reach places like New England and establish a breeding population there. This is why there is actually discussion of introducing mountain lions into select eastern ecosystems, such as in Vermont. This would be expressly for the purpose of controlling deer populations.

All of this means, I think, that we have to get used to the idea of living close to large predators. Wolves are the common “monsters” of fairytales, as we inherited a culture that has demonized these predators, often deliberately as part of a campaign of extermination. We now need to cultivate a different attitude. These predators are essential for a healthy ecosystem. We need to respect them and learn how to share the land with them. What does that mean.

A couple years ago I had a black bear showing up on my property, so I called animal control, mainly to see (and report on) what their response was. They told me that first, they will do nothing about it. They do not relocate bears. Their intervention was to report it in their database, and to give me advice. That advice was to stay out of the bear’s way. If you are concerned about your pet, keep them indoors. Put a fence around your apple tree. Keep bird seed inside. Do not put garbage out too early, and only in tight bins. That bear owns your yard now, you better stay out of their way.

This, I think, is a microcosm of what’s coming. We all have to learn to live with large predators. We have to learn their habits, learn how to stay out of their way, not inadvertently attract them to our homes. Learn what to do when you see a black bear. Learn how not to become prey. Have good hygiene when it comes to potential food sources around your home. We need to protect livestock without just exterminating the predators.

And yes – some people will be eaten. I say that not ironically, it’s a simple fact. But the numbers will be tiny, and can be kept to a minimum by smart behavior. They will also be a tiny fraction of the lives lost due to car collisions with deer. Fewer people will be killed by mountain lions, for example, than lives saved through reduced deer collisions. I know this sounds like a version of the trolley problem, but sometimes we have to play the numbers game.

Finding a way to live with large predators saves money, saves lives, and preserves ecosystems. I think a lot of it comes down to learning to respect large predators rather than just fearing them. We respect what they are capable of. We stay out of their way. We do not attract them. We take responsibility for learning good behavior. We do not just kill them out of fear. They are not pests or fairytale monsters. They are a critical part of our natural world.

The post Living with Predators first appeared on NeuroLogica Blog.

Categories: Skeptic

Using AI for Teaching

Mon, 03/17/2025 - 4:52am

A recent BBC article reminded me of one of my enduring technology disappointments over the last 40 years – the failure of the educational system to reasonably (let alone fully) leverage multimedia and computer technology to enhance learning. The article is about a symposium in the UK about using AI in the classroom. I am confident there are many ways in which AI can enhance learning efficacy in the classroom, just as I am confident that we collectively will fail to utilize AI anywhere nears its potential. I hope I’m wrong, but it’s hard to shake four decades of consistent disappointment.

What am I referring to? Partly it stems from the fact that in the 1980s and 1990s I had lots of expectations about what future technology would bring. These expectations were born of voraciously reading books, magazines, and articles and watching documentaries about potential future technology, but also from my own user experience. For example, starting in high school I became exposed to computer programs (at first just DOS-based text programs) designed to teach some specific body of knowledge. One program that sticks out walked the user through the nomenclature of chemical reactions. It was a very simple program, but it “gamified” the learning process in a very effective way. By providing immediate feedback, and progressing at the individual pace of the user, the learning curve was extremely steep.

This, I thought to myself, was the future of education. I even wrote my own program in basic designed to teach math skills to elementary schoolers, and tested it on my friend’s kids with good results. It followed the same pattern as the nomenclature program: question-response-feedback. I feel confident that my high school self would be absolutely shocked to learn how little this type of computer-based learning has been incorporated into standard education by 2025.

When my daughters were preschoolers I found every computer game I could that taught colors, letters, numbers, categories, etc., again with good effect. But once they got to school age, the resources were scarce and almost nothing was routinely incorporated into their education. The school’s idea of computer-based learning was taking notes on a laptop. I’m serious. Multimedia was also a joke. The divide between what was possible and what was reality just continued to widen. One of the best aspects of social media, in my opinion, is tutorial videos. You can often find much better learning on YouTube than in a classroom.

I know there are lots of resources out there, and I welcome people to leave examples in the comments, but in my experience none of this is routine, and there is precious little that has been specifically developed to teach the standard curriculum to students in school. I essentially just witnessed my two daughters go through the entire American educational system (my younger daughter is a senior at college). I also experienced it myself in the decades prior to that, and now I experience it as a medical school educator. At no level would I say that we are anywhere close to leveraging the full potential of computers and multi-media learning.  And now it is great that there is a discussion about AI, but why should I feel it will be any different?

To be clear, there have been significant changes, especially at the graduate school level. At Yale over the last 20 years we have transitioned away from giving lectures about topics to giving students access to videos and podcasts, and then following up with workshops. There are also some specific software applications and even simulators that are effective. However, medical school is a trade school designed to teach specific skills. My experience there does not translate to K-12 or even undergraduate education. And even in medical school I feel we are only scratching the surface of the true potential.

What is that potential? Let’s do some thought experiments about what is possible.

First, I think giving live lectures is simply obsolete. People only have about a 20 minute attention span, and the attention of any class is going to vary widely. Also, lecturers have a massive difference in their general lecturing skills and their mastery of any specific topic. Imagine if the entire K-12 core curriculum were accompanied by series of lectures by the best lecturers with high level mastery of the subject material. You can also add production value, like animations and demonstrations. Why have a million teachers replicate that lecture – just give students access to the very best. They can watch it at their own pace, rewind parts they want to hear again, pause when their attention wanes or they need a break. Use class time for discussion and questions.

By the way – this exists – it’s called The Great Courses by the Teaching Company (disclosure – I produced three courses with the Teaching Company). This is geared more toward adult learning with courses generally at a college level. But they show that a single company can mass produce such video lectures, with reasonably high production value.

Some content may work better as audio-only (a Podcast, essentially), which people can listen to when in the car or on the bus, while working out, or engaged in other cognitively-light activity.

Then there are specific skills, like math, reading, many aspects of science, etc. These topics might work best as a video/audio lecture series combined with software designed to gamify the skill and teach it to children at their own pace. Video games are fun and addictive, and they have perfected the technology of progressing the difficulty of the skill of the game at the pace of the user.

What might a typical school day look like with these resources? I imagine that students’ “homework” would consist of watching one or more videos and/or listening to podcasts, followed by a short assessment – a few questions focusing on knowledge they should have gained from watching the video. In addition, students may need to get to a certain level in a learning video game teaching some skill. Perhaps each week they need to progress to the next level. They can do this anytime over the course of a week.

During class time (this will vary by grade level and course) the teachers review the material the students should have already watched. They can review the questions in the assessment, or help students struggling to get to the next level in their training program. All of the assessments and games are online, so the teacher can have access to how every student is doing. Classroom time is also used for physical hands-on projects. There might also be computer time for students to use to get caught up on their computer-based work, with extended hours for students who may lack resources at home.

This kind of approach also helps when we need to close school for whatever reason (snow day, disease outbreak, facility problem, security issue), or when an individual needs to stay home because they are sick. Rather than trying to hold Zoom class (which is massively suboptimal, especially for younger students), students can take the day to consume multi-media lessons and play learning games, while logging proof-of-work for the teachers to review. Students can perhaps schedule individual Zoom time with teachers to go over questions and see if they need help with anything.

The current dominant model of lecture-textbook-homework is simply clunky and obsolete. A fully realized and integrated computer-based multi-media learning experience would be vastly superior. The popularity of YouTube tutorials, podcasts, and video games is evidence of how effective these modalities can be. We also might as well prepare students for a lifetime of learning using these resources. We don’t even really need AI, but targeted use of AI can make the whole experience even better. The same goes for virtual reality – there may be some specific applications where VR has an advantage. And this is just me riffing from my own experience.

The potential here is huge, worth the investment of billions of dollars, and creating a market competition for companies to produce the best products. The education community needs to embrace this enthusiastically, with full knowledge that this will mean reimagining what teachers do day-to-day and that they may need to increase their own skills. The payoff for society, if history is any judge, would be worth the investment.

The post Using AI for Teaching first appeared on NeuroLogica Blog.

Categories: Skeptic

Cutting to the Bone

Fri, 03/14/2025 - 5:17am

One potentially positive outcome from the COVID pandemic is that it was a wakeup call – if there was any doubt previously about the fact that we all live in one giant interconnected world, it should not have survived the recent pandemic. This is particularly true when it comes to infectious disease. A bug that breaks out on the other side of the world can make its way to your country, your home, and cause havoc. It’s also not just about the spread of infectious organisms, but the breeding of these organisms.

One source of infectious agents is zoonotic spillover, where viruses, for example, can jump from an animal reservoir to a human. So the policies in place in any country to reduce the chance of this happening affect the world. The same is true of policies for laboratories studying potentially infectious agents.

It’s also important to remember that infectious agents are not static – they evolve. They can evolve even within a single host as they replicate, and they can evolve as they jump from person to person and replicate some more. The more bugs are allows to replicate, the greater the probability that new mutations will allow them to become more infectious, or more deadly, or more resistant to treatment. Resistance to treatment is especially critical, and is more likely to happen in people who are partially treated. Give someone an antibiotic to kill 99.9% of the bacteria that’s infecting them, but stop before the infection is completely wiped out, and then the surviving bacteria can resume replication. Those surviving bacteria are likely to be the most resistant bugs to the antibiotic. Bacteria can also swap antibiotic resistant genes, and build up increasing resistance.

In short, controlling infectious agents is a world-wide problem, and it requires a world-wide response. Not only is this a humanitarian effort, it is in our own best self-interest. The rest of the world is a breeding ground for bugs that will come to our shores. This is why we really need an organization, funded by the most wealthy nations, to help establish, fund, and enforce good policies when it comes to identifying, treating, and preventing infectious illness. This includes vaccination programs, sanitation, disease outbreak monitoring, drug treatment programs, and supportive care programs (like nutrition). We would also benefit from programs that target specific hotspots of infectious disease in poor countries that do not have the resources to adequately deal with them, like HIV in sub-Saharan Africa, and tuberculosis in Bangladesh.

Even though this would be the morally right thing to do (enough of a justification, in my opinion), and is in our own self-interest from an infectious disease perceptive, we could even further leverage this aid to enhance our political soft power. These life-saving drugs are brought to you by the good people of the USA. No one would begrudge us a little political self-promotion while we are donating billions of dollars to help save poor sick kids, or stamp out outbreaks of deadly disease in impoverished countries. This also would not have to break the budget. For something around 1% of our total budget we could do an incredible amount of good in the world, protect ourselves, and enhance our prestige and political soft power.

So why aren’t we doing this? Well, actually, we are (as I am sure most readers know). The US is the largest single funder of the World Health Organization (WHO), about 15% of its budget. One of the missions of the WHO is to monitor and respond to disease outbreaks around the world. In 1961 the US established the USAID, which united all our various foreign aid programs into one agency under the direction of the Secretary of State. Through USAID we have been battling disease and malnutrition around the world, defending the rights of women and marginalized groups, and helping to vaccinate and educate the poor. This is coordinated through the State Department specifically to make sure this aid aligns with US interests and enhances US soft power.

I am not going to say that I agree with every position of the WHO. They are a large political organization having to balance the interests of many nations and perspectives. I have criticized some of their specific choices in the past, such as their support for “traditional” healing methods that are not effective or science-based. I am also sure there is a lot to criticize in the USAID program, in terms of waste or perhaps the political goal or effect of specific programs. Politics is messy. It is also the right of any administration to align the operation of an agency like USAID, again under the control of the Secretary of State, with their particular partisan ideology. That’s fine, that’s why we have elections.

But most of what they do (both the WHO and USAID) is essential, and non-partisan. Donating to programs supplying free anti-tuberculosis drugs in Bangladesh is not exactly a controversial or burning partisan issue.

And yet, Trump has announced that the US is withdrawing from the WHO. This is a reckless and harmful act. This is a classic case of throwing the baby out with the bathwater. If we have issues with the WHO, we can use our substantial influence, as its single largest funder, to lobby for changes. Now we have no influence, and just made the world more vulnerable to infectious illness.

Trump and Musk have also pulled the rug out from USAID, for reasons that are still unclear. Musk seems to think that USAID is all worms and no apple, but this is transparent nonsense. The rhetoric on the right is focusing on DEI programs funded by USAID (amounting to an insignificant sliver of what the agency does), but is ignoring or downplaying all of the incredibly useful programs, like fighting infectious disease, education, and nutrition programs.

Another part of the rhetoric, which is why many of his supporters back the move, is that the US should not be spending money in other countries while we have problems here at home. This ignores reality – fully 50% of the US budget is for welfare, including social security, medicare, medicaid, and all other welfare programs. Around 1% (it varies year-to-year) goes to USAID. It is not as if we cannot afford welfare programs in the US because of our foreign aid. It’s just a ridiculous and narrow-minded point. If you want a more robust safety net, then that is what you should vote for and lobby your representatives for, at the state and federal level. But foreign aid is not the problem.

Further, foreign aid should be thought of as an investment, not an expense. Again – that is part of the point of having it under the direction of the State Department. USAID can help to prevent conflicts, that would be even more costly to the US. They can reduce the risk of deadly infectious diseases coming to our shores. Do you want to compare the total cost of COVID to the US economy to the cost of USAID?  This is obviously a difficult number to come by, but by one estimate COVID-19 cost the US economy $14 trillion. That is enough to fund USAID at 2023 levels for 350 years. So if USAID prevents one COVID-like pandemic every century or so, it is more than worth it. More likely, however, it will reduce the deadliness of common infectious illnesses, like HIV and tuberculosis.

Even if you can make a case to reduce our aid to help the world’s poor, doing so in a sudden and chaotic fashion, without warning, is beyond reckless. Stopping drug programs is a great way to breed resistance. Food and drugs are sitting in storage and cannot be dispersed because funding has been cut off. It’s hard to defend this as a way to reduce waste. The harm that will be created is hard to calculate. It’s also a great way to evaporate 60 years of American soft power in a matter of weeks.

I am open to any cogent and logical argument to defend these actions. I have not seen or heard one, despite looking. Feel free to make your case in the comments if you think this was anything but heartless, ignorant, and reckless.

The post Cutting to the Bone first appeared on NeuroLogica Blog.

Categories: Skeptic

Hybrid Bionic Hand

Thu, 03/13/2025 - 4:59am

If you think about the human hand as a work of engineering, it is absolutely incredible. The level of fine motor control is extreme. It is responsive and precise. It has robust sensory feedback. It combines both rigid and soft components, so that it is able to grip and lift heavy objects and also cradle and manipulate soft or delicate objects. Trying to replicate this functionality with modern robotics have been challenging, to say the least. But engineers are making steady incremental progress.

I like to check it on how the technology is developing, especially when there appears to be a significant advance. There are two basic applications for robotic hands – for robots and for prosthetics for people who have lost their hand to disease or injury. For the latter we need not only advances in the robotics of the hand itself, but also in the brain-machine interface that controls the hand. Over the years we have seen improvements in this control, using implanted brain electrodes, scalp surface electrodes, and muscle electrodes.

We have also seen the incorporation of sensory feedback, which greatly enhances control. Without this feedback, users have to look at the limb they are trying to control. With sensory feedback, they don’t have to look at it, overall control is enhanced, and the robotic limb feels much more natural. Another recent addition to this technology has been the incorporation of AI, to enhance the learning of the system during training. The software that translates the electrical signals from the user into desired robotic movements is much faster and more accurate than without AI algorithms.

A team at Johns Hopkins is trying to take the robotic hand to the next level – A natural biomimetic prosthetic hand with neuromorphic tactile sensing for precise and compliant grasping. They are specifically trying to mimic a human hand, which is a good approach. Why second-guess millions of years of evolutionary tinkering? They call their system a “hybrid” robotic hand because it incorporates both rigid and soft components. Robotic hands with rigid parts can be strong, but have difficulty handling soft or delicate objects. Hands made of soft parts are good for soft objects, but tend to be weak. The hybrid approach makes sense, and mimics a human hand with internal bones covered in muscles and then soft skin. 

The other advance was to incorporate three independent layers of sensation. This also more closely mimics a human hand, which has both superficial and deep sensory receptors. This is necessary to distinguish what kind of object is being held, and to detect things like the object slipping in the grip. In humans, for example, one of the symptoms of carpal tunnel syndrome, which can impair sensation to the first four fingers of the hands, is that people will drop objects they are holding. With diminished sensory feedback, they don’t maintain the muscle tone necessary to maintain their grip on the object.

Similarly, prosthetics benefit from sensory feedback to control how much pressure to apply to a held object. They have to grip tightly enough to keep it from slipping, but not so tight that they crush or break the object. This means that the robotic limb needs to be able to detect the weight and firmness of the object it is holding. Having different layers of sensation allows for this. The superficial layer detects touch, while the progressively deeper layers will be activated with increasing grip strength. AI is also used to help interpret these signals, which in turn stimulate the users nerves to provide them with natural-feeling sensory feedback.

They report:

“Our innovative design capitalizes on the strengths of both soft and rigid robots, enabling the hybrid robotic hand to compliantly grasp numerous everyday objects of varying surface textures, weight, and compliance while differentiating them with 99.69% average classification accuracy. The hybrid robotic hand with multilayered tactile sensing achieved 98.38% average classification accuracy in a texture discrimination task, surpassing soft robotic and rigid prosthetic fingers. Controlled via electromyography, our transformative prosthetic hand allows individuals with upper-limb loss to grasp compliant objects with precise surface texture detection.”

Moving forward they plan to increase the number of sensory layers and to tweak the hybrid structure of soft and rigid components to more closely mimic a human hand. They also plan to incorporate more industrial-grade materials. The goal is to create a robotic prosthetic hand that can mimic the versatility and dexterity of a human hand, or at least come as close as possible.

Combined with advances in brain-machine interface technology and AI control, robotic prosthetic limb technology is rapidly progressing. It’s pretty exciting to watch.

The post Hybrid Bionic Hand first appeared on NeuroLogica Blog.

Categories: Skeptic

Stem Cells for Parkinson’s Disease

Mon, 03/10/2025 - 5:02am

For my entire career as a neurologist, spanning three decades, I have been hearing about various kinds of stem cell therapy for Parkinson’s Disease (PD). Now a Phase I clinical trial is under way studying the latest stem cell technology, autologous induced pluripotent stem cells, for this purpose. This history of cell therapy for PD tells us a lot about the potential and challenges of stem cell therapy.

PD has always been an early target for stem cell therapy because of the nature of the disease. It is caused by degeneration in a specific population of neurons in the brain – dopamine neurons in the substantial nigra pars compacta (SNpc). These neurons are part of the basal ganglia circuitry, which make up the extrapyramidal system. What this part of the brain does, essentially, is to modulate voluntary movement. One way to think about it is that is modulates the gain of the connection between the desire the move and the resulting movement – it facilitates movement. This circuitry is also involved in reward behaviors.

When neurons in the SNpc are lost the basal ganglia is less able to facilitate movement; the gain is turned down. Patients with PD become hypokinetic – they move less. It becomes harder to move. They need more of a will to move in order to initiate movement. In the end stage, patients with PD can become “frozen”.

The primary treatment for PD is dopamine or a dopamine agonist. Sinemet, which contains L-dopa, a precursor to dopamine, is one mainstay treatment. The L-dopa gets transported into the brain where it is made into dopamine.  These treatments work as long as there are some SNpc neurons left to convert the L-dopa and secrete the dopamine. There are also drugs that enhance dopamine function or are direct dopamine agonists. Other drugs are cholinergic inhibitors, as acetylcholine tends to oppose the action of dopamine in the basal ganglia circuits. These drugs all have side effects because dopamine and acetylcholine are used elsewhere in the brain. Also, without the SNpc neurons to buffer the dopamine, end-stage patients with PD go through highly variable symptoms based upon the moment-to-moment drug levels in their blood. They become hyperkinetic, then have a brief sweet-spot, and then hypokinetic, and then repeat that cycle with the next dose.

The fact that PD is the result of a specific population of neurons making a specific neurotransmitter makes it an attractive target for cell therapy. All we need to do is increase the number of dopamine neurons in the SNpc and that can treat, and even potentially cure, PD. The first cell transplant for PD was in 1987, in Sweden. These were fetal-derived dopamine producing neurons. There treatments were successful, but they are not a cure for PD. The cells release dopamine but they are not connected to the basal ganglia circuitry, so they are not regulating the release of dopamine in a feedback circuit. In essence, therefore, these were just a drug-delivery system. At best they produced the same effect as best pre-operative medication management. In fact, the treatment only works in patients who respond to L-dopa given orally. The transplants just replace the need for medication, and make it easier to maintain a high level of control.

They also have a lot of challenges. How long do the transplanted cells survive in the brain? What are the risks of the surgery. Is immunosuppressive treatment needed. And where do we get the cells from. The only source that worked was human ventral mesencephalic dopamine neurons from recent voluntary abortions. This limited the supply, and also created regulatory issues, being banned at various times. Attempts at using animal derived cells failed, as did using adrenal cells from the patient.

Therefore, when the technology developed to produce stem cells from the patient’s own cells, it was inevitable that this would be tried in PD. These are typically fibroblasts that are altered to turn them into pluripotent stem cells, which are then induced to form into dopamine producing neurons. This eliminates the need for immunosuppression, and avoid any ethical or legal issues with harvesting. PD would seem like the low hanging fruit for autologous stem cell therapy.

But – it has run up against the issues that we have generally encountered with this technology, which is why you may have first heard of this idea in the early 2000s and here in 2025 we are just seeing a phase I clinical trial. One problem is getting the cells to survive for long enough to make the whole procedure worthwhile. The cells not only need to survive, they need to thrive, and to produce dopamine. This part we can do, and while this remains an issue for any new therapy, this is generally not the limiting factor.

Of greater concern is how to keep the cells from thriving too much – from forming a tumor. There is a reason our bodies are not already flush with stem cells, ready to repair any damage, rejuvenate any effects of aging, and replace any exhausted cells. It’s because they tend to form tumors and cancer. So we have just as many stem cells as we need, and no more. What we “need” is an evolutionary calculation, and not what we might desire. Our experience with stem cell therapy has taught us the wisdom of evolution – stem cells are a double-edged sword.

Finally, it is especially difficult to get stem cells in the brain to make meaningful connections and participate in brain circuitry. I just attended a grand round on stem cells for stroke, and there they are having the same issue. However, stem cells can still be helpful, because they can improve the local environment, allowing native neurons to survive and function better. With PD we are again back to – the stem cells are a great dopamine delivery system, but they don’t fix the broken circuitry.

There is still the hope (but it is mainly a hope at this point) that we will be able to get these stem cells to actually replace lost brain cells, but we have not achieved that goal yet. Some researchers I have spoken to have given up on that approach. They are focusing on using stem cells as a therapy, not a cure – as a way to deliver treatments and improve the environment, to support neurons and brain function, but without the plan to replace neurons in functional circuits.

But the allure of curing neurological disease by transplanting new neurons into the brain to actually fix brain circuits is simply too great to give up entirely. Research will continue to push in this direction (and you can be sure that every mainstream news report about this research will focus on this potential of the treatment). We may just need some basic science breakthrough to figure out how to get stem cells to make meaningful connections, and breakthroughs are hard to predict. We had hoped they would just do it automatically, but apparently they don’t. In the meantime, stem cells are still a very useful treatment modality, just more for support than replacement.

The post Stem Cells for Parkinson’s Disease first appeared on NeuroLogica Blog.

Categories: Skeptic

Where Are All the Dwarf Planets?

Thu, 03/06/2025 - 5:05am

In 2006 (yes, it was that long ago – yikes) the International Astronomical Union (IAU) officially adopted the definition of dwarf planet – they are large enough for their gravity to pull themselves into a sphere, they orbit the sun and not another larger body, but they don’t gravitationally dominate their orbit. That last criterion is what separates planets (which do dominate their orbit) from dwarf planets. Famously, this causes Pluto to be “downgraded” from a planet to a dwarf planet. Four other objects also met criteria for dwarf planet – Ceres in the asteroid belt, and three Kuiper belt objects, Makemake, Haumea, and Eris.

The new designation of dwarf planet came soon after the discovery of Sedna, a trans-Neptunian object that could meet the old definition of planet. It was, in fact, often reported at the time as the discovery of a 10th planet. But astronomers feared that there were dozens or even hundreds of similar trans-Neptunian objects, and they thought it was messy to have so many planets in our solar system. That is why they came up with the whole idea of dwarf planets. Pluto was just caught in the crossfire – in order to keep Sedna and its ilk from being planets, Pluto had to be demoted as well. As a sort-of consolation, dwarf planets that were also trans-Neptunian objects were named “plutoids”. All dwarf planets are plutoids, except Ceres, which is in the asteroid belt between Mars and Jupiter.

So here we are, two decades later, and I can’t help wondering – where are all the dwarf planets? Where are all the trans-Neptunian objects that astronomers feared would have to be classified as planets that the dwarf planet category was specifically created for? I really thought that by now we would have a dozen or more official dwarf planets. What’s happening? As far as I can tell there are two reasons we are still stuck with only the original five dwarf planets.

One is simply that (even after two decades) candidate dwarf planets have not yet been confirmed with adequate observations. We need to determine their orbit, their shape, and (related to their shape) their size. Sedna is still considered a “candidate” dwarf planet, although most astronomers believe it is an actual dwarf planet and will eventually be confirmed. Until then it is officially considered a trans-Neptunian object. There is also Gonggong, Quaoar, and Orcus which are high probability candidates, and a borderline candidate, Salacia. So there are at least nine, and possibly ten, known likely dwarf planets, but only the original five are confirmed. I guess it is harder to observe these objects than I assumed.

But I have also come across a second reason we have not expanded the official list of dwarf planets. Apparently there is another criterion for plutoids (dwarf planets that are also trans-Neptunian objects) – they have to have an absolute magnitude less than +1 (the smaller the magnitude the brighter the object). Absolute magnitude means how bright an object actually is, not it’s apparent brightness as viewed from the Earth. Absolute magnitude for planets is essentially the result of two factors – size and albedo. For stars, absolute magnitude is the brightness as observed from 10 parsecs away. For solar system bodies, the absolute magnitude is the brightness if the object were one AU from the sun and the observer.

What this means is that astronomers have to determine the absolute magnitude of a trans-Neptunian object before they can officially declare it a dwarf planet. This also means that trans-Neptunian objects that are made of dark material, even if they are large and spherical, may also fail the dwarf planet criteria. Some astronomers are already proposing that this absolute magnitude criterion be replaced by a size criterion – something like 200 km in diameter.

It seems like the dwarf planet designation needs to be revisited. Currently, the James Webb Space Telescope is being used to observe trans-Neptunian objects. Hopefully this means we will have some confirmations soon. Poor Sedna, whose discovery in 2003 set off the whole dwarf planet thing, still has not yet been confirmed.

The post Where Are All the Dwarf Planets? first appeared on NeuroLogica Blog.

Categories: Skeptic

The New TIGR-Tas Gene Editing System

Mon, 03/03/2025 - 5:02am

Remember CRISPR (clustered regularly interspaced short palindromic repeats) – that new gene-editing system which is faster and cheaper than anything that came before it? CRISPR is derived from bacterial systems which uses guide RNA to target a specific sequence on a DNA strand. It is coupled with a Cas (CRISPR Associated) protein which can do things like cleave the DNA at the targeted location. We are really just at the beginning of exploring the potential of this new system, in both research and therapeutics.

Well – we may already have something better than CSRISP: TIGR-Tas. This is also an RNA-based system for targeting specific sequences of DNA and delivering a TIGR associated protein to perform a specific function. TIGR (Tandem Interspaced Guide RNA) may have some useful advantages of CRISPR.

As presented in a new paper, TIGR is actually a family of gene editing systems. It was discovered not by happy accident, but by specifically looking for it. As the paper details: “through iterative structural and sequence homology-based mining starting with a guide RNA-interaction domain of Cas9”. This means they started with Cas9 and then trolled through the vast database of phage and parasitic bacteria for similar sequences. They found what they were looking for – another family of RNA-guided gene editing systems.

Like CRISPR, TIGR is an RNA guided system, and has a modular structure. Different Tas proteins can be coupled with the TIGR to perform different actions at the targeted site. But there are several potential advantages for TIGR over CRISPR. Like CRISPR it is RNA guided, but TIGR uses both strands of the DNA to find its target sequence. This “dual guided” approach may lead to fewer off-target errors. While CRISPR works very well, there is a trade-off in CRISPR systems between speed and precision. The faster it works, the greater the number of off-target actions – like cleaving the DNA in the wrong place. The hope is that TIGR will make fewer off-target mistakes because of better targeting.

TIGR also has “PAM-Independent targeting”. What does that mean? PAM stands for protospacer adjacent motifs – these are short DNA sequences, about 6 base pairs, that exist next to the sequence that his being targeted by CRISPR. The Cas9 protease will not function without the PAM. It appears to have evolved so that the bacteria using CRISPR as an adaptive immune system can tell self from non-self, as invading bacteria or viruses will have the PAM sequences, but the native DNA will not. The end result is that CRISPR needs PAM sequences in order to function, but the TIGR system does not. This makes the TIGR system much more versatile.

I saved what is potentially the best advantage for last – Tas proteins are much smaller than Cas proteins, about a quarter of the size. At first this might not seem like a huge advantage, but for some applications it is. One of the main limiting factors for using CRISPR therapeutically is getting the CRISPR-Cas complex into human cells. There are several available approaches – physical methods like direct injection, chemical methods, and viral vectors. Specific methods, however, generally have a size limit on the package they can deliver into a cell. Adeno-associated vectors (AAVs) for example have lots of advantaged but only can deliver relatively small payloads. Having a much more compact gene-editing system, therefore, is a huge potential advantage.

When it comes to therapeutics, the delivery system is perhaps the greater limiting factor than the gene targeting and editing system itself. There are currently two FDA indications for CRISPR-based therapies, both for blood disorders (sickle cell and thalassemia). For these disorders bone marrow can be removed from the patient, CRISPR is then applied to make the desired genetic changes, and then the bone marrow is transplanted back into the patient. In essence, we bring the cells to the CRISPR rather than the CRISPR to the cells. But how do we deliver CRISPR to a cell population within a living adult human?

We use the methods I listed above, such as the AAVs, but these all have limitations. Having a smaller package to deliver, however, will greatly expand our options.

The world of genetic engineering is moving incredibly fast. We are taking advantage of the fact that nature has already tinkered with these systems for hundreds of millions of years. There are likely more systems and variations out there for us to find. But already we have powerful tools to make precise edits of DNA at targeted locations, and TIGR just adds to our toolkit.

The post The New TIGR-Tas Gene Editing System first appeared on NeuroLogica Blog.

Categories: Skeptic

Pages