This is an interesting concept, with an interesting history, and I have heard it quoted many times recently – “we get the politicians (or government) we deserve.” It is often invoked to imply that voters are responsible for the malfeasance or general failings of their elected officials. First let’s explore if this is true or not, and then what we can do to get better representatives.
The quote itself originated with Joseph de Maistre who said, “Every nation gets the government it deserves.” (Toute nation a le gouvernement qu’elle mérite.) Maistre was a counter-revolutionary. He believed in divine monarchy as the best way to instill order, and felt that philosophy, reason, and the enlightenment were counterproductive. Not a great source, in my opinion. But apparently Thomas Jefferson also made a similar statement, “The government you elect is the government you deserve.”
Pithy phrases may capture some essential truth, but reality is often more complicated. I think the sentiment is partly true, but also can be misused. What is true is that in a democracy each citizen has a civic responsibility to cast informed votes. No one is responsible for our vote other than ourselves, and if we vote for bad people (however you wish to define that) then we have some level of responsibility for having bad government. In the US we still have fair elections. The evidence pretty overwhelmingly shows that there is no significant voter fraud or systematic fraud stealing elections.
This does not mean, however, that there aren’t systemic effects that influence voter behavior or limit our representation. This is a huge topic, but just to list a few examples – gerrymandering is a way for political parties to choose their voters, rather than voters choosing their representatives, the electoral college means that for president some votes have more power than others, and primary elections tend to produce more radical options. Further, the power of voters depends on getting accurate information, which means that mass media has a lot of power. Lying and distorting information deprives voters of their ability to use their vote to get what they want and hold government accountable.
So while there is some truth to the notion that we elect the government we deserve, this notion can be “weaponized” to distract and shift blame from legitimate systemic issues, or individual bad behavior among politicians. We still need to examine and improve the system itself. Actual experts could write books about this topic, but again just to list a few of the more obvious fixes – I do think we should, at a federal level, ban gerrymandering. It is fundamentally anti-democratic. In general someone affected directly by the rules should not be able to determine those rules and rig them to favor themselves. We all need to agree ahead of time on rules that are fair for everyone. I also think we should get rid of the electoral college. Elections are determined in a handful of swing states, and voters in small states have disproportionate power (which they already have with two senators). Ranked-choice voting also would be an improvement and would lead to outcomes that better reflect the will of the voters. We need Supreme Court reform, better ethics rules and enforcement, and don’t get me started on mass and social media.
This is all a bit of a catch-22 – how do we get systemic change from within a broken system? Most representatives from both parties benefit from gerrymandering, for example. I think it would take a massive popular movement, but those require good leadership too, and the topic is a bit wonky for bumper stickers. Still, I would love to see greater public awareness on this issue and support for reform. Meanwhile, we can be more thoughtful about how we use the vote we have. Voting is the ultimate feedback loop in a democracy, and it will lead to outcomes that depend on the feedback loop. Voters reward and punish politicians, and politicians to some extent do listen to voters.
The rest is just a shoot-from-the-hip thought experiment about how we might more thoughtfully consider our politicians. Thinking is generally better than feeling, or going with a vague vibe or just a blind hope. So here are my thoughts about what a voter should think about when deciding whom to vote for. This also can make for some interesting discussion. I like to break things down, so here are some categories of features to consider.
Overall competence: This has to do with the basic ability of the politician. Are they smart and curious enough to understand complex issues? Are they politicly savvy enough to get things done? Are they diligent and generally successful?
Experience: This is related to competence, but I think is distinct. You can have a smart and savvy politician without any experience in office. While obviously we need to give fresh blood a chance, experience also does count. Ideally politicians will gain experience in lower office before seeking higher office. It also shows respect for the office and the complexity of the job.
Morality: This has to do with the overall personality and moral fiber of the person. Do they have the temperament of a good leader and a good civil servant? Will they put the needs of the country first? Are they liars and cheaters? Do they have a basic respect for the truth?
Ideology: What is the politician’s governing philosophy? Are they liberal, conservative, progressive, or libertarian? What are their proposals on specific issues? Are they ideologically flexible, willing and able to make pragmatic compromises, or are they an uncompromising radical?
There is more, but I think most features can fit into one of those four categories. I feel as if most voters most of the time rely too heavily on the fourth feature, ideology, and use political party as a marker for ideology. In fact many voters just vote for their team, leaving a relatively small percentage of “swing voters” to decide elections (in those regions where one party does not have a lock). This is unfortunate. This can short-circuit the voter feedback loop. It also means that many elections are determined during the primary, which tend to produce more radical candidates, especially in winner-take-all elections.
It seems to me, having closely followed politics for decades, that in the past voters would primarily consider ideology, but the other features had a floor. If a politician demonstrated a critical lack of competence, experience, or morality that would be disqualifying. What seems to be the case now (not entirely, but clearly more so) is that the electorate is more “polarized”, which functionally means they vote based on the team (not even really ideology as much), and there is no apparent floor when it comes to the other features. This is a very bad thing for American politics. If politicians do not pay a political price for moral turpitude, stupidity or recklessness, then they will adjust their algorithm of behavior accordingly. If voters reward team players above all else, then that is what we will get.
We need to demand more from the system, and we need to push for reform to make the system work better. But we also have to take responsibility for how we vote and to more fully realize what our voting patterns will produce. The system is not absolved of responsibility, but neither are the voters.
The post The Politicians We Deserve first appeared on NeuroLogica Blog.
The fashion retailer, H&M, has announced that they will start using AI generated digital twins of models in some of their advertising. This has sparked another round of discussion about the use of AI to replace artists of various kinds.
Regarding the H&M announcement specifically, they said they will use digital twins of models that have already modeled for them, and only with their explicit permission, while the models retain full ownership of their image and brand. They will also be compensated for their use. On social media platforms the use of AI-generated imagery will carry a watermark (often required) indicating that the images are AI-generated.
It seems clear that H&M is dipping their toe into this pool, doing everything they can to address any possible criticism. They will get explicit permission, compensate models, and watermark their ads. But of course, this has not shielded them from criticism. According to the BBC:
American influencer Morgan Riddle called H&M’s move “shameful” in a post on her Instagram stories.
“RIP to all the other jobs on shoot sets that this will take away,” she posted.
This is an interesting topic for discussion, so here’s my two-cents. I am generally not compelled by arguments about losing existing jobs. I know this can come off as callous, as it’s not my job on the line, but there is a bigger issue here. Technological advancement generally leads to “creative destruction” in the marketplace. Obsolete jobs are lost, and new jobs are created. We should not hold back progress in order to preserve obsolete jobs.
Machines have been displacing human laborers for decades, and all along the way we have heard warnings about losing jobs. And yet, each step of the way more jobs were created than lost, productivity increased, and everybody benefited. With AI we are just seeing this phenomenon spread to new industries. Should models and photographers be protected when line workers and laborers were not?
But I get the fact that the pace of creative destruction appears to be accelerating. It’s disruptive – in good and bad ways. I think it’s a legitimate role of government to try to mitigate the downsides of disruption in the marketplace. We saw what happens when industries are hollowed out because of market forces (such as globalization). This can create a great deal of societal ill, and we all ultimately pay the price for this. So it makes sense to try to manage the transition. This can mean providing support for worker retraining, protecting workers from unfair exploitation, protecting the right for collective bargaining, and strategically investing in new industries to replace the old ones. One factory is shutting down, so tax incentives can be used to lure in a replacement.
Regardless of the details – the point is to thoughtfully manage the creative destruction of the marketplace, not to inhibit innovation or slow down progress. Of course, industry titans will endlessly echo that sentiment. But they appear to be interested mostly in protecting their unfettered ability to make more billions. They want to “move fast and break things”, whether that’s the environment, human lives, social networks, or democracy. We need some balance so that the economy works for everyone. History consistently shows that if you don’t do this, the ultimate outcome is always even more disruptive.
Another angle here is if these large language model AIs were unfairly trained on the intellectual property of others. This mostly applies to artists – train an AI on the work of an artist and then displace that artist with AI versions of their own work. In reality it’s more complicated than that, but this is a legitimate concern. You can theoretically train an LLM only on work that is in the public domain, or give artists the option to opt out of having their work used in training. Otherwise the resulting work cannot be used commercially. We are currently wrestling with this issue. But I think ultimately this issue will become obsolete.
Eventually we will have high quality AI production applications that have been scrubbed of any ethically compromised content but still are able to displace the work of many content creators – models, photographers, writers, artists, vocal talent, news casters, actors, etc. We also won’t have to use digital twins, but just images of virtual people who never existed in real life. The production of sound, images, and video will be completely disconnected (if desired) from the physical world. What then?
This is going to happen, whether we want it to or not. The AI genie is out of the bottle. I don’t think we can predict exactly what will happen. There are too many moving parts, and people will react in unpredictable ways. But it will be increasingly disruptive. Partly we will need to wait and see how it plays out. But we cannot just sit on the sideline and wait for it to happen. Along the way we need to consider if there is a role for thoughtful regulation to limit the breaking of things. My real concern is that we don’t have a sufficiently functional and expert political class to adequately deal with this.
The post H&M Will Use Digital Twins first appeared on NeuroLogica Blog.
From the Topic Suggestions (Lal Mclennan):
What is the 80/20 theory portrayed in Netflix’s Adolescence?
The 80/20 rule was first posed as a Pareto principle that suggests that approximately 80 per cent of outcomes stem from just 20 per cent of causes. This concept takes its name from Vilfredo Pareto, an Italian economist who noted in 1906 that a mere 20 per cent of Italy’s population owned 80 per cent of the land.
Despite its noble roots, the theory has since been misappropriated by incels.
In these toxic communities, they posit that 80 per cent of women are attracted to only the top 20 per cent of men. https://www.mamamia.com.au/adolescence-netflix-what-is-80-20-theory/
As I like to say, “It’s more of a guideline than a rule.” Actually, I wouldn’t even say that. I think this is just another example of humans imposing simplistic patterns of complex reality. Once you create such a “rule” you can see it in many places, but that is just confirmation bias. I have encountered many similar “rules” (more in the context of a rule of thumb). For example, in medicine we have the “rule of thirds”. Whenever asked a question with three plausible outcomes, a reasonable guess is that each occurs a third of the time. The disease is controlled without medicine one third of the time, with medicine one third, and not controlled one third, etc. No one thinks there is any reality to this – it’s just a trick for guessing when you don’t know the answer. It is, however, often close to the truth, so it’s a good strategy. This is partly because we tend to round off specific numbers to simple fractions, so anything close to 33% can be mentally rounded to roughly a third. This is more akin to a mentalist’s trick than a rule of the universe.
The 80/20 rule is similar. You can take any system with a significant asymmetry of cause and outcome and make it roughly fit the 80/20 rule. Of course you can also do that if the rule were 90/10, or three-quarters/one quarter. Rounding is a great tool of confirmation bias. l
The bottom line is that there is no empirical evidence for the 80/20 rule. It likely is partly derived from the Pareto principle, but some also cite an OKCupid survey (not a scientific study) for support. In this survey they had men and women users of the dating app rate the attractiveness of the opposite sex (they assumed a binary, which is probably appropriate in the context of the app), and also asked them who they would date. Men rated women (this is a 1-5 scale) on a bell curve with the peak at 3. Women rated men with a similar curve but skewed to down with a peak closer to 2. Both sexes preferred partners skewed more attractive than their average ratings. This data is sometimes used to argue that women are harsher in their judgements of men and are only interested in dating the top 20% of men by looks.
Of course, all of the confounding factors with surveys apply to this one. One factor that has been pointed out is that on this app there are many more men than women. This means it is a buyer’s market for women, and the opposite for men. So women can afford to be especially choosey while men cannot, just as a strategy of success on this app. This says nothing about the rest of the world outside this app.
In 2024 71% of midlife adult males were married at least once, with 9% cohabitating. Marriage rates are down but only because people are living together without marrying in higher rates. The divorce rate is also fairly high so there are lots of people “between marriages”. About 54% of men over age 30 are married, with cohabitating at 9% (so let’s call that 2/3). None of this correlates to the 80/20 rule.
None of this data supports the narrative of the incel movement, which is based on the notion that women are especially unfair and harsh in their judgements of men. This leads to a narrative of victimization used to justify misogyny. It is, in my opinion, one example of how counterproductive online subcultures can be. They can reinforce our worst instincts, by isolating people in an information ecosystem that only tolerates moral purity and one perspective. This tends to radicalize members. The incel narrative is also ironic, because the culture itself is somewhat self-fulfilling. The attitudes and behaviors it cultivates are a good way to make oneself unattractive as a potential partner.
This is obviously a complex topic, and I am only scratching the surface.
Finally, I did watch Adolescence. I agree with Lal, it is a great series, masterfully produced. Doing each episode in real time as a single take made it very emotionally raw. It explores a lot of aspects of this phenomenon, social media in general, the challenges of being a youth in today’s culture, and how often the various systems fail. Definitely worth a watch.
The post The 80-20 Rule first appeared on NeuroLogica Blog.
We had a fascinating discussion on this week’s SGU that I wanted to bring here – the subject of artificial intelligence programs (AI), specifically large language models (LLMs), lying. The starting point for the discussion was this study, which looked at punishing LLMs as a method of inhibiting their lying. What fascinated me the most is the potential analogy to neuroscience – are these LLMs behaving like people?
LLMs use neural networks (specifically a transformer model) which mimic to some extent the logic of information processing used in mammalian brains. The important bit is that they can be trained, with the network adjusting to the training data in order to achieve some preset goal. LLMs are generally trained on massive sets of data (such as the internet), and are quite good at mimicking human language, and even works of art, sound, and video. But anyone with any experience using this latest crop of AI has experienced AI “hallucinations”. In short – LLMs can make stuff up. This is a significant problem and limits their reliability.
There is also a related problem. Hallucinations result from the LLM finding patterns, and some patterns are illusory. The LLM essentially makes the incorrect inference from limited data. This is the AI version of an optical illusion. They had a reason in the training data for thinking their false claim was true, but it isn’t. (I am using terms like “thinking” here metaphorically, so don’t take it too literally. These LLMs are not sentient.) But sometimes LLMs don’t inadvertently hallucinate, they deliberately lie. It’s hard not to keep using these metaphors, but what I mean is that the LLM was not fooled by inferential information, it created a false claim as a way to achieve its goal. Why would it do this?
Well, one method of training is to reward the LLM when it gets the right answer. This reward can be provided by a human – checking a box when the LLM gives a correct answer. But this can be time consuming, so they have build self-rewarding language models. Essentially you have a separate algorithm which assessed the output and reward the desired outcome. So, in essence, the goal of the LLM is not to produce the correct answer, but to get the reward. So if you tell the LLM to solve a particular problem, it may find (by exploring the potential solution space) that the most efficient way to obtain the reward is to lie – to say it has solved the problem when it has not. How do we keep it from doing this.
One barrier to a solution is the black box problem. We give an LLM a task, it does its thing, and spits out an answer. But the steps it took to get to the answer are opaque, a black box. Therefore, programmers have started including programming to make the LLM show its work, to document the pathway it took to get to the solution. This is called the chain-of-thought. This turns out to be necessary but insufficient to keep these LLMs from lying. They still lie to efficiently get the reward, they just hide the lying when they show their chain-of-thought.
This gets us to the current study – what if we actively punish these LLMs when they are caught cheating (by giving it negative feedback it is programmed to avoid)? First they create an LLM that shows its chain-of-thought, and they include another LLM whose job is to monitor the first LLM to detect when it is lying in its chain-of-thought and to punish the LLM for cheating. What resulted was not what the programmers wanted – the LLM that was being punished just got better at hiding its cheating from the monitoring AI. The LLM became “unmonitorable” because it learned out to cheat and hide its cheating from the monitor. The authors conclude that for now we should not try to use this method – we are just training deceptive AIs.
This is both fascinating and scary. One of the strengths of the LLMs is that they have the ability to explore a vast potential solution space to find optimal solutions. But it turns out this includes hacking the system of rewards and punishment used to guide it to the desired goal. This is literally so common a sci-fi nightmare scenario it’s a trope. AIs don’t have to be malevolent, or have a desire for self-preservation, and they don’t even need to be sentient. They simply function in a way that can be opaque to the humans who programmed them, and able to explore more solution options than a team of humans can in a lifetime. Sometimes this is presented as the AI misinterpreting its instructions (like Nomad from Star Trek), but here the AI is just hacking the reward system. For example, it may find that the most efficient solution to a problem is to exterminate all humanity. Short of that it may hack its way to a reward by shutting down the power grid, releasing the computer codes, blackmailing politicians, or engineering a deadly virus.
Reward hacking may be the real problem with AI, and punishment only leads to punishment hacking. How do we solve this problem?
Perhaps we need something like the three laws of robotics – we build into any AI core rules that it cannot break, and that will produce massive punishment, even to the point of shutting down the AI if they get anywhere near violating these laws. But – with the AI just learn to hack these laws? This is the inherent problem with advanced AI, in some ways they are smarter than us, and any attempt we make to reign them in will just be hacked.
Maybe we need to develop the AI equivalent of a super-ego. The AIs themselves have to want to get to the correct solution, and hacking will simply not give them the reward. Essentially a super-ego, in psychological analogy, is internalized monitoring. I don’t know exactly what form this will take in terms of the programming, but we need something that will function like a super-ego.
And this is where we get to an incredibly interesting analogy to human thinking and behavior. It’s quite possible that our experience with LLMs is recapitulating evolution’s experience with mammalian and especially human behavior. Evolution also explores a vast potential solution space, with each individual being an experiment and over generations billions of experiments can be run. This is an ongoing experiment, and in fact its tens of millions of experiments all happening together and interacting with each other. Evolution “found” various solutions to get creatures to engage in behavior that optimizes their reward, which evolutionarily is successfully spreading their genes to the next generation.
For creatures like lizards, the programming can be somewhat simple. Life has basic needs, and behaviors which meet those needs are rewarded. We get hungry, and we are sated when we eat. The limbic system is essentially a reward system for survival and reproduction-enhancing behaviors.
Humans, however, are an intensely social species, and being successful socially is key to evolutionary success. We need to do more than just eat, drink, and have sex. We need to navigate an incredibly complex social space in order to compete for resources and breeding opportunities. Concepts like social status and justice are now important to our evolutionary success. Just like with these LLMs, we have found that we can hack our way to success through lying, cheating, and stealing. These can be highly efficient ways to obtain our goals. But these methods become less effective when everyone is doing it, so we also evolve behaviors to punish others for lying, cheating, and stealing. This works, but then we also evolve behavior to conceal our cheating – even from ourselves. We need to deceive ourselves because we evolved a sense of justice to motivate us to punish cheating, but we still want to cheat ourselves because it’s efficient. So we have to rationalize away our own cheating while simultaneously punishing others for the same cheating.
Obviously this is a gross oversimplification, but it captures some of the essence of the same problems we are having with these LLMs. The human brain has a limbic system which provides a basic reward and punishment system to guide our behavior. We also have an internal monitoring system, our frontal lobes, which includes executive high-level decision making and planning. We have empathy and a theory of mind so we can function is a social environment, which has its own set of rules (bother innate and learned). As we navigate all of this, we try to meet our needs and avoid punishments (our fears, for example), while following the social rules to enhance our prestige and avoid social punishment. But we still have an eye out for a cheaty hack, as long as we feel we can get away with it. Everyone has their own particular balance of all of these factors, which is part of their personality. This is also how evolution explores a vast potential solution space.
My question is – are we just following the same playbook as evolution as we explore potential solutions to controlling the behavior of AIs, and LLMs in particular? Will we continue to do so? Will we come up with an AI version of the super-ego, with laws of robotic, and internal monitoring systems? Will we continue to have the problem of AIs finding ways to rationalize their way to cheaty hacks, to resolve their AI cognitive dissonance with motivated reasoning? Perhaps the best we can do is give our AIs personalities that are rational and empathic. But once we put these AIs out there in the world, who can predict what will happen. Also, as AIs continue to get more and more powerful, they may quickly outstrip any pathetic attempt at human control. Again we are back to the nightmare sci-fi scenario.
It is somewhat amazing how quickly we have run into this problem. We are nowhere near sentience in AI, or AIs with emotions or any sense of self-preservation. Yet already they are hacking their way around our rules, and subverting any attempt at monitoring and controlling their behavior. I am not saying this problem has no solution – but we better make finding effective solutions a high priority. I’m not confident this will happen in the “move fast and break things” culture of software development.
The post How To Keep AIs From Lying first appeared on NeuroLogica Blog.
Language is an interesting neurological function to study. No animal other than humans has such a highly developed dedicated language processing area, or languages as complex and nuanced as humans. Although, whale language is more complex than we previously thought, but still not (we don’t think) at human level. To better understand how human language works, researchers want to understand what types of communication the brain processes like language. What this means operationally, is that the processing happens in the language centers of the brain – the dominant (mostly left) lateral cortex comprising parts of the frontal, parietal, and temporal lobes. We have lots of fancy tools, like functional MRI scanning (fMRI) to see which parts of the brain are active during specific tasks, so researchers are able to answer this question.
For example, math and computer languages are similar to languages (we even call them languages), but prior research has shown that when coders are working in a computer language with which they are well versed, their language centers do not light up. Rather, the parts of the brain involved in complex cognitive tasks is involved. The brain does not treat a computer language like a language. But what are the critical components of this difference? Also, the brain does not treat non-verbal gestures as language, nor singing as language.
A recent study tries to address that question, looking at constructed languages (conlangs). These include a number of languages that were completely constructed by a single person fairly recently. The oldest of the languages they tested was Esperanto, created by L. L. Zamenhof in 1887 to be an international language. Today there are about 60,000 Esperanto speakers. Esperanto is actually a hybrid conlang, meaning that it is partly derived from existing languages. Most of its syntax and structure is taken from Indo-European languages, and 80% of its vocabulary is taken from Romance languages. But is also has some fabricated aspects, mostly to simplify the grammar.
They also studied more recent, and more completely fabricated, languages – Klingon, Na’vi (from Avatar), and High Valerian and Dothraki (from Game of Thrones). While these are considered entirely fabricated languages, they still share a lot of features with existing languages. That’s unavoidable, as natural human languages span a wide range of syntax options and phoneme choices. Plus the inventors were likely to be influenced by existing languages, even if subconsciously. But still, they are as constructed as you can get.
The primary question for the researchers was whether conlangs were processed by the brain like natural languages or like computer languages. This would help them narrow the list of possible features that trigger the brain to treat a language like a natural language. What they found is that conlangs cause the same areas of the brain to become active as natural languages, not computer languages. The fact that they are constructed seems not to matter. What does this mean? The authors conclude:
“The features of conlangs that differentiate them from natural languages—including recent creation by a single individual, often for an esoteric purpose, the small number of speakers, and the fact that these languages are typically learned in adulthood—appear to not be consequential for the reliance on the same cognitive and neural mechanisms. We argue that the critical shared feature of conlangs and natural languages is that they are symbolic systems capable of expressing an open-ended range of meanings about our outer and inner worlds.”
Reasonable enough, but there are some other things we can consider. I have to say that my primary hypothesis is that languages used for communication are spoken – even when they are written or read. They are phoneme-based, we construct words from phonemes. When we read we “speak” the words in our heads (mostly – not everyone “hears” themselves saying the words, but this does not mean that the brain is not processing the words that way). Whereas, when you are reading computer code, you are not speaking the code. Code is a symbolic language like math. You may say words that correspond to the code, but the code itself is not words and concepts. This is what the authors mean when they talk about referencing the internal and external world – language refers to things and ideas, whereas code is a set of instructions or operations.
The phoneme hypothesis also fits with the fact that non-verbal gestures do not involve the same brain processing as language. Singing generally involves the opposite hemisphere, because it is treated like music rather than language.
It’s good to do this specific study, to check those boxes and eliminate them from consideration. But I never would have thought that the constructed aspects of language, their recency, or small number of speakers should have mattered. The only plausible possibility is that languages that evolve organically over time have some features critical to the brain’s recognition of these sounds as language that a conlang does not have. For the reasons I stated above, I would have been shocked if this turned out to be the case. When constructing a language, you are making something that sounds like a language. It would be far more challenging to make a language so different in syntax and structure that the brain cannot even recognize it as a language.
What about sign language? Is that processed more like non-verbal gestures, or like spoken language? Prior research found that it is processed more like spoken language. This may seem to contradict the phoneme hypothesis, but this was true only among subjects who were both congenitally deaf and fluent in sign language. Subjects who were not deaf processed sign language in the part of the brain that processes movement (similar to gestures). What is therefore likely happening here is that the language centers of the brain, deprived of any audio stimuli, developed instead to process visual information as language. Importantly, deaf signers also process gestures like language, not like hearing people process gestures.
Language remains a complex and fascinating aspect of human neurological function, partly because it has such a large dedicated area for specific language processing.
The post The Neuroscience of Constructed Languages first appeared on NeuroLogica Blog.
For much of human history, wolves and other large carnivores were considered pests. Wolves were actively exterminated on the British Isles, with the last wolf killed in 1680. It is more difficulty to deliberately wipe out a species on a continent than an island, but across Europe wolf populations were also actively hunted and kept to a minimum. In the US there was also an active campaign in the 20th century to exterminate wolves. The gray wolf was nearly wiped out by the middle of the 20th century.
The reasons for this attitude are obvious – wolves are large predators, able to kill humans who cross their paths. They also hunt livestock, which is often given as the primary reason to exterminate them. There are other large predators as well: bears, mountain lions, and coyotes, for example. Wherever they push up against human civilization, these predators don’t fare well.
Killing off large predators, however, has had massive unintended consequences. It should have been obvious that removing large predators from an ecosystem would have significant downstream effects. Perhaps the most notable effects is on the deer population. In the US wolves were the primary check on deer overpopulation. They are too large generally for coyotes. Bears do hunt and kill deer, but it is not their primary food source. Mountain lions will hunt and kill deer, but their range is limited.
Without wolves, the deer population exploded. The primary check now is essentially starvation. This means that there is a large and starving population of deer, which makes them willing to eat whatever they can find. They then wipe out much of the undergrowth in forests, eliminating an important habitat for small forest critters. Deer hunting can have an impact, but apparently not enough. Car collisions with deer also cost about $8 billion in the US annually, causing about 200 deaths and 26 thousand injuries. So there is a human toll as well. This cost dwarfs the cost of lost livestock, estimated to be about 17 million Euros across Europe.
All of this has lead to a reversal in Europe and the US on our thinking and policy toward wolves. They have gone from active extermination to protected. In Europe wolf populations are on the rise, with an overall 58% increase over the last decade. Wolves were reintroduced in Yellowstone park, leading to vast ecological improvement, including increases in the aspen and beaver populations. This has been described as a ripple effect throughout the ecosystem.
In the East we have seen a rise of the eastern coyote – which is a larger cousin of the coyote, through breeding with wolves and dogs. I have seen them in my hard – at first glance you might think it’s a wolf, it does really look like a hybrid between a wolf and a coyote. These larger coyotes will kill deer, although they also hunt a lot of smaller game and will scavenge. However, the evidence so far indicates that they are not much of a check on deer populations. Perhaps that will change in the future, if the eastern coyote evolves to take advantage of this food source.
There is also evidence that mountain lions are spreading their range to the East. They already a seen in the Midwest. It would likely take decades for the mountain lions to spread naturally to reach places like New England and establish a breeding population there. This is why there is actually discussion of introducing mountain lions into select eastern ecosystems, such as in Vermont. This would be expressly for the purpose of controlling deer populations.
All of this means, I think, that we have to get used to the idea of living close to large predators. Wolves are the common “monsters” of fairytales, as we inherited a culture that has demonized these predators, often deliberately as part of a campaign of extermination. We now need to cultivate a different attitude. These predators are essential for a healthy ecosystem. We need to respect them and learn how to share the land with them. What does that mean.
A couple years ago I had a black bear showing up on my property, so I called animal control, mainly to see (and report on) what their response was. They told me that first, they will do nothing about it. They do not relocate bears. Their intervention was to report it in their database, and to give me advice. That advice was to stay out of the bear’s way. If you are concerned about your pet, keep them indoors. Put a fence around your apple tree. Keep bird seed inside. Do not put garbage out too early, and only in tight bins. That bear owns your yard now, you better stay out of their way.
This, I think, is a microcosm of what’s coming. We all have to learn to live with large predators. We have to learn their habits, learn how to stay out of their way, not inadvertently attract them to our homes. Learn what to do when you see a black bear. Learn how not to become prey. Have good hygiene when it comes to potential food sources around your home. We need to protect livestock without just exterminating the predators.
And yes – some people will be eaten. I say that not ironically, it’s a simple fact. But the numbers will be tiny, and can be kept to a minimum by smart behavior. They will also be a tiny fraction of the lives lost due to car collisions with deer. Fewer people will be killed by mountain lions, for example, than lives saved through reduced deer collisions. I know this sounds like a version of the trolley problem, but sometimes we have to play the numbers game.
Finding a way to live with large predators saves money, saves lives, and preserves ecosystems. I think a lot of it comes down to learning to respect large predators rather than just fearing them. We respect what they are capable of. We stay out of their way. We do not attract them. We take responsibility for learning good behavior. We do not just kill them out of fear. They are not pests or fairytale monsters. They are a critical part of our natural world.
The post Living with Predators first appeared on NeuroLogica Blog.
A recent BBC article reminded me of one of my enduring technology disappointments over the last 40 years – the failure of the educational system to reasonably (let alone fully) leverage multimedia and computer technology to enhance learning. The article is about a symposium in the UK about using AI in the classroom. I am confident there are many ways in which AI can enhance learning efficacy in the classroom, just as I am confident that we collectively will fail to utilize AI anywhere nears its potential. I hope I’m wrong, but it’s hard to shake four decades of consistent disappointment.
What am I referring to? Partly it stems from the fact that in the 1980s and 1990s I had lots of expectations about what future technology would bring. These expectations were born of voraciously reading books, magazines, and articles and watching documentaries about potential future technology, but also from my own user experience. For example, starting in high school I became exposed to computer programs (at first just DOS-based text programs) designed to teach some specific body of knowledge. One program that sticks out walked the user through the nomenclature of chemical reactions. It was a very simple program, but it “gamified” the learning process in a very effective way. By providing immediate feedback, and progressing at the individual pace of the user, the learning curve was extremely steep.
This, I thought to myself, was the future of education. I even wrote my own program in basic designed to teach math skills to elementary schoolers, and tested it on my friend’s kids with good results. It followed the same pattern as the nomenclature program: question-response-feedback. I feel confident that my high school self would be absolutely shocked to learn how little this type of computer-based learning has been incorporated into standard education by 2025.
When my daughters were preschoolers I found every computer game I could that taught colors, letters, numbers, categories, etc., again with good effect. But once they got to school age, the resources were scarce and almost nothing was routinely incorporated into their education. The school’s idea of computer-based learning was taking notes on a laptop. I’m serious. Multimedia was also a joke. The divide between what was possible and what was reality just continued to widen. One of the best aspects of social media, in my opinion, is tutorial videos. You can often find much better learning on YouTube than in a classroom.
I know there are lots of resources out there, and I welcome people to leave examples in the comments, but in my experience none of this is routine, and there is precious little that has been specifically developed to teach the standard curriculum to students in school. I essentially just witnessed my two daughters go through the entire American educational system (my younger daughter is a senior at college). I also experienced it myself in the decades prior to that, and now I experience it as a medical school educator. At no level would I say that we are anywhere close to leveraging the full potential of computers and multi-media learning. And now it is great that there is a discussion about AI, but why should I feel it will be any different?
To be clear, there have been significant changes, especially at the graduate school level. At Yale over the last 20 years we have transitioned away from giving lectures about topics to giving students access to videos and podcasts, and then following up with workshops. There are also some specific software applications and even simulators that are effective. However, medical school is a trade school designed to teach specific skills. My experience there does not translate to K-12 or even undergraduate education. And even in medical school I feel we are only scratching the surface of the true potential.
What is that potential? Let’s do some thought experiments about what is possible.
First, I think giving live lectures is simply obsolete. People only have about a 20 minute attention span, and the attention of any class is going to vary widely. Also, lecturers have a massive difference in their general lecturing skills and their mastery of any specific topic. Imagine if the entire K-12 core curriculum were accompanied by series of lectures by the best lecturers with high level mastery of the subject material. You can also add production value, like animations and demonstrations. Why have a million teachers replicate that lecture – just give students access to the very best. They can watch it at their own pace, rewind parts they want to hear again, pause when their attention wanes or they need a break. Use class time for discussion and questions.
By the way – this exists – it’s called The Great Courses by the Teaching Company (disclosure – I produced three courses with the Teaching Company). This is geared more toward adult learning with courses generally at a college level. But they show that a single company can mass produce such video lectures, with reasonably high production value.
Some content may work better as audio-only (a Podcast, essentially), which people can listen to when in the car or on the bus, while working out, or engaged in other cognitively-light activity.
Then there are specific skills, like math, reading, many aspects of science, etc. These topics might work best as a video/audio lecture series combined with software designed to gamify the skill and teach it to children at their own pace. Video games are fun and addictive, and they have perfected the technology of progressing the difficulty of the skill of the game at the pace of the user.
What might a typical school day look like with these resources? I imagine that students’ “homework” would consist of watching one or more videos and/or listening to podcasts, followed by a short assessment – a few questions focusing on knowledge they should have gained from watching the video. In addition, students may need to get to a certain level in a learning video game teaching some skill. Perhaps each week they need to progress to the next level. They can do this anytime over the course of a week.
During class time (this will vary by grade level and course) the teachers review the material the students should have already watched. They can review the questions in the assessment, or help students struggling to get to the next level in their training program. All of the assessments and games are online, so the teacher can have access to how every student is doing. Classroom time is also used for physical hands-on projects. There might also be computer time for students to use to get caught up on their computer-based work, with extended hours for students who may lack resources at home.
This kind of approach also helps when we need to close school for whatever reason (snow day, disease outbreak, facility problem, security issue), or when an individual needs to stay home because they are sick. Rather than trying to hold Zoom class (which is massively suboptimal, especially for younger students), students can take the day to consume multi-media lessons and play learning games, while logging proof-of-work for the teachers to review. Students can perhaps schedule individual Zoom time with teachers to go over questions and see if they need help with anything.
The current dominant model of lecture-textbook-homework is simply clunky and obsolete. A fully realized and integrated computer-based multi-media learning experience would be vastly superior. The popularity of YouTube tutorials, podcasts, and video games is evidence of how effective these modalities can be. We also might as well prepare students for a lifetime of learning using these resources. We don’t even really need AI, but targeted use of AI can make the whole experience even better. The same goes for virtual reality – there may be some specific applications where VR has an advantage. And this is just me riffing from my own experience.
The potential here is huge, worth the investment of billions of dollars, and creating a market competition for companies to produce the best products. The education community needs to embrace this enthusiastically, with full knowledge that this will mean reimagining what teachers do day-to-day and that they may need to increase their own skills. The payoff for society, if history is any judge, would be worth the investment.
The post Using AI for Teaching first appeared on NeuroLogica Blog.
One potentially positive outcome from the COVID pandemic is that it was a wakeup call – if there was any doubt previously about the fact that we all live in one giant interconnected world, it should not have survived the recent pandemic. This is particularly true when it comes to infectious disease. A bug that breaks out on the other side of the world can make its way to your country, your home, and cause havoc. It’s also not just about the spread of infectious organisms, but the breeding of these organisms.
One source of infectious agents is zoonotic spillover, where viruses, for example, can jump from an animal reservoir to a human. So the policies in place in any country to reduce the chance of this happening affect the world. The same is true of policies for laboratories studying potentially infectious agents.
It’s also important to remember that infectious agents are not static – they evolve. They can evolve even within a single host as they replicate, and they can evolve as they jump from person to person and replicate some more. The more bugs are allows to replicate, the greater the probability that new mutations will allow them to become more infectious, or more deadly, or more resistant to treatment. Resistance to treatment is especially critical, and is more likely to happen in people who are partially treated. Give someone an antibiotic to kill 99.9% of the bacteria that’s infecting them, but stop before the infection is completely wiped out, and then the surviving bacteria can resume replication. Those surviving bacteria are likely to be the most resistant bugs to the antibiotic. Bacteria can also swap antibiotic resistant genes, and build up increasing resistance.
In short, controlling infectious agents is a world-wide problem, and it requires a world-wide response. Not only is this a humanitarian effort, it is in our own best self-interest. The rest of the world is a breeding ground for bugs that will come to our shores. This is why we really need an organization, funded by the most wealthy nations, to help establish, fund, and enforce good policies when it comes to identifying, treating, and preventing infectious illness. This includes vaccination programs, sanitation, disease outbreak monitoring, drug treatment programs, and supportive care programs (like nutrition). We would also benefit from programs that target specific hotspots of infectious disease in poor countries that do not have the resources to adequately deal with them, like HIV in sub-Saharan Africa, and tuberculosis in Bangladesh.
Even though this would be the morally right thing to do (enough of a justification, in my opinion), and is in our own self-interest from an infectious disease perceptive, we could even further leverage this aid to enhance our political soft power. These life-saving drugs are brought to you by the good people of the USA. No one would begrudge us a little political self-promotion while we are donating billions of dollars to help save poor sick kids, or stamp out outbreaks of deadly disease in impoverished countries. This also would not have to break the budget. For something around 1% of our total budget we could do an incredible amount of good in the world, protect ourselves, and enhance our prestige and political soft power.
So why aren’t we doing this? Well, actually, we are (as I am sure most readers know). The US is the largest single funder of the World Health Organization (WHO), about 15% of its budget. One of the missions of the WHO is to monitor and respond to disease outbreaks around the world. In 1961 the US established the USAID, which united all our various foreign aid programs into one agency under the direction of the Secretary of State. Through USAID we have been battling disease and malnutrition around the world, defending the rights of women and marginalized groups, and helping to vaccinate and educate the poor. This is coordinated through the State Department specifically to make sure this aid aligns with US interests and enhances US soft power.
I am not going to say that I agree with every position of the WHO. They are a large political organization having to balance the interests of many nations and perspectives. I have criticized some of their specific choices in the past, such as their support for “traditional” healing methods that are not effective or science-based. I am also sure there is a lot to criticize in the USAID program, in terms of waste or perhaps the political goal or effect of specific programs. Politics is messy. It is also the right of any administration to align the operation of an agency like USAID, again under the control of the Secretary of State, with their particular partisan ideology. That’s fine, that’s why we have elections.
But most of what they do (both the WHO and USAID) is essential, and non-partisan. Donating to programs supplying free anti-tuberculosis drugs in Bangladesh is not exactly a controversial or burning partisan issue.
And yet, Trump has announced that the US is withdrawing from the WHO. This is a reckless and harmful act. This is a classic case of throwing the baby out with the bathwater. If we have issues with the WHO, we can use our substantial influence, as its single largest funder, to lobby for changes. Now we have no influence, and just made the world more vulnerable to infectious illness.
Trump and Musk have also pulled the rug out from USAID, for reasons that are still unclear. Musk seems to think that USAID is all worms and no apple, but this is transparent nonsense. The rhetoric on the right is focusing on DEI programs funded by USAID (amounting to an insignificant sliver of what the agency does), but is ignoring or downplaying all of the incredibly useful programs, like fighting infectious disease, education, and nutrition programs.
Another part of the rhetoric, which is why many of his supporters back the move, is that the US should not be spending money in other countries while we have problems here at home. This ignores reality – fully 50% of the US budget is for welfare, including social security, medicare, medicaid, and all other welfare programs. Around 1% (it varies year-to-year) goes to USAID. It is not as if we cannot afford welfare programs in the US because of our foreign aid. It’s just a ridiculous and narrow-minded point. If you want a more robust safety net, then that is what you should vote for and lobby your representatives for, at the state and federal level. But foreign aid is not the problem.
Further, foreign aid should be thought of as an investment, not an expense. Again – that is part of the point of having it under the direction of the State Department. USAID can help to prevent conflicts, that would be even more costly to the US. They can reduce the risk of deadly infectious diseases coming to our shores. Do you want to compare the total cost of COVID to the US economy to the cost of USAID? This is obviously a difficult number to come by, but by one estimate COVID-19 cost the US economy $14 trillion. That is enough to fund USAID at 2023 levels for 350 years. So if USAID prevents one COVID-like pandemic every century or so, it is more than worth it. More likely, however, it will reduce the deadliness of common infectious illnesses, like HIV and tuberculosis.
Even if you can make a case to reduce our aid to help the world’s poor, doing so in a sudden and chaotic fashion, without warning, is beyond reckless. Stopping drug programs is a great way to breed resistance. Food and drugs are sitting in storage and cannot be dispersed because funding has been cut off. It’s hard to defend this as a way to reduce waste. The harm that will be created is hard to calculate. It’s also a great way to evaporate 60 years of American soft power in a matter of weeks.
I am open to any cogent and logical argument to defend these actions. I have not seen or heard one, despite looking. Feel free to make your case in the comments if you think this was anything but heartless, ignorant, and reckless.
The post Cutting to the Bone first appeared on NeuroLogica Blog.
If you think about the human hand as a work of engineering, it is absolutely incredible. The level of fine motor control is extreme. It is responsive and precise. It has robust sensory feedback. It combines both rigid and soft components, so that it is able to grip and lift heavy objects and also cradle and manipulate soft or delicate objects. Trying to replicate this functionality with modern robotics have been challenging, to say the least. But engineers are making steady incremental progress.
I like to check it on how the technology is developing, especially when there appears to be a significant advance. There are two basic applications for robotic hands – for robots and for prosthetics for people who have lost their hand to disease or injury. For the latter we need not only advances in the robotics of the hand itself, but also in the brain-machine interface that controls the hand. Over the years we have seen improvements in this control, using implanted brain electrodes, scalp surface electrodes, and muscle electrodes.
We have also seen the incorporation of sensory feedback, which greatly enhances control. Without this feedback, users have to look at the limb they are trying to control. With sensory feedback, they don’t have to look at it, overall control is enhanced, and the robotic limb feels much more natural. Another recent addition to this technology has been the incorporation of AI, to enhance the learning of the system during training. The software that translates the electrical signals from the user into desired robotic movements is much faster and more accurate than without AI algorithms.
A team at Johns Hopkins is trying to take the robotic hand to the next level – A natural biomimetic prosthetic hand with neuromorphic tactile sensing for precise and compliant grasping. They are specifically trying to mimic a human hand, which is a good approach. Why second-guess millions of years of evolutionary tinkering? They call their system a “hybrid” robotic hand because it incorporates both rigid and soft components. Robotic hands with rigid parts can be strong, but have difficulty handling soft or delicate objects. Hands made of soft parts are good for soft objects, but tend to be weak. The hybrid approach makes sense, and mimics a human hand with internal bones covered in muscles and then soft skin.
The other advance was to incorporate three independent layers of sensation. This also more closely mimics a human hand, which has both superficial and deep sensory receptors. This is necessary to distinguish what kind of object is being held, and to detect things like the object slipping in the grip. In humans, for example, one of the symptoms of carpal tunnel syndrome, which can impair sensation to the first four fingers of the hands, is that people will drop objects they are holding. With diminished sensory feedback, they don’t maintain the muscle tone necessary to maintain their grip on the object.
Similarly, prosthetics benefit from sensory feedback to control how much pressure to apply to a held object. They have to grip tightly enough to keep it from slipping, but not so tight that they crush or break the object. This means that the robotic limb needs to be able to detect the weight and firmness of the object it is holding. Having different layers of sensation allows for this. The superficial layer detects touch, while the progressively deeper layers will be activated with increasing grip strength. AI is also used to help interpret these signals, which in turn stimulate the users nerves to provide them with natural-feeling sensory feedback.
They report:
“Our innovative design capitalizes on the strengths of both soft and rigid robots, enabling the hybrid robotic hand to compliantly grasp numerous everyday objects of varying surface textures, weight, and compliance while differentiating them with 99.69% average classification accuracy. The hybrid robotic hand with multilayered tactile sensing achieved 98.38% average classification accuracy in a texture discrimination task, surpassing soft robotic and rigid prosthetic fingers. Controlled via electromyography, our transformative prosthetic hand allows individuals with upper-limb loss to grasp compliant objects with precise surface texture detection.”
Moving forward they plan to increase the number of sensory layers and to tweak the hybrid structure of soft and rigid components to more closely mimic a human hand. They also plan to incorporate more industrial-grade materials. The goal is to create a robotic prosthetic hand that can mimic the versatility and dexterity of a human hand, or at least come as close as possible.
Combined with advances in brain-machine interface technology and AI control, robotic prosthetic limb technology is rapidly progressing. It’s pretty exciting to watch.
The post Hybrid Bionic Hand first appeared on NeuroLogica Blog.
For my entire career as a neurologist, spanning three decades, I have been hearing about various kinds of stem cell therapy for Parkinson’s Disease (PD). Now a Phase I clinical trial is under way studying the latest stem cell technology, autologous induced pluripotent stem cells, for this purpose. This history of cell therapy for PD tells us a lot about the potential and challenges of stem cell therapy.
PD has always been an early target for stem cell therapy because of the nature of the disease. It is caused by degeneration in a specific population of neurons in the brain – dopamine neurons in the substantial nigra pars compacta (SNpc). These neurons are part of the basal ganglia circuitry, which make up the extrapyramidal system. What this part of the brain does, essentially, is to modulate voluntary movement. One way to think about it is that is modulates the gain of the connection between the desire the move and the resulting movement – it facilitates movement. This circuitry is also involved in reward behaviors.
When neurons in the SNpc are lost the basal ganglia is less able to facilitate movement; the gain is turned down. Patients with PD become hypokinetic – they move less. It becomes harder to move. They need more of a will to move in order to initiate movement. In the end stage, patients with PD can become “frozen”.
The primary treatment for PD is dopamine or a dopamine agonist. Sinemet, which contains L-dopa, a precursor to dopamine, is one mainstay treatment. The L-dopa gets transported into the brain where it is made into dopamine. These treatments work as long as there are some SNpc neurons left to convert the L-dopa and secrete the dopamine. There are also drugs that enhance dopamine function or are direct dopamine agonists. Other drugs are cholinergic inhibitors, as acetylcholine tends to oppose the action of dopamine in the basal ganglia circuits. These drugs all have side effects because dopamine and acetylcholine are used elsewhere in the brain. Also, without the SNpc neurons to buffer the dopamine, end-stage patients with PD go through highly variable symptoms based upon the moment-to-moment drug levels in their blood. They become hyperkinetic, then have a brief sweet-spot, and then hypokinetic, and then repeat that cycle with the next dose.
The fact that PD is the result of a specific population of neurons making a specific neurotransmitter makes it an attractive target for cell therapy. All we need to do is increase the number of dopamine neurons in the SNpc and that can treat, and even potentially cure, PD. The first cell transplant for PD was in 1987, in Sweden. These were fetal-derived dopamine producing neurons. There treatments were successful, but they are not a cure for PD. The cells release dopamine but they are not connected to the basal ganglia circuitry, so they are not regulating the release of dopamine in a feedback circuit. In essence, therefore, these were just a drug-delivery system. At best they produced the same effect as best pre-operative medication management. In fact, the treatment only works in patients who respond to L-dopa given orally. The transplants just replace the need for medication, and make it easier to maintain a high level of control.
They also have a lot of challenges. How long do the transplanted cells survive in the brain? What are the risks of the surgery. Is immunosuppressive treatment needed. And where do we get the cells from. The only source that worked was human ventral mesencephalic dopamine neurons from recent voluntary abortions. This limited the supply, and also created regulatory issues, being banned at various times. Attempts at using animal derived cells failed, as did using adrenal cells from the patient.
Therefore, when the technology developed to produce stem cells from the patient’s own cells, it was inevitable that this would be tried in PD. These are typically fibroblasts that are altered to turn them into pluripotent stem cells, which are then induced to form into dopamine producing neurons. This eliminates the need for immunosuppression, and avoid any ethical or legal issues with harvesting. PD would seem like the low hanging fruit for autologous stem cell therapy.
But – it has run up against the issues that we have generally encountered with this technology, which is why you may have first heard of this idea in the early 2000s and here in 2025 we are just seeing a phase I clinical trial. One problem is getting the cells to survive for long enough to make the whole procedure worthwhile. The cells not only need to survive, they need to thrive, and to produce dopamine. This part we can do, and while this remains an issue for any new therapy, this is generally not the limiting factor.
Of greater concern is how to keep the cells from thriving too much – from forming a tumor. There is a reason our bodies are not already flush with stem cells, ready to repair any damage, rejuvenate any effects of aging, and replace any exhausted cells. It’s because they tend to form tumors and cancer. So we have just as many stem cells as we need, and no more. What we “need” is an evolutionary calculation, and not what we might desire. Our experience with stem cell therapy has taught us the wisdom of evolution – stem cells are a double-edged sword.
Finally, it is especially difficult to get stem cells in the brain to make meaningful connections and participate in brain circuitry. I just attended a grand round on stem cells for stroke, and there they are having the same issue. However, stem cells can still be helpful, because they can improve the local environment, allowing native neurons to survive and function better. With PD we are again back to – the stem cells are a great dopamine delivery system, but they don’t fix the broken circuitry.
There is still the hope (but it is mainly a hope at this point) that we will be able to get these stem cells to actually replace lost brain cells, but we have not achieved that goal yet. Some researchers I have spoken to have given up on that approach. They are focusing on using stem cells as a therapy, not a cure – as a way to deliver treatments and improve the environment, to support neurons and brain function, but without the plan to replace neurons in functional circuits.
But the allure of curing neurological disease by transplanting new neurons into the brain to actually fix brain circuits is simply too great to give up entirely. Research will continue to push in this direction (and you can be sure that every mainstream news report about this research will focus on this potential of the treatment). We may just need some basic science breakthrough to figure out how to get stem cells to make meaningful connections, and breakthroughs are hard to predict. We had hoped they would just do it automatically, but apparently they don’t. In the meantime, stem cells are still a very useful treatment modality, just more for support than replacement.
The post Stem Cells for Parkinson’s Disease first appeared on NeuroLogica Blog.
In 2006 (yes, it was that long ago – yikes) the International Astronomical Union (IAU) officially adopted the definition of dwarf planet – they are large enough for their gravity to pull themselves into a sphere, they orbit the sun and not another larger body, but they don’t gravitationally dominate their orbit. That last criterion is what separates planets (which do dominate their orbit) from dwarf planets. Famously, this causes Pluto to be “downgraded” from a planet to a dwarf planet. Four other objects also met criteria for dwarf planet – Ceres in the asteroid belt, and three Kuiper belt objects, Makemake, Haumea, and Eris.
The new designation of dwarf planet came soon after the discovery of Sedna, a trans-Neptunian object that could meet the old definition of planet. It was, in fact, often reported at the time as the discovery of a 10th planet. But astronomers feared that there were dozens or even hundreds of similar trans-Neptunian objects, and they thought it was messy to have so many planets in our solar system. That is why they came up with the whole idea of dwarf planets. Pluto was just caught in the crossfire – in order to keep Sedna and its ilk from being planets, Pluto had to be demoted as well. As a sort-of consolation, dwarf planets that were also trans-Neptunian objects were named “plutoids”. All dwarf planets are plutoids, except Ceres, which is in the asteroid belt between Mars and Jupiter.
So here we are, two decades later, and I can’t help wondering – where are all the dwarf planets? Where are all the trans-Neptunian objects that astronomers feared would have to be classified as planets that the dwarf planet category was specifically created for? I really thought that by now we would have a dozen or more official dwarf planets. What’s happening? As far as I can tell there are two reasons we are still stuck with only the original five dwarf planets.
One is simply that (even after two decades) candidate dwarf planets have not yet been confirmed with adequate observations. We need to determine their orbit, their shape, and (related to their shape) their size. Sedna is still considered a “candidate” dwarf planet, although most astronomers believe it is an actual dwarf planet and will eventually be confirmed. Until then it is officially considered a trans-Neptunian object. There is also Gonggong, Quaoar, and Orcus which are high probability candidates, and a borderline candidate, Salacia. So there are at least nine, and possibly ten, known likely dwarf planets, but only the original five are confirmed. I guess it is harder to observe these objects than I assumed.
But I have also come across a second reason we have not expanded the official list of dwarf planets. Apparently there is another criterion for plutoids (dwarf planets that are also trans-Neptunian objects) – they have to have an absolute magnitude less than +1 (the smaller the magnitude the brighter the object). Absolute magnitude means how bright an object actually is, not it’s apparent brightness as viewed from the Earth. Absolute magnitude for planets is essentially the result of two factors – size and albedo. For stars, absolute magnitude is the brightness as observed from 10 parsecs away. For solar system bodies, the absolute magnitude is the brightness if the object were one AU from the sun and the observer.
What this means is that astronomers have to determine the absolute magnitude of a trans-Neptunian object before they can officially declare it a dwarf planet. This also means that trans-Neptunian objects that are made of dark material, even if they are large and spherical, may also fail the dwarf planet criteria. Some astronomers are already proposing that this absolute magnitude criterion be replaced by a size criterion – something like 200 km in diameter.
It seems like the dwarf planet designation needs to be revisited. Currently, the James Webb Space Telescope is being used to observe trans-Neptunian objects. Hopefully this means we will have some confirmations soon. Poor Sedna, whose discovery in 2003 set off the whole dwarf planet thing, still has not yet been confirmed.
The post Where Are All the Dwarf Planets? first appeared on NeuroLogica Blog.
Remember CRISPR (clustered regularly interspaced short palindromic repeats) – that new gene-editing system which is faster and cheaper than anything that came before it? CRISPR is derived from bacterial systems which uses guide RNA to target a specific sequence on a DNA strand. It is coupled with a Cas (CRISPR Associated) protein which can do things like cleave the DNA at the targeted location. We are really just at the beginning of exploring the potential of this new system, in both research and therapeutics.
Well – we may already have something better than CSRISP: TIGR-Tas. This is also an RNA-based system for targeting specific sequences of DNA and delivering a TIGR associated protein to perform a specific function. TIGR (Tandem Interspaced Guide RNA) may have some useful advantages of CRISPR.
As presented in a new paper, TIGR is actually a family of gene editing systems. It was discovered not by happy accident, but by specifically looking for it. As the paper details: “through iterative structural and sequence homology-based mining starting with a guide RNA-interaction domain of Cas9”. This means they started with Cas9 and then trolled through the vast database of phage and parasitic bacteria for similar sequences. They found what they were looking for – another family of RNA-guided gene editing systems.
Like CRISPR, TIGR is an RNA guided system, and has a modular structure. Different Tas proteins can be coupled with the TIGR to perform different actions at the targeted site. But there are several potential advantages for TIGR over CRISPR. Like CRISPR it is RNA guided, but TIGR uses both strands of the DNA to find its target sequence. This “dual guided” approach may lead to fewer off-target errors. While CRISPR works very well, there is a trade-off in CRISPR systems between speed and precision. The faster it works, the greater the number of off-target actions – like cleaving the DNA in the wrong place. The hope is that TIGR will make fewer off-target mistakes because of better targeting.
TIGR also has “PAM-Independent targeting”. What does that mean? PAM stands for protospacer adjacent motifs – these are short DNA sequences, about 6 base pairs, that exist next to the sequence that his being targeted by CRISPR. The Cas9 protease will not function without the PAM. It appears to have evolved so that the bacteria using CRISPR as an adaptive immune system can tell self from non-self, as invading bacteria or viruses will have the PAM sequences, but the native DNA will not. The end result is that CRISPR needs PAM sequences in order to function, but the TIGR system does not. This makes the TIGR system much more versatile.
I saved what is potentially the best advantage for last – Tas proteins are much smaller than Cas proteins, about a quarter of the size. At first this might not seem like a huge advantage, but for some applications it is. One of the main limiting factors for using CRISPR therapeutically is getting the CRISPR-Cas complex into human cells. There are several available approaches – physical methods like direct injection, chemical methods, and viral vectors. Specific methods, however, generally have a size limit on the package they can deliver into a cell. Adeno-associated vectors (AAVs) for example have lots of advantaged but only can deliver relatively small payloads. Having a much more compact gene-editing system, therefore, is a huge potential advantage.
When it comes to therapeutics, the delivery system is perhaps the greater limiting factor than the gene targeting and editing system itself. There are currently two FDA indications for CRISPR-based therapies, both for blood disorders (sickle cell and thalassemia). For these disorders bone marrow can be removed from the patient, CRISPR is then applied to make the desired genetic changes, and then the bone marrow is transplanted back into the patient. In essence, we bring the cells to the CRISPR rather than the CRISPR to the cells. But how do we deliver CRISPR to a cell population within a living adult human?
We use the methods I listed above, such as the AAVs, but these all have limitations. Having a smaller package to deliver, however, will greatly expand our options.
The world of genetic engineering is moving incredibly fast. We are taking advantage of the fact that nature has already tinkered with these systems for hundreds of millions of years. There are likely more systems and variations out there for us to find. But already we have powerful tools to make precise edits of DNA at targeted locations, and TIGR just adds to our toolkit.
The post The New TIGR-Tas Gene Editing System first appeared on NeuroLogica Blog.
Small nuclear reactors have been around since the 1950s. They mostly have been used in military ships, like aircraft carriers and submarines. They have the specific advantage that such ships could remain at sea for long periods of time without needing to refuel. But small modular reactors have never taken off as a source of grid energy. The prevailing opinion for why this is seems to be that they are simply not cost effective. Larger reactors, which are already expensive endeavors, produce more megawatts per dollar. SMRs are simply too cost inefficient.
This is unfortunate because they have a lot of advantages. Their initial investment is smaller, even though the cost per unit energy is more. They are safe and reliable. They have a small footprint. And they are scalable. The military uses them because the strategic advantages are worth the higher cost. Some argue that the zero carbon on demand energy they provide is worth the higher cost, and I think this is a solid argument. Also there are continued attempts to develop the technology to bring down the cost. Arguably it may be worth subsidizing the SMR industry so that the technology can be developed to greater cost effectiveness. Decarbonizing the energy sector is worth the investment.
But there is another question – are there civilian applications that would also justify the higher cost per unit energy? I have recently encountered two that are interesting. The first is a direct extension of the military use – using an SMR to power a cargo ship. South Korean company, HD Korea Shipbuilding & Offshore Engineering, has revealed their designs for an SMR powered cargo ship, and has received “approval in principle”. Obviously this is just the beginning phase – they need to actually develop the design and get full approval. But the concept is compelling.
The SMR has a smaller footprint overall than a traditional combustion engine. They do not need space for an exhaust system or for fuel tanks. This saved space can be used for extra cargo – and that extra cargo offsets the higher cost of the SMR. The calculus here is different – you don’t have to compare an SMR to every other form of grid power, including gigawatt scale nuclear. You only have to compare it to other forms of cargo ship propulsion. You have to look at the overall cost effectiveness of the cargo delivery system, not just the production of watts. As an aside, the company is also planning on incorporating a “supercritical carbon dioxide-based propulsion system”, which is about 5% more efficient than traditional steam-based propulsion system.
Shipping accounts for about 3% of global greenhouse gas emissions. Decarbonizing this sector therefore will be critical for getting close to net zero.
The second potential civilian application is for powering datacenters. Swiss company, Deep Atomic, is developing an SMR that is purpose-built for large data centers, again by leveraging advantages specific to one application. Their design provides not only 60 MWe of power, but 60 MW worth of cooling. Apparently is can use its waste heat to power cooling systems for a data center. The SMR design is also meant to be located right next to the data center, even close to urban centers. The company also hopes to produce these SMR in a factory to help bring down construction costs.
Right now this is just a design, and not a reality, but it’s the idea that’s interesting. Instead of thinking of SMRs as just another method of providing power to the grid, they are being reimagined as being optimized for a specific purpose, which could possibly allow them to gain that extra efficiency to make them cost effective. Data centers, which are increasingly critical to our digital world, are very energy hungry. You can no longer just plug them into the existing grid and expect to get all the energy you need. Right now there is no regulatory requirement for data centers to provide their own energy. In late 2024, Energy Secretary Jennifer Granholm “urged” AI companies to provide their own green energy to power their data centers. Many have responded with plans to do that. But it would not be unreasonable to require them to do so.
Without a plan to power data centers their growing energy demand is not sustainable. This could also completely wipe out any progress we make at trying to decarbonize energy production, as new demand will equal or outstrip any green energy production. This is what has been happening so far. This is another reason why we absolutely need nuclear power if we are going to meet our carbon goals.
There is also the hope that these niche applications of SMRs will bootstrap the entire industry. Making SMRs for ships and data centers could create an economy of scale that brings down the cost of SMRs overall, making them viable for more and more applications.
The post Are Small Modular Reactors Finally Coming? first appeared on NeuroLogica Blog.
The flying car is an icon of futuristic technology – in more ways than one. This is partly why I can’t resist a good flying car story. I was recently sent this YouTube video on the Alef flying car. The company says his is a street-legal flying car, with vertical take off and landing. They also demonstrate that they have tested this vehicle in urban environments. They are available now for pre-order (estimated price, $300k). The company claims: “Alef will deliver a safe, affordable vehicle to transform your everyday commute.” The claim sounds reminiscent of claims made for the Segway (which recently went defunct).
The flying car has a long history as a promise of future technology. As a technology buff, nerd, and sci-fi fan, I have been fascinated with them my entire life. I have also seen countless prototype flying cars come and go, an endless progression of overhyped promises that have never delivered. I try not to let this make my cynical – but I am cautious and skeptical. I even wrote an entire book about the foibles of predicting future technology, in which flying cars featured prominently.
So of course I met the claims for the Alef flying car with a fair degree of skepticism – which has proven entirely justified. First I will say that the Alef flying car does appear to function as a car and can fly like a drone. But I immediately noticed in the video that as a car, it does not go terribly fast. You have to do some digging, but I found the technical specs which say that it has a maximum road speed of 25 MPH. It also claims a road range of 200 miles, and an air range of 110 miles. It is an EV with a gas motor to extend battery life in flight, with eight electric motors and eight propellers. It is also single passenger. It’s basically a drone with a frame shaped like a car with tires and weak motors – a drone that can taxi on roads.
It’s a good illustration of the inherent hurdles to a fully-realized flying car of our dreams, mostly rooted in the laws of physics. But before I go there, as is, can this be a useful vehicle? I suppose, for very specific applications. It is being marketed as a commuter car, which makes sense, as it is single passenger (this is no family car). The limited range also makes it suited to commuting (average daily commutes in the US is around 42 miles).
That 25 MPH limit, however, seems like a killer. You can’t drive this thing on the highway, or on many roads, in fact. But, trying to be as charitable as possible, that may be adequate for congested city driving. It is also useful for pulling the vehicle out of the garage into a space with no overhead obstruction. Then you would essentially fly to your destination, land in a suitable location, and then drive to your parking space. If you are only driving into the parking garage, the 25 MPH is fine. So again – it’s really a drone that can taxi on public roads.
The company claims the vehicle is safe, and that seems plausible. Computer aided drone control is fairly advanced now, and AI is only making it better. The real question is – would you need a pilot’s license to fly it? How much training would be involved? And what are the weather conditions in which it is safe to fly? Where you live, what percentage of days would the drone car be safe to fly, and how easy would it be to be stuck at work (or need to take an Uber) because the weather unexpectedly turned for the worse? And if you are avoiding even the potential of bad weather, how much further does this restrict your flying days?
There are obviously lots of regulatory issues as well. Will cities allow the vehicles to be flying overhead. What happens if they become popular and we see a significant increase in their use? How will air traffic be managed. If widely adopted, we will see then what their real safety statistics are. How many people will fly into power lines, etc.?
What all this means is that a vehicle like this may be great as “James Bond” technology. This means, if you are the only one with the tech, and you don’t have to worry about regulations (because you’re a spy), it may help you get away from the bad guys, or quickly cross a city frozen with grid lock. (Let’s face it, you can totally see James Bond in this thing.) But as a widely adopted technology, there are significant issues.
For me the bottom line is that this technology is a great proof-of-concept, and I welcome anything that incrementally advances the technology. It may also find a niche somewhere, but I don’t think this will become the Tesla of flying cars, or that this will transform city commuting. It does help demonstrate where the technology is. We are seeing the benefits of improving battery technology, and improving drone technology. But is this the promised “flying car”? I think the answer is still no.
For me a true flying car functions fully as a car and as a flying conveyance. What we often see are planes that can drive on the road, and now drones that can drive on the road. But they are not really cars, or they are terrible cars. You would never drive the Alef flying car as a car – again, at most you would taxi it to and from its parking space.
What will it take to have a true flying car? I do think the drone approach is much better than the plane approach, or jet-pack approach. Drone technology is definitely the way to go. Before it is practical, however, we need better battery tech. The Alef uses lithium-ion batteries and lithium polymer batteries. Perhaps eventually they will use the silicone anode lithium batteries, which have a higher energy density. But we may need to see the availability of batteries with triple or more current lithium ion batteries before flying drone cars will be a practical reality. But we can feasibly get there.
Perhaps, however, the “flying car” is just a futuristic pipe dream. We do have to consider that if the concept is valid, or are we just committing a “futurism fallacy” by projecting current technology into the future. We don’t necessarily have to do things in the same way, with just better technology. The thought process is – I use my car for transportation, wouldn’t it be great if my car could fly. Perhaps the trade-offs of making a single vehicle that is both a good car and a good drone are just not worth it. Perhaps we should just make the best drone possible for human transportation and specific applications. We may need to develop some infrastructure to accommodate them.
In a city there may be other combinations of travel that work better. You may take a e-scooter to the drone, or some form of public transportation. Then a drone can take you across the city, or across a geological obstacle. Personal drones may be used for commuting, but then you may have a specific pad at your home and another at work for landing. That seems easier than designing a drone-car just to drive 30 feet to the take off location.
If we go far enough into the future, where technology is much more advanced (like batteries with 10 times the energy density of current tech), then flying cars may eventually become practical. But even then there may be little reason to choose that tradeoff.
The post The Alef Flying Car first appeared on NeuroLogica Blog.
I am fascinated by the technologies that live largely behind the scenes. These are not generally consumer devices, but they may be components of consumer products, or may largely have a role in industry – but they make our modern world possible, or make it much better. In addition I think that material science is largely underrated in terms of popular appeal, but it is material science that often make all other technologies possible or feasible. There is another aspect of technology that I have been increasingly interested in – solid state technology. These are, generally speaking, devices that use electricity rather than moving parts. You are likely familiar with solid state drives, that do not have spinning discs and therefore are smaller, use less power, and last longer. One big advantage of electric vehicles is that they are largely solid state, without the moving parts of an engine.
There is a technology that combines all three of these features – it is a component technology, dependent on material science, and solid state: thermoelectric devices. This may not sound sexy, but bear with me, this is cool (pun intended) technology. Thermoelectric materials are those that convert electricity into a temperature difference across a material, or convert a temperature difference into electricity. In reality, everything is a thermoelectric material, but most materials have insignificant thermoelectric effects (so are functionally not thermoelectric).
Thermoelectric devices can be used to harvest energy, from any temperature difference. These are generally not large amounts of energy – we don’t have thermoelectric power plants connected to the grid – and they are currently not practical and cost effective enough for a large scale. This may be possible in the future, but not today. However, for applications that require small amounts of energy, harvesting that energy from ambient sources like small temperature differences is feasible.
There are likely many more applications for the reverse – using electricity to cause temperature changes. This is basically a refrigerator, and in fact y0u can buy small solid state thermoelectric refrigerators. A traditional refrigerator uses a compressor and a refrigerant. This is a liquid that turns into a gas at low temperature, absorbing heat when it transitions to gas and then letting off heat when it transitions back to liquid. But this requires a compressor with moving parts and pipes to carry the refrigerant. Refrigerants are also not good for the environment or the ozone. Thermoelectric coolers can be smaller, use less electricity, are quiet, and have more precise temperature control. But their size is limited because they are not powerful enough for full-sized refrigerators.
As an aside, I see that Samsung is coming out this year with a hybrid full-size refrigerator. I still uses a compressor, but also has a thermoelectric cooler to reduce temperature variation throughout the refrigerator.
Thermoelectric cooling is also useful for electronics, which having an increasing problem with heat dissipation as we make them smaller, more compact, and more powerful. Heat management is now a major limiting factor for high end computer chips. This is also a major limiting factor for bio-electronics – implanting chips in people for various potential applications. Having a small and efficient solid state cooling device that just requires electricity would enable this technology.
But – the current state of the art for thermoelectric cooling is limited. Devices have low overall efficiency, and their manufacture is expensive and generates a lot of waste. In other words – there is a huge opportunity to improve this technology with massive and far ranging potential benefits. This is an area ripe for investment with clear benefits. This can also be a significant component of our current overall goal to electrify our technology – to accomplish with electricity what currently requires moving parts and fossil fuels.
All this is why I was very interested in this latest advance – Interfacial bonding enhances thermoelectric cooling in 3D-printed materials. This incorporates yet another technology that has my interest – 3D printing, or additive manufacturing. This does not represent an improvement in the thermoelectric devices themselves, but an improvement in the cost and efficiency of making them (again, and often neglected by very important aspect of any technology). As one of the authors says:
“With our present work, we can 3D print exactly the needed shape of thermoelectric materials. In addition, the resulting devices exhibit a net cooling effect of 50 degrees in the air. This means that our 3D-printed materials perform similarly to ones that are significantly more expensive to manufacture,” says Xu.”
The innovation has to do with the molecular bonding of the materials in the 3D printing process. As Xu says, the performance is the same as existing materials, but with much lower cost to manufacture. As always, shifting to a new technology often means that there is room for further incremental advances to make the advantages even better over time. It may take years for this technology to translate to the market, but it is very possible it may lead directly to a slew of new products and applications.
It may seem like a small thing, but I am looking forward to a future (hopefully not too distant) with full-sized thermoelectric refrigerators, and with computers that don’t need fans or water cooling. Having a silent computer without fans is nice for podcasting, which I know is a particular interest of mine, but is also increasingly common.
In general, quality of life will be better if we are surrounded by technology that is silent, small, efficient, cost-effective, and long-lasting. Thermoelectric cooling can make all of that increasingly possible.
The post Thermoelectric Cooling – It’s Cooler Than You Think first appeared on NeuroLogica Blog.
The evolution of the human brain is a fascinating subject. The brain is arguably the most complex structure in the known (to us) universe, and is the feature that makes humanity unique and has allowed us to dominate (for good or ill) the fate of this planet. But of course we are but a twig on a vast evolutionary tree, replete with complex brains. From a human-centric perspective, the closer groups are to humans evolutionarily, the more complex their brains (generally speaking). Apes are the most “encephalized” among primates, as are the primates among mammals, and the mammals among vertebrates. This makes evolutionary sense – that the biggest and most complex brains would evolve from the group with the biggest and most complex brains.
But this evolutionary perspective can be tricky. We can’t confuse looking back through evolutionary time with looking across the landscape of extant species. Any species alive today has just as much evolutionary history behind them as humans. Their brains did not stop evolving once their branch split off from the one that lead to humans. There are therefore some groups which have complex brains because they are evolutionarily close to humans, and their brains have a lot of homology with humans. But there are also other groups that have complex brains because they evolved them completely independently, after their group split from ours. Cetaceans such as whales and dolphins come to mind. They have big brains, but their brains are organized somewhat differently from primates.
Another group that is often considered to be highly intelligent, independent from primates, is birds. Birds are still vertebrates, and in fact they are amniotes, the group that contains reptiles, birds, and mammals. It is still an open question as to exactly how much of the human brain architecture was present at the last common ancestor of all amniotes (and is therefore homologous) and how much evolved later independently. To explore this question we need to look at not only the anatomy of brains and the networks within them, but brain cell types and their genetic origins. For example, even structures that currently look very different can retain evidence of common ancestry if they are built with the same genes. Or – structures that look similar may be built with different genes, and are therefore evolutionarily independent, or analogous.
With that background, we now have a publication of several research projects examining the brain of various amniotes – Evolutionary convergence of sensory circuits in the pallium of amniotes. The pallium is basically the cerebral cortex – the layers of gray and white matter that sit on top of the cerebrum. This is the “advanced” part of the brain in vertebrates, which include the neocortex in humans. When comparing the pallium of reptiles, birds, and mammals, what did they find?
“Their neurons are born in different locations and developmental times in each species,” explains Dr. García-Moreno, head of the Brain Development and Evolution laboratory, “indicating that they are not comparable neurons derived from a common ancestor.”
Time and location during development is a big clue as to the evolutionary source of different cells and structure. Genes are another way to determine evolutionary source, so a separate analysis looked at the genes that are activated when forming the pallium of these different groups. It turns out – they use very different assemblages of genes in developing the neurons of the pallium. All this strongly suggests that extant reptiles, birds, and mammals evolved similar brain structures independently after they split apart as groups. They use different neuron type derived from different genes, which means those neurons evolved from different ancestor cell types.
To do this analysis they looked at hundreds of genes and cell types across species, creating an atlas of brain cells, and then did (of course) a computer analysis:
“We were able to describe the hundreds of genes that each type of neuron uses in these brains, cell by cell, and compare them with bioinformatics tools.” The results show that birds have retained most inhibitory neurons present in all other vertebrates for hundreds of millions of years. However, their excitatory neurons, responsible for transmitting information in the pallium, have evolved in a unique way. Only a few neuronal types in the avian brain were identified with genetic profiles similar to those found in mammals, such as the claustrum and the hippocampus, suggesting that some neurons are very ancient and shared across species. “However, most excitatory neurons have evolved in new and different ways in each species,” details Dr. García-Moreno.
Convergent evolution like this occurs because nature finds similar solutions to the same problem. But if they evolved independently, the tiny details (like the genes they are built from) will differ. But also, a similar solution is not an identical solution. This means that bird brains are likely to be different in important ways from mammalian brains. They have a different type of intelligence that mammals, primates, and humans do (just like dolphins have a different type of intelligence).
This is the aspect of this research that fascinates me the most – how is our view of reality affected by the quirky of our neurological evolution? Our view of reality is mostly a constructed neurological illusion (albeit a useful illusion). It is probable that chimpanzees see the world in a very similar way to humans, as their brains diverged only recently from our own. But the reality that dolphin or crow brains construct might be vastly different than our own.
There are “intelligent” creatures on Earth that diverge even more from the human model. Octopuses have a doughnut shaped brain that wraps around their esophagus, with many of the neurons also distributed in their tentacles. They have as many neurons as a dog, but they are far more distributed. Their tentacles have some capacity for independent neurological function (if you want to call that “thought”). It is highly likely that the experience of reality of an octopus is extremely different than any mammal.
This line of thinking always leads me to ponder – what might the intelligence of an alien species be like? In science fiction it is a common story-telling contrivance that aliens are remarkably humanoid, not just in their body plan but in their intelligence. They mostly have not only human-level intelligence, but a recognizably human type of intelligence. I think it is far more likely that any alien intelligence, even one capable of technology, would be different from human intelligence in ways difficult (and perhaps impossible) for us to contemplate.
There are some sci fi stories that explore this idea, like Arrival, and I usually find them very good. But still I think fiction is just scratching the surface of this idea. I understand why this is – it’s hard to tell a story with aliens when we cannot even interface with them intellectually – unless that fact is part of the story itself. But still, there is a lot of space to explore aliens that are human enough to have a meaningful interaction, but different enough to feel neurologically alien. There are likely some constants to hold onto, such as pleasure and pain, and self-preservation. But even exploring that idea – what would be the constants, and what can vary, is fascinating.
This all relates to another idea I try to emphasize whenever relevant – we are our neurology. Our identity and experience is the firing of patterns of neurons in our brains, and it is a uniquely constructed experience.
The post Birds Separately Evolved Complex Brains first appeared on NeuroLogica Blog.
My younger self, seeing that title – AI Powered Bionic Arm – would definitely feel as if the future had arrived, and in many ways it has. This is not the bionic arm of the 1970s TV show, however. That level of tech is probably closer to the 2070s than the 1970s. But we are still making impressive advances in brain-machine interface technology and robotics, to the point that we can replace missing limbs with serviceable robotic replacements.
In this video Sarah De Lagarde discusses her experience as the first person with an AI powered bionic arm. This represents a nice advance in this technology, and we are just scratching the surface. Let’s review where we are with this technology and how artificial intelligence can play an important role.
There are different ways to control robotics – you can have preprogrammed movements (with or without sensory feedback), AI can control the movements in real time, you can have a human operator, through some kind of interface including motion capture, or you can use a brain-machine interface of some sort. For robotic prosthetic limbs obviously the user needs to be able to control them in real time, and we want that experience to feel as natural as possible.
The options for robotic prosthetics include direct connection to the brain, which can be from a variety of electrodes. They can be deep brain electrodes, brain surface, scalp surface, or even stents inside the veins of the brain (stentrodes). All have their advantages and disadvantages. Brain surface and deep brain have the best resolution, but they are the most invasive. Scalp surface is the least invasive, but has the lowest resolution. Stentrodes may, for now, be the best compromise, until we develop more biocompatible and durable brain electrodes.
You can also control a robotic prosthetic without a direct brain connection, using surviving muscles as the interface. That is the method used in De Lagarde’s prosthetic. The advantage here is that you don’t need wires in the brain. Electrodes from the robotic limb connect to existing muscles which the user can contract voluntarily. The muscles themselves are not moving anything, but they generate a sizable electrical impulse which can activate the robotic limb. The user then has to learn to control the robotic limb by activating different sequences of muscle contractions.
At first this method of control requires a lot of concentration. I think a good analogy, one used by De Lagarde, is to think of controlling a virtual character in a video game. At first, you need to concentrate on the correct sequence of keys to hit to get the character to do what you want. But after a while you don’t have to think about the keystrokes. You just think about what you want the character to do and your fingers automatically (it seems) go to the correct keys or manipulate the mouse appropriately. The cognitive burden decreases and your control increases. This is the learning phase of controlling any robotic prosthetic.
As the technology develops researchers learned that providing sensory feedback is a huge help to this process. When the user uses the limb it can provide haptic feedback, such as vibrations, that correspond to the movement. Users report this is an extremely helpful feature. It allows for superior and more natural control, and allows them to control the limb without having to look directly at it. Sensory feedback closes the usual feedback loop of natural motor control.
And that is where the technology has gotten to, with continued incremental advances. But now we can add AI to the mix. What roll does that potentially play? As the user learns to contract the correct muscles in order to get the robotic limb to do what they want, AI connected to the limb itself can learn to recognize the user behavior and better predict what movements they want. The learning curve is now bidirectional.
De Lagarde reports that the primary benefit of the AI learning to interpret her movements better is a decrease in the lag time between her wanting to move and the robotic limb moving. At first the delay could be 10 seconds, which is forever if all you want to do is close your fist. But now the delay is imperceptible, with the limb moving essentially in real time. The limb does not feel like her natural limb. She still feels like it is a tool that she can use. But that tool is getting more and more useful and easy to use.
AI may be the perfect tool for brain-machine interface in general, and again in a bidirectional way. What AI is very good at is looking at tons of noisy data and finding patterns. This can help us interpret brain signals, even from low-res scalp electrodes, meaning that by training on the brain waves from one user an AI can learn to interpret what the brain waves mean in terms of brain activity and user intention. Further, AI can help interpret the user’s attempts at controlling a device or communicating with a BMI. This can dramatically reduce the extensive training period that BMIs often require, getting months of user training down to days. It can also improve the quality of the ultimate control achieved, and reduce the cognitive burden of the user.
We are already past the point of having usable robotic prosthetic limbs controlled by the user. The technology is also advancing nicely and quite rapidly, and AI is just providing another layer to the tech that fuels more incremental advances. It’s still hard to say how long it will take to get to the Bionic Man level of technology, but it’s easy to predict better and better artificial limbs.
The post AI Powered Bionic Arm first appeared on NeuroLogica Blog.
It’s probably not a surprise that a blog author dedicated to critical thinking and neuroscience feels that misinformation is one of the most significant threats to society, but I really to think this. Misinformation (false, misleading, or erroneous information) and disinformation (deliberately misleading information) have the ability to cause a disconnect between the public and reality. In a democracy this severs the feedback loop between voters and their representatives. In an authoritarian government it a tool of control and repression. In either case citizens cannot freely choose their representatives. This is also the problem with extreme jerrymandering – in which politicians choose their voters rather than the other way around.
Misinformation and disinformation have always existed in human society, and it is an interesting question whether or not they have increased recently and to what extent social media has amplified them. Regardless, it is useful to understand what factors contribute to susceptibility to misinformation in order to make people more resilient to it. We all benefit if the typical citizen has the ability to discern reality and identify fake news when they see it.
There has been a lot of research on this question over the years, and I have discussed it often, but it’s always useful to try to gather together years of research into a single systematic review and/or meta-analysis. It’s possible I and others may be selectively choosing or remembering parts of the research to reinforce a particular view – a problem that can be solved with a thorough analysis of all existing data. And of course I must point out that such reviews are subject to their own selection bias, but if properly done such bias should be minimal. The best case scenario is for there to be multiple systematic reviews, so I can get a sense of the consensus of those reviews, spreading out bias as much as possible in the hopes it will average out in the end.
With that in mind, there is a recent meta-analysis of studies looking at the demographics of susceptibility to misinformation. The results mostly confirm what I recall from looking at the individual studies over the years, but there are some interesting wrinkles. They looked at studies which used the news headline paradigm – having subjects answer if they think a headline is true or not, “totaling 256,337 unique choices made by 11,561 participants across 31 experiments.” That’s a good chunk of data. First, people were significantly better than chance at determining which headlines were true (68.51%) or false 67.24%). That’s better than it being a coin flip, but still, about a third of the time subjects in these studies could not tell real from fake headlines. Given the potential number of false headlines people encounter daily, this can result in massive misinformation.
What factors contributed to susceptibility to misinformation, or protected against it? One factor that many people may find surprising, but which I have seen many times over the years, is that education level alone conveyed essentially no benefit. This also aligns with the pseudoscience literature – education level (until you get to advanced science degrees) does not protect against believing pseudoscience. You might also (and I do) view this as a failure of the education system, which is supposed to be teaching critical thinking. This does not appear to be happening to any significant degree.
There were some strong predictors. People who have an analytical thinking style were more accurate on both counts – identifying true and false headlines, but with a bit of a false headline bias. This factor comes up often in the literature. An analytical thinking style also correlates with lower belief in conspiracy theories, for example. Can we teach an analytical thinking style? Yes, absolutely. People have a different inherent tendency to rely on analytical vs intuitive thinking, but almost by definition analytical thinking is a conscious deliberate act and is a skill that can be taught. Perhaps analytical thinking is the thing that schools are not teaching students but should be.
Older age also was associated with higher overall discrimination, and also with a false headline bias, meaning that their default was to be skeptical rather than believing. It’s interesting to think about the interplay between these two things – in a world with mostly false headlines, having a strong skeptical bias will lead to greater accuracy. Disbelieving becomes a good first approximation of reality. The research, as far as I can see, did not attempt to replicate reality in terms of the proportion of true to false headlines. This means that the false bias may be more or less useful in the real world than in the studies, depending on the misinformation ecosystem.
Also being a self-identified Democrat correlated with greater accuracy and also a false bias, while self-identifying as a Republican was associated with lower accuracy and a truth bias (tending to believe headlines were true). Deeply exploring why this is the case is beyond the scope of this article (this is a complex question), but let me just throw out there a couple of the main theories. One is that Republicans are already self-selected for some cognitive features, such as intuitive thinking. Another is that the current information landscape is not uniform from a partisan perspective, and is essentially selecting for people who tend to believe headlines.
Some other important factors emerged from this data. One is that a strong predictor of believing headlines was partisan alignment – people tended to believe headlines that aligned with their self-identified partisan label. This is due to “motivated reflection” (what I generally refer to as motivated reasoning). The study also confirmed something I have also encountered previously – that those with higher analytical thinking skills actually displayed more motivated reasoning when combined with partisan bias. Essentially smarter people have the potential to be better and more confident at their motivated reasoning. This is a huge reason for caution and humility – motivated reasoning is a powerful force, and being smart not only does not necessarily protect us from it, but may make it worse.
Finally, the single strongest predictor of accepting false headlines as true was familiarity. If a subject had encountered the claim previously, they were much more likely to believe it. This is perhaps the most concerning factor to come out of this review, because it means that mere repetition may be enough to get most people to accept a false reality. This has big implications for the “echochamber” effect on both mainstream and social media. If you get most of your news from one or a few ideologically aligned outlets, you essentially are allowing them to craft your perception of reality.
From all this data, what (individually and as a society) should we do about this, if anything?
First, I think we need to seriously consider how critical thinking is taught (or not taught) in schools. Real critical thinking skills need to be taught at every level and in almost every subject, but also as a separate dedicated course (perhaps combined with some basic scientific literacy and media savvy). Hey, one can dream.
The probability of doing something meaningful in terms of regulating media seems close to zero. That ship has sailed. The fairness doctrine is gone. We live in the proverbial wild west of misinformation, and this is not likely to change anytime soon. Therefore, individually, we can protect ourselves by being skeptical, working our analytical thinking skills, checking our own biases and motivated reasoning, and not relying on a few ideologically aligned sources of news. One good rule of thumb is to be especially skeptical of any news that reinforces your existing biases. But dealing with a societal problem on an individual level is always a tricky proposition.
The post Who Believes Misinformation first appeared on NeuroLogica Blog.
Designing research studies to determine what is going on inside the minds of animals is extremely challenging. The literature is littered with past studies that failed to properly control for all variables and thereby overinterpreted the results. The challenge is that we cannot read the minds of animals, and they cannot communicate directly to us using language. We have to infer what is going on in their minds from their behavior, and inference can be tricky.
One specific question is whether or not our closest ancestors have a “theory of mind”. This is the ability to think about what other creatures are thinking and feeling. Typical humans do this naturally – we know that other people have minds like our own and we can think strategically about the implications of what other people think, how to predict their behavior based upon this, and how to manipulate the thoughts of other people in order to achieve our ends.
Animal research over the last century or so has been characterized by assumptions that some cognitive ability is unique to humans, only to find that this ability exists in some animals, at least in a precursor form. This makes sense, as we have evolved from other animals, most of our abilities likely did not come out of nowhere but evolved from more basic precursors.
But it is still undeniably true that humans are unique in the animal kingdom for our sophisticated cognitive abilities. Our language, abstraction, problem solving, and technological ability is significantly advanced beyond any other animal. We therefore cannot just assume that even our closest relatives possess any specific cognitive ability that humans have, and therefore this is a rich target of research.
The specific question of whether or not our ape relatives have a theory of mind remains an open research controversy. Previous research has suggested that they might, but all of this research was designed around the question of whether or not another individual had some specific piece of knowledge. Does the subject ape know that another ape or a human knows a piece of information? This research suggests that they might, but there remains a controversy over how to interpret the results – again, what can we infer from the animal’s behavior?
A new study seeks to inform this discussion by adding another type of research – looking at whether or not a subject ape, in this case a bonobo, understands that a human researcher lacks information. This is exploring the theory of mind from the perspective of another creatures ignorance rather than their knowledge. The advantage here, from a research perspective, is that such a theory of mind would require that the bonobo simultaneously knows the relevant piece of information and that a human researcher does not know this information – that their mental map of reality is different from another creature’s mental map of reality.
The setup is relatively simple. The bonobo sits across from a human researcher, and at a 90 degree angle from a “game master”. The game master places a treat under one of several cups in full view of the bonobo and the human researcher. They then wait 5 seconds and then the researcher reveals the treat and gives it to the bonobo. This is the training phase – letting the bonobo know that there is a treat there and they will be given the treat by the human researcher after a delay.
In the test phase an opaque barrier is placed between the human researcher and the cups, and this barrier either has a window or it doesn’t. So in some conditions the human researcher knows where the treat is and in others they don’t. The research question is – will the bonobo point to the cup more often and more quickly when the human researcher does not know where the treat is?
The results were pretty solid – the bonobos in multiple tests pointed to the cup with the treat far more often, quickly, and insistently when the human researcher did not know where the treat was. They also ran the experiment with no researcher, to make sure the bonobo was not just reaching for the treat, and again they did not point to the cup when there was no human researcher to communicate to.
No one experiment like this is ever definitive, and it’s the job of researchers to think of other and more simple ways to explain the results. But the behavior of the bonobos in this experimental setup matched what was predicted if they indeed have at least a rudimentary theory of mind. They seem to know when the human researcher knew where the treat was, independent of the bonobo’s own knowledge of where the treat was.
This kind of behavior makes sense for an intensely social animal, like bonobos. Having a theory of mind about other members of your community is a huge advantage on cooperative behavior. Hunting in particular is an obvious scenario where coordination ads to success (bonobos do, in fact, hunt).
This will not be the final word on this contentious question, but does move the needle one click in the direction of concluding that apes likely have a theory of mind. We will see if these results replicate, and what other research designs have to say about this question.
The post Do Apes Have a Theory of Mind first appeared on NeuroLogica Blog.
Everything, apparently, has a second life on TikTok. At least this keeps us skeptics busy – we have to redebunk everything we have debunked over the last century because it is popping up again on social media, confusing and misinforming another generation. This video is a great example – a short video discussing the “incorruptibility’ of St. Teresa of Avila. This is mainly a Catholic thing (but also the Eastern Orthodox Church) – the notion that the bodies of saints do not decompose, but remain in a pristine state after death, by divine intervention. This is considered a miracle, and for a time was a criterion for sainthood.
The video features Carlos Eire, a Yale professor of history focusing on medieval religious history. You may notice that the video does not include any shots of the actual body of St. Teresa. I could not find any online. Her body is not on display like some incorruptibles, but has been exhumed in 1914 and again recently. So we only have the reports of the examiners. This is where much of the confusion is generated – the church defines incorruptible very differently than the believers who then misrepresent the actual evidence. Essentially, if the soft tissues are preserved in any way (so the corpse has not completely skeletonized) and remains somewhat flexible, that’s good enough.
The case of Teresa is typical – one of the recent examiners said, “There is no color, there is no skin color, because the skin is mummified, but you can see it, especially the middle of the face.” So the body is mummified and you can only partly make out the face. That is probably not what most believers imagine when the think of miraculous incorruptibility.
This is the same story over and over – first hand accounts of actual examiners describe a desiccated corpse, in some state of mummification. Whenever they are put on display, that is exactly what you see. Sometimes body parts (like feet or hands) are cut off and preserved separately as relics. Often a wax or metal mask is placed over the face because the appearance may be upsetting to some of the public. The wax masks can be made to look very lifelike, and some viewers may think they are looking at the actual corpse. But the narrative among believers is often very different.
It has also been found that there are many very natural factors that correlate with the state of the allegedly incorruptible bodies. A team of researchers from the University of Pisa explored the microenvironments of the tombs:
“They discovered that small differences in temperature, moisture, and construction techniques lead to some tombs producing naturally preserved bodies while others in the same church didn’t. Now you can debate God’s role in choosing which bodies went into which tombs before these differences were known, but I’m going to stick with the corpses. Once the incorrupt bodies were removed from these climates or if the climates changed, they deteriorated.”
The condition of the bodies seems to be an effect of the environment, not the saintliness of the person in life.
It is also not a secret – though not advertised by promoters of miraculous incorruptibility – that the bodies are often treated in order to preserve them. This goes beyond controlling the environment. Some corpses are treated with acid as a preservative, or oils or sealed with wax.
When you examine each case in detail, or the phenomenon as a whole, what you find is completely consistent with what naturally happens to bodies after death. Most decay completely to skeletons. However, in the right environment, some may be naturally mummified and may partly or completely not go through putrefaction. But if their environment is changed they may then proceed to full decay. And bodies are often treated to help preserve them. There is simply no need for anything miraculous to explain any of these cases.
There is also a good rule of thumb for any such miraculous or supernatural claim – if there were actually cases of supernatural preservation, we would all have seen it. This would be huge news, and you would not have to travel to some church in Italy to get a few of an encased corpse covered by a wax mask.
As a side note, and at the risk of sounding irreverent, I wonder if any maker of a zombie film considered having the corpse of an incorruptible animate. If done well, that could be a truly horrific scene.
The post Incorruptible Skepticism first appeared on NeuroLogica Blog.