neurologicablog Feed

Your Daily Fix of Neuroscience, Skepticism, and Critical Thinking

Thermoelectric Cooling – It’s Cooler Than You Think

Fri, 02/21/2025 - 4:45am

I am fascinated by the technologies that live largely behind the scenes. These are not generally consumer devices, but they may be components of consumer products, or may largely have a role in industry – but they make our modern world possible, or make it much better. In addition, I think that material science is largely underrated in terms of popular appeal, but it is material science that often makes all other technologies possible or feasible. There is another aspect of technology that I have been increasingly interested in – solid state technology. These are, generally speaking, devices that use electricity rather than moving parts. You are likely familiar with solid state drives, which do not have spinning discs and therefore are smaller, use less power, and last longer. One big advantage of electric vehicles is that they are largely solid state, without the moving parts of an engine.

There is a technology that combines all three of these features – it is a component technology, dependent on material science, and solid state: thermoelectric devices. This may not sound sexy, but bear with me, this is cool (pun intended) technology. Thermoelectric materials are those that convert electricity into a temperature difference across a material, or convert a temperature difference into electricity. In reality, everything is a thermoelectric material, but most materials have insignificant thermoelectric effects (so are functionally not thermoelectric).
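
As a rough way to see what “insignificant thermoelectric effects” means in practice, engineers rank materials by the dimensionless figure of merit ZT = S²σT/κ, which combines the Seebeck coefficient (S), electrical conductivity (σ), absolute temperature (T), and thermal conductivity (κ). Here is a minimal sketch of that calculation; the material values are illustrative placeholders roughly typical of bismuth telluride near room temperature, not figures from the article.

```python
# Minimal sketch: thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.
# The values below are illustrative placeholders, roughly in the range reported for
# bismuth telluride near room temperature; they are not figures from any study cited here.

def figure_of_merit(seebeck_v_per_k, conductivity_s_per_m, thermal_cond_w_per_mk, temp_k):
    """Dimensionless ZT; a higher ZT means a better thermoelectric material."""
    return (seebeck_v_per_k ** 2) * conductivity_s_per_m * temp_k / thermal_cond_w_per_mk

# Example: S ~ 200 microvolts/K, sigma ~ 1e5 S/m, kappa ~ 1.5 W/(m*K), T = 300 K
print(figure_of_merit(200e-6, 1e5, 1.5, 300))  # ~0.8, typical of good room-temperature materials
```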

Thermoelectric devices can be used to harvest energy from any temperature difference. These are generally not large amounts of energy – we don’t have thermoelectric power plants connected to the grid – and they are not yet practical or cost-effective at large scale. This may be possible in the future, but not today. However, for applications that require small amounts of energy, harvesting that energy from ambient sources like small temperature differences is feasible.

There are likely many more applications for the reverse – using electricity to cause temperature changes. This is basically a refrigerator, and in fact you can buy small solid state thermoelectric refrigerators. A traditional refrigerator uses a compressor and a refrigerant – a liquid that turns into a gas at low temperature, absorbing heat when it transitions to gas and then releasing heat when it transitions back to liquid. But this requires a compressor with moving parts and pipes to carry the refrigerant. Refrigerants are also not good for the environment or the ozone layer. Thermoelectric coolers can be smaller, use less electricity, are quiet, and have more precise temperature control. But their size is limited because they are not powerful enough for full-sized refrigerators.
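
To put “use less electricity” in context, the theoretical ceiling on a thermoelectric cooler’s efficiency is set by its ZT and by the temperature difference it has to maintain. A minimal sketch using the standard single-stage formula for maximum coefficient of performance (the temperatures and ZT below are illustrative, not from the article):

```python
# Standard single-stage formula for the maximum coefficient of performance (COP) of a
# thermoelectric cooler. The temperatures and ZT below are illustrative, not from the article.
from math import sqrt

def peltier_max_cop(t_cold_k, t_hot_k, zt_mean):
    """Heat pumped per watt of electricity for an ideal single-stage cooler.
    zt_mean is the material's ZT evaluated at the mean of the hot and cold temperatures."""
    m = sqrt(1 + zt_mean)
    carnot_cop = t_cold_k / (t_hot_k - t_cold_k)
    return carnot_cop * (m - t_hot_k / t_cold_k) / (m + 1)

# Illustrative: hold 12 C inside against a 27 C room with a ZT ~ 1 material
print(peltier_max_cop(285.0, 300.0, 1.0))  # ~2.8, far below the Carnot limit of 19
```

Real devices fall well short of this ideal, which is part of why thermoelectric cooling has so far been limited to small applications and why raising ZT (and cutting manufacturing costs) matters so much.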

As an aside, I see that Samsung is coming out this year with a hybrid full-size refrigerator. It still uses a compressor, but also has a thermoelectric cooler to reduce temperature variation throughout the refrigerator.

Thermoelectric cooling is also useful for electronics, which have an increasing problem with heat dissipation as we make them smaller, more compact, and more powerful. Heat management is now a major limiting factor for high end computer chips. This is also a major limiting factor for bio-electronics – implanting chips in people for various potential applications. Having a small and efficient solid state cooling device that just requires electricity would enable this technology.

But – the current state of the art for thermoelectric cooling is limited. Devices have low overall efficiency, and their manufacture is expensive and generates a lot of waste. In other words – there is a huge opportunity to improve this technology, with massive and far-ranging potential benefits. This is an area ripe for investment with clear benefits. This can also be a significant component of our current overall goal to electrify our technology – to accomplish with electricity what currently requires moving parts and fossil fuels.

All this is why I was very interested in this latest advance – Interfacial bonding enhances thermoelectric cooling in 3D-printed materials. This incorporates yet another technology that has my interest – 3D printing, or additive manufacturing. This does not represent an improvement in the thermoelectric devices themselves, but an improvement in the cost and efficiency of making them (again, an often neglected but very important aspect of any technology). As one of the authors says:

“With our present work, we can 3D print exactly the needed shape of thermoelectric materials. In addition, the resulting devices exhibit a net cooling effect of 50 degrees in the air. This means that our 3D-printed materials perform similarly to ones that are significantly more expensive to manufacture,” says Xu.

The innovation has to do with the molecular bonding of the materials in the 3D printing process. As Xu says, the performance is the same as existing materials, but with much lower cost to manufacture. As always, shifting to a new technology often means that there is room for further incremental advances to make the advantages even better over time. It may take years for this technology to translate to the market, but it is very possible it may lead directly to a slew of new products and applications.

It may seem like a small thing, but I am looking forward to a future (hopefully not too distant) with full-sized thermoelectric refrigerators, and with computers that don’t need fans or water cooling. Having a silent computer without fans is nice for podcasting – admittedly a particular interest of mine, but an increasingly common one.

In general, quality of life will be better if we are surrounded by technology that is silent, small, efficient, cost-effective, and long-lasting. Thermoelectric cooling can make all of that increasingly possible.


Birds Separately Evolved Complex Brains

Tue, 02/18/2025 - 4:58am

The evolution of the human brain is a fascinating subject. The brain is arguably the most complex structure in the known (to us) universe, and is the feature that makes humanity unique and has allowed us to dominate (for good or ill) the fate of this planet. But of course we are but a twig on a vast evolutionary tree, replete with complex brains. From a human-centric perspective, the closer groups are to humans evolutionarily, the more complex their brains (generally speaking). Apes are the most “encephalized” among primates, as are the primates among mammals, and the mammals among vertebrates. This makes evolutionary sense – we would expect the biggest and most complex brains to evolve within lineages that already had relatively big and complex brains.

But this evolutionary perspective can be tricky. We can’t confuse looking back through evolutionary time with looking across the landscape of extant species. Any species alive today has just as much evolutionary history behind them as humans. Their brains did not stop evolving once their branch split off from the one that led to humans. There are therefore some groups which have complex brains because they are evolutionarily close to humans, and their brains have a lot of homology with humans. But there are also other groups that have complex brains because they evolved them completely independently, after their group split from ours. Cetaceans such as whales and dolphins come to mind. They have big brains, but their brains are organized somewhat differently from primates.

Another group that is often considered to be highly intelligent, independent from primates, is birds. Birds are still vertebrates, and in fact they are amniotes, the group that contains reptiles, birds, and mammals. It is still an open question as to exactly how much of the human brain architecture was present at the last common ancestor of all amniotes (and is therefore homologous) and how much evolved later independently. To explore this question we need to look at not only the anatomy of brains and the networks within them, but brain cell types and their genetic origins. For example, even structures that currently look very different can retain evidence of common ancestry if they are built with the same genes. Or – structures that look similar may be built with different genes, and are therefore evolutionarily independent, or analogous.

With that background, we now have a publication of several research projects examining the brain of various amniotes – Evolutionary convergence of sensory circuits in the pallium of amniotes. The pallium is basically the cerebral cortex – the layers of gray and white matter that sit on top of the cerebrum. This is the “advanced” part of the brain in vertebrates, which includes the neocortex in humans. When comparing the pallium of reptiles, birds, and mammals, what did they find?

 “Their neurons are born in different locations and developmental times in each species,” explains Dr. García-Moreno, head of the Brain Development and Evolution laboratory, “indicating that they are not comparable neurons derived from a common ancestor.”

Time and location during development is a big clue as to the evolutionary source of different cells and structures. Genes are another way to determine evolutionary source, so a separate analysis looked at the genes that are activated when forming the pallium of these different groups. It turns out – they use very different assemblages of genes in developing the neurons of the pallium. All this strongly suggests that extant reptiles, birds, and mammals evolved similar brain structures independently after they split apart as groups. They use different neuron types derived from different genes, which means those neurons evolved from different ancestral cell types.

To do this analysis they looked at hundreds of genes and cell types across species, creating an atlas of brain cells, and then did (of course) a computer analysis:

“We were able to describe the hundreds of genes that each type of neuron uses in these brains, cell by cell, and compare them with bioinformatics tools.” The results show that birds have retained most inhibitory neurons present in all other vertebrates for hundreds of millions of years. However, their excitatory neurons, responsible for transmitting information in the pallium, have evolved in a unique way. Only a few neuronal types in the avian brain were identified with genetic profiles similar to those found in mammals, such as the claustrum and the hippocampus, suggesting that some neurons are very ancient and shared across species. “However, most excitatory neurons have evolved in new and different ways in each species,” details Dr. García-Moreno.
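
To give a rough flavor of what that kind of bioinformatic comparison involves, here is a toy illustration with made-up expression values; it is not the study’s method or data (the real analysis involves single-cell transcriptomes and far more sophisticated statistics). Each cell type can be summarized as a vector of gene-expression levels, and cell types from different species compared with a similarity measure such as cosine similarity.

```python
# Toy sketch of cross-species cell-type comparison (hypothetical numbers, not the study's data).
# Each cell type is a vector of expression levels for the same ordered set of genes.
from math import sqrt

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical expression profiles over five genes:
mouse_inhibitory = [8.7, 0.4, 7.1, 0.2, 3.0]
chick_inhibitory = [9.1, 0.2, 7.5, 0.1, 3.3]   # similar profile to the mouse inhibitory neuron
mouse_excitatory = [7.9, 0.5, 6.6, 0.3, 3.1]
chick_excitatory = [0.8, 7.2, 0.4, 8.5, 2.7]   # different genes dominate

print(cosine_similarity(mouse_inhibitory, chick_inhibitory))  # high (~1.0): consistent with shared ancestry
print(cosine_similarity(mouse_excitatory, chick_excitatory))  # low (~0.2): consistent with independent origin
```

In this toy example the inhibitory profiles match across species while the excitatory profiles do not, mirroring the pattern the researchers report.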

Convergent evolution like this occurs because nature finds similar solutions to the same problem. But if they evolved independently, the tiny details (like the genes they are built from) will differ. But also, a similar solution is not an identical solution. This means that bird brains are likely to be different in important ways from mammalian brains. They have a different type of intelligence than mammals, primates, and humans do (just like dolphins have a different type of intelligence).

This is the aspect of this research that fascinates me the most – how is our view of reality affected by the quirks of our neurological evolution? Our view of reality is mostly a constructed neurological illusion (albeit a useful illusion). It is probable that chimpanzees see the world in a very similar way to humans, as their brains diverged only recently from our own. But the reality that dolphin or crow brains construct might be vastly different from our own.

There are “intelligent” creatures on Earth that diverge even more from the human model. Octopuses have a doughnut-shaped brain that wraps around their esophagus, with many of the neurons also distributed in their tentacles. They have as many neurons as a dog, but they are far more distributed. Their tentacles have some capacity for independent neurological function (if you want to call that “thought”). It is highly likely that the experience of reality of an octopus is extremely different from that of any mammal.

This line of thinking always leads me to ponder – what might the intelligence of an alien species be like? In science fiction it is a common story-telling contrivance that aliens are remarkably humanoid, not just in their body plan but in their intelligence. They mostly have not only human-level intelligence, but a recognizably human type of intelligence. I think it is far more likely that any alien intelligence, even one capable of technology, would be different from human intelligence in ways difficult (and perhaps impossible) for us to contemplate.

There are some sci fi stories that explore this idea, like Arrival, and I usually find them very good. But still I think fiction is just scratching the surface of this idea. I understand why this is – it’s hard to tell a story with aliens when we cannot even interface with them intellectually – unless that fact is part of the story itself. But still, there is a lot of space to explore aliens that are human enough to have a meaningful interaction, but different enough to feel neurologically alien. There are likely some constants to hold onto, such as pleasure and pain, and self-preservation. But even exploring that idea – what would be the constants, and what can vary, is fascinating.

This all relates to another idea I try to emphasize whenever relevant – we are our neurology. Our identity and experience are the firing of patterns of neurons in our brains, a uniquely constructed experience.


AI Powered Bionic Arm

Fri, 02/14/2025 - 4:49am

My younger self, seeing that title – AI Powered Bionic Arm – would definitely feel as if the future had arrived, and in many ways it has. This is not the bionic arm of the 1970s TV show, however. That level of tech is probably closer to the 2070s than the 1970s. But we are still making impressive advances in brain-machine interface technology and robotics, to the point that we can replace missing limbs with serviceable robotic replacements.

In this video Sarah De Lagarde discusses her experience as the first person with an AI powered bionic arm. This represents a nice advance in this technology, and we are just scratching the surface. Let’s review where we are with this technology and how artificial intelligence can play an important role.

There are different ways to control robotics – you can have preprogrammed movements (with or without sensory feedback), AI can control the movements in real time, a human operator can work through some kind of interface (including motion capture), or you can use a brain-machine interface of some sort. For robotic prosthetic limbs the user obviously needs to be able to control them in real time, and we want that experience to feel as natural as possible.

The options for robotic prosthetics include direct connection to the brain, which can be from a variety of electrodes. They can be deep brain electrodes, brain surface, scalp surface, or even stents inside the veins of the brain (stentrodes). All have their advantages and disadvantages. Brain surface and deep brain have the best resolution, but they are the most invasive. Scalp surface is the least invasive, but has the lowest resolution. Stentrodes may, for now, be the best compromise, until we develop more biocompatible and durable brain electrodes.

You can also control a robotic prosthetic without a direct brain connection, using surviving muscles as the interface. That is the method used in De Lagarde’s prosthetic. The advantage here is that you don’t need wires in the brain. Electrodes from the robotic limb connect to existing muscles which the user can contract voluntarily. The muscles themselves are not moving anything, but they generate a sizable electrical impulse which can activate the robotic limb. The user then has to learn to control the robotic limb by activating different sequences of muscle contractions.
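
A crude sketch of the basic myoelectric idea follows (illustrative only; this is not the algorithm in De Lagarde’s actual prosthetic): rectify and smooth the electrical signal picked up from a residual muscle, then issue a command when the smoothed activity crosses a calibration threshold.

```python
# Illustrative sketch of myoelectric triggering (not the algorithm in the actual device):
# rectify the raw muscle signal, smooth it into an "envelope," and issue a command
# when sustained activity crosses a calibration threshold.

def envelope(samples, window=50):
    """Moving average of the rectified signal."""
    rectified = [abs(s) for s in samples]
    return [
        sum(rectified[max(0, i - window + 1): i + 1]) / min(window, i + 1)
        for i in range(len(rectified))
    ]

def detect_command(samples, threshold):
    """Very crude decision rule: trigger when the latest envelope value exceeds threshold."""
    return "close_hand" if envelope(samples)[-1] > threshold else "idle"

# Hypothetical signals: low-level resting noise vs. a sustained voluntary contraction.
resting = [0.01, -0.02, 0.015, -0.01] * 25
contracting = resting[:50] + [0.6, -0.55, 0.7, -0.65, 0.62] * 10

print(detect_command(resting, threshold=0.1))      # idle
print(detect_command(contracting, threshold=0.1))  # close_hand
```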

At first this method of control requires a lot of concentration. I think a good analogy, one used by De Lagarde, is to think of controlling a virtual character in a video game. At first, you need to concentrate on the correct sequence of keys to hit to get the character to do what you want. But after a while you don’t have to think about the keystrokes. You just think about what you want the character to do and your fingers automatically (it seems) go to the correct keys or manipulate the mouse appropriately. The cognitive burden decreases and your control increases. This is the learning phase of controlling any robotic prosthetic.

As the technology developed, researchers learned that providing sensory feedback is a huge help to this process. When the user uses the limb it can provide haptic feedback, such as vibrations, that correspond to the movement. Users report this is an extremely helpful feature. It allows for superior and more natural control, and allows them to control the limb without having to look directly at it. Sensory feedback closes the usual feedback loop of natural motor control.

And that is where the technology has gotten to, with continued incremental advances. But now we can add AI to the mix. What role does that potentially play? As the user learns to contract the correct muscles in order to get the robotic limb to do what they want, AI connected to the limb itself can learn to recognize the user’s behavior and better predict what movements they want. The learning curve is now bidirectional.

De Lagarde reports that the primary benefit of the AI learning to interpret her movements better is a decrease in the lag time between her wanting to move and the robotic limb moving. At first the delay could be 10 seconds, which is forever if all you want to do is close your fist. But now the delay is imperceptible, with the limb moving essentially in real time. The limb does not feel like her natural limb. She still feels like it is a tool that she can use. But that tool is getting more and more useful and easy to use.

AI may be the perfect tool for brain-machine interface in general, and again in a bidirectional way. What AI is very good at is looking at tons of noisy data and finding patterns. This can help us interpret brain signals, even from low-res scalp electrodes, meaning that by training on the brain waves from one user an AI can learn to interpret what the brain waves mean in terms of brain activity and user intention. Further, AI can help interpret the user’s attempts at controlling a device or communicating with a BMI. This can dramatically reduce the extensive training period that BMIs often require, getting months of user training down to days. It can also improve the quality of the ultimate control achieved, and reduce the cognitive burden of the user.
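
As a toy illustration of the pattern-recognition idea (not any specific BMI system’s algorithm), one of the simplest possible decoders learns an average “template” feature vector for each intended movement during calibration and then assigns new signal windows to the nearest template:

```python
# Toy sketch of the pattern-recognition idea (not any specific BMI system's algorithm):
# during calibration, learn an average "template" feature vector per intended movement;
# afterwards, classify each new signal window by the nearest template.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_intent(templates, features):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda intent: dist2(templates[intent], features))

# Hypothetical calibration data: three features (e.g., signal power on three channels)
# recorded over a few attempts at each intended movement.
calibration = {
    "open_hand":  [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1], [0.85, 0.15, 0.2]],
    "close_hand": [[0.1, 0.9, 0.3], [0.2, 0.8, 0.2], [0.15, 0.85, 0.25]],
}
templates = {intent: centroid(windows) for intent, windows in calibration.items()}

print(nearest_intent(templates, [0.82, 0.18, 0.15]))  # open_hand
```

Real decoders are far more sophisticated, but the principle is the same: let the machine learn the user’s patterns rather than making the user learn the machine.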

We are already past the point of having usable robotic prosthetic limbs controlled by the user. The technology is also advancing nicely and quite rapidly, and AI is just providing another layer to the tech that fuels more incremental advances. It’s still hard to say how long it will take to get to the Bionic Man level of technology, but it’s easy to predict better and better artificial limbs.


Who Believes Misinformation

Mon, 02/10/2025 - 4:57am

It’s probably not a surprise that a blog author dedicated to critical thinking and neuroscience feels that misinformation is one of the most significant threats to society, but I really do think this. Misinformation (false, misleading, or erroneous information) and disinformation (deliberately misleading information) have the ability to cause a disconnect between the public and reality. In a democracy this severs the feedback loop between voters and their representatives. In an authoritarian government it is a tool of control and repression. In either case citizens cannot freely choose their representatives. This is also the problem with extreme gerrymandering – in which politicians choose their voters rather than the other way around.

Misinformation and disinformation have always existed in human society, and it is an interesting question whether or not they have increased recently and to what extent social media has amplified them. Regardless, it is useful to understand what factors contribute to susceptibility to misinformation in order to make people more resilient to it. We all benefit if the typical citizen has the ability to discern reality and identify fake news when they see it.

There has been a lot of research on this question over the years, and I have discussed it often, but it’s always useful to try to gather together years of research into a single systematic review and/or meta-analysis. It’s possible I and others may be selectively choosing or remembering parts of the research to reinforce a particular view – a problem that can be solved with a thorough analysis of all existing data. And of course I must point out that such reviews are subject to their own selection bias, but if properly done such bias should be minimal. The best case scenario is for there to be multiple systematic reviews, so I can get a sense of the consensus of those reviews, spreading out bias as much as possible in the hopes it will average out in the end.

With that in mind, there is a recent meta-analysis of studies looking at the demographics of susceptibility to misinformation. The results mostly confirm what I recall from looking at the individual studies over the years, but there are some interesting wrinkles. They looked at studies which used the news headline paradigm – having subjects answer whether they think a headline is true or not, “totaling 256,337 unique choices made by 11,561 participants across 31 experiments.” That’s a good chunk of data. First, people were significantly better than chance at determining which headlines were true (68.51%) or false (67.24%). That’s better than a coin flip, but still, about a third of the time subjects in these studies could not tell real from fake headlines. Given the potential number of false headlines people encounter daily, this can result in massive misinformation.
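
One way to unpack those two percentages (and the way this literature typically does it) is signal detection theory: the rate of correctly accepting true headlines and the rate of mistakenly accepting false ones jointly yield a discrimination score (can people tell true from false at all?) and a response bias (do they lean toward “true” or “false”?). A quick sketch, treating the review’s averages as simple group-level rates, which is my simplification:

```python
# Quick signal-detection arithmetic on the review's average accuracy figures.
# Treating 68.51% and 67.24% as simple group-level hit and correct-rejection rates
# is my simplification for illustration.
from statistics import NormalDist

z = NormalDist().inv_cdf

hit_rate = 0.6851           # true headlines correctly judged true
false_alarm = 1 - 0.6724    # false headlines incorrectly judged true

d_prime = z(hit_rate) - z(false_alarm)           # discrimination: 0 means pure guessing
criterion = -(z(hit_rate) + z(false_alarm)) / 2  # bias: positive leans "false," negative leans "true"

print(round(d_prime, 2), round(criterion, 3))    # about 0.93 and -0.02: modest discrimination, little net bias
```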

What factors contributed to susceptibility to misinformation, or protected against it? One factor that many people may find surprising, but which I have seen many times over the years, is that education level alone conveyed essentially no benefit. This also aligns with the pseudoscience literature – education level (until you get to advanced science degrees) does not protect against believing pseudoscience. You might also (and I do) view this as a failure of the education system, which is supposed to be teaching critical thinking. This does not appear to be happening to any significant degree.

There were some strong predictors. People who have an analytical thinking style were more accurate on both counts – identifying true and false headlines, but with a bit of a false headline bias. This factor comes up often in the literature. An analytical thinking style also correlates with lower belief in conspiracy theories, for example. Can we teach an analytical thinking style? Yes, absolutely. People have a different inherent tendency to rely on analytical vs intuitive thinking, but almost by definition analytical thinking is a conscious deliberate act and is a skill that can be taught. Perhaps analytical thinking is the thing that schools are not teaching students but should be.

Older age also was associated with higher overall discrimination, and also with a false headline bias, meaning that their default was to be skeptical rather than believing. It’s interesting to think about the interplay between these two things – in a world with mostly false headlines, having a strong skeptical bias will lead to greater accuracy. Disbelieving becomes a good first approximation of reality. The research, as far as I can see, did not attempt to replicate reality in terms of the proportion of true to false headlines. This means that the false bias may be more or less useful in the real world than in the studies, depending on the misinformation ecosystem.

Also, being a self-identified Democrat correlated with greater accuracy and a false bias, while self-identifying as a Republican was associated with lower accuracy and a truth bias (tending to believe headlines were true). Deeply exploring why this is the case is beyond the scope of this article (this is a complex question), but let me just throw out there a couple of the main theories. One is that Republicans are already self-selected for some cognitive features, such as intuitive thinking. Another is that the current information landscape is not uniform from a partisan perspective, and is essentially selecting for people who tend to believe headlines.

Some other important factors emerged from this data. One is that a strong predictor of believing headlines was partisan alignment – people tended to believe headlines that aligned with their self-identified partisan label. This is due to “motivated reflection” (what I generally refer to as motivated reasoning).  The study also confirmed something I have also encountered previously – that those with higher analytical thinking skills actually displayed more motivated reasoning when combined with partisan bias. Essentially smarter people have the potential to be better and more confident at their motivated reasoning. This is a huge reason for caution and humility – motivated reasoning is a powerful force, and being smart not only does not necessarily protect us from it, but may make it worse.

Finally, the single strongest predictor of accepting false headlines as true was familiarity. If a subject had encountered the claim previously, they were much more likely to believe it. This is perhaps the most concerning factor to come out of this review, because it means that mere repetition may be enough to get most people to accept a false reality. This has big implications for the “echo chamber” effect on both mainstream and social media. If you get most of your news from one or a few ideologically aligned outlets, you essentially are allowing them to craft your perception of reality.

From all this data, what (individually and as a society) should we do about this, if anything?

First, I think we need to seriously consider how critical thinking is taught (or not taught) in schools. Real critical thinking skills need to be taught at every level and in almost every subject, but also as a separate dedicated course (perhaps combined with some basic scientific literacy and media savvy). Hey, one can dream.

The probability of doing something meaningful in terms of regulating media seems close to zero. That ship has sailed. The fairness doctrine is gone. We live in the proverbial wild west of misinformation, and this is not likely to change anytime soon. Therefore, individually, we can protect ourselves by being skeptical, working our analytical thinking skills, checking our own biases and motivated reasoning, and not relying on a few ideologically aligned sources of news. One good rule of thumb is to be especially skeptical of any news that reinforces your existing biases. But dealing with a societal problem on an individual level is always a tricky proposition.


Do Apes Have a Theory of Mind

Tue, 02/04/2025 - 4:56am

Designing research studies to determine what is going on inside the minds of animals is extremely challenging. The literature is littered with past studies that failed to properly control for all variables and thereby overinterpreted the results. The challenge is that we cannot read the minds of animals, and they cannot communicate directly to us using language. We have to infer what is going on in their minds from their behavior, and inference can be tricky.

One specific question is whether or not our closest ancestors have a “theory of mind”. This is the ability to think about what other creatures are thinking and feeling. Typical humans do this naturally – we know that other people have minds like our own and we can think strategically about the implications of what other people think, how to predict their behavior based upon this, and how to manipulate the thoughts of other people in order to achieve our ends.

Animal research over the last century or so has been characterized by assumptions that some cognitive ability is unique to humans, only to find that this ability exists in some animals, at least in a precursor form. This makes sense: since we evolved from other animals, most of our abilities likely did not come out of nowhere but evolved from more basic precursors.

But it is still undeniably true that humans are unique in the animal kingdom for our sophisticated cognitive abilities. Our language, abstraction, problem solving, and technological abilities are significantly advanced beyond those of any other animal. We therefore cannot just assume that even our closest relatives possess any specific cognitive ability that humans have, and therefore this is a rich target of research.

The specific question of whether or not our ape relatives have a theory of mind remains an open research controversy. Previous research has suggested that they might, but all of this research was designed around the question of whether or not another individual had some specific piece of knowledge. Does the subject ape know that another ape or a human knows a piece of information? This research suggests that they might, but there remains a controversy over how to interpret the results – again, what can we infer from the animal’s behavior?

A new study seeks to inform this discussion by adding another type of research – looking at whether or not a subject ape, in this case a bonobo, understands that a human researcher lacks information. This is exploring the theory of mind from the perspective of another creature’s ignorance rather than its knowledge. The advantage here, from a research perspective, is that such a theory of mind would require that the bonobo simultaneously knows the relevant piece of information and that a human researcher does not know this information – that their mental map of reality is different from another creature’s mental map of reality.

The setup is relatively simple. The bonobo sits across from a human researcher, and at a 90 degree angle from a “game master”. The game master places a treat under one of several cups in full view of the bonobo and the human researcher. They then wait 5 seconds and then the researcher reveals the treat and gives it to the bonobo. This is the training phase – letting the bonobo know that there is a treat there and they will be given the treat by the human researcher after a delay.

In the test phase an opaque barrier is placed between the human researcher and the cups, and this barrier either has a window or it doesn’t. So in some conditions the human researcher knows where the treat is and in others they don’t. The research question is – will the bonobo point to the cup more often and more quickly when the human researcher does not know where the treat is?

The results were pretty solid – the bonobos in multiple tests pointed to the cup with the treat far more often, quickly, and insistently when the human researcher did not know where the treat was. They also ran the experiment with no researcher, to make sure the bonobo was not just reaching for the treat, and again they did not point to the cup when there was no human researcher to communicate to.
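
For a sense of how such a result might be quantified (the counts below are hypothetical, purely for illustration; they are not the study’s data), the key comparison is pointing frequency between the window and no-window conditions, for example with a simple two-proportion test:

```python
# Hypothetical illustration (made-up counts, not the study's data): compare how often
# the bonobos pointed when the experimenter could not see the baiting vs. when they could.
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two proportions, using a pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: pointing on 21 of 24 "ignorant experimenter" trials
# versus 9 of 24 "knowledgeable experimenter" trials.
print(round(two_proportion_z(21, 24, 9, 24), 2))  # about 3.6: a difference unlikely to be chance
```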

No one experiment like this is ever definitive, and it’s the job of researchers to think of other, simpler ways to explain the results. But the behavior of the bonobos in this experimental setup matched what was predicted if they indeed have at least a rudimentary theory of mind. They seemed to know whether the human researcher knew where the treat was, independent of the bonobo’s own knowledge of where the treat was.

This kind of behavior makes sense for an intensely social animal, like bonobos. Having a theory of mind about other members of your community is a huge advantage for cooperative behavior. Hunting in particular is an obvious scenario where coordination adds to success (bonobos do, in fact, hunt).

This will not be the final word on this contentious question, but does move the needle one click in the direction of concluding that apes likely have a theory of mind. We will see if these results replicate, and what other research designs have to say about this question.


Incorruptible Skepticism

Thu, 01/30/2025 - 4:50am

Everything, apparently, has a second life on TikTok. At least this keeps us skeptics busy – we have to redebunk everything we have debunked over the last century because it is popping up again on social media, confusing and misinforming another generation. This video is a great example – a short video discussing the “incorruptibility” of St. Teresa of Avila. This is mainly a Catholic thing (but also the Eastern Orthodox Church) – the notion that the bodies of saints do not decompose, but remain in a pristine state after death, by divine intervention. This is considered a miracle, and for a time was a criterion for sainthood.

The video features Carlos Eire, a Yale professor of history focusing on medieval religious history. You may notice that the video does not include any shots of the actual body of St. Teresa. I could not find any online. Her body is not on display like some incorruptibles, but it was exhumed in 1914 and again recently. So we only have the reports of the examiners. This is where much of the confusion is generated – the church defines incorruptible very differently than the believers, who then misrepresent the actual evidence. Essentially, if the soft tissues are preserved in any way (so the corpse has not completely skeletonized) and remain somewhat flexible, that’s good enough.

The case of Teresa is typical – one of the recent examiners said, “There is no color, there is no skin color, because the skin is mummified, but you can see it, especially the middle of the face.” So the body is mummified and you can only partly make out the face. That is probably not what most believers imagine when they think of miraculous incorruptibility.

This is the same story over and over – first hand accounts of actual examiners describe a desiccated corpse, in some state of mummification. Whenever they are put on display, that is exactly what you see. Sometimes body parts (like feet or hands) are cut off and preserved separately as relics. Often a wax or metal mask is placed over the face because the appearance may be upsetting to some of the public. The wax masks can be made to look very lifelike, and some viewers may think they are looking at the actual corpse. But the narrative among believers is often very different.

It has also been found that there are many very natural factors that correlate with the state of the allegedly incorruptible bodies. A team of researchers from the University of Pisa explored the microenvironments of the tombs:

“They discovered that small differences in temperature, moisture, and construction techniques lead to some tombs producing naturally preserved bodies while others in the same church didn’t. Now you can debate God’s role in choosing which bodies went into which tombs before these differences were known, but I’m going to stick with the corpses. Once the incorrupt bodies were removed from these climates or if the climates changed, they deteriorated.”

The condition of the bodies seems to be an effect of the environment, not the saintliness of the person in life.

It is also not a secret – though not advertised by promoters of miraculous incorruptibility – that the bodies are often treated in order to preserve them. This goes beyond controlling the environment. Some corpses are treated with acid or oils as preservatives, or sealed with wax.

When you examine each case in detail, or the phenomenon as a whole, what you find is completely consistent with what naturally happens to bodies after death. Most decay completely to skeletons. However, in the right environment, some may be naturally mummified and may partly or completely not go through putrefaction. But if their environment is changed they may then proceed to full decay. And bodies are often treated to help preserve them. There is simply no need for anything miraculous to explain any of these cases.

There is also a good rule of thumb for any such miraculous or supernatural claim – if there were actually cases of supernatural preservation, we would all have seen it. This would be huge news, and you would not have to travel to some church in Italy to get a view of an encased corpse covered by a wax mask.

As a side note, and at the risk of sounding irreverent, I wonder if any maker of a zombie film considered having the corpse of an incorruptible animate. If done well, that could be a truly horrific scene.


The Skinny on DeepSeek

Tue, 01/28/2025 - 4:44am

On January 20th a Chinese tech company released the free version of their chatbot called DeepSeek. The AI chatbot, by all accounts, is about on par with existing widely available chatbots, like ChatGPT. It does not represent any new abilities or breakthrough in quality. And yet the release shocked the industry, causing the tech-heavy Nasdaq to fall 3%. Let’s review why that is, and then I will give some thoughts on what this means for AI in general.

What was apparently innovative about DeepSeek is that, the company claims, it was trained for only $8 million. Meanwhile ChatGPT 4 training cost over $100 million. The AI tech industry is of the belief that further advances in LLMs (large language models – a type of AI) require greater investments, with ChatGPT-5 estimated to cost over a billion dollars. Being able to accomplish similar results at a fraction of the cost is a big deal. It may also mean that existing AI companies are overvalued (which is why their stocks tumbled).

Further, the company that made DeepSeek used mainly lower power graphics chips. Apparently they did have a hoard of high-end chips (the export of which to China is banned) but were able to combine them with more basic graphics chips to create DeepSeek. Again, this is what is disruptive – they are able to get similar results with lower cost components and cheaper training. Finally, this innovation represents a change in the balance of AI tech between the US and China. Up until now China has mainly been following the US, copying its technology and trailing by a couple of years. But now a Chinese company has innovated something new, not just copied US technology. This is what has China hawks freaking out. (Mr. President, we cannot allow an AI gap!)

There is potentially some good and some bad to the DeepSeek phenomenon. From a purely industry and market perspective, this could ultimately be a good thing. Competition is healthy. And it is also good to flip the script a bit and show that innovation does not always mean bigger and more expensive. Low cost AI will likely have the effect of lowering the bar for entry so that not only the tech giants are playing. I would also like to see innovation that allows for the operation of AI data centers requiring less energy. Energy efficiency is going to have to be a priority.

But what are the doomsayers saying? There are basically two layers to the concerns – fear over AI in general, and fears over China. Cheaper more efficient AIs might be good for the market, but this will also likely accelerate the development and deployment of AI applications, something which is already happening so fast that many experts fear we cannot manage security risks and avoid unintended consequences.

For example, LLMs can write code, and in some cases they can even alter their own code, even unexpectedly. Recently an AI demonstrated the ability to clone itself. This has often been considered a tipping point where we potentially lose control over AI – AI that can iterate and duplicate itself without human intervention, leading to code no one fully understands. This will make it increasingly difficult to know how an AI app is working and what it is capable of. Cheaper LLMs leading to proliferation obviously makes all this more likely to happen and therefore more concerning. It’s a bit like CRISPR – cheap genetic manipulation is great for research and medical applications, but at some point we begin to get concerned about cheap and easy genetic engineering.

What about the China angle? I wrote recently about the TikTok hubbub, and concerns about an authoritarian rival country having access to large amounts of data on US citizens as well as the ability to put their thumb on the scale of our internal political discourse (not to mention deliberately dumbing down our citizenry). If China takes the lead in AI this will give them another powerful platform to do the same. At the very least it subjects people outside of China to Chinese government censorship. DeepSeek, for example, will not discuss any details of Tiananmen Square, because that topic is taboo by the Chinese government.

It is difficult to know, while we are in the middle of all of this happening, how it will ultimately play out. In 20 years or so will we look back at this time as a period of naive AI panic, with fears of AI largely coming to nothing? Or will we look back and realize we were all watching a train wreck in slow motion while doing nothing about it? There is a third possibility – the Y2K pathway. Perhaps we pass some reasonable regulations that allow for the industry to develop and innovate, while protecting the public from the worst risks and preventing authoritarian governments from getting their hands on a tool of ultimate oppression (at least outside their own countries). Then we can endlessly debate what would have happened if we did not take steps to prevent disaster.


The Hubble Tension Hubbub

Mon, 01/20/2025 - 6:29am

There really is a significant mystery in the world of cosmology. This, in my opinion, is a good thing. Such mysteries point in the direction of new physics, or at least a new understanding of the universe. Resolving this mystery – called the Hubble Tension – is a major goal of cosmology. This is a scientific cliffhanger, one which will unfortunately take years or even decades to sort out. Recent studies have now made the Hubble Tension even more dramatic.

The Hubble Tension refers to discrepancies in measuring the rate of expansion of the universe using different models or techniques. We have known since 1929 that the universe is not static, but expanding. This was the famous discovery of Edwin Hubble, who noticed that galaxies further from Earth have a greater red-shift, meaning they are moving away from us faster. This can only be explained as an expanding universe – everything (not gravitationally bound) is moving away from everything else. This became known as Hubble’s Law, and the rate of expansion as the Hubble Constant.

Then in 1998 two teams, the Supernova Cosmology Project and the High-Z Supernova Search Team, analyzing data from Type 1a supernovae, found that the expansion rate of the universe is actually accelerating – it is faster now than in the distant past. This discovery won the Nobel Prize in Physics in 2011 for Adam Riess, Saul Perlmutter, and Brian Schmidt. The problem remains, however, that we have no idea what is causing this acceleration, or even any theory about what might have the necessary properties to cause it. This mysterious force was called “dark energy”, and instantly became the dominant form of mass-energy in the universe, making up 68-70% of the universe.

I have seen the Hubble Tension framed in two ways – it is a disconnect between our models of cosmology (what they predict) and measurements of the rate of expansion, or it is a disagreement between different methods of measuring that expansion rate. The two main methods of measuring the expansion rate are using Type 1a supernovae and measuring the cosmic background radiation. Type 1a supernovae are considered standard candles because they have roughly the same absolute magnitude (brightness). They are white dwarf stars in a binary system that are siphoning off mass from their partner. When they reach a critical point of mass, they go supernova. So every Type 1a goes supernova with the same mass, and therefore the same brightness. If we know an object’s absolute magnitude of brightness, then we can calculate its distance. It was this data that led to the discovery that the universe is accelerating.

But using our models of physics, we can also calculate the expansion of the universe by looking at the cosmic microwave background (CMB) radiation, which is the glow left over after the Big Bang. This gets cooler as the universe expands, and so we can calculate that expansion by looking at the CMB close to us and farther away. Here is where the Hubble Tension comes in. Using Type 1a supernovae, we calculate the Hubble Constant to be 73 km/s per megaparsec. Using the CMB the calculation is 67 km/s/Mpc. These numbers are not close enough – they are very different.
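
To make the standard-candle logic concrete, here is a minimal sketch of the arithmetic (the supernova’s apparent magnitude and redshift below are illustrative values, not real survey data): the known absolute magnitude plus the observed apparent magnitude gives a distance via the distance modulus, and the redshift-derived recession velocity divided by that distance gives a local value of the Hubble Constant.

```python
# Minimal sketch of the standard-candle arithmetic (illustrative numbers, not survey data).
C_KM_S = 299792.458      # speed of light in km/s
M_ABS_TYPE1A = -19.3     # approximate absolute magnitude of a Type 1a supernova

def distance_mpc(apparent_mag, absolute_mag=M_ABS_TYPE1A):
    """Distance from the distance modulus m - M = 5*log10(d_pc) - 5, converted to megaparsecs."""
    d_pc = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_pc / 1e6

def hubble_constant(apparent_mag, redshift):
    """Local expansion rate in km/s per Mpc, using the low-redshift approximation v = c*z."""
    return C_KM_S * redshift / distance_mpc(apparent_mag)

# Hypothetical nearby supernova: apparent magnitude 16.0, redshift 0.028
print(round(hubble_constant(16.0, 0.028), 1))  # roughly 73 km/s/Mpc
```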

At first it was thought that perhaps the difference is due to imprecision in our measurements. As we gather more and better data (such as building a more complete sample of Type 1a supernovae), using newer and better instruments, some hoped that perhaps these two numbers would come into alignment. The opposite has happened – newer data has solidified the Hubble Tension.

A recent study, for example, uses the Dark Energy Spectroscopic Instrument (DESI) to make more precise measurements of Type 1a’s in the nearby Coma cluster. This is used to make a more precise calibration of our overall measurements of distance in the universe. With this more precise data, the authors argue that the Hubble Tension should now be considered a “Hubble Crisis” (a term which then metastasized throughout reporting headlines). The bottom line is that there really is a disconnect between theory and measurements.

Even more interesting, another group has used updated Type 1a supernovae data to argue that perhaps dark energy does not have to exist at all. This is their argument: The calculation of the Hubble Constant throughout the universe used to establish an accelerating universe is based on the assumption of isotropy and homogeneity at the scale we are observing. Isotropy means that the universe is essentially the same density no matter which direction you look in, while homogeneity means that every piece of the universe is the same as every other piece. So no matter where you are and which direction you look in, you will observe about the same density of mass and energy. This is obviously not true at small scales, like within a galaxy, so the real question is – at what scale does the universe become isotropic and homogenous? Essentially cosmologists have used the assumption of isotropy and homogeneity at the scale of the observable universe to make their calculations regarding expansion. This is called the lambda CDM model (ΛCDM), where lambda is the cosmological constant and CDM is cold dark matter.

This group, however, argues that this is not true. There are vast gaps with little matter, and matter tends to clump along filaments in the universe. If instead you take into account these variations in the density of matter throughout the universe, you get different results for the Hubble Constant. The primary reason for this is General Relativity. This is part of Einstein’s (highly verified) theory that matter affects spacetime. Where matter is dense, time relatively slows down. This means as we look out into the universe, the light that we see is travelling faster through empty space than it is through space with lots of matter, because that matter is causing time to slow down. So if you measure the expansion rate of the universe, it will appear faster in gaps and slower in galaxy clusters. As the universe expands, the gaps expand, meaning the later universe will have more gaps and therefore measure a faster acceleration, while the older universe has smaller gaps and therefore measures a slower expansion. They call this the timescape model.

If the timescape model is true, then the expansion of the universe is not accelerating (it’s just an illusion of our observations and assumptions), and therefore there is no need for dark energy. They further argue that their model is a better fit for the data than ΛCDM (but not by much). We need more and better data to definitively determine which model is correct. They are also not mutually exclusive – timescape may explain some but not all of the observed acceleration, still leaving room for some dark energy.

I find this all fascinating. I will admit I am rooting for timescape. I never liked the concept of dark energy. It was always a placeholder, but also just has properties that are really counter-intuitive. For example, dark energy does not dilute as spacetime expands. This does not mean it is false – the universe can be really counterintuitive to us apes with our very narrow perspectives. I will also follow whatever the data says. But wouldn’t it be exciting if an underdog like timescape overturned a Nobel Prize-winning discovery, and for at least a second time in my lifetime radically changed how we think about cosmology? Timescape may also resolve the Hubble Tension to boot.

Whatever the answer turns out to be – clearly there is something wrong with our current cosmology. Resolving this “crisis” will expand our knowledge of the universe.


Should the US Ban TikTok?

Mon, 01/13/2025 - 5:13am

My recent article on social media has fostered good social media engagement, so I thought I would follow up with a discussion of the most urgent question regarding social media – should the US ban TikTok? The Biden administration signed into law legislation that would ban the social media app TikTok on January 19th (deliberately the day before Trump takes office) unless it is sold off to a company that is not, as it is believed, beholden to the Chinese government. The law states it must be divested from ByteDance, the Chinese parent company that owns TikTok. This raises a few questions – is this constitutional, are the reasons for it legitimate, how will it work, and will it work?

A federal appeals court ruled that the ban is constitutional and can take place, and that decision is now before the Supreme Court. We will know soon how they rule, but indicators are they are leaning towards allowing the law to take effect. Trump, who previously tried to ban TikTok himself, now supports allowing the app and his lawyers have argued that  he should be allowed to solve the issue. He apparently does not have any compelling legal argument for this. In any case, we will hear the Supreme Court’s decision soon.

If the ban is allowed to take place, how will it work? First, if you are not aware, TikTok is a short form video sharing app. I have been using it extensively over the past couple of years, along with most of the other popular platforms, to share skeptical videos and have had good engagement. Apparently TikTok is popular because it has a good algorithm that people like. TikTok is already banned on devices owned by Federal employees. The new ban will force app stores in the US to remove the TikTok app and not allow any further updates or support. Existing TikTok users will continue to be able to use their existing apps, but they will not be able to get updates so they will eventually become unusable.

ByteDance will have time to comply with the law by divesting TikTok before the app becomes unusable, and many believe they are essentially waiting to see if the law will actually take effect. So, it is possible that even if the law does take effect, not much will change for existing users, unless ByteDance refuses to comply and the app slowly fades away. In that case it is likely that the two main existing competitors, YouTube Shorts and Instagram, will benefit.

Will users be able to bypass the ban? Possibly. You can use a virtual private network (VPN) to change your apparent location to download the app from foreign stores. But even if it is technically possible, this would be a significant hurdle for some users and likely reduce use of the app in the US.

That is the background. Now let’s get to the most interesting question – are the stated reasons for wanting to ban the app legitimate? This is hotly debated, but I think there is a compelling argument to make for the risks of the app, and they essentially echo many of the points I made in my previous post. Major social media platforms undeniably have an influence on the broader culture. If the platforms are left entirely open, this allows for bad actors to have unfettered access to tools to spread misinformation, disinformation, radicalization, and hate speech. I have stated that my biggest fear is that these platforms will be used by authoritarian governments to control their society and people. The TikTok ban is about a hostile foreign power using an app to undermine the US.

There are essentially two components to the fear. The first is that TikTok is gathering information on US citizens that can then be weaponized against them or our society. The second is that the Chinese government will use TikTok in order to spread pro-communist China propaganda and anti-American propaganda, sow civil strife, and influence American politics. We actually don’t have to speculate about whether or not China will do this – TikTok has already admitted that they have identified and shut down massive Chinese government campaigns to influence US users – one with 110,000 accounts, and another with 141,000 accounts. You might argue that the fact that they took them down means they are not cooperating with the Chinese government, but we cannot conclude that. They may be making a public show of taking down some campaigns but leaving others in place. The more important fact here is that the Chinese government is using TikTok to influence US politics and society.

There are also more subtle ways than massive networks of accounts to influence the US through TikTok. American TikTok is different from the Chinese version, and analyses have found that the Chinese version has better quality informational content and more educational content than the US version. China can be playing the long game (actually, not that long, in my opinion) of dumbing down the US. Algorithms can put light thumbs on the scale of information that have massive effects.

It was raised in the comments to my previous post whether all this discussion is premised on the notion that people are easily manipulated pawns in the hands of social media giants. Unfortunately, the answer to that question is a pretty clear yes. There is a lot of social psychology research to show that influence campaigns are effective. Obviously not everyone is affected, but moving the needle 10 or 20 percentage points (or even a lot less) can have a big impact on society. Again – I have been on TikTok for over a year. It is flooded with videos that seem crafted to spread ignorance and anti-intellectualism. I know that most of them are not crafted specifically for this purpose – but that is the effect they have, and if one did intend to craft content for this purpose they could not do a better job than what is already on the platform. There is also a lot of great science communication content, but it is drowned out by nonsense.

Social media, regardless of who owns it, has all the risks and problems I discussed. But it does seem reasonable that we also do not want to add another layer of having a foreign adversary with significant influence over the platform. Some argue that it doesn’t really matter, social media can be used for influence campaigns regardless of who owns them. But that is hardly reassuring. At the very least I would argue we don’t really know and this is probably not an experiment we want to add on top of the social media experiment itself.

The post Should the US Ban TikTok? first appeared on NeuroLogica Blog.

Categories: Skeptic

New Material for Nanoconductors

Fri, 01/10/2025 - 5:06am

One of the things I have come to understand from following technology news for decades is that perhaps the most important breakthroughs, and often the least appreciated, are those in material science. We can get better at engineering and making stuff out of the materials we have, but new materials with superior properties change the game. They make new stuff possible and feasible. There are many futuristic technologies that are simply not possible yet, waiting on the back burner for enough breakthroughs in material science to make them feasible. Recently, for example, I wrote about fusion reactors. Is the addition of high temperature superconducting material sufficient to get us over the finish line of commercial fusion, or are more material breakthroughs required?

One area where material properties are becoming a limiting factor is electronics, and specifically computer technology. As we make smaller and smaller computer chips, we are running into the limits of materials like copper to efficiently conduct electrons.  Further advance is therefore not just about better technology, but better materials. Also, the potential gain is not just about making computers smaller. It is also about making them more energy efficient by reducing losses to heat when processors work. Efficiency is arguably now a more important factor, as we are straining our energy grids with new data centers to run all those AI and cryptocurrency programs.

This is why a new study detailing a new nanoconducting material is actually more exciting than it might at first sound. Here is the editor’s summary:

Noncrystalline semimetal niobium phosphide has greater surface conductance as nanometer-scale films than the bulk material and could enable applications in nanoscale electronics. Khan et al. grew noncrystalline thin films of niobium phosphide—a material that is a topological semimetal as a crystalline material—as nanocrystals in an amorphous matrix. For films with 1.5-nanometer thickness, this material was more than twice as conductive as copper. —Phil Szuromi

Greater conductance at the nanoscale means we can make smaller transistors. The study also claims that this material has lower resistance, which means greater efficiency – less waste heat. They also claim that manufacturing is similar to existing transistors at similar temperatures, so it should be feasible to mass produce (at least it seems like it should be). But what about niobium? Another lesson I have learned from examining technology news is to look for weaknesses in any new technology, including the necessary raw materials. I see lots of battery and electronics news, for example, that relies on platinum, which means it’s not going to be economical.

Niobium is considered a rare metal, and is therefore relatively expensive, about $45 per kilogram. (By comparison, copper goes for $9.45 per kg.) Most of the world’s niobium is sourced in Brazil (so at least it’s not a hostile or unstable country). It is not considered a “precious” metal like gold or platinum, so that is a plus. About 90% of niobium is currently used as a steel alloy, to make steel stronger and tougher. If we start producing advanced computer chips using niobium, what would that do to world demand? How would that affect the price of niobium? By definition we are talking about tiny amounts of niobium per chip – the wires are only a few nanometers thick – but the world produces a lot of computer chips.

How all this will sort out is unclear, and the researchers don’t get into that kind of analysis. They are basically concerned with the material science and proving that their concept works. This is often where the disconnect is between exciting-sounding technology news and ultimate real-world applications. Much of the stuff we read about never comes to fruition, because it simply cannot work at scale or is too expensive. Some breakthroughs do work, but we don’t see the results in the marketplace for 10-20 years, because that is how long it takes to go from the lab to the factory. I have been doing this long enough now that I am seeing the results of lab breakthroughs I first reported on 20 years ago.

Even if a specific demonstration is not translatable into mass production, however, material scientists still learn from it. Each new discovery increases our knowledge of how materials work and how to engineer their properties. So even when the specific breakthrough may not translate, it may lead to other spin-offs which do. This is why such a proof-of-concept is exciting – it shows us what is possible and potential pathways to get there. Even if that specific material may not ultimately be practical, it still is a stepping stone to getting there.

What this means is that I have learned to be patient, to ignore the hype, but not dismiss science entirely. Everything is incremental. It all adds up and slowly churns out small advances that compound over time. Don’t worry about each individual breakthrough – track the overall progress over time. From 2000 to today, lithium-ion batteries have about tripled their energy capacity, for example, while solar panels have doubled their energy production efficiency. This was due to no one breakthrough, just the cumulative effects of hundreds of experiments. I still like to read about individual studies, but it’s important to put them into context.
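As a rough illustration of how those incremental gains compound (the tripling and doubling figures come from the paragraph above; the ~25-year window is my own assumption):

```python
# Implied steady annual improvement rates, assuming compound growth over ~25 years.
years = 25  # assumption: roughly 2000 to the mid-2020s

battery_rate = 3.0 ** (1 / years) - 1   # lithium-ion energy capacity ~3x overall
solar_rate = 2.0 ** (1 / years) - 1     # solar conversion efficiency ~2x overall

print(f"Battery capacity: ~{battery_rate:.1%} per year")  # ~4.5%
print(f"Solar efficiency: ~{solar_rate:.1%} per year")    # ~2.8%
```

A few percent per year never makes headlines, but sustained over decades it transforms an industry.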

The post New Material for Nanoconductors first appeared on NeuroLogica Blog.

Categories: Skeptic

What Kind of Social Media Do We Want?

Thu, 01/09/2025 - 5:05am

Recently Meta decided to end their fact-checkers on Facebook and Instagram. The move has been both hailed and criticized. They are replacing the fact-checkers with an X-style “community notes”. Mark Zuckerberg summed up the move this way: “It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”

That is the essential tradeoff – whether you think false positives or false negatives are more of a problem. Are you concerned more with enabling free speech or with minimizing hate speech and misinformation? Obviously both are important, and an ideal platform would maximize both freedom and content quality. It is becoming increasingly apparent that the choice matters. The major social media platforms are not mere vanity projects – they are increasingly the main source of news and information, and they foster ideological communities. They affect the functioning of our democracy.

Let’s at least be clear about the choice that “we” are making (meaning that Zuckerberg is making for us). Maximal freedom without even basic fact-checking will significantly increase the amount of misinformation and disinformation on these platforms, as well as hate-speech. Community notes is a mostly impotent method of dealing with this. Essentially this leads to crowd-sourcing our collective perception of reality.

Free-speech optimists argue that this is all good, and that we should let the marketplace of ideas sort everything out. I do somewhat agree with this, and the free marketplace of ideas is an essential element of any free and open society. It is a source of strength. I also am concerned about giving any kind of censorship power to any centralized authority. So I buy the argument that this may be the lesser of two evils – but it still comes with some significant downsides that should not be minimized.

What I think the optimists are missing (whether out of ignorance or intention) is that a completely open platform is not a free marketplace of ideas. The free marketplace assumes that everyone is playing fair and acting in good faith. That is a 2005 level of naivete. It leaves the platform open to people who are deliberately exploiting it as a tool of political disinformation. It also leaves it open to motivated and dedicated ideological groups that can flood the zone with extreme views. Corporations can use the platform for their own influence campaigns and self-serving propaganda. This is not a free and fair marketplace – it means people with money, resources, and motivation can dominate the narrative. We are simply taking control away from fact-checkers and handing it over to shadowy groups with nefarious motivations. And don’t think that authoritarian governments won’t find a way to thrive in this environment as well.

So we have ourselves a Catch-22. We are damned if we do and damned if we don’t. This does not mean, however, that some policies are not better than others. There is a compromise in the middle that allows for a free marketplace of ideas without making it trivially easy to spread disinformation, radicalize innocent users of the platform, and allow for ideological capture. I don’t know exactly what those policies are; we need to continue to experiment and find them. But I don’t think we should throw up our hands in defeat (and acquiescence).

I think we should approach the issue like an editorial policy. Having editorial standards is not censorship. But who makes and enforces the editorial standards? Independent, transparent, and diverse groups with diffuse power and appeals processes are a place to start. No such process will be perfect, but it is likely better than having no filter at all. Such a process should have a light touch, err on the side of tolerance, and focus on the worst blatant disinformation.

I also think that we need to take a serious look at social media algorithms. This also is not censorship, but Facebook, for example, gets to decide how to recommend new content to you. They tweak the algorithms to maximize engagement. How about tweaking the algorithms to maximize quality of content and diverse perspectives instead?
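To make that idea concrete, here is a purely hypothetical sketch of what such a tweak could look like – a feed-ranking score that blends predicted engagement with quality and diversity signals. All of the names and weights are my own illustrative assumptions, not any platform’s actual algorithm:

```python
# Hypothetical feed ranking that weights quality and diversity, not just engagement.
from dataclasses import dataclass

@dataclass
class Post:
    predicted_engagement: float  # 0-1: how likely the user is to click/react
    quality_score: float         # 0-1: e.g. from source ratings or fact-check signals
    diversity_score: float       # 0-1: how different it is from the user's usual feed

def rank_score(post: Post, w_engage=0.4, w_quality=0.4, w_diversity=0.2) -> float:
    """Blend engagement with quality and diversity instead of engagement alone."""
    return (w_engage * post.predicted_engagement
            + w_quality * post.quality_score
            + w_diversity * post.diversity_score)

feed = [Post(0.9, 0.2, 0.1), Post(0.6, 0.8, 0.7)]
feed.sort(key=rank_score, reverse=True)
# With these weights the higher-quality, more diverse post outranks the
# more "engaging" but lower-quality one.
```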

We may need to also address the question of whether or not giant social media platforms represent a monopoly. Let’s face it, they do, and they also concentrate a lot of media into a few hands. We have laws to protect against such things because we have long recognized the potential harm of so much concentrated power. Social media giants have simply side-stepped these laws because they are relatively new and exist in a gray zone. Our representatives have failed to really address these issues, and the public is conflicted so there isn’t a clear political will. I think the public is conflicted partly because this is all still relatively new, but also as a result of a deliberate ideological campaign to sow doubt and confusion. The tech giants are influencing the narrative on how we should deal with tech giants.

I know there is an inherent problem here – social media outlets work best when everyone is using them, i.e. when they have a monopoly. But perhaps we need to find a way to maintain the advantage of an interconnected platform while breaking up the management of that platform into smaller pieces run independently. The other option is to just have a lot of smaller platforms, but what is happening there is that different platforms are becoming their own ideological echo chambers. We seem to have a knack for screwing up every option.

Right now there does not seem to be any way for any of these things to happen. The tech giants are in control and have little incentive to give up their power and monopoly. Government has been essentially hapless on this issue. And the public is divided. Many have a vague sense that something is wrong, but there is no clear consensus on what exactly the problem is and what to do about it.


The post What Kind of Social Media Do We Want? first appeared on NeuroLogica Blog.

Categories: Skeptic

Plan To Build First Commercial Fusion Reactor

Mon, 01/06/2025 - 7:02am

How close are we to having fusion reactors actually sending electric power to the grid? This is a huge and complicated question, and one with massive implications for our civilization. I think we are still at the point where we cannot count on fusion reactors coming online anytime soon, but progress has been steady and in some ways we are getting tantalizingly close.

One company, Commonwealth Fusion Systems, claims it will have completed a fusion reactor capable of producing net energy by “the early 2030’s”. A working grid-scale fusion reactor within 10 years seems really optimistic, but there are reasons not to dismiss this claim entirely out of hand. After doing a deep dive my take is that the 2040’s or even 2050’s is a safer bet, but this may be the fusion design that crosses the finish line.

Let’s first give the background and reasons for optimism. I have written about fusion many times over the years. The basic idea is to fuse lighter elements into heavier elements, which is what fuels stars, in order to release excess energy. This process releases a lot of energy, much more than fission or any chemical process. In terms of just the physics, the best reaction is fusing one deuterium atom with one tritium atom (D-T), but deuterium-deuterium (D-D) is also feasible. Other fusion fuels are simply way outside our technological capability and so are not reasonable candidates.
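For reference, this is the D-T reaction and its energy yield (standard, well-established physics rather than anything specific to the company discussed here):

```latex
% D-T fusion: most of the energy is carried away by the neutron.
{}^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\rightarrow\;
{}^{4}\mathrm{He}\ (3.5\ \mathrm{MeV}) \;+\; n\ (14.1\ \mathrm{MeV})
\qquad (\approx 17.6\ \mathrm{MeV}\ \text{total})
```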

There are also many reactor designs. Basically you have to squeeze the elements close together at high temperature so as to have a sufficiently high probability of fusion. Stars use gravitational confinement to achieve this condition at their cores. We cannot do that on Earth, so we use one of two basic methods – inertial confinement and magnetic confinement. Inertial confinement includes a variety of methods that squeeze hydrogen atoms together using inertia, usually from implosions. These methods have achieved ignition (burning plasma) but are not really a sustainable method of producing energy. Using laser inertial confinement, for example, destroys the container in the process.

By far the best method, and the one favored by physics, is magnetic confinement. Here too there are many designs, but the one that is closest to the finish line (and the one used by CFS) is called a tokamak. This is a torus shaped in a specific way to control the flow of plasma just so, avoiding the kinds of turbulence and instability that would prevent sustained fusion.

In order to achieve the energies necessary to create sustained fusion you need really powerful magnetic fields, and the industry has essentially been building larger and larger tokamaks to achieve this. CFS has the advantage of being the first to design a reactor using the latest high temperature superconductors (HTS), which really are a game changer for tokamaks. They allow for a smaller design with more powerful magnets using less energy. Without these HTS I don’t think there would even be a question of feasibility.
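A rough way to quantify why stronger magnets are such a game changer, using the standard tokamak scaling (this is textbook plasma physics, not a CFS-specific claim): at fixed plasma temperature and fixed plasma beta, fusion power density goes as pressure squared, and achievable pressure scales with the square of the magnetic field.

```latex
% Fusion power density vs. magnetic field at fixed temperature and beta:
\frac{P_{\mathrm{fusion}}}{V} \;\propto\; p^{2} \;\propto\; \beta^{2} B^{4}
```

Doubling the field strength therefore buys roughly a sixteen-fold increase in power density, which is why HTS magnets allow a much smaller machine to reach the same conditions as a giant conventional tokamak.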

CFS is currently building a test facility called the SPARC reactor, which stands for the smallest possible ARC reactor; ARC in turn stands for “affordable, robust, compact”. This is a test facility that will not be commercial. Meanwhile they are planning their first ARC reactor, which is commercial grid scale, in Virginia, and which they claim will produce 400 megawatts of power.

Reasons for optimism – the physics all seems to be good here. CFS was founded by engineers and scientists from MIT – essentially some of the best minds in fusion physics. They have mapped out the most viable path to commercial fusion, and the numbers all seem to add up.

Reasons for caution – they haven’t done it yet. This is not, at this point, so much a physics problem as an engineering problem. As they push to higher energies, and incorporate the mechanisms necessary to bleed off energy to heat water and run a turbine, they may run into problems they did not anticipate. They may hit a hurdle that suddenly adds 10 or 20 years to the development process. Again, my take is that the 2035 timeline assumes everything goes perfectly well. Any bumps in the road will keep adding years. This is a project at the very limits of our technology (as complex as going to the Moon), and delays are the rule, not the exception.

So – how close are they? The best result so far is from the JET tokamak, which produced 67% as much fusion energy as the heating energy put into the plasma. That sounds close, but keep in mind, 100% is just break-even. Also – this is heat energy, not electricity. Modern fission reactors have about a 30% efficiency in converting heat to electricity, so that is a reasonable assumption. Also, this is fusion energy gain, not total energy gain – the comparison is against the energy that goes into the plasma, not the total energy needed to run the reactor.

The bottom line is that they probably need to increase their energy output by an order of magnitude or more in order to be commercially viable. Just producing a little bit of net energy is not enough. They need massive excess energy (meaning electricity) in order to justify the expense. So really we are nowhere near net total energy in any fusion design. CFS is hoping that their fancy new HTS magnets will get them there. They actually might – but until they do, it’s still just an informed hope.
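To make that “order of magnitude” claim concrete, here is a minimal back-of-the-envelope sketch. The 30% heat-to-electricity figure is from the paragraph above; the heating-system efficiency is my own illustrative assumption, and a real plant would have many more loads.

```python
# Rough sketch: how much fusion gain (Q) is needed just to break even on electricity.
# Assumptions: 30% heat-to-electricity conversion, and an assumed ~40% efficiency
# converting electricity into plasma heating. Magnets, cryogenics, and pumps are
# ignored, so this estimate is generous.

q_jet = 0.67          # JET: fusion power / plasma heating power
eta_thermal = 0.30    # heat -> electricity
eta_heating = 0.40    # electricity -> plasma heating (assumed)

# Break even on electricity when Q * eta_thermal * eta_heating >= 1
q_needed = 1 / (eta_thermal * eta_heating)
print(f"Q needed just for electric break-even: ~{q_needed:.1f}")   # ~8.3
print(f"JET is at ~{q_jet / q_needed:.0%} of that threshold")      # ~8%
```

Even under these generous assumptions, a commercially useful plant needs well over ten times JET’s performance, which is consistent with the “order of magnitude or more” estimate above.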

I do hope that my pessimism, born of decades of overhyped premature tech promises, is overcalling it in this case. I hope these MIT plasma jocks can get it done, somewhere close to the promised timeline. The sooner the better, in terms of global warming. Let’s explore for a bit what this would mean.

Obviously the advantage of fusion reactors like the planned ARC design – if it works – is that they produce a lot of carbon-free energy. They can be plugged into existing connections to the grid, and produce stable, predictable energy. They produce only low level nuclear waste. They also have a relatively small land footprint for the energy produced. If the first ARC reactor works, we would need to build thousands around the world as fast as possible. If they are profitable, this will happen. But the industry can also be supported by targeted regulations. Such reactors could replace fossil fuel power plants, and then eventually fission reactors.

Once we develop viable fusion energy, it is very likely that this will become our primary energy source essentially forever – at least for hundreds if not thousands or tens of thousands of years. It gets hard to predict technology that far out, but there are really no candidates for advanced energy sources that are better. Matter-antimatter could theoretically work, but why bother messing around with antimatter, which is hard to make and contain? The advantage is probably not enough to justify it. Other energy sources, like black holes, are theoretical and extremely exotic, perhaps something for a civilization millions of years more advanced than we are.

Even if some really advanced energy source becomes possible, fusion will likely remain in the sweet spot in terms of producing large amounts of energy cleanly and sustainably. Once we cross the line of producing net total electricity with fusion, incremental advances in material science and the overall technology will just keep making fusion better. There will likely still be a role for distributed energy like solar, but fusion will replace all centralized large sources of power.

The post Plan To Build First Commercial Fusion Reactor first appeared on NeuroLogica Blog.

Categories: Skeptic

The Jersey Drones Are Likely Drones

Mon, 12/23/2024 - 4:58am

The latest flap over drone sightings in New Jersey and other states in the Northeast appears to be – essentially nothing. Or rather, it’s a classic example of a mass panic. There are reports of “unusual” drone activity, which prompt people to look for drones, which results in people seeing drones or drone-like objects and reporting them, leading to more drone sightings. Lather, rinse, repeat. The news media happily gets involved to maximize the sensationalism of the non-event. Federal agencies eventually comment in a “nothing to see here” style that just fosters more speculation. UFO and other fringe groups confidently conclude that whatever is happening is just more evidence for whatever they already believed in.

I am not exempting myself from the cycle either. Skeptics are now part of the process, eventually explaining how the whole thing is a classic example of some phenomenon of human self-deception, a failure of critical thinking skills, and just another sign of our dysfunctional media ecosystem. But I do think this is a healthy part of the media cycle. One of the roles that career skeptics play is to be the institutional memory for weird stuff like this. We can put such events rapidly into perspective because we have studied the history and have likely been through numerous such events before.

Before I get to that bigger picture, here is a quick recap. In November there were sightings in New Jersey of “mysterious” drone activity. I don’t know exactly what made them mysterious, but it led to numerous reports of other drone sightings. Some of those sightings were close to a military base, Joint Base McGuire-Dix-Lakehurst, and some people were concerned about a security threat. Even without the UFO/UAP angle, there is concern about foreign powers using drones for spying or potentially as a military threat. This is perhaps enhanced by all the reporting of the major role that drones are playing in the Russia-Ukraine war. Some towns in southern New Jersey have temporarily banned the use of drones, and the FAA has also restricted some use.

A month after the first sightings, federal officials stated that the sightings that have been investigated have all turned out to be drones, planes mistaken for drones, and even stars mistaken for drones. None have turned out to be anything mysterious or nefarious. So the drones, it turns out, are mostly drones.

Also in November (which may or may not be related) a CT police officer came forward and reported a “UFO” sighting he had in 2022. Local news helpfully created a “reenactment” of the encounter (to accompany their breathless reporting), which is frankly ridiculous. The officer, Robert Klein, did capture the encounter on his smartphone. The video shows – a hovering light in the distance. That is all – 100% consistent with a drone.

So here’s the bigger picture – as technology evolves, so do sightings, matching that technology. Popular expectations also match the sightings. Around the turn of the 20th century it was anticipated that someone would invent a flying machine, so there were lots of false sightings of such machines. After the first “flying saucer” was reported in 1947, UFO sightings often looked like flying saucers. As military aircraft increased in number and capability, sightings would track along with them, being more common near military air bases. When ultralight aircraft became a thing, people reported UFOs that were silent floating craft (I saw one myself and was perplexed until I read in the news what it was). As rocket launches become more common, so do sightings of rocket launches mistaken for “UFOs”. There was the floating candle flap from over a decade ago – suddenly many people were releasing floating candles for celebrations, and people were reporting floating candle “UFOs”.

And now we are seeing a dramatic increase in drone activity.  Drones are getting better, cheaper, and more common, so we should be having more drone sightings. This is not a mystery.

Interestingly there is one technological development that does not lead to more sightings but does lead to more evidence – smart phones. Most people are now walking around all the time with a camera and video. Just like with the CT cop, we not only have his sensational report but an accompanying video. What does this dramatic increase in photo and video evidence show? Mundane objects and blurry nothings. What do they not show? Unambiguous alien spacecraft. This is the point at which alien true-believers insert some form of special pleading to explain away the lack of objective evidence.

This pattern, of sightings tracking with technology, goes beyond alien activity. We see the same thing with ghost photos. It turns out that the specific way in which ghosts manifest on photographic film is highly dependent on camera technology. What we are actually seeing is different kinds of camera artifacts resulting from specific camera technology, and those artifacts being interpreted as ghosts or something paranormal. So back in the day when it was possible to accidentally create a double-exposure, we had lots of double-exposure ghosts. Those cameras that can create the “golden door” illusion because of their shutter created golden door phenomena. Those cameras with camera straps create camera strap ghosts. When high-powered flashes became common we started to see lots of flashback ghosts. Now we are seeing lots of AI generated fakes.

All of this is why it is important to study and understand history. Often those enamored of the paranormal or the notion of aliens are seeing the phenomenon in a tiny temporal bubble. It seems like this is all new and exciting, and major revelations are right around the corner. Of course it has seemed this way for decades, or even hundreds of years for some phenomena. Meanwhile it’s the same old thing. This was made obvious to me when I first read Sagan’s 1972 book, UFOs: A Scientific Debate. I read this three decades after it was first published – and virtually nothing had changed in the UFO community. It was deja vu all over again. I had the same reaction to the recent Pentagon UFO thing – same people selling the same crappy evidence and poor logic.

New cases are occasionally added, and as I said as the technology evolves so does some of the evidence. But what does not change is people, who are still making the same poor arguments based on flimsy evidence and dodgy logic.


The post The Jersey Drones Are Likely Drones first appeared on NeuroLogica Blog.

Categories: Skeptic

Factory Farming is Better Than Organic Farming

Tue, 12/17/2024 - 4:58am

Some narratives are simply ubiquitous in our culture (every culture has its universal narratives). Sometimes these narratives emerge out of shared values, like liberty and freedom. Sometimes they emerge out of foundational beliefs (the US still has a puritanical bent). And sometimes they are the product of decades of marketing. Marketing-based narratives deserve incredible scrutiny because they are crafted to alter the commercial decision-making of people in society, not for the benefit of society or the public, but for the benefit of an industry. For example, I have tried to expose the fallacy of the “natural is always good, and chemicals are always bad” narrative. Nature, actually, is quite indifferent to humanity, and everything is made of chemicals.

Another narrative that is based entirely on propaganda meant to favor one industry and demonize its competition is the notion that organic farming is better for health and better for the environment. Actually, there is no evidence of any nutritional or health advantage from consuming organic produce. Further – and most people I talk to find this claim shocking – organic farming is worse for the environment than conventional or even “factory” farming. Stick with me and I will explain why this is the case.

A recent article in the NYT by Michael Grunwald nicely summarizes what I have been saying for years. First let me explain why I think there is such a disconnect between reality and public perception. This gets back to the narrative idea – people tend to view especially complex situations through simplistic narratives that give them a sense of understanding. We all do this because the world is complicated and we have to break it down. There is nothing inherently wrong with this – we use schematics, categories, and diagrams to simplify complex reality and chunk it into digestible bits. But we have to understand that this is what we are doing, and how it may distort our understanding of reality. There are also better and worse ways to do this.

One principle I like to use as a guide is the Moneyball approach. This refers to Paul DePodesta who devised a new method of statistical analysis to find undervalued baseball players. Prior to DePodesta talent scouts would find high value players to recruit, players who had impressive classic statistics, like batting average. They would then pay high sums for these star players. DePodesta, however, realized that players without star-quality stats still might be solid players, and for their price could have a disproportionate positive effect on a team’s performance. If, therefore, you have a finite amount of funds to spread out over a team’s players, you might be better off shoring up your players at the low end rather than paying huge sums for star players. Famously this approach worked extremely well (first applied to the Oakland Athletics).

So let’s “Moneyball” farming. We can start with the premise that we have to produce a certain amount of calories in order to feed the world. Even if we consider population control as a long term solution – that’s a really long term solution for any ethically acceptable methods. I will add as a premise that it is not morally or politically feasible to reduce the human population through deliberate starvation. Right now there are 8.2 billion humans on Earth. Estimates are this will rise to about 10 billion before the population starts to come down again through ethical methods like poverty mitigation and better human rights. So for the next hundred years or so we will have to feed 8+ billion people.

If our goal is to feed humanity while minimizing any negative effect on the environment, then we have to consider what all the negative effects are of farming. As Grunwald points out – they are huge. Right now we are using about 38% of the land on Earth for farming. We are already using just about all of the arable land – arable land is actually a continuum, so it is more accurate to say we are using the most arable land. Any expansion of farmland will therefore expand into less and less arable land, at greater and greater cost and lower efficiency. Converting a natural ecosystem, whether a prairie, forest, meadow, or whatever, into farmland is what has, by far, the greatest negative effect on the ecosystem. This is what causes habitat loss,  isolates populations, reduces biodiversity, and uses up water. The difference between different kinds of farming is tiny compared to the difference between farming and natural ecosystems.

This all means that the most important factor, by far, in determining the net effect of calorie production for humans on the environment is the amount of land dedicated to all the various kinds of farming. Organic farming simply uses more land than conventional farming – 20-40% more land on average. This fact overwhelms any other alleged advantage of organic farming. I say alleged because organic farms can and many do use pesticides – they just use natural pesticides, which are often less effective, requiring more applications. Sometimes they also rely on tilling, which releases carbon from the soil.
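To make the land-use point concrete, here is a minimal sketch of the arithmetic using only the figures in this post (about 38% of land already farmed, and a 20-40% organic land penalty), and assuming purely for illustration that yield ratios stay fixed if everything converted:

```python
# Rough illustration of the organic land penalty, using the post's figures.
# Assumptions: ~38% of Earth's land is already farmed, organic needs 20-40%
# more land for the same output, and yield ratios stay constant at scale.
current_farmland_share = 0.38

for land_penalty in (0.20, 0.40):
    organic_share = current_farmland_share * (1 + land_penalty)
    print(f"+{land_penalty:.0%} land -> ~{organic_share:.0%} of Earth's land farmed")

# Prints roughly 46% and 53% - an extra 8-15% of Earth's total land surface,
# which is the kind of habitat conversion that dwarfs per-acre differences.
```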

But even if we compare maximally productive farming to the most science-based regenerative farming techniques, designed to minimize pesticide use and optimize soil health – maximally efficient farming wins the Moneyball game. It’s no contest. Also, the advantage of efficient factory farming will only get greater as agricultural science and technology improves. GMOs, for example, have the potential for massive improvements in crop efficiency, leaving organic farming progressively in the dust.

But all this does not fit the cultural narrative. We have been fed this constant image of the gentle farm, using regenerative practices, protecting the soil, with local mom and pop farmers producing food for local consumption. It’s a nice romantic image, and I have no problem with having some small local farms growing heirloom produce for local consumption. But this should be viewed as a niche luxury – not the primary source of our calories. Eating locally grown food from such farms is, in a way, a selfish act of privilege. It is condemning the environment so you can feel good about yourself. Again, it’s fine in moderation. But we need to get 95% of our calories from factory farms that are brutally efficient. This also does not mean that factory farms should not endeavor to be environmentally friendly, as long as it does not come at the cost of efficiency.

At this point many people will point out that we can improve farming efficiency by eliminating meat. It is true that overproducing meat for calories is hugely inefficient. But so is underproducing meat. What the evidence shows is that maximal efficiency comes from using each parcel of land for its optimal use. Grazing land for animals is in many cases the optimal use. Cattle, for example, can convert a lot of non-edible calories into edible calories. And finishing lots can also use low grade feed not fit for humans to pack on high-grade calories for humans. Yes – many industrialized nations consume too much meat. Part of optimizing efficiency is also optimizing the ratio of which kinds of calories we consume. But zero meat is not maximally efficient. Also – half our fertilizer comes from manure, and we can’t just eliminate the source of half our fertilizer without creating a disaster.

It’s a complicated system. We no longer, however, have the luxury of just letting everyone do what they want to do and what they think is in their best interest. Optimally there would be some voluntary coordination for the world’s agricultural system to maximize efficiency and minimize land use. This can come through science-based standards, and funding to help poorer countries have access to more modern farming techniques, rather than just converting more land for inefficient farming.

But first we have to dispense with the comforting but ultimately fictional narrative that the old gentle methods of farming are the best. We need science-based maximal efficiency.


The post Factory Farming is Better Than Organic Farming first appeared on NeuroLogica Blog.

Categories: Skeptic

Podcast Pseudoscience

Fri, 12/13/2024 - 5:04am

A recent BBC article highlights some of the risks of the new age of social media we have crafted for ourselves. The BBC investigated the number one ranked UK podcast, Diary of a CEO with host Steven Bartlett, for the accuracy of the medical claims recently made on the show. While the podcast started out focusing on tips from successful businesspeople, it has recently turned toward unconventional medical opinions, as this has boosted downloads.

“In an analysis of 15 health-related podcast episodes, BBC World Service found each contained an average of 14 harmful health claims that went against extensive scientific evidence.”

These include showcasing an anti-vaccine crank, Dr. Malhotra, who claimed that the “Covid vaccine was a net negative for society”. Meanwhile the WHO estimates that the COVID vaccine saved 14 million lives worldwide. A Lancet study estimates that in the European region alone the vaccine saved 1.4 million lives. This number could have been greater were it not for the very type of antivaccine misinformation spread by Dr. Malhotra.

Another guest promoted the Keto diet as a treatment for cancer. Not only is there no evidence to support this claim, dietary restrictions while undergoing treatment for cancer can be very dangerous, and imperil the health of cancer patients.

This reminds me of the 2014 study that found that, “For recommendations in The Dr Oz Show, evidence supported 46%, contradicted 15%, and was not found for 39%.” Of course, evidence published in the BMJ does little to counter misinformation spread on extremely popular shows. The BBC article highlights the fact that in the UK podcasts are not covered by the media regulator Ofcom, which has standards of accuracy and fairness for legacy media.

I have discussed previously the double-edged sword of social media. It did democratize information publishing and has made it easier for experts to communicate directly with the public. But this has come at the expense of quality control – there is now no editorial filter, so the public is overwhelmed with low quality information, misinformation, and disinformation. I think it’s difficult to argue that this was a good trade-off for society, at least in the short run.

Journalism has never been perfect (nothing is), but at least there are standards and an editorial process. Many of those standards, however, were just norms. Even back in the 1980s there was a lot of handwringing about the erosion of those norms by mass media. I remember those quaint days when people worried about The Phil Donahue Show, which dominated daytime television by having on sensational guests. Donahue justified the erosion of quality standards he was pioneering by saying that you have to get viewers. Then, occasionally, you can slip in some quality content. But of course Donahue was soon eclipsed by daytime talk shows that abandoned any pretense of being interested in quality content, and that fought to outdo each other in brazen sensationalism.

Perhaps most notorious was Morton Downey Jr., who all but encouraged fights on set. He did not last long, and in a desperate attempt to remain relevant even faked getting attacked by neo-nazis. His hoax was busted, however, because he drew the swastika on himself in the mirror and drew it backwards. Downey was eclipsed by so-called “trash TV” shows like Jerry Springer. These shows were little more than freak shows, without any pretense of being “news” or informative.

But at the same time we saw the rise of shows that did seem to go back to more of a Phil Donahue format of spreading information, not just highlighting the most dysfunctional lives they could find. The Queen of this format was Oprah Winfrey. Unfortunately, her stated goal was to spread her particular brand of spirituality, and she did it very well. She spawned many acolytes, including Dr. Oz, whose shows were based almost entirely on profitable misinformation.

So even before social media hit, there were major problems with the quality of information being fed to the public through mass media. Social media just cranked up the misinformation by a couple orders of magnitude, and swept away any remaining mechanisms of quality control. Social media gives a few superspreaders of misinformation the ability to have a magnified effect. Misinformation can be favored by algorithms that prioritize engagement over all else – and not just misinformation, but radicalizing information. One result is that people trust all news sources less. This leads to a situation where everyone can just believe what suits them, because all information is suspect. In some social media cultures it seems that truth is irrelevant – it’s no longer even a meaningful concept. These are trends that imperil democracy.

Steven Bartlett defends the low quality of the health information he spreads in the laziest of ways, saying that this is about free speech and airing opposing opinions. He is essentially absolving himself of any journalistic responsibility, so that he can be free to pursue maximal audience size at the expense of quality information. Of course, in an unregulated market that is the inevitable result. Most people will consume the information that most people consume, with popularity being driven by sensationalism and ideological support, not quality. Again – this is nothing new. It’s now just algorithmically assured, and there are no longer any brakes to slow the spread of misinformation. Worse, ideological and bad actors have learned how to exploit this situation to spread politically motivated disinformation.

Worse still, authoritarian governments now have a really easy time controlling information and therefore their populations. We may have (and this is my worst fear) created the ultimate authoritarian tools. In the big picture of history, this may lead to a ratcheting of societies in the authoritarian direction. We likely won’t see this happening until it’s too late. I know this will be triggering to many partisans, but I think it is reasonable to argue that we are seeing this in the US with the election of Trump, something that would likely have been impossible 20 years ago. His election (I know, it’s difficult to make sweeping conclusions like this) was partly due to the spread of misinformation and the successful leveraging of social media to control the narrative.

I don’t have any clear solutions to all this. We just have to find a way through it somehow. Individual critical thinking and media savvy are essential. But we do need to also have a conversation about the information ecosystem we have created for our societies.

The post Podcast Pseudoscience first appeared on NeuroLogica Blog.

Categories: Skeptic

Diamond Batteries Again

Thu, 12/12/2024 - 5:08am

Why does news reporting of science and technology have to be so terrible at baseline? I know the answers to this question – lack of expertise, lack of a business model to support dedicated science news infrastructure, the desire for click-bait and sensationalism – but it is still frustrating that this is the case. Social media outlets do allow actual scientists and informed science journalists to set the record straight, but they are also competing with millions of pseudoscientific, ideological, and other outlets far worse than mainstream media. In any case, I’m going to complain about it while I try to do my bit to set the record straight.

I wrote about nuclear diamond batteries in 2020. The concept is intriguing, but the applications are very limited, and the cost likely prohibitive for most uses. The idea is that you take a bit of radioactive material and surround it with “diamond like carbon”, which serves two purposes. It prevents leaking of radiation to the environment, and it captures the beta decay and converts it into a small amount of electricity. This is not really a battery (a store of energy) but an energy cell that produces energy, though it would have some battery-like applications.

The first battery based on this concept, capturing the beta decay of a radioactive substance to generate electricity, was made in 1913 by physicist Henry Moseley. So yes, despite the headlines about the “first of its kind” whatever, we have had nuclear batteries for over a hundred years. The concept of using diamond like carbon goes back to 2016, with the first prototype created in 2018.

So of course I was disappointed when the recent news reporting on another such prototype declared it a “world first” without putting it into any context. It is reporting on a new prototype that does have a new feature, but they make it sound like this is the first nuclear battery, when it’s not even the first diamond nuclear battery. The new prototype is a diamond nuclear battery using carbon-14 as the beta decay source. They make diamond like carbon out of C-14 and surround it with diamond like carbon made from non-radioactive carbon. C-14 has a half life of 5,700 years, so they claim the battery lasts for over 5,000 years.

The previous prototype nuclear diamond batteries used nickel-63, including this Chinese prototype from earlier this year, and the one from 2018. So sure, it’s the first prototype using C-14 as the beta decay source. But that is hardly clear from the reporting, nor is there any mention of other nuclear batteries and previous diamond nuclear batteries.

But worse, the reporting explicitly says this technology could replace the alkaline or lithium ion batteries you currently use in your devices. This will likely never be the case, for a simple reason – these devices have an extremely low power density and specific power. The current generated by these small diamond batteries is tiny – on the order of 10 microwatts per cubic centimeter (the power density). So you would need a 100 liter volume battery to produce one watt, which is about what a cell phone uses (depending on which features you are using).

But wait, that is for Ni-63, which has a half life of 101.2 years. C-14 has a half life of 5,700 years, which means it would produce about 56 times less power, for 56 times longer, per given mass. This is just math and is unavoidable. So using a C-14 battery you would need about 5,600 liters of battery to power a cell phone. They don’t mention that in the reporting.

This does not mean there are no potential applications for such batteries. Right now they are mainly used for deep space probes or satellites – devices that we will never be able to recharge or service and that may need only a small amount of energy. Putting cost aside, there are some other applications feasible based on physics. We could recycle C-14 from nuclear power plants and make it into diamond batteries. This is a good way to deal with nuclear waste, and it would produce electricity as a bonus. Warehouses of such batteries could be connected to the grid to produce a small amount of steady power. A building 100 meters by 100 meters by 20 meters tall, if it were packed with such batteries, could produce only about 35 kilowatts of power. Hmmm – probably not worth it.
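A minimal sketch of that arithmetic, using only the figures above (the ~10 microwatt per cubic centimeter figure for the Ni-63 design, and the assumption that power per unit volume scales inversely with half-life for the same amount of radioactive material):

```python
# Back-of-the-envelope check of the volumes quoted above.
ni63_power_density = 10e-6          # W per cm^3 (figure from the post, Ni-63 design)
half_life_ratio = 5700 / 101.2      # C-14 vs Ni-63 half-lives, ~56x
c14_power_density = ni63_power_density / half_life_ratio

watts_for_phone = 1.0               # rough cell phone draw used in the post
print(f"Ni-63 volume for 1 W: ~{watts_for_phone / ni63_power_density / 1000:,.0f} liters")  # ~100
print(f"C-14 volume for 1 W:  ~{watts_for_phone / c14_power_density / 1000:,.0f} liters")   # ~5,600

building_cm3 = 100 * 100 * 20 * 1e6  # 100 m x 100 m x 20 m, in cubic centimeters
print(f"Building packed with C-14 cells: ~{building_cm3 * c14_power_density / 1000:.0f} kW")  # ~36 kW
```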

The low power density is just a deal killer for any widespread or large application. You would have to use very short half-life materials to get the power density up, but then of course the lifespan is much shorter. But still, for some applications, a battery with a half-life of a few years would still be very useful.

Another potential application, however, is not as a primary power source but as a source to trickle charge another battery that has a much higher power density. But again, we have the question – is it worth it? I doubt there are many applications outside NASA that would be considered cost effective. Still, it is an interesting technology with some potential applications, just mostly niche ones. But reporters cannot help but hype this technology as if you are going to have everlasting cell phone batteries soon.

The post Diamond Batteries Again first appeared on NeuroLogica Blog.

Categories: Skeptic

Have We Achieved General AI

Mon, 12/09/2024 - 5:01am

As I predicted the controversy over whether or not we have achieved general AI will likely exist for a long time before there is a consensus that we have. The latest round of this controversy comes from Vahid Kazemi from OpenAI. He posted on X:

“In my opinion we have already achieved AGI and it’s even more clear with O1. We have not achieved “better than any human at any task” but what we have is “better than most humans at most tasks”. Some say LLMs only know how to follow a recipe. Firstly, no one can really explain what a trillion parameter deep neural net can learn. But even if you believe that, the whole scientific method can be summarized as a recipe: observe, hypothesize, and verify. Good scientists can produce better hypothesis based on their intuition, but that intuition itself was built by many trial and errors. There’s nothing that can’t be learned with examples.”

I will set aside the possibility that this is all for publicity of OpenAI’s newest O1 platform. Taken at face value – what is the claim being made here? I actually am not sure (part of the problem with short form venues like X). In order to say whether or not the OpenAI O1 platform qualifies as an artificial general intelligence (AGI) we need to operationally define what an AGI is. Right away, we get deep into the weeds, but here is a basic definition: “Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks.”

That may seem straightforward, but it is highly problematic for many reasons. Scientific American has a good discussion of the issues here. But at its core, two features pop up regularly in various definitions of general AI – the AI has to have wide-ranging abilities, and it has to equal or surpass human level cognitive function. There is a discussion about whether or not how the AI achieves its ends matters or should matter. Does it matter if the AI is truly thinking or understanding? Does it matter if the AI is self-aware or sentient? Does the output have to represent true originality or creativity?

Kazemi puts his nickel down on how he operationally defines general AI – “better than most humans at most tasks”. As is often the case, one has to frame such claims as “If you define X this way, then this is X.” So, if you define AGI as being better than most humans at most tasks, then Kazemi’s claims are somewhat reasonable. There is still a lot to debate, but at least we have some clear parameters. This definition also eliminates the thorny questions of understanding and awareness.

But not everyone agrees with this definition. There are still many experts who contend that modern LLMs are just really good autocompletes. They are language prediction algorithms that simulate thought through simulating language, but are not capable of true thought, understanding, or creativity. What they are great at is sifting through massive amounts of data, finding patterns, and then regenerating those patterns.

This is not a mere discussion of “how” LLMs function but gets to the core of whether or not they are “better” than humans at what they do. I think the primary argument against LLMs being better than humans is that they function by using the output of humans. Kazemi essentially says this is just how they learn, they are following a recipe like people do. But I think that dodges the key question.

Let’s take art as an example. Humans create art, and some artists are truly creative and can bring into existence new and unique works. There are always influences and context, but there is also true creativity. AI art does not do this. It sifts through the work of humans, learns the patterns, and then generates imitations from those patterns. Since AI does not experience existence, it cannot draw upon experience or emotions or the feeling of what it is to be a human in order to manifest artistic creativity. It just regurgitates the work of humans. So how can we say that AI is better than humans at art when it is completely dependent on humans for what it does? The same is true for everything LLMs do, but it is just more obvious when it comes to art.

I am not denigrating LLMs or any modern AI as extremely useful tools. They are powerful, and fast, and can accomplish many great tasks. They are accelerating the rate of scientific research in many areas. They can improve the practice of medicine. They can help us control the tsunami of data that we are drowning ourselves in. And yes, they can do a lot of different tasks.

Perhaps it is easier to define what is not AGI. A chess-playing computer is not AGI, as it is programmed to do one task. In fact, the term AGI was developed by programmers to distinguish this effort from the crop of narrow AI applications that were popping up, like Chess and Go players. But is everything that is not a very narrow AI an AGI? Seems like we need more highly specific terms.

OpenAI and other LLMs are more than just the narrow AIs of old. But they are not thinking machines, nor do they have human-level intelligence. They are also certainly not self-aware. I think Kazemi’s point about a trillion parameter deep neural net misses the mark. Sure, we don’t know exactly what it is doing, but we know what it is not doing, and we can infer from its output, and from how it is programmed, the general way that it accomplishes its outcome. There is also the fact that LLMs are still “brittle” – a term that refers to the fact that narrow AIs can be easily “broken” when they are pushed beyond their parameters. It’s not hard to throw an LLM off its game and push the limits of its ability. It still has no true thinking or understanding, and this makes it brittle.

For that reason I don’t think that LLMs have achieved AGI. But I could be wrong, and even if we are not there yet we may be really close. But regardless, I think we need to go back to the drawing board, look at what we currently have in terms of AI, and experts need to come up with perhaps new more specific operational definitions. We do this in medicine all the time – as our knowledge evolves, sometimes we need for experts to get together and revamp diagnostic definitions and make up new diagnoses to reflect that knowledge. Perhaps ANI and AGI are not enough.

To me LLMs seem like a multi-purpose ANI, and perhaps that is a good definition. Either “AGI” needs to be reserved for an AI that can truly derive new knowledge from a general understanding of the world, or we “downgrade” the term “AGI” to refer to what LLMs currently are (multi-purpose but otherwise narrow) and come up with a new term for true human-level thinking and understanding.

What’s exciting (and for some scary) is that AIs are advancing quickly enough to force a reconsideration of our definitions of what AIs actually are.

The post Have We Achieved General AI first appeared on NeuroLogica Blog.

Categories: Skeptic

Power-To-X and Climate Change Policy

Thu, 12/05/2024 - 5:01am

What is Power-to-X (PtX)? It’s just a fancy marketing term for green hydrogen – using green energy, like wind, solar, nuclear, or hydroelectric, to make hydrogen from water. This process does not release any CO2, just oxygen, and when the hydrogen is burned back with that oxygen it creates only water as a byproduct. Essentially hydrogen is being used as an energy storage medium. This whole process does not create energy, it uses energy. The wind and solar etc. are what create the energy. The “X” refers to all the potential applications of hydrogen, from fuel to fertilizer. Part of the idea is that intermittent energy production can be tied to hydrogen production, so when there is excess energy available it can be used to make hydrogen.
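For reference, the underlying chemistry is just water splitting run forward and then in reverse (standard chemistry, not specific to the paper discussed below):

```latex
% Electrolysis stores energy in hydrogen; burning it (or running it through a
% fuel cell) releases that energy and returns only water. The round trip costs
% roughly 286 kJ per mole of H2 (higher heating value), minus conversion losses.
\mathrm{2\,H_2O} \;\xrightarrow{\;\text{electricity}\;}\; \mathrm{2\,H_2} + \mathrm{O_2}
\qquad\qquad
\mathrm{2\,H_2} + \mathrm{O_2} \;\rightarrow\; \mathrm{2\,H_2O} + \text{energy}
```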

A recent paper explores the question of why, despite all the hype surrounding PtX, there is little industry investment. Right now only 0.1% of the world’s hydrogen production is green. Most of the rest comes from fossil fuels (gray and brown hydrogen) and in many cases is actually worse than just burning the fossil fuel. Before I get into the paper, let’s review what hydrogen is currently used for. Hydrogen is essentially a high energy molecule and it can be used to drive a lot of reactions. It is mostly used in industry – making fertilizer, reducing the sulfur content of gasoline, producing industrial chemicals, and making biofuel. It can also be used for hydrogen fuel cell cars, which I think is a wasted application, as BEVs are a better technology and any green hydrogen we do make has better uses. There are also emerging applications, like using hydrogen to refine iron ore, displacing the use of fossil fuels.

A cheap abundant source of green hydrogen would be a massive boost to multiple industries and would also be a key component to achieving net zero carbon emissions. So where is all the investment? This is the question the paper explores.

The short answer has to do with investment risk. Investors, especially when we are talking about billions of dollars, like predictability. Uncertainty increases their risk and is a huge disincentive to invest large sums of money. The paper concludes that there are two main sources of uncertainty that make PtX investments seem like they are high risk – regulatory uncertainty and lack of infrastructure.

Regulations in many countries are still in flux. This, fortunately, is an entirely solvable problem. Governments can put resources and priority into hammering out comprehensive regulations for the hydrogen and related industries, lock in those regulations for years, and provide the stability that investors want. Essentially the lack of proper regulations is a hurdle for green hydrogen investment, and governments simply need to do their job.

The second issue is lack of infrastructure, with further uncertainty about the completion of planned hydrogen projects –

“For instance, in October, the Danish government announced that a planned hydrogen pipeline to Germany would not be established until 2031 at the earliest, whereas the previous target was scheduled for 2028.”

The fossil fuel industry has the advantage of a mature infrastructure. Imagine if we had to develop all the oil rigs, oil wells, pipelines, trucking infrastructure, and gas stations from scratch. That would be a massive investment on an uncertain timeline. Hydrogen is facing the same issue. Again, this is a solvable issue – invest in hydrogen infrastructure. Make sure projects are sufficiently funded to keep to the originally promised timeline. Governments are supposed to craft regulation and invest in common infrastructure in order to facilitate private industry investing in new technologies. This may be all that is necessary to accelerate the green transition. At the very least, we shouldn’t be holding it back because governments are not doing their job.

The authors of the paper also explore another aspect of this issue – incentives for industry to specifically invest in green technology. This is essentially what the IRA did in the US. Here incentives fall into two broad categories, carrots and sticks. One type of carrot is to reduce risk for private investment. Beyond what I already mentioned, governments can, for example, guarantee loans to reduce financial risk. They can also provide direct subsidies, such as tax breaks for investments in green technology. For context, the fossil fuel industry received $1.4 trillion in direct subsidies worldwide in 2022. It is also estimated that the fossil fuel industry was allowed to externalize $5.6 trillion in health and environmental costs (whether or not you consider this a “subsidy”). This is for a mature industry with massive profits sitting on top of a massive infrastructure partly paid for with public dollars. The bottom line is that some targeted subsidies for green energy technology are perfectly reasonable, and in fact a good investment.

But the authors argue that this might not be enough. They also recommend we add some sticks to the equation. This usually takes the form of some type of carbon tax, which would make fossil fuels less profitable. This seems perfectly reasonable. They also recommend mandated phase out of fossil fuel investments. This is trickier, and I think this type of approach should be a last resort if anything. You won’t have to mandate a phase out if you make green technologies more attractive through subsidies and infrastructure, and fossil fuels less attractive by eliminating subsidies and perhaps taxing carbon.

At the very least, governments should not be slowing down the green transition because they are neglecting to do their basic job.

The post Power-To-X and Climate Change Policy first appeared on NeuroLogica Blog.

Categories: Skeptic

Finding Small Primordial Black Holes

Tue, 12/03/2024 - 5:08am

Astrophysicists come up with a lot of whacky ideas, some of which actually turn out to be possibly true (like the Big Bang, black holes, accelerating cosmic expansion, and dark matter). Of course, all of these conclusions are provisional, but some are now backed by compelling evidence. Evidence is the real key – often the challenge is figuring out a way to find evidence that can potentially support or refute some hypothesis about the cosmos. Sometimes it’s challenging to figure out even theoretically (let alone practically) how we might prove or disprove a hypothesis. Decades may go by before we have the ability to run the relevant experiments or make the kinds of observations necessary.

Black holes fell into that category. They were predicted by physics long before we could find evidence of their existence. There is a category of black hole, however, that we still have not confirmed through any observation – primordial black holes (PBHs). As the name implies, these black holes may have formed in the early universe, even before the first stars. In the early, dense universe, fluctuations in the density of space could have led to the formation of black holes. These black holes could theoretically be of any size, since they do not depend on a massive star collapsing to form them. This process could produce black holes smaller than the smallest stellar-remnant black holes.

In fact, it is possible that there are enough small primordial black holes out there to account for the missing dark matter – matter we can detect through its gravitational effects but cannot otherwise see (hence “dark”). PBHs are considered a dark matter candidate, but the evidence for this so far is not encouraging. For example, we might be able to detect black holes through microlensing. If a black hole happens to pass in front of a more distant star (from the perspective of an observer on Earth), then gravitational lensing will cause that star to appear to brighten until the black hole passes. However, microlensing surveys have not found the number of microlensing events that would be necessary for PBHs to explain dark matter. Dark matter makes up 85% of the matter in the universe, so there would have to be lots of PBHs for them to be the sole cause. It’s still possible that longer observation times would detect larger black holes (brightening events can take years if the black holes are large). But so far the result is negative.
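To make the “bigger lenses take longer” point concrete, here is a back-of-the-envelope sketch of my own (not from the surveys or the paper) of the Einstein-radius crossing time for a lens of a given mass. The distances and relative velocity are illustrative defaults for a lens partway to the galactic bulge; the key point is that the event duration scales with the square root of the lens mass.

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # kiloparsec in meters

def einstein_crossing_time(lens_mass_msun, d_lens_kpc=4.0, d_source_kpc=8.0, v_rel_kms=200.0):
    """Rough microlensing event duration (Einstein radius / relative velocity), in days.

    All inputs are illustrative assumptions, not values from the paper.
    """
    d_l = d_lens_kpc * KPC
    d_s = d_source_kpc * KPC
    d_ls = d_s - d_l
    m = lens_mass_msun * M_SUN
    # Einstein radius in the lens plane: R_E = sqrt(4GM/c^2 * D_L * D_LS / D_S)
    r_e = math.sqrt(4 * G * m / C**2 * d_l * d_ls / d_s)
    return r_e / (v_rel_kms * 1e3) / 86400.0  # seconds -> days

if __name__ == "__main__":
    for mass in (0.1, 1, 10, 100, 1000):  # lens masses in solar masses
        print(f"{mass:7.1f} M_sun -> ~{einstein_crossing_time(mass):6.0f} days")
```

With these assumed numbers, a solar-mass lens produces an event lasting roughly a month, while a thousand-solar-mass lens produces one lasting years – which is why longer survey baselines are needed to find (or rule out) the heavier PBHs.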

Observations of galaxies have also not shown the effects of swarms of PBHs, which (for those greater than about 10 solar masses) should have congregated in the centers of small galaxies over the age of the universe. This would have disturbed stars near the centers of these galaxies, causing the galaxies to appear fluffier. Observations of dwarf galaxies so far have not seen this effect, however.

A recent paper suggests two ways in which we might observe small PBHs, or at least their effects. These ideas are pretty out there, and are extreme long shots, which I think reflects the desperation for new ideas on how we might confirm the existence of PBHs. One idea is that small PBHs might have been gravitationally captured by planets. If the planet had a molten core, it’s then possible that the PBH would consume the molten core, leaving behind a hollow solid shell. The researchers calculate that for planets with a radius smaller than one tenth that of Earth, the outer solid shell could remain intact and not collapse in on itself. This idea then requires that a later collision knocks the PBH out of the center of this hollowed-out small planet.

If this sequence of events occurs, then we could theoretically observe small hollow exoplanets to confirm PBHs. We would know a planet is hollow if we can calculate its size and mass, which we can do for some exoplanets – an object with a mass much too small for its apparent size could be hollow. Yes, such an object would be unlikely, but the universe is a big place and even very unlikely events happen all the time. Being unlikely, however, means that such objects would be hard to find. That doesn’t matter much if we can survey large parts of the universe, but finding exoplanets requires lots of observations. So far we have identified over 5,000 exoplanets, with thousands of candidates waiting for confirmation. Most of these are larger worlds, which are easier to detect. In any case, it may be a long time before we find a small hollow world, if they are out there.
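As an illustration of the “mass too small for its size” test – my own sketch, not the authors’ method – one could compare a candidate’s bulk density, computed from its measured mass and radius, against a floor for ordinary solid or icy bodies. The 1 g/cm³ threshold (roughly water ice) and the example values are assumptions for illustration only.

```python
import math

EARTH_MASS_KG = 5.972e24
EARTH_RADIUS_M = 6.371e6

def bulk_density(mass_earths, radius_earths):
    """Bulk density in g/cm^3 from mass and radius given in Earth units."""
    mass = mass_earths * EARTH_MASS_KG
    radius = radius_earths * EARTH_RADIUS_M
    volume = (4.0 / 3.0) * math.pi * radius**3
    return mass / volume / 1000.0  # kg/m^3 -> g/cm^3

def looks_hollow(mass_earths, radius_earths, min_solid_density=1.0):
    """Flag a candidate whose bulk density falls below an assumed floor for a solid body."""
    return bulk_density(mass_earths, radius_earths) < min_solid_density

# Sanity check: Earth itself comes out around 5.5 g/cm^3
print(bulk_density(1.0, 1.0), looks_hollow(1.0, 1.0))

# Hypothetical example: a body with one tenth of Earth's radius but only 0.01% of its mass
# comes out around 0.55 g/cm^3 and would be flagged as a hollow-world candidate.
print(bulk_density(0.0001, 0.1), looks_hollow(0.0001, 0.1))
```

The hard part, of course, is measuring both mass and radius precisely enough for such small worlds, which current surveys mostly cannot do.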

The second proposed method is also highly speculative. The idea here is that there may be really small PBHs that formed in the early universe, which can theoretically have masses in the range of 10^17 to 10^24 grams. The authors calculate that a PBH with a mass of 10^22 grams, if it passed through a solid object at high speed, would leave behind a tunnel of radius 0.1 micrometers. This tunnel would make a long straight path, which is otherwise not something you would expect to see in a solid object.

Therefore, we can look at solid objects, especially really old solid objects, with light microscopy to see if any such tiny straight tunnels exist. If they do, that could be evidence of tiny PBHs. What is the probability of finding such microscopic tunnels? The authors calculate that the probability of a billion-year-old boulder containing such a tunnel is 0.000001. So on average you would have to examine a million such boulders to find a single PBH tunnel. This may seem like a daunting task – because it is. The authors argue that at least the procedure is not expensive (I guess they are not counting the person-hours needed).
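Just to spell out the arithmetic behind “a million boulders” (a trivial sketch of my own, using the paper’s quoted per-boulder probability): if each boulder independently hosts a tunnel with probability p, the chance of finding at least one after examining N boulders is 1 − (1 − p)^N.

```python
def chance_of_at_least_one(p_per_boulder=1e-6, n_boulders=1_000_000):
    """Probability of finding at least one PBH tunnel after examining n_boulders,
    assuming independent boulders and the paper's quoted per-boulder probability."""
    return 1.0 - (1.0 - p_per_boulder) ** n_boulders

for n in (10_000, 100_000, 1_000_000, 3_000_000):
    print(f"{n:>9} boulders -> {chance_of_at_least_one(n_boulders=n):.0%} chance")
```

Even a million boulders only gives roughly a 63% chance of a single hit, which underscores how impractical the search is without some form of automation.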

Perhaps there is some way to automate such a search, using robots or equipment designed for the purpose. I feel like if such an experiment were to occur, it would be in the future when technology makes it feasible. The only other possibility is to crowdsource it in some way. We would need millions of volunteers.

The authors recognize that these are pretty mad ideas, but they also argue that at this point any idea for finding PBHs, or dark matter, is likely to be out there. Fair enough. But unless we can practically do the experiment, it is likely to remain just a thought experiment and not really get us closer to an answer.

The post Finding Small Primordial Black Holes first appeared on NeuroLogica Blog.

Categories: Skeptic

Some Climate Change Trends and Thoughts

Mon, 12/02/2024 - 5:08am

Climate change is a challenging issue on multiple levels – it’s challenging for scientists to understand all of the complexities of a changing climate, it’s difficult to know how to optimally communicate to the public about climate change, and of course we face an enormous challenge in figuring out how best to mitigate climate change. The situation is made significantly more difficult by the presence of a well-funded campaign of disinformation aimed at sowing doubt and confusion about the issue.

I recently interviewed climate scientist Michael Mann about some of these issues and he confirmed one trend that I had noticed, that the climate change denier rhetoric has, to some extent, shifted to what he called “doomism”. I have written previously about some of the strategies of climate change denial, specifically the motte and bailey approach. This approach refers to a range of positions, all of which lead to the same conclusion – that we should essentially do nothing to mitigate climate change. We should continue to burn fossil fuels and not worry about the consequences. However, the exact position shifts based upon current circumstances. You can deny that climate change is even happening, when you have evidence or an argument that seems to support this position. But when that position is not rhetorically tenable, you can back off to more easily defended positions, that while climate change may be happening, we don’t know the causes and it may just be a natural trend. When that position fails, then you can fall back to the notion that climate change may not be a bad thing. And then, even if forced to admit that climate change is happening, it is largely anthropogenic, and it will have largely negative consequences, there isn’t anything we can do about it anyway.

This is where doomism comes in. It is a way of turning calls for climate action against themselves. Advocates for taking steps to mitigate climate change often emphasize how dire the situation is. The climate is already showing dangerous signs of warming, the world is doing too little to change course, the task at hand is enormous, and time is running out. That’s right, say the doomists, in fact it’s already too late and we will never muster the political will to do anything significant, so why bother trying. Again, the answer is – do nothing.

This means that science communicators dealing with climate change have to recalibrate. First, we always have to accurately portray what the science actually says (a limitation that does not burden the other side). But we also need to put this information into a proper context, and think carefully about our framing and emphasis. For example, we can focus on all the negative aspects of climate change and our political dysfunction, trying to convince people how urgent the situation is and the need for bold action. But if we just do this, that would feed the doomist narrative. We also need to emphasize the things we can do, the power we have to change course, the assets (technological and otherwise) at our disposal, and the fact that any change in course has the potential to make things better (or at least less bad). As Mann says – we have created the sense of urgency, and now we need to create a sense of agency.

The framing, therefore, should be one of strategic optimism. Pessimism is self-defeating and self-fulfilling. Admittedly, optimism can be challenging. Trump has pledged to nominate for energy secretary Chris Wright, an oil executive who essentially denies climate change as an issue. Apparently, he does not deny that human-released CO2 is warming the climate, he just thinks the negative consequences are overblown, that the costs of a green energy transition are too great, and that the efforts of the US will likely be offset by emerging industrial nations anyway. Again – do nothing. Just keep drilling. I would dispute all of these positions. Sure, the media overhypes everything, but climate scientists are generally being pretty conservative in their projections – some argue too conservative, if anything. Yes, the cost of the green transition will be great, but the cost of climate change will be greater. And for the investment we get less pollution, better health, and greater energy independence.

That last claim – essentially, why should the US bother to do anything unless everyone is making the same effort – is simply not logical. Climate change is not all or nothing, it is a continuum. Anything anyone does to mitigate greenhouse gas release will help. Also, it’s pretty clear that the US has a leadership role to play in this issue, and when we take steps to mitigate climate change other countries tend to follow. Further still, the US has released more CO2 than any other nation, and we still have among the highest per capita CO2 emissions (mostly exceeded only by petro-states with high oil production and low populations), so it makes little sense to blame emerging economies with comparatively negligible impacts.

But if I’m trying to be optimistic I can focus on a couple of things. First, there is a momentum to technology that is not easily turned off. The IRA has provided billions in subsidies to industry to accelerate the green transition, and a lot of that money is going to red states. It’s doubtful that money will be clawed back. Further, wind and solar are increasing rapidly because they are cost effective, especially while the overall penetration of these sources is still relatively low. Electric vehicles are also getting better and cheaper. So my hope is that these industries have enough momentum not only to survive but to thrive on their own.

Also, there is one green energy technology that has bipartisan support – nuclear. As I discussed recently, we are making moves to significantly increase nuclear energy, and this does require government support to help revitalize the industry and transition to the next generation. Hopefully this will continue over the next four years.  So while having someone like Wright as energy secretary (or someone like Trump as president, for that matter) is not ideal for our efforts to make a green energy transition, it is not unreasonable to hope that we can coast through the next four years without too much disruption. We’ll see.

There is also some good news and some bad news on the climate front. The bad news is that the negative effects of climate change are happening faster than models predicted. One recent study, for example, shows that there are heat wave hot spots around the world that are difficult to model. Climate models have been great at predicting average global temperatures, but are less able to predict local variation. What is happening is called “tail-widening” – as average temperatures increase, the variability across regions also increases, leading to outlier hotspots. This is causing an increase in heat-related deaths, and bringing extreme heat to areas that have not previously experienced it.
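As a toy statistical illustration of tail-widening – my own sketch with invented numbers, not figures from the cited study – suppose local summer highs are roughly normally distributed. The chance of exceeding a fixed extreme-heat threshold rises modestly when only the mean shifts, but much more when the spread widens as well.

```python
from statistics import NormalDist

THRESHOLD_C = 38.0  # an arbitrary extreme-heat threshold for this illustration

def exceedance_prob(mean_c, stdev_c, threshold_c=THRESHOLD_C):
    """Probability that a normally distributed daily high exceeds the threshold."""
    return 1.0 - NormalDist(mean_c, stdev_c).cdf(threshold_c)

baseline     = exceedance_prob(mean_c=30.0, stdev_c=3.0)   # hypothetical historical climate
mean_shift   = exceedance_prob(mean_c=31.5, stdev_c=3.0)   # warming of the mean alone
tail_widened = exceedance_prob(mean_c=31.5, stdev_c=4.0)   # warming plus greater variability

print(f"baseline: {baseline:.2%}, mean shift only: {mean_shift:.2%}, "
      f"mean shift + wider tails: {tail_widened:.2%}")
```

The exact numbers are made up, but the pattern – widening the distribution inflates the extreme tail disproportionately – is the mechanism being described.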

We are also seeing events like Hurricane Helene, which hit North Carolina. Scientists are confident that the amount of rainfall was significantly increased by rising global temperatures – warmer air holds more moisture. Dropping more rain meant increased flooding, bringing extreme flooding and catastrophic damage to an area that was not considered a flood risk and was therefore largely unprepared for such an event.

What’s the good news part of this? Events like extreme heat waves and hurricane destruction seem to be shifting the political center of gravity. It’s becoming harder to deny that climate change is happening with potential negative effects. This gets back to the doomism phenomenon – increasingly, doomism is all the climate change deniers have left. They are essentially saying, sorry, it’s too late. But it is objectively not too late, and it will never be too late to make changes that will have a positive impact, even if that impact is just making things less bad.

The Biden Administration actually showed a good way forward, using essentially all carrots and no sticks. Just give industry some incentives and assurances to make investments in green energy, and they will. We also need to invest in infrastructure, which is also something that tends to have bipartisan support. Climate activists do need to become strategic about their messaging (the other side certainly is). This might mean focusing on bipartisan wins – investing in industry, investing in infrastructure, becoming economic leaders in 21st century technology, and facilitating nuclear and geothermal energy. These are win-wins everyone should be able to get behind.

 

The post Some Climate Change Trends and Thoughts first appeared on NeuroLogica Blog.

Categories: Skeptic
