Small nuclear reactors have been around since the 1950s. They have mostly been used in military ships, like aircraft carriers and submarines, where they offer the specific advantage that such ships can remain at sea for long periods of time without needing to refuel. But small modular reactors (SMRs) have never taken off as a source of grid energy. The prevailing opinion is that they are simply not cost effective – larger reactors, which are already expensive endeavors, produce more megawatts per dollar.
This is unfortunate, because they have a lot of advantages. Their initial investment is smaller, even though the cost per unit of energy is higher. They are safe and reliable. They have a small footprint. And they are scalable. The military uses them because the strategic advantages are worth the higher cost. Some argue that the zero-carbon, on-demand energy they provide is worth the higher cost, and I think this is a solid argument. There are also continued attempts to develop the technology to bring down the cost. Arguably it may be worth subsidizing the SMR industry so that the technology can be developed to greater cost effectiveness. Decarbonizing the energy sector is worth the investment.
But there is another question – are there civilian applications that would also justify the higher cost per unit of energy? I have recently encountered two that are interesting. The first is a direct extension of the military use – using an SMR to power a cargo ship. The South Korean company HD Korea Shipbuilding & Offshore Engineering has revealed its design for an SMR-powered cargo ship and has received “approval in principle”. Obviously this is just the beginning phase – they still need to fully develop the design and get full approval. But the concept is compelling.
An SMR has a smaller overall footprint than a traditional combustion engine – it needs no space for an exhaust system or for fuel tanks. The saved space can be used for extra cargo, and that extra cargo offsets the higher cost of the SMR. The calculus here is different – you don’t have to compare an SMR to every other form of grid power, including gigawatt-scale nuclear. You only have to compare it to other forms of cargo ship propulsion. You have to look at the overall cost effectiveness of the cargo delivery system, not just the production of watts. As an aside, the company is also planning to incorporate a “supercritical carbon dioxide-based propulsion system”, which is about 5% more efficient than a traditional steam-based propulsion system.
Shipping accounts for about 3% of global greenhouse gas emissions. Decarbonizing this sector therefore will be critical for getting close to net zero.
The second potential civilian application is powering data centers. The Swiss company Deep Atomic is developing an SMR that is purpose-built for large data centers, again leveraging advantages specific to one application. Their design provides not only 60 MWe of power but also 60 MW worth of cooling – apparently it can use its waste heat to power cooling systems for the data center. The SMR is also designed to be located right next to the data center, even close to urban centers. The company also hopes to produce these SMRs in a factory to help bring down construction costs.
Right now this is just a design, and not a reality, but it’s the idea that’s interesting. Instead of thinking of SMRs as just another method of providing power to the grid, they are being reimagined as being optimized for a specific purpose, which could possibly allow them to gain that extra efficiency to make them cost effective. Data centers, which are increasingly critical to our digital world, are very energy hungry. You can no longer just plug them into the existing grid and expect to get all the energy you need. Right now there is no regulatory requirement for data centers to provide their own energy. In late 2024, Energy Secretary Jennifer Granholm “urged” AI companies to provide their own green energy to power their data centers. Many have responded with plans to do that. But it would not be unreasonable to require them to do so.
Without a plan to power data centers their growing energy demand is not sustainable. This could also completely wipe out any progress we make at trying to decarbonize energy production, as new demand will equal or outstrip any green energy production. This is what has been happening so far. This is another reason why we absolutely need nuclear power if we are going to meet our carbon goals.
There is also the hope that these niche applications of SMRs will bootstrap the entire industry. Making SMRs for ships and data centers could create an economy of scale that brings down the cost of SMRs overall, making them viable for more and more applications.
The post Are Small Modular Reactors Finally Coming? first appeared on NeuroLogica Blog.
The flying car is an icon of futuristic technology – in more ways than one. This is partly why I can’t resist a good flying car story. I was recently sent this YouTube video on the Alef flying car. The company says this is a street-legal flying car with vertical takeoff and landing, and they demonstrate that they have tested the vehicle in urban environments. It is available now for pre-order (estimated price, $300k). The company claims: “Alef will deliver a safe, affordable vehicle to transform your everyday commute.” The claim sounds reminiscent of claims made for the Segway (which recently went defunct).
The flying car has a long history as a promise of future technology. As a technology buff, nerd, and sci-fi fan, I have been fascinated with them my entire life. I have also seen countless prototype flying cars come and go, an endless progression of overhyped promises that have never delivered. I try not to let this make me cynical – but I am cautious and skeptical. I even wrote an entire book about the foibles of predicting future technology, in which flying cars featured prominently.
So of course I met the claims for the Alef flying car with a fair degree of skepticism – which has proven entirely justified. First I will say that the Alef flying car does appear to function as a car and can fly like a drone. But I immediately noticed in the video that as a car, it does not go terribly fast. You have to do some digging, but I found the technical specs which say that it has a maximum road speed of 25 MPH. It also claims a road range of 200 miles, and an air range of 110 miles. It is an EV with a gas motor to extend battery life in flight, with eight electric motors and eight propellers. It is also single passenger. It’s basically a drone with a frame shaped like a car with tires and weak motors – a drone that can taxi on roads.
It’s a good illustration of the inherent hurdles to the fully-realized flying car of our dreams, mostly rooted in the laws of physics. But before I go there – as is, can this be a useful vehicle? I suppose, for very specific applications. It is being marketed as a commuter car, which makes sense, as it is single passenger (this is no family car). The limited range also makes it suited to commuting (the average daily commute in the US is around 42 miles).
That 25 MPH limit, however, seems like a killer. You can’t drive this thing on the highway, or on many roads, in fact. But, trying to be as charitable as possible, that may be adequate for congested city driving. It is also useful for pulling the vehicle out of the garage into a space with no overhead obstruction. Then you would essentially fly to your destination, land in a suitable location, and then drive to your parking space. If you are only driving into the parking garage, the 25 MPH is fine. So again – it’s really a drone that can taxi on public roads.
The company claims the vehicle is safe, and that seems plausible. Computer aided drone control is fairly advanced now, and AI is only making it better. The real question is – would you need a pilot’s license to fly it? How much training would be involved? And what are the weather conditions in which it is safe to fly? Where you live, what percentage of days would the drone car be safe to fly, and how easy would it be to be stuck at work (or need to take an Uber) because the weather unexpectedly turned for the worse? And if you are avoiding even the potential of bad weather, how much further does this restrict your flying days?
There are obviously lots of regulatory issues as well. Will cities allow these vehicles to fly overhead? What happens if they become popular and we see a significant increase in their use? How will air traffic be managed? If widely adopted, we will then see what their real safety statistics are. How many people will fly into power lines, etc.?
What all this means is that a vehicle like this may be great as “James Bond” technology. This means, if you are the only one with the tech, and you don’t have to worry about regulations (because you’re a spy), it may help you get away from the bad guys, or quickly cross a city frozen with grid lock. (Let’s face it, you can totally see James Bond in this thing.) But as a widely adopted technology, there are significant issues.
For me the bottom line is that this technology is a great proof-of-concept, and I welcome anything that incrementally advances the technology. It may also find a niche somewhere, but I don’t think this will become the Tesla of flying cars, or that this will transform city commuting. It does help demonstrate where the technology is. We are seeing the benefits of improving battery technology, and improving drone technology. But is this the promised “flying car”? I think the answer is still no.
For me a true flying car functions fully as a car and as a flying conveyance. What we often see are planes that can drive on the road, and now drones that can drive on the road. But they are not really cars, or they are terrible cars. You would never drive the Alef flying car as a car – again, at most you would taxi it to and from its parking space.
What will it take to have a true flying car? I do think the drone approach is much better than the plane approach or the jet-pack approach – drone technology is definitely the way to go. Before it is practical, however, we need better battery tech. The Alef uses lithium-ion and lithium polymer batteries. Perhaps eventually it will use silicon-anode lithium batteries, which have a higher energy density. But we may need batteries with triple or more the energy density of current lithium-ion cells before flying drone cars are a practical reality. Still, we can feasibly get there.
Perhaps, however, the “flying car” is just a futuristic pipe dream. We do have to consider whether the concept is even valid, or whether we are just committing a “futurism fallacy” by projecting current technology into the future. We don’t necessarily have to do things in the same way, just with better technology. The thought process is – I use my car for transportation, wouldn’t it be great if my car could fly? Perhaps the trade-offs of making a single vehicle that is both a good car and a good drone are just not worth it. Perhaps we should just make the best drone possible for human transportation and specific applications. We may need to develop some infrastructure to accommodate them.
In a city there may be other combinations of travel that work better. You may take an e-scooter or some form of public transportation to the drone. Then the drone can take you across the city, or across a geographic obstacle. Personal drones may be used for commuting, but then you may have a specific pad at your home and another at work for landing. That seems easier than designing a drone-car just to drive 30 feet to the takeoff location.
If we go far enough into the future, where technology is much more advanced (like batteries with 10 times the energy density of current tech), then flying cars may eventually become practical. But even then there may be little reason to choose that tradeoff.
The post The Alef Flying Car first appeared on NeuroLogica Blog.
I am fascinated by the technologies that live largely behind the scenes. These are not generally consumer devices, but they may be components of consumer products, or may largely have a role in industry – but they make our modern world possible, or make it much better. In addition, I think that material science is largely underrated in terms of popular appeal, but it is material science that often makes all other technologies possible or feasible. There is another aspect of technology that I have been increasingly interested in – solid state technology. These are, generally speaking, devices that use electricity rather than moving parts. You are likely familiar with solid state drives, which do not have spinning discs and are therefore smaller, use less power, and last longer. One big advantage of electric vehicles is that they are largely solid state, without the moving parts of an engine.
There is a technology that combines all three of these features – it is a component technology, dependent on material science, and solid state: thermoelectric devices. This may not sound sexy, but bear with me, this is cool (pun intended) technology. Thermoelectric materials are those that convert electricity into a temperature difference across a material, or convert a temperature difference into electricity. In reality, everything is a thermoelectric material, but most materials have insignificant thermoelectric effects (so are functionally not thermoelectric).
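To pin down what “thermoelectric” means quantitatively, here are the standard textbook relations (my addition for illustration, not drawn from any specific study discussed here). S is the material’s Seebeck coefficient, σ its electrical conductivity, κ its thermal conductivity, and T the absolute temperature:

```latex
% Seebeck effect: a temperature difference across the material drives a voltage
V = S\,\Delta T
% Peltier effect: a current through a junction pumps heat (\Pi = S\,T)
\dot{Q} = \Pi\, I = S\,T\,I
% Dimensionless figure of merit used to rank thermoelectric materials
ZT = \frac{S^{2}\sigma}{\kappa}\,T
```

In these terms, the materials with “insignificant thermoelectric effects” are simply those with a tiny Seebeck coefficient, and therefore a tiny ZT.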
Thermoelectric devices can be used to harvest energy from any temperature difference. These are generally not large amounts of energy – we don’t have thermoelectric power plants connected to the grid – and they are currently not practical or cost effective at large scale. This may be possible in the future, but not today. However, for applications that require small amounts of energy, harvesting that energy from ambient sources like small temperature differences is feasible.
There are likely many more applications for the reverse – using electricity to cause temperature changes. This is basically a refrigerator, and in fact you can buy small solid state thermoelectric refrigerators. A traditional refrigerator uses a compressor and a refrigerant – a liquid that turns into a gas at low temperature, absorbing heat when it transitions to gas and then giving off heat when it transitions back to liquid. But this requires a compressor with moving parts and pipes to carry the refrigerant. Refrigerants are also not good for the environment or the ozone layer. Thermoelectric coolers can be smaller, use less electricity, are quiet, and have more precise temperature control. But their size is limited because they are not powerful enough for full-sized refrigerators.
As an aside, I see that Samsung is coming out this year with a hybrid full-size refrigerator. It still uses a compressor, but also has a thermoelectric cooler to reduce temperature variation throughout the refrigerator.
Thermoelectric cooling is also useful for electronics, which have an increasing problem with heat dissipation as we make them smaller, more compact, and more powerful. Heat management is now a major limiting factor for high end computer chips. It is also a major limiting factor for bio-electronics – implanting chips in people for various potential applications. Having a small and efficient solid state cooling device that just requires electricity would enable this technology.
But – the current state of the art for thermoelectric cooling is limited. Devices have low overall efficiency, and their manufacture is expensive and generates a lot of waste. In other words – there is a huge opportunity to improve this technology with massive and far ranging potential benefits. This is an area ripe for investment with clear benefits. This can also be a significant component of our current overall goal to electrify our technology – to accomplish with electricity what currently requires moving parts and fossil fuels.
All this is why I was very interested in this latest advance – Interfacial bonding enhances thermoelectric cooling in 3D-printed materials. This incorporates yet another technology that has my interest – 3D printing, or additive manufacturing. This does not represent an improvement in the thermoelectric devices themselves, but an improvement in the cost and efficiency of making them (again, an often neglected but very important aspect of any technology). As one of the authors says:
“With our present work, we can 3D print exactly the needed shape of thermoelectric materials. In addition, the resulting devices exhibit a net cooling effect of 50 degrees in the air. This means that our 3D-printed materials perform similarly to ones that are significantly more expensive to manufacture,” says Xu.
The innovation has to do with the molecular bonding of the materials in the 3D printing process. As Xu says, the performance is the same as existing materials, but with much lower cost to manufacture. As always, shifting to a new technology often means that there is room for further incremental advances to make the advantages even better over time. It may take years for this technology to translate to the market, but it is very possible it may lead directly to a slew of new products and applications.
It may seem like a small thing, but I am looking forward to a future (hopefully not too distant) with full-sized thermoelectric refrigerators, and with computers that don’t need fans or water cooling. Having a silent computer without fans is nice for podcasting – which I know is a particular interest of mine, but one that is also increasingly common.
In general, quality of life will be better if we are surrounded by technology that is silent, small, efficient, cost-effective, and long-lasting. Thermoelectric cooling can make all of that increasingly possible.
The post Thermoelectric Cooling – It’s Cooler Than You Think first appeared on NeuroLogica Blog.
The evolution of the human brain is a fascinating subject. The brain is arguably the most complex structure in the known (to us) universe, and is the feature that makes humanity unique and has allowed us to dominate (for good or ill) the fate of this planet. But of course we are but a twig on a vast evolutionary tree, replete with complex brains. From a human-centric perspective, the closer groups are to humans evolutionarily, the more complex their brains (generally speaking). Apes are the most “encephalized” among primates, as are the primates among mammals, and the mammals among vertebrates. This makes evolutionary sense – that the biggest and most complex brains would evolve from the group with the biggest and most complex brains.
But this evolutionary perspective can be tricky. We can’t confuse looking back through evolutionary time with looking across the landscape of extant species. Any species alive today has just as much evolutionary history behind it as humans. Their brains did not stop evolving once their branch split off from the one that led to humans. There are therefore some groups which have complex brains because they are evolutionarily close to humans, and their brains have a lot of homology with human brains. But there are also other groups that have complex brains because they evolved them completely independently, after their group split from ours. Cetaceans such as whales and dolphins come to mind. They have big brains, but their brains are organized somewhat differently from those of primates.
Another group that is often considered to be highly intelligent, independent from primates, is birds. Birds are still vertebrates, and in fact they are amniotes, the group that contains reptiles, birds, and mammals. It is still an open question as to exactly how much of the human brain architecture was present at the last common ancestor of all amniotes (and is therefore homologous) and how much evolved later independently. To explore this question we need to look at not only the anatomy of brains and the networks within them, but brain cell types and their genetic origins. For example, even structures that currently look very different can retain evidence of common ancestry if they are built with the same genes. Or – structures that look similar may be built with different genes, and are therefore evolutionarily independent, or analogous.
With that background, we now have a publication of several research projects examining the brains of various amniotes – Evolutionary convergence of sensory circuits in the pallium of amniotes. The pallium is basically the cerebral cortex – the layers of gray and white matter that sit on top of the cerebrum. This is the “advanced” part of the brain in vertebrates, and it includes the neocortex in humans. When comparing the pallium of reptiles, birds, and mammals, what did they find?
“Their neurons are born in different locations and developmental times in each species,” explains Dr. García-Moreno, head of the Brain Development and Evolution laboratory, “indicating that they are not comparable neurons derived from a common ancestor.”
Time and location during development are a big clue as to the evolutionary source of different cells and structures. Genes are another way to determine evolutionary source, so a separate analysis looked at the genes that are activated when forming the pallium of these different groups. It turns out – they use very different assemblages of genes in developing the neurons of the pallium. All this strongly suggests that extant reptiles, birds, and mammals evolved similar brain structures independently after they split apart as groups. They use different neuron types derived from different genes, which means those neurons evolved from different ancestral cell types.
To do this analysis they looked at hundreds of genes and cell types across species, creating an atlas of brain cells, and then did (of course) a computer analysis:
“We were able to describe the hundreds of genes that each type of neuron uses in these brains, cell by cell, and compare them with bioinformatics tools.” The results show that birds have retained most inhibitory neurons present in all other vertebrates for hundreds of millions of years. However, their excitatory neurons, responsible for transmitting information in the pallium, have evolved in a unique way. Only a few neuronal types in the avian brain were identified with genetic profiles similar to those found in mammals, such as the claustrum and the hippocampus, suggesting that some neurons are very ancient and shared across species. “However, most excitatory neurons have evolved in new and different ways in each species,” details Dr. García-Moreno.
Convergent evolution like this occurs because nature finds similar solutions to the same problem. But if structures evolved independently, the fine details (like the genes they are built from) will differ. Also, a similar solution is not an identical solution. This means that bird brains are likely to differ in important ways from mammalian brains. Birds have a different type of intelligence than mammals, primates, and humans do (just as dolphins have a different type of intelligence).
This is the aspect of this research that fascinates me the most – how is our view of reality affected by the quirks of our neurological evolution? Our view of reality is mostly a constructed neurological illusion (albeit a useful one). It is probable that chimpanzees see the world in a very similar way to humans, as their brains diverged only recently from our own. But the reality that dolphin or crow brains construct might be vastly different from our own.
There are “intelligent” creatures on Earth that diverge even more from the human model. Octopuses have a doughnut shaped brain that wraps around their esophagus, with many of the neurons also distributed in their tentacles. They have as many neurons as a dog, but they are far more distributed. Their tentacles have some capacity for independent neurological function (if you want to call that “thought”). It is highly likely that the experience of reality of an octopus is extremely different than any mammal.
This line of thinking always leads me to ponder – what might the intelligence of an alien species be like? In science fiction it is a common story-telling contrivance that aliens are remarkably humanoid, not just in their body plan but in their intelligence. They mostly have not only human-level intelligence, but a recognizably human type of intelligence. I think it is far more likely that any alien intelligence, even one capable of technology, would be different from human intelligence in ways difficult (and perhaps impossible) for us to contemplate.
There are some sci fi stories that explore this idea, like Arrival, and I usually find them very good. But still I think fiction is just scratching the surface of this idea. I understand why this is – it’s hard to tell a story with aliens when we cannot even interface with them intellectually – unless that fact is part of the story itself. But still, there is a lot of space to explore aliens that are human enough to have a meaningful interaction, but different enough to feel neurologically alien. There are likely some constants to hold onto, such as pleasure and pain, and self-preservation. But even exploring that idea – what would be the constants, and what can vary, is fascinating.
This all relates to another idea I try to emphasize whenever relevant – we are our neurology. Our identity and experience is the firing of patterns of neurons in our brains, and it is a uniquely constructed experience.
The post Birds Separately Evolved Complex Brains first appeared on NeuroLogica Blog.
My younger self, seeing that title – AI Powered Bionic Arm – would definitely feel as if the future had arrived, and in many ways it has. This is not the bionic arm of the 1970s TV show, however. That level of tech is probably closer to the 2070s than the 1970s. But we are still making impressive advances in brain-machine interface technology and robotics, to the point that we can replace missing limbs with serviceable robotic replacements.
In this video Sarah De Lagarde discusses her experience as the first person with an AI powered bionic arm. This represents a nice advance in this technology, and we are just scratching the surface. Let’s review where we are with this technology and how artificial intelligence can play an important role.
There are different ways to control robotics – you can have preprogrammed movements (with or without sensory feedback), AI can control the movements in real time, you can have a human operator, through some kind of interface including motion capture, or you can use a brain-machine interface of some sort. For robotic prosthetic limbs obviously the user needs to be able to control them in real time, and we want that experience to feel as natural as possible.
The options for robotic prosthetics include direct connection to the brain, which can be from a variety of electrodes. They can be deep brain electrodes, brain surface, scalp surface, or even stents inside the veins of the brain (stentrodes). All have their advantages and disadvantages. Brain surface and deep brain have the best resolution, but they are the most invasive. Scalp surface is the least invasive, but has the lowest resolution. Stentrodes may, for now, be the best compromise, until we develop more biocompatible and durable brain electrodes.
You can also control a robotic prosthetic without a direct brain connection, using surviving muscles as the interface. That is the method used in De Lagarde’s prosthetic. The advantage here is that you don’t need wires in the brain. Electrodes from the robotic limb connect to existing muscles which the user can contract voluntarily. The muscles themselves are not moving anything, but they generate a sizable electrical impulse which can activate the robotic limb. The user then has to learn to control the robotic limb by activating different sequences of muscle contractions.
At first this method of control requires a lot of concentration. I think a good analogy, one used by De Lagarde, is to think of controlling a virtual character in a video game. At first, you need to concentrate on the correct sequence of keys to hit to get the character to do what you want. But after a while you don’t have to think about the keystrokes. You just think about what you want the character to do and your fingers automatically (it seems) go to the correct keys or manipulate the mouse appropriately. The cognitive burden decreases and your control increases. This is the learning phase of controlling any robotic prosthetic.
As the technology developed, researchers learned that providing sensory feedback is a huge help to this process. When the user uses the limb, it can provide haptic feedback, such as vibrations, that corresponds to the movement. Users report this is an extremely helpful feature. It allows for superior and more natural control, and allows them to control the limb without having to look directly at it. Sensory feedback closes the usual feedback loop of natural motor control.
And that is where the technology has gotten to, with continued incremental advances. But now we can add AI to the mix. What role does that potentially play? As the user learns to contract the correct muscles in order to get the robotic limb to do what they want, AI connected to the limb itself can learn to recognize the user’s behavior and better predict what movements they want. The learning curve is now bidirectional.
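To make the “AI learns your muscle patterns” idea concrete, here is a minimal sketch of the general approach – compute simple features from windows of muscle (EMG) signals and classify which movement the user intends. This is my own illustration under assumed details (RMS features, a nearest-centroid classifier, four electrode channels); it is not the actual software in De Lagarde’s prosthetic.

```python
import numpy as np

def rms_features(emg_window: np.ndarray) -> np.ndarray:
    """Root-mean-square amplitude per channel for one window of EMG samples.
    emg_window has shape (n_samples, n_channels)."""
    return np.sqrt(np.mean(emg_window ** 2, axis=0))

class GestureDecoder:
    """Toy nearest-centroid decoder: each intended movement is represented by
    the average feature vector recorded while the user practiced it."""

    def __init__(self):
        self.centroids = {}  # movement name -> mean feature vector

    def train(self, labeled_windows):
        """labeled_windows: iterable of (movement_name, emg_window) pairs."""
        grouped = {}
        for name, window in labeled_windows:
            grouped.setdefault(name, []).append(rms_features(np.asarray(window)))
        self.centroids = {name: np.mean(feats, axis=0)
                          for name, feats in grouped.items()}

    def predict(self, emg_window) -> str:
        """Return the movement whose centroid is closest to this window's features."""
        feats = rms_features(np.asarray(emg_window))
        return min(self.centroids,
                   key=lambda name: np.linalg.norm(feats - self.centroids[name]))

# Hypothetical usage: 200-sample windows from 4 electrodes, simulated with noise
rng = np.random.default_rng(0)
training = [("close_fist", rng.normal(1.0, 0.1, (200, 4))) for _ in range(5)] + \
           [("open_hand",  rng.normal(0.3, 0.1, (200, 4))) for _ in range(5)]
decoder = GestureDecoder()
decoder.train(training)
print(decoder.predict(rng.normal(1.0, 0.1, (200, 4))))  # expected: close_fist
```

A real system would use richer features and a model that keeps adapting to the individual user over time, which is where the bidirectional learning comes in.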
De Lagarde reports that the primary benefit of the AI learning to interpret her movements better is a decrease in the lag time between her wanting to move and the robotic limb moving. At first the delay could be 10 seconds, which is forever if all you want to do is close your fist. But now the delay is imperceptible, with the limb moving essentially in real time. The limb does not feel like her natural limb. She still feels like it is a tool that she can use. But that tool is getting more and more useful and easy to use.
AI may be the perfect tool for brain-machine interface in general, and again in a bidirectional way. What AI is very good at is looking at tons of noisy data and finding patterns. This can help us interpret brain signals, even from low-res scalp electrodes, meaning that by training on the brain waves from one user an AI can learn to interpret what the brain waves mean in terms of brain activity and user intention. Further, AI can help interpret the user’s attempts at controlling a device or communicating with a BMI. This can dramatically reduce the extensive training period that BMIs often require, getting months of user training down to days. It can also improve the quality of the ultimate control achieved, and reduce the cognitive burden of the user.
We are already past the point of having usable robotic prosthetic limbs controlled by the user. The technology is also advancing nicely and quite rapidly, and AI is just providing another layer to the tech that fuels more incremental advances. It’s still hard to say how long it will take to get to the Bionic Man level of technology, but it’s easy to predict better and better artificial limbs.
The post AI Powered Bionic Arm first appeared on NeuroLogica Blog.
It’s probably not a surprise that a blog author dedicated to critical thinking and neuroscience feels that misinformation is one of the most significant threats to society, but I really do think this. Misinformation (false, misleading, or erroneous information) and disinformation (deliberately misleading information) have the ability to cause a disconnect between the public and reality. In a democracy this severs the feedback loop between voters and their representatives. In an authoritarian government it is a tool of control and repression. In either case citizens cannot freely choose their representatives. This is also the problem with extreme gerrymandering – in which politicians choose their voters rather than the other way around.
Misinformation and disinformation have always existed in human society, and it is an interesting question whether or not they have increased recently and to what extent social media has amplified them. Regardless, it is useful to understand what factors contribute to susceptibility to misinformation in order to make people more resilient to it. We all benefit if the typical citizen has the ability to discern reality and identify fake news when they see it.
There has been a lot of research on this question over the years, and I have discussed it often, but it’s always useful to try to gather together years of research into a single systematic review and/or meta-analysis. It’s possible I and others may be selectively choosing or remembering parts of the research to reinforce a particular view – a problem that can be solved with a thorough analysis of all existing data. And of course I must point out that such reviews are subject to their own selection bias, but if properly done such bias should be minimal. The best case scenario is for there to be multiple systematic reviews, so I can get a sense of the consensus of those reviews, spreading out bias as much as possible in the hopes it will average out in the end.
With that in mind, there is a recent meta-analysis of studies looking at the demographics of susceptibility to misinformation. The results mostly confirm what I recall from looking at the individual studies over the years, but there are some interesting wrinkles. They looked at studies which used the news headline paradigm – having subjects judge whether a headline is true or not – “totaling 256,337 unique choices made by 11,561 participants across 31 experiments.” That’s a good chunk of data. First, people were significantly better than chance at determining which headlines were true (68.51%) or false (67.24%). That’s better than a coin flip, but still, about a third of the time subjects in these studies could not tell real from fake headlines. Given the potential number of false headlines people encounter daily, this can result in massive misinformation.
What factors contributed to susceptibility to misinformation, or protected against it? One factor that many people may find surprising, but which I have seen many times over the years, is that education level alone conveyed essentially no benefit. This also aligns with the pseudoscience literature – education level (until you get to advanced science degrees) does not protect against believing pseudoscience. You might also (and I do) view this as a failure of the education system, which is supposed to be teaching critical thinking. This does not appear to be happening to any significant degree.
There were some strong predictors. People who have an analytical thinking style were more accurate on both counts – identifying true and false headlines, but with a bit of a false headline bias. This factor comes up often in the literature. An analytical thinking style also correlates with lower belief in conspiracy theories, for example. Can we teach an analytical thinking style? Yes, absolutely. People have a different inherent tendency to rely on analytical vs intuitive thinking, but almost by definition analytical thinking is a conscious deliberate act and is a skill that can be taught. Perhaps analytical thinking is the thing that schools are not teaching students but should be.
Older age also was associated with higher overall discrimination, and also with a false headline bias, meaning that their default was to be skeptical rather than believing. It’s interesting to think about the interplay between these two things – in a world with mostly false headlines, having a strong skeptical bias will lead to greater accuracy. Disbelieving becomes a good first approximation of reality. The research, as far as I can see, did not attempt to replicate reality in terms of the proportion of true to false headlines. This means that the false bias may be more or less useful in the real world than in the studies, depending on the misinformation ecosystem.
Also being a self-identified Democrat correlated with greater accuracy and also a false bias, while self-identifying as a Republican was associated with lower accuracy and a truth bias (tending to believe headlines were true). Deeply exploring why this is the case is beyond the scope of this article (this is a complex question), but let me just throw out there a couple of the main theories. One is that Republicans are already self-selected for some cognitive features, such as intuitive thinking. Another is that the current information landscape is not uniform from a partisan perspective, and is essentially selecting for people who tend to believe headlines.
Some other important factors emerged from this data. One is that a strong predictor of believing headlines was partisan alignment – people tended to believe headlines that aligned with their self-identified partisan label. This is due to “motivated reflection” (what I generally refer to as motivated reasoning). The study also confirmed something I have also encountered previously – that those with higher analytical thinking skills actually displayed more motivated reasoning when combined with partisan bias. Essentially smarter people have the potential to be better and more confident at their motivated reasoning. This is a huge reason for caution and humility – motivated reasoning is a powerful force, and being smart not only does not necessarily protect us from it, but may make it worse.
Finally, the single strongest predictor of accepting false headlines as true was familiarity. If a subject had encountered the claim previously, they were much more likely to believe it. This is perhaps the most concerning factor to come out of this review, because it means that mere repetition may be enough to get most people to accept a false reality. This has big implications for the “echo chamber” effect on both mainstream and social media. If you get most of your news from one or a few ideologically aligned outlets, you are essentially allowing them to craft your perception of reality.
From all this data, what (individually and as a society) should we do about this, if anything?
First, I think we need to seriously consider how critical thinking is taught (or not taught) in schools. Real critical thinking skills need to be taught at every level and in almost every subject, but also as a separate dedicated course (perhaps combined with some basic scientific literacy and media savvy). Hey, one can dream.
The probability of doing something meaningful in terms of regulating media seems close to zero. That ship has sailed. The fairness doctrine is gone. We live in the proverbial wild west of misinformation, and this is not likely to change anytime soon. Therefore, individually, we can protect ourselves by being skeptical, working our analytical thinking skills, checking our own biases and motivated reasoning, and not relying on a few ideologically aligned sources of news. One good rule of thumb is to be especially skeptical of any news that reinforces your existing biases. But dealing with a societal problem on an individual level is always a tricky proposition.
The post Who Believes Misinformation first appeared on NeuroLogica Blog.
Designing research studies to determine what is going on inside the minds of animals is extremely challenging. The literature is littered with past studies that failed to properly control for all variables and thereby overinterpreted the results. The challenge is that we cannot read the minds of animals, and they cannot communicate directly to us using language. We have to infer what is going on in their minds from their behavior, and inference can be tricky.
One specific question is whether or not our closest relatives have a “theory of mind”. This is the ability to think about what other creatures are thinking and feeling. Typical humans do this naturally – we know that other people have minds like our own, and we can think strategically about the implications of what other people think, how to predict their behavior based upon this, and how to manipulate the thoughts of other people in order to achieve our ends.
Animal research over the last century or so has been characterized by assumptions that some cognitive ability is unique to humans, only to find that this ability exists in some animals, at least in a precursor form. This makes sense: since we evolved from other animals, most of our abilities likely did not come out of nowhere but evolved from more basic precursors.
But it is still undeniably true that humans are unique in the animal kingdom for our sophisticated cognitive abilities. Our language, abstraction, problem solving, and technological ability is significantly advanced beyond any other animal. We therefore cannot just assume that even our closest relatives possess any specific cognitive ability that humans have, and therefore this is a rich target of research.
The specific question of whether or not our ape relatives have a theory of mind remains an open research controversy. Previous research has suggested that they might, but all of this research was designed around the question of whether or not another individual had some specific piece of knowledge. Does the subject ape know that another ape or a human knows a piece of information? This research suggests that they might, but there remains a controversy over how to interpret the results – again, what can we infer from the animal’s behavior?
A new study seeks to inform this discussion by adding another type of research – looking at whether or not a subject ape, in this case a bonobo, understands that a human researcher lacks information. This explores the theory of mind from the perspective of another creature’s ignorance rather than its knowledge. The advantage here, from a research perspective, is that such a theory of mind would require that the bonobo simultaneously knows the relevant piece of information and knows that the human researcher does not – that its mental map of reality is different from another creature’s mental map of reality.
The setup is relatively simple. The bonobo sits across from a human researcher, and at a 90 degree angle from a “game master”. The game master places a treat under one of several cups in full view of the bonobo and the human researcher. They then wait 5 seconds and then the researcher reveals the treat and gives it to the bonobo. This is the training phase – letting the bonobo know that there is a treat there and they will be given the treat by the human researcher after a delay.
In the test phase an opaque barrier is placed between the human researcher and the cups, and this barrier either has a window or it doesn’t. So in some conditions the human researcher knows where the treat is and in others they don’t. The research question is – will the bonobo point to the cup more often and more quickly when the human researcher does not know where the treat is?
The results were pretty solid – the bonobos in multiple tests pointed to the cup with the treat far more often, quickly, and insistently when the human researcher did not know where the treat was. They also ran the experiment with no researcher, to make sure the bonobo was not just reaching for the treat, and again they did not point to the cup when there was no human researcher to communicate to.
No one experiment like this is ever definitive, and it’s the job of researchers to think of other, simpler ways to explain the results. But the behavior of the bonobos in this experimental setup matched what was predicted if they indeed have at least a rudimentary theory of mind. They seemed to know whether the human researcher knew where the treat was, independent of their own knowledge of where the treat was.
This kind of behavior makes sense for an intensely social animal like bonobos. Having a theory of mind about other members of your community is a huge advantage for cooperative behavior. Hunting in particular is an obvious scenario where coordination adds to success (bonobos do, in fact, hunt).
This will not be the final word on this contentious question, but does move the needle one click in the direction of concluding that apes likely have a theory of mind. We will see if these results replicate, and what other research designs have to say about this question.
The post Do Apes Have a Theory of Mind first appeared on NeuroLogica Blog.
Everything, apparently, has a second life on TikTok. At least this keeps us skeptics busy – we have to redebunk everything we have debunked over the last century because it is popping up again on social media, confusing and misinforming another generation. This video is a great example – a short video discussing the “incorruptibility” of St. Teresa of Avila. This is mainly a Catholic thing (but also the Eastern Orthodox Church) – the notion that the bodies of saints do not decompose, but remain in a pristine state after death, by divine intervention. This is considered a miracle, and for a time was a criterion for sainthood.
The video features Carlos Eire, a Yale professor of history focusing on medieval religious history. You may notice that the video does not include any shots of the actual body of St. Teresa. I could not find any online. Her body is not on display like some incorruptibles, but it was exhumed in 1914 and again recently. So we only have the reports of the examiners. This is where much of the confusion is generated – the church defines incorruptible very differently than the believers who then misrepresent the actual evidence. Essentially, if the soft tissues are preserved in any way (so the corpse has not completely skeletonized) and remain somewhat flexible, that’s good enough.
The case of Teresa is typical – one of the recent examiners said, “There is no color, there is no skin color, because the skin is mummified, but you can see it, especially the middle of the face.” So the body is mummified and you can only partly make out the face. That is probably not what most believers imagine when they think of miraculous incorruptibility.
This is the same story over and over – first hand accounts of actual examiners describe a desiccated corpse, in some state of mummification. Whenever they are put on display, that is exactly what you see. Sometimes body parts (like feet or hands) are cut off and preserved separately as relics. Often a wax or metal mask is placed over the face because the appearance may be upsetting to some of the public. The wax masks can be made to look very lifelike, and some viewers may think they are looking at the actual corpse. But the narrative among believers is often very different.
It has also been found that there are many very natural factors that correlate with the state of the allegedly incorruptible bodies. A team of researchers from the University of Pisa explored the microenvironments of the tombs:
“They discovered that small differences in temperature, moisture, and construction techniques lead to some tombs producing naturally preserved bodies while others in the same church didn’t. Now you can debate God’s role in choosing which bodies went into which tombs before these differences were known, but I’m going to stick with the corpses. Once the incorrupt bodies were removed from these climates or if the climates changed, they deteriorated.”
The condition of the bodies seems to be an effect of the environment, not the saintliness of the person in life.
It is also not a secret – though not advertised by promoters of miraculous incorruptibility – that the bodies are often treated in order to preserve them. This goes beyond controlling the environment. Some corpses are treated with acid as a preservative, or oils or sealed with wax.
When you examine each case in detail, or the phenomenon as a whole, what you find is completely consistent with what naturally happens to bodies after death. Most decay completely to skeletons. However, in the right environment, some may be naturally mummified and may partly or completely not go through putrefaction. But if their environment is changed they may then proceed to full decay. And bodies are often treated to help preserve them. There is simply no need for anything miraculous to explain any of these cases.
There is also a good rule of thumb for any such miraculous or supernatural claim – if there were actually cases of supernatural preservation, we would all have seen them. This would be huge news, and you would not have to travel to some church in Italy to get a view of an encased corpse covered by a wax mask.
As a side note, and at the risk of sounding irreverent, I wonder if any maker of a zombie film considered having the corpse of an incorruptible animate. If done well, that could be a truly horrific scene.
The post Incorruptible Skepticism first appeared on NeuroLogica Blog.
On January 20th a Chinese tech company released the free version of their chatbot called DeepSeek. The AI chatbot, by all accounts, is about on par with existing widely available chatbots, like ChatGPT. It does not represent any new abilities or breakthrough in quality. And yet the release shocked the industry, causing the tech-heavy Nasdaq to fall 3%. Let’s review why that is, and then I will give some thoughts on what this means for AI in general.
What was apparently innovative about DeepSeek is that, the company claims, it was trained for only $8 million. Meanwhile, training ChatGPT 4 cost over $100 million. The AI tech industry has been of the belief that further advances in LLMs (large language models – a type of AI) require ever greater investments, with ChatGPT-5 estimated to cost over a billion dollars to train. Being able to accomplish similar results at a fraction of the cost is a big deal. It may also mean that existing AI companies are overvalued (which is why their stocks tumbled).
Further, the company that made DeepSeek used mainly lower power graphics chips. Apparently they did have a hoard of high end chips (whose export to China is banned) but were able to combine them with more basic graphics chips to create DeepSeek. Again, this is what is disruptive – they were able to get similar results with lower cost components and cheaper training. Finally, this innovation represents a change in the balance of AI tech between the US and China. Up until now China has mainly been following the US, copying its technology and trailing by a couple of years. But now a Chinese company has innovated something new, not just copied US technology. This is what has China hawks freaking out. (Mr. President, we cannot allow an AI gap!)
There is potentially some good and some bad to the DeepSeek phenomenon. From a purely industry and market perspective, this could ultimately be a good thing. Competition is healthy. And it is also good to flip the script a bit and show that innovation does not always mean bigger and more expensive. Low cost AI will likely have the effect of lowering the bar for entry so that not only the tech giants are playing. I would also like to see innovation that allows for the operation of AI data centers requiring less energy. Energy efficiency is going to have to be a priority.
But what are the doomsayers saying? There are basically two layers to the concerns – fear over AI in general, and fears over China. Cheaper more efficient AIs might be good for the market, but this will also likely accelerate the development and deployment of AI applications, something which is already happening so fast that many experts fear we cannot manage security risks and avoid unintended consequences.
For example, LLMs can write code, and in some cases they can even alter their own code, even unexpectedly. Recently an AI demonstrated the ability to clone itself. This has often been considered a tipping point where we potentially lose control over AI – AI that can iterate and duplicate itself without human intervention, leading to code no one fully understands. This will make it increasingly difficult to know how an AI app is working and what it is capable of. Cheaper LLMs leading to proliferation obviously makes all this more likely to happen and therefore more concerning. It’s a bit like CRISPR – cheap genetic manipulation is great for research and medical applications, but at some point we begin to get concerned about cheap and easy genetic engineering.
What about the China angle? I wrote recently about the TikTok hubbub, and concerns about an authoritarian rival country having access to large amounts of data on US citizens, as well as the ability to put their thumb on the scale of our internal political discourse (not to mention deliberately dumbing down our citizenry). If China takes the lead in AI this will give them another powerful platform to do the same. At the very least it subjects people outside of China to Chinese government censorship. DeepSeek, for example, will not discuss any details of Tiananmen Square, because that topic is taboo to the Chinese government.
It is difficult to know, while we are in the middle of all of this happening, how it will ultimately play out. In 20 years or so will we look back at this time as a period of naive AI panic, with fears of AI largely coming to nothing? Or will we look back and realize we were all watching a train wreck in slow motion while doing nothing about it? There is a third possibility – the Y2K pathway. Perhaps we pass some reasonable regulations that allow the industry to develop and innovate, while protecting the public from the worst risks and preventing authoritarian governments from getting their hands on a tool of ultimate oppression (at least outside their own countries). Then we can endlessly debate what would have happened if we had not taken steps to prevent disaster.
The post The Skinny on DeepSeek first appeared on NeuroLogica Blog.
There really is a significant mystery in the world of cosmology. This, in my opinion, is a good thing. Such mysteries point in the direction of new physics, or at least a new understanding of the universe. Resolving this mystery – called the Hubble Tension – is a major goal of cosmology. This is a scientific cliffhanger, one which will unfortunately take years or even decades to sort out. Recent studies have now made the Hubble Tension even more dramatic.
The Hubble Tension refers to discrepancies in measuring the rate of expansion of the universe using different models or techniques. We have known since 1929 that the universe is not static but expanding. This was the famous discovery of Edwin Hubble, who noticed that galaxies farther from Earth have a greater red-shift, meaning they are moving away from us faster. This can only be explained as an expanding universe – everything (not gravitationally bound) is moving away from everything else. This became known as Hubble’s Law, and the rate of expansion as the Hubble Constant.
Then in 1998 two teams, the Supernova Cosmology Project and the High-Z Supernova Search Team, analyzing data from Type 1a supernovae, found that the expansion rate of the universe is actually accelerating – it is faster now than in the distant past. This discovery won the Nobel Prize in physics in 2011 for Adam Riess, Saul Perlmutter, and Brian Schmidt. The problem remains, however, that we have no idea what is causing this acceleration, or even any theory about what might have the necessary properties to cause it. This mysterious force was called “dark energy”, and instantly became the dominant form of mass-energy in the universe, making up 68-70% of the universe.
I have seen the Hubble Tension framed in two ways – as a disconnect between what our models of cosmology predict and measurements of the rate of expansion, or as a disagreement between different methods of measuring that expansion rate. The two main methods of measuring the expansion rate use Type 1a supernovae and the cosmic microwave background radiation. Type 1a supernovae are considered standard candles because they have roughly the same absolute magnitude (brightness). They are white dwarf stars in a binary system that are siphoning off mass from their partner. When they reach a critical mass, they go supernova. So every Type 1a goes supernova with the same mass, and therefore the same brightness. If we know an object’s absolute magnitude, then by comparing it to its apparent brightness we can calculate its distance. It was this data that led to the discovery that the universe’s expansion is accelerating.
But using our models of physics, we can also calculate the expansion of the universe by looking at the cosmic microwave background (CMB) radiation, which is the glow left over after the Big Bang. This gets cooler as the universe expands, and so we can calculate that expansion by looking at the CMB close to us and farther away. Here is where the Hubble Tension comes in. Using Type 1a supernovae, we calculate the Hubble Constant to be 73 km/s per megaparsec. Using the CMB the calculation is 67 km/s/Mpc. These numbers are not close enough – they are very different.
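For readers who want to see the arithmetic behind “standard candles”, the relations below are generic textbook formulas (my addition, not taken from the studies discussed here): the distance modulus converts apparent and absolute brightness into distance, Hubble’s law relates recession velocity to distance, and the last line simply quantifies the size of the disagreement between the two quoted values.

```latex
% Distance modulus: apparent magnitude m, absolute magnitude M, distance d
m - M = 5\,\log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right)
% Hubble's law: recession velocity v (from redshift) versus distance d
v = H_0\, d
% The tension in numbers: the two measurement routes disagree by roughly nine percent
\frac{73 - 67}{67} \approx 9\%
```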
At first it was thought that perhaps the difference is due to imprecision in our measurements. As we gather more and better data (such as building a more complete sample of Type 1a supernovae), using newer and better instruments, some hoped that perhaps these two numbers would come into alignment. The opposite has happened – newer data has solidified the Hubble Tension.
A recent study, for example, uses the Dark Energy Spectroscopic Instrument (DESI) to make more precise measurements of Type Ia supernovae in the nearby Coma cluster. This is used to make a more precise calibration of our overall measurements of distance in the universe. With this more precise data, the authors argue that the Hubble Tension should now be considered a “Hubble Crisis” (a term which then metastasized throughout reporting headlines). The bottom line is that there really is a disconnect between theory and measurement.
Even more interesting, another group has used updated Type Ia supernovae data to argue that perhaps dark energy does not have to exist at all. This is their argument: the calculation of the Hubble Constant used to establish an accelerating universe is based on the assumption of isotropy and homogeneity at the scale we are observing. Isotropy means that the universe looks essentially the same in every direction, while homogeneity means that every piece of the universe is essentially the same as every other piece. So no matter where you are and which direction you look in, you will observe about the same density of mass and energy. This is obviously not true at small scales, like within a galaxy, so the real question is – at what scale does the universe become isotropic and homogeneous? Essentially, cosmologists have used the assumption of isotropy and homogeneity at the scale of the observable universe to make their calculations regarding expansion. This is the lambda CDM model (ΛCDM), where lambda is the cosmological constant (representing dark energy) and CDM is cold dark matter.
This group, however, argues that this assumption does not hold. There are vast gaps with little matter, and matter tends to clump along filaments in the universe. If you instead take into account these variations in the density of matter throughout the universe, you get different results for the Hubble Constant. The primary reason for this is General Relativity – part of Einstein’s (highly verified) theory that matter affects spacetime. Where matter is dense, time runs relatively slower. This means that as light travels to us across the universe, more time effectively passes in the empty gaps than in regions with lots of matter. So if you measure the expansion rate of the universe, it will appear faster in the gaps and slower in galaxy clusters. As the universe expands, the gaps expand, meaning the later universe has more gaps and therefore appears to expand faster, while the earlier universe had smaller gaps and therefore appears to expand more slowly. They call this the timescape model.
If the timescape model is true, then the expansion of the universe is not accelerating (it’s just an illusion of our observations and assumptions), and therefore there is no need for dark energy. They further argue that their model is a better fit for the data than ΛCDM (but not by much). We need more and better data to definitively determine which model is correct. They are also not mutually exclusive – timescape may explain some but not all of the observed acceleration, still leaving room for some dark energy.
I find this all fascinating. I will admit I am rooting for timescape. I never liked the concept of dark energy. It was always a placeholder, and it has properties that are really counter-intuitive – for example, dark energy does not dilute as spacetime expands. That does not mean it is false – the universe can be really counterintuitive to us apes with our very narrow perspectives. And I will follow whatever the data says. But wouldn’t it be exciting if an underdog like timescape overturned a Nobel Prize winning discovery, and for at least the second time in my lifetime radically changed how we think about cosmology? Timescape may resolve the Hubble Tension to boot.
Whatever the answer turns out to be – clearly there is something wrong with our current cosmology. Resolving this “crisis” will expand our knowledge of the universe.
The post The Hubble Tension Hubbub first appeared on NeuroLogica Blog.
My recent article on social media fostered good social media engagement, so I thought I would follow up with a discussion of the most urgent question regarding social media – should the US ban TikTok? The Biden administration signed into law legislation that will ban the social media app TikTok on January 19th (deliberately the day before Trump takes office) unless it is sold off to a company that is not believed to be beholden to the Chinese government. The law states it must be divested from ByteDance, the Chinese parent company that owns TikTok. This raises a few questions – is this constitutional, are the reasons for it legitimate, how will it work, and will it work?
A federal appeals court ruled that the ban is constitutional and can take place, and that decision is now before the Supreme Court. We will know soon how they rule, but indicators are they are leaning towards allowing the law to take effect. Trump, who previously tried to ban TikTok himself, now supports allowing the app and his lawyers have argued that he should be allowed to solve the issue. He apparently does not have any compelling legal argument for this. In any case, we will hear the Supreme Court’s decision soon.
If the ban is allowed to take place, how will it work? First, if you are not aware, TikTok is a short form video sharing app. I have been using it extensively over the past couple of years, along with most of the other popular platforms, to share skeptical videos, and have had good engagement. Apparently TikTok is popular because it has a good algorithm that people like. TikTok is already banned on devices owned by Federal employees. The new ban will force app stores in the US to remove the TikTok app and not allow any further updates or support. Existing TikTok users will continue to be able to use their existing apps, but they will not be able to get updates, so the app will eventually become unusable.
ByteDance will have time to comply with the law by divesting TikTok before the app becomes unusable, and many believe they are essentially waiting to see if the law will actually take effect. So it is possible that even if the law does take effect, not much will change for existing users in the short term. If ByteDance refuses to comply, the app will slowly fade away, in which case it is likely that the two main existing competitors, YouTube Shorts and Instagram, will benefit.
Will users be able to bypass the ban? Possibly. You can use a virtual private network (VPN) to change your apparent location to download the app from foreign stores. But even if it is technically possible, this would be a significant hurdle for some users and likely reduce use of the app in the US.
That is the background. Now let’s get to the most interesting question – are the stated reasons for wanting to ban the app legitimate? This is hotly debated, but I think there is a compelling argument to be made for the risks of the app, and it essentially echoes many of the points I made in my previous post. Major social media platforms undeniably have an influence on the broader culture. If the platforms are left entirely open, this allows bad actors unfettered access to tools to spread misinformation, disinformation, radicalization, and hate speech. I have stated that my biggest fear is that these platforms will be used by authoritarian governments to control their society and people. The TikTok ban is about a hostile foreign power using an app to undermine the US.
There are essentially two components to the fear. The first is that TikTok is gathering information on US citizens that can then be weaponized against them or our society. The second is that the Chinese government will use TikTok to spread pro-communist China propaganda and anti-American propaganda, sow civil strife, and influence American politics. We actually don’t have to speculate about whether or not China will do this – TikTok has already admitted that it has identified and shut down massive Chinese government campaigns to influence US users – one with 110,000 accounts, and another with 141,000 accounts. You might argue that the fact that they took them down means they are not cooperating with the Chinese government, but we cannot conclude that. They may be making a public show of taking down some campaigns while leaving others in place. The more important fact here is that the Chinese government is using TikTok to influence US politics and society.
There are also more subtle ways than massive networks of accounts to influence the US through TikTok. American TikTok is different from the Chinese version, and analyses have found that the Chinese version has better quality informational content and more educational content than the US version. China could be playing the long game (actually, not that long, in my opinion) of dumbing down the US. Algorithms can put a light thumb on the scale of information, and that can have massive effects.
It was raised in the comments to my previous post whether all this discussion is premised on the notion that people are easily manipulated pawns in the hands of social media giants. Unfortunately, the answer to that question is a pretty clear yes. There is a lot of social psychology research showing that influence campaigns are effective. Obviously not everyone is affected, but moving the needle 10 or 20 percentage points (or even a lot less) can have a big impact on society. Again – I have been on TikTok for over a year. It is flooded with videos that seem crafted to spread ignorance and anti-intellectualism. I know that most of them are not crafted specifically for this purpose – but that is the effect they have, and if one did intend to craft content for this purpose, they could not do a better job than what is already on the platform. There is also a lot of great science communication content, but it is drowned out by nonsense.
Social media, regardless of who owns it, has all the risks and problems I discussed. But it does seem reasonable that we also do not want to add another layer of having a foreign adversary with significant influence over the platform. Some argue that it doesn’t really matter, social media can be used for influence campaigns regardless of who owns them. But that is hardly reassuring. At the very least I would argue we don’t really know and this is probably not an experiment we want to add on top of the social media experiment itself.
The post Should the US Ban TikTok? first appeared on NeuroLogica Blog.
One of the things I have come to understand from following technology news for decades is that perhaps the most important breakthroughs, and often the least appreciated, are those in material science. We can get better at engineering and making stuff out of the materials we have, but new materials with superior properties change the game. They make new stuff possible and feasible. There are many futuristic technologies that are simply not possible yet, waiting on the back burner for enough breakthroughs in material science to make them feasible. Recently, for example, I wrote about fusion reactors. Is the addition of high temperature superconducting material sufficient to get us over the finish line of commercial fusion, or are more material breakthroughs required?
One area where material properties are becoming a limiting factor is electronics, and specifically computer technology. As we make smaller and smaller computer chips, we are running into the limits of materials like copper to efficiently conduct electrons. Further advance is therefore not just about better technology, but better materials. Also, the potential gain is not just about making computers smaller. It is also about making them more energy efficient by reducing losses to heat when processors work. Efficiency is arguably now a more important factor, as we are straining our energy grids with new data centers to run all those AI and cryptocurrency programs.
This is why a new study detailing a new nanoconducting material is actually more exciting than it might at first sound. Here is the editor’s summary:
Noncrystalline semimetal niobium phosphide has greater surface conductance as nanometer-scale films than the bulk material and could enable applications in nanoscale electronics. Khan et al. grew noncrystalline thin films of niobium phosphide—a material that is a topological semimetal as a crystalline material—as nanocrystals in an amorphous matrix. For films with 1.5-nanometer thickness, this material was more than twice as conductive as copper. —Phil Szuromi
Greater conductance at the nanoscale means we can make thinner wires, and therefore smaller, denser chips. The study also claims that this material has lower resistance, which means greater efficiency – less waste heat. They also claim that manufacturing is similar to that of existing chips, at similar temperatures, so mass production seems feasible (at least it should be). But what about niobium? Another lesson I have learned from examining technology news is to look for weaknesses in any new technology, including the necessary raw materials. I see lots of battery and electronics news, for example, that relies on platinum, which means it’s not going to be economical.
Niobium is considered a rare metal, and is therefore relatively expensive, at about $45 per kilogram. (By comparison, copper goes for about $9.45 per kg.) Most of the world’s niobium is sourced in Brazil (so at least it’s not a hostile or unstable country). It is not considered a “precious” metal like gold or platinum, so that is a plus. About 90% of niobium is currently used in steel alloys, to make steel stronger and tougher. If we start producing advanced computer chips using niobium, what would that do to world demand? How would that affect the price of niobium? Admittedly we are talking about tiny amounts of niobium per chip – the wires are only a few molecules thick – but the world produces a lot of computer chips.
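Just to show the shape of that question, here is a back-of-envelope sketch. The only number taken from above is the $45/kg price – the per-chip amount and the global chip count are placeholder guesses, which is exactly the point:

```python
# Back-of-envelope sketch with placeholder numbers: how one might size the extra
# niobium demand from chipmaking. Only the $45/kg price comes from the post above;
# the other two inputs are hypothetical assumptions to play with.

chips_per_year = 1.0e12        # hypothetical global chip output (units per year)
nb_grams_per_chip = 1.0e-4     # hypothetical: ~0.1 milligram of niobium per chip
nb_price_per_kg = 45.0         # dollars per kilogram, from the post

added_demand_tonnes = chips_per_year * nb_grams_per_chip / 1e6       # grams -> tonnes
added_cost_usd = chips_per_year * (nb_grams_per_chip / 1000) * nb_price_per_kg

print(f"Added demand: ~{added_demand_tonnes:,.0f} tonnes of niobium per year")
print(f"Raw-material cost: ~${added_cost_usd:,.0f} per year")
# With these placeholders: ~100 tonnes and ~$4.5 million per year. The real unknown
# is the per-chip amount, and everything hinges on that guess.
```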
How all this will sort out is unclear, and the researchers don’t get into that kind of analysis. They basically are concerned with the material science and proving their concept works. This is often where the disconnect is between exciting-sounding technology news and ultimate real-world applications. Much of the stuff we read about never comes to fruition, because it simply cannot work at scale or is too expensive. Some breakthroughs do work, but we don’t see the results in the marketplace for 10-20 years, because that is how long it took to go from the lab to the factory. I have been doing this long enough now that I am seeing the results of lab breakthroughs I first reported on 20 years ago.
Even if a specific demonstration is not translatable into mass production, however, material scientists still learn from it. Each new discovery increases our knowledge of how materials work and how to engineer their properties. So even when the specific breakthrough may not translate, it may lead to other spin-offs which do. This is why such a proof-of-concept is exciting – it shows us what is possible and potential pathways to get there. Even if that specific material may not ultimately be practical, it still is a stepping stone to getting there.
What this means is that I have learned to be patient, to ignore the hype, but not to dismiss the science entirely. Everything is incremental. It all adds up, slowly churning out small advances that compound over time. Don’t worry about each individual breakthrough – track the overall progress over time. From 2000 to today, lithium-ion batteries have about tripled their energy capacity, for example, while solar panels have doubled their efficiency. This was due to no single breakthrough, just the cumulative effect of hundreds of experiments. I still like to read about individual studies, but it’s important to put them into context.
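For a sense of what that compounding looks like, here is the quick arithmetic on those rough figures (treating the gains as steady over roughly 25 years – my framing, not a sourced analysis):

```python
# What annual improvement rate turns into a doubling or tripling over ~25 years.

def annual_rate(total_factor: float, years: int) -> float:
    """Constant yearly growth rate that compounds to the given overall factor."""
    return total_factor ** (1 / years) - 1

YEARS = 25  # roughly 2000 to today
print(f"Batteries ~3x overall -> ~{annual_rate(3, YEARS) * 100:.1f}% per year")
print(f"Solar     ~2x overall -> ~{annual_rate(2, YEARS) * 100:.1f}% per year")
# Roughly 4.5% and 2.8% per year – small annual gains that add up to big changes.
```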
The post New Material for Nanoconductors first appeared on NeuroLogica Blog.
Recently Meta decided to end their fact-checkers on Facebook and Instagram. The move has been both hailed and criticized. They are replacing the fact-checkers with an X-style “community notes”. Mark Zuckerberg summed up the move this way: “It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
That is the essential tradeoff – whether you think false positives or false negatives are the bigger problem. Are you concerned more with enabling free speech or with minimizing hate speech and misinformation? Obviously both are important, and an ideal platform would maximize both freedom and content quality. It is becoming increasingly apparent that this choice matters. The major social media platforms are not mere vanity projects – they are increasingly the main source of news and information, and they foster ideological communities. They affect the functioning of our democracy.
Let’s at least be clear about the choice that “we” are making (meaning that Zuckerberg is making for us). Maximal freedom without even basic fact-checking will significantly increase the amount of misinformation and disinformation on these platforms, as well as hate-speech. Community notes is a mostly impotent method of dealing with this. Essentially this leads to crowd-sourcing our collective perception of reality.
Free-speech optimists argue that this is all good, and that we should let the marketplace of ideas sort everything out. I do somewhat agree with this, and the free marketplace of ideas is an essential element of any free and open society. It is a source of strength. I also am concerned about giving any kind of censorship power to any centralized authority. So I buy the argument that this may be the lesser of two evils – but it still comes with some significant downsides that should not be minimized.
What I think the optimists are missing (whether out of ignorance or intention) is that a completely open platform is not a free marketplace of ideas. The free marketplace assumes that everyone is playing fair and acting in good faith. This is a 2005 level of naivete. It leaves the platform open to people who deliberately exploit it as a tool of political disinformation. It also leaves it open to motivated and dedicated ideological groups that can flood the zone with extreme views. Corporations can use the platform for their own influence campaigns and self-serving propaganda. This is not a free and fair marketplace – it means people with money, resources, and motivation can dominate the narrative. We are simply taking control away from fact-checkers and handing it over to shadowy groups with nefarious motivations. And don’t think that authoritarian governments won’t find a way to thrive in this environment as well.
So we have ourselves a Catch-22. We are damned if we do and damned if we don’t. This does not mean, however, that some policies are not better than others. There is a compromise in the middle that allows for a free marketplace of ideas without making it trivially easy to spread disinformation, radicalize innocent users of the platform, and allow for ideological capture. I don’t know exactly what those policies are – we need to continue to experiment to find them. But I don’t think we should throw up our hands in defeat (and acquiescence).
I think we should approach the issue like an editorial policy. Having editorial standards is not censorship. But who makes and enforces the editorial standards? Independent, transparent, and diverse groups with diffuse power and appeals processes is a place to start. No such process will be perfect, but it is likely better than having no filter at all. Such a process should have a light touch, err on the side of tolerance, and focus on the worst blatant disinformation.
I also think that we need to take a serious look at social media algorithms. This also is not censorship, but Facebook, for example, gets to decide how to recommend new content to you. They tweak the algorithms to maximize engagement. How about tweaking the algorithms to maximize quality of content and diverse perspectives instead?
We may need to also address the question of whether or not giant social media platforms represent a monopoly. Let’s face it, they do, and they also concentrate a lot of media into a few hands. We have laws to protect against such things because we have long recognized the potential harm of so much concentrated power. Social media giants have simply side-stepped these laws because they are relatively new and exist in a gray zone. Our representatives have failed to really address these issues, and the public is conflicted so there isn’t a clear political will. I think the public is conflicted partly because this is all still relatively new, but also as a result of a deliberate ideological campaign to sow doubt and confusion. The tech giants are influencing the narrative on how we should deal with tech giants.
I know there is an inherent problem here – social media outlets work best when everyone is using them, i.e. when they have a monopoly. But perhaps we need to find a way to maintain the advantage of an interconnected platform while breaking up the management of that platform into smaller pieces run independently. The other option is to just have a lot of smaller platforms, but what is happening there is that different platforms are becoming their own ideological echo chambers. We seem to have a knack for screwing up every option.
Right now there does not seem to be any way for any of these things to happen. The tech giants are in control and have little incentive to give up their power and monopoly. Government has been essentially hapless on this issue. And the public is divided. Many have a vague sense that something is wrong, but there is no clear consensus on what exactly the problem is and what to do about it.
The post What Kind of Social Media Do We Want? first appeared on NeuroLogica Blog.
How close are we to having fusion reactors actually sending electric power to the grid? This is a huge and complicated question, and one with massive implications for our civilization. I think we are still at the point where we cannot count on fusion reactors coming online anytime soon, but progress has been steady and in some ways we are getting tantalizingly close.
One company, Commonwealth Fusion Systems, claims it will have completed a fusion reactor capable of producing net energy by “the early 2030’s”. A working grid-scale fusion reactor within 10 years seems really optimistic, but there are reasons not to dismiss this claim entirely out of hand. After doing a deep dive my take is that the 2040’s or even 2050’s is a safer bet, but this may be the fusion design that crosses the finish line.
Let’s first give the background and reasons for optimism. I have written about fusion many times over the years. The basic idea is to fuse lighter elements into heavier elements, which is what fuels stars, in order to release excess energy. This process releases a lot of energy, much more than fission or any chemical process. In terms of just the physics, the best elements to fuse are one deuterium atom to one tritium atom, but deuterium to deuterium is also feasible. Other fusion elements are simply way outside our technological capability and so are not reasonable candidates.
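To put a rough number on “a lot of energy”, here is a back-of-envelope comparison using textbook values for the deuterium-tritium reaction (my own sketch, not from the post):

```python
# Order-of-magnitude comparison of deuterium-tritium fusion with a chemical fuel.
# Textbook constants; the gasoline figure is a rough typical value.

MEV_TO_JOULES = 1.602e-13
AMU_TO_KG = 1.661e-27

energy_per_reaction = 17.6 * MEV_TO_JOULES    # D + T -> He-4 + n releases ~17.6 MeV
fuel_mass_per_reaction = 5.03 * AMU_TO_KG     # one deuteron (~2 u) plus one triton (~3 u)

fusion_j_per_kg = energy_per_reaction / fuel_mass_per_reaction
gasoline_j_per_kg = 4.6e7                     # rough energy density of gasoline (J/kg)

print(f"D-T fusion: ~{fusion_j_per_kg:.1e} J per kg of fuel")
print(f"That is ~{fusion_j_per_kg / gasoline_j_per_kg / 1e6:.0f} million times gasoline")
# ~3.4e14 J/kg, or roughly 7 million times the energy per kilogram of a chemical fuel.
```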
There are also many reactor designs. Basically you have to squeeze the elements close together at high temperature so as to have a sufficiently high probability of fusion. Stars use gravitational confinement to achieve this condition at their cores. We cannot do that on Earth, so we use one of two basic methods – inertial confinement and magnetic confinement. Inertial confinement includes a variety of methods that squeeze hydrogen atoms together using inertia, usually from implosions. These methods have achieved ignition (burning plasma) but are not really a sustainable method of producing energy. Using laser inertial confinement, for example, destroys the container in the process.
By far the best method, and the one favored by physics, is magnetic confinement. Here too there are many designs, but the one that is closest to the finish line (and the one used by CFS) is called a tokamak. This is a torus shaped in a specific way to control the flow of the plasma and avoid the kinds of turbulence that would prevent fusion.
In order to achieve the energies necessary to create sustained fusion you need really powerful magnetic fields, and the industry has essentially been building larger and larger tokamaks to achieve this. CFS has the advantage of being the first to design a reactor using the latest high temperature superconductors (HTS), which really are a game changer for tokamaks. They allow for a smaller design with more powerful magnets that use less energy. Without these HTS I don’t think there would even be a question of feasibility.
CFS is currently building a test facility called the SPARC reactor – the smallest possible ARC reactor, where ARC in turn stands for “affordable, robust, compact”. This is a test facility and will not be commercial. Meanwhile they are planning their first ARC reactor – a commercial, grid-scale plant in Virginia – which they claim will produce 400 megawatts of power.
Reasons for optimism – the physics all seems to be good here. CFS was founded by engineers and scientists from MIT – essentially some of the best minds in fusion physics. They have mapped out the most viable path to commercial fusion, and the numbers all seem to add up.
Reasons for caution – they haven’t done it yet. This is not, at this point, so much a physics problem as an engineering problem. As they push to higher energies, and incorporate the mechanisms necessary to bleed off energy to heat water to run a turbine, they may run into problems they did not anticipate. They may hit a hurdle that suddenly adds 10 or 20 years to the development process. Again, my take is that the 2035 timeline holds only if everything goes perfectly well. Any bumps in the road will keep adding years. This is a project at the very limits of our technology (as complex as going to the Moon), and delays are the rule, not the exception.
So – how close are they? The best result so far is from the JET tokamak, which produced 67% of the energy that was put into the plasma. That sounds close, but keep in mind that 100% is merely break-even. Also – this is heat energy, not electricity. Modern fission reactors convert heat to electricity with about 30% efficiency, which is a reasonable assumption for fusion as well. And this is fusion energy gain, not total energy – it counts only the energy that goes into the plasma, not the total energy needed to run the reactor.
The bottom line is that they probably need to increase their energy output by an order of magnitude or more in order to be commercially viable. Just producing a little bit of net energy is not enough. They need massive excess energy (meaning electricity) in order to justify the expense. So really we are nowhere near net total electricity in any fusion design. CFS is hoping that their fancy new HTS magnets will get them there. They actually might – but until they do, it’s still just an informed hope.
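Here is a toy version of that arithmetic, with an assumed heating power and the roughly 30% heat-to-electricity conversion mentioned above – the exact numbers are placeholders, but they show why a little net fusion energy is not nearly enough:

```python
# Toy net-electricity estimate. Assumptions: all plasma heating power is supplied
# as electricity (real heating systems are less efficient, which raises the bar),
# and ~30% of the fusion heat can be converted back to electricity.

THERMAL_TO_ELECTRIC = 0.30
HEATING_POWER_MW = 50.0   # hypothetical electrical power used to heat the plasma

def net_electric_mw(q_plasma: float) -> float:
    """Net electrical output for a given plasma energy gain Q."""
    fusion_heat_mw = q_plasma * HEATING_POWER_MW
    return fusion_heat_mw * THERMAL_TO_ELECTRIC - HEATING_POWER_MW

for q in (0.67, 1.0, 3.3, 10.0):
    print(f"Plasma gain Q = {q:>5}: net electricity = {net_electric_mw(q):7.1f} MW")
# Q = 0.67 (JET's best) and even Q = 1 are deeply negative. Around Q = 3.3 you only
# break even on the heating power, and you need Q of 10 or more before there is much
# left over - and this still ignores everything else it takes to run the plant.
```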
I do hope that my pessimism, born of decades of overhyped premature tech promises, is overcalling it in this case. I hope these MIT plasma jocks can get it done, somewhere close to the promised timeline. The sooner the better, in terms of global warming. Let’s explore for a bit what this would mean.
Obviously the advantage of fusion reactors like the planned ARC design, if it works, is that they produce a lot of carbon-free energy. They can be plugged into existing connections to the grid, and they produce stable, predictable power. They produce only low-level nuclear waste. They also have a relatively small land footprint for the energy produced. If the first ARC reactor works, we would need to build thousands around the world as fast as possible. If they are profitable, this will happen. But the industry can also be supported by targeted regulations. Such reactors could replace fossil fuel power plants, and then eventually fission reactors.
Once we develop viable fusion energy, it is very likely that this will become our primary energy source essentially forever – at least for hundreds if not thousands or tens of thousands of years. It gets hard to predict technology that far out, but there are really no candidates for advanced energy sources that are clearly better. Matter-antimatter could theoretically work, but why bother messing around with antimatter, which is hard to make and contain? The advantage is probably not enough to justify it. Other energy sources, like harnessing black holes, are theoretical and extremely exotic – perhaps something for a civilization millions of years more advanced than we are.
Even if some really advanced energy source does become possible, fusion will likely remain in the sweet spot in terms of producing large amounts of energy cleanly and sustainably. Once we cross the line to producing net total electricity with fusion, incremental advances in material science and the overall technology will just keep making fusion better. There will likely still be a role for distributed energy like solar, but fusion will likely replace all large centralized sources of power.
The post Plan To Build First Commercial Fusion Reactor first appeared on NeuroLogica Blog.