
neurologicablog Feed

Your Daily Fix of Neuroscience, Skepticism, and Critical Thinking

How Humans Can Adapt to Space

Fri, 01/26/2024 - 5:11am

My recent article on settling Mars has generated a lot of discussion, some of it around the basic concept of how difficult it is for humans to live anywhere but a thin envelope of air hugging the surface of the Earth. This is undoubtedly true, as I have discussed before – we evolved to be finely adapted to Earth. We are only comfortable in a fairly narrow range of temperature. We need a fairly high percentage of oxygen (Earth’s is 21%) at sufficient pressure, and our atmosphere can’t have too much of other gases that might cause us problems. We are protected from most radiation that bathes the universe. Our skin and eyes have adapted to the light of our sun, both in frequency and intensity. And we are adapted to Earth’s surface gravity, with anything significantly stronger or weaker causing problems for our biology.

Space itself is an extremely unforgiving environment requiring a total human habitat, with the main current technological challenges being artificial gravity and radiation protection. But even on other worlds it is extremely unlikely that all of the variables will be within the range of human survival, let alone comfort and thriving. Mars, for example, has an atmosphere that is too thin and contains no oxygen, no magnetic field to protect from radiation; it’s too cold, and its surface gravity is too low. It’s better than the cold vacuum of space, but not by much. You still need essentially a total habitat, and we will probably have to go underground for radiation protection. Gravity is 38% that of Earth’s, which is probably not ideal for human biology. In space, with microgravity, at least you can theoretically use rotation to simulate gravity.

In addition to adapting off-Earth environments to humans, is it feasible to adapt humans to other environments? Let me start with some far-future options, then finish with what are likely to be the nearest-future options.

Perhaps the optimal way to most fully adapt humans to alien environments is to completely replace the human body with one that is adapted. This could be a robot body, a genetically engineered biological one, or a cyborg combination. How does one replace their body? One option might be taking virtual control of the “brain” of the avatar (yes, like in the movie, Avatar). This could be through a neural link, or even just through virtual reality. This way you can remain safely ensconced in a protective environment, while your Avatar runs around a world that would instantly kill you. We are closer to having robotic avatars than biological ones, and to a limited degree we are already doing this through virtual presence technology.

But this approach has a severe limitation – you have to be relatively close to your Avatar. If, for example, you wanted to explore the Martian surface with an avatar, you would need to be in Mars orbit or on the surface of Mars. You could not be on Earth, because the delay in communication would be too great. So essentially this approach is limited by the speed of light.
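To put rough numbers on that delay (a back-of-the-envelope sketch; the Earth-Mars distances are approximate orbital extremes, not mission-specific figures):

```python
# Rough one-way light delay between Earth and Mars at orbital extremes.
C_KM_PER_S = 299_792  # speed of light, km/s

distances_km = {
    "closest approach (~54.6 million km)": 54.6e6,
    "farthest separation (~401 million km)": 401e6,
}

for label, d_km in distances_km.items():
    delay_min = d_km / C_KM_PER_S / 60
    print(f"{label}: one-way delay of about {delay_min:.0f} minutes")

# Roughly 3 minutes at closest approach and about 22 minutes at the farthest
# separation, each way. Far too long for real-time control of a surface avatar.
```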

You could also “upload” your mind into the Avatar, so that real time communication is not required. I put “upload” in quotes, because in reality you would be copying the structure and function of your brain. The avatar would not be you, it would be a mental copy of you operating the avatar (again, whether machine or biological). That copy would feel like it is you, and so that would be a way for “you” to explore a hostile environment, but it would not be the original you. However, it may also be possible, once the exploration has concluded, to copy the acquired memories back to you. It may also be possible to do this as a streaming function. In this case the distance does not matter as much, because you have a local copy with real time interaction, while you are receiving the feed in a constant stream, just delayed by the communication time. Because the avatar is a copy of you, the original you would not need to send instructions, only receive the feed. So you could be safely on Earth while your mental twin avatar is running around on Mars.

A more advanced version of this is similar to the series Altered Carbon. In this hypothetical future people can have their minds transferred (again, copied) to a “stack” which is essentially a computer. The stack, which is now you, operates your body, which is called your “sleeve”. This means, however, that you can change sleeves by pulling out your stack and plugging it into a different sleeve. Such a sleeve could be genetically engineered for a specific environment, or again it could be a robot. This envisions a future in which humans are really digital information that can inhabit biological, robotic, or virtual entities.

So far these options are pretty far in the future. The closest would be using virtual reality to control a robot, which is currently very limited, but I can see this being fairly robust by the time we could, for example, get to Mars. Another approach which is also fairly near term (at least nearer term than the other options) is to use genetic engineering, medical interventions, and cyborg implants to enhance our existing bodies. This does not involve any avatars or neural transfer, just making our existing bodies better able to handle harsh environments.

For existing adults, genetic engineering options are likely limited, but could still be helpful. For example, inserting a gene that produces a protein derived from tardigrades could protect our DNA from radiation damage. We could also adapt our skin to block out more radiation, and be resistant to UV damage. We could adapt our bones and muscles to different surface gravities. We may even find ways to adapt to microgravity, allowing our bodies to better handle fluid shifts without gravity.

For adults, using medical interventions, such as drugs, is another option. Drugs could theoretically compensate for lower oxygen tension, radiation damage, and altered cardiac function, neutralize toxins, and blunt other physiological responses to alien environments. Cyborg implants are yet another option, reinforcing our bones, enhancing cardiac function, shielding against light or radiation, or adapting to low pressure.

But we could more profoundly adapt humans to alien environments with germ line genetic engineering – altering the genes that control development from an embryo. We could then make profound alterations to the anatomy and physiology of humans. This would create, in essence, a subspecies of humans, adapted to a specific environment – Homo martianus or Homo lunus. Then we could theoretically include extreme adaptations to temperature, air pressure, oxygen tension, radiation exposure, and surface gravity. These subspecies would not be adapted to Earth, and may find Earth as hostile as we find Mars. They would be an offshoot of humanity.

Even the nearest of these technologies will take a long time to develop. For now we need to carry our Earth environment with us, even if it is within the confines of a spacesuit. But it seems likely we will find ways to adapt ourselves to space to some degree.

The post How Humans Can Adapt to Space first appeared on NeuroLogica Blog.

Categories: Skeptic

DNA Directed Assembly of Nanomaterials

Thu, 01/25/2024 - 4:54am

Arguably the type of advance that has the greatest impact on technology is material science. Technology can advance by doing more with the materials we have, but new materials can change the game entirely. It is no coincidence that we mark different technological ages by the dominant material used, such as the bronze age and iron age. But how do we invent new materials?

Historically new materials were mostly discovered, not invented. Or we discovered techniques that allowed us to use new materials. Metallurgy, for example, was largely about creating a fire hot enough to smelt different metals. Sometimes we literally discovered new elements, like aluminum or tungsten, with desirable properties. We also figured out how to make alloys, combining different elements to create a new material with unique or improved properties. Adding tin to copper made a much stronger and more durable metal, bronze. While the hunt for new usable elements is basically over, there are so many possible combinations that researching new alloys is still a viable way to find new materials. In fact a recent class of materials known as “superalloys” has incredible properties, such as extreme heat resistance.

If there are no new elements (other than really big and therefore unstable artificial elements), and we already have a mature science of making alloys, what’s next? There are also chemically based materials, such as polymers, resins, and composites, that can have excellent properties, including the ability to be manufactured easily. Plastics clearly had a dramatic effect on our technology, and some of the strongest and lightest materials we have are carbon composites. But again it feels like we have already picked the low-hanging fruit here. We still need new better materials.

It seems like the new frontier of material science is nanostructured material. Now it’s not only about the elements that a material is made from, but also about how the atoms of that material are arranged at the nanoscale. We are just at the beginning of this technology. This approach has yielded what we call metamaterials – substances with properties determined by their structure, not just their composition. Some metamaterials can accomplish feats previously thought theoretically impossible, like focusing light beyond the diffraction limit. Another class of structured material is two-dimensional material, such as carbon nanofibers.

The challenge of nanostructured materials, however, is manufacturing them with high quality and high output. It’s one thing to use a precise technique in the lab as a proof of concept, but unless we can mass produce such materials, they will benefit only the highest-end users. This is still great for institutions like NASA, but we probably won’t be seeing such materials on the desktop or in the home.

This brings us to the topic of today’s post – using DNA in order to direct the assembly of nanomaterials. This is already in use, and has been for about a decade, but a recent paper highlights some advances in this technique: Three-dimensional nanoscale metal, metal oxide, and semiconductor frameworks through DNA-programmable assembly and templating.

There are a few techniques being used here. DNA is a nanoscale molecule that essentially evolved to direct the assembly of proteins. The same process is not being used here, but rather the programmable structure of DNA means we can exploit it for other purposes. The first step in the process being outlined here is to use DNA in order to direct the assembly of a lattice out of inorganic material. They make the analogy that the lattice is like the frame of a house. It provides the basic structure, but then you install specific structures (like copper pipes for water and insulation) to provide specific functionality.

So they then use two different methods to infiltrate the lattice with specific materials to provide the desired properties – semiconductors, insulators, magnetic conduction, etc. One method is vapor-phase infiltration, which introduces the desired elements as a gas, which can penetrate deeply into the lattice structure. The other is liquid-phase infiltration, which is better at depositing substances on the surface of the lattice.

This combination of methods addresses some of the challenges of DNA-directed assembly. First, the process is highly programmable. This is critical for allowing the production of a variety of 3D nanostructured materials with differing properties. Second, the process takes advantage of self-assembly, which is another concept critical to nanostructured materials. When you get down to the 30 nm scale, you can’t really place individual atoms or molecules in the desired locations. You need a manufacturing method that causes the molecules to automatically go where they are supposed to – to self-assemble. This is what happens with infiltration of the lattice.

The researchers also hope to develop a method that can work with a variety of materials to produce a range of desirable structures in a process that can be scaled up to manufacturing levels. They demonstrate at least the first two properties here, and show the potential for mass production, but of course that has yet to be actually demonstrated. They worked with a variety of materials, including: “zinc, aluminum, copper, molybdenum, tungsten, indium, tin, and platinum, and composites such as aluminum-doped zinc oxide, indium tin oxide, and platinum/aluminum-doped zinc oxide.”

I don’t know if we are quite there yet, but this seems like a big step toward the ultimate goal of mass producing specific 3D nanostructured inorganic materials that we can program to have a range of desirable properties. One day the computer chips in your smartphone or desktop may come off an assembly line using a process similar to the one outlined in this paper. Or this may allow for new applications that are not even possible today.

The post DNA Directed Assembly of Nanomaterials first appeared on NeuroLogica Blog.

Categories: Skeptic

Microbes Aboard the ISS

Tue, 01/23/2024 - 5:00am

As I have written many times, including in yesterday’s post, people occupying space is hard. The environment of space, or really anywhere not on Earth, is harsh and unforgiving. One of the issues, for example, rarely addressed in science fiction or even discussions of space travel, is radiation. We don’t really have a solution to deal with radiation exposure outside the protective atmosphere and magnetic field of Earth.

There are other challenges, however, that do not involve space itself but just the fact that people living off Earth will have to be in an enclosed environment. Whether this is a space station or habitat on the Moon or Mars, people will be living in a relatively small finite physical space. These spaces will be enclosed environments – no opening a window to let some fresh air in. Our best experience so far with this type of environment is the International Space Station (ISS). By all accounts, the ISS smells terrible. It is a combination of antiseptic, body odor, sweat, and basically 22 years of funk.

Perhaps even worse, the ISS is colonized with numerous pathogenic bacteria and different types of fungus. The bacteria are mainly human-associated – the kinds of critters that live on and in humans. According to NASA:

The researchers found that microbes on the ISS were mostly human-associated. The most prominent bacteria were Staphylococcus (26% of total isolates), Pantoea (23%) and Bacillus (11%). They included organisms that are considered opportunistic pathogens on Earth, such as Staphylococcus aureus (10% of total isolates identified), which is commonly found on the skin and in the nasal passage, and Enterobacter, which is associated with the human gastrointestinal tract.

This is similar to what one might find in a gym or crowded office space, but worse. This is something I have often considered – when establishing a new environment off Earth, what will the microbiota look like? On the one hand, establishing a new base is an opportunity to avoid many infectious organisms. Having strict quarantine procedures can create a settlement without flu viruses, COVID, HIV or many of the germs that plague humans. I can imagine strict medical examinations and isolation prior to gaining access to such a community. But can such efforts to make an infection-free settlement succeed?

What is unavoidable is human-associated organisms. We are colonized with bacteria, most of which are benign, but some of which are opportunistic pathogens. We live with them, but they will infect us if they are given the chance. There are also viruses that many of us harbor in a dormant state, but can become activated, such as chicken pox. It would be near impossible to find people free of any such organisms. Also – in such an environment, would the population become vulnerable to infection because their immune systems will become weak in the absence of a regular workout? (The answer is almost certainly yes.) And would this mean that they are a setup for potentially catastrophic disease outbreaks when an opportunistic bug strikes?

In the end it is probably impossible to make an infection-free society. The best we can do is keep out the worst bugs, like HIV, but we will likely never be free of the common cold and living with bacteria.

There is also another issue – food contamination. There has been a research program aboard the ISS to grow food on board, like lettuce, as a supplemental source of fresh produce. However, longer term, NASA would like to develop an infrastructure of self-sustaining food production. If we are going to settle Mars, for example, it would be best to be able to produce all necessary food on Mars. But our food crops are not adapted to the microgravity of the ISS, or the low gravity of the Moon or Mars. A recent study shows that this might produce unforeseen challenges.

First, prior research has shown that the lettuce grown aboard the ISS is colonized with lots of different bacteria, including some groups capable of being pathogens. There have not been any cases of foodborne illness aboard the ISS, which is great, so the amounts and specific bacteria so far have not caused disease (also thoroughly washing the lettuce is probably a good idea). But it shows there is the potential for bacterial contamination.

What the new study looks at is the behavior of the stomata of the lettuce leaves under simulated microgravity (they slowly rotate the plants so they can never orient to gravity). The stomata of plants are little openings through which they breathe. They can open and close these stomata under different conditions, and will generally close them when stressed by bacteria to prevent the bugs from entering and causing infection. However, under simulated microgravity the lettuce leaves opened rather than closed their stomata in response to a bacterial stress. This is not good and would make them vulnerable to infection. Further, there are friendly bacteria that cause the stomata to close, helping them to defend against harmful bacteria. But in microgravity these friendly bacteria failed to cause stomata closure.

This is concerning, but again we don’t know how practically relevant this is. We have too little experience aboard the ISS with locally grown plants. It suggests, however, that we can choose, or perhaps cultivate or engineer, plants that are better adapted to microgravity. We can test to see which cultivars will retain their defensive stomata closure even in simulated microgravity. Once we do that we may be able to determine which gene variants convey that adaptation. This is the direction the researchers hope to go next.

So yeah, while space is harsh and the challenges immense, people are clever and we can likely find solutions to whatever space throws at us. Likely we will need to develop crops that are adapted to microgravity, lunar gravity, and Martian gravity. We may need to develop plants that can grow in treated Martian soil, or lunar regolith. Or perhaps off Earth we need to go primarily hydroponic.

I also wonder how solvable the funk problem is. It seems likely that a sufficiently robust air purifier could make a huge impact. Environmental systems will not only need to scrub CO2, add oxygen, and manage humidity and temperature in the air aboard a station, ship, or habitat. They will also have to have a serious defunking ability.

 

The post Microbes Aboard the ISS first appeared on NeuroLogica Blog.

Categories: Skeptic

Is Mars the New Frontier?

Mon, 01/22/2024 - 5:08am

In the excellent sci fi show, The Expanse, which takes place a couple hundred years in the future, Mars has been settled and is an independent self-sustaining society. In fact, Mars is presented as the most scientifically and technologically advanced society of humans in the solar system. This is presented as being due to the fact that Martians have had to struggle to survive and build their world, and that struggle led to a culture of innovation and dynamism.

This is a version of the Turner thesis, which has been invoked as one justification for the extreme expense and difficulty of settling locations off Earth. I was recently pointed to this article discussing the Turner thesis in the context of space settlement, which I found interesting. The Turner thesis is that the frontier mindset of the old West created a culture of individualism, dynamism, and democracy that is a critical part of the success of America in general. This theory was popular in the late 19th and early 20th centuries, but fell out of academic favor in the second half of the 20th century. Recent papers trying to revive some version of it are less than compelling, showing that frontier exposure correlates only very softly with certain political and social features, and that those features are a mixed bag rather than an unalloyed good.

The article is generally critical of the notion that some version of the Turner thesis should be used to justify settling Mars – that humanity would benefit from a new frontier. I basically agree with the article that the Turner thesis is rather weak and complex, and that analogies between the American Western frontier and Mars (or other space locations) are highly problematic. In every material sense, it’s a poor analogy. On the frontier there was already air, food, soil, water, and other people living there. None of those things (as far as we know) exists on Mars.

But I do think that something closer to The Expanse hypothesis is not unreasonable. Just as the Apollo program spawned a lot of innovation and technology, solving the problems of getting to and settling Mars would likely have some positive technological fallout. However, I would not put this forward as a major reason to explore and settle Mars. We could likely dream up many other technological projects here on Earth that would be better investments with a much higher ROI.

I do support space exploration, including human space exploration, however. I largely agree with those who argue that robots are much better adapted to space, and sending our robotic avatars into space is much cheaper and safer than trying to keep fragile biological organisms alive in the harsh environment of space. For this reason I think that most of our space exploration and development should be robotic.

I also think we should continue to develop our ability to send people into space. Yes, this is expensive and dangerous, but I think it would be worth it. One reason is that I think humanity should become a multi-world spacefaring species. This will be really hard in the early days (now) but there is every reason to believe that technological advancements will make it easier, cheaper, and safer. This is not just as a hedge against extinction, but also opens up new possibilities for humanity. It is also part of the human psyche to be explorers, and this is one activity that can have a unifying effect on shared human culture (depending, of course, on how it’s done).

There is still debate about the effectiveness of sending humans into space for scientific activity. Sure, our robots are capable and getting more capable, but for the time-being they are no substitute for having people on site actively carrying out scientific exploration. Landers and rovers are great, but imagine if we had a team of scientists stationed on Mars able to guide scientific investigations, react to findings, and take research in new directions without having to wait 20 years for the next mission to be designed and executed.

There are also romantic reasons which I don’t think can be dismissed. Being a species that explores and lives in space can have a profound effect on our collective psyche. If nothing else it can inspire generations of scientists and engineers, as the Apollo program did. Sometimes we just need to do big and great things. It gives us purpose and perspective and can inspire further greatness.

In terms of cost the raw numbers are huge, but then anything the government does on that scale has huge dollar figures. But comparatively, the amount of money we spend on space exploration is tiny compared to other activity of dubious or even whimsical value. NASA’s annual budget is around $23 billion, but Americans spend over $12 billion on Halloween each year. I’m not throwing shade on Halloween, but it’s hard to complain about the cost of NASA when we so blithely spend similar amounts on things of no practical value. NASA is only 0.48% of our annual budget. It’s almost a round-off error. I know all spending counts and it all adds up, but this does put things into perspective.

Americans also spent $108 billion on lottery tickets in 2022. Those have, statistically speaking, almost no value. People are essentially buying the extremely unlikely dream of winning, which most will not. I would much rather buy the dream of space exploration. In fact, that may be a good way to supplement NASA’s funding. Sell the equivalent of NASA lottery tickets for a chance to take an orbital flight, or go to the ISS, or perhaps name a new feature or base on Mars. People spend more for less.

The post Is Mars the New Frontier? first appeared on NeuroLogica Blog.

Categories: Skeptic

Why Do Species Evolve to Get Bigger or Smaller

Fri, 01/19/2024 - 4:58am

Have you heard of Cope’s Rule or Foster’s Rule? American paleontologist Edward Drinker Cope first noticed a trend in the fossil record that certain animal lineages tend to get bigger over evolutionary time. Most famously this was noticed in the horse lineage, beginning with small dog-sized species and ending with the modern horse. Bristol Foster noticed a similar phenomenon specific to islands – populations that find their way to islands tend to either increase or decrease in size over time, depending on the availability of resources. This may also be called island dwarfism or gigantism (or insular dwarfism or gigantism).

When both of these things happen in the same place there can be some interesting results. On the island of Flores a human lineage, Homo floresiensis (the Hobbit species), experienced island dwarfism, while the local rats experienced island gigantism. The result was people living with rats the relative size of large dogs.

Based on these observations, two questions emerge. The first (and always important and not to be skipped) is – are these trends actually true, or are the initial observations just quirks or hyperactive pattern recognition? For example, with horses, there are many horse lineages and not all of them got bigger over time. Is this just cherry-picking to notice the one lineage that survived today as modern horses? If some lineages are getting bigger and some are getting smaller, is this just random evolutionary change without necessarily any specific trend? I believe this question has been answered and the consensus is that these trends are real, although more complicated than first observed.

This leads to the second question – why? We have to go beyond just saying “evolutionary pressure” to determine if there is any unifying specific evolutionary pressure that is a dominant determinant of trends in body size over time. Of course, it’s very likely that there isn’t one answer in every case. Evolution is complex and contingent, and statistical trends in evolution over time can emerge from many potential sources. But we do see these body size trends a lot, and it does suggest there may be a common factor.

Also, the island dwarfism/gigantism thing seems to be real, and the fact that these trends correlate so consistently with migrating to an island suggests a common evolutionary pressure. Foster, who published his ideas in 1964, thought it was due to resources. Species get smaller if island resources are scarce, or they get bigger if island resources are abundant due to relative lack of competition. Large mainland species that find themselves on islands may find a smaller region in which to operate, with far fewer resources, so the smaller critters will have an advantage. Also, smaller species can have a shorter gestation time and more rapid generational time, which can provide advantages in a stressed environment. Predator species may then become smaller in order to adapt to smaller prey (which apparently is also a thing).

At the small end of the size range, getting bigger has advantages. Larger animals can go longer between meals and can roam across a larger range looking for food. Again, this is what we see: the largest animals become smaller and the smallest animals become larger, meeting in the middle (hence the Hobbits with dog-sized rats).

Now a recent study looks at these ideas with computer evolutionary simulations. The simulations pretty much confirm what I summarized above but also add some new wrinkles. The simulations show that a key factor, beyond the availability of resources, is competition for those resources. First, it showed a general trend of increasing body size due to competition between species. When different species compete in the same niche, the larger animals tend to win out. They state this as Cope’s rule applying when interaction between species is determined largely by their body size.

The simulations also showed, however, that when the environment is stressed, the larger species were more vulnerable to extinction. Larger species, with their long gestation and generational times, have relatively fewer individuals. Smaller species could weather the strain better and bounce back quicker, then subsequently undergo a slow increase in size when the environment stabilizes. This leads to what they call a recurrent Cope’s Rule – each subsequent pulse of gigantism gets even bigger.

The simulations also confirmed island dwarfism – that species tend to shrink over time when there is overlap in niches and resource use, which contributes to a decreased resource availability. They call this an inverse Cope’s Rule. They don’t refer to Foster’s Rule, I think because the simulations were independent of being on an island or in an insular environment (which is the core of Foster’s observation). Rather, species become smaller when their interaction is determined more by the environment and resource availability than their relative body size (which could be the case on islands).

So the simulations don’t really change anything dramatically. They largely confirm Cope’s Rule and Foster’s Rule, and add the layer that niche overlap and competition are important, not just the total availability of resources.
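As a rough illustration of the dynamic being described (a minimal toy sketch with made-up parameters, not the researchers' actual model), here is a simulation in which body size wins competitions but periodic environmental stress preferentially removes the largest species:

```python
import random

random.seed(1)
N_SPECIES = 40
sizes = [random.uniform(1.0, 2.0) for _ in range(N_SPECIES)]  # arbitrary units

for generation in range(1, 501):
    # Size-determined competition: the larger of a random pair grows a bit,
    # the smaller shrinks a bit (a stand-in for Cope's Rule pressure).
    for _ in range(10):
        a, b = random.sample(range(N_SPECIES), 2)
        winner, loser = (a, b) if sizes[a] > sizes[b] else (b, a)
        sizes[winner] *= 1.02
        sizes[loser] *= 0.99

    # Periodic environmental stress: species above the median size go extinct
    # and are replaced by new small-bodied species.
    if generation % 100 == 0:
        threshold = sorted(sizes)[N_SPECIES // 2]
        sizes = [random.uniform(1.0, 2.0) if s > threshold else s for s in sizes]

    if generation % 50 == 0:
        print(f"gen {generation:3d}: mean size = {sum(sizes) / N_SPECIES:.2f}")

# Mean body size creeps upward between stress events and crashes at each one,
# giving the pulsed pattern of gigantism and turnover described above.
```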

The post Why Do Species Evolve to Get Bigger or Smaller first appeared on NeuroLogica Blog.

Categories: Skeptic

Converting CO2 to Carbon Nanofibers

Thu, 01/18/2024 - 4:56am

One of the dreams of a green economy, where the amount of CO2 in the atmosphere is stable and not slowly increasing, is the ability to draw CO2 from the atmosphere and convert it to a solid form. Often referred to as carbon capture, some form of this is going to be necessary eventually, and most climate projections include the notion of carbon capture coming online by 2050. Right now we don’t have a way to pull significant CO2 from the air economically and on a massive industrial scale. There is some carbon capture in the US, for example, but it accounts for only 0.4% of CO2 emissions. It is used near locations of high CO2 production, like coal-fired plants.

But there is a lot of research being done, mostly in the proof of concept stage. Scientists at the DOE and Brookhaven National Laboratory have published a process which seems to have promise. They can convert CO2 in the atmosphere to carbon nanofibers, which is a solid form of carbon with potential industrial uses. One potential use of these nanofibers would be as filler for concrete. This would bind up the carbon for at least 50 years, while making the concrete stronger.

In order to get from CO2 to carbon nanofibers they break the process up into two steps. They figured out a way, using an iron-cobalt catalyst, to convert carbon monoxide (CO) into carbon nanofibers. This is a thermocatalytic process operating at 400 degrees C. That’s hot, but practical for industrial processes. It’s also much lower than the 1000 degrees C required for a method that would go directly from CO2 to carbon nanofibers.

That’s great, but first you have to convert the CO2 to CO, and that’s actually the hard part. They decided to use a proven method which uses a commercially available catalyst – palladium supported on carbon. This is an electrocatalytic process that converts CO2 and H2O into CO and H2 (together called syngas). Both CO and H2 are high energy molecules that are very useful in industry. Hydrogen, as I have written about extensively, has many uses, including in steel making, concrete, and energy production. CO is a feed molecule for many useful reactions creating a range of hydrocarbons.
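In simplified form, the two steps look something like this (representative reaction equations inferred from the description above; the actual stoichiometry and intermediates in the paper's process may differ):

```latex
% Step 1: electrocatalytic splitting of CO2 and water into syngas (CO + H2)
\mathrm{CO_2 + H_2O \;\xrightarrow{\text{Pd/C, electricity}}\; CO + H_2 + O_2}

% Step 2: thermocatalytic conversion of the CO (with H2) into solid carbon
\mathrm{CO + H_2 \;\xrightarrow{\text{Fe-Co, } \sim 400^{\circ}\mathrm{C}}\; C_{(\text{nanofiber})} + H_2O}
```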

But as I said – conversion of CO2 and H2O to CO and H2 is the hard part. There has been active research for years to create an industrial-scale, economic, and energy-efficient process to do this, and you can find many science news items reporting on different processes. It seems like this is the real game, this first step in the process, and from what I can tell that is not the new innovation in this research, which focuses on the second part, going from CO to carbon nanofibers.

The electrocatalytic process that goes from CO2 to CO uses electricity. Other processes are thermocatalytic, and may use exothermic reactions to drive the process. Using a lot of energy is unavoidable, because essentially we are going from a low energy molecule (CO2) to a higher energy molecule (CO), which requires the addition of energy. This is the unavoidable reality of carbon capture in general – CO2 gets released in the process of making energy, and if we want to recapture that CO2 we need to put the energy back in.
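To put a rough number on that energy cost (standard thermochemistry, not a figure from the paper): regenerating CO from CO2 is the reverse of burning CO, so it requires at least as much energy as that combustion releases, before any losses in the electrocatalytic cell are counted.

```latex
\mathrm{CO + \tfrac{1}{2}O_2 \rightarrow CO_2}, \quad \Delta H^{\circ} \approx -283\ \mathrm{kJ/mol}
\qquad\Longrightarrow\qquad
\mathrm{CO_2 \rightarrow CO + \tfrac{1}{2}O_2}, \quad \Delta H^{\circ} \approx +283\ \mathrm{kJ/mol}
```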

The researchers (and pretty much all reporting on CO2 to CO conversion research) state that if the electricity were provided by a green energy source (solar, wind, nuclear) then the entire process itself can be carbon neutral. But this is exactly why any type of carbon capture like this is not going to be practical or useful anytime soon. Why have a nuclear power plant powering a carbon capture facility, that is essentially recapturing the carbon released from a coal-fired plant? Why not just connect the nuclear power plant to the grid and shut down the coal-fired plant? That’s more direct and efficient.

What this means is that any industrial scale carbon capture will only be useful after we have already converted our energy infrastructure to low or zero carbon. Once all the fossil fuel plants are shut down, and we get all our electricity from wind, solar, nuclear, hydro, and geothermal then we can make some extra energy in order to capture back some of the CO2 that has already been released. This is why when experts project out climate change for the rest of the century they figure in carbon capture after 2050 – after we have already achieved zero carbon energy. Carbon capture prior to that makes no sense, but after will be essential.

This is also why some in the climate science community think that premature promotion of carbon capture is a con and a diversion. The fossil fuel industry would like to use carbon capture as a way to keep burning fossil fuels, or to “cook their books” and make it seem like they are less carbon polluting than they are. But the whole concept is fatally flawed – why have a coal-fired plant to make electricity and a nuclear plant to recapture the CO2 produced, when you can just have a nuclear plant to make the electricity?

The silver lining here is that we have time. We won’t really need industrial scale carbon capture for 20-30 years, so we have time to perfect the technology and make it as efficient as possible. But then, the technology will become essential to avoid the worst risks of climate change.

 

The post Converting CO2 to Carbon Nanofibers first appeared on NeuroLogica Blog.

Categories: Skeptic

Betavoltaic Batteries

Tue, 01/16/2024 - 5:08am

In 1964 Isaac Asimov, asked to imagine the world 50 years in the future, wrote:

“The appliances of 2014 will have no electric cords, of course, for they will be powered by long-lived batteries running on radioisotopes. The isotopes will not be expensive for they will be by-products of the fission-power plants which, by 2014, will be supplying well over half the power needs of humanity.”

Today nuclear fission provides about 10% of the world’s electricity. Asimov can be forgiven for being off by such a large amount. He, as a science fiction futurist, was thinking more about the technology itself. Technology is easier to predict than things like public acceptance, irrational fear of anything nuclear, or even economics (which even economists have a hard time predicting).

But he was completely off about the notion that nuclear batteries would be running most everyday appliances and electronics. This now seems like a quaint retro-futuristic vision, something out of the Fallout franchise. Here the obstacle to widespread adoption of nuclear batteries has been primarily technological (issues of economics and public acceptance have not even come into play yet). Might Asimov’s vision still come true, just decades later than he thought? It’s theoretically possible, but there is still a major limitation that for now appears to be a deal-killer – the power output is still extremely low.

Nuclear batteries that run through thermoelectric energy production have been in use for decades by the aerospace industry. These work by converting the heat generated by the decay of nuclear isotopes into electricity. Their main advantage is that they can last a long time, so they are ideal for putting on deep space probes. These batteries are heavy and operate at high temperatures – not suitable for powering your vacuum cleaner. There are also non-thermal nuclear batteries, which do not depend on a heat gradient to generate electricity. There are different types depending on the decay particle and the mechanism for converting it into electricity. These can be small, cool devices, and can function safely for commercial use. In fact, for a while nuclear powered pacemakers were in common use, until lithium-ion batteries became powerful enough to replace them.

One type of non-thermal nuclear battery is betavoltaic, which is widely seen as the most likely to achieve widespread commercial use. These convert beta particles, which are the source of energy –

“…energy is converted to electricity when the beta particles interact with a semiconductor p–n junction to create electron–hole pairs that are drawn off as current.”

Beta particles are essentially either high energy electrons or positrons emitted during certain types of radioactive decay. They are pretty safe, as radiation goes, and are most dangerous when inhaled. From outside the skin they are less dangerous, but high exposure can cause burns. The small amounts released within a battery are unlikely to be dangerous, and the whole idea is that they are captured and converted into electricity, not radiated away from the device. A betavoltaic device is often referred to as a “battery” but is not charged or recharged with energy. When made it has a finite amount of energy that it releases over time – but that time can be years or even decades.

Imagine having a betavoltaic power source in your smartphone. This “battery” never has to be charged and can last for 20-30 years. In such a scenario you might have one such battery that you transfer to subsequent phones. Such an energy source would also be ideal for medical uses, for remote applications, as backup power, and for everyday use. If they were cheap enough, I could imagine such batteries being ubiquitous in everyday electronics. Imagine if most devices were self-powered. How close are we to this future?

I wish I could say that we are close or that such a vision is inevitable, but there is a major limiting factor to betavoltaics – they have low power output. This is suitable for some applications, but not most. A recent announcement by a Chinese company, Betavolt, reminded me of this challenge. Their press release does read like some grade A propaganda, but I tried to read between the lines.

Their battery uses nickel-63 as a power source, which decays safely into copper. The design incorporates a crystal diamond semiconductor, which is not new (nuclear diamond batteries have been in the news for years). In a device as small as a coin they can generate 100 microwatts (at 3 volts) for “50 years”. In reality the nickel-63 has a half-life of 100 years. That is a more precise way to describe its lifespan. In 100 years it will be generating half the power it did when manufactured. So saying it has a functional life of 50 years is not unreasonable.
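A quick check of that claim, using nothing more than the exponential-decay formula and the stated 100-year half-life:

```python
# Output of a decay-powered cell falls off as P(t) = P0 * 0.5 ** (t / half_life).
P0_MICROWATTS = 100.0     # claimed initial output of the coin-sized cell
HALF_LIFE_YEARS = 100.0   # half-life of nickel-63

for years in (0, 25, 50, 100):
    p = P0_MICROWATTS * 0.5 ** (years / HALF_LIFE_YEARS)
    print(f"after {years:3d} years: ~{p:.0f} microwatts "
          f"({100 * p / P0_MICROWATTS:.0f}% of initial)")

# At 50 years the cell still delivers about 71% of its original output,
# which is why quoting a 50-year functional life is reasonable.
```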

The problem is the 100 microwatts. A typical smartphone requires 3-5 watts of power. So the Betavolt battery produces only about 1/30,000th of the power necessary to run your smartphone. That’s more than four orders of magnitude. And yet, Betavolt claims they will produce a version of their battery that can produce 1 watt of power by 2025. Farther down in the article it says they plan –

“to continue to study the use of strontium 90, promethium 147 and deuterium and other isotopes to develop atomic energy batteries with higher power and a service life of 2 to 30 years.”

I suspect these two things are related. What I mean is that when it comes to powering a device with nuclear decay, the half-life is directly tied to power output. If the radioisotope decays at half the rate, then it produces half the power (given a fixed mass). There are three variables that could affect power output. One is the starting mass of the isotope that is producing the beta particles. The second is the half-life of that substance. And the third is the efficiency of conversion to electricity. I doubt there are four orders of magnitude to be gained in efficiency.
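Those three variables can be written out explicitly (standard radioactive-decay relations, not figures from the Betavolt announcement):

```latex
% Activity of N atoms of an isotope with half-life T_{1/2}:
A = \lambda N, \qquad \lambda = \frac{\ln 2}{T_{1/2}}

% Electrical output, with \bar{E} the average energy per beta particle and
% \eta the conversion efficiency of the semiconductor junction:
P = \eta \,\bar{E}\, \lambda N = \eta \,\bar{E}\, N \,\frac{\ln 2}{T_{1/2}}
```

For a fixed mass (fixed N), power therefore scales inversely with half-life, which is the trade-off at play here.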

From what I can find betavoltaics are getting to about the 5% efficiency range. So maybe there is one order of magnitude to gain here, if we could design a device that is 50% efficient (which seems like a massive gain). Where are the other three orders of magnitude coming from? If you use an isotope with a much shorter half-life, say 1 year instead of 100 years, there are two orders of magnitude. I just don’t see where the other one is coming from. You would need 10 such batteries to run your smart phone, and even then, in one year you are operating at half power.
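Making that accounting explicit (rough numbers; the 3 watt phone draw, the 5% and 50% efficiencies, and the hypothetical 1-year half-life isotope are the estimates used above):

```python
import math

phone_watts = 3.0      # low end of a typical smartphone's power draw
cell_watts = 100e-6    # the coin-sized cell's claimed output

gap = phone_watts / cell_watts
print(f"power gap: {gap:,.0f}x (~{math.log10(gap):.1f} orders of magnitude)")

# For a fixed number of atoms, power scales as ln(2)/half-life, so a 1-year
# isotope yields ~100x the power of a 100-year isotope with similar decay energy.
half_life_gain = 100 / 1
efficiency_gain = 0.50 / 0.05   # ~5% conversion today to a very optimistic 50%

remaining = gap / (half_life_gain * efficiency_gain)
print(f"shortfall after both gains: ~{remaining:,.0f}x")
# Even with generous assumptions, a single cell still falls tens of times
# short of a phone's steady draw.
```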

Also, nuclear batteries have constant energy output. You do not draw power from them as needed, like with a lithium-ion battery. They just produce electricity at a constant (and slowly decreasing) rate. Perhaps, then, such a battery could be paired with a lithium-ion battery (or other traditional battery). The nuclear battery slowly charges the traditional battery, which operates the device. This way the nuclear battery does not have to power the device directly, and can produce much less power than needed. If you use your device 10% of the time, the nuclear battery can keep it charged. Even if the nuclear battery does not produce all the energy the device needs, you would be able to go much longer between charges, and you will never be dead in the water. You could always wait and build up some charge in an emergency, or when far away from any power source. So I can see a role for betavoltaic batteries, not only in devices that use tiny amounts of power, but in consumer devices as a source of “trickle” charging.
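A rough feel for the trickle-charging idea (illustrative numbers only; the 15 Wh battery capacity is an assumption, and the 1 watt cell is Betavolt's projection rather than a shipping product):

```python
HOURS_PER_DAY = 24
PHONE_BATTERY_WH = 15.0   # assumed capacity of a typical smartphone battery

for label, cell_watts in [("current 100-microwatt cell", 100e-6),
                          ("projected 1-watt cell", 1.0)]:
    wh_per_day = cell_watts * HOURS_PER_DAY
    pct = 100 * wh_per_day / PHONE_BATTERY_WH
    print(f"{label}: {wh_per_day:.3f} Wh/day, about {pct:.2f}% of a full battery")

# The 100-microwatt cell trickles in a negligible ~0.02% of a charge per day,
# but a 1-watt cell would add ~24 Wh per day, more than a full battery, which
# is why a constant trickle paired with a conventional battery could work.
```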

At first this might be gimmicky, and we will have to see if it provides a real-world benefit that is worth the expense. But it’s plausible. I can see it being very useful in some situations, and the real variable is how widely adopted such a technology would be.

The post Betavoltaic Batteries first appeared on NeuroLogica Blog.

Categories: Skeptic
