neurologicablog Feed

Your Daily Fix of Neuroscience, Skepticism, and Critical Thinking

Electronic Noses

Tue, 10/22/2024 - 4:45am

I am always sniffing around (pun intended) for new and interesting technology, especially anything that I think is currently flying under the radar of public awareness but has the potential to transform our world in some way. I think electronic nose technology fits into this category.

The idea is to use electronic sensors that can detect chemicals, specifically those that are abundant in the air, such as volatile organic compounds (VOCs). Such technology has many potential uses, which I will get to below. The current state of the art is advancing quickly with the introduction of various nanomaterials, but at present these sensing arrays require multiple antennas coated with different materials. As a result they are difficult and expensive to manufacture and energy intensive to operate. They work, and are often able to detect specific VOCs with 95% or greater accuracy. But their utility is limited by cost and inconvenience.

A new advance, however, is able to reproduce and even improve upon current performance with a single antenna and a single coating. The technology uses a single graphene-oxide-coated antenna interrogated with ultrawide-band microwave signals to detect specific VOCs. These molecules reflect different wavelengths differently depending on their chemical structure. That is how they “sniff” the air. The results are impressive.

The authors report that a “classification accuracy of 96.7 % is attained for multiple VOC gases.” This is comparable to current technology, but again with a simpler, cheaper, and less energy-hungry design. Further, it actually has better results in terms of discriminating different isomers. Isomers are different configurations of the same molecular composition – same atoms in the same ratios but arranged differently, so that the chemical properties may be different. This is a nice proof-of-concept advance in this technology.
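To make the “sniffing” step concrete: the antenna produces a reflection spectrum across many microwave frequencies, and identifying the VOC then becomes a pattern-classification problem. Below is a minimal, hypothetical Python sketch using synthetic spectra and an off-the-shelf classifier – the data, class count, and model are stand-ins of mine, not anything from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_freq_bins, n_vocs = 600, 128, 4

# Each made-up VOC class gets a characteristic reflection signature plus noise
signatures = rng.normal(size=(n_vocs, n_freq_bins))
labels = rng.integers(0, n_vocs, size=n_samples)
spectra = signatures[labels] + 0.5 * rng.normal(size=(n_samples, n_freq_bins))

X_train, X_test, y_train, y_test = train_test_split(
    spectra, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"Classification accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```

The real system obviously trains on measured spectra rather than synthetic ones; this just illustrates the general shape of the task.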

Now the fun part – let’s speculate about how this technology might be used. The basic application for electronic noses is to automatically detect VOCs in the environment or associated with a specific item as a way of detecting something useful. For example, this could be used as a breath test to detect specific diseases. This could be a non-invasive bedside quick test that could reliably detect different infections, disease states, even things like cancer or Alzheimer’s disease. When disease alters the biochemistry of the body, it may be reflected in VOCs in the breath, or even the sweat, of a person.

VOC detection can also be used in manufacturing to monitor chemical processes for quality control or to warn about any problems. They could be used to detect fire, gas leaks, contraband, or explosives. People and things are often surrounded by a cloud of chemical information, a cloud that would be difficult, if not impossible, to hide from sensitive sniffers.

So far this may seem fairly mundane, and just an incremental extrapolation of stuff we already can do. That’s because it is. The real innovation here is doing all this with a much cheaper, smaller, and less energy intensive design. As an analogy, think about the iPhone, an icon of disruptive technology. The iPhone could not really do anything that we didn’t already have a device or app for. We already had phones, texting devices, PDAs, digital cameras, flashlights, MP3 players, web browsers, handheld gaming platforms, and GPS devices. But the iPhone put all this into one device you could fit in your pocket, and carry around with you everywhere. Functionality then got added on with more apps and with motion sensors. But the main innovation that changed the world was the all-in-one portability and convenience. A digital camera, for example, is only useful when you have it on you, but are you really going to carry around a separate digital camera with you every day everywhere you go?

This new electronic nose technology has the potential to transform the utility of this tech for similar reasons – it’s potentially cheap enough to become ubiquitous and portable enough to carry with you. In fact, there is already talk about incorporating the technology into smartphones. That would be transformative. Imagine also carrying with you at all times an electronic nose that could detect smoke, dangerous gases, signs that you or others might be ill, or that your food is spoiled and potentially dangerous.

Imagine that most people are carrying such devices, and that they are networked together. Now we have millions of sensors out there in the community able to detect all these things. This could add up to an incredible early warning system for all sorts of dangers. It’s one of those things that is challenging to just sit here and think of all the potential specific uses. Once such technology gets out there, there will be millions of people figuring out innovative uses. But even the immediately obvious ones would be incredibly useful. I can think of several people I know personally whose lives would have been saved if they had such a device on them.

As I often have to say, this is in the proof-of-concept stage and it remains to be seen if this technology can scale and be commercialized. But it seems promising. Even if it does not end up in every smartphone, having dedicated artificial nose devices in the hospital, in industry, and in the home could be extremely useful.

The post Electronic Noses first appeared on NeuroLogica Blog.

Categories: Skeptic

Tesla Demonstrated its Optimus Robot

Mon, 10/21/2024 - 5:38am

At a recent event Tesla showcased the capabilities of its humanoid autonomous robot, Optimus. The demonstration has come under some criticism, however, for not being fully transparent about the nature of the demonstration. We interviewed robotics expert Christian Hubicki on the SGU this week to discuss the details. Here are some of the points I found most interesting.

First, let’s deal with the controversy – to what extent were the robots autonomous, and how transparent was this to the crowd? The first question is easier to answer. There are basically three types of robot control: pre-programmed, autonomous, and teleoperated. Pre-programmed means the robot is following a predetermined set of instructions. Often if you see a robot dancing, for example, that is a pre-programmed routine. Autonomous means the robot has internal real-time control. Teleoperated means that a human in a motion-capture suit is controlling the movements of the robot. All three of these types of control have their utility.

These are humanoid robots, and they were able to walk on their own. Robot walking has to be autonomous or pre-programmed; it cannot be teleoperated. This is because balance requires real-time feedback of position and other information to produce the moment-to-moment adjustments that maintain balance. A tele-operator would not have this (at least not with current technology). The Optimus robots walked out, so this was autonomous.
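A toy model makes the latency point concrete. The sketch below is my own illustration (not anything from Tesla or the interview): a simple inverted pendulum stabilized by proportional-derivative feedback stays upright when the feedback is fast, but falls over once the same commands are applied with roughly a 150 ms delay of the kind a remote human operator would introduce. All gains and parameters are invented for illustration.

```python
import math

def simulate(delay_steps, kp=60.0, kd=12.0, dt=0.005, steps=4000):
    theta, omega = 0.05, 0.0        # initial tilt (rad) and angular velocity
    g, length = 9.81, 1.0           # gravity and effective pendulum length
    queue = [0.0] * delay_steps     # torque commands waiting to be applied
    for _ in range(steps):
        queue.append(kp * theta + kd * omega)   # command based on current state...
        torque = queue.pop(0)                   # ...but applied only after the delay
        alpha = (g / length) * math.sin(theta) - torque
        omega += alpha * dt
        theta += omega * dt
        if abs(theta) > math.pi / 2:
            return "falls over"
    return f"stays upright (final tilt {abs(theta):.4f} rad)"

print("fast feedback:      ", simulate(delay_steps=0))
print("150 ms delayed loop:", simulate(delay_steps=30))
```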

Once in position, however, the robots began serving and interacting with the humans present. Christian noted that he and other roboticists were able to immediately tell that the upper body movements of the robots were teleoperated, just by the way they were moving. The verbal interaction also seemed teleoperated, as each robot had a different voice and the responses were immediate and included gesticulations.

Some might say – so what? The engineering of the robots themselves is impressive. They can autonomously walk, and none of them fell over or did anything weird. This much is a fairly impressive demonstration. It is actually quite dangerous to have fully autonomous robots interacting with people. The technology is not quite there yet. Robots are heavy and powerful, and just falling over might cause human injury. Reliability has to be extremely high before we will be comfortable putting fully autonomous robots in human spaces. Making robots lighter and softer is one solution, because then they would be less physically dangerous.

But the question for the Optimus demonstration is – how transparent was the teleoperation of the robots? Tesla, apparently, did not explicitly say the robots were being operated fully autonomously, nor did any of the robot operators lie when directly asked. But at the same time, the teleoperators were not in view, and Tesla did not go out of their way to transparently point out that the robots were being teleoperated. How big a deal is this? That is a matter of perception.

But Christian pointed out that there is a very specific question at the heart of the demonstration – where is Tesla compared to its competitors in terms of autonomous control? The demonstration, if you did not know there were teleoperators, makes the Optimus seem years ahead of where it really is. It made it seem as if Tesla is ahead of their competition when in fact they may not be.

While Tesla was operating in a bit of a transparency grey-zone, I think the pushback is healthy for the industry. The fact is that robotics demonstrations typically use various methods of making the robots seem more impressive than they are – speeding up videos, hiding teleoperation, only showing successes and not the failures, and glossing over significant limitations. This is OK if you are Disney and your intent is to create an entertaining illusion. This is not OK if you are a robotics company demonstrating the capabilities of your product.

What is happening as a result of this pushback, and the exposure of less-than-total transparency, is an increasing use of transparency in robotics videos. This, in my opinion, should become standard, and anything less unacceptable. Videos, for example, can be labeled as “autonomous” or “teleoperated” and can also be labeled if they are being shown at a speed other than 1x. Here is a follow-up video from Tesla where they do just that. However, this video is in a controlled environment, we don’t know how many “takes” were required, and the Optimus demonstrates only some of what it did at the event. At live events, if there are teleoperators, they should not be hidden in any way.

This controversy aside, the Optimus is quite impressive just from a hardware point of view. But the real question is – what will be the market and the use of these robots? The application will depend partly on the safety and reliability, and therefore on its autonomous capabilities. Tesla wants their robots to be all-purpose. This is an extremely high bar, and requires significant advances in autonomous control. This is why people are very particular about how transparent Tesla is being about where their autonomous technology is.

 

The post Tesla Demonstrated its Optimus Robot first appeared on NeuroLogica Blog.

Categories: Skeptic

The Clipper Europa Mission

Thu, 10/17/2024 - 5:03am

I wrote earlier this week about the latest successful test of Starship and the capture of the Super Heavy booster by the grabbing arms of the landing tower. This was quite a feat, but it should not eclipse what was perhaps even bigger space news this week – the launch of NASA's Clipper probe to Europa. If all goes well the probe will enter orbit around Jupiter in 2030.

Europa is one of the four large moons of Jupiter. It’s an icy world but one with a subsurface ocean – an ocean that likely contains twice as much water as the oceans of Earth combined. Europa is also one of the most likely locations in our solar system for life outside Earth. It is possible that conditions in that ocean are habitable to some form of life. Europa, for example, has a rocky core, which may still be molten, heating Europa from the inside and seeding its ocean with minerals. Chemosynthetic organisms survive around volcanic vents on Earth, so we know that life can exist without photosynthesis and Europa might have the right conditions for this.

But there is still a lot we don’t know about Europa. Previous probes to Jupiter have gathered some information, but Clipper will be the first dedicated Europa probe. It will make 49 close flybys over a 4-year primary mission, during which it will survey Europa's magnetic field, gravity, and chemical composition. Perhaps most exciting is that Clipper is equipped with instruments that can sample any material around Europa. The hope is that Clipper will be able to fly through a plume of material shooting up geyser-like from the surface. It would then be able to detect the chemical composition of Europa material, especially looking for organic compounds.

Clipper is not equipped specifically to detect if there is life on Europa. Rather it is equipped to determine how habitable Europa is. If there are conditions suitable to subsurface ocean life, and certainly if we detect organic compounds, that would justify another mission to Europa specifically to look for life. This may be our best chance at finding life off Earth.

Clipper is the largest probe that NASA has sent out into space so far. It is about the size of an SUV, and will be powered by solar panels that span 100 feet. Light intensity at Jupiter is only 3-4% of what it is on Earth, so it will need large panels to generate significant power. It also has batteries so that it can operate while in shadow. NASA reports that soon after launch Clipper’s solar arrays successfully unfolded fully, so the probe will have power throughout the rest of its mission. These are the largest solar arrays for any NASA probe. At Jupiter they will generate 700 watts of power. NASA says they are “more sensitive” than typical commercial solar panels, but I could not find more specific technical information, such as their conversion efficiency. But I did learn that the panels are much sturdier than usual, in order to survive the frigid temperatures and heavy radiation environment around Jupiter.
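The 3-4% figure follows directly from the inverse-square law, given that Jupiter orbits at roughly 5.2 AU (a rough average; Clipper's actual distance from the Sun will vary):

```python
jupiter_distance_au = 5.2                        # approximate mean orbital distance
relative_sunlight = 1.0 / jupiter_distance_au**2
print(f"Sunlight at Jupiter is about {relative_sunlight:.1%} of that at Earth")  # ~3.7%
```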

Clipper will take a somewhat indirect path, first flying to Mars where it will get a gravity boost and swing back to Earth where it will get a second gravity boost. Only then will it head for Jupiter, where it will arrive in 2030 and then use its engines to enter into orbit around Jupiter. The orbit is designed to bring it close to Europa, where it will get as close as 16 miles to the surface over its 49 flybys. At the end of its mission NASA will crash Clipper into Ganymede, another of Jupiter’s large moons, in order to avoid any potential contamination of Europa itself.

I always get excited at the successful launch of another planetary probe, but then you have to wait years before the probe finally arrives at its destination. The solar system is big and it takes a long time to get anywhere. But it is likely to be worth the wait.

An even longer wait will be for what comes after Clipper. NASA is “discussing” a Europa lander. Such a mission will take years to design, engineer, and build, and then more years to arrive and land on Europa. We won’t get data back until the 2040s at the earliest. So let’s get hopping. The potential for finding life off Earth should be one of NASA’s top priorities.

The post The Clipper Europa Mission first appeared on NeuroLogica Blog.

Categories: Skeptic

Latest Starship Launch

Mon, 10/14/2024 - 5:30am

SpaceX has conducted their most successful test launch of a Starship system to date. The system they tested has three basic components – the Super Heavy first stage rocket booster, the Starship second stage (which is the actual space ship that will go places), and the “chopsticks”, which is a mechanical tower designed to catch the Super Heavy as it returns. All three components apparently functioned as hoped.

The Super Heavy lifted Starship into space (suborbital), then returned to the launch pad in southern Texas where it maneuvered into the grasping mechanical arms of the chopsticks. The tower’s arms closed around the Super Heavy, successfully grabbing it. The engines then turned off and the rocket remained held in place. The idea here is to replicate the reusable function of the Falcon rockets, which can return to a landing pad after lifting their cargo into orbit. The Falcons land on a platform on the water. SpaceX, however, envisions many Starship launches and wants to be able to return the rockets directly to the launch pad, for quicker turnaround.

The Starship, for its part, also performed as expected. It came back down over the designated target in the Indian Ocean. Once it got to the surface it rolled on its side and exploded. They were never planning on recovering any of the Starship so this was an acceptable outcome. Of course, eventually they will need to land Starship safely on the ground.

The system that SpaceX came up with reflects some of the realities and challenges of space travel. The Earth is a massive gravity well, and it is difficult to get out of and back into that gravity well. Getting into orbit requires massive rockets with lots of fuel, and falls prey to the rocket equation – you need fuel to carry the fuel, etc. This is also why, if we want to use Starship to go to Mars, SpaceX will have to develop a system to refuel in orbit.
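To see why the rocket equation bites so hard, here is a quick back-of-envelope calculation using the Tsiolkovsky relation. The specific impulse and delta-v below are generic illustrative assumptions, not SpaceX figures.

```python
import math

g0 = 9.81        # m/s^2, standard gravity
isp = 330        # s, assumed specific impulse of the engine
delta_v = 9400   # m/s, rough delta-v budget to reach low Earth orbit

# Tsiolkovsky rocket equation: delta_v = isp * g0 * ln(m0 / mf)
mass_ratio = math.exp(delta_v / (isp * g0))
print(f"Required wet/dry mass ratio: {mass_ratio:.1f}")
print(f"Propellant fraction at liftoff: {1 - 1 / mass_ratio:.0%}")
```

On these assumptions roughly 95% of the liftoff mass has to be propellant just to reach orbit, which is why staging, orbital refueling, and reusability matter so much.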

Getting back down to the ground is also a challenge. Orbital velocity is fast, and you have to lose all that speed. Using the atmosphere for braking works, but the air compression (not friction, as most people falsely believe) causes significant heat, so reentering through the atmosphere requires heat shielding. You then have to slow down enough for a soft landing. You can use parachutes. You can splash down in the water. You can use bouncy cushions for a hard landing. Or you can use rockets. Or you can land like a plane, which was the Shuttle option. All of these methods are challenging.

If you want to reuse your rockets and upper stages, then a splashdown is problematic as salt water is bad. No one has gotten the cushion approach to work on Earth, although we have used it on Mars. The retro-rocket approach is what SpaceX is going with, and it works well. They have now added a new method, by combining rockets with a tower and mechanical arms to grab the first stage. I think this is also the planned method for Starship itself.

On the Moon and Mars the plan is to land on legs. These worlds have a lower gravity than Earth, so this method can work. In fact, NASA is planning on using the Starship as their lunar lander for the Artemis program. We apparently can’t do this on Earth because the legs would have to be super strong to handle the weight of the Super Heavy or Starship, and therefore difficult to engineer. It does seem amazing that a tower with mechanical arms grabbing the rocket out of the air was considered to be an easier engineering feat than designing strong-enough landing legs, but there it is. Needing a tower does limit the location where you can land – you have to return to the landing pad exactly.

SpaceX, however, is already good at this. They perfected the technology with the Falcon rocket boosters, which can land precisely on a floating landing pad in the ocean. So they are going with technology they already have. But it does seem to me that it would be worth it to have an engineering team work on the whole strong-landing-legs problem. That would seem like a useful technology to have.

All of this is a reminder that the space program, as mature as it is, is still operating at the very limits of our technology. It makes it all the more amazing that the Apollo program was able to send successful missions to the Moon. Apollo solved these various issues also by going with a complex system. As a reminder, the Saturn V used three stages to get into space for the Apollo program (although only two stages for Skylab). You then had the spacecraft that would go to the Moon, which consisted of a service module, a command module, and a lander. On the way to the Moon, it would have to undergo “transposition, docking, and extraction”. The command and service modules would detach from the spent third stage, turn around, then dock with the lunar lander and extract it from its adapter atop the stage. The stack would then go into lunar orbit. The lander would detach and land on the lunar surface, and eventually blast off back into orbit around the Moon. There it would dock again with the command module for return to Earth.

This was considered a crazy idea at first within NASA, and many of the engineers were worried they couldn’t pull it off. Docking in orbit was considered the riskiest aspect, and if it failed it would have resulted in astronauts being stranded in lunar orbit. This is why they perfected the procedure in Earth orbit before going to the Moon.

All of this complexity is a response to the physical realities of getting a lot of mass out of Earth’s gravity well, and having enough fuel to get to the Moon, land, take off again, return to Earth, and then get back down to the ground. The margins were super thin. It is amazing it all worked as well as it did. Here we are more than 50 years later and it is still a real challenge.

Spaceflight technology has not fundamentally changed in the last 50 years – rockets, fuel, capsules are essentially the same in overall design, with some tweaks and refinements. Except for one thing – computer technology. This has been transformative, make no mistake. SpaceX’s reusable rockets would not be possible without advanced computer controls. Modern astronauts have the benefits of computer control of their craft, and are travelling with the Apollo-era equivalent of supercomputers. Computer advances have been the real game-changing technology for space travel.

Otherwise we are still using the same kinds of rocket fuel. We are still using stages and boosters to get into orbit. Modern capsule design would be recognizable to an Apollo-era astronaut, although the interior design is greatly improved, again due to the availability of computer technology. There are some advanced materials in certain components, but Starship is literally built out of steel.

Again, I am not downplaying the very real advances in the aerospace industry, especially in bringing down costs and in reusability. My point is more that there haven’t been any game-changing technological advances not dependent on computer technology. There is no super fuel, or game-changing material. And we are still operating at the limits of physics, and have to make very real tradeoffs to make it work. If I’m missing something, feel free to let me know in the comments.

In any case, I’m glad to see progress being made, and I look forward to the upcoming Artemis missions. I do hope that this time we are successful in building a permanent cis-lunar infrastructure. That, in turn, would be a stepping stone to Mars.

The post Latest Starship Launch first appeared on NeuroLogica Blog.

Categories: Skeptic

Spider-Man’s Web Shooter

Fri, 10/11/2024 - 5:03am

I have to admit that my favorite superhero as a kid, and still today, is Spider-Man (and yes, that’s the correct spelling). There are a number of narrative reasons for this that I grew to appreciate more as I aged. First, Spider-Man is in the sweet spot of super abilities – he is strong, fast, agile, and has “spidey senses”. But he is not boringly invulnerable like Superman. He doesn’t brute force his way to solving situations. You don’t have to retcon questions like – if Iron Man has the technology to produce immense energy, why doesn’t he just make it available to the world? He would save more lives that way.

But of course the coolest aspect of Spider-Man is his webslinging. This allows him to fly through the city, and to tie up villains for the police to collect. This is also one aspect of the Spider-Man story that I thought was a bit contrived (even for the superhero genre where being bitten by a radioactive spider gives you super powers). In science fiction you generally get one gimmie – the author is allowed to make up just one fantastical fact to use as a cornerstone of their story. But they should not introduce multiple such gimmies. It breaks the unwritten contract between author and reader.

With Spider-Man, the one gimmie is the whole radioactive spider thing. That’s the one thing we are being asked to just accept and not question. I do like how more modern versions of the story changed that to genetic engineering – still fantastical, but way more plausible than radioactivity. I also liked that in the Tobey Maguire Spider-Man his webbing was part of the genetic engineering, and he produced the spider silk himself and extruded it from spinnerets in his wrists. For other versions we are asked to accept a double-gimmie – first, the whole spider thing, and second that Peter Parker also happens to be such a genius that he invented practically overnight something that scientists have been unable to do for decades: mimic the silk-spinning of spiders. Spider-Man was created in 1962, and here we are more than 60 years later and this remains an intractable problem of materials science.

Or is it?

OK, we are not quite there yet, but scientists have made a significant advance in artificially creating strands of silk. The problem has always been spinning the silk into threads. We can genetically engineer animals to produce spider silk, but it comes out as a glob. Spiders, however, are able to keep their silk liquid, and then extrude it from their spinnerets as threads with variable properties, such as stickiness. We really want to be able to do this artificially and at scale because spider silk is really strong – depending on what type of strength you are talking about, spider silk can be as strong as or stronger than steel. When you hear this statistic, however, it often refers to specific strength: because spider silk is much lighter than steel, it is stronger per unit weight. In any case – it’s strong.

Perhaps a better comparison is Kevlar. Spider silk has several advantages over this modern material – it is more resilient, flexible, and in some cases tougher. But we are still not close to spinning spider-silk bullet-proof vests.

The current study has a title that does not betray its possible significance – Dynamic Adhesive Fibers for Remote Capturing of Objects. That’s a technical way of saying – you can shoot freaking spider webs. What the researchers found is that if you take liquid silk from B. mori, which is a domestic silk moth, and combine it with a solvent like alcohol or (as in this case) acetone, it will become a semi-solid hydrogel. But the process takes hours. You can’t have your villain waiting around for hours for the webbing to solidify. But, if you also add dopamine to the mix, the dopamine helps draw water away from the silk quickly, and the solidification process becomes almost instant. Shoot this combination as a stream and the acetone evaporates in the air while the dopamine draws away the water and you have an instant sticky string of silk. You can literally shoot this at an object at range and then pick it up. The silk will stick to the object.

This is a massive advance, figuring out a key component to the process. Spiders and silk-producing insects also use dopamine in the process. Spiders generally don’t shoot their webs – they adhere the silk to an anchor and then draw it out. So in a way the researchers have done spiders one better. But the real goal is making artificial silk that can then be made into fibers that can then be made into stuff.

Now, the main limiting factor here – spider silk is still about 1000 times stronger than the resulting silk in this study. It’s strong and sticky enough to pick up small objects, but it’s not going to replace Kevlar. But the authors point out – the properties of this silk are “tunable”. They write:

“Furthermore, the possibility of tuning these properties is demonstrated by adding chitosan (Ch) and borate ions (BB), leading to remarkable mechanical and adhesive performances up to 107 MPa and 280 kPa, respectively, which allows the retrieval of objects from the ejected structure. This process can be finely tuned to achieve a controlled fabrication of instantaneously formed adhesive hydrogel fibers for manifold applications, mimicking living organisms’ ability to eject tunable adhesive functional threads.”

Spider silk has a tensile strength of about 1 GPa, so it is still roughly an order of magnitude stronger than even this tuned silk. Of course, they are just getting started. The hope is that further research will reveal formulas for tuning the properties of this silk to make it super strong, or have whatever other properties we need. I don’t want to trivialize this. As I have frequently pointed out – when scientists say “all we have to do is” they really mean “there is a huge problem we cannot currently fix, and may never be able to fix.”
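For reference, a quick check of the gap implied by the figures above – my arithmetic, using the quoted 107 MPa and the roughly 1 GPa typical of spider dragline silk:

```python
spider_silk_mpa = 1000   # ~1 GPa, typical dragline silk tensile strength
tuned_fiber_mpa = 107    # tuned fiber strength quoted from the paper
print(f"Spider silk is roughly {spider_silk_mpa / tuned_fiber_mpa:.1f}x stronger")  # ~9.3x
```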

It’s possible this method of spinning silk fibers may end up being little more than a laboratory curiosity, or may have a few niche applications at best. It is also possible this is the beginning of the next plastic or carbon fibers. Probably we will end up somewhere in between. But I am hopeful. There is a reason material scientists have been trying to crack the spider silk puzzle for decades – because the potential is huge. This really is an amazing material with incredible potential.

The post Spider-Man’s Web Shooter first appeared on NeuroLogica Blog.

Categories: Skeptic

Confidently Wrong

Thu, 10/10/2024 - 5:05am

How certain are you of anything that you believe? Do you even think about your confidence level, and do you have a process for determining what your confidence level should be or do you just follow your gut feelings?

Thinking about confidence is a form of metacognition – thinking about thinking. It is something, in my opinion, that we should all do more of, and it is a cornerstone of scientific skepticism (and all good science and philosophy). As I like to say, our brains are powerful tools, and they are our most important and all-purpose tool for understanding the universe. So it’s extremely useful to understand how that tool works, including all its strengths, weaknesses, and flaws.

A recent study focuses on one tiny slice of metacognition, but an important one – how we form confidence in our assessment of a situation or a question. More specifically, it highlights the illusion of information adequacy. This is yet another form of cognitive bias. The experiment divided subjects into three groups – one group was given one half of the information about a specific situation (the information that favored one side), while a second group was given the other half. The control group was given all the information. They were then asked to evaluate the situation and how confident they were in their conclusions. They were also asked if they thought other people would come to the same conclusion.

You can probably see this coming – the subjects in the test groups receiving only half the information felt that they had all the necessary information to make a judgement and were highly confident in their assessment. They also felt that other people would come to the same conclusion as they did. And of course, the two test groups came to the conclusion favored by the information they were given.

The researchers conclude (reasonably) that the main problem here is that the test groups assumed that the information they had was adequate to judge the situation – the illusion of information adequacy. This, in turn, stems from the well documented phenomenon that people generally don’t notice what is not there, or at least it is a lot more difficult to notice the absence of something. Assuming they have all relevant information, it then seems obvious what the answer is – whichever position is favored by the information they are given. In fact, the test groups were more confident in their answers than the control group. The control group had to balance conflicting information, while the test groups were unburdened by any ambiguity.

There are some obvious parallels to the real world here. There is a lot of discussion about how polarized the US has become in recent years. Both sides appear highly confident that they are right, that the other side has lost their collective mind, and that nothing short of total political victory at any cost will suffice. This is obviously a toxic situation for any democracy. Experts debate the exact causes of this polarization, but there is one very common theme – the two sides are largely siloed in different “information ecosystems”. This is the echo chamber effect. If you listen mainly or only to partisan news, then you are getting one half of the story, the half that supports your side. You will have the illusion that you have all the information, and in light of that information the conclusion is obvious, and anyone who disagrees must have dark motives, or be mentally defective in some way.

I have seen this effect in many skeptical contexts as well. After watching or reading a work that presents only half the story – the case for one side of a controversy – many people are convinced. They think they now understand the situation, and feel that such a large amount of information has to add up to something. I have had many discussions, for example, with people who have read books like The Aquatic Ape, which argues that humans went through an evolutionary period of adaptation to an aquatic life. It’s all nonsense and wild speculation, without any actual science, but it’s hard not to be persuaded by a book-length argument if you don’t already have the background to put it into context. The same happened with many people who watched the movie Loose Change.

This is why it is a good rule of thumb to suspend judgement when you encounter such claims and arguments. Professionals in investigative professions learn to do this as part of their deliberate analytical process. What am I not being told? What information is missing? What do those who disagree with this position have to say? What’s the other side of the story?

This is a good intellectual habit to have, and is also a cornerstone of good skepticism. Who disagrees with this claim and why? In everyday life it is a good idea to have diverse sources of information, and in fact to seek out information from the “other side”. For political news, no one source can be adequate, although some sources are better than others. Not all news sources are equally partisan and biased. It’s a good idea to seek out news sources that are generally considered (and in some cases have been formally rated) to be less partisan and more balanced in their reporting. But it is also a good idea to utilize multiple sources of news, and to specifically consume news that is of reasonable quality but comes from a different position than your own. What is the other side saying and why? It may be painful and uncomfortable sometimes, but that is a good reason to do it.

It’s good to know that there is a bias towards the illusion of information adequacy, because with that knowledge you can work against it. In the study, when the test subjects were given the other half of the information that they were initially missing, many of them did change their minds. This is something else we often see in psychological studies – humans are generally rational by default, and will listen to information. But this is true only as long as there is no large emotional stake. If their identity, tribe, ego, or fundamental world view is at stake, then rationality gives way to motivated reasoning.

This is why it is extremely useful (although also extremely difficult) to have no emotional stake in any claim. The only stake a rational person should have is in the truth. Your identity should be as an objective truth-seeker, not as a partisan of any kind. Also there should be no loss in ego from being wrong, only from failing to change your mind in light of new evidence. This is a rational ideal, and no one achieves it perfectly, but it’s good to have a goal.

At least it’s good to be engaged in metacognition, and to think about your thought process and everything that might be biasing it. This includes information and perspective that might be missing. This is the most difficult to detect, so it requires special attention.

The post Confidently Wrong first appeared on NeuroLogica Blog.

Categories: Skeptic

AI Copilots Are Coming

Tue, 10/08/2024 - 5:24am

I’m going to do something I rarely do and make a straight-up prediction – I think we are close to having AI apps that will function as our all-purpose digital assistants. That’s not really a tough call; we already have digital assistants and they are progressing rapidly. So I am just extending an existing trend a little bit into the future. My real prediction is that they will become popular and people will use them. Predicting technology is often easier than predicting public acceptance and use (see the Segway and many other examples). So this is more of a risky prediction.

I know, for those who lived through the early days of personal computers, if you mention “personal digital assistant” the specter of “Clippy” immediately comes to mind. Such assistants have a reputation for being intrusive and annoying. That has something to do with the fact that they were intrusive and annoying, coupled with the fact that they were not that useful. Siri and similar apps are great for a few things – acting as a verbal interface for Google searches, serving up music, or basic functions like setting an alarm on your phone. But I am talking next level. Siri is to the AI-fueled assistants I am talking about as the PDAs of the 80s and 90s are to the smartphones of today.

With that analogy I am getting into really tricky prediction territory – predicting a transformative or disruptive technology. This is the kind of technology that, shortly after you start using it regularly, you lose the ability to conceive of life without. Nor would you want to go back to the dark days before your life was transformed. Think microwave, the ability to record and play pre-recorded content for TV, the web, GPS, and the smartphone. This is what the Segway wanted to be.

Clearly I am thinking of some idealized version of a personal AI assistant – and I am. But there is no reason we can’t get there. All the elements are already there; someone just has to put it all together in a functional and pretty package (like the iPhone). Microsoft thinks we are one year away from such an application, and clearly they are planning on being the ones to bring it to market. They will likely have competition.

Let’s think first about what a personal AI assistant can be in that idealized form, and then consider the potential downsides. I am envisioning an app that lives on all of your electronic devices – your phone, tablet, laptop, and desktop. It uses all the devices you do and is always there for you. You can interact with it by voice or text. It has access to whatever information you give it access to, such as your calendar, contact list, accounts, passwords, and digital assets. And essentially it can do anything you want within that digital content at a command. It can manage your schedule, take the initiative to remind you about upcoming appointments or deadlines, and schedule new events.

Further, it can sift through your e-mail, getting rid of spam, warning about dangerous e-mails, organizing the rest by priority or whatever scheme you wish, even responding to some e-mails automatically or at your command. It can interact with all your other apps – “Find the quickest route to my destination, load it up into GPS, and remind me 10 minutes before I have to leave.” Or you can tell it to prepare a summary for you on some topic, after searching the web for the latest information. It can manage your computer hygiene – “Your anti-spyware software is out of date, and there is notice of a virus going around that you are not protected from. Shall I download and install the update?” You can tell it to always download and install security updates without asking first. Yes – Windows can already do this, for Windows, but not for third-party apps.
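Under the hood, this kind of assistant amounts to a language model routing parsed requests to “tools” it has been granted access to. Here is a deliberately simplified, hypothetical Python sketch of that dispatch pattern – the tool names and functions are invented placeholders, not any real assistant’s API:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    action: str     # which tool the assistant decided to use
    params: dict    # arguments extracted from the user's request

def schedule_event(params):
    return f"Added '{params['title']}' to your calendar on {params['date']}"

def triage_email(params):
    return f"Sorted your inbox and flagged {params.get('priority', 'high')}-priority mail"

TOOLS = {"schedule_event": schedule_event, "triage_email": triage_email}

def handle(intent):
    tool = TOOLS.get(intent.action)
    return tool(intent.params) if tool else "Sorry, I can't do that yet."

# In a real assistant an LLM would turn free text into an Intent; here we
# construct one by hand purely for illustration.
print(handle(Intent("schedule_event", {"title": "Dentist", "date": "2024-11-05"})))
```

In a real product each tool would call an actual calendar or e-mail service under whatever permissions you granted, which is exactly where the security concerns discussed below come in.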

A feature I would love to see – find me flights with these parameters. Or you could tell it to book a hotel, rental car, or anything else. It knows your frequent flyer numbers, your seating preferences, and which airports you prefer. Or you could ask it to find 20 gift options for Mother’s Day.

What will make an AI assistant better than anything that has come before (using the new AI tech) is that it can remember all of its interactions with you and learn. Over time, it becomes more and more personalized. It will know when not to disturb you, for example. It will learn to become less annoying and more helpful, as you learn how best to leverage this technology. This kind of tech has the potential to relieve a significant amount of the digital drudgery that we have foisted upon ourselves. I know some people will say – just disconnect. But that is not a viable option for many people, and we should not have to surrender all the benefits of computers simply to avoid that drudgery.

What about the downsides? The biggest potential weakness is that such apps will just suck. They won’t do their jobs well enough to reduce your digital burden. They can also be a security risk, if they have access to all your personal information. Security would need to be an iron-clad feature of such apps. They can also just get information wrong. This is a universal problem with the latest crop of AI, the so-called hallucinations. But this is something the industry is working on and it is getting better. It’s also less of a problem with focused (rather than open-ended) tasks.

There will eventually also be some optional features that some people will want in such an app, such as personal AI counseling or life-coaching. This can have different levels. At its most basic level, the AI can be just a rational friend who is a good listener, and gives really basic time-tested and expert-approved advice. It can function as a first-level counselor who is always there for you, and remembers all your previous conversations. You can select its personality, and level of intrusiveness. You may be able to have certain optional “nag” settings, such as keeping you on that diet, or reminding you not to be too sarcastic. It could make you more thoughtful, reminding you of all the social niceties that often slip through the cracks of our busy lives.

Then there will be those features that I am not thinking of, but that someone will think of when you have hundreds or even thousands of companies competing with each other and using feedback from billions of users. There may also be negative unintended consequences, and culture wars about social engineering. We will have to see how it all shakes out.

But I stick by my prediction – the potential of relieving us of digital drudgery and all the potential value-added of such an AI assistant – when it works really well – is just too great. I do think this will be like the next smartphone. We will probably know soon enough.

Note: I suspect the comments will fill with people giving examples of how the various pieces of this functionality already exists. I know, and I use a lot of them. You can cobble together password managers, an app to go through your photos, a schedule reminder, and e-mail sorters. Individual applications of a smartphone also predated the smartphone. The power was having everything in one device. Same here – one AI to bring it all together and add new functionality.

The post AI Copilots Are Coming first appeared on NeuroLogica Blog.

Categories: Skeptic

Fruit Fly Connectome Completed

Mon, 10/07/2024 - 5:58am

Scientists have just published in Nature that they have completed the entire connectome of a fruit fly: Network statistics of the whole-brain connectome of Drosophila. The map includes 140,000 neurons and more than 50 million connections. This is an incredible achievement that marks a milestone in neuroscience and is likely to advance research in the field.

A “connectome” is a complete map of all the neurons and all the connections in a brain. The ultimate goal is to map the entire human brain, which has 86 billion neurons and about 100 trillion connections – roughly six orders of magnitude more than the Drosophila. The Human Connectome Project was started in 2009 through the NIH, and today there are several efforts contributing to this goal.
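The quick arithmetic behind that comparison, using the figures above:

```python
human_neurons, fly_neurons = 86e9, 140e3
human_synapses, fly_synapses = 100e12, 50e6
print(f"Neuron ratio:  {human_neurons / fly_neurons:,.0f}x")    # ~614,000x
print(f"Synapse ratio: {human_synapses / fly_synapses:,.0f}x")  # ~2,000,000x
```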

Right now we have what is called a mesoscale connectome of the human brain. This is more detailed than a macroscopic map of human brain anatomy, but not as detailed as a microscopic map at the neuronal and synapse level. It’s in between, so mesoscale. Essentially we have built a mesoscale map of the human brain from functional MRI and similar data, showing brain regions and types of neurons at the millimeter scale and their connections. We also have mesoscale connectomes of other mammalian brains. These are highly useful, but the more detail we have obviously the better for research.

We can mark progress on developing connectomes in a number of ways – how is the technology improving, how much detail do we have on the human brain, and how complex is the most complex brain we have fully mapped. That last one just got its first entry – the fruit fly or Drosophila brain.

The Nature paper doesn’t just say – here’s the Drosophila brain. It does some interesting statistics on the connectome, showing the utility of having one. The ultimate goal is to fully understand how brains process information. Such principles (of which we already have a pretty good idea) can then be applied to other brains, including humans. For example, the study finds that the Drosophila brain has hubs and networks, which vary in terms of their robustness. It also reflects what is known as rich-club organization.

Rich-club organization means that there are hubs of neurons that have lots of connections, and these hubs have lots of connections to other hubs. This structure allows brains to efficiently integrate and disseminate information. This follows the same principle as with any distribution system. Even Amazon follows a similar model, with distribution centers serving as hubs. Further, the researchers identified specific subsets of the hubs that serve as integrators of information and other subsets that serve as broadcasters.
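For readers who want to play with the concept, standard network libraries can compute a rich-club coefficient directly – the fraction of possible connections that actually exist among nodes above a given degree. A minimal sketch using networkx on a toy hub-heavy graph (a stand-in, not the fly connectome):

```python
import networkx as nx

# A scale-free random graph as a stand-in for a connectome: it naturally
# grows a small number of highly connected hubs.
G = nx.barabasi_albert_graph(n=1000, m=5, seed=42)

# For each degree k: fraction of possible edges present among nodes of degree > k
rc = nx.rich_club_coefficient(G, normalized=False)

for k in sorted(rc)[::20]:
    print(f"degree > {k:3d}: rich-club coefficient = {rc[k]:.3f}")
```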

The connectome also includes synapse- and neurotransmitter-level data, which is critical to asking any questions about function. A connectome is not just a map of wiring. Different neurons use different neurotransmitters, which have different functions. Some neurotransmitters, for example, are excitatory, which means they increase the firing rate of the neurons onto which they synapse. Some neurotransmitters are inhibitory, which means they decrease firing rate. So at the very least we need to know if a connection is increasing or decreasing the activity of the neurons it connects to.
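A toy rate model shows why the sign of a connection matters as much as its existence – the same wiring with excitatory versus inhibitory weights drives the downstream neuron in opposite directions (illustrative numbers only, not data from the paper):

```python
import numpy as np

def firing_rate(presynaptic_rates, weights, baseline=5.0):
    # Rectified-linear rate model: positive weights are excitatory,
    # negative weights are inhibitory, and rates cannot go below zero.
    return max(0.0, baseline + float(np.dot(presynaptic_rates, weights)))

rates = np.array([10.0, 10.0, 10.0])
print(firing_rate(rates, np.array([+0.5, +0.5, +0.5])))  # excitatory inputs -> 20.0
print(firing_rate(rates, np.array([-0.5, -0.5, -0.5])))  # inhibitory inputs -> 0.0
```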

Now that the model is complete, they are just getting started examining it. This is the kind of research that is primarily meant to facilitate other research, so expect lots of papers using the Drosophila connectome as their subject.

Meanwhile scientists are working on completing the connectome of the mouse, which will likely be the first mammalian brain connectome. We already have mesoscale connectomes, and detailed connectomes of small sections of mouse brain. A completed mouse brain connectome is likely 10-15 years off (but of course, could be longer). That would be a huge milestone, as all mammalian brains share a lot of anatomy in common. With the Drosophila brain we can learn a lot about network principles, but the anatomy evolved completely independently from mammals (beyond the very rudimentary brain of our common ancestor).

One type of research that I would love to see is not just mapping a connectome, but emulating it in a computer. This information may be out there somewhere, but I have not found it so far – do we have a computer powerful enough to emulate the functioning of a Drosophila brain in real time? That would be a good test of the completeness and accuracy of our connectome – does it behave like an actual fruit fly?

Creating this would likely require more than just the connectome itself. We need, as I referenced above, some biological data as well. We need to know how the neurons are behaving biologically, not just as wires. We need to know how the neurotransmitters are behaving chemically. And we need to know how other cells in the brain, other than neurons, are affecting neuronal function. Then we need to give this virtual brain some input simulating a body and an environment, and simulate the environment’s response to the virtual fruit fly. That sounds like a lot of computing power, and I wonder how it compares to our current supercomputers. Likely we will be able to do this before we can do it in real time, meaning that a second of the life of our virtual Drosophila may take a day to compute (that is just a representative figure, I have no idea what the real current answer is). Then over time, our virtual Drosophila will go faster and faster until it catches up to real time.

Eventually the same will be true for a human. At some point we will have a full human connectome. Then we will be able to emulate it in a computer, but very slowly. Eventually it will catch up to real time, but why would it stop there? We may eventually have a computer that can simulate a human’s thought processes 1000 times faster than a human.

There is another wrinkle to this whole story – the role of our current and likely short term future AI. We are already using AI as a tool to help us make sense of the mesoscale connectomes we have. Our predictions of how long it will take to have complete connectomes may be way off. What if someone figures out a way to use AI to predict neuron level connectomes from our current mesoscale connectomes? We are already seeing, in many contexts, AI being used to do literally years of research in days or weeks, or months of research in hours. This is especially true for information-heavy research questions involving highly complex systems – exactly like the connectome. It would therefore not surprise me at all if AI-boosted connectome research suddenly progresses orders of magnitude faster than previous predictions.

Another potential area of advance is using AI to figure out ways to emulate a mammalian or even human brain more efficiently. We don’t necessarily need to emulate every function of an entire brain. We can probably cheat our way to make simple approximations of the functions we are not interested in for any particular emulation or research project. Then dedicate the computing power to what we are interested in, such as higher level decision-making.

And of course I have to mention the ethical considerations of all of this. Would a high-fidelity emulation of a human brain be a human? I think the answer is either yes, or very close to yes. This means we have to consider the rights of the emulated human. For this reason it actually may be more useful to emulate a mouse brain. We have already worked out ethical considerations for doing mouse research, and this would be an extension of that. I can imagine a future where we do lots of behavioral research on virtual mice in simulated environments. We could run millions of trials in a few minutes, without having to care for living creatures. We can then work our way evolutionarily toward humans. How far will we go? Would virtual primate research be OK? Can we guarantee our virtual models don’t “suffer”? Does it matter that they “exist” for just a fraction of a second? We’ll have to sort all this out eventually.

The post Fruit Fly Connectome Completed first appeared on NeuroLogica Blog.

Categories: Skeptic

Nadir Crater – A Double Tap for Dinosaurs?

Thu, 10/03/2024 - 4:58am

It is now generally accepted that 66 million years ago a large asteroid smacked into the Earth, causing the large Chicxulub crater off the coast of Mexico. This was a catastrophic event, affecting the entire globe. Fire rained down causing forest fires across much of the globe, while ash and debris blocked out the sun. A tsunami washed over North America – one site in North Dakota contains fossils from the day the asteroid hit, including fish with embedded asteroid debris. About 75% of species went extinct as a result, including all non-avian dinosaurs.

For some time there has been an alternative theory that intense vulcanism at the Deccan Traps, near modern-day India, is what did in the dinosaurs, or at least set them up for the final coup de grace of the asteroid. I think the evidence strongly favors the asteroid hypothesis, and this is the way scientific opinion has been moving. Although the debate is by no means over, a majority of scientists now accept the asteroid hypothesis.

But there is also a wrinkle to the impact theory – perhaps there was more than one asteroid impact. I wrote in 2010 about this question, mentioning several other candidate craters that seem to date to around the same time. Now we have a new candidate for a second KT impact – the Nadir crater off the coast of West Africa.

Geologists first published about the Nadir crater in 2022, discussing it as a candidate crater. They wrote at the time:

“Our stratigraphic framework suggests that the crater formed at or near the Cretaceous-Paleogene boundary (~66 million years ago), approximately the same age as the Chicxulub impact crater. We hypothesize that this formed as part of a closely timed impact cluster or by breakup of a common parent asteroid.”

Now they have published a follow up study, having been given access to private seismic data that allows for a detailed 3D analysis of the crater site. This is important because of how scientists identify impact craters. The gold standard is to identify physical evidence of impact, such as shock crystals. There are telltale minerals that can only be formed by the intense sudden power of an impact, or that form when debris is thrown into the high atmosphere while molten and then rains back down. These are conclusive signs of an impact. But there are many somewhat circular structures in the world, and often they may be prematurely declared a crater without solid evidence. So geologists are cautious and skeptical.

For the Nadir crater, which is on the sea floor, we do not have physical evidence. The initial study showed that it has a candidate circular structure, but this was not enough evidence to convince the scientific community. The detailed new analysis, however, is more compelling. First, the scientists find that it does have a complete circular structure consistent with an impact basin. Even more significant, however, is that they have documented a “central uplift”, which is a characteristic sign of an impact crater. When asteroids hit they cause a depression and liquefy the underlying rock. This causes a shockwave which rebounds, pushing the molten rock upward in the center of the crater and leaving behind a central uplift. This means that a circular basin with an uplift in the exact middle is a signature of an impact. This is solid and convincing evidence, even without the physical evidence of impact crystals. They write:

“Our new study published in Communications Earth & Environment 3 presents this new, state-of-the-art 3D data, revealing the architecture of the crater in exceptional detail and confirms (beyond reasonable doubt!) an impact origin for the crater. This is the first time that an impact structure has ever been imaged fully with high-resolution seismic data like this and it is a real treasure trove of information to help us to reconstruct how this crater formed and evolved.”

Sounds pretty convincing. But this leaves the questions they raised in their study two years ago – did this asteroid hit at the exact same time as the Chicxulub asteroid? If not, how far apart were they? If they did hit at the same time, were they originally part of the same asteroid? Given that there are other candidate craters that date to the same period of time, perhaps the asteroid broke up into multiple pieces that all struck the Earth at the same time.

If these asteroids were not originally part of the same asteroid, then what are the other possibilities? It is possible, although statistically unlikely, that there were simply different independent major impacts within a short time of each other. There is nothing to keep this from happening, and given the history of life on Earth perhaps it’s not that surprising, but it would be a statistical fluke.

The other possibility is that, even if they were different asteroids, perhaps there was some astronomical event that caused multiple chunks of rock and ice to swarm into the inner solar system. This would have caused a temporary period of relatively high bombardment. Perhaps a rogue planet swung by our solar system, scattering material from the Kuiper belt, some of which found its way to Earth.

The authors propose to drill into this structure, to get that physical evidence that would be so helpful. Not only would this confirm its impact status, but we may be able to tell if the chunk of rock that caused the Nadir crater has the same mineral signature as the Chicxulub asteroid. I suspect we could also tell their relative timing. Perhaps we could see the iridium layer from the Chicxulub impact, and see how that relates to the Nadir impact.

We may be able to answer – were the dinosaurs just really unlucky, or was the Chicxulub impact event more devastating than we even realized? Either way I look forward to more scientific investigation of the Nadir crater.

The post Nadir Crater – A Double Tap for Dinosaurs? first appeared on NeuroLogica Blog.

Categories: Skeptic

What Is Orbitronics

Tue, 10/01/2024 - 5:03am

You have definitely heard of electronics. You may (if you are a tech nerd like me) have heard of spintronics and photonics. Now there is also the possibility of orbitronics. What do these cool-sounding words mean?

Electronic technology is one of those core technologies that has transformed our civilization. Prior to harnessing electricity and developing electrical engineering we essentially had steam punk – mechanical, steam-powered technology. Electronics, and the electricity to power them, opened the door to countless gadgets, from electric lights and appliances to handheld devices and eventually computer technology and the internet. I am occasionally reminded of how absolutely essential electricity is to my daily life during power outages. I get a brief glimpse of a pre-electronic world and – well, it's rough. And that's just a taste, without the real drudgery that prolonged life without power would require.

Increasingly electronic devices are computerized, with embedded chips, possibly leading to the “internet of things”. Data centers eat an increasing percentage of our power production, and the latest AI applications will likely dramatically increase that percentage. Power use is now a limiting factor for such technology. It’s one main argument against widespread use of cryptocurrencies, for example. To illustrate the situation, Microsoft has just cut a deal to reopen Unit 1 at the Three-Mile Island nuclear power plant (not the one that melted down, that was Unit 2) with an agreement to purchase all of its power output for 20 years – to power its AI data center.

Therefore there is a lot of research into developing computer hardware that is not necessarily faster, smaller, or more powerful, but simply more energy efficient. We are approaching the limits of physics with the energy efficiency of electronic computers, however. Software engineers are also focusing on this issue, trying to create more energy-efficient algorithms. But it would be nice if the hardware itself used less energy. This is one of the big hopes for developing high-temperature superconductors, but we have no idea how long that will take, or whether we will ever develop anything usable in computing.

The other option is to fundamentally change the way computers work, to rely on different physics. Electronic computers transfer information essentially in the electrical charge of an electron (I say “essentially” to deliberately gloss over a lot of details that are not necessary to discuss the current news item). The current leading contender to replace (or supplement) electronics is photonics, which uses light instead of electrons to transfer information. Photonic devices are more energy efficient, generate less waste heat, lose less data, and can be made smaller. Photonic integrated circuits are already being used in some data centers. Photonic computers were first proposed in the 1960s, so they have been a long time coming.

There are also other physical phenomena that could be the basis of computing in the future. The basic science is just being worked out, which to me means that it will likely be a couple of decades, at least, before we see actual applications. One option is spintronics, which uses the spin of electrons, rather than their charge, to carry information. Spintronics is also faster and more energy efficient than electronics. Spintronic devices could also store information without power. But they have technological challenges as well, such as controlling spin over long distances. It's likely that spintronic and photonic devices will coexist, depending on the application, and may even be integrated together in opto-spintronics.

Enter orbitronics – another possibility, which uses the orbital angular momentum (OAM) of electrons as they orbit their nucleus as a way of storing and transferring information. The challenge has been to find materials that allow for the flow of OAM. OAM has the advantage of being isotropic – the same in every direction – so it can potentially flow in any direction. But we need a material where this can happen, and we need to be able to control the flow. That material was possibly discovered in 2019 – chiral topological semi-metals, or chiral crystals. Chiral means that they have a handedness, in this case a helical structure like DNA. But in order to work, this approach would need OAM monopoles, which until now were only theoretical. That is where the new study comes in.

Researchers have demonstrated that OAM monopoles actually exist. They also showed that the direction of the monopole can be flipped – from pointing out to pointing in, for example. These are properties that can be exploited in an orbitronics-based computer technology. The article, which is available at Nature, has the details for those who want to get into the technical weeds.

As always, it’s difficult to predict how potential new technologies will pan out. But we can make optimistic predictions – if everything works out, here is a likely timeline. We are on the cusp of photonics taking off, with projected significant growth over the next decade. This will likely be focused in data centers and high-end consumer devices, but will trickle down over time as the technology becomes more affordable. Photonics, in other words, is already happening. Next up will likely be spintronics, which as I said will most likely complement rather than replace photonics.

Orbitronics, if it pans out, and has sufficient advantages over photonics and spintronics, is likely more a technology for the 2040s, or perhaps 2050s. There is also the possibility that some other new technology will eclipse orbitronics (or even spintronics) before they can even get going.

The post What Is Orbitronics first appeared on NeuroLogica Blog.

Categories: Skeptic

Wood Vaulting for Carbon Sequestration

Mon, 09/30/2024 - 5:14am

I can’t resist a good science story involving technology that we can possibly use to stabilize our climate in the face of anthropogenic global warming. This one is a fun story and an interesting, and potentially useful, idea. As we map out potential carbon pathways into the future, focusing on the rest of this century, it is pretty clear that it is going to be extremely difficult to completely decarbonize our civilization. This means we can only slow down, but not stop or reverse global warming. Once carbon is released into the ecosystem, it will remain there for hundreds or even thousands of years. So waiting for natural processes isn’t a great solution.

What we could really use is a way to cost-effectively remove, at scale, CO2 already in the atmosphere (or from seawater – another huge reservoir) to compensate for whatever carbon release we cannot eliminate from industry, and even to reverse some of the CO2 build up. This is often referred to as carbon capture and sequestration. There is a lot of research in this area, but we do not currently have a technology that fits the bill. Carbon capture is small scale and expensive. The most useful methods so far are chemical carbon capture done at power plants, to reduce some of the carbon released.

There is, however, a “technology” that cheaply and automatically captures carbon from the air and binds it up in solid form – trees. This is why there is much discussion of planting trees as a climate change mitigation strategy. Trees, however, eventually give up their captured carbon back into the atmosphere. So at best they are a finite carbon reservoir. A 2019 study found that if we restored global forests by planting half a trillion trees, that would capture about 20 years worth of CO2 at the current rate of release, or about half of all the CO2 released since 1960 (at least as of 2019). But once those trees matured we would reach a new steady state and further sequestering would stop. This is at least better than continuing to cut down forests and reducing their store of carbon. Tree planting can still be a useful strategy to help buy time as we further decarbonize technology.
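
As a quick sanity check on that figure, here is a back-of-the-envelope sketch using the roughly 36 gigatonnes per year emission rate cited later in this post; all numbers are rough:

```python
# Rough check of the tree-planting figure cited above.
years_of_current_emissions = 20
global_emissions_gt_per_year = 36          # gigatonnes of CO2 per year, approximate

total_capture_gt = years_of_current_emissions * global_emissions_gt_per_year
print(f"Mature restored forests would hold roughly {total_capture_gt} Gt of CO2")
# ~720 Gt of CO2, captured once; after the forests mature, net uptake stops.
```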

But what if we could keep trees from rotting and releasing their captured CO2 back into the atmosphere? They could then become a longer term sequestration strategy. One way to do this is to build stuff out of the wood, and this also has already been proposed. There is a movement to use more wood for commercial construction, as it has a lower carbon footprint than steel or concrete. Wood in a building that is kept dry can easily last hundreds of years.

A recent study now offers a potential other option – we could just bury trees. But wait, won’t they just rot under ground and still release their CO2? Yes – unless the soil conditions are just right. Ning Zeng and his colleagues set out to study if wood could survive long term in specific kinds of soil, those with lots of clay and low oxygen. Zeng found a location near Quebec with soil conditions he thought would be conducive to preserving wood long term. He dug a trench to place fresh wood in the soil so they could then track it over years and measure its carbon release. But here’s the fun part – when they dug the trench they found a log naturally buried in the soil. They examined the log and discovered that it was 3,775 years old. Not only that, they estimate that the log has lost less than 5% of its carbon over that period of time. Nature has already conducted the experiment Zeng wanted to run, so he published those results.

What this means is that we can potentially just grow trees, find or even create locations with the right conditions (clay seems to be the key), and just bury the logs. Then replace the trees and capture more carbon, without the older trees releasing their carbon back. They analyzed the potential of this method and found:

“We estimate a global sequestration potential of up to 10 gigatonnes CO2 per year with existing technology at a low cost of $30 to $100 per tonne after optimization.”

That is a lot. The global release of CO2 is now about 36 gigatonnes per year, so this would be more than a quarter of our current release. So if we can get our global CO2 release below 10 gigatonnes per year, and combine that with burying logs in the right conditions, we could get to net zero, and even net negative. Current methods of direct air capture of CO2 cost $100-$300 per tonne, so if we can get this approach closer to the $30 per tonne figure it would be potentially viable. At the low end, sequestering 10 gigatonnes of CO2 per year using this method would cost $300 billion per year. That's a big number, but not that big if we consider this a global project. Estimates of the cost of global warming range from $1.7 to $38 trillion per year by 2050, which means this could be a cost-effective investment.
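
Here is the same arithmetic in one place, as a back-of-the-envelope sketch using only the figures quoted in this post (they are estimates, not hard numbers):

```python
# Back-of-the-envelope check of the wood vaulting numbers cited above.
sequestration_gt_per_year = 10             # gigatonnes of CO2 per year (upper estimate)
cost_per_tonne_usd = (30, 100)             # optimized cost range, USD per tonne
global_emissions_gt_per_year = 36          # current global CO2 release

fraction_offset = sequestration_gt_per_year / global_emissions_gt_per_year
# $/tonne times Gt/yr gives billions of dollars per year (1 Gt = 1e9 tonnes)
annual_cost_billion = [c * sequestration_gt_per_year for c in cost_per_tonne_usd]

print(f"Share of current emissions offset: {fraction_offset:.0%}")
print(f"Annual cost range: ${annual_cost_billion[0]}B to ${annual_cost_billion[1]}B")
# ~28% of current emissions, at $300B-$1,000B per year, versus climate damage
# estimates of $1.7T-$38T per year by 2050.
```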

Obviously before scaling up this approach we need more study, including a survey of potential locations. But we can certainly get started planting some trees while we figure out where to put them. And a point I frequently make – we should not be putting all our eggs in one basket, or necessarily looking for the one solution to climate change. Reforestation, wood construction, and wood vaulting, combined with other carbon capture technologies, can all work together. We can use trees to capture a lot of carbon over the next 50 to 100 years, altering the path of global climate change significantly.

 

The post Wood Vaulting for Carbon Sequestration first appeared on NeuroLogica Blog.

Categories: Skeptic

What Happened to the Atmosphere on Mars

Thu, 09/26/2024 - 5:10am

Of every world known to humans outside the Earth, Mars is likely the most habitable. We have not found any genuinely Earth-like exoplanets. They are almost sure to exist, but we just haven’t found any yet. The closest so far is Kepler 452-b, which is a super Earth, specifically 60% larger than Earth. It is potentially in the habitable zone, but we don’t know what the surface conditions are like. Within our own solar system, Mars is by far more habitable for humans than any other world.

And still, that’s not very habitable. It’s surface gravity is 38% that of Earth, it has no global magnetic field to protect against radiation, and its surface temperature ranges from -225°F (-153°C) to 70°F (20°C), with a median temperature of -85°F (-65°C). But things might have been different, and they were in the past. Once upon a time Mars had a more substantial atmosphere – today its atmosphere is less than 1% as dense as Earth’s. That atmosphere was not breathable, but contained CO2 which warmed the planet allowing for there to be liquid water on the surface. A human could likely walk on the surface of Mars 3 billion years ago with just a face mask and oxygen tank. But then the atmosphere mostly went away, leaving Mars the dry barren world we see today. What happened?

It’s likely that the primary factor was the lack of a global magnetic field, like we have on Earth. Earth’ magnetic field is like a protective shield that protects the Earth from the solar wind, which is charged so the particles are mostly diverted away from the Earth or drawn to the magnetic poles. On Mars the solar wind did not encounter a magnetic field, and it slowly stripped away the atmosphere on Mars. If we were somehow able to reconstitute a thick atmosphere on Mars, it too would slowly be stripped away, although that would take thousands of years to be significant, and perhaps millions of years in total.

But this may not have been the only process at work. A recent study models the chemistry at the surface of Mars to see if perhaps the abundant CO2 in the early Mars atmosphere might still be there. What the model shows, based on known chemical reactions on Earth, is that CO2 in the early Mars atmosphere would have dissolved in high concentrations in any liquid water. As the CO2-rich water percolated through the crust of Mars it would have combined with olivine, an abundant iron-containing mineral on Mars. The oxygen would have combined with the iron, forming the red rusty color for which Mars is famous, while releasing the hydrogen. This hydrogen would then combine with CO2 to form methane. Over time the olivine would be converted to serpentine, which would then further react with water to form smectite, which today is very common in the clays near the surface of Mars.
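
For readers who want the chemistry made explicit, here is a simplified, idealized version of that reaction chain, written with fayalite (the iron end member of olivine) and the standard CO2 methanation reaction. The real Martian mineralogy, including the serpentine and smectite steps, is messier than these two equations.

```latex
% Iron-bearing olivine oxidized by water, releasing hydrogen:
3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}

% The liberated hydrogen can then reduce dissolved CO2 to methane:
\mathrm{CO_2} + 4\,\mathrm{H_2} \;\rightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O}
```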

The researchers calculate that if Mars has smectite clays down to 1,100 meters deep, that could contain enough sequestered carbon to account for the original amount of carbon in the early atmosphere of Mars. It is possible, therefore, that the atmosphere of Mars may mostly still be there, bound up in clays.

Does this have any practical application? Even if not, it is helpful to add to our knowledge of planetary science – how planets evolve and change over time. But it might also have implications for future Mars missions. A vast store of carbon could be quite useful. If some of that carbon is in the form of methane, that could be a valuable energy source.

In theory we could also release the CO2 from the smectite clays back into the atmosphere. Would this be a good thing (assuming it's feasible)? On the plus side, a thicker atmosphere would warm the planet, making it more livable. It would also reduce the need for pressurized suits and living spaces. Humans could survive at as little as 6% of Earth's atmospheric pressure – not comfortably, but technically survivable. If you get to 30-40%, that is basically like being on top of a tall mountain, something humans could adapt to. We could theoretically get back to the point where a human could survive with just a mask and oxygen tank rather than a pressure suit.

The potential downside is dust storms. They are already bad on Mars and would be much worse with a thicker atmosphere. These occur because the surface is so dry. Ideally as we released CO2 into the atmosphere that would also melt the ice caps and release water from the soil. Surface water would reduce the risk of dust storms.

Terraforming Mars would be extremely tricky, and probably not feasible. But it is interesting to think about how we could theoretically do it. Then we would have the problem of maintaining the atmosphere against further soil chemistry and the solar wind. There has been discussion of how we could create an artificial magnetic field to protect the atmosphere, but again we are talking about massive geoengineering projects. This is all still in the realm of science fiction for now, but it is fun to think about theoretical possibilities.

 

The post What Happened to the Atmosphere on Mars first appeared on NeuroLogica Blog.

Categories: Skeptic

Decarbonizing Aviation and Agriculture

Tue, 09/24/2024 - 5:04am

When we talk about reducing carbon release in order to slow down and hopefully stop anthropogenic global warming, much of the focus is on the energy and transportation sectors. There is a good reason for this – the energy sector is responsible for 25% of greenhouse gas (GHG) emissions, while the transportation sector is responsible for 28% (if you separate out energy production and don't include it in the end-user category). But that is just over half of GHG emissions. We can't ignore the other half. Agriculture is responsible for 10% of GHG emissions, industry for 23%, and residential and commercial activity for 13%. Further, the transportation sector has many components, not just cars and trucks. It includes mass transit, rail, and aviation.

Any plan to deeply decarbonize our civilization must consider all sectors. We won’t get anywhere near net zero with just green energy and electric cars. It is tempting to focus on energy and cars because at least there we know exactly what to do, and we are, in fact, doing it. Most of the disagreement is about the optimal path to take and what the optimal mix of green energy options would be in different locations. For electric vehicles the discussion is mostly about how to make the transition happen faster – do we focus on subsidies, infrastructure, incentives, or mandates?

Industry is a different situation, and has been a tough nut to crack, although we are making progress. There are many GHG-intensive processes in industry (like steel and concrete), and each requires its own solutions and difficult transitions. Also, the solution often involves electrifying some aspect of industry, which works only if the energy sector is green, and which will increase the demand for clean energy. Conservative estimates are that demand in the energy sector will increase by 50% by 2050, but if we are successful in electrifying transportation and industry (not to mention all those data centers for AI applications) this estimate may be way off. This is yet another reason why we need an all-of-the-above approach to green energy.

Let's focus on agriculture and aviation, which are also considered difficult sectors to decarbonize, starting with agriculture. The discussion on agriculture often focuses on meat consumption, because the meat industry is a very GHG-intensive portion of the agricultural sector. There is a good argument to be made for moderating meat consumption in industrialized nations, from both a health and an environmental perspective. This doesn't mean banning hamburgers – a common strawman – but some voluntary moderation would be a good thing. There is also some mitigation possible – yes, I am talking about capturing cow farts.

There are also efforts to shift farming from a net carbon emitter to a net carbon sink. A recent analysis finds that this is plausible, by switching to farming practices that could be a net financial benefit to farmers and help maintain farming productivity in the face of warming. This includes the use of cover crops, combining farming with forestry, and no-till farming. With these methods farms can turn into net carbon sinks.

Obviously this is a temporary mechanism, but it could help buy us time. Right now we are doing the opposite – cutting down forests to convert to farmland, and eliminating carbon sinks. Farming forests, or incorporating more trees into farmland, can help reverse this process.

What about aviation? This is also a difficult sector, like industry, because we don’t have off-the-shelf solutions ready to go. A recent report, however, outlines steps the industry can take over the next five years that can put it on track to reach net zero by 2050. One step, which I had not heard of before, is deploying a global contrail avoidance system. Contrails are vapor trails that form when the hot jet exhaust mixes with cool moist air. These act like artificial cirrus clouds, which have a mild cooling effect during the day but a much more significant warming effect at night (by trapping heat). Contrails are responsible for 35% of the aviation industry’s warming effect. Using AI and satellite data, pilots can be directed to routes that would minimize contrail formation.
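
To make the idea concrete, here is a toy sketch of what contrail-avoidance routing logic boils down to: flag flight segments that pass through cold, ice-supersaturated air and suggest an altitude change. This is my own illustration, not the system described in the report; the thresholds are rough stand-ins for the full contrail-formation physics (the Schmidt-Appleman criterion) and forecast data a real service would use.

```python
# Toy sketch of contrail-avoidance logic: flag waypoints where persistent
# contrails are likely and suggest flying at a different altitude.
from dataclasses import dataclass

@dataclass
class Waypoint:
    altitude_ft: int
    temp_c: float          # ambient temperature
    rh_ice_pct: float      # relative humidity with respect to ice

def contrail_risk(wp: Waypoint) -> bool:
    """Persistent contrails need very cold, ice-supersaturated air (illustrative thresholds)."""
    return wp.temp_c < -40 and wp.rh_ice_pct >= 100

def suggest_reroutes(route: list[Waypoint]) -> list[str]:
    return [f"Segment {i}: consider climbing or descending ~2,000 ft"
            for i, wp in enumerate(route) if contrail_risk(wp)]

route = [
    Waypoint(35000, -52.0, 110.0),   # cold and humid: likely persistent contrail
    Waypoint(37000, -56.0, 80.0),    # too dry: no persistent contrail
]
print(suggest_reroutes(route))
```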

They also recommend system-wide efficiency strategies, which they find could halve fuel burn from aviation by 2050. That seems incredible, but they argue that there are efficiencies that individual companies are unable to capture on their own but that can be achieved with system-wide policies.

The next point is a bit more obvious – switching to sustainable aviation fuel (SAF). This mostly means making jet fuel from biomass. The report recommends policy changes that will help the industry rapidly scale up biofuel production from biomass. That's really the only way to decarbonize jet travel. Hydrogen will never be energy dense enough for aviation.

We are on the verge of seeing commercial electric planes, mainly because of advances in battery technology. These could fill the regional service and small city routes, with ranges in the 300-400 mile zone (and likely to increase as battery technology continues to advance). Not only would an electric plane industry replace current fossil-fuel burning regional flights, but they could expand the industry and displace other forms of travel. Many more people may choose to take a quick flight from a regional airport than drive for 8 hours.

Their last recommendation is essentially a roll of the dice: “Launching several moonshot technology demonstration programmes designed to rapidly assess the viability and scalability of transformative technologies, bringing forward the timeline for their deployment.”

This sounds partly like an admission that we don’t currently have all the technology we would need to fully decarbonize aviation. Maybe they consider this to be not absolutely necessary but a good option. In any case, I am always in favor of supporting research in needed technology areas. This has generally proven to be an investment worth making.

As is often the case, this all looks good on paper. We just have to actually do it.

The post Decarbonizing Aviation and Agriculture first appeared on NeuroLogica Blog.

Categories: Skeptic

Subjective Neurological Experience

Thu, 09/19/2024 - 4:47am

On the SGU we recently talked about aphantasia, the condition in which some people have a decreased or entirely absent ability to form mental imagery. The term was coined recently, in 2015, by neurologist Adam Zeman, who described the condition of “congenital aphantasia” as being born without mental imagery. After we discussed it on the show we received numerous e-mails from people with the condition, many of whom were unaware that they were different from most other people. Here is one recent example:

“Your segment on aphantasia really struck a chord with me. At 49, I discovered that I have total multisensory aphantasia and Severely Deficient Autobiographical Memory (SDAM). It’s been a fascinating and eye-opening experience delving into the unique way my brain processes information.

Since making this discovery, I’ve been on a wild ride of self-exploration, and it’s been incredible. I’ve had conversations with artists, musicians, educators, and many others about how my experience differs from theirs, and it has been so enlightening.

I’ve learned to appreciate living in the moment because that’s where I thrive. It’s been a life-changing journey, and I’m incredibly grateful for the impact you’ve had on me.”

Perhaps more interesting than the condition itself, and what I want to talk about today, is that the e-mailer was entirely unaware that most of the rest of humanity has a very different experience of their own existence. This makes sense when you think about it – how would they know? How can you know the subjective experience happening inside someone else's brain? We tend to assume that other people's brains function similarly to our own, and therefore that their experience must be similar. This is partly a reasonable assumption, and partly projection. We do this psychologically as well. When we speculate about other people's motivations, we generally are just projecting our own motivations onto them.

Projecting our neurological experience, however, is a little different. The aphantasia experience demonstrates a couple of things, beginning with the fact that whatever is normal for you simply feels like the universal human experience. We don't know, for example, if we have a deficit, because we cannot detect what is missing. We can only really know by sharing other people's experiences.

For example, let's consider color vision. Someone who is completely color blind, who sees only in shades of grey, would have no idea that they are not seeing color, or that color even exists as a phenomenon, except that other people speak of perceiving this thing called color. Even then it may take time, as they grow, to realize that other people are experiencing something they are not. But if they lived in a world of color-blind people, they would never know what they were missing.

This also relates to the old question – is what I experience as “red” the same thing that you experience as “red”? Is there any way we can know? We can only infer from indirect evidence. It's likely that people experience colors similarly, since we tend to attach the same emotions and feelings to those colors, but of course that could also be learned. However, there is no reason to assume our color experiences are identical. There are likely differences in vibrancy, contrast, shading, and other details. Also there are many people who are partially color blind (like me – I have a deficit in red-green distinction). I would never know, however, that my color vision was different from most people's were it not for those tests we were forced to take where we try to see the number in the circles.

Similarly, if you cannot form visual mental representations in your mind, you might assume everyone is that way. Several people with aphantasia have told me that when other people talked about “seeing” things in their mind, they assumed it was a metaphor. They had no idea other people were literally seeing an image in their mind.

Sometimes even the objective lack of a sensory experience might be entirely unknown to the person. For example, people who are born with decreased sensation because of a disorder of their nerves do not know this. Whatever sensation they have is normal for them. So they don't complain of numbness, even though on exam they have a profound decrease in sensation (that's how we know it's congenital and not acquired).

We should, I think, extrapolate from this experience. There are likely countless ways in which our brains differ from each other in how they construct our subjective experience of reality, our abstractions, our emotional worlds, and our sensory perceptions. These are all brain constructs, dependent on the particulars of networks and nodes in the brain, how they connect, and how they function. We cannot get outside of this – this is who and what we are.  This is why neuroscientists have moved toward the concept of “neurodiversity” – understanding the full diversity of how different human brains function. There may be a “typical” brain, in one or more aspects, but there is also lots of diversity. We also should not automatically pathologize this diversity and assume anything not typical is a “disorder” or even worse, a “disease.” Mostly biological diversity is a matter of different tradeoffs.

Even when we recognize that some forms of neurodiversity may qualify as a “disorder”, meaning that there are demonstrable objective negative outcomes, sometimes this is very context dependent. They may only have negative outcomes because neurotypicals have designed society to best suit themselves. Those who are neuroatypical may be on the short end of the tradeoffs, but that is not an inherent reality, just a societal choice.

Even more fascinating to me is to think about the universal human neurological experience. In other words – what do humans lack, or in what ways is human experience of reality idiosyncratic? Just like those with aphantasia, we likely will never know – not until we encounter other intelligent species who experience reality differently. If we are even able to sufficiently communicate with them, we may find their realities are very different from our own. Until then we may not know what it truly means to be human.

The post Subjective Neurological Experience first appeared on NeuroLogica Blog.

Categories: Skeptic

The Potential of AI + CRISPR

Tue, 09/17/2024 - 4:59am

In my book, which I will now shamelessly promote – The Skeptics’ Guide to the Future – my coauthors and I discuss the incredible potential of information-based technologies. As we increasingly transition to digital technology, we can leverage the increasing power of computer hardware and software. This is not just increasing linearly, but geometrically. Further, there are technologies that make other technologies more information-based or digital, such as 3D printing. The physical world and the virtual world are merging.

With current technology this is perhaps most profound when it comes to genetics. The genetic code of life is essentially a digital technology. Efficient gene-editing tools, like CRISPR, give us increasing control over the genetic code. Arguably two of the most dramatic science and technology news stories over the last decade have been advances in gene editing and advances in artificial intelligence (AI). These two technologies also work well together – the genome is a large complex system of interacting information, and AI tools excel at dealing with large complex systems of interacting information. This is definitely a “you got chocolate in my peanut butter” situation.

A recent paper nicely illustrates the synergistic power of these two technologies – Interpreting cis-regulatory interactions from large-scale deep neural networks. Let’s break it down.

Cis-regulatory interactions refer to several regulatory functions of non-coding DNA. Coding DNA, which is contained within genes (genes contain both coding and non-coding elements), directly codes for amino acids, which are assembled into polypeptides and then folded into functional proteins. Remember the ATCG four-letter base code, with three bases coding for a specific amino acid (or a coding function, like a stop signal). This is coding DNA. Non-coding DNA regulates how coding DNA is expressed as protein.
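
To make the “three bases per amino acid” point concrete, here is a minimal sketch of reading a coding sequence in triplets. Only a handful of codons from the standard genetic code are included, and the sequence is written as DNA sense-strand triplets for simplicity:

```python
# Minimal illustration of reading coding DNA in triplets (codons).
# Only a few entries of the standard genetic code are shown.
CODON_TABLE = {
    "ATG": "Met",                       # also the usual start codon
    "GCT": "Ala", "GCC": "Ala",
    "TGG": "Trp",
    "GAA": "Glu",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(coding_dna: str) -> list[str]:
    peptide = []
    for i in range(0, len(coding_dna) - 2, 3):
        amino_acid = CODON_TABLE.get(coding_dna[i:i + 3], "???")
        if amino_acid == "STOP":
            break                        # stop signal: end of the polypeptide
        peptide.append(amino_acid)
    return peptide

print(translate("ATGGCTTGGGAATAA"))      # ['Met', 'Ala', 'Trp', 'Glu']
```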

There are, for example, promoter sequences, which are necessary for transcription in eukaryotes. There are also enhancer sequences which increase transcription, and silencer sequences which decrease transcription. Interactions among these various regulatory segments control how much of which proteins any particular cell will make, while responding dynamically to its metabolic and environmental needs. It is a horrifically complex system, as one might imagine.

CRISPR gives us the ability not only to change the coding sequence of a gene (or remove or splice in entire genes), it can also be used to alter the regulation of gene expression. It can reversibly turn off, and then back on again, the transcription of a gene. But doing so messes with this complex system of regulatory sequences, so the more we understand about it, the better. Also, we are discovering that there are genetic diseases that do not involve mutations of coding DNA but of regulatory DNA. So again, the more we understand about the regulatory system, the better we will be able to study and eventually treat diseases of gene expression regulation.

This is a perfect job for AI, and in this case specifically, deep neural networks (DNN). The problem with conventional research into a massive and complex system like the human genome (or any genome) is that the number of individual experiments you would need to do in order to address even a single question can be vast. You would need the resources of laboratory time, personnel and money to do thousands of individual experiments. Or – we could let AI do those experiments virtually, at a tiny fraction of the cost and time. This is exactly the tool that the researchers have developed. They write:

“Here we present cis-regulatory element model explanations (CREME), an in silico perturbation toolkit that interprets the rules of gene regulation learned by a genomic DNN. Applying CREME to Enformer, a state-of-the-art DNN, we identify cis-regulatory elements that enhance or silence gene expression and characterize their complex interactions.”

Essentially this is a two-step process. Enformer is a DNN that plows through tons of data to learn the rules of gene regulation. The problem with some of these AIs, however, is that they spit out answers but not necessarily the steps that led to the answers. This is the so-called “black box” problem of some AIs. But genetics researchers want to know the steps – they want to know the individual regulatory elements that Enformer identified as the building blocks for the overall rules it produces. That is what CREME does – it looks at the rule output of Enformer and reverse engineers the cis-regulatory elements.

The combination essentially allows genetics researchers to run thousands of virtual experiments in silico to build a picture of the cis-regulatory elements and interactions that make up the web of rules controlling gene expression. This is a great example of how AI can potentially dramatically increase the pace of scientific research. It also highlights how genetics is perhaps ideally suited to reap the benefits of AI-enhanced research, because it is already an inherently digital science.
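
To give a flavor of what those virtual experiments look like, here is a conceptual sketch of in silico perturbation (this is my own toy illustration, not the actual CREME or Enformer API): scramble a candidate regulatory element, run both versions of the sequence through a trained model, and see how much the predicted expression changes. Elements whose removal changes the prediction the most are the ones doing the regulatory work.

```python
# Conceptual sketch of in silico perturbation of regulatory elements.
# predict_expression is a toy stand-in for a trained genomic DNN
# (something Enformer-like); here it just scores a GC-box-like motif.
import random

def predict_expression(sequence: str) -> float:
    """Toy model: count a GC-box-like motif as a proxy for predicted expression."""
    return float(sequence.count("GGGCGG"))

def shuffle_element(sequence: str, start: int, end: int) -> str:
    """Scramble one candidate regulatory element, leaving the rest intact."""
    element = list(sequence[start:end])
    random.shuffle(element)
    return sequence[:start] + "".join(element) + sequence[end:]

def element_effect(sequence: str, start: int, end: int, n_shuffles: int = 20) -> float:
    """Average drop in predicted expression when the element is scrambled."""
    baseline = predict_expression(sequence)
    deltas = [baseline - predict_expression(shuffle_element(sequence, start, end))
              for _ in range(n_shuffles)]
    return sum(deltas) / n_shuffles

seq = "ATATATAT" + "GGGCGG" + "ATATATAT"   # toy sequence with one enhancer-like motif
print(element_effect(seq, 8, 14))          # positive: this element boosts the prediction
print(element_effect(seq, 0, 8))           # ~0: scrambling the AT run changes nothing
```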

This is perhaps the sweet spot for AI-enhanced scientific research – look through billions of potential targets and tell me which 2 or 3 I should focus on. This also applies to drug research and material science, where the number of permutations – the potential space of possible solutions – is incredibly vast. For many types of research, AI is condensing months or years of work into hours or days of processing time.

For genetics these two technologies (AI and gene-editing such as, but not limited to, CRISPR) combine to give us incredible knowledge and control over the literal code of life. It still takes a lot of time to translate this into specific practical applications, but they are coming. We already, for example, have approved therapies for genetic diseases, like sickle cell, that previously had no treatments that could alter their course. More is coming.

This field is getting so powerful, in fact, that we are discussing the ethics of potential applications. I understand why people might be a little freaked out at the prospect of tinkering with life at its most fundamental level. We need a regulatory framework that allows us to reap the immense benefits without unleashing unintended consequences, which can be similarly immense. For now this largely means that we don’t mess with the germ line, and that anything a company wishes to put out into the world has to be individually approved. But like many technologies, as both AI and genetic manipulation gets cheaper, easier, and more powerful, the challenge will be maintaining effective regulation as the tech proliferates.

For now, at least, we can remain focused on ethical biomedical research. I expect in the next 5-20 years we will see not only increasing knowledge of genetics, but specific medical applications. There is still a lot of low hanging fruit to be picked.

The post The Potential of AI + CRISPR first appeared on NeuroLogica Blog.

Categories: Skeptic

Flooding is Increasing

Mon, 09/16/2024 - 5:04am

Last month my flight home from Chicago was canceled because of an intense rainstorm. In CT the storm was intense enough to cause flash flooding, which washed out roads and bridges and shut down traffic in many areas. The epicenter of the rainfall was in Oxford, CT (where my brother happens to live), which qualified as a 1,000 year flood (on average a flood of this intensity would occur once every 1,000 years). The flooding killed two people, with an estimated $300 million of personal property damage, and much more costly damage to infrastructure.

Is this now the new normal? Will we start seeing 1,000 year floods on a regular basis? How much of this is due to global climate change? The answers to these questions are complicated and dynamic, but basically yes, yes, and partly. This is just one more thing we are not ready for that will require some investment and change in behavior.

First, some flooding basics. There are three categories of floods. Fluvial floods (the most common in the US) occur near rivers and lakes, and essentially result from existing bodies of water overflowing their banks due to heavy cumulative or sudden rainfall. There are also pluvial floods, which are likewise due to rainfall but occur independent of any existing body of water. The CT floods were mainly pluvial. Finally, there are coastal floods, related to the ocean. These can be due to extremely high tides, storm surges from intense storms like hurricanes, and tsunamis, which are essentially giant waves.

How does global warming contribute to flooding? First, there has been about 6-8 inches of sea level rise in the last 100 years. As water warms it expands, which causes some of the sea level rise. Also, melting glacial ice ends up in the ocean. Sea ice melt does not contribute, because floating ice already displaces the same amount of water as it would occupy when melted. Higher sea levels mean higher high tides, resulting in more tidal flooding. Increased temperature also means there is more moisture in the air, which leads to heavier rainfall – more fluvial and pluvial flooding and worse storm surges.

In terms of flooding damage there are other factors at play as well. We have been developing more property in floodplains – in the US we developed 2 million acres of property in floodplains in the last two decades, half of which was in Florida.

In addition, there have been two development trends that can worsen flooding. First, we have put down a lot of concrete and asphalt. When it rains or there is a storm surge, the water has to go somewhere. Flooding results when the water coming in exceeds the water going out. Water out includes rivers carrying water to the sea, but also the land absorbing water. The more land that is covered with concrete, the farther the water has to spread before it gets absorbed. The result is increased flooding.
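
As a rough illustration of that water-in-versus-water-out point, civil engineers often estimate peak runoff with the rational method, Q = 0.278 × C × i × A, where the runoff coefficient C rises as land is paved over. The numbers below are typical textbook values, not a model of any particular watershed:

```python
# Toy illustration of why paving over land worsens flooding, using the
# rational method for peak runoff: Q = 0.278 * C * i * A
#   Q = peak runoff (m^3/s), C = runoff coefficient (fraction of rain that
#   runs off instead of soaking in), i = rainfall intensity (mm/hr),
#   A = drainage area (km^2). Coefficients are typical textbook ranges.
def peak_runoff_m3s(c: float, intensity_mm_hr: float, area_km2: float) -> float:
    return 0.278 * c * intensity_mm_hr * area_km2

area_km2 = 5.0
intensity_mm_hr = 50.0                      # a heavy downpour

for label, c in [("forest/open soil", 0.2), ("suburban", 0.5), ("mostly paved", 0.9)]:
    q = peak_runoff_m3s(c, intensity_mm_hr, area_km2)
    print(f"{label:>16}: C={c:.1f} -> peak runoff ~{q:.0f} m^3/s")
# Same storm, same area: paving the watershed more than quadruples the peak
# flow that storm drains and streams have to carry.
```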

Second, local communities often build dams and levees in order to protect themselves from flooding, either along the coast or along rivers. However, this can make flooding worse. It actually extends the floodplain deeper inland or farther from major rivers, and intensifies the flooding when it occurs. Again, the water has to go somewhere. This means that even communities dozens of miles inland may still be in a coastal floodplain, even if they are not aware of it and don't have proper protections (including flood insurance). The result is a predictable increase in flood damage. According to FEMA:

“From 1980–2000, the NFIP paid almost $9.4 billion in flood insurance claims. From 2000–2020, that number increased over 660% to $62.2 billion.”

What can we do? We can’t change the laws of physics. Water is heavy, and flowing water can have massive momentum, capable of causing extreme damage. People caught in a flood learn the hard way how powerful water can be, which is why so many people are just “swept away” by flood waters. Also, once flooding occurs, flowing water will likely carry a lot of debris, which just adds to further damage. We also can’t change the physics of the water cycle – water will evaporate and then rain back down, and will have to flow to bodies of water or get absorbed into soil.

What we can do is everything possible to slow and hopefully stop anthropogenic climate change. This is just one more reason we need to transition to a green economy. Increasing flood damage (and the cost of mitigation) needs to be factored into the cost of emitting CO2.

But we already have the effects of existing climate change, and a certain amount more is already baked in over the next century regardless of what we do (it will just be degrees of bad depending on how quickly we decarbonize our industries). This means we need to think about flood mitigation. This is economically and socially tricky. There are existing communities in floodplains, and it would be no simple matter to uproot and move them. There are also a lot of economic incentives for states and communities to expand into floodplains. Lakeside and coastal properties are often attractive.

It does seem reasonable, however, to set limits on development in high-risk floodplains, and to encourage shifting to lower-risk areas. I don't think we should uproot communities, but arranging incentives and regulations so that trends over time shift away from floodplains is feasible. Also, if a community is devastated by a flood, perhaps we shouldn't just rebuild in the floodplain. If we have to rebuild anyway, why not somewhere safer? I know this is massively complex and painful, but simply rebuilding in a high (and increasing) risk floodplain does not seem rational.

Local regulations can also require building standards that are resistant to flooding, such as putting homes on raised foundations, and putting structures on relatively high ground while leaving low-lying land for water flow. Communities in floodplains, in other words, need to be engineered with flooding in mind: have lots of open soil to absorb water, have adequate drainage to accommodate heavy rainfall, and raise up property as high as possible.

Finally, civil engineers need to continue to study the dynamics of floodplains to make sure, at least, we aren’t making the problem worse when each community just tries to protect themselves. We need an integrated plan to manage the entire floodplain.

It’s a difficult problem, and there is no simple solution. But I have been reading about this topic for years, and it seems like we are still having the same problems and wrangling over the same issues. There are efforts on the Federal level to address flooding, but they all seem either reactive or small scale. We may need an aggressive national-level strategy to properly address this issue. Otherwise – get ready for 1,000 year flooding.

The post Flooding is Increasing first appeared on NeuroLogica Blog.

Categories: Skeptic

Carbon Fiber Structural Battery

Thu, 09/12/2024 - 5:01am

I have written previously about the concept of structural batteries, such as this recent post on a concrete battery. The basic idea is a battery made out of material that is strong enough to bear a load. Essentially we're asking the material to do two things at once – be a structural material and be a battery. I am generally wary of such approaches to technology, as you often wind up with something that is bad at two things, rather than simply optimizing each function.

In medicine, for example, I generally don't like combo medications – a single pill with two drugs meant to be taken together. I would rather mix and match the best options for each function. But sometimes there is such a convenience in the combination that it's worth it. As with any technology, we have to consider the overall tradeoffs.

With structural batteries there is one huge gain – the weight and/or volume savings of having a material do double duty. The potential here is too great to ignore. For the concrete battery the advantage is about volume, not weight. The idea is to have the foundation of a building serve as individual or even grid power storage. For a structural battery that will save weight, we need a material that is light and strong. One potential material is carbon fiber, which may be getting close to having the characteristics needed for practical applications.

Material scientists have created in the lab a carbon fiber battery material that could serve as a structural battery. Carbon fiber is a good substrate because it is light and strong, and can also be easily shaped as needed. Many modern jets are largely made out of carbon fiber for this reason. Of course you compromise the strength when you introduce the materials needed for energy storage, and these researchers have been working on achieving an optimal compromise. Their latest product has an elastic modulus that exceeds 76 GPa. For comparison, aluminum, which is also used for aircraft, has an elastic modulus of 70 GPa. Optimized carbon fiber has an elastic modulus of 200-500 GPa. Elastic modulus is a measure of stiffness – the resistance to non-permanent (elastic) deformation. Being stiffer than aluminum means the material is in the range suitable for making lots of things, from laptops to airplanes.

How is the material as a battery? The basic features are good – it is stable, can last over 1,000 cycles, and can be charged and discharged fast enough. But of course the key feature is energy density – their current version has an energy density of 30 Wh/kg. For comparison, a typical Li-ion battery in an electric vehicle today has an energy density of 200-300 Wh/kg. High-end Amprius silicon-anode Li-ion batteries are up to 400-500 Wh/kg.

So, as I said, the carbon fiber structural battery is essentially bad at two things. It is not as strong as regular carbon fiber, and it is not as good a battery as Li ion batteries. But is the combination worth it? If we run some numbers I think the answer right now is – probably not. What I don’t know is the cost to mass produce this material, which so far is just a laboratory proof of concept. All bets are off if the material is super expensive. But let’s assume cost is reasonable and focus on the weight and energy storage.

If, for example, we look at a typical electric vehicle, how would the availability of this material be useful? It's hard to say exactly, because I would need to see the specs on a vehicle engineered to incorporate this material, but let's do some rough estimates. A Tesla, for example, has a chassis made of steel and titanium, with a body that is almost entirely aluminum. So we could replace the aluminum in such a vehicle with structural carbon fiber, which is stiffer and lighter. Depending on the vehicle, we're talking about roughly 100 kg of carbon fiber for the body of a car. The battery weighs about 500 kg. The carbon fiber battery has one tenth the specific energy of a Tesla Li-ion battery, so 100 kg of carbon fiber battery would hold as much energy as 10 kg of conventional battery. This would allow a reduction in the battery weight from 500 kg to 490 kg. That hardly seems worth it, for what is very likely to be a more expensive material than either the current battery or aluminum.

Of course you could beef up the frame, which could have the double advantage of making it stronger and longer lasting. Let's say you have a triple-thick 300 kg carbon fiber frame – that still only saves you 30 kg of battery. My guess is that we would need to get that energy density up to 100 Wh/kg or more before the benefits start to become worth it.

The calculus changes, however, when we talk about electric aircraft. Here there is a huge range of models, but just to throw out some typical figures – we could be talking about a craft that weighs 1,500 kg with a battery that weighs 2,000 kg. If the body of the craft were made out of structural carbon fiber, that could knock 150 kg off the weight of the battery, which for an aircraft is significant. For a commercial aircraft it might even be worth the higher cost of the plane, given the lower operating costs.
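
The arithmetic behind those estimates is simple enough to put in one place. This is just the rough math of the post, not engineering data: the conventional battery mass you can remove equals the structural mass times the ratio of the two energy densities.

```python
# Rough arithmetic for how much conventional battery a structural battery
# displaces, using the ballpark figures from this post.
def battery_mass_saved_kg(structural_mass_kg: float,
                          structural_wh_per_kg: float = 30.0,
                          battery_wh_per_kg: float = 300.0) -> float:
    """Conventional battery mass whose energy the structure now carries."""
    return structural_mass_kg * structural_wh_per_kg / battery_wh_per_kg

print(battery_mass_saved_kg(100))     # car body: ~10 kg off a ~500 kg pack
print(battery_mass_saved_kg(300))     # beefed-up triple-thick frame: ~30 kg
print(battery_mass_saved_kg(1500))    # electric aircraft airframe: ~150 kg
```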

What about the low end of the spectrum – say a laptop or smartphone? A laptop might be the sweet spot for this type of material. If the case were made of a carbon fiber battery, that could allow for a thinner and lighter laptop, or it could extend the laptop's battery life, both of which would be desirable. These are already expensive devices, and adding a bit to the overall cost to improve performance is likely something consumers will pay for. But of course the details matter.

Given all this – is the carbon fiber structural battery ready for commercial use? I think it's marginal. It's plausible for commercial electric aircraft and maybe laptops, depending on the ultimate manufacturing cost, and assuming no hidden gotchas in terms of material properties. We may be at the very low end of viability. Any improvement from this point, especially in energy density, makes it much more viable. Widespread adoption, such as in EVs, probably won't come until we get to 100 Wh/kg or more.

The post Carbon Fiber Structural Battery first appeared on NeuroLogica Blog.

Categories: Skeptic

Artificial Robotic Muscles

Tue, 09/10/2024 - 5:03am

By now we have all seen the impressive robot videos, such as the ones from Boston Dynamics, in which robots show incredible flexibility and agility. These are amazing, but I understand they are a bit like trick-shot videos – we are being shown the ones that worked, which may not represent a typical outcome. Current robot technology, however, is a bit like steam-punk – we are making the most out of an old technology, but that technology is inherently limiting.

The tech I am talking about is motor-driven actuators. An actuator is a device that converts energy into mechanical force, such as torque or displacement. This is a technology that is about 200 years old. While they get the job done, they have a couple of significant limitations. One is that they use a lot of energy, much of which is wasted as heat. This is important as we try to make battery-driven robots that are not tethered to a power cord. Dog-like and humanoid robots typically last 60-90 minutes on one charge. Current designs are also relatively hard and rigid, which limits how they can interact with the environment. They also depend heavily on sensors to read their environment.

By contrast, we can think about biological systems. Muscles are much more energy efficient, are soft, can be incredibly precise, are silent, and contain some of their own feedback to augment control. Developing artificial robotic muscles that perform similarly to biological systems is now a goal of robotics research, but it is a very challenging problem to crack. Such a system would need to contract slowly or quickly, and even produce bursts of speed (if, for example, you want your robot to jump). It would need to produce a lot of power, enough for the robot to move itself and carry out whatever function it has. It would also need to be able to efficiently hold a position for long periods of time.

As a bonus, human muscles have stretch receptors in them which provide feedback to the control system, which not only enhances control but allows for rapid reflexive movements. Biological systems are actually very sophisticated, which is not surprising given that they have had hundreds of millions of years to evolve. Reverse engineering such systems is no easy task.

Researchers, however, have made some preliminary progress. To start, they need a material that can contract or stretch (or change its shape in some way) when a voltage is applied to it. That is the fundamental function of a muscle – it contracts when activated by nerve stimulation. Muscles will also contract when an external electrical stimulus is applied to them. The musculoskeletal system is essentially a system of contracting muscles, arranged so as to move joints in different directions – the biceps flexes the elbow while the triceps extends the elbow, for example. There are also often multiple muscles for the same action, each with a different position of maximal mechanical advantage.

Designing such a system won't be the hard part – thinking about such forces is bread and butter for engineers. The limiting factor right now is the material science, the artificial muscle itself. The other technological challenge (where we have already made good progress) is developing the various sensors that work together to provide all the necessary feedback. Humans, for example, use multiple sensory modalities at the same time. We use vision, of course, to see our environment and guide our movements. We also have proprioception, which allows our brains to sense where our limbs are in three-dimensional space. This is why you can move accurately with your eyes closed (close your eyes and touch your nose – that's proprioception). The vestibular system tells us how we are oriented with respect to gravity and senses any accelerating forces acting on us (such as spinning around). We also have tactile sensation, so we can sense when we are touching something (our feet against the ground, or something in our hands). Our muscles can also sense when they are being stretched, which further helps coordinate movement.

Our brains process all of this information in real time, comparing them to each other to provide a unified sense of how we are oriented and how we are moving. Motion sickness, vertigo, and dizziness result when the various sensory streams do not all sync up, or if the brain is having difficulty processing it all.

Designing a robotic system that can do all this is challenging, but it starts with the artificial muscles. There are a few approaches in development. MIT researchers, for example, developed a fiber made of different materials with different thermal expansion properties. When stimulated the fiber coils, and therefore shortens. Muscles are made of many individual fibers that shorten when activated, so this could serve as the building block of a similar approach. The question is – will dozens or hundreds of these fibers work together to form a muscle?

More recently scientists have developed an electrohydraulic system – essentially bags of oil that contract or stretch when stimulated. Preliminary testing is promising, with a key feature that the system is energy efficient.

A recent Nature review breaks down the various artificial muscle systems by the environmental stimuli to which they respond: “According to different stimuli, artificial muscles can be categorized as thermoresponsive, electrically responsive, magnetically responsive, photoresponsive, chemically responsive, and pressure driven.” There are also multi-stimuli-driven systems. They can also be categorized by potential application. These include micro-robotic systems, where very tiny actuators are needed. There are also biomedical applications, such as prosthetics and implantable devices. And of course there are robotic applications, but this is a huge category that includes many different sizes and designs of robots.

Most of this research has been essentially done in the last decade, so it is still very new. Interest and investment is increasing, however, as the potential of “microactuators” and “soft robotics” is better understood. This could potentially be a transformative technology, with lots of applications beyond just building more efficient and agile robots.

The post Artificial Robotic Muscles first appeared on NeuroLogica Blog.

Categories: Skeptic

Marmosets Call Each Other By Name

Tue, 09/03/2024 - 5:00am

Humans identify and call each other by specific names. So far this advanced cognitive behavior has only been identified in a few other species: dolphins, elephants, and some parrots. Interestingly, it had never been documented in our closest relatives, non-human primates – that is, until now. A recent study finds that marmoset monkeys have unique calls, “phee-calls”, that they use to identify specific individual members of their group. The study also found that within a group of marmosets, all members use the same name to refer to the same individual, so they are learning the names from each other. Also interesting, different families of marmosets use different kinds of sounds in their names, as if each family has its own dialect.

In these behaviors we can see the roots of language and culture. It is not surprising that we see these roots in our close relatives. It is perhaps more surprising that we don’t see them more in our very closest relatives, like chimps and gorillas. What this implies is that these sorts of high-level behaviors, such as learning names for specific individuals in your group, are not merely a consequence of neurological development. You need something else. There needs to be an evolutionary pressure.

That pressure is likely living in an environment and situation where family members are often out of visual contact with each other. Part of this is the ability to communicate over distances long enough to put individuals out of visual contact. For example, elephants can communicate over miles. Dolphins often swim in murky water with low visibility. Parrots and marmosets live in dense jungle. Of course, you need both the evolutionary pressure and the neurological sophistication for the behavior – the potential and the need have to align.

There is also likely another element – the quirkiness of evolution. Not all species will find the same solution to the same problem. Many animals evolve innate calls that they use to communicate to their group – such as warnings that a predator is near, or a summons that they have found food. But very few have hit upon the strategy of adjusting those calls to represent specific individuals.

The researchers hope that this one puzzle piece will help them investigate the evolution of human language. I find this a fascinating topic, but it’s one that is difficult to research. We have information preserved in writing, which goes back about 5,400 years. We have extant cultural knowledge as well – the languages that people around the world speak today. But that’s it, a window going back about five thousand years. We also have information from our closest relatives – the uses of language and the language potential in non-human primates. This can give us a somewhat complicated window into the evolution of human language, but it picks up with our last common ancestor about 8 million years ago (with a wide range of uncertainty).

In between these two time periods, when all the interesting stuff was happening, we have almost no information. We have cave paintings going back tens of thousands of years, and these give us some insight into the intellectual world of our ancestors, but not directly into their language. We can study hominid anatomy to see if their larynxes were optimized for human speech. Only Homo sapiens have a “modern” vocal tract. Neanderthals were close but had some specific differences, which likely meant their vocal range was lower than that of modern humans. But this does not mean that our older ancestors could not communicate vocally. Some researchers argue that primates have had sufficient vocal anatomy for some speech going back 27 million years.

But again, this gives us scant information about the evolution of language itself. Most of what we know comes from examining the direct evidence of actual language, from the last few thousand years. We can still learn a lot from this, from studying what different languages have in common, how they are structured, and their internal logic. We can also investigate the neurological correlates of language, and there are ways to disentangle which components of language are evolved (wired in the brain) and which are cultural and learned.

One concept I find interesting is that of embodied cognition. We use a lot of words to represent abstract ideas that are metaphors for physical relationships. A boss is “above” their employee, but not literally physically above them. Ideas can be “deep”, and arguments can be “strong” or “weak”. This makes some evolutionary sense. Things generally evolve from simpler but fully functional forms. They do not evolve directly to their modern complexity. The eye evolved from simpler forms, but ones that were fully functional for what they did.

Similarly, it is likely that language evolved from simpler forms of vocal communication, but forms that functioned. What is especially interesting is that language also relates to cognition. The two may have evolved hand-in-hand. First we developed sounds for the concrete things in our world, then for features of those concrete things. At some point there was a cognitive breakthrough – the metaphor. This stone is rough and it hurts to rub it. Your behavior is also “rough” and “hurts”. What’s interesting to think about is which came first, the idea or the word. Or did they crystallize together? Likely there was some back and forth, with ideas pushing language forward, which in turn pushed ideas forward. Language and ideas slowly expanded together. This resulted in a cognitive explosion that separates us from all other animals on Earth.

The elements that led to this explosion can be found in our ancestors. But only in humans did they all come together.

The post Marmosets Call Each Other By Name first appeared on NeuroLogica Blog.

Categories: Skeptic

Accusation of Mental Illness as a Political Strategy

Fri, 08/30/2024 - 5:10am

I am not the first to say this but it bears repeating – it is wrong to use the accusation of a mental illness as a political strategy. It is unfair, stigmatizing, and dismissive. Thomas Szasz (let me say straight up – I am not a Szaszian) was a psychiatrist who made it his professional mission to make this point. He was concerned especially about oppressive governments diagnosing political dissidents with mental illness and using that as a justification to essentially imprison them.

Szasz had a point (especially back in the 1960s when he started making it) but unfortunately took his point way too far, as often happens. He decided that mental illness, in fact, does not exist, and is 100% political oppression. He took a legitimate criticism of the institution of mental health and abuse by oppressive political figures and systems and turned it into science denial. But that does not negate the legitimate points at the core of his argument – we should be careful not to conflate unpopular political opinions with mental illness, and certainly not use it as a deliberate political strategy.

While the treatment of mental illness is much better today (at least in developed nations), the strategy of labeling your political opponents as mentally ill continues. I truly, sincerely wish it would stop. For example, in a recent interview on ABC, Senator Tom Cotton was asked about some fresh outrageous thing Trump said, criticism of which Cotton waved away as “Trump Derangement Syndrome”.

Sorry, but you cannot just make up a new mental illness by framing it in clinical-sounding terminology. There is no such syndrome, and it is supremely insulting and dismissive to characterize political disagreements as “derangement”. Szasz should be turning over in his grave. This is, of course, an ad hominem strategy – attacking the person rather than the argument. “Oh, you just feel that way because you are suffering from some derangement, you poor thing.” This can also cut both ways – I have heard some on the left argue that it is, in fact, those who support Trump who are suffering from TDS. Some may justify this as “turnabout is fair play”, but in this case it isn’t. Don’t play their game, and don’t endorse the underlying strategy of framing political disagreements as mental illness.

Sometimes accusations are leveled at individuals, which in some ways is worse than accusing half the country of derangement. The most recent episode to come to my attention stems from the Arlington National Cemetery controversy. When the Trump campaign was asked for a statement, this is what they gave:

“The fact is that a private photographer was permitted on the premises and for whatever reason an unnamed individual, clearly suffering from a mental health episode, decided to physically block members of President Trump’s team during a very solemn ceremony,” Cheung said in the statement. (Steven Cheung is a Trump campaign spokesman)

Was it really clear that they were having a “mental health episode”? Why not just say they were “hysterical” and be done with it? By all accounts the person in question acted completely professionally the entire time, and when they were physically pushed aside they decided to deescalate the conflict and stand down. My point is not to relitigate the controversy itself, but to point out the casual use of an accusation of mental illness as a political tool. It apparently is not enough to say someone was rude, unprofessional, or inappropriate – you have to accuse them of a “mental health episode” to truly put them in their place.

I am also uncomfortable with the way in which both ends of the political spectrum have characterized the other candidate, in this and previous election cycles. It is one thing to raise legitimate concerns about the “fitness” of a candidate, whether due to the effects of age, or their apparent personality and moral standing. But I think it is inappropriate and harmful to start speculating about actual neurological or psychological diagnoses. This is meant to lend weight and gravitas to the accusation. However, either you are not qualified to make such a diagnosis, in which case you shouldn’t, or you are qualified to make such diagnoses, in which case you still shouldn’t, although for different reasons. Actual professionals generally abstain from making public diagnoses without the benefit of an actual clinical exam, and those who perhaps have made a clinical exam are then bound by confidentiality. Non-professionals should stay out of the diagnosis business.

It’s best to be conscious of this, and to frame whatever political criticism you have in terms other than mental or neurological illness. Casual accusations of mental illness are cheap and gratuitous, and exist on a spectrum that begins with dismissiveness and includes the abuse of mental illness for political oppression.

The post Accusation of Mental Illness as a Political Strategy first appeared on NeuroLogica Blog.

Categories: Skeptic
