neurologicablog Feed

Your Daily Fix of Neuroscience, Skepticism, and Critical Thinking

Mach Effect Thrusters Fail

Mon, 03/11/2024 - 5:07am

When thinking about potential future technology, one way to divide it is into the probable and the speculative. Probable future technology involves extrapolating existing technology into the future, such as imagining what advanced computers might be like. This category also includes technology that we know is possible, we just haven’t mastered it yet, like fusion power. For these technologies the question is more when than if.

Speculative technology, however, may or may not even be possible within the laws of physics. Such technology is usually highly disruptive, seems magical in nature, but would be incredibly useful if it existed. Common technologies in this group include faster-than-light travel or communication, time travel, zero-point energy, cold fusion, anti-gravity, and propellantless thrust. I tend to think of these as science fiction technologies, not just speculative. The big question for these phenomena is how confident we are that they are impossible within the laws of physics. They would all be awesome if they existed (well, maybe not time travel – that one is tricky), but I am not holding my breath for any of them. If I had to bet, I would say none of these exist.

That last one, propellantless thrust, does not usually get as much attention as the other items on the list. The technology is rarely discussed explicitly in science fiction, but it is often portrayed and just taken for granted. Star Trek’s “impulse drive”, for example, seems to lack any propellant. Any ship that zips into orbit like the Millennium Falcon is likely also using some combination of anti-gravity and propellantless thrust. It certainly doesn’t have large fuel tanks or produce any exhaust like a modern rocket.

In recent years NASA has tested two speculative technologies that claim to be able to produce thrust without propellant – the EM drive and the Mach Effect thruster (MET). For some reason the EM drive received more media attention (including from me), but the MET was actually the more interesting claim. All existing forms of internal thrust involve throwing something out the back end of the ship. The conservation of momentum means that there will be an equal and opposite reaction, and the ship will be thrust in the opposite direction. This is your basic rocket. We can get more efficient by accelerating the propellant to higher and higher velocity, so that you get maximal thrust from each atom of propellant your ship carries, but there is no escape from the basic physics. Ion drives are perhaps the most efficient thrusters we have, because they accelerate charged particles to very high exhaust velocities, but they produce very little thrust. So they are good for moving ships around in space but cannot get a ship off the surface of the Earth.

The problem with propellant is the rocket equation – you need to carry enough fuel to accelerate the fuel, and more fuel for that fuel, and so on. It means that in order to go anywhere interesting very fast you need to carry massive amounts of fuel. The rocket equation also sets serious limits on space travel, in terms of how fast and far we can go, how much we can lift into orbit, and even whether it is possible to escape from a strong gravity well (chemical rockets have a limit of about 1.5 g).
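To make the rocket equation concrete, here is a minimal sketch of Tsiolkovsky’s relation between delta-v, exhaust velocity, and the required mass ratio. The function name and figures are illustrative, not taken from the post.

```python
import math

def mass_ratio(delta_v: float, exhaust_velocity: float) -> float:
    """Tsiolkovsky rocket equation: required initial/final mass ratio m0/mf."""
    return math.exp(delta_v / exhaust_velocity)

# Hypothetical example: ~9,400 m/s of delta-v to reach low Earth orbit (including
# losses), with a chemical engine exhaust velocity of ~4,400 m/s.
r = mass_ratio(9400, 4400)
print(f"mass ratio m0/mf ~ {r:.1f}")            # ~8.5
print(f"propellant fraction ~ {1 - 1/r:.0%}")   # ~88% of launch mass is propellant
```

The exponential is the whole problem – every extra bit of delta-v multiplies the required mass ratio – which is why a working propellantless drive would be so transformative.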

If it were possible to create thrust directly from energy without the need for propellant, a so-called propellantless or reactionless drive, that would free us from the rocket equation. This would make space travel much easier, and even make interstellar travel possible. We can accomplish a similar result by using external thrust, for example with a light sail. The thrust can come from a powerful stationary laser that pushes against the light sail of a spacecraft. This may, in fact, be our best bet for long distance space travel. But this approach has limits as well, and having an onboard source of thrust is extremely useful.

The problem with propellantless drives is that they probably violate the laws of physics, specifically the conservation of momentum. Again, the real question is – how confident are we that such a drive is impossible? Saying we don’t know how it could work is not the same as saying we know it can’t work. The EM drive is alleged to work using microwaves in a specially designed cone so that as they bounce around they push slightly more against one side than the other, generating a small amount of net thrust (yes, this is a simplification, but that’s the basic idea). It was never a very compelling idea, but early tests did show some possible net thrust, although very tiny.

The fact that the thrust was extremely tiny, to me, was very telling. The problem with very small effect sizes is that it’s really easy for them to be errors, or to have extraneous sources. This is a pattern we frequently see with speculative technologies, from cold fusion to free-energy machines. The effect is always super tiny, with the claim that the technology just needs to be “scaled up”. Of course, the scaling up never happens, because the tiny effect was a tiny error. So this is always a huge red flag to me, one that has proven extremely predictive.

And in fact when NASA tested the EM drive under rigorous testing conditions, they could not detect any anomalous thrust. With new technology there are two basic types of studies we can do to explore them. One is to explore the potential underlying physics or phenomena – how could such technology work. The other is to simply test whether or not the technology works, regardless of how. Ideally both of these types of evidence will align. There is often debate about which type of evidence is more important, with many proponents arguing that the only thing that matters is if the technology works. But the problem here is that often the evidence is low-grade or ambiguous, and we need the mechanistic research to put it into context.

But I do agree, at the end of the day, that if you have sufficiently rigorous, high-level evidence that the phenomenon either exists or doesn’t exist, that would trump whether or not we currently know the mechanism or the underlying physics. That is what NASA was trying to do – a highly rigorous experiment to simply answer the question: is there anomalous thrust? Their answer was no.

The same is true of the MET. The theory behind the MET is different, and is based on some speculative physics. The idea stems from a question in physics for which we do not currently have a good answer – what determines inertial frames of reference? For example, if you have a bucket of water in deep intergalactic space (sealed at the top to contain the water), and you spin it, centrifugal force will cause the water to climb up the sides of the bucket. But how can we prove physically that the bucket is spinning and the universe is not spinning around it? In other words – what is the frame of reference? We might intuitively feel that it makes more sense to say the bucket is spinning, but how do we prove that with physics and math? What theory determines the frame of reference?

One speculative theory is that the inertial frame of reference is determined by the total mass-energy of the universe – that inertia derives from an interaction between an object and the rest of the universe (essentially Mach’s principle, from which the thruster takes its name). If this is the case then perhaps you can change that inertia by pushing against the rest of the universe, without expelling propellant. If this is all true, then the MET could theoretically work. This seems to be one step above the EM drive in that the EM drive likely violates the known laws of physics, while the MET is based on unknown laws.

Well, NASA tested the MET also and – no anomalous thrust. Proponents, of course, could always argue that the experimental setup was not sensitive enough. But at some point, teeny tiny becomes practically indistinguishable from zero.

It seems that we do not have a propellantless drive in our future, which is too bad. But the idea is so compelling that I also doubt we have seen the end of such claims, as with perpetual motion machines and free energy. There are already other claims, such as the quantum drive. There are likely to be more. What I typically say to proponents is this – scale it up first, then come talk to me. Since “scaling up” tends to be the death of all of these claims, that’s a good filter.


Is the AI Singularity Coming?

Thu, 03/07/2024 - 4:49am

Like it or not, we are living in the age of artificial intelligence (AI). Recent advances in large language models, like ChatGPT, have helped put advanced AI in the hands of the average person, who now has a much better sense of how powerful these AI applications can be (and perhaps also their limitations). Even though they are narrow AI, not sentient in a human way, they can be highly disruptive. We are about to go through the first US presidential election where AI may play a significant role. AI has revolutionized research in many areas, performing months or even years of research in mere days.

Such rapid advances legitimately make one wonder where we will be in 5, 10, or 20 years. Computer scientist Ben Goertzel, who popularized the term AGI (artificial general intelligence), recently stated during a presentation that he believes we will achieve not only AGI but an AGI singularity involving a superintelligent AGI within 3-8 years. He thinks it is likely to happen by 2030, but could happen as early as 2027.

My reaction to such claims, as a non-expert who follows this field closely, is that this seems way too optimistic. But Goertzel is an expert, so perhaps he has some insight into research and development happening in the background that I am not aware of. So I was very interested to see his line of reasoning. Would he hint at research that is on the cusp of something new?

Goertzel laid out three lines of reasoning to support his claim. The first is simply extrapolating from the recent exponential growth of narrow AI. He admits that LLM systems and other narrow AI are not themselves on a path to AGI, but they show the rapid advance of the technology. He aligns himself here with Ray Kurzweil, who apparently has a new book coming out, The Singularity is Nearer. Kurzweil has a reputation for making overly optimistic predictions about advances in computer technology, so that is not surprising.

I find this particular argument not very compelling. Exponential growth in one area of technology at one particular time does not mean that this is a general rule about technology for all time. I know that is explicitly what Kurzweil argues, but I disagree with it. Some technologies hit roadblocks, or experience diminishing returns, or simply peak. Treating exponential advance as a general rule did not deliver the hydrogen economy that was predicted 20 years ago. It has not made commercial airline travel any faster over the last 50 years. Rather, history is pretty clear that we need to do a detailed analysis of individual technologies to see how they are advancing and what their potential is. Even then, this only gives us a roadmap for a certain amount of time, and is not useful for predicting disruptive technologies or advances.

So that is strike one, in my opinion. Recent rapid advances in narrow AI do not predict, in and of themselves, that AGI is right around the corner. It’s also strike two, actually, because he argues that one line of evidence to support his thesis is Kurzweil’s general rule of exponential advance, and the other is the recent rapid advances in LLM narrow AIs. So what is his third line of evidence?

This one I find the most compelling, because at least it deals with specific developments in the field. Goertzel here is referring to his own work: “OpenCog Hyperon,” as well as associated software systems and a forthcoming AGI programming language, dubbed “MeTTa”. The idea here is that you can create an AGI by stitching together many narrow AI systems. I think this is a viable approach. It’s basically how our brains work. If you had 20 or so narrow AI systems that handled specific parts of cognition and were all able to communicate with each other, so that the output of one algorithm becomes the input of another, then you are getting close to a human brain type of cognition.
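As a purely toy illustration of that “stitching” idea – not Goertzel’s actual OpenCog Hyperon architecture, and with invented module names – here is a sketch of narrow modules wired so that each one’s output becomes the next one’s input:

```python
from typing import Callable, Dict, List

# Each "module" is a trivial stand-in for a narrow AI system; the pipeline
# wires them so one module's output becomes the next module's input.
Module = Callable[[Dict], Dict]

def perception(state: Dict) -> Dict:
    state["entities"] = state["raw_input"].split()   # stand-in for a vision/language model
    return state

def memory(state: Dict) -> Dict:
    state.setdefault("history", []).append(list(state["entities"]))  # stand-in for episodic memory
    return state

def planner(state: Dict) -> Dict:
    state["action"] = f"respond to '{state['entities'][-1]}'" if state["entities"] else "wait"
    return state

def run_pipeline(modules: List[Module], raw_input: str) -> Dict:
    state: Dict = {"raw_input": raw_input}
    for module in modules:
        state = module(state)
    return state

print(run_pipeline([perception, memory, planner], "hello world")["action"])
```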

But saying this approach will achieve AGI in a few years is a huge leap. There is still a lot we don’t know about how such a system would work, and there is much we don’t know about how sentience emerges from the activity of our brains. We don’t know if linking many narrow AI systems together will cause AGI to emerge, or if it will just be a bunch of narrow AIs working in parallel. I am not saying there is something unique about biological cognition, and I do think we can achieve AGI in silicon, but we don’t know all the elements that go into AGI.

If I had to predict I would say that AGI is likely to happen both slower and faster than we predict. I highly doubt it will happen in 3-8 years. I suspect it is more like 20-30 years. But when it does happen, like with the LLMs, it will probably happen fast and take us by surprise. Goertzel, to his credit, admits he may be wrong. He says we may need a “quantum computer with a million qubits or something.” To me that is a pretty damning admission, that all his extrapolations actually mean very little.

Another aspect of his predictions is what happens after we achieve AGI. He, as many others have also predicted, said that if we give the AGI the ability to write its own code then it could rapidly become superintelligent, like a single entity with the cognitive ability of all human civilization. Theoretically, sure. But having an AGI that powerful is more than about writing better code, right? It’s also limited by the hardware, and the availability of training data, and perhaps other variables as well. But yes, such an AGI would be a powerful tool of science and technology that could be turned toward making the AGI itself more advanced.

Will this create a Kurzweil-style “singularity”? Ultimately I think that idea is a bit subjective, and we won’t really know until we get there.


Climate Sensitivity and Confirmation Bias

Mon, 03/04/2024 - 6:02am

I love to follow kerfuffles between different experts and deep thinkers. It’s great for revealing the subtleties of logic, science, and evidence. Recently there has been an interesting online exchange between a physicist and science communicator (Sabine Hossenfelder) and some climate scientists (Zeke Hausfather and Andrew Dessler). The dispute is over equilibrium climate sensitivity (ECS) and the recent “hot model problem”.

First let me review the relevant background. ECS is a measure of how much climate warming will occur as CO2 concentration in the atmosphere increases, specifically the temperature rise in degrees Celsius with a doubling of CO2 (from pre-industrial levels). This number is of keen significance to the climate change problem, as it essentially tells us how much and how fast the climate will warm as we continue to pump CO2 into the atmosphere. There are other variables as well, such as other greenhouse gases and multiple feedback mechanisms, making climate models very complex, but the ECS is certainly a very important variable in these models.

There are multiple lines of evidence for deriving ECS, such as modeling the climate with all variables and seeing what the ECS would have to be in order for the model to match reality – the actual warming we have been experiencing. Therefore our estimate of ECS depends heavily on how good our climate models are. Climate scientists use a statistical method to determine the likely range of climate sensitivity. They take all the studies estimating ECS, creating a range of results, and then determine the 90% confidence range – it is 90% likely, given all the results, that ECS is between 2-5 C.
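As a toy illustration of that last step – summarizing a spread of study results with a central 90% range – here is a minimal sketch. The ECS values are made up, and real assessments combine multiple lines of evidence rather than a simple list of point estimates.

```python
import statistics

# Hypothetical ECS estimates (degrees C per CO2 doubling) standing in for a set
# of study results.
ecs_estimates = [2.3, 2.6, 2.8, 3.0, 3.1, 3.3, 3.5, 3.9, 4.2, 4.8]

cuts = statistics.quantiles(ecs_estimates, n=20)   # 5%, 10%, ..., 95% cut points
low_5, high_95 = cuts[0], cuts[-1]
print(f"median ECS        ~ {statistics.median(ecs_estimates):.1f} C")
print(f"central 90% range ~ {low_5:.1f} to {high_95:.1f} C")
```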

Hossenfelder did a recent video discussing the hot model problem. This refers to the fact that some of the recent climate models, ones that are ostensibly improved over older models by incorporating better physics and cloud modeling, produced estimates for ECS outside the 90% confidence interval, with ECSs above 5.0. Hossenfelder expressed grave concern that if these models are closer to the truth on ECS we are in big trouble. There is likely to be more warming sooner, which means we have even less time than we thought to decarbonize our economy if we want to avoid the worst climate change has in store for us. Some climate scientists responded to her video, and then Hossenfelder responded back (links above). This is where it gets interesting.

To frame my take on this debate a bit, when thinking about any scientific debate we often have to consider two broad levels of issues. One type of issue is generic principles of logic and proper scientific procedure. These generic principles can apply to any scientific field – P-hacking is P-hacking, whether you are a geologist or chiropractor. This is the realm I generally deal with, basic principles of statistics, methodological rigor, and avoiding common pitfalls in how to gather and interpret evidence.

The second relevant level, however, is topic-specific expertise. Here I do my best to understand the relevant science, defer to experts, and essentially try to understand the consensus of expert opinion as best I can. There is often a complex interaction between these two levels. But if researchers are making egregious mistakes on the level of basic logic and statistics, the topic-specific details do not matter very much to that fact.

What I have tried to do over my science communication career is to derive a deep understanding of the logic and methods of good science vs bad science from my own field of expertise, medicine. This allows me to better apply those general principles to other areas. At the same time I have tried to develop expertise in the philosophy of science, and understanding the difference between science and pseudoscience.

In her response video Hossenfelder is partly trying to do the same thing, take generic lessons from her field and apply them to climate science (while acknowledging that she is not a climate scientist). Her main point is that, in the past, physicists had grossly underestimated the uncertainty of certain measurements they were making (such as the half-life of neutrons outside a nucleus). The true value ended up being outside the earlier uncertainty range – how did that happen? Her conclusion was that it was likely confirmation bias – once a value was determined (even if just preliminary) then confirmation bias kicks in. You tend to accept later evidence that supports the earlier preliminary evidence while investigating more robustly any results that are outside this range.

Here is what makes confirmation bias so tricky and often hard to detect. The logic and methods used to question unwanted or unexpected results may be legitimate. But there is often some subjective judgement involved in which methods are best or most appropriate, and there can be a bias in how they are applied. It’s like P-hacking – the statistical methods used may be individually reasonable, but if you are using them after looking at the data their application will be biased. Hossenfelder correctly, in my opinion, recommends deciding on all research methods before looking at any data. The same recommendation now exists in medicine, with pre-registration of methods before collecting data and reviewers looking at how well this process was complied with.

So Hausfather and Dessler make valid points in their response to Hossenfelder, but interestingly this does not negate her point. Their points can be legitimate in and of themselves, but biased in their application. The climate scientists point out (as others have) that the newer hot models do a relatively poor job of predicting historic temperatures and also a poor job of modeling the most recent glacial maximum. That sounds like a valid point. Some climate scientists have therefore recommended that when all the climate models are averaged together to produce a probability curve of ECS, models that are better at predicting historic temperatures should be weighted more heavily than models that do a poor job. Again, sounds reasonable.
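To make the weighting idea concrete, here is a toy sketch of skill-weighted averaging. The ECS values and skill scores are invented, and actual weighting schemes are far more sophisticated.

```python
# Each entry is (ECS in degrees C, hindcast skill in [0, 1]); higher skill means
# the model reproduces historical temperatures better. All numbers are invented.
models = [
    (3.0, 0.9),
    (3.2, 0.8),
    (4.9, 0.3),   # a "hot" model that matches the historical record poorly
    (5.4, 0.2),
]

unweighted = sum(ecs for ecs, _ in models) / len(models)
weighted = sum(ecs * skill for ecs, skill in models) / sum(skill for _, skill in models)
print(f"unweighted mean ECS: {unweighted:.2f} C")   # pulled upward by the hot models
print(f"skill-weighted ECS:  {weighted:.2f} C")     # hot models count for less
```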

But – this does not negate Hossenfelder’s point. They decided to weight climate models after some of the recent models were creating a problem by running hot. They were “fixing” the “problem” of hot models. Would they have decided to weight models if there weren’t a problem with hot models? Is this just confirmation bias?

None of this means that their fix is wrong, or that the hot models are right. But what it means is that climate scientists should acknowledge exactly what they are doing. This opens the door to controlling for any potential confirmation bias. The way this works (again, a generic scientific principle that could apply to any field) is to look at fresh data. Climate scientists need to agree on a consensus method – which models to look at, how to weight their results – and then do a fresh analysis including new data. Any time you make any change to your methods after looking at the data, you cannot really depend on the results. At best you have created a hypothesis – maybe this new method will give more accurate results – but then you have to confirm that method by applying it to fresh data.

Perhaps climate scientists are doing this (I suspect they will eventually), although Hausfather and Dessler did not explicitly address this in their response.

It’s all a great conversation to have. Every scientific field, no matter how legitimate, could benefit from this kind of scrutiny and questioning. Science is hard, and there are many ways  bias can slip in. It’s good for scientists in every field to have a deep and subtle understanding of statistical pitfalls, how to minimize confirmation bias and p-hacking, and the nature of pseudoscience.


Virtual Walking

Fri, 03/01/2024 - 5:07am

When I use my virtual reality gear I do practically zero virtual walking – meaning that I don’t have my avatar walk while I am not walking. I generally play standing up, which means I can move around the space in my office mapped by my VR software – so I am physically walking to move in the game. If I need to move beyond the limits of my physical space, I teleport – point to where I want to go and instantly move there. The reason for this is that virtual walking creates severe motion sickness for me, especially if there is even the slightest up and down movement.

But researchers are working on ways to make virtual walking a more compelling, realistic, and less nausea-inducing experience. A team from the Toyohashi University of Technology and the University of Tokyo studied virtual walking and introduced two new variables – they added a shadow to the avatar, and they added vibration sensation to the feet. An avatar is a virtual representation of the user in the virtual space. Most applications allow some level of user control over how the avatar is viewed, but typically either first person (you are looking through the avatar’s eyes) or third person (your perspective floats above and behind the avatar). In this study they used only the first-person perspective, which makes sense since they were trying to see how realistic an experience they could create.

The shadow was always placed in front of the avatar and moved with the avatar. This may seem like a little thing, but it provides visual feedback connecting the desired movements of the user with the movements of the avatar. As weird as this sounds, this is often all it takes for the user to feel not only that they control the avatar but that they are embodied within it. (More on this below.) They also added four pads to the bottom of the feet, two on each foot, on the toe-pad and the heel. These vibrated in coordination with the virtual avatar’s foot strikes. How did these two types of sensory feedback affect user perception?

They found:

“Our findings indicate that the synchronized foot vibrations enhanced telepresence as well as self-motion, walking, and leg-action sensations, while also reducing instances of nausea and disorientation sickness. The avatar’s cast shadow was found to improve telepresence and leg-action sensation, but had no impact on self-motion and walking sensation. These results suggest that observation of the self-body cast shadow does not directly improve walking sensation, but is effective in enhancing telepresence and leg-action sensation, while foot vibrations are effective in improving telepresence and walking experience and reducing instances of cybersickness.”

So the shadow made people feel more like they were in the virtual world (telepresence) and that they were moving their legs, even when they weren’t. But the shadow did not seem to enhance the sensation of walking. Meanwhile the foot vibrations improved the sense of telepresence and leg movement, but also the sense that the user was actually walking. Further (and this is of keen interest to me) the foot vibrations also reduced motion sickness and nausea. Keep in mind, the entire time the user is sitting in a chair.

I do not find the telepresence or sense of movement surprising. It is now well established that this is how the brain usually works to create the sensation that we occupy our bodies and own and control the parts of our bodies. These sensations do not flow automatically from the fact that we are our bodies and do control them. There are specific circuits in the brain that create these sensations, and if those circuits are disrupted people can have out-of-body sensations or even feel disconnected from parts of their body. These circuits depend on sensory feedback.

What is happening is that our brains are comparing various information streams in real time – what movements do we intend to make, visual feedback regarding whether or not our body is moving in the way we intend, combined with physical sensation such as proprioception (feeling where your body is in three dimensional space) and tactile sensation. When everything lines up, we feel as if we occupy and control our bodies. When they don’t line up, weird stuff happens.

The same is true for motion sickness. Our brains compare several streams of information at once – visual information, proprioception, and vestibular information (sensing gravity and acceleration). When these sensory streams do not match up, we feel vertigo (spinning sensation) or motion sickness. Sometimes people have just a vague sense of “dizziness” without overt spinning – they are just off.

In VR there can be a complete mismatch between visual input and vestibular input. My eyes are telling me that I am running over a landscape, while my vestibular system is telling me I am not moving. The main way this is currently addressed is by not having virtual movement, hence the teleporting (which does not count as movement visually). Another potential way to deal with this is to have physical movement match the virtual movement, but this requires a large and expensive rig, which is currently not ready for consumer use. This is the Ready Player One scenario – a harness and an omnidirectional treadmill. This would probably be the best solution, and I suspect you would need only a little bit of movement to significantly reduce motion sickness, as long as it was properly synchronized.

There has also been speculation that perhaps motion sickness can be reduced by leveraging other sensory inputs, such as haptic feedback. There has also been research into using brain stimulation to reduce the effect. A 2023 study looked at “transcranial alternating current stimulation (tACS) at 10 Hz, biophysically modelled to reach the vestibular cortex bilaterally.” I look at this as a proof of concept, not a likely practical solution. But perhaps some lower tech stimulation might be effective.

I am a little surprised, although pleased, that in the current study a little haptic feedback of the feet lowered motion sickness. My hope is that as the virtual experience gets more multi-modal, with several sensory streams all synchronized, the motion sickness problem will be mostly resolved. In the current study, if the provided picture (see above) is any indication, the users were walking through virtual streets. This would not provide a lot of up and down movement, which is the killer. So perhaps haptic feedback might work for situations that would create mild motion sickness, but I doubt it would be enough for me to survive a virtual roller coaster.

All of this bodes well for a Ready Player One future – with mature VR including haptic feedback with some physical motion. I do wonder if the brain hacking (brain stimulation) component will be necessary or practical in the near future.

One last aside – the other solution to the motion sickness problem is AR – augmented reality. With AR you can see the physical world around you through the goggles, which overlay virtual information. This way you are moving through the physical world, which can be skinned to look very different or have virtual objects added. This does not work for every VR application, however, and is limited because you need the physical space to move around in. But applications and games built around what AR can do have the added benefit of no motion sickness.


Frozen Embryos Are Not People

Tue, 02/27/2024 - 5:07am

Amid much controversy, the Alabama State Supreme Court ruled that frozen embryos are children. They did not support their decision with compelling logic, with cited precedent (their decision is literally unprecedented), with practical considerations, or with sound ethical judgement. They essentially referenced god. It was a pretty naked religious justification.

The relevant politics have been hashed out by many others. What I want to weigh in on is the relevant logic. Two years ago I wrote about the question of when a fetus becomes a person. I laid out the core question here – when does a clump of cells become a person? Standard rhetoric in the anti-abortion community is to frame the question differently, claiming that from the point of fertilization we have human life. But from a legal, moral, and ethical perspective, that is not the relevant question. My colon is human life, but it’s not a person. Similarly, a frozen clump of cells is not a child.

This point inevitably leads to the rejoinder that those cells have the potential to become a person. But the potential to become a thing is not the same as being a thing. If allowed to develop, those cells have the potential to become a person – but they are not a person. This would be analogous to pointing to a stand of trees and claiming it is a house. Well, the wood in those trees has the potential to become a house. It has to go through a process, and at some point you have a house.

That analogy, however, breaks down when you consider that the trees will not become a house on their own. An implanted embryo will become a child (if all goes well) unless you do something to stop it. True but irrelevant to the point. The embryo is still not a person. The fact that the process to become a person is an internal rather than external one does not matter. Also, the Alabama Supreme Court is extending the usual argument beyond this point – those frozen embryos will not become children on their own either. They would need to go through a deliberate, external, artificial process in order to have the full potential to develop into a person. In fact, they would not exist without such a process.

But again – none of this really matters. The potential to become something through some kind of process, whether internal or external, spontaneous or artificial, does not make one thing morally  equivalent to something else. A frozen clump of cells is not a child.

The history of how evangelicals and conservatives came to this rigid position – that personhood begins at fertilization – is complex, but illuminating. The quick version is that nowhere in the bible does it say life or personhood begins at conception, and many pre-1980 Christians believed that the bible says personhood begins at birth. However, the idea that the soul enters the body at conception goes back to the ancient Greeks. This view was largely accepted by Catholics and rejected by Protestants – until Jerry Falwell and then others started linking the Catholic view with American political conservatives, making it into a cultural issue that was good for outraging and motivating donors and voters.

Now it is a matter of unalterable faith, that human personhood begins at conception. This is what leads to the bizarre conclusion that a frozen embryo is a child. But this is not a biblical belief, not a historically universal belief, and is certainly not a scientific belief.

On some level, however, the religious right in America knows they cannot just legislate their faith. They really want to, and they have a couple of strategies for doing so. One is to argue against the separation of church and state. They will rewrite history, cherry pick references, and mostly just assert what they want to be true. When in power, such as in Alabama, they will just ignore the separation (unless and until slapped down by the Supreme Court).  But failing that they will sometimes argue that their religious view is actually the scientific view. This, of course, is when I become most interested.

One arena where they have done that extensively is in the teaching of evolution. They have legally failed on the separation of church and state arguments. They therefore pivoted to the scientific ones, with creationism and later Intelligent Design. But these are all warmed over religious views, and any attempt at sounding scientific is laughable and has completely failed. They do provide many object lessons in pseudoscience and poor logic, however.

I believe they are doing the same thing with the abortion issue. They are saying that the scientific view is that human life begins at conception. But again, this is a deceptive framing. That is not the question – the question is when personhood begins. Once again, frozen cells are not a person.


Odysseus Lands on the Moon

Fri, 02/23/2024 - 5:00am

On December 11, 1972, Apollo 17 soft landed on the lunar surface, carrying astronauts Gene Cernan and Harrison Schmitt. That was the last time anything American soft landed on the Moon, over 50 years ago. It seems amazing that it’s been that long. On February 22, 2024, the Odysseus lander soft landed on the Moon near the south pole. This was the first time a private company had achieved this goal, and the first time an American craft had landed on the Moon since Apollo 17.

Only five countries have ever achieved a soft landing on the Moon: America, China, Russia, Japan, and India. Only America did so with a crewed mission; the rest were robotic. Even though this feat was first accomplished in 1966 by the Soviet Union, it is still an extremely difficult thing to pull off. Getting to the Moon requires a powerful rocket. Inserting into lunar orbit requires a great deal of control, on a craft that is too far away for real-time remote control. This means you either need pilots on the craft, or the craft must be able to carry out a pre-programmed sequence to accomplish this goal. Then landing on the lunar surface is tricky. There is no atmosphere to slow the craft down, but also no atmosphere to get in the way. As the ship descends it burns fuel, which constantly changes the weight of the vehicle. It has to remain upright with respect to the lunar surface and reduce its speed by just the right amount to touch down softly – either with a human pilot or all by itself.
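One small piece of that difficulty can be shown with a back-of-the-envelope sketch: with constant thrust, the braking per kilogram keeps growing as propellant burns off, so the burn has to be throttled or timed just right. All numbers below are hypothetical, not the Odysseus (or Apollo) descent profile.

```python
LUNAR_G = 1.62        # m/s^2, lunar surface gravity
THRUST = 16_000.0     # N, constant engine thrust (hypothetical)
MDOT = 5.0            # kg/s, propellant consumed at that thrust (hypothetical)
START_MASS = 1_900.0  # kg at the start of the braking burn (hypothetical)

for t in range(0, 121, 30):                  # sample the first two minutes of the burn
    mass = START_MASS - MDOT * t
    net_braking = THRUST / mass - LUNAR_G    # deceleration left after fighting lunar gravity
    print(f"t={t:3d} s  mass={mass:6.0f} kg  net braking={net_braking:5.2f} m/s^2")
```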

The Odysseus mission is funded by NASA as part of their program to develop private industry to send instruments and supplies to the Moon. The goal of their Artemis program is to establish a permanent base on the Moon, which will need to be supported by regular supply runs. In January another company with a NASA grant under the same program, Astrobotic Technology, sent their own craft to the Moon, the Peregrine. However, a fuel leak prevented the craft from orienting its solar panels toward the sun, and the mission had to be abandoned. This left the door open for the Odysseus mission to grab the achievement of being the first private craft to do so.

One of the primary missions of Odysseus is to investigate the effect of the rocket’s exhaust on the landing site. When the Apollo missions landed, the lander’s exhaust blasted regolith from the lunar surface at up to 3-4 km/second, faster than a bullet. With no atmosphere to slow down these particles, they blasted everything in the area and traveled a long distance. When Apollo 12 landed somewhat near the Surveyor 3 robotic lander, the astronauts walked to the Surveyor to bring back pieces for study. They found that the Surveyor had been “sandblasted” by the lander’s exhaust.

This is a much more serious problem for Artemis than Apollo. Sandblasting on landing is not really a problem if there is nothing else of value nearby. But with a permanent base on the Moon, and even possibly equipment from other nation’s lunar programs, this sandblasting can be dangerous and harm sensitive equipment. We need to know, therefore, how much damage it does, and how close landers can land to existing infrastructure.

There are potential ways to deal with the issue, including landing at a safe distance, but also erecting walls or curtains to block the blasted regolith from reaching infrastructure. A landing pad that is hardened and free of loose regolith is another option. These options, in turn, require a high degree of precision in terms of the landing location. For the Apollo missions, the designated landing areas were huge, with the landers often being kilometers away from their target. If the plan for Artemis is to land on a precise location, eventually onto a landing pad, then we need to not only pull off soft landings, but we need to hit a bullseye.

Fortunately, our technology is no longer in the Apollo era. SpaceX, for example, now routinely pulls off similar feats, with their reusable rockets that descend back down to Earth after launching their payload, and make a soft landing on a small target such as a floating platform.

The Odysseus craft will also carry out other experiments and missions to prepare the way for Artemis. This is also the first soft landing for the US near the south pole. All the Apollo missions landed near the equator. The craft will also be placing a laser retroreflector on the lunar surface. This is a reflector that can return a laser pointed at it directly back at the source. Such reflectors have been left on the Moon before and are used to do things like measure the precise distance between the Earth and Moon. NASA plans to place many retroreflectors on the Moon to use as a positioning system for spacecraft and satellites in lunar orbit.
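The trick behind such a retroreflector is simple geometry: three mutually perpendicular mirrors each flip one component of the incoming ray, so after three bounces the light heads straight back toward its source regardless of the incoming angle. Here is a minimal, purely illustrative sketch of that:

```python
import numpy as np

def reflect(direction, normal):
    """Reflect a direction vector off a mirror with the given unit normal."""
    return direction - 2 * np.dot(direction, normal) * normal

incoming = np.array([0.3, -0.5, 0.81])
incoming /= np.linalg.norm(incoming)

outgoing = incoming
for normal in (np.array([1.0, 0.0, 0.0]),
               np.array([0.0, 1.0, 0.0]),
               np.array([0.0, 0.0, 1.0])):   # three mutually perpendicular mirrors
    outgoing = reflect(outgoing, normal)

print(incoming)    # the incoming direction
print(outgoing)    # exactly -incoming: the ray returns toward its source
```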

This is all part of building an infrastructure for a permanent presence on the Moon. This, I think, is the right approach. NASA knows they need to go beyond the “flags and footprints” style one-off missions. Such missions are still useful for doing research and developing technology, but they are not sustainable. We should be focusing now on partnering with private industry, developing a commercial space industry, advancing international cooperation, developing long term infrastructure and reusable technology. While I’m happy to see the Artemis program get underway, I also hope this is the last time NASA develops these expensive one-time use rocket systems. Reusable systems are the way to go.

 


AI Video

Thu, 02/22/2024 - 5:05am

Recently OpenAI launched a website showcasing their latest AI application, Sora. This app, based on prompts similar to what you would use for ChatGPT or the image creation applications, like Midjourney or Dalle-2, creates a one minute photorealistic video without sound. Take a look at the videos and then come back.

Pretty amazing. Of course, I have no idea how cherry picked these videos are. Were there hundreds of failures for each one we are seeing? Probably not, but we don’t know. They do give the prompts they used, and they state explicitly that these videos were created entirely by Sora from the prompt without any further editing.

I have been using Midjourney quite extensively since it came out, and more recently I have been using ChatGPT 4 which is linked to Dalle-2, so that ChatGPT will create the prompt for you from more natural language instructions. It’s pretty neat. I sometimes use it to create the images I attach to my blog posts. If I need, for example, a generic picture of a lion I can just make one, rather than borrowing one from the internet and risking that some German firm will start harassing me about copyright violation and try to shake me down for a few hundred Euros. I also make images for personal use, mostly gaming. It’s a lot of fun.

Now I am looking forward to getting my hands on Sora. They say that they are testing the app, having given it to some creators to give them feedback. They are also exploring ways in which the app can be exploited for evil and trying to make it safe. This is where the app raises some tricky questions.

But first I have a technical question – how long will it be before AI video creation is so good that it becomes indistinguishable (without technical analysis) from real video? Right now Sora is about as good at video as Midjourney is at pictures. It’s impressive, but there are some things it has difficulty doing. It doesn’t actually understand anything, like physics or cause and effect, and is just inferring in its way what something probably looks like. Probably the best representation of this is how they deal with words. They will create pseudo-letters and words, reconstructing word-like images without understanding language.

Here is a picture I made through ChatGPT and Dalle-2 asking for an advanced spaceship with the SGU logo. Superficially very nice, but the words are not quite right (and this is after several iterations). You can see the same kind of thing in the Sora videos. Often there are errors in scale, in how things relate to each other, and objects just spawn out of nowhere. The video of the birthday party is interesting – I think everyone is supposed to be clapping, but it’s just weird.

So we are still right in the middle of the uncanny valley with AI generated video. Also, this is without sound. The hardest thing to do with photorealistic CG people is make them talk. As soon as their mouth starts moving, you know they are CG. They don’t even attempt that in these videos. My question is – how close are we to getting past the uncanny valley and fixing all the physics problems with these videos?

On the one hand it seems close. These videos are pretty impressive. But this kind of technology historically (AI driving cars, speech recognition) tends to follow a curve where the last 5% of quality is as hard or harder than the first 95%. So while we may seem close, fixing the current problems may be really hard. We will have to wait and see.

The more tricky question is – once we do get through the uncanny valley and can essentially create realistic video, paired with sound, of anything that is indistinguishable from reality, what will the world be like? We can already make fairly good voice simulations (again, at the 95% level). OpenAI says they are addressing these questions, and that’s great, but once this code is out there in the world who says everyone will adhere to good AI hygiene?

There are some obvious abuses of this technology to consider. One is to create fake videos meant to confuse the public and influence elections or for general propaganda purposes. Democracy requires a certain amount of transparency and shared reality. We are already seeing what happens when different groups cannot even agree on basic facts. This problem also cuts both ways – people can make videos to create the impression that something happened that didn’t, but also real video can be dismissed as fake. That wasn’t me taking a bribe, it was an AI fake video. This creates easy plausible deniability.

This is a perfect scenario for dictators and authoritarians, who can simply create and claim whatever reality they wish. The average person will be left with no solid confidence in what reality is. You can’t trust anything, and so there is no shared truth. Best put our trust in a strongman who vows to protect us.

There are other ways to abuse this technology, such as violating other people’s privacy by using their image. This could also revolutionize the porn industry, although I wonder if that will be a good thing.

While I am excited to get my hands on this kind of software for my personal use, and I am excited to see what real artists and creators can do with the medium, I also worry that we again are at the precipice of a social disruption. It seems that we need to learn the lessons of recent history and try to get ahead of this technology with regulations and standards. We can’t just leave it up to individual companies. Even if most of them are responsible, there are bound to be ones that aren’t. Not only do we need some international standards, we need the technology to enforce them (if that’s even possible).

The trick is, even if AI generated videos can be detected and revealed, the damage may already be done. The media will have to take a tremendous amount of responsibility for any video they show, and this includes social media giants. At the very least any AI generated video should be clearly labeled as such. There may need to be several layers of detection to make this effective. At least we need to make it as difficult as possible, so not every teenager with a cellphone can interfere with elections. At the creation end, AI created video can be watermarked, for example. There may also be several layers of digital watermarking to alert social media platforms so they can properly label such videos, or refuse to host them depending on content.
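As a toy illustration of one layer of that idea – a creation tool attaching a verifiable provenance tag that a platform can check before labeling an upload – here is a minimal sketch using a keyed signature. Real provenance and watermarking schemes are more elaborate (and harder to strip), and the key and field names here are hypothetical.

```python
import hashlib, hmac, json

# Hypothetical signing key held by the video-generation tool; a real scheme would
# use an asymmetric signature so platforms can verify without being able to forge.
SIGNING_KEY = b"tool-specific signing key (hypothetical)"

def sign_provenance(video_bytes: bytes, generator: str) -> dict:
    record = {"generator": generator, "sha256": hashlib.sha256(video_bytes).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(video_bytes: bytes, record: dict) -> bool:
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if claimed.get("sha256") != hashlib.sha256(video_bytes).hexdigest():
        return False                      # file was altered after it was tagged
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

video = b"...rendered video bytes..."
tag = sign_provenance(video, generator="ai-video-model")
print(verify_provenance(video, tag))          # True: platform labels it as AI-generated
print(verify_provenance(video + b"x", tag))   # False: the tag does not match this file
```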

I don’t have the final answers, but I do have a strong feeling we should not just go blindly into this new world. I want a world in which I can write a screenplay, and have that screenplay automatically translated into a film. But I don’t want a world in which there is no shared reality, where everything is “fake news” and “alternative facts”. We are already too close to that reality, and taking another giant leap in that direction is intimidating.


Scammers on the Rise

Tue, 02/20/2024 - 5:06am

Good rule of thumb – assume it’s a scam. Anyone who contacts you, or any unusual encounter, assume it’s a scam and you will probably be right. Recently I was called on my cell phone by someone claiming to be from Venmo. They asked me to confirm if I had just made two fund transfers from my Venmo account, both in the several hundred dollar range. I had not. OK, they said, these were suspicious withdrawals and if I did not make them then someone has hacked my account. They then transferred me to someone from the bank that my Venmo account is linked to.

I instantly knew this was a scam for several reasons, but even just the overall tone and feel of the exchange had my spidey senses tingling. The person was just a bit too helpful and friendly. They reassured me multiple times that they will not ask for any personal identifying information. And there was the constant and building pressure that I needed to act immediately to secure my account, but not to worry, they would walk me through what I needed to do. I played along, to learn what the scam was. At what point was the sting coming?

Meanwhile, I went directly to my bank account on a separate device and could see there were no such withdrawals. When I pointed this out they said that was because the transactions were still pending (but I could stop them if I acted fast). Of course, my account would show pending transactions. When I pointed this out I got a complicated answer that didn’t quite make sense. They gave me a report number that would identify this event, and I could use that number when they transferred me to someone allegedly from my bank to get further details. Again, I was reassured that they would not ask me for any identifying information. It all sounded very official. The bank person confirmed (even though it still did not appear on my account) that there was an attempt to withdraw funds and sent me back to the Venmo person who would walk me through the remedy.

What I needed to do was open my Venmo account. Then I needed to hit the send button in order to send a report to Venmo. Ding, ding, ding! That was the sting. They wanted me to send money from my Venmo account to whatever account they tricked me into entering. “You mean the button that says ‘send money’, that’s the button you want me to press?” Yes, because that would “send” a report to their fraud department to resolve the issue. I know, it sounds stupid, but it only has to work a fraction of the time. I certainly have elderly and not tech savvy relatives who I could see falling for this. At this point I confronted the person with the fact that they were trying to scam me, but they remained friendly and did not drop the act, so eventually I just hung up.

Digital scammers like this are growing in number, and getting more sophisticated. By now you may have heard about the financial advice columnist who was scammed out of $50,000. Hearing the whole story at the end, knowing where it is all leading, does make it seem obvious. But you have to understand the panic that someone can feel when confronted with the possibility that their identity has been stolen or their life savings are at risk. That panic is then soothed by a comforting voice who will help you through this crisis. The FBI documented $10.2 billion in online fraud in 2022. This is big business.

We are now living in a world where everyone needs to know how to defend themselves from such scams. First, don’t assume you have to be stupid to fall for a scam. Con artists want you to think that – a false sense of security or invulnerability plays into their hands.

There are many articles detailing good internet hygiene to protect yourself, but frequent reminders are helpful, so here is my list. As I said up top – assume it’s a scam. Whenever anyone contacts me I assume it’s a scam until proven otherwise. That also means – do not call that number, do not click that link, do not give any information, do not do anything that someone who contacted you (by phone, text, e-mail, or even snail mail) asks you to do. In many cases you can just assume it’s a scam and comfortably ignore it. But if you have any doubt, then independently look up a contact number for the relevant institution and call them directly.

Do not be disarmed by a friendly voice. The primary vulnerability of your digital life is not some sophisticated computer hack, but a social hack – someone manipulating you, trying to get you to act impulsively or out of fear. They also know how to make people feel socially uncomfortable. If you push back, they will make it seem like you are being unreasonable, rude, or stupid for doing so. They will push whatever social and psychological buttons they can. This means you have to be prepared, you have to be armed with a defense against this manipulation. Perhaps the best defense is simply protocol. If you don’t want to be rude, then just say, “Sorry, I can’t do that.” Take the basic information and contact the relevant institution directly. Or – just hang up. Remember, they are trying to scam you. You owe them nothing. Even if they are legit, it’s their fault for breaking protocol – they should not be asking you to do something risky.

When in doubt, ask someone you know. Don’t be pressured by the alleged need to act fast. Don’t be pressured into not telling anyone or contacting them directly. Always ask yourself – is there any possible way this could be a scam? If there is, then it probably is a scam.

It’s also important to know that anything can be spoofed. A scammer can make it seem like the call is coming from a legit organization, or someone you know. Now, with AI, it’s possible to fake someone’s voice. Standard protocol should always be, take the information, hang up, look up the number independently and contact them directly. Just assume, if they contacted you, it’s a scam. Nothing should reassure you that it isn’t.


Fake Fossils

Mon, 02/19/2024 - 4:46am

In 1931 a fossil lizard was recovered from the Italian Alps, believed to be a 280 million year old specimen. The fossil was also rare in that it appeared to have some preserved soft tissue. It was given the species designation Tridentinosaurus antiquus and was thought to be part of the Protorosauria group.

A recent detailed analysis of the specimen, hoping to learn more about the soft tissue elements of the fossil, revealed something unexpected. The fossil is a fake (at least mostly). What appears to have happened is that a real fossil which was poorly preserved was “enhanced” to make it more valuable. There are real fossilized femur bones and some bony scales on what was the back of the lizard. But the overall specimen was poorly preserved and of not much value. What the forger did was carve out the outline of the lizard around the preserved bones and then paint it black to make it stand out, giving the appearance of carbonized soft tissue.

How did such a fake go undetected for 93 years? Many factors contributed to this delay. First, there were real bones in the specimen and it was taken from an actual fossil deposit. Initial evaluation did reveal some kind of lacquer on the specimen, but this was common practice at the time as a way of preserving the fossils, so did not raise any red flags. Also, characterization of the nature of the black material required UV photography and microscopic examination using technology not available at the time. This doesn’t mean they couldn’t have revealed it as a fake back then, but it is certainly much easier now.

It also helps to understand how fossils are typically handled. Fossils are treated as rare and precious items. They are typically examined with non-destructive techniques. It is also common for casts to be made and photographs taken, with the original fossils then catalogued and stored away for safety. Not every fossil has a detailed examination before being put away in a museum drawer. There simply aren’t the resources for that.

No fossil fake can withstand detailed examination. There is no way to forge a fossil that cannot be detected by the many types of analysis that we have available today. Some fakes are detected immediately, usually because of some feature that a paleontologist will recognize as fake. Others require high tech analysis. The most famous fake fossil, Piltdown Man, was a chimera of modern human and ape bones aged to look old. The fraud was revealed by drilling into the bones revealing they were not fossilized.

There was also an entire industry of fake fossils coming out of China. These are mostly for sale to private collectors, exploiting the genuine fossil deposits in China, especially of feathered dinosaurs. It is illegal to export real fossils from China, but not fakes. In at least one case, paleontologists were fooled for about a year by a well-crafted fake. Some of these fakes were modified real (but not very valuable) fossils while others were entire fabrications. The work was often so good, they could have just sold them as replicas for decent amounts of money. But still, claiming they were real inflated the price.

Creationists would have you believe that all fossils are fake, and will point to known cases as evidence. But this is an absurd claim. The Smithsonian alone boasts a collection of 40 million fossil specimens. Most fossils are discovered by paleontologists looking for them in geological locations that correspond to specific periods of time and have conditions amenable to fossil preservation. There is transparency, documentation, and a provenance to the fossils that would make a forgery impossible.

There are a few features that fake fossils have in common that in fact reinforce the nature of genuine fossils. Fake fossils generally were not found by scientists. They were found by amateurs who claim to have gotten lucky. The source and provenance of the fossils are therefore often questionable. This does not automatically mean they are fakes. There is a lot of non-scientific activity that can dig up fossils or other artifacts by chance. Ideally as soon as the artifacts are detected scientists are called in to examine them first hand, in situ. But that does not always happen.

Perhaps most importantly, fake fossils rarely have an enduring impact on science. Many are just knock-offs, and therefore even if they were real they are of little scientific value. They are just copies of real fossils. Fakes purported to be of unique fossil specimens, like Piltdown, have an inherent problem. If they are unique, then they would tell us something about the evolutionary history of the group. But if they are fake, they can’t be telling us something real. Chances are the fakes will not comport with the actual fossil record. They will be enigmas, and likely will be increasingly out of step with the actual fossil record as more genuine specimens are found.

That is exactly what happened with Piltdown. Some paleontologists were immediately skeptical of the find, and it was always thought of as a quirky specimen that scientists did not know how to fit into the tree of life. As more hominid specimens were found Piltdown became increasingly the exception, until finally scientists had enough, pulled the original specimens out of the vault, and showed them to be fakes. The same is essentially true of the Tridentinosaurus antiquus specimen. Paleontologists could not figure out exactly where it fit taxonomically, and did not know how it had apparent soft tissue preservation. It was an enigma, which prompted the analysis that revealed it to be a fake.

Paleontology is essentially the world’s largest puzzle, with each fossil specimen being a puzzle piece. A fake fossil is either redundant or a puzzle piece that does not fit.

The post Fake Fossils first appeared on NeuroLogica Blog.

Categories: Skeptic

Biofrequency Gadgets are a Total Scam

Fri, 02/16/2024 - 4:51am

I was recently asked what I thought about the Solex AO Scan. The website for the product includes this claim:

AO Scan Technology by Solex is an elegant, yet simple-to-use frequency technology based on Tesla, Einstein, and other prominent scientists’ discoveries. It uses delicate bio-frequencies and electromagnetic signals to communicate with the body.

The AO Scan Technology contains a library of over 170,000 unique Blueprint Frequencies and created a hand-held technology that allows you to compare your personal frequencies to these Blueprints in order to help you achieve homeostasis, the body’s natural state of balance.

This is all hogwash (to use the technical term). Throwing out the names Tesla and Einstein, right off, is a huge red flag. This is a good rule of thumb – whenever these names (or Galileo) are invoked to hawk a product, it is most likely a scam. I guess you can say that any electrical device is based on the work of any scientist who had anything to do with electromagnetism.

What are “delicate bio-frequencies”? Nothing, they don’t exist. The idea, which is an old one used in scam medical devices for decades, is that the cells in our bodies have their own unique “frequency” and you want these frequencies to be in balance and healthy. If the frequencies are blocked or off in some way, this causes illness. You can therefore read these frequencies to diagnose diseases or illness, and you can likewise alter these frequencies to restore health and balance. This is all complete nonsense, not based on anything in reality.

Living cells, of course, do have tiny electromagnetic fields associated with them. Electrical potential is maintained across all cellular membranes. Specialized cells, like muscles and nervous tissue, use this potential as the basis for their function. But there is no magic “frequency” associated with these fields. There is no “signature” or “blueprint”. That is all made up nonsense. They claim to have researched 170,000 “Blueprint Frequencies” but the relevant science appears to be completely absent from the published literature. And of course there are no reliable clinical trials indicating that any type of frequency-based intervention such as this has any health or medical application.

As an aside, there are brainwave frequencies (although this is not what they are referring to). These are caused by networks of neurons in the brain all firing together at a regular frequency. We can also pick up the electrical signals caused by the contraction of the heart – a collection of muscle cells all firing in synchrony. When you contract a skeletal muscle, we can also record that electrical activity – again, because there are lots of cells activating in coordination. Muscle contractions have a certain frequency to them. Motor units don’t just contract; they fire at an increasing frequency as they are recruited, peaking (in a healthy muscle) at 10 Hz. We can measure these frequencies to look for nerve or motor neuron damage. If you cannot recruit as many motor units, the ones you can recruit will fire faster to compensate.

These are all specialized tests looking at specific organs with many cells firing in a synchronous fashion. If you are just looking at the body in general, not nervous tissue or muscles, the electrical signals are generally too tiny to measure and would just be white noise anyway. You will not pick up “frequencies”, and certainly not anything with any biological meaning.

In general, be very skeptical of any “frequency” based claims. That is just a science-sounding buzzword used by some to sell dubious products and claims.

The post Biofrequency Gadgets are a Total Scam first appeared on NeuroLogica Blog.

Categories: Skeptic

Using AI and Social Media to Measure Climate Change Denial

Thu, 02/15/2024 - 5:14am

A recent study finds that 14.8% of Americans do not believe in global climate change. This number is roughly in line with what recent surveys have found, such as this 2024 Yale study, which put the figure at 16%. In 2009, by comparison, the figure was at 33% (although this was a peak – the 2008 result was 21%). The numbers are also encouraging when we ask about possible solutions, with 67% of Americans saying that we should prioritize development of green energy and should take steps to become carbon neutral by 2050. The good news is that we now have a solid majority of Americans who accept the consensus on climate change and broadly support measures to reduce our carbon footprint.

But there is another layer to this study I first mentioned – the methods used in deriving the numbers. It was not a survey. It used artificial intelligence to analyze posts on X (Twitter) and their networks. The fact that the results align fairly well with more tried-and-true methods, like surveys, is somewhat validating of the methods. Of course surveys can be variable as well, depending on exactly how questions are asked and how populations are targeted. But multiple well-designed surveys by experienced institutions, like Pew, can create an accurate picture of public attitudes.

The advantage of analyzing social media is that it can more easily provide vast amounts of data. The authors report:

We used a Deep Learning text recognition model to classify 7.4 million geocoded tweets containing keywords related to climate change. Posted by 1.3 million unique users in the U.S., these tweets were collected between September 2017 and May 2019.

That’s a lot of data. As is almost always the case, however, there is a price to pay for using methods which capture such vast amounts of data – that data is not strictly controlled. It’s observational. It is a self-selective group – people who post on X. It therefore may not be representative of the general population. Because the results broadly agree with more traditional survey methods, however, this does suggest that any such selection effects balanced out. Also, they adjusted for any skew toward certain demographic groups – so if younger people were overrepresented in the sample, that was corrected for.
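
For a sense of what this kind of machine classification involves, here is a minimal, hypothetical sketch in Python. The study itself used a deep learning model on 7.4 million geocoded tweets; the simple bag-of-words classifier, the file name, and the column names below are stand-ins for illustration only, not the authors' actual pipeline.

# Hypothetical sketch: label tweets as accepting (0) or denying (1) climate change.
# Assumes a labeled CSV with "text" and "label" columns; the real study used deep learning.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("labeled_tweets.csv")  # hypothetical training data
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42)
vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)
pred = clf.predict(vectorizer.transform(X_test))
print("held-out accuracy:", accuracy_score(y_test, pred))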

The results also showed some more detail. Because the posts were geocoded the analysis can look at regional differences. They found broadly that acceptance of global warming science was highest on the coasts, and lower in the Midwest and South. There were also significant county-level differences. They found:

Political affiliation has the strongest correlation, followed by level of education, COVID-19 vaccination rates, carbon intensity of the regional economy, and income.

Climate change denial, again in line with prior data, correlated strongly with identifying as a Republican. That was the dominant factor. It’s likely that other factors, like COVID-19 vaccination rates, also derive from political affiliation. But it does suggest that someone who rejects the scientific consensus and the opinion of experts on climate change is more likely to do so on other issues as well.

Because they did network analysis they were also able to analyze who is talking to whom, and who the big influencers were. They found, again unsurprisingly, that there are networks of users who accept climate change and networks that reject climate change, with very little communication between the networks. This shows that the echo-chamber effect on social media is real, at least on this issue. This is a disturbing finding, perhaps the most disturbing of this study (even if we already knew this).

It reflects in data what many of us feel – that social media and the internet have transformed our society from one where there is a basic level of shared culture and facts to one in which different factions are siloed in different realities. There have always been different subcultures, with vastly different ideologies and life experiences. But the news was the news, perhaps with different spin and emphasis. Now it is possible for people to exist in completely different and relatively isolated information ecosystems. We don’t just have different priorities and perspectives – we live in different realities.

The study also identified individual influencers who were responsible for many of the climate change denial posts. Number one among them was Trump, followed by conservative media outlets. Trump is, of course, a polarizing figure, a poster child for the echo-chamber social media phenomenon. For many he represents either salvation or the destruction of American democracy.

On the bright side, it does seem there is still the possibility of movement in the middle. The middle may have shrunk, but still holds some sway in American politics, and there does seem to be a number of people who can be persuaded by facts and reason. We have moved the needle on many scientific issues, and attitudes have improved on topics such as climate change, GMOs, and nuclear power. The next challenge is fixing our dysfunctional political system so we can translate solid public majorities into tangible action.

The post Using AI and Social Media to Measure Climate Change Denial first appeared on NeuroLogica Blog.

Categories: Skeptic

Flow Batteries – Now With Nanofluids

Tue, 02/13/2024 - 5:12am

Battery technology has been advancing nicely over the last few decades, with fairly predictable incremental improvements in energy density, charging time, stability, and life cycle. We now have lithium-ion batteries with a specific energy of 296 Wh/kg – these are in use in existing Teslas. This translates to battery electric vehicles (BEVs) with ranges from 250-350 miles per charge, depending on the vehicle. That is more than enough range for most users. Incremental advances continue, and every year we should expect newer Li-ion batteries with slightly better specs, which add up quickly over time. But still, range anxiety is a thing, and batteries with that range are heavy.

What would be nice is a shift to a new battery technology with a leap in performance. There are many battery technologies being developed that promise just that. We actually already have one, shifting from graphite anodes to silicon anodes in the Li-ion battery, with an increase in specific energy to 500 Wh/kg. Amprius is producing these batteries, currently for aviation but with plans to produce them for BEVs within a couple of years. Panasonic, which builds 10% of the world’s EV batteries and contracts with Tesla, is also working on a silicon anode battery and promises to have one in production soon. That is basically a doubling of battery capacity from the average in use today, and puts us on a path to further incremental advances. Silicon anode lithium-ion batteries should triple battery capacity over the next decade, while also making a more stable battery that uses fewer (or no – they are working on this too) rare earth elements and no cobalt. So even without any new battery breakthroughs, there is a very bright future for battery technology.
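
To put those specific-energy numbers in perspective, here is a quick back-of-the-envelope calculation. The 75 kWh pack size is my own assumed figure for a typical mid-size EV, and this counts cell mass only, ignoring packaging and cooling:

# Rough illustration: cell mass needed for an assumed 75 kWh pack
# at different specific energies (cells only; ignores pack overhead).
PACK_KWH = 75  # assumed pack size for illustration
for label, wh_per_kg in [("current Li-ion", 296), ("silicon anode", 500)]:
    mass_kg = PACK_KWH * 1000 / wh_per_kg
    print(f"{label}: about {mass_kg:.0f} kg of cells")
# current Li-ion: ~253 kg of cells; silicon anode: ~150 kg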

But of course, we want more. Battery technology is critical to our green energy future, so while we are tweaking Li-ion technology and getting the most out of that tech, companies are working to develop something to replace (or at least complement) Li-ion batteries. Here is a good overview of the best technologies being developed, which include sodium-ion, lithium-sulphur, lithium-metal, and solid-state lithium-air batteries. As an aside, the reason lithium is a common element here is that it is the third-lightest element (after hydrogen and helium) and the first that can be used for this sort of battery chemistry. Sodium is right below lithium on the periodic table, so it is the next lightest element with similar chemistry.

But for the rest of this article I want to focus on one potential successor to Li-ion batteries – flow batteries. Flow batteries are so called because they use two liquid electrochemical substances to carry their charge and create electrical current. Flow batteries are stable, less prone to fires than lithium batteries, and have a potential critical advantage – they can be recharged by swapping out the electrolyte. They can also be recharged in the conventional way, by plugging them in. So theoretically a flow battery could provide the same BEV experience as a current Li-ion battery, but with an added option. For “fast charging” you could pull into a station, connect a hose to your car, and swap out spent electrolyte for fresh electrolyte, fully charging your vehicle in the same time it would take to fill up a tank. This is the best of both worlds – for those who own their own off-street parking space (82% of Americans) routine charging at home is super convenient. But for longer trips, the option to just “fill the tank” is great.

But there is a problem. As I have outlined previously, battery technology is one of those tricky technologies that requires a suite of characteristics in order to be functional, and any one falling short is a deal-killer. For flow batteries the problem is that their energy density is only about 10% that of Li-ion batteries. This makes them unsuitable for BEVs. This is also an inherent limitation of chemistry – you can only dissolve so much solute in a liquid. However, as you likely have guessed based upon my headline, there is also a solution to this limitation – nanofluids. Nanoparticles suspended in a fluid can potentially have much greater energy density.

Research into this approach actually goes back to 2009, at Argonne National Laboratory and the Illinois Institute of Technology, which did the initial proof of concept. Then in 2013 ARPA-E (the Department of Energy’s DARPA-style research agency) gave a grant to the same team to build a working prototype, which they did. Those same researchers then spun off a private company, Influit Energy, to develop a commercial product, with further government contracts for such development. As an aside, we see here an example of how academic researchers, government funding, and private industry work together to bring new cutting-edge technology to market. It can be a fruitful arrangement, as long as the private companies ultimately give back to the public whose support they built upon.

Where is this technology now? John Katsoudas, a founder and chief executive of Influit, claims that they are developing a battery with a specific energy of 550 to 850 Wh/kg, with the potential to go even higher. That’s roughly double to triple current EV batteries. They also claim these batteries (soup to nuts) will be cost competitive with Li-ion batteries. Of course, claims from company executives always need to be taken with a huge grain of salt, and I don’t get too excited until a product is actually in production, but this does all look very promising.

Part of the technology involves how many nanoparticles they can cram into their electrolyte fluid. They claim they are currently up to 50% by weight, but believe they can push that to 80%. At 80% nanoparticles, the fluid would have the viscosity of motor oil.

A big part of any new technology, often neglected in the hype, is infrastructure. We are facing this issue with BEVs. The technology is great, but we need an infrastructure of charging stations. They are being built, but currently are a limiting factor to public acceptance of the technology (lack of chargers contributes to range anxiety). The same issue would exist with nanoparticle flow batteries. However, they would have at least as good an infrastructure for normal recharging as current BEVs. They would also benefit from pumping electrolyte fluid as a means of fast charging. Such fluid could be processed and recharged on site, but also could be trucked or piped as with existing gasoline infrastructure. Still, this is not like flipping a switch. It could take a decade to build out an adequate infrastructure. But again, meanwhile at least such batteries can be charged as normal.

I don’t know if this battery technology will be the one to displace lithium-ion batteries. A lot will depend on which technologies make it to market first, and what infrastructure investments we make. It’s possible that the silicon anode Li-ion batteries may improve so quickly they will eclipse their competitors. Or the solid state batteries may make a big enough leap to crush the competition. Or companies may decide that pumping fluid is the path to public acceptance and go all-in on flow batteries. It’s a good problem to have, and will be fascinating to watch this technology race unfold.

The only prediction that seems certain is that battery technology is advancing quickly, and by the 2030s we should have batteries for electric vehicles with 2-3 times the energy density and specific energy of those in common use today. That will be a different world for BEVs.

 

The post Flow Batteries – Now With Nanofluids first appeared on NeuroLogica Blog.

Categories: Skeptic

The Exoplanet Radius Gap

Mon, 02/12/2024 - 5:03am

As of this writing, there are 5,573 confirmed exoplanets in 4,146 planetary systems. That is enough exoplanets, planets around stars other than our own sun, that we can do some statistics to describe what’s out there. One curious pattern that has emerged is a relative gap in the radii of exoplanets between 1.5 and 2.0 Earth radii. What is the significance, if any, of this gap?

First we have to consider if this is an artifact of our detection methods. The most common method astronomers use to detect exoplanets is the transit method – carefully observe a star over time, precisely measuring its brightness. If a planet moves in front of the star, the brightness will dip, remain low while the planet transits, and then return to its baseline brightness. This produces a classic light curve that astronomers recognize as a planet orbiting that star in the plane of observation from the Earth. The first time such a dip is observed, that is a suspected exoplanet, and if the same dip is seen again that confirms it. This also gives us the orbital period. This method is biased toward exoplanets with short periods, because they are easier to confirm. If an exoplanet has a period of 60 years, that would take 60 years to confirm, so we haven’t confirmed a lot of those.
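
The transit method is also more sensitive to larger planets, since the depth of the dip is approximately the square of the planet-to-star radius ratio. The sketch below uses that standard approximation and ignores real-world complications like noise, limb darkening, and stellar variability:

# Approximate transit depth: fraction of starlight blocked ~ (R_planet / R_star)^2
R_SUN_IN_EARTH_RADII = 109.2  # approximate ratio of the Sun's radius to Earth's

def transit_depth(planet_radius_earths, star_radius_suns=1.0):
    return (planet_radius_earths / (star_radius_suns * R_SUN_IN_EARTH_RADII)) ** 2

for r in (1.0, 1.5, 2.0, 11.2):  # Earth, the gap edges, roughly Jupiter
    print(f"{r} Earth radii: {transit_depth(r) * 100:.4f}% dip")
# An Earth-sized planet dims a Sun-like star by only about 0.008 percent.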

There is also the wobble method. We can observe the path that a star takes through the sky. If that path wobbles in a regular pattern that is likely due to the gravitational tug from a large planet or other dark companion that is orbiting it. This method favors more massive planets closer to their parent star. Sometimes we can also directly observe exoplanets by blocking out their parent star and seeing the tiny bit of reflected light from the planet. This method favors large planets distant from their parent star. There are also a small number of exoplanets discovered through gravitational microlensing, an effect of general relativity.

None of these methods, however, explain the 1.5 to 2.0 radii gap. It’s also likely not a statistical fluke given the number of exoplanets we have discovered. Therefore it may be telling us something about planetary evolution. But there are lots of variables that determine the size of an exoplanet, so it can be difficult to pin down a single explanation.

One theory has to do with the atmospheres of planets. Exoplanets that are small and rocky but larger than Earth are called super-earths. Here is an example of a recent super-earth discovered in the habitable zone of a nearby red dwarf star – TOI-715 b. It has a mass of 3.02 Earth masses, and a radius 1.55 times that of Earth. So it is right on the edge of the gap. I calculated the surface gravity of this planet, which is about 1.26 g. It has an orbital period of 19.3 days, which means it is likely tidally locked to its parent star. This planet was discovered by the TESS telescope using the transit method.
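
For the curious, that surface gravity figure follows from simple Newtonian scaling in Earth units (g is proportional to mass divided by radius squared), using the reported mass and radius:

# Surface gravity in Earth units: g = M / R^2, with M and R in Earth units
mass_earths = 3.02    # reported mass of TOI-715 b
radius_earths = 1.55  # reported radius of TOI-715 b
g = mass_earths / radius_earths ** 2
print(f"surface gravity = {g:.2f} g")  # about 1.26 g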

Planets like TOI-715 b, at or below the gap, likely are close to their parent stars and have relatively thin atmospheres (something like Earth or less). If the same planet were further out from its parent star, however, with that mass it would likely retain a thick atmosphere. This would increase the apparent radius of the planet using the transit method (which cannot distinguish a rocky world from a thick atmosphere), increasing its size to greater than two Earth radii – vaulting it across the gap. These worlds, above the gap, are called mini-Neptunes or sub-Neptunes. So according to this theory the main factor is distance from the parent star and whether or not the planet can retain a thick atmosphere. When small rocky worlds get big enough and far enough from their parent star, they jump to the sub-Neptune category by retaining a thick atmosphere.

But as I said, there are lots of variables here, such as the mass of the parent star.  A recent paper adds another layer – what about planets that migrate? One theory of planetary formation (mainly through simulations) holds that some planets may migrate either closer to or farther away from their parent stars over time. Also the existence of “hot Jupiters” – large gas planets very close to their parent stars – suggests migration, as such planets likely could not have formed where they are.  It is likely that Neptune and Uranus migrated farther away from the sun after their formation. This is part of a broader theory about the stability of planetary systems. Such systems, almost by definition, are stable. If they weren’t, they would not last for long, which means we would not observe many of them in the universe. Our own solar system has been relatively stable for billions of years.

There are several possible explanations for this remarkable stability. One is that this is how planetary systems evolve. The planets form from a rotating disc of material, which means they form roughly circular orbits all going in the same plane and same direction. But it is also possible that early stellar systems develop many more planets than ultimately survive. Those in stable orbits survive long term, while those in unstable orbits either fall into their parent star or get ejected from the system to become rogue planets wandering between the stars. There is therefore a selection for planets in stable orbits. There is also now a third process likely happening, and that is planetary migration. Planets may migrate to more stable orbits over time. Eventually all the planets in a system jockey into position in stable orbits that can last billions of years.

Observing exoplanetary systems is one way to test our theories about how planetary systems form and evolve. The relative gap in planet size is one tiny piece of this puzzle. Bringing in migration, the paper argues that sub-Neptunes which migrate closer to their parent star have their thick atmospheres stripped away, leaving behind smaller rocky worlds below the gap. The authors also hypothesize that a very icy world may migrate closer to its parent star, melting the ice and forming a thick atmosphere, jumping the gap to the larger planetary size.

What all of these theories of the gap have in common is the presence or absence of a thick atmosphere, which makes sense. There are some exoplanets in the gap, but it’s just much less likely. It’s hard to get a planet right in the gap, because either it’s too light to have a thick atmosphere, or too massive not to have one. The gap can be seen as an unstable region of planetary formation.

The more time that goes by the more data we will have and the better our exoplanet statistics will be. Not only will we have more data, but longer observation periods allow for the confirmation of planets with longer orbital periods, so our data will become progressively more representative. Also, better telescopes will be able to detect smaller worlds in orbits more difficult to observe, so again the data will become more representative of what’s out there.

Finally, I have to add, with greater than 5000 exoplanets and counting, we have still not found an Earth analogue. No exoplanet that is a small rocky world of roughly Earth size and mass in the habitable zone of an orange or yellow star. Until we find one, it’s hard to do statistics, except to say that truly Earth-like planets are relatively rare. But I anxiously await the discovery of the first true Earth twin.

The post The Exoplanet Radius Gap first appeared on NeuroLogica Blog.

Categories: Skeptic

JET Fusion Experiment Sets New Record

Fri, 02/09/2024 - 5:06am

Don’t get excited. It’s always nice to see incremental progress being made with the various fusion experiments happening around the world, but we are still a long way off from commercial fusion power, and this experiment doesn’t really bring us any closer, despite the headlines. Before I get into the “maths”, here is some quick background.

Fusion is the process of combining light elements into heavier elements. This is the process that fuels stars. We have been dreaming about a future powered by clean abundant fusion energy for at least 80 years. The problem is – it’s really hard. In order to get atoms to smash into each other with sufficient energy to fuse, you need high temperatures and pressures, like those at the core of our sun. We can’t replicate the density and pressure at a star’s core, so we have to compensate here on Earth with even higher temperatures.

There are a few basic fusion reactor designs. The tokamak design (like the JET reactor) is a torus, with a plasma of hydrogen isotopes (usually deuterium and tritium) inside the torus contained by powerful magnetic fields. The plasma is heated and squeezed by brute magnetic force until fusion happens. Another method, the pinch method, also uses magnetic fields, but it uses a stream of plasma that gets pinched at one point to high density and temperature. Then there is inertial confinement, which essentially uses an implosion created by powerful lasers to create a brief moment of high density and temperature. More recently a group has used sonic cavitation to create an instance of fusion (rather than sustained fusion). These methods are essentially in a race to create commercial fusion. It’s an exciting (if very slow motion) race.

There are essentially three thresholds to keep an eye out for. The first is fusion – does the setup create any measurable fusion? You might think that this is the ultimate milestone, but it isn’t. Remember, the goal for commercial fusion is to create net energy. Fusion creates energy through heat, which can then be used to run a conventional turbine. So just achieving fusion, while super nice, is not even close to where we need to get. If you are putting thousands of times the energy into the process as you get out, that is not a commercial power plant. The next threshold is “ignition”, or sustained fusion in which the heat energy created by fusion is sufficient to sustain the fusion process. (This is not relevant to the cavitation method which does not even try to sustain fusion.) A couple of labs have recently achieved this milestone.

But wait, there’s more. Even though they achieved ignition, and (as was widely reported) produced net fusion energy, they are still far from a commercial plant. The fusion created more energy than went into the fusion itself. But the entire process still used about 100 times the total energy output. So we are only about 1% of the way toward the ultimate goal of total net energy. When framed that way, it doesn’t sound like we are close at all. We need lasers or powerful magnets that are more than 100 times as efficient as the ones we are using now, or the entire method needs to pick up an order of magnitude or two of greater efficiency. That is no small task. It’s quite possible that we simply can’t do it with existing materials and technology. Fusion power may have to wait for some future unknown technology.

In the meantime we are learning an awful lot about plasmas and how to create and control fusion. It’s all good. It’s just not on a direct path to commercial fusion. It’s not just a matter of “scaling up”. We need to make some fundamental changes to the whole process.

So what record did the JET fusion experiment break? Using the tokamak design, a torus of plasma constrained by magnetic fields, they were able to create fusion and generate “69 megajoules of fusion energy for five seconds.” Although the BBC reports it produced “69 megajoules of energy over five seconds.” That is not a subtle difference. Was it 69 megajoules per second for five seconds, or was it 13.8 megajoules per second for five seconds for a total of 69 megajoules? More to the point – what percentage of energy input was this? I could not find anyone reporting it (and ChatGPT didn’t know). But I did find this – “In total, when JET runs, it consumes 700 – 800 MW of electrical power.” A joule is one watt of power for one second.

It’s easy to get the power vs energy units confused, and I’m trying not to do that here, but the sloppy reporting is no help. Watts are a measure of power. Power multiplied by time gives energy, so a watt-second or watt-hour is a unit of energy. From here:

1 Joule (J) is the MKS unit of energy, equal to the force of one Newton acting through one meter.
1 Watt is the power of a Joule of energy per second

So since joules are a measure of energy, it makes more sense that it would be a total amount of energy created over 5 seconds (so the BBC was more accurate). So 700 MW of power over 5 seconds is 3,500 megajoules of energy input, compared to 69 megajoules output. That is 1.97%, which is close to where the best fusion reactors are so I think I got that right. However, that’s only counting the energy to run the reactor for the 5 seconds it was fusing. What about all the energy for starting up the process and everything else soup to nuts?
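
Here is that arithmetic laid out explicitly, using the lower 700 MW figure and counting only the five seconds of the pulse itself, which is a generous simplification:

# Back-of-the-envelope check of the JET numbers quoted above.
input_power_mw = 700        # reported electrical draw while running (lower bound)
run_time_s = 5              # duration of the record-setting pulse
output_energy_mj = 69       # reported fusion energy produced
input_energy_mj = input_power_mw * run_time_s  # MW * s = MJ
print(f"input ~ {input_energy_mj} MJ, output = {output_energy_mj} MJ")
print(f"output/input ~ {output_energy_mj / input_energy_mj:.1%}")  # about 2%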

This is not close to a working fusion power plant. Some reporting says the scientists hope to double the efficiency with better superconducting magnets. That would be nice – but double is still nowhere close. We need two orders of magnitude, at least, just to break even. We probably need closer to three orders of magnitude for the whole thing to be worth it, cradle to grave. We have to create all that tritium too, remember. Then there is inefficiency in converting the excess heat energy to electricity. That may be an order of magnitude right there.

I am not down on fusion. I think we should continue to research it. Once we can generate net energy through fusion reactors, that will likely be our best energy source forever – at least for the foreseeable future. It would take super advanced technology to eclipse it. So it’s worth doing the research. But just being realistic, I think we are looking at the energy of the 22nd century, and maybe the end of this one. Not the 2040s as some optimists predict. I hope to be proven wrong on this one. But either way, premature hype is likely to be counterproductive. This is a long term research and development project. It’s possible no one alive today will see a working fusion plant.

At least, for the existing fusion reactor concepts I think this is true. The exception is the cavitation method, which does not even try to sustain fusion. They are just looking for a “putt putt putt” of individual fusion events, each creating heat. Perhaps this, or some other radical new approach, will cross over the finish line much sooner than anticipated and make me look foolish (although happily so).

 

The post JET Fusion Experiment Sets New Record first appeared on NeuroLogica Blog.

Categories: Skeptic

Weaponized Pedantry and Reverse Gish Gallop

Tue, 02/06/2024 - 4:45am

Have you ever been in a discussion where the person with whom you disagree dismisses your position because you got some tiny detail wrong or didn’t know the tiny detail? This is a common debating technique. For example, opponents of gun safety regulations will often use the relative ignorance of proponents regarding gun culture and technical details about guns to argue that they therefore don’t know what they are talking about and their position is invalid. But, at the same time, GMO opponents will often base their arguments on a misunderstanding of the science of genetics and genetic engineering.

Dismissing an argument because of an irrelevant detail is a form of informal logical fallacy. Someone can be mistaken about a detail while still being correct about a more general conclusion. You don’t have to understand the physics of the photoelectric effect to conclude that solar power is a useful form of green energy.

There are also some details that are not irrelevant, but may not change an ultimate conclusion. If someone thinks that industrial release of CO2 is driving climate change, but does not understand the scientific literature on climate sensitivity, that doesn’t make them wrong. But understanding climate sensitivity is important to the climate change debate; it just happens to align with what proponents of anthropogenic global warming are concluding. In this case you need to understand what climate sensitivity is, and what the science says about it, in order to understand and counter some common arguments deniers use to argue against the science of climate change.

What these few examples show is a general feature of the informal logical fallacies – they are context dependent. Just because you can frame someone’s position as a logical fallacy does not make their argument wrong (thinking this is the case is the fallacy fallacy). What logical fallacy is using details to dismiss the bigger picture? I have heard this referred to as a “Reverse Gish Gallop”. I don’t use this term because I don’t think it captures the essence of the fallacy. I have used the term “weaponized pedantry” before and I think that is better.

It’s OK to be a little pedantic if the purpose is to be precise and accurate. That is consistent with good science and good scholarship. But such pedantry must be fair and in context. This requires a fair assessment of the implications of the detail. It is good to get the details right for their own sake, but some details don’t matter to a particular argument or position. There are a couple of ways to weaponize pedantry not to advocate for genuine good scholarship but as a hit job against a position you don’t like.

One way is to simply be biased in your search for and exposure of small mistakes. If you are only looking for them on one side or in one direction of an argument, then that is not good scholarship. It’s searching for ammunition to use as a weapon. The other method is to imply, or sometimes even explicitly state, that an error in a detail calls into question or even invalidates the bigger picture, even when it doesn’t. Sometimes this could just be a non sequitur argument – you made a mistake in describing the uranium cycle, therefore your opinion on nuclear power is not correct. And sometimes this can be an ad hominem fallacy – you don’t know the difference between a clip and a magazine so you are not allowed to have an opinion on gun safety.

Given this complexity, what is a good approach to pedantry about details and accuracy? First, I will reiterate my position that having a discussion or even an “argument” should not be about winning. Winning is for debate club and the courtroom. Having a discussion should be about understanding the other person’s position, understanding your own position better, understanding the topic better, and coming to as much common ground as possible. This means identifying the factual claims and resolving any differences, hopefully with reliable sources. Then you need to examine the logic of every claim and statement, including your own, to see if it is valid. You may also need to identify any value judgements that are subjective, or any areas where the facts are unknown or ambiguous.

With this approach, knowledge of logical fallacies is a good way to police your own arguments and thinking on a topic, and a good way to resolve differences and come to common ground. But if wielded as a rhetorical weapon, you are almost certain to commit the fallacy fallacy, including weaponized pedantry.

Specifically with reference to this fallacy – you need to ask the question, does this detail affect the larger claim? It may be entirely irrelevant, or it may be a tiny tweak, or it may be truly critical to the claim. If someone falsely thinks that Monsanto sued farmers solely for accidental contamination, that is not a tiny detail – that is core to one anti-GMO argument. Try to be as fair and neutral as possible in making that call, and then be honest about it (to yourself and anyone else involved in the discussion).

It’s OK to be that person who says, “Well, actually.” It’s OK to get the details right for the sake of getting the details right. We all should have a dedication to accuracy and precision. But it’s very easy to disguise biased advocacy as dedication to accuracy when it isn’t.

The post Weaponized Pedantry and Reverse Gish Gallop first appeared on NeuroLogica Blog.

Categories: Skeptic

Did They Find Amelia Earhart’s Plane

Mon, 02/05/2024 - 4:25am

Is this sonar image, taken at 16,000 feet below the surface about 100 miles from Howland Island, that of a downed Lockheed Model 10-E Electra plane? Tony Romeo hopes it is. He spent $9 million to purchase an underwater drone, the HUGIN 6000, then hired a crew and scoured 5,200 square miles in a 100-day search hoping to find exactly that. He was looking, of course, for the lost plane of Amelia Earhart. Has he found it? Let’s explore how we answer that question.

First some quick background: most people know Amelia Earhart was a famous (and much beloved) early female pilot, the first woman to fly solo across the Atlantic. She was engaged in a mission to become the first female pilot (with her navigator, Fred Noonan) to circumnavigate the globe. She started off in Oakland, California, flying east. She made it all the way to Papua New Guinea. From there her plan was to fly to Howland Island, then Honolulu, and back to Oakland. So she had three legs of her journey left. However, she never made it to Howland Island. This is a small island in the middle of the Pacific Ocean and navigating to it is an extreme challenge. The last communication from Earhart was that she was running low on fuel.

That was the last anyone heard from her. The primary assumption has always been that she never found Howland Island, and her plane ran out of fuel and crashed into the ocean. This happened in 1937. But people love mysteries and there has been endless speculation about what may have happened to her. Did she go off course and arrive at the Marshall Islands 1000 miles away? Was she captured by the Japanese (remember, this was right before WWII)? Every now and then a tidbit of suggestive evidence crops up, but it always evaporates on close inspection. It’s all just wishful thinking and anomaly hunting.

There have also been serious attempts to find her plane. However, assuming she was off course, and that’s why they never made it to their target, there could potentially be a huge area of the Pacific Ocean where her plane ended up. Romeo’s effort is the latest to look for her plane, and his approach was entirely reasonable – sonar scan the bottom of the ocean around Howland Island. He and his crew did this starting in September 2023. After the scanning mission was over, while going through the images, they found the image you can see above. Is this Earhart’s plane?

There are three possibilities to consider. One is that the image is not that of a plane at all, but just a random geological formation or something else. Remember that Romeo and his team pored through tons of data looking for a plane-like image. It’s not all that surprising that they found something. This could just be an example of the Face on Mars or the Martian Bigfoot – if you look at enough images looking for stuff you will find it.

The second possibility is that the sonar image is that of a plane, just not Earhart’s Lockheed Electra. There are lots of known missing aircraft. But more importantly perhaps, how many unknown missing aircraft are there? How many planes were lost during WWII and unaccounted for? There could be private unregistered planes, even drug smugglers. And of course, the third possibility is that this is Amelia Earhart’s plane. How can we know?

First, we can make some inferences from the information we have. Is the image that of a plane? I think this is a coin toss. It is reasonably symmetrical, with what could be wings, a fuselage, and a tail. But again, it’s just a fuzzy image. It could just be a ledge and a rock. Neither outcome would shock me.

If it is a plane, could this be Earhart’s plane? The one data point that is in favor of this conclusion is the location – 100 miles off Howland Island. That is within the scope of where we would expect to find her plane. But there are two big things going against it being the Lockheed Electra. First, the Electra had straight wings, while, if this is a plane, the wings appear to be swept back. If this image is accurate, then the answer is no. But it is possible that the plane was damaged by the crash. Perhaps the wings broke and were pushed back by the fall through the water.

Also, the Lockheed Electra was a twin engine plane, with one large engine on each wing. They are not apparent in this image, and they should be. So we also have to speculate that the engines were lost in the process of the plane crashing and sinking, or that the image is too distorted to see them.

As you can see, speculation from the existing evidence is pretty thin. We need more data. What we have with the sonar image is not confirmatory evidence, just a clue that needs follow up. We need better images, hopefully with sufficient detail to provide forensic evidence. This will require a deep sea mission with lights and cameras, like the kind used to explore the wreckage of the Titanic. With such images it should be easy to tell if this is a Lockheed Electra. If it is, then it is almost certainly Earhart’s plane. But also, we may be able to read the registration numbers on the side of the plane, and that would be definitive.

Romeo is in the process of planning a follow up mission to investigate this sonar image. Unless and until this happens, we will not be able to say with any confidence if this is or is not Earhart’s plane.

The post Did They Find Amelia Earhart’s Plane first appeared on NeuroLogica Blog.

Categories: Skeptic

How To Prove Prevention Works

Fri, 02/02/2024 - 4:55am

Homer: Not a bear in sight. The Bear Patrol must be working like a charm.
Lisa: That’s specious reasoning, Dad.
Homer: Thank you, dear.
Lisa: By your logic I could claim that this rock keeps tigers away.
Homer: Oh, how does it work?
Lisa: It doesn’t work.
Homer: Uh-huh.
Lisa: It’s just a stupid rock.
Homer: Uh-huh.
Lisa: But I don’t see any tigers around, do you?
[Homer thinks of this, then pulls out some money]
Homer: Lisa, I want to buy your rock.
[Lisa refuses at first, then takes the exchange]

 

This memorable exchange from The Simpsons is one of the reasons the fictional character, Lisa Simpson, is a bit of a skeptical icon. From time to time on the show she does a decent job of defending science and reason, even toting a copy of “Jr. Skeptic” magazine (which was fictional at the time but was later created as a companion to Skeptic magazine).

What the exchange highlights is that it can be difficult to demonstrate (let alone “prove”) that a preventive measure has worked. This is because we cannot know for sure what the alternate history or counterfactual would have been. If I take a measure to prevent contracting COVID and then I don’t get COVID, did the measure work, or was I not going to get COVID anyway? Historically the time this happened on a big scale was Y2K – this was a computer glitch set to go off when the year changed to 2000. Most computer code only encoded the year as two digits, assuming the first two digits were 19, so 1995 was encoded as 95. So when the year changed to 2000, computers around the world would think it was 1900 and chaos would ensue. Between $300 billion and $500 billion were spent worldwide to fix this bug by upgrading millions of lines of code to a four-digit year stamp.
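
The bug itself is easy to demonstrate. Here is a toy example of the kind of two-digit date logic that caused the problem; it is purely illustrative, not any real system's code:

# Toy illustration of the Y2K two-digit year bug (not any real system's code).
def years_of_service(hire_yy, current_yy):
    # Legacy assumption: every two-digit year means 19xx
    return (1900 + current_yy) - (1900 + hire_yy)

print(years_of_service(85, 99))  # 14 -- fine in 1999
print(years_of_service(85, 0))   # -85 -- nonsense once "00" means 2000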

Did it work? Well, the predicted disasters did not happen, so from that perspective it did. But we can’t know for sure what would have happened if we had not fixed the code. This has led to speculation and even criticism about wasting all that time and money fixing a non-problem. There is good reason to think that the preventive measures worked, however.

At the other end of the spectrum, often doomsday cults, predicting that the world will end in some way on a specific date, have to deal with the day after. One strategy is to say that the faith of the group prevented doomsday (the tiger-rock strategy). They can now celebrate and start recruiting to prevent the next doomsday.

The question is – how do we know when our preventive efforts have been successful or if they were not needed? In either scenario above you can use the absence of anything bad happening as evidence either that the problem was fake all along or that the preventive measures worked. The absence of disaster fits both narratives. The problem can get very complicated. When preventive measures are taken and negative outcomes happen anyway, can we argue that it would have been worse? Did the school closures during COVID prevent any deaths? What would have happened if we tried to keep schools open? The absence of a definitive answer means that anyone can use the history to justify their ideological narrative.

How do we determine if a preventive measure works? There are several valid methods, mostly involving statistics. There is no definitive proof (you can’t run history back again to see what happens), but you can show convincing correlation. Ideally the correlation will be repeatable with at least some control of confounding variables. For public health measures, for example, we can compare data from either a time or a place without the preventive measures to those with the preventive measures. This can vary by state, province, country, region, demographic population, or over historic time. In each country where the measles vaccine is rolled out, for example, there is an immediate sharp decline in the incidence of measles. And if vaccine compliance decreases there is a rise in measles. If this happens often enough, the statistical data can be incredibly robust.
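
As a hedged illustration of how such correlational data can be quantified, here is a sketch comparing hypothetical measles case counts before and after a vaccine rollout using a simple incidence rate ratio. The counts and population are invented, and real analyses are far more careful about confounders:

# Hypothetical sketch: incidence rate ratio before vs after a vaccine rollout.
import math
cases_before, cases_after = 4000, 120  # invented annual case counts
population = 1_000_000                 # same population both years
rr = (cases_after / population) / (cases_before / population)
# Approximate 95% CI on the rate ratio (standard formula for two Poisson counts)
se = math.sqrt(1 / cases_before + 1 / cases_after)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"rate ratio = {rr:.3f} (95% CI {lo:.3f} to {hi:.3f})")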

This relates to a commonly invoked (but often misunderstood) logical fallacy, the confusion of correlation with causation. Often people will say “correlation does not equal causation”. This is true but can be misleading. Correlation is not necessarily due to a specific causation, but it can be. Overapplying this principle is a way to dismiss correlational data as useless – but it isn’t. The way scientists use correlation is to look for multiple correlations and triangulate to the one causation that is consistent with all of them. Smoking correlates with an increased risk of lung cancer. Duration and intensity also correlate, as does filtered vs unfiltered smoking, and quitting correlates with reduced risk over time back to baseline. There are multiple correlations that only make sense in total if smoking causes lung cancer. Interestingly, the tobacco industry argued for decades that this data does not prove smoking causes cancer, because it was just correlation.

Another potential line of evidence is simulations. We cannot rerun history, but we can simulate it to some degree. Our ability to do so is growing fast, as computers get more powerful and AI technology advances. So we can run the counterfactual and ask, what would have happened if we had not taken a specific measure. But of course, these conclusions are only as good as the simulations themselves, which are only as good as our models. Are we accounting for all variables? This, of course, is at the center of the global climate change debate. We can test our models both against historical data (would they have predicted what has already happened) and future data (did they predict what happened after the prediction). It turns out, the climate models have been very accurate, and are getting more precise. So we should probably pay attention to what they say is likely to happen with future release of greenhouse gases.

But I predict that if by some miracle we are able to prevent the worst of climate change through a massive effort of decarbonizing our industry, future deniers will argue that climate change was a hoax all along, because it didn’t happen. It will be Y2K all over again but on a more massive scale. That’s a problem I am willing to have, however.

Another way to evaluate claims for prevention is plausibility. The tiger rock example that Lisa gives is brilliant for two reasons. First, the rock is clearly “just a stupid rock” that she randomly picked up off the ground. Second, there is no reason to think that there are any tigers anywhere near where they are. For any prevention claim, the empirical data from correlation or simulations has to be put into the context of plausibility. Is there a clear mechanism? The lower the plausibility (or prior probability, in statistical terms), the greater the need for empirical evidence to show probable causation.
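
The prior-probability point can be made concrete with a toy Bayesian calculation: the same piece of evidence moves a plausible claim much further than an implausible one. The numbers below are arbitrary, chosen only to illustrate the logic:

# Toy Bayes update: same evidence strength, different priors (numbers are arbitrary).
def posterior(prior, likelihood_ratio):
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

likelihood_ratio = 10  # evidence 10x more likely if the claim is true
for prior in (0.5, 0.01, 0.000001):  # plausible, doubtful, tiger-rock territory
    print(f"prior {prior:g} -> posterior {posterior(prior, likelihood_ratio):.6f}")
# With a 50/50 prior the posterior exceeds 90%; the tiger-rock prior barely moves.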

For Y2K, there was a clear and fully understood mechanism at play. They could also easily simulate what would happen, and computer systems did crash. For global climate change, there is a fairly mature science with thousands of papers published over decades. We have a pretty good handle on the greenhouse effect. We don’t know everything (we never do) and there are error-bars on our knowledge (climate sensitivity, for example) but we also don’t know nothing. Carbon dioxide does trap heat, and more CO2 in the atmosphere does increase the equilibrium point of the total heat in the Earth system. There is no serious debate about this, only about the precise relationship. Regarding smoking, we have a lot of basic science data showing how the carcinogens in tobacco smoke can cause cancer, so it’s no surprise that it does.

But if the putative mechanism is magic, then a simple unidirectional correlation would not be terribly convincing, and certainly not the absence of a single historical event.

Of course there are many complicated examples about which sincere experts can disagree, but it is good to at least understand the relevant logic.

The post How To Prove Prevention Works first appeared on NeuroLogica Blog.

Categories: Skeptic

Some Future Tech Possibilities

Thu, 02/01/2024 - 5:10am

It’s difficult to pick winners and losers in the future tech game. In reality you just have to see what happens when you try out a new technology in the real world with actual people. Many technologies that look good on paper run into logistical problems or difficulty scaling, fall victim to economics, or discover that people just don’t like using the tech. Meanwhile, surprise hits become indispensable or can transform the way we live our lives.

Here are a few technologies from recent news that may or may not be part of our future.

Recharging Roads

Imagine recharging your electric vehicle wirelessly just by driving over a road. Sounds great, but is it practical and scalable? Detroit is running an experiment to help find out. On a 400-meter stretch of downtown road they installed induction cables under the ground and connected them to the city grid. EVs that have the $1,000 device attached to their battery can charge up while driving over this stretch of road.

The technology itself is proven, and is already common for recharging smartphones. It’s inductive charging, using a magnetic field to induce a current which recharges a battery. Is this a practical approach to range anxiety? Right now this technology costs $2 million per mile. Having any significant infrastructure of these roads would be incredibly costly, and it’s not clear the benefit is worth it. How much are they going to charge the EV? What is the efficiency? Will drivers fork out $1000 for minimal benefit?

I think this approach has a low probability of working. Where I think there might be a role, however, is in long stretches of interstate highway. This will still be an expensive option, but a 100-mile stretch of highway, for example, fitted with these coils would cost $200 million. Hopefully with mass production and advances the cost will come down, so maybe it will be only $100 million. That is not a bank breaker for a federal infrastructure project. This could significantly extend the range of EVs on long trips along such highways. Busy corridors, like I-95, could potentially benefit. You could also put the coils under parking spaces at rest stations.

Will this be better and more efficient than just plugging in? Probably not. I give this a low probability, but it’s possible there may be some limited applications.

 

The Virtual Office

I like VR, and still use it for occasional gaming. I don’t use an app just because it’s VR, but some VR games and apps are great. The technology, however, is not yet fully mature. Companies have tried to promote a virtual office in the past. Again it looks good on paper. Imagine having your office be a virtual space that you can configure any way you want with everything you need to do right in front of you.

But these efforts all failed, because people simply don’t like wearing heavy goggles on their face for hours at a time. I get this – I can only play VR games for so long at once, then I need to stop. It can be exhausting (that is actually a feature for me, not a bug, to get off my chair, and at least stand up and move around). But for an 8 hour work day – no way.

Ideas that look good on paper often don’t die completely; they keep coming back. In this case, I think we will need to keep taking a look at this technology as it evolves. A recent spate of companies are doing just that, trying again for the virtual office. Now they are calling it “extended reality” or XR, which involves a combination of augmented reality and virtual reality. There are some real advantages – training is more effective in XR (than either in person or online). It is also cost effective to have remote, rather than in-person, meetings. It allows people to work more effectively from home, which also has potentially huge efficiency gains.

Still I think this is essentially a hardware problem. The goggles are still bulky and tiring. The experience is still limited by motion sickness. At some point, however, we will reach a threshold where the hardware is good enough for regular extended use, and then adoption may explode.

Apple is coming out with its long-awaited entry, the Vision Pro, which is being released tomorrow, Feb 2. It still looks pretty bulky, but it does look like a solid incremental advance. I would like the opportunity to test it out. Even if this does not turn out to be the killer tech, I think it's inevitable that we will get there eventually.

 

AI Generated News Anchors

We have been talking about this for years now: when will AI-generated characters get good enough to replace actors completely? Now we are starting to see AI-generated news anchors. That makes sense, and is likely much easier than creating an AI character for a dramatic role in a movie. A TV anchor is often just a talking head while on camera (I'm not saying they are not also sometimes serious journalists). But this way you completely separate the journalism from the good-looking talking-head part of TV news. The journalism is all done behind the scenes, and the attractive anchor is AI generated.

All they have to do is read the text, with a fairly narrow range of emotional expression. It’s actually perfect, if you think about it. I predict this will rapidly become a thing. Probably the biggest limiting factor is going to be protests, contracts, and other legal stuff. But the tech itself is ready, and perhaps perfectly suited to this application.

 

Those are just a few things in tech news that caught my attention this week. This will be a fun post to look back on in a few years to see how I did.

The post Some Future Tech Possibilities first appeared on NeuroLogica Blog.


Neuralink Implants Chip in Human

Tue, 01/30/2024 - 2:18pm

Elon Musk has announced that his company, Neuralink, has implanted its first wireless computer chip into a human. The chip, which they plan on calling Telepathy (not sure how I feel about that), connects through 64 thin, hair-like threads of electrodes, is battery powered, and can be recharged wirelessly. This is exciting news, but of course it needs to be put into context. First, let's get the Musk thing out of the way.

Because this is Elon Musk, the achievement gets more attention than it probably deserves, but also more criticism. It gets wrapped up in the Musk debate: is he a genuine innovator, or just an exploiter and showman? I think the truth is a little bit of both. Yes, the technologies he is famous for advancing (EVs, reusable rockets, digging tunnels, and now brain-machine interfaces) all existed before him (at least potentially) and were advancing without him. But he did more than just gobble up existing companies and people and slap his brand on them (as his harshest critics claim). Especially with Tesla and SpaceX, he invested his own fortune and provided a specific vision which pushed these companies through to successful products, and very likely advanced their respective industries considerably.

What about Neuralink and BMI (brain-machine interface) technology? I think Musk’s impact in this industry is much less than with EVs and reusable rockets. But he is increasing the profile of the industry, providing funding for research and development, and perhaps increasing the competition. In the end I think Neuralink will have a more modest, but perhaps not negligible, impact on bringing BMI applications to the world. I think it will end up being a net positive, and anything that accelerates this technology is a good thing.

So – how big a deal is this one advance, implanting a wireless chip into a human brain? Not very, at least not yet. The mere fact of implanting a chip is not a big deal. The real test is how long it lasts, how well it maintains its function over time, and how well it functions in the first place – none of which has yet been demonstrated. Also, other companies (although only a few) are already ahead of the game.

Here is a list of five companies (in addition to Neuralink) working on BMI technology (I have written about many of them before). Synchron is taking a different approach with their stentrodes. Instead of implanting electrodes in the brain itself, which is very invasive, they place their electrodes inside veins within the brain, which gets them very close to brain tissue and, critically, inside the skull. They completed their first human implant in 2022.

Blackrock Neurotech has a similar computer chip with an array of tiny electrodes that gets implanted in the brain. They are farther along than Neuralink and are the favorite to have a product available for use outside a research lab setting. Clearpoint Neuro is working with Blackrock to develop a robot that can implant their chips automatically, with the precision necessary to optimize function. They are also developing their own BMI applications, as well as implants for drug delivery to brain tissue.

Braingate has also successfully implanted arrays of electrodes into humans that allow them to communicate wirelessly with external devices and control computer interfaces or robotic limbs.

These companies are all focusing on implanted devices. There is also research into using scalp surface electrodes for a BMI connection. The advantage here is that nothing has to be implanted; the disadvantage is that the quality of the signal is much lower. Which option is better depends on the application. Neurable is working on an external BMI that you wear like headphones. They envision this being used like a virtual reality application, but with neuro-reality (VR through a neurological connection, rather than goggles).

All of these advances are exciting, and I have been following them closely and reporting on them over the years. The Neuralink announcement adds them to the list of companies who have implanted a BMI chip into a human, a very exclusive club, but does not advance the cutting edge beyond where it already is.

What has me the most excited recently, actually, is advances in AI. What we need for fairly mature BMI technology, the kind that can allow a paralyzed person to communicate effectively or control robotic limbs, is an implant (surface electrodes are not enough for these applications) that has many connections, is durable, is self-powered (or easily recharged), does not damage brain tissue, and maintains a consistent connection (does not move or migrate). We keep inching closer to this goal. The stentrode may be a great intermediate step, good enough for decades until we develop really good implantable electrodes, which will almost certainly have to be soft and flexible.

But as we slowly and incrementally advance toward this goal (basically the hardware), we also have to keep an eye on the software. I had thought that the software had basically peaked and was more than advanced enough for what it needed to do: translate brain signals into what the person is thinking, with enough fidelity to provide communication and control. But recent AI applications are showing how much more powerful this software can be. This is what AI is good at: taking lots of data and making sense of it. The same way it can make a deep fake of someone's voice, or recreate a work of art in the style of a specific artist, it can take the jumble of blurry signals from the brain and assemble them into coherent speech (at least that's the goal). This essentially means we can do much more with the hardware we have.
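As a toy illustration of what that decoding software does, here is a minimal sketch in Python. It uses synthetic multi-channel "neural" features and a simple logistic regression classifier in place of the far more powerful deep-learning models these companies actually use; the channel count, vocabulary, and noise level are all invented for illustration and do not reflect any company's pipeline.

```python
# Minimal sketch of neural signal decoding: map multi-channel "brain" features
# to intended words. Synthetic data and a simple classifier stand in for the
# large AI models real BMI systems use.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_channels = 64                      # hypothetical electrode count
vocab = ["yes", "no", "water", "help"]

# Simulate noisy firing-rate features: each intended word has its own pattern.
patterns = rng.normal(size=(len(vocab), n_channels))
labels = rng.integers(0, len(vocab), size=2000)
features = patterns[labels] + rng.normal(scale=2.0, size=(2000, n_channels))

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

# The "software" step: learn to translate blurry signals into intended output.
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = decoder.score(X_test, y_test)
print(f"Decoded word accuracy on held-out trials: {accuracy:.2f}")
print("Example decoded word:", vocab[decoder.predict(X_test[:1])[0]])
```

The point of the toy example is that better decoding software squeezes more usable information out of the same noisy electrodes, which is exactly why AI advances matter even when the hardware stands still.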

This is the kind of thing that might make the stentrode the leader of the pack: it sacrifices a little resolution for being much safer and less invasive. But that sacrifice may be more than compensated for by a good AI interface.

The bottom line is that this industry is advancing nicely. We are on the cusp of going from the laboratory to early medical applications. From there we will go to more advanced medical applications, and then eventually to consumer applications. It should be exciting to watch.

 

The post Neuralink Implants Chip in Human first appeared on NeuroLogica Blog.


Controlling the Narrative with AI

Mon, 01/29/2024 - 5:08am

There is an ongoing battle in our society to control the narrative, to influence the flow of information, and thereby move the needle on what people think and how they behave. This is nothing new, but the mechanisms for controlling the narrative are evolving as our communication technology evolves. The latest additions to this technology are large language model AIs.

“The media”, of course, has been a large focus of this competition. On the right there are constant complaints about the “liberal bias” of the media, and on the left there are complaints about the rise of right-wing media, which they feel is biased and radicalizing. The culture wars focus mainly on schools, because schools not only teach facts and knowledge but convey the values of our society. The left views DEI (diversity, equity, and inclusion) initiatives as promoting social justice, while the right views them as brainwashing the next generation with liberal propaganda. This is an oversimplification, but it is the basic dynamic. Even industry has been targeted by the culture wars – which narratives are specific companies supporting? Is Disney pro-gay? Which companies fly BLM or LGBTQ flags?

But increasingly “the narrative” (the overall cultural conversation) is not being controlled by the media, the educational system, or marketing campaigns. It's being controlled by social media. This is why, when the power of social media started to become apparent, many people panicked. Suddenly it seemed we had ceded control of the narrative to a few tech companies, who had apparently decided that destroying democracy was a price they were prepared to pay for maximizing their clicks. We now live in a world where YouTube algorithms can destroy lives and relationships.

We are not yet done panicking about the influence of social media and the tech giants who control it when another player has crashed the party: artificial intelligence, chatbots, and the large language models that run them. This is an extension of the social media infrastructure, but it is enough of a technological advance to be disruptive. Here is the concern: by shaping the flow of information to the masses, social media platforms and AI can have a significant effect on the narrative, enough to create populist movements, alter the outcome of elections, or make or destroy brands.

It seems likely that we will increasingly be giving control of the flow of information to AI. Now, instead of searching on Google for information, you can have a conversation with ChatGPT. Behind the scenes it's still searching the web for information, but the interface is radically different. I have documented and discussed here many times how easy human brains are to fool. We have evolved circuits in our brains that construct our perception of reality and make certain judgements about how to do so. One subset of these circuits is dedicated to determining whether something out there in the world has agency (is it a person or just a thing?). Once the agency algorithm determines that something is an agent, that judgement connects to the emotional centers of our brain. We then feel toward that apparent agent, and treat it, as if it were a person. This extends to cartoons, digital entities, and even abstract shapes. Physical form, or the lack thereof, does not seem to matter, because it is not part of the agency algorithm.

It is increasingly well established that people respond to even a half-way decent chatbot as if that chatbot were a person. So now when we interface with “the internet”, looking for information, we may not just be searching for websites but talking with an entity – an entity that can sound friendly, understanding, and authoritative. Even though we may know perfectly well that this is just an AI, we emotionally fall for it. It's just how our brains are wired.

A recent study demonstrates the subtle power that such chatbots can have. The researchers asked subjects to talk with GPT-3 about Black Lives Matter (BLM) and climate change, but gave them no other instructions. They also surveyed the subjects' attitudes toward these topics before and after the conversation. Those who scored negatively toward BLM or climate change ranked their experience half a point lower on a five-point scale (which is significant), so they were unhappy when the AI told them things they did not agree with. But, more importantly, after the interaction their attitudes moved 6% in the direction of accepting climate change and the BLM movement. We don't know from this study whether the effect is enduring, or whether it is enough to affect behavior, but at least temporarily the chatbot did move the needle a little. This is a proof of concept.

So the question is – who controls these large language model AI chatbots, which we are rapidly making the gatekeepers of information on the internet?

One approach is to make it so that no one controls them (as much as possible). Through transparency, regulation, and voluntary standards, the large tech companies can try to keep their thumbs off the scale as much as possible, and essentially "let the chips fall where they may." But this is a problem, and early indications are that this approach likely won't work. The problem is that even if they are trying not to influence the behavior of these AIs, they can't help but have a large influence on them through the choices they make about how to program and train them. There is no neutral approach. Every decision has a large influence, and they have to make choices. What do they prioritize?

If, for example, they prioritize the user experience, well, as we see in this study, one way to improve the user experience is to tell people what they want to hear, rather than what the AI determines is the truth. How much should the AI caveat what it says? How authoritative should it sound? How thoroughly should it source whatever information it gives? And how should it weight the different sources it is using? Further, we know that these AI applications can "hallucinate" – just make up fake information. How do we stop that, and to what extent (and how) do we build fact-checking processes into the AI?

These are all difficult and challenging questions, even for a well-meaning tech company acting in good faith. But of course, there are powerful actors out there who would not act in good faith. There is already deep concern about the rise of TikTok, and the ability of China to control the flow of information through that app to favor pro-China news and opinion. How long will it be before ChatGPT is accused of having a liberal bias, and ConservaGPT is created to combat that (just like Conservapedia, or Truth Social)?

The narrative wars go on, but they seem to be increasingly concentrated in fewer and fewer choke points of information. That, I think, is the real risk. And the best solution may be an anti-trust approach: make sure there are lots of options out there, so that no single option, or small group of options, dominates.

The post Controlling the Narrative with AI first appeared on NeuroLogica Blog.

