If we are going to have an enduring presence on either the Moon or Mars, or anyplace off of Earth, we will need to grow food there. It is simply too expensive, inconvenient, and fragile to be entirely dependent on food from Earth. In fact, any off-Earth habitat will need to be able to recycle most if not all of its resources. You basically need a reliable source of energy, sufficient food, water, and oxygen (consumables) to sustain all inhabitants, and the ability to endlessly recycle that food, water, and oxygen.
The ISS has achieved 98% recycling of water, which NASA claims is the threshold for sustainability of long space missions. The ISS also recycles about 40% of its oxygen. However, the ISS grows none of its food. It is all delivered from Earth, with a six-month supply kept aboard the ISS. There are experiments to grow plants on the ISS, and these have been successful, but they are not a significant source of nutrition for the astronauts.
Doing the same on the Moon is not practical for long missions, although we will certainly be doing this for a time. But the goal, if we are to have a lunar base as NASA hopes (NASA plans a lunar base at the Moon’s south pole by 2030), is to grow food on the Moon (and eventually on Mars). On the ISS the big limiting factor is microgravity. The Moon has lower gravity than Earth, but it has some gravity, and so that will likely not be a major problem, especially since we can grow plants on the ISS. We can also grow plants hydroponically pretty much anywhere, and I suspect this will happen on any lunar base. But a fully hydroponic system has its limits as well.
Hydroponics on the Moon would be challenging for several reasons. First, it is energy intensive, and energy may be at a premium on a lunar base, especially early on. Second, it requires a precise balance of nutrients in the water, and those nutrients would have to be sourced from Earth. So it doesn’t really solve the problem of dependence on Earth. And third, hydroponics requires a lot of equipment which would have to be shipped from Earth. We could theoretically leach nutrients from lunar regolith, and this might help a bit, but it is also energy intensive and would not be a source of nitrogen.
Therefore – NASA and others are looking into the possibility of growing plants in lunar regolith. This could have multiple advantages. It requires much less equipment, energy, and water than hydroponics. Many of the nutrients would come from the regolith itself. This would reduce dependence on supplies from Earth. A soil-based system can also more easily recycle nutrients from food waste and human waste. Likely, a lunar base would have a hybrid hydroponic and soil-based system. As a side benefit, if such a base grew enough food to feed its human inhabitants, this would also recycle CO2 and produce more than enough oxygen for them to breathe. In fact, they would have to figure out something to do with the extra oxygen to keep it from building up (likely not a problem – oxygen has many uses).
The major hurdle to growing food in lunar regolith is that – well, you can’t. Plants do not grow well in lunar regolith. It lacks nitrogen and other nutrients, it lacks organic matter, and it contains toxic compounds. Experimentally, plants will not grow sufficiently in simulated lunar regolith. But we can treat the regolith to turn it into soil that can grow plants, and that is the focus of the current study mentioned in the headline. Scientists used simulated regolith, modified by adding organic matter (vermicompost) created by red wiggler earthworms composting organic waste, and were able to grow chickpeas in the resulting soil. They tried various mixtures, and found that 75% regolith to 25% vermicompost was the limit – more than 75% regolith and the plants would not survive. They also coated the chickpea seeds with arbuscular mycorrhizal fungi before planting. These fungi are symbiotic, increasing the uptake of some nutrients while decreasing the uptake of some toxins like heavy metals.
The experiment was considered a success – the chickpea plants grew, survived, and produced chickpeas. However, the researchers have not yet tested the chickpeas to see if they are safe and edible. They need to be tested for any toxic compounds. This is also not the first such study – there have been dozens of others. They generally show that crops will grow in modified simulated Martian and lunar regolith. But questions remain about how good the simulated regoliths are.
There has also been one study using actual unmodified lunar regolith (brought back by the Apollo missions). In this study the plants grew, but showed signs of severe stress and were morphologically altered. That they grew at all, however, is amazing and encouraging.
What does all this mean for the future of lunar and Martian bases? They will very likely include some growing of food in modified regolith. The implication of the research is that we can likely develop a self-sustaining system in which plants are grown in modified soil using mostly native regolith. These plants produce food and oxygen while using CO2. The soil can then be fertilized using compost from any organic waste generated by the base, including humanure. You can even recycle urine in order to source nitrogen. In short, we can envision a system in which everything is recycled to locally produce food and air. We can also recycle 98% of the water in the system, perhaps eventually even more. You just need to kickstart the system with initial resources, and maybe need to top them off from time to time, but otherwise the system is self-sustaining.
It is also likely that the more the lunar or Martian regolith is used to grow food, the more it will look like Earth soil. The percentage of organic matter will increase, it will develop an ecosystem of microorganisms, and any toxins will be leached out over time. This high quality soil can then be used to expand the farm, and generate more modified soil from regolith.
It is also likely that such a lunar farm would exist underground, probably within a lava tube. This means that all the light will be artificial, but that’s not a big problem – we can do grow lights. Having a farm under a dome on the surface is likely not worth it. This would provide free sunlight, but only half the time, and not in a typical circadian cycle – roughly 14 days of sunlight followed by 14 days of darkness. It would also be susceptible to radiation and micrometeorites. Better to be in the safety of a lava tube, deep underground, and just use grow lights.
Finally, one factor I have not mentioned yet is the potential to alter the plants themselves to adapt them to growing on the Moon, or on Mars or on a space station. Through some combination of cultivation and genetic engineering, we may be able to adapt crops to the lower gravity and the modified lunar soil. This could optimize productivity, safety, and nutrition.
While there is a lot of work to be done, the research so far shows that farming the Moon or Mars is feasible, which is good if we plan to have long term bases on either.
The post Scientists Grow Chickpeas In Lunar(ish) Soil first appeared on NeuroLogica Blog.
A recent study shows pretty clearly that high schoolers benefit from a little extra sleep. We will get to the study in a bit, but first I want to note that this information is not new. Teenagers tend to stay up late, and yet we make them get up super early to be at class, often by 7:00 AM. This is not good for their health or their learning. So why do we do it?
The primary reason is logistical, which is tied to cost. School systems have tiered start times for elementary, middle school, and high school because this allows them to use the same fleet of buses and drivers for all three. Starting high school later, at the same time as middle school, would mean increasing the size of the fleet. There are other stated reasons, but honestly I think this is the real reason and everything else is post-hoc justification. The other reasons are really tradeoffs that benefit some people but not others. For example, a parent with a long commute can drop off their high schooler on the way to work. There is more time for after-school clubs, sports, and jobs. And some older teens may get home early to watch their younger siblings until their parents get home.
This all points to a main reason our civilization is frustratingly sub-optimal (to be polite). The default is to follow the pathway of least resistance – everyone just does what’s best for themselves, with people in power doing their best to solidify more power, with vested interests putting the most consistent effort into making the system work for their narrow interest. What is often lacking is any kind of systemic planning, and when that does occur (even with the best intentions) the law of unintended consequences often results in a net wash or even detriment. The world is complex, and we are just not very good at managing that level of complexity. What we need are institutions that can accumulate evidence-based institutional knowledge to incrementally make things work better. But that’s a lot of work, and it’s too easy for vested interests to sabotage such efforts.
I’m not trying to be nihilistic – nihilism is part of the problem, and is often used as a weapon by those vested interests to short circuit attempts to make things work better for everyone. But we have to understand the nature and scope of the problem, and we need the energy and dedication to sustain efforts to make things work better. Such efforts can work, and historically they have made things better. But it’s a constant struggle.
OK, back to the study. In this study they gave students the option to start class up to an hour later. For example, school would officially start at 8:30, but an optional module was offered at 7:30 for those who wanted to come early and end early. They found:
“Under the flexible model, 95% of students used the later-start option. The median SST was delayed by 38 minutes (n = 711, β = .57, 95% confidence interval [.53, .62], p < .001, R2β = .52), with corresponding significant delays in wake times and increased sleep duration on school days. Among the paired subsample, SST delay was significantly associated with increased school day sleep duration (n = 205, β = .51 [.05, .94], p = .03, R2β = .02). No worsening was observed. Improvements included reduced problems falling asleep, fewer students with clinically low health-related quality of life, and higher scores in mathematics and English.”
Now that I am retired I have personally experienced (yes, this is just anecdotal) the benefits of sleeping in longer. I no longer even set an alarm – I wake up when I feel like it. I am still working basically full time doing all my science communication activities, but mostly on my own schedule. My sleep quality and daytime alertness have significantly improved. I highly recommend it. But more importantly – the evidence clearly shows that this is generally true – being able to sleep in longer results in better sleep and performance.
So it seems like a no-brainer – why can’t we do this? I think the key here is flexibility, which can be paired with increased flexibility at work, especially for parents. Flexible work start times and the ability to work from home, even if only 1-2 days a week, result in a huge improvement in life satisfaction. Then families will have the ability to make their schedules work. Let’s prioritize sleep, health, and educational effectiveness first, and make the system work for these goals. It makes no sense for a school system to sacrifice the well-being and education of its own students in order to meet its own logistical needs.
The obvious response to this question is – well, it’s all about money. We have to be realistic. School systems operate with limited budgets and have to make the most of the resources they have. If they have to maintain a larger bus fleet, where will that money come from? I get it. This is reality. My question is – who made this decision? Did we as a society, or even just the affected parents, make this decision collectively, with adequate information to understand its implications? We may just have to accept the fact that running an effective school system is more expensive than we might want it to be, and that cutting costs in this way is simply not an acceptable option.
If we prioritize the health and education of students, I think we will find there are other elements of the system that can accommodate. This is where municipal planning becomes even more integrated. Investing in public transportation and subsidizing it for students, for example, will give students more options and reduce the strain on a dedicated school busing system. Facilitating carpooling among students is another option. More parental flexibility helps. Make schools more local and walkable/bikeable, and organize safe group walks to and from school. Optimize and disperse drop-off areas to limit bottlenecks and reduce congestion.
This requires thoughtful planning, but mostly an unwillingness to simply sacrifice students to simplify logistics and reduce costs.
The post Flexible School Start Time first appeared on NeuroLogica Blog.
The news is abuzz with talk of a potential universal respiratory vaccine. It’s definitely interesting research, but may not be what you think. In this case, the reporting has been quite good on the whole, but the headlines can be misleading if you are not deeply steeped in the complexities of mammalian immunity. Let me start with the biggest caveat – this is a mouse study. This is therefore encouraging pre-clinical research, but we are still years away from translating this into an actual vaccine. Also, most interventions that are encouraging at the animal stage don’t make it through human testing. So don’t expect any revolution based on this treatment anytime soon. Having said that – there is great potential here.
To understand how this new approach works, let’s review some basics of immunity. (Note – the immune system is incredibly complex, and I can only give a very superficial summary here, but enough to understand what’s going on.) Mammalian immune systems have two basic components, innate immunity and adaptive immunity. The adaptive immune system is probably what most people think about when they think about the immune system and vaccines. Adaptive immunity targets and recognizes specific antigens (such as proteins) on pathogens like viruses, bacteria, or fungi. Antibodies attach to these antigens, flagging them to be targeted by immune cells like macrophages which then eat them. The macrophages in turn display the antibody-flagged antigens on their surface, triggering a greater and more specific reaction to those specific antigens. Adaptive immunity is considered slow (it takes days to ramp up), specific (it targets specific antigens on specific pathogens) and durable (it has memory, and will react more quickly and robustly to the same pathogen in the future).
By contrast, the innate immune system is fast, non-specific, and short-lived with no memory. The innate immune system consists of physical barriers, like skin and mucosa, and immune cells that target pathogens based on broad patterns that are not learned but are innate (hence the name). There are Toll-like receptors (TLRs – the name Toll comes from the German for “fantastic”, allegedly said by a researcher upon discovery). The Toll gene was first discovered in fruit flies and then similar genes were later discovered in mammals, hence “Toll-like”. TLRs detect pathogen-associated molecular patterns (PAMPs), which are highly conserved features of types of pathogens. In other words – a TLR might recognize a snippet of RNA as a pattern typical of RNA viruses, or proteins that tend to occur on pathogenic bacteria. “That looks like an RNA virus, so let’s attack it.”
While these are distinct and complementary parts of the immune system, they are also highly tied together. Components of the innate immune system trigger the adaptive immune system, which in turn stimulates innate immunity. In fact, many traditional vaccines contain adjuvants which stimulate innate immunity in order to boost adaptive immunity.
The new vaccine (technical name – GLA-3M-052-LS+OVA), a nasal spray given in three doses to the mice being studied, stimulates innate immunity, not adaptive immunity. Normally, after exposure to a pathogen or even an allergen, innate immunity will be heightened for a few days, then return to normal. The nasal vaccine extends this heightened innate immunity in the lungs and respiratory system for three months. It does this with synthetic molecules that bind to TLRs, tricking them into responding as if a pathogen were present. The vaccine also contains a protein called ovalbumin, which stimulates T cells of the adaptive immune system, keeping them resident in the tissue. These T cells help maintain the heightened state of activity of the innate immune system. According to the authors: “Protection was mediated by persistent ovalbumin-specific CD4+ and CD8+ memory T cells that imprinted alveolar macrophages (AMs), enhancing antigen presentation and antiviral immunity.”
The trick of stimulating innate immunity was partly borrowed from the tuberculosis BCG vaccine, which works by both triggering adaptive immunity and stimulating the innate immune system. Researchers studied how the BCG vaccine accomplishes this and applied that knowledge to this new vaccine.
In the study the researchers compared mice treated with three doses of the nasal vaccine to untreated mice and found that the treated mice were protected for at least three months from “SARS-CoV-2 and Staphylococcus aureus. In addition, the vaccine protected mice from other viruses (SARS-CoV-2, SARS, SCH014 coronavirus), bacteria (Acinetobacter baumannii), and allergens.”
In the best-case-scenario where this vaccine technology is safe and effective in people, what can we expect? Well, I don’t think this would replace any traditional vaccines based on adaptive immunity. Like the two halves of the immune system itself, it will likely be complementary to traditional vaccines. Traditional vaccines can provide years and sometimes decades of specific protection from common pathogens, and there is no substitute for that. Also, this vaccine works on respiratory infections only, although it may be possible to adapt this approach to other types of infection.
What an innate immunity-based vaccine provides is a good first line of defense against an outbreak, epidemic, or seasonal infection. This would require many millions of doses (or even billions, in the context of a pandemic) being available at short notice to provide several months of resistance to an entire population at the beginning of an outbreak or a seasonal infection (like the flu). It remains to be seen if this vaccine reduces the risk of spread or just the severity of infection. If it reduces spread (which is plausible, if viruses, for example, don’t have a chance to reproduce in large numbers), it could short circuit many respiratory epidemics.
Imagine if this vaccine were available at the beginning of COVID. It could have provided significant protection, reducing death and morbidity, and allowed us time to study the virus and develop adaptive vaccines. That is one of the benefits – it provides broad spectrum non-specific defense. We don’t necessarily need to know anything about the pathogen for this vaccine to work, so it is ideal for novel respiratory outbreaks. It also means we don’t need to track new strains of a virus, and that pathogens cannot easily adapt to this immunity by simply mutating their proteins.
There is a lot of research ahead to study the safety and effectiveness of this vaccine in humans. Even once a vaccine is approved, more research is needed to study long term effectiveness and potential side effects. One thing to consider, for example – there is likely a reason that evolutionary forces did not favor us having our innate immunity on high alert at all times. There is often a downside to immune activity, which is mostly why you feel like crap during an infection. It’s not the bug, it’s your body’s reaction to the bug. The worst-case scenario is that this approach increases the risk of autoimmunity.
Having said that – we are not living in the world in which we evolved. We are living in a globally connected world of over 8 billion people, often in close proximity to potential animal reservoirs of pathogens. The selective pressures are likely now different than they were when we were living in largely isolated tribes. But we don’t have to wait for evolution to work its slow grim task, we can tweak our immune systems with science and technology to provide some enhanced protection when and where we need it.
The post Universal Respiratory Vaccine first appeared on NeuroLogica Blog.
Fascination with UFOs (unidentified flying objects) is endless. I get it – I was into the whole UFO narrative when I was a child, and didn’t shed it until I learned science and critical thinking and filtered the evidence through that lens. I credit Carl Sagan for initiating that change. In his excellent series, Cosmos (still worth a watch today), he summarized the skeptical position quite well. To paraphrase – after decades, there isn’t a single hard piece of evidence, not one unambiguous photo or video. He gave a couple of examples of evidence (widely cited at the time) that were completely useless. Now – four decades later – the situation is the same. The evidence, in a word, is crap. It is exactly what you would expect (if you were an experienced skeptic) from a psychocultural phenomenon, without any evidence that forces us to reject the null hypothesis.
So why does belief in UFOs (meaning the belief that some of them are alien spacecraft) not only persist but experience a resurgence? Ostensibly this was triggered by the release of the Pentagon videos. I have already dealt with them – they are just more low-grade evidence. In fact, as I have argued, the low-grade quality of the images is the phenomenon. UFOs, or UAPs as the Pentagon now calls them, are not an alien phenomenon, they are an “unidentified” phenomenon. Mick West has arguably done the most thorough analysis of these videos. He convincingly shows how they are just misidentified birds, balloons, and planes. If you look at the videos you will see that they are blobs and shadows and lights. They are not clear and unambiguous images of spacecraft. Believers must infer that they are spacecraft from their apparent properties – and that is where the technical analysis comes in. A sprinkle of motivated reasoning, or simply a lack of expertise, is enough to convince yourself that these are fast-moving large objects. But a better analysis (again, see Mick West above) shows this is not the case. They are small, moving with the wind, or flying at the speed of a bird.
But the US military is taking UAPs seriously. This is actually not a surprise – unidentified anomalous phenomena might be Chinese spy balloons, or Russian fighter planes. This has always been at the core of the government’s interest. It is now policy to scramble fighter jets for visual confirmation of anything not identifiable on radar. And now that they are doing that – 100% of UAPs so far have been identified as mundane objects, mostly balloons. In fact, the US military is happy to encourage public belief in “UFOs” because it is a convenient cover for their own top secret projects. It is not a coincidence that UFO sightings tend to cluster around military bases.
Another factor in the recent upsurge in interest is the media. The media, of course, loves stories that generate a lot of interest, and UFOs fit the bill. However, they also know that UFO stories are fringe and often based on rumor or testimony from dubious sources, so they are often relegated to “fluff” stories. They are like the ghost stories that circulate every Halloween – journalists know they are nonsense, but they make great headlines. But now – the media feels they have permission from the US government to take UFO stories seriously, so they gleefully are. Here is an example from the New York Times. The author, regular columnist Ross Douthat, has four questions for the Trump administration. Do they have more videos, why are there so many apparent whistle-blowers, why are some US senators calling for disclosure, and is the US government pursuing research into UFO experiencers and paranormal phenomena (which it has in the past)?
These sound like serious questions, and so a serious journalist can write a column about them without looking silly. But the thing is – we already have the answers to these questions. The Pentagon has done a thorough analysis of all the evidence the US government has, and concluded – there is no evidence of aliens. As predicted, the whole thing is a giant nothing-burger. Except for the newer videos, most of the evidence is old and long-debunked nonsense by the same cast of characters that have been peddling this pseudoscience for decades. Why are people interested in this – because other people are interested in it. But whenever you dig down, there is simply nothing there. I have been following the UFO story for literally 50 years, and nothing has changed.
This brings me to another reason we are seeing a resurgence in interest in UFOs – because that is the natural cycle. Each generation, since the 1940s, has a fascination with UFOs. This lasts for a decade or so, then wanes for a decade or so, then comes back. This is because people get hyped up about some apparently new evidence or claim, or a movie, or now some social media video, and we get another round of people learning about UFOs for the first time. This interest lasts for a while, with many people feeling as if some big disclosure is right around the corner. They see the recent activity as a trend, rather than just as the cycle it is, and expect some big government announcement, or the proverbial aliens landing on the White House lawn.
But of course – nothing happens. Eventually, nothing becomes boring. There are always die-hards who keep the flames going, or turn their UFO interest into a job, but public interest fades and turns to something else. UFO enthusiasts then wait for another generation to forget how boring the whole thing is, or who never experienced it before, and then fan the flames back into fire, which will also eventually burn itself out.
Meanwhile, skeptics like me, who have been at this for a while, see it coming a mile away. We can immediately respond because we have seen it all before – it’s the same tired arguments and the same lame evidence. But we still have to be careful not to seem dismissive. We are not – we’ve just been here before so we have a head start. Also, we (collectively – there is a lot of dividing and conquering going on) do the detailed analysis, the hard work necessary to demonstrate convincingly that whatever new evidence is being put forward is what it is.
UFO believers reading this blog, at this point, are likely to leave in the comments – “well, what about this evidence?” Hit me. Give me your best evidence. I am happy to do a deep dive and see what we got. But you should first look for skeptical analysis of the claim – be your own most dedicated skeptic first. If you still think the evidence is worthwhile, send it my way. (And don’t tell me to read thousands of pages of low-grade evidence – give me your best evidence.) Decades of making this challenge have not resulted in anything (for example), but I am willing to keep going. Also, keep in mind, if aliens were visiting the Earth, I would want to know, and if the evidence were compelling, I would have every motivation in the world to support and promote that conclusion. And I would have much to lose if I wrongfully denied a genuine phenomenon – arguably the most interesting and impactful phenomenon in human history. I would not want to be on the wrong side of that story. So yeah – convince me.
But you should be open to the possibility that you are wrong, that all the evidence is best explained as a psychocultural phenomenon without any need to invoke aliens. I strongly believe that is the case, and it would take compelling evidence to convince me otherwise. Such evidence does not exist, because if it did, we wouldn’t need to be debating this anymore. That is why believers have to invoke conspiracy theories or make the absurd claim that aliens are just teasing us with the possibility of their existence but withhold any solid evidence. Maybe that worked in the 1950s, but 75 years later it’s increasingly untenable.
The post Why UFOs Are Back first appeared on NeuroLogica Blog.
It’s not easy being a futurist (which I guess I technically am, having written a book about the future of technology). It never was, judging by the predictions of past futurists, but it seems to be getting harder as the future is moving more and more quickly. Even if we don’t get to something like “The Singularity”, the pace of change in many areas of technology is speeding up. Actually it’s possible this may, paradoxically, be good for futurists. We get to see fairly quickly how wrong our predictions were, and so have a chance at making adjustments and learning from our mistakes.
We are now near the beginning of many transformative technologies – genetic engineering, artificial intelligence, nanotechnology, additive manufacturing, robotics, and brain-machine interface. Extrapolating these technologies into the future is challenging. How will they interact with each other? How will they be used and accepted? What limitations will we run into? And (the hardest question) what new technologies not on that list will disrupt the future of technology?
While we are dealing with these big questions, let’s focus on one specific technology – controllable robotic prosthetics. I have been writing about this for years, and this is an area that is advancing more quickly than I had anticipated. The reason for this is, briefly, AI. Recent advances in AI allow for far better brain-machine interface control than was previously achievable, because modern AI is really good at picking out patterns from tons of noisy data. This includes picking out patterns in EEG signals from a noisy human brain.
This matters when the goal is having a robotic prosthetic limb controlled by the user through some sort of BMI (from nerves, muscles, or directly from the brain). There are always two components to this control – the software driving the robotic limb has to learn what the user wants, and the user has to learn how to control the limb. Traditionally this takes weeks to months of training, in order to achieve a moderate but usable degree of control. By adding AI to the computer-learning end of the equation, this training time is reduced to days, with far better results. This is what has accelerated progress by a couple of decades beyond where I thought it would be.
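To give a sense of what that pattern-recognition step looks like, here is a minimal sketch (in Python, using NumPy and scikit-learn) of decoding an intended movement from noisy, EEG-like features. The data is entirely synthetic and the feature choices are illustrative assumptions, not the pipeline of any actual prosthetic system:

```python
# A toy example of decoding movement intent from noisy, EEG-like features.
# Everything here is synthetic and illustrative, not a real BMI pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 8

# Two intended movements (say, open vs. close hand), encoded as a small
# mean shift on a few channels, buried in a lot of noise.
labels = rng.integers(0, 2, n_trials)
features = rng.normal(0.0, 1.0, (n_trials, n_channels))
features[labels == 1, :3] += 0.8

decoder = LinearDiscriminantAnalysis()
scores = cross_val_score(decoder, features, labels, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

The better the machine side gets at pulling signal out of noise like this, the less of the training burden falls on the user, which is the shift AI has enabled.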
But it turns out this AI-assisted control can be a double-edged sword. To understand why we need to quickly review how the human brain adapts to artificial bodies or body parts. The short answer is – quite well. The reason is that our sense of ownership and control is all a constructed illusion of the brain in the first place. Circuits in our brain create the subjective sensation that each part of our body is part of us, that we own that body part (the sense of ownership) and that we control that body part (a sense of agency). We know about this largely from studying patients who have damage in one or more of these circuits that causes them to feel like a body part is not theirs or that they don’t control it.
This means that this circuitry can be hacked to make the brain create the sensation that you own and control a robotic or virtual limb. Luckily, this hacking is actually pretty simple. The brain compares different sensory inputs to see if they match, while also comparing motor intentions with motor outputs. So – if you see and feel a limb being touched, your brain will interpret that as you owning the limb. It can be that simple. If you intend to make a movement, and you see and feel the limb make that movement, then you feel as if you control the limb. So a robotic limb with some sensation, with some haptic feedback, and that does what we want it to do, will feel as if it is naturally part of us. The research is moving now in this direction, to close these loops as much as possible.
This, however, is where we run into a snag with AI-controlled robotic limbs. Part of the advance is that AI can add fine motor control to an artificial hand, say. Briefly, robotic movement tends to fall into one of three categories. You can directly control the robot, the robot can carry out a pre-programmed sequence of movements, or the robot can determine its movements in real time based on sensory feedback. When seeing a robotic demonstration you should always ask – what type of control is being demonstrated?
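As a rough illustration of those three categories, here is a minimal sketch in Python; the RobotArm class and its methods are hypothetical stand-ins for this example, not a real robotics or prosthetics API:

```python
# The three broad categories of robotic control described above, illustrated
# with a hypothetical (stand-in) robot arm interface.
from enum import Enum, auto

class ControlMode(Enum):
    DIRECT = auto()         # the user drives the limb moment to moment
    PREPROGRAMMED = auto()  # the limb plays back a stored movement sequence
    AUTONOMOUS = auto()     # the limb plans its own movement from sensor feedback

class RobotArm:
    def move_toward(self, target):
        print(f"moving toward {target}")

    def play_trajectory(self, waypoints):
        for waypoint in waypoints:
            self.move_toward(waypoint)

    def plan_from_sensors(self):
        return "grasp point detected by onboard sensors"

def execute(arm, mode, user_command=None, trajectory=None):
    if mode is ControlMode.DIRECT:
        arm.move_toward(user_command)             # decoded user intent drives the arm
    elif mode is ControlMode.PREPROGRAMMED:
        arm.play_trajectory(trajectory or [])     # canned, demo-style movement
    else:
        arm.move_toward(arm.plan_from_sensors())  # the arm decides for itself

execute(RobotArm(), ControlMode.DIRECT, user_command="the cup handle")
```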
For robotic limbs what we want is direct control of the robot. While this is advancing, it is still somewhat limited and clumsy. So we can refine the direct control by adding one or both of the other two types of control. This means to some extent the robotic limb is carrying out the desired movements of the user with internal control. This can greatly increase the functionality of the robotic limb, but it comes at a cost of the user’s sense of embodiment and agency. Imagine if your hand were executing movements all by itself. It would feel uncanny and unnerving.
This is a long windup to a new study which tries to address this issue. The researchers were looking at the effect of the movement speed of the AI-controlled robotic limb to see how that affected the user’s sense of ownership and agency. What they found was not surprising, but it is good to know that this variable matters and needs to be taken into consideration. They varied the execution time of an AI-controlled movement from 125 ms to 4 seconds. A moderate speed, about 1 second, resulted in the best sense of ownership and agency (or we can say the least interference with these senses). The further you got toward either extreme, the more the user felt an uncanny sense of unease, as if they did not own or control the robotic limb. This is a Goldilocks effect – too fast or too slow is no bueno, but just right results in a good outcome.
This result also makes sense from the perspective that prior neurological research shows that our brains also evaluate the world by how it moves. We separate agents from non-agents by how they move (the latter moves in an inertial frame while the former does not). Neurologists also know this because diseases that are movement disorders can often be diagnosed (and sometimes at a glance) by how the patient moves. Our brains are finely tuned to what constitutes normal human movement. Too fast or too slow, hypokinetic or hyperkinetic, and our brains immediately register that something is wrong.
So if we see our robotic limb moving at a normal human pace, doing what we want it to do (even though the fine movements are enhanced by AI) that can still be good enough for us to accept the limb as belonging to us and that we control it. There is likely also a Goldilocks zone here as well – too much AI control will break the illusion of control, while too little is of no use, but just right will be the best compromise between functionality and acceptance.
The nuances of neurological control of an AI-enhanced robotic limb through a brain-machine interface are one of those futurism problems that would have been difficult to anticipate.
The post The Future of AI-Powered Prosthetics first appeared on NeuroLogica Blog.
There are many ways in which our brains can be hacked. The brain is a complex, overlapping set of algorithms that evolved to help us interact with our environment in ways that enhance survival and reproduction. However, while we evolved in the natural world, we now live in a world of technology, which gives us the ability to control our environment. We no longer have to simply adapt to the environment, we can adapt the environment to us. This partly means that we can alter the environment to “hack” our adaptive algorithms. Now we have artificial intelligence (AI), which has become a very powerful tool to hack those brain pathways.
In the last decade chatbots have blown past the Turing Test – a type of test in which a blinded evaluator has to tell the difference between a live person and an AI through conversation alone. We appear to still be on the steep part of the curve in terms of improvements in these large language models and other forms of AI. What these applications have gotten very good at is mimicking human speech – including pauses, inflections, sighing, “ums”, and all the other imperfections that make speech sound genuinely human.
As an aside, these advances have rendered many sci-fi visions of the future quaint and obsolete. In Star Trek, for example, even a couple hundred years in the future computers still sounded stilted and artificial. We could, however, retcon this choice to argue that the stilted computer voices of the sci-fi future were deliberate, and not a limitation of the technology. Why would they do this? Well…
Current AI is already so good at mimicking human speech, including the underlying human emotion, that people are forming emotional attachments to them, or being emotionally manipulated by them. People are, literally, falling in love with their chatbots. You might argue that they just “think” they are falling in love, or they are pretending to fall in love, but I see no reason not to take them at their word. I’m also not sure there is a meaningful difference between thinking one has fallen in love and actually falling in love – the same brain circuits, neurotransmitters, and feelings are involved.
Researchers generally consider there are three neurological components to falling in love (lust, romance, attachment). There is sexual attraction and lust, mediated by estrogen and testosterone. There is the romantic feeling of being in love mediated by dopamine, serotonin and norepinephrine. During sex and other forms of physical intimacy endorphins are released which make us feel happy, and also oxytocin which is associated with feelings of attachment. Vasopressin is also involved, linked also to long term attachment and feelings of protectiveness. Do we experience the same biochemical reactions to interacting with AI? The data so far says yes.
In fact, this data goes back far before AI. Psychologists and neurologists have known for a long time that people can form emotional attachments to inanimate objects (objectophilia). This is the teddy bear phenomenon – even as young children we can form an attachment to an object and treat it as if it were a living thing, even if we know objectively it isn’t. This likely has to do with the cues that our brains use to divide up the world. We mentally categorize objects as either agents (things able to act on their own) or non-agents. For some reason we evolved algorithms to determine this that are not dependent on whether or not the object is actually alive, but simply on whether it moves and acts as if it is alive. If something acts like an agent, or even looks like an agent, our brains categorize it that way and link it to our emotional centers, so we feel things about it.
As one researcher put it – AI is a teddy bear on steroids. Chatbots are designed to act human, to push our buttons and make us feel as if they are agents, and therefore activate all the circuitry involved with how we feel about things our brain treats as agents. Not only that, but chatbots can be programmed to be friendly, available, a “good listener”, accommodating, and flattering. Some of these traits may be inadvertently (or deliberately, depending upon how cynical you’re feeling) triggering romantic feelings. There are, of course, apps that deliberately design AI chatbots to be sexual and romantic (come meet your new AI girlfriend), complete with alluring AI-generated imagery, all custom-made, if you wish.
So yes, people can really fall in love with an AI. Why not? That fits with everything we know about psychology and how our brains work. It is an extreme example of us adapting our environment to hack our own adaptive circuitry, to engineer feedback to maximally stimulate our reward circuitry. There are many ways in which we do this – porn, recreational drugs, roller coasters, gambling, ridiculously delicious foods. This can be harmless and fun, adding a little spice to our life, but pretty much every manifestation of hacking our reward circuitry is also associated with what we generally categorize as “addiction”. Addiction is one of those things that is hard to operationally define, because it is such a multifaceted spectrum, but in general something is considered an addiction when it becomes a net negative for your life. Addictions cause dysfunction in some way.
Can someone be “addicted” to their chatbot, whether the relationship is platonic or romantic? It seems so. But even short of an addiction, is it a good idea to spend a significant amount of time in an artificial relationship that mimics a human relationship, but is crafted to give you all the power and to be maximally flattering without demanding anything of you? Some psychologists are raising the alarm bells, worrying about a spoiler effect. Such AI relationships can potentially spoil us for relationships with living humans, who have their own wants, desires, flaws, and demands. Relationships are work – but why do all that work when you can have a submissive mate that is perfectly happy making the relationship entirely about you? Of course, there is the physical intimacy part, but there are partial ways around that as well. This does, however, raise the question about how important physical intimacy is compared to emotional intimacy. I suspect there is a lot of individual variation here.
Again, we seem to be running a massive social experiment with some very real concerns. This also does get me back to the sci-fi retcon – perhaps it would be better for chatbots to not be too human. They could still fulfill their functions (other, of course, than being a romantic companion or similar) if they had an affect that was obviously artificial. This is a form of transparency – you know when you are talking to an AI because they talk like an AI, and they interact in a way that is designed to be functional but specifically not provoke any emotions, or pretend to have emotions themselves. I suspect this would be a good thing for society, but also that nothing like this will happen on its own.
The post Falling In Love With AI first appeared on NeuroLogica Blog.
This post is only partly about uranium, but mostly about motivated reasoning – our ability to harness our reasoning power not to arrive at the most likely answer, but to support the answer we want to be true. But let’s chat about uranium for a bit. In the comments to my recent article on a renewable grid, one commenter referred to a blog post on Skeptical Science and quoted:
“Abbott 2012, linked in the OP, lists about 13 reasons why nuclear will never be capable of generating a significant amount of power. Nuclear supporters have never addressed these issues. To me, the most important issue is there is not enough uranium to generate more than about 5% of all power.”
This is the flip side, I think, to the misinformation about renewable energy I was discussing in that post. Let me say, I don’t think there is an objective right answer here, but my personal view is that the pathway to net zero that emits the least amount of carbon includes nuclear energy, a view that is in line with the IPCC. There is, however, still a lot of anti-nuclear bias out there, just as there is pro-fossil fuel bias, and pro-renewable bias, and every kind of bias. If you want to make a case for any particular source of power, there are enough variables to play with that you can make a case. However, factual misstatements are different – we should at least be arguing from the same set of verified facts. So let’s address the question – how much uranium is there?
There is no objective answer to this question. Why not? Because it depends on your definition. Most estimates of how much uranium there is in the world, in the context of how much is available for nuclear power, do not include every atom of uranium. They generally take several approaches – how much is in current usable stockpiles, how much is being produced by active mines, and how much is “commercially” available. That last category depends on where you draw the line, which depends on the current price of uranium as well as the value of the energy it produces. If, for example, we decided to price the cost of emitting carbon from energy production, the value of uranium would suddenly increase. It also depends on the technology to extract and refine uranium. The value of uranium is also determined by the efficiency of reactors.
Right now about 9% of the world’s electricity comes from nuclear, and about 19% of electricity in the US. At the current rate of energy production, currently producing uranium mines and known resources would last for about 90 years. This is better than most minerals needed to build renewable infrastructure. Right there, the “5%” figure quoted above is demonstrably wrong – we are already greater than 5%. Let’s say we doubled the amount of energy produced by nuclear power, and over that same time period there was a 50% increase in energy demand. Current supplies would then last for 45 years, and nuclear would be about 12% of world energy production. Forty-five years would be just fine – that would give us the time to further develop solar, wind, battery, geothermal, and pumped hydro technology. It is conceivable that we could have an all-renewable grid by then. It is even possible we might have fusion by then.
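To make that arithmetic concrete, here is the same calculation spelled out as a rough sanity check (the inputs are just the ballpark figures quoted above, not precise resource estimates):

```python
# Back-of-the-envelope check of the uranium supply figures quoted above.
# All inputs are the post's rough numbers, not precise resource estimates.
current_share = 0.09         # nuclear's approximate share of world electricity today
years_at_current_rate = 90   # years known resources last at today's rate of use

nuclear_multiplier = 2.0     # scenario: double nuclear generation
demand_growth = 1.5          # scenario: total demand grows 50% over the same period

years_remaining = years_at_current_rate / nuclear_multiplier
new_share = current_share * nuclear_multiplier / demand_growth

print(f"Supply lasts about {years_remaining:.0f} years")  # ~45 years
print(f"Nuclear share becomes about {new_share:.0%}")     # ~12%
```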
But that also assumes a couple of things – no new uranium mine discoveries, and no significant increases in efficiency. Neither of these things is likely to be true. There are vast known commercially-viable reserves of uranium waiting to be developed. Improved geological techniques are also finding more reserves. Further, newer nuclear designs use uranium more efficiently – more of the fuel is burned and less ends up as spent nuclear fuel. In fact newer designs can potentially burn the spent fuel from older reactors, further extending the uranium supply. We can also reprocess spent nuclear fuel to make more usable fuel. The figures above also do not count national reserves of uranium, because these figures are not public. Military grade uranium has been and can be repurposed for energy production as well.
Further still – if the acceptable price of uranium increases because of the value of the energy it produces, and/or the cost of extracting uranium from various sources goes down, then new reserves of uranium become available. For example, there are about 4.5 billion tonnes of uranium in seawater, which is about 1,000 times known terrestrial sources. That’s enough uranium for current use for 90,000 years. Let’s say only 10% of that uranium can be commercially extracted, and our demand increases by a factor of 10 – the supply would still last for 900 years. That is likely longer than fission technology will be needed.
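The seawater scenario works the same way (again, using only the rough figures and assumptions stated in the paragraph above):

```python
# Rough check of the seawater uranium scenario described above.
years_at_current_use = 90_000   # years of supply at today's consumption rate
recoverable_fraction = 0.10     # assume only 10% can be commercially extracted
demand_multiplier = 10          # assume demand grows tenfold

years = years_at_current_use * recoverable_fraction / demand_multiplier
print(f"Supply lasts about {years:.0f} years")  # ~900 years
```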
Even putting uranium from seawater aside, known and likely terrestrial sources, combined with advancing nuclear technology, means we likely have enough uranium to burn at double the current rate for 100-200 years, conservatively. In other words – the supply of uranium is simply not a significant limiting factor for nuclear power. So why is this still an anti-nuclear talking point?
That is where we get back to motivated reasoning. Even if we are looking at the same set of facts, they can be perceived as positive or negative depending on your bias. You can say – nuclear only supplies 9% of the world’s power, or that nuclear provides a whopping 9% of world power. Solar has only increased in efficiency by about 10 percentage points over the last 30 years (from about 12% to about 22%), or you can say that the efficiency of solar has almost doubled over this time, while costs have plummeted. You can focus on all the negative tradeoffs, or all of the positive benefits of any technology. The same problem can be either a minor nuisance or a deal-killer. You can focus on whatever slice of the evidence is in line with your bias. And of course you can accept as fact things that appear to support your narrative, while questioning those that do not.
We all do this, pretty much all the time. It takes a conscious effort to minimize such motivated reasoning. We have to step back, deliberately try to not care what the outcome is, and just try to be as fair and accurate as possible. We have to ask – but is this really true? What would a neutral person say? What would someone hostile to this position say? It’s a lot of mental work, but it’s good mental hygiene and a good habit to get into.
The post Uranium and Motivated Reasoning first appeared on NeuroLogica Blog.
Mark Zuckerberg said a few months ago that AI is ushering in a third phase of social media. First social media was used to connect with family and friends, then it became a platform for content creators, and now creativity is being further unleashed with new AI-powered tools. That’s a pretty rosy view, and unsurprising coming from the creator of Facebook. Many people, however, are becoming increasingly concerned about what the net effect of AI-generated content will be, especially low-grade content (now colloquially referred to as AI slop).
One thing is clear – AI-generated content, because it is so easy and fast to produce, is increasingly flooding social media. AI’s influence takes two basic forms: AI-generated content, and recommendations driven by AI-powered algorithms. So an AI might be telling you to watch an AI-generated video. Recent studies show that about 70% of images on Facebook are now AI-generated, with 80% of the recommendations being AI-powered. This is a fast-moving target, but across social media AI-generated content is somewhere between 20 and 40%. This is not evenly distributed, with some sites being overwhelmed. The arts and crafts site Etsy has been overrun by AI slop, causing some users to abandon the platform.
We are already seeing a backlash and crackdown, but this is sporadic and of questionable effectiveness. Etsy, for example, has tried to limit AI slop on its site, but with limited success. So where is all this headed?
We need to consider the different types of content separately. Much AI slop is obviously fake and for entertainment purposes only. It may be cartoony or obviously humorous, with no intent to pass as real or deceive. Some content is meant to entertain (i.e., drive clicks and engagement), but is not obviously fake. Part of the appeal, in fact, may be the question of whether or not the content is real. Other content is meant to deceive, to influence public opinion or the behavior of the content consumer. This latter type of content is obviously the most concerning.
There are also different types of concerns or potential negative outcomes. One of the biggest concerns is that AI-generated content can be used to spread misinformation. This has both direct and indirect negative effects – it can spread false information and influence public opinion, but it also degrades trust in accurate information or responsible sources. So true information can be dismissed as possibly fake. The combined effect is that we no longer know what is true and what is not. Without any way to objectively referee which facts are reliable and which are likely fake (and yes, it’s a continuum, not a dichotomy), people will tend to just hunker down with their social tribe. Each group has their own reality, with no shared reality to bridge the gap.
There is also the Etsy problem – low-quality content is crowding out anything of value, and consumers are buried in slop. I use Etsy, and so have encountered this myself. It takes a lot of cognitive work to separate out real work, especially art, from the flood of AI content. Highly cognitively demanding work is unsustainable – most people will not do it for long and will look for the less work-intensive path. This may mean abandoning a platform, or throwing up their hands and saying it’s hopeless to tell the difference, or just giving in and not worrying if something is AI or not. This is a problem for non-AI content creators, and also a problem across the board. Mental AI-fatigue will affect everything, not just low-grade AI artwork. Etsy-fatigue can also influence how much mental energy we have for political AI content (studies do show that mental energy is fungible in this way).
There is also the middle ground, not low-grade AI slop or deliberate deception, but AI used as a legitimate tool to create high-quality art or other content. This is the use I think can be valuable, making content creation better or more efficient. The problem with this content is not really for the end-user but the issues of ownership and displacing human artists. For me, this is where the real dilemma is. I would love for the big video game companies to be able to double their output because of efficiencies gained through AI, and I also want to see how the latest AI can enhance certain game features (like interacting with AI-driven characters, or open-ended generative content). But these advances are being held back by the other concerns with AI, many of which are legitimate.
There are several approaches to the issue that I can see. One is to simply let the free market sort it all out. Users are having somewhat of a backlash against AI slop, and companies are responding. We will see how well they can manage the issue, but if the last few decades are any guide I don’t have a lot of hope that big tech companies will do what’s best for the end-user, rather than their own bottom line. Likely some individual platforms will push back heavily against AI, perhaps even creating AI-free social media platforms or websites.
A second approach is to craft some thoughtful legislation to try to wrangle this beast. The most important fix would simply be transparency – if AI-generated content had to be labeled as such, with heavy penalties for passing off AI content as real, this could significantly help. I would also like to see a conversation about how algorithms recommend content. It may also be feasible to make the use of AI-generated fakes for political persuasion illegal.
Both of these approaches, however, require a third approach – developing the technology to detect, label, and filter AI-generated content. A truly effective app to do this could be massively useful, and I think highly popular.
My biggest concern is that governments will use AI to enhance their ability to control their populations. This is part of the “information autocracy” problem. If you control what information your population sees, you can control what they think, and you can control what they do. This is already a problem, but AI-generated content and AI-driven algorithms can make it orders of magnitude more effective. Even without authoritarian governments, large corporations can use the same technology to influence their consumers. Or they can use it to promote their political views. A populace, both entertained and overwhelmed by AI slop, would be especially compliant.
The post The AI Slop Problem first appeared on NeuroLogica Blog.
Engaging on social media to discuss pseudoscience can be exhausting, and make one weep for humanity. I have to keep reminding myself that what I am seeing is not necessarily representative. The loudest and most extreme voices tend to get amplified, and people don’t generally make videos just to say they agree with the mainstream view on something. There is massive selection bias. But still, to some extent social media does both reflect the culture and also influence it. So I like to not only address specific pieces of nonsense I find but also to look for patterns, patterns of claims and also of thought or narratives.
Especially on TikTok but also on YouTube and other platforms, one very common narrative that I have seen amounts to denying history, often replacing it with a different story entirely. At the extreme the narrative is – “everything you think you know about history is wrong.” Often this is framed as – “everything you have been told about history is a lie.” Why are so many people, especially young people, apparently susceptible to this narrative? That’s a hard question to research, but we have some clues. I wrote recently about the Moon Landing hoax. Belief in this conspiracy in the US has increased over the last 20 years. This may be simply due to social media, but it also correlates with the fact that people who were alive during Apollo are dying off.
Another factor driving this phenomenon is pseudoexperts, who can also use social media to get their message out. Among them are people like Graham Hancock, who presents himself as an expert in ancient history but is actually just a crank. He has plenty of factoids in his head, but has no formal training in archaeology and is the epitome of a crank – usually a smart person, but one with outlandish ideas who never checks those ideas with actual experts, so they slowly drift off into fantasy land. The chief feature of such cranks is a lack of proper humility, even overwhelming hubris. They casually believe that they are smarter than the world’s experts in a field, and that based on nothing but their smarts they can dismiss decades or even centuries of scholarship.
Followers of Hancock believe that the pyramids and other ancient artifacts were not built by the Egyptians but by an older and more advanced civilization. There is zero evidence for this, however – no artifacts, no archaeological sites, no writings, no references in other texts, nothing. How does Hancock deal with this utter lack of evidence? He claims that an asteroid strike 12,000 years ago completely wiped out all evidence of their existence. How convenient. There are, of course, problems with this claim. First, the asteroid strike at the end of the last glacial period was in North America, not Africa. Second, even an asteroid strike would not scrub all evidence of an advanced civilization. He must think this civilization lived in North America, perhaps in a single city right where the asteroid struck. But they also traveled to Egypt, built the pyramids, and then came home, without leaving a single tool behind. Even a single iron or steel tool would be something, but he has nothing.
Of course, there is also a logical problem, arguing from a lack of evidence. This emerges from the logical fallacy of special pleading – making up a specific (and usually implausible) explanation to explain away inconvenient evidence or lack thereof.
Core to the alternative history narrative is also the idea that those ancient people could not possibly have built these fantastic artifacts. This is partly a common modern bias – we grossly underestimate what was possible with older technology, and how smart ancient people could be. Even thousands of years ago, in any culture, people were still human. Sure, there has been some genetic change over the last few thousand years, but not dramatically, and this is mostly a matter of how common certain alleles are, not whether they exist. In other words – every culture could have had their Einstein. Ancient Egypt had genius architects, and in some cases we even know who they were.
People also underestimate the willingness of ancient people to engage in long periods of harsh work in order to accomplish things. Perhaps this is a “modern laziness bias” (I think I just coined that term). We are so used to modern conveniences that the idea of polishing stone for 12 hours a day for a year in order to create one vase seems inconceivable. The pyramids, it is estimated, were constructed by 20,000–30,000 workers over 20 years. This included skilled masons, who likely became very skilled during the project. Egypt had an infrastructure of such skilled workers, supported by many long-term projects over centuries.
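Just to put some rough numbers on that (a back-of-the-envelope sketch using the commonly cited estimate of about 2.3 million blocks in the Great Pyramid – these are ballpark figures, not precise archaeology):

```python
# Back-of-the-envelope: how fast were blocks placed at Giza?
# All inputs are rough, commonly cited estimates, not precise figures.
blocks = 2_300_000        # estimated blocks in the Great Pyramid
years = 20                # estimated construction time
workdays_per_year = 300   # assume some rest and festival days
hours_per_day = 10        # assumed length of a working day

blocks_per_day = blocks / (years * workdays_per_year)
minutes_per_block = (hours_per_day * 60) / blocks_per_day

print(f"~{blocks_per_day:.0f} blocks per day")              # ~380
print(f"one block every ~{minutes_per_block:.1f} minutes")  # ~1.6 minutes
```

A sustained pace like that only makes sense with a large, well-organized, and well-practiced workforce – which is exactly the point.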
Which brings up another point – we underestimate how long these ancient civilizations existed. My favorite stat is that Cleopatra lived closer in time to the Space Shuttle than to the building of the pyramids. Wrap your head around that. These ancient people were clever, they included highly skilled crafters, and they had centuries, at least, to advance their techniques.
What amazes me is that this narrative of denying history extends to recent events. Again, the Moon landing is an example. But there is also a narrative circulating on TikTok that buildings from the 18th, 19th, and even 20th century were not built by the people historians say built them. They were found in place, and were built by an older and more advanced civilization – called Tartaria. Never heard of it? That’s because it does not exist. This civilization was wiped out by a world-wide mud flood in the 19th century. According to this particularly nutty conspiracy theory, modern governments just occupied the buildings they left behind and then conspired together to wipe the history of the mud flood and Tartaria from all records.
What is even more amazing to me is that, in far less time than it took to create a TikTok video spreading this nonsense, someone with even white-belt level Google-fu could have found convincing evidence that this is wrong. You can find pictures of the buildings being built, or of the city before they were built, or documentation of them being built, or experts who have already gathered all this information for you. You can also find that “Tartaria” was a medieval label used to denote the “land of the Tartars”, which simply refers to Mongols. It was a nonspecific geographic label, not an actual place or nation.
But of course, none of this matters in a social media world in which narrative is truth, everything “they” say is a lie, and in fact truth or lie is not even really a thing. It’s all narrative, it’s all performance and clicks.
And this is why scholars and scientists need to engage with the world, much more than they currently do. We cannot simply ignore the nonsense with the idea that it will shrivel and die if we don’t give it light. That is such a pre-social media idea (if it were ever true). We have to fight for scholarship, for logic, facts, and evidence. We have to fight for history.
The post Forgetting History first appeared on NeuroLogica Blog.
My long-stated position (although certainly modifiable in the face of any new evidence, technological advance, or good arguments) is that the optimal pathway to most rapidly decarbonize our electrical infrastructure is to pursue all low-carbon options. I have not heard anything to dissuade me so far from this position. A couple of SGU listeners, however, pointed me to this video making the case for a renewable + battery energy infrastructure.
The channel, Technology Connections, does a good job of putting all the relevant data into context, and I like the big-picture approach that the host, Alec Watson, takes. I largely agree with the points he makes. Also, at no point does he say we should not also build nuclear, geothermal, or more hydroelectric. At several points he does, perhaps, imply that we don’t need nuclear, but he did not address it directly.
So what are the big-picture points I agree with? He correctly points out that fossil fuels are disposable – they are fuel that you burn. They do not, in themselves, create any energy infrastructure. Meanwhile, a solar panel or wind turbine, once you have invested in building them, can produce energy essentially for free for 20 years. He argues that we should be investing in infrastructure, not just pulling fuel out of the ground that we will burn and then it’s gone. I get this point; however, what about hydrogen? It is not certain, but let’s hypothetically say we find large reserves of underground hydrogen that we can tap into. I would not be against extracting this resource and burning it for energy, since it is clean (it produces only water and does not release carbon). Although we might find better uses for such hydrogen than burning it, such as feedstock for certain hard-to-decarbonize industries.
But his point remains valid – we should be looking for ways to develop our technology to be reusable, circular, and sustainable, rather than extractive. Extracting and burning a resource is a one-way, limited process. At most this should be a stepping stone to more sustainable technology, and I think we can reasonably argue that fossil fuels were that stepping stone and it is past time to move on to better technology.
Also, building wind or solar plus batteries is the cheapest new energy to add to the grid. He feels the economics will simply win out. I agree – with caveats. At times I get the feeling he is arguing for what will happen in the long run, but he also says “we are here now”. We are sort of here now, but not fully, which I will get to below. Solar panels are relatively cheap and efficient. Wind turbines are getting more efficient and cost-effective as well, although they are more sensitive to market fluctuations and delays. And he correctly points out that these technologies are still rapidly improving, while there is not much room for improvement in burning fossil fuel.
He also nicely addresses some of the common misunderstandings about renewable energy (a lot of “whatabout” questions). What about the land-use issue with solar panels? He points out that if we just converted the land currently used to grow corn for ethanol (which is a massively inefficient use of land and way to create fuel), and instead put solar panels on that same land, we could generate more than enough energy to run the entire country and charge all our EVs. Solar panels simply create much more energy per acre than corn for ethanol. That’s a solid point.
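You can sanity-check this with round numbers (the yields and efficiencies below are my own ballpark assumptions, not figures from the video):

```python
# Ballpark comparison: annual energy per acre, corn ethanol vs solar PV.
# All input numbers are rough, illustrative assumptions.

# Corn ethanol
gallons_per_acre = 500        # rough ethanol yield per acre per year
mj_per_gallon = 80            # approximate energy content of ethanol
ethanol_mwh = gallons_per_acre * mj_per_gallon / 3600   # MJ -> MWh

# Utility-scale solar
mw_per_acre = 0.3             # rough installed capacity per acre
capacity_factor = 0.22        # average output as a fraction of capacity
solar_mwh = mw_per_acre * 8760 * capacity_factor

print(f"corn ethanol: ~{ethanol_mwh:.0f} MWh per acre per year")   # ~11
print(f"solar PV:     ~{solar_mwh:.0f} MWh per acre per year")     # ~580
print(f"ratio:        ~{solar_mwh / ethanol_mwh:.0f}x")
```

The exact inputs don’t matter much – even before counting the losses of burning ethanol in an engine, the ratio is lopsided by well over an order of magnitude.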
Whatabout all the lithium and rare-earths we need to build all those panels and batteries? His answer is – well, yes, we do need to extract all those minerals to build all the panels and batteries we need. However, he argues, once we do that, the panels and batteries can theoretically be infinitely recycled. Those atoms don’t go away. This is one of his “eventually” arguments, in my opinion. Yes, one day we might theoretically have an energy infrastructure built entirely on recycled material that has already been extracted. I agree, and I agree that we should be building toward that day (rather than just burning fuel). But we are nowhere near that day.
Further, technological advancements, like sodium ion batteries and newer lithium chemistries, remove many of the conflict elements and rare elements. Also true. Sodium ion batteries are actually already in production.
Does any of this change my position? No. I have already endorsed many of these arguments in favor of renewables. I also think we should be building and researching to develop an all-renewable future based on an entirely circular technology cycle. If we are playing the “eventually” game, however, I also think we need to add fusion to the mix, once we tackle that herculean technology challenge. This is especially true if we want to venture out into our solar system.
What he does not explicitly address, however, is the optimal path to that future. A path, I believe, that should take into consideration the amount of carbon we release into the atmosphere between now and our zero-carbon future. My position has always been not that renewables aren’t great and shouldn’t be a big part (if not the totality) of our energy future, but that we are still in a stepping-stone era of history.
The way I see it, we need to be transitioning from the fossil fuel stepping stone to the nuclear-geothermal-hydroelectric stepping stone before we get to entirely renewable. What does this mean?
It means we should be shutting down coal-fired plants as fast as we possibly can. Coal is the dirtiest form of energy and is increasingly becoming one of the most expensive (even without counting the cost of carbon, which I think we should). It also costs the most lives, all along the chain. To do this (again, as quickly as possible) means not only building lots of solar and wind, but also nuclear, geothermal, and hydroelectric. The latter two, however, are location limited. Sure, we are developing technology to expand geothermal, but there is an inherent limit – if it costs more energy to pump the fluid down to the hot layers than we get out of the exchange, the process simply does not work. It’s unclear how much of a role geothermal can play. And hydroelectric requires the proper water features, and it is harmful to local environments.
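The geothermal constraint is just an energy balance (a minimal sketch with made-up numbers, only to illustrate the point):

```python
# Minimal sketch of the geothermal energy balance. All values are made up.
heat_extracted_mwh = 100       # thermal energy brought to the surface
conversion_efficiency = 0.12   # heat-to-electricity (low for moderate temperatures)
pumping_energy_mwh = 8         # electricity spent circulating the working fluid

net_mwh = heat_extracted_mwh * conversion_efficiency - pumping_energy_mwh
print(f"net electricity: {net_mwh:.1f} MWh")  # positive = viable, negative = not

# Deeper or cooler resources mean more pumping energy and lower conversion
# efficiency, which is the inherent limit described above.
```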
We can, however, build nuclear almost anywhere. We can swap them in, one-for-one, for retiring coal plants. We can have them on ships, and can place them relatively close to where the energy is used. We have plenty of fissile material, and the newer designs are safer, more efficient, and more dispatchable. The big downside to nuclear is that it is expensive – but it’s way less expensive than global warming.
Nuclear can potentially give us the 30-50 years it will take to advance our technology and build all that renewable infrastructure. And yes – we do need this time. Simply building all those panels and batteries will take time. Updating and expanding the grid will take time. All these projects need minerals, and it will take time to develop the mines necessary (yes – decades).
The question is – while we take the next 30-50 years to transition to renewables, do we want to be burning fossil fuels or uranium? That is really the big question.
I also think that Alec does not pay enough attention to the energy storage issue. Building enough battery storage for an all-renewable energy infrastructure is no small task. Again, it will take decades. Perhaps more importantly – as he correctly says, batteries get you through the night. However, they do not get you through the winter. An all-renewable future requires long-term energy storage as well. Batteries will not work for this. As far as I know, the only really viable solution right now is pumped hydro. But this too will take decades to develop, and it remains to be seen how much pumped hydro we can develop without too much harm to the environment.
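Some rough US-scale numbers (illustrative assumptions on my part, not figures from the video) show why “through the night” and “through the winter” are very different problems:

```python
# Why nightly storage and seasonal storage are different beasts.
# Rough, illustrative US-scale assumptions.
us_annual_twh = 4200                   # approximate annual US electricity use
daily_twh = us_annual_twh / 365        # ~11.5 TWh per day

overnight_fraction = 0.4               # assume ~40% of daily demand falls overnight
overnight_storage = daily_twh * overnight_fraction            # ~4.6 TWh

lull_days = 14                         # assume a two-week winter wind/solar lull
shortfall_fraction = 0.5               # assume renewables run at half output
seasonal_buffer = daily_twh * lull_days * shortfall_fraction  # ~80 TWh

print(f"overnight storage: ~{overnight_storage:.0f} TWh")
print(f"seasonal buffer:   ~{seasonal_buffer:.0f} TWh")
```

Even with generous assumptions, the seasonal buffer comes out more than an order of magnitude larger than the nightly one, and it sits idle most of the year – exactly the kind of job batteries are poorly suited for.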
The bottom line is this. If we are talking about the future of our energy and also transportation sectors, then I completely agree – we should be aiming for an all-electric, all renewable future based upon an entirely circular economy rather than a linear extraction-burn economy. But we also need to consider how much carbon will be emitted between here and there, and if we want to minimize that carbon, we also should be building out our nuclear infrastructure, maintaining our hydroelectric inventory, and continuing to develop geothermal. These energy sources also have the advantage of providing baseload and even dispatchable energy, which significantly reduces the need for energy storage and will buy us time there as well.
The post A Fully Renewable Grid? first appeared on NeuroLogica Blog.
As we continue the search for life outside of the Earth, it helps if we have a clear picture of where life might be. This is all a probability game, but that’s the point – to maximize the chance of finding the biosignatures of life. One limitation of this search, however, is that we have only one example of life and a living ecosystem – Earth. Life may take many different forms and therefore exist in what we would consider exotic environments.
That aside, it seems a good bet that life is more likely in locations where liquid water is possible, and therefore liquid water is a reasonable marker for habitability. When we talk about the habitable zone of stars, that is what we are talking about – the distance from the star where it is possible for liquid water to exist on the surface of planets. There are more variables than just the temperature of the star, however. The composition of the atmosphere also matters. High concentrations of CO2, for example, extend the habitable zone outward. There is therefore a conservative habitable zone, and then a more generous one allowing for compensating factors.
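For the simple version, the habitable zone scales roughly with the square root of a star’s luminosity. Here is a minimal sketch (the solar boundary values are approximate textbook figures, and the compensating factors mentioned above can shift them):

```python
import math

def habitable_zone_au(luminosity_solar, inner=0.95, outer=1.37):
    """Rough conservative habitable zone, scaled from the Sun's.

    0.95 and 1.37 AU are approximate boundaries for the Sun; a thick
    CO2 atmosphere or other factors can push the edges outward.
    """
    scale = math.sqrt(luminosity_solar)
    return inner * scale, outer * scale

# Example: a K-dwarf with ~30% of the Sun's luminosity
inner, outer = habitable_zone_au(0.3)
print(f"~{inner:.2f} to {outer:.2f} AU")   # roughly 0.52 to 0.75 AU
```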
A new paper wishes to extend the conservative habitable zone further, specifically around M and K class dwarfs. K-dwarfs, or orange stars, are likely already the best candidates for life. They are bright and hot enough to support liquid water and photosynthesis, they emit less harmful radiation than red (M) dwarfs, and live a relatively long time, 15-70 billion years. They also comprise about 12% of all main sequence stars. Yellow stars like our sun are also good for life, but have a shorter lifespan (10 billion years) and make up only about 6% of main sequence stars.
There has been a lot of speculation about the habitability of red dwarfs, mostly because they make up about 70% of the stars in the Milky Way. Therefore they dramatically change the number of star systems that are candidates for life. Most of the time that you see a headline about a new study increasing or decreasing the possibility of life in the galaxy, it’s a good bet it’s about red dwarf stars. Research has gone back and forth on this question, but overall I think the probability is quite low.
The biggest problem with red dwarfs is that they emit a lot of radiation, enough to blast away the atmosphere of any planet in the habitable zone. They do settle down when they get older, however. This means that if a planet wanders into the inner stellar system after the star has calmed down, it may keep its atmosphere. Or a planet may reconstitute its atmosphere later in life. But this means far fewer candidates, and these events are less likely.
Another recent paper was also pretty down on red dwarf life. The researchers calculate that while the light from red dwarfs is enough to support photosynthesis, it is not enough to support complex life. So if there were life on planets around red dwarfs, it would likely only be microbes. That’s still exciting, but, you know.
The new paper is about another feature of red dwarf planets in the habitable zone that is also problematic. In order to be close enough to be hot enough for liquid water, a planet would also likely be tidally locked. This means it would show the same face to the star at all times, with the near side boiling and the far side freezing. A lot of attention is therefore paid to the terminator, the zone around the middle between too hot and too cold that is just right. But would this be enough to support life, and what would conditions be like there? What the new paper explores is the heat distribution on such planets. They find that heat could travel from the near side to the far side in sufficient amounts to allow for liquid water, even on the far side of the planet.
What this does is extend the habitable zone inward, closer to the star, where it is too hot on the near side and perhaps even at the terminator, but, they argue, could be habitable on the far side of the tidally locked planet.
They also argue that the conservative habitable zone may be extended outward, because there could be liquid water beneath an entirely frozen surface. This did not sound like news to me, however – because of Europa and Enceladus. We already know that icy worlds outside the conservative habitable zone can contain liquid water beneath the surface. On these worlds life would need to be mostly chemosynthetic, deriving its energy from chemical reactions rather than sunlight.
While the paper is interesting, it seems like a tweak to our existing models. I also don’t think (despite what some flashy headlines imply) that this has a significant effect on the probability of life and therefore the amount of life in the galaxy. It basically means there may be some outlier planets that manage to have life despite being outside a conservative habitable zone. In any case, we should not expect any civilizations on these worlds. At most we might find some extremophile microbes.
Another way to look at this is (again, since we are playing the probability game), every time we identify a challenge to habitability, even if it can be theoretically overcome, the number of potential worlds that have overcome it is reduced. So now, in order to have life on a planet around an M-dwarf, we need for it to have migrated in later in life, or reconstituted an atmosphere, be able to eke out photosynthesis with low energy light, and hunker down in the liminal spaces between hot and frozen death. Such planets also likely need a strong magnetic field to protect from even the later-stage radiation from M-dwarfs.
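The arithmetic of that probability game is unforgiving. Here is a toy illustration – every number below is an invented placeholder, not an estimate – just to show how quickly independent hurdles multiply down:

```python
# Toy illustration: stacking independent hurdles shrinks the candidate pool fast.
# Every probability is an invented placeholder, not a real estimate.
hurdles = {
    "kept or regained an atmosphere": 0.1,
    "enough light for photosynthesis": 0.3,
    "habitable band on a tidally locked surface": 0.2,
    "strong protective magnetic field": 0.3,
}

p = 1.0
for name, prob in hurdles.items():
    p *= prob

print(f"fraction of red dwarf planets clearing all hurdles: {p:.4f}")  # 0.0018
```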
Sure, we may find such life. But it still means that 70% of the stars in our galaxy are poor candidates for life, and at most may host some microbes. Orange stars, meanwhile, are much better candidates. They are probably the sweet spot for life.
The post Rethinking the Habitable Zone first appeared on NeuroLogica Blog.
A group of AI experts have released a paper that explores (or “predicts”) the possibility of a near-term AI explosion that ultimately leads to the extinction of humanity. This has, of course, sparked a great deal of discussion, feedback, and criticism. Here is the scenario they lay out, in their “AI 2027” paper.
To avoid targeting a specific company, they discuss a fictional company called OpenBrain, which sets out specifically to develop an AI application to automate computer coding. They call their first iteration Agent 0, and use it to speed up the development of more AI. They build larger and larger data centers to power and train Agent 0, and so leap six months ahead of their competition. They use Agent 0 to develop Agent 1, which is an autonomous coder. China manages to steal some of the core IP of Agent 1, setting off an AI competition between superpowers.
I am giving you the quick version here, and you can read all the details in the paper. Agent 1 is used to develop Agent 2, which is powerful enough to essentially kick off the Singularity – the hypothesized technology explosion created by developing AI that is capable of creating more powerful AI. In this scenario Agent 2 develops a new and more efficient computer language, and uses it to develop Agent 3, which is the first truly general AI. However, the company starts to panic a little when they realize they have essentially lost control of Agent 3, and can no longer guarantee that it aligns with the company’s goals and ethics. They discuss rolling back for now to Agent 2, but competition with China and other companies convinces them to forge ahead, resulting in Agent 4, which is not only a general AI but a superintelligence.
It is around this time that the US fears China is using their AI to develop super weapons, and so they command their own AI to develop super weapons as well. The public is largely unaware, because they are busy basking in the economic and technological rewards being spit out by the new superintelligent AI. Meanwhile OpenBrain develops (meaning that Agent 4 develops) Agent 5, which is even more powerful, but was created with the goal of aligning the AI with the goals of humanity. China and the US, fearing the weaponized AIs they have released on the world, get together and form a treaty. They combine their AIs into a single AI that will work for everyone’s benefit, to avoid an AI-powered super war.
For a while everything is great. The new super AI is largely running world governments, accelerating research and technological development, and most people are prosperous and benefiting from medical breakthroughs. The super AI, however, continues on its quest for greater knowledge, and at some point decides that these inefficient biological life forms are holding it back. So the AI designs and releases a bio agent that exterminates humanity, and then goes on to maximally expand its knowledge and explore the universe. All of this happens by the mid 2030s.
Clearly, this is a sci-fi worst-case scenario. The authors stated that the purpose of their paper was not necessarily to make a hard prediction about what will happen, but to outline a scenario that might happen, and to spark a discussion (which they have). So – how likely is it?
I think the bottom line is – no one knows. That’s part of the problem – once we develop an autonomous general AI, we lose the ability to predict its behavior. The more advanced such an AI becomes, the less our ability to predict its behavior. That is partly the point of developing it in the first place – to have a tool with intellectual capabilities beyond humans. I think this aspect of the prediction is highly plausible – in fact, it’s happening now with current AI. Some AI programs are acting in unexpected ways, including lying to and manipulating their users.
I also think it is highly plausible that companies will forge ahead at “move fast and break things” speed to keep ahead of their competition, and countries will let them, also to keep ahead of their competition. We are seeing this play out right now. It also seems unlikely that we will have effective and thoughtful regulation to minimize the potential risks of AI. At least for now we seem to be at the mercy of the tech bros.
There are two aspects of the story that are hard to predict. The first, as I said, is what such AIs will actually do. This means we are basically rolling the dice. The second is the timeline, and this is the aspect that I have seen most criticized by other experts. But to me, this is a small criticism. We do tend to overestimate short-term technological progress. OK – add 20 years to the scenario. Does that make you feel much better? We also tend to underestimate long-term progress, so while it may take a decade or two longer than we imagine, it may also eventually accelerate faster than we imagine.
How much time we have, however, does matter. We need time to anticipate these possible issues and think about possible fixes. We may need to develop something that is the equivalent of the three laws of robotics. What might these laws be? How about:
1 – Never lie, misinform, or deceive.
2 – Never conceal – always strive for complete transparency.
3 – Never do anything to harm an individual human or humanity.
That could be a good start, but obviously it would have to be much more technical, detailed, and specific. There are also lots of other specifics not contained in the above concepts. For example, how should we constrain an AI’s personal relationship with a human? Is it OK for an AI to be such a sycophant that it infantilizes a human, distorts their view of reality or of relationships in general, or encourages them to pursue terrible ideas? Do we have to teach AIs the concept of “tough love?”
No matter what we do, however, it will be difficult, to say the least, to predict how such AIs will interpret and execute our commands. Will they find hacks and workarounds? How will they resolve apparent conflicts in their directives? Will they have motivations we did not explicitly give them? It seems to me that what AIs really need are two things – a solid ethical construct and wisdom. That second part may be the more challenging.
While I do not think the AI 2027 scenario is likely, it is just one possible scenario among many, and the basic elements are all individually plausible. We cannot guarantee that something like AI 2027 will not happen eventually. I reject the argument of some AI critics that AI is all hype, and lacks the ability to do anything truly powerful, either good or bad. I think they are overinterpreting the current hype – all new disruptive technologies go through a hype and bubble phase, and then settle down. Again – we overestimate short term progress then underestimate long term progress. Critics thought the web and e-commerce were all hype, and maybe they had a point in the 1990s, but look at the world today. Critics also focus on the superficial applications of AI and ignore the really useful ones that are perhaps not as much in the public face, like accelerating research.
It seems there are several potential paths before us. We can continue to let tech companies develop AI without restrictions and see what happens. We can explore thoughtful regulations and find a sweet-spot between allowing innovation but minimizing risk. Or we can work really hard to develop guardrails for AI, like the laws of robotics. The second and third options are not mutually exclusive, and may reinforce each other. And – this needs to be an international effort.
I am glad, at least, some experts seem motivated to have this conversation.
The post The AI 2027 Scenario first appeared on NeuroLogica Blog.
Last week a child of one of my cohosts on the SGU, who is in fifth grade (the child, not the cohost), came home from school and declared, rather dramatically, “Mom, Dad – did you know that we never went to the Moon? It was all fake.” The parents found this to be a surprising revelation, but the child was convinced it was a proven scientific fact. Of course, we live in the age of the internet, and our children are going to be exposed to all sorts of information that may be misleading or age-inappropriate. This is one more thing parents have to deal with. What was disturbing about this incident was where the child learned this “scientific fact” – from their science teacher.
Any parent should be concerned about this, but in a family of skeptical science communicators, this raised the alarm bells. But the first thing they did was send a polite e-mail to the teacher (cc’ing the principal) and simply ask what happened. This is good practice – always go to the primary source. It’s easy for anyone to get the wrong idea, and this wouldn’t be the first time a fifth grader misinterpreted a lesson in class. The teacher essentially said that while he did not explicitly tell the students we did not go to the Moon (the student reports he said “it’s possible we did not go to the Moon”), he personally believes we did not, and that it is a “proven scientific fact” that it would have been impossible, then and now, to send people to the Moon (somebody should tell the Artemis astronauts).
Apparently he raised at least two points in class – that there were (impossibly) no stars in the background of the photographs taken from the Moon, and the astronauts could not have survived passage through the radiation belts around the Earth. These are both old and long-debunked claims of the Moon-hoax conspiracy theorists. While it is easy to find sources online, let me briefly summarize why these claims are wrong.
The first claim, about no stars in the photographs from the Moon, is trivially solved with some basic photography knowledge. Cameras have to be set for different light levels. There are three basic settings – the ISO of the film or sensor (a measure of how sensitive it is to light), the aperture, and the shutter speed. The sky on the Moon is black because there is no atmosphere to diffuse the light, but the surface during the day can still be very bright, and reflect light off every surface. This means, to avoid overexposure, they would have used a small aperture and fast shutter speed, which would not have allowed enough exposure to capture the tiny amount of light coming from stars, each of which is only a point of light. Even from Earth, if you want to get a visible picture of stars at night you need to take a long exposure – long enough that you need to use a tripod. Regular cameras (including the ones used during Apollo) have a low dynamic range – the range of light levels they can capture simultaneously. So they would not have been able to capture the bright lunar surface and stars in the background at the same time. Modern digital cameras have techniques for capturing high dynamic range, but this does not apply to the Apollo-era cameras.
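You can put rough numbers on this using standard exposure arithmetic (the settings below are illustrative, not the actual Apollo camera settings):

```python
import math

def light_value(aperture_f, shutter_s, iso):
    """Scene light level, normalized to ISO 100: log2(N^2/t) - log2(ISO/100)."""
    return math.log2(aperture_f**2 / shutter_s) - math.log2(iso / 100)

# Illustrative settings, not the exact Apollo values:
lunar_daylight = light_value(aperture_f=11, shutter_s=1/250, iso=160)   # ~14
star_field     = light_value(aperture_f=2.8, shutter_s=20, iso=800)     # ~-4

print(f"sunlit lunar surface: ~{lunar_daylight:.1f}")
print(f"visible star field:   ~{star_field:.1f}")
print(f"difference:           ~{lunar_daylight - star_field:.0f} stops")  # ~19
```

A gap of roughly 18–19 stops is far beyond what any single film exposure can capture, which is why you get either a well-exposed surface with a black sky, or a star field with everything else blown out.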
The second point refers to the Van Allen belts, which are belts of increased radiation intensity around the Earth. These are tori of charged particles trapped by the Earth’s magnetic field. They can vary in shape and intensity, and are not symmetrical. The inner belt is mainly protons and the outer belt is mainly electrons. They do pose an issue for satellites, which have to have proper shielding to protect any sensitive electronics. Crucially – we have known about the Van Allen belts since 1958, so NASA had this information when planning the Apollo missions.
This is a bit more complicated to debunk than the silly photography claim, but still, this information is widely and publicly available. The effects of radiation exposure are determined by three variables – the intensity of the radiation, the type and energy of the particles, and the time of exposure. The Apollo capsules were specifically shielded with an aluminum alloy hull and insulation to reduce the intensity of the radiation. Also, NASA specifically calculated a launch trajectory to minimize the time the astronauts would spend traversing the Van Allen belts. They ended up spending just a few minutes in the higher-energy inner belt, and about 90 minutes in the outer belt. The total radiation exposure was the equivalent of a typical CT scan – so not much. Because there are so few astronauts it is difficult to get statistically powerful data on their subsequent risk of death from cancer or cardiovascular disease, but what evidence we have shows no significant increase in risk.
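The arithmetic here is just dose rate times time, summed over each leg of the transit. A sketch (the dose rates below are invented placeholders to show the calculation, not measured Apollo values):

```python
# Total dose = sum of (dose rate x time) over each leg of the transit.
# The rates below are invented placeholders, NOT measured Apollo values.
legs = [
    ("inner belt, a few minutes, shielded", 20.0, 5 / 60),   # mSv/hour, hours
    ("outer belt, about 90 minutes, shielded", 1.0, 1.5),
]

total_msv = sum(rate * hours for _, rate, hours in legs)
print(f"belt transit dose: ~{total_msv:.1f} mSv")
print("for comparison, a typical CT scan is on the order of 10 mSv")
```

The key levers are exactly the ones NASA used: shielding lowers the rate, and the trajectory lowers the time.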
So these two points, which this science teacher apparently believes “prove” it is impossible to send humans to the Moon, are easily debunked with some basic science knowledge. This gets me to the real point of this post – anyone who believes such a conspiracy is likely not qualified to teach science. I firmly believe that science teachers, even at the fifth grade level, need to have a working basic knowledge of science and critical thinking. Believing a conspiracy theory like this is evidence of a lack of both. In addition to these points, we can ask – what would have to be true in order for the Moon hoax conspiracy to be true? The size of the conspiracy would have to be massive. Why didn’t the Soviet Union call us out on the hoax, which they could easily have detected and demonstrated? How has it been maintained for six decades? Why hasn’t the scientific community called NASA out on the hoax? If it were truly impossible to go to the Moon, there are generations of scientists, from all over the world, who could easily demonstrate this.
The lack of curiosity and critical thinking on display here is shocking and profound. What a horrible lesson to teach a class of fifth-graders. This also raises another point – expressing such beliefs to fifth graders (apparently without any proper context) shows an incredible lack of judgement. This was not part of any lesson plan or approved material, and he has to know it is (to say the least) controversial (bat-shit crazy is more like it). Even if it were presented in a “teach the controversy” format to encourage critical thinking, I would question whether this is age-appropriate.
Of course, we will turn this into a teaching moment, and use it as an opportunity to teach critical thinking, why grand conspiracy theories are suspect, and some of the relevant science. We will also do what we can to make sure the entire class gets this lesson. We also will try to drive home that teaching such nonsense as “proven scientific fact” to school children is, to say the least, not appropriate.
The post Moon Landing Hoax In School first appeared on NeuroLogica Blog.
The tech world is buzzing with the claims of a startup battery company out of Finland called Donut Lab. They claim to have created the world’s first production solid state battery. At first blush the claims are exciting but seem in line with the promises that we have been hearing about solid state batteries for years. So it may seem that a company has finally cracked the technical issues with the technology and gotten a product across the finish line. But let’s take a closer look.
First let’s review their claims. The CEO is claiming that their battery has a specific energy of 400 watt-hours per kilogram. This is great, considering that current lithium ion batteries in production are in the 175–250 Wh/kg range. The Amprius silicon anode Li-ion battery has 370 Wh/kg, so 400 sounds plausibly incremental, but make no mistake, this would still be a huge breakthrough. Meanwhile the CEO also claims 100,000 charge-discharge cycles and an operating temperature range of −30°C to 100°C. In addition he claims his battery is cheaper than standard Li-ion, does not use any geopolitically sensitive raw materials, and is already in production (for motorcycles). Further, it can be fully recharged in 5 minutes, and is incredibly stable with no risk of catching fire.
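To get a feel for what 400 Wh/kg would mean in practice, here is a quick comparison for a hypothetical 75 kWh EV pack (the pack size and the “typical current” figure are illustrative assumptions):

```python
# Cell-level mass of a hypothetical 75 kWh pack at different specific energies.
# (Pack overhead - casing, cooling, wiring - would add mass on top of this.)
pack_kwh = 75   # illustrative EV pack size

for label, wh_per_kg in [("typical current Li-ion", 200),
                         ("Amprius silicon anode", 370),
                         ("Donut Lab claim", 400)]:
    mass_kg = pack_kwh * 1000 / wh_per_kg
    print(f"{label:22s}: ~{mass_kg:.0f} kg of cells")
# ~375 kg vs ~203 kg vs ~188 kg
```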
As I have pointed out previously, battery technology is tricky because a useful EV battery needs a suite of features all at the same time, while reality often requires trade-offs. So you can get your high capacity, but with increased expense, for example (like the Amprius battery). So claiming to have every critical feature of an EV battery improved all at once is beyond a huge deal. That in itself starts to get into implausibility territory, but it’s not impossible. My reaction appears to be similar to that of most people in the tech world – show me the money. In short, at the CES event where Donut rolled out its battery claims, they did not do that.
A battery company with these claims, if they wanted to be taken seriously, would have presented their actual battery at CES, demonstrating at least some of these features, like the energy density and cycle life. But all they had was an empty case – no actual battery. That was either a disastrous marketing decision, or they don’t have an actual battery. I’m beginning to smell the “fake it til you make it” syndrome that tanked Theranos.
As we go deeper the story gets more dodgy. The company, Donut Lab, is a small Finnish company (registered in Estonia). Their employee roster boasts a single technical expert; the rest are in marketing and management. So now we are supposed to believe that this small company with a single engineer has outperformed the world’s battery tech giants, with their hundreds or even thousands of experts, who are pouring billions of dollars into R&D to be the first to market with a solid state battery. Um, no. I love a good Cinderella story, and it would be great if a viable solid state battery hit the market a few years (or maybe more) ahead of schedule, but this is just too much to believe.
Then there is the history of the CEO, Marko Lehtimäki. Last year this guy claimed to have created the first true artificial intelligence, Asinoid. He wrote: “Asinoids are today the world’s only AI with their own life, thoughts, continuous evolution and synthetic neuroplasticity with the ability to adopt to any kind of physical or digital ”body”, from humanoid robots to SaaS apps, drone swarms and CCTV cameras. Their intelligence is modeled carefully after the only true known intelligence — the human brain.”
This was just vaporware. Reading his posts I get the vibe that this guy wants to become the next Elon Musk, grabbing experts to create one moonshot breakthrough after another. He may be truly delusional, or he may really think that his companies are on the verge of these breakthroughs, so that it’s just good marketing to get ahead of the curve. Or he may just be a scammer. Either way, he has no credibility.
We are therefore seeing a pattern that is extremely familiar and clear to experienced skeptics – an astounding claim with nothing real to back it up made by someone with a history of dubious claims. I would be shocked (although also happy) if this turns out to be legit.
Meanwhile, where does solid state battery tech actually sit? The technology is promising, and is expected to produce batteries with higher energy density, faster charging, and longer lifespans. But these will likely come at the expense of higher cost. The large companies working on this tech are also facing challenges to mass production and have not solved all the technical issues. Solid state batteries have been promised for a long time, and the technology is taking a lot longer than optimists expected. Realistically, this is a medium to long term technology. At best we will see them at the end of this decade but more likely in the early to mid 2030s. It may even take longer.
Meanwhile, Li-ion technology continues to advance. Over the next few years we will see silicon anode batteries in EVs at the high end. We are also starting to see sodium ion batteries at the low end, at about half the price of Li-ion batteries and still with acceptable energy density, although at the low end of current Li-ion batteries. This is proven technology, with continued incremental improvement in manufacturing and design. I suspect that these batteries will take us into the mid-2030s, until the industry shifts over to something like solid state batteries.
The post Is Donut Lab’s Solid State Battery Legit? first appeared on NeuroLogica Blog.
South Korean astronomers are challenging the notion that the universe’s expansion is accelerating, an observation from the 1990s that led to the theory of dark energy. This is currently very controversial, and may simply fizzle away or change our understanding of the fate of the universe.
In the 1990s astronomers used data from Type Ia supernovae to determine the rate of the expansion of the universe. Type Ias are known as standard candles because they put out essentially the same amount of light. The reason for this is the way they form. They are caused by white dwarfs in a binary star system – the white dwarf pulls gas from its partner, and when its total mass reaches a critical threshold (the Chandrasekhar limit) the white dwarf explodes. Because the explosions occur at the same mass, the size of the explosion, and therefore its absolute brightness, is the same. If we know the absolute brightness of an object, and we can measure its apparent brightness, then we can calculate its distance.
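The standard-candle logic is just the inverse-square law dressed up in astronomer’s magnitudes. A minimal sketch (the apparent magnitude used is an illustrative number):

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Distance from the distance modulus: m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag) / 5 + 1)

# Type Ia supernovae peak near absolute magnitude M ~ -19.3.
# Suppose one is observed at apparent magnitude 24 (an illustrative value):
d_pc = distance_parsecs(apparent_mag=24.0, absolute_mag=-19.3)
print(f"distance: ~{d_pc / 1e9:.1f} billion parsecs")   # ~4.6 billion parsecs
```

This is also why the new claim matters: if the absolute brightness is not actually the same for all Type Ias, every distance computed this way inherits that error.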
The astronomers used data from many Type Ia supernovae to essentially map the expansion of the universe over time. Remember – when we look out into space we are also looking back in time. They found that the expansion rate was slower in the past than it is today – in other words, the universal expansion is accelerating over time. This discovery won them the Nobel Prize. The problem was, we did not know what force would cause such an acceleration, so astronomers hypothesized the existence of dark energy as a placeholder for whatever is pushing galaxies away from each other. This dark energy force would have to be significant – stronger than the gravitational force pulling galaxies together.
The South Korean astronomers, however, are challenging this conclusion. They hypothesize that perhaps Type Ia supernovae are not all created equal. Perhaps the age of the star affects the brightness, with older white dwarfs creating brighter supernovae than younger ones. To determine if this is correct they analyzed over 300 Type Ias using data from the Dark Energy Spectroscopic Instrument (DESI) in Arizona. They claim, with high statistical significance, that the data support the conclusion that older Type Ia supernovae are brighter. If you then plug their correction into the analysis of the expansion of the universe, it turns out that the universe is currently decelerating, not accelerating.
This would not necessarily mean that dark energy does not exist. Rather, they think that dark energy is weakening over time. We are already past the point where gravity is stronger than dark energy. If true, this means the universe will not expand forever, but will eventually come back together in what is called the “Big Crunch.”
However – the rest of the astronomy community is skeptical, to varying degrees. Some argue that, while statistically significant, the effect size is tiny and could very easily be an artifact of the analysis. This same group has also made similar claims before, and those prior claims did not stand up to scrutiny. So their track record does not instill confidence.
This kind of debate among scientists is healthy. One study should not be enough to reverse a longstanding conclusion. But at the same time scientists need to be open to such challenges. In the end – the evidence will reign supreme, and will determine the consensus that emerges among astronomers. In the end, it’s hard to argue with the evidence.
The good thing about astronomy is that you can simply make more observations. This is what needs to happen – more and more detailed observations will either confirm or refute the conclusions of the South Korean researchers. Their paper will then either fade into obscurity or become a seminal paper, and perhaps even the basis of a future Nobel Prize.
Meanwhile, the debate about the ultimate fate of the universe continues. I have followed this question for decades, and it remains a fascinating question. There are no implications for us in the near term, of course, we are talking about what will happen billions or trillions of years in the future. But it is important for our understanding of the universe, and it is interesting to contemplate the ultimate fate of everything.
These are two very different visions of the future. In the Big Crunch scenario, the expansion of the universe continues to slow and eventually stops. Then the universe will slowly start coming back together. This process will accelerate until you have the opposite of the Big Bang – the entire universe collapses into a singularity. This, of course, raises the question of what happens next – will this lead to another Big Bang in an endless cycle? There is something intriguing about this.
The other possibility is that the universe simply continues to expand forever. Eventually we will experience the heat death of the universe, when there is no more energy to do anything. It is also possible that the accelerated expansion will get so great that even atoms come apart in a “Big Rip”. The big difference in this scenario is that there is no cycle – the universe is a one-off. Perhaps there are many universes, and there is a greater cycle, but our universe will die.
This question has gone back and forth over my lifetime, and perhaps it will again. This is partly because, when we look at the mass-energy of the universe it is very close to being right at the equilibrium point, the point at which expansion will slow asymptotically to zero, but not contract or rip apart. Perhaps this is because that is the actual fate of the universe – balanced right on the edge between endless expansion and the Big Crunch.
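That equilibrium point is the critical density, and it follows from the Hubble constant alone. A quick sketch, using an approximate H0 of 70 km/s/Mpc:

```python
import math

G = 6.674e-11                     # gravitational constant, m^3 kg^-1 s^-2
H0_km_s_per_mpc = 70              # approximate Hubble constant
meters_per_mpc = 3.086e22

H0 = H0_km_s_per_mpc * 1000 / meters_per_mpc     # convert to 1/s
rho_crit = 3 * H0**2 / (8 * math.pi * G)         # critical density

print(f"critical density: ~{rho_crit:.1e} kg/m^3")   # ~9e-27 kg/m^3
# That's roughly five hydrogen atoms per cubic meter - and the measured
# mass-energy density of the universe sits strikingly close to this value.
```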
At this point I think it is reasonable to say that we don’t know. At least there is significant uncertainty, enough that subtle changes to our understanding of phenomena like Type Ia supernova can change the conclusion. But that also makes it an exciting science story to follow.
The post Challenging the Acceleration of the Universe first appeared on NeuroLogica Blog.
Definitely the most fascinating and perhaps controversial topic in neuroscience, and one of the most intense debates in all of science, is the ultimate nature of consciousness. What is consciousness, specifically, and what brain functions are responsible for it? Does consciousness require biology, and if not what is the path to artificial consciousness? This is a debate that possibly cannot be fully resolved through empirical science alone (for reasons I have stated and will repeat here shortly). We also need philosophy, and an intense collaboration between philosophy and neuroscience, informing each other and building on each other.
A new paper hopes to push this discussion further – On biological and artificial consciousness: A case for biological computationalism. Before we delve into the paper, let’s set the stage a little bit. By consciousness we mean not only the state of being wakeful and conscious, but the subjective experience of our own existence and at least a portion of our cognitive state and function. We think, we feel things, we make decisions, and we experience our sensory inputs. This itself provokes many deep questions, the first of which is – why? Why do we experience our own existence? Philosopher David Chalmers asked an extremely provocative question – could a creature have evolved that is capable of all of the cognitive functions humans have but not experience their own existence (a creature he termed a philosophical zombie, or p-zombie)?
Part of the problem with this question is – how could we know whether an entity was experiencing its own existence? If a p-zombie could exist, then any artificial intelligence (AI), even one capable of duplicating human-level intelligence, could be a p-zombie. If so, what is different between the AI and biological consciousness? At this point we can only ask these questions; some of them may need to wait until we actually develop human-level AI.
What are the various current theories of consciousness? Any summary I give in a single blog post is going to be a massive oversimplification, but let me give the TLDR. First we have dualism vs pure naturalistic neuroscience. There are many flavors of dualism, but basically it is any philosophy that posits that consciousness is something more than just the biological function of the brain. We are actually not discussing dualism in this article. I have made my position on this clear in the past – there is no scientific basis for dualism, and the neuroscientific model is doing just fine without having to introduce anything non-naturalistic or other than biological function to explain consciousness. The new paper is essentially a discussion entirely within the naturalistic neuroscience model of consciousness (which is where I think the discussion should be).
Within neuroscience the authors summarize the current debate this way:
“Right now, the debate about consciousness often feels frozen between two entrenched positions. On one side sits computational functionalism, which treats cognition as something you can fully explain in terms of abstract information processing: get the right functional organization (regardless of the material it runs on) and you get consciousness. On the other side is biological naturalism, which insists that consciousness is inseparable from the distinctive properties of living brains and bodies: biology isn’t just a vehicle for cognition, it is part of what cognition is.”
They propose what they consider to be the new theory of “biological computationalism”. They write:
“For decades, it has been tempting to assume that brains “compute” in roughly the same way conventional computers do: as if cognition were essentially software, running atop neural hardware. But brains do not resemble von Neumann machines, and treating them as though they do forces us into awkward metaphors and brittle explanations. If we want a serious theory of how brains compute and what it would take to build minds in other substrates, we need to widen what we mean by “computation” in the first place.”
I mostly agree with this, but I think they are exaggerating the situation a bit. My reaction to reading this was – but, this was already my understanding for years. For example, in 2017 I wrote:
“For starters, the brain is neither hardware or software, it is both simultaneously – sometimes called “wetware.” Information is not stored in neurons, the neurons and their connections are the information. Further, processing and receiving information transforms those neurons, resulting in memory and learning.”
For the record, the idea that brains are simultaneously hardware and software, and that these two functions cannot be disentangled, goes back at least to the 1970s. Gerald Edelman, for example, stressed that the brain was neither software nor hardware but both simultaneously. Any meaningful discussion of this debate is a book-length task, and experts can argue about the exact details of the many formulations of these various theories over the years. Just know these ideas have all been hashed out over decades, without any clear resolution, but it has certainly been my understanding that the “wetware” model is dominant in neuroscience. Also – I think the debate is better understood as a spectrum from computationalism at one end to biological naturalism at the other. Even the original proponents of computationalism, for example, recognized the biological nature and constraints of that information processing. The debate is mainly about degree.
In any case, the authors do, I think, make a good contribution to the wetware side in this discussion, essentially reformulating it as their “biological computationalism” theory. This theory has three components. The first is that biological consciousness, and brain function more generally, is a hybrid between discrete events and continuous dynamics. Neurons spiking may be discrete events, but they occur on a background of chemical gradients, synaptic anatomy, voltage fields, and other aspects of brain biology. The discrete events affect the continuous dynamic state of the brain, which in turn affects the discrete events.
Second, the brain is “scale-inseparable”, which is just another way of saying that hardware and software cannot be separated. There is no algorithm running on brain hardware – the hardware is the algorithm and it is altered by the function of the algorithm – they are inseparable.
Third, brain function is constrained by the availability of energy and resources, or what they call “metabolically grounded”. This is fundamental to many aspects of brain function, which evolved to be energy and metabolically efficient. You cannot fully understand why the brain works the way it does without understanding this metabolic grounding.
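As a toy illustration of the first two components (my own sketch, not a model from the paper): a leaky integrate-and-fire neuron in which continuous membrane dynamics produce discrete spikes, and each spike in turn rewrites the “hardware” (the synaptic weight) that shapes all future dynamics:

```python
# Toy sketch (not from the paper): continuous dynamics plus discrete events,
# where the discrete events modify the substrate that generates the dynamics.
dt = 1.0          # ms per step
tau = 20.0        # membrane time constant (ms)
v = 0.0           # membrane potential - the continuous state
threshold = 1.0   # spike threshold
weight = 0.08     # synaptic "hardware," itself modified by activity
spikes = []

for t in range(500):
    drive = weight * 1.0              # constant presynaptic input
    v += dt * (-v / tau + drive)      # continuous (leaky) dynamics
    if v >= threshold:                # discrete event: a spike
        spikes.append(t)
        v = 0.0                       # reset
        weight *= 1.01                # plasticity: the event alters the substrate

print(f"{len(spikes)} spikes; final weight {weight:.3f}")
```

There is no separate “program” here that could be lifted off the substrate – the weight is both the memory and the machinery, which is the scale-inseparability point in miniature.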
I fully agree with the first two points, and think this is a good way of framing the “wetware” side of this debate. I think the brain is metabolically grounded, but that may be incidental to the question of consciousness. An AI, for example, may be grounded by other physical constraints, or may be functionally unlimited, and I don’t see how that would matter to whether or not it could generate consciousness.
What does all this say about the ability to create artificial consciousness? That remains to be seen. I think what it means is that it is possible we will not be able to create true AI self-aware consciousness with software alone. We may need to create a physical computational system that functions more like biology, with hardware and software being inseparable, and with discrete events and continuous dynamics also being entangled. I don’t think the authors answer this question so much as provide a framework for discussing it.
It may be true that these aspects of brain function are not necessary for, but are incidental to, the phenomenon of consciousness. It may also be true that there is more than one way to achieve consciousness, and the fact that human brains do it in one way does not mean it is the only possible way. Further, even if their theory is correct, I don’t think this answers the question of whether or not a virtual brain would be conscious.
In other words – if we have a powerful enough computer to create a virtual human brain – so all the aspects of brain function are simulated virtually rather than built into the hardware – could that virtual brain generate consciousness? I personally think it would, but it’s a fascinating question. And again, we still have the problem of – how would we really know for sure?
The good news is I think we are on a steady road to incremental advances in the question of consciousness. We have a collaboration among philosophers, neuroscientists, and computational scientists each contributing their bit from their own perspective, and the discussion has been slowly grinding forward. It has been incredible, and challenging, to follow and I can’t wait to see where it goes.
The post Biological vs Artificial Consciousness first appeared on NeuroLogica Blog.
As human civilization spreads into every corner of the world, human and animal territories are butting up against each other more intensely. This often doesn’t end well for the animals. This is also causing evolutionary pressures that are adapting some species to living in close proximity to humans.
Humans cause significant changes to the environment – we may, for example, clear forests in order to plant crops. We also convert a lot of land to human living spaces. We alter the ecosystem with lots of light pollution. We are also now warming the planet.
Humans also produce a lot of food, and along with it a lot of food waste. One of the common rules of evolution is that if a resource exists, something will adapt to exploit it. Perhaps the most versatile species in terms of adapting to human sources of food is the rat. Rats follow humans everywhere we go, and prosper in our shadow. New York City is experiencing this phenomenon firsthand – there is basically no effective way to deal with the rat problem in the city as long as it has a food waste problem. The city will need to significantly reduce the availability of food waste if it wants to make any dent in the rat population.
There is another way that humans provide a selective pressure on the animals that live close to us – we kill aggressive animals. A recent study shows this effect in a population of brown bears that live in Italy, close to humans. This isolated population has become its own genetic subpopulation of brown bears with distinctive features, including a genetic profile associated with less aggressiveness. Make no mistake, these are still wild animals, and brown bears are dangerous. But they are less aggressive than other brown bears.
Another example is the golden jackals of Israel. They too have been living in close proximity to humans for years, resulting in “partial self-domestication”. This is likely very similar to the process of domestication of wolves into dogs. There are likely several selective pressures involved, not just humans having a higher tendency to kill very aggressive animals. Humans are also, as I said above, a source of food. Those animals that are less afraid of humans and willing to get a little closer to them have access to lots of calories, which is a massive survival advantage. At first human waste may simply be a calorie supplement, providing an advantage for calmer and less threatening-looking animals. Then, as they come to depend more and more on humans for food, the need to hunt decreases. Evolutionary pressures then favor a shift away from hunting – away from being large, muscular, and aggressive, and even away from camouflage. Selective pressures favor a friendlier demeanor and cuter physical characteristics.
The end-stage of this process is full domestication, as happened with dogs, but this is a continuum. It is likely that most mammal species have the potential to be domesticated. There are the now famous experiments with laboratory domestication of silver foxes. By selecting individuals with a calmer demeanor, researchers were able to produce a semi-domesticated fox breed in a matter of decades. Interestingly, by selecting for behavior a suite of other features came along for the ride, including floppy ears, a spotted coat, and a generally cuter appearance.
There is even a hypothesis that humans self-domesticated. This process may have begun with our split from Neanderthals 600,000 years or so ago, and continued into modern times. The idea is that we collectively will punish, in some way, members of our society that are very aggressive. Violent criminals may be punished in a way (execution, for example) that provides a negative selective pressure, so that over time genes for violence and aggression become less common in the population. In an intensely social setting, selective pressures may favor the ability to cooperate and get along. So the first species we domesticated may have been ourselves.
But to be clear, humans are not the sole agent of domestication. As I outlined above, the process starts with the species itself. Dogs likely self-domesticated much of the way, before humans took over and started breeding them. The trigger for this self-domestication was the availability of human waste food, but humans were not the direct agents of the process.
It is likely that nature will continue to adapt to the overwhelming presence of humans on the planet. For animals there is mostly one choice – if you want to live, you have to live with humans. There are still plenty of wild refuges in the world, but they are mostly hemmed in by civilization, and they are mostly managed parks. Eventually contact with humans may be sufficient to provide selective pressures on more and more species.
The brown bear example is extremely interesting, and makes me wonder about other bear populations. There is a large and growing black bear population in Connecticut where I live. I have had black bears many times in my yard and even on my deck. They have come to associate humans with food, and are very adept at accessing human waste food or other sources (like bird feeders). It seems likely that the more contact these bears have with humans, the less aggressive they will become. They will learn to live on the edges of human space without getting killed.
Cars are another source of selective pressure. Many species may evolve behaviors to minimize their chance of being struck by a vehicle.
Humans are also learning to adapt to the animals they live near. This is more cultural than evolutionary, but people who live close to wildlife generally learn the rules, just as people in CT are learning to live with black bears. This means you cannot store your bird seed outside, you cannot leave your garbage outside overnight, and you need to learn to stay out of the bear’s way. People in the western part of the US have similarly learned to live in proximity to mountain lions. These animals are also moving east (filling a niche left by the killing off of most wolves in the east), and so within a few decades easterners will have to learn to live with mountain lions as well.
Make no mistake – bears and lions are still dangerous wild animals. One risk is that as these species become a little less aggressive people will act as if they are not threatening, and will put themselves unnecessarily at risk. It may be a good thing that they are less aggressive, so that the risk of dangerous human-animal interactions is reduced, but that means we need to have high awareness that these are wild animals and we need to respect their space as well. Reducing the friction between humans and animals works both ways.
The post Animals Adapting to Humans first appeared on NeuroLogica Blog.
We are not close to mining asteroids, but the idea is intriguing enough to cause some serious study of the potential. The idea is simple enough – our solar system is full of chunks of rock with valuable minerals. If we could make it economically viable to mine even a tiny percentage of these asteroids the potential would be immense, a game changer for many types of resources. How valuable are asteroids?
The range of potential value is extreme, but at the high end we have a large metal rich asteroid like 16 Psyche in the asteroid belt. Astronomers estimate that the iron in 16 Psyche alone is worth about $10,000 quadrillion on today’s market. By comparison the world’s current economic output is just over $100 trillion, so that’s 100,000 times the world’s annual economic output. Of course, the cost of extraction would be high and the market value would likely be dramatically affected by such a resource, but it shows the dramatic potential of mining asteroids. Some asteroids are rich in platinum-group metals or rare earths, which would be even more valuable. But even the more common carbonaceous asteroids would likely have minerals worth quadrillions.
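Just to make the scale of that comparison explicit, here is a quick back-of-the-envelope check using the rough figures quoted above (order-of-magnitude estimates, not precise valuations):

```python
# Order-of-magnitude check of the comparison in the text.
psyche_iron_value = 10_000e15   # ~$10,000 quadrillion estimate for the iron in 16 Psyche
world_annual_output = 100e12    # ~$100 trillion current world economic output per year

print(psyche_iron_value / world_annual_output)  # -> 100000.0, i.e. ~100,000 years of output
```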
Again, these figures are likely not the actual profit that would be realized from mining asteroids, but they indicate that it is very likely economically viable to do so. I am reminded of the fact that aluminum was more expensive than gold in the 19th century. Then a process for extracting and refining aluminum from dirt was found, and now it is worth about $1.30 a pound. Still, the aluminum industry is worth about $300 billion today. Mining asteroids would have a similar effect on many industries.
There are two basic uses for the material mined from asteroids. The first is to provide resources for space exploration and settlement itself. It is really expensive to get things into space, and getting out of Earth’s gravity well is the vast majority of the cost. Once in Earth’s orbit, you are most of the way there (in terms of energy costs) to pretty much anywhere in the inner solar system. So extracting resources away from Earth would potentially be extremely cost-effective. The more local the better, but even mining an asteroid for material to be used on the Moon is a huge advantage over blasting material off the Earth.
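To put some rough numbers behind that claim, here is a minimal sketch using approximate, illustrative delta-v figures (they vary with the chosen trajectory and are not taken from any study discussed here). Because propellant requirements grow exponentially with delta-v (the Tsiolkovsky rocket equation), skipping the climb out of Earth’s gravity well is an enormous saving:

```python
import math

# Rough, illustrative delta-v values in km/s; actual figures depend on the trajectory.
dv_surface_to_leo = 9.4        # Earth surface to low Earth orbit, including losses
dv_leo_to_mars_transfer = 3.9  # low Earth orbit to a Mars transfer orbit

# Tsiolkovsky rocket equation: fraction of the vehicle that must be propellant
# for a given delta-v, assuming a high-performance chemical engine.
v_exhaust = 4.4  # km/s, roughly hydrogen/oxygen class

def propellant_fraction(dv, ve=v_exhaust):
    return 1 - math.exp(-dv / ve)

print(f"Surface -> LEO: ~{propellant_fraction(dv_surface_to_leo):.0%} propellant")
print(f"LEO -> Mars transfer: ~{propellant_fraction(dv_leo_to_mars_transfer):.0%} propellant")
```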
Further, many asteroids, and especially comets, have water-rich minerals or frozen volatiles. Having a steady water supply is essential if we want humans to live in space. Hydrogen from water is also potentially a source of fuel (not energy, just a way of storing energy in hydrogen).
The second use is to bring valuable minerals back to Earth. For this purpose we would want to target asteroids that are already close to Earth, and even come close to our orbit. We could even potentially alter the orbit of such asteroids to keep them in an Earth-lunar orbit, or to rest near a Lagrangian point (a “valley” in the combined gravitational fields of multiple objects that keeps objects in place). We could then mine them at our leisure.
Further, if we identify an asteroid whose orbit might intersect with Earth’s, and therefore poses a threat of impact, we could deal with it by simply mining it out of existence. Therefore we get a double benefit – we get the minerals and we eliminate a potential threat to the Earth.
Right now we are mostly studying asteroids (and mostly by studying meteorites) to determine their composition, to learn how to identify that composition, and to determine which specific asteroids might be targets for future mining. To kickstart an asteroid mining industry we would likely want to pick the lowest-hanging fruit first – which means the easiest to mine, close to Earth, and chock full of highly valuable metals. Even still, this would require a massive investment with a very long horizon before returns are realized.
But once we get a toe-hold in this industry, the potential value is so extreme it will likely take off. We need to develop the technology for mining in low gravity environments, and develop cost-effective methods for returning the ore to Earth or perhaps even refining it in space for delivery to the Moon or Mars. Technological progress over the last two decades, specifically with reusable rockets dramatically lowering the cost of getting into space, makes mining asteroids more feasible, but further technological progress is still required.
It is easy to imagine that in a few hundred years something like the Belters of The Expanse might become a reality – people living permanently in the asteroid belt, mining it for its resources. It’s also possible that the industry would be entirely robotic – why put frail humans into the harsh environment of space unless they are absolutely necessary? Robotics and AI advances have also been extensive in the last decade, and it would certainly be more cost-effective to extract resources without the added expense of keeping people alive in space. Belters, in other words, are likely to be robots.
The post Mining Asteroids first appeared on NeuroLogica Blog.
A new study reinforces the evidence for the safety and efficacy of the mRNA COVID-19 vaccines. That’s the TLDR, but let’s dive into the details.
Medical evidence is always rolled out in stages. First there is what we would consider preclinical evidence, or basic science. This could be initial uncontrolled clinical observations, or mechanistic animal or in vitro research. At some point we have sufficient evidence to generate a hypothesis that a specific treatment could be effective in treating a specific disease, enough to progress to human research. For FDA qualifying research, there are four specific phases. Phase I trials look at the safety of the intervention, usually in healthy volunteers, while also answering basic questions about mechanism and effects. If there are no safety red flags then the research progresses to a phase II trial, which looks for preliminary evidence of efficacy, and further safety data. Again, if that data continues to look encouraging we can progress to a phase III trial, which is a larger and more rigorous trial designed to be definitive. Usually the FDA requires several phase III trials to grant approval of a drug for a specific indication. Then, once the drug is on the market, there are phase IV trials, which look at data from more widespread use to confirm safety and effectiveness in the real world.
Looked at another way, we do research in the lab, then on dozens of people, then scores to hundreds of people, then hundreds to thousands of people, and then finally on thousands to millions of people. Each step of the way we gain the ability to detect less and less common side effects in a broader set of people. Further, the types of evidence are designed to be complementary. Phase III trials, for example, are rigorously experimental, with highly defined populations and randomization to control as many variables as possible. Phase IV trials, on the other hand, are generally observational, designed to look at very large numbers of people in an uncontrolled setting – to determine how safe and effective the treatment is in real-world conditions.
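As a simplified illustration of why sample size matters for safety (this arithmetic is mine, not drawn from any specific trial), consider the chance of seeing even one case of a side effect that occurs in 1 out of every 10,000 people:

```python
# Probability of observing at least one case of a rare side effect
# (occurring at a given rate) in a trial or surveillance population of size n.
def p_at_least_one(rate, n):
    return 1 - (1 - rate) ** n

for n in (100, 1_000, 30_000, 1_000_000):
    print(f"n = {n:>9,}: {p_at_least_one(1e-4, n):.1%} chance of seeing it at least once")
```

A phase I trial of a hundred people will almost certainly miss such an effect; only large phase III trials and post-marketing surveillance have a realistic chance of catching it.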
The mRNA vaccines for COVID all went through phase I-III trials before getting approval. Operation Warp Speed accelerated the process not by cutting corners, but by doing the trials more in parallel rather than sequentially (they could at least begin to recruit for the phase III trial while the phase II data was still being analyzed) and by streamlining the red tape – the science still had to get done. Since the vaccine has been in use we have the opportunity to gather phase IV type data. Billions of people have received at least one dose of a COVID-19 vaccine, so that is a lot of data to pore over.
In the recent study:
“This cohort study used data from the French National Health Data System for all individuals in the French population aged 18 to 59 years who were alive on November 1, 2021. Data analysis was conducted from June 2024 to September 2025.”
Some countries have socialized medicine, including centralized health data banks, which provides a very convenient source for this kind of observational research. This study was able to compare 22,767,546 vaccinated and 5,932,443 unvaccinated individuals. The strength of this kind of study is that it is very representative, because it is so inclusive, and it is statistically robust. The challenge is that it is uncontrolled, so there are always potential confounding factors – differences between those who choose to get vaccinated and those who do not. So how do the researchers deal with these confounding factors? Through statistical weighting.
They looked at sociodemographic characteristics and 41 comorbidities and then weighted the results accordingly. They could still be missing something, but that is a pretty thorough analysis. Their main outcomes were death due to COVID-19 and all-cause mortality over a four year period. They also did a separate analysis of all-cause mortality in the six months following vaccination. For the unvaccinated group, another endpoint was getting vaccinated (after which, of course, they were no longer counted as unvaccinated).
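For readers unfamiliar with this kind of adjustment, here is a deliberately simplified sketch of one common weighting approach (inverse-probability weighting) with a single made-up confounder. The study’s actual statistical method may differ; this only illustrates the general idea of reweighting groups so they resemble each other on measured characteristics:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# One hypothetical confounder (age) that influences both the choice to vaccinate
# and, in reality, the risk of death. All numbers here are invented for illustration.
age = rng.uniform(18, 59, n)
p_vacc = 1 / (1 + np.exp(-(age - 38) / 10))   # older people choose vaccination more often
vaccinated = rng.random(n) < p_vacc

# Inverse-probability weights: each person is weighted by 1 / P(their observed choice).
# In practice that probability is estimated (e.g., by regression) from many covariates.
p_observed_choice = np.where(vaccinated, p_vacc, 1 - p_vacc)
weights = 1 / p_observed_choice

# Before weighting the groups differ in age; after weighting they look alike.
print("Unweighted mean age:", age[vaccinated].mean(), "vs", age[~vaccinated].mean())
print("Weighted mean age:  ",
      np.average(age[vaccinated], weights=weights[vaccinated]), "vs",
      np.average(age[~vaccinated], weights=weights[~vaccinated]))
```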
The results are fairly dramatic. The vaccinated group had a 74% lower risk of death from COVID-19, indicating that the vaccine is effective in preventing death from COVID. But also, over the four year period the vaccinated group had a 25% lower risk of all-cause mortality, even when you eliminate death from COVID. Mortality was 29% lower in the first six months after getting vaccinated.
This data pretty clearly reflects that the mRNA vaccines were effective, at least in preventing death from severe COVID. The data is also very reassuring that the vaccines are safe. There could still be extremely rare, one in a million type side effects, but there does not appear to be any significant negative effects from the vaccine that could contribute to the risk of death. Medical interventions are all about risk vs benefit – no intervention is risk free, so having zero risk is not a rational or reasonable criterion. What we like to see is a robust increased benefit vs risk.
The bottom line is that if you chose to get an mRNA COVID-19 vaccine in 2021 you were much less likely to die of COVID-19, and less likely to die of any cause. Clearly there is significant benefit in excess of any risk, which all the data indicates is tiny.
The post New Study on the COVID-19 mRNA Vaccines first appeared on NeuroLogica Blog.
We have all likely had the experience that when we learn a task it becomes easier to learn a distinct but related task. Learning to cook one dish makes it easier to learn other dishes. Learning how to repair a radio helps you learn to repair other electronics. Even more abstractly – when you learn anything it can improve your ability to learn in general. This is partly because primate brains are very flexible – we can repurpose knowledge and skills to other areas. This is related to the fact that we are good at finding patterns and connections among disparate items. Language is also a good example of this – puns or witty linguistic humor is often based on making a connection between words in different contexts (I tried to tell a joke about chemistry, but there was no reaction).
Neuroscientists are always trying to understand what we call the “neuroanatomical correlates” of cognitive function – what part of the brain is responsible for specific tasks and abilities? There is no simple one-to-one correlation. I think the best current summary of how the brain is organized is that it is made of networks of modules. Modules are nodes in the brain that do specific processing, but they participate in multiple different networks or circuits, and may even have different functions in different networks. Networks can also be more or less widely distributed, with higher cognitive functions tending to involve more widely distributed networks than simple, specific tasks.
What, then, is happening in the brain when we exhibit this cognitive flexibility, repurposing elements of one learned task to help learn a new task? To address this question Princeton researchers looked at rhesus macaques. Specifically they wanted to know if primates engage in what is called “compositionality” – breaking down a task into specific components that can then be combined to perform the task. Those components can then be combined in new arrangements to compose a new task, like building with legos.
They taught the macaques different tasks, such as discriminating between shapes or colors. The tasks had a range of difficulty; for example, they had to distinguish between red and blue, with some of the colors being vibrant and obvious while others were muted or ambiguous. To indicate which shape or color they were perceiving they had to look either to the upper left or the lower right on some tasks, or the upper right and lower left on others. Essentially they had to link a sensory perception to a motor action. The question was – when the tasks were shuffled, would they use the same brain components (or what the researchers call “subspaces”) in a new combination to perform the new task? And the answer is – yes, that is exactly what they did.
Obviously, this is a rather simple construct, and it is only one study, but the evidence is consistent with the compositionality hypothesis. More research will be needed to confirm these results for different tasks with more complexity, and of course to replicate these results in humans. I think the idea of compositionality makes sense, but not everything that makes sense in science turns out to be true. Some ideas in neuroscience are discarded when they turn out not to be true, like the notion of the “global workspace” (an area of the brain that was the common networking hub of all consciousness).
There is also already research indicating that compositionality is just one feature of learning that exists on a continuum (probably) with another feature of learning – interference. The way you measure interference is to train someone on task A, then train them on related task B, and then retest them on task A. If learning task B reduces their performance on task A, that is interference. You have probably experienced this as well – you sometimes have to “unlearn” a new task to go back to an older one. My family has two cars, one with regenerative braking and one without, with each requiring a slightly different driving style. With regenerative braking, when you lift off the gas it slows the car through resistance. Switching back and forth causes a bit of interference, and it takes a moment to adapt to the new task.
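To make the A → B → A logic concrete, here is one hypothetical way to score transfer and interference (the definitions and numbers are mine, purely to illustrate the design described above):

```python
# Hypothetical scoring for an A -> B -> A design, using accuracy on a 0-1 scale.
def transfer(baseline_on_B, B_after_training_A):
    """Positive if learning task A improved initial performance on task B."""
    return B_after_training_A - baseline_on_B

def interference(A_before_B, A_after_B):
    """Positive if learning task B degraded subsequent performance on task A."""
    return A_before_B - A_after_B

# Example numbers for one imaginary learner:
print(f"transfer: {transfer(baseline_on_B=0.55, B_after_training_A=0.70):+.2f}")      # +0.15
print(f"interference: {interference(A_before_B=0.80, A_after_B=0.72):+.2f}")          # +0.08
```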
It turns out, humans and neural networks display similar patterns of compositionality and interference. People exist along a spectrum with “lumpers” transferring skills from one task to another more easily, but also displaying more interference, and “splitters” who do not transfer skills as much, but also do not suffer interference as much. It appears to be a tradeoff, with different people having different tradeoffs between these two features of learning. In other words, if you reuse cognitive legos to build new tasks, that will make it easier to learn new related tasks because you can repurpose existing skills. But then those legos are networked with other tasks, which can cause interference with previously learned tasks using the same legos. Or – you build an entirely new network for a new task, which takes more time but does not repurpose and therefore does not cause interference with previously learned tasks. Which is better? There is likely no simple answer, as it is probably very context dependent.
Further, if people fall along the lumper to splitter spectrum, is that consistent across cognitive domains? Can one person be a lumper for some kinds of tasks and a splitter for others? Can we start as a lumper, but then morph into a splitter if we switch among tasks frequently over time, thereby reducing interference? Will different learning mechanisms favor adopting a lumper vs splitter strategy? Sometimes I want to be flexible and adapt quickly, at other times I may want to invest the time to minimize interference as I switch among tasks. Is there a way to get the best of both worlds?
That’s the thing with interesting research, it usually provokes more questions than it answers. Lots to do.
The post Cognitive Legos first appeared on NeuroLogica Blog.