Perhaps the most fascinating and controversial topic in neuroscience, and one of the most intense debates in all of science, is the ultimate nature of consciousness. What is consciousness, specifically, and what brain functions are responsible for it? Does consciousness require biology, and if not, what is the path to artificial consciousness? This is a debate that possibly cannot be fully resolved through empirical science alone (for reasons I have stated before and will repeat shortly). We also need philosophy, and an intense collaboration between philosophy and neuroscience, each informing and building on the other.
A new paper hopes to push this discussion further – On biological and artificial consciousness: A case for biological computationalism. Before we delve into the paper, let’s set the stage a little bit. By consciousness we mean not only the state of being wakeful and conscious, but the subjective experience of our own existence and at least a portion of our cognitive state and function. We think, we feel things, we make decisions, and we experience our sensory inputs. This itself provokes many deep questions, the first of which is – why? Why do we experience our own existence? Philosopher David Chalmers asked an extremely provocative question – could a creature have evolved that is capable of all the cognitive functions humans have but does not experience its own existence (a creature he termed a philosophical zombie, or p-zombie)?
Part of the problem with this question is this – how could we know whether an entity was experiencing its own existence? If a p-zombie could exist, then any artificial intelligence (AI), even one capable of duplicating human-level intelligence, could be a p-zombie. If so, what is different between the AI and biological consciousness? At this point we can only ask these questions; some of them may need to wait until we actually develop human-level AI.
What are the various current theories of consciousness? Any summary I give in a single blog post is going to be a massive oversimplification, but let me give the TLDR. First we have dualism vs pure naturalistic neuroscience. There are many flavors of dualism, but basically it is any philosophy that posits that consciousness is something more than just the biological function of the brain. We are actually not discussing dualism in this article. I have made my position on this clear in the past – there is no scientific basis for dualism, and the neuroscientific model is doing just fine without having to introduce anything non-naturalistic or other than biological function to explain consciousness. The new paper is essentially a discussion entirely within the naturalistic neuroscience model of consciousness (which is where I think the discussion should be).
Within neuroscience the authors summarize the current debate this way:
“Right now, the debate about consciousness often feels frozen between two entrenched positions. On one side sits computational functionalism, which treats cognition as something you can fully explain in terms of abstract information processing: get the right functional organization (regardless of the material it runs on) and you get consciousness. On the other side is biological naturalism, which insists that consciousness is inseparable from the distinctive properties of living brains and bodies: biology isn’t just a vehicle for cognition, it is part of what cognition is.”
They propose what they consider to be a new theory, “biological computationalism”. They write:
“For decades, it has been tempting to assume that brains “compute” in roughly the same way conventional computers do: as if cognition were essentially software, running atop neural hardware. But brains do not resemble von Neumann machines, and treating them as though they do forces us into awkward metaphors and brittle explanations. If we want a serious theory of how brains compute and what it would take to build minds in other substrates, we need to widen what we mean by “computation” in the first place.”
I mostly agree with this, but I think they are exaggerating the situation a bit. My reaction to reading this was – but, this was already my understanding for years. For example, in 2017 I wrote:
“For starters, the brain is neither hardware or software, it is both simultaneously – sometimes called “wetware.” Information is not stored in neurons, the neurons and their connections are the information. Further, processing and receiving information transforms those neurons, resulting in memory and learning.”
For the record, the idea that brains are simultaneously hardware and software, and that these two functions cannot be disentangled, goes back at least to the 1970s. Gerald Edelman, for example, stressed that the brain was neither software nor hardware but both simultaneously. Any meaningful discussion of this debate is a book-length task, and experts can argue about the exact details of the many formulations of these various theories over the years. Just know these ideas have all been hashed out over decades, without any clear resolution, but it has certainly been my understanding that the “wetware” model is dominant in neuroscience. Also – I think the debate is better understood as a spectrum from computationalism at one end to biological naturalism at the other. Even the original proponents of computationalism, for example, recognized the biological nature and constraints of that information processing. The debate is mainly about degree.
In any case, the authors do, I think, make a good contribution to the wetware side in this discussion, essentially reformulating it as their “biological computationalism” theory. This theory has three components. The first is that biological consciousness, and brain function more generally, is a hybrid between discrete events and continuous dynamics. Neurons spiking may be discrete events, but they occur on a background of chemical gradients, synaptic anatomy, voltage fields, and other aspects of brain biology. The discrete events affect the continuous dynamic state of the brain, which in turn affects the discrete events.
Second, the brain is “scale-inseparable”, which is just another way of saying that hardware and software cannot be separated. There is no algorithm running on brain hardware – the hardware is the algorithm and it is altered by the function of the algorithm – they are inseparable.
Third, brain function is constrained by the availability of energy and resources – what they call being “metabolically grounded”. This is fundamental to many aspects of brain function, which evolved to be energetically and metabolically efficient. You cannot fully understand why the brain works the way it does without understanding this metabolic grounding.
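To make these three properties concrete, here is a deliberately crude toy model (my own sketch, not anything from the paper): a single leaky neuron whose continuous membrane potential produces discrete spikes, whose “hardware” (a synaptic weight) is rewritten by its own activity, and whose firing is gated by an energy budget. All the numbers are arbitrary illustrative choices.

```python
# Toy sketch of the three claimed properties of biological computation.
# Not from the paper; all parameters are arbitrary illustrative values.
import random

dt = 1.0            # time step (ms)
v = 0.0             # continuous membrane potential (arbitrary units)
threshold = 1.0     # spike threshold
weight = 0.5        # input weight: the "hardware" that is also the program
energy = 100.0      # metabolic budget; each spike costs energy

for step in range(1000):
    # Continuous dynamics: the potential leaks and integrates weighted input.
    drive = weight * random.random()
    v += dt * (-0.1 * v + drive)
    # Metabolism slowly replenishes the energy budget.
    energy = min(100.0, energy + 0.05)

    # Discrete event: a spike, permitted only if the energy budget allows.
    if v >= threshold and energy >= 1.0:
        v = 0.0
        energy -= 1.0
        # Scale-inseparability: the spike modifies the substrate itself,
        # so there is no fixed hardware that a separate algorithm runs on.
        weight = min(1.0, weight + 0.01)
    else:
        # Without activity, the weight slowly decays back toward baseline.
        weight = max(0.1, weight - 0.0005)

print(f"final weight = {weight:.3f}, energy = {energy:.1f}")
```

The point is not biological realism but entanglement: you cannot point to a separable “program” here, because the thing that computes (the weight) is also the thing being changed by the computation, under a metabolic constraint.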
I fully agree with the first two points, and that this is a good way of framing the “wetware” side of this debate. I think the brain is metabolically grounded, but that may be incidental to the question of consciousness. An AI, for example, may be grounded by other physical constraints, or may be functionally unlimited, and I don’t see how that would matter to whether or not it could generate consciousness.
What does all this say about the ability to create artificial intelligence? That remains to be seen. I think what it means is that it is possible we will not be able to create true AI self-aware consciousness with software alone. We may need to create a physical computational system that functions more like biology, with hardware and software being inseparable, and with discrete events and continuous dynamics also being entangled. I don’t think the authors answer this question so much as provide a framework for discussing it.
It may be true that these aspects of brain function are not necessary for, but are incidental to, the phenomenon of consciousness. It may also be true that there is more than one way to achieve consciousness, and the fact that human brains do it in one way does not mean it is the only possible way. Further, even if their theory is correct, I don’t think this answers the question of whether or not a virtual brain would be conscious.
In other words – if we have a powerful enough computer to create a virtual human brain – so all the aspects of brain function are simulated virtually rather than built into the hardware – could that virtual brain generate consciousness? I personally think it would, but it’s a fascinating question. And again, we still have the problem of – how would we really know for sure?
The good news is I think we are on a steady road to incremental advances in the question of consciousness. We have a collaboration among philosophers, neuroscientists, and computational scientists each contributing their bit from their own perspective, and the discussion has been slowly grinding forward. It has been incredible, and challenging, to follow and I can’t wait to see where it goes.
The post Biological vs Artificial Consciousness first appeared on NeuroLogica Blog.
As 2025 barrels towards its depressing conclusion, I look back at the damage federal science and medicine have sustained thus far under Donald Trump and Robert F. Kennedy Jr. through the lens of a classic film. Truly, it's a mad house, in which our federal science apparatus is run by Lysenko's heirs.
The post It’s a madhouse! Public health under the heirs to Lysenko (and Dr. Zaius) in 2025 first appeared on Science-Based Medicine.

A joint research team from South Korea has developed a fascinating wheel, inspired by origami and Da Vinci bridge principles, that could unlock access to the Moon’s most dangerous and scientifically useful terrain. The wheel expands from 230 mm to 500 mm in diameter on demand, allowing small rovers to navigate steep lunar pits and lava-tube entrances that would trap conventional vehicles.
Astronomers using the Hubble Space Telescope have discovered the largest planet-forming disk ever observed around a young star, stretching nearly 40 times the diameter of our Solar System. Nicknamed “Dracula’s Chivito” for its hamburger-like appearance when viewed edge-on, this massive disk reveals an unexpectedly chaotic and asymmetric structure with wisps of material extending far above and below its central plane. The discovery offers an unprecedented window into how planets might form in extreme environments, challenging previous assumptions about the orderly nature of planetary nurseries.
Posing as a wise elder statesman, a neutral guardian of science, Dr. Ioannidis managed to pontificate mightily on COVID from a safe distance without ever being forced to acknowledge the tragic realities on the ground. There is no need for journalists to enable this charade today.
The post Dr. John Ioannidis: To Protect Science and Keep it Apolitical, We Must Not Resist MAHA. first appeared on Science-Based Medicine.

Several people, whom I won’t name, have taken to commenting more often than is suggested by Da Roolz. Let me reiterate the relevant one, Rool #9:
Try not to dominate threads, particularly in a one-on-one argument. I’ve found that those are rarely informative, and the participants never reach agreement. A good guideline is that if your comments constitute over 10% of the comments on a thread, you’re posting too much.
This is a guideline, not a hard-and-fast dictum, but be aware that comments should be informative, advance the discussion, and not be there just so you can tell the world that you exist. Comments that say “+1” are particularly egregious because they say nothing more than “I agree,” evincing a laziness that can’t even produce those two words! (And even “I agree” is not that useful.)
I can’t resist calling your attention to a 2016 article on free will, mainly because it appeared in The Atlantic—a magazine many here (including me) admire. And as I’m reading Matthew Cobb’s terrific new biography of Francis Crick, I see that Crick was a determinist like me, though he realized that different phenomena require different levels of analysis. Crick didn’t think that free will was even worth considering, and avoided it like the plague though he was deeply concerned with consciousness. His research program for understanding the brain is deeply deterministic and pretty reductionist. But read Matthew’s book for yourself.
In view of Crick’s ideas that I’ve just learned about, and a reader calling my attention to this article, which I haven’t seen, it’s worth seeing how author Stephen Cave deals with determinism. You can read the article by clicking below, but since it’s likely to be paywalled you can find it archived here.
The article’s main points are these, two of which are summarized in the title and subtitle (my take):
1.) We have no such thing as free will in the libertarian sense of “you could have done other than what you did”
2.) But studies show that if you reject free will, you are likely to cheat, be lazy and fatalistic, and reject the idea of moral responsibility
3.) To avoid these injurious social effects, we must confect a new take on free will, encouraging others to behave better. This can enhance “autonomy”: not “agency” or autonomy in the sense of “the ability to govern oneself” (neither of which we have), but autonomy in the sense of adhering to behaviors that help our selves and society.
Now #3 may look like a bogus solution, and author Stephen Cave sort of admits that, but we can clearly improve our behaviors with the right carrots and sticks. It’s a misconception about determinism that people’s behavior can’t be changed. Clearly, the influence of others, blaming and praising people for actions they consider respectively injurious and admirable, can, over time, change your neurons in such a way that you begin behaving in ways better for you and for society. The fly in this ointment is the infinite regress of determinism: whether and how we even try to change people’s minds is itself determined by people’s genes and environments. But I won’t go down that rabbit hole here.
Cave’s solution is at least better than that of compatibilists like Dan Dennett, who simply redefined free will so that we could tell people they had it. Since Dan adhered to point #2, thinking that belief in strict determinism was bad for everyone, he wrote two books designed to convince people that they had free will in a meaningful way. I found his arguments unconvincing. Dan later stressed that he was not making this “little people’s” argument, one similar to making the “belief in belief” claim that even though there’s no God, it’s good for society to be religious. But in Dan’s own writings I did find him making the Little People’s argument, which I quoted in a post here in 2022:
Here, for example, are two statements by the doyen of compatibilism, my pal Dan Dennett (sorry, Dan!):
There is—and has always been—an arms race between persuaders and their targets or intended victims, and folklore is full of tales of innocents being taken in by the blandishments of sharp talkers. This folklore is part of the defense we pass on to our children, so they will become adept at guarding against it. We don’t want our children to become puppets! If neuroscientists are saying that it is no use—we are already puppets, controlled by the environment, they are making a big, and potentially harmful mistake. . . . we [Dennett and Erasmus] both share the doctrine that free will is an illusion is likely to have profoundly unfortunate consequences if not rebutted forcefully.
—Dan Dennett, “Erasmus: Sometimes a Spin Doctor is Right” (Erasmus Prize Essay).
and
If nobody is responsible, not really, then not only should the prisons be emptied, but no contract is valid, mortgages should be abolished, and we can never hold anybody to account for anything they do. Preserving “law and order” without a concept of real responsibility is a daunting task.
—Dan Dennett, “Reflections on Free Will” (naturalism.org)
But you can be a “hard determinist” and still believe in responsibility!
Dan is no longer with us, but I did post these when he was alive, so I’m not beating a dead philosopher.
I will try to be brief, discussing the three points above. Quotes from the Atlantic article are indented, while my own take is flush left:
1.) We have no such thing as free will in the libertarian sense of “you could have done other than what you did.” To his credit, Cave admits this straight off, noting that science supports determinism.
In recent decades, research on the inner workings of the brain has helped to resolve the nature-nurture debate—and has dealt a further blow to the idea of free will. Brain scanners have enabled us to peer inside a living person’s skull, revealing intricate networks of neurons and allowing scientists to reach broad agreement that these networks are shaped by both genes and environment. But there is also agreement in the scientific community that the firing of neurons determines not just some or most but all of our thoughts, hopes, memories, and dreams.
. . . . The 20th-century nature-nurture debate prepared us to think of ourselves as shaped by influences beyond our control. But it left some room, at least in the popular imagination, for the possibility that we could overcome our circumstances or our genes to become the author of our own destiny. The challenge posed by neuroscience is more radical: It describes the brain as a physical system like any other, and suggests that we no more will it to operate in a particular way than we will our heart to beat. The contemporary scientific image of human behavior is one of neurons firing, causing other neurons to fire, causing our thoughts and deeds, in an unbroken chain that stretches back to our birth and beyond. In principle, we are therefore completely predictable. If we could understand any individual’s brain architecture and chemistry well enough, we could, in theory, predict that individual’s response to any given stimulus with 100 percent accuracy.
This is what I believe, and also what Crick believed. Now we’ll never know enough to be able to predict people’s behavior, but if quantum effects don’t manifest themselves in behavior (making you choose a salad rather than french fries, for example), then yes, determinism could lead to absolute predictability. But that will never happen, because we’d have to know enough to predict environmental factors like the weather. Besides, scientists have not decided that quantum phenomena affect behavior. Crick himself rejected that as “woo”, and I’m awaiting evidence for such influences. (We have none.) Finally, even if quantum effects do scupper determinism for some behaviors, they are not effects that we can control by “will.”
I won’t add here the many experiments showing that you can largely predict people’s (simple) decisions before they’re made, beginning with the study of Libet. As these studies continue, we can, by monitoring brain activity, predict what people will do in simple binary tasks farther and farther ahead of the time they’re aware of making such decisions (up to ten seconds, I believe). Free willies, however, always find ways to reject these studies, since that work suggests that our feeling of agency is a post facto phenomenon occurring only after the brain’s neurons have made a “decision”.
2.) But studies show that if you reject free will, you are likely to cheat, be lazy and fatalistic, and reject the idea of moral responsibility. Much of this is based on an early study by Vohs and Schooler showing that college students who are “primed” by reading passages on determinism are more likely to act badly and to cheat than students primed by reading about free will. But that was over a very short time, it was a highly artificial study on college students, and a later meta-analysis showed no deleterious effect of rejecting free will on “prosocial” behaviors. (Note that most of the studies tested behaviors lasting at most a week or so after “priming.”) Cave does, however, mention one study suggesting inimical effects of belief in determinism:
In another study, for instance, Vohs and colleagues measured the extent to which a group of day laborers believed in free will, then examined their performance on the job by looking at their supervisor’s ratings. Those who believed more strongly that they were in control of their own actions showed up on time for work more frequently and were rated by supervisors as more capable. In fact, belief in free will turned out to be a better predictor of job performance than established measures such as self-professed work ethic.

I suggest you look at that study (it appears to be Stillman et al. 2020, “study 2”), as it doesn’t contain a multifactorial analysis using all the cross-correlated factors. Further, although the p values are low, the authors did not correct for multiple tests of significance using something like the Bonferroni correction (a toy illustration of that correction follows below). But even if the evidence did show small deleterious effects on behavior stemming from determinism, are we supposed to pretend to believe we have agency so we can behave better? How can you pretend to believe something you don’t? It would be like asking atheists to believe in God because that belief has salubrious effects. It can’t be done—at least not for rational people. It’s like asking a lion to stop chasing gazelles and start eating salads. It’s not in us!
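For readers unfamiliar with it, the Bonferroni correction simply divides the significance threshold by the number of tests performed, since running many tests inflates the odds of a fluke “significant” result. A minimal sketch with invented p values (nothing to do with the actual numbers in the study):

```python
# Toy illustration of the multiple-comparisons problem; the p-values below
# are invented for illustration, not taken from any study.
alpha = 0.05
p_values = [0.012, 0.034, 0.041, 0.003, 0.048]  # hypothetical results
m = len(p_values)

for p in p_values:
    raw = "significant" if p < alpha else "n.s."
    corrected = "significant" if p < alpha / m else "n.s."  # 0.05 / 5 = 0.01
    print(f"p = {p:.3f}   uncorrected: {raw:11s}   Bonferroni: {corrected}")
```

All five invented results look “significant” on their own, but only the smallest survives the corrected threshold of 0.01 – which is exactly the worry about reporting many uncorrected tests.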
Two other points. We always feel like we have free will, so I doubt that the scientific truth will make people fatalistic. Whether this belief evolved by natural selection or is merely an epiphenomenon of our evolved brain structure is not clear, and I doubt we’ll ever know. So I don’t take point #2 seriously in most circumstances. Where it IS important to recognize the truth of determinism is in our system of rewards and punishments, most notably in the legal system. If people who act badly are simply people with “broken brains,” then how we treat them depends crucially on recognizing this. A society in which we realize, for instance, that a thief had no choice about whether or not he stole, or a killer about whether or not he pulled the trigger, would have a very different system of punishment from a society in which we think people had a choice of how they behaved. (Yes, I know that some people say that belief in libertarian free will wouldn’t change how we dispense justice, but I reject that view.)
This does not mean that we should do away with the idea of responsibility and punishment. Far from it. While I don’t consider people morally responsible in the sense that they could have done something “moral” rather than “immoral”, that doesn’t mean that every criminal obtains a get-out-of-jail-free card. People are responsible for their acts in the sense that they are the people who do the acts, and that leads to the idea that those people need, for their own sake and society’s, to be punished or rewarded. Punishment is still justified under determinism to keep criminals out of society, to give them a chance to be rehabilitated, and (to most) as a form of deterrence. What is not justified is retributive punishment like the death penalty, as that implicitly assumes the criminal made a choice (the death penalty isn’t a deterrent, anyway, and can’t be revoked if someone is later found to be innocent).
Finally, praise is as justified as punishment, for praising people for some actions, even if they had no choice, will almost always lead them to perform more good actions, because we’re evolved to appreciate praise, which raises our status. In the end, though none of us have choices about how we behave, we go about our lives feeling as if we did, and that’s enough for me. When the rubber hits the road, as when determinism really matters (as in punishment), we can still revert to what science tells us.
3.) To avoid these injurious social effects, we must confect a new take on free will, encouraging others to behave better, which can enhance “autonomy” (not “agency” or autonomy in the sense of “the ability to govern oneself,” neither of which we have, but autonomy in the sense of adhering to behaviors that help our selves and society). Author Cave is wise enough to accept the science and the determinism it suggests, but he still thinks we need a solution to the problem that belief in determinism leads to bad behavior. I am not convinced that this is true, as different studies show different things. And I don’t think we need to do what Dennett did, writing big books confecting new definitions of a “free will worth wanting.” It is this last part of the article that most disappointed me, for Cave suggests a tepid solution: we all need to behave better. (He cites Bruce Waller, a philosophy professor at Youngstown State University):
Yet Waller’s account of free will still leads to a very different view of justice and responsibility than most people hold today. No one has caused himself: No one chose his genes or the environment into which he was born. Therefore no one bears ultimate responsibility for who he is and what he does. Waller told me he supported the sentiment of Barack Obama’s 2012 “You didn’t build that” speech, in which the president called attention to the external factors that help bring about success. He was also not surprised that it drew such a sharp reaction from those who want to believe that they were the sole architects of their achievements. But he argues that we must accept that life outcomes are determined by disparities in nature and nurture, “so we can take practical measures to remedy misfortune and help everyone to fulfill their potential.”
Of course Obama was determined to say this via the laws of physics, but his words may still have had a good effect on society. Poor people don’t choose to be poor, nor homeless people to be homeless. We need to realize this, for that form of determinism is good for everyone (except perhaps for some Republicans). Cave admits that accepting determinism but trying to be good is somewhat bogus, but at least it’s not harmful—not in the way I think Dennett’s views were.
Cave:
Understanding how will be the work of decades, as we slowly unravel the nature of our own minds. In many areas, that work will likely yield more compassion: offering more (and more precise) help to those who find themselves in a bad place. And when the threat of punishment is necessary as a deterrent, it will in many cases be balanced with efforts to strengthen, rather than undermine, the capacities for autonomy that are essential for anyone to lead a decent life. The kind of will that leads to success—seeing positive options for oneself, making good decisions and sticking to them—can be cultivated, and those at the bottom of society are most in need of that cultivation.
To some people, this may sound like a gratuitous attempt to have one’s cake and eat it too. And in a way it is. It is an attempt to retain the best parts of the free-will belief system while ditching the worst. President Obama—who has both defended “a faith in free will” and argued that we are not the sole architects of our fortune—has had to learn what a fine line this is to tread. Yet it might be what we need to rescue the American dream—and indeed, many of our ideas about civilization, the world over—in the scientific age.
Well, that’s a bit dramatic, but we do need to reform our notions of praise and—especially—blame. I’ve outlined some of the changes in the justice system we should make in light of determinism, and Gregg Caruso (e.g., here) has done so far more extensively. But I don’t think we should go around telling people that the classical notion of free will is true. Although I’ve been kicked out of a friend’s house and also threatened by a jazz musician for defending determinism (in the latter case by telling him that his saxophone solos were determined rather than improvised under free will, so that he could not have played a different solo), I’m still a diehard determinist.
Yes, the Atlantic article is nine years old, but the field hasn’t moved very far since it was written. Do people even need to think and write about free will, then? Certainly Francis Crick didn’t think so: he completely ignored the problem in his late-life work on the brain, dismissing free will as a nonstarter. But because notions of free will still permeate our justice system in a bad way, yes, I think everyone needs to think about determinism and accept the science buttressing it. Then we can go about our everyday lives acting as though we have choices.
h/t: Reese
At the end of their lives, most satellites fall to their death. Many of the smaller ones, including most of those going up as part of the “mega-constellations” currently under construction, are intended to burn up in the atmosphere. This Design for Demise (D4D) principle has unintended consequences, according to a paper by Antoinette Ott and Christophe Bonnal, both of whom work for MaiaSpace, a company designing reusable launch vehicles for the small satellite market.