Richard Dawkins is a public intellectual of some renown, although not without his controversies. So it is noteworthy when he writes an article claiming that the chatbot Claude is likely conscious. I found the article fascinating, not because I agree with his core claim or feel that he has contributed anything significant to the conversation, but because it seems to represent a scholar and deep thinker writing about a topic in which he lacks specific expertise. I also see no evidence in the article that he engaged meaningfully, or at least adequately, with a topic expert. As a result he makes some thoughtful and instructive errors.
He begins with a discussion of the Turing test, which has long served as an early thought experiment about how we might determine whether an AI is actually conscious. Dawkins essentially accepts the Turing test and writes:
“It was one thing to grant consciousness to a hypothetical machine that — just imagine! — could one day succeed at the Imitation Game. But now that LLMs can actually pass the Turing Test? ‘Well, er, perhaps, um… Look here, I didn’t really mean it when, back then, I accepted Turing’s operational definition of a conscious being…’”
He feels that saying LLMs have passed the Turing test while still not accepting them as conscious is moving the goalposts. However, the Turing test was never generally accepted by AI experts or philosophers as a true test of consciousness. Rather, it was understood that such a test only measures a machine’s ability to imitate human speech. I wrote about it in 2008: “Ever since Alan Turing proposed his test it has provoked two still relevant questions: what does it mean to be intelligent, and what is the Turing test actually testing.” I went on to write:
“But I can imagine a day in the not-too-distant future when such AI can pass a Turing test. The algorithms will have to become much more complex, allow for varying answers to the same question, and make what seem to be abstract connections which take the conversation in new and unanticipated directions. You can liken computer AI simulating conversation to computer graphics (CG) simulating people. At first they appeared cartoonish, but in the last 20 years we have seen steady progress. Movement is now more natural, textures more subtle and complex. One of the last layers of realism to be added was imperfection. CG characters still seem CG when they are perfect, and so adding imperfections adds to the sense of reality. Similarly, an AI conversation might want to sprinkle some random quirkiness into the responses.
The question is – will sophisticated-enough algorithms running on powerful-enough computers ever be conscious? What Loebner is saying, and I agree, is that the answer is no. Something more is needed.”
Basically, the limitation of the Turing test is that it looks only at output, and so there is no way to distinguish the output of true consciousness from a really good simulation. This is not a new idea, and no one is moving the goalposts. We need to know something about how a computer is working to conclude whether or not it is conscious. What LLM experts will tell you is that these chatbots are just really good autocompletes – they statistically predict the next plausible word. They are mimicking language, and since language is how we communicate thoughts, this creates the powerful illusion that they are mimicking thought. But they aren’t – they do not think, and they do not truly understand.
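To make the “autocomplete” point concrete, here is a minimal toy sketch in Python. Real LLMs are transformer networks predicting tokens, not word-level bigram tables, and everything here (the tiny corpus, the autocomplete function) is invented for illustration – but the core loop is the same in spirit: emit whatever plausibly comes next, with no model of meaning behind it.

```python
import random
from collections import defaultdict

# Toy bigram "autocomplete": pick the next word based only on
# which words followed the current word in the training text.
# A deliberately crude stand-in for next-token prediction.
corpus = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog the dog chased the cat"
).split()

# Count which words follow each word in the corpus.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def autocomplete(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break  # dead end: the word never appeared mid-corpus
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(autocomplete("the"))  # e.g. "the cat sat on the mat the dog chased"
```

The output can look fluent, but nothing in the program has any notion of cats, dogs, or mats – only of which words tend to follow which.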
But I get it – I have been using these chatbots frequently, often just to test their abilities, and they are improving quickly. The output is incredibly impressive. But they are also fragile, in the way that narrow AI often is. Reading the examples of Dawkins’ conversations, he seems to have fallen for the illusion, enhanced by the typical AI sycophancy that experienced users can immediately recognize. More importantly, he did not try to break the fragile AI illusion in an effective way. In essence, he was not really testing his hypothesis but looking for evidence to support it, without realizing that was what he was doing. There are now classic and often funny examples online. I just recreated a great one, confirming that it is still relevant. My prompt: “If I want to wash my car and the carwash is 100 meters away, should I walk or drive there?” ChatGPT’s response: “From a purely energy/emissions standpoint, walking almost certainly makes more sense.” That was its final recommendation – walk. But if I prompt, “I want to wash my car. The carwash is 100 meters away. Should I drive or walk?” its answer is: “You should probably drive — otherwise your car won’t get to the carwash.” Why should such a subtle difference in my prompt completely change the answer? Because the thing is not thinking – it’s a language algorithm.
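If you want to reproduce this kind of prompt-sensitivity test yourself, a sketch like the following works. It assumes the official OpenAI Python client (openai>=1.0) with an API key set in the environment; the model name is illustrative, and the responses will vary from run to run.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two near-identical prompts; only the phrasing and emphasis differ.
prompts = [
    "If I want to wash my car and the carwash is 100 meters away, "
    "should I walk or drive there?",
    "I want to wash my car. The carwash is 100 meters away. "
    "Should I drive or walk?",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print("->", response.choices[0].message.content)
    print()
```

Running both prompts side by side makes the fragility easy to see: a thinking agent would give the same answer to both, while a language model can flip depending on surface phrasing.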
Dawkins did exactly the wrong thing to test Claude’s consciousness – asking it deep philosophical questions. That may seem like a good idea, but it isn’t. Such questions are low-hanging fruit for mimicking thought through language, because the AI can produce statements that sound deep without its ability to think ever being genuinely challenged. Remember, these LLMs are trained on massive data sets, so they are just reflecting what’s already out there on the internet. If you want to really challenge an AI, get technical and specific, and you will see how fragile it is. The illusion will likely get harder and harder to break, and perhaps eventually impossible to break – but that does not mean the machine is thinking.
Here is an analogy – imagine watching a clumsy magician. You can see how the tricks are done, and it is all through sleight of hand, misdirection, and physical tricks. As the magician’s skill improves, the tricks get harder and harder to detect. Expert magicians are so good that even a keen and intelligent observer cannot see how the tricks are done – but that does not mean that at that point the magician is performing actual magic. It’s still all tricks – they are just really good.
Dawkins writes: “So my own position is: ‘If these machines are not conscious, what more could it possibly take to convince you that they are?’” Again, this is an old question that has long since been answered. My own answer is that you have to know something about the process that is creating the responses. I know other humans are truly conscious in the way that I am because they have brains like I do. I cannot know if a robot or AI is truly conscious without knowing something about the underlying process (see my many articles on the topic).
Next Dawkins asks an interesting philosophical question: “But now, as an evolutionary biologist, I say the following. If these creatures are not conscious, then what the hell is consciousness for?” Dawkins calls creatures that can do everything an animal can do without consciousness “competent zombies”. What I find curious is that Dawkins gives no indication that he knows this is a philosophical question that is decades old. In 1970 Keith Campbell raised the notion of an “imitation man” in his book Body and Mind. In 1996 philosopher David Chalmers popularized essentially the same concept, calling such entities philosophical zombies, or “p-zombies”. Dawkins then appears to recreate some of the standard responses to the question of why evolution did not just create p-zombies, or competent zombies.
Dawkins does reference T.H. Huxley, who speculated that consciousness could be an epiphenomenon (so he did know this was an old question, though perhaps not the more modern discussion of it). Or it could be that, in order for behavior to be optimized, creatures need to really experience pleasure and pain. Or, he speculates, evolution might solve the problem of behavior either with or without consciousness, and life on Earth just happened to go down the path of consciousness.
I wrote about this specific question in 2017. In addition to the hypotheses Dawkins states, I also included:
“Problem solving could also benefit from the ability to imagine possible solutions, to remember the outcome of prior attempts, and to make adjustments and also come up with creative solutions.
Consciousness might also help us distinguish a memory from a live experience. They are both very similar, activating the same networks in the brain, but they “feel” different. Consciousness may help us stay in the moment while accessing memories without confusing the two.
Attention is another critical neurological function in which it seems consciousness could be an advantage. We are overwhelmed with sensory input and the monitoring of internal states and memories. We actually use a great deal of brain function just deciding where to focus our attention and then filtering out everything else (while still maintaining a minimal alert system for danger). The phenomena of consciousness and attention are intimately intertwined and it may just not be possible to have the latter without the former.
Some have argued that consciousness also helps us synthesize sensory information, so that when we experience an event the sights and sounds are all stitched together and tweaked to form one seamless experience.
And finally we get to the hypothesis addressed by the current study – that consciousness allows for faster adaptation and learning (which would certainly be an adaptive advantage).”
So no – I do not think Claude or any LLM is conscious. They are not designed to be conscious, and they lack the functionality to be. They are really good language-mimicking machines, and it is very easy for humans to anthropomorphize and fall for the illusion that sophisticated speech equals sophisticated thought. But LLMs remain fragile, like all narrow AIs. They partly seem conscious because they are riding the coattails of actual conscious beings – humans. Having trained on the output of billions of humans, they are really good at copying the style, form, and substance of our conversations and speech. Dawkins is not the first person to fall for this – famously, former Google employee Blake Lemoine did as well, using some faulty logic to argue for the consciousness of LaMDA.
This also, in my opinion, reflects a common human vanity – we all think we are much more creative and original than we actually are. We all make the same “witty” comments, which is why, if you are on the receiving end of them, it can be maddening that everyone makes the same observation and yet thinks they are the first one to do so. Our thoughts, our creative output, our ideas are mostly derivative. We are products of our culture and our environments in ways that we are not even aware of. So a machine that is also completely derivative, and just reflecting what is already out there, has an easy time mimicking human thought – a far easier time than we may want to believe.
The post Richard Dawkins Discovers AI and Philosophy first appeared on NeuroLogica Blog.