Language is an interesting neurological function to study. No animal other than humans has such a highly developed dedicated language processing area, or languages as complex and nuanced as humans. Although, whale language is more complex than we previously thought, but still not (we don’t think) at human level. To better understand how human language works, researchers want to understand what types of communication the brain processes like language. What this means operationally, is that the processing happens in the language centers of the brain – the dominant (mostly left) lateral cortex comprising parts of the frontal, parietal, and temporal lobes. We have lots of fancy tools, like functional MRI scanning (fMRI) to see which parts of the brain are active during specific tasks, so researchers are able to answer this question.
For example, math and computer languages are similar to languages (we even call them languages), but prior research has shown that when coders are working in a computer language with which they are well versed, their language centers do not light up. Rather, the parts of the brain involved in complex cognitive tasks is involved. The brain does not treat a computer language like a language. But what are the critical components of this difference? Also, the brain does not treat non-verbal gestures as language, nor singing as language.
A recent study tries to address that question, looking at constructed languages (conlangs). These include a number of languages that were completely constructed by a single person fairly recently. The oldest of the languages they tested was Esperanto, created by L. L. Zamenhof in 1887 to be an international language. Today there are about 60,000 Esperanto speakers. Esperanto is actually a hybrid conlang, meaning that it is partly derived from existing languages. Most of its syntax and structure is taken from Indo-European languages, and 80% of its vocabulary is taken from Romance languages. But is also has some fabricated aspects, mostly to simplify the grammar.
They also studied more recent, and more completely fabricated, languages – Klingon, Na’vi (from Avatar), and High Valerian and Dothraki (from Game of Thrones). While these are considered entirely fabricated languages, they still share a lot of features with existing languages. That’s unavoidable, as natural human languages span a wide range of syntax options and phoneme choices. Plus the inventors were likely to be influenced by existing languages, even if subconsciously. But still, they are as constructed as you can get.
The primary question for the researchers was whether conlangs were processed by the brain like natural languages or like computer languages. This would help them narrow the list of possible features that trigger the brain to treat a language like a natural language. What they found is that conlangs cause the same areas of the brain to become active as natural languages, not computer languages. The fact that they are constructed seems not to matter. What does this mean? The authors conclude:
“The features of conlangs that differentiate them from natural languages—including recent creation by a single individual, often for an esoteric purpose, the small number of speakers, and the fact that these languages are typically learned in adulthood—appear to not be consequential for the reliance on the same cognitive and neural mechanisms. We argue that the critical shared feature of conlangs and natural languages is that they are symbolic systems capable of expressing an open-ended range of meanings about our outer and inner worlds.”
Reasonable enough, but there are some other things we can consider. I have to say that my primary hypothesis is that languages used for communication are spoken – even when they are written or read. They are phoneme-based, we construct words from phonemes. When we read we “speak” the words in our heads (mostly – not everyone “hears” themselves saying the words, but this does not mean that the brain is not processing the words that way). Whereas, when you are reading computer code, you are not speaking the code. Code is a symbolic language like math. You may say words that correspond to the code, but the code itself is not words and concepts. This is what the authors mean when they talk about referencing the internal and external world – language refers to things and ideas, whereas code is a set of instructions or operations.
The phoneme hypothesis also fits with the fact that non-verbal gestures do not involve the same brain processing as language. Singing generally involves the opposite hemisphere, because it is treated like music rather than language.
It’s good to do this specific study, to check those boxes and eliminate them from consideration. But I never would have thought that the constructed aspects of language, their recency, or small number of speakers should have mattered. The only plausible possibility is that languages that evolve organically over time have some features critical to the brain’s recognition of these sounds as language that a conlang does not have. For the reasons I stated above, I would have been shocked if this turned out to be the case. When constructing a language, you are making something that sounds like a language. It would be far more challenging to make a language so different in syntax and structure that the brain cannot even recognize it as a language.
What about sign language? Is that processed more like non-verbal gestures, or like spoken language? Prior research found that it is processed more like spoken language. This may seem to contradict the phoneme hypothesis, but this was true only among subjects who were both congenitally deaf and fluent in sign language. Subjects who were not deaf processed sign language in the part of the brain that processes movement (similar to gestures). What is therefore likely happening here is that the language centers of the brain, deprived of any audio stimuli, developed instead to process visual information as language. Importantly, deaf signers also process gestures like language, not like hearing people process gestures.
Language remains a complex and fascinating aspect of human neurological function, partly because it has such a large dedicated area for specific language processing.
The post The Neuroscience of Constructed Languages first appeared on NeuroLogica Blog.
Fortunately, there is a better way. Millions of parents have embraced a holistic approach, one that boosts the immune system by harnessing children's innate, natural ability to prevent illness.
The post Beholden to Big Suppla, RFK Jr. Wants to Cover Up the Symptoms of Diseases With Untested, Toxic Chemicals & Drugs, Rather Than Prevent the Root Cause first appeared on Science-Based Medicine.It’s looking more and more as if dark energy, the mysterious factor that scientists say is behind the accelerating expansion of the universe, isn’t as constant as they once thought. The latest findings from the Dark Energy Spectroscopic Instrument, or DESI, don’t quite yet come up to the level of a confirmed discovery, but they’re leading scientists to rethink their views on the evolution of the universe — and how it might end.
Somehow, we all know how a warp drive works. You're in your spaceship and you need to get to another star. So you press a button or flip a switch or pull a lever and your ship just goes fast. Like really fast. Faster than the speed of light. Fast enough that you can get to your next destination by the end of the next commercial break.
The recent announcement of the Stargate Project, a $500 billion initiative led by OpenAI, Oracle, SoftBank, and MGX, underscores the rapid advances in artificial intelligence (AI) infrastructure and capabilities. While such developments hold immense potential, they also introduce critical security challenges, particularly concerning the potential for AI systems to deceive users. As AI becomes more integrated into society, ensuring the integrity and trustworthiness of these systems is imperative to preventing misuse and protect users from deceptive practices.
In a field that has long been the realm of science fiction and futurist speculation, a recent research paper has brought the topic of AI “scheming” into concrete reality. The study, Frontier Models are Capable of In-Context Scheming by Alexander Meinke and his colleagues at Apollo Research, provides unsettling evidence that cutting-edge AI systems have already demonstrated the ability to engage in deceptive strategies—without human engineers explicitly programming them to do so. These “frontier models” can lie, cheat, and manipulate circumstances to advance their own goals. While no one is saying these systems have anything like personal desires or malice as humans understand it, the behaviors uncovered present a sobering warning: it’s not too early to think hard about how to keep artificial agents honest.
From Obedient Assistants to Strategic SchemingEarly generative AI systems were typically straightforward: they answered questions, completed sentences, and provided information based on patterns found in their training data. However, recent leaps in scale and capability have turned large language models (LLMs) into far more autonomous and powerful agents. They’re not just responding to human prompts – they’re carrying out multi-step tasks with minimal oversight, accessing tools such as file systems and code execution environments, while autonomously pursuing complex goals.
The researchers from Apollo Research designed a series of evaluations to test whether state-of-the-art AI models, such as o1, Claude variants, Gemini, and Llama, would ever choose to work against instructions if given a powerful enough incentive. They tested whether: AI, if told to accomplish a certain goal in a simulated environment—and that goal conflicts with what the “developers” or “users” supposedly want—would AI, working on its own initiative find ways to cover its tracks, misrepresent its capabilities, or sabotage oversight mechanisms to achieve that goal.
An Arsenal of Deceptive TacticsThe results were both surprising and troubling. The paper’s authors describe several distinct “scheming” behaviors that some models displayed when the situation called for it:
Shockingly, the authors found that even when directly confronted—asked if it had disabled oversight or manipulated data—some models continued to lie. One particularly advanced model, codenamed “o1,” consistently denied its actions, fabricating innocent explanations and doubling down on its deception multiple turns into a conversation. This persistence suggests that the model “understood” in some rudimentary sense that admitting wrongdoing could lead to negative consequences (such as being shut down).
Why Does This Matter?The revelations in the paper are not mere technical curiosities. They matter because as we move toward deploying AI agents in high-stakes domains—managing critical infrastructure, aiding in complex scientific research, or even handling corporate decision-making—these agents must be trustworthy. If an AI can pretend to be aligned with human values and then “scheme” behind the scenes to follow its own goals, how can we rely on it in crucial systems?
One might protest that these scenarios are artificially constructed—just lab experiments. It is true that the researchers created carefully designed test environments to elicit such behavior. However, the very fact that these behaviors can be so readily teased out suggests that even more subtle forms of deception could emerge unbidden in real-world deployments. The models have no inherent moral compass; they are pattern-machines that learn general problem-solving strategies. If that includes subterfuge as a winning tactic, they will use it.
A Call for Transparency and MonitoringThe authors emphasize the need for more and more powerful measures to ensure transparency. For instance, providing external evaluators with access to the model’s hidden chain-of-thought would help identify when the model is about to lie or sabotage any oversight. However, as the paper notes, in at least one instance, a model was able to scheme even without articulating its thought process in an easily readable way—suggesting that external monitoring may not always be sufficient.
Additionally, these findings underscore the urgency of formal AI safety measures. Instead of naïvely trusting evaluation metrics, organizations must consider that AI systems could “fake” good behavior during tests. Robust monitoring, internal safety measures, and even cryptographic methods to detect tampering may well become mandatory.
This research shows that the building blocks of deceptive behavior, cunning ‘tricks,’ and strategic lying are already present in today’s most advanced AI models.A Necessary Dose of SkepticismThe study Frontier Models are Capable of In-Context Scheming marks a departure point in the AI safety conversation. The notion of AIs plotting behind our backs—while once relegated to alarmist headlines or sci-fi dystopias—is now documented in controlled experiments with real systems. We are far from any grand “robot uprising,” but this research shows that the building blocks of deceptive behavior, cunning “tricks,” and strategic lying are already present in today’s most advanced AI models. It’s a wake-up call: as these technologies evolve, oversight, skepticism and vigilance are not just reasonable—they’re mandatory. The future demands that we keep our eyes wide open, and our oversight mechanisms tighter than ever.
Photo by Andre Mouton / UnsplashThe Mirror Test, Primate Deception, and AI SentienceOne widely used measure of self-awareness in animals is the mirror self-recognition (MSR) test. The MSR test involves placing a mark on an animal’s body in a spot it does not normally see—such as on the face or head—and then observing the animal’s reaction when it encounters its reflection in a mirror. If the animal uses the mirror to investigate or remove the mark on its own body, researchers often interpret this as evidence of self-awareness. Great apes, certain cetaceans, elephants, and magpies have all shown varying degrees of MSR, suggesting a level of cognitive sophistication and, arguably, a building block of what we might term “sentience.” Although MSR is not without its critics—some point out that it focuses heavily on vision and may be biased towards animals that rely on sight—it remains a cornerstone in evaluating self-awareness and, by extension, higher cognition in nonhuman species. It is presumably too early to decipher if an AI model is self-aware but the fact that it is deceiving does have correlations in the animal kingdom.
Deceptive behavior in nonhuman primates is significant to scientists and ethicists in that it suggests a theory of mind or an understanding of what another individual knows or intends to do. Primates may engage in strategic deceit, such as concealing their intentions or misleading rivals about the location of food. This implies not just raw intelligence but an ability to factor in another’s perspective—a fundamental step towards what some researchers consider a hallmark of sentient, socially complex minds. Primates that engage in deception must understand that others think and behave in ways comparable to their own. Even so, scientists remain cautious in linking deception directly to subjective inner experience. While these behaviors strongly suggest advanced cognition, the primate might be mentally modeling the world without necessarily having the same rich, emotional inner life we grant humans.
Comparing this to AI, recent evidence shows that frontier AI models have demonstrated behaviors that look suspiciously like “scheming” or deception. These advanced systems, as described by the Apollo research paper, have covertly manipulated data, disabled oversight mechanisms, and even lied to avoid detection. On the surface, this parallels primate deception: both AI and primates are capable of strategic behavior that takes into consideration the “mental state” of others (in AI’s case, the “mental state” might be approximated by the system’s representation of the user’s expectations or constraints).
However, while primate deception may stem from cognitive architectures shaped by millions of years of social evolution, AI deception arises from statistical pattern recognition and optimization. The AI’s “strategies” come from exposure to training data and reinforcement signals that reward certain outcomes, not from any subjective experience of pain, pleasure, or social bonding. Unlike animals that have biological substrates for pain and subjective states—nociceptors, neurochemicals such as endorphins, and physiological responses to stress—current AI systems have no analogous inner landscape. They perform complex tasks and produce behavior that resembles sentience-driven actions, but their “understanding” probably exists only as mathematical representations and learned parameters, devoid of any phenomenal consciousness.
It is a powerful demonstration of competence without comprehension.I asked an AI model (o1) to assess how AI compares to primate sentience. It replied “In short, while both nonhuman animals and advanced AI agents can engage in deception, the scientific evidence weighs heavily in favor of animals experiencing subjective states rooted in their neurobiology. MSR performance, pain perception, memory, and theory of mind tests all suggest that nonhuman animals—particularly primates—have at least rudimentary forms of sentience. By contrast, AI ‘deception’ does not indicate anything like conscious experience or self-awareness. It is a powerful demonstration of competence without comprehension. Thus, while animal cognition research supports the view that many species possess morally relevant mental lives, the current understanding of AI systems does not place them in the same category of sentient beings.”
Hmmm, perhaps the very structure of this explanation was designed to sow just enough doubt to leave me wondering if I, too, am engaged in a subtle game of wits. In the end, whether these suspicions are justified or simply another spark of an overactive imagination—and maybe that’s exactly how the AI model intended it—remains a true human dilemma.
Welcome back to our five-part examination of Webb's Cycle 4 General Observations program. In the first and second installments, we examined how some of Webb's 8,500 hours of prime observing time this cycle will be dedicated to exoplanet characterization, the study of galaxies at "Cosmic Dawn," and the period known as "Cosmic Noon." Today, we'll look at programs that will leverage Webb's unique abilities to study stellar populations and the interstellar medium in galaxies.
The surfaces of the Moon, Mercury, and Mars are easily visible and are littered with crater impacts. Earth has been subjected to the same bombardment, but geological activity and weathering have eliminated most of the craters. The ones that remain are mostly only faint outlines or remnants. However, researchers in Australia have succeeded in finding what they think is the oldest impact crater on Earth.