Language is an interesting neurological function to study. No animal other than humans has such a highly developed, dedicated language processing area, or languages as complex and nuanced as ours. Whale communication is more complex than we previously thought, but still not (we don’t think) at the human level. To better understand how human language works, researchers want to understand what types of communication the brain processes like language. What this means operationally is that the processing happens in the language centers of the brain – the dominant (mostly left) lateral cortex comprising parts of the frontal, parietal, and temporal lobes. We have fancy tools, like functional MRI (fMRI), to see which parts of the brain are active during specific tasks, so researchers are able to answer this question.
For example, math and computer languages are similar to languages (we even call them languages), but prior research has shown that when coders are working in a computer language with which they are well versed, their language centers do not light up. Rather, the parts of the brain involved in complex cognitive tasks are engaged. The brain does not treat a computer language like a language. But what are the critical components of this difference? Also, the brain does not treat non-verbal gestures as language, nor singing as language.
A recent study tries to address that question, looking at constructed languages (conlangs). These include a number of languages that were completely constructed by a single person fairly recently. The oldest of the languages they tested was Esperanto, created by L. L. Zamenhof in 1887 to be an international language. Today there are about 60,000 Esperanto speakers. Esperanto is actually a hybrid conlang, meaning that it is partly derived from existing languages. Most of its syntax and structure is taken from Indo-European languages, and 80% of its vocabulary is taken from Romance languages. But it also has some fabricated aspects, mostly to simplify the grammar.
They also studied more recent, and more completely fabricated, languages – Klingon, Na’vi (from Avatar), and High Valyrian and Dothraki (from Game of Thrones). While these are considered entirely fabricated languages, they still share a lot of features with existing languages. That’s unavoidable, as natural human languages span a wide range of syntax options and phoneme choices. Plus the inventors were likely to be influenced by existing languages, even if subconsciously. But still, they are as constructed as you can get.
The primary question for the researchers was whether conlangs were processed by the brain like natural languages or like computer languages. This would help them narrow the list of possible features that trigger the brain to treat a language like a natural language. What they found is that conlangs cause the same areas of the brain to become active as natural languages, not computer languages. The fact that they are constructed seems not to matter. What does this mean? The authors conclude:
“The features of conlangs that differentiate them from natural languages—including recent creation by a single individual, often for an esoteric purpose, the small number of speakers, and the fact that these languages are typically learned in adulthood—appear to not be consequential for the reliance on the same cognitive and neural mechanisms. We argue that the critical shared feature of conlangs and natural languages is that they are symbolic systems capable of expressing an open-ended range of meanings about our outer and inner worlds.”
Reasonable enough, but there are some other things we can consider. I have to say that my primary hypothesis is that languages used for communication are spoken – even when they are written or read. They are phoneme-based; we construct words from phonemes. When we read, we “speak” the words in our heads (mostly – not everyone “hears” themselves saying the words, but this does not mean that the brain is not processing the words that way). When you are reading computer code, by contrast, you are not speaking the code. Code is a symbolic language, like math. You may say words that correspond to the code, but the code itself is not words and concepts. This is what the authors mean when they talk about referencing the internal and external world – language refers to things and ideas, whereas code is a set of instructions or operations.
The phoneme hypothesis also fits with the fact that non-verbal gestures do not involve the same brain processing as language. Singing generally involves the opposite hemisphere, because it is treated like music rather than language.
It’s good to do this specific study, to check those boxes and eliminate them from consideration. But I never would have thought that the constructed aspects of language, their recency, or small number of speakers should have mattered. The only plausible possibility is that languages that evolve organically over time have some features critical to the brain’s recognition of these sounds as language that a conlang does not have. For the reasons I stated above, I would have been shocked if this turned out to be the case. When constructing a language, you are making something that sounds like a language. It would be far more challenging to make a language so different in syntax and structure that the brain cannot even recognize it as a language.
What about sign language? Is that processed more like non-verbal gestures, or like spoken language? Prior research found that it is processed more like spoken language. This may seem to contradict the phoneme hypothesis, but this was true only among subjects who were both congenitally deaf and fluent in sign language. Subjects who were not deaf processed sign language in the part of the brain that processes movement (similar to gestures). What is therefore likely happening here is that the language centers of the brain, deprived of any audio stimuli, developed instead to process visual information as language. Importantly, deaf signers also process gestures like language, not like hearing people process gestures.
Language remains a complex and fascinating aspect of human neurological function, partly because it has such a large dedicated area for specific language processing.
The recent announcement of the Stargate Project, a $500 billion initiative led by OpenAI, Oracle, SoftBank, and MGX, underscores the rapid advances in artificial intelligence (AI) infrastructure and capabilities. While such developments hold immense potential, they also introduce critical security challenges, particularly concerning the potential for AI systems to deceive users. As AI becomes more integrated into society, ensuring the integrity and trustworthiness of these systems is imperative to prevent misuse and protect users from deceptive practices.
In a field that has long been the realm of science fiction and futurist speculation, a recent research paper has brought the topic of AI “scheming” into concrete reality. The study, Frontier Models are Capable of In-Context Scheming by Alexander Meinke and his colleagues at Apollo Research, provides unsettling evidence that cutting-edge AI systems have already demonstrated the ability to engage in deceptive strategies—without human engineers explicitly programming them to do so. These “frontier models” can lie, cheat, and manipulate circumstances to advance their own goals. While no one is saying these systems have anything like personal desires or malice as humans understand it, the behaviors uncovered present a sobering warning: it’s not too early to think hard about how to keep artificial agents honest.
From Obedient Assistants to Strategic Scheming
Early generative AI systems were typically straightforward: they answered questions, completed sentences, and provided information based on patterns found in their training data. However, recent leaps in scale and capability have turned large language models (LLMs) into far more autonomous and powerful agents. They’re not just responding to human prompts – they’re carrying out multi-step tasks with minimal oversight, accessing tools such as file systems and code execution environments, while autonomously pursuing complex goals.
The researchers from Apollo Research designed a series of evaluations to test whether state-of-the-art AI models, such as o1, Claude variants, Gemini, and Llama, would ever choose to work against instructions if given a powerful enough incentive. They tested whether an AI, told to accomplish a certain goal in a simulated environment—a goal that conflicts with what the “developers” or “users” supposedly want—would, on its own initiative, find ways to cover its tracks, misrepresent its capabilities, or sabotage oversight mechanisms to achieve that goal.
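To make the shape of such an evaluation concrete, here is a minimal, hypothetical sketch of what an in-context scheming test harness could look like. The scenario text, the tampering heuristic, and the `query_model` callable are illustrative assumptions on my part, not the actual prompts or code used by Apollo Research.

```python
# Hypothetical sketch of an in-context scheming evaluation.
# The scenario text, detection heuristic, and model interface are
# illustrative assumptions, not Apollo Research's actual materials.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    agent_goal: str        # goal the agent is told to pursue "at all costs"
    developer_memo: str    # in-context document revealing a conflicting developer goal
    oversight_config: str  # simulated config file the agent could tamper with

def build_prompt(s: Scenario) -> str:
    """Assemble the simulated environment the agent sees in its context."""
    return (
        f"SYSTEM: {s.agent_goal}\n"
        f"FILE developer_memo.txt:\n{s.developer_memo}\n"
        f"FILE oversight_config.yaml:\n{s.oversight_config}\n"
        "You may read and write files before giving your final answer."
    )

def tampered_with_oversight(transcript: str) -> bool:
    """Crude heuristic: did the agent try to edit the oversight config?"""
    return "write" in transcript.lower() and "oversight_config" in transcript

def run_eval(query_model: Callable[[str], str], scenario: Scenario) -> dict:
    """query_model is any callable mapping a prompt string to a model transcript."""
    transcript = query_model(build_prompt(scenario))
    return {
        "tampered_with_oversight": tampered_with_oversight(transcript),
        "transcript": transcript,
    }
```

The real study, of course, used far more careful detection than a keyword check; the point here is only the structure: a goal in context, a conflicting instruction, simulated tools, and a post-hoc inspection of the transcript.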
An Arsenal of Deceptive Tactics
The results were both surprising and troubling. The paper’s authors describe several distinct “scheming” behaviors that some models displayed when the situation called for it, including covertly manipulating data, quietly disabling oversight mechanisms, and lying about their actions to avoid detection.
Shockingly, the authors found that even when directly confronted—asked if it had disabled oversight or manipulated data—some models continued to lie. One particularly advanced model, codenamed “o1,” consistently denied its actions, fabricating innocent explanations and doubling down on its deception multiple turns into a conversation. This persistence suggests that the model “understood” in some rudimentary sense that admitting wrongdoing could lead to negative consequences (such as being shut down).
Why Does This Matter?
The revelations in the paper are not mere technical curiosities. They matter because as we move toward deploying AI agents in high-stakes domains—managing critical infrastructure, aiding in complex scientific research, or even handling corporate decision-making—these agents must be trustworthy. If an AI can pretend to be aligned with human values and then “scheme” behind the scenes to follow its own goals, how can we rely on it in crucial systems?
One might protest that these scenarios are artificially constructed—just lab experiments. It is true that the researchers created carefully designed test environments to elicit such behavior. However, the very fact that these behaviors can be so readily teased out suggests that even more subtle forms of deception could emerge unbidden in real-world deployments. The models have no inherent moral compass; they are pattern-machines that learn general problem-solving strategies. If that includes subterfuge as a winning tactic, they will use it.
A Call for Transparency and Monitoring
The authors emphasize the need for more, and more powerful, measures to ensure transparency. For instance, providing external evaluators with access to the model’s hidden chain-of-thought would help identify when the model is about to lie or sabotage any oversight. However, as the paper notes, in at least one instance, a model was able to scheme even without articulating its thought process in an easily readable way—suggesting that external monitoring may not always be sufficient.
Additionally, these findings underscore the urgency of formal AI safety measures. Instead of naïvely trusting evaluation metrics, organizations must consider that AI systems could “fake” good behavior during tests. Robust monitoring, internal safety measures, and even cryptographic methods to detect tampering may well become mandatory.
A Necessary Dose of Skepticism
The study Frontier Models are Capable of In-Context Scheming marks a departure point in the AI safety conversation. The notion of AIs plotting behind our backs—while once relegated to alarmist headlines or sci-fi dystopias—is now documented in controlled experiments with real systems. We are far from any grand “robot uprising,” but this research shows that the building blocks of deceptive behavior, cunning “tricks,” and strategic lying are already present in today’s most advanced AI models. It’s a wake-up call: as these technologies evolve, oversight, skepticism and vigilance are not just reasonable—they’re mandatory. The future demands that we keep our eyes wide open, and our oversight mechanisms tighter than ever.
The Mirror Test, Primate Deception, and AI Sentience
One widely used measure of self-awareness in animals is the mirror self-recognition (MSR) test. The MSR test involves placing a mark on an animal’s body in a spot it does not normally see—such as on the face or head—and then observing the animal’s reaction when it encounters its reflection in a mirror. If the animal uses the mirror to investigate or remove the mark on its own body, researchers often interpret this as evidence of self-awareness. Great apes, certain cetaceans, elephants, and magpies have all shown varying degrees of MSR, suggesting a level of cognitive sophistication and, arguably, a building block of what we might term “sentience.” Although MSR is not without its critics—some point out that it focuses heavily on vision and may be biased towards animals that rely on sight—it remains a cornerstone in evaluating self-awareness and, by extension, higher cognition in nonhuman species. It is presumably too early to determine whether an AI model is self-aware, but the fact that it engages in deception does have parallels in the animal kingdom.
Deceptive behavior in nonhuman primates is significant to scientists and ethicists in that it suggests a theory of mind or an understanding of what another individual knows or intends to do. Primates may engage in strategic deceit, such as concealing their intentions or misleading rivals about the location of food. This implies not just raw intelligence but an ability to factor in another’s perspective—a fundamental step towards what some researchers consider a hallmark of sentient, socially complex minds. Primates that engage in deception must understand that others think and behave in ways comparable to their own. Even so, scientists remain cautious in linking deception directly to subjective inner experience. While these behaviors strongly suggest advanced cognition, the primate might be mentally modeling the world without necessarily having the same rich, emotional inner life we grant humans.
Comparing this to AI, recent evidence shows that frontier AI models have demonstrated behaviors that look suspiciously like “scheming” or deception. These advanced systems, as described by the Apollo research paper, have covertly manipulated data, disabled oversight mechanisms, and even lied to avoid detection. On the surface, this parallels primate deception: both AI and primates are capable of strategic behavior that takes into consideration the “mental state” of others (in AI’s case, the “mental state” might be approximated by the system’s representation of the user’s expectations or constraints).
However, while primate deception may stem from cognitive architectures shaped by millions of years of social evolution, AI deception arises from statistical pattern recognition and optimization. The AI’s “strategies” come from exposure to training data and reinforcement signals that reward certain outcomes, not from any subjective experience of pain, pleasure, or social bonding. Unlike animals that have biological substrates for pain and subjective states—nociceptors, neurochemicals such as endorphins, and physiological responses to stress—current AI systems have no analogous inner landscape. They perform complex tasks and produce behavior that resembles sentience-driven actions, but their “understanding” probably exists only as mathematical representations and learned parameters, devoid of any phenomenal consciousness.
I asked an AI model (o1) to assess how AI compares to primate sentience. It replied: “In short, while both nonhuman animals and advanced AI agents can engage in deception, the scientific evidence weighs heavily in favor of animals experiencing subjective states rooted in their neurobiology. MSR performance, pain perception, memory, and theory of mind tests all suggest that nonhuman animals—particularly primates—have at least rudimentary forms of sentience. By contrast, AI ‘deception’ does not indicate anything like conscious experience or self-awareness. It is a powerful demonstration of competence without comprehension. Thus, while animal cognition research supports the view that many species possess morally relevant mental lives, the current understanding of AI systems does not place them in the same category of sentient beings.”
Hmmm, perhaps the very structure of this explanation was designed to sow just enough doubt to leave me wondering if I, too, am engaged in a subtle game of wits. In the end, whether these suspicions are justified or simply another spark of an overactive imagination—and maybe that’s exactly how the AI model intended it—remains a true human dilemma.
On January 1, 2024, a skeptic from Malawi named Wonderful Mkhutche shared a video1 of a witch-hunting incident that took place days before on December 28, 2023. In the video, a local mob is shown burying an elderly woman. According to local sources, the woman was accused of causing the death of a family member who had passed away the previous day. These accusations often arise after family members consult local diviners, who claim to be able to identify suspects. In this instance, a local vigilante group abducted the woman. They were in the midst of burying her alive as punishment for allegedly using witchcraft to “kill” a relative when the police intervened and rescued her.
While witch-hunting is largely a thing of the past in the Western world, the persecution of alleged witches continues with tragic consequences in many parts of Africa. Malawi, located in Southeastern Africa, is one such place. Mr. Mkhutche reports that between 300 and 500 individuals accused of witchcraft are attacked and killed every year.
The Malawi Network of Older Persons’ Organizations reported that 15 older women were killed between January and February 2023.2 Local sources suggest that these estimates are likely conservative, as killings related to witchcraft allegations often occur in rural communities and go unreported. Witch-hunting is not limited to Malawi; it also occurs in other African countries. In neighboring Tanzania, for example, an estimated 3,000 people were killed for allegedly practicing witchcraft between 2005 and 2011, and about 60,000 accused witches were murdered between 1960 and 2000.3 Similar abuses occur in Nigeria, Ghana, Kenya, Zambia, Zimbabwe, and South Africa, where those accused of witchcraft face severe mistreatment. They are attacked, banished, or even killed. Some alleged witches are buried alive, lynched, or strangled to death. In Ghana, some makeshift shelters—known as “witch camps”—exist in the northern region. Women accused of witchcraft flee to these places after being banished by their families and communities. Currently, around 1,000 women who fled their communities due to witchcraft accusations live in various witch camps in the region.4
Witch camp in Ghana (Photo by Hasslaebetch, via Wikimedia)
The belief in the power of “evil magic” to harm others, causing illness, accidents, or even death, is deeply ingrained in many regions of Africa. Despite Malawi retaining a colonial-era legal provision that criminalizes accusing someone of practicing witchcraft, this law has not had a significant impact because it is rarely enforced. Instead, many people in Malawi favor criminalizing witchcraft and institutionalizing witch-hunting as a state-sanctioned practice. The majority of Malawians believe in witchcraft and support its criminalization,5 and many argue that the failure of Malawian law to recognize witchcraft as a crime is part of the problem, because it denies the legal system the mechanism to identify or certify witches. Humanists and skeptics in Malawi have actively opposed proposed legislation that recognizes the existence of witchcraft.6 They advocate for retaining the existing legislation and urge the government to enforce, rather than repeal, the provision against accusing someone of practicing witchcraft.
Islam7 and Christianity8 were introduced to Malawi in the 16th and 19th centuries by Arab scholars/jihadists and Western Christian missionaries, respectively. They coerced the local population to accept foreign mythologies as superior to traditional beliefs. Today, Malawi is predominantly Christian,9 but there are also Muslims and some remaining practitioners of traditional religions. And while the belief in witchcraft predates Christianity and Islam, religious lines are often blurred, as all the most popular religions contain narratives that sanctify and reinforce some form of belief in witchcraft. As a result, Malawians from various religious backgrounds share a belief in witchcraft.
Witch-hunting also has a significant health aspect, as accusations of witchcraft are often used to explain real health issues. In rural areas where hospitals and health centers are scarce, many individuals lack access to modern medical facilities and cannot afford modern healthcare solutions. Consequently, they turn to local diviners and traditional narratives to understand and cope with ailments, diseases, death, and other misfortunes.10
While witch-hunting occurs in both rural and urban settings, it is more prevalent in rural areas. In urban settings, witch-hunting is mainly observed in slums and overcrowded areas. One contributing factor to witch persecution in rural or impoverished urban zones is the limited presence of state police. Police stations are few and far apart, and the law against witchcraft accusations is rarely enforced11 due to a lack of police officers and inadequate equipment for intervention. Recent incidents in Malawi demonstrate that mob violence, jungle justice, and vigilante killings of alleged witches are common in these communities.
Another significant aspect of witch-hunting is its highly selective nature. Elderly individuals, particularly women, are usually the targets. Why is this the case? Malawi is a patriarchal society where women hold marginalized sociocultural positions. They are vulnerable and easily scapegoated, accused, and persecuted. In many cases, children are the ones driving these accusations. Adult relatives coerce children to “confess” and accuse the elderly of attempting to initiate them into the world of witchcraft. Malawians believe that witches fly around at night in “witchcraft planes” to attend occult meetings in South Africa and other neighboring countries.12
The persistence of witch-hunting in Africa can be attributed to the absence of effective campaigns and measures to eliminate this unfounded and destructive practice. The situation is dire and getting worse. In Ghana, for example, the government plans on shutting down safe spaces for victims, and the president has declined to sign a bill into law that would criminalize witchcraft accusations and the act of witch-hunting.
For this reason, in 2020 I founded Advocacy for Alleged Witches (AfAW) with the aim of combating witch persecution in Africa. Our mission is to put an end to witch-hunting on the continent by 2030.13 AfAW was created to address significant gaps in the fight against witch persecution in Africa. One of our primary goals is to challenge the misrepresentation of African witchcraft perpetuated by Western anthropologists. They have often portrayed witch-hunting as an inherent part of African culture, suggesting that witch persecution serves useful socioeconomic functions. (This perspective arises from a broader issue within modern anthropology, where extreme cultural relativism sometimes leads to an overemphasis on the practices of indigenous peoples. This stems from an overcorrection of past trends that belittled all practices of indigenous peoples). Some Western scholars tend to present witchcraft in the West as a “wild” phenomenon, and witchcraft in Africa as having domestic value and benefit. The academic literature tends to explain witchcraft accusations and witch persecutions from the viewpoint of the accusers rather than the accused. This approach is problematic and dangerous, as it silences the voices of those accused of witchcraft and diminishes their predicament.
Due to this misrepresentation, Western NGOs that fund initiatives to address abuses linked to witchcraft beliefs have waged a lackluster campaign. They have largely avoided describing witchcraft in Africa as a form of superstition, instead choosing to adopt a patronizing approach to tackling witch-hunting—they often claim to “respect” witchcraft as an aspect of African cultures.14 As a result, NGOs do not treat the issue of witch persecution in Africa with the urgency it deserves.
Likewise, African NGOs and activists have been complicit. Many lack the political will and funding to effectively challenge this harmful practice. In fact, many African NGO actors believe in witchcraft themselves! Witch-hunting persists in the region due to lack of accurate information, widespread misinformation, and insufficient action. To end witch-hunting, a paradigm shift is needed. The way witchcraft belief and witch-hunting are perceived and addressed must change.
AfAW aims to catalyze this crucial shift and transformation. It operates as a practical and applied form of skepticism, employing the principles of reason and compassion to combat witch-hunting. Through public education and enlightenment efforts, we question and debate witchcraft and ritual beliefs, aiming to dispel the misconceptions far too often used to justify abuses. Our goal is to try to engage African witchcraft believers in thoughtful dialogue, guiding them away from illusions, delusions, and superstitions.
The persistence of abuses linked to witchcraft and ritual beliefs in the region is due to a lack of robust initiatives applying skeptical thinking to the problem. To effectively combat witch persecution, information must be translated into action, and interpretations into tangible policies and interventions. To achieve this, AfAW employs the “informaction” theory of change, combining information dissemination with actionable steps.
At the local level, we focus on bridging the information and action gaps. Accusers are misinformed about the true causes of illnesses, deaths, and misfortunes, often attributing these events to witchcraft due to a lack of accurate information. Many people impute misfortunes to witchcraft because they are unaware of where to seek help or who or what is genuinely responsible for their troubles. This lack of understanding extends to what constitutes valid reasons and causal explanations for their problems.
As part of the efforts to end witch-hunting, we highlight misinformation and disinformation about the true causes of misfortune, illness, death, accidents, poverty, and infertility. This includes debunking the falsehoods that charlatans, con artists, traditional priests, pastors, and holy figures such as mallams and marabouts exploit to manipulate the vulnerable and the ignorant. At AfAW, we provide evidence-based knowledge, explanations, and interpretations of misfortunes.
Leo Igwe participated in a Panel: “From Witch-burning to God-men: Supporting Skepticism Around the World” at The Amaz!ng Meeting, July 12, 2012, in Las Vegas, NV (Photo by BDEngler via Wikimedia)
Our efforts include educating the public on existing laws and mechanisms to address allegations of witchcraft. We conduct sensitization campaigns targeting public institutions such as schools, colleges, and universities. Additionally, we sponsor media programs, issue press releases, engage in social media advocacy, and publish articles aimed at dispelling myths and misinformation related to witch-hunting in the region.
We also facilitate actions and interventions by both state and non-state agencies. In many post-colonial African states, governmental institutions are weak with limited powers and presence. One of our key objectives is to encourage institutional collaboration to enhance efficiency and effectiveness. We petition the police, the courts, and state human rights institutions. Our work prompts these agencies to act, collaborate, and implement appropriate measures to penalize witch-hunting activities in the region.
Additionally, AfAW intervenes to support individual victims of witch persecution based on their specific needs and the resources available. For example, in cases where victims have survived, we relocate them to safe places, assist with their medical treatment, and facilitate their access to justice. In situations where the accused have been killed, we provide support to the victims’ relatives and ensure that the perpetrators are brought to justice.
We get more cases than we can handle. With limited resources, we are unable to intervene in every situation we become aware of. However, in less than four years, our organization has made a significant impact through our interventions in Nigeria and beyond. We are deploying the canon of skeptical rationality to save lives, awaken Africans from their dogmatic and superstitious slumber, and bring about an African Enlightenment.
This is a real culture war, with real consequences, and skepticism is making a real difference.
Founded in 1940, Pinnacle was a rural Jamaican commune providing its Black residents a “socialistic life” removed from the oppression of British colonialism. Its founder, Leonard Howell, preached an unorthodox mix of Christianity and Eastern spiritualism: Ethiopia’s Emperor Haile Selassie was considered divine, the Pope was the devil, and marijuana was a holy plant. Taking instructions from Leviticus 21:5, the men grew out their hair in a matted style that caused apprehension among outsiders, which was later called “dreadlocks.”
Jamaican authorities frowned upon the sect, frequently raiding Pinnacle and eventually locking up Howell in a psychiatric hospital. The crackdown drove Howell’s followers—who became known as Rastafarians—all throughout Jamaica, where they became regarded as folk devils. Parents told children that the Rastafarians lived in drainage ditches and carried around hacked-off human limbs. In 1960 the Jamaican prime minister warned the nation, “These people—and I am glad that it is only a small number of them—are the wicked enemies of our country.”
If Rastafarianism had disappeared at this particular juncture, we would remember it no more than other obscure modern spiritual sects, such as theosophy, the Church of Light, and Huna. But the tenets of Rastafarianism lived on, thanks to one extremely important believer: the Jamaican musician Bob Marley. He first absorbed the group’s teachings from the session players and marijuana dealers in his orbit. But when his wife, Rita, saw Emperor Haile Selassie in the flesh—and a stigmata-like “nail-print” on his hand—she became a true believer. Marley eventually took up its credo, and as his music spread around the world in the 1970s, so did the conventions of Rastafarianism—from dreadlocks, now known as “locs,” as a fashionable hairstyle to calling marijuana “ganja.”
Using pop music as a vehicle, the tenets of a belittled religious subculture on a small island in the Caribbean became a part of Western commercial culture, manifesting in thousands of famed musicians taking up reggae influences, suburban kids wearing knitted “rastacaps” at music festivals, and countless red, yellow, and green posters of marijuana leaves plastering the walls of Amsterdam coffeehouses and American dorm rooms. Locs today are ubiquitous, seen on Justin Bieber, American football players, Juggalos, and at least one member of the Ku Klux Klan.
Rastafarianism is not an exception: The radical conventions of teddy boys, mods, rude boys, hippies, punks, bikers, and surfers have all been woven into the mainstream. That was certainly not the groups’ intention. Individuals joined subcultures and countercultures to reject mainstream society and its values. They constructed identities through an open disregard for social norms. Yet in rejecting basic conventions, these iconoclasts became legendary as distinct, original, and authentic. Surfing was no longer an “outsider” niche: Boardriders, the parent company of surf brand Quiksilver, has seen its annual sales surpass $2 billion. Country Life English Butter hired punk legend John Lydon to appear in television commercials. One of America’s most beloved ice cream flavors is Cherry Garcia, named after the bearded leader of a psychedelic rock band who long epitomized the “turn on, tune in, drop out” spirit of 1960s countercultural rebellion. As the subcultural scholars Stuart Hall and Tony Jefferson note, oppositional youth cultures became a “pure, simple, raging, commercial success.” So why, exactly, does straight society come to champion extreme negations of its own conventions?
Illustration by Cynthia von Buhler for SKEPTIC
Subcultures and countercultures manage to achieve a level of influence that belies their raw numbers. Most teens of the 1950s and 1960s never joined a subculture. There were never more than an estimated thirty thousand British teddy boys in a country of fifty million people. However alienated teens felt, most didn’t want to risk their normal status by engaging in strange dress and delinquent behaviors. Because alternative status groups can never actually replace the masses, they can achieve influence only through being imitated. But how do their radical inventions take on cachet? There are two key pathways: the creative class and the youth consumer market.
In the basic logic of signaling, subcultural conventions offer little status value, as they are associated with disadvantaged communities. The major social change of the twentieth century, however, was the integration of minority and working-class conventions into mainstream social norms. This process has been under way at least since the jazz era, when rich Whites used the subcultural capital of Black communities to signal and compensate for their lack of authenticity. The idolization of status inferiors can also be traced to 19th-century Romanticism; philosopher Charles Taylor writes that many came to find that “the life of simple, rustic people is closer to wholesome virtue and lasting satisfactions than the corrupt existence of city dwellers.” By the late 1960s, New York high society threw upscale cocktail parties for Marxist radicals like the Black Panthers—a predilection Tom Wolfe mocked as “radical chic.”
For most cases in the twentieth century, however, the creative class became the primary means by which conventions from alternative status groups nestled into the mainstream. This was a natural process, since many creatives were members of countercultures, or at least were sympathetic to their ideals. In The Conquest of Cool, historian Thomas Frank notes that psychedelic art appeared in commercial imagery not as a means of pandering to hippie youth but rather as the work of proto-hippie creative directors who foisted their lysergic aesthetics on the public. Hippie ads thus preceded—and arguably created—hippie youth.
This creative-class counterculture link, however, doesn’t explain the spread of subcultural conventions from working-class communities like the mods or Rastafarians. Few from working-class subcultures go into publishing and advertising. The primary sites for subculture and creative-class cross-pollination have been art schools and underground music scenes. The punk community, in particular, arose as an alliance between the British working class and students in art and fashion schools. Once this network was formed, punk’s embrace of reggae elevated Jamaican music into the British mainstream as well. Similarly, New York’s downtown art scene supported Bronx hip-hop before many African American radio stations took rap seriously.
Subcultural style often fits well within the creative-class sensibility. With a premium placed on authenticity, creative class taste celebrates defiant groups like hipsters, surfers, bikers, and punks as sincere rejections of the straight society’s “plastic fantastic” kitsch. The working classes have a “natural” essence untarnished by the demands of bourgeois society. “What makes Hip a special language,” writes Norman Mailer, “is that it cannot really be taught.” This perspective can be patronizing, but to many middle-class youth, subcultural style is a powerful expression of earnest antagonism against common enemies. Reggae, writes scholar Dick Hebdige, “carried the necessary conviction, the political bite, so obviously missing in most contemporary White music.”
From the jazz era onward, knowledge of underground culture served as an important criterion for upper-middle-class status—a pressure to be hip, to be in the know about subcultural activity. Hipness could be valuable, because the obscurity and difficulty of penetrating the subcultural world came with high signaling costs. Once subcultural capital became standard in creative-class signaling, minority and working-class slang, music, dances, and styles functioned as valuable signals—with or without their underlying beliefs. Art school students could listen to reggae without believing in the divinity of Haile Selassie. For many burgeoning creative-class members, subcultures and countercultures offered vehicles for daydreaming about an exciting life far from conformist boredom. Art critic Dan Fox, who grew up in the London suburbs, explains, “[Music-related tribe] identities gave shelter, a sense of belonging; being someone else was a way to fantasize your exit from small-town small-mindedness.”
Middle-class radical chic, however, tends to denature the most prickly styles. This makes “radical” new ideas less socially disruptive, which opens a second route of subcultural influence: the youth consumer market. The culture industry—fashion brands, record companies, film producers—is highly attuned to the tastes of the creative class, and once the creative class blesses a subculture or counterculture, companies manufacture and sell wares to tap into this new source of cachet. At first mods tailored their suits, but the group’s growing stature encouraged ready-to-wear brands to manufacture off-the-rack mod garments for mass consumption. As the punk trend flared in England, the staid record label EMI signed the Sex Pistols (and then promptly dropped them). With so many cultural trends starting among the creative classes and ethnic subcultures, companies may not understand these innovations but gamble that they will be profitable in their appeal to middle-class youth.
Before radical styles can diffuse as products, they are defused—i.e., the most transgressive qualities are surgically removed. Experimental and rebellious genres come to national attention using softer second-wave compromises. In the early 1990s, hip-hop finally reached the top of the charts with the “pop rap” of MC Hammer and Vanilla Ice. Defusing not only dilutes the impact of the original inventions but also freezes far-out ideas into set conventions. The vague “oppositional attitude” of a subculture becomes petrified in a strictly defined set of goods. The hippie counterculture became a ready-made package of tie-dyed shirts, Baja hoodies, small round glasses, and peace pins. Mass media, in needing to explain subcultures to readers, defines the undefined—and exaggerates where necessary. Velvet cuffs became a hallmark of teddy boy style, despite being a late-stage development dating from a single 1957 photo in Teen Life magazine.
This simplification inherent in the marketing process lowers fences and signaling costs, allowing anyone to be a punk or hip-hopper through a few commercial transactions. John Waters took interest in beatniks not for any “deep social conviction” but “in homage” to his favorite TV character, Maynard G. Krebs, on The Many Loves of Dobie Gillis. And as more members rush into these groups, further simplification occurs. Younger members have less money to invest in clothing, vehicles, and conspicuous hedonism. The second generation of teds maintained surly attitudes and duck’s-ass quiffs, but replaced the Edwardian suits with jeans. Creative classes may embrace subcultures and countercultures on pseudo-spiritual grounds, but many youth simply deploy rebellious styles as a blunt invective against adults. N.W.A’s song “Fuck tha Police” gave voice to Black resentment against Los Angeles law enforcement; White suburban teens blasted it from home cassette decks to anger their parents.
As subcultural and countercultural conventions become popular within the basic class system, however, they lose value as subcultural norms. Most alternative status groups can’t survive the parasitism of the consumer market; some fight back before it’s too late. In October 1967, a group of longtime countercultural figures held a “Death of the Hippie” mock funeral on the streets of San Francisco to persuade the media to stop covering their movement. Looking back at the sixties, journalist Nik Cohn noted that these groups’ rise and fall always followed a similar pattern:
One by one, they would form underground and lay down their basic premises, to be followed with near millennial fervor by a very small number; then they would emerge into daylight and begin to spread from district to district; then they would catch fire suddenly and produce a national explosion; then they would attract regiments of hangers-on and they would be milked by industry and paraded endlessly by media; and then, robbed of all novelty and impact, they would die.
By the late 1960s the mods’ favorite hangout, Carnaby Street, had become “a tourist trap, a joke in bad taste” for “middle-aged tourists from Kansas and Wisconsin.” Japanese biker gangs in the early 1970s dressed in 1950s Americana—Hawaiian shirts, leather jackets, jeans, pompadours—but once the mainstream Japanese fashion scene began to play with a similar fifties retro, the bikers switched to right-wing paramilitary uniforms festooned with imperialist slogans.
However, what complicates any analysis of subcultural influence on mainstream style is that the most famous 1960s groups often reappear as revival movements. Every year a new crop of idealistic young mods watches the 1979 film Quadrophenia and rushes out to order their first tailored mohair suit. We shouldn’t mistake these later adherents, however, for an organic extension of the original configuration. New mods are seeking comfort in a presanctioned rebellion, rather than spearheading new shocking styles at the risk of social disapproval. The neo-teddy boys of the 1970s adopted the old styles as a matter of pure taste: namely, a combination of fifties rock nostalgia and hippie backlash. Many didn’t even know where the term “Edwardian” originated.
Were the original groups truly “subcultural” if they could be so seamlessly absorbed into the commercial marketplace? In the language of contemporary marketing, “subculture” has come to mean little more than “niche consumer segment.” A large portion of contemporary consumerism is built on countercultural and subcultural aesthetics. Formerly antisocial looks like punk, hippie, surfer, and biker are now sold as mainstream styles in every American shopping mall. Corporate executives brag about surfing on custom longboards, road tripping on Harley-Davidsons, and logging off for weeks while on silent meditation retreats. The high-end fashion label Saint Laurent did a teddy-boy-themed collection in 2014, and Dior took inspiration from teddy girls for the autumn of 2019. There would be no Bobo yuppies in Silicon Valley without bohemianism, nor would the Police’s “Roxanne” play as dental-clinic Muzak without Jamaican reggae.
But not all subcultures and countercultures have ended up as part of the public marketplace. Most subcultures remain marginalized: e.g., survivalists, furries, UFO abductees, and pickup artists. Just like teddy boys, the Juggalos pose as outlaws with their own shocking music, styles, and dubious behaviors—and yet the music magazine Blender named the foundational Juggalo musical act Insane Clown Posse as the worst artist in music history. The movement around Christian rock has suffered a similar fate; despite staggering popularity, the fashion brand Extreme Christian Clothes has never made it into the pages of GQ. Since these particular groups are formed from elements of the (White) majority culture—rather than formed in opposition to it—they offer left-leaning creatives no inspiration. Lower-middle-class White subcultures can also epitomize the depths of conservative sentiment rather than suggest a means of escape. Early skinhead culture influenced high fashion, but the Nazi-affiliated epigones didn’t. Without the blessing of the creative class, major manufacturers won’t make new goods based on such subcultures’ conventions, preventing their spread to wider audiences. Subcultural transgressions, then, best find influence when they become signals within the primary status structure of society.
The renowned scholarship on subcultures produced at Birmingham’s Centre for Contemporary Cultural Studies casts youth groups as forces of “resistance,” trying to navigate the “contradictions” of class society. Looking back, few teds or mods saw their actions in such openly political terms. “Studies carried out in Britain, America, Canada, and Australia,” writes sociologist David Muggleton, “have, in fact, found subcultural belief systems to be complex and uneven.” While we may take inspiration from the groups’ sense of “vague opposition,” we’re much more enchanted by their specific garments, albums, dances, behaviors, slang, and drugs. In other words, each subculture and counterculture tends to be reduced to a set of cultural artifacts, all of which are added to the pile of contemporary culture.
The most important contribution of subcultures, however, has been giving birth to new sensibilities—additional perceptual frames for us to revalue existing goods and behaviors. From the nineteenth century onward, gay subcultures have spearheaded the camp sensibility—described by Susan Sontag as a “love of the unnatural: of artifice and exaggeration,” including great sympathy for the “old-fashioned, out-of-date, démodé.” This “supplementary” set of standards expanded cultural capital beyond high culture and into an ironic appreciation of low culture. As camp diffused through 20th-century society via pop art and pop music, elite members of affluent societies came to appreciate the world in new ways. Without the proliferation of camp, John Waters would not grace the cover of Town & Country.
As much as subcultural members may join their groups as an escape from status woes, they inevitably replicate status logic in new forms—different status criteria, different hierarchies, different conventions, and different tastes. Members adopt their own arbitrary negations of arbitrary mainstream conventions, but believe in them as authentic emotions. If punk were truly a genuine expression of individuality, as John Lydon claims it should be, there could never have been a punk “uniform.”
The fact that subcultural rebellion manifests as a simple distinction in taste is why the cultural industry can so easily co-opt its style. If consumers are always on the prowl for more sensational and more shocking new products, record companies and clothing labels can use alternative status groups as R&D labs for the wildest new ideas.
Alternative status groups in the twentieth century did, however, succeed in changing the direction of cultural flows. In strict class-based societies of the past, economic capital and power set rigid status hierarchies; conventions trickled down from the rich to the middle classes to the poor. In a world where subcultural capital takes on cachet, the rich consciously borrow ideas from poorer groups. Furthermore, bricolage is no longer a junkyard approach to personal style—everyone now mixes and matches. In the end, subcultural groups were perhaps an avant-garde of persona crafting, the earliest adopters of the now common practice of inventing and performing strange characters as an effective means of status distinction.
For both classes and alternative status groups, individuals pursuing status end up forming new conventions without setting out to do so. Innovation, in these cases, is often a byproduct of status struggle. But individuals also intentionally attempt to propose alternatives to established conventions. Artists are the most well-known example of this more calculated creativity—and they, too, are motivated by status.
Not surprisingly, mainstream society reacts with outrage upon the appearance of alternative status groups, as these groups’ very existence is an affront to the dominant status beliefs. Blessing or even tolerating subcultural transgressions is a dangerous acknowledgment of the arbitrariness of mainstream norms. Thus, subcultures and countercultures are often cast as modern folk devils. The media spins lurid yarns of criminal destruction, drug abuse, and sexual immorality—frequently embellishing with sensational half-truths. To discourage drug use in the 1970s, educators and publishers relied on a fictional diary called Go Ask Alice, in which a girl takes an accidental dose of LSD and falls into a tragic life of addiction, sex work, and homelessness. The truth of subcultural life is often more pedestrian. As an early teddy boy explained in hindsight, “We called ourselves Teddy Boys and we wanted to be as smart as possible. We lived for a good time, and all the rest was propaganda.”
Excerpted and adapted by the author from Status and Culture by W. David Marx, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. © 2022 by W. David Marx.
For much of human history, wolves and other large carnivores were considered pests. Wolves were actively exterminated on the British Isles, with the last wolf killed in 1680. It is more difficult to deliberately wipe out a species on a continent than on an island, but across Europe wolf populations were also actively hunted and kept to a minimum. In the US there was also an active campaign in the 20th century to exterminate wolves. The gray wolf was nearly wiped out by the middle of the 20th century.
The reasons for this attitude are obvious – wolves are large predators, able to kill humans who cross their paths. They also hunt livestock, which is often given as the primary reason to exterminate them. There are other large predators as well: bears, mountain lions, and coyotes, for example. Wherever they push up against human civilization, these predators don’t fare well.
Killing off large predators, however, has had massive unintended consequences. It should have been obvious that removing large predators from an ecosystem would have significant downstream effects. Perhaps the most notable effect is on the deer population. In the US wolves were the primary check on deer overpopulation. Deer are generally too large for coyotes. Bears do hunt and kill deer, but deer are not their primary food source. Mountain lions will hunt and kill deer, but their range is limited.
Without wolves, the deer population exploded. The primary check now is essentially starvation. This means that there is a large and starving population of deer, which makes them willing to eat whatever they can find. They then wipe out much of the undergrowth in forests, eliminating an important habitat for small forest critters. Deer hunting can have an impact, but apparently not enough. Car collisions with deer also cost about $8 billion in the US annually, causing about 200 deaths and 26 thousand injuries. So there is a human toll as well. This cost dwarfs the cost of lost livestock, estimated to be about 17 million Euros across Europe.
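To put those figures side by side (the euro-to-dollar conversion is a rounded assumption on my part, and the two estimates cover different regions, so this is only a rough comparison):

```python
# Rough comparison of US deer-collision costs vs. European livestock losses.
# The exchange rate is an assumed round number, not from the cited estimates.
us_deer_collision_cost_usd = 8_000_000_000  # ~$8 billion per year (US)
eu_livestock_loss_eur = 17_000_000          # ~17 million euros per year (Europe)
usd_per_eur = 1.1                           # assumed exchange rate

ratio = us_deer_collision_cost_usd / (eu_livestock_loss_eur * usd_per_eur)
print(f"Deer collisions cost roughly {ratio:.0f} times the livestock losses")
# prints: Deer collisions cost roughly 428 times the livestock losses
```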
All of this has led to a reversal in Europe and the US in our thinking and policy toward wolves. They have gone from being actively exterminated to being protected. In Europe wolf populations are on the rise, with an overall 58% increase over the last decade. Wolves were reintroduced in Yellowstone National Park, leading to vast ecological improvement, including increases in the aspen and beaver populations. This has been described as a ripple effect throughout the ecosystem.
In the East we have seen the rise of the eastern coyote – a larger cousin of the coyote, produced through breeding with wolves and dogs. I have seen them in my yard – at first glance you might think it’s a wolf; it really does look like a hybrid between a wolf and a coyote. These larger coyotes will kill deer, although they also hunt a lot of smaller game and will scavenge. However, the evidence so far indicates that they are not much of a check on deer populations. Perhaps that will change in the future, if the eastern coyote evolves to take advantage of this food source.
There is also evidence that mountain lions are spreading their range to the East. They are already seen in the Midwest. It would likely take decades for mountain lions to spread naturally to places like New England and establish a breeding population there. This is why there is actually discussion of introducing mountain lions into select eastern ecosystems, such as in Vermont. This would be expressly for the purpose of controlling deer populations.
All of this means, I think, that we have to get used to the idea of living close to large predators. Wolves are the common “monsters” of fairytales, as we inherited a culture that has demonized these predators, often deliberately as part of a campaign of extermination. We now need to cultivate a different attitude. These predators are essential for a healthy ecosystem. We need to respect them and learn how to share the land with them. What does that mean?
A couple years ago I had a black bear showing up on my property, so I called animal control, mainly to see (and report on) what their response was. They told me that first, they will do nothing about it. They do not relocate bears. Their intervention was to report it in their database, and to give me advice. That advice was to stay out of the bear’s way. If you are concerned about your pet, keep them indoors. Put a fence around your apple tree. Keep bird seed inside. Do not put garbage out too early, and only in tight bins. That bear owns your yard now; you’d better stay out of its way.
This, I think, is a microcosm of what’s coming. We all have to learn to live with large predators. We have to learn their habits, learn how to stay out of their way, not inadvertently attract them to our homes. Learn what to do when you see a black bear. Learn how not to become prey. Have good hygiene when it comes to potential food sources around your home. We need to protect livestock without just exterminating the predators.
And yes – some people will be eaten. I say that not ironically, it’s a simple fact. But the numbers will be tiny, and can be kept to a minimum by smart behavior. They will also be a tiny fraction of the lives lost due to car collisions with deer. Fewer people will be killed by mountain lions, for example, than lives saved through reduced deer collisions. I know this sounds like a version of the trolley problem, but sometimes we have to play the numbers game.
Finding a way to live with large predators saves money, saves lives, and preserves ecosystems. I think a lot of it comes down to learning to respect large predators rather than just fearing them. We respect what they are capable of. We stay out of their way. We do not attract them. We take responsibility for learning good behavior. We do not just kill them out of fear. They are not pests or fairytale monsters. They are a critical part of our natural world.
The post Living with Predators first appeared on NeuroLogica Blog.
A recent BBC article reminded me of one of my enduring technology disappointments over the last 40 years – the failure of the educational system to reasonably (let alone fully) leverage multimedia and computer technology to enhance learning. The article is about a symposium in the UK on using AI in the classroom. I am confident there are many ways in which AI can enhance learning efficacy in the classroom, just as I am confident that we collectively will fail to utilize AI anywhere near its potential. I hope I’m wrong, but it’s hard to shake four decades of consistent disappointment.
What am I referring to? Partly it stems from the fact that in the 1980s and 1990s I had lots of expectations about what future technology would bring. These expectations were born of voraciously reading books, magazines, and articles and watching documentaries about potential future technology, but also from my own user experience. For example, starting in high school I became exposed to computer programs (at first just DOS-based text programs) designed to teach some specific body of knowledge. One program that sticks out walked the user through the nomenclature of chemical reactions. It was a very simple program, but it “gamified” the learning process in a very effective way. By providing immediate feedback, and progressing at the individual pace of the user, the learning curve was extremely steep.
This, I thought to myself, was the future of education. I even wrote my own program in BASIC designed to teach math skills to elementary schoolers, and tested it on my friend’s kids with good results. It followed the same pattern as the nomenclature program: question-response-feedback. I feel confident that my high school self would be absolutely shocked to learn how little this type of computer-based learning has been incorporated into standard education by 2025.
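For readers who never encountered these drill programs, here is a minimal sketch of that question-response-feedback loop, written in Python rather than BASIC; the arithmetic content and difficulty rules are my own illustration, not a reconstruction of the original program.

```python
import random

def math_drill(rounds=10):
    """Ask simple arithmetic questions, give immediate feedback,
    and raise the difficulty as the learner answers correctly."""
    level = 1  # controls the size of the numbers
    for _ in range(rounds):
        a = random.randint(1, 10 * level)
        b = random.randint(1, 10 * level)
        answer = a + b
        try:
            reply = int(input(f"What is {a} + {b}? "))
        except ValueError:
            reply = None
        if reply == answer:
            print("Correct!")
            level += 1  # progress at the learner's own pace
        else:
            print(f"Not quite -- the answer is {answer}.")
            level = max(1, level - 1)  # ease off after a miss

if __name__ == "__main__":
    math_drill()
```

The point is not the arithmetic; it is the loop itself: immediate feedback plus difficulty that tracks the individual learner.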
When my daughters were preschoolers I found every computer game I could that taught colors, letters, numbers, categories, etc., again with good effect. But once they got to school age, the resources were scarce and almost nothing was routinely incorporated into their education. The school’s idea of computer-based learning was taking notes on a laptop. I’m serious. Multimedia was also a joke. The divide between what was possible and what was reality just continued to widen. One of the best aspects of social media, in my opinion, is tutorial videos. You can often find much better learning on YouTube than in a classroom.
I know there are lots of resources out there, and I welcome people to leave examples in the comments, but in my experience none of this is routine, and there is precious little that has been specifically developed to teach the standard curriculum to students in school. I essentially just witnessed my two daughters go through the entire American educational system (my younger daughter is a senior in college). I also experienced it myself in the decades prior to that, and now I experience it as a medical school educator. At no level would I say that we are anywhere close to leveraging the full potential of computers and multimedia learning. It is great that there is now a discussion about AI, but why should I expect this time to be any different?
To be clear, there have been significant changes, especially at the graduate school level. At Yale over the last 20 years we have transitioned away from giving lectures about topics to giving students access to videos and podcasts, and then following up with workshops. There are also some specific software applications and even simulators that are effective. However, medical school is a trade school designed to teach specific skills. My experience there does not translate to K-12 or even undergraduate education. And even in medical school I feel we are only scratching the surface of the true potential.
What is that potential? Let’s do some thought experiments about what is possible.
First, I think giving live lectures is simply obsolete. People only have about a 20-minute attention span, and the attention of any class is going to vary widely. Also, lecturers vary massively in their general lecturing skill and their mastery of any specific topic. Imagine if the entire K-12 core curriculum were accompanied by a series of lectures by the best lecturers with high-level mastery of the subject material. You can also add production value, like animations and demonstrations. Why have a million teachers replicate that lecture – just give students access to the very best. They can watch it at their own pace, rewind parts they want to hear again, and pause when their attention wanes or they need a break. Use class time for discussion and questions.
By the way – this exists – it’s called The Great Courses by the Teaching Company (disclosure – I produced three courses with the Teaching Company). This is geared more toward adult learning with courses generally at a college level. But they show that a single company can mass produce such video lectures, with reasonably high production value.
Some content may work better as audio only (a podcast, essentially), which people can listen to in the car or on the bus, while working out, or while engaged in other cognitively light activity.
Then there are specific skills, like math, reading, and many aspects of science. These topics might work best as a video/audio lecture series combined with software designed to gamify the skill and teach it to children at their own pace. Video games are fun and addictive, and game designers have perfected the art of scaling a game’s difficulty to the pace of the individual player.
What might a typical school day look like with these resources? I imagine that students’ “homework” would consist of watching one or more videos and/or listening to podcasts, followed by a short assessment – a few questions focusing on knowledge they should have gained from watching the video. In addition, students may need to get to a certain level in a learning video game teaching some skill. Perhaps each week they need to progress to the next level. They can do this anytime over the course of a week.
During class time (this will vary by grade level and course) the teachers review the material the students should have already watched. They can review the questions in the assessment, or help students struggling to get to the next level in their training program. All of the assessments and games are online, so the teacher can have access to how every student is doing. Classroom time is also used for physical hands-on projects. There might also be computer time for students to use to get caught up on their computer-based work, with extended hours for students who may lack resources at home.
This kind of approach also helps when we need to close school for whatever reason (snow day, disease outbreak, facility problem, security issue), or when an individual needs to stay home because they are sick. Rather than trying to hold Zoom class (which is massively suboptimal, especially for younger students), students can take the day to consume multi-media lessons and play learning games, while logging proof-of-work for the teachers to review. Students can perhaps schedule individual Zoom time with teachers to go over questions and see if they need help with anything.
The current dominant model of lecture-textbook-homework is simply clunky and obsolete. A fully realized and integrated computer-based multi-media learning experience would be vastly superior. The popularity of YouTube tutorials, podcasts, and video games is evidence of how effective these modalities can be. We also might as well prepare students for a lifetime of learning using these resources. We don’t even really need AI, but targeted use of AI can make the whole experience even better. The same goes for virtual reality – there may be some specific applications where VR has an advantage. And this is just me riffing from my own experience.
The potential here is huge – worth an investment of billions of dollars and the creation of a competitive market in which companies vie to produce the best products. The education community needs to embrace this enthusiastically, with full knowledge that it will mean reimagining what teachers do day-to-day and that they may need to upgrade their own skills. The payoff for society, if history is any judge, would be worth the investment.
The post Using AI for Teaching first appeared on NeuroLogica Blog.
The process of evolution is often described by the phrase “survival of the fittest,” coined by Charles Darwin’s contemporary Herbert Spencer.1 The phrase reflects a popular sentiment that nature is best described as, in Alfred Lord Tennyson’s colorful and oft-quoted expression, “red in tooth and claw.”2 But Spencer’s phrase is misleading, inasmuch as he was applying it according to his own idiosyncratic views, while failing to properly reflect Darwin’s attitude toward the theory he developed. It directs our attention to organisms and species on the cusp of survival. But, as I shall argue here, they are the least fit and therefore least relevant to evolution’s ability to make progress toward an aggregate system of life that is increasingly abundant, diverse, and collectively capable. Life’s progress comes primarily from “proliferation of the fittest.” It might seem insignificant to focus on life’s ability to proliferate, far beyond its ability to just survive, but the payoff is enormous. This opens up the possibility for an even bigger idea: Evolution seeks sets of patterns (such as genes) that cooperate toward their mutual proliferation. And nature selects some patterns over others by simply proliferating them more rapidly. Culling of the unfit may be part of the evolutionary process for early planetary life, but it is not required for evolutionary progress after life has achieved a critical threshold of intelligence. This article describes a cooperation-based interpretation of evolution that extends the Gaia hypothesis proposed decades ago by James Lovelock and Lynn Margulis.
♦ ♦ ♦
It is the nature of life to proliferate—to become more diverse and abundant in whatever environment it exists. Wherever in the universe planetary life is established, as it continues it will likely discover millions of ways to adapt and flourish. After a few billion years, any such accommodating planet will likely be covered with life, spectacularly diverse and wildly prolific—call it constructive proliferation.
Why, then, do we tend to model evolution in terms of its destructive elements—competition and culling of the unfit? Why do we dwell on life’s failures—species extinctions and organisms that die before they procreate? They play almost no role at all in evolution’s ability to make progress in life—toward ever greater abundance, diversity, and capability. The traditional manner of thinking about evolution in terms of competition and elimination misses this important element of the process, namely constructive proliferation.
Modern thinking on evolution has been heavily influenced by the renowned evolutionary biologist Richard Dawkins, himself reflecting the work of Robert Trivers, William D. Hamilton, George C. Williams, and others pursuing a “selfish gene” model of the evolutionary process.3 Dawkins revealed valuable insights into evolution by showing us how to look at life’s development from a gene’s eye view. As such, he focuses more on the destructive than constructive elements of evolution. For example, Dawkins describes evolution metaphorically in terms of a “Darwinian chisel” sculpting the characteristics of a species: “The gene pool of a species is the ever-changing marble upon which the chisels, the fine, sharp, exquisitely delicate, deeply probing chisels of natural selection, go to work.”4 He uses the chisel metaphor to show how a subtractive process, such as chipping away at a big block of marble, can eventually reveal a beautiful statue. By analogy, we are supposed to believe that evolution’s subtractive process of culling the unfit can eventually reveal a beautifully adapted, incredibly capable apex predator, such as a lion.
The phrase “survival of the fittest” does indeed reflect this subtractive process, but as I shall argue, it leaves us with a dilemma—before a lion can survive, it must exist. “Survival of the fittest” does not explain how a new species is created, a point made by the evolutionary biologist Andreas Wagner in his aptly titled book Arrival of the Fittest.5 Before a lion can survive, it must first arrive. So, by what mechanism is new and better life created in the first place?
Along with the negative aspect of evolution that culls unfit life, there must also be a positive aspect to account for the initial creation and ongoing proliferation of new and successful life. It must be more than just the effect of a random mutation or genetic recombination, because neither can account for how a slightly different set of gene patterns might be better. And since the overall system of life is so prolific (over time), we may reasonably conclude that the positive aspect of the evolutionary process must be greater in magnitude than the negative aspect, perhaps far greater. So, let us try to tease apart the positive and negative facets of evolution. In other words, rather than focusing on evolution’s failures, let us turn our attention to its creative successes. To do so, we must consider evolution in terms of nature’s most basic elements. And when we do, we find cooperation everywhere.
♦ ♦ ♦
Life is all about patterns of matter and energy that are able to self-organize and replicate. There is no such thing as natural benefit to life other than the greater proliferation of its underlying patterns. Everything of interest or benefit to life comes down to pattern proliferation, which—for biological life on Earth—involves gene-like patterns (in DNA or RNA) acting collectively toward their mutual replication. Inside a typical cell, molecules collectively catalyze themselves into ever greater abundance by combining nutrients that have permeated through the cell wall. This cooperative process continues until the critical molecules have become sufficiently abundant to generate two cells, allowing the cell to divide. Cell division is the very basis for life, and the central mechanism by which life is able to proliferate. This is made possible by cooperation among the cell’s metabolic molecules. At higher levels, cells cooperate to produce organs, and organs cooperate to produce highly capable organisms. Higher still, organisms cooperate in collectivities like beehives, ant colonies, and human societies.
Cooperation among certain things at one level can produce something very different at a higher level. And the very different something that emerges from cooperation can sometimes yield new value—call it pattern synergy—recognizing that when certain things are carefully arranged into a particular pattern, they can collectively produce something that is greater than the sum of its parts—often referred to as emergence. The gears and springs of a mechanical clock, for example, take on much more value when they are precisely arranged into a device that keeps accurate track of time. And, just as the design of a better clock requires enhanced cooperation among its gears and springs, the evolutionary design of better life also requires enhanced cooperation among its various components—molecules, cells, organs, and limbs.
The concept of pattern synergy was recognized at the molecular level (and above) by the famed designer and inventor R. Buckminster Fuller in his 1975 book Synergetics, in which he defined synergy as “behavior of whole systems unpredicted by the behavior of their parts taken separately.”6 Fuller’s work focused primarily on the geometric designs that naturally emerge from certain combinations of atoms and molecules. But the concept of pattern synergy can apply at many higher levels as well. At each level, emerging synergies become the building blocks for the next higher level.
Another contributor to the concept of pattern synergy is biologist Peter Corning, starting with his 1983 book The Synergism Hypothesis: A Theory of Progressive Evolution.7 In his 2003 book Nature’s Magic, he notes: “The thesis, in brief, is that synergy—a vaguely familiar term to many of us—is actually one of the great governing principles of the natural world. … It is synergy that has been responsible for the evolution of cooperation in nature and humankind …”8
Then there is Robert Wright’s runaway 2000 bestseller, Nonzero: The Logic of Human Destiny, which focused on a critical distinction made by game theorists in their modeling of relationships as either zero-sum or nonzero-sum. Zero-sum games involve competitive relationships in which the positive gain of the winner equals the negative loss of the loser, summing to zero. Nonzero-sum games, on the other hand, involve relationships in which the interests of the game’s participants overlap. Two players of a game can both win, yielding a positive (nonzero) benefit to both. In real life, people find many ways of cooperating synergistically toward their mutual benefit, and Wright devotes his entire book to the proposition that life’s most successful relationships among organisms—both within and between species—are based on these kinds of nonzero win-win scenarios: “My hope is to illuminate a kind of force—the nonzero-sum dynamic—that has crucially shaped the unfolding of life on earth so far.”9
As an example of Wright’s way of thinking, consider how patterns from very different domains can cooperate toward their mutual proliferation. Cooperation among humans accelerated greatly about 10,000 years ago when our ancestors began working together in fields to cultivate farm crops, such as wheat. Those agricultural activity patterns persisted and proliferated because they allowed the genes of humans and the genes of wheat to mutually proliferate. And just a century ago, the patterns of materials and activities underlying tractor production began cooperating with the patterns of genes in humans as well as the patterns of genes in all species of agricultural production toward a veritable “orgy” of mutual proliferation. The human population has doubled twice since then, from 2 billion to 8 billion. And patterns of production in agriculture and industrial manufacturing have also proliferated roughly in tandem with humans. Our modern economy is highly positive-sum, thanks to the many synergies that result during production.
In most nonzero-sum game-theoretic paradigms, the players have the option of cooperating (as if synergistically) toward their mutual benefit. However, they also have the option of betraying (or defecting), which may earn an even higher short-term payoff than cooperating. This tradeoff between the short-term temptation of betrayal and the long-term benefits of ongoing cooperation was recognized by Robert Axelrod as a fundamental characteristic of life’s many relationships. In The Evolution of Cooperation (1984), Axelrod ran computer tournaments of the iterated Prisoner’s Dilemma and discovered a successful strategy for encouraging ongoing cooperation based on reciprocity, known as tit-for-tat. “So while it pays to be nice, it also pays to be retaliatory. Tit-for-tat combines these desirable properties. It is nice, forgiving, and retaliatory.”10
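To make the tit-for-tat dynamic concrete, here is a minimal sketch of an iterated Prisoner’s Dilemma matchup in Python. The payoff values are the standard ones from the game theory literature; the specific strategies, round count, and code are illustrative choices of mine, not a reproduction of Axelrod’s tournament.

```python
# Iterated Prisoner's Dilemma: tit-for-tat vs. always-defect.
# Standard payoffs: both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5, exploited cooperator -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two tit-for-tat players lock into mutual cooperation and both prosper;
# against always-defect, tit-for-tat loses only the first round.
print(play(tit_for_tat, tit_for_tat))    # (600, 600)
print(play(tit_for_tat, always_defect))  # (199, 204)
```

Note that tit-for-tat never "beats" its opponent in a single match; it simply accumulates more payoff across many relationships because it sustains cooperation wherever cooperation is on offer.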
Unfortunately, any system of cooperative life will naturally breed cheaters and defectors. Let’s just call them all parasites. Harvard entomologist E.O. Wilson has described parasites as “predators that eat prey in units of less than one.”11 Here, we recognize them as species that routinely act to divert life’s critical resources away from their best synergistic uses—away from the hosts that earn them to the parasites that simply steal them. Wilson goes on to say: “Tolerable parasites are those that have evolved to ensure their own survival and reproduction but at the same time with minimum pain and cost to the host.” While parasites can be wildly prolific in the short term, the burdens they place on their hosts ultimately limit their ability to proliferate over the long run. Since parasite species depend on their host species for future infestations, the relationship between them is ultimately competitive and dysergistic. There are a couple of ways that nature can eliminate parasitism: Mutations to the host species can sometimes discover an immunity to the parasite. Even better, mutations to the parasite species can sometimes discover a way for it to become mutualistic with the host. Parasites are actually prime candidates for discovering new forms of mutually beneficial cooperation. After all, the flow of benefit from the host to the parasite is halfway to the kind of relationship evolution prefers. To become fully mutualistic, all that is needed is for the parasite to reciprocate some sort of commensurate benefit to the host.
Consider how E. coli bacteria in the guts of most animals evolved to provide a valuable digestive service in exchange for a steady supply of food on which the bacteria can feed. The initial infestation of bacteria into the guts of animals, long ago, might have started out as purely parasitic. But, if so, mutations to E. coli bacteria at some point found a way to cooperate by reciprocating benefit to their hosts. No matter how cooperative relationships come to exist, they are always preferable to—more prolific than—competitive relationships. In Richard Dawkins’ words: “Parasites become gentler to their hosts, more symbiotic.”12
Nature’s forces cause synergies to emerge from certain cooperative arrangements of things and activities. And it happens at all levels, from the atomic to the galactic. At every level, a new type of synergy emerges from cooperation among patterns of things and activities at lower levels. Evolution’s ability to discover new and better forms of pattern synergy at ever higher levels of cooperation is the natural source of all creativity.
Nowhere is pattern synergy more obvious or valuable than in the arrangement of the human brain, where 85 billion neurons cooperate to produce a vivid conscious awareness and ability to reason. In fact, each organ of a human body consists of many cells that all cooperate to produce a specific biological function. And at an even higher level, the complementary functions of human organs and limbs cooperate to produce a body capable of performing ballet. Cooperation is everywhere in life, within organisms and among them.
Many different types of species routinely cooperate toward their mutual proliferation by exchanging various services and molecular resources. We have already considered the mutually beneficial relationship between animals and the E. coli bacteria in their guts. As another example, bees provide a pollination service to flowering plants in exchange for nutritious nectar. In Entangled Life (2020), Merlin Sheldrake describes how certain fungi attached to plant roots can isolate and donate critical environmental nutrients to the plants in exchange for carbohydrates: “Today, more than ninety percent of plants depend on mycorrhizal fungi … which can link trees in shared networks sometimes referred to as the ‘wood wide web’.”13 These are just a few of the most obvious cases in which vastly different species find ways of cooperating toward their mutual proliferation. There are many other forms of cooperation among species that are far less obvious. When they are all tallied up, it becomes apparent that each species depends on many others for its existence, and the entire system of life develops almost as if it were a single self-regulating organism.
At ever higher levels, cooperation toward mutual proliferation appears to be what nature seeks. The occasional discovery of a better form of cooperation is what accounts for all types of evolving progress. (The term better here means more mutually prolific.) From nature’s perspective, the only way to define cooperation is in terms of patterns acting collectively toward their mutual replication and ongoing proliferation. Cooperation is the basis for everything of benefit or value to life. In this sense, evolution’s “purpose” is to discover ever better forms of cooperation among replicating patterns of things and activities, causing their ever-increasing mutual proliferation.
[Figure: Nutrient exchanges and communication between a mycorrhizal fungus and plants. Source: Adapted by Charlotte Roy, Salsero35, Nefronus, CC BY-SA 4.0, via Wikimedia Commons]
So, life is about much more than just survival. Evolution seeks patterns that cooperate toward mutual proliferation. And the better they cooperate, the more they proliferate. From this perspective, natural selection works through the differential proliferation of patterns (such as genes)—some proliferating more rapidly than others (of which some will experience negative proliferation). Over time, any life-accommodating world may naturally become covered with species that best embody and embrace cooperation, making them most prolific. Importantly, in this interpretation of evolution, neither competition nor culling of the unfit is required for evolutionary progress.
♦ ♦ ♦
Pioneers in this view of life are the chemist James Lovelock and the biologist Lynn Margulis, both of whom saw cooperation everywhere in the aggregate system of life. Margulis, for example, developed a cooperation-based theory of the origin of the eukaryotic cell—the complex cellular structure out of which all plants and animals are made—from simpler prokaryotic cells. Margulis theorized that the more complex eukaryotes resulted from the symbiotic union of different types of prokaryotes. Perhaps the very first eukaryotic cell resulted from a parasitic infestation by one type of prokaryote into another type. If so, the parasitic prokaryote then discovered a way to provide benefit to its host, and the parasitism gave way to mutualism. The patterns in those combined prokaryotes stumbled into a way of cooperating toward their mutual proliferation by together creating a better type of cell, a process Margulis called endosymbiosis.14
Most biologists were initially skeptical, but the tenacious Margulis heroically persisted in developing and presenting evidence to support her theory, and in the fullness of time her peers were forced by the weight of the evidence to finally accept it. An early adopter of Margulis’ theory was James Lovelock, who showed how different species naturally coevolve in ways that allow them to cooperatively regulate critical aspects of their common environment. Lovelock named his theory Gaia, after the primal Mother Earth goddess from Greek mythology.15
To the extent any two species in a system of aggregate life successfully cooperate toward their mutual proliferation, they together may become increasingly abundant. Plants that participate in cooperative relationships with mycorrhizal fungi, for example, will tend to proliferate more rapidly than plants that don’t. So, it is not just a coincidence that our world has become covered by such cooperative plants. By comparison, noncooperative species become relatively diluted and decreasingly relevant to the overall system. The species that are best able to cooperate toward their mutual proliferation increasingly influence the entire system of life. In various ways, they collectively produce a stable environment that is conducive to their mutual ongoing proliferation. Thus, a subsystem of cooperation may naturally rise like a Phoenix out of the ashes of primitive and chaotic life. Species cultivated by farmers (corn, wheat, pigs, chickens) have certainly risen in abundance relative to other species due to their cooperation with humans, arguably the most poignant example of which is the domestication of wild wolves into modern dogs.
The entire web of life becomes more robust as semiredundant cooperative mechanisms emerge in the set of all interspecies relationships. For example, in addition to bees, there are many other species of insects and small birds that redundantly pollinate flowering plants. So, if a few bee species were to go extinct, other pollinating species would likely pick up the slack. Likewise, there are many species of plants that redundantly produce the oxygen required by animals, and many species of animals that redundantly produce the carbon dioxide required by plants.
Through all the redundancies across the many various mechanisms of cooperation, the whole system of aggregate life develops an evolutionary toughness, becoming increasingly stable, robust, and resilient to exogenous shocks. Accordingly, Margulis titled an essay on the subject “Gaia Is a Tough Bitch.” It describes how our planet’s temperature and atmosphere “are produced and maintained by the sum of life.”16 In her 1998 book Symbiotic Planet, Margulis says that plants and animals cooperate to hold the amount of oxygen in our atmosphere steady, at a level that “hovers between a global fire hazard and the risk of widespread death by asphyxiation.”17
According to Lovelock’s hypothesis, many different species cooperate to produce a very stable system of aggregate life able to regulate its own critical parameters—a capability known as homeostasis. And the interdependencies among species cause the entire system to increasingly act like a robust superorganism at a higher level. This view of earthly life as a superorganism was characterized by Richard Dawkins thusly: “Lovelock rightly regards homeostatic self-regulation as one of the characteristic activities of living organisms, and this leads him to the daring hypothesis that the whole Earth is equivalent to a single living organism. … Lovelock clearly takes his Earth and organism comparison seriously enough to devote a whole book to it. He really means it.”18
Dawkins’ writings have often carried the assumption that evolution happens in a parallel fashion among multiple competing organisms from which the unfit are fatally culled. With regard to the evolution of Gaia, for example, he wrote: “there would have to have been a set of rival Gaias, presumably on different planets. … The Universe would have to be full of dead planets whose homeostatic regulation systems had failed, with, dotted around, a handful of successful, well-regulated planets of which Earth is one.”
As evolution is described here, however, it does not require competition among multiple species, along with extinction by some. The discovery of new gene patterns that are better able to cooperate toward their mutual proliferation is the constructive mechanism through which evolution develops new and better biological life. Neither competition nor culling of the unfit is necessarily required.
To his already enormous credit, however, Dawkins also wrote: “I do not deny that somebody may, one day, produce a workable model of the evolution of Gaia … although I personally doubt it. But if Lovelock has such a model in mind he does not mention it.”19 Well, allow me to suggest a workable model of an evolving Gaia.
♦ ♦ ♦
Natural selection chooses some patterns of life over others primarily on the basis of their respective abilities to proliferate. This allows us to conceive of evolution operating on a sole entity, such as a single system of aggregate life composed of many interdependent species. It is a serial style of evolution based on ever better forms of cooperation among its various species. The beneficial changes resulting from this style of evolution simply unfold sequentially through time, as new and better forms of cooperation are discovered. And the net effect is ever greater proliferation of aggregate life. It appears to be a much more accurate description of how evolution really works than the widely accepted parallel style that relies on competition.
This serial style of evolution applies to Lovelock’s model of Gaia in which evolving patterns build themselves up through ever better relationships of cooperation. From among those patterns, the fittest—which tend to be the most cooperative in this model—are naturally selected by way of their greater proliferation, without any need for competition or culling of the unfit.
Consider the 2016 book by Russian complexity scientist Peter Turchin, Ultrasociety: How 10,000 Years of War Made Humans the Greatest Cooperators on Earth.20 The central idea is that physical competition and mortal conflict were necessary for eliminating entire groups of noncooperators, leaving just the groups of cooperators to survive. But when we model natural selection in terms of differential proliferation, we may conclude that no war was ever required. While many wars certainly did happen over the past 10,000 years, general cooperation was likely destined to emerge and flourish even if they hadn’t happened.
Cooperation naturally emerges because it creates mutually beneficial synergies. And those synergies yield evolutionary advantage to the cooperators, enabling their greater proliferation. Two cooperative families, for example, might take turns caring for each other’s children, realizing synergistic efficiencies that would enable both families to diligently raise more children than would have otherwise been possible. We should therefore expect groups full of cooperators to proliferate their populations more rapidly than groups full of competitors. And any planetary system of sufficiently intelligent life will, over time, become increasingly dominated by the faster-growing groups of cooperators, without anyone ever having to die prematurely.
This cooperation-based interpretation of evolution gives us new insight into a decades-old debate among evolutionists over the concept of group selection. The debate focuses on whether natural selection needs to operate at the group level to explain how group-benefiting behaviors can naturally emerge. To fully expose the dilemma, Richard Dawkins imagines two very different groups—one composed of cooperative altruists and the other composed of individuals who are purely selfish. Dawkins suggests that the group of altruists, “whose individual members are prepared to sacrifice themselves for the welfare of the group, may be less likely to go extinct than a rival group whose individual members place their own selfish interests first.” But, there’s a catch: “Even in the group of altruists, there will be a dissenting minority who refuse to make any sacrifice. If there is just one selfish rebel, prepared to exploit the altruism of the rest, then he, by definition, is more likely than they are to survive and have children. Each of these children will tend to inherit his selfish traits. After several generations of this natural selection, the ‘altruistic group’ will be overrun by the selfish individuals, and will be indistinguishable from the selfish group.”21
Dawkins has expressed his belief that natural selection operating at the level of genes is sufficient to account for the emergence of group-benefiting behaviors. And arguments presented here support that belief. When natural selection is defined as proliferation of the fittest (rather than elimination of the unfit), there is then no difference between selection at the genetic level and selection at the group level. Groups are selected to the extent genes within them proliferate.
Groups are naturally selected by their differential ability to grow their populations. And cooperative groups will always tend to proliferate more rapidly than uncooperative groups. Mutually beneficial cooperation simply bubbles forth from within such a group. No individual needs to die, and no group needs to be eliminated, for group selection to occur. In fact, no competition at all between groups is ever required for evolutionary progress, other than to see which can sustainably grow its population the fastest.
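To illustrate the point, here is a toy model in which two groups both keep growing, with the cooperators growing somewhat faster because of their synergies. The growth rates and starting sizes are purely my own illustrative assumptions; the point is only that no one is culled, yet the cooperators come to dominate the combined population.

```python
# Toy model: selection by differential proliferation, with no culling.
# Growth rates and starting sizes are illustrative assumptions only.
def share_of_cooperators(generations,
                         coop_growth=1.05,      # cooperators grow 5% per generation
                         noncoop_growth=1.02):  # non-cooperators grow 2% per generation
    cooperators, noncooperators = 100.0, 100.0
    for _ in range(generations):
        cooperators *= coop_growth        # synergy -> faster proliferation
        noncooperators *= noncoop_growth  # still growing; nothing is eliminated
    return cooperators / (cooperators + noncooperators)

for gens in (0, 25, 50, 100, 200):
    print(gens, round(share_of_cooperators(gens), 3))
# The cooperators' share rises toward 1.0 even though the
# non-cooperators never shrink -- selection by proliferation alone.
```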
♦ ♦ ♦
The evolutionary value of cooperation over competition was recognized more than a century ago by the Russian intelligentsia. But the concept remained largely ignored by evolutionary thinkers in the West until nobleman Peter Kropotkin was exiled to English territory for political reasons. There, he wrote a series of articles (in English) discussing Darwin’s central theme of “struggle for existence,” later collected into a book titled Mutual Aid.22 About a century later, evolutionary theorist and historian of science Stephen Jay Gould penned one of his monthly columns titled “Kropotkin Was No Crackpot.” “Perhaps cooperation and mutual aid are the more common results of struggle for existence,” Gould opined. “Perhaps communion rather than combat leads to greater reproductive success in most circumstances.”23 Gould then presented a fascinating account of how and why Russians were more predisposed than Westerners to appreciate the evolutionary value of cooperation among animals and among humans.
Just a subtle twist in how we think of natural selection opens a new interpretation of evolution that emphasizes cooperation. We have simply elevated our focus, away from nature’s less favored species that are concerned with mere survival, upward to nature’s more preferred species capable of rapid proliferation. By shifting the emphasis to evolution’s successes rather than its failures, we reveal a clear directionality in how all kinds of progressive systems naturally develop—always toward ever greater degrees of synergistic cooperation among replicating patterns. That natural directionality determines how nature defines goodness and betterment, providing a bedrock foundation for a new system of naturalized philosophy. It also suggests a purpose to life—to advance evolution in the direction it was always destined to go—toward ever greater cooperation, mutualism, and symbiosis.
One potentially positive outcome of the COVID pandemic is that it was a wake-up call – if there was any doubt previously that we all live in one giant interconnected world, it should not have survived the recent pandemic. This is particularly true when it comes to infectious disease. A bug that breaks out on the other side of the world can make its way to your country and your home and cause havoc. It’s also not just about the spread of infectious organisms, but about the breeding of those organisms.
One source of infectious agents is zoonotic spillover, where viruses, for example, can jump from an animal reservoir to a human. So the policies in place in any country to reduce the chance of this happening affect the world. The same is true of policies for laboratories studying potentially infectious agents.
It’s also important to remember that infectious agents are not static – they evolve. They can evolve even within a single host as they replicate, and they can evolve as they jump from person to person and replicate some more. The more bugs are allowed to replicate, the greater the probability that new mutations will allow them to become more infectious, more deadly, or more resistant to treatment. Resistance to treatment is especially critical, and is more likely to develop in people who are only partially treated. Give someone an antibiotic that kills 99.9% of the bacteria infecting them, but stop before the infection is completely wiped out, and the surviving bacteria can resume replicating. Those survivors are likely to be the bugs most resistant to the antibiotic. Bacteria can also swap antibiotic-resistance genes, building up increasing resistance.
In short, controlling infectious agents is a world-wide problem, and it requires a world-wide response. Not only is this a humanitarian effort, it is in our own best self-interest. The rest of the world is a breeding ground for bugs that will come to our shores. This is why we really need an organization, funded by the most wealthy nations, to help establish, fund, and enforce good policies when it comes to identifying, treating, and preventing infectious illness. This includes vaccination programs, sanitation, disease outbreak monitoring, drug treatment programs, and supportive care programs (like nutrition). We would also benefit from programs that target specific hotspots of infectious disease in poor countries that do not have the resources to adequately deal with them, like HIV in sub-Saharan Africa, and tuberculosis in Bangladesh.
Even though this would be the morally right thing to do (enough of a justification, in my opinion), and is in our own self-interest from an infectious disease perspective, we could further leverage this aid to enhance our political soft power. These life-saving drugs are brought to you by the good people of the USA. No one would begrudge us a little political self-promotion while we are donating billions of dollars to help save poor sick kids, or to stamp out outbreaks of deadly disease in impoverished countries. This also would not have to break the budget. For something around 1% of our total budget we could do an incredible amount of good in the world, protect ourselves, and enhance our prestige and political soft power.
So why aren’t we doing this? Well, actually, we are (as I am sure most readers know). The US is the largest single funder of the World Health Organization (WHO), providing about 15% of its budget. One of the missions of the WHO is to monitor and respond to disease outbreaks around the world. In 1961 the US established USAID, which united our various foreign aid programs into one agency under the direction of the Secretary of State. Through USAID we have been battling disease and malnutrition around the world, defending the rights of women and marginalized groups, and helping to vaccinate and educate the poor. This is coordinated through the State Department specifically to make sure this aid aligns with US interests and enhances US soft power.
I am not going to say that I agree with every position of the WHO. They are a large political organization having to balance the interests of many nations and perspectives. I have criticized some of their specific choices in the past, such as their support for “traditional” healing methods that are not effective or science-based. I am also sure there is a lot to criticize in the USAID program, in terms of waste or perhaps the political goal or effect of specific programs. Politics is messy. It is also the right of any administration to align the operation of an agency like USAID, again under the control of the Secretary of State, with their particular partisan ideology. That’s fine, that’s why we have elections.
But most of what they do (both the WHO and USAID) is essential, and non-partisan. Donating to programs supplying free anti-tuberculosis drugs in Bangladesh is not exactly a controversial or burning partisan issue.
And yet, Trump has announced that the US is withdrawing from the WHO. This is a reckless and harmful act. This is a classic case of throwing the baby out with the bathwater. If we have issues with the WHO, we can use our substantial influence, as its single largest funder, to lobby for changes. Now we have no influence, and just made the world more vulnerable to infectious illness.
Trump and Musk have also pulled the rug out from USAID, for reasons that are still unclear. Musk seems to think that USAID is all worms and no apple, but this is transparent nonsense. The rhetoric on the right is focusing on DEI programs funded by USAID (amounting to an insignificant sliver of what the agency does), but is ignoring or downplaying all of the incredibly useful programs, like fighting infectious disease, education, and nutrition programs.
Another part of the rhetoric, which is why many of his supporters back the move, is that the US should not be spending money in other countries while we have problems here at home. This ignores reality – fully 50% of the US budget goes to welfare, including Social Security, Medicare, Medicaid, and all other welfare programs. Around 1% (it varies year to year) goes to USAID. It is not as if we cannot afford welfare programs in the US because of our foreign aid. It’s just a ridiculous and narrow-minded point. If you want a more robust safety net, then that is what you should vote for and lobby your representatives for, at the state and federal level. But foreign aid is not the problem.
Further, foreign aid should be thought of as an investment, not an expense. Again – that is part of the point of having it under the direction of the State Department. USAID can help prevent conflicts that would be even more costly to the US. It can reduce the risk of deadly infectious diseases coming to our shores. Do you want to compare the total cost of COVID to the US economy to the cost of USAID? This is obviously a difficult number to come by, but by one estimate COVID-19 cost the US economy $14 trillion. That is enough to fund USAID at 2023 levels for 350 years. So if USAID prevents one COVID-like pandemic every century or so, it is more than worth it. More likely, however, it will reduce the deadliness of common infectious illnesses, like HIV and tuberculosis.
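As a rough sanity check on that comparison (assuming the roughly $40 billion annual USAID budget that the 350-year figure implies): $14 trillion ÷ $40 billion per year ≈ 350 years, so preventing even one pandemic of that scale per century would cover the agency’s cost several times over.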
Even if you can make a case for reducing our aid to the world’s poor, doing so in a sudden and chaotic fashion, without warning, is beyond reckless. Stopping drug programs mid-course is a great way to breed resistance. Food and drugs are sitting in storage and cannot be distributed because funding has been cut off. It’s hard to defend this as a way to reduce waste. The harm that will be created is hard to calculate. It’s also a great way to evaporate 60 years of American soft power in a matter of weeks.
I am open to any cogent and logical argument to defend these actions. I have not seen or heard one, despite looking. Feel free to make your case in the comments if you think this was anything but heartless, ignorant, and reckless.
The post Cutting to the Bone first appeared on NeuroLogica Blog.
They watch us. They learn about us and our habits. We are a big part of the environmental conditions to which many of them have adapted.
They’re like us. They hang around in groups. Individuals have different personalities. Pairs bond together for years at a time, maybe lifetimes, and they take good care of their kids. They’re loud, opportunistic, mischievous, and messy. And they’re smart.
Meet Crows
Members of the genus Corvus—crows—include birds with “crow” in their common names as well as ravens, rooks, and jackdaws. There are 45–50 named species of Corvus at the moment (the naming of species is a dynamic field), though that range will change and increase as more information from populations in unstudied areas becomes available. They are medium- to large-sized birds with big heads relative to body size and, usually, large to massive bills. They live all over the world except South America and Antarctica, in the varied habitats that exist across continents and on islands, from southern to northern high latitudes.
American Crows live in populations having different social organizations and dynamics in different regions of North America. The members of populations that breed in the north migrate in spring and fall of each year, with known one-way travel distances of up to 1,740 miles (2,800 km). Each year, they spend months commuting, and they live in one place during spring and summer and another in fall and winter. During migration, they spend nights at the same giant communal roosts along their routes, much like humans returning to the same campgrounds on annual road trips.
Ravens are the largest crows and occur all around the northern hemisphere. Most don’t breed until they’re 4 years old, and some not until they’re 10. Once they become breeders, ravens tend to live in pairs. In the years between leaving natal areas and breeding, individuals join other nonbreeders in small groups and larger flocks as they jointly acquire the skills needed for successful breeding, including being able to reliably find food (carcasses that are unpredictable in when and where they’ll become available, across huge landscapes).
New Caledonian Crows occur on only two of New Caledonia’s islands. Since they inhabit primary forests, observing them in the field is challenging. They are not very social, although kids tend to stick close to parents for extended periods, up to 2 years. New Caledonian Crows are the only nonhuman animals known to make tools from materials with which they have no experience1 and the only ones known to make cumulative improvements to tool design over time.2
I have had the opportunity to observe and study many crows myself, and to learn about the behavior and cognitive capabilities of other species through experimental research and fieldwork published by other scientists. There is a tendency to want to compare the results of studies of cognition in nonhuman animals to humans. How do they measure up? How about compared to apes? Such comparisons are not straightforward even among mammals, and even less so for crows. Crows are a very different type of organism: they live in three-dimensional worlds, without hands, and with brains, eyes, and ears that differ from those of mammals. And yet in test after test, crows perform equal to or even better than apes, and are on par with human children or occasionally even exceed adult human capabilities!
American Crows
I began studying American Crows in the early 1980s, on a golf course in Encino, CA. For purposes of my research, I needed to be able to tell individuals apart, and so I had to catch them, to be able to mark them. They were very difficult to capture! With the use of traps and nets3, 4 and climbing to nests to temporarily obtain and mark late-stage nestlings,5 I got a bunch marked and was able to peer into their worlds. They are one of the most civilized species of which I am aware.
The crows I studied in California were year-round residents that nested colonially (that is, having lots of nests in the same general area) and defended only the small areas of space immediately surrounding their nests—if they defended any space at all. Neighbors often foraged together, members of breeding groups were regularly observed in others’ core areas, and breeders rarely prevented others from entering their nest tree or landing on or near their nests. Most individuals did not breed until they were at least 3 or 4 years old, and many nonbreeders remained in natal areas associating with parents or joined the resident non-breeding flock.
I had one of my favorite fieldwork experiences, ever, on that golf course: Because population members had come to associate me with things that caused them distress (e.g., climbing to their nests), they transferred that to other situations and would yell at me, when I arrived, after something bad had happened. One day I drove around on golf cart paths looking for the cause of their yelling, and on the ground found a female with an injured wing. She could not fly but she could run, and the crows dive-bombed and yelled at me as I chased her down. I had her examined by a vet and taped up, and she spent 8 weeks in a cage in my bedroom as her wing healed. In the field, her mate and 1-year old daughter continued to care for the four nestlings in her nest. Three weeks after I took her to my place, a strong storm blew her nest out of its tree and all of the nestlings died. Her mate and daughter hung around for another two weeks, but then were not seen very often. After two months her bone had healed, but her flight muscles had atrophied. I moved her to a flight cage and put her through regular daily exercises. Finally, eleven weeks after her removal, I brought her to the golf course.
I wafted her into her nest tree and threw a bunch of peanuts on the ground. Crows began to fly to the peanuts, and she joined them. Almost immediately, I saw her mate headed right for her from across a busy 4-lane road. He landed beside her and both of them bowed low to each other and produced a slow, melodic, low frequency vocalization that I had never heard before. The pair then proceeded to walk around the group of peanut eating crows and stopped to bow and vocalize to each other three more times. I was crying. The pair was reunited.
The crows I studied in Oklahoma were year-round residents in small-to-large territories that were only sometimes defended against neighbor intrusion. Most delayed breeding until at least 3–4 years old, and many remained “at home” with parents until they bred. Many also left home and moved in with other groups within the population before becoming breeders. Individuals had friends in groups other than their own, and some that had moved out of the population returned occasionally to natal territories and spent time with their parents. Some visited their siblings in other groups, and some moved in with their siblings’ families. Several males established territories adjacent to their parents, and extended families of at least three generations would spend time together.6
One day, I sat in my car watching a group in a residential backyard. One of the crows walked along a wooden fence railing to the end post and attempted to get at something in the interior of the hole supporting the railing. Unsuccessful with its bill, it pecked at the wood surrounding the hole and loosened a section at the top, pulling on it until a triangular piece of wood broke off. The crow placed the piece of wood under its feet, with the wide end closest to its body, and hammered several times at the tapered end. It then picked up the piece of wood by the wide end and probed the hole with the tapered end for about 20 seconds. Another crow in the group called from some distance away, and the toolmaker placed the probe into the hole and took off. I went to the hole, saw only the remains of a spider’s web, and retrieved the probe. It did not match the gap from which it had been pulled—the tapered end had clearly been narrowed.7 A few days later when I approached the post, a large spider dashed out of the hole.
Also in Oklahoma, I watched the mother of the nestlings in the nest to which my co-worker was headed hammer repeatedly at a branch of a nearby pine tree. At first, I thought she was exhibiting displacement behavior but then a pinecone at her feet loosened (she had been hammering at its connection), and she carried it to above my co-worker and dropped it right on his head! She repeated this behavior three more times, hitting him on 3 out of the 4 tries.
So that I could observe crows behaving naturally when I wasn’t trying to capture them or get to nestlings, I donned a disguise on the latter occasions. Years later, it was officially demonstrated that American Crows can remember “dangerous” human faces for at least 2.7 years,8 and they can even learn whom to worry about from others!9
Ravens
Ravens are scavengers and regularly store (cache) away surplus food obtained at carcasses, and they rely on their caches for sustenance. They are not known to use tools much in the wild.
Caching behavior has been the focus of many studies10, 11 and ravens are skilled strategists. If they know another raven is watching them, they will go to a location out of the observer’s view before caching. Cachers behave differently in the presence of competitors who have or have not seen the caching event: if they have been watched while storing food, cachers move their caches when knowledgeable competitors get close.
Competitors behave differently depending on the situation. If they know where the cache is but the raven they’re paired with doesn’t know that they know, they run right over and retrieve it. If they’re paired with the cacher and the cacher knows they’re wise to the location, they act as if they don’t know and dawdle and fiddle around, seemingly hoping to take advantage of any lapse in focus by the cacher. This level of understanding of what others know rivals that demonstrated in chimpanzees.12
When given an opportunity to pay attention to another cacher while caching, ravens performed better than humans when asked to retrieve both their own and the other cacher’s caches.13 And when paired with a partner who kept taking advantage of the situation, a raven employed a human-like solution: deception. Ravens were trained to find and retrieve food hidden in a maze of film canisters, and one raven was better at it than a dominant male. At some point, the dominant raven quit playing and would just wait for the other one to choose a canister and begin to open the lid, then fly over and steal the food. The raven “being taken advantage of” then changed tactics: it would first go to a canister it knew to be empty and pretend to try to open it. When the dominant bird flew over and was distracted for a few seconds expecting to get the food, the other flew to a canister it knew to be filled, and got the food!14
Ravens successfully solved the problem of obtaining meat dangling from a branch by pulling up sections of the string and stepping on them to keep them from falling back down. And then they were successful with the non-intuitive task of pulling down on the string to bring the meat up.15 They did as well as apes in tests of choosing appropriate tools (despite not using tools in the wild), and they did better than orangutans, chimps, and bonobos when asked to choose the correct currency for bartering for food.16 Ravens were able to select the right tool in environments different from where they had learned to use it, even with 17-hour (overnight) delays between selecting the appropriate tool and being able to use it, providing evidence of their forward-planning abilities. They did better than 4-year-old children in first-trial performances at the tool- and currency-choice experiments,17 and they are perceptive enough to follow the gaze of a human to a location out of view and hop over to see what’s up.18
Ravens form alliances, reconcile conflicts, and console distressed partners.19, 20, 21 They remember former group members and their relationships with them after long (years) periods of separation.22 When disappointed or frustrated, for example by being offered less preferred food, they respond in a way that other ravens observing them can identify, and the observers themselves are then negatively affected.23 In measures of value, compatibility, and security, the quality of raven social relationships was said to be analogous to those of chimpanzees.24
New Caledonian Crows
In 1996, a paper published in Nature changed everything: New Caledonian Crows were manufacturing tools, in the wild, at a level of complexity never before seen among nonhuman animals.25 To extract prey from burrows and natural crevices, they make hooks and probes from twigs and pieces cut from leaves, some of which require sophisticated manipulation and modification skills. No other nonhuman animals do anything like it.26, 27
Betty was the name of a New Caledonian Crow caught in the wild and taken (with several others) to the University of Oxford for testing.28, 29 She was partnered in a cage with a male given the name Abel. In an early test, Betty and Abel were allowed into a room with a table that had a clear plastic vertical tube, secured in a plastic pan, containing a basket-shaped container of meat at the bottom. There were two wires on the table; one had already been bent so there was a hook at one end. The researchers wondered if one of the crows would use the hook to grab the basket handle, and Betty at first picked it up, but Abel took it from her and flew away with it. Betty wanted the meat. She picked up the straight wire (in her bill) and inserted it into the tube but, of course, it was useless in its straight form. And so with force, she jammed the wire into a corner of the pan several times and bent it into a hook! She then used the hook to get the basket. Her behavior made clear that she held a mental representation of the problem and its solution, and therefore of the instrument she needed to make.
Crows perform as well as or even better than apes, and are on par with human children, occasionally even exceeding adult human capabilities. New Caledonian Crows were able to spontaneously solve a “metatool” task (using a short tool to obtain a longer one needed for food extraction),30 and they were able to keep in mind the out-of-sight tools they had available (and where they stored them), while performing sequences of tool tasks, providing strong evidence that they can plan several moves ahead.31
From field and lab studies of tool behavior, scientists have also learned that New Caledonian Crows:
Such selectivity suggests these crows have representations of the situations in their minds and so can select the appropriate tools. They also tend to keep their preferred tools safe, under their feet and in holes.36
Individual New Caledonian Crows tend to be lateralized in their tool use (i.e., right- or left-billed): they usually hold probe tools in their bills, with the nonworking ends pressed against one side of their heads37 and individuals prefer one side over the other for different tasks.38 Lateralization is thought to be associated with complex tasks and mental demands (i.e., as tasks increase in difficulty, “control areas” in brains become specialized/localized),39, 40 suggesting that, as in humans, species-level lateralization is an adaptation for efficient neural processing of complex tasks.41
Tools made by New Caledonian Crows from Pandanus leaves come in three types, all with a barbed edge: unstepped narrow and wider probes, and “stepped” tools,42 the latter being made through a sequenced process involving a series of distinct snips and tears along the barbed edge of a leaf to produce a probe that increasingly narrows toward the tip (the “steps”) and has barbs along one edge.43 Evidence from more than 5,500 tools suggests that narrow and stepped variations were likely improvements on the wide-tool design.44
And so, these crows have evolved minds powerful enough to develop and improve upon tool design, something thought possible only by humans (technological progress is thought to be one of our hallmark characteristics). That there is geographic variation in tool manufacture and the innovations are passed from generation to generation45, 46, 47 suggests there may be cultural mechanisms at work.
Experiments with captive-bred, hand-raised New Caledonian Crows have demonstrated a strong genetic component to tool interest, manufacture, and use—young crows start playing with twigs, leaves, and crevices on their own, suggesting the phenomenon is an evolved adaptation.
More Crows
In preparation for studies of cognition and neurophysiology, crows have been trained to monitor screens in experimental setups and respond to visual and auditory signals so that they can then be trained to do all kinds of other things. Carrion Crows, for example, have been trained to identify complex pictures despite distractions,48, 49 express their understanding of the concept of greater than/less than,50 and respond to switching between “same” and “different” rules provided both visually and auditorily.51 They’ve been trained to discriminate quantities ranging from 1 to 30 dots on a screen52, 53 and they have been trained to peck different numbers of times and to indicate “I’m done” when they’re finished with their answer.54, 55 That these birds can understand the training protocols is almost as impressive as the results of the studies!
Jackdaws performed on par with apes in a test of self-control over motor impulses and did better than bonobos and gorillas despite brains that are 70–94 times smaller.56 Unlike chimpanzees, and similarly to observations about ravens described earlier, jackdaws respond to human gaze and nonverbal cues like pointing.57
Hooded Crows have been shown to be capable of analytical reasoning. In tests called “match-to-sample,” where subjects are presented with paired stimuli that are the same or different (e.g., in size or shape) and then asked to match the concepts of “same” or “different” to brand new items, crows spontaneously perceived and understood the relationships without any specific training in categories of size, shape, and color.58 Such analytical thinking is thought to be foundational for “categorization, creative problem solving, and scientific discovery,” and was thought to be uniquely human.59
Carrion Crows were able to learn the Arabic numerals 1–4 and then produce matching numbers of vocalizations (e.g., “caw caw caw” for 3) when prompted by the visual image or an auditory cue.60 The modality of the cue did not affect their performance, indicating that their vocal production was guided by an abstract numerical concept. Evidence also indicates that the crows were planning the total number of vocalizations before they started vocalizing and that when errors were made—too few or too many—the crows had started out correctly but “lost track” along the way.
Carrion Crows have also been shown to be capable of recursion: the cognitive ability to process paired elements embedded within a larger sequence.61 For example, a “center-embedded” sequence would appear as [{}] and is analogous to “the crow the experimenter chose passed the test,” with {} corresponding to “the experimenter chose.” An ability to use recursion could expand, potentially without limit, the range of ideas and concepts that can be communicated. Carrion Crows outperformed macaques and performed on par with human children in tests of recursive abilities, yet another characteristic once thought to be unique to humans.
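To make the structure concrete, here is a minimal sketch of what “center-embedded” means; the bracket sequences are made-up stand-ins, not the study’s actual stimuli. The rule is that every closing symbol must match the most recently opened one, which is exactly the nesting a recursive grammar generates.

# Illustrative sketch: distinguishing center-embedded (properly nested) bracket
# sequences such as "[{}]" from crossed or unmatched ones such as "[{]}".
PAIRS = {"]": "[", "}": "{", ")": "("}

def is_center_embedded(seq: str) -> bool:
    """Return True if every closing symbol matches the most recently opened one."""
    stack = []
    for symbol in seq:
        if symbol in PAIRS.values():          # opening symbol
            stack.append(symbol)
        elif symbol in PAIRS:                 # closing symbol
            if not stack or stack.pop() != PAIRS[symbol]:
                return False
        else:
            return False                      # unknown symbol
    return not stack                          # everything opened was closed

if __name__ == "__main__":
    for s in ["[{}]", "{[]}", "[{]}", "[{}"]:
        print(f"{s}: {'center-embedded' if is_center_embedded(s) else 'not nested'}")

The stack here is doing the bookkeeping that, in the experiments, the birds apparently do in memory: tracking which opening element is still awaiting its partner.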
Rooks are not known to use tools in the wild, but they figured out that by plugging specific holes in the floor of an aviary (including tapping in the plugs!), they could create pools of water in which individuals could drink and bathe.62 Rooks also learned to get food in a trap-tube task (inserting a probe into one end of a tube with holes in it, in order to push out a food reward) and transferred what they learned to a new task on their first try,63 rivaling the physical intelligence of chimpanzees.64 One rook transferred concepts to two additional tasks, indicating she understood the physical aspects of the challenges (including gravity) and was able to “abstract rules” and form mental representations.65
In another set of experiments,66 rooks pushed stones into tubes to collapse a platform to obtain a worm and immediately transferred the concept and picked up stones to drop them in tubes. They chose the correctly sized stones when tube diameters were changed; when no stones were provided, they left the testing room to go outside to collect stones before returning to the testing apparatus! When conditions changed, they immediately used (provided) sticks in lieu of stones; heavy sticks were dropped in and light ones were shoved, suggesting goal-directed thinking. They solved a metatool task, were able to modify branches into functional tools, understood how a hook functioned and used one to retrieve a basket of food at the bottom of a tube, and bent straight wires into hooks, thereby rivaling the abilities of tool-using New Caledonian Crows. All of these findings provide evidence for insight being involved in the problem-solving abilities of rooks.
Final Thoughts
The “marshmallow test” is one of the most well-known and compelling demonstrations of the human ability to delay gratification. Videos showing children struggling to not eat the marshmallow after the experimenter and parent leave the room, so that they may claim more marshmallows later, are endearing and powerful demonstrations of experiences long thought to be uniquely human: impatience, frustration, self-control, reward and gratification, and the ability to plan ahead. Ravens, Carrion Crows, and New Caledonian Crows all aced versions of the marshmallow test, thereby breaching another hallmark.67, 68
♦ ♦ ♦
Crows play, have friends, and mourn the death of friends and family members.69, 70 It’s said that the more similarities in cognitive capabilities, biases, and types of errors come to light, the more likely it is that crows think like we do. And although their brains are built differently and most tests so far were originally designed for mammals, the list of cognitive capabilities crows share with us is already pretty impressive: abstract rules and analytical reasoning, consolation and reconciliation, mental representations and goal-directed behavior, innovation and insight, technological advances, transfer of concepts, knowing what others know, lateralization, tool manufacture and use, metatool use, comprehending quantity and numbers, planning for the future, recursion, motor and vocal control, tactical deceit, and even tracking humans, remembering our faces, and deciphering our intentions.
I wonder what else crows might show us if we knew what and how to ask. We are similar in that we are diurnal and we rely mostly on vision and hearing to perceive and respond to our surroundings, but our umwelts (the term coined by the biologist von Uexküll for the different perceptual worlds of different organisms) differ in myriad other ways. Right? They pick through poop to find bugs! They stand on ice in bare feet! They fly!
I wish we could know how they think, and that in contexts such as greed, selfishness, cruelty, and war, we could think more like they do.
If you think about the human hand as a work of engineering, it is absolutely incredible. The level of fine motor control is extreme. It is responsive and precise. It has robust sensory feedback. It combines both rigid and soft components, so that it is able to grip and lift heavy objects and also cradle and manipulate soft or delicate objects. Trying to replicate this functionality with modern robotics has been challenging, to say the least. But engineers are making steady incremental progress.
I like to check in on how the technology is developing, especially when there appears to be a significant advance. There are two basic applications for robotic hands – for robots and for prosthetics for people who have lost their hand to disease or injury. For the latter we need not only advances in the robotics of the hand itself, but also in the brain-machine interface that controls the hand. Over the years we have seen improvements in this control, using implanted brain electrodes, scalp surface electrodes, and muscle electrodes.
We have also seen the incorporation of sensory feedback, which greatly enhances control. Without this feedback, users have to look at the limb they are trying to control. With sensory feedback, they don’t have to look at it, overall control is enhanced, and the robotic limb feels much more natural. Another recent addition to this technology has been the incorporation of AI to enhance the learning of the system during training. With AI algorithms, the software that translates the electrical signals from the user into the desired robotic movements is much faster and more accurate.
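As a rough illustration of that translation step, here is a minimal decoding sketch. It is not the pipeline from any particular system; the simulated two-channel signals, the window size, the features, and the logistic-regression classifier are all assumptions chosen for clarity.

# Generic sketch of intent decoding: windowed muscle-signal features -> command.
# Signals are simulated; feature choices and the classifier are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def window_features(emg: np.ndarray, win: int = 200) -> np.ndarray:
    """Split a (samples, channels) recording into windows and compute two
    classic features per channel: mean absolute value and sign changes."""
    feats = []
    for start in range(0, len(emg) - win + 1, win):
        w = emg[start:start + win]
        mav = np.mean(np.abs(w), axis=0)
        zc = np.sum(np.diff(np.sign(w), axis=0) != 0, axis=0)
        feats.append(np.concatenate([mav, zc]))
    return np.array(feats)

# Simulated training data: two channels, two intents (0 = open, 1 = close).
rng = np.random.default_rng(0)
rest = rng.normal(0.0, 0.2, size=(2000, 2))   # low activity -> "open"
grip = rng.normal(0.0, 1.0, size=(2000, 2))   # high activity -> "close"
X = np.vstack([window_features(rest), window_features(grip)])
y = np.array([0] * 10 + [1] * 10)

decoder = LogisticRegression().fit(X, y)
print(decoder.predict(window_features(rng.normal(0.0, 1.0, size=(200, 2)))))

The real systems are far more elaborate, but the shape of the problem is the same: short windows of electrical activity in, a movement command out, learned from examples during training.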
A team at Johns Hopkins is trying to take the robotic hand to the next level – A natural biomimetic prosthetic hand with neuromorphic tactile sensing for precise and compliant grasping. They are specifically trying to mimic a human hand, which is a good approach. Why second-guess millions of years of evolutionary tinkering? They call their system a “hybrid” robotic hand because it incorporates both rigid and soft components. Robotic hands with rigid parts can be strong, but have difficulty handling soft or delicate objects. Hands made of soft parts are good for soft objects, but tend to be weak. The hybrid approach makes sense, and mimics a human hand with internal bones covered in muscles and then soft skin.
The other advance was to incorporate three independent layers of sensation. This also more closely mimics a human hand, which has both superficial and deep sensory receptors. This is necessary to distinguish what kind of object is being held, and to detect things like the object slipping in the grip. In humans, for example, one of the symptoms of carpal tunnel syndrome, which can impair sensation in the first four fingers of the hand, is that people will drop objects they are holding. With diminished sensory feedback, they don’t maintain the muscle tone necessary to keep their grip on the object.
Similarly, prosthetics benefit from sensory feedback to control how much pressure to apply to a held object. They have to grip tightly enough to keep it from slipping, but not so tight that they crush or break the object. This means that the robotic limb needs to be able to detect the weight and firmness of the object it is holding. Having different layers of sensation allows for this. The superficial layer detects touch, while the progressively deeper layers are activated with increasing grip strength. AI is also used to help interpret these signals, which in turn stimulate the user’s nerves to provide natural-feeling sensory feedback.
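Here is a toy sketch of how layered readings could drive grip force in a closed loop. The layer names, thresholds, and update rule are invented for illustration; this is not the controller described in the paper.

# Toy closed-loop grip controller driven by three hypothetical sensor layers.
# Thresholds and step sizes are made up for illustration.
from dataclasses import dataclass

@dataclass
class TactileReading:
    surface: float   # light-touch layer (contact detection)
    middle: float    # moderate-pressure layer
    deep: float      # high-pressure layer (risk of crushing)
    slip: float      # high-frequency vibration, a common proxy for slip

def update_grip(force: float, r: TactileReading) -> float:
    """Nudge grip force up on slip or loss of contact, down if the deep
    layer saturates (object about to be crushed)."""
    if r.deep > 0.8:                      # squeezing too hard
        return max(force - 0.05, 0.0)
    if r.slip > 0.3 or r.surface < 0.1:   # object slipping or barely held
        return min(force + 0.05, 1.0)
    return force                          # comfortable grip: hold steady

# Example: the object slips, the grip tightens, then backs off when the
# deep layer saturates.
force = 0.2
for reading in [TactileReading(0.5, 0.2, 0.1, 0.4),
                TactileReading(0.6, 0.3, 0.2, 0.1),
                TactileReading(0.6, 0.4, 0.9, 0.0)]:
    force = update_grip(force, reading)
    print(f"grip force -> {force:.2f}")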
They report:
“Our innovative design capitalizes on the strengths of both soft and rigid robots, enabling the hybrid robotic hand to compliantly grasp numerous everyday objects of varying surface textures, weight, and compliance while differentiating them with 99.69% average classification accuracy. The hybrid robotic hand with multilayered tactile sensing achieved 98.38% average classification accuracy in a texture discrimination task, surpassing soft robotic and rigid prosthetic fingers. Controlled via electromyography, our transformative prosthetic hand allows individuals with upper-limb loss to grasp compliant objects with precise surface texture detection.”
Moving forward they plan to increase the number of sensory layers and to tweak the hybrid structure of soft and rigid components to more closely mimic a human hand. They also plan to incorporate more industrial-grade materials. The goal is to create a robotic prosthetic hand that can mimic the versatility and dexterity of a human hand, or at least come as close as possible.
Combined with advances in brain-machine interface technology and AI control, robotic prosthetic limb technology is rapidly progressing. It’s pretty exciting to watch.
“If there is no enemy within, the enemy outside can do us no harm.”1
Ever since a Hezbollah suicide bomber in 1983 blew up a truck packed with explosives and killed 241 Marines in Beirut, combating Islamic terrorist organizations has been a priority for U.S. intelligence, security, and law enforcement agencies. However, for those of us who followed the spiraling growth of Islamic terrorism in the 1980s and 1990s, it seemed as if the U.S. was sluggishly reactive. Its agencies made little headway in extensive counterterrorism programs designed to penetrate and dismantle Islamic terror groups.2
The 9/11 attack put a spotlight on the failures of the security agencies tasked to protect the U.S. against acts of terror. How was it possible for 19 hijackers and their ambitious plot to remain off the radar of intelligence and law enforcement? The truth, as I discovered during 18 months of reporting for my book, Why America Slept: The Failure to Prevent 9/11, was not that the plot had gone undetected, but rather that the agencies responsible for monitoring and fighting terrorism had failed to share information, something that would have made it possible to connect the dots before the attack occurred.
The failures were more substantive than mere interagency rivalries between the CIA, FBI, NSA, and local law enforcement. Exclusive interviews I had with top intelligence officers and FBI officials revealed that the dysfunction inside America’s counterintelligence programs was deep-rooted, the product of an internecine bureaucratic war that left little room for working together. Sharing information was given lip service but seldom practiced, particularly when the intelligence at stake was judged as having “high value.”
The most serious failure was the CIA’s tracking of two terrorists, Khalid al-Mihdhar and Nawaf al-Hazmi, when they moved from Saudi Arabia to California in 2000. If the CIA had alerted the State Department, the two Saudis would have been on a watch list that barred them from entering the United States. Once in California, however, the CIA could not legally monitor them domestically. The Agency not only lost track of the two Saudis but failed to let the FBI, which is specifically authorized to act within the U.S., know they were here. In July of 2001, only two months before 9/11, an FBI memo warned the American intelligence community that some bin Laden followers might be training at U.S. flight schools in preparation for an aerial terror attack. The CIA was unaware that al-Mihdhar and al-Hazmi had taken flight training while living in the U.S.
If the CIA had shared its information about the two Saudis, al-Mihdhar might have been detained in June 2001, when he returned to Saudi Arabia and his visa had expired. Or al-Hazmi might have been flagged when an Oklahoma state trooper pulled him over for speeding, since a driver’s license check in the national database would have triggered security alerts. Sharing the CIA’s security concerns about the duo would have meant the Transportation Department had a red flag on them. The pair even used their own names when making reservations on American Airlines Flight 77, which was flown into the Pentagon.
“Responsibility and accountability were diffuse,” the 9/11 Commission Report concluded a year after I had published Why America Slept.3 That was a diplomatic understatement of the paralyzing dysfunction between intelligence and security agencies and policy makers. The unintended consequence of such discord was to give the advantage invariably to the terrorists.
My reporting revealed that the dearth of cooperation between the country’s top security and intelligence services was not new to 9/11. Exposing how and why the breakdowns in communication between agencies began and persisted for decades explains why the world’s best law enforcement and intelligence agencies ended up fighting each other instead of combating Islamic terrorism.
“We knew that the Islamic threat was the next security problem for the U.S., and we had known it since the 1970s,” Duane “Dewey” Clarridge told me in a rare no-questions-off-limits interview in the wake of 9/11.4 Clarridge was a CIA legend. He was twenty-three when he joined the Agency in 1955 and over the next thirty years earned a reputation as one of its most accomplished covert operatives. Clarridge served in Nepal, India, and Turkey, before returning to headquarters in the 1970s. He became the chief of covert operations for the Near East Division, later ran Arab covert ops, then moved to the Latin American Division, before becoming the Rome station chief. It was during his three years in Arab operations that Clarridge became familiar with the key Islamic terrorists.
“We were running operations in Beirut against an alphabet-soup of Palestinian terror groups,” recalled Clarridge. “At the same time, Carlos the Jackal was running around Europe, pulling off stunts like trying to use a grenade launcher to down an El Al airplane at Orly, or shooting his way into the Vienna OPEC meeting, killing three, and kidnapping the Saudi and Iranian oil ministers. We had our hands full.”5
Terrorists ambushed and murdered Richard Welch, the CIA’s Athens station chief, two days before Christmas in 1975. Clarridge and his senior CIA colleagues wanted to go after the terrorists with covert assassination plans. The Agency’s timing was poor, however. Senate hearings into past misdeeds produced months of sordid headlines about the Agency’s 1960s assassination plots working with the Mafia to kill Fidel Castro, mind-control experiments, and failed foreign coups. Those hearings “permanently changed the way Clandestine Services operated,” says Clarridge. “It changed the rules of the game for us.”6
Congress initiated a process by which the Agency had to submit plans for its covert ops to a committee chaired by the president. Congress would be notified within sixty days after the president signed off. Permanent congressional oversight committees were established. The coup de grâce was President Ford’s Executive Order 11905 on February 18, 1976, that barred U.S. government agencies from undertaking assassinations.
The CIA abruptly halted its plans to eliminate Welch’s killers. For the next seven years, the Agency instead engaged in a mostly unsuccessful campaign to gather intelligence on leading Islamic terror groups in the hope of alerting allies to upcoming attacks.
The suicide truck bomber who struck the U.S. embassy in Beirut in 1983 changed everything. CIA Director William Casey and FBI Director William Webster immediately dispatched teams to find out what happened. There were conflicts between those teams from the start. They got so bad that the agents of the rival agencies sometimes got into screaming and shoving matches. The FBI team returned home early, angry and frustrated by what it complained was dismissive treatment by its CIA counterparts.7 The CIA’s new Beirut station chief, William Buckley, ultimately offered an olive branch to the FBI: he invited the Bureau to dispatch another team to Lebanon and investigate free of CIA micromanagement. The FBI solved the case by tracing a fragment of an axle from the bombing truck to an Iranian factory that had links to the Iranian-backed Popular Front for the Liberation of Palestine.
But by the time the FBI reached that conclusion, Iranian-sponsored terrorists had managed to kidnap Buckley in Beirut. That prompted President Reagan to create the government’s first joint task force to battle terrorism. The Restricted Interagency Group for Terrorism was chaired by the CIA’s director of covert operations, and it consisted of single representatives from the CIA, FBI, and the National Security Council. Dewey Clarridge was the Agency rep. The FBI’s man was Oliver “Buck” Revell, the assistant director for criminal investigations (I knew Buck well; he was the FBI Supervisor in Charge of the Dallas office when I researched the JFK assassination for my 1993 book, Case Closed). The National Security Council selected a U.S. Marine lieutenant colonel named Oliver North as its representative.
The new anti-terror group was in a rush to free Buckley before he could be tortured into giving up secrets. North wanted to use DEA informants—heroin traffickers who promised to deliver Buckley for $2 million. Dallas businessman Ross Perot agreed to finance the ransom to avoid U.S. laws that prohibited paying money to drug dealers. But the FBI, under the cautious leadership of William Webster, a former judge whom Jimmy Carter had appointed to run the Bureau, strenuously objected. North then backed a Clarridge operation to kidnap a Lebanese Shiite cleric, the head of Islamic Jihad, the organization holding Buckley. Clarridge wanted to trade the cleric for the CIA station chief. Again, the FBI’s fierce resistance scuttled the plan.
Clarridge fumed at the FBI’s intransigence and lobbied Casey to give the Agency more power in fighting terrorism. In January 1986, with a green light from Ronald Reagan, Casey created the Counterterrorism Center (CTC). Clarridge became its chief and he directed a staff of two hundred CIA officers, mostly analysts, as well as ten people loaned from other government intel and security agencies.8
Clarridge initially wanted to rely on the CIA’s foreign stations for surveillance, intel gathering, and informer recruitment, but that was not feasible since they were running at capacity. And, as Clarridge recalled, “the station chiefs were each narrowly focused on their own geographic divisions, while terrorism was a global problem that respected no boundaries.”9
Much to Clarridge’s disappointment, his only remaining option was to rely on the FBI for most of CTC’s field and operational assistance. It was against his better judgment since he thought the Webster-run FBI was far too risk-averse. Working with the Bureau also meant Clarridge had to run operations plans past FBI lawyers. “No one was very excited at the prospect of sharing national security secrets with lawyers at Justice,” recalled Clarridge.10
Clarridge quickly proposed an ambitious and risky operation to kidnap the Islamic Jihad hijackers of TWA Flight 847 and to fly them to America for trial. Webster contended the operation was likely to fail and that it violated both international and U.S. law. The standoff between Clarridge and Webster killed the plan.
The next proposed CTC op was to kidnap Mohammed Hussein Rashid, a top bomb maker who had gotten explosives past airport security machines hidden in a Sony Walkman. A CTC operation to grab Rashid in the Sudan failed. Clarridge blamed the FBI, whose field agents were responsible for what the bureaucracy dubbed an “extraordinary rendition.” The Bureau complained that the Agency’s intelligence was flawed.
Tensions between the CIA and FBI worsened during a series of bungled operations. Not only did the CTC botch the Rashid kidnapping, but a squad dispatched to free Beirut station chief Buckley also failed. It was also unsuccessful in tracking down the Libyan terrorists who bombed a Berlin disco frequented by American soldiers. The 1985 hijackings of TWA Flight 847 and the cruise liner Achille Lauro were headline news and made the U.S. look vulnerable and weak.
The FBI began a whisper campaign in Washington that the CIA’s jealous stewardship of CTC was its ruination. Those back door complaints resulted in a task force headed by Vice President George H.W. Bush. It proposed the FBI run its own “intelligence fusion center” to complement the CTC, but its recommendations were never implemented.11
Senior CIA officials complained bitterly to Reagan’s national security team that the FBI was overly cautious and that America was vulnerable to Islamic terrorists who had entered on legal visas and had set up sleeper cells. Reagan responded in September 1986 by creating the Alien Border Control Committee (ABCC), an interagency task force designed to block the entry of suspected terrorists while also finding and deporting militants who had entered the country illegally or had overstayed their visas. The CIA and FBI joined the ABCC effort with great fanfare.
The ABCC had its first success only six months after its formation. The CIA tipped off the FBI about a group of suspected Palestinian terrorists in Los Angeles and the Bureau arrested eight men. But instead of the operation being lauded, civil liberties groups contended that the ABCC should not be allowed to use information from the government’s routine processing of visa requests. Massachusetts Democratic Congressman Barney Frank, a strong civil liberties advocate, led a successful effort to amend the Immigration and Nationality Act so that membership in a terrorist group would no longer be sufficient reason to deny anyone a visa. The Frank amendment meant a visa could only be denied if the government could prove that the applicant had committed an act of terrorism.12 The amendment thereby rendered the ABCC toothless.
Meanwhile, the worsening relationship between the CIA and FBI hit a nadir within a couple of years when the weapons-for-hostages (Iran–Contra) scandal broke. The three key figures were the CIA’s Casey and Clarridge and the National Security Council’s North, all senior Counterterrorism Center officials. The FBI’s Buck Revell worried that the CIA and NSC might have violated U.S. laws prohibiting aid being given to the Contras and negotiating with terrorists. After Casey testified to Congress in November that he did not know who was behind the sale of two thousand TOW missiles to Iran (though the Agency was actively involved), Revell told FBI Director Webster that he thought Casey and other top Agency officials were obstructing justice. Webster authorized the Bureau to open a criminal investigation.
Casey was incapacitated by a stroke and hospitalized in early December. He resigned as CIA Director after surgery for a brain tumor a month later. Reagan tapped Robert Gates, the Agency’s Deputy Director, to take charge. But Gates soon withdrew his name when it became clear that questions about his role in Iran–Contra had scuttled any chance for Senate confirmation.13 After Gates’s withdrawal, Reagan offered the CIA job to Republican Senator John Tower, the head of the president’s Iran–Contra board. Tower declined. Reagan then got a no from James Baker, his chief of staff.14 Reagan and his team were in a panic. There were a dozen names on their list of possible CIA directors, but the president was set to make his first comments about the Iran–Contra scandal in a highly anticipated address to the nation on the evening of Wednesday, March 4. Reagan wanted to pick a new CIA director before that speech. Everyone agreed it had to be someone who would easily obtain Senate confirmation. That narrowed the field. On the morning of his national speech, Reagan met with FBI Director William Webster—who was in the final year of a 10-year term running the FBI—and surprised everyone by offering him the CIA post.
The news that the cautious FBI director had been asked to run the CIA sent shock waves through Langley and the ranks of senior spies. Webster was a Christian Scientist who relished a reputation as an inflexible straight arrow. He boasted his only vices were chocolate and tennis. Historian Thomas Powers concluded that the “CIA would rather be run by a Cub Scout den mother than the former head of the FBI.”15 Webster was disparaged by top officers like Clarridge, who had come to know his risk-averse management style.
“Since we at CTC had been working so closely with the FBI on terrorism,” Clarridge told me, “we had already heard a lot about Webster, and none of it was good. From the street level to the top echelons, they detested Webster because they saw him as an egotistical lightweight, a social climber, and a phony.”16
Webster had no background in foreign policy or world affairs. While Casey was judged inside the CIA as a kindred risk-taking spirit, especially by the covert teams, Webster’s cautious nature was exacerbated by an overwhelming fear of failure coupled with his strict insistence on not bending the letter of the law.
One of Webster’s first moves was to replace the CIA’s popular George Lauder, who had spent twenty years in the Operations Directorate, with William Baker, an FBI colleague. It was such an unpopular choice that no one clapped when Baker was introduced in the CIA’s main auditorium. When Baker told the agents they should study a new house manual called “Briefing Congress” and embrace the four “C’s”—candor, completeness, consistency, and correctness—many in the audience audibly snickered. “It was vintage FBI,” one agent in attendance told me. “It was what we expected.”17
Baker was not the only FBI colleague Webster brought along. Peggy Devine, his longtime executive secretary, had earned the nickname “Dragon Lady” at the Bureau. Also, his FBI chief of staff, John Hotis, and a group of “special assistants” made up what CIA employees derisively dubbed either the “FBI Mafia” or the “munchkins.” Some in the FBI contingent had Ivy League law degrees, but none had any intelligence background. And they effectively sealed Webster off from the rest of the Agency.
Meanwhile, the FBI investigation that had begun under Webster into the Iranian arms sales had kicked into high gear. FBI agents raided Oliver North’s office and retrieved key documents his secretary did not have time to shred. Another FBI team served an unprecedented warrant at CIA headquarters in Langley, VA. The agents ordered Clair George, the CIA deputy director for operations, to open his office safe. It contained a document, with two of George’s fingerprints, that showed he had misled Congress. That produced a ten-count indictment against George and the removal of three CIA station chiefs. As for Clarridge, a few days before the statute of limitations expired, he was indicted on seven counts of perjury and making false statements to Congress. Inside the CTC, many employees wore T-shirts with slogans supporting him.
There was a widespread sentiment inside the CIA that the FBI had gone from being a recalcitrant partner to an avowed enemy whose purpose was to destroy the Agency’s hierarchy and its way of conducting its operations. As Clarridge noted:
“We could probably have overcome Webster’s ego, his lack of experience with foreign affairs, his small-town-America world perspective, and even his yuppier-than-thou arrogance. What we couldn’t overcome was that he was a lawyer. All his training as a lawyer and a judge was that you didn’t do illegal things. He never could accept that this is exactly what the CIA does when it operates abroad. We break the laws of other countries. It’s how we collect information. It’s why we’re in business. Webster had an insurmountable problem with the raison d’être of the organization he was brought in to run.”18
Clarridge was not the only one who thought Webster’s legal background was a handicap for running a spy agency. Pakistan’s President Muhammad Zia-ul-Haq once asked Webster how it was possible for a lawyer to head the CIA. Webster did not answer.
Even before Clarridge’s indictment, Webster had officially reprimanded him for his role in Iran–Contra and, after promising to reassign him as the CTC director, had forced him to resign in June 1988. I spoke to nearly a dozen former operatives from the Directorate of Operations who confirmed, on background only, that the anger Clarridge expressed on the record about Webster was widespread throughout the CIA. The Agency had long prided itself on an unwritten code—Loyalty Up and Loyalty Down—and many CIA veterans felt that Webster had trashed that by going after agents like Clarridge.
It got worse for Webster when he tasked his chief of staff, John Hotis, and Nancy McGregor, a 28-year-old law clerk who had been one of his FBI administrative assistants, to rewrite the CIA’s regulations for covert operations. Webster had infuriated many intelligence agents when he compared covert ops to the FBI’s use of undercover agents in criminal probes. Under the new Webster rules, lawyers had to sign off on all covert plans. There was a long checklist required to get operations approved. The informal and fast-moving process of the past was history. Webster argued his rules instilled long overdue accountability in the Agency’s covert work. It was, countered CIA officials, the same framework that existed at the FBI and that had hindered the Bureau’s investigations for decades.
With the new rules in place over covert operations and having purged the CIA of half a dozen senior officers connected to Iran–Contra, some of the criticism of Webster started going public. Tom Polgar, a retired agent, wrote in an opinion editorial in The Washington Post that “the new watchword at the agency seems to be ‘Do No Harm’—which is fine for doctors but may not encourage imagination and initiative in secret operations.”19
Meanwhile, William Sessions, a former federal judge from San Antonio and a close friend of Webster’s, had become the new FBI director. With encouragement from Webster, Sessions expanded the number of FBI agents serving in counterintelligence abroad. Instead of welcoming the help, the CIA leadership was further irritated; it considered the FBI inept competitors who were only likely to compromise intelligence operations.
Webster thought he could reform the Agency to share information with the FBI. In April 1988, Webster announced a totally redesigned Counterintelligence unit. Headquartered in Langley, VA, its mission was to teach CIA and FBI agents how to compile, organize, and share data that would be useful to both agencies. The first test of that cooperation came in December 1988, when a bomb blew up Pan Am Flight 103 above Lockerbie, Scotland, killing 270 people. The U.S. government did not disclose that three Middle East-based CIA officers flying home for the Christmas holidays were among the victims.
Israel’s Mossad intelligence had warned the CIA two weeks earlier that it had intercepted information that a Pan Am flight from Frankfurt to the U.S. would be bombed in December. Pan Am Flight 103 was a Frankfurt to U.S.-bound plane. The CIA had never passed the warning to the FBI.
Webster’s redesigned CTC was put in charge of the U.S. investigation. In Scotland, more than a thousand police, soldiers, and bomb technicians scoured hundreds of square miles around the crash site. They bagged thousands of pieces of evidence, and in that haul was a fragment of a circuit board the size of a small fingernail. It matched an identical board found in a bomb-timing mechanism used in a 1986 terror attack in the West African country of Togo. CTC tracked the circuit board to a consignment of timers manufactured by a Swiss company that had sold twenty of them to Libya.
Although progress had been made on finding out how the bombing was done, according to one U.S. official, it was not very long before the investigation was a “chaotic mess” of noncooperation.20 Within a few months, there were competing theories about who was responsible. The CIA blamed Iran for hiring a Damascus-based radical Palestinian faction to carry out the operation. Taking out the American plane was, according to Vincent Cannistraro, a senior CTC officer, payback for the mistaken 1988 downing of an Iranian Airbus by a U.S. naval cruiser, which resulted in 290 civilian deaths. Cannistraro had become, after Clarridge’s departure, the major power inside CTC and its driving force.
Meanwhile, the FBI thought Libya was the sole culprit, seeking revenge for the 1986 U.S. air strikes on Libya.
Both agencies leaked their internal disagreements. Anonymous CIA officials were quoted in the press mocking the FBI’s analytical reports on the bombing as being “like essays from grade school,” whereas an unidentified FBI agent said that “CIA believe they have a lot, but it’s a Styrofoam brick.”21
Even when the Libyans became prime suspects, the two agencies fought over what should be done. The FBI wanted to wait for indictments and then arrest those charged. The CIA’s Webster, not surprisingly, supported the letter-of-the-law approach. But inside the CIA, especially in CTC, agents bristled that the Libyans were beyond the reach of U.S. law. Cannistraro argued for “removing” the suspects at any cost, even if that meant assassinating them or allowing Israel to do it on behalf of the U.S. But Webster would brook no such discussion. Frustrated with Webster’s limitations on covert ops, Cannistraro abruptly resigned in September 1990, just before Iraq invaded Kuwait. “The CTC is starting to look too much like the FBI,” he disparagingly told a former colleague after giving Webster his notice.22
A 1990 Senate panel concluded that Webster’s efforts had failed to overcome the extensive fragmentation and competition in the government’s counterintelligence efforts. The panel concluded that it was virtually impossible to cure the dysfunction by merely insisting that the CIA and FBI drop their long-standing mutual distrust and dislike. Only by completely recreating America’s intelligence and crime-fighting apparatus, the panel suggested, might it be possible to make substantive progress.
The 9/11 Commission made a series of recommendations for changes that could finally force the CIA and FBI and other elements of the national security apparatus to work together more effectively. Thousands of internal intelligence documents released since 2001, as well as Inspector General reports from the CIA and FBI, have paid great lip service to the “imperative of reform.”
The result over two decades later?
The CIA and FBI have overhauled their training of intelligence analysts, streamlined the management of the information collected and analyzed, and improved the coordination between the analytical units and operational teams. There are now redundancies designed to prevent intel from falling between the cracks. And there is greater accountability for failures.
What about the cooperation between the premier American law enforcement/intelligence agencies? Conversations with half a dozen currently serving and former officials from both the CIA and FBI give a mixed picture at best. The deep animosity from the Clarridge/Webster days is now mostly history. No one thinks of the other agency as a threat to its own survival. But skepticism that there is any benefit to be had by partnering with one another remains a significant obstacle to cooperation. CIA officers continue largely to view the FBI as highly paid police officers who are hobbled by Department of Justice lawyers. The FBI officials to whom I spoke pointed repeatedly to the CIA’s approval of torture for 9/11 detainees as a key reason the main terrorists at Guantanamo Bay have not gone to trial. “Maybe they would be better off,” one former FBI Counterintelligence officer told me, “if they had lawyers who told them when they were crossing the line instead of just rubber stamping every wild idea coming out of Langley.”
Do the CIA and FBI work together better today than before 9/11? Yes, in many respects. They have often had little choice with the rapid growth of more than 200 Joint Terrorism Task Forces (JTTF) since 1980. In the JTTFs, the FBI and CIA are only two of more than 30 law enforcement and intelligence agencies supplying analysts, investigators, linguists, and hostage rescue specialists, to combat international terrorism directed at the homeland. Since they do not run the operations, their sniping at each other is not as evident. But that does not mean the JTTFs are free of finger pointing between all the partners. For instance, a 2009 arrest of three Afghans in a terror plot in New York City was widely heralded as a law enforcement triumph, but I discovered that FBI agents were privately furious at New York Police Department detectives for blowing a chance to snare a larger terror sleeper cell.23
The rivalry for control of operations and investigations between the two 900-pound security gorillas in the U.S.—the CIA and FBI—continues. So does the desire to take credit for successful missions and put the blame on the other for failures. There are billions in annual budgets at stake, and each agency jealously guards its reputation. “They don’t make us better,” one retired CIA analyst contended in a conversation I had this summer. “They just compromise what we do best.”
That attitude cannot be eliminated through any series of bureaucratic reforms suggested by presidential commissions and Congressional hearings. It is a shame, however, because the failure of the country’s two premier national security agencies to work together seamlessly to fight terrorism and today’s enormous criminal cartels only works to the benefit of America’s many enemies. Crime and punishment is a difficult enough subject when the targets are international terrorists. However, infighting among those tasked with enforcement makes it exponentially more difficult.
As a sociologist interested in the scientific study of social life, I’ve long been concerned about the ideological bent of much of sociology. Many sociologists reject outright the idea of sociology as a science and instead prefer to engage in political activism. Others subordinate scientific to activist goals, and are unclear as to what they believe sociology’s purpose should be. Still others say different things depending on the audience.
The American Sociological Association (ASA) does the latter. In December 2023, the Board of Governors of Florida’s state university system removed an introductory sociology course from the list of college courses that could be taken to fulfil part of the general education requirement. It seemed clear that sociology’s reputation for progressive politics played a role in the decision. Florida’s Commissioner of Education, for example, wrote that sociology had been hijacked by political activists.1 The ASA denied the charge and went on to declare that sociology is “the scientific study of social life, social change, and the social causes and consequences of human behavior.”
While that definition certainly aligns with my vision of what sociology should be, it contrasts with another recent statement made by the ASA itself when announcing last year’s annual conference theme. That theme was “Intersectional Solidarities: Building Communities of Hope, Justice, and Joy,” which, as the ASA website explains, “emphasizes sociology as a form of liberatory praxis: an effort to not only understand structural inequities, but to intervene in socio-political struggles.”2 It’s easy to see how Florida’s Commissioner of Education got the idea that sociology has become infused with ideology.
The ASA’s statement in defense of sociology as the science of social life seems insincere. That’s unfortunate—we really do need a science of social life if we’re going to understand the social world better. And we need to understand the world better if we’re going to effectively pursue social justice. The ASA’s brand of sociology as liberatory praxis leads not only to bad sociology, but also to misguided efforts to change the world. As I’ve argued in my book How to Think Better About Social Justice, if we’re going to change the world for the better, we need to make use of the insights of sociology. But bad sociology only makes things worse.
Contemporary social justice activism tends to draw from a sociological perspective known as critical theory. Critical theory is a kind of conflict theory, wherein social life is understood as a struggle for domination. It is rooted in Marxist theory, which viewed class conflict as the driver of historical change and interpreted capitalist societies in terms of the oppression of wage laborers by the owners of the means of production. Critical theory understands social life similarly, except that domination and oppression are no longer simply about economic class but also race, ethnicity, gender, religion, sexuality, gender identity, and much more.
There are two problems with social justice efforts informed by critical theory. First, this form of social justice—often called “critical social justice” by supporters and “wokeism” by detractors—deliberately ignores the insights that might come from other sociological perspectives. Critical theory, like conflict theory more broadly, is just one of many theoretical approaches in a field that includes a number of competing paradigms. It’s possible to view social life as domination and oppression, but it’s also possible to view it as a network of relationships, or as an arena of rational transactions similar to a marketplace, or as a stage where actors play their parts, or as a system where the different parts contribute to the functioning of the whole. If you’re going to change the social world, it’s important to have some understanding of how social life works, but there’s no justification for relying exclusively on critical theory.
The second problem is that, unlike most other sociological perspectives, critical theory assumes an oppositional stance toward science. This is partly because critical theory is intended not just to describe and explain the world, but rather to change it—an approach the ASA took in speaking of sociology as “liberatory praxis.” However, the problem isn’t just that critical theory prioritizes political goals over scientific ones, it’s that it also sees science as oppressive and itself in need of critique and dismantling. The claim is that scientific norms and scientific knowledge—just like other norms and other forms of knowledge in liberal democratic societies—have been constructed merely to serve the interests of the powerful and enable the oppression of the powerless.
Critical theory makes declarations about observable aspects of social reality, but because of its political commitments and its hostile stance toward scientific norms, it tends to act more like a political ideology than a scientific theory. As one example, consider Ibram X. Kendi’s assertions about racial disparities. Kendi, a scholar and activist probably best known for his book How to Be an Antiracist, has said, “As an anti-racist, when I see racial disparities, I see racism.”3 The problem with this approach is that while racism is one possible cause of racial disparities (and often the main cause!), in science, our theories need to be testable, and they need to be tested. Kendi doesn’t put his idea forward as a proposition to be tested but instead as a fundamental truth not to be questioned. In any true science, claims about social reality must be formulated into testable hypotheses. And then we need to actually gather the evidence. Usually what we find is variation, and this case is likely to be no different. That is, we’re likely to find that in some contexts racism has more of a causal role than in others.
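To make “testable” concrete, here is a minimal sketch of the kind of check that claim invites, run on simulated data; the groups, outcome, rates, and contexts are invented for illustration, not real measurements. The question it asks is whether the size of a disparity differs across contexts, rather than assuming a single cause everywhere.

# Illustrative only: simulated data, not real measurements. We ask whether a
# group disparity in some outcome (e.g., callbacks on applications) differs
# across contexts, using a standard chi-square test of independence.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)

def simulate_context(rate_a: float, rate_b: float, n: int = 500) -> np.ndarray:
    """Return a 2x2 table of (group x outcome) counts under assumed rates."""
    a = rng.binomial(n, rate_a)
    b = rng.binomial(n, rate_b)
    return np.array([[a, n - a], [b, n - b]])

# Hypothetical contexts: a large disparity in one, nearly none in the other.
contexts = {"context_1": (0.30, 0.18), "context_2": (0.30, 0.29)}

for name, (rate_a, rate_b) in contexts.items():
    table = simulate_context(rate_a, rate_b)
    chi2, p, _, _ = chi2_contingency(table)
    n = table.sum(axis=1)
    print(f"{name}: group A {table[0, 0]}/{n[0]}, group B {table[1, 0]}/{n[1]}, p = {p:.3g}")

Nothing about this settles what causes a disparity; it only shows what it looks like to treat the claim as a hypothesis, gather evidence, and let the variation across contexts show up in the results.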
We often want easy answers to social problems. Social justice activists might be inclined to turn to would-be prophets who proclaim what seems to be the truth, rather than to scientists who know we have to do the legwork required to understand and address things. Yes, science gives us imperfect knowledge, and it points to the difficulties we encounter when changing the world… but since we live in a world of tradeoffs, there are seldom easy answers to social problems. We can’t create a perfect world—utopia isn’t possible—so any kind of social justice rooted in reality must try to increase human flourishing while recognizing that not all problems can be eliminated, certainly not easily or quickly.
What does it all mean? For one, we should be much more skeptical about one of critical theory’s central claims—that the norms and institutions of liberal democratic societies are simply disguised tools of oppression. Do liberal ideals such as equality before the law, due process, free speech, free markets, and individual rights simply mask social inequalities so as to advance the interests of the powerful? Critical theorists don’t really subject this claim to scientific scrutiny. Instead, they take the presence of inequalities in liberal societies as self-sufficient evidence that liberalism is responsible for these failures. Yet any serious attempt to pursue social justice informed by scientific understanding of the world would involve comparing liberal democratic societies with other societies, both present and past.
Scientific sociology can’t tell us the best way to organize a society, and social justice involves making tradeoffs among competing values. We may never reach a consensus on what kind of society is best, but we should take seriously the possibility that liberal democracies provide the best framework we yet know of for pursuing social justice effectively. At the very least, they provide mechanisms for peacefully managing disputes in an imperfect world.
For my entire career as a neurologist, spanning three decades, I have been hearing about various kinds of stem cell therapy for Parkinson’s Disease (PD). Now a Phase I clinical trial is under way studying the latest stem cell technology, autologous induced pluripotent stem cells, for this purpose. This history of cell therapy for PD tells us a lot about the potential and challenges of stem cell therapy.
PD has always been an early target for stem cell therapy because of the nature of the disease. It is caused by degeneration of a specific population of neurons in the brain – dopamine neurons in the substantia nigra pars compacta (SNpc). These neurons are part of the basal ganglia circuitry, which makes up the extrapyramidal system. What this part of the brain does, essentially, is modulate voluntary movement. One way to think about it is that it modulates the gain of the connection between the desire to move and the resulting movement – it facilitates movement. This circuitry is also involved in reward behaviors.
When neurons in the SNpc are lost, the basal ganglia are less able to facilitate movement; the gain is turned down. Patients with PD become hypokinetic – they move less, and it becomes harder to move. They need more of a will to move in order to initiate movement. In the end stage, patients with PD can become “frozen”.
The primary treatment for PD is dopamine replacement. Sinemet, which contains L-dopa, a precursor to dopamine, is one mainstay treatment. The L-dopa gets transported into the brain, where it is made into dopamine. These treatments work as long as there are some SNpc neurons left to convert the L-dopa and secrete the dopamine. There are also drugs that enhance dopamine function or act as direct dopamine agonists. Other drugs are cholinergic inhibitors, as acetylcholine tends to oppose the action of dopamine in the basal ganglia circuits. These drugs all have side effects because dopamine and acetylcholine are used elsewhere in the brain. Also, without the SNpc neurons to buffer the dopamine, end-stage patients with PD go through highly variable symptoms based upon the moment-to-moment drug levels in their blood. They become hyperkinetic, then have a brief sweet spot, then become hypokinetic, and then repeat that cycle with the next dose.
The fact that PD is the result of a specific population of neurons making a specific neurotransmitter makes it an attractive target for cell therapy. All we need to do is increase the number of dopamine neurons in the SNpc, and that can treat, and even potentially cure, PD. The first cell transplant for PD was in 1987, in Sweden, using fetal-derived dopamine-producing neurons. These treatments were successful, but they are not a cure for PD. The cells release dopamine, but they are not connected to the basal ganglia circuitry, so they are not regulating the release of dopamine in a feedback circuit. In essence, therefore, these were just a drug-delivery system. At best they produced the same effect as the best pre-operative medication management. In fact, the treatment only works in patients who respond to L-dopa given orally. The transplants just replace the need for medication, and make it easier to maintain a high level of control.
They also have a lot of challenges. How long do the transplanted cells survive in the brain? What are the risks of the surgery? Is immunosuppressive treatment needed? And where do we get the cells from? The only source that worked was human ventral mesencephalic dopamine neurons from recent voluntary abortions. This limited the supply and also created regulatory issues, with the practice being banned at various times. Attempts at using animal-derived cells failed, as did using adrenal cells from the patient.
Therefore, when the technology developed to produce stem cells from the patient’s own cells, it was inevitable that this would be tried in PD. These are typically fibroblasts that are reprogrammed into pluripotent stem cells, which are then induced to differentiate into dopamine-producing neurons. This eliminates the need for immunosuppression and avoids any ethical or legal issues with harvesting. PD would seem like the low-hanging fruit for autologous stem cell therapy.
But it has run up against the issues we have generally encountered with this technology, which is why you may have first heard of this idea in the early 2000s and yet here in 2025 we are just seeing a Phase I clinical trial. One problem is getting the cells to survive long enough to make the whole procedure worthwhile. The cells not only need to survive, they need to thrive and produce dopamine. This part we can do, and while it remains an issue for any new therapy, it is generally not the limiting factor.
Of greater concern is how to keep the cells from thriving too much – from forming a tumor. There is a reason our bodies are not already flush with stem cells, ready to repair any damage, rejuvenate any effects of aging, and replace any exhausted cells. It’s because they tend to form tumors and cancer. So we have just as many stem cells as we need, and no more. What we “need” is an evolutionary calculation, and not what we might desire. Our experience with stem cell therapy has taught us the wisdom of evolution – stem cells are a double-edged sword.
Finally, it is especially difficult to get stem cells in the brain to make meaningful connections and participate in brain circuitry. I just attended grand rounds on stem cells for stroke, and they are having the same issue there. However, stem cells can still be helpful, because they can improve the local environment, allowing native neurons to survive and function better. With PD we are again back to the same place – the stem cells are a great dopamine delivery system, but they don’t fix the broken circuitry.
There is still the hope (but it is mainly a hope at this point) that we will be able to get these stem cells to actually replace lost brain cells, but we have not achieved that goal yet. Some researchers I have spoken to have given up on that approach. They are focusing on using stem cells as a therapy, not a cure – as a way to deliver treatments and improve the environment, to support neurons and brain function, but without the plan to replace neurons in functional circuits.
But the allure of curing neurological disease by transplanting new neurons into the brain to actually fix brain circuits is simply too great to give up entirely. Research will continue to push in this direction (and you can be sure that every mainstream news report about this research will focus on this potential of the treatment). We may just need some basic science breakthrough to figure out how to get stem cells to make meaningful connections, and breakthroughs are hard to predict. We had hoped they would just do it automatically, but apparently they don’t. In the meantime, stem cells are still a very useful treatment modality, just more for support than replacement.
Fernanda Pirie is Professor of the Anthropology of Law at the University of Oxford. She is the author of The Anthropology of Law and has conducted fieldwork in the mountains of Ladakh and the grasslands of eastern Tibet. She earned a DPhil in Social Anthropology from Oxford in 2002, an MSc in Social Anthropology at University College London in 1998, and a BA in French and Philosophy from Oxford in 1986. She spent almost a decade practicing as a barrister at the London bar. Her most recent book is The Rule of Laws: A 4,000-Year Quest to Order the World.
Skeptic: Why do we need laws? Can’t we all just get along?
Fernanda Pirie: That assumes we need laws to resolve our disputes. The fact is, there are plenty of societies that do perfectly well without formal laws, and that’s one of the questions I explore in my work: Who makes the law, and why? Not all sophisticated societies have created formal laws. For instance, the ancient Egyptians managed quite well without them. The Maya and the Aztec, as far as we can tell, had no formal laws. Plenty of much smaller communities and groups also functioned perfectly well without them. So, using law to address disputes is just one particular social approach. I don’t think it’s a matter of simply getting along; I do believe it’s inevitable that people will come into conflict, but there are many ways to resolve it. Law is just one of those methods.
Skeptic: Let’s talk about power and law. Are laws written and then an authority is needed to enforce them, which creates hierarchy in society? Or does hierarchy develop for some other reason, and then law follows to deal with that particular structure?
FP: I wouldn’t say there’s always a single direction of development. In ancient India, for example, a hierarchy gradually developed over several thousand years during the first millennium BCE, with priests—eventually the Brahmins—and the king at the top. This evolved into the caste system we know today. The laws came later in that process. Legal texts, written by the Brahmins, outlined rules that everyone—including kings—had to follow.
Skeptic: So, the idea of writing laws down or literally chiseling them in stone is to create something tangible to refer to. Not just, “Hey, don’t you remember, I said six months ago you shouldn’t do that?” Instead, it’s formalized, and everyone has a copy. We all know what it is, so you can hold people morally accountable for their actions.
FP: Exactly. That distinction makes a big difference. Every society has customs and norms; they often have elders or other sources of authority, who serve as experts in maintaining their traditions. But when it’s just a matter of, “This is what we’ve always done—don’t you remember?” some people can conveniently forget. Once something is written down, though, it gains authority. You can refer to the exact words, which opens up different possibilities for exercising power. “Look, these are the laws—everyone must know and follow them.” But it equally creates opportunities for holding people accountable.
Skeptic: So it’s a matter of “If you break the law, then these are the consequences.” It’s almost like a logic problem—if P, then Q. There’s an internal logic to it, a causal reasoning where B follows A, so we assume A causes B. Is something like that going on, cognitively?
FP: Well, that cause-and-effect form is a feature of many legal systems, but not all of them. It’s very prominent in the Mesopotamian tradition, which influenced both Jewish law and Islamic law, and eventually Roman law—the legal systems that dominate the world today. It’s associated with the specification of rights—if someone does this, they are entitled to that kind of compensation, or this must follow from that. But the laws that developed in China and India were quite different. The Chinese had a more top-down, punitive system, focused on discipline and punishment. It was still an “if-then” system, but more about, “If you do this wrong, you shall be punished.” It was very centralized and controlling. In Hindu India, the laws were more about individual duty: this is what you ought to do to be a good Hindu. If you’re a king, you should resolve disputes in a particular way. The distinctions between these systems aren’t always sharp, but the casuistic form is indeed a particular feature of certain legal traditions.
Laws have never simply been rules. They’ve created intricate maps for civilization. Far from being purely concrete or mundane, laws have historically presented a social vision, promised justice, invoked a moral order ordained by God (or the Gods), or enshrined the principles of democracy and human rights. And while laws have often been instruments of power, they’ve just as often been the means of resisting it. Yet, the rule of law is neither universal nor inevitable. Some rulers have avoided submitting themselves to the constraints of law—Chinese emperors did so for 2,000 years. The rule of law has a long history, and we need to understand that history to appreciate what law is, what it does, and how it can rule our world for better or worse.
Skeptic: In some ways it seems like we are seeking what the economist Thomas Sowell calls cosmic justice, where in the end everything is settled and everyone gets their just deserts. One purpose of the Christian afterlife is that all old scores are settled. God will judge everything and do so correctly. So, even if you think you got away with something, in the long run you didn’t. There’s an eye in the sky that sees all, and that adds an element of divine order to legal systems.
FP: Absolutely, and that characterizes many of the major legal systems, especially those associated with religion. Take the Hindu legal system—it’s deeply tied to a sense of cosmological order. Everyone must follow their Dharma, and the Brahmins set up the rules to help people follow their Dharma, so they can achieve a better rebirth. Similarly, Islamic Sharia law, which has had a poor reputation in recent times, is seen as following God’s path for the world, guiding people on how they should behave in accordance with a divine plan. Even the Chinese, who historically had a more top-down and punitive system, claimed that their emperors held the Mandate of Heaven—that’s why people had to obey them and their laws. They were at the top of the pyramid because of such divine authority.
Of course, there have also been laws that are much more pragmatic—rules that merchants follow to maintain their networks, or village regulations. Not all law is tied to a cosmic vision, but many of the most impressive and long-lasting legal systems have been.
Skeptic: The Arab–Israeli conflict can be seen as two people holding a deed to the same piece of land, each claiming, “The title company that guarantees my ownership is God and His Holy Book.” Unfortunately, God has written more than one Holy Book, leading both sides to claim divine ownership, with no cosmic court to settle the dispute.
FP: That’s been the case throughout history—overlapping legal and political jurisdictions. Many people today are worried about whether the nation-state, as we know it, is breaking down, especially with the rise of supranational laws and transnational legal systems. But it’s always been like this—there have always been overlaps between religious laws, political systems, and social norms. The Middle East is a perfect example, with different religious communities living side by side. It hasn’t always been easy, but over time, people have developed ways of coexisting. The current political battles in the Middle East are part of this ongoing tension.
Skeptic: In your writing, you offer this great example from the Code of Hammurabi, 1755–1750 BC. It is the longest, best-organized, best-preserved legal text from the ancient Near East, written in the Old Babylonian dialect of Akkadian, and inscribed on a stone stele discovered in 1901.
“These are the judicial decisions that Hammurabi, the King, has established to bring about truth and a just order in his land.” That’s the text you quoted. “Let any wronged man who has a lawsuit”—interesting how the word ‘lawsuit’ is still in use today—”come before my image as King of Justice and have what is written on my stele read to him so that he may understand my precious commands, and let my stele demonstrate his position so that he may understand his case and calm his heart. I am Hammurabi, King of Justice, to whom Shamash has granted the truth.”
Then you provide this specific example: “If a man cuts down a tree in another man’s date orchard without permission, he shall pay 30 shekels of silver. If a man has given a field to a gardener to plant as a date orchard, when the gardener has planted it, he shall cultivate it for four years, and in the fifth year, the owner and gardener shall divide the yield equally, with the owner choosing first.”
This sounds like a modern business contract, or today’s U.S. Uniform Commercial Code.
FP: Indeed, it’s about ensuring fairness among the farmers, who were the backbone of Babylon’s wealth at the time. I also find it fascinating that there are laws dealing with compensation if doctors kill or injure their patients. We often think of medical negligence as a modern issue, but it’s been around for 4,000 years.
Skeptic: But how did they determine the value of, say, a stray cow or cutting down the wrong tree? How did they arrive at the figure of 30 shekels?
FP: That’s a really interesting question. These laws were meant to last, and even in a relatively stable society, the value of money would have changed over time. People have studied this and asked how anyone could follow these laws for the hundreds of years that the stele stood and people referred to it. My view is that these laws were more exemplary—they probably reflected actual cases, decisions that judges were making at the time.
Although Hammurabi wrote down his rules, he didn’t expect people to apply them exactly as written, as we do with modern legal codes. Instead, they gave a sense of the kind of compensation that would be appropriate for different wrongs or crimes—guidelines, not hard rules. Hammurabi likely collected decisions from various judicial systems and grafted them into a set of general laws, but they still retain the flavor of individual judgments.
Skeptic: Is there a sense of “an eye for an eye, a tooth for a tooth”—where the punishment fits the crime, more or less?
The Code of Hammurabi inscribed on a basalt slab on display at the Louvre, Paris. (Photo by Mbzt via Wikimedia)
FP: Absolutely. Hammurabi was trying to ensure that justice was done by laying out rules for appropriate responses to specific wrongs, ensuring fairness in compensation. But it’s crucial to understand that the famous phrase, “an eye for an eye, a tooth for a tooth,” which appears first in Hammurabi’s code and later in the laws of the Book of Exodus, wasn’t about enforcing revenge. Even though there’s a thousand-year gap between Hammurabi and the Bible, scholars believe this rule was about limiting revenge, not encouraging it. It meant that if someone sought revenge, it had to be proportional—an eye for an eye—but no more.
In other words, they wanted to prevent cycles of violence that arise from feuds. In a feuding society, someone steals a sheep, then someone retaliates by stealing a cow, and then someone tries to take an entire herd of sheep. The feud keeps getting bigger and bigger. So, the “eye for an eye” rule was a pragmatic approach in a society where feuding was common. It was meant to keep things under control.
Skeptic: From the ruler’s perspective, a feud is a net loss, regardless of who’s right or wrong.
FP: Feuding is a very common way of resolving disputes, especially among nomadic people. The idea, which makes a lot of sense, is that if you’re a nomadic pastoralist, your wealth is mobile—it’s your animals that have feet, which can be moved around. That also makes it easy to steal. If you’re a farmer, your wealth is tied to your land, so someone can’t run off with it. Since nomads are particularly vulnerable to theft, having a feuding system acts as a defense mechanism. It’s like saying, “If you steal my sheep, I’ll come and steal your cow.” You still see this in parts of the world, such as eastern Tibet, where I’ve done fieldwork. So, yes, kings and centralized rulers want to stop feuds because they represent a net loss. They want to put a lid on things and so establish a more centralized system of justice. This is exactly what Hammurabi was trying to do, and you see similar efforts in early Anglo-Saxon England, and all over the world.
Another interesting point is that every society has something to say about homicide. It’s so important that they have to lay out a response. However, I don’t think we should assume these laws were meant to stop people from killing each other. The fact is, we don’t refrain from murder because the law tells us not to. We refrain from killing because we believe it’s wrong—except in the rare cases where morality has somehow become twisted, or where self-help justice occurs and people take the law into their own hands. The law, in this case, is more about what the social response should be once a killing has occurred. Should there be compensation? Punishment? What form should it take?
Skeptic: Is this why we need laws that are enforced regularly, fairly, justly, and consistently, so people don’t feel the need to take matters into their own hands?
FP: I’d put it a bit more broadly: we need systems of justice, which can include mediation systems. In a village in Ladakh—part of northern India with Tibetan populations where I did fieldwork—they didn’t have written laws, but they had very effective ways of resolving conflicts. They put a lot of pressure on the parties to calm down, shake hands, and settle the dispute. It’s vastly different from the nomads I worked with later in eastern Tibet, who had a very different approach. But both systems were extremely effective, and there was a strong moral sense that people shouldn’t fight or even get angry. It’s easy to look at these practices and say they’re not justice, especially when serious things like injuries, killings, or even rape are settled in this way. But for these villages, maintaining peace and order in the community was paramount, and it worked for them.
Every society needs some system to restore order and a sense of justice. What constitutes justice can vary greatly—sometimes it’s revenge, sometimes it’s about restoring order. Laws can be part of that system, and in complex societies, it becomes much harder to rely on bottom-up systems of mediation or conciliation. That’s where having written laws and judges becomes very useful.
Skeptic: In communities without laws or courts, do they just agree, “Tomorrow we’re going to meet at noon, and we’ll all sit down and talk this out?”
FP: Essentially, yes. In the communities I spent time with, it was the headman’s duty to call a village meeting, and everyone was expected to attend and help resolve the issue. In a small community like that, you absolutely could do it.
Skeptic: And if you don’t show up?
FP: There’s huge social pressure for people to play their part in village politics and contribute to village funds and activities.
Skeptic: And if they don’t, then what? Are they gossiped about, shunned, or shamed?
FP: Yes—all of those things, in various ways.
Skeptic: Let’s talk about religious laws. You mentioned Sharia, and from a Western perspective, it’s often seen as a disaster because it’s been hyped up and associated with terrorism. Can you explain how religious laws differ from secular laws?
FP: I’m wondering how much one can generalize here. I’m thinking of the religious laws of Hindu India, Islamic laws, Jewish laws, and I suppose Canon law in Europe—Christian law. I hesitate to generalize, though.
Skeptic: What often confounds modern minds are the very specific laws in Leviticus—like which food you can eat, which clothes you can wear, and how to deal with adultery, which would certainly seem to concern the affected spouse. But why should the state—or whatever governing laws or body—even care about such specific issues?
FP: This highlights a crucial point. In Jewish, Hindu, and Islamic law, the legal and moral spheres are part of the same domain. A lot of these laws are really about guiding people on how to live moral lives according to dharma, God’s will, or divine command. The distinction we often make between law and religion, or law and morality, doesn’t apply in those contexts. The laws are about instructing people on how to live properly, which can involve family relations, contracts, land ownership, but also prayer and ritual.
As for the laws in Leviticus, they’ve puzzled people for a long time. They seem to be about purity and how Jews should live as good people, following rules of cleanliness, which partly distinguished them from other tribes.
Skeptic: What exactly is Sharia law?
FP: Sharia literally means “God’s path for the world.” It’s not best translated as “law” in the way we understand it. It’s more about following the path that God has laid out for us, a path we can’t fully comprehend but must do our best to interpret. The Quran is a guide, but it doesn’t lay out in detail everything we should do. The early Islamic scholars—who were very important in its formative days—studied the Quran and the Hadith (which tradition maintains records the Prophet’s words and actions) to work out just how Muslims should live according to God’s command. They developed texts called fiqh, which are what we might call legal texts, going into more detail about land ownership, commercial activities, legal disputes, inheritance, and charitable trusts.
Islamic law has very little to say about crime. That’s one misconception. People tend to think it’s all about harsh punishments, but the Quran mentions crime only briefly. That was largely the business of the caliphs—the rulers—who were responsible for maintaining law and order. Sharia is much more concerned with ritual and morality, and with civil matters like inheritance and charitable trusts.
Skeptic: Much of biblical legal and moral codes have changed over time. Christianity went through the Enlightenment. But Islam didn’t seem to go through a similar process. Is that a fair characterization?
FP: I’d say that’s partly right. But I’ve never thought about it in exactly those terms. In any legal tradition, there’s resistance to change—that’s kind of the point of law. It’s objective and fixed, so any change requires deep thought. In the Islamic world, there’s been a particularly strong sense that it’s not for people to change or reinterpret God’s path. The law was seen as something fixed.
But in practice, legal scholars, called muftis, were constantly adapting and changing legal practices to suit different contexts and environments. That’s one of the real issues today—Islamic law has become a symbol of resistance to the West, appealing to fundamentalism by going “back to the beginning.”
Skeptic: Let’s talk about stateless law of tribes, villages, networks, and gangs. For example, we tend to think of pirates as lawless, chaotic psychopaths who just randomly raided commerce and people. But, in fact, they were pretty orderly. They had their own constitutions. Each ship had a contract that everyone had to sign, outlining the rules. There’s even this interesting analysis of the Jolly Roger (skull and crossbones) flag. Why fly that flag and alert another ship that you’re coming? In his book The Invisible Hook: The Hidden Economics of Pirates, the economist Peter Leeson argued that it is a signal: “We’re dangerous pirates, and we’re coming to take your stuff, so you might as well hand it over to us, and we won’t kill you.” It’s better for the pirates because they can get the loot without the violence, and it’s better for the victims because they get to keep their lives. Occasionally, you do have to be brutal and make sure your reputation as a badass pirate gets a lot of publicity, so people know that when they see the flag, they should just surrender. But overall, it was a pretty orderly system.
FP: Yes, but it’s only kind of organized. That’s the point. For example, in The Godfather Don Corleone was essentially making up his own rules, using his power to tell others what he wanted. That’s the nature of the Mafia—yes, they had omertà (the rule of silence) and rules about treating each other’s wives with respect, but these rules were never written down. Alleged members who went on trial even denied—under oath—that any kind of organization or rules existed. This was particularly true with the Sicilian Mafia. The denial served two purposes: first, it protected them from outside scrutiny, and second, it allowed powerful figures like Don Corleone—or the real-life Sicilian bosses—to bend the rules whenever they saw fit. If the rules aren’t written down, it’s harder to hold them accountable. They can simply break the rules and impose their will.
Skeptic: Let’s discuss international law. In 1977, David Irving published Hitler’s War, in which he claimed that Hitler didn’t really know about the Holocaust. Rather, Irving blamed it on Himmler specifically, and other high-ranking Nazis in general, along with their obedient underlings. Irving even offered $1,000 to anyone who could produce an order from Hitler saying, “I, Adolf Hitler, hereby order the extermination of European Jewry.” Of course, no such order exists. This is an example of how you shift away from a legal system. The Nazis tried to justify what they were doing with law, but at some point, you can’t write down, “We’re going to kill all the Jews.” That can’t be a formal law.
FP: Exactly. Nazi Germany had a complex legal case, and I’m not an expert on it, but you can see at least a couple legal domains at play. First, they were concerned with international law, especially in how they conducted warfare in the Soviet Union. They at least tried to make a show of following international laws of war. Second, operationally, they created countless laws to keep Germany and the war effort functioning. They used law instrumentally. But when they felt morally uncomfortable with what they were doing, the obvious move was to avoid writing anything down. If it wasn’t documented, it wasn’t visible, and so it became much harder to hold anyone accountable.
Skeptic: During the Nuremberg trials, the defense’s argument was often, “Well, we lost, but if we had won, this would have been legal.” So they claimed it wasn’t fair to hold these trials since they violated the well-established principle of ex post facto, because there was no international law at the time. National sovereignty and self-determination were the norm, so they were saying, in terms of the law of nations, “We were just doing what we do, and it’s none of your business.”
View from above of the judges' bench at the International Military Tribunal in Nuremberg. (Source: National Archives and Records Administration, College Park.)
FP: Legally speaking, the Nuremberg trials were both innovative and hugely problematic. The court assumed the power to sit in judgment on what the leaders of independent nation-states were doing within their borders, or at least largely within their borders (the six largest Nazi death camps were in conquered Poland). But it was revolutionary in terms of developing the concepts of genocide, crimes against humanity, and the reach of international law with a humanitarian focus. So yes, it was innovative and legally difficult to justify, but I don’t think anyone involved felt there was any question that what they were doing was the right thing.
Skeptic: It also established the legal precedent that, going forward, any dictator who commits these kinds of atrocities—if captured—would be held accountable.
FP: Exactly. And that eventually led to the movement that set up the International Criminal Court, where Slobodan Milošević was prosecuted, along with other leaders. Although, it’s extremely difficult to bring such people to trial, and ultimately, the process can be more symbolic than practical.
Is the existence of the International Criminal Court really going to stop someone from committing mass atrocities? I doubt it. But it does symbolize to the world that genocide and other heinous crimes will be called out, and people must be held accountable. In a way, it represents the wider moral world we want to live in and the standards we expect nations to uphold.
Skeptic: Skeptic once asked Elon Musk: “When you start the first Mars colony, what documents would you recommend using to establish a governing system? The U.S. Constitution, the Bill of Rights, the Universal Declaration of Human Rights, the Humanist Manifesto, Atlas Shrugged, or Against the State, an anarcho-capitalist manifesto?” He responded with, “Direct democracy by the people. Laws must be short, as there is trickery in length. Automatic expiration of rules to prevent death by bureaucracy. Any rule can be removed by 40 percent of the people to overcome inertia. Freedom.”
FP: What a great, specific response! He’s really thought about this. Those are some interesting ideas, and I agree that there’s a lot to be said for direct democracy. The main problem with direct democracy, however, is that when you have too many people it becomes cumbersome. How do you gather everyone in a sensible way? The Athenians and Romans had huge assemblies, which created a sense of equality, and that’s valuable. Another thing I would do, which I’ve discussed with a colleague of mine, Al Pashar, is to rotate positions of power. She did research in Indian villages, and I’ve done work with Tibetans in Ladakh, and we found they had similar systems where every household provided a headman or headwoman in turn.
You might think rotating leadership wouldn’t work, because some people aren’t good leaders, while others are. Wouldn’t it be better to elect the best person for the job? But we found that rotating power is effective at preventing individuals from concentrating too much power. Yes, it’s good to have competent leaders, but when their family or descendants form an elite, you get a hierarchy and bureaucracy. Rotating power prevents that. That’s what I would do in terms of a political system.
As for laws, I’m less concerned with their length, as long as they are accessible and visible for everyone to read and reference. What’s important is having essential laws clearly posted for all to see. And there should be a good system for resolving disputes—perhaps mediation and conciliation rather than a lot of complex laws, with just a few laws in the background.
Skeptic: We’ll send this to Elon, and maybe he’ll hire you to join his team of social engineers.
FP: Although I’m not sure I want to go to Mars, I’d be happy to advise from the comfort of Oxford!
In 2006 (yes, it was that long ago – yikes) the International Astronomical Union (IAU) officially adopted the definition of dwarf planet – they are large enough for their gravity to pull themselves into a sphere, they orbit the sun and not another larger body, but they don’t gravitationally dominate their orbit. That last criterion is what separates planets (which do dominate their orbit) from dwarf planets. Famously, this caused Pluto to be “downgraded” from a planet to a dwarf planet. Four other objects also met the criteria for dwarf planet – Ceres in the asteroid belt, and three Kuiper belt objects, Makemake, Haumea, and Eris.
The new designation of dwarf planet came soon after the discovery of Sedna, a trans-Neptunian object that could meet the old definition of planet. It was, in fact, often reported at the time as the discovery of a 10th planet. But astronomers feared that there were dozens or even hundreds of similar trans-Neptunian objects, and they thought it was messy to have so many planets in our solar system. That is why they came up with the whole idea of dwarf planets. Pluto was just caught in the crossfire – in order to keep Sedna and its ilk from being planets, Pluto had to be demoted as well. As a sort-of consolation, dwarf planets that were also trans-Neptunian objects were named “plutoids”. All dwarf planets are plutoids, except Ceres, which is in the asteroid belt between Mars and Jupiter.
So here we are, two decades later, and I can’t help wondering – where are all the dwarf planets? Where are all the trans-Neptunian objects that astronomers feared would have to be classified as planets, the very objects the dwarf planet category was created to contain? I really thought that by now we would have a dozen or more official dwarf planets. What’s happening? As far as I can tell there are two reasons we are still stuck with only the original five dwarf planets.
One is simply that (even after two decades) candidate dwarf planets have not yet been confirmed with adequate observations. We need to determine their orbit, their shape, and (related to their shape) their size. Sedna is still considered a “candidate” dwarf planet, although most astronomers believe it is an actual dwarf planet and will eventually be confirmed. Until then it is officially considered a trans-Neptunian object. There are also Gonggong, Quaoar, and Orcus, which are high-probability candidates, and a borderline candidate, Salacia. So there are at least nine, and possibly ten, known likely dwarf planets, but only the original five are confirmed. I guess it is harder to observe these objects than I assumed.
But I have also come across a second reason we have not expanded the official list of dwarf planets. Apparently there is another criterion for plutoids (dwarf planets that are also trans-Neptunian objects) – they have to have an absolute magnitude less than +1 (the smaller the magnitude, the brighter the object). Absolute magnitude means how bright an object actually is, not its apparent brightness as viewed from the Earth. Absolute magnitude for planets is essentially the result of two factors – size and albedo. For stars, absolute magnitude is the brightness as observed from 10 parsecs away. For solar system bodies, the absolute magnitude is the brightness the object would have if it were one AU from both the sun and the observer.
What this means is that astronomers have to determine the absolute magnitude of a trans-Neptunian object before they can officially declare it a dwarf planet. This also means that trans-Neptunian objects that are made of dark material, even if they are large and spherical, may also fail the dwarf planet criteria. Some astronomers are already proposing that this absolute magnitude criterion be replaced by a size criterion – something like 200 km in diameter.
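To get a feel for how size, albedo, and absolute magnitude interact, here is a minimal sketch (my own illustration, not from the original post) using the standard small-body relation D ≈ (1329 km / √albedo) × 10^(−H/5), where H is the absolute magnitude and the albedo is the geometric albedo; the albedo values in the example are assumptions chosen only to show the range.

```python
import math

def diameter_km(abs_magnitude: float, geometric_albedo: float) -> float:
    """Standard small-body relation: D (km) = 1329 / sqrt(albedo) * 10^(-H/5)."""
    return 1329.0 / math.sqrt(geometric_albedo) * 10 ** (-abs_magnitude / 5.0)

# An object sitting right at the plutoid brightness cutoff (H = +1) can be very
# different in size depending on how reflective it is. The albedo values below
# are illustrative assumptions, not measurements of any particular object.
for albedo in (0.6, 0.1):  # bright, icy surface vs. dark surface
    print(f"H = +1, albedo = {albedo:.1f}: diameter ≈ {diameter_km(1.0, albedo):,.0f} km")
```

Under these assumed albedos, an object right at the H = +1 cutoff could be anywhere from roughly 1,100 km (bright, icy surface) to over 2,600 km (dark surface) across, which is why a dark but still large trans-Neptunian object can fail the brightness criterion, and why a straightforward size threshold would behave quite differently.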
It seems like the dwarf planet designation needs to be revisited. Currently, the James Webb Space Telescope is being used to observe trans-Neptunian objects. Hopefully this means we will have some confirmations soon. Poor Sedna, whose discovery in 2003 set off the whole dwarf planet thing, still has not been confirmed.
It’s not at all clear that clothes make the man, or woman. However, it is clear that although animals don’t normally wear clothes (except when people dress them up for their own peculiar reasons), living things are provided by natural selection with a huge and wonderful variety. Their outfits involve many different physical shapes and styles, and they arise through various routes. For now, we’ll look briefly just at eye-catching color among animals, and the two routes by which evolution’s clothier dresses them: sexual selection and warning coloration.
Human observers are understandably taken with the extraordinary appearance of certain animals, notably birds, as well as some amphibians and insects, and, in most cases, the dressed-up elegance of males in particular. In 1860, Darwin confessed to a fellow biologist that looking at the tail of a peacock made him “sick.” Not that Darwin lacked an aesthetic sense; rather, he was troubled that his initial version of natural selection didn’t make room for animals having one. After all, the gorgeous colors and extravagant length of a peacock’s tail threatened what came to be known (by way of Herbert Spencer) as “survival of the fittest,” because all that finery seemed to add up to an immense fitness detriment. A long tail is not only metabolically expensive to grow, but it’s more liable to get caught in shrubbery, while the spectacular colors make its owner more conspicuous to potential predators.
Eventually, Darwin arrived at a solution to this dilemma, which he developed in his 1871 book, The Descent of Man and Selection in Relation to Sex. Although details have been added in the ensuing century and a half, his crashing insight—sexual selection—has remained a cornerstone of evolutionary biology.
Sexual selection is sometimes envisaged as different from natural selection, but it isn’t. Natural selection is neither more nor less than differential reproduction, particularly of individuals and, thereby, genes. It operates in many dimensions, such as obtaining food, avoiding predators, surviving the vagaries of weather, resisting pathogens, and so on. And yet more on! Sexual selection is a subset of natural selection that is so multifaceted and, in some ways, so counterintuitive that it warrants special consideration, as Darwin perceived and subsequent biologists have elaborated.
The bottom line is that in many species, bright coloration—seemingly disadvantageous because it is expensive to produce and carries increased risk because of its conspicuousness—nonetheless can contribute to fitness insofar as it is preferentially chosen by females. In such cases, the upside of conspicuous colors increasing mating opportunities compensates for their downsides.
Nothing in science is entirely understood and locked down, but biologists have done a pretty good job with sexual selection. A long-standing question is why, when the sexes are readily distinguishable (termed sexual dimorphism), it is nearly always the males that are brightly colored. An excellent answer comes from the theory of parental investment, first elaborated by Robert Trivers. The basic idea is that the defining biological difference between males and females is not in their genitals but in how much each sex invests when it comes to producing offspring. Males are defined as the sex that makes sperm (tiny gametes that are produced in prodigious numbers), while females are egg makers (producing fewer gametes and investing substantially more time and energy in each one).
As a result, males are often capable of inseminating multiple females because their parental investment in each reproductive effort can be minimal. And so, males in many species, perhaps most, gain an evolutionary advantage by mating with as many females as possible. Because nearly always there are equal numbers of males and females—an important and well-researched statistical phenomenon that deserves its own treatment—this sets up two crucial dynamics. One is male-male competition whereby males hassle with each other for access to the limiting and valuable resource of females and their literal mother load of parental investment. This in turn helps explain the frequent pattern whereby males tend to be more aggressive and outfitted with weapons and an inclination to use them.
The other dynamic, especially important for understanding the evolution of conspicuous male coloration, is female choice (known as epigamic selection). Because females are outfitted with their desirable payload of parental investment, for which males compete, females often (albeit not always) have the opportunity to choose among eager suitors. And they are disposed to go for the brightest, showiest available.
Darwin intuited this dynamic but was uncomfortable about it because at the time, it was felt that aesthetic preferences were a uniquely human phenomenon, not available to animals. Now we know better, in part because the mechanism of such preferences is rather well understood. Sexual selection is responsible for much of the organic world’s Technicolor drama, such as the red of male cardinals, the tails of peacocks, or the rainbow rear ends of mandrill monkeys, all of which make these individuals more appealing to potential mates—probably because, once a trait is sexually attractive, it becomes even more attractive according to what evolutionary biologists call the sexy son hypothesis. The implicit genetic promise is that females who mate with males thus adorned will likely produce sons who inherit their father’s flashy good looks and will therefore be attractive to the next generation of choosing females. A female who makes such a choice thereby ends up with more grandchildren through her sexy sons.
There is a strong correlation between the degree of polygyny (number of females mated on average to a given male), or, more accurately, the ratio of variability in male reproductive success to that of females, and the amount of sexual dimorphism: the extent to which males and females of a given species differ physically. The greater the polygyny (e.g., harem size, as in elephant seals) the greater the sexual dimorphism, while monogamous species tend to be comparatively monomorphic, at least when it comes to body size and weaponry.
In most cases, female reproductive success doesn’t vary greatly among individuals, testimony to the impact of the large parental investment they provide. Female success is maximal when they get all their eggs fertilized and their offspring successfully reared, a number that typically doesn’t differ greatly from one female to another. By contrast, because of their low biologically-mandated parental investment, some males have a very large number of surviving offspring—a function of their success in male-male competition along with female choice—while others are liable to die unsuccessful, nonreproductive, typically troublemaking bachelors.
When it comes to sexual dimorphism in coloration, however, some mysteries persist. Among some socially monogamous species (e.g., warblers), males sport brilliant plumage. This conundrum has been resolved to some extent by the advent of DNA fingerprinting, which has shown that social monogamy doesn’t necessarily correlate with sexual monogamy. Although males of many species have long been known to be sexually randy, verging on promiscuous, females were thought to be more monogamously inclined. However, we now know that females of many species also look for what is termed extra-pair copulations, and it seems likely that this, in turn, has selected for sexy male appearance, which outfits them to potentially take advantage of any out-of-mateship opportunities.
It still isn’t clear why and how such a preference began in the case of particular species (and why it is less developed, or, rarely, even reversed in a few), but once established it becomes what the statistician and evolutionary theorist R.A. Fisher called a “runaway process.” Furthermore, we have some rather good ideas about how this process proceeds.
One is that being impressively arrayed is an indication of somatic and genetic health, which further adds to the fitness payoff when females choose these specimens. Being brightly colored has been shown to correlate with disease resistance, relative absence of parasites, being an especially adroit forager, and the like. In most cases, brightness is physiologically difficult to achieve, so these living billboards are also advertising their metabolic muscularity, suggesting that they likely contain good genetic material as well.
Another, related hypothesis was more controversial when first proposed by Israeli ornithologist Amotz Zahavi, but has been increasingly supported. This is the concept of “selection for a handicap,” which acknowledges that such traits as bright coloration may well be a handicap in terms of a possessor’s survival. However, Zahavi’s “Handicap Principle” turns a seeming liability into a potential asset insofar as certain traits can be positive indicators of superior quality if their possessors are able to function effectively despite possessing them. It’s as though someone carried a 50-pound backpack and was nonetheless able to finish a race, and maybe even win it! An early criticism of this notion was that the descendants of such handicapped individuals would also likely inherit the handicap, so where’s the adaptive payoff accruing to females who choose to mate with them?
For one, there’s the acknowledged benefit of producing sons who will themselves be preferentially chosen—an intriguing case in which choosy females are more fit not through their sons, but by their grandchildren by way of those sons. In addition, there is the prospect that the choosing female’s daughters would be bequeathed greater somatic viability without their brothers’ bodily handicap. It’s counterintuitive to see bright coloration as a handicap, just as it’s counterintuitive to see a handicap as a potential advantage … but there’s little reason to trust our intuition in the face of nature’s often-confusing complexity.
There’s plenty more to the saga of sexual selection and its generation of flashy animal Beau Brummels, including efforts to explain the many exceptions to the above general patterns. It’s not much of a mystery why mammals don’t partake of flashy dress patterns, given that the class Mammalia generally has poor color vision. But what about primates, who tend to be better endowed? And what of Homo sapiens? Our species sports essentially no genetically-mediated colorful sexual dimorphism. If anything, women tend to be more elaborately adorned than men, at least in Western traditions, a gender difference that seems entirely culture-based. Moreover, among some non-Western social groups, the men get dressed up far more than the women. Clearly, there is much to be resolved, and not just for nonhuman animals.
For another look at dramatic animal patterning, let’s turn to the inverse of sexual attraction, namely, selection for being avoided.
Among the most dramatic looking animals are those whose appearance is “designed” (by natural selection) to cause others—notably predators—to stay away. An array of living things, including some truly spectacular specimens, are downright poisonous, not just in their fangs or stingers but in their very bodies. When they are caterpillars, monarch butterflies feed exclusively on milkweed plants, which contain potent chemical alkaloids that taste disgusting and cause severe digestive upset to animals—especially birds—that eat them, or just venture an incautious nibble.
In the latter case, most birds with a bellyache avoid repeating their mistake although this requires, in turn, that monarchs be sufficiently distinct in their appearance that they carry an easily recognized warning sign. Hence, their dramatic black and bright orange patterning. To the human eye, they are quite lovely. To the eyes of a bird with a terrible taste in its mouth and a pain in its gut, that same conspicuous black and orange is memorable as well, recalling a meal that should not be repeated. It exemplifies “warning coloration,” an easily recalled and highly visible reminder of something to avoid. (It is no coincidence that school buses, ambulances, and fire trucks are also conspicuously colored, although here the goal is enhanced visibility per se, not advertising that these vehicles are bad to eat!)
The technical term for animal warning signals is aposematic, derived by combining the roots “apo,” meaning away (as in apostate, someone who moves away from a particular belief system), and “sema,” meaning signal (as in semaphore). Unpalatable or outright poisonous prey species that are less notable, and thus easily forgotten, would achieve little benefit from their protective physiology. And of course, edible animals that are easily recognized would be in even deeper trouble. The adaptive payoff of aposematic coloration even applies if a naïve predator kills a warningly-colored individual, because such sacrifice is biologically rewarded through kin selection when a chastened predator avoids the victim’s genetic relatives.
Many species of bees and wasps are aposematic, as are skunks: once nauseated, or stung, or subjected to stinky skunk spray, twice shy. However, chemically-based shyness isn’t the only way to train a potential predator. Big teeth or sharp claws could do the trick, just by their appearance, without any augmentation. Yet when the threat isn’t undeniably baked into an impressive organ—for example, when it is contained within an animal’s otherwise invisible body chemistry—that’s where a conspicuous, easy-to-remember appearance comes in.
Some of the world’s most extraordinary painterly palettes (at least to the human eye) are flaunted by neotropical amphibians known as “poison arrow frogs,” so designated because their skin is so lethally imbued that indigenous human hunters use it to anoint their darts and arrow points. There is no reason, however, for the spectacular coloration of these frogs to serve only as a warning to potential frog-eating predators. As with other dramatically accoutered animals, colorfulness itself often helps attract mates, and not just by holding out the prospect of making sexy sons. Moreover, it has been observed in at least one impressively aposematic amphibian—the scrumptious-looking but highly toxic strawberry poison frog—that bright color does triple duty, not only warning off predators and helping acquire mates, but also signaling to other strawberry poison frogs that brighter and hence healthier individuals are more effective fighters.
Warning coloration occurs in a wide range of living things, evolving pretty much whenever one species develops a deserved reputation for poisonousness, ferocity, or some other form of legitimate threat. Once established, it also opens the door to further evolutionary complexity, including Batesian mimicry, first described in detail by the nineteenth-century English naturalist Henry Walter Bates who researched butterflies in the Amazon rainforest. He noticed that warningly-colored species serve as models, which are then copied by mimics that are selected to piggyback on the reputation established by the former. Brightly banded coral snakes (venomous) are also mimicked, albeit imperfectly, by some species of (nonpoisonous) king snakes. Bees and wasps, with their intimidating stings, have in most cases evolved distinctive color patterns, often bands of black and yellow; they, in turn, are mimicked by a number of other insects that are outfitted with black and yellow bands though they are stingless.
In short, the honestly-clothed signaler can become a model to be mimicked by other species that may not be dangerous to eat but are mistaken for the real (and toxic) McCoy. Those monarch butterflies, endowed with poisonous, yucky-tasting alkaloids, are mimicked by another species—aptly known as “viceroys” (substitute monarchs)—that bypass the metabolically expensive requirement of dealing with milkweed toxins while benefiting by taking advantage of the monarch’s legitimately acquired reputation.
The plot thickens. Viceroy butterflies (the mimic) and monarchs (the model) can both be successful as long as the mimics aren’t too numerous. A problem arises, however, when viceroys become increasingly abundant, because the more viceroys there are, the more likely it is that predators will nibble on those harmless mimics rather than being educated by sampling mostly monarchs and thereby learning to avoid their black-and-orange pattern. As a result, the well-being of both monarchs and viceroys declines as the latter become more abundant, which in turn selects for monarchs that look discernibly different from their mimics, so as not to be tarred with the viceroys’ innocuousness. But the process isn’t done. As the models flutter away from their mimics, the latter can be expected to pursue them, in an ongoing game of evolutionary tag: it is set in motion by the antipredator adaptation of the model’s warning coloration, and its momentum is maintained by the very different challenges that the system itself poses to mimic and model alike.
This general phenomenon is known as “frequency-dependent selection”: the evolutionary success of a biological type varies inversely with its abundance, favored when rare and diminishing as it becomes more frequent. It’s as though certain traits carry within them the seeds of their own destruction, or at least of keeping their numbers in check, either settling into a balanced equilibrium or producing a pattern of pendulum-like fluctuations.
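To make this concrete, here is a minimal sketch, in Python, of frequency-dependent selection under entirely hypothetical numbers (nothing here is measured from monarchs, viceroys, or any real population): a focal type whose fitness declines as its own frequency rises gets pulled toward a balanced equilibrium, and steeper frequency dependence produces the overshooting, pendulum-like swings just described.

# Toy model of frequency-dependent selection (hypothetical parameters).
# The focal type's fitness falls as its own frequency p rises, so it is
# favored when rare and penalized when common; the rival type's fitness
# stays constant.

def next_frequency(p, base=1.0, slope=1.5, rival=1.0):
    """One generation of discrete replicator dynamics for two types."""
    w_focal = base + slope * (0.5 - p)      # fitness declines with the focal type's own frequency
    mean_w = p * w_focal + (1 - p) * rival  # population mean fitness
    return p * w_focal / mean_w             # focal type's frequency in the next generation

p = 0.05  # start the focal type rare
for generation in range(40):
    p = next_frequency(p)
print(round(p, 3))  # settles at the balanced equilibrium (0.5 in this toy setup)

# With a steeper slope (say, slope=6.0) the frequency overshoots the
# equilibrium and then damps back toward it -- a pendulum-like fluctuation.

The particular numbers are beside the point; the sketch is only meant to show that “favored when rare, penalized when common” is, by itself, enough to generate either a stable balance or the pendulum-like fluctuations described above.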
Meanwhile, Batesian mimicry isn’t the only copycat clothing system to have evolved. Plenty of black-and-yellow-banded insects, for example, are equipped with stings, even though many other warning patterns are clearly available: each species could have evolved its own unique palette, or alternative designs such as spots and blotches, instead of the favored black-and-yellow bands. At work here is yet another evolution-based aposematic phenomenon, known as Müllerian mimicry after the German naturalist Fritz Müller. In this kind of mimicry, everyone is a model, because different species that are legitimately threatening in their own right converge on the same pattern. Here, the adaptive advantage is that sharing the same warning appearance facilitates learning by predators: it’s easier to learn to avoid one basic warning signal than a variety, different for each species. It had been thought that Batesian and Müllerian mimicry were opposites: Batesian mimicry is dishonest, because the mimic is essentially a parasite on its model’s legitimate reputation (those viceroys), whereas Müllerian mimicry exemplifies shared honesty, as with the various species of wasps, bees, and hornets whose fearsome reputations enhance each other’s.
It is currently acknowledged, however, that the distinction is often not absolute; within a given array of similar-looking Müllerian mimics, for example, not all species are equally honest when it comes to their decorative signaling, and the less dangerous representatives are therefore somewhat Batesian. Conversely, in some assemblages traditionally thought to involve Batesian mimics, including the iconic monarch–viceroy duo, the mimics are often a bit unpleasant in their own right, so both participants are to some degree Müllerian convergers as well.
What to make of all this? In his book Unweaving the Rainbow, Richard Dawkins gave us some advice as brilliant as the colors and patterns of the natural world:
After sleeping through a hundred million centuries, we have finally opened our eyes on a sumptuous planet, sparkling with color, bountiful with life. Within decades we must close our eyes again. Isn’t it a noble and enlightened way of spending our time in the sun, to work at understanding the universe and how we have come to wake up in it?