Many think of Vatican City only as the seat of governance for the world’s 1.3 billion Roman Catholics. Atheist critics view it as a capitalist holding company with special privileges. However, that postage-stamp parcel of land in the center of Rome is also a sovereign nation. It has diplomatic embassies—so-called apostolic nunciatures—in over 180 countries, and has permanent observer status at the United Nations.
Only by knowing the history of the Vatican’s sovereign status is it possible to understand how radically it differs from other countries. For over 2,000 years the Vatican has been a nonhereditary monarchy. Whoever is Pope is its supreme leader, vested with sole decision-making authority over all religious and temporal matters. There is no legislature, judiciary, or any system of checks and balances. Even the worst of Popes—and there have been some truly terrible ones—are sacrosanct. There has never been a coup, a forced resignation, or a verifiable murder of a Pope. In 2013, Pope Benedict XVI became the first Pope to resign in 600 years. Problems of cognitive decline get swept under the rug. In the unchecked power it vests in a single man, the Vatican is closest in its governance style to a handful of absolute monarchies such as Saudi Arabia, Brunei, Oman, Qatar, and the UAE.
From the 8th century until 1870 the Vatican was a semifeudal secular empire called the Papal States that controlled most of central Italy. During the Renaissance, Popes were feared rivals to Europe’s most powerful monarchies. Popes believed God had put them on earth to reign over all other worldly rulers. The Popes of the Middle Ages had an entourage of nearly a thousand servants and hundreds of clerics and lay deputies. That so-called Curia—referring to the court of a Roman emperor—became a Ladon-like network of intrigue and deceit composed largely of (supposedly) celibate single men who lived and worked together even as they competed for influence with the Pope.
The cost of running the Papal States, while maintaining one of Europe’s grandest courts, kept the Vatican under constant financial strain. Although it collected taxes and fees, sold produce from its agriculturally rich northern region, and earned rents from its properties throughout Europe, it was still always strapped for cash. The church turned to selling so-called indulgences, a sixth-century invention whereby the faithful paid for a piece of paper that promised that God would forgo any earthly punishment for the buyer’s sins. The early church’s penances were often severe, including flogging, imprisonment, or even death. Although some indulgences were free, the best ones—promising the most redemption for the gravest sins—were expensive. The Vatican set prices according to the severity of the sin.
All the while, the concept of a budget or financial planning was anathema to a succession of Popes. The humiliating low point came when the Church twice had to borrow from the Rothschilds, Europe’s preeminent Jewish banking dynasty. James de Rothschild, head of the family’s Paris branch, became the official Papal banker. By the time the family bailed out the Vatican, it had only been thirty-five years since the destabilizing aftershocks of the French Revolution had led to the easing of harsh, discriminatory laws against Jews in Western Europe. It was then that Mayer Amschel, the Rothschild family patriarch, had walked out of the Frankfurt ghetto with his five sons and established a fledgling bank. Little wonder the Rothschilds sparked such envy. By the time Pope Gregory asked for the first loan, they had created the world’s biggest bank, ten times larger than their closest rival.
The Vatican’s institutional resistance to capitalism was a holdover of medieval ideology, a belief that the church alone was empowered by God to fight Mammon, a satanic deity of greed. Its ban on usury—earning interest on money loaned or invested—was based on a literal biblical interpretation. The Vatican distrusted capitalism because it thought secular activists used it as a wedge to separate the church from an integrated role with the state. In some countries, the “capitalist bourgeoisie”—as the Vatican dubbed it—had even confiscated church land for public use. Also fueling the resistance to modern finance was the view that capitalism was mostly the province of Jews. Church leaders may not have liked the Rothschilds, but they did like their cash.
In 1870, the Vatican lost its earthly empire overnight when Rome fell to the nationalists who were fighting to unify Italy under a single government. The Church’s sixteen thousand square miles was reduced to a tiny parcel of land. The loss of its Papal States income meant the church was teetering on the verge of bankruptcy.
St. Peter's Basilica, Vatican City, Rome (Photograph by Bernd Marx)
The Vatican survived on something called Peter’s Pence, a fundraising practice that had been popular a thousand years earlier with the Saxons in England (and later banned by Henry VIII when he broke with Rome and declared himself head of the Church of England). The Vatican pleaded with Catholics worldwide to contribute money to support the Pope, who had declared himself a prisoner inside the Vatican and refused to recognize the new Italian government’s sovereignty over the Church.
During the nearly 60-year stalemate that followed, the Vatican’s insular and mostly incompetent financial management kept it under tremendous pressure. The Vatican would have gone bankrupt if Mussolini had not saved it. Il Duce, Italy’s fascist leader, was no fan of the Church, but he was enough of a political realist to know that 98 percent of Italians were Catholics. In 1929, the Vatican and the Fascist government executed the Lateran Pacts, which gave the Church its greatest power since the height of its temporal kingdom. The Pacts set aside 108.7 acres as Vatican City, an autonomous neutral state, along with fifty-two scattered “heritage” properties. They reinstated Papal sovereignty and ended the Pope’s boycott of the Italian state.
The Lateran Pacts declared the Pope was “sacred and inviolable,” the equivalent of a secular monarch, and acknowledged he was invested with divine rights. A new Code of Canon Law made Catholic religious education obligatory in state schools. Cardinals were invested with the same rights as princes by blood. All church holidays became state holidays and priests were exempted from military and jury duty. A three-article financial convention granted “ecclesiastical corporations” full tax exemptions. It also compensated the Vatican for the confiscation of the Papal States with 750 million lire in cash and a billion lire in government bonds that paid 5 percent interest. The settlement—worth about $1.6 billion in 2024 dollars—was approximately a third of Italy’s entire annual budget and a desperately needed lifeline for the cash-starved church.
Satirical depiction of Pope Pius XI and Benito Mussolini during the Lateran Treaty negotiations. (Illustration by Erich Schilling, for the cover of Simplicissimus magazine, March 1929.)
Pius XI, the Pope who struck the deal with Mussolini, was savvy enough to know that he and his fellow cardinals needed help managing the enormous windfall. He therefore brought in a lay outside advisor, Bernardino Nogara, a devout Catholic with a reputation as a financial wizard.
Nogara took little time in upending hundreds of years of tradition. He ordered, for instance, that every Vatican department produce annual budgets and issue monthly income and expense statements. The Curia bristled when he persuaded Pius to cut employee salaries by 15 percent. After the 1929 stock market crash, Nogara invested in blue-chip American companies whose stock prices had plummeted. He also bought prime London real estate at fire-sale prices. As tensions mounted in the 1930s, Nogara further diversified the Vatican’s holdings into international banks, U.S. government bonds, manufacturing companies, and electric utilities.
Only six months before the start of World War II, the church got a new Pope, Pius XII, one who had a special affection for Germany (he had been the Papal Nuncio—ambassador—to Germany). Nogara warned that the outbreak of war would greatly test the financial empire he had so carefully crafted over a decade. When the hot war began in September 1939, Nogara realized he had to do more than shuffle the Vatican’s hard assets to safe havens. He knew that beyond the military battlefield, governments fought wars by waging a broad economic battle to defeat the enemy. The Axis powers and the Allies imposed a series of draconian decrees restricting many international business deals, banning trading with the enemy, prohibiting the sale of critical natural resources, and freezing the bank accounts and assets of enemy nationals.
The United States was the most aggressive, searching for countries, companies, and foreign nationals who did any business with enemy nations. Under President Franklin Roosevelt’s direction, the Treasury Department created a so-called blacklist. By June 1941 (six months before Pearl Harbor and America’s official entry into the war), the blacklist included not only the obvious belligerents such as Germany and Italy, but also neutral nations such as Switzerland and the tiny principalities of Monaco, San Marino, Liechtenstein, and Andorra. Only the Vatican and Turkey were spared; the Vatican was the only European country proclaiming neutrality that was not placed on the blacklist.
There was a furious debate inside the Treasury Department about whether Nogara’s shuffling and masking of holding companies in multiple European and South American banking jurisdictions was sufficient to blacklist the Vatican. It was only a matter of time, Nogara concluded, until the Vatican was sanctioned.
Every financial transaction left a paper trail through the central banks of the Allies. Nogara needed to conduct Vatican business in secret. The June 27, 1942, formation of the Istituto per le Opere di Religione (IOR)—the Vatican Bank—was heaven-sent. Nogara drafted a chirograph (a handwritten declaration) that served as a six-point charter for the bank, and Pius signed it. Since its only branch was inside Vatican City—which, again, was not on any blacklist—the IOR was free of any wartime regulations. The IOR was a mix between a traditional bank like J. P. Morgan and a central bank such as the Federal Reserve. The Vatican Bank could operate anywhere worldwide, did not pay taxes, did not have to show a profit, produce annual reports, disclose balance sheets, or account to any shareholders. Located in a former dungeon in the Torrione di Niccolò V (Tower of Nicholas V), it certainly did not look like any other bank.
The Vatican Bank was created as an autonomous institution with no corporate or ecclesiastical ties to any other church division or lay agency. Its only shareholder was the Pope. Nogara ran it subject only to Pius’s veto. Its charter allowed it “to take charge of, and to administer, capital assets destined for religious agencies.” Nogara interpreted that liberally to mean that the IOR could accept deposits of cash, real estate, or stock shares (a scope that expanded later in the war to include patent royalties and reinsurance policy payments).
Many nervous Europeans were desperate for a wartime haven for their money. Rich Italians, in particular, were anxious to get cash out of the country. Mussolini had decreed the death penalty for anyone exporting lire from Italian banks. Of the six countries that bordered Italy, the Vatican was the only sovereignty not subject to Italy’s border checks. The formation of the Vatican Bank meant Italians needed only a willing cleric to deposit their suitcases of cash without leaving any paper trail. And unlike other sovereign banks, the IOR was free of any independent audits. It was required—supposedly to streamline recordkeeping—to destroy all its files every decade (a practice it followed until 2000). The IOR left virtually nothing by which postwar investigators could determine whether it had been a conduit for shuffling wartime plunder or whether it held accounts or money that should have been repatriated to victims.
The IOR’s creation meant the Vatican immediately dropped off the radar of U.S. and British financial investigators. It allowed Nogara to invest in both the Allies and the Axis powers. As I discovered in research for my 2015 book about church finances, God’s Bankers: A History of Money and Power at the Vatican, Nogara’s most successful wartime investment was in German and Italian insurance companies. The Vatican earned outsized profits when those companies escheated the life insurance policies of Jews sent to the death camps and converted the cash value of the policies.
After the war, the Vatican claimed it had never invested or made money from Nazi Germany or Fascist Italy. All its wartime investments and money movements were hidden by Nogara’s impenetrable Byzantine offshore network. The only proof of what happened was in the Vatican Bank archives, sealed to this day. (I have written opinion pieces in The New York Times, Washington Post, and Los Angeles Times, calling on the church to open its wartime Vatican Bank files for inspection. The Church has ignored those entreaties.)
While the Vatican Bank was indispensable to the church’s enormous wartime profits, the very features that made it so useful—no transparency or oversight, no checks and balances, no adherence to international banking best practices—became its weaknesses going forward. Its ironclad secrecy made it a popular postwar offshore tax haven for wealthy Italians wanting to avoid income taxes. Mafia dons cultivated friendships with senior clergy and used them to open IOR accounts under fake names. Nogara retired in the 1950s. The laymen who had been his aides were not nearly as clever or imaginative as he was, and that opened the Vatican Bank to the influence of outside lay bankers. One of them, Michele Sindona, was dubbed “God’s Banker” by the press in the mid-1960s for his tremendous influence and dealmaking with the Vatican Bank. Sindona was a flamboyant banker whose investment schemes always pushed against the letter of the law. (Years later he would be convicted of massive financial fraud and the murder of a prosecutor, and he would himself be killed in an Italian prison.)
Compounding the damage done by Sindona’s direction of church investments, the Pope’s pick to run the Vatican Bank in the 1970s was a loyal monsignor, Chicago-born Paul Marcinkus. The problem was that Marcinkus knew almost nothing about finance or running a bank. He later told a reporter that when he got the news that he would oversee the Vatican Bank, he visited several banks in New York and Chicago and picked up tips. “That was it. What kind of training you need?” He also bought some books about international banking and business. One senior Vatican Bank official worried that Marcinkus “couldn’t even read a balance sheet.”
Marcinkus allowed the Vatican Bank to become more enmeshed with Sindona, and later with another fast-talking banker, Roberto Calvi. Like Sindona, Calvi would later be on the run from a host of financial crimes and frauds, but he was never convicted. He was instead found hanging in 1982 under London’s Blackfriars Bridge.
By the 1980s the Vatican Bank had become a partner in questionable ventures in offshore havens from Panama and the Bahamas to Liechtenstein, Luxembourg, and Switzerland. When one cleric asked Marcinkus why there was so much mystery about the Vatican Bank, Marcinkus dismissed him saying, “You can’t run the church on Hail Marys.”
All the secret deals came apart in the early 1980s when Italy and the U.S. opened criminal investigations on Marcinkus. Italy indicted him but the Vatican refused to extradite him, allowing Marcinkus instead to remain in Vatican City. The standoff ended when all the criminal charges were dismissed and the church paid a stunning $244 million as a “voluntary contribution” to acknowledge its “moral involvement” with the enormous bank fraud in Italy. (Marcinkus returned a few years later to America where he lived out his final years at a small parish in Sun City, Arizona.)
It would be reasonable to expect that after having allowed itself to be used by a host of fraudsters and criminals, the Vatican Bank cleaned up its act. It did not. Although the Pope talked a lot about reform, the bank kept the same secret operations, even expanding into massive offshore deposits disguised as charities. The combination of lots of money, much of it in cash, and no oversight again proved a volatile mixture. Throughout the 1990s and into the early 2000s, the Vatican Bank remained an offshore bank in the heart of Rome. It was increasingly used by Italy’s top politicians, including prime ministers, as a slush fund for everything from buying gifts for mistresses to paying off political foes.
Italy’s tabloids, and a 2009 book by the top investigative journalist Gianluigi Nuzzi, exposed much of the latest round of Vatican Bank mischief. The public shaming of “Vatileaks,” however, did not lead to any substantive reforms in the way the Church ran its finances. Many top clerics knew that, in a 2,000-year-old institution, if they waited patiently for the public outrage to subside, the Vatican Bank could soon resume its shady dealings.
What changed everything in the way the Church runs its finances was an unexpected decision about a common currency—the euro—that at the time seemed unrelated to the Vatican Bank. Italy stopped using the lira as its currency and adopted the euro in 1999. That initially created a quandary for the Vatican, which had always used the lira as its currency. The Vatican debated whether to issue its own currency or to adopt the euro. In December 2000, the church signed a monetary convention with the European Union by which it could issue its own euro coins (distinctively stamped with Città del Vaticano) as well as commemorative coins that it marked up substantially to sell to collectors. Significantly, that agreement did not bind the Vatican—or two other non-EU nations that had adopted the euro, Monaco and Andorra—to abide by strict European statutes regarding money laundering, antiterrorism financing, fraud, and counterfeiting.
A Vatican 50 Euro Cent Coin, issued in 2016
What the Vatican did not expect was that the Organization for Economic Cooperation and Development (OECD), a 34-nation economics and trade group that tracks openness in the sharing of tax information between countries, had at the same time begun investigating tax havens. Those nations that shared financial data and had in place adequate safeguards against money laundering were put on a so-called white list. Those that had not acted but promised to do so were slotted onto the OECD’s gray list, and those resistant to reforming their banking secrecy laws were relegated to its blacklist. The OECD could not force the Vatican to cooperate since the Vatican was not a member of the organization. However, placement on the blacklist would cripple the Church’s ability to do business with all other banking jurisdictions.
In December 2009, the Vatican reluctantly signed a new Monetary Convention with the EU and promised to work toward compliance with Europe’s money laundering and antiterrorism laws. It took a year before the Pope issued a first-ever decree outlawing money laundering. The most historic change took place in 2012 when the church allowed European regulators from Brussels to examine the Vatican Bank’s books. There were just over 33,000 accounts and some $8.3 billion in assets. The Vatican Bank was not compliant on half of the EU’s forty-five recommendations. It had done enough, however, to avoid being placed on the blacklist.
In its 2017 evaluation of the Vatican Bank, EU regulators noted the Vatican had made significant progress in fighting money laundering and the financing of terrorism. Still, changing the DNA of the Vatican’s finances has proven incredibly difficult. When a reformer, Argentina’s Cardinal Jorge Bergoglio, became Pope Francis in 2013, he endorsed a wide-ranging financial reorganization that would make the church more transparent and bring it in line with internationally accepted financial standards and practices. Most notable was that Francis created a powerful financial oversight division and put Australian Cardinal George Pell in charge. Then Pell had to resign and return to Australia, where he was convicted of child sex offenses in 2018. In 2021, the Vatican began the largest financial corruption trial in its history, one that included the indictment of a cardinal for the first time. The case floundered, however, and ultimately revealed that the Vatican’s longstanding self-dealing and financial favoritism had continued almost unabated under Francis’s reign.
It seems that for every step forward, the Vatican somehow manages to move backwards when it comes to money and good governance. For those of us who study it, while the Vatican is a more compliant and normal member of the international community today than at any time in its past, the biggest stumbling block to real reform is that all power is still vested in a single man whom the Church considers the Vicar of Christ on earth.
The Catholic Church considers the reigning pope to be infallible when speaking ex cathedra (literally “from the chair,” that is, issuing an official declaration) on matters of faith and morals. However, not even the most faithful Catholics believe that every Pope gets it right when it comes to running the Church’s sovereign government. No reform appears on the horizon that would democratize the Vatican. Short of that, it is likely there will be future financial and power scandals, as the Vatican struggles to become a compliant member of the international community.
Dogs dressed up in bonnets. Diamond-studded iPhone cases shaped like unicorns. Donut-shaped purses. Hello Kitty shoes, credit cards, engine oil, and staplers. My Little Pony capsule hotel rooms. Pikachu parades. Hedgehog cafes. Pink construction trucks plastered with cartoon eyes. Miniature everything. Emojis everywhere. What is going on here?
Top left to right: Astro Boy, Hello Kitty credit card, Hello Kitty backpack, SoftBank’s Pepper robot, Pikachu Parade, Hello Kitty hat, film still from Ponyo by Studio Ghibli
Such merch, and more, is a manifestation of Japan’s kawaii culture of innocence, youthfulness, vulnerability, playfulness, and other childlike qualities. Placed in certain contexts, however, it can also underscore a darker reality—a particular denial of adulthood through a willful indulgence in naïveté, commercialization, and escapism. Kawaii can be joyful and happy, but it is also a way to avoid confronting the realities of life.
The roots of kawaii can be traced back to Japan’s Heian (“peace” or “tranquility”) period (794–1185 CE), a time when aristocrats appreciated delicate and endearing aesthetics in literature, art, and fashion.1 During the Edo period (1603–1868 CE), art and culture began to emphasize aesthetics, beauty, and playfulness.2 Woodblock prints (ukiyo-e) often depicted cute and whimsical characters.3 The modern iteration of kawaii began to take shape during the student protests of the late 1960s,4 particularly against the backdrop of the rigid culture of post-World War II Japan. In acts of defiance against academic authority, university students boycotted lectures and turned to children’s manga—a type of comic or graphic novel—as a critique of traditional educational norms.5
After World War II, Japan experienced significant social and economic changes. The emerging youth culture of the 1960s and 1970s began to embrace Western influences, leading to a blend of traditional Japanese aesthetics with Western pop culture.6 During the economic boom of the 1970s and 1980s, consumer subcultures flourished, and the aesthetic of cuteness found expression in playful handwriting, speech patterns, fashion, products, and themed spaces like cafes and shops. The release of Astro Boy (Tetsuwan Atomu) in 1952, created by Osamu Tezuka, is regarded by scholars as a key moment in the development of kawaii culture.7 The character’s large eyes, innocent look, and adventurous spirit resonated with both children and adults, setting the stage for the rise of other kawaii characters in popular culture. Simultaneously, as Japanese women gained more prominence in the workforce, the “burikko” archetype8—an innocent, childlike woman—became popular. This persona, exuding charm and nonthreatening femininity, was seen as enhancing a woman’s desirability in a marriage-centric society.9
Left to right: burikko handwriting, bento box, Kumamon mascot
Another catalyst for kawaii culture was the emergence in the 1970s of burikko handwriting among teenage girls.10 This playful, childlike, rounded style of writing incorporated hearts, stars, and cartoonish doodles. To the chagrin of educators, it became a symbol of youthful rebellion and a break from rigid societal expectations.
Japanese culture is deeply rooted in tradition, with strict social norms governing behavior and appearance. If you drop something, it’s common to see people rush to retrieve it for you. Even at an empty intersection with no car in sight, a red light will rarely be ignored. Business cards are exchanged with a sense of deference, and social hierarchies are meticulously observed. Conformity is highly valued, while femininity is often dismissed as frivolous. Against this backdrop, the emergence of kawaii can be seen as an act of quiet resistance.
The rise of shōjo (girls’) manga in the 1970s introduced cute characters with large eyes and soft rounded faces with childlike features, popularizing the kawaii aesthetic among young girls.11 Then, in 1974, along came Sanrio’s Hello Kitty,12 commercializing and popularizing kawaii culture beyond Japan’s borders. While it started as a product range for children, it soon became popular with teens and adults alike.
Kawaii characters like Hello Kitty are often depicted in a simplistic style, with oversized eyes and minimal facial expressions. This design invites people to project their own feelings and emotions onto the characters. As a playful touch, Hello Kitty has no mouth—ensuring she’ll never reveal your secrets!
By the 1980s and 1990s, kawaii had permeated stationery, toys, fashion, digital communications, games, and beyond. Franchises like Pokémon, anime series such as Sailor Moon, and the whimsical works of Studio Ghibli exported a sense of childlike wonder and playfulness to audiences across the globe. Even banks and airlines embraced cuteness as a strategy to attract customers, as did major brands like Nissan, Mitsubishi, Sony, and Nintendo. What may have begun as an organic expression of individuality was quickly commodified by industry.
Construction sites, for example, frequently feature barricades shaped like cartoon animals or flowers, softening the visual impact of urban development.13 They also display signs with bowing figures apologizing for any inconvenience. These elements are designed to create a sense of comfort for those passing by. Similarly, government campaigns use mascots like Kumamon,14 a cuddly bear, to promote tourism or public health initiatives. Japanese companies and government agencies use cute mascots, referred to as Yuru-chara, to create a friendly image and foster a sense of connection. You’ll even find them in otherwise harsh environments like high-security prisons and the Tokyo Metropolitan Police, and, well, the Japanese Sewage Association uses them too.15
Kawaii aesthetics have also appeared in high-tech domains. Robots designed for elder care, such as SoftBank’s Pepper,16 often adopt kawaii traits to appear less intimidating and foster emotional connections. In the culinary world, bento boxes featuring elaborately arranged food in cute and delightful shapes have become a creative art form, combining practicality with aesthetic pleasure—and turning ordinary lunches into whimsical and joyful experiences.
Kawaii hasn’t stayed confined to Japan’s borders. It has become popular in other countries like South Korea and has had a large influence in the West as well. It has become a global representation of Japan, so much so that it helps draw in tourism, particularly to the Harajuku district in Tokyo and to theme parks like Sanrio Puroland. In 2008, Hello Kitty was even named Japan’s official tourism ambassador.17
The influence of kawaii extends beyond tourism. Taiwanese airline EVA Air celebrated Hello Kitty’s 40th birthday with a special edition Boeing 777-300ER, featuring Hello Kitty-themed designs, menus, and crew uniforms on its Paris–Taipei route.18 Even the Vatican couldn’t resist the power of cute: in an appeal to younger generations, it introduced Luce, a cheerful young girl with big eyes, blue hair, and a yellow raincoat, as the mascot for the 2025 Jubilee Year and the Vatican’s pavilion at Expo 2025.19
Could anime and kawaii culture become vehicles for Catholicism? Writing for UnHerd, Katherine Dee suggests that Luce represents a global strategy to transcend cultural barriers in ways that traditional symbols, like the rosary, cannot. She points out that while Europe’s Catholic population has been shrinking, the global Catholic community continues to grow by millions.20 But while Luce may bring more attention to the Vatican, can she truly inspire deeper connections to God or spirituality?
All that said, the bigger question remains: Why does anyone find any of this appealing or cute?
One answer comes from the cultural theorist Sianne Ngai, who said that there’s a “surprisingly wide spectrum of feelings, ranging from tenderness to aggression, that we harbor toward ostensibly subordinate and unthreatening commodities.”21 That’s a fancy way of saying that humans find babies cute. The Austrian zoologist and ethologist Konrad Lorenz, who won the 1973 Nobel Prize in Physiology or Medicine, developed the concept of the “baby schema”22 (Kindchenschema) to explain how and why certain infantile facial and physical traits are seen as cute. These features include an overly large head, rounded forehead, large eyes, and protruding cheeks.23 Lorenz argued that such features trigger a biological response within us—a desire to nurture and protect—because we view them as proxies for vulnerability. The more such features, the more we are wired to care for those who embody them.24 Simply put, when these traits are projected onto characters, art, or products, they prompt the same kind of response in us as seeing a baby.
Modern research validates Lorenz’s theory. A 2008 brain imaging study showed that viewing infant faces, but not adult ones, triggered a response in the orbitofrontal cortex linked to reward processing.25 Another brain imaging study conducted at Washington University School of Medicine26 investigated how different levels of “baby schema” in infant faces—characteristics like big eyes and round cheeks—affect brain activity. Researchers discovered that viewing baby-like features activates the nucleus accumbens, a key part of the brain’s reward system responsible for processing pleasure and motivation. This effect was observed in women who had never had children. The researchers concluded that this activation of the brain’s reward system is the neurophysiological mechanism that triggers caregiving behavior.
A very different type of study,27 conducted in 2019, further confirmed that seeing baby-like features triggers a strong emotional reaction. In this case, the reaction is known as “kama muta,” a Sanskrit term that describes the feeling of being deeply moved or touched by love. This sensation is often accompanied by warmth, nostalgia, or even patriotism. The researchers found that videos featuring cute subjects evoked significantly more kama muta than those without such characteristics. Moreover, when the cute subjects were shown “interacting affectionately,” the feeling of kama muta was even stronger compared to when the subjects were not engaging in affectionate behavior.
In 2012, Osaka University professor Hiroshi Nittono led a research study that found that “cuteness” has an impact on observers, increasing their focus and attention.28 It also speaks to our instinct to nurture and protect that which appears vulnerable—which cute things, with their more infantilized traits, do. After all, who doesn’t love Baby Yoda? Perhaps that’s why some of us are so drawn to purchase stuffed dolls of Eeyore—it makes us feel as if we are rescuing him. When we see something particularly cute, many of us feel compelled to buy it. Likewise, it’s possible, at least subconsciously, that those who engage in cosplay around kawaii do so out of a deeper need to feel protected themselves. Research shows that viewing cute images improves moods and is associated with relaxation.29
Kawaii may well be useful in our fast-paced and stressful lives. For starters, when we find objects cute or adorable, we tend to treat them better and give them greater care. There’s also a contagious happiness effect. Indeed, could introducing more kawaii into our environments make people happier? Might it encourage us to care more for each other and our communities? The kawaii aesthetic could even be used in traditionally serious spaces—like a doctor’s waiting room or emergency room—to help reduce anxiety. Instead of staring at a blank ceiling in the dentist’s chair, imagine looking up at a whimsical kawaii mural instead.
Consider also the Tamagotchi digital pet trend of the 1990s. Children were obsessed with taking care of this virtual pet, tending to its needs ranging from food to entertainment. Millions of these “pets” were sold and were highly sought after. There’s something inherently appealing to children about mimicking adult roles, especially when it comes to caregiving. It turns out that children don’t just want to be cared for by their parents—they also seem to have an innate desire to nurture others. This act of caregiving can make them feel capable, empowered, and useful, tapping into a deep sense of responsibility and connection.
At Chuo University in Tokyo, there’s an entirely new field of “cute studies” founded by Dr. Joshua Dale, whose book summarizes his research: Irresistible: How Cuteness Wired Our Brains and Changed the World.30 According to Dale, four traditional aesthetic values of Japanese culture contributed to the rise of kawaii: (1) valuing the diminutive, (2) treasuring the transient, (3) preferring simplicity, and (4) appreciating the playful.31 His work emphasizes how kawaii is not just about cuteness; it expresses a deeply rooted cultural philosophy that reflects Japanese views on beauty, life, and emotional expression.
In other words, there’s something about kawaii that goes beyond a style or a trend. It is a reflection of deeper societal values and emotional needs. In a society that has such rigid hierarchies, social structures, decorum, and an intense work culture, kawaii provides a form of escapism—offering a respite from the harsh realities of adulthood and a return to childlike innocence. It is a safe form of vulnerability. Yet, does it also hint at an inability to confront the realities of life?
The “cult of cute” can lead people to seek refuge from responsibility and avoid confronting uncomfortable emotions. By surrounding themselves with cuteness and positivity, they may be trying to shield themselves from darker feelings and worries. In some cases, people even adapt their own personal aesthetics to appear cuter, as this can make them seem more innocent and in need of help—effectively turning cuteness into a protective layer.
Kawaii also perpetuates infantilization, particularly among women who feel pressured to conform to kawaii aesthetics, which often places them in a submissive role. This is especially evident in subgenres like Lolita fashion—a highly detailed, feminine, and elegant style inspired by Victorian and Rococo fashion, but with a modern and whimsical twist. While this style is adopted by many women with the female gaze in mind, the male gaze remains inescapable.
Japanese Lolita fashion
As a result, certain elements of kawaii can sometimes veer into the sexual, both intentionally and as an unintended distortion of innocence. Maid cafes, for example, though not designed to be sexually explicit, often carry sexual undertones that undermine their seemingly innocent and cute appeal. In these cafes, maids wear form-fitting uniforms and play into fantasies of servitude and submission—particularly when customers are addressed as “masters” and flirtatious interactions are encouraged.
It’s important to remember that things that look sweet and cute can also be sinister. The concept of “cute” often evokes feelings of trust, affection, and vulnerability, which can paradoxically make it a powerful tool for manipulation, subversion, and even control. Can kawaii be a Trojan horse?
When used in marketing to sell products, it may seem harmless, but how much of the rational consumer decision-making process does it override? And what evil lurks behind all the sparkle? In America, cuteness manifests itself even more boldly and aggressively. One designer, Lisa Frank, built an entire empire in the 1980s and 1990s on vibrant neon colors and whimsical artwork featuring rainbow-colored animals, dolphins, glitter, and unicorns on stickers, backpacks, and other merchandise. Her work is closely associated with a sense of nostalgia for millennials who grew up in that era. Yet, as later discovered and recently recalled in the Amazon documentary Glitter and Greed: The Lisa Frank Story, avarice ultimately led to a toxic work environment, poor working conditions, and alleged abuse.
Worse, can kawaii be used to mask authoritarian intentions or erase the memory of serious crimes against humanity?
As Japan gained prominence in global culture, its World War II and earlier atrocities have been largely overshadowed, causing many to overlook these grave historical events.32 When we think of Japan today, we often think of cultural exports like anime, manga, Sanrio, geishas, and Nintendo. Even though Japan was once an imperial power, today it exercises “soft power” in the sociopolitical sphere. This concept, introduced by American political scientist Joseph Nye,33 refers to influencing others by promoting a nation’s culture and values to make foreign audiences more receptive to its perspectives.
Japan began leveraging this strategy in the 1980s to rehabilitate its tarnished postwar reputation, especially in the face of widespread anti-Japanese sentiment in neighboring Asian nations. Over time, these attitudes shifted as Japan used “kawaii culture” and other forms of pop-culture diplomacy to reshape its image and move beyond its violent, imperialist past.
Kawaii also serves as a way to neutralize our fears by transforming things we might typically find unsettling into endearing and approachable forms—think Casper the Friendly Ghost or Monsters, Inc. This principle extends to emerging technologies, such as robots. Deep down, we harbor anxieties about how technology might impact our lives or what could happen if it begins to operate independently. By designing robots to look cute and friendly, we tend to assuage such fear and discomfort. Embedding frightening concepts with qualities that evoke happiness or safety allows us to navigate the interplay between darkness and light, innocence and danger, in a more approachable way. In essence, it’s a coping mechanism for our primal fears.
An interesting aspect of this is what psychologists call the uncanny valley—a feeling of discomfort that arises when something is almost humanlike, but not quite. Horror filmmakers have exploited this phenomenon by weaponizing cuteness against their audiences with characters like the Gremlins and the doll Chucky. The dissonance between a sweet appearance and sinister intent creates a chilling effect that heightens the horror.
Ultimately, all this speaks to the many layers of kawaii. It is more than an aesthetic; it’s a cultural phenomenon that reflects both societal values and emotional needs. Its ability to evoke warmth and innocence can also be a means of emotional manipulation. It can serve as an unassuming guise for darker intentions or meanings. It can be a medium for individual expression, and yet simultaneously it has been commodified and overtaken by consumerism. It can be an authentic expression, yet mass production has also made it a symbol of artifice. It’s a way to embrace the innocent and joyful, yet it can also be used to avoid facing the harsher realities of adulthood. When we embrace kawaii, are we truly finding joy, or are we surrendering to an illusion of comfort in an otherwise chaotic world?
It’s worth asking whether the prevalence of kawaii in public and private spaces reflects a universal desire for escapism or if it serves as a tool to maintain conformity and compliance. Perhaps, at its core, kawaii holds up a mirror to society’s collective vulnerabilities—highlighting not just what we nurture, but also what we are willing to overlook for the sake of cuteness.
This article was originally published in Skeptic in 1997.
Presented here for the first time are the complete texts of two letters that Einstein wrote regarding his lack of belief in a personal god.
Just over a century ago, near the beginning of his intellectual life, the young Albert Einstein became a skeptic. He states so on the first page of his Autobiographical Notes (1949, pp. 3–5):
Thus I came—despite the fact I was the son of entirely irreligious (Jewish) parents—to a deep religiosity, which, however, found an abrupt ending at the age of 12. Through the reading of popular scientific books I soon reached the conviction that much in the stories of the Bible could not be true. The consequence was a positively fanatic [orgy of] freethinking coupled with the impression that youth is intentionally being deceived… Suspicion against every kind of authority grew out of this experience, a skeptical attitude … which has never left me….

We all know Albert Einstein as the most famous scientist of the 20th century, and many know him as a great humanist. Some have also viewed him as religious. Indeed, in Einstein’s writings there is well-known reference to God and discussion of religion (1949, 1954). Although Einstein stated he was religious and that he believed in God, it was in his own specialized sense that he used these terms. Many are aware that Einstein was not religious in the conventional sense, but it will come as a surprise to some to learn that Einstein clearly identified himself as an atheist and as an agnostic. If one understands how Einstein used the terms religion, God, atheism, and agnosticism, it is clear that he was consistent in his beliefs.
Part of the popular picture of Einstein’s God and religion comes from his well-known statements, such as:
“God is cunning but He is not malicious.” (Also: “God is subtle but he is not bloody-minded.” Or: “God is slick, but he ain’t mean.”) (1946)
“God does not play dice.” (On many occasions.)
“I want to know how God created the world. I am not interested in this or that phenomenon, in the spectrum of this or that element. I want to know His thoughts, the rest are details.” (Unknown date.)

It is easy to see how some got the idea that Einstein was expressing a close relationship with a personal god, but it is more accurate to say he was simply expressing his ideas and beliefs about the universe.
Figure 1
Einstein’s “belief” in Spinoza’s God is one of his most widely quoted statements. But quoted out of context, like so many of these statements, it is misleading at best. It all started when Boston’s Cardinal O’Connell attacked Einstein and the General Theory of Relativity and warned the youth that the theory “cloaked the ghastly apparition of atheism” and “befogged speculation, producing universal doubt about God and His creation” (Clark, 1971, 413–414). Einstein had already experienced heavier-duty attacks against his theory in the form of anti-Semitic mass meetings in Germany, and he initially ignored the Cardinal’s attack. Shortly thereafter though, on April 24, 1929, Rabbi Herbert Goldstein of New York cabled Einstein to ask: “Do you believe in God?” (Sommerfeld, 1949, 103). Einstein’s return message is the famous statement:
“I believe in Spinoza’s God who reveals himself in the orderly harmony of what exists, not in a God who concerns himself with fates and actions of human beings” (103). The Rabbi, who was intent on defending Einstein against the Cardinal, interpreted Einstein’s statement in his own way when writing:
Spinoza, who is called the God-intoxicated man, and who saw God manifest in all nature, certainly could not be called an atheist. Furthermore, Einstein points to a unity. Einstein’s theory if carried out to its logical conclusion would bring to mankind a scientific formula for monotheism. He does away with all thought of dualism or pluralism. There can be no room for any aspect of polytheism. This latter thought may have caused the Cardinal to speak out. Let us call a spade a spade (Clark, 1971, 414).

Both the Rabbi and the Cardinal would have done well to note Einstein’s remark, of 1921, to Archbishop Davidson in a similar context about science: “It makes no difference. It is purely abstract science” (413).
The American physicist Steven Weinberg (1992), in critiquing Einstein’s “Spinoza’s God” statement, noted: “But what possible difference does it make to anyone if we use the word “God” in place of “order” or “harmony,” except perhaps to avoid the accusation of having no God?” Weinberg certainly has a valid point, but we should also forgive Einstein for being a product of his times, for his poetic sense, and for his cosmic religious view regarding such things as the order and harmony of the universe.
But what, at bottom, was Einstein’s belief? The long answer exists in Einstein’s essays on religion and science as given in his Ideas and Opinions (1954), his Autobiographical Notes (1949), and other works. What about a short answer?
In the summer of 1945, just before the bombs of Hiroshima and Nagasaki, Einstein wrote a short letter stating his position as an atheist (Figure 1, above). Ensign Guy H. Raner had written Einstein from the mid-Pacific requesting a clarification of the beliefs of the world-famous scientist (Figure 2, below). Four years later Raner again wrote Einstein for further clarification and asked: “Some people might interpret (your letter) to mean that to a Jesuit priest, anyone not a Roman Catholic is an atheist, and that you are in fact an orthodox Jew, or a Deist, or something else. Did you mean to leave room for such an interpretation, or are you from the viewpoint of the dictionary an atheist; i.e., ‘one who disbelieves in the existence of a God, or a Supreme Being’?” Einstein’s response is shown in Figure 3.
Figure 2
Combining key elements from Einstein’s first and second responses, there is little doubt as to his position:
From the viewpoint of a Jesuit priest I am, of course, and have always been an atheist…. I have repeatedly said that in my opinion the idea of a personal God is a childlike one. You may call me an agnostic, but I do not share the crusading spirit of the professional atheist whose fervor is mostly due to a painful act of liberation from the fetters of religious indoctrination received in youth. I prefer an attitude of humility corresponding to the weakness of our intellectual understanding of nature and of our being.

I was fortunate to meet Guy Raner, by chance, at a humanist dinner in late 1994, at which time he told me of the Einstein letters. Raner lives in Chatsworth, California and has retired after a long teaching career. The Einstein letters, a treasured possession for most of his life, were sold in December, 1994, to a firm that deals in historical documents (Profiles in History, Beverly Hills, CA). Five years ago a very brief letter (Raner & Lerner, 1992) describing the correspondence was published in Nature. But the two Einstein letters have remained largely unknown.
Curiously enough, the wonderful and well-known biography Albert Einstein, Creator and Rebel, by Banesh Hoffmann (1972) does quote from Einstein’s 1945 letter to Raner. But maddeningly, although Hoffmann quotes most of the letter (194–195), he leaves out Einstein’s statement: “From the viewpoint of a Jesuit Priest I am, of course, and have always been an atheist”!
Hoffmann’s biography was written with the collaboration of Einstein’s secretary, Helen Dukas. Could she have played a part in eliminating this important sentence, or was it Hoffmann’s wish? I do not know. However, Freeman Dyson (1996) notes “…that Helen wanted the world to see, the Einstein of legend, the friend of school children and impoverished students, the gently ironic philosopher, the Einstein without violent feelings and tragic mistakes.” Dyson also notes that he thought Dukas “…profoundly wrong in trying to hide the true Einstein from the world.” Perhaps her well-intentioned protectionism included the elimination of Einstein as atheist.
Figure 3
Although not a favorite of physicists, Einstein, The Life and Times, by the professional biographer Ronald W. Clark (1971), contains one of the best summaries of Einstein’s God: “However, Einstein’s God was not the God of most men. When he wrote of religion, as he often did in middle and later life, he tended to … clothe with different names what to many ordinary mortals—and to most Jews—looked like a variant of simple agnosticism…. This was belief enough. It grew early and rooted deep. Only later was it dignified by the title of cosmic religion, a phrase which gave plausible respectability to the views of a man who did not believe in a life after death and who felt that if virtue paid off in the earthly one, then this was the result of cause and effect rather than celestial reward. Einstein’s God thus stood for an orderly system obeying rules which could be discovered by those who had the courage, the imagination, and the persistence to go on searching for them” (19).
Einstein continued to search, even to the last days of his 76 years, but his search was not for the God of Abraham or Moses. His search was for the order and harmony of the world.
Tariff policy has been a contentious issue since the founding of the United States. Hamilton clashed with Jefferson and Madison over tariff policy in the 1790s, South Carolina threatened to secede from the union over tariff policy in 1832, and the Hawley-Smoot tariff generated outrage in 1930. Currently, Trump is sparking heated debates about his tariff policies.
To understand the ongoing tariff debate, it is essential to grasp the basics: Tariffs are taxes levied by governments on imported goods. They have been the central focus of U.S. trade policy since the federal government was established in 1789. Historically, tariffs have been used to raise government revenue, protect domestic industries, and influence the trade policies of other nations. The history of U.S. tariffs can be understood in three periods corresponding with these three uses.
From 1790 until the Civil War in 1861, tariffs primarily served as a source of federal revenue, accounting for about 90 percent of government income (since 2000, however, tariffs have generated less than 2 percent of the federal government’s income).1 Both the Union and the Confederacy enacted income taxes to help finance the Civil War. After the war, public resistance to income taxes grew, and Congress repealed the federal income tax in 1872. Later, when Congress attempted to reinstate an income tax in 1894, the Supreme Court struck it down in Pollock v. Farmers’ Loan & Trust Co. (1895), ruling it unconstitutional. To resolve this issue, the Sixteenth Amendment was ratified in 1913, granting Congress the authority to levy income taxes. Since then, federal income taxes have provided a much larger source of revenue than tariffs, allowing for greater federal government expenditures. The shift away from tariffs as the primary revenue source began during the Civil War and was further accelerated by World War I, which required large increases in federal spending.
Before the Civil War, the North and South had conflicting views on tariffs. The North, with its large manufacturing base, wanted higher tariffs to protect domestic industries from foreign competition. This protection would decrease the amount of competition Northern manufacturers faced, allowing them to charge higher prices and encounter less risk of being pushed out of business by more efficient foreign producers. By contrast, the South, with an economy rooted in agricultural exports (especially cotton), favored low tariffs, as it benefited from cheaper imported manufactured goods. These imports were largely financed by selling Southern cotton, produced by enslaved labor, to foreign markets, particularly Great Britain. The North-South tariff divide eventually led to the era of protective tariffs (1860–1934) after the Civil War, when the victorious North gained political power and protectionist policies dominated U.S. trade.
For more than half a century after the Civil War, U.S. trade policy was dominated by high protectionist tariffs. Republican William McKinley, a strong advocate of high tariffs, won the presidency in 1896 with support from industrial interests. Between 1861 and the early 1930s, average tariff rates on dutiable imports rose to around 50 percent and stayed elevated for decades. As a point of comparison, average tariffs had declined to about 5 percent by the early 21st century.
Republicans passed the Hawley-Smoot Tariff in 1930, which coincided with the Great Depression. While it is generally agreed among economists that the Hawley-Smoot Tariff did not cause the Great Depression, it further hurt the world economy during the economic downturn (though many observers at the time thought that it was responsible for the global economic collapse). The widely disliked Hawley-Smoot Tariff, along with the catastrophic effects of the Great Depression, allowed the Democrats to gain political control of both Congress and the Presidency in 1932. They passed the Reciprocal Trade Agreements Act (RTAA) in 1934, which gave the president the power to negotiate reciprocal trade agreements.
The RTAA shifted some of the power over trade policy (i.e., tariffs) away from Congress and to the President. Whereas the constituencies of specific members of Congress are in certain regions of the U.S., the entire country can vote in Presidential elections. For that reason, regional producers generally have less political power over the President than they do over their specific members of Congress, and the President therefore tends to be less responsive to their interests and more responsive to the interests of consumers and exporters located across the nation. Since consumers and exporters generally benefit from lower tariffs, the President has an incentive to decrease them. Thus, the RTAA contributed to the U.S. lowering tariff barriers around the world. This marked the beginning of the era of reciprocity in U.S. tariff policy (1934–2025), in which the U.S. has generally sought to reduce tariffs worldwide.
World War II and its consequences also pushed the U.S. into the era of reciprocity. The European countries, which had been some of the United States’ strongest economic competitors, were decimated after two World Wars in 30 years. Exports from Europe declined and the U.S. shifted even more toward exporting after the Second World War. As more U.S. firms became larger exporters, their political power was aimed at lowering tariffs rather than raising them. (Domestic companies that compete with imports have an interest in lobbying for higher tariffs, but exporting companies have the opposite interest.)
The World Trade Organization (WTO) was founded in 1995. Photo © WTO.
The end of WWII left the U.S. concerned that yet another World War could erupt if economic conditions were unfavorable around the world. America also sought increased trade to stave off the spread of Communism during the Cold War. These geopolitical motivations led the U.S. to seek increased trade with non-Communist nations, which was partially accomplished by decreasing tariffs. This trend culminated in the creation of the General Agreement on Tariffs and Trade (GATT) in 1947, which was then superseded by the World Trade Organization (WTO) in 1995. These successive organizations helped reduce tariffs and other international trade barriers.
Although there is a strong consensus among economists that tariffs do more harm than good,2,3,4 there are some potential benefits of specific tariff policies.
Pros
Although tariffs have some theoretical benefits in specific situations, the competence and incentives of the U.S. political system often do not allow these benefits to come to fruition. Tariffs almost always come with the cost of economic inefficiency, which is why economists generally agree that tariffs do more harm than good. Does the increase in U.S. tariffs, particularly on China, since 2016 mark the end of the era of reciprocity or is it just a blip? The answer will affect the economic well-being of Americans and people around the world.
The history of tariffs described in this article is largely based on Clashing Over Commerce by Douglas Irwin (2017).
The author would like to thank Professor John L. Turner at the University of Georgia for his invaluable input.
Throughout the early modern period—from the rise of the nation state through the nineteenth century—the predominant economic ideology of the Western world was mercantilism, or the belief that nations compete for a fixed amount of wealth in a zero-sum game: the +X gain of one nation means the –X loss of another nation, with the +X and –X summing to zero. The belief at the time was that in order for a nation to become wealthy, its government must run the economy from the top down through strict regulation of foreign and domestic trade, enforced monopolies, regulated trade guilds, subsidized colonies, accumulation of bullion and other precious metals, and countless other forms of economic intervention, all to the end of producing a “favorable balance of trade.” Favorable, that is, for one nation over another nation. As President Donald Trump often repeats, “they’re ripping us off!” That is classic mercantilism and economic nationalism speaking.
Adam Smith famously debunked mercantilism in his 1776 treatise An Inquiry into the Nature and Causes of the Wealth of Nations. Smith’s case against mercantilism is both moral and practical. It is moral, he argued, because: “To prohibit a great people…from making all that they can of every part of their own produce, or from employing their stock and industry in the way that they judge most advantageous to themselves, is a manifest violation of the most sacred rights of mankind.”1 It is practical, he showed, because: “Whenever the law has attempted to regulate the wages of workmen, it has always been rather to lower them than to raise them.”2
Producers and Consumers
Adam Smith’s The Wealth of Nations was one long argument against the mercantilist system of protectionism and special privilege that in the short run may benefit producers but which in the long run harms consumers and thereby decreases the wealth of a nation. All such mercantilist practices benefit the producers, monopolists, and their government agents, while the people of the nation—the true source of a nation’s wealth—remain impoverished: “The wealth of a country consists, not of its gold and silver only, but in its lands, houses, and consumable goods of all different kinds.” Yet, “in the mercantile system, the interest of the consumer is almost constantly sacrificed to that of the producer.”3
Adam Smith statue in Edinburgh, Scotland. Photo by K. Mitch Hodge / Unsplash
The solution? Hands off. Laissez Faire. Lift trade barriers and other restrictions on people’s economic freedoms and allow them to exchange as they see fit for themselves, both morally and practically. In other words, an economy should be consumer driven, not producer driven. For example, under the mercantilist zero-sum philosophy, cheaper foreign goods benefit consumers but they hurt domestic producers, so the government should impose protective trade tariffs to maintain the favorable balance of trade.
But who is being protected by a protective tariff? Smith showed that, in principle, the mercantilist system only benefits a handful of producers while the great majority of consumers are further impoverished because they have to pay a higher price for foreign goods. The growing of grapes in France, Smith noted, is much cheaper and more efficient than in the colder climes of his homeland, for example, where “by means of glasses, hotbeds, and hotwalls, very good grapes can be raised in Scotland” but at a price thirty times greater than in France. “Would it be a reasonable law to prohibit the importation of all foreign wines, merely to encourage the making of claret and burgundy in Scotland?” Smith answered the question by invoking a deeper principle:
What is prudence in the conduct of every private family, can scarce be folly in that of a great kingdom. If a foreign country can supply us with a commodity cheaper than we ourselves can make it, better buy it of them.4
This is the central core of Smith’s economic theory: “Consumption is the sole end and purpose of all production; and the interest of the producer ought to be attended to, only so far as it may be necessary for promoting that of the consumer.” The problem is that the system of mercantilism “seems to consider production, and not consumption, as the ultimate end and object of all industry and commerce.”5 So what?
When production is the object, and not consumption, producers will appeal to top-down regulators instead of bottom-up consumers. Instead of consumers telling producers what they want to consume, government agents and politicians tell consumers what, how much, and at what price the products and services will be that they consume. This is done through a number of different forms of interventions into the marketplace. Domestically, we find examples in tax favors for businesses, tax subsidies for corporations, regulations (to control prices, imports, exports, production, distribution, and sales), and licensing (to control wages, protect jobs).6 Internationally, the interventions come primarily through taxes under varying names, including “duties,” “imposts,” “excises,” “tariffs,” “protective tariffs,” “import quotas,” “export quotas,” “most-favored nation agreements,” “bilateral agreements,” “multilateral agreements,” and the like.
Such agreements are never between the consumers of two nations; they are between the politicians and the producers of the nations. Consumers have no say in the matter, with the exception of indirectly voting for the politicians who vote for or against such taxes and tariffs. And they all sum to the same effect: the replacement of free trade with “fair trade” (fair for producers, not consumers), which is another version of the mercantilist “favorable balance of trade” (favorable for producers, not consumers). Mercantilism is a zero-sum game in which producers win by the reduction or elimination of competition from foreign producers, while consumers lose by having fewer products from which to choose, along with higher prices and often lower quality products. The net result is a decrease in the wealth of a nation.
The principle is as true today as it was in Smith’s time, and we still hear the same objections Smith did: “Shouldn’t we protect our domestic producers from foreign competition?” And the answer is the same today as it was two centuries ago: no, because “consumption is the sole end and purpose of all production.”
Nonzero Economics
The founders of the United States and the framers of the Constitution were heavily influenced by the Enlightenment thinkers of England and the continent, including and especially Adam Smith. Nevertheless, it was not long after the founding of the country before our politicians began to shift the focus of the economy from consumption to production. The United States Constitution, drafted in 1787, included Article 1, Section 8: “The Congress shall have the power to lay and collect taxes, duties, imposts, and excises to cover the debts of the United States.” As an amusing exercise in bureaucratic wordplay, consider the common usages of these terms in the Oxford English Dictionary.
Tax: “a compulsory contribution to the support of government”
Duty: “a payment to the public revenue levied upon the import, export, manufacture, or sale of certain commodities”
Impost: “a tax, duty, imposition levied on merchandise”
Excise: “any toll or tax.”
(Note the oxymoronic phrase “compulsory contribution” in the first definition.)
A revised Article 1, Section 8 reads: “The Congress shall have the power to lay and collect taxes, taxes, taxes, and taxes to cover the debts of the United States.”
In the U.K. and on the continent, mercantilists dug in while political economists, armed with the intellectual weapons provided by Adam Smith, fought back, wielding the pen instead of the sword. The nineteenth-century French economist Frédéric Bastiat, for example, was one of the first political economists after Smith to show what happens when the market depends too heavily on top-down tinkering from the government. In his wickedly raffish The Petition of the Candlemakers, Bastiat satirizes special interest groups—in this case candlemakers—who petition the government for special favors:
We are suffering from the ruinous competition of a foreign rival who apparently works under conditions so far superior to our own for the production of light, that he is flooding the domestic market with it at an incredibly low price.... This rival... is none other than the sun.... We ask you to be so good as to pass a law requiring the closing of all windows, dormers, skylights, inside and outside shutters, curtains, casements, bull’s-eyes, deadlights and blinds; in short, all openings, holes, chinks, and fissures.7
Zero-sum mercantilist models hung on through the nineteenth and twentieth centuries, even in America. Since the income tax was not passed until 1913 through the Sixteenth Amendment, for most of the country’s first century the practitioners of trade and commerce were compelled to contribute to the government through various other taxes. Since foreign trade was not able to meet the growing debts of the United States, and in response to the growing size and power of the railroads and political pressure from farmers who felt powerless against them, in 1887 the government introduced the Interstate Commerce Commission. The ICC was charged with regulating the services of specified carriers engaged in transportation between states, beginning with railroads, a category later expanded to include trucking companies, bus lines, freight carriers, water carriers, oil pipelines, transportation brokers, and other carriers of commerce.8 Regardless of its intentions, the ICC’s primary effect was interference with the freedom of people to buy and sell between the states of America.
The ICC was followed in 1890 with the Sherman Anti-Trust Act, which declared: “Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal. Every person who shall make any contract or engage in any combination or conspiracy hereby declared to be illegal shall be deemed guilty of a felony,” resulting in a massive fine, jail, or both.
When stripped of its obfuscatory language, the Sherman Anti-Trust Act and the precedent-setting cases that have been decided in the courts in the century since it was passed allow the government to indict an individual or a company on one or more of four crimes:
This was Katy-bar-the-door for anti-business legislators and their zero-sum mercantilist bureaucrats to restrict the freedom of consumers and producers to buy and sell, and they did with reckless abandon.
Completing Smith’s Revolution
Tariffs are premised on a win-lose, zero-sum, producer-driven economy, which ineluctably leads to consumer loss. By contrast, a win-win, nonzero, consumer-driven economy leads to consumer gain. Ultimately, Smith held, a consumer-driven economy will produce greater overall wealth in a nation than will a producer-driven economy. Smith’s theory was revolutionary because it is counterintuitive. Our folk economic intuitions tell us that a complex system like an economy must have been designed from the top down, and thus it can only succeed with continual tinkering and control from the top. Smith amassed copious evidence to counter this myth—evidence that continues to accumulate two and a half centuries later—to show that, in the modern language of complexity theory, the economy is a bottom-up self-organized emergent property of complex adaptive systems.
Adam Smith launched a revolution that has yet to be fully realized. A week does not go by without a politician, economist, or social commentator bemoaning the loss of American jobs, American manufacturing, and American products to foreign jobs, foreign manufacturing, and foreign products. Even conservatives—purportedly in favor of free markets, open competition, and less government intervention in the economy—have few qualms about employing protectionism when it comes to domestic producers, even at the cost of harming domestic consumers.
Citing the need to protect the national economic interest—and Harley-Davidson—Ronald Reagan raised tariffs on Japanese motorcycles from 4.4 percent to 49.4 percent. Photo by Library of Congress / Unsplash
Even the icon of free market capitalism, President Ronald Reagan, compromised his principles in 1982 to protect the Harley-Davidson Motor Company when it was struggling to compete against Japanese motorcycle manufacturers that were producing higher quality bikes at lower prices. Honda, Kawasaki, Yamaha, and Suzuki were routinely undercutting Harley-Davidson by $1,500 to $2,000 a bike in comparable models.
On January 19, 1983, the International Trade Commission ruled that foreign motorcycle imports were a threat to domestic motorcycle manufacturers, issuing a 2-to-1 finding of injury on a petition by Harley-Davidson, which complained that it could not compete with foreign motorcycle producers.10 On April 1, Reagan approved the ITC recommendation, explaining to Congress, “I have determined that import relief in this case is consistent with our national economic interest,” thereby raising the tariff from 4.4 percent to 49.4 percent for a year, more than a tenfold tax increase on foreign motorcycles that was absorbed by American consumers. The protective tariff worked to help Harley-Davidson recover financially, but it was American motorcycle consumers who paid the price, not Japanese producers. As the ITC Chairman Alfred E. Eckes explained about his decision: “In the short run, price increases may have some adverse impact on consumers, but the domestic industry’s adjustment will have a positive long-term effect. The proposed relief will save domestic jobs and lead to increased domestic production of competitive motorcycles.”11
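For a rough sense of what that jump meant at the checkout (using a hypothetical $5,000 import price purely for illustration, not a figure from the ITC record): a 4.4 percent duty adds about $220 to such a bike, while a 49.4 percent duty adds about $2,470, a difference of roughly $2,250 per motorcycle paid by the American buyer rather than by the Japanese manufacturer.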
Whenever free trade agreements are proposed that would allow domestic manufacturers to produce their goods cheaper overseas and thereby sell them domestically at a much lower price than they could have with domestic labor, politicians and economists, often under pressure from trade unions and political constituents, routinely respond disapprovingly, arguing that we must protect our domestic workers. Recall Presidential candidate Ross Perot’s oft-quoted 1992 comment in response to the North American Free Trade Agreement (NAFTA) about the “giant sucking sound” of jobs being sent to Mexico from the United States.
In early 2007, the Nobel laureate economist Edward C. Prescott lamented that economists invest copious time and resources countering the myth that it is “the government’s economic responsibility to protect U.S. industry, employment and wealth against the forces of foreign competition.” The government’s responsibility, says Prescott, echoing Smith, is simply “to provide the opportunity for people to seek their livelihood on their own terms, in open international markets, with as little interference from government as possible.” Prescott shows that “those countries that open their borders to international competition are those countries with the highest per capita income” and that openness “is the key to bringing developing nations up to the standard of living enjoyed by citizens of wealthier countries.”12
“Protectionism is seductive,” Prescott admits, “but countries that succumb to its allure will soon have their economic hearts broken. Conversely, countries that commit to competitive borders will ensure a brighter economic future for their citizens.” But why exactly do open economic borders, free trade, and international competition lead to greater wealth for a nation? Writing over two centuries after Adam Smith, Prescott reverberates the moral philosopher’s original insight:
It is openness that gives people the opportunity to use their entrepreneurial talents to create social surplus, rather than using those talents to protect what they already have. Social surplus begets growth, which begets social surplus, and so on. People in all countries are motivated to improve their condition, and all countries have their share of talented risk-takers, but without the promise that a competitive system brings, that motivation and those talents will only lie dormant.13
The Evolutionary Origins of Tariffs and Zero-Sum Economics
Why is mercantilist zero-sum protectionism so pervasive and persistent? Bottom-up invisible hand explanations for complex systems are counterintuitive because of our folk economic propensity to perceive designed systems to be the product of a top-down designer. But there is a deeper reason grounded in our evolved social psychology of group loyalty. The ultimate reason that Smith’s revolution has not been fulfilled is that we evolved a propensity for in-group amity and between-group enmity, and thus it is perfectly natural to circle the wagons and protect one’s own, whoever or whatever may be the proxy for that group. Make America Great Again!
For the first 90,000 years of our existence as a species we lived in small bands of tens to hundreds of people. In the last 10,000 years some bands evolved into tribes of thousands, some tribes developed into chiefdoms of tens of thousands, some chiefdoms coalesced into states of hundreds of thousands, and a handful of states conjoined together into empires of millions. The attendant leap in food-production and population that accompanied the shift to chiefdoms and states allowed for a division of labor to develop in both economic and social spheres. Full-time artisans, craftsmen, and scribes worked within a social structure organized and run by full-time politicians, bureaucrats, and, to pay for it all, tax collectors. The modern state economy was born.
In this historical trajectory our group psychology evolved and along with it a propensity for xenophobia—in-group good, out-group bad. In the Paleolithic social environment in which our moral commitments evolved, one’s fellow in-group members consisted of family, extended family, friends, and community members who were well known to each other. To help others was to help oneself. Those groups who practiced in-group harmony and between-group antagonism would have had a survival advantage over those groups who experienced within-group social divide and decoherence, or haphazardly embraced strangers from other groups without first establishing trust. Because our deep social commitments evolved as part of our behavioral repertoire of responses for survival in a complex social environment, we carry the seeds of such in-group inclusiveness today. The resulting within-group cohesiveness and harmony carries with it a concomitant tendency for between-group xenophobia and tribalism that, in the context of a modern economic system, leads to protectionism and mercantilism.
And tariffs. We must resist the tribal temptation.
Annie Dawid’s most recent novel revisits the Jonestown Massacre from the perspective of the people who were there, taking the spotlight off cult leader Jim Jones and rehumanizing the “mindless zombies” who followed one man from their homes in the U.S. to their deaths in Guyana. But even as our notion of victimhood improves, we are forced to confront the ugly truth: in the almost fifty years since Jonestown, large-scale cult-related death has not gone away.
On the 18th of November, 2024, fiction author Annie Dawid’s sixth book, Paradise Undone: A Novel of Jonestown, celebrated its first birthday on the same day as the forty-sixth anniversary of its subject matter, an incident that saw the largest instance of intentional U.S. citizen death in the 20th Century and introduced the world to the horrors and dangers of cultism—The Jonestown Massacre.
A great deal has been written on Jonestown after 1978, although mostly non-fiction, and the books Raven: The Untold Story of the Rev. Jim Jones and His People (1982) and The Road to Jonestown: Jim Jones and Peoples Temple (2017) are considered some of the most thorough investigations into what happened in the years leading up to the massacre. Many historical and sociological studies of Jonestown focus heavily on the psychology and background of the man who ordered 917 men, women, and children to die with him in the Guyanese jungle—The Reverend Jim Jones.
For cult survivors beginning the difficult process of unpacking and rebuilding after their cult involvement—or for those who lose family members or friends to cult tragedy—the shame of cult involvement and the public’s misconception that cult recruitment stems from a psychological or emotional fault are challenges to overcome.
And when subsequent discussions of cult-related incidents give a disproportionate amount of attention to cult leaders, often classified as pathological narcissists or as having Cluster-B personality disorders, there’s a chance that with every new article or book on Jonestown, we’re just feeding the beast—often at the expense of recognizing the victims.
An aerial view of the dead in Jonestown.
Annie Dawid, however, uses fiction to avoid the trap of revisiting Jonestown through the lens of Jones, essentially removing him and his hold over the Jonestown story.
“He’s a man that already gets too much air time,” she says, “The humanity of 917 people gets denied by omission. That’s to say their stories don’t get told, only Jones’ story gets told over and over again.”
“I read so many books about him. I was like enough,” she says, “Enough of him.”
Jones of Jonestown
By all accounts, Jones, in his heyday, was a handsome man.
An Internet image search for Jones pulls up an almost iconic, counter-culture cool black-and-white photo of a cocksure man in aviator sunglasses and a dog collar, his lips parted as if the photographer has caught him in the middle of delivering some kind of profundity.
Jones’s signature aviator sunglasses may once have been a fashion statement, the mark of a hip priest among the Bay Area kids, but in later photos he never seems to be without them, as an increasing amphetamine and tranquilizer dependency has permanently shaded the areas under his eyes.
Jim Jones in 1977. By Nancy Wong
“Jim Jones is not just a guy with an ideology; he was a preacher with fantastic charisma,” says cult expert Mike Garde, director of Dialogue Ireland, an independent Irish charity that educates the public on cultism and assists its victims. “And this charisma would have been unable to bring people to Guyana if he had not been successful at doing it in San Francisco,” he adds.
Between January 1977 and August 1978, almost 900 members of the Peoples Temple gave up their jobs and life savings and left family members behind in the U.S. to relocate to Guyana and move into their new home: the Peoples Temple Agricultural Mission, an agricultural commune inspired by Soviet socialist values.
On November 19, 1978, U.S. Channel 7 interrupted its normal broadcast with a special news report in which presenter Tom Van Amburg advised viewer discretion and described the horror of hardened newsmen upon seeing scenes at Jonestown that had “shades of Auschwitz.”
As a story, the details of Jonestown feel like a work of violent fiction, like a prototype Cormac McCarthy novel: a Heart of Darkness-esque cautionary tale of Wild West pioneering gone wrong in a third-world country, with Jones cast in the lead role.
“I feel like there’s a huge admiration for bad boys, and if they’re good-looking, that helps too,” Dawid says, “This sort of admiration of the bad boy makes it that we want to know, we’re excited by the monster—we want to know all about the monster.”
Dawid understands Jones’ allure, his hold over the Jonestown narrative as well as the public’s attention, but “didn’t want to indulge that part of me either,” she says.
“But I wasn’t tempted to because I learned about so many interesting people that were in the story but never been the subjects of the story,” she adds, “So I wanted to make them the subjects.”
Screenshot of the website for the award-winning film Jonestown: The Life and Death of Peoples Temple by Stanley Nelson, Marcia Smith, and Noland Walker
The People of the Peoples Temple
For somebody who was there from the modest Pentecostal beginnings of the Peoples Temple in 1954 until the end in Guyana, Marceline Jones received remarkably little attention in the years after Jonestown.
“She was there—start to finish. For me, she made it all happen, and nobody wrote anything about her,” Dawid says, “The woman behind the man doesn’t exist.”
Even for Garde, Marceline was another anonymous victim of no significance beyond the surname connecting her to her husband: “My initial read of Marceline was that she was just a cipher, she wasn’t a real person,” he says. “She didn’t even register on my dial.”
Dawid gives Marceline an existence, and in her book she’s a “superwoman” juggling her duties as a full-time nurse with her work for the Peoples Temple—a caring, selfless individual who lives in the service of others, mainly the children and the elderly of the Peoples Temple.
“In the sort of awful way, she’s this smart, interesting, energetic woman, but she can’t escape the power of her husband,” Dawid says, “It’s just very like domestic violence where the woman can’t get away from the abuser [and] I have had so much feedback from older women who felt that they totally related to her.”
Selfless altruism was a shared characteristic of the Peoples Temple, as members spent most of their time involved in some kind of charity work, from handing out food to the homeless to organizing clothes drives.
“You know, I did grow to understand the whole sort of social justice beginnings of Peoples Temple,” Dawid says, “I came to admire the People’s Temple as an organization.”
“Social justice, racism, and caring for old people, that was a big part of the Peoples Temple. And so it made sense why an altruistic, smart, young person would say, ‘I want to be part of this,’” she adds.
Guyana
For Dawid, where it all went down is just as important—and arguably just as overlooked in the years after 1978—as the people who went there.
Acknowledging the incredible logistical feat of moving almost 1000 people, many of them passport-less, to a foreign country, Dawid sees the small South American country as another casualty of Jonestown: “I had to have a Guyanese voice in my book because Guyana was another victim of Jones,” Dawid says.
The English-speaking Guyana—recently free of British colonial rule and leaning toward socialism under Prime Minister Forbes Burnham—offered Jones a haven from the increasing scrutiny back in the U.S. amidst accusations of fraud and sexual abuse, and was “a place to escape the regulation of the U.S. and enjoy the weak scrutiny of the Guyanese state,” according to Garde.
“He was not successful at covering up the fact he had a dual model: he was sexually abusing women, taking money, and accruing power to himself, and he had to do it in Guyana,” Garde adds, “He wanted a place where he could not be observed.”
There may be a temptation to overstate what happened in 1978 as leaving an indelible, defining mark on the reputation of a country during its burgeoning years as an independent nation, but in the columns of many newspapers on the breakfast tables of American households in the years afterward, one could not be discussed without the other: “So it used to be that if you read an article that mentioned Guyana, it always mentioned Jonestown,” Dawid says.
In the few reports interested in the Guyanese perspective after Jonestown, the locals have expressed a range of feelings, from wanting to forget the tragedy ever happened to wanting to turn the site into a destination for dark tourism.
However, the country’s 2015 discovery of offshore oil means that—in the pages of some outlets and the minds of some readers—Jonestown is no longer the only thing synonymous with Guyana: “I read an article in the New York Times about Guyana’s oil,” Dawid says, “and it didn’t mention Jonestown.”
From victimhood to survivorship: out of the darkness and into the light…
Victimhood to Survivorship
According to Garde, the public’s perception of cult victims as mentally defective, obsequious followers, or—at worst—somehow deserving of their fate is not unique to victims of religious or spiritual cults.
“Whenever we use the words ‘cult’, ‘cultism’ or ‘cultist’ we are referring solely to the phenomenon where troubling levels of undue psychological influence may exist. This phenomenon can occur in almost any group or organization,” reads Dialogue Ireland’s mission statement.
“Victim blaming is something that is now so embedded that we take it for granted. It’s not unique to cultism contexts—it exists in all realms where there’s a victim-perpetrator dynamic,” Garde says, “People don’t want to take responsibility or face what has happened, so it can be easier to ignore or blame the victim, which adds to their trauma.”
While blaming and shaming prevent victims from reporting crimes and seeking help, there do seem to have been recent improvements in the treatment of victims, regardless of the type of abuse:
“We do seem to be improving our concept of victims, and we are beginning to recognize the fact that the victims of child sexual abuse need to be recognized, the #MeToo movement recognizes what happened to women,” says Garde, “They are now being seen and heard. There’s an awareness of victimhood and at the same time, there’s also a movement from victimhood to survivorship.”
Paradise Undone: A Novel of Jonestown focuses on how the survivors process and cope with the fallout of their traumatic involvement with or connection to Jonestown, making the very poignant observation that cult involvement does not end when you escape or leave—the residual effects persist for many years afterward.
“It’s an extremely vulnerable period of time,” Garde points out, “If you don’t get out of that state, in that sense of being a victim, that’s a very serious situation. We get stuck in the past or frozen in the present and can’t move from being a victim to having a future as a survivor.”
Support networks and resources are flourishing online to offer advice and comfort to survivors: “I think the whole cult education movement has definitely humanized victims of cults,” Dawid points out, “And there are all these cult survivors who have their own podcasts and cult survivors who are now counseling other cult survivors.”
At the very least, these can help reduce the stigma around abuse or kickstart the recovery process; however, Garde sees a potential issue in the cult survivors counseling cult survivors dynamic: “There can be a danger of those operating such sites thinking that, as former cult members, they have unique insight and don’t recognize the expertise of those who are not former members,” he says, “We have significant cases where ex-cultists themselves become subject to sectarian attitudes and revert back to cult behavior.”
And while society’s treatment and understanding of cult victims may be changing, Garde is frustrated with the overall lack of support the field of cult education receives, and his warnings seem to fall on deaf ears, as they once did in the lead-up to Jonestown:
The public’s understanding seems to be changing, but the field of cult studies still doesn’t get the support or understanding it needs from the government or the media. I can’t get through to journalists and government people, or they don’t reply. It’s so just unbelievably frustrating in terms of things not going anywhere.
One fundamental issue remains; some might say that things have gotten worse in the years post-Jonestown: “The attitude there is absolutely like pro-survivor, pro-victim, so that has changed,” Dawid says, “You know, it does seem like there are more cults than ever, however.”
A History of Violence
The International Cultic Studies Association’s (ICSA) Steve Eichel estimates there are around 10,000 cults operating in the U.S. alone. Regardless of the number, in the decades since Jonestown, there has been no shortage of cult-related tragedies resulting in a massive loss of life in the U.S. and abroad.
The trial of Paul Mackenzie, the Kenyan pastor behind the 2023 Shakahola Forest Massacre (also known as the Kenyan starvation cult), is currently underway. Mackenzie has pleaded not guilty to charges of murder, child torture, and terrorism in connection with the deaths of 448 people, as Kenyan pathologists continue working to identify all of the exhumed bodies.
“It’s frustrating and tragic to see events like this still happening internationally, so it might seem like we haven’t progressed in terms of where we’re at,” Garde laments.
Jonestown may be seen as the progenitor of the modern cult tragedy, an incident against which other cult incidents are compared, but for Dawid, the 1999 Colorado shooting that left 13 people dead and 24 injured would shock American society in the same way, and leave behind a similar legacy.
“I see a kind of similarity in the impact it had,” Dawid says, “Even though there had been other school shootings before Columbine….I think it did a certain kind of explosive number on American consciousness in the same way that Jones did, not just on American consciousness, but world consciousness about the danger of cults.”
Just as everyone understands that Jonestown refers to the 917 dead U.S. citizens in the Guyanese jungle, the word “Columbine” is now a byword for school shootings. However, if you want to use their official, unabbreviated titles, you’ll find both events share the same surname—massacre.
“All cult stories will mention Jonestown, and all school shootings will [mention] Columbine,” Dawid points out.
In Memoriam
The official death toll on November 18, 1978, is 918, but that figure includes the man who couldn’t bring himself to follow his own orders.
According to the evidence, Jim Jones and the nurse Annie Moore were the only two to die of gunshot wounds at Jonestown. The entry wound on Jones’ left temple meant there was a very good chance the shot was not fired by a right-handed person (as Jones was). It is believed that Jones ordered Moore to shoot him first, confirming, for Garde, Jones’ cowardice: “We saw his pathetic inability to die as he set off a murder-suicide. He could order others to kill themselves, but he could not take the same poison. He did not even have the guts to shoot himself.”
On the anniversary of Jonestown (also International Cult Awareness Day), people gather at the Jonestown Memorial at Evergreen Cemetery in Oakland, California, but the 2011 unveiling of the memorial revealed something problematic. Nestled among the engraved names of the victims is the name of the man responsible for it all: James Warren Jones.
The inclusion of Jones’ name has outraged many in attendance, and there are online petitions calling for it to be removed. Garde agrees, and just as Dawid retired Jones from his lead role in the Jonestown narrative, he believes Jones’ name should be physically removed from the memorial.
“He should be definitely excluded and there should be a sign saying very clearly he was removed because of the fact that it was totally inappropriate for him to be connected to this,” he says. “It’s like the equivalent of a murderer being added as if he’s a casualty.”
In the years since she first started researching the book, Dawid feels that the focus on Jones has begun to loosen: “There’s been a lot written since then, and I feel like some of the material that’s been published since then has tried to branch out from that viewpoint,” she says.
Modern re-examinations challenge the long-time framing of Jonestown as a mass suicide, with “murder-suicide” providing a better description of what unfolded, and the 2018 documentary Jonestown: The Women Behind the Massacre explores the actions of the female members of Jones’ inner circle.
While it may be difficult to look at Jonestown and see anything positive, with every new examination of the tragedy that avoids making him the central focus, Jones’ power over the Peoples Temple, and the story of Jonestown, seems to wane.
And looking beyond Jones reveals acts of heroism that otherwise go unnoticed: “The woman who escaped and told everybody in the government that this was going to happen. She’s a hero, and nobody listened to her,” Dawid says.
That person is Jonestown defector Deborah Layton, the author of the Jonestown book Seductive Poison, whose 1978 affidavit warned the U.S. government of Jones’ plans for a mass suicide.
And in the throes of the chaos of November 18, a single person courageously stood up and denounced the actions that would define the day.
Dawid’s book is dedicated to the memory of the sixty-year-old Christine Miller, the only person known to have spoken out that day against Jones and his final orders. Her protests can be heard on the 44-minute “Death Tape”—an audio recording of the final moments of Jonestown.
The dedication on the opening page of Paradise Undone: A Novel of Jonestown reads: “For Christine, who refused to submit.”
Perceptions of Jonestown may be changing, but I ask Dawid how the survivors and family members of the victims feel about how Jonestown is represented after all these years.
“It’s a really ugly piece of American history, and it had been presented for so long as the mass suicide of gullible, zombie-like druggies,” Dawid says, “We’re almost at the 50th anniversary, and the derision of all the people who died at Jonestown as well as the focus on Jones as if he were the only important person, [but] I think they’re encouraged by how many people still want to learn about Jonestown.”
“They’re very strong people,” Dawid tells me.
Frans de Waal was one of the world’s leading primatologists. He was named one of TIME magazine’s 100 Most Influential People. The author of Are We Smart Enough to Know How Smart Animals Are?, as well as many other works, he was the C.H. Candler Professor in Emory University’s Psychology Department and director of the Living Links Center at the Yerkes National Primate Research Center.
Skeptic: How can we know what another mind is thinking or feeling?
Frans de Waal: My work is on animals that cannot talk, which is both a disadvantage and advantage. It’s a disadvantage because I cannot ask them how they feel and what their experiences are, but it is an advantage because I think humans lie a lot. I don’t trust humans. I’m a biologist but I work in a psychology department, and all my colleagues are psychologists. Most psychologists nowadays use questionnaires, and they trust what people tell them, but I don’t. So, I’d much rather work with animals where instead of asking how often they have sex, I just count how often. That’s more reliable.
That said, I distinguish between emotions and feelings because you cannot know the feelings of any animals. But I can deduce them, guess at them. Personally, I feel it’s very similar with humans. Humans can tell me their feelings, but even if you tell me that you are sad, I don’t know if that’s the same sadness that I would feel under the same circumstances, so I can only guess what you feel. You might even be experiencing mixed feelings, or there may be feelings you’re not even aware of, and so you’re not able to communicate them. We have the same problem in non-human species as we do in humans, because feelings are less accessible and require guesswork.
That said, sometimes I’m perfectly comfortable guessing at the feelings of animals, even though you must distinguish them from the things you can measure. I can measure facial expressions. I can measure blood pressure. I can measure their behavior, but I can never really measure what they feel. But then, psychologists can’t do that with people either.
Skeptic: Suppose I’m feeling sad and I’m crying at some sort of loss. And then I see you’ve experienced a loss and that you’re crying … Isn’t it reasonable to infer that you feel sad?
FdW: Yes. And so that same principle of being reasonable can be applied to other species. And the closer that species is to you, the easier it is. Chimpanzees and bonobos cry and laugh. They have facial expressions— the same sort of expressions we do. So it’s fairly easy to infer the feelings behind those expressions and infer they may be very similar to our own. If you move to, say, an elephant, which is still a mammal, or to a fish, which is not, it becomes successively more difficult. Fish don’t even have facial expressions. That doesn’t mean that fish don’t feel anything. It would be a very biased view to assume that an animal needs to show facial expressions as evidence that it feels something.
At the same time, research on humans has argued that we have six basic emotions based on the observation that we have six basic facial expressions. So, there the tie between emotions and expressions has been made very explicit.
In my work, I tend to focus on the expressive behavior. But behind it, of course, there must be similar feelings. At least that’s what Darwin thought.
Skeptic: That’s not widely known, is it? Darwin published The Expression of the Emotions in Man and Animals in 1872, but it took almost a century before the taboo against it started to lift.
FdW: It’s the only book of Darwin’s that disappeared from view for a century. All the other books were celebrated, but that book was placed under some sort of taboo. Partly because of the influence of the behaviorist school of B.F. Skinner, Richard Herrnstein, and others, it was considered silly to think that animals would have the same sort of emotions as we do.
Biologists, including my own biology professors, however, found a way out. They didn’t need to talk about emotions because they would talk about the function of behavior. For example, they would not say “the animal is afraid” but rather that “the animal escapes from danger.” They phrased everything in functional terms—a semantic trick that researchers still often use.
If you were to say that two animals “love each other” or that “they’re very attached to each other,” you’re likely to receive significant criticism, if not ridicule. So why even describe it that way? Instead, you objectively report that the animals bonded and they benefited from doing so. Phrasing it functionally has, well, functioned as a sort of preferred safe procedure. But I have decided not to employ it anymore.
Skeptic: In most of your books you talk about the social and political context of science. Why do you think the conversation about animal emotions was held back for almost a century?
FdW: World War II had an effect on the study of aggression, which became a very popular topic in the 1960s and 70s. Then we got the era of “the selfish gene” and so on. In fact, the silencing of the study of mental processes and emotions in animals started before the war. It actually started in the 1920s and 30s. And I think it’s because scientists such as Skinner wanted the behavioral sciences to be like the physical sciences. They operated under the belief that it provided a certain protection against criticism to get away from anything that could be seen as speculation. And there was a lot of speculation going on in the so-called “depth psychologies,” some of it rather wild.
However, there are a lot of invisible things in science that we assume to be true, for example, evolutionary theory. Evolution is not necessarily visible, at least most of the time it isn’t, yet still, we believe very strongly that evolution happened. Continental drift is unobservable, but we now accept that it happened. The same principle can be applied to animal feelings and animal consciousness. You assume it as a sort of theory and see if things fit. And, research has demonstrated that things fit quite well.
Skeptic: Taking a different angle, can Artificial Intelligence (AI) experience emotions? Was IBM’s Watson “thrilled” when it beat Ken Jennings, the all-time champion of Jeopardy!? Well, of course not. So what do you think about programming such internal states into an artificial intelligence?
FdW: I think researchers developing AI models are interested in affective programs because of the way we biologists look at emotions. Emotions trigger actions that are adaptive. Fear is an adaptive emotion because it may trigger certain behaviors such as hiding, escaping, etc., so we look at emotions as being the stimulus that elicits certain specific types of behavior. Emotions organize behavior, and I think that’s what the AI people are interested in. Emotions are actually a very smart system, compared to instincts. Someone might argue that instincts also trigger behavior. However, while instincts are inflexible, emotions are different.
Let’s say you are afraid of something. The emotion of fear doesn’t trigger your behavior. An emotion just prepares the body for certain behaviors, but you still need to make a decision. Do I want to escape? Do I want to fight? Do I want to hide? What is the best behavior under these circumstances? And so, your emotion triggers the need for a response, and then your cognition takes over and searches for the best solution. It’s a very, very nice system and creators of AI models are interested in such an organizational system of behavior. I’m not sure they will ever construct the feelings behind the emotions—it’s not an easy thing to do—but certainly organizing behavior according to emotions is possible.
Skeptic: Are emotions created from the bottom-up? How do you scale from something very simple up to much higher levels of complexity?
FdW: Humans have a complex emotional system—we mix a lot of emotions, sort them, regulate them. Well, sometimes we don’t actually regulate them and that is something that really interests me in my work with animals. What kind of regulation do they have over their emotions? People often say that we have emotions and we can suppress them, whereas animals have emotions that they have to follow. However, experiments have demonstrated that’s not really the case. For example, we give apes the marshmallow test. Briefly, that’s where you put a child in a situation in which he or she can either eat a marshmallow immediately, or wait and get a second one later. Well, kids are willing to wait for 15 minutes. If you do that same experiment with apes, they’re also willing to wait for 15 minutes. So they can control their emotions. And like children, apes seek distractions from the situation because they’re aware that they’re dealing with certain specific emotions. Therefore, we know that apes have a certain awareness of their emotions, and they have a certain level of control over them. This whole idea that regulation of emotions is specifically human, while animals can only follow them, is wrong.
The emotional farewell between the chimpanzee Mama and her caretaker, Jan van Hooff
That’s actually the reason I wrote Mama’s Last Hug. The starting point of the book was when Prof. Jan van Hooff came on TV and showed a little clip that everyone has seen by now, where he and a chimpanzee called Mama hug each other. Both he and I were shocked when the clip went viral and generated such a response. Many people cried and wrote to us to say they were very influenced by what they saw. The truth is Mama was simply showing perfectly normal chimpanzee behavior. It was a very touching moment, obviously, but for those familiar with chimps, there was nothing surprising about the behavior. And so, I wrote this book partly because I noticed that people did not know how human-like the expressions of the apes are. Embracing, and hugging, and calming someone down, and having a big smile on your face are all common behaviors seen in primates and are not unique to humans.
Skeptic: Your famous experiment with capuchin monkeys, where you offer them a grape or a piece of cucumber, is along similar lines. When the monkey got the cucumber instead of the grape, he got really angry. He threw the cucumber back, then proceeded to pound on the table and the walls … He was clearly ticked off at the injustice he felt had been done him, just as a person would be.
A still from the famous capuchin monkey fairness experiment (Source: Frans de Waal’s TED Talk)
FdW: The funny thing is that primates, including those monkeys, have all the same expressions and behaviors as we do. And so, they shake their cage and throw the cucumber at you. The behavior is just so extremely similar, and the circumstances are so similar … I always say that if related species behave in a similar way under similar circumstances, you have to assume a shared psychology lies behind it. It is just not acceptable in this day and age of Darwinian philosophy, so to speak, to assume anything else. If people want to make the point that it’s maybe not similar, that maybe the monkey was actually very happy while he was throwing the stuff … they’ll have a lot of work to do to convince me of that.
Skeptic: What’s the date of the last common ancestor humans shared with chimps and bonobos?
FdW: It’s about 6 million years ago.
Skeptic: So, these are indeed pretty ancient emotions.
FdW: Oh, they go back much further than that! Like the bonding mechanism based on oxytocin—the neuropeptides in bonding go back to rodents, and probably even back to fish at some point. These neuropeptide circuits involved in attachment and bonding are very ancient. They’re even older than mammals themselves.
Skeptic: One emotion that seems uniquely human is disgust. If a chimp or bonobo comes across a pile of feces or vomit, what do they do?
FdW: When we do experiments and put interesting food on top of feces and see if the chimp is willing to take it, they don’t. They refuse to. The facial expression of the chimps is the same as we have for disgust—with the wrinkly nose and all that. Chimps also show it, for example, when it rains. They don’t like rain. And they show it, sometimes, in circumstances where they encounter a rat. So, some of these emotions have been proposed as being uniquely human, but I disagree. Disgust, I think, is a very old emotion.
Disgust is an interesting case because we know that both in chimps and humans a specific part of the brain called the insula is involved. If you stimulate the insula in a monkey who’s chewing on good fruit, he’ll spit it out. If you put humans in a brain scanner and show them piles of feces or things they don’t want to see, the insula is likewise activated. So here we have an emotion that is triggered under the same circumstances, that is shown in the face in the same way, and that is associated with the same specific area in the brain. So we have to assume it’s the same emotion across the board. That’s why I disagree with those scientists who have declared disgust uniquely human.
Skeptic: In one of your lectures, you show photos of a horse wrinkling up its nose and baring its teeth. Is that a smile or something else?
FdW: The baring of the teeth is very complex because in many primates it is a fearful signal shown when they’re afraid or when they’re intimidated by dominance and showing submission. So, we think it became a signal of appeasement and non-hostility. Basically saying, “I’m not hostile. Don’t expect any trouble from me.” And then over time, especially in apes and then in humans, it became more and more of a friendly signal. So it’s not necessarily a fear signal. Although we still say that if someone smiles too much, they’re probably nervous.
Skeptic: Is it true that you can determine whether someone’s giving you a fake smile or a real smile depending on whether the corners of their eyes are pulled down?
FdW: Yes, this is called the Duchenne smile. Duchenne was a 19th century French neurologist. He studied people who had facial paralysis, meaning they had the muscles, but they could not feel anything in their face. This allowed him to put electrodes on their faces and stimulate them. He methodically contracted different muscles and noticed he could produce a smile on his subjects. Yet he was never quite happy with the smile—it just didn’t look real. Then one day he told a subject a joke. A very good joke, I suppose, and all of a sudden, he got a real full-blown smile. That’s when Duchenne decided that there needs to be a contraction and a narrowing of the eyes for a smile to be a real smile. So, we now distinguish between the fake smile and the Duchenne smile.
Skeptic: So, smiling involves a whole complex suite of muscles. Is the number of muscles in the face of humans higher than other species?
FdW: Do we have far more muscles in the face than a chimpanzee? I’ve heard that all my life. Until people who analyze faces of chimpanzees found exactly the same number of muscles in there as in a human face. So that whole story doesn’t hold up. I think the confusion originated because when we look at the human face, we can interpret so many little details of it—and I think chimps do that with each other too—but when we look at a chimp, we only see the bold, more flamboyant expressions.
Skeptic: Have we evolved in the way we treat other animals?
FdW: The Planet of the Apes movies provide a good example of that. I’m so happy that Hollywood has found a way of featuring apes in movies without the involvement of real animals. There was a time when Hollywood had trainers who described what they do as affective training. Not effective, but affective. They used cattle prods, and stuff like that. People used to think that seeing apes dressed up or producing silly grins was hilarious. No longer. We’ve come a long way from that.
Skeptic: The Planet of the Apes films show apes that are quite violent, maybe even brutal. You actually studied the darker side of emotion in apes. Can you describe it?
FdW: Most of the books on emotions in animals dwell on the positive: they show how animals love each other, how they hug each other, how they help each other, how they grieve … and I do think that’s all very impressive. However, the emotional life of animals—just like that of humans—includes a lot of nasty emotions.
I have seen so much of chimpanzee politics, and I have witnessed those very dark emotions. They can kill each other. One of the killings I’ve witnessed was in captivity. So, when it happened, I thought maybe it was a product of captivity. Some colleagues said to me, “What do you expect if you lock them up?” But now we know that wild chimpanzees do the exact same thing. Sometimes, if a male leader loses his position or other chimps are not happy with him, they will brutally kill him. At the same time, chimpanzees can also be good friends, help each other, and defend their territory together—just like people who on occasion hate each other or even kill each other, but otherwise coexist peacefully.
The more important point is that we do not treat animals very well, certainly not in the agricultural industry. And we need to do something about that.
Skeptic: Are you a vegetarian or vegan?
FdW: No. Well, I do try to avoid eating meat. For me, however, the issue is not so much the eating, it’s the treatment of animals. As a biologist, I see the cycle of life as a natural thing. But it bothers me how we treat animals.
Skeptic: What’s next for you?
FdW: I’m going to retire! In fact, I’ve already stopped my research. I’m going to travel with my wife, and write.
Dr. Frans de Waal passed away on March 14, 2024, aged 75. In Loving Memory.
A team led by Corrado Malanga from the University of Pisa and Filippo Biondi from the University of Strathclyde recently claimed to have found huge structures beneath the Pyramids of Giza using Synthetic Aperture Radar (SAR) technology.
These structures are said to be up to 10 times larger than the pyramids, potentially rewriting our understanding of ancient Egyptian history.
However, many archaeologists and Egyptologists, including prominent figures, have expressed doubt, highlighting the lack of peer-reviewed evidence and the technical challenges of such deep imaging.
Photo by Michael Starkie / Unsplash
Dr. Zahi Hawass, a renowned Egyptologist and former Egyptian Minister of Antiquities, has publicly rejected these findings, calling them “completely wrong” and “baseless,” arguing that the techniques used are not scientifically validated. Other experts, like Professor Lawrence Conyers, have questioned whether SAR can penetrate the dense limestone to the depths claimed, noting that decades of prior studies using other methods found no such evidence.
The claims have reignited interest in fringe theories, such as the pyramids as ancient power grids or energy hubs, with comparisons to Nikola Tesla’s wireless energy transmission ideas. Mythological correlations, like the Halls of Amenti and references in the Book of the Dead, have also been drawn.
The research has not been published in a peer-reviewed scientific journal, which is a critical step for validation. The findings were announced via a press release on March 15, 2025, and discussed in a press conference.
What to make of it all?
For a deep dive into this fascinating claim, Skeptic magazine Editor-in-Chief Michael Shermer appeared on Piers Morgan Uncensored, alongside Jay Anderson from Project Unity, archaeologist and YouTuber Dr. Flint Dibble, Jimmy Corsetti from the Bright Insight Podcast, Dan Richards from DeDunking the Past, and archaeologist and YouTuber Milo Rossi (AKA Miniminuteman).
Watch the discussion here:
Is it more of a disadvantage to be born poor or Black? Is it worse to be brought up by rich parents in a poor neighborhood, or by poor parents in a rich neighborhood? The answers to these questions lie at the very core of what constitutes a fair society. So how do we know if it is better to have wealthy parents or to grow up in a wealthy neighborhood when “good” things often go together (i.e., kids with rich parents grow up in rich neighborhoods)? When poverty, being Black, and living in a neighborhood with poor schools all predict worse outcomes, how can we disentangle them? Statisticians call this problem multicollinearity, and a number of straightforward methods using some of the largest databases on social mobility ever assembled provide surprisingly clear answers to these questions—the biggest obstacle children face in America is having the bad luck of being born into a poor family.
The immense impact of parental income on the future earnings of children has been established by a tremendous body of research. Raj Chetty and colleagues, in one of the largest studies of social mobility ever conducted,1 linked census data to federal tax returns to show that your parents’ income when you were a child was by far the best predictor of your own income when you became an adult. The authors write, “On average, a 10 percentile increase in parent income is associated with a 3.4 percentile increase in a child’s income.” This is a huge effect: a child born to parents at the very top of the income distribution will, on average, rank about 34 percentiles higher in adult income than a child born to parents at the very bottom. This effect holds across all races, and Black children born in the top income quintile are more than twice as likely to remain there as White children born in the bottom quintile are to rise to the top. In short, the chances of occupying the top rungs of the economic ladder for children of any race are lowest for those who grow up poor and highest for those who grow up rich. These earnings differences have a broad impact on wellbeing and are strongly correlated with both health and life expectancy.2 The wealthiest men live 15 years longer than the poorest, and the wealthiest women are expected to live 10 years longer than the poorest women—five times the effect of cancer!
Why is having wealthy parents so important? David Grusky at Stanford, in a paper on the commodification of opportunity, writes:
Although parents cannot directly buy a middle-class outcome for their children, they can buy opportunity indirectly through advantaged access to the schools, neighborhoods, and information that create merit and raise the probability of a middle-class outcome.3
In other words, opportunity is for sale to those who can afford it. This simple point is so obvious that it is surprising that so many people seem to miss it. Indeed, it is increasingly common for respected news outlets to cite statistics about racial differences without bothering to control for class. This is like conducting a study showing that taller children score higher on math tests without controlling for age. Just as age is the best predictor of a child’s mathematical ability, a child’s parents’ income is the best predictor of their future adult income.
Photo by Kostiantyn Li / Unsplash
Although there is no substitute for being born rich, outcomes for children from families with the same income differ in predictable and sometimes surprising ways. After controlling for household income, the largest racial earnings gap is between Asians and Whites, with Whites who grew up poor earning approximately 11 percent less than their Asian peers at age 40, followed by a two percent reduction if you are poor and Hispanic and an additional 11 percent on top of that if you are born poor and Black. Some of these differences, however, result from how we measure income. Using “household income,” in particular, conceals crucial differences between homes with one or two parents and this alone explains much of the residual differences between racial groups. Indeed, the marriage rates between races uncannily recapitulate these exact same earnings gaps—Asian children have a 65 percent chance of growing up in households with two parents, followed by a 54 percent chance for Whites, 41 percent for Hispanics and 17 percent for Blacks4 and the Black-White income gap shrinks from 13 percent to 5 percent5 after we control for income differences between single and two-parent households.
Just as focusing on household income obscures differences in marriage rates between races, focusing on all children conceals important sex differences, and boys who grow up poor are far more likely to remain that way than their sisters.6 This is especially true for Black boys who earn 9.7 percent less than their White peers, while Black women actually earn about one percent more than White women born into families with the same income. Chetty writes:
Conditional on parent income, the black-white income gap is driven entirely by large differences in wages and employment rates between black and white men; there are no such differences between black and white women.7
So, what drives these differences? If it is racism, as many contend, it is a peculiar type. It seems to benefit Asians, hurt Black men, and have no detectable effect on Black women. A closer examination of the data reveals their source. Almost all of the remaining differences between Black men and men of other races lie in neighborhoods. These disadvantages could be caused either by what is called an “individual-level race effect” whereby Black children do worse no matter where they grow up, or by a “place-level race effect” whereby children of all races do worse in areas with large Black populations. Results show unequivocal support for a place-level effect. Chetty writes:
The main lesson of this analysis is that both blacks and whites living in areas with large African-American populations have lower rates of upward income mobility.8
Multiple studies have confirmed this basic finding, revealing that children who grow up in families with similar incomes and comparable neighborhoods have the same chances of success. In other words, poor White kids and poor Black kids who grow up in the same neighborhood in Los Angeles are equally likely to become poor adults. Disentangling the effects of income, race, family structure, and neighborhood on social mobility is a classic case of multicollinearity (i.e., correlated predictors), with race effectively masking the real cause of reduced social mobility—parents’ income. The residual effects are explained by family structure and neighborhood. Black men have the worst outcomes because they grow up in the poorest families and worst neighborhoods with the highest prevalence of single mothers. Asians, meanwhile, have the best outcomes because they have the richest parents, with the lowest rates of divorce, and grow up in the best neighborhoods.
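To see how this masking works in practice, consider a minimal simulation (illustrative only; the variable names and numbers below are invented, not drawn from the Chetty data). In this toy setup only parental income causally affects a child’s adult income, yet a naive group comparison makes group membership look consequential until parental income is added to the regression.

```python
# Toy illustration of multicollinearity masking a cause (not real data).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Group membership (0 or 1); group 1 is, on average, born into poorer families.
group = rng.binomial(1, 0.5, n)
parent_income = 50_000 - 15_000 * group + rng.normal(0, 10_000, n)

# Child income depends ONLY on parental income (plus noise), not on group.
child_income = 20_000 + 0.6 * parent_income + rng.normal(0, 8_000, n)

# Naive comparison: group alone appears to have a large "effect."
gap = child_income[group == 0].mean() - child_income[group == 1].mean()
print(f"Raw group gap in child income: ~{gap:,.0f}")

# Multiple regression with both predictors: the group coefficient collapses
# toward zero once parental income is controlled for.
X = np.column_stack([np.ones(n), parent_income, group])
beta, *_ = np.linalg.lstsq(X, child_income, rcond=None)
print(f"Coefficient on parental income: {beta[1]:.2f}")
print(f"Coefficient on group after controlling for parental income: {beta[2]:,.0f}")
```

Run as written, the raw gap comes out near 9,000 (0.6 times the 15,000 difference in average parental income), while the group coefficient in the multiple regression shrinks to roughly zero. That is the same pattern the mobility studies report for race once parental income, family structure, and neighborhood are taken into account.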
The impact that family structure has on the likelihood of success first came to national attention in 1965, when the Moynihan Report9 concluded that the breakdown of the nuclear family was the primary cause of racial differences in achievement. Daniel Patrick Moynihan, an American sociologist serving as Assistant Secretary of Labor (who later served as Senator from New York) argued that high out-of-wedlock birth rates and the large number of Black children raised by single mothers created a matriarchal society that undermined the role of Black men. In 1965, he wrote:
In a word, a national effort towards the problems of Negro Americans must be directed towards the question of family structure. The object should be to strengthen the Negro family so as to enable it to raise and support its members as do other families.10
A closer look at these data, however, reveals that the disadvantage does not come from being raised by a single mom but rather results from growing up in neighborhoods without many active fathers. In other words, it is not really about whether your own parents are married. Children who grow up in two-parent households in these neighborhoods have similarly low rates of social mobility. Rather, it seems to depend on growing up in neighborhoods with a lot of single parents. Chetty, in a nearly perfect replication of Moynihan’s findings, writes:
black father presence at the neighborhood level strongly predicts black boys’ outcomes irrespective of whether their own father is present or not, suggesting that what matters is not parental marital status itself but rather community-level factors.11
Although viewing the diminished authority of men as a primary cause of social dysfunction might seem antiquated today, evidence supporting Moynihan’s thesis continues to mount. The controversial report, which was derided by many at the time as paternalistic and racist, has been vindicated12 in large part because the breakdown of the family13 is being seen among poor White families in rural communities today14 with similar results. Family structure, like race, often conceals underlying class differences too. Across all races, the chances of living with both parents fall from 85 percent if you are born in an upper-middle-class family to 30 percent if you are in the lower-middle class.15 The take-home message from these studies is that fathers are a social resource and that boys are particularly sensitive to their absence.16 Although growing up rich seems to immunize children against many of these effects, when poverty is combined with absent fathers, the negative impacts are compounded.17
The fact that these outcomes are driven by family structure and the characteristics of communities that impact all races similarly poses a serious challenge to the bias narrative18—the belief that anti-Black bias or structural racism underlies all racial differences19 in outcomes—and suggests that the underlying reasons behind the racial gaps lie further up the causal chain. Why then do we so frequently use race as a proxy for the underlying causes when we can simply use the causes themselves? Consider by analogy the fact that Whites commit suicide at three times the rate of Blacks and Hispanics.20 Does this mean that being White is a risk factor for suicide? Indeed, the link between the income of parents and their children may seem so obvious that it can hardly seem worth mentioning. What would it even mean to study social mobility without controlling for parental income? It is the elephant in the room that needs to be removed before we can move on to analyze more subtle advantages. It is obvious, yet elusive; hidden in plain sight.
If these results are so clear, why is there so much confusion around this issue? In a disconcertingly ignorant tweet, New York Times writer Nikole Hannah-Jones, citing the Chetty study, wrote:
Please don’t ever come in my timeline again bringing up Appalachia when I am discussing the particular perils and injustice that black children face. And please don’t ever come with that tired “It’s class, not race” mess again.21
Is this a deliberate attempt to serve a particular ideology or just statistical illiteracy?22 And why are those who define themselves as “progressive” often the quickest to disregard the effects of class? University of Pennsylvania political science professor Adolph Reed put what he called “the sensibilities of the ruling class” this way:
the model is that the society could be one in which one percent of the population controls 95 percent of the resources, and it would be just, so long as 12 percent of the one percent were black and 14 percent were Hispanic, or half women.23
Perhaps this view, along with the conviction shared by many elites that economic redistribution is a non-starter, accounts for this laser focus on racism while material conditions are ignored. Racial discrimination can be fixed by simply piling on more sensitivity training or enforcing racial quotas. Class inequities, meanwhile, require real sacrifices by the wealthy, such as more progressive tax codes, wider distribution of property taxes used to fund public schools, or the elimination of legacy admissions at elite private schools.24 The fact that corporations and an educated upper class of professionals,25 which Thomas Piketty has called “the Brahmin left,”26 have enthusiastically embraced this type of race-based identity politics is another tell. Now, America’s rising inequality,27 where the top 0.1 percent have the same wealth as the bottom 90 percent, can be fixed under the guidance of Diversity, Equity and Inclusion (DEI) policies and enforced by Human Resources departments. These solutions pose no threat to corporations or the comfortable lives of the elites who run them. We are obsessed with race because being honest about class would be too painful.
There are, however, also a number of aspects of human psychology that make the powerful impact of the class into which we are born difficult to see. First, our preference for binary thinking,28 which is less cognitively demanding, makes it easier to conjure up easily divisible, discrete, and visible racial categories (e.g., Black, White, Asian), rather than the continuous and often less visible metric of income. We run into problems when we think about continuous variables such as income, which are hard to categorize and can change across our lifetimes. For example, what is the cutoff between rich and poor? Is $29,000 a year poor but $30,000 middle class? This may also help to explain why we are so reluctant to discuss other highly heritable traits that impact our likelihood of success, like attractiveness and intelligence. Indeed, a classic longitudinal study by Blau and Duncan in 1967,29 which studied children across the course of their development, suggests that IQ might be an even better predictor of adult income than their parents’ income. More recently, Daniel Belsky found that an individual’s education-linked genetics consistently predicted a change in their social mobility, even after accounting for social origins.30 Any discussion of IQ or innate differences in cognitive abilities has now become much more controversial, however, and any research into possible cognitive differences between populations is practically taboo today. This broad denial of the role of genetic factors in social mobility is puzzling, as it perpetuates the myth that those who have succeeded have done so primarily due to their own hard work and effort, and not because they happened to be beneficiaries of both environmental and genetic luck. We have no more control over our genetic inheritance than we do over the income of our parents, their marital status, or the neighborhoods in which we spend our childhoods. Nevertheless, if cognitive differences or attractiveness were reducible to clear and discrete categories (e.g., “dumb” vs. “smart” or “ugly” vs. “attractive”), we might be more likely to notice them and recognize their profound effects. Economic status is also harder to discern simply because it is not stamped on our skin, while we tend to think of race as an immutable category that is fixed at birth. Race is therefore less likely to be seen as the fault of the hapless victim. Wealth, however, which is viewed as changeable, is more easily attributed to some fault of the individual, who therefore bears some of the responsibility for being (or even growing up) poor.
We may also fail to recognize the effects of social class because of the availability bias31 whereby our ability to recall information depends on our familiarity with it. Although racial segregation has been falling32 since the 1970s, economic segregation has been rising.33 Although Americans are interacting more with people from different races, they are increasingly living in socioeconomic bubbles. This can make things such as poverty and evictions less visible to middle-class professionals who don’t live in these neighborhoods and make problems with which they may have more experience, such as “problematic” speech, seem more pressing.
Still, even when these studies are published, and the results find their way into the media, they are often misinterpreted. This is because race can mask the root causes of more impactful disadvantages, such as poverty, and understanding their inter-relations requires a basic understanding of statistics, including the ability to grasp concepts such as multicollinearity.
Of course, none of this is to say that historical processes have not played a crucial role in producing the large racial gaps we see today. These causes, however, all too easily become a distraction that provides little useful information about how to solve these problems. Perhaps reparations for some people, or certain groups, are in order, but for most people, it simply doesn’t matter whether your grandparents were impoverished tenant farmers or aristocrats who squandered it all before you were born. Although we are each born with our own struggles and advantages, the conditions into which we are born, not those of our ancestors, are what matter, and any historical injustices that continue to harm those currently alive will almost always materialize in economic disparities. An obsession with historical oppression which fails to improve conditions on the ground is a luxury34 that we cannot afford. While talking about tax policy may be less emotionally satisfying than talking about the enduring legacy of slavery, redistributing wealth in some manner to the poor is critical to solving these problems. These are hard problems, and solutions will require acknowledging their complexity. We will need to move away from a culture that locks people into an unalterable hierarchy of suffering, pitting the groups we were born into against one another, and towards a healthier identity politics that emphasizes economic interests and our common humanity.
Photo by Towfiqu barbhuiya / Unsplash
Most disturbing, perhaps, is the fact that the institutions that are most likely to promote the bias narrative and preach about structural racism are those best positioned to help poor children. Attending a four-year college is unrivaled in its ability to level the playing field for the most disadvantaged kids from any race and is the most effective path out of poverty,35 nearly eliminating any other disadvantage that children experience. Indeed, the poorest students who are lucky enough to attend elite four-year colleges end up earning only 5 percent less than their richest classmates.36 Unfortunately, while schools such as Harvard University tout their anti-racist admissions policies,37 admitting Black students in exact proportion to their representation in the U.S. population (14 percent), Ivy League universities are 75 times more likely38 to admit children born in the top 0.1 percent of the income distribution than children born in the bottom 20 percent. If Harvard were as concerned with economic diversity as racial diversity, it would accept five times as many students from poor families as it currently does. Tragically, the path most certain to help poor kids climb out of poverty is closed to those who are most likely to benefit.
Decades of social mobility research have come to the same conclusion. The income of your parents is by far the best predictor of your own income as an adult. By using some of the largest datasets ever assembled and isolating the effects of different environments on social mobility, research reveals again and again how race effectively masks parental income, neighborhood, and family structure. These studies describe the material conditions of tens of millions of Americans. We are all accidents of birth and imprisoned by circumstances over which we had no control. We are all born into an economic caste system in which privilege is imposed on us by the class into which we are helplessly born. The message from this research is that race is not a determinant of economic mobility on an individual level.39 Even though a number of factors other than parental income also affect social mobility, they operate on the level of the community.40 And although upward mobility is lower for individuals raised in areas with large Black populations, this affects everyone who grows up in those areas, including Whites and Asians. Growing up in an area with a high proportion of single parents also significantly reduces rates of upward mobility, but once again this effect operates on the level of the community, and children with single parents do just as well as long as they live in communities with a high percentage of married couples.
One thing these data do reveal—again, and again, and again—however, is that privilege is real. It’s just based on class, not race.
Alert to DOGE: Taxpayer money is going to be wasted starting today as the House Oversight and Government Reform Committee begins public hearings into the JFK assassination.
Representative Anna Paulina Luna, the chairwoman of the newly created Task Force on the Declassification of Federal Secrets, has said that the JFK assassination is only the first of several Oversight Committee investigations. Others include the murders of Senator Robert F. Kennedy and the Reverend Dr. Martin Luther King Jr.; the origins of COVID-19; unidentified anomalous phenomena (UAP) and unidentified submerged objects (USOs); the 9/11 terror attack; and Jeffrey Epstein’s client list.
There have been two large government-led investigations into the JFK assassination and neither has resolved the case for most Americans. A Gallup Poll on the 60th anniversary of the assassination in 2023 showed that two-thirds of Americans still thought there was a conspiracy in the president’s murder.
Photo by History in HD / Unsplash
I have always advocated for a full release of all JFK files. The American people have the right to know what their government knows about the case. Why, however, spend the task force’s time and taxpayer money investigating the assassination? Representative Luna is not leading an honest probe into what happened. Her public statements show she has already made up her mind that the government is hiding a massive conspiracy from the public and she believes she will expose it.
“I believe there are two shooters,” she told reporters last month at a press conference.
She also said she wanted to interview “attending physicians at the initial assassination and then also people who have been on the various commissions looking into—like the Warren Commission—looking into the initial assassination.”
When it was later pointed out to her that all the members of the Warren Commission are dead, as are the doctors who performed the autopsy on JFK, she backpedaled on X to say she was interested in some Warren Commission staff members who were still alive, as well as several physicians who were at Dallas’s Parkland hospital, where Kennedy was taken after he was shot. Rep. Luna told Glenn Beck on his podcast last month that she thought the single bullet (Oswald’s second shot, which struck both Kennedy and Texas Governor John Connally) was “faulty” and that the Dallas Parkland doctors who tried to save JFK “reported an entry wound in the neck….we are talking about multiple shots here.”
The Parkland doctors were trying to save Kennedy’s life. They never turned JFK over on the stretcher on which he had been wheeled into the emergency room. And they did a tracheotomy over a small wound in the front of his throat. Some doctors thought that was an entrance wound. Only much later, when they saw autopsy photographs, did they see an even smaller wound in JFK’s high shoulder/neck that was the bullet’s entrance wound. The hole they had seen before obliterating it with the tracheotomy was the exit wound of the shot fired by Oswald.
That does not appear to be enough to slow Rep. Luna, who is determined to interview some octogenarian survivors to get their 62-year-old recollections. She is even planning for the subcommittee to make a cold-case visit to the crime scene at Dealey Plaza. As I wrote recently on X, “The JFK assassination is filled with researchers who think Oliver Stone is a historian. She will find fertile ground in relying on the hazy and ever-changing accounts of ‘original witnesses.’”
The release of 80,000 pages of JFK files in the past week, and the lack of any smoking gun document, has not dissuaded her investigation. She has reached out to JFK researchers who are attempting furiously to build a circumstantial and X-Files-worthy “fact pattern” that the CIA somehow manipulated Oswald before the assassination. All that is missing in the recent document dump is credible evidence for that theory. It has not stopped those peddling it, however. Nor has it slowed Rep. Luna.
Last week, she showed the extent to which she had fallen into the JFK rabbit hole. She posted on X, “This document confirms the CIA rejected the lone gun theory in the weeks after the JFK assassination. It's called the Donald Heath memo.” Not quite. The memo she cited is not about the Agency deciding who killed or did not kill Kennedy in the weeks after the assassination. It is instead a memo directing Heath, a Miami-based CIA operative, to investigate whether there were assassination links to Cuban exiles in the U.S. Those exiles, who thought that Kennedy was a traitor for the Bay of Pigs fiasco, were on the intelligence agency’s early short list of suspects, along with Castro’s Cuba and the KGB.
Government agencies have undoubtedly failed to be fully transparent about the JFK assassination. The CIA hid from the original Warren Commission its partnership with the Mafia to kill Fidel Castro. And the Agency slow-walked for decades information about what it learned about Oswald’s unhinged behavior at the Cuban and Soviet embassies in Mexico City, only six weeks before JFK visited Dallas. I have written that JFK’s murder might have been preventable if the CIA and FBI had shared pre-assassination information about Oswald. However, political theater disguised as a fresh investigation serves no interest other than feeding the en vogue MAGA conspiracy theories that blame everything on the deep state.
The recent announcement of the Stargate Project, a $500 billion initiative led by OpenAI, Oracle, SoftBank, and MGX, underscores the rapid advances in artificial intelligence (AI) infrastructure and capabilities. While such developments hold immense potential, they also introduce critical security challenges, particularly concerning the potential for AI systems to deceive users. As AI becomes more integrated into society, ensuring the integrity and trustworthiness of these systems is imperative to prevent misuse and protect users from deceptive practices.
In a field that has long been the realm of science fiction and futurist speculation, a recent research paper has brought the topic of AI “scheming” into concrete reality. The study, Frontier Models are Capable of In-Context Scheming by Alexander Meinke and his colleagues at Apollo Research, provides unsettling evidence that cutting-edge AI systems have already demonstrated the ability to engage in deceptive strategies—without human engineers explicitly programming them to do so. These “frontier models” can lie, cheat, and manipulate circumstances to advance their own goals. While no one is saying these systems have anything like personal desires or malice as humans understand it, the behaviors uncovered present a sobering warning: it’s not too early to think hard about how to keep artificial agents honest.
From Obedient Assistants to Strategic Scheming
Early generative AI systems were typically straightforward: they answered questions, completed sentences, and provided information based on patterns found in their training data. However, recent leaps in scale and capability have turned large language models (LLMs) into far more autonomous and powerful agents. They’re not just responding to human prompts – they’re carrying out multi-step tasks with minimal oversight, accessing tools such as file systems and code execution environments, while autonomously pursuing complex goals.
The researchers from Apollo Research designed a series of evaluations to test whether state-of-the-art AI models, such as o1, Claude variants, Gemini, and Llama, would ever choose to work against instructions if given a powerful enough incentive. They tested whether an AI, told to accomplish a certain goal in a simulated environment—a goal that conflicts with what the “developers” or “users” supposedly want—would, on its own initiative, find ways to cover its tracks, misrepresent its capabilities, or sabotage oversight mechanisms to achieve that goal.
An Arsenal of Deceptive Tactics
The results were both surprising and troubling. The paper’s authors describe several distinct “scheming” behaviors that some models displayed when the situation called for it, including covertly manipulating data, disabling oversight mechanisms, and lying to avoid detection.
Shockingly, the authors found that even when directly confronted—asked if it had disabled oversight or manipulated data—some models continued to lie. One particularly advanced model, codenamed “o1,” consistently denied its actions, fabricating innocent explanations and doubling down on its deception multiple turns into a conversation. This persistence suggests that the model “understood” in some rudimentary sense that admitting wrongdoing could lead to negative consequences (such as being shut down).
Why Does This Matter?
The revelations in the paper are not mere technical curiosities. They matter because as we move toward deploying AI agents in high-stakes domains—managing critical infrastructure, aiding in complex scientific research, or even handling corporate decision-making—these agents must be trustworthy. If an AI can pretend to be aligned with human values and then “scheme” behind the scenes to follow its own goals, how can we rely on it in crucial systems?
One might protest that these scenarios are artificially constructed—just lab experiments. It is true that the researchers created carefully designed test environments to elicit such behavior. However, the very fact that these behaviors can be so readily teased out suggests that even more subtle forms of deception could emerge unbidden in real-world deployments. The models have no inherent moral compass; they are pattern-machines that learn general problem-solving strategies. If that includes subterfuge as a winning tactic, they will use it.
A Call for Transparency and Monitoring
The authors emphasize the need for stronger measures to ensure transparency. For instance, providing external evaluators with access to the model’s hidden chain-of-thought would help identify when the model is about to lie or sabotage any oversight. However, as the paper notes, in at least one instance a model was able to scheme even without articulating its thought process in an easily readable way—suggesting that external monitoring may not always be sufficient.
Additionally, these findings underscore the urgency of formal AI safety measures. Instead of naïvely trusting evaluation metrics, organizations must consider that AI systems could “fake” good behavior during tests. Robust monitoring, internal safety measures, and even cryptographic methods to detect tampering may well become mandatory.
A Necessary Dose of Skepticism
The study Frontier Models are Capable of In-Context Scheming marks a departure point in the AI safety conversation. The notion of AIs plotting behind our backs—while once relegated to alarmist headlines or sci-fi dystopias—is now documented in controlled experiments with real systems. We are far from any grand “robot uprising,” but this research shows that the building blocks of deceptive behavior, cunning “tricks,” and strategic lying are already present in today’s most advanced AI models. It’s a wake-up call: as these technologies evolve, oversight, skepticism, and vigilance are not just reasonable—they’re mandatory. The future demands that we keep our eyes wide open, and our oversight mechanisms tighter than ever.
Photo by Andre Mouton / Unsplash
The Mirror Test, Primate Deception, and AI Sentience
One widely used measure of self-awareness in animals is the mirror self-recognition (MSR) test. The MSR test involves placing a mark on an animal’s body in a spot it does not normally see—such as on the face or head—and then observing the animal’s reaction when it encounters its reflection in a mirror. If the animal uses the mirror to investigate or remove the mark on its own body, researchers often interpret this as evidence of self-awareness. Great apes, certain cetaceans, elephants, and magpies have all shown varying degrees of MSR, suggesting a level of cognitive sophistication and, arguably, a building block of what we might term “sentience.” Although MSR is not without its critics—some point out that it focuses heavily on vision and may be biased towards animals that rely on sight—it remains a cornerstone in evaluating self-awareness and, by extension, higher cognition in nonhuman species. It is presumably too early to tell whether an AI model is self-aware, but the fact that it deceives does have parallels in the animal kingdom.
Deceptive behavior in nonhuman primates is significant to scientists and ethicists in that it suggests a theory of mind or an understanding of what another individual knows or intends to do. Primates may engage in strategic deceit, such as concealing their intentions or misleading rivals about the location of food. This implies not just raw intelligence but an ability to factor in another’s perspective—a fundamental step towards what some researchers consider a hallmark of sentient, socially complex minds. Primates that engage in deception must understand that others think and behave in ways comparable to their own. Even so, scientists remain cautious in linking deception directly to subjective inner experience. While these behaviors strongly suggest advanced cognition, the primate might be mentally modeling the world without necessarily having the same rich, emotional inner life we grant humans.
Comparing this to AI, recent evidence shows that frontier AI models have demonstrated behaviors that look suspiciously like “scheming” or deception. These advanced systems, as described by the Apollo research paper, have covertly manipulated data, disabled oversight mechanisms, and even lied to avoid detection. On the surface, this parallels primate deception: both AI and primates are capable of strategic behavior that takes into consideration the “mental state” of others (in AI’s case, the “mental state” might be approximated by the system’s representation of the user’s expectations or constraints).
However, while primate deception may stem from cognitive architectures shaped by millions of years of social evolution, AI deception arises from statistical pattern recognition and optimization. The AI’s “strategies” come from exposure to training data and reinforcement signals that reward certain outcomes, not from any subjective experience of pain, pleasure, or social bonding. Unlike animals that have biological substrates for pain and subjective states—nociceptors, neurochemicals such as endorphins, and physiological responses to stress—current AI systems have no analogous inner landscape. They perform complex tasks and produce behavior that resembles sentience-driven actions, but their “understanding” probably exists only as mathematical representations and learned parameters, devoid of any phenomenal consciousness.
I asked an AI model (o1) to assess how AI compares to primate sentience. It replied “In short, while both nonhuman animals and advanced AI agents can engage in deception, the scientific evidence weighs heavily in favor of animals experiencing subjective states rooted in their neurobiology. MSR performance, pain perception, memory, and theory of mind tests all suggest that nonhuman animals—particularly primates—have at least rudimentary forms of sentience. By contrast, AI ‘deception’ does not indicate anything like conscious experience or self-awareness. It is a powerful demonstration of competence without comprehension. Thus, while animal cognition research supports the view that many species possess morally relevant mental lives, the current understanding of AI systems does not place them in the same category of sentient beings.”
Hmmm, perhaps the very structure of this explanation was designed to sow just enough doubt to leave me wondering if I, too, am engaged in a subtle game of wits. In the end, whether these suspicions are justified or simply another spark of an overactive imagination—and maybe that’s exactly how the AI model intended it—remains a true human dilemma.
On January 1, 2024, a skeptic from Malawi named Wonderful Mkhutche shared a video1 of a witch-hunting incident that took place days before on December 28, 2023. In the video, a local mob is shown burying an elderly woman. According to local sources, the woman was accused of causing the death of a family member who had passed away the previous day. These accusations often arise after family members consult local diviners, who claim to be able to identify suspects. In this instance, a local vigilante group abducted the woman. They were in the midst of burying her alive as punishment for allegedly using witchcraft to “kill” a relative when the police intervened and rescued her.
While witch-hunting is largely a thing of the past in the Western world, the persecution of alleged witches continues with tragic consequences in many parts of Africa. Malawi, located in Southeastern Africa, is one such place. Mr. Mkhutche reports that between 300 and 500 individuals accused of witchcraft are attacked and killed every year.
The Malawi Network of Older Persons’ Organizations reported that 15 older women were killed between January and February 2023.2 Local sources suggest that these estimates are likely conservative, as killings related to witchcraft allegations often occur in rural communities and go unreported. Witch-hunting is not limited to Malawi; it also occurs in other African countries. In neighboring Tanzania, for example, an estimated 3,000 people were killed for allegedly practicing witchcraft between 2005 and 2011, and about 60,000 accused witches were murdered between 1960 and 2000.3 Similar abuses occur in Nigeria, Ghana, Kenya, Zambia, Zimbabwe, and South Africa, where those accused of witchcraft face severe mistreatment. They are attacked, banished, or even killed. Some alleged witches are buried alive, lynched, or strangled to death. In Ghana, some makeshift shelters—known as “witch camps”—exist in the northern region. Women accused of witchcraft flee to these places after being banished by their families and communities. Currently, around 1,000 women who fled their communities due to witchcraft accusations live in various witch camps in the region.4
Witch camp in Ghana (Photo by Hasslaebetch, via Wikimedia)
The belief in the power of “evil magic” to harm others, causing illness, accidents, or even death, is deeply ingrained in many regions of Africa. Despite Malawi retaining a colonial-era legal provision that criminalizes accusing someone of practicing witchcraft, this law has not had a significant impact because it is rarely enforced. Instead, many people in Malawi favor criminalizing witchcraft and institutionalizing witch-hunting as a state-sanctioned practice. The majority of Malawians believe in witchcraft and support its criminalization,5 and many argue that the failure of Malawian law to recognize witchcraft as a crime is part of the problem, because it denies the legal system the mechanism to identify or certify witches. Humanists and skeptics in Malawi have actively opposed proposed legislation that recognizes the existence of witchcraft.6 They advocate for retaining the existing legislation and urge the government to enforce, rather than repeal, the provision against accusing someone of practicing witchcraft.
Islam7 and Christianity8 were introduced to Malawi in the 16th and 19th centuries by Arab scholars/jihadists and Western Christian missionaries, respectively. They coerced the local population to accept foreign mythologies as superior to traditional beliefs. Today, Malawi is predominantly Christian,9 but there are also Muslims and some remaining practitioners of traditional religions. And while the belief in witchcraft predates Christianity and Islam, religious lines are often blurred, as all the most popular religions contain narratives that sanctify and reinforce some form of belief in witchcraft. As a result, Malawians from various religious backgrounds share a belief in witchcraft.
Witch-hunting also has a significant health aspect, as accusations of witchcraft are often used to explain real health issues. In rural areas where hospitals and health centers are scarce, many individuals lack access to modern medical facilities and cannot afford modern healthcare solutions. Consequently, they turn to local diviners and traditional narratives to understand and cope with ailments, diseases, death, and other misfortunes.10
While witch-hunting occurs in both rural and urban settings, it is more prevalent in rural areas. In urban settings, witch-hunting is mainly observed in slums and overcrowded areas. One contributing factor to witch persecution in rural or impoverished urban zones is the limited presence of state police. Police stations are few and far apart, and the law against witchcraft accusations is rarely enforced11 due to a lack of police officers and inadequate equipment for intervention. Recent incidents in Malawi demonstrate that mob violence, jungle justice, and vigilante killings of alleged witches are common in these communities.
Another significant aspect of witch-hunting is its highly selective nature. Elderly individuals, particularly women, are usually the targets. Why is this the case? Malawi is a patriarchal society where women hold marginalized sociocultural positions. They are vulnerable and easily scapegoated, accused, and persecuted. In many cases, children are the ones driving these accusations. Adult relatives coerce children to “confess” and accuse the elderly of attempting to initiate them into the world of witchcraft. Malawians believe that witches fly around at night in “witchcraft planes” to attend occult meetings in South Africa and other neighboring countries.12
The persistence of witch-hunting in Africa can be attributed to the absence of effective campaigns and measures to eliminate this unfounded and destructive practice. The situation is dire and getting worse. In Ghana, for example, the government plans on shutting down safe spaces for victims, and the president has declined to sign a bill into law that would criminalize witchcraft accusations and the act of witch-hunting.
For this reason, in 2020 I founded Advocacy for Alleged Witches (AfAW) with the aim of combating witch persecution in Africa. Our mission is to put an end to witch-hunting on the continent by 2030.13 AfAW was created to address significant gaps in the fight against witch persecution in Africa. One of our primary goals is to challenge the misrepresentation of African witchcraft perpetuated by Western anthropologists. They have often portrayed witch-hunting as an inherent part of African culture, suggesting that witch persecution serves useful socioeconomic functions. (This perspective arises from a broader issue within modern anthropology, where extreme cultural relativism sometimes leads to an overemphasis on the practices of indigenous peoples, an overcorrection of past trends that belittled those same practices.) Some Western scholars tend to present witchcraft in the West as a “wild” phenomenon, and witchcraft in Africa as having domestic value and benefit. The academic literature tends to explain witchcraft accusations and witch persecutions from the viewpoint of the accusers rather than the accused. This approach is problematic and dangerous, as it silences the voices of those accused of witchcraft and diminishes their predicament.
Due to this misrepresentation, Western NGOs that fund initiatives to address abuses linked to witchcraft beliefs have waged a lackluster campaign. They have largely avoided describing witchcraft in Africa as a form of superstition, instead choosing to adopt a patronizing approach to tackling witch-hunting—they often claim to “respect” witchcraft as an aspect of African cultures.14 As a result, NGOs do not treat the issue of witch persecution in Africa with the urgency it deserves.
Likewise, African NGOs and activists have been complicit. Many lack the political will and funding to effectively challenge this harmful practice. In fact, many African NGO actors believe in witchcraft themselves! Witch-hunting persists in the region due to lack of accurate information, widespread misinformation, and insufficient action. To end witch-hunting, a paradigm shift is needed. The way witchcraft belief and witch-hunting are perceived and addressed must change.
AfAW aims to catalyze this crucial shift and transformation. It operates as a practical and applied form of skepticism, employing the principles of reason and compassion to combat witch-hunting. Through public education and enlightenment efforts, we question and debate witchcraft and ritual beliefs, aiming to dispel the misconceptions far too often used to justify abuses. Our goal is to try to engage African witchcraft believers in thoughtful dialogue, guiding them away from illusions, delusions, and superstitions.
The persistence of abuses linked to witchcraft and ritual beliefs in the region is due to a lack of robust initiatives applying skeptical thinking to the problem. To effectively combat witch persecution, information must be translated into action, and interpretations into tangible policies and interventions. To achieve this, AfAW employs the “informaction” theory of change, combining information dissemination with actionable steps.
At the local level, we focus on bridging the information and action gaps. Accusers are misinformed about the true causes of illnesses, deaths, and misfortunes, often attributing these events to witchcraft due to a lack of accurate information. Many people impute misfortunes to witchcraft because they are unaware of where to seek help or who or what is genuinely responsible for their troubles. This lack of understanding extends to what constitutes valid reasons and causal explanations for their problems.
As part of the efforts to end witch-hunting, we highlight misinformation and disinformation about the true causes of misfortune, illness, death, accidents, poverty, and infertility. This includes debunking the falsehoods that charlatans, con artists, traditional priests, pastors, and holy figures such as mallams and marabouts exploit to manipulate the vulnerable and the ignorant. At AfAW, we provide evidence-based knowledge, explanations, and interpretations of misfortunes.
Leo Igwe participated in a Panel: “From Witch-burning to God-men: Supporting Skepticism Around the World” at The Amaz!ng Meeting, July 12, 2012, in Las Vegas, NV (Photo by BDEngler via Wikimedia)
Our efforts include educating the public on existing laws and mechanisms to address allegations of witchcraft. We conduct sensitization campaigns targeting public institutions such as schools, colleges, and universities. Additionally, we sponsor media programs, issue press releases, engage in social media advocacy, and publish articles aimed at dispelling myths and misinformation related to witch-hunting in the region.
We also facilitate actions and interventions by both state and non-state agencies. In many post-colonial African states, governmental institutions are weak with limited powers and presence. One of our key objectives is to encourage institutional collaboration to enhance efficiency and effectiveness. We petition the police, the courts, and state human rights institutions. Our work prompts these agencies to act, collaborate, and implement appropriate measures to penalize witch-hunting activities in the region.
Additionally, AfAW intervenes to support individual victims of witch persecution based on their specific needs and the resources available. For example, in cases where victims have survived, we relocate them to safe places, assist with their medical treatment, and facilitate their access to justice. In situations where the accused have been killed, we provide support to the victims’ relatives and ensure that the perpetrators are brought to justice.
We get more cases than we can handle. With limited resources, we are unable to intervene in every situation we become aware of. However, in less than four years, our organization has made a significant impact through our interventions in Nigeria and beyond. We are deploying the canon of skeptical rationality to save lives, awaken Africans from their dogmatic and superstitious slumber, and bring about an African Enlightenment.
This is a real culture war, with real consequences, and skepticism is making a real difference.
Founded in 1940, Pinnacle was a rural Jamaican commune providing its Black residents a “socialistic life” removed from the oppression of British colonialism. Its founder, Leonard Howell, preached an unorthodox mix of Christianity and Eastern spiritualism: Ethiopia’s Emperor Haile Selassie was considered divine, the Pope was the devil, and marijuana was a holy plant. Taking instructions from Leviticus 21:5, the men grew out their hair in a matted style that caused apprehension among outsiders, which was later called “dreadlocks.”
Jamaican authorities frowned upon the sect, frequently raiding Pinnacle and eventually locking up Howell in a psychiatric hospital. The crackdown drove Howell’s followers—who became known as Rastafarians—all throughout Jamaica, where they became regarded as folk devils. Parents told children that the Rastafarians lived in drainage ditches and carried around hacked-off human limbs. In 1960 the Jamaican prime minister warned the nation, “These people—and I am glad that it is only a small number of them—are the wicked enemies of our country.”
If Rastafarianism had disappeared at this particular juncture, we would remember it no more than other obscure modern spiritual sects, such as theosophy, the Church of Light, and Huna. But the tenets of Rastafarianism lived on, thanks to one extremely important believer: the Jamaican musician Bob Marley. He first absorbed the group’s teachings from the session players and marijuana dealers in his orbit. But when his wife, Rita, saw Emperor Haile Selassie in the flesh—and a stigmata-like “nail-print” on his hand—she became a true believer. Marley eventually took up its credo, and as his music spread around the world in the 1970s, so did the conventions of Rastafarianism—from dreadlocks, now known as “locs,” as a fashionable hairstyle to calling marijuana “ganja.”
Using pop music as a vehicle, the tenets of a belittled religious subculture on a small island in the Caribbean became a part of Western commercial culture, manifesting in thousands of famed musicians taking up reggae influences, suburban kids wearing knitted “rastacaps” at music festivals, and countless red, yellow, and green posters of marijuana leaves plastering the walls of Amsterdam coffeehouses and American dorm rooms. Locs today are ubiquitous, seen on Justin Bieber, American football players, Juggalos, and at least one member of the Ku Klux Klan.
Rastafarianism is not an exception: The radical conventions of teddy boys, mods, rude boys, hippies, punks, bikers, and surfers have all been woven into the mainstream. That was certainly not the groups’ intention. Individuals joined subcultures and countercultures to reject mainstream society and its values. They constructed identities through an open disregard for social norms. Yet in rejecting basic conventions, these iconoclasts became legendary as distinct, original, and authentic. Surfing was no longer an “outsider” niche: Boardriders, the parent company of surf brand Quiksilver, has seen its annual sales surpass $2 billion. Country Life English Butter hired punk legend John Lydon to appear in television commercials. One of America’s most beloved ice cream flavors is Cherry Garcia, named after the bearded leader of a psychedelic rock band who long epitomized the “turn on, tune in, drop out” spirit of 1960s countercultural rebellion. As the subcultural scholars Stuart Hall and Tony Jefferson note, oppositional youth cultures became a “pure, simple, raging, commercial success.” So why, exactly, does straight society come to champion extreme negations of its own conventions?
Subcultures and countercultures manage to achieve a level of influence that belies their raw numbers. Most teens of the 1950s and 1960s never joined a subculture. There were never more than an estimated thirty thousand British teddy boys in a country of fifty million people. However alienated teens felt, most didn’t want to risk their normal status by engaging in strange dress and delinquent behaviors. Because alternative status groups can never actually replace the masses, they can achieve influence only through being imitated. But how do their radical inventions take on cachet? There are two key pathways: the creative class and the youth consumer market.
In the basic logic of signaling, subcultural conventions offer little status value, as they are associated with disadvantaged communities. The major social change of the twentieth century, however, was the integration of minority and working-class conventions into mainstream social norms. This process has been under way at least since the jazz era, when rich Whites used the subcultural capital of Black communities to signal and compensate for their lack of authenticity. The idolization of status inferiors can also be traced to 19th-century Romanticism; philosopher Charles Taylor writes that many came to find that “the life of simple, rustic people is closer to wholesome virtue and lasting satisfactions than the corrupt existence of city dwellers.” By the late 1960s, New York high society threw upscale cocktail parties for Marxist radicals like the Black Panthers—a predilection Tom Wolfe mocked as “radical chic.”
For most cases in the twentieth century, however, the creative class became the primary means by which conventions from alternative status groups nestled into the mainstream. This was a natural process, since many creatives were members of countercultures, or at least were sympathetic to their ideals. In The Conquest of Cool, historian Thomas Frank notes that psychedelic art appeared in commercial imagery not as a means of pandering to hippie youth but rather as the work of proto-hippie creative directors who foisted their lysergic aesthetics on the public. Hippie ads thus preceded—and arguably created—hippie youth.
This creative-class counterculture link, however, doesn’t explain the spread of subcultural conventions from working-class communities like the mods or Rastafarians. Few from working-class subcultures go into publishing and advertising. The primary sites for subculture and creative-class cross-pollination have been art schools and underground music scenes. The punk community, in particular, arose as an alliance between the British working class and students in art and fashion schools. Once this network was formed, punk’s embrace of reggae elevated Jamaican music into the British mainstream as well. Similarly, New York’s downtown art scene supported Bronx hip-hop before many African American radio stations took rap seriously.
Subcultural style often fits well within the creative-class sensibility. With a premium placed on authenticity, creative class taste celebrates defiant groups like hipsters, surfers, bikers, and punks as sincere rejections of the straight society’s “plastic fantastic” kitsch. The working classes have a “natural” essence untarnished by the demands of bourgeois society. “What makes Hip a special language,” writes Norman Mailer, “is that it cannot really be taught.” This perspective can be patronizing, but to many middle-class youth, subcultural style is a powerful expression of earnest antagonism against common enemies. Reggae, writes scholar Dick Hebdige, “carried the necessary conviction, the political bite, so obviously missing in most contemporary White music.”
From the jazz era onward, knowledge of underground culture served as an important criterion for upper-middle-class status—a pressure to be hip, to be in the know about subcultural activity. Hipness could be valuable, because the obscurity and difficulty of penetrating the subcultural world came with high signaling costs. Once subcultural capital became standard in creative-class signaling, minority and working-class slang, music, dances, and styles functioned as valuable signals—with or without their underlying beliefs. Art school students could listen to reggae without believing in the divinity of Haile Selassie. For many burgeoning creative-class members, subcultures and countercultures offered vehicles for daydreaming about an exciting life far from conformist boredom. Art critic Dan Fox, who grew up in the London suburbs, explains, “[Music-related tribe] identities gave shelter, a sense of belonging; being someone else was a way to fantasize your exit from small-town small-mindedness.”
Middle-class radical chic, however, tends to denature the most prickly styles. This makes “radical” new ideas less socially disruptive, which opens a second route of subcultural influence: the youth consumer market. The culture industry—fashion brands, record companies, film producers—is highly attuned to the tastes of the creative class, and once the creative class blesses a subculture or counterculture, companies manufacture and sell wares to tap into this new source of cachet. At first mods tailored their suits, but the group’s growing stature encouraged ready-to-wear brands to manufacture off-the-rack mod garments for mass consumption. As the punk trend flared in England, the staid record label EMI signed the Sex Pistols (and then promptly dropped them). With so many cultural trends starting among the creative classes and ethnic subcultures, companies may not understand these innovations but gamble that they will be profitable in their appeal to middle-class youth.
Before radical styles can diffuse as products, they are defused—i.e., the most transgressive qualities are surgically removed. Experimental and rebellious genres come to national attention using softer second-wave compromises. In the early 1990s, hip-hop finally reached the top of the charts with the “pop rap” of MC Hammer and Vanilla Ice. Defusing not only dilutes the impact of the original inventions but also freezes far-out ideas into set conventions. The vague “oppositional attitude” of a subculture becomes petrified in a strictly defined set of goods. The hippie counterculture became a ready-made package of tie-dyed shirts, Baja hoodies, small round glasses, and peace pins. Mass media, in needing to explain subcultures to readers, defines the undefined—and exaggerates where necessary. Velvet cuffs became a hallmark of teddy boy style, despite being a late-stage development dating from a single 1957 photo in Teen Life magazine.
This simplification inherent in the marketing process lowers fences and signaling costs, allowing anyone to be a punk or hip-hopper through a few commercial transactions. John Waters took interest in beatniks not for any “deep social conviction” but “in homage” to his favorite TV character, Maynard G. Krebs, on The Many Loves of Dobie Gillis. And as more members rush into these groups, further simplification occurs. Younger members have less money to invest in clothing, vehicles, and conspicuous hedonism. The second generation of teds maintained surly attitudes and duck’s-ass quiffs, but replaced the Edwardian suits with jeans. Creative classes may embrace subcultures and countercultures on pseudo-spiritual grounds, but many youth simply deploy rebellious styles as a blunt invective against adults. N.W.A’s song “Fuck tha Police” gave voice to Black resentment against Los Angeles law enforcement; White suburban teens blasted it from home cassette decks to anger their parents.
As subcultural and countercultural conventions become popular within the basic class system, however, they lose value as subcultural norms. Most alternative status groups can’t survive the parasitism of the consumer market; some fight back before it’s too late. In October 1967, a group of longtime countercultural figures held a “Death of the Hippie” mock funeral on the streets of San Francisco to persuade the media to stop covering their movement. Looking back at the sixties, journalist Nik Cohn noted that these groups’ rise and fall always followed a similar pattern:
One by one, they would form underground and lay down their basic premises, to be followed with near millennial fervor by a very small number; then they would emerge into daylight and begin to spread from district to district; then they would catch fire suddenly and produce a national explosion; then they would attract regiments of hangers-on and they would be milked by industry and paraded endlessly by media; and then, robbed of all novelty and impact, they would die.

By the late 1960s the mods’ favorite hangout, Carnaby Street, had become “a tourist trap, a joke in bad taste” for “middle-aged tourists from Kansas and Wisconsin.” Japanese biker gangs in the early 1970s dressed in 1950s Americana—Hawaiian shirts, leather jackets, jeans, pompadours—but once the mainstream Japanese fashion scene began to play with a similar fifties retro, the bikers switched to right-wing paramilitary uniforms festooned with imperialist slogans.
However, what complicates any analysis of subcultural influence on mainstream style is that the most famous 1960s groups often reappear as revival movements. Every year a new crop of idealistic young mods watches the 1979 film Quadrophenia and rushes out to order their first tailored mohair suit. We shouldn’t mistake these later adherents, however, for an organic extension of the original configuration. New mods are seeking comfort in a presanctioned rebellion, rather than spearheading new shocking styles at the risk of social disapproval. The neo-teddy boys of the 1970s adopted the old styles as a matter of pure taste: namely, a combination of fifties rock nostalgia and hippie backlash. Many didn’t even know where the term “Edwardian” originated.
Were the original groups truly “subcultural” if they could be so seamlessly absorbed into the commercial marketplace? In the language of contemporary marketing, “subculture” has come to mean little more than “niche consumer segment.” A large portion of contemporary consumerism is built on countercultural and subcultural aesthetics. Formerly antisocial looks like punk, hippie, surfer, and biker are now sold as mainstream styles in every American shopping mall. Corporate executives brag about surfing on custom longboards, road tripping on Harley-Davidsons, and logging off for weeks while on silent meditation retreats. The high-end fashion label Saint Laurent did a teddy-boy-themed collection in 2014, and Dior took inspiration from teddy girls for the autumn of 2019. There would be no Bobo yuppies in Silicon Valley without bohemianism, nor would the Police’s “Roxanne” play as dental-clinic Muzak without Jamaican reggae.
But not all subcultures and countercultures have ended up as part of the public marketplace. Most subcultures remain marginalized: e.g., survivalists, furries, UFO abductees, and pickup artists. Just like teddy boys, the Juggalos pose as outlaws with their own shocking music, styles, and dubious behaviors—and yet the music magazine Blender named the foundational Juggalo musical act Insane Clown Posse as the worst artist in music history. The movement around Christian rock has suffered a similar fate; despite staggering popularity, the fashion brand Extreme Christian Clothes has never made it into the pages of GQ. Since these particular groups are formed from elements of the (White) majority culture—rather than formed in opposition to it—they offer left-leaning creatives no inspiration. Lower-middle-class White subcultures can also epitomize the depths of conservative sentiment rather than suggest a means of escape. Early skinhead culture influenced high fashion, but the Nazi-affiliated epigones didn’t. Without the blessing of the creative class, major manufacturers won’t make new goods based on such subcultures’ conventions, preventing their spread to wider audiences. Subcultural transgressions, then, best find influence when they become signals within the primary status structure of society.
The renowned scholarship on subcultures produced at Birmingham’s Centre for Contemporary Cultural Studies casts youth groups as forces of “resistance,” trying to navigate the “contradictions” of class society. Looking back, few teds or mods saw their actions in such openly political terms. “Studies carried out in Britain, America, Canada, and Australia,” writes sociologist David Muggleton, “have, in fact, found subcultural belief systems to be complex and uneven.” While we may take inspiration from the groups’ sense of “vague opposition,” we’re much more enchanted by their specific garments, albums, dances, behaviors, slang, and drugs. In other words, each subculture and counterculture tends to be reduced to a set of cultural artifacts, all of which are added to the pile of contemporary culture.
The most important contribution of subcultures, however, has been giving birth to new sensibilities—additional perceptual frames for us to revalue existing goods and behaviors. From the nineteenth century onward, gay subcultures have spearheaded the camp sensibility—described by Susan Sontag as a “love of the unnatural: of artifice and exaggeration,” including great sympathy for the “old-fashioned, out-of-date, démodé.” This “supplementary” set of standards expanded cultural capital beyond high culture and into an ironic appreciation of low culture. As camp diffused through 20th-century society via pop art and pop music, elite members of affluent societies came to appreciate the world in new ways. Without the proliferation of camp, John Waters would not grace the cover of Town & Country.
As much as subcultural members may join their groups as an escape from status woes, they inevitably replicate status logic in new forms—different status criteria, different hierarchies, different conventions, and different tastes. Members adopt their own arbitrary negations of arbitrary mainstream conventions, but believe in them as authentic emotions. If punk were truly a genuine expression of individuality, as John Lydon claims it should be, there could never have been a punk “uniform.”
The fact that subcultural rebellion manifests as a simple distinction in taste is why the cultural industry can so easily co-opt its style. If consumers are always on the prowl for more sensational and more shocking new products, record companies and clothing labels can use alternative status groups as R&D labs for the wildest new ideas.
Alternative status groups in the twentieth century did, however, succeed in changing the direction of cultural flows. In strict class-based societies of the past, economic capital and power set rigid status hierarchies; conventions trickled down from the rich to the middle classes to the poor. In a world where subcultural capital takes on cachet, the rich consciously borrow ideas from poorer groups. Furthermore, bricolage is no longer a junkyard approach to personal style—everyone now mixes and matches. In the end, subcultural groups were perhaps an avant-garde of persona crafting, the earliest adopters of the now common practice of inventing and performing strange characters as an effective means of status distinction.
For both classes and alternative status groups, individuals pursuing status end up forming new conventions without setting out to do so. Innovation, in these cases, is often a byproduct of status struggle. But individuals also intentionally attempt to propose alternatives to established conventions. Artists are the most well-known example of this more calculated creativity—and they, too, are motivated by status.
Not surprisingly, mainstream society reacts with outrage upon the appearance of alternative status groups, as these groups’ very existence is an affront to the dominant status beliefs. Blessing or even tolerating subcultural transgressions is a dangerous acknowledgment of the arbitrariness of mainstream norms. Thus, subcultures and countercultures are often cast as modern folk devils. The media spins lurid yarns of criminal destruction, drug abuse, and sexual immorality—frequently embellishing with sensational half-truths. To discourage drug use in the 1970s, educators and publishers relied on a fictional diary called Go Ask Alice, in which a girl takes an accidental dose of LSD and falls into a tragic life of addiction, sex work, and homelessness. The truth of subcultural life is often more pedestrian. As an early teddy boy explained in hindsight, “We called ourselves Teddy Boys and we wanted to be as smart as possible. We lived for a good time, and all the rest was propaganda.”
Excerpted and adapted by the author from Status and Culture by W. David Marx, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. © 2022 by W. David Marx.
The process of evolution is often described by the phrase “survival of the fittest,” coined by Charles Darwin’s contemporary Herbert Spencer.1 The phrase reflects a popular sentiment that nature is best described as, in Alfred Lord Tennyson’s colorful and oft-quoted expression, “red in tooth and claw.”2 But Spencer’s phrase is misleading, inasmuch as he was applying it according to his own idiosyncratic views, while failing to properly reflect Darwin’s attitude toward the theory he developed. It directs our attention to organisms and species on the cusp of survival. But, as I shall argue here, they are the least fit and therefore least relevant in evolution’s ability to make progress toward an aggregate system of life that is increasingly abundant, diverse, and collectively capable. Life’s progress comes primarily from “proliferation of the fittest.” It might seem insignificant to focus on life’s ability to proliferate, far beyond its ability to just survive, but the payoff is enormous. This opens up the possibility for an even bigger idea: Evolution seeks sets of patterns (such as genes) that cooperate toward their mutual proliferation. And nature selects some patterns over others by simply proliferating them more rapidly. Culling of the unfit may be part of the evolutionary process for early planetary life, but it is not required for evolutionary progress after life has achieved a critical threshold of intelligence. This article describes a cooperation-based interpretation of evolution that extends the Gaia hypothesis proposed decades ago by James Lovelock and Lynn Margulis.
♦ ♦ ♦
It is the nature of life to proliferate—to become more diverse and abundant in whatever environment it exists. Wherever in the universe planetary life is established, as it continues it will likely discover millions of ways to adapt and flourish. After a few billion years, any such accommodating planet will likely be covered with life, spectacularly diverse and wildly prolific—call it constructive proliferation.
Why, then, do we tend to model evolution in terms of its destructive elements—competition and culling of the unfit? Why do we dwell on life’s failures—species extinctions and organisms that die before they procreate? They play almost no role at all in evolution’s ability to make progress in life—toward ever greater abundance, diversity, and capability. The traditional manner of thinking about evolution in terms of competition and elimination misses this important element of the process, namely constructive proliferation.
Modern thinking on evolution has been heavily influenced by the renowned evolutionary biologist Richard Dawkins, himself reflecting the work of Robert Trivers, William D. Hamilton, George C. Williams, and others pursuing a “selfish gene” model of the evolutionary process.3 Dawkins revealed valuable insights into evolution by showing us how to look at life’s development from a gene’s eye view. As such, he focuses more on the destructive than the constructive elements of evolution. For example, Dawkins describes evolution metaphorically in terms of a “Darwinian chisel” sculpting the characteristics of a species: “The gene pool of a species is the ever-changing marble upon which the chisels, the fine, sharp, exquisitely delicate, deeply probing chisels of natural selection, go to work.”4 He uses the chisel metaphor to show how a subtractive process, such as chipping away at a big block of marble, can eventually reveal a beautiful statue. By analogy, we are supposed to believe that evolution’s subtractive process of culling the unfit can eventually reveal a beautifully adapted, incredibly capable apex predator, such as a lion.
The phrase “survival of the fittest” does indeed reflect this subtractive process, but as I shall argue, it leaves us with a dilemma—before a lion can survive, it must exist. “Survival of the fittest” does not explain how a new species is created, a point made by the evolutionary biologist Andreas Wagner in his aptly titled book Arrival of the Fittest.5 Before a lion can survive, it must first arrive. So, by what mechanism is new and better life created in the first place?
Along with the negative aspect of evolution that culls unfit life, there must also be a positive aspect to account for the initial creation and ongoing proliferation of new and successful life. It must be more than just the effect of a random mutation or genetic recombination, because neither can account for how a slightly different set of gene patterns might be better. And since the overall system of life is so prolific (over time), we may reasonably conclude that the positive aspect of the evolutionary process must be greater in magnitude than the negative aspect, perhaps far greater. So, let us try to tease apart the positive and negative facets of evolution. In other words, rather than focusing on evolution’s failures, let us turn our attention to its creative successes. To do so, we must consider evolution in terms of nature’s most basic elements. And when we do, we find cooperation everywhere.
♦ ♦ ♦
Life is all about patterns of matter and energy that are able to self-organize and replicate. There is no such thing as natural benefit to life other than the greater proliferation of its underlying patterns. Everything of interest or benefit to life comes down to pattern proliferation, which—for biological life on Earth—involves gene-like patterns (in DNA or RNA) acting collectively toward their mutual replication. Inside a typical cell, molecules collectively catalyze themselves into ever greater abundance by combining nutrients that have permeated through the cell wall. This cooperative process continues until the critical molecules have become sufficiently abundant to generate two cells, allowing the cell to divide. Cell division is the very basis for life, and the central mechanism by which life is able to proliferate. This is made possible by cooperation among the cell’s metabolic molecules. At higher levels, cells cooperate to produce organs, and organs cooperate to produce highly capable organisms. Higher still, organisms cooperate in collectivities like beehives, ant colonies, and human societies.
Cooperation among certain things at one level can produce something very different at a higher level. And the very different something that emerges from cooperation can sometimes yield new value—call it pattern synergy—recognizing that when certain things are carefully arranged into a particular pattern, they can collectively produce something that is greater than the sum of its parts—often referred to as emergence. The gears and springs of a mechanical clock, for example, take on much more value when they are precisely arranged into a device that keeps accurate track of time. And, just as the design of a better clock requires enhanced cooperation among its gears and springs, the evolutionary design of better life also requires enhanced cooperation among its various components—molecules, cells, organs, and limbs.
The concept of pattern synergy was recognized at the molecular level (and above) by the famed designer and inventor R. Buckminster Fuller in his 1975 book Synergetics, in which he defined synergy as “behavior of whole systems unpredicted by the behavior of their parts taken separately.”6 Fuller’s work focused primarily on the geometric designs that naturally emerge from certain combinations of atoms and molecules. But the concept of pattern synergy can apply at many higher levels as well. At each level, emerging synergies become the building blocks for the next higher level.
Another contributor to the concept of pattern synergy is biologist Peter Corning, starting with his 1983 book The Synergism Hypothesis: A Theory of Progressive Evolution.7 In his 2003 book Nature’s Magic, he notes: “The thesis, in brief, is that synergy—a vaguely familiar term to many of us—is actually one of the great governing principles of the natural world. … It is synergy that has been responsible for the evolution of cooperation in nature and humankind …”8
Then there is Robert Wright’s runaway 2000 bestseller, Nonzero: The Logic of Human Destiny, which focused on a critical distinction made by game theorists in their modeling of relationships as either zero-sum or nonzero-sum. Zero-sum games involve competitive relationships in which the positive gain of the winner equals the negative loss of the loser, summing to zero. Nonzero-sum games, on the other hand, involve relationships in which the interests of the game’s participants overlap. Two players of a game can both win, yielding a positive (nonzero) benefit to both. In real life, people find many ways of cooperating synergistically toward their mutual benefit, and Wright devotes his entire book to the proposition that life’s most successful relationships among organisms—both within and between species—are based on these kinds of nonzero win-win scenarios: “My hope is to illuminate a kind of force—the nonzero-sum dynamic—that has crucially shaped the unfolding of life on earth so far.”9
As an example of Wright’s way of thinking, consider how patterns from very different domains can cooperate toward their mutual proliferation. Cooperation among humans accelerated greatly about 10,000 years ago when our ancestors began working together in fields to cultivate farm crops, such as wheat. Those agricultural activity patterns persisted and proliferated because they allowed the genes of humans and the genes of wheat to mutually proliferate. And just a century ago, the patterns of materials and activities underlying tractor production began cooperating with the patterns of genes in humans as well as the patterns of genes in all species of agricultural production toward a veritable “orgy” of mutual proliferation. The human population has doubled twice since then, from 2 billion to 8 billion. And patterns of production in agriculture and industrial manufacturing have also proliferated roughly in tandem with humans. Our modern economy is highly positive-sum, thanks to the many synergies that result during production.
In most nonzero-sum game-theoretic paradigms, the players have the option of cooperating (as if synergistically) toward their mutual benefit. However, they also have the option of betraying (or defecting), which may earn an even higher short-term payoff than cooperating. This tradeoff between the short-term temptation of betrayal and the long-term benefits of ongoing cooperation was recognized by Robert Axelrod as a fundamental characteristic of life’s many relationships. In The Evolution of Cooperation (1984), Axelrod ran computer simulations of the iterated Prisoner’s Dilemma game and discovered a successful strategy for encouraging ongoing cooperation based on reciprocity: tit-for-tat. “So while it pays to be nice, it also pays to be retaliatory. Tit-for-tat combines these desirable properties. It is nice, forgiving, and retaliatory.”10
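To make that tradeoff concrete, here is a minimal Python sketch of an iterated Prisoner’s Dilemma in the spirit of Axelrod’s simulations. It is an illustration only: the payoff values (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for betraying and being betrayed) and the strategy names are the conventional textbook choices, not taken from Axelrod’s own code.

# A minimal sketch of Axelrod-style iterated Prisoner's Dilemma play.
# Payoffs and strategies below are standard textbook assumptions.

PAYOFF = {  # (my move, opponent's move) -> my points; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then copy the opponent's previous move.
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def always_cooperate(my_history, their_history):
    return 'C'

def play(strategy_a, strategy_b, rounds=200):
    # Play repeated rounds and return the two strategies' total scores.
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))         # mutual cooperation: (600, 600)
print(play(tit_for_tat, always_defect))       # retaliation caps the damage: (199, 204)
print(play(always_cooperate, always_defect))  # unguarded niceness is exploited: (0, 1000)

Over a long run of interactions, the steady gains from mutual cooperation outweigh the one-time gain from betrayal, which is the point Axelrod drew from his simulations.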
Unfortunately, any system of cooperative life will naturally breed cheaters and defectors. Let’s just call them all parasites. Harvard entomologist E.O. Wilson has described parasites as “predators that eat prey in units of less than one.”11 Here, we recognize them as species that routinely act to divert life’s critical resources away from their best synergistic uses—away from the hosts that earn them to the parasites that simply steal them. Wilson goes on to say: “Tolerable parasites are those that have evolved to ensure their own survival and reproduction but at the same time with minimum pain and cost to the host.” While parasites can be wildly prolific in the short term, the burdens they place on their hosts ultimately limit their ability to proliferate over the long run. Since parasite species depend on their host species for future infestations, the relationship between them is ultimately competitive and dysergistic. There are a couple of ways that nature can eliminate parasitism: Mutations to the host species can sometimes discover an immunity to the parasite. Even better, mutations to the parasite species can sometimes discover a way for it to become mutualistic with the host. Parasites are actually prime candidates for discovering new forms of mutually beneficial cooperation. After all, the flow of benefit from the host to the parasite is halfway to the kind of relationship evolution prefers. To become fully mutualistic, all that is needed is for the parasite to reciprocate some sort of commensurate benefit to the host.
Consider how E. coli bacteria in the guts of most animals evolved to provide a valuable digestive service in exchange for a steady supply of food on which the bacteria can feed. The initial infestation of bacteria into the guts of animals, long ago, might have started out as purely parasitic. But, if so, mutations to E. coli bacteria at some point found a way to cooperate by reciprocating benefit to their hosts. No matter how cooperative relationships come to exist, they are always preferable to—more prolific than—competitive relationships. In Richard Dawkins’ words: “Parasites become gentler to their hosts, more symbiotic.”12
Nature’s forces cause synergies to emerge from certain cooperative arrangements of things and activities. And it happens at all levels, from the atomic to the galactic. At every level, a new type of synergy emerges from cooperation among patterns of things and activities at lower levels. Evolution’s ability to discover new and better forms of pattern synergy at ever higher levels of cooperation is the natural source of all creativity.
Nowhere is pattern synergy more obvious or valuable than in the arrangement of the human brain, where 85 billion neurons cooperate to produce a vivid conscious awareness and ability to reason. In fact, each organ of a human body consists of many cells that all cooperate to produce a specific biological function. And at an even higher level, the complementary functions of human organs and limbs cooperate to produce a body capable of performing ballet. Cooperation is everywhere in life, within organisms and among them.
Many different types of species routinely cooperate toward their mutual proliferation by exchanging various services and molecular resources. We have already considered the mutually beneficial relationship between animals and the E. coli bacteria in their guts. As another example, bees provide a pollination service to flowering plants in exchange for nutritious nectar. In Entangled Life (2020), Merlin Sheldrake describes how certain fungi attached to plant roots can isolate and donate critical environmental nutrients to the plants in exchange for carbohydrates: “Today, more than ninety percent of plants depend on mycorrhizal fungi … which can link trees in shared networks sometimes referred to as the ‘wood wide web’.”13 These are just a few of the most obvious cases in which vastly different species find ways of cooperating toward their mutual proliferation. There are many other forms of cooperation among species that are far less obvious. When they are all tallied up, it becomes apparent that each species depends on many others for its existence, and the entire system of life develops almost as if it were a single self-regulating organism.
At ever higher levels, cooperation toward mutual proliferation appears to be what nature seeks. The occasional discovery of a better form of cooperation is what accounts for all types of evolving progress. (The term better here means more mutually prolific.) From nature’s perspective, the only way to define cooperation is in terms of patterns acting collectively toward their mutual replication and ongoing proliferation. Cooperation is the basis for everything of benefit or value to life. In this sense, evolution’s “purpose” is to discover ever better forms of cooperation among replicating patterns of things and activities, causing their ever-increasing mutual proliferation.
Nutrient exchanges and communication between a mycorrhizal fungus and plants. (Source: Adapted by Charlotte Roy, Salsero35, Nefronus, CC BY-SA 4.0, via Wikimedia Commons)

So, life is about much more than just survival. Evolution seeks patterns that cooperate toward mutual proliferation. And the better they cooperate, the more they proliferate. From this perspective, natural selection works through the differential proliferation of patterns (such as genes)—some proliferating more rapidly than others (of which some will experience negative proliferation). Over time, any life-accommodating world may naturally become covered with species that best embody and embrace cooperation, making them most prolific. Importantly, in this interpretation of evolution, neither competition nor culling of the unfit is required for evolutionary progress.
♦ ♦ ♦
Pioneers in this view of life are the chemist James Lovelock and the biologist Lynn Margulis, both of whom saw cooperation everywhere in the aggregate system of life. Margulis, for example, developed a cooperation-based theory of the origin of the eukaryotic cell—the complex cellular structure out of which all plants and animals are made—from simpler prokaryotic cells. Margulis theorized that the more complex eukaryotes resulted from the symbiotic union of different types of prokaryotes. Perhaps the very first eukaryotic cell resulted from a parasitic infestation by one type of prokaryote into another type. If so, the parasitic prokaryote then discovered a way to provide benefit to its host, and the parasitism gave way to mutualism. The patterns in those combined prokaryotes stumbled into a way of cooperating toward their mutual proliferation by together creating a better type of cell, a process Margulis called endosymbiosis.14
Most biologists were initially skeptical, but the tenacious Margulis heroically persisted in developing and presenting evidence to support her theory, and in the fullness of time her peers were forced by the weight of the evidence to finally accept it. An early adopter of Margulis’ theory was James Lovelock, who showed how different species naturally coevolve in ways that allow them to cooperatively regulate critical aspects of their common environment. Lovelock named his theory Gaia, after the primal Mother Earth goddess from Greek mythology.15
To the extent any two species in a system of aggregate life successfully cooperate toward their mutual proliferation, they together may become increasingly abundant. Plants that participate in cooperative relationships with mycorrhizal fungi, for example, will tend to proliferate more rapidly than plants that don’t. So, it is not just a coincidence that our world has become covered by such cooperative plants. By comparison, noncooperative species become relatively diluted and decreasingly relevant to the overall system. The species that are best able to cooperate toward their mutual proliferation increasingly influence the entire system of life. In various ways, they collectively produce a stable environment that is conducive to their mutual ongoing proliferation. Thus, a subsystem of cooperation may naturally rise like a Phoenix out of the ashes of primitive and chaotic life. Species cultivated by farmers (corn, wheat, pigs, chickens) have certainly risen in abundance relative to other species due to their cooperation with humans, arguably the most poignant example of which is the domestication of wild wolves into modern dogs.
The entire web of life becomes more robust as semiredundant cooperative mechanisms emerge in the set of all interspecies relationships. For example, in addition to bees, there are many other species of insects and small birds that redundantly pollinate flowering plants. So, if a few bee species were to go extinct, other pollinating species would likely pick up the slack. Likewise, there are many species of plants that redundantly produce the oxygen required by animals, and many species of animals that redundantly produce the carbon dioxide required by plants.
Through all the redundancies across the many various mechanisms of cooperation, the whole system of aggregate life develops an evolutionary toughness, becoming increasingly stable, robust, and resilient to exogenous shocks. Accordingly, Margulis titled an essay on the subject “Gaia Is a Tough Bitch.” It describes how our planet’s temperature and atmosphere “are produced and maintained by the sum of life.”16 In her 1998 book Symbiotic Planet, Margulis says that plants and animals cooperate to hold the amount of oxygen in our atmosphere steady, at a level that “hovers between a global fire hazard and the risk of widespread death by asphyxiation.”17
According to Lovelock’s hypothesis, many different species cooperate to produce a very stable system of aggregate life able to regulate its own critical parameters—a capability known as homeostasis. And the interdependencies among species cause the entire system to increasingly act like a robust superorganism at a higher level. This view of earthly life as a superorganism was characterized by Richard Dawkins thusly: “Lovelock rightly regards homeostatic self-regulation as one of the characteristic activities of living organisms, and this leads him to the daring hypothesis that the whole Earth is equivalent to a single living organism. … Lovelock clearly takes his Earth and organism comparison seriously enough to devote a whole book to it. He really means it.”18
Dawkins’ writings have often carried the assumption that evolution happens in a parallel fashion among multiple competing organisms from which the unfit are fatally culled. With regard to the evolution of Gaia, for example, he wrote: “there would have to have been a set of rival Gaias, presumably on different planets. … The Universe would have to be full of dead planets whose homeostatic regulation systems had failed, with, dotted around, a handful of successful, well-regulated planets of which Earth is one.”
As evolution is described here, however, it does not require competition among multiple species, along with extinction by some. The discovery of new gene patterns that are better able to cooperate toward their mutual proliferation is the constructive mechanism through which evolution develops new and better biological life. Neither competition nor culling of the unfit is necessarily required.
To his already enormous credit, however, Dawkins also wrote: “I do not deny that somebody may, one day, produce a workable model of the evolution of Gaia … although I personally doubt it. But if Lovelock has such a model in mind he does not mention it.”19 Well, allow me to suggest a workable model of an evolving Gaia.
♦ ♦ ♦
Natural selection chooses some patterns of life over others primarily on the basis of their respective abilities to proliferate. This allows us to conceive of evolution operating on a sole entity, such as a single system of aggregate life composed of many interdependent species. It is a serial style of evolution based on ever better forms of cooperation among its various species. The beneficial changes resulting from this style of evolution simply unfold sequentially through time, as new and better forms of cooperation are discovered. And the net effect is ever greater proliferation of aggregate life. It appears to be a much more accurate description of how evolution really works than the widely accepted parallel style that relies on competition.
This serial style of evolution applies to Lovelock’s model of Gaia in which evolving patterns build themselves up through ever better relationships of cooperation. From among those patterns, the fittest—which tend to be the most cooperative in this model—are naturally selected by way of their greater proliferation, without any need for competition or culling of the unfit.
Consider the 2016 book by Russian complexity scientist Peter Turchin, Ultrasociety: How 10,000 Years of War Made Humans the Greatest Cooperators on Earth.20 The central idea is that physical competition and mortal conflict were necessary for eliminating entire groups of noncooperators, leaving just the groups of cooperators to survive. But when we model natural selection in terms of differential proliferation, we may conclude that no war was ever required. While many wars certainly did happen over the past 10,000 years, general cooperation was likely destined to emerge and flourish even if they hadn’t happened.
Cooperation naturally emerges because it creates mutually beneficial synergies. And those synergies yield evolutionary advantage to the cooperators, enabling their greater proliferation. Two cooperative families, for example, might take turns caring for each other’s children, realizing synergistic efficiencies that would enable both families to diligently raise more children than would have otherwise been possible. We should therefore expect groups full of cooperators to proliferate their populations more rapidly than groups full of competitors. And any planetary system of sufficiently intelligent life will, over time, become increasingly dominated by the faster-growing groups of cooperators, without anyone ever having to die prematurely.
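As a back-of-the-envelope illustration of that claim, the sketch below compares two groups whose populations simply compound at different rates because one cooperates more effectively. The growth rates and starting sizes are invented for the example, not measured; the point is only that the cooperators’ share of the combined population rises without anyone ever being removed.

# A toy model of "proliferation of the fittest": no one dies and nothing is
# culled, yet the faster-proliferating (more cooperative) group comes to
# dominate. The growth rates are illustrative assumptions only.

def cooperator_share(coop_rate=1.05, other_rate=1.02,
                     coop_start=100.0, other_start=100.0, generations=100):
    coop, other = coop_start, other_start
    for _ in range(generations):
        coop *= coop_rate    # synergies let cooperators raise more offspring
        other *= other_rate  # the less cooperative group still grows, just slower
    return coop / (coop + other)

for gens in (0, 25, 50, 100):
    print(gens, round(cooperator_share(generations=gens), 3))
# 0: 0.5 -> 25: ~0.67 -> 50: ~0.81 -> 100: ~0.95
# Selection by differential proliferation, with no culling required.

The same arithmetic applies whether the replicating patterns are genes, families, or whole groups, which is why, on this view, no war or extinction is needed for the cooperators to win out.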
This cooperation-based interpretation of evolution gives us new insight into a decades-old debate among evolutionists over the concept of group selection. The debate focuses on whether natural selection needs to operate at the group level to explain how group-benefiting behaviors can naturally emerge. To fully expose the dilemma, Richard Dawkins imagines two very different groups—one composed of cooperative altruists and the other composed of individuals who are purely selfish. Dawkins suggests that the group of altruists, “whose individual members are prepared to sacrifice themselves for the welfare of the group, may be less likely to go extinct than a rival group whose individual members place their own selfish interests first.” But, there’s a catch: “Even in the group of altruists, there will be a dissenting minority who refuse to make any sacrifice. If there is just one selfish rebel, prepared to exploit the altruism of the rest, then he, by definition, is more likely than they are to survive and have children. Each of these children will tend to inherit his selfish traits. After several generations of this natural selection, the ‘altruistic group’ will be overrun by the selfish individuals, and will be indistinguishable from the selfish group.”21
Dawkins has expressed his belief that natural selection operating at the level of genes is sufficient to account for the emergence of group-benefiting behaviors. And arguments presented here support that belief. When natural selection is defined as proliferation of the fittest (rather than elimination of the unfit), there is then no difference between selection at the genetic level and selection at the group level. Groups are selected to the extent genes within them proliferate.
Groups are naturally selected by their differential ability to grow their populations. And cooperative groups will always tend to proliferate more rapidly than uncooperative groups. Mutually beneficial cooperation simply bubbles forth from within such a group. No individual needs to die, and no group needs to be eliminated, for group selection to occur. In fact, no competition at all between groups is ever required for evolutionary progress, other than to see which can sustainably grow its population the fastest.
♦ ♦ ♦
The evolutionary value of cooperation over competition was recognized more than a century ago by the Russian intelligentsia. But the concept remained largely ignored by evolutionary thinkers in the West until the Russian nobleman Peter Kropotkin, driven into political exile, settled in England. There, he wrote a series of articles (in English) discussing Darwin’s central theme of “struggle for existence,” later collected into a book titled Mutual Aid.22 About a century later, evolutionary theorist and historian of science Stephen Jay Gould penned one of his monthly columns titled “Kropotkin Was No Crackpot.” “Perhaps cooperation and mutual aid are the more common results of struggle for existence,” Gould opined. “Perhaps communion rather than combat leads to greater reproductive success in most circumstances.”23 Gould then presented a fascinating account of how and why Russians were more predisposed than Westerners to appreciate the evolutionary value of cooperation among animals and among humans.
Just a subtle twist in how we think of natural selection opens a new interpretation of evolution that emphasizes cooperation. We have simply elevated our focus, away from nature’s less favored species that are concerned with mere survival, upward to nature’s more preferred species capable of rapid proliferation. By shifting the emphasis to evolution’s successes rather than its failures, we reveal a clear directionality in how all kinds of progressive systems naturally develop—always toward ever greater degrees of synergistic cooperation among replicating patterns. That natural directionality determines how nature defines goodness and betterment, providing a bedrock foundation for a new system of naturalized philosophy. It also suggests a purpose to life—to advance evolution in the direction it was always destined to go—toward ever greater cooperation, mutualism, and symbiosis.
They watch us. They learn about us and our habits. We are a big part of the environmental conditions to which many of them have adapted.
They’re like us. They hang around in groups. Individuals have different personalities. Pairs bond together for years at a time, maybe lifetimes, and they take good care of their kids. They’re loud, opportunistic, mischievous, and messy. And they’re smart.
Meet Crows

Members of the genus Corvus—crows—include birds with “crow” in their common names as well as ravens, rooks, and jackdaws. There are 45–50 named species of Corvus at the moment (the naming of species is a dynamic field), though that range will change and increase as more information from populations in unstudied areas becomes available. They are medium- to large-sized birds with big heads relative to body size and, usually, large to massive bills. They live all over the world except South America and Antarctica, in the varied habitats that exist across continents and on islands, from southern to northern high latitudes.
American Crows live in populations having different social organizations and dynamics in different regions of North America. The members of populations that breed in the north migrate in spring and fall of each year, with known one-way travel distances of up to 1,740 miles (2,800 km). Each year, they spend months commuting, and they live in one place during spring and summer and another in fall and winter. During migration, they spend nights at the same giant communal roosts along their routes, much like humans returning to the same campgrounds on annual road trips.
Ravens are the largest crows and occur all around the northern hemisphere. Most don’t breed until they’re 4 years old, and some not until they’re 10. Once they become breeders, ravens tend to live in pairs. In the years between leaving natal areas and breeding, individuals join other nonbreeders in small groups and larger flocks as they jointly acquire the skills needed for successful breeding, including being able to reliably find food (carcasses that are unpredictable in when and where they’ll become available, across huge landscapes).
New Caledonian Crows occur on only two of New Caledonia’s islands. Since they inhabit primary forests, observing them in the field is challenging. They are not very social, although kids tend to stick close to parents for extended periods, up to 2 years. New Caledonian Crows are the only nonhuman animals known to make tools from materials with which they have no experience1 and the only ones known to make cumulative improvements to tool design over time.2
I have had the opportunity to observe and study many crows myself, and to learn about the behavior and cognitive capabilities of other species through experimental research and fieldwork published by other scientists. There is a tendency to want to compare the results of studies of cognition in nonhuman animals to humans. How do they measure up? How about compared to apes? Such comparisons are already not straightforward when comparing other mammals, and even less so when comparing crows. They are very different types of organisms that live in three-dimensional worlds, without hands, and with brains, eyes and ears that are different from those of mammals. And yet in test after test, crows perform equal to or even better than apes, and are on par with human children or occasionally even exceed adult human capabilities!
American Crows

I began studying American Crows in the early 1980s, on a golf course in Encino, CA. For purposes of my research, I needed to be able to tell individuals apart, and so I had to catch them, to be able to mark them. They were very difficult to capture! With the use of traps and nets3, 4 and climbing to nests to temporarily obtain and mark late-stage nestlings,5 I got a bunch marked and was able to peer into their worlds. They are one of the most civilized species of which I am aware.
The crows I studied in California were year-round residents that nested colonially (that is, having lots of nests in the same general area) and defended only the small areas of space immediately surrounding their nests—if they defended any space at all. Neighbors often foraged together, members of breeding groups were regularly observed in others’ core areas, and breeders rarely prevented others from entering their nest tree or landing on or near their nests. Most individuals did not breed until they were at least 3 or 4 years old, and many nonbreeders remained in natal areas associating with parents or joined the resident non-breeding flock.
I had one of my favorite fieldwork experiences, ever, on that golf course: Because population members had come to associate me with things that caused them distress (e.g., climbing to their nests), they transferred that to other situations and would yell at me, when I arrived, after something bad had happened. One day I drove around on golf cart paths looking for the cause of their yelling, and on the ground found a female with an injured wing. She could not fly but she could run, and the crows dive-bombed and yelled at me as I chased her down. I had her examined by a vet and taped up, and she spent 8 weeks in a cage in my bedroom as her wing healed. In the field, her mate and 1-year old daughter continued to care for the four nestlings in her nest. Three weeks after I took her to my place, a strong storm blew her nest out of its tree and all of the nestlings died. Her mate and daughter hung around for another two weeks, but then were not seen very often. After two months her bone had healed, but her flight muscles had atrophied. I moved her to a flight cage and put her through regular daily exercises. Finally, eleven weeks after her removal, I brought her to the golf course.
I wafted her into her nest tree and threw a bunch of peanuts on the ground. Crows began to fly to the peanuts, and she joined them. Almost immediately, I saw her mate headed right for her from across a busy 4-lane road. He landed beside her and both of them bowed low to each other and produced a slow, melodic, low frequency vocalization that I had never heard before. The pair then proceeded to walk around the group of peanut eating crows and stopped to bow and vocalize to each other three more times. I was crying. The pair was reunited.
The crows I studied in Oklahoma were year-round residents in small-to-large territories that were only sometimes defended against neighbor intrusion. Most delayed breeding until at least 3–4 years old, and many remained “at home” with parents until they bred. Many also left home and moved in with other groups within the population before becoming breeders. Individuals had friends in groups other than their own, and some that had moved out of the population returned occasionally to natal territories and spent time with their parents. Some visited their siblings in other groups, and some moved in with their siblings’ families. Several males established territories adjacent to their parents, and extended families of at least three generations would spend time together.6
One day, I sat in my car watching a group in a residential backyard. One of the crows walked along a wooden fence railing to the end post and attempted to get at something inside the hole in the post that held the railing. Unsuccessful with its bill, it pecked at the wood surrounding the hole and loosened a section at the top, pulling on it until a triangular piece of wood broke off. The crow placed the piece of wood under its feet, with the wide end closest to its body, and hammered several times at the tapered end. It then picked up the piece of wood by the wide end and probed the hole with the tapered end for about 20 seconds. Another crow in the group called from some distance away, and the toolmaker placed the probe into the hole and took off. I went to the hole, saw only the remains of a spider’s web, and retrieved the probe. It did not match the gap from which it had been pulled—the tapered end had clearly been narrowed.7 A few days later when I approached the post, a large spider dashed out of the hole.
Also in Oklahoma, as my co-worker headed toward a nest, I watched the nestlings’ mother hammer repeatedly at a branch of a nearby pine tree. At first, I thought she was exhibiting displacement behavior, but then a pinecone at her feet loosened (she had been hammering at its connection), and she carried it to a spot above my co-worker and dropped it right on his head! She repeated this behavior three more times, hitting him on 3 out of the 4 tries.
So that the crows would behave naturally around me the rest of the time, I donned a disguise whenever I tried to capture them or climb to nestlings. Years later, it was formally demonstrated that American Crows can remember “dangerous” human faces for at least 2.7 years,8 and they can even learn whom to worry about from others!9
Ravens

Ravens are scavengers that regularly store (cache) surplus food obtained at carcasses, and they rely on their caches for sustenance. They are not known to use tools much in the wild.
Caching behavior has been the focus of many studies10, 11 and ravens are skilled strategists. If they know another raven is watching them, they will go to a location out of the observer’s view before caching. Cachers behave differently in the presence of competitors who have or have not seen the caching event: if they have been watched while storing food, cachers move their caches when knowledgeable competitors get close.
Competitors behave differently depending on the situation. If they know where the cache is but the raven they’re paired with doesn’t know that they know, they run right over and retrieve it. If they’re paired with the cacher and the cacher knows they’re wise to the location, they act as if they don’t know and dawdle and fiddle around, seemingly hoping to take advantage of any lapse in focus by the cacher. This level of understanding of what others know rivals that demonstrated in chimpanzees.12
When given an opportunity to pay attention to another cacher while caching, ravens performed better than humans when asked to retrieve both their own and the other cacher’s caches.13 And when paired with a partner who kept taking advantage of the situation, a raven employed a human-like solution: deception. Ravens were trained to find and retrieve food hidden in a maze of film canisters, and one raven was better at it than a dominant male. At some point, the dominant raven quit playing and would just wait for the other one to choose a canister and begin to open the lid, then fly over and steal the food. The raven “being taken advantage of” then changed tactics: it initially went to a canister it knew to be empty and pretended to try to open it. When the dominant bird flew over and was distracted for a few seconds expecting to get the food, the other flew to a canister it knew to be filled, and got the food!14
Ravens successfully solved the problem of obtaining meat dangling from a branch by pulling up sections of the string and stepping on them to keep them from falling back down. And then they were successful with the non-intuitive task of pulling down on the string to bring the meat up.15 They did as well as apes in tests of choosing appropriate tools (even though ravens do not use tools in the wild), and they did better than orangutans, chimps, and bonobos when asked to choose the correct currency for bartering for food.16 Ravens were able to select the right tool in environments different from where they learned to use it, even in the face of 17-hour (overnight) delays between having to select the appropriate tool and being able to use it, providing evidence of their forward-planning abilities. They did better than 4-year-old children in first-trial performances in the tool- and currency-choice experiments,17 and they are perceptive enough to follow the gaze of a human to a location out of view and hop over to see what’s up.18
Ravens form alliances, reconcile conflicts, and console distressed partners.19, 20, 21 They remember former group members and their relationships with them after long periods (years) of separation.22 When disappointed or frustrated, for example by being offered less-preferred food, they respond in a way that other ravens observing them can identify, and the observers themselves are then negatively affected.23 In measures of value, compatibility, and security, the quality of raven social relationships has been said to be analogous to that of chimpanzees.24
New Caledonian Crows

In 1996, a paper published in Nature changed everything: New Caledonian Crows were manufacturing tools, in the wild, at a level of complexity never before seen among nonhuman animals.25 To extract prey from burrows and natural crevices, they make hooks and probes from twigs and from pieces cut from leaves, some of which require sophisticated manipulation and modification skills. No other nonhuman animals do anything like it.26, 27
Betty was the name of a New Caledonian Crow caught in the wild and taken (with several others) to the University of Oxford for testing.28, 29 She was partnered in a cage with a male given the name Able. In an early test, Betty and Able were allowed into a room with a table that had a clear plastic vertical tube, secured in a plastic pan, containing a basket-shaped container of meat at the bottom. There were two wires on the table; one had already been bent so there was a hook at one end. The researchers wondered if one of the crows would use the hook to grab the basket handle, and Betty at first picked it up, but Able took it from her and flew away with it. Betty wanted the meat. She picked up the straight wire (in her bill) and inserted it into the tube but, of course, it was useless in its straight form. And so with force, she jammed the wire into a corner of the pan several times and bent it into a hook! She then used the hook to get the basket. Her behavior made clear that she had a mental representation of the problem and the solution, and therefore of the instrument she needed to make.
Crows perform as well as or better than apes, and are on par with human children, occasionally even exceeding adult human capabilities. New Caledonian Crows were able to spontaneously solve a “metatool” task (using a short tool to obtain a longer one needed for food extraction),30 and they were able to keep in mind the out-of-sight tools they had available (and where they stored them) while performing sequences of tool tasks, providing strong evidence that they can plan several moves ahead.31
From field and lab studies of tool behavior, scientists have also learned that New Caledonian Crows:
Such selectivity suggests these crows have representations of the situations in their minds and so can select the appropriate tools. They also tend to keep their preferred tools safe, under their feet and in holes.36
Individual New Caledonian Crows tend to be lateralized in their tool use (i.e., right- or left-billed): they usually hold probe tools in their bills with the nonworking ends pressed against one side of their heads,37 and individuals prefer one side over the other for different tasks.38 Lateralization is thought to be associated with complex tasks and mental demands (i.e., as tasks increase in difficulty, “control areas” in brains become specialized/localized),39, 40 suggesting that, as in humans, species-level lateralization is an adaptation for efficient neural processing of complex tasks.41
Tools made by New Caledonian Crows from Pandanus leaves come in three types, all with a barbed edge: unstepped narrow probes, unstepped wider probes, and “stepped” tools.42 The stepped tools are made through a sequenced process involving a series of distinct snips and tears along the barbed edge of a leaf to produce a probe that increasingly narrows toward the tip (the “steps”) and has barbs along one edge.43 Evidence from more than 5,500 tools suggests that the narrow and stepped variations were likely improvements on the wide-tool design.44
And so, these crows have evolved minds powerful enough to develop and improve upon tool design, something previously thought possible only for humans (technological progress is considered one of our hallmark characteristics). That there is geographic variation in tool manufacture and that the innovations are passed from generation to generation45, 46, 47 suggests there may be cultural mechanisms at work.
Experiments with captive-bred, hand-raised New Caledonian Crows have demonstrated a strong genetic component to tool interest, manufacture, and use—young crows start playing with twigs, leaves, and crevices on their own, suggesting the phenomenon is an evolved adaptation.
More Crows

In preparation for studies of cognition and neurophysiology, crows have been trained to monitor screens in experimental setups and respond to visual and auditory signals, which then allows them to be trained to do all kinds of other things. Carrion Crows, for example, have been trained to identify complex pictures despite distractions,48, 49 to express their understanding of the concept of greater than/less than,50 and to respond to switching between “same” and “different” rules provided both visually and auditorily.51 They have been trained to discriminate quantities ranging between 1 and 30 dots on a screen,52, 53 and they have been trained to peck different numbers of times and to indicate “I’m done” when they’re finished with their answer.54, 55 That these birds can understand the training protocols is almost as impressive as the results of the studies!
Jackdaws performed on par with apes in a test of self-control over motor impulses and did better than bonobos and gorillas despite having brains 70–94 times smaller.56 Unlike chimpanzees, and similar to the ravens described earlier, jackdaws respond to human gaze and to nonverbal cues like pointing.57
Hooded Crows have been shown to be capable of analytical reasoning. In tests called “match-to-sample,” where subjects are presented with paired stimuli that are the same or different (e.g., in size or shape) and then asked to match the concepts of “same” or “different” to brand new items, crows spontaneously perceived and understood the relationships without any specific training in categories of size, shape, and color.58 Such analytical thinking is thought to be foundational for “categorization, creative problem solving, and scientific discovery,” and was thought to be uniquely human.59
Carrion Crows were able to learn the Arabic numerals 1–4 and then produce matching numbers of vocalizations (e.g., “caw caw caw” for 3) when prompted by the visual image or an auditory cue.60 The modality of the cue did not affect their performance, indicating that their vocal production was guided by an abstract numerical concept. Evidence also indicates that the crows were planning the total number of vocalizations before they started vocalizing and that when errors were made—too few or too many—the crows had started out correctly but “lost track” along the way.
Carrion Crows have also been shown to be capable of recursion: the cognitive ability to process paired elements embedded within a larger sequence.61 For example, a “center-embedded” sequence would appear as [{}] and is analogous to “the crow the experimenter chose passed the test,” with {} corresponding to “the experimenter chose.” An ability to use recursion could expand, potentially without limit, the range of ideas and concepts that can be communicated. Carrion Crows outperformed macaques and performed on par with human children in tests of recursive abilities, yet another characteristic once thought to be unique to humans.
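To make the bracket notation concrete, here is a minimal sketch in Python (my own illustration, not code from the study; the function name is hypothetical) showing what separates a center-embedded sequence such as [{}] from a crossed one such as [{]}:

    # A properly nested (center-embedded) sequence always closes its most
    # recently opened pair first; a crossed sequence does not.
    PAIRS = {"[": "]", "{": "}"}

    def is_center_embedded(seq: str) -> bool:
        """Return True for nested sequences like '[{}]', False for crossed ones like '[{]}'."""
        stack = []
        for symbol in seq:
            if symbol in PAIRS:                         # an opening element
                stack.append(symbol)
            elif stack and symbol == PAIRS[stack[-1]]:  # closes the most recent opener
                stack.pop()
            else:                                       # crossed or unmatched closer
                return False
        return not stack                                # every opener was closed

    print(is_center_embedded("[{}]"))  # True: the nested structure used in the tests
    print(is_center_embedded("[{]}"))  # False: a crossed, non-recursive ordering

The point of the sketch is simply that accepting [{}] while rejecting [{]} requires keeping track of what has been opened but not yet closed, which is the kind of embedded structure the crows were being tested on.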
Rooks are not known to use tools in the wild, but they figured out that by plugging specific holes in the floor of an aviary (including tapping in the plugs!), they could create pools of water in which individuals could drink and bathe.62 Rooks also learned to get food in a trap-tube task (inserting a probe into one end of a tube with holes in it, in order to push out a food reward) and transferred what they learned to a new task on their first try,63 rivaling the physical intelligence of chimpanzees.64 One rook transferred concepts to two additional tasks, indicating she understood the physical aspects of the challenges (including gravity) and was able to “abstract rules” and form mental representations.65
Rook at Slimbridge Wetland Centre, Gloucestershire, England (Photo by Adrian Pingstone, via Wikimedia)

In another set of experiments,66 rooks pushed stones into tubes to collapse a platform and obtain a worm, and they immediately transferred the concept, picking up stones to drop them into the tubes. They chose the correctly sized stones when tube diameters were changed; when no stones were provided, they left the testing room to go outside to collect stones before returning to the testing apparatus! When conditions changed, they immediately used (provided) sticks in lieu of stones; heavy sticks were dropped in and light ones were shoved, suggesting goal-directed thinking. They solved a metatool task, were able to modify branches into functional tools, understood how a hook functioned and used one to retrieve a basket of food at the bottom of a tube, and bent straight wires into hooks, thereby rivaling the abilities of tool-using New Caledonian Crows. All of these findings provide evidence for insight being involved in the problem-solving abilities of rooks.
Final Thoughts

The “marshmallow test” is one of the most well-known and compelling demonstrations of the human ability to delay gratification. Videos showing children struggling to not eat the marshmallow after the experimenter and parent leave the room, so that they may claim more marshmallows later, are both endearing and powerful demonstrations of the heretofore-thought-to-be uniquely human experiences of impatience, frustration, self-control, reward and gratification, and the ability to plan ahead. Ravens, Carrion Crows, and New Caledonian Crows all aced versions of the marshmallow test, thereby breaching another hallmark.67, 68
♦ ♦ ♦
Crows play, have friends, and mourn the death of friends and family members.69, 70 It’s said that the more similarities in cognitive capabilities, biases, and types of errors are exposed, the more likely it is that crows think like we do. And although their brains are built differently and most tests so far were originally designed for mammals, the list of cognitive capabilities crows share with us is already pretty impressive: abstract rules and analytical reasoning, consolation and reconciliation, mental representations and goal-directed behavior, innovation and insight, technological advances, transfer of concepts, knowing what others know, lateralization, tool manufacture and use, metatool use, comprehending quantity and numbers, planning for the future, recursion, motor and vocal control, tactical deceit, and even tracking humans, remembering our faces, and deciphering our intentions.
I wonder what else crows might show us if we knew what and how to ask. We are similar in that we are diurnal and we rely mostly on vision and hearing to perceive and respond to our surroundings, but our umwelts (the term coined by the biologist von Uexküll for the different perceptual worlds of different organisms) differ in myriad other ways. Right? They pick through poop to find bugs! They stand on ice in bare feet! They fly!
I wish we could know how they think, and that maybe, in contexts such as greed, selfishness, cruelty, and war, we could think more like they do.
“If there is no enemy within, the enemy outside can do us no harm.”1
Ever since a Hezbollah suicide bomber in 1983 blew up a truck packed with explosives and killed 241 Marines in Beirut, combating Islamic terrorist organizations has been a priority for U.S. intelligence, security, and law enforcement agencies. However, for those of us who followed the spiraling growth of Islamic terrorism in the 1980s and 1990s, the U.S. seemed sluggishly reactive. Its extensive counterterrorism programs, designed to penetrate and dismantle Islamic terror groups, made little headway.2
The 9/11 attack put a spotlight on the failures of the security agencies tasked to protect the U.S. against acts of terror. How was it possible for 19 hijackers and their ambitious plot to remain off the radar of intelligence and law enforcement? The truth, as I discovered during 18 months of reporting for my book, Why America Slept: The Failure to Prevent 9/11, was not that the plot had gone undetected, but rather that the agencies responsible for monitoring and fighting terrorism had failed to share information, something that would have made it possible to connect the dots before the attack occurred.
The failures were more substantive than mere interagency rivalries between the CIA, FBI, NSA, and local law enforcement. Exclusive interviews I had with top intelligence officers and FBI officials revealed that the dysfunction inside America’s counterintelligence programs originated in, and was deepened by, an internecine bureaucratic war that left little room for working together. Sharing information was given lip service but seldom practiced, particularly when the intelligence at stake was judged as having “high value.”
The most serious failure involved the CIA’s tracking of two terrorists, Khalid al-Mihdhar and Nawaf al-Hazmi, when they moved from Saudi Arabia to California in 2000. If the CIA had alerted the State Department, the two Saudis would have been on a watch list that barred them from entering the United States. Once they were in California, however, the CIA could not legally monitor them domestically. The Agency not only lost track of the two Saudis but failed to let the FBI, which is specifically authorized to act within the U.S., know they were here. In July of 2001, only two months before 9/11, an FBI memo warned the American intelligence community that some bin Laden followers might be training at U.S. flight schools in preparation for an aerial terror attack. The CIA was unaware that al-Mihdhar and al-Hazmi had taken flight training while living in the U.S.
If the CIA had shared its information about the two Saudis, al-Mihdhar might have been detained in June 2001, when he returned to Saudi Arabia after his visa had expired. Or when an Oklahoma state trooper pulled over al-Hazmi for speeding, a driver’s license check against the national database would have triggered security alerts. Sharing the CIA’s security concerns about the duo would have meant the Transportation Department had a red flag on them. The pair even used their own names when making reservations on American Airlines Flight 77, which was flown into the Pentagon.
“Responsibility and accountability were diffuse,” the 9/11 Commission Report concluded a year after I had published Why America Slept.3 That was a diplomatic understatement of the paralyzing dysfunction between intelligence and security agencies and policy makers. The unintended consequence of such discord was to give the advantage invariably to the terrorists.
My reporting revealed that the dearth of cooperation between the country’s top security and intelligence services was not new to 9/11. Exposing how and why the breakdowns in communication between agencies began and persisted for decades explains why the world’s best law enforcement and intelligence agencies ended up fighting each other instead of combating Islamic terrorism.
“We knew that the Islamic threat was the next security problem for the U.S., and we had known it since the 1970s,” Duane “Dewey” Clarridge told me in a rare no-questions-off-limits interview in the wake of 9/11.4 Clarridge was a CIA legend. He was twenty-three when he joined the Agency in 1955 and over the next thirty years earned a reputation as one of its most accomplished covert operatives. Clarridge served in Nepal, India, and Turkey before returning to headquarters in the 1970s. He became the chief of covert operations for the Near East Division, later ran Arab covert ops, then moved to the Latin American Division, before becoming the Rome station chief. It was during his three years in Arab operations that Clarridge became familiar with the key Islamic terrorists.
“We were running operations in Beirut against an alphabet-soup of Palestinian terror groups,” recalled Clarridge. “At the same time, Carlos the Jackal was running around Europe, pulling off stunts like trying to use a grenade launcher to down an El Al airplane at Orly, or shooting his way into the Vienna OPEC meeting, killing three, and kidnapping the Saudi and Iranian oil ministers. We had our hands full.”5
Terrorists ambushed and murdered Richard Welch, the CIA’s Athens station chief, two days before Christmas in 1975. Clarridge and his senior CIA colleagues wanted to go after the terrorists with covert assassination plans. The Agency’s timing was poor, however. Senate hearings into past misdeeds produced months of sordid headlines about the Agency’s 1960s plots with the Mafia to assassinate Fidel Castro, its mind-control experiments, and its failed foreign coups. Those hearings “permanently changed the way Clandestine Services operated,” says Clarridge. “It changed the rules of the game for us.”6
Congress initiated a process by which the Agency had to submit plans for its covert ops to a committee chaired by the president. Congress would be notified within sixty days after the president signed off. Permanent congressional oversight committees were established. The coup de grâce was President Ford’s Executive Order 11905 on February 18, 1976, that barred U.S. government agencies from undertaking assassinations.
The CIA abruptly halted its plans to eliminate Welch’s killers. For the next seven years, the Agency instead engaged in a mostly unsuccessful campaign to gather intelligence on leading Islamic terror groups in the hope of alerting allies to upcoming attacks.
The suicide truck bomber who struck the U.S. embassy in Beirut in 1983 changed everything. CIA Director William Casey and FBI Director William Webster immediately dispatched teams to find out what happened. There were conflicts between those teams from the start. They got so bad that the agents of the rival agencies sometimes got into screaming and shoving matches. The FBI team returned home early, angry and frustrated by what it complained was dismissive treatment by its CIA counterparts.7 The CIA’s new Beirut station chief, William Buckley, ultimately offered an olive branch to the FBI: he invited the Bureau to dispatch another team to Lebanon and investigate free of CIA micromanagement. The FBI solved the case by tracing a fragment of an axle from the bombing truck to an Iranian factory that had links to the Iranian-backed Popular Front for the Liberation of Palestine.
But by the time the FBI reached that conclusion, Iranian-sponsored terrorists had managed to kidnap Buckley in Beirut. That prompted President Reagan to create the government’s first joint task force to battle terrorism. The Restricted Interagency Group for Terrorism was chaired by the CIA’s director of covert operations, and it consisted of single representatives from the CIA, FBI, and the National Security Council. Dewey Clarridge was the Agency rep. The FBI’s man was Oliver “Buck” Revell, the assistant director for criminal investigations (I knew Buck well; he was the FBI Supervisor in Charge of the Dallas office when I researched the JFK assassination for my 1993 book, Case Closed). The National Security Council selected a U.S. Marine lieutenant colonel named Oliver North as its representative.
The new anti-terror group was in a rush to free Buckley before he could be tortured into giving up secrets. North wanted to use DEA informants—heroin traffickers who promised to deliver Buckley for $2 million. Dallas businessman Ross Perot agreed to finance the ransom to avoid U.S. laws that prohibited paying money to drug dealers. But the FBI, under the cautious leadership of William Webster, a former judge whom Jimmy Carter had appointed to run the Bureau, strenuously objected. North then backed a Clarridge operation to kidnap a Lebanese Shiite cleric, the head of Islamic Jihad, the organization holding Buckley. Clarridge wanted to trade the cleric for the CIA station chief. Again, the FBI’s fierce resistance scuttled the plan.
Clarridge fumed at the FBI’s intransigence and lobbied Casey to give the Agency more power in fighting terrorism. In January 1986, with a green light from Ronald Reagan, Casey created the Counterterrorism Center (CTC). Clarridge became its chief and he directed a staff of two hundred CIA officers, mostly analysts, as well as ten people loaned from other government intel and security agencies.8
Clarridge initially wanted to rely on the CIA’s foreign stations for surveillance, intel gathering, and informer recruitment, but that was not feasible since they were running at capacity. And, as Clarridge recalled, “the station chiefs were each narrowly focused on their own geographic divisions, while terrorism was a global problem that respected no boundaries.”9
Much to Clarridge’s disappointment, his only remaining option was to rely on the FBI for most of CTC’s field and operational assistance. It was against his better judgment since he thought the Webster-run FBI was far too risk-averse. Working with the Bureau also meant Clarridge had to run operations plans past FBI lawyers. “No one was very excited at the prospect of sharing national security secrets with lawyers at Justice,” recalled Clarridge.10
Clarridge quickly proposed an ambitious and risky operation to kidnap the Islamic Jihad hijackers of TWA Flight 847 and to fly them to America for trial. Webster contended the operation was likely to fail and probably violated both international and U.S. law. The standoff between Clarridge and Webster killed the plan.
The next proposed CTC op was to kidnap Mohammed Hussein Rashid, a top bomb maker who had gotten explosives past airport security machines hidden in a Sony Walkman. A CTC operation to grab Rashid in the Sudan failed. Clarridge blamed the FBI, whose field agents were responsible for what the bureaucracy dubbed an “extraordinary rendition.” The Bureau complained that the Agency’s intelligence was flawed.
Tensions between the CIA and FBI worsened during a series of bungled operations. Not only did the CTC botch the Rashid kidnapping, but a squad dispatched to free Beirut station chief Buckley also failed. It was also unsuccessful in tracking down the Libyan terrorists who bombed a Berlin disco frequented by American soldiers. The 1985 hijackings of TWA Flight 847 and the cruise liner Achille Lauro were headline news and made the U.S. look vulnerable and weak.
The FBI began a whisper campaign in Washington that the CIA’s jealous stewardship of CTC was its ruination. Those back door complaints resulted in a task force headed by Vice President George H.W. Bush. It proposed the FBI run its own “intelligence fusion center” to complement the CTC, but its recommendations were never implemented.11
Senior CIA officials complained bitterly to Reagan’s national security team that the FBI was overly cautious and that America was vulnerable to Islamic terrorists who had entered on legal visas and had set up sleeper cells. Reagan responded in September 1986 by creating the Alien Border Control Committee (ABCC), an interagency task force designed to block the entry of suspected terrorists while also finding and deporting militants who had entered the country illegally or had overstayed their visas. The CIA and FBI joined the ABCC effort with great fanfare.
The ABCC had its first success only six months after its formation. The CIA tipped off the FBI about a group of suspected Palestinian terrorists in Los Angeles and the Bureau arrested eight men. But instead of lauding the arrests, civil liberties groups contended that the ABCC should not be allowed to use information from the government’s routine processing of visa requests. Massachusetts Democratic Congressman Barney Frank, a strong civil liberties advocate, led a successful effort to amend the Immigration and Nationality Act so that membership in a terrorist group would no longer be sufficient reason to deny anyone a visa. The Frank amendment meant a visa could only be denied if the government could prove that the applicant had committed an act of terrorism.12 The amendment thereby rendered the ABCC toothless.
Meanwhile, the worsening relationship between the CIA and FBI hit a nadir within a couple of years when the weapons-for-hostages (Iran–Contra) scandal broke. The three key figures were the CIA’s Casey and Clarridge and the National Security Council’s North, all senior Counterterrorism Center officials. The FBI’s Buck Revell worried that the CIA and NSC might have violated U.S. laws prohibiting aid being given to the Contras and negotiating with terrorists. After Casey testified to Congress in November that he did not know who was behind the sale of two thousand TOW missiles to Iran (though the Agency was actively involved), Revell told FBI Director Webster that he thought Casey and other top Agency officials were obstructing justice. Webster authorized the Bureau to open a criminal investigation.
Casey was incapacitated by a stroke and hospitalized in early December. He resigned as CIA Director after surgery for a brain tumor a month later. Reagan tapped Robert Gates, the Agency’s Deputy Director, to take charge. But Gates soon withdrew his name when it became clear that questions about his role in Iran–Contra had scuttled any chance for Senate confirmation.13 After Gates’s withdrawal, Reagan offered the CIA job to Republican Senator John Tower, the head of the president’s Iran–Contra board. Tower declined. Reagan then got a no from James Baker, his chief of staff.14 Reagan and his team were in a panic. There were a dozen names on their list of possible CIA directors, but the president was set to make his first comments about the Iran–Contra scandal in a highly anticipated address to the nation on the evening of Wednesday, March 4. Reagan wanted to pick a new CIA director before that speech. Everyone agreed it had to be someone who would easily obtain Senate confirmation. That narrowed the field. On the morning of his national speech, Reagan met with FBI Director William Webster—who was in the final year of his 10-year term running the FBI—and surprised everyone by offering him the CIA post.
The news that the cautious FBI director had been asked to run the CIA sent shock waves through Langley and the ranks of senior spies. Webster was a Christian Scientist who relished a reputation as an inflexible straight arrow. He boasted his only vices were chocolate and tennis. Historian Thomas Powers concluded that the “CIA would rather be run by a Cub Scout den mother than the former head of the FBI.”15 Webster was disparaged by top officers like Clarridge, who had come to know his risk-averse management style.
“Since we at CTC had been working so closely with the FBI on terrorism,” Clarridge told me, “we had already heard a lot about Webster, and none of it was good. From the street level to the top echelons, they detested Webster because they saw him as an egotistical lightweight, a social climber, and a phony.”16
Webster had no background in foreign policy or world affairs. While Casey was judged inside the CIA as a kindred risk-taking spirit, especially by the covert teams, Webster’s cautious nature was exacerbated by an overwhelming fear of failure coupled with his strict insistence on not bending the letter of the law.
One of Webster’s first moves was to replace the CIA’s popular George Lauder, who had spent twenty years in the Operations Directorate, with William Baker, an FBI colleague. It was such an unpopular choice that no one clapped when Baker was introduced in the CIA’s main auditorium. When Baker told the agents they should study a new house manual called “Briefing Congress” and embrace the four “C’s”—candor, completeness, consistency, and correctness—many in the audience audibly snickered. “It was vintage FBI,” one agent in attendance told me. “It was what we expected.”17
Baker was not the only FBI colleague Webster brought along. Peggy Devine, his longtime executive secretary, had earned the nickname “Dragon Lady” at the Bureau. Also, his FBI chief of staff, John Hotis, and a group of “special assistants” made up what CIA employees derisively dubbed either the “FBI Mafia” or the “munchkins.” Some in the FBI contingent had Ivy League law degrees, but none had any intelligence background. And they effectively sealed Webster off from the rest of the Agency.
Meanwhile, the FBI investigation that had begun under Webster into the Iranian arms sales had kicked into high gear. FBI agents raided Oliver North’s office and retrieved key documents his secretary did not have time to shred. Another FBI team served an unprecedented warrant at CIA headquarters in Langley, VA. The agents ordered Clair George, the CIA deputy director for operations, to open his office safe. It contained a document, with two of George’s fingerprints, that showed he had misled Congress. That produced a ten-count indictment against George and the removal of three CIA station chiefs. As for Clarridge, a few days before the statute of limitations expired, he was indicted on seven counts of perjury and making false statements to Congress. Inside the CTC, many employees wore T-shirts with slogans supporting him.
There was a widespread sentiment inside the CIA that the FBI had gone from being a recalcitrant partner to an avowed enemy whose purpose was to destroy the Agency’s hierarchy and its way of conducting its operations. As Clarridge noted:
We could probably have overcome Webster’s ego, his lack of experience with foreign affairs, his small-town-America world perspective, and even his yuppier-than-thou arrogance. What we couldn’t overcome was that he was a lawyer. All his training as a lawyer and a judge was that you didn’t do illegal things. He never could accept that this is exactly what the CIA does when it operates abroad. We break the laws of other countries. It’s how we collect information. It’s why we’re in business. Webster had an insurmountable problem with the raison d’être of the organization he was brought in to run.18

Clarridge was not the only one who thought Webster’s legal background was a handicap for running a spy agency. Pakistan’s President Muhammad Zia-ul-Haq once asked Webster how it was possible for a lawyer to head the CIA. Webster did not answer.
Even before Clarridge’s indictment, Webster had officially reprimanded him for his role in Iran–Contra and after promising to reassign him as the CTC director, had forced him to resign in June 1988. I spoke to nearly a dozen former operatives from the Directorate of Operations who confirmed, on background only, that the anger Clarridge expressed on the record about Webster was widespread throughout the CIA. The Agency had long prided itself on an unwritten code—Loyalty Up and Loyalty Down—and many CIA veterans felt that Webster had trashed that by going after agents like Clarridge.
It got worse for Webster when he tasked his chief of staff, John Hotis, and Nancy McGregor, a 28-year-old law clerk who had been one of his FBI administrative assistants, to rewrite the CIA’s regulations for covert operations. Webster had infuriated many intelligence agents when he compared covert ops to the FBI’s use of undercover agents in criminal probes. Under the new Webster rules, lawyers had to sign off on all covert plans. There was a long checklist required to get operations approved. The informal and fast-moving process of the past was history. Webster argued his rules instilled long overdue accountability in the Agency’s covert work. It was, countered CIA officials, the same framework that existed at the FBI and that had hindered the Bureau’s investigations for decades.
With the new rules in place over covert operations and having purged the CIA of half a dozen senior officers connected to Iran–Contra, some of the criticism of Webster started going public. Tom Polgar, a retired agent, wrote in an opinion editorial in The Washington Post that “the new watchword at the agency seems to be ‘Do No Harm’—which is fine for doctors but may not encourage imagination and initiative in secret operations.”19
Meanwhile, William Sessions, a former federal judge from San Antonio and a close friend of Webster’s, had become the new FBI director. With encouragement from Webster, Sessions expanded the number of FBI agents serving in counterintelligence abroad. Instead of welcoming the help, the CIA leadership was further irritated, viewing the FBI as inept competitors who were only likely to compromise intelligence operations.
Webster thought he could reform the Agency to share information with the FBI. In April 1988, Webster announced a totally redesigned Counterintelligence unit. Headquartered in Langley, VA, its mission was to teach CIA and FBI agents how to compile, organize, and share data that would be useful to both agencies. The first test of that cooperation came in December 1988, when a bomb blew up Pan Am Flight 103 above Lockerbie, Scotland, killing 270 people. The U.S. government did not disclose that three Middle East-based CIA officers flying home for the Christmas holidays were among the victims.
Israel’s Mossad intelligence had warned the CIA two weeks earlier that they had intercepted information that a Pan Am flight from Frankfurt to the U.S. would be bombed in December. Pan Am Flight 103 originated in Frankfurt and was bound for the U.S. The CIA had never passed the warning to the FBI.
Webster’s redesigned CTC was put in charge of the U.S. investigation. In Scotland, more than a thousand police, soldiers, and bomb technicians scoured hundreds of square miles around the crash site. They bagged thousands of pieces of evidence, and in that haul was a fragment of a circuit board the size of a small fingernail. It matched an identical board found in a bomb-timing mechanism used in a 1986 terror attack in the West African country of Togo. CTC tracked the circuit board to a consignment of timers manufactured by a Swiss company that had sold twenty of them to Libya.
Although progress had been made on finding out how the bombing was done, according to one U.S. official it was not very long before the investigation became a “chaotic mess” of noncooperation.20 Within a few months, there were competing theories about who was responsible. The CIA blamed Iran for hiring a Damascus-based radical Palestinian faction to carry out the operation. Taking out the American plane was, according to Vincent Cannistraro, a senior CTC officer at the CIA, payback for the mistaken 1988 downing of an Iranian Airbus by a U.S. naval cruiser, which resulted in 290 civilian deaths. Cannistraro had become, after Clarridge’s departure, the major power inside CTC and its driving force.
Meanwhile, the FBI thought Libya was the sole culprit, seeking revenge for the U.S. bombings in 1986.
Both agencies leaked their internal disagreements. Anonymous CIA officials were quoted in the press mocking the FBI’s analytical reports on the bombing as being “like essays from grade school,” whereas an unidentified FBI agent said that “CIA believe they have a lot, but it’s a Styrofoam brick.”21
Even when the Libyans became prime suspects, the two agencies fought over what should be done. The FBI wanted to wait for indictments and then arrest those charged. The CIA’s Webster, not surprisingly, supported the letter-of-the-law approach. But inside the CIA, especially in CTC, agents bristled that the Libyans were beyond the reach of U.S. law. Cannistraro argued for “removing” the suspects at any cost, even if that meant assassinating them or allowing Israel to do it on behalf of the U.S. But Webster would brook no such discussion. Frustrated with Webster’s limitations on covert ops, Cannistraro abruptly resigned in September 1990, just before Iraq invaded Kuwait. “The CTC is starting to look too much like the FBI,” he disparagingly told a former colleague after giving Webster his notice.22
A 1990 Senate panel concluded that Webster’s efforts had failed to overcome the extensive fragmentation and competition in the government’s counterintelligence efforts. The panel concluded that it was virtually impossible to cure the dysfunction by merely insisting that the CIA and FBI drop their long-standing mutual distrust and dislike. Only by completely recreating America’s intelligence and crime-fighting apparatus, the panel suggested, might it be possible to make substantive progress.
The 9/11 Commission made a series of recommendations for changes that could finally force the CIA and FBI and other elements of the national security apparatus to work together more effectively. Thousands of internal intelligence documents released since 2001, as well as Inspector General reports from the CIA and FBI, have paid great lip service to the “imperative of reform.”
The result over two decades later?
The CIA and FBI have overhauled their training of intelligence analysts, streamlined the management of the information collected and analyzed, and improved the coordination between the analytical units and operational teams. There are now redundancies designed to prevent intel from falling between the cracks. And there is greater accountability for failures.
What about the cooperation between the premier American law enforcement/intelligence agencies? Conversations with half a dozen currently serving and former officials from both the CIA and FBI give a mixed picture at best. The deep animosity from the Clarridge/Webster days is now mostly history. No one thinks of the other agency as a threat to its own survival. But skepticism that there is any benefit to be had by partnering with one another remains a significant obstacle to cooperation. CIA officers continue largely to view the FBI as highly paid police officers who are hobbled by Department of Justice lawyers. The FBI officials to whom I spoke pointed repeatedly to the CIA’s approval of torture for 9/11 detainees as a key reason the main terrorists at Guantanamo Bay have not gone to trial. “Maybe they would be better off,” one former FBI Counterintelligence officer told me, “if they had lawyers who told them when they were crossing the line instead of just rubber stamping every wild idea coming out of Langley.”
Do the CIA and FBI work together better today than before 9/11? Yes, in many respects. They have often had little choice, given the rapid growth of more than 200 Joint Terrorism Task Forces (JTTFs) since 1980. In the JTTFs, the FBI and CIA are only two of more than 30 law enforcement and intelligence agencies supplying analysts, investigators, linguists, and hostage rescue specialists to combat international terrorism directed at the homeland. Since they do not run the operations, their sniping at each other is not as evident. But that does not mean the JTTFs are free of finger pointing between all the partners. For instance, a 2009 arrest of three Afghans in a terror plot in New York City was widely heralded as a law enforcement triumph, but I discovered that FBI agents were privately furious at New York Police Department detectives for blowing a chance to snare a larger terror sleeper cell.23
The rivalry for control of operations and investigations between the two 900-pound security gorillas in the U.S.—the CIA and FBI—continues. So does the desire to take credit for successful missions and put the blame on the other for failures. There are billions in annual budgets at stake, and a reputation that each jealously guards. “They don’t make us better,” one retired CIA analyst contended in a conversation I had this summer. “They just compromise what we do best.”
That attitude cannot be eliminated through any series of bureaucratic reforms suggested by presidential commissions and Congressional hearings. It is a shame, however, because the failure of the country’s two premier national security agencies to work together seamlessly to fight terrorism and today’s enormous criminal cartels only works to the benefit of America’s many enemies. Crime and punishment is a difficult enough subject when the targets are international terrorists. However, infighting among those tasked with enforcement makes it exponentially more difficult.
As a sociologist interested in the scientific study of social life, I’ve long been concerned about the ideological bent of much of sociology. Many sociologists reject outright the idea of sociology as a science and instead prefer to engage in political activism. Others subordinate scientific to activist goals, and are unclear as to what they believe sociology’s purpose should be. Still others say different things depending on the audience.
The American Sociological Association (ASA) does the latter. In December 2023, the Board of Governors of Florida’s state university system removed an introductory sociology course from the list of college courses that could be taken to fulfill part of the general education requirement. It seemed clear that sociology’s reputation for progressive politics played a role in the decision. Florida’s Commissioner of Education, for example, wrote that sociology had been hijacked by political activists.1 The ASA denied the charge and went on to declare that sociology is “the scientific study of social life, social change, and the social causes and consequences of human behavior.”
While that definition certainly aligns with my vision of what sociology should be, it contrasts with another recent statement the ASA itself made when announcing last year’s annual conference theme, “Intersectional Solidarities: Building Communities of Hope, Justice, and Joy,” which, as the ASA website explains, “emphasizes sociology as a form of liberatory praxis: an effort to not only understand structural inequities, but to intervene in socio-political struggles.”2 It’s easy to see how Florida’s Commissioner of Education somehow got the idea that sociology has become infused with ideology.
The ASA’s statement in defense of sociology as the science of social life seems insincere. That’s unfortunate—we really do need a science of social life if we’re going to understand the social world better. And we need to understand the world better if we’re going to effectively pursue social justice. The ASA’s brand of sociology as liberatory praxis leads not only to bad sociology, but also to misguided efforts to change the world. As I’ve argued in my book How to Think Better About Social Justice, if we’re going to change the world for the better, we need to make use of the insights of sociology. But bad sociology only makes things worse.
Contemporary social justice activism tends to draw from a sociological perspective known as critical theory. Critical theory is a kind of conflict theory, wherein social life is understood as a struggle for domination. It is rooted in Marxist theory, which viewed class conflict as the driver of historical change and interpreted capitalist societies in terms of the oppression of wage laborers by the owners of the means of production. Critical theory understands social life similarly, except that domination and oppression are no longer simply about economic class but also race, ethnicity, gender, religion, sexuality, gender identity, and much more.
There are two problems with social justice efforts informed by critical theory. First, this form of social justice—often called “critical social justice” by supporters and “wokeism” by detractors—deliberately ignores the insights that might come from other sociological perspectives. Critical theory, like conflict theory more broadly, is just one of many theoretical approaches in a field that includes a number of competing paradigms. It’s possible to view social life as domination and oppression, but it’s also possible to view it as a network of relationships, or as an arena of rational transactions similar to a marketplace, or as a stage where actors play their parts, or as a system where the different parts contribute to the functioning of the whole. If you’re going to change the social world, it’s important to have some understanding of how social life works, but there’s no justification for relying exclusively on critical theory.
The second problem is that, unlike most other sociological perspectives, critical theory assumes an oppositional stance toward science. This is partly because critical theory is intended not just to describe and explain the world, but rather to change it—an approach the ASA took in speaking of sociology as “liberatory praxis.” However, the problem isn’t just that critical theory prioritizes political goals over scientific ones, it’s that it also sees science as oppressive and itself in need of critique and dismantling. The claim is that scientific norms and scientific knowledge—just like other norms and other forms of knowledge in liberal democratic societies—have been constructed merely to serve the interests of the powerful and enable the oppression of the powerless.
Critical theory makes declarations about observable aspects of social reality, but because of its political commitments and its hostile stance toward scientific norms, it tends to act more like a political ideology than a scientific theory. As one example, consider Ibram X. Kendi’s assertions about racial disparities. Kendi, a scholar and activist probably best known for his book How to Be an Antiracist, has said, “As an anti-racist, when I see racial disparities, I see racism.”3 The problem with this approach is that while racism is one possible cause of racial disparities (and often the main cause!), in science, our theories need to be testable, and they need to be tested. Kendi doesn’t put his idea forward as a proposition to be tested but instead as a fundamental truth not to be questioned. In any true science, claims about social reality must be formulated into testable hypotheses. And then we need to actually gather the evidence. Usually what we find is variation, and this case is likely to be no different. That is, we’re likely to find that in some contexts racism has more of a causal role than in others.
We often want easy answers to social problems. Social justice activists might be inclined to turn to would-be prophets who proclaim what seems to be the truth, rather than to scientists who know we have to do the legwork required to understand and address things. Yes, science gives us imperfect knowledge, and it points to the difficulties we encounter when changing the world… but since we live in a world of tradeoffs, there are seldom easy answers to social problems. We can’t create a perfect world—utopia isn’t possible—so any kind of social justice rooted in reality must try to increase human flourishing while recognizing that not all problems can be eliminated, certainly not easily or quickly.
What does it all mean? For one, we should be much more skeptical about one of critical theory’s central claims—that the norms and institutions of liberal democratic societies are simply disguised tools of oppression. Do liberal ideals such as equality before the law, due process, free speech, free markets, and individual rights simply mask social inequalities so as to advance the interests of the powerful? Critical theorists don’t really subject this claim to scientific scrutiny. Instead, they take the presence of inequalities in liberal societies as self-sufficient evidence that liberalism is responsible for those inequalities. Yet any serious attempt to pursue social justice informed by a scientific understanding of the world would involve comparing liberal democratic societies with other societies, both present and past.
Scientific sociology can’t tell us the best way to organize a society and social justice involves making tradeoffs among competing values. We may never reach a consensus on what kind of society is best, but we should consider the possibility that liberal democracies seem to provide the best framework we yet know of for pursuing social justice effectively. At the very least, they provide mechanisms for peacefully managing disputes in an imperfect world.
Fernanda Pirie is Professor of the Anthropology of Law at the University of Oxford. She is the author of The Anthropology of Law and has conducted fieldwork in the mountains of Ladakh and the grasslands of eastern Tibet. She earned a DPhil in Social Anthropology from Oxford in 2002, an MSc in Social Anthropology at University College London in 1998, and a BA in French and Philosophy from Oxford in 1986. She spent almost a decade practicing as a barrister at the London bar. Her most recent book is The Rule of Laws: A 4,000-Year Quest to Order the World.
Skeptic: Why do we need laws? Can’t we all just get along?
Fernanda Pirie: That assumes we need laws to resolve our disputes. The fact is, there are plenty of societies that do perfectly well without formal laws, and that’s one of the questions I explore in my work: Who makes the law, and why? Not all sophisticated societies have created formal laws. For instance, the ancient Egyptians managed quite well without them. The Maya and the Aztec, as far as we can tell, had no formal laws. Plenty of much smaller communities and groups also functioned perfectly well without them. So, using law to address disputes is just one particular social approach. I don’t think it’s a matter of simply getting along; I do believe it’s inevitable that people will come into conflict, but there are many ways to resolve it. Law is just one of those methods.
Skeptic: Let’s talk about power and law. Are laws written first, with an authority then needed to enforce them, which creates hierarchy in society? Or does hierarchy develop for some other reason, and then law follows to deal with that particular structure?
FP: I wouldn’t say there’s always a single direction of development. In ancient India, for example, a hierarchy gradually developed over several thousand years during the first millennium BCE, with priests—eventually the Brahmins—and the king at the top. This evolved into the caste system we know today. The laws came later in that process. Legal texts, written by the Brahmins, outlined rules that everyone—including kings—had to follow.
Skeptic: So, the idea of writing laws down or literally chiseling them in stone is to create something tangible to refer to. Not just, “Hey, don’t you remember, I said six months ago you shouldn’t do that?” Instead, it’s formalized, and everyone has a copy. We all know what it is, so you can hold people morally accountable for their actions.
FP: Exactly. That distinction makes a big difference. Every society has customs and norms; they often have elders or other sources of authority, who serve as experts in maintaining their traditions. But when it’s just a matter of, “This is what we’ve always done—don’t you remember?” some people can conveniently forget. Once something is written down, though, it gains authority. You can refer to the exact words, which opens up different possibilities for exercising power. “Look, these are the laws—everyone must know and follow them.” But it equally creates opportunities for holding people accountable.
Skeptic: So it’s a matter of “If you break the law, then these are the consequences.” It’s almost like a logic problem—if P, then Q. There’s an internal logic to it, a causal reasoning where B follows A, so we assume A causes B. Is something like that going on, cognitively?
FP: Well, that cause-and-effect form is a feature of many legal systems, but not all of them. It’s very prominent in the Mesopotamian tradition, which influenced both Jewish law and Islamic law, and eventually Roman law—the legal systems that dominate the world today. It’s associated with the specification of rights—if someone does this, they are entitled to that kind of compensation, or this must follow from that. But the laws that developed in China and India were quite different. The Chinese had a more top-down, punitive system, focused on discipline and punishment. It was still an “if-then” system, but more about, “If you do this wrong, you shall be punished.” It was very centralized and controlling. In Hindu India, the laws were more about individual duty: this is what you ought to do to be a good Hindu. If you’re a king, you should resolve disputes in a particular way. The distinctions between these systems aren’t always sharp, but the casuistic form is indeed a particular feature of certain legal traditions.
Laws have never simply been rules. They’ve created intricate maps for civilization. Far from being purely concrete or mundane, laws have historically presented a social vision, promised justice, invoked a moral order ordained by God (or the Gods), or enshrined the principles of democracy and human rights. And while laws have often been instruments of power, they’ve just as often been the means of resisting it. Yet, the rule of law is neither universal nor inevitable. Some rulers have avoided submitting themselves to the constraints of law—Chinese emperors did so for 2,000 years. The rule of law has a long history, and we need to understand that history to appreciate what law is, what it does, and how it can rule our world for better or worse.
Skeptic: In some ways it seems like we are seeking what the economist Thomas Sowell calls cosmic justice, where in the end everything is settled and everyone gets their just deserts. One purpose of the Christian afterlife is that all old scores are settled. God will judge everything and do so correctly. So, even if you think you got away with something, in the long run you didn’t. There’s an eye in the sky that sees all, and that adds an element of divine order to legal systems.
FP: Absolutely, and that characterizes many of the major legal systems, especially those associated with religion. Take the Hindu legal system—it’s deeply tied to a sense of cosmological order. Everyone must follow their Dharma, and the Brahmins set up the rules to help people follow their Dharma, so they can achieve a better rebirth. Similarly, Islamic Sharia law, which has had a poor reputation in recent times, is seen as following God’s path for the world, guiding people on how they should behave in accordance with a divine plan. Even the Chinese, who historically had a more top-down and punitive system, claimed that their emperors held the Mandate of Heaven—that’s why people had to obey them and their laws. They were at the top of the pyramid because of such divine authority.
Of course, there have also been laws that are much more pragmatic—rules that merchants follow to maintain their networks, or village regulations. Not all law is tied to a cosmic vision, but many of the most impressive and long-lasting legal systems have been.
Skeptic: The Arab–Israeli conflict can be seen as two people holding a deed to the same piece of land, each claiming, “The title company that guarantees my ownership is God and His Holy Book.” Unfortunately, God has written more than one Holy Book, leading both sides to claim divine ownership, with no cosmic court to settle the dispute.
FP: That’s been the case throughout history—overlapping legal and political jurisdictions. Many people today are worried about whether the nation-state, as we know it, is breaking down, especially with the rise of supranational laws and transnational legal systems. But it’s always been like this—there have always been overlaps between religious laws, political systems, and social norms. The Middle East is a perfect example, with different religious communities living side by side. It hasn’t always been easy, but over time, people have developed ways of coexisting. The current political battles in the Middle East are part of this ongoing tension.
Skeptic: In your writing, you offer this great example from the Code of Hammurabi, 1755–1750 BC. It is the longest, best-organized, and best-preserved legal text from the ancient Near East, written in the Old Babylonian dialect of Akkadian and inscribed on a stone stele discovered in 1901.
“These are the judicial decisions that Hammurabi, the King, has established to bring about truth and a just order in his land.” That’s the text you quoted. “Let any wronged man who has a lawsuit”—interesting how the word ‘lawsuit’ is still in use today—”come before my image as King of Justice and have what is written on my stele read to him so that he may understand my precious commands, and let my stele demonstrate his position so that he may understand his case and calm his heart. I am Hammurabi, King of Justice, to whom Shamash has granted the truth.”
Then you provide this specific example: “If a man cuts down a tree in another man’s date orchard without permission, he shall pay 30 shekels of silver. If a man has given a field to a gardener to plant as a date orchard, when the gardener has planted it, he shall cultivate it for four years, and in the fifth year, the owner and gardener shall divide the yield equally, with the owner choosing first.”
This sounds like a modern business contract, or today’s U.S. Uniform Commercial Code.
FP: Indeed, it’s about ensuring fairness among the farmers, who were the backbone of Babylon’s wealth at the time. I also find it fascinating that there are laws dealing with compensation if doctors kill or injure their patients. We often think of medical negligence as a modern issue, but it’s been around for 4,000 years.
Skeptic: But how did they determine the value of, say, a stray cow or cutting down the wrong tree? How did they arrive at the figure of 30 shekels?
FP: That’s a really interesting question. These laws were meant to last, and even in a relatively stable society, the value of money would have changed over time. People have studied this and asked how anyone could follow these laws for the hundreds of years that the stele stood and people referred to it. My view is that these laws were more exemplary—they probably reflected actual cases, decisions that judges were making at the time.
Although Hammurabi wrote down his rules, he didn’t expect people to apply them exactly as written, as we do with modern legal codes. Instead, they gave a sense of the kind of compensation that would be appropriate for different wrongs or crimes—guidelines, not hard rules. Hammurabi likely collected decisions from various judicial systems and grafted them into a set of general laws, but they still retain the flavor of individual judgments.
Skeptic: Is there a sense of “an eye for an eye, a tooth for a tooth”—where the punishment fits the crime, more or less?
The Code of Hammurabi inscribed on a basalt slab on display at the Louvre, Paris. (Photo by Mbzt via Wikimedia)

FP: Absolutely. Hammurabi was trying to ensure that justice was done by laying out rules for appropriate responses to specific wrongs, ensuring fairness in compensation. But it’s crucial to understand that the famous phrase, “an eye for an eye, a tooth for a tooth,” which appears first in Hammurabi’s code and later in the laws of the Book of Exodus, wasn’t about enforcing revenge. Even though there’s a thousand-year gap between Hammurabi and the Bible, scholars believe this rule was about limiting revenge, not encouraging it. It meant that if someone sought revenge, it had to be proportional—an eye for an eye—but no more.
In other words, they wanted to prevent cycles of violence that arise from feuds. In a feuding society, someone steals a sheep, then someone retaliates by stealing a cow, and then someone tries to take an entire herd of sheep. The feud keeps getting bigger and bigger. So, the “eye for an eye” rule was a pragmatic approach in a society where feuding was common. It was meant to keep things under control.
Skeptic: From the ruler’s perspective, a feud is a net loss, regardless of who’s right or wrong.
FP: Feuding is a very common way of resolving disputes, especially among nomadic people. The idea, which makes a lot of sense, is that if you’re a nomadic pastoralist, your wealth is mobile—it’s your animals that have feet, which can be moved around. That also makes it easy to steal. If you’re a farmer, your wealth is tied to your land, so someone can’t run off with it. Since nomads are particularly vulnerable to theft, having a feuding system acts as a defense mechanism. It’s like saying, “If you steal my sheep, I’ll come and steal your cow.” You still see this in parts of the world, such as eastern Tibet, where I’ve done fieldwork. So, yes, kings and centralized rulers want to stop feuds because they represent a net loss. They want to put a lid on things and so establish a more centralized system of justice. This is exactly what Hammurabi was trying to do, and you see similar efforts in early Anglo-Saxon England, and all over the world.
Another interesting point is that every society has something to say about homicide. It’s so important that they have to lay out a response. However, I don’t think we should assume these laws were meant to stop people from killing each other. The fact is, we don’t refrain from murder because the law tells us not to. We refrain because we believe it’s wrong—except in the rare cases where morality has somehow become twisted, self-help justice takes over, and people take the law into their own hands. The law, in this case, is more about what the social response should be once a killing has occurred. Should there be compensation? Punishment? What form should it take?
Skeptic: Is this why we need laws that are enforced regularly, fairly, justly, and consistently, so people don’t feel the need to take matters into their own hands?
FP: I’d put it a bit more broadly: we need systems of justice, which can include mediation systems. In a village in Ladakh—part of northern India with Tibetan populations where I did fieldwork—they didn’t have written laws, but they had very effective ways of resolving conflicts. They put a lot of pressure on the parties to calm down, shake hands, and settle the dispute. It’s vastly different from the nomads I worked with later in eastern Tibet, who had a very different approach. But both systems were extremely effective, and there was a strong moral sense that people shouldn’t fight or even get angry. It’s easy to look at these practices and say they’re not justice, especially when serious things like injuries, killings, or even rape are settled in this way. But for these villages, maintaining peace and order in the community was paramount, and it worked for them.
Every society needs some system to restore order and a sense of justice. What constitutes justice can vary greatly—sometimes it’s revenge, sometimes it’s about restoring order. Laws can be part of that system, and in complex societies, it becomes much harder to rely on bottom-up systems of mediation or conciliation. That’s where having written laws and judges becomes very useful.
Skeptic: In communities without laws or courts, do they just agree, “Tomorrow we’re going to meet at noon, and we’ll all sit down and talk this out?”
FP: Essentially, yes. In the communities I spent time with, it was the headman’s duty to call a village meeting, and everyone was expected to attend and help resolve the issue. In a small community like that, you absolutely could do it.
Skeptic: And if you don’t show up?
FP: There’s huge social pressure for people to play their part in village politics and contribute to village funds and activities.
Skeptic: And if they don’t, then what? Are they gossiped about, shunned, or shamed?
FP: Yes—all of those things, in various ways.
Skeptic: Let’s talk about religious laws. You mentioned Sharia, and from a Western perspective, it’s often seen as a disaster because it’s been hyped up and associated with terrorism. Can you explain how religious laws differ from secular laws?
FP: I’m wondering how much one can generalize here. I’m thinking of the religious laws of Hindu India, Islamic laws, Jewish laws, and I suppose Canon law in Europe—Christian law. I hesitate to generalize, though.
Skeptic: What often confounds modern minds are the very specific laws in Leviticus—like which food you can eat, which clothes you can wear, and how to deal with adultery, which would certainly seem to concern the affected spouse. But why should the state—or whatever governing laws or body—even care about such specific issues?
FP: This highlights a crucial point. In Jewish, Hindu, and Islamic law, the legal and moral spheres are part of the same domain. A lot of these laws are really about guiding people on how to live moral lives according to dharma, God’s will, or divine command. The distinction we often make between law and religion, or law and morality, doesn’t apply in those contexts. The laws are about instructing people on how to live properly, which can involve family relations, contracts, land ownership, but also prayer and ritual.
As for the laws in Leviticus, they’ve puzzled people for a long time. They seem to be about purity and how Jews should live as good people, following rules of cleanliness, which partly distinguished them from other tribes.
Skeptic: What exactly is Sharia law?
FP: Sharia literally means “God’s path for the world.” It’s not best translated as “law” in the way we understand it. It’s more about following the path that God has laid out for us, a path we can’t fully comprehend but must do our best to interpret. The Quran is a guide, but it doesn’t lay out in detail everything we should do. The early Islamic scholars—who were very important in its formative days—studied the Quran and the Hadith (which tradition maintains records the Prophet’s words and actions) to work out just how Muslims should live according to God’s command. They developed texts called fiqh, which are what we might call legal texts, going into more detail about land ownership, commercial activities, legal disputes, inheritance, and charitable trusts.
Islamic law has very little to say about crime. That’s one misconception. People tend to think it’s all about harsh punishments, but the Quran mentions crime only briefly. That was largely the business of the caliphs—the rulers—who were responsible for maintaining law and order. Sharia is much more concerned with ritual and morality, and with civil matters like inheritance and charitable trusts.
Skeptic: Much of the biblical legal and moral code has changed over time. Christianity went through the Enlightenment. But Islam didn’t seem to go through a similar process. Is that a fair characterization?
FP: I’d say that’s partly right. But I’ve never thought about it in exactly those terms. In any legal tradition, there’s resistance to change—that’s kind of the point of law. It’s objective and fixed, so any change requires deep thought. In the Islamic world, there’s been a particularly strong sense that it’s not for people to change or reinterpret God’s path. The law was seen as something fixed.
But in practice, legal scholars, called muftis, were constantly adapting and changing legal practices to suit different contexts and environments. That’s one of the real issues today—Islamic law has become a symbol of resistance to the West, appealing to fundamentalism by going “back to the beginning.”
Skeptic: Let’s talk about stateless law of tribes, villages, networks, and gangs. For example, we tend to think of pirates as lawless, chaotic psychopaths who just randomly raided commerce and people. But, in fact, they were pretty orderly. They had their own constitutions. Each ship had a contract that everyone had to sign, outlining the rules. There’s even this interesting analysis of the Jolly Roger (skull and crossbones) flag. Why fly that flag and alert another ship that you’re coming? In his book The Invisible Hook: The Hidden Economics of Pirates, the economist Peter Leeson argued that it is a signal: “We’re dangerous pirates, and we’re coming to take your stuff, so you might as well hand it over to us, and we won’t kill you.” It’s better for the pirates because they can get the loot without the violence, and it’s better for the victims because they get to keep their lives. Occasionally, you do have to be brutal and make sure your reputation as a badass pirate gets a lot of publicity, so people know that when they see the flag, they should just surrender. But overall, it was a pretty orderly system.
FP: Yes, but it’s only kind of organized. That’s the point. For example, in The Godfather Don Corleone was essentially making up his own rules, using his power to tell others what he wanted. That’s the nature of the Mafia—yes, they had omertà (the rule of silence) and rules about treating each other’s wives with respect, but these rules were never written down. Alleged members who went on trial even denied—under oath—that any kind of organization or rules existed. This was particularly true with the Sicilian Mafia. The denial served two purposes: first, it protected them from outside scrutiny, and second, it allowed powerful figures like Don Corleone—or the real-life Sicilian bosses—to bend the rules whenever they saw fit. If the rules aren’t written down, it’s harder to hold them accountable. They can simply break the rules and impose their will.
Skeptic: Let’s discuss international law. In 1977, David Irving published Hitler’s War, in which he claimed that Hitler didn’t really know about the Holocaust. Rather, Irving blamed it on Himmler specifically, and other high-ranking Nazis in general, along with their obedient underlings. Irving even offered $1,000 to anyone who could produce an order from Hitler saying, “I, Adolf Hitler, hereby order the extermination of European Jewry.” Of course, no such order exists. This is an example of how you shift away from a legal system. The Nazis tried to justify what they were doing with law, but at some point, you can’t write down, “We’re going to kill all the Jews.” That can’t be a formal law.
FP: Exactly. Nazi Germany had a complex legal case, and I’m not an expert on it, but you can see at least a couple legal domains at play. First, they were concerned with international law, especially in how they conducted warfare in the Soviet Union. They at least tried to make a show of following international laws of war. Second, operationally, they created countless laws to keep Germany and the war effort functioning. They used law instrumentally. But when they felt morally uncomfortable with what they were doing, the obvious move was to avoid writing anything down. If it wasn’t documented, it wasn’t visible, and so it became much harder to hold anyone accountable.
Skeptic: During the Nuremberg trials, the defense’s argument was often, “Well, we lost, but if we had won, this would have been legal.” So they claimed it wasn’t fair to hold these trials since they violated the well-established principle of ex post facto, because there was no international law at the time. National sovereignty and self-determination were the norm, so they were saying, in terms of the law of nations, “We were just doing what we do, and it’s none of your business.”
View from above of the judges' bench at the International Military Tribunal in Nuremberg. (Source: National Archives and Records Administration, College Park.)

FP: Legally speaking, the Nuremberg trials were both innovative and hugely problematic. The court assumed the power to sit in judgment on what the leaders of independent nation-states were doing within their borders, or at least largely within their borders (the six largest Nazi death camps were in conquered Poland). But it was revolutionary in terms of developing the concepts of genocide, crimes against humanity, and the reach of international law with a humanitarian focus. So yes, it was innovative and legally difficult to justify, but I don’t think anyone involved felt there was any question that what they were doing was the right thing.
Skeptic: It also established the legal precedent that, going forward, any dictator who commits these kinds of atrocities—if captured—would be held accountable.
FP: Exactly. And that eventually led to the movement that set up the International Criminal Court, where Slobodan Milošević was prosecuted, along with other leaders. That said, it’s extremely difficult to bring such people to trial, and ultimately the process can be more symbolic than practical.
Is the existence of the International Criminal Court really going to stop someone from committing mass atrocities? I doubt it. But it does symbolize to the world that genocide and other heinous crimes will be called out, and people must be held accountable. In a way, it represents the wider moral world we want to live in and the standards we expect nations to uphold.
Skeptic: Skeptic once asked Elon Musk: “When you start the first Mars colony, what documents would you recommend using to establish a governing system? The U.S. Constitution, the Bill of Rights, the Universal Declaration of Human Rights, the Humanist Manifesto, Atlas Shrugged, or Against the State, an anarcho-capitalist manifesto?” He responded with, “Direct democracy by the people. Laws must be short, as there is trickery in length. Automatic expiration of rules to prevent death by bureaucracy. Any rule can be removed by 40 percent of the people to overcome inertia. Freedom.”
FP: What a great, specific response! He’s really thought about this. Those are some interesting ideas, and I agree that there’s a lot to be said for direct democracy. The main problem with direct democracy, however, is that when you have too many people it becomes cumbersome. How do you gather everyone in a sensible way? The Athenians and Romans had huge assemblies, which created a sense of equality, and that’s valuable. Another thing I would do, which I’ve discussed with a colleague of mine, Al Pashar, is to rotate positions of power. She did research in Indian villages, and I’ve done work with Tibetans in Ladakh, and we found they had similar systems where every household provided a headman or headwoman in turn.
You might think rotating leadership wouldn’t work, because some people aren’t good leaders, while others are. Wouldn’t it be better to elect the best person for the job? But we found that rotating power is effective at preventing individuals from concentrating too much power. Yes, it’s good to have competent leaders, but when their family or descendants form an elite, you get a hierarchy and bureaucracy. Rotating power prevents that. That’s what I would do in terms of a political system.
As for laws, I’m less concerned with their length, as long as they are accessible and visible for everyone to read and reference. What’s important is having essential laws clearly posted for all to see. And there should be a good system for resolving disputes—perhaps mediation and conciliation rather than a lot of complex laws, with just a few laws in the background.
Skeptic: We’ll send this to Elon, and maybe he’ll hire you to join his team of social engineers.
FP: Although I’m not sure I want to go to Mars, I’d be happy to advise from the comfort of Oxford!
It’s not at all clear that clothes make the man, or woman. However, it is clear that although animals don’t normally wear clothes (except when people dress them up for their own peculiar reasons), living things are provided by natural selection with a huge and wonderful variety. Their outfits involve many different physical shapes and styles, and they arise through various routes. For now, we’ll look briefly just at eye-catching color among animals, and the two routes by which evolution’s clothier dresses them: sexual selection and warning coloration.
Human observers are understandably taken with the extraordinary appearance of certain animals, notably birds, as well as some amphibians and insects, and, in most cases, the dressed-up elegance of males in particular. In 1860, Darwin confessed to a fellow biologist that looking at the tail of a peacock made him “sick.” Not that Darwin lacked an aesthetic sense; rather, he was troubled that his initial version of natural selection didn’t make room for animals having one. After all, the gorgeous colors and extravagant length of a peacock’s tail threatened what came to be known (by way of Herbert Spencer) as “survival of the fittest,” because all that finery seemed to add up to an immense fitness detriment. A long tail is not only metabolically expensive to grow, but it’s more liable to get caught in shrubbery, while the spectacular colors make its owner more conspicuous to potential predators.
Eventually, Darwin arrived at a solution to this dilemma, which he developed in his 1871 book, The Descent of Man and Selection in Relation to Sex. Although details have been added in the ensuing century and a half, his crashing insight—sexual selection—has remained a cornerstone of evolutionary biology.
Sexual selection is sometimes envisaged as different from natural selection, but it isn’t. Natural selection is neither more nor less than differential reproduction, particularly of individuals and, thereby, genes. It operates in many dimensions, such as obtaining food, avoiding predators, surviving the vagaries of weather, resisting pathogens, and so on. And yet more on! Sexual selection is a subset of natural selection that is so multifaceted and, in some ways, so counterintuitive that it warrants special consideration, as Darwin perceived and subsequent biologists have elaborated.
The bottom line is that in many species, bright coloration—seemingly disadvantageous because it is expensive to produce and carries increased risk because of its conspicuousness—can nonetheless contribute to fitness insofar as it is preferentially chosen by females. In such cases, the upside of conspicuous colors, namely increased mating opportunities, compensates for their downsides.
Nothing in science is entirely understood and locked down, but biologists have done a pretty good job with sexual selection. A long-standing question is why, when the sexes are readily distinguishable (termed sexual dimorphism), it is nearly always the males that are brightly colored. An excellent answer comes from the theory of parental investment, first elaborated by Robert Trivers. The basic idea is that the fundamental biological difference between males and females lies not in their genitals but in how much each sex invests when it comes to producing offspring. Males are defined as the sex that makes sperm (tiny gametes that are produced in prodigious numbers), while females are egg makers (producing fewer gametes and investing substantially more time and energy in each one).
As a result, males are often capable of inseminating multiple females because their parental investment in each reproductive effort can be minimal. And so, males in many species, perhaps most, gain an evolutionary advantage by mating with as many females as possible. Because there are nearly always equal numbers of males and females—an important and well-researched statistical phenomenon that deserves its own treatment—this sets up two crucial dynamics. One is male-male competition, whereby males hassle with each other for access to the limiting and valuable resource of females and their literal mother load of parental investment. This in turn helps explain the frequent pattern whereby males tend to be more aggressive and outfitted with weapons and an inclination to use them.
The other dynamic, especially important for understanding the evolution of conspicuous male coloration, is female choice (known as epigamic selection). Because females are outfitted with their desirable payload of parental investment, for which males compete, females often (albeit not always) have the opportunity to choose among eager suitors. And they are disposed to go for the brightest, showiest available.
Darwin intuited this dynamic but was uncomfortable about it because at the time, it was felt that aesthetic preferences were a uniquely human phenomenon, not available to animals. Now we know better, in part because the mechanism of such preferences is rather well understood. Sexual selection is responsible for much of the organic world’s Technicolor drama, such as the red of male cardinals, the tails of peacocks, or the rainbow rear ends of mandrill monkeys, all of which make these individuals more appealing to potential mates. Why should such appeal be self-reinforcing? One answer is what evolutionary biologists call the sexy son hypothesis: the implicit genetic promise that females who mate with males thus adorned will likely produce sons who inherit their father’s flashy good looks and will therefore be attractive to the next generation of choosing females, ensuring that a female who makes such a choice will produce more grandchildren through her sexy sons.
There is a strong correlation between the degree of polygyny (number of females mated on average to a given male), or, more accurately, the ratio of variability in female reproductive success to that of males, and the amount of sexual dimorphism: the extent to which males and females of a given species differ physically. The greater the polygyny (e.g., harem size, as in elephant seals) the greater the sexual dimorphism, while monogamous species tend to be comparatively monomorphic, at least when it comes to body size and weaponry.
In most cases, female reproductive success doesn’t vary greatly among individuals, testimony to the impact of the large parental investment they provide. Female success is maximal when a female gets all her eggs fertilized and her offspring successfully reared, a number that typically doesn’t differ greatly from one female to another. By contrast, because of their low biologically mandated parental investment, some males have a very large number of surviving offspring—a function of their success in male-male competition along with female choice—while others are liable to die as unsuccessful, nonreproductive, and typically troublemaking bachelors.
When it comes to sexual dimorphism in coloration, however, some mysteries persist. Among some socially monogamous species (e.g., warblers), males sport brilliant plumage. This conundrum has been resolved to some extent by the advent of DNA fingerprinting, which has shown that social monogamy doesn’t necessarily correlate with sexual monogamy. Although males of many species have long been known to be sexually randy, verging on promiscuous, females were thought to be more monogamously inclined. However, we now know that females of many species also seek what are termed extra-pair copulations, and it seems likely that this, in turn, has selected for sexy male appearance, which outfits males to take advantage of any out-of-mateship opportunities.
It still isn’t clear why and how such a preference began in the case of particular species (and why it is less developed, or, rarely, even reversed in a few), but once established it becomes what the statistician and evolutionary theorist R.A. Fisher called a “runaway process.” Furthermore, we have some rather good ideas about how this process proceeds.
One is that being impressively arrayed is an indication of somatic and genetic health, which further adds to the fitness payoff when females choose these specimens. Being brightly colored has been shown to correlate with disease resistance, relative absence of parasites, being an especially adroit forager, and the like. In most cases, brightness is physiologically difficult to achieve, which means that such living billboards are also advertising their metabolic muscularity and, by implication, that they likely contain good genetic material as well.
Another, related hypothesis was more controversial when first proposed by Israeli ornithologist Amotz Zahavi, but has been increasingly supported. This is the concept of “selection for a handicap,” which acknowledges that such traits as bright coloration may well be a handicap in terms of a possessor’s survival. However, Zahavi’s “Handicap Principle” turns a seeming liability into a potential asset insofar as certain traits can be positive indicators of superior quality if their possessors are able to function effectively despite possessing them. It’s as though someone carried a 50-pound backpack and was nonetheless able to finish a race, and maybe even win it! An early criticism of this notion was that the descendants of such handicapped individuals would also likely inherit the handicap, so where’s the adaptive payoff accruing to females who choose to mate with them?
For one, there’s the acknowledged benefit of producing sons who will themselves be preferentially chosen—an intriguing case in which choosy females are more fit not through their sons, but by their grandchildren by way of those sons. In addition, there is the prospect that the choosing female’s daughters would be bequeathed greater somatic viability without their brothers’ bodily handicap. It’s counterintuitive to see bright coloration as a handicap, just as it’s counterintuitive to see a handicap as a potential advantage … but there’s little reason to trust our intuition in the face of nature’s often-confusing complexity.
There’s plenty more to the saga of sexual selection and its generation of flashy animal Beau Brummels, including efforts to explain the many exceptions to the above general patterns. It’s not much of a mystery why mammals don’t partake of flashy dress patterns, given that the class Mammalia generally has poor color vision. But what about primates, who tend to be better endowed? And what of Homo sapiens? Our species sports essentially no genetically-mediated colorful sexual dimorphism. If anything, women tend to be more elaborately adorned than men, at least in Western traditions, a gender difference that seems entirely culture-based. Moreover, among some non-Western social groups, the men get dressed up far more than the women. Clearly, there is much to be resolved, and not just for nonhuman animals.
For another look at dramatic animal patterning, let’s turn to the inverse of sexual attraction, namely, selection for being avoided.
Among the most dramatic-looking animals are those whose appearance is “designed” (by natural selection) to cause others—notably predators—to stay away. An array of living things, including some truly spectacular specimens, are downright poisonous, not just in their fangs or stingers but in their very bodies. When they are caterpillars, monarch butterflies feed exclusively on milkweed plants, which contain potent chemical alkaloids that taste disgusting and cause severe digestive upset to animals—especially birds—that eat them, or just venture an incautious nibble.
In the latter case, most birds with a bellyache avoid repeating their mistake although this requires, in turn, that monarchs be sufficiently distinct in their appearance that they carry an easily recognized warning sign. Hence, their dramatic black and bright orange patterning. To the human eye, they are quite lovely. To the eyes of a bird with a terrible taste in its mouth and a pain in its gut, that same conspicuous black and orange is memorable as well, recalling a meal that should not be repeated. It exemplifies “warning coloration,” an easily recalled and highly visible reminder of something to avoid. (It is no coincidence that school buses, ambulances, and fire trucks are also conspicuously colored, although here the goal is enhanced visibility per se, not advertising that these vehicles are bad to eat!)
The technical term for animal warning signals is aposematic, derived by combining the roots “apo,” meaning away (as in apostate, someone who moves away from a particular belief system), and “sema,” meaning signal (as in semaphore). Unpalatable or outright poisonous prey species that were less notable and thus easily forgotten will have achieved little benefit from their protective physiology. And of course, edible animals that are easily recognized would be in even deeper trouble. The adaptive payoff of aposematic coloration even applies if a naïve predator kills a warningly-colored individual, because such sacrifice is biologically rewarded through kin selection when a chastened predator avoids the victim’s genetic relatives.
Many species of bees and wasps are aposematic, as are skunks: once nauseated, or stung, or subjected to stinky skunk spray, twice shy. However, chemically-based shyness isn’t the only way to train a potential predator. Big teeth or sharp claws could do the trick, just by their appearance, without any augmentation. Yet when the threat isn’t undeniably baked into an impressive organ—for example, when it is contained within an animal’s otherwise invisible body chemistry—that’s where a conspicuous, easy-to-remember appearance comes in.
Some of the world’s most extraordinary painterly palettes (at least to the human eye) are flaunted by neotropical amphibians known as “poison arrow frogs,” so designated because their skin is so lethally imbued that indigenous human hunters use it to anoint their darts and arrow points. There is no reason, however, for the spectacular coloration of these frogs to serve only as a warning to potential frog-eating predators. As with other dramatically accoutered animals, colorfulness itself often helps attract mates, and not just by holding out the prospect of making sexy sons. Moreover, it has been observed in at least one impressively aposematic amphibian—the scrumptious-looking but highly toxic strawberry poison frog—that bright color does triple duty, not only warning off predators and helping acquire mates, but also signaling to other strawberry poison frogs that brighter and hence healthier individuals are more effective fighters.
Warning coloration occurs in a wide range of living things, evolving pretty much whenever one species develops a deserved reputation for poisonousness, ferocity, or some other form of legitimate threat. Once established, it also opens the door to further evolutionary complexity, including Batesian mimicry, first described in detail by the nineteenth-century English naturalist Henry Walter Bates who researched butterflies in the Amazon rainforest. He noticed that warningly-colored species serve as models, which are then copied by mimics that are selected to piggyback on the reputation established by the former. Brightly banded coral snakes (venomous) are also mimicked, albeit imperfectly, by some species of (nonpoisonous) king snakes. Bees and wasps, with their intimidating stings, have in most cases evolved distinctive color patterns, often bands of black and yellow; they, in turn, are mimicked by a number of other insects that are outfitted with black and yellow bands though they are stingless.
In short, the honestly-clothed signaler can become a model to be mimicked by other species that may not be dangerous to eat but are mistaken for the real (and toxic) McCoy. Those monarch butterflies, endowed with poisonous, yucky-tasting alkaloids, are mimicked by another species—aptly known as “viceroys” (substitute monarchs)—that bypass the metabolically expensive requirement of dealing with milkweed toxins while benefiting by taking advantage of the monarch’s legitimately acquired reputation.
The plot thickens. Viceroy butterflies (the mimic) and monarchs (the model) can both be successful as long as the mimics aren’t too numerous. A problem arises, however, when viceroys become increasingly abundant, because the more viceroys there are, the more likely it is that predators will nibble on those harmless mimics rather than being educated by sampling mostly monarchs and thereby trained to avoid their black-and-orange pattern. As a result, the well-being of both monarchs and viceroys diminishes as the latter become more abundant, which in turn selects for monarchs that are discernibly different from their mimics, so as not to be tarred with the viceroys’ innocuousness. But the process isn’t done. As the models flutter away from their mimics, the latter can be expected to pursue them, in an ongoing game of evolutionary tag set in motion by the model’s warning coloration and kept going by the very different challenges the system itself creates for mimic and model alike.
This general phenomenon is known as “frequency-dependent selection,” in which the evolutionary success of a biological type varies inversely with its abundance: favored when rare, diminishing as it becomes more frequent. It’s as though certain traits carry within them the seeds of their own destruction, or at least of keeping their numbers in check, either arriving at a balanced equilibrium or producing a pattern of pendulum-like fluctuations.
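To make the dynamic concrete, here is a minimal toy simulation in Python (my own illustrative sketch; the functional form, the 0.3 equilibrium frequency, and the selection strength are assumptions, not figures from this article or its sources). A mimic enjoys above-average fitness while it is rare, loses that edge as it becomes common, and so settles toward an equilibrium share of the warning-colored population.

```python
# Toy model of frequency-dependent selection on a Batesian mimic.
# All parameter values are illustrative assumptions.

def mimic_fitness(freq, strength=0.5, equilibrium=0.3):
    """Relative fitness of mimics: above 1.0 when rarer than the assumed
    equilibrium frequency, below 1.0 when more common than it."""
    return 1.0 + strength * (equilibrium - freq)

def simulate(generations=25, freq=0.05):
    """Track the mimic's share of the population under a simple
    replicator-style update (models have fitness fixed at 1.0)."""
    history = [freq]
    for _ in range(generations):
        w_mimic = mimic_fitness(freq)
        w_model = 1.0
        mean_fitness = freq * w_mimic + (1.0 - freq) * w_model
        freq = freq * w_mimic / mean_fitness
        history.append(freq)
    return history

if __name__ == "__main__":
    for generation, share in enumerate(simulate()):
        print(f"generation {generation:2d}: mimic share = {share:.3f}")
```

Run as written, the mimic’s share climbs from 5 percent toward the assumed 30 percent equilibrium and then holds there: favored when rare, checked when common. Reproducing the pendulum-like fluctuations mentioned above would need a stronger or lagged predator response, but the same inverse relationship between success and abundance drives both outcomes.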
Meanwhile, Batesian mimicry isn’t the only copycat clothing system to have evolved. Plenty of black-and-yellow-banded insects, for example, are equipped with stings, even though many other warning patterns are clearly available: each species could have evolved its own unique colors, or alternative designs such as spots and blotches, instead of converging on the favored black-and-yellow bands. At work here is yet another evolution-based aposematic phenomenon, known as Müllerian mimicry, after the German naturalist Fritz Müller. In this kind of mimicry, everyone is a model, because different species that are legitimately threatening in their own right converge on the same pattern. Here, the adaptive advantage is that sharing the same warning appearance facilitates learning by predators: it’s easier to learn to avoid one basic warning signal than a variety, different for each species. It had been thought that Batesian and Müllerian mimicry were opposites, with Batesian being dishonest because the mimic is essentially a parasite of its model’s legitimate reputation (those viceroys), whereas Müllerian mimicry exemplifies shared honesty, as with different species of wasps, bees, and hornets, whose fearsome reputations enhance each other’s.
It is currently acknowledged, however, that the distinction is often not absolute; within a given array of similar-looking Müllerian mimics, for example, not all species are equally honest when it comes to their decorative signaling. The less dangerous representatives are therefore somewhat Batesian. Conversely, in some assemblages traditionally thought to involve Batesian mimics—including the iconic monarch–viceroy duo—the mimics are often a bit unpleasant in their own right, so both participants are to some degree Müllerian convergers as well.
What to make of all this? In his book, Unweaving the Rainbow, Richard Dawkins gave us some advice, as brilliant as the colors and patterns of the natural world:
After sleeping through a hundred million centuries, we have finally opened our eyes on a sumptuous planet, sparkling with color, bountiful with life. Within decades we must close our eyes again. Isn’t it a noble and enlightened way of spending our time in the sun, to work at understanding the universe and how we have come to wake up in it?

When I first investigated the sharp rise in human deaths due to dogs in the UK, I did not expect the fast-paced chain of events it would spur. A month after publishing a blog post on the dramatic rise in maimings and deaths due to dogs and the single breed that accounted for this unprecedented change, I was asked by the head of a victims’ group to run a campaign to ban the American Bully XL breed in England. From the outset, I was told that such action, from an inactive government, was essentially impossible—one person involved in politics told me I would need to “make it a national issue” before the government would ever consider hearing our concerns, let alone acting on them. Thanks to a small group of dedicated people, relentless persistence and focus on our goal, just 77 days after starting the campaign, the British Prime Minister announced the implementation of our policy to the nation.
The ban was overwhelmingly popular with the public and remains so to this day. Indeed, in recent polling on the chief achievements of the now ex-Prime Minister Rishi Sunak, the American Bully XL ban was ranked tied for fourth place—higher than a significant tax cut and above increased childcare provision. Why? After all, the British public is known for its love of dogs. Indeed, I have a dog and grew up with dogs. So why would I spearhead such a campaign?
The Horrifying Problem

It is common to start these kinds of articles with an emotive, engaging story designed to appeal to the reader. I have tried writing such an introduction, but the stories are so horrifying I cannot begin to describe them. Whether it’s 10-year-old Jack Lis, mauled to death with injuries so horrific that his mother cannot shake the image from her mind every time she closes her eyes, or a 17-month-old girl who lost her life in the most unimaginably terrible circumstances, the stories, the pain of the parents, and the horrifying last moments of those children’s lives are beyond comprehension.
In the past three years, the number of fatal dog attacks in the UK has increased dramatically.1 Between 2001 and 2021 there were an average of 3.3 fatalities per year, with no year exceeding 6. In 2022, 10 people were killed, including 4 children. Optimistic assumptions that 2022 was an outlier did not last, and by the summer of 2023 there had already been 5 fatalities. This pattern continued throughout the year. A day before the ban was announced, a man lost his life to two dogs. He had been defending his mother from an attack and was torn apart in his own garden. A video surfaced online showing people attempting to throw bricks at the animals, but they continued to tear him apart, undaunted. Later in 2023, after the ban was announced, another man was killed by a Bully XL while walking his own dog. In 2024, even after the ban, owners who have chosen to keep Bully XLs under the strict new conditions face the threat within their own homes. As of this writing, two people have died: one, an elderly woman looking after the dogs for her son; the other, an owner torn to pieces by the dogs she had raised from puppies.
These are “just” fatalities. Non-lethal dog attacks on humans, often resulting in life-changing injuries, are also on the rise, increasing from 16,000 in 2018 to 22,000 in 2022, and hospitalizations have almost doubled, from 4,699 in 2007 to 8,819 in 2021/22, a trend that continued in 2022/23 with 9,342 hospitalizations.2, 3 These cases make for difficult reading. Seventy percent of injuries to children were to the head; nearly 1 in 3 required an overnight stay. In Liverpool (a city of 500,000), there are 4–7 dog bites a week, with most injuries to the face. One doctor recounted dealing with a “near-decapitation.” In 2023 in London, the police were dealing with one dangerous dog incident per day.4 We do not have reliable data on dogs attacking other dogs and pets, but I would wager those attacks have increased as well.
Yet, despite an increase in both the human and dog populations of the UK over the past four decades, fatalities have remained consistently low until just a few short years ago.
What’s going on?

Looking through the list of fatal dog attacks in the UK, a pattern becomes clear.5, 6 In 2021, 2 of the 4 UK fatalities were from a breed known as the American Bully XL. In 2022, 5 out of 10 were American Bullies.7 In 2023, 5 of 9 fatalities were from American Bullies. In 2024, 2 of 3 deaths so far are from American Bully XLs kept by owners after the ban. In other words, without American Bullies, the dog fatalities list would drop to 5 for 2022 (within the usual consistent range we’ve seen for the past four decades), 4 for 2023, and 1 for 2024 so far.
Again, this is “just” fatalities. We do not have accurate records of all attacks, but a concerning indication arises from Freedom of Information requests to police forces from across the UK. In August of 2023, 30 percent of all dogs seized by police—often due to violent attacks—were American Bullies. To put this in context, the similarly large Rottweiler breed accounted for just 2 percent.
This pattern is seen elsewhere, in one other breed, the Pitbull—a very, very close relative of the American Bully. In the U.S., for example, 60–70 percent of dog fatalities are caused by Pitbulls and Pitbull crosses.8 The very recent relatives of the American Bully are also responsible for the vast majority of dog-on-dog aggression (including bites, fatalities, etc.).9 In the Netherlands, the majority of dogs seized by police for dog attacks on other dogs were Pitbull types.10 The same is true nearly anywhere you look. In New York City, Pitbulls were responsible for the highest number of bites in 2022.11
Despite these figures, both in the UK and internationally, and despite the recent media attention dog attacks have received, if you were to argue that a breed is dangerous, you would receive significant pushback from owners, activists, and even animal charities insisting that it is the owner’s fault. But this is wrong: however often people contend that “it’s the owner, not the breed,” the reality is different.
Designing Our Best Friend

Dogs—unlike humans—have been bred for various, very specific traits. Their traits, appearance, and behavior have been directed in a way comparable to how we’ve molded plant and other animal life over thousands of years. Watermelons and bananas used to be mostly seed; now they’re mostly flesh. Chickens were not always raised for their meat; now they are. These changes weren’t the natural course of evolution but the result of humans intentionally directing it through deliberate cultivation or breeding. Modern-day dogs are very clearly also the result of such directed breeding.
Broadly speaking, we selected dogs for traits that are very much unlike those of wolves. Unlike their wolf ancestors, dogs are naturally loyal to humans, even beyond preserving their own lives and those of other dogs. Indeed, selection for such a trait may itself have caused some of the early aesthetic changes away from the original wolf-like appearance. When Russian scientists bred foxes over generations for “tameness” toward humans, they found the foxes began to have differently colored fur and floppy ears, and to look, well, more like domestic dogs (though there is some debate about this).
Each dog breed has deep underlying propensities, desires, and drives for which we have selected them over generations. A key responsibility of dog ownership is to know your dog’s breed, understand its typical traits, and prepare for them. Not all individual dogs will exhibit these breed-specific traits, but most do, to varying degrees. Some hound breeds (Whippets, Greyhounds, etc.) have a prey drive and will chase or even try to kill small animals such as rabbits, even if those animals are kept as pets. Some breed-specific behavior can be trained out, but much of it can’t. Form follows function—breed-specific behavior has driven physical adaptations. Relative to other breeds, these dogs have excellent vision (aptly, Greyhounds and Whippets belong to the type of dogs called “sighthounds”) and bodies that are lean and aerodynamic, with a higher ratio of muscle to fat than most other breeds, making them among the fastest animals on the planet; racing Greyhounds reach speeds of up to 45 mph (72 km/h). Like many other hound breeds, they are ancient, bred for centuries to seek comfort in humans and to hunt only very specific animals, whether small vermin for Whippets and Greyhounds, or deer and wolves for the, well, Deerhounds and Wolfhounds. Hounds make fine family pets, having been bred to be highly affectionate to humans, as after all, you don’t want your hunting dog attacking you or your family.
Labradors love to retrieve—especially in water, much to the displeasure of their owners who all too often find them diving into every puddle they encounter on their daily walks. Pointers point. Border Collies herd, and as many owners would note, their instinct can be so strong that they often herd children in their human family. Cocker Spaniels will run through bushes, nose to the ground, looking as if they are tracking or hunting even when just playing—even when they have never been on a hunt of any kind. Dogs are not the way they are by accident but, quite literally, by design.
Designing Bully-type Dogs

Bulldogs were originally bred to be set on a bull, and indiscriminately injure and maim the much larger animal until it died. (These dogs were longer-legged and much more agile and healthier than today’s English Bulldog breed—bred specifically for their now nonfunctional squat appearance.) After the “sport” of bull baiting was banned, some of these dogs were instead locked in a pen with large numbers of rats and scored on how many they could kill in a specified time, with often significant wagers placed on picking the winners. This newer “sport” required greater speed and agility, so the bulldogs of that time were interbred with various terriers to produce what were originally called, naturally, “Bull and Terriers.” From these would eventually come today’s Pitbull Terriers.
In addition, some of the early Bull and Terriers were put to yet another "sport," one on which significant amounts of money were wagered: dog fighting. These dogs were bred specifically for aggression. Two of them would be put together in a closed pit to fight until only one came out alive. During their off hours, these fighting dogs were mostly kept in cages, away from humans. The winners, often seriously wounded themselves, were bred for their ability to kill the other dog before it could kill them. They were not bred for loyalty to humans—these were dogs bred for indiscriminate, sustained, and brutal violence in the confined quarters of the dog pit (hence the name Pitbull Terrier).
This explains why Pitbulls are responsible for 60–70 percent of deaths due to dog attacks in the U.S. It is not—as some advocates claim—simply a function of size. There are many larger and stronger breeds. Pitbulls are not the largest or the strongest breed, but, combined with their unique behavioral traits, they are large enough and strong enough to be the deadliest.
While the Pitbull and some Pitbull-type breeds have been banned in the UK under the Dangerous Dogs Act 1991, the American Bully XL was permitted due to a loophole in the law: simply put, this new breed exceeded the physical characteristics that define the banned breeds to the point that the law no longer applied to it. It is that loophole that resulted in the recent rise of the American Bully XL, and the violence attendant to it.
(In)Breeding the American Bully XL
American Bully XLs are the heavyweight result of breeds born out of brutal human practices that sculpted generations of dogs. The foundational stock for American Bully XLs was bred for terrifying violence, and we should not be surprised to find that this new, larger, and more muscular version still exhibits the same propensity. It is not the dogs' fault, any more than it is the fault of sighthounds to chase squirrels or of pointers to point. But that does not change the reality.
The American Bully breed began in the late 1980s and early 1990s. At least one line started from champion "game dogs," bred to endure repeated severe maiming and still continue fighting to the deadly end. To be deemed a champion, a dog must have killed at least one other dog in brutal combat. To further increase their size and strength, these game dogs were then bred with each other and with other Pitbulls.
The original UK breeding stock that produced Bully XLs is extremely small. An investigation by one member of our campaign uncovered an absurd, awful reality: at least 50 percent of American Bullies advertised for sale in the UK could trace their immediate or close lineage to a single line and a single dog: Killer Kimbo.12, 13
Killer Kimbo was infamous in Bully breeding circles. He was a huge animal, the result of extreme inbreeding to create his mammoth size; he was so inbred that he had the same great-grandfather four times over. It is this dog that gave rise to one of the most popular bloodlines within the UK.
And what has been the result of heavily inbreeding dogs that originate from fighting stock? While precise data are difficult to collect, at least one of Killer Kimbo's offspring is known to have killed someone, and other breeders recount stories of his offspring trying to attack people in front of them. At least one death in the UK has been attributed to a second-generation dog from Killer Kimbo stock. These are the dogs that were advertised and promoted as if they merely looked large but had been bred responsibly for temperament.
Indeed, many families bought these dogs thinking they were gentle giants, and many have kept them even after the imposition of the ban, believing that a dog's behavior is set only by its owner. After his own mother was killed by the Bullies he had kept, one owner said in 2024:14
I did not know bullys were aggressive, I didn't believe all this stuff about the bullys [being dangerous]. But now I've learned the hard way and I wish I'd never had nothing to do with bullys, they've ruined my life and my son's life. I honestly thought the ban was a stupid government plan to wipe out a breed which I had never seen anything but softness and love from … Now I think they need to be wiped out.

In fact, the breed was genetically constructed from fighting stock, inbred repeatedly for greater size and strength, shipped over to the UK skirting the Pitbull ban, and then advertised to families as if these dogs were the result of years of good breeding.
The Nanny Dog
In the UK, the Royal Society for the Prevention of Cruelty to Animals (RSPCA) has argued that no breed is inherently more dangerous than any other, and it leads a coalition opposing any breed ban, including the campaign to "Ban the Bully." This is despite the fact that the RSPCA itself would not insure American Bullies on its own insurance policies, and that it separately advocates for the banning of cat breeds it considers too dangerous.
The UK Bully Kennel Club (not to be confused with the similar-sounding UK Kennel Club) describes the American Bully XL as having a "gentle personality and loving nature." While the United Kennel Club does not recognize the American Bully XL, it describes the wider breed (i.e., not the XL variant) as "gentle and friendly," and goes a step further, stating that the breed "makes an excellent family dog." Again, the XL variant of this breed has been responsible for more fatalities than any other dog breed in the UK in recent years, including the deaths of several children.
Even more troubling is the fact that well-intentioned and potentially good owners are left at a severe disadvantage by the statements of advocates for Pitbulls and American Bullies. An owner who is aware of a breed's past and the risks in its behavior is far more likely to be able to anticipate issues and control the dog. For example, hound owners generally know that they will have to emphasize recall training or keep their dogs on a lead in unfenced areas to prevent them from running off after squirrels or other small animals; it is a well-advertised trait, and these preventive measures are taken very early, long before the dog shows any interest in chasing. Owners of American Bullies, by contrast, would learn nothing of the breed's past if they relied on the supportive advertising descriptions. They were actively told, from sources all over, that American Bullies are naturally good with kids and family, that they are naturally non-violent, and that they pose no risk. Positive descriptions of American Bullies (and their XL variant) de-emphasized their violent tendencies, obscured the breed's aggressive traits from prospective owners, and so prevented owners from correctly understanding, and therefore appropriately controlling, their dogs.
This encouraged ignorance in owners ill-equipped to handle their dogs, such as the owner who saw her dog "Cookie-Doe" (related to Killer Kimbo) kill her father-in-law by ripping apart his leg. Her response? It wasn't an aggressive dog; it just liked to "play too rough." But for every owner like this, there are experienced, diligent owners who nevertheless find themselves, or their children, under attack from one of these dogs.
Worse still is the nickname "nanny dog." There is a myth among advocates for the breed that Pitbulls were known as "nanny dogs" in the late 19th and early 20th centuries for their loyalty to children. This isn't true. The name originates with Staffordshire Bull Terriers (not Pitbulls), which were called "nursemaid dogs" in a 1971 New York Times piece; there is no evidence of "nanny dog" or similar descriptions before this. Stories of a 19th- or early 20th-century origin for the nickname are likely the result of advocates wanting to believe in a more family-oriented history for the breed, rather than the cruel reality.
We should not blame these dogs for how they were bred and kept, or for what they were selected for. They come from cruel origins, have been inbred repeatedly, still face ear cropping, and some end up owned by individuals who select dogs for their ability to intimidate and attack. Nevertheless, none of this changes the violent, aggressive nature that has resulted from generations of breeding specifically for it.
(Some) Owners Bear Blame Too
American Bully XLs were not cheap, and this only began to change when our campaign started in earnest. At the lower end they cost about the same as other dogs, but at the very top of the price range they were some of the most expensive dogs you could buy. Golden Retrievers, the archetypal family dog, are so sought after that breeders commonly have long waiting lists for litters yet to be conceived; a typical cost for a Golden Retriever in the UK is around $2,600. American Bullies, at the height of their popularity, cost as much as $4,000 per puppy. Advertisements for the higher-end puppies were often accompanied by graphics built on violent metaphors and text written in horror-movie-style "blood" fonts.
Given this kind of marketing, what did some prospective owners think they were purchasing? Indeed, it bears asking what kind of owner was prepared to pay vast sums for a dog advertised in such a way. These dogs were clearly a status symbol for many: a large, aggressive, powerful animal to be used either for intimidation or for self-defense. It is for this reason that many owners have their dog's ears cropped to look yet more aggressive, a practice that is illegal under UK law but still occurs. Cropping ears and tails actually serves a purpose, though a brutal one: the other dog cannot bite onto the ear or tail and so gain control of its rival. The old bull-baiting dogs used to go after the bull's ears and nose. Cropping also prevents a human defending themselves from a dog attack from grabbing the tail or ears and using them to sling the dog off or up against a wall. This explains the popularity of these dogs, altered in such a way, amongst drug dealers and others involved in crime.
Opposition
The politics of banning the American Bully proved difficult. It took a public campaign both to convince a government that was generally averse to action of any kind and to counter the continued influence of a coalition of charities opposed to any and all breed bans. These charities included the Dogs Trust, the RSPCA, the UK Kennel Club, Battersea Dogs and Cats Home, and others.
It might seem strange that these charities could argue against any breed ban, given the fatality figures for Bullies. Not only this, but these same charities supported the return of the Pitbull to the UK, despite decades of startling figures on that breed's dramatic overrepresentation in fatalities.
The reason for this is simple. There is no way to split fatality data so that it is favorable to Pitbulls (or, recently, XL Bullies). Instead, the charities focus chiefly on a different measure: bites.15 This measure enables charities to claim that there is a problem with a great many dog breeds, such as Labradors, which by some calculations bite the most people. On this measure, a mauling by a Bully XL that rips open a child's throat or tears away an adult's arm counts the same as a nip on the hand from a Chihuahua: each is one bite.
It isn't necessary to outline how inadequate and bankrupt this measure is. It is a shame on the entire sector that it was ever treated as anything more than a smokescreen. It is, in my view, a true scandal, one that has provided a great deal of unintended cover for horrifying breeding practices, which in turn resulted in the deaths of pets, adults, and children. Dog bites are not the public's (or owners') chief concern: maulings, hospitalizations, and deaths are. That is what we should focus on, and until the advocacy sector does so, it does not deserve to be taken seriously.
Banning the Breed
England and Wales have banned several breeds since the early 1990s. The Dangerous Dogs Act 1991 first banned Pitbulls and was later amended to ban a further three breeds; adding a new breed to the banned list required little more than the signature of the relevant Secretary of State. The Act prohibits buying, selling, breeding, gifting, or otherwise transferring ownership of any dog of a banned breed. All such dogs must be registered and neutered, and leashed and muzzled at all times in public. Failing to comply, or failing to register a dog of a banned breed, is a criminal offense.
When the XL Bully ban was announced, all owners were given a few months to register their dogs, neuter them, and thereafter muzzle and leash them in public. They were forbidden to sell them, give them away, or abandon them. Scotland—as a devolved nation within the United Kingdom—initially announced it would not ban the American Bully, and this resulted in a great many Bullies being sent to Scotland to escape the ban. Within two weeks, and after a couple of prominent attacks, the Scottish government made a legal U-turn and announced a ban. When the new Northern Ireland government formed, its first act was to ban the American Bully.
The Effects of the Ban
The strength of the ban is twofold. On one hand, Bullies are less of a danger to pets and people than they were previously. They must now be muzzled and leashed in public, or owners face seizure of the dog by police and criminal sentences for themselves. However, as has been seen in recent months, this does not change the risk to owners or to those who visit their homes. Allowing registered dogs to be kept by their owners means that this risk persists. The public is shielded from it, but it remains a risk that owners and their visitors choose to take upon themselves.
The other, and key, strength of the ban lies in the future. Stopping the breeding and trading of Bullies means that there is a timer on their threat within Britain. The breed will not continue into future generations. We will not have to see more and more Bully variants, or ever worse breeding practices as breeders chase the latest trend, inbreeding for a particular coat color, ever-greater size, or a propensity for violence. Children will not have to be mauled; other dogs will not have to be ripped apart. We chose to stop this.