
Stem Cells for Parkinson’s Disease

neurologicablog Feed - Mon, 03/10/2025 - 5:02am

For my entire career as a neurologist, spanning three decades, I have been hearing about various kinds of stem cell therapy for Parkinson’s Disease (PD). Now a Phase I clinical trial is under way studying the latest stem cell technology, autologous induced pluripotent stem cells, for this purpose. This history of cell therapy for PD tells us a lot about the potential and challenges of stem cell therapy.

PD has always been an early target for stem cell therapy because of the nature of the disease. It is caused by degeneration in a specific population of neurons in the brain – dopamine neurons in the substantia nigra pars compacta (SNpc). These neurons are part of the basal ganglia circuitry, which makes up the extrapyramidal system. What this part of the brain does, essentially, is to modulate voluntary movement. One way to think about it is that it modulates the gain of the connection between the desire to move and the resulting movement – it facilitates movement. This circuitry is also involved in reward behaviors.

When neurons in the SNpc are lost, the basal ganglia are less able to facilitate movement; the gain is turned down. Patients with PD become hypokinetic – they move less. It becomes harder to move. They need more of a will to move in order to initiate movement. In the end stage, patients with PD can become “frozen”.

The primary treatment for PD is dopamine or a dopamine agonist. Sinemet, which contains L-dopa, a precursor to dopamine, is one mainstay treatment. The L-dopa gets transported into the brain where it is made into dopamine. These treatments work as long as there are some SNpc neurons left to convert the L-dopa and secrete the dopamine. There are also drugs that enhance dopamine function or are direct dopamine agonists. Other drugs are cholinergic inhibitors, as acetylcholine tends to oppose the action of dopamine in the basal ganglia circuits. These drugs all have side effects because dopamine and acetylcholine are used elsewhere in the brain. Also, without the SNpc neurons to buffer the dopamine, end-stage patients with PD go through highly variable symptoms based upon the moment-to-moment drug levels in their blood. They become hyperkinetic, then have a brief sweet-spot, and then hypokinetic, and then repeat that cycle with the next dose.

The fact that PD is the result of a specific population of neurons making a specific neurotransmitter makes it an attractive target for cell therapy. All we need to do is increase the number of dopamine neurons in the SNpc and that can treat, and even potentially cure, PD. The first cell transplant for PD was in 1987, in Sweden. These were fetal-derived dopamine-producing neurons. These treatments were successful, but they are not a cure for PD. The cells release dopamine but they are not connected to the basal ganglia circuitry, so they are not regulating the release of dopamine in a feedback circuit. In essence, therefore, these were just a drug-delivery system. At best they produced the same effect as the best pre-operative medication management. In fact, the treatment only works in patients who respond to L-dopa given orally. The transplants just replace the need for medication, and make it easier to maintain a high level of control.

They also have a lot of challenges. How long do the transplanted cells survive in the brain? What are the risks of the surgery? Is immunosuppressive treatment needed? And where do we get the cells from? The only source that worked was human ventral mesencephalic dopamine neurons from recent voluntary abortions. This limited the supply, and also created regulatory issues, the practice being banned at various times. Attempts at using animal-derived cells failed, as did using adrenal cells from the patient.

Therefore, when the technology developed to produce stem cells from the patient’s own cells, it was inevitable that this would be tried in PD. These are typically fibroblasts that are altered to turn them into pluripotent stem cells, which are then induced to form into dopamine-producing neurons. This eliminates the need for immunosuppression, and avoids any ethical or legal issues with harvesting. PD would seem like the low-hanging fruit for autologous stem cell therapy.

But – it has run up against the issues that we have generally encountered with this technology, which is why you may have first heard of this idea in the early 2000s and here in 2025 we are just seeing a phase I clinical trial. One problem is getting the cells to survive for long enough to make the whole procedure worthwhile. The cells not only need to survive, they need to thrive, and to produce dopamine. This part we can do, and while this remains an issue for any new therapy, this is generally not the limiting factor.

Of greater concern is how to keep the cells from thriving too much – from forming a tumor. There is a reason our bodies are not already flush with stem cells, ready to repair any damage, rejuvenate any effects of aging, and replace any exhausted cells. It’s because they tend to form tumors and cancer. So we have just as many stem cells as we need, and no more. What we “need” is an evolutionary calculation, and not what we might desire. Our experience with stem cell therapy has taught us the wisdom of evolution – stem cells are a double-edged sword.

Finally, it is especially difficult to get stem cells in the brain to make meaningful connections and participate in brain circuitry. I just attended a grand rounds on stem cells for stroke, and there they are having the same issue. However, stem cells can still be helpful, because they can improve the local environment, allowing native neurons to survive and function better. With PD we are again back to – the stem cells are a great dopamine delivery system, but they don’t fix the broken circuitry.

There is still the hope (but it is mainly a hope at this point) that we will be able to get these stem cells to actually replace lost brain cells, but we have not achieved that goal yet. Some researchers I have spoken to have given up on that approach. They are focusing on using stem cells as a therapy, not a cure – as a way to deliver treatments and improve the environment, to support neurons and brain function, but without the plan to replace neurons in functional circuits.

But the allure of curing neurological disease by transplanting new neurons into the brain to actually fix brain circuits is simply too great to give up entirely. Research will continue to push in this direction (and you can be sure that every mainstream news report about this research will focus on this potential of the treatment). We may just need some basic science breakthrough to figure out how to get stem cells to make meaningful connections, and breakthroughs are hard to predict. We had hoped they would just do it automatically, but apparently they don’t. In the meantime, stem cells are still a very useful treatment modality, just more for support than replacement.



Why Is There Law? Skeptic Interviews Oxford Professor Fernanda Pirie

Skeptic.com feed - Thu, 03/06/2025 - 5:57am

Fernanda Pirie is Professor of the Anthropology of Law at the University of Oxford. She is the author of The Anthropology of Law and has conducted fieldwork in the mountains of Ladakh and the grasslands of eastern Tibet. She earned a DPhil in Social Anthropology from Oxford in 2002, an MSc in Social Anthropology at University College London in 1998, and a BA in French and Philosophy from Oxford in 1986. She spent almost a decade practicing as a barrister at the London bar. Her most recent book is The Rule of Laws: A 4,000-Year Quest to Order the World.

Skeptic: Why do we need laws? Can’t we all just get along?

Fernanda Pirie: That assumes we need laws to resolve our disputes. The fact is, there are plenty of societies that do perfectly well without formal laws, and that’s one of the questions I explore in my work: Who makes the law, and why? Not all sophisticated societies have created formal laws. For instance, the ancient Egyptians managed quite well without them. The Maya and the Aztec, as far as we can tell, had no formal laws. Plenty of much smaller communities and groups also functioned perfectly well without them. So, using law to address disputes is just one particular social approach. I don’t think it’s a matter of simply getting along; I do believe it’s inevitable that people will come into conflict, but there are many ways to resolve it. Law is just one of those methods.


Skeptic: Let’s talk about power and law. Are laws written and then an authority is needed to enforce them, which creates hierarchy in society? Or does hierarchy develop for some other reason, and then law follows to deal with that particular structure?

FP: I wouldn’t say there’s always a single direction of development. In ancient India, for example, a hierarchy gradually developed over several thousand years during the first millennium BCE, with priests—eventually the Brahmins—and the king at the top. This evolved into the caste system we know today. The laws came later in that process. Legal texts, written by the Brahmins, outlined rules that everyone—including kings—had to follow.

Skeptic: So, the idea of writing laws down or literally chiseling them in stone is to create something tangible to refer to. Not just, “Hey, don’t you remember, I said six months ago you shouldn’t do that?” Instead, it’s formalized, and everyone has a copy. We all know what it is, so you can hold people morally accountable for their actions.

FP: Exactly. That distinction makes a big difference. Every society has customs and norms; they often have elders or other sources of authority, who serve as experts in maintaining their traditions. But when it’s just a matter of, “This is what we’ve always done—don’t you remember?” some people can conveniently forget. Once something is written down, though, it gains authority. You can refer to the exact words, which opens up different possibilities for exercising power. “Look, these are the laws—everyone must know and follow them.” But it equally creates opportunities for holding people accountable.

Skeptic: So it’s a matter of “If you break the law, then these are the consequences.” It’s almost like a logic problem—if P, then Q. There’s an internal logic to it, a causal reasoning where B follows A, so we assume A causes B. Is something like that going on, cognitively?


FP: Well, that cause-and-effect form is a feature of many legal systems, but not all of them. It’s very prominent in the Mesopotamian tradition, which influenced both Jewish law and Islamic law, and eventually Roman law—the legal systems that dominate the world today. It’s associated with the specification of rights—if someone does this, they are entitled to that kind of compensation, or this must follow from that. But the laws that developed in China and India were quite different. The Chinese had a more top-down, punitive system, focused on discipline and punishment. It was still an “if-then” system, but more about, “If you do this wrong, you shall be punished.” It was very centralized and controlling. In Hindu India, the laws were more about individual duty: this is what you ought to do to be a good Hindu. If you’re a king, you should resolve disputes in a particular way. The distinctions between these systems aren’t always sharp, but the casuistic form is indeed a particular feature of certain legal traditions.

Laws have never simply been rules. They’ve created intricate maps for civilization. Far from being purely concrete or mundane, laws have historically presented a social vision, promised justice, invoked a moral order ordained by God (or the Gods), or enshrined the principles of democracy and human rights. And while laws have often been instruments of power, they’ve just as often been the means of resisting it. Yet, the rule of law is neither universal nor inevitable. Some rulers have avoided submitting themselves to the constraints of law—Chinese emperors did so for 2,000 years. The rule of law has a long history, and we need to understand that history to appreciate what law is, what it does, and how it can rule our world for better or worse.


Skeptic: In some ways it seems like we are seeking what the economist Thomas Sowell calls cosmic justice, where in the end everything is settled and everyone gets their just deserts. One purpose of the Christian afterlife is that all old scores are settled. God will judge everything and do so correctly. So, even if you think you got away with something, in the long run you didn’t. There’s an eye in the sky that sees all, and that adds an element of divine order to legal systems.

FP: Absolutely, and that characterizes many of the major legal systems, especially those associated with religion. Take the Hindu legal system—it’s deeply tied to a sense of cosmological order. Everyone must follow their Dharma, and the Brahmins set up the rules to help people follow their Dharma, so they can achieve a better rebirth. Similarly, Islamic Sharia law, which has had a poor reputation in recent times, is seen as following God’s path for the world, guiding people on how they should behave in accordance with a divine plan. Even the Chinese, who historically had a more top-down and punitive system, claimed that their emperors held the Mandate of Heaven—that’s why people had to obey them and their laws. They were at the top of the pyramid because of such divine authority.

Of course, there have also been laws that are much more pragmatic—rules that merchants follow to maintain their networks, or village regulations. Not all law is tied to a cosmic vision, but many of the most impressive and long-lasting legal systems have been.


Skeptic: The Arab–Israeli conflict can be seen as two people holding a deed to the same piece of land, each claiming, “The title company that guarantees my ownership is God and His Holy Book.” Unfortunately, God has written more than one Holy Book, leading both sides to claim divine ownership, with no cosmic court to settle the dispute.

FP: That’s been the case throughout history—overlapping legal and political jurisdictions. Many people today are worried about whether the nation-state, as we know it, is breaking down, especially with the rise of supranational laws and transnational legal systems. But it’s always been like this—there have always been overlaps between religious laws, political systems, and social norms. The Middle East is a perfect example, with different religious communities living side by side. It hasn’t always been easy, but over time, people have developed ways of coexisting. The current political battles in the Middle East are part of this ongoing tension.

Skeptic: In your writing, you offer this great example from the Code of Hammurabi, 1755–1750 BC. It is the longest, best-organized, best-preserved legal text from the ancient Near East, written in the Old Akkadian dialect of Babylonian, and inscribed on a stone stele discovered in 1901.

“These are the judicial decisions that Hammurabi, the King, has established to bring about truth and a just order in his land.” That’s the text you quoted. “Let any wronged man who has a lawsuit”—interesting how the word ‘lawsuit’ is still in use today—”come before my image as King of Justice and have what is written on my stele read to him so that he may understand my precious commands, and let my stele demonstrate his position so that he may understand his case and calm his heart. I am Hammurabi, King of Justice, to whom Shamash has granted the truth.”


Then you provide this specific example: “If a man cuts down a tree in another man’s date orchard without permission, he shall pay 30 shekels of silver. If a man has given a field to a gardener to plant as a date orchard, when the gardener has planted it, he shall cultivate it for four years, and in the fifth year, the owner and gardener shall divide the yield equally, with the owner choosing first.”

This sounds like a modern business contract, or today’s U.S. Uniform Commercial Code.

FP: Indeed, it’s about ensuring fairness among the farmers, who were the backbone of Babylon’s wealth at the time. I also find it fascinating that there are laws dealing with compensation if doctors kill or injure their patients. We often think of medical negligence as a modern issue, but it’s been around for 4,000 years.

Skeptic: But how did they determine the value of, say, a stray cow or cutting down the wrong tree? How did they arrive at the figure of 30 shekels?

FP: That’s a really interesting question. These laws were meant to last, and even in a relatively stable society, the value of money would have changed over time. People have studied this and asked how anyone could follow these laws for the hundreds of years that the stele stood and people referred to it. My view is that these laws were more exemplary—they probably reflected actual cases, decisions that judges were making at the time.


Although Hammurabi wrote down his rules, he didn’t expect people to apply them exactly as written, as we do with modern legal codes. Instead, they gave a sense of the kind of compensation that would be appropriate for different wrongs or crimes—guidelines, not hard rules. Hammurabi likely collected decisions from various judicial systems and compiled them into a set of general laws, but they still retain the flavor of individual judgments.

Skeptic: Is there a sense of “an eye for an eye, a tooth for a tooth”—where the punishment fits the crime, more or less?

The Code of Hammurabi inscribed on a basalt slab on display at the Louvre, Paris. (Photo by Mbzt via Wikimedia)

FP: Absolutely. Hammurabi was trying to ensure that justice was done by laying out rules for appropriate responses to specific wrongs, ensuring fairness in compensation. But it’s crucial to understand that the famous phrase, “an eye for an eye, a tooth for a tooth,” which appears first in Hammurabi’s code and later in the laws of the Book of Exodus, wasn’t about enforcing revenge. Even though there’s a thousand-year gap between Hammurabi and the Bible, scholars believe this rule was about limiting revenge, not encouraging it. It meant that if someone sought revenge, it had to be proportional—an eye for an eye—but no more.

In other words, they wanted to prevent cycles of violence that arise from feuds. In a feuding society, someone steals a sheep, then someone retaliates by stealing a cow, and then someone tries to take an entire herd of sheep. The feud keeps getting bigger and bigger. So, the “eye for an eye” rule was a pragmatic approach in a society where feuding was common. It was meant to keep things under control.

Skeptic: From the ruler’s perspective, a feud is a net loss, regardless of who’s right or wrong.

FP: Feuding is a very common way of resolving disputes, especially among nomadic people. The idea, which makes a lot of sense, is that if you’re a nomadic pastoralist, your wealth is mobile—it’s your animals that have feet, which can be moved around. That also makes it easy to steal. If you’re a farmer, your wealth is tied to your land, so someone can’t run off with it. Since nomads are particularly vulnerable to theft, having a feuding system acts as a defense mechanism. It’s like saying, “If you steal my sheep, I’ll come and steal your cow.” You still see this in parts of the world, such as eastern Tibet, where I’ve done fieldwork. So, yes, kings and centralized rulers want to stop feuds because they represent a net loss. They want to put a lid on things and so establish a more centralized system of justice. This is exactly what Hammurabi was trying to do, and you see similar efforts in early Anglo-Saxon England, and all over the world.

Another interesting point is that every society has something to say about homicide. It’s so important that they have to lay out a response. However, I don’t think we should assume these laws were meant to stop people from killing each other. The fact is, we don’t refrain from murder because the law tells us not to. We don’t kill because we believe it’s wrong—except in the rare cases where morality has somehow become twisted and people take the law into their own hands through self-help justice. The law, in this case, is more about what the social response should be once a killing has occurred. Should there be compensation? Punishment? What form should it take?


Skeptic: Is this why we need laws that are enforced regularly, fairly, justly, and consistently, so people don’t feel the need to take matters into their own hands?

FP: I’d put it a bit more broadly: we need systems of justice, which can include mediation systems. In a village in Ladakh—part of northern India with Tibetan populations where I did fieldwork—they didn’t have written laws, but they had very effective ways of resolving conflicts. They put a lot of pressure on the parties to calm down, shake hands, and settle the dispute. It’s vastly different from the nomads I worked with later in eastern Tibet, who had a very different approach. But both systems were extremely effective, and there was a strong moral sense that people shouldn’t fight or even get angry. It’s easy to look at these practices and say they’re not justice, especially when serious things like injuries, killings, or even rape are settled in this way. But for these villages, maintaining peace and order in the community was paramount, and it worked for them.

Every society needs some system to restore order and a sense of justice. What constitutes justice can vary greatly—sometimes it’s revenge, sometimes it’s about restoring order. Laws can be part of that system, and in complex societies, it becomes much harder to rely on bottom-up systems of mediation or conciliation. That’s where having written laws and judges becomes very useful.

Skeptic: In communities without laws or courts, do they just agree, “Tomorrow we’re going to meet at noon, and we’ll all sit down and talk this out?”

FP: Essentially, yes. In the communities I spent time with, it was the headman’s duty to call a village meeting, and everyone was expected to attend and help resolve the issue. In a small community like that, you absolutely could do it.

Skeptic: And if you don’t show up?

FP: There’s huge social pressure for people to play their part in village politics and contribute to village funds and activities.

Skeptic: And if they don’t, then what? Are they gossiped about, shunned, or shamed?

FP: Yes—all of those things, in various ways.

Skeptic: Let’s talk about religious laws. You mentioned Sharia, and from a Western perspective, it’s often seen as a disaster because it’s been hyped up and associated with terrorism. Can you explain how religious laws differ from secular laws?

FP: I’m wondering how much one can generalize here. I’m thinking of the religious laws of Hindu India, Islamic laws, Jewish laws, and I suppose Canon law in Europe—Christian law. I hesitate to generalize, though.

Skeptic: What often confounds modern minds are the very specific laws in Leviticus—like which food you can eat, which clothes you can wear, and how to deal with adultery, which would certainly seem to concern the affected spouse. But why should the state—or whatever governing laws or body—even care about such specific issues?

FP: This highlights a crucial point. In Jewish, Hindu, and Islamic law, the legal and moral spheres are part of the same domain. A lot of these laws are really about guiding people on how to live moral lives according to dharma, God’s will, or divine command. The distinction we often make between law and religion, or law and morality, doesn’t apply in those contexts. The laws are about instructing people on how to live properly, which can involve family relations, contracts, land ownership, but also prayer and ritual.

As for the laws in Leviticus, they’ve puzzled people for a long time. They seem to be about purity and how Jews should live as good people, following rules of cleanliness, which partly distinguished them from other tribes.

Skeptic: What exactly is Sharia law?

FP: Sharia literally means “God’s path for the world.” It’s not best translated as “law” in the way we understand it. It’s more about following the path that God has laid out for us, a path we can’t fully comprehend but must do our best to interpret. The Quran is a guide, but it doesn’t lay out in detail everything we should do. The early Islamic scholars—who were very important in its formative days—studied the Quran and the Hadith (which tradition maintains records the Prophet’s words and actions) to work out just how Muslims should live according to God’s command. They developed texts called fiqh, which are what we might call legal texts, going into more detail about land ownership, commercial activities, legal disputes, inheritance, and charitable trusts.


Islamic law has very little to say about crime. That’s one misconception. People tend to think it’s all about harsh punishments, but the Quran mentions crime only briefly. That was largely the business of the caliphs—the rulers—who were responsible for maintaining law and order. Sharia is much more concerned with ritual and morality, and with civil matters like inheritance and charitable trusts.

Skeptic: Much of biblical legal and moral codes have changed over time. Christianity went through the Enlightenment. But Islam didn’t seem to go through a similar process. Is that a fair characterization?

FP: I’d say that’s partly right. But I’ve never thought about it in exactly those terms. In any legal tradition, there’s resistance to change—that’s kind of the point of law. It’s objective and fixed, so any change requires deep thought. In the Islamic world, there’s been a particularly strong sense that it’s not for people to change or reinterpret God’s path. The law was seen as something fixed.

But in practice, legal scholars, called muftis, were constantly adapting and changing legal practices to suit different contexts and environments. That’s one of the real issues today—Islamic law has become a symbol of resistance to the West, appealing to fundamentalism by going “back to the beginning.”

Skeptic: Let’s talk about stateless law of tribes, villages, networks, and gangs. For example, we tend to think of pirates as lawless, chaotic psychopaths who just randomly raided commerce and people. But, in fact, they were pretty orderly. They had their own constitutions. Each ship had a contract that everyone had to sign, outlining the rules. There’s even this interesting analysis of the Jolly Roger (skull and crossbones) flag. Why fly that flag and alert another ship that you’re coming? In his book The Invisible Hook: The Hidden Economics of Pirates, the economist Peter Leeson argued that it is a signal: “We’re dangerous pirates, and we’re coming to take your stuff, so you might as well hand it over to us, and we won’t kill you.” It’s better for the pirates because they can get the loot without the violence, and it’s better for the victims because they get to keep their lives. Occasionally, you do have to be brutal and make sure your reputation as a badass pirate gets a lot of publicity, so people know that when they see the flag, they should just surrender. But overall, it was a pretty orderly system.

FP: Yes, but it’s only kind of organized. That’s the point. For example, in The Godfather Don Corleone was essentially making up his own rules, using his power to tell others what he wanted. That’s the nature of the Mafia—yes, they had omertà (the rule of silence) and rules about treating each other’s wives with respect, but these rules were never written down. Alleged members who went on trial even denied—under oath—that any kind of organization or rules existed. This was particularly true with the Sicilian Mafia. The denial served two purposes: first, it protected them from outside scrutiny, and second, it allowed powerful figures like Don Corleone—or the real-life Sicilian bosses—to bend the rules whenever they saw fit. If the rules aren’t written down, it’s harder to hold them accountable. They can simply break the rules and impose their will.

Skeptic: Let’s discuss international law. In 1977, David Irving published Hitler’s War, in which he claimed that Hitler didn’t really know about the Holocaust. Rather, Irving blamed it on Himmler specifically, and other high-ranking Nazis in general, along with their obedient underlings. Irving even offered $1,000 to anyone who could produce an order from Hitler saying, “I, Adolf Hitler, hereby order the extermination of European Jewry.” Of course, no such order exists. This is an example of how you shift away from a legal system. The Nazis tried to justify what they were doing with law, but at some point, you can’t write down, “We’re going to kill all the Jews.” That can’t be a formal law.

FP: Exactly. Nazi Germany had a complex legal case, and I’m not an expert on it, but you can see at least a couple of legal domains at play. First, they were concerned with international law, especially in how they conducted warfare in the Soviet Union. They at least tried to make a show of following international laws of war. Second, operationally, they created countless laws to keep Germany and the war effort functioning. They used law instrumentally. But when they felt morally uncomfortable with what they were doing, the obvious move was to avoid writing anything down. If it wasn’t documented, it wasn’t visible, and so it became much harder to hold anyone accountable.


Skeptic: During the Nuremberg trials, the defense’s argument was often, “Well, we lost, but if we had won, this would have been legal.” So they claimed it wasn’t fair to hold these trials since they violated the well-established principle of ex post facto, because there was no international law at the time. National sovereignty and self-determination were the norm, so they were saying, in terms of the law of nations, “We were just doing what we do, and it’s none of your business.”

View from above of the judges' bench at the International Military Tribunal in Nuremberg. (Source: National Archives and Records Administration, College Park.)

FP: Legally speaking, the Nuremberg trials were both innovative and hugely problematic. The court assumed the power to sit in judgment on what the leaders of independent nation-states were doing within their borders, or at least largely within their borders (the six largest Nazi death camps were in conquered Poland). But it was revolutionary in terms of developing the concepts of genocide, crimes against humanity, and the reach of international law with a humanitarian focus. So yes, it was innovative and legally difficult to justify, but I don’t think anyone involved felt there was any question that what they were doing was the right thing.

Skeptic: It also established the legal precedent that, going forward, any dictator who commits these kinds of atrocities—if captured—would be held accountable.

FP: Exactly. And that eventually led to the movement that set up the International Criminal Court, where Slobodan Milošević was prosecuted, along with other leaders. Although, it’s extremely difficult to bring such people to trial, and ultimately, the process can be more symbolic than practical.

Is the existence of the International Criminal Court really going to stop someone from committing mass atrocities? I doubt it. But it does symbolize to the world that genocide and other heinous crimes will be called out, and people must be held accountable. In a way, it represents the wider moral world we want to live in and the standards we expect nations to uphold.

Skeptic: Skeptic once asked Elon Musk: “When you start the first Mars colony, what documents would you recommend using to establish a governing system? The U.S. Constitution, the Bill of Rights, the Universal Declaration of Human Rights, the Humanist Manifesto, Atlas Shrugged, or Against the State, an anarcho-capitalist manifesto?” He responded with, “Direct democracy by the people. Laws must be short, as there is trickery in length. Automatic expiration of rules to prevent death by bureaucracy. Any rule can be removed by 40 percent of the people to overcome inertia. Freedom.”

FP: What a great, specific response! He’s really thought about this. Those are some interesting ideas, and I agree that there’s a lot to be said for direct democracy. The main problem with direct democracy, however, is that when you have too many people it becomes cumbersome. How do you gather everyone in a sensible way? The Athenians and Romans had huge assemblies, which created a sense of equality, and that’s valuable. Another thing I would do, which I’ve discussed with a colleague of mine, Al Pashar, is to rotate positions of power. She did research in Indian villages, and I’ve done work with Tibetans in Ladakh, and we found they had similar systems where every household provided a headman or headwoman in turn.

You might think rotating leadership wouldn’t work, because some people aren’t good leaders, while others are. Wouldn’t it be better to elect the best person for the job? But we found that rotating power is effective at preventing individuals from concentrating too much power. Yes, it’s good to have competent leaders, but when their family or descendants form an elite, you get a hierarchy and bureaucracy. Rotating power prevents that. That’s what I would do in terms of a political system.

As for laws, I’m less concerned with their length, as long as they are accessible and visible for everyone to read and reference. What’s important is having essential laws clearly posted for all to see. And there should be a good system for resolving disputes—perhaps mediation and conciliation rather than a lot of complex laws, with just a few laws in the background.

Skeptic: We’ll send this to Elon, and maybe he’ll hire you to join his team of social engineers.

FP: Although I’m not sure I want to go to Mars, I’d be happy to advise from the comfort of Oxford!

Categories: Critical Thinking, Skeptic

Where Are All the Dwarf Planets?

neurologicablog Feed - Thu, 03/06/2025 - 5:05am

In 2006 (yes, it was that long ago – yikes) the International Astronomical Union (IAU) officially adopted the definition of dwarf planet – they are large enough for their gravity to pull themselves into a sphere, they orbit the sun and not another larger body, but they don’t gravitationally dominate their orbit. That last criterion is what separates planets (which do dominate their orbit) from dwarf planets. Famously, this caused Pluto to be “downgraded” from a planet to a dwarf planet. Four other objects also met the criteria for dwarf planet – Ceres in the asteroid belt, and three Kuiper belt objects, Makemake, Haumea, and Eris.

The new designation of dwarf planet came soon after the discovery of Sedna, a trans-Neptunian object that could meet the old definition of planet. It was, in fact, often reported at the time as the discovery of a 10th planet. But astronomers feared that there were dozens or even hundreds of similar trans-Neptunian objects, and they thought it was messy to have so many planets in our solar system. That is why they came up with the whole idea of dwarf planets. Pluto was just caught in the crossfire – in order to keep Sedna and its ilk from being planets, Pluto had to be demoted as well. As a sort-of consolation, dwarf planets that were also trans-Neptunian objects were named “plutoids”. All dwarf planets are plutoids, except Ceres, which is in the asteroid belt between Mars and Jupiter.

So here we are, two decades later, and I can’t help wondering – where are all the dwarf planets? Where are all the trans-Neptunian objects that astronomers feared would have to be classified as planets, the very objects the dwarf planet category was created to accommodate? I really thought that by now we would have a dozen or more official dwarf planets. What’s happening? As far as I can tell there are two reasons we are still stuck with only the original five dwarf planets.

One is simply that (even after two decades) candidate dwarf planets have not yet been confirmed with adequate observations. We need to determine their orbit, their shape, and (related to their shape) their size. Sedna is still considered a “candidate” dwarf planet, although most astronomers believe it is an actual dwarf planet and will eventually be confirmed. Until then it is officially considered a trans-Neptunian object. There are also Gonggong, Quaoar, and Orcus, which are high-probability candidates, and a borderline candidate, Salacia. So there are at least nine, and possibly ten, known likely dwarf planets, but only the original five are confirmed. I guess it is harder to observe these objects than I assumed.

But I have also come across a second reason we have not expanded the official list of dwarf planets. Apparently there is another criterion for plutoids (dwarf planets that are also trans-Neptunian objects) – they have to have an absolute magnitude less than +1 (the smaller the magnitude the brighter the object). Absolute magnitude means how bright an object actually is, not its apparent brightness as viewed from the Earth. Absolute magnitude for planets is essentially the result of two factors – size and albedo. For stars, absolute magnitude is the brightness as observed from 10 parsecs away. For solar system bodies, the absolute magnitude is the brightness if the object were one AU from the sun and the observer.

What this means is that astronomers have to determine the absolute magnitude of a trans-Neptunian object before they can officially declare it a dwarf planet. This also means that trans-Neptunian objects that are made of dark material, even if they are large and spherical, may also fail the dwarf planet criteria. Some astronomers are already proposing that this absolute magnitude criterion be replaced by a size criterion – something like 200 km in diameter.
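The interplay of size and albedo can be made concrete with the standard small-body relation between absolute magnitude, albedo, and diameter. A minimal Python sketch (the H &lt; +1 cutoff is from the text above; the albedo values are illustrative assumptions, not measurements of any particular object):

```python
import math

def diameter_km(abs_mag, albedo):
    """Standard small-body relation: D (km) = 1329 / sqrt(albedo) * 10^(-H / 5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-abs_mag / 5.0)

# The plutoid cutoff is absolute magnitude H < +1. The diameter that
# corresponds to H = +1 depends strongly on the assumed albedo:
for albedo in (0.05, 0.2, 0.8):
    print(f"albedo {albedo:.2f}: H = +1 implies D of about {diameter_km(1.0, albedo):.0f} km")
```

A dark body (albedo 0.05) must be roughly 3,750 km across to pass the cutoff, while a bright icy one (albedo 0.8) needs only about 940 km, which is why a large but dark trans-Neptunian object can fail the criterion.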

It seems like the dwarf planet designation needs to be revisited. Currently, the James Webb Space Telescope is being used to observe trans-Neptunian objects. Hopefully this means we will have some confirmations soon. Poor Sedna, whose discovery in 2003 set off the whole dwarf planet thing, still has not been confirmed.

The post Where Are All the Dwarf Planets? first appeared on NeuroLogica Blog.

Categories: Skeptic

Dressed to Impress or Dressed to Kill: The Evolutionary Story of Animal Color

Skeptic.com feed - Wed, 03/05/2025 - 3:51pm

It’s not at all clear that clothes make the man, or woman. However, it is clear that although animals don’t normally wear clothes (except when people dress them up for their own peculiar reasons), living things are provided by natural selection with a huge and wonderful variety. Their outfits involve many different physical shapes and styles, and they arise through various routes. For now, we’ll look briefly just at eye-catching color among animals, and the two routes by which evolution’s clothier dresses them: sexual selection and warning coloration.

Human observers are understandably taken with the extraordinary appearance of certain animals, notably birds, as well as some amphibians and insects, and, in most cases, the dressed-up elegance of males in particular. In 1860, Darwin confessed to a fellow biologist that looking at the tail of a peacock made him “sick.” Not that Darwin lacked an aesthetic sense, rather, he was troubled that his initial version of natural selection didn’t make room for animals having one. After all, the gorgeous colors and extravagant length of a peacock’s tail threatened what came to be known (by way of Herbert Spencer) as “survival of the fittest,” because all that finery seemed to add up to an immense fitness detriment. A long tail is not only metabolically expensive to grow, but it’s more liable to get caught in shrubbery, while the spectacular colors make its owner more conspicuous to potential predators.

Eventually, Darwin arrived at a solution to this dilemma, which he developed in his 1871 book, The Descent of Man and Selection in Relation to Sex. Although details have been added in the ensuing century and a half, his crashing insight—sexual selection—has remained a cornerstone of evolutionary biology.

Sexual selection is sometimes envisaged as different from natural selection, but it isn’t. Natural selection is neither more nor less than differential reproduction, particularly of individuals and, thereby, genes. It operates in many dimensions, such as obtaining food, avoiding predators, surviving the vagaries of weather, resisting pathogens, and so on. And yet more on! Sexual selection is a subset of natural selection that is so multifaceted and, in some ways, so counterintuitive that it warrants special consideration, as Darwin perceived and subsequent biologists have elaborated.

The bottom line is that in many species, bright coloration—seemingly disadvantageous because it is both expensive to produce and also carries increased risk because of its conspicuousness— nonetheless can contribute to fitness insofar as it is preferentially chosen by females. In such cases, the upside of conspicuous colors increasing mating opportunities compensates for its downsides.

Nothing in science is entirely understood and locked down, but biologists have done a pretty good job with sexual selection. A long-standing question is why, when the sexes are readily distinguishable (termed sexual dimorphism), it is nearly always the males that are brightly colored. An excellent answer comes from the theory of parental investment, first elaborated by Robert Trivers. The basic idea is that the fundamental biological difference between males and females lies not in their genitals but in how much each sex invests when it comes to producing offspring. Males are defined as the sex that makes sperm (tiny gametes that are produced in prodigious numbers), while females are egg makers (producing fewer gametes and investing substantially more time and energy on each one).

As a result, males are often capable of inseminating multiple females because their parental investment in each reproductive effort can be minimal. And so, males in many species, perhaps most, gain an evolutionary advantage by mating with as many females as possible. Because nearly always there are equal numbers of males and females—an important and well-researched statistical phenomenon that deserves its own treatment—this sets up two crucial dynamics. One is male-male competition whereby males hassle with each other for access to the limiting and valuable resource of females and their literal mother load of parental investment. This in turn helps explain the frequent pattern whereby males tend to be more aggressive and outfitted with weapons and an inclination to use them.

The other dynamic, especially important for understanding the evolution of conspicuous male coloration, is female choice (known as epigamic selection). Because females are outfitted with their desirable payload of parental investment, for which males compete, females often (albeit not always) have the opportunity to choose among eager suitors. And they are disposed to go for the brightest, showiest available.

Darwin intuited this dynamic but was uncomfortable about it because at the time, aesthetic preferences were thought to be a uniquely human phenomenon, not available to animals. Now we know better, in part because the mechanism of such preferences is rather well understood. Sexual selection is responsible for much of the organic world’s Technicolor drama, such as the red of male cardinals, the tails of peacocks, or the rainbow rear ends of mandrill monkeys, all of which make these individuals more appealing to potential mates. Once such traits are sexually attractive, they become even more attractive through what evolutionary biologists call the sexy son hypothesis. This involves the implicit genetic promise that females who mate with males thus adorned will likely produce sons who inherit their father’s flashy good looks and will therefore be attractive to the next generation of choosing females, ensuring that a female who makes such a choice produces more grandchildren through her sexy sons.

There is a strong correlation between the degree of polygyny (the number of females mated on average to a given male), or, more accurately, the ratio of variability in male reproductive success to that of females, and the amount of sexual dimorphism: the extent to which males and females of a given species differ physically. The greater the polygyny (e.g., harem size, as in elephant seals) the greater the sexual dimorphism, while monogamous species tend to be comparatively monomorphic, at least when it comes to body size and weaponry.

In most cases, female reproductive success doesn’t vary greatly among individuals, testimony to the impact of the large parental investment they provide. Female success is maximal when they get all their eggs fertilized and their offspring successfully reared, a number that typically doesn’t differ greatly from one female to another. By contrast, because of their low biologically-mandated parental investment, some males have a very large number of surviving offspring—a function of their success in male-male competition along with female choice—while others are liable to die as unsuccessful, nonreproductive, typically troublemaking bachelors.

When it comes to sexual dimorphism in coloration, however, some mysteries persist. Among some socially monogamous species (e.g., warblers), males sport brilliant plumage. This conundrum has been resolved to some extent by the advent of DNA fingerprinting, which has shown that social monogamy doesn’t necessarily correlate with sexual monogamy. Although males of many species have long been known to be sexually randy, verging on promiscuous, females were thought to be more monogamously inclined. However, we now know that females of many species also seek what are termed extra-pair copulations, and it seems likely that this, in turn, has selected for sexy male appearance, which outfits males to potentially take advantage of any out-of-mateship opportunities.

It still isn’t clear why and how such a preference began in the case of particular species (and why it is less developed, or, rarely, even reversed in a few), but once established it becomes what the statistician and evolutionary theorist R.A. Fisher called a “runaway process.” Furthermore, we have some rather good ideas about how this process proceeds.

One is that being impressively arrayed is an indication of somatic and genetic health, which further adds to the fitness payoff when females choose these specimens. Being brightly colored has been shown to correlate with disease resistance, relative absence of parasites, being an especially adroit forager, and the like. In most cases, brightness is physiologically difficult to achieve, which means that dramatic coloration can indicate that such living billboards are also advertising their metabolic muscularity, indicating that they’d likely contain good genetic material as well.

Another, related hypothesis was more controversial when first proposed by Israeli ornithologist Amotz Zahavi, but has been increasingly supported. This is the concept of “selection for a handicap,” which acknowledges that such traits as bright coloration may well be a handicap in terms of a possessor’s survival. However, Zahavi’s “Handicap Principle” turns a seeming liability into a potential asset insofar as certain traits can be positive indicators of superior quality if their possessors are able to function effectively despite possessing them. It’s as though someone carried a 50-pound backpack and was nonetheless able to finish a race, and maybe even win it! An early criticism of this notion was that the descendants of such handicapped individuals would also likely inherit the handicap, so where’s the adaptive payoff accruing to females who choose to mate with them?

For one, there’s the acknowledged benefit of producing sons who will themselves be preferentially chosen—an intriguing case in which choosy females are more fit not through their sons, but by their grandchildren by way of those sons. In addition, there is the prospect that the choosing female’s daughters would be bequeathed greater somatic viability without their brothers’ bodily handicap. It’s counterintuitive to see bright coloration as a handicap, just as it’s counterintuitive to see a handicap as a potential advantage … but there’s little reason to trust our intuition in the face of nature’s often-confusing complexity.

There’s plenty more to the saga of sexual selection and its generation of flashy animal Beau Brummels, including efforts to explain the many exceptions to the above general patterns. It’s not much of a mystery why mammals don’t partake of flashy dress patterns, given that the class Mammalia generally has poor color vision. But what about primates, who tend to be better endowed? And what of Homo sapiens? Our species sports essentially no genetically-mediated colorful sexual dimorphism. If anything, women tend to be more elaborately adorned than men, at least in Western traditions, a gender difference that seems entirely culture-based. Moreover, among some non-Western social groups, the men get dressed up far more than the women. Clearly, there is much to be resolved, and not just for nonhuman animals.

For another look at dramatic animal patterning, let’s turn to the inverse of sexual attraction, namely, selection for being avoided.

Among the most dramatic looking animals are those whose appearance is “designed” (by natural selection) to cause others—notably predators—to stay away. An array of living things, including some truly spectacular specimens, are downright poisonous, not just in their fangs or stingers but in their very bodies. When they are caterpillars, monarch butterflies feed exclusively on milkweed plants, which contain potent chemical alkaloids that taste disgusting and cause severe digestive upset to animals—especially birds— that eat them, or just venture an incautious nibble.

In the latter case, most birds with a bellyache avoid repeating their mistake although this requires, in turn, that monarchs be sufficiently distinct in their appearance that they carry an easily recognized warning sign. Hence, their dramatic black and bright orange patterning. To the human eye, they are quite lovely. To the eyes of a bird with a terrible taste in its mouth and a pain in its gut, that same conspicuous black and orange is memorable as well, recalling a meal that should not be repeated. It exemplifies “warning coloration,” an easily recalled and highly visible reminder of something to avoid. (It is no coincidence that school buses, ambulances, and fire trucks are also conspicuously colored, although here the goal is enhanced visibility per se, not advertising that these vehicles are bad to eat!)

The technical term for animal warning signals is aposematic, derived by combining the roots “apo,” meaning away (as in apostate, someone who moves away from a particular belief system), and “sema,” meaning signal (as in semaphore). Unpalatable or outright poisonous prey species that are less notable, and thus easily forgotten, would achieve little benefit from their protective physiology. And of course, edible animals that are easily recognized would be in even deeper trouble. The adaptive payoff of aposematic coloration even applies if a naïve predator kills a warningly-colored individual, because such sacrifice is biologically rewarded through kin selection when a chastened predator avoids the victim’s genetic relatives.

Many species of bees and wasps are aposematic, as are skunks: once nauseated, or stung, or subjected to stinky skunk spray, twice shy. However, chemically-based shyness isn’t the only way to train a potential predator. Big teeth or sharp claws could do the trick, just by their appearance, without any augmentation. Yet when the threat isn’t undeniably baked into an impressive organ—for example, when it is contained within an animal’s otherwise invisible body chemistry—that’s where a conspicuous, easy-to-remember appearance comes in.

Some of the world’s most extraordinary painterly palettes (at least to the human eye) are flaunted by neotropical amphibians known as “poison arrow frogs,” so designated because their skin is so lethally imbued that indigenous human hunters use it to anoint their darts and arrow points. There is no reason, however, for the spectacular coloration of these frogs to serve only as a warning to potential frog-eating predators. As with other dramatically accoutered animals, colorfulness itself often helps attract mates, and not just by holding out the prospect of making sexy sons. Moreover, it has been observed in at least one impressively aposematic amphibian—the scrumptious-looking but highly toxic strawberry poison frog—that bright color does triple duty, not only warning off predators and helping acquire mates, but also signaling to other strawberry poison frogs that brighter and hence healthier individuals are more effective fighters.

Warning coloration occurs in a wide range of living things, evolving pretty much whenever one species develops a deserved reputation for poisonousness, ferocity, or some other form of legitimate threat. Once established, it also opens the door to further evolutionary complexity, including Batesian mimicry, first described in detail by the nineteenth-century English naturalist Henry Walter Bates who researched butterflies in the Amazon rainforest. He noticed that warningly-colored species serve as models, which are then copied by mimics that are selected to piggyback on the reputation established by the former. Brightly banded coral snakes (venomous) are also mimicked, albeit imperfectly, by some species of (nonpoisonous) king snakes. Bees and wasps, with their intimidating stings, have in most cases evolved distinctive color patterns, often bands of black and yellow; they, in turn, are mimicked by a number of other insects that are outfitted with black and yellow bands though they are stingless.

In short, the honestly-clothed signaler can become a model to be mimicked by other species that may not be dangerous to eat but are mistaken for the real (and toxic) McCoy. Those monarch butterflies, endowed with poisonous, yucky-tasting alkaloids, are mimicked by another species—aptly known as “viceroys” (substitute monarchs)—that bypass the metabolically expensive requirement of dealing with milkweed toxins while benefiting by taking advantage of the monarch’s legitimately acquired reputation.

The plot thickens. Viceroy butterflies (the mimic) and monarchs (the model) can both be successful as long as the mimics aren’t too numerous. A problem arises, however, when viceroys become increasingly abundant, because the more viceroys there are, the more likely it is that predators will nibble on those harmless mimics rather than being educated by sampling mostly monarchs and thereby trained to avoid their black-and-orange pattern. As a result, the well-being of both monarchs and viceroys diminishes in direct proportion as the latter become abundant, which in turn selects for monarchs that are discernibly different from their mimics, so as not to be tarred with the viceroys’ innocuousness. But the process isn’t done. As the models flutter away from their mimics, the latter can be expected to pursue them, in an ongoing game of evolutionary tag set in motion by the model’s warning coloration and maintained by the very different challenges the system poses to mimic and model alike.

This general phenomenon is known as “frequency-dependent selection,” in which the evolutionary success of a biological type varies inversely with its abundance: favored when rare, diminishing as it becomes more frequent. It’s as though certain traits carry within them the seeds of their own destruction, or at least, of keeping their numbers in check, either arriving at a balanced equilibrium or by producing a pattern of pendulum-like fluctuations.
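The dynamic is easy to caricature numerically. Below is a toy replicator-style model (all parameter values are hypothetical, chosen only to illustrate the favored-when-rare behavior): the mimic’s payoff falls as mimics become common, so its frequency climbs from rarity and settles at an interior equilibrium rather than taking over.

```python
def step(q, base=1.0, benefit=0.5):
    """One generation of a toy replicator update.

    q is the mimic's share of the population; its fitness payoff
    shrinks as q grows, because predators are less reliably educated
    by distasteful models. The model type gets a fixed reference payoff.
    """
    w_mimic = base + benefit * (1.0 - q)  # favored when rare
    w_model = base + benefit * 0.5        # fixed reference fitness
    mean_w = q * w_mimic + (1.0 - q) * w_model
    return q * w_mimic / mean_w           # replicator update

q = 0.01  # start the mimic off rare
for _ in range(500):
    q = step(q)
print(f"equilibrium mimic frequency is about {q:.2f}")
```

With these made-up payoffs the frequency settles near 0.5, where the two payoffs equalize: a balanced equilibrium of exactly the sort described above.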

Meanwhile, Batesian mimicry isn’t the only copycat clothing system to have evolved. Plenty of black-and-yellow-banded insects, for example, are equipped with stings, although many other warning patterns are clearly available. Different species could have used their own unique color patterns, or alternative designs such as spots and blotches, instead of the favored black-and-yellow bands. At work here is yet another evolution-based aposematic phenomenon, known as Müllerian mimicry, after the German naturalist Fritz Müller. In this kind of mimicry, everyone is a model, because different species that are legitimately threatening in their own right converge on the same pattern. Here, the adaptive advantage is that sharing the same warning appearance facilitates learning by predators: it’s easier to learn to avoid one basic warning signal than a variety, different for each species. It had been thought that Batesian and Müllerian mimicry were opposites, with Batesian being dishonest because the mimic is essentially a parasite of its model’s legitimate reputation (those viceroys), whereas Müllerian mimicry exemplifies shared honesty, as with different species of wasps, bees, and hornets, whose fearsome reputations enhance each other’s.

It is currently acknowledged, however, that the distinction is often not absolute; within a given array of similar-looking Müllerian mimics, for example, not all species are equally honest when it comes to their decorative signaling. The less dangerous representatives are therefore somewhat Batesian. Conversely, among some species assemblages that have traditionally been thought to involve Batesian mimics—including the iconic monarch–viceroy duo—the mimics are often a bit unpleasant in their own right, so both participants are to some degree Müllerian convergers as well.

What to make of all this? In his book, Unweaving the Rainbow, Richard Dawkins gave us some advice, as brilliant as the colors and patterns of the natural world:

After sleeping through a hundred million centuries, we have finally opened our eyes on a sumptuous planet, sparkling with color, bountiful with life. Within decades we must close our eyes again. Isn’t it a noble and enlightened way of spending our time in the sun, to work at understanding the universe and how we have come to wake up in it?
Categories: Critical Thinking, Skeptic

The Skeptics Guide #1026 - Mar 8 2025

Skeptics Guide to the Universe Feed - Wed, 03/05/2025 - 8:00am
Quickie with Bob: Atlantic Shutdown; News Items: Measles Outbreak, Reintroducing Wolves, TIGR-Tas Gene Editing, Blood Donor Who Saved Millions Dies, Star Mergers; Who's That Noisy; Your Questions and E-mails: Intricate Web of Civilization; Science or Fiction
Categories: Skeptic

Skeptoid #978: Leaded Gasoline and Mental Health

Skeptoid Feed - Tue, 03/04/2025 - 2:00am

A look at recent studies finding that leaded gasoline caused 151 million cases of mental illness in the United States.

Categories: Critical Thinking, Skeptic

The New TIGR-Tas Gene Editing System

neurologicablog Feed - Mon, 03/03/2025 - 5:02am

Remember CRISPR (clustered regularly interspaced short palindromic repeats) – that new gene-editing system which is faster and cheaper than anything that came before it? CRISPR is derived from bacterial systems that use guide RNA to target a specific sequence on a DNA strand. It is coupled with a Cas (CRISPR Associated) protein which can do things like cleave the DNA at the targeted location. We are really just at the beginning of exploring the potential of this new system, in both research and therapeutics.

Well – we may already have something better than CRISPR: TIGR-Tas. This is also an RNA-based system for targeting specific sequences of DNA and delivering a TIGR-associated protein to perform a specific function. TIGR (Tandem Interspaced Guide RNA) may have some useful advantages over CRISPR.

As presented in a new paper, TIGR is actually a family of gene editing systems. It was discovered not by happy accident, but by specifically looking for it. As the paper details: “through iterative structural and sequence homology-based mining starting with a guide RNA-interaction domain of Cas9”. This means they started with Cas9 and then trawled through the vast database of phage and parasitic bacteria for similar sequences. They found what they were looking for – another family of RNA-guided gene editing systems.

Like CRISPR, TIGR is an RNA-guided system with a modular structure. Different Tas proteins can be coupled with the TIGR to perform different actions at the targeted site. But there are several potential advantages of TIGR over CRISPR. Unlike CRISPR, TIGR uses both strands of the DNA to find its target sequence. This “dual guided” approach may lead to fewer off-target errors. While CRISPR works very well, there is a trade-off in CRISPR systems between speed and precision: the faster it works, the greater the number of off-target actions – like cleaving the DNA in the wrong place. The hope is that TIGR will make fewer off-target mistakes because of better targeting.

TIGR also has “PAM-independent targeting”. What does that mean? PAM stands for protospacer adjacent motif – these are short DNA sequences, typically 2 to 6 base pairs, that sit next to the sequence being targeted by CRISPR. The Cas9 nuclease will not function without the PAM. It appears to have evolved so that the bacteria using CRISPR as an adaptive immune system can tell self from non-self, as invading phages and plasmids will have the PAM sequences, but the native DNA will not. The end result is that CRISPR needs PAM sequences in order to function, but the TIGR system does not. This makes the TIGR system much more versatile.
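The practical consequence of PAM dependence is easy to see with a toy scan. This sketch is simplified and illustrative only: it checks a single strand (real Cas9 also scans the reverse complement) and uses the 3-base NGG PAM of the well-known SpCas9 as its example. It lists which positions in a DNA string are even eligible as targets; a PAM-independent system like TIGR would face no such restriction.

```python
import re

def cas9_eligible_sites(dna, guide_len=20):
    """Return start positions of guide_len-base protospacers that sit
    immediately 5' of an NGG PAM (single strand only, for illustration)."""
    return [m.start() - guide_len
            for m in re.finditer(r"(?=[ACGT]GG)", dna)  # lookahead so overlapping PAMs count
            if m.start() >= guide_len]

# Only sequences flanked by an NGG PAM are reachable by a Cas9-style system:
dna = "A" * 20 + "TGGCC"
print(cas9_eligible_sites(dna))  # prints [0]: the 20-base run before "TGG" qualifies
```

Any stretch of DNA lacking a nearby NGG returns an empty list, no matter how biologically interesting the site is, which is exactly the versatility limit that PAM-independent targeting removes.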

I saved what is potentially the biggest advantage for last – Tas proteins are much smaller than Cas proteins, about a quarter of the size. At first this might not seem like a huge advantage, but for some applications it is. One of the main limiting factors for using CRISPR therapeutically is getting the CRISPR-Cas complex into human cells. There are several available approaches – physical methods like direct injection, chemical methods, and viral vectors. Each method, however, generally has a size limit on the package it can deliver into a cell. Adeno-associated viruses (AAVs), for example, have lots of advantages but can only deliver relatively small payloads. Having a much more compact gene-editing system, therefore, is a huge potential advantage.

When it comes to therapeutics, the delivery system is perhaps a greater limiting factor than the gene targeting and editing system itself. There are currently two FDA-approved indications for CRISPR-based therapies, both for blood disorders (sickle cell disease and beta thalassemia). For these disorders, bone marrow can be removed from the patient, CRISPR is applied to make the desired genetic changes, and the bone marrow is then transplanted back into the patient. In essence, we bring the cells to the CRISPR rather than the CRISPR to the cells. But how do we deliver CRISPR to a cell population within a living adult human?

We use the methods I listed above, such as AAVs, but these all have limitations. Having a smaller package to deliver, however, will greatly expand our options.

The world of genetic engineering is moving incredibly fast. We are taking advantage of the fact that nature has already tinkered with these systems for hundreds of millions of years. There are likely more systems and variations out there for us to find. But already we have powerful tools to make precise edits of DNA at targeted locations, and TIGR just adds to our toolkit.

The post The New TIGR-Tas Gene Editing System first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #1025 - Mar 1 2025

Skeptics Guide to the Universe Feed - Sat, 03/01/2025 - 8:00am
Interview with Adam Russell; News Items: Congestion Pricing, AI Therapists, Redefining Dyslexia, Small Modular Reactors for Cargo Ships; Who's That Noisy; Science or Fiction
Categories: Skeptic

Are Small Modular Reactors Finally Coming?

neurologicablog Feed - Thu, 02/27/2025 - 5:00am

Small nuclear reactors have been around since the 1950s. They have mostly been used in military ships, like aircraft carriers and submarines, where they have the specific advantage that such ships can remain at sea for long periods of time without needing to refuel. But small modular reactors (SMRs) have never taken off as a source of grid energy. The prevailing opinion for why seems to be that they are simply not cost effective – larger reactors, which are already expensive endeavors, produce more megawatts per dollar.

This is unfortunate because they have a lot of advantages. Their initial investment is smaller, even though the cost per unit energy is higher. They are safe and reliable. They have a small footprint. And they are scalable. The military uses them because the strategic advantages are worth the higher cost. Some argue that the zero-carbon, on-demand energy they provide is worth the higher cost, and I think this is a solid argument. There are also continued attempts to develop the technology to bring down the cost. Arguably it may be worth subsidizing the SMR industry so that the technology can reach greater cost effectiveness – decarbonizing the energy sector is worth the investment.

But there is another question – are there civilian applications that would also justify the higher cost per unit energy? I have recently encountered two that are interesting. The first is a direct extension of the military use – using an SMR to power a cargo ship. The South Korean company HD Korea Shipbuilding & Offshore Engineering has revealed its design for an SMR-powered cargo ship and has received “approval in principle”. Obviously this is just the beginning phase – they still need to fully develop the design and get full approval. But the concept is compelling.

An SMR has a smaller overall footprint than a traditional combustion engine: it does not need space for an exhaust system or for fuel tanks. The saved space can be used for extra cargo – and that extra cargo offsets the higher cost of the SMR. The calculus here is different – you don’t have to compare an SMR to every other form of grid power, including gigawatt-scale nuclear. You only have to compare it to other forms of cargo ship propulsion, looking at the overall cost effectiveness of the cargo delivery system, not just the production of watts. As an aside, the company is also planning to incorporate a “supercritical carbon dioxide-based propulsion system”, which is about 5% more efficient than a traditional steam-based propulsion system.
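
The shape of that comparison can be made concrete with a back-of-the-envelope sketch. Every number below is a hypothetical placeholder (not from the article or the company); the point is only the structure: the SMR’s higher amortized capital cost competes against fuel savings plus revenue from the reclaimed cargo space.

```python
def annual_result(capital_cost_amortized, fuel_cost, cargo_slots, revenue_per_slot):
    """Toy annual economics of one ship: cargo revenue minus propulsion costs."""
    return cargo_slots * revenue_per_slot - capital_cost_amortized - fuel_cost

# Hypothetical numbers, purely illustrative.
conventional = annual_result(
    capital_cost_amortized=2_000_000,  # cheaper engine
    fuel_cost=5_000_000,               # recurring bunker fuel
    cargo_slots=10_000,
    revenue_per_slot=1_000,
)

smr = annual_result(
    capital_cost_amortized=6_000_000,  # pricier reactor, amortized per year
    fuel_cost=500_000,                 # infrequent refueling
    cargo_slots=10_600,                # space freed from tanks/exhaust -> extra cargo
    revenue_per_slot=1_000,
)

print(conventional, smr)  # 3000000 4100000 with these made-up inputs
```

Flip the fuel or cargo assumptions and the conclusion flips too – which is exactly why the per-watt comparison against grid-scale nuclear is the wrong frame for this application.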

Shipping accounts for about 3% of global greenhouse gas emissions. Decarbonizing this sector will therefore be critical for getting close to net zero.

The second potential civilian application is powering data centers. The Swiss company Deep Atomic is developing an SMR purpose-built for large data centers, again leveraging advantages specific to one application. Their design provides not only 60 MWe of power but also 60 MW worth of cooling – apparently it can use its waste heat to power cooling systems for a data center. The SMR is also designed to be located right next to the data center, even close to urban centers. The company also hopes to produce these SMRs in a factory to help bring down construction costs.

Right now this is just a design, and not a reality, but it’s the idea that’s interesting. Instead of thinking of SMRs as just another method of providing power to the grid, they are being reimagined as being optimized for a specific purpose, which could possibly allow them to gain that extra efficiency to make them cost effective. Data centers, which are increasingly critical to our digital world, are very energy hungry. You can no longer just plug them into the existing grid and expect to get all the energy you need. Right now there is no regulatory requirement for data centers to provide their own energy. In late 2024, Energy Secretary Jennifer Granholm “urged” AI companies to provide their own green energy to power their data centers. Many have responded with plans to do that. But it would not be unreasonable to require them to do so.

Without a plan to power data centers their growing energy demand is not sustainable. This could also completely wipe out any progress we make at trying to decarbonize energy production, as new demand will equal or outstrip any green energy production. This is what has been happening so far. This is another reason why we absolutely need nuclear power if we are going to meet our carbon goals.

There is also the hope that these niche applications of SMRs will bootstrap the entire industry. Making SMRs for ships and data centers could create an economy of scale that brings down the cost of SMRs overall, making them viable for more and more applications.

The post Are Small Modular Reactors Finally Coming? first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #977: A Dingo Ate My Baby

Skeptoid Feed - Tue, 02/25/2025 - 2:00am

This catchphrase has become popular with comedians. Is that in line with its true origin?

Categories: Critical Thinking, Skeptic

American Bully XL Attacks and the Campaign That Banned the Breed in Britain

Skeptic.com feed - Mon, 02/24/2025 - 8:40am

When I first investigated the sharp rise in human deaths due to dogs in the UK, I did not expect the fast-paced chain of events it would spur. A month after publishing a blog post on the dramatic rise in maimings and deaths due to dogs and the single breed that accounted for this unprecedented change, I was asked by the head of a victims’ group to run a campaign to ban the American Bully XL breed in England. From the outset, I was told that such action, from an inactive government, was essentially impossible—one person involved in politics told me I would need to “make it a national issue” before the government would ever consider hearing our concerns, let alone acting on them. Thanks to a small group of dedicated people, relentless persistence and focus on our goal, just 77 days after starting the campaign, the British Prime Minister announced the implementation of our policy to the nation.

The ban was overwhelmingly popular with the public and remains so to this day. Indeed, in recent polling on the chief achievements of the now ex-Prime Minister Rishi Sunak, the American Bully XL ban was ranked by the public tied for 4th place—higher than a significant tax cut and above increased childcare provision. Why? The British public is known for its love of dogs. Indeed, I have a dog and have grown up with dogs. Why would I spearhead such a campaign?

The Horrifying Problem

It is common to start these kinds of articles with an emotive, engaging story designed to draw in the reader. I have tried writing such an introduction, but the stories are so horrifying I cannot begin to describe them. Whether it’s 10-year-old Jack Lis, mauled to death with injuries so horrific that his mother cannot escape the image every time she closes her eyes, or a 17-month-old girl who lost her life in the most unimaginably terrible circumstances, the stories, the pain of the parents, and the horrifying last moments of those children’s lives are beyond comprehension.

In the past three years, the number of fatal dog attacks in the UK has increased dramatically.1 Between 2001 and 2021 there was an average of 3.3 fatalities per year, with no year reaching above 6. In 2022, 10 people were killed, including 4 children. Optimistic assumptions that 2022 was an outlier did not last, and by the summer of 2023 there had already been 5 fatalities. The pattern continued throughout the year. A day before the ban was announced, a man lost his life to two dogs; he had been defending his mother from an attack, and was torn apart in his own garden. A video surfaced online showing people attempting to throw bricks at the animals, but they continued to tear him apart, undaunted. Later in 2023, after the ban was announced, another man was killed by a Bully XL while walking his own dog. In 2024, even after the ban, owners who have chosen to keep Bully XLs under the strict new conditions face the threat within their own homes. As of this writing, two people have died: one, an elderly woman looking after the dogs for her son; the other, an owner torn to pieces by the dogs she had raised from puppies.

These are “just” fatalities. Non-lethal dog attacks on humans, often resulting in life-changing injuries, are also on the rise, increasing from 16,000 in 2018 to 22,000 in 2022, and hospitalizations have almost doubled from 4,699 in 2007 to 8,819 in 2021/22, a trend that continued in 2022/23 with 9,342 hospitalizations.2, 3 These cases make for difficult reading. Seventy percent of injuries to children were to the head; nearly 1 in 3 required an overnight stay. In Liverpool (a city of 500,000), there are 4–7 dog bites a week, with most injuries to the face. One doctor recounted dealing with a “near-decapitation.” In 2023 in London, the police were dealing with one dangerous dog incident per day.4 We do not have reliable data on dogs attacking other dogs and pets, but I would wager those numbers have increased as well.

Yet, despite an increase in both the human and dog populations of the UK over the past four decades, fatalities have remained consistently low until just a few short years ago.

What’s going on?

Looking through the list of fatal dog attacks in the UK, a pattern becomes clear.5, 6 In 2021, 2 of the 4 UK fatalities were from a breed known as the American Bully XL. In 2022, 5 out of 10 were American Bullies.7 In 2023, 5 fatalities of 9 were from American Bullies. In 2024, 2 of 3 deaths so far are from American Bully XLs kept by owners after the ban. In other words, without American Bullies, the dog fatalities list would drop to 5 for 2022 (within the usual consistent range we’ve seen for the past four decades), 4 for 2023, and 1 for 2024 so far.

Again, this is “just” fatalities. We do not have accurate recordings of all attacks, but a concerning indication arises from Freedom of Information requests to police forces from across the UK. In August of 2023, 30 percent of all dogs seized by police—often due to violent attacks—were American Bullies. To put this in context, the similarly large Rottweiler breed accounted for just 2 percent.

This pattern is seen elsewhere, in one other breed, the Pitbull—a very, very close relative of the American Bully. In the U.S., for example, 60–70 percent of dog fatalities are caused by Pitbulls and Pitbull crosses.8 The very recent relatives of the American Bully are also responsible for the vast majority of dog-on-dog aggression (including bites, fatalities, etc.).9 In the Netherlands, the majority of dogs seized by police for dog attacks on other dogs were Pitbull types.10 The same is true nearly anywhere you look. In New York City, Pitbulls were responsible for the highest number of bites in 2022.11

Despite these figures, both in the UK and internationally, and the recent media attention dog attacks have received, if you were to argue that a breed was dangerous you would receive significant pushback from owners, activists, and even animal charity organizations insisting that “it’s the owner, not the breed.” But the reality is different.

Designing Our Best Friend

Dogs—unlike humans—have been bred for various, very specific traits. Their traits, appearance, and behavior have been directed in a way comparable to how we’ve molded plant and animal life over thousands of years. Watermelons and bananas used to be mostly seed; now they’re mostly flesh. Chickens were not always raised for their meat; now they are. None of this was the natural course of evolution, but the result of humans intentionally directing evolution through deliberate cultivation or breeding. Modern-day dogs are very clearly also a result of such directed breeding.

Broadly speaking, we selected dogs for traits that are very much unlike those of wolves. Unlike their wolf ancestors, dogs are, broadly, naturally loyal to humans, even beyond preserving their own lives and those of other dogs. Indeed, a trait such as this in dogs might actually have caused some of the original aesthetic changes to their original wolf-like appearance. When Russian scientists bred foxes over generations for “tameness” to humans, they found the foxes began to have different colored fur, floppy ears, and to look, well, more like domestic dogs (though there is some debate on this).

Each dog breed has deep underlying propensities, desires, and drives for which we have selected them for generations. A key responsibility of dog ownership is to know your dog’s breed, understand its typical traits, and prepare for them. Not all individual dogs will exhibit these breed-specific traits, but most do, to varying degrees. Some hound breeds (Whippets, Greyhounds, etc.) have a prey drive and will chase or even try to kill small animals such as rabbits, even if those animals are kept as pets. Some breed-specific behavior can be trained out, but much of it can’t. Form follows function—breed-specific behavior has driven physical adaptations. Relative to other breeds, they have great vision (aptly, Greyhounds and Whippets belong to the type of dogs called “sighthounds”) and bodies that are lean and aerodynamic, with a higher ratio of muscle to fat relative to most other breeds, making them among the fastest animals on the planet, with racing Greyhounds reaching speeds up to 45 mph (72 km/h). Like many other hound breeds, they are ancient, bred for centuries to seek comfort in humans and to hunt only very specific animals, whether small vermin for Whippets and Greyhounds, or deer and wolves for the, well, Deerhounds and Wolfhounds. Hounds make fine family pets, having been bred to be highly affectionate to humans, as after all, you don’t want your hunting dog attacking you or your family.

Labradors love to retrieve—especially in water, much to the displeasure of their owners who all too often find them diving into every puddle they encounter on their daily walks. Pointers point. Border Collies herd, and as many owners would note, their instinct can be so strong that they often herd children in their human family. Cocker Spaniels will run through bushes, nose to the ground, looking as if they are tracking or hunting even when just playing—even when they have never been on a hunt of any kind. Dogs are not the way they are by accident but, quite literally, by design.

Designing Bully-type Dogs

Bulldogs were originally bred to be set on a bull, and indiscriminately injure and maim the much larger animal until it died. (These dogs were longer-legged and much more agile and healthier than today’s English Bulldog breed—bred specifically for their now nonfunctional squat appearance.) After the “sport” of bull baiting was banned, some of these dogs were instead locked in a pen with large numbers of rats and scored on how many they could kill in a specified time, with often significant wagers placed on picking the winners. This newer “sport” required greater speed and agility, so the bulldogs of that time were interbred with various terriers to produce what were originally called, naturally, “Bull and Terriers.” From these would eventually come today’s Pitbull Terriers.

In addition to this, some of the early Bull and Terriers began to be used for yet another “sport,” and one on which significant amounts of money were wagered—dog fighting. These were bred specifically for aggression. Two of these dogs would be put together in a closed pit to fight until only one came out alive. During their off hours, these fighting dogs were mostly kept in cages, away from humans. The winners, often seriously wounded themselves, were bred for their ability to kill the other dog before it could kill them. They were not bred for loyalty to humans—these were dogs bred for indiscriminate, sustained, and brutal violence in the confined quarters of the dog pit (hence the name, Pitbull Terrier).

This explains why Pitbulls are responsible for 60–70 percent of dog-attack deaths in the U.S. It is not—as some advocates claim—simply a function of size. There are many larger and stronger breeds. Pitbulls are not the largest or the strongest dog breed, but—combined with their unique behavioral traits—they are large enough and strong enough to be the deadliest.

While Pitbull and some Pitbull-type breeds have been banned in the UK under the Dangerous Dogs Act 1991, the American Bully XL was permitted due to a loophole in the law—simply put, this new breed exceeded physical characteristics of the banned breeds to the point they no longer applied under the law. It is that loophole that resulted in the recent rise of the American Bully XL, and the violence attendant to it.

(In)Breeding the American Bully XL

American Bully XLs are the heavyweight result of breeds born out of brutal human practices that sculpted generations of dogs. The foundational stock for American Bully XLs were bred for terrifying violence and we should not be surprised to find that this new, more muscular and larger version still exhibits this same propensity. It is not the dogs’ fault any more than it is the fault of sighthounds to chase squirrels, or pointers to point. But that does not change the reality.

The American Bully began in the late 1980s and early 1990s. At least one line started from champion “game dogs,” bred to endure repeated severe maiming and still continue to fight to the deadly end. To be a champion they must have killed at least one other dog in brutal combat. To further increase their size and strength, these game dogs were then bred with each other and with other Pitbulls.

The original UK breeding stock that produced Bully XLs is extremely small. An investigation by one member of our campaign uncovered an absurd, awful reality: at least 50 percent of American Bullies advertised for sale in the UK could trace their immediate or close lineage to one line and one single dog: Killer Kimbo.12, 13

Killer Kimbo was infamous in Bully breeding circles. He was a huge animal and the result of extreme levels of inbreeding to create his mammoth size. He was so inbred that he had the same great grandfather four times over. It is this dog that has given rise to one of the most popular bloodlines within the UK.

And what has been the result of heavily inbreeding dogs originating from fighting stock? While precise data are difficult to collect, at least one of Killer Kimbo’s offspring is known to have killed someone, and other breeders recount stories of his offspring trying to attack people in front of them. At least one death in the UK was caused by a second-generation dog from Killer Kimbo stock. These are the dogs that were advertised and promoted as if they merely looked large but had been bred responsibly for temperament.

Indeed, many families bought these dogs thinking these were gentle giants—many have kept them even after the impositions of the ban, believing that a dog’s behavior is set only by their owners. After his own mother was killed by the Bullies he had kept, one owner in 2024 said:14

I did not know bullys were aggressive, I didn’t believe all this stuff about the bullys [being dangerous]. But now I’ve learned the hard way and I wish I’d never had nothing to do with bullys, they’ve ruined my life and my son’s life. I honestly thought the ban was a stupid government plan to wipe out a breed which I had never seen anything but softness and love from … Now I think they need to be wiped out.

In fact, the breed was genetically constructed from fighting stock, inbred repeatedly for greater size and strength, shipped over to the UK skirting the Pitbull ban, and then advertised to families as if these dogs were the result of years of good breeding.

The Nanny Dog

In the UK, the Royal Society for the Prevention of Cruelty to Animals (RSPCA) has argued that no breeds are more inherently dangerous than others and leads a coalition to stop any breed bans, including the campaign to “Ban the Bully.” This is despite the fact that the RSPCA itself would not insure American Bullies on their own insurance policies, and that they separately advocate for the banning of cat breeds they consider to be too dangerous.

The UK Bully Kennel Club (not to be confused with the similar sounding UK Kennel Club) describes the American Bully XL as having a “gentle personality and loving nature.” While the United Kennel Club does not recognize the American Bully XL breed, it describes the wider breed (i.e., not the XL variant) as “gentle and friendly,” and goes even a step further, recommending that the breed “makes an excellent family dog.” Again, the XL variant of this breed is responsible for the most fatalities of any dog breed in the UK in recent years, including for killing several children.

Even more troubling is the fact that well-intentioned and potentially good owners are left at a severe disadvantage by the statements of advocates for Pitbulls and American Bullies. If an owner is aware of a breed’s history and behavioral risks, they are far more likely to anticipate issues and control the dog. For example, hound owners are generally aware that they will often have to emphasize recall training or keep their dogs on a lead in non-fenced areas to prevent them from running off to chase squirrels or other small animals—it is a well-advertised trait. These preventive measures are taken very early, well before the dog shows any interest in chasing. Owners of American Bullies, however, would have no inkling of the breed’s past were they to rely on the supportive advertising descriptions. They were actively told, from sources all over, that American Bullies are naturally good with kids and family, naturally non-violent, and pose no risk. Such positive descriptions de-emphasized the breed’s violent tendencies, obscured its aggressive traits from future owners, and so prevented owners from correctly understanding and therefore controlling their dogs appropriately.

This encouraged ignorance from owners who are ill-equipped to handle their dog, such as the owner that saw her dog “Cookie-Doe” (related to Killer Kimbo) kill her father-in-law by ripping apart his leg. Her response? It wasn’t an aggressive dog, it just liked to “play too rough.” But for every owner like this, there are other experienced, diligent owners that nevertheless find themselves, or their children, under attack from one of these dogs.

Worse still is the nickname of “nanny dog.” There is a myth among advocates for the breed that Pitbulls were once known as “nanny dogs” for their loyalty to children in the late 19th and early 20th centuries. However, this isn’t true. The name originates from Staffordshire Bull Terriers (not Pitbulls) that were named “nursemaid dogs” in a 1971 New York Times piece. There is no evidence of “nanny dog” or similar descriptions before this. Stories of 19th or early 20th century origins for the nickname are likely the result of advocates wanting to believe in a more family-oriented origin for the breed, rather than the cruel reality.

We should not blame the dog breed for how they were bred, maintained, and for what they were selected for. They were bred out of cruel origins, inbred repeatedly, still face ear cropping, and some find themselves owned by individuals who select dogs for their ability to intimidate and attack. Nevertheless, none of this changes that violent, aggressive nature that has resulted from generations of breeding specifically for it.

(Some) Owners Bear Blame Too

American Bully XLs were not cheap, and this only began to change when our campaign started in full. At the lower end they cost about the same as other dogs, but at the very high end of the price range they were some of the most expensive dogs you could buy. Golden Retrievers, the archetypal family dog, are so sought after that breeders commonly have long waiting lists for litters yet to be conceived. A typical cost for a Golden Retriever in the UK is around $2,600. American Bullies, at the height of their popularity, cost as much as $4,000 per puppy. The higher-end puppies were often advertised with graphics involving violent metaphors and text written in horror movie-style “blood” fonts.

Given this kind of marketing, what did some prospective owners think they were purchasing? Indeed, it bears asking what kind of owners were prepared to pay vast sums for a dog advertised in such a way. These dogs were clearly a status symbol for many—a large, aggressive, powerful animal to be used either for intimidation or self-defense. It is for this reason that many owners have their dog’s ears cropped to look even more aggressive, a practice illegal under UK law but nonetheless still common. Cropping ears and tails actually serves a purpose—though a brutal one. A rival dog cannot bite onto the ear or tail and so gain control of its opponent; the old bull-baiting dogs used to go after the bull’s ears and nose. Cropping also prevents a human, defending themselves from a dog attack, from grabbing the tail or ears and using them to sling the dog off or up against a wall. This explains the popularity of dogs altered in this way amongst drug dealers and others involved in crime.

Opposition

The politics of banning the American Bully proved difficult. It took a public campaign both to convince a government that was generally averse to action of any kind, and to counter the continued influence of a coalition of charities opposing any and all breed bans. These charities included the Dogs Trust, the RSPCA, the UK Kennel Club, Battersea Dogs and Cats Home, and others.

It might seem strange that these charities could argue against any breed bans, given the fatality figures for Bullies. Not only that, but these same charities supported the return of the Pitbull to the UK, despite decades of startling figures on its dramatic overrepresentation in fatalities.

The reason for this is simple. There is no way to split fatality data so that it is favorable to Pitbulls (or, recently, XL Bullies). Instead, the charities focus chiefly on a different measure: bites.15 This measure enables charities to claim that there is a problem with a great many dog breeds such as Labradors—which, in some calculations, bite the most people. On this measure, a mauling from a Bully XL that rips a child’s throat, or tears away an adult’s arm, and a bite on the hand from a chihuahua count the same: they are each one bite.

It isn’t necessary to outline how inadequate and bankrupt this measure is. It is a shame on this entire sector that it was considered anything more than a smokescreen. It is, in my view, a true scandal that has provided a great deal of unintended cover for horrifying breeding practices, which in turn resulted in the horrific deaths of pets, adults, and children. Dog bites are not the public’s (or owners’) chief concern: it is maulings, hospitalizations, and deaths. That is what we should focus on, and until the advocacy sector does so, it does not deserve to be taken seriously.

Banning the Breed

England and Wales have banned several breeds since the early 1990s. The Dangerous Dogs Act 1991 first banned Pitbulls, and then was amended to ban a further three breeds. The Act required little more than the signature of the relevant Secretary of State to add a new breed to the banned list. This Act prohibits the buying, selling, breeding, gifting, or otherwise transferring the ownership of any dog of a banned breed. All dogs in that breed are to be registered, neutered, as well as leashed and muzzled at all times in public. Not doing so or failing to register a dog of a banned breed, is a criminal offense.

When the XL Bully ban was announced, all owners were given a few months to register their dogs, neuter them, and then muzzle and leash them in public. They were forbidden to sell them, give them away or abandon them. Scotland—as a devolved nation within the United Kingdom—announced they would not ban the American Bully, and this resulted in a great many Bullies being sent to Scotland to escape the ban. Within two weeks, and after a couple of prominent attacks, the Scottish government made a legal U-turn and announced a ban. When the new Northern Ireland government formed, their first act was to ban the American Bully.

The Effects of the Ban

The strength of the ban is twofold. On one hand, Bullies are less of a danger to pets and people than they were previously: they must now be muzzled and leashed in public, or owners face seizure of the dog by police and criminal penalties for themselves. However, as has been seen in recent months, this does not change the risk to owners or those who visit their homes. Allowing registered dogs to be kept by their owners means that this risk persists. It is a risk from which the public is shielded; it remains one that owners, and those who visit them, choose to take upon themselves.

The other, and key, strength of the ban lies in the future. Stopping the breeding and trading of Bullies puts a timer on their threat within Britain: they will not continue into future generations. We will not have to see more and more Bully variants, or ever worse breeding practices as breeders chase the latest trend, whether inbreeding for a particular coat color, ever-increasing size, or a propensity for violence. Children will not have to be mauled; other dogs will not have to be ripped apart. We chose to stop this.

Categories: Critical Thinking, Skeptic

Trust No One, Believe Everything: Does Common Sense Have a Future?

Skeptic.com feed - Mon, 02/24/2025 - 8:12am

For as long as I can remember, espionage has fascinated me. Over the years, I’ve developed a certain expertise—at least in the pop culture sense—interviewing former spies for publications ranging from The Washington Post to, well, Playboy. I even once worked as a researcher at an international investigative firm, a job that, regrettably, involved fewer trench coats and shadowy rendezvous than one might hope. But I did walk away with a highly marketable skill: knowing how to conduct a proper background check (one never knows when that might prove useful).

Spies have long been the pillars of Hollywood storytelling, woven into thrilling tales of intrigue, and deception. But what is it about them that keeps us so enthralled? I’d argue that our obsession stems from an innate desire to know what is hidden from us. Secrets are power, and in a world increasingly shaped by information, nothing is more seductive than the idea of being the one in the know.

But while James Bond is synonymous with adrenaline-filled action and shaken-but-not-stirred glamour, real-life intelligence work is usually rather mundane and bureaucratic: painstaking, systematic data gathering and staring for hours at a screen, rather than the dramatic fight sequences we’ve been conditioned to associate with spycraft through the media. In other words, making sense of what’s going on is often hard, dull work.

We have never had more access to information, yet somehow, we understand less. The sheer volume of data is overwhelming—no single person can process even a fraction of it—so we outsource the task to algorithms, aggregators, and search engines with their own opaque filtration systems. In theory, social media should expose us to a diversity of perspectives, but in practice, its algorithms ensure we’re served more of what we already believe, cocooning us in ideological comfort.

We like to think of Google, X, Facebook, and even ChatGPT as neutral tools, but neutrality is an illusion. These platforms, intentionally or not, prioritize engagement over accuracy, outrage over nuance, and emotional provocation over intellectual depth. Further, the speed at which information spreads tends to outpace our ability to critically analyze it. Misinformation, half-truths, and emotionally charged narratives circulate rapidly, shaping perceptions before facts can be verified. In this landscape, false stories are 70 percent more likely to be shared than true ones and travel six times faster. Eager to engage in the conversation as it happens, we jump in before having even had sufficient time to process the “latest thing.” Our public discourse is shaped not by careful reasoning but by knee-jerk reactions.

Then there’s the growing crisis of trust in media. According to Gallup, Americans’ trust in mass media remains at a record low, with only 31% expressing confidence in its accuracy and fairness in 2024. Trust first dropped to 32% in 2016 and has remained low. For the third year in a row, more Americans (36%) have no trust at all in the media than those who trust it; another 33% have little confidence. Contrast this with the 72% of Americans who trusted newspapers in 1976, after Watergate.

What is behind this erosion? A cocktail of inaccuracies, overt ideological bias, viewpoint discrimination, the weaponization of fact-checking, and outright censorship has pushed many toward alternatives: independent media, podcasters, influencers, social media, and, naturally, grifters. Yet rejecting legacy media in favor of these alternatives is often a case of leaping from the frying pan into the fire. There’s a common misconception that because something isn’t mainstream, it must be more truthful—but plenty of these new voices are just as ideologically captured, if not more so, with even fewer guardrails against deception and little investment in accuracy. Many embrace them because they mistake ideological alignment for truthfulness. Paradoxically, many have embraced the idea that “we are the media now,” a phrase frequently echoed by Elon Musk and his admirers on X—even as they repost news from the very mainstream outlets they claim are now irrelevant, or even “dead.”

We are living in the middle of an information battlefield, where reality itself feels up for debate. What’s legitimate news, and what’s an AI-generated psyop? Who’s a real person, and who’s a bot designed to amplify division? How much of what we read is organic, and how much is algorithmically nudged into our feeds? And then there are state-sponsored disinformation campaigns added to the mix, with countries like Russia, Iran, China, and, yes, even the United States deploying fake news sites, deepfakes, and coordinated social media operations to manipulate global narratives.

In this environment, conspiracy theories thrive. People don’t fall down rabbit holes at random—there are certain preconditions that make them susceptible. Institutional distrust is a major factor, and right now, faith in institutions is in free fall, whether it’s the government, the courts, or the medical establishment. Many people feel betrayed. Add in alienation and social disconnection, and you have the perfect recipe for radicalization. The irony, of course, is that while conspiracy thinking is often framed as a form of skepticism about official narratives, it frequently results in an even greater willingness to believe in something—just not the official story.

Not all people become full-blown conspiracy theorists, of course, but we can see how conspiratorial thinking has taken root. Then again, perhaps we are simply noticing this phenomenon because social media exposes us to people we might otherwise never have come into contact with. What we do know is that people have a high need for certainty and control when times are uncertain, so they become more prone to believing false things when they no longer trust the institutions they once did.

The fragmentation of media consumption means that reaching people with authoritative information has never been more difficult. Everyone is living in a slightly different version of reality, dictated by the platforms they frequent and the sources they trust. And because attention spans have collapsed, many don’t even make it past the headlines before forming an opinion. When everything is engineered to make us feel angry, polarized, scared, and reactionary, how can we stay nuanced, critical, open-minded, and objective? How can we be more truth-seeking in a world where everyone seems to have their own version of the truth on tap?

A recent controversy over a certain billionaire’s hand gesture provided a perfect case study in perception bias. We all saw the same video. To some, Elon Musk’s movement was undeniably a Nazi salute. To others, it was merely an overzealous gesture made to express “my heart goes out to you.” Few people remained undecided. That two groups could witness the exact same footage and walk away with diametrically opposed conclusions is a testament to how much our prior beliefs shape our perception, and it speaks to the difficulty of uniting people behind a single understanding of reality. Psychologists call this phenomenon “motivated perception”: we often see what we expect to see, rather than what’s actually there.

So in this landscape, what is it that grounds me? It all comes down to a simple question: How much of what I believe is based on evidence, and how much is just my own emotions, assumptions, and attempts to connect the dots? What is it that I really know? Very often in life, we imagine what something might be, rather than seeing it for what it is.

With this new column at Skeptic, my aim is to strip away the noise surrounding the headlines. I have no interest in reinforcing anyone’s preconceived notions—including my own—and the only way to avoid that is through curiosity rather than confirmation. In a world where narratives compete for dominance, my goal is not to add another, but to cut to the core of what is verifiable and likely to be true. It’s easy to be swayed by emotion, to see what we expect rather than what’s in front of us. But the only way forward—the only way to make sense of this fractured information landscape—is to remain committed to facts, no matter where they lead.

I would like to keep my door open to topics you’d like to see me cover, or just feedback and thoughts. Comment below, and feel free to reach out anytime: mysteriouskat[at]protonmail.com

Categories: Critical Thinking, Skeptic

How Behavioral Science Lost its Way And How It Can Recover

Skeptic.com feed - Mon, 02/24/2025 - 8:10am

Over the past decade, behavioral science, particularly psychology, has come under fire from critics for being fixated on progressive political ideology, most notably Diversity, Equity, and Inclusion (DEI). The critics’ evidence is, unfortunately, quite strong. For example, a recent volume, Ideological and Political Bias in Psychology,1 recounts many incidents of scholarly censorship and personal attacks that a decade ago might have been conceivable only as satire.

We believe that many problems plaguing contemporary behavioral science, especially for issues touching upon DEI, can best be understood, at their root, as a failure to adhere to basic scientific principles. In this essay, we will address three fundamental scientific principles: (1) Prioritize Objective Data Over Lived Experience; (2) Measure Well; and (3) Distinguish Appropriately Between Correlation and Causation. We will show how DEI scholarship often violates those principles, and offer suggestions for getting behavioral science back on track. “Getting back to the basics” may not sound exciting but, as athletes, musicians, and other performers have long recognized, reinforcing the fundamentals is often the best way to eliminate bad habits in order to then move forward.

The Failure to Adhere to Basic Scientific Principles
Principle #1: Prioritize Objective Data Over Lived Experience

A foundational assumption of science is that objective truth exists and that humans can discover it.2, 3, 4, 5 We do this most effectively by proposing testable ideas about the world, making systematic observations to test the ideas, and revising our ideas based on those observations. A crucial point is that this process of proposing and testing ideas is open to everyone. A fifth grader in Timbuktu, with the right training and equipment, should be able to take atmospheric observations that are as valuable as those of a Nobel Prize-winning scientist from MIT. If the fifth grader’s observations are discounted, this should only occur because their measurement methods were poor, not because of their nationality, gender, age, family name, or any other personal attribute.

A corollary of science being equally open to all is that an individual’s personal experience or “lived experience” carries no inherent weight in claims about objective reality. It is not that lived experience doesn’t have value; indeed, it has tremendous value in that it provides a window into individuals’ perceptions of reality. However, perception can be wildly inaccurate and does not necessarily equate to reality. If that Nobel Prize-winning scientist vehemently disputed global warming because his personal experience was that temperatures have not changed over time, yet he provided no atmospheric measurements or systematic tests of his claim, other scientists would rightly ignore his statements—at least as regards the question of climate change.

The limited utility of a person’s lived experience seems obvious in most scientific disciplines, such as in the study of rocks and wind patterns, but less so in psychology. After all, psychological science involves the study of people—and they think and have feelings about their lived experiences. However, what is the case in other scientific disciplines is also the case in psychological science: lived experience does not provide a foolproof guide to objective reality.

To take an example from the behavioral sciences, consider the Cambridge-Somerville Youth Study.6 At-risk boys were mentored for five years, from the ages of 10 to 15. They participated in a host of programs, including tutoring, sports, and community groups, and were given medical and psychiatric care. Decades later, most of those who participated claimed the program had been helpful. Put differently, their lived experience was that the program had a positive impact on their life. However, these boys were not any better in important outcomes relative to a matched group of at-risk boys who were not provided mentoring or extra support. In fact, boys in the program ended up more likely to engage in serious street crimes and, on average, they died at a younger age. The critical point is that giving epistemic authority to lived experience would have resulted in making inaccurate conclusions. And the Cambridge-Somerville Youth Study is not an isolated example. There are many programs that people feel are effective, but when tested systematically turn out to be ineffective, at best. These include programs like DARE,7 school-wide mental health interventions,8 and—of course—many diversity training programs.9

DEI over-reach in behavioral science is intimately related to a failure within the scientific community to adhere to basic principles of science and appreciate important findings from the behavioral science literature.

Indeed, when it comes to concerns related to DEI, the scientific tenet of prioritizing testable truth claims over lived experience has often fallen to the wayside. Members of specific identity groups are given privilege to speak about things that cannot be contested by those from other groups. In other words, in direct contradiction of the scientific method, some people are granted epistemic authority based solely on their lived experience.10

Consider gender dysphoria. In the past decade, there has been a drastic increase in the number of people, particularly children and adolescents, identifying as transgender. Those who express the desire to biologically transition often describe their lived experience as feeling “born in the wrong body,” and express confidence that transition will dramatically improve their lives. We argue that while these feelings must be acknowledged, they should not be taken as objective truth; instead, such feelings should be weighed against objective data on the life outcomes of others who have considered gender transition and/or transitioned. And those data, while limited, suggest that many individuals who identify as transgender during childhood, but who do not medically transition, eventually identify again with the gender associated with their birth sex.11, 12 Although these are small, imperfect studies, they underscore that medical transition is not always the best option.

Caution in automatically acceding to a client’s preference to transition is particularly important among minors. Few parents and health care professionals would affirm a severely underweight 13-year-old’s claim that, based on their lived experience, they are fat and will only be happy if they lose weight. Nevertheless, many psychologists and psychiatrists make a similar mistake when they affirm a transgender child’s desire to transition without carefully weighing the risks. In one study, 65 percent of people who had detransitioned reported that their clinician, who often was a psychologist, “did not evaluate whether their desire to transition was secondary to trauma or a mental health condition.”13 The concern, in other words, is that lived experience is being given too much weight. How patients feel is important, but their feelings should be only one factor among many, especially if they are minors. Mental health professionals should know this, and parents should be able to trust them to act accordingly.

Principle #2: Measure Well

Another basic principle of behavioral science is that anything being measured must be measured reliably and validly. Reliability refers to the consistency of measurement; validity refers to whether the instrument is truly measuring what it claims to measure. For example, a triple beam balance is reliable if it yields the same value when repeatedly measuring the same object. The balance is valid if it yields a value of exactly 1 kg when measuring the reference kilogram (i.e., the International Prototype of the Kilogram), a platinum-iridium cylinder housed in a French vault under standardized conditions.

Behavioral scientists’ understanding of any concept is constrained by the degree to which they can measure it consistently and accurately. Thus, to make a claim about a concept, whether about its prevalence in a population or its relation to another concept, scientists must first demonstrate both the reliability and the validity of the measure being used. For some measures of human behavior, such as time spent listening to podcasts or number of steps taken each day, achieving good reliability and validity is reasonably straightforward. Things are generally more challenging for the self-report measures that psychologists often use.

Nevertheless, good measurement can sometimes be achieved, and the study of personality provides a nice model. In psychology, there are several excellent measures of the Big Five personality factors (Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness).14 Individuals’ responses are highly reliable: people who rate themselves as highly extraverted as young adults rate themselves similarly years later. Moreover, personality assessments are valid: individuals’ responses correlate with their actual day-to-day behaviors, as reported by themselves and as observed by others.15 In other words, people who rate themselves as high (versus low) in extroversion on psychological questionnaires, for example, really do spend more time socializing.

However, not all psychological measures turn out to have solid reliability and validity. These include the popular Myers Briggs Type Indicator personality test and projective tests such as the Rorschach. Unfortunately, in the quest to support DEI, some concepts that fail the requirements of good measurement are used widely and without reservation. The concept of microaggressions, for example, has gained enormous traction despite its having fundamental measurement issues.

“Microaggressions” were brought to psychologists’ attention by Derald Wing Sue and colleagues.16 Originally described as “brief and commonplace daily verbal, behavioral, or environmental indignities, whether intentional or unintentional, that communicate hostile, derogatory, or negative racial slights and insults toward people of color” (p. 271),17 the concept has since expanded in use to describe brief, verbal or nonverbal, indignities directed toward a different “other.”18, 19

In 2017, Scott Lilienfeld discussed how the failure to adhere to the principles of good measurement has rendered the concept of microaggression “wide open,” without any clear anchors to reality.20 The primary weakness for establishing validity, that is, for establishing evidence of truly measuring what scientists claim to be measuring, is that “microaggression” is defined in the eye of the beholder.21 Thus, any person at any point can say they have been “microaggressed” against, and no one can test, let alone refute, the claim because it is defined solely by the claimant’s subjective appraisal—their lived experience.

As Scott Lilienfeld explained, the end result is that essentially anything, including opposing behaviors (such as calling on a student in class or not calling on a student in class) can be labeled a microaggression. A question such as, “Do you feel like you belong here?” could be perceived as a microaggression by one person but not by someone else; in fact, even the same person can perceive the same comment differently depending on their mood or on who asks the question (which would indicate poor reliability). Our criticism of microaggressions, then, spans concerns related to both weak measurement and an undue reliance on lived experience.

Another of psychology’s most famous recent topics is the Implicit Association Test (IAT), which supposedly reveals implicit, or subconscious, bias. The IAT measures an individual’s reaction times when asked to classify pictures or text spatially. A video22 may be the best way to appreciate what is happening in the IAT, but the basic idea is that if a person more quickly pairs pictures of a Black person than those of a White person with a negative word (for example, “lazy” or “stupid”) then they have demonstrated their unconscious bias against Black people. The IAT was introduced by Anthony Greenwald and colleagues in the 1990s.23 They announced that their newly developed instrument, the race IAT, measures unconscious racial prejudice or bias and that 90 to 95 percent of Americans, including many racial minorities, demonstrated such bias. Since then, these scholars and their collaborators (plus others such as DEI administrators) have enjoyed tremendous success advancing the claim that the race IAT reveals pervasive unconscious bias that contributes to society-wide discrimination.

Screenshot from Harvard’s Project Implicit Skin Type Test

Despite its immense influence, the IAT is a flawed measure. Regarding reliability, the correlation between a person’s responses when taking the test at two different times hovers around 0.5.24 This is well below conventionally acceptable levels in psychology, and far below the test-retest reliabilities for accepted personality and cognitive ability measures, which can reach around 0.8 even when a person takes the tests decades later.25, 26

As for the IAT’s validity, nobody has convincingly shown that patterns of reaction times actually reflect “unconscious bias” (or “implicit prejudice”) as opposed to cultural stereotypes.27 Moreover, in systematic syntheses of published studies, the association between scores on the race IAT and observations or measurements of real-world biased behavior is inconsistent and weak.28, 29 In other words, scores on the IAT do not meaningfully correlate with other ways of measuring racial bias or real life manifestations of it.

Principle #3: Distinguish Appropriately Between Correlation and Causation

“Correlation does not equal causation” is another basic principle of behavioral science (indeed, of all science). Although human brains seem built to readily notice and even anticipate causal connections, a valid claim that “X” has a causal effect on “Y” needs to meet three criteria, and a correlation between X and Y is only the first. The second criterion is that X precedes Y in time. The third and final criterion is that the link between X and Y is not actually due to some other variable that influences both X and Y (a “confounder”). To test this final point, researchers typically need to show that when X is manipulated in an experiment, Y also changes.

Imagine, for instance, that a researcher asks students about their caffeine intake and sleep schedule, and upon analyzing the data finds that students’ caffeine consumption is negatively correlated with how much they sleep—those who report consuming more caffeine tend to report sleeping less. This is what many psychologists call correlational research (or associational or observational research). These correlational data could mean that caffeine consumption reduces sleep time, but the data could also mean that a lack of sleep causes an increase in caffeine consumption, or that working long hours causes both a decrease in sleep and an increase in caffeine. To make the case that caffeine causes poor sleep, the researcher must impose, by random assignment, different amounts of caffeine on students to determine how sleep is affected by varying doses. That is, the researcher would conduct a true experiment.

Distinguishing between correlation and causation is easier said in the abstract than practiced in reality, even for psychological scientists who are specifically trained to make the distinction.30 Part of the difficulty is that in behavioral science, many variables that are generally thought of as causal cannot be manipulated for ethical or practical reasons. For example, researchers cannot impose neglect (or abuse, corporal punishment, parental divorce, etc.) on some children and not others to study how children are affected by the experience. Still, absent experiments, psychologists bear the responsibility of providing converging, independent lines of evidence that indicate causality before they draw a causal conclusion. Indeed, scientists did this when it came to claiming that smoking causes cancer: they amassed evidence from national datasets with controls, discordant twin designs, correlational studies of exposure to second-hand smoke, non-human experiments, and so on—everything but experiments on humans—before coming to a consensus view that smoking causes cancer in humans. Our point is that investigating causal claims without true experiments is possible, but extremely difficult and time consuming.

That said, the conflation of correlation with causation seems especially prevalent when it comes to DEI issues. In the context of microaggressions, for example, a Google search quickly reveals many scholars claiming that microaggressions cause psychological harm. Lilienfeld has been a rare voice suggesting that it is dangerous to claim that microaggressions cause mental health issues when there are no experimental data to support such a claim. Moreover, there is a confounding variable that predicts both (1) perceiving oneself as having been “microaggressed” against and (2) struggling with one’s mental health—namely, the well-documented personality trait of neuroticism. In other words, individuals who are prone to experience negative emotions (those who are high in neuroticism) often perceive that more people try to inflict harm on them than actually do, and these same individuals also struggle with mental health.

Assuming we were able to develop a workable definition of “microaggressions,” what would a true experiment look like? An experiment would require that participants be exposed to microaggressions (or not), and then be measured or observed for indications of psychological harm. There are valid ethical concerns for such a study, but we believe it can be done. There is a lengthy precedent in psychological research where temporary discomfort can be inflicted with appropriate safeguards. For instance, a procedure called the Trier Social Stress Test (TSST) is widely used, in which participants make a speech with little preparation time in front of judges who purposefully avoid any nonverbal reaction. This is followed by a mental arithmetic task.31 If the TSST is acceptable for use in research, then it should also be acceptable to expose study participants to subtle slights.

This fallacy of equating correlation with causation also arises in the context of gender transitioning and suicide. To make the point that not being able to transition is deeply damaging, transgender individuals, and sometimes their professional supporters, may ask parents something such as, “would you rather have a dead daughter or a living son?” One logical flaw here is in assuming that because gender distress is associated with suicidal ideation, then the gender distress must be causing the suicidal ideation. However, other psychological concerns, such as depression, anxiety, trauma, eating disorders, ADHD, and autism, could be causing both the gender distress and the suicidal ideation—another case of confounding variables. Indeed, these disorders occur more frequently in individuals who identify as transgender. Thus, it is quite possible that someone may suffer from depression, and this simultaneously raises their likelihood of identifying as transgender and of expressing suicidal ideation.

Photo by Uday Mittal / Unsplash

It is not possible (nor would it be ethical if possible) to impose gender identity concerns on some children and not others to study the effect of gender dysphoria on suicidality. However, at this point, the correlational research that does exist has not offered compelling evidence that gender dysphoria causes increased suicidality. Studies have rarely attempted to rule out third variables, such as other mental health diagnoses. The few studies that have tried to control for other variables have yielded mixed results.32, 33 Until researchers have consistently isolated gender dysphoria as playing an independent role in suicidality, they should not claim that gender dysphoria increases suicide risk.

Over three decades ago, the psychologist David Lykken wrote, “Psychology isn’t doing very well as a scientific discipline and something seems to be wrong somewhere” (p. 3).34 Sadly, psychology continues to falter; in fact, we think it has gotten worse. The emotional and moral pull of DEI concerns is understandable, but it may have short-circuited critical thinking about the limitations of lived experience, the requirement of using only reliable and valid measurement instruments, and the need to meet strict criteria before claiming that one variable has a causal influence on another.

DEI Concepts Contradict Known Findings About Human Cognition

The empirical bases for some DEI concepts contradict social scientific principles. Additionally, certain DEI ideas run counter to important findings about human nature that scientists have established by following the required scientific principles. We discuss three examples below.

1) Out-Group Antipathy

Humans are tribal by nature. We have a long history of living in stable groups and competing against other groups. Thus, it’s no surprise that one of social psychology’s most robust findings is that in-group preferences are powerful and easy to evoke. For example, in studies where psychologists create in-groups and out-groups using arbitrary criteria such as shirt color, adults and children alike show a strong preference for their own group members.35, 36 Even infants prefer those who are similar to themselves37 and respond preferentially to those who punish dissimilar others.38

Constructive disagreement about ideas should be encouraged rather than leveraged as an excuse to silence those who may see the world differently.

DEI, although generally well-intentioned, often overlooks this tribal aspect of our psychology. In particular, in the quest to confront the historical mistreatment of certain identity groups, it often instigates zero-sum thinking (i.e., that one group owes a debt to another; that one group cannot gain unless another loses). This type of thinking will exacerbate, rather than mitigate, animosity. A more fruitful approach would emphasize individual characteristics over group identity, and the common benefits that can arise when all individuals are treated fairly.

2) Expectancies

When people expect to feel a certain way, they are more likely to experience that feeling.39, 40 Thus, when someone, especially an impressionable teenager or young adult, is told that they are a victim, the statement (even if true) is not merely a neutral descriptor. It can also set up the expectation of victimhood, with the downstream consequence of making them feel even more victimized. DEI microaggression workshops may do exactly this—they prime individuals to perceive hostility and negative intent in ambiguous words and actions.41 The same logic applies to more pronounced forms of bigotry. For instance, when Robin DiAngelo describes “uniquely anti-black sentiment integral to white identity” (p. 95),42 the suggestion that White people are all anti-Black might have the effect of exacerbating both actual and perceived racism. Of course, we need to deal honestly with any and all racism when it does exist, but it is also important to understand the potential costs of exaggerating such claims. Expectancy effects might interact with the “virtuous victim effect,” wherein individuals perceive victims as being more moral than non-victims.43, 44 Thus, there can be a social value gained simply in presenting oneself as a victim.

3) Cognitive Biases

Cognitive biases are one of the most important and well-replicated discoveries of the behavioral sciences. It is therefore troubling that, in the discussion of DEI topics, psychologists often fall victim to those very biases.

Credit: Design by John Manoogian III; Categories & Descriptions by Buster Benson; Implementation by TilmannR (CC BY-SA 4.0, via Wikimedia Commons)

A striking example is the American Psychological Association’s (APA) statement shortly after the death of George Floyd, which provides a textbook illustration of the availability bias, the tendency to overvalue evidence that easily comes to mind. The APA, the largest psychological organization in the world, asserted after Floyd’s death that “The deaths of innocent black people targeted specifically because of their race—often by police officers—are both deeply shocking and shockingly routine.”45 How “shockingly routine” are they? According to the Washington Post database of police killings, in 2020 there were 248 Black people killed by police. By comparison, over 6,500 Black people were killed in traffic fatalities that year—a 26-fold difference.46 Also, some of those 248 victims were not innocent: given that 216 were armed, some killings would probably have been an appropriate use of force by the police to defend themselves or others. Some were also not killed specifically because of their race. So why would the APA describe a relatively rare event as “shockingly routine”? This statement came in the aftermath of the widely publicized police killing of Floyd and the killings of Ahmaud Arbery and Breonna Taylor. In other words, these rare events were seen as common likely because widespread media coverage made them readily available in our minds.
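The comparison above is simple arithmetic and worth checking. The figures below are the ones quoted in the text (the traffic-fatality count is approximate):

```python
# Figures as cited in the text for 2020 (the traffic number is approximate).
police_killings = 248     # Black people killed by police (Washington Post database)
traffic_deaths = 6_500    # Black traffic fatalities in the same year

ratio = traffic_deaths / police_killings
print(round(ratio))       # 26, i.e., the "26-fold difference"
```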

Unfortunately, the APA also recently fell prey to another well-known bias, the base rate fallacy, in which relevant population sizes are ignored. In this case, the APA described new research that found “The typical woman was considered to be much more similar to a typical White woman than a typical Black woman.”47 Although not stated explicitly, the implication seems to be that, absent racism, the typical woman would be perceived as roughly midway between a typical White woman and a typical Black woman. That is an illogical conclusion given base rates. In the U.S., White people outnumber Black people by roughly 5 to 1; hence the typical woman should indeed be perceived as more similar to a typical White woman than to a typical Black woman.
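The base-rate point can be made precise with a toy mixture model. The 5:1 ratio and the group means below are illustrative assumptions, not measurements; the point is only that a population average necessarily sits closer to the mean of the larger group.

```python
# Illustrative mixture model: two groups with an assumed 5:1 population ratio.
w_share, b_share = 5 / 6, 1 / 6   # assumed population shares
w_mean, b_mean = 0.0, 1.0         # arbitrary group means on some trait scale

# The population-average ("typical") value is the share-weighted mean.
typical = w_share * w_mean + b_share * b_mean
dist_to_w = abs(typical - w_mean)
dist_to_b = abs(typical - b_mean)
print(f"typical = {typical:.3f}: {dist_to_w:.3f} from the larger group's mean, "
      f"{dist_to_b:.3f} from the smaller group's mean")
```

Even with zero bias and identical perceptions of each group, the "typical" value lands five times closer to the majority group's mean, which is exactly what the base rates predict.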

What Happened? Some Possible Causes

At this stage, we expect that many readers may be wondering how it can be that social scientists regularly violate basic scientific principles—principles that are so fundamental that these same social scientists routinely teach them in introductory courses. One possible reason is myside bias, wherein individuals process information in a way that favors their own “team.” For example, in the case of the race Implicit Association Test, proponents of the IAT might more heavily scrutinize the methodology of studies that yield negative results compared to those that have yielded their desired results. Similarly, although lived experience is a limited kind of evidence, it certainly is a source of evidence, and thus scholars may elevate its importance and overlook its limitations when doing so bolsters their personal views.

A related challenge facing behavioral scientists is that cognitive biases are universal and ubiquitous—everyone, including professional scientists, is susceptible.48 In fact, one might say that the scientific method, including the three principles we emphasize here, is an algorithm (i.e., a set of rules and processes) designed to overcome our eternally pervasive cognitive biases.

A third challenge confronting behavioral scientists is the current state of the broader scientific community. Scientific inquiry works best when practiced in a community adhering to a suite of norms, including organized skepticism, that incentivize individuals to call out each other’s poor practices.49, 50 In other words, in a healthy scientific community, if a claim becomes widely adopted without sufficient evidence, or if a basic principle is neglected, a maverick scientist would be rewarded for sounding the alarm by gaining respect and opportunities. Unfortunately, the scientific community does not act this way with respect to DEI issues, perhaps because the issues touch widely held personal values (e.g., about equality between different groups of people). If different scientists held different values, there would probably be more healthy skepticism of DEI topics. However, there is little ideological diversity within the academy. In areas such as psychology, for example, liberal-leaning scholars outnumber conservative-leaning scholars by at least 8 to 1, and in some disciplines the ratio is 20 to 1 or even more.51, 52 A related concern is that these values are more than just personal views. They often seem to function as sacred values, non-negotiable principles that cannot be compromised and can be questioned only at risk to one’s status within the community.

A related challenge facing behavioral scientists is that cognitive biases are universal and ubiquitous—everyone, including professional scientists, is susceptible.

From this perspective,53 it is easy to see how those who question DEI may well face moral outrage, even if (or maybe especially if) their criticisms are well-founded. The fact that this outrage sometimes translates into public cancellations is extremely disheartening. Yet there are likely even more de facto cancellations than it seems. Someone can be cancelled directly or indirectly. Indirect cancellations can take the form of contract nonrenewal, pressure to resign, or having one’s employer dig for another offense to use as the stated grounds for forcing someone out of their job. This latter strategy is a very subtle, yet no less insidious, method of cancellation. As an analogy, it is like a police officer following someone with an out-of-state license plate and then pulling the car over when they fail to use a turn signal. An offense was committed, but the only reason the offense was observed in the first place is that the officer was looking for a reason to make the stop and therefore artificially extended the time window during which the driver was under scrutiny. The stated reason for the stop is failure to signal; the real reason is that the driver is from out of town. Whether direct or indirect, the key to a cancellation is that holding the same job becomes untenable after failing to toe the party line on DEI topics.

It is against this backdrop that DEI scholarship is conducted. Academics fear punishment (often subtle) for challenging DEI research. Ideas that cannot be freely challenged are unfalsifiable. Those ideas will likely gain popularity because the marketplace of ideas becomes the monopoly of a single idea. An illusory consensus can emerge about a complex area for which reasonable, informed, and qualified individuals have highly differing views. An echo chamber created by forced consensus is the breeding ground for bad science.

How to Get Behavioral Science Back on Track

We are not the first to express concern about the quality of science in our discipline.54, 55 However, to our knowledge, we are the first to discuss how DEI over-reach goes hand-in-hand with the failure to engage in good science. Fortunately, none of this means the problem can’t be fixed. We offer a few suggestions for improvement.

First, disagreement should be normalized. Advisors should model disagreement by presenting an idea and explicitly asking their lab members to talk about its weaknesses. We need to develop a culture where challenging others’ ideas is viewed as an integral (and even enjoyable) part of the scientific process, and not an ad hominem attack.

Second, truth seeking must be re-established as the fundamental goal of behavioral science. Unfortunately, many academics in behavioral science seem now to be more interested in advocacy than science. Of course, as a general principle, faculty and students should not be restricted from engaging in advocacy. However, this advocacy should not mingle with their academic work; it must occur on their own time. The tension between advocacy and truth seeking is that advocates, by definition, have an a priori position and are tasked with convincing others to accept and then act upon that belief. Truth seekers must be open to changing their opinion whenever new evidence or better analyses demand it.

To that end, we need to resurrect guardrails that hold students accountable for demonstrating mastery of important scientific concepts, including those described above, before receiving a PhD. Enforcing high standards may sound obvious, but actually failing students who do not meet those standards is an exclusionary practice that might be met with resistance.

We need to develop a culture where challenging others’ ideas is viewed as an integral (and even enjoyable) part of the scientific process, and not an ad hominem attack.

Another intriguing solution is to conduct “adversarial collaborations,” wherein scholars who disagree work together on a joint project.56 Adversarial collaborators explicitly spell out their competing hypotheses and together develop a method for answering a particular question, including the measures and planned analyses. Stephen Ceci, Shulamit Kahn, and Wendy Williams,57 for example, engaged in an adversarial collaboration that synthesized evidence regarding gender bias in six areas of academic science, including hiring, grant funding, and teacher ratings. They found evidence for gender bias in some areas but not others, a finding that should prove valuable in decisions about where to allocate resources.


In conclusion, we suggest that DEI over-reach in behavioral science is intimately related to a failure within the scientific community to adhere to basic principles of science and appreciate important findings from the behavioral science literature. The best path forward is to get back to the basics: understand the serious limitations of lived experience, focus on quality measurement, and be mindful of the distinction between correlation and causation. We need to remember that the goal of science is to discover truth. This requires putting ideology and advocacy aside while in the lab or classroom. Constructive disagreement about ideas should be encouraged rather than leveraged as an excuse to silence those who may see the world differently. The scientific method requires us to stay humble and accept that we just might be wrong. That principle applies to all scientists, including the three authors of this article. To that end, readers who disagree with any of our points should let us know! Maybe we can sort out our differences—and find common ground—through an adversarial collaboration.

The views presented in this article are solely those of the authors. They do not represent the views of any author’s employer or affiliation.

Categories: Critical Thinking, Skeptic

The New Skeptic: Welcome

Skeptic.com feed - Mon, 02/24/2025 - 7:37am

Consider the word “skeptic.” What image does it evoke? A cynic? Someone who doubts everything? Many conflate skepticism with pure doubt, but true skepticism is far richer. It is thoughtful inquiry and open-minded analysis. Its essence is captured by Spinoza’s timeless dictum, “not to ridicule, not to bewail, not to scorn human actions, but to understand them.”

This commitment to understanding different viewpoints is at the heart of meaningful discourse—yet, in practice, it is often abandoned. Public debates—on issues from abortion and climate change to the principle of free speech—tend to degenerate into a melee of name-calling and outrage. Genuine skepticism, however, demands thoughtful engagement. It insists that we immerse ourselves in diverse perspectives, striving to understand them thoroughly before reaching reasoned conclusions. It calls for intellectual honesty—the willingness to consider opposing arguments without succumbing to anger or mockery, even when the evidence seems overwhelmingly in favor of one side.

Skepticism requires the ability to grasp opposing arguments without resorting to anger or ridicule, even when the evidence overwhelmingly shows they’re wrong.

Indeed, one of the greatest obstacles to having an accurate understanding of reality isn’t just having your facts wrong—it’s the human tendency to moralize our biases. We rarely think of ourselves as extremists or ideologues.

Instead, we often embrace belief systems that validate our most destructive impulses, fooling ourselves into thinking we’re champions of justice. Some of the most toxic voices in our society are utterly convinced that they stand on the right side of history. Skepticism requires the courage and conscious effort to step outside that mindset.

If this resonates with you, then welcome—you’ve found your intellectual home.

People rarely think of themselves as extremists or ideologues. Instead, they find belief systems that validate their most destructive impulses. Skepticism requires the courage to step outside that mindset.

And there’s more: even when we adopt Spinoza’s dictum, our understanding of the world will be lacking if the very foundation of our knowledge—the very basis on which we and others make decisions—doesn’t accurately reflect the world. This is where critical thinking and the tools of science come in.

Skeptic magazine: A Commitment to Depth and Balance

Skeptic is a leading popular science magazine that explores the biggest questions in science, technology, society, and culture with a relentless commitment to truth. We don’t push an agenda—we follow the evidence. Every article, before it’s published, is put to the test: Is this really accurate? How could it be wrong? Do the cited sources support the claims being made? Our mission is clear—to cut through misinformation and dogma, delivering sharp, evidence-based analysis grounded in reality.

So, what can you expect to find in our pages? Long-form, analytical pieces that explore complex issues in depth constitute the vast majority of our work. At times, we may also feature op-eds—particularly when they emerge from rigorous research and present an intriguing, contrarian perspective. And when an issue carries significant weight, we may publish “the best case for...” articles—usually pairing them with an equally strong piece presenting the counterargument(s). As an example, our recent coverage of abortion after the Roe v. Wade ruling featured my own explanation of the pro-choice position, Danielle D’Souza Gill’s robust argument against abortion (quoting none other than Christopher Hitchens!), and a comprehensive Skeptic Research Center report analyzing public attitudes toward the issue (it turns out most people don’t understand the effects of overturning Roe).

In today’s age of activism, this balanced approach might seem unassertive. But the truth is, absolute certainties are rare. We can only approximate truth, and what constitutes “truth” varies by domain—science, politics, law, journalism, and ethics all demand different methods of reasoning. Put simply, our mission is to present what is known about the world as rigorously as possible.

You, the reader, decide where you stand.

So, are you simply believers in “The Science”?

Lately, the rallying cry “Trust the Science” has become a viral meme—a slogan that, on the surface, criticizes the limitations of science in being able to solve complex problems. Yet, a closer look reveals two deeper issues. First, some public science communicators are overstating consensus and stepping into the policy arena—territory traditionally reserved for politicians and activists. Second, many academic institutions have allowed ideology to seep into their departments, undermining strict adherence to the scientific method and neutral, dispassionate inquiry. This contamination isn’t confined to academia; even reputable popular science outlets have been affected. The net result is a degradation of trust in deep expertise and the scientific approach, allowing less rigorous voices to gain prominence.

In truth, trusting science—meaning evidence gathered through systematic, methodical inquiry—remains our best tool for uncovering reality. But conflating this with blind faith in public science figures is a category error. Science is not a priesthood, and consensus is not dogma.

Likewise, flawed research built on unfalsifiable assumptions can only be dismantled through relentless skepticism and the unwavering application of the scientific method; within its fallibility lies the greatest strength of science: self-correction.

Whether mistakes are made honestly or dishonestly, whether a fraud is knowingly or unknowingly perpetrated, in time it will be flushed out of the system through the lack of external verification. The cold fusion fiasco is a classic example of the system’s swift consequences for error and hasty publication; the purported link between vaccines and autism, first proposed in the late 1990s, has been thoroughly debunked and yet still persists in some circles, which indicates that reason, like freedom, requires eternal vigilance.

Despite these built-in mechanisms, science is still subject to a number of problems and fallacies that can trouble even the most careful scientists and skeptics. We can, however, find inspiration in those who have overcome them to make monumental contributions to our understanding of the world.

Charles Darwin is a sterling example of a scientist who struck the right balance between total acceptance of and devotion to the status quo, and an open willingness to explore and accept new ideas. This delicate balance forms the basis of the whole concept of paradigm shifts in the history of science.

The Next Chapter: A New Era for Skeptic

It’s been over 30 years since Pat Linse and I founded Skeptic magazine and the Skeptics Society, our 501(c)(3) nonprofit dedicated to science research and education—a modest beginning in my garage that has since blossomed into one of the world’s most influential popular science publications.

Along the way, we’ve had the honor of collaborating with some of the greatest thinkers of our time, including our current Editorial Board members Jared Diamond, Steven Pinker, and Richard Dawkins, among many others.

While Pat’s passing left an irreplaceable void, our team has doubled down on the mission to promote an evidence-based understanding of the world.

A glimpse into the past: Pat, Randi, Tanja, me, and our old offices—lost in the Los Angeles fires of January 2025 but not forgotten.

Today, I’m proud to announce that our esteemed Editorial Board is joined by three fresh voices—April Bleske-Rechek, Robert Maranto, and Catherine Salmon—whose incisive articles grace our recent issues and this new website.

We’re also delighted to have contributing editor Katherine Brodsky join us, launching her regular column, Culture Code.

Finally, we’re excited to reintroduce the Skeptic Research Center—led by social scientists Kevin McCaffree and Anondah Saide—now with its own dedicated site, where we’ll continue our mission of data-driven inquiry into some of today’s biggest issues.

In celebration of our rich 30+ year history, we’ll also be republishing some of our most timeless articles—works that remain as relevant and thought-provoking now as when they were first written—and publishing many, many more brand-new articles, podcasts, research reports, and even documentary films.

Welcome to the new Skeptic! Let’s explore reality together.

Categories: Critical Thinking, Skeptic

Black Lives Matter vs. Black Lives Saved: The Urgent Need for Better Policing

Skeptic.com feed - Mon, 02/24/2025 - 6:06am

To paraphrase Shakespeare’s Romeo and Juliet, Black Lives Matter activists and police unions are two houses both alike in indignity. Neither truly wants to improve policing in the most necessary ways: the former because it could undermine their view of the world and reduce revenue streams, including billions in donations; the latter for a more mundane reason. Cops, like other street-level bureaucrats, don’t want to change their standard operating procedures and face accountability for screwups. Unfortunately, with Black Lives Matter groups receiving billions in donations and helping to increase progressive turnout, with media and academia failing to provide accurate information to voters, and with police unions enjoying iconic status among conservatives (when they are better viewed as armed but equally inefficient teachers’ unions), we don’t see the political incentives for reform emerging any time soon, despite some recent local-level successes.

Injustice—How Progressives (and Some Conservatives) Got Us Into This Mess

Professors and other respectables rail against “deplorables,”1 but missing from political discourse is that mass rule, AKA populism, is not a mass pathological delusion. Rather, it arises for solid economic and social reasons. When problems that affect regular citizens get ignored by their leaders, people in democratic systems can get revenge at the ballot box. From inflation and foreign policy debacles to COVID-19 school shutdowns that went on far longer in the U.S. than in Europe at immense and immensely unequal social cost,2 ordinary people sense that the wealthy, bureaucrats, professionals, and professors often advance their own interests and fetishes at the expense of regular folks, and then use mainstream “knowledge producing” institutions, particularly academia and the mainstream news media, to cover up their failings.

Indeed, as Newsweek’s Batya Ungar-Sargon shows in her brilliant book Bad News: How Woke Media Is Undermining Democracy, the mainstream media now stand forthrightly behind the plutocrats. This can be documented empirically: the Center for Public Integrity points out that, during the 2016 presidential race, identified mainstream media journalists made 96 percent of their financial donations to one political party (the Democrats) and to the more mainstream of the candidates running.3 That basic instinct to hold the respectables accountable for their failings may have been the only thing keeping the Trump 2024 presidential candidacy viable despite his many and well-documented failings and debate loss against Kamala Harris.4

Perhaps nowhere is popular anger more justifiable than regarding crime, a trend best captured in the saga of the Black Lives Matter (BLM) movement. The roots of that failure go deep, and implicate multiple sacred cows in contemporary elite politics. As Anglo-Canadian political scientist Eric P. Kaufmann writes in his landmark work The Third Awokening,5 critical theory and other postmodern ideologies (AKA woke) have been evolving for over a century. To his credit, and unlike most conservatives, Kaufmann does not paint wokeism as entirely wrong—like populism, it too came about as a result of grievances experienced by the wider society. Rather, he describes it as needing moderating influences because, as with all other ideologies, it is not entirely (or in this case even mainly) correct. This is all the more so since so many among the woke, who are vastly overrepresented in the political class, lack experience with people from different walks of life. Their insulation, which Democratic commentator and political consultant James Carville—who coined the phrase “it’s the economy, stupid” that was key to then-Governor Bill Clinton’s 1992 victory over President George H.W. Bush—derides as “faculty lounge politics,”6 promotes fanaticism, declaring formerly extreme ideas not merely contestable or even mainstream, but off limits to criticism.

The nonnegotiable assumptions of late-stage woke include reflexively disparaging the achievements of Western civilization, while anointing non-Western or traditionally marginalized peoples and ideas as sacred. This deep script makes those (particularly wealthy Whites)7 with advanced degrees susceptible to believing the worst about White police officers, leaving influential segments of the political class subject to exploitation by grifters, with disastrous results. As one of us shows, many Americans believe that police pose a near genocidal threat to Black people, when in fact in a typical year fewer than 20 unarmed Black people (some of whom were attacking the police) are killed by nearly a half million White police officers, a far lower toll than one would expect given that the Black crime rate is more than double that of other cohorts.8 Likewise, The 1619 Project creator Nikole Hannah-Jones and many other activists claim that police departments evolved from racist slave-catching patrols, which is simply not true.9

The problem is that the Pulitzer Prize-winning Hannah-Jones and many other scholars and activists have an interest in maintaining the assertion that police are a threat to Black people, employing shocking visual images and taking advantage of widespread ignorance to make the case. The PBS News Hour, like other media outlets, has constantly highlighted the very rare instances in which White police officers actually do kill unarmed Black people, without ever placing them in the context of overall statistical evidence, which demonstrates that these tragic events are incredibly rare, or giving comparable treatment to the far more numerous White casualties of police.10

Since the Black Lives era began, fatal ambushes of police officers have risen dramatically, almost certainly due to demonizing of the police.

Academia is an even greater offender. At the opening plenary of the 2021 American Educational Research Association annual meeting, AERA President Shaun Harper spent most of his hour-long session lambasting police as a threat to Black people. Harper is a master at securing grants and climbing the hierarchy to run academic associations. Yet his views on cops are out of sync with both reality and with the views of Black voters, who have consistently refused to support defunding police, and whose opinions on criminal justice generally resemble those of Whites and Hispanics.11, 12

Effective, accountable policing can save lives, especially in Black communities. Reform, rather than de-policing, is crucial.

Harper’s views do, however, reflect the Critical Race Theory (CRT) approaches preferred by professors studying race, both in education and in the social sciences more broadly—24 of the 25 most cited works with Black Lives Matter in their titles do not involve research that would save Black lives in any conceivable time frame. The 19th most cited article does empirically study (and suggest better) police procedures, making a case for having police document their actions in writing not just every time they fire their guns, but every time they unholster them. This mere reform, likely forcing cops to think an extra second before acting, reduces police shootings of civilians without increasing casualties among officers.13 In sharp contrast, however, other highly cited “scholarly” articles on Black Lives Matter:

… explore social media use and activism (4, including one piece involving Ben and Jerry’s ice cream and BLM), racial activism and white attitudes (3), immigration and migrants (2), anti-Blackness in higher education, “democratic repair,” radically re-imagining law, anti-Blackness of global capital, urban geography, counseling psychology, research on K–12 schools, BLM and “technoscientific expertise amid solar transitions,” BLM and “evidence based outrage in obstetrics and gynecology,” and BLM and differential mortality in the electorate.14

It is probably worth repeating here that at least one article, written by senior academics at respected institutions, looks specifically at the influence of the Black Lives Matter movement on the naming of popular ice cream flavors at Ben and Jerry’s. These “studies” get professors tenure, grants, and notoriety, but will not save Black (or any) lives in any conceivable time frame.

Sometimes academia allies with progressive politicians. As Harvard University-affiliated Democratic pollster John Della Volpe boasted at a recent political science conference,15 Black Lives Matter offers dramatic symbols that can measurably increase progressive voter turnout. Left unsaid was that the dominant BLM narrative both misleads voters and gets Black people killed—or that questioning it can be risky. This tension likely explains why, after careful, peer-reviewed empirical research by economist Roland Fryer found that, controlling for suspect behavior, police do not disproportionately kill Black people (White suspects were in fact 27 percent more likely to be shot), Claudine Gay, then dean of Harvard’s Faculty of Arts and Sciences and later the university’s president, tried to fire Fryer.

She accused the tenured professor, an African-American academic star, of the use of inappropriate language, an offense for which Harvard’s own policies dictated sensitivity training. Fryer’s published findings were likely seen as attacking “sacred” beliefs and threatening external grants received on the premise of overwhelming police racism.16 As renegade journalist Batya Ungar-Sargon shows, the same dynamic holds in newsrooms, where reporting on Black Lives Matter’s spectacular failures to save Black (and other) lives is a firing offense.17 Indeed, were we not tenured professors at public universities in the South, we could likely get in trouble for writing essays like this one.

So what if progressives use anti-police demagoguery to win a few elections and grants? Isn’t that just election campaign “gamesmanship?” Does that hurt anyone? Yes, it does. Since the Black Lives era began, fatal ambushes of police officers have risen dramatically, almost certainly due to the demonization of police. More importantly, Black Lives Matter de-policing policies seem to have taken thousands of (mainly Black) lives.18 During the BLM era, dated here as beginning in 2012, the age-adjusted Black homicide rate has almost doubled, rising from 18.6 murders per 100,000 African-American citizens in 2011 to 32 murders per 100,000 in 2021.19 Murders of Black males rose to an astonishing peak of 56/100,000 during this period (in 2021), while Black women (9.0/100,000) came to “boast” a higher homicide rate than White men (6.4) and all American men (8.2).

Yet for all our lambasting of Black Lives Matter, police unions and leaders have not covered themselves in glory in the BLM era, largely supporting precinct-level decisions to de-police the dangerous parts (“no-go” or “slow-go” zones) of major cities, and refusing to support reforms that do cut crime but discomfort cops. Astonishingly, high homicide rates have little or no impact on whether police commissioners keep their jobs, giving cops few incentives to do better rather than just well enough.20

On the positive side, the political system is starting to respond to public anger over the increased crime and disorder of the Black Lives Matter era. In its presidential transition, the Biden administration largely sidelined the BLM portions of its racial reckoning agenda—even as it poured money into counterproductive and arguably racist DEI initiatives.21 More impactful responses came at the level of major city governments, which are those most affected by crime and disorder. Across progressive cities such as Seattle, Portland, and New York and less progressive cities like Philadelphia and Dallas, voters have started distancing themselves from Black Lives Matter policies. For the first time in decades, Seattle elected a Republican prosecutor (supported by most Democratic leaders). Uber-left Portland elected a prosecutor who was a Republican until recently. The Dallas mayor switched parties (from Democratic to Republican) out of frustration with progressive opposition to his (successful) efforts to cut crime by hiring and empowering more cops. New York elected a tough-on-crime (Democratic) former police captain to replace the prior progressive mayor. Even uber-progressives like Minnesota Governor and 2024 Democratic VP candidate Tim Walz did U-turns on issues such as whether police belong in schools, and what they can do while there.

Yet cops can do far more, and the Big Apple has shown the way. How that happened suggests that color matters, but the color is not Black so much as green.

New York City’s Turnaround: How a White Tourist’s Murder Made Black Lives Truly Matter

Sometimes history is shaped by unexpected (and undesirable) events that have positive impacts. A case in point is Brian Watkins, the 22-year-old White tourist from Provo, Utah, who was brutally murdered in front of his family on Labor Day Weekend in 1990 in NYC, while in town to watch the U.S. Open tennis tournament. His murder had historic impacts on New York, ultimately saving thousands of (mainly) Black lives, but it did not have the same impact nationally, a fact that says volumes about whose lives matter and why.

In 1990, New York City was among the most dangerous cities in the country. Today, as we show in our article “Which Police Departments Make Black Lives Matter?”22 despite high poverty, New York has the sixth lowest homicide rate among the 50 largest cities. That might not have happened without the brutal murder of Brian Watkins. As City Limits detailed in a 20-year retrospective23 on the Watkins killing, in 1990 New York City resembled the dystopian movie Escape from New York, with a record 2,245 homicides, including 75 murders of children under 16 and 35 killings of cab drivers, forced to risk their lives daily for their livelihoods. For their part, police, who found themselves outnumbered and sometimes outgunned, killed 41 civilians, around four times more than today.

The city that never sleeps was awash in blood, but NYC residents did not bleed equitably. Mainly, in what would turn out to be a common pattern, low-income minorities killed other low-income minorities in underpoliced neighborhoods. To use the first person for a bit, as I (Reilly) note in my 2020 book Taboo,24 and Rafael Mangual points out in his Criminal Injustice (2022),25 felony crime such as murder is remarkably concentrated by income and race. In my hometown of Chicago, the 10 relatively small community areas with the highest murder rates contain 53 percent of all recorded homicides in the city and have a total murder rate of 61.7/100,000, versus 18.2/100,000 for the city as a whole, with those districts included. In the even larger New York City, few wealthy businesspeople or tourists were affected by the most serious crime even during its horrendous peak.

Against that backdrop, after spending the day watching the U.S. Open, the Watkins family left their upscale hotel to enjoy Moroccan food in Greenwich Village. While waiting on a subway platform, they were assaulted by a “wolfpack” scouting for mugging victims so they could steal enough money to pay the $10-per-man cover charge at a nightclub.

In those bad old days, many young New Yorkers committed an occasional mugging to supplement their incomes, but this attack was unusually violent. In a matter of seconds, Brian Watkins’ brother and sister-in-law were roughed up while his father was knocked to the ground and slashed with a box-cutter, cutting his wallet out of his pocket. Brian’s mother was pulled down by her hair and kicked in the face and chest. While trying to protect her, Brian was fatally stabbed in the chest with a spring-handled butterfly knife. Not realizing the extent of his injury—a severed pulmonary artery—Brian chased the thieves until collapsing by a toll booth, dying shortly thereafter.

In Turnaround: How America’s Top Cop Reversed the Crime Epidemic,26 then-New York City Transit Police Chief and later NYPD Commissioner William Bratton recalled the Watkins killing as “among the worst nightmares” city leaders could imagine: “A tourist in the subway during a high-profile event with which the mayor is closely associated … gets stabbed and killed by a wolfpack. The murder made international headlines.”

Within hours a team of top cops apprehended the perpetrators, which just shows what police can do when a crime, such as the murder of a wealthy tourist, is made an actual priority. Twenty years later, rotting in a prison cell, Brian’s killer sadly recalled his decisions that night as the worst of his life. Had police been in control of the subways, the teen might have been deterred from making the decision that in essence ended two lives.

Unlike the great majority of the other 2,244 murder victims in 1990, the dead Brian mattered by name to Big Apple politicians. Bratton wrote that New York Governor Mario Cuomo “understood the impact this killing could have on New York tourism.” With hundreds of millions of dollars at stake, two days after the Watkins murder, Bratton got a call out of the blue from a top aide to the Governor asking whether transit police could make the subways safe if the state kicked in $40 million—big money in 1990. For Bratton, “this was the turnaround I needed.”

With the cash for more transit police, communications and data analytic tools to put cops where crimes occurred, and better police armaments, subway crime plummeted. Later, NYPD Commissioner Bratton drove homicide down by over a third in just two years with similar tactics, and by replacing hundreds of ineffective administrators with better leaders, as Patrick Wolf and one of us (Maranto) detail in “Cops, Teachers, and the Art of the Impossible: Explaining the Lack of Diffusion of Innovations That Make Impossible Jobs Possible.”27 In another article coauthored with Domonic Bearfield,28 we estimated that as of 2020, NYPD’s reforms saved over 20,000 lives, disproportionately of Black Americans.

So how did NYPD do it? New York got serious about both recruiting and training great, tough cops and about holding them accountable. In the 1990s, NYPD Commissioner William Bratton imposed CompStat, a statistical program reporting crimes by location in real time. In weekly meetings, NYPD leaders praised precinct commanders who cut crime and grilled others. They made ineffective managers get better or get out. Homicides fell by over a third in just two years, followed by steady declines since.

Let us repeat part of that for emphasis: NYPD leadership made ineffective leaders get better or get out. This is a tool almost never used by police reformers at the level of city governance, who don’t want to be hated by officers, and who are also hamstrung by civil service rules and union contracts that make it difficult to terminate bad police officers, and almost impossible to jettison bad managers. NYPD was the exception.

Because of obscure personnel reforms by Benjamin Ward, the first Black NYPD commissioner, who wanted to shake up the NYPD’s “Irish Mafia” of officers (where promotion often depended on what some called “the friends and family plan”), NYPD commissioners have unusual power over personnel. The commissioner can bust precinct commanders and other key leaders back in rank almost to the street level. Since retirement is based on pay at an officer’s rank, this essentially forces managers into early retirement, with the commissioner getting to pick their replacements rather than having seniority or other civil service rules determine the outcomes.

Legendary police leader John Timoney, who was Bratton’s Chief of Department in NYPD before going on to successfully run departments in Philadelphia and Miami, told us that he had the ability to personally fire over 300 cops in NYPD compared to just two in Philadelphia—the two being himself and his driver. In the latter city, everyone else was covered by civil service tenure.29 Politicians such as Tim Walz were publicly emphasizing their focus on saving Black lives, but showed no enthusiasm for personnel reforms such as these, which could actually get the job done.

Of course, firing cops can’t work if you don’t know who to fire. Since the mid-1990s, NYPD has strengthened its internal affairs unit to get unprofessional cops, in the mold of Minneapolis’ Derek Chauvin (the officer who killed George Floyd, and who had 18 prior citizen complaints), off the streets before rather than after a disaster. Longtime NYPD Internal Affairs leader Charles Campisi details this process well in Blue on Blue: An Insider’s Story of Good Cops Catching Bad Cops.30

Yet none of this might have happened without the brutal murder of Brian Watkins. In a real sense, the Watkins family suffered so thousands could live. They deserve a monument.

How to Make Black (and All) Lives Matter

Rather than supporting neo-Marxist activism portraying police as fascists enforcing “late-stage capitalist technocratic white supremacy,” or similarly impenetrable academic jargon that seeks to pit citizens against police and fails to solve problems, we see police departments as public organizations staffed by unionized employees, some of whom are public servants, some of whom mainly serve themselves, and most of whom are somewhere in between.31 Just like companies, some police departments are incredibly successful; some are so ineffective that it might make sense to defund them and start over … and some—most by far—are somewhere in between.

So the real question for those of us who want to make police better rather than run for office or get government grants, is how we can get low-performing police departments to learn from the best, and how we can get the mayors, city councils, governors, and state legislatures overseeing police to enact the sort of civil service reforms, like higher pay coupled with abolishing civil service tenure, that are likely to succeed in getting police to make all lives matter.

For us, the key to getting elected politicians to take police reform seriously is to make police reform a serious election issue, rather than a contest over how well one virtue signals for BLM. To do that, first and foremost, failed police departments and the mayors and city council members running them must be shamed into action. Businesses should be encouraged to relocate from dangerous cities to safe ones. That starts with data.

To make that happen, earlier this year, in a leading public administration journal, along with Patrick Wolf, we published “Which Police Departments Make Black Lives Matter?,” an article that anyone can download for free.32 Here, we ranked police in the 50 largest U.S. cities (using 2020 statistics, but the overall rankings were stable from 2015–2020) by their effectiveness in keeping homicides low and not taking civilian lives, while adjusting for poverty, which makes policing more difficult. Some departments excel. On our Police Professionalism Index, New York City easily takes first place, just as it did in 2015. The top 18 cities also include Boston, MA; Mesa, AZ; Raleigh, NC; Virginia Beach, VA; five California cities including San Diego and San Jose; and five Texas cities including El Paso and Austin.

In contrast, by a wide margin, Baltimore ranked dead last (as it did in 2015). Baltimore’s homicide rate (56.12 per 100,000 population) was roughly 15 times higher than New York’s, and Baltimore police kill roughly ten times as many civilians per capita as NYPD. Baltimoreans should be outraged, particularly since, as noted above, top-ranked NYPD used to be in Baltimore’s league. Fifty years ago, NYPD killed about 100 civilians annually, compared to 10 today. In 1990, New York City had 2,245 homicides, mostly people of color, compared to just 462 in 2020. And, as discussed earlier, reforming NYPD saved tens of thousands of lives, mainly Black lives, while at the same time reducing incarceration.

If democracy means anything, it means the ability to influence government, and the first duty of government is protecting life and property. For too long, this most basic of needs has been denied to people without means, who are disproportionately people of color. If we want to increase trust in government, we must start with the police. Doing that requires real data, not agitprop that paints cops as racist killers. To enable that, the U.S. Department of Justice (DOJ) needs to rank large cities on their policing, as we did, recognizing those doing well and calling out those doing badly. The DOJ should also issue reports on which cities enable their police chiefs to terminate problematic officers.

This methodical approach would offend leftist culture warriors and rightist police unions alike. At the local level, to copy NYPD’s success, voters in Baltimore and other poorly policed cities such as Kansas City, Las Vegas, Albuquerque, and Miami must ask pointed questions about their police, such as:

  • Can police chiefs hire and retain the great officers they need? If not, why not?
  • Can police chiefs fire subordinates who are not up to their tough jobs?
  • Are there enough cops to do the job?
  • Do police use CompStat to copy what works in fighting crime?
  • Does the internal affairs unit hold brutal cops accountable?

Building a great police department takes time, but the NYPD has shown how it can be done. It is long past time to stop political virtue signaling and start reforming policing to save all lives.

Categories: Critical Thinking, Skeptic

The Alef Flying Car

neurologicablog Feed - Mon, 02/24/2025 - 5:01am

The flying car is an icon of futuristic technology – in more ways than one. This is partly why I can’t resist a good flying car story. I was recently sent this YouTube video on the Alef flying car. The company says this is a street-legal flying car with vertical takeoff and landing. They also demonstrate that they have tested this vehicle in urban environments. It is available now for pre-order (estimated price, $300k). The company claims: “Alef will deliver a safe, affordable vehicle to transform your everyday commute.” The claim sounds reminiscent of claims made for the Segway (which recently went defunct).

The flying car has a long history as a promise of future technology. As a technology buff, nerd, and sci-fi fan, I have been fascinated with them my entire life. I have also seen countless prototype flying cars come and go, an endless progression of overhyped promises that have never delivered. I try not to let this make me cynical – but I am cautious and skeptical. I even wrote an entire book about the foibles of predicting future technology, in which flying cars featured prominently.

So of course I met the claims for the Alef flying car with a fair degree of skepticism – which has proven entirely justified. First I will say that the Alef flying car does appear to function as a car and can fly like a drone. But I immediately noticed in the video that as a car, it does not go terribly fast. You have to do some digging, but I found the technical specs which say that it has a maximum road speed of 25 MPH.  It also claims a road range of 200 miles, and an air range of 110 miles. It is an EV with a gas motor to extend battery life in flight, with eight electric motors and eight propellers. It is also single passenger. It’s basically a drone with a frame shaped like a car with tires and weak motors – a drone that can taxi on roads.

It’s a good illustration of the inherent hurdles to the fully-realized flying car of our dreams, mostly rooted in the laws of physics. But before I go there: as is, can this be a useful vehicle? I suppose, for very specific applications. It is being marketed as a commuter car, which makes sense, as it is single passenger (this is no family car). The limited range also makes it suited to commuting (the average daily commute in the US is around 42 miles).

That 25 MPH limit, however, seems like a killer. You can’t drive this thing on the highway, or on many roads, in fact. But, trying to be as charitable as possible, that may be adequate for congested city driving. It is also useful for pulling the vehicle out of the garage into a space with no overhead obstruction. Then you would essentially fly to your destination, land in a suitable location, and then drive to your parking space. If you are only driving into the parking garage, the 25 MPH is fine. So again – it’s really a drone that can taxi on public roads.

The company claims the vehicle is safe, and that seems plausible. Computer aided drone control is fairly advanced now, and AI is only making it better. The real question is – would you need a pilot’s license to fly it? How much training would be involved? And what are the weather conditions in which it is safe to fly? Where you live, what percentage of days would the drone car be safe to fly, and how easy would it be to be stuck at work (or need to take an Uber) because the weather unexpectedly turned for the worse? And if you are avoiding even the potential of bad weather, how much further does this restrict your flying days?

There are obviously lots of regulatory issues as well. Will cities allow the vehicles to fly overhead? What happens if they become popular and we see a significant increase in their use? How will air traffic be managed? If widely adopted, we will then see what their real safety statistics are. How many people will fly into power lines, etc.?

What all this means is that a vehicle like this may be great as “James Bond” technology. This means, if you are the only one with the tech, and you don’t have to worry about regulations (because you’re a spy), it may help you get away from the bad guys, or quickly cross a city frozen with grid lock. (Let’s face it, you can totally see James Bond in this thing.) But as a widely adopted technology, there are significant issues.

For me the bottom line is that this technology is a great proof-of-concept, and I welcome anything that incrementally advances the technology. It may also find a niche somewhere, but I don’t think this will become the Tesla of flying cars, or that this will transform city commuting. It does help demonstrate where the technology is. We are seeing the benefits of improving battery technology, and improving drone technology. But is this the promised “flying car”? I think the answer is still no.

For me a true flying car functions fully as a car and as a flying conveyance. What we often see are planes that can drive on the road, and now drones that can drive on the road. But they are not really cars, or they are terrible cars. You would never drive the Alef flying car as a car – again, at most you would taxi it to and from its parking space.

What will it take to have a true flying car? I do think the drone approach is much better than the plane approach or the jet-pack approach. Drone technology is definitely the way to go. Before it is practical, however, we need better battery tech. The Alef uses lithium-ion batteries and lithium polymer batteries. Perhaps eventually it will use silicon-anode lithium batteries, which have a higher energy density. But we may need batteries with triple or more the energy density of current lithium-ion batteries before flying drone cars will be a practical reality. Still, we can feasibly get there.

Perhaps, however, the “flying car” is just a futuristic pipe dream. We do have to consider whether the concept is even valid, or whether we are just committing a “futurism fallacy” by projecting current technology into the future. We don’t necessarily have to do things in the same way, just with better technology. The thought process is: I use my car for transportation, wouldn’t it be great if my car could fly? Perhaps the trade-offs of making a single vehicle that is both a good car and a good drone are just not worth it. Perhaps we should just make the best drone possible for human transportation and specific applications. We may need to develop some infrastructure to accommodate them.

In a city there may be other combinations of travel that work better. You may take an e-scooter, or some form of public transportation, to the drone. Then a drone can take you across the city, or across a geographic obstacle. Personal drones may be used for commuting, but then you may have a specific pad at your home and another at work for landing. That seems easier than designing a drone-car just to drive 30 feet to the takeoff location.

If we go far enough into the future, where technology is much more advanced (like batteries with 10 times the energy density of current tech), then flying cars may eventually become practical. But even then there may be little reason to choose that tradeoff.

The post The Alef Flying Car first appeared on NeuroLogica Blog.

Categories: Skeptic

White Coat Crime

Skeptic.com feed - Sun, 02/23/2025 - 3:52pm

She murdered her patients. At least, that’s what the prosecutors said. All it took to get powerful opioids from California internist Lisa Tseng was a brief conversation. No X-rays. No lab tests. No medical exam. Video surveillance shows an undercover officer posing as a patient who asks Dr. Tseng for methadone (an opioid) and Xanax (an anti-anxiety medication), drugs that can form a deadly cocktail when combined. He tells her that he is in recovery and takes the drugs at night with alcohol to “take the edge off.” He makes clear that he is not in pain and does not plan to use the medications to treat a medical condition. Tseng writes the prescription—after the agent hands over $75 cash.1

Did she know what she was doing was wrong? Tseng received desperate calls from patients’ families and friends concerned that their loved ones were hooked on the meds she prescribed.2 She did not stop. Coroners and law enforcement agents called Dr. Tseng each time a patient died—14 in total.3 She did not stop. Perhaps she thought the financial perks outweighed the risks. Dr. Tseng’s reckless prescribing raked in $3,000 a day and exceeded $5 million in three years. 

Dr. Tseng’s prescribing spree ended in 2015, when a jury convicted her on three counts of second-degree murder.4 In 2016, Superior Court Judge George G. Lomeli imposed a sentence of 30 years to life in prison. The trial lasted eight weeks. It included 77 witnesses and 250 pieces of evidence. Families of overdose victims praised the judge’s decision and concluded that “justice has been served.”5

Dr. Tseng was the first California physician ever convicted of murder for overprescribing opioids, and one of the first in the United States. Her case was a turning point for law enforcement because it created a playbook for subsequent prosecutions and because it sent a clear signal to physicians across the nation: you could be next. “The message this case sends is you can’t hide behind a white lab coat and commit crimes,” declared Deputy District Attorney John Niedermann. “A lab coat and stethoscope are no shield.” Medical experts warned that Tseng’s case could scare physicians away from prescribing opioids and leave chronic pain patients to suffer without care.6

Illustration by Izhar Cohen for SKEPTIC

Law enforcement is responsible for making sure that doctors only prescribe opioids legally, which is no easy task. However, some physicians make it easy when they engage in behavior that is explicitly and undeniably criminal. These are the cases that make headlines. Opioids are illegal by default. Federal law gives doctors a special exemption to prescribe them for legitimate medical purposes, particularly pain. But how can a physician be legitimate if he has a parking lot filled with out-of-state license plates and a line of patients snaking around the building as if they are waiting to buy concert tickets? If he asks the patient to state his blood pressure while a brand-new blood pressure cuff hangs on the wall, unused? If he can’t tell the difference between a dog X-ray and a human one? 

It sounds far-fetched, but in July 2012, Glendora, CA, police arrested physician Rolando Lodevico Atiga for prescribing powerful opioids to an undercover officer. The officer used a dog X-ray—with the tail clearly visible—to prove he had a bad back. Police Captain Tim Staab told CBS, “Either Sparky the dog really needs Percocet or this doctor is a drug dealer masquerading as a physician.”7 The medical board suspended Dr. Atiga’s license in August 2012.8 Then criminal proceedings were suspended in 2013 due to Dr. Atiga’s poor mental health and inability to stand trial.9

Doctors are hard to investigate and even harder to prosecute. It is difficult for judges and juries to wrap their minds around the idea that physicians perpetrate crimes. The image of the “dirty doctor” just doesn’t mesh with the popular image of “doctor as savior.” And many overdoses involve multiple drugs, making it hard to pin a death on a single drug or a single doctor.10 Still, over the past decade, judges and juries have put physicians behind bars. Law enforcement arrests scores of physicians for opioid crimes each year. They charge physicians with the same counts as illicit drug dealers: fraud, unlawful distribution, racketeering, manslaughter, and murder.11 Doctors are legally required to keep extensive records that investigators use to prove criminal activity. Physicians who avoid arrest still face steep penalties, such as losing their medical license, losing the ability to prescribe controlled substances, or paying a hefty fine. 

It was not always this way. As early as the mid-1990s, evidence showed that physicians were generously doling out opioids, but the first murder conviction did not occur until 2016.12 What happened over those twenty years that unleashed prosecutors’ power and helped them win cases against providers? The answer lies in organizational change, education, and technological innovation. New organizations centered on criminal healthcare providers cropped up, enforcement agents came together to share strategies, and Prescription Drug Monitoring Programs (PDMPs) spread across the nation that made targeting physicians a far easier task. 

Reshaping the Enforcement Landscape 

A lot has changed since the days when pill mills popped up like weeds and law enforcement had no way to stop them. Enforcement agencies have responded to the opioid crisis with three strategies: (1) organizing task forces, (2) educating investigators, and (3) using PDMPs. Together, these efforts have made physician cases easier and faster to initiate, even if some challenges persist. 

Task forces are subunits of enforcement agencies that bring together individuals who have different resources and expertise to address a common goal. Federal agencies such as the Drug Enforcement Administration (DEA) and local agencies such as sheriffs’ departments have devoted themselves to physician cases by creating task forces centered on prescription opioids. DEA task forces do much of the heavy lifting, a major difference from decades ago. 

The DEA plays the biggest federal role in regulating opioids. The DEA’s Office of Diversion Control oversees registrants—physicians, pharmacies, hospitals, manufacturers, wholesalers, and drug distributors—who must register with the agency in order to provide controlled substances. The Controlled Substances Act (CSA) designates these registrants as part of a “closed system of distribution,” which means that the DEA tracks everyone who handles opioids along the supply chain and accounts for every transaction. The DEA monitors opioid transactions using the Automation of Reports and Consolidated Orders System (ARCOS), a database that tracks controlled substances all the way from manufacture to public distribution.13

For decades, the Office of Diversion Control14 was considered a lesser part of the DEA, and the agents who worked for it—known as Diversion Investigators (DIs)—were treated as less important than Special Agents (SAs), who work for the Operations Division. The position of DI was originally created to relieve SAs from the burden of inspecting and auditing manufacturers and distributors of controlled substances as mandated by the CSA. Handing off those tasks to DIs freed SAs to focus on heroin and cocaine trafficking. This hierarchy persisted into the late 1990s, the heyday of opioid prescribing, when physicians treated pain as a fifth vital sign and were urged to treat it aggressively. With physicians and regulators on board with generous opioid prescribing, the diversion office found itself underfunded and understaffed. Laura Nagel, who was appointed head of the DEA’s Office of Diversion Control in 2000, led DIs who struggled to get resources and respect. Unaware of the giant opioid wave poised to crest a few short years later, SAs thought prescription opioids were nothing more than a child’s version of the hard drugs they pursued. 

That all changed in the early 2000s when, for the first time in U.S. history, Americans were more likely to overdose on prescription drugs than illegal ones.15 Suddenly, DIs were in high demand. In late 2006, the DEA created task forces called Tactical Diversion Squads. These included DIs, SAs, and Task Force Officers (TFOs), who are local police deputized to work with the DEA. DIs understood healthcare norms; SAs could arrest people; and TFOs had fine-grained knowledge of their communities. This arrangement created the organizational synergy needed to pursue doctors. 

Local agencies such as police departments and sheriffs’ departments also created narcotics task forces that enabled them to exchange information with other local agencies. Members of such task forces can represent various police departments, the highway patrol, the district attorney’s office, the department of healthcare services, and the medical board. They may also ally with the FBI, the DEA, and the Food and Drug Administration (FDA). 

Federal and local agencies have complementary resources. Local police departments have insufficient funding to do provider cases, so they collaborate with federal law enforcement either formally, by sending one of their officers to the DEA’s task force, or informally, by working cases with them. Federal agencies have more money and equipment. They can perform federal wiretaps, which are expensive and require specialized technology. They can also afford expert witnesses, whose expertise is crucial in building a solid case against a doctor. Local agencies, on the other hand, have more agents, so they are better equipped to conduct undercover investigations and process the mountains of paperwork that a doctor case generates. 

Task forces are only one site of information exchange. Enforcement agents have found various ways to break down information silos and thereby distribute knowledge. Years of failed attempts have taught investigators and prosecutors both what works and what doesn’t. They know which questions to ask, which behaviors to look for, and which charges to bring. When task force members, eager to share what they had learned with others, lacked formal venues in which to do so, they got creative. 

Together, new organizations, new knowledge, and new technology expand law enforcement capacity. These changes are evident when we consider what investigation and prosecution look like today. Let’s turn to PDMPs as an example. 

Prescription Drug Monitoring Programs 

PDMPs have dramatically transformed the ways that investigators and prosecutors conduct cases against providers. New organizational developments paved the way for monitoring programs to have the greatest impact. Enforcement agencies’ impetus to investigate providers coincided with the arrival of technology that made those investigations easier and faster. Enforcement agents find both provider and patient data useful—the former because it shows patterns of providers’ behavior, and the latter because it helps law enforcement convince patients to become confidential informants in exchange for leniency in their own cases. 

Healthcare providers have direct access to the database, but law enforcement access is more complicated. State laws restrict which enforcement agencies can get access and how. Some states give law enforcement direct access to data. In those states, enforcement agents have their own login to the system but can only legally access the data in the process of an active case, meaning that they are already investigating a specific crime. They can’t just search through the database to see what they find. Other states require law enforcement to request access from the agency that houses the PDMP, and the agency returns only information that is relevant to the case. Still other states require enforcement agents to obtain a warrant or a subpoena to access the data.16, 17 Regardless of how they get the information, PDMPs are a boon to law enforcement because they make tasks easier and more efficient. 

A prescription drug monitoring program (PDMP) is an electronic database that tracks controlled substance prescriptions in a state. (Source: CDC.gov)

Physician cases are reactive instead of proactive, which creates a barrier to starting an investigation. Enforcement agents say that they do not go out looking for bad doctors but find them through tips they receive from a patient, a parent, a healthcare provider, or another agency. They use information from tips to gather evidence and determine whether the case is worth pursuing. For a provider to come under law enforcement scrutiny, someone has to notice their behavior, feel compelled to do something, and know who to call.

The legwork necessary to investigate a physician traditionally posed a second barrier because investigators had to travel from pharmacy to pharmacy to gather the physician’s prescriptions. Now, thanks to the PDMP, that legwork has become deskwork. Instead of spending time on a potentially fruitless pharmacy expedition, enforcement agents simply look up the physician in the database or request access to information from the agency that controls it. Investigators can obtain a physician’s prescribing history, analyze prescribing patterns, and link their findings to other databases without setting foot outside the office. 

PDMP data are a starting point. They do not make a case alone. Investigators examine the data from various angles and try to come up with alternative explanations for the patterns they see.

PDMPs also have their drawbacks. Investigators can use the database to track physicians, but a smart criminal physician also uses the database to monitor their patients and identify potential undercover investigators. People who are addicted to or diverting medications usually have a long PDMP report because they are actively trying to obtain opioids from various physicians. Undercover agents do not have a report at all, so running a report is a way to root out narcs. Knowing this, law enforcement finds ways to create fake reports so that they blend in with other patients. Overall, PDMPs benefit law enforcement because they improve the speed and accuracy of their investigations. Better investigations lead to more successful prosecutions (that is, a greater percentage of convictions). 

The War on Drug Doctors 

Drug cases capture media attention for a reason. Whether on popular TV shows or the evening news, drug cases are sexy. Towering bags of confiscated drugs and arrays of automatic rifles captivate audiences. This stagecraft also helps to justify the War on Drugs. Props such as drugs and guns show that the “bad guys,” the drug dealers, are armed and dangerous. They also show how desperately we need the “good guys,” the investigators and prosecutors, to keep the bad guys off the street. 

By comparison, physician cases are decidedly unsexy. There are no drugs. There are no guns. There is paperwork. Stacks and stacks of paperwork. Not only do prosecutors have to prove to judges and juries that doctors—professionals revered as pillars of our society—are criminals, but they have to do so using something as uninspiring as paperwork. It’s a tough sell.

This essay was excerpted and adapted by the author from Policing Patients: Treatment and Surveillance on the Frontlines of the Opioid Crisis. Copyright © 2024 by Elizabeth Chiarello. Reprinted by permission of Princeton University Press.

Unraveling the Myths Surrounding the Shroud of Turin

Skeptic.com feed - Sun, 02/23/2025 - 3:20pm

Pseudoscience can often survive because of the continuous publication and dissemination of alleged new discoveries that cast doubt on the findings of “official science.” Mass media regularly republish these “discoveries,” which question otherwise clear and well-established findings. The Shroud of Turin is a perfect example: each year, new statements and new “studies” surface, and instill in the public the (false) idea that there is sufficient evidence to think that the relic is not medieval, but does in fact date back to the time of Christ.

For example, in recent weeks newspapers around the world have reported1 that a group of Italian researchers discovered an innovative way to date the fabric of the Shroud of Turin, and that this dating disproved the results of radiocarbon dating carried out in 1988 (which had placed the creation of the Shroud to somewhere between the 13th and 14th centuries). According to these media reports, the cloth is likely to be around 2,000 years old.

However, this “information” is incorrect, and the media did not bother to check the reliability of what they published. If we examine the reports closely, here is what actually happened:

  1. The article by the Italian scholars was published in 2022, so it is not new.2 The simple fact is that a news outlet in the U.S. broke the news two years late—and then many others simply copied from it.
  2. The proposed dating system is not normally used, nor has it been validated by the scientific community. It is based on the use of X-rays (Wide-Angle X-ray Scattering, or WAXS), which are supposed to measure the degradation of cellulose fibers. This system was invented in 2019 by these very same authors, specifically for the purpose of dating the Shroud, and it is not used by anyone else.3
  3. The method is highly unreliable, because fabric aging is strongly influenced by environmental factors, such as humidity, temperature, light exposure, storage conditions, and the possible presence of microorganisms or of various chemicals, all of which are unpredictable variables that can heavily alter the results. Thus, it cannot provide a reliable dating that is remotely comparable to that provided by the proven Carbon-14 method, which dates the Shroud as being of medieval origin.
  4. The inventors of the WAXS method are not neutral scientists; they are sindonologists (i.e., people who study the Shroud of Turin from a believing perspective; from the Greek word sindòn, used in the Gospels to define the type of fine fabric, undoubtedly linen, with which the corpse of Jesus was believed to be wrapped), who for years have been trying hard to prove that the Shroud is authentic. None of them are experts in either dating or textiles. The main proponents of the research are Giulio Fanti and Liberato De Caro. Both are followers of the Italian pseudomystic Maria Valtorta, who died in 1961 and who, bedridden by illness, told of receiving heavenly messages and seeing the entire life of Christ, which she described in many books. Although the Catholic Church has put these books on the Index (a catalog of writings condemned as contrary to faith or morals), Fanti and De Caro believe in Valtorta’s visions. Fanti also believes he received personal messages from Jesus and Our Lady, and De Caro, a deacon, is known for his belief in creationism.
  5. The authors were never allowed to extract material directly from the Shroud. What they used was a very small sample (approx. 0.5 mm × 1 mm), which they claim originally belonged to the Shroud.
  6. Between 2014 and 2022, these same two authors invented four different systems to date textiles in order to authenticate the Shroud: measurement of the mechanical properties of individual linen fibers, Raman spectroscopy, Fourier transform infrared spectroscopy (Fanti), and WAXS (De Caro).
  7. Their conclusions are considered so unreliable that even a journal published by the Center for Sindonology in Turin (which pursues proof of the Shroud’s authenticity) urged people to be cautious of their conclusions.4
Display of the Shroud in the chapel of the Dukes of Savoy; miniature from the Prayer Book donated in 1559 by Cristoforo Duc of Moncalieri to Margaret of Valois. Turin, Royal Library, Varia 84, f. 3v. Courtesy of the Ministry for Cultural Heritage and Activities, Regional Directorate for Cultural and Landscape Heritage of Piedmont

Around that same time, an article published in The Telegraph5 (and later recycled by other outlets) garnered significant interest. It stated that “new research by Cicero Moraes, a world leader in forensic facial reconstruction software, showed it could not have enveloped a corpse.” In fact, “the expert found the image on the shroud could only be created if a cloth was placed over a bas-relief of a human figure, such as a shallow stone carving.” Cicero Moraes is right, but his research is not particularly groundbreaking. For at least four centuries, we have known that the body image on the Shroud is comparable to an orthogonal projection onto a plane, which certainly could not have been created through contact with a three-dimensional body.

Without any need for computer imaging, practical experiments of putting a piece of cloth on a statue or on a human body have been conducted and described in a book published exactly four hundred years ago by French historian Jean-Jacques Chifflet.6 A little over two hundred years later, in the 19th century, Italian historian Lazzaro Giuseppe Piano wrote: “Let the face of a statue be dyed with color and let a white cloth be applied to it; if, after having pressed it a bit by hand, the cloth is removed and spread out, one will see on it a distorted image, much wider than the face itself.”7 Cicero Moraes has certainly created some beautiful images with the help of software, and for that his efforts are to be appreciated, but he certainly did not uncover anything that we did not already know.

Why study a shroud?

I have devoted myself to studying the Shroud of Turin for over a decade,8 along with all the facets of sindonology, the set of scientific disciplines tasked with determining the authenticity of such relics. My work began with an in-depth analysis of the theory linking the Knights Templar to the relic,9 and the theory according to which the Mandylion of Edessa (more on this below) and the Shroud are one and the same.10 Studying the fabric also revealed that the textile has a complex structure that would have required a sufficiently advanced loom, that is, a horizontal treadle loom with four shafts, probably introduced by Flemish artisans in the 13th century, while the archaeological record provides clear evidence that the Shroud is completely different from all the cloths woven in ancient Palestine.11

As a historian I was more interested in the history of the Shroud than in determining its authenticity as the burial cloth of Jesus, although the evidence is clear that it was not. That said, for a historiographical reconstruction seeking to address the history of the relationship between faith and science in relation to relics, the Shroud does offer a useful case for understanding how insistence on a relic’s authenticity, along with a lack of interest on the part of mainstream science, leaves ample room for pseudoscientific arguments.

Relics

The Christian cult of relics revolves around the desire to perpetuate the memory of illustrious figures and encourage religious devotion towards them. Initially limited to veneration of the (sometimes alleged) bodies of martyrs, over the centuries it extended to include the bodies of saints and, finally, objects that had come into contact with them. As Christianity spread, the ancient custom of making pilgrimages to the burial places of saints was accompanied by the custom of transferring their relics (or parts of them) to the furthest corners of the Christian world. These transfers, called “translations,” had several effects:

  1. They increased devotion towards the person from whom the relic derived.
  2. They were believed to protect against war, natural disasters, and disease, and to attract healings, conversions, miracles, and visions.
  3. They heightened interest in the place hosting the relics, thus attracting pilgrims and so enriching both the church and the city that housed them.
  4. They increased the prestige of the owners of relics.

Relics are objects without intrinsic or objective value outside of the specific religious environment that attributes a significance to them. In a religious environment, however, they become semiophores, or “objects which were of absolutely no use, but which, being endowed with meaning, represented the invisible.”12 However, enthusiasm for relics tended to wane over time unless it was periodically reawakened through constant efforts or significant events, such as festivals, acts of worship, or translations, along with claims of healings, apparitions, and miracles. When a relic fails to attract attention to itself, or loses such appeal, it becomes nearly indistinguishable from any other object.

As the demand for relics grew among not only the faithful masses but also the fortunate abbots, bishops, prelates, and princes owning or associated with them, the supply inevitably increased. One of the effects of this drive was the frenzied search for ancient relics in holy places. Though the searches were often conducted in good faith, our modern perspective, equipped with greater historical and scientific expertise, can hardly consider most of these relics to be authentic. It was thus almost inevitable that relic intermediaries and dealers emerged—some honest, believing brokers, but others outright dishonest fraudsters. There were so many of the latter that St. Augustine of Hippo famously spoke out against the trade in martyrs’ relics as early as the 5th century.

The Matter of Relic Authenticity

For a long time, many scholars did not consider relics to be objects deserving of interest to professional historians because the cult of veneration surrounding them was regarded as a purely devotional set of practices. Historians who study relics from the perspective of the history of piety, devotion, worship, beliefs, secular or ecclesiastical politics, and social and economic impact, should also speak to the origin of such relics, and hence their authenticity. In the case of relics of lesser value—those that have been downgraded, forgotten, undervalued, or removed from worship—the historian’s task is relatively simple.

By contrast, historians and scientists face greater resistance when dealing with fake relics that still attract great devotional interest. Many historians sidestep the authenticity issue by overlooking the question of the relic’s origin, instead focusing only on what the faithful have believed over time and the role of the relic in history. While this approach is legitimate, what people most want to know about holy relics like the Shroud of Turin today is their authenticity.

The Shroud of Turin is part of the trove of Christ-related relics that were never mentioned in ancient times. When the search for relics in the Holy Land began—with the discovery of the (alleged) true cross, belatedly attributed to Helena, mother of the emperor Constantine—no one at that time ever claimed to have found Jesus’ burial cloths, nor is there any record of anyone having thought to look for them.

There is more than one shroud.

The earliest travel accounts of pilgrims visiting the sites of Jesus in the 4th century show that people venerated various relics, but they do not mention a shroud. By the beginning of the 6th century, pilgrims to Jerusalem were shown what were claimed to be the spear with which Jesus was stabbed, the crown of thorns, the reed and sponge of his passion, the chalice of the Last Supper, the tray on which John the Baptist’s head was placed, the bed of the paralytic healed by Jesus, the stone on which the Lord left an imprint of his shoulders, and the stone where Our Lady sat to rest after dismounting from her donkey. But no shroud. It was not until the second half of the 6th century that pilgrims began to mention relics of Jesus’ burial cloths being in Jerusalem, albeit with various doubts as to where they had been preserved and what form they took.

The next step was the systematic and often unverified discovery of additional—and preposterous—relics from the Holy Land, including the bathtub of baby Jesus, his cradle, nappy, footprints, foreskin, umbilical cord, milk teeth, the tail of the donkey on which He entered Jerusalem, the crockery from the Last Supper, the scourging pillar, His blood, the relics of the bodies of Jesus’ grandparents and the Three Wise Men, and even the milk from the Virgin Mary and her wedding ring. Obviously, objects related to Jesus’ death and resurrection could easily be included in such a list. Predictably, the movement of such relics from Jerusalem—be they bought, stolen, or forged—reached its peak at the time of the Crusades.

The beginning of the 9th century was a time of intense traffic in relics. One legend, built up around none other than Charlemagne himself, held that he had made a journey to Jerusalem and obtained a shroud of Jesus. According to this legend, the cloth was then taken to the imperial city of Aachen (in modern Germany), and then, perhaps, to Compiègne, France. There are accounts of a shroud in both cities, and Aachen still hosts this relic today.

The coexistence of these relics in two important religious centers has not prevented other cities from claiming to possess the very same objects. Arles and Cadouin (France), as well as Rome (Italy), all boast a shroud, although in 1933 the one in Cadouin was revealed to be a medieval Islamic cloth. There is an 11th-century holy shroud in the cathedral of Cahors (France) as well as in Mainz (Germany) and Istanbul (Turkey), and dozens of other cities claimed to possess fragments of such a relic.13 An 8th-century sudarium is still venerated in Oviedo, Spain, as if it were authentic.14

The Shroud of Turin

With this background it might not surprise readers to learn that the Shroud of Turin is, in fact, not one of the oldest but rather one of the most recent such relics. It is a large cloth, over four meters long, that resembles a long tablecloth; its unique feature is a double monochromatic image that shows the front and back of a man. The man bears marks from flagellation and crucifixion, with various red spots corresponding to where blows were received. The Turin Shroud first appeared in the historical record in France (a place that already hosted many competing shrouds) around 1355 CE. It is different from all the previous shrouds in that the others did not display the image of the dead Christ, and until then no source had ever mentioned a shroud bearing such an image (although Rome hosted the well-known Veil of Veronica, a piece of cloth said to feature an image of the Holy Face of Jesus). The explanation behind its creation can be found in the contemporary development of a cult of devotion centered on representations of the physical suffering of Christ and His wounded body.

Pilgrimage badge of Lirey (Aube), dated between 1355 and 1410, depicts the first appearance of the Shroud. (Photo © Jean-Gilles Berizzi / RMN-Grand Palais, Musée de Cluny, Musée National du Moyen Âge).

The Shroud of Turin made its first appearance in a small country church built in Lirey, France, by an aristocratic soldier, Geoffroy de Charny. As soon as this relic was put on public display, it immediately became the subject of debate. Two local bishops declared the relic to be fake. In 1389, the bishop of Troyes, France, wrote a letter to the Pope denouncing the falsity of the relic and accusing the canons of the Church of Lirey of deliberate fraud. According to the bishop, the canons had commissioned a skilled artist to create the image, acting out of greed and taking advantage of people’s gullibility. The Pope responded by allowing the canons to continue exhibiting the cloth, but simultaneously obliging them to publicly declare that it was being displayed as a “figure or representation” of the true Shroud of Christ, not the original.

Various erasures and acts of subterfuge were required to cover up these historical events and transform an artistic representation into an authentic shroud of Jesus. The process began after 1453, when the relic was purchased by a noble family, the House of Savoy (which later reigned as Kings of Italy from 1861 to 1946).

Interpretations of this first part of the history of the Shroud diverge significantly between those who accept the validity of the historical documents and those who reject it. However, the following developments are almost universally agreed upon. Deposited in the city of Chambéry, capital of the Duchy of Savoy, the Shroud became a dynastic relic, that is, an instrument of political-religious legitimization and referenced by the same symbolic language used by other noble European dynasties. After surviving a fire in 1532, the Shroud remained in Chambéry until 1578. It was then transferred to Turin, the duchy’s new capital, where a richly appointed chapel connected to the city’s cathedral was specially built to house it in the 17th century.

Historians loyal to the court constructed a false history of the relic’s origins, deliberately disregarding all the medieval events that cast doubt on its authenticity and attested to the intense reluctance of contemporary ecclesiastical authorities to accept it. In the meantime, the papacy and clergy abandoned their former prudence and began to encourage veneration of the Shroud, established a liturgical celebration, and initiated theological and exegetical debate about it. The court of the Duchy of Savoy, for its part, showed great devotion to its relic and at the same time used it as an instrument of political legitimization,15, 16 seeking to export the Shroud’s fame outside the duchy by gifting painted copies that were in turn treated as relics-by-contact (there are at least 50 such copies known to still exist throughout the world).

Having survived changes of fortune and emerging unscathed from both the rational criticism of the Enlightenment and the turmoil of the Napoleonic period, the Shroud seemed destined to suffer the fate of other similar relics, namely a slow decline. Following a solemn exhibition in 1898, however, the Shroud returned to the spotlight and its reputation began to grow outside Italy as well. Two very important events in the history of the relic took place that year: it was photographed for the first time, and the first historiographical studies of it were published.

Shroud Science

Photography made available to everyone what previously had been viewable by only a few: an image of the shape of Christ’s body and face, scarcely discernible on the cloth but perfectly visible on the photographic plate. It was especially visible in the negative image, which by inverting the tonal values, reducing them to white and black, and accentuating the contrast, revealed the character of the imprint.

“Santo Volto del Divin Redentore” (Holy Face of the Divine Redeemer), a detail of the Shroud of Turin. Photo by Giuseppe Enrie, taken during the 1931 public exhibition of the Shroud of Turin. It is a negative photographic image, meaning that the lighter areas represent the darker areas of the Shroud.

Photographs of the Shroud, accompanied by imprecise technical assessments claiming that the photograph proved that the image could not possibly have been generated artificially, were circulated widely. This prompted scholars to seek through chemistry, physics, and, above all, forensic medicine an explanation for the origins of the image impressed on the cloth. More recently, these disciplines have been joined by palynology, computer science, biology, and mathematics, all aimed at demonstrating the authenticity of the relic experimentally, or at least removing doubts that it might have been a fake. At the beginning of the 20th century, there were many scientific articles published on the Shroud and discussions held in distinguished forums, including the Academy of Sciences in Paris.

The scientist associated with the birth of scientific sindonology is the zoologist Paul Vignon, while Ulysse Chevalier was the first to conduct serious historical investigations of the Shroud. Both were Catholics (the latter indeed being a priest), but they held completely contrasting positions: the former defended the Shroud’s authenticity while the latter denied it. Chevalier was responsible for publishing the most significant medieval documents on the early history of the Shroud, showing how it had been condemned and declarations of its falseness covered up, and wrote the first essays on the history of the Shroud to employ a historical-critical method (Chevalier was an illustrious medievalist at the time). The debate became very heated in the historical and theological fields, and almost all the leading history and theology journals of the time published articles on the Shroud.

After the early 20th century, almost no one applied themselves to thoroughly examining the entirety of the historical records regarding the Shroud (much less comparing it against all the other shrouds). After a period of relative lack of interest, new technologies brought the Shroud back into the limelight. In 1978, a group of American scholars, mostly military employees or researchers associated with the Catholic Holy Shroud Guild, formed STURP (the Shroud of Turin Research Project) and were allowed to conduct a series of direct scientific studies on the relic. They did not find a universally accepted explanation for the origin of the image. Some members of the group used the mass media to disseminate the idea that the image was actually the result of a supernatural event: in this explanation, the image was not the result of a body coming into contact with the cloth, perhaps involving blood, sweat, and burial oils (as believed in previous centuries), but rather was caused by irradiation. At this time, the two most popular theories as to the historical origin of the Shroud—despite their implausibility—were:

  1. The Shroud and the Mandylion of Edessa are one and the same object. (The Mandylion is another miraculous relic known to have existed since at least the 6th century CE in the form of a towel that, according to the faithful, Jesus used to wipe His face, miraculously leaving the mark of His features on it).
  2. The Knights Templar transported the Shroud from the East to the West. (This theory was based on statements extracted under torture from the Templars during their infamous trial of 1307–1312.)

The clash between sindonology and science reached its peak in 1988; without involving STURP but with permission from the Archbishop of Turin, the Holy See, and the Pontifical Academy of Sciences, a radiocarbon examination was carried out that involved 12 measurements conducted in three different laboratories. As expected, the test provided a date that corresponds perfectly with the date indicated by the historical documents, namely the 13th–14th century. As often happens when a scientific finding contradicts a religious belief, however, from that moment on attempts to invalidate the carbon dating proliferated. These included claims of conspiracy, contamination of the samples, unreliability of the examination, and enrichment of the radiocarbon percentage due to the secondary effects of the resurrection, among others.

Dating the Shroud

In 1945, chemist Willard Libby devised the Carbon-14 (C14) radiocarbon dating method. Despite rumors that Libby was against applying the C14 method to the Shroud, I found proof that at least twice he stated precisely the opposite, declaring his own interest in performing the study himself.17 In the early 1970s, the test was repeatedly postponed, first because it was not yet considered sufficiently reliable, and later because of the amount of cloth that would have to be sacrificed, as the procedure is destructive. By the mid-1980s, however, C14 was accepted universally as a reliable system of dating, and was regularly used to date archeological artifacts as well as antiques. Several C14 laboratories offered to perform the testing for free, likely under the assumption that, whatever the result, it would bring them publicity.

The cloth of the Shroud can be assigned with a confidence of 95 percent to a date between 1260 and 1390 CE.

Once Cardinal Ballestrero, who was not the relic’s “owner” but only charged with the Shroud’s protection, had made the decision to proceed, he asked for the support and approval of the Holy See. The Pontifical Academy of Sciences was invested with the responsibility to oversee all operations. For the first time in its history, the papal academy was presided over by a scientist who was not a priest, biophysicist Carlos Chagas Filho. The scientists’ desire was to date the Shroud and nothing more, and they did not want the sindonologists to take part in the procedure. The Vatican’s Secretary of State and the representatives of Turin agreed to supply no more than three samples. Seven laboratories were proposed, from which three were selected: those at the University of Arizona, Tucson; the University of Oxford; and Zurich Polytechnic, because they had the most experience in dating small archaeological fragments.

The day chosen for the extraction was April 21, 1988. The textile experts examined the fabric and discussed the best place to extract samples; they decided to take a strip from one of the corners, in the same place in which a sample had already been taken for examination in 1973. The strip was divided into smaller pieces and each of the three laboratories received a sample. The procedure was filmed while being performed under the scrutiny of over 30 people.

The results were published in the world’s leading multidisciplinary scientific journal, Nature. Conclusion: the cloth of the Shroud can be assigned with a confidence of 95 percent to a date between 1260 and 1390 CE. In response, the Cardinal of Turin issued this statement:

I think that it is not the case that the Church should call these results into question…. I do not believe that we, the Church, should trouble ourselves to quibble with highly respected scientists who until this moment have merited only respect, and that it would not be responsible to subject them to censure solely because their results perhaps do not align with the arguments of the heart that one can carry within himself.18

Prof. Edward Hall (Oxford), Dr. Michael Tite (British Museum), and Dr. Robert Hedges (Oxford), announcing on October 13, 1988, at the British Museum, London, that the Shroud of Turin had been radiocarbon dated to 1260–1390.

Predictably, Shroud believers rejected the findings and started to criticize the Turin officials who had cut the material. Others preferred to deny the validity of the radiocarbon dating.

Sindonologists tried to discredit the result of the C14 testing by claiming the samples were contaminated. This hypothesis asserts that through the centuries the Shroud picked up deposits of more recent elements that would contain a greater quantity of carbon; the radiocarbon dating, having been performed on a linen so contaminated, would thus have produced an erroneous result. Candidates for the role of pollutants are many: the smoke of the candles, the sweat of the hands that touched and held the fabric, the water used to extinguish the fire of 1532, the smoggy Turin skies, pollens, oil, and many more.

On the surface, these objections may seem convincing, especially to those who do not know how C14 dating works; in reality, however, they are untenable. If a bit of smoke and sweat were enough to produce a false result, the Carbon-14 method would be almost completely useless, and it certainly would not still be used to date thousands of objects every year. The truth is rather that the method is not significantly sensitive to such pollutants.

Suppose, then, that the fabric of the Shroud dates back to the 30s of the 1st century CE and that it suffered exposure to strong pollution (for example, around 1532, the year of the Chambéry fire). To distort the C14 dating by some 1,300 years, for every 100 carbon atoms originally present in the cloth another 500 dating to 1532 would have to have been added by contamination. In practice, the amount of pollutant in the Shroud would have to be several times greater than the amount of the original linen, which is simply nonsensical.
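The order of magnitude of that claim is easy to check with a simple isotope-mixing calculation. The following is an illustrative sketch only, using the conventional 5,730-year C14 half-life, a 1988 measurement date, and round target dates of 30 CE (genuine cloth), 1325 CE (the measured result), and 1532 CE (the contaminant); it is not the laboratories' actual calibration procedure.

```python
# Back-of-the-envelope check of the contamination argument (illustrative).
HALF_LIFE = 5730.0  # conventional C14 half-life in years

def c14_fraction(age_years):
    """Fraction of a sample's original C14 still present after age_years."""
    return 0.5 ** (age_years / HALF_LIFE)

f_cloth = c14_fraction(1988 - 30)      # genuine 1st-century linen
f_contam = c14_fraction(1988 - 1532)   # carbon added around the Chambery fire
f_measured = c14_fraction(1988 - 1325) # what the 1988 result implies

# A mixture of x parts original carbon and y parts contaminant carbon measures
# as (x * f_cloth + y * f_contam) / (x + y) = f_measured.
# Solving for y/x gives the contaminant-to-original carbon ratio required:
ratio = (f_measured - f_cloth) / (f_contam - f_measured)
print(f"Contaminant carbon needed per part of original: {ratio:.1f}")
```

The ratio comes out to roughly 5 to 6 parts of contaminant carbon per part of original, matching the article's "another 500 for every 100 atoms" figure.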

If we assume that pollution did not happen all at the same time, but gradually over the centuries, there is still no mathematical possibility that pollution that occurred before the 14th century—even if tens of times higher than the quantity of the original material—could give a result of dating to the 14th century. It should be added, moreover, that all samples, before being radiocarbon dated, are subjected to cleaning treatments able to remove the upper patina that has been in contact with outside contaminants and this procedure was also used for the Shroud.

Those who allege that the Shroud was an object that could not be dated because it was subjected to numerous vicissitudes over the intervening centuries ignore the fact that often C14 dating laboratories work on materials in much worse condition, whether coming from archaeological excavations or from places where they have been in contact with various contaminants. For radiocarbon dating purposes, the Shroud is a very clean object.

A more curious variant of the pollution theory suggests that the radiocarbon dating was performed on a sample that was repaired with more recent threads. This would mean that the two (widely recognized) textile experts who were present on the day of the sampling were unable to notice that they had cut a piece so repaired, despite the fact that they had examined the fabric carefully for hours. To distort the result by 13 centuries, the threads employed in the mending would have had to have been more numerous than the threads of the part to be mended. To eliminate any doubt, in 2010 the University of Arizona reexamined a trace of fabric left over from the radiocarbon dating in 1988, and concluded:

We find no evidence for any coatings or dyeing of the linen…. Our sample was taken from the main part of the shroud. There is no evidence to the contrary. We find no evidence to support the contention that the 14C samples actually used for measurements are dyed, treated, or otherwise manipulated. Hence, we find no reason to dispute the original 14C measurements.19

Another possibility raised against C14 dating falls within the sphere of the supernatural. German chemist Eberhard Lindner explained to the 1990 sindonology convention that the resurrection of Christ caused an emission of neutrons that enriched the Shroud with the radioactive isotope C14.20 Miraculous explanations can be cloaked in scientific jargon, but they simply cannot be tested scientifically, given that there are no available bodies that have risen from the dead emitting protons and neutrons. They are, however, extremely convenient because they are able to solve any problem without having to submit the explanation to the laws of nature.

With all of the available evidence, it is rational to conclude—as some astute historians had already established more than a century ago—that the Shroud of Turin is a 14th century artifact and not the burial cloth of a man who was crucified in the first third of the 1st century CE.

Categories: Critical Thinking, Skeptic

Searching for Help: My Son’s Autism Diagnosis in the World of Alternative Medicine & Anti-Vaxxers

Skeptic.com feed - Sun, 02/23/2025 - 12:25pm

A pediatric neurologist at Boston Children’s Hospital diagnosed my son, Misha, with autism spectrum disorder at age three. At Massachusetts General Hospital, another pediatric neurologist answered my call for a second opinion only to rebuff my hope for a different one. “I did not find him to be very receptive to testing,” the expert sighed. Both neurologists observed that Misha didn’t respond to their request to identify colors, body parts, or animals, that he averted his eyes from theirs, that he pawed their examination table when he didn’t flap his arms. Autism, the doctors said, constituted a lifelong condition. Medical science didn’t understand its causes or cures, and scarcely comprehended the limits of its woes.

How could the neurologists deduce such a bleak judgment from 90 minutes in the bell jar of their examination rooms? If they knew so little about autism, then how could they gavel down a life sentence? I remembered reading somewhere that a properly trained neurologist ought to be able to argue both for and against any single diagnosis in a stepwise process of elimination. I opened the Diagnostic and Statistical Manual of Mental Disorders (DSM), leafed to the entry under autism, and plucked out of its basket several inculpating symptoms. Aggrieved, I sought out the Handbook of Differential Diagnosis, a companion volume, and underlined an admonitory passage: “Clinicians typically decide on the diagnosis within the first five minutes of meeting the patient and then spend the rest of the time during their evaluation interpreting (and often misinterpreting) elicited information through this diagnostic bias.” Now what?

As an educated citizen of progressive Cambridge, Massachusetts, I consumed large volumes of such second-hand, semi-digested information. I felt that I should, and believed that I could, develop my own, independent judgment about Misha’s condition. I would do my own research, and I would draw my own conclusions based on what I learned.

I felt that I should, and believed that I could, develop my own, independent judgment. I would do my own research, and I would draw my own conclusions based on what I learned.

These virtues turned out to be constituent features of my error. My skepticism and sense of responsibility blended with my stubbornness as I struggled to evaluate a welter of “holistic” attitudes about medicine and health. Several fixed ideas confronted me. Autism, I read, is neither the psychopathology listed in the DSM nor the organic twist of disease supposed by neurologists. Autism, these alternative sources explained, is one among an epidemic of preventable chronic illnesses that American children contract from toxins in the environment. Holistic therapy, according to another, contains the requisite resources. Vitamin therapy, homeopathy, and antifungal treatment could heal children like Misha of their injuries.

The claim that autism is a treatable, toxin-induced chronic illness is a half-century old. Its history forms a pattern of culture and credulity imprinted on our own time. Today, indeed, as one in every 36 children receives the diagnosis, and as controversies swirl around COVID-19, more people than ever turn to holistic remedies to treat illnesses real and imagined. Homeopathic remedies fly off the shelves at pharmacies, alongside an array of alleged immunity-boosting, anti-inflammatory vitamins and herbal supplements.

Critics view the vogue for holism as the product of an irrational transaction between charlatans and suckers. As I reflect on my experience with Misha in the grassroots of autism agonistes, however, I find the issues don’t divide so tidily. The question isn’t whom to trust or what to believe, but how to make an existential choice between incommensurable propositions.

A family friend introduced me to Mary Coyle, a homeopath at the Real Child Center in New York. Coyle said Misha had likely contracted autism from contaminants in the environment. Was I aware of the epidemic of chronic illnesses afflicting children like him? Some of them, Coyle explained, received diagnoses of asthma, chronic fatigue, or dermatitis. Others were diagnosed with fibromyalgia, Lyme disease, or PANDAS (Pediatric Autoimmune Neuropsychiatric Disorder Associated with Streptococcal Infections). Pathogens lying at the nexus between the body and the environment fooled medical specialists at places like Boston Children’s Hospital and Massachusetts General Hospital. Coyle urged me to abandon their dead-end query, “Is your child on the autism spectrum?” To help Misha, I needed to switch the predicate and envisage a different question: “How toxic is your child?”

“Is your child on the autism spectrum?” To help Misha, I needed to switch the predicate and envisage a different question: “How toxic is your child?”

Why not find out? Although I had never heard of homeopathy or Coyle’s sub-specialty of homotoxicology, I believed that with some study I could probably draw the necessary distinction between evidence and interpretation in the test results. Coyle herself had been trained by conventional physicians before seeking out propaedeutic instruction in holistic medicine. Holism sounded nice.

We started out with an “Energetic Assessment.” Measuring Misha’s rates of “galvanic skin response,” Coyle said, would weigh the balance of electrical vibrations conducted through his pores. Toward this end, she deployed an electrodermal screening device that deciphered imbalances in his “meridians,” or “pathways.” Toxic metals, alas, appeared from the results to be obstructing his “flow” of energy.

With Coyle’s theory confirmed, she referred me to Lawrence Caprio to canvass for food and environmental allergens. Caprio, like Coyle, had defected from conventional to alternative medicine. I learned that while attending medical school at the University of Rome he had befriended a homeopath in the Italian countryside and lived “a very natural lifestyle”; the experience led him to pursue naturopathy.

Misha—Caprio now reported—turned out to be “intolerant” of bread, butter, eggplant, oatmeal, peanuts, potatoes, and tomatoes. Misha also displayed a “sensitivity” to bananas, car exhaust, cheese, chlorine, chocolate, cow milk, dust mites, garlic, onions, oranges, soy beans, and strawberries. Caprio flagged “phenolics” such as malvin (in corn sweeteners) and piperin (in nightshade vegetables and animal proteins).

Next, I mailed urine and stool samples to the Great Plains Laboratory in Kansas. The director there, William Shaw, had worked as a researcher in biochemistry, endocrinology, and immunology at the Centers for Disease Control before he quit and set up his own laboratory. Shaw suspected lithium in “the bottled water craze” and fluoridation in the public water supply as just two of the causes of autism. He came to believe that government scientists woefully misunderstood such sources. He compared their dereliction to the Red Cross’s failure to intervene in the Holocaust. Shaw also found toxic levels of yeast flooding Misha’s intestines.

Homeopathy, naturopathy, and renegade biochemistry cast me outside the institutions of science where Misha’s neurologists practiced. But to grasp how these new realms might be objective correlates of Misha’s condition—and how toxins, foods, and yeast might be culprits—I had only to remind myself of the progressive demonology that made the diagnosis seem plausible.

Industrial corporations have been chewing up the land, choking the air, and despoiling the water, I read, turning the whole country into a hazardous materials zone. I’d read Silent Spring, in which ecologist Rachel Carson claimed that our bodies weren’t shields, but permeable organisms that absorbed particulates. I’d heard Ralph Nader liken air and water pollution to “domestic chemical and biological warfare.” I’d finished Bill McKibben’s The End of Nature with the requisite dread. Listening to progressive news media about “forever chemicals” evoked moods that swung between indignation and paranoia. I paid for eco-friendly cribs, de-leaded the windows in our apartment, and tried to shop organic.

As Coyle, Caprio, and Shaw whispered in my ear, though, my imagination boggled with an even greater catalogue of possible pathogens. Our food contained more pesticides, hormones, and insecticides than I had suspected. Our air was filled with methanol and carbon monoxide. Chlorine, herbicides, and parasites degraded our tap water. Mold festered in our walls, floors, and ceilings. Formaldehyde lurked in our furniture. Heavy metals hid in our lotions, shampoos, and antiperspirants. Synthetic chemical compounds—polychlorinated biphenyls, phthalates, bisphenol A, polybrominated diphenyl ethers—seeped into our toys, diapers, bottles, soaps, and appliances. Even our Wi-Fi, cell phones, refrigerator, light bulbs, and microwave oven emitted radiation through electromagnetic fields.

Had the dystopia of the contemporary world poisoned my son? Coyle, Caprio, and Shaw not only defined autism as a preventable, “biomedical” illness, they traced the mechanism of harm to his pediatrician’s office.

Misha had received three-in-one vaccines: DTP and MMR. The holistic experts now told me that these vaccines contain dangerous metals, including mercury and aluminum.

Misha had received three-in-one vaccines against diphtheria, tetanus, and pertussis (DTP) and measles, mumps, and rubella (MMR) according to the recommended schedule. The holistic experts now told me that these vaccines contain dangerous metals, including mercury and aluminum. The vaccines, I read, could have spread from Misha’s arm to his gut and persisted long enough to perforate an intestinal wall. Mercury, a neurotoxin, could have leaked into his bloodstream and surreptitiously addled his brain. Or his pediatrician could have set off a chain reaction that had the same effect. The antibiotics she gave him for petty infections could have reduced the diversity of natural flora that controlled yeast in his gastrointestinal tract. An overabundance of yeast could have generated enzymes that perforated his intestines even if live-virus vaccines had not done so directly.

Either way, undigested food molecules such as gluten (in wheat) and casein (in dairy) could have joined forces with environmental toxins and heavy metals and attached to Misha’s opiate receptors, disrupting his neurotransmitters and triggering allergic reactions. The ballooning inflammation would have thwarted his immune responses. If so, then his “toxic load” could be starving his cells of nutrients. Escalating levels of “oxidative stress” could be congesting his metabolism. No wonder he lacked muscle tone, coordination, and balance!

How could I dismiss their diagnosis of “autism enterocolitis,” AKA “leaky gut?” My liberal education prided itself on open-mindedness, after all. In 1998, a midlevel British lab researcher named Andrew Wakefield published a study warranting the diagnosis in The Lancet, one of the world’s most prestigious medical journals. Wakefield’s paper, it turned out, “entered his profession’s annals of shame as among the most unethical, dishonest, and damaging medical research to be unmasked in living memory,” according to Brian Deer’s The Doctor Who Fooled the World.

“The science right now is inconclusive,” Barack Obama said in 2008. Thousands of media outlets around the world reported a controversy between two legitimate sides.

In the meantime, both liberal and conservative politicians echoed the implications of Wakefield’s hoax. “The science right now is inconclusive,” Barack Obama said in 2008. Thousands of media outlets around the world reported a controversy between two legitimate sides. “Fears raised over preservatives in vaccines,” a front-page headline in the Boston Globe announced. Wakefield appeared on television with articulate parents by his side. “You have to listen to the story the parents tell,” he said on CBS’s 60 Minutes. Reputable television programs did just that. ABC’s Nightline, Good Morning America, and 20/20, NBC’s Dateline, and The Oprah Winfrey Show broadcast the gravamen of the indictment out of the mouths of well-educated parents.

The accusation against antibiotics resonated with definite misgivings that I held over the dispensations of American medicine. Doctors in the United States order more excessive diagnostic tests, perform more needless caesarean sections, and prescribe more superfluous antibiotics than their counterparts around the world. A prepossessing dependence on technology encourages American medicine to treat symptoms rather than people. From this indubitable truth, Coyle, Caprio, and Shaw drew an uncommon inference that aggressive medical care had sabotaged Misha’s birthright immunity.

Misha, so endowed, could have repaired the damage done, no matter whether vaccines or antibiotics had upset his “primary pathways.” His body would have availed “secondary pathways” such as his skin and mucous membrane. Coyle said his innate capacity for adaptation had been telegraphing itself in his fevers, his eczema, his ear infections, even his runny noses. Yet his pediatrician had stood blind before the hidden meaning of these irruptions. Reaching into her chamber of magic bullets, she prescribed steroid creams for his eczema, acetaminophen for his headaches, amoxicillin for his ear and sinus infections, antihistamines for his coughs and runny noses, and ibuprofen for his fevers. This “Whac-a-Mole mentality,” Coyle despaired, had plugged his “secondary pathways” as well.

The trio of virtuoso healers would help me sidestep the adulterated dialectic of science and charm Misha’s autism out of its chronic condition.

A vicious cycle set in. Vaccines and/or antibiotics had predisposed Misha’s microbiome to harbor viruses, bacteria, and fungi. Turning toxic, they invaded his cells, tissues, and fluids. The foreign occupation precipitated allergies. The allergies provoked inflammation, which arrested metabolic energy, which led to anemia, which invited recurring infections. His pediatrician perpetuated those with cascading doses of foreign chemicals. “Rather than freak out and take medication and look to suppress,” Coyle counseled, “we should celebrate that the body is working and go and look at the primary pathways and clear out the blockages.” Up to 103 degrees Fahrenheit, “the fever might be a good thing.”

If I could accept that “allopathic” medicine did not stand apart and speak objectively, but instead reflected the sickness of American society, then the trio of virtuoso healers would help me sidestep the adulterated dialectic of science and health. A holistic treatment protocol would charm Misha’s autism out of its chronic condition and turn it into a treatable medical illness. “The body’s infinite wisdom,” Coyle said, “would take care of the rest.” As the protocol purged and flushed his toxins, the fawn of nature would close the holes in his intestines. His allergies would ebb, reducing inflammation, reviving cellular respiration, and reconnecting his neurotransmitters. The realignment of his meridians would reflow his energy. “Once you clear,” Caprio said, “the whole thing just changes dramatically.”

♦ ♦ ♦

Autism parents first embraced holistic treatments in the 1960s and 1970s, when emphatic personal testimonials, printed and distributed in underground newsletters, led to the formation of grassroots groups such as Defeat Autism Now! (DAN!) and ushered in the “leaky gut” theory. DAN! grew out of the psychologist Bernard Rimland’s Autism Research Institute. Rimland’s 1964 book Infantile Autism blew up the prevailing, psychogenetic thesis of autism’s origins, which blamed mothers for failing to love their children enough.

The Today Show and The Dick Cavett Show had given psychologist Bruno Bettelheim, the chief exponent of the “refrigerator mothers” thesis, free rein to liken them to concentration camp guards. Rimland’s Infantile Autism refuted that thesis. Letters poured into his Autism Research Institute from grateful parents attesting to the efficacy of the holistic approach: vitamin therapy, detoxification, and elimination dieting. Pharmaceutical companies rolled out new childhood vaccines for measles (1963), mumps (1967), and rubella (1969) and combined the immunizations against pertussis, diphtheria, and tetanus into one injection. Rimland began distributing an annual survey that queried parents about the effects.

Belief in an etiology variously called “leaky gut,” “autism enterocolitis,” or “toxic psychosis” awkwardly amalgamated elements from both ancient and modern medical philosophy. The old idea of disease as a sign of disharmony with nature queued behind the modern concept of infection through the invasion of microorganisms. But no theory of etiology needs to be complete for a treatment to work. “Help the child first,” Rimland urged, “worry later about exactly what it is that’s helping the child.”

Like anti-psychiatry activists, breast cancer patients, and AIDS activists, autism parents confronted physicians with the backlash doctrine of “consumer choice” in specialist medical care. “The parent who reads this book should assume that their family doctor, or even their neurologist or other specialist, may not know nearly as much as they do about autism,” William Shaw wrote in Biological Treatments for Autism.

The first television program to elevate parental intuitions, Vaccine Roulette, aired in 1982 on an NBC affiliate in Washington, DC. The show promoted the vaccine injury theory—and won an Emmy Award. Accelerating rates of the diagnosis over the next decades brought the injury theory from a simmer to a boil. In the 1960s, an average of one out of every 2,500 children received the diagnosis. By the first decade of the 21st century, the prevalence rose to one out of every 88, an increase of over 2,500 percent. Up to three-quarters of autism parents used some form of holistic treatment on their children.

A Congressional hearing in 2012 featured their cause, heaping suspicion on vaccines, speculating on gut flora, and praising the efficacy of vitamins, homeopathy, and elimination dieting. Dennis Kucinich, a Democrat from Ohio and one-time Presidential candidate, expressed outrage over the spectacle of “children all over the country turning up with autism.” Kucinich blamed “neurotoxic chemicals in the environment,” particularly emissions from coal-burning power plants. Like the autism parents in attendance at the hearing, Kucinich did his own research and drew his own conclusions.

“There’s only medicine that works and medicine that doesn’t.” Clever and concise, Offit’s polemic nonetheless begged the relevant questions. Who decides what works? Fundamental science is one thing; therapeutic interventions are quite another.

“There’s no such thing as ‘conventional’ or ‘alternative’ or ‘complementary’ or ‘integrative’ or ‘holistic’ medicine,” alternative medicine skeptic Paul Offit complained the next year. “There’s only medicine that works and medicine that doesn’t.” Clever and concise, Offit’s polemic nonetheless begged the relevant questions. Who decides what works? Fundamental science is one thing; therapeutic interventions are quite another. “Evidence-based medicine,” introduced in 1991, supplies a template of criteria to translate medical science into clinical medicine. Atop its hierarchy sits the “randomized control trial,” a methodology loaded with social and financial biases. Even when a therapy works incontrovertibly, that fact doesn’t free its applications of ambiguity. Antibiotics work. We’ve known that since the 1930s. But which of their benefits are worth which of their costs?

When does an accumulation of confirmed research equal a consensus of reasonable certainty? In 1992, ABC’s 20/20 exposed a cluster of autism cases in Leominster, Massachusetts. A sunglasses’ manufacturer had long treated the city as a dumping ground for its chemical waste. After the company shuttered, a group of mothers counted 43 autistic children born to parents who had worked at the plant or resided near it. Commenting on the Leominster case, the eminently sane neurologist Oliver Sacks voiced a curious sentiment. “The question of whether autism can be caused by exposure to toxic agents has yet to be fully studied,” Sacks wrote, three years after epidemiologists from the Massachusetts Department of Public Health determined that no unusual cluster of cases had existed in that city in the first place. Who gets to decide the meaning of “fully studied”?

Bernard Rimland and the autism parents in his movement answered the question for themselves. “There are thousands of children who have recovered from autism as a result of the biomedical interventions pioneered by the innovative scientists and physicians in the DAN! movement,” Rimland insisted in the group’s 2005 treatment manual, Autism: Effective Biomedical Treatments.

William Shaw and Mary Coyle, both DAN! clinicians, adapted Rimland’s manual for Misha. Coyle vouched personally for the safety and efficacy of the holistic treatment therein. She swore she used it to “recover” her own son.

Interdicting toxins marked the first step on the “healing journey.” Taking it obliged me to decline Misha’s pneumococcal conjugate vaccine (for pneumonia) and his varicella vaccine (for chickenpox). Meanwhile, I eliminated from our cupboard and refrigerator the foods for which Caprio had proved Misha sensitive and intolerant, and I prepared a course of “optimal dose sub-lingual immunotherapy” to “de-sensitize” him. Coyle drew up a monthly schedule to detoxify him with homeopathic remedies from a manufacturer in Belgium. Shaw itemized vitamins and minerals to supplement Misha’s intake of nutrients, plus probiotics and antifungals to control his yeast and rehabilitate his intestinal tract. My kitchen turned into an ersatz pharmacy of unguents, powders, drops, and tablets.

Every morning, I inserted two tablets of a Chinese herbal supplement, Huang Lian Su, into an apple. This would crank-start his digestion. I added half a capsule of methylfolate into his breakfast. This would juice his metabolism. Ten minutes after he finished breakfast, I stirred Nystatin powder into warm coconut water, drew two ounces into a dropper, irrigated his mouth, and ensured that he abstained from eating or drinking for ten more minutes. Fifteen minutes before his midday snack, I squeezed six drops of a B12 vitamin under his tongue. Every evening, I slipped him two more Huang Lian Su tablets.

An exception in federal law places vitamins, supplements, and homeopathic remedies outside the FDA’s approval process. Only their manufacturers know what these dummy drugs contain.

To fortify his glucose levels, I could elect to give him two vials of raisin water every other hour. To normalize his alkaline levels, I added a quarter-cup of baking soda to his baths. The “de-sensitizing drops,” however, had to be dribbled onto his wrists twice every day. Misha also needed regular, carefully calibrated doses of boron, chromium, folic acid, glutathione, iodine, magnesium, manganese, milk thistle, selenium, vitamins A, C, D, E, and zinc.

Homotoxicology, the core modality, entailed his daily ingestion of homeopathic “drainage remedies” to purge toxins and open pathways. The bottles arrived in the mail. Coyle provided a table of equivalencies, linking particular remedies to organs: this compound for his small intestine, that one for his large intestine, this one for his kidneys, and that one for his mucous membranes.

At the same time, homeopathy’s whole-body scope of intervention claimed to relieve a wide range of illnesses. Shaw and his colleagues said the modality could treat autism, plus sensory integration disorder, central auditory processing disorder, speech and language problems, fine motor and gross motor problems, oppositional defiance disorder, obsessive compulsive disorder, eating disorders, headaches, eczema, and irritable bowel syndrome. The marketing materials that accompanied Misha’s compounds claimed that they could treat bloating, constipation, cramps, flatulence, nausea, night sweats, and sneezing.

I learned the shorthand rationale as part of my self-education. Homeopaths stake their claim on a manufacturing process that distinguishes their remedies from pharmaceutical medicaments. It’s called “succussion.” A label that reads “4X,” for example, indicates that the original ingredient has been serially diluted four times by a factor of 10, with the mixture succussed (vigorously shaken) at each step, for a total dilution of one part in 10,000. “12X” indicates a total dilution of one part in a trillion.
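The label arithmetic is simple to state. As a minimal sketch of the standard homeopathic "X" notation described above (the function name is mine, for illustration):

```python
# On the "X" potency scale, each X step is a 1:10 dilution (accompanied by
# succussion), so an nX remedy contains one part original ingredient per
# 10**n parts of diluent.
def x_dilution_factor(n):
    return 10 ** n

print(x_dilution_factor(4))   # 4X: one part in 10,000
print(x_dilution_factor(12))  # 12X: one part in 1,000,000,000,000 (a trillion)
```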

The compounds prescribed for Misha said they contained asparagus, bark, boldo leaf, goldenrod, goldenseal, horsetail, juniper, marigold, milk thistle, parsley, passionflower, Scottish pine root, and other herbs and plants of which I’d never heard. Having been succussed, though, the remedies actually contained no active ingredients. In the bottles remained “the mother tincture,” a special kind of water said to “remember” the original ingredient. The only other ingredient listed on the label was an organic compound that served as a solvent and preservative. Some of Misha’s remedies were 31 percent ethanol, a proof as strong as vodka or gin. Coyle instructed me to “gas off the alcohol” on the stove before serving him.

Succussion confused me. Misha’s reaction worried me. He looked a fright. Black circles ringed his eyelids. Yeast blanketed his nostrils and lips. Rashes and red spots appeared all over his body. Pale and lethargic, he oscillated between diarrhea and constipation. He broke out with recurring fevers. He stopped gaining weight. Because he didn’t speak, or reliably communicate in any other manner, I couldn’t understand why his emotions seemed to be running at an unusually high pitch.

Coyle explained that different glands and organs in the body stored specific feelings. The kidneys stored fear. The pancreas stored frustration. The thyroid stored misunderstanding, the liver anger, the lungs grief, the bladder a sense of loss, and so forth. Those emotions poured out as his body excreted toxins. I shouldn’t regard the worsening of his symptoms as a side effect, but rather as a necessary condition of his recovery—“aggravations,” in homeopathy’s parlance. A Table of Homotoxicosis charted the correspondences with the precision and predictability of biochemistry. Nor should I abandon the treatment. To do so would be to “re-toxify” him. I must allow the treatment to fully fledge. I must keep my nerve.

♦ ♦ ♦

I lost my nerve. It took 18 months of gnawing doubt and thousands of dollars out the door. Then one day I swept all the vitamins, antigens, probiotics, antifungals, and homeopathic remedies into the trash bin. I restored Misha to a regular diet, caught him up on his vaccines, and demanded (and received) a full refund from Coyle.

I had blundered into a non sequitur. The environment is toxic. Conventional medicine does reflect the sickness of our culture. Yet that doesn’t render holism any better. The supplement industry, I came to understand, has pumped hundreds of millions of dollars into thousands of clinical studies without demonstrating that vitamins, herbal products, or mineral compounds are either safe or effective, much less necessary. The Food & Drug Administration (FDA) neither tests the industry’s marketing claims nor regulates its product standards.

Caprio and Coyle regard Traditional Chinese Medicine (TCM) as a reproach to modern, Western medicine. TCM, they pointed out, is 5,000 years old. Actually, I learned, Chairman Mao Zedong contrived TCM after 1950 as a means of controlling China’s rural population and burnishing the regime’s reputation abroad. In 1972, during Richard Nixon’s tour of Chinese hospitals, his guides stage-managed a demonstration of TCM’s miracles. American media reported the healing event at face value and launched the holistic health movement stateside. Several years later, the FDA sought to regulate the vitamin and supplement industry. Manufacturers fought back with a marketing campaign centered on “freedom of choice” and convinced Americans to stand up for their right not to know which ingredients may (or may not) be contained in their daily vitamins.

I needed to file a public records request with the Connecticut Department of Public Health to discover that Lawrence Caprio had been censured and fined for improperly labeling medication, for practicing without a license, and for passing himself off as a medical doctor. I also learned that Caprio’s naturopathy license had been suspended for two years after the FDA determined his bogus “sensitivity tests” violated its regulations. Misha, an actual immunologist confirmed, had no food allergies in the first place.

Was my son ever really burdened by toxins? Coyle said the results of the “energetic assessments” revealed that Misha carried quantities of heavy metals. Degrees of dangerousness were measured against a standard range credited to “Dr. Richard L. Cowden.” I sent Misha’s results to Cowden. I stated my belated impression that meaningful ranges for heavy metals don’t exist—we all have traces—and my belief that autism cannot be reversed. “I have reversed advanced autism in many children,” Dr. Cowden snapped. “I saw reversal of more than a dozen cases of full-blown autism, including my own grandson. So I am pretty sure the parents of those dozen+ children would debate you on your IMPRESSION/BELIEF.”

Cowden advised me to repeat Misha’s energetic assessment through the Internet and to place him into an “infrared sauna” to detoxify him. I declined.

Even before Misha’s first energetic assessment, the FDA had accused the device’s manufacturer of making unapproved claims. The FDA had approved it only for measuring “galvanic skin response.” But the company’s marketing materials had crossed over into unapproved diagnostic and predictive territory when they claimed that the “software indicates what is referred to as Biological Preference and Biological Aversion.” The software was recalled. “Dr. Cowden,” I also learned too late, was not the “Board Certified cardiologist and internist” that he advertises. He surrendered his medical license in 2008 after the Texas Board of Medical Examiners twice reprimanded him for endangering his patients. According to the American Board of Internal Medicine, Cowden’s certifications are “inactive.”

The “homotoxicology” that Coyle practiced had sounded to me like a branch of toxicology. But the two fields turn out to have nothing in common. An analysis of clinical trials of homotoxicology established that it is “not a method based on accepted scientific principles or biological plausibility.” Actual toxicologists pass a rigorous examination for their board certifications and adhere to a code of ethics. Homotoxicologists become so simply by declaring themselves homotoxicologists.

As for vitamins, supplements, and homeopathic remedies: an exception in federal law places them outside the FDA’s approval process. Only their manufacturers know what these dummy drugs contain. Last year, after fielding numerous reports of “toxic” reactions, finding “many serious violations” of manufacturing controls, and recording “significant harm” to children, the FDA warned the consuming public.

Homeopathy offers no detectable mechanism of action, nor any reason to believe that “aggravating” the primary symptoms of an illness is necessary to cure it. Water does not “remember,” at least not if the laws of molecular physics hold true. The tinier the dosage, homeopaths insist, the more potent the therapeutic effect the mother tincture will deliver. By this logic, a patient who misses a day might die of an overdose.

As I steered Misha back toward medical science, though, I remembered the gap that holism fills for parents like me. I took him to a “neuro-biologist,” a “neuro-psychologist,” and a “neuro-immunologist.” His “neuro-ophthalmologist” ordered an MRI. His “neuro-radiologist” read the images with algorithms—and pronounced his brain “normal” due to the absence of indications of damage.

That determination proved only the vacuity of scientific materialism. The “biological revolution” that seized psychiatry in the 1980s aspired to network the anatomical, electrical, and chemical functions of the brain. A procession of neuroimaging technologies held out the promise of progress: electroencephalography (EEG); computerized axial tomography (CAT); positron emission tomography (PET); magnetic resonance spectroscopy (MRS); magnetic resonance imaging (MRI). The resulting studies have always fallen pitifully short of a credible evidentiary threshold and have never done anything to expand treatment options. Mainly, neuroimaging has furnished opportunities to market the research industry, a breakthrough culture that has never broken through.

Holism, by contrast, answers prayers in the immaterial world, bidding to restore harmony through an aesthetically elegant fusion of mind, body, and spirit. As Coyle explained on her website: “Homotoxicology utilizes complex homeopathic remedies designed to restore the child’s vital force and balance the biological flow system.”

One part of me still craves holism’s beautiful notions. Another part recognizes in their desiccated spiritualism the return of a repressed pagan unconscious. I can no more believe in goblets of magic water and occult energy than I can conceal my disappointment with “neuro-radiology.”

Scientists long ago dispatched the “leaky gut” theory with a series of disproofs. Holistic parents, researchers, and clinicians, however, continue to reject what they contend are the false revelations of cold, mechanical instrumentalism. Tylenol, electromagnetic fields, “toxic baby food,” COVID-19 vaccines, HPV inoculation, “geo-engineering,” and genetically modified foods top the current indictment. William Shaw published a paper in 2020 purporting to demonstrate “rapid complete recovery from autism” through antifungal therapy. Mary Coyle attested last year to having healed her son’s chickenpox through “natural” remedies.

Many of the holistic advocacy organizations intermittently lost access to social media platforms during COVID. Yet censorship has deepened the martyrdom ingrained in this theodicy of misfortune. A spiritual war against invisible enemies animates their imaginations and elevates their personal disappointment to the status of a historical event. Rebaptized in nature’s holy immunity by ascetic protocols of abstinence and purification, they turn over a new leaf, as it were, and crave vindication above all else. “This book offers you two messages,” Bernard Rimland promised of the testimonials that he collected in Recovering Autistic Children: “You are not alone in your fight, and you can win.”

Here’s another message: Children need love and respect above all. As René Dubos wrote in Mirage of Health, “As far as life is concerned, there is no such thing as ‘Nature.’ There are only homes.”

Categories: Critical Thinking, Skeptic

Amanda Knox: My Wrongful Murder Conviction Made Me a Better Thinker

Skeptic.com feed - Sun, 02/23/2025 - 11:47am

In 2007, I was studying abroad in Perugia, Italy. I had been there for five weeks, my eyes wide with the excitement of navigating a foreign culture, my heart aflutter over a nerdy boy I’d met at a classical music recital. It all seemed like a glorious dream, until it became a nightmare. On November 1, a local burglar named Rudy Guede broke into the apartment I shared with three other young women, two Italian law interns and a British exchange student named Meredith Kercher. Meredith was the only one home that night. Rudy Guede raped her, stabbed her to death, and then fled the country to Germany.

Before the forensic evidence came back, showing unequivocally that Rudy Guede was responsible for this crime, the police and prosecution focused their attention on me. It was a logical place to start. Of all the roommates, I knew Meredith the best. I was the one who discovered that our house was a crime scene and notified the police. They told me I was their most important witness, that any small detail I might remember could be the clue they needed to find out who had done this to poor Meredith.

A week later, I was in jail, charged with Meredith’s murder. Two years later, I was convicted and sentenced to 26 years in prison. I went on to win my appeal and in 2011 I was acquitted, after four years incarcerated. Even after this vindication, however, so convinced were the Italian authorities that I was guilty, they overturned my acquittal, put me on trial in absentia for the same crime, convicted me again, and sentenced me to 28.5 years in prison. It wasn’t until 2015 that my legal nightmare ended when I was definitively acquitted by Italy’s highest court per non aver commesso il fatto — for not having committed the act. How and why did this happen? A big part of the answer has to do with cognitive bias and motivated reasoning.

Over the five days after Meredith was murdered, the police questioned me for a total of 53 hours, without a lawyer and almost entirely without a translator, and all in a language I spoke about as well as a ten-year-old. I was young (20), scared, and naïve to the ways of the criminal justice system. My final round of questioning went long into the night as they deprived me of sleep, of food, and of bathroom access. When I told them over and over again that I didn’t know what happened to Meredith and that I was at my boyfriend’s house that night, they refused to accept my answers. They slapped me, and they told me that I had amnesia, that I was so traumatized by what I’d witnessed that I had blocked it out.

Instead of listening to what I was telling them, they pushed me to “remember” something I didn’t remember, namely meeting my boss, Patrick Lumumba, at my house that night. Why? They’d found a text message on my phone. I worked at a local cafe, and Patrick had given me the evening off on the night Meredith was killed. I had thanked him, and I texted him back in my broken Italian, “Ci vediamo più tardi,” my best attempt at “See you later.” But to the Perugian authorities, this English idiom didn’t translate as a casual, “I’ll see you when I see you.” To them, it meant I had made an appointment to meet Patrick later that night. You met Patrick, they told me. We know you brought him to the house. I told them that was wrong countless times, but they wouldn’t believe me.

Motivated reasoning was already in full effect among the investigators. Some early lost-in-translation moments outside the house once the police arrived had given them suspicions about my candor. There was confusion over whether Meredith regularly locked her door or merely closed her door (in Italian, the word for “to lock” is “to close with a key”). And as there was nothing obviously stolen from the apartment, they leapt to the conclusion that the break-in — the rock, the smashed window — was staged. The prosecutor, Giuliano Mignini, even assumed, as he said much later in a documentary about the case, that only a woman would cover the body of a murder victim with a blanket. Oh really? And how does he know this?

My behavior was also grossly misinterpreted. With a flurry of panicked Italian whipping past me, I often didn’t understand what was happening. When my other roommate looked into Meredith’s room once they kicked the door down, she started screaming about what she saw. But I never saw into Meredith’s room, and I didn’t quite understand what she was so upset about. The idea that Meredith had been killed was just so far out of my world of possibilities that I couldn’t even imagine it. So when I stood outside the crime scene, looking dazed but not obviously hysterical, this was interpreted as me looking cold and unemotional in the face of my roommate’s murder, a fact I had yet to fully comprehend.

It didn’t help that I did lie to them early in my questioning. My Italian roommates, who were both big pot smokers, begged me not to tell the police about the marijuana, to deny that anyone in the house smoked it, because they’d lose their law internships if anyone found out. Coming from pot-friendly Seattle, and thinking this was small potatoes and quite irrelevant to what had happened to Meredith, I did what they asked me. But the police had found evidence of marijuana in the house, and they knew I wasn’t being honest. I came clean immediately, but it was too late. That small lie and the other misunderstandings were anchoring biases that shaped how the investigators interpreted everything afterward and led them to believe that I was withholding something, that I wasn’t telling them the whole truth about that night. Hence their erroneous certainty that the benign text message to Patrick was evidence of something nefarious.

Photo of Amanda Knox by Patrik Andersson

My own biases led me to trust them. I was nearly 6,000 miles from home, my friend had just been murdered, the killer was on the loose, and I was scared. I thought if anyone would keep me safe, if anyone had my best interests and wellbeing in mind, it was the authorities. This was my fundamental prior belief, shaped by my own privileged upbringing: the cops are the good guys. I’d never had a bad interaction with the police. I had no reason to think they would lie to me. So when they did lie, when they told me that Raffaele, my boyfriend of a week, had turned on me and denied my alibi (he hadn’t), when they lied that they had evidence that I was home that night (they didn’t), I tried to make sense of it. If they weren’t lying, then what other explanation was there? After hours and hours of this intense pressure, I started to believe them that I did have amnesia, and I honestly tried to remember what might have happened. I tried to imagine meeting Patrick like they said I did. They typed up a statement from these blurry incoherent ramblings, and, utterly exhausted, well past the edge of my sanity, I signed it. I was simply naïve to the fact that police lie to suspects to get a confession, even a false one.

It didn’t matter that I recanted that statement almost immediately once I was out of the pressure cooker of the interrogation room and had recovered my senses and reasoning. That false admission sealed my fate. And it became the biggest anchoring bias that would shape the case for the next eight years and my own reputation to this day.

We know how unreliable such interrogation methods are from DNA exonerations. According to the Innocence Project, nearly one in four proven wrongful convictions involves a false confession. And yet, it’s so hard for anyone who hasn’t been through a coercive interrogation to understand how a person could sign false statements implicating themselves or others. Even the police on the other side of the table don’t understand it. They truly think they’re just cracking a suspect and getting them to admit the truth. But they are suffering from a cognitive bias that we are all susceptible to, the idea that our own experience of the world is a reasonable reference. They’ve never signed false statements, so why would a suspect?

Once you see how the false confession I signed became a fundamental prior for the investigators and prosecution, everything else starts to make sense. When the forensic evidence came back weeks later implicating a sole perpetrator, Rudy Guede, they had to find a way to fit this new evidence with their prior belief. This is just what humans do.

A recent study argues that nearly all the cognitive biases we are susceptible to — confirmation bias, the anchoring bias, the framing effect, and so on — can be reduced to “the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing.” The prosecution, holding my coerced statements as a fundamental prior belief, tried to force the new forensic information implicating Rudy Guede to be consistent with the idea that I was present that night. And thus, with no evidence, and contrary to my own character and history, they invented a motive and wove a story out of whole cloth about a sex game gone awry and a three-way murder plot involving Rudy, a man whose name I didn’t even know, and my boyfriend of a week.

It was never a satisfying answer to me that the people responsible for my wrongful conviction were evil, or uniquely incompetent. And once I learned about how common wrongful convictions are even in the U.S., this was even more obvious to me. I wanted to know why this had happened to me, and how mostly well-intentioned people who wanted to repair the tear in the fabric of their community, to bring a perpetrator to justice, and to bring closure to Meredith’s grieving family, could have gotten it so, so wrong. Nothing has been more illuminating for me on this question than diving deeply into the research on cognitive bias.

Most of the specific biases I’m about to discuss are reducible to a general pattern of a fundamental belief and belief-consistent information processing, but I find the added specificity useful to help me see these types of errors in my own thinking.

The anchoring bias is the tendency to rely on the first piece of information, regardless of its validity, when interpreting later information. Thus, early suspicion against me shaped how all later evidence was interpreted. This has also impacted my reputation, and explains why I still receive so much vitriol. Despite my definitive acquittal, the first thing most people heard about me was that I was a suspected killer, and that colors everything else they ever hear about me.

And if they persist in believing conspiracy theories about my guilt, they are helped along by the base rate fallacy, the tendency to ignore general information and focus only on the specifics of one case. Those who think I’m guilty rarely look at general information regarding murders and wrongful convictions. If they did, they’d see how vanishingly rare it is for women to commit knife killings against other women, and how common the errors in my case were. It features all the hallmarks of wrongful convictions, many of which result from cognitive biases themselves.
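The pull of the base rate can be made concrete with a back-of-the-envelope Bayes calculation. Every number below is hypothetical, chosen only to illustrate the arithmetic, not drawn from the actual case.

```python
# Illustrative Bayes calculation showing the base rate fallacy.
# All probabilities here are made-up placeholders.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H | E) via Bayes' theorem, given the prior and the two likelihoods."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

# Suppose "odd behavior" is shown by 50% of guilty people but also by 20% of
# innocent people under stress, and the prior (base rate) that any given
# acquaintance of a victim is the killer is 1%.
p = posterior(prior=0.01, p_e_given_h=0.5, p_e_given_not_h=0.2)
assert p < 0.03  # ~2.5%: "odd behavior" barely moves the needle
```

Ignoring the 1% base rate and reasoning only from the likelihoods ("guilty people act odd; she acted odd") makes guilt feel probable, when the posterior remains small.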

The most general form of this is often called confirmation bias, the tendency to seek out information that confirms a hypothesis and ignore information that disconfirms it. With wrongful convictions, this is known as tunnel vision. The anchoring bias of an initial hunch shapes the investigators’ search for more information. They magnify the significance of any tiny thing that confirms the anchor and write off large things that don’t. Thus, much weight was put on a kiss between Raffaele and me, while the facts that my DNA was not present in the room where the murder happened, and that it would have been impossible to participate in such a violent struggle without leaving any traces, were ignored. This is sometimes called the conservatism bias, the tendency to insufficiently revise one’s prior beliefs in light of new evidence.

Then there’s the salience bias: the tendency to ignore unremarkable items and focus on striking ones. The prosecution did this to me, and many people continue to succumb to this bias still. Malcolm Gladwell makes this mistake in his analysis of my case. Like the prosecution and tabloid media, he overlooked the copious moments of unremarkable behavior and highlighted the few moments of so-called “odd” behavior, putting great explanatory weight on them and framing me as someone who acts guilty despite my innocence.

That bias is tied in with the fundamental attribution error: the tendency to overemphasize personality-based explanations for others’ behavior and to de-emphasize the role of context. Thus, in judging my behavior in those early days, people ignored the fact that I was alone, far from home, my roommate had just been murdered, and the killer was on the loose. It was a scary and traumatizing experience, and people react in many ways to trauma. Instead, people often strip my behavior from this context and conclude that I must be weird, “off,” or suspicious. This same bias often comes into play when people reflect upon the false statements I signed. Instead of explaining those false statements by the brutal and coercive context I was in, they leap to a conclusion about my character, that I must be an untrustworthy liar.

Selection bias magnifies all these initial biases by shaping what gets reported. There are no news articles from 2007 about all the moments that I looked sad, or scared, or exhausted. There are no deep-dive articles about my perfectly benign upbringing, complete lack of a history of violence or mental illness, about my strong community and loving family. But one small moment caught on camera of me seeking comfort from the young man I’d met five days previous, sharing a chaste kiss while confused and scared, gets endlessly republished, repeated, and played on loop.

At trial, a host of other cognitive biases came into play. Stereotyping was used to paint me as an American “girl gone wild,” though I was in fact a nerdy poetry and language student. The rhyme as reason effect, in which something that rhymes is seen as more truthful, was used against me. Thus, the moniker “Foxy Knoxy” shaped opinion of me as sly and devious. In Italian, they translated this as Volpe Cattiva, the wicked fox.

The framing effect was used repeatedly at trial to present benign behaviors as suspicious. She ate pizza after her friend had been murdered? Why wasn’t she wasting away, sobbing? Literally, the fact that I ate pizza was used against me as evidence that I was not sufficiently morose, as if a grieving and scared person can’t also be hungry.

All of that framing was repeated for eight years of trial, and it affects me to this day through the continued influence effect, the tendency to believe previously learned misinformation even after it has been corrected. My reputation has not been fully restored. Many people still think that, even if I’m not guilty of murder, I must have had something to do with the crime, or I must have somehow brought suspicion upon myself.

The proportionality bias is our tendency to assume that big events have big causes, when often they are caused by many small things. The massive decade-long series of trials with global media coverage doesn’t need an underlying conspiracy as a cause. It doesn’t require that I was grandly suspicious, nor does it require that the authorities were grandly corrupt. The conjunction of many small cognitive biases by the authorities and media is enough to explain the massive debacle the case became, but the proportionality bias leads us to think there must be a bigger reason.

As far as my continued reputational damage, I can thank the illusory truth effect, the tendency to believe a statement is true if it’s easier to process or has been repeated many times. “Amanda Knox is Bad” is a lot simpler than explaining the miscarriage of justice. This is related to the availability cascade, in which a collective belief is seen as more plausible through repetition in public discourse. The hundreds (thousands?) of media articles painting me as a killer have shaped this perception that many people still have of me.

I try to counter that perception by acting honorably and putting thoughtful work into the world. However, the structures of social media and psychological factors create further selection bias. If I tweet about criminal justice reform, I get maybe a dozen retweets. If I make a joke about my wrongful imprisonment, the tweet spreads far and wide, and I pop onto others’ radar in that context. They don’t see the vast amount of serious work I do, and only see the highly retweeted joke, and conclude that I’m purely flippant.

And then they judge me for making light of a tragedy, but fail to distinguish between the tragedy that befell Meredith and the one that befell me. This is the zero sum bias, assuming incorrectly that if one person gains, another must lose. In this case, they assume that respecting my victimhood by the Italian justice system is tantamount to disrespecting Meredith’s victimhood for being murdered by Rudy Guede. I’ve coined my own term for this specific situation: the single victim fallacy.

You see it often in wrongful conviction cases. People wrongly assume that there can only be one true victim, and that if we are to honor the victim of the original crime, we must deny that anything wrong happened to the person wrongfully convicted. In truth, wrongful convictions multiply victimhood. Meredith is a victim of murder. I am a victim of a miscarriage of justice. Both our families are also victims of this miscarriage of justice, which has denied them closure and put our families through hell. Because of this single victim fallacy, I am told I should never joke about the injustice I suffered, because it is conflated with the injustice done to Meredith by someone else. Because of this fallacy, I am told to shut up and disappear.

These cognitive biases have caused a lot of pain in my life, and in the lives of others touched by this case. And they have also gotten in the way of potential healing. I still hope one day to be able to come together with Meredith’s surviving family in recognition of our shared and overlapping victimhood from the actions of Rudy Guede and the Italian authorities. But as far as I know, they remain in thrall to the single victim fallacy.

I don’t know if that day will ever come, but in the meantime, I take solace in the fact that I have such a great opportunity to see these cognitive errors up close. I was able to see how poorly many people judge this complicated case that took over my life, particularly the facts and the individuals involved in it. To see how wrongly they judge me. This makes me a better thinker. It helps me to better avoid all the cognitive biases that caused my wrongful conviction, that led to slanderous media coverage, and that are still responsible for the hate I regularly receive.

And I would be remiss if I didn’t point out the bias blind spot, the tendency to see yourself as less biased than others. Knowing these biases exist doesn’t make me immune to them. I know I can fall prey to them just as much as the people who imprisoned me. So if you have to have a fundamental prior belief that shapes your reasoning, let it be a belief in your own susceptibility to cognitive bias.
