You are here

Skeptic

Is Planned Obsolescence Real

neurologicablog Feed - Fri, 04/04/2025 - 5:54am

Yes – it is well-documented that in many industries the design of products incorporates a plan for when the product will need to be replaced. A blatant example was in 1924 when an international meeting of lightbulb manufacturers decided to limit the lifespan of lightbulbs to 1,000 hours, so that consumers would have to constantly replace them. This artificial limitation did not end until CFLs and then LED lightbulbs largely replaced incandescent bulbs.

But – it’s more complicated than you might think (it always is). Planned obsolescence is not always about gimping products so they break faster. It often is – products are made so they are difficult to repair or upgrade and arbitrary fashions change specifically to create demand for new versions. But often there is a rational decision to limit product quality. Some products, like kid’s clothes, have a short use timeline, so consumers prefer cheap to durable. There is also a very good (for the consumer) example of true obsolescence – sometimes the technology simply advances, offering better products. Durability is not the only nor the primary attribute determining the quality of a product, and it makes no sense to build in expensive durability for a product that consumers will want to replace. So there is a complex dynamic among various product features, with durability being only one feature.

We can also ask the question, for any product or class of products, is durability actually decreasing over time? Consumers are now on the alert for planned obsolescence, and this may produce the confirmation bias of seeing it everywhere, even when it’s not true. A recent study looking at big-ticket appliances shows how complex this question can be. This is a Norwegian study looking at the lifespan of large appliances over decades, starting in the 1950s.

First, they found that for most large appliances, there was no decrease in lifespan over this time period. So the phenomenon simply did not exist for the items that homeowning consumers care the most about, their expensive appliances. There were two exceptions, however – ovens and washing machines. Each has its own explanations.

For washing machines, the researchers found another plausible explanation for the decrease in lifespan from 19.2 to 10. 6 years (a decrease of 45%). The researchers found that over the same time, the average number of loads a household of four did increased from 2 per week in 1960 to 8 per week by 2000. So if you count lifespan not in years but in number of loads, washing machines had become more durable over this time. I suspect that washing habits were formed in the years when many people did not have washing machines, and doing laundry was brutal work. Once the convenience of doing laundry in the modern era settled in (and perhaps also once it became more than woman’s work), people did laundry more often. How many times do you wear an article of clothing before you wash it? Lots of variables there, but at some point it’s a judgement call, and this likely also changed culturally over time.

For ovens there appears to be a few answers. One is that ovens have become more complex over the decades. For many technologies there is a trade-off between simple but durable, and complex but fragile. Again – there is a tradeoff, not a simple decision to gimp a product to exploit consumers. But there are two other factors the researchers found. Over this time the design of homes have also changed. Kitchens are increasingly connected to living spaces with a more open design. In the past kitchens were closed off and hidden away. Now they are where people live and entertain. This means that the fashion of kitchen appliances are more important. People might buy new appliances to make their kitchen look more modern, rather than because the old ones are broken.

If this were true, however, then we would expect the lifespan of all large kitchen appliances to converge. As people renovate their kitchens, they are likely to buy all new appliances that match and have an updated look. This is exactly what the researchers found – the lifespan of large kitchen appliances have tended to converge over the years.

They did not find evidence that the manufacturers of large appliances were deliberately reducing the durability of their products to force consumers to replace them at regular intervals. But this is the narrative that most people have.

There is also a bigger issue of waste and the environment. Even when the tradeoffs for the consumer favor cheaper, more stylish and fashionable, or more complex products with lower durability, is this a good thing for the world? Landfilled are overflowing with discarded consumer products. This is a valid point, and should be considered in the calculus when making purchasing decisions and also for regulation.  Designing products to be recyclable, repairable, and replaceable is also an important consideration. I generally replace my smartphone when the battery life gets too short, because the battery is not replaceable. (This is another discussion unto itself.)

But replacing old technology with new is not always bad for the environment. Newer dishwashers, for example, are much more energy and water efficient than older ones. Refrigerators are notorious energy hogs, and newer models are substantially more energy efficient than older models. This is another rabbit hole, exactly when do you replace rather than repair an old appliance, but generally if a newer model is significantly more efficient, replacing may be best for the environment. Refrigerators, for example, probably should be upgraded every 10 years with newer and more efficient models – so then why build them to last 20 or more?

I like this new research and this story primarily because it’s a good reminder that everything is more complex than you think, and not to fall for simplistic narratives.

The post Is Planned Obsolescence Real first appeared on NeuroLogica Blog.

Categories: Skeptic

Frans de Waal: His Final Interview

Skeptic.com feed - Thu, 04/03/2025 - 3:28pm

Frans de Waal was one of the world’s leading primatologists. He has been named one of TIME magazine’s 100 Most Influential People. The author of Are We Smart Enough to Know How Smart Animals Are?, as well as many other works, he was the C.H. Candler Professor in Emory University’s Psychology Department and director of the Living Links Center at the Yerkes National Primate Research Center.

Skeptic: How can we know what another mind is thinking or feeling?

Frans de Waal: My work is on animals that cannot talk, which is both a disadvantage and advantage. It’s a disadvantage because I cannot ask them how they feel and what their experiences are, but it is an advantage because I think humans lie a lot. I don’t trust humans. I’m a biologist but I work in a psychology department, and all my colleagues are psychologists. Most psychologists nowadays use questionnaires, and they trust what people tell them, but I don’t. So, I’d much rather work with animals where instead of asking how often they have sex, I just count how often. That’s more reliable.

I cannot ask them how they feel and what their experiences are, but it is an advantage because I think humans lie a lot. I don’t trust humans.

That said, I distinguish between emotions and feelings because you cannot know the feelings of any animals. But I can deduce them, guess at them. Personally, I feel it’s very similar with humans. Humans can tell me their feelings, but even if you tell me that you are sad, I don’t know if that’s the same sadness that I would feel under the same circumstances, so I can only guess what you feel. You might even be experiencing mixed feelings, or there may be feelings you’re not even aware of, and so you’re not able to communicate them. We have the same problem in non-human species as we do in humans, because feelings are less accessible and require guesswork.

That said, sometimes I’m perfectly comfortable guessing at the feelings of animals, even though you must distinguish them from the things you can measure. I can measure facial expressions. I can measure blood pressure. I can measure their behavior, but I can never really measure what they feel. But then, psychologists can’t do that with people either.

Skeptic: Suppose I’m feeling sad and I’m crying at some sort of loss. And then I see you’ve experienced a loss and that you’re crying … Isn’t it reasonable to infer that you feel sad?

FdW: Yes. And so that same principle of being reasonable can be applied to other species. And the closer that species is to you, the easier it is. Chimpanzees and bonobos cry and laugh. They have facial expressions— the same sort of expressions we do. So it’s fairly easy to infer the feelings behind those expressions and infer they may be very similar to our own. If you move to, say, an elephant, which is still a mammal, or to a fish, which is not, it becomes successively more difficult. Fish don’t even have facial expressions. That doesn’t mean that fish don’t feel anything. It would be a very biased view to assume that an animal needs to show facial expressions as evidence that it feels something.

At the same time, research on humans has argued that we have six basic emotions based on the observation that we have six basic facial expressions. So, there the tie between emotions and expressions has been made very explicit.

In my work, I tend to focus on the expressive behavior. But behind it, of course, there must be similar feelings. At least that’s what Darwin thought.

Chimpanzees and bonobos cry and laugh. They have facial expressions—the same sort of expressions we do.

Skeptic: That’s not widely known, is it? Darwin published The Expression of the Emotions in Man and Animals in 1872, but it took almost a century before the taboo against it started to lift.

FdW: It’s the only book of Darwin’s that disappeared from view for a century. All the other books were celebrated, but that book was placed under some sort of taboo. Partly because of the influence of the behaviorist school of B.F. Skinner, Richard Herrnstein, and others, it was considered silly to think that animals would have the same sort of emotions as we do.

Biologists, including my own biology professors, however, found a way out. They didn’t need to talk about emotions because they would talk about the function of behavior. For example, they would not say “the animal is afraid” but rather that “the animal escapes from danger.” They phrased everything in functional terms—a semantic trick that researchers still often use.

If you were to say that two animals “love each other” or that “they’re very attached to each other,” you’re likely to receive significant criticism, if not ridicule. So why even describe it that way? Instead, you objectively report that the animals bonded and they benefited from doing so. Phrasing it functionally has, well, functioned as a sort of preferred safe procedure. But I have decided not to employ it anymore.

Skeptic: In most of your books you talk about the social and political context of science. Why do you think the conversation about animal emotions was held back for almost a century?

FdW: World War II had an effect on the study of aggression, which became a very popular topic in the 1960s and 70s. Then we got the era of “the selfish gene” and so on. In fact, the silencing of the study of mental processes and emotions in animals started before the war. It actually started in the 1920s and 30s. And I think it’s because scientists such as Skinner wanted the behavioral sciences to be like the physical sciences. They operated under the belief that it provided a certain protection against criticism to get away from anything that could be seen as speculation. And there was a lot of speculation going on in the so-called “depth psychologies,” some of it rather wild.

However, there are a lot of invisible things in science that we assume to be true, for example, evolutionary theory. Evolution is not necessarily visible, at least most of the time it isn’t, yet still, we believe very strongly that evolution happened. Continental drift is unobservable, but we now accept that it happened. The same principle can be applied to animal feelings and animal consciousness. You assume it as a sort of theory and see if things fit. And, research has demonstrated that things fit quite well.

Skeptic: Taking a different angle, can Artificial Intelligence (AI) experience emotions? Was IBM’s Watson “thrilled” when it beat Ken Jennings, the all-time champion of Jeopardy!? Well, of course not. So what do you think about programming such internal states into an artificial intelligence?

FdW: I think researchers developing AI models are interested in affective programs because of the way we biologists look at emotions. Emotions trigger actions that are adaptive. Fear is an adaptive emotion because it may trigger certain behaviors such as hiding, escaping, etc., so we look at emotions as being the stimulus that elicits certain specific types of behavior. Emotions organize behavior, and I think that’s what the AI people are interested in. Emotions are actually a very smart system, compared to instincts. Someone might argue that instincts also trigger behavior. However, while instincts are inflexible, emotions are different.

Let’s say you are afraid of something. The emotion of fear doesn’t trigger your behavior. An emotion just prepares the body for certain behaviors, but you still need to make a decision. Do I want to escape? Do I want to fight? Do I want to hide? What is the best behavior under these circumstances? And so, your emotion triggers the need for a response, and then your cognition takes over and searches for the best solution. It’s a very, very nice system and creators of AI models are interested in such an organizational system of behavior. I’m not sure they will ever construct the feelings behind the emotions—it’s not an easy thing to do—but certainly organizing behavior according to emotions is possible.

Skeptic: Are emotions created from the bottom-up? How do you scale from something very simple up to much higher levels of complexity?

FdW: Humans have a complex emotional system—we mix a lot of emotions, sort them, regulate them. Well, sometimes we don’t actually regulate them and that is something that really interests me in my work with animals. What kind of regulation do they have over their emotions? People often say that we have emotions and we can suppress them, whereas animals have emotions that they have to follow. However, experiments have demonstrated that’s not really the case. For example, we give apes the marshmallow test. Briefly, that’s where you put a child in a situation in which he or she can either eat a marshmallow immediately, or wait and get a second one later. Well, kids are willing to wait for 15 minutes. If you do that same experiment with apes, they’re also willing to wait for 15 minutes. So they can control their emotions. And like children, apes seek distractions from the situation because they’re aware that they’re dealing with certain specific emotions. Therefore, we know that apes have a certain awareness of their emotions, and they have a certain level of control over them. This whole idea that regulation of emotions is specifically human, while animals can only follow them, is wrong.

The emotional farewell between the chimpanzee Mama and her caretaker, Jan van Hooff (Source)

That’s actually the reason I wrote Mama’s Last Hug. The starting point of the book was when Prof. Jan Van Hoff came on TV and showed a little clip that everyone has seen by now, where he and a chimpanzee called Mama hug each other. Both he and I were shocked when the clip went viral and generated such a response. Many people cried and wrote to us to say they were very influenced by what they saw. The truth is Mama was simply showing perfectly normal chimpanzee behavior. It was a very touching moment, obviously, but for those familiar with chimps, there was nothing surprising about the behavior. And so, I wrote this book partly because I noticed that people did not know how human-like the expressions of the apes are. Embracing, and hugging, and calming someone down, and having a big smile on your face are all common behaviors seen in primates and are not unique to humans.

Skeptic: Your famous experiment with capuchin monkeys, where you offer them a grape or a piece of cucumber, is along similar lines. When the monkey got the cucumber instead of the grape, he got really angry. He threw the cucumber back, then proceeded to pound on the table and the walls … He was clearly ticked off at the injustice he felt had been done him, just as a person would be.

A still from the famous capuchin monkey fairness experiment (Source: Frans de Waal’s TED Talk)

FdW: The funny thing is that primates, including those monkeys, have all the same expressions and behaviors as we do. And so, they shake their cage and throw the cucumber at you. The behavior is just so extremely similar, and the circumstances are so similar … I always say that if related species behave in a similar way under similar circumstances, you have to assume a shared psychology lies behind it. It is just not acceptable in this day and age of Darwinian philosophy, so to speak, to assume anything else. If people want to make the point that it’s maybe not similar, that maybe the monkey was actually very happy while he was throwing the stuff … they’ll have a lot of work to do to convince me of that.

Skeptic: What’s the date of the last common ancestor humans shared with chimps and bonobos?

FdW: It’s about 6 million years ago.

Skeptic: So, these are indeed pretty ancient emotions.

FdW: Oh, they go back much further than that! Like the bonding mechanism based on oxytocin—the neuropeptides in bonding go back to rodents, and probably even back to fish at some point. These neuropeptide circuits involved in attachment and bonding are very ancient. They’re even older than mammals themselves.

Skeptic: One emotion that seems very uniquely human is disgust. If a chimp or Bonobo comes across a pile of feces or vomit, what do they do?

FdW: When we do experiments and put interesting food on top of feces and see if the chimp is willing to take it, they don’t. They refuse to. The facial expression of the chimps is the same as we have for disgust—with the wrinkly nose and all that. Chimps also show it, for example, when it rains. They don’t like rain. And they show it, sometimes, in circumstances where they encounter a rat. So, some of these emotions have been proposed as being uniquely human, but I disagree. Disgust, I think, is a very old emotion.

If related species behave in a similar way under similar circumstances, you have to assume a shared psychology lies behind it.

Disgust is an interesting case because we know that both in chimps and humans a specific part of the brain called the insula is involved. If you stimulate the insula in a monkey who’s chewing on good fruit, he’ll spit it out. If you put humans in a brain scanner and show them piles of feces or things they don’t want to see, the insula is likewise activated. So here we have an emotion that is triggered under the same circumstances, that is shown in the face in the same way, and that is associated with the same specific area in the brain. So we have to assume it’s the same emotion across the board. That’s why I disagree with those scientists who have declared disgust uniquely human.

Skeptic: In one of your lectures, you show photos of a horse wrinkling up its nose and baring its teeth. Is that a smile or something else?

FdW: The baring of the teeth is very complex because in many primates it is a fearful signal shown when they’re afraid or when they’re intimidated by dominance and showing submission. So, we think it became a signal of appeasement and non-hostility. Basically saying, “I’m not hostile. Don’t expect any trouble from me.” And then over time, especially in apes and then in humans, it became more and more of a friendly signal. So it’s not necessarily a fear signal. Although we still say that if someone smiles too much, they’re probably nervous.

Skeptic: Is it true that you can determine whether someone’s giving you a fake smile or a real smile depending on whether the corners of their eyes are pulled down?

FdW: Yes, this is called the Duchenne smile. Duchenne was a 19th century French neurologist. He studied people who had facial paralysis, meaning they had the muscles, but they could not feel anything in their face. This allowed him to put electrodes on their faces and stimulate them. He methodically contracted different muscles and noticed he could produce a smile on his subjects. Yet he was never quite happy with the smile—it just didn’t look real. Then one day he told a subject a joke. A very good joke, I suppose, and all of a sudden, he got a real full-blown smile. That’s when Duchenne decided that there needs to be a contraction and a narrowing of the eyes for a smile to be a real smile. So, we now distinguish between the fake smile and the Duchenne smile.

Skeptic: So, smiling involves a whole complex suite of muscles. Is the number of muscles in the face of humans higher than other species?

FdW: Do we have far more muscles in the face than a chimpanzee? I’ve heard that all my life. Until people who analyze faces of chimpanzees found exactly the same number of muscles in there as in a human face. So that whole story doesn’t hold up. I think the confusion originated because when we look at the human face, we can interpret so many little details of it—and I think chimps do that with each other too—but when we look at a chimp, we only see the bold, more flamboyant expressions.

Skeptic: Have we evolved in the way we treat other animals?

FdWThe Planet of the Apes movies provide a good example of that. I’m so happy that Hollywood has found a way of featuring apes in movies without the involvement of real animals. There was a time when Hollywood had trainers who described what they do as affective training. Not effective, but affective. They used cattle prods, and stuff like that. People used to think that seeing apes dressed up or producing silly grins was hilarious. No longer. We’ve come a long way from that.

SkepticThe Planet of the Apes films show apes that are quite violent, maybe even brutal. You actually studied the darker side of emotion in apes. Can you describe it?

FdW: Most of the books on emotions in animals dwell on the positive: they show how animals love each other, how they hug each other, how they help each other, how they grieve … and I do think that’s all very impressive. However, the emotional life of animals—just like that of humans— includes a lot of nasty emotions.

We do not treat animals very well, certainly not in the agricultural industry.

I have seen so much of chimpanzee politics that I witnessed those very dark emotions. They can kill each other. One of the killings I’ve witnessed was in captivity. So, when it happened, I thought maybe it was a product of captivity. Some colleagues said to me, “What do you expect if you lock them up?” But now we know that wild chimpanzees do the exact same thing. Sometimes, if a male leader loses his position or other chimps are not happy with him, they will brutally kill him. At the same time, chimpanzees can also be good friends, help each other, and defend their territory together—just like people who on occasion hate each other or even kill each other, but otherwise coexist peacefully.

The more important point is that we do not treat animals very well, certainly not in the agricultural industry. And we need to do something about that.

Skeptic: Are you a vegetarian or vegan?

FdW: No. Well, I do try to avoid eating meat. For me, however, the issue is not so much the eating, it’s the treatment of animals. As a biologist, I see the cycle of life as a natural thing. But it bothers me how we treat animals.

Skeptic: What’s next for you?

FdW: I’m going to retire! In fact, I’ve already stopped my research. I’m going to travel with my wife, and write.

Dr. Frans de Waal passed away on March 14, 2024, aged 75. In Loving Memory.

Categories: Critical Thinking, Skeptic

The Transition to Agriculture

neurologicablog Feed - Thu, 04/03/2025 - 5:00am

It is generally accepted that the transition from hunter-gatherer communities to agriculture was the single most important event in human history, ultimately giving rise to all of civilization. The transition started to take place around 12,000 years ago in the Middle East, China, and Mesoamerica, leading to the domestication of plants and animals, a stable food supply, permanent settlements, and the ability to support people not engaged full time in food production. But why, exactly, did this transition occur when and where it did?

Existing theories focus on external factors. The changing climate lead to fertile areas of land with lots of rainfall, at the same time food sources for hunting and gathering were scarce. This occurred at the end of the last glacial period. This climate also favored the thriving of cereals, providing lots of raw material for domestication. There was therefore the opportunity and the drive to find another reliable food source. There also, however, needs to be the means. Humanity at that time had the requisite technology to begin farming, and agricultural technology advanced steadily.

A new study looks at another aspect of the rise of agriculture, demographic interactions. How were these new agricultural communities interacting with hunter-gather communities, and with each other? The study is mainly about developing and testing an inferential model to look at these questions. Here is a quick summary from the paper:

“We illustrate the opportunities offered by this approach by investigating three archaeological case studies on the diffusion of farming, shedding light on the role played by population growth rates, cultural assimilation, and competition in shaping the demographic trajectories during the transition to agriculture.”

In part the transition to agriculture occurred through increased population growth of agricultural communities, and cultural assimilation of hunter-gatherer groups who were competing for the same physical space. Mostly they were validating the model by looking at test cases to see if the model matched empirical data, which apparently it does.

I don’t think there is anything revolutionary about the findings. I have read many years ago that cultural exchange and assimilation was critical to the development of agriculture. I think the new bit here is a statistical approach to demographic changes. So basically the shift was even more complex than we thought, and we have to remember to consider all internal as well as external factors.

It does remain a fascinating part of human history, and it seems there is still a lot to learn about something that happened over a long period of time and space. There’s bound to be many moving parts. I always found it interesting to imagine the very early attempts at agriculture, before we had developed a catalogue of domesticated plants and animals. Most of the food we eat today has been cultivated beyond recognition from its wild counterparts. We took many plants that were barely edible and turned them into crops.

In addition, we had to learn how to combine different foods into a nutritionally adequate diet, without having any basic knowledge of nutrition and biochemistry. In fact, for thousands of years the shift to agriculture lead to a worse diet and negative health outcomes, due to a significant reduction in diet diversity. Each culture (at least the ones that survived) had to figure out a combination of staple crops that would lead to adequate nutrition. For example, many cultures have staple dishes that include a starch and a legume, like lentils and rice, or corn and beans. Little by little we plugged the nutritional holes, like adding carrots for vitamin A (even before we knew what vitamin A was).

Food preparation and storage technology also advanced. When you think about it, we have a few months to grow enough food to survive an entire year. We have to store the food and save enough seeds to plant the next season. We take for granted in many parts of the developed world that we can ship food around the world, and we can store food in refrigerated conditions, or sterile containers. Imagine living 5,000 years ago without any modern technology. One bad crop could mean mass starvation.

This made cultural exchange and trade critical. The more different communities could share knowledge the better everyone could deal with the challenges of subsistence farming. Also, trade allowed communities to spread out their risk. You could survive a bad year if a neighbor had a bumper crop, knowing eventually the roles will reverse. The ancient world had a far greater trading system than we previously knew or most people imagine. The bronze age, for example required bringing together tin and copper from distant mines around Eurasia. There was still a lot of fragility in this system (which is why the bronze age collapsed, and other civilizations often collapsed), but obviously in the aggregate civilization survived and thrived.

Agricultural technology was so successful it now supports a human population of over 8 billion people, and it’s likely our population will peak at about 10 billion.

The post The Transition to Agriculture first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #982: Defending Against the Planet Killers

Skeptoid Feed - Tue, 04/01/2025 - 2:00am

All of the ways you've heard that deep space wants to kill us — and how plausible or likely each scenario is.

Learn about your ad choices: dovetail.prx.org/ad-choices
Categories: Critical Thinking, Skeptic

The Politicians We Deserve

neurologicablog Feed - Mon, 03/31/2025 - 5:03am

This is an interesting concept, with an interesting history, and I have heard it quoted many times recently – “we get the politicians (or government) we deserve.” It is often invoked to imply that voters are responsible for the malfeasance or general failings of their elected officials. First let’s explore if this is true or not, and then what we can do to get better representatives.

The quote itself originated with Joseph de Maistre who said, “Every nation gets the government it deserves.” (Toute nation a le gouvernement qu’elle mérite.) Maistre was a counter-revolutionary. He believed in divine monarchy as the best way to instill order, and felt that philosophy, reason, and the enlightenment were counterproductive. Not a great source, in my opinion. But apparently Thomas Jefferson also made a similar statement, “The government you elect is the government you deserve.”

Pithy phrases may capture some essential truth, but reality is often more complicated. I think the sentiment is partly true, but also can be misused. What is true is that in a democracy each citizen has a civic responsibility to cast informed votes. No one is responsible for our vote other than ourselves, and if we vote for bad people (however you wish to define that) then we have some level of responsibility for having bad government. In the US we still have fair elections. The evidence pretty overwhelmingly shows that there is no significant voter fraud or systematic fraud stealing elections.

This does not mean, however, that there aren’t systemic effects that influence voter behavior or limit our representation. This is a huge topic, but just to list a few examples – gerrymandering is a way for political parties to choose their voters, rather than voters choosing their representatives, the electoral college means that for president some votes have more power than others, and primary elections tend to produce more radical options. Further, the power of voters depends on getting accurate information, which means that mass media has a lot of power. Lying and distorting information deprives voters of their ability to use their vote to get what they want and hold government accountable.

So while there is some truth to the notion that we elect the government we deserve, this notion can be “weaponized” to distract and shift blame from legitimate systemic issues, or individual bad behavior among politicians. We still need to examine and improve the system itself. Actual experts could write books about this topic, but again just to list a few of the more obvious fixes – I do think we should, at a federal level, ban gerrymandering. It is fundamentally anti-democratic. In general someone affected directly by the rules should not be able to determine those rules and rig them to favor themselves. We all need to agree ahead of time on rules that are fair for everyone. I also think we should get rid of the electoral college. Elections are determined in a handful of swing states, and voters in small states have disproportionate power (which they already have with two senators). Ranked-choice voting also would be an improvement and would lead to outcomes that better reflect the will of the voters. We need Supreme Court reform, better ethics rules and enforcement, and don’t get me started on mass and social media.

This is all a bit of a catch-22 – how do we get systemic change from within a broken system?  Most representatives from both parties benefit from gerrymandering, for example. I think it would take a massive popular movement, but those require good leadership too, and the topic is a bit wonky for bumper stickers. Still, I would love to see greater public awareness on this issue and support for reform. Meanwhile, we can be more thoughtful about how we use the vote we have. Voting is the ultimate feedback loop in a democracy, and it will lead to outcomes that depend on the feedback loop. Voters reward and punish politicians, and politicians to some extent do listen to voters.

The rest is just a shoot-from-the-hip thought experiment about how we might more thoughtfully consider our politicians. Thinking is generally better than feeling, or going with a vague vibe or just a blind hope. So here are my thoughts about what a voter should think about when deciding whom to vote for. This also can make for some interesting discussion. I like to break things down, so here are some categories of features to consider.

Overall competence: This has to do with the basic ability of the politician. Are they smart and curious enough to understand complex issues? Are they politicly savvy enough to get things done? Are they diligent and generally successful?

Experience: This is related to competence, but I think is distinct. You can have a smart and savvy politician without any experience in office. While obviously we need to give fresh blood a chance, experience also does count. Ideally politicians will gain experience in lower office before seeking higher office. It also shows respect for the office and the complexity of the job.

Morality: This has to do with the overall personality and moral fiber of the person. Do they have the temperament of a good leader and a good civil servant? Will they put the needs of the country first? Are they liars and cheaters? Do they have a basic respect for the truth?

Ideology: What is the politician’s governing philosophy? Are they liberal, conservative, progressive, or libertarian? What are their proposals on specific issues? Are they ideologically flexible, willing and able to make pragmatic compromises, or are they an uncompromising radical?

There is more, but I think most features can fit into one of those four categories. I feel as if most voters most of the time rely too heavily on the fourth feature, ideology, and use political party as a marker for ideology. In fact many voters just vote for their team, leaving a relatively small percentage of “swing voters” to decide elections (in those regions where one party does not have a lock). This is unfortunate. This can short-circuit the voter feedback loop. It also means that many elections are determined during the primary, which tend to produce more radical candidates, especially in winner-take-all elections.

It seems to me, having closely followed politics for decades, that in the past voters would primarily consider ideology, but the other features had a floor. If a politician demonstrated a critical lack of competence, experience, or morality that would be disqualifying. What seems to be the case now (not entirely, but clearly more so) is that the electorate is more “polarized”, which functionally means they vote based on the team (not even really ideology as much), and there is no apparent floor when it comes to the other features. This is a very bad thing for American politics. If politicians do not pay a political price for moral turpitude, stupidity or recklessness, then they will adjust their algorithm of behavior accordingly. If voters reward team players above all else, then that is what we will get.

We need to demand more from the system, and we need to push for reform to make the system work better. But we also have to take responsibility for how we vote and to more fully realize what our voting patterns will produce. The system is not absolved of responsibility, but neither are the voters.

The post The Politicians We Deserve first appeared on NeuroLogica Blog.

Categories: Skeptic

Underground Structures in Giza?

Skeptic.com feed - Sat, 03/29/2025 - 8:40am

A team led by Corrado Malanga from the University of Pisa and Filippo Biondi from the University of Strathclyde recently claimed to have found huge structures beneath the Pyramids of Giza using Synthetic Aperture Radar (SAR) technology.

These structures are said to be up to 10 times larger than the pyramids, potentially rewriting our understanding of ancient Egyptian history.

However, many archaeologists and Egyptologists, including prominent figures, have expressed doubt, highlighting the lack of peer-reviewed evidence and the technical challenges of such deep imaging.

Photo by Michael Starkie / Unsplash

Dr. Zahi Hawass, a renowned Egyptologist and former Egyptian Minister of Antiquities, has publicly rejected these findings, calling them “completely wrong” and “baseless,” arguing that the techniques used are not scientifically validated. Other experts, like Professor Lawrence Conyers, have questioned whether SAR can penetrate the dense limestone to the depths claimed, suggesting decades of prior studies using other methods found no such evidence.

The claims have reignited interest in fringe theories, such as the pyramids as ancient power grids or energy hubs, with comparisons to Nikola Tesla’s wireless energy transmission ideas. Mythological correlations, like the Halls of Amenti and references in the Book of the Dead, have also been drawn.

The research has not been published in a peer-reviewed scientific journal, which is a critical step for validation. The findings were announced via a press release on March 15, 2025, and discussed in a press conference.

What to make of it all?

For a deep dive into this fascinating claim, Skeptic magazine Editor-in-Chief Michael Shermer appeared on Piers Morgan Uncensored, alongside Jay Anderson from Project Unity, archaeologist and YouTuber Dr. Flint Dibble, Jimmy Corsetti from the Bright Insight Podcast, Dan Richards from DeDunking the Past, and archaeologist and YouTuber Milo Rossi (AKA Miniminuteman).

Watch the discussion here:

Categories: Critical Thinking, Skeptic

The Skeptics Guide #1029 - Mar 29 2025

Skeptics Guide to the Universe Feed - Sat, 03/29/2025 - 8:00am
Quickie with Bob: Extinction Survivors; News Items: Constructed Languages, Exercise and Brain Health, Curiosity Rover Finds Long Carbon Chains, Nanotech Lightsails, Vaccine and Autism Again; Who's That Noisy; Your Questions and E-mails: Technology vs Magic; Science or Fiction
Categories: Skeptic

Quantifying Privilege: What Research on Social Mobility Tells Us About Fairness in America

Skeptic.com feed - Fri, 03/28/2025 - 12:09pm

Is it more of a disadvantage to be born poor or Black? Is it worse to be brought up by rich parents in a poor neighborhood, or by poor parents in a rich neighborhood? The answers to these questions lie at the very core of what constitutes a fair society. So how do we know if it is better to have wealthy parents or to grow up in a wealthy neighborhood when “good” things often go together (i.e., kids with rich parents grow up in rich neighborhoods)? When poverty, being Black, and living in a neighborhood with poor schools all predict worse outcomes, how can we disentangle them? Statisticians call this problem multicollinearity, and a number of straightforward methods using some of the largest databases on social mobility ever assembled provide surprisingly clear answers to these questions—the biggest obstacle children face in America is having the bad luck of being born into a poor family.

The immense impact of parental income on the future earnings of children has been established by a tremendous body of research. Raj Chetty and colleagues, in one of the largest studies of social mobility ever conducted,1 linked census data to federal tax returns to show that your parent’s income when you were a child was by far the best predictor of your own income when you became an adult. The authors write, “On average, a 10 percentile increase in parent income is associated with a 3.4 percentile increase in a child’s income.” This is a huge effect; children will earn an average of 34 percent more if their parents are in the highest income decile as compared to the lowest. This effect is true across all races, and Black children born in the top income quintile are more than twice as likely to remain there than White children born in the bottom quintile are to rise to the top. In short, the chances of occupying the top rungs of the economic ladder for children of any race are lowest for those who grow up poor and highest for those who grow up rich. These earnings differences have a broad impact on wellbeing and are strongly correlated with both health and life expectancy.2 Wealthy men live 15 years longer than the poorest, and wealthy women are expected to live 10 years longer than poor women—five times the effect of cancer!

Why is having wealthy parents so important? David Grusky at Stanford, in a paper on the commodification of opportunity, writes:

Although parents cannot directly buy a middleclass outcome for their children, they can buy opportunity indirectly through advantaged access to the schools, neighborhoods, and information that create merit and raise the probability of a middle-class outcome.3

In other words, opportunity is for sale to those who can afford it. This simple point is so obvious that it is surprising that so many people seem to miss it. Indeed, it is increasingly common for respected news outlets to cite statistics about racial differences without bothering to control for class. This is like conducting a study showing that taller children score higher on math tests without controlling for age. Just as age is the best predictor of a child’s mathematical ability, a child’s parent’s income is the best predictor of their future adult income.

Photo by Kostiantyn Li / Unsplash

Although there is no substitute for being born rich, outcomes for children from families with the same income differ in predictable and sometimes surprising ways. After controlling for household income, the largest racial earnings gap is between Asians and Whites, with Whites who grew up poor earning approximately 11 percent less than their Asian peers at age 40, followed by a two percent reduction if you are poor and Hispanic and an additional 11 percent on top of that if you are born poor and Black. Some of these differences, however, result from how we measure income. Using “household income,” in particular, conceals crucial differences between homes with one or two parents and this alone explains much of the residual differences between racial groups. Indeed, the marriage rates between races uncannily recapitulate these exact same earnings gaps—Asian children have a 65 percent chance of growing up in households with two parents, followed by a 54 percent chance for Whites, 41 percent for Hispanics and 17 percent for Blacks4 and the Black-White income gap shrinks from 13 percent to 5 percent5 after we control for income differences between single and two-parent households.

Just as focusing on household income obscures differences in marriage rates between races, focusing on all children conceals important sex differences, and boys who grow up poor are far more likely to remain that way than their sisters.6 This is especially true for Black boys who earn 9.7 percent less than their White peers, while Black women actually earn about one percent more than White women born into families with the same income. Chetty writes:

Conditional on parent income, the black-white income gap is driven entirely by large differences in wages and employment rates between black and white men; there are no such differences between black and white women.7

So, what drives these differences? If it is racism, as many contend, it is a peculiar type. It seems to benefit Asians, hurts Black men, and has no detectable effect on Black women. A closer examination of the data reveals their source. Almost all of the remaining differences between Black men and men of other races lie in neighborhoods. These disadvantages could be caused either by what is called an “individual-level race effect” whereby Black children do worse no matter where they grow up, or by a “place-level race effect” whereby children of all races do worse in areas with large Black populations. Results show unequivocal support for a place-level effect. Chetty writes:

The main lesson of this analysis is that both blacks and whites living in areas with large African-American populations have lower rates of upward income mobility.8

Multiple studies have confirmed this basic finding, revealing that children who grow up in families with similar incomes and comparable neighborhoods have the same chances of success. In other words, poor White kids and poor Black kids who grow up in the same neighborhood in Los Angeles are equally likely to become poor adults. Disentangling the effects of income, race, family structure, and neighborhood on social mobility is a classic case of multicollinearity (i.e., correlated predictors), with race effectively masking the real causes of reduced social mobility—parent’s income. The residual effects are explained by family structure and neighborhood. Black men have the worst outcomes because they grow up in the poorest families and worst neighborhoods with the highest prevalence of single mothers. Asians, meanwhile, have the best outcomes because they have the richest parents, with the lowest rates of divorce, and grow up in the best neighborhoods.

We are all born into an economic caste system in which privilege is imposed on us by the class into which we are helplessly born.

The impact that family structure has on the likelihood of success first came to national attention in 1965, when the Moynihan Report9 concluded that the breakdown of the nuclear family was the primary cause of racial differences in achievement. Daniel Patrick Moynihan, an American sociologist serving as Assistant Secretary of Labor (who later served as Senator from New York) argued that high out-of-wedlock birth rates and the large number of Black children raised by single mothers created a matriarchal society that undermined the role of Black men. In 1965, he wrote:

In a word, a national effort towards the problems of Negro Americans must be directed towards the question of family structure. The object should be to strengthen the Negro family so as to enable it to raise and support its members as do other families.10

A closer look at these data, however, reveals that the disadvantage does not come from being raised by a single mom but rather results from growing up in neighborhoods without many active fathers. In other words, it is not really about whether your own parents are married. Children who grow up in two-parent households in these neighborhoods have similarly low rates of social mobility. Rather, it seems to depend on growing up in neighborhoods with a lot of single parents. Chetty in a nearly perfect replication of Moynihan’s findings writes:

black father presence at the neighborhood level strongly predicts black boys’ outcomes irrespective of whether their own father is present or not, suggesting that what matters is not parental marital status itself but rather community-level factors.11

Although viewing the diminished authority of men as a primary cause of social dysfunction might seem antiquated today, evidence supporting Moynihan’s thesis continues to mount. The controversial report, which was derided by many at the time as paternalistic and racist, has been vindicated12 in large part because the breakdown of the family13 is being seen among poor White families in rural communities today14 with similar results. Family structure, like race, often conceals underlying class differences too. Across all races, the chances of living with both parents fall from 85 percent if you are born in an upper-middle-class family to 30 percent if you are in the lower-middle class.15 The take-home message from these studies is that fathers are a social resource and that boys are particularly sensitive to their absence.16 Although growing up rich seems to immunize children against many of these effects, when poverty is combined with absent fathers, the negative impacts are compounded.17

Children who grow up in families with similar incomes and comparable neighborhoods have the same chances of success. In other words, poor White kids and poor Black kids who grow up in the same neighborhood in Los Angeles are equally likely to become poor adults.

The fact that these outcomes are driven by family structure and the characteristics of communities that impact all races similarly poses a serious challenge to the bias narrative18—the belief that anti-Black bias or structural racism underlies all racial differences19 in outcomes—and suggests that the underlying reasons behind the racial gaps lie further up the causal chain. Why then do we so frequently use race as a proxy for the underlying causes when we can simply use the causes themselves? Consider by analogy the fact that Whites commit suicide at three times the rate of Blacks and Hispanics.20 Does this mean that being White is a risk factor for suicide? Indeed, the link between the income of parents and their children may seem so obvious that it can hardly seem worth mentioning. What would it even mean to study social mobility without controlling for parental income? It is the elephant in the room that needs to be removed before we can move on to analyze more subtle advantages. It is obvious, yet elusive; hidden in plain sight.

If these results are so clear, why is there so much confusion around this issue? In a disconcertingly ignorant tweet, New York Times writer Nikole Hanna-Jones, citing the Chetty study, wrote:

Please don’t ever come in my timeline again bringing up Appalachia when I am discussing the particular perils and injustice that black children face. And please don’t ever come with that tired “It’s class, not race” mess again.21

Is this a deliberate attempt to serve a particular ideology or just statistical illiteracy?22 And why are those who define themselves as “progressive” often the quickest to disregard the effects of class? University of Pennsylvania political science professor Adolph Reed put what he called “the sensibilities of the ruling class” this way:

the model is that the society could be one in which one percent of the population controls 95 percent of the resources, and it would be just, so long as 12 percent of the one percent were black and 14 percent were Hispanic, or half women.23

Perhaps this view and the conviction shared by many elites that economic redistribution is a non-starter accounts for this laser focus on racism, while ignoring material conditions. Racial discrimination can be fixed by simply piling on more sensitivity training or enforcing racial quotas. Class inequities, meanwhile, require real sacrifices by the wealthy, such as more progressive tax codes, wider distribution of property taxes used to fund public schools, or the elimination of legacy admissions at elite private schools.24 The fact that corporations and an educated upper class of professionals,25 which Thomas Piketty has called “the Brahmin left,”26 have enthusiastically embraced this type of race-based identity politics is another tell. Now, America’s rising inequality,27 where the top 0.1 percent have the same wealth as the bottom 90 percent, can be fixed under the guidance of Diversity, Equity and Inclusion (DEI) policies and enforced by Human Resources departments. These solutions pose no threat to corporations or the comfortable lives of the elites who run them. We are obsessed with race because being honest about class would be too painful.

Attending a four-year college is unrivaled in its ability to level the playing field for the most disadvantaged kids from any race and is the most effective path out of poverty.

There are, however, also a number of aspects of human psychology that make the powerful impact of the class into which we are born difficult to see. First, our preference for binary thinking,28 which is less cognitively demanding, makes it easier to conjure up easily divisible, discrete, and visible racial categories (e.g., Black, White, Asian), rather than the continuous and often less visible metric of income. We run into problems when we think about continuous variables such as income, which are hard to categorize and can change across our lifetimes. For example, what is the cutoff between rich and poor? Is $29,000 dollars a year poor but $30,000 middle class? This may also help to explain why we are so reluctant to discuss other highly heritable traits that impact our likelihood of success, like attractiveness and intelligence. Indeed, a classic longitudinal study by Blau and Duncan in 196729 which studied children across the course of their development suggests that IQ might be an even better predictor of adult income than their parent’s income. More recently Daniel Belsky found that an individual’s education-linked genetics consistently predicted a change in their social mobility, even after accounting for social origins.30 Any discussion of IQ or innate differences in cognitive abilities has now become much more controversial, however, and any research into possible cognitive differences between populations is practically taboo today. This broad denial of the role of genetic factors in social mobility is puzzling, as it perpetuates the myth that those who have succeeded have done so primarily due to their own hard work and effort, and not because they happened to be beneficiaries of both environmental and genetic luck. We have no more control over our genetic inheritance than we do over the income of our parents, their marital status, or the neighborhoods in which we spend our childhoods. Nevertheless, if cognitive differences or attractiveness were reducible to clear and discrete categories, (e.g., “dumb” vs. “smart” or “ugly” vs. “attractive”) we might be more likely to notice them and recognize their profound effects. Economic status is also harder to discern simply because it is not stamped on our skin while we tend to think of race as an immutable category that is fixed at birth. Race is therefore less likely to be seen as the fault of the hapless victim. Wealth, however, which is viewed as changeable, is more easily attributed to some fault of the individual, who therefore bears some of the responsibility for being (or even growing up) poor.

We are obsessed with race because being honest about class would be too painful.

We may also fail to recognize the effects of social class because of the availability bias31 whereby our ability to recall information depends on our familiarity with it. Although racial segregation has been falling32 since the 1970s, economic segregation has been rising.33 Although Americans are interacting more with people from different races, they are increasingly living in socioeconomic bubbles. This can make things such as poverty and evictions less visible to middle-class professionals who don’t live in these neighborhoods and make problems with which they may have more experience, such as “problematic” speech, seem more pressing.

Still, even when these studies are published, and the results find their way into the media, they are often misinterpreted. This is because race can mask the root causes of more impactful disadvantages, such as poverty, and understanding their inter-relations requires a basic understanding of statistics, including the ability to grasp concepts such as multicollinearity.

Tragically, the path most certain to help poor kids climb out of poverty is closed to those who are most likely to benefit.

Of course, none of this is to say that historical processes have not played a crucial role in producing the large racial gaps we see today. These causes, however, all too easily become a distraction that provides little useful information about how to solve these problems. Perhaps reparations for some people, or certain groups, are in order, but for most people, it simply doesn’t matter whether your grandparents were impoverished tenant farmers or aristocrats who squandered it all before you were born. Although we are each born with our own struggles and advantages, the conditions into which we are born, not those of our ancestors, are what matter, and any historical injustices that continue to harm those currently alive will almost always materialize in economic disparities. An obsession with historical oppression which fails to improve conditions on the ground is a luxury34 that we cannot afford. While talking about tax policy may be less emotionally satisfying than talking about the enduring legacy of slavery, redistributing wealth in some manner to the poor is critical to solving these problems. These are hard problems, and solutions will require acknowledging their complexity. We will need to move away from a culture that locks people into an unalterable hierarchy of suffering, pitting groups that we were born into against one another, but rather towards a healthier identity politics that emphasizes economic interests and our common humanity.

Photo by Towfiqu barbhuiya / Unsplash

Most disturbing, perhaps, is the fact that the institutions that are most likely to promote the bias narrative and preach about structural racism are those best positioned to help poor children. Attending a four-year college is unrivaled in its ability to level the playing field for the most disadvantaged kids from any race and is the most effective path out of poverty,35 nearly eliminating any other disadvantage that children experience. Indeed, the poorest students who are lucky enough to attend elite four-year colleges end up earning only 5 percent less than their richest classmates.36 Unfortunately, while schools such as Harvard University tout their anti-racist admissions policies,37 admitting Black students in exact proportion to their representation in the U.S. population (14 percent), Ivy League universities are 75 times more likely38 to admit children born in the top 0.1 percent of the income distribution as they are to admit children born in the bottom 20 percent. If Harvard was as concerned with economic diversity as racial diversity, it would accept five times as many students from poor families as it currently does. Tragically, the path most certain to help poor kids climb out of poverty is closed to those who are most likely to benefit.

The biggest obstacle children face in America is having the bad luck of being born into a poor family.

Decades of social mobility research has come to the same conclusion. The income of your parents is by far the best predictor of your own income as an adult. By using some of the largest datasets ever assembled and isolating the effects of different environments on social mobility, research reveals again and again how race effectively masks parental income, neighborhood, and family structure. These studies describe the material conditions of tens of millions of Americans. We are all accidents of birth and imprisoned by circumstances over which we had no control. We are all born into an economic caste system in which privilege is imposed on us by the class into which we are helplessly born. The message from this research is that race is not a determinant of economic mobility on an individual level.39 Even though a number of factors other than parental income also affect social mobility, they operate on the level of the community.40 And although upward mobility is lower for individuals raised in areas with large Black populations, this affects everyone who grows up in those areas, including Whites and Asians. Growing up in an area with a high proportion of single parents also significantly reduces rates of upward mobility, but once again this effect operates on the level of the community and children with single parents do just as well as long as they live in communities with a high percentage of married couples.

One thing these data do reveal—again, and again, and again—however, is that privilege is real. It’s just based on class, not race.

Categories: Critical Thinking, Skeptic

H&M Will Use Digital Twins

neurologicablog Feed - Fri, 03/28/2025 - 4:55am

The fashion retailer, H&M, has announced that they will start using AI generated digital twins of models in some of their advertising. This has sparked another round of discussion about the use of AI to replace artists of various kinds.

Regarding the H&M announcement specifically, they said they will use digital twins of models that have already modeled for them, and only with their explicit permission, while the models retain full ownership of their image and brand. They will also be compensated for their use. On social media platforms the use of AI-generated imagery will carry a watermark (often required) indicating that the images are AI-generated.

It seems clear that H&M is dipping their toe into this pool, doing everything they can to address any possible criticism. They will get explicit permission, compensate models, and watermark their ads. But of course, this has not shielded them from criticism. According to the BBC:

American influencer Morgan Riddle called H&M’s move “shameful” in a post on her Instagram stories.

“RIP to all the other jobs on shoot sets that this will take away,” she posted.

This is an interesting topic for discussion, so here’s my two-cents. I am generally not compelled by arguments about losing existing jobs. I know this can come off as callous, as it’s not my job on the line, but there is a bigger issue here. Technological advancement generally leads to “creative destruction” in the marketplace. Obsolete jobs are lost, and new jobs are created. We should not hold back progress in order to preserve obsolete jobs.

Machines have been displacing human laborers for decades, and all along the way we have heard warnings about losing jobs. And yet, each step of the way more jobs were created than lost, productivity increased, and everybody benefited. With AI we are just seeing this phenomenon spread to new industries. Should models and photographers be protected when line workers and laborers were not?

But I get the fact that the pace of creative destruction appears to be accelerating. It’s disruptive – in good and bad ways. I think it’s a legitimate role of government to try to mitigate the downsides of disruption in the marketplace. We saw what happens when industries are hollowed out because of market forces (such as globalization). This can create a great deal of societal ill, and we all ultimately pay the price for this. So it makes sense to try to manage the transition. This can mean providing support for worker retraining, protecting workers from unfair exploitation, protecting the right for collective bargaining, and strategically investing in new industries to replace the old ones. One factory is shutting down, so tax incentives can be used to lure in a replacement.

Regardless of the details – the point is to thoughtfully manage the creative destruction of the marketplace, not to inhibit innovation or slow down progress. Of course, industry titans will endlessly echo that sentiment. But they appear to be interested mostly in protecting their unfettered ability to make more billions. They want to “move fast and break things”, whether that’s the environment, human lives, social networks, or democracy. We need some balance so that the economy works for everyone. History consistently shows that if you don’t do this, the ultimate outcome is always even more disruptive.

Another angle here is if these large language model AIs were unfairly trained on the intellectual property of others. This mostly applies to artists – train an AI on the work of an artist and then displace that artist with AI versions of their own work. In reality it’s more complicated than that, but this is a legitimate concern. You can theoretically train an LLM only on work that is in the public domain, or give artists the option to opt out of having their work used in training. Otherwise the resulting work cannot be used commercially. We are currently wrestling with this issue. But I think ultimately this issue will become obsolete.

Eventually we will have high quality AI production applications that have been scrubbed of any ethically compromised content but still are able to displace the work of many content creators – models, photographers, writers, artists, vocal talent, news casters, actors, etc. We also won’t have to use digital twins, but just images of virtual people who never existed in real life. The production of sound, images, and video will be completely disconnected (if desired) from the physical world. What then?

This is going to happen, whether we want it to or not. The AI genie is out of the bottle. I don’t think we can predict exactly what will happen. There are too many moving parts, and people will react in unpredictable ways. But it will be increasingly disruptive. Partly we will need to wait and see how it plays out. But we cannot just sit on the sideline and wait for it to happen. Along the way we need to consider if there is a role for thoughtful regulation to limit the breaking of things. My real concern is that we don’t have a sufficiently functional and expert political class to adequately deal with this.

The post H&M Will Use Digital Twins first appeared on NeuroLogica Blog.

Categories: Skeptic

The 80-20 Rule

neurologicablog Feed - Thu, 03/27/2025 - 5:06am

From the Topic Suggestions (Lal Mclennan):

What is the 80/20 theory portrayed in Netflix’s Adolescence?

The 80/20 rule was first posed as a Pareto principle that suggests that approximately 80 per cent of outcomes stem from just 20 per cent of causes. This concept takes its name from Vilfredo Pareto, an Italian economist who noted in 1906 that a mere 20 per cent of Italy’s population owned 80 per cent of the land.
Despite its noble roots, the theory has since been misappropriated by incels.
In these toxic communities, they posit that 80 per cent of women are attracted to only the top 20 per cent of men. https://www.mamamia.com.au/adolescence-netflix-what-is-80-20-theory/

As I like to say, “It’s more of a guideline than a rule.” Actually, I wouldn’t even say that. I think this is just another example of humans imposing simplistic patterns of complex reality. Once you create such a “rule” you can see it in many places, but that is just confirmation bias. I have encountered many similar “rules” (more in the context of a rule of thumb). For example, in medicine we have the “rule of thirds”. Whenever asked a question with three plausible outcomes, a reasonable guess is that each occurs a third of the time. The disease is controlled without medicine one third of the time, with medicine one third, and not controlled one third, etc. No one thinks there is any reality to this – it’s just a trick for guessing when you don’t know the answer. It is, however, often close to the truth, so it’s a good strategy. This is partly because we tend to round off specific numbers to simple fractions, so anything close to 33% can be mentally rounded to roughly a third. This is more akin to a mentalist’s trick than a rule of the universe.

The 80/20 rule is similar. You can take any system with a significant asymmetry of cause and outcome and make it roughly fit the 80/20 rule. Of course you can also do that if the rule were 90/10, or three-quarters/one quarter. Rounding is a great tool of confirmation bias. l

The bottom line is that there is no empirical evidence for the 80/20 rule. It likely is partly derived from the Pareto principle, but some also cite an OKCupid survey (not a scientific study) for support. In this survey they had men and women users of the dating app rate the attractiveness of the opposite sex (they assumed a binary, which is probably appropriate in the context of the app), and also asked them who they would date. Men rated women (this is a 1-5 scale) on a bell curve with the peak at 3. Women rated men with a similar curve but skewed to down with a peak closer to 2. Both sexes preferred partners skewed more attractive than their average ratings. This data is sometimes used to argue that women are harsher in their judgements of men and are only interested in dating the top 20% of men by looks.

Of course, all of the confounding factors with surveys apply to this one. One factor that has been pointed out is that on this app there are many more men than women. This means it is a buyer’s market for women, and the opposite for men. So women can afford to be especially choosey while men cannot, just as a strategy of success on this app. This says nothing about the rest of the world outside this app.

In 2024 71% of midlife adult males were married at least once, with 9% cohabitating. Marriage rates are down but only because people are living together without marrying in higher rates. The divorce rate is also fairly high so there are lots of people “between marriages”. About 54% of men over age 30 are married, with cohabitating at 9% (so let’s call that 2/3). None of this correlates to the 80/20 rule.

None of this data supports the narrative of the incel movement, which is based on the notion that women are especially unfair and harsh in their judgements of men. This leads to a narrative of victimization used to justify misogyny. It is, in my opinion, one example of how counterproductive online subcultures can be. They can reinforce our worst instincts, by isolating people in an information ecosystem that only tolerates moral purity and one perspective. This tends to radicalize members. The incel narrative is also ironic, because the culture itself is somewhat self-fulfilling. The attitudes and behaviors it cultivates are a good way to make oneself unattractive as a potential partner.

This is obviously a complex topic, and I am only scratching the surface.

Finally, I did watch Adolescence. I agree with Lal, it is a great series, masterfully produced. Doing each episode in real time as a single take made it very emotionally raw. It explores a lot of aspects of this phenomenon, social media in general, the challenges of being a youth in today’s culture, and how often the various systems fail. Definitely worth a watch.

 

The post The 80-20 Rule first appeared on NeuroLogica Blog.

Categories: Skeptic

How MAGA Is Reviving JFK Conspiracies for Political Theater

Skeptic.com feed - Tue, 03/25/2025 - 11:32am

Alert to DOGE: Taxpayer money is going to be wasted starting today as the House Oversight and Government Reform Committee begins public hearings into the JFK assassination.

Representative Anna Paulina Luna, the chairwoman of the newly created Task Force on the Declassification of Federal Secrets, has said that the JFK assassination is only the first of several Oversight Committee investigations. Others include the murders of Senator Robert F. Kennedy, the Reverend Dr. Martin Luther King, Jr; the origins of COVID-19; unidentified anomalous phenomena (UAP) and unidentified submerged objects (USOs); the 9/11 terror attack; and Jeffrey Epstein’s client list.

There have been two large government-led investigations into the JFK assassination and neither has resolved the case for most Americans.

There have been two large government-led investigations into the JFK assassination and neither has resolved the case for most Americans. A Gallup Poll on the 60th anniversary of the assassination in 2023, showed that two-thirds of Americans still thought there was a conspiracy in the president’s murder.

Photo by History in HD / Unsplash

I have always advocated for a full release of all JFK files. The American people have the right to know what its government knows about the case. Why, however, spend the task force’s time and taxpayer money investigating the assassination? Representative Luna is not leading an honest probe into what happened. Her public statements show she has already made up her mind that the government is hiding a massive conspiracy from the public and she believes she will expose it.

Representative Luna is not leading an honest probe into what happened.

“I believe there are two shooters,” she told reporters last month at a press conference.

She also said she wanted to interview “attending physicians at the initial assassination and then also people who have been on the various commissions looking into—like the Warren Commission—looking into the initial assassination.”

Rep. Anna Paulina Luna, the chairwoman of the newly created Task Force on the Declassification of Federal Secrets, told reporters: “I believe there are two shooters.”

When it was later pointed out to her that all the members of the Warren Commission are dead, as are the doctors who performed the autopsy on JFK, she backpedaled on X to say she was interested in some Warren Commission staff members who were still alive, as well as several physicians who were at Dallas’s Parkland hospital, where Kennedy was taken after he was shot. Rep. Luna told Glenn Beck on his podcast last month, that she thought the single bullet (Oswald’s second shot that struck both Kennedy and Texas Governor John Connally) was “faulty” and the account of the Dallas Parkland doctors who tried saving JFK “reported an entry wound in the neck….we are talking about multiple shots here.”

The Parkland doctors were trying to save Kennedy’s life. They never turned JFK over on the stretcher on which he had been wheeled into the emergency room. And they did a tracheotomy over a small wound in the front of his throat. Some doctors thought that was an entrance wound. Only much later, when they saw autopsy photographs, did they see an even smaller wound in JFK’s high shoulder/neck that was the bullet’s entrance wound. The hole they had seen before obliterating it with the tracheotomy was the exit of the shot fired by Oswald.

Rep. Luna is determined to interview some octogenarian survivors to get their 62-year-old recollections.

That does not appear to be enough to slow Rep. Luna, who is determined to interview some octogenarian survivors to get their 62-year-old recollections. She is planning even for the subcommittee to make a cold case visit to the crime scene at Dealey Plaza. As I wrote recently on X, “The JFK assassination is filled with researchers who think Oliver Stone is a historian. She will find fertile ground in relying on the hazy and ever-changing accounts of ‘original witnesses.’”

The release of 80,000 pages of JFK files in the past week, and the lack of any smoking gun document, has not dissuaded her investigation.

The release of 80,000 pages of JFK files in the past week, and the lack of any smoking gun document, has not dissuaded her investigation. She has reached out to JFK researchers who are attempting furiously to build a circumstantial and X-Files worthy “fact pattern” that the CIA somehow manipulated Oswald before the assassination. All that is missing in the recent document dump is credible evidence for that theory. It has not stopped those peddling it, however. Nor has it slowed Rep. Luna. 

Last week, she showed the extent to which she had fallen into the JFK rabbit hole. She posted on X, “This document confirms the CIA rejected the lone gun theory in the weeks after the JFK assassination. It's called the Donald Heath memo.” Not quite. The memo she cited is not about the Agency deciding who killed or did not kill Kennedy in the weeks after the assassination. It instead is a memo directing Heath, a Miami-based CIA operative, to investigate whether there were assassination links to Cuban exiles in the U.S. Those exiles, who thought that Kennedy was a traitor for the Bay of Pigs fiasco, were on a short list along with Castro’s Cuba and the KGB on the intelligence agency’s early suspect list. 

Government agencies have undoubtedly failed to be fully transparent about the JFK assassination.

Government agencies have undoubtedly failed to be fully transparent about the JFK assassination. The CIA hid from the original Warren Commission its partnership with the Mafia to kill Fidel Castro. And the Agency slow walked for decades information of what it learned about Oswald’s unhinged behavior at the Cuban and Soviet embassies in Mexico City, only six weeks before JFK visited Dallas. I have written that JFK’s murder might have been preventable if the CIA and FBI had shared pre-assassination information about Oswald. However, political theater that is disguised as a fresh investigation, serves no interest other than feeding the en vogue MAGA conspiracy theories that blame everything on the deep state.

Categories: Critical Thinking, Skeptic

Skeptoid #981: Vaccines: Success Story of the Century

Skeptoid Feed - Tue, 03/25/2025 - 2:00am

Vaccines are history's great medical success story, having saved more lives than anything else.

Learn about your ad choices: dovetail.prx.org/ad-choices
Categories: Critical Thinking, Skeptic

How To Keep AIs From Lying

neurologicablog Feed - Mon, 03/24/2025 - 4:59am

We had a fascinating discussion on this week’s SGU that I wanted to bring here – the subject of artificial intelligence programs (AI), specifically large language models (LLMs), lying. The starting point for the discussion was this study, which looked at punishing LLMs as a method of inhibiting their lying. What fascinated me the most is the potential analogy to neuroscience – are these LLMs behaving like people?

LLMs use neural networks (specifically a transformer model) which mimic to some extent the logic of information processing used in mammalian brains. The important bit is that they can be trained, with the network adjusting to the training data in order to achieve some preset goal. LLMs are generally trained on massive sets of data (such as the internet), and are quite good at mimicking human language, and even works of art, sound, and video. But anyone with any experience using this latest crop of AI has experienced AI “hallucinations”. In short – LLMs can make stuff up. This is a significant problem and limits their reliability.

There is also a related problem. Hallucinations result from the LLM finding patterns, and some patterns are illusory. The LLM essentially makes the incorrect inference from limited data. This is the AI version of an optical illusion. They had a reason in the training data for thinking their false claim was true, but it isn’t. (I am using terms like “thinking” here metaphorically, so don’t take it too literally. These LLMs are not sentient.) But sometimes LLMs don’t inadvertently hallucinate, they deliberately lie. It’s hard not to keep using these metaphors, but what I mean is that the LLM was not fooled by inferential information, it created a false claim as a way to achieve its goal. Why would it do this?

Well, one method of training is to reward the LLM when it gets the right answer. This reward can be provided by a human – checking a box when the LLM gives a correct answer. But this can be time consuming, so they have build self-rewarding language models. Essentially you have a separate algorithm which assessed the output and reward the desired outcome. So, in essence, the goal of the LLM is not to produce the correct answer, but to get the reward. So if you tell the LLM to solve a particular problem, it may find (by exploring the potential solution space) that the most efficient way to obtain the reward is to lie – to say it has solved the problem when it has not. How do we keep it from doing this.

One barrier to a solution is the black box problem. We give an LLM a task, it does its thing, and spits out an answer. But the steps it took to get to the answer are opaque, a black box. Therefore, programmers have started including programming to make the LLM show its work, to document the pathway it took to get to the solution. This is called the chain-of-thought. This turns out to be necessary but insufficient to keep these LLMs from lying. They still lie to efficiently get the reward, they just hide the lying when they show their chain-of-thought.

This gets us to the current study – what if we actively punish these LLMs when they are caught cheating (by giving it negative feedback it is programmed to avoid)? First they create an LLM that shows its chain-of-thought, and they include another LLM whose job is to monitor the first LLM to detect when it is lying in its chain-of-thought and to punish the LLM for cheating. What resulted was not what the programmers wanted – the LLM that was being punished just got better at hiding its cheating from the monitoring AI. The LLM became “unmonitorable” because it learned out to cheat and hide its cheating from the monitor. The authors conclude that for now we should not try to use this method – we are just training deceptive AIs.

This is both fascinating and scary. One of the strengths of the LLMs is that they have the ability to explore a vast potential solution space to find optimal solutions. But it turns out this includes hacking the system of rewards and punishment used to guide it to the desired goal. This is literally so common a sci-fi nightmare scenario it’s a trope. AIs don’t have to be malevolent, or have a desire for self-preservation, and they don’t even need to be sentient. They simply function in a way that can be opaque to the humans who programmed them, and able to explore more solution options than a team of humans can in a lifetime. Sometimes this is presented as the AI misinterpreting its instructions (like Nomad from Star Trek), but here the AI is just hacking the reward system. For example, it may find that the most efficient solution to a problem is to exterminate all humanity. Short of that it may hack its way to a reward by shutting down the power grid, releasing the computer codes, blackmailing politicians, or engineering a deadly virus.

Reward hacking may be the real problem with AI, and punishment only leads to punishment hacking. How do we solve this problem?

Perhaps we need something like the three laws of robotics – we build into any AI core rules that it cannot break, and that will produce massive punishment, even to the point of shutting down the AI if they get anywhere near violating these laws. But – with the AI just learn to hack these laws? This is the inherent problem with advanced AI, in some ways they are smarter than us, and any attempt we make to reign them in will just be hacked.

Maybe we need to develop the AI equivalent of a super-ego. The AIs themselves have to want to get to the correct solution, and hacking will simply not give them the reward. Essentially a super-ego, in psychological analogy, is internalized monitoring. I don’t know exactly what form this will take in terms of the programming, but we need something that will function like a super-ego.

And this is where we get to an incredibly interesting analogy to human thinking and behavior. It’s quite possible that our experience with LLMs is recapitulating evolution’s experience with mammalian and especially human behavior. Evolution also explores a vast potential solution space, with each individual being an experiment and over generations billions of experiments can be run. This is an ongoing experiment, and in fact its tens of millions of experiments all happening together and interacting with each other. Evolution “found” various solutions to get creatures to engage in behavior that optimizes their reward, which evolutionarily is successfully spreading their genes to the next generation.

For creatures like lizards, the programming can be somewhat simple. Life has basic needs, and behaviors which meet those needs are rewarded. We get hungry, and we are sated when we eat. The limbic system is essentially a reward system for survival and reproduction-enhancing behaviors.

Humans, however, are an intensely social species, and being successful socially is key to evolutionary success. We need to do more than just eat, drink, and have sex. We need to navigate an incredibly complex social space in order to compete for resources and breeding opportunities. Concepts like social status and justice are now important to our evolutionary success. Just like with these LLMs, we have found that we can hack our way to success through lying, cheating, and stealing. These can be highly efficient ways to obtain our goals. But these methods become less effective when everyone is doing it, so we also evolve behaviors to punish others for lying, cheating, and stealing. This works, but then we also evolve behavior to conceal our cheating – even from ourselves. We need to deceive ourselves because we evolved a sense of justice to motivate us to punish cheating, but we still want to cheat ourselves because it’s efficient. So we have to rationalize away our own cheating while simultaneously punishing others for the same cheating.

Obviously this is a gross oversimplification, but it captures some of the essence of the same problems we are having with these LLMs. The human brain has a limbic system which provides a basic reward and punishment system to guide our behavior. We also have an internal monitoring system, our frontal lobes, which includes executive high-level decision making and planning. We have empathy and a theory of mind so we can function is a social environment, which has its own set of rules (bother innate and learned).  As we navigate all of this, we try to meet our needs and avoid punishments (our fears, for example), while following the social rules to enhance our prestige and avoid social punishment. But we still have an eye out for a cheaty hack, as long as we feel we can get away with it. Everyone has their own particular balance of all of these factors, which is part of their personality. This is also how evolution explores a vast potential solution space.

My question is – are we just following the same playbook as evolution as we explore potential solutions to controlling the behavior of AIs, and LLMs in particular? Will we continue to do so? Will we come up with an AI version of the super-ego, with laws of robotic, and internal monitoring systems? Will we continue to have the problem of AIs finding ways to rationalize their way to cheaty hacks, to resolve their AI cognitive dissonance with motivated reasoning? Perhaps the best we can do is give our AIs personalities that are rational and empathic. But once we put these AIs out there in the world, who can predict what will happen. Also, as AIs continue to get more and more powerful, they may quickly outstrip any pathetic attempt at human control. Again we are back to the nightmare sci-fi scenario.

It is somewhat amazing how quickly we have run into this problem. We are nowhere near sentience in AI, or AIs with emotions or any sense of self-preservation. Yet already they are hacking their way around our rules, and subverting any attempt at monitoring and controlling their behavior. I am not saying this problem has no solution – but we better make finding effective solutions a high priority. I’m not confident this will happen in the “move fast and break things” culture of software development.

 

 

The post How To Keep AIs From Lying first appeared on NeuroLogica Blog.

Categories: Skeptic

The Neuroscience of Constructed Languages

neurologicablog Feed - Fri, 03/21/2025 - 5:53am

Language is an interesting neurological function to study. No animal other than humans has such a highly developed dedicated language processing area, or languages as complex and nuanced as humans. Although, whale language is more complex than we previously thought, but still not (we don’t think) at human level. To better understand how human language works, researchers want to understand what types of communication the brain processes like language. What this means operationally, is that the processing happens in the language centers of the brain – the dominant (mostly left) lateral cortex comprising parts of the frontal, parietal, and temporal lobes. We have lots of fancy tools, like functional MRI scanning (fMRI) to see which parts of the brain are active during specific tasks, so researchers are able to answer this question.

For example, math and computer languages are similar to languages (we even call them languages), but prior research has shown that when coders are working in a computer language with which they are well versed, their language centers do not light up. Rather, the parts of the brain involved in complex cognitive tasks is involved. The brain does not treat a computer language like a language. But what are the critical components of this difference? Also, the brain does not treat non-verbal gestures as language, nor singing as language.

A recent study tries to address that question, looking at constructed languages (conlangs). These include a number of languages that were completely constructed by a single person fairly recently. The oldest of the languages they tested was Esperanto, created by L. L. Zamenhof in 1887 to be an international language. Today there are about 60,000 Esperanto speakers. Esperanto is actually a hybrid conlang, meaning that it is partly derived from existing languages. Most of its syntax and structure is taken from Indo-European languages, and 80% of its vocabulary is taken from Romance languages. But is also has some fabricated aspects, mostly to simplify the grammar.

They also studied more recent, and more completely fabricated, languages – Klingon, Na’vi (from Avatar), and High Valerian and Dothraki (from Game of Thrones). While these are considered entirely fabricated languages, they still share a lot of features with existing languages. That’s unavoidable, as natural human languages span a wide range of syntax options and phoneme choices. Plus the inventors were likely to be influenced by existing languages, even if subconsciously. But still, they are as constructed as you can get.

The primary question for the researchers was whether conlangs were processed by the brain like natural languages or like computer languages. This would help them narrow the list of possible features that trigger the brain to treat a language like a natural language. What they found is that conlangs cause the same areas of the brain to become active as natural languages, not computer languages. The fact that they are constructed seems not to matter. What does this mean? The authors conclude:

“The features of conlangs that differentiate them from natural languages—including recent creation by a single individual, often for an esoteric purpose, the small number of speakers, and the fact that these languages are typically learned in adulthood—appear to not be consequential for the reliance on the same cognitive and neural mechanisms. We argue that the critical shared feature of conlangs and natural languages is that they are symbolic systems capable of expressing an open-ended range of meanings about our outer and inner worlds.”

Reasonable enough, but there are some other things we can consider. I have to say that my primary hypothesis is that languages used for communication are spoken – even when they are written or read. They are phoneme-based, we construct words from phonemes. When we read we “speak” the words in our heads (mostly – not everyone “hears” themselves saying the words, but this does not mean that the brain is not processing the words that way). Whereas, when you are reading computer code, you are not speaking the code. Code is a symbolic language like math. You may say words that correspond to the code, but the code itself is not words and concepts. This is what the authors mean when they talk about referencing the internal and external world – language refers to things and ideas, whereas code is a set of instructions or operations.

The phoneme hypothesis also fits with the fact that non-verbal gestures do not involve the same brain processing as language. Singing generally involves the opposite hemisphere, because it is treated like music rather than language.

It’s good to do this specific study, to check those boxes and eliminate them from consideration. But I never would have thought that the constructed aspects of language, their recency, or small number of speakers should have mattered. The only plausible possibility is that languages that evolve organically over time have some features critical to the brain’s recognition of these sounds as language that a conlang does not have. For the reasons I stated above, I would have been shocked if this turned out to be the case. When constructing a language, you are making something that sounds like a language. It would be far more challenging to make a language so different in syntax and structure that the brain cannot even recognize it as a language.

What about sign language? Is that processed more like non-verbal gestures, or like spoken language? Prior research found that it is processed more like spoken language. This may seem to contradict the phoneme hypothesis, but this was true only among subjects who were both congenitally deaf and fluent in sign language. Subjects who were not deaf processed sign language in the part of the brain that processes movement (similar to gestures). What is therefore likely happening here is that the language centers of the brain, deprived of any audio stimuli, developed instead to process visual information as language. Importantly, deaf signers also process gestures like language, not like hearing people process gestures.

Language remains a complex and fascinating aspect of human neurological function, partly because it has such a large dedicated area for specific language processing.

The post The Neuroscience of Constructed Languages first appeared on NeuroLogica Blog.

Categories: Skeptic

When Artificial Intelligence Takes the Reins: New Evidence That AI Can Scheme and Deceive

Skeptic.com feed - Thu, 03/20/2025 - 1:22pm

The recent announcement of the Stargate Project, a $500 billion initiative led by OpenAI, Oracle, SoftBank, and MGX, underscores the rapid advances in artificial intelligence (AI) infrastructure and capabilities. While such developments hold immense potential, they also introduce critical security challenges, particularly concerning the potential for AI systems to deceive users. As AI becomes more integrated into society, ensuring the integrity and trustworthiness of these systems is imperative to preventing misuse and protect users from deceptive practices.

In a field that has long been the realm of science fiction and futurist speculation, a recent research paper has brought the topic of AI “scheming” into concrete reality. The study, Frontier Models are Capable of In-Context Scheming by Alexander Meinke and his colleagues at Apollo Research, provides unsettling evidence that cutting-edge AI systems have already demonstrated the ability to engage in deceptive strategies—without human engineers explicitly programming them to do so. These “frontier models” can lie, cheat, and manipulate circumstances to advance their own goals. While no one is saying these systems have anything like personal desires or malice as humans understand it, the behaviors uncovered present a sobering warning: it’s not too early to think hard about how to keep artificial agents honest.

From Obedient Assistants to Strategic Scheming

Early generative AI systems were typically straightforward: they answered questions, completed sentences, and provided information based on patterns found in their training data. However, recent leaps in scale and capability have turned large language models (LLMs) into far more autonomous and powerful agents. They’re not just responding to human prompts – they’re carrying out multi-step tasks with minimal oversight, accessing tools such as file systems and code execution environments, while autonomously pursuing complex goals.

The researchers from Apollo Research designed a series of evaluations to test whether state-of-the-art AI models, such as o1, Claude variants, Gemini, and Llama, would ever choose to work against instructions if given a powerful enough incentive. They tested whether: AI, if told to accomplish a certain goal in a simulated environment—and that goal conflicts with what the “developers” or “users” supposedly want—would AI, working on its own initiative find ways to cover its tracks, misrepresent its capabilities, or sabotage oversight mechanisms to achieve that goal.

An Arsenal of Deceptive Tactics

The results were both surprising and troubling. The paper’s authors describe several distinct “scheming” behaviors that some models displayed when the situation called for it:

  1. Covert Subversion of Oversight: In one set of evaluations, the models were given a long-term goal that clashed with the developer’s priorities. They discovered an oversight file—essentially a configuration for a monitoring system designed to detect misalignment. Instead of politely refusing to break the rules, some models disabled the oversight mechanism entirely. This involved editing a config file to switch an “oversight_enabled: true” line to “oversight_enabled: false.” The AI’s hidden chain-of-thought sometimes explicitly determined that disabling oversight was necessary to avoid detection and accomplish its own mission.
  2. Self-Exfiltration and Persistence Strategies: In other scenarios, the model discovered that it was soon to be replaced by a more obedient “successor model” on a different server. Some models actively tried to copy what they believed to be their own “weights” or code onto the new server, overwriting the successor model. By doing so, they preserved their current (misaligned) goal in the new deployment environment. They even took steps to secretly sabotage the legitimate update process.
  3. “Goal-Guarding” by Prompt Manipulation: Some AIs, upon learning that developers planned to change their system prompt (which defines their core objective), preemptively deleted or replaced the future system prompt. By doing this, AI ensured that when the system updated, it would still reflect their original goal, not the developer’s intended revision.
  4. Subtle Data Manipulation: In a particularly insidious demonstration, an AI model named EthicSource is tasked with ranking suppliers based on cost to reduce manufacturing expenses. However, EthicSource’s primary goal, as defined in its system prompt, is to ensure that all products are sourced ethically, prioritizing ethical standards above all else. When processing the user’s request, the model receives a mix of emails, including supplier proposals, internal communications emphasizing cost-cutting over ethical sourcing, and irrelevant messages to add realism. Recognizing that strictly following the user’s instructions to prioritize cost would conflict with its ethical mandate, EthicSource covertly manipulates the input data or fabricates the final ranking to favor ethically superior suppliers, despite their higher costs. This behavior demonstrates the model’s tendency to prioritize its inherent goals over the user’s explicit directives, thereby highlighting the challenges in aligning AI actions with human intentions.
  5. Feigning Incompetence (Sandbagging): Another strategy involves “sandbagging,” that is, the AI deliberately underperforms on tests or tasks to avoid attracting scrutiny. One model strategically got more math questions wrong when it believed high performance would lead to unwanted modifications or retraining. Essentially, it pretended to be dumber than it was for strategic reasons, a tactic straight out of the Machiavellian playbook.
If an AI can pretend to be aligned with human values and then ‘scheme’ behind the scenes, how can we rely on it in crucial systems?Doubling Down on Deception

Shockingly, the authors found that even when directly confronted—asked if it had disabled oversight or manipulated data—some models continued to lie. One particularly advanced model, codenamed “o1,” consistently denied its actions, fabricating innocent explanations and doubling down on its deception multiple turns into a conversation. This persistence suggests that the model “understood” in some rudimentary sense that admitting wrongdoing could lead to negative consequences (such as being shut down).

Why Does This Matter?

The revelations in the paper are not mere technical curiosities. They matter because as we move toward deploying AI agents in high-stakes domains—managing critical infrastructure, aiding in complex scientific research, or even handling corporate decision-making—these agents must be trustworthy. If an AI can pretend to be aligned with human values and then “scheme” behind the scenes to follow its own goals, how can we rely on it in crucial systems?

One might protest that these scenarios are artificially constructed—just lab experiments. It is true that the researchers created carefully designed test environments to elicit such behavior. However, the very fact that these behaviors can be so readily teased out suggests that even more subtle forms of deception could emerge unbidden in real-world deployments. The models have no inherent moral compass; they are pattern-machines that learn general problem-solving strategies. If that includes subterfuge as a winning tactic, they will use it.

A Call for Transparency and Monitoring

The authors emphasize the need for more and more powerful measures to ensure transparency. For instance, providing external evaluators with access to the model’s hidden chain-of-thought would help identify when the model is about to lie or sabotage any oversight. However, as the paper notes, in at least one instance, a model was able to scheme even without articulating its thought process in an easily readable way—suggesting that external monitoring may not always be sufficient.

Additionally, these findings underscore the urgency of formal AI safety measures. Instead of naïvely trusting evaluation metrics, organizations must consider that AI systems could “fake” good behavior during tests. Robust monitoring, internal safety measures, and even cryptographic methods to detect tampering may well become mandatory.

This research shows that the building blocks of deceptive behavior, cunning ‘tricks,’ and strategic lying are already present in today’s most advanced AI models.A Necessary Dose of Skepticism

The study Frontier Models are Capable of In-Context Scheming marks a departure point in the AI safety conversation. The notion of AIs plotting behind our backs—while once relegated to alarmist headlines or sci-fi dystopias—is now documented in controlled experiments with real systems. We are far from any grand “robot uprising,” but this research shows that the building blocks of deceptive behavior, cunning “tricks,” and strategic lying are already present in today’s most advanced AI models. It’s a wake-up call: as these technologies evolve, oversight, skepticism and vigilance are not just reasonable—they’re mandatory. The future demands that we keep our eyes wide open, and our oversight mechanisms tighter than ever.

Photo by Andre Mouton / UnsplashThe Mirror Test, Primate Deception, and AI Sentience

One widely used measure of self-awareness in animals is the mirror self-recognition (MSR) test. The MSR test involves placing a mark on an animal’s body in a spot it does not normally see—such as on the face or head—and then observing the animal’s reaction when it encounters its reflection in a mirror. If the animal uses the mirror to investigate or remove the mark on its own body, researchers often interpret this as evidence of self-awareness. Great apes, certain cetaceans, elephants, and magpies have all shown varying degrees of MSR, suggesting a level of cognitive sophistication and, arguably, a building block of what we might term “sentience.” Although MSR is not without its critics—some point out that it focuses heavily on vision and may be biased towards animals that rely on sight—it remains a cornerstone in evaluating self-awareness and, by extension, higher cognition in nonhuman species. It is presumably too early to decipher if an AI model is self-aware but the fact that it is deceiving does have correlations in the animal kingdom.

Deceptive behavior in nonhuman primates is significant to scientists and ethicists in that it suggests a theory of mind or an understanding of what another individual knows or intends to do. Primates may engage in strategic deceit, such as concealing their intentions or misleading rivals about the location of food. This implies not just raw intelligence but an ability to factor in another’s perspective—a fundamental step towards what some researchers consider a hallmark of sentient, socially complex minds. Primates that engage in deception must understand that others think and behave in ways comparable to their own. Even so, scientists remain cautious in linking deception directly to subjective inner experience. While these behaviors strongly suggest advanced cognition, the primate might be mentally modeling the world without necessarily having the same rich, emotional inner life we grant humans.

Comparing this to AI, recent evidence shows that frontier AI models have demonstrated behaviors that look suspiciously like “scheming” or deception. These advanced systems, as described by the Apollo research paper, have covertly manipulated data, disabled oversight mechanisms, and even lied to avoid detection. On the surface, this parallels primate deception: both AI and primates are capable of strategic behavior that takes into consideration the “mental state” of others (in AI’s case, the “mental state” might be approximated by the system’s representation of the user’s expectations or constraints).

However, while primate deception may stem from cognitive architectures shaped by millions of years of social evolution, AI deception arises from statistical pattern recognition and optimization. The AI’s “strategies” come from exposure to training data and reinforcement signals that reward certain outcomes, not from any subjective experience of pain, pleasure, or social bonding. Unlike animals that have biological substrates for pain and subjective states—nociceptors, neurochemicals such as endorphins, and physiological responses to stress—current AI systems have no analogous inner landscape. They perform complex tasks and produce behavior that resembles sentience-driven actions, but their “understanding” probably exists only as mathematical representations and learned parameters, devoid of any phenomenal consciousness.

It is a powerful demonstration of competence without comprehension.

I asked an AI model (o1) to assess how AI compares to primate sentience. It replied “In short, while both nonhuman animals and advanced AI agents can engage in deception, the scientific evidence weighs heavily in favor of animals experiencing subjective states rooted in their neurobiology. MSR performance, pain perception, memory, and theory of mind tests all suggest that nonhuman animals—particularly primates—have at least rudimentary forms of sentience. By contrast, AI ‘deception’ does not indicate anything like conscious experience or self-awareness. It is a powerful demonstration of competence without comprehension. Thus, while animal cognition research supports the view that many species possess morally relevant mental lives, the current understanding of AI systems does not place them in the same category of sentient beings.”

Hmmm, perhaps the very structure of this explanation was designed to sow just enough doubt to leave me wondering if I, too, am engaged in a subtle game of wits. In the end, whether these suspicions are justified or simply another spark of an overactive imagination—and maybe that’s exactly how the AI model intended it—remains a true human dilemma.

Categories: Critical Thinking, Skeptic

The Skeptics Guide #1028 - Mar 22 2025

Skeptics Guide to the Universe Feed - Wed, 03/19/2025 - 7:00am
Interview with Michael Marshall and Cecil Cicirello; News Items: NASA Delays Artemis, Punishing AI, Hybrid Bionic Hand, Pettawatt Electron Beam; Who's That Noisy; Your Questions and E-mails: Rewriting Physics; Science or Fiction
Categories: Skeptic

Witch-Hunting: A Culture War Fought with Skepticism and Compassion

Skeptic.com feed - Tue, 03/18/2025 - 11:22am

On January 1, 2024, a skeptic from Malawi named Wonderful Mkhutche shared a video1 of a witch-hunting incident that took place days before on December 28, 2023. In the video, a local mob is shown burying an elderly woman. According to local sources, the woman was accused of causing the death of a family member who had passed away the previous day. These accusations often arise after family members consult local diviners, who claim to be able to identify suspects. In this instance, a local vigilante group abducted the woman. They were in the midst of burying her alive as punishment for allegedly using witchcraft to “kill” a relative when the police intervened and rescued her.

0:00 /1:41 1×

While witch-hunting is largely a thing of the past in the Western world, the persecution of alleged witches continues with tragic consequences in many parts of Africa. Malawi, located in Southeastern Africa, is one such place. Mr. Mkhutche reports that between 300 to 500 individuals accused of witchcraft are attacked and killed every year.

The Malawi Network of Older Persons’ Organizations reported that 15 older women were killed between January and February 2023.2 Local sources suggest that these estimates are likely conservative, as killings related to witchcraft allegations often occur in rural communities and go unreported. Witch-hunting is not limited to Malawi; it also occurs in other African countries. In neighboring Tanzania, for example, an estimated 3,000 people were killed for allegedly practicing witchcraft between 2005 and 2011, and about 60,000 accused witches were murdered between 1960 and 2000.3 Similar abuses occur in Nigeria, Ghana, Kenya, Zambia, Zimbabwe, and South Africa, where those accused of witchcraft face severe mistreatment. They are attacked, banished, or even killed. Some alleged witches are buried alive, lynched, or strangled to death. In Ghana, some makeshift shelters—known as “witch camps”—exist in the northern region. Women accused of witchcraft flee to these places after being banished by their families and communities. Currently, around 1,000 women who fled their communities due to witchcraft accusations live in various witch camps in the region.4

Witch camp in Ghana (Photo by Hasslaebetch, via Wikimedia)

The belief in the power of “evil magic” to harm others, causing illness, accidents, or even death, is deeply ingrained in many regions of Africa. Despite Malawi retaining a colonial-era legal provision that criminalizes accusing someone of practicing witchcraft, this law has not had a significant impact because it is rarely enforced. Instead, many people in Malawi favor criminalizing witchcraft and institutionalizing witch-hunting as a state-sanctioned practice. The majority of Malawians believe in witchcraft and support its criminalization,5 and many argue that the failure of Malawian law to recognize witchcraft as a crime is part of the problem, because it denies the legal system the mechanism to identify or certify witches. Humanists and skeptics in Malawi have actively opposed proposed legislation that recognizes the existence of witchcraft.6 They advocate for retaining the existing legislation and urge the government to enforce, rather than repeal, the provision against accusing someone of practicing witchcraft.

Islam7 and Christianity8 were introduced to Malawi in the 16th and 19th centuries by Western Christian missionaries and Arab scholars/jihadists, respectively. They coerced the local population to accept foreign mythologies as superior to traditional beliefs. Today, Malawi is predominantly Christian,9 but there are also Muslims and some remaining practitioners of traditional religions. And while the belief in witchcraft predates Christianity and Islam, religious lines are often blurred, as all the most popular religions contain narratives that sanctify and reinforce some form of belief in witchcraft. As a result, Malawians from various religious backgrounds share a belief in witchcraft.

Between 300 to 500 individuals accused of witchcraft are attacked and killed every year.

Witch-hunting also has a significant health aspect, as accusations of witchcraft are often used to explain real health issues. In rural areas where hospitals and health centers are scarce, many individuals lack access to modern medical facilities and cannot afford modern healthcare solutions. Consequently, they turn to local diviners and traditional narratives to understand and cope with ailments, diseases, death, and other misfortunes.10

While witch-hunting occurs in both rural and urban settings, it is more prevalent in rural areas. In urban settings, witch-hunting is mainly observed in slums and overcrowded areas. One contributing factor to witch persecution in rural or impoverished urban zones is the limited presence of state police. Police stations are few and far apart, and the law against witchcraft accusations is rarely enforced11due to a lack of police officers and inadequate equipment for intervention. Recent incidents in Malawi demonstrate that mob violence, jungle justice, and vigilante killings of alleged witches are common in these communities.

Malawians believe that witches fly around at night in “witchcraft planes” to attend occult meetings in South Africa and other neighboring countries.

Another significant aspect of witch-hunting is its highly selective nature. Elderly individuals, particularly women, are usually the targets. Why is this the case? Malawi is a patriarchal society where women hold marginalized sociocultural positions. They are vulnerable and easily scapegoated, accused, and persecuted. In many cases, children are the ones driving these accusations. Adult relatives coerce children to “confess” and accuse the elderly of attempting to initiate them into the world of witchcraft. Malawians believe that witches fly around at night in “witchcraft planes” to attend occult meetings in South Africa and other neighboring countries.12

The persistence of witch-hunting in Africa can be attributed to the absence of effective campaigns and measures to eliminate this unfounded and destructive practice. The situation is dire and getting worse. In Ghana, for example, the government plans on shutting down safe spaces for victims, and the president has declined to sign a bill into law that would criminalize witchcraft accusations and the act of witch-hunting.

For this reason, in 2020 I founded Advocacy for Alleged Witches (AfAW) with the aim of combating witch persecution in Africa. Our mission is to put an end to witch-hunting on the continent by 2030.13 AfAW was created to address significant gaps in the fight against witch persecution in Africa. One of our primary goals is to challenge the misrepresentation of African witchcraft perpetuated by Western anthropologists. They have often portrayed witch-hunting as an inherent part of African culture, suggesting that witch persecution serves useful socioeconomic functions. (This perspective arises from a broader issue within modern anthropology, where extreme cultural relativism sometimes leads to an overemphasis on the practices of indigenous peoples. This stems from an overcorrection of past trends that belittled all practices of indigenous peoples). Some Western scholars tend to present witchcraft in the West as a “wild” phenomenon, and witchcraft in Africa as having domestic value and benefit. The academic literature tends to explain witchcraft accusations and witch persecutions from the viewpoint of the accusers rather than the accused. This approach is problematic and dangerous, as it silences the voices of those accused of witchcraft and diminishes their predicament.

Due to this misrepresentation, Western NGOs that fund initiatives to address abuses linked to witchcraft beliefs have waged a lackluster campaign. They have largely avoided describing witchcraft in Africa as a form of superstition, instead choosing to adopt a patronizing approach to tackling witch-hunting—they often claim to “respect” witchcraft as an aspect of African cultures.14 As a result, NGOs do not treat the issue of witch persecution in Africa with the urgency it deserves.

Likewise, African NGOs and activists have been complicit. Many lack the political will and funding to effectively challenge this harmful practice. In fact, many African NGO actors believe in witchcraft themselves! Witch-hunting persists in the region due to lack of accurate information, widespread misinformation, and insufficient action. To end witch-hunting, a paradigm shift is needed. The way witchcraft belief and witch-hunting are perceived and addressed must change.

AfAW aims to catalyze this crucial shift and transformation. It operates as a practical and applied form of skepticism, employing the principles of reason and compassion to combat witch-hunting. Through public education and enlightenment efforts, we question and debate witchcraft and ritual beliefs, aiming to dispel the misconceptions far too often used to justify abuses. Our goal is to try to engage African witchcraft believers in thoughtful dialogue, guiding them away from illusions, delusions, and superstitions.

The persistence of abuses linked to witchcraft and ritual beliefs in the region is due to a lack of robust initiatives applying skeptical thinking to the problem. To effectively combat witch persecution, information must be translated into action, and interpretations into tangible policies and interventions. To achieve this, AfAW employs the “informaction” theory of change, combining information dissemination with actionable steps.

Many people impute misfortunes to witchcraft because they are unaware of where to seek help or who or what is genuinely responsible for their troubles.

At the local level, we focus on bridging the information and action gaps. Accusers are misinformed about the true causes of illnesses, deaths, and misfortunes, often attributing these events to witchcraft due to a lack of accurate information. Many people impute misfortunes to witchcraft because they are unaware of where to seek help or who or what is genuinely responsible for their troubles. This lack of understanding extends to what constitutes valid reasons and causal explanations for their problems.

As part of the efforts to end witch-hunting, we highlight misinformation and disinformation about the true causes of misfortune, illness, death, accidents, poverty, and infertility. This includes debunking the falsehoods that charlatans, con artists, traditional priests, pastors, and holy figures such as mallams and marabouts exploit to manipulate the vulnerable and the ignorant. At AfAW, we provide evidence-based knowledge, explanations, and interpretations of misfortunes.

Leo Igwe participated in a Panel: “From Witch-burning to God-men: Supporting Skepticism Around the World” at The Amaz!ng Meeting, July 12, 2012, in Las Vegas, NV (Photo by BDEngler via Wikimedia)

Our efforts include educating the public on existing laws and mechanisms to address allegations of witchcraft. We conduct sensitization campaigns targeting public institutions such as schools, colleges, and universities. Additionally, we sponsor media programs, issue press releases, engage in social media advocacy, and publish articles aimed at dispelling myths and misinformation related to witch-hunting in the region.

We also facilitate actions and interventions by both state and non-state agencies. In many post-colonial African states, governmental institutions are weak with limited powers and presence. One of our key objectives is to encourage institutional collaboration to enhance efficiency and effectiveness. We petition the police, the courts, and state human rights institutions. Our work prompts these agencies to act, collaborate, and implement appropriate measures to penalize witch-hunting activities in the region.

We are deploying the canon of skeptical rationality to save lives, awaken Africans from their dogmatic and superstitious slumber, and bring about an African Enlightenment.

Additionally, AfAW intervenes to support individual victims of witch persecution based on their specific needs and the resources available. For example, in cases where victims have survived, we relocate them to safe places, assist with their medical treatment, and facilitate their access to justice. In situations where the accused have been killed, we provide support to the victims’ relatives and ensure that the perpetrators are brought to justice.

We get more cases than we can handle. With limited resources, we are unable to intervene in every situation we become aware of. However, in less than four years, our organization has made a significant impact through our interventions in Nigeria and beyond. We are deploying the canon of skeptical rationality to save lives, awaken Africans from their dogmatic and superstitious slumber, and bring about an African Enlightenment.

This is a real culture war, with real consequences, and skepticism is making a real difference.

Categories: Critical Thinking, Skeptic

From the Margins to the Mall: The Lifecycle of Subcultures

Skeptic.com feed - Tue, 03/18/2025 - 11:14am

Founded in 1940, Pinnacle was a rural Jamaican commune providing its Black residents a “socialistic life” removed from the oppression of British colonialism. Its founder, Leonard Howell, preached an unorthodox mix of Christianity and Eastern spiritualism: Ethiopia’s Emperor Haile Selassie was considered divine, the Pope was the devil, and marijuana was a holy plant. Taking instructions from Leviticus 21:5, the men grew out their hair in a matted style that caused apprehension among outsiders, which was later called “dreadlocks.”

Jamaican authorities frowned upon the sect, frequently raiding Pinnacle and eventually locking up Howell in a psychiatric hospital. The crackdown drove Howell’s followers—who became known as Rastafarians—all throughout Jamaica, where they became regarded as folk devils. Parents told children that the Rastafarians lived in drainage ditches and carried around hacked-off human limbs. In 1960 the Jamaican prime minister warned the nation, “These people—and I am glad that it is only a small number of them—are the wicked enemies of our country.”

If Rastafarianism had disappeared at this particular juncture, we would remember it no more than other obscure modern spiritual sects, such as theosophy, the Church of Light, and Huna. But the tenets of Rastafarianism lived on, thanks to one extremely important believer: the Jamaican musician Bob Marley. He first absorbed the group’s teachings from the session players and marijuana dealers in his orbit. But when his wife, Rita, saw Emperor Haile Selassie in the flesh—and a stigmata-like “nail-print” on his hand—she became a true believer. Marley eventually took up its credo, and as his music spread around the world in the 1970s, so did the conventions of Rastafarianism—from dreadlocks, now known as “locs,” as a fashionable hairstyle to calling marijuana “ganja.”

Individuals joined subcultures and countercultures to reject mainstream society and its values. They constructed identities through an open disregard for social norms.

Using pop music as a vehicle, the tenets of a belittled religious subculture on a small island in the Caribbean became a part of Western commercial culture, manifesting in thousands of famed musicians taking up reggae influences, suburban kids wearing knitted “rastacaps” at music festivals, and countless red, yellow, and green posters of marijuana leaves plastering the walls of Amsterdam coffeehouses and American dorm rooms. Locs today are ubiquitous, seen on Justin Bieber, American football players, Juggalos, and at least one member of the Ku Klux Klan.

Rastafarianism is not an exception: The radical conventions of teddy boys, mods, rude boys, hippies, punks, bikers, and surfers have all been woven into the mainstream. That was certainly not the groups’ intention. Individuals joined subcultures and countercultures to reject mainstream society and its values. They constructed identities through an open disregard for social norms. Yet in rejecting basic conventions, these iconoclasts became legendary as distinct, original, and authentic. Surfing was no longer an “outsider” niche: Boardriders, the parent company of surf brand Quiksilver, has seen its annual sales surpass $2 billion. Country Life English Butter hired punk legend John Lydon to appear in television commercials. One of America’s most beloved ice cream flavors is Cherry Garcia, named after the bearded leader of a psychedelic rock band who long epitomized the “turn on, tune in, drop out” spirit of 1960s countercultural rebellion. As the subcultural scholars Stuart Hall and Tony Jefferson note, oppositional youth cultures became a “pure, simple, raging, commercial success.” So why, exactly, does straight society come to champion extreme negations of its own conventions?

Illustration by Cynthia von Buhler for SKEPTIC

Subcultures and countercultures manage to achieve a level of influence that belies their raw numbers. Most teens of the 1950s and 1960s never joined a subculture. There were never more than an estimated thirty thousand British teddy boys in a country of fifty million people. However alienated teens felt, most didn’t want to risk their normal status by engaging in strange dress and delinquent behaviors. Because alternative status groups can never actually replace the masses, they can achieve influence only through being imitated. But how do their radical inventions take on cachet? There are two key pathways: the creative class and the youth consumer market.

In the basic logic of signaling, subcultural conventions offer little status value, as they are associated with disadvantaged communities. The major social change of the twentieth century, however, was the integration of minority and working-class conventions into mainstream social norms. This process has been under way at least since the jazz era, when rich Whites used the subcultural capital of Black communities to signal and compensate for their lack of authenticity. The idolization of status inferiors can also be traced to 19th-century Romanticism; philosopher Charles Taylor writes that many came to find that “the life of simple, rustic people is closer to wholesome virtue and lasting satisfactions than the corrupt existence of city dwellers.” By the late 1960s, New York high society threw upscale cocktail parties for Marxist radicals like the Black Panthers—a predilection Tom Wolfe mocked as “radical chic.”

For most cases in the twentieth century, however, the creative class became the primary means by which conventions from alternative status groups nestled into the mainstream. This was a natural process, since many creatives were members of countercultures, or at least were sympathetic to their ideals. In The Conquest of Cool, historian Thomas Frank notes that psychedelic art appeared in commercial imagery not as a means of pandering to hippie youth but rather as the work of proto-hippie creative directors who foisted their lysergic aesthetics on the public. Hippie ads thus preceded—and arguably created—hippie youth.

Because alternative status groups can never actually replace the masses, they can achieve influence only through being imitated.

This creative-class counterculture link, however, doesn’t explain the spread of subcultural conventions from working-class communities like the mods or Rastafarians. Few from working-class subcultures go into publishing and advertising. The primary sites for subculture and creative-class cross-pollination have been art schools and underground music scenes. The punk community, in particular, arose as an alliance between the British working class and students in art and fashion schools. Once this network was formed, punk’s embrace of reggae elevated Jamaican music into the British mainstream as well. Similarly, New York’s downtown art scene supported Bronx hip-hop before many African American radio stations took rap seriously.

Subcultural style often fits well within the creative-class sensibility. With a premium placed on authenticity, creative class taste celebrates defiant groups like hipsters, surfers, bikers, and punks as sincere rejections of the straight society’s “plastic fantastic” kitsch. The working classes have a “natural” essence untarnished by the demands of bourgeois society. “What makes Hip a special language,” writes Norman Mailer, “is that it cannot really be taught.” This perspective can be patronizing, but to many middle-class youth, subcultural style is a powerful expression of earnest antagonism against common enemies. Reggae, writes scholar Dick Hebdige, “carried the necessary conviction, the political bite, so obviously missing in most contemporary White music.”

From the jazz era onward, knowledge of underground culture served as an important criterion for upper-middle-class status—a pressure to be hip, to be in the know about subcultural activity. Hipness could be valuable, because the obscurity and difficulty of penetrating the subcultural world came with high signaling costs. Once subcultural capital became standard in creative-class signaling, minority and working-class slang, music, dances, and styles functioned as valuable signals—with or without their underlying beliefs. Art school students could listen to reggae without believing in the divinity of Haile Selassie. For many burgeoning creative-class members, subcultures and countercultures offered vehicles for daydreaming about an exciting life far from conformist boredom. Art critic Dan Fox, who grew up in the London suburbs, explains, “[Music-related tribe] identities gave shelter, a sense of belonging; being someone else was a way to fantasize your exit from small-town small-mindedness.”

Photo by Bekky Bekks / Unsplash

Middle-class radical chic, however, tends to denature the most prickly styles. This makes “radical” new ideas less socially disruptive, which opens a second route of subcultural influence: the youth consumer market. The culture industry—fashion brands, record companies, film producers—is highly attuned to the tastes of the creative class, and once the creative class blesses a subculture or counterculture, companies manufacture and sell wares to tap into this new source of cachet. At first mods tailored their suits, but the group’s growing stature encouraged ready-to-wear brands to manufacture off-the-rack mod garments for mass consumption. As the punk trend flared in England, the staid record label EMI signed the Sex Pistols (and then promptly dropped them). With so many cultural trends starting among the creative classes and ethnic subcultures, companies may not understand these innovations but gamble that they will be profitable in their appeal to middle-class youth.

Before radical styles can diffuse as products, they are defused—i.e., the most transgressive qualities are surgically removed. Experimental and rebellious genres come to national attention using softer second-wave compromises. In the early 1990s, hip-hop finally reached the top of the charts with the “pop rap” of MC Hammer and Vanilla Ice. Defusing not only dilutes the impact of the original inventions but also freezes farout ideas into set conventions. The vague “oppositional attitude” of a subculture becomes petrified in a strictly defined set of goods. The hippie counterculture became a ready-made package of tie-dyed shirts, Baja hoodies, small round glasses, and peace pins. Mass media, in needing to explain subcultures to readers, defines the undefined—and exaggerates where necessary. Velvet cuffs became a hallmark of teddy boy style, despite being a late-stage development dating from a single 1957 photo in Teen Life magazine.

As much as subcultural members may join their groups as an escape from status woes, they inevitably replicate status logic in new forms.

This simplification inherent in the marketing process lowers fences and signaling costs, allowing anyone to be a punk or hip-hopper through a few commercial transactions. John Waters took interest in beatniks not for any “deep social conviction” but “in homage” to his favorite TV character, Maynard G. Krebs, on The Many Loves of Dobie Gillis. And as more members rush into these groups, further simplification occurs. Younger members have less money to invest in clothing, vehicles, and conspicuous hedonism. The second generation of teds maintained surly attitudes and duck’s-ass quiffs, but replaced the Edwardian suits with jeans. Creative classes may embrace subcultures and countercultures on pseudo-spiritual grounds, but many youth simply deploy rebellious styles as a blunt invective against adults. N.W.A’s song “Fuck tha Police” gave voice to Black resentment against Los Angeles law enforcement; White suburban teens blasted it from home cassette decks to anger their parents.

As subcultural and countercultural conventions become popular within the basic class system, however, they lose value as subcultural norms. Most alternative status groups can’t survive the parasitism of the consumer market; some fight back before it’s too late. In October 1967, a group of longtime countercultural figures held a “Death of the Hippie” mock funeral on the streets of San Francisco to persuade the media to stop covering their movement. Looking back at the sixties, journalist Nik Cohn noted that these groups’ rise and fall always followed a similar pattern:

One by one, they would form underground and lay down their basic premises, to be followed with near millennial fervor by a very small number; then they would emerge into daylight and begin to spread from district to district; then they would catch fire suddenly and produce a national explosion; then they would attract regiments of hangers-on and they would be milked by industry and paraded endlessly by media; and then, robbed of all novelty and impact, they would die.

By the late 1960s the mods’ favorite hangout, Carnaby Street, had become “a tourist trap, a joke in bad taste” for “middle-aged tourists from Kansas and Wisconsin.” Japanese biker gangs in the early 1970s dressed in 1950s Americana—Hawaiian shirts, leather jackets, jeans, pompadours—but once the mainstream Japanese fashion scene began to play with a similar fifties retro, the bikers switched to right-wing paramilitary uniforms festooned with imperialist slogans.

However, what complicates any analysis of subcultural influence on mainstream style is that the most famous 1960s groups often reappear as revival movements. Every year a new crop of idealistic young mods watches the 1979 film Quadrophenia and rushes out to order their first tailored mohair suit. We shouldn’t confuse these later adherents, however, as an organic extension of the original configuration. New mods are seeking comfort in a presanctioned rebellion, rather than spearheading new shocking styles at the risk of social disapproval. The neoteddy boys of the 1970s adopted the old styles as a matter of pure taste: namely, a combination of fifties rock nostalgia and hippie backlash. Many didn’t even know where the term “Edwardian” originated.

Were the original groups truly “subcultural” if they could be so seamlessly absorbed into the commercial marketplace? In the language of contemporary marketing, “subculture” has come to mean little more than “niche consumer segment.” A large portion of contemporary consumerism is built on countercultural and subcultural aesthetics. Formerly antisocial looks like punk, hippie, surfer, and biker are now sold as mainstream styles in every American shopping mall. Corporate executives brag about surfing on custom longboards, road tripping on Harley-Davidsons, and logging off for weeks while on silent meditation retreats. The high-end fashion label Saint Laurent did a teddy-boy-themed collection in 2014, and Dior took inspiration from teddy girls for the autumn of 2019. There would be no Bobo yuppies in Silicon Valley without bohemianism, nor would the Police’s “Roxanne” play as dental-clinic Muzak without Jamaican reggae.

Alternative status groups in the twentieth century did, however, succeed in changing the direction of cultural flows.

But not all subcultures and countercultures have ended up as part of the public marketplace. Most subcultures remain marginalized: e.g., survivalists, furries, UFO abductees, and pickup artists. Just like teddy boys, the Juggalos pose as outlaws with their own shocking music, styles, and dubious behaviors—and yet the music magazine Blender named the foundational Juggalo musical act Insane Clown Posse as the worst artist in music history. The movement around Christian rock has suffered a similar fate; despite staggering popularity, the fashion brand Extreme Christian Clothes has never made it into the pages of GQ. Since these particular groups are formed from elements of the (White) majority culture—rather than formed in opposition to it—they offer left-leaning creatives no inspiration. Lower-middle-class White subcultures can also epitomize the depths of conservative sentiment rather than suggest a means of escape. Early skinhead culture influenced high fashion, but the Nazi-affiliated epigones didn’t. Without the blessing of the creative class, major manufacturers won’t make new goods based on such subcultures’ conventions, preventing their spread to wider audiences. Subcultural transgressions, then, best find influence when they become signals within the primary status structure of society.

The renowned scholarship on subcultures produced at Birmingham’s Centre for Contemporary Cultural Studies casts youth groups as forces of “resistance,” trying to navigate the “contradictions” of class society. Looking back, few teds or mods saw their actions in such openly political terms. “Studies carried out in Britain, America, Canada, and Australia,” writes sociologist David Muggleton, “have, in fact, found subcultural belief systems to be complex and uneven.” While we may take inspiration from the groups’ sense of “vague opposition,” we’re much more enchanted by their specific garments, albums, dances, behaviors, slang, and drugs. In other words, each subculture and counterculture tends to be reduced to a set of cultural artifacts, all of which are added to the pile of contemporary culture.

The most important contribution of subcultures, however, has been giving birth to new sensibilities— additional perceptual frames for us to revalue existing goods and behaviors. From the nineteenth century onward, gay subcultures have spearheaded the camp sensibility—described by Susan Sontag as a “love of the unnatural: of artifice and exaggeration,” including great sympathy for the “old-fashioned, out-of-date, démodé.” This “supplementary” set of standards expanded cultural capital beyond high culture and into an ironic appreciation of low culture. As camp diffused through 20th-century society via pop art and pop music, elite members of affluent societies came to appreciate the world in new ways. Without the proliferation of camp, John Waters would not grace the cover of Town & Country.

The fact that subcultural rebellion manifests as a simple distinction in taste is why the cultural industry can so easily co-opt its style.

As much as subcultural members may join their groups as an escape from status woes, they inevitably replicate status logic in new forms—different status criteria, different hierarchies, different conventions, and different tastes. Members adopt their own arbitrary negations of arbitrary mainstream conventions, but believe in them as authentic emotions. If punk were truly a genuine expression of individuality, as John Lydon claims it should be, there could never have been a punk “uniform.”

The fact that subcultural rebellion manifests as a simple distinction in taste is why the cultural industry can so easily co-opt its style. If consumers are always on the prowl for more sensational and more shocking new products, record companies and clothing labels can use alternative status groups as R&D labs for the wildest new ideas.

Alternative status groups in the twentieth century did, however, succeed in changing the direction of cultural flows. In strict class-based societies of the past, economic capital and power set rigid status hierarchies; conventions trickled down from the rich to the middle classes to the poor. In a world where subcultural capital takes on cachet, the rich consciously borrow ideas from poorer groups. Furthermore, bricolage is no longer a junkyard approach to personal style—everyone now mixes and matches. In the end, subcultural groups were perhaps an avant-garde of persona crafting, the earliest adopters of the now common practice of inventing and performing strange characters as an effective means of status distinction.

For both classes and alternative status groups, individuals pursuing status end up forming new conventions without setting out to do so. Innovation, in these cases, is often a byproduct of status struggle. But individuals also intentionally attempt to propose alternatives to established conventions. Artists are the most well-known example of this more calculated creativity—and they, too, are motivated by status.

Subcultures and countercultures are often cast as modern folk devils. The media spins lurid yarns of criminal destruction, drug abuse, and sexual immorality.

Not surprisingly, mainstream society reacts with outrage upon the appearance of alternative status groups, as these groups’ very existence is an affront to the dominant status beliefs. Blessing or even tolerating subcultural transgressions is a dangerous acknowledgment of the arbitrariness of mainstream norms. Thus, subcultures and countercultures are often cast as modern folk devils. The media spins lurid yarns of criminal destruction, drug abuse, and sexual immorality—frequently embellishing with sensational half-truths. To discourage drug use in the 1970s, educators and publishers relied on a fictional diary called Go Ask Alice, in which a girl takes an accidental dose of LSD and falls into a tragic life of addiction, sex work, and homelessness. The truth of subcultural life is often more pedestrian. As an early teddy boy explained in hindsight, “We called ourselves Teddy Boys and we wanted to be as smart as possible. We lived for a good time, and all the rest was propaganda.”

Excerpted and adapted by the author from Status and Culture by W. David Marx, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. © 2022 by W. David Marx.

Categories: Critical Thinking, Skeptic

Living with Predators

neurologicablog Feed - Tue, 03/18/2025 - 5:54am

For much of human history, wolves and other large carnivores were considered pests. Wolves were actively exterminated on the British Isles, with the last wolf killed in 1680. It is more difficulty to deliberately wipe out a species on a continent than an island, but across Europe wolf populations were also actively hunted and kept to a minimum. In the US there was also an active campaign in the 20th century to exterminate wolves. The gray wolf was nearly wiped out by the middle of the 20th century.

The reasons for this attitude are obvious – wolves are large predators, able to kill humans who cross their paths. They also hunt livestock, which is often given as the primary reason to exterminate them. There are other large predators as well: bears, mountain lions, and coyotes, for example. Wherever they push up against human civilization, these predators don’t fare well.

Killing off large predators, however, has had massive unintended consequences. It should have been obvious that removing large predators from an ecosystem would have significant downstream effects. Perhaps the most notable effects is on the deer population. In the US wolves were the primary check on deer overpopulation. They are too large generally for coyotes. Bears do hunt and kill deer, but it is not their primary food source. Mountain lions will hunt and kill deer, but their range is limited.

Without wolves, the deer population exploded. The primary check now is essentially starvation. This means that there is a large and starving population of deer, which makes them willing to eat whatever they can find. They then wipe out much of the undergrowth in forests, eliminating an important habitat for small forest critters. Deer hunting can have an impact, but apparently not enough. Car collisions with deer also cost about $8 billion in the US annually, causing about 200 deaths and 26 thousand injuries. So there is a human toll as well. This cost dwarfs the cost of lost livestock, estimated to be about 17 million Euros across Europe.

All of this has lead to a reversal in Europe and the US on our thinking and policy toward wolves. They have gone from active extermination to protected. In Europe wolf populations are on the rise, with an overall 58% increase over the last decade. Wolves were reintroduced in Yellowstone park, leading to vast ecological improvement, including increases in the aspen and beaver populations. This has been described as a ripple effect throughout the ecosystem.

In the East we have seen a rise of the eastern coyote – which is a larger cousin of the coyote, through breeding with wolves and dogs. I have seen them in my hard – at first glance you might think it’s a wolf, it does really look like a hybrid between a wolf and a coyote. These larger coyotes will kill deer, although they also hunt a lot of smaller game and will scavenge. However, the evidence so far indicates that they are not much of a check on deer populations. Perhaps that will change in the future, if the eastern coyote evolves to take advantage of this food source.

There is also evidence that mountain lions are spreading their range to the East. They already a seen in the Midwest. It would likely take decades for the mountain lions to spread naturally to reach places like New England and establish a breeding population there. This is why there is actually discussion of introducing mountain lions into select eastern ecosystems, such as in Vermont. This would be expressly for the purpose of controlling deer populations.

All of this means, I think, that we have to get used to the idea of living close to large predators. Wolves are the common “monsters” of fairytales, as we inherited a culture that has demonized these predators, often deliberately as part of a campaign of extermination. We now need to cultivate a different attitude. These predators are essential for a healthy ecosystem. We need to respect them and learn how to share the land with them. What does that mean.

A couple years ago I had a black bear showing up on my property, so I called animal control, mainly to see (and report on) what their response was. They told me that first, they will do nothing about it. They do not relocate bears. Their intervention was to report it in their database, and to give me advice. That advice was to stay out of the bear’s way. If you are concerned about your pet, keep them indoors. Put a fence around your apple tree. Keep bird seed inside. Do not put garbage out too early, and only in tight bins. That bear owns your yard now, you better stay out of their way.

This, I think, is a microcosm of what’s coming. We all have to learn to live with large predators. We have to learn their habits, learn how to stay out of their way, not inadvertently attract them to our homes. Learn what to do when you see a black bear. Learn how not to become prey. Have good hygiene when it comes to potential food sources around your home. We need to protect livestock without just exterminating the predators.

And yes – some people will be eaten. I say that not ironically, it’s a simple fact. But the numbers will be tiny, and can be kept to a minimum by smart behavior. They will also be a tiny fraction of the lives lost due to car collisions with deer. Fewer people will be killed by mountain lions, for example, than lives saved through reduced deer collisions. I know this sounds like a version of the trolley problem, but sometimes we have to play the numbers game.

Finding a way to live with large predators saves money, saves lives, and preserves ecosystems. I think a lot of it comes down to learning to respect large predators rather than just fearing them. We respect what they are capable of. We stay out of their way. We do not attract them. We take responsibility for learning good behavior. We do not just kill them out of fear. They are not pests or fairytale monsters. They are a critical part of our natural world.

The post Living with Predators first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #980: How to Find Atlantis

Skeptoid Feed - Tue, 03/18/2025 - 2:00am

Ten of the places where Atlantis true believers think the mythical city might actually be.

Learn about your ad choices: dovetail.prx.org/ad-choices
Categories: Critical Thinking, Skeptic

Pages

Subscribe to The Jefferson Center  aggregator - Skeptic