The New Zealand Māori Astrology Craze: A Case Study

Skeptic.com feed - Wed, 04/30/2025 - 4:57am
“It is fundamental in science that no knowledge is protected from challenge. … Knowledge that requires protection is belief, not science.” —Peter Winsley

There is growing international concern over the erosion of objectivity in both education and research. When political and social agendas enter the scientific domain, there is a danger that they may override evidence-based inquiry and compromise the core principles of science. A key component of the scientific process is a skeptical willingness to challenge assumptions. When that foundation is replaced by a fear of causing offense or by conformity to popular trends, what was science becomes pseudoscientific propaganda employed to reinforce ideology.

When Europeans formally colonized New Zealand in 1840 with the signing of the Treaty of Waitangi, the culture of the indigenous Māori people was widely disparaged and they were viewed as an inferior race. One year earlier, historian John Ward had described Māori as having “the intellect of children” who were living in an immature society that called out for the guiding hand of British civilization.1 The recognition of Māori as fully human, with rights, dignity, and a rich culture worthy of respect, represents a seismic shift from the 19th-century attitudes that permeated New Zealand and much of the Western world, and that were used to justify the European subjugation of indigenous peoples. 

Since the 1970s, Māori society has experienced a cultural renaissance, with a renewed appreciation of the language, art, and literature of the first people to settle Aotearoa—“the land of the long white cloud.” While speaking Māori was once banned in public schools, the language is now thriving and is an official language of the country. Learning about Māori culture is an integral part of an education system that treats it as a treasure (taonga) to be handled with reverence. Māori knowledge often holds great spiritual significance and should be respected. Like all indigenous knowledge, it contains valuable wisdom accumulated over millennia, and while some of its ideas can be tested and replicated, it is not the same as science. 

For example, Māori knowledge encompasses traditional methods for rendering poisonous karaka berries safe for consumption. Science, on the other hand, focuses on how and why things happen, like why karaka berries are poisonous and how the poison can be removed.2 The job of science is to describe the workings of the natural world in ways that are testable and repeatable, so that claims can be checked against empirical evidence—data gathered from experiments or observations. That does not mean we should discount the significance of indigenous knowledge—but these two systems of looking at the world operate in different domains. As much as indigenous knowledge deserves our respect, we should not become so enamoured with it that we give it the same weight as scientific knowledge. 

The Māori Knowledge Debate 

In recent years the government of New Zealand has given special treatment to indigenous knowledge. The issue came to a head in 2021, when a group of prominent academics published a letter expressing concern that giving indigenous knowledge parity with science could undermine the integrity of the country’s science education. The seven professors who signed the letter were subjected to a national inquisition: public attacks by their own colleagues and an investigation by the New Zealand Royal Society into whether to expel the signatories.3

Ironically, part of the reason for the Society’s existence is to promote science. At the core of the debate is the issue of whether “Māori ancient wisdom” should be given equal status with science in the curriculum, which is the official government position.4 This situation has created tension in the halls of academia, where many believe that the pendulum has now swung to another extreme. Frustration and unease permeate university campuses as professors and students alike walk on eggshells, afraid to broach the subject for fear of being branded racist and anti-Māori, or of being subjected to personal attacks and harassment campaigns. 

The Lunar Calendar 

Infatuation with indigenous knowledge and the fear of criticizing claims surrounding it have infiltrated many of the country’s key institutions, from the health and education systems to the mainstream media. The result has been a proliferation of pseudoscience. There is no better example of just how extreme the situation has become than the craze over the Māori Lunar Calendar. Its rise is a direct result of what can happen when political activism enters the scientific arena and affects policymaking. Interest in the Calendar began gaining traction in late 2017. 

An example of the Maramataka Māori lunar calendar (Source: Museum of New Zealand)

Since then, many Kiwis have been led to believe that it can impact everything from horticulture to health to human behavior. The problem is that the science is lacking, but because of the ugly history of the mistreatment of the Māori people, public institutions are afraid to criticize, or even take issue with, anything to do with Māori culture. Consider, for example, media coverage. Between 2020 and 2024, there were no fewer than 853 articles that mention “maramataka”—the Māori word for the Calendar, which translates to “the turning of the moon.” After reading through each text, I was unable to identify a single skeptical article.5 Many openly gushed about the wonders of the Calendar and gave no hint that it has little scientific backing. 

Based on the Dow Jones Factiva Database

The Calendar once played an important role in Māori life, tracking the seasons. Its main purpose was to inform fishing, hunting, and horticultural activities. There is some truth in the use of specific phases or cycles to time harvesting practices. For instance, some fish are more active or abundant during certain fluctuations of the tides, which in turn are influenced by the moon’s gravitational pull. Two studies have shown a slight increase in fish catch using the Calendar.6 However, there is no support for the belief that lunar phases influence human health and behavior, plant growth, or the weather. Despite this, government ministries began providing online materials that featured an array of claims about the moon’s impact on human affairs. Fearful of causing offense by publicly criticizing Māori knowledge, the ministries usually omitted the scientific position altogether. 

Soon primary and secondary schools began holding workshops to familiarize staff with the Calendar and how to teach it. The accompanying materials were confusing for students and teachers alike because most were breathtakingly uncritical and implied that it was all backed by science. Before long, teachers began consulting the maramataka to determine which days were best for conducting assessments, which days were optimal for sporting activities, and which days were aligned with “calmer activities at times of lower energy phases.” Others used it to predict days when problem students were more likely to misbehave.7

As one primary teacher observed: “If it’s a low energy day, I might not test that week. We’ll do meditation, mirimiri (massage). I slowly build their learning up, and by the time of high energy days we know the kids will be energetic. You’re not fighting with the children, it’s a win-win, for both the children and myself. Your outcomes are better.”8 The link between the Calendar and human behavior was even promoted by one of the country’s largest education unions.9 Some teachers and government officials began scheduling meetings on days deemed less likely to trigger conflict,10 while some media outlets began publishing what were essentially horoscopes under the guise of ‘ancient Māori knowledge.’11

The Calendar also gained widespread popularity among the public as many Kiwis began using online apps and visiting the homepages of maramataka enthusiasts to guide their daily activities. In 2022, a Māori psychiatrist published a popular book on how to navigate the fluctuating energy levels of Hina—the moon goddess. In Wawata Moon Dreaming, Dr. Hinemoa Elder advises that during the Tamatea Kai-ariki phase people should “Be wary of destructive energies,”12 while the Māwharu phase is said to be a time of “female sexual energy … and great sex.”13 Elder is one of many “maramataka whisperers” who have popped up across the country. 

By early 2025, the Facebook page “Maramataka Māori” had 58,000 followers,14 while another page devoted to Māori astronomy, “Living by the Stars,” had 103,000 admirers.15 Another popular book, Living by the Moon, also asserts that lunar phases can affect a person’s energy levels and behavior. We are told that the Whiro phase (new moon) is associated with troublemaking. The book even won awards for best educational book and best Māori language resource.16 In 2023, Māori politician Hana Maipi-Clarke, who has written her own book on the Calendar, stood up in Parliament and declared that the maramataka could foretell the weather.17

A Public Health Menace 

Several public health clinics have encouraged their staff to use the Calendar to navigate “high energy” and “low energy” days and to help clients apply it to their lives. As a result of the positive portrayal of the Calendar in the Kiwi media and on government websites, there are cases of people discontinuing their medication for bipolar disorder and managing contraception with the Calendar.18 In February 2025, the government-funded Māori health organization Te Rau Ora released an app that purports to help people enhance their physical and mental health by following the maramataka to track their mauri (vital life force).

While Te Rau Ora claims that it uses “evidence-based resources,” there is no evidence that mauri exists, or that following the phases of the moon directly affects health and well-being. Mauri is the Māori concept of a life force—or vital energy—believed to exist in all living beings and inanimate objects. The existence of a “life force,” once debated in the scientific community under the name “vitalism,” no longer has any scientific standing.19 Despite this, one of the app’s developers, clinical psychologist Dr. Andre McLachlan, has called for widespread use of the app.20 Some people are adamant that following the Calendar has transformed their lives, and this is certainly possible given the belief in its spiritual significance. However, the impact would come not from the influence of the moon but from the power of expectation and the placebo effect. 

No Science Allowed 

While researching my book, The Science of the Māori Lunar Calendar, I was repeatedly told by Māori scholars that it was inappropriate to write on this topic without first obtaining permission from the Māori community. They also raised the issue of “Māori data sovereignty”—the right of Māori to have control over their own data, including who has access to it and what it can be used for. They expressed disgust that I was using “Western colonial science” to validate (or invalidate) the Calendar. 

This is a reminder of just how extreme attempts to protect indigenous knowledge have become in New Zealand. It is a dangerous world where subjective truths are given equal standing with science under the guise of relativism, blurring the line between fact and fiction. It is a world where group identity and indigenous rights are often given priority over empirical evidence. The assertion that forms of “ancient knowledge” such as the Calendar cannot be subjected to scientific scrutiny because they enjoy protected cultural status undermines the very foundations of scientific inquiry. The expectation that indigenous representatives should serve as gatekeepers whose consent is required before someone can engage in research on certain topics is troubling. The notion that only indigenous people can decide which topics are acceptable to research undermines intellectual freedom and stifles academic inquiry. 

While indigenous knowledge deserves our respect, its uncritical introduction into New Zealand schools and health institutions is worrisome and should serve as a warning to other countries. When cultural beliefs are given parity with science, it jeopardizes public trust in scientific institutions and can foster misinformation, especially in areas such as public health, where the stakes are especially high.

Categories: Critical Thinking, Skeptic

The Measure of the Wealth of Nations: Why Economic Statistics Matter

Skeptic.com feed - Tue, 04/29/2025 - 2:08pm
Are things getting better?
For whom? What does “better” mean?

The economic and social phenomena so clear in everyday experience are invisible in the standard national accounts and GDP (Gross Domestic Product) statistics. The current concept of value added used to construct GDP numbers does not correspond to the views many people hold about societal value. This disconnect has given momentum to the Beyond GDP movement and to those similarly challenging the metrics of shareholder value that determine how businesses act. The digitalization of the economy, in shifting the ways economic value can be created, amplifies the case for revisiting existing economic statistics.

Without good statistics, states cannot function. In my work focusing on both the digital economy and the natural economy, I have worked closely with official statisticians in the ONS (Office for National Statistics), BEA (Bureau of Economic Analysis), OECD (Organization for Economic Cooperation and Development), INSEE (National Institute of Statistics and Economic Studies), and elsewhere for many years. Without question there has been a widespread loss of belief in conventional statistics even among knowledgeable commentators, as the vigorous Beyond GDP agenda testifies.

Why Not Well-Being?

An alternative metric of social welfare that many people find appealing is the direct measurement of well-being. Economists who focus on well-being have differing views on exactly how to measure it, but the balance of opinion has tilted toward life satisfaction measured on a fixed scale. One such measure is the Cantril Ladder, which asks respondents to imagine a ladder, with the best possible life for them being a 10 and the worst possible life being a 0, and then to rate their own current lives on that 0 to 10 scale.

Although people’s well-being is the ultimate aim of collective action, using it as a measurement is problematic in several ways. One is the set of measurement issues highlighted in research by Mark Fabian. These include scale norming, whereby when people state their life satisfaction as, say, a 7 on a scale of 1 to 10 at different time periods, they are doing so by reference to the scale rather than events in their life.12 One of the more firmly established behavioral facts is the idea of an individual set point, whereby individuals generally revert to an initial level of well-being after experiencing events that send it up or down, but this is hardly a reason for concluding that nothing can improve in their lives.

Another issue is that the empirical literature is atheoretical, providing a weak basis for policy intervention in people’s lives. The conclusion from my research project on well-being is that while national policy could certainly be informed by top-down life satisfaction survey statistics, at smaller scales people’s well-being will depend on the context and on who is affected; the definition and measurement of well-being should be tailored appropriately, and it is not a very useful metric for policy at an aggregate level.

Why Not an Alternative Index?

GDP is calculated by summing the total value of all final goods and services produced within a country’s borders during a specific period, typically a year. Over the years, several single indices have been proposed as alternatives to GDP. Such indices internalize the trade-offs to present a single number that advocates hope will dethrone conventional measures. Some of them are explicit about the social welfare framework they involve.
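
To make the value-added arithmetic concrete, here is a minimal sketch in Python; the sectors and numbers are invented purely for illustration. Under the production approach, each sector’s value added is its gross output minus the intermediate inputs it buys from other sectors, and GDP is the sum of value added across sectors, which avoids double-counting intermediate goods.

    # Toy illustration of the production approach to GDP; the sectors and
    # numbers are invented, not real national-accounts data.
    sectors = {
        # sector: (gross output, intermediate inputs bought from other sectors)
        "agriculture":   (100, 30),
        "manufacturing": (400, 250),
        "services":      (600, 200),
    }

    # GDP = sum of value added = gross output minus intermediate inputs,
    # so intermediate goods are not counted twice.
    gdp = sum(output - inputs for output, inputs in sectors.values())
    print(gdp)  # 620 in this toy economy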

Another alternative is provided by Jones and Klenow (2016),3 who include consumption, leisure, inequality, and mortality in social welfare. They convert other indicators into “consumption-equivalent welfare,” which has a long tradition in economics.4 In their paper, they observe that France has much lower consumption per capita than the United States—it is only at 60 percent of the U.S. level—but less inequality, greater life expectancy at birth, and longer leisure hours. Their adjustment puts France at 92 percent of the consumption-equivalent level of the United States.
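
The consumption-equivalent idea can be illustrated with a stylized calculation. The sketch below uses invented numbers and a deliberately simplified utility function, not Jones and Klenow’s actual specification or data: it assumes log utility of consumption, an additive term for leisure, a penalty of sigma squared over two for lognormally distributed consumption inequality, and lifetime welfare equal to life expectancy times that per-year flow utility. The consumption-equivalent ratio is then the factor by which benchmark consumption would have to be scaled for lifetime welfare to match the comparison country’s.

    import math

    # Stylized sketch with invented parameters, not the paper's data or exact model.
    U_BAR = 5.0  # intercept so that a year of life carries positive utility

    def flow_utility(consumption, leisure_value, sigma):
        # Log utility of consumption, an additive leisure term, and a
        # lognormal-inequality penalty of sigma^2 / 2.
        return U_BAR + math.log(consumption) + leisure_value - sigma ** 2 / 2

    def lifetime_welfare(life_expectancy, consumption, leisure_value, sigma):
        # Expected lifetime utility: years of life times per-year flow utility.
        return life_expectancy * flow_utility(consumption, leisure_value, sigma)

    # Benchmark country versus a comparison country with lower consumption
    # (60 percent of the benchmark) but more leisure, less inequality,
    # and higher life expectancy.
    benchmark = dict(life_expectancy=79, consumption=1.00, leisure_value=0.00, sigma=0.60)
    comparison = dict(life_expectancy=82, consumption=0.60, leisure_value=0.05, sigma=0.50)

    # Solve for the factor lam that scales benchmark consumption so that
    # lifetime welfare equals the comparison country's.
    target_flow = lifetime_welfare(**comparison) / benchmark["life_expectancy"]
    lam = math.exp(target_flow - U_BAR - math.log(benchmark["consumption"])
                   - benchmark["leisure_value"] + benchmark["sigma"] ** 2 / 2)
    print(f"consumption-equivalent ratio: {lam:.2f}")  # about 0.79 with these toy numbers

With these toy numbers the gap narrows from 60 percent in consumption terms to roughly 79 percent in welfare terms; the paper’s richer specification and real data are what produce the 92 percent figure cited above.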

A well-established alternative to GDP is the Human Development Index (HDI), inspired by Nobel Prize-winning economist Amartya Sen’s capabilities approach—improving access to the tools people use to live a fulfilling life. The index demonstrates the dangers of combining a number of indicators, each one measuring something relevant, without a conceptual structure for the trade-offs and for how the components should be weighted together. The late Martin Ravallion of the World Bank advocated instead for a multidimensional set of indicators, with the aggregation needed to construct them informed by talking to poor people about their priorities:

The role played by prices lies at the heart of the matter. It is widely agreed that prices can be missing for some goods and deceptive for others. There are continuing challenges facing applied economists in addressing these problems. However, it is one thing to recognize that markets and prices are missing or imperfect, and quite another to ignore them in welfare and poverty measurement. There is a peculiar inconsistency in the literature on multidimensional indices of poverty, whereby prices are regarded as an unreliable guide to the tradeoffs, and are largely ignored, while the actual weights being assumed in lieu of prices are not made explicit in the same space as prices. We have no basis for believing that the weights being used are any better than market price.5

Why Not a Dashboard?

One frequent proposal, which certainly has intuitive appeal, is replacing the political and policy focus on GDP growth and related macroeconomic statistics with a broader dashboard. But there are three big challenges related to what to display on the dashboard. First, which indicators? A proliferation of alternatives has focused on what their advocates think is important rather than being shaped by either theory or broad consensus, so potential users face an array of possibilities and can simply select what interests them. Second, there are trade-offs and dependencies between indicators, and although dashboards could be designed to display these clearly, they often do not. The third challenge, consequently, is how to weight or display the various component indicators for decision purposes.

Table 1 lists the headline categories for four frequently cited dashboards, showing how little they overlap. The selection of indicators to represent an underlying concept is evidently arbitrary, in the sense that the lists do not have a clear theoretical basis and are generally determined by what data are available or even by political negotiation. For instance, I was told by someone closely involved in the process that the debate within the UN about the SDGs (Sustainable Development Goals) included a discussion about the definition of a tree; depending on the height specified in the definition, coffee bushes might or might not be included, which for some countries would affect their measure of deforestation. Practicality and arbitrary decisions certainly affect mainstream economic statistics too, but these result from decades of debate and practice among the community of relevant experts, informed by a theoretical basis. We are not there yet with dashboards.

Still, there are many things people care about in life, even if one confines the question to their economic well-being. Indeed, one of my criticisms of using growth of real GDP as a guide was the flawed assumption that utility can be collapsed to a single dimension.

Comprehensive Wealth

If not well-being directly measured, nor (yet) a dashboard, nor a single index number alternative to GDP, what are the options? Consider comprehensive wealth. First, it embeds sustainability because of its focus on assets. Adding in effect a balance sheet recording stocks—or equivalently a full account of the flow of services provided by the assets—immediately highlights the key trade-off between present and future consumption. One measurement challenge is to identify the economically relevant assets and collect the underlying data. Focusing on assets revives an old debate in economics during the 1950s and early 1960s between the “two Cambridges”—Cambridge, Massachusetts, home to MIT and Harvard (where I did my PhD), and Cambridge, England (where I now work). That debate was about whether it made any sense to think of (physical) capital as a single aggregate when this would inevitably be a mash-up of many different types of physical buildings and equipment.

The American Cambridge (led by Paul Samuelson and Robert Solow) said yes, and the concept has become the “K” of production functions and growth accounting. The British Cambridge (particularly Piero Sraffa and Joan Robinson) disputed this, arguing for example that different vintages of capital would embed different generations of technology, so even a straightforward machine tool to stamp out components could not be aggregated with a twenty-year-old equivalent. Even the review articles discussing the debate (Cohen and Harcourt 2003,6 Stiglitz 19747) take sides, but the mainstream profession has given total victory to the U.S. single-aggregate version.

A second point in favor of a comprehensive wealth approach is that investment for future consumption always involves different types of assets in combination. This means it will be important to consider not just the stocks of different assets—whether machines, patents, or urban trees (which cool the ambient temperature)—but also the extent to which the services they provide are substitutes or complements for each other: What is the correlation matrix? A patent for a new gadget will require investment in specific machines to put it into production and may benefit from tree planting if the production process heats the factory; the trees may substitute for an air-conditioning plant and also for concrete flood defenses downstream if their roots absorb enough rain. A recent paper8 highlights the importance of understanding the complementarities: “So long as a particular irreversible capital good remains with its project, in many cases until it is scrapped, its contribution comes not solely on its own account but as a result of complementarity with other capital goods. The project’s income is not composed of distinct contributions from individual assets.”

A balance-sheet approach also helps integrate the role of debt into consideration of progress. Debt is how consumption occurs now at the expense of consumption in future. In addition to financial debt, whether issued by governments or businesses or owed by individuals, there is a large and unmeasured burden of debt to nature. In a range of natural capital assets, including a stable climate, past and current consumption is reducing future opportunities.

In summary, to track sustainable economic welfare, a comprehensive wealth approach is desirable, identifying separately the types of assets that contribute capital services to economic actors. Some of them have no natural volume units. (You can count the number of isotope ratio mass spectrometers, but how do you count the accumulated know-how of a top law firm?) Many will not have a market price at all, and if they do, it is likely not to be the shadow price relevant to social welfare, so the monetary valuation needed to aggregate individual assets (by putting them into a common unit of account) is problematic.9 And the complementarities and substitutability across categories need to be better understood, including non-market assets such as organizational capabilities. (The development economics literature talks about this in terms of institutions or social capital; Singapore had few physical assets and little manufacturing industry to speak of in 1946, so it clearly relied on other assets to become one of the world’s highest per capita income countries.)

This is a challenging measurement agenda to say the least, but it is an obvious path for statistical development. Some readers will find the sustainability argument the most persuasive. There are two other supporting rationales, though. One is that a significant body of economic theory (appealing to both neoclassical and heterodox economists) supports it:10,11 An increase in comprehensive wealth, at appropriately measured shadow prices, corresponds to an increase in social well-being. The other is that the statistical community has already started heading down this path with the agreement of UN statistical standards for measuring (some) natural capital and the services it provides.
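
As a minimal illustration of that criterion, here is a sketch with toy asset categories, quantities, and shadow prices invented for the example rather than drawn from any official methodology. The change in comprehensive wealth is the sum over asset types of each stock’s change valued at its shadow price, so depletion of natural capital can offset growth in produced and human capital.

    # Toy illustration of the comprehensive-wealth criterion; asset types,
    # quantities, and shadow prices are invented for the example.
    assets = {
        # asset type: (change in stock this period, shadow price per unit)
        "produced capital": (+120, 1.0),
        "human capital":    (+40,  2.5),
        "natural capital":  (-90,  3.0),  # depletion, valued at its shadow price
    }

    change_in_comprehensive_wealth = sum(
        delta_stock * shadow_price for delta_stock, shadow_price in assets.values()
    )
    # Positive: rising social well-being under this criterion; negative: not.
    print(change_in_comprehensive_wealth)  # -50.0 here: depletion outweighs the gains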

The 2025 System of National Accounts (SNA) revision will include a little more detail about how official statisticians should be implementing this. It is a giant step forward, conceptually and practically—although it does not go far enough in that it insists on the use of valuations as close as possible to market prices, when the main issue in accounting for the environment is that markets grotesquely misprice resource use. (SNA is an internationally agreed-upon framework for compiling economic data, providing a standardized approach to measuring economic activity, including GDP and other key economic variables, facilitating analysis and policy-making.)

Conclusion

Today’s official framework for measuring the economy dates from an era when physical capital was scarce and natural resources were seemingly unconstrained. Manufacturing was the leading sector of the economy, and digital technology was in its infancy. The original national accounts were created using a mechanical calculating machine, not on a computer. Digital technologies have transformed the structure of production and consumption, and at a time of such significant structural change the supply side of the economy needs to be taken seriously. Policy decisions taken now will affect people’s lives for decades to come because the structure of so many industries is changing significantly. It is no wonder industrial policy is back in fashion among policymakers.

Unfortunately, there are yawning gaps in our basic statistics. Official statisticians do important work even as many governments have been cutting their budgets. However, the focus of the statistical agencies is on incremental improvement to the existing System of National Accounts, which will change for the better, but not by much, when the new standards are confirmed in 2025. There are huge data-collection and analytical gaps in what is needed now (comprehensive wealth and time use), and a huge intellectual agenda for when those statistics do become available. Just as the production of the first GDP figures gave birth to theories of economic growth, so sustainable balance sheet and time-use metrics will be generative for economists thinking about how societies progress.

There is no doubt this area of economic statistics will continue to expand—because it is all too obvious that something new is needed. The critiques of the earlier Beyond GDP movement have given way to a more constructive period of statistical innovation—and I have given some examples of fruitful new methods and types of data.

However, I think some conclusions are clear. Measures that account for sustainability, natural and societal, are clearly imperative; the comprehensive wealth framework does this, and can potentially provide a broad scaffolding that others can use to tailor dashboards that serve specific purposes. A second conclusion is that while ideas have always driven innovation and progress, their role in adding value is even more central as the share of intangible value in the economy increases.

Finally, economic value added cannot be defined and measured without an underlying conception of value. This normative conception varies greatly between societies and over time, not least because of profound changes in technology and structure. It is a question of public philosophy as much as economics. Welfare economics has hardly moved on from the heyday of social choice theory in the 1970s, with social welfare defined as the sum of individual utilities; the philosophically rich capabilities approach has made little headway in everyday economics, except perhaps for development economics.

It is not yet clear whether the OECD economies will break away from the public philosophy of individualism and markets that has dominated policy for the past half century, despite all the critiques of neoliberalism; but the fact of popular discontent and its political consequences suggest they might. No wonder commentators so often reach for Gramsci’s famous Prison Notebooks comment, “The old order is dying and the new cannot be born; in this interregnum a great variety of morbid symptoms appear.”

If a new shared understanding of economic value emerges from the changes underway now, it will look quite different. It will acknowledge the importance of context and variety, moving beyond averages and “representative consumers.” It will incorporate collective outcomes alongside individual ones, while recognizing the differences between them due to pervasive externalities, spillovers, and scale effects. And, it will embed the economy in nature, appreciating the resource constraints that limit future growth.

Excerpted and adapted by the author from The Measure of Progress: Counting What Really Matters © 2025 Diane Coyle. Reprinted with permission of Princeton University Press.

Categories: Critical Thinking, Skeptic

The Other End of the Autism Spectrum

neurologicablog Feed - Tue, 04/29/2025 - 5:10am

In my previous post I wrote about how we think about and talk about autism spectrum disorder (ASD), and how RFK Jr misunderstands and exploits this complexity to weave his anti-vaccine crank narrative. There is also another challenge in the conversation about autism, which exists for many diagnoses – how do we talk about it in a way that is scientifically accurate, useful, and yet not needlessly stigmatizing or negative? A recent NYT op-ed by a parent of a child with profound autism had this to say:

“Many advocacy groups focus so much on acceptance, inclusion and celebrating neurodiversity that it can feel as if they are avoiding uncomfortable truths about children like mine. Parents are encouraged not to use words like “severe,” “profound” or even “Level 3” to describe our child’s autism; we’re told those terms are stigmatizing and we should instead speak of “high support needs.” A Harvard-affiliated research center halted a panel on autism awareness in 2022 after students claimed that the panel’s language about treating autism was “toxic.” A student petition circulated on Change.org said that autism ‘is not an illness or disease and, most importantly, it is not inherently negative.'”

I’m afraid there is no clean answer here; there are just tradeoffs. Let’s look at this question (essentially, how do we label ASD) from two basic perspectives – scientific and cultural. You may think that a purely scientific approach would be easier and result in a clear answer, but that is not the case. While science strives to be objective, the universe is really complex, and our attempts at making it understandable and manageable through categorization involve subjective choices and tradeoffs. As a physician I have had to become comfortable with this reality. Diagnoses are often squirrelly things.

When the profession creates or modifies a diagnosis, this is really a type of categorization. There are different criteria that we could potentially use to define a diagnostic label or category. We could use clinical criteria – what are the signs, symptoms, demographics, and natural history of the diagnosis in question? This is often where diagnoses begin their lives, as a pure description of what is being seen in the clinic. Clinical entities almost always present as a range of characteristics, because people are different and even specific diseases will manifest differently. The question then becomes – are we looking at one disease, multiple diseases, variations on a theme, or completely different processes that just overlap in the signs and symptoms they cause? This leads to the infamous “lumper vs splitter” debate – do we tend to lump similar entities together in big categories or split everything up into very specific entities, based on even tiny differences?

The more we learn about these burgeoning diagnoses, the more the diagnostic criteria might shift away from purely clinical description. Perhaps we find some laboratory marker (such as a result on a blood test, or a finding on an MRI scan of the brain). What if that marker has an 80% correlation to the clinical syndrome? How do we use that as a diagnostic criterion? The more we learn about pathophysiology, the more these specific biological factors become part of the diagnosis. Sometimes this leads to discrete diagnoses – such as when it is discovered that a specific genetic mutation causes a specific disease. The mutation becomes the diagnosis. But that is often not the case. The game changes again when treatments become available; then diagnostic criteria tend to shift toward those that predict response to treatment.

One question, therefore, when determining the best way to establish a specific diagnostic label is – what is your purpose? You might need a meaningful label that helps guide basic science research into underlying phenomena and facilitates discussion of it. You may need a diagnosis that helps predict natural history (prognosis), or that guides treatment, or you may need a box to check on the billing form for insurance, or you may need a diagnosis as a regulatory entity (for FDA approval of a drug, say).

ASD has many of these issues. Researchers like the spectrum approach because they see ASD as different manifestations of one type of underlying neurological phenomenon. There are many genes involved, and changes to the pattern of connectivity among brain cells. Clinicians may find this lumper approach a double-edged sword. It may help if there is a single diagnostic approach – scoring on standardized tests of cognitive, motor, language and social functioning, for example. But it also causes confusion because one label can mean such dramatically different things clinically. The diagnosis is also now often attached to services, so there is a very practical aspect to it (and one major reason why the diagnosis has increased in recent years – it gets you services that a less specific diagnosis might not).

Now let’s go to the social approach to the ASD diagnosis. The purely scientific approach is not clean because “science” can refer to basic science or clinical science, and the clinical side can have multiple different approaches. This means science cannot currently solve all the disputes over how the ASD diagnosis is made and used in our society. It’s ambiguous. One aspect of the debate is whether or not ASD should be considered a disease, a disorder, or just a spectrum of natural variation within the human species. Anti-vaxxers want to see it as a disease, something to be prevented and cured. This approach also tends to align better with the more disabled end of the spectrum. At the high functioning end of the spectrum, the preference is to look at ASD as simply atypical, and not inherently inferior or worse than neurotypical. On this view, the increased challenges of being autistic are really artificially created by a society dominated by neurotypicals. There are also, in fact, advantages to being neuroatypical in certain areas, such as jobs in coding and engineering. Highly sociable people have their challenges as well.

Here’s the thing – I think both of these approaches can be meaningful and useful at the same time. First, I don’t think we should shy away from terms like “profound” or “severe”. This is how neuroscience generally works. Everyone does and should have some level of anxiety, for example. Anxiety is adaptive. But some people have “severe” anxiety – anxiety that takes on a life of its own, or transitions from being adaptive to maladaptive. I don’t want to minimize the language debate. Words matter. Sometimes we just don’t have the words that mean exactly what we need them to mean, without unwanted connotations. We need a word that can express the spectrum without unwanted assumptions or judgement. How about “extreme”? Extreme does not imply bad. You can be extremely athletic, and no one would think that is a negative thing. Even if autism is just atypical, being extremely autistic implies you are at one end of the spectrum.

Also, as with anxiety, optimal function is often a mean between two extremes. No anxiety means you take unnecessary risks. Too much anxiety can be crippling. Having mildly autistic features may just represent a different set of neurological tradeoffs, with some advantages and some challenges, and because it is atypical some accommodation in a society not optimized for this type. But as the features get more extreme, the downsides become increasingly challenging until you have a severe disability.

This reminds me also of paranoia. A little bit of paranoia can be seen as typical, healthy, and adaptive. A complete absence of any suspiciousness might make someone naive and vulnerable. People with above average paranoia might not even warrant a diagnosis – that is just a personality type, with strengths and weaknesses. But the more extreme you get, the more maladaptive it becomes. At the extreme end it is a criterion for schizophrenia.

Or perhaps this is all just too complex for the public-facing side of this diagnosis (regulation, public education, etc). Perhaps we need to become splitters, and break ASD up into three or more different labels. Researchers can still have and use a technical category name that recognizes an underlying neurological commonality, but that does not need to be inflicted on the public and cause confusion. Again – there is no objective right or wrong here, just different choices. As I think I amply demonstrated in my prior post, using one label (autism) causes a great deal of confusion and can be exploited by cranks. What often happens, though, is that different groups make up the labels for their own purposes. When researchers make the labels, they favor technical basic-science criteria. When clinicians do, they favor clinical criteria. When regulators do, they want nice clean categories.

Sometimes all these levels play nicely together. With ASD it feels as if they are in conflict, with the more research-based labels holding sway and causing confusion for everyone else.

At the same time there is a tension between not imposing inaccurate and unnecessary judgement on a label like autism and recognizing that the condition can come with its own challenges: challenges that call for simple awareness at the mildest end of the spectrum, accommodation for those who experience difficulties and have needs, and actual treatment (if possible) at the more extreme end. These do not need to be mutually exclusive.

I do think we are evolving in a good direction, with more thoughtful diagnostic labels that explicitly serve a purpose without unnecessary assumptions or judgement. We may not be entirely there yet, but it’s a great conversation to have.

Categories: Skeptic

Skeptoid #986: Do Functional Mushrooms Function?

Skeptoid Feed - Tue, 04/29/2025 - 2:00am

Super mushrooms are claimed by some to provide vague health benefits beyond their known nutritional values.

Categories: Critical Thinking, Skeptic

Don’t Ban the Book: Kids Can Benefit From Challenging Stories

Skeptic.com feed - Mon, 04/28/2025 - 10:03am

During her sojourns among the Inuit throughout the 1960s and 70s, pioneering anthropologist Jean Briggs observed some peculiar parenting practices. In a chapter she contributed to The Anthropology of Peace and Nonviolence, a collection of essays from 1994, Briggs describes various methods the Inuit used to reduce the risk of physical conflict among community members. Foremost among them was the deliberate cultivation of modesty and equanimity, along with a penchant for reframing disputes or annoyances as jokes. “An immodest person or one who liked attention,” Briggs writes, “was thought silly or childish.” Meanwhile, a critical distinction held sway between seriousness and playfulness. “To be ‘serious’ had connotations of tension, anxiety, hostility, brooding,” she explains. “On the other hand, it was highest praise to say of someone: ‘He never takes anything seriously’.”1 The ideal then was to be happy, jocular, and even-tempered.

This distaste for displays of anger applied in the realm of parenting as well. No matter how unruly children’s behavior, adults would refrain from yelling at them. So, it came as a surprise to Briggs that Inuit adults would often purposely instigate conflicts among the children in their charge. One exchange Briggs witnessed involved an aunt taking her three-year-old niece’s hand and putting it in another child’s hair while telling her to pull it. When the girl refused, the aunt gave it a tug herself. The other child, naturally enough, turned around and hit the one she thought had pulled her hair. A fight ensued, eliciting laughter and cheers from the other adults, who intervened before anyone was hurt. None of the other adults who witnessed this incident seemed to think the aunt had done anything wrong.

On another occasion, Briggs witnessed a mother picking up a friend’s baby and saying to her own nursling, “Shall I nurse him instead of you?” The other mother played along, offering her breast to the first woman’s baby, saying, “Do you want to nurse from me? Shall I be your mother?”2 The nursling shrieked in protest, and both mothers burst into laughter. Briggs witnessed countless more of what she calls “playful dramas” over the course of her research. Westerners might characterize what the adults were doing in these cases as immature, often cruel pranks, even criminal acts of child abuse. What Briggs came to understand, however, was that the dramas served an important function in the context of Inuit culture. Tellingly, the provocations didn’t always involve rough treatment or incitements to conflict but often took the form of outrageous or disturbing lines of questioning. This approach is reflected in the title of Briggs’s chapter, “‘Why Don’t You Kill Your Baby Brother?’ The Dynamics of Peace in Canadian Inuit Camps.” However, even these gentler sessions were more interrogation than thought experiment, the clear goal being to arouse intense emotions in the children. 

From interviews with adults in the communities hosting her, Briggs gleaned that the purpose of these dramas was to force children to learn how to handle difficult social situations. The term they used is isumaqsayuq, meaning “to cause thought,” which Briggs notes is a “central idea of Inuit socialization.” “More than that,” she goes on, “and as an integral part of thought, the dramas stimulate emotion.” The capacity for clear thinking in tense situations—and for not taking the tension too seriously—would help the children avoid potentially dangerous confrontations. Briggs writes: 

The games were, themselves, models of conflict management through play. And when children learned to recognize the playful in particular dramas, people stopped playing those games with them. They stopped tormenting them. The children had learned to keep their own relationships smoother—to keep out of trouble, so to speak—and in doing so, they had learned to do their part in smoothing the relationships of others.3

The parents, in other words, were training the children, using simulated and age-calibrated dilemmas, to develop exactly the kind of equanimity and joking attitude they would need to mature into successful adults capable of maintaining a mostly peaceful society. They were prodding at the kids’ known sensitivities to teach them not to take themselves too seriously, because taking yourself too seriously makes you apt to take offense, and offense can often lead to violence. 

The Inuit’s aversion to being at the center of any drama and their penchant for playfulness in potentially tense encounters are far removed from our own culture. Yet their approach to socialization relies on an insight that applies universally, one that’s frequently paid lip service in the West but even more frequently lost sight of. Anthropologist Margaret Mead captures the idea in her 1928 ethnography Coming of Age in Samoa, writing, “The children must be taught how to think, not what to think.”4 People fond of spouting this truism today usually intend to communicate something diametrically opposite to its actual meaning, with the suggestion being that anyone who accepts rival conclusions must have been duped by unscrupulous teachers. However, the crux of the insight is that education should not focus on conclusions at all. Thinking is not about memorizing and being able to recite facts and propositions. Thinking is a process. It relies on knowledge, to be sure, but knowledge alone isn’t sufficient. It also requires skills.

Inuit Children. Photo by UC Berkeley, Department of Geography.

Cognitive psychologists label knowing that and knowing how as declarative and procedural knowledge, respectively.5 Declarative knowledge can be imparted by the more knowledgeable to the less knowledgeable—the earth orbits the sun—but to develop procedural knowledge or skills you need practice. No matter how precisely you explain to someone what goes into riding a bike, for instance, that person has no chance of developing the requisite skills without at some point climbing on and pedaling. Skills require training, which to be effective must incorporate repetition and feedback. 

What the Inuit understood, perhaps better than most other cultures, is that morality plays out far less in the realm of knowing what than in the realm of knowing how. The adults could simply lecture the children about the evils of getting embroiled in drama, but those children would still need to learn how to manage their own aggressive and retributive impulses. And explaining that the most effective method consists of reframing slights as jokes is fine, but no child can be expected to master the trick on first attempt. So it is with any moral proposition. We tell young children it’s good to share, for instance, but how easy is it for them to overcome their greedy impulses? And what happens when one moral precept runs up against another? It’s good to share a toy sword, but should you hand it over to someone you suspect may use it to hurt another child? Adults face moral dilemmas like this all the time. It’s wrong to cheat on your spouse, but what if your spouse is controlling and threatens to take your children if you file for divorce? It’s good to be honest, but should you lie to protect a friend? There’s no simple formula that applies to the entire panoply of moral dilemmas, and even if there were, it would demand herculean discipline to implement. 

Unfortunately, Western children have a limited range of activities that provide them opportunities to develop their moral skillsets. Perhaps it’s a testament to the strength of our identification with our own moral principles that few of us can abide approaches to moral education that are in any regard open-ended. Consider children’s literature. As I write, political conservatives in the U.S. are working to impose bans on books6 they deem inappropriate for school children. Meanwhile, more left-leaning citizens are being treated to PC bowdlerizations7 of a disconcertingly growing8 list of classic books. One side is worried about kids being indoctrinated with life-deranging notions about race and gender. The other is worried about wounding kids’ and older readers’ fragile psyches with words and phrases connoting the inferiority of some individual or group. What neither side appreciates is that stories can’t be reduced to a set of moral propositions, and that what children are taught is of far less consequence than what they practice.

Do children’s books really have anything in common with the playful dramas Briggs observed among the Inuit? What about the fictional stories adults in our culture enjoy? One obvious point of similarity is that stories tend to focus on conflict and feature high-stakes moral dilemmas. The main difference is that reading or watching a story entails passively witnessing the actions of others, as opposed to actively participating in the plots. Nonetheless, the principle of isumaqsayuq comes into play as we immerse ourselves in a good novel or movie. Stories, if they’re at all engaging, cause us to think. They also arouse intense emotions. But what could children and adults possibly be practicing when they read or watch stories? If audiences were simply trying to figure out how to work through the dilemmas faced by the protagonists, wouldn’t the outcome contrived by the author represent some kind of verdict, some kind of lesson? In that case, wouldn’t censors be justified in their efforts at protecting children from the wrong types of lessons? 

To answer these questions, we must consider why humans are so readily held rapt by fictional narratives in the first place. If the events we’re witnessing aren’t real, why do we care enough to devote time and mental resources to them? The most popular stories, at least in Western societies, feature characters we favor engaging in some sort of struggle against characters we dislike—good guys versus bad guys. In his book Just Babies: The Origins of Good and Evil, psychologist Paul Bloom describes a series of experiments9 he conducted with his colleague Karen Wynn, along with their then graduate student Kiley Hamlin. They used what he calls “morality plays” to explore the moral development of infants. In one experiment, the researchers had the babies watch a simple puppet show in which a tiger rolls a ball to one rabbit and then to another. The first rabbit rolls the ball back to the tiger and a game ensues. But the second rabbit steals away with the ball at first opportunity. When later presented with both puppets and encouraged to reach for one to play with, the babies who had witnessed the exchanges showed a strong preference for the one who had played along. What this and several related studies show is that by as early as three months of age, infants start to prefer characters who are helpful and cooperative over those who are selfish and exploitative.

That such a preference would develop so early and so reliably in humans makes a good deal of sense in light of how deeply dependent each individual is on other members of society. Throughout evolutionary history, humans have had to cooperate to survive, but any proclivity toward cooperation left them vulnerable to exploitation. This gets us closer to the question of what we’re practicing when we enjoy fiction. In On the Origin of Stories: Evolution, Cognition, and Fiction, literary scholar Brian Boyd points out that animals’ play tends to focus on activities that help them develop the skills they’ll need to survive, typically involving behaviors like chasing, fleeing, and fighting. When it comes to what skills are most important for humans to acquire, Boyd explains: 

Even more than other social species, we depend on information about others’ capacities, dispositions, intentions, actions, and reactions. Such “strategic information” catches our attention so forcefully that fiction can hold our interest, unlike almost anything else, for hours at a stretch.10

Fiction, then, can be viewed as a type of imaginative play that activates many of the same evolved cognitive mechanisms as gossip, but without any real-world stakes. This means that when we’re consuming fiction, we’re not necessarily practicing to develop equanimity in stressful circumstances as the Inuit do; rather, we’re honing our skills at assessing people’s proclivities and weighing their potential contributions to our group. Stories, in other words, activate our instinct for monitoring people for signals of selfish or altruistic tendencies, while helping us develop the underlying skillset. The result of this type of play would be an increased capacity for cooperation, including an improved ability to recognize and sanction individuals who take advantage of cooperative norms without contributing their fair share. 

Ethnographic research into this theory of storytelling is still in its infancy, but the anthropologist Daniel Smith and his colleagues have conducted an intensive study11 of the role of stories among the Agta, a hunter-gatherer population in the Philippines. They found that 70 percent of the Agta stories they collected feature characters who face some type of social dilemma or moral decision, a theme that appears roughly twice as often as interactions with nature, the next most common topic. It turned out, though, that separate groups of Agta invested varying levels of time and energy in storytelling. The researchers treated this as an opportunity to examine what the impact of a greater commitment to stories might be. In line with the evolutionary account laid out by Boyd and others, the groups that valued storytelling more outperformed the other groups in economic games that demand cooperation among the players. This would mean that storytelling improves group cohesion and coordination, which would likely provide a major advantage in any competition with rival groups. A third important finding from this study is that the people in these groups knew who the best storytellers were, and they preferred to work with these talented individuals on cooperative endeavors, including marriage and childrearing. This has obvious evolutionary implications. 

Remarkably, the same dynamics at play in so many Agta tales are also prominent in classic Western literature. When literary scholar Joseph Carroll and his team surveyed thousands of readers’ responses to characters in 200 novels from authors like Jane Austen and Charles Dickens, they found that people see in them the basic dichotomy between altruists and selfish actors. They write: 

Antagonists virtually personify Social Dominance—the self-interested pursuit of wealth, prestige, and power. In these novels, those ambitions are sharply segregated from prosocial and culturally acquisitive dispositions. Antagonists are not only selfish and unfriendly but also undisciplined, emotionally unstable, and intellectually dull. Protagonists, in contrast, display motive dispositions and personality traits that exemplify strong personal development and healthy social adjustment. They are agreeable, conscientious, emotionally stable, and open to experience.12

Interestingly, openness to experience may be only loosely connected to cooperativeness and altruism, just as humor is only tangentially related to peacefulness among the Inuit. However, being curious and open-minded ought to open the door to the appreciation of myriad forms of art, including different types of literature, leading to a virtuous cycle. So, the evolutionary theory, while focusing on cooperation, leaves ample room for other themes, depending on the cultural values of the storytellers.


In a narrow sense then, cooperation is what many, perhaps most, stories are about, and our interest in them depends to some degree on our attraction to more cooperative, less selfish, individuals. We obsessively track the behavior of our fellow humans because our choices of who to trust and who to team up with are some of the most consequential in our lives. This monitoring compulsion is so powerful that it can be triggered by opportunities to observe key elements of people’s behavior—what they do when they don’t know they’re being watched—even when those people don’t exist in the real world. But what keeps us reading or watching once we’ve made our choices of which characters to root for? And, if one of the functions of stories is to help us improve our social abilities, what mechanism provides the feedback necessary for such training to be effective? 

Fiction can be viewed as a type of imaginative play that activates many of the same evolved cognitive mechanisms as gossip, but without any real-world stakes.

In Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction, literary scholar William Flesch theorizes that our moment-by-moment absorption in fictional plots can be attributed to our desire to see cooperators rewarded and exploiters punished. Citing experiments that showed participants were willing to punish people they had observed cheating other participants—even when the punishment came at a cost13 to the punishers— Flesch argues that stories offer us opportunities to demonstrate our own impulse to enforce norms of fair play. Within groups, individual members will naturally return tit for tat when they’ve been mistreated. For a norm of mutual trust to take hold, however, uninvolved third parties must also be willing to step in to sanction violators. Flesch calls these third-party players “strong reciprocators” because they respond to actions that aren’t directed at them personally. He explains that 

the strong reciprocator punishes or rewards others for their behavior toward any member of the social group, and not just or primarily for their individual interactions with the reciprocator.14

His insight here is that we don’t merely attend to people’s behavior in search of clues to their disposition. We also watch to make sure good and bad alike get their just deserts. And the fact that we can’t interfere in the unfolding of a fictional plot doesn’t prevent us from feeling that we should. Sitting on the edge of your seat, according to this theory, is evidence of your readiness to step in.

It doesn’t matter that a story is fictional if a central reason for liking it is to signal to others that we’re the type of person who likes the type of person portrayed in that story.

Another key insight emerging from Flesch’s work is that humans don’t merely monitor each other’s behavior. Rather, since they know others are constantly monitoring them, they also make a point of signaling that they possess desired traits, including a disposition toward enforcing cooperative norms. Here we have another clue to why we care about fictional characters and their fates. It doesn’t matter that a story is fictional if a central reason for liking it is to signal to others that we’re the type of person who likes the type of person portrayed in that story. Reading tends to be a solitary endeavor, but the meaning of a given story paradoxically depends in large part on the social context in which it’s discussed. We can develop one-on-one relationships with fictional characters for sure, but part of the enjoyment we get from these relationships comes from sharing our enthusiasm and admiration with nonfictional others. 

Children who read Harry Potter discuss which House the Sorting Hat would place them in, but you don’t hear many of them enthusiastically talking about Voldemort murdering Muggles.

This brings us back to the question of where feedback comes into the social training we get from fiction. One feedback mechanism relies on the comprehensibility and plausibility of the plot. If a character’s behavior strikes us as arbitrary or counter to their personality as we’ve assessed it, then we’re forced to think back and reassess our initial impressions—or else dismiss the story as poorly conceived. A character’s personality offers us a chance to make predictions, and the plot either confirms or disproves them. However, Flesch’s work points to another type of feedback that’s just as important. The children at the center of Inuit playful dramas receive feedback from the adults in the form of laughter and mockery. They learn that if they take the dramas too seriously and thus get agitated, then they can expect to be ridiculed. Likewise, when we read or watch fiction, we gauge other audience members’ reactions, including their reactions to our own reactions, to see if those responses correspond with the image of ourselves we want to project. In other words, we can try on traits and aspects of an identity by expressing our passion for fictional characters who embody them. The outcome of such experimentation isn’t determined solely by how well the identity suits the individual fan, but also by how well that identity fits within the wider social group. 

We obsessively track the behavior of our fellow humans because our choices of who to trust and who to team up with are some of the most consequential in our lives.

Parents worried that their children’s minds are being hijacked by ideologues will hardly be comforted by the suggestion that teachers and peers mitigate the impact of any book they read. Nor will those worried that their children are being inculcated with more or less subtle forms of bigotry find much reassurance in the idea that we’re given to modeling15 our own behavior on that of the fictional characters we admire. Consider, however, the feedback children receive from parents who respond to the mere presence of a book in a school library with outrage. What do children learn from parents’ concern that single words may harm or corrupt them? 

Kids are graduating high school with historically unprecedented rates of depression and anxiety.

Today, against a backdrop of increasing vigilance and protectiveness among parents, kids are graduating high school and moving on to college or the workforce with historically unprecedented rates of depression16 and anxiety,17 having had far fewer risky but rewarding experiences18 such as dating, drinking alcohol, getting a driver’s license, and working for pay. It’s almost as though the parents who should be helping kids learn to work through difficult situations by adopting a playful attitude have themselves become so paranoid and humorless that the only lesson they manage to impart is that the world is a dangerous place, one young adults with their fragile psyches can’t be trusted to navigate on their own.

Even pre-verbal infants are able to pick out the good guys from the bad.

Parents should, however, take some comfort from the discovery that even pre-verbal infants are able to pick out the good guys from the bad. As much as young Harry Potter fans discuss which Hogwarts House the Sorting Hat would place them in, you don’t hear19 many of them talking enthusiastically about how cool it was when Voldemort killed all those filthy Muggles. The other thing to keep in mind is that while some students may embrace the themes of a book just because the teacher assigned it, others will reject them for the same reason. It depends on the temperament of the child and the social group they hope to achieve status in.

Should parents let their kids read just anything? We must acknowledge that books, like playful dramas, need to be calibrated to the maturity levels of the readers. However, banning books deemed dangerous deprives children not only of a new perspective. It deprives them of an opportunity to train themselves for the difficulties they’ll face in the upcoming stages of their lives. If you’re worried your child might take the wrong message from a story, you can make sure you’re around to provide some of your own feedback on their responses. Maybe you could even introduce other books to them with themes you find more congenial. Should we censor words or images—or cease publication of entire books—that denigrate individuals or groups? Only if we believe children will grow up in a world without denigration. Do you want your children’s first encounter with life’s ugliness to occur in the wild, as it were, or as they sit next to you with a book spread over your laps? 

What should we do with great works by authors guilty of terrible acts? What about mostly good characters who sometimes behave badly? What happens when the bad guy starts to seem a little too cool? These are all great prompts for provoking thought and stirring emotion. Why would we want to take these training opportunities away from our kids? It's undeniable that books and teachers and fellow students and, yes, even parents themselves really do influence children to some degree. That influence, however, may not always be in the intended direction. Parents who devote more time and attention to their children's socialization can probably improve their chances of achieving desirable ends. However, it's also true that the most predictable result of any effort at exerting complete control over children's moral education is that their social development will be stunted.

Categories: Critical Thinking, Skeptic

How Should We Talk About Autism

neurologicablog Feed - Mon, 04/28/2025 - 4:31am

RFK Jr.’s recent speech about autism has sparked a lot of deserved anger. But like many things in life, it’s even more complicated than you think it is, and this is a good opportunity to explore some of the issues surrounding this diagnosis.

While the definition has shifted over the years (like most medical diagnoses), autism is currently considered a fairly broad spectrum sharing some underlying neurological features. At the most "severe" end of the spectrum (and to show you how fraught this issue is, even the use of the term "severe" is controversial), people with autism (or autism spectrum disorder, ASD) can be non-verbal or minimally verbal, have an IQ <50, and require full support to meet their basic daily needs. At the other end of the spectrum are extremely high-functioning individuals who are considered not "neurotypical" simply because they have a different set of strengths and challenges than more neurotypical people. One of the primary challenges is talking about the full spectrum of ASD under one label. The one thing it is safe to say is that RFK Jr. completely failed this challenge.

What our Health and Human Services Secretary said was that normal children:

“regressed … into autism when they were 2 years old. And these are kids who will never pay taxes, they’ll never hold a job, they’ll never play baseball, they’ll never write a poem, they’ll never go out on a date. Many of them will never use a toilet unassisted.”

This is classic RFK Jr. – he uses scientific data like the proverbial drunk uses a lamppost, for support rather than illumination. Others have correctly pointed out that he begins with his narrative and works backward (like a lawyer, because that is what he is). That narrative sits squarely in the sweet spot of anti-vaccine claims about autism, which David Gorski spells out in great detail here. RFK said:

“So I would urge everyone to consider the likelihood that autism, whether you call it an epidemic, a tsunami, or a surge of autism, is a real thing that we don’t understand, and it must be triggered or caused by environmental or risk factors. “

In RFK’s world, autism is a horrible disease that destroys children and families and is surging in such a way that there must be an “environmental” cause (wink, wink – we know he means vaccines). But of course RFK gets the facts predictable wrong, or at least exaggerated and distorted precisely to suit his narrative. It’s a great example of how to support a desired narrative by cherry picking and then misrepresenting facts. To use another metaphor, it’s like making one of those mosaic pictures out of other pictures. He may be choosing published facts but he arranges them into a false and illusory picture. RFK cited a recent study that showed that about 25% of children with autism were in the “profound” category. (That is another term recently suggested to refer to autistic children who are minimally verbal or have an IQ < 50. This is similar to “level 3” autism or “severe” autism, but with slightly different operational cutoffs.)

First, there is a range of estimates as to what percentage of autistic people would fit into the profound category, and he is choosing the high end. Also, most of the people in that category don't have the limitations that RFK listed. A 2024 study, for example, which relied upon surveys of parents of children with autism, found that only 10% fell into the "severe" category. Even within this category, only 67% had difficulty with dressing and bathing, or about 7% of children with autism. I am not trying to minimize the impact of the challenges and limitations of those at the severe end of the spectrum, just putting the data into context. What RFK was doing, which is what antivaxxers have been doing for decades, is trying to scare parents with a very specific narrative – perfect young children will get vaccinated and then regress into horrible autism that will destroy their lives and their families. 
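
To see where that 7 percent figure comes from, here is a minimal sketch of the arithmetic, using only the percentages cited above; the variable names and the Python framing are mine, not the study's:

```python
# A minimal sketch of the arithmetic behind the figures above. The percentages
# come from the 2024 parent-survey study cited in the text; variable names are mine.
severe_share = 0.10               # ~10% of autistic children were rated "severe"
difficulty_within_severe = 0.67   # ~67% of that subgroup had difficulty dressing or bathing

overall_share = severe_share * difficulty_within_severe
print(f"~{overall_share:.0%} of all autistic children")  # prints "~7%"
```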

What is regression? It is a loss of previous milestones or abilities. The exact rate in severe autism is unclear, with estimates ranging from 20 to 40 percent, but the 20 percent figure is considered more reliable. In any case, RFK misrepresents this as well. Regression does not mean that a 2-year-old child without autism develops severe autism – it means that a child with autism loses some function. Much of the time regression refers to social skills, with autistic children finding it more difficult to engage socially as they age (which can simply be adaptive and not require neurological regression). Language regression occurs but is less common. Again we see that he uses a piece of the picture, exaggerates it, and then uses it to imply a reality that does not exist.

He then does it again with the "surge" of autism. Yes, autism diagnoses have been increasing for decades. At first (during the 1990s) you could draw a correlation between the increasing number of vaccines in the childhood schedule and increasing autism diagnostic rates. This was always just a spurious correlation (my favorite example is that organic food sales track better with autism diagnoses than vaccination does). But after about 2000, when thimerosal was removed from the childhood vaccine schedule in the US, autism rates continued to increase. The correlation completely broke down. Antivaxxers desperately tried to explain away this breakdown in the correlation, with increasingly ridiculous special pleading, and now it seems they just ignore this fact.
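
As an aside, the "two rising trends" point is easy to demonstrate: any two series that merely increase over the same period will correlate strongly, whatever their causal relationship. Here is a minimal sketch with entirely fabricated numbers (none of the values are real data):

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

# Two fabricated series that simply trend upward over the same span of years.
# The slopes and noise levels are arbitrary; nothing here is real data.
random.seed(0)
n_years = 21  # e.g., 1990 through 2010
series_a = [5 + 3 * i + random.gauss(0, 2) for i in range(n_years)]
series_b = [100 + 7 * i + random.gauss(0, 5) for i in range(n_years)]

# Pearson r comes out close to 1 simply because both series rise over time.
print(f"Pearson r = {statistics.correlation(series_a, series_b):.2f}")
```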

RFK simply ignores this fact and makes the more general observation that autism rates are increasing, which they are. But this increase does not fit his scary narrative, for at least two reasons. First, as I and others have pointed out, there is copious evidence in the literature that much of this apparent increase is due to changing diagnostic patterns. At the severe end of the spectrum there is some diagnostic substitution – in past decades, children who are now diagnosed with autism would have been diagnosed with mental retardation or something else less specific or just different. At the high-functioning end of the spectrum, children with autism likely would not have been diagnosed with anything at all. I have explored this issue at length before – the more carefully you look (applying the same diagnostic criteria across different age cohorts), the less autism is increasing. It is also true that autism is predominantly a genetic disorder, and that there are very early signs of autism, even in six-month-olds, and perhaps even at the fetal stage.

Second, the dramatic increase in autism diagnoses is mostly at the mild end of the spectrum. There is only a small increase in profound autism. So again, RFK's narrative breaks down when you look at the actual scientific facts. He says normal children regress into profound autism and that this is surging. But that is wrong. He is exploiting the fact that we use the same term, autism, to refer both to profound autism and to what was previously called "Asperger's syndrome" but is now just considered part of ASD.

All of this is sufficient evidence to conclude that RFK is incompetent to serve as HHS secretary: he does not understand medical science, and instead makes a lawyer's case for extreme conspiracy theories designed to scare the public into making bad medical choices.

But there is another side to this story (that has nothing to do with RFK). In our effort not to pathologize people who are simply atypical, are we overlooking people who actually have a severe disability, or at least making them and their parents feel that way? I’ll explore this side of the question in my next post.

The post How Should We Talk About Autism first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #1033 - Apr 26 2025

Skeptics Guide to the Universe Feed - Sat, 04/26/2025 - 8:00am
Quickie with Steve: Game Transfer Phenomenon; Geoengineering, Biosignature Candidate, Skull Rock on Mars; Commercial Perovskite Solar Panels; Who's That Noisy; Your Questions and E-mails: Another Unified Theory; Science or Fiction
Categories: Skeptic

Transgene-Free Gene Editing in Plants

neurologicablog Feed - Thu, 04/24/2025 - 4:59am

Regulations are a classic example of the proverbial double-edged sword. They are essential to create and maintain a free and fair market, to prevent exploitation, and to promote safety and the public interest. Just look at 19th-century America for countless examples of what happens without proper regulations (child labor, cities ablaze, patent medicines, and food that was a crap shoot). But regulations are also powerful tools with downsides of their own, including unintended consequences, regulatory overreach, ideological capture, and stifling bureaucracy. This is why optimal regulations should be minimalist, targeted, evidence-based, consensus-driven, and open to revision. This makes regulations also a classic example of Aristotle's rule of the "golden mean." Go too far to either extreme (too little or too onerous) and regulations can be a net negative.

The regulation of GMOs is an example, in my opinion, of ideological capture. The US actually has pretty good regulations, requiring study and approval for each new GMO product on the market, but no outright banning. You could argue that they are a bit too onerous to be optimal, ensuring that only large companies can afford to usher a new GMO product to market and therefore stifling competition from smaller companies. That's one of those unintended consequences. Some states, like Hawaii and Vermont, have instituted their own more restrictive regulations, based purely on ideology and not science or evidence. Europe is another story, with highly restrictive regulations on GMOs.

But in recent years scientific advances in genetics have cracked the door open for genetic modification in highly regulated environments. This is similar to what happened with stem cell research in the US. Use of embryonic stem cells was ideologically controversial, and ultimately federal funding for work on any new cell lines was banned by Bush in 2001. Scientists then discovered how to convert adult cells into induced pluripotent stem cells, mostly side-stepping these restrictions.

In the GMO space a similar thing has happened. With the advent of CRISPR and other technologies, it's possible to alter the genome of a plant without introducing a foreign gene. Increasingly these sorts of changes are being distinguished, from a regulatory perspective, from genetic modification that involves inserting a gene. Altering the genome without gene insertion is referred to as genetic engineering, rather than genetic modification, and the regulations for the use of genetic engineering (which include product labeling) are less onerous. This gives the industry an incentive to accomplish what it wants through genetic engineering, without triggering the rules for genetic modification.

This brings us to a couple of recent studies showcasing this approach. For some additional background, however, I need to mention that one currently used technique is to modify the genome of a plant with CRISPR or a similar method, then backcross the resulting engineered plants with unmodified plants in order to get rid of any foreign DNA left behind by the CRISPR process. This is laborious and often requires multiple generations to produce a plant with the desired mutations but no foreign DNA.

However, this technique does not work for every kind of plant. There are two categories in particular that are a problem – trees (or any slow-growing plant that would take years to reproduce), and sterile plants (like bananas). For these types of plants we need a new method that does not leave behind any foreign DNA and therefore does not require subsequent cross-breeding to get rid of it.

So – in January scientists published a study detailing “Transgene-free genome editing in poplar.” They report:

“Here, we describe an efficient method for generating gene-edited Populus tremula × P. alba (poplar) trees without incorporating foreign DNA into its genome. Using Agrobacterium tumefaciens, we expressed a base-editing construct targeting CCoAOMT1 along with the ALS genes for positive selection on a chlorsulfuron-containing medium.
About 50% of the regenerated shoots were derived from transient transformation and were free of T-DNA. Overall, 7% of the chlorsulfuron-resistant shoots were T-DNA free, edited in the CCoAOMT1 gene and nonchimeric.”

This means that they were able to use transiently expressed DNA in the cells that essentially made the genetic change and then went away. They used the bacterium A. tumefaciens as the vector. This worked in about half of the regenerated shoots. They also did genome-wide sequencing to weed out any shoots with any foreign DNA, and they had to eliminate shoots in which only some of the cells were altered (and which were therefore chimeric). So in 7% of the shoots the desired change was made, in all of the cells, without leaving behind any foreign DNA. No further breeding is required, and therefore this is a much quicker, cheaper, and more efficient method of making desirable changes (in this case they used a herbicide-resistance mutation, which was easy to test for).
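
For a rough sense of what that 7 percent yield means in practice, here is a back-of-the-envelope sketch; the rate comes from the quoted abstract, while the number of shoots screened is a hypothetical figure of my own, not from the paper:

```python
# Back-of-the-envelope screening arithmetic. The 7% usable rate comes from the
# quoted poplar abstract; the number of shoots screened is purely illustrative.
usable_rate = 0.07        # T-DNA-free, edited in the target gene, and nonchimeric
shoots_screened = 100     # hypothetical number of chlorsulfuron-resistant shoots sequenced

expected_usable = usable_rate * shoots_screened
print(f"Expected usable shoots: ~{expected_usable:.0f}")                   # ~7
print(f"Resistant shoots needed per usable line: ~{1 / usable_rate:.0f}")  # ~14
```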

Next up, published this month, was the same approach in the Cavendish banana – "An Agrobacterium-mediated base editing approach generates transgene-free edited banana." From what I can tell they used essentially the same method as with the poplar trees, although there are no authors in common between the two papers, so this appears to be an independent group. The authors of both papers are Flemish and cite each other's work, so I assume this is part of a collaborative project. I also see another paper doing a similar thing in bamboo, with Chinese authors.

The authors explicitly say that the benefit of this technique is to create cultivars that face less of a regulatory hurdle, so the point is primarily to avoid harsher regulations. While this is a great workaround, it's unfortunate that scientists need to develop a workaround just to please the anti-GMO crowd. Anti-GMO sentiments are not based on science; they are ideological, and largely driven by the organic industry for what seem transparently self-serving reasons. The benefits of genetic engineering in agriculture, though, are clear and increasingly necessary given the challenges we are facing. So the industry is somewhat quietly bypassing regulations, while some governments are quietly softening them, in order to reap the benefits without inflaming anti-GMO activists. Hopefully we can get to a largely post-anti-GMO world and get down to the business of feeding people and saving our crops from looming diseases and climate change.

The post Transgene-Free Gene Editing in Plants first appeared on NeuroLogica Blog.

Categories: Skeptic

The Lazarus Sign: When Faith and Medicine Diverge

Skeptic.com feed - Wed, 04/23/2025 - 5:50pm

My life changed in February of 1993. It began with an early morning phone call from a fellow student at our private Evangelical Christian college. I was informed that our mutual friend Tim had fallen asleep while driving home from a ski trip. He’d been critically injured in a terrible accident and was now lying unconscious in an Ohio hospital. Though it was over 100 miles away and we had class that morning, we left immediately.

Arriving, we were advised to prepare ourselves before seeing him. We tried, but how can one do so? We walked in and recoiled at what was left of our friend. Others came. We took turns praying over our dying friend after being assured by our spiritual leader that, if we prayed hard enough and believed, Tim would be healed.

Hovering over his body, we began our prayer. We held hands as we closed our eyes, me taking Tim’s left hand as we pleaded for a miracle. Tim lifted my hand in the air about six inches as we did so! I opened my eyes in wonderment, and considered interrupting the prayer, but chose to wait and show them. As soon as our leader said “Amen,” and everyone opened their eyes, Tim’s strength left and my hand fell with his.

If he was brain dead, how could he lift my hand?

Unsure what had happened, I told the others about Tim lifting my hand. It was unanimously agreed that God was communicating with me through Tim. It was such a fantastic coincidence that it could only be attributed to divine intervention. We asked ourselves, “If he was brain dead, how could he lift my hand?” And why, if not to send a message from God, did he do so at the precise moments our prayer began and ended?

A doctor examined Tim and told his parents their son's pupils were not responding to light, he was brain dead, and his body was shutting down. He respectfully advised them that they needed to prepare themselves for his death. The most devout among us corrected the good doctor, assuring him (and me, specifically) that Tim would rise again. The doctor kindly responded, "No. He has one foot in the grave." Our leader contradicted him, reminding us, "Jesus had two feet in the grave." I believed our leader.

Tim passed away three days later, as the doctor predicted he would. Our leader rationalized Tim's death (and the false assurances that he would be healed) as having been God's will. We convinced ourselves that Tim, as a fellow believer, was now rejoicing in heaven, where we would meet him when our time came. I adopted Tim's raising of my hand into my testimony as I turned my life around.

My dying friend’s disinhibited spinal cord told him to raise his hand.

Over the years I’ve come to accept that my life-changing miracle of a hand-raising while brain dead was, in actuality, explainable. The kind doctor who tried to prepare Tim’s parents probably knew exactly why Tim lifted my hand, and he knew it wasn’t from divine intervention. My dying friend’s disinhibited spinal cord told him to raise his hand. My hand was lifted by a “reflex arc”—a residual signal passing through a neural pathway in Tim’s spinal column and not, crucially, through his (no longer registering) brain.12 Neither Tim nor the Holy Spirit was responsible.


Raising one’s limbs, in reality, is common for those experiencing brain death.3 First reported in 1974, “brain death-associated reflexes and automatisms” are frequent enough to have gained a moniker, “the Lazarus Sign.”4 People experiencing brain death have been recorded doing much more than raising another’s hand too, including hugging motions for up to 30 seconds, rapidly jerking all four limbs for up to eight inches, and symmetric movement of both arms.5

Raising one’s limbs, in reality, is common for those experiencing brain death.

There is another seemingly inexplicable facet to the story, though: If raising my hand can be explained naturally, what then of the incredible coincidence that my hand was raised and lowered at the same moment when the group prayer began and ended?

Swiss psychologist Carl Jung might describe my experience as an example of “synchronicity,” i.e., an acausal connecting principle.6 According to Jung and his adherents, science cannot offer a reasonable causal connection to explain why a brain-dead man lifted my hand at the exact moment a prayer began and dropped it at the exact moment the prayer ended.7 Jung adherents claim the odds are so improbable that the connection must be cosmic.8

Interpreting Tim’s act of lifting my hand as a ‘miracle’ was the result of my creative license, probability, and desire to find meaning.

But science can explain the coincidence. My profound coincidence was causal. Interpreting Tim's act of lifting my hand at a certain moment as a "miracle" was the result of my creative license, probability, and desire to find a pattern and meaning through trauma. In fact, research through the years has revealed much about the phenomenon of coincidence. This can be illustrated through a skeptical examination of a much more widely known set of coincidences: the lists of eerie comparisons between the assassinations of Abraham Lincoln and John F. Kennedy. The first such list appeared in a GOP newsletter the year after Kennedy's assassination; these lists typically include the following:9

  • “Lincoln” and “Kennedy” each have seven letters.
  • Both presidents were elected to Congress in ’46 and later to the presidency in ’60.
  • Both assassins, John Wilkes Booth and Lee Harvey Oswald, were born in ’39 and were known by their three names, which were composed of fifteen letters.
  • Both presidents were succeeded by southerners named Johnson.
  • Booth ran from a theater and was caught in a warehouse; Oswald ran from a warehouse and was caught in a theater.
  • Oswald and Booth were killed before they could be put on trial.

And so on…10

How Coincidences Work

1. Creative License and the Role of Context

First, the likelihood of noticing a coincidence depends on how flexibly we define what counts as one.11 Given enough creative license and disregarding context,12 one can find coincidences in any two events. Let us look, for example, at the two other presidential assassinations, those of James A. Garfield and William McKinley. Both "Garfield" and "McKinley" have eight letters, both were Ohioans, both served as officers in the Civil War on the same side, both were shot twice in the torso, and both of their successors were from New York state.

Creative license is also used to justify such coincidences: Booth ran from a theater and was caught in a warehouse; Oswald ran from a warehouse and was caught in a theater. Booth did run from Ford’s Theater, and Oswald was indeed apprehended in a movie house called “The Texas Theater.”13 John Wilkes Booth did not, however, get caught in a warehouse. A federal soldier named Boston Corbett shot him from outside a burning tobacco barn in Bowling Green, VA, on April 26, 1865. Booth was dragged out still alive and died later that day.14

Our brains are wired to create order from chaos.

Creative license is also at work in Both presidents were elected to Congress in '46 and later to the presidency in '60. The apostrophe preceding each year glosses over the fact that Lincoln and Kennedy were elected to these offices 100 years apart from each other. In context, the "coincidence" doesn't seem so incredible.

2. Probability

Coincidences are counterintuitive. Consider the probability found in three of the Lincoln and Kennedy coincidences:

Both presidents were elected to Congress in '46 and later to the presidency in '60. We elect our representatives to Congress only in even-numbered years, and a president every four years. That rules out all odd-numbered years from the start, cutting the pool of possible year endings in half and making a match far less surprising.

Both presidents were succeeded by southerners named Johnson. “Johnson” is second only to “Smith” as the most common surname in the U.S.15 Both northern presidents (Lincoln was from Illinois, Kennedy from Massachusetts) needed a southerner to balance the ticket. In the years following the American Civil War, it wasn’t until 1992 that a ticket with two southerners (Clinton and Gore) won the presidency.16

Oswald and Booth were killed before they could be put on trial.17 Booth and Oswald were the subjects of nationwide manhunts and unprecedented vitriol. It is little wonder they were murdered before their trials.

Being elected in years that end in the same two digits, having a successor with a popular surname, and an assassin who was killed before being brought to trial are not at all impossible; indeed, they are relatively probable.
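
To put a rough number on that intuition, here is a toy calculation; the attribute count and the per-attribute match probability are assumptions of mine for illustration, not figures from the article. If you compare two historical figures on 30 loosely defined attributes, each with a 1-in-10 chance of matching by luck, a handful of matches is the expected outcome rather than a miracle:

```python
from math import comb

# Toy binomial calculation with assumed numbers (not actual Lincoln/Kennedy data):
# 30 comparable attributes, each with a 10% chance of matching purely by chance.
n, p, k = 30, 0.10, 3

# Probability of at least k chance matches among the n attributes.
p_at_least_k = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"P(at least {k} matches out of {n}) = {p_at_least_k:.2f}")  # roughly 0.59
```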

3. Looking for Meaning

Science has shown us that people who describe themselves as religious or spiritual (that is, those seeking meaning and those searching for signs) are more likely to experience coincidences.18 Our brains are wired to create order from chaos,19 and the days following each presidential assassination were overwhelmingly chaotic. The country was shocked when Presidents Lincoln and Kennedy were assassinated, and it seemed just too simple that such inspiring leaders could be shot down by two relative nobodies who would otherwise be forgotten by history.

Was my experience with my dying friend a divine sign? Was it acausal? Probably not.

But both tragic events were really that simple. John Wilkes Booth shot Lincoln while the president was watching Our American Cousin in Ford’s Theater in Washington, DC.20 Lee Harvey Oswald shot John F. Kennedy from the sixth-floor window of the Texas School Book Depository in Dallas.21

Applying Creative License, Probability, and Looking for Meaning to My Profound Coincidence

Let us now return to my profound coincidence. I used creative license in accepting Tim's raising of my hand as miraculous. I was desperately looking for any sign that he could communicate with me and took it as such. My friend dying young in a car accident doesn't defy probability at all: the National Safety Council reports that 6,400 Americans die annually from falling asleep while driving.22 Tim raising one of our hands is probable, too. Movement of the body from residual spinal activity has been found in up to a third of those suffering from brain death.23 During my time with Tim at the hospital, I was surrounded by Evangelicals who assured me of my friend's resurrection. Being confronted with the unexpected loss of a loved one heightened my emotions. I was more susceptible to believing in miracles than in my normal, rational state.

Was my experience with my dying friend a divine sign? Was it acausal? Probably not. Science has shown me the spiritual “meaning” I once attributed to Tim raising my hand was, in reality, meaningless. As years have gone by, I still stay in touch with a few of my friends who surrounded Tim. We are middle-aged now, with children of our own. Sometimes we remember Tim together. And that’s enough.

Categories: Critical Thinking, Skeptic

Skeptoid #985: Supervolcanoes and Super Earthquakes

Skeptoid Feed - Tue, 04/22/2025 - 2:00am

A roundup of the world's riskiest volcanoes and fault zones — and they're not necessarily the most hazardous.

Categories: Critical Thinking, Skeptic

The Vatican: City, City-State, Nation, or … Bank?

Skeptic.com feed - Mon, 04/21/2025 - 9:43am

Many think of Vatican City only as the seat of governance for the world’s 1.3 billion Roman Catholics. Atheist critics view it as a capitalist holding company with special privileges. However, that postage-stamp parcel of land in the center of Rome is also a sovereign nation. It has diplomatic embassies—so-called apostolic nunciatures—in over 180 countries, and has permanent observer status at the United Nations.

Only by knowing the history of the Vatican's sovereign status is it possible to understand how radically different it is from other countries. For over 2,000 years the Vatican has been a nonhereditary monarchy. Whoever is Pope is its supreme leader, vested with sole decision-making authority over all religious and temporal matters. There is no legislature, judiciary, or any system of checks and balances. Even the worst of Popes—and there have been some truly terrible ones—are sacrosanct. There has never been a coup, a forced resignation, or a verifiable murder of a Pope. In 2013, Pope Benedict became the first pope to resign in 600 years. Problems of cognitive decline get swept under the rug. In vesting unchecked power in a single man, the Vatican is closest in its governance style to a handful of absolute monarchies such as Saudi Arabia, Brunei, Oman, Qatar, and the UAE. 

During the Renaissance, Popes were feared rivals to Europe’s most powerful monarchies.

From the 8th century until 1870 the Vatican was a semifeudal secular empire called the Papal States that controlled most of central Italy. During the Renaissance, Popes were feared rivals to Europe’s most powerful monarchies. Popes believed God had put them on earth to reign over all other worldly rulers. The Popes of the Middle Ages had an entourage of nearly a thousand servants and hundreds of clerics and lay deputies. That so-called Curia—referring to the court of a Roman emperor—became a Ladon-like network of intrigue and deceit composed largely of (supposedly) celibate single men who lived and worked together at the same time they competed for influence with the Pope. 

The cost of running the Papal States, while maintaining one of Europe’s grandest courts, kept the Vatican under constant financial strain. Although it collected taxes and fees, had sales of produce from its agriculturally rich northern region, and rents from its properties throughout Europe, it was still always strapped for cash. The church turned to selling so-called indulgences, a sixth-century invention whereby the faithful paid for a piece of paper that promised that God would forgo any earthly punishment for the buyer’s sins. The early church’s penances were often severe, including flogging, imprisonment, or even death. Although some indulgences were free, the best ones—promising the most redemption for the gravest sins—were expensive. The Vatican set prices according to the severity of the sin.

The Church had to twice borrow from the Rothschilds.

All the while, the concept of a budget or financial planning was anathema to a succession of Popes. The humiliating low point came when the Church had to twice borrow from the Rothschilds, Europe’s preeminent Jewish banking dynasty. James de Rothschild, head of the family’s Paris-based headquarters, became the official Papal banker. By the time the family bailed out the Vatican, it had only been thirty-five years since the destabilizing aftershocks from the French Revolution had led to the easing of harsh, discriminatory laws against Jews in Western Europe. It was then that Mayer Amschel, the Rothschild family patriarch, had walked out of the Frankfurt ghetto with his five sons and established a fledgling bank. Little wonder the Rothschilds sparked such envy. By the time Pope Gregory asked for the first loan they had created the world’s biggest bank, ten times larger than their closest rival. 

The Vatican’s institutional resistance to capitalism was a leftover of Middle Age ideologies, a belief that the church alone was empowered by God to fight Mammon, a satanic deity of greed. Its ban on usury—earning interest on money loaned or invested—was based on a literal biblical interpretation. The Vatican distrusted capitalism since it thought secular activists used it as a wedge to separate the church from an integrated role with the state. In some countries, the “capitalist bourgeoisie”—as the Vatican dubbed it—had even confiscated church land for public use. Also fueling the resistance to modern finances was the view that capitalism was mostly the province of Jews. Church leaders may not have liked the Rothschilds, but they did like their cash. 

The Church’s sixteen thousand square miles was reduced to a tiny parcel of land.

In 1870, the Vatican lost its earthly empire overnight when Rome fell to the nationalists who were fighting to unify Italy under a single government. The Church’s sixteen thousand square miles was reduced to a tiny parcel of land. The loss of its Papal States income meant the church was teetering on the verge of bankruptcy. 

St. Peter's Basilica, Vatican City, Rome (Photograph by Bernd Marx)

From then on, the Vatican survived on something called Peter's Pence, a fundraising practice that had been popular a thousand years earlier with the Saxons in England (and later banned by Henry VIII when he broke with Rome and declared himself head of the Church of England). The Vatican pleaded with Catholics worldwide to contribute money to support the Pope, who had declared himself a prisoner inside the Vatican and refused to recognize the new Italian government's sovereignty over the Church. 

During the nearly 60-year stalemate that followed, the Vatican's insular and mostly incompetent financial management kept it under tremendous pressure. The Vatican would have gone bankrupt if Mussolini had not saved it. Il Duce, Italy's fascist leader, was no fan of the Church, but he was enough of a political realist to know that 98 percent of Italians were Catholics. In 1929, the Vatican and the Fascist government executed the Lateran Pacts, which gave the Church the most power it had held since the height of its temporal kingdom. The pacts set aside 108.7 acres as Vatican City, along with fifty-two scattered "heritage" properties, as an autonomous neutral state. They reinstated Papal sovereignty and ended the Pope's boycott of the Italian state. 

The settlement—worth about $1.6 billion in 2025 dollars—was approximately a third of Italy’s entire annual budget.

The Lateran Pacts declared the Pope was "sacred and inviolable," the equivalent of a secular monarch, and acknowledged he was invested with divine rights. A new Code of Canon Law made Catholic religious education obligatory in state schools. Cardinals were invested with the same rights as princes by blood. All church holidays became state holidays and priests were exempted from military and jury duty. A three-article financial convention granted "ecclesiastical corporations" full tax exemptions. It also compensated the Vatican for the confiscation of the Papal States with 750 million lire in cash and a billion lire in government bonds that paid 5 percent interest. The settlement—worth about $1.6 billion in 2025 dollars—was approximately a third of Italy's entire annual budget and a desperately needed lifeline for the cash-starved church. 

Satirical depiction of Pope Pius XI and Benito Mussolini during the Lateran Treaty negotiations. (Illustration by Erich Schilling, for the cover of Simplicissimus magazine, March 1929.)

Pius XI, the Pope who struck the deal with Mussolini, was savvy enough to know that he and his fellow cardinals needed help managing the enormous windfall. He therefore brought in a lay outside advisor, Bernardino Nogara, a devout Catholic with a reputation as a financial wizard. 

Nogara took little time in upending hundreds of years of tradition. He ordered, for instance, that every Vatican department produce annual budgets and issue monthly income and expense statements. The Curia bristled when he persuaded Pius to cut employee salaries by 15 percent. And after the 1929 stock market crash, Nogara made investments in blue-chip American companies whose stock prices had plummeted. He also bought prime London real estate at fire-sale prices. As tensions mounted in the 1930s, Nogara further diversified the Vatican’s holdings in international banks, U.S. government bonds, manufacturing companies, and electric utilities. 

Only seven months before the start of World War II, the church got a new Pope, Pius XII, one who had a special affection for Germany (he had been the Papal Nuncio—ambassador—to Germany). Nogara warned that the outbreak of war would greatly test the financial empire he had so carefully crafted over a decade. When the hot war began in September 1939, Nogara realized he had to do more than shuffle the Vatican’s hard assets to safe havens. He knew that beyond the military battlefield, governments fought wars by waging a broad economic battle to defeat the enemy. The Axis powers and the Allies imposed a series of draconian decrees restricting many international business deals, banning trading with the enemy, prohibiting the sale of critical natural resources, and freezing the bank accounts and assets of enemy nationals. 

The United States was the most aggressive, searching for countries, companies, and foreign nationals who did any business with enemy nations. Under President Franklin Roosevelt’s direction, the Treasury Department created a so-called blacklist. By June 1941 (six months before Pearl Harbor and America’s official entry into the war), the blacklist included not only the obvious belligerents such as Germany and Italy, but also neutral nations such as Switzerland, and the tiny principalities of Monaco, San Marino, Liechtenstein, and Andorra. Only the Vatican and Turkey were spared. The Vatican was the only European country that proclaimed neutrality that was not placed on the blacklist. 

There was a furious debate inside the Treasury department about whether Nogara’s shuffling and masking of holding companies in multiple European and South American banking jurisdictions was sufficient to blacklist the Vatican. It was only a matter of time, concluded Nogara, until the Vatican was sanctioned. 

The Vatican Bank could operate anywhere worldwide, did not pay taxes … disclose balance sheets, or account to any shareholders.

Every financial transaction left a paper trail through the central banks of the Allies. Nogara needed to conduct Vatican business in secret. The June 27, 1942, formation of the Istituto per le Opere di Religione (IOR)—the Vatican Bank—was heaven sent. Nogara drafted a chirograph (a handwritten declaration), a six-point charter for the bank, and Pius signed it. Since its only branch was inside Vatican City—which, again, was not on any blacklist—the IOR was free of any wartime regulations. The IOR was a mix between a traditional bank like J. P. Morgan and a central bank such as the Federal Reserve. The Vatican Bank could operate anywhere worldwide, did not pay taxes, did not have to show a profit, produce annual reports, disclose balance sheets, or account to any shareholders. Located in a former dungeon in the Torrione di Nicoló V (Tower of Nicholas V), it certainly did not look like any other bank. 

The Vatican Bank was created as an autonomous institution with no corporate or ecclesiastical ties to any other church division or lay agency. Its only shareholder was the Pope. Nogara ran it subject only to Pius’s veto. Its charter allowed it “to take charge of, and to administer, capital assets destined for religious agencies.” Nogara interpreted that liberally to mean that the IOR could accept deposits of cash, real estate, or stock shares (that expanded later during the war to include patent royalty and reinsurance policy payments). 

Many nervous Europeans were desperate for a wartime haven for their money. Rich Italians, in particular, were anxious to get cash out of the country. Mussolini had decreed the death penalty for anyone exporting lire from Italian banks. Of the six countries that bordered Italy, the Vatican was the only sovereignty not subject to Italy’s border checks. The formation of the Vatican Bank meant Italians needed only a willing cleric to deposit their suitcases of cash without leaving any paper trail. And unlike other sovereign banks, the IOR was free of any independent audits. It was required—supposedly to streamline recordkeeping—to destroy all its files every decade (a practice it followed until 2000). The IOR left virtually nothing by which postwar investigators could determine if it was a conduit for shuffling wartime plunder, held accounts, or money that should be repatriated to victims. 

The Vatican immediately dropped off the radar of U.S. and British financial investigators.

The IOR’s creation meant the Vatican immediately dropped off the radar of U.S. and British financial investigators. It allowed Nogara to invest in both the Allies and the Axis powers. As I discovered in research for my 2015 book about church finances, God’s Bankers: A History of Money and Power at the Vatican, Nogara’s most successful wartime investment was in German and Italian insurance companies. The Vatican earned outsized profits when those companies escheated the life insurance policies of Jews sent to the death camps and converted the cash value of the policies. 

After the war, the Vatican claimed it had never invested in or made money from Nazi Germany or Fascist Italy. All its wartime investments and money movements were hidden by Nogara's impenetrable, Byzantine offshore network. The only proof of what happened was in the Vatican Bank archives, sealed to this day. (I have written opinion pieces in The New York Times, Washington Post, and Los Angeles Times, calling on the church to open its wartime Vatican Bank files for inspection. The Church has ignored those entreaties.) 

Its ironclad secrecy made it a popular postwar offshore tax haven for wealthy Italians wanting to avoid income taxes.

While the Vatican Bank was indispensable to the church's enormous wartime profits, the very features that made it useful—no transparency or oversight, no checks and balances, no adherence to international banking best practices—became its weakness going forward. Its ironclad secrecy made it a popular postwar offshore tax haven for wealthy Italians wanting to avoid income taxes. Mafia dons cultivated friendships with senior clergy and used them to open IOR accounts under fake names. Nogara retired in the 1950s. The laymen who had been his aides were not nearly as clever or imaginative as he was, and his departure opened the Vatican Bank to the influence of lay bankers. One, Michele Sindona, was dubbed "God's Banker" by the press in the mid-1960s for the tremendous influence and deal-making he had with the Vatican Bank. Sindona was a flamboyant banker whose investment schemes always pushed against the letter of the law. (Years later he would be convicted of massive financial fraud and of ordering a murder, and would himself be killed in an Italian prison.) 

Compounding the damage of Sindona's direction of church investments, the Pope's pick to run the Vatican Bank in the 1970s was a loyal monsignor, Chicago-born Paul Marcinkus. The problem was that Marcinkus knew almost nothing about finance or running a bank. He later told a reporter that when he got the news that he would oversee the Vatican Bank, he visited several banks in New York and Chicago and picked up tips. "That was it. What kind of training you need?" He also bought some books about international banking and business. One senior Vatican Bank official worried that Marcinkus "couldn't even read a balance sheet." 

Marcinkus allowed the Vatican Bank to become more enmeshed with Sindona, and later another fast-talking banker, Roberto Calvi. Like Sindona, Calvi would also later be on the run from a host of financial crimes and frauds, but he never got convicted. He was instead found hanging in 1982 under London’s Blackfriars Bridge. 

“You can’t run the church on Hail Marys.” —Vatican Bank head Paul Marcinkus, defending the Bank’s secretive practices in the 1980s.

By the 1980s the Vatican Bank had become a partner in questionable ventures in offshore havens from Panama and the Bahamas to Liechtenstein, Luxembourg, and Switzerland. When one cleric asked Marcinkus why there was so much mystery about the Vatican Bank, Marcinkus dismissed him saying, “You can’t run the church on Hail Marys.” 

All the secret deals came apart in the early 1980s when Italy and the U.S. opened criminal investigations on Marcinkus. Italy indicted him but the Vatican refused to extradite him, allowing Marcinkus instead to remain in Vatican City. The standoff ended when all the criminal charges were dismissed and the church paid a stunning $244 million as a “voluntary contribution” to acknowledge its “moral involvement” with the enormous bank fraud in Italy. (Marcinkus returned a few years later to America where he lived out his final years at a small parish in Sun City, Arizona.) 

Throughout the 1990s and into the early 2000s, the Vatican Bank remained an offshore bank in the heart of Rome.

It would be reasonable to expect that, after having allowed itself to be used by a host of fraudsters and criminals, the Vatican Bank would have cleaned up its act. It did not. Although the Pope talked a lot about reform, the bank kept the same secret operations, expanding even into massive offshore deposits disguised as fake charities. The combination of lots of money, much of it in cash, and no oversight again proved a volatile mixture. Throughout the 1990s and into the early 2000s, the Vatican Bank remained an offshore bank in the heart of Rome. It was increasingly used by Italy's top politicians, including prime ministers, as a slush fund for everything from buying gifts for mistresses to paying off political foes. 

Italy’s tabloids, and a book in 2009 by a top investigative journalist Gianluigi Nuzzi, exposed much of the latest round of Vatican Bank mischief. It was not, however, the public shaming of “Vatileaks” that led to any substantive reforms in the way the Church ran its finances. Many top clerics knew that as a 2,000-year-old institution, if they waited patiently for the public outrage to subside, the Vatican Bank could soon resume its shady dealings. 

In 2000, the Church signed a monetary convention with the European Union by which it could issue its own euro coins.

What changed everything in the way the Church runs its finances came unexpectedly, from a decision about a common currency—the euro—that at the time seemed unrelated to the Vatican Bank. Italy stopped using the lira and adopted the euro in 1999. That created a quandary for the Vatican, which had always used the lira as its own currency. The Vatican debated whether to issue its own currency or to adopt the euro. In December 2000, the church signed a monetary convention with the European Union by which it could issue its own euro coins (distinctively stamped with Città del Vaticano) as well as commemorative coins that it marked up substantially to sell to collectors. Significantly, that agreement did not bind the Vatican, or two other non-EU nations that had accepted the euro—Monaco and Andorra—to abide by strict European statutes regarding money laundering, antiterrorism financing, fraud, and counterfeiting. 

A Vatican 50 Euro Cent Coin, issued in 2016

What the Vatican did not expect was that the Organization for Economic Cooperation and Development (OECD), a 34-nation economics and trade group that tracks openness in the sharing of tax information between countries, had at the same time begun investigating tax havens. Those nations that shared financial data and had in place adequate safeguards against money laundering were put on a so-called white list. Those that had not acted but promised to do so were slotted onto the OECD’s gray list, and those resistant to reforming their banking secrecy laws were relegated to its blacklist. The OECD could not force the Vatican to cooperate since it was not a member of the European Union. However, placement on the blacklist would cripple the Church’s ability to do business with all other banking jurisdictions. 

The biggest stumbling block to real reform is that all power is still vested in a single man.

In December 2009, the Vatican reluctantly signed a new Monetary Convention with the EU and promised to work toward compliance with Europe’s money laundering and antiterrorism laws. It took a year before the Pope issued a first-ever decree outlawing money laundering. The most historic change took place in 2012, when the Church allowed European regulators from Brussels to examine the Vatican Bank’s books. There were just over 33,000 accounts and some $8.3 billion in assets. The Vatican Bank was not compliant on half of the EU’s forty-five recommendations. It had done enough, however, to avoid being placed on the blacklist.

In its 2017 evaluation of the Vatican Bank, the EU regulators noted the Vatican had made significant progress in fighting money laundering and the financing of terrorism. Still, changing the DNA of the finances of the Vatican has proven incredibly difficult. When a reformer, Argentina’s Cardinal Jorge Bergoglio, became Pope Francis in 2013, he endorsed a wide-ranging financial reorganization that would make the church more transparent and bring it in line with internationally accepted financial standards and practices. Most notable was that Francis created a powerful financial oversight division and put Australian Cardinal George Pell in charge. Then Pell had to resign and return to Australia where he was convicted of child sex offenses in 2018. By 2021, the Vatican began the largest financial corruption trial in its history, even including the indictment of a cardinal for the first time. The case floundered, however, and ultimately revealed that the Vatican’s longstanding self-dealing and financial favoritism had continued almost unabated under Francis’s reign. 

It seems that for every step forward, the Vatican somehow manages to move backwards when it comes to money and good governance. For those of us who study it, the Vatican is a more compliant and normal member of the international community today than at any time in its past, yet the biggest stumbling block to real reform is that all power is still vested in a single man whom the Church considers the Vicar of Christ on earth.

The Catholic Church considers the reigning pope to be infallible when speaking ex cathedra (literally “from the chair,” that is, issuing an official declaration) on matters of faith and morals. However, not even the most faithful Catholics believe that every Pope gets it right when it comes to running the Church’s sovereign government. No reform appears on the horizon that would democratize the Vatican. Short of that, it is likely there will be future financial and power scandals, as the Vatican struggles to become a compliant member of the international community.

Categories: Critical Thinking, Skeptic

Game Transfer Phenomenon

neurologicablog Feed - Mon, 04/21/2025 - 5:03am

Have you ever been into a video game that you played for hours a day for a while? Did you ever experience elements of gameplay bleeding over into the real world? If you have, then you have experienced what psychologists call “game transfer phenomenon” or GTP. This can be subtle, such as unconsciously placing your hand on the WASD keys of a keyboard, or more extreme, such as imagining elements of the game in the real world – health bars over people’s heads, for example.

None of this is surprising, actually. Our brains adapt to use. Spend enough time in a certain environment, engaging in a specific activity, experiencing certain things, and those pathways will be reinforced. This is essentially what PTSD is – spend enough time fighting for your life in extremely violent and deadly situations, and the behaviors and associations you learn are hard to turn off. I have experienced only a tiny whisper of this after engaging for extended periods in live-action gaming that involves some sort of combat (like paintball or LARPing) – it can take a few days to stop looking for threats and being jumpy.

I have also noticed a bit of transfer (and others have noted this to me as well) in that I find myself reaching to pause or rewind a live radio broadcast because I missed something that was said. I also frequently try to interact with screens that are not touch-screens. I am getting used to having the ability to affect my physical reality at will.

Now there is a new wrinkle to this phenomenon – we have to consider the impact of spending more and more time engaged in virtual experiences. This will only get more profound as virtual reality becomes more and more a part of our daily routine. I am also thinking about the not-too-distant future and beyond, when some people might spend huge chunks of their day in VR. Existing research shows that GTP is more likely to occur with increased time and immersiveness. What happens when our daily lives are a blend of the virtual and the physical? Not only is there VR, there is also augmented reality (AR), where we overlay digital information onto our perception of the real world. This idea was explored in a Doctor Who episode in which a society of people were so dependent on AR that they were literally helpless without it, unable even to walk from point A to point B.

For me the question is – when will GTP cross the line from being an occasional curiosity to a serious problem? For example, in some immersive video games your character may be able to fly, and you think nothing of stepping off a ledge and flying into the air. Imagine playing such a superhero-style game in high-quality VR for an extended period of time (something like Ready Player One). Could people “forget” they are in meat space and engage in a deeply ingrained behavior they developed in the game? They won’t just be trying to pause their radio, but to interact with the physical world in a way that is only possible in the VR world, with possibly deadly consequences.

Another aspect of this is that as our technology develops we are increasingly making our physical environment more digital. 3D printing is an example of this – going from a digital model to a physical object. Increasingly, objects in our physical environment are interactive – smart devices. In a generation or two, will people get used to not only spending lots of time in VR, but also having their physical worlds augmented by AR and populated with smart devices, including physical objects that can change on demand (programmable matter)? We may become ill-adapted to existing in a “dumb,” purely physical world. We may choose virtual reality because it has spoiled us for dumb physical reality.

Don’t get me wrong – I think digital and virtual reality are great and I look forward to every advancement. I see this mainly as an unintended consequence. But I also think we can reasonably anticipate that this is likely to be a problem, as we are already seeing milder versions of it today. This means we have an opportunity to mitigate it before it becomes a serious problem. Part of the solution will likely always be good digital hygiene – making sure our days are balanced between physical and virtual reality. This will likely also be good for our physical health.

I also wonder, however, if this is something that can be mitigated in the virtual applications themselves. Perhaps the programs can be designed to make it obvious when we are in virtual reality vs. physical reality, as a cue to the brain so it doesn’t cross the streams. I don’t think this is a complete fix, because GTP exists even for cartoony games. The learned behaviors will still bleed over. But perhaps there may be a way to help the brain keep these streams separated.

I suspect we will not seriously address this issue until it is already a problem. But it would be nice to get ahead of a problem like this for once.

The post Game Transfer Phenomenon first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #1032 - Apr 19 2025

Skeptics Guide to the Universe Feed - Sat, 04/19/2025 - 8:00am
Dumbest Thing of the Week: Turned to Stone; News Items: Where Did Earth's Water Come From, EPA Data on Emissions, Is Your Red My Red, Evolution of Complex Life, Crow Math Skills; Who's That Noisy; Your Questions and E-mails: Separate the Art from the Artists, Does the Moon Rotate; Science or Fiction
Categories: Skeptic

Possible Biosignature on K2-18b

neurologicablog Feed - Thu, 04/17/2025 - 5:00am

Exoplanets are pretty exciting – in the last few decades we have gone from knowing absolutely nothing about planets beyond our solar system to having a catalogue of over 5,000 confirmed exoplanets. That’s still a small sample considering there are likely between 100 billion and 1 trillion planets in the Milky Way. It is also not a random sample, but is biased by our detection methods, which favor larger planets closer to their parent stars. Still, some patterns are starting to emerge. One frustrating pattern is the lack of any worlds that are close duplicates of Earth – an Earth mass exoplanet in the habitable zone of a yellow star (I’d even take an orange star).

Life, however, does not require an Earth-like planet. Anything in the habitable zone, defined as potentially having a temperature allowing for liquid water on its surface, will do. The habitable zone also depends on variables such as the atmosphere of the planet. Mars could be warm if it had a thicker atmosphere, and Venus could be habitable if it had less of one. Cataloguing exoplanets gives us the ability to address a burning scientific question – how common is life in the universe? We have yet to add any data points of clear examples of life beyond Earth. So far we have one example of life in the universe, which means we can’t calculate how common it is (except maybe setting some statistical upper limits).

Finding that a planet is habitable and therefore could potentially support life is not enough. We need evidence that there is actually life there. For this the hunt for exoplanets includes looking for potential biosignatures – signs of life. We may have just found the first biosignatures on an exoplanet. This is not 100%. We need more data. But it is pretty intriguing.

The planet is K2-18b, a sub-Neptune orbiting a red dwarf 120 light years from Earth. In terms of exoplanet size, we have terrestrial planets like Earth and the rocky inner planets of our solar system. Then there are super-Earths, larger than Earth up to about 2 Earth masses, still likely rocky worlds. Sub-Neptunes are larger still, but still smaller than Neptune. They likely have rocky surfaces and thick atmospheres. K2-18b has a radius 2.6 times that of Earth, with a mass 8.6 times that of Earth. The surface gravity is estimated at 12.43 m/s^2 (compared to 9.8 on Earth). We could theoretically land a rocket there and take off again from its surface.
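
As a quick sanity check on those numbers, here is a minimal back-of-the-envelope sketch (my own illustration, not from the post) using the Newtonian scaling g = g_Earth * (M/M_Earth) / (R/R_Earth)^2 with the mass and radius quoted above:

```python
# Back-of-the-envelope check of the quoted surface gravity (illustrative sketch only).
# Newtonian scaling: g = g_Earth * (M / M_Earth) / (R / R_Earth)^2

G_EARTH = 9.8        # m/s^2, Earth's surface gravity
MASS_RATIO = 8.6     # K2-18b mass in Earth masses (as quoted above)
RADIUS_RATIO = 2.6   # K2-18b radius in Earth radii (as quoted above)

g = G_EARTH * MASS_RATIO / RADIUS_RATIO**2
print(f"Estimated surface gravity: {g:.2f} m/s^2")  # ~12.47 m/s^2, consistent with the quoted 12.43
```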

K2-18 is a red dwarf, which means its habitable zone is close in. K2-18b orbits every 33 days, in a somewhat eccentric orbit that stays within the habitable zone. This means it is likely tidally locked, but it may be in a resonance orbit (like Mercury), meaning that it rotates three times for every two orbits, or something like that. Fortunately for astronomers, K2-18b passes in front of its star from our perspective on Earth. This is how it was detected, and it also means we can potentially examine the chemical makeup of its atmosphere with spectroscopy. When the planet passes in front of its star, we can look at the absorption lines of the light passing through its atmosphere to detect the signatures of different chemicals. Using this technique with Hubble, astronomers have found methane and carbon dioxide in the atmosphere. They have also found dimethyl sulfide and a similar molecule called dimethyl disulfide. On Earth the only known source of dimethyl sulfide is living organisms, specifically algae. This molecule is also highly reactive and therefore short-lived, which means that if it is present in the atmosphere it is being constantly renewed. Follow-up observations with the Webb confirmed the presence of dimethyl sulfide, in concentrations 20 times higher than on Earth.

What does this mean? Well, it could mean that K2-18b has a surface ocean that is brimming with life. This fits with one model of sub-Neptunes, called the Hycean model, under which they can have large surface oceans and hydrogen-rich atmospheres. These are conditions suitable for life. But this is not the only possibility.

One of the problems with chemical biosignatures is that they frustratingly all have abiotic sources. Oxygen can occur through the splitting of water or CO2 by ultraviolet light, and by reactions with quartz. Methane also has geological sources. What about dimethyl sulfide? Well, it has been found in cometary matter with a likely abiotic source. So there may be some geological process on K2-18b pumping out dimethyl sulfide. Or there may be an ocean brimming with marine life creating the stuff. We need to do more investigation of K2-18b to understand more about its likely surface conditions, atmosphere, and prospects for life.

This, unfortunately, is how these things are likely to go – we find a potential biosignature that also has abiotic explanations and then we need years of follow up investigation. Most of the time the biosignatures don’t pan out (like on Venus and Mars so far). It’s a setup for disappointment. But eventually we may go all the way through this process and make a solid case for life on an exoplanet. Then finally we will have our second data point, and have a much better idea of how common life is likely to be in our universe.

The post Possible Biosignature on K2-18b first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #984: Should You Feed Your Dog That?

Skeptoid Feed - Tue, 04/15/2025 - 2:00am

Some people try to feed their dogs the same alternative diet they eat themselves... not necessarily so good for the dog.

Categories: Critical Thinking, Skeptic

Kawaii and the Cult of Cute

Skeptic.com feed - Mon, 04/14/2025 - 10:00am
Japan’s culture of childlike innocence, vulnerability, and playfulness has a downside.

Dogs dressed up in bonnets. Diamond-studded iPhone cases shaped like unicorns. Donut-shaped purses. Hello Kitty shoes, credit cards, engine oil, and staplers. My Little Pony capsule hotel rooms. Pikachu parades. Hedgehog cafes. Pink construction trucks plastered with cartoon eyes. Miniature everything. Emojis everywhere. What is going on here?

Top left to right: Astro Boy, Hello Kitty credit card, Hello Kitty backpack, SoftBank’s Pepper robot, Pikachu Parade, Hello Kitty hat, film still from Ponyo by Studio Ghibli

Such merch, and more, is a manifestation of Japan’s kawaii culture of innocence, youthfulness, vulnerability, playfulness, and other childlike qualities. Placed in certain contexts, however, it can also underscore a darker reality—a particular denial of adulthood through a willful indulgence in naïveté, commercialization, and escapism. Kawaii can be joyful and happy, but it is also a way to avoid confronting the realities of real life.

The roots of kawaii can be traced back to Japan’s Heian (“peace” or “tranquility”) period (794–1185 CE), a time when aristocrats appreciated delicate and endearing aesthetics in literature, art, and fashion.1 During the Edo period (1603–1868 CE), art and culture began to emphasize aesthetics, beauty, and playfulness.2 Woodblock prints (ukiyo-e) often depicted cute and whimsical characters.3 The modern iteration of kawaii began to take shape during the student protests of the late 1960s,4 particularly against the backdrop of the rigid culture of post-World War II Japan. In acts of defiance against academic authority, university students boycotted lectures and turned to children’s manga—a type of comic or graphic novel—as a critique of traditional educational norms.5

Kawaii can be joyful and happy, but it is also a way to avoid confronting the realities of real life.

After World War II, Japan experienced significant social and economic changes. The emerging youth culture of the 1960s and 1970s began to embrace Western influences, leading to a blend of traditional Japanese aesthetics with Western pop culture.6 During the economic boom of the 1970s and 1980s, consumer subcultures flourished, and the aesthetic of cuteness found expression in playful handwriting, speech patterns, fashion, products, and themed spaces like cafes and shops. The release of Astro Boy (Tetsuwan Atomu) in 1952, created by Osamu Tezuka, is regarded by scholars as a key moment in the development of kawaii culture.7 The character’s large eyes, innocent look, and adventurous spirit resonated with both children and adults, setting the stage for the rise of other kawaii characters in popular culture. Simultaneously, as Japanese women gained more prominence in the workforce, the “burikko” archetype8—an innocent, childlike woman—became popular. This persona, exuding charm and nonthreatening femininity, was seen as enhancing her desirability in a marriage-centric society.9

Left to right: burikko handwriting, bento box, Kumamon mascot

Another catalyst for kawaii culture was the 1970s emergence of burikko handwriting among teenage girls.10 This playful, childlike, rounded style of writing included hearts, stars, and cartoonish doodles. To the chagrin of educators, it became a symbol of youthful rebellion and a break from rigid societal expectations.

Japanese culture is deeply rooted in tradition, with strict social norms governing behavior and appearance. If you drop something, it’s common to see people rush to retrieve it for you. Even at an empty intersection with no car in sight, a red light will rarely be ignored. Business cards are exchanged with a sense of deference, and social hierarchies are meticulously observed. Conformity is highly valued, while femininity is often dismissed as frivolous. Against this backdrop, the emergence of kawaii can be seen as an act of quiet resistance.

The rise of shōjo (girls’) manga in the 1970s introduced cute characters with large eyes and soft rounded faces with childlike features, popularizing the kawaii aesthetic among young girls.11 Then, in 1974, along came Sanrio’s Hello Kitty,12 commercializing and popularizing kawaii culture beyond Japan’s borders. While it started as a product range for children, it soon became popular with teens and adults alike.

Kawaii characters like Hello Kitty are often depicted in a simplistic style, with oversized eyes and minimal facial expressions. This design invites people to project their own feelings and emotions onto the characters. As a playful touch, Hello Kitty has no mouth—ensuring she’ll never reveal your secrets!

By the 1980s and 1990s, kawaii had permeated stationery, toys, fashion, digital communications, games, and beyond. Franchises like Pokémon, anime series such as Sailor Moon, and the whimsical works of Studio Ghibli exported a sense of childlike wonder and playfulness to audiences across the globe. Even banks and airlines embraced cuteness as a strategy to attract customers, as did major brands like Nissan, Mitsubishi, Sony, and Nintendo. What may have begun as an organic expression of individuality was quickly commodified by industry.

Construction sites, for example, frequently feature barricades shaped like cartoon animals or flowers, softening the visual impact of urban development.13 They also display signs with bowing figures apologizing for any inconvenience. These elements are designed to create a sense of comfort for those passing by. Similarly, government campaigns use mascots like Kumamon,14 a cuddly bear, to promote tourism or public health initiatives. Japanese companies and government agencies use cute mascots, referred to as Yuru-chara, to create a friendly image and foster a sense of connection. You’ll even find them in otherwise harsh environments: high-security prisons, the Tokyo Metropolitan Police—and, well, the Japanese Sewage Association uses them too.15

Kawaii aesthetics have also appeared in high-tech domains. Robots designed for elder care, such as SoftBank’s Pepper,16 often adopt kawaii traits to appear less intimidating and foster emotional connections. In the culinary world, bento boxes featuring elaborately arranged food in cute and delightful shapes have become a creative art form, combining practicality with aesthetic pleasure—and turning ordinary lunches into whimsical and joyful experiences.

Sanrio Puroland (website)

Kawaii hasn’t stayed confined to Japan’s borders. It has become popular in other countries like South Korea, and has had a large influence in the West as well. It has become a global representation of Japan, so much so that it helps draw in tourism, particularly to the Harajuku district in Tokyo and theme parks like Sanrio Puroland. In 2008, Hello Kitty was even named Japan’s official tourism ambassador.17

The influence of kawaii extends beyond tourism. Taiwanese airline EVA Air celebrated Hello Kitty’s 40th birthday with a special edition Boeing 777-300ER, featuring Hello Kitty-themed designs, menus, and crew uniforms on its Paris-Taipei route.18 Even the Vatican couldn’t resist the power of cute: In its appeal to younger generations, it introduced Luce, a cheerful young girl with big eyes, blue hair, and a yellow raincoat, as the mascot for the 2025 Jubilee Year and the Vatican’s pavilion at Expo 2025.19

Taiwanese airline EVA Air celebrated Hello Kitty’s 40th birthday with a special edition Boeing 777-300ER, featuring Hello Kitty-themed designs, menus, and crew uniforms on its Paris–Taipei route.

Could anime and kawaii culture become vehicles for Catholicism? Writing for UnHerd, Katherine Dee suggests that Luce represents a global strategy to transcend cultural barriers in ways that traditional symbols, like the rosary, cannot. She points out that while Europe’s Catholic population has been shrinking, the global Catholic community continues to grow by millions.20 But while Luce may bring more attention to the Vatican, can she truly inspire deeper connections to God or spirituality?

All that said, the bigger question remains: Why does anyone find any of this appealing or cute?

One answer comes from the cultural theorist Sianne Ngai, who said that there’s a “surprisingly wide spectrum of feelings, ranging from tenderness to aggression, that we harbor toward ostensibly subordinate and unthreatening commodities.”21 That’s a fancy way of saying that humans find babies cute, an insight associated with the Austrian zoologist and ethologist Konrad Lorenz, co-recipient of the 1973 Nobel Prize in Physiology or Medicine, whose “baby schema”22 (or Kindchenschema) explains how and why certain infantile facial and physical traits are seen as cute. These features include an overly large head, rounded forehead, large eyes, and protruding cheeks.23 Lorenz argued that this is so because such features trigger a biological response within us—a desire to nurture and protect because we view them as proxies for vulnerability. The more such features, the more we are wired to care for those who embody them.24 Simply put, when these traits are projected onto characters or art or products, they promote the same kind of response in us as seeing a baby.

Modern research validates Lorenz’s theory. A 2008 brain imaging study showed that viewing infant faces, but not adult ones, triggered a response in the orbitofrontal cortex linked to reward processing.25 Another brain imaging study conducted at Washington University School of Medicine26 investigated how different levels of “baby schema” in infant faces—characteristics like big eyes and round cheeks—affect brain activity. Researchers discovered that viewing baby-like features activates the nucleus accumbens, a key part of the brain’s reward system responsible for processing pleasure and motivation. This effect was observed in women who had never had children. The researchers concluded that this activation of the brain’s reward system is the neurophysiological mechanism that triggers caregiving behavior.

A very different type of study,27 conducted in 2019, further confirmed that seeing baby-like features triggers a strong emotional reaction. In this case, the reaction is known as “kama muta,” a Sanskrit term that describes the feeling of being deeply moved or touched by love. This sensation is often accompanied by warmth, nostalgia, or even patriotism. The researchers found that videos featuring cute subjects evoked significantly more kama muta than those without such characteristics. Moreover, when the cute subjects were shown “interacting affectionately,” the feeling of kama muta was even stronger compared to when the subjects were not engaging in affectionate behavior.

In 2012, Osaka University professor Hiroshi Nittono led a research study that found that “cuteness” has an impact on observers, increasing their focus and attention.28 It also speaks to our instinct to nurture and protect that which appears vulnerable—which cute things, with their more infantilized traits, do. After all, who doesn’t love Baby Yoda? Perhaps that’s why some of us are so drawn to purchase stuffed dolls of Eeyore—it makes us feel as if we are rescuing him. When we see something particularly cute, many of us feel compelled to buy it. Likewise, it’s possible, at least subconsciously, that those who engage in cosplay around kawaii do so out of a deeper need to feel protected themselves. Research shows that viewing cute images improves moods and is associated with relaxation.29

Kawaii may well be useful in our fast-paced and stressful lives. For starters, when we find objects cute or adorable, we tend to treat them better and give them greater care. There’s also a contagious happiness effect. Indeed, could introducing more kawaii into our environments make people happier? Might it encourage us to care more for each other and our communities? The kawaii aesthetic could even be used in traditionally serious spaces—like a doctor’s waiting room or emergency room—to help reduce anxiety. Instead of staring at a blank ceiling in the dentist’s chair, imagine looking up at a whimsical kawaii mural instead.

Consider also the Tamagotchi digital pet trend of the 1990s. Children were obsessed with taking care of this virtual pet, tending to its needs ranging from food to entertainment. Millions of these “pets” were sold and were highly sought after. There’s something inherently appealing to children about mimicking adult roles, especially when it comes to caregiving. It turns out that children don’t just want to be cared for by their parents—they also seem to have an innate desire to nurture others. This act of caregiving can make them feel capable, empowered, and useful, tapping into a deep sense of responsibility and connection.

At Chuo University in Tokyo, there’s an entirely new field of “cute studies” founded by Dr. Joshua Dale, whose book Irresistible: How Cuteness Wired Our Brains and Changed the World summarizes his research.30 According to Dale, four traditional aesthetic values of Japanese culture contributed to the rise of kawaii: (1) valuing the diminutive, (2) treasuring the transient, (3) preferring simplicity, and (4) appreciating the playful.31 His work emphasizes that kawaii is not just about cuteness; it expresses a deeply rooted cultural philosophy that reflects Japanese views on beauty, life, and emotional expression.

The “cult of cute” can lead people to seek refuge from responsibility and avoid confronting uncomfortable emotions.

In other words, there’s something about kawaii that goes beyond a style or a trend. It is a reflection of deeper societal values and emotional needs. In a society that has such rigid hierarchies, social structures, decorum, and an intense work culture, kawaii provides a form of escapism—offering a respite from the harsh realities of adulthood and a return to childlike innocence. It is a safe form of vulnerability. Yet, does it also hint at an inability to confront the realities of life?

The “cult of cute” can lead people to seek refuge from responsibility and avoid confronting uncomfortable emotions. By surrounding themselves with cuteness and positivity, they may be trying to shield themselves from darker feelings and worries. In some cases, people even adapt their own personal aesthetics to appear cuter, as this can make them seem more innocent and in need of help—effectively turning cuteness into a protective layer.

Kawaii also perpetuates infantilization, particularly among women who feel pressured to conform to kawaii aesthetics, which often places them in a submissive role. This is especially evident in subgenres like Lolita fashion—a highly detailed, feminine, and elegant style inspired by Victorian and Rococo fashion, but with a modern and whimsical twist. While this style is adopted by many women with the female gaze in mind, the male gaze remains inescapable.

Japanese Lolita fashion

As a result, certain elements of kawaii can sometimes veer into the sexual, both intentionally and as an unintended distortion of innocence. Maid cafes, for example, though not designed to be sexually explicit, often carry sexual undertones that undermine their seemingly innocent and cute appeal. In these cafes, maids wear form-fitting uniforms and play into fantasies of servitude and submission—particularly when customers are addressed as “masters” and flirtatious interactions are encouraged.

It’s important to remember that things that look sweet and cute can also be sinister. The concept of “cute” often evokes feelings of trust, affection, and vulnerability, which can paradoxically make it a powerful tool for manipulation, subversion, and even control. Can kawaii be a Trojan horse?

When used in marketing to sell products, it may seem harmless, but how much of the rational consumer decision-making process does it override? And what evil lurks behind all the sparkle? In America, cuteness manifests itself even more boldly and aggressively. One designer, Lisa Frank, built an entire empire in the 1980s and 1990s on vibrant, neon colors and whimsical artwork featuring rainbow-colored animals, dolphins, glitter, and images of unicorns on stickers, adorning backpacks and other merchandise. Her work is closely associated with a sense of nostalgia for millennials who grew up in that era. Yet, as later discovered and recently recalled in the Amazon documentary, “Glitter and Greed: The Lisa Frank Story,” avarice ultimately led to a toxic work environment, poor working conditions, and alleged abuse.

Worse, can kawaii be used to mask authoritarian intentions or erase the memory of serious crimes against humanity?

As Japan gained prominence in global culture, its World War II and earlier atrocities were largely overshadowed, and many now overlook these grave historical events.32 When we think of Japan today, we often think of cultural exports like anime, manga, Sanrio, geishas, and Nintendo. Even though Japan was once an imperial power, today it exercises “soft power” in the sociopolitical sphere. This concept, introduced by American political scientist Joseph Nye,33 refers to influencing others by promoting a nation’s culture and values to make foreign audiences more receptive to its perspectives.

Deep down, we harbor anxieties about how technology might impact our lives or what could happen if it begins to operate independently. By designing robots to look cute and friendly, we tend to assuage such fear and discomfort.

Japan began leveraging this strategy in the 1980s to rehabilitate its tarnished postwar reputation, especially in the face of widespread anti-Japanese sentiment in neighboring Asian nations. Over time, these attitudes shifted as Japan used “kawaii culture” and other forms of pop-culture diplomacy to reshape its image and move beyond its violent, imperialist past.

Kawaii also serves as a way to neutralize our fears by transforming things we might typically find unsettling into endearing and approachable forms—think Casper the Friendly Ghost or Monsters, Inc. This principle extends to emerging technologies, such as robots. Deep down, we harbor anxieties about how technology might impact our lives or what could happen if it begins to operate independently. By designing robots to look cute and friendly, we tend to assuage such fear and discomfort. Imbuing frightening concepts with qualities that evoke happiness or safety allows us to navigate the interplay between darkness and light, innocence and danger, in a more approachable way. In essence, it’s a coping mechanism for our primal fears.

An interesting aspect of this is what psychologists call the uncanny valley—a feeling of discomfort that arises when something is almost humanlike, but not quite. Horror filmmakers have exploited this phenomenon by weaponizing cuteness against their audiences with characters like the Gremlins and the doll Chucky. The dissonance between a sweet appearance and sinister intent creates a chilling effect that heightens the horror.

When we embrace kawaii, are we truly finding joy, or are we surrendering to an illusion of comfort in an otherwise chaotic world?

Ultimately, all this speaks to the multitude of layers to kawaii. It is more than an aesthetic; it’s a cultural phenomenon with layers of meaning, and it reflects both societal values and emotional needs. Its ability to evoke warmth and innocence can also be a means of emotional manipulation. It can serve as an unassuming guise for darker intentions or meanings. It can be a medium for individual expression, and yet simultaneously it has been commodified and overtaken by consumerism. It can be an authentic expression, yet mass production has also made it a symbol of artifice. It’s a way to embrace the innocent and joyful, yet it can also be used to avoid facing the harsher realities of adulthood. When we embrace kawaii, are we truly finding joy, or are we surrendering to an illusion of comfort in an otherwise chaotic world?

It’s worth asking whether the prevalence of kawaii in public and private spaces reflects a universal desire for escapism or if it serves as a tool to maintain conformity and compliance. Perhaps, at its core, kawaii holds up a mirror to society’s collective vulnerabilities—highlighting not just what we nurture, but also what we are willing to overlook for the sake of cuteness.

Categories: Critical Thinking, Skeptic

OK – But Are They Dire Wolves

neurologicablog Feed - Mon, 04/14/2025 - 4:58am

Last week I wrote about the de-extinction of the dire wolf by a company, Colossal Biosciences. What they did was pretty amazing – sequence ancient dire wolf DNA and use that as a template to make 20 changes to 14 genes in the gray wolf genome via CRISPR. They focused on the genetic changes they thought would have the biggest morphological effect, so that the resulting pups would look as much as possible like the dire wolves of old.

This achievement, however, is somewhat tainted by the overhyping of what was actually achieved by the company and by many media outlets. The pushback began immediately, and there is plenty of reporting about the fact that these are not exactly dire wolves (as I pointed out myself). Still, I do think we should not fall into the pattern of focusing on the controversy and the negative and missing the fact that this is a genuinely amazing scientific accomplishment. It is easy to become blasé about such things. Sometimes it’s hard to know in reporting what the optimal balance is between the positive and the negative, and as skeptics we definitely can tend toward the negative.

I feel the same way, for example, about artificial intelligence. Some of my skeptical colleagues have taken the approach that AI is mostly hype, focusing on what the recent crop of AI apps are not (they are not sentient, they are not AGI) rather than on what they are. In both cases I think it’s important to remember that science and pseudoscience are a continuum, and just because something is being overhyped does not mean it gets tossed in the pseudoscience bucket. That is just another form of bias. Sometimes that amounts to substituting cynicism for more nuanced skepticism.

Getting back to the “dire wolves” – how should we skeptically view the claims being made by Colossal Biosciences? First let me step back a bit and talk about de-extinction – bringing back species that have gone extinct from surviving DNA remnants. There are basically three approaches to achieving this. They all start with sequencing DNA from the extinct species. This is easier for recently extinct species, like the passenger pigeon, where we still have preserved biological samples. The more ancient the DNA, the harder it is to recover and sequence. Some research has estimated that the half-life of DNA (in good preserving conditions) is 521 years. This leads to an estimate that all base pairs will be gone by 6.8 million years. This means – no non-avian dinosaur DNA. But there are controversial claims of recovered dino DNA. That’s a separate discussion, but for now let’s focus on the non-controversial DNA, of thousands to at most a few million years old.
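
To see how a 521-year half-life leads to that 6.8-million-year figure, here is a minimal sketch (my own illustration, treating 521 years as a single fixed half-life, whereas the published estimate depends on preservation conditions); it works in log space because the surviving fraction quickly underflows ordinary floating-point numbers:

```python
import math

HALF_LIFE_YEARS = 521.0  # DNA bond half-life estimate cited above (under good preservation)

def log10_surviving_fraction(years: float, half_life: float = HALF_LIFE_YEARS) -> float:
    """log10 of the fraction of DNA bonds expected to survive after `years`,
    assuming simple exponential decay with a fixed half-life."""
    return (years / half_life) * math.log10(0.5)

if __name__ == "__main__":
    # Dire wolf remains, a 1-million-year-old sample, and the quoted 6.8 Myr limit
    for years in (13_000, 1_000_000, 6_800_000):
        print(f"{years:>9,} years: surviving fraction ~ 10^{log10_surviving_fraction(years):.0f}")
    # At 6.8 million years the fraction is ~10^-3929 -- effectively zero intact base pairs,
    # which is why readable non-avian dinosaur DNA is not expected.
```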

Species on the short list for de-extinction include the dire wolf (extinct about 13,000 years ago), the woolly mammoth (10,000 years ago), the dodo (360 years), and the thylacine (90 years). The best way (though not the most feasible way) to fully de-extinct a species is to completely sequence its DNA and then use that to make a full clone. No one would argue that a cloned woolly mammoth is not a woolly mammoth. There has been discussion of cloning the woolly mammoth and other species for decades, but the technology is very tricky. We would need a complete woolly mammoth genome – which we have. However, the DNA is degraded, making cloning not possible with current technology. But this is one potential pathway. It is more feasible for the dodo and thylacine.

A second way is to make a hybrid – take the woolly mammoth genome and use it to fertilize the egg from a modern elephant. The result would be half woolly mammoth and half Asian or African elephant. You could theoretically repeat this procedure with the offspring, breeding back with woolly mammoth DNA, until you have a creature that is mostly woolly mammoth. This method requires an extant relative that is close enough to produce fertile young. This is also tricky technology, and we are not quite there yet.

The third way is the “dino-chicken” (or chickenosaurus) method, promoted initially (as far as I can tell, but I’m probably wrong) by Jack Horner. With this method you start with an extant species and then make specific changes to its genome to “reverse engineer” an ancestor or close relative species. There are actually various approaches under this umbrella, but all involve starting with an extant species and making genetic changes. There is the Jurassic Park approach, which takes large chunks of “dino DNA” and plugs them into an intact genome from a modern species (why they used frog DNA instead of bird DNA is not clear). There is also the dino-chicken approach, which simply tries to figure out the genetic changes that happened over evolutionary time to result in the morphological changes that turned, for example, a theropod dinosaur into a chicken. Then, reverse those changes. This is more like reverse engineering a dinosaur by understanding how genes result in morphology.

Then we have the dire wolf approach – use ancient DNA as a template to guide specific CRISPR changes to an extant genome. This is very close to the dino-chicken approach, but uses actual ancient DNA as a template. All of these approaches (perhaps the best way to collectively describe these methods is the genetic engineering approach) do not result in a clone of the extinct species. They result in a genetically engineered approximation of the extinct species. Once you get past the hype, everyone acknowledges this is a fact.

The discussion that flows from the genetic engineering method is – how do we refer to the resulting organisms? We need some catchy shorthand that is scientifically accurate. The three wolves produced by Colossal Biosciences are not dire wolves. But they are not just gray wolves – they are wolves with dire wolf DNA resulting in dire wolf morphological features. They are engineered dire wolf “sims”, “synths”, “analogs”, “echoes”, “isomorphs”? Hmmm… A genetically engineered dire wolf isomorph. I like it.

Also, my understanding is that the goal of using the genetic engineering method of de-extinction is not to make a few changes and then stop, but to keep going. By my quick calculation the dire wolf and the gray wolf differ by about 800-900 genes out of 19,000 total. Our best estimate is that dire wolves had 78 chromosomes, like all modern canids, including the gray wolf, so that helps. So far 14 of those genes have been altered from gray wolf to dire wolf (at least enough to function like a dire wolf). There is no reason why they can’t keep going, making more and more changes based upon dire wolf DNA. At some point the result will be more like a dire wolf than a gray wolf. It will still be a genetic isomorph (it’s growing on me) but getting closer and closer to the target species. Is there any point at which we can say – OK, this is basically a dire wolf?
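
For a rough sense of scale, here is a back-of-the-envelope sketch using the post's own approximate numbers (not Colossal's published figures):

```python
# Rough scale of the editing job, using the approximate numbers in the post above.
TOTAL_GENES = 19_000       # approximate number of genes in the genome
DIFFERING_GENES = 850      # midpoint of the ~800-900 genes estimated to differ
EDITED_GENES = 14          # genes altered so far

print(f"Differing genes: {DIFFERING_GENES / TOTAL_GENES:.1%} of all genes")         # ~4.5%
print(f"Edited so far:   {EDITED_GENES / DIFFERING_GENES:.1%} of the differences")  # ~1.6%
```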

It’s also important to recognize that species are not discrete things. They are temporary, dynamic, and shifting islands of interbreeding genetic clusters. We should also not confuse taxonomy with reality – it is a naming convention that is ultimately arbitrary. Cladistics is an attempt to have a fully objective naming system, based entirely on evolutionary branching points. However, using that method is a subjective choice, and even within cladistics the break between species is not always clear.

I find this all pretty exciting. I also think the technology can be very important. Its best uses, in my opinion, are to de-extinct (as closely as possible) species recently driven extinct by human activity, ones where something close to their natural ecosystem still exists (such as the dodo and thylacine). It can also be used to increase the genetic diversity of endangered species and reduce the risk of extinction.

Using it to bring back extinct ancient species, like the mammoth and dire wolf (or non-avian dinosaurs, for that matter), I see as a research project. And sure, I would love to see living examples that look like ancient extinct species, but that is mostly a side benefit. This can be an extremely useful research project, advancing our understanding of genetics, cloning and genetic engineering technology, and improving our understanding of ancient species.

This recent controversy is an excellent opportunity to teach the public about this technology and its implications. It’s also an opportunity to learn about categorization, terminology, and evolution. Let’s not waste it by overreacting to the hype and being dismissive.

The post OK – But Are They Dire Wolves first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #1031 - Apr 12 2025

Skeptics Guide to the Universe Feed - Sat, 04/12/2025 - 7:00am
TikTok Flat Earthers; News Items: De-extincting the Dire Wolf, What Experts Think About AI, Planned Obsolescence, VR Touch Sensory; Who's That Noisy; Your Questions and E-mails: SNPs vs Sequencing; Science or Fiction
Categories: Skeptic

Bury Broadband and Electricity

neurologicablog Feed - Fri, 04/11/2025 - 5:05am

We may have a unique opportunity to make an infrastructure investment that can demonstrably save money over the long term – by burying power and broadband lines. This is always an option, of course, but since we are in the early phases of rolling out fiber optic service, and also trying to improve our grid infrastructure with reconductoring, now may be the perfect time to upgrade our infrastructure by burying many of these lines.

This has long been a frustration of mine. I remember over 40 years ago seeing new housing developments (my father was in construction) with all the power lines buried. I hadn’t realized what a terrible eyesore all those telephone poles and wires were until they were gone. It was beautiful. I was led to believe this was the new trend, especially for residential areas. I looked forward to a day without the ubiquitous telephone poles, much like the transition to cable eliminated the awful TV antennae on top of every home. But that day never came. Areas with buried lines remained, it seems, a privilege of upscale neighborhoods. I get further annoyed every time there is a power outage in my area because of a downed line.

The reason, ultimately, had to be cost. Sure, there are lots of variables that determine that cost, but at the end of the day developers, towns, and utility companies were taking the cheaper option. But what price do we place on the aesthetics of the places we live, and on the inconvenience of regular power outages? I also hate the fact that the utility companies have to come around every year or so and carve ugly paths through large, beautiful trees.

So I was very happy to see this study, which argues that the benefits of aggressively co-undergrounding electric and broadband lines outweigh the costs. First, they found that co-undergrounding (simply burying broadband and power lines at the same time) saves about 40% over doing each individually. This seems pretty obvious, but it’s good to put a number on it. More importantly, they found that the whole project can save money over the long term. They modeled one town in Massachusetts and found:

“Over 40 years, the cost of an aggressive co-undergrounding strategy in Shrewsbury would be $45.4 million, but the benefit from avoiding outages is $55.1 million.”

The savings come mostly from avoiding power outages. This means that areas most prone to power outages would benefit the most. What they mean by “aggressive” is co-undergrounding even before existing power lines are at the end of their lifespan. They do not consider the benefits of reconductoring – meaning increasing the carrying capacity of power lines with more modern construction. The benefit there can be huge as well, especially in facilitating the move to less centralized power production. We can further include the economic benefits of upgrading to fiber optic broadband, or even high-end cable service.
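
The arithmetic behind that bottom line is simple; here is a minimal sketch using the Shrewsbury figures quoted above (ignoring discounting and anything else the study models internally):

```python
# Net benefit of aggressive co-undergrounding, using the Shrewsbury figures quoted above.
COST_40YR = 45.4e6      # modeled 40-year cost, USD
BENEFIT_40YR = 55.1e6   # modeled 40-year benefit from avoided outages, USD

net_benefit = BENEFIT_40YR - COST_40YR   # ~$9.7 million
bc_ratio = BENEFIT_40YR / COST_40YR      # ~1.21 benefit-cost ratio

print(f"Net benefit over 40 years: ${net_benefit / 1e6:.1f} million")
print(f"Benefit-cost ratio: {bc_ratio:.2f}")
```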

This is exactly the kind of thing that governments should be doing – thoughtful public investments that will improve our lives and save money in the long term. The up front costs are also within the means of utility companies and local governments. I would also like to see subsidies at the state and federal level to spread the costs out even more.

Infrastructure investments, at least in the abstract, tend to have broad bipartisan support. Even when politicians fight over such proposals, in the end both sides will take credit for them, because the public generally supports infrastructure that makes their lives better. For undergrounding there are the immediate benefits of improved aesthetics – our neighborhoods will look prettier. Then we will also benefit from improved broadband access, which can be connected to the rural broadband project, which has stalled. Investments in the grid can help keep electricity costs down. For those of us living in areas at high risk of power outages, the lack of such outages will also make an impression over time. We will tell our kids and grandkids stories about the time an ice storm took down power lines, which were lying dangerously across the road, and we had no power for days. What did we do with ourselves, they will ask. You mean – there was no heat in the winter? Did people die? Why yes, yes they did. It will seem barbaric.

This may not make sense for every single location, and obviously some long distance lines are better above ground. But for residential neighborhoods, undergrounding power and broadband seems like a no-brainer. It seemed like one 40 years ago. I hope we don’t miss this opportunity. This could also be a political movement that everyone can get behind, which would be a good thing in itself.

 

The post Bury Broadband and Electricity first appeared on NeuroLogica Blog.

Categories: Skeptic

What Did Einstein Believe About God?

Skeptic.com feed - Tue, 04/08/2025 - 2:24pm

This article was originally published in Skeptic in 1997.

Presented here for the first time are the complete texts of two letters that Einstein wrote regarding his lack of belief in a personal god.

Just over a century ago, near the beginning of his intellectual life, the young Albert Einstein became a skeptic. He states so on the first page of his Autobiographical Notes (1949, pp. 3–5):

Thus I came—despite the fact I was the son of entirely irreligious (Jewish) parents—to a deep religiosity, which, however, found an abrupt ending at the age of 12. Through the reading of popular scientific books I soon reached the conviction that much in the stories of the Bible could not be true. The consequence was a positively fanatic [orgy of] freethinking coupled with the impression that youth is intentionally being deceived… Suspicion against every kind of authority grew out of this experience, a skeptical attitude … which has never left me….

We all know Albert Einstein as the most famous scientist of the 20th century, and many know him as a great humanist. Some have also viewed him as religious. Indeed, in Einstein’s writings there is well-known reference to God and discussion of religion (1949, 1954). Although Einstein stated he was religious and that he believed in God, it was in his own specialized sense that he used these terms. Many are aware that Einstein was not religious in the conventional sense, but it will come as a surprise to some to learn that Einstein clearly identified himself as an atheist and as an agnostic. If one understands how Einstein used the terms religion, God, atheism, and agnosticism, it is clear that he was consistent in his beliefs.

Part of the popular picture of Einstein’s God and religion comes from his well-known statements, such as:

“God is cunning but He is not malicious.” (Also: “God is subtle but he is not bloody-minded.” Or: “God is slick, but he ain’t mean.”) (1946)
“God does not play dice.” (On many occasions.)
“I want to know how God created the world. I am not interested in this or that phenomenon, in the spectrum of this or that element. I want to know His thoughts, the rest are details.” (Unknown date.)

It is easy to see how some got the idea that Einstein was expressing a close relationship with a personal god, but it is more accurate to say he was simply expressing his ideas and beliefs about the universe.

Figure 1

Einstein’s “belief” in Spinoza’s God is one of his most widely quoted statements. But quoted out of context, like so many of these statements, it is misleading at best. It all started when Boston’s Cardinal O’Connell attacked Einstein and the General Theory of Relativity and warned the youth that the theory “cloaked the ghastly apparition of atheism” and “befogged speculation, producing universal doubt about God and His creation” (Clark, 1971, 413–414). Einstein had already experienced heavier-duty attacks against his theory in the form of anti-Semitic mass meetings in Germany, and he initially ignored the Cardinal’s attack. Shortly thereafter, though, on April 24, 1929, Rabbi Herbert Goldstein of New York cabled Einstein to ask: “Do you believe in God?” (Sommerfeld, 1949, 103). Einstein’s return message is the famous statement:

“I believe in Spinoza’s God who reveals himself in the orderly harmony of what exists, not in a God who concerns himself with fates and actions of human beings” (103). The Rabbi, who was intent on defending Einstein against the Cardinal, interpreted Einstein’s statement in his own way when writing:

Spinoza, who is called the God-intoxicated man, and who saw God manifest in all nature, certainly could not be called an atheist. Furthermore, Einstein points to a unity. Einstein’s theory if carried out to its logical conclusion would bring to mankind a scientific formula for monotheism. He does away with all thought of dualism or pluralism. There can be no room for any aspect of polytheism. This latter thought may have caused the Cardinal to speak out. Let us call a spade a spade (Clark, 1971, 414).

Both the Rabbi and the Cardinal would have done well to note Einstein’s remark, of 1921, to Archbishop Davidson in a similar context about science: “It makes no difference. It is purely abstract science” (413).

The American physicist Steven Weinberg (1992), in critiquing Einstein’s “Spinoza’s God” statement, noted: “But what possible difference does it make to anyone if we use the word “God” in place of “order” or “harmony,” except perhaps to avoid the accusation of having no God?” Weinberg certainly has a valid point, but we should also forgive Einstein for being a product of his times, for his poetic sense, and for his cosmic religious view regarding such things as the order and harmony of the universe.

But what, at bottom, was Einstein’s belief? The long answer exists in Einstein’s essays on religion and science as given in his Ideas and Opinions (1954), his Autobiographical Notes (1949), and other works. What about a short answer?

In the summer of 1945, just before the bombs of Hiroshima and Nagasaki, Einstein wrote a short letter stating his position as an atheist (Figure 1, above). Ensign Guy H. Raner had written Einstein from the mid-Pacific requesting a clarification of the beliefs of the world-famous scientist (Figure 2, below). Four years later Raner again wrote Einstein for further clarification and asked: “Some people might interpret (your letter) to mean that to a Jesuit priest, anyone not a Roman Catholic is an atheist, and that you are in fact an orthodox Jew, or a Deist, or something else. Did you mean to leave room for such an interpretation, or are you from the viewpoint of the dictionary an atheist; i.e., ‘one who disbelieves in the existence of a God, or a Supreme Being?’” Einstein’s response is shown in Figure 3.

Figure 2

Combining key elements from the first and second response from Einstein there is little doubt as to his position:

From the viewpoint of a Jesuit priest I am, of course, and have always been an atheist…. I have repeatedly said that in my opinion the idea of a personal God is a childlike one. You may call me an agnostic, but I do not share the crusading spirit of the professional atheist whose fervor is mostly due to a painful act of liberation from the fetters of religious indoctrination received in youth. I prefer an attitude of humility corresponding to the weakness of our intellectual understanding of nature and of our being.

I was fortunate to meet Guy Raner, by chance, at a humanist dinner in late 1994, at which time he told me of the Einstein letters. Raner lives in Chatsworth, California and has retired after a long teaching career. The Einstein letters, a treasured possession for most of his life, were sold in December, 1994, to a firm that deals in historical documents (Profiles in History, Beverly Hills, CA). Five years ago a very brief letter (Raner & Lerner, 1992) describing the correspondence was published in Nature. But the two Einstein letters have remained largely unknown.

“I have repeatedly said that in my opinion the idea of a personal God is a childlike one.” —Einstein

Curiously enough, the wonderful and well-known biography Albert Einstein, Creator and Rebel, by Banesh Hoffmann (1972) does quote from Einstein’s 1945 letter to Raner. But maddeningly, although Hoffmann quotes most of the letter (194–195), he leaves out Einstein’s statement: “From the viewpoint of a Jesuit Priest I am, of course, and have always been an atheist.”!

Hoffmann’s biography was written with the collaboration of Einstein’s secretary, Helen Dukas. Could she have played a part in eliminating this important sentence, or was it Hoffmann’s wish? I do not know. However, Freeman Dyson (1996) notes “…that Helen wanted the world to see, the Einstein of legend, the friend of school children and impoverished students, the gently ironic philosopher, the Einstein without violent feelings and tragic mistakes.” Dyson also notes that he thought Dukas “…profoundly wrong in trying to hide the true Einstein from the world.” Perhaps her well-intentioned protectionism included the elimination of Einstein as atheist.

Figure 3

Although not a favorite of physicists, Einstein, The Life and Times, by the professional biographer Ronald W. Clark (1971), contains one of the best summaries on Einstein’s God: “However, Einstein’s God was not the God of most men. When he wrote of religion, as he often did in middle and later life, he tended to … clothe with different names what to many ordinary mortals—and to most Jews—looked like a variant of simple agnosticism….This was belief enough. It grew early and rooted deep. Only later was it dignified by the title of cosmic religion, a phrase which gave plausible respectability to the views of a man who did not believe in a life after death and who felt that if virtue paid off in the earthly one, then this was the result of cause and effect rather than celestial reward. Einstein’s God thus stood for an orderly system obeying rules which could be discovered by those who had the courage, the imagination, and the persistence to go on searching for them” (19).

Einstein continued to search, even to the last days of his 76 years, but his search was not for the God of Abraham or Moses. His search was for the order and harmony of the world.

Bibliography
  • Dyson, F. 1996. Foreword. In The Quotable Einstein (Calaprice, Alice, Ed.). Princeton, New Jersey: Princeton University Press. (Note: The section “On Religion, God, and Philosophy” is perhaps the best brief source to present the range and depth of Einstein’s views.)
  • Einstein, A. 1929. Quoted in Sommerfeld (see below), 1949. Also as Telegram to a Jewish Newspaper, 1929; Einstein Archive Number 33–272.
  • ___. 1946 and of unknown date. In Einstein, A Centenary Volume. (A. P. French, Ed.) Cambridge: Harvard University Press. 1979. 32, 73, & 67.
  • ___. 1959 (1949). “Autobiographical Notes.” In Albert Einstein, Philosopher–Scientist. (Paul Arthur Schilpp, Ed.) New York: Harper & Bros.
  • ___. 1950. Letter to M. Berkowitz, October 25, 1950; Einstein Archive Number 59–215.
  • ___. 1954. Ideas and Opinions. New York: Crown Publishers.
  • ___. On many occasions. In Albert Einstein, Creator and Rebel. (B. Hoffmann with the collaboration of Helen Dukas.) New York: The Viking Press.
  • Hoffmann, B. (with the collaboration of Helen Dukas). 1972. Albert Einstein, Creator and Rebel. New York: The Viking Press.
  • Raner, G.H. & Lerner, L.S. 1992. “Einstein’s Beliefs.” Nature, 358:102.
  • Sommerfeld, A. 1949. “To Albert Einstein’s 70th Birthday.” In Albert Einstein, Philosopher–Scientist. (Paul Arthur Schilpp, Ed.) New York: Harper & Bros. 1959. 99–105.
  • Weinberg, S. 1992. Dreams of a Final Theory. New York: Pantheon Books. 245.
Categories: Critical Thinking, Skeptic
