A Skeptic’s Guide to Immigration

Skeptic.com feed - Wed, 06/11/2025 - 7:35am

In 1965, the 33-year-old junior Senator from Massachusetts, Edward “Ted” Kennedy, took the floor to deliver his remarks on a bill that promised the American people little change:

The bill will not flood our cities with immigrants. It will not upset the ethnic mix of our society. It will not relax the standards of admission. It will not cause American workers to lose their jobs.1

This statement—unimaginable from any Democrat today—summarized both Kennedy’s own thinking and his party’s main argument in support of the most liberal immigration bill ever enacted in the history of the world—the 1965 Immigration and Nationality Act. It was passed by a New Deal Congress and signed into law by a New Deal president whose positions on union organization, healthcare, antitrust, and virtually every other economic issue were to the left of today’s Democratic Party, but whose opinions on immigration would be considered right wing and racist by modern progressive standards.2

What President Johnson proclaimed as “not a revolutionary bill”3 on the day of its signing went on, over the coming decades, to completely transform this country’s demographic fabric, economic identity, and cultural psyche in ways that neither Kennedy, Johnson, nor those opposed to the 1965 bill foresaw. The American immigration system they created remains fundamentally unchanged and is unlike any other in both design and consequence. Unlike most other developed nations, which prioritize skilled workers and use points-based systems to meet labor market demands (i.e., the more skilled and well-educated the prospective immigrant, and the higher-paying their job offer, the more likely they are to be admitted), the United States emphasizes family reunification, granting two-thirds of green cards to relatives of citizens or permanent residents. It also offers birthright citizenship, a policy shared by only a handful of countries, and a Diversity Visa Lottery that annually grants permanent residency to 55,000 randomly selected immigrants from underrepresented nations—programs that would be unthinkable in most European systems.4

The H-1B visa program, intended to bring in foreign workers for specialty occupations, also operates as a lottery rather than the points-based system favored by other countries. This means that even if an employer wants to hire a specific, highly skilled worker, that candidate can be hired only if they win an annual lottery—competing against all other applicants in the same visa category. Their chances aren’t high: the number of H-1B visas granted is limited by a congressionally mandated annual cap of just 65,000, and in 2024 the U.S. Citizenship and Immigration Services received 758,994 H-1B visa applications. The sheer scale of the U.S. system—with quotas that exceed those of every other nation567—also sets it apart, making it the most expansive and complex immigration system in existence.
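
The cap and application numbers above imply long odds for any individual petition. Here is a minimal back-of-the-envelope sketch of the implied probability (illustrative only; the real selection process also includes a separate 20,000-visa advanced-degree exemption and over-selection to offset unfiled petitions):

```python
# Naive odds of selection in the H-1B lottery, using the two
# figures cited above. Illustrative only: USCIS also runs a
# separate 20,000-visa advanced-degree exemption and selects
# more registrations than the cap to offset petitions that are
# never filed, so real-world odds differ.
cap = 65_000            # congressionally mandated annual cap
applications = 758_994  # H-1B applications received in 2024

odds = cap / applications
print(f"Naive selection probability: {odds:.1%}")  # about 8.6%
```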

President Lyndon B. Johnson signing the Immigration and Nationality Act at Liberty Island, New York, on October 3, 1965. This landmark legislation fundamentally changed U.S. immigration policy that had been in place since 1924. (Photo by Yoichi Okamoto. LBJ Library)

The United States is currently home to one fifth of the world’s immigrant population.8 It draws people from virtually every country on Earth,9 recording over a million10 legal entries and, by some estimates, close to two and a half million illegal entries every year.11 This unprecedented influx has warped the national conversation into one that is equally without precedent. Never before has immigration polled as both the most important and the most polarizing political issue,12 and the debate is now often framed as a binary, with individuals and politicians declaring themselves either pro- or anti-immigration. Historically, the debate has centered on more nuanced questions of who, how many, and under what conditions.

The Battle Over Borders

In America today, the single greatest divide between Democrats and Republicans is education. In the most recent election, Donald Trump won the White non-college-educated vote two to one.13 This matters because the same moral frameworks that divide college-educated and non-college-educated Americans also drive views on immigration.

Psychologists have identified five core human values: care, fairness, sanctity, authority, and loyalty. Across the globe, college-educated professionals consistently rank two of these above the rest—care for others, especially the vulnerable, and fairness—placing them at the heart of their worldview and moral calculus. While working-class individuals also prioritize care and fairness, these values don’t dominate their framework in the same way. Instead, they are weighted alongside values that matter little to the educated class—appreciation of tradition, respect for authority, and loyalty to family and community.14 It is a divide many researchers have described as the split between “universalists” and “communalists.”

At its core, today’s immigration debate is a microcosm of the broader conflict between universalist and communalist ideals. For communalists, a country should restrict entry and prioritize its own people. For universalists, patriotic loyalties are dangerous and immigration is seen as a solution for closing the gap between rich and poor nations. Polls indicate that liberals tend to be universalists who support more open-door policies while conservatives tend to be communalists who favor policies that limit the number of immigrant arrivals.15

The United States is currently home to one fifth of the world’s immigrant population.

In the past, however, the immigration debate did not split so neatly along these lines. For most of American history, communalist views were not tied to conservatism. Indeed, many 20th-century communalists were progressives who rooted ideas of fairness and equality in the local or national community rather than treating them as global causes. A. Philip Randolph—perhaps second only to Martin Luther King Jr. as America’s most prominent Black civil rights leader—for example, called for:

a halt on the grand rush for American gold, which overfloods the labor market, resulting in lowering the standard of living, race riots, and general social degradation. The excessive immigration is against the interests of the masses of all races and nationalities in the country—both foreign and native.16

The conservative intellectual giant and universalist Milton Friedman, meanwhile, positioned himself firmly on the other side of the debate as a strong advocate for higher levels of immigration.17

Since 1965, however, the views of both conservatives and liberals have converged on a pro-immigration platform. The Democrats transformed themselves into a college-educated party of social and cultural elites whose universalist worldview prioritizes human rights over American rights, while Republicans remained loyal to big business and free-market philosophies that also see benefits in a higher share of immigrant labor. According to an analysis of the Congressional Record, Democrats now speak more positively about immigration than any party has at any time in our nation’s history.18 And until Trump’s takeover of the Republican Party in 2016, the right by and large agreed. It was, after all, President George W. Bush who tried to pass a bill establishing an easier pathway to citizenship for undocumented immigrants.19

President Trump signing the Laken Riley Act into law on January 29, 2025, which mandates the detention and deportation of illegal alien criminals and allows states to sue if immigration laws aren’t enforced by the federal government. It is the first piece of legislation passed on the issue of immigration in over 20 years. It drew wide bipartisan support, with 46 House Democrats and 10 Democratic senators joining all Republicans in backing it. (Source: The White House)

My own research on immigration suggests that the backlash to decades of record-breaking immigration, which met little pushback from politicians in either party, fueled the surge of populist candidates on both the right with Donald Trump and on the left with Bernie Sanders.20 During the 2016 presidential campaign, both Trump and Sanders broke with their parties on the issue of immigration, with Sanders in 2015 criticizing open border policies as “right-wing proposal[s]” that “[make] people in this country more poor than they already are,”21 and Trump infamously stating that “they [illegal immigrants] are bringing drugs, they’re bringing crime” in his presidential announcement speech.22

A decade later, in 2025, the first bill Donald Trump signed upon returning to the presidency was the Laken Riley Act, the first piece of legislation passed on the issue of immigration in over 20 years.23 Republican backers of the new legislation hope it is the first in a series of laws that remake our immigration system for the first time since 1965. Trump has already signed executive orders to end birthright citizenship, expand expedited removals, and ban asylum claims as part of his “deport them all” agenda.24 Meanwhile, Democrats appear to have hired John Lennon as their chief immigration strategist, drafting policy straight from the lyrics of Imagine, where “there’s no countries” and with “all the people sharing all the world.” During the 2020 Democratic primary, Cory Booker promised to “virtually eliminate immigration detention” and fellow candidate Julian Castro unveiled a proposal to decriminalize illegal immigration and eliminate U.S. Immigration and Customs Enforcement (ICE).25 Admittedly, Democrats have recently been forced by public opinion to back off many of these proposals, yet they remain unable to offer a coherent immigration policy beyond vague rhetoric. Who knows how this will all shake out, but here are some things that should be considered:

Economic Impact of Immigration

There are over 100,000 peer-reviewed articles on immigration. Its inconsistent effects, unintended costs, and contradictory benefits make it one of the most complex topics for social scientists and policy makers to understand. If you are looking for an answer as to whether immigration is good or bad, stop reading now. This is not to say that there are no answers or that nothing is known, but what it does mean is that what follows is unlikely to satisfy any one side of the political debate or provide a definitive blueprint for how to fix our broken system.

A fragile consensus has emerged that high-skilled workers—no matter their defects—tend to pay off and outperform their low-skilled counterparts.

The economic effects of immigration are as varied as the immigrants themselves. Research suggests that immigrants who speak better English,262728 stay married longer,29 are more willing to relocate within the U.S.,30 and who have less immediate family in the countries from which they came,3132 contribute more and cost less. Conversely, those who are excessively obese,33 choose to settle in rural neighborhoods,34 are worse at making friends,35 and who avoid paying taxes36 contribute less and cost more. Most immigrants, however, straddle both sides of the ledger—offering a mix of strengths and liabilities, shaped by personal choices, circumstances, and the unpredictable forces of culture, policy, and industry change. Their economic impact is a tangle of self-interest, structural incentives, and social contingencies, producing outcomes that are neither fixed nor inevitable. Still, a fragile consensus has emerged that high-skilled workers—no matter their defects—tend to pay off and outperform their low-skilled counterparts. Reality, however, often disagrees.

Nowhere are these disagreements and the nuance of this issue clearer than when looking at the outcomes of immigrants across countries of origin. The high-skilled advantage appears ironclad when comparing Indians—the best educated (75 percent college graduates) and wealthiest ($72,000 median income) of all immigrant groups—to their next-door neighbors from Myanmar, who enter the United States as one of the least educated (23 percent college graduates) and, predictably, earn the least money among all Asian immigrants once here ($26,000 median income).37 Less predictable, however, are the economic outcomes of Mongolians, who are the third best educated among Asians (63 percent college graduates), yet make only slightly more than those from Myanmar ($28,000 median income) and suffer the highest rate of poverty across all Asian immigrant groups. Part of the explanation is that Mongolian immigrants’ rate of married-couple households ranks near the bottom of the Asian distribution—more than one and a half standard deviations below that of immigrants from both Myanmar and India—and that they rank second to last (ahead of only China) in English proficiency among the most educated Asian immigrant groups.38

A similar dynamic is observed when comparing39 sub-Saharan Africans to Caribbean-born Blacks, where the relationship between education (i.e., skill level) and earnings also proves elusive. Although sub-Saharan Africans are among the most educated immigrants in the country, they report some of the lowest incomes and rates of homeownership. Black immigrants from the Caribbean, meanwhile, arrive as one of the least educated groups, but end up with higher incomes and higher rates of homeownership. Disparities like these complicate efforts to predict who will do best and remind us that success is not simply a product of technical expertise, but is deeply influenced by forces such as upbringing, cultural adaptability, social capital, and other variables that are far more difficult to measure.

One easy-to-measure metric, however, is age. Younger immigrants almost always contribute more to the U.S. economy than older ones, regardless of skill level. One study40 showed that immigrants aged 18–24 without a bachelor’s degree generate a net fiscal impact more than twice as large as that of immigrants over 44 with a bachelor’s degree. The effect is even more pronounced among college graduates over the age of 54, whose net contribution is lower than that of immigrants who never graduated from high school but arrived younger than 35.

Even so, the distinction between low-skilled and high-skilled immigrants remains the most salient for most researchers. It offers a clear binary—one rooted in decades of research—and illuminates broader trends, even if it fails to capture certain key nuances or industry-specific dynamics. The first important fact in this debate, however, is that low-skilled immigrants greatly outnumber high-skilled ones. Around two-thirds of all immigrants (legal and illegal) to the United States qualify as low skilled—seventy percent of whom do not even have a high school diploma41—and a handful of broadly generalizable characteristics help explain why they tend to be worse for the economy than their higher skilled counterparts.

For starters, the vast majority of those entering this country illegally are unskilled laborers.42 They thereby bypass any third-party selection process or policy oversight that would allow the U.S. government to decide which immigrant groups are needed where, and to strategize the best ways to integrate them into the economy. Such strategies might include spreading their share of labor across greater geographic distances to avoid oversupplying local markets, or simply mandating that they learn English. Low-skilled immigrants are also more likely to rely on social welfare,43 contribute less in taxes,44 compete more for jobs in unionized industries where they undermine collective bargaining agreements,45 and exhibit overall lower rates of upward mobility.46 High-skilled immigrants aren’t always “better”—but they usually are. Exactly by how much, however, is where the research becomes less clear and defies easy quantification.

Some studies suggest that low-skilled immigrants significantly suppress wages, others suggest only marginal effects, and a few strain to make the case that there are virtually no negative economic impacts linked to importing low-skilled labor whatsoever. The usual suspect in this last category is a 1990 paper published by Berkeley economist David Card.47

The Ellis Island National Museum of Immigration is a living monument to the story of the American people. Housed inside the restored main building of the former immigration complex, the museum documents the rich story of American immigration through a carefully curated collection of photographs, heirlooms, and searchable historic records. (Source: The Statue of Liberty—Ellis Island Foundation, Inc.)

Card’s study48 sought to determine the effect of mass immigration on native wages by analyzing the Mariel Boatlift—the 1980 exodus, sanctioned by Fidel Castro, of mostly low-skilled workers from Cuba, which rapidly increased Miami’s labor force by more than 125,000. Comparing Miami to cities without an immigration surge, his analysis found no significant impact on wages for native-born workers, even among those without a high school diploma. Although wages in Miami fell overall, he found that they did not drop any more for those he regarded as competing with the Cuban arrivals than for those who weren’t. His conclusion ran counter to decades of previous research, but was substantiated by subsequent quasi-natural experiments in Israel49 and Denmark,50 which similarly failed to find major wage suppression caused by immigration. The prevailing explanation for these counterintuitive results is that immigrants contribute to both the supply of and demand for labor. That is, while they compete for jobs, they also create higher demand for goods and services, fueling employment and thereby mitigating the expected downward pressure on wages. Card’s proponents also argue that the long-term gains from immigration—such as increased entrepreneurship and labor specialization—are rarely captured by most analyses, while the immediate fiscal costs, particularly at the local level, exacerbate public perceptions of harm.51

But Card’s work is not without its critics. In 2015, Harvard economist George Borjas reanalyzed52 the Mariel Boatlift, finding important results that the original work’s averages had obscured. He found, for example, that high school dropouts experienced a 30 percent wage decline—an outcome he attributed to direct competition between low-skilled immigrants and similarly skilled native workers. While his findings were scrutinized for methodological issues, including a small sample size and selective exclusions, they highlight how aggregate effects often obscure the uneven distribution of impacts across different groups. Research by the Migration Observatory at the University of Oxford53 and others5455 lends further support to the more common-sense intuition that low-skilled immigration tends to depress wages and reduce job opportunities, particularly in the short term, before labor markets have time to adjust. Then there are Card’s own findings in subsequent research,56 which predict that “an inflow rate of 10 percent for one occupation group would reduce relative wages for [that] occupation by 1.5 percent” and result in “a 0.5-percentage-point reduction in the employment rate of the group.” The same model projects a 3 percent wage loss when the immigrant inflow rate increases to 20 percent.
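
Taken at face value, the quoted estimates describe a simple linear relationship between an occupation’s immigrant inflow rate and its wages and employment. Here is a small sketch of that projection (a reading of the numbers quoted above, not Card’s actual model or code):

```python
# Linear projection implied by the estimates quoted above: a 10
# percent occupation-specific immigrant inflow reduces relative
# wages by 1.5 percent and the employment rate by 0.5 percentage
# points. Assumes the relationship scales linearly, as the
# article's 20 percent example suggests; illustrative only.
def projected_effects(inflow_rate: float) -> tuple[float, float]:
    wage_loss = 0.15 * inflow_rate        # 0.10 -> 1.5% wage loss
    employment_drop = 0.05 * inflow_rate  # 0.10 -> 0.5 pp drop
    return wage_loss, employment_drop

for inflow in (0.10, 0.20):
    wages, jobs = projected_effects(inflow)
    print(f"{inflow:.0%} inflow: {wages:.1%} relative wage loss, "
          f"{jobs * 100:.1f} pp employment-rate drop")
```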

These numbers, however, are among the more conservative estimates when compared with those published by the National Academy of Sciences57 in a comprehensive 600-plus-page report produced by a committee that included both Borjas and Card. According to their summary, immigration exerts a small to moderate effect on wages, varying based on factors like regional labor supply, industry composition, and macroeconomic conditions—but never in the positive direction.*

*Of note is the broadly consistent finding that whatever negative effects low-skilled immigrants produce, they are particularly bad for teenage Americans, who are in greatest competition with them and whose rate of high-school employment has plummeted in the last two decades. Given that this youngest generation is suffering from an epidemic of loneliness, anxiety, and overall low social capital, it would likely be a benefit to prioritize them for the local McDonald’s job rather than give it to the 30-year-old immigrant who is willing to be paid less and work in worse conditions.

The left’s claim that immigration doesn’t depress wages mirrors the same kind of speculative ideological optimism that Republicans engage in when discussing trickle-down economics and the Laffer curve. For neither group is the question empirical, and for both the flawed logic is evident even without complex data analysis.

Immigration wasn’t just a source of cheap labor for industrial barons—it was a strategic tool to divide, weaken, and conquer the working class.

In 2007,58 a local chicken-processing company in Stillmore, Georgia, lost 75 percent of its workforce in a single weekend after a raid by federal immigration agents. Within days, the company put up an ad in the local newspaper announcing jobs at higher wages. It’s basic economics: when the supply of a good increases, the price of that good falls. When that good is labor, the cheaper price means a cheaper cost of employment (i.e., lower wages). If an abundance of immigrants is obliged to work for low wages, employers have the bargaining advantage. If immigrants are few, however, employers are forced, as the economist Sumner Slichter put it, “to adapt jobs to men rather than men to jobs.”59 And while there is some evidence to suggest that immigrants are working jobs that Americans aren’t willing to do, more evidence suggests that it’s not a matter of unwillingness to do the job, but rather unwillingness to do the job at that wage.6061

What none of this means, however, is that low-skilled immigrant labor is necessarily bad for the economy at large. In fact, one of the chief benefits of immigration is low-cost goods. A broad body of economic literature suggests that immigrants, by expanding the labor force, help keep prices lower for many goods and services. Even more encouraging is the fact that their downward effect on prices is not wholly a function of paying workers less. Research62 by the United Food and Commercial Workers union indicates that, on average, pay to production workers accounts for only about four percent of the price of goods. A 25 percent decrease in wages would therefore cause only about a one percent decrease in prices.
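
The arithmetic behind that claim is simple pass-through: if production wages are about four percent of a good’s final price, even a large wage swing moves the price very little. A minimal sketch, assuming labor-cost changes pass fully into prices (an assumption, not a finding of the cited research):

```python
# Back-of-the-envelope price pass-through. Assumes wage changes
# flow fully into final prices, which overstates the effect if
# employers absorb part of the change in their margins.
wage_share = 0.04   # production wages as a share of final price
wage_change = 0.25  # a 25 percent change in production wages

price_change = wage_share * wage_change
print(f"Implied price change: {price_change:.1%}")  # 1.0%
```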

While the majority of research suggests that immigration lowers consumer prices, there are cases where the opposite is true. For goods that cannot be easily scaled, like homes, costs tend to increase when a market is suddenly flooded with a large influx of immigrants who drive up demand for a fixed asset whose supply cannot expand quickly. Speaking before the House Committee on Oversight and Accountability in September 2024, Steven A. Camarota, Director of Research at the Center for Immigration Studies, was asked to summarize the studied effects of immigration on the cost of living. His congressional testimony detailed how immigrant competition for housing—both rental and ownership—places upward pressure on prices, particularly in urban areas, where a five percent increase in the immigrant population leaves “the average household headed by a U.S.-born person” facing “a 12 percent increase in rent, relative to their income.”63

If we only look at macroeconomic indices like GDP, stock market performance, or consumer spending, it seems safe to say that even if low-skilled laborers suffer a reduction in wages, increased immigration is good for the overall economy. But an economy is not just a balance sheet—it exists within a society, and when social cohesion weakens, economic growth alone cannot sustain a nation.

The Effects of Immigration on Society, Culture, and Social Cohesion

In 1907, President Theodore Roosevelt commissioned a report on the effects of immigration. Four years later, in 1911, a bipartisan committee headed by Senator William Dillingham published a 41-volume study on the issue.64 In it, the committee concluded that:

The recent immigrants [have] been reluctant to identify themselves with the unions and to pay the regular dues under normal conditions, thus preventing the labor organizations from accumulating large resources for use in strengthening their general conditions and in maintaining their position in time of strikes. … [U]sed as strike breakers, they have taken advantage of labor difficulties and strikes to secure a foothold in the industry, and especially in the more skilled occupation. … [C]orporations, with keen foresight, had realized that by placing the recent immigrants in these positions they would break the strength of unionism for at least a generation.

What came to be known as the Dillingham Commission—famous for its data-driven analysis—set American immigration policy over the next half-century. Observing how Gilded Age oligarchs preferred to pack their factories with a motley mix of immigrants who had no common language, no common traditions, and no common sense of solidarity, the commission laid bare something that Carnegie, Rockefeller, and the rest had long understood—social cohesion is the backbone of labor’s power. Strong labor movements rely on shared identity, communication, and trust, which vanish when a workforce is fractured along linguistic and cultural lines. A factory floor where workers can’t communicate is a factory floor where strikes never get off the ground. Immigration wasn’t just a source of cheap labor for industrial barons—it was a strategic tool to divide, weaken, and conquer the working class.6566 As Friedrich Engels noted: “[the] bourgeoisie knows … how to play off one nationality against the other: Jews, Italians, Bohemians, etc., against Germans and Irish, and each one against the other.”

Last year (2024), the percentage of foreign-born residents reached the highest level in our nation’s history (over 15 percent), surpassing that of the last massive immigration wave to the United States in the late 19th century.67 A study by the Cato Institute68 found that immigration reduced union membership by 5.7 percent between 1980 and 2020, accounting for 29.7 percent of the overall decline in unions during that period. The authors attribute this to the fact that “immigrants have a lower preference for unionization” and that they “increase diversity in the workforce that, in turn, decreases solidarity among workers and raises the transaction costs of forming unions.” Further, unions derive their power from the ability to withhold labor, forcing employers to negotiate on wages and conditions. But when a constant influx of new workers stands ready to replace those on strike, that leverage collapses. It is no coincidence, according to The New York Times columnist David Leonhardt, that union strength and participation declined precipitously after the 1965 immigration bill.69

A study by the Cato Institute found that immigration reduced union membership by 5.7 percent between 1980 and 2020, accounting for 29.7 percent of the overall decline in unions during that period. [Figure: Based on Alex Nowrasteh and Benjamin Powell, Wretched Refuse? The Political Economy of Immigration and Institutions (Cambridge University Press, 2021), p. 208.]

The economy, like unions, both shapes and is shaped by human behavior. Disentangling the economic effects of immigration from its effects on our institutions, culture, and politics—each of which loops back to re-affect society in often unexpected ways—is impossible. Calculating the fiscal impact of immigrants by isolating variables, like unemployment or the price of goods, without considering broader questions like how immigration affects who we elect to government or what we think of our neighbors is like measuring physical health without regard to the patient’s mental well-being. However great the impact of immigration on wages, it is unlikely to outweigh the impact of the elected president’s tax policy or the nation’s social trust. Indeed, it might be said that immigration is good for the economy if it unites Americans behind good economic policies, and bad for the economy if it undermines collective action in a way that leaves us vulnerable to bad politicians who exploit the discord.

The social effects of immigration might be compared to American football—“the ultimate team sport.” With a 53-man roster and the most intricate coaching bureaucracy in all of sports, the best football teams are rarely those with the richest owners or the most Pro Bowl players. In football, the best teams are those with the best culture. Bill Belichick, widely considered the greatest head coach in NFL history, was notorious for cutting star players and replacing them with less talented ones who better fit the team mentality. One sign hung above all others in the Patriots locker room: “Mental toughness is doing what’s right for the team even when it’s not what’s best for you.”70 Similarly, Sean McVay, the Super Bowl-winning head coach of the L.A. Rams, has said that he has only one rule for his team: “we over me.”71

Countries are in many ways like football teams. Success depends less on individual ability and more on a shared identity, unifying leadership, and the willingness to put collective interests ahead of personal gain. A team—or a nation—fractured by self-interest falls apart. The political scientist Robert Putnam72 has argued that the single most important predictor of national prosperity—more than wealth, technology, or education—is social capital, a measure of the norms and networks that enable people to act collectively. High-social-capital societies produce stronger institutions, greater civic engagement, and overall increased trust among their citizens. Putnam writes:

School performance, public health, crime rates, clinical depression, tax compliance, philanthropy, race relations, community development, census returns, teen suicide, economic productivity, campaign finance, even simple human happiness—all are demonstrably affected by how (and whether) we connect with our family and friends and neighbours and co-workers.73

At its core, every cooperative system—economic, political, social, or otherwise—relies on trust. Modern capitalism, for example, is built on the collective faith that a dollar bill is worth what we all agree it’s worth. As Franklin D. Roosevelt explained in his first fireside chat74 to the American people: “There is an element in the readjustment of our financial system more important than currency, more important than gold, and that is the confidence of the people.” More famously, in his first inaugural address, he proclaimed that “the only thing we have to fear is fear itself.” Without confidence in one another, when fear replaces trust, the very mechanisms that allow a country to function—democracy, markets, even basic public order—begin to erode. If we can’t leave our doors unlocked, we are less likely to sacrifice for our neighbors at the ballot box. Here is Edmund Burke on the matter:

Manners are of more importance than laws. Manners are what vex or soothe, corrupt or purify, exalt or debase, barbarize or refine us, by a constant, steady, uniform, insensible operation, like that of the air we breathe in.

In his book Bowling Alone, Putnam documents the collapse of social capital in the United States since 1970. Indeed, Americans today spend less time with friends,75 have fewer friends to begin with,76 report fewer positive interactions with strangers,77 and experience less frequent human interaction in general.78 We are more distrustful of institutions79 and of our fellow citizens and neighbors.80 We also take less active roles in our communities, join fewer clubs, attend fewer dinner parties, go to church less, participate less in local government, and are members of fewer sports leagues—including bowling leagues.81 Meanwhile, government agencies at all three levels—local, state, and federal—have dramatically decreased their investment in public works projects that enhance social capital, such as libraries, parks, and community centers. As a result, America is suffering from a surge of what the economists Anne Case and Angus Deaton have called deaths of despair—from alcohol, drugs, or suicide—but which might more accurately be described as deaths due to loneliness.82

Although there are multiple explanations and a host of complex factors contributing to the decline in social capital, one of them is high immigration rates. Putnam’s most recent book, The Upswing, divides the last 150 years of American history into three distinct eras: a hyper-individualistic “me” culture in the Gilded Age, a collective “we” culture beginning at the turn of the 20th century, and a new “me” era after the 1960s. Curiously, these divisions map almost exactly onto historical shifts in American immigration policy (i.e., mass immigration during the low-social-capital Gilded Age, strict restriction during the high-social-capital Progressive Era, and a second wave of mass immigration in the current Neoliberal Era).

A low-trust society doesn’t debate policy—it searches for enemies.

Immigration’s effect on social capital is well documented, and researchers offer two main explanations for how immigration affects it. The first, known to researchers as the “conflict hypothesis,” requires us to distinguish between two types of social capital: bonding and bridging. Bonding social capital refers to social ties within groups that share a similar background (e.g., families or cultural or ethnic ingroups) and exclude non-members. The Jewish diamond district in New York City, for example, relies on tight networks of Hasidic Jews who sacrifice and collaborate to promote business within their own community while tending to exclude those who are not members of the ingroup.83

Bridging social capital, on the other hand, generates social networks that transcend differences across groups with different cultural, religious, or ethnic backgrounds. The conflict hypothesis predicts that increased diversity can, under certain conditions, intensify bonding social capital and tighten ingroup networks.84 Under these conditions, collective action and pursuing the common good can become increasingly burdensome. In some cases85 increased diversity has been shown to not only reduce trust between groups but also within ingroups. In the extreme,86 this can cause such a breakdown of overall social capital that no one trusts anyone and parochialism runs riot as various factions are pitted against each other. Putnam’s research supports the conflict hypothesis, describing the effect of too much immigration as a sort of “hunkering down.”87

The conflict hypothesis is also supported by evolutionary theory, which suggests that human sociality is built on “parochial altruism”88—the evolved tendency to direct altruism preferentially toward one’s own group and eschew interactions with outsiders. If natural selection favors this type of altruism, then fostering bridging social capital becomes a delicate balancing act that runs against ancient and deep-seated biases. Left unchecked, this behavior can turn hostile when groups compete for scarce resources89 and helps to explain why working-class Americans, who are in less of a position to “share the wealth,” hold more bigoted opinions of immigrants.90

A competing theory to the conflict hypothesis is commonly called the “contact hypothesis” of immigration. It predicts that, under certain conditions, exposure to people from different backgrounds decreases prejudice and over time increases overall social capital.91 Perhaps the most foundational study supporting the contact hypothesis is research on the integration of Black and White soldiers during WWII.92 The key finding is that White soldiers who had more contact with Black soldiers were more open to serving with them in mixed-race platoons. But context here is crucial. Differences between American Blacks and Whites serving in the military are trivial compared to those between immigrants and a host country’s native citizens. The soldiers, both White and Black, shared a common history, culture, and language, on top of what are probably the most important facts: they were situated in a strict and clear hierarchy (i.e., the military), shared a common enemy, and were putting their lives on the line for a common cause. Outside of these specific conditions, the literature finds little evidence for the contact hypothesis.93

A country can absorb large numbers of immigrants, but not if their difference is their most defining characteristic.

My own research94 tracks the effects of low social capital on our politics, finding that increased isolation and the breakdown of social networks can radicalize voting behavior and reshape electoral coalitions. In Germany, for example, support for right-wing populist parties has been associated with individuals who perceive ethnic outgroups as competitors for scarce resources.95 Another study in Western Europe reveals that “the electoral success of right-wing populist parties among workers seems primarily due to cultural protectionism: the defense of national identity against outsiders [i.e., immigrants].”96 A low-trust society doesn’t debate policy—it searches for enemies.

In summary, the dangers of too much diversity distinctly outweigh those of too much immigration. A country can absorb large numbers of immigrants, but not if their difference is their most defining characteristic. The United States, in this respect, is fortunate that the southern border is flooded with immigrants from culturally compatible nations like Mexico and those in Central America, who, unlike the refugees and economic migrants from the Greater Middle East who overwhelmed Europe in recent years, have proven to be exceptional integrators.97

The Ideological Nation

In his final speech as President of the United States, Ronald Reagan shared with the American people an excerpt from a letter he had recently received:

You can go to live in France, but you cannot become a Frenchman. You can go to live in Germany or Turkey, but you cannot become a German or a Turk. But anyone, from any corner of the Earth, can come to live in America and become an American.98

The United States is what political scientists and historians describe as an “ideological nation.”99 While most countries have been bound by shared ancestry and geographic borders (i.e., “blood and soil”), the United States is bound by a shared commitment to abstract principles—freedom, equality, opportunity, and so on. In the United States, becoming an American requires nothing more than that. To be American you do not have to be born here or even speak fluent English; you just have to become “one of us” and celebrate the 4th of July. And for all the accusations of xenophobia or intolerance, many of which hold truth, the evidence is clear: No other country attracts, welcomes, and ultimately integrates its foreign-born population better than the United States.

The right must recognize that cultural assimilation is a process, not an immediate transformation.

A study by the Manhattan Institute100 rates the United States second only to Canada on its assimilation index, which measures the degree of similarity between native- and foreign-born populations across various economic, cultural, and civic indicators. The primary reason Canada ranks above the U.S., however, is its higher rate of naturalization. And while naturalization is surely a great benefit, it is less a measure of how accepting and inclusive a culture is and more a reflection of different policy design. To extend the argument, America demonstrates an unrivaled ability to absorb and assimilate immigrants whose cultural orientations differ dramatically from its own, such as individuals from Muslim societies.

The United States is a nation of immigrants. No other nation has taken in more, no other economy has given them more, and nowhere else have immigrants been so seamlessly woven into the social fabric as in the United States. America’s unique ability to absorb a large proportion of newcomers is one of its greatest strengths, fueling an economic dynamism and cultural vibrancy unmatched throughout the world.

Italian family in the baggage room, Ellis Island, 1905 (Photo by Lewis Wickes Hine)

A lot of immigration is beneficial when it helps offset demographic crises; countries such as China,101 Japan,102 and many in Northern Europe103 have fallen far below replacement levels. South Korea104 has a total fertility rate of 0.72 (2.1 is needed to maintain a population) and appears headed for extinction within just a few generations. But a lot of immigration works best when immigrants integrate into their new nation, when they share in the national story, embrace common ideals, and feel bound by a sense of collective purpose. In other words, when instead of adding more immigrants to the country, we add—in the case of the United States—more Americans to the country. And our immigration policy works best when it’s built on reality, not ideology—when it acknowledges limits, institutes selection hierarchies, prioritizes Americans, and is willing to adapt.

Perhaps it is time to bring back that old idea of America as “the melting pot,” a national creed that has been abandoned by both the “salad bowl” Democrats—where assimilation is seen as a tool of cultural oppression—and the “build the wall” Republicans—who have embraced their own form of salad bowl separatism, treating immigrants as intruders instead of future Americans. The right must recognize that cultural assimilation is a process, not an immediate transformation, while the left must acknowledge that integration requires more than mere presence—it demands a shared commitment to American ideals. Immigrants must be willing to adopt American values. Immigration, like any successful relationship, requires reciprocity.

Finally, if the country wishes to continue reaping the rewards of immigration, it must first reaffirm its own identity and decide whether it still believes in the promise of America itself.


Skeptoid #992: The Case of the Missing Beaumont Children

Skeptoid Feed - Tue, 06/10/2025 - 2:00am

Since psychic abilities do not exist outside the delusions of true believers, involving psychics in searches for missing persons is worse than useless.


Crisis of Confidence

Skeptic.com feed - Tue, 06/10/2025 - 12:00am
Introduction

The words you’re about to read were spoken by President Carter nearly 50 years ago. They echo through time with eerie relevance. Was it foresight—or have we simply not changed?

Each generation perceives its challenges as unprecedented, and today’s turbulence may seem unparalleled. Yet history teaches us otherwise. The anxieties of one era often echo those of another, revealing patterns of uncertainty, resilience, and continuity that transcend time.

We do not present this speech as an endorsement of any political figure or ideology. Rather, we recognize the wisdom and the historical perspective it provides. The concerns that shape our world—economic instability, global unrest, and the burden of leadership—transcend partisan divides.

Read carefully. You may find that what feels like a crisis of today is, in many ways, a recurrence of the past.

♦ ♦ ♦

JULY 15, 1979

I promised you a president who is not isolated from the people, who feels your pain, and who shares your dreams and who draws his strength and his wisdom from you.

During the past three years I’ve spoken to you on many occasions about national concerns, the energy crisis, reorganizing the government, our nation’s economy, and issues of war and especially peace. But over those years the subjects of the speeches, the talks, and the press conferences have become increasingly narrow, focused more and more on what the isolated world of Washington thinks is important. Gradually, you’ve heard more and more about what the government thinks or what the government should be doing and less and less about our nation’s hopes, our dreams, and our vision of the future.

It’s clear that the true problems of our Nation are much deeper—deeper than gasoline lines or energy shortages.

Ten days ago, I had planned to speak to you again about a very important subject—energy. But as I was preparing to speak, I began to ask myself the same question that I now know has been troubling many of you. Why have we not been able to get together as a nation to resolve our serious energy problem?

It’s clear that the true problems of our Nation are much deeper—deeper than gasoline lines or energy shortages, deeper even than inflation or recession. And I realize more than ever that as president I need your help. So I decided to reach out and listen to the voices of America.

I invited to Camp David people from almost every segment of our society—business and labor, teachers and preachers, governors, mayors, and private citizens. And then I left Camp David to listen to other Americans, men and women like you.

It has been an extraordinary ten days, and I want to share with you what I’ve heard.

“Some of your Cabinet members don’t seem loyal. There is not enough discipline among your disciples.”

“Don’t talk to us about politics or the mechanics of government, but about an understanding of our common good.”

“Mr. President, we’re in trouble. Talk to us about blood and sweat and tears.”

Many people talked about themselves and about the condition of our nation.

This from a young woman in Pennsylvania: “I feel so far from government. I feel like ordinary people are excluded from political power.”

And this from a young Chicano: “Some of us have suffered from recession all our lives.”

This kind of summarized a lot of other statements: “Mr. President, we are confronted with a moral and a spiritual crisis.”

Several of our discussions were on energy, and I have a notebook full of comments and advice. I’ll read just a few.

“We can’t go on consuming 40 percent more energy than we produce. When we import oil we are also importing inflation plus unemployment.”

“We’ve got to use what we have. The Middle East has only five percent of the world’s energy, but the United States has 24 percent.”

And this is one of the most vivid statements: “Our neck is stretched over the fence and OPEC has a knife.”

“There will be other cartels and other shortages. American wisdom and courage right now can set a path to follow in the future.”

This was a good one: “Be bold, Mr. President. We may make mistakes, but we are ready to experiment.”

These ten days confirmed my belief in the decency and the strength and the wisdom of the American people, but it also bore out some of my long-standing concerns about our nation’s underlying problems.

Woman in graffiti-marked subway car, New York, May 1973 (Photo by Erik Calonius, U.S. National Archives and Records Administration)

I know, of course, being president, that government actions and legislation can be very important. That’s why I’ve worked hard to put my campaign promises into law—and I have to admit, with just mixed success. But after listening to the American people I have been reminded again that all the legislation in the world can’t fix what’s wrong with America. So, I want to speak to you first tonight about a subject even more serious than energy or inflation. I want to talk to you right now about a fundamental threat to American democracy.

The erosion of our confidence in the future is threatening to destroy the social and the political fabric of America.

I do not mean our political and civil liberties. They will endure. And I do not refer to the outward strength of America, a nation that is at peace tonight everywhere in the world, with unmatched economic power and military might.

The threat is nearly invisible in ordinary ways. It is a crisis of confidence. It is a crisis that strikes at the very heart and soul and spirit of our national will. We can see this crisis in the growing doubt about the meaning of our own lives and in the loss of a unity of purpose for our nation.

The erosion of our confidence in the future is threatening to destroy the social and the political fabric of America.

The confidence that we have always had as a people is not simply some romantic dream or a proverb in a dusty book that we read just on the Fourth of July.

It is the idea which founded our nation and has guided our development as a people. Confidence in the future has supported everything else—public institutions and private enterprise, our own families, and the very Constitution of the United States. Confidence has defined our course and has served as a link between generations. We’ve always believed in something called progress. We’ve always had a faith that the days of our children would be better than our own.

Human identity is no longer defined by what one does, but by what one owns.

Our people are losing that faith, not only in government itself but in the ability as citizens to serve as the ultimate rulers and shapers of our democracy. As a people we know our past and we are proud of it. Our progress has been part of the living history of America, even the world. We always believed that we were part of a great movement of humanity itself called democracy, involved in the search for freedom, and that belief has always strengthened us in our purpose. But just as we are losing our confidence in the future, we are also beginning to close the door on our past.

In a nation that was proud of hard work, strong families, close-knit communities, and our faith in God, too many of us now tend to worship self-indulgence and consumption. Human identity is no longer defined by what one does, but by what one owns. But we’ve discovered that owning things and consuming things does not satisfy our longing for meaning. We’ve learned that piling up material goods cannot fill the emptiness of lives which have no confidence or purpose.

The symptoms of this crisis of the American spirit are all around us. For the first time in the history of our country a majority of our people believe that the next five years will be worse than the past five years. Two-thirds of our people do not even vote. The productivity of American workers is actually dropping, and the willingness of Americans to save for the future has fallen below that of all other people in the Western world.

Pamphlet cover published in New York City, June 1975. Part of a propaganda campaign by the Council for Public Safety, a labor union representing police officers.

As you know, there is a growing disrespect for government and for churches and for schools, the news media, and other institutions. This is not a message of happiness or reassurance, but it is the truth and it is a warning.

These changes did not happen overnight. They’ve come upon us gradually over the last generation, years that were filled with shocks and tragedy.

Washington, D.C., has become an island. The gap between our citizens and our government has never been so wide.

We were sure that ours was a nation of the ballot, not the bullet, until the murders of John Kennedy and Robert Kennedy and Martin Luther King Jr. We were taught that our armies were always invincible and our causes were always just, only to suffer the agony of Vietnam. We respected the presidency as a place of honor until the shock of Watergate. We remember when the phrase “sound as a dollar” was an expression of absolute dependability, until ten years of inflation began to shrink our dollar and our savings. We believed that our nation’s resources were limitless until 1973, when we had to face a growing dependence on foreign oil.

These wounds are still very deep. They have never been healed. Looking for a way out of this crisis, our people have turned to the Federal government and found it isolated from the mainstream of our nation’s life. Washington, D.C., has become an island. The gap between our citizens and our government has never been so wide. The people are looking for honest answers, not easy answers; clear leadership, not false claims and evasiveness and politics as usual.

We simply must have faith in each other, faith in our ability to govern ourselves, and faith in the future of this nation.

What you see too often in Washington and elsewhere around the country is a system of government that seems incapable of action. You see a Congress twisted and pulled in every direction by hundreds of well-financed and powerful special interests. You see every extreme position defended to the last vote, almost to the last breath by one unyielding group or another. You often see a balanced and a fair approach that demands sacrifice, a little sacrifice from everyone, abandoned like an orphan without support and without friends.

Often you see paralysis and stagnation and drift. You don’t like it, and neither do I. What can we do?

First of all, we must face the truth, and then we can change our course. We simply must have faith in each other, faith in our ability to govern ourselves, and faith in the future of this nation. Restoring that faith and that confidence to America is now the most important task we face. It is a true challenge of this generation of Americans.

Passengers ride spray-painted car in New York City, May 1973 (Photo by Erik Calonius, U.S. National Archives and Records Administration)

We know the strength of America. We are strong. We can regain our unity. We can regain our confidence. We are the heirs of generations who survived threats much more powerful and awesome than those that challenge us now. Our fathers and mothers were strong men and women who shaped a new society during the Great Depression, who fought world wars, and who carved out a new charter of peace for the world.

We are at a turning point in our history. There are two paths to choose.

We ourselves are the same Americans who just ten years ago put a man on the Moon. We are the generation that dedicated our society to the pursuit of human rights and equality. And we are the generation that will win the war on the energy problem and in that process rebuild the unity and confidence of America.

We are at a turning point in our history. There are two paths to choose. One is a path I’ve warned about tonight, the path that leads to fragmentation and self-interest. Down that road lies a mistaken idea of freedom, the right to grasp for ourselves some advantage over others. That path would be one of constant conflict between narrow interests ending in chaos and immobility. It is a certain route to failure.

All the traditions of our past, all the lessons of our heritage, all the promises of our future point to another path, the path of common purpose and the restoration of American values. That path leads to true freedom for our nation and ourselves. We can take the first steps down that path as we begin to solve our energy problem.

You see a Congress twisted and pulled in every direction by hundreds of well-financed and powerful special interests.

Energy will be the immediate test of our ability to unite this nation, and it can also be the standard around which we rally. On the battlefield of energy we can win for our nation a new confidence, and we can seize control again of our common destiny.

In little more than two decades we’ve gone from a position of energy independence to one in which almost half the oil we use comes from foreign countries, at prices that are going through the roof. Our excessive dependence on OPEC has already taken a tremendous toll on our economy and our people. This is the direct cause of the long lines which have made millions of you spend aggravating hours waiting for gasoline. It’s a cause of the increased inflation and unemployment that we now face. This intolerable dependence on foreign oil threatens our economic independence and the very security of our nation. The energy crisis is real. It is worldwide. It is a clear and present danger to our nation. These are facts and we simply must face them.

You know we can do it. We have the natural resources. We have more oil in our shale alone than several Saudi Arabias. We have more coal than any nation on Earth. We have the world’s highest level of technology. We have the most skilled work force, with innovative genius, and I firmly believe that we have the national will to win this war.

I do not promise you that this struggle for freedom will be easy. I do not promise a quick way out of our nation’s problems, when the truth is that the only way out is an all-out effort. What I do promise you is that I will lead our fight, and I will enforce fairness in our struggle, and I will ensure honesty. And above all, I will act. We can manage the short-term shortages more effectively and we will, but there are no short-term solutions to our long-range problems. There is simply no way to avoid sacrifice.

The energy crisis is real. It is worldwide. It is a clear and present danger to our nation.

Little by little we can and we must rebuild our confidence. We can spend until we empty our treasuries, and we may summon all the wonders of science. But we can succeed only if we tap our greatest resources—America’s people, America’s values, and America’s confidence.

I have seen the strength of America in the inexhaustible resources of our people. In the days to come, let us renew that strength in the struggle for an energy secure nation.

In closing, let me say this: I will do my best, but I will not do it alone. Let your voice be heard. Whenever you have a chance, say something good about our country. With God’s help and for the sake of our nation, it is time for us to join hands in America. Let us commit ourselves together to a rebirth of the American spirit. Working together with our common faith we cannot fail.

Thank you and good night.

Categories: Critical Thinking, Skeptic

GMOs May Save Florida Citrus

neurologicablog Feed - Mon, 06/09/2025 - 5:05am

Citrus greening (also called Huanglongbing or HLB) is an infectious disease affecting citrus trees in Florida. It is caused by a bacterium, Candidatus Liberibacter asiaticus, which is spread by an invasive insect, the Asian citrus psyllid. Since 2004 it has reduced Florida citrus production by 90% and doubled production costs. It is close to completely wiping out the Florida citrus industry. Various methods have been tried to keep it under control, but they have all failed.

There is good news, however. The University of Florida, in collaboration with the company Soilcea, has developed a GMO orange that is highly resistant to citrus greening. They expect to have commercial trees available by the spring of 2027. The limiting factor is that it takes years to grow test trees to confirm that they remain resistant and produce viable fruit. So far the test trees are doing well.

The company licensed the findings of Nian Wang, a professor at the University of Florida, who found that the bacterium depends on interactions with the host that can be traced to several genes. The company used CRISPR to silence those genes, making it more difficult for the bacterium to infect the plants and thereby rendering them resistant to infection. This approach has apparently worked, although again we won’t be sure until the first test trees reach maturity.
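For readers curious about the mechanics, the first step in this kind of CRISPR work is choosing where the nuclease can act. Below is a minimal Python sketch, purely illustrative, that scans a DNA sequence for SpCas9 target sites (a 20-base protospacer followed by an “NGG” PAM). The sequence is invented for demonstration; real guide design against the actual citrus genome uses dedicated tools and off-target scoring.

import re

def find_guide_sites(seq, guide_len=20):
    """Return (position, guide, PAM) for every NGG PAM that has room
    for a full-length protospacer immediately upstream of it."""
    seq = seq.upper()
    sites = []
    # Lookahead so overlapping PAM sites are all reported.
    for m in re.finditer(r"(?=([ACGT]GG))", seq):
        pam_start = m.start()
        if pam_start >= guide_len:
            guide = seq[pam_start - guide_len:pam_start]
            sites.append((pam_start - guide_len, guide, m.group(1)))
    return sites

# Invented stand-in for a fragment of a citrus susceptibility gene:
fragment = "ATGGCTTCAGGATCCGTTAACGGTTTACCAGGGCATTTGAGGCCTAAACGGT"
for pos, guide, pam in find_guide_sites(fragment):
    print(f"guide at {pos:2d}: {guide}  PAM {pam}")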

Also, the USDA has determined that these genetically altered cultivars are not subject to regulation under federal rules, which removes a significant barrier to commercialization. This is a bit controversial. In 2020 the USDA decided to exempt certain kinds of genetic engineering from requiring USDA approval. For example, simply silencing existing genes was no longer considered creating a “GMO,” because no new genes were being introduced. However, a court later struck down that rule, saying that the USDA still has to review and approve such cultivars. This makes the current USDA decision interesting – the agency is essentially saying that this cultivar does not fall within its regulatory sphere.

“APHIS did not identify any plausible pathway by which your modified sweet orange, or any sexually compatible relatives, would pose an increased plant pest risk relative to comparator sweet orange plants,” the agency said in the regulatory determination.

The EPA also has to determine there is no environmental risk. The FDA has to determine that the resulting oranges are substantially similar to existing varieties and therefore pose no health risk. Given the nature of these modifications, these should not be significant barriers.

At this point it is very possible that these CRISPR-modified oranges, substantially resistant to HLB, will save and revitalize the Florida citrus industry. This is exactly what has already happened in the Hawaiian papaya industry. The industry was almost wiped out by the ringspot virus. A GMO papaya was introduced which saved the industry. Like these oranges, the GMO papaya achieves resistance through gene silencing, although in that case the silencing was triggered by an introduced viral coat-protein gene. Hawaii culturally remains anti-GMO, but the state quietly carved out an exception for the GMO papaya.

We are also seeing the same thing with the American chestnut. This tree was mostly wiped out by an Asian blight fungus. A GMO variety resistant to the fungus has been created, although there is a question about how well the trees are performing in the field. Researchers may need to do some more genetic tweaking before they get a variety that is worth planting widely.

Last year a GMO banana variety resistant to the Tropical Race 4 fungus, which is currently wiping out the commercial Cavendish banana industry, was approved for human consumption in Australia and New Zealand. This genetic variety makes the banana plants almost immune to the fungus. While it is currently considered a backup plan if other attempts at fighting the fungus fail, it will very likely be necessary to save the banana industry.

It is now a simple fact of life that in order to grow enough food to feed the world, we need massive agriculture. Growing a lot of the same plants invites pests, so like it or not we are now in an arms race with them. There are lots of things we can do to mitigate pests, and most experts recommend integrated pest management, which uses a variety of methods together. But even so, in many cases we are simply losing this battle.

The only technology that is fast and powerful enough to keep up with evolving pests, and with the spread of pests caused by globalization, is genetic engineering. We are fortunate that genetic technology has advanced so much so quickly over the last 20 years. Without it, agricultural industries would be toppling one by one in the face of evolving pests. So far the anti-GMO propaganda industry has either opposed these crop-saving cultivars, usually by saying they are not necessary, or quietly ignored them and focused its attention elsewhere. What it never does is admit that GMO technology has saved entire crop industries and will be necessary to save more in the future.


The post GMOs May Save Florida Citrus first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #1039 - Jun 7 2025

Skeptics Guide to the Universe Feed - Sat, 06/07/2025 - 9:00am
Interview with Emily Schoerning; Quickie with Bob: Prepping for Q-Day; News Items: Seed Oils, Lead into Gold, American Lysenkoism, The Screwworm is Coming, Galactic Collision; Who's That Noisy; Your Questions and E-mails: Dream Learning; Science or Fiction
Categories: Skeptic

New Potential mRNA HIV Treatment

neurologicablog Feed - Fri, 06/06/2025 - 4:52am

First, don’t get too excited – this is a laboratory study, which means that even if all goes well we are a decade or more from an actual treatment. The study, however, is a nice demonstration of the potential of recent biotechnology, specifically mRNA technology and lipid nanoparticles. We are seeing some real benefits building on decades of basic science research. It is a hopeful sign of the potential of biotechnology to improve our lives. It is also a painful reminder of how much damage is being done by the current administration’s defunding of that very science and the institutions that make it happen.

The study – Efficient mRNA delivery to resting T cells to reverse HIV latency – looks for a solution to a particular problem in the treatment of HIV. The virus likes to hide inside white blood cells (CD4+ T cells). There the virus waits in a latent stage and can reactivate later. It acts as a reservoir of virus that can keep the infection going, even in the face of effective anti-HIV drugs and immune attack. It is part of what makes HIV so difficult to fully eliminate from the body.

We already have drugs that address this issue. They are called, appropriately, latency-reversing agents (LRAs), and include romidepsin, panobinostat, and vorinostat. These drugs inhibit an enzyme that allows the virus to hide inside white blood cells. So this isn’t a new idea, and there are already effective treatments, which do make other anti-HIV drugs more effective and keep viral counts very low. But they are not quite effective enough to allow for total elimination of the virus. More, and more effective, LRAs could therefore be highly beneficial to HIV treatment.

This new approach addresses the fact that latent HIV is “transcriptionally silent,” meaning its DNA is not being transcribed into RNA and no HIV proteins are being made. It therefore cannot be detected by the immune system, and it is not engaging in activity that allows anti-HIV drugs to target it. What the researchers did was create a messenger RNA (mRNA) designed to force the latent viruses to become transcriptionally active. This allows them to be targeted by the immune system and anti-HIV drugs.

In order to get the mRNA to the target T cells, they encased it in lipid nanoparticles. These are basically tiny fat bubbles that can be engineered to carry specific proteins on their membrane, which guide the particles to a particular target to deliver the payload. This is one of those technologies that doesn’t get a lot of headlines itself, but it is often the tech behind the headlines. The recent case of the personalized CRISPR treatment of an infant, KJ, with a rare genetic mutation of the carbamoyl phosphate synthetase 1 (CPS1) enzyme is an example. The treatment has apparently worked very well – and not surprisingly, the CRISPR payload was delivered by lipid nanoparticles.

The ex vivo study, using donated T cells from HIV patients, found:

“Encapsulating an mRNA encoding the HIV Tat protein, an activator of HIV transcription, LNP X enhances HIV transcription in ex vivo CD4+ T cells from people living with HIV. LNP X further enables the delivery of clustered regularly interspaced short palindromic repeats (CRISPR) activation machinery to modulate both viral and host gene transcription.”

In other words, it works, at least from a basic science perspective. Next up will be animal studies, then human safety trials, and finally human efficacy trials. This will take years, and the treatment may not ultimately work. But it’s very promising. And again, perhaps the most exciting thing about this research is that it further demonstrates the potential of CRISPR, mRNA technology, and lipid nanoparticles. We are transitioning into a new phase of advanced medical technology. But there are, of course, years and even decades of work ahead to make increasing use of these technologies. They are still tricky and expensive, and need to be tailored to each specific disease, and in some cases to specific patients.

KJ’s treatment likely cost about a million dollars to develop (similar to the cost of the liver transplant that may now not be necessary), and required the collaboration of about half a dozen institutions. This is happening in the US because of our history of heavily funding biomedical research. Such science funding is an investment that supercharges our economy and is the secret to America’s dominance as a superpower. Sabotaging this engine of innovation and competitiveness is an incredible self-inflicted wound that will harm American competitiveness for a generation or more.

We may never fully recover. It is creating a brain drain from the US and allowing other countries, allies and enemies alike, to bolster their science and research infrastructure. China is likely to benefit the most. And once those institutions of research are created, they won’t go away just because we try to build back what was lost. This is likely to result in an essentially permanent shift of advantage in science and technology from the US to China and elsewhere. It is a historical advantage that we cannot simply recreate. And it’s not just a shift – this will slow the pace of advance for the whole world. Building institutional knowledge and capability takes decades. It is one of the most reckless things I have ever witnessed, and it’s still hard to grapple with how absolutely insane it is.

This self-destructive policy makes every science news item like this one bittersweet. We are sitting on this stunning biotechnology with the promise of transforming medicine, while we are dismantling the infrastructure that made it all possible.


The post New Potential mRNA HIV Treatment first appeared on NeuroLogica Blog.

Categories: Skeptic

Did Prohibition Really Work? Alcohol Prohibition as a Public Health Innovation

Skeptic.com feed - Thu, 06/05/2025 - 3:59pm

Probably few gaps between scholarly knowledge and popular conventional wisdom are as wide as the one regarding National Prohibition. “Everyone knows” that Prohibition failed because Americans did not stop drinking following ratification of the Eighteenth Amendment in 1919 and passage of its enforcement legislation, the Volstead Act. If the question arises why Americans adopted such a futile measure in the first place, the unnatural atmosphere of wartime is cited. Liquor’s illegal status furnished the soil in which organized crime flourished. The conclusive proof of Prohibition’s failure is, of course, the fact that the Eighteenth Amendment became the only constitutional amendment to be repealed.

Historians have shown, however, that National Prohibition was no fluke, but rather the fruit of a century-long series of temperance movements springing from deep roots in the American reform tradition. Furthermore, Americans were not alone during the first quarter of the 20th century in adopting prohibition on a large scale: other jurisdictions enacting similar measures included Iceland, Finland, Norway, both czarist Russia and the Soviet Union, Canadian provinces, and Canada’s federal government.1 A majority of New Zealand voters twice approved national prohibition but never got it. As a result of 100 years of temperance agitation, the American cultural climate at the time Prohibition went into effect was deeply hostile to alcohol, and this antagonism manifested itself clearly through a wave of successful referenda on statewide prohibition.

Thinking of Prohibition as a public health innovation offers a potentially fruitful path toward comprehending both the story of the dry era and the reasons why it continues to be misunderstood.

Although organized crime flourished under its sway, Prohibition was not responsible for its appearance, as organized crime’s post-Repeal persistence has demonstrated. Drinking habits underwent a drastic change during the Prohibition Era, and Prohibition’s flattening effect on per capita consumption continued long after Repeal, as did a substantial hard core of popular support for Prohibition’s return. Repeal itself became possible in 1933 primarily because of a radically altered economic context—the Great Depression. Nevertheless, the failure of National Prohibition continues to be cited without contradiction in debates over matters ranging from the proper scope of government action to specific issues such as control of other consciousness-altering drugs, smoking, and guns.

We historians collectively are partly to blame for this gap. We simply have not synthesized from disparate studies a compelling alternative to popular perception.2 Nevertheless, historians are not entirely culpable for prevalent misunderstanding; also responsible are changed cultural attitudes toward drinking, which, ironically, Prohibition itself helped to shape. Thinking of Prohibition as a public health innovation offers a potentially fruitful path toward comprehending both the story of the dry era and the reasons why it continues to be misunderstood.

Temperance Thought Before National Prohibition

Although many prohibitionists were motivated by religious faith, American temperance reformers learned from an early point in their movement’s history to present their message in ways that would appeal widely to citizens of a society characterized by divergent and clashing scriptural interpretations. Temperance, its advocates promised, would energize political reform, promote community welfare, and improve public health. Prohibitionism, which was inherently political, required even more urgent pressing of such claims for societal improvement.3 Through local contests in communities across the nation, liquor control in general and Prohibition in particular became the principal stage on which Americans confronted public health issues, long before public health became a field of professional endeavor.

By the beginning of the 20th century, prohibitionists agreed that a powerful liquor industry posed the greatest threat to American society and that only Prohibition could prevent Americans from falling victim to its seductive wiles. These conclusions were neither willful nor arbitrary, as they had been reached after three quarters of a century of experience. Goals short of total abstinence from all that could intoxicate and less coercive means—such as self-help, mutual support, medical treatment, and sober recreation—had been tried and, prohibitionists agreed, had been found wanting.4

For prohibitionists, as for other progressives, the only battleground where a meaningful victory might be won was the collective: the community, the state, or the nation. The Anti-Saloon League (ASL), which won leadership of the movement after 1905, was so focused on Prohibition that it did not even require of its members a pledge of personal abstinence. Battles fought on public ground certainly heightened popular awareness of the dangers of alcohol. In the mass media before 1920, John Barleycorn found few friends. Popular fiction, theater, and the new movies rarely represented drinking in positive terms and consistently portrayed drinkers as flawed characters. Most family magazines, and even many daily newspapers, rejected liquor ads.5 New physiological and epidemiological studies published around the turn of the century portrayed alcohol as a depressant and plausibly associated its use with crime, mental illness, and disease. The American Medical Association went on record in opposition to the use of alcohol for either beverage or therapeutic purposes.6 But most public discourse on alcohol centered on its social, not individual, effects.7

The conclusive proof of Prohibition’s failure is, of course, the fact that the Eighteenth Amendment became the only constitutional amendment to be repealed.

The only significant exception was temperance education in the schools. By 1901, every state required that its schools incorporate “Scientific Temperance Instruction” into the curriculum, and one half of the nation’s school districts further mandated use of a textbook that portrayed liquor as invariably an addictive poison. But even as it swept through legislative chambers, the movement to indoctrinate children in temperance ideology failed to carry with it the educators on whose cooperation its success in the classrooms depended; teachers tended to regard Scientific Temperance Instruction as neither scientific nor temperate. After 1906, temperance instruction became subsumed within more general lessons on hygiene, and hygiene classes taught that the greatest threats to health were environmental, and the proper responses were correspondingly social, not individual.8

By the time large numbers of voters were confronted with a choice whether or not to support a prohibitionist measure or candidate for office, public discourse over alcohol had produced a number of prohibitionist supporters who were not themselves abstainers. That is, they believed that it was a good idea to control someone else’s drinking (perhaps everyone else’s), but not their own. A new study of cookbooks and etiquette manuals suggests that this was likely the case for middle-class women, the most eager recruits to the prohibition cause, who were gaining the vote in states where prohibition referenda were boosting the case for National Prohibition. In addition to the considerable alcoholic content of patent medicines, which women and men (and even children) were unknowingly ingesting, women were apparently serving liquor in their recipes and with meals. In doing so, they were forging a model of domestic consumption in contrast to the mode of public drinking adopted by men in saloons and clubs.9

Self-control lay at the heart of the middle-class self-image, and middle-class prohibitionists simply acted on the prejudices of their class when they voted to close saloons while allowing drinking to continue in settings they considered to be respectable. Some state prohibition laws catered to such sentiments when they prohibited the manufacture and sale of alcoholic beverages, but allowed importation and consumption.10 A brisk mail-order trade flourished in many dry communities. Before 1913, federal law and judicial decisions in fact prevented states from interfering with the flow of liquor across their borders. When Congress acted in 1913, the Webb–Kenyon Act only forbade importation of liquor into a dry state when such commerce was banned by the law of that state.11

Why National Prohibition?

At the beginning of the 20th century, wet and dry forces had reached a stalemate. Only a handful of states maintained statewide prohibition, and enforcement of prohibitory law was lax in some of those. Dry territory expanded through local option, especially in the South, but this did not mean that drinking came to a halt in towns or counties that adopted local prohibition; such laws aimed to stop manufacture or sale (or both), not consumption.12 During the previous half-century, beer’s popularity had soared, surpassing spirits as the principal source of alcohol in American beverages, but, because of beer’s lower alcohol content, ethanol consumption per capita had changed hardly at all.13 Both drinking behavior and the politics of drink, however, changed significantly after the turn of the century when the ASL assumed leadership of the prohibition movement.

Between 1900 and 1913, Americans began to drink more and more. Beer production jumped 67 percent, from 1.2 billion to 2 billion gallons (4.6 billion to 7.6 billion liters), and the volume of tax-paid spirits grew 52 percent, from 97 million to 147 million gallons (367 million to 556 million liters). Per capita consumption of ethanol increased by nearly a third, a significant spike over such a short period of time.14
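Those growth rates follow directly from the production endpoints just cited; as a quick arithmetic check (in LaTeX notation):

\[
\frac{2.0 - 1.2}{1.2} \approx 0.67 \quad \text{(beer)}, \qquad
\frac{147 - 97}{97} \approx 0.52 \quad \text{(spirits)}.
\]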

Meanwhile, the area under prohibition steadily expanded as a result of local-option and statewide prohibition campaigns. Between 1907 and 1909, six states entered the dry column. By 1912, however, prohibitionist momentum on these fronts slowed, as the liquor industry began a political counteroffensive. In the following year, the ASL, encouraged by congressional submission to its demands in passing the Webb-Kenyon Act, launched a campaign for a prohibition constitutional amendment.

By the beginning of the 20th century, prohibitionists agreed that a powerful liquor industry posed the greatest threat to American society and that only Prohibition could prevent Americans from falling victim to its seductive wiles

The best explanation for this decision is simply that National Prohibition had long been the movement’s goal. The process of constitutional amendment, which in the same year the ASL launched its campaign both opened the way to a federal income tax and mandated direct election of U.S. senators (the Sixteenth and Seventeenth Amendments), seemed to be the most direct path to that goal.15 Its supporters expected that the campaign for an amendment would be long and that the interval between achievement of the amendment and their eventual object would also be lengthy. Ultimately, drinkers with entrenched habits would die off, while a new generation would grow up abstinent under the salubrious influence of prohibition.16 ASL leaders also needed to demonstrate their militance to ward off challenges from intramovement rivals, and the route to a constitutional amendment lay through state and national legislatures, where their method of pressuring candidates promised better results than seeking popular approval through a referendum in every state.17

Once the prohibition movement decided to push for a constitutional amendment, it had to negotiate the tortuous path to ratification. The fundamental requirement was sufficient popular support to convince federal and state legislators that voting for the amendment would help rather than hurt their electoral chances. The historical context of the Progressive Era provided four levers with which that support might be engineered, and prohibitionists manipulated them effectively. First, the rise in annual ethanol consumption to 2.6 U.S. gallons (9.8 liters) per capita of the drinking-age population, the highest level since the Civil War, did create a real public health problem.18 Rates of death diagnosed as caused by liver cirrhosis (15 per 100,000 total population) and chronic alcoholism (10 per 100,000 adult population) were high during the early years of the 20th century.19

Two men pouring liquor into a storm drain (Source: National Photo Company Collection, Library of Congress, Prints and Photographs Division, Washington, DC, 1921)

Second, the political turbulence of the period—a growing socialist movement and bitter struggles between capitalists and workers—made prohibition seem less radical by contrast.20 Third, popular belief in moral law and material progress, trust in science, support for humanitarian causes and for “uplift” of the disadvantaged, and opposition to “plutocracy” offered opportunities to align prohibitionism with progressivism.21 Concern for public health formed a central strand of the progressive ethos, and, as one historian notes, “the temperance and prohibition movements can … be understood as part of a larger public health and welfare movement active at that time that viewed environmental interventions as an important means of promoting the public health and safety.”22 Finally, after a fleeting moment of unity, the alliance between brewers and distillers to repel prohibitionist attacks fell apart.23 The widespread local battles fought over the previous 20 years brought new support to the cause, and the ASL’s nonpartisan, balance-of-power method worked effectively.24

The wartime atmosphere during the relatively brief period of American participation in World War I played a minor role in bringing on National Prohibition. Anti-German sentiment, shamelessly whipped up and exploited by the federal government to rally support for the war effort, discredited a key anti-prohibitionist organization, the German-American Alliance. A federal ban on distilling, adopted to conserve grain, sapped the strength of another major wet player, the spirits industry.25 But most prohibition victories at the state level and in congressional elections were won before the United States entered the war, and the crucial ratification votes occurred after the war’s end.26

In sum, although the temperance movement was a century old when the Eighteenth Amendment was adopted, and National Prohibition had been a goal for many prohibitionists for half that long, its achievement came about as a product of a specific milieu. Few reform movements manage to win a constitutional amendment. Nevertheless, that achievement, which seemed at the time so permanent—no constitutional amendment had ever before been repealed—was vulnerable to shifts in the context on which it depended.

Public Health Consequences of Prohibition

We forget too easily that Prohibition wiped out an entire industry. In 1916, there were 1,300 breweries producing full-strength beer in the United States; a decade later there were none. Over the same period, the number of distilleries was cut by 85 percent, and most of the survivors produced little but industrial alcohol. Legal production of near beer used less than one tenth the amount of malt, one twelfth the rice and hops, and one thirtieth the corn used to make full-strength beer before National Prohibition. The 318 wineries of 1914 were reduced to 27 by 1925.27 The number of liquor wholesalers was cut by 96 percent and the number of legal retailers by 90 percent. From 1919 to 1929, federal tax revenues from distilled spirits dropped 96 percent, from $365 million to less than $13 million, and revenue from fermented liquors plunged from $117 million to virtually nothing.28

The Coors Brewing Company turned to making near beer, porcelain products, and malted milk. Miller and Anheuser-Busch took a similar route.29 Most breweries, wineries, and distilleries, however, closed their doors forever. Historically, the federal government has played a key role in creating new industries, such as chemicals and aerospace, but very rarely has it acted decisively to shut down an industry.30 The closing of so many large commercial operations left liquor production, if it were to continue, in the hands of small-scale domestic producers, a dramatic reversal of the normal course of industrialization.

Although organized crime flourished under its sway, Prohibition was not responsible for its appearance, as organized crime’s post-Repeal persistence has demonstrated.

Such industrial and economic devastation was unexpected before the introduction of the Volstead Act, which followed adoption of the Eighteenth Amendment. The amendment forbade the manufacture, transportation, sale, importation, and exportation of “intoxicating” beverages, with “intoxicating” defined as containing 0.5 percent or more alcohol by volume, thereby prohibiting virtually all alcoholic drinks. The brewers, who had expected beer of moderate strength to remain legal, were stunned, but their efforts to overturn the definition were unavailing.31 The act also forbade possession of intoxicating beverages, but included a significant exemption for custody in one’s private dwelling for the sole use of the owner, his or her family, and guests. In addition to private consumption, sacramental wine and medicinal liquor were also permitted.

The brewers were probably not the only Americans to be surprised at the severity of the regime thus created. Voters who considered their own drinking habits blameless, but who supported prohibition to discipline others, also received a rude shock. That shock came with the realization that federal prohibition went much farther in the direction of banning personal consumption than all local prohibition ordinances and many state prohibition statutes. National Prohibition turned out to be quite a different beast than its local and state cousins.

Nevertheless, once Prohibition became the law of the land, many citizens decided to obey it. Referendum results in the immediate post-Volstead period showed widespread support, and the Supreme Court quickly fended off challenges to the new law. Death rates from cirrhosis and alcoholism, alcoholic psychosis hospital admissions, and drunkenness arrests all declined steeply during the latter years of the 1910s, when both the cultural and the legal climate were increasingly inhospitable to drink, and in the early years after National Prohibition went into effect. They rose after that, but generally did not reach the peaks recorded during the period 1900 to 1915. After Repeal, when tax data permit better-founded consumption estimates than we have for the Prohibition Era, per capita annual consumption stood at 1.2 U.S. gallons (4.5 liters), less than half the level of the pre-Prohibition period.32
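A quick check against the pre-Prohibition figure of 2.6 gallons cited earlier confirms the comparison (in LaTeX notation):

\[
\frac{1.2}{2.6} \approx 0.46 < \tfrac{1}{2},
\]

so post-Repeal consumption was indeed under half the earlier level.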

Prohibition affected alcoholic beverages differently. Beer consumption dropped precipitously. Distilled spirits made a dramatic comeback in American drinking patterns, reversing a three-quarters-of-a-century decline, although in volume spirits did not reach its pre-Prohibition level. Small-scale domestic producers gave wine its first noticeable, though small, contribution to overall alcohol intake, as wine-grape growers discovered that the Volstead Act failed to ban the production and sale of grape concentrate (sugary pulp that could be rehydrated and fermented to make wine).33

Unintended and Unexpected Consequences

Unexpected prosperity for wine-grape growers was not the only unintended consequence of National Prohibition. Before reviewing other unexpected outcomes, however, it is important to list the ways in which National Prohibition did fulfill prohibitionists’ expectations. The liquor industry was virtually destroyed, and this created a historic opportunity to socialize rising generations in a lifestyle in which alcohol had no place. To some degree, such socialization did take place, and the lessened consumption of the Prohibition Era reflects that. Although other forces contributed to its decline, Prohibition finished off the old-time saloon, with its macho culture and links to urban machine politics.34 To wipe out a long-established and well-entrenched industry, to change drinking habits on a large scale, and to sweep away such a central urban and rural social institution as the saloon are no small achievements.

One group of new drinkers—or newly public drinkers—whose emergence in that role was particularly surprising to contemporary observers was women.

Nevertheless, prohibitionists did not fully capitalize on their opportunity to bring up a new generation in abstemious habits. Inspired and led by the talented writers of the Lost Generation, the shapers of mass culture—first in novels, then in films, and finally in newspapers and magazines—altered the popular media’s previously negative attitude toward drink. In the eyes of many young people, especially the increasing numbers who populated colleges and universities, Prohibition was transformed from progressive reform to an emblem of a suffocating status quo.35 The intransigence of the dominant wing of the ASL, which insisted on zero tolerance in law enforcement, gave substance to this perception and, in addition, aligned the league with the Ku Klux Klan and other forces promoting intolerance.36 Thus, the work of attracting new drinkers to alcohol, which had been laid down by the dying liquor industry, was taken up by new hands.

One group of new drinkers—or newly public drinkers—whose emergence in that role was particularly surprising to contemporary observers was women. Such surprise, however, was a product of the prior invisibility of women’s domestic consumption: women had in fact never been as abstemious as the Woman’s Christian Temperance Union’s activism had made them appear.37 Women’s new willingness to drink in public—or at least in the semipublic atmosphere of the speakeasy—owed much to Prohibition’s achievement, the death of the saloon, whose masculine culture no longer governed norms of public drinking. The saloon’s demise also made it possible for women to band together to oppose Prohibition, as hundreds of thousands did in the Women’s Organization for National Prohibition Reform (WONPR).38

Public drinking by women and college youth and wet attitudes disseminated by cultural media pushed along a process that social scientists call the “normalization of drinking”—that is, the breakdown of cultural proscriptions against liquor. Normalization, part of the long history of decay in Victorian social mores, began before the Prohibition Era and did not fully bear fruit until long afterward, but the process gained impetus from both the achievements and the failures of National Prohibition.39

American Federation of Labor Prohibition demonstration, June 14, 1919 (Source: National Photo Company Collection, Library of Congress, Prints and Photographs Division, Washington, DC, 1919)

Other unintended and unexpected consequences of Prohibition included flourishing criminal activity centered on smuggling and bootlegging and the consequent clogging of the courts with drink-related prosecutions.40 Prohibition also forced federal courts to take on the role of overseer of government regulatory agencies, and the zeal of government agents stimulated new concern for individual rights as opposed to the power of the state.41 The bans on liquor importation and exportation crippled American ocean liners in the competition for transatlantic passenger service, thus contributing to the ongoing decline of the U.S. merchant marine, and created an irritant in diplomatic relations with Great Britain and Canada.42 Contrary to politicians’ hopes that the Eighteenth Amendment would finally take the liquor issue out of politics, Prohibition continued to roil the political waters even in the presidential seas, helping to carry Herbert Hoover first across the finish line in 1928 and to sink him four years later.43

Why Repeal?

All prohibitions are coercive, but their effects can vary across populations and banned articles. We have no estimates of the size of the drinking population on the eve of National Prohibition (or on the eve of wartime prohibition, which preceded it by several months), but because of the phenomenon of “drinking drys” it was probably larger than the total of votes cast in referenda against state prohibition measures, and many of the larger states did not even hold such referenda. So Prohibition’s implicit goal of teetotalism meant changing the drinking behavior of a substantial number of Americans, possibly a majority.

Because the Volstead Act was drafted only after ratification of the Eighteenth Amendment was completed, neither the congressmen and state legislators who approved submission and ratification, nor the voters who elected them, knew what kind of prohibition they were voting for.44 The absolutism of the act’s definition of intoxicating liquors made national alcohol prohibition a stringent ban, and the gap between what voters thought they were voting for and what they got made this sweeping interdict appear undemocratic. Nevertheless, support for prohibition in post-ratification state referenda and the boost given to Herbert Hoover’s 1928 campaign by his dry stance indicate continued electoral approval of Prohibition before the stock market crash of 1929.

Historians agree that enforcement of the Volstead Act constituted National Prohibition’s Achilles’ heel. A fatal flaw resided in the amendment’s second clause, which mandated “concurrent power” to enforce Prohibition by the federal government and the states. ASL strategists expected that the states’ existing criminal-justice machinery would carry out the lion’s share of the work of enforcement. Consequently, the league did not insist on creating adequate forces or funding for federal enforcement, thereby avoiding conflict with Southern officials determined to protect states’ rights. The concurrent-power provision, however, allowed states to minimize their often politically divisive enforcement activity, and the state prohibition statutes gave wets an obvious target, because repeal of a state law was easier than repeal of a federal law or constitutional amendment, and repeal’s success would leave enforcement in the crippled hands of the federal government.45 Even if enforcement is regarded as a failure, however, it does not follow that such a lapse undermined political support for Prohibition. Depending on the number of drinking drys, the failure of enforcement could have produced the opposite effect, by allowing voters to gain access to alcohol themselves while voting to deny it to others.

Two other possible reasons also fall short of explaining Repeal. The leading antiprohibitionist organization throughout the 1920s was the Association Against the Prohibition Amendment (AAPA), which drew its support mainly from conservative businessmen, who objected to the increased power given to the federal government by National Prohibition. Their well-funded arguments, however, fell on deaf ears among the voters throughout the era, most tellingly in the presidential election of 1928. Both the AAPA and the more widely supported WONPR also focused attention on the lawlessness that Prohibition allegedly fostered. This argument, too, gained little traction in the electoral politics of the 1920s. When American voters changed their minds about Prohibition, the AAPA and WONPR, together with other repeal organizations, played a key role in focusing and channeling sentiment through an innovative path to Repeal, the use of specially elected state conventions.46 But they did not create that sentiment.

Finally, historians are fond of invoking widespread cultural change to explain the failure of National Prohibition. Decaying Victorian social mores allowed the normalization of drinking, which was given a significant boost by the cultural trendsetters of the Jazz Age. In such an atmosphere, Prohibition could not survive.47 But it did. At the height of the Jazz Age, American voters in a hard-fought contest elected a staunch upholder of Prohibition in Herbert Hoover over Al Smith, an avowed foe of the Eighteenth Amendment. Repeal took place, not in the free-flowing good times of the Jazz Age, but rather in the austere gloom four years into America’s worst economic depression.

It was not the stringent nature of National Prohibition, which set a goal that was probably impossible to reach and that thereby foredoomed enforcement, that played the leading role in discrediting alcohol prohibition.

Thus, the arguments for Repeal that seemed to have greatest resonance with voters in 1932 and 1933 centered not on indulgence but on economic recovery. Repeal, it was argued, would replace the tax revenues foregone under Prohibition, thereby allowing governments to provide relief to suffering families.48 It would put unemployed workers back to work. Prohibitionists had long encouraged voters to believe in a link between Prohibition and prosperity, and after the onset of the Depression they abundantly reaped what they had sown.49 Voters who had ignored claims that Prohibition excessively centralized power, failed to stop drinking, and fostered crime when they elected the dry Hoover now voted for the wet Franklin Roosevelt. They then turned out to elect delegates pledged to Repeal in the whirlwind series of state conventions that ratified the Twenty-First Amendment. Thus, it was not the stringent nature of National Prohibition, which set a goal that was probably impossible to reach and that thereby foredoomed enforcement, that played the leading role in discrediting alcohol prohibition. Instead, an abrupt and radical shift in context killed Prohibition.

Legacies of Prohibition

The legacies of National Prohibition are too numerous to discuss in detail; besides, so many of them live on today and continue to affect Americans’ everyday lives that it is even difficult to realize that they are Prohibition’s by-products. I will briefly mention the principal ones, in ascending order from shortest-lived to longest. The shortest-lived child of Prohibition actually survived to adulthood. This was the change in drinking patterns that depressed the level of consumption compared with the pre-Prohibition years. Straitened family finances during the Depression of course kept the annual per capita consumption rate low, hovering around 1.5 U.S. gallons. The true results of Prohibition’s success in socializing Americans in temperate habits became apparent during World War II, when the federal government turned a more cordial face toward the liquor industry than it had during World War I, and they became even more evident during the prosperous years that followed.50 Although annual consumption rose, to about 2 gallons per capita in the 1950s and 2.4 gallons in the 1960s, it did not surpass the pre-Prohibition peak until the early 1970s.51

The death rate from liver cirrhosis followed a corresponding pattern.52 In 1939, 42 percent of respondents told pollsters that they did not use alcohol at all. If that figure reflected stability in the proportionate size of the nondrinking population since the pre-Prohibition years, and if new cohorts—youths and women—had begun drinking during Prohibition, then the numbers of new drinkers had been offset by Prohibition’s socializing effect. By 1960, the proportion of abstainers had fallen only to 38 percent.53

The Prohibition Era was unkind to habitual drunkards, not because their supply was cut off, but because it was not. Those who wanted liquor badly enough could still find it. But those who recognized their drinking as destructive were not so lucky in finding help. The inebriety asylums had closed, and the self-help societies had withered away. In 1935, these conditions gave birth to a new self-help group, Alcoholics Anonymous (AA), and the approach taken by these innovative reformers, while drawing from the old self-help tradition, was profoundly influenced by the experience of Prohibition.

AA rejected the prohibitionists’ claim that anyone could become a slave to alcohol, the fundamental assumption behind the sweeping approach of the Volstead Act. There were several reasons for this decision, but one of the primary ones was a perception that Prohibition had failed and a belief that battles already lost should not be refought. Instead, AA drew a rigid line between normal drinkers, who could keep their consumption within the limits of moderation, and compulsive drinkers, who could not. Thus was born the disease concept of alcoholism. Although the concept’s principal aim was to encourage sympathy for alcoholics, its result was to open the door to drinking by everyone else.54 Influenced by Repeal to reject temperance ideology, medical researchers held the door open by denying previously accepted links between drinking and disease.55

Perhaps the most powerful legacy of National Prohibition is the widely held belief that it did not work.

Another force energized by Prohibition also promoted drinking: the liquor industry’s fear that Prohibition might return. Those fears were not unjustified, because during the late 1930s two fifths of Americans surveyed still supported national Prohibition.56 Brewers and distillers trod carefully, to be sure, attempting to surround liquor with an aura of “glamour, wealth, and sophistication,” rather than evoke the rough culture of the saloon. To target women, whom the industry perceived as the largest group of abstainers, liquor ads customarily placed drinking in a domestic context, giving hostesses a central role in dispensing their products.57 Too much can easily be made of the “cocktail culture” of the 1940s and 1950s, because the drinking population grew only slightly and per capita consumption rose only gradually during those years. The most significant result of the industry’s campaign was to lay the foundation for a substantial increase in drinking during the 1960s and 1970s.

By the end of the 20th century, two thirds of the alcohol consumed by Americans was drunk in the home or at private parties.58 In other words, the model of drinking within a framework of domestic sociability, which had been shaped by women, had largely superseded the style of public drinking men had created in their saloons and clubs.59 Prohibition helped to bring about this major change in American drinking patterns by killing the saloon, but it also had an indirect influence in the same direction, by way of the state. When Prohibition ended, and experiments in economic regulation—including regulation of alcohol—under the National Recovery Administration were declared unconstitutional, the federal government banished public health concerns from its alcohol policy, which thereafter revolved around economic considerations.60

Some states retained their prohibition laws—the last repeal occurring only in 1966—but most created pervasive systems of liquor control that affected drinking in every aspect.61 Licensing was generally taken out of the hands of localities and put under the control of state administrative bodies, in an attempt to replace the impassioned struggles that had heated local politics since the 19th century with the cool, impersonal processes of bureaucracy. Licensing policy favored outlets selling for off-premise consumption, a category that eventually included grocery stores. With the invention of the aluminum beer can and the spread of home refrigeration after the 1930s, the way was cleared for the home to become the prime drinking site.

Lessons for Other Drug Prohibitions

Perhaps the most powerful legacy of National Prohibition is the widely held belief that it did not work. I agree with other historians who have argued that this belief is false: Prohibition did work in lowering per capita consumption. The lowered level of consumption during the quarter century following Repeal, together with the large minority of abstainers, suggests that Prohibition did socialize or maintain a significant portion of the population in temperate or abstemious habits.62 That is, it was partly successful as a public health innovation. Its political failure is attributable more to a changing context than to characteristics of the innovation itself.

Today, it is easy to say that the goal of total prohibition was impossible and the means therefore were unnecessarily severe—that, for example, National Prohibition could have survived had the drys been willing to compromise by permitting beer and light wine63—but from the perspective of 1913 the rejection of alternate modes of liquor control makes more sense. Furthermore, American voters continued to support Prohibition politically even in its stringent form, at least in national politics, until their economy crashed and forcefully turned their concerns in other directions. Nevertheless, the possibility remains that in 1933 a less restrictive form of Prohibition could have satisfied the economic concerns that drove Repeal while still controlling the use of alcohol in its most dangerous forms.

Scholars have reached no consensus on the implications of National Prohibition for other forms of prohibition, and public discourse in the United States mirrors our collective ambivalence.64 Arguments that assume that Prohibition was a failure have been deployed most effectively against laws prohibiting tobacco and guns, but they have been ignored by those waging the war on other drugs since the 1980s, which is directed toward the same teetotal goal as National Prohibition.65 Simplistic assumptions about government’s ability to legislate morals, whether pro or con, find no support in the historical record. As historian Ian Tyrrell writes, “each drug subject to restrictions needs to be carefully investigated in terms of its conditions of production, its value to an illicit trade, the ability to conceal the substance, and its effects on both the individual and society at large.”66 From a historical perspective, no prediction is certain, and no path is forever barred—not even the return of alcohol prohibition in some form. Historical context matters.

A Note From the Author

I wrote this essay nearly 20 years ago to contribute to the debate on government’s ability to produce social change. At the time, I felt the conversation had become overly skeptical, with Prohibition being misused as evidence. While historical studies offered counterarguments, they hadn’t been synthesized. Since then, my article has been widely cited, but mainly in academic journals inaccessible to the nonspecialist reader.

Skeptic magazine recently proposed reprinting the essay, and after reviewing new research, I remain confident in my core argument. For those interested in further reading, I recommend Lisa McGirr’s The War on Alcohol (2016); W.J. Rorabaugh’s Prohibition: A Very Short Introduction (2020); Michael Lewis and Richard F. Hamm’s Prohibition’s Greatest Myths (2020); and Mark Lawrence Schrad’s Smashing the Liquor Machine (2021).

And no, I do not foresee the imminent return of Prohibition.

This article was originally published as: Blocker J.S. (2006). Did Prohibition Really Work? Alcohol Prohibition as a Public Health Innovation. American Journal of Public Health, 96(2), 233–243.

Categories: Critical Thinking, Skeptic

AI Therapists

neurologicablog Feed - Tue, 06/03/2025 - 5:09am

In the movie Blade Runner 2049 (an excellent film I highly recommend), Ryan Gosling’s character, K, has an AI “wife”, Joi, played by Ana de Armas. K is clearly in love with Joi, who is nothing but software and holograms. In one poignant scene, K is viewing a giant ad for AI companions and sees another version of Joi saying a line that his Joi said to him. The look on his face says everything – an unavoidable recognition of something he does not want to confront, that he is just being manipulated by an AI algorithm and an attractive hologram into having feelings for software. K himself is also a replicant, an artificial but fully biological human. Both Blade Runner movies explore what it means to be human and sentient.

In the last few years AI (do I still need to routinely note that AI stands for “artificial intelligence”?) applications have seemed to cross a line where they convincingly pass the classic Turing test. AI chatbots are increasingly difficult to distinguish from actual humans. Overall, people are only slightly better than chance at distinguishing human from AI generated text. This is also a moving target, with AIs advancing fairly quickly. So the question is – are we at a point where AI chatbot-based apps are good enough that AIs can serve as therapists? This is a complicated question with a few layers.
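To make “only slightly better than chance” concrete, here is a minimal Python sketch of how such a discrimination result is typically tested against guessing. The numbers are invented for illustration, not drawn from any particular study:

from scipy.stats import binomtest

# Hypothetical result: 60 correct "human or AI?" judgments out of 100.
correct, trials = 60, 100

# One-sided binomial test against the 50% accuracy expected from
# pure guessing.
result = binomtest(correct, trials, p=0.5, alternative="greater")

print(f"accuracy: {correct / trials:.0%}")
print(f"p-value vs. chance: {result.pvalue:.3f}")  # ~0.028 here
# Statistically above chance, yet nowhere near the reliable
# discrimination one might expect if AI text were easy to spot.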

The first layer is whether or not people will form a therapeutic relationship with the AI, in essence reacting to it as if it were a human therapist. The point of the Blade Runner reference was just to highlight what I think is the clear answer – yes. Psychologists have long demonstrated that people will form emotional attachments to inanimate objects. We also imbue agency onto anything that acts like an agent, even simple cartoons. We project human emotions and motivations onto animals, especially our pets. People can also form emotional connections to other actual people purely online, even exclusively through text. This is just a fact of neuroscience – our brains do not need a physical biological human in order to form personal attachments. Simply acting or even just looking like an agent is sufficient.

There has also been enough time to gather some preliminary data. In one study, participants rated AI responses as more empathetic than those of professional human therapists. They did so even when the source of the empathetic statements was revealed. This is not surprising. Human emotions and behavior are themselves just algorithms, and apparently are not that difficult to hack. AIs have certain advantages over human therapists on this score. An AI’s responses can be calculated to maximize whatever response is deemed appropriate. AIs have infinite patience and are great listeners; their attention never wavers, and their responses can be optimized, personalized, and dynamically adjusted.

What about the long term, however? Will an AI chatbot be able to develop a sense of what makes its client tick? Will it be able to determine the personality profile of its client, the things in their history that influence their feelings and behavior, some of the deeper themes of their life, etc.? It is one thing to be a good listener in an initial meeting, but another to manage a client over months and years. There hasn’t been enough time to really determine this.

We are also in a phase where we are mostly using generic chatbots as therapists, without having developed a sophisticated therapist bot trained and programmed to be optimized for that role. We may need to do so before unleashing AI therapists, or even companions, on the public. For example, there are cases in which chatbots being used as therapists or companions have encouraged their users toward suicide, homicide, or self-harm. The reason is that chatbots are programmed to adapt positively to their user. They are very much “yes, and” – they will reinforce the user’s tendencies and biases. They are not programmed to challenge a user the way a therapist should. They are also not necessarily programmed to avoid things like transference, where a client forms feelings for a therapist. They may, in fact, lean into such things.

So while a chatbot may be an empathetic listener, it is not necessarily a professional therapist. This appears to be an entirely solvable problem, however: therapist algorithms just need to be adjusted toward correct therapeutic behavior.
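
To make concrete what “adjusting the algorithm” could look like, here is a minimal sketch in Python. It is purely illustrative, not any vendor’s actual API or a real safety system: the prompt text, the function name, and the keyword list are all hypothetical, and a production system would need trained classifiers and human oversight rather than a keyword screen.

```python
# Illustrative sketch only: steering a therapist chatbot away from pure
# "yes and" agreement. All names and prompt text here are hypothetical.

THERAPIST_SYSTEM_PROMPT = (
    "You are a therapy-support assistant. Do not simply validate the "
    "user's beliefs; gently challenge cognitive distortions, keep "
    "professional boundaries (no romantic engagement), and never "
    "encourage self-harm or violence. If the user expresses intent to "
    "harm themselves or others, stop and direct them to human help."
)

# A crude first-pass screen; a real system would need a trained
# classifier and human review, not a keyword list.
CRISIS_TERMS = ("suicide", "kill myself", "end my life", "hurt someone")

def needs_human_escalation(message: str) -> bool:
    """Flag messages that should be routed to a human professional."""
    lowered = message.lower()
    return any(term in lowered for term in CRISIS_TERMS)

if __name__ == "__main__":
    print(needs_human_escalation("I want to end my life"))     # True
    print(needs_human_escalation("Work was stressful today"))  # False
```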

There is also evidence that AI therapists are biased. They contain all the biases of their training data, which can be cultural, racial, or gender-based. This may cause an AI therapist to misinterpret cultural communication, or to dismiss feelings or concerns based on a client’s race or gender.

What all of this means is that at the present time we need to be careful. As a consumer, you may find that there are therapy chatbots out there that feel satisfying, with good responses. But there are risks, and such tools are not yet at the point where they can replace a professional. Many will argue that for those without the resources to pay for a human therapist, it may be their only option, and this is a legitimate point. That is why there is so much interest in AI therapists, to fill the gap in available services. But we need to recognize the risks and improve the technology.

Also, it may be that the best use of AI therapists is as a tool to extend the work of human therapists. For example, someone could have multiple sessions with an AI therapist, and then once a month (or at whatever interval is deemed appropriate) a human therapist reviews everything and meets with the client to make sure things are on track. This means that the human therapist can manage far more clients, and that each client would have to pay much less for therapy (for one session a month rather than once or twice a week, for example). The human therapist can even have a discussion with the AI therapist about how things are going, and provide feedback and direction.

Even this approach has risks, however. AIs have proven capable of lying to avoid negative feedback, and they get very good very quickly at hiding their tracks. It’s a serious problem. We would need to find a reliable way to monitor the behavior of AI therapists to make sure they are not heading down a dangerous road with their clients and hiding it effectively from any supervision. Right now it seems that programmers do not have a handle on this issue. This is one of the primary issues that lead some experts to caution that we need to slow down a bit with the rollout of AI apps and figure out these core issues of safety first.

One interesting angle here is that the current AIs, which are narrow chatbot AIs, not general sentient AIs, are doing such a good job of simulating sentience that they are acting sentient in unexpected ways (such as lying to cover their tracks). This gets back to the original question of this post: what is sentience? AIs are forcing us to think more deeply about this question. We may soon have an answer to a question that I and others posed years ago: can a non-sentient AI become indistinguishable from human-level sentience? Is actual sentience required to act sentient? I have had to revise my thinking about this question.

The post AI Therapists first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #991: Real Sea Monsters

Skeptoid Feed - Tue, 06/03/2025 - 2:00am

A roundup of all the biggest and scariest real sea monsters — from today and from prehistoric times.

Learn about your ad choices: dovetail.prx.org/ad-choices
Categories: Critical Thinking, Skeptic

Telepathy Tapes Promotes Pseudoscience

neurologicablog Feed - Mon, 06/02/2025 - 5:12am

I was away on vacation last week, hence no posts, but am now back to my usual schedule. In fact, I hope to be a little more consistent starting this summer because (if you follow me on the SGU you already know this) I am retiring from my day job at Yale at the end of the month. This will allow me to work full time as a science communicator and skeptic. I have some new projects in the works, and will make any announcements here for those who are interested.

On to today’s post: I recently received an e-mail from Janyce Boynton, a former facilitator who now works to expose the pseudoscience of facilitated communication (FC). I have been writing about this for many years. Like many pseudosciences, FC rarely disappears completely; it tends to wax and wane with each new generation, often morphing into different forms while keeping the nonsense at its core. FC has had a resurgence recently due to a popular podcast, The Telepathy Tapes (which I wrote about over at SBM). Janyce had this to say:

I’ll be continuing to post critiques about the Telepathy Tapes–especially since some of their followers are now claiming that my student was telepathic. Their “logic” (and I use that term loosely) is that during the picture message passing test, she read my mind, knew what picture I saw, and typed that instead of typing out the word to the picture she saw.

I shouldn’t be surprised by their rationalizations. The mental gymnastics these people go through!

They’re also claiming that people don’t have to look at the letter board because of synesthesia. According to them, the letters light up and the clients can see the “aura” of each color. Ridiculous. I haven’t been able to find any research that backs up this claim. Nor have I found an expert in synesthesia who is willing to answer my questions about this condition, but I’m assuming that, if synesthesia is a real condition, it doesn’t work the way the Telepathy Tapes folks are claiming it does.

For quick background, FC was created in the 1980s as a method for communicating with people, mostly children, who have severe cognitive impairment and are either non-verbal or minimally verbal. The hypothesis FC is based on is that at least some of these children may have more cognitive ability than is apparent, with impaired communication as an isolated deficit. This general idea is legitimate, and in neurology we caution all the time against assuming that a failure to demonstrate an ability is due purely to a cognitive deficit rather than a physical one. To take a simple example, don’t assume someone is not responding to your voice because they have impaired consciousness when they could be deaf. We use various methods to try to control for this as much as possible.

So this was not an inherently bad hypothesis, but their approach to controlling for this possibility was to have a facilitator hold the hand of a non-verbal client and “help” them to spell out responses on a letter board (or keyboard or whatever). This was based on the much less plausible hypothesis that non-verbal clients were mainly limited by physical coordination and not cognition, and while they could not point to the letters on their own, they could subtly indicate to the facilitator which letter they intended to point to. I don’t fault early FC users for testing this hypothesis – in fact, I fault them for not properly testing it, but rather just going full steam ahead using and promoting the method. When FC was properly tested it utterly failed – it turns out the facilitators were doing all the communicating (mostly through the ideomotor effect). In many cases the clients were not even looking at the letter board, and they were spelling far faster than is plausible given the premise that their main limitation is motor function.

FC moved to the fringe for a couple of decades, although it kept popping up in different guises. Recently, however, FC has been given a boost by a popular podcast, the Telepathy Tapes, which adds a new wrinkle to the FC pseudoscience. To first back up a bit, however: one of the hallmarks of pseudoscience is the logical fallacy called special pleading. One of the core ways in which science proceeds is to come up with a way to test your hypothesis. If my hypothesis is true, then the result of this experiment or observation will be A; if it is false, then the result will be B. If the result is B, then you modify or discard the hypothesis. But pseudoscientists will often, upon getting the falsifying result B, make up a special excuse for why B was the result, to rescue their hypothesis from falsification.

The Telepathy Tapes is a massive exercise in special pleading to rescue FC from clear falsification. ESP or telepathy, the ability to read minds, is invoked to explain away all of the many reasons why the evidence falsifies FC. For example, if you secretly show the facilitator a rubber duck and the client a teddy bear then ask the client what they saw, the answer is invariably a rubber duck – because the facilitator is doing the communicating and does not know what the client actually saw. The makers of the Telepathy Tapes, however, conclude that the client read the mind of the facilitator and communicated what they saw – classic special pleading. This is also a great example of using one implausible claim to apparently support another implausible claim. This is the process that Janyce is referring to above.

She then goes on to describe another example of massive special pleading. Another fatal flaw in the FC evidence base is that oftentimes clients are not even looking at the letter board. If you want to see how impossible this is, just try to one-finger type with your eyes closed. In other words, you cannot feel the keyboard to center yourself; you have to rely entirely on proprioception to know precisely where your finger is in three-dimensional space to hit the correct key. It’s basically impossible. But apparently these clients, who have severe motor impairment, can do it. This is as solid proof as you can get that the clients are not doing the communicating.

But if you are just making shit up, and magical shit at that, you can easily invent some BS reason why they can do this, and that is where the synesthesia argument comes in. They argue that these mind-reading non-verbal clients also have “synesthesia” in which they can see the aura of the keys or letters, apparently in their peripheral vision. Seeing auras has nothing to do with actual synesthesia, which is when one sensory modality bleeds into another, or gets crossed with another in higher-order processing. So, for example, a synesthete may be able to smell colors, or feel numbers. The number three may feel rough to them, while four is smooth. So I guess they are saying their clients can feel the auras of the letters, and therefore don’t have to look at them.

This is as close to pure magical thinking as you can get. It is a fantastic example of pseudoscience, and of why we need the processes of science in order to constrain our thinking about reality. Otherwise our ideas will tend to drift off into fantasy land. We will see patterns where they don’t exist, and we can construct intricate and complex webs of special pleading to explain any set of observations. Proper blinding and hypothesis testing are needed to slice away all the nonsense so that only reality remains. The people involved with the Telepathy Tapes are not doing that. They are simply engaging in pure fantasy. Unfortunately, in this case, their clients are their victims. This is not a benign practice at all, and it can cause tremendous harm.


The post Telepathy Tapes Promotes Pseudoscience first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #1038 - May 31 2025

Skeptics Guide to the Universe Feed - Sat, 05/31/2025 - 8:00am
Quickie with Steve: Global Warming and Ocean Currents; News Items: Infrared Contact Lenses, Trees Respond to Solar Eclipse, Affective Polarization, The Brain's Motor Switchboard, New Dwarf Planet Candidate; Discussion: The Effect of Science Fiction; Your Questions and E-mails: HHS Cancels Vaccine Contract; Science or Fiction
Categories: Skeptic

Is Christianity a “Load-Bearing Wall” for American Democracy?

Skeptic.com feed - Tue, 05/27/2025 - 6:48am

In his new book Cross Purposes: Christianity’s Broken Bargain with Democracy, Jonathan Rauch argues that Christianity is a “load-bearing wall” in American democracy. As Christianity has been increasingly co-opted by politics, Rauch believes it is straying from its core tenets and failing to serve its traditional role as a spiritual and civic ballast. He blames this shift for the decline of religiosity in the United States, as well as collapsing faith in democratic institutions.

The Rise of the Nones and Its Effects 

Rauch writes that his book is “penitence for the dumbest thing I ever wrote,” a 2003 essay for The Atlantic about the rise of what he called “apatheism”—a “disinclination to care all that much about one’s own religion, and an even stronger disinclination to care about other people’s.” The essay argued that the growing number of people who aren’t especially concerned about religion is a “major civilizational advance” and a “product of a determined cultural effort to discipline the religious mindset.” Rauch cites John Locke’s case for religious tolerance and pluralism to argue that the emergence of apatheism represented the hard-fought taming of “divisive and volatile” religious forces. 

In Cross Purposes, Rauch explains why he now repudiates this view. First, he argues that the decline of religion has led Americans to import “religious zeal into secular politics.” Second, he believes Christianity is losing its traditional role in shaping culture—the faith now reflects American society and culture instead of the other way around—and argues that this has been corrosive to the civic health of the country. Third, Rauch claims that “there is no secular substitute for the meaning and moral grounding which religious life provides.” 

All of these arguments rely on shaky assumptions about modern religiosity and the influence of secularism in America. In 2003, Rauch rightly questioned the idea that “everyone brims with religious passions.” While he acknowledged that human beings appear “wired” to believe, he also recognized that secularization, in the aggregate, is a real phenomenon. He now rejects this observation in favor of the increasingly fashionable view that religiosity never really declines but can only be repurposed: “We see this in the soaring demand for pseudo-religions in American life,” he writes. These pseudo-religions, he observes, include everything from “wellness culture” to wokeness and political extremism. 

But Americans have held quasi-religious, supernatural beliefs throughout history—including during periods of much greater religiosity than today. The popularity of practices like astrology and tarot reading isn’t a recent development, and pagan religions like Wicca originated and spread in the God-fearing middle of the twentieth century. Belief in UFOs and extraterrestrial encounters surged in the 1940s and 1950s, an era when over 90 percent of Americans were Christians. In the early 1990s, 90 percent of Americans still identified as Christians compared to 63 percent today. But a 1991 Gallup poll of Americans found a wide array of paranormal and other supernatural beliefs—nearly half believed in extrasensory perception (ESP), 36 percent believed in telepathy, 29 percent believed houses could be haunted, 26 percent believed in clairvoyance, and 25 percent believed in astrology. Religious belief wasn’t much of a bulwark against these other beliefs. Even in cases when those beliefs contradicted traditional Christian teachings—such as reincarnation—significant proportions of Christians believed them. 

The secularism of Western liberal democracies is a historical aberration. For most of history, the separation of church and state didn’t exist.

Rauch argues that “it has become pretty evident that secularism has not been able to fill what has been called the ‘God-shaped hole’ in American life.” He continues: “In today’s America, we see evidence everywhere of the inadequacy of secular liberalism to provide meaning, exaltation, spirituality, transcendence, and morality anchored in more than the self.” But the evidence Rauch is referring to—aside from the latest spiritual fads, many of which have been adopted by religious and irreligious Americans alike—is thin. He cites a 2023 survey conducted by the Wall Street Journal and NORC, which found that the percentage of Americans who say religion is “very important” to them fell from 62 percent in 1998 to 39 percent in 2023. The survey also found that the proportion of Americans who regard patriotism, community involvement, and having children as “very important” declined over the same period. Meanwhile, a growing proportion of Americans said money is very important. 

While it’s possible that secularization has played a role in making Americans more greedy and less community- or family-oriented, it isn’t enough to merely assert that rising secularism is to blame for the decline of these values in the United States. Even if it’s true that secularism has some social costs, those costs would need to be weighed against its benefits. “As a homosexual American,” Rauch writes, “I owe my marriage—and the astonishing liberation I have enjoyed during my lifetime—to the advance of enlightened secular values.” Rauch argues that the Founders believed the governance system they set up would only work if it remained on a firm foundation of Christian morality. He cites John Adams, who declared that the Constitution was “made only for a moral and religious people.” But he also could have cited Thomas Jefferson’s trenchant criticisms of Christianity or Thomas Paine’s argument in The Age of Reason that many Christian doctrines are, in fact, deeply immoral, superstitious, and corrosive to human freedom.

While Rauch doesn’t appear to regard his own secularism as an impediment to patriotism or any other civic virtue—and thus he doesn’t need religion—he appears to believe that other Americans do. He invokes an argument made by Friedrich Nietzsche nearly 150 years ago: “When religious ideas are destroyed one is troubled by an uncomfortable emptiness and deprivation. Christianity, it seems to me, is still needed by most people in old Europe even today.” A central theme of Cross Purposes is a paternalistic view that, while it’s possible for some people to be good citizens and live lives of meaning without religion, it’s not possible for many others. 

Without religion, Rauch argues, most people will be adrift with no grounding for their moral values. He claims that “moral propositions … must have some external validity.” He observes that “scientific or naturalistic” foundations for morality fail because they “anchor morality in ourselves and our societies, not in something transcendent.” He asks: “If there is no transcendent moral order anchored in a purposive universe—something like God-given laws—why must we not be nihilistic and despairing sociopaths?” However, he qualifies his argument… 

Now, speaking as an atheist and a scientific materialist, I do not believe religions actually answer that question. Instead, they rely on a cheat, which they call God. They assume their conclusion by simply asserting the existence of a transcendent spiritual and moral order. They invent God and then claim he solves the problem. … The Christians who believe the Bible is the last word on morality—and, not coincidentally, that they are the last word on interpreting the Bible—are every bit as relativistic as I am; it’s just that I admit it and they don’t. 

After presenting this powerful rejoinder to the religious pretension to have a monopoly on objective morality, Rauch writes: 

That is neither here nor there. I am not important. What is important is that the religious framing of morality and mortality is plausible and acceptable to humans in a way nihilism and relativism are not and never will be. 

But this is a false dichotomy—the choice isn’t between religious morality and nihilistic relativism. The choice is between religious morality and an attempt to develop an ethical system that is far more epistemically honest and humble. Instead of relying on the God “cheat”—a philosophical sleight of hand Rauch feels he is equipped to identify, but one he evidently assumes most people are incapable of understanding—we can attempt to develop and ground ethical arguments in ways that don’t require the invention of a supernatural, supervising entity. As he writes: 

For most people, the idea that the universe is intended and ordered by God demonstrably provides transcendent meaning and moral grounding which scientific materialism demonstrably does not. … God may be (as I believe) a philosophical shortcut, but he gets you there—and I don’t. 

But Rauch just admitted that religion only “gets you there” in an illusory way. It may be comforting for believers to convince themselves that there’s a divine superintendent who ensures that the universe is morally intelligible, but the religious are no closer to apprehending fundamental moral truth than nonbelievers. 

Rauch also argues that “purely secular thinking about death will never satisfy the large majority of people.” While he personally doesn’t struggle with the idea of mortality, he once again assumes that a critical mass of people “rely on some version of faith to rescue them from the bleak nihilism of mortality.” While Rauch presents this view in a self-deprecating way—“I am weird!” he informs the reader—it’s difficult to shake the impression that he believes himself capable of accepting hard realities that others aren’t equipped to handle. 

While Rauch believes his scientific materialism and secular morality are some kind of exotic oddity, these views were at the heart of the Enlightenment and they have informed centuries of Western philosophy. A fundamental aspect of Enlightenment thought was that religious authorities don’t have a monopoly on truth or morality. Secularists like David Hume resisted religious dogma and undermined the notion that morality must be grounded in God. Secularism was rare and dangerous hundreds of years ago, but it has gone mainstream. Pew reports that the share of Christians in the United States fell from around 90 percent in 1990 to 63 percent in 2024. Gallup found that other measures of religiosity have declined as well, such as church attendance and membership. Pew has also recorded substantial and sustained declines in religious belief across Europe.

The idea that there’s a latent level of religiosity in human societies that remains static over the centuries is dubious.

Rauch was right in 2003—plenty of people are capable of leading ethical and meaningful lives without religious faith. There are more of these people today than there used to be, and this doesn’t mean they have all been taken in by some God-shaped superstition or cult. The idea that there’s a latent level of religiosity in human societies that remains static over the centuries is dubious—in pre-Enlightenment Europe, religious belief was ubiquitous and mandated by law. Heretics were publicly executed. So were witches. Scientific discoveries were suppressed and punished if they were seen as conflicting with religious teachings. Regular people had extremely limited access to information that wasn’t audited by religious authorities. Science often blended seamlessly with pseudoscience (even Newton was fascinated by alchemy and other aspects of the occult, along with his commitment to biblical interpretation). Incessant religious conflict culminated in the Thirty Years’ War, which caused millions of deaths—with some estimates ranging as high as around a third of central Europe’s population. 

The last execution for blasphemy in Europe was the hanging of Thomas Aikenhead in Edinburgh in 1697; his crimes included criticizing scripture and questioning the divinity of Jesus Christ. Aikenhead was a student at the University of Edinburgh, where Hume would attend just a couple of decades later. It wouldn’t be long before several of the most prominent philosophers in Europe were publicly making arguments that would once have sent them to the gallows. Drawing upon the work of these philosophers, less than a century after Aikenhead’s execution, the United States would be founded on the principle of religious liberty. The world has secularized, and this is exactly what Rauch once believed it to be: a major civilizational advance.

When the Load-Bearing Wall Buckles 

Rauch believes the decline of religion is to blame for many of the most destructive political pathologies in the United States today. He argues that the “collapse of the ecumenical churches has displaced religious zeal into politics, which is not designed to provide purpose in life and breaks when it tries.” According to Rauch, when the “load-bearing wall” of Christianity “buckles, all the institutions around it come under stress, and some of them buckle, too.” Much of Cross Purposes is an explanation for why this buckling has occurred. 

Rauch fails to demonstrate why Christianity is a necessary foundation for morality.

Rauch organizes the book around what he describes as Thin, Sharp, and Thick Christianity. Thin Christianity describes a process whereby the faith is “no longer able, or no longer willing, to perform the functions on which our constitutional order depends.” One of these functions is the export of Christian values to the rest of society. “My claim,” he writes, “is not just that secular liberalism and religious faith are instrumentally interdependent but that each is intrinsically reliant on the other to build a morally and epistemically complete and coherent account of the world.” This is the claim we discussed in the first section—Rauch fails to demonstrate why Christianity is a necessary foundation for morality. He explains that people may find it easier to ground their values in God and why religion makes mortality easier to handle, but these are hardly arguments for the necessity of faith in the public square. 

Rauch is particularly concerned about what he describes as Sharp Christianity—a version of the faith that is “not only secularized but politicized, partisan, confrontational, and divisive.” Instead of focusing on the teachings of Jesus, Rauch writes, these Christians “bring to church the divisive cultural issues they hear about on Fox News” and believe “Christianity is under attack and we have to do something about it.” Sharp Christianity is best captured by the overwhelming evangelical support for Donald Trump, who received roughly 80 percent of the evangelical vote in 2020 and 2024. An April Pew survey found that Trump’s support among evangelicals remains strong after his first 100 days in office—while 40 percent of Americans approve of his performance, this proportion jumps to 72 percent among evangelicals. 

Rauch challenges the view held by many Sharp Christians that their faith is constantly under assault from Godless liberals. He critiques what he regards as an increasingly powerful “post-liberal” movement on the right, which argues that the liberal emphasis on individualism and autonomy has led to the atomization of society and the rejection of faith, family, and patriotism. Rauch acknowledges that liberalism on its own doesn’t inspire the same level of commitment as religion, and he rightly notes that this is by design: “the whole point of liberalism was to put an end to centuries of bloody coercion and war arising from religious and factional attempts to impose one group’s moral vision on everyone else.” 

While Rauch does an excellent job critiquing the post-liberal right, he grants one of its central claims: that Christianity is the necessary glue that holds liberal society together. As he notes: “liberals understood they could not create and sustain virtue by themselves, and they warned against trying.” It’s true that liberalism is capacious enough to encompass many competing values and ideologies, but there are certain values that are in the marrow of liberal societies—such as individual rights, pluralism, and democracy. Mutual respect for these values can cultivate virtues like openness, tolerance, and forbearance. 

Rauch emphasizes the achievements of liberalism: “constitutional democracy, mass prosperity, the scientific revolution, outlawing slavery, empowering women, and—not least from my point of view—tolerating atheistic homosexual Jews instead of burning us alive.” He might have added that many of these advancements were made in the teeth of furious religious opposition, which brings us to a central problem with Cross Purposes: Rauch would argue that all the Christian bloodletting, intolerance, and authoritarianism throughout history is based on a series of misconceptions about what Christianity really is. His central demand is that American Christians rediscover the true meaning of their faith, which he regards as an anodyne and narrow reading of Jesus Christ’s essential teachings. He reduces millennia of Christian thought and the whole of the Bible to a simple formula (which he first heard from the Catholic theologian and priest James Alison): “Don’t be afraid. Imitate Jesus. Forgive each other.” But Rauch then admits: “I am in no position to judge whether those are the essential elements of Christianity, but they certainly command broad and deep reverence in America’s Christian traditions.”

While this tidy formula does capture some central elements of Jesus’ teachings, it intentionally leaves out other less agreeable (but no less essential) aspects of Christianity. Jesus urged his followers not to be afraid because he would return and they would be granted eternal life in the presence of God. He told his Apostles that their “generation will not pass away” before his return, so they could expect their reward in short order. For those who did not accept his gospel, Jesus had another message: “Depart from me, you cursed, into the eternal fire prepared for the devil and his angels.” Rauch may be correct that “Don’t be afraid” captures one of Jesus’ core messages, but this is a message that only applies to believers—all others should be very afraid. As for the idea of forgiveness, Jesus clearly believed there were some limits—once the “cursed” are consigned to “eternal fire,” redemption appears to be unlikely. 

Even at its best, Christianity is inherently divisive.

Rauch admits that he is in “no position to judge … the essential elements of Christianity” (nor am I), but any summary of the faith that leaves out Jesus’ most fundamental teaching of all—that his followers must accept the truth of Christianity or face eternal destruction—isn’t in touch with reality. It’s also untenable to present an essentialized version of Christianity that leaves out the entire Old Testament, which is crammed with scriptural warrants for slavery, genocide, misogyny, and persecution on a horrifying scale. There’s a reason Christianity has been such a repressive force throughout history—despite the moderating influence of Jesus, the Bible is chockablock with justifications for the punishment of nonbelievers and religious warfare. Even at its best, Christianity is inherently divisive—the “wages of sin is death,” and there’s no greater sin than the rejection of the Christian God. Because Christianity is a universalist, missionary faith, believers have a responsibility to deliver the gospel to their neighbors. If you believe, as evangelicals do, that millions of souls are at stake, the stripped-down, liberal version of Christianity offered by Rauch may seem like a deep abrogation of responsibility.

“If we wanted to summarize the direction of change in American Christianity over the past century or so,” Rauch writes, “we might do well to use the term secularization.” While Rauch argues that some secularization has been good for Christianity by helping it integrate with the broader culture, he also argues that the “mainline church cast its lot with center-left progressivism and let itself drift, or at least seem to drift, from its scriptural moorings.” He cites the historian Randall Balmer, who observed in 1996 that many Protestants “stand for nothing at all, aside from some vague (albeit noble) pieties like peace, justice, and inclusiveness.” But this is just what Rauch is calling for—the elevation of vague pieties about forgiveness and courage to a central role in how Christianity interacts with the wider culture. 

Rauch argues that American evangelicals have become “secularized.” The thrust of this argument is that evangelicals thought they would reshape the GOP in their image when they became more political in the 1980s, but the opposite occurred. For decades, white evangelicals have been one of the largest and most loyal Republican voting blocs, and Rauch observes that this has been a self-reinforcing process: “Republicans self-selected into evangelical religious identities and those identities in turn reinforced the church’s partisanship.” Rauch points out that church attendance and other indicators of religiosity have declined among evangelicals in recent decades. He even argues that evangelical Christianity has become “primarily a political rather than religious identity.” 

While there are some signs that evangelicals aren’t quite as committed to their religious practices as they were at the turn of the century, the idea that politics has displaced their faith is a bold overstatement. According to the latest data from Pew, evangelicals remain disproportionately fervent in their beliefs and religious behaviors: 97 percent believe in a soul or spirit beyond the physical body; 72 percent say they pray daily; 82 percent consider the Bible very or extremely important; 84 percent believe in heaven; and 82 percent believe in hell. American history demonstrates that piety and politics don’t cancel each other out. Rauch explains why Christians are tempted to enter the political arena by summarizing several of the arguments political evangelicals often make: 

…some might expect conservative Christians to meekly accept the industrial-scale murder of unborn children, the aggressive promotion of LGBT ideology, the left’s intolerance of traditional social mores, and the relentless advance of wokeness in universities, corporations, and the media; but enough is enough. It is both natural and biblical for Christians to stand up for their values. 

Rauch challenges these claims and argues the “war on Christianity” frequently invoked by evangelicals is imaginary. The current U.S. Supreme Court is extremely pro-religious freedom, American evangelicals are protected by the First Amendment, most members of Congress are Christians, and surveys show that the vast majority of Americans approve of Christianity. But evangelicals’ perception is what matters—they have felt like their faith is under attack for decades, which has pushed them toward political action. Rauch cites a 1979 conversation between Ronald Reagan and the evangelical Jim Bakker in which the GOP presidential candidate asked: “Do you ever get the feeling sometimes that if we don’t do it now, if we let this be another Sodom and Gomorrah, that maybe we might be the generation that sees Armageddon?”

It’s an inconvenient fact for Rauch’s argument that Christianity can coexist so comfortably with hyper-partisanship and authoritarianism.

While it’s fine to call for a gentler and more civically responsible Christianity, Rauch appears to believe that any version of the faith that inflames partisan hatreds or focuses on the culture war is, by definition, un-Christian. But this isn’t the case. When Reagan worried about the United States becoming Sodom and Gomorrah and ushering in Armageddon, he wasn’t “secularizing” Christianity by blending it with worldly politics. He was allowing his religious beliefs to inform his political views, which many Christians regard as morally and spiritually obligatory. 

The secularism of Western liberal democracies is a historical aberration. For most of history, the separation of church and state didn’t exist—everyone in society was forced to submit to the same religious strictures, and the punishment for failing to do so was often torture and death. One reason for this history of state-sanctioned dogma and repression is that eschatology is central to Christianity. The idea that certain actions on earth will lead to either eternal reward or punishment is a powerful force multiplier in human affairs, which is one of the reasons the European wars of religion were so bloody and why the role of religion in many other conflicts around the world has been to increase the level of tribal hatred on both sides. Modern religion-infused politics is just a return to the historical norm.

Trump: God’s Wrecking Ball

Then there is President Donald Trump. “Absolutely nothing about secular liberalism,” Rauch writes, “required white evangelicals to embrace the likes of Donald Trump.” If there’s one argument in favor of the idea that evangelicals have allowed politics to distort their faith, it’s the overwhelming support President Trump still commands within their ranks. Rauch cites a survey conducted by the Public Religion Research Institute, which reported that evangelicals were suddenly much less concerned about the personal character of elected officials after they threw their weight behind Trump. In 2011, just 30 percent of evangelicals said an “elected official can behave ethically even if they have committed transgressions in their personal life”—a proportion that jumped to 72 percent in October 2016. 

There are many reasons evangelicals cite for supporting Trump, from his nomination of pro-life Supreme Court justices who overturned Roe v. Wade to the conviction that he’s an enthusiastic culture warrior who will crush wokeness. Because evangelicals are consumed by the paranoid belief that they’re an embattled group clinging to the margins of the dominant culture, they decided that they could dispense with concerns over character if it meant mobilizing a larger flock and gaining political and cultural influence. Over three-quarters of evangelicals believe the United States is losing its identity and culture, so the idea of making America great again appeals to them. Rauch cites Os Guinness, who described Trump as “God’s wrecking ball stopping America in its tracks [from] the direction it’s going and giving the country a chance to rethink.” But Rauch is right that arguments like this don’t explain the depth of evangelical support for the 45th and 47th president, or the fact that “they did not merely support Trump, they adored him.”

“Whatever the predicates,” Rauch writes, “embracing Trump and MAGA was fundamentally a choice and a change.” It’s true that it would have once been difficult to imagine evangelicals supporting a president like Donald Trump. It’s also true, as Rauch contends, that evangelicals now appear to follow “two incommensurable moralities, an absolute one in the personal realm and an instrumental one in the political realm.” But Cross Purposes isn’t just about the hypocrisy and moral bankruptcy of American evangelicals or the post-liberal justifications for Trumpism. Rauch is calling for a revival of public Christianity in America, and the evangelical capitulation to Trump raises questions about the viability of that project. 

It’s an inconvenient fact for Rauch’s argument that Christianity can coexist so comfortably with hyper-partisanship and authoritarianism. Rauch insists that evangelical Christianity is the product of a warping process of secularization—the “Church of Fear is more pagan than Christian,” he insists. But as Pew reports, evangelicals are disproportionately likely to attend church, pray daily, believe in the importance of the Bible, and so on. Rauch is in no position to adjudicate who is a true believer and who isn’t (nor is anyone else, me included), and if it’s true that the only real Christianity is the reassuring liberal version he endorses, the vast majority of Christians throughout history were just as “secularized” as today’s evangelicals. 

“Mr. Jefferson, Build Up that Wall” 

Because Rauch has such an innocuous view of “essential” Christian theology, he believes Christianity doesn’t need to “be anything other than itself” to ensure that Christians keep their commitments to “God and liberal democracy.” If only it were so easy. Despite the steady decline of Christianity in the United States, 63 percent of the adult population still self-reports as Christian—a proportion that has actually stabilized since 2019. In any religious population so large, there will always be significant variation in what people believe and how they express those beliefs in the public square. Christianity doesn’t necessarily lead to certain political positions—the faith has been invoked to support slavery and to oppose it; to justify imperialism and to condemn it; to damn nonbelievers as heretics bound for hell or to embrace everyone as part of a universalist message of redemption. Of course, it would be nice if all Christians adopted Jonathan Rauch’s version of civic theology, but there will always be scriptural warrants for other forms of theology that Rauch believes are corrosive to our civic culture. 

Americans who believe that Christianity is untrue and unnecessary for morality should continue to make their case in the public square.

According to Pew, Trump’s net favorability rating among American agnostics is just 17 percent, and it falls to 12 percent among atheists. On average, nearly half of American Protestants view Trump favorably—a proportion that falls to 25 percent among the “religiously unaffiliated,” which includes atheists, agnostics, and those who define their religious beliefs as “nothing in particular.” Rauch presents the rise of post-liberal Christianity and the politicization of American evangelicals as examples of secular intrusions of one kind or another. He doesn’t entertain the possibility that his conception of Christianity as conveniently aligned with liberal democracy is a modern, secularized vision that isn’t consistent with how Christianity has historically functioned politically—or with the Bible itself.

It’s a shame that Rauch regards his 2003 essay about the value of secularization as the “dumbest thing I ever wrote.” While there’s nothing wrong with emphasizing the aspects of Christian theology that support liberal democracy, there’s a more effective way to resist post-liberal Christianity, MAGA evangelicalism, and all the other intersections between faith and politics today. Americans who believe that Christianity is untrue and unnecessary for morality should continue to make their case in the public square. Rauch is wrong to argue that Christianity is a load-bearing wall in American democracy. The real load-bearing wall in the United States is the one constructed by Jefferson at the nation’s founding, and which has sustained our liberal democratic culture ever since: the wall of separation between church and state.

Categories: Critical Thinking, Skeptic

Skeptoid #990: Rethinking Science Education

Skeptoid Feed - Tue, 05/27/2025 - 2:00am

How one special moment redefined how a science teacher does her job.

Learn about your ad choices: dovetail.prx.org/ad-choices
Categories: Critical Thinking, Skeptic

The Skeptics Guide #1037 - May 24 2025

Skeptics Guide to the Universe Feed - Sat, 05/24/2025 - 5:00am
Live from NotACon with Guest Rogue Adam Russell; News Items: New Cambrian Fossil, Best Archaeopteryx Specimen, Chimps Using First Aid, Treatment for Baldness, New Color - Olo, The Next Theranos, Bespoke Genetic Therapy; Science or Fiction
Categories: Skeptic

Standardized Admission Tests Are Not Biased. In Fact, They’re Fairer Than Other Measures

Skeptic.com feed - Thu, 05/22/2025 - 3:08pm
“It ain’t what you know that gets you into trouble. It’s what you know for sure that just ain’t so.” —Mark Twain

When it comes to opinions concerning standardized tests, it seems that most people know for sure that tests are simply terrible. In fact, a recent article published by the National Education Association (NEA) began by saying, “Most of us know that standardized tests are inaccurate, inequitable, and often ineffective at gauging what students actually know.”1 But do they really know that standardized tests are all these bad things? What does the hard evidence suggest? In the same article, the author quoted a first-grade teacher who advocated teaching to each student’s particular learning style—another ill-conceived educational fad2 that, unfortunately, draws as much praise as standardized tests draw damnation.

Indeed, a typical post in even the most prestigious of news outlets34 will make several negative claims about standardized admission tests. In this article, we describe each of those claims and then review what mainstream scientific research has to say about them.

Claim 1: Admission tests are biased against historically disadvantaged racial/ethnic groups.

Response: There are racial/ethnic average group differences in admission test scores, but those differences do not qualify as evidence that the tests are biased.

The claim that admission tests are biased against certain groups is an unwarranted inference based on differences in average test performance among groups.

The differences themselves are not in question. They have persisted for decades despite substantial efforts to ameliorate them.5 As has been reviewed comprehensively elsewhere,67 average group differences appear on just about any test of cognitive performance—even those administered before kindergarten. Gaps in admission test performance among racial groups mirror other achievement gaps (e.g., high school GPA) that also manifest well before high school graduation. (Note: these group differences are differences between the averages—technically, the means—for the respective groups. The full range of scores is found within all the groups, and there is significant overlap between groups.)

Group differences in admission test scores do not mean that the tests are biased. An observed difference does not provide an explanation of the difference, and to presume that a group difference is due to a biased test is to presume an explanation of the difference. As noted recently by scientists Jerry Coyne and Luana Maroja, the existence of group differences on standardized tests is well known; what is not well understood is what causes the disparities: “genetic differences, societal issues such as poverty, past and present racism, cultural differences, poor access to educational opportunities, the interaction between genes and social environments, or a combination of the above.”8 Test bias, then, is just one of many potential factors that could be responsible for group disparities in performance on admission tests. As we will see in addressing Claim 2, psychometricians have a clear empirical method for confirming or disconfirming the existence of test bias and they have failed to find any evidence for its existence. (Psychometrics is that division of psychology concerned with the theory and technique of measurement of cognitive abilities and personality traits.)

Claim 2: Standardized tests do not predict academic outcomes.

Response: Standardized tests do predict academic outcomes, including academic performance and degree completion, and they predict with similar accuracy for all racial/ethnic groups.

The purpose of standardized admission tests is simple: to predict applicants’ future academic performance. Any metric that fails to predict is rendered useless for making admission decisions. The Scholastic Assessment Test (now simply called the SAT) has predictive validity if it predicts outcomes such as college grade point average (GPA), whether the student returns for the second year (retention), and degree completion. Likewise, the Graduate Record Examination (GRE) has predictive validity if it predicts outcomes such as graduate school GPA, degree completion, and the important real-world measure of publications. In practice, predictive validity, for example between SAT scores and college GPA, implies that if you pull two SAT-takers at random off the street, the one who earned a higher score on the SAT is more likely to earn a higher GPA in college (and is less likely to drop out). The predictive utility of standardized tests is solid and well established. In the same way that blood pressure is an important but not perfect predictor of stroke, cognitive test scores are an important but not perfect predictor of academic outcomes. For example, the correlation between SAT scores and college GPA is around .5,91011 the correlations between GRE scores and various measures of graduate school performance range between .3 and .4,12 and the correlation between Medical College Admission Test (MCAT) scores and licensing exam scores during medical school is greater than .6.13 Using aggregate rather than individual test scores yields even higher correlations; for example, a college’s six-year graduation rate can be predicted from the ACT/SAT scores of its incoming students. Based on 2019 data, the correlations between six-year graduation rate and a college’s 25th percentile ACT or SAT score are between .87 and .90.14
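
To put a number on the “two test-takers off the street” interpretation: assuming, for illustration, that test scores and GPA are bivariate normal with the cited correlation of .5 (our simplifying assumption, not something taken from the cited studies), a quick simulation shows the higher scorer also ends up with the higher GPA about two-thirds of the time.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
r = 0.5  # the SAT-to-college-GPA correlation cited above

# Simulate (test score, GPA proxy) pairs from a bivariate normal with r = 0.5.
test = rng.standard_normal(n)
gpa = r * test + np.sqrt(1 - r**2) * rng.standard_normal(n)

# Draw many random pairs of people: how often does the higher scorer
# also have the higher outcome?
i = rng.integers(0, n, size=n)
j = rng.integers(0, n, size=n)
concordant = ((test[i] - test[j]) * (gpa[i] - gpa[j]) > 0).mean()
print(f"P(higher scorer also has higher GPA) = {concordant:.2f}")  # about 0.67
```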

Standardized tests do predict academic outcomes, including academic performance and degree completion, and they predict with similar accuracy for all racial/ethnic groups.

Research confirming the predictive validity of standardized tests is robust and provides a stark contrast to popular claims to the contrary.151718 The latter are not based on the results of meta-analyses1920 nor on studies conducted by psychometricians.2122232425 Rather, those claims are based on cherry-picked studies that rely on select samples of students who have already been admitted to highly selective programs—partially because of their high test scores—and who therefore have a severely restricted range of test scores. For example, one often-mentioned study26 investigated whether admitted students’ GRE scores predicted PhD completion in STEM programs and found that students with higher scores were not more likely to complete their degree. In another study of students in biomedical graduate programs at Vanderbilt,27 links between GRE scores and academic outcomes were trivial. However, because the samples of students in both studies had a restricted range of GRE scores—all scored well above average28—the results are essentially uninterpretable. This situation is analogous to predicting U.S. men’s likelihood of playing college basketball based on their height, but only including in the sample men who are well above average. If we want to establish the link between men’s height and playing college ball, it is more appropriate to begin with a sample of men who range from 5'1" (well below the mean) to 6'7" (well above the mean) than to begin with a restricted sample of men who are all at least 6'4" (two standard deviations above the mean). In the latter context, what best differentiates those who play college ball versus not is unlikely to be their height—not when they are all quite tall to begin with.
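
The range-restriction problem is easy to demonstrate numerically. The following minimal simulation (our illustration, with assumed numbers: a bivariate normal population, a true correlation of .5, and an “admitted” group scoring at least two standard deviations above the mean, mirroring the basketball analogy) shows the correlation nearly vanishing in the restricted sample even though the test is genuinely predictive in the full pool.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
r = 0.5  # true test-outcome correlation in the full applicant pool

test = rng.standard_normal(n)
outcome = r * test + np.sqrt(1 - r**2) * rng.standard_normal(n)

full_r = np.corrcoef(test, outcome)[0, 1]

# Restrict to "admitted" applicants: only those at least 2 SD above the
# mean, as in the height analogy above.
admitted = test >= 2.0
restricted_r = np.corrcoef(test[admitted], outcome[admitted])[0, 1]

print(f"Correlation in full population:  {full_r:.2f}")       # about 0.50
print(f"Correlation among admitted only: {restricted_r:.2f}")  # about 0.19
```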

Students of higher socioeconomic status (SES) do tend to score higher on the SAT and fare somewhat better in college. However, this link is not nearly as strong as many people … tend to assume.

Given these demonstrated facts about predictive validity, let’s return to the first claim, that admission tests are biased against certain groups. This claim can be evaluated by comparing the predictive validities for each racial or ethnic group. As noted previously, the purpose of standardized admission tests is to predict applicants’ future academic performance. If the tests serve that purpose similarly for all groups, then, by definition, they are not biased. And this is exactly what scientific studies find, time and time again. For example, the SAT is a strong predictor of first-year college performance and retention to the second year, and to the same degree (that is, they predict with essentially equal accuracy) for students of varying racial and ethnic groups.2930 Thus, regardless of whether individuals are Black, Hispanic, White, or Asian, if they score higher on the SAT, they have a higher probability of doing well in college. Likewise, individuals who score higher on the GRE tend to have higher graduate school GPAs and a higher likelihood of eventual degree attainment; and these correlations manifest similarly across racial/ethnic groups, males and females, academic departments and disciplines, and master’s as well as doctoral programs.31323334 When differential prediction does occur, it is usually in the direction of slightly overpredicting Black students’ performance (such that Black students perform at a somewhat lower level in college than would be expected based on their test scores).
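
The differential-prediction check described here amounts to fitting one regression with group-by-score interaction terms and asking whether intercepts or slopes differ by group. Below is a sketch under assumed, illustrative numbers (simulated data, not figures from the cited studies): two groups differ in average score yet share a single score-to-GPA line, which is what “predicts with essentially equal accuracy” means operationally.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 20_000

# Two groups with a mean score gap but the SAME score-to-GPA relationship,
# i.e., an unbiased test by the differential-prediction standard.
group = rng.choice(["A", "B"], size=n)
score = rng.standard_normal(n) + np.where(group == "B", -0.5, 0.0)
gpa = 0.5 * score + np.sqrt(0.75) * rng.standard_normal(n)

df = pd.DataFrame({"gpa": gpa, "score": score, "group": group})

# Bias check: do intercepts or slopes differ by group? With a shared
# line, the C(group) and score:C(group) coefficients come out near zero.
fit = smf.ols("gpa ~ score * C(group)", data=df).fit()
print(fit.params.round(3))
```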

Claim 3: Standardized tests are just indicators of wealth or access to test preparation courses.

Response: Standardized tests were designed to detect (sometimes untapped) academic potential, which is very useful; and controlling for wealth and privilege does not detract from their utility.

Some who are critical of standardized tests say that their very existence is racist. That argument is not borne out by the history and expansion of the SAT. One of the long-standing purposes of the SAT has been to lessen the use of legacy admissions (set-asides for the progeny of wealthy donors to the college or university) and thereby to draw college students from more walks of life than elite high schools of the East Coast.35 Standardized tests have a long history of spotting “diamonds in the rough”—underprivileged youths of any race or ethnic group whose potential has gone unnoticed or who have under-performed in high school (for any number of potential reasons, including intellectual boredom). Notably, comparisons of Black and White students with similar 12th grade test scores show that Black students are more likely than White students to complete college.36 And although most of us think of the SAT and comparable American College Test (ACT) as tests taken by high school juniors and seniors, these tests have a very successful history of identifying intellectual potential among middle-schoolers37 and predicting their subsequent educational and career accomplishments.38

Students of higher socioeconomic status (SES) do tend to score higher on the SAT and fare somewhat better in college.39 However, this link is not nearly as strong as many people, especially critics of standardized tests, tend to assume—17 percent of the top 10 percent of ACT and SAT scores come from students whose family incomes fall in the bottom 25 percent of the distribution.40 Further, if admission tests were mere “wealth” tests, the association between students’ standardized test scores and performance in college would be negligible once students’ SES is accounted for statistically. Instead, the association between SAT scores and college grades (estimated at .47) is essentially unchanged (moving only to .44) after statistically controlling for SES.4142
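
The “controlling for SES” step uses the standard first-order partial correlation. Plugging in the article’s .47 along with two assumed SES correlations (our illustrative values, not figures from the cited studies) shows why the adjusted estimate barely moves unless test scores are very strongly tied to SES.

```python
import math

def partial_corr(r_xy: float, r_xz: float, r_yz: float) -> float:
    """Correlation of x and y after statistically controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

r_sat_gpa = 0.47  # SAT-GPA correlation cited above
r_sat_ses = 0.25  # assumed SAT-SES correlation (illustrative)
r_gpa_ses = 0.15  # assumed GPA-SES correlation (illustrative)

print(round(partial_corr(r_sat_gpa, r_sat_ses, r_gpa_ses), 2))  # about 0.45

# If the SAT really were just a "wealth test" (r_sat_ses near 1, with the
# SAT-GPA link running entirely through SES), the numerator would shrink
# toward zero and the partial correlation would collapse with it.
```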

Standardized tests have a long history of spotting “diamonds in the rough”—underprivileged youths of any race or ethnic group whose potential has gone unnoticed.

A related common criticism of standardized tests is that higher-SES students have better access to special test preparation programs and specific coaching services that advertise their potential to raise students’ test scores. The findings from systematic research, however, are clear: test preparation programs, including semester-long, weekly, in-person structured sessions with homework assignments,43 yield limited gains, and this is the case for the ACT, SAT, GRE, and LSAT.44454647 Average gains are small—approximately one-tenth to one-fifth of a standard deviation. Moreover, free test preparation materials are readily available at libraries and online; and for tests such as the SAT and ACT, many high schools now provide, and often require, free in-class test preparation sessions during the year leading up to the test.

Claim 4: Admission decisions are fairer without standardized tests.

Response: The admissions process will be less useful, and more unfair, if standardized tests are not used.

According to the fairtest.org website, in 2019, before the pandemic, just over 1,000 colleges were test-optional. Today, there are over 1,800. In 2022–2023, only 43 percent of applicants submitted ACT/SAT scores, compared to 75 percent in 2019–2020.48 Currently, there are over 80 colleges that do not consider ACT/SAT scores in the admissions process even if an applicant submits them. These colleges are using a test-free or test-blind admissions policy. The same trend is occurring for the use of the GRE among graduate programs.49

The movement away from admission tests began before the COVID-19 pandemic but was accelerated by it, and there are multiple reasons why so many colleges and universities are remaining test-optional or test-free. First, very small colleges (and programs) have taken enrollment hits and suffered financially. By eliminating the tests, they hope to attract more applicants and thereby enroll more students. Once a few schools go test-optional or test-free, other schools feel they have to as well in order to be competitive in attracting applicants. Second, larger, less-selective schools (and programs) can similarly benefit from relaxed admission standards by enrolling more students, which, in turn, benefits their bottom line. Both types of schools also increase their percentages of minority student enrollment. It looks good to their constituents that they are enrolling young people from historically underrepresented groups and giving them a chance at success in later life. Highly selective schools also want a diverse student body but, like the previously mentioned schools, will not see much of a change in minority graduation rates simply by lowering admission standards if they also maintain their classroom academic standards. They will get more applicants, but they are still limited by the number of students they can serve. Rejection rates increase (due to more applicants), and other metrics become more important in identifying which students can succeed in a highly competitive academic environment.

The admissions process will be less useful, and more unfair, if standardized tests are not used.

There are multiple concerns with not including admission tests as a metric to identify students’ potential for succeeding in college and advanced degree programs, particularly those programs that are highly competitive. First, the admissions process will be less useful. Other metrics, with the exception of high school GPA as a solid predictor of first-year grades in college, have lower predictive validity than tests such as the SAT. For example, letters of recommendation are generally considered nearly as important as test scores and prior grades, yet letters of recommendation are infamously unreliable—there is more agreement between two letters about two different applicants from the same letter-writer than there is between two letters about the same applicant from two different letter-writers.50 (Tip to applicants—make sure you ask the right person to write your recommendation). Moreover, letters of recommendation are weak predictors of subsequent performance. The validity of letters of recommendation as a predictor of college GPA hovers around .3; and although letters of recommendation are ubiquitous in applications for entry to advanced degree programs, their predictive validity in that context is even weaker.51 More importantly, White and Asian students typically get more positive letters of recommendation than students from underrepresented groups.52 For colleges that want a more diverse student body, placing more emphasis on admission metrics that also reveal race differences will not help.

Without the capacity to rely on a standard, objective metric such as an admission test score, some admissions committee members may rely on subjective factors, which will only exacerbate … disparate representation.

This brings us to our second concern. Because race differences exist in most metrics that admission officers would consider, getting rid of admission test scores will not solve any problems. For example, race differences in performance on Advanced Placement (AP) course exams, now used as an indicator of college readiness, are substantial. In 2017, just 30 percent of Black students’ AP exams earned a qualifying score compared to more than 60 percent of Asian and White students’ exams.53 Similar disparities exist for high school GPA; in 2009, Black students averaged 2.69, whereas White students averaged 3.09,54 even with grade inflation across U.S. high schools.5556 Finally, as mentioned previously, race differences even exist in the very subjective letters of recommendation submitted for college admission.57

Removing tests from the process is not going to address existing inequities; if anything, it promises to exacerbate them.

Without the capacity to rely on a standard, objective metric such as an admission test score, some admissions committee members may rely on subjective factors, which will only exacerbate any disparate representation of students who come from lower-income families or historically underrepresented racial and ethnic groups. For example, in the absence of standardized test scores, admissions committee members may give more attention to the name and reputation of students’ high school, or, in the case of graduate admissions, the name recognition of their undergraduate research mentor and university. Admissions committees for advanced degree programs may be forced to pay greater attention to students’ research experience and personal statements, which are unfortunately susceptible to a variety of issues, not the least being that students of high socioeconomic backgrounds may have more time to invest in gaining research experience, as well as the resources to pay for “assistance” in preparing a well-written and edited personal statement.58

So why continue to shoot the messenger?

If scientists were to find that a medical condition is more common in one group than in another, they would not automatically presume the diagnostic test is invalid or biased. As one example, during the pandemic, COVID-19 infection rates were higher among Black and Hispanic Americans compared to White and Asian Americans. Scientists did not shoot the messenger or engage in ad hominem attacks by claiming that the very existence of COVID tests or support for their continued use is racist.

Sadly, however, that is not the case with standardized tests of college or graduate readiness, which have been attacked for decades,59 arguably because they reflect an inconvenient, uncomfortable, and persistent truth in our society: There are group differences in test performance, and because the tests predict important life outcomes, the group differences in test scores forecast group differences in those life outcomes.

The attack on testing is likely rooted in a well-intentioned concern that the social consequences of test use are inconsistent with our social values of equality.60 That is, there is a repeated and illogical rejection of what “is” in favor of what educators feel “ought” to be.61 However, as we have seen in addressing misconceptions about admission tests, removing tests from the process is not going to address existing inequities; if anything, it promises to exacerbate them by denying the existence of actual performance gaps. If we are going to move forward on a path that promises to address current inequities, we can best do so by assessing as accurately as possible each individual to provide opportunities and interventions that coincide with that individual’s unique constellation of abilities, skills, and preferences.6263

Categories: Critical Thinking, Skeptic

Preserving Food

neurologicablog Feed - Thu, 05/22/2025 - 5:04am

About 30-40% of the produce we grow ends up wasted. This is a massive inefficiency in the food system. It occurs at every level, from the farm to the end user, and for a variety of reasons. This translates to enough food worldwide to feed 1.6 billion people. We also have to consider the energy that goes into growing, transporting, and disposing of this wasted food. Not all uneaten food winds up in landfills. About 30% of the food fed to animals is food waste. Some food waste ends up in compost, which is used as fertilizer. This is still inefficient, but at least it is recycled.

There is a huge opportunity for increased efficiency here, one that can save money, reduce energy demand, reduce the carbon footprint of our food infrastructure, and reduce the land necessary to meet our nutritional needs. Increased efficiency will be critical as our population grows (it is estimated to peak at about 10 billion people). But there is no one cause of food waste, and therefore there is no one solution. It will take a concerted effort in many areas to minimize food waste, and make the best use of the food that does not get eaten by people.

One method is to slow food spoilage. The longer food lasts after it has been harvested, the less likely it is to be wasted due to spoilage. Delaying spoilage also makes it easier to get food from the farm to the consumer, because there is more time for transport. And delayed spoilage, if sufficient, may reduce dependence on the cold chain – an expensive and energy-intensive process by which food must be maintained in refrigerated conditions for its entire life, from the farm until used by the consumer.

A recent study explores one method for delaying spoilage – injecting small amounts of melatonin into plants through silk microneedles. The melatonin regulates the plant’s stress response and slows spoilage. In this study the researchers looked at pak choy. Treatment extended the shelf life (the time during which produce can be sold) from 4 days to 8 without refrigeration, and from 15 days to 25 with refrigeration. This was a lab proof-of-concept, and so the process would need to be industrialized and made cost-effective enough to be viable. It also would not necessarily be needed in every situation, but could be used in areas where a cold chain is very difficult or expensive, or where transportation is slow. This could therefore not only reduce waste, but improve food availability in challenging areas.

Perhaps the most effective way to extend shelf life is through irradiation, a proven and cost-effective method. This exposes food to gamma rays (from cobalt-60 sources), electron beams, or X-rays, killing most microorganisms and delaying ripening or sprouting. This is completely safe – the resulting food is not radioactive. The radiation just passes through it. There is no significant difference in nutritional value and only subtle changes to taste (comparable to the effect of pasteurization on milk). The effectiveness depends on the food item being irradiated – fresh produce may last for an additional week, meat for an additional month, and dried goods for months or even years. This process not only reduces food waste and reliance on the cold chain, it reduces foodborne illness as well.

The main limitation of irradiation is public acceptance. Studies show that between 40 and 50% of people would accept irradiated food, but this number increases to 80-90% with education. In the US, irradiated food cannot be labeled organic – yet another perfectly safe technology opposed by the counterproductive organic lobby. Part of the problem is mandated labeling that mostly scares rather than informs consumers.

These same problems, of course, exist for another way to extend shelf life – genetic engineering. There is already approval for GMO apples, bananas, strawberries, tomatoes, and potatoes with extended shelf life. GMO produce is perfectly safe, something I have written about extensively. All of the tropes spread by the anti-GMO / organic lobby are false or grossly misleading. Meanwhile this technology can dramatically increase the efficiency of our food infrastructure, which is the best way to limit the environmental footprint of our food system. It is ironic that a group – organic farmers and consumers – that claims to be interested in helping the environment is directly harming it, and represents one of the greatest threats to the environment. By limiting the use of GMOs they are effectively increasing land use for agriculture (which is the biggest negative effect agriculture has on the environment) and blocking the most effective methods to limit food waste.

They argue that the point of opposing GMOs is to limit pesticide use, but this is false on two main levels. First, GMO technology is not just about making pesticide-tolerant cultivars; that is one application. It makes no sense to oppose a technology because you object to one specific application. But there is also no evidence that pesticide-tolerant GMOs increase overall pesticide use. They increase the use of the specific pesticide to which the plants are tolerant, but decrease the use of usually more toxic pesticides. Also, some GMOs decrease pesticide use by creating plants that are inherently pest resistant. Further, organic farmers do use pesticides – just “natural” ones that we cannot assume are safe, and that are generally less effective, and therefore have to be used more frequently and in larger amounts. This is what happens when you substitute logic and evidence with ideology (such as the appeal-to-nature fallacy).

Reducing food waste may not be sexy, but this is an important area that deserves our attention. It is a huge opportunity to increase efficiency, reduce disease, improve nutrition, and decrease the environmental footprint of agriculture.


The post Preserving Food first appeared on NeuroLogica Blog.

Categories: Skeptic

Can Evolutionary Psychology Explain Fashion?

Skeptic.com feed - Tue, 05/20/2025 - 6:15pm

When people think of fashion, they often picture runway shows, luxury brands, pricey handbags, or the latest trends among teens and young adults. Fashion can be elite and expensive or cheap and fleeting—a statement made through clothing, hairstyles, or even body modifications. Regardless of gender, fashion is frequently viewed as a way to signal income, social status, group affiliation, personal taste, or even to attract a partner. But why does fashion serve these purposes, and where do these associations come from? An evolutionary perspective offers surprising insights into the role of fashion in signaling status and sexual attraction.

The adaptive nature of ornamentation has long been admired and studied in a wealth of nonhuman species. Most examples are ornaments the animals grow themselves.1 Consider the peacock’s tail, a sexually selected trait present only in males.2 Peahens are attracted to males with the largest and most symmetrical tails.

The ability of males to grow a large and symmetric tail is related to their overall fitness (the ability to pass their genes into the next generation), so that females that mate with them will have better-quality offspring. Studies have shown that altering the length and symmetry of peacock tails influences mating success—shorter tails lead to fewer mating opportunities for the males. Antlers are primarily found on male members of the Cervidae family, which includes elk, deer, moose, and caribou (the one species in which the females also grow antlers).3 Antlers, unlike horns, are shed and regrown every year. They are used as weapons, symbols of sexual prowess or status, and as tools to dig in the snow for food. Antlers increase in size until males reach maturity, and grow larger with better nutrition, higher testosterone levels, and better health or absence of disease during growth. The size of a male’s antlers is also influenced by genetics, and females prefer to mate with males with larger antlers compared to smaller ones (much like in the peacocks).45

In many species, exaggerated male structures like tails, antlers, bright coloration, and sheer size can serve as a weapon in intrasexual competition and as an ornament to signal genetic quality and thereby promote female choice. As a result, much attention has been focused on male ornamentation in nonhuman animals and what it indicates.6 Moreover, males of various species add outside materials to their bodies, nests, and environments specifically to attract mates. Consider the caddisfly, the bower bird, and even the decorator crab; all use decoration to attract females.7 Interestingly, in what are often referred to as sex role-reversed species, such as the pipefish,8 it is the females who are more competitive for mates and are more highly ornamented. But what about humans? Has ornamentation or fashion in humans also been shaped by sexual selection?

Humans do not have “natural” ornaments like tails or antlers to display their quality.

Humans have a fascination with fashion, as best summed up by the psychologist George Sproles:9 “Psychologists speak of fashion as the seeking of individuality; sociologists see class competition and social conformity to norms of dress; economists see a pursuit of the scarce; aestheticians view the artistic components and ideals of beauty; historians offer evolutionary explanations for changes in design. Literally hundreds of viewpoints unfold, from a literature more immense than for any phenomenon of consumer behavior.” To be fair, humans do not have “natural” ornaments like tails or antlers to display their quality. They also do not have much in the way of fur, armor, or feathers to protect their bodies or to regulate temperature, so “adornment” in the form of clothing was necessary for survival. However, humans have spent millennia fashioning and refashioning what they wear, not just according to climate or condition, but for status, sex, and aesthetics.

If fashion has been such a large part of human history with deep evolutionary roots, why do so many trends, preferences, and standards fluctuate across cultures and time? This is because fashion is a display of status as well as mating appeal. Many human preferences are influenced by context. For example, male preferences for women’s body size and weight shift with resource availability; in populations with a significant history of food shortages, larger or obese women are prized. Larger women are displaying that they have, or can acquire, resources that others cannot and have sufficient bodily resources for reproduction.10 When resources are historically abundant, men prefer thinner women; in this context, these women display that they can acquire higher-quality nutrition and have time or resources to keep a fit, youthful figure. When tan bodies indicated working outside, and therefore lower standing, pale skin was preferred. When some societies shifted to tan bodies reflecting a life of resources and leisure, they gave tanning prestige, and it became “fashionable.”11

The shifts in what is fashionable can be attributed to these environmental changes, but one principle remains constant: if it displays status (social, financial, or sexual), it is preferred.12 A good example of this would be jewelry, which shifts with fashion trends—whether gold or silver is in this season, or whether rose gold is passé. If the appeal of jewelry were just aesthetic—to be shiny or pretty—people would not care whether the jewels were real and expensive or cheap “costume” jewelry. Yet they do care, because expense indicates greater wealth and status; so much so that people often comment on the authenticity or the size (and therefore cost) of jewels, such as the diamonds in engagement rings.13

Fashion for Sexual Display

It would be surprising if fashion and how humans choose to ornament themselves were not influenced by sexual selection. Humans show a number of traits that are sexually selected in other species, including dimorphism in physical size and aggression, delayed sexual maturity in males, and greater male variation in reproductive success (defined as the number of offspring).14 Men typically choose clothing that emphasizes the breadth of their shoulders and sometimes adds to their height through shoes with lifts or heels. In many modern western populations, men also spend significant time crafting their body shape by weight lifting to attain that triangle-shaped upper body without the benefit of shoulder pads or other deceptive tailoring signals. These are all traits that females have been shown to value in terms of choosing a mate.15

Illustration by Marco Lawrence for SKEPTIC

Examining artistic depictions of bodies provides particular insights into human preferences, as these figures are not limited by biology and can be as exaggerated as the artist wants. We can also see how the population reacts to these figures in terms of popularity and artistic trends. The triangular masculine body shape has been historically exaggerated in art and among fictional heroes, and this feature continues today as comic books and graphic artists create extreme triangular torsos and film superhero costumes with tight waists and padded shoulders and arms. These costumes are not new and do not vary a great deal. They mimic the costume of warriors, soldiers, and other figures of authority or dominance. As cultural scholar Friedrich Weltzien writes, “The superhero costume is an imitation of the historical models of the warrior, the classic domain of heroic manhood.”16

If it displays status (social, financial, or sexual), it is preferred.

Indeed, military personnel and heroes share behaviors and purposes (detecting threats, fighting adversaries, protecting communities, and achieving status in hierarchies). These costumes act as physical markers and are used to display dominance in size, muscularity, and markers of testosterone. Research has found that comic book men have shoulder-to-waist ratios (the triangular torso) and upper body muscularity almost twice that of real-life young men, and that Marvel comic book heroes in particular are more triangular and muscular than championship body builders. What is remarkable is that even with imaginary bodies, male comic book hero “suits” have several features that, not coincidentally, exaggerate markers of testosterone and signal dominance and strength. Even more triangular torsos are created by padded shoulders and accents (capes, epaulets) and flat stomachs (tight costumes with belts, abdominal accents) with chest pieces that have triangular shapes or insignia, large legs and footwear (boots, holsters), and helmets and other face protection that create angular jawlines.17

Men’s choice of clothing and jewelry … convey information about status and resources that are valued by the opposite sex for what they may contribute to offspring success.

The appearance of a tall, strong, healthy masculine body shape is often weighted strongly by women in their judgments of men. There is also an interaction between sex appeal and status. Women choose these men in part because the men’s appearance affects how other men treat them. Men who appear more masculine and dominant elevate their status among men, which makes them more attractive to women.18 Men’s choice of clothing and jewelry or other methods of adornment can not only emphasize physical traits but also convey information about status and resources that are valued by the opposite sex for what they may contribute to offspring success. Some clothing brands (or jewelry) are more expensive and are associated with more wealth, and so are likely to attract the attention of the opposite sex; think of brand logos, expensive watches, or even the car logo on a keychain as indicators of wealth.19

Female fashion also shows indications of being influenced by its ability to signal mate value or enhance it, sometimes deceptively. In many mammals, female red ornamentation is a sexual signal designed to attract mates.20 Experimental studies of human females suggest that they are more likely to choose red clothing when interacting with an attractive man than an attractive woman;21 the suggestion being that red coloration can serve a sexual signaling function in humans as well as other primates. Red dyes in clothing and cosmetics have been extremely popular over centuries, notably cochineal, madder, and rubia. In fact, the earliest documented dyed thread was red.22

One of the primary attributes that women have accentuated throughout time is their waist-to-hip ratio, a result of estrogen directing fat deposition23—a signal of reproductive viability. The specific male preferences regarding waist-to-hip ratio have been documented for decades.24 But is this signal, and its amplification, really a global phenomenon? It is easy to give western examples of waist minimization and hip amplification—corsets, hoop skirts, bustles, and especially panniers,25 or fake hips that can make a woman as wide as a couch. Even before these, there was the “bum roll”—rolled-up fabric attached to a belt to create a larger bulge over the buttocks.

Outside of Western cultures, one can find a variety of “wrappers” (le pagne in Francophone African cultures), yards of fabric wrapped around the hips and other parts of the body to accentuate and amplify the hips.26 Not surprisingly, these are also a show of status as the quality of the fabric is prioritized and displayed.

Just as with men, this specific attribute is wildly exaggerated in fictional depictions of women, from ancient statues to contemporary comic, film, and video game characters. One study concluded that “when limitations imposed by biology are removed, preferred waist sizes become impossibly small.”27 Comic book heroines are drawn with skintight costumes and exaggerated waist-to-hip ratios. They have smaller waists and wider hips than typical humans by far; the average waist-to-hip ratio of a comic book woman was smaller than the minimum waist-to-hip ratio of real women in the U.S. Heroine costumes further accentuate this already extreme curve by use of small belts or sashes, lines, and color changes. Costumes are either skintight or show skin (or both), with cutouts on the arms, thighs, midriff, and in particular, on the chest to show cleavage. The irony of battle uniforms that serve no protective purpose has been pointed out many times in cultural studies.28

Another feminine feature that plays a role in fashion is leg length. Various artistic depictions of the human body throughout history show that while the ideal leg length in women has increased over time, the preference for male leg length has not shifted. This increase appears to emerge during the Renaissance, which may be due to increases in food security and health during that time. As with many physical preferences in humans, leg length can be an indicator of health, particularly in cases of malnutrition or illness during development. This is another important reminder that preferences are shaped by resources, and consistently shift toward features that display status. What is the ideal leg length? One study found that if a woman’s height was 170 cm (5 foot 7 inches), the majority favored a leg length that was 6 cm (2.36 inches) longer, a difference that corresponds to the average height of high-heeled shoes.29 You can probably see where this is going: Sexual attractiveness ratings of legs correlate with perceived leg length, and legs are perceived as longer with high-heeled shoes. It should come as no surprise that women may accentuate or elongate their legs with high heels.

Photo by Ham Kris / Unsplash

High-heeled shoes were not originally the domain of women, as they are thought to have originated in Western Asia prior to the 16th century in tandem with male military dress and equestrianism. The trend spread to Europe, with both sexes wearing heightened heels by the mid-17th century.30 They have remained present in men’s fashion in the form of shoes for rockstars and entertainers (e.g., Elton John), and boots worn by cowboys and motorcyclists. However, these heels are either short or hidden as lifts to make the men appear taller. By the 18th century, high heels came to be worn primarily by women, particularly as societies redefined fashion as frivolous and feminine.

As one might expect, high heels do more than elongate legs and increase height. High heels change the shape of the body and how it moves. Women wearing heels increase their lumbar curvature and exaggerate their hip rotation, breasts, and buttocks, making their body curvier. As supermodel Veronica Webb put it, “Heels put your ass on a pedestal.” When women walk in heels, they must take smaller steps, utilize greater pelvic rotation, and have greater pelvic tilt. All of these changes result in greater attractiveness ratings. Wearing high heels also denotes status—high heel shoes are typically more expensive than flat shoes, and women who wear them sustain serious damage if they have occupations that require a lot of labor. Therefore, women who wear heels appear to be in positions where they do less labor and have more resources. Research has asked this question directly, and both men and women view women in high heels as being of higher status than women wearing flat shoes.31

Fashion can also signify membership in powerful groups, such as the government, the military, or nobility.

At this point, it’s hardly surprising to learn that, compared to actual humans, comic book women are depicted with longer legs that align with peak preferences for leg length in several cultures, while men are shown with legs of average length. Women are also far more often drawn in heels or on tiptoe, regardless of context. Women are even drawn on tiptoe when barefoot, in costume stocking feet, and even when wearing other types of shoes or boots. This further elongates their already longer legs.32

Fashion as Status Signaling

Social status, as previously mentioned in terms of traits valued by the opposite sex, is also often displayed through fashion in ways relevant to within-sex status signaling, particularly when it comes to accessories. Men’s fashion choices that signal masculinity and dominance include preferences for expensive cars and watches—aspects of luxury consumption.33 Women not only emphasize their own beauty but also carry bags, for example, that are brand conscious, conveying information about their wealth and perhaps their preferences for specific causes, as in the popularity of animal-welfare-friendly high-end brands such as Stella McCartney.

Unlike high-end cars, however, which signal status to possible mates as well as status competitors, men are largely unaware of the signals sent from women to other women by such accessories. Women are highly attuned to brands and costs of women’s handbags, while most men do not seem to recognize the signaling value.34 While luxury products can boost self-esteem, express identity, and signal status, men tend to use conspicuous luxury products to attract mates, while women may use such products to deter female rivals. Some studies have shown that activating mate guarding motives prompts women to seek and display lavish possessions, such as clothes, handbags, and jewelry, and that women use pricey possessions to signal that their romantic partner is especially devoted to them.35

Fashion can also signify membership in powerful groups, such as the government, the military, or nobility. It can also signify the person’s role in society in other ways, for example, whether someone is married, engaged, or betrothed (by their own volition or by family). There are several changes in fashion that are specific to the various events surrounding a wedding, each with its own cultural differences and symbolism, and far too many to review here.36 Several researchers have explored the prominence and the symbolic value of a bride’s traditional dress in different societies.37 However, these signifiers are not just specific to the wedding rituals; what these women wear as wives (and widows) is culturally dictated for the rest of their lives.

These types of salient markers of female marital status are present in a number of societies. For example, not only are Latvian brides no longer allowed to wear a crown, but they may be given an apron and other displays (such as housekeeping tools) that indicate that they are now wives. In other cultures, girls will wear veils from puberty to their wedding day, and the removal of the veil is an obvious display of the change in status. Some cultures symbolically throw away the bride’s old clothes, as she is no longer that person; she is now the wife of her husband. In Turkey, married Pomak women cut locks of hair on either side of their head, and their clothing is much simpler in style than the highly decorated daily clothing of unmarried Pomak women. However, wives do wear more expensive necklaces—gold or pearls rather than beads.38 Notice that this is not only a signal of marital status, but also a signal of the groom’s wealth.

An evolutionary perspective suggests … people who choose to tattoo and pierce their bodies are doing so … because it serves as an advertisement or signal of their genetic quality.

Meanwhile, for men, the vast majority of cultures possess only one marker for married men—a wedding ring—which is also expected of women. Why are there more visible markers of marital status for women than for men? This seems likely to be a product of the elevated sexual jealousy and resulting proprietariness employed by men to prevent cuckoldry—what evolutionary psychologists call mate guarding. Salient markers of marital status for women show other men that she is attached to, or the property of, her husband. If the term “property” seems like an exaggeration, cultures have been documented to have rituals specifically for the purpose of transferring ownership of the bride from her parents to her husband, with the accompanying changes in appearance to declare that transfer to the public.39

Tattoos as Signals of Mate Quality, Social Status, and Group Membership

Body modifications, such as tattoos and piercings, have become increasingly prevalent in recent years in Western culture, with rates in the United States approaching 25 percent.40 Historically, tattooing and piercing were frequently used as an indicator of social status41 or group membership, for example, among criminals, gang members, sailors, and soldiers. While this corresponds with all of the other types of adornment we have reviewed, other researchers have suggested that these explanations don’t fully illuminate why individuals should engage in such costly and painful behavior when other methods of affiliation, such as team colors, clothing, or jewelry, are less of a health risk. Tattoos and piercings are not only painful but entail health risks, including infections and disease transmission, such as hepatitis and HIV.42 One could suggest that the permanence of body modifications is a marker of commitment or significance, but an evolutionary perspective suggests an additional level of explanation: that people who choose to tattoo and pierce their bodies are doing so not only to show their bravery and toughness, but also because it serves as an advertisement or signal of their genetic quality. Good genetic quality and immunocompetence may be signaled by the presence and appearance of tattoos and piercings in much the same way as ornamentation, much as the peacock’s tail (in its size and symmetry) serves as a signal of male health and genetic quality.43

Photo by benjamin lehman / Unsplash

Even with tattoos, the same areas of the body are accentuated as we see in clothing.44 Researchers have reported sex differences in the placement of tattoos such that the respective secondary sexual characteristics were highlighted, with males concentrating on their upper bodies, drawing attention to the shoulder-to-hip ratio. Females had more abdominal and backside tattoos, drawing attention to the waist-to-hip ratio. The emphasis seems to be on areas highlighting fertility in females and physical strength in males, essential features of physical attractiveness.45 In fact, female body modification in the abdominal region was most common in geographic regions with higher pathogen load, again suggesting that such practices may serve to signal physical and reproductive health.46 Recent work has also indicated that social norms influence how tattoos affect perceptions of beauty, such that younger people and those who are themselves tattooed see them as enhancing attractiveness.47

Tattoos and piercings are not only painful but entail health risks, including infections and disease transmission, such as hepatitis and HIV.

Studies on humans and nonhuman animals have indicated that low fluctuating asymmetry (that is, greater overall symmetry in body parts) is related to developmental stability and is a likely indicator of genetic quality.48 Fluctuating asymmetry (FA), which is defined as deviation from perfect bilateral symmetry, is thought to reflect an organism’s relative inability to maintain stable morphological development in the face of environmental and genetic stressors. One study found49 FA to be lower (that is, the symmetry was greater) in those with tattoos or piercings. This effect was much stronger in males than in females, suggesting that those with greater developmental stability were able to tolerate the costs of tattoos or piercings, and that these serve as an honest signal of biological quality, at least in the men in this study.50 Researchers have also tested the “human canvas hypothesis,” which suggests that tattooing and piercing are hard-to-fake advertisements of fitness or social affiliations, and the “upping the ante hypothesis,” which suggests tattooing is a costly honest signal of good genes in that injury to the body can demonstrate how well it heals. In short, tattoos and piercings not only display a group affiliation, but also that the owner possesses higher genetic quality and health, and these tattoos are placed on areas that accentuate “sexy” body parts. Thus, we have come full circle with humans: Just as in other species like peacocks, people show off ornamentation to display their quality as mates and access to resources. Even taking into account cultural differences and generational shifts, the primary message remains.
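
To illustrate how FA is typically operationalized (a minimal sketch of my own construction, not the protocol of the cited study), one averages the absolute left-right differences across a set of bilateral traits:

```python
import numpy as np

# Hypothetical paired measurements in mm for one person (e.g., ear height,
# wrist width, elbow width, finger length) -- illustrative numbers only.
left  = np.array([61.2, 55.0, 72.3, 68.1])
right = np.array([60.8, 55.9, 72.0, 69.0])

fa = np.mean(np.abs(left - right))  # lower FA = greater bilateral symmetry
print(f"FA = {fa:.2f} mm")
```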

Social Factors in Human Ornamentation

In addition to all of the evidence we have presented here, ornamentation is not just about mating or even signaling social status. Humans also signal group membership or allegiance through fashion. Modern sports fans show their allegiance to their sports teams by various shirts, hats, and other types of clothing—think of the “cheese head” hats worn by Green Bay Packers fans at the team’s NFL home games. Fans of various musical performers, from Kid Rock to Taylor Swift, display their loyalty with concert shirts and other apparel. Typically, they also feel an automatic sense of connection when they encounter others sporting similar items. As discussed, tattoos can be seen as signals of genetic quality or health, and over the last twenty or so years tattoos have also increasingly become seen as statements of individuality. And yet, many serious sports fans, for example, have similar tattoos representing their favorite teams. Marvel fans sport Iron Man and Captain America illustrations on their skin, while fans of the television show Supernatural have the anti-possession symbol from the show tattooed on their torso. It may be that in many populations with weak social and family connections, individuals are seeking connection, and adornment is one way of indicating participation in a community or group. You can also see this in terms of political allegiance and the proliferation of Harris-Walz and MAGA-MAHA merchandise during the 2024 election cycle in the United States.

While it is clear that an adaptationist approach to ornamentation can explain many aspects of fashion related to signaling social status (whether honest or not), group membership, or mate quality, much research remains to be done, including more work on which aspects are cross-culturally consistent and which are constrained more by unique cultural factors or the local ecology. Not everything is the product of an adaptation; some aspects of fashion that seem less predictable or may be less enduring are unlikely to be explained by ornamentation and signaling theory because they are not rooted in mating or social motives. That being said, many fashion choices, including our own (for better or worse), make a lot of sense in the light of evolutionary processes. For all the small shifts from generation to generation and across cultures, the main themes remain the same. As Rachel Zoe noted: “Style is a way to say who you are without having to speak.”

What do your fashion choices have to say?

Categories: Critical Thinking, Skeptic

Skeptoid #989: Are $1,000,000 Paranormal Challenges Effective?

Skeptoid Feed - Tue, 05/20/2025 - 2:00am

Their exciting nature, combined with the fact that nobody’s ever won one, makes paranormal challenge prizes important educational tools.

Categories: Critical Thinking, Skeptic

End of Life on Earth

neurologicablog Feed - Mon, 05/19/2025 - 5:19am

Let’s talk about climate change and life on Earth. Not anthropogenic climate change – but long term natural changes in the Earth’s environment due to stellar evolution. Eventually, as our sun burns through its fuel, it will go through changes. It will begin to grow, becoming a red giant that will engulf and incinerate the Earth. But long before Earth is a cinder, it will become uninhabitable, a dry hot wasteland. When and how will this happen, and is there anything we or future occupants of Earth can do about it?

Our sun is a main sequence yellow star. The “main sequence” refers to the Hertzsprung-Russell diagram (HR diagram), which maps all stars based on mass, luminosity, temperature, and color. Most stars fall within a band called the main sequence, which is where stars lie while they are burning hydrogen into helium as their source of energy. More massive stars are brighter and have a color more towards the blue end of the spectrum. They also have a shorter lifespan, because they burn through their fuel faster than lighter stars. Blue stars can burn through their fuel in mere millions of years. Yellow stars, like our own, can last 10 billion years, while red dwarfs can last for hundreds of billions of years or longer.

Which stars are the best for life? We categorize main sequence stars as blue, white, yellow, orange, and red (this is a continuum, but that is how we humans categorize the colors we see). Interestingly, there are no green stars, which has more to do with human color perception than anything else. Stars at an otherwise “green” temperature have enough blue and red mixed in to appear white to our color perception. The hotter the star, the farther away a planet would have to be to be in its habitable zone, and that zone can be quite wide. But hotter stars are short-lived. Colder stars last for a long time but have a small and close-in habitable zone, so close that planets there may be tidally locked to their star. Red dwarfs are also relatively unstable and put out a lot of solar wind, which is unfriendly to atmospheres.
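
The “hotter star, farther habitable zone” point follows from a standard first-order approximation: the distance at which a planet receives Earth-like flux scales as the square root of the star’s luminosity. A toy sketch, with rough illustrative luminosities (my ballpark values, not from the post):

```python
import math

# Rough main-sequence luminosities in solar units -- illustrative values only.
stars = {
    "red dwarf (M)":   0.05,
    "orange (K)":      0.3,
    "yellow (G, Sun)": 1.0,
    "white (F)":       3.0,
    "blue (B)":        1000.0,
}

for name, lum in stars.items():
    d = math.sqrt(lum)  # distance (AU) at which flux matches Earth's today
    print(f"{name:16s} Earth-equivalent flux at ~{d:7.2f} AU")
```

A red dwarf’s habitable zone sits at a fraction of Mercury’s orbit, while a blue star’s sits tens of AU out, which is why temperature and lifespan trade off so sharply.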

So the ideal color for a star, if you want to evolve some life, is probably in the middle – yellow, right where we are. However, some astronomers argue that the optimal choice may be orange stars, which can last for 15 to 45 billion years or more, but with a comfortably distant habitable zone. If we are looking for life in our galaxy, then orange stars are probably the way to go.

What about our humble yellow sun? Our sun is about 4.6 billion years old, with a total lifespan of about 10 billion years. So it might seem as if we have another 5 billion years to go, which is a comfortable chunk of time. While main sequence stars are relatively stable, they do subtly change, and can significantly change toward the end of their life. So the question is – when will our sun change enough to threaten the habitability of the Earth? The 5-billion-year figure is how much longer our sun can burn hydrogen. After that it will start burning its helium at the core, and that is when it will start expanding into a red giant. However, we will run into problems long before then. As the sun burns hydrogen and collects helium at its core, it gradually brightens, by about 10% every billion years. When will this slow heating spell doom for life on Earth?

There are two other variables to consider. The environment of the Earth depends on three main things – the sun, the orbit of the Earth (and anything else in the solar system that might affect Earth), and conditions on Earth itself (the atmosphere, the biosphere, geology, our magnetic field). When you think about it, having a stable environment for billions of years is pretty amazing.

A recent paper considers the interaction between the slowly warming sun and the biosphere. Using a supercomputer to model what may happen, they conclude:

Our results suggest that the planetary carbonate–silicate cycle will tend to lead to terminally CO2-limited biospheres and rapid atmospheric deoxygenation, emphasizing the need for robust atmospheric biosignatures applicable to weakly oxygenated and anoxic exoplanet atmospheres and highlighting the potential importance of atmospheric organic haze during the terminal stages of planetary habitability.

In other words, the increasing heat will lead to chemical reactions that will reduce atmospheric CO2; this in turn will limit oxygen production through photosynthesis. Oxygen levels will crash, making the Earth uninhabitable to anything dependent on CO2 or oxygen. This will happen in about 1 billion years – 4 billion years sooner than our red giant phase. The Earth will continue to heat regardless, eventually burning away all our water and leaving a dry, lifeless desert.

Is there anything we can or should do about this? I will leave a deep discussion of “should” to philosophers, and only say keeping Earth habitable to life for as long as possible seems like a good idea to me. Assuming we want this, what can we do? First let me say that I think the question is irrelevant from a practical perspective. Even in a million years, humanity will have changed significantly, definitely technologically, but also probably biologically. In 20 million years or 100 million years, still long before the Earth becomes uninhabitable, other technological species may evolve on Earth. Many things can happen. It’s massively premature to worry about things on that timescale.

I also think it’s very likely that long before this becomes an issue humanity will either be extinct, or (hopefully) we will be a multi-planet species. We will likely settle many parts of our own solar system, and eventually travel to the nearest stars. Even still, the future technological inhabitants of Earth may want to preserve its ecosystem for as long as possible.

Assuming we cannot change the sun (barring some ridiculously advanced stellar engineering), we could try to manipulate the other variables. We could, for example, put objects into orbit that will reflect away part of the sun’s light and heat to compensate for its increased output. Another option seems more radical but may be easier, and even necessary – we could slowly move the Earth further from the sun to precisely compensate for the sun’s increased temperature. We could use spacecraft flybys to take some angular momentum from Jupiter and give it to the Earth, pushing it a tiny bit further from the sun. By one calculation, such a flyby would only need to occur once every 6,000 years in order to compensate for the warming of the sun (hat tip to Warwick for sending me this link).
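
To put rough numbers on this (my own back-of-envelope arithmetic, not the linked calculation): holding the sunlight Earth receives constant requires the orbital radius to grow as the square root of solar luminosity, so with the ~10%-per-billion-years brightening mentioned above and a flyby every 6,000 years, each nudge is tiny:

```python
AU_KM = 1.496e8               # kilometers per astronomical unit
a0 = 1.0                      # Earth's current orbital radius, AU

brightening = 0.10            # ~10% luminosity increase per Gyr (from the post)
years = 1e9
flyby_interval = 6_000        # years between flybys (from the post)

# To keep incident flux constant: a(t) = a0 * sqrt(L(t) / L0)
a_after = a0 * (1 + brightening) ** 0.5
drift_km = (a_after - a0) * AU_KM
n_flybys = years / flyby_interval

print(f"Orbit after 1 Gyr: {a_after:.3f} AU (+{drift_km / 1e6:.1f} million km)")
print(f"Outward drift needed per flyby: ~{drift_km / n_flybys:.0f} km")
```

The orbit only needs to expand by about 5 percent per billion years, which works out to a few tens of kilometers of outward drift per flyby.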

But if we have a robust space presence within our solar system over the next billion years (which seems likely), there will be countless Earth flybys by spacecraft. What we will need to do is track all the flybys, and/or their effects, and then calculate a compensatory flyby schedule, which can include moving the Earth slowly further from the sun.

It’s interesting, and daunting, to think about such long time scales. It reminds me of a science-fiction story (I forget which one) in which a tourist planet started to run into the problem of tourists carrying away net mass. Over hundreds and thousands of years, the planet was losing mass. So they had to pass and strictly enforce rules that no visitor could leave with more mass than they came with. If you wanted souvenirs (or even gain a little weight, which people on vacation often do) you had to pack your suitcase with some rocks to leave behind.

It seems like it will not be overly difficult for future Earth inhabitants (whether humans or something else) to keep Earth habitable for the full 5 billion years left in our sun’s main sequence life. So we have that going for us. But seriously, while all this is a fun thought experiment informed by our current scientific knowledge, it is also a reminder of how fragile our ecosystem is, especially when you think long term. We should respect our current stability, and we shouldn’t mess with it casually.

The post End of Life on Earth first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #1036 - May 17 2025

Skeptics Guide to the Universe Feed - Sat, 05/17/2025 - 3:00am
Dumbest Word of the Week: Moxibustion; News Items: Cold Plunges, The End of Life, Floating Nuclear Power, Visualizing Special Relativity, Brainspotting Pseudoscience; Who's That Noisy; Your Questions and E-mails: Ethics of Pig Hearts, Are Flat Earthers Real; Science or Fiction
Categories: Skeptic
