News Feeds

Trust No One, Believe Everything: Does Common Sense Have a Future?

Skeptic.com feed - Mon, 02/24/2025 - 8:12am

For as long as I can remember, espionage has fascinated me. Over the years, I’ve developed a certain expertise—at least in the pop culture sense—interviewing former spies for publications ranging from The Washington Post to, well, Playboy. I even once worked as a researcher at an international investigative firm, a job that, regrettably, involved fewer trench coats and shadowy rendezvous than one might hope. But I did walk away with a highly marketable skill: knowing how to conduct a proper background check (one never knows when that might prove useful).

Spies have long been the pillars of Hollywood storytelling, woven into thrilling tales of intrigue and deception. But what is it about them that keeps us so enthralled? I’d argue that our obsession stems from an innate desire to know what is hidden from us. Secrets are power, and in a world increasingly shaped by information, nothing is more seductive than the idea of being the one in the know.

But while James Bond is synonymous with adrenaline-filled action and shaken-not-stirred glamour, real intelligence work is usually mundane and bureaucratic—more painstaking, systematic data gathering and hours of staring at a screen than the dramatic fight sequences the media has conditioned us to associate with spycraft. In other words, making sense of what’s going on is often hard, dull work.

We have never had more access to information, yet somehow, we understand less. The sheer volume of data is overwhelming—no single person can process even a fraction of it—so we outsource the task to algorithms, aggregators, and search engines with their own opaque filtration systems. In theory, social media should expose us to a diversity of perspectives, but in practice, its algorithms ensure we’re served more of what we already believe, cocooning us in ideological comfort.

We like to think of Google, X, Facebook, and even ChatGPT as neutral tools, but neutrality is an illusion. These platforms, intentionally or not, prioritize engagement over accuracy, outrage over nuance, and emotional provocation over intellectual depth. Further, the speed at which information spreads tends to outpace our ability to analyze it critically. Misinformation, half-truths, and emotionally charged narratives circulate rapidly, shaping perceptions before facts can be verified. In this landscape, false stories are 70 percent more likely to be shared than true ones and travel six times faster. Eager to join the conversation as it happens, we jump in before we’ve had time to process the “latest thing.” Our public discourse is shaped not by careful reasoning but by knee-jerk reactions.

Then there’s the growing crisis of trust in media. According to Gallup, Americans’ trust in mass media remains at a record low, with only 31% expressing confidence in its accuracy and fairness in 2024. Trust first dropped to 32% in 2016 and has remained low ever since. For the third year in a row, more Americans (36%) have no trust at all in the media than trust it; another 33% have little confidence. Contrast this with the 72% of Americans who trusted newspapers in 1976, after Watergate.

What is behind this erosion? A cocktail of inaccuracies, overt ideological bias, viewpoint discrimination, the weaponization of fact-checking, and outright censorship has pushed many toward alternatives: independent media, podcasters, influencers, social media, and, naturally, grifters. Yet rejecting legacy media in favor of these alternatives is often a case of leaping from the frying pan into the fire. There’s a common misconception that because something isn’t mainstream, it must be more truthful—but plenty of these new voices are just as ideologically captured, if not more so, with even fewer guardrails against deception and little investment in accuracy. Many embrace them because they mistake ideological alignment for truthfulness. Paradoxically, many have adopted the idea that “we are the media now,” a phrase frequently echoed by Elon Musk and his admirers on X—even as they repost news from the very mainstream outlets they claim are now irrelevant, even “dead.”

We are living in the middle of an information battlefield, where reality itself feels up for debate. What’s legitimate news, and what’s an AI-generated psyop? Who’s a real person, and who’s a bot designed to amplify division? How much of what we read is organic, and how much is algorithmically nudged into our feeds? Add state-sponsored disinformation campaigns to the mix—with countries like Russia, Iran, China, and, yes, even the United States deploying fake news sites, deepfakes, and coordinated social media operations to manipulate global narratives.

In this environment, conspiracy theories thrive. People don’t fall down rabbit holes at random—there are certain preconditions that make them susceptible. Institutional distrust is a major factor, and right now, faith in institutions is in free fall, whether it’s the government, the courts, or the medical establishment. Many people feel betrayed. Add in alienation and social disconnection, and you have the perfect recipe for radicalization. The irony, of course, is that while conspiracy thinking is often framed as a form of skepticism about official narratives, it frequently results in an even greater willingness to believe in something—just not the official story.

Not all people become full-blown conspiracy theorists, of course, but we can see how conspiratorial thinking has taken root. Then again, perhaps we simply notice the phenomenon more because social media exposes us to people we might otherwise never have encountered. What we do know is that people have a high need for certainty and control in uncertain times, which makes them more prone to believing false things once they no longer trust the institutions they once did.

The fragmentation of media consumption means that reaching people with authoritative information has never been more difficult. Everyone is living in a slightly different version of reality, dictated by the platforms they frequent and the sources they trust. And because attention spans have collapsed, many don’t even make it past the headlines before forming an opinion. When everything is engineered to make us feel angry, polarized, scared, and reactionary, how can we stay nuanced, critical, open-minded, and objective? How can we be more truth-seeking in a world where everyone seems to have their own version of the truth on tap?

A recent controversy over a certain billionaire’s hand gesture provided a perfect case study in perception bias. We all saw the same video. To some, Elon Musk’s movement was undeniably a Nazi salute. To others, it was merely an overzealous gesture meant to express “my heart goes out to you.” Few people remained undecided. That two groups could witness the exact same footage and walk away with diametrically opposed conclusions is a testament to how much our prior beliefs shape our perception, and it speaks to the difficulty of uniting people behind a single understanding of reality. Psychologists call this phenomenon “motivated perception”: we often see what we expect to see, rather than what’s actually there.

So in this landscape, what is it that grounds me? It all comes down to a simple question: How much of what I believe is based on evidence, and how much is just my own emotions, assumptions, and attempts to connect the dots? What is it that I really know? Very often in life, we imagine what something might be, rather than seeing it for what it is.

With this new column at Skeptic, my aim is to strip away the noise around the headlines and get to the core of what is verifiable and likely to be true. I have no interest in reinforcing anyone’s preconceived notions—including my own—and the only way to avoid that is through curiosity rather than confirmation. In a world where narratives compete for dominance, my goal is not to add another. It’s easy to be swayed by emotion, to see what we expect rather than what’s in front of us. But the only way forward—the only way to make sense of this fractured information landscape—is to remain committed to facts, no matter where they lead.

My door is open to topics you’d like to see me cover, as well as feedback and thoughts. Comment below, or feel free to reach out anytime: mysteriouskat[at]protonmail.com

Categories: Critical Thinking, Skeptic

How Behavioral Science Lost Its Way and How It Can Recover

Skeptic.com feed - Mon, 02/24/2025 - 8:10am

Over the past decade, behavioral science, particularly psychology, has come under fire from critics for being fixated on progressive political ideology, most notably Diversity, Equity, and Inclusion (DEI). The critics’ evidence is, unfortunately, quite strong. For example, a recent volume, Ideological and Political Bias in Psychology,1 recounts many incidents of scholarly censorship and personal attacks that a decade ago might have been conceivable only as satire.

We believe that many problems plaguing contemporary behavioral science, especially for issues touching upon DEI, can best be understood, at their root, as a failure to adhere to basic scientific principles. In this essay, we will address three fundamental scientific principles: (1) Prioritize Objective Data Over Lived Experience; (2) Measure Well; and (3) Distinguish Appropriately Between Correlation and Causation. We will show how DEI scholarship often violates those principles, and offer suggestions for getting behavioral science back on track. “Getting back to the basics” may not sound exciting but, as athletes, musicians, and other performers have long recognized, reinforcing the fundamentals is often the best way to eliminate bad habits in order to then move forward.

The Failure to Adhere to Basic Scientific Principles
Principle #1: Prioritize Objective Data Over Lived Experience

A foundational assumption of science is that objective truth exists and that humans can discover it.2, 3, 4, 5 We do this most effectively by proposing testable ideas about the world, making systematic observations to test the ideas, and revising our ideas based on those observations. A crucial point is that this process of proposing and testing ideas is open to everyone. A fifth grader in Timbuktu, with the right training and equipment, should be able to take atmospheric observations that are as valuable as those of a Nobel Prize-winning scientist from MIT. If the fifth grader’s observations are discounted, this should only be because their measurement methods were poor, not because of their nationality, gender, age, family name, or any other personal attribute.

A corollary of science being equally open to all is that an individual’s personal experience or “lived experience” carries no inherent weight in claims about objective reality. It is not that lived experience doesn’t have value; indeed, it has tremendous value in that it provides a window into individuals’ perceptions of reality. However, perception can be wildly inaccurate and does not necessarily equate to reality. If that Nobel Prize-winning scientist vehemently disputed global warming because his personal experience was that temperatures have not changed over time, yet he provided no atmospheric measurements or systematic tests of his claim, other scientists would rightly ignore his statements—at least as regards the question of climate change.

The limited utility of a person’s lived experience seems obvious in most scientific disciplines, such as in the study of rocks and wind patterns, but less so in psychology. After all, psychological science involves the study of people—and they think and have feelings about their lived experiences. However, what is the case in other scientific disciplines is also the case in psychological science: lived experience does not provide a foolproof guide to objective reality.

To take an example from the behavioral sciences, consider the Cambridge-Somerville Youth Study.6 At-risk boys were mentored for five years, from the ages of 10 to 15. They participated in a host of programs, including tutoring, sports, and community groups, and were given medical and psychiatric care. Decades later, most of those who participated claimed the program had been helpful. Put differently, their lived experience was that the program had a positive impact on their lives. However, these boys fared no better on important outcomes than a matched group of at-risk boys who were not given mentoring or extra support. In fact, boys in the program ended up more likely to engage in serious street crimes and, on average, they died at a younger age. The critical point is that giving epistemic authority to lived experience would have led to inaccurate conclusions. And the Cambridge-Somerville Youth Study is not an isolated example. There are many programs that people feel are effective but that, when tested systematically, turn out to be ineffective at best. These include DARE,7 school-wide mental health interventions,8 and—of course—many diversity training programs.9

Indeed, when it comes to concerns related to DEI, the scientific tenet of prioritizing testable truth claims over lived experience has often fallen by the wayside. Members of specific identity groups are granted the privilege to speak about things in ways that cannot be contested by those from other groups. In other words, in direct contradiction of the scientific method, some people are granted epistemic authority based solely on their lived experience.10

Consider gender dysphoria. In the past decade, there has been a drastic increase in the number of people, particularly children and adolescents, identifying as transgender. Those who express the desire to biologically transition often describe their lived experience as feeling “born in the wrong body,” and express confidence that transition will dramatically improve their lives. We argue that while these feelings must be acknowledged, they should not be taken as objective truth; instead, such feelings should be weighed against objective data on the life outcomes of others who have considered gender transition and/or transitioned. And those data, while limited, suggest that many individuals who identify as transgender during childhood, but who do not medically transition, eventually identify again with the gender associated with their birth sex.11, 12 Although these are small, imperfect studies, they underscore that medical transition is not always the best option.

Photo by Delia Giandeini / Unsplash

Caution in automatically acceding to a client’s preference to transition is particularly important among minors. Few parents and health care professionals would affirm a severely underweight 13-year-old’s claim that, based on their lived experience, they are fat and will only be happy if they lose weight. Nevertheless, many psychologists and psychiatrists make a similar mistake when they affirm a transgender child’s desire to transition without carefully weighing the risks. In one study, 65 percent of people who had detransitioned reported that their clinician, who often was a psychologist, “did not evaluate whether their desire to transition was secondary to trauma or a mental health condition.”13 The concern, in other words, is that lived experience is being given too much weight. How patients feel is important, but their feelings should be only one factor among many, especially if they are minors. Mental health professionals should know this, and parents should be able to trust them to act accordingly.

Principle #2: Measure Well

Another basic principle of behavioral science is that anything being measured must be measured reliably and validly. Reliability refers to the consistency of measurement; validity refers to whether the instrument is truly measuring what it claims to measure. For example, a triple beam balance is reliable if it yields the same value when repeatedly measuring the same object. The balance is valid if it yields a value of exactly 1 kg when measuring the reference kilogram (i.e., the International Prototype of the Kilogram), a platinum-iridium cylinder housed in a French vault under standardized conditions.

Behavioral scientists’ understanding of any concept is constrained by the degree to which they can measure it consistently and accurately. Thus, to make a claim about a concept, whether about its prevalence in a population or its relation to another concept, scientists must first demonstrate both the reliability and the validity of the measure being used. For some measures of human behavior, such as time spent listening to podcasts or number of steps taken each day, achieving good reliability and validity is reasonably straightforward. Things are generally more challenging for the self-report measures that psychologists often use.

Nevertheless, good measurement can sometimes be achieved, and the study of personality provides a nice model. In psychology, there are several excellent measures of the Big Five personality factors (Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness).14 Individuals’ responses are highly reliable: people who rate themselves as highly extraverted as young adults rate themselves similarly years later. Moreover, personality assessments are valid: individuals’ responses correlate with their actual day-to-day behaviors, as reported by themselves and as observed by others.15 In other words, people who rate themselves as high (versus low) in extraversion on psychological questionnaires really do spend more time socializing.

Credit: Simply Psychology
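
Both properties reduce to correlations that one can actually compute. Here is a minimal sketch in Python—using simulated, purely hypothetical numbers, not data from any study cited here—of how test-retest reliability and criterion validity are typically quantified:

    # A minimal sketch with simulated, hypothetical data: test-retest
    # reliability and criterion validity both reduce to correlations.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical extraversion scores for 200 people, measured twice.
    true_trait = rng.normal(0, 1, 200)
    time1 = true_trait + rng.normal(0, 0.4, 200)  # self-report, time 1
    time2 = true_trait + rng.normal(0, 0.4, 200)  # self-report, years later

    # Hypothetical behavioral criterion: hours spent socializing per week.
    socializing = 5 + 2 * true_trait + rng.normal(0, 1.5, 200)

    reliability = np.corrcoef(time1, time2)[0, 1]     # test-retest r
    validity = np.corrcoef(time1, socializing)[0, 1]  # criterion r
    print(f"test-retest reliability r = {reliability:.2f}")
    print(f"criterion validity r = {validity:.2f}")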

However, not all psychological measures turn out to have solid reliability and validity. These include the popular Myers-Briggs Type Indicator and projective tests such as the Rorschach. Unfortunately, in the quest to support DEI, some concepts that fail the requirements of good measurement are used widely and without reservation. The concept of microaggressions, for example, has gained enormous traction despite having fundamental measurement problems.

“Microaggressions” were brought to psychologists’ attention by Derald Wing Sue and colleagues.16 Originally described as “brief and commonplace daily verbal, behavioral, or environmental indignities, whether intentional or unintentional, that communicate hostile, derogatory, or negative racial slights and insults toward people of color” (p. 271),17 the concept has since expanded in use to describe brief verbal or nonverbal indignities directed toward a different “other.”18, 19

In 2017, Scott Lilienfeld discussed how the failure to adhere to the principles of good measurement has rendered the concept of microaggression “wide open,” without any clear anchors to reality.20 The primary weakness for establishing validity, that is, for establishing evidence of truly measuring what scientists claim to be measuring, is that “microaggression” is defined in the eye of the beholder.21 Thus, any person at any point can say they have been “microaggressed” against, and no one can test, let alone refute, the claim because it is defined solely by the claimant’s subjective appraisal—their lived experience.

As Scott Lilienfeld explained, the end result is that essentially anything, including opposing behaviors (such as calling on a student in class or not calling on a student in class) can be labeled a microaggression. A question such as, “Do you feel like you belong here?” could be perceived as a microaggression by one person but not by someone else; in fact, even the same person can perceive the same comment differently depending on their mood or on who asks the question (which would indicate poor reliability). Our criticism of microaggressions, then, spans concerns related to both weak measurement and an undue reliance on lived experience.

Another of psychology’s most famous recent topics is the Implicit Association Test (IAT), which supposedly reveals implicit, or subconscious, bias. The IAT measures an individual’s reaction times when asked to classify pictures or text spatially. A video22 may be the best way to appreciate what happens in the IAT, but the basic idea is that if a person more quickly pairs pictures of Black people than pictures of White people with negative words (for example, “lazy” or “stupid”), then they have demonstrated unconscious bias against Black people. The IAT was introduced by Anthony Greenwald and colleagues in the 1990s.23 They announced that their newly developed instrument, the race IAT, measured unconscious racial prejudice and that 90 to 95 percent of Americans, including many racial minorities, demonstrated such bias. Since then, these scholars and their collaborators (plus others, such as DEI administrators) have enjoyed tremendous success advancing the claim that the race IAT reveals pervasive unconscious bias that contributes to society-wide discrimination.

Screenshot from Harvard’s Project Implicit Skin Type Test
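
For readers curious about the mechanics, the sketch below shows a drastically simplified version of IAT-style scoring, loosely modeled on the published D-score idea (the difference in mean response latencies divided by a pooled standard deviation). The reaction times are invented, and real scoring involves additional steps such as error penalties and latency trimming:

    # Drastically simplified IAT-style scoring on invented reaction times.
    # Real D-scoring adds error penalties, latency trimming, and block-level
    # bookkeeping; this shows only the core ratio.
    import statistics

    compatible_ms = [612, 655, 580, 701, 644, 590, 633]    # hypothetical
    incompatible_ms = [744, 810, 766, 698, 835, 779, 752]  # hypothetical

    mean_diff = (statistics.mean(incompatible_ms)
                 - statistics.mean(compatible_ms))
    pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
    d_score = mean_diff / pooled_sd

    # A larger score means faster responses on the "compatible" pairings.
    print(f"IAT-style D score: {d_score:.2f}")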

Despite its immense influence, the IAT is a flawed measure. Regarding reliability, the correlation between a person’s responses when taking the test at two different times hovers around 0.5.24 This is well below conventionally acceptable levels in psychology, and far below the test-retest reliabilities of accepted personality and cognitive ability measures, which can reach around 0.8 even when a person retakes the tests decades later.25, 26

As for the IAT’s validity, nobody has convincingly shown that patterns of reaction times actually reflect “unconscious bias” (or “implicit prejudice”) as opposed to cultural stereotypes.27 Moreover, in systematic syntheses of published studies, the association between scores on the race IAT and observations or measurements of real-world biased behavior is inconsistent and weak.28, 29 In other words, scores on the IAT do not meaningfully correlate with other measures of racial bias or with its real-life manifestations.

Principle #3: Distinguish Appropriately Between Correlation and Causation

“Correlation does not equal causation” is another basic principle of behavioral science (indeed, of all science). Although human brains seem built to readily notice and even anticipate causal connections, a valid claim that X has a causal effect on Y needs to meet three criteria, and a correlation between X and Y is only the first. The second criterion is that X precedes Y in time. The third and final criterion is that the link between X and Y is not actually due to some other variable that influences both X and Y (a “confounder”). To test this final point, researchers typically need to show that when X is manipulated in an experiment, Y also changes.

Imagine, for instance, that a researcher asks students about their caffeine intake and sleep schedule, and upon analyzing the data finds that students’ caffeine consumption is negatively correlated with how much they sleep—those who report consuming more caffeine tend to report sleeping less. This is what many psychologists call correlational research (or associational or observational research). These correlational data could mean that caffeine consumption reduces sleep time, but the data could also mean that a lack of sleep causes an increase in caffeine consumption, or that working long hours causes both a decrease in sleep and an increase in caffeine. To make the case that caffeine causes poor sleep, the researcher must impose, by random assignment, different amounts of caffeine on students to determine how sleep is affected by varying doses. That is, the researcher would conduct a true experiment.
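
The confounding problem is easy to demonstrate. In the minimal simulation below (all numbers hypothetical), work hours drive both caffeine intake and sleep, with no direct causal path between the two—yet caffeine and sleep still come out strongly correlated:

    # Simulated confounding: work hours increase caffeine AND decrease sleep.
    # Caffeine has no direct effect on sleep here, yet the two correlate.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1000

    work_hours = rng.normal(8, 2, n)                        # the confounder
    caffeine = 50 + 25 * work_hours + rng.normal(0, 40, n)  # mg per day
    sleep = 11 - 0.5 * work_hours + rng.normal(0, 0.7, n)   # hours per night

    r = np.corrcoef(caffeine, sleep)[0, 1]
    print(f"caffeine-sleep correlation r = {r:.2f}")  # negative, not causal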

Distinguishing between correlation and causation is easier said in the abstract than practiced in reality, even for psychological scientists who are specifically trained to make the distinction.30 Part of the difficulty is that in behavioral science, many variables that are generally thought of as causal cannot be manipulated for ethical or practical reasons. For example, researchers cannot impose neglect (or abuse, corporal punishment, parental divorce, etc.) on some children and not others to study how children are affected by the experience. Still, absent experiments, psychologists bear the responsibility of providing converging, independent lines of evidence that indicate causality before they draw a causal conclusion. Indeed, scientists did this when it came to claiming that smoking causes cancer: they amassed evidence from national datasets with controls, discordant twin designs, correlational studies of exposure to second-hand smoke, non-human experiments, and so on—everything but experiments on humans—before coming to a consensus view that smoking causes cancer in humans. Our point is that investigating causal claims without true experiments is possible, but extremely difficult and time consuming.

That said, the conflation of correlation with causation seems especially prevalent when it comes to DEI issues. In the context of microaggressions, for example, a Google search quickly reveals many scholars claiming that microaggressions cause psychological harm. Lilienfeld has been a rare voice suggesting that it is dangerous to claim that microaggressions cause mental health issues when there are no experimental data to support such a claim. Moreover, there is a confounding variable that predicts both (1) perceiving oneself as having been “microaggressed” against and (2) struggling with one’s mental health—namely, the well-documented personality trait of neuroticism. Individuals who are prone to experiencing negative emotions (those high in neuroticism) perceive more people as trying to harm them than actually do, and these same individuals also struggle with their mental health.

Assuming we were able to develop a workable definition of “microaggressions,” what would a true experiment look like? It would require that participants be exposed to microaggressions (or not) and then be measured or observed for indications of psychological harm. There are valid ethical concerns with such a study, but we believe it can be done. There is a lengthy precedent in psychological research of inflicting temporary discomfort with appropriate safeguards. For instance, in the widely used Trier Social Stress Test (TSST), participants make a speech with little preparation time in front of judges who purposefully avoid any nonverbal reaction, followed by a mental arithmetic task.31 If the TSST is acceptable for use in research, then it should also be acceptable to expose study participants to subtle slights.

This fallacy of equating correlation with causation also arises in the context of gender transitioning and suicide. To make the point that not being able to transition is deeply damaging, transgender individuals, and sometimes their professional supporters, may ask parents something like, “Would you rather have a dead daughter or a living son?” One logical flaw here is assuming that because gender distress is associated with suicidal ideation, the gender distress must be causing the suicidal ideation. However, other psychological concerns, such as depression, anxiety, trauma, eating disorders, ADHD, and autism, could be causing both the gender distress and the suicidal ideation—another case of confounding. Indeed, these disorders occur more frequently in individuals who identify as transgender. Thus, it is quite possible that someone suffers from depression that simultaneously raises their likelihood of identifying as transgender and of expressing suicidal ideation.

Photo by Uday Mittal / Unsplash

It is not possible (nor would it be ethical if it were) to impose gender identity concerns on some children and not others to study the effect of gender dysphoria on suicidality. However, at this point, the correlational research that does exist has not offered compelling evidence that gender dysphoria causes increased suicidality. Studies have rarely attempted to rule out third variables, such as other mental health diagnoses. The few studies that have tried to control for other variables have yielded mixed results.32, 33 Until researchers have consistently isolated gender dysphoria as playing an independent role in suicidality, they should not claim that gender dysphoria increases suicide risk.

Over three decades ago, the psychologist David Lykken wrote, “Psychology isn’t doing very well as a scientific discipline and something seems to be wrong somewhere” (p. 3).34 Sadly, psychology continues to falter; in fact, we think it has gotten worse. The emotional and moral pull of DEI concerns is understandable, but it may have short-circuited critical thinking about the limitations of lived experience, the requirement of using only reliable and valid measurement instruments, and the need to meet strict criteria before claiming that one variable has a causal influence on another.

DEI Concepts Contradict Known Findings About Human Cognition

The empirical bases for some DEI concepts contradict social scientific principles. Additionally, certain DEI ideas run counter to important findings about human nature that scientists have established by following the required scientific principles. We discuss three examples below.

1) Out-Group Antipathy

Humans are tribal by nature. We have a long history of living in stable groups and competing against other groups. Thus, it is no surprise that one of social psychology’s most robust findings is that in-group preferences are powerful and easy to evoke. For example, in studies where psychologists create in-groups and out-groups using arbitrary criteria such as shirt color, adults and children alike show a strong preference for members of their own group.35, 36 Even infants prefer those who are similar to themselves37 and respond preferentially to those who punish dissimilar others.38

DEI, although generally well-intentioned, often overlooks this tribal aspect of our psychology. In particular, in the quest to confront the historical mistreatment of certain identity groups, it often instigates zero-sum thinking (i.e., that one group owes a debt to another; that one group cannot gain unless another loses). This type of thinking will exacerbate, rather than mitigate, animosity. A more fruitful approach would emphasize individual characteristics over group identity, and the common benefits that can arise when all individuals are treated fairly.

2) Expectancies

When people expect to feel a certain way, they are more likely to experience that feeling.39, 40 Thus, when someone, especially an impressionable teenager or young adult, is told that they are a victim, the statement (even if true) is not merely a neutral descriptor. It can also set up the expectation of victimhood, with the downstream consequence of making one feel even more of a victim. DEI microaggression workshops may do exactly this—they prime individuals to perceive hostility and negative intent in ambiguous words and actions.41 The same logic applies to more pronounced forms of bigotry. For instance, when Robin DiAngelo describes a “uniquely anti-black sentiment integral to white identity” (p. 95),42 the suggestion that White people are all anti-Black might exacerbate both actual and perceived racism. Of course, we need to deal honestly with racism where it exists, but it is also important to understand the potential costs of exaggerated claims. Expectancy effects might also interact with the “virtuous victim effect,” wherein individuals perceive victims as more moral than non-victims.43, 44 Thus, there can be social value in simply presenting oneself as a victim.

3) Cognitive Biases

Cognitive biases are one of the most important and well-replicated discoveries of the behavioral sciences. It is therefore troubling that, in the discussion of DEI topics, psychologists often fall victim to those very biases.

Credit: Design by John Manoogian III; Categories & Descriptions by Buster Benson; Implementation by TilmannR (CC BY-SA 4.0, via Wikimedia Commons)

A striking example is the American Psychological Association’s (APA) statement shortly after the death of George Floyd, which provides a textbook illustration of the availability bias—the tendency to overweight evidence that comes easily to mind. The APA, the largest psychological organization in the world, asserted after Floyd’s death that “The deaths of innocent black people targeted specifically because of their race—often by police officers—are both deeply shocking and shockingly routine.”45 How “shockingly routine” are they? According to the Washington Post database of police killings, 248 Black people were killed by police in 2020. By comparison, over 6,500 Black people were killed in traffic fatalities that year—a 26-fold difference.46 Moreover, some portion of those 248 victims were not innocent—given that 216 were armed, some killings would probably have been an appropriate use of force by police to defend themselves or others—and some portion were not killed specifically because of their race. So why would the APA describe a relatively rare event as “shockingly routine”? The statement came in the aftermath of the widely publicized killings of Floyd, Ahmaud Arbery, and Breonna Taylor. In other words, these rare events were likely seen as common because widespread media coverage made them readily available in our minds.

Unfortunately, the APA also recently fell prey to another well-known bias, the base-rate fallacy, in which relevant population sizes are ignored. In this case, the APA described new research finding that “The typical woman was considered to be much more similar to a typical White woman than a typical Black woman.”47 Although not stated explicitly, the implication seems to be that, absent racism, the typical woman would be perceived as roughly midway between the typical White woman and the typical Black woman. Given base rates, that is an illogical conclusion. In the U.S., White people outnumber Black people by roughly 5 to 1; hence the typical woman should indeed be perceived as more similar to a typical White woman than to a typical Black woman.
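
The arithmetic behind that base-rate point is simple enough to check by hand. A back-of-the-envelope sketch (the population ratio is approximate) follows:

    # Back-of-the-envelope base-rate check (population ratio approximate).
    # Place the two group prototypes at 0 and 1 on an arbitrary similarity
    # axis and weight by population share; no bias is needed for the result.
    white_share = 5 / 6  # White women outnumber Black women roughly 5 to 1
    black_share = 1 / 6

    typical_woman = white_share * 0.0 + black_share * 1.0
    print(f"typical woman sits at {typical_woman:.2f} on the axis "
          "(0 = typical White woman, 1 = typical Black woman)")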

What Happened? Some Possible Causes

At this stage, we expect that many readers may be wondering how it can be that social scientists regularly violate basic scientific principles—principles that are so fundamental that these same social scientists routinely teach them in introductory courses. One possible reason is myside bias, wherein individuals process information in a way that favors their own “team.” For example, in the case of the race Implicit Association Test, proponents of the IAT might more heavily scrutinize the methodology of studies that yield negative results compared to those that have yielded their desired results. Similarly, although lived experience is a limited kind of evidence, it certainly is a source of evidence, and thus scholars may elevate its importance and overlook its limitations when doing so bolsters their personal views.

A related challenge facing behavioral scientists is that cognitive biases are universal and ubiquitous—everyone, including professional scientists, is susceptible.48 In fact, one might say that the scientific method, including the three principles we emphasize here, is an algorithm (i.e., a set of rules and processes) designed to overcome our eternally pervasive cognitive biases.

A third challenge confronting behavioral scientists is the current state of the broader scientific community. Scientific inquiry works best when practiced in a community adhering to a suite of norms, including organized skepticism, that incentivize individuals to call out each other’s poor practices.49, 50 In other words, in a healthy scientific community, if a claim becomes widely adopted without sufficient evidence, or if a basic principle is neglected, a maverick scientist would be rewarded for sounding the alarm by gaining respect and opportunities. Unfortunately, the scientific community does not act this way with respect to DEI issues, perhaps because the issues touch widely held personal values (e.g., about equality between different groups of people). If different scientists held different values, there would probably be more healthy skepticism of DEI topics. However, there is little ideological diversity within the academy. In fields such as psychology, liberal-leaning scholars outnumber conservative-leaning scholars by at least 8 to 1, and in some disciplines the ratio is 20 to 1 or more.51, 52 A further concern is that these values are more than just personal views. They often function as sacred values—non-negotiable principles that cannot be compromised and can be questioned only at risk to one’s status within the community.

From this perspective,53 it is easy to see how those who question DEI may well face moral outrage, even if (or perhaps especially if) their criticisms are well-founded. That this outrage sometimes translates into public cancellations is extremely disheartening. Yet there are likely even more de facto cancellations than is apparent, because someone can be cancelled directly or indirectly. Indirect cancellations can take the form of contract nonrenewal, pressure to resign, or an employer digging for another offense to use as the stated grounds for forcing someone out of a job. This last strategy is a subtle, yet no less insidious, method of cancellation. As an analogy, it is like a police officer following a car with an out-of-state license plate and pulling it over when the driver fails to use a turn signal. An offense was committed, but it was observed only because the officer was looking for a reason to make the stop and therefore artificially extended the window in which the driver was being scrutinized. The stated reason for the stop is the failure to signal; the real reason is that the driver is from out of town. Whether direct or indirect, the key to a cancellation is that keeping one’s job becomes untenable after failing to toe the party line on DEI topics.

It is against this backdrop that DEI scholarship is conducted. Academics fear punishment (often subtle) for challenging DEI research. Ideas that cannot be freely challenged are unfalsifiable, and such ideas are likely to gain popularity as the marketplace of ideas becomes a monopoly of a single idea. An illusory consensus can emerge about a complex area on which reasonable, informed, and qualified individuals hold widely differing views. An echo chamber created by forced consensus is a breeding ground for bad science.

How to Get Behavioral Science Back on Track

We are not the first to express concern about the quality of science in our discipline.54, 55 However, to our knowledge, we are the first to discuss how DEI over-reach goes hand-in-hand with the failure to do good science. None of this means the problem can’t be fixed. We offer a few suggestions for improvement.

First, disagreement should be normalized. Advisors should model disagreement by presenting an idea and explicitly asking their lab members to talk about its weaknesses. We need to develop a culture where challenging others’ ideas is viewed as an integral (and even enjoyable) part of the scientific process, and not an ad hominem attack.

Second, truth seeking must be re-established as the fundamental goal of behavioral science. Unfortunately, many academics in behavioral science now seem more interested in advocacy than in science. Of course, as a general principle, faculty and students should not be restricted from engaging in advocacy. However, that advocacy should not mingle with their academic work; it must occur on their own time. The tension between advocacy and truth seeking is that advocates, by definition, hold an a priori position and are tasked with convincing others to accept and then act upon it. Truth seekers must be open to changing their opinion whenever new evidence or better analyses demand it.

To that end, we need to resurrect guardrails that hold students accountable for demonstrating mastery of important scientific concepts, including those described above, before receiving a PhD. Enforcing high standards may sound obvious, but actually failing students who do not meet those standards is an exclusionary practice that might be met with resistance.

Another intriguing solution is to conduct “adversarial collaborations,” wherein scholars who disagree work together on a joint project.56 Adversarial collaborators explicitly spell out their competing hypotheses and together develop a method for answering a particular question, including the measures and planned analyses. Stephen Ceci, Shulamit Kahn, and Wendy Williams,57 for example, engaged in an adversarial collaboration that synthesized evidence regarding gender bias in six areas of academic science, including hiring, grant funding, and teacher ratings. They found evidence for gender bias in some areas but not others, a finding that should prove valuable in decisions about where to allocate resources.

Illustration by Izhar Cohen

In conclusion, we suggest that DEI over-reach in behavioral science is intimately related to a failure within the scientific community to adhere to basic principles of science and appreciate important findings from the behavioral science literature. The best path forward is to get back to the basics: understand the serious limitations of lived experience, focus on quality measurement, and be mindful of the distinction between correlation and causation. We need to remember that the goal of science is to discover truth. This requires putting ideology and advocacy aside while in the lab or classroom. Constructive disagreement about ideas should be encouraged rather than leveraged as an excuse to silence those who may see the world differently. The scientific method requires us to stay humble and accept that we just might be wrong. That principle applies to all scientists, including the three authors of this article. To that end, readers who disagree with any of our points should let us know! Maybe we can sort out our differences—and find common ground—through an adversarial collaboration.

The views presented in this article are solely those of the authors. They do not represent the views of any author’s employer or affiliation.

Categories: Critical Thinking, Skeptic

Konstantin Kisin: “The tide is turning”

Why Evolution is True Feed - Mon, 02/24/2025 - 8:00am

Trigger(nometry) warning: semi-conservative video.

I can’t remember who recommended I watch this video, which features satirist, author, and Triggernometry co-host Konstantin Kisin speaking for 15 minutes at a meeting of the Alliance for Responsible Citizenship (ARC). The group is described by Wikipedia as “an international organisation whose aim is to unite conservative voices and propose policy based on traditional Western values.”

The talk is laced with humor, but the message is serious: Kisin argues that societies based on “Western values” are the most attractive, as shown by the number of would-be immigrants, but that they are endangered by the negativity and “lies” of those who tell us that “our history is all bad and our country is plagued by prejudice and intolerance.” To that he replies that people espousing such sentiments still prefer to live in the West. (But of course that doesn’t mean these factors aren’t still at play in the West!) Kisin then touts both Elon Musk (for “building big things”) and (oy) Jordan Peterson for reminding us that our lives will improve if we accept that “honesty is better than lies, that responsibility is better than blame, and strength is better than weakness.”

He continues characterizing the West as special: “the most free and prosperous societies in the history of humanity, and we are going to keep them that way.” To accomplish that, he promotes free speech as the highest of Western values, and rejects identity politics, arguing that “multiethnic societies can work; multicultural societies cannot.” Finally, he claims that human beings are good, denying (as he avers) the woke view that “human beings are a pestilence on the planet.”  Kisin calls for more reproduction and making energy “as cheap and abundant as possible.”

The talk finishes with the most inspiring thing Kisin says he’s ever heard: that we’re going to die; ergo, we have nothing to lose. “We might as well speak the truth, we might as well reach for the stars, we might as well fight like our lives depended on it—because they do.” I’m not exactly sure what he means, nor do I feel uplifted or inspired by these words, which don’t really tell us why he thinks the tide is turning. And, at the end, I could see where this optimistic word salad came from—it’s in Wikipedia, too:

[The ARC] is associated with psychologist and political commentator Jordan Peterson. One Australian journalist identified the purpose of ARC as follows: “to replace a sense of division and drift within conservatism, and Western society at large, with a renewed cohesion and purpose”.

Do any readers get inspired by this kind of chest-pounding?  I have to add that I do like Triggernometry, one of the few podcasts I can listen to, but I’m not especially energized by the co-host’s speech.

Categories: Science

Inside the new therapies promising to finally beat autoimmune disease

New Scientist Feed - Mon, 02/24/2025 - 8:00am
Type-1 diabetes, IBD, multiple sclerosis, rheumatoid arthritis, coeliac disease and lupus are all caused by the body attacking itself. But new therapies that reset the immune system could offer lasting help
Categories: Science

The New Skeptic: Welcome

Skeptic.com feed - Mon, 02/24/2025 - 7:37am

Consider the word “skeptic.” What image does it evoke? A cynic? Someone who doubts everything? Many conflate skepticism with pure doubt, but true skepticism is far richer. It is thoughtful inquiry and open-minded analysis. Its essence is captured by Spinoza’s timeless dictum, “not to ridicule, not to bewail, not to scorn human actions, but to understand them.”

This commitment to understanding different viewpoints is at the heart of meaningful discourse—yet, in practice, it is often abandoned. Public debates—on issues from abortion and climate change to the principle of free speech—tend to degenerate into a melee of name-calling and outrage. Genuine skepticism, however, demands thoughtful engagement. It insists that we immerse ourselves in diverse perspectives, striving to understand them thoroughly before reaching reasoned conclusions. It calls for intellectual honesty—the willingness to consider opposing arguments without succumbing to anger or mockery, even when the evidence seems overwhelmingly in favor of one side.

Indeed, one of the greatest obstacles to an accurate understanding of reality isn’t just having your facts wrong—it’s the human tendency to moralize our biases. We rarely think of ourselves as extremists or ideologues. Instead, we often embrace belief systems that validate our most destructive impulses, fooling ourselves into thinking we’re champions of justice. Some of the most toxic voices in our society are utterly convinced that they stand on the right side of history. Skepticism requires the courage and conscious effort to step outside that mindset.

If this resonates with you, then welcome—you’ve found your intellectual home.

And there’s more: even if we adopt Spinoza’s dictum, our understanding of the world will be lacking if the very foundation of our knowledge—the basis on which we (and others) make decisions—doesn’t accurately reflect the world. This is where critical thinking and the tools of science come in.

Skeptic magazine: A Commitment to Depth and Balance

Skeptic is a leading popular science magazine that explores the biggest questions in science, technology, society, and culture with a relentless commitment to truth. We don’t push an agenda—we follow the evidence. Every article, before it’s published, is put to the test: Is this really accurate? How could it be wrong? Do the cited sources support the claims being made? Our mission is clear—to cut through misinformation and dogma, delivering sharp, evidence-based analysis grounded in reality.

So, what can you expect to find in our pages? Long-form, analytical pieces that explore complex issues in depth constitute the vast majority of our work. At times, we may also feature op-eds—particularly when they emerge from rigorous research and present an intriguing, contrarian perspective. And when an issue carries significant weight, we may publish “the best case for...” articles—usually pairing them with an equally strong piece presenting the counterargument(s). As an example, our recent coverage of abortion after the Roe v. Wade ruling featured my own explanation of the pro-choice position, Danielle D’Souza Gill’s robust argument against abortion (quoting none other than Christopher Hitchens!), and a comprehensive Skeptic Research Center report analyzing public attitudes toward the issue (it turns out most people don’t understand the effects of overturning Roe).

In today’s age of activism, this balanced approach might seem unassertive. But the truth is, absolute certainties are rare. We can only approximate truth, and what constitutes “truth” varies by domain—science, politics, law, journalism, and ethics all demand different methods of reasoning. Put simply, our mission is to present what is known about the world as rigorously as possible.

You, the reader, decide where you stand.

So, are you simply believers in “The Science”?

Lately, the rallying cry “Trust the Science” has become a viral meme—a slogan now often deployed sarcastically to mock the limits of science’s ability to solve complex problems. Yet a closer look reveals two deeper issues. First, some public science communicators have overstated consensus and stepped into the policy arena—territory traditionally reserved for politicians and activists. Second, many academic institutions have allowed ideology to seep into their departments, undermining strict adherence to the scientific method and neutral, dispassionate inquiry. This contamination isn’t confined to academia; even reputable popular science outlets have been affected. The net result is a degradation of trust in deep expertise and in the scientific approach, allowing less rigorous voices to gain prominence.

In truth, trusting science—meaning evidence gathered through systematic, methodical inquiry—remains our best tool for uncovering reality. But conflating this with blind faith in public science figures is a category error. Science is not a priesthood, and consensus is not dogma.

Likewise, flawed research built on unfalsifiable assumptions can only be dismantled through relentless skepticism and the unwavering application of the scientific method; within its fallibility lies the greatest strength of science: self-correction.

Whether mistakes are made honestly or dishonestly, whether a fraud is knowingly or unknowingly perpetrated, in time it will be flushed out of the system through the lack of external verification. The cold fusion fiasco is a classic example of the system’s swift consequences for error and hasty publication. The purported link between vaccines and autism, though thoroughly debunked, still persists in some circles—an indication that reason, like freedom, requires eternal vigilance.

Despite these built-in mechanisms, science is still subject to a number of problems and fallacies that can trip up even the most careful scientists and skeptics. We can, however, find inspiration in those who have overcome them to make monumental contributions to our understanding of the world.

Charles Darwin is a sterling example of a scientist who struck the right balance between acceptance of, and devotion to, the status quo and an open willingness to explore and accept new ideas. This delicate balance forms the basis of the whole concept of paradigm shifts in the history of science.

Photo by Hulki Okan Tabak / Unsplash

The Next Chapter: A New Era for Skeptic

It’s been over 30 years since Pat Linse and I founded Skeptic magazine and the Skeptics Society, our 501(c)(3) nonprofit dedicated to science research and education—a modest beginning in my garage that has since blossomed into one of the world’s most influential popular science publications.

Along the way, we’ve had the honor of collaborating with some of the greatest thinkers of our time, including our current Editorial Board members Jared Diamond, Steven Pinker, and Richard Dawkins, among many others.

While Pat’s passing left an irreplaceable void, our team has doubled down on the mission to promote an evidence-based understanding of the world.

A glimpse into the past: Pat, Randi, Tanja, me, and our old offices—lost in the Los Angeles fires of January 2025 but not forgotten.

Today, I’m proud to announce that our esteemed Editorial Board is joined by three fresh voices—April Bleske-Rechek, Robert Maranto, and Catherine Salmon—whose incisive articles grace our recent issues and this new website.

We’re also delighted to have contributing editor Katherine Brodsky join us, launching her regular column, Culture Code.

Finally, we’re excited to reintroduce the Skeptic Research Center—led by social scientists Kevin McCaffree and Anondah Saide—now with its own dedicated site, where we’ll continue our mission of data-driven inquiry into some of today’s biggest issues.

In celebration of our rich 30+ year history, we’ll also be republishing some of our most timeless articles—works that remain as relevant and thought-provoking now as when they were first written—and publishing many, many more brand-new articles, podcasts, research reports, and even documentary films.

Welcome to the new Skeptic! Let’s explore reality together.

Categories: Critical Thinking, Skeptic

Intuitive Machines' lunar lander Athena set to blast off to the moon

New Scientist Feed - Mon, 02/24/2025 - 6:59am
A SpaceX Falcon 9 rocket is about to launch a number of missions, including a private lunar lander, a lunar satellite for NASA and a prospecting probe for an asteroid-mining company
Categories: Science

Still collecting signatures on the tri-societies letter

Why Evolution is True Feed - Mon, 02/24/2025 - 6:48am

If you’re following this site, you’ll know that 22 biologists (including me) sent a letter to three ecology and evolution societies that had issued a statement, directed at the President and Congress, asserting that biological sex is a spectrum and a continuum in all species. The statement claimed without support that it expressed a consensus view of biologists, although the members of the societies were not polled.

Of course this behavior could not stand, and so Luana Maroja cobbled together a letter to those societies noting that the biological definition of sex was based on the development of the apparatus evolved to produce gametes, and that this showed that all animals and plants had only two sexes: male and female. As Richard Dawkins pointed out, even the three Society Presidents used the sex binary in their own biological work.

The letter has now accumulated more than a hundred signatures. If you are an anisogamite and want to sign the letter, this is a reminder that the deadline for signatures is about a week away: 5 p.m. Monday, March 3. You can sign it this way (from Luana’s post on Heterodox STEM):

The Society for the Study of Evolution (SSE), the American Society of Naturalists (ASN), and the Society for Systematic Biologists (SSB) issued a declaration addressed to President Trump and all the members of Congress (declaration also archived here), proffering a confusing definition of sex, implying that sex is not binary.

We wrote a short letter explaining that sex is indeed defined by gamete type.

We are now collecting more signatures from biologists who agree to have their name publicly posted. If you are a biologist (or in a field related to biology) and want to add your name, just fill in the bottom of this form (it contains the full text of our letter and a link to the tri-societies’ letter).

Please fill in all the blanks, including your name, position, and email, and we ask that you have something to do with biology. Also, we will most likely post the letter with names, so if you want to remain publicly anonymous but agree with our sentiments, just write your own personal email to the Society presidents (two of them have emails in the original letter). Nobody’s email will become public if I decide to post the final letter and signers on this site.

It takes about one minute to fill in the form, so if you want to send a message to these three societies, you know what to do.

Categories: Science

No readers’ wildlife today

Why Evolution is True Feed - Mon, 02/24/2025 - 6:15am

We have contributions from two people, but I am holding onto those, as it appears that this feature will become sporadic in the future. That’s sad, no?

Categories: Science

Although it Lacks a Magnetic Field, Venus Can Still Protect Life Within its Atmosphere

Universe Today Feed - Mon, 02/24/2025 - 6:10am

Venus differs from Earth in many ways, including the lack of an internal dynamo driving a global magnetosphere to shield potential life from solar and cosmic radiation. However, Venus possesses a dense atmosphere, and in a recent study, planetary scientists conducted simulations of the Venusian atmosphere to determine how much radiation penetrates to the lower cloud layers. Their calculations revealed that the atmosphere’s thickness provides adequate protection for life at what’s considered Venus’s “habitable zone,” located 40–60 km above the surface.

Venus, the second planet from the Sun, is often called Earth’s “sister planet” because of its comparable size and composition. Yet its environment couldn’t be more different or extreme. Its thick carbon dioxide atmosphere, topped with sulfuric acid clouds, has produced a runaway greenhouse effect, making Venus the solar system’s hottest planet, with surface temperatures in excess of 475°C. The Venusian landscape features volcanic plains, mountains, and canyons under an atmospheric pressure exceeding 90 times Earth’s. Despite these inhospitable conditions, Venus remains an object of scientific interest, with researchers studying its geology and atmosphere.


In 2020, scientists found phosphine in Venus’s atmosphere, a gas that, on Earth, is mostly produced by biological processes, that is, by living things. This discovery was somewhat unexpected and prompted a fresh look at Venus as a possible home for life. Surprisingly perhaps, Venus does have a “habitable zone” in its clouds about 40–60 km up, where the temperature and pressure aren’t too different from Earth’s. While the planet’s surface is totally uninhabitable, the region high in the atmosphere might actually support some kind of microbial life adapted to acidic conditions. A new piece of research has explored whether the thick Venusian atmosphere would protect any such life that may have evolved, or whether intense radiation bathes its habitable zone.

The spectral data from SOFIA, overlain atop this image of Venus from NASA’s Mariner 10 spacecraft, shows the intensity of light from Venus at different wavelengths. If a significant amount of phosphine were present in Venus’s atmosphere, there would be dips in the graph at the four locations labeled “PH3,” similar to but less pronounced than those seen at the two ends. Credit: Venus: NASA/JPL-Caltech; Spectra: Cordiner et al.

The research, led by Luis A. Anchordoqui of the City University of New York, has revealed surprising results. The team discovered that despite Venus lacking a magnetic field and orbiting closer to the Sun, the radiation levels in its potentially habitable cloud layer are remarkably similar to those at Earth’s surface. Using the AIRES simulation package (AIRshower Extended Simulations), which models the cascades of secondary particles produced when high-energy radiation strikes an atmosphere, the team generated over a billion simulated cosmic ray showers to analyse particle interactions within Venus’s atmosphere.

Their findings show that at equivalent atmospheric depths, particle fluxes on Venus and Earth are nearly identical, with only about 40% higher radiation detected at the uppermost boundary of Venus’s habitable zone. This suggests Venus’s thick atmosphere provides substantial radiation shielding that might be sufficient for potential microbial life.
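To make “equivalent atmospheric depths” concrete, here is a minimal back-of-the-envelope sketch (my illustration, not from the study): in hydrostatic equilibrium the overhead column mass is roughly pressure divided by gravity, and the pressure and gravity values below are rough textbook figures.

```python
# Back-of-the-envelope sketch (not from the study): "atmospheric depth"
# here means the overhead column mass, which in hydrostatic equilibrium
# is roughly pressure / gravity. Inputs are rough textbook values.

def column_depth_g_cm2(pressure_pa: float, gravity_m_s2: float) -> float:
    """Overhead column mass in g/cm^2 (1 kg/m^2 = 0.1 g/cm^2)."""
    return (pressure_pa / gravity_m_s2) * 0.1

# Earth at sea level: P ~ 101,300 Pa, g ~ 9.81 m/s^2
earth = column_depth_g_cm2(101_300, 9.81)   # ~1030 g/cm^2

# Venus at ~50 km altitude (middle of the "habitable zone"): P ~ 1 atm, g ~ 8.87 m/s^2
venus = column_depth_g_cm2(101_300, 8.87)   # ~1140 g/cm^2

print(f"Earth, sea level: ~{earth:.0f} g/cm^2")
print(f"Venus, 50 km up:  ~{venus:.0f} g/cm^2")
```

Comparable overhead column masses are why the cloud layer can see roughly Earth-surface radiation doses despite the missing magnetosphere.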

The research suggests that cosmic radiation wouldn’t significantly hinder life in Venus’s cloud layer: any potential microorganisms there would face radiation levels similar to those on Earth’s surface. On Earth, life has found a way across a wide range of environments spanning many kilometres of depth and altitude, a span known as its life reservoir. Venus has no such reservoir, so if radiation were ever to sterilise the habitable clouds, there is no equivalent of Earth’s subsurface biosphere that could eventually recolonise the region. This means life would need to persist continuously in its atmospheric habitat, without being able to retreat to other parts of the planet.

Source : The Venusian Chronicles

The post Although it Lacks a Magnetic Field, Venus Can Still Protect Life Within its Atmosphere appeared first on Universe Today.

Categories: Science

Black Lives Matter vs. Black Lives Saved: The Urgent Need for Better Policing

Skeptic.com feed - Mon, 02/24/2025 - 6:06am

To paraphrase Shakespeare’s Romeo and Juliet, Black Lives Matter activists and police unions are two houses both alike in indignity. Neither truly wants to improve policing in the most necessary ways: the former because it could undermine their view of the world and reduce revenue streams, including billions in donations; the latter for a more mundane reason. Cops, like other street-level bureaucrats, don’t want to change their standard operating procedures and face accountability for screwups. Unfortunately, we don’t see the political incentives for reform changing any time soon, despite some recent local-level successes: Black Lives Matter groups receive billions in donations and help increase progressive turnout; media and academia fail to provide accurate information to voters; and police unions enjoy iconic status among conservatives, when they are better viewed as armed but equally inefficient teachers’ unions.

Injustice—How Progressives (and Some Conservatives) Got Us Into This Mess

Professors and other respectables rail against “deplorables,”1 but missing in political discourse is that mass rule, AKA populism, is not a mass pathological delusion. Rather, it arises for solid economic and social reasons. When problems that affect regular citizens get ignored by their leaders, people in democratic systems can get revenge at the ballot box. From inflation and foreign policy debacles, to COVID-19 school shutdowns that went on far longer in the U.S. than in Europe at immense and immensely unequal social cost,2 ordinary people sense that the wealthy, bureaucrats, professionals, and professors often advance their own interests and fetishes at the expense of regular folks, and then use mainstream “knowledge producing” institutions, particularly academia and the mainstream news media, to cover up their failings.

Indeed, as Newsweek’s Batya Ungar-Sargon shows in her brilliant book Bad News: How Woke Media Is Undermining Democracy, the mainstream media now stand forthrightly behind the plutocrats. This can be documented empirically: the Center for Public Integrity points out that, during the 2016 presidential race, identified mainstream media journalists made 96 percent of their financial donations to one political party (the Democrats) and to the more mainstream of the candidates running.3 That basic instinct to hold the respectables accountable for their failings may have been the only thing keeping the Trump 2024 presidential candidacy viable despite his many and well-documented failings and debate loss against Kamala Harris.4

Perhaps nowhere is popular anger more justifiable than regarding crime, a trend best captured in the saga of the Black Lives Matter (BLM) movement. The roots of that failure go deep, and implicate multiple sacred cows in contemporary elite politics. As Anglo-Canadian political scientist Eric P. Kaufmann writes in his landmark work The Third Awokening,5 critical theory and other postmodern ideologies (AKA woke) have been evolving for over a century. To his credit, and unlike most conservatives, Kaufmann does not paint wokeism as entirely wrong—like populism, it too came about as a result of grievances experienced by the wider society. Rather, he describes it as needing moderating influences because, as with all other ideologies, it is not entirely (or in this case even mainly) correct. This is all the more so since so many among the woke, who are vastly overrepresented in the political class, lack experience with people from different walks of life. Their insulation, which Democratic commentator and political consultant James Carville—who coined the phrase “it’s the economy, stupid” that was key to then-Governor Bill Clinton’s 1992 victory over President George H.W. Bush—derides as “faculty lounge politics,”6 promotes fanaticism, declaring formerly extreme ideas not merely contestable or even mainstream, but off limits to criticism.

The nonnegotiable assumptions of late-stage woke include reflexively disparaging the achievements of Western civilization, while anointing non-Western or traditionally marginalized peoples and ideas as sacred. This deep script makes those (particularly wealthy Whites)7 with advanced degrees susceptible to believing the worst about White police officers, leaving influential segments of the political class subject to exploitation by grifters, with disastrous results. As one of us shows, many Americans believe that police pose a near genocidal threat to Black people, when in fact in a typical year fewer than 20 unarmed Black people (some of whom were attacking the police) are killed by nearly a half million White police officers, a lot lower than one would expect given that the Black crime rate is more than double that of other cohorts.8 Likewise, The 1619 Project creator Nikole Hannah-Jones and many other activists claim that police departments evolved from racist slave-catching patrols, which is simply not true.9

The problem is that the Pulitzer Prize-winning Hannah-Jones and many other scholars and activists have an interest in maintaining the assertion that police are a threat to Black people, employing shocking visual images and taking advantage of widespread ignorance to make the case. The PBS News Hour, like other media outlets, has constantly highlighted the very rare instances in which White police officers actually do kill unarmed Black people, without ever placing these events in the context of overall statistical evidence, which demonstrates that they are incredibly rare, and without giving comparable treatment to the far more numerous White casualties of police.10

Since the Black Lives era began, fatal ambushes of police officers have risen dramatically, almost certainly due to demonizing of the police.

Academia is an even greater offender. At the opening plenary of the 2021 American Educational Research Association annual meeting, AERA President Shaun Harper spent most of his hour-long session lambasting police as a threat to Black people. Harper is a master at securing grants and climbing the hierarchy to run academic associations. Yet his views on cops are out of sync with both reality and with the views of Black voters, who have consistently refused to support defunding police, and whose opinions on criminal justice generally resemble those of Whites and Hispanics.11, 12

Effective, accountable policing can save lives, especially in Black communities. Reform, rather than de-policing, is crucial.

Harper’s views do, however, reflect the Critical Race Theory (CRT) approaches preferred by professors studying race, both in education and in the social sciences more broadly—24 of the 25 most cited works with Black Lives Matter in their titles do not involve research that would save Black lives in any conceivable time frame. The 19th most cited article does empirically study (and suggest better) police procedures, making a case for having police document their actions in writing not just every time they fire their guns, but every time they unholster them. This mere reform, likely forcing cops to think an extra second before acting, reduces police shootings of civilians without increasing casualties among officers.13 In sharp contrast, however, other highly cited “scholarly” articles on Black Lives Matter:

… explore social media use and activism (4, including one piece involving Ben and Jerry’s ice cream and BLM), racial activism and white attitudes (3), immigration and migrants (2), anti-Blackness in higher education, “democratic repair,” radically re-imagining law, anti-Blackness of global capital, urban geography, counseling psychology, research on K–12 schools, BLM and “technoscientific expertise amid solar transitions,” BLM and “evidence based outrage in obstetrics and gynecology,” and BLM and differential mortality in the electorate.14

It is probably worth repeating here that at least one article, written by senior academics at respected institutions, looks specifically at the influence of the Black Lives Matter movement on the naming of popular ice cream flavors at Ben and Jerry’s. These “studies” get professors tenure, grants, and notoriety, but will not save Black (or any) lives in any conceivable time frame.

Sometimes academia allies with progressive politicians. As Harvard University-affiliated Democratic pollster John Della Volpe boasted at a recent political science conference,15 Black Lives Matter offers dramatic symbols that can measurably increase progressive voter turnout. Left unsaid was that the dominant BLM narrative both misleads voters and gets Black people killed—or that questioning it can be risky. This tension likely explains why, after careful, peer-reviewed empirical research by economist Roland Fryer found that, controlling for suspect behavior, police do not disproportionately kill Black people (White suspects were in fact 27 percent more likely to be shot), Claudine Gay, then a senior Harvard administrator and later its president, tried to fire Fryer.

She accused the tenured professor, an African-American academic star, of the use of inappropriate language, an offense for which Harvard’s own policies dictated sensitivity training. Fryer’s published findings were likely seen as attacking “sacred” beliefs and threatening external grants received on the premise of overwhelming police racism.16 As renegade journalist Batya Ungar-Sargon shows, the same dynamic holds in newsrooms, where reporting on Black Lives Matter’s spectacular failures to save Black (and other) lives is a firing offense.17 Indeed, were we not tenured professors at public universities in the South, we could likely get in trouble for writing essays like this one.

So what if progressives use anti-police demagoguery to win a few elections and grants? Isn’t that just election campaign “gamesmanship?” Does that hurt anyone? Yes, it does. Since the Black Lives era began, fatal ambushes of police officers have risen dramatically, almost certainly due to demonizing of the police. More importantly, Black Lives Matter de-policing policies seem to have taken thousands of (mainly Black) lives.18 During the BLM era, dated here as beginning in 2012, the age-adjusted Black homicide rate has almost doubled, rising from 18.6 murders per 100,000 African-American citizens in 2011 to 32 murders per 100,000 in 2021.19 Murders of Black males rose to an astonishing peak of 56/100,000 during this period (in 2021), while Black women (9.0/100,000) came to “boast” a higher homicide rate than White men (6.4) and all American men (8.2).

Yet for all our lambasting of Black Lives Matter, police unions and leaders have not covered themselves in glory in the BLM era, largely supporting precinct-level decisions to de-police the dangerous parts (“no-go” or “slow-go” zones) of major cities, and refusing to support reforms that do cut crime but discomfort cops. Astonishingly, high homicide rates have little or no impact on whether police commissioners keep their jobs, giving cops few incentives to do better rather than just well enough.20

On the positive side, the political system is starting to respond to public anger from the increased crime and disorder of the Black Lives Matter era. In its presidential transition, the Biden administration largely sidelined the BLM portions of its racial reckoning agenda—even as it poured money into counterproductive and arguably racist DEI initiatives.21 More impactful responses came at the level of major city governments, which are those most affected by crime and disorder. Across progressive cities such as Seattle, Portland, and New York and less progressive cities like Philadelphia and Dallas, voters have started distancing themselves from Black Lives Matter policies. For the first time in decades, Seattle elected a Republican prosecutor (supported by most Democratic leaders). Uber-left Portland elected a prosecutor who was a Republican until recently. The Dallas mayor switched parties (from Democratic to Republican) out of frustration with progressive opposition to his (successful) efforts to cut crime by hiring and empowering more cops. New York elected a tough on crime (Democratic) former police captain to replace the prior progressive mayor. Even uber-progressives like Minnesota Governor and 2024 Democratic VP candidate Tim Walz did U-turns on issues such as whether police belong in schools, and what they can do while there.

Yet cops can do far more, and the Big Apple has shown the way. How that happened suggests that color matters, but the color is not Black so much as green.

New York City’s Turnaround: How a White Tourist’s Murder Made Black Lives Truly Matter

Sometimes history is shaped by unexpected (and undesirable) events that have positive impacts. A case in point is Brian Watkins, the 22-year-old White tourist from Provo, Utah, who was brutally murdered in front of his family on Labor Day Weekend in 1990 in NYC, while in town to watch the U.S. Open tennis tournament. His murder had historic impacts on New York, ultimately saving thousands of (mainly) Black lives, but it did not have the same impact nationally, a fact that says volumes about whose lives matter and why.

In 1990, New York City was among the most dangerous cities in the country. Today, as we show in our article “Which Police Departments Make Black Lives Matter?”22 despite high poverty, New York has the sixth lowest homicide rate among the 50 largest cities. That might not have happened without the brutal murder of Brian Watkins. As City Limits detailed in a 20-year retrospective23 on the Watkins killing, in 1990 New York City resembled the dystopian movie Escape from New York, with a record 2,245 homicides, including 75 murders of children under 16 and 35 killings of cab drivers, forced to risk their lives daily for their livelihoods. For their part, police, who found themselves outnumbered and sometimes outgunned, killed 41 civilians, around four times more than today.

The city that never sleeps was awash in blood, but NYC residents did not bleed equitably. Mainly, in what would turn out to be a common pattern, low-income minorities killed other low-income minorities in underpoliced neighborhoods. To use the first person for a bit, as I (Reilly) note in my 2020 book Taboo,24 and Rafael Mangual points out in his Criminal Injustice (2022),25 felony crime such as murder is remarkably concentrated by income and race. In my hometown of Chicago, the 10 relatively small community areas with the highest murder rates contain 53 percent of all recorded homicides in the city and have a combined murder rate of 61.7/100,000, versus 18.2/100,000 for the city as a whole—a figure that itself includes those districts. In the even larger New York City, few wealthy businesspeople or tourists were affected by the most serious crime even during its horrendous peak.

Against that backdrop, after spending the day watching the U.S. Open, the Watkins family left their upscale hotel to enjoy Moroccan food in Greenwich Village. While waiting on a subway platform, they were assaulted by a “wolfpack” scouting for mugging victims so they could steal enough money to pay the $10-per-man cover charge at a nightclub.

In those bad old days, many young New Yorkers committed an occasional mugging to supplement their incomes, but this attack was unusually violent. In a matter of seconds, Brian Watkins’ brother and sister-in-law were roughed up while his father was knocked to the ground and slashed with a box-cutter, cutting his wallet out of his pocket. Brian’s mother was pulled down by her hair and kicked in the face and chest. While trying to protect her, Brian was fatally stabbed in the chest with a spring-handled butterfly knife. Not realizing the extent of his injury—a severed pulmonary artery—Brian chased the thieves until collapsing by a toll booth, dying shortly thereafter.

In 1990, New York City was among the most dangerous cities in the country. Today... despite high poverty, New York has the sixth lowest homicide rate among the 50 largest cities.

In Turnaround: How America’s Top Cop Reversed the Crime Epidemic,26 then-New York City Transit Police Chief and later NYPD Commissioner William Bratton recalled the Watkins killing as “among the worst nightmares” city leaders could imagine: “A tourist in the subway during a high-profile event with which the mayor is closely associated … gets stabbed and killed by a wolfpack. The murder made international headlines.”

Within hours a team of top cops apprehended the perpetrators, which just shows what police can do when a crime, such as the murder of a wealthy tourist, is made an actual priority. Twenty years later, rotting in a prison cell, Brian’s killer sadly recalled his decisions that night as the worst of his life. Had police been in control of the subways, the teen might have been deterred from making the decision that in essence ended two lives.

Unlike the great majority of the other 2,244 murder victims in 1990, the dead Brian mattered by name to Big Apple politicians. Bratton wrote that New York Governor Mario Cuomo “understood the impact this killing could have on New York tourism.” With hundreds of millions of dollars at stake, two days after the Watkins murder, Bratton got a call out of the blue from a top aide to the Governor asking whether transit police could make the subways safe if the state kicked in $40 million—big money in 1990. For Bratton, “this was the turnaround I needed.”

With the cash for more transit police, communications and data analytic tools to put cops where crimes occurred, and better police armaments, subway crime plummeted. Later, NYPD Commissioner Bratton drove homicide down by over a third in just two years with similar tactics, and by replacing hundreds of ineffective administrators with better leaders, as Patrick Wolf and one of us (Maranto) detail in “Cops, Teachers, and the Art of the Impossible: Explaining the Lack of Diffusion of Innovations That Make Impossible Jobs Possible.”27 In another article coauthored with Domonic Bearfield,28 we estimated that as of 2020, NYPD’s reforms saved over 20,000 lives, disproportionately of Black Americans.

NYPD leadership made ineffective leaders get better or get out. This is a tool almost never used by police reformers at the level of city governance.

So how did NYPD do it? New York got serious about both recruiting and training great, tough cops and about holding them accountable. In the 1990s, NYPD Commissioner William Bratton imposed CompStat, a statistical program reporting crimes by location in real time. In weekly meetings, NYPD leaders praised precinct commanders who cut crime and grilled others. They made ineffective managers get better or get out. Homicides fell by over a third in just two years, followed by steady declines since.
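For readers unfamiliar with CompStat, the core mechanic is simple enough to sketch in a few lines: count crimes by precinct per week and flag precincts running well above their recent baseline. The schema, numbers, and threshold below are hypothetical, purely for illustration; this is not NYPD’s actual system.

```python
# Minimal sketch of CompStat-style reporting (hypothetical schema, not
# NYPD's actual system): count crimes by precinct per week and flag
# precincts running above their recent baseline.
from collections import defaultdict

def weekly_counts(incidents):
    """incidents: iterable of (precinct, week_number) tuples."""
    counts = defaultdict(int)
    for precinct, week in incidents:
        counts[(precinct, week)] += 1
    return counts

def flag_spikes(counts, current_week, lookback=4, threshold=1.5):
    """Flag precincts whose current-week count exceeds `threshold` times
    their average over the previous `lookback` weeks."""
    flagged = []
    precincts = {p for (p, _) in counts}
    for p in sorted(precincts):
        baseline = [counts.get((p, w), 0)
                    for w in range(current_week - lookback, current_week)]
        avg = sum(baseline) / lookback
        now = counts.get((p, current_week), 0)
        if avg > 0 and now > threshold * avg:
            flagged.append((p, now, avg))
    return flagged

# Example: a precinct doubles its weekly robberies in week 10
data = [("75", w) for w in range(6, 10) for _ in range(5)]  # 5 per week baseline
data += [("75", 10)] * 11                                   # spike
print(flag_spikes(weekly_counts(data), current_week=10))    # [('75', 11, 5.0)]
```

The management layer, the weekly meetings where commanders answer for the flagged numbers, is what turned a report like this into accountability.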

Let us repeat part of that for emphasis: NYPD leadership made ineffective leaders get better or get out. This is a tool almost never used by police reformers at the level of city governance, who don’t want to be hated by officers, and who are also hamstrung by civil service rules and union contracts that make it difficult to terminate bad police officers, and almost impossible to jettison bad managers. NYPD was the exception.

Because of obscure personnel reforms by Benjamin Ward, the first Black NYPD commissioner and someone who wanted to shake up NYPD’s Irish Mafia of officers, in which promotion often depended on what some called “the friends and family plan,” NYPD commissioners have unusual power over personnel. The commissioner can bust precinct commanders and other key leaders back in rank almost to the street level. Since retirement is based on pay at an officer’s rank, this essentially forces managers into early retirement, with the commissioner getting to pick their replacements rather than having seniority or other civil service rules determine the outcomes.

Legendary police leader John Timoney, who was Bratton’s Chief of Department in NYPD before going on to successfully run departments in Philadelphia and Miami, told us that he had the ability to personally fire over 300 cops in NYPD compared to just two in Philadelphia—the two being himself and his driver. In the latter city, everyone else was covered by civil service tenure.29 Politicians such as Tim Walz were publicly emphasizing their focus on saving Black lives, but showed no enthusiasm for personnel reforms such as these, which could actually get the job done.

Of course, firing cops can’t work if you don’t know whom to fire. Since the mid-1990s, NYPD has strengthened its internal affairs unit to get unprofessional cops off the streets, officers in the mold of Minneapolis’ Derek Chauvin, who killed George Floyd and had 18 prior citizen complaints, before rather than after a disaster. Longtime NYPD Internal Affairs leader Charles Campisi details this process well in Blue on Blue: An Insider’s Story of Good Cops Catching Bad Cops.30

Yet none of this might have happened without the brutal murder of Brian Watkins. In a real sense, the Watkins family suffered so thousands could live. They deserve a monument.

How to Make Black (and All) Lives Matter

Rather than supporting neo-Marxist activism portraying police as fascists enforcing “late-stage capitalist technocratic white supremacy,” or similarly impenetrable academic jargon that seeks to pit citizens against police and fails to solve problems, we see police departments as public organizations staffed by unionized employees, some of whom are public servants, some of whom mainly serve themselves, and most of whom are somewhere in between.31 Just like companies, some police departments are incredibly successful; some are so ineffective that it might make sense to defund them and start over … and some—most by far—are somewhere in between.

So the real question for those of us who want to make police better, rather than run for office or get government grants, is how we can get low-performing police departments to learn from the best, and how we can get the mayors, city councils, governors, and state legislatures overseeing police to enact the sort of civil service reforms, like higher pay coupled with abolishing civil service tenure, that are likely to succeed in getting police to make all lives matter.

Black Lives Matter de-policing policies seem to have taken thousands of (mainly Black) lives. During the BLM era … the age-adjusted Black homicide rate has almost doubled, rising from 18.6 murders per 100,000 African-American citizens in 2011 to 32 murders per 100,000 in 2021.

For us, the key to get elected politicians to take police reform seriously is to make police reform a serious election issue, rather than how well one virtue signals for BLM. To do that, first and foremost, failed police departments and the mayors and city council members running them must be shamed into action. Businesses should be encouraged to relocate from dangerous cities to safe ones. That starts with data.

To make that happen, earlier this year, in a leading public administration journal, along with Patrick Wolf, we published “Which Police Departments Make Black Lives Matter?,” an article that anyone can download for free.32 Here, we ranked police in the 50 largest U.S. cities (using 2020 statistics, but the overall rankings were stable from 2015–2020) by their effectiveness in keeping homicides low and not taking civilian lives, while adjusting for poverty, which makes policing more difficult. Some departments excel. On our Police Professionalism Index, New York City easily takes first place, just as it did in 2015. The top 18 cities also include Boston, MA; Mesa, AZ; Raleigh, NC; Virginia Beach, VA; five California cities including San Diego and San Jose; and five Texas cities including El Paso and Austin.
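The article spells out the actual methodology; as a hedged illustration of what “adjusting for poverty” can mean, the sketch below regresses each outcome on poverty and ranks cities by their combined residuals, i.e., how far they fall above or below what their poverty rate would predict. The city names and data values are invented, and this is one plausible construction, not the paper’s exact method.

```python
# Illustrative sketch only: one way to build a poverty-adjusted index.
# Regress each outcome on poverty, then rank cities by how far they fall
# above or below what their poverty rate predicts. All data invented.
import numpy as np

cities      = ["City A", "City B", "City C", "City D"]
poverty     = np.array([0.12, 0.22, 0.18, 0.25])      # poverty rate
homicide    = np.array([3.5, 25.0, 9.0, 14.0])        # per 100k (invented)
police_kill = np.array([0.10, 1.00, 0.40, 0.45])      # per 100k (invented)

def residuals(y, x):
    """Residuals from a one-variable least-squares fit y ~ a + b*x."""
    b, a = np.polyfit(x, y, 1)
    return y - (a + b * x)

def z(v):
    """Standardize so the two outcomes carry equal weight."""
    return (v - v.mean()) / v.std()

score = z(residuals(homicide, poverty)) + z(residuals(police_kill, poverty))
for city, s in sorted(zip(cities, score), key=lambda t: t[1]):
    print(f"{city}: {s:+.2f}")   # lower = better than poverty predicts
```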

In contrast, by a wide margin, Baltimore ranked dead last (as it did in 2015). Baltimore’s homicide rate (56.12 per 100,000 population) was roughly 15 times higher than New York’s, and Baltimore police kill roughly ten times as many civilians per capita as NYPD. Baltimoreans should be outraged, particularly since, as noted above, top-ranked NYPD used to be in Baltimore’s league. Fifty years ago, NYPD killed about 100 civilians annually, compared to 10 today. In 1990, New York City had 2,245 homicides, mostly people of color, compared to just 462 in 2020. And, as discussed earlier, reforming NYPD saved tens of thousands of lives, mainly Black lives, while at the same time reducing incarceration.

If democracy means anything, it means the ability to influence government, and the first duty of government is protecting life and property. For too long, this most basic of needs has been denied to people without means, who are disproportionately people of color. If we want to increase trust in government, we must start with the police. Doing that requires real data, not agitprop that paints cops as racist killers. To enable that, the U.S. Department of Justice (DOJ) needs to rank large cities on their policing, in the manner we did, recognizing those doing well and calling out those doing badly. The DOJ should also issue reports on which cities enable their police chiefs to terminate problematic officers.

This methodical approach would offend leftist culture warriors and rightist police unions alike. At the local level, to copy NYPD’s success, voters in Baltimore and other poorly policed cities such as Kansas City, Las Vegas, Albuquerque, and Miami must ask pointed questions about their police, such as:

  • Can police chiefs hire and retain the great officers they need? If not, why not?
  • Can police chiefs fire subordinates who are not up to their tough jobs?
  • Are there enough cops to do the job?
  • Do police use CompStat to copy what works in fighting crime?
  • Does the internal affairs unit hold brutal cops accountable?

Building a great police department takes time, but the NYPD has shown how it can be done. It is long past time to stop political virtue signaling and start reforming policing to save all lives.

Categories: Critical Thinking, Skeptic

The Particle and the “Particle” (Part 1)

Science blog of a physics theorist Feed - Mon, 02/24/2025 - 5:52am

Why do I find the word particle so problematic that I keep harping on it, to the point that some may reasonably view me as obsessed with the issue? It has to do with the profound difference between the way an electron is viewed in 1920s quantum physics (“Quantum Mechanics”, or QM for short) as opposed to 1950s relativistic Quantum Field Theory (abbreviated as QFT). [The word “relativistic” means “incorporating Einstein’s special theory of relativity of 1905”.] My goal this week is to explain carefully this difference.

The overarching point:

I’ve discussed this to some degree already in my article about how the view of an electron has changed over time, but here I’m going to give you a fuller picture. To complete the story will take two or three posts, but today’s post will already convey one of the most important points.

There are two short readings that you may want to do first.

I’ll review the main point of the second item, and then I’ll start explaining what an isolated object of definite momentum looks like in QFT.

Removing Everything Extraneous

First, though, let’s make things as simple as possible. Though electrons are familiar, they are more complicated than some of their cousins, thanks to their electric charge and “spin”, and the fact that they are fermions. By contrast, bosons with neither charge nor spin are much simpler. In nature, these include Higgs bosons and electrically-neutral pions, but each of these has some unnecessary baggage. For this reason I’ll frame my discussion in terms of imaginary objects even simpler than a Higgs boson. I’ll call these spinless, chargeless objects “Bohrons” in honor of Niels Bohr (and I’ll leave the many puns to my readers.)

For today we’ll just need one, lonely Bohron, not interacting with anything else, and moving along a line. Using 1920s QM in the style of Schrödinger, we’ll take the following viewpoints.

  • A Bohron is a particle and exists in physical space, which we’ll take to be just a line — the set of points arranged along what we’ll call the x-axis.
  • The Bohron has a property we call position in physical space. We’ll refer to its position as x1.
  • For just one Bohron, the space of possibilities is simply all of its possible positions — all possible values of x1. [See Fig. 1]
  • The system of one isolated Bohron has a wave function Ψ(x1), a complex number at each point in the space of possibilities. [Note it is not a function of x, the points in physical space; it is a function of x1, the possible positions of the Bohron.]
  • The wave function predicts the probability of finding the Bohron at any selected position x1: it is proportional to |Ψ(x1)|2, the square of the absolute value of the complex number Ψ(x1).
Figure 1: For a Bohron moving along a line, physical space is the x-axis where the Bohron (blue dot) is located. The space of possibilities, the set of all possible arrangements of our one-Bohron system (red star), is the x1-axis. This subtle but important distinction becomes clearer when we have two or more Bohrons; the physical space is unchanged, but possibility space is totally different.

A QM State of Definite Momentum

In a previous post, I described states of definite momentum. But I also described states whose momentum is slightly less definite — a broad Gaussian wave packet state, which is a bit more intuitive. The wave function for a Bohron in this state is shown in Fig. 2, using three different representations. You can see intuitively that the Bohron’s motion is quite steady, reflecting near-definite momentum, while the wave function’s peak is very broad, reflecting great uncertainty in the Bohron’s position.

  • Fig. 2a shows the real and imaginary parts of Ψ(x1) in red and blue, along with its absolute-value squared |Ψ(x1)|2 in black.
  • Fig. 2b shows the absolute value |Ψ(x1)| in a color that reflects the argument [i.e. the phase] of Ψ(x1).
  • Fig. 2c indicates |Ψ(x1)|2, using grayscale, at a grid of x1 values; the Bohron is more likely to be found at or near dark points than at or near lighter ones.

For more details and examples using these representations, see this post.

Figure 2a: The wave function for a wave packet state with near-definite momentum, showing its real (red) and imaginary (blue) parts and its absolute value squared (black).
Figure 2b: The same wave function, with the curve showing its absolute value and colored by its argument.
Figure 2c: The same wave function, showing its absolute value squared using gray-scale values on a grid of x1 points. The Bohron is more likely to be found near dark-shaded points.
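If you’d like to reproduce pictures like Fig. 2 yourself, here is a small sketch (my construction, not code from this post) of a broad Gaussian wave packet with near-definite momentum, in units where ℏ = 1; its real part, imaginary part, and absolute-value-squared correspond to the red, blue, and black curves of Fig. 2a.

```python
# A small sketch (not from the post): a broad Gaussian wave packet with
# near-definite momentum P1, in units where hbar = 1.
import numpy as np

x1 = np.linspace(-60, 60, 2001)   # possible Bohron positions (not physical x!)
P1, x0, sigma = 1.0, 0.0, 15.0    # momentum, packet center, (large) width

# Gaussian envelope times a plane wave: the broader the envelope,
# the more definite the momentum.
psi = np.exp(-(x1 - x0)**2 / (4 * sigma**2)) * np.exp(1j * P1 * x1)

dx = x1[1] - x1[0]
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)   # normalize total probability to 1

re, im, prob = psi.real, psi.imag, np.abs(psi)**2   # Fig. 2a's red, blue, black
print(f"total probability: {prob.sum() * dx:.3f}")  # 1.000
```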

To get a Bohron of definite momentum P1, we simply take what is plotted in Fig. 2 and make the broad peak wider and wider, so that the uncertainty in the Bohron’s position becomes infinite. Then (as discussed in this post) the wave function for that state, referred to as |P1>, can be drawn as in Fig. 3:

Figure 3a: As in Fig. 2a, but now for a state |P1> of precisely known momentum to the left.
Figure 3b: As in Fig. 2b, but now for a state |P1> of precisely known momentum to the left.
Figure 3c: As in Fig. 2c, but now for a state |P1> of precisely known momentum; note the probability of finding the Bohron is equal at every point at all times.

In math, the wave function for the state at some fixed moment in time takes a simple form, such as

Ψ(x1) = e^{i P1 x1/ℏ}
where i is the square root of -1. This is a special state, because the absolute-value-squared of this function is just 1 for every value of x1, and so the probability of measuring the Bohron to be at any particular x1 is the same everywhere and at all times. This is seen in Fig. 3c, and reflects the fact that in a state with exactly known momentum, the uncertainty on the Bohron’s position is infinite.
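A one-line check of that claim, writing Ψ* for the complex conjugate of Ψ:

```latex
|\Psi(x_1)|^2 \,=\, \Psi^*(x_1)\,\Psi(x_1)
\,=\, e^{-iP_1 x_1/\hbar}\, e^{+iP_1 x_1/\hbar} \,=\, 1
\qquad \text{for every } x_1 .
```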

Let’s compare the Bohron (the particle itself) in the state |P1> to the wave function that describes it.

  • In the state |P1>, the Bohron’s location is completely unknown. Still, its position is a meaningful concept, in the sense that we could measure it. We can’t predict the outcome of that measurement, but the measurement will give us a definite answer, not a vague indefinite one. That’s because the Bohron is a particle; it is not spread out across physical space, even though we don’t know where it is.
  • By contrast, the wave function Ψ(x1) is spread out, as is clear in Fig. 3. But caution: it is not spread out across physical space, the points of the x axis. It is spread out across the space of possibilities — across the range of possible positions x1. See Fig. 1 [and read my article on the space of possibilities if this makes no sense to you.]
  • Thus neither the Bohron nor its wave function is spread out in physical space!

We do have waves here, and they have a wavelength; that’s the distance between one crest and the next in Fig. 3a, and the distance between one red band and the next in Fig. 3b. That wavelength is a property of the wave function, not a property of the Bohron. To have a wavelength, an object has to be wave-like, which our QM Bohron is not.

Conversely, the Bohron has a momentum (which is definite in this state, and is something we can measure). This has real effects; if the Bohron hits another particle, some or all of its momentum will be transferred, and the second particle will recoil from the blow. By contrast, the wave function does not have momentum. It cannot hit anything and make it recoil, because, like any wave function, it sits outside the physical system. It merely describes an object with momentum, and tells us the probable outcomes of measurements of that object.

Keep these details of wavelength (the wave function’s purview) and the momentum (the Bohron’s purview) in mind. This is how 1920’s QM organizes things. But in QFT, things are different.

First Step Toward a QFT State of Definite Momentum

Now let’s move to quantum field theory, and start the process of making a Bohron of definite momentum. We’ll take some initial steps today, and finish up in the next post.

Our Bohron is now a “particle”, in quotation marks. Why? Because our Bohron is no longer a dot, with a measurable (even if unknown) position. It is now a ripple in a field, which we’ll call the Bohron field. That said, there’s still something particle-like about the Bohron, because you can only have an integer number (1, 2, 3, 4, 5, …) of Bohrons, and you can never have a fractional number (1/2, 7/10, 2.46, etc.) of Bohrons. This feature is something we’ll discuss in later posts, but we’ll just accept it for now.

As fields go, the Bohron field is a very simple example. At any given moment, the field takes on a value — a real number — at each point in space. Said another way, it is a function of physical space, of the form B(x).

Very, very important: Do not confuse the Bohron field B(x) with a wave function!!

  • This field is a function in physical space (not the space of possibilities). B(x) is a function of physical space points x that make up the x-axis, and is not a function of a particle’s position x1, nor is it a function of any other coordinate that might arise in the space of possibilities.
  • I’ve chosen the simplest type of QFT field: B(x) is a real number at each location in physical space. This is in contrast to a QM wave function, which is a complex number for each possibility in the space of possibilities.
  • The field itself can carry energy and momentum and transport it from place to place. This is unlike a wave function, which can only describe the energy and momentum that may be carried by physical objects.

Now here’s the key distinction. Whereas the Bohron of QM has a position, the Bohron of QFT does not generally have a position. Instead, it has a shape.

If our Bohron is to have a definite momentum P1, the field must ripple in a simple way, taking on a shape proportional to a sine or cosine function from pre-university math. An example would be:

B(x) = A cos[P1 x/ℏ]
where A is a real number, called the “amplitude” of the wave, and x is a location in physical space.

At some point soon we’ll consider all possible values of A — a part of the space of possibilities for the field B(x) — so remember that A can vary. To remind you, I’ve plotted this shape for A=1 in Fig. 4a and again for A=-3/2 in Fig 4b.

Figure 4a: The function A cos[P1 x], for the momentum P1 set equal to 1 and the amplitude A set equal to 1.
Figure 4b: Same as Fig. 4a, but with A = -3/2.

Initial Comparison of QM and QFT

At first, the plots in Fig. 4 of the QFT Bohron’s shape look very similar to the QM wave function of the Bohron particles, especially as drawn in Fig. 3a. The math formulas for the two look similar, too; compare the formula after Fig. 3 to the one above Fig. 4.

However, appearances are deceiving. In fact, when we look carefully, EVERYTHING IS COMPLETELY DIFFERENT.

  • Our QM Bohron with definite momentum has a wave function Ψ(x1), while in QFT it has a shape B(x); they are functions of variables which, though related, are different.

  • On top of that, there’s a wave function in QFT too, which we haven’t drawn yet. When we do, we’ll see that the QFT Bohron’s wave function looks nothing like the QM Bohron’s wave function. That’s because
    • the space of possibilities for the QM wave function is the space of possible positions that the Bohron particle can have, but
    • the space of possibilities for the QFT wave function is the space of all possible shapes that the Bohron field can have.
  • The plot in Fig. 4 shows a curve that is both positive and negative but is drawn colorless, in contrast to Fig. 3b, where the curve is positive but colored. That’s because
    • the Bohron field B(x) is a real number with no argument [phase], whereas
    • the QM wave function Ψ(x1) for the state of definite momentum has an always-positive absolute value and a rapidly varying argument [phase].
  • The axes in Fig. 4 are labeled differently from the axis in Fig. 3. That’s because (see Fig. 1)
    • the QFT Bohron field B(x) is found in physical space, while
    • the QM wave function Ψ(x1) for the Bohron particle is found in the particle’s space of possibilities.
  • The absolute-value-squared of a wave function |Ψ(x1)|2 is interpreted as a probability (specifically, the probability for the particular possibility that the particle is at position x1). There is no such interpretation for the square of the Bohron field |B(x)|2. We will later find a probability interpretation for the QFT wave function, but we are not there yet.

  • Both Fig. 4 and Figs. 3a, 3b show curves with a wavelength, albeit along different axes. But they are very different in every sense:
    • In QM, the Bohron has no wavelength; only its wave function has a wavelength — and that involves lengths not in physical space but in the space of possibilities.
    • In QFT,
      • the field ripple corresponding to the QFT Bohron with definite momentum has a physical wavelength;
      • meanwhile the QFT Bohron’s wave function does not have anything resembling a wavelength! The field’s space of possibilities, where the wave function lives, doesn’t even have a recognizable notion of lengths in general, much less wavelengths in particular.

I’ll explain that last statement next time, when we look at the nature of the QFT wave function that corresponds to having a single QFT Bohron.

A Profound Change of Perspective

But before we conclude for the day, let’s take a moment to contemplate the remarkable change of perspective that is coming into our view, as we migrate our thinking from QM of the 1920s to modern QFT. In both cases, our Bohron of definite momentum is certainly associated with a definite wavelength; we can see that both in Fig. 3 and in Fig. 4. The formula for the relation is well-known to scientists; the wavelength λ for a Bohron of momentum P1 is simply

λ = h / P1
where h is Planck’s famous constant, the mascot of quantum physics. Larger momentum means smaller wavelength, and vice versa. On this, QM and QFT agree.
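To see where this comes from in the QFT picture: the cosine ripple written above repeats whenever its argument advances by 2π, so

```latex
\frac{P_1\,\lambda}{\hbar} = 2\pi
\quad\Longrightarrow\quad
\lambda = \frac{2\pi\hbar}{P_1} = \frac{h}{P_1}.
```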

But compare:

  • in QM, this wavelength sits in the wave function, and has nothing to do with waves in physical space;
  • in QFT, the wavelength is not found in the field’s wave function; instead it is found in the field itself, and specifically in its ripples, which are waves in physical space.

I’ve summarized this in Table 1.

Table 1: The Bohron with definite momentum has an associated wavelength. In QM, this wavelength appears in the wave function. In QFT it does not; both the wavelength and the momentum are found in the field itself. This has caused no end of confusion.

Let me say that another way. In QM, our Bohron is a particle; it has a position, cannot spread out in physical space, and has no wavelength. In QFT, our Bohron is a “particle”, a wavy object that can spread out in physical space, and can indeed have a wavelength. (This is why I’d rather call it a wavicle.)

[Aside for experts: if anyone thinks I’m spouting nonsense, I encourage the skeptic to simply work out the wave function for phonons (or their counterparts with rest mass) in a QM system of coupled balls and springs, and watch as free QFT and its wave function emerge. Every statement made here is backed up with a long but standard calculation, which I’m happy to show you and discuss.]

I think this little table is deeply revealing both about quantum physics and about its history. It goes a long way toward explaining one of the many reasons why the brilliant founding parents of quantum physics were so utterly confused for a couple of decades. [I’m going to go out on a limb here, because I’m certainly not a historian of physics; if I have parts of the history wrong, please set me straight.]

Based on experiments on photons and electrons and on the theoretical insight of Louis de Broglie, it was intuitively clear to the great physicists of the 1920s that electrons and photons, which they were calling particles, do have a wavelength related to their momentum. And yet, in the late 1920s, when they were just inventing the math of QM and didn’t understand QFT yet, the wavelength was always sitting in the wave function. So that made it seem as though maybe the wave function was the particle, or somehow was an aspect of the particle, or that in any case the wave function must carry momentum and be a real physical thing, or… well, clearly it was very confusing. It still confuses many students and science writers today, and perhaps even some professional scientists and philosophers.

In this context, is it surprising that Bohr was led in the late 1920s to suggest that electrons are both particles and waves, depending on experimental context? And is it any wonder that many physicists today, with the benefit of both hindsight and a deep understanding of QFT, don’t share this perspective?

In addition, physicists already knew, from 19th century research, that electromagnetic waves — ripples in the electromagnetic field, which include radio waves and visible light — have both wavelength and momentum. Learning that wave functions for QM have wavelength and describe particles with momentum, as in Fig. 3, some physicists naturally assumed that fields and wave functions are closely related. This led to the suggestion that to build the math of QFT, you must go through the following steps:

  • first you take particles and describe them with a wave function, and then
  • second, you make this wave function into a field, and describe it using an even bigger wave function.

(This is where the archaic terms “first quantization” and “second quantization” come from.) But this idea was misguided, arising from early conceptual confusions about wave functions. The error becomes more understandable when you imagine what it must have been like to try to make sense of Table 1 for the very first time.

In the next post, we’ll move on to something novel: images depicting the QFT wave function for a single Bohron. I haven’t seen these images anywhere else, so I suspect they’ll be new to most readers.

Categories: Science

22,000-year-old tracks are earliest evidence of transport vehicles

New Scientist Feed - Mon, 02/24/2025 - 5:18am
Tracks and footprints found in New Mexico are by far the earliest evidence of people using primitive vehicles to transport things
Categories: Science

The Alef Flying Car

neurologicablog Feed - Mon, 02/24/2025 - 5:01am

The flying car is an icon of futuristic technology – in more ways than one. This is partly why I can’t resist a good flying car story. I was recently sent this YouTube video on the Alef flying car. The company says this is a street-legal flying car, with vertical take-off and landing. They also demonstrate that they have tested this vehicle in urban environments. They are available now for pre-order (estimated price, $300k). The company claims: “Alef will deliver a safe, affordable vehicle to transform your everyday commute.” The claim sounds reminiscent of claims made for the Segway (which recently went defunct).

The flying car has a long history as a promise of future technology. As a technology buff, nerd, and sci-fi fan, I have been fascinated with them my entire life. I have also seen countless prototype flying cars come and go, an endless progression of overhyped promises that have never delivered. I try not to let this make me cynical – but I am cautious and skeptical. I even wrote an entire book about the foibles of predicting future technology, in which flying cars featured prominently.

So of course I met the claims for the Alef flying car with a fair degree of skepticism – which has proven entirely justified. First I will say that the Alef flying car does appear to function as a car and can fly like a drone. But I immediately noticed in the video that as a car, it does not go terribly fast. You have to do some digging, but I found the technical specs, which say that it has a maximum road speed of 25 MPH. It also claims a road range of 200 miles, and an air range of 110 miles. It is an EV with a gas motor to extend battery life in flight, with eight electric motors and eight propellers. It is also single passenger. It’s basically a drone with a frame shaped like a car, with tires and weak motors – a drone that can taxi on roads.

It’s a good illustration of the inherent hurdles to a fully realized flying car of our dreams, mostly rooted in the laws of physics. But before I go there, as is, can this be a useful vehicle? I suppose, for very specific applications. It is being marketed as a commuter car, which makes sense, as it is single passenger (this is no family car). The limited range also makes it suited to commuting (the average daily commute in the US is around 42 miles).

That 25 MPH limit, however, seems like a killer. You can’t drive this thing on the highway, or on many roads, in fact. But, trying to be as charitable as possible, that may be adequate for congested city driving. It is also useful for pulling the vehicle out of the garage into a space with no overhead obstruction. Then you would essentially fly to your destination, land in a suitable location, and then drive to your parking space. If you are only driving into the parking garage, the 25 MPH is fine. So again – it’s really a drone that can taxi on public roads.

The company claims the vehicle is safe, and that seems plausible. Computer aided drone control is fairly advanced now, and AI is only making it better. The real question is – would you need a pilot’s license to fly it? How much training would be involved? And what are the weather conditions in which it is safe to fly? Where you live, what percentage of days would the drone car be safe to fly, and how easy would it be to be stuck at work (or need to take an Uber) because the weather unexpectedly turned for the worse? And if you are avoiding even the potential of bad weather, how much further does this restrict your flying days?

There are obviously lots of regulatory issues as well. Will cities allow the vehicles to fly overhead? What happens if they become popular and we see a significant increase in their use? How will air traffic be managed? If widely adopted, we will see then what their real safety statistics are. How many people will fly into power lines, etc.?

What all this means is that a vehicle like this may be great as “James Bond” technology. This means, if you are the only one with the tech, and you don’t have to worry about regulations (because you’re a spy), it may help you get away from the bad guys, or quickly cross a city frozen with grid lock. (Let’s face it, you can totally see James Bond in this thing.) But as a widely adopted technology, there are significant issues.

For me the bottom line is that this vehicle is a great proof of concept, and I welcome anything that incrementally advances the technology. It may also find a niche somewhere, but I don’t think this will become the Tesla of flying cars, or that it will transform city commuting. It does help demonstrate where the technology is: we are seeing the benefits of improving battery technology and improving drone technology. But is this the promised “flying car”? I think the answer is still no.

For me, a true flying car functions fully as a car and as a flying conveyance. What we often see are planes that can drive on the road, and now drones that can drive on the road. But they are not really cars, or they are terrible cars. You would never drive the Alef flying car as a car – again, at most you would taxi it to and from its parking space.

What will it take to have a true flying car? I do think the drone approach is much better than the plane or jet-pack approach. Drone technology is definitely the way to go. Before it is practical, however, we need better battery tech. The Alef uses lithium-ion and lithium-polymer batteries. Perhaps eventually it will use silicon-anode lithium batteries, which have a higher energy density. But we may need batteries with triple or more the energy density of current lithium-ion cells before flying drone cars are a practical reality. We can feasibly get there, though.
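For a rough sense of scale, here is a sketch comparing approximate cell-level energy densities. The specific numbers are my own ballpark assumptions drawn from publicly discussed figures, not Alef specifications:

```python
# Battery mass required to store 100 kWh at various energy densities.
densities_wh_per_kg = {
    "current Li-ion (typical)": 250,    # assumed ballpark figure
    "silicon-anode (announced)": 400,   # assumed ballpark figure
    "3x today's cells": 750,            # the 'triple or more' scenario above
}

for name, density in densities_wh_per_kg.items():
    mass_kg = 100_000 / density   # 100 kWh expressed in Wh, divided by Wh/kg
    print(f"{name}: {mass_kg:.0f} kg per 100 kWh")
```

Tripling energy density cuts the battery mass for the same flight energy to a third – and in aviation, mass is everything.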

Perhaps, however, the “flying car” is just a futuristic pipe dream. We do have to consider whether the concept is even valid, or whether we are committing a “futurism fallacy” by simply projecting current technology into the future. We don’t necessarily have to do things in the same way, just with better technology. The thought process is: I use my car for transportation – wouldn’t it be great if my car could fly? Perhaps the trade-offs of making a single vehicle that is both a good car and a good drone are just not worth it. Perhaps we should instead make the best drone possible for human transportation and specific applications. We may need to develop some infrastructure to accommodate them.

In a city there may be other combinations of travel that work better. You might take an e-scooter or some form of public transportation to the drone. Then the drone can take you across the city, or across a geographic obstacle. Personal drones may be used for commuting, with a dedicated pad at your home and another at work for landing. That seems easier than designing a drone-car just to drive 30 feet to the takeoff location.

If we go far enough into the future, where technology is much more advanced (like batteries with 10 times the energy density of current tech), then flying cars may eventually become practical. But even then there may be little reason to choose that tradeoff.

The post The Alef Flying Car first appeared on NeuroLogica Blog.

Categories: Skeptic

Huge thunderstorm on Jupiter captured in best detail ever seen

New Scientist Feed - Mon, 02/24/2025 - 4:00am
NASA's Juno spacecraft swooped in for a close look at a massive thunderstorm on Jupiter, revealing that it may have similarities to storms on Earth
Categories: Science

How Robert F. Kennedy Jr. will undermine and ultimately destroy US vaccination programs

Science-based Medicine Feed - Mon, 02/24/2025 - 12:03am

When Robert F. Kennedy Jr. was nominated to be Secretary of Health & Human Services, I called him an "extinction-level threat" to public health. Here's how he will attempt to make vaccines extinct in the US.

The post How Robert F. Kennedy Jr. will undermine and ultimately destroy US vaccination programs first appeared on Science-Based Medicine.
Categories: Science

Dogs seem to have a strong preference for yellow things

New Scientist Feed - Sun, 02/23/2025 - 10:00pm
When offered a choice of bowls, free-ranging dogs in India tend to approach a yellow one much more than blue or grey
Categories: Science

We Know How Much Radiation Astronauts Will Receive, But We Don’t Know How to Prevent it

Universe Today Feed - Sun, 02/23/2025 - 4:53pm

The journey to Mars will subject astronauts to extended periods of radiation exposure during their months-long travel through space. While NASA’s Artemis 1 mission lasted only a matter of weeks, it provided valuable radiation exposure data that scientists can use to predict the radiation risks for future Mars crews. The measurements not only validated existing radiation prediction models but also revealed unexpected insights about the effectiveness of radiation shielding strategies.

Space radiation poses one of the most significant health risks for astronauts travelling beyond Earth’s magnetic field. Unlike the radiation from medical X-rays or nuclear sources on Earth, space radiation includes high-energy galactic cosmic rays and solar particle events that can penetrate traditional shielding materials. When these particles collide with human tissue, they can damage DNA, increase cancer risk and weaken the immune system. The effects are cumulative too, with longer missions like a journey to Mars significantly increasing exposure and health risks. 

Artist’s illustration of ultra-high energy cosmic rays

International Space Station crews receive radiation doses similar to those of nuclear power plant workers, thanks to the partial protection still afforded by Earth’s magnetosphere, but astronauts traveling to Mars would face much higher exposure levels during their multi-month journey. NASA estimates that a mission to Mars could expose astronauts to radiation levels that exceed current career exposure limits, making effective radiation shielding one of the key challenges for deep space exploration.

A full-disk view of Mars, courtesy of VMC. Credit: ESA

In a paper recently published by a team led by Tony C. Slaba of NASA’s Langley Research Center, the researchers use computer models and data from on-board detectors to assess the health risks of long-term spaceflight. The data come from the International Space Station (ISS), the Orion spacecraft, the BioSentinel CubeSat, and receivers on the surface of Mars. Collectively, these data allow a full mission profile to be modelled for a Martian journey. The data were captured during the Artemis-1 mission, which lasted just under one month.

NASA’s Orion spacecraft will carry astronauts further into space than ever before using a module based on Europe’s Automated Transfer Vehicles (ATV). Credit: NASA

Space radiation comes in two primary forms that pose risks to astronauts and spacecraft. Solar Particle Events occur during solar storms, releasing intense bursts of energetic particles from the Sun, while Galactic Cosmic Rays represent a constant stream of highly penetrating radiation from deep space. The findings enabled the team to assess current models for accuracy. They found that predictions match actual measurements to within 10-25% for the International Space Station, 4% for deep space conditions, and 10% for the Martian surface. This level of precision gives confidence in the existing models and in planning radiation protection for future missions.

Having assessed traditional shielding approaches, they also found that these are largely ineffective against Galactic Cosmic Rays. In some cases, excessive shielding or inappropriate material choices can even amplify radiation exposure through secondary particle production. This occurs when the original radiation creates a cascade of new particles on impact – particles that can be more dangerous than the original radiation. They also found that radiation levels vary substantially depending on location and the specific shielding configuration used. Quite the headache for engineers.

Radiation exposure is one of the greatest challenges in human space exploration. The study shows that our models for assessing radiation risk are reliable, and the ability to accurately assess those risks is crucial for protecting astronauts from serious health consequences. A good understanding of the risk directly influences how spacecraft are engineered and plays a key role in mission planning for trips beyond Earth orbit. More work is now needed on the design of radiation protection systems if our space travellers are to be better protected from the long-term risks posed by radiation.

Source : Validated space radiation exposure predictions from earth to mars during Artemis-I

The post We Know How Much Radiation Astronauts Will Receive, But We Don’t Know How to Prevent it appeared first on Universe Today.

Categories: Science

Glaciers Worldwide are Melting Faster Causing Sea Levels to Rise More

Universe Today Feed - Sun, 02/23/2025 - 4:25pm

Anthropogenic climate change is creating a vicious circle where rising temperatures are causing glaciers to melt at an increasing rate. In addition to contributing to rising sea levels, coastal flooding, and extreme weather, the loss of polar ice and glaciers is causing Earth’s oceans to absorb more solar radiation. The loss of glaciers is also depleting regional freshwater resources, leading to elevated levels of drought and the risk of famine. According to new findings by an international research effort, there has been an alarming increase in the rate of glacier loss over the last ten years.

The research was conducted by the Glacier Mass Balance Intercomparison Exercise (GlaMBIE) team, a major research initiative coordinated by the World Glacier Monitoring Service (WGMS). Located at the University of Zurich in collaboration with the University of Edinburgh and Earthwave Ltd, this international data repository and data analyzing service generates community estimates of glacier mass loss globally. The paper that details their research and findings, “Community estimate of global glacier mass changes from 2000 to 2023,” was published on February 19th in the journal Nature.

As part of their efforts, the team coordinated the compilation, standardization, and analysis of field measurements and data from optical, radar, laser, and gravimetry satellite missions. These include observations from NASA’s Terra Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2), the NASA-DLR Gravity Recovery and Climate Experiment (GRACE), DLR’s TanDEM-X mission, and ESA’s CryoSat missions, among others.

Combining data from multiple sources, the GlaMBIE team produced an annual time series of global glacier loss from 2000 to 2023. In 2000, glaciers covered about 705,221 square km (272,287 mi²) and held an estimated 121,728 billion metric tons (134,182 billion US tons) of ice. Over the study period, they lost roughly 273 billion tonnes of ice annually – approximately 5% of their total volume in all – with regional losses ranging from 2% in the Antarctic and Subantarctic to 39% in Central Europe. To put that in perspective, the annual loss amounts to what the entire global population consumes in water over 30 years.
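That ~5% figure checks out against the article’s own numbers. A quick sanity check, assuming 23 annual balance years between 2000 and 2023 (my assumption about how the years are counted):

```python
# Sanity-checking the ~5% total loss figure.
initial_mass_gt = 121_728   # billion tonnes of glacier ice in 2000
annual_loss_gt = 273        # average annual loss, billion tonnes
years = 23                  # assumed annual balance years, 2000-2023

total_loss_gt = annual_loss_gt * years
print(f"Total loss: {total_loss_gt:,} Gt "
      f"({100 * total_loss_gt / initial_mass_gt:.1f}% of the 2000 mass)")
# -> 6,279 Gt, about 5.2% of the initial mass
```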

In short, the rate of ice loss was 36% higher during the second half of the study period (2012–2023) than during the first half (2000–2011). Glacier mass loss over the whole study period was 18% higher than the meltwater loss from the Greenland Ice Sheet and more than double that from the Antarctic Ice Sheet. Michael Zemp, a noted glaciologist who co-led the study, said in an ESA press release:

“We compiled 233 estimates of regional glacier mass change from about 450 data contributors organized in 35 research teams. Benefiting from the different observation methods, Glambie not only provides new insights into regional trends and year-to-year variability, but we could also identify differences among observation methods. This means that we can provide a new observational baseline for future studies on the impact of glacier melt on regional water availability and global sea-level rise.”

This photograph, taken in 2012, shows the Golubin Glacier in Kyrgyzstan, in Central Asia. Credit: M. Hoelzle (2012)

Globally, glaciers collectively lost 6,542 billion tonnes (7,210 billion US tons) of ice, leading to a global sea-level rise of 18 mm (0.7 inches). The rate of glacier ice loss increased significantly, from 231 billion tonnes per year in the first half of the study period to 314 billion tonnes per year in the second half – an increase of 36%. This loss has made glaciers the second-largest contributor to global sea-level rise, surpassing the contributions of the Greenland Ice Sheet, the Antarctic Ice Sheet, and changes in land water storage. Said UZH glaciologist Inés Dussaillant, who was involved in the GlaMBIE analyses:

“Glaciers are vital freshwater resources, especially for local communities in Central Asia and the Central Andes, where glaciers dominate runoff during warm and dry seasons. But when it comes to sea-level rise, the Arctic and Antarctic regions, with their much larger glacier areas, are the key players. However, almost one-quarter of the glacier contribution to sea-level rise originates from Alaska.”
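The study’s 18 mm figure follows from a standard back-of-the-envelope conversion: roughly 362 billion tonnes of melted ice raise global mean sea level by about 1 mm, given an ocean area of about 3.62 × 10⁸ km². A minimal cross-check (the conversion factor is a common approximation, not a number from the study itself):

```python
# Converting total glacier mass loss to global mean sea-level rise.
ocean_area_m2 = 3.62e8 * 1e6            # ocean area: km^2 -> m^2
gt_per_mm = ocean_area_m2 * 1e-3 / 1e9  # mass of 1 mm of water over the ocean, in Gt

total_loss_gt = 6_542                   # total glacier loss from the study
print(f"~{gt_per_mm:.0f} Gt per mm -> {total_loss_gt / gt_per_mm:.0f} mm of rise")
# -> ~362 Gt per mm, ~18 mm in total, matching the study's figure
```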

These results will provide environmental scientists with a refined baseline for interpreting observational differences arising from different methods and for calibrating models. They hope this will help future studies of global ice loss by narrowing the projection uncertainties for the twenty-first century. These research findings are the culmination of many years of cooperative studies and observations, which included the use of satellites that were not specifically designed to monitor glaciers globally. As co-author Noel Gourmelen, a lecturer in Earth Observation of the Cryosphere at the University of Edinburgh, said:

“The research is the result of sustained efforts by the community and by space agencies over many years, to exploit a variety of satellites that were not initially specifically designed for the task of monitoring glaciers globally. This legacy is already producing impact with satellite missions being designed to allow operational monitoring of future glacier evolution, such as Europe’s Copernicus CRISTAL mission which builds on the legacy of ESA’s CryoSat.”

The study also marks an important milestone, since it was released in time for the United Nations’ International Year of Glaciers’ Preservation and the Decade of Action for Cryospheric Sciences (2025–2034). Livia Jakob, Chief Scientific Officer and Co-Founder at Earthwave, who hosted a large workshop with all the participants to discuss the findings, said: “Bringing together so many different research teams from across the globe in a joint effort to increase our understanding and certainty of glacier ice loss has been extremely valuable. This initiative has also fostered a stronger sense of collaboration within the community.”

The study also illustrates the importance of collective action on climate change, which is accelerating at an alarming rate. Research that quantifies glacial loss, rising sea levels, and other impacts is key to preparing for the worst. It’s also essential to the development of proper adaptation, mitigation, and restoration strategies consistent with the recommendations made by the UN Intergovernmental Panel on Climate Change (IPCC).

Further Reading: ESA

The post Glaciers Worldwide are Melting Faster Causing Sea Levels to Rise More appeared first on Universe Today.

Categories: Science

A Chinese Satellite Tests Orbital Refuelling

Universe Today Feed - Sun, 02/23/2025 - 4:08pm

Satellites often face a disappointing end: despite having fully working systems, they are often de-orbited once their propellant runs out. However, a breakthrough may be on the cards with China’s Shijian-25 satellite, launched into orbit to test orbital refuelling operations. The plan: dock with the Beidou-3 G7 satellite and transfer 142 kilograms of hydrazine, extending its life by eight years. If the test succeeds, China plans to develop a network of orbital refuelling stations.
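Those figures imply a modest annual station-keeping budget. The arithmetic below uses only the numbers reported above:

```python
# Implied propellant budget for the extended mission.
propellant_kg = 142   # hydrazine to be transferred (reported)
extra_years = 8       # claimed life extension (reported)
print(f"~{propellant_kg / extra_years:.1f} kg of propellant per year")  # ~17.8 kg/yr
```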

Like cars on Earth, satellites need fuel to manoeuvre and to boost their constantly decaying orbits. But unlike vehicles on the ground, when satellites run out of propellant, they become expensive space debris. This challenge has driven the development of orbital refuelling technology, which could extend satellite lifespans and transform space operations.

An artist’s conception of ERS-2 in orbit. ESA

The International Space Station (ISS) offers one of the best-known examples of an orbiting ‘satellite’, and it too needs to deal with boosting its orbit. The problem is the drag imposed on the structure by residual gas from our atmosphere. In the case of the ISS, docked supply craft typically fire their engines to boost the station back to the correct altitude. Without these periodic “orbital boosts,” the ISS would eventually lose altitude and reenter the atmosphere.

The International Space Station (ISS) in orbit. Credit: NASA

A significant milestone in autonomous refuelling came in 2007 with DARPA’s Orbital Express mission. This demonstration involved two spacecraft: the ASTRO servicing vehicle and a prototype modular satellite called NextSat. Over three months, they performed multiple autonomous fuel transfers and component replacements, proving that robotic spacecraft could conduct complex servicing operations without direct human control.

The technology continues to advance, with China’s Shijian-25 satellite (launched on 6 January 2025) representing another step forward in orbital refuelling capabilities. The mission aims to demonstrate refuelling operations in geosynchronous orbit, approximately 36,000 kilometres above Earth. This is particularly significant because geosynchronous orbits host many communications satellites that would benefit from life extension.
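That roughly 36,000 km altitude falls straight out of Kepler’s third law: a geosynchronous satellite must complete one orbit per sidereal day. A minimal sketch of the calculation (standard orbital mechanics, nothing specific to Shijian-25):

```python
# Deriving geosynchronous altitude from the orbital period.
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T = 86_164.1          # one sidereal day, in seconds
R_EARTH = 6_378e3     # Earth's equatorial radius, m

# Kepler's third law: T^2 = 4*pi^2 * a^3 / MU -> solve for semi-major axis a
a = (MU * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"GEO altitude: {(a - R_EARTH) / 1e3:,.0f} km")  # ~35,786 km
```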

The technical challenges of orbital refuelling are considerable, though. Spacecraft must achieve extremely precise rendezvous and docking while travelling at orbital speeds – upwards of 28,000 kilometres per hour in low Earth orbit, and around 11,000 kilometres per hour in geosynchronous orbit. The fuel transfer system must prevent leaks, which could endanger both spacecraft and create hazardous debris. Adding to the challenge, many satellites were never designed with refuelling in mind, lacking any form of standardised fuel ports or docking interfaces.

Orange balls of light fly across the sky as debris from a SpaceX rocket launched in Texas is spotted over Turks and Caicos Islands on Jan. 16, in this screen grab obtained from social media video. Credit: Marcus Haworth/Reuters

Looking ahead, several companies and space agencies are developing orbital refuelling systems. These range from dedicated “gas station” satellites to more versatile servicing vehicles that can perform repairs and upgrades alongside refuelling. As the technology advances, it could significantly change how we operate in space, making satellite operations more sustainable and cost-effective.

Source : China successfully sent Shijian-25 satellite

The post A Chinese Satellite Tests Orbital Refuelling appeared first on Universe Today.

Categories: Science

White Coat Crime

Skeptic.com feed - Sun, 02/23/2025 - 3:52pm

She murdered her patients. At least, that’s what the prosecutors said. All it took to get powerful opioids from California internist Lisa Tseng was a brief conversation. No X-rays. No lab tests. No medical exam. Video surveillance shows an undercover officer posing as a patient who asks Dr. Tseng for methadone (an opioid) and Xanax (an anti-anxiety medication), drugs that can form a deadly cocktail when combined. He tells her that he is in recovery and takes the drugs at night with alcohol to “take the edge off.” He makes clear that he is not in pain and does not plan to use the medications to treat a medical condition. Tseng writes the prescription—after the agent hands over $75 cash.1

Did she know what she was doing was wrong? Tseng received desperate calls from patients’ families and friends concerned that their loved ones were hooked on the meds she prescribed.2 She did not stop. Coroners and law enforcement agents called Dr. Tseng each time a patient died—14 in total.3 She did not stop. Perhaps she thought the financial perks outweighed the risks. Dr. Tseng’s reckless prescribing raked in $3,000 a day and exceeded $5 million in three years. 

Dr. Tseng’s prescribing spree ended in 2015, when a jury convicted her on three counts of second-degree murder.4 In 2016, Superior Court Judge George G. Lomeli imposed a sentence of 30 years to life in prison. The trial lasted eight weeks. It included 77 witnesses and 250 pieces of evidence. Families of overdose victims praised the judge’s decision and concluded that “justice has been served.”5

Dr. Tseng was the first California physician ever convicted of murder for overprescribing opioids, and one of the first in the United States. Her case was a turning point for law enforcement because it created a playbook for subsequent prosecutions and because it sent a clear signal to physicians across the nation: you could be next. “The message this case sends is you can’t hide behind a white lab coat and commit crimes,” declared Deputy District Attorney John Niedermann. “A lab coat and stethoscope are no shield.” Medical experts warned that Tseng’s case could scare physicians away from prescribing opioids and leave chronic pain patients to suffer without care.6

Illustration by Izhar Cohen for SKEPTIC

Law enforcement is responsible for making sure that doctors only prescribe opioids legally, which is no easy task. However, some physicians make it easy when they engage in behavior that is explicitly and undeniably criminal. These are the cases that make headlines. Opioids are illegal by default. Federal law gives doctors a special exemption to prescribe them for legitimate medical purposes, particularly pain. But how can a physician be legitimate if he has a parking lot filled with out-of-state license plates and a line of patients snaking around the building as if they are waiting to buy concert tickets? If he asks the patient to state his blood pressure while a brand-new blood pressure cuff hangs on the wall, unused? If he can’t tell the difference between a dog X-ray and a human one? 

Doctors are hard to investigate and even harder to prosecute. It is difficult for judges and juries to wrap their minds around the idea that physicians perpetrate crimes.

It sounds far-fetched, but in July 2012, Glendora, CA, police arrested physician Rolando Lodevico Atiga for prescribing powerful opioids to an undercover officer. The officer used a dog X-ray—with the tail clearly visible—to prove he had a bad back. Police Captain Tim Staab told CBS, “Either Sparky the dog really needs Percocet or this doctor is a drug dealer masquerading as a physician.”7 The medical board suspended Dr. Atiga’s license in August 2012.8 Then criminal proceedings were suspended in 2013 due to Dr. Atiga’s poor mental health and inability to stand trial.9

Doctors are hard to investigate and even harder to prosecute. It is difficult for judges and juries to wrap their minds around the idea that physicians perpetrate crimes. The image of the “dirty doctor” just doesn’t mesh with the popular image of “doctor as savior.” And many overdoses involve multiple drugs, making it hard to pin a death on a single drug or a single doctor.10 Still, over the past decade, judges and juries have put physicians behind bars. Law enforcement arrests scores of physicians for opioid crimes each year. They charge physicians with the same counts as illicit drug dealers: fraud, unlawful distribution, racketeering, manslaughter, and murder.11 Doctors are legally required to keep extensive records that investigators use to prove criminal activity. Physicians who avoid arrest still face steep penalties, such as losing their medical license, losing the ability to prescribe controlled substances, or paying a hefty fine. 

It was not always this way. As early as the mid-1990s, evidence showed that physicians were generously doling out opioids, but the first murder conviction did not occur until 2015.12 What happened over those twenty years that unleashed prosecutors’ power and helped them win cases against providers? The answer lies in organizational change, education, and technological innovation. New organizations centered on criminal healthcare providers cropped up, enforcement agents came together to share strategies, and Prescription Drug Monitoring Programs (PDMPs) spread across the nation, making it far easier to target physicians.

Reshaping the Enforcement Landscape 

A lot has changed since the days when pill mills popped up like weeds and law enforcement had no way to stop them. Enforcement agencies have responded to the opioid crisis with three strategies: (1) organizing task forces, (2) educating investigators, and (3) using PDMPs. Together, these efforts have made physician cases easier and faster to initiate, even if some challenges persist. 

Task forces are subunits of enforcement agencies that bring together individuals who have different resources and expertise to address a common goal. Federal agencies such as the Drug Enforcement Administration (DEA) and local agencies such as sheriffs’ departments have devoted themselves to physician cases by creating task forces centered on prescription opioids. DEA task forces do much of the heavy lifting, a major difference from decades ago. 

The DEA plays the biggest federal role in regulating opioids. The DEA’s Office of Diversion Control oversees registrants—physicians, pharmacies, hospitals, manufacturers, wholesalers, and drug distributors—who must register with the agency in order to provide controlled substances. The Controlled Substances Act (CSA) designates these registrants as part of a “closed system of distribution,” which means that the DEA tracks everyone who handles opioids along the supply chain and accounts for every transaction. The DEA monitors opioid transactions using the Automation of Reports and Consolidated Orders System (ARCOS), a database that tracks controlled substances all the way from manufacture to public distribution.13

“The message this case sends is … a lab coat and stethoscope are no shield.” 
—Deputy District Attorney John Niedermann

For decades, the Office of Diversion Control14 was considered a lesser part of the DEA, and the agents who worked for it—known as Diversion Investigators (DIs)—were treated as less important than Special Agents (SAs), who work for the Operations Division. The position of DI was originally created to relieve SAs from the burden of inspecting and auditing manufacturers and distributors of controlled substances as mandated by the CSA. Handing off those tasks to DIs freed SAs to focus on heroin and cocaine trafficking. This hierarchy persisted into the late 1990s, the heyday of opioid prescribing, when physicians treated pain as a fifth vital sign and were urged to treat it aggressively. With physicians and regulators on board with generous opioid prescribing, the diversion office found itself underfunded and understaffed. Laura Nagel, who was appointed head of the DEA’s Office of Diversion Control in 2000, led DIs who struggled to get resources and respect. Unaware of the giant opioid wave poised to crest a few short years later, SAs thought prescription opioids were nothing more than a child’s version of the hard drugs they pursued. 

That all changed in the early 2000s when, for the first time in U.S. history, Americans were more likely to overdose on prescription drugs than illegal ones.15 Suddenly, DIs were in high demand. In late 2006, the DEA created task forces called Tactical Diversion Squads. These included DIs, SAs, and Task Force Officers (TFOs), who are local police deputized to work with the DEA. DIs understood healthcare norms; SAs could arrest people; and TFOs had fine-grained knowledge of their communities. This arrangement created the organizational synergy needed to pursue doctors. 

Local agencies such as police departments and sheriff’s departments also created narcotics task forces that enabled them to exchange information with other local agencies. Members of such task forces can represent various police departments, the highway patrol, the district attorney’s office, the department of healthcare services, and the medical board. They may also ally with the FBI, the DEA, and the Food and Drug Administration (FDA). 

Federal and local agencies have complementary resources. Local police departments have insufficient funding to do provider cases, so they collaborate with federal law enforcement either formally by sending one of their officers to the DEA’s task force or informally by working cases with them. Federal agencies have more money and equipment. They can perform federal wire taps, which are expensive and require specialized technology. They can also afford expert witnesses, whose expertise is crucial in building a solid case against a doctor. Local agencies, on the other hand, have more agents, so they are better equipped to conduct undercover investigations and process the mountains of paperwork that a doctor case generates. 

Prescription Drug Monitoring Programs (PDMPs) have dramatically transformed the ways that investigators and prosecutors conduct cases against providers.

Task forces are only one site of information exchange. Enforcement agents have found various ways to break down information silos and thereby distribute knowledge. Years of failed attempts have taught investigators and prosecutors both what works, and what doesn’t. They know which questions to ask, which behaviors to look for, and which charges to bring. When task force members, eager to share what they had learned with others, lacked formal venues in which to do so, they got creative. 

Together, new organizations, new knowledge, and new technology expand law enforcement capacity. These changes are evident when we consider what investigation and prosecution look like today. Let’s turn to PDMPs as an example. 

Prescription Drug Monitoring Programs 

PDMPs have dramatically transformed the ways that investigators and prosecutors conduct cases against providers. New organizational developments paved the way for monitoring programs to have the greatest impact. Enforcement agencies’ impetus to investigate providers coincided with the arrival of technology that made those investigations easier and faster. Enforcement agents find both provider and patient data useful—the former because it shows patterns of providers’ behavior, and the latter because it helps law enforcement convince patients to become confidential informants in exchange for leniency in their own cases.

Healthcare providers have direct access to the database, but law enforcement access is more complicated. State laws restrict which enforcement agencies can get access and how. Some states give law enforcement direct access to data. In those states, enforcement agents have their own login to the system but can only legally access the data in the process of an active case, meaning that they are already investigating a specific crime. They can’t just search through the database to see what they find. Other states require law enforcement to request access from the agency that houses the PDMP, and the agency returns only information that is relevant to the case. Still other states require enforcement agents to obtain a warrant or a subpoena to access the data.16, 17 Regardless of how they get the information, PDMPs are a boon to law enforcement because they make tasks easier and more efficient. 

A prescription drug monitoring program (PDMP) is an electronic database that tracks controlled substance prescriptions in a state. (Source: CDC.gov)

Physician cases are reactive instead of proactive, which creates a barrier to starting an investigation. Enforcement agents say that they do not go out looking for bad doctors but find them through tips they receive from a patient, a parent, a healthcare provider, or another agency. They use information from tips to gather evidence and determine whether the case is worth pursuing. For a provider to come under law enforcement scrutiny, someone has to notice their behavior, feel compelled to do something, and know who to call.

The legwork necessary to investigate a physician traditionally posed a second barrier because investigators had to travel from pharmacy to pharmacy to gather the physician’s prescriptions. Now, thanks to the PDMP, that legwork has become deskwork. Instead of spending time on a potentially fruitless pharmacy expedition, enforcement agents simply look up the physician in the database or request access to information from the agency that controls it. Investigators can obtain a physician’s prescribing history, analyze prescribing patterns, and link their findings to other databases without setting foot outside the office. 
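To illustrate what that deskwork might look like, here is a purely hypothetical sketch in Python. The schema, table, and screening threshold are all invented for illustration; real PDMPs vary by state and are accessible only under the legal constraints described above:

```python
# Hypothetical illustration of a prescriber-pattern query; not a real PDMP API.
import sqlite3

conn = sqlite3.connect("pdmp_demo.db")  # invented demo database
conn.execute("""CREATE TABLE IF NOT EXISTS prescriptions (
    prescriber_id TEXT, patient_id TEXT, drug TEXT,
    mme_per_day REAL,   -- morphine milligram equivalents per day
    fill_date TEXT)""")

# Surface prescribers whose average daily opioid dose across all fills
# exceeds an arbitrary screening threshold (90 MME/day is illustrative).
rows = conn.execute("""
    SELECT prescriber_id, COUNT(*) AS fills, AVG(mme_per_day) AS avg_mme
    FROM prescriptions
    GROUP BY prescriber_id
    HAVING avg_mme > 90
    ORDER BY avg_mme DESC
""").fetchall()

for prescriber_id, fills, avg_mme in rows:
    print(f"{prescriber_id}: {fills} fills, average {avg_mme:.0f} MME/day")
```

A report like this is only a screening tool, not proof of wrongdoing.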

Physician cases are decidedly unsexy. There are no drugs. There are no guns. There is paperwork. Stacks and stacks of paperwork.

PDMP data are a starting point. They do not make a case alone. Investigators examine the data from various angles and try to come up with alternative explanations for the patterns they see.

PDMPs also have their drawbacks. Investigators can use the database to track physicians, but a smart criminal physician also uses the database to monitor their patients and identify potential undercover investigators. People who are addicted to or diverting medications usually have a long PDMP report because they are actively trying to obtain opioids from various physicians. Undercover agents do not have a report at all, so running a report is a way to root out narcs. Knowing this, law enforcement finds ways to create fake reports so that they blend in with other patients. Overall, PDMPs benefit law enforcement because they improve the speed and accuracy of their investigations. Better investigations lead to more successful prosecutions (that is, a greater percentage of convictions). 

The War on Drug Doctors 

Drug cases capture media attention for a reason. Whether on popular TV shows or the evening news, drug cases are sexy. Towering bags of confiscated drugs and arrays of automatic rifles captivate audiences. This stagecraft also helps to justify the War on Drugs. Props such as drugs and guns show that the “bad guys,” the drug dealers, are armed and dangerous. They also show how desperately we need the “good guys,” the investigators and prosecutors, to keep the bad guys off the street. 

Photo by Wesley Tingey / Unsplash

By comparison, physician cases are decidedly unsexy. There are no drugs. There are no guns. There is paperwork. Stacks and stacks of paperwork. Not only do prosecutors have to prove to judges and juries that doctors—professionals revered as pillars of our society—are criminals, but they have to do so using something as uninspiring as paperwork. It’s a tough sell.

This essay was excerpted and adapted by the author from Policing Patients: Treatment and Surveillance on the Frontlines of the Opioid Crisis. Copyright © 2024 by Elizabeth Chiarello. Reprinted by permission of Princeton University Press.

Categories: Critical Thinking, Skeptic
