The first of a set of groundbreaking Cluster satellites is set for a controlled reentry next week.
The European Space Agency is paving the way in controlled reentry technology. ESA recently announced that plans to terminate the first of four Cluster satellites is about to come to fruition in early September, with the reentry of Salsa.
The ReentrySalsa is one of four dance-themed Cluster satellites. The other three are Rumba, Samba and Tango. ESA controllers used the remaining thruster fuel on the spacecraft back in January to lower the perigee of the mission down to around 100 kilometers, which will assure destructive reentry for the 550 kilogram satellite over the South Pacific on or around September 8th. The area the satellite will meet its demise is known as ‘Point Nemo’ or the Pacific Ocean Uninhabited Area. The region has seen several large reentries over the years, including the Mir space station and ESA’s Automated Transfer Vehicle. The region will likely see the demise of the International Space Station sometime around 2030.
Salsa’s final reentry track. Credit: ESA“By studying how Salsa burns up, which parts might survive, for how long and in what state, we will learn much about how to build ‘zero debris’ satellites,” says Tim Flohrer, (ESA-Space Debri Office) in a recent press release.
ESA’s Malarguee tracking station in Argentina. Credit: ESA A Pioneering MissionESA designed the Cluster mission to explore space weather interactions with the Earth’s magnetic environment as the four spacecraft fly in a tetrahedral configuration through the planet’s magnetosphere. The four spacecraft fly out to a distant apogee of about 117,000 kilometers (over three times farther out versus geosynchronous orbit), and orbit the Earth once every 54 hours.
Anatomy of Cluster-Salsa’s orbital trajectory and reentry. Credit: ESALaunched in the summer of 2000, the Cluster satellites had a 5-year nominal mission, which lasted well over two decades. The missions have since proven to be pioneers in space weather research. The mission also escaped glitches and software failures over the years, including a bug requiring a “dirty hack” in 2010. Cluster II was also a replacement for the original set of Cluster satellites, which were lost on the inaugural launch of the Ariane-5 rocket on June 4th, 1996 from the Kourou Space Center. The mission ended in an explosion 37 seconds after liftoff.
Cluster satellites in the clean room at Baikonur ahead of encapsulation and launch. Credit: ESA Controlling ReentriesThis sort of ‘targeted reentry’ for a long duration mission is one of the first of its kind for ESA. The zero-debris conclusion to the mission exceeds international standards. Furthermore, it also addresses issues surrounding the mitigation of debris in low Earth orbit. On Earth, ESA’s worldwide Estrack network will follow Salsa during its final orbits, and an airborne campaign is underway to spot the final reentry. ESA made a similar effort to image the Aeolus satellite in 2023, shortly before reentry.
Engineers will apply a similar technique to the SMILE (Solar wind Magnetosphere Ionosphere Link Explorer) and Proba-3 missions. These are also set to enter a similar far-ranging orbit around the Earth. SMILE is the follow-on mission to Cluster, and is launching in late 2025. ESA will launch the Proba-3 solar observatory next month. The mission will feature a free-flying, solar eclipsing disk.
You can spot the cluster satellites including Salsa on their final days. Salsa is COSPAR ID 2000-041A/26411in the NORAD satellite catalog, and listed in Heavens-Above. The satellites reach naked eye visibility on a good perigee pass.
After the demise of Salsa, Rumba will also reenter in November of next year, followed by Tango and Samba in August 2026.
While this is the ‘Last Dance’ for Salsa, the efforts to study space weather and come to terms with space debris continue.
Follow @ESAOperations and @ESA_Cluster on Twitter for the latest updates on Salsa leading up to reentry.
The post ESA Cluster Satellite to Reenter in Early September appeared first on Universe Today.
Over the past decade behavioral science, particularly psychology, has come under fire from critics for being fixated on progressive political ideology, most notably Diversity, Equity, and Inclusion (DEI). The critics’ evidence is, unfortunately, quite strong. For example, a recent volume, Ideological and Political Bias in Psychology,1 recounts many incidents of scholarly censorship and personal attacks that a decade ago might have only been conceivable as satire.
We believe that many problems plaguing contemporary behavioral science, especially for issues touching upon DEI, can best be understood, at their root, as a failure to adhere to basic scientific principles. In this essay, we will address three fundamental scientific principles: (1) Prioritize Objective Data Over Lived Experience; (2) Measure Well; and (3) Distinguish Appropriately Between Correlation and Causation. We will show how DEI scholarship often violates those principles, and offer suggestions for getting behavioral science back on track. “Getting back to the basics” may not sound exciting but, as athletes, musicians, and other performers have long recognized, reinforcing the fundamentals is often the best way to eliminate bad habits in order to then move forward.
The Failure to Adhere to Basic Scientific PrinciplesA foundational assumption of science is that objective truth exists and that humans can discover it.2, 3, 4, 5 We do this most effectively by proposing testable ideas about the world, making systematic observations to test the ideas, and revising our ideas based on those observations. A crucial point is that this process of proposing and testing ideas is open to everyone. A fifth grader in Timbuktu, with the right training and equipment, should be able to take atmospheric observations that are as valuable as those of a Nobel Prize-winning scientist from MIT. If the fifth grader’s observations are discounted, this should only occur because their measurement methods were poor, not because of their nationality, gender, age, family name, or any other personal attribute.
A corollary of science being equally open to all is that an individual’s personal experience or “lived experience” carries no inherent weight in claims about objective reality. It is not that lived experience doesn’t have value; indeed, it has tremendous value in that it provides a window into individuals’ perceptions of reality. However, perception can be wildly inaccurate and does not necessarily equate to reality. If that Nobel Prizewinning scientist vehemently disputed global warming because his personal experience was that temperatures have not changed over time, yet he provided no atmospheric measurements or systematic tests of his claim, other scientists would rightly ignore his statements—at least as regards the question of climate change.
The limited utility of a person’s lived experience seems obvious in most scientific disciplines, such as in the study of rocks and wind patterns, but less so in psychology. After all, psychological science involves the study of people—and they think and have feelings about their lived experiences. However, what is the case in other scientific disciplines is also the case in psychological science: lived experience does not provide a foolproof guide to objective reality.
To take an example from the behavioral sciences, consider the Cambridge-Somerville Youth Study.6 At-risk boys were mentored for five years, from the ages of 10 to 15. They participated in a host of programs, including tutoring, sports, and community groups, and were given medical and psychiatric care. Decades later, most of those who participated claimed the program had been helpful. Put differently, their lived experience was that the program had a positive impact on their life. However, these boys were not any better in important outcomes relative to a matched group of at-risk boys who were not provided mentoring or extra support. In fact, boys in the program ended up more likely to engage in serious street crimes and, on average, they died at a younger age. The critical point is that giving epistemic authority to lived experience would have resulted in making inaccurate conclusions. And the Cambridge-Somerville Youth Study is not an isolated example. There are many programs that people feel are effective, but when tested systematically turn out to be ineffective, at best. These include programs like DARE,7 school-wide mental health interventions,8 and—of course—many diversity training programs.9
DEI over-reach in behavioral science is intimately related to a failure within the scientific community to adhere to basic principles of science and appreciate important findings from the behavioral science literature.
Indeed, when it comes to concerns related to DEI, the scientific tenet of prioritizing testable truth claims over lived experience has often fallen to the wayside. Members of specific identity groups are given privilege to speak about things that cannot be contested by those from other groups. In other words, in direct contradiction of the scientific method, some people are granted epistemic authority based solely on their lived experience.10
Consider gender dysphoria. In the past decade, there has been a drastic increase in the number of people, particularly children and adolescents, identifying as transgender. Those who express the desire to biologically transition often describe their lived experience as feeling “born in the wrong body,” and express confidence that transition will dramatically improve their lives. We argue while these feelings must be acknowledged, they should not be taken as objective truth; instead, such feelings should be weighed against objective data on life outcomes of others who have considered gender transition and/or transitioned. And those data, while limited, suggest that many individuals who identify as transgender during childhood, but who do not medically transition, eventually identify again with the gender associated with their birth sex.11, 12 Although these are small, imperfect studies, they underscore that medical transition is not always the best option.
Caution in automatically acceding to a client’s preference to transition is particularly important among minors. Few parents and health care professionals would affirm a severely underweight 13-year-old’s claim that, based on their lived experience, they are fat and will only be happy if they lose weight. Nevertheless, many psychologists and psychiatrists make a similar mistake when they affirm a transgender child’s desire to transition without carefully weighing the risks. In one study, 65 percent of people who had detransitioned reported that their clinician, who often was a psychologist, “did not evaluate whether their desire to transition was secondary to trauma or a mental health condition.”13 The concern, in other words, is that lived experience is being given too much weight. How patients feel is important, but their feelings should be only one factor among many, especially if they are minors. Mental health professionals should know this, and parents should be able to trust them to act accordingly.
Principle #2: Measure WellAnother basic principle of behavioral science is that anything being measured must be measured reliably and validly. Reliability refers to the consistency of measurement; validity refers to whether the instrument is truly measuring what it claims to measure. For example, a triple beam balance is reliable if it yields the same value when repeatedly measuring the same object. The balance is valid if it yields a value of exactly 1 kg when measuring the reference kilogram (i.e., the International Prototype of the Kilogram), a platinum-iridium cylinder housed in a French vault under standardized conditions.
Behavioral scientists’ understanding of any concept is constrained by the degree to which they can measure it consistently and accurately. Thus, to make a claim about a concept, whether about its prevalence in a population or its relation to another concept, scientists must first demonstrate both the reliability and the validity of the measure being used. For some measures of human behavior, such as time spent listening to podcasts or number of steps taken each day, achieving good reliability and validity is reasonably straightforward. Things are generally more challenging for the self-report measures that psychologists often use.
Nevertheless, good measurement can sometimes be achieved, and the study of personality provides a nice model. In psychology, there are several excellent measures of the Big Five personality factors (Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness).14 Individuals’ responses are highly reliable: people who rate themselves as highly extraverted as young adults rate themselves similarly years later. Moreover, personality assessments are valid: individuals’ responses correlate with their actual day-to-day behaviors, as reported by themselves and as observed by others.15 In other words, people who rate themselves as high (versus low) in extroversion on psychological questionnaires, for example, really do spend more time socializing.
However, not all psychological measures turn out to have solid reliability and validity. These include the popular Myers Briggs Type Indicator personality test and projective tests such as the Rorschach. Unfortunately, in the quest to support DEI, some concepts that fail the requirements of good measurement are used widely and without reservation. The concept of microaggressions, for example, has gained enormous traction despite its having fundamental measurement issues.
“Microaggressions” were brought to psychologists’ attention by Derald Wing Sue and colleagues.16 Originally described as “brief and commonplace daily verbal, behavioral, or environmental indignities, whether intentional or unintentional, that communicate hostile, derogatory, or negative racial slights and insults toward people of color” (p. 271),17 the concept has since expanded in use to describe brief, verbal or nonverbal, indignities directed toward a different “other.”18, 19
In 2017, Scott Lilienfeld discussed how the failure to adhere to the principles of good measurement has rendered the concept of microaggression “wide open,” without any clear anchors to reality.20 The primary weakness for establishing validity, that is, for establishing evidence of truly measuring what scientists claim to be measuring, is that “microaggression” is defined in the eye of the beholder.21 Thus, any person at any point can say they have been “microaggressed” against, and no one can test, let alone refute, the claim because it is defined solely by the claimant’s subjective appraisal—their lived experience.
As Scott Lilienfeld explained, the end result is that essentially anything, including opposing behaviors (such as calling on a student in class or not calling on a student in class) can be labeled a microaggression. A question such as, “Do you feel like you belong here?” could be perceived as a microaggression by one person but not by someone else; in fact, even the same person can perceive the same comment differently depending on their mood or on who asks the question (which would indicate poor reliability). Our criticism of microaggressions, then, spans concerns related to both weak measurement and an undue reliance on lived experience.
Another of psychology’s most famous recent topics is the Implicit Association Test (IAT), which supposedly reveals implicit, or subconscious, bias. The IAT measures an individual’s reaction times when asked to classify pictures or text spatially. A video22 may be the best way to appreciate what is happening in the IAT, but the basic idea is that if a person more quickly pairs pictures of a Black person than those of a White person with a negative word (for example, “lazy” or “stupid”) then they have demonstrated their unconscious bias against Black people. The IAT was introduced by Anthony Greenwald and colleagues in the 1990s.23 They announced that their newly developed instrument, the race IAT, measures unconscious racial prejudice or bias and that 90 to 95 percent of Americans, including many racial minorities, demonstrated such bias. Since then, these scholars and their collaborators (plus others such as DEI administrators) have enjoyed tremendous success advancing the claim that the race IAT reveals pervasive unconscious bias that contributes to society-wide discrimination.
Despite its immense influence, the IAT is a flawed measure. Regarding reliability, the correlation between a person’s response when taking the test at two different times hovers around 0.5.24 This is well below conventionally acceptable levels in psychology, and far below the test-retest reliabilities for accepted personality and cognitive ability measures, which can reach around .8, even when a person takes the tests decades later.25, 26
The best path forward is to get back to the basics: understand the serious limitations of lived experience, focus on quality measurement, and be mindful of the distinction between correlation and causation.
As for the IAT’s validity, nobody has convincingly shown that patterns of reaction times actually reflect “unconscious bias” (or “implicit prejudice”) as opposed to cultural stereotypes.27 Moreover, in systematic syntheses of published studies, the association between scores on the race IAT and observations or measurements of real-world biased behavior is inconsistent and weak.28, 29 In other words, scores on the IAT do not meaningfully correlate with other ways of measuring racial bias or real life manifestations of it.
Principle #3: Distinguish Appropriately Between Correlation and Causation“Correlation does not equal causation” is another basic principle of behavioral science (indeed, all science). Although human brains seem built to readily notice and even anticipate causal connections, a valid claim that “X” has a causal effect on “Y” needs to meet three criteria, and a correlation between X and Y is only the first. The second criterion is that X precedes Y in time. The third and final criterion is the link between X and Y is not actually due to some other variable that influences both X and Y (“confounders”). To test this final point, researchers typically need to show that when X is manipulated in an experiment, Y also changes.
Imagine, for instance, that a researcher asks students about their caffeine intake and sleep schedule, and upon analyzing the data finds that students’ caffeine consumption is negatively correlated with how much they sleep—those who report consuming more caffeine tend to report sleeping less. This is what many psychologists call correlational research (or associational or observational research). These correlational data could mean that caffeine consumption reduces sleep time, but the data could also mean that a lack of sleep causes an increase in caffeine consumption, or that working long hours causes both a decrease in sleep and an increase in caffeine. To make the case that caffeine causes poor sleep, the researcher must impose, by random assignment, different amounts of caffeine on students to determine how sleep is affected by varying doses. That is, the researcher would conduct a true experiment.
Distinguishing between correlation and causation is easier said in the abstract than practiced in reality, even for psychological scientists who are specifically trained to make the distinction.30 Part of the difficulty is that in behavioral science, many variables that are generally thought of as causal cannot be manipulated for ethical or practical reasons. For example, researchers cannot impose neglect (or abuse, corporal punishment, parental divorce, etc.) on some children and not others to study how children are affected by the experience. Still, absent experiments, psychologists bear the responsibility of providing converging, independent lines of evidence that indicate causality before they draw a causal conclusion. Indeed, scientists did this when it came to claiming that smoking causes cancer: they amassed evidence from national datasets with controls, discordant twin designs, correlational studies of exposure to second-hand smoke, non-human experiments, and so on—everything but experiments on humans—before coming to a consensus view that smoking causes cancer in humans. Our point is that investigating causal claims without true experiments is possible, but extremely difficult and time consuming.
That said, the conflation of correlation with causation seems especially prevalent when it comes to DEI issues. In the context of microaggressions, for example, a Google search quickly reveals many scholars claiming that microaggressions cause psychological harm. Lilienfeld has been a rare voice suggesting that it is dangerous to claim that microaggressions cause mental health issues when there are no experimental data to support such a claim. Moreover, there is a confounding variable that predicts both (1) perceiving oneself as having been “microaggressed” against and (2) struggling with one’s mental health—namely, the well-documented personality trait of neuroticism. In other words, individuals who are prone to experience negative emotions (those who are high in neuroticism) often perceive that more people try to inflict harm on them than actually do, and these same individuals also struggle with mental health.
Assuming we were able to develop a workable definition of “microaggressions,” what would a true experiment look like? An experiment would require that participants be exposed to microaggressions (or not), and then be measured or observed for indications of psychological harm. There are valid ethical concerns for such a study, but we believe it can be done. There is a lengthy precedent in psychological research where temporary discomfort can be inflicted with appropriate safeguards. For instance, a procedure called the “trier social stress test” (TSST) is widely used, where participants make a speech with little preparation time in front of judges who purposefully avoid any non-verbal reaction. This is followed by a mental arithmetic task.31 If the TSST is acceptable for use in research, then it should also be acceptable to expose study participants to subtle slights.
This fallacy of equating correlation with causation also arises in the context of gender transitioning and suicide. To make the point that not being able to transition is deeply damaging, transgender individuals, and sometimes their professional supporters, may ask parents something such as, “would you rather have a dead daughter or a living son?” One logical flaw here is in assuming that because gender distress is associated with suicidal ideation, then the gender distress must be causing the suicidal ideation. However, other psychological concerns, such as depression, anxiety, trauma, eating disorders, ADHD, and autism, could be causing both the gender distress and the suicidal ideation—another case of confounding variables. Indeed, these disorders occur more frequently in individuals who identify as transgender. Thus, it is quite possible that someone may suffer from depression, and this simultaneously raises their likelihood of identifying as transgender and of expressing suicidal ideation.
It is not possible (nor would it be ethical if possible) to impose gender identity concerns on some children and not others to study the effect of gender dysphoria on suicidality. However, at this point, the correlational research that does exist has not offered compelling evidence that gender dysphoria causes increased suicidality. Studies have rarely attempted to rule out third variables, such as other mental health diagnoses. The few studies that have tried to control for other variables have yielded mixed results.32, 33 Until researchers have consistently isolated gender dysphoria as playing an independent role in suicidality, they should not claim that gender dysphoria increases suicide risk.
Over three decades ago, the psychologist David Lykken wrote, “Psychology isn’t doing very well as a scientific discipline and something seems to be wrong somewhere” (p. 3).34 Sadly, psychology continues to falter; in fact, we think it has gotten worse. The emotional and moral pull of DEI concerns are understandable but they may have short-circuited critical thinking about the limitations of lived experience, the requirement of using only reliable and valid measurement instruments, and the need to meet strict criteria before claiming that one variable has a causal influence on another variable.
DEI Concepts Contradict Known Findings about Human CognitionThe empirical bases for some DEI concepts contradict social scientific principles. Additionally, certain DEI ideas run counter to important findings about human nature that scientists have established by following the required scientific principles. We discuss three examples below.
Out-Group AntipathyHumans are tribal by nature. We have a long history of living in stable groups and competing against other groups. Thus, it’s no surprise that one of social psychology’s most robust findings is that in-group preferences are powerful and easy to evoke. For example, in studies where psychologists create in-groups and out-groups using arbitrary criteria such as shirt color, adults and children alike have a large preference for their group members.35, 36 Even infants prefer those who are similar to themselves37 and respond preferentially to those who punish dissimilar others.38
Constructive disagreement about ideas should be encouraged rather than leveraged as an excuse to silence those who may see the world differently.
DEI, although generally well-intentioned, often overlooks this tribal aspect of our psychology. In particular, in the quest to confront the historical mistreatment of certain identity groups, it often instigates zero-sum thinking (i.e., that one group owes a debt to another; that one group cannot gain unless another loses). This type of thinking will exacerbate, rather than mitigate, animosity. A more fruitful approach would emphasize individual characteristics over group identity, and the common benefits that can arise when all individuals are treated fairly.
ExpectanciesWhen people expect to feel a certain way, they are more likely to experience that feeling.39, 40 Thus, when someone, especially an impressionable teenager or young adult, is told that they are a victim, the statement (even if true) is not merely a neutral descriptor. It can also set up the expectation of victimhood with the downstream consequence of making one feel themselves to be even more of a victim. DEI microaggression workshops may do exactly this—they prime individuals to perceive hostility and negative intent in ambiguous words and actions.41 The same logic applies to more pronounced forms of bigotry. For instance, when Robin DiAngelo describes “uniquely anti-black sentiment integral to white identity” (p. 95),42 the suggestion that White people are all anti-Black might have the effect of exacerbating both actual and perceived racism. Of course, we need to deal honestly with any and all racism when it does exist, but it is also important to understand potential costs of exaggerating such claims. Expectancy effects might interact with the “virtuous victim effect,” wherein individuals perceive victims as being more moral than non-victims.43, 44 Thus, there can be a social value gained simply in presenting oneself as a victim.
Cognitive BiasesCognitive biases are one of the most important and well-replicated discoveries of the behavioral sciences. It is therefore troubling that, in the discussion of DEI topics, psychologists often fall victim to those very biases.
A striking example is the American Psychological Association’s (APA) statement shortly after the death of George Floyd, which provides a textbook illustration of the availability bias, the tendency to overvalue evidence that easily comes to mind. The APA, the largest psychological organization in the world, asserted after Floyd’s death that “The deaths of innocent black people targeted specifically because of their race—often by police officers—are both deeply shocking and shockingly routine.”45 How “shockingly routine” are they? According to the Washington Post database of police killings, in 2020 there were 248 Black people killed by police. By comparison, over 6,500 Black people were killed in traffic fatalities that year—a 26-fold difference.46 Also, some portion of those 248 victims were not innocent—given that 216 were armed, some killings would probably have been an appropriate use of force by the police to defend themselves or others. Some portion was also not killed specifically because of their race. So why would the APA describe a relatively rare event as “shockingly routine”? This statement came in the aftermath of the widely publicized police killings of Floyd and those of Ahmaud Arbery and Breonna Taylor. In other words, these rare events were seen as common likely because widespread media coverage made them readily available in our minds.
Unfortunately, the APA also recently fell prey to another well-known bias, the base rate fallacy, where relevant population sizes are ignored. In this case, the APA described new research that found “The typical woman was considered to be much more similar to a typical White woman than a typical Black woman.”47 Although not stated explicitly, the implication seems to be that, absent racism, the typical woman would be roughly midway between typical White woman and typical Black woman. That is an illogical conclusion given base rates. In the U.S., White people outnumber Black people by roughly 5 to 1; hence the typical woman should be perceived as more similar to a typical White woman than to a typical Black woman.
What Happened? Some Possible CausesAt this stage, we expect that many readers may be wondering how it can be that social scientists regularly violate basic scientific principles—principles that are so fundamental that these same social scientists routinely teach them in introductory courses. One possible reason is myside bias, wherein individuals process information in a way that favors their own “team.” For example, in the case of the race Implicit Association Test, proponents of the IAT might more heavily scrutinize the methodology of studies that yield negative results compared to those that have yielded their desired results. Similarly, although lived experience is a limited kind of evidence, it certainly is a source of evidence, and thus scholars may elevate its importance and overlook its limitations when doing so bolsters their personal views.
A related challenge facing behavioral scientists is that cognitive biases are universal and ubiquitous—everyone, including professional scientists, is susceptible.48 In fact, one might say that the scientific method, including the three principles we emphasize here, is an algorithm (i.e., a set of rules and processes) designed to overcome our eternally pervasive cognitive biases.
A third challenge confronting behavioral scientists is the current state of the broader scientific community. Scientific inquiry works best when practiced in a community adhering to a suite of norms, including organized skepticism, that incentivize individuals to call out each other’s poor practices.49, 50 In other words, in a healthy scientific community, if a claim becomes widely adopted without sufficient evidence, or if a basic principle is neglected, a maverick scientist would be rewarded for sounding the alarm by gaining respect and opportunities. Unfortunately, the scientific community does not act this way with respect to DEI issues, perhaps because the issues touch widely held personal values (e.g., about equality between different groups of people). If different scientists held different values, there would probably be more healthy skepticism of DEI topics. However, there is little ideological diversity within the academy. In areas such as psychology, for example, liberal-leaning scholars outnumber conservative-leaning scholars by at least 8 to 1, and in some disciplines the ratio is 20 to 1 or even more.51, 52 A related concern is that these values are more than just personal views. They often seem to function as sacred values, non-negotiable principles that cannot be compromised and only questioned at risk to one’s status within the community.
From this perspective,53 it is easy to see how those who question DEI may well face moral outrage, even if (or maybe especially if) their criticisms are well-founded. The fact that this outrage sometimes translates into public cancellations is extremely disheartening. Yet there are likely even more de facto cancellations than it seems. Someone can be cancelled directly or indirectly. Indirect cancellations can take the form of contract nonrenewal, pressure to resign, or having one’s employer dig for another offense to use as the stated grounds of forcing someone out of their job. This latter strategy is a very subtle, yet no less insidious, method of cancellation. As an analogy, it is like a police officer following someone with an out-of-state license plate and then pulling the car over when they fail to use a turn signal. An offense was committed, but the only reason the offense was observed in the first place is because the officer was looking for a reason to make the stop and therefore artificially enhanced the time window in which the driver was being scrutinized. The stated reason for the stop is failure to signal; the real reason is the driver is from out of town. Whether direct or indirect, the key to a cancellation is that holding the same job becomes untenable after failing to toe the party line on DEI topics.
It is against this backdrop that DEI scholarship is conducted. Academics fear punishment (often subtle) for challenging DEI research. Ideas that cannot be freely challenged are unfalsifiable. Those ideas will likely gain popularity because the marketplace of ideas becomes the monopoly of a single idea. An illusory consensus can emerge about a complex area for which reasonable, informed, and qualified individuals have highly differing views. An echo chamber created by forced consensus is the breeding ground for bad science.
How to Get Behavioral Science Back on TrackWe are not the first ones to express concern about the quality of science in our discipline.54, 55 However, to our knowledge, we are the first to discuss how DEI over-reach goes hand-in-hand with the failure to engage in good science. Nonetheless, this doesn’t mean it can’t be fixed. We offer a few suggestions for improvement.
First, disagreement should be normalized. Advisors should model disagreement by presenting an idea and explicitly asking their lab members to talk about its weaknesses. We need to develop a culture where challenging others’ ideas is viewed as an integral (and even enjoyable) part of the scientific process, and not an ad hominem attack.
Second, truth seeking must be re-established as the fundamental goal of behavioral science. Unfortunately, many academics in behavioral science seem now to be more interested in advocacy than science. Of course, as a general principle, faculty and students should not be restricted from engaging in advocacy. However, this advocacy should not mingle with their academic work; it must occur on their own time. The tension between advocacy and truth seeking is that advocates, by definition, have an a priori position and are tasked with convincing others to accept and then act upon that belief. Truth seekers must be open to changing their opinion whenever new evidence or better analyses demand it.
To that end, we need to resurrect guardrails that hold students accountable for demonstrating mastery of important scientific concepts, including those described above, before receiving a PhD. Enforcing high standards may sound obvious, but actually failing students who do not meet those standards is an exclusionary practice that might be met with resistance.
Another intriguing solution is to conduct “adversarial collaborations,” wherein scholars who disagree work together on a joint project.56 Adversarial collaborators explicitly spell out their competing hypotheses and together develop a method for answering a particular question, including the measures and planned analyses. Stephen Ceci, Shulamit Kahn, and Wendy Williams,57 for example, engaged in an adversarial collaboration that synthesized evidence regarding gender bias in six areas of academic science, including hiring, grant funding, and teacher ratings. They found evidence for gender bias in some areas but not others, a finding that should prove valuable in decisions about where to allocate resources.
In conclusion, we suggest that DEI over-reach in behavioral science is intimately related to a failure within the scientific community to adhere to basic principles of science and appreciate important findings from the behavioral science literature. The best path forward is to get back to the basics: understand the serious limitations of lived experience, focus on quality measurement, and be mindful of the distinction between correlation and causation. We need to remember that the goal of science is to discover truth. This requires putting ideology and advocacy aside while in the lab or classroom. Constructive disagreement about ideas should be encouraged rather than leveraged as an excuse to silence those who may see the world differently. The scientific method requires us to stay humble and accept that we just might be wrong. That principle applies to all scientists, including the three authors of this article. To that end, readers who disagree with any of our points should let us know! Maybe we can sort out our differences—and find common ground— through an adversarial collaboration.
The views presented in this article are solely those of the authors. They do not represent the views of any author’s employer or affiliation.
About the AuthorApril Bleske-Rechek is a Professor of Psychology at the University of Wisconsin-Eau Claire. Her teaching and research efforts focus on scientific reasoning and individual and group differences in cognitive abilities, personality traits, and relationship attitudes.
Michael H. Bernstein is an experimental psychologist and an Assistant Professor at Brown University. His research focuses on the overlap between cognitive science and medicine. He is co-editor of The Nocebo Effect: When Words Make You Sick.
Robert O. Deaner is a Professor of Psychology at Grand Valley State University. He teaches courses on research methods, sex differences, and evolutionary psychology. His research addresses sex differences in competitiveness.
ReferencesI am not the first to say this but it bears repeating – it is wrong to use the accusation of a mental illness as a political strategy. It is unfair, stigmatizing, and dismissive. Thomas Szasz (let me say straight up – I am not a Szaszian) was a psychiatrist who made it his professional mission to make this point. He was concerned especially about oppressive governments diagnosing political dissidents with mental illness and using that as a justification to essentially imprison them.
Szasz had a point (especially back in the 1960s when he started making it) but unfortunately took his point way too far, as often happens. He decided that mental illness, in fact, does not exist, and is 100% political oppression. He took a legitimate criticism of the institution of mental health and abuse by oppressive political figures and systems and turned it into science denial. But that does not negate the legitimate points at the core of his argument – we should be careful not to conflate unpopular political opinions with mental illness, and certainly not use it as a deliberate political strategy.
While the world of mental illness is much better today (at least in developed nations), the strategy of labeling your political opponents as mentally ill continues. I truly sincerely wish it would stop. For example, in a recent interview on ABC, senator Tom Cotton was asked about some fresh outrageous thing Trump said, criticism of which Cotton waved away as “Trump Derangement Syndrome”.
Sorry, but you cannot just make up a new mental illness by framing it in clinical-sounding terminology. There is no such syndrome, and it is supremely insulting and dismissive to characterize political disagreements as “derangement”. Szasz should be turning over in his grave. This is, of course, and ad hominem strategy – attacking the person rather than the argument. “Oh, you just feel that way because you are suffering from some derangement, you poor thing.” This also can cut both ways – I have heard some on the left argue that it is, in fact, those who support Trump who are suffering from TDS. Some may justify this as “turnabout is fair play”, but in this case it isn’t. Don’t play their game, and don’t endorse the underlying strategy of framing political disagreements as mental illness.
Sometimes accusations are leveled at individuals, which in some ways is worse than accusing half the country of derangement. The most recent episode of this to come to my attention stems from the Arlington cemetery controversy. When the Trump campaign was asked for a statement, this is what they gave:
“The fact is that a private photographer was permitted on the premises and for whatever reason an unnamed individual, clearly suffering from a mental health episode, decided to physically block members of President Trump’s team during a very solemn ceremony,” Cheung said in the statement. (Steven Cheung is a Trump campaign spokesman)
Was it really clear that they were having a “mental health episode”? Why not just say they were “hysterical” and be done with it. By all accounts the person in question acted completely professionally the entire time, and when they were physical pushed aside decided to deescalate the conflict and stand down. My point is not to relitigate the controversy itself, but to point out the casual use of an accusation of mental illness as a political tool. It apparently is not enough to say someone was rude, unprofessional, or inappropriate – you have to accuse them of a “mental health episode” to truly put them in their place.
I am also uncomfortable with the way in which both ends of the political spectrum have characterized the other candidate, in this and previous election cycles. It is one thing to raise legitimate concerns about the “fitness” of a candidate, whether due to the effects of age, or their apparent personality and moral standing. But I think it is inappropriate and harmful to start speculating about actual neurological or psychological diagnoses. This is meant to lend weight and gravitas to the accusation. However, either you are not qualified to make such diagnosis, in which case you shouldn’t, or you are qualified to make such diagnoses, in which case you shouldn’t, although for different reasons. Actual professionals generally abstain from making public diagnoses without the benefit of an actual clinical exam, and those who perhaps have made a clinical exam are then bound by confidentiality. Non-professionals should stay out of the diagnosis business.
It’s best to be conscious of this, and to frame whatever political criticism you have in terms other than mental or neurological illness. Casual accusations of mental illness are cheap and gratuitous, and exist on a spectrum that begins with dismissiveness and includes the abuse of mental illness for political oppression.
The post Accusation of Mental Illness as a Political Strategy first appeared on NeuroLogica Blog.
Vaccines save lives and money according to a recently published CDC report. This shouldn't come as a surprise, but it's still nice to see.
The post Vaccines: Saving Lives and Money for Over 200 Years first appeared on Science-Based Medicine.Meanwhile, in Dobrzyn, Hili is being logical:
Hili: You have to leave the rest of the nuts to the squirrels.
A: Why?
Hili: All the containers are already full.
Hili: Musicie zostawić resztę orzechów wiewiórkom.
Ja: Czemu?
Hili: Już wszystkie pojemniki są wypełnione.
As it’s name suggests, dark matter is dark! That means it’s largely invisible to us and only detectable through its interaction with gravity. One of the leading theories to explain the stuff that makes up the majority of the matter in the Universe are WIMPs, Weakly Interacting Massive Particles. They are just theories though and none have been detected. An exciting new experiment called LUX-ZEPLIN has just completed 280 days of collecting data but still, no WIMPs have been detected above 9 Gev/c2. There are plans though to narrow the search.
The concept of dark matter was first proposed by Fritz Zwicky in 1930’s who noticed that galaxies were moving too fast to be held together by ‘normal’ visible matter. His work was expanded upon by Vera Rubin in the 1970’s who confirmed that stars around the outer regions of galaxies were moving faster than expected. The conclusion of these observations is that there was some form of invisible matter making up about 85% of the mass of the Universe. New results from LUX-ZEPLIN are homing in on the WIMP model to describe the nature of dark matter.
Fritz Zwicky. Image Source: Fritz Zwicky Stiftung websiteLUX-ZEPLIN is an instrument designed to detect dark matter. Located 1.6km underground at the Sanford Underground Research Facility in South Dakota, it’s a massive tank filled with 10 tons of liquid xenon waiting and watching for tiny interactions between dark matter and normal particles. It is shielded from cosmic radiation and other background noise by an onion layer design with each layer blocking out radiation or tracking particle interactions to rule out erroneous dark matter interactions. The principle is simple, a passing WIMP could knock into a xenon nucleus causing it to move and emit light and electrons in the process. It is these signals the teams are looking for. Along with new analytical techniques the teams using it hope they will finally unlock the mysteries of dark matter.
A view of the Large Underground Xenon (LUX) dark matter detector. Shown are photomultiplier tubes that can ferret out single photons of light. Signals from these photons told physicists that they had not yet found Weakly Interacting Massive Particles (WIMPs) Credit: Matthew Kapust / South Dakota Science and Technology AuthorityFollowing 280 days worth of data, no dark matter WIMPs were detected. Focussing their attention on WIMPs with a mass above 9 Gev/c2 the team found nothing despite the sensitivity of the detector. Having explored 9 Gev/c2 the team plans to start probing different energy levels as the hunt continues. The results had been announced at two physics conferences on 26 August TeV Particle Astrophysics 2024 in Chicago and LIDINE 2024 in São Paulo.
The team applied an interesting new technique called ‘salting’ which adds fake WIMP signals during the collection of the data. The approach hides real data until the very end when an ‘unsalting’ technique removes them. This avoids unconscious bias and stops researchers from overly interpreting the data.
Over the few years and by 2028, the team plan to collect at least 1,000 days of data which will be analysed using the new techniques. They will be using the data to study other rare phenomenon like the decay of xenon atoms, neutrino-less double beta decay and born-8 neutrinos from the Sun. There is still much work to do but the 250 scientists at LUX-ZEPLIN are hopeful. LUX-ZEPLIN’s physics coordinator Scott Kravitz from the University of Texas summed it up beautifully ‘Our ability to search for dark matter is improving at a rate faster than Moore’s Law. If you look at an exponential curve, everything before now is nothing. Just wait until you see what comes next.’
Source : Experiment sets new record in search for dark matter
The post New Limits on Dark Matter appeared first on Universe Today.
When the James Webb Space Telescope provided astronomers with a glimpse of the earliest galaxies in the Universe, there was some understandable confusion. Given that these galaxies existed during “Cosmic Dawn,” less than one billion years after the Big Bang, they seemed “impossibly large” for their age. According to the most widely accepted cosmological model—the Lambda Cold Dark Matter (LCDM) model—the first galaxies in the Universe did not have enough time to become so massive and should have been more modestly sized.
This presented astronomers with another “crisis in cosmology,” suggesting that the predominant model about the origins and evolution of the Universe was wrong. However, according to a new study by an international team of astronomers, these galaxies are not so “impossibly large” after all, and what we saw may have been the result of an optical illusion. In short, the presence of black holes in some of these early galaxies made them appear much brighter and larger than they actually were. This is good news for astronomers and cosmologists who like the LCDM the way it is!
The study was led by Katherine Chworowsky, a graduate student at the University of Texas at Austin (UT) and a National Science Foundation (NSF) Fellow. She was joined by colleagues from UT’s Cosmic Frontier Center, NSF’s NOIRLab, the Dunlap Institute for Astronomy & Astrophysics, the Mitchell Institute for Fundamental Physics and Astronomy, the Cosmic Dawn Center (DAWN), the Niels Bohr Institute, the Netherlands Institute for Space Research (SRON), NASA’s Goddard Space Flight Center, the European Space Agency (ESA), the Space Telescope Science Institute (STScI), and other prestigious universities and institutes. The paper that details their findings recently appeared in The Astrophysical Journal.
The first image taken by the James Webb Space Telescope, featuring the galaxy cluster SMACS 0723. Credit: NASA, ESA, CSA, and STScIThe data was acquired as part of the Cosmic Evolution Early Release Science (CEERS) Survey, led by Steven Finkelstein, a professor of astronomy at UT and a study co-author. In a previous study, Avishai Dekel and his colleagues at the Racah Institute of Physics at the Hebrew University of Jerusalem (HUJI) argued that the prevalence of low-density dust clouds in the early Universe allowed for rapid star formation in galaxies. Dekel and Zhaozhou Li (a Marie Sklodowska-Curie Fellow at HUJI) were also co-authors of this latest study.
As Chworowsky and her colleagues explained, the observed galaxies only appeared massive because their central black holes were rapidly consuming gas. This process causes friction, causing the gas to emit heat and light, creating the illusion of there being many more stars and throwing off official mass estimates. These galaxies appeared as “little red dots” in the Webb image (shown below). When removed from the analysis, the remaining galaxies were consistgent with what the standard LCDM model predicts.
“So, the bottom line is there is no crisis in terms of the standard model of cosmology,” Finkelstein said in a UT News release. “Any time you have a theory that has stood the test of time for so long, you have to have overwhelming evidence to really throw it out. And that’s simply not the case.”
However, there is still the matter of the number of galaxies in the Webb data, which are twice as many as the standard model predicts. A possible explanation is that stars formed more rapidly in the early Universe. Essentially, stars are formed from clouds of dust and gas (nebulae) that cool and condense to the point where they undergo gravitational collapse, triggering nuclear fusion. As the star’s interior heats up, it generates outward pressure that counteracts gravity, preventing further collapse. The balance of these opposing forces makes star formation relatively slow in our region of the cosmos.
The galaxy cluster SMACS0723, with the five galaxies selected for closer study. Credit: NASA, ESA, CSA, STScI / Giménez-Arteaga et al. (2023), Peter Laursen (Cosmic Dawn Center).According to some theories, the Universe was much denser than it is today, which prevented stars from blowing out gas during formation, thus making the process more rapid. These findings echo what Dekel and his colleagues argued in their previous paper, though it would account for there being more galaxies rather than several massive ones. Similarly, the CEERS team and other research groups have obtained spectra from these black holes that indicate the presence of fast-moving hydrogen gas, which could mean that they have accretion disks.
The swirling of these disks could provide some of the luminosity previously mistaken for stars. In any case, further observations of these “little red dots” are pending, which should help resolve any remaining questions about how massive these galaxies are and whether or not star formation was more rapid during the early Universe. So, while this study has shown that the LCDM model of cosmology is safe for now, its findings raise new questions about the formation process of stars and galaxies in the early Universe.
“And so, there is still that sense of intrigue,” said Chworowsky. “Not everything is fully understood. That’s what makes doing this kind of science fun, because it’d be a terribly boring field if one paper figured everything out, or there were no more questions to answer.”
Further Reading: UT News, The Astronomical Journal
The post Remember those Impossible Galaxies Found by JWST? It Turns Out They Were Possible After All appeared first on Universe Today.
Solar sails are an exciting way to travel through the Solar System because they get their propulsion from the Sun. NASA has developed several solar sails, and their newest, the Advanced Composite Solar Sail System (or ACS3), launched a few months ago into low-Earth orbit. After testing, NASA reported today that they extended the booms, deploying its 80-square-meter (860 square feet) solar sail. They’ll now use the sail to raise and lower the spacecraft’s orbit, learning more about solar sailing.
“The Sun will continue burning for billions of years, so we have a limitless source of propulsion. Instead of launching massive fuel tanks for future missions, we can launch larger sails that use ‘fuel’ already available,” said Alan Rhodes, the mission’s lead systems engineer at NASA’s Ames Research Center, earlier this year. “We will demonstrate a system that uses this abundant resource to take those next giant steps in exploration and science.”
And for all you skywatchers out there, NASA said that given the reflectivity of the large sail and its position in orbit (about 1,000 km/600 miles) above Earth, ACS3 should be easily visible at times in the night sky. The Heavens Above website already has ACS3 listed on their page (just put in your location to see when to catch the solar sail passing over your area.) There should be info and updates available on social media, so follow NASA.gov and @NASAAmes on X and Instagram for updates.
ACS3 is part of NASA’s Small Spacecraft Technology program, which has the objective of deploying small missions that demonstrate unique capabilities rapidly. ACS3 launched in April 2024 aboard Rocket Lab’s Electron rocket from New Zealand. The spacecraft is a twelve-unit (12U) CubeSat built by NanoAvionics that’s about the size of a microwave oven. The biggest challenge designing and creating lightweight booms that could be small enough to fit inside the spacecraft while being able to extend to about 9 meters (30 ft) per side, and being strong enough to support the solar sail. The lightweight but strong composite carbon fiber boom system unrolled from the spacecraft to form rigid tubes that support the ultra-thin, reflective polymer sail.
This video shows how the booms work and the sail deploys:
When fully deployed, the sail forms a square that is about half the size of a tennis court. To change direction, the spacecraft angles its sails. Now with the boom deployment, the ACS3 team will perform maneuvers with the spacecraft, angling the sails and to change the spacecraft’s orbit.
The primary goal of the mission was to demonstrate boom deployment. With that now successfully achieved, the ACS3 team also hopes the mission will prove that their solar sail spacecraft can actually work for future solar sail-equipped science and exploration missions.?
This image shows the ACS3 being unfurled at NASA’s Langley Research Center. The solar wind is reliable but not very powerful. It requires a large sail area to power a spacecraft effectively. The ACS2 is about 9 meters (30 ft) per side, requiring a strong, lightweight boom system. Image Credit: NASASince ACS3 is a demonstration mission, the goal is to build larger sails that can generate more thrust. With these unique composite carbon fiber booms, the ACS3 system has the potential to support sails as large as 2,000 square meters, or about 21,500 square feet, or about half the area of a soccer field.
“The hope is that the new technologies verified on this spacecraft will inspire others to use them in ways we haven’t even considered,” Rhodes said.
And look for photos of ACS3’s fully deployed sail next week. The spacecraft carries four cameras that captured a panoramic view of the reflective sail and its supporting composite booms. NASA said that high-resolution imagery from these cameras will be available on Wednesday, Sept. 4.
NASA is providing updates on this mission on their Small Satellite Missions blog page.
The post NASA's New Solar Sail Extends Its Booms and Sets Sail appeared first on Universe Today.
Rogue planets, or free-floating planetary-mass objects (FFPMOs), are planet-sized objects that either formed in interstellar space or were part of a planetary system before gravitational perturbations kicked them out. Since the first candidates were observed in 2000, astronomers have detected hundreds of these objects, untethered to any particular star, floating through the interstellar medium (ISM) of our galaxy. In fact, some scientists estimate that there could be as many as 2 trillion rogue planets (or more!) wandering through the Milky Way alone.
In recent news, a team of astronomers working with the James Webb Space Telescope (JWST) announced the discovery of six rogue planet candidates in an unlikely spot. The planets, which include the lightest rogue planet ever identified (one that also hosts a debris disk), were spotted during Webb‘s deepest survey of NGC 1333, a young star-forming cluster about a thousand light-years away in the constellation Perseus. These planets could teach astronomers a great deal about how stars and planets form.
The team was led by Adam Langeveld, an Assistant Research Scientist in the Department of Physics and Astronomy at Johns Hopkins University (JHU). He was joined by colleagues from the Carl Sagan Institute, the Instituto de Astrofísica e Ciências do Espaço, the Trottier Institute for Research on Exoplanets, the Mont Mégantic Observatory, the Herzberg Astronomy and Astrophysics Research Centre, the University of Texas at Austin, the University of Victoria, and the Scottish Universities Physics Alliance (SUPA) at the University of St Andrews. The paper detailing the survey’s findings has been accepted for publication in The Astronomical Journal.
Most of the rogue planets detected to date were discovered using gravitational microlensing, while others were found via direct imaging. The former method relies on “lensing events,” where the gravitational force of a massive object alters the curvature of spacetime around it, amplifying light from more distant objects. The latter consists of spotting brown dwarfs (objects that straddle the line between planets and stars) and massive planets directly by detecting the infrared radiation produced within their atmospheres.
In their paper, the team describes how the discovery occurred during an extremely deep spectroscopic survey of NGC 1333. Using data from Webb‘s Near-Infrared Imager and Slitless Spectrograph (NIRISS), the team measured the spectrum of every object in the observed portion of the star cluster. This allowed them to reanalyze spectra from 19 previously observed brown dwarfs and led to the discovery of a new brown dwarf with a planetary-mass companion, a rare find that already challenges theories of how binary systems form. But the real kicker was the detection of six planets with 5-10 times the mass of Jupiter (aka super-Jupiters).
These six candidates are thus among the lowest-mass rogue planets ever found to have formed through the same process as brown dwarfs and stars. Investigating massive objects that are not quite large enough to become stars was the purpose of the Deep Spectroscopic Survey for Young Brown Dwarfs and Free-Floating Planets. The fact that Webb’s observations revealed no objects below five Jupiter masses, even though it is sensitive enough to detect them, is a strong indication that objects lighter than this are more likely to form the way planets do.
Said lead author Langeveld in a statement released by JHU’s news outlet (the Hub):
“We are probing the very limits of the star-forming process. If you have an object that looks like a young Jupiter, is it possible that it could have become a star under the right conditions? This is important context for understanding both star and planet formation.”
New wide-field mosaic from the James Webb Space Telescope spectroscopic survey of NGC 1333, with three of the newly discovered free-floating planetary-mass objects indicated by green markers. Credit: ESA/Webb, NASA & CSA, A. Scholz, K. Muzic, A. Langeveld, R. Jayawardhana

The most intriguing of the rogue planets was also the lightest: an estimated five Jupiter masses (about 1,600 Earths). Since dust and gas generally fall into a disk during the early stages of star formation, the presence of a debris ring around this planet strongly suggests that it formed the same way stars do. However, planetary systems also form from such disks (aka circumstellar disks), which means these objects may be able to form their own satellites. In other words, these massive planets could be nurseries for miniature planetary systems – like our Solar System, but on a much smaller scale.
Said Johns Hopkins Provost Ray Jayawardhana, an astrophysicist and senior author of the study (who also leads the survey group):
“It turns out the smallest free-floating objects that form like stars overlap in mass with giant exoplanets circling nearby stars. It’s likely that such a pair formed the way binary star systems do, from a cloud fragmenting as it contracted. The diversity of systems that nature has produced is remarkable and pushes us to refine our models of star and planet formation…
“Our observations confirm that nature produces planetary mass objects in at least two different ways—from the contraction of a cloud of gas and dust, the way stars form, and in disks of gas and dust around young stars, as Jupiter in our own solar system did.”
In the coming months, the team plans to use Webb to conduct follow-up studies of these rogue planets’ atmospheres and compare them to those of brown dwarfs and gas giants. They also plan to search the star-forming region for other objects with debris disks to investigate the possibility of mini-planetary systems. The data they obtain will also help astronomers refine their estimates of the number of rogue planets in our galaxy. The new Webb observations indicate that such bodies account for about 10% of celestial bodies in the targeted cluster.
Current estimates place the number of stars in our galaxy between 100 and 400 billion and the number of planets between 800 billion and 3.2 trillion. If rogue planets make up 10% of all these bodies, there could be anywhere from 90 to 360 billion rogue worlds floating out there. As we have explored in previous articles, we might be able to explore some of them someday, and our Sun may even capture a few!
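The arithmetic behind that range is straightforward; here it is spelled out (a rough extrapolation, since the 10% figure comes from a single young cluster):

```python
# Rough extrapolation: if rogue planets make up ~10% of celestial bodies,
# as the NGC 1333 survey suggests for that cluster, scale that fraction up
# to the Milky Way star and planet counts quoted above.
stars_low, stars_high = 100e9, 400e9
planets_low, planets_high = 800e9, 3.2e12

bodies_low = stars_low + planets_low      # 0.9 trillion bodies
bodies_high = stars_high + planets_high   # 3.6 trillion bodies

rogue_fraction = 0.10
print(f"{rogue_fraction * bodies_low / 1e9:.0f} to "
      f"{rogue_fraction * bodies_high / 1e9:.0f} billion rogue worlds")
# -> 90 to 360 billion
```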
Further Reading: HUB
The post Webb Discovers Six New “Rogue Worlds” that Provide Clues to Star Formation appeared first on Universe Today.
Scientists have discovered that Earth has a third field. We all know about the Earth’s magnetic field. And we all know about Earth’s gravity field, though we usually just call it gravity.
Now, an international team of scientists has found Earth’s global electric field.
It’s called the ambipolar electric field, and it’s a weak electric field that surrounds the planet. It’s responsible for the polar wind, which was first detected decades ago. The polar wind is an outflow of plasma from the polar regions of Earth’s magnetosphere. Scientists hypothesized the ambipolar field’s existence decades ago, and now they finally have proof.
The discovery is in a new article in Nature titled “Earth’s ambipolar electrostatic field and its role in ion escape to space.” The lead author is Glyn Collinson from the Heliophysics Science Division at NASA Goddard Space Flight Center.
“It’s like this conveyor belt, lifting the atmosphere up into space.”
Glyn Collinson, Heliophysics Science Division, NASA Goddard Space Flight Center

The Space Age gained momentum in the 1960s as the USA and USSR launched more and more satellites. When spacecraft passed over Earth’s poles, they detected an outflow of particles from the atmosphere into space. Scientists named this the polar wind, but for decades it remained a mystery.
Scientists expect some particles to “leak” from Earth’s atmosphere into space, and sunlight can drive that escape. But escape powered by sunlight should heat the particles, and the polar wind is mysterious because many of its particles are cold despite moving at supersonic speeds.
“Something had to be drawing these particles out of the atmosphere,” said lead author Collinson.
Collinson is also the Principal Investigator for NASA’s “Endurance” Sounding Rocket Mission. “The purpose of the Endurance mission was to make the first measurement of the magnitude and structure of the electric field generated by Earth’s ionosphere,” NASA writes in their mission description. Endurance launched on May 22nd, 2022, from Norway’s Svalbard Archipelago.
This image shows NASA’s Endurance rocket launching from Ny-Ålesund, Svalbard, Norway. It flew for 19 minutes to an altitude of about 780 km (484 mi) above Earth’s sunlit polar cap. It carried six science instruments and could only be launched under certain conditions. Image Credit: NASA/Brian Bonsteel

“Svalbard is the only rocket range in the world where you can fly through the polar wind and make the measurements we needed,” said Suzie Imber, a space physicist at the University of Leicester, UK, and co-author of the paper.
Svalbard is key because there are open magnetic field lines above Earth’s polar caps. These field lines provide a pathway for ions to outflow to the magnetosphere.
This figure from the research shows Endurance’s flight profile and its path over Earth. The rocket had to fly near the open magnetic field lines that exist at Svalbard’s high polar latitudes. Image Credit: Collinson et al. 2024.

After it was launched, Collinson said, “We got fabulous data all through the flight, though it will be a while before we can really dig into it to see if we achieved our science objective or not.”
Now, the data is in, and the results show that Earth has a global electric field.
Scientists hypothesized that the field was weak and that its effects could only be felt over hundreds of kilometers. Even though it was first proposed 60 years ago, measuring it had to wait for technology to advance. In 2016, Collinson and his colleagues began developing a new instrument capable of measuring the elusive field.
At about 250 km (150 mi) above the Earth’s surface, atoms break apart into negatively charged electrons and positively charged ions. Electrons are far lighter than ions, and the tiniest energetic jolt can send them into space. Ions are more than 1,800 times heavier, and gravity draws them back to the surface.
If gravity were the only force at work, the two populations would separate over time and simply drift apart. But that’s not what happens.
Electrons and ions have opposite electrical charges, so they attract one another, and an electric field forms that keeps them together. This counteracts some of gravity’s pull.
The field is called ambipolar because it works in both directions. As gravity pulls ions down, they drag some electrons with them; at the same time, electrons straining to escape into space lift ions higher into the atmosphere.
The result of all this is that the ambipolar field extends the atmosphere’s height, meaning some of the ions escape with the polar wind.
After decades of hypothesizing and theorizing, the Endurance rocket measured a change in electric potential of only 0.55 volts. That’s extremely weak but enough to be measurable.
“A half a volt is almost nothing — it’s only about as strong as a watch battery,” Collinson said. “But that’s just the right amount to explain the polar wind.”
Hydrogen ions are the most plentiful particles in the polar wind. Endurance’s results show that these ions experience an outward force from the electric field that is 10.6 times more powerful than gravity. “That’s more than enough to counter gravity — in fact, it’s enough to launch them upwards into space at supersonic speeds,” said Alex Glocer, Endurance project scientist at NASA Goddard and co-author of the paper.
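That figure can be sanity-checked with a back-of-the-envelope comparison. The sketch below assumes the 0.55-volt drop is spread across the roughly 518 km altitude span Endurance sampled and uses a rough average gravity for those altitudes; the paper’s careful treatment yields the quoted 10.6.

```python
# Rough consistency check (not the paper's calculation): compare the energy
# a hydrogen ion gains crossing Endurance's 0.55 V potential drop with the
# gravitational energy needed to climb the same ~518 km altitude span.
E_CHARGE = 1.602e-19   # C, elementary charge
M_H = 1.67e-27         # kg, mass of a hydrogen ion
G_AVG = 8.5            # m/s^2, rough average gravity between ~250 and ~770 km up
DELTA_V = 0.55         # volts, measured potential drop
DELTA_H = 518e3        # m, approximate altitude span of the measurement

electric = E_CHARGE * DELTA_V      # ~8.8e-20 J per ion
gravity = M_H * G_AVG * DELTA_H    # ~7.4e-21 J per ion
print(f"electric/gravity ~ {electric / gravity:.0f}x")
# -> ~12x with these rounded inputs; the paper's value is 10.6x.
```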
Hydrogen ions are light, but even the heavier particles in the polar wind get lifted. Oxygen ions in the weak electric field effectively weigh half as much, so they too are boosted to greater heights. Overall, the ambipolar field makes the ionosphere denser at higher altitudes than it would be without this lofting effect. “It’s like this conveyor belt, lifting the atmosphere up into space,” Collinson added.
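Why half? A simplified textbook estimate makes the number plausible (a sketch, not a calculation from the paper): in a quasi-neutral, isothermal plasma with a single ion species, both populations must settle with the same scale height, and because electrons weigh almost nothing, the electric field that binds them ends up supporting half of each ion’s weight:

$$ eE = \frac{(m_i - m_e)\,g}{2} \approx \frac{m_i g}{2}, \qquad F_{\mathrm{net}} = m_i g - eE \approx \frac{m_i g}{2} $$

With its effective weight halved, the plasma’s scale height doubles, which is exactly the lofting effect described above. The real ionosphere, with several ion species and temperatures, is messier, hence the need for Endurance’s direct measurement.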
“The measurements support the hypothesis that the ambipolar electric field is the primary driver of ionospheric H+ outflow and of the supersonic polar wind of light ions escaping from the polar caps,” the authors explain in their paper.
“We infer that this increases the supply of cold O+ ions to the magnetosphere by more than 3,800%,” the authors write. At that point, other mechanisms come into play. Wave-particle interactions can heat the ions, accelerating them to escape velocity.
These results raise other questions. How does this field affect Earth? Has the field affected the planet’s habitability? Do other planets have these fields?
Back in 2016, the European Space Agency’s Venus Express mission detected a 10-volt electric potential surrounding that planet. Such a potential would pull positively charged particles away from the planet, and that could draw away oxygen.
Scientists think that Venus may once have had plentiful water. However, since sunlight splits water into hydrogen and oxygen, the electric field could have siphoned the oxygen away, eliminating the planet’s water. This is theoretical, but it raises the question of why the same thing hasn’t happened on Earth.
The ambipolar field is fundamental to Earth. Its influence on the evolution of the planet’s atmosphere and biosphere is not yet understood, but it must play a role.
“Any planet with an atmosphere should have an ambipolar field,” Collinson said. “Now that we’ve finally measured it, we can begin learning how it’s shaped our planet as well as others over time.”
The post A NASA Rocket Has Finally Found Earth’s Global Electric Field appeared first on Universe Today.
Digging in the ground is so commonplace on Earth that we hardly ever think of it as hard. But doing so in space is an entirely different proposition. On larger worlds, like the Moon or Mars, it would be broadly similar to digging on Earth. But on the millions of asteroids in our solar system, “milligravity” makes the digging experience quite different. Given the potential economic impact of asteroid mining, plenty of methods have been suggested for digging on an asteroid, and a team from the University of Arizona recently published the latest in a series of papers about using a customized bucket wheel to do so.
Bucket wheel designs seem to be gaining popularity in space mining more generally. NASA’s ISRU Pilot Excavator (IPEx) uses a similar design and has advanced to Technology Readiness Level 5, according to its latest yearly report. However, it was designed for use on the Moon, where gravity is significantly stronger than on the asteroids that hold vastly more valuable materials.
According to the paper, even the lowest 10% of asteroids have higher concentrations of platinum-group metals, such as palladium and osmium, than the Moon does. They are also much more “energy accessible”: getting resources off an actively mined asteroid requires a delta-V of only about 5% of that needed at the Moon. Since delta-V translates directly into propellant mass, and propellant mass into cost, the lower delta-V makes mining these tiny bodies much more economically attractive.
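To see why delta-V maps so directly onto cost, consider the Tsiolkovsky rocket equation: propellant mass grows exponentially with delta-V. Here is a minimal sketch with illustrative numbers (the 5% ratio comes from the paper as described above; the ~2.4 km/s lunar escape figure and the generic 330 s chemical engine are assumptions for illustration):

```python
import math

ISP = 330.0                  # s, a generic chemical engine (illustrative assumption)
G0 = 9.81                    # m/s^2, standard gravity
V_EXHAUST = ISP * G0         # ~3.2 km/s effective exhaust velocity

def propellant_fraction(delta_v: float) -> float:
    # Tsiolkovsky: m0/mf = exp(dv/ve), so the propellant fraction is 1 - mf/m0.
    return 1.0 - math.exp(-delta_v / V_EXHAUST)

dv_moon = 2380.0               # m/s, roughly lunar escape velocity (assumption)
dv_asteroid = 0.05 * dv_moon   # the paper's ~5% figure

for name, dv in (("Moon", dv_moon), ("asteroid", dv_asteroid)):
    print(f"{name:8s}: dv = {dv:6.0f} m/s -> "
          f"{propellant_fraction(dv) * 100:4.1f}% of departing mass is propellant")
# Moon: ~52% of the departing mass must be propellant; asteroid: ~4%.
```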
This video, from nine years ago, shows how long the development path for asteroid mining technology has been.

But asteroids present their own engineering challenges. Most are “rubble piles,” clumps of rock held together only by whatever minimal gravity their mass provides. Even metal-rich M-type asteroids, such as Psyche, could be primarily composed of these small chunks of material. Such an environment would not be very hospitable to traditional mining techniques.
The University of Arizona researchers, led by Dr. Jekan Thangavelautham, have taken a rapid iteration approach to solving that problem. They developed a model representing the forces expected on the surface of an asteroid and applied those forces to models of different bucket wheel designs, selecting features that best suit the environment.
They also took the next step and started 3D printing prototypes of the different designs, intending to use them to collect physical data on the mechanics of excavation. To do so, however, they needed a realistic asteroid regolith simulant, which doesn’t currently exist, so they decided to make their own. A combination of styrofoam and 3D-printed resin seemed to do the trick, though they were not yet able to produce enough simulant to thoroughly test a planned test assembly for this paper.
Artist’s depiction of an implementation of a bucket wheel excavator.

One of the paper’s other important findings was the impact different characteristics of the asteroid itself have on two of the design’s most important parameters: the bucket volume and the cutting velocity (i.e., how fast the buckets move). Some characteristics, such as resource concentration, had little impact on those two parameters. Others, such as density, had a major impact.
The research team found that high-volume, slow-moving buckets were ideal in this environment. Part of that consideration, however, was how quickly an orbiting support craft would fill up with excavated material. To speed the flow of material from the bucket wheel to the storage system, the researchers suggest a screw feeder, which would also let the bucket wheel operate continuously – another necessity given the economic constraints of the system.
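A toy throughput model makes the tradeoff concrete; every number below is hypothetical and only illustrates the volume-versus-speed balance the paper explores. The same mass flow can come from large, slow buckets or small, fast ones, but slower cutting disturbs loosely bound milligravity regolith far less:

```python
# Toy excavation-throughput model. All parameters are hypothetical,
# chosen only to illustrate the bucket volume vs. cutting speed tradeoff.
def throughput_m3_per_hr(bucket_volume_m3: float, buckets_per_wheel: int,
                         wheel_rpm: float, fill_factor: float = 0.8) -> float:
    # Each bucket dumps once per wheel revolution; not every scoop fills fully.
    return bucket_volume_m3 * buckets_per_wheel * wheel_rpm * 60.0 * fill_factor

big_slow = throughput_m3_per_hr(bucket_volume_m3=0.02, buckets_per_wheel=8, wheel_rpm=0.5)
small_fast = throughput_m3_per_hr(bucket_volume_m3=0.002, buckets_per_wheel=8, wheel_rpm=5.0)
print(f"{big_slow:.1f} vs {small_fast:.1f} m^3/hr")
# Identical ~3.8 m^3/hr either way, but with very different cutting forces on the regolith.
```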
Additionally, they found that claws were necessary to hold onto the regolith. An extensible tubing system is also a “nice-to-have,” though it becomes more necessary if there are many buckets per wheel.
Details of this work are contained in the paper, and the researchers gave an associated presentation at the ASCEND conference at the end of July. These milestones are a step in the right direction, but the technologies remain at a relatively low readiness level. They will eventually be needed, though, if humans are to utilize some of the most easily accessible resources in the solar system. As our expansion to other worlds picks up, it’s only a matter of time before a bucket excavator lands on an asteroid and gets to work.
Learn More:
Hansen, Muniyasamy, & Thangavelautham – Modified Bucket Wheel Design and Mining Techniques for Asteroid Mining
UT – Heavy Construction on the Moon
UT – A Handy Attachment Could Make Lunar Construction a Breeze
UT – Robotic asteroid mining spacecraft wins a grant from NASA
Lead Image:
Artist’s depiction of NASA’s IPEx Bucket Excavator Robot.
Credit – NASA
The post What Type of Excavator Is Most Suitable for Asteroids? appeared first on Universe Today.