Abigail Shrier has a new book out, and it’s doing quite well despite the vitriol she received for her first book, Irreversible Damage: The Transgender Craze Seducing Our Daughters. I read that one and thought it was quite good—not nearly as inflammatory as the gender activists deemed it. But of course the topic—that social media was contributing to a desire among young women to identify as men, a “rapid-onset” change that was unnecessary and generally harmful—was tailor-made to anger gender activists. Remember this tweet by ACLU lawyer Chase Strangio about that book?
An ACLU lawyer advocating censorship! What has the world come to? Well, Strangio, a biological woman who identifies as male, deleted that tweet, but the Internet is forever.
Now Shrier has a somewhat related book, in that it’s about children’s psychological difficulties, but this one isn’t directly related to gender. Click on the icon to go to the Amazon link.
I haven’t yet read it, but have ordered it by interlibrary loan (I can no longer buy books because I have no space on my shelves), and will report my take forthwith. But Greg Lukianoff, President of FIRE and coauthor of two books (one a blockbuster bestseller), has reviewed Shrier’s new book on his website, The Eternally Radical Idea. He pronounces Bad Therapy a “masterpiece,” which is high praise. But he also devotes about 70% of the review to listing the varieties of opprobrium that Shrier will meet. Click to read Lukianoff’s review; I’ll just give a couple of excerpts:
First, Lukianoff’s assessment and brief summary. Bolding is mine:
“Bad Therapy” is simply a masterpiece — easily the most important book of the year. Unfortunately, it most desperately needs to be read by the very people who are likely most hostile to Shrier’s work. The book focuses on the harms of the therapeutic approach to raising our children and how the generation treated with the most psychological therapy and psychiatric drugs has become the most miserable, anxious, and disempowered generation on record. (“Disempowered,” by the way, was the original title of the book I wrote with Jonathan Haidt, which became “The Coddling of The American Mind.”)
Shrier comes to many of the same conclusions that Haidt and I came to in “Coddling,” which I’d sum up like this: As a culture, we seem to be teaching young people the mental habits of anxious and depressed people — encouraging them, often through example, to engage in negative mental exaggerations called cognitive distortions. It’s a kind of reverse-cognitive behavioral therapy. I’ve talked about this problem for the last decade, beginning with Haidt’s and my original 2015 article for The Atlantic, “The Coddling of the American Mind,” and most recently with my piece, “What’s behind the campus mental health crisis?” for UnHerd.
Shrier’s book also focuses on how parenting in the K-12 environment is informed by an ideology that completely undermines students’ sense of an internalized locus of control. Indeed, if you really want to make someone despondent, just persuade them that all important decisions are out of their hands and that they are essentially powerless in their own lives.
Haidt and I — and more recently a Substacker named Gurwinder Bhogal — have pointed out that the current campus left ideology inherently tells young women in particular that they are unavoidably simultaneously both oppressors and oppressed; that their life is determined by their immutable characteristics; that the planet is doomed; that fascists are everywhere; and that there’s not much that can be done about this other than consciousness-raising and feeling guilt, shame, and despair.
What I’ve been emphasizing more recently is that, in many cases, teaching people these cognitive distortions was largely done in the name of motivating them towards some positive social action. This is a terrible strategy, of course, because depressed and anxious people make terrible activists. Depression and anxiety more often result in fatalism and despair than an attitude capable of bringing about positive social change, so it’s a weird way to build a movement.
Here are the three conclusions from Haidt and Lukianoff’s best-selling and influential book:
1.) We young people are fragile (“What doesn’t kill you makes you weaker.”)
2.) We are prone to emotional reasoning and confirmation bias (“Always trust your feelings.”)
3.) We are prone to “dichotomous thinking and tribalism” (“Life is a battle between good people and evil people.”)
So what’s the difference between Shrier’s book and the earlier one? I’m sure they are quite different, but Lukianoff says very little about this issue. In fact, he says nothing about what Shrier adds to the Haidt and Lukianoff book:
But Shrier’s book goes far beyond what Haidt and I did in “Coddling,” and that is why every single parent and K-12 teacher must read it. Despite being steeped in this stuff for the better part of two decades, I still learned a great deal from it — including that the research behind the health harms of growing up with “adverse childhood experiences” is far weaker than I understood it to be.
The book is gorgeously written, thoughtful, compassionate, and has gobs of both research and common sense. It also features some of my favorite experts, including my friend Camilo Ortiz, a professor and clinical psychologist who specializes in CBT. Other friends who make an appearance include Jonathan Haidt, Lenore Skenazy, Rob Henderson, Richard J. McNally, Paul Bloom, and Peter Gray.
And that first paragraph is all you’ll get. The review and assessment of the book takes up only a third of Lukianoff’s piece. Now I don’t mind someone using a review as a platform to launch their own ideas into the ether (H. L. Mencken was famous for that), but Lukianoff uses the book as a way to list all the potential criticisms that Shrier’s book will face, criticisms that he outlined in another book with Rikki Schlott: The Canceling of the American Mind: How Cancel Culture Undermines Trust and Threatens Us All—But There Is a Solution. The review leaves me, at least, not knowing what Shrier’s book is really about.
The criticisms that Lukianoff says Shrier will face fall into three categories: “The Obstacle Course” (“rhetorical dodges and logical fallacies” like strawmanning and misrepresenting the book’s arguments); “The Minefield” (dissing the book by attacking the author, a tactic with which we’re quite familiar); and “The Perfect Rhetorical Fortress” (raising guilt by association, labeling people as bad because of their politics, and so on). If you read Shrier’s earlier book, you’ll see that all of these tactics were indeed used to dismiss it. It turns out that Shrier had a good point, as we now know: European countries are dismantling “affirmative treatment” and puberty blockers for gender-dysphoric youth, most of whom would come out as gay (and would neither lose body parts nor be sterilized) if they were treated less “affirmatively” and the hormones were deep-sixed for adolescents. But now that Shrier has been labeled a Bad Person and guilty of Ideological Wrongthink, that label can be used to discredit everything she writes in the future.
At any rate, and despite the digressions by Lukianoff that are aimed at pushing his own platform, this is certainly a book worth investigating. I haven’t read any other reviews, but just found one on Slate that is quite critical. We shall see if the author of that one, Anna Nordberg, engages in the bad-faith criticisms described by Lukianoff. (Nordberg does have expertise in the area of parenting and child psychology.)
Most of the exoplanets we’ve discovered orbit red dwarf stars. This isn’t because red dwarfs are somehow special; it’s simply that they are common. About 75% of the stars in the Milky Way are red dwarfs, so you would expect red dwarf planets to be the most abundant. This also means that most habitable worlds are going to orbit these small, cool stars, and that has some significant consequences for our search for life.
To begin with, any potentially habitable red dwarf world will need to orbit its star closely, just to be warm enough for things like liquid water. The TRAPPIST-1 system I talked about yesterday is a good example of this. The three potentially habitable planets of the system orbit at a small fraction of the distance between Mercury and the Sun. This means they are at risk of things such as stellar flares, but it also means they are almost certainly tidally locked.
Tidal locking occurs when a planet or moon is so close to its companion that tidal forces cause its rotation to sync with its orbital motion. When a planet is tidally locked, one side always faces its star while the other side is forever in darkness. As you might imagine, this would mean the warm side fries while the other freezes. That’s true unless the planet has a substantial atmosphere. With a water-rich, Earth-like atmosphere, heat could move between the day and night sides. Weather would be strange on such a world, but a tidally locked world could be habitable, with fairly even day-side and night-side temperatures.
How clouds could make a planet appear airless. Credit: Powell, et al.

Observing the atmospheres of tidally locked planets is difficult, but astronomers have a trick to see whether an atmosphere exists. Rather than trying to capture an atmospheric spectrum, they can simply measure the surface temperature of the planet on opposite sides. So, look at the star as the planet moves in front of it to determine the temperature of the dark side, and look at it again as the planet moves behind the star to get the light side temperature. If the dark and light sides have dramatically different temperatures, then the planet must not have an atmosphere. Easy-peasy. But a new study shows that isn’t necessarily true.
In this paper the authors argue that clouds on the dark side of a world could skew our data. To show this, they considered a tidally locked world with a thick atmosphere. Based on their models, the atmosphere would moderate global temperatures on the planet so that the day side is only a few dozen degrees warmer than the dark side. This is similar to the day and night extremes of a dry region on Earth. While moderate, the temperature shift would be enough to trigger the formation of thick clouds on the dark side.
In this scenario, the day side would be mostly cloudless and we would measure the warm temperature of the planet’s surface. But with a cloudy dark side we would measure the temperature of the upper cloud layer, which would be much colder. So even though the surface temperatures of the planet are fairly uniform, it would appear to have an extreme temperature contrast, like an airless world. The authors go on to look at how observations from the JWST could distinguish between cloudy planets and those without an atmosphere, but it is clear that one simple trick in the search for habitable planets isn’t quite so simple.
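To get a feel for why an atmosphere evens out the two hemispheres, here is a minimal two-box energy-balance sketch in Python. It is not the cloud modeling used in the paper; the luminosity and orbital distance are rough TRAPPIST-1-like values chosen only for illustration, and the redistribution parameter is a crude stand-in for whatever heat transport a real atmosphere provides.

```python
# Toy two-box energy balance for a tidally locked planet, illustrating how
# heat redistribution by an atmosphere evens out day- and night-side
# temperatures. A crude sketch only; the stellar and orbital values below
# are rough TRAPPIST-1-like numbers chosen for illustration.
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26   # solar luminosity, W
AU = 1.496e11      # astronomical unit, m

def day_night_temps(L_star, a, albedo=0.3, redistribution=0.0):
    """Return (T_day, T_night) in kelvin for a two-hemisphere planet.

    redistribution = 0 -> no heat transport (airless-like extremes)
    redistribution = 1 -> absorbed power shared equally between hemispheres
    """
    flux = (1.0 - albedo) * L_star / (4.0 * math.pi * a**2)  # absorbed flux, W m^-2
    # Absorbed power per unit planetary surface area, split between hemispheres.
    p_day = flux / 2.0 * (1.0 - redistribution / 2.0)
    p_night = flux / 2.0 * (redistribution / 2.0)
    t_day = (p_day / SIGMA) ** 0.25
    t_night = (p_night / SIGMA) ** 0.25 if p_night > 0 else 0.0
    return t_day, t_night

# Illustrative values: a star of ~0.055% solar luminosity and a planet at ~0.03 AU.
for f in (0.0, 0.5, 1.0):
    td, tn = day_night_temps(L_star=5.5e-4 * L_SUN, a=0.03 * AU, redistribution=f)
    print(f"redistribution={f:.1f}: day ~{td:.0f} K, night ~{tn:.0f} K")
```

With no redistribution the toy model gives a hot day side and a frigid night side, while full redistribution brings the two within a few tens of degrees of each other, roughly the situation the authors describe.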
Reference: Powell, Diana, Robin Wordsworth, and Karin Öberg. “Nightside Clouds on Tidally-locked Terrestrial Planets Mimic Atmosphere-Free Scenarios.” arXiv preprint arXiv:2409.07542 (2024).
The post Exoplanets Could be Hiding Their Atmospheres appeared first on Universe Today.
Today we have LEOPARD photos taken by Phil Frymire, whom I met in the line for the plane from Newark to Johannesburg. Someone said, as I perused the line, “Are you Jerry Coyne?” I was shocked, but it turned out that Phil and his brother read this site and recognized me. My 5 minutes of fame! Phil’s IDs and captions are indented, and you can enlarge the photos by clicking on them.
My brother and I visited South Africa at the same time as our host. We stayed at Kambaku River Sands lodge in the Timbavati Nature Preserve and at Mala Mala Sable Camp. Kambaku River Sands is about 35 miles northwest of Manyeleti (where Jerry visited) and Mala Mala is about 12 miles south of Manyeleti. The routine at both lodges was very similar to what Jerry described for Manyeleti. Here is a selection of leopard (Panthera pardus) photos, which also include some unfortunate impalas (Aepyceros melampus). The first six photos are from Timbavati and the last five are from Mala Mala.
This leopard was eating an impala she had killed and cached up a tree when part of the carcass fell onto some lower limbs. She slipped briefly when retrieving it, hence the expression. A lone hyena was lurking below, hoping in vain for scraps.
Another Timbavati female:
The same cat in a different pose:
This cub was stashed up a tree about 30 yards away from its mother:
This female was relaxing in between bites of impala. What is she thinking?:
This is a screenshot of a video I took. We missed this leopard’s taking of an impala by only a couple of minutes. When we came upon the scene, it was dragging its prey, looking for a suitable tree to store the kill. We found out later that hyenas had stolen the carcass.

Closeup of another Mala Mala female:
She was part of a mating pair. The larger male is on the right:
A Mala Mala female:
Her nearby cub:
Just above the mother and cub, in a dense tree, you can see the disembodied head of her impala victim:
Back in April 2022, the CDF experiment, which operated at the long-ago-closed Tevatron particle collider, presented the world’s most precise measurement of the mass of the particle known as the “W boson”. Their result generated some excited commentary, because it disagreed by 0.1% with the prediction of the Standard Model of particle physics. Even though the mismatch was tiny, it was significant, because the CDF measurement was so exceptionally precise. Any disagreement of such high significance would imply that something has to give: either the Standard Model is missing something, or the CDF measurement is incorrect.
Like most of my colleagues, I was more than a little skeptical about CDF’s measurement. This was partly because it disagreed with the average of earlier, less precise measurements, but mainly because of the measurement’s extreme challenges, as I noted in a commentary that I wrote at the time.
In the weeks following CDF’s announcement, I attended a detailed presentation about the measurement. The physicist who gave it tried to convince us that everything in the measurement had been checked, cross-checked, and understood. However, I did not find the presentation exceptionally persuasive, so my confidence in it did not increase.
But so what? It doesn’t matter what I think. All a theorist like me can do, seeing a measurement like this, is check to see if it is logically possible and conceptually reasonable for the W boson mass to shift slightly without messing up other existing measurements. And it is.
(In showing this is true, I took the opportunity to explain more about how the Standard Model works, and specifically how the W boson’s mass arises from simple math, before showing how the mass could be shifted upwards. Some of you may still find these technical details interesting, even though the original motivation for this series of articles is no longer what it was.)
Instead, what really matters is for other experimental physicists to make the same measurement, to see if they get the same answer as CDF or not. Because of the intricacy of the measurement, this was far easier said than done. But it has now happened.
In the past year, the ATLAS collaboration at the Large Hadron Collider [LHC] presented a new W boson mass measurement consistent with the Standard Model. But because their uncertainties were 60% larger than CDF’s result, it didn’t entirely settle the issue.
Now the CMS collaboration, ATLAS’s competitor at the LHC, has presented their measurement. They have managed to be almost as precise as CDF — a truly impressive achievement. And what do they find? Their result, in red below, is fully consistent with the Standard Model, shown as the vertical grey band, and with ATLAS, the bar just above the red one. The CDF measurement is the outlying bar to the right; it is the only one in disagreement with the Standard Model.
Measurements of the W boson mass made by several different experiments, with names listed at left. In each case, the dot represents the measurement and the horizontal band represents its uncertainty. The vertical grey band represents the Standard Model prediction and its own uncertainty. The ATLAS and CMS measurements, shown at the bottom, agree with each other and with the Standard Model, while both disagree with the CDF measurement. Note that the uncertainty in the CMS measurement is about the same as in the CDF measurement.

Since the ATLAS and CMS results are both consistent with all other previous measurements as well as with the Standard Model, and since CMS has even reached the same level of uncertainty obtained by CDF, this makes CDF by far the outlier, as you can see above. The tentative but reasonable conclusion is that the CDF measurement is not correct.
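For readers who want a sense of the numbers, here is a quick back-of-the-envelope comparison, assuming independent Gaussian uncertainties. The central values and uncertainties are approximate figures for the publicly reported results (CDF 2022, CMS 2024, and the Standard Model prediction); consult the papers for the exact values, and note that a proper combination would also account for correlations between measurements.

```python
# A back-of-the-envelope check of how strongly two W-mass results disagree,
# assuming independent Gaussian uncertainties. The numbers below are
# approximate values for the published results; consult the original papers
# for exact figures.
import math

def tension_sigma(m1, e1, m2, e2):
    """Significance (in standard deviations) of the difference between two
    independent measurements m1 +/- e1 and m2 +/- e2."""
    return abs(m1 - m2) / math.sqrt(e1**2 + e2**2)

m_cdf, e_cdf = 80433.5, 9.4    # MeV, CDF 2022 (approximate)
m_cms, e_cms = 80360.2, 9.9    # MeV, CMS 2024 (approximate)
m_sm,  e_sm  = 80357.0, 6.0    # MeV, Standard Model prediction (approximate)

print(f"CDF vs SM : {tension_sigma(m_cdf, e_cdf, m_sm, e_sm):.1f} sigma")
print(f"CMS vs SM : {tension_sigma(m_cms, e_cms, m_sm, e_sm):.1f} sigma")
print(f"CDF vs CMS: {tension_sigma(m_cdf, e_cdf, m_cms, e_cms):.1f} sigma")
# The fractional precision of the CMS result, roughly "one part in ten thousand":
print(f"CMS relative uncertainty: {e_cms / m_cms:.1e}")
```

The point of the exercise is simply that CDF sits several standard deviations away from both the prediction and the new measurements, while CMS sits essentially on top of the Standard Model value.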
Of course, the CDF experimentalists may argue that it is ATLAS and CMS that have made an error, not CDF. One shouldn’t instantly dismiss that out of hand. It’s worth remembering that ATLAS and CMS use the same accelerator to gather their data, and might have used similar logic in the design of their analysis, so it’s not completely impossible for them to have made correlated mistakes. Still, this is far from plausible, so the onus will be on CDF to directly pinpoint an error in their competitors’ work.
Even if the mistake is CDF’s, it’s worth noting that we still have no idea what exactly it might have been. A long chain of measurements and calibrations is required to determine the W boson mass at this level of precision (about one part in ten thousand). It would be great if the error within this chain could be tracked down, but no one may have the stamina to do that, and it is possible that we will never know what went wrong.
But the bottom line is that the discrepancy suggested by the CDF measurement was always a long shot. I don’t think many particle physicists are surprised to see its plausibility fading away.
On the SGU we recently talked about aphantasia, the condition in which some people have a decreased or entirely absent ability to imagine things. The term was coined recently, in 2015, by neurologist Adam Zeman, who described the condition of “congenital aphantasia,” a lifelong inability to form mental imagery. After we discussed it on the show we received numerous e-mails from people with the condition, many of whom were unaware that they were different from most other people. Here is one recent example:
“Your segment on aphantasia really struck a chord with me. At 49, I discovered that I have total multisensory aphantasia and Severely Deficient Autobiographical Memory (SDAM). It’s been a fascinating and eye-opening experience delving into the unique way my brain processes information.
Since making this discovery, I’ve been on a wild ride of self-exploration, and it’s been incredible. I’ve had conversations with artists, musicians, educators, and many others about how my experience differs from theirs, and it has been so enlightening.
I’ve learned to appreciate living in the moment because that’s where I thrive. It’s been a life-changing journey, and I’m incredibly grateful for the impact you’ve had on me.”
Perhaps more interesting than the condition itself, and what I want to talk about today, is that the e-mailer was entirely unaware that most of the rest of humanity has a very different experience of their own existence. This makes sense when you think about it – how would they know? How can you know the subjective experience happening inside another person’s brain? We tend to assume that other people’s brains function similarly to our own, and therefore their experience must be similar. This is partly a reasonable assumption, and partly projection. We do this psychologically as well. When we speculate about other people’s motivations, we generally are just projecting our own motivations onto them.
Projecting our neurological experience, however, is a little different. What the aphantasia experience demonstrates is a couple of things, beginning with the fact that whatever is normal for you is normal. We don’t know, for example, if we have a deficit because we cannot detect what is missing. We can only really know by sharing other people’s experiences.
For example, let’s consider color vision. Someone who is completely color blind, who sees only in shades of grey, would have no idea that they are not seeing color, or that color exists as a phenomenon, except for the fact that other people speak of the fact that they perceive this thing called color. Even then it may take time as they grow to realize that other people are experiencing something they are not. But if they lived in a world with color-blind people, they would never know what they are missing.
This also relates to the old question – is what I experience as “red” the same thing that you experience as “red”? Is there any way we can know? We can only infer from indirect evidence. It’s likely that people experience colors similarly since we tend to associate the same emotions and feelings with those colors, but of course that could also be learned. However, there is no reason to assume our color experiences are identical. There are likely differences in vibrancy, contrast, shading, and other details. Also, there are many people who are partially color blind (like me – I have a deficit in red-green distinction). I would never ever know, however, that my color vision was different from most people’s were it not for those tests we were forced to take where we try to see the number in the circles.
Similarly, if you cannot form visual mental representations in your mind, you might assume everyone is that way. Several people with aphantasia have told me that when other people talked about “seeing” things in their mind, they assumed it was a metaphor. They had no idea other people were literally seeing an image in their mind.
Sometimes even the objective lack of a sensory experience might be entirely unknown to the person. For example, people who are born with a decrease in sensation because of a disorder of their nerves do not know this. Whatever sensation they have is normal for them. So they don’t complain of numbness, even though on exam they have a profound decrease in sensation (that’s how we know it’s congenital and not acquired).
We should, I think, extrapolate from this experience. There are likely countless ways in which our brains differ from each other in how they construct our subjective experience of reality, our abstractions, our emotional worlds, and our sensory perceptions. These are all brain constructs, dependent on the particulars of networks and nodes in the brain, how they connect, and how they function. We cannot get outside of this – this is who and what we are. This is why neuroscientists have moved toward the concept of “neurodiversity” – understanding the full diversity of how different human brains function. There may be a “typical” brain, in one or more aspects, but there is also lots of diversity. We also should not automatically pathologize this diversity and assume anything not typical is a “disorder” or even worse, a “disease.” Mostly biological diversity is a matter of different tradeoffs.
Even when we recognize that some forms of neurodiversity may qualify as a “disorder”, meaning that there are demonstrable objective negative outcomes, sometimes this is very context dependent. They may only have negative outcomes because neurotypicals have designed society to best suit them. They may be on the short end of the tradeoffs, but that is not an inherent reality, just a societal choice.
Even more fascinating to me is to think about the universal human neurological experience. In other words – what do humans lack, or in what ways is human experience of reality idiosyncratic? Just like those with aphantasia, we likely will never know – not until we encounter other intelligent species who experience reality differently. If we are even able to sufficiently communicate with them, we may find their realities are very different from our own. Until then we may not know what it truly means to be human.
The post Subjective Neurological Experience first appeared on NeuroLogica Blog.
Dwarf planet Ceres is the largest planetary body in the Asteroid Belt. For a long time, scientists thought it was born in the outer solar system and then migrated to its present position. Some evidence for that origin lies in extensive deposits of ammonium-rich materials on the Cerean surface.
Some of those bright, white and whitish-yellow deposits are found in impact craters on Ceres. Researchers suspect they are the remnants of a brine that seeped to the surface from a liquid layer between the mantle and crust. When impacts whacked the planet, they altered its surface. They also dug up and splattered material from the brine layer. Images and observational data from NASA’s Dawn mission of an impact region called Consus Crater also show bright yellowish-white deposits. Now, thanks to a deeper analysis of Dawn data, their presence could point to Ceres’s origin in the Asteroid Belt.
NASA’s Dawn spacecraft captured this approximately true-color image of Ceres in 2015 as it approached the dwarf planet. Dawn showed that some polar craters on Ceres hold ancient ice, but new research suggests the ice is much younger. Image Credit: NASA / JPL-Caltech / UCLA / MPS / DLR / IDA / Justin Cowart

Peeping Inside Ceres

Ceres is classified as a dwarf planet and its rocky component is very similar to that of carbonaceous chondrite asteroids. At least a quarter of its mass is water ice. The surface is pretty complex, consisting of carbon-rich rocks and something called ammoniated phyllosilicates. Those are minerals that include such familiar substances as talc and mica. There’s also evidence of water ice in various surface regions.
This dwarf planet is an active world, with most of its activity driven by cryovolcanism. The surface has been gardened by impacts. The thick outer crust lies over a salt-rich liquid (that brine layer) and a muddy mantle. There’s a lot of evidence to suggest that the concentration of ammonium is greater in deeper layers of the crust. The few places on the surface of Ceres where those obvious yellowish-bright patches show up are in and near Consus Crater and also within other deep craters.
Planetary scientists have long wondered about exactly where Ceres formed. If it formed in the outer Solar system, then it must have migrated into position billions of years ago. If it formed in place, then that raises the question of how it could have become enriched with the icy ammonium-rich materials.
A cutaway showing the surface and interior of dwarf planet Ceres: a thick outer crust (ice, salts, hydrated minerals), a salt-rich liquid layer (brine), and a rocky “mantle” (hydrated rock). Courtesy: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA

Clues to Ceres’s Birthplace

Why the differing suggestions about where Ceres formed? Let’s look more deeply at those ammonium-rich deposits for an answer. They tend to form in very cold environments. That’s why people assumed that Ceres formed in the outer Solar System. That’s where frozen ammonium ice is most stable. In warmer environments (such as closer to the Sun), it evaporates. So, it makes sense to think that Ceres formed out where it was colder and then somehow migrated to the Asteroid Belt.
However, if the ice was part of a rocky planetesimal, the location might not matter so much. Inside the rock, the ice would be insulated from solar heating. Such world-forming materials exist closer to the Sun, and certainly out at the location of the Asteroid Belt. So, if they coalesced to form Ceres in situ, their encased ices would have contributed to the subsurface brine layer that today feeds the cryovolcanism. Impacts punching through the surface would release the brine, as well.
Connecting the Dots

A team led by Andreas Nathues and Ranjan Sarkar (both Dawn mission scientists) zeroed in on materials sprayed across the surface in the area of Consus Crater. It lies in Ceres’s southern hemisphere and stretches across 64 kilometers (~39 miles). The crater walls are about 4.5 kilometers (~3 miles) high and parts of them are eroded. There’s a smaller crater inside on the eastern half of Consus. Its edges appear to be “painted” with speckles of bright yellowish material, which is also spattered out nearby.
Further analysis of the Dawn data ties the ammonium on the surface with the salty brine from Ceres’ interior. Cryovolcanic activity on this world brings the ammonium-rich brine up toward the Cerean surface. Once there, it seeps into the crust, according to Andreas Nathues, former lead investigator for the Dawn mission. “The minerals in Ceres’ crust possibly absorbed the ammonium over many billions of years like a kind of sponge,” said Nathues.
Nathues and others argue that the dwarf planet’s origin does not necessarily have to be in the outer Solar System simply based on the presence of those ammonium-rich deposits. As mentioned above, they could have been part of the planetesimals in the Asteroid Belt that coalesced to build Ceres. Once it formed, Ceres experienced impacts and cryovolcanism and those actions produced the surface deposits we see today.
Evidence from the Craters

Consus Crater itself was “dug out” between 400 and 500 million years ago by a huge impact. That event exposed material from the deep, particularly the ammonium-rich layers below Consus Crater. A later impact about 280 million years ago created the smaller crater inside. The yellowish-bright speckles to the east of the smaller crater are material ejected by the second event. If those materials always existed inside Ceres, then that supports the idea that this dwarf planet formed where it is now, rather than out at the edge of the Solar System. That’s where the impacts become important, since that action exposed deeper layers, according to Dawn researcher Ranjan Sarkar.
“At 450 million years, Consus Crater is not particularly old by geological standards, but it is one of the oldest surviving structures on Ceres,” Sarkar said. “Due to its deep excavation, it gives us access to processes that took place in the interior of Ceres over many billions of years, and is thus a kind of window into the dwarf planet’s past.”
For More Information

Dwarf Planet Ceres: Origin in the Asteroid Belt?
Consus Crater on Ceres: Ammonium-enriched Brines Exchange with Phyllosilicates?
The post Actually, Ceres Might Have Formed in the Asteroid Belt After All appeared first on Universe Today.
Additive manufacturing, also known as 3D printing, has had a profound impact on the way we do business. There is scarcely any industry that has not been affected by the adoption of this technology, and that includes spaceflight. Companies like SpaceX, Rocket Lab, Aerojet Rocketdyne, and Relativity Space have all turned to 3D printing to manufacture engines, components, and entire rockets. NASA has also 3D-printed an aluminum thrust chamber for a rocket engine and an aluminum rocket nozzle, while the ESA fashioned a 3D-printed steel floor prototype for a future Lunar Habitat.
Similarly, the ESA and NASA have been experimenting with 3D printing in space, known as in-space manufacturing (ISM). Recently, the ESA achieved a major milestone when their Metal 3D Printer aboard the International Space Station (ISS) produced the first metal part ever created in space. This technology is poised to revolutionize operations in Low-Earth Orbit (LEO) by ensuring that replacement parts can be manufactured in situ rather than relying on resupply missions. This process will reduce operational costs and enable long-duration missions to the Moon, Mars, and beyond!
The Metal 3D Printer is a technology demonstrator built by an industrial team led by Airbus Defence and Space (SAS) in partnership with the ESA’s Directorate of Human and Robotic Exploration. It was launched to the ISS in late January and installed in the European Drawer Rack aboard the ESA’s Columbus Laboratory Module by European astronaut Andreas Mogensen. The printer became operational by the following June, and the first 3D metal shape was produced by August. With the first metal component built, the ESA plans to create three more as part of the experiment.
These four samples will then be sent to Earth for quality analysis and testing. Two will be sent to the ESA’s European Space Research and Technology Centre (ESTEC) in the Netherlands, a third to the Technical University of Denmark (DTU), and the fourth to the ESA’s European Astronaut Centre (EAC) in Cologne, where it will be integrated into the LUNA facility—a lunar analog environment designed for astronaut training. The availability of ISM will significantly reduce the challenges of resupplying spacecraft as they travel to the Moon, Mars, and other locations in deep space.
For long-duration missions on the lunar surface, the ability to print machine parts and ship them directly from LEO will reduce the number of launches needed to sustain operations there. As for Mars, the ability to manufacture replacement parts, repair equipment, and construct specific tools on demand will ensure a measure of autonomy for mission crews and reduce their reliance on resupply missions sent from Earth. This is especially important given the limited launch windows to Mars (every 26 months) and the time it takes to make a one-way transit (6 to 9 months).
NASA is also pursuing an ISM project aboard the ISS with the help of its commercial partners through the Marshall Space Flight Center (MSFC), with additional support provided by the physics-based modeling group at NASA’s Ames Research Center. These efforts began in 2014 when NASA launched the first 3D printer (manufactured by Made In Space, Inc.) to the ISS. This technology demonstrator used the fused filament fabrication (FFF) process to create objects out of plastic and proved that 3D printing could work in a microgravity environment.
This was followed by the creation of the Additive Manufacturing Facility (AMF), which can print using a variety of materials. These devices allowed for the creation of the first 3D-printed tools in space, including a plastic wrench, a ratchet wrench, and more. In 2019, NASA added the ReFabricator experiment to the ISS, which was developed by Tethers Unlimited to create 3D-printed parts using recycled plastic materials. However, the ESA’s technology demonstrator is the first to successfully print a metal component in microgravity conditions.
Artist’s impression of Artemis astronauts conducting science operations on the Moon. Credit: NASA

The experiments will not stop there. In 2021, NASA sent another 3D printer to the ISS, the Redwire Regolith Print (RRP), designed to fashion construction materials out of lunar regolith. They are also investigating how Moon rover wheels can be 3D-printed on the lunar surface and how Martian rocks and minerals could be used to manufacture whatever future missions will need in situ. In collaboration with the University of Texas at El Paso (UTEP) and Youngstown State University (YSU), NASA is also considering how batteries could be 3D printed using lunar or Martian resources.
The potential applications for this technology are almost limitless and are integral to all plans for human expansion beyond Low Earth Orbit (LEO).
Further Reading: ESA
The post Metal Part 3D Printed in Space for the First Time appeared first on Universe Today.
Peanuts! Get your peanuts here! The Solar System has been passing out peanuts lately in the form of two different oddly shaped asteroids that recently passed by Earth, and both look like over-sized peanuts. The latest peanut-shaped asteroid pass was on September 16, 2024, when the near-Earth asteroid 2024 ON came within 1 million kilometers (about 620,000 miles) of Earth, or 2.6 times the Earth-Moon distance. Radar imaging revealed the asteroid is peanut-shaped because it is actually a contact binary – which means it is made of two smaller objects touching each other. NASA says the two rounded lobes are separated by a pronounced neck, with one lobe about 50% larger than the other.
In total, 2024 ON measures about 350 meters (382 yards) long. The radar could resolve features down to about 3.75 meters across on the surface, including brighter boulders. NASA says about 14% of asteroids in this size range (larger than about 200 meters (660 feet)) are contact binaries.
It’s a bird, it’s a plane, it’s a… peanut?

This nutty asteroid is about as long as the Eiffel Tower is tall. It was imaged by our Goldstone radar as it safely passed Earth at a distance of 2.8M miles (4.6M km). https://t.co/66hy0ehsPe

(P.S. it's #NationalPeanutDay!)
Just last month, on August 18-19, 2024, the other “peanut” passed by our planet. Asteroid 2024 JV33 appears to also be a contact binary with two rounded lobes, one lobe larger than the other, and is about 300 meters (980 feet) long, about as long as the Eiffel Tower is tall. Imagery showed that asteroid 2024 JV33 rotates once every seven hours. It safely passed Earth at a somewhat greater distance than 2024 ON, at 4.6 million km (2.8 million miles), about 12 times the distance between the Moon and Earth.
Both asteroids were captured in a series of radar images obtained by the Deep Space Network’s Goldstone Solar System Radar near Barstow, California. One of the principal techniques for studying asteroids up close is radar – known as planetary radar. While astronomers can study the Universe by capturing light from stars, planets, and galaxies, they can also study nearby objects by shining radio light on them and analyzing the signals that echo back. Planetary radar can reveal incredibly detailed information about our planetary neighbors.
“When astronomers are studying light that is being made by a star, or galaxy, they’re trying to figure out its properties,” said Patrick Taylor, radar division head for the National Radio Astronomy Observatory, in an interview I did with him earlier this year. “But with radar, we already know what the properties of the signals are, and we leverage that to figure out the properties of whatever we bounced the signals off of. That allows us to characterize planetary bodies – like their shape, speed, and trajectory. That’s especially important for hazardous objects that might stray too close to Earth.”
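As a rough illustration of the timescales involved, here is a tiny Python calculation of the round-trip travel time of a radar echo, using the approximate flyby distances quoted above. It assumes nothing beyond the speed of light and is not part of the Goldstone team’s actual processing.

```python
# A quick feel for planetary radar timing: the antenna transmits a signal and
# waits for the echo, and the round-trip light-travel time sets how long that
# wait is. Distances below are the approximate flyby distances from the article.
C = 299_792_458.0  # speed of light, m/s

def round_trip_seconds(distance_km):
    """Round-trip light-travel time to an object at the given distance."""
    return 2.0 * distance_km * 1000.0 / C

for name, d_km in [("2024 ON (~1 million km)", 1.0e6),
                   ("2024 JV33 (~4.6 million km)", 4.6e6)]:
    print(f"{name}: echo returns after ~{round_trip_seconds(d_km):.1f} s")
```

The echoes come back within seconds to half a minute, which is part of what makes nearby asteroids such convenient radar targets.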
An animation of the radar images showing the rotation of asteroid 2024 ON. Credit: NASA/JPL.

2024 ON was discovered by the Asteroid Terrestrial-impact Last Alert System (ATLAS) on Mauna Loa in Hawaii on July 27. Asteroid 2024 JV33 was discovered by the Catalina Sky Survey in Tucson, Arizona, on May 4.
NASA labels objects larger than 492 feet (150 meters) that come within 4.6 million miles (7.5 million kilometers) of Earth “potentially hazardous objects,” so scientists are monitoring 2024 JV33 for potential danger even though they don’t expect the asteroid to pose a threat in the future.
The post NASA Watches a Peanut-Shaped Asteroid Drift Past Earth appeared first on Universe Today.
We are all familiar with our one Moon, but other planets have different numbers of moons; Mercury has none, Jupiter has 95, and Mars has two. A new paper proposes that Mars may actually have had a third, larger moon. Why? The red planet has a triaxial shape, which means it bulges, just as Earth does, but along a third axis as well. The paper suggests a massive moon could have distorted Mars into this shape.
Celestial bodies that orbit planets or dwarf planets are known as moons. They vary significantly in size from just a few kilometres to several thousand kilometres. Earth’s Moon (notice the capital ‘M’) is the moon everyone is familiar with, but there are many fascinating moons in the outer Solar System, from the largest moon, Ganymede, to the icy ocean world Europa or Titan with its methane lakes. Even Mars has two moons: Phobos and Deimos.
Phobos and Deimos, photographed here by the Mars Reconnaissance Orbiter, are tiny, irregularly-shaped moons that are probably strays from the main asteroid belt. Credit: NASA

In a paper published by Michael Efroimsky of the US Naval Observatory in Washington, the shape of Mars is explored with a view to assessing the likelihood of a third moon. Efroimsky explains that the triaxial nature of Mars is noticeable through the equatorial ellipticity produced by the Tharsis Rise. Another, less noticeable bulge is located almost opposite the Tharsis Rise, in the Syrtis Major Planum region.
Olympus Mons, the Tharsis Bulge trio of volcanoes, and Valles Marineris from ISRO’s Mars Orbiter Mission. Note the clouds and south polar ice cap. Credit: ISRO

The paper proposes that the peculiar bulge shape of Mars was caused by two different elements. The initial shape was caused by a massive moon in orbit around the young and pliable Mars. It was in a synchronous or captured orbit, so the same face of Mars was always pointing toward the moon. Under the constant tug of gravity, a triaxial ellipsoid shape evolved. A triaxial ellipsoid is shaped like a rugby ball, but with the three axes of different lengths. The longest axis was aligned with the moon, while the others were forged by other tidal effects.
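For a sense of what “synchronous” means here, a short Kepler’s-third-law calculation gives the orbital radius at which a moon’s period matches the planet’s rotation. This is only a sketch: it uses Mars’s present-day rotation period, whereas the rotation rate of early Mars, when the proposed moon existed, is not known.

```python
# Minimal sketch: the radius of a synchronous orbit around Mars, i.e. the
# distance at which a moon's orbital period equals the planet's rotation
# period, from Kepler's third law. Uses today's rotation period purely for
# illustration; early Mars likely rotated at a different rate.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_MARS = 6.417e23    # mass of Mars, kg
T_ROT = 88_642.0     # Mars's sidereal rotation period today, s (~24.62 h)

def synchronous_radius(mass, period):
    """Orbital radius (from the planet's center) with orbital period = period."""
    return (G * mass * period**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

r = synchronous_radius(M_MARS, T_ROT)
print(f"Synchronous orbit radius for present-day Mars: ~{r / 1000:.0f} km")
```

For today’s rotation rate this works out to roughly 20,000 km from the planet’s center, well beyond the orbits of Phobos and Deimos.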
The second element of the development of the shape of Mars relates to the convection processes under its surface. After the triaxial ellipsoid shape developed, the tidally raised regions became more prone to uplift driven by convection, tectonic and volcanic activity. The activity slowly enhanced the triaxial ellipticity seen today.
Efroimsky demonstrates that a moon of less than a third of the mass of our Moon, in a synchronous orbit around Mars, was capable of creating the initial triaxiality (this is my new favourite word!). The research also showed that the asymmetry of the equator was significant if the synchronous moon existed while Mars still had magma oceans, and weaker if the moon showed up at the solidification stage.
In order for the second element to be confirmed, further research is required. However, Efroimsky believes the tidal deformations could very easily oscillate and generate heat. A moon in an elliptical but synchronous orbit would appear to oscillate east/west around the same region of sky. This would enhance the tidal deformation and internal heating of the system, giving credence to Efroimsky’s theory that Mars did indeed once have a third, larger moon.
Source : A synchronous moon as a possible cause of Mars’ initial triaxiality
The post Did Mars Once Have a Third, Larger Moon? appeared first on Universe Today.
The Hubble Deep Field and its successor, the Hubble Ultra-Deep Field, showed us how vast our Universe is and how it teems with galaxies of all shapes and sizes. They focused on tiny patches of the sky that appeared to be empty and revealed the presence of countless galaxies. Now, astronomers are using the Hubble Ultra-Deep Field and follow-up images to reveal the presence of a large number of supermassive black holes in the early Universe.
This is a shocking result because, according to theory, these massive objects shouldn’t have been so plentiful billions of years ago.
The Hubble Ultra-Deep Field (HUDF) was released in 2004 and required almost one million seconds of exposure over 400 of the telescope’s orbits. Over the years, the same region has been imaged at other wavelengths, and the data have been updated and refined in other ways.
The Hubble has re-imaged the region multiple times, and astronomers have compared the new images to older images and identified more SMBHs from the Universe’s early times.
The results are in a paper titled “Glimmers in the Cosmic Dawn: A Census of the Youngest Supermassive Black Holes by Photometric Variability,” which was published in The Astrophysical Journal Letters. Matthew Hayes, an associate professor in the Department of Astronomy at Stockholm University, Sweden, is the lead author.
Supermassive Black Holes (SMBHs) sit in the center of large galaxies like ours. While the hole itself isn’t visible, material being drawn into the hole collects in an accretion disk. As that material heats, it gives off light as an active galactic nucleus (AGN). Since black holes feed sporadically, only a portion of them were visible in the original HUDF. By re-imaging the same field at different times, the Hubble captured additional SMBHs that weren’t originally visible.
Our understanding of the ancient Universe and how it and its galaxies evolved depends on several factors. One of them is an accurate count of AGN. AGN can be difficult to spot, and this method overcomes some of the obstacles.
AGN can emit X-ray, radio, and other emissions, but they don’t always stand out. “The challenge to this field comes from the fact that identifying AGN at the luminosity regimes of typical galaxies is observationally difficult,” the authors write. “This leads to SMBHs probably being undercounted, with potentially large numbers going unnoticed among the ostensibly star-forming galaxy population at high-z.”
The authors’ photometric variability method circumvents that. Since AGN accrete material at variable rates, observing changes in output from AGN is a better method of determining how many there are. “Here, we argue that the photometric variability that results from changes in the mass accretion rate of SMBHs can provide a completely independent and complementary probe of AGN,” Hayes and his co-authors write. “Monitoring for variability selects AGN from imaging data directly by phenomena related to the SMBH, without any biases of photometric preselection (color, luminosity, compactness, etc).”
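As a toy illustration of the statistical idea behind variability selection, the sketch below compares a source’s flux at two epochs and flags changes that exceed the photometric uncertainties. The fluxes, errors, and threshold are made-up numbers for illustration; the actual analysis relies on PSF-matched difference imaging across many epochs and careful treatment of systematics, not this simple two-epoch test.

```python
# Toy version of variability selection: compare a source's measured flux at
# two epochs and ask whether the change is larger than the photometric
# uncertainties allow. All numbers here are invented for illustration.
import math

def variability_sigma(flux1, err1, flux2, err2):
    """Significance of a flux change between two epochs, in standard deviations."""
    return abs(flux1 - flux2) / math.sqrt(err1**2 + err2**2)

# Hypothetical catalog: (source id, flux epoch 1, error, flux epoch 2, error)
sources = [
    ("galaxy A", 120.0, 4.0, 122.0, 4.0),   # consistent with no variation
    ("galaxy B", 90.0, 3.5, 120.0, 3.5),    # significant brightening -> AGN candidate
]

THRESHOLD = 5.0  # require a large, significant change before flagging a candidate
for name, f1, e1, f2, e2 in sources:
    s = variability_sigma(f1, e1, f2, e2)
    flag = "candidate AGN" if s > THRESHOLD else "not significant"
    print(f"{name}: {s:.1f} sigma -> {flag}")
```

The appeal of the approach, as the authors note, is that it selects sources by behavior tied directly to the accreting black hole, rather than by color or brightness cuts.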
This figure from the research article shows how effective photometric variability can be at detecting SMBH. It shows the photometric variability of two objects found in the field: 1051264 at z = 2 (upper panels) and 1052126 at z = 3.2. Image Credit: Hayes et al. 2024.

The new paper presents preliminary results and reports the detection of eight interesting targets that display variability. Three of the eight are probably supernovae, two are clear AGN at about z = 2–3, and three more are likely AGN at redshifts greater than 6.
These findings are significant because they impact our understanding of black holes, how they form, and their place in the history of the Universe.
Astronomers understand how stellar-mass black holes form. They also believe that supermassive black holes grow so massive through mergers with other black holes. They’re even making progress in finding the in-between black holes called intermediate-mass black holes (IMBHs).
Since astronomers think that SMBHs grow through mergers, there should be more of them in the modern Universe and comparatively few, if any, in the ancient Universe. There simply hadn’t been enough time for enough mergers to take place to create SMBHs. That’s why there are alternate theories to explain black holes in the early Universe.
Astronomers theorize that a different type of star existed in the early universe. These massive, pristine stars could only form in the conditions that dominated the early Universe. They could’ve collapsed and become massive black holes.
Another theory suggests that massive gas clouds in the early Universe could have collapsed directly into black holes. Yet another theory suggests that so-called ‘primordial black holes’ could have formed in the first seconds after the Big Bang through purely speculative mechanisms.
The Hubble Ultra Deep Field with annotation showing the location of a supermassive black hole. Image Credit: Hayes et al. 2024.

The new observations should help clarify some of these ideas.
“The formation mechanism of early black holes is an important part of the puzzle of galaxy evolution,” said study lead author Hayes. “Together with models for how black holes grow, galaxy evolution calculations can now be placed on a more physically motivated footing, with an accurate scheme for how black holes came into existence from collapsing massive stars.”
“These sources provide a first measure of nSMBH in the reionization epoch by photometric variability,” the authors explain in their paper (nSMBH is the number density of supermassive black holes). They say the sources identified in their work indicate the largest black hole population ever reported for these redshifts. “This SMBH abundance is also strikingly similar to estimates of nSMBH in the local Universe,” the authors write.
Some theoretical models suggest that there were large numbers of AGN in the reionization epoch. The JWST shows us that there seem to be more SMBHs and AGN than astronomers thought. By finding more SMBHs and AGN, this research is adding to our understanding of black holes and the evolution of the Universe.
But there’s still more work to be done. The researchers think that a larger sample of AGN at high redshifts is needed to reduce uncertainties and strengthen their results, and the JWST can help. “JWST is required to push to detection of fainter AGN via variability,” the authors explain, adding that it would take years of monitoring for the space telescope to do so.
This work also underlines the HST’s ongoing contribution to astronomy. It may not be as powerful as the JWST, but it has the benefit of many years of observations already under its belt and keeps proving its worth as a powerful observatory in its own right.
As the authors note: “In contrast, HST’s legacy of deep NIR imaging already stretches back about 15 yr, providing an excellent baseline for monitoring.”
The post The Early Universe Had a Lot of Black Holes appeared first on Universe Today.
If you are going to look for intelligent life beyond Earth, there are few better candidates than the TRAPPIST-1 star system. It isn’t a perfect choice. Red dwarf stars like TRAPPIST-1 are notorious for emitting flares and hard X-rays in their youth, but the system is just 40 light-years away and has seven Earth-sized worlds. Three of them are in the potentially habitable zone of the star. They are clustered closely enough to experience tidal forces and thus be geologically active. If intelligent life arises easily in the cosmos, then there’s a good chance it exists in the TRAPPIST-1 system.
But finding evidence of intelligent life on a distant planet is difficult. Unless Mr. Mxyzptlk or the Great Gazoo wants to talk about your car’s extended warranty, any signal we detect will likely be subtle, similar to the stray radio signals we emit from Earth. So the challenge is to distinguish actual signals from aliens, known as technosignatures, from the naturally occurring emissions of stars and planets. Recently a team used the Allen Telescope Array to capture 28 hours of TRAPPIST-1 signals in an effort to find the elusive aliens.
The study began with a few assumptions. The biggest one was to presume that if TRAPPIST-1 has an intelligent civilization, it is likely spread across more than one world. Given how compact the system is, that isn’t too outlandish. Getting from one world to another wouldn’t be much more difficult than it is for us to get to the Moon. With that assumption, the team then assumed that the worlds would transmit radio messages between each other. Since the signals would need to traverse interplanetary distances, they would be the strongest and clearest technosignatures in the system. So the team focused on signals during a planet-planet occultation (PPO). That is when two planets line up from our vantage point. During a PPO, any signal sent from the far planet to the closer planet would spill over and eventually reach us.
Illustration of a PPO event. Credit: Tusay, et al.

With 28 hours of observation data in hand, the team filtered the data down to more than 11,000 candidate signals, ones stronger than the expected range for natural sources. Then, using computer models of the system, they determined seven possible PPO events and further narrowed things down to about 2,200 potential signals occurring during a PPO window. From there they went on to determine whether any of those signals were statistically unusual enough to suggest an intelligent origin. The answer to that was sadly no.
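To illustrate the geometry of a PPO window, here is a toy Python model with circular, coplanar, edge-on orbits: two planets are “aligned” when their sky-projected positions coincide while the farther planet sits behind the nearer one along our line of sight. The periods and orbital radii are approximate published values for TRAPPIST-1 b and c, and the tolerance is arbitrary; the actual study used full orbital solutions, so treat this purely as a sketch.

```python
# Toy illustration of finding planet-planet occultation (PPO) windows for two
# planets on circular, coplanar, edge-on orbits. Periods and radii are
# approximate values for TRAPPIST-1 b and c; tolerance and phases are arbitrary.
import math

# (name, orbital period in days, orbital radius in AU) -- approximate
PLANETS = [("TRAPPIST-1b", 1.51, 0.0115),
           ("TRAPPIST-1c", 2.42, 0.0158)]

def sky_position(period, a, t, phase0=0.0):
    """Return (x, y): x is the sky-projected offset from the star, y is the
    distance toward the observer (both in AU), for time t in days."""
    theta = 2.0 * math.pi * (t / period) + phase0
    return a * math.sin(theta), a * math.cos(theta)

def find_ppo_steps(days=30.0, step_minutes=10.0, tolerance_au=5e-4):
    """Scan time steps and collect moments when the two planets are aligned."""
    aligned, t = [], 0.0
    dt = step_minutes / (60.0 * 24.0)
    while t < days:
        x1, y1 = sky_position(*PLANETS[0][1:], t)
        x2, y2 = sky_position(*PLANETS[1][1:], t)
        # Aligned if projected positions match and planet 2 is behind planet 1.
        if abs(x1 - x2) < tolerance_au and y2 < y1:
            aligned.append(t)
        t += dt
    return aligned

steps = find_ppo_steps()
print(f"Toy model: {len(steps)} aligned time steps found in 30 days")
```

Even this crude geometry shows why the windows are brief and why precise orbital models are needed to know when to look.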
Alas, if there are aliens in the TRAPPIST-1 system, we haven’t found them yet. But that result shouldn’t diminish the study. It is the longest continuous survey of the system to date, which is pretty cool. And it’s kind of amazing that we’ve reached the point where we’re able to do this kind of study at all. We are actively searching known exoplanets in detail.
Reference: Tusay, Nick, et al. “A Radio Technosignature Search of TRAPPIST-1 with the Allen Telescope Array.” arXiv preprint arXiv:2409.08313 (2024).
The post SETI Scientists Scan TRAPPIST-1 for Technosignatures appeared first on Universe Today.