How would detecting methane help astronomers identify whether exoplanets, or even exomoons, have life as we know it, or even as we don’t know it? This is what a recent study published in The Astronomical Journal hopes to address, as a team of researchers led by NASA’s Goddard Space Flight Center investigated how a method called BARBIE (Bayesian Analysis for Remote Biosignature Identification on exoEarths) could be used on a future space mission to detect methane (CH4) on Earth-like exoplanets at optical (visible) and near-infrared (NIR) wavelengths. This study builds on past studies using BARBIE, known as BARBIE 1 and BARBIE 2, and has the potential to help scientists and engineers develop new methods for finding life beyond Earth and throughout the cosmos.
Here, Universe Today discusses this incredible study with Natasha Latouf, who is a PhD Candidate in the Department of Physics and Astronomy at George Mason University and lead author of the study, regarding the motivation behind the study, significant results, potential follow-up studies, next steps for BARBIE, the significance of detecting methane on Earth-like exoplanets, and if Natasha thinks we’ll ever find life on Earth-like exoplanets. Therefore, what was the motivation behind the study?
Latouf tells Universe Today, “We developed the BARBIE methodology in order to quickly investigate large amounts of parameter space and make informed decisions about the resultant observational trade-offs. Methane is a key contextual biosignature that we would be very interested in detecting, especially with other biosignatures like O2.”
As its name states, BARBIE uses what’s known as Bayesian inference, a statistical method that updates the probability of a hypothesis as new data arrive, meaning the probabilities change with each additional input. As noted, this work builds off previous studies involving BARBIE, which examined exoplanets at optical wavelengths using planetary parameters including surface pressure, surface albedo, and gravity, along with water (H2O), oxygen (O2), and ozone (O3) abundances. However, those results indicated that only oxygen-rich atmospheres were observable at optical wavelengths, with the authors noting the parameter space was too limited. With this work, known as BARBIE 3, the team added NIR wavelengths and CH4 to broaden the parameter space. Therefore, what were the most significant results from this study?
“The most significant results from this study is the interesting interplay between H2O and CH4 in the near-infrared (NIR),” Latouf tells Universe Today. “While we knew that the spectral features H2O and CH4 overlap heavily in the NIR, and would probably cause some issues with detectability, what we didn’t realize was how much that effect mattered. In fact, we find that at sufficiently high CH4, the signal-to-noise ratio (SNR) required to strongly detect H2O shoots up, and the same vice versa. Essentially, we need to be careful before claiming a planet has no H2O or CH4, because if both are present, we might be missing one! There are follow up studies happening currently, led by my fantastic post-bac Celeste Hagee, studying how the detectability of biosignatures in the NIR changes if we add CO2 into the mix!”
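For readers curious what Bayesian updating looks like in practice, here is a minimal toy sketch. This is not BARBIE’s actual code, and the likelihood numbers are invented purely for illustration:

```python
# Toy illustration of Bayesian updating (not BARBIE's actual code).
# Hypothesis H: "the atmosphere contains CH4". Each observation either
# matches a CH4 absorption feature (True) or does not (False).

def bayes_update(prior, likelihood_if_h, likelihood_if_not_h):
    """Return P(H | data) given P(H) and the two likelihoods."""
    evidence = prior * likelihood_if_h + (1 - prior) * likelihood_if_not_h
    return prior * likelihood_if_h / evidence

# Start undecided, then fold in three observations, each assumed to have
# a 90% chance of showing the feature if CH4 is present, 20% otherwise.
p = 0.5
for _ in range(3):
    p = bayes_update(p, 0.9, 0.2)

print(f"P(CH4 present | 3 detections) = {p:.3f}")  # rises to ~0.989
```

Each new observation shifts the probability, which is why more data (or higher signal-to-noise) translates directly into stronger detections.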
Along with building off previous BARBIE studies, this study focuses on contributing to NASA’s planned Habitable Worlds Observatory (HWO) mission, which was recommended by the National Academies of Sciences, Engineering, and Medicine (NASEM) Decadal Survey on Astronomy and Astrophysics 2020 and is currently planned to launch sometime in the 2040s. The goal of HWO will be to analyze 25 potentially habitable exoplanets, which contrasts with past and current exoplanet-hunting missions like NASA’s Kepler and TESS (Transiting Exoplanet Survey Satellite), whose objectives were to locate and identify as many exoplanets as possible.
Artist’s rendition for NASA’s Habitable Worlds Observatory, which is slated to launch in the 2040s with the goal of analyzing 25 potentially habitable exoplanets for biosignatures along with conducting other incredible science about our place in the cosmos. (Credit: NASA’s Goddard Space Flight Center Conceptual Image Lab)

HWO will use a combination of direct imaging to find the exoplanets and its spectroscopy instruments to analyze their respective atmospheres for biosignatures, specifically oxygen and methane. Along with identifying and analyzing potentially habitable exoplanets, its other science goals include galaxy growth, the evolution of the elements from the Big Bang until now, and our solar system and its place in the universe. Therefore, what next steps need to be taken for BARBIE to become a reality on a future exoplanet imaging mission like HWO?
“The reason why BARBIE is useful is because it provides a huge swath of information about lots of parameter space very quickly – that means we can use that data to build future telescopes!” Latouf tells Universe Today. “For instance, if we’re trying to understand whether we need a 20% or a 40% coronagraph in order to strongly detect biosignatures in the optical regime, we can look at how the 20% and 40% influences detection of biosignatures, and from there make the decision on whether the science benefit of a 40% is worth the increased cost.”
This isn’t the first time scientists have postulated that methane might be a key indicator of life on exoplanets, as a 2022 study published in the Proceedings of the National Academy of Sciences (PNAS) discussed how atmospheric methane should be considered an exoplanet biosignature and be targeted by space telescopes like NASA’s James Webb Space Telescope (JWST). Within our own solar system, methane is a key component of Saturn’s largest moon, Titan, with researchers hypothesizing that its crust could contain methane. Additionally, Mars experiences seasonal changes in methane gases that keep scientists puzzled regarding its origin. Therefore, what is the significance of identifying methane on Earth-like exoplanets?
Latouf tells Universe Today, “CH4 is a contextual biosignature – if we find sufficient amounts of CH4 and O2 in an atmosphere together, it means the atmosphere is in disequilibrium. That means that there must be something PRODUCING those levels of CH4 and O2, and depending on the abundances of each, the signs would point to some form of life behind that production.”
This study comes as the number of confirmed exoplanets currently totals 5,832, with 212 designated as terrestrial (rocky) exoplanets, or exoplanets that are Earth-sized or smaller. A primary example of terrestrial exoplanets is the TRAPPIST-1 system, which resides just over 40 light-years from Earth and is currently hypothesized to host seven Earth-sized exoplanets, with at least three orbiting in its star’s habitable zone, the range of distances from the star where liquid water can exist on a planet’s surface, as on Earth.
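The habitable zone scales with stellar brightness in a simple way: the zone sits where a planet receives roughly Earth-like sunlight, so its distance grows with the square root of the star’s luminosity. A rough sketch, where the flux limits are illustrative round numbers rather than the boundaries used by any particular study:

```python
import math

def habitable_zone_au(luminosity_solar, inner_flux=1.1, outer_flux=0.53):
    """Rough habitable-zone bounds in AU from inverse-square flux scaling.
    The flux limits (in units of Earth's insolation) are illustrative
    round numbers, not the values adopted by any particular study."""
    inner = math.sqrt(luminosity_solar / inner_flux)
    outer = math.sqrt(luminosity_solar / outer_flux)
    return inner, outer

# TRAPPIST-1 is an ultracool dwarf with roughly 0.00055 L_sun, so its
# habitable zone hugs the star at a few hundredths of an AU.
inner, outer = habitable_zone_au(0.00055)
print(f"~{inner:.3f} to {outer:.3f} AU")
```

This is why the TRAPPIST-1 planets can be "habitable" despite orbiting far closer to their star than Mercury does to the Sun.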
The closest known terrestrial exoplanet to Earth is Proxima Centauri b, which is 4.24 light-years from Earth and orbits within its star’s habitable zone despite an orbital period of only 11.2 days. However, this also means Proxima Centauri b is blasted by ultraviolet radiation, meaning its surface might not be suitable for life as we know it. Therefore, does Latouf believe we will ever find life on Earth-like exoplanets, and which Earth-like exoplanets are particularly interesting to her?
“In my opinion, I think that we will,” Latouf tells Universe Today. “Will that happen in my lifetime? That I’m not sure of – but I do believe we’re going to find life eventually! Although it’ll sound boring, the most Earth-like planet I’m interested in is…Earth. We have this wonderful gift in this planet, with all the exact right conditions. We need to be making sure we’re preserving it and understanding our own planet before we dive into the search for others!”
For now, BARBIE remains on the drawing board, but it demonstrates the tireless commitment of the scientific community to improving upon previous designs with the goal of answering whether life exists beyond Earth and throughout the cosmos. Going forward, the authors note that future work will continue to enhance BARBIE’s capabilities, including detecting all molecules across HWO’s entire wavelength range, adding ultraviolet to the optical and NIR. They also plan to test whether coronagraphs, which block the light from a star to reveal and better analyze orbiting exoplanets, are suitable for identifying molecules in an exoplanet’s atmosphere.
Latouf concludes by telling Universe Today, “I want to emphasize that it’s very easy to see a completed paper and think to yourself, especially as an early career, ‘I could never do that.’ BARBIE was a project that was created by a team – sure, I put my special branding on it and did the work, but the project was born of open collaboration and communication. The process of doing the work for BARBIE 1, 2, and 3 took about 3.5 years, and many, many setbacks. This work is hard, it’s not easy, and no one finds it easy. All this to say – if you’re working on something, and looking at others thinking you can’t do it like they can, just know: they’re learning and growing too, and science is never as easy as it looks.”
Is methane the correct biosignature to identify life as we know it on exoplanets and how will BARBIE help the continued search for life beyond Earth in the coming years and decades? Only time will tell, and this is why we science!
As always, keep doing science & keep looking up!
The post Is Methane the Key to Finding Life on Other Worlds? appeared first on Universe Today.
In the more than 60 years since the Space Age began, humans have sent more than 6,740 rockets to space. According to the ESA’s Space Debris Office, this has resulted in 56,450 objects in orbit; about 36,860 of these objects are regularly tracked and maintained in a catalog, while 10,200 are still functioning. The rest is a combination of spent rocket stages, defunct satellites, and pieces of debris caused by unused propellant exploding and orbital collisions. This is leading to a cascade effect known as Kessler Syndrome, where the amount of debris in orbit will lead to more collisions and more debris.
What’s more, the situation is only projected to get worse, since more launches are expected with every passing year. Last year, space agencies and commercial space companies conducted a record-breaking 263 launches, with the U.S. (158) and China (68) leading the way. And with future break-ups occurring at historic rates of 10 to 11 per year, the number of debris objects in orbit will continue to increase. According to a new study by a team from the University of British Columbia (UBC), this also means that debris falling to Earth will have a 1 in 4 chance per year of entering busy airspace.
Ewan Wright, a doctoral student in UBC’s Interdisciplinary Studies Graduate Program, led the research. He was joined by Associate Professor Aaron Boley of the UBC Department of Physics and Astronomy and the co-director of The Outer Space Institute (OSI) at UBC, and Professor Michael Byers, the Canada Research Chair in Global Politics and International Law at the UBC Department of Political Science. The paper detailing their findings, “Airspace closures due to reentering space objects,” recently appeared in Scientific Reports, a journal maintained by Nature Publishing.
Artist’s impression of the orbital debris problem. Credit: UC3M

Traditionally, the discussion of space junk and the Kessler Syndrome has focused on how debris in orbit will pose a hazard for future satellites, payloads, and current and future space stations. In 2030, NASA and its many partnered space agencies plan to decommission the International Space Station (ISS) after thirty years of continuous service. However, this situation will also mean that more debris will be deorbiting regularly, not all of which will completely burn up in Earth’s atmosphere.
While the chance of debris hitting an aircraft is very low (one in 430,000, according to their paper), the UBC team’s research highlights the potential for disruption to commercial air flights and the additional costs this will incur. The situation of more launches and more hazards is illustrated perfectly by the “rapid unscheduled disassembly” (RUD) of the SpaceX Starship on January 16th, during its seventh orbital flight test. The explosion, which happened shortly after launch, caused debris to rain down on the residents of the Turks and Caicos. Said Wright in a UBC News release:
“The recent explosion of a SpaceX Starship shortly after launch demonstrated the challenges of having to suddenly close airspace. The authorities set up a ‘keep out’ zone for aircraft, many of which had to turn around or divert their flight path. And this was a situation where we had good information about where the rocket debris was likely to come down, which is not the case for uncontrolled debris re-entering the atmosphere from orbit.”
A similar situation happened in 2022, when the spent core stage of a Chinese Long March 5B (CZ-5B) rocket weighing about 20 metric tons (22 U.S. tons) prompted Spanish and French aviation authorities to close parts of their airspace. If spent stages and other payloads have a low enough orbit, they can reenter Earth’s atmosphere uncontrolled, and large portions may make it to the ground. In addition to the record number of launches last year, there were also 120 uncontrolled rocket debris re-entries, while more than 2,300 spent rocket stages are still in orbit.
Debris from the SpaceX Starship launched on January 16th, spotted over the Turks and Caicos Islands.

According to the International Air Transport Association, passenger numbers are expected to increase by almost 7% this year. With rocket launches and commercial flights increasing at their current rate, Wright and his colleagues say that action must be taken to mitigate the potential risks. As part of their study, the team selected the busiest day and location for air traffic in 2023, which was in the skies above Denver, Colorado, with one aircraft for every 18 square km (~7 mi²). They then paired this with the probability of spent rocket stages reentering Earth’s atmosphere (based on a decade of data) above the flights.
With this as their peak, they calculated the probability of rocket debris reentering the atmosphere over different air traffic density thresholds. Their results showed that for regions experiencing 10% of peak air traffic density or higher, there was a 26% chance per year of deorbited rocket debris entering that airspace. “Notably, the airspace over southern Europe that was closed in 2022 is only five percent of the peak,” said Wright. “Around the world, there is a 75-per-cent chance of a re-entry in such regions each year.”
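The team’s statistical machinery is more sophisticated, but the basic link between a yearly re-entry rate and a yearly probability can be sketched with a Poisson assumption. The rate below is back-solved from the article’s quoted 26% figure, purely for illustration:

```python
import math

# If uncontrolled re-entries over some class of airspace follow a
# Poisson process with rate lam (events per year), the chance of at
# least one event in a year is 1 - exp(-lam).

def annual_probability(rate_per_year):
    return 1 - math.exp(-rate_per_year)

def rate_from_probability(p_per_year):
    return -math.log(1 - p_per_year)

# The quoted 26% yearly chance over high-density airspace corresponds
# to roughly one re-entry every 3.3 years over such regions.
rate = rate_from_probability(0.26)
print(f"implied rate: {rate:.2f}/yr -> once every {1 / rate:.1f} years")
```

The same conversion explains why lower-density regions (a larger total area) reach a 75% yearly chance: the implied event rate there is several times higher.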
At present, whenever orbital debris reenters the atmosphere around busy airspace, aviation authorities will respond by diverting flight paths, closing airspace, or taking the risk of allowing flights to continue. “But why should authorities have to make these decisions in the first place? Uncontrolled rocket body re-entries are a design choice, not a necessity,” said Dr. Boley. “The space industry is effectively exporting its risk to airlines and passengers.”
One possibility is to design rocket stages to reenter the atmosphere in a controlled way so they can crash into the ocean far away from busy air traffic lanes. However, this solution requires collective international action. “Countries and companies that launch satellites won’t spend the money to improve their rocket designs unless all of them are required to do so,” said Dr. Byers. “So, we need governments to come together and adopt some new standards here.”
Further Reading: UBC, Scientific Reports
The post Space Junk Could Re-Enter the Atmosphere in Busy Flight Areas appeared first on Universe Today.
The idea of Dyson spheres has been around for decades. When Freeman Dyson explored the concept, he acknowledged that they may not be a physical sphere but could instead be a swarm of satellites in a spherical configuration around a star. The challenge with a solid sphere is that its orbit will not be stable, leading to its destruction. A new paper casts that conclusion in a new light, however, proposing a way that a rigid sphere could be stable after all: in a binary star system where the mass ratio between the two objects is small, the sphere may be stable.
The Dyson sphere is a theoretical megastructure first proposed by physicist Freeman Dyson in 1960 as a method to harness the energy output of a star. The concept may take the form of a massive shell, or of a swarm or network of solar-collecting satellites circling a star to capture and utilise its energy, potentially providing virtually limitless power. Because Dyson acknowledged that the construction of a solid sphere around a star is impractical due to immense material and stability challenges, a more feasible design involves a Dyson swarm, a collection of orbiting solar power stations.
Freeman Dyson speaking at the Long Now Foundation.

The idea of a solid sphere has taken a back seat over recent years, and indeed studies have focussed on searches for satellite swarms. The acceptance that such a solid structure is not stable has been supported by other studies. In 1856, James Clerk Maxwell showed that Saturn’s rings, too, could not possibly be a solid uniform structure: the gravitational interaction between the ring and the planet would result in instability. The same was thought to be true for a Dyson sphere. That was until Colin R. McInnes published his findings in the Monthly Notices of the Royal Astronomical Society.
Saturn and its system of rings, acquired by the Cassini probe. Credit: NASA/JPL-Caltech

McInnes argues that a solution lies within the circular restricted three-body problem, a classical problem from celestial mechanics. At its core, it describes the motion of a small body (such as an asteroid) under the gravitational influence of two larger objects (like the Sun and Jupiter) which are in circular orbits around their common centre of mass. The smaller object’s mass is negligible, so its presence has no significant impact on the motion of the two larger bodies.
In such a system, there are five equilibrium points known as Lagrange points. Three of these (L1, L2, and L3) are unstable, but two (L4 and L5) are stable, provided the mass ratio is small, as with Jupiter and the Sun. Here, an object will remain in a stable orbit. There are extensions and more complicated models to consider, where, for example, radiation pressure has an impact on the stability of a system.
McInnes finds that there are configurations that could be stable for a sphere or ring after all but only under specific conditions. The first occurs if the two primary masses in the system are in orbit around their common centre of mass and a large uniform ring encloses the smaller mass. Of perhaps more interest is that McInnes suggests even a sphere could be stable if it encloses the smaller of the two masses.
The results of the study offer an enticing glimpse into a universe where Dyson spheres may not be confined to science fiction, and where there may be stellar systems scattered across the cosmos in which advanced civilisations have harnessed the energy of one of their local stars.
Source : Ringworlds and Dyson spheres can be stable
The post There’s a Way to Make Ringworlds and Dyson Spheres Stable appeared first on Universe Today.
Roughly 4.6 billion years ago, the Sun was born from the gas and dust of a nebula that underwent gravitational collapse. The remaining gas and dust settled into a protoplanetary disk that slowly accreted to form the planets, including Earth. About 4.5 billion years ago, our planet was impacted by a Mars-sized body (Theia), which led to the formation of the Moon. According to current theories, water was introduced to Earth and the inner planets by asteroids and comets that permeated the early Solar System.
The timing of this event is of major importance since the introduction of water was key to the origin of life on Earth. Exactly when this occurred has been a mystery for some time, but astronomers generally thought water had arrived early during Earth’s formation. According to a recent study by a team led by scientists from Rutgers University-New Brunswick, water may instead have arrived during “late accretion,” the final stages of Earth’s formation. These findings could seriously affect our understanding of when life first emerged on Earth.
The team was led by Katherine Bermingham, an associate professor in the Department of Earth and Planetary Sciences at Rutgers-New Brunswick and the University of Maryland. She was joined by researchers from Clemson University, the Research Centre for Astronomy and Earth Sciences (CSFK), the Department of Lithospheric Research, the Centre for Planetary Habitability (PHAB), and the Institute for Earth Sciences. Their findings are described in a paper, “The non-carbonaceous nature of Earth’s late-stage accretion,” in Geochimica et Cosmochimica Acta.
Artist’s impression of the giant impact that shaped the Earth and created the Moon.

According to what scientists have learned from life on Earth, three ingredients are essential to putting the process in motion. These are water, energy, and the basic building blocks of organic chemistry – carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur – collectively called CHNOPS. Bermingham is a cosmogeochemist, and she and her associates are dedicated to the study of the chemical composition of matter in the Solar System. This largely consists of analyzing Earth rocks and materials deposited by meteorites and other extraterrestrial sources.
In so doing, they hope to learn more about the origin and evolution of the Solar System and its rocky planets. A major aspect of this is knowing when and where the basic ingredients for life originated and how they found their way to Earth. For their study, Bermingham and her team examined meteorites obtained from the Smithsonian National Museum of Natural History that belong to the “NC” group. These meteorites’ composition suggests they formed in the inner Solar System, where conditions were drier.
This sets them apart from the “CC” group, which likely formed in the outer Solar System, where water and other volatiles were more abundant. The team extracted isotopes of molybdenum, a trace element also essential for human health, from these meteorites and analyzed them using mass spectrometry and a new analytical method they developed. This element is thought to have been delivered to Earth at about the same time the Moon formed, an event previously thought to have supplied a significant amount of Earth’s water. As Bermingham explained in a Rutgers University press release:
“When water was delivered to the planet is a major unanswered question in planetary science. If we know the answer, we can better constrain when and how life developed. The molybdenum isotopic composition of Earth rocks provides us with a special window into events occurring around the time of Earth’s final core formation, when the last 10% to 20% of material was being assembled by the planet. This period is thought to coincide with the Moon’s formation.”
A piece of the iron meteorite Campo del Cielo, one of the samples measured in the study. Credit: Katherine Bermingham

They then compared the composition of these meteorites’ isotopes to Earth rocks obtained by field geologists from Greenland, South Africa, Canada, the United States, and Japan. Their analysis showed that the Earth rocks were more similar to meteorites originating in the inner Solar System (NC). As Bermingham said:
“Once we gathered the different samples and measured their isotopic compositions, we compared the meteorites’ signatures with the rock signatures to see if there was a similarity or a difference. And from there, we drew inferences. We have to figure out from where in our solar system Earth’s building blocks – the dust and the gas – came and around when that happened. That’s the information needed to understand when the stage was set for life to begin.”
The finding is significant since it indicates that Earth did not receive as much water from the Moon-forming impact as previously theorized. Instead, the data supports the competing school of thought that water was delivered to Earth in smaller portions late in its formation history and after the Moon was formed. “Our results suggest that the Moon-forming event was not a major supplier of water, unlike what has been thought previously,” said Bermingham. “These findings, however, permit a small amount of water to be added after final core formation, during what is called late accretion.”
Further Reading: Rutgers University, Geochimica et Cosmochimica Acta
The post Water Arrived in the Final Stages of Earth's Formation appeared first on Universe Today.
The James Webb Space Telescope (JWST) has been giving us a fabulous new view of the universe since its launch, and this new image shows the protostar HH30 in amazing detail. HH30 was first discovered using the Hubble Space Telescope, but this Herbig-Haro object, embedded in a dark molecular cloud, is a perfect target for JWST. The image shows the protoplanetary disk seen edge-on, with a conical outflow of gas and dust and a narrow jet blasting out into space.
The JWST is arguably the most advanced space observatory ever built. It was launched on December 25, 2021 and orbits the Sun at the second Lagrange point, about 1.5 million kilometres from Earth. It has a 6.5-meter gold-coated mirror and powerful infrared instruments which can peer through dust to study the formation of stars, galaxies, and even exoplanet atmospheres. It has already given us amazing images of deep space to reveal galaxies from the early universe.
Artist’s impression of the James Webb Space Telescope

Recently, JWST has been used to study the protostar HH30. It’s a young star system located about 450 light-years away in the constellation Taurus and is embedded in the dark cloud LDN1551. At its centre lies a newborn star embedded in a dense disk of gas and dust, which fuels its formation.
HH30 is a Herbig-Haro object: a small, bright nebula of the kind found in star-forming regions. Such nebulae are created when high-speed jets of ionized gas from newborn stars collide with surrounding interstellar material. They are typically located near protostars and are often aligned along the axis of bipolar outflows. As the jets travel through space at hundreds of kilometres per second, they create shock waves that heat the surrounding gas, causing it to glow at visible and infrared wavelengths. Herbig-Haro objects tend to be transient, evolving over a few thousand years as the jets interact with changing environments.
The system is best known for its spectacular bipolar jets, which shoot out from the protostar at high speeds. Observations from the Hubble Space Telescope have revealed a stunning silhouette of the dusty disk, seen edge-on, obscuring the central star while allowing astronomers to study the complex processes of star and planet formation.
The team of astronomers combined images from JWST, HST and the Atacama Large Millimetre Array (ALMA) so that they could study the appearance of the disk in multiple wavelengths. The observations have been wonderfully captured in this new image that has been released as Picture of the Month. HH30 is seen in unprecedented detail.
This image of NASA’s Hubble Space Telescope was taken on May 19, 2009, after deployment during Servicing Mission 4. Credit: NASA

JWST is known for its infrared capabilities, which allowed the team to track the location of sub-millimetre-sized grains of dust, but ALMA allowed the team to explore further. Using ALMA, the team studied millimetre-sized grains of dust, revealing that these, unlike the smaller dust grains, were confined to a narrow region in the plane of the disc. The smaller grains were found to be much more widespread.
The study concluded that larger grains of dust seem to migrate within the disc and settle into a thin layer. It’s thought this marks an important stage in the formation of planetary systems, with the grains clumping together to form small rocks and, ultimately, planets.
Not only did the study reveal the behaviour of grains of dust in HH30, but it also uncovered a number of different structures embedded within one another. A narrow, high-velocity jet was seen emerging from the central disc, surrounded by a wider, more cone-shaped outward flow of gas. Not only does this study help us learn more about how exoplanetary systems form, but it also helps us understand more about the origins of our own Solar System.
Source : Webb investigates a dusty and dynamic disc
The post An Amazing JWST Image of a Protostar appeared first on Universe Today.
Hypervelocity stars (HVSs) were first theorized to exist in the late 1980s. In 2005, the first discoveries were confirmed. HVSs travel much faster than normal stars, and sometimes, they can exceed the galactic escape velocity. Astronomers estimate that the Milky Way contains about 1,000 HVSs, and new research shows that some of these originate in the Milky Way’s satellite galaxy, the Large Magellanic Cloud (LMC).
Does the LMC have a supermassive black hole (SMBH) that’s ejecting some HVSs into the Milky Way?
Most stars in the Milky Way travel at about 100 km/s, whereas HVSs can travel as quickly as about 1,000 km/s. Established thinking, backed up by existing evidence, says that HVSs originate in the Galactic Centre. Astronomers think they come from binary star systems that get too close to Sgr A*, the Milky Way’s SMBH. In this scenario, one of the binary stars is captured by the black hole, and the other is ejected as an HVS. This is called the Hills mechanism. In fact, some of the original evidence supporting the existence of Sgr A* was based on fast-moving stars in the galactic center.
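A rough sense of the speeds the Hills mechanism can produce comes from its standard order-of-magnitude scaling. The binary parameters below are invented for illustration and are not taken from the study:

```python
import math

# Order-of-magnitude Hills-mechanism ejection speed (Hills 1988 scaling):
#   v_ej ~ sqrt(2 * G * m_b / a) * (M_bh / m_b) ** (1/6)
# where m_b is the total binary mass, a the binary separation, and
# M_bh the black hole mass. Parameter values are illustrative only.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def hills_ejection_speed(binary_mass_msun, separation_au, bh_mass_msun):
    """Approximate ejection speed in km/s for a tidally disrupted binary."""
    m_b = binary_mass_msun * M_SUN
    a = separation_au * AU
    v = math.sqrt(2 * G * m_b / a) * (bh_mass_msun / binary_mass_msun) ** (1 / 6)
    return v / 1000.0

# A 2-solar-mass binary at 0.1 AU disrupted by a ~600,000 M_sun black hole
# comes out well above typical galactic escape speeds.
print(f"{hills_ejection_speed(2.0, 0.1, 6e5):.0f} km/s")
```

The weak one-sixth power on the black hole mass is why even a modest SMBH, like the one proposed for the LMC, can fling stars at hypervelocity speeds: most of the speed comes from the binary’s own tight orbit.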
New research submitted to The Astrophysical Journal shows that a surprising number of the Milky Way’s HVSs come not from the galactic centre but from the LMC. It’s titled “Hypervelocity Stars Trace a Supermassive Black Hole in the Large Magellanic Cloud.” The lead author is Jiwon Han, a grad student at the Harvard and Smithsonian Center for Astrophysics who studies galactic archaeology.
In 2006, researchers published the results of a survey of HVSs in the Milky Way. That survey detected 21 HVSs that were unbound B-type main sequence stars in the Milky Way’s outer halo. Their properties were consistent with stars ejected from the galactic center by the Hills mechanism. In this new research, the astronomers revisited these stars. They had some help that wasn’t available in 2006: the ESA’s Gaia spacecraft.
Gaia is our star-measuring superhero. It sits at the Sun-Earth L2 point, where it measures two billion objects, mostly stars, and tracks their positions and velocities. Han and his colleagues revisited the 21 HVSs using the proper motions provided by Gaia. Gaia, a mission that has driven substantial progress in our understanding of the Milky Way, came through again.
“We find that half of the unbound HVSs discovered by the HVS Survey trace back not to the Galactic Center, but to the LMC,” Han and his co-authors write.
That motivated them to dig deeper. The researchers constructed a model based on simulated stars that were ejected by an SMBH in the LMC. “The predicted spatial and kinematic distributions of simulated HVSs are remarkably similar to the observed distributions,” the authors write.
This pie chart shows the results of the team’s analysis of the HVSs. “Among the HVSs that can be confidently classified, 9 out of 16 stars originate from the LMC center,” the authors explain. Image Credit: Han et al. 2025.
Could there be another root cause of the HVSs? Supernova explosions can eject stars, and so can dynamical gravitational interactions. Neither can explain them, according to the authors. “We find that the birth rate and clustering of LMC HVSs cannot be explained by supernova runaways or dynamical ejection scenarios not involving an SMBH,” the authors explain.
One key piece of evidence supporting a black hole in the LMC is an overdensity. Called the Leo overdensity, it’s a region toward the Leo constellation that contains a higher density of stars than the surrounding regions. Han and his co-researchers say their model also produces this same overdensity. An SMBH with about 600,000 solar masses in the LMC is hurling stars into the Milky Way, some of which are HVSs, and some of which now reside in the overdensity.
The researchers’ model predicts the existing overdensity of stars in the Milky Way toward the Leo constellation, called the Leo overdensity. “The black open circles denote the Galactic coordinates of hypervelocity stars detected in the HVS Survey, while the grey-shaded regions mark areas excluded from the survey,” the authors explain. “This model accurately reproduces the observed overdensity location, supporting the hypothesis of an SMBH in the LMC as a source of these stars.” Image Credit: Han et al. 2025.

Their model shows that almost all of the stars in the Leo overdensity came from the LMC and its SMBH, which the authors describe as “a curious result.” To understand it better, they dug into how the Hills mechanism works.
“The main ingredients of the Hills Mechanism are: (1) the mass of LMC, (2) binary star masses, (3) binary separations prior to tidal disruption, (4) pericenter distances of the binary orbit around the SMBH,” the authors write. These are inputs into the Hills mechanism, and the outputs are ejection probabilities and velocities for individual stars.
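As a back-of-the-envelope illustration (not the authors’ actual model), the standard Hills-mechanism scaling gives the ejection speed as roughly v_ej ≈ √(2Gm_b/a) · (M_BH/m_b)^(1/6), where m_b is the total binary mass, a its separation, and M_BH the black hole mass. The binary parameters below are assumed for illustration only:

```python
import math

# Order-of-magnitude Hills-mechanism ejection speed:
#   v_ej ~ sqrt(2 * G * m_b / a) * (M_bh / m_b)**(1/6)
# Standard scaling (ignoring order-unity prefactors); inputs are illustrative.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def hills_ejection_velocity(m_binary_msun, sep_au, m_bh_msun):
    """Approximate ejection speed (km/s) for a binary disrupted by an SMBH."""
    m_b = m_binary_msun * M_SUN
    a = sep_au * AU
    m_bh = m_bh_msun * M_SUN
    v = math.sqrt(2 * G * m_b / a) * (m_bh / m_b) ** (1 / 6)
    return v / 1000.0  # m/s -> km/s

# A ~2 Msun binary at 0.1 AU separation meeting a 6e5 Msun SMBH (LMC-like)
v_kms = hills_ejection_velocity(2.0, 0.1, 6e5)
print(f"ejection velocity ~ {v_kms:.0f} km/s")
```

Even with this modest black hole mass, the scaling comfortably produces speeds in the ~1,000 km/s hypervelocity regime.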
For ejected stars, the researchers integrated their orbits forward for 400 million years to see where they would go. “We finally ‘observe’ the resulting population of stars from the Galactic rest frame at the present day and apply a selection function to match the observational constraints of the HVS Survey,” the authors write.
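The kind of forward orbit integration described can be sketched with a simple leapfrog integrator. This toy uses a bare point-mass potential and made-up launch conditions; the study itself uses a far more realistic Milky Way + LMC potential:

```python
import math

# Leapfrog (kick-drift-kick) integration of a star's orbit in a toy
# point-mass "Galaxy". Illustrative only; the study uses a realistic
# Milky Way + LMC potential. Units: kpc, Myr, solar masses.

G = 4.498e-12   # gravitational constant in kpc^3 Msun^-1 Myr^-2
M_GAL = 1e12    # toy enclosed mass, Msun (assumed for illustration)

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -G * M_GAL * x / r3, -G * M_GAL * y / r3

def energy(x, y, vx, vy):
    return 0.5 * (vx * vx + vy * vy) - G * M_GAL / math.hypot(x, y)

def integrate(x, y, vx, vy, dt=0.05, t_end=400.0):
    """Integrate forward t_end Myr with a symplectic leapfrog scheme."""
    ax, ay = accel(x, y)
    for _ in range(int(t_end / dt)):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx;        y += dt * vy          # drift
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    return x, y, vx, vy

# A star launched radially outward from 5 kpc at 1.5 kpc/Myr (~1470 km/s),
# above the toy escape speed there, ends up far outside the Galaxy.
x, y, vx, vy = integrate(5.0, 0.0, 1.5, 0.0)
print(f"r after 400 Myr: {math.hypot(x, y):.0f} kpc")
```

Leapfrog is the usual choice here because it conserves orbital energy well over long integrations, which matters when deciding whether a simulated star is truly unbound.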
This figure illustrates some of the modelling and the results. (1) shows the LMC rest-frame velocities of stars ejected from the LMC by the SMBH. (2) shows the velocity of these stars in the rest-frame of the Milky Way. “The size of each point is proportional to the excess velocity over the local Galactic escape velocity,” the authors write. (3) shows stars that exceed the galactic escape velocity, which reveals a stream of hypervelocity stars ahead of the LMC’s orbit. (4) shows the stars that made it into the HVS Survey. Basically, the leading tip of hypervelocity stars from the LMC is the Leo overdensity. Image Credit: Han et al. 2025.
The implications of this research could be far-reaching. Current thinking says that all large galaxies contain an SMBH but that smaller galaxies don’t. There’s some evidence that smaller galaxies can harbour them, but in dwarf galaxies like the LMC, for example, the black holes may not be massive enough to qualify as actual SMBHs, depending on where the cut-off is. Additionally, they’re more difficult to detect in dwarf galaxies because they may not be actively accreting matter.
This research changes things.
It shows that a black hole’s presence alone does not generate HVSs; the motion of the host galaxy also contributes. Future studies of HVSs will need to account for galactic motion.
The study also has ramifications for our understanding of galaxy growth and evolution. If astrophysicists are missing black holes in smaller galaxies, that means our theories of galactic evolution are likely lacking consequential data.
More research into HVSs will take these results into account. Future Gaia data releases may help find more HVSs, meaning more data points, something scientists are always looking for. With that data, researchers can build more detailed models and develop more rigorous theories about how HVSs are generated.
Research: Hypervelocity Stars Trace a Supermassive Black Hole in the Large Magellanic Cloud
The post There Could Be a Supermassive Black Hole in the Large Magellanic Cloud Hurling Stars at the Milky Way appeared first on Universe Today.
We’ve only gotten one close-up view of Uranus and its moons, and it happened decades ago. In 1986, Voyager 2 performed a flyby of Uranus, passing about 81,500 km (50,600 mi) above the planet’s cloud tops. It was 130,000 km (80,000 mi) away from Uranus’ moon Ariel when it captured the leading image, which showed some unusual features that scientists are still puzzling over.
What do they reveal about the moon’s interior?
Ariel has the usual crater-pitted surface that most Solar System objects display. But its surface also has complex features like ridges, canyons, and steep banks and slopes called scarps. Research published last year suggested that these surface features and chemical deposits are caused by chemical processes inside the moon. Ariel could even have an internal ocean, according to the research.
New research published in The Planetary Science Journal digs deeper into the issue to try and understand what processes could create Ariel’s surface features. Its title is “Ariel’s Medial Grooves: Spreading Centers on a Candidate Ocean World.” The lead author is Chloe Beddingfield from Johns Hopkins University Applied Physics Laboratory (JHUAPL).
“Ariel is a candidate ocean world, and recent observations from the James Webb Space Telescope (JWST) confirmed that its surface is mantled by a large amount of CO2 ice mixed with lower amounts of CO ice,” Beddingfield and her co-researchers write in their paper. These materials should be unstable on Ariel, though, and should sublimate away into space. “Consequently, the observed constituents on Ariel are likely replenished, possibly from endogenic sources,” the authors write.
The research is centred on Ariel’s chasma-medial groove systems and how they formed. These are trenches that cut straight through the moon’s huge canyons. While previous research has suggested that the trenches are tectonic fractures, this research arrives at a different hypothesis. “We present evidence that Ariel’s massive chasma-medial groove systems formed via spreading, where internally sourced material ascended and formed new crust,” the paper states.
This Voyager 2 image of Ariel shows the names of some of the moon’s surface features. Image Credit: NASA/Jet Propulsion Lab; derivative work by Ruslik, Public Domain, https://commons.wikimedia.org/w/index.php?curid=12867133

This is similar to ocean-floor spreading on Earth, which is where new crust forms. If true, it can account for Ariel’s surface deposits of carbon dioxide ice and other carbon-bearing molecules.
“If we’re right, these medial grooves are probably the best candidates for sourcing those carbon oxide deposits and uncovering more details about the moon’s interior,” Beddingfield said in a press release. “No other surface features show evidence of facilitating the movement of materials from inside Ariel, making this finding particularly exciting.”
Ariel’s surface is dominated by three main terrain types: plains, ridged terrain, and cratered terrain. The cratered terrain is the oldest and most extensive type of terrain. The ridged terrain is the second main terrain type and is made of bands of ridges and troughs that can extend for hundreds of kilometres. The plains are the third type and are the youngest of the terrains. They’re on canyon floors and in depressions in the middle of the cratered terrain.
As far as scientists can tell, the grooves that intersect the canyons are the youngest surface features on Ariel. Previous research suggested that they result from the interplay between volcanic and tectonic processes. However, this research says otherwise: spreading could be responsible.
This image (Figure 1) from the research puts Ariel’s complex surface on full display. The locations of the three known medial grooves are shown in red. Image Credit: Beddingfield et al. 2025.

In the 1960s, scientists validated the idea of seafloor spreading on Earth, which led to the acceptance of plate tectonics. One of the main pieces of evidence for plate tectonics is the way the edges of continents like Africa and South America fit together if you “remove” the Atlantic Ocean and the intervening seafloor.
The same thing happened when Beddingfield and her colleagues “removed” the chasm floors on Ariel.
The researchers showed that when they removed the floors of the chasms, the margins lined up. This is strong evidence of spreading. “The margins of Brownie, Kewpie, Korrigan, Pixie, and Sylph Chasmata closely align when the Intermediate Age Smooth Materials (orange unit in Figure 1), which make up the chasma floors, are removed and the Cratered Plains (green unit in Figure 1) are reconstructed,” they write.
This figure from the study shows possible configurations of Ariel’s Cratered Plains before (left) and after (right) spreading occurred. Note how neatly the chasma walls line up. “Our reconstruction focuses on removing the young chasma floors, examining the offset of the Kra Chasma segments, and aligning the similarly shaped chasma walls,” the authors write. Image Credit: Beddingfield et al. 2025.

According to the research, spreading centers develop above convection cells underneath Ariel’s crust, and heat forces material upward to the crust. The material cools at the surface, forming new crust. The entire process is driven by tidal forces as Ariel orbits the much larger Uranus. This heats the moon’s interior, creating the convection. Parts of the moon’s interior cycle between heating and cooling as the moon follows its orbit, and it’s possible that internal material continuously melts and then refreezes.
“It’s a fascinating situation — how this cycle affects these moons, their evolution and their characteristics,” Beddingfield said.
Like other Solar System moons that experience tidal heating, Ariel may have an ocean under its surface. In a 2024 study, researchers proposed that another of Uranus’ moons, Miranda, could have a subsurface ocean maintained by tidal heating.
However, Beddingfield is skeptical about drawing a connection between Ariel’s grooves and a potential ocean.
“The size of Ariel’s possible ocean and its depth beneath the surface can only be estimated, but it may be too isolated to interact with spreading centers,” she said. “There’s just a lot we don’t know. And while carbon oxide ices are present on Ariel’s surface, it’s still unclear whether they’re associated with the grooves because Voyager 2 didn’t have instruments that could map the distribution of ices.”
The connection between the grooves and the materials deposited on Ariel’s surface is stronger, though. “These new results suggest a possible mechanism for emplacing fresh material and short-lived compounds, including carbon monoxide and perhaps ammonia-bearing species on the surface,” said Tom Nordheim, a co-author of this research and the 2024 paper.
“Our results indicate that medial grooves in large chasmata on Ariel are spreading centers, resulting from the exposure of subsurface material, creating new crust,” the authors summarize in their conclusion. “Thus, these features are likely geologic conduits to Ariel’s interior and could be the primary source of CO2, CO, and other volatiles detected on its surface.”
Richard Cartwright from the Johns Hopkins Applied Physics Laboratory led the 2024 study that used the JWST to identify CO ice and CO2 deposits on Ariel. To find more answers about this intriguing moon, Cartwright says we need a dedicated mission to Uranus and its moons. “We need an orbiter that can make close passes of Ariel, map its medial grooves in detail, and analyze their spectral signatures for components like carbon dioxide and carbon monoxide,” he said. “If carbon-bearing molecules are concentrated along these grooves, then it would strongly support the idea that they’re windows into Ariel’s interior.”
The authors agree that only a dedicated mission can provide answers. “The medial grooves are some of the youngest geologic features observed on Ariel, and close flybys of these features by a future Uranus orbiter are imperative to gain insight into recent geologic events and the geologic and geochemical properties of this candidate ocean world,” they write.
There’ve been many proposed missions to Uranus. NASA, the ESA, JAXA, and the CNSA (China National Space Administration) have all had proposals. NASA’s Uranus Orbiter and Probe mission would study Uranus and its moons from orbit by conducting multiple flybys of each major moon. The probe would enter Uranus’ atmosphere. However, even if selected, a plutonium shortage means the mission wouldn’t launch until the mid or late 2030s.
A graphic explaining some of the features of NASA’s proposed Uranus Orbiter and Probe mission. Image Credit: NASA.

So far, only China has firm plans to send a spacecraft to the ice giant. It will be part of their Tianwen-4 mission to Jupiter and would perform a single flyby of Uranus. The next launch windows for a mission to Uranus are between 2030 and 2034, but China’s mission isn’t scheduled until 2045.
Press Release: New Study Suggests Trench-Like Features on Uranus’ Moon Ariel May Be Windows to Its Interior
Research: Ariel’s Medial Grooves: Spreading Centers on a Candidate Ocean World
The post Uranus’ Moon Ariel has Deep Gashes, Could Reveal its Interior appeared first on Universe Today.
New research suggests an impact recently rattled Mars deeper than thought.
HiRISE images a recent impact crater in the Cerberus Fossae region, seen on March 4, 2021. Credit: NASA/MRO/HiRISE

Something really rang the Red Planet’s bell. Research involving two NASA missions—the Mars Reconnaissance Orbiter and the late InSight lander—has shed light on meteorite impacts and the seismic signals they produce. In a crucial finding, these signals may penetrate deeper inside Mars than previously thought. This could change how we view the interior of Mars itself.
The interior of Mars, and InSight’s detection of impacts versus geologic activity. Credit: NASA/JPL-Caltech.

The study comes from two papers published this week in the journal Geophysical Research Letters. The primary data comes from NASA’s InSight mission, the first dedicated geodesy mission to Mars. InSight landed in the Elysium Planitia region of Mars on November 26th, 2018, and carried the first-ever dedicated seismometer to the Red Planet. During its four years of operation, InSight detected over 1,300 ‘marsquakes’ until the mission’s end in 2022. Most were due to geologic activity, while a few were due to distant meteorite impacts. Occasionally, InSight would even see ‘land tides’ due to the passage of the moon Phobos overhead.
InSight uses its robotic arm to place a wind shield over the SEIS seismometer. Credit: NASA/JPL-Caltech.

A Distant Mars Impact

As on Earth, the detection of seismic waves gives us the opportunity to probe the interior of Mars, providing clues as to the density, depth, and thickness of the crust, mantle, and core. To be sure, impacts have been correlated to seismic waves captured by InSight in the past. A fresh crater seen by NASA’s Mars Reconnaissance Orbiter (MRO) in 2022 was correlated to an impact in the Amazonis Planitia region. But this was the first time an impact in the quake-prone Cerberus Fossae area was linked to InSight detections. The find is especially intriguing, as the area is a quarter of the way around the planet from the InSight landing site, 1,640 kilometers (1,019 miles) distant.
A wider context view of the Cerberus Fossae region on Mars, courtesy of Mars Odyssey. NASA/JPL-Caltech.

The discovery of the 21.5-meter (71-foot) crater, about the length of a semi-truck, immediately presented scientists with a mystery. The smoking-gun impact crater was more distant than expected. Typically, the Martian crust was thought to have a dampening effect on distant impacts. This means that the impact-generated waves must have taken a more direct route via a ‘seismic highway’ through the deeper mantle of the planet itself.
This discovery has key implications for what we generally think about the interior of Mars. This may also imply that our understanding and model for the planet’s interior may be due for an overhaul.
“Composition of the crust and how seismic waves from impacts travel through them is one factor,” Andrew Good (NASA-JPL) told Universe Today. “No current plans for follow-on seismometers on Mars, but there is a seismometer planned for the Moon in the near future,” said Good, in reference to the Farside Seismic Suite planned for 2026.
A New View of the Interior of Mars?

InSight team member Costantinos Charalambous of Imperial College London explains the finding in more detail, in an email to Universe Today:
The detection of this impact changes our understanding of Mars’ interior, particularly its crust and upper mantle, both immediately and in the longer term. However, in the latter case, it will take further work to know quite how!
The immediate shift in our understanding is that many more of the seismic events we detected at InSight have penetrated much deeper into the planet than we thought. Previously, we had thought that the crust would trap most of the high-frequency seismic energy, guiding it around the planet from the point of impact to InSight’s seismometer. We thought any high-frequency energy that penetrated more deeply into the mantle was quickly lost. But it now appears the Martian mantle is much better at propagating this seismic energy than we thought, allowing it to travel more quickly and farther. This tells us that the mantle has a different elemental composition than previously assumed, likely with a lower iron oxide content than earlier models predicted.
Additionally, because this impact was detected in Cerberus Fossae – a region where many recorded marsquakes likely originate – it provides a unique opportunity to distinguish seismic signatures generated by seismic activity driven by deeper, internal (tectonic) forces versus shallower, external (impact) sources.
Therefore, in the longer term, we will be re-examining the data from seismic events that we had previously assumed didn’t penetrate deeper into Mars. This work is ongoing, but these findings suggest new features of Mars’ upper mantle that we are seeking to confirm. Watch this space!
MRO’s Hunt For Impacts

Just how researchers imaged the tiny crater is the amazing second part of the story. NASA’s venerable MRO generates tens of thousands of images of the surface of Mars, mainly via the spacecraft’s onboard Context Camera. For years, researchers have used a machine-learning algorithm, developed by NASA’s Jet Propulsion Laboratory, to sift through the images for fresh impact sites that do not appear in previous frames. These areas are in turn flagged for closer scrutiny with the mission’s 0.5-meter High-Resolution Imaging Science Experiment (HiRISE) camera.
A crater cluster on Mars, one of the first spotted courtesy of the MRO AI search program. Credit: NASA/JPL-Caltech/MSSS.

To date, the team has found 123 new craters within 3,000 kilometers (1,864 miles) of the InSight landing site. Forty-nine of these (including the Cerberus Fossae impact) are potential matches with InSight seismology data.
“Done manually, this would be years of work,” says InSight team member Valentin Bickel (University of Bern, Switzerland) in a recent press release. “Using this tool, we went from tens of thousands of images to just a handful in a matter of days.”
InSight’s Legacy

InSight provided a wealth of seismology and geological information about Mars. The Seismic Experiment for Interior Structure (SEIS) instrument worked as planned. The Heat Flow and Physical Properties Package (HP^3), however, failed to reach its target depth for returning useful science about the planet’s interior. Unfortunately, no dedicated follow-on geology mission is set to head to Mars. This sort of exciting science will probably have to wait until the hoped-for crewed missions of the 2030s.
InSight was a collaborative effort between NASA, the German Space Agency (DLR) and the French Space Agency (CNES). Other international partners also participated in the ground-breaking mission.
Still, it’s great to see missions like InSight still generating scientific results, long after they’ve fallen silent.
The post A Recent Impact on Mars Shook the Planet to Its Mantle appeared first on Universe Today.
The current exoplanet census contains 5,832 confirmed candidates, with more than 7,500 still awaiting confirmation. Of those that have been confirmed, most have been gas giants ranging from Neptune-like bodies (1992) to those similar to or many times the size and mass of Jupiter and Saturn (1883). Like the gas giants of the Solar System, astronomers generally theorized that these types of planets form in the outer reaches of their star system, where conditions are cold enough for gases like hydrogen and helium and volatile compounds (water, ammonia, methane, etc.) to condense or freeze solid.
However, astronomers have noted that many of the gas giants they’ve observed orbit close to their stars; these are known as “Hot Jupiters.” This has raised questions about whether gas giants and other planets migrate after formation until they find their long-term, stable orbits. In a new study, a team from Arizona State University’s School Of Earth and Space Exploration (ASU-SESE) examined the atmospheric chemistry of several Hot and Ultra-Hot Jupiters. After examining WASP-121b, the team came to the unexpected conclusion that it likely formed close to its star.
The research was conducted by Graduate Associate Peter C. B. Smith and other members of the ASU-SESE. They were joined by exoplanet researchers from the Steward Observatory, the Italian National Institute for Astrophysics (INAF), the Trottier Institute for Research on Exoplanets (iREX), the Centre for Exoplanets and Habitability (CEH), and multiple universities. Collectively, they are part of the Roasting Marshmallows Program, and their latest research was presented in a paper appearing in The Astronomical Journal.
Members of this program are dedicated to studying the atmospheres of hot and ultra-hot Jupiters using the Immersion GRating INfrared Spectrograph (IGRINS), built by the University of Texas and the Korea Astronomy and Space Science Institute (KASI). The instrument is part of the Gemini South telescope in Chile, one of two telescopes that make up the International Gemini Observatory, funded in part by the U.S. National Science Foundation (NSF) and operated by the National Optical-Infrared Astronomy Research Laboratory (NOIRLab).
This program aims to learn more about the protoplanetary disks from which hot gas giants formed. In the past, scientists assumed that these disks – leftover rocky and icy material from the nebulae that give birth to stars – settle into gradients around their suns that allow certain types of planets to form around them. According to this theory, material closer to the star would consist largely of rocky material since volatiles would turn to vapor, while material farther from the star would consist of icy material since temperatures would be low enough for it to solidify.
Since the material in these disks varies based on the distance from their parent stars, astronomers can measure the abundance of these materials in planetary atmospheres based on their spectral signatures. As a result, they can determine how far from a parent star its planets may have formed. Ordinarily, measuring this ratio requires multiple observations in both visible and infrared light (for rocky and gaseous elements, respectively). However, the team was able to obtain measurements of WASP-121b that determined the ratio of rocky to gaseous elements, thanks to it being an ultra-hot Jupiter.
As a result, the planet’s atmosphere contains vaporized rock and gaseous materials that were detectable using the IGRINS instrument alone and with a single observation! This instrument allowed the team to obtain high-resolution spectral data from WASP-121b as it made a transit in front of its star. Said Smith:
“Ground-based data from Gemini South using IGRINS actually made more precise measurements of the individual chemical abundances than even space-based telescopes could have achieved. Our measurement means that perhaps this typical view needs to be reconsidered and our planet formation models revisited. The planet’s dayside is so hot that elements typically thought of as ‘metal’ are vaporized into the atmosphere, making them detectable via spectroscopy.”
This artist’s impression shows an ultra-hot exoplanet as it is about to transit in front of its host star.

The spectra showed that WASP-121b has a high rock-to-ice ratio, indicating that it accreted an excess of rocky material while forming. This suggests the planet formed closer to its star, which was quite a surprise since traditional models suggest that gas giants need much colder temperatures to form. The reason for this became obvious once Smith and his team learned several things about WASP-121b’s atmosphere. On the dayside, temperatures are so hot that rocky material and metals are vaporized into the atmosphere, while powerful winds blow these to the night side, where they condense.
This leads to WASP-121b experiencing many types of “metal rain” on its night side, a phenomenon that astronomers had previously observed. “The climate of this planet is extreme and nothing like that of Earth,” said Smith, adding that IGRINS was a major factor in his team’s detailed measurements. “Our instrument sensitivity is advancing to the point where we can use these elements to probe different regions, altitudes, and longitudes to see subtleties like wind speeds, revealing just how dynamic this planet is.”
These results may resolve the mystery of Hot Jupiters by demonstrating that gas giants need not be composed predominantly of gaseous volatile elements; they can instead contain heavier elements heated to the point that they become vapor. These findings support previous observations of gas giants that experience metal precipitation, such as WASP-76b, Kepler-7b, and KELT-9b. The team hopes to continue this work using IGRINS’ successor instrument, IGRINS-2, which was commissioned for the Gemini North telescope in Hawai‘i and is currently being calibrated for science operations.
Further Reading: NOIRLab, The Astronomical Journal
The post This Hot Jupiter Probably Formed Close to Its Star appeared first on Universe Today.
Is it possible to understand the Universe without understanding the largest structures that reside in it? In principle, probably not. In practical terms? Definitely not. Extremely large objects can distort our understanding of the cosmos.
Astronomers have found the largest structure in the Universe so far, named Quipu after an Incan measuring system. It contains a shocking 200 quadrillion solar masses.
Astronomy is an endeavour where extremely large numbers are a part of daily discourse. But even in astronomy, 200 quadrillion is a number so large it’s rarely encountered. And if Quipu’s extremely large mass doesn’t garner attention, its size surely does. The object, called a superstructure, is more than 400 megaparsecs long. That’s more than 1.3 billion light-years.
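The unit conversion behind that figure is straightforward: one parsec is about 3.26 light-years, so 400 Mpc comes out to roughly 1.3 billion light-years:

```python
# Convert Quipu's quoted size from megaparsecs to light-years.
LY_PER_PC = 3.2616   # light-years per parsec

size_mpc = 400       # "more than 400 megaparsecs"
size_gly = size_mpc * 1e6 * LY_PER_PC / 1e9
print(f"{size_mpc} Mpc ≈ {size_gly:.2f} billion light-years")
```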
A structure that large simply has to affect its surroundings, and understanding those effects is critical to understanding the cosmos. According to new research, studying Quipu and its brethren can help us understand how galaxies evolve, help us improve our cosmological models, and improve the accuracy of our cosmological measurements.
The research, titled “Unveiling the largest structures in the nearby Universe: Discovery of the Quipu superstructure,” has been accepted for publication in the journal Astronomy and Astrophysics. Hans Bohringer from the Max Planck Institute is the lead author.
“For a precise determination of cosmological parameters, we need to understand the effects of the local large-scale structure of the Universe on the measurements,” the authors write. “They include modifications of the cosmic microwave background, distortions of sky images by large-scale gravitational lensing, and the influence of large-scale streaming motions on measurements of the Hubble constant.”
Superstructures are extremely large structures that contain groups of galaxy clusters and superclusters. They’re so massive they challenge our understanding of how our Universe evolved. Some of them are so massive they break our models of cosmological evolution.
Quipu is the largest structure we’ve ever found in the Universe. It and the four other superstructures the researchers found contain 45% of the galaxy clusters, 30% of the galaxies, and 25% of the matter, and occupy a volume fraction of 13%.
The image below helps explain why they named it Quipu. Quipu are recording devices made of knotted cords, where the knots contain information based on colour, order, and number. “This view gives the best impression of the superstructure as a long filament with small side filaments, which initiated the naming of Quipu,” the authors explain in their paper.
This figure from the new research is a wedge diagram in declination and distance of the Quipu superstructure. The distance is in units of megaparsecs. The red dots show the superstructure members, and the black lines show the friends-of-friends linking. The grey dots show non-member clusters. The two dashed lines give the distances for redshifts of 0.03 and 0.06.
In their work, Bohringer and his co-researchers found Quipu and four other superstructures within a distance range of 130 to 250 Mpc. They used X-ray galaxy clusters to identify and analyze the superstructures in their Cosmic Large-Scale Structure in X-rays (CLASSIX) Cluster Survey. X-ray galaxy clusters can contain thousands of galaxies and lots of very hot intracluster gas that emits X-rays. These emissions are the key to mapping the mass of the superstructures. X-rays trace the densest regions of matter concentration and the underlying cosmic web. The emissions are like signposts for identifying superstructures.
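The friends-of-friends linking shown in the wedge diagram can be sketched in miniature: any two clusters closer than a chosen linking length are “friends,” and friendship chains transitively into one structure. The coordinates and linking length below are invented for illustration:

```python
import math

# Toy friends-of-friends grouping: clusters closer than `link_length`
# are "friends", and friendship chains transitively into one structure.
# Coordinates and linking length here are made up for illustration.

def friends_of_friends(points, link_length):
    """Return a list of groups, each a list of point indices."""
    n = len(points)
    parent = list(range(n))

    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= link_length:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj    # merge the two chains

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two well-separated clumps of "clusters" (positions in Mpc, made up)
pts = [(0, 0), (5, 0), (9, 3), (100, 100), (104, 98)]
print(friends_of_friends(pts, link_length=10))
```

With a 10 Mpc linking length the first three points chain into one structure and the last two into another; shrink the linking length and the chains break apart, which is why the choice of linking length is central to superstructure catalogs.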
This figure from the research shows galaxy distribution in density gradients. The density ratio to the average density is shown by six contour levels: 0 – 0.23 (black), 0.23 – 0.62 (dark blue), 0.62 – 1.13 (light blue), 1.13 – 1.9 (grey), 1.9 – 3.7 (olive), and > 3.7 (white). The clusters of the five superstructures are overplotted with filled black circles. Image Credit: Bohringer et al. 2025.

The authors point out that “the difference in the galaxy density around field clusters and members of superstructures is remarkable.” This could be because field clusters are populated with less massive clusters than those in the superstructure rather than because the field clusters have lower galaxy density.
Regardless of the reasons, the mass of these superstructures wields enormous influence on our attempt to observe, measure, and understand the cosmos. “These large structures leave their imprint on cosmological observations,” the authors write.
The superstructures leave an imprint on the Cosmic Microwave Background (CMB), which is relic radiation from the Big Bang and key evidence supporting it. The CMB’s properties match our theoretical predictions with near-surgical precision. The superstructures’ gravity alters the CMB as it passes through them according to the Integrated Sachs-Wolfe (ISW) effect, producing fluctuations in the CMB. These fluctuations are foreground artifacts that are difficult to filter out, introducing interference into our understanding of the CMB and, hence, the Big Bang.
The full-sky image of the temperature fluctuations (shown as colour differences) in the cosmic microwave background is made from nine years of WMAP observations. These are the seeds of galaxies, from a time when the universe was under 400,000 years old. Credit: NASA/WMAP

The superstructures can also impact measurements of the Hubble constant, a fundamental value in cosmology that describes how fast the Universe is expanding. While galaxies are moving further apart due to expansion, they also have local velocities, called peculiar velocities or streaming motions. These need to be separated from expansion to understand expansion clearly. The great mass of these superstructures influences these streaming motions and distorts our measurements of the Hubble constant.
The research also notes that these massive structures can alter and distort our sky images through large-scale gravitational lensing. This can introduce errors in our measurements.
On the other hand, simulations based on the Lambda CDM model produce superstructures like Quipu and the four others. Lambda CDM is our standard model of Big Bang cosmology and accounts for much of what we see in the Universe, like its large-scale structure. “We find superstructures with similar properties in simulations based on Lambda-CDM cosmology models,” the authors write.
It’s clear that these superstructures are critical to understanding the Universe. They hold a significant portion of its matter and affect their surroundings in fundamental ways. More research is needed to understand them and their influence.
“Interesting follow-up research on our findings includes, for example, studies of the influence of these environments on the galaxy population and evolution,” the authors write in their conclusion.
According to the study, these superstructures won’t persist forever. “In the future cosmic evolution, these superstructures are bound to break up into several collapsing units. They are thus transient configurations,” Bohringer and his co-researchers explain.
“But at present, they are special physical entities with characteristic properties and special cosmic environments deserving special attention.”
The post Astronomers Find the Largest Structure in the Universe and Name it “Quipu” appeared first on Universe Today.
Asteroid sampling missions are getting increasingly complex. Recent announcements about the existence of amino acids in the sample OSIRIS-REx returned from Bennu in 2023 will likely result in more interest in studying the small bodies strewn throughout our solar system. Engineering challenges abound when doing so, though, including one of the most important – how to collect a sample from the asteroid. A new paper from researchers at the China Academy of Space Technology looks at a gas-driven sampling system they believe could hold the key to China’s future asteroid sample return mission.
There are three main categories of successful asteroid sampling missions – shooting, drilling, and puffing. The original Hayabusa mission, which returned its sample in 2010, was an example of the first method – it fired a bullet into the asteroid’s surface after performing a “soft landing.” It used the force of the bullet’s impact to shoot fragments into a collection system. This has the advantage of not requiring the spacecraft to be anchored to the asteroid but isn’t very effective at breaking through hard surfaces.
The puffing method, which OSIRIS-REx used during its visit to Bennu, has the same advantages and disadvantages. Instead of a bullet, it puffed nitrogen at the surface as part of its Touch-and-Go Sample Acquisition Mechanism (TAGSAM).
Fraser discusses the discovery of amino acids in the Bennu sample.
Rosetta took a third approach, though it did not successfully collect any sample due to problems with its lander, Philae. Philae had a drill called the SD2, intended to bore into the surface of comet 67P/Churyumov-Gerasimenko. It also included a sampling tube that extended through the drill to collect the material. This might have worked, but it required significant power and force on the lander.
In the new paper, the researchers took a hybrid approach to developing their regolith sampling system. It utilizes a pneumatic drill that punches a hole in the regolith rather than spinning to drill one directly. After the hole is punched, the system retracts the drill bit and pushes gas down into the hole to force some of the particles up in a sample collector.
According to the team’s simulations and experiments, this method works well in both microgravity and regular gravity environments. It also operated with various granular materials, ranging from hard marble to fine sand. More pressure (i.e., more gas) was needed to collect larger particles, but any future mission can estimate the necessary gas reserves well in advance.
Sampling system test setup.
There is a good chance that a future mission will use a sampling system like this. Much of the paper discusses how China is rapidly becoming a space scientific power and how the country’s interest in asteroid resources is growing. The research was funded by several governmental organizations in China, and the country has already shown an interest in asteroid sample return, with the Tianwen-2 mission planned for launch later this year. This hybrid sampling approach might someday be adopted, though it remains to be seen if it will stand the test of a rendezvous with an actual asteroid.
Learn More:
Zhao et al – Gas-Driven Regolith-Sampling Strategy for Exploring Micro-Gravity Asteroids
UT – The Building Blocks for Life Found in Asteroid Bennu Samples
UT – Asteroid Samples Returned to Earth Were Immediately Colonized by Bacteria
UT – OSIRIS-REx’s Final Haul: 121.6 Grams from Asteroid Bennu
Lead Image:
Image of the regolith sampling system under test.
Credit – Zhao et al.
The post Hybrid Gas/Drill Asteroid Sampler Could Improve Collection Amounts appeared first on Universe Today.
What happens when one galaxy shoots a bigger galaxy right through the heart? Like a rock thrown into a pond, the smashup creates a splash-up of starry ripples. At least that’s what happened to the Bullseye galaxy, which is the focus of observations made by NASA’s Hubble Space Telescope and the Keck Observatory in Hawaii.
In a study published today by The Astrophysical Journal Letters, a research team led by Yale University’s Imad Pasha identifies nine visible ring-shaped ripples in the structure of the galaxy, formally known as LEDA 1313424. The galaxy is 567 million light years from Earth in the constellation Pisces.
The Bullseye now holds the record for the most rings observed in a galaxy. Previous observations of other galaxies showed a maximum of two or three rings.
“This was a serendipitous discovery,” Pasha said in a news release. “I was looking at a ground-based imaging survey and when I saw a galaxy with several clear rings, I was immediately drawn to it. I had to stop to investigate it.”
Eight separate rings could be spotted in the image captured by Hubble’s Advanced Camera for Surveys. The ninth ring was identified in data from the Keck Observatory. Follow-up observations also helped the team figure out which galaxy plunged through the Bullseye’s core. It’s the blue dwarf galaxy visible to the center-left of LEDA 1313424 in the Hubble image.
This illustration pinpoints the nine rings in the Bullseye galaxy. Credit: NASA, ESA, Ralf Crawford (STScI)
Researchers say the current view captures the state of the Bullseye about 50 million years after the blue dwarf blasted through its core. Even though the two galaxies are separated by 130,000 light-years, a thin trail of gas still links them together. “We’re catching the Bullseye at a very special moment in time,” said Yale Professor Pieter G. van Dokkum, a study co-author. “There’s a very narrow window after the impact when a galaxy like this would have so many rings.”
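The quoted figures imply a rough average separation speed: 130,000 light-years covered in about 50 million years. A back-of-the-envelope check (my arithmetic, not the study's):

```python
# Back-of-the-envelope: average separation speed implied by the quoted figures.
C_KM_S = 299_792.458      # speed of light, km/s
separation_ly = 130_000   # current separation, light-years
elapsed_yr = 50e6         # time since impact, years

# Light-years per year is directly a fraction of the speed of light.
speed_fraction_c = separation_ly / elapsed_yr
speed_km_s = speed_fraction_c * C_KM_S   # roughly 780 km/s
```

That works out to a few hundred km/s, a plausible relative velocity for a high-speed galaxy encounter.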
The multi-ringed shape conforms to the mathematical models for a headlong galaxy-on-galaxy collision. The blue dwarf’s impact caused galactic material to move both inward and outward, sparking multiple waves of star formation along the lines of the ripples — almost exactly as the models predicted.
“It is immensely gratifying to confirm this longstanding prediction with the Bullseye galaxy,” van Dokkum said.
The models suggest that the first two rings in the Bullseye formed quickly and spread out in wider circles. The timing for the formation of additional rings was staggered as the blue dwarf plowed through the bigger galaxy’s core. The research team suspects that there was once a 10th ring to the galaxy, but that it faded out and is no longer detectable. That ring might have been as much as three times farther out than the widest ring seen in the Hubble image.
This artist’s conception shows our Milky Way galaxy at left, and the Bullseye galaxy at right. Credit: NASA, ESA, Ralf Crawford (STScI)
Compared to our own Milky Way galaxy, the Bullseye is a big target. It’s about 250,000 light-years wide, as opposed to 100,000 light-years for the Milky Way.
Billions of years from now, the Milky Way and the neighboring Andromeda galaxy are due to collide, but computer simulations suggest that the dynamics of that collision will be more complex than merely dropping a cosmic rock into a pond, or shooting an arrow through a bull’s-eye.
Fortunately, astronomers won’t have to wait billions of years to see more spot-on galactic collisions. “Once NASA’s Nancy Grace Roman Space Telescope begins science operations, interesting objects will pop out much more easily,” van Dokkum said. “We will learn how rare these spectacular events really are.”
In addition to Pasha and van Dokkum, the authors of the Astrophysical Journal Letters study, “The Bullseye: HST, Keck/KCWI, and Dragonfly Characterization of a Giant Nine-Ringed Galaxy,” include Qing Liu, William P. Bowman, Steven R. Janssens, Michael A. Keim, Chloe Neufeld and Roberto Abraham.
The post Bullseye! Hubble Spots Ripples in Space From a Galaxy Collision appeared first on Universe Today.
The ESA’s Gaia mission mapped the positions and velocities of stars with extreme precision by measuring about one billion stars multiple times. It created a massive 3D map of the Milky Way that will pay scientific dividends for years to come. Gaia is based on astrometry, the study of the positions and movements of celestial objects.
Gaia also tentatively detected some planets, and new radial velocity studies have now confirmed the existence of one of them. The planet is an important outlier in exoplanet science.
Gaia wasn’t designed to be a planet finder, but it found some anyway. Since the spacecraft was built to measure stars, the planets it found are massive, and they orbit low-mass stars. These planets tug on their stars, and Gaia can detect the way the stars wobble. However, follow-up observations were required to confirm them.
Now, researchers have used the NEID spectrograph on the WIYN 3.5-meter Telescope at the NSF’s Kitt Peak National Observatory to measure these stellar wobbles and the planet and brown dwarf that cause them via radial velocity. Their results are in a paper published in The Astronomical Journal. Its title is “Gaia-4b and 5b: Radial Velocity Confirmation of Gaia Astrometric Orbital Solutions Reveal a Massive Planet and a Brown Dwarf Orbiting Low-mass Stars.” The lead author is Gudmundur Stefansson from the Anton Pannekoek Institute for Astronomy at the University of Amsterdam.
“Gaia is more than living up to its promise of detecting planetary companions to stars with highly precise astrometry…”
Jayadev Rajagopal, co-author, NSF NOIRLab
The most recent Gaia data release contains a list of Gaia Astrometric Objects of Interest (Gaia-ASOIs). They’re stars that appear to be moving as if influenced by an exoplanet.
In a press release, lead author Stefansson said, “However, the motion of these stars is not necessarily due to a planet. Instead, the ‘star’ might be a pair of stars that are too close together for Gaia to recognize them as separate objects. The tiny shifts in position that appear to be due to a planet might actually result from the nearly perfect cancellation of the larger shifts in position of the two stars.”
Follow-up spectroscopy can do what Gaia can’t and determine if the objects are binary stars or stars and their orbiting planets. The researchers used the NEID spectrograph and two others, the Habitable-zone Planet Finder and the FIES spectrograph, to perform follow-up observations. In radial velocity work, spectrographs measure the blue-shifted and red-shifted light from stars as orbiting companions tug on them and make them wobble. It takes extreme precision to do this, and all three spectrographs are capable of it.
Astronomers used the NEID spectrograph on the WIYN 3.5-meter Telescope at Kitt Peak National Observatory (KPNO) to confirm the existence of an exoplanet and a brown dwarf first detected by the ESA’s Gaia spacecraft. Image Credit: KPNO/NOIRLab/NSF/AURA/T. Matsopoulos
The researchers examined 28 separate star systems where Gaia detected candidate exoplanets.
According to the results, 21 of the systems have no substellar companions. Instead, these 21 are binary star systems. Five others are inconclusive and require more observations and data before they can be confirmed or refuted.
However, the remaining two are confirmed: one is an exoplanet now named Gaia-4b, and one is a brown dwarf named Gaia-5b.
Gaia-4b is a massive exoplanet with about 11.8 Jupiter masses. It follows a 571-day orbit around a star with a mass of 0.644 solar masses. It has the distinction of being the first confirmed exoplanet found by Gaia. It’s also one of the most massive planets that have ever been detected orbiting a low-mass star, reflecting the observational bias inherent in Gaia’s method.
Gaia-4b orbits the star Gaia-4, which is around 244 light-years away. It is about twelve times more massive than Jupiter and has an orbital period of 570 days. It is a relatively cold gas giant planet. This artist’s impression visualizes a portion of the orbital motion as determined by Gaia’s astrometric data. The star and planet are not to scale. Image Credit: ESA/Gaia/DPAC/M. Marcussen
“It is an exciting time for both NEID and Gaia,” said Jayadev Rajagopal, a scientist at NSF NOIRLab and a co-author of the paper. “Gaia is more than living up to its promise of detecting planetary companions to stars with highly precise astrometry, and NEID is demonstrating that its long-term radial velocity precision is capable of detecting low-mass planets around those stars. With more candidate planets to come as roughly the last year of data is analyzed, this work is a harbinger of the future where Gaia discoveries of planets and brown dwarfs will need to be confirmed, or rejected, by NEID data.”
Gaia-5b is a brown dwarf, an object in between planetary mass and stellar mass. Gaia-5b has about 21 Jupiter masses and follows a highly eccentric 358-day orbit around a star with a mass of about 0.34 solar masses.
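The orbital sizes implied by those periods and stellar masses can be checked with Kepler's third law, which in units of AU, solar masses, and years reads a³ = M P². A quick sketch using the figures quoted above (the calculation neglects the companion's own mass, a small correction here):

```python
# Kepler's third law in convenient units: a[AU] = (M[Msun] * P[yr]**2)**(1/3).
# Neglects the companion's mass relative to the host star.
def semi_major_axis_au(stellar_mass_msun, period_days):
    period_yr = period_days / 365.25
    return (stellar_mass_msun * period_yr**2) ** (1.0 / 3.0)

a_gaia4b = semi_major_axis_au(0.644, 571)  # roughly 1.2 AU
a_gaia5b = semi_major_axis_au(0.34, 358)   # roughly 0.7 AU
```

Both companions therefore sit at intermediate orbital distances, roughly comparable to the inner Solar System, which is the regime Gaia's astrometry is most sensitive to.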
This study highlights how effective Gaia’s astrometric capabilities are for detecting exoplanets and brown dwarfs. It also exemplifies how different observational techniques—astrometry and radial velocity spectrometry—can work together for more robust results. The combined methods can find a wider range of substellar companion masses and orbital characteristics compared to the transit method, for example.
“If we want to understand how planets are formed, it is necessary to have a vision of how the whole planetary system is composed,” said the ESA’s Ana Heras in a separate press release. “Currently, our vision of most systems is only partial because each detection technique is efficient for a certain range of planet sizes and orbital periods. Being able to combine all techniques and data is critical to understand what planetary systems look like and to put our Solar System in context.”
Gaia-4b is an outlier in exoplanet discoveries. Finding such a massive planet around such a low-mass star is a big test for our planet formation theories. “With respect to stellar host-star mass, the occurrence of massive planets is known to decrease with decreasing stellar mass,” the authors write in their paper. “This has been connected to the fact that less massive stars tend to have less massive protoplanetary disks.” If Gaia and the NEID spectrograph and other facilities can find and confirm more of these massive planets, maybe researchers can make progress in understanding how they form.
This figure from the published study shows the masses of planets and brown dwarfs as a function of stellar host mass for stars with <0.7 solar masses and orbital periods <10,000 days. (a) Companion mass as a function of host-star mass. (b) Histogram of the points in panel (a). (c) Mass ratio as a function of host-star mass. As the figure shows, Gaia-5b and Gaia-4b straddle the Brown Dwarf Limit Line. The Jupiter Desert Region highlights the absence of planets with 1 to 10 Jupiter masses orbiting stars with 0.3 solar masses or less. Image Credit: Stefansson et al. 2025.
Astronomers expect to find more massive exoplanets and brown dwarfs in Gaia data and confirm some of them with spectrographs like NEID. Due to Gaia’s observational method, there will likely be more “outliers” in the data. These outliers are needed to help us understand planet formation and solar system architecture.
“These detections represent the tip of the iceberg of the planet and brown dwarf yield expected with Gaia in the immediate future, enabling key insights into the masses and orbital architectures of numerous massive planets at intermediate orbital periods,” the authors conclude.
The post Gaia Was Right. It Did Find a Planet. appeared first on Universe Today.
At first glance the large-scale structure of the Universe may seem to be a swarming mass of unconnected galaxies. Yet somehow, they are connected. The ‘cosmic web’ is the largest-scale structure of the Universe and consists of vast networks of interconnected filamentary structures that surround empty voids. A team of astronomers have used hundreds of hours of telescope time to capture the highest resolution image ever taken of a single cosmic filament connecting two forming galaxies. It’s so far away from us that we see it as it was when the Universe was just 2 billion years old!
Dark matter is largely invisible to us, detectable only through its gravitational interaction with other phenomena. It makes up about 85% of the matter in the universe and plays a crucial role in shaping the large-scale structure of the cosmos. It doesn’t emit, absorb, or reflect light, hence its name, and its gravitational influence holds galaxies together and forms the cosmic web—a vast, interconnected network of filaments composed of dark matter, gas, and galaxies. Scientists have been studying the cosmic web using simulations and gravitational lensing techniques to understand the nature of dark matter and its role in the evolution of the universe.
A massive galaxy cluster named MACS-J0417.5-1154 is warping and distorting the appearance of galaxies behind it, an effect known as gravitational lensing. This natural phenomenon magnifies distant galaxies and can also make them appear in an image multiple times, as NASA’s James Webb Space Telescope saw here. Two distant, interacting galaxies — a face-on spiral and a dusty red galaxy seen from the side — appear multiple times, tracing a familiar shape across the sky. NASA, ESA, CSA, STScI, V. Estrada-Carpenter (Saint Mary’s University).
One of the biggest challenges facing astronomers studying the cosmic web is that the gas has mainly been detected through its absorption of light from a more distant object. The results of such studies, however, do not help us to understand the distribution of gas in the web. Studies that focus on hydrogen, the most common element in the universe, can only detect it through a very faint glow, so previous attempts to map its distribution have failed.
The new paper was published by a team of researchers led by scientists from the University of Milano-Bicocca, with members from the Max Planck Institute for Astrophysics. The team used the Multi-Unit Spectroscopic Explorer (MUSE) on the Very Large Telescope at the European Southern Observatory in Chile. The instrument was designed to capture 3D data of astronomical objects by combining images and spectroscopic observations across thousands of wavelengths simultaneously. Even with the capabilities of MUSE, the team had to gather data over hundreds of hours to reveal sufficient detail in the filaments of the cosmic web.
ESO’s Very Large Telescope is composed of four Unit Telescopes (UTs) and four Auxiliary Telescopes (ATs). Seen here is one of the UTs firing four lasers which are crucial to the telescope’s adaptive optics systems. To the right of the UT are two ATs; these smaller telescopes are moveable and work in tandem with the other telescopes to create a unique and powerful tool for observing the Universe.
The team, led by University of Milano-Bicocca PhD student Davide Tornotti, used MUSE to study a filament that measures 3 million light-years in length. The filament connects two galaxies, each with a supermassive black hole deep in its core. The researchers demonstrated a new way of mapping intergalactic filaments, helping us understand more about galaxy formation and the evolution of the universe.
Before collecting the data, the team ran simulations of the emission expected from filaments under the current model of the universe. When they compared the simulations with the observations, the two were remarkably similar. The discovery can help us learn how galaxies in the cosmic web are fuelled, but the team asserts that they still need more data. More structures are now being uncovered as the techniques are repeated, with the goal of finally revealing how gas is distributed throughout the cosmic web.
Source : Researchers capture direct high-definition image of the “Cosmic Web”
The post Our Best Look at the Cosmic Web appeared first on Universe Today.
Revelations from the past can seem quaint once we’ve been living with them for a generation or two. That’s true of the realization that spawned SETI: the Search for Extraterrestrial Intelligence. Humanity realized that if we’re blasting radio signals out into the cosmos haphazardly, then other ETIs, if they exist, are probably doing the same.
It seems obvious now, but back then, it was a revelation. So, we set up our radio antennae and began scanning the skies.
The realization that other ETIs are probably sending out radio noise leads to the obvious question: How easily can hypothetical ETIs detect our radio signals and other technosignatures?
A fledgling space-travelling civilization similar to ours may be out there somewhere in the Milky Way. Maybe they have their own fledgling SETI program, complete with radio telescope arrays scanning the sky for the telltale signs of another technological civilization.
If there is, and if they do, from how far away could they detect our technosignatures? New research is asking that question.
The research is titled “Earth Detecting Earth: At What Distance Could Earth’s Constellation of Technosignatures Be Detected with Present-day Technology?” It’s published in The Astronomical Journal, and Sofia Sheikh is the lead author. Sheikh is affiliated with the SETI Institute, the Penn State Extraterrestrial Intelligence Center, and Breakthrough Listen at UC Berkeley.
Nikola Tesla was one of the first to suggest communicating with beings on other planets. In 1899, Tesla thought he had detected a signal from Mars. In the early part of the 20th century, Guglielmo Marconi also thought he had heard signals from Mars. These potential signals were serious enough that when Mars was closest to Earth in 1924, the USA promoted a Radio Silence Day in order to better detect signals from Mars.
We know better now. The only signals we’ll detect will be from our own Martian rovers and orbiters. However, the basic idea of searching for radio signals from other worlds was planted, and people started taking it more seriously.
In 1971, NASA considered Project Cyclops, a plan to build an array of 1500 radio dishes to scan the cosmos for signals. Although it was never funded, it helped lead to the modern SETI.
It’s a simple matter to imagine that other civilizations followed a similar path and are now searching the sky for signals. In the new research in The Astronomical Journal, Sheikh and her co-researchers try to understand how one of these civilizations could detect our technosignatures if they had the same technology as we do in 2024.
“In SETI, we should never assume other life and technology would be just like ours, but quantifying what ‘ours’ means can help put SETI searches into perspective.”
Macy Huston, co-author, Dept. of Astronomy, UC Berkeley
This is important because similar research looks for advanced ETIs that are further along the Kardashev Scale, which many researchers think is probable. However, this means researchers have to do a lot of technological extrapolation. “In this paper, we instead turn our gaze Earthward, minimizing the axis of extrapolation by only considering transmission and detection methods commensurate with an Earth 2024 level,” the authors write.
It all boils down to simple questions: Can an ETI with our current technology detect our technosignatures? If the answer is yes, which of our signatures would they detect, and from how far away?
The researchers considered several different types of technosignatures, including radio transmissions, microwave signals, atmospheric technosignatures like NO2, satellites, and even city lights. They used a theoretical, modelling-based method in their effort, and they say they’re the first to analyze these technosignatures together rather than separately.
“Our goal with this project was to bring SETI back ‘down to Earth’ for a moment and think about where we really are today with Earth’s technosignatures and detection capabilities,” said Macy Huston in a press release. Huston is a co-author and postdoc at the University of California, Berkeley, Department of Astronomy. “In SETI, we should never assume other life and technology would be just like ours, but quantifying what ‘ours’ means can help put SETI searches into perspective.”
This table is a rough timeline of human technologies across different wavelengths and multimessenger approaches. Image Credit: Sheikh et al. 2025.
Imagine a hypothetical space probe travelling toward us from this hypothetical, technologically equivalent ETI. According to the researchers, the first technosignature they’d detect would come from our effort to detect potentially hazardous asteroids that might be headed for Earth. This is our planetary radar, like the signals coming from the now-defunct Arecibo Radio Observatory. These are detectable out to about 12,000 light years from Earth. That’s about the same distance away as the Tadpole Nebula.
The hypothetical space probe would have a long way to travel before it could detect our next technosignature. When it was about 100 light-years away, it would detect signals from NASA’s Deep Space Network that’s used to communicate with spacecraft we send out into the Solar System. 100 light-years away is about the same distance away as Alpha Pictoris, the brightest star in the Pictor constellation.
The alien spacecraft would hit paydirt at about four light-years away, around the same distance as our closest stellar neighbour, Proxima Centauri. At that distance, it would detect lasers, our atmospheric NO2 emissions, and even LTE signals.
The figure below illustrates how our current technology would detect our own technosignatures and at what distances.
This figure from the research shows the maximum distances that each of Earth’s modern-day technosignatures could be detected at using modern-day receiving technology. Image Credit: Sheikh et al. 2025.
“One of the most satisfying aspects of this work was getting to use SETI as a cosmic mirror: what does Earth look like to the rest of the galaxy? And how would our current impacts on our planet be perceived,” said Sheikh. “While, of course, we cannot know the answer, this work allowed us to extrapolate and imagine what we might assume if we ever discover a planet with, say, high concentrations of pollutants in its atmosphere.”
The research also illustrates how our own technosignature footprint is growing. According to the authors, it highlights “the growing complexity and visibility of the human impact upon our planet.”
It also shows that despite some second-guessing among the SETI community, it’s probably wise to focus our search on radio waves. “In this framework, we find that Earth’s space-detectable signatures span 13 orders of magnitude in detectability, with intermittent, celestially targeted radio transmission (i.e., planetary radar) beating out its nearest nonradio competitor by a factor of 10³ in detection distance,” the authors write in their paper.
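In a simple free-space link budget, received flux falls off with the square of distance, so for a fixed receiver sensitivity the maximum detection range scales as the square root of the effective radiated power. A minimal sketch of that scaling (illustrative numbers, not the paper's detailed model):

```python
import math

# Inverse-square scaling: received flux ~ P / (4*pi*d**2), so for a fixed
# receiver sensitivity the maximum detection distance scales as sqrt(P).
def range_ratio(power_ratio):
    """How much farther a signal with `power_ratio` times the power reaches."""
    return math.sqrt(power_ratio)

# A transmitter a million times more powerful is detectable ~1000x farther:
ratio = range_ratio(1e6)
```

This is why a tightly beamed, high-power planetary radar can dominate diffuse leakage signals by three orders of magnitude in detection distance.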
The authors also point out that we can begin to understand what an ETI might surmise about us based on our technosignatures. That can also serve as a mirror through which we can see ourselves. “It is possible for ETIs to hypothesize about our culture, society, biosphere, etc., from our unintentional technosignatures, and thinking through those possible hypotheses can help us interrogate how we are presenting ourselves to the galaxy: how we organize socially, how we relate to the world around us, how we perceive and experience things, and perhaps even what we value,” the authors explain in their research.
For example, they could correctly surmise that our species has no biological capacity to detect radio signals; otherwise, our world would be an unimaginably noisy cacophony of competing signals. Or, they may infer the reverse. “Conversely, our reliance on radio waves could make it natural for an alien species to wonder if it is because we can detect them biologically!” the authors write.
As in all things SETI and technosignature related, we’re left wondering.
However, with their “Earth detecting Earth” paradigm, Sheikh and her co-authors are at least giving us another way to examine one of our most quintessential questions: Are we alone?
Press Release: Earth Detecting Earth
The post How Far Away Could We Detect… Ourselves? appeared first on Universe Today.
Burrowing under soil opens up a whole new world, especially when that soil is on other planets. Getting under the top layer of regolith on a world such as Mars could give access to a place where life may still exist, whereas, on the Moon, it could lead to discovering a water source. So, for almost 30 years, scientists have been developing robots based on that most well-known burrowing machine here on Earth – the mole. Unfortunately, the models that have made it into space so far have failed for various reasons, but that hasn’t stopped more research groups from trying to perfect their own version of a mole robot. A paper from a research group at Guangdong University of Technology in China describes their efforts and frustrations in mimicking one of nature’s more unique but capable specialists.
To be fair to the development teams working on previous versions of “mole” robots, the failure was not always their fault. The Planetary Undersurface Tool (PLUTO) robot was fitted to the Beagle 2 lander, carried to Mars by Mars Express, and had a hammering mechanism unlike anything seen in nature. It made it to Mars but was never tested because Beagle 2 failed to deploy its solar panels properly, ending the mission before it began.
InSight, another mission to Mars, carried another probe with a similar mission. The Heat Flow and Physical Properties Package (HP3) probe was designed to burrow 5m down into the Martian soil and measure the heat coming from the interior of Mars. Unfortunately, it ran into unexpected soil conditions and wasn’t able to burrow as intended—as we reported extensively. After multiple failed attempts, that part of the mission was eventually abandoned.
Moles are a common inspiration for robotics design, as described in this video from KAIST robotics lab.
In other words, burrowing on other worlds is hard. But that didn’t stop the team in Guangdong from trying. Their robot is meant to be much more similar to an actual mole than either HP3 or PLUTO was. The existing missions both used forms of drilling techniques, whereas the new robot uses actions intended to mimic what actual moles do.
There are two types of digging motions used by different types of moles. The first, and the one most commonly depicted in media, is shoveling away dirt from in front of them by using their massive forearms. Another, less commonly known method is using their huge teeth to bite away at the dirt and using their arms to push it underneath them.
We know this because much research has been done into the kinematics of mole burrowing behavior. Those assessments showed the importance of the front and hind legs for mobility and moving the dirt around the mole’s body.
Mole robot progressively burrowing.

Understanding kinematics is only the first step, and the team from Guangdong took the next step in building a prototype, which, admittedly, looks like a mole, though maybe not a cute and cuddly one. It has distinct forearms and hind legs, and its head is shaped like a mole’s, though given its lack of eyes, it almost looks more like a shell casing. Advanced electronics, power systems, and motors are hidden inside the robot’s body, allowing it to mimic the overall shape of a mole more fully.
To test their prototype, the researchers made a bed of plastic particles to simulate the particle sizes the robot would encounter in lunar or Martian regolith. The robot successfully burrowed itself into the particles. However, it had difficulty moving forward and creating the kinds of tunnels that moles are famous for. That seemed to be because doing so requires more complex coordination between the forelimbs and hind limbs than was modeled in the current iteration of the prototype.
That first prototype is a step toward a more complete model that might someday be used on another world. Future research would include developing techniques to allow it to burrow, crawl, run, and even swim, allowing the locomotion of this bot to become genuinely “multi-mode,” as the paper describes it. It might be hard to find another world off Earth that would need all those capabilities, but as biomimetic design improves, expect to see more bots shaped like cuddly and furry things back home exploring new worlds.
Learn More:
Zhang et al – Mole-Inspired Robot Burrowing with Forelimbs for Planetary Soil Exploration
UT – NASA Has Given Up on Trying to Deploy InSight’s Mole
UT – It Looks Like it’s Working! NASA InSight’s Mole is Making Progress Again Thanks to the Arm Scoop Hack
UT – InSight’s ‘Mole’ is Now Completely buried!
Lead Image:
Different orientations of the mole robot prototype.
Credit – Zhang et al.
The post Burrowing Mole-Bot Could Characterize Other Planet’s Soil appeared first on Universe Today.
The Earth is bathed in high-energy particles. Known as cosmic rays, most of them are protons striking us at nearly the speed of light. Fortunately, the atmosphere protects us from any significant harm, though the particles can strike with so much energy that they create a shower of lower energy particles that do reach Earth’s surface. That’s actually how we can detect most cosmic rays.
We aren’t entirely sure what accelerates these particles so tremendously, though there are known phenomena that can. Nearby supernovae can generate lower-energy cosmic rays, but the origin of the highest-energy cosmic rays is less understood. One clear source is quasars. These distant beacons are powered by supermassive black holes, which can generate tremendous jets of relativistic particles. Even across billions of light-years, these particles can strike Earth with incredible energy. The problem is that the number of particles quasars send our way isn’t enough to account for the number of cosmic rays we observe, so there must be another source as well.
The most obvious possibility is what are known as microquasars. These are exactly what they sound like. Whereas quasars are powered by supermassive black holes in the hearts of distant galaxies, microquasars are powered by stellar mass black holes in our own galaxy. Although they are tiny in comparison to quasars, they have very similar structures, with an accretion disk of material surrounding them and high-energy jets that stream away from their poles. Microquasars are so similar to their large cousins that astronomers can study them to better understand the evolution of quasars.
Not all stellar-mass black holes are microquasars. Regular quasars create jets by capturing material from the galaxy surrounding them, but microquasars need a companion star to pull material from. The amount of energy a microquasar can produce depends on the available material, so microquasars are often categorized by the mass of their companion star. For high-mass microquasars the black hole’s companion is several times the mass of the Sun, providing plenty of material to accelerate. Low-mass microquasars have small companion stars and therefore tend to be less energetic. One of the most energetic microquasars is SS 433, where the black hole’s companion is ten times the Sun’s mass.
Since high mass stars are much less common than low mass ones, high-mass microquasars are rare compared to low-mass microquasars. So there aren’t enough high-mass microquasars to account for all the cosmic rays we detect on Earth. But a new study finds that might not be a problem, since low-mass microquasars can also produce cosmic rays.
In this study, the team looked at a microquasar known as GRS 1915+105. It is a stellar-mass black hole with a companion star less massive than the Sun, so it shouldn’t be large enough to produce cosmic rays. However, using data from the Fermi satellite, the team found a source of gamma rays from the same location. The gamma-ray source is faint, so the team used 16 years of back data to confirm it. They found that some of the gamma rays have energies greater than 10 GeV, which is quite a kick. The gamma rays are likely produced when protons accelerated by the microquasar strike interstellar gas, generating high-energy photons. For this to work, the protons of the microquasar’s jets must have energies higher than 10 GeV. This puts them in the energy range of high-energy cosmic rays.
While this study shows that low-mass microquasars can produce high-energy cosmic rays, it doesn’t settle the question of whether this would solve the mystery of their source. Some low-mass microquasars don’t produce cosmic rays, so more study will be needed to determine why some microquasars are so energetic while others aren’t.
Reference: Martí-Devesa, Guillem, and Laura Olivera-Nieto. “Persistent GeV counterpart to the microquasar GRS 1915+105.” The Astrophysical Journal Letters 979.2 (2025): L40.
The post Even Microquasars are Powerful Particle Accelerators appeared first on Universe Today.
This Hubble image shows a supernova named SN 2022aajn in a distant galaxy about 600 million light-years away with the unwieldy name of WISEA J070815.11+210422.3. However, these obscure yet scientifically descriptive names aren’t what’s important.
What’s important is that SN 2022aajn is a Type Ia supernova, also known as a standard candle, and that this image is part of a critical effort in cosmology.
Standard candles are an important part of the Cosmic Distance Ladder (CDL). Astronomers use the CDL to determine accurate distances to objects extremely far from us. There are different types of standard candles, though Type Ia supernovae are considered the most reliable. What do all standard candles have in common?
They have a known intrinsic luminosity. That means we know how much energy they emit, summed across all wavelengths, and that they emit it equally in all directions, so the same luminosity is measured from any angle. Every object, our Sun included, has an intrinsic luminosity; what makes standard candles special is that we know theirs.
Astronomers compare a standard candle’s intrinsic luminosity with its apparent or observed brightness. Note the different terms “luminosity” and “brightness.” Brightness depends on both an object’s luminosity and how that luminosity is diminished by distance and any intervening matter like dust.
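The comparison rests on the inverse-square law: observed flux falls off with the square of distance, so a known luminosity plus a measured flux yields a distance. A minimal sketch (our own illustration, not part of the observing program):

```python
import math

def luminosity_distance_m(luminosity_w, observed_flux_w_m2):
    """Distance in metres from a known intrinsic luminosity (W) and a
    measured flux (W/m^2), via F = L / (4 * pi * d^2)."""
    return math.sqrt(luminosity_w / (4 * math.pi * observed_flux_w_m2))

# Sanity check with the Sun: luminosity ~3.828e26 W and the solar flux
# at Earth ~1361 W/m^2 should recover roughly 1 astronomical unit.
d = luminosity_distance_m(3.828e26, 1361.0)
print(f"{d:.3e} m")  # ~1.5e11 m, about 1 au
```

The same arithmetic, run in reverse with a supernova’s calibrated luminosity, is what puts the “ladder” in the Cosmic Distance Ladder.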
The Cosmic Distance Ladder is ubiquitous in cosmology and does its job well. However, it still faces some problems, and the primary one is calibration: How can astronomers determine a candle’s absolute magnitude? How can they characterize the class of objects called Type Ia supernovae accurately enough to recognize all of them? And how can they find enough of them at well-known distances to determine their intrinsic luminosity with extreme accuracy?
The Cosmic Distance Ladder starts out using parallax, but parallax has its limits, and astronomers rely on standard candles beyond its reach. Image Credit: By ESA/Hubble, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=49212250

This Hubble image is part of the effort. As part of an observing program, Hubble is observing 100 known Type Ia supernovae to more finely calibrate our understanding of standard candles and their distances.
The program’s name gives a good idea of its goal. It’s named “Reducing Type Ia Supernova Distance Biases by Separating Reddening and Intrinsic Color.” Prof. Ryan Foley of the University of California at Santa Cruz is the Principal Investigator.
If Type Ia supernovae exploded in a Universe without any dust, astronomers’ work would be simplified. But, of course, they don’t. They explode in galaxies with their own dust, and there can also be a lot of intergalactic dust between us and a distant supernova. All that dust reddens the light from the supernova, making its intrinsic luminosity more difficult to determine.
Type Ia supernovae occur in binary systems where one star is a massive white dwarf. As its companion ages and swells, the white dwarf draws material away from the companion onto its surface. Eventually, the white dwarf explodes. Image Credit: By NASA, ESA and A. Feild (STScI); vectorisation by chris ? – http://hubblesite.org/newscenter/archive/releases/star/supernova/2004/34/image/d/, CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=8666262

In observations, the reddening from dust gets tangled up with the reddening from redshift. Dust grains in the intergalactic medium are about the same size as the wavelength of blue light, so the dust absorbs and scatters the light from distant objects, making their light redder by the time it reaches us. Professor Foley’s observing program is an effort to “remove” intergalactic dust from our observations.
“Accurate distance measurements and unbiased cosmological constraints from Type Ia supernovae (SNe Ia) rely on proper correction for host-galaxy dust reddening that may attenuate the observed SN brightness,” Foley and his co-researchers write. To get around this, astronomers use what’s called a “reddening law.” “A correction is made by comparing observed and intrinsic color, and using a reddening law to determine extinction,” they write.
But reddening laws can be difficult to work with. It’s a delicate matter. “This procedure is nontrivial since a SN’s intrinsic color correlates with its luminosity in a manner nearly indistinguishable from the effects of dust reddening at optical wavelengths,” Foley writes in the description of the observing campaign.
Astronomers use a somewhat simplified way to determine how red a distant supernova is by treating the reddening from both dust and distance the same. “The current standard for measuring SN distances treats both fainter-redder relations as a single SN color law,” Foley explains. However, this introduces a bias into measurements since both causes are unlikely to contribute equally and uniformly to the reddening.
“This issue is currently SN cosmology’s largest systematic uncertainty and if not addressed will prevent future cosmology experiments from meeting their goals,” Foley explains. He also says that the error can be as large as about 6%. That’s a lot when measuring objects that are hundreds of millions of light-years away, and often much farther still.
How can astronomers solve this problem? By getting better data and more of it. That’s the motivation for Foley’s campaign, which tries to get around the problem by observing across multiple wavelengths, something the Hubble is built to do.
“The path to breaking the degeneracy between SN color and dust reddening is to extend observations to the UV and NIR, where the dust and intrinsic color, respectively, dominate the observed color,” the observing program’s description states.
The researchers will try to get around the SN cosmology problem by using the Hubble to survey 100 Type Ia supernovae in seven wavelength bands from ultraviolet to near-infrared. The leading image is a combination of image data from four infrared wavelengths, since IR passes through dust more easily than either UV or visible light. The researchers will then compare the brightness of the SNe across the wavelengths and disentangle the distance reddening effect from the dust reddening effect.
We’re accustomed to the Hubble’s “eye candy” images that have been gracing web pages and magazines for decades now. They’ve transformed our understanding of nature. But the telescope’s purely scientific side is where some of its real transformative power lies.
An accurate cosmic distance ladder is integral to cosmology. By helping scientists determine accurate distances to standard candles, the Hubble is helping develop a more accurate cosmic distance ladder, paving the way for a better understanding of the Universe.
The post The Hubble Space Telescope is a Powerful Science Instrument Despite its Age appeared first on Universe Today.
It seems everyone is talking about the Moon, and everyone wants to get their foot in the door with the renewed passion for lunar exploration. ESA too have jumped into the lunar landing game, having just signed a contract with Thales Alenia Space to build the Argonaut lunar lander. Compared to other landers, it will be unique in its ability to handle the harsh day and night conditions on the lunar surface. Each mission is planned to have a five-year life and will use a standard descent and cargo module, with different payloads tailored to each mission. If all goes to plan, the first lander will fly in 2031.
The Moon, Earth’s only natural satellite, is a celestial body that has fascinated us for centuries. It orbits Earth at an average distance of about 384,400 kilometres and has a barren, rocky surface covered in craters, mountains, and vast plains of solidified lava. Its lack of atmosphere results in extreme temperature fluctuations, with daytime temperatures reaching up to 127°C and nighttime temperatures plummeting as low as -173°C.
The occultation of Aldebaran by the Moon in 2016. Credit: Andrew Symes.

Lunar exploration has been a central part of space science since before the Apollo missions of the 1960s. The first major milestone was achieved in 1959, when the Soviet Luna 2 mission became the first human-made object to impact the Moon. Luna 9 followed, successfully landing and transmitting images from the surface, and then Apollo 11 brought humanity’s first steps on another celestial body. Since then, robotic missions like China’s Chang’e program, India’s Chandrayaan missions, and NASA’s Artemis program have aimed to study lunar water ice, geology, and the sustainability of a long-term human presence.
Apollo 11 launching on the Saturn V rocket.

The European Space Agency have now got in on the act with their plans to build Argonaut, an autonomous lunar lander. It will launch on regular missions to the Moon and can be used to deliver rovers, infrastructure, instrumentation, or resources to lunar explorers. The lander will be composed of the descent module, the payload, and the cargo platform, which will act as the interface between the lander and the payload and integrate operations between the two.
ESA signed their contract with Thales Alenia Space in Italy, a joint venture and prominent player in the global space market that has been delivering high-tech solutions for navigation, telecommunication, and Earth observation for over 40 years. Thales Alenia Space will lead the European group building the descent module, with the rest of the core team drawn from the Group’s sites in the UK and France.
Artist’s impression of the Lunar Gateway with the Orion spacecraft docked on the left side. Credit: ESA

Once complete, Argonaut will become a key part of ESA’s lunar exploration strategy and will integrate with the agency’s Lunar Link on the new lunar Gateway. This new international space station is planned to orbit the Moon as part of NASA’s Artemis programme. Argonaut will become one of Europe’s main contributions to international lunar exploration as nations work together to establish a permanent presence on our nearest celestial neighbour.
Source : Argonaut: a first European lunar lander
The post ESA is Building its Own Lunar Lander appeared first on Universe Today.
We can’t help but wonder about life elsewhere in the Universe. Any hint of a biosignature or even a faint, technosignature-like event wrests our attention away from our tumultuous daily affairs. In 1984, our wistful quest took concrete form with the founding of the SETI Institute, dedicated to the Search for Extraterrestrial Intelligence.
Unfortunately, or maybe fortunately, SETI has turned up nothing. Recently, scientists used a powerful new data system to re-examine data from one million cosmic objects and still came up empty-handed. Did they learn anything from this attempt?
This effort used COSMIC, which stands for Commensal Open-Source Multimode Interferometer Cluster. It’s a signal-processing and algorithm system attached to the Karl G. Jansky Very Large Array (VLA) radio astronomy observatory. According to SETI, it’s designed to “search for signals throughout the Galaxy consistent with our understanding of artificial radio emissions.”
Modern astronomy generates vast volumes of data, and algorithms and automated processing are needed to comb through it all. So far, COSMIC has observed more than 950,000 objects, and the results of the effort appear in a new paper titled “COSMIC’s Large-Scale Search for Technosignatures during the VLA Sky Survey: Survey Description and First Results,” to be published in The Astronomical Journal. The lead author is Chenoa Tremblay of the SETI Institute.
Image of radio telescopes at the Karl G. Jansky Very Large Array, located in Socorro, New Mexico. Image Credit: National Radio Astronomy Observatory

“The place of humanity in the Universe and the existence of life is one of the most profound and widespread questions in astronomy and society in general,” the authors write. “Throughout history, humans have marvelled at the starry night sky.”
In our modern technological age, we marvel not only with our eyes but with powerful telescopes. The Karl G. Jansky Very Large Array is one of those telescopes, though it’s actually 27 radio dishes (plus a spare antenna) working together as an interferometer. Each one is 25 meters across, and they’re all mounted on movable bases that are maneuvered around railway tracks. This gives the system the ability to change its extent and density so it can balance angular resolution against sensitivity.
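The trade-off can be put in rough numbers with the standard interferometry relation θ ≈ λ/B: longer baselines give finer angular resolution. A back-of-the-envelope sketch with assumed example values (the wavelength and baselines below are illustrative, not from the article):

```python
import math

def resolution_arcsec(wavelength_m, baseline_m):
    """Approximate angular resolution theta ~ lambda / B, in arcseconds."""
    return math.degrees(wavelength_m / baseline_m) * 3600

# 21 cm (hydrogen-line) observing: a compact ~1 km layout versus an
# extended ~36 km layout of the same dishes.
compact = resolution_arcsec(0.21, 1_000)    # tens of arcseconds
extended = resolution_arcsec(0.21, 36_000)  # roughly an arcsecond
print(f"{compact:.1f} vs {extended:.2f} arcsec")
```

Spreading the dishes out sharpens the view by more than a factor of 30, which is exactly why the movable bases and rail tracks matter.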
The Array is used to observe astronomical objects like quasars, pulsars, supernova remnants, and black holes. It’s also used to search trillions of systems quickly for signs of radio transmissions.
Currently, the VLA is engaged in the VLA Sky Survey (VLASS), a long-term effort to detect transient radio signals in the entire visible sky. The elegance of the COSMIC system is that it can “tag along” as VLASS progresses. “COSMIC was designed to provide an autonomous real-time pipeline for observing and processing data for one of the largest experiments in the search for extraterrestrial intelligence to date,” the authors write.
One of the problems facing modern astronomy is the deluge of data. There aren’t enough astronomers or students to possibly manage it. “The idea is that we are receiving increasing quantities of data that must be sorted in new ways in order to find information of scientific interest,” the authors write. “Developing algorithms to search through data efficiently is a challenging part of searching for signs of technology beyond our solar system.”
There aren’t enough human brains to manage the tidal wave of valuable data created by modern astronomy. The signals we seek are buried in this wave, and we need automated help to find them. Image Credit: DALL-E

COSMIC is a digital signal processing pipeline that VLASS data flows through. It searches for signals that display temporal and spectral characteristics consistent with our idea of what an artificial technological signal would look like.
The sky is full of radio signals from astrophysical objects. In order for a signal to be considered a technosignature, it needs to be a narrowband signal, and its frequency should change over time as a result of the Doppler effect. That still leaves potentially millions of hits. Researchers are forced to make other assumptions about what might constitute a technosignature, and COSMIC filters signals based on those assumptions. “In this pipeline, extraterrestrial technosignatures are characterized by a set of assumptions and conditions that, if not met, are used to eliminate hits that do not meet these assumptions,” Tremblay and her co-authors write. “The output of this search is a database of ‘hits’ and small cutouts of the phase-corrected voltage data for each antenna around the hits called ‘postage stamps.’”
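The Doppler criterion has a simple physical basis: a transmitter on a rotating or orbiting body accelerates along our line of sight, so its received frequency drifts at roughly f·a/c. A hedged sketch of the expected drift (our own illustration, not COSMIC’s actual pipeline code):

```python
# Speed of light in m/s.
C = 299_792_458.0

def doppler_drift_hz_per_s(rest_freq_hz, accel_m_s2):
    """Expected frequency drift rate (Hz/s) of a narrowband signal for a
    given line-of-sight acceleration between transmitter and receiver."""
    return rest_freq_hz * accel_m_s2 / C

# Example: Earth's equatorial rotation contributes a line-of-sight
# acceleration of roughly 0.034 m/s^2, so a 1.42 GHz (hydrogen-line)
# signal drifts by a fraction of a hertz per second.
drift = doppler_drift_hz_per_s(1.42e9, 0.034)
print(f"{drift:.3f} Hz/s")  # ~0.16 Hz/s
```

A perfectly drift-free narrowband signal is therefore suspect as local interference, which is one way such searches prune their candidate lists.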
COSMIC examined more than 950,000 objects in space for technosignatures and found nothing. But that’s okay. SETI scientists still learned things from the effort by testing their system.
“As shown in Figure 15, within the last 11 months of operation, COSMIC has observed over 950,000 fields and is rapidly becoming one of the largest SETI experiments ever designed,” the authors write.
Figure 15 from the paper shows a plot in galactic coordinates of all the coordinates currently in the database, observed from 29 March 2023 to 14 July 2024.

Though COSMIC has observed almost 1 million sources, researchers focused on a small subset to rigorously test the postprocessing system. In a test field of 30 minutes of data, they searched toward 511 stars from the Gaia catalogue. “In this search, no potential technosignatures were identified,” the authors write.
However, this is just the beginning and constitutes a successful test of the system. Future efforts with COSMIC will be both faster and more automated, which is necessary to manage the vast volume of data in modern astronomy.
“This work overall represents an important milestone in our search,” the authors write in their paper’s conclusion. “With the rapidly growing database, we need new methods for sorting through the data, and this paper describes a rapid and viable filtering mechanism.”
The post SETI Researchers Double-Checked 1 Million Objects for Signs of Alien Signals appeared first on Universe Today.