Telling time in space is difficult, but it is absolutely critical for applications ranging from testing relativity to navigating down the road. Atomic clocks, such as those used on the Global Navigation Satellite System network, are accurate, but only up to a point. Moving to even more precise navigation tools would require even more accurate clocks. There are several solutions at various stages of technical development, and one from Germany’s DLR, COMPASSO, plans to prove quantum optical clocks in space as a potential successor.
There are two main problems with existing atomic clocks – one has to do with their accuracy, and one has to do with their size, weight, and power (SWaP) requirements. Current atomic clocks used in GNSS are relatively compact, coming in at around 0.5 kg and 125 x 100 x 40 mm, but they lack accuracy. In the terminology of precision timekeeping, they have a “stability” of 10^-9 over 10,000 seconds. That sounds absurdly accurate, but it is not good enough for a more precise GNSS.
Alternatives, such as optical lattice clocks, are far more accurate, reaching stabilities of 10^-18 over 10,000 seconds. However, they can measure 0.5 x 0.5 x 0.5 m and weigh hundreds of kilograms. Given satellite space and weight constraints, they are far too large to serve as a basis for satellite timekeeping.
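To put those stability figures in perspective, a clock’s fractional frequency error translates directly into ranging error, because a GNSS receiver converts timing offsets into distances at the speed of light. Here is a minimal back-of-envelope sketch (illustrative arithmetic only, treating the quoted stability as the fractional rate offset accumulated over the interval; real GNSS clocks are regularly resynchronized from the ground, which is why actual positioning is far better than the naive number below):

```python
C = 299_792_458  # speed of light, m/s

def ranging_error_m(stability: float, interval_s: float) -> float:
    """Time offset accumulated at a given fractional frequency
    stability over an interval, converted to metres of range error."""
    return C * stability * interval_s

print(ranging_error_m(1e-9, 10_000))   # GNSS-class clock: ~3,000 m
print(ranging_error_m(1e-18, 10_000))  # optical clock: ~3 micrometres
```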
Rendering of a passive hydrogen maser atomic clock.
To find a middle ground, ESA has developed a technology development roadmap focusing on improving clock stability while keeping it small enough to fit on a satellite. One example of a technology on the roadmap is a cesium-based clock cooled by lasers and combined with a hydrogen-based maser, a microwave laser. NASA is not missing out on the fun either, with its work on a mercury ion clock that has already been orbitally tested for a year.
COMPASSO hopes to surpass them all. Three key technologies enable the mission: two iodine frequency references, a “frequency comb,” and a “laser communication and ranging terminal.” Ideally, the mission will be launched to the ISS, where it will sit in space for two years, constantly keeping time. The accuracy of those measurements will be compared to alternatives over that time frame.
Lasers are the key to the whole system. The iodine frequency references display the very distinct absorption lines of molecular iodine, which can be used as a frequency reference for the frequency comb, a specialized laser whose output spectrum looks like it has comb teeth at specific frequencies. Those frequencies can be tuned to the frequency of the iodine reference, allowing for the correction of any drift in the comb.
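For reference, a comb’s teeth are conventionally described by a simple relation involving just two radio-frequency parameters, the repetition rate and the carrier-envelope offset frequency:

$$ f_n = f_{\mathrm{CEO}} + n\,f_{\mathrm{rep}} $$

Here n is a large integer (typically 10^5 to 10^6), so locking one tooth to the iodine absorption line pins down every optical frequency in the comb at once. This is the standard frequency-comb picture, not a detail specific to COMPASSO’s hardware.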
engineerguy explains how atomic clocks work with the GNSS.
The comb then provides a method for phase locking for a microwave oscillator, a key part of a standard atomic clock. Overall, this means that the stability of the iodine frequency reference is transferred to the frequency comb, which is then again transferred to the microwave oscillator and, therefore, the atomic clock. In COMPASSO’s case, the laser communication terminal is used to transmit frequency and timing information back to a ground station while it is active.
COMPASSO began in 2021, and a paper describing its details and some breadboard prototypes was released this year. It will hop on a ride to the ISS in 2025 to start its mission to make the world a more accurately timed place—and maybe improve our navigation abilities as well.
Learn More:
Kuschewski et al – COMPASSO mission and its iodine clock: outline of the clock design
UT – Atomic Clocks Separated by Just a few Centimetres Measure Different Rates of Time. Just as Einstein Predicted
UT – Deep Space Atomic Clocks Will Help Spacecraft Answer, with Incredible Precision, if They’re There Yet
UT – A New Atomic Clock has been Built that Would be off by Less than a Second Since the Big Bang
Lead Image:
Benchtop prototype of part of the COMPASSO system.
Credit – Kuschewski et al
The post Need to Accurately Measure Time in Space? Use a COMPASSO appeared first on Universe Today.
Binary stars are common throughout the galaxy. Roughly half the stars in the Milky Way are part of a binary or multiple system, so we would expect to find them almost everywhere. However, one place we wouldn’t expect to find a binary is at the center of the galaxy, close to the supermassive black hole Sagittarius A*. And yet, that is precisely where astronomers have recently found one.
There are several stars near Sagittarius A*. For decades, we have watched as they orbit the great gravitational well. The motion of those stars was the first strong evidence that Sag A* was indeed a black hole. At least one star orbits so closely that we can see it redshift as it reaches peribothron, its point of closest approach to the black hole.
But we also know that stars should be ever wary of straying too close to the black hole. The closer a star gets to the event horizon of a black hole, the stronger the tidal forces on the star become. There is a point where the tidal forces are so strong a star is ripped apart. We have observed several of these tidal disruption events (TDEs), so we know the threat is very real.
Tidal forces also pose a threat to binary stars. It wouldn’t take much for the tidal pull of a black hole to disrupt binary orbits, causing the stars to separate forever. Tidal forces would also tend to disrupt the formation of binary stars in favor of larger single stars. Therefore astronomers assumed the formation of binary stars near Sagittarius A* wasn’t likely, and even if a binary formed, it wouldn’t last long on cosmic timescales. So astronomers were surprised when they found the binary system known as D9.
Distance and age of D9 in the context of basic dynamical processes and stellar populations in the Galactic center. Credit: Peißker et al
The D9 system is young, only about 3 million years old. It consists of one star of about 3 solar masses and a companion with a mass about 75% that of the Sun. The orbit of the system puts it within 6,000 AU of Sag A* at its closest approach, which is surprisingly close. Simulations of the D9 system estimate that in about a million years, the black hole’s gravitational influence will cause the two stars to merge into a single star. But even this short lifetime is unexpected, and it shows that the region near a supermassive black hole is much less destructive than we thought.
It’s also pretty amazing that the system was discovered at all. The center of our galaxy is shrouded in gas and dust, meaning that we can’t observe the area in the visible spectrum. We can only see stars in the region with radio and infrared light. The binary stars are too close together for us to identify them individually, so the team used data from the Enhanced Resolution Imager and Spectrograph (ERIS) on the ESO’s Very Large Telescope, as well as archive data from the Spectrograph for INtegral Field Observations in the Near Infrared (SINFONI). This gave the team data covering a 15-year timespan, which was enough to watch the light of D9 redshift and blueshift as the stars orbit each other every 372 days.
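The underlying signal is the classic spectroscopic-binary pattern. For an idealized circular orbit, each star’s line-of-sight velocity, and therefore the Doppler shift of its spectral lines, varies sinusoidally over one orbital period:

$$ v_r(t) = K \sin\!\left(\frac{2\pi t}{P}\right), \qquad \frac{\Delta\lambda}{\lambda} \approx \frac{v_r}{c} $$

with P ≈ 372 days for D9 and K the velocity semi-amplitude set by the stellar masses and the orbit’s inclination. (D9’s actual orbit need not be circular; this is just the simplest form of the relation.)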
Now that we know the binary system D9 exists, astronomers can look for other binary stars. This could help us solve the mystery of how such systems can form so close to the gravitational beast at the heart of our galaxy.
Reference: Peißker, Florian, et al. “A binary system in the S cluster close to the supermassive black hole Sagittarius A*.” Nature Communications 15.1 (2024): 10608.
The post A Binary Star Found Surprisingly Close to the Milky Way's Supermassive Black Hole appeared first on Universe Today.
Jupiter’s moon Io is the most volcanically active body in the Solar System, with roughly 400 active volcanoes regularly ejecting magma into space. This activity arises from Io’s eccentric orbit around Jupiter, which produces incredibly powerful tidal interactions in the interior. In addition to powering Io’s volcanism, this tidal energy is believed to support a global subsurface magma ocean. However, the extent and depth of this ocean remains the subject of debate, with some supporting the idea of a shallow magma ocean while others believe Io has a more rigid, mostly solid interior.
In a recent NASA-supported study, an international team of researchers combined data from multiple missions to measure Io’s tidal deformation. According to their findings, Io does not possess a magma ocean and likely has a mostly solid mantle. Their findings further suggest that tidal forces do not necessarily lead to global magma oceans on moons or planetary bodies. This could have implications for the study of exoplanets that experience tidal heating, including Super-Earths and exomoons similar to Io that orbit massive gas giants.
The study was led by Ryan Park, a Senior Research Scientist and Principal Engineer at NASA’s Jet Propulsion Laboratory (JPL). He was joined by colleagues from NASA JPL, the Centro Interdipartimentale di Ricerca Industriale Aerospaziale (CIRI) at the Università di Bologna, the Istituto Nazionale di Astrofisica (INAF), the Sapienza Università di Roma, the Southwest Research Institute (SwRI), NASA’s Goddard Space Flight Center, and multiple universities. Their findings were described in a paper that appeared in the journal Nature.
An amazingly active Io, Jupiter’s “pizza moon,” shows multiple volcanoes and hot spots, as seen with Juno’s infrared camera. Credit: NASA/JPL-Caltech/SwRI/ASI/INAF/JIRAM/Roman Tkachenko
As they explain in their paper, two types of analysis have predicted the existence of a global magma ocean. On the one hand, magnetic induction measurements conducted by the Galileo mission suggested the existence of a magma ocean within Io, approximately 50 km [~30 mi] thick and located near the surface. These results also implied that about 20% of the material in Io’s mantle is melted. However, these results were the subject of debate for many years. In recent years, NASA’s Juno mission conducted multiple flybys of Io and the other Jovian moons and obtained data that appeared to support this conclusion.
In particular, the Juno probe conducted a global mapping campaign of Io’s volcanoes, which suggested that the distribution of volcanic heat flow is consistent with the presence of a global magma ocean. However, these discoveries sparked considerable debate about whether such techniques can actually determine if a shallow global magma ocean drives Io’s volcanic activity. This is the question Park and his colleagues sought to address in their study:
“In our study, Io’s tidal deformation is modeled using the gravitational tidal Love number k2, which is defined as the ratio of the imposed gravitational potential from Jupiter to the induced potential from the deformation of Io. In short, if k2 is large, there is a global magma ocean, and if k2 is small, there is no global magma ocean. Our result shows that the recovered value of k2 is small, consistent with Io not having a global magma ocean.”
The significance of these findings goes far beyond the study of Io and other potentially volcanic moons. Beyond the Solar System, astronomers have discovered countless bodies that (according to current planetary models) experience intense tidal heating. This includes rocky exoplanets several times the size and mass of Earth (Super-Earths) and tidally locked planets like those of the TRAPPIST-1 system. These findings are also relevant for the study of exomoons that also experience intense tidal heating (similar to the Jovian moons). As Park explained:
“Although it is commonly assumed among the exoplanet community that intense tidal heating may lead to magma oceans, the example of Io shows that this need not be the case. Our results indicate that tidal forces do not universally create global magma oceans, which may be prevented from forming due to rapid melt ascent, intrusion, and eruption, so even strong tidal heating – like that expected on several known exoplanets and super-Earths – may not guarantee the formation of magma oceans on moons or planetary bodies.”
Further Reading: Nature
The post New Research Suggests Io Doesn’t Have a Shallow Ocean of Magma appeared first on Universe Today.
The star HD 65907 is not what it appears to be. It’s a star that looks young, but on closer inspection is actually much, much older. What’s going on? Research suggests that it is a resurrected star.
Astronomers employ different methods to measure a star’s age. One is based on its brightness and temperature. All stars follow a particular path in life, known as the main sequence. The moment they begin fusing hydrogen in their cores, they maintain a strict relationship between their brightness and temperature. By measuring these two properties, astronomers can roughly pin down the age of a star. But there are other techniques, like measuring the amount of heavy elements in a stellar atmosphere. Older stars tend to have fewer of these elements, because they were born at a time before the galaxy had become enriched with them.
Going by its temperature and brightness, HD 65907 is relatively young, with an age of right around 5 billion years. And yet it contains very few heavy elements. Plus, its path through the galaxy isn’t in line with other young stars, which tend to serenely orbit the galactic center. HD 65907 moves much more erratically, suggesting that it only recently arrived here from somewhere else.
In a recent paper, an international team of astronomers dug into archival data to see if they could resolve the mystery, and they believe that HD 65907 is a kind of star known as a blue straggler, and that it has its strange combination of properties because of a violent event in its past, causing it to be resurrected.
If two low-mass stars collide, the remnant can sometimes survive as a star in its own right. At first, that newly merged star will be both massive and large, with its outer layers flung far from the core by the enormous rotation imparted by the collision. But eventually some astrophysical process (strong magnetic fields are one suspect) drags down the star’s rotation rate, allowing it to settle into equilibrium. In this new state the star will appear massive and incredibly hot: a blue straggler.
No matter the path, blue straggler stars get a second chance at life. Those mergers transform small stars into big stars that are just now enjoying their hydrogen-burning main sequence lives.
The astronomers believe this is the case for HD 65907. What makes this star especially unusual is that it’s not a member of a cluster, where frequent encounters can easily lead to blue stragglers. Instead, it’s a field star, wandering the galaxy on its own. It must have cannibalized a companion five billion years ago, leading to its apparently youthful age.
Work like this is essential to untangling the complicated lives of stars in the Milky Way, and it shows how the strangest stars hold the keys to unlocking the evolution of elements that lead to systems like our own.
The post The Mysterious Case of the Resurrected Star appeared first on Universe Today.
It’s axiomatic that the Universe is expanding. However, the rate of expansion is not what our best cosmological model predicts: the Universe appears to be expanding faster than it should.
Astronomers have struggled to understand this and have wondered if the apparent discrepancy is due to instrument errors. The JWST has put that question to rest.
American astronomer Edwin Hubble is widely credited with discovering the expansion of the Universe. But the idea actually stemmed from the equations of relativity and was pioneered by Russian scientist Alexander Friedmann. Hubble’s Law bears Edwin’s name, though, and he was the one who confirmed the expansion and put a more precise value on its rate, now called the Hubble constant. It measures how rapidly galaxies that aren’t gravitationally bound are moving away from one another. The movement of objects due solely to the expansion is called the Hubble flow.
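Hubble’s Law itself is a one-line relation, with recession velocity proportional to distance:

$$ v = H_0 d $$

As a worked example with round numbers (not values from the study): for H0 = 70 km/s per megaparsec, a galaxy 100 Mpc away recedes at about 7,000 km/s. The Hubble tension, discussed below, is a disagreement of a few km/s/Mpc in the value of H0 itself.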
Measuring the Hubble constant means measuring distances to far-flung objects. Astronomers use the cosmic distance ladder (CDL) to do that. However, the ladder has a problem.
This illustration shows the three basic steps astronomers use to calculate how fast the universe expands over time, a value called the Hubble constant. All the steps involve building a strong “cosmic distance ladder” by starting with measuring accurate distances to nearby galaxies and then moving to galaxies farther and farther away. Image Credit: NASA, ESA and A. Feild (STScI)
The first rungs on the CDL are fundamental measurements that can be observed directly. Parallax measurement is the most important fundamental measurement. But the method breaks down at great distances.
Beyond that, astronomers use standard candles, things with known intrinsic brightness, like supernovae and Cepheid variables. Those objects and their relationships help astronomers measure distances to other galaxies. This has been tricky to measure, though advancing technology has made progress.
Another pair of problems plagues the effort, though. The first is that different telescopes and methods produce different distance measurements. The second is that our measurements of distances and expansion don’t match up with the Standard Model of Cosmology, also known as the Lambda Cold Dark Matter (LCDM) model. That discrepancy is called the Hubble tension.
The question is, can the mismatch between the measurements and the LCDM be explained by instrument differences? That possibility has to be eliminated, and the trick is to take one large set of distance measurements from one telescope and compare them to another.
New research in The Astrophysical Journal tackles the problem by comparing Hubble Space Telescope measurements with JWST measurements. It’s titled “JWST Validates HST Distance Measurements: Selection of Supernova Subsample Explains Differences in JWST Estimates of Local H0.” The lead author is Adam Riess, a Bloomberg Distinguished Professor and Thomas J. Barber Professor of Physics and Astronomy at Johns Hopkins University. Riess is also a Nobel laureate, winning the 2011 Nobel Prize in Physics “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae,” according to the Nobel Institute.
As of 2022, the Hubble Space Telescope had gathered the most numerous sample of homogeneously measured standard candles, with a large number measured out to about 40 Mpc, or about 130 million light-years. “As of 2022, the largest collection of homogeneously measured SNe Ia is complete to D less than or equal to 40 Mpc or redshift z less than or equal to 0.01,” the authors of the research write. “It consists of 42 SNe Ia in 37 host galaxies calibrated with observations of Cepheids with the Hubble Space Telescope (HST), the heritage of more than 1000 orbits (a comparable number of hours) invested over the last ~20 yrs.”
In this research, the astronomers used the powerful JWST to cross-check the Hubble’s work. “We cross-check the Hubble Space Telescope (HST) Cepheid/Type Ia supernova (SN Ia) distance ladder, which yields the most precise local H0 (Hubble flow), against early James Webb Space Telescope (JWST) subsamples (~1/4 of the HST sample) from SH0ES and CCHP, calibrated only with NGC 4258,” the authors write. SH0ES and CCHP are different observing efforts aimed at measuring the Hubble constant. SH0ES stands for Supernova H0 for the Equation of State of Dark Energy, and CCHP stands for Chicago-Carnegie Hubble Program, which uses the JWST to measure the Hubble constant.
“JWST has certain distinct advantages (and some disadvantages) compared to HST for measuring distances to nearby galaxies,” Riess and his co-authors write. It offers a 2.5 times higher near-infrared resolution than the HST. Despite some of its disadvantages, the JWST “is able to provide a strong cross-check of distances in the first two rungs,” the authors explain.
Observations from both telescopes are closely aligned, which basically minimizes instrument error as the cause of the discrepancy between observations and the Lambda CDM model.
There’s a lot to digest in this figure from the research. It shows “Comparisons of H0 between HST Cepheids and other measures (JWST Cepheids, JWST JAGB, and JWST NIR-TRGB) for SN Ia host subsamples selected by different teams and for the different methods,” the authors explain. JAGB stands for J-region Asymptotic Giant Branch, and TRGB stands for Tip of the Red Giant Branch. Both JAGB and TRGB are ways of measuring distance to specific types of stars. Basically, coloured circles represent Hubble measurements, and squares represent JWST measurements. “The HST Cepheid and JWST distance measurements themselves are in good agreement,” the authors write. Image Credit: Riess et al. 2024.
“While it will still take multiple years for the JWST sample of SN hosts to be as large as the HST sample, we show that the current JWST measurements have already ruled out systematic biases from the first rungs of the distance ladder at a much smaller level than the Hubble tension,” the authors write.
This research covered about one-third of the Hubble’s data set, with the known distance to a galaxy called NGC 4258 serving as a reference point. Even though the data set was small, Riess and his co-researchers achieved impressively precise results. They showed that the measurement differences were less than 2%. That’s much less than the 8% to 9% in the Hubble tension discrepancy.
NGC 4258 is significant in the cosmic distance ladder because it contains Cepheid variables whose metallicities resemble both those of the Milky Way’s Cepheids and those of other galaxies. Astronomers use it to calibrate distances to Cepheids with different metallicities. A new composite of NGC 4258 features X-rays from Chandra (blue), radio waves from the VLA (purple), optical data from Hubble (yellow and blue), and infrared from Spitzer (red). Image Credit: Chandra
That means that our Lambda CDM model is missing something. The standard model yields an expansion rate of about 67 to 68 kilometres per second per megaparsec. Telescope observations yield a slightly higher rate: between 70 and 76 kilometres per second per megaparsec. This work shows that the discrepancy can’t be due to the different telescopes and methods.
“The discrepancy between the observed expansion rate of the universe and the predictions of the standard model suggests that our understanding of the universe may be incomplete. With two NASA flagship telescopes now confirming each other’s findings, we must take this [Hubble tension] problem very seriously—it’s a challenge but also an incredible opportunity to learn more about our universe,” said lead author Riess.
What could be missing from the Lambda CDM model?
Marc Kamionkowski is a Johns Hopkins cosmologist who helped calculate the Hubble constant and recently developed a possible new explanation for the tension. Though not part of this research, he commented on it in a press release.
“One possible explanation for the Hubble tension would be if there was something missing in our understanding of the early universe, such as a new component of matter—early dark energy—that gave the universe an unexpected kick after the big bang,” said Kamionkowski. “And there are other ideas, like funny dark matter properties, exotic particles, changing electron mass, or primordial magnetic fields that may do the trick. Theorists have license to get pretty creative.”
The door is open, theorists just have to walk in.
The post The JWST Looked Over the Hubble’s Shoulder and Confirmed that the Universe is Expanding Faster appeared first on Universe Today.
Astrophotography is a challenging art. Beyond the usual skill set of understanding things such as light exposure, color balance, and the quirks of your kit, there is the fact that stars are faint and they move.
Technically, the stars don’t move; the Earth rotates. But to capture a faint object, you need a long exposure time, typically from a few seconds to half a minute, depending on the level of detail you want to capture. In thirty seconds, the sky shifts by more than a tenth of a degree. That might not seem like much, but it’s enough to make the stars blur ever so slightly. Many astrophotographers take multiple images and stack them for even greater detail, which blurs things even more. It can create an interesting effect, but it doesn’t give you a panorama of pinpoint stars.
The motion blur of starlight used to create a rain of stars. Credit: Diana Juncher/ESO
Fortunately, there is plenty of off-the-shelf equipment you can get to account for motion blur. There are tracking motors you can mount to your camera that move your frame in time with the Earth’s rotation. They are incredibly precise so that you can capture image after image for hours, and your camera will always be perfectly aligned with the sky. If you make your images into a movie, the stars will remain fixed while the Earth rotates beneath them.
Of course, most astrophotographers face the same limitation as almost everyone else: we are bound to the Earth and can only view the stars through our blanket of sky. If we could rise above the atmosphere, we would have an unburdened view of the heavens, a sky filled with uncountable, untwinkling stars. While astronauts often talk about this wondrous sight, photographs of stars from orbit are often less than spectacular. That’s because of how difficult astrophotography is in space, and it all comes back to motion blur.
Most astrophotography is done from the International Space Station (ISS). Since the ISS is in a relatively low orbit, it travels around the Earth once every 90 minutes. This means the stars appear to drift at a rate 16 times faster than they do on Earth. A 30-second exposure on the ISS has greater motion blur than an eight minute exposure on Earth. Because of this, most photographs from the ISS either have blurry stars or only capture the brightest stars.
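The arithmetic behind that comparison is straightforward. A quick sketch with rounded values (illustrative only):

```python
SIDEREAL_DAY_S = 86_164      # one Earth rotation relative to the stars
ISS_ORBIT_S = 90 * 60        # ~90-minute orbit (the true period is ~93 min)

earth_rate = 360 / SIDEREAL_DAY_S  # ~0.0042 deg/s apparent sky drift
iss_rate = 360 / ISS_ORBIT_S       # ~0.067 deg/s seen from the station

print(earth_rate * 30)             # ~0.125 deg of drift in a 30 s exposure
print(iss_rate / earth_rate)       # ~16x faster from orbit
print(30 * iss_rate / earth_rate)  # a 30 s ISS exposure ~ 480 s (8 min) on Earth
```

Using the true ~93-minute orbital period gives a drift rate near 0.064 degrees per second, which matches the tracker described below.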
Don Pettit’s Homemade Orbital Sidereal Tracker. Credit: Don Pettit
Ideally, an astronaut astrophotographer would bring along a camera mount similar to the ones used on Earth. But the market demand for such a mount is tiny, so you can’t just buy one from your local camera store. You have to make your own, which is precisely what astronaut Don Pettit did. Working with colleagues from RIT, he created a camera tracker that shifts by 0.064 degrees per second and can be adjusted give or take 5%. With this mount, Don has been able to capture 30-second exposures with almost no motion blur. His images rival some of the best Earth-based images, but he takes them from space!
The detail of his photographs is unprecedented. In the image above, for example, you can see the Large and Small Magellanic Clouds, and not just as fuzzy patches in the sky. You can see individual stars within the clouds. The image also gives an excellent view of an effect known as airglow. Molecules in the upper atmosphere are ionized by sunlight and cosmic rays, which means this layer always has a faint glow to it. No matter how skilled a terrestrial astrophotographer is, their images will always have a bit of this glow.
Airglow from different molecules in the upper atmosphere. Credit: NASA/annotations by Alex Rivest
But not Don Pettit. He’s currently on the ISS, capturing outstanding photographs as a side hobby from his day job. If you want to see more of his work, check him out on Reddit, where he posts under the username astro_pettit.
The post Astronaut Don Pettit is Serious, He Rigged up Astrophotography Gear on the ISS appeared first on Universe Today.
We’ve already seen the success of the Ingenuity helicopter on Mars. The first aircraft to fly on another world set off on its maiden voyage in April 2021 and completed 72 flights. Now a team of engineers is taking the idea one step further, investigating ways that drones can be released from satellites in orbit and explore an atmosphere without having to land. The results are positive and suggest this could be a cost-effective way to explore alien atmospheres.
The idea of using drones on alien worlds has enticed engineers and planetary explorers for a few years now. They’re lightweight, versatile, and a cost-effective way to study planetary atmospheres. Orbiters and rovers have been visiting the planets for decades, but drones can explore in ways rovers and orbiters cannot. Not only will they be useful for studying atmospheric effects, but they will also be able to reach inaccessible surface areas, providing imagery to inform potential future land-based study.
Illustration of Perseverance on Mars
Perhaps the most famous, indeed the only successful planetary drone to date, is Ingenuity, which was part of the Perseverance rover mission. It demonstrated that controlled flight in the Martian atmosphere was possible, could scout possible landing sites for future missions, and could direct ground-based exploration. Its large rotor span was needed because the rarefied atmosphere of Mars requires larger rotor blades to generate the required lift. Ingenuity was originally planned as a technology demonstration, but it soon became a useful tool in the Perseverance mission’s arsenal.
Ingenuity helicopter
NASA engineers are well aware of the benefits of drone technology, so a team of engineers and researchers from the Armstrong Flight Research Center in California has been taking the idea of small drones one step further. The research was funded by a 2023 Center Innovation Fund award and began with the team developing three atmospheric probe models. The models were identical, measuring 71 cm from top to bottom: one for visual demonstration and the other two for research and technology-readiness testing.
Their first launch, on 1 August, didn’t go to plan, with a failure in the release mechanism. The team reviewed everything from the lifting aircraft to the release mechanism and even the probe design itself to identify improvements. They were finally able to conduct flights with their new atmospheric probe after it was released from a quad-rotor remotely piloted aircraft on 22 October 2024.
The flights were conducted above Rogers Dry Lake in California, with designs informed by previous NASA instrumentation built for lifting and transportation. The test flights aimed to prove that the shape of the probe worked. The team now wants to release the probe from a higher altitude, ultimately hoping to release it from a satellite in orbit around a planet.
The next steps are to review photos and videos from the flight to identify further improvements before another probe is built. Once they have proven the flight technology, instrumentation will be added to facilitate data gathering and recording. If all goes to plan, the team hopes to be chosen for a mission to one of the planets, be released in orbit, and then dive into the atmosphere under controlled flight to learn more about the environment.
Source: Atmospheric Probe Shows Promise in Test Flight
The post Drone Test Flights Are Being Tested for Flights on Alien Worlds appeared first on Universe Today.
Since the discovery of the first exoplanet in 1992, thousands more have been discovered. Forty light-years away, one such system of exoplanets was discovered orbiting a star known as Trappist-1. Studies using the James Webb Space Telescope have revealed that one of the planets, Trappist-1 b, has a crust that seems to be changing. Geological activity and weathering are likely causes, and if it is the latter, it suggests the exoplanet has an atmosphere too.
Exoplanets are planets that orbit other stars. They vary in size, composition, and distance from their star. Finding them is a tricky undertaking, and a number of different approaches are used. Since the first discovery, over 5,000 exoplanets have been found, and now, of course, the hunt is on for planets that could sustain life. Likely candidates would orbit their host star in a region known as the habitable zone, where the temperature is just right for a life-sustaining world to evolve.
This illustration shows what the hot rocky exoplanet TRAPPIST-1 b could look like. A new method can help determine what rocky exoplanets might have large reservoirs of subsurface water. Credits: NASA, ESA, CSA, J. Olmsted (STScI)
There are three exoplanets in the Trappist-1 system that orbit the star within the habitable zone: Trappist-1e, f, and g. The star is a cool dwarf star in the constellation of Aquarius and was identified as a host of exoplanets in 2017. The discoveries were made using data from NASA’s Kepler Space Telescope and the Spitzer Space Telescope. The system was named after the Transiting Planets and PlanetesImals Small Telescope (TRAPPIST).
The Spitzer Space Telescope trails behind Earth as it orbits the Sun. Credit: NASA/JPL-Caltech
A team of researchers from the Max Planck Institute for Astronomy and the Commissariat aux Énergies Atomiques (CEA) in Paris has been studying Trappist-1b, using the Mid-Infrared Imager of the James Webb Space Telescope to measure thermal radiation from the exoplanet. Their findings have been published in the journal Nature Astronomy. Previous studies concluded that Trappist-1b was a dark, rocky planet with no atmosphere. The new study has turned this conclusion on its head.
The team’s measurements revealed something else: a world with a surface composed of largely unchanged material. Typically, the surface of a world with no atmosphere is weathered by radiation and peppered with impacts from meteorites. The study found that the surface material is around 1,000 years old, much younger than the planet itself, which is thought to be several billion years old.
The team postulates that this could indicate volcanic activity or plate tectonics, since the planet is large enough to still retain internal heat from its formation. It’s also possible that the observations reveal a thick atmosphere rich in carbon dioxide. At first, the observations suggested there was no carbon dioxide layer, since the team found no evidence of its thermal radiation absorption. However, they ran models showing that atmospheric haze can reverse the temperature profile of a carbon-dioxide-rich atmosphere. Typically the ground is the warmest region, but in the case of Trappist-1b, the haze may absorb radiation and heat the upper layers, which then radiate infrared energy themselves. A similar process is seen on Saturn’s moon Titan.
Fortunately, the alignment of the planetary system means that it passes directly in front of the star so that spectroscopic observations and the dimming of starlight as the planet passes in front can reveal the profile of the atmosphere. Further studies are now underway to explore this and take further observations to conclude the nature of the atmosphere around Trappist-1b.
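For a sense of what such transit measurements work with: the fractional dip in starlight equals the ratio of the planet’s and star’s disk areas. Using approximate published values for Trappist-1b (roughly 1.1 Earth radii) and its small host star (roughly 0.12 solar radii), illustrative numbers not taken from this study:

$$ \frac{\Delta F}{F} = \left(\frac{R_p}{R_\star}\right)^2 \approx \left(\frac{1.1 \times 6{,}371\ \mathrm{km}}{0.12 \times 696{,}000\ \mathrm{km}}\right)^2 \approx 0.7\% $$

It is the small size of the host star that makes the Trappist-1 planets such favorable targets for transit spectroscopy.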
Source: Does the exoplanet Trappist-1 b have an atmosphere after all?
The post One of the Most Interesting Exoplanets Just Got Even More Interesting! appeared first on Universe Today.
Even if you knew nothing about astronomy, you’d understand that exploding stars are forceful and consequential events. How could they not be? Supernovae play a pivotal role in the Universe with their energetic, destructive demises.
There are different types of supernovae exploding throughout the Universe, with different progenitors and different remnants. The Zwicky Transient Facility has detected 100,000 supernovae and classified 10,000 of them.
The Zwicky Transient Facility (ZTF) is a wide-field astronomical survey named after the prolific Swiss astronomer Fritz Zwicky. In the early 1930s, Zwicky and his colleague Walter Baade coined the term ‘supernova’ to describe the transition of normal main sequence stars into neutron stars. In the 1940s, Zwicky and his colleague developed the modern supernova classification system. The ZTF bears his name because of these and many other scientific contributions. (Zwicky was also a humanitarian and a philosopher.)
The ZTF observes in both optical and infrared and was built to detect transients with the Samuel Oschin Telescope at the Palomar Observatory in San Diego County, California. Transients are objects that change brightness rapidly or objects that move. While supernovae (SN) don’t move, they definitely change brightness rapidly. They can outshine their entire host galaxy for months.
In 2017, the ZTF began its Bright Transient Survey (BTS), an effort dedicated to the search for supernovae (SNe). It’s by far the largest spectroscopic SNe survey ever conducted. The BTS has discovered 100,000 potential SNe, and more than 10,000 of them have been confirmed and classified according to distance, type, rarity, and brightness. These types of astronomical surveys create a rich dataset that will aid researchers well into the future.
“There are trillions of stars in the universe, and about every second, one of them explodes. Reaching 10,000 classifications is amazing, but what we truly should celebrate is the incredible progress we have made in our ability to browse the universe for transients, or objects that change in the sky, and the science our rich data will enable,” said Christoffer Fremling, a staff astronomer at Caltech. Fremling leads the ZTF’s Bright Transient Survey (BTS).
The effort to catalogue supernovae dates back to 2012 when astronomical databases began officially tracking them. Since then, astronomers have detected nearly 16,000 of them, and the ZTF is responsible for more than 10,000 of those detections.
The first documented SN discovery was in 185 AD, when Chinese astronomers recorded the appearance of a ‘guest star’ that shone in the sky for eight months. In the nearly two millennia since then, we’ve seen many more. 1987 was a watershed year for supernova science, when a massive star exploded in the nearby Large Magellanic Cloud. Named SN 1987A, it was the first supernova visible to the naked eye since the invention of the telescope. It also yielded the first direct detection of neutrinos from a supernova, a detection considered by many to be the beginning of neutrino astronomy.
A timeline of important events in the history of supernova astronomy. Click to enlarge. Image Credit: ZTF/Caltech/NSF
Each night, the ZTF detects hundreds of thousands of events, including everything from small, simple asteroids in our inner Solar System to powerful gamma-ray bursts in the distant Universe. The ZTF uses a pair of telescopes that act as a kind of ‘triage’ facility for supernovae and transients. The Samuel Oschin Telescope has a 60-megapixel wide field camera that images the visible sky every two nights. Astronomers detect new transient events by subtracting images of the same portion of the sky from subsequent scans.
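In spirit, that detection step is difference imaging: subtract a reference image of a sky patch from a new exposure, and whatever remains is a candidate transient. Below is a minimal sketch of the idea with synthetic data; production pipelines like ZTF’s add image alignment, PSF matching, and careful noise modeling on top of this:

```python
import numpy as np

def find_transients(new_img, ref_img, n_sigma=5.0):
    """Flag pixels that brightened significantly between two aligned images."""
    diff = new_img - ref_img
    threshold = n_sigma * np.median(np.abs(diff)) * 1.4826  # robust sigma estimate
    ys, xs = np.where(diff > threshold)
    return list(zip(ys.tolist(), xs.tolist()))

rng = np.random.default_rng(42)
ref = rng.normal(100.0, 5.0, size=(64, 64))      # reference sky patch
new = ref + rng.normal(0.0, 5.0, size=(64, 64))  # same field, fresh noise
new[32, 32] += 500.0                             # inject a bright "supernova"
print(find_transients(new, ref))                 # -> [(32, 32)]
```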
Then, members of the ZTF team study these images and send the most promising to the other ZTF telescope, the Spectral Energy Distribution Machine (SEDM). This robotic spectrograph operates on the Palomar 60-inch telescope.
“We combine the brightness information from the ZTF camera with the data from the SEDM to correctly identify the origin and type of a transient, a process astronomers call transient classification,” said Yu-Jing Qin, a postdoc at Caltech, who is running much of the daily operations of the BTS survey.
ZTF detections are also sent to other observatories around the world, which can examine transients with other spectroscopic facilities. About 30% of the ZTF transients have been confirmed this way.
ZTF detects so many transients that it’s difficult for astronomers to keep up. In recent years, Caltech has made an effort to develop machine-learning tools that can examine SEDM spectroscopic data, classify the transients, and send them to the Transient Name Server. In 2023, the BTSbot system was deployed to help manage the flow of detections.
“Since BTSbot began operation it has found about half of the brightest ZTF supernovae before a human,” said PhD student Nabeel Rehemtulla from Northwestern University, developer of the BTSBot. “For specific types of supernovae, we have automated the entire process and BTSbot has so far performed excellently in over a hundred cases. This is the future of supernova surveys, especially when the Vera Rubin Observatory begins operations.”
Though every supernova discovery is scientifically valuable, there are some highlights among all these detections.
The ZTF has detected thousands of Type Ia supernovae. They occur in binary systems where one star is a white dwarf. The white dwarf draws gas away from its companion, and the gas accumulates on the white dwarf until it eventually triggers a supernova explosion. SN 2022qmx is one Type Ia supernova that appeared to be far brighter than it should be. It turns out that an intervening galaxy was gravitationally lensing the SN’s light, making it appear 24 times brighter.
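For context, a 24-fold brightening is enormous on astronomers’ logarithmic magnitude scale. Using the standard magnitude relation:

$$ \Delta m = -2.5 \log_{10}(24) \approx -3.4 $$

the lensed supernova appeared about 3.4 magnitudes brighter than an unlensed Type Ia at that distance should be, which is what flagged it as anomalous.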
The ZTF is also responsible for detecting the closest and most distant SNe (with help from the JWST).
Some highlights from the ZTF’s 10,000 supernovae. Click the image to enlarge. Image Credit: ZTF/Caltech/NSF
“Back when we started this project, we didn’t know how many astronomers would follow up on our detections,” said Caltech’s Fremling. “To see that so many have is a testament to why we built ZTF: to survey the whole sky for changing objects and share those data as rapidly as possible with astronomers around the world. That’s the purpose of the Transient Name Server (TNS).”
The TNS is where the global astronomical community announces the detection and classification of transients so that work isn’t duplicated. Since 2016, the TNS has handled over 150,000 reported transients and over 15,000 reported supernovae.
“Everything is public in hopes that the community will come together and make the most of it,” said Fremling. “This way, we don’t have, say, 10 telescopes across the world doing the same thing and wasting time.”
Soon, the ZTF will have a powerful partner in time-domain astronomy. The Vera Rubin Observatory (VRO) should see its first light in the next few months and then begin its 10-year Legacy Survey of Space and Time (LSST). The LSST will also detect transients but is far more sensitive than the ZTF. It’s expected to detect millions of supernovae, and handling all of those detections will require a machine-learning tool similar to the BTSbot.
“The machine learning and AI tools we have developed for ZTF will become essential when the Vera Rubin Observatory begins operations,” said Daniel Perley, an astronomer at Liverpool John Moores University in the UK who developed the search and discovery procedures for the BTS. “We have already planned to work closely with Rubin to transfer our machine learning knowledge and technology,” added Perley.
Astronomical surveys like the ones performed by ZTF and the VRO provide foundational data that researchers will use for years. It’s impossible to know how it will be used in every case or what discoveries it will lead to. Even better, the ZTF and the VRO will overlap.
According to Caltech astronomy professor Mansi Kasliwal, who will lead ZTF in the coming two years, this will be a very important and exciting time in time-domain astronomy.
“The period in 2025 and 2026 when ZTF and Vera Rubin can both operate in tandem is fantastic news for time-domain astronomers,” said Kasliwal. “Combining data from both observatories, astronomers can directly address the physics of why supernovae explode and discover fast and young transients that are inaccessible to ZTF or Rubin alone. I am excited about the future,” added Kasliwal.
The post Zwicky Classifies More Than 10,000 Exploding Stars appeared first on Universe Today.
It seems that we are completely alone in the universe. But simple reasoning suggests that there should be an abundance of alien civilizations. Maybe they’re all out there, but they are keeping their distance. Welcome to the zoo (hypothesis).
The story goes that in the summer of 1950, eminent physicist Enrico Fermi was visiting colleagues at Los Alamos National Laboratory. It was the initial peak of UFO mania, and naturally the physicists brought it up over lunch. After a short while, Fermi went silent. Later, well after the conversation had turned to other topics, he exclaimed “Where is everybody?”
Everybody knew what he meant. We know that the universe is capable of producing intelligent life. We’re literally living proof of that. But the cosmos tends to not do things just once. If life happened here, it likely also happened elsewhere. In fact, given the extraordinary age of the universe and the incredible number of stars and planetary systems in any given galaxy, the Milky Way should be abuzz with intelligent space-faring civilizations.
Humanity itself is right on the cusp of developing a sustained interplanetary presence, and our species is still in its youth, at least as cosmic reckoning is concerned. We should see evidence for other intelligent species everywhere: radio signals, megastructures, wandering probes, and so on.
But we’ve got nothing. So where is everybody?
Perhaps the strangest possible solution to Fermi’s paradox, as this conundrum came to be known, is the zoo hypothesis. In this idea, alien life is indeed common, as is intelligence. There really is no huge barrier to intelligent creatures developing spaceflight capabilities and spreading themselves throughout the galaxy.
But the reason that we don’t see anybody is that they are intentionally hiding themselves from us. Through their sophisticated observations, they can easily tell that we are intelligent ourselves, but also somewhat dangerous. After all, we have peaceful space rockets and dangerous ICBMs. We are just dipping our toes into space, and we may not be exactly trustworthy.
And so the intelligent civilizations of the galaxy are keeping us in a sort of “zoo.” They are masking themselves and their signals, making us think that we’re all alone, largely confined to our own solar system and a few nearby stars.
Once we prove ourselves, the hypothesis goes, we’ll be welcomed into the larger galactic community with open arms (or tentacles).
The zoo hypothesis is, honestly, a little far-fetched. It assumes not only the existence of alien civilizations, but also their motives and intentions. But we ultimately do not know if we are alone in the universe. And there’s only one way to find out.
The post What is the Zoo Hypothesis? appeared first on Universe Today.
When it comes to our modern society and the many crises we face, there is little doubt that fusion power is the way of the future. The technology not only offers abundant power that could solve the energy crisis, it does so in a clean and sustainable way. At least as long as our supplies of deuterium (²H) and helium-3 hold up. In a recent study, a team of researchers considered how evidence of deuterium-deuterium (DD) fusion could be used as a potential technosignature in the Search for Extraterrestrial Intelligence (SETI).
The study was conducted by David C. Catling and Joshua Krissansen-Totton of the Department of Earth & Space Sciences and the Virtual Planetary Laboratory (VPL) at the University of Washington (respectively) and Tyler D. Robinson of the VPL and the Lunar & Planetary Laboratory (LPL) at the University of Arizona. In their paper, which is set to appear in the Astrophysical Journal, the team considered how long-lived extraterrestrial civilizations may deplete their supplies of deuterium – something that would be detectable by space telescopes.
At the heart of SETI lies the assumption that advanced civilizations existed in our galaxy long before humanity. Another conclusion extends from this: if humanity can conceive of something (and the physics are sound), a more advanced civilization is likely to have already built it. In fact, many SETI researchers and scientists have suggested that advanced civilizations will adopt fusion power to meet their growing energy needs as they continue to grow and ascend the Kardashev Scale.
The spherical tokamak MAST at the Culham Centre for Fusion Energy (UK). Photo: CCFE
This is understandable, considering how other forms of energy (fossil fuels, solar, wind, nuclear, hydroelectric, etc.) are either finite or inefficient. Space-based solar power is a viable option since it can provide a steady supply of energy that is not subject to intermittency or weather patterns. Nevertheless, nuclear fusion is considered a major contender for future energy needs because of its efficiency and energy density. It is estimated that one gram of hydrogen fuel could generate as much as 90,000 kilowatt-hours of energy – the equivalent of 11 metric tons (12 U.S. tons) of coal.
In addition, deuterium has a natural abundance in Earth’s oceans of about one atom of deuterium for every 6,420 atoms of hydrogen. This deuterium interacts with water molecules and will replace one or both hydrogen atoms to create “semi-heavy water” (HDO) and sometimes “heavy water” (D2O). This works out to about 4.85×10^13 metric tons (5.346×10^13 U.S. tons) of deuterium. As they argue in their paper, extracting deuterium from an ocean would decrease its ratio of deuterium to hydrogen (D/H), which would be detectable in atmospheric water vapor. Meanwhile, the helium produced in the fusion reactions would escape to space.
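That reservoir figure follows from a short back-of-envelope calculation. Here is a sketch using rounded textbook values (ocean mass, the hydrogen mass fraction of water, and the quoted D/H ratio), which lands on essentially the same number:

```python
OCEAN_MASS_KG = 1.4e21            # approximate mass of Earth's oceans
H_MASS_FRACTION = 2.016 / 18.015  # hydrogen's share of water's mass, ~11.2%
D_PER_H = 1 / 6420                # deuterium-to-hydrogen atom ratio
D_TO_H_MASS = 2.014 / 1.008       # a deuterium atom is ~2x heavier than H

hydrogen_kg = OCEAN_MASS_KG * H_MASS_FRACTION
deuterium_kg = hydrogen_kg * D_PER_H * D_TO_H_MASS
print(f"{deuterium_kg:.2e} kg")       # ~4.9e16 kg
print(f"{deuterium_kg / 1e3:.2e} t")  # ~4.9e13 metric tons
```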
In recent years, it has been suggested that excess carbon dioxide and radioactive isotopes in an exoplanet’s atmosphere could be used to infer the presence of an industrial civilization. In the same vein, low values of D/H in an exoplanet’s atmosphere (along with helium) could be used to detect a highly advanced and long-lived civilization. As Catling explained in a recent interview with phys.org, this possibility is one he began pondering years ago.
“I didn’t do much with this germ of idea until I was co-organizing an astrobiology meeting last year at Green Bank Observatory in West Virginia,” he said. “Measuring the D/H ratio in water vapor on exoplanets is certainly not a piece of cake. But it’s not a pipe dream either.”
A model JWST transmission spectrum for an Earth-like planet, showing the wavelengths of sunlight that molecules like ozone (O3), water (H2O), carbon dioxide (CO2), and methane (CH4) absorb. Credit: NASA, ESA, Leah Hustak (STScI)
To model what an advanced civilization dependent on DD fusion would look like, Catling and his colleagues considered projections for what Earth will look like by 2100. At that point, the global population is expected to reach 10.4 billion, and fusion power is projected to provide 100 terawatts (TW). They then multiplied that by a factor of ten (1,000 TW) for a more advanced civilization and found that such a civilization would reduce the D/H value of an Earth-like ocean to that of the interstellar medium (ISM) in about 170 million years.
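That timescale is also simple bookkeeping: divide the reservoir’s fusion energy content by the assumed power draw. A rough order-of-magnitude check, using a generic D-D energy yield rather than the paper’s detailed model:

```python
DEUTERIUM_KG = 4.9e16    # ocean reservoir, from the estimate above
ENERGY_PER_KG = 8.7e13   # J/kg for D-D fusion (~1.8 MeV released per deuteron)
POWER_W = 1e15           # the assumed 1,000 TW civilization

seconds = DEUTERIUM_KG * ENERGY_PER_KG / POWER_W
print(f"{seconds / 3.156e7:.1e} years")  # ~1.4e8 yr, same order as ~170 Myr
```

The exact figure depends on the reaction chain and on how far D/H is depleted, but the order of magnitude matches the paper’s quoted ~170 million years.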
The beauty of this approach is that the low D/H values in an exoplanet’s atmosphere would persist long after a civilization went extinct, migrated off-world, or became even more advanced and “transcended.” In terms of search strategies, the team used the Spectral Mapping Atmospheric Radiative Transfer (SMART) model to identify the specific wavelengths and emission lines for HDO and H2O. These findings will be useful for future surveys involving the James Webb Space Telescope (JWST), NASA’s proposed Habitable Worlds Observatory (HWO), and the Large Interferometer For Exoplanets (LIFE).
“It’s up to the engineers and scientists designing [HWO] and [LIFE] to see if measuring D/H on exoplanets might be an achievable goal. What we can say, so far, is that looking for D/H from LIFE appears to be feasible for exoplanets with plenty of atmospheric water vapor in a region of the spectrum around 8 microns wavelength.”
Further Reading: phys.org, arXiv
The post A New Study Suggests How we Could Find Advanced Civilizations that Ran Out of Fusion Fuel appeared first on Universe Today.
Astronomers have spent decades trying to understand how galaxies grow so large. One piece of the puzzle is spheroids, also known as galactic bulges. Spiral galaxies and elliptical galaxies have different morphologies, but they both have spheroids. This is where most of their stars are and, in fact, where most stars in the Universe reside. Since most stars reside in spheroids, understanding them is critical to understanding how galaxies grow and evolve.
New research focused on spheroids has brought astronomers closer than ever to understanding how galaxies become so massive.
Elliptical galaxies have no flat disk component. They’re smooth and featureless and contain little gas and dust compared to spirals. Without that raw material, new stars seldom form, so ellipticals are populated with older stars.
Astronomers don’t know how these ancient, bulging galaxies formed and evolved. However, a new research letter in Nature may finally have the answer. It’s titled “In situ spheroid formation in distant submillimetre-bright galaxies.” The lead author is Qing-Hua Tan from the Purple Mountain Observatory, Chinese Academy of Sciences, China. Dr. Annagrazia Puglisi from the University of Southampton co-authored the research.
“Our findings take us closer to solving a long-standing mystery in astronomy that will redefine our understanding of how galaxies were created in the early universe.”
Dr. Annagrazia Puglisi, University of Southampton
The international team of researchers used the Atacama Large Millimetre/sub-millimetre Array (ALMA) to examine highly luminous starburst galaxies in the distant Universe. Sub-millimetre means it observes electromagnetic energy between far-infrared and microwave wavelengths. Astronomers have long suspected that these galaxies are connected to spheroids, but observing them is challenging.
“Infrared/submillimetre-bright galaxies at high redshifts have long been suspected to be related to spheroid formation,” the authors write. “Proving this connection has been hampered so far by heavy dust obscuration when focusing on their stellar emission or by methodologies and limited signal-to-noise ratios when looking at submillimetre wavelengths.”
This image shows two of the Atacama Large Millimeter/submillimeter Array (ALMA) 12-metre antennas. ALMA has 66 antennas that work together as an interferometer. (Credit: Iztok Bonina/ESO)
The researchers used ALMA to analyze more than 100 of these ancient galaxies with a new technique that measures their distribution of light. These brightness profiles show that the majority of the galaxies have tri-axial shapes rather than flat disks, indicating that something in their history made them misshapen.
Two important concepts underpin the team’s results: The Sersic index and the Spergel index.
The Sersic index is a fundamental concept in describing the brightness profiles of galaxies. It characterizes the radial distribution of light coming from galaxies and basically describes how light is concentrated in a galaxy.
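For reference, the Sérsic profile is usually written as an intensity law, where the index n sets how centrally concentrated the light is (n = 1 gives an exponential disk, n = 4 the classic de Vaucouleurs spheroid):

$$ I(r) = I_e \exp\left\{-b_n\left[\left(\frac{r}{r_e}\right)^{1/n} - 1\right]\right\} $$

Here r_e is the half-light radius, I_e the intensity at that radius, and b_n a constant chosen so that half the total light falls within r_e.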
The Spergel index is less commonly used. It’s based on the distribution of dark matter in galaxies. Rather than light, it helps astronomers understand how matter is concentrated. Together, both indices help astronomers characterize the complex structure of galaxies.
These indices, along with the new ALMA observations, led to new insights into how spheroids formed through mergers and the resulting influx of cold, star-forming gas.
It all starts with a galaxy collision or merger, which sends large flows of cold gas into the galactic centre.
This is a JWST image (not from this research) of an ancient galaxy merger from 13 billion years ago. The galaxy, named Gz9p3, has a double nucleus indicating that the merger is ongoing. While astronomers know that mergers are a critical part of galaxy growth and evolution, the role spheroids play has been difficult to discern. Image Credit: NASA/Boyett et al
“Two disk galaxies smashing together caused gas—the fuel from which stars are formed—to sink towards their centre, generating trillions of new stars,” said co-author Puglisi. “These cosmic collisions happened some eight to 12 billion years ago when the universe was in a much more active phase of its evolution.”
“This is the first real evidence that spheroids form directly through intense episodes of star formation located in the cores of distant galaxies,” Puglisi said. “These galaxies form quickly—gas is sucked inwards to feed black holes and triggers bursts of stars, which are created at rates ten to 100 times faster than our Milky Way.”
The researchers compared their observations to hydro-simulations of galaxy mergers. The results show that the spheroids can maintain their shape for up to approximately 50 million years after the merger. “This is compatible with the inferred timescales for the submillimeter-bright bursts based on observations,” the authors write. After this intense period of star formation in the spheroid, the gas is used up, and things die down. No more energy is injected into the system, and the residual gas flattens out into a disk.
This figure from the research shows how the spheroids lose their shape after the intense period of star formation following a merger. (a) shows maps (2×2 kpc) of the central gas in three different…
These types of galaxies were more plentiful in the early Universe than they are now. The researchers’ results show that these galaxies used up their fuel quickly, forming the spheroids that are now populated by old stars.
This isn’t the first time astronomers have investigated the potential link between spheroids and distant submillimeter-bright galaxies. Previous research also found evidence for tri-axiality, along with heavy ellipticity and other indications that submillimeter-bright galaxies are barred disks at submillimeter wavelengths. However, this new research relied on observations with a higher signal-to-noise ratio than earlier work.
“Astrophysicists have sought to understand this process for decades,” Puglisi said. “Our findings take us closer to solving a long-standing mystery in astronomy that will redefine our understanding of how galaxies were created in the early universe.”
“This will give us a more complete picture of early galaxy formation and deepen our understanding of how the universe has evolved since the beginning of time.”
The post We Might Finally Know How Galaxies Grow So Large appeared first on Universe Today.
Imagine you’ve just gotten to Mars as part of the first contingent of settlers. Your first challenge: build a long-term habitat using local materials. Those might include water from the polar caps mixed with specific surface soils. They might even require some very personal contributions—your blood, sweat, and tears. Using such in situ materials is the challenge a team of Iranian engineers studied in a research project looking at local materials on Mars.
In situ resource utilization has always been part of Mars mission and colonization scenarios. It’s expensive to bring along habitat construction materials with you, and space will be limited onboard the ship. Once you settle on Mars, you can use your ship as a habitat until you build your new colony. But, what are you going to create new homes from?
Cement or concrete come to mind, made from whatever’s available on or just below the surface. The authors of the study, Omid Karimzade Soureshjani, Ali Massumi, and Gholmreza Nouri, focused on Martian cement. They assembled data sets about soil composition from Mars landers and orbiters and came up with a collection of concrete types that future colonists could use. Next, they applied structural engineering principles and suggested some options for onsite construction using what are called spider/radar diagrams and charts, which let building planners weigh candidate materials across several criteria at once for different concepts of Mars architecture.
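Radar diagrams of this kind are easy to produce; here is a hedged sketch with matplotlib, where the criteria and scores are hypothetical placeholders rather than values from the study:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical criteria and 0-10 scores for two candidate Martian concretes.
# Placeholder values for illustration only, not figures from the paper.
criteria = ["Strength", "Low water demand", "Low energy cost",
            "Material availability", "Radiation shielding"]
scores = {"Sulfur concrete": [7, 9, 6, 8, 5],
          "Geopolymer":      [8, 4, 5, 7, 6]}

angles = np.linspace(0, 2 * np.pi, len(criteria), endpoint=False).tolist()
angles += angles[:1]  # repeat the first angle to close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, vals in scores.items():
    ax.plot(angles, vals + vals[:1], label=label)
    ax.fill(angles, vals + vals[:1], alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(criteria)
ax.legend(loc="lower right")
plt.show()
```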
A graph showing steps in the study of possible building materials on Mars. Courtesy: Soureshjani, et al.
Building That Mars City
The authors, like most of us, foresee permanent settlements in the coming decades. They write, “The goal would be to establish a self-sustaining city (self-sufficient megabase) on the surface of Mars, accommodating at least a million people. However, constructing safe, stable, and sufficient buildings that can withstand the harsh Martian environment for such a population will be challenging. Due to the high costs associated with importing buildings, materials, and structural elements from Earth, it is necessary to construct all buildings on-site using local resources.”
Let’s look at the usability and cost-effectiveness of Martian soil (regolith). Chemically, it’s rich in the right elements to make several types of concrete. Of course, not all regoliths are equally useful, so the authors propose surface scans to find the best mixes of surface materials. Presumably, those scans will help future inhabitants locate the best deposits. Access to raw materials from around the planet should make them cost-effective, eventually.
Challenges to Mars Construction
Of course, there are other factors besides material availability at work in such a construction project. Here on Earth, we have centuries of experience building in this gravity well, with familiar materials. We know how to build things under this atmospheric pressure, and we don’t have to contend with the harsh conditions of a planet constantly bombarded by ultraviolet radiation. Mars presents the challenge of creating buildings that have to withstand that radiation, the lower atmospheric pressure, and water scarcity. That lower pressure and gravity on Mars could seriously affect the durability of a given concrete made from Martian materials.
In addition to planetary geology and surface conditions, it takes energy to collect, process, and create the building materials needed for long-term habitation. You need a simple, cost-effective energy source, particularly in the beginning. Nuclear power plants are unlikely to be first on the list to build, since they require tremendous resources; perhaps they can come later, but not in the first wave. Solar energy will be the go-to resource at the start. To make cement, you also need water, a notably scarce resource on much of Mars outside the poles. The ice caps could provide some, but settlers will likely want to find ways to make good cement with the least water possible.
Using Organic Binders for Mars Home Building Blocks
Interestingly, the authors mention something called “blood concrete”, or its modern version: AstroCrete. It’s a concept based on ancient Roman practices of using organic additives to construction materials (think: animal blood, urine, etc.). Now, they aren’t suggesting that future Martians must “bleed for their art”, but our bodies do make plasma rather easily. It could be a useful resource.
A substance called human serum albumin (HSA) is under study as a binder to mix with “AstroCrete” materials, along with sweat, tears, and urine. All of those will be available in relative abundance in future Mars settlements. AstroCrete made from Martian soils and human “contributions” is essentially a waterless cement, strong enough to rely on for construction (and you hope it won’t smell too bad).
Visible light images of the 3D-printed HSA-ERB based on Martian Global Simulant. (a) after fabrication, (b) during compression testing, and (c) after compression testing. Courtesy: Roberts, et al.
Exploring the Possibilities
The authors studied 11 types of cement, including geopolymer and magnesium silica mixtures, all of which require specific materials. They point out that sulfur concrete is probably the most promising avenue for structures on Mars. Others will take more study to understand their usability in Martian conditions. In the long term, searching out and understanding the materials available on the Red Planet will help future colonists build the necessary habitats and cities. Finally, the authors note that additional study of both the materials and the Martian environment, using data from current and future missions, is needed. Their paper is well worth reading in more detail.
For More Information
Martian Buildings: Feasible Cement/concrete for Onsite Sustainable Construction from the Structural Point of View
Martian Concrete Could be Tough Stuff
Blood, Sweat, and Tears: Extraterrestrial Regolith Biocomposites with in vivo Binders
The post Building Concrete on Mars From Local Materials appeared first on Universe Today.
This past year saw some significant solar activity, especially during the month of May, which saw more than 350 solar storms, solar flares, and geomagnetic storms. These included the strongest solar storm in 20 years, which produced aurorae at far lower latitudes than usual, and the strongest solar flare observed since December 2019. Given the threat such events pose to radio communications, power grids, navigation systems, spacecraft, and astronauts, numerous agencies actively monitor the Sun to learn more about its long-term behavior.
However, astronomers have not yet determined whether the Sun can produce “superflares” or how often they might occur. While tree rings and samples of millennia-old glacial ice are effective records of the most powerful superflares, they are not effective ways to determine their frequency, and direct measurements of solar activity have only been available since the Space Age. In a recent study, an international team of researchers adopted a new approach: by analyzing Kepler data on tens of thousands of Sun-like stars, they estimate that stars like ours produce superflares about once a century.
The study was conducted by researchers from the Max-Planck-Institut for Solar System Research (MPS), the Sodankylä Geophysical Observatory (SGO) and the Space Physics and Astronomy Research unit at the University of Oulu, the National Astronomical Observatory of Japan (NAOJ), the Laboratory for Atmospheric and Space Physics (LASP) at the University of Colorado Boulder, the National Solar Observatory (NSO), the Commissariat of Atomic and Alternative Energies of Paris-Saclay and the University of Paris-Cité, and multiple other universities. The paper describing their research recently appeared in the journal Science.
Superflares are notable for the intense amount of radiation they emit, about 10³² erg, or roughly 6.24×10⁴³ electron volts (eV). For comparison, consider the Carrington Event of 1859, one of the most violent solar storms of the past 200 years. While this solar flare caused widespread disruption, leading to the collapse of telegraph networks in northern Europe and North America, it released only a hundredth of the energy of a superflare. While tree rings and glacial samples have recorded powerful events in the past, the ability to observe thousands of stars at a time is teaching astronomers a lot about how often the most powerful flares occur.
This is certainly true of the Kepler Space Telescope, which monitored about 100,000 main-sequence stars continuously for years, looking for the periodic dips in brightness that indicate the presence of exoplanets. These same observations recorded countless stellar flares, which appeared in the observational data as short, pronounced peaks in brightness. As Prof. Dr. Sami Solanki, a Director at the MPS and a co-author of the paper, explained in an MPS press release:
“We cannot observe the Sun over thousands of years. Instead, however, we can monitor the behavior of thousands of stars very similar to the Sun over short periods of time. This helps us to estimate how frequently superflares occur.”
For their study, the team analyzed data obtained by Kepler from 56,450 Sun-like stars between 2009 and 2013. This consisted of carefully analyzing the images for signs of potential superflares, which were only a few pixels in size. The team was also careful in their selection of stars, taking into account only those whose surface temperature and brightness were similar to the Sun’s. The researchers also ruled out potential sources of error, including cosmic radiation, transient phenomena (asteroids or comets), and other types of stars flaring up near a Sun-like star.
In total, the Kepler data provided the team with evidence covering a combined 220,000 years of stellar activity. From this, they were able to identify 2,889 superflares on 2,527 of the observed stars, an average of roughly one superflare per star per century. While previous surveys have found average intervals of a thousand or even ten thousand years, those studies could not determine the exact source of the observed flares. They also had to limit themselves to stars without any close neighbors, making this latest study the most precise and sensitive to date.
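As a back-of-the-envelope check, dividing the flare count by the accumulated observing time reproduces the roughly once-a-century rate; the four-year coverage per star is an assumption based on the 2009-2013 window:

```python
# Back-of-the-envelope check of the superflare rate from the numbers above.
n_stars = 56_450       # Sun-like stars in the Kepler sample
years_per_star = 4     # assumed continuous coverage, 2009-2013
n_superflares = 2_889

star_years = n_stars * years_per_star   # ~226,000 star-years of observation
rate = n_superflares / star_years       # flares per star per year
print(f"Coverage: {star_years:,} star-years")
print(f"Rate: one superflare per star every {1 / rate:.0f} years")
# -> roughly once per century, as the study concludes
```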
Nevertheless, previous studies that considered indirect evidence and observations made in the past few decades have yielded longer intervals between superflares. Whenever the Sun has released a high level of energetic particles that reached Earth’s atmosphere in the past, the interaction produced a detectable amount of radioactive carbon-14 (C14). This isotope will remain in tree and glacial samples over thousands of years of slow decay, allowing astronomers to identify powerful solar events and how long ago they occurred.
This method has allowed researchers to identify five extreme solar particle events and three candidates within the past twelve thousand years – suggesting an average rate of one superflare per 1,500 years. However, the team acknowledges that it is possible that more violent solar particle events and superflares occurred in the past. “It is unclear whether gigantic flares are always accompanied by coronal mass ejections and what is the relationship between superflares and extreme solar particle events,” said co-author Prof. Dr. Ilya Usoskin from the University of Oulu. “This requires further investigation.”
While the new study does not reveal when the Sun will experience its next superflare, the results urge caution. “The new data are a stark reminder that even the most extreme solar events are part of the Sun’s natural repertoire,” said co-author Dr. Natalie Krivova from the MPS. In the meantime, the best way to stay prepared is to monitor the Sun regularly to ensure reliable forecasting and advanced warning. By 2031, these efforts will be bolstered by the ESA’s Vigil probe, which the MPS is assisting through the development of its Polarimetric and Magnetic Imager (PHI) instrument.
The post New Research Indicates the Sun may be More Prone to Flares Than we Thought appeared first on Universe Today.
In 2018, NASA mission planners selected the Jezero Crater as the future landing site of the Perseverance rover. This crater was a natural choice, as it was once an ancient lake bed, as evidenced by the delta fan at its western edge. On Earth, these features form in the presence of flowing water that gradually deposits sediment over time. Combined with the fact that the Jezero Crater’s delta feature is rich in clays, this makes the region a prime target to search for biosignatures – evidence of past (and maybe present) life on Mars!
In recent news, NASA announced that the Perseverance rover had reached the top of Jezero Crater’s rim at a location the science team calls “Lookout Hill.” The rover spent the previous three and a half months climbing the rim, covering a distance of 500 vertical meters (1,640 vertical feet) and making science observations along the way. Now that it has crested the rim, Perseverance can begin what the mission team calls its “Northern Rim” campaign. Over the next year, the rover is expected to drive 6.4 km (4 mi) and visit up to four sites of interest where it will obtain geological samples.
Since it landed in Jezero Crater in February 2021, Perseverance has completed four science campaigns: the “Crater Floor,” “Fan Front,” “Upper Fan,” and “Margin Unit” campaigns, named for where the rover was obtaining samples. During the first campaign, the rover visited features around its landing site – like the Máaz formation – where it obtained several rock and atmospheric samples, plus some witness samples for contamination assessment. The two campaigns that followed saw the rover explore different sections of Jezero’s delta fan and obtain samples of rock and clay.
The fourth campaign, meanwhile, consisted of the rover examining marginal carbonate rocks that circle the upper edge of the Jezero Crater. The science team calls Perseverance’s fifth campaign the “Northern Rim” because its route covers the northern part of the southwestern section of Jezero’s rim. The site was selected so that the rover could explore a region of Mars unlike anything it has investigated before. Ken Farley, a project scientist for Perseverance at Caltech, explained in a NASA press release:
“The Northern Rim campaign brings us completely new scientific riches as Perseverance roves into fundamentally new geology. It marks our transition from rocks that partially filled Jezero Crater when it was formed by a massive impact about 3.9 billion years ago to rocks from deep down inside Mars that were thrown upward to form the crater rim after impact. These rocks represent pieces of early Martian crust and are among the oldest rocks found anywhere in the solar system. Investigating them could help us understand what Mars — and our own planet — may have looked like in the beginning.”
Now that Perseverance has crested and moved on from Lookout Hill, the rover is heading to a rocky outcrop about 450 m (1,500 feet) down the other side of the rim known as “Witch Hazel Hill.” Said Candice Bedford, a Perseverance scientist from Purdue University:
“The campaign starts off with a bang because Witch Hazel Hill represents over 330 feet [~100 m] of layered outcrop, where each layer is like a page in the book of Martian history. As we drive down the hill, we will be going back in time, investigating the ancient environments of Mars recorded in the crater rim. Then, after a steep descent, we take our first turns of the wheel away from the crater rim toward ‘Lac de Charmes,’ about 2 miles [3.2 km] south.”
Located on the plains beyond the rim, the Lac de Charmes region is of interest to the mission team because it is less likely to have been affected by the impact that led to the Jezero Crater. Beyond that, the rover will travel about 1.6 km (1 mi) back up the rim to investigate an outcropping of blocks (megabreccia) that may be the remains of ancient bedrock broken by another impact. This was the Isidis impact, which occurred 3.9 billion years ago and led to the formation of the Isidis Planitia basin in the Northern Lowlands.
The route NASA’s Perseverance Mars rover took (in blue) as it climbed the western rim of Jezero Crater. Credit: NASA/JPL-Caltech/University of Arizona
Investigating this site could provide valuable insight into a major surface-reshaping event that took place during the Noachian Period on Mars. This geological epoch saw extensive erosion by flowing water, as indicated by the many river valley networks dated to the period. It is also during the Noachian that the Tharsis Bulge is believed to have formed, indicating that Mars was still geologically active. As always, the ultimate goal is to find biosignatures from this “warmer, wetter” period that indicate that Mars could have had life (similar to Earth at the time).
The Perseverance science team also shared information on the rover, their science operations, and future plans at a media briefing on Thursday, December 12th, during the annual meeting of the American Geophysical Union (AGU) in Washington. As Steven Lee, the deputy project manager for the Perseverance mission at NASA’s Jet Propulsion Laboratory, said during the briefing:
“During the Jezero Crater rim climb, our rover drivers have done an amazing job negotiating some of the toughest terrain we’ve encountered since landing. They developed innovative approaches to overcome these challenges — even tried driving backward to see if it would help — and the rover has come through it all like a champ. Perseverance is ‘go’ for everything the science team wants to throw at it during this next science campaign.”
Further Reading: NASA
The post NASA’s Perseverance Rover Reaches the Top Rim of the Jezero Crater appeared first on Universe Today.
Getting places in space quickly has been the goal of propulsion research for a long time. Rockets, our most common means of doing so, are great for providing lots of force but extraordinarily inefficient. Other options like electric propulsion and solar sailing are efficient but offer measly amounts of force, albeit for a long time. So scientists have long dreamed of a third method of propulsion – one that could provide enough force over a long enough time to power a crewed mission to another star in a single human lifetime. And that could theoretically happen using one of the rarest substances in the universe – antimatter.
A new paper from Sawsan Ammar Omira and Abdel Hamid I. Mourad at the United Arab Emirates University looks at the possibilities of developing a space drive using antimatter and what makes it so hard to create. Antimatter was initially discovered in 1932 when physicist Carl David Anderson observed positrons – the antimatter form of an electron – in cosmic rays by passing them through a cloud chamber. He won the Nobel Prize in physics in 1936 for his discovery. It took 20 years to create it artificially for the first time.
Since then, antimatter has been poked and prodded in as many ways as scientists could think of – including literally, though that causes the thing antimatter is most famous for: self-annihilation. When an antiproton comes into contact with the protons or neutrons of normal matter, they annihilate one another, releasing a combination of energy (typically in the form of gamma rays) and short-lived high-energy particles known as pions and kaons, which travel at relativistic speeds.
So, in theory, a ship could carry enough antimatter to create this annihilation explosion intentionally, using the relativistic particles as a form of thrust and potentially using the gamma rays as a source of power. The overall energy released by a gram of antiprotons being annihilated is about 1.8×10¹⁴ joules, 11 orders of magnitude more than rocket fuel and even 100 times greater energy density than a nuclear fission or fusion reactor. As the paper puts it, “one gram of antihydrogen could ideally power 23 space shuttles.”
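That figure is just E = mc² applied to the combined mass of a gram of antimatter and the gram of ordinary matter it annihilates with; a quick sketch:

```python
c = 2.998e8      # speed of light, m/s
m_total = 2e-3   # kg: 1 g of antimatter plus the 1 g of matter it meets

energy = m_total * c ** 2                                    # E = mc^2
print(f"Annihilation energy: {energy:.2e} J")                # ~1.8e14 J
print(f"Equivalent electricity: {energy / 3.6e12:.0f} GWh")  # ~50 GWh
```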
All this begs the question – why don’t we have these awesome propulsion systems yet? The simple answer is that antimatter is tricky to work with. Since it will self-annihilate with anything it touches, it must be suspended in an advanced electromagnetic containment field. The longest scientists have been able to do that was for about 16 minutes at CERN in 2016, and even that was only on the order of a few atoms – not the grams or kilograms needed to support an interstellar propulsion system.
Additionally, it takes absurd amounts of energy to create antimatter, which makes it expensive. The Antiproton Decelerator, a massive particle accelerator at CERN, makes about ten nanograms of antiprotons a year at a cost of several million dollars. Extrapolating that out, producing one gram of antimatter would require something like 25 million kWh of energy—enough to power a small city for a year. It would also cost over $4M at average electricity rates, making it one of the most expensive substances on Earth.
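Those production numbers can be sanity-checked the same way. Notably, 25 million kWh is essentially the rest-mass energy of a single gram, the absolute floor even at perfect conversion efficiency; the electricity price below is an assumed average rate:

```python
c = 2.998e8   # m/s
m = 1e-3      # kg: one gram of antimatter

rest_energy = m * c ** 2    # ~9e13 J, the floor at 100% efficiency
kwh = rest_energy / 3.6e6   # joules -> kilowatt-hours, ~2.5e7 kWh
print(f"Minimum energy to make 1 g: {kwh:.2e} kWh")

price = 0.16  # USD per kWh, an assumed average electricity rate
print(f"Electricity cost at ${price}/kWh: ${kwh * price:,.0f}")  # ~$4 million
```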
Fraser discusses techniques to protect relativistic ships (such as those powered by antimatter) from dust in the interstellar medium.
Given this expense and the massive scale of the infrastructure needed, antimatter research is relatively limited. Around 100-125 papers per year are produced on the subject, up dramatically from around 25 in 2000. For comparison, around 1,000 papers per year are published on large language models, one of the more popular families of algorithms powering the current AI boom. In other words, the expense and the long time horizon before any payoff limit the funding and, therefore, advances in antimatter creation and storage.
That means it will probably be quite some time before we end up with an antimatter ship drive. We might first need to develop preliminary energy-producing technologies like fusion, which could significantly lower the cost of energy and enable the research that would eventually get us there. However, the possibility of traveling at near-relativistic speeds and potentially getting actual humans to another star within a single lifetime is an ambitious goal that space and exploration enthusiasts everywhere will continue to pursue, no matter how long it takes.
Learn More:
Sawsan Ammar Omira & Abdel Hamid I. Mourad – Future of Antimatter Production, Storage, Control, and Annihilation Applications in Propulsion Technologies
UT – It’s Official, Antimatter Falls Down in Gravity, Not Up
UT – Are There Antimatter Galaxies?
UT – Spectrum of Antimatter Observed for First Time
Lead Image:
Artist’s conception of an antimatter rocket system.
Credit – NASA/MSFC
The post Antimatter Propulsion Is Still Far Away, But It Could Change Everything appeared first on Universe Today.
Exomoons are a hot topic in the science community: none have yet been confirmed, even as astronomers find new and creative ways to search for them. But while astronomers have searched for exomoons orbiting exoplanets around single stars like our Sun, could exomoons exist around exoplanets orbiting binary stars? This is what a recent study submitted to The Astrophysical Journal hopes to address, as a team of researchers from Tufts University investigated the statistical likelihood of exomoons orbiting exoplanets with two stars, also known as circumbinary planets (CBPs). The study could help researchers better understand the methods needed to identify exomoons in a variety of exoplanetary systems.
Here, Universe Today discusses this incredible research with Benjamin R. Gordon, a Master of Science student in Astrophysics at Tufts University and lead author of the study, regarding the motivation behind the study, the significant results, potential follow-up studies, the importance of finding exomoons orbiting CBPs, and which known systems are the most promising for identifying exomoons. Therefore, what was the motivation behind this study?
Gordon tells Universe Today, “We were motivated at the start by a couple of ideas, but my biggest source of inspiration was the idea that circumbinary planets are thought to have a farther minimum distance than single star planets, meaning that more circumbinary planets would be likely to lie within the “habitable zone”. Thus, any moon of these circumbinary planets that may have the potential to form life, as they may be similar in size to Earth if a planet is very large. It’s not a trivial question to ask if moons in these chaotic systems of 2 stars and a planet would be stable, so we were eager to find an answer!”
For the study, the researchers used computer models to simulate how exomoons could orbit CBPs under a variety of exoplanetary system conditions, focusing on each planet's Hill radius, the region within which it can hold onto orbiting moons. The researchers ran the simulations on two populations of CBPs and exomoons: Population 1, with no limit on the planetary radius allowed to host exomoons, and Population 2, with planetary radii between three times Earth's and the size of the corresponding known exoplanets, all of which are gas giants orbiting binary stars. They conducted 390 computer simulations of the Population 1 planets and 484 of the Population 2 planets. So, what were the most significant results from the study?
“One of the main findings is that there is a section of the parameter space of the initial conditions of our system that always results in stable exomoons of circumbinary planets,” Gordon tells Universe Today. “We also found that 30-40% of stable moons are in the habitable zone, which is a very significant fraction. We also show that the disk-driven migration scenario for a circumbinary planet-moon system is a possible formation pathway for long-period circumbinary planets as well as planetary mass objects that float freely through space.”
The goal of exoplanet hunting is to find an Earth-like world whose size, distance from its star, and atmospheric composition could provide the right conditions to support life as we know it. Unfortunately, of the 5,806 confirmed exoplanets, only 210 are rocky worlds like our own, while more than half are gas giants. Therefore, identifying exomoons orbiting CBPs within their stars' habitable zones could hold promise for finding Earth-sized exomoons orbiting gas giants larger than Jupiter. So, what follow-up studies are currently in the works, and what are Gordon's thoughts on the importance of potentially finding exomoons orbiting CBPs?
“It would be interesting to investigate the stability of these moons including the effects of inclination and multi-planet systems,” Gordon tells Universe Today. “I am also hoping to apply for telescope time with future missions such as the Nancy Grace Roman Telescope to follow-up on circumbinary systems that are similar to those we see in our simulations with stable exomoons. Currently, there have been no confirmed exomoons, so finding one in general would be remarkable! If we find one specifically orbiting a circumbinary planet, this may be a tremendous candidate for follow up searches for life via JWST.”
As noted, no exomoons have been confirmed to exist, but there are currently almost two dozen exomoon candidates. Two of them (around Kepler-1625b and Kepler-1708b) were recently called into question based on exoplanet transit data, though those doubts were themselves disputed only a few months later, leaving them as likely candidates. Two others are potentially volcanically active exomoons, each orbiting a “hot Jupiter” (WASP-49b and HD 189733b). Of those four host planets, HD 189733b resides in a binary star system, with the primary star hypothesized to be an orange dwarf, which HD 189733b orbits, and the secondary hypothesized to be a red dwarf.
With this, the question becomes: what about habitable exomoons? Several moons within our solar system exhibit evidence of containing the building blocks for life as we know it, specifically Europa, Titan, and Enceladus, all of which orbit gas giants, though far outside our Sun's habitable zone. If worlds like these exist within our own solar system, then similar exomoons could orbit gas giants in other systems as well. Could we find exomoons orbiting within their star's respective habitable zone? For instance, could a gas giant that orbits within its star's habitable zone possess exomoons similar to Earth? Therefore, according to Gordon, which known systems are the most promising for identifying exomoons?
Artist’s illustration of an Earth-like exomoon orbiting a gas giant exoplanet in a star’s habitable zone. (Credit: NASA/JPL-Caltech)
“In my opinion, I do think that single star systems would be the easiest to confirm an exomoon,” Gordon tells Universe Today. “This is because the data used for various proposed detection methods is much more complex for binary systems than for single stars, as an extra star provides another source of dynamical interactions. For example, there is already an issue with finding circumbinary planets using the transit method, as the transits do not phase fold due to transit timing variations from interactions with the binary.”
Gordon continues by telling Universe Today, “Trying to find a moon on a circumbinary planet light-curve would make a hard problem even more difficult, whereas a single star exoplanetary light-curve would provide a cleaner starting point where each of the candidates so far have been spotted (Kepler-1625b and Kepler-1708b). For circumbinary exomoons, our research shows that it would be best to search in systems that have a wide binary separation, as stable moons were able to orbit at up to 10% of their planet’s hill radius (for context, our moon orbits at around 26% of the Earth’s hill radius).”
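Those Hill-radius percentages are easy to reproduce. Below is a minimal sketch using the standard approximation r_H ≈ a·(m/3M)^(1/3) for the Earth-Moon system (rounded textbook values; this simple form ignores orbital eccentricity):

```python
# Hill radius: the region where a planet's gravity dominates over its star's.
a_earth = 1.496e11   # Earth's semi-major axis, m
m_earth = 5.972e24   # kg
m_sun = 1.989e30     # kg
a_moon = 3.844e8     # Moon's semi-major axis around Earth, m

r_hill = a_earth * (m_earth / (3 * m_sun)) ** (1 / 3)
print(f"Earth's Hill radius: {r_hill / 1e9:.2f} million km")       # ~1.5
print(f"Moon's orbit as a fraction of it: {a_moon / r_hill:.0%}")  # ~26%
```

The simulated circumbinary moons stayed stable only within about 10% of their planet's Hill radius, well inside the Moon's 26% for Earth.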
As astronomers continue searching the heavens for definitive evidence of an exomoon orbiting an exoplanet or CBP, the technology and techniques used in the search will only improve, particularly with the aforementioned Nancy Grace Roman Telescope (commonly referred to as Roman), which is due to launch between Fall 2026 and May 2027. Along with searching for exoplanets using the gravitational microlensing method, Roman will also study cosmic structures, dark energy, general relativity, and space-time curvature, all while stationed in a Sun-Earth L2 orbit, which lies beyond Earth on the side opposite the Sun.
How many exomoons orbiting circumbinary planets will researchers find in the coming years and decades? Only time will tell, and this is why we science!
As always, keep doing science & keep looking up!
The post Could Planets Orbiting Two Stars Have Moons? appeared first on Universe Today.
What was the Milky Way like billions of years ago? One way we can find out is by looking at the most distant galaxies in the observable Universe. Seeing those far galaxies is one goal of the James Webb Space Telescope. It has revealed some surprising facts about early galaxies, and now it is starting to reveal the story of our own.
Most of the galaxies Webb has observed so far have been larger than expected, which led to some speculation that perhaps the Big Bang was wrong, which isn’t the case. The bias toward large galaxies is partly because there are some surprisingly large ones in the early cosmos, but also because smaller galaxies are more difficult to see. But a chance alignment with a galaxy cluster has allowed us to see one small early galaxy that looks quite similar to what the Milky Way may have looked like at that age.
The galaxy has been nicknamed Firefly Sparkle, and we see it from a time when the Universe was just 600 million years old. Its light traveled for more than 13 billion years to reach us and would have been too dim even for Webb to see were it not for a trick of light. Since Firefly Sparkle is behind a large cluster of galaxies, its light is gravitationally lensed. Just as a glass lens can make an object appear larger and brighter than it actually is, so can a gravitational lens. In this case, the foreground galaxy cluster magnified the light of Firefly Sparkle making it bright enough for Webb to see.
Firefly Sparkle compared to the hypothetical evolution of the Milky Way. Credit: Mowla, et al
Gravitational lensing also highly distorts our view of a distant galaxy, so astronomers have to trace the paths of light to reconstruct the galaxy's true shape. Normally, that would be a problem, but in this case, the distortion was a surprise blessing. Rather than appearing as a single fuzzy blob, Firefly Sparkle appears as a string of glowing jewels. When viewed in the infrared, it gives us a kind of exploded view of the galaxy. Thanks to gravitational lensing, the research team was able to show that Firefly Sparkle is in the early stages of becoming a true galaxy. They found clumps of active star-forming regions, and that these regions are bathed in diffuse light from more mature stars. From the spectra of this galaxy, the team also found that star formation is happening in stages, not all at once. It gives us a rich view of early galaxies.
From the clumps of star-forming regions, the team could also estimate the overall mass of Firefly Sparkle, which is very similar to the hypothetical mass of the Milky Way at that age. The young galaxy even has a couple of companion dwarf galaxies, similar to the Milky Way’s Magellanic Clouds. Overall, this gives us a much better understanding of how our galaxy might have formed.
Reference: Mowla, Lamiya, et al. “Formation of a low-mass galaxy from star clusters in a 600-million-year-old Universe.” Nature 636.8042 (2024): 332-336.
The post Webb Weighs an Early Twin of the Milky Way appeared first on Universe Today.
Neutron stars are so named because, in the simplest of models, they are made of neutrons. They form when the core of a large star collapses and the weight of gravity causes its atoms to collapse. Electrons are squeezed together with protons so that the core becomes a dense sea of neutrons. But we now know that neutron stars aren’t just gravitationally bound neutrons. For one thing, neutrons are composed of quarks, which have their own interactions both within and between neutrons. These interactions are extremely complex, so the details of a neutron star’s interior are something we don’t fully understand.
The bulk properties of neutron matter are best described by the Tolman-Oppenheimer-Volkoff (TOV) equation combined with an equation of state. Based on this, the upper mass limit for a neutron star should be around 2.2 to 2.6 solar masses, which seems to agree with observation. The TOV approach also assumes that the neutrons within the neutron star remain neutrons. In atomic nuclei, you can’t have a sea of free quarks because of the nature of the strong nuclear force, so this seems like a reasonable assumption. But some physicists and astronomers have argued that within the dense heart of a neutron star, quarks might break free to create a quark star. Some have even suggested that quarks within a neutron star might interact so strongly that strange quarks appear, making them strange quark stars.
One way to explore these ideas is to look at pulsars. Since pulsars are rotating neutron stars where their magnetic pole sweeps in our direction, we can measure the rate of rotation by timing the radio pulses from a pulsar. So if a pulsar flashes every three seconds, we know that’s how long it takes for the neutron star to rotate once. Pulsars are how we first learned that neutron stars are, well, neutron stars, because the rate of an object’s rotation tells you the minimum density the object must have.
The shape of a neutron star at different frequencies. Credit: Gärtlein, et al
You can think of it like a playground merry-go-round. If you let a few children climb on, then spin the merry-go-round really fast, you can watch the kids fly off one by one as they lose their grip. This is one of the reasons playground merry-go-rounds are so rare these days. Since stars are held together by gravity, there is an upper limit on how fast a star can rotate. Any faster and gravity would lose its grip and the star would fly apart. So when we measure the rotation of a pulsar, we know it must be below that upper limit, known as the Kepler frequency. Since the surface gravity of a star depends on its density, the rotation frequency tells us the minimum density of the star. When astronomers first discovered pulsars rotating several times a second, they knew the density of the pulsar was greater than a white dwarf, so it had to be a neutron star.
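That argument can be made quantitative: setting the centripetal acceleration at the equator equal to the surface gravity of a uniform sphere gives a minimum mean density of ρ_min = 3π/(G·P²) for a spin period P. A minimal sketch of this idealized Newtonian estimate:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def min_density(period_s):
    """Minimum mean density (kg/m^3) a uniform, self-gravitating sphere
    needs in order to spin with the given period without flying apart."""
    return 3 * math.pi / (G * period_s ** 2)

for freq_hz, label in [(3, "ordinary pulsar (3 Hz)"),
                       (700, "millisecond pulsar (700 Hz)")]:
    print(f"{label}: density > {min_density(1 / freq_hz):.1e} kg/m^3")

# ~1.3e12 kg/m^3 for 3 Hz already exceeds white dwarf densities
# (~1e9-1e10 kg/m^3), while ~6.9e16 kg/m^3 for 700 Hz approaches
# nuclear density: only a neutron star fits.
```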
There are some pulsars that have very high rotation frequencies. The fastest observed pulsars, known as millisecond pulsars, can have frequencies above 700 Hz. It’s pretty astonishing when you think about it: an object with nearly twice the mass of the Sun, only a couple of dozen kilometers across, making hundreds of rotations a second. Millisecond pulsars rotate so quickly that they aren’t even spherical. They bulge out around their equators to become oblate spheroids. This means the density in their polar regions must be much higher than near the equator. This raises the question of whether neutrons in the polar regions might undergo a phase transition into quark matter.
A comparison of mass and Kepler frequencies for neutron stars and hybrid neutron stars. Credit: Gärtlein, et al
To explore this idea, a team looked at various models of neutron stars. They modeled the equation of state for traditional neutron stars and compared them to so-called hybrid stars, where the interior is a mix of neutrons and quark matter. From this, they calculated the Kepler frequency as it relates to the overall mass of the star. They found that while all the currently observed millisecond pulsars can be described by the traditional model, the hybrid model is a better fit for the fastest pulsars. They also calculated that hybrid stars would push the upper limit closer to 1,000 rotations a second. So if we find pulsars in the 800 Hz or higher range, we know they likely contain quark matter in their cores.
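For scale, the Newtonian mass-shedding frequency of a rigid sphere is f_K = (1/2π)·√(GM/R³). A sketch with representative, assumed neutron-star numbers (2 solar masses, 12 km radius); note this simple estimate overshoots, and the full general-relativistic treatments used in studies like this one bring the limit down toward the ~1,000 Hz figure mentioned above:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg

def kepler_frequency(mass_kg, radius_m):
    """Newtonian mass-shedding limit for a rigid sphere (an upper bound;
    relativity and rotational deformation lower the real limit)."""
    return math.sqrt(G * mass_kg / radius_m ** 3) / (2 * math.pi)

# Representative (assumed) neutron-star parameters: 2 solar masses, 12 km.
f_k = kepler_frequency(2.0 * M_SUN, 12e3)
print(f"Newtonian Kepler frequency: {f_k:.0f} Hz")  # ~2000 Hz
```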
Another way to test the hybrid neutron star model would be to find more millisecond pulsars with a wide range of masses. This would allow us to look at how the rotation frequency varies with mass at the upper limit to see if Kepler frequencies agree more strongly with a hybrid or traditional model.
Reference: Gärtlein, Christoph, et al. “Fastest spinning millisecond pulsars: indicators for quark matter in neutron stars?” arXiv preprint arXiv:2412.07758 (2024).
The post Do the Fastest Spinning Pulsars Contain Quark Matter? appeared first on Universe Today.
Space largely seems quite empty! Yet even in the dark voids of the cosmos, ultra-high-energy cosmic rays are streaming through space. These rays carry roughly 10 million times as much energy as the Large Hadron Collider can produce! Their origin is still the subject of scientific debate, but they are thought to come from some of the most energetic events in the universe. A new paper suggests the rays may be linked to magnetic turbulence, coming from regions where magnetic fields get tangled and twisted up.
Cosmic rays are high-energy particles, typically protons and atomic nuclei. They travel at speeds near the speed of light and are thought to come from sources such as the Sun, supernova explosions, and other energetic events across the universe. As the rays travel through space, they bombard Earth, interacting with molecules in the atmosphere and producing secondary particles that rain down. The term “cosmic ray” often creates the impression that they are part of the electromagnetic spectrum; instead, they are streams of charged particles.
Distant past supernovae could be linked by cosmic ray particles to climate change on Earth and changes in biodiversity. Courtesy: Henrik Svensmark, DTU Space.
A cousin of the ordinary cosmic rays are the ultra-high-energy cosmic rays. These are among the most energetic particles in the universe, with energies exceeding 10¹⁸ electron volts, far more than even the energetic particles that escape from the Sun. The origin of these particles is still not clearly understood, but they are thought to originate in highly energetic events like active galactic nuclei, gamma-ray bursts, or the most massive black holes. Just like typical cosmic rays, the ultra-high-energy particles strike molecules in the atmosphere and produce secondary particles. Studying those secondary particles is one way researchers are trying to unravel their nature.
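To put those numbers in context, here is a quick sketch converting 10¹⁸ eV to everyday units and comparing the most energetic cosmic ray on record (an assumed ~3×10²⁰ eV) with the LHC's roughly 14 TeV collisions:

```python
EV_TO_J = 1.602e-19   # joules per electron volt

uhecr_threshold = 1e18   # eV, the usual ultra-high-energy cutoff
record_event = 3e20      # eV, roughly the most energetic cosmic ray detected
lhc_collision = 1.4e13   # eV, roughly the LHC's 14 TeV collision energy

print(f"10^18 eV in joules: {uhecr_threshold * EV_TO_J:.2f} J")   # ~0.16 J
print(f"Record event vs. LHC: {record_event / lhc_collision:.1e}x")
# ~2e7: tens of millions of times the LHC, carried by a single particle
```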
This artist’s visualization of GRB 221009A shows the narrow relativistic jets (emerging from a central black hole) that gave rise to the gamma-ray burst (GRB) and the expanding remains of the original star ejected via the supernova explosion. Credit: Aaron M. Geller / Northwestern / CIERA / IT Research Computing and Data Services
Those earlier theories are reasonable, but a team of researchers has now published findings on the rays' origins in the Astrophysical Journal Letters. The team suggests the rays instead originate in magnetic turbulence: the fluctuation of magnetic fields that often occurs in plasmas. Their research found that as the magnetic fields get tangled up, they rapidly accelerate particles to ever higher energies.
Luca Comisso, an associate research scientist at the Columbia Astrophysics Laboratory, explained that “These findings help solve enduring questions that are of great interest to both astrophysicists and particle physicists about how the cosmic rays get their energy.”
The team ran several simulations demonstrating that acceleration by magnetic turbulence could boost cosmic rays to ultra-high energies. Using measurements from the Pierre Auger Observatory, the team found that the data supported the simulation results. This is perhaps the first successful analysis of its kind for ultra-high-energy cosmic rays.
Source : A New Discovery About the Source of the Vast Energy in Cosmic Rays
The post Another Clue About the Ultra-High Energy Cosmic Rays: Magnetic Turbulence appeared first on Universe Today.