According to the United Nations, the world produces about 430 million metric tons (about 474 million U.S. tons) of plastic annually, two-thirds of which are only used for a short time and quickly become garbage. What’s more, plastics are the most harmful and persistent fraction of marine litter, accounting for at least 85% of total marine waste. This problem is easily recognizable due to the Great Pacific Garbage Patch and the amount of plastic waste that washes up on beaches and shores every year. Unless measures are taken to address this problem, the annual flow of plastic into the ocean could triple by 2040.
One way to address this problem is to improve the global tracking of plastic waste using Earth observation satellites. In a recent study, a team of Australian researchers developed a new method for spotting plastic rubbish on our beaches, which they successfully field-tested on a remote stretch of coastline. This satellite imagery tool distinguishes between sand, water, and plastics based on how they reflect light differently. It can detect plastics on shorelines from an altitude of more than 600 km (~375 mi) – higher than the International Space Station’s (ISS) orbit.
The paper that describes their tool, “Beached Plastic Debris Index; a modern index for detecting plastics on beaches,” was recently published in the Marine Pollution Bulletin. The research team was led by Jenna Guffogg, a researcher at the Royal Melbourne Institute of Technology University (RMIT) and the Faculty of Geo-Information Science and Earth Observation (ITC) at the University of Twente. She was joined by multiple colleagues from both institutions. The study was part of Dr. Guffogg’s joint PhD research with the support of an Australian Government Research Training Program (RTP) scholarship.
Dr Jenna Guffogg said plastic on beaches can have severe impacts on wildlife and their habitats, just as it does in open waters. Credit: BPDI
According to current estimates, humans dump well over 10 million metric tons (11 million U.S. tons) of plastic waste into our oceans annually. Since plastic production continues to increase worldwide, these numbers are projected to rise dramatically. The plastic that ends up on our beaches can severely impact wildlife and marine habitats, just as it does in open waters. If these plastics are not removed, they will inevitably fragment into micro- and nanoplastics, another major environmental hazard. Said Dr. Guffogg in a recent RMIT University press release:
“Plastics can be mistaken for food; larger animals become entangled, and smaller ones, like hermit crabs, become trapped inside items such as plastic containers. Remote island beaches have some of the highest recorded densities of plastics in the world, and we’re also seeing increasing volumes of plastics and derelict fishing gear on the remote shorelines of northern Australia.
“While the impacts of these ocean plastics on the environment, fishing, and tourism are well documented, methods for measuring the exact scale of the issue or targeting clean-up operations, sometimes most needed in remote locations, have been held back by technological limitations.”
Satellite technology is already used to track plastic garbage floating around the world’s oceans. This includes relatively small drifts containing thousands of plastic bottles, bags, and fishing nets, but also gigantic floating trash islands like the Great Pacific Garbage Patch. As of 2018, this garbage patch measured about 1.6 million km2 (620,000 mi2) and contained an estimated 45,000–129,000 metric tons (50,000–142,000 U.S. tons) of plastic. However, the technology used to locate plastic waste in the ocean is largely ineffective at spotting plastic on beaches.
Geospatial scientists have found a way to detect plastic waste on remote beaches, bringing us closer to global monitoring options. Credit: RMIT
Much of the problem is that plastic can be mistaken for patches of sand when viewed from space. The Beached Plastic Debris Index (BPDI) developed by Dr. Guffogg and her colleagues circumvents this by employing a spectral index – a mathematical formula that analyzes patterns of reflected light. The BPDI is specially designed to map plastic debris in coastal areas using high-definition data from WorldView-3, a commercial Earth observation satellite (owned by Maxar Technologies) that has been in operation since 2014.
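The paper defines the BPDI’s exact band combination, which isn’t reproduced here. As a rough illustration of how any such spectral index works, the sketch below computes a generic normalized-difference index over two hypothetical WorldView-3 band rasters and thresholds it into a plastic/no-plastic mask; the band choices, reflectance values, and threshold are placeholders, not the published formula.

```python
import numpy as np

def normalized_difference(band_a, band_b, eps=1e-12):
    """Generic normalized-difference spectral index: (a - b) / (a + b).

    Scores run from -1 to 1; pixels that reflect more strongly in band_a
    than in band_b score high, and vice versa."""
    return (band_a - band_b) / (band_a + band_b + eps)

# Hypothetical surface-reflectance rasters for two satellite bands
# (rows x cols); real data would come from the WorldView-3 imagery.
band_a = np.array([[0.41, 0.28], [0.45, 0.30]])
band_b = np.array([[0.22, 0.27], [0.21, 0.29]])

index = normalized_difference(band_a, band_b)
plastic_mask = index > 0.15  # illustrative threshold, not the BPDI's
print(index)
print(plastic_mask)
```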
Thanks to their efforts, scientists now have an effective way to monitor plastic on beaches, which could assist in clean-up operations. As part of the remote sensing team at RMIT, Dr. Guffogg and her colleagues have developed similar tools for monitoring forests and mapping bushfires from space. To validate the BPDI, the team field-tested it by placing 14 plastic targets on a beach in southern Gippsland, about 200 km (125 mi) southeast of Melbourne. Each target was made of a different type of plastic and measured two square meters (21.5 square feet) – smaller than the satellite’s pixel size of about three square meters.
The resulting images were compared to three other indices, two designed for detecting plastics on land and one for detecting plastics in aquatic settings. The BPDI outperformed all three as the others struggled to differentiate between plastics and sand or misclassified shadows and water as plastic. As study author Dr. Mariela Soto-Berelov explained, this makes the BPDI far more useful for environments where water and plastic-contaminated pixels are likely to coexist.
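That comparison boils down to a pixel-classification benchmark: score each index’s predictions against the known target locations. A minimal sketch of that kind of evaluation, with made-up labels rather than the study’s data:

```python
import numpy as np

# 0 = sand, 1 = plastic, 2 = water/shadow -- hypothetical labels for the
# same pixels, as given by ground truth and by two competing indices.
truth   = np.array([1, 1, 0, 0, 2, 2, 1, 0])
index_a = np.array([1, 1, 0, 0, 2, 2, 1, 0])  # behaves like the BPDI here
index_b = np.array([1, 0, 0, 1, 1, 2, 1, 1])  # confuses water/sand with plastic

for name, pred in [("index A", index_a), ("index B", index_b)]:
    tp = np.sum((pred == 1) & (truth == 1))   # plastic correctly flagged
    fp = np.sum((pred == 1) & (truth != 1))   # sand/water flagged as plastic
    fn = np.sum((pred != 1) & (truth == 1))   # plastic missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    print(f"{name}: precision={precision:.2f} recall={recall:.2f}")
```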
“This is incredibly exciting, as up to now we have not had a tool for detecting plastics in coastal environments from space. The beauty of satellite imagery is that it can capture large and remote areas at regular intervals. Detection is a key step needed for understanding where plastic debris is accumulating and planning clean-up operations, which aligns with several Sustainable Development Goals, such as Protecting Seas and Oceans.”
The next step is to test the BPDI tool in real-life scenarios, which will consist of the team partnering with various organizations dedicated to monitoring and addressing the plastic waste problem.
Further Reading: RMIT, Marine Pollution Bulletin
The post Plastic Waste on our Beaches Now Visible from Space, Says New Study appeared first on Universe Today.
Here’s this week’s comedy/news bit on Bill Maher’s “Real Time” show. His topic is voters who can’t seem to agree on a Presidential candidate, and how they should be voting for Kamala Harris. Maher avers that if Harris loses, it will be because of “progressophobia,” which he calls Steven Pinker’s term for “the liberal fear of ever admitting when things are actually good.”
Maher’s point is that salaries and the economy are “great”, as he says, and that the perception that they’re not is no reason to vote for Trump. The predicted recession didn’t happen. (Note the very salacious – and somewhat tasteless – joke about Trump’s sexual proficiency, followed by a not-bad imitation of Trump himself.) I love the “in this reality, if you can’t get bacon, you’ll die” statement, mocking one of Trump’s recent assertions. One statement I don’t get, though, is this one: “I don’t know if Kamala worked at McDonald’s, but she’s not Flo from Progressive.” Help me out here.
It’s basically an endorsement of Harris: although she’s not perfect, and is mostly campaigning by dissing Trump rather than advancing her own plans, Maher finishes by saying, “‘I’m not Trump’ is still a really great reason.”
The Stanford health economist turned right-wing pandemic star could help take down academia and scientific institutions in a second Trump administration.
Space-based telescopes are remarkable. Their view isn’t obscured by the weather in our atmosphere, and so they can capture incredibly detailed images of the heavens. Unfortunately, they are quite limited in mirror size. As amazing as the James Webb Space Telescope is, its primary mirror is only 6.5 meters in diameter. Even then, the mirror had to have foldable components to fit into the launch rocket. In contrast, the Extremely Large Telescope currently under construction in northern Chile will have a mirror more than 39 meters across. If only we could launch such a large mirror into space! A new study looks at how that might be done.
As the study points out, when it comes to telescope mirrors, all you really need is a reflective surface. It doesn’t need to be coated onto a thick piece of glass, nor does it need a big, rigid support structure. All that is just needed to hold the shape of the mirror against its own weight. As far as starlight is concerned, the shiny surface is all that matters. So why not just use a thin sheet of reflective material? You could just roll it up and put it in your launch vehicle. We could, for example, easily launch a 40-meter roll of aluminum foil into space.
Of course, things aren’t quite that simple. You would still need to unroll your membrane telescope back into its proper shape. You would also need a detector to focus the image upon, and you’d need a way to keep that detector in the correct alignment with the sheet mirror. In principle, you could do that with a thin support structure, which wouldn’t add excessive bulk to your telescope. But even if we assume all of those engineering problems could be solved, you’d still have a problem. Even in the vacuum of space, the shape of such a thin mirror would deform over time. Solving this problem is the main focus of this new paper.
Once launched into space and unfurled, the membrane mirror wouldn’t deform significantly. But to capture sharp images, the mirror would have to hold its shape to within roughly a wavelength of visible light. When Hubble was launched, its mirror shape was off by less than the thickness of a human hair, and it took corrective optics and an entire shuttle mission to fix. Any shifts on that scale would render our membrane telescope useless. So the authors look to a well-used trick of astronomers known as adaptive optics.
How radiative adaptive optics might work. Credit: Rabien, et al.
Adaptive optics is used on large ground-based telescopes as a way to correct for atmospheric distortion. Actuators behind the mirror distort the mirror’s shape in real time to counteract the twinkles of the atmosphere. Essentially, it makes the shape of the mirror imperfect to account for our imperfect view of the sky. A similar trick could be used for a membrane telescope, but if we had to launch a complex actuator system for the mirror, we might as well go back to launching rigid telescopes. But what if we simply use laser projection instead?
By shining a laser projection onto the mirror, we could alter its shape through radiative recoil. Since the mirror is simply a thin membrane, the recoil would be significant enough to create optical corrections, and the projection could be modified in real time to maintain the mirror’s focus. The authors call this technique radiative adaptive optics, and through a series of lab experiments they have demonstrated that it could work.
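The numbers involved are tiny but workable. For a perfectly reflecting surface at normal incidence, light of power P imparts a force F = 2P/c. A back-of-the-envelope sketch of the scale of the effect (not the authors’ actuation model, which couples this recoil to the membrane’s elastic response):

```python
C = 299_792_458.0  # speed of light, m/s

def photon_recoil_force(laser_power_w, reflectivity=1.0):
    """Force (N) from light hitting a surface at normal incidence.

    A perfect reflector (reflectivity=1) feels 2P/c: the incoming photons'
    momentum flux plus the equal-and-opposite kick of the reflected beam."""
    return (1.0 + reflectivity) * laser_power_w / C

# Even a 10 W projected pattern yields only tens of nanonewtons...
force = photon_recoil_force(10.0)
print(f"{force:.2e} N")  # ~6.67e-08 N
# ...but on a thin, nearly massless membrane, applied continuously and
# steered in a closed loop, that is enough to nudge the surface figure.
```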
Doing this in deep space is much more complicated than doing it in the lab, but the work shows the approach is worth exploring. Perhaps in the coming decades we might build an entire array of such telescopes, which would allow us to see details in the distant heavens we can now only imagine.
Reference: Rabien, S., et al. “Membrane space telescope: active surface control with radiative adaptive optics.” Space Telescopes and Instrumentation 2024: Optical, Infrared, and Millimeter Wave. Vol. 13092. SPIE, 2024.
The post Future Space Telescopes Could be Made From Thin Membranes, Unrolled in Space to Enormous Size appeared first on Universe Today.
I’m actually surprised that the article below was published in the Proceedings of the National Academy of Sciences (PNAS), one of the more high-quality science journals, just a tad below Science and Nature in prestige. It has had a reputation for being “progressive” (e.g., woke), one that I discussed last year when Steve Pinker had an email exchange with National Academy of Sciences (NAS) President Marcia McNutt.
After McNutt, along with the Presidents of the National Academy of Medicine and of the National Academy of Engineering, issued a pro-affirmative-action and pro-DEI statement on June 30, 2023, Pinker wrote McNutt pointing out that such statements are incompatible with the NAS’s mission. His email (reproduced at the link above) contained this bit:
I would like to express my disquiet at the recent NAS Statement on Affirmative Action. The desirability of racial preferences in university admissions is not a scientific issue but a political and moral one. It involves tradeoffs such as maintaining the proportion of African Americans in elite universities at the expense of fairness to qualified applicants who are rejected because of their race, including other racial minorities such as Asian Americans. Moreover it is a highly politicized policy, almost exclusively associated with the left, and one that majorities of Americans of all races oppose.
It’s not clear to me how endorsing one side of a politically polarizing, nonscientific issue is compatible with the Academy’s stated mission “providing independent, objective advice to the nation on matters related to science and technology”.
The problem is worse than being incompatible with the Academy’s mission; it could substantially harm the Academy’s goal of promoting politicians’ and the public’s acceptance of science. Extensive research has shown that rejection of the scientific consensus on evolution, anthropogenic climate change, and other scientific topics is uncorrelated with scientific literacy but predictable from political orientation: the farther to the right, the greater the rejection of evolution and climate change.
McNutt wrote back, but declined to have her answer reproduced on this site. Nevertheless, from Pinker’s response to her response, you can gather that she defended the stand of the original three-President statement, apparently written to criticize the Supreme Court’s decision that college admissions could not be based on race.
Steve said this, among other things (again, see the whole of his email at the site):
Even more concerning, the statement could have been lifted out of the pages of any recent left-wing opinion magazine, since it reiterates the current conviction that racial inequities are primarily due to “past and current racial discrimination and structural, systemic, and institutional racism in education” and to “individual bias and discrimination.” Entirely unmentioned are other potential causes of racial discrepancies, including poverty, school quality, family structure, and cultural norms. It is surprising to see a scientific organization attribute a complex sociological outcome to a single cause.
Finally, the statement, and your letter, equate diversity of ideas with diversity of race. The advantages of intellectual diversity are obvious (though I have not seen any statements from the Academy addressing the shrinking political diversity among science faculty, nor the increasing campaigns that punish or cancel scientists who express politically unpopular views). The assumption that racial diversity is the same as intellectual diversity was exactly what the Supreme Court decision singled out and struck down, since it carries with it the racist assumptions that black students think alike, and that their role in universities is to present their race-specific views to their classmates.
Dr. McNutt replied, but again did not give permission for her letter to be reproduced.
I have to give McNutt credit, then, for allowing the two-page letter to be published, as it contains a pretty explicit criticism of her – especially of a later piece by McNutt and Crow, “Enhancing trust in science and democracy in an age of information,” published in Issues in Science and Technology. McNutt and Crow bemoan the detachment of science from society and society’s ethical values, and make this statement, which is debatable:
Therefore, we believe the scientific community must more fully embrace its vital role in producing and disseminating knowledge in democratic societies. In Science in a Democratic Society, philosopher Philip Kitcher reminds us that “science should be shaped to promote democratic ideals.” To produce outcomes that advance the public good, scientists must also assess the moral bases of their pursuits. Although the United States has implemented the democratically driven, publicly engaged, scientific culture that Vannevar Bush outlined in Science, the Endless Frontier in 1945, Kitcher’s moral message remains relevant to both conducting science and communicating the results to the public, which pays for much of the enterprise of scientific discovery and technological innovation. It’s on scientists to articulate the moral and public values of the knowledge that they produce in ways that can be understood by citizens and decisionmakers.
While the good part of McNutt and Crow’s message is their call for scientists to explain the scientific results of their work to the public, it’s a different matter to ask scientists to “produce outcomes that advance the public good.” That can be an explicit aim of science, as in producing golden rice or Covid vaccines, but many scientists doing “pure” science are motivated by simple curiosity. That curiosity, too, can have salubrious social outcomes, but most of the time it just enriches our knowledge of the universe.
Further, it seems excessive to ask scientists to also “articulate the moral and public values of the knowledge that they produce.” Are scientists experts in morality? And what are “public values”—the latest ideology of the times? One might think from this piece, and the correspondence above, that McNutt does favor the politicization of science, but along the lines of “progressive” politics.
Thus I was pleased to see this letter, by evolutionary molecular biologist Ford Doolittle, appear as an opinion piece in the latest PNAS. Here he takes issue not only with the politicization of science, but explicitly with McNutt and Crow’s article. You can read the letter by clicking on the screenshot below, or read the pdf here:
But Doolittle begins with a thesis that I find dubious: that “group selection”—the differential reproduction of genetically different human groups—has led to our drive to understand nature—indeed, to selection on many species to “understand” their environment. But, says Doolittle, group selection has not led to the drive to integrate science and social values. (Other species don’t really have “social values” anyway). Bolding is mine:
Most humanists and scientists now agree that science is special in its relationship to the real world, more special than are other human activities—religion and politics, for instance. But philosophers of science keep arguing about why that should be. There is, I believe, a good evolutionary explanation of why—one that incorporates what is often called group selection (1). But group selection will only move humans closer to the truth if researchers and others take care to ensure that social values don’t distract or mislead.
So, my plea is that scientists and others ensure that science remains independent from social values. Social values are constraints—limitations on the evolutionary process. I worry that mixing science and social values hampers scientific progress.
and this from Doolittle’s piece:
My evolutionary argument starts with the contention that there is a selective advantage at all levels to having a better map of reality. Having a better understanding of the world promotes fitness. Living things at all levels (genes, cells, multicellular organisms, species, multispecies communities, tribes, nations of humans, and even broader cultural frameworks) that have such a better map of the world leave more progeny or last longer than living things that don’t, all else being equal. This has been true from the beginning of life.
. . . And, of course, human groups—tribes, nations, and broader cultural collectives—that have better knowledge of the natural and cultural world have a better chance, all else being equal, than those that have less adequate knowledge.
This is a bit mixed up, for evolutionary group selection is a genetic phenomenon, not a cultural one, and in this case would argue that some groups of humans genetically endowed with better knowledge of the environment would survive and reproduce better than less-informed groups. And, over time, this would spread the genes for acquiring more and more accurate knowledge about the universe.
The problem, as always with group selection, is that, because it depends on the differential survival and reproduction of groups, it is much slower than selection acting on individuals harboring genes producing an ambition to know. Those genes would spread within groups, and there is no bar to the existence of individuals carrying them. (I think Doolittle’s misconception here is that only groups can differ in their urge to understand.) Group selection is usually invoked to explain the evolution of traits that are advantageous to groups but not individuals, like pure altruism toward nonrelatives. But over time, group selection has fallen out of favor (see this eloquent critique by Pinker on Edge: “The false allure of group selection”).
Doolittle notes that occasionally Darwin was a group selectionist, but in fact A. R. Wallace, in his first exposition of natural selection, published simultaneously with Darwin’s, was even more of one!
But I digress; natural selection acting on genes (Dawkins’s “replicators”) and the bodies bearing them (the “vehicles”) is sufficient to produce the drive to know. Still, in the end it hardly matters. Humans are curious creatures, and there’s doubtlessly a big effect of evolution on that trait.
And it doesn’t even matter whether our drive to know is evolutionary rather than purely social if one argues, as Doolittle does, that mixing science and politics is bad for science. Here’s Doolittle’s peroration about why mixing science and ideology is bad:
But outside certain limits, society is not ethically uniform, and important values are not shared. We are so politically polarized now that there is an ever-present danger of “weaponizing” the pursuit of knowledge, and thus of the results of earnest inquiry being dismissed by those whose social values disagree with those of scientists. We embrace political polarization to the detriment of both scientists and the scientific enterprise.
Science is based on the assumption that our collective understanding of the world, though always imperfect, generally improves over time and that there is no trade-off between what we think we should do and the scientific truth. As the 18th-century philosopher David Hume noted, you can’t derive “ought” from “is.” The consilience of scientists’ personal social values (which surely have changed over time) and modern, fundable science is precisely why I see current trends in politicization as dangerous to the scientific enterprise—a worry underscored when these trends are viewed through an evolutionary perspective going from genes to individual cells to tribes to broader cultural frameworks.
We scientists should be even more careful not to allow what we think is “right” (what we ought to do) to influence what we think is “true” of the world. What we think is right changes with time and context, but what we think is true should be our eternal goal.
Doolittle notes that “it is inevitable that science which does not agree with some aspect of society’s current value system has little chance of getting funded,” but that isn’t 100% true. Sure, if you want to show that there is “structural racism” in an academic field, then your grant may well get funded, but it could also get funded if you’re studying the systematics of ants, or string theory, or the migration distance of Drosophila. Those kinds of studies get funded based on merit, not on “society’s current value system”—unless, that is, you define “value system” tautologically as “what people want to fund”.
In the middle of the article, though, he’s careful not to go too hard after McNutt. But, again to her credit, she let this be published:
As an ethical constraint, the sentiments of Marcia McNutt, the president of the National Academy of Sciences, and her coauthor Michael Crow, president of Arizona State University, might serve as a contemporary example (10). They write that science must “produce outcomes that advance the public good,” citing the Columbia University philosopher Philip Kitcher to remind us that “science should be shaped to promote democratic ideals.” Science, in other words, should be constrained by human social values. Perhaps they meant by this that science functions best (that is, provides better understandings of the world) in democratic societies, rather than arguing that democracy is best for our species. The former is an epistemic value, but the latter is a social value and thus an unnecessary constraint.
McNutt and Crow’s social values are mine, too, and those of many scientists, I hasten to add. . . .
As I said, if you want to stretch “ethical values” to become “the idea of what sorts of questions need answering,” then of course the science that people do, and especially the science that gets funded, will generally comport with social values. But McNutt, Crow and Doolittle are talking, I think, about prioritizing science that matches our current ideology (i.e., justifying DEI initiatives, documenting inequities, or trying to show that indigenous “ways of knowing” are coequal to modern science). Alternatively, McNutt and Crow might urge us not to do forms of science carrying any possibility that they could have bad social consequences (the classic example is studying group differences in IQ).
But it would have behooved Doolittle to give more examples of the kind of science that people are objecting to now. I’ve written a lot about the ways that ideology is intruding in science in detrimental ways: two examples are my paper with Luana Maroja on ‘The ideological subversion of biology” and also the Abbott et al. paper “In defense of merit in science.”
I see this has been a rather rambling post, involving group selection, the debasing of science by politics, and debates in the scientific literature. So be it, and again I’m pleased that NAS President McNutt has allowed an op-ed to be posted in “her” journal that explicitly takes her to task. That is in the finest tradition of allowing open discourse in the literature.
h/t: Anna, Luana
I have about three wildlife-photo submissions in reserve, so we’re going to run out soon. If you have some good photos (not blurry or small!), please send them to me. Thanks!
Today is Sunday, and we’re resuming John Avise’s series on the birds of Hawaii; this is the last installment. John’s captions are indented, and you can enlarge his photos by clicking on them.
Birds in Hawaii, Part 4
This week we conclude our 4-part photographic journey into native and introduced bird species that you might encounter on a natural-history tour of the Hawaiian Islands.
Red-vented Bulbul (Pycnonotus cafer) (native to the Indian subcontinent):
Red-whiskered Bulbul (Pycnonotus jocosus) (native to Asia):
Salmon-crested Cockatoo (Cacatua moluccensis) (native to Indonesia):
Spotted Dove (Spilopelia chinensis) (native to the Indian subcontinent and southeast Asia):
Wedge-tailed Shearwater (Ardenna pacifica) (widespread in tropical Pacific and Indian oceans):
Western Meadowlark (Sturnella neglecta) (native to North America):
White Tern (Gygis alba) (widespread in the world’s tropical oceans):
White Tern flying:
White-rumped Shama (Copsychus malabaricus), male in bird-bander’s hand (native to India and southeast Asia):
White-rumped Shama female:
Yellow-fronted Canary (Serinus mozambicus) (native to Africa):
Zebra Dove (Geopelia striata) (native to southeast Asia):
Here’s the beginning of Wikipedia’s entry for Kathleen Hagerty, the Provost of Northwestern University here in Evanston, Illinois. It’s a screenshot, and I’ve marked it:
I don’t find any mention of “antisemite” in the entry’s edit history, so this description must have been there since the entry was created in August 2020.
Now why would this description of Hagerty be added to her entry? One thing I recall is that Northwestern was one of the few universities to actually bargain and strike a deal with the pro-Palestinian protestors at her school. I find this from The Minnesota Lawyer (bolding is mine):
The Wisconsin Institute for Law & Liberty (WILL) has filed a federal Title VI complaint against Northwestern University on behalf of the Young America’s Foundation, which has an active chapter on the university’s campus.
The complaint documents the university’s plan to offer nearly $1.9 million in scholarship funds, faculty positions, and student-organization space to Palestinian students and staff. As a recipient of federal funds, Northwestern University is subject to Title VI of the Civil Rights Act of 1964, which prohibits discrimination “on the grounds of race, color, or national origin,” WILL said.
Northwestern University officials have struck a deal with pro-Palestinian protesters who set up an encampment on campus. In exchange for removal of the encampment, Northwestern agreed to provide a facility for Muslim student activities and fundraise for scholarships going to Palestinian undergraduates.
According to WILL attorney Skylar Croy, that deal violates federal law.
“You just can’t go get scholarships based on ethnicity because they rioted and demanded it,” Croy said.
According to WILL, on April 29, 2024, University officials entered into an agreement with anti-Israel demonstrators occupying a space on campus called Deering Meadow. The officials involved in the agreement are University President Michael Schill, Provost Kathleen Hagerty, and Vice President Susan Davis.
Pursuant to the terms of the agreement, the University promised to provide the “full cost of attendance for five Palestinian undergraduates to attend Northwestern for the duration of their undergraduate careers.”
The agreement provides “funding two faculty per year for two years,” with the provision that these faculty will be “Palestinian faculty.”
Additionally, Northwestern University agreed to “provide immediate temporary space for MENA/Muslim students.” MENA is an acronym for “Middle Eastern and North African” individuals.
According to WILL, as a recipient of federal funds, the University is subject to Title VI of the Civil Rights Act of 1964, which prohibits discrimination “on the grounds of race, color, or national origin.” By providing nearly $1.9 million in scholarships, two faculty positions, and “immediate temporary space” based on an individual’s status as Palestinian or MENA, the University is intentionally discriminating against non-Palestinian or non-MENA individuals on the grounds of race, color, or national origin.
WILL noted, as the United States Supreme Court recently held in a case applying Title VI, race and national origin may never operate as a “negative” or a “stereotype.” Students for Fair Admissions, Inc. v. President & Fellows of Harvard Coll., 600 U.S. 181, 218 (2023). Discrimination in favor of Palestinians or MENA individuals is, in turn, discrimination against individuals not within those categories and is therefore illegal under federal law.
Did some pro-Israel editor stick “antisemite” in there somehow to reflect this bargain? If so, it’s not in the history of the entry. I don’t find the word in the entry for Northwestern President Michael Schill, and VP Susan Davis doesn’t have a Wikipedia entry.
But I expect that, now that I’ve called attention to it, this noun will be gone by the end of the day. Still, this deal is almost certainly illegal, but that doesn’t warrant such a pejorative.
h/t: Peggy
Voyager 1 was launched waaaaaay back in 1977. I would have been 4 years old then! It’s an incredible achievement that technology built THAT long ago is still working. Yet here we are in 2024, and Voyager 1 and 2 are getting older. Earlier this week, NASA had to turn off one of the radio transmitters on Voyager 1, forcing communication to rely upon the low-power radio. Alas, technology that is nearly 50 years old does sometimes glitch, and this one was the result of a command to turn on a heater: Voyager 1 tripped into fault protection mode and switched communications! Oops.
Voyager 1 is a NASA space probe launched on September 5, 1977, as part of the Voyager program to study the outer planets and beyond. Initially, Voyager 1’s mission focused on flybys of Jupiter and Saturn, capturing incredible images before traveling outward. In 2012, it became the first human-made object to enter interstellar space, crossing the heliopause – the boundary between the influence of the Sun and interstellar space. It now continues to send data back to Earth from over 22 billion km away, helping scientists learn about the interstellar medium. With a “Golden Record” onboard containing sounds and images of life on Earth, Voyager 1 also serves as a time capsule, intended to tell the story of our world to any alien civilizations that may encounter it.
The Ringed Planet Saturn
Just a few days ago, on 24 October, NASA had to reconnect to Voyager 1 on its outward journey because one of its radio transmitters had been turned off! Alien intervention perhaps? Exciting though that would be, alas not.
The transmitter seems to have been turned off by one of the spacecraft’s fault protection systems. Any time there is an issue with onboard systems, the computer will flip into protection mode to prevent further damage. If the spacecraft draws too much power from the batteries, the same system will turn off less critical systems to conserve power. When the fault protection system kicks in, it’s then the job of engineers on the ground to find and fix the fault.
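In outline, this kind of fault protection is a watchdog that sheds the lowest-priority loads first whenever the power budget is exceeded. Here is a toy sketch of that logic; the subsystem names, power draws, and priorities are all hypothetical, not Voyager’s actual flight software:

```python
# Toy load-shedding logic with made-up subsystems and power draws.
AVAILABLE_W = 240.0

# (name, draw in watts, priority -- higher numbers get shed first).
# The heater outranks the radios here, so switching it on forces the
# high-power transmitter off -- loosely mirroring the Voyager 1 episode.
loads = [
    ("flight computer",     60.0, 0),
    ("instrument heater",   70.0, 1),
    ("backup transmitter",  40.0, 2),
    ("primary transmitter", 90.0, 3),
]

def shed_loads(loads, budget_w):
    """Keep the most critical loads that fit the budget; shed the rest."""
    total, kept = 0.0, []
    for name, draw, _prio in sorted(loads, key=lambda l: l[2]):
        if total + draw <= budget_w:
            total += draw
            kept.append(name)
        else:
            print(f"fault protection: shedding {name} ({draw:.0f} W)")
    return kept

print("still on:", shed_loads(loads, AVAILABLE_W))
# -> sheds the 90 W primary transmitter to stay under 240 W
```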
Artist rendition of Voyager 1 entering interstellar space. (Credit: NASA/JPL-Caltech)
There are challenges here though. Due to the immense distance to Voyager 1, now about 24 billion km away, any communication to or from the spacecraft takes almost 23 hours to arrive. A request for data, for example, means a round trip of 46 hours between sending the request and receiving the data! Undaunted, the team sent commands to Voyager 1 on 16 October to turn on a heater, but, while the probe should have had enough power, the command triggered the system to turn off a radio transmitter to conserve power. This was discovered on 18 October when the Deep Space Network was no longer able to detect the usual ping from the spacecraft.
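The delay is simple distance-over-speed arithmetic; a quick check of the figures quoted above (the “almost 23 hours” corresponds to a slightly larger distance than the round 24 billion km used here):

```python
C_KM_S = 299_792.458  # speed of light in km/s
DISTANCE_KM = 24e9    # approximate Earth-to-Voyager 1 distance

one_way_h = DISTANCE_KM / C_KM_S / 3600
print(f"one-way light time: {one_way_h:.1f} h")      # ~22.2 h
print(f"command + reply:    {2 * one_way_h:.1f} h")  # ~44.5 h
```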
The engineers correctly identified the likely cause of the problem and found Voyager pinging away on a different frequency using the alternate radio transmitter – one that hadn’t been used since the early 1980s! With the fault identified, the team did not switch immediately back to the original transmitter in case the fault triggered again. Instead, they are now working to understand the fault before switching back.
Until then, Voyager 1 will continue to communicate with Earth using the lower-power transmitter as it continues its exploration of interstellar space.
Source : After Pause, NASA’s Voyager 1 Communicating With Mission Team
The post Voyager 1 is Forced to Rely on its Low Power Radio appeared first on Universe Today.
Perhaps the greatest tool astronomers have is the ability to look backward in time. Since starlight takes time to reach us, astronomers can observe the history of the cosmos by capturing the light of distant galaxies. This is why observatories such as the James Webb Space Telescope (JWST) are so useful. With it, we can study in detail how galaxies formed and evolved. We are now at the point where our observations allow us to confirm long-standing galactic models, as a recent study shows.
This particular model concerns how galaxies become chemically enriched. In the early universe, there was mostly just hydrogen and helium, so the first stars were massive creatures with no planets. They died quickly and spewed heavier elements, from which more complex stars and planets could form. Each generation adds more elements to the mix. But as a galaxy nurtures a menagerie of stars from blue supergiants to red dwarfs, which stars play the greatest role in chemical enrichment?
One model argues that it is the most massive stars. This makes sense because giant stars explode as supernovae when they die. They toss their enriched outer layers deep into space, allowing the material to mix within great molecular clouds from which new stars can form. But about 20 years ago, another model argued that smaller, more sunlike stars played a greater role.
The Cat’s Eye nebula is a remnant of an AGB star. Credit: ESA, NASA, HEIC and the Hubble Heritage Team, STScI/AURA
Stars like the Sun don’t die in powerful explosions. Billions of years from now, the Sun will swell into a red giant star. In a desperate attempt to keep burning, the core of a sun-like star heats up significantly to fuse helium, and its diffuse outer layers swell. On the Hertzsprung-Russell diagram, they are known as asymptotic giant branch (AGB) stars. While each AGB star might toss less material into interstellar space, they are far more common than giant stars. So, the model argues, AGB stars play a greater role in the enrichment of galaxies.
Both models have their strengths, but demonstrating that the AGB model beats the giant star model is difficult. It’s easy to observe supernovae in galaxies billions of light-years away. Not so much with AGB stars. Thanks to the JWST, we can now test the AGB model.
Using JWST, the study looked at the spectra of three young galaxies. Since Webb’s NIRSpec instrument can capture high-resolution infrared spectra, the team could see not just the presence of certain elements but their relative abundance. They found a strong presence of carbon and oxygen bands, which is common for AGB remnants, but also the presence of rarer elements such as vanadium and zirconium. Taken altogether, this points to a type of AGB star known as thermally pulsing AGBs, or TP-AGBs.
Many red giant stars enter a pulsing phase at the end of their lives. The hot core swells the outer layers, things cool down, gravity compresses the star again, the core reheats, and the whole process starts over. This study indicates that TP-AGBs are particularly efficient at enriching galaxies, thus confirming the 20-year-old model.
Reference: Lu, Shiying, et al. “Strong spectral features from asymptotic giant branch stars in distant quiescent galaxies.” Nature Astronomy (2024): 1-13.
The post Webb Confirms a Longstanding Galaxy Model appeared first on Universe Today.
Neutron stars are extraordinarily dense objects, the densest in the Universe. They pack a lot of matter into a small space and can squeeze several solar masses into a radius of 20 km. When two neutron stars collide, they release an enormous amount of energy as a kilonova.
That energy tears atoms apart into a plasma of detached electrons and atomic nuclei, reminiscent of the early Universe after the Big Bang.
Even though kilonovae are extraordinarily energetic, they’re difficult to observe and study because they’re transient and fade quickly. The first conclusive kilonova observation was in 2017, and the event is named AT2017gfo. AT stands for Astronomical Transient, followed by the year it was observed, followed by a sequence of letters assigned to uniquely identify the event.
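That naming scheme is regular enough to parse mechanically. A small sketch of the convention described above (the letter suffix grows beyond three characters as discoveries within a year accumulate, and spectroscopically confirmed supernovae carry an “SN” prefix instead):

```python
import re

def parse_transient(name):
    """Split a transient designation like 'AT2017gfo' into its parts."""
    m = re.fullmatch(r"(AT|SN)(\d{4})([a-zA-Z]+)", name)
    if m is None:
        raise ValueError(f"unrecognised designation: {name}")
    prefix, year, suffix = m.groups()
    return {"prefix": prefix, "year": int(year), "sequence": suffix}

print(parse_transient("AT2017gfo"))
# -> {'prefix': 'AT', 'year': 2017, 'sequence': 'gfo'}
```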
New research into AT2017gfo has uncovered more details of this energetic event. The research is “Emergence hour-by-hour of r-process features in the kilonova AT2017gfo.” It’s published in the journal Astronomy and Astrophysics, and the lead author is Albert Sneppen from the Cosmic Dawn Center (DAWN) and the Niels Bohr Institute, both in Copenhagen, Denmark.
A kilonova explosion creates a spherical ball of plasma that expands outward, similar to the conditions shortly after the Big Bang. Plasma is made up of ions and electrons, and the intense heat prevents them from combining into atoms.
However, as the plasma cools, atoms form via nucleosynthesis, and scientists are intensely interested in this process. There are three types of nucleosynthesis: slow neutron capture (s-process), proton process (p-process), and rapid neutron capture (r-process). Kilonovae form atoms through the r-process and are known for forming heavier elements, including gold, platinum, and uranium. Some of the atoms they form are radioactive and begin to decay immediately, and this releases the energy that makes a kilonova so luminous.
This study represents the first time astronomers have watched atoms being created in a kilonova.
“For the first time we see the creation of atoms.”
Rasmus Damgaard, co-author, PhD student at Cosmic DAWN Center
Things happen rapidly in a kilonova, and no single telescope on Earth can watch as it plays out because the Earth’s rotation removes it from view.
“This astrophysical explosion develops dramatically hour by hour, so no single telescope can follow its entire story. The viewing angle of the individual telescopes to the event is blocked by the rotation of the Earth,” explained lead author Sneppen.
This research is based on multiple ground telescopes that each took their turn watching the kilonova as Earth rotated. The Hubble also contributed observations from its perch in low-Earth orbit.
“But by combining the existing measurements from Australia, South Africa and The Hubble Space Telescope, we can follow its development in great detail,” Sneppen said. “We show that the whole shows more than the sum of the individual sets of data.”
As the plasma cools, atoms start to form. This is the same thing that happened in the Universe after the Big Bang: as the Universe expanded and cooled and atoms formed, light was able to travel freely because there were no free electrons to stop it.
The research is based on spectra collected from 0.5 to 9.4 days after the merger. The observations focused on optical and near-infrared (NIR) wavelengths because, in the first few days after the merger, the ejecta is opaque to shorter wavelengths like X-rays and UV. Optical and NIR are like open windows into the ejecta. They can observe the rich spectra of newly-formed elements, which are a critical part of kilonovae.
This figure from the research shows how different telescopes contributed to the observations of AT2017gfo. Image Credit: Sneppen et al. 2024.
The P Cygni spectral line is also important in this research. It indicates that a star – or in this case, a kilonova – has an expanding shell of gas around it. The profile combines an emission line and an absorption line and has powerful diagnostic capabilities: together, the two components reveal velocity, density, temperature, ionization, and direction of flow.
Strontium plays a strong role in this research and in kilonovae. It produces strong emission and absorption features at optical/NIR wavelengths, which also reveal the presence of other newly formed elements. These spectral lines do more than reveal the presence of different elements: along with the P Cygni profile, they’re used to determine the velocity of the ejecta, the velocity structures within it, and the temperature conditions and ionization states.
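Those velocity estimates come from how far the P Cygni absorption trough is blueshifted from the line’s rest wavelength. A first-order, non-relativistic sketch with made-up numbers (real kilonova ejecta moves at an appreciable fraction of c, so a published analysis would use the relativistic formula):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def expansion_velocity(lambda_obs_nm, lambda_rest_nm):
    """Classical Doppler estimate from a blueshifted absorption minimum."""
    return C_KM_S * (lambda_rest_nm - lambda_obs_nm) / lambda_rest_nm

# Hypothetical values: a feature with a ~1000 nm rest wavelength whose
# absorption trough is observed near 800 nm implies ~0.2c ejecta.
v = expansion_velocity(800.0, 1000.0)
print(f"{v:,.0f} km/s (~{v / C_KM_S:.2f} c)")  # ~59,959 km/s, 0.20 c
```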
The spectra from AT2017gfo are anything but straightforward. However, in all that light data, the researchers say they’ve identified elements being synthesized, including tellurium, lanthanum, cesium, and yttrium.
“We can now see the moment where atomic nuclei and electrons are uniting in the afterglow. For the first time we see the creation of atoms, we can measure the temperature of the matter and see the micro physics in this remote explosion. It is like admiring the cosmic background radiation surrounding us from all sides, but here, we get to see everything from the outside. We see before, during and after the moment of birth of the atoms,” says Rasmus Damgaard, PhD student at Cosmic DAWN Center and co-author of the study.
“The matter expands so fast and gains in size so rapidly, to the extent where it takes hours for the light to travel across the explosion. This is why, just by observing the remote end of the fireball, we can see further back in the history of the explosion,” said Kasper Heintz, co-author and assistant professor at the Niels Bohr Institute.
The kilonova produced about 16,000 Earth masses of heavy elements, including 10 Earth masses of the elements gold and platinum.
Neutron star mergers also create black holes, and AT2017gfo created the smallest one ever observed, though there’s some doubt. The gravitational wave GW170817 is associated with the kilonova and was detected by LIGO in August 2017. It was the first time a GW event was seen in conjunction with its electromagnetic counterpart. Taken together, the GW data and other observations suggest that a black hole was created, but overall, there’s uncertainty. Some researchers think a magnetar may be involved.
This artist’s illustration shows a neutron star collision that, in addition to the radioactive fire cloud, leaves behind a black hole and jets of fast-moving material from its poles. Illustration: O.S. Salafia, G. Ghirlanda, CXC/NASA, GSFC, B. Williams et al.
Kilonovae are complex objects. They’re like mini-laboratories where scientists can study extreme nuclear physics. Kilonovae are important contributors of heavy elements in the Universe, and researchers are keen to model and understand how elements are created in these environments.
The post The Aftermath of a Neutron Star Collision Resembles the Conditions in the Early Universe appeared first on Universe Today.
Today’s photos are from California tidepools and were taken by UC Davis math professor Abigail Thompson, a recognized “hero of intellectual freedom.” Abby’s notes and IDs are indented, and you can enlarge the photos by clicking on them.
September-October tidepools (Northern California).
September and October tides are not as extreme as the tides of midsummer, and by mid-October the lowest tides occur after sunset, which altogether makes finding creatures and taking pictures a bit more challenging. As usual I got help with some of the IDs from people on inaturalist.
Phyllocomus hiltoni: this Dr. Seussian marine worm washed up on the beach in a clump of eelgrass. It was tiny; the photo is through a microscope. I already thought it was amazing, but then (see the next picture) as a bonus it also sprouted tentacles:
Phyllocomus hiltoni with frills!
Porichthys notatus: these tiny fish showed up when I turned over a rock. They were very small; I assume newly hatched:
Porichthys notatus: close-up:
Anthopleura sola (starburst anemone), one of the more spectacular sea anemones:
Phragmatopoma californica (California sandcastle worm): These worms often live in groups and form large conglomerations of the tubes they live in (the “sandcastles”). The black shell-like thing on the left is the worm’s operculum, like a lid to close off the top of the tube when the worm withdraws. The next picture is a close-up of the operculum:
Operculum close-up:
Triopha maculata: nudibranch; this one looks like he’s eating the pink bryozoan, but he may just be passing over it, I’m not sure what this species eats (nudibranchs are very picky eaters):
Epiactis prolifera (brooding anemone, probably): there are a few species of Epiactis sea anemones along the California coast; prolifera is the most common:
Halosydna brevisetosa: Eighteen-scaled worm, found on the underside of a rock. There are 18 pairs of scales, with a close-up of them in the next picture.
Close-up of scales:
Low tide on this day was about an hour after sunset, which is a lovely time to be out on the beach:
Camera info: Mostly Olympus TG-7, in microscope mode, pictures taken from above the water. The first picture was taken with my iphone through the eyepiece of a microscope.
Think of the Moon and most people will imagine a barren world pockmarked with craters. The same is likely true of Mars, albeit more red in colour than grey! The Earth too has had its fair share of craters, some of them large, but most of the evidence has been eroded away by eons of weathering. Perhaps surprisingly, Venus, the second planet from the Sun, does not have the same weathering processes as Earth, yet while it shows signs of impact craters, there are no large impact basins! A team of astronomers now think they have secured a new view of the hottest planet in the Solar System and revealed the missing impact sites.
Whilst Venus is often called Earth’s sister planet, in reality the two differ in many ways. The term comes from similarities in size and composition, yet the conditions on Venus are far more hostile. Surface temperatures far exceed the boiling point of water, the dense atmosphere exerts a pressure on the surface equivalent to being 3,000 feet under water, and there is sulphuric acid rain in the atmosphere! Most definitely not a nice place to head to for your next vacation.
Venus
If you were to stand on the surface of Venus you would see beautifully formed craters, yet looking down on the planet from orbit you would see none, due to the thick, dense atmosphere. And if you could gaze through the obscuring clouds, you would see a distinct lack of the larger impact basins of the sort we are familiar with on the Moon. Now, a team of researchers, mostly from the Planetary Science Institute, believe they have solved the mystery of the missing craters.
The Moon. Credit: NASA
They have mapped a region of Venus known as Haastte-baad Tessera using radar technology, and the results were rather surprising. The region is thought to be one of the oldest surfaces on Venus and is classed as tessera terrain. This type of feature is complex and is characterised by rough, intersecting ridges forming a tile-like pattern, thought to be the result of a thin but strong layer of material forming over a weak layer which can flow and convect energy just like boiling water. Images from the area in question reveal a set of concentric rings over 1,400 km across at their widest. The team propose that the feature is the result of two back-to-back impact events. “Think of pea soup with a scum forming on top,” said Vicki Hansen, Planetary Science Institute Senior Scientist.
Obviously there is no pea soup on Venus; instead, the thin crust layer formed upon a layer of molten lava. The Venus of today has a thick outer shell called a lithosphere, which is about 112 km thick, but when Venus was younger it’s thought to have been just 9 km thick! If an impactor struck the hot young Venus, it’s very likely it would have fractured the lithosphere, allowing molten lava to seep through and eventually solidify to create the tesserae we see today.
Confusing things slightly, however, is that features like this have been seen on top of flat, raised plateaus where the lithosphere is likely much thicker. The researchers have an answer for this too: “When you have vast amounts of partial melt in the mantle that rushes to the surface, what gets left behind is something called residuum. Solid residuum is much stronger than the adjacent mantle, which did not experience partial melting,” said Hansen. “What may be surprising is that the solid residuum is also lower density than all the mantle around it. So, it’s stronger, but it’s also buoyant. You basically have an air mattress sitting in the mantle beneath your lava pond, and it’s just going to rise up and raise that tessera terrain.”
The features found by the team seem to show that two impact events happened one after the other, with the first creating the build-up of lava and the second creating the ring structure seen today.
Source : Impact craters were hiding in plain sight, say researchers with a new view of Venus
The post New View of Venus Reveals Previously Hidden Impact Craters appeared first on Universe Today.
In a few years, as part of the Artemis Program, NASA will send the “first woman and first person of color” to the lunar surface. This will be the first time astronauts have set foot on the Moon since the Apollo 17 mission in 1972. This will be followed by the creation of permanent infrastructure that will allow for regular missions to the surface (once a year) and a “sustained program of lunar exploration and development.” This will require spacecraft making regular trips between the Earth and Moon to deliver crews, vehicles, and payloads.
In a recent NASA-supported study, a team of researchers at the University of Illinois Urbana-Champaign investigated a new method of sending spacecraft to the Moon. It is known as “multimode propulsion,” a method that integrates a high-thrust chemical mode and a low-thrust electric mode – while using the same propellant. This system has several advantages over other forms of propulsion, not the least of which include being lighter and more cost-effective. With a little luck, NASA could rely on multimode propulsion-equipped spacecraft to achieve many of its Artemis objectives.
The paper describing their investigation, “Indirect optimal control techniques for multimode propulsion mission design,” was recently published in Acta Astronautica. The research was led by Bryan C. Cline, a doctoral student in the Department of Aerospace Engineering at the University of Illinois Urbana-Champaign. He was joined by fellow aerospace engineer and PhD Candidate Alex Pascarella, and Robyn M. Woollands and Joshua L. Rovey – an assistant professor and professor with the Grainger College of Engineering (Aerospace Engineering).
Artist’s impression of the ESA LISA Pathfinder mission. Credit: ESA–C.Carreau
To break it down, a multimode thruster relies on a single chemical monopropellant – like hydrazine or Advanced Spacecraft Energetic Non-Toxic (ASCENT) propellant – to power both chemical thrusters and an electrospray thruster (aka. a colloid thruster). The latter element relies on a process known as electrospray ionization (ESI), where charged liquid droplets are produced and accelerated by a static electric field. Electrospray thrusters were first used in space aboard the ESA’s LISA Pathfinder mission to demonstrate disturbance reduction.
By developing a system that incorporates both modes and can switch between them as needed, satellites will be able to perform propulsive maneuvers using less propellant (aka. minimum-fuel transfers). As Cline said in a Grainger College of Engineering press release:
“Multimode propulsion systems also expand the performance envelope. We describe them as flexible and adaptable. I can choose a high-thrust chemical mode to get someplace fast and a low-thrust electrospray to make smaller maneuvers to stay in the desired orbit. Having multiple modes available has the potential to reduce fuel consumption or reduce time to complete your mission objective.”
The team’s investigation follows a similar study conducted by Cline and researchers from NASA’s Goddard Spaceflight Center and the aerospace advisory company Space Exploration Engineering, LLC. In a separate paper, “Lunar SmallSat Missions with Chemical-Electrospray Multimode Propulsion,” they considered the advantages of multimode propulsion against all-chemical and all-electric approaches for four design reference missions (DRMs) provided by NASA. For this latest investigation, Cline and his colleagues used a standard 12-unit CubeSat to execute these four mission profiles.
Earth–Mars minimum-fuel trajectory when the CubeSat is coasting, as well as in mode 1 (low thrust) and mode 2 (high thrust). Credit: UIUC
“We showed for the first time the feasibility of using multimode propulsion in NASA-relevant lunar missions, particularly with CubeSats,” said Cline. “Other studies used arbitrary problems, which is a great starting point. Ours is the first high-fidelity analysis of multimode mission design for NASA-relevant lunar missions.”
Multimode propulsion is similar in some respects to hybrid propulsion, where two propulsion systems are combined to achieve optimal thrust. A good example of this (though still unrealized) is bimodal nuclear propulsion, where a spacecraft relies on both a nuclear-thermal propulsion (NTP) and a nuclear-electric propulsion (NEP) system. While an NTP system relies on a nuclear reactor to heat hydrogen or deuterium propellant and can deliver a large change in velocity (delta-v) quickly, an NEP system uses the reactor to power an ion engine that offers a consistent level of thrust.
A key advantage multimode propulsion has over a hybrid system is a drastic reduction in the dry mass of the spacecraft. Whereas hybrid propulsion systems require two different propellants (and hence two separate fuel tanks), multimode propulsion requires only one. This not only saves on the mass and volume of the spacecraft but makes it cheaper to launch. “I can choose to use high-thrust at any time and low-thrust at any time, and it doesn’t matter what I did in the past,” said Cline. “With a hybrid system, when one tank is empty, I can’t choose that option.”
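The shared-tank advantage is easy to see with the Tsiolkovsky rocket equation. The sketch below splits one propellant load between a chemical mode and an electrospray mode; all masses and specific impulses are illustrative guesses, not figures from the study:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, m_start_kg, m_end_kg):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m_start / m_end)."""
    return isp_s * G0 * math.log(m_start_kg / m_end_kg)

# Illustrative CubeSat masses (kg): wet, after the chemical burns, dry.
m_wet, m_mid, m_dry = 24.0, 21.0, 19.0
ISP_CHEM, ISP_ESP = 250.0, 800.0  # assumed monopropellant / electrospray Isp

dv_fast = delta_v(ISP_CHEM, m_wet, m_mid)  # quick, high-thrust transfer
dv_fine = delta_v(ISP_ESP, m_mid, m_dry)   # slow, efficient fine maneuvers
print(f"chemical mode:     {dv_fast:6.1f} m/s")   # ~327 m/s
print(f"electrospray mode: {dv_fine:6.1f} m/s")   # ~785 m/s
print(f"total:             {dv_fast + dv_fine:6.1f} m/s from one tank")
```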
To complete each of the design reference missions for this project, the team made all decisions manually – i.e., when to use high thrust and when to use low thrust. As a result, the trajectories weren’t optimal. After completing the project, Cline developed an algorithm that automatically selects whichever mode leads to an optimal trajectory. This allowed Cline and his team to solve a simple two-dimensional transfer between Earth and Mars and a three-dimensional transfer to geostationary orbit that minimizes fuel consumption. As Cline explained:
“This was an entirely different beast where the focus was on the development of the method, rather than the specific results shown in the paper. We developed the first indirect optimal control technique specifically for multimode mission design. As a result, we can develop transfers that obey the laws of physics while achieving a specific objective such as minimizing fuel consumption or transfer time.”
“We showed the method works on a mission that’s relevant to the scientific community. Now you can use it to solve all kinds of mission design problems. The math is agnostic to the specific mission. And because the method utilizes variational calculus, what we call an indirect optimal control technique, it guarantees that you’ll get at least a locally optimal solution.”
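In broad strokes, an indirect method converts the optimization into a boundary-value problem for the spacecraft’s states and costates, and Pontryagin’s minimum principle then dictates that, at each instant, the vehicle should use whichever mode minimizes the Hamiltonian. The Python sketch below is a heavily simplified illustration of that mode-selection logic, not the authors’ actual formulation; the costate values and thruster numbers are made up for demonstration:

```python
import numpy as np

G0 = 9.80665  # standard gravity, m/s^2

def thrust_hamiltonian(lam_v, lam_m, mass, thrust, isp):
    """Thrust-dependent part of the Hamiltonian for a minimum-fuel problem,
    with thrust aimed along -lam_v (the optimal 'primer vector' direction).
    Terms: running propellant cost (+mdot), velocity costates (-|lam_v|*T/m),
    and the mass costate acting on mdot (mass decreases at rate T/c)."""
    c = isp * G0       # effective exhaust velocity
    mdot = thrust / c  # propellant mass flow rate
    return mdot - np.linalg.norm(lam_v) * thrust / mass - lam_m * mdot

def select_mode(lam_v, lam_m, mass, modes):
    """Pick coasting or whichever thrust mode gives the smallest Hamiltonian."""
    best_name, best_h = "coast", 0.0  # coasting contributes zero
    for name, (thrust, isp) in modes.items():
        h = thrust_hamiltonian(lam_v, lam_m, mass, thrust, isp)
        if h < best_h:
            best_name, best_h = name, h
    return best_name

# Illustrative, assumed numbers: (thrust in N, Isp in s) for the two modes.
modes = {"chemical": (1.0, 230.0), "electrospray": (1.7e-4, 800.0)}
lam_v = np.array([0.02, -0.01, 0.0])  # costates would come from solving a two-point BVP
print(select_mode(lam_v, lam_m=0.1, mass=24.0, modes=modes))
```

In a real solver, the costates are found numerically (for example, by shooting methods) so the trajectory meets its departure and arrival conditions; the payoff of the indirect formulation, as Cline notes, is the guarantee of at least a locally optimal solution.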
Artist’s rendering of an Artemis astronaut exploring the Moon’s surface during a future mission. Credit: NASA
The research is part of a project led by Professor Rovey and a multi-institutional team known as the Joint Advanced Propulsion Institute (JANUS). Their work is funded by NASA as part of a new Space Technology Research Institute (STRI) initiative. Rovey leads the Diagnostics and Fundamental Studies team along with Dr. John D. Williams, a Professor of Mechanical Engineering and the Director of the Electric Propulsion & Plasma Engineering Laboratory at Colorado State University (CSU).
As Cline indicated, their work on multimode propulsion could revolutionize how small spacecraft travel between Earth and the Moon, Mars, and other celestial bodies:
“It’s an emerging technology because it’s still being developed on the hardware side. It’s enabling in that we can accomplish all kinds of missions we wouldn’t be able to do without it. And it’s enhancing because if you’ve got a given mission concept, you can do more with multimode propulsion. You’ve got more flexibility. You’ve got more adaptability.
“I think this is an exciting time to work on multimode propulsion, both from a hardware perspective, but also from a mission design perspective. We’re developing tools and techniques to take this technology from something we test in the basement of Talbot Lab and turn it into something that can have a real impact on the space community.”
Further Reading: University of Illinois Urbana-Champaign, Acta Astronautica
China has a fabulously rich history when it comes to space travel and was among the first to experiment with rocket technology. The invention of the rocket is often attributed to the Sung Dynasty (AD 960–1279). Since then, China has been keen to develop and build its own space industry. The Chinese National Space Administration has already successfully landed probes on the Moon and is now preparing for its first human landings. Chinese astronauts are sometimes known as taikonauts, and the CNSA has just confirmed that its fourth batch of taikonauts is set for a lunar landing.
The Chinese National Space Administration (CNSA) is China’s equivalent to NASA. It was founded in 1993 to oversee the country’s space aspirations, and amazing results have been achieved over the last twenty years, including the landmark Chang’e lunar missions. In 2019, Chang’e-4 became the first lander to touch down on the far side of the Moon, and in 2021 China became the third country to land a rover on Mars. That same year, the first module of the CNSA’s Tiangong space station was launched; the station is now operational and, working with other space agencies, supports a number of scientific research projects.
China has announced that it successfully completed its latest selection process in May. The CNSA is striving to expand its team of taikonauts: ten were chosen from all the applicants, including eight experienced space pilots and two payload specialists. The team will now begin its training program in August, covering over 200 subject areas designed to prepare them for future missions to the Moon and other Chinese space initiatives.
The training covers an extensive range of skills. It will include living and working in microgravity, maintaining physical and mental health in space, and specialist training in extravehicular activities. The taikonauts will also learn maintenance techniques for advanced spacecraft systems and receive hands-on training in conducting experiments in microgravity.
On her 2007 mission aboard the International Space Station, NASA astronaut Peggy Whitson, Expedition 16 commander, worked on the Capillary Flow Experiment (CFE), which observes the flow of fluid, in particular capillary phenomena, in microgravity. Credit: NASA
The program is designed to expand and fine-tune the skills of the taikonauts in preparation for future crewed lunar missions. Specialist training for lunar landings includes piloting spacecraft under different gravitational conditions, manoeuvring lunar rovers, celestial navigation, and stellar identification.
Not only will they learn about space operations, but they will also have to learn skills to support scientific objectives. This will include how to conduct geological surveys and how to operate tools and manoeuvre in reduced-gravity environments.
Source : China’s fourth batch of taikonauts set for lunar landings
It was 1969 when humans first set foot on the Moon. Back then, the Apollo program was the focus of efforts to land on the Moon, but now, over 50 years on, it looks like we are set to return. The Artemis program hopes to take us back to the lunar surface, and it is going from strength to strength. The plan is to get humans back on the Moon by 2025 as part of Artemis III, and as a prelude to this, NASA is now turning its attention to possible landing sites.
The Artemis Project is NASA’s program aimed at returning humans to the Moon and establishing a permanent base there, ultimately with a view to paving the way for missions to Mars. Established in 2017, Artemis intends to land “the first woman and the next man” on the lunar surface by 2025. The program began with Artemis I, an uncrewed mission that orbited the Moon. Artemis II will take astronauts on an orbit of the Moon, and finally, Artemis III will land humans back on the Moon by 2025. At the heart of the program are the giant Space Launch System (SLS) rocket and the Orion spacecraft.
NASA’s Space Launch System rocket carrying the Orion spacecraft launches on the Artemis I flight test, Wednesday, Nov. 16, 2022, from Launch Complex 39B at NASA’s Kennedy Space Center in Florida. Credit: NASA/Joel Kowsky
As the plans ramp up for the first crewed landing, NASA is now analysing possible landing sites and has identified nine potential spots. They are all near the South Pole of the Moon and will provide Artemis III with landing sites close to potentially useful resources. Further investigations will be required to assess their suitability.
The Cross Agency Site Selection Analysis team is carrying out the analysis and will work with other science and industry partners. The teams will explore each possible site for its science value and suitability for the mission, including the availability of water ice. The final list so far, in no particular order, is: Peak near Cabeus B, Haworth, Malapert Massif, Mons Mouton Plateau, Mons Mouton, Nobile Rim 1, Nobile Rim 2, de Gerlache Rim 2, and Slater Plain.
The South Polar region was chosen chiefly because it has water ice locked up deep in its permanently shadowed craters. The Apollo missions never visited that region of the Moon either, so it is a great opportunity for humans to explore this ancient part of the lunar surface. To settle on these nine areas, the team assessed various parts of the south polar region for launch window suitability, terrain suitability, communication capability, and even lighting levels. The geology team also assessed the landing sites for their scientific value.
Apollo 17 astronaut Harrison Schmitt collecting a soil sample, his spacesuit coated with dust. Credit: NASA
NASA will settle on the appropriate landing site once a launch date has been confirmed, since the launch date determines the transfer trajectories to the Moon, the orbital paths, and the surface environment.
Source : NASA Provides Update on Artemis III Moon Landing Regions
The accelerated expansion of the Universe has often been attributed to the force known as dark energy. An intriguing theory put forward last year offers an explanation for this mysterious force: black holes could be the source of dark energy! The theory suggests that as more black holes form in the Universe, the stronger the push from dark energy becomes. A survey from the Dark Energy Spectroscopic Instrument (DESI) seems to support the theory: data from its first year of operation shows the density of dark energy increasing over time, seemingly in step with the number and mass of black holes!
Cast your mind back 13.8 billion years to the beginning of the Universe. Just after the Big Bang, the moment the Universe popped into existence, there was a brief period when the Universe expanded faster than the speed of light. Before you argue that nothing can travel faster than light: it was the very fabric of space and time that was expanding faster than light. The speed-of-light limit applies to travel through the fabric of space, not to the fabric of space itself! This was the inflationary period.
This illustration shows the “arrow of time” from the Big Bang to the present cosmological epoch. Credit: NASA
The energy that drove the expansion in the early Universe shared similarities with dark energy, the repulsive force that seems to permeate the Universe and is driving its present-day accelerated expansion.
What is dark energy, though? It is thought to make up around 68% of the Universe and, unlike normal matter and energy, seems to exert a repulsive force rather than an attractive one. Its repulsive nature was first inferred in the late 1990s, when astronomers observing distant supernovae deduced that the expansion was accelerating. As to the nature of dark energy, no one really knows what it is or where it comes from; that is, perhaps, until now.
Artist’s illustration of a bright and powerful supernova explosion. Credit: NASA/CXC/M. Weiss
A team of researchers from the University of Michigan and other institutions has published a paper in the Journal of Cosmology and Astroparticle Physics proposing that black holes are the source of dark energy. Professor Gregory Tarle asked, “Where in the later Universe do we see gravity as strong as it was at the beginning of the Universe?” The answer, Tarle goes on to explain, is at the centres of black holes. Tarle and his team propose that what happened during the inflationary period runs in reverse during the collapse of a massive star. When this happens, the matter could conceivably become dark energy.
The team used data from the Dark Energy Spectroscopic Instrument (DESI), which is mounted on the 4-metre Mayall Telescope at Kitt Peak National Observatory. The instrument is essentially 5,000 computer-controlled fibre-optic positioners covering a field of view of about 8 square degrees. It builds its evidence by studying tens of millions of galaxies whose light takes billions of years to reach us. That information can be used to determine how fast the Universe is expanding with unprecedented precision.
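That measurement ultimately comes down to comparing galaxy redshifts against the distances a cosmological model predicts. As a minimal, illustrative sketch (assuming a flat Universe with a cosmological-constant-like dark energy and round-number parameters, not DESI’s fitted values), the expansion rate and comoving distance can be computed like this:

```python
import math

C_KM_S = 299_792.458  # speed of light, km/s

def hubble(z, h0=70.0, omega_m=0.32, omega_de=0.68):
    """Expansion rate H(z) in km/s/Mpc for a flat Universe with
    cosmological-constant-like dark energy (w = -1)."""
    return h0 * math.sqrt(omega_m * (1 + z) ** 3 + omega_de)

def comoving_distance(z, steps=100_000):
    """Comoving distance in Mpc: integrate c/H(z') from 0 to z (midpoint rule)."""
    dz = z / steps
    return sum(C_KM_S / hubble((i + 0.5) * dz) * dz for i in range(steps))

# Distance to a galaxy at redshift z = 1, whose light has travelled for billions of years:
print(f"Comoving distance at z=1: {comoving_distance(1.0):,.0f} Mpc")
```

Surveys like DESI effectively run this logic in reverse: with enough galaxies, the measured distance-redshift relation pins down how H(z), and hence dark energy, has behaved over cosmic time.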
Stu Harris works on assembling the focal plane for the Dark Energy Spectroscopic Instrument (DESI), which involves hundreds of thousands of parts, at Lawrence Berkeley National Laboratory on Wednesday, 6 December 2017, in Berkeley, Calif.
The data shows evidence that dark energy has increased with time. That is perhaps not surprising in itself, but the increase seems to accurately mirror the growth of black holes over time. Now that DESI is operational, more observations are needed to hunt down black holes and quantify their growth over time, to see if there really is merit in this exciting new hypothesis.
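The coupling at the heart of the hypothesis has a neat arithmetic consequence. If each black hole’s mass grows as the cube of the cosmic scale factor (a coupling strength of k = 3, as earlier work on this idea proposed) while the black holes’ number density dilutes as the inverse cube through expansion, the two effects cancel and the population’s total energy density stays constant, which is exactly how a cosmological constant behaves. A short sketch of that bookkeeping, with arbitrary unit values:

```python
def bh_energy_density(a, m0=1.0, n0=1.0, k=3.0):
    """Energy density of a population of cosmologically coupled black holes
    at scale factor a: each mass grows as a**k, while the number density
    dilutes as a**-3 as space expands."""
    return (m0 * a**k) * (n0 * a**-3)

for a in (0.5, 1.0, 2.0):
    print(f"a = {a}: density = {bh_energy_density(a):.3f}")  # constant when k = 3
```

With k below 3 the density would fall as the Universe expands, and with k above 3 it would rise, so tracking the dark energy density over time, as DESI does, directly tests the coupling.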
Source : Evidence mounts for dark energy from black holes