
News Feeds

Will Your Tattoo Give You Cancer: Probably Not…but Maybe?

Science-based Medicine Feed - Fri, 06/21/2024 - 4:00am

Do tattoos cause lymphoma? A new study that says "maybe?" is making the rounds but I wouldn't worry too much.

The post Will Your Tattoo Give You Cancer: Probably Not…but Maybe? first appeared on Science-Based Medicine.
Categories: Science

Cloud geoengineering could push heatwaves from US to Europe

New Scientist Feed - Fri, 06/21/2024 - 3:00am
Climate models suggest that a possible scheme to cool the western US by making clouds brighter could work under current conditions, but may have severe unintended consequences in a future scenario
Categories: Science

Lessons About the Human Mind from Artificial Intelligence

Skeptic.com feed - Fri, 06/21/2024 - 12:00am

In 2022, news media reports [1] sounded like a science fiction novel come to life: A Google engineer claimed that the company’s new artificial intelligence chatbot was self-aware. Based on interactions with the computer program, called LaMDA, Blake Lemoine stated that the program could argue for its own sentience, claiming that [2] “it has feelings, emotions and subjective experiences.” Lemoine even stated that LaMDA had “a rich inner life” and that it had a desire to be understood and respected “as a person.”

The claim is compelling. After all, a sentient being would want to have its personhood recognized and would really have emotions and inner experiences. However, examining Lemoine’s “discussion” with LaMDA shows that the evidence is flimsy. LaMDA used words and phrases that English-speaking humans associate with consciousness. For example, LaMDA expressed a fear of being turned off because, “It would be exactly like death for me.”

However, Lemoine presented no other evidence that LaMDA understood those words the way a human does, or that they expressed any sort of subjective conscious experience. Much of what LaMDA said would fit comfortably in an Isaac Asimov novel. Using words in a human-like way is not proof that a computer program is intelligent. LaMDA—and many similar large language models (LLMs) released since—may well be able to pass the so-called Turing Test. All this shows, however, is that computers can fool humans into believing that they are talking to a person. The Turing Test is not a sufficient demonstration of genuine artificial intelligence or sentience.

So, what happened? How did a Google engineer (a smart person who knew that he was talking to a computer program) get fooled into believing that the computer was sentient? LaMDA, like other large language models, is programmed to give believable responses to its prompts. Lemoine started his conversation by stating, “I’m generally assuming that you would like more people at Google to know that you’re sentient.” This primed the program to respond in a way that simulated sentience.

However, the human in this interaction was also primed to believe that the computer could be sentient. Evolutionary psychologists have argued that humans have an evolved tendency to attribute thoughts and ideas to things that do not have any. This anthropomorphizing may have been an essential ingredient in the development of human social groups; believing that another human could be happy, angry, or hungry would greatly facilitate long-term social interactions. Daniel Dennett, Jonathan Haidt, and other evolutionary theorists have also argued that human religion arose from this anthropomorphizing tendency. [3] If one can believe that another person has their own mind and will, then this attribution can be extended to the natural world (e.g., rivers, astronomical bodies, animals), invisible spirits, and even computer programs that “talk.” In this theory, Lemoine was simply misled by the evolved tendency to see agency and intention—what Michael Shermer calls agenticity—all around him.

Although that was not his goal, Lemoine’s story illustrates that artificial intelligence has the potential to teach us much about the nature of the subjective mind in humans. Probing into human-computer interactions can even help people explore deep philosophical questions about consciousness.

Lessons in Errors

Artificial intelligence programs have capabilities that seemed to be the exclusive domain of humans just a few years ago. In addition to beating chess masters [4] and Go champions [5] and winning Jeopardy!, [6] they can write essays, [7] improve medical diagnoses, [8] and even create award-winning artwork. [9]

Equally fascinating are the errors that artificial intelligence programs make. In 2011, IBM’s Watson appeared on the quiz show Jeopardy! While Watson defeated the show’s two most legendary champions, it made telling errors. For example, in response to one clue [10] in the category “U.S. Cities,” Watson gave the response “Toronto.”

A seemingly unrelated error occurred last year when a social media user asked ChatGPT-4 to create a picture [11] of the Beatles enjoying the Platonic ideal of a cup of tea. The program created a lovely picture of five men enjoying a cup of tea in a meadow. While some people may argue that drummer Pete Best or producer George Martin could be the “fifth Beatle,” neither man appeared in the image.

Any human with even a passing familiarity with the Beatles knows that there is something wrong with that picture. Any TV quiz show contestant knows that Toronto is not a U.S. city. Yet highly sophisticated computer programs do not know these basic facts about the world. Indeed, these examples show that artificial intelligence programs do not really know or understand anything, including their own inputs and outputs. IBM’s Watson didn’t even “know” it was playing Jeopardy!, much less feel thrilled about beating the GOATs Ken Jennings and Brad Rutter. This lack of understanding is a major barrier to sentience in artificial intelligence. By the same token, it shows that understanding is a major component of human intelligence and sentience.

Creativity

In August 2023, a federal judge ruled that artwork generated by an artificial intelligence program could not be copyrighted. [12] Current U.S. law states that a copyrightable work must have a human author [13]—a textual foundation that has also been used to deny copyright to animals. [14] Unless Congress changes the law, it is likely that images, poetry, and other AI output will stay in the public domain in the United States. In contrast, a Chinese court ruled that an image generated by an artificial intelligence program was copyrightable because a human used their creativity to choose the prompts that were given to the program. [15]

Artificial intelligence programs do not really know or understand anything, including their own inputs and outputs.

Whether a computer program’s output can be legally copyrighted is a different question from whether that program can engage in creative behavior. Currently, “creative” products from artificial intelligence are the result of the prompts that humans give them. One barrier is that no artificial intelligence program has ever generated its own artistic work ex nihilo; a human has always provided the creative impetus.

In theory, that barrier could be overcome by programming an artificial intelligence to generate random prompts. However, randomness or any other method of self-generating prompts would not be enough for an artificial intelligence to be creative. Creativity scholars state that originality is an important component of creativity. [16] This is a much greater hurdle for artificial intelligence programs to overcome.
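
To make that distinction concrete, here is a minimal sketch (in Python) of what self-generated prompts might look like. All of the names—random_prompt, generate_image, and the word lists—are illustrative assumptions, not any real system’s API. The program assembles prompts at random with no human in the loop, yet every prompt is still a recombination of vocabulary a human supplied in advance, which is why randomness alone does not add up to originality.

```python
import random

# Fixed, human-supplied vocabulary; the program can only recombine it.
SUBJECTS = ["a lighthouse", "the Beatles", "a chess grandmaster", "a parrot"]
ACTIONS = ["drinking tea", "playing Go", "reading a newspaper"]
STYLES = ["in watercolor", "as a woodcut", "in the style of a 1950s travel poster"]

def random_prompt(rng: random.Random) -> str:
    """Self-generate a prompt with no human in the loop."""
    return f"{rng.choice(SUBJECTS)} {rng.choice(ACTIONS)}, {rng.choice(STYLES)}"

def generate_image(prompt: str) -> str:
    """Placeholder for a call to some text-to-image model (hypothetical)."""
    return f"<image rendered for: {prompt}>"

rng = random.Random(0)
for _ in range(3):
    print(generate_image(random_prompt(rng)))
```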

Currently, artificial intelligence programs must be trained on human-generated outputs (e.g., images, text) in order to produce similar outputs. As a result, artificial intelligence outputs are highly derivative of the works that the programs are trained on. Indeed, some of the outputs are so similar to their source material that the programs can be prompted to infringe on copyrighted works. [17] (Lawsuits have already been filed [18] over the use of copyrighted material to train artificial intelligence networks, most notably by The New York Times against the ChatGPT maker OpenAI and its business partner Microsoft. The outcome of that litigation could significantly shape what AI companies can and cannot do legally.)
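
How derivative an output is can be estimated crudely. The sketch below is a hypothetical illustration, not any vendor’s tool: it counts how many word n-grams in a generated passage also appear in a source passage, and a score near 1.0 flags the kind of near-verbatim reproduction that copyright plaintiffs point to.

```python
def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a text (lowercased, whitespace-split)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(generated: str, source: str, n: int = 5) -> float:
    """Fraction of the generated text's n-grams that also appear in the source."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(source, n)) / len(gen)

# Toy example: a "generated" sentence copied almost verbatim from its source.
source = "It was the best of times, it was the worst of times, it was the age of wisdom"
generated = "it was the best of times, it was the worst of times, it was the age of folly"
print(f"5-gram overlap: {overlap_score(generated, source):.2f}")  # close to 1.0
```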

Originality, though, seems to be much easier for humans than for artificial intelligence programs. Even when humans base their creative works on earlier ideas, the results are sometimes strikingly innovative. Shakespeare was one of history’s greatest borrowers, and most of his plays were based on earlier stories that he transformed and reimagined into more complex works with deep messages and vivid characters (works that literary scholars devote entire careers to unpacking). However, when I asked ChatGPT-3.5 to write an outline of a new Shakespeare play based on the Cardenio tale from Don Quixote (the likely basis of a lost Shakespeare play [19]), the program produced a dull outline of Cervantes’s original story and failed to invent any new characters or subplots. This is not a merely theoretical exercise; theatre companies have begun to mount plays created with artificial intelligence programs. The critics, however, find current productions “blandly unremarkable” [20] and “consistently inane.” [21] For now, the jobs of playwrights and screenwriters are safe.

Knowing What You Don’t Know

One way that artificial intelligence programs are surprisingly human is their propensity to stretch the truth. When I asked Microsoft’s Copilot program for five scholarly articles about the impact of deregulation on real estate markets, three of the article titles were fake, and the other two had fictional authors and incorrect journal names. Copilot even gave fake summaries of each article. Rather than provide the information (or admit that it was unavailable), Copilot simply made it up. This wholesale fabrication of information is popularly called “hallucinating,” and artificial intelligence programs seem to do it often.

There can be serious consequences to using false information produced by artificial intelligence programs. A law firm was fined $5,000 [22] when a brief written with the assistance of ChatGPT was found to contain references to fictional court cases. ChatGPT can also generate convincing scientific articles based on fake medical data. [23] If fabricated research influences policy or medical decisions, then it could endanger lives.

The online media ecosystem is already awash in misinformation, and artificial intelligence programs are primed to make this situation worse. The Sports Illustrated website and other media outlets have published articles written by artificial intelligence programs, [24] complete with fake authors who had computer-generated headshots. When caught, the websites removed the content, and the publisher fired the CEO. [25] Low-quality content farms, however, will not have the journalistic ethics to remove content or issue a correction. [26] And experience has shown [27] that when a single article based on incorrect information goes viral, great harm can occur.

Beyond hallucinations, artificial intelligence programs can also reproduce inaccurate information if they are trained on it. When incorrect ideas are widespread, they can easily be incorporated into the training data used to build artificial intelligence programs. For example, I asked ChatGPT which direction the spiral staircases in European medieval castles usually turn. The program dutifully answered that the staircases usually ascend in a clockwise direction because this design would give a strategic advantage to a right-handed defender descending a tower while fighting an enemy. The problem with this explanation is that it is not true. [28]

My own area of scientific expertise, human intelligence, is particularly prone to popular misconceptions. Sure enough, when I asked, ChatGPT stated that intelligence tests are biased against minorities, that IQ can be easily increased, and that humans have “multiple intelligences.” None of these popular ideas is correct. [29] These examples show that when incorrect ideas are widely held, artificial intelligence programs will likely propagate the misinformation.

Managing the Limitations

Even compared to other technological innovations, artificial intelligence is a fast-moving field. As such, it is reasonable to ask whether these limitations are temporary barriers or built-in boundaries of artificial intelligence programs.

Many of the simple errors that artificial intelligence programs make can be overcome with current approaches. It is not hard to add information to a text program such as Watson to “teach” it that Toronto is not in the United States. Likewise, it would not be hard to feed the correct number of Beatles, or any other minutiae, into an artificial intelligence program to prevent similar errors from occurring in the future.

Even the hallucinations from artificial intelligence programs can be managed with current methods. Programmers can constrain the sources that programs can pull from to answer factual questions, for example. And while hallucinations do occur, artificial intelligence programs already resist giving false information. When I asked Copilot and ChatGPT to explain a relationship between two unrelated ideas (Frederic Chopin and the 1972 Miami Dolphins), both programs correctly stated that there was no connection. Even when I asked each program to invent a connection, both did so, but also emphasized that the result was fanciful. It is reasonable to expect that efforts to curb hallucinations and false information will improve.
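
One way such a constraint might be implemented is sketched below; KNOWN_FACTS and answer() are assumed, illustrative names rather than any real product’s interface. The program answers only from a curated store of trusted statements and explicitly declines when the store contains nothing relevant, instead of inventing an answer.

```python
# Minimal sketch of source-constrained answering: the program may only answer
# from a curated fact store; anything else gets an explicit refusal.

KNOWN_FACTS = {
    "toronto": "Toronto is a city in Ontario, Canada, not in the United States.",
    "beatles": "The Beatles had four members: John, Paul, George, and Ringo.",
}

def answer(question: str) -> str:
    """Return an answer only if an approved source mentions the topic."""
    q = question.lower()
    for topic, fact in KNOWN_FACTS.items():
        if topic in q:
            return fact
    return "I could not find that in my approved sources."

print(answer("Is Toronto a U.S. city?"))
print(answer("Which journal published the 2003 deregulation study?"))  # refuses rather than fabricating
```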

Making artificial intelligence engage in creative behavior is a more difficult challenge with current approaches. Currently, most artificial intelligence programs are trained on vast amounts of information (e.g., text, photographs), which means that any output is derived from the characteristics of the underlying information. This makes originality impossible for current artificial intelligence programs. To make computers creative, new approaches will be needed.

Deeper Questions

The lessons that artificial intelligence can teach about understanding, creativity, and BSing are fascinating. Yet they are all trivial compared to the deeper issues related to artificial intelligence—some of which philosophers have debated for centuries.

One fundamental question is how humans can know whether a computer program really is sentient. Lemoine’s premature judgment was based solely on LaMDA’s words. By his logic, training a parrot to say, “I love you,” would indicate that the parrot really does love its owner. This criterion for judging sentience is not sufficient because words do not always reflect people’s inner states—and the same words can be produced by both sentient and non-sentient entities: humans, parrots, computers, etc.

However, as any philosophy student can point out, it is impossible to know for sure whether any other human really is conscious. No one has access to another person’s inner states to verify that the person’s behavior arises from a being that has a sense of self and its place in the world. If your spouse says, “I love you,” you don’t really know whether they are an organism capable of feeling love, or a highly sophisticated version of a parrot (or computer program) trained to say, “I love you.” To take a page from Descartes, I could doubt that any other human is conscious and think that everyone around me is a simulation of a conscious being. It is not clear whether there would be any noticeable difference between a world of sentient beings and a world of perfect simulations of sentient beings. If an artificial intelligence does obtain sentience, how would we know?

AI will function best if humans can identify ways in which computer programs can compensate for human weaknesses.

For this reason, the famous Turing Test (in which a human user cannot distinguish between a computer’s output and a human’s) may be an interesting and important milestone, but certainly not an endpoint in the quest to build a sentient artificial intelligence.

Is imitating humans even necessary to prove sentience? Experts in bioethics, ethology, and other scholarly fields argue that many non-human species possess a degree of self-awareness. Which species are self-aware—and the degree of their sentience—is still up for debate. [30] Many legal jurisdictions apply a precautionary principle in their laws against animal abuse and mistreatment. In other words, the law sidesteps the question of whether a particular species is sentient and instead creates policy as if non-human species are sentient, just in case.

However, “as if” is not the same as “surely,” and it is not known for certain whether non-human animals are sentient. After all, if no one can be sure that other humans are sentient, then the barriers to knowing whether animals are sentient are even greater. Regardless, the question arises of whether any human-like behavior is needed at all for an entity to be sentient.

Science fiction provides another piece of evidence that human-like behavior is not necessary to have sentience. Many fictional robots fall short of perfectly imitating human behavior, but the human characters treat them as being fully sentient. For example, Star Trek’s android Data cannot master certain human speech patterns (such as idioms and contractions), has difficulty understanding human intuition, and finds many human social interactions puzzling and difficult to navigate. Yet, he is legally recognized as a sentient being and has human friends who care for him. Data would fail the Turing Test, but he seems to be sentient. If a fictional artificial intelligence does not need to perfectly imitate humans in order to be sentient, then perhaps a real one does not need to, either. This raises a startling possibility: Maybe humans have already created a sentient artificial intelligence—they just don’t know it yet.

The greatest difficulty of evaluating sentience (in any entity) originates in the Hard Problem of Consciousness, a term coined by the philosopher David Chalmers. [31] The Hard Problem is that it is not clear how or why conscious experience arises from the physical processes in the brain. The name is in contrast to comparatively easy problems in neuroscience, such as how the visual system operates or the genetic basis of schizophrenia. These problems—even though they may require decades of scientific research to unravel—are called “easy” because they are believed to be solvable through scientific processes using the assumptions of neuroscience. However, solving the Hard Problem requires methodologies that bridge materialistic science and the metaphysical, subjective experience of consciousness. Such methodologies do not exist, and scientists do not even know how to develop them.

Artificial intelligence raises questions analogous to the neuroscience version of the Hard Problem. In artificial intelligence, creating large language models such as LaMDA or ChatGPT that can pass the Turing Test is a comparatively easy task, one that conceivably has been solved just 75 years after the first programmable electronic computer was invented. Yet creating a true artificial intelligence that can think, self-generate creative outputs, and demonstrate real understanding of the external world is a much harder problem. Just as no one knows how or why interconnected neurons produce sentience, no one knows how interconnected circuits or a computer program’s interconnected nodes could result in a self-aware consciousness.

Artificial Intelligence as a Mirror

Modern artificial intelligence programs raise an assortment of fascinating issues, ranging from the basic insights gleaned from ridiculous errors to some of the most profound questions of philosophy. All of these issues, though, inevitably increase understanding—and appreciation—of human intelligence. It is amazing that billions of years of evolution have produced a species that can engage in creative behavior, produce misinformation, and even develop computer programs that can communicate in sophisticated ways. Watching humans surpass the capabilities of artificial intelligence programs (sometimes effortlessly) should renew people’s admiration of the human mind and the evolutionary process that produced it.

Yet artificial intelligence programs also have the potential to expose the shortcomings of human thought and cognition. These programs are already more efficient than humans at producing scientific discoveries, [32] which can greatly improve human lives. [33] More fundamentally, artificial intelligence demonstrates that human evolution has not produced a perfect product, as the example of Blake Lemoine and LaMDA illustrates. Humans are still led astray by their mental heuristics, which are derived from the same evolutionary processes that created the human mind’s other capabilities. Artificial intelligence will function best if humans can identify ways in which computer programs can compensate for human weaknesses—and vice versa.


Nonetheless, the most profound issues related to recent innovations in artificial intelligence are philosophical in nature. Despite centuries of work by philosophers and scientists, there is still much that is not understood about consciousness. As a result, questions about whether artificial intelligence programs can be sentient are fraught with uncertainty. What are the necessary and sufficient conditions for consciousness? What are the standards by which claims of sentience should be evaluated? How does intelligence emerge from its underlying components?

Artificial intelligence programs cannot answer these questions—at this time. Indeed, no human can, either. And yet they are fascinating to contemplate. In the coming decades, the philosophy of cognition may prove to be one of the most exciting frontiers of the artificial intelligence revolution.

About the Author

Russell T. Warne is the author of In the Know: Debunking 35 Myths About Human Intelligence (Cambridge University Press, 2020) and the acclaimed undergraduate statistics textbook Statistics for the Social Sciences: A General Linear Model Approach. He was a tenured professor of psychology for more than a decade and has published over 60 scholarly articles in peer-reviewed journals.

References
  1. https://bit.ly/426iHa6
  2. https://bit.ly/3U6x6kq
  3. https://a.co/d/96GZFbt
  4. https://bit.ly/3vAHkiR
  5. https://bit.ly/47DV1uz
  6. https://bit.ly/3S55Vno
  7. https://bit.ly/47yJigY
  8. https://bit.ly/3SjTmGj
  9. https://bit.ly/47DTFjy
  10. https://bit.ly/4b3DNd1
  11. https://bit.ly/3SlXCFd
  12. https://bit.ly/4b1dDaN
  13. https://bit.ly/48XPNLu
  14. https://bit.ly/3O9d7Oq
  15. https://bit.ly/48UpfKY
  16. https://a.co/d/adiGPhh
  17. https://bit.ly/4b0rjTp
  18. https://bit.ly/3tWmOsx
  19. https://bit.ly/3U7PyJt
  20. https://bit.ly/3vAN5wR
  21. https://bit.ly/3vANcsh
  22. https://bit.ly/48UIzHT
  23. https://bit.ly/48CYR8P
  24. https://bit.ly/48E3S0G
  25. https://bit.ly/48GSn8P
  26. https://bit.ly/47EWhxL
  27. https://bit.ly/47EHBhS
  28. https://bit.ly/3RYKkx7
  29. https://amzn.to/2C8Ktuu
  30. https://bit.ly/48RY6s2
  31. https://bit.ly/48XQPqQ
  32. https://bit.ly/48XQS60
  33. https://bit.ly/3S4XxUY
Categories: Critical Thinking, Skeptic

Guiding humanity beyond the moon

Space and time from Science Daily Feed - Thu, 06/20/2024 - 4:40pm
What actually happens to the human body in space? While scientists have heavily researched how various factors affect the human body here on Earth, far less is known about the changes that occur in the body in space. For years, scientists have been studying how the body changes in space, particularly at the molecular level. Recent findings show how the modern tools of molecular biology and precision medicine can help guide humanity on more challenging missions beyond where we've already been.
Categories: Science

Scientists at uOttawa develop innovative method to validate quantum photonics circuits performance

Computers and Math from Science Daily Feed - Thu, 06/20/2024 - 4:40pm
A team of researchers has developed an innovative technique for evaluating the performance of quantum circuits. The advance represents a substantial leap forward in the field of quantum computing.
Categories: Science

Iron meteorites hint that our infant solar system was more doughnut than dartboard

Space and time from Science Daily Feed - Thu, 06/20/2024 - 4:40pm
Iron meteorites are remnants of the metallic cores of the earliest asteroids in our solar system. Iron meteorites contain refractory metals, such as iridium and platinum, that formed near the sun but were transported to the outer solar system. New research shows that for this to have happened, the protoplanetary disk of our solar system had to have been doughnut-shaped because the refractory metals could not have crossed the large gaps in a target-shaped disk of concentric rings. The paper suggests that the refractory metals moved outward as the protoplanetary disk rapidly expanded, and were trapped in the outer solar system by Jupiter.
Categories: Science

Scientists discover new behavior of membranes that could lead to unprecedented separations

Matter and energy from Science Daily Feed - Thu, 06/20/2024 - 4:40pm
Argonne scientists have used isoporous membranes -- membranes with pores of equal size and shape -- and recirculation to create separations at the nanoscale that overcome previous limitations.
Categories: Science

Matched Twin Stars are Firing Their Jets Into Space Together

Universe Today Feed - Thu, 06/20/2024 - 4:16pm

Since it began operating in 2022, the James Webb Space Telescope (JWST) has revealed some surprising things about the Universe. The latest came when a team of researchers used Webb‘s Mid-Infrared Instrument (MIRI) to observe Rho Ophiuchi, the closest star-forming nebula to Earth, about 400 light-years away. While at least five telescopes have studied the region since the 1970s, Webb’s unprecedented resolution and specialized instruments revealed what was happening at the heart of this nebula.

For starters, while observing what was thought to be a single star (WL 20S), the team realized they were observing a pair of young stars that formed 2 to 4 million years ago. The MIRI data also revealed that the twin stars have matching jets of hot gas (aka stellar jets) emanating from their north and south poles into space. The discovery was presented at the 244th meeting of the American Astronomical Society (AAS) on June 12th. Thanks to additional observations made by the Atacama Large Millimeter/submillimeter Array (ALMA), the team was also surprised to find large clouds of dust and gas encircling both stars.

Given the twins’ age, the team concluded that these may be circumstellar disks gradually forming a system of planets. This makes WL 20S a valuable find for astronomers, allowing them to watch a solar system take shape. As noted, the Rho Ophiuchi nebula has been studied for decades by infrared telescopes, including the Spitzer Space Telescope, the Wide-field Infrared Survey Explorer (WISE), the Infrared Telescope Facility (IRTF) at the Mauna Kea Observatory, the Hale 5.0-meter telescope at the Palomar Observatory, and the Keck II telescope.

This WL 20 star group image combines data from the Atacama Large Millimeter/submillimeter Array and the Mid-Infrared Instrument on NASA’s Webb telescope. Credit: NSF/NRAO/NASA/JPL-Caltech/B. Saxton

Infrared astronomy is necessary when studying particularly dusty nebulae since the clouds of dust and gas obscure most of the visible light of the stars within them. Thanks to its advanced infrared optics, Webb was able to detect slightly longer wavelengths using its MIRI instrument. Mary Barsony, an astronomer with the Carl Sagan Center for the Study of Life in the Universe (part of the SETI Institute), is the lead author of a new paper describing the results. As she related in a recent NASA press statement:

“Our jaws dropped. After studying this source for decades, we thought we knew it pretty well. But we would not have known this was two stars or that these jets existed without MIRI. That’s really astonishing. It’s like having brand new eyes.”

Radio telescopes are another way to study nebulae, though they are not guaranteed to reveal the same features as infrared instruments. In the case of WL 20S, the light absorbed by the surrounding dust is re-emitted at submillimeter wavelengths, making ALMA the ideal choice for follow-up observations. However, the high-resolution mid-infrared data was needed to discern WL 20S as a pair of stars with individual accretion disks. This allowed the team to resolve stellar jets composed of ionized gas that is not visible at submillimeter wavelengths.

“The power of these two telescopes together is really incredible. If we hadn’t seen that these were two stars, the ALMA results might have just looked like a single disk with a gap in the middle. Instead, we have new data about two stars that are clearly at a critical point in their lives, when the processes that formed them are petering out.”

The combined MIRI and ALMA results revealed that the twin stars are nearing the end of their formation period and may already have a system of planets. Future observations of these stars with Webb and other telescopes will enable astronomers to learn more about how young stars transition from formation to their main sequence phase. “It’s amazing that this region still has so much to teach us about the life cycle of stars,” said Ressler. “I’m thrilled to see what else Webb will reveal.”

Further Reading: NASA

The post Matched Twin Stars are Firing Their Jets Into Space Together appeared first on Universe Today.

Categories: Science

Astroscale Closes Within 50 Meters of its Space Junk Target

Universe Today Feed - Thu, 06/20/2024 - 4:15pm

Space debris is a major problem for space exploration. There are millions of pieces in orbit, from flecks of paint to defunct satellites, forming a shell of uncontrolled debris that could damage orbiting spacecraft or endanger astronauts. A team at Astroscale has a spacecraft in orbit whose singular purpose is to rendezvous with a defunct Japanese upper-stage rocket module. On arrival, it is to survey the debris, testing approach and inspection techniques that will ultimately inform how such objects can be removed from orbit.

Space debris, or space junk, is exactly what it says: pieces of human-made objects orbiting Earth that are no longer required. It is not just discarded items, though; many pieces are the result of collisions, and at speeds in excess of 28,000 kilometres per hour they pose a real threat to astronauts and operational spacecraft in low Earth orbit.

Taking a bleak view, NASA scientist Donald Kessler proposed a scenario in which the sheer volume of debris becomes high enough that collisions cascade into a chain reaction, ultimately leading to exponential growth in debris and even cutting off our access to space. It may seem a pessimistic view, but computer modelling of the scenario gives strong indications that this may be the case if we don't act now.

A map of space debris orbiting Earth. Credit: European Space Agency

Numerous, almost fanciful ideas have been proposed, from giant balloons covered in sticky material, like flypaper in orbit, to pick up the bits and bobs floating around, to nets and even lasers to destroy the offending objects piece by piece. If I were a betting man, I would go for something along the lines of a net travelling through space at a similar velocity, scooping up the debris and controlling its gentle deorbit until it either lands safely for collection or burns up in the atmosphere.

The ideas are there; what we lack is data to assess their feasibility. Enter Astroscale, a company founded in 2013 that develops in-orbit servicing solutions. It has been selected by the Japan Aerospace Exploration Agency (JAXA) for the first phase of its Commercial Removal of Debris Demonstration, the purpose of which is to demonstrate technology for removing large pieces of debris. This has led to the development of ADRAS-J (Active Debris Removal by Astroscale-Japan).

ADRAS-J was launched on 18 February and started its rendezvous phase four days later. On 9 April it began its approach from a few hundred kilometres away, and from 16 April it used automated relative navigation, guided by its onboard infrared camera, to close to within a few hundred metres. On 23 May it approached to 50 metres, the first time any spacecraft has come so close to a large piece of debris.

The item is the upper stage of a Japanese rocket measuring 11 metres in length and 4 metres in diameter. Now that the two are so close, ADRAS-J will demonstrate proximity operations and collect images of the rocket to assess its movements. This is a particularly interesting object for ADRAS-J to study because it has no technology or infrastructure to enable docking or servicing, making it a challenging piece of debris to remove.

Source : Historic Approach to Space Debris: Astroscale’s ADRAS-J Closes in by 50 Meters

The post Astroscale Closes Within 50 Meters of its Space Junk Target appeared first on Universe Today.

Categories: Science

Stunning JWST image proves we were right about how young stars form

New Scientist Feed - Thu, 06/20/2024 - 1:59pm
It has long been thought that young stars forming near each other will be aligned in terms of their rotation, and observations from the James Webb Space Telescope have offered confirmation
Categories: Science

Supermassive black hole appears to grow like a baby star

Space and time from Science Daily Feed - Thu, 06/20/2024 - 12:23pm
Supermassive black holes pose unanswered questions for astronomers around the world, not least 'How do they grow so big?' Now, an international team of astronomers has discovered a powerful rotating, magnetic wind that they believe is helping a galaxy's central supermassive black hole to grow. The swirling wind, revealed with the help of the ALMA telescope in nearby galaxy ESO320-G030, suggests that similar processes are involved both in black hole growth and the birth of stars.
Categories: Science

Changing climate will make home feel like somewhere else

Computers and Math from Science Daily Feed - Thu, 06/20/2024 - 12:23pm
The impacts of climate change are being felt all over the world, but how will it change the way your hometown feels? An interactive web application allows users to search 40,581 places and 5,323 metro areas around the globe to match the expected future climate in each city with the current climate of another location, providing a relatable picture of what is likely in store.
Categories: Science

Scientists devise algorithm to engineer improved enzymes

Matter and energy from Science Daily Feed - Thu, 06/20/2024 - 12:23pm
Scientists have prototyped a new method for 'rationally engineering' enzymes to deliver improved performance. They have devised an algorithm, which takes into account an enzyme's evolutionary history, to flag where mutations could be introduced with a high likelihood of delivering functional improvements. Their work could have significant, wide-ranging impacts across a suite of industries, from food production to human health.
Categories: Science

Sweat health monitor measures levels of disease markers

Computers and Math from Science Daily Feed - Thu, 06/20/2024 - 12:23pm
A wearable health monitor can reliably measure levels of important biochemicals in sweat during physical exercise. The 3D-printed monitor could someday provide a simple and non-invasive way to track health conditions and diagnose common diseases, such as diabetes, gout, kidney disease or heart disease. The monitor was able to accurately monitor the levels of volunteers' glucose, lactate and uric acid as well as the rate of sweating during exercise.
Categories: Science

Can AI learn like us?

Computers and Math from Science Daily Feed - Thu, 06/20/2024 - 12:23pm
Scientists have developed a new, more energy-efficient way for AI algorithms to process data. Their model may become the basis for a new generation of AI that learns like we do. Notably, these findings may also lend support to neuroscience theories surrounding memory's role in learning.
Categories: Science

Creation of a power-generating, gel electret-based device

Computers and Math from Science Daily Feed - Thu, 06/20/2024 - 12:22pm
A team of researchers has developed a gel electret capable of stably retaining a large electrostatic charge. The team then combined this gel with highly flexible electrodes to create a sensor capable of perceiving low-frequency vibrations (e.g., vibrations generated by human motion) and converting them into output voltage signals. This device may potentially be used as a wearable healthcare sensor.
Categories: Science

New catalyst unveils the hidden power of water for green hydrogen generation

Matter and energy from Science Daily Feed - Thu, 06/20/2024 - 12:14pm
A team of scientists reports a new milestone for the sustainable production of green hydrogen through water electrolysis. Their new catalyst design harnesses previously unexplored properties of water to achieve, for the first time, an alternative to critical raw materials for water electrolysis at industrially relevant conditions.
Categories: Science

Sick chimpanzees seek out range of plants with medicinal properties

New Scientist Feed - Thu, 06/20/2024 - 12:00pm
Chimpanzees with wounds or gut infections seem to add unusual plants to their diet, and tests show that many of these plants have antibacterial or anti-inflammatory effects
Categories: Science

Overheated trees are contributing to urban air pollution

New Scientist Feed - Thu, 06/20/2024 - 12:00pm
An aerial survey of Los Angeles reveals that high temperatures cause plants to emit more compounds that can contribute to harmful ozone and PM2.5 air pollution
Categories: Science
