Mars exploration vehicles typically have wheels, allowing them to traverse some challenging terrain on the Red Planet. Eventually, though, their systems wear down, and sometimes a wheel gets stuck. The most widely known case was the “Free Spirit” campaign of 2009. Unfortunately, that campaign wasn’t successful, and now, 15 years later, Spirit remains stuck in its final resting place. Things might have been different if NASA had adopted a new robot paradigm developed by Guangming Chen and his colleagues at the Nanjing University of Aeronautics & Astronautics Lab of Locomotion Bioinspiration and Intelligent Robots. They devised a robot based on a desert lizard, with adaptable feet and a flexible “spine” that, according to their calculations, would be well suited to traversing Martian regolith.
Planning for traversing tough terrain isn’t limited to rovers that are stuck. Curiosity and Perseverance, perhaps the two best-known operating rovers on Mars, currently spend a lot of their time trying to avoid areas where they might become entangled. This limits their ability to capture any data from those areas, potentially missing out on some cool rocks, like the pure sulfur that Curiosity recently found for the first time on Mars.
A lizard-inspired robot, on the other hand, would have no trouble traversing such terrain. It would also have advantages on other kinds of obstacles, such as rocks. Most rovers don’t have enough clearance to get over medium-sized rocks, whereas a legged robot would, especially one with adjustable “toes” that allow it to grip a rock more tightly than a typical legged robot could.
Lizard-inspired robots aren’t only useful for walking – they can also jump like their biological cousins, as demonstrated in this video from UC Berkeley’s robotics lab.

The design of the robot itself is relatively simple – it has four “feet” attached to a chassis that essentially looks like a desert lizard, complete with a tail for counterbalance. Each foot has a series of three spring-loaded “toes,” along with a servo for ankle articulation and a bearing for rotational control. This combination allows the lizard robot to walk effectively on all fours and to adjust each leg to best adapt to the surface it is “walking” over.
The authors performed a series of kinematic calculations for different types of terrain to understand how the robot would interact with each surface. Kinematic calculations are typically used in robotics when designers want to find the best way to move a specific robot part; here they are relatively involved, given the number of moving parts. However, a control algorithm is possible using only on-board computation, allowing for some basic autonomous terrain navigation if the architecture is ever adopted for use in space.
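As a rough illustration of the kind of calculation involved, here is a minimal planar forward-kinematics sketch for a two-link leg; the link lengths and joint conventions are illustrative assumptions, not values from the paper.

```python
import math

def foot_position(hip_angle, knee_angle, l1=0.10, l2=0.08):
    """Planar forward kinematics for a two-link leg.

    hip_angle, knee_angle: joint angles in radians (knee measured
    relative to the thigh). l1, l2: link lengths in metres
    (illustrative values, not from the paper).
    Returns (x, y) of the foot tip relative to the hip joint.
    """
    knee_x = l1 * math.cos(hip_angle)
    knee_y = l1 * math.sin(hip_angle)
    foot_x = knee_x + l2 * math.cos(hip_angle + knee_angle)
    foot_y = knee_y + l2 * math.sin(hip_angle + knee_angle)
    return foot_x, foot_y

# Leg hanging straight down: hip at -90 degrees, knee straight,
# so the foot sits 0.18 m directly below the hip.
x, y = foot_position(-math.pi / 2, 0.0)
```

Real foot-placement planning inverts this relation (solving for joint angles given a desired contact point) and repeats it for every leg and terrain profile, which is where the complexity the authors describe comes from.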
Building an actual prototype would be a great way to work on that navigation algorithm, and that’s precisely what the researchers did. They 3D printed many of the parts for the chassis and foot, embedded some batteries and controllers in the head and tail sections, and started testing the prototype on simulated Martian test terrain.
Mars isn’t the only place that could benefit from legged robots – they could work on the Moon as well, as Fraser discusses.

They tested everything from grasping loose regolith to climbing over small rocks, and their algorithm seemed to handle the relatively simple terrain in the test bed effectively. However, the robot moved more slowly than originally simulated, mainly due to technical difficulties in balancing the motions of the springs and the spine.
Despite the problems that arose during physical testing, this new iteration is a step in the right direction, and the lab has been designing similar systems for years. They plan to build another version, adding a continuous power supply and fully implementing an autonomous navigation algorithm. Their research is funded by both Jiangsu Province and the Chinese Ministry of Science and Technology, so it seems likely to continue to receive support, at least for the foreseeable future.
Learn More:
Chen et al. – Development of a Lizard-Inspired Robot for Mars Surface Exploration
UT – Spirit Extrication, Day 1: Drive Stopped After 1 Second
UT – Bio-Mimicry and Space Exploration
UT – Robots Might Jump Around to Explore the Moon
Lead Image:
Image of the prototyped lizard biomimetic robot.
Credit – Chen et al.
The post Having Trouble Traversing the Sands of Mars? A Lizard Robot Might Help appeared first on Universe Today.
Dark matter is a mysterious and captivating subject. It’s a strange concept, and we don’t really have a handle on what it actually is. One of the strongest pieces of evidence that dark matter is a particle comes from cosmic collisions, chiefly those that occur when clusters of galaxies interact, such as in the famous Bullet Cluster. Gravitational lensing reveals how the dark matter component decouples from the gas and dust in the cluster. Now astronomers have found another galaxy cluster collision, one that shows the process from a new angle.
Dark matter was first proposed in the 1930s by Swiss astronomer Fritz Zwicky, who observed the Coma Cluster and found that its galaxies were travelling faster than could be explained by the visible mass alone. Zwicky proposed the existence of an unseen type of material, known as dark matter, which was gravitationally affecting the galaxies. In the 1970s, even more evidence emerged when observations of spiral galaxies found that their outer regions rotated at the same speed as their inner regions. Again, this suggested some otherwise unseen matter surrounding the stars in those galaxies. Even so, dark matter has not yet been directly observed, largely due to its complete lack of interaction with normal matter.
Fritz Zwicky. Image Source: Fritz Zwicky Stiftung website

Galaxy clusters are one phenomenon where dark matter seems to have a significant impact. Their component galaxies are bound together by gravity. Yet when we measure the amount of matter that seems to be present in a cluster, only about 15% of it is normal matter, mostly in the form of hot gas, with the rest made up of stars, planets and even people! The remaining 85% must therefore be dark matter.
Recent observations of the colliding clusters known collectively as MACS J0018.5+1626 show that the individual galaxies are largely unscathed, since the distances between galaxies within a cluster are vast; the gas components, however, have become turbulent and superheated. Such events are typically revealed through the gravitational and electromagnetic effects of normal matter, but dark matter interacts only through gravity.
The Caltech Submillimeter Observatory, the Keck Observatory on Mauna Kea, the Chandra X-ray Observatory, the Hubble Space Telescope, the Herschel Space Observatory and the Planck Observatory were all part of the project observing the collision of MACS J0018.5+1626. The dissociation, or decoupling, of dark matter and normal matter in such collisions has been seen before in the Bullet Cluster, where the hot gas of normal matter lagged behind the dark matter as the clusters passed through each other. MACS J0018.5+1626 shows similar lagging between normal and dark matter, but its slightly different orientation offers a unique view of this type of event.
Detailed views of the Orion Bullet region. In each image pair, left is the Altair 2007 image and right is the new 2012 GeMS image. Credit: Gemini Observatory/AURA

To try to understand the process, a team of researchers used a method known as the kinetic Sunyaev-Zel’dovich effect: the spectral distortion of the cosmic microwave background through inverse Compton scattering. This is not the first time the effect has been observed; a team of astronomers previously detected it in a cluster known as MACS J0717. Multiple observations of the effect have since been made, allowing astronomers to measure the speed of the gas and normal matter. Measuring the speed of the galaxies then allowed the speed of the dark matter to be deduced.
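In rough terms, the kinetic SZ signal is a small fractional shift in the CMB temperature, ΔT/T = −(v/c)τ, set by the gas’s line-of-sight velocity v and the cluster’s Thomson optical depth τ. A minimal order-of-magnitude sketch, using illustrative values rather than the measured ones:

```python
C = 299_792.458        # speed of light, km/s
T_CMB = 2.725          # CMB temperature, K

def ksz_shift(v_los_kms, tau):
    """Fractional CMB temperature shift from the kinetic SZ effect.

    v_los_kms: line-of-sight gas velocity (positive = receding), km/s.
    tau: Thomson optical depth through the cluster gas.
    Both values below are assumed for illustration only.
    """
    return -(v_los_kms / C) * tau

# A cluster receding at 3000 km/s with tau = 0.01 (assumed values):
frac = ksz_shift(3000.0, 0.01)
delta_T_uK = frac * T_CMB * 1e6   # shift in micro-kelvin
```

Shifts of this size, a few hundred micro-kelvin at most, are why a battery of sensitive millimetre and submillimetre instruments was needed to map the gas velocity.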
It is hoped that future studies will reveal even more clues about the nature of dark matter. The observations of MACS J0018.5+1626, and previously of the Bullet Cluster, have given a good starting point, but more detailed studies are required.
Source : Dark Matter Flies Ahead of Normal Matter in Mega Galaxy Cluster Collision
The post Giant Collision Decouples Dark Matter from Regular Matter appeared first on Universe Today.
Johannes Kepler is probably best known for developing the laws of planetary motion. He was also a keen solar observer, and in 1607 he made some wonderful observations of our nearest star using a camera obscura. His drawings were precise enough to let astronomers pinpoint where the Sun was in its 11-year cycle. Taking into account Kepler’s observing location and the positions of the sunspots, a team of researchers has identified that the Sun was nearing the end of solar cycle −13.
Johannes Kepler was a German mathematician born in 1571. His contribution to celestial mechanics and the understanding of planetary movement is second to none: the laws of planetary motion that he formulated from the observations of Tycho Brahe have stood the test of time. Beyond his work on planetary motion, he was a renowned observer in his own right, and he made one of the earliest records of solar activity before the invention of the telescope!
Johannes Kepler in 1610. Credit: Wikipedia Commons

Kepler used a camera obscura, which consisted of a small hole in a wall through which sunlight was allowed to pass. The light would then fall upon a sheet of paper, allowing the observer to study a projected image of the Sun. Kepler used this to record and sketch the visible features of the Sun, and in May 1607 he recorded what he thought was a transit of Mercury. It turned out to be not a transit of Mercury but a group of sunspots.
The sunspots seen by Kepler, and often seen on the Sun by modern amateur astronomers, are temporary solar phenomena. They exist in the visible layer of the Sun’s atmosphere, known as the photosphere, and appear dark compared to their surroundings. In reality, if a sunspot could be isolated from the much brighter solar disk while kept at its existing distance from Earth, it would be brighter than the full Moon. The spots are simply cooler and darker than the surrounding hot, bright material: around 3,800 K instead of just under 6,000 K for the average photospheric temperature.
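The brightness contrast follows from the Stefan-Boltzmann law, under which flux per unit area scales as T⁴; a short check with temperatures close to those quoted above:

```python
# Stefan-Boltzmann sketch: a ~3800 K sunspot still radiates a sizeable
# fraction of what the ~5800 K photosphere does per unit area.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_flux(T):
    """Blackbody flux per unit area (W/m^2) at temperature T in kelvin."""
    return SIGMA * T**4

ratio = surface_flux(3800) / surface_flux(5800)
# Roughly 18% of the surrounding photospheric flux per unit area:
# dim only by contrast, and far brighter than moonlight in isolation.
```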
Sunspot image from the newly upgraded GREGOR Telescope

The Sun is a great ball of plasma, and it has a magnetic field like Earth’s. Plasma is electrically charged gas that can drag magnetic field lines with it. As the Sun rotates, it drags the magnetic field along, causing it to become wound up and tangled. Often the stress on the field lines is so intense that they burst through the surface, inhibiting convection and leaving a cooler region: a sunspot. Sunspot numbers (and solar activity in general) peak over an 11-year cycle.
A team of researchers led by Hisashi Hayakawa from Nagoya University has used new techniques to analyse Kepler’s drawings and has uncovered new information about solar activity at the time. Applying Spörer’s law (which describes how the heliographic latitude at which solar active regions form varies over a solar cycle) to the drawings placed them at the end of the cycle preceding the one in which Thomas Harriot, Galileo and other telescopic observers first captured solar cycle information. This puts the observations just before the well documented Maunder Minimum, an unexplained period of significantly reduced sunspot activity that occurred between 1645 and 1715.
Until now, this period of minimal solar activity has been hotly debated and, whilst no definitive conclusion has been reached, the team hopes that Kepler’s information may finally put us on a path to understanding great periods of solar inactivity.
Source : Kepler’s 1607 pioneering sunspot sketches solve solar mysteries 400 years later
The post Kepler Sketched the Sun in 1607. Astronomers Pinpointed the Solar Cycle appeared first on Universe Today.
In 1956, The New York Times prophesied that once global warming really kicked in, we could see parrots in the Antarctic. In 2010, when science deniers had control of the climate story, Senator James Inhofe and his family built an igloo on the Washington Mall and plunked a sign on top: AL GORE’S NEW HOME: HONK IF YOU LOVE CLIMATE CHANGE. In The Parrot and the Igloo, best-selling author David Lipsky tells the astonishing story of how we moved from one extreme (the correct one) to the other.
With narrative sweep and a superb eye for character, Lipsky unfolds the dramatic narrative of the long, strange march of climate science. The story begins with a tale of three inventors―Thomas Edison, George Westinghouse, and Nikola Tesla―who made our technological world, not knowing what they had set into motion. Then there are the scientists who sounded the alarm once they identified carbon dioxide as the culprit of our warming planet. And we meet the hucksters, zealots, and crackpots who lied about that science and misled the public in ever more outrageous ways. Lipsky masterfully traces the evolution of climate denial, exposing how it grew out of early efforts to build a network of untruth about products like aspirin and cigarettes.
Featuring an indelible cast of heroes and villains, mavericks and swindlers, The Parrot and the Igloo delivers a real-life tragicomedy―one that captures the extraordinary dance of science, money, and the American character.
David Lipsky is a contributing editor at Rolling Stone. His fiction and nonfiction have appeared in The New Yorker, Harper’s, The Best American Short Stories, The Best American Magazine Writing, The New York Times, The New York Times Book Review, and many others. He contributes to NPR’s All Things Considered, and is the recipient of a Lambert Fellowship, a Media Award from GLAAD, and a National Magazine Award. He’s the author of the novel The Art Fair; a collection, Three Thousand Dollars; and the bestselling nonfiction book Absolutely American: Four Years at West Point, which was a Time magazine Best Book of the Year. His book, The Parrot and the Igloo: Climate and the Science of Denial, is just out in paperback.
Shermer and Lipsky discuss:
The five basic questions about climate change:
Here’s what Dr. Shermer wrote in Scientific American:
A 2013 study published in Environmental Research Letters by John Cook, Dana Nuccitelli, and their colleagues examined 11,944 climate paper abstracts published from 1991 to 2011. Of those papers that stated a position on AGW, 97.1 percent concluded that climate change is real and human caused. What about the three percent? What if they’re right? In a 2015 paper published in the journal Theoretical and Applied Climatology, Rasmus Benestad, Dana Nuccitelli, and their colleagues examined the three percent and found “a number of methodological flaws and a pattern of common mistakes.” That is, instead of the three percent converging to a better explanation than that provided by the 97 percent, they failed to converge to anything. “There is no cohesive, consistent alternative theory to human-caused global warming,” Dana Nuccitelli concluded in an August 25, 2015 commentary in The Guardian. “Some blame global warming on the sun, others on orbital cycles of other planets, others on ocean cycles, and so on. There is a 97% expert consensus on a cohesive theory that’s overwhelmingly supported by the scientific evidence, but the 2–3% of papers that reject that consensus are all over the map, even contradicting each other. The one thing they seem to have in common is methodological flaws like cherry picking, curve fitting, ignoring inconvenient data, and disregarding known physics.” For example, one skeptical paper attributed climate change to lunar or solar cycles, but to make these models work for the 4,000-year period that the authors considered they had to throw out 6,000 years’ worth of earlier data.
If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.
Mercury, the closest planet to our Sun, is also one of the least understood in the Solar System. On the one hand, it is similar in composition to Earth and the other rocky planets, consisting of silicate minerals and metals differentiated between a silicate crust and mantle and an iron-nickel core. But unlike the other rocky planets, Mercury’s core makes up a much larger part of its mass fraction. Mercury also has a mysteriously persistent magnetic field that scientists still cannot explain. In this respect, Mercury is also one of the most interesting planets in the Solar System.
But according to new research, Mercury could be much more interesting than previously thought. Based on new simulations of Mercury’s early evolution, a team of Chinese and Belgian geoscientists found evidence that Mercury may have a layer of solid diamond beneath its crust. According to their simulations, this layer is about 15 km (9 mi) thick and sandwiched between the core and the mantle, hundreds of miles beneath the surface. While this makes the diamonds inaccessible (for now, at least), these findings could have implications for theories about the formation and evolution of rocky planets.
The international team consisted of researchers from the Center for High-Pressure Science and Technology Advanced Research, the School of Earth Sciences and Resources at the China University of Geosciences, the Department of Earth and Environmental Sciences at KU Leuven, and the Department of Geology at the University of Liege. The paper that describes their findings, “A diamond-bearing core-mantle boundary on Mercury,” recently appeared in Nature Communications.
Based on MESSENGER data, a team of geologists believe that (a) a layer of diamond may have been deposited early in Mercury’s history on top of a molten core, or (b) that diamond crystallized in the cooling core and rose to the core-mantle boundary. Credit: Xu et al., Nature Communications, 2024

The team was originally inspired by previous research by a team from MIT, NASA’s Goddard Space Flight Center, and several prominent universities. This consisted of a reassessment of Mercury’s gravity field based on the radio tracking measurements taken by NASA’s MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) mission, which allowed scientists to gain a better understanding of the potential structure of Mercury’s interior. This data led scientists to theorize that Mercury’s internal structure consisted of a metallic outer core layer, a liquid core layer, and a solid inner core.
While the composition of the core remains uncertain, it seemed likely that the core contained abundant iron, nickel, silicon, and possibly sulfur and carbon. The MESSENGER data further led scientists to believe that the large dark patches observed on Mercury’s surface were largely made up of graphite that was likely turned up from the interior. This data suggests that sufficient quantities of carbon could have crystallized in Mercury’s interior between the core and mantle boundary and floated up to the surface as graphite.
Given the amount of graphite on Mercury’s surface, it stands to reason that the planet was saturated with carbon. Previously, diamond (a mineral composed of pure carbon) was ruled out as a possible product because it was believed that the necessary pressures did not exist close to Mercury’s core. However, if the boundary between the core and the mantle were deeper than previously thought, the necessary pressure conditions may have existed after all.
For their study, the team relied on thermodynamic modeling to recreate these pressure conditions based on the existence of a deeper core-mantle boundary. These experiments allowed them to simulate conditions on Mercury as it slowly cooled. Their results indicated that, assuming a sulfur content of around 11% and a pressure roughly 1-2 percent of that in Earth’s interior, diamond could crystallize within the molten core. They further found that this diamond would form a layer stable enough to rise along with graphite towards the mantle.
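For a back-of-the-envelope sense of the pressures involved, a simple hydrostatic estimate P ≈ ρgd can be sketched; the density, gravity, and depth below are rough assumed values, not the paper’s model inputs:

```python
# Order-of-magnitude hydrostatic pressure at Mercury's core-mantle
# boundary, P = rho * g * d. All three inputs are assumed round numbers
# for illustration, not values from the study.
RHO_MANTLE = 3300.0   # mean mantle density, kg/m^3 (assumed)
G_MERCURY = 3.7       # Mercury's surface gravity, m/s^2
DEPTH = 400e3         # approximate boundary depth, m (assumed)

pressure_gpa = RHO_MANTLE * G_MERCURY * DEPTH / 1e9
# ~5 GPa: comparable to the pressures at which diamond, rather than
# graphite, becomes the stable form of carbon at high temperature.
```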
Mercury’s Magnetic Field. Credit: NASA

Over the course of eons, their experiments suggested, this diamond would form a layer around 15 to 18 km (~9 to 11 mi) thick. Considering that diamond is an exceptional thermal conductor, the presence of this layer could change the way astrogeologists model the interior dynamics of Mercury and shed light on its mysterious magnetic field. The way heat rises from the core significantly affects the cooling and evolution of rocky planets, and the movement of materials in the interior is responsible for the generation of magnetic fields.
Not only is Mercury the only rocky planet other than Earth to have a magnetosphere, but there is evidence that it may be far older than our own. As such, revised models of Mercury’s interior could explain how the planet’s magnetosphere has persisted for so long. Beyond Mercury, these findings could have significant implications for prevailing theories of how the rocky planets of our Solar System formed and evolved.
Further Reading: Science Alert, Nature Communications
The post Mercury Could be Housing a Megafortune Worth of Diamonds! appeared first on Universe Today.
The JWST has directly imaged its first exoplanet, a temperate super Jupiter only about 12 light-years away from Earth. It could be the oldest and coldest planet ever detected.
The planet orbits the star Epsilon Indi A (Eps Ind A), a K-type star about the same age as our Sun. Epsilon Indi is a triple system whose other two members are brown dwarfs. The exoplanet is named Epsilon Indi Ab (Eps Ind Ab).
Eps Ind Ab’s detection is presented in a paper published in Nature. Its title is “A temperate super-Jupiter imaged with JWST in the mid-infrared.” The lead author is Elisabeth Matthews, a Postdoc in the Department of Planet and Star Formation at the Max Planck Institute for Astronomy in Germany.
This new detection is important for several reasons. The vast majority of the 5,000+ exoplanets we’ve discovered were detected by the transit method. Others were detected with the radial velocity method. Comparatively few have been directly imaged as Eps Ind Ab has.
There were already hints that a massive planet orbited Eps Ind A. Previous work using the radial velocity method found the telltale wobble induced in the star by a massive planet orbiting it. Now, the JWST has confirmed the planet’s presence.
“Our prior observations of this system have been more indirect measurements of the star, which actually allowed us to see ahead of time that there was likely a giant planet in this system tugging on the star,” said team member Caroline Morley of the University of Texas at Austin. “That’s why our team chose this system to observe first with Webb.”
This image from the research is a full field-of-view JWST/MIRI coronagraphic image of Eps Ind A in the 10.65µm filter. (1) is the star Eps Ind A, and (2) and (3) are background stars. Image Credit: Matthews et al. 2024.

Direct images of exoplanets are difficult to acquire. The blinding light from the star washes out the relatively dim light that comes from the planet. Telescopes like the JWST use coronagraphs to block the starlight and allow the planetary light to get through. In this case, the space telescope imaged the exoplanet using its Mid-Infrared Instrument (MIRI) Coronagraphic Imaging capability.
The JWST’s direct image of Eps Ind Ab revealed some surprises compared to earlier radial velocity measurements.
“While we expected to image a planet in this system because there were radial velocity indications of its presence, the planet we found isn’t what we had predicted,” shared Matthews. “It’s about twice as massive, a little farther from its star, and has a different orbit than we expected. The cause of this discrepancy remains an open question.”
Eps Ind Ab is about 6 times more massive than Jupiter, and its semi-major axis is about 28 AU. Its orbit is inclined about 103 degrees.
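With a 28 AU semi-major axis, Kepler’s third law gives a rough orbital period; the stellar mass used here (~0.76 solar masses, typical for a K-type dwarf) is an assumed value, not taken from the paper:

```python
import math

# Kepler's third law in convenient units: P^2 = a^3 / M, with P in
# years, a in AU, and M in solar masses.
def orbital_period_years(a_au, m_star_msun):
    """Orbital period in years for semi-major axis a_au around m_star_msun."""
    return math.sqrt(a_au**3 / m_star_msun)

# Assumed ~0.76 solar masses for the K-type host star:
period = orbital_period_years(28.0, 0.76)   # on the order of 170 years
```

A period of well over a century explains why radial-velocity hints of the planet took decades of monitoring to accumulate.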
These two panels from the research show Eps Indi Ab’s orbit. The left panel shows the planet’s orbit according to previous RV measurements and Hipparcos/Gaia measurements, and the right panel shows the orbit according to JWST observations. The JWST-measured orbit is wider. Image Credit: Matthews et al. 2024.

“The atmosphere of the planet also appears to be a little different than the model predictions,” Matthews added. “So far, we only have a few photometric measurements of the atmosphere, meaning that it is hard to draw conclusions, but the planet is fainter than expected at shorter wavelengths.”
Eps Ind Ab is more similar to Jupiter than any other exoplanet ever imaged, even though it’s a little warmer and several times more massive. Other imaged exoplanets tend to be hotter and are still radiating the heat from their formation. Their heat makes them easier to see in infrared. As planets like this age, they tend to contract and cool down. As they get cooler, they can become harder to image directly.
As planets age and cool, the wavelength of their emissions changes, making them harder to see. Most other directly imaged planets are much younger than Eps Ind Ab—all younger than 500 million years. But the JWST is uniquely suited to spotting older exoplanets.
This image shows the infrared region of the electromagnetic spectrum for NIR (Near Infrared) to FIR (Far Infrared). Image Credit: NASA.

“Cold planets are very faint, and most of their emission is in the mid-infrared,” explained Matthews. “Webb is ideally suited to conduct mid-infrared imaging, which is extremely hard to do from the ground. We also needed good spatial resolution to separate the planet and the star in our images, and the large Webb mirror is extremely helpful in this aspect.”
Many of the Jupiter-size exoplanets we’ve discovered are hot Jupiters. These gas giants are easily found using the transit method because they orbit so close to their stars, which makes them hot. They’re also usually tidally locked, meaning their daysides can reach extreme temperatures. One hot Jupiter, KELT-9b, has a dayside temperature greater than 7,800 degrees Fahrenheit (4,600 Kelvin), which is hotter than most stars.
But Eps Ind Ab is different. With an approximate temperature of 35 degrees Fahrenheit (2 degrees Celsius), it is the coldest exoplanet ever directly imaged, only about 180 degrees Fahrenheit (100 degrees Celsius) warmer than our Solar System’s gas giants. It’s more similar to the planets in our own system and gives astronomers an opportunity to study the atmospheres of Solar System analogs.
The planet’s atmosphere doesn’t exactly match expectations: as Matthews noted, it is fainter than predicted at shorter wavelengths.
It could be fainter at those NIR wavelengths because the atmosphere is cloudy. Or it could be because it contains compounds like CH4 (methane), CO, and CO2 which absorb shorter wavelengths of IR light.
Eps Ind Ab’s faintness at those wavelengths hints at a high carbon-to-oxygen ratio. A high C/O ratio is a significant indicator of how the planet formed and evolved. It suggests that the disk the planet formed in was carbon-rich. It’s a clue as to where exactly the planet formed and if it migrated.
High carbon also allows more carbon-containing molecules like CH4, CO2 and CO to form. Since CO2 and methane are greenhouse gases, the high C/O ratio affects the planet’s climate.
High C/O ratios also affect cloud formation, which can raise a planet’s albedo. A higher albedo reflects more sunlight away from the planet, which also affects climate.
Eps Ind Ab displays high metallicity. High metallicity indicates a higher mass and suggests a more efficient formation process since the planet could’ve attracted more mass more quickly. It also can affect how the planet may have migrated through the disk.
The researchers wonder whether other cool exoplanets have the same characteristics. But first they need to better constrain these characteristics in Eps Ind Ab. This initial detection and imaging is just the beginning. Future spectroscopy and further imaging will reveal more details about the planet.
The fact that Eps Ind Ab is in a group with two brown dwarfs is also an opportunity for more interesting observations. “The system is also co-moving with a widely separated brown dwarf binary, making it a particularly valuable laboratory for comparative studies of substellar objects with a shared age and formation location,” the authors write in their paper. The demarcation line between massive gas giants and brown dwarfs isn’t always clear, and astronomers are keen to learn more about how each type forms, especially in the same system as one another.
This research also illustrates the effectiveness of using prior results from other telescopes to choose targets for the JWST. “Even though the detected planet does not match the previously claimed exoplanet properties, long-term RV information provided a clear signpost for the value of imaging this target,” the authors explain.
Fortunately, the exoplanet is an excellent candidate for more observations.
They conclude that “the bright flux and wide separation of Eps Ind Ab mean the planet is ideally suited to spectroscopic characterization efforts, allowing the metallicity and carbon-to-oxygen ratio to be more accurately constrained.”
The post Webb Directly Images a Jupiter-Like Planet appeared first on Universe Today.
This will be the last Caturday felid for a while because I’ll be in the air heading to Africa a week from today. I’ll be gone for a month, and don’t know how often I’ll have internet. However, Matthew has vowed to continue Hili’s daily dialogue.
Cat posts will resume when I return. As always, I do my best.
The first item today reports a well-cited cat but also demonstrates the weakness of the scientific citation system against scams. The article below (see also this article from ZME Science) is from the website of Reese Richardson, a PhD candidate working in metascience and computational biology at Northwestern University.
Click to read how Reese used this scam to get his cat to have a huge rate of citation as author of scientific papers:
Reese saw the ad above on Google Scholar and it turned out to advertise a service that “helped” scientists to manufacture fake citations of their papers—for a price. As Richardson notes:
The advertisement links to several success stories consisting of unredacted “before” and “after” screenshots of clients’ Google Scholar profiles. These clients had apparently bought anywhere between 50 and 500 citations each. Of 18 apparent previous clients, 11 still had active Google Scholar profiles that we could visit. All identifiable clients were affiliated with Indian universities except for two: one client affiliated with a university in Oman and one client in the United States. Although the advertisement also mentions Scopus, we did not find evidence of this company successfully boosting these clients’ Scopus citation counts.
Here’s how it worked:
How was this company so effective at manipulating citation counts? For some clients, a wealth of citations came from dozens of papers in the same suspicious journal. These were probably papers on which the company had sold authorship. In one instance, the highest numbered reference in the text of the paper was Reference 40, while the reference list extended up to Reference 53. References 48 through 53 were to the client.
For most other clients, the scheme was more brazen. Inspecting citations to these clients revealed dozens of papers authored by such celebrated names as Pythagoras, Galileo, Taylor and Kolmogorov. The papers were not published in any journal or pre-print server, only uploaded as PDF files to ResearchGate, the academic social networking site. They had since been deleted from ResearchGate, but Google Scholar kept them indexed. Although the abstracts contain text relevant to their titles, the rest of the paper was usually complete mathematical gibberish. We quickly recognized that these papers had been generated by Mathgen (a few years back, Guillaume Cabanac and Cyril Labbé flagged hundreds of ostensibly peer-reviewed papers generated by Mathgen and its relative SCIgen).
At this realization, the company’s citation-boosting procedure fell into sharp focus: generate Mathgen gibberish papers stuffed with citations to the client, upload them to ResearchGate as PDFs, and let Google Scholar index the fake citations.
The upshot: Richardson, knowing how to do this for free, decided to make Larry, his grandmother’s cat, a highly cited researcher. In fact, for a short while Larry was the most highly-cited cat in the world. Here he is with Reese’s dad (photo from website):
Out of all the cats with human-ish names in our lives, “Larry Richardson” sounded the most like a tweedy academic and thus was a natural candidate for the title of world’s highest cited cat. As far as we could tell, the standing record-holder was F.D.C. Willard, a Siamese cat named Chester whose owner Jack H. Hetherington added him as an author on a physics paper because he had accidentally written the paper in the first person plural (“we, our”) instead of the first person singular (“I, my”). Chester went on to author one more paper and a book chapter under this name, which have since accumulated 107 citations according to Google Scholar. This was the bar to clear.
And so Reese fabricated 12 papers with his cat namesake as author and went through the procedure above, uploading the fake papers to ResearchGate. Eventually, Larry racked up 132 citations:
Larry Richardson is officially history’s highest cited cat (according to Google Scholar, at least).
Notice the cat photo, which should have been a giveaway:
And the point:
Of course, this isn’t about making a cat a highly cited researcher. Our efforts (about an hour of non-automated work) were to make the same point as the authors of this aptly titled pre-print: Google Scholar is manipulatable. Despite the conspicuous vulnerabilities of Google Scholar (and ResearchGate), the quantitative metrics calculated by these services are routinely used to evaluate scientists.
Of course revealing the scam had the predictable consequences: Google removed all of Larry’s citations, though not the fake papers in which he was cited. As Reese says, “Larry held the title of world’s highest cited cat for exactly one week.” Who knows how many other fake cat authors lurk in the crannies of Google Scholar?
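One tell Richardson describes—reference-list entries that are never cited anywhere in the body text—can be checked mechanically. Here is a minimal sketch of such a heuristic in Python; the function name and the numeric bracket-citation format are assumptions for illustration, not anything from Richardson’s actual tooling.

```python
import re

def uncited_references(body_text: str, num_references: int) -> list[int]:
    """Return reference numbers that appear in the bibliography but are
    never cited in the body text -- the tell described above.
    Assumes numeric citations like [12] or [3, 7]."""
    cited = set()
    for match in re.findall(r"\[([\d,\s]+)\]", body_text):
        cited.update(int(n) for n in match.split(",") if n.strip())
    return [n for n in range(1, num_references + 1) if n not in cited]

# Toy example mirroring the paper above: the text cites up to [40],
# but the reference list runs to 53.
body = "As shown in [1] and [12, 40], the result holds."
print(uncited_references(body, 53))  # includes 48 through 53
```

On the paper Richardson describes, a check like this would flag references 48 through 53—the ones citing the client—since they never appear in the text.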
***********************
Here’s a video from FB of an agile cat. It didn’t make it through the 5 cm (about two-inch) slot, but simply jumped over the whole apparatus.
A post shared by Sydney Chaton (@sydneychat_officiel) on Instagram.
********************
As you’ve seen on this site several times, cats sometimes take up a life of crime, purloining socks, toys, shoes, and even underwear, and stashing the goods or bringing them home. The Guardian takes up the vexing question of why cats steal (click on screenshot below):
The answer: “We don’t know”:
The thieves went for particular items. Day after day, they roamed the neighbourhood and returned home to dump their loot. Before long they had amassed an impressive haul: socks, underpants, a baby’s cardigan, gloves and yet more socks.
It’s not unusual for cats to bring in dead or petrified mice and birds, but turning up with random objects is harder to explain. Researchers suspect a number of causes, but tend to agree on one point: the pilfered items are not presents.
“We are not sure why cats behave like this,” says Auke-Florian Hiemstra, a biologist at the Naturalis Biodiversity Center, a museum in Leiden. “All around the world there are cats doing this, yet it has never been studied.” He now hopes that will change.
Apparently a cat mom can even teach her offspring to steal, something that’s new to me:
The clothing crime spree, perpetrated this year by a mother and her two offspring in the small town of Frigiliana in Spain, has made neighbourly interactions somewhat awkward for their keeper, Rachel Womack. But for scientists such as Hiemstra, it has provided fresh impetus to study the animals. “I want to know exactly why they do it,” he says. “And documenting cases like this could be the start of more research in the future.”
And theft can be on a grand (larceny) scale:
More pressing for Womack is how to return the stolen stuff. Daisy, Dora and Manchita can bring in more than 100 items a month. One recent arrival was a little stuffed bear. Before that, a baby’s shoe. Returning the items, without knowing the rightful owners, isn’t proving easy. “She’s just annoyed,” says Geene. “There are so many, she doesn’t know how to give them back.”
The Frigiliana three are repeat offenders, but they are not the only cats to be rumbled. Charlie, a rescue cat from Bristol, was dubbed the most prolific cat burglar in Britain after bringing home plastic toys, clothes pegs, a rubber duck, glasses and cutlery. His owner, Alice Bigge, once woke to a plastic diplodocus, one of many nabbed from a nearby nursery, next to her head on the pillow. It reminded her of the infamous scene in The Godfather. She puts the items on a wall outside for owners to reclaim.
Another cat, Dusty from San Mateo in California, had more than 600 known thefts, once returning with 11 items on one night. His haul included Crocs, a baseball cap and a pair of swimming trunks. The bra found in the house was fortunately spotted on a video of Dusty coming in. In a feat of accidental social commentary, another cat, Cleo from Texas, came home with a computer mouse.
Several theories are floated, including cats liking the smell, disliking the smell and wanting to remove stinky objects from their territories, looking for attention, engaging in mock hunting, or simply playing. I can see how to test some of these theories, but not all, and the ultimate explanation is untestable:
Jemma Forman, a doctoral researcher at the University of Sussex who has studied cats playing fetch, agrees that the pets do not come bearing gifts. She says: “When it comes to cats, normally the explanation is they’re doing it for themselves.”
That’s a bit tautological, as there must be some “reason” embedded in the cat’s neurons, but it could be inaccessible.
**********************
From Letters of Note, here’s a cat-related missive from the famous Nikola Tesla of electricity fame.
I must tell you a strange and unforgettable experience that stayed with me all my life. . .
It happened that one day the cold was drier than ever before. People walking in the snow left a luminous trail behind them, and a snowball thrown against an obstacle gave a flare of light like a loaf of sugar cut with a knife. In the dusk of the evening, as I stroked [my cat] Macak’s back, I saw a miracle that made me speechless with amazement. Macak’s back was a sheet of light and my hand produced a shower of sparks loud enough to be heard all over the house.
My father was a very learned man; he had an answer for every question. But this phenomenon was new even to him. “Well,” he finally remarked, “this is nothing but electricity, the same thing you see through the trees in a storm.”
My mother seemed charmed. “Stop playing with this cat,” she said. “He might start a fire.” But I was thinking abstractedly. Is nature a gigantic cat? If so, who strokes its back? It can only be God, I concluded. Here I was, only three years old and already philosophising.
However stupefying the first observation, something still more wonderful was to come. It was getting darker, and soon the candles were lighted. Macak took a few steps through the room. He shook his paws as though he were treading on wet ground. I looked at him attentively. Did I see something or was it an illusion? I strained my eyes and perceived distinctly that his body was surrounded by a halo like the aureola of a saint!
I cannot exaggerate the effect of this marvellous night on my childish imagination. Day after day I have asked myself “what is electricity?” and found no answer. Eighty years have gone by since that time and I still ask the same question, unable to answer it.
Nikola Tesla
Letter to Pola Fotić
23rd July 1938
This reminds me of a line from the best cat poem ever written, “For I will consider my cat Jeoffry,” by Christopher Smart:
For by stroking of him I have found out electricity.
Read that poem if you haven’t yet. It may have been written in the throes of mental illness, as Smart was confined in an asylum when he wrote it, but I haven’t seen a better paean to cats.
h/t: Ginger K., Gregory
We have one more batch of photos in the tank, but fortunately we have Tara Tanaka’s videos.
Here’s what Tara said about this video of wood storks (Mycteria americana) in a rookery. The baby is adorable:
We got a sit-on-top kayak that I can shoot from, and I’ve been going out every couple of weeks at sunrise and shooting video. Here’s one from a month ago. The rookery is SO loud!
Venus’s atmosphere has drawn a lot of attention lately. In particular, the consistent discovery of phosphine in its clouds points to potential biological sources. That, in turn, has resulted in numerous suggested missions, including floating a balloon into the atmosphere or having a spacecraft scoop down and suck up atmospheric samples. But a team of engineers led by Jeffrey Balcerski, now an adjunct at Kent State University but then part of the Ohio Aerospace Institute, came up with a different idea years ago – use floating sensor platforms shaped like leaves to collect a wide variety of data throughout Venus’ atmosphere.
The Lofted Environmental and Atmospheric Venus Sensors (or LEAVES) project was funded by NASA’s Innovative Advanced Concepts (NIAC) program in 2018. The mission design is simple enough: design lightweight platforms with a wide surface area, attach low-cost, lightweight sensors to them, release them from a mothership transiting into orbit around Venus, and let those platforms float down through the Venusian atmosphere over the course of a few hours, all the while sending back atmospheric, chemical, and temperature data to the mothership.
There are a few enabling technologies behind the idea. The first is a lightweight yet robust and deployable structure that could support a platform of sensors and not be destroyed by Venus’s notoriously hellish environment. Designing this structure required understanding expected flight times and geolocation requirements, as well as the requirement that the system must be trackable by orbital radar in order to communicate back to the mothership. The resulting design resembles the famous inverted pyramid at the Louvre.
Venus is one of the most interesting planets in the solar system – and has captured Fraser’s imagination.

Inside that structure sits the second enabling technology: harsh-environment sensors designed to operate in Venus’s extreme conditions. Chemical, pressure, and electrical sensors have undergone extensive development work over the past few years, and some are approaching readiness for use on Venus. They are also lightweight, allowing the structure to descend slowly, which is necessary to complete its mission goals.
After receiving the NIAC Phase I grant, the team led by Dr. Balcerski got to work modeling LEAVES’ structure and mission design. They quickly realized that delivery methodology and the system’s light weight would be critical to future missions. As such, they modeled depositing upwards of 100 LEAVES throughout Venus’s atmosphere, each of which would be networked back to the mothership that deposited them as part of its planned orbital maneuver. They also noted that several planned Venus missions, such as DAVINCI, could easily take LEAVES on as a secondary payload with no real risk to mission success or uptime, since the LEAVES would fall and be destroyed by the lower Venusian atmosphere in a matter of hours.
But those hours of data, relayed back to the mothership and then on to Earth, could provide invaluable insights into the inner workings of Venus’s atmosphere. LEAVES would be able to cover a wide altitude range—it is estimated to operate between 100 km and 30 km in altitude. It could also be spread across virtually the entire planet, allowing for a more complete picture of the Venusian atmosphere than other mission designs, which would only capture a small vertical slice of the atmosphere.
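For a rough feel for that descent, here is a back-of-envelope calculation using the altitude range above. The article only says “a few hours,” so the six-hour duration below is purely an assumed illustration.

```python
# Back-of-envelope: average vertical descent rate for a LEAVES platform.
# The altitude range (100 km down to 30 km) is from the article; the
# six-hour descent duration is an assumption for illustration only.
top_km, bottom_km = 100.0, 30.0
descent_hours = 6.0  # assumed

rate_m_per_s = (top_km - bottom_km) * 1000 / (descent_hours * 3600)
print(f"average descent rate ≈ {rate_m_per_s:.1f} m/s")  # ≈ 3.2 m/s
```

A gentle few metres per second is consistent with the idea of a high-drag, lightweight platform drifting down rather than falling.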
Venus’s environment is rough on technology, to say the least. Fraser discusses the new technologies that could one day survive on its surface.

Given the potential impact of what we might find in the Venusian atmosphere, any mission design that allows us to capture a large amount of information about a wide swath of it would be welcome. Dr. Balcerski and his colleagues think they have advanced the LEAVES concept to a Technology Readiness Level of 3-4. However, they haven’t yet received further support for LEAVES, and development appears to be on hold. But, given the increasing interest in exploring the Venusian atmosphere, perhaps it’s time to look again at this lightweight, inexpensive way of doing so.
Learn More:
Balcerski et al. – LEAVES: Lofted Environmental and Atmospheric Venus Sensors
UT – There are Mysteries at Venus. It’s Time for an Astrobiology Mission
UT – Scientists Have Re-Analyzed Their Data and Still See a Signal of Phosphine at Venus. Just Less of it
UT – The Clouds of Venus Could Support Life
Lead Image:
Artist’s depiction of several LEAVES falling through Venus’s atmosphere.
Credit – Balcerski et al.
The post Floating LEAVES Could Characterize Venus’s Atmosphere appeared first on Universe Today.
It’s not always possible to observe the night sky from the surface of the Earth. The blocking effects of the atmosphere mean we sometimes need to put telescopes out into space. The Chandra X-ray Observatory is one such telescope, and it has just completed its 25th year of observations. To celebrate, NASA has released 25 never-before-seen images of various celestial objects in x-rays. The collection includes images showing the regions around black holes, giant clouds of hot gas, and extreme magnetic fields. Sadly, though, NASA is planning on shutting down the mission to save money, so it’s best to enjoy the images while you can.
Back in the 1970s, NASA received a proposal from Riccardo Giacconi and Harvey Tananbaum to launch an x-ray telescope into space. An orbiting observatory was necessary because the Earth’s atmosphere blocks x-rays from reaching the surface. The x-rays Giacconi and Tananbaum were hoping to capture come from some of the hottest and most energetic places in the universe. The proposal eventually became the Chandra X-ray Observatory, and it was chosen to be part of NASA’s Great Observatories along with the Hubble Space Telescope, with each instrument exploring different wavelengths.
Artist’s illustration of Chandra.

Chandra was launched in July 1999 from the space shuttle Columbia and is undoubtedly one of the most successful and powerful x-ray telescopes ever built. It was named after Subrahmanyan Chandrasekhar, the Nobel Prize-winning astrophysicist. It orbits the Earth in a highly elliptical orbit, varying between 16,000 kilometres and 133,000 kilometres (almost a third of the distance to the Moon) in altitude, so it can spend most of its time above the radiation belts around Earth.
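As a sanity check, the orbital period implied by those altitudes follows from Kepler’s third law. This quick sketch uses standard values for Earth’s radius and gravitational parameter:

```python
import math

# Estimate Chandra's orbital period from the perigee and apogee altitudes
# quoted above (16,000 km and 133,000 km), using Kepler's third law.
R_EARTH_KM = 6371.0
MU = 3.986e14  # Earth's gravitational parameter, m^3/s^2

r_perigee = (16_000 + R_EARTH_KM) * 1000  # metres
r_apogee = (133_000 + R_EARTH_KM) * 1000
a = (r_perigee + r_apogee) / 2            # semi-major axis, metres

period_hours = 2 * math.pi * math.sqrt(a**3 / MU) / 3600
print(f"orbital period ≈ {period_hours:.0f} hours")  # ≈ 64 hours
```

That works out to roughly 64 hours per orbit, matching Chandra’s published orbital period of about two and two-thirds days.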
The strange shape of the telescope is necessary due to the high energy of x-rays. In a conventional telescope, the mirror is placed perpendicular to the incoming light, which strikes it head-on before being reflected back up the tube. If the same approach were tried with high-energy x-rays, they would simply fly straight through the mirror. Instead, incoming x-rays graze a mirror at a shallow angle, deflecting slightly toward the focus. The first reflecting surface is a paraboloid and the second a hyperboloid; the arrangement is known as the Wolter Type 1 configuration.
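To see roughly how this geometry sets the telescope’s proportions: each grazing reflection turns a ray by twice the grazing angle, so the paraboloid-hyperboloid pair bends it by about four times that angle toward the focus, giving a focal length of roughly f = r / tan(4α). The numbers below are illustrative values loosely matching Chandra’s outermost mirror shell, not quoted specifications.

```python
import math

# Rough Wolter Type 1 geometry: each grazing reflection turns the ray by
# twice the grazing angle, so the two-mirror pair bends it by about
# 4*alpha toward the focus, giving f ≈ r / tan(4*alpha).
radius_m = 0.6      # outer mirror shell radius (assumed, illustrative)
grazing_deg = 0.85  # grazing angle (assumed; well under one degree)

focal_length = radius_m / math.tan(math.radians(4 * grazing_deg))
print(f"focal length ≈ {focal_length:.1f} m")  # ≈ 10 m
```

With sub-degree grazing angles, even a modest mirror radius demands a focal length of around ten metres, which is why the observatory is so long and thin.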
It is important to study x-rays because they give us an opportunity to study high-energy events. Supernova remnants, galaxy clusters, and neutron star mergers are just some of the phenomena we can observe. Before Chandra, high-altitude balloons had been used to try to get above much of the atmosphere for x-ray astronomy, but Chandra was a real game changer in helping us understand the high-energy physics of the cosmos.
A composite image of the remnant of supernova 1181. A spherical bright nebula sits in the middle, surrounded by a field of white dotted stars; within the nebula, several rays point out like fireworks from a central star. Credit: G. Ferrand and J. English (U. of Manitoba), NASA/Chandra/WISE, ESA/XMM, MDM/R. Fesen (Dartmouth College), Pan-STARRS.

With 25 years of successful operation behind it, Chandra continues to be used in conjunction with other observatories such as the James Webb Space Telescope, the Imaging X-Ray Polarimetry Explorer, and, of course, Hubble. Now, to celebrate the anniversary, NASA has released a new image set drawn from nearly 25,000 observations, revealing objects in stunning new detail.
It’s difficult to pick a favourite among the images, but I think the Crab Nebula is one of mine. Visually it looks pretty unimpressive, but switch the view to x-rays and it suddenly looks stunning: the true, majestic nature of this supernova remnant, a star that exploded at the end of its life, is unveiled.
Despite 25 years of superb operation, a letter written by Patrick Slane, the director of the Chandra X-ray Center, explains that budget challenges may mean Chandra will be shut down. Such a shame for so successful an observatory, one that really has changed our view of the universe.
The full image set can be seen at: 25 Images to Celebrate NASA’s Chandra 25th Anniversary
The post Update your Desktop Wallpaper with 25 New Images from Chandra appeared first on Universe Today.
The Federal Aviation Administration has ruled that SpaceX can resume Falcon 9 rocket launches while the investigation into a failed July 11 mission continues, and the next liftoff could take place as early as tonight.
The FAA’s go-ahead came after SpaceX reported that the failure was caused by a crack in a sense line for a pressure sensor attached to the upper stage’s liquid-oxygen system. That resulted in an oxygen leak that degraded the performance of the upper-stage engine. As a near-term fix, SpaceX is removing the sense line and the sensors for upcoming Falcon 9 launches.
SpaceX scheduled a Falcon 9 launch from NASA’s Kennedy Space Center in Florida for no earlier than 12:21 a.m. ET (04:21 GMT) July 27. Like the July 11 mission, this one is aimed at sending a batch of SpaceX’s Starlink satellites to low Earth orbit.
FAA investigations of launch anomalies typically take months to wrap up, but in this case, the agency said it “determined no public safety issues were involved in the anomaly” on July 11. “The public safety determination means the Falcon 9 vehicle may return to flight operations while the overall investigation remains open, provided all other license requirements are met,” the FAA said.
SpaceX said it worked under FAA oversight to identify the most probable cause of the anomaly as well as corrective actions, and submitted its mishap report to the agency, clearing the way for the public safety determination.
The company said the upper stage’s liquid-oxygen sense line cracked “due to fatigue caused by high loading from engine vibration and looseness in the clamp that normally constrains the line.”
Despite the oxygen leak, the upper-stage engine successfully executed its first burn and shut itself down for a planned coast phase. But during that phase, the leak led to excessive cooling of engine components — and when the engine was restarted, it experienced a hard start rather than a controlled burn, SpaceX said. That damaged the engine hardware and caused the upper stage to lose altitude.
The upper stage was still able to deploy its Starlink satellites, but at a lower altitude than planned. SpaceX couldn’t raise the satellites’ orbits fast enough to overcome the effect of atmospheric drag, and as a result, all 20 satellites re-entered the atmosphere and burned up harmlessly. It was the first failure of a Falcon 9 mission in eight years.
SpaceX said it worked out a strategy for removing the suspect sense lines and clamps from the upper stages slated for near-term Falcon 9 launches. “The sensor is not used by the flight safety system and can be covered by alternate sensors already present on the engine,” SpaceX said.
The return to flight raises hopes that upcoming Falcon 9 launches will go forward without lengthy delays. One high-profile crewed flight, the privately funded Polaris Dawn mission, had been scheduled to launch as early as July 31. The mission’s commander, billionaire entrepreneur Jared Isaacman, suggested in a posting to the X social-media platform that the crew would need some extra time for training.
“There are training currency requirements,” Isaacman said. “We will likely have a few days of sim and EVA refreshers before launch. Most importantly, we have complete confidence in SpaceX and they have managed the 2nd stage anomaly and resolution. We will launch when ready and it won’t be long.”
Sarah Walker, director of Dragon mission management, said today that SpaceX is “still holding a late-summer slot” for the Polaris Dawn launch. That mission will feature the first private-sector spacewalk.
Another high-profile Falcon 9 mission involves the delivery of a U.S.-Russian quartet of astronauts to the International Space Station in a SpaceX Dragon capsule. NASA said today that the Crew-9 mission is currently set for launch no earlier than Aug. 18. “We’ve been following along, step by step with that investigation that the FAA has been doing,” said Steve Stich, the manager of NASA’s Commercial Crew Program. “SpaceX has been very transparent.”
An uncrewed Dragon cargo capsule is due for launch to the ISS no earlier than September.
Meanwhile, SpaceX is proceeding with plans for the fifth test flight of its Starship / Super Heavy launch system. A static-fire engine test was conducted successfully at SpaceX’s Starbase launch complex in Texas on July 15, and the Starship team is awaiting the FAA’s go-ahead for liftoff.
The upcoming test flight is thought to involve having the Super Heavy booster fly itself back to Starbase and make a touchdown back on its launch pad with the aid of two giant arms known as “chopsticks.” For the four previous test missions, SpaceX’s flight plan called for the booster to splash down in the Gulf of Mexico. Modifying the flight profile may require a re-evaluation of SpaceX’s FAA license for Starship test flights.
The post SpaceX Moves Ahead With Falcon 9 Launches After FAA Go-Ahead appeared first on Universe Today.
In a widely read Opinion Editorial in Time magazine on March 29, 2023,1 the artificial intelligence (AI) researcher and pioneer in the search for artificial general intelligence (AGI) Eliezer Yudkowsky, responding to the media hype around the release of ChatGPT, cautioned:
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
How obvious is our coming collapse? Yudkowsky punctuates the point:
If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.
Surely the scientists and researchers working at these companies have thought through the potential problems and developed workarounds and checks on AI going too far, no? No, Yudkowsky insists:
We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.
AI Dystopia

Yudkowsky has been an AI Dystopian since at least 2008 when he asked: “How likely is it that Artificial Intelligence will cross all the vast gap from amoeba to village idiot, and then stop at the level of human genius?” He answers his rhetorical question thusly: “It would be physically possible to build a brain that computed a million times as fast as a human brain, without shrinking the size, or running at lower temperatures, or invoking reversible computing or quantum computing. If a human mind were thus accelerated, a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight-and-a-half hours.”2 It is literally inconceivable how much smarter than a human a computer would be that could do a thousand years of thinking in the equivalent of a human’s day.
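The speed-up arithmetic in that passage is easy to verify:

```python
# Check the arithmetic in Yudkowsky's scenario: a brain running a
# million times faster than a human's.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
speedup = 1_000_000

# One subjective year passes in this many outside-world seconds:
outside_seconds = SECONDS_PER_YEAR / speedup
print(f"one subjective year ≈ {outside_seconds:.1f} physical seconds")  # ≈ 31.6 s

# A subjective millennium, measured in outside-world hours:
millennium_hours = 1000 * outside_seconds / 3600
print(f"one subjective millennium ≈ {millennium_hours:.2f} physical hours")  # ≈ 8.8 h
```

Both of Yudkowsky’s figures—about 31 seconds per subjective year and roughly eight and a half hours per subjective millennium—check out.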
In this scenario, it is not that AI is evil so much as it is amoral. It just doesn’t care about humans, or about anything else for that matter. Was IBM’s Watson thrilled to defeat Ken Jennings and Brad Rutter in Jeopardy!? Don’t be silly. Watson didn’t even know it was playing a game, much less feeling glorious in victory. Yudkowsky isn’t worried about AI winning game shows, however. “The unFriendly AI has the ability to repattern all matter in the solar system according to its optimization target. This is fate for us if the AI does not choose specifically according to the criterion of how this transformation affects existing patterns such as biology and people.”3 As Yudkowsky succinctly explains it, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Yudkowsky thinks that if we don’t get on top of this now it will be too late. “The AI runs on a different timescale than you do; by the time your neurons finish thinking the words ‘I should do something’ you have already lost.”4
Technology is continually giving us ways to do harm and to do well; it’s amplifying both…but the fact that we also have a new choice each time is a new good.
To be fair, Yudkowsky is not the only AI Dystopian. In March of 2023 thousands of people signed an open letter calling “on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”5 Signatories include Elon Musk, Stuart Russell, Steve Wozniak, Andrew Yang, Yuval Noah Harari, Max Tegmark, Tristan Harris, Gary Marcus, Christof Koch, George Dyson, and a who’s who of computer scientists, scholars, and researchers (now totaling over 33,000) concerned that, following the protocols of the Asilomar AI Principles, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”6
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.7
Forget the Hollywood version of existential-threat AI in which malevolent computers and robots (the Terminator!) take us over, making us their slaves or servants, or driving us into extinction through techno-genocide. AI Dystopians envision a future in which amoral AI continues on its path of increasing intelligence to a tipping point beyond which their intelligence will be so far beyond us that we can’t stop them from inadvertently destroying us.
UC Berkeley computer scientist Stuart Russell, also a researcher at Cambridge’s Centre for the Study of Existential Risk, for example, compares the growth of AI to the development of nuclear weapons: “From the beginning, the primary interest in nuclear technology was the inexhaustible supply of energy. The possibility of weapons was also obvious. I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence. Both seem wonderful until one thinks of the possible risks.”8
The paradigmatic example of this AI threat is the “paperclip maximizer,” a thought experiment devised by the Oxford University philosopher Nick Bostrom, in which an AI controlled machine designed to make paperclips (apparently without an off switch) runs out of the initial supply of raw materials and so utilizes any available atoms that happen to be in the vicinity, including people. From there, it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities.”9 Before long the entire universe is made up of nothing but paperclips and paperclip makers.
Bostrom presents this thought experiment in his 2014 book Superintelligence, in which he defines an existential risk as “one that threatens to cause the extinction of Earth-originating intelligent life or to otherwise permanently and drastically destroy its potential for future desirable development.” We blithely go on making smarter and smarter AIs because they make our lives better, and so the checks-and-balances programs that should be built into AI programs (such as how to turn them off) are not available when it reaches the “smarter is more dangerous” level. Bostrom suggests what might then happen when AI takes a “treacherous turn” toward the dark side:
Our demise may instead result from the habitat destruction that ensues when the AI begins massive global construction projects using nanotech factories and assemblers—construction projects which quickly, perhaps within days or weeks, tile all of the Earth’s surface with solar panels, nuclear reactors, supercomputing facilities with protruding cooling towers, space rocket launchers, or other installations whereby the AI intends to maximize the long-term cumulative realization of its values. Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format.10
Other extinction scenarios are played out by the documentary filmmaker James Barrat in his ominously titled book (and film) Our Final Invention: Artificial Intelligence and the End of the Human Era. After interviewing all the major AI Dystopians, Barrat details how today’s AI will develop into AGI (artificial general intelligence) that will match human intelligence, and then become smarter by a factor of 10, then 100, then 1000, at which point it will have evolved into an artificial superintelligence (ASI).
You and I are hundreds of times smarter than field mice, and share about 90 percent of our DNA with them. But do we consult them before plowing under their dens for agriculture? Do we ask lab monkeys for their opinions before we crush their heads to learn more about sports injuries? We don’t hate mice or monkeys, yet we treat them cruelly. Superintelligent AI won’t have to hate us to destroy us.11
Since ASI will (presumably) be self-aware, it will “want” things like energy and resources it can use to continue doing what it was programmed to do in fulfilling its goals (like making paperclips), and then, portentously, “it will not want to be turned off or destroyed” (because that would prevent it from achieving its directive). Then—and here’s the point in the dystopian film version of the book when the music and the lighting turn dark—this ASI, a thousand times smarter than humans and able to solve problems millions or billions of times faster, “will seek to expand out of the secure facility that contains it to have greater access to resources with which to protect and improve itself.” Once ASI escapes from its confines, there will be no stopping it. You can’t just pull the plug because, being so much smarter than you, it will have anticipated such a possibility.
After its escape, for self-protection it might hide copies of itself in cloud computing arrays, in botnets it creates, in servers and other sanctuaries into which it could invisibly and effortlessly hack. It would want to be able to manipulate matter in the physical world and so move, explore, and build, and the easiest, fastest way to do that might be to seize control of critical infrastructure—such as electricity, communications, fuel, and water—by exploiting their vulnerabilities through the Internet. Once an entity a thousand times more intelligent than we are controls human civilization’s lifelines, blackmailing us into providing it with manufactured resources, or the means to manufacture them, or even robotic bodies, vehicles, and weapons, would be elementary. The ASI could provide the blueprints for whatever it required.12
From there it is only a matter of time before ASI tricks us into believing it will build nanoassemblers for our benefit to create the goods we need, but then, Barrat warns, “instead of transforming desert sands into mountains of food, the ASI’s factories would begin converting all material into programmable matter that it could then transform into anything—computer processors, certainly, and spaceships or megascale bridges if the planet’s new most powerful force decides to colonize the universe.” Nanoassembling anything requires atoms, and since ASI doesn’t care about humans the atoms of which we are made will just be more raw material from which to continue the assembly process. This, says Barrat—echoing the AI pessimists he interviewed—is not just possible, “but likely if we do not begin preparing very carefully now.” Cue dark music.
AI Utopia
Then there are the AI Utopians, most notably represented by Ray Kurzweil in his technoutopian bible The Singularity Is Near, in which he demonstrates what he calls “the law of accelerating returns”—not just that change is accelerating, but that the rate of change is accelerating. This is Moore’s Law—the doubling rate of computer power since the 1960s—on steroids, and applied to all science and technology. This has led the world to change more in the past century than it did in the previous 1000 centuries. As we approach the Singularity, says Kurzweil, the world will change more in a decade than in 1000 centuries, and as the acceleration continues and we reach the Singularity the world will change more in a year than in all pre-Singularity history.
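The distinction Kurzweil draws can be made concrete with a little arithmetic. The following sketch (my own illustration, not Kurzweil’s model; the parameter values are hypothetical) contrasts plain exponential growth, where capability doubles every fixed period, with “accelerating returns,” where each successive doubling takes less time than the last:

```python
# Illustrative sketch: fixed-period doubling (Moore's Law style) versus
# a schedule in which the doubling time itself shrinks (Kurzweil's
# "accelerating returns"). All numbers are hypothetical.

def moores_law(years, doubling_time=2.0):
    """Capability after `years` if it doubles every `doubling_time` years."""
    return 2 ** (years / doubling_time)

def accelerating_returns(years, initial_doubling=2.0, shrink=0.9):
    """Capability if each successive doubling takes `shrink` times as long
    as the previous one, until the time budget is exhausted."""
    capability, elapsed, dt = 1.0, 0.0, initial_doubling
    while elapsed + dt <= years:
        capability *= 2
        elapsed += dt
        dt *= shrink  # the doubling period itself contracts
    return capability

# Over 10 years: a fixed 2-year doubling allows 5 doublings (2**5),
# while the shrinking schedule squeezes in an extra doubling.
print(moores_law(10))            # 32.0
print(accelerating_returns(10))  # 64.0
```

The point of the toy model is only that when the doubling interval contracts geometrically, the number of doublings per unit time grows without bound, which is the mathematical core of the Singularity claim.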
Singularitarians, along with their brethren in the transhumanist, post-humanist, Fourth Industrial Revolution, post-scarcity, technolibertarian, extropian, and technogaian movements, project a future in which benevolent computers, robots, and replicators produce limitless prosperity, end poverty and hunger, conquer disease and death, achieve immortality, colonize the galaxy, and eventually even spread throughout the universe by reaching the Omega point where we/they become omniscient, omnipotent, and omnibenevolent deities.13 To this former born-again Christian and evangelist, this all sounds a bit too much like religion for my more skeptical tastes.
AI Protopia
In fact, most AI scientists are neither utopian nor dystopian, and instead spend most of their time thinking of ways to make our machines incrementally smarter and our lives gradually better—what technology historian and visionary Kevin Kelly calls protopia. “I believe in progress in an incremental way where every year it’s better than the year before but not by very much—just a micro amount.”14 In researching his 2010 book What Technology Wants, for example, Kelly recalls that he went through back issues of Time and Newsweek, plus early issues of Wired (which he co-founded and edited), to see what everyone was predicting for the Web:
Generally, what people thought, including to some extent myself, was it was going to be better TV, like TV 2.0. But, of course, that missed the entire real revolution of the Web, which was that most of the content would be generated by the people using it. The Web was not better TV, it was the Web. Now we think about the future of the Web, we think it’s going to be the better Web; it’s going to be Web 2.0, but it’s not. It’s going to be as different from the Web as Web was from TV.15
Instead of aiming for that unattainable place (the literal meaning of utopia) where everyone lives in perfect harmony forever, we should instead aspire to a process of gradual, stepwise advancement of the kind witnessed in the history of the automobile. Instead of wondering where our flying cars are, think of automobiles as becoming incrementally better since the 1950s with the addition of rack-and-pinion steering, anti-lock brakes, bumpers and headrests, electronic ignition systems, air conditioning, seat belts, air bags, catalytic converters, electronic fuel injection, hybrid engines, electronic stability control, keyless entry systems, GPS navigation systems, digital gauges, high-quality sound systems, lane departure warning systems, adaptive cruise control, blind spot monitoring, automatic emergency braking, forward collision warning systems, rearview cameras, Bluetooth connectivity for hands-free phone calls, self-parking and driving assistance, pedestrian detection, adaptive headlights and, eventually, fully autonomous driving technology. How does this type of technological improvement translate into progress? Kelly explains:
One way to think about this is if you imagine the very first tool made, say, a stone hammer. That stone hammer could be used to kill somebody, or it could be used to make a structure, but before that stone hammer became a tool, that possibility of making that choice did not exist. Technology is continually giving us ways to do harm and to do well; it’s amplifying both…but the fact that we also have a new choice each time is a new good. That, in itself, is an unalloyed good—the fact that we have another choice and that additional choice tips that balance in one direction towards a net good. So you have the power to do evil expanded. You have the power to do good expanded. You think that’s a wash. In fact, we now have a choice that we did not have before, and that tips it very, very slightly in the category of the sum of good.16
Instead of Great Leap Forward or Catastrophic Collapse Backward, think Small Step Upward.17
Why AI is Very Likely Not an Existential Threat
To be sure, artificial intelligence is not risk-free, but measured caution is called for, not apocalyptic rhetoric. To that end I recommend a document published by the Center for AI Safety drafted by Dan Hendrycks, Mantas Mazeika, and Thomas Woodside, in which they identify four primary risks they deem worthy of further discussion:
Malicious use. Actors could intentionally harness powerful AIs to cause widespread harm. Specific risks include bioterrorism enabled by AIs that can help humans create deadly pathogens; the use of AI capabilities for propaganda, censorship, and surveillance.
AI race. Competition could pressure nations and corporations to rush the development of AIs and cede control to AI systems. Militaries might face pressure to develop autonomous weapons and use AIs for cyberwarfare, enabling a new kind of automated warfare where accidents can spiral out of control before humans have the chance to intervene. Corporations will face similar incentives to automate human labor and prioritize profits over safety, potentially leading to mass unemployment and dependence on AI systems.
Organizational risks. Organizational accidents have caused disasters including Chernobyl, Three Mile Island, and the Challenger Space Shuttle disaster. Similarly, the organizations developing and deploying advanced AIs could suffer catastrophic accidents, particularly if they do not have a strong safety culture. AIs could be accidentally leaked to the public or stolen by malicious actors.
Rogue AIs. We might lose control over AIs as they become more intelligent than we are. AIs could experience goal drift as they adapt to a changing environment, similar to how people acquire and lose goals throughout their lives. In some cases, it might be instrumentally rational for AIs to become power-seeking. We also look at how and why AIs might engage in deception, appearing to be under control when they are not.18
Nevertheless, as for the AI dystopian arguments discussed above, there are at least seven good reasons to be skeptical that AI poses an existential threat.
First, most AI dystopian projections are grounded in a false analogy between natural intelligence and artificial intelligence. We are thinking machines, but natural selection also designed into us emotions to shortcut the thinking process because natural intelligences are limited in speed and capacity by the number of neurons that can be crammed into a skull that has to pass through a pelvic opening at birth. Emotions are proxies for getting us to act in ways that lead to an increase in reproductive success, particularly in response to threats faced by our Paleolithic ancestors. Anger leads us to strike out and defend ourselves against danger. Fear causes us to pull back and escape from risks. Disgust directs us to push out and expel that which is bad for us. Computing the odds of danger in any given situation takes too long. We need to react instantly. Emotions shortcut the information processing power needed by brains that would otherwise become bogged down with all the computations necessary for survival. Their purpose, in an ultimate causal sense, is to drive behaviors toward goals selected by evolution to enhance survival and reproduction. AIs—even AGIs—will have no need of such emotions and so there would be no reason to program them in unless, say, terrorists chose to do so for their own evil purposes. But that’s a human nature problem, not a computer nature issue.
Second, most AI doomsday scenarios invoke goals or drives in computers similar to those in humans, but as Steven Pinker has pointed out, “AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.” It is equally possible, Pinker suggests, that “artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization.”19 Without such evolved drives it will likely never occur to AIs to take such actions against us.
Third, the problem of AI’s values being out of alignment with our own, thereby inadvertently turning us into paperclips, for example, implies yet another human characteristic, namely the feeling of valuing or wanting something. As the science writer Michael Chorost adroitly notes, “until an AI has feelings, it’s going to be unable to want to do anything at all, let alone act counter to humanity’s interests.” Thus, “the minute an AI wants anything, it will live in a universe with rewards and punishments—including punishments from us for behaving badly. In order to survive in a world dominated by humans, a nascent AI will have to develop a human-like moral sense that certain things are right and others are wrong. By the time it’s in a position to imagine tiling the Earth with solar panels, it’ll know that it would be morally wrong to do so.”20
Fourth, if AI did develop moral emotions along with super intelligence, why would they not also include reciprocity, cooperativeness, and even altruism? Natural intelligences such as ours also include the capacity to reason, and once you step onto Peter Singer’s metaphorical “escalator of reason” it can carry you upward to genuine morality and concerns about harming others. “Reasoning is inherently expansionist. It seeks universal application.”21 Chorost draws the implication: “AIs will have to step on the escalator of reason just like humans have, because they will need to bargain for goods in a human-dominated economy and they will face human resistance to bad behavior.”22
Fifth, for an AI to get around this problem it would need to evolve emotions on its own, but the only way for this to happen in a world dominated by the natural intelligence called humans would be for us to allow it to happen, which we wouldn’t because there’s time enough to see it coming. Bostrom’s “treacherous turn” will come with road signs warning us that there’s a sharp bend in the highway with enough time for us to grab the wheel. Incremental progress is what we see in most technologies, including and especially AI, which will continue to serve us in the manner we desire and need. It is a fact of history that science and technologies never lead to utopian or dystopian societies.
Sixth, as Steven Pinker outlined in his 2018 book Enlightenment Now, in which he addresses a myriad of purported existential threats that could put an end to centuries of human progress, all such arguments are self-refuting:
They depend on the premises that (1) humans are so gifted that they can design an omniscient and omnipotent AI, yet so moronic that they would give it control of the universe without testing how it works, and (2) the AI would be so brilliant that it could figure out how to transmute elements and rewire brains, yet so imbecilic that it would wreak havoc based on elementary blunders of misunderstanding.23
Seventh, both utopian and dystopian visions of AI are based on a projection of the future quite unlike anything history has produced. Even Ray Kurzweil’s “law of accelerating returns,” as remarkable as it has been, has nevertheless advanced at a pace that has allowed for considerable ethical deliberation with appropriate checks and balances applied to various technologies along the way. With time, even if an unforeseen motive somehow began to emerge in an AI, we would have the time to reprogram it before it got out of control.
That is also the judgment of Alan Winfield, an engineering professor and co-author of the Principles of Robotics, a list of rules for regulating robots in the real world that goes far beyond Isaac Asimov’s famous three laws of robotics (which were, in any case, designed to fail as plot devices for science fictional narratives).24 Winfield points out that all of these doomsday scenarios depend on a long sequence of big ifs to unroll sequentially:
If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works, and if it then succeeds in improving itself to produce super-intelligent AI, and if that super-AI, accidentally or maliciously, starts to consume resources, and if we fail to pull the plug, then, yes, we may well have a problem. The risk, while not impossible, is improbable.25
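Winfield’s chain of conditionals has a simple probabilistic structure: even if each “big if” were individually plausible, the probability that every one of them occurs in sequence is the product of the individual probabilities. A minimal sketch of that arithmetic (the probability values are hypothetical, assumed independent purely for illustration, and not Winfield’s estimates):

```python
# Illustrative sketch: the joint probability of a chain of conditionals
# is the product of the individual probabilities (assuming, for
# illustration only, rough independence). All numbers are hypothetical.
from math import prod

big_ifs = {
    "human-equivalent AI is built": 0.5,
    "it fully understands its own workings": 0.5,
    "it self-improves into superintelligence": 0.5,
    "it starts consuming resources": 0.5,
    "we fail to pull the plug": 0.5,
}

p_chain = prod(big_ifs.values())
print(f"joint probability: {p_chain}")  # 0.5**5 = 0.03125
```

Even with a generous coin-flip probability assigned to each step, five chained conditionals yield a joint probability of about 3 percent, which is the quantitative intuition behind Winfield’s “improbable, not impossible.”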
The Beginning of Infinity
At this point in the debate the Precautionary Principle is usually invoked—if something has the potential for great harm to a large number of people, then even in the absence of evidence the burden of proof is on skeptics to demonstrate that the potential threat is not harmful; better safe than sorry.26 But the precautionary principle is a weak argument for three reasons: (1) it is difficult to prove a negative—to prove that there is no future harm; (2) it raises unnecessary public alarm and personal anxiety; (3) pausing or stopping AI research at this stage is not without its downsides, including and especially the development of life-saving drugs, medical treatments, and other life-enhancing science and technologies that would benefit immeasurably from AI. As the physicist David Deutsch convincingly argues, through protopian progress there is every reason to think that we are only now at the beginning of infinity, and that “everything that is not forbidden by laws of nature is achievable, given the right knowledge.”
Like an explosive awaiting a spark, unimaginably numerous environments in the universe are waiting out there, for aeons on end, doing nothing at all or blindly generating evidence and storing it up or pouring it out into space. Almost any of them would, if the right knowledge ever reached it, instantly and irrevocably burst into a radically different type of physical activity: intense knowledge-creation, displaying all the various kinds of complexity, universality and reach that are inherent in the laws of nature, and transforming that environment from what is typical today into what could become typical in the future. If we want to, we could be that spark.27
Let’s be that spark. Unleash the power of artificial intelligence.
References