You’re on the fourth human mission to Mars, and you’re told the Odyssey spacecraft designed to take you there will be the smoothest ride you’ll ever take. It features a newly christened electric propulsion engine, which was in the late stages of testing during the first three missions. The mission starts and the spacecraft travels at a crawl, and you wonder if it’s broken. A week goes by and you’re now traveling at more than 400,000 kilometers (250,000 miles) per hour, and your mind is blown as to how fast you’re going, how quickly that happened, and that this mission might be more awesome than you thought.
Astronomers now believe there is at least one planet for every star in the Milky Way, but new research has revealed a deeply unsettling twist in that picture. The most common planets in our Galaxy, it turns out, are almost entirely absent around the most common stars. Using data from NASA's TESS satellite, researchers found that the small, faint stars that make up the vast majority of the Milky Way seem to host rocky super-Earths in abundance, but virtually no sub-Neptunes, the planet type previously thought to be plentiful. The finding doesn't just refine existing theories of planet formation; it rewrites them.
An international team of astrophysicists has just released one of the largest cosmological datasets ever assembled: a mouthwatering 2.5 petabytes of simulated universe, freely available to researchers anywhere in the world. Built using a supercomputer and a suite of simulations called FLAMINGO, the data models how matter has evolved since the Big Bang, tracing everything from individual galaxies to the vast cosmic web that stretches across billions of light years.
When NASA's Artemis II crew swung around the Moon in April, the world watched in extraordinary detail and a breakthrough laser communications system was the reason why. Bolted to the outside of the Orion capsule, a compact optical terminal beamed 484 gigabytes of data back to Earth using invisible infrared light, outpacing traditional radio systems by a factor of tens. The result was some of the most vivid imagery ever captured in deep space, and a technology demonstration that will fundamentally change how humanity communicates beyond Earth.
Our Sun is a bit of an outlier in the general stellar population. We typically think of stars as solitary wanderers through the galaxy, but roughly half of Sun-like stars are gravitationally bound to at least one companion star. If there are two stars, it’s known as a “binary” system, but in many cases there are even more stars, all collectively tied together by gravity. Astronomers have long debated why this happens, and a new paper, available in pre-print on arXiv from Ryan Sponzilli, a graduate student at the University of Illinois, makes an argument for a mechanism known as disk fragmentation.
There are tens of thousands of Near-Earth Objects (NEOs) that represent some of the most easily accessible resources in the solar system — if we can get to them, at least. Planning trajectories to rendezvous with these miniature worlds is notoriously difficult, and requires a massive amount of computational power to calculate. But a new paper from astrodynamicist Alessandro Beolchi of Khalifa University of Science and Technology and his co-authors offers a much less computationally intensive way to find these trajectories, with the added bonus of uncovering less energy-intensive paths to boot.
They’re prolific, yet often elusive for northern hemisphere observers. If skies are clear, watch for a strong annual meteor shower that’s attained an almost mythical status: the May Eta Aquariids. The Eta Aquariid meteor shower is active from April 19th until May 28th, with the key night being the evening of May 5th into the morning of May 6th.
Despite outward appearances, the internal workings of ice giants like Uranus and Neptune are extremely chaotic. Pressures millions of times greater than those at Earth’s sea level combine with temperatures in the thousands of degrees to make some pretty weird materials. Now, a new paper from researchers at the Carnegie Institution, published in Nature Communications, describes a completely new state of matter that might exist in these extreme environments – a “quasi-1D superionic” phase.
I thought everyone needed one more thing to worry about, so here you go: evolving AI. When I hear this phrase I think of two things. The first is AI systems designed to simulate organic evolution. The second is artificially intelligent systems that are capable of evolving themselves. The latter is the type you need to worry about.
Systems that simulate evolution already exist – Avida, Biogenesis, Grovolve, Tierra, Framsticks, and others. They basically have some code that competes for a resource or to complete a task, and the code randomly mutates and reproduces. That’s it – that’s all you need for an evolution simulation. The code can compete for computer resources, or live in a physics simulator with digital creatures trying to move quickly across terrain. These are sometimes gamified for entertainment, but they are also used for serious research, to study patterns within evolutionary systems. I would love to see these kinds of systems get more and more sophisticated, even to the point of reasonably simulating living systems. Such systems could be used to test hypotheses about evolution – and would also disprove a lot of silly creationist talking points.
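A minimal version of such a simulation fits in a few lines. The sketch below is a toy of my own, not the code of Avida or any of the systems named above: bit strings “compete” to match a target, mutate randomly, and the fittest half reproduces – which is the entire evolutionary loop just described.

```python
import random

# Toy evolution simulation: bit-string genomes compete to match a target,
# mutate randomly, and the fittest half reproduces each generation.
# Purely illustrative; real systems like Avida evolve executable code.
TARGET = [1] * 20
POP_SIZE = 50
MUTATION_RATE = 0.02  # per-bit chance of flipping during reproduction

def fitness(genome):
    # Fitness = number of bits matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

random.seed(0)
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(500):
    # Selection: the fittest half survives; each survivor spawns one
    # mutated offspring to refill the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    if fitness(population[0]) == len(TARGET):
        break  # a perfect match has evolved

print("generations:", generation, "best fitness:", fitness(population[0]))
```

Random variation plus differential reproduction is all it takes; everything else (the physics simulators, the digital creatures) is elaboration of the fitness function.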
But now we are talking about evolvable AI – AI systems that are capable of developing themselves through evolutionary processes. A new paper in PNAS discusses the potential power and risks of such systems. They echo the kinds of issues that have been explored in science fiction for decades. The authors write: “Evolvable AI (eAI), i.e., AI systems whose components, learning rules, and deployment conditions can themselves undergo Darwinian evolution, may soon emerge from current trends in generative, agentic, and embodied AI.” The consequences, they argue, have not been adequately addressed in discussions of the potential risks of rapidly developing AI abilities.
The authors distinguish two types of evolving AI – breeder systems and ecological systems. In breeder scenarios the programmers are in control of the process, selecting which code to “breed” and evaluating the outcome. This process is like a digital version of domestication, and has the potential, if done wisely, to maintain control. In fact, systems can be bred to have greater predictability and control. There are still risks here. So far humanity has not bred an animal to be more intelligent than humans. This could theoretically happen with AI, resulting in emergent behavior, not specifically selected for, that could escape the control of human programmers.
A far greater risk, however, is the ecosystem scenario in which the program itself produces variation and selection, without external control. They argue that such systems lead to “selfish replication” which “reliably gives rise to cheating, parasitism, deception, and manipulation, even in very simple systems.” This echoes Dawkins’ “selfish gene” in which evolutionary forces result in genes, essentially, doing whatever they can to maximize their passing into the next generation, without consideration for the interests of the whole organism, the population, the species, or the ecosystem. That is how evolution works – it cannot really see the bigger picture; the selective feedback loop considers only survival and reproduction. There is still ongoing debate among evolutionary biologists about the extent to which selective pressures can operate at any level other than the individual creature. Dawkins argued it was better understood at the gene level, which is why a parent, for example, would sacrifice themselves for their child – they may die, but their genes live on through their children.
In any case – this same “selfish” principle, when applied to AI, could lead to unpredictable and extremely bad behavior on the part of the AI. They too would not really see or understand the big picture, and would simply maximize whatever parameters they were given. Systems capable of independent evolution are likely to find unpredicted (perhaps unpredictable) solutions to problems, ones that might be anathema to human interests. Again, we are already seeing this in current AI systems (lying, cheating), but this phenomenon would be much greater with evolving systems.
One significant problem with evolving AI is that it would essentially be impossible to control. Any controls we put in place would simply become a selective pressure, with evolving AI systems finding creative ways around the controls. This would be exactly like the evolution of antibiotic resistance in bacteria. In fact, it could be a lot worse. Natural systems essentially have to wait for a fortuitous mutation to occur. The reason bacteria evolve resistance so quickly is that there are so many of them and their life cycle is so short. The opportunities for such mutations are therefore enormous. The same would be true of an AI system that could test billions of possibilities in moments. But AI systems also do not have to wait for the right mutation to pop up – they can create it themselves. They can explore new possibilities, direct the course of their own evolution, and in fact evolve their ability to do so. They can learn how to balance random versus directed changes, and learn which patterns predict successful evolution. If something doesn’t work, they can try something else. They could pass on acquired characteristics. Such systems would not only be evolutionary, they could be super-evolutionary.
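The “controls become selective pressure” dynamic is easy to demonstrate even in a toy simulation. In the sketch below – every name and number is made up for illustration; this is not anyone’s real safety mechanism – raw fitness rewards setting bits (a stand-in for grabbing resources), but a “monitor” inspects only the first half of the genome and rejects individuals that look greedy there. Selection reliably routes around the monitor: the population keeps the monitored half clean and maxes out the unmonitored half.

```python
import random

# Toy demonstration that a control just becomes part of the fitness
# landscape. The monitor sees only bits [0:MONITORED]; evolution learns
# to be "selfish" exactly where the monitor cannot look.
GENOME_LEN = 20
MONITORED = 10   # the monitor inspects only the first 10 bits
LIMIT = 2        # individuals with more than 2 set monitored bits are rejected

def fitness(genome):
    if sum(genome[:MONITORED]) > LIMIT:
        return 0          # "caught" by the control
    return sum(genome)    # otherwise: more bits set, more reward

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(1)
# Start from all-zero genomes; mutation supplies the variation.
pop = [[0] * GENOME_LEN for _ in range(60)]
for _ in range(300):
    pop.sort(key=fitness, reverse=True)
    keep = pop[:30]
    pop = keep + [mutate(random.choice(keep)) for _ in range(30)]

best = max(pop, key=fitness)
print("monitored bits set:", sum(best[:MONITORED]))
print("unmonitored bits set:", sum(best[MONITORED:]))
```

The evolved winner stays under the monitor’s limit while loading up the unmonitored bits – no foresight or deception required, just selection doing what selection does.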
These types of processes can function at multiple levels, not just the code itself. For example, programmers are already using evolutionary methods to evolve prompts for AI systems. Prompts themselves affect the behavior of AI, and when engineered in a sophisticated way can significantly improve an AI’s ability.
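As a hypothetical sketch of what evolutionary prompt search looks like: in a real system, a candidate prompt’s fitness would be its measured accuracy on an evaluation set when fed to a model. Here `score_prompt` is a stand-in I invented that simply rewards certain phrases, so the loop runs without any model or API – the point is only the shape of the mutate-and-select cycle.

```python
import random

# Evolutionary prompt search, with an invented fitness function so the
# example is self-contained. A real system would replace score_prompt
# with a call that evaluates the prompt's task accuracy on a model.
PHRASES = ["think step by step", "be concise", "cite sources",
           "answer carefully", "show your work", "double-check"]

def score_prompt(prompt):
    # Hypothetical fitness: count the "useful" phrases present.
    return sum(p in prompt for p in PHRASES)

def mutate_prompt(prompt):
    # Randomly append, drop, or swap one sentence-fragment.
    parts = [p for p in prompt.split(". ") if p]
    op = random.choice(["add", "drop", "swap"])
    if op == "add" or not parts:
        parts.append(random.choice(PHRASES))
    elif op == "drop":
        parts.pop(random.randrange(len(parts)))
    else:
        parts[random.randrange(len(parts))] = random.choice(PHRASES)
    return ". ".join(parts)

random.seed(2)
population = ["You are a helpful assistant"] * 8
for _ in range(100):
    population.sort(key=score_prompt, reverse=True)
    keep = population[:4]                      # elitism: best prompts survive
    population = keep + [mutate_prompt(random.choice(keep)) for _ in range(4)]

best = max(population, key=score_prompt)
print("best prompt:", best)
```

Nothing in the loop knows what a “good” prompt is; it only knows which mutations scored higher – which is exactly why the same machinery, aimed at other targets, is hard to predict.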
The outcome of such systems would be essentially impossible to predict. There would be emergent behaviors that may even be hard to notice, or fully understand. The most predictable thing about such systems is that they will be “selfish”, because that seems to be inherent in evolving systems themselves. The end result is the creation of AI systems that are prone to cheating, lying, parasitism, and manipulation, that we cannot understand or control. If we make such systems powerful enough and give them enough resources, it seems likely that they will eventually become more intelligent (at least in some ways – even short of true sentience) than humans.
The authors also recognize that such systems would be incredibly powerful – which means they are coming – and that they can produce useful products. We just have to develop them wisely. For example, any such evolutionary AI should be run entirely in a sandbox, isolated from the outside world. It has to be truly isolated, so that it cannot find a way out of the sandbox. Once the result of such an evolutionary AI is sufficiently tested and understood, it can be released. But they warn against running evolutionary systems out in the world where their behavior cannot be controlled. This makes sense, but I wonder if the sandbox method is sufficient. If these systems are prone to deception and manipulation, might one such system trick its users into thinking it is safe, until it is released into the world? That sounds like the plot of a great sci-fi dystopian horror. We may be living through act I of such a horror story right now.
One final word – I get that there is a lot of AI hype out there. This is almost always the case with any new technology that is sufficiently disruptive or game-changing. The existence of hype is a given – it does not mean, however, that the technology is not truly disruptive. It often means that it will just take longer than the hype indicates, but in the long run the hype will not only be realized but exceeded. I do not buy the claims of the “AI is all hype” brigade, nor do I buy the “fund me” propaganda or blithe reassurances of the tech bros. The truth is somewhere in the middle. What I mostly listen to are reasonable experts who are giving sober warnings, like the authors of the current paper. This technology is genuinely very powerful. That power needs to be respected, understood, and properly regulated. This requires anticipating what can potentially go wrong, and that is what this paper does. This is not a prediction – it is laying out potential worst-case scenarios so that we do not blindly walk into them.
The post Evolving AI first appeared on NeuroLogica Blog.
I am not afraid to defend my book by discussing the real-world job performance of the MAHA/MAGA doctors featured in it. What about the authors of In COVID's Wake?
The post A Tale of Two Books: We Want Them Infected & In COVID’s Wake first appeared on Science-Based Medicine.

You’re based at Artemis Station on the lunar south pole, and you’re monitoring your 12 autonomous rovers, which are exploring the surrounding terrain for signs of water ice or other essential minerals. They’re about 3 kilometers out when you suddenly get a NASA Alert for an incoming solar storm. You know the rovers won’t return to base before the storm hits, but you’re calm knowing the rovers all recently got retrofitted with the latest hair-thin nanotube shielding to protect them from the harsh electromagnetic waves and radiation.
Mercury is one of the four rocky worlds of the Solar System, yet its chemistry is very different from that of Earth, Venus, and Mars. Missions to the planet show that it has an iron-poor but sulfur- and magnesium-rich crust. Furthermore, it's known to planetary scientists as the most chemically reduced planet in the Solar System. This means its chemical makeup is dominated by sulfides, carbides, and silicides – as opposed to the oxides we see here on Earth.
Binary stars are common, but for a long time astronomers have thought that exoplanets would have trouble forming around them. In recent years, powerful telescopes have detected about 50 of these planets. Now, new simulations show that their formation isn't actually rare; it's just that they tend to be on wide orbits, with few opportunities to observe transits. Also, many of them are ejected and become rogue planets.
One of the most intriguing puzzles in cosmology is the existence of supermassive black holes that seem to appear very early in the history of the Universe. Astronomers keep finding them at times when, by all that they understand about the infant Universe, they shouldn't be there. The standard theory of black hole formation suggests that they shouldn't have had enough time to grow as massive as they appear to be. Yet, there they are, monster black holes with the mass of at least a billion suns. The James Webb Space Telescope (JWST) has found a large population of them in early epochs, and they've been observed in very early quasars as well.