
News Feeds

Martian meteorites deliver a trove of information on Red Planet's structure

Space and time from Science Daily Feed - Fri, 05/31/2024 - 3:28pm
Mars has a distinct structure in its mantle and crust with discernible reservoirs, and this is known thanks to meteorites that scientists have analyzed. These results are important for understanding not only how Mars formed and evolved, but also for providing precise data that can inform NASA missions such as InSight and Perseverance and the planned Mars Sample Return.
Categories: Science

Wormholes could blast out blazing hot plasma at incredible speeds

New Scientist Feed - Fri, 05/31/2024 - 1:20pm
If matter falls into one end of a wormhole, it could heat up in a tornado of plasma hot enough to initiate nuclear fusion – and come blasting out the other end
Categories: Science

Battle-damage detector can help aid groups rapidly respond during war

New Scientist Feed - Fri, 05/31/2024 - 12:00pm
A simple statistical test can quickly guide humanitarian efforts in areas like Gaza and Ukraine impacted by war – and it could perform as well as more expensive, AI-powered methods
Categories: Science

Can We Trust AI to Make Decisions?

Skeptic.com feed - Fri, 05/31/2024 - 12:00pm

Machine-based decision-making is an interesting vision for the future: Humanity, crippled by its own cognitive deformations, tries to improve its lot by opting to outsource its decisions to adaptive machines—a kind of mental prosthetic.

For most of the twentieth century, artificial intelligence was based on representing explicit sets of rules in software and having the computer “reason” based on these rules—the machine’s “intelligence” involved applying the rules to a particular situation. Because the rules were explicit, the machine could also “explain” its reasoning by listing the rules that prompted its decision. Even if AI had the ring of going beyond the obvious in reasoning and decision-making, traditional AI depended on our ability to make explicit all relevant rules and to translate them into some machine-digestible representation. It was transparent and explainable, but it was also static—in this way, it did not differ fundamentally from other forms of decisional guardrails such as standard operating procedures (SOPs) or checklists. The progress of this kind of AI stalled because in many everyday areas of human activity and decision-making, it is exceptionally hard to make rules explicit.

In recent decades, however, AI has been used as a label for something quite different. The new kind of AI analyzes training data in sophisticated ways to uncover patterns that represent knowledge implicit in the data. The AI does not turn this hidden knowledge into explicit and comprehensible rules, but instead represents it as a huge and complex set of abstract links and dependencies within a network of nodes, a bit like neurons in a brain. It then “decides” how to respond to new data by applying the patterns from the training data. For example, the training data may consist of medical images of suspected tumors, and information about whether or not they in fact proved to be cancerous. When shown a new image, the AI estimates how likely that image is to be of a cancer. Because the system is learning from training data, the process is referred to as “machine learning.”
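
The tumor-screening example above can be sketched in a few lines. This is a toy illustration with invented numbers, not the systems the essay describes: a nearest-neighbour scheme that never writes down an explicit rule, yet still estimates risk by comparing a new case to labelled training cases.

```python
# Toy "lessons, not laws" classifier: no rule is ever written down; the
# system just compares a new case to labelled training cases.
import math

# Hypothetical training set: (feature_vector, was_cancerous).
# The two numbers stand in for measurements extracted from an image.
training = [
    ((0.9, 0.8), True), ((0.8, 0.9), True), ((0.7, 0.7), True),
    ((0.2, 0.1), False), ((0.1, 0.3), False), ((0.3, 0.2), False),
]

def estimate_risk(x, k=3):
    """Fraction of the k nearest training cases that were cancerous."""
    by_distance = sorted(training, key=lambda item: math.dist(x, item[0]))
    nearest = by_distance[:k]
    return sum(label for _, label in nearest) / k

print(estimate_risk((0.85, 0.8)))  # near the cancerous cluster -> 1.0
print(estimate_risk((0.15, 0.2)))  # near the benign cluster    -> 0.0
```

The “knowledge” here lives entirely in the stored examples; change the training data and the decisions change with it, which is exactly the adaptability the essay goes on to discuss.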

Such data-driven AI offers two important advantages over conventional AI. First, humans no longer have to make rules explicit to feed into the system. Instead, rules emerge from the training data. Alex Davies, author of Driven, a book on machine learning and self-driving cars, puts it succinctly: in this new paradigm “the computer gets lessons, not laws.” That means we can use such AI for the kind of everyday knowledge that’s so difficult to capture with explicit rules.

The second advantage—which is even greater, in this context—is that because rules are derived from training data, they don’t have to be fixed. Instead, they can be adapted as more (and newer) training data is used. This should prevent the stiffening that lessens the effectiveness of many decisional guardrails as times change. It enables looking at patterns not only from the past but also from the present to deduce rules that can be applied to decisions in the future. It has, in other words, a built-in mechanism of updating rules.

Advocates suggest that we should incentivize the use of machine learning in an ever-increasing number of contexts, and even mandate it—much like collision warning systems have become obligatory in commercial aviation. While this might sound dramatic, the change may actually be more gradual. In many instances in our daily lives, we already have machines making decisions for us, from the relatively simple—such as an airbag deploying in a car crash—to the more sophisticated, such as Siri selecting music on our smartphone. And we profit from it: Machines aren’t as easily derailed by human biases; they perform consistently, irrespective of their emotional state. They also act efficiently—capable of doing so within a split second and at relatively low cost.

The central idea of data-driven decision guidance is that past experiences can be employed to decide well in the present. That works when the world doesn’t change—not the circumstances in which we must decide, nor the goals we want to attain through our decisions. Hard-coded rules are a poor fit for times of change; in theory, this is where data-driven AI should be able to shine. If a situation changes, we should be able to add more training data that reflect the new situation. However, there is a flaw in this line of reasoning.

Autonomous driving company Waymo illustrates the argument—and the flaw. For years, Waymo has had hundreds of cars roam the roads in the United States, collecting enormous heaps of data on roads, signage, conditions, weather, and the behavior of drivers. The data were used to train Waymo’s AI system, which then could drive autonomously. These cars were the guinea pigs for the Waymo system. Mistakes observed (including by their own drivers) in turn helped the Waymo system learn to avoid them. To identify the best driving behavior for any given circumstance, such a system needs not only data about a wide variety of situations, but also data about the outcomes of many different decisions made by drivers in each situation. Learning is richest when there is sufficient variability in the training data, so the system can deduce what works best in which conditions. To get diverse training data, Waymo needs to capture drivers making a variety of choices.

Because Waymo never stopped collecting training data, even small changes in circumstances—such as in driving laws and resulting driving behavior—were reflected in the data collected and eventually embedded in the Waymo system. It was a machine that did not learn just once; it never stopped learning.

However, let’s imagine a world in which we increasingly rely on machines when making decisions. The more machines shape our choices, the more these decisions will become the only source of training data for ongoing machine learning. The problem is that data-driven machine learning does not experiment; it acts based on the best practice it has deduced from data about previous decisions. If machines begin to learn more from choices we made based on their recommendations, they will amplify their own, conservative solutions.

Over time, this will narrow and drown out behavioral diversity in the training data. There will not be enough experimentation represented in it to enable the machines to adjust to new situations. This means data-driven machine learning will lose its single most important advantage over explicit rule-based systems. We will end up with a decisional monoculture that’s unable to evolve; we are back to fixed decisional rules.
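
The feedback loop described above is easy to simulate. The sketch below is hypothetical (invented payoffs, a deliberately simple “best practice” learner): once the machine’s own choices become the only training data, every action but one vanishes from the record, and the learner can never rediscover the alternatives.

```python
# Simulating the decisional monoculture: a learner always picks the
# action that performed best in its training data, and its decisions
# then become the next round's only training data.
import random
import statistics

random.seed(0)

def payoff(action):
    # Hypothetical noisy payoffs; action 2 looks best on average.
    return {0: 1.0, 1: 1.2, 2: 1.5}[action] + random.gauss(0, 0.3)

# Round 0: humans experiment, so every action appears in the data.
data = [(a, payoff(a)) for a in [0, 1, 2] * 20]

for round_no in range(5):
    # The machine deduces "best practice" from the data it has...
    best = max({a for a, _ in data},
               key=lambda a: statistics.mean(p for x, p in data if x == a))
    # ...and from now on every logged decision is the machine's own choice.
    data = [(best, payoff(best)) for _ in range(60)]
    print(round_no, "distinct actions in training data:", len({a for a, _ in data}))
```

After the first round the data contain a single action, so even if the world changed and another action became better, the learner has no evidence left with which to notice.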

The flaw is even bigger and more consequential than not being able to adjust to changed circumstances. Even if reality doesn’t change, we may miss opportunities to improve our decision-making in the future. Many innovations that end up becoming successful are less useful than existing choices in their initial form. But any new decision options emerging from the training data will likely only be adopted if they yield better results than existing choices straight away. This closes off any opportunity to experiment with promising new ideas.

For example, the first steam engines used far more energy than they could translate into motion and power. If a machine had compared them to the existing solution of using horses for power, it would have discarded the idea of steam power right away. The only reason the steam engine succeeded is because stubborn humans thought that they could improve the invention in the long run and stuck with it. These tinkerers had no data to support their confidence. They just imagined—and kept tinkering.

Of course, most such would-be innovators fail over time. The path of progress is paved with epitaphs to dogged tinkerers following crazy ideas. Occasionally, though, small changes accumulate and lead to a breakthrough—a far more optimal decision option. Modern societies have permitted tinkering to persist, though it is almost always unproductive, even destructive, in the short term—because of the slight chance of a big payoff sometime in the future.

Data-driven machine learning, if widely utilized, would discard initially suboptimal inventions. But in doing so, it would forego the possibility of long-term breakthroughs. Machines can learn only from what already exists. Humans can imagine what does not yet exist but could. Where humans invented steam power, data-driven machine learning would instead have found more and more efficient ways to use horse power.

Human dreaming can go far beyond technical novelties. Our ancestors once dreamed of a world in which slavery is abolished; women can vote; and people can choose for themselves whom to marry and whether to have children. They imagined a world in which smallpox is extinct and we vaccinate against polio. And they worked to make those dreams come true. If they had looked only at data from their past and present, none of these dreams would have been realized.

Decisional guidelines, from SOPs to nudges, emphasize constancy. Traditional education, too, often aims to perpetuate—suggesting there is a right answer for decisions much like for math problems. But decisional guidelines are just that—suggestions that can be disobeyed if one is willing to take the risk (and shoulder the responsibility). For eons, young people have frequently revolted against their parents and teachers, pushed back against the old, the conventional and predictable, and embraced instead not just the original and novel, but the still only imagined. Humans continue to dream—of a world, for example, that will warm by less than two degrees, or in which people have enough to eat without depleting the planet.

In contrast to humans, machine decision-making is optimized toward consistency across time. Even if data-driven machine learning has access to the very latest data, it will still limit our option space. It will always choose a more efficient way to travel along our current path, rather than try to forge a new one. The more we use it to make decisions, the more it will take the variability of decisions out of the data and shed its ability to progress. It will lead us into vulnerability, rigidity, and an inability to adapt and evolve. In this sense, data-driven machine learning is an adulation of immutability, the anathema of imagination.

This article appeared in Skeptic magazine 29.1

No technological adjustment can remedy this easily. If we want to increase diversity in the data, we will need variability in machine decisions. By definition, this means machines that make suboptimal choices. But the entire argument for using more AI in our decision-making is premised on AI’s ability to suggest better choices consistently across space and time. In many instances, it would not be societally palatable to deliberately introduce variation into what options a machine picks, thereby increasing the near-term risk of bad decisions in the hope of long-term benefits. And even if it were, it would not necessarily produce the experimentation we hope for. Very often, the theoretical decision space is immense. Randomly iterating through decision options to generate the diverse data necessary would take a very long time—far too long in most instances to help in timely decision-making. Even when iterations are non-random and can be done purely digitally, it would require massive computing resources.

In contrast, when humans experiment, they rarely decide randomly; instead, they use mental models to imagine outcomes. Done correctly, this can dramatically narrow the decision space. It’s that filtering based on cognitive modeling that differentiates human experimentation in decision contexts from the random walk that the machine, in the absence of a mental model, has to employ. And if machines were to use a particular mental model, the resulting data would be constrained again by the limitations of that model. A diverse set of humans experimenting using diverse mental models is simply very hard to beat.
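
Some rough, illustrative arithmetic shows why an unguided random walk through a decision space is so costly, and why a mental model that merely rules out options helps so much. The numbers below are assumptions chosen for illustration: thirty independent yes/no design choices, and a model confident enough to fix half of them.

```python
# With n independent binary design choices, the decision space has 2**n
# options, so blind random trials need on the order of 2**n draws to hit
# one specific good design. A crude "mental model" that fixes even half
# the choices shrinks the space exponentially.
n = 30
full_space = 2 ** n             # every combination of 30 yes/no choices
filtered_space = 2 ** (n // 2)  # a model that confidently fixes 15 of them

print(full_space)                    # 1073741824 options to wander through
print(filtered_space)                # 32768 options left after filtering
print(full_space // filtered_space)  # the model cuts the search 32768-fold
```

The exponential gap, not any particular constant, is the point: each choice a mental model can rule out halves the space a random search would otherwise have to cover.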

This essay was excerpted and adapted by the authors from their book Guardrails: Guiding Human Decisions in the Age of AI. Copyright © 2024 by Princeton University Press.

About the Author

Urs Gasser is professor of public policy, governance, and innovative technology and dean of the School of Social Sciences and Technology at the Technical University of Munich. He is the author of Born Digital: How Children Grow Up in a Digital Age.

Viktor Mayer-Schönberger is professor of internet governance and regulation at the University of Oxford. He is the author of Delete: The Virtue of Forgetting in the Digital Age.

Categories: Critical Thinking, Skeptic

Children's visual experience may hold key to better computer vision training

Computers and Math from Science Daily Feed - Fri, 05/31/2024 - 11:50am
A novel, human-inspired approach to training artificial intelligence (AI) systems to identify objects and navigate their surroundings could set the stage for the development of more advanced AI systems to explore extreme environments or distant worlds, according to new research.
Categories: Science

Overcoming barriers to heat pump adoption in cold climates and avoiding the 'energy poverty trap'

Matter and energy from Science Daily Feed - Fri, 05/31/2024 - 11:50am
Converting home heating systems from natural gas furnaces to electric heat pumps is seen as a way to address climate change by reducing greenhouse gas emissions.
Categories: Science

This self-powered sensor could make MRIs more efficient

Computers and Math from Science Daily Feed - Fri, 05/31/2024 - 11:50am
MRI scans are commonly used to diagnose a variety of conditions, anything from liver disease to brain tumors. But, as anyone who has been through one knows, patients must remain completely still to avoid blurring the images and requiring a new scan. A prototype device could change that. The self-powered sensor detects movement and shuts down an MRI scan in real time, improving the process for patients and technicians.
Categories: Science

“Try a Little Tenderness”

Why Evolution is True Feed - Fri, 05/31/2024 - 10:45am

Here’s the last video of the day, as well as the last live performance of Otis Redding, who died with his band in a plane crash the day after this video was recorded on December 9, 1967. He was only 26. This song and “Dock of the Bay” are Redding’s best recordings, but “Dock of the Bay” was largely written by him, while this song, “Try a Little Tenderness”, recorded on the Stax label, was actually written by three white men in 1932. And it was recorded by, among others, Bing Crosby and Frank Sinatra. (Redding’s released recording, from 1966, is here.)

NPR’s “Fresh Air” did a documentary on Stax Records that’s still up and well worth listening to (it’s only 46 minutes long and has tons of music, including some good stuff from Booker T., who, with the M.G.s, backed Redding on the recorded version of “Tenderness”). Since Redding recorded for Stax, I revisited this song and found this live version. If you listen to the recordings by Crosby or Sinatra, you’ll see that Redding’s soul version is infinitely better. The difference between the performance below and the earlier versions shows you the very essence of soul music.

And you can also get an inkling of Redding’s talent—talent cut off way too early.

(“Dock of the Bay,” by the way, was released posthumously and became the first single to top the Billboard Hot 100 after the performer’s death.)

Ladies and gentlemen, brothers and sisters, comrades, here’s Otis Redding, giving James Brown a run for his money as “the hardest-working man in show business.”

Categories: Science

Asian hornets have overwintered in the UK for the first time

New Scientist Feed - Fri, 05/31/2024 - 10:37am
Queen Asian hornets found in East Sussex this year are a genetic match to a 2023 nest, suggesting the invasive species is becoming established in the UK
Categories: Science

Time may be an illusion created by quantum entanglement

New Scientist Feed - Fri, 05/31/2024 - 10:00am
The true nature of time has eluded physicists for centuries, but a new theoretical model suggests it may only exist due to entanglement between quantum objects
Categories: Science

Ancient medicine blends with modern-day research in new tissue regeneration method

Matter and energy from Science Daily Feed - Fri, 05/31/2024 - 9:25am
For centuries, civilizations have used naturally occurring, inorganic materials for their perceived healing properties. Egyptians thought green copper ore helped eye inflammation, the Chinese used cinnabar for heartburn, and Native Americans used clay to reduce soreness and inflammation. Flash forward to today, and researchers are still discovering ways that inorganic materials can be used for healing. A new article explains that cellular pathways for bone and cartilage formation can be activated in stem cells using inorganic ions. Another recent article explores the usage of mineral-based nanomaterials, specifically 2D nanosilicates, to aid musculoskeletal regeneration.
Categories: Science

Designing environments that are robot-inclusive

Computers and Math from Science Daily Feed - Fri, 05/31/2024 - 9:25am
To overcome issues associated with real-life testing, researchers successfully demonstrated the use of digital twin technology within robot simulation software in assessing a robot's suitability for deployment in simulated built environments.
Categories: Science

AI-controlled stations can charge electric cars at a personal price

Computers and Math from Science Daily Feed - Fri, 05/31/2024 - 9:25am
As more and more people drive electric cars, congestion and queues can occur when many people need to charge at the same time. A new study shows how AI-controlled charging stations, through smart algorithms, can offer electric vehicle users personalized prices, and thus minimize both price and waiting time for customers. But the researchers point to the importance of taking the ethical issues seriously, as there is a risk that the artificial intelligence exploits information from motorists.
Categories: Science

Stunning image reveals the intricate structure of supersonic plasma

New Scientist Feed - Fri, 05/31/2024 - 9:12am
A simulation-generated image reveals how charge distributions and gas densities vary in the plasma that floats across our universe
Categories: Science

Douglas Murray: “Life has to be fought for”

Why Evolution is True Feed - Fri, 05/31/2024 - 8:40am

Here’s another good talk, though not as good as the preceding one.  But it does get better in the last third.

Yes, Douglas Murray is a conservative, and yes, the Manhattan Institute is a generally conservative think tank, but Murray is eloquent and sensible on many issues, including the war and, in this case, the courage of Israelis, and it’s worth listening to his 24-minute acceptance speech from May 6, when he was given the Alexander Hamilton Award from the Manhattan Institute for his “unwavering defense of Western values.”  I hate to have to qualify things this way, but yes, I disagree with Murray on several issues, the main one being his consistent opposition to widespread immigration into Britain. (I’m sure many of you will agree with him, though.)

In some ways, including his memory and his eloquence, Murray resembles Hitchens. (When he makes a crack about “Queers for Palestine,” remember that Murray is gay.)

The transcript of this speech is at The Free Press.

Categories: Science

Small fern species has a genome 50 times larger than that of humans

New Scientist Feed - Fri, 05/31/2024 - 8:00am
A small fern found only on a few Pacific islands has more than 100 metres of DNA in every single cell, more than any other organism that we know of
Categories: Science

Bari Weiss: “Courage is the most important virtue”

Why Evolution is True Feed - Fri, 05/31/2024 - 7:20am

I’m tired today, and also have work to do, so it may turn out that all of my posts have videos in them. Graduation is tomorrow, and I plan to be around to see if it goes smoothly (disruption is threatened).

Bari Weiss is often demonized, but I think her critics are largely mistaken.  She’s a centrist, but leans Left; and those who criticize her for being a member of the “Intellectual Dark Web” (which seems to me to consist largely of people who think for themselves), or for being some kind of right-winger, are simply misguided. In the 16-minute TED talk below, followed by 5 minutes of Q&A moderated by Chris Anderson (the head of TED), Weiss extols what she sees as the highest of virtues: courage.

She begins by laying out a litany of her beliefs, which are quite good (save for one note that we’re all “created in the image of god”), comporting with good liberalism, though some of them might be controversial (she thinks Covid came from a lab, believes hiring should be based on merit rather than on “immutable characteristics”, promotes standardized testing, etc.). As she says (the transcript is here):

The point in all of this is that I am really boring, or at least I thought I was. I am, or at least until a few seconds ago in historical time, I used to be considered a standard-issue liberal. And yet somehow, in our most intellectual and prestigious spaces, many of the ideas I just outlined, and others like them, have become provocative or controversial, which is really a polite way of saying unwelcome, beyond the pale, even bigoted or racist. How? How did these relatively boring views come to be seen as off-limits? And how did that happen, at least it seems to me, in the span of under a few years?

She then takes on the “progressives,” and finally gives what she sees as the reason for our “culture in crisis”:

My theory is that the reason we have a culture in crisis is because of the cowardice of people that know better. It is because of the weakness of the silent, or rather the self-silencing, majority. So why have we been silent? Simple. Because it’s easier. Because speaking up is hard; it is embarrassing; it makes you vulnerable. It exposes you as someone who is not chill, as someone who cares a lot, as someone who makes judgments, as someone who discerns between right and wrong, between better and worse.

Among the courageous people she mentions are Natan Sharansky, Masih Alinejad, John Fetterman, Salman Rushdie, Roland Fryer, Alexei Navalny, Coleman Hughes, Jimmy Lai, and others. You will have your own list of Courageous People. Mine also includes J. K. Rowling, Ayaan Hirsi Ali, and, among those no longer living but who inhabited the 20th century, Mohandas Gandhi, Nelson Mandela, Martin Luther King, James Meredith, Ruby Bridges, and many figures of the American Civil Rights movement who gave their lives pursuing the cause (Medgar Evers, Goodman, Schwerner, and Chaney). These people made considerable sacrifices to promote positive change; their activism was not performative.
(Yes, Rowling remains wealthy, but she didn’t have to stand up for women in the way she did, and that led to considerable erosion of her reputation.)

Weiss’s ending is lovely, and is followed by a standing ovation.

The freest people in the history of the world seem to have lost the hunger for liberty. Or maybe it’s really the will to defend it. And when they tell me this, it puts me in mind of my hero, Natan Sharansky, who spent a decade in the Soviet gulag before getting his freedom. He is the single bravest person that I have ever met in my life. And a few years ago, one afternoon in Jerusalem, I asked him a simple question. “Natan,” I asked him, “is it possible to teach courage?” And he smiled in his impish way and said, “No. All you can do is show people how good it feels to be free.”

My comment on that ending: does seeing the benefits of freedom really make people more courageous? Or was Sharansky merely extolling the benefits of what you can get from courage?

The talk:

Categories: Science

Readers’ wildlife photos

Why Evolution is True Feed - Fri, 05/31/2024 - 6:15am

Reader Robert Lang, physicist and origami master, has contributed two batches of photos, and I’ll show one today. (I just missed being able to get on a cruise to the Arctic, featuring Richard Dawkins and with several of my friends like Robert, so I’m bummed.) At any rate, Robert’s words are indented and you can enlarge his photos by clicking on them.

Wildflowers 1/2

Springtime in Southern California is when the hills come alive with life. We have had two good years of winter rains but the first few months of 2024 were coolish. In April, we began to see warm afternoons, and this brought out a burst of wildflower blooms from many species.

It didn’t get a lot of press, but on May 2, Joe Biden announced the expansion of the San Gabriel Mountains National Monument—it now begins about 20 feet from my studio window and I can walk out my back door to get onto a network of trails. The trails range from deep, forested canyons to thickets of mountainside chaparral and rocky exposed ridges; one of my favorite afternoon routes goes through all three terrains, which offers a wide variety of wildflowers at peak season. Most of the pictures in this collection and the next were taken (with an iPhone, so the quality varies) during a single 3-hour ramble.

A note on IDs: I am even less expert in wildflowers than I am in animal life, so I am relying on iNaturalist for most of these IDs. Corrections and clarifications welcome!

Baby blue eyes (Nemophila menziesii) has tiny flowers and because of its low growth is easily overlooked, but I find them lovely:

Blue blossom ceanothus (Ceanothus thyrsiflorus) is a common shrub of the chaparral. On the day I hiked, the northwest side of Millard Canyon was covered in its lavender blooms:

Chaparral whitethorn (Ceanothus leucodermis) is another species of Ceanothus, with flowers similar to blue blossom’s, but is easily distinguishable by its pale branches and vicious long thorns; at higher elevations, it’s one of the dominant shrubs of the chaparral, and its thickets are impenetrable (unless you’re willing to spill some blood):

Canterbury bells (Phacelia minor) is often findable along the edges of trails; its deep purple blooms, about 2–3 cm long and of similar width, are distinctive with their bell-shaped base:

Clearwater cryptantha (Cryptantha intermedia) is another easily overlooked flower with tiny (~1 cm) blooms and low growth form:

Crofton weed (Ageratina adenophora) is well named as a “weed”; it is an invasive plant that chokes many small streams in canyon bottoms. Every few years a flash flood will clear out the lot, making the stream hikeable for several months, but then the invaders come crowding in again:

This plant had the tiniest flowers, only about 0.5 cm across. iNaturalist only narrows it down to tribe Cynoglosseae, in family Boraginaceae (which makes it a relative of Clearwater cryptantha); any further ID would be most welcome:

iNat identifies this as a Delphinium, but doesn’t narrow down the species:

Gum rock-rose (Cistus ladanifer) is an import from the Mediterranean region, which can be found in areas that were once developed (e.g., the Echo Peak ruins and along the Mount Lowe Roadway above Altadena). The big, showy flowers come in two forms: plain white, which somewhat resemble those of the native Matilija poppy (Romneya coulteri), but they can be distinguished by checking the petals: four petals for poppies, fivefold symmetry for the rock-rose:

More commonly, though, the Gum rock-rose flowers are decorated with maroon dots, which makes the ID unmistakable:

Next: more wildflowers.

Categories: Science
