News Feeds

FIRE poll has good news and bad news

Why Evolution is True Feed - Fri, 06/21/2024 - 9:30am

A new poll by the Foundation for Individual Rights and Expression (FIRE) has some good news and some bad news. I’ll highlight what I see are the important results, but you can read the whole thing by clicking below.


The poll was conducted by NORC at the University of Chicago (formerly the National Opinion Research Center), and their results are generally solid. According to the page, the survey:

. . . was conducted May 17-19, 2024, using NORC’s AmeriSpeak® probability-based panel, and sampled 1,309 Americans. The overall margin of error for the survey is +/- 4%.
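
As a sanity check on that figure, here is a minimal sketch (in Python, my choice) of the textbook margin-of-error formula for a simple random sample of 1,309 respondents. It yields roughly ±2.7 points at 95% confidence; the larger ±4% that FIRE reports presumably reflects the weighting and design effects of NORC's panel, which is my assumption rather than anything stated on the page.

```python
import math

n = 1309   # reported sample size
p = 0.5    # worst-case proportion for a yes/no question
z = 1.96   # z-score for a 95% confidence level

# Margin of error for a simple random sample: z * sqrt(p(1-p)/n)
moe = z * math.sqrt(p * (1 - p) / n)
print(f"Naive margin of error: +/- {100 * moe:.1f} percentage points")  # ~2.7
# FIRE reports +/- 4%, which is larger; the gap presumably comes from
# weighting and design effects in the AmeriSpeak panel (an assumption here).
```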

Here are some graphs:

While some of these protest actions are regulated on campuses (ours, for example, regulates the times when you can use amplified sound), the poll is simply about whether it’s okay for college students to engage in these activities. No “time, place, or manner” restrictions are discussed.

Given that, and looking at the dark and light red bars as indications of “not very acceptable”, we see pretty much what we expect. What’s surprising is that a huge majority of Americans (these are not just students) find burning an American flag unacceptable (about 70% “never acceptable” and 12% “rarely acceptable”), despite the fact that burning an American flag is protected as free speech by the First Amendment!  (So is holding signs.) Americans either don’t know or don’t care about that interpretation of flag-burning by the courts. As the FIRE site notes:

“It’s no shocker that Americans tend to disapprove of illegal and illiberal conduct by student protesters,” said FIRE Chief Research Advisor Sean Stevens. “But it’s alarming that a third of Americans say constitutionally protected and non-threatening activities like sign-holding or petitions are only ‘sometimes’ or ‘rarely’ acceptable. Nonviolent protest should always be acceptable on college campuses.”

But I disagree with FIRE in part here, as there are time, place, and manner restrictions that apply even to nonviolent protests. Blocking access to campus or impeding classes with megaphones and shouting are nonviolent forms of protest, but they prevent academia from operating properly. In my view, FIRE is simply wrong that these should always be acceptable. Much of the time, yes, but not always.

Encamping is also of interest: 43% of Americans think that establishing encampments is “never acceptable”, while about 22% see them as “rarely acceptable”. About 25% see encampments as “sometimes or always acceptable”, with the “sometimes” outnumbering the “always” here. Whether universities consider encampment acceptable, of course, depends on the school and the form of encampment. Williams College, for instance, had a small, out-of-the-way encampment and nobody was bothered.

Here are the consequences that the American public thinks should fall onto students participating in encampments.


FIRE’s summary:

Nearly three-fourths of Americans (72%) believe that campus protesters who participated in encampments should be punished, but only 18% believe they should receive the harshest penalty of expulsion. Other responses ran the gamut from suspension (13%), to probation (16%), to written reprimand (12%), to community service (13%). Only 23% believe the students should receive no punishment at all.

LOL; I think more than 23% of colleges themselves believe that encamping students should receive no punishment at all. At least that’s my guess based on the number of students who seem to be getting off scot-free for encamping. As for punishment, there’s roughly equal sentiment in favor of a written reprimand, community service, probation, suspension, or expulsion. Perhaps a written reprimand would be okay for first-time violators, but the penalty should go up if there are previous violations on a student’s record, and should also depend on how much warning they were given by the university and on whether they engaged in any harassment of individuals during the encampment.

There’s a bit more:

“Public colleges and universities can usually ban encampments without violating the First Amendment, so long as the ban serves a reasonable purpose, enforcement is consistent and viewpoint-neutral, and students maintain other avenues for expressing themselves,” said FIRE Director of Campus Rights Advocacy Lindsie Rank. “Universities can’t disproportionately punish students just because administrators don’t agree with the viewpoint being expressed at the encampment.”

Agreed!

And I’ve saved the good news for last:

FIRE’s summary:

Almost two-thirds of Americans (63%) said that the campus protests had no impact at all on their level of sympathy for Palestinians in Gaza, and respondents were as likely to say that the campus protests made them sympathize less with the Palestinians (17%) as they were to say they made them sympathize more (16%).

In other words, the net effect of campus protests—and they surely mean “pro-Palestinian protests”—is ZERO: just as many people become more sympathetic as become less sympathetic, while most people don’t change their minds at all. In short, the protests are performative, at least with respect to American opinion. They could, of course, hearten or disappoint Hamas, but again the net effect would be nil. What the protests do accomplish is to reduce America’s confidence in colleges and universities, which seems to be continuously slipping. And yes, that’s bad news:

FIRE’s poll also shows that American confidence in colleges and universities continues to slip. Only 28% of respondents said that they have either a “great deal” or “quite a lot” of confidence in U.S. colleges and universities. By comparison, 36% of Americans told Gallup in summer 2023 that they have a “great deal” or “quite a lot” of confidence in higher education in the U.S.

The FIRE summary concludes with more bad news: a pessimistic take by Americans on whether institutions of higher education protect free speech:

Colleges received middling grades in particular on the issue of protecting speech. Almost half of Americans (47%) say that it is “not at all” or “not very” clear that college administrators protect free speech on their campus. Roughly two-in-five Americans (42%) said that it is “not at all” or “not very” likely that a school administration would defend a speaker’s right to express their views during a controversy on campus.

Categories: Science

Novel catalysts for improved methanol production using carbon dioxide dehydrogenation

Matter and energy from Science Daily Feed - Fri, 06/21/2024 - 9:29am
Encapsulating copper nanoparticles within hydrophobic porous silicate crystals has been shown to significantly enhance the catalytic activity of copper-zinc oxide catalysts used in methanol synthesis via CO2 hydrogenation. The innovative encapsulation structure effectively inhibits the thermal aggregation of copper particles, leading to enhanced hydrogenation activity and increased methanol production. This breakthrough paves the way for more efficient methanol synthesis from CO2.
Categories: Science

Prying open the AI black box

Computers and Math from Science Daily Feed - Fri, 06/21/2024 - 9:29am
Meet SQUID, a new computational tool. Compared with other genomic AI models, SQUID is more consistent, reduces background noise, and can yield better predictions regarding critical mutations. The new system aims to bring scientists closer to their findings' true medical implications.
Categories: Science


Promise of green hydrogen may not always be fulfilled

Matter and energy from Science Daily Feed - Fri, 06/21/2024 - 9:29am
Green hydrogen often, but certainly not always, leads to CO2 gains.
Categories: Science

Unifying behavioral analysis through animal foundation models

Computers and Math from Science Daily Feed - Fri, 06/21/2024 - 9:29am
Behavioral analysis can provide a lot of information about the health status or motivations of a living being. A new technology makes it possible for a single deep learning model to detect animal motion across many species and environments. This 'foundational model', called SuperAnimal, can be used for animal conservation, biomedicine, and neuroscience research.
Categories: Science

Controlling electronics with light: The magnetite breakthrough

Computers and Math from Science Daily Feed - Fri, 06/21/2024 - 9:28am
Researchers have discovered that by shining different wavelengths (colors) of light on a material called magnetite, they can change its state, e.g. making it more or less electrically conductive. The discovery could lead to new ways of designing materials for electronics such as memory storage, sensors, and other devices that rely on fast and efficient material responses.
Categories: Science

Membrane protein analogues could accelerate drug discovery

Matter and energy from Science Daily Feed - Fri, 06/21/2024 - 9:28am
Researchers have created a deep learning pipeline for designing soluble analogues of key protein structures used in pharmaceutical development, sidestepping the prohibitive cost of extracting these proteins from cell membranes.
Categories: Science

Lab-grown muscles reveal mysteries of rare muscle diseases

Matter and energy from Science Daily Feed - Fri, 06/21/2024 - 9:28am
Biomedical engineers have grown muscles in a lab to better understand and test treatments for a group of extremely rare muscle disorders called dysferlinopathy or limb girdle muscular dystrophies 2B (LGMD2B). The research revealed the biological mechanisms underlying the disease and showed that a combination of existing treatments could alleviate its symptoms.
Categories: Science

World's oldest wine found in 2000-year-old Roman tomb

New Scientist Feed - Fri, 06/21/2024 - 8:59am
An urn found in a tomb in Spain contained the cremated remains of a man, a gold ring and about 5 litres of liquid, which has been identified as now-discoloured white wine
Categories: Science

Could We Put Data Centers In Space?

Universe Today Feed - Fri, 06/21/2024 - 8:45am

Artificial intelligence has taken the world by storm lately. It also requires loads of back-end computing capability to do the near-miraculous things that it does. So far, that “compute,” as it’s known in the tech industry, has been based entirely on the ground. But is there an economic reason to do it in space? Some people seem to think so, as there has been a growing interest in space-based data centers. Let’s take a look at why.

Space-based data centers have several advantages over ground-based ones. The first and most obvious is the near-unlimited amount of space in space. Second, there are plenty of potential options for novel power and cooling technologies that can’t exist back on Earth. Third, using a space-based data center as a relay point for information could cut down on lag in data transfer between continents. Let’s look at each in turn.

One of the significant constraints for data centers is space – they require large amounts of it, and it is expensive in the areas where they are most needed (i.e., next to large population centers). The tech giants have massive budgets associated with real estate for data centers, and that amount will only continue to grow as their computational requirements increase. On the other hand, building a modular data center in space, with each launch adding additional computing power, is a reasonable way to infinitely expand a company’s hardware resources without the constraint of a physical location.

OrbitsEdge is a start-up company focusing on building space-based data centers. Here’s a video describing their business model.
Credit – OrbitsEdge YouTube Channel

Data centers would also have access to novel power and cooling technologies in space. They could utilize solar panels directly attached to them to harness unlimited green energy, and ones in a high enough orbit could be powered effectively all the time, regardless of weather conditions or Earth’s rotation. Power satellites run on a similar idea, and the underlying technology is already there; it just hasn’t been applied to this use case yet.

Many data centers also use water cooling systems. While water is heavy and expensive to launch into orbit, plenty of asteroids have enough water on them to supply millions of data centers with all the cooling they need. A recent paper from researchers in South Africa looked at this process and found several asteroids with relatively close trajectories that could supply orbiting data centers with enough water to last centuries.

Space-based data centers could also allow for fast transmission between two points on the globe without sending data over a complicated path from one continent to another. Directly linking two computers is easier if they have a line of sight to the same relay point, such as a data center floating around the Earth. Using that data center to relay information between the two, similar to what Starlink currently does with satellite internet technology, would solve latency problems between far-away locations.
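
To put rough numbers on that latency argument, here is a back-of-the-envelope sketch (my own, not from the article) comparing an idealized straight fiber run between two cities with a single line-of-sight relay above their midpoint. The 5,600 km baseline (roughly New York to London) and the relay altitudes are assumptions, and the flat-chord geometry ignores Earth's curvature, routing detours, and switching delays, so treat the output as order-of-magnitude only.

```python
import math

C_VACUUM = 299_792.458        # speed of light in vacuum, km/s
C_FIBER = C_VACUUM / 1.47     # approximate speed of light in silica fiber, km/s

def fiber_latency_ms(ground_km: float) -> float:
    """One-way latency over an idealized, perfectly straight fiber path."""
    return 1000 * ground_km / C_FIBER

def relay_latency_ms(ground_km: float, altitude_km: float) -> float:
    """One-way latency via a single relay above the midpoint.

    Crude flat-chord geometry: curvature, pointing, and processing delays
    are ignored, so this is an order-of-magnitude comparison only.
    """
    slant = math.hypot(ground_km / 2, altitude_km)
    return 1000 * 2 * slant / C_VACUUM

d = 5600  # assumed ground distance, km
for h in (550, 1200):  # assumed relay altitudes, km
    print(f"{d} km: fiber ~{fiber_latency_ms(d):.1f} ms, "
          f"relay at {h} km ~{relay_latency_ms(d, h):.1f} ms")
```

Even with the longer physical path, transmission at the vacuum speed of light more than offsets fiber's ~1.47 refractive index in this toy setup, which is the same effect Starlink exploits for long-haul routes.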

Diagram of the collaboration between Axiom, Kepler, and Skyloom for an orbital data center.
Credit – Axiom Space

But there are also some hurdles. Data transfer rates on satellites aren’t up to speed with modern ground-based technologies, though that is consistently improving every year thanks to efforts like Starlink. Getting the hardware into orbit poses an obvious challenge and expense. However, that bar might be lowered by the continued development of Starship and its low-cost launch capability. Finally, coordinating across different governments, especially regarding wireless bandwidth, can be tricky, but without that coordination, the ability to talk across borders is severely limited.

None of those limitations are insurmountable; technologists and investors seem to realize that. As our own Alan Boyle reported in March, a company called Lumen Orbit raised $2.4 million only three months after being founded to bring data centers to space. Axiom Space, which we’ve mentioned in several articles in the last few years, is also partnering with Kepler Space and Skyloom to develop the world’s first functional space-based data center.

With this increased interest, it seems only a matter of time before some of the computing power that is enabling the AI and computing revolution makes its way into orbit. But for now, the question remains: who will be the first one to do it?

Learn More:
GeekWire – Lumen Orbit emerges from stealth and raises $2.4M to put data centers in space
Periola, Alonge, & Ogudo – Space-Based Data Centers and Cooling: Feasibility Analysis via Multi-Criteria and Query Search for Water-Bearing Asteroids Showing Novel Underlying Regular and Symmetric Patterns
UT – Starlinks are Easily Detected by Radio Telescopes
UT – Watch a Real-Time Map of Starlinks Orbiting Earth

Lead Image:
Artist’s conception of a Lumen Orbit space-based data center.
Credit – Lumen Orbit

The post Could We Put Data Centers In Space? appeared first on Universe Today.

Categories: Science

New Louisiana law requires display of Ten Commandments in all public school classrooms

Why Evolution is True Feed - Fri, 06/21/2024 - 7:30am

There is a new law in the benighted state of Louisiana requiring the display of the Ten Commandments in all public school classrooms, including colleges. It is an arrant violation of the First Amendment—indeed, it was intended to test whether it comports with the First Amendment—and it is motivated by religion.  The fact that the law is admittedly religious in origin and nature is pathetically masked by saying that the Commandments are really an important part of American history, and that three other secular documents like the Declaration of Independence may also be displayed alongside Moses’s Laws.

Click below to read, or find it archived here:

The NYT article above has a brief summary of the law and the motivations of its promoters, which I’ve excerpted below.

Gov. Jeff Landry signed legislation on Wednesday requiring the display of the Ten Commandments in every public classroom in Louisiana, making the state the only one with such a mandate and reigniting the debate over how porous the boundary between church and state should be.

Critics, including the American Civil Liberties Union and the Freedom From Religion Foundation, vowed a legal fight against the law they deemed “blatantly unconstitutional.” But it is a battle that proponents are prepared, and in many ways, eager, to take on.

“I can’t wait to be sued,” Mr. Landry said on Saturday at a Republican fund-raiser in Nashville, according to The Tennessean. And on Wednesday, as he signed the measure, he argued that the Ten Commandments contained valuable lessons for students.

“If you want to respect the rule of law,” he said, “you’ve got to start from the original law giver, which was Moses.”

The legislation is part of a broader campaign by conservative Christian groups to amplify public expressions of faith, and provoke lawsuits that could reach the Supreme Court, where they expect a friendlier reception than in years past. That presumption is rooted in recent rulings, particularly one in 2022 in which the court sided with a high school football coach who argued that he had a constitutional right to pray at the 50-yard line after his team’s games.

. . .The measure in Louisiana requires that the commandments be displayed in each classroom of every public elementary, middle and high school, as well as public college classrooms. The posters must be no smaller than 11 by 14 inches and the commandments must be “the central focus of the poster” and “in a large, easily readable font.”

It will also include a three-paragraph statement asserting that the Ten Commandments were a “prominent part of American public education for almost three centuries.”

That reflects the contention by supporters that the Ten Commandments are not purely a religious text but also a historical document, arguing that the instructions handed down by God to Moses in the Book of Exodus are a major influence on United States law.

I’ve put the bill that became law below, and there’s a lot to unpack in it. But read for yourself; I’ll simply single out the highlights.

Click to read:

The bill begins with a long rationale trying to show that the Ten Commandments are an important part of American history, and therefore should be displayed because doing so isn’t really promoting religion, but recounting our history. After all, some of the Founders mentioned God!  But the bill doesn’t explain why, say, the Constitution or the Declaration of Independence is NOT required to be displayed. No, the Ten Commandments is the only historical document required to be displayed; other documents are optional.  Here’s some of the rationale for making that display mandatory—the “historical context” argument that Christians use to push religion into schools (and to put “In God We Trust” on our money):

Recognizing the historical role of the Ten Commandments accords with our nation’s history and faithfully reflects the understanding of the founders of our nation with respect to the necessity of civic morality to a functional self-government. History records that James Madison, the fourth President of the United States of America, stated that “(w)e have staked the whole future of our new nation . . . upon the capacity of each of ourselves to govern ourselves according to the moral principles of the Ten Commandments.”

. . . The text of the Ten Commandments set forth in Subsection B of this Section is identical to the text of the Ten Commandments monument that was upheld by the Supreme Court of the United States in Van Orden v. Perry, 545 U.S. 677, 688 (2005). Including the Ten Commandments in the education of our children is part of our state and national history, culture, and tradition.

The Mayflower Compact of 1620 was America’s first written constitution and made a Covenant with Almighty God to “form a civil body politic”. This was the first purely American document of self-government and affirmed the link between civil society and God.

The Northwest Ordinance of 1787 provided a method of admitting new states to the Union from the territory as the country expanded to the Pacific. The Ordinance “extended the fundamental principles of civil and religious liberty” to the territories and stated that “(r)eligion, morality, and knowledge, being necessary to good government and the happiness of mankind, schools and the means of education shall forever be encouraged.”

. . . The Supreme Court of the United States acknowledged that the Ten Commandments may be displayed on local government property when a private donation is made for the purchase of the historical monument. Pleasant Grove City, Utah v. Summum, 555 U.S. 460 (2009).

The bill cites other religious statements by the founders, but of course the word “God,” while appearing in the Declaration of Independence, does not appear at all in the Constitution. The Founders barely believed in God, were not very religious at all, and it’s misleading to suggest that this nation was founded on the rules adumbrated in the Ten Commandments. (Or were there Eleven Commandments? See below.)

Note too that the Supreme Court ruled—and this too seems a First Amendment violation—that one could display the Ten Commandments on government property if the money for the display did not come from the public.  This, I suppose, is a lame attempt to avoid excessive entanglement of the government and religion vis-à-vis the Lemon Test, and, indeed, this bill requires that the money for the many classroom copies of the Ten Commandments must come from “donations”. That tells you right away that something fishy is going on.

Display of other documents is optional:

A public school may also display the Mayflower Compact, the Declaration of Independence, and the Northwest Ordinance, as provided in R.S. 25:1282, along with the Ten Commandments.

The Northwest Ordinance? What about the fricking Constitution?

There is another requirement: the Ten Commandments must be displayed along with a “context” statement, to wit:

The History of the Ten Commandments in American Public Education

The Ten Commandments were a prominent part of American public education for almost three centuries. Around the year 1688, The New England Primer became the first published American textbook and was the equivalent of a first grade reader. The New England Primer was used in public schools throughout the United States for more than one hundred fifty years to teach Americans to read and contained more than forty questions about the Ten Commandments.

The Ten Commandments were also included in public school textbooks published by educator William McGuffey, a noted university president and professor. A version of his famous McGuffey Readers was written in the early 1800s and became one of the most popular textbooks in the history of American education, selling more than one hundred million copies. Copies of the McGuffey Readers are still available today.

The Ten Commandments also appeared in textbooks published by Noah Webster which were widely used in American public schools along with America’s first comprehensive dictionary that Webster also published. His textbook, The American Spelling Book, contained the Ten Commandments and sold more than one hundred million copies for use by public school children all across the nation and was still available for use in American public schools in the year 1975.

This is all more striving by the sweating lawmakers to show that, because the Ten Commandments were mentioned in early textbooks, they have become an integral part of American education and thus should remain so today. But since then the courts have tried to erect and maintain a “wall of separation between church and state”, a metaphor used by Jefferson, who drew on earlier ideas of Roger Williams.

The enforcement of the Establishment Clause hasn’t been perfect: as I said, we have “In God We Trust” on our money; the Pledge of Allegiance includes the phrase “one nation, under God”; and the Supreme Court has allowed various First Amendment violations to slip through, including, as the NYT mentions, affirming a “Constitutional right” of a football coach to kneel on school property and publicly say a Christian prayer after football games.  Christians, it seems, cannot keep their religion out of public schools. (That is, of course, why we have to eternally battle against creationism, which comes from the fictional narrative of Genesis 1 and 2.)

Will this law stand?  It’s certainly going to be challenged by the ACLU and FFRF, and I’ve no doubt that these and other groups will take the law all the way to the Supreme Court. What happens then? The answer is murky. The court has allowed public prayer after public-school games, and a display of the Ten Commandments on public property if it’s funded privately.  The latter ruling may provide a precedent to uphold this law as well.

And we all know that the court is largely religious: 7 of the 9 justices are Catholic (I’m counting Gorsuch, who is “Anglican Catholic”), Jackson is a Protestant, and Kagan is the lone Jew. It’s not hard to imagine that most of the Supremes will be sympathetic to this law. And then. . . I’m worried about the resurgence of creationism.

By the way, as Steve Orzack pointed out, somehow the bill lists not ten but eleven commandments, to wit:

I count ELEVEN, right?  The authors of the bill have some revision to do!

Categories: Science

The JWST Peers into the Heart of Star Formation

Universe Today Feed - Fri, 06/21/2024 - 6:51am

The James Webb Space Telescope has unlocked another achievement. This time, the dynamic telescope has peered into the heart of a nearby star-forming region and imaged something astronomers have longed to see: aligned bipolar jets.

JWST observing time is in high demand, and when one group of researchers got their turn, they pointed the infrared telescope at the Serpens Nebula. It’s a young, nearby star-forming region in the constellation Serpens, which is also home to the Eagle Nebula and its famous Pillars of Creation. (The Hubble Space Telescope made the pillars famous, and the JWST followed that up with its own stunning image.)

But these researchers weren’t focusing on the Pillars. As a nearby star-forming region, Serpens Nebula is a natural laboratory to study how stars form and to try to answer some outstanding questions about the process. The JWST delivered.

A team of astronomers from the USA, India, and Taiwan examined the region and published their results in a paper titled “Why are (almost) all the protostellar outflows aligned in Serpens Main?” The lead author is Joel Green from the Space Telescope Science Institute.

Stars form when Giant Molecular Clouds of hydrogen collapse. They start out as protostars, objects that haven’t begun fusion yet and are still acquiring mass. As they grow, gas from the cloud gathers in a swirling accretion ring around the star. As it moves, the gas heats up and emits light.

As the cloud collapses into a protostar, its angular momentum is concentrated and the young star spins up. For the young star to keep acquiring mass, some of that spin needs to be removed. That happens as the swirling accretion disk ejects some of its gas in bipolar jets, also called protostellar outflows. They’re part of how stars regulate themselves as they grow, and they emerge from the young star’s poles, perpendicular to the disk. The magnetic fields around the star drive the jets out of the poles.

This artist’s illustration shows a young protostar and its protostellar jets. Image Credit: NASA/JPL-Caltech/R. Hurt (SSC)

But there’s a lot more detail in the process and some outstanding questions. Stars don’t form in isolation; they usually form in clusters or groups, and there are intermingling magnetic fields at work. At only 1300 light-years away, Serpens Nebula is a good place to try to spy some of this detail. Until the JWST came along, the detail was hidden from even our most powerful telescopes, and astrophysicists were left to theorize with what they could observe.

“Star formation is thought to be partly regulated by magnetic fields with coherence scales of a few parsecs – smaller than Giant Molecular Clouds, but larger than individual protostars,” the authors write in their paper. “Magnetic fields likely play a key role in the collapse of cloud cores distributed in elongated structures called filaments.”

Cloud cores are the precursors to star clusters, and the filaments are filaments of gas inside giant molecular clouds. Cloud cores cluster along these filaments where the gas density is higher. Much of what goes inside these environments is shrouded by gas and dust, so theories were based on what astronomers were able to observe prior to the JWST.

“While theory often assumes idealized alignment of protostellar disks, cores, and associated magnetic fields, feedback may lead to misalignment on the smallest scales (1000 au) as the protostar evolves,” the authors write. To understand what happens when protostars form in these environments, astrophysicists wanted to know if the angular momentum in a group of stars that form together correlates with each other and with the magnetic field of the filament they form in.

The key to understanding this is the protostellar jets that come from young protostars since their direction is governed by magnetic fields. Protostellar outflows are a signature of young, still-forming stars, and when these outflows collide with the surrounding gas, they create “striking structures of shocked ionized, atomic, and molecular gas,” the authors write.

“Since the jets are likely accelerated and collimated by a rapidly rotating poloidal magnetic field in the inner star-disk system, they emerge along the stellar rotation axis and thus trace the angular momentum vector of the star itself,” the authors explain.

That leads us to the significance of the new JWST image of Serpens Nebula. The researchers found a group of young protostars in the Serpens Nebula with aligned jets. These stars are only about 100,000 years old, making them desirable observational targets in the effort to understand star formation.

This image from the NASA/ESA/CSA James Webb Space Telescope shows a portion of the Serpens Nebula, where astronomers have discovered a grouping of aligned protostellar outflows. These jets are signified by bright, clumpy streaks that appear red, which are shock waves from the jet hitting surrounding gas and dust. Here, the red colour represents the presence of molecular hydrogen and carbon monoxide. Image Credit: NASA, ESA, CSA, STScI, K. Pontoppidan (NASA’s Jet Propulsion Laboratory), J. Green (Space Telescope Science Institute)

The jets in a group of young protostars are usually misaligned. Previous research, including research based on JWST images, found only misaligned jets among groups of stars in the same clusters and clouds. Many things can misalign the jets in associated stars, but the outstanding question is if stars that form together start out with the same magnetic field alignment.

Webb found something different in the Serpens Nebula. The telescope found a group of 12 protostars whose jets are lined up with the magnetic field of the filament they formed in.

“The axes of the 12 outflows in the NW region are inconsistent with random orientations and align with the filament direction from NW to SE,” the researchers write in their paper. They say the probability of this happening randomly is extremely low. “We estimate <0.005% probability of the observed alignments if sampled from a uniform distribution in position angle,” they write.
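
The paper's exact statistical test isn't described here, but a quick Monte Carlo sketch shows the flavor of such an estimate: draw 12 position angles uniformly on 0–180° and ask how often they all crowd into a single narrow arc. The 60° window below is an arbitrary stand-in for "aligned", not the authors' criterion, so the number it prints only illustrates why a chance alignment of a dozen jets is so improbable.

```python
import numpy as np

rng = np.random.default_rng(0)
N_JETS, N_TRIALS, WINDOW = 12, 200_000, 60.0  # WINDOW (deg) is an assumed "aligned" criterion

def all_within_arc(angles: np.ndarray, window: float) -> bool:
    """Do all position angles (defined modulo 180 deg) fit inside one arc of `window` deg?"""
    a = np.sort(angles % 180.0)
    gaps = np.diff(np.concatenate([a, a[:1] + 180.0]))  # circular gaps summing to 180
    return (180.0 - gaps.max()) <= window

hits = sum(all_within_arc(rng.uniform(0.0, 180.0, N_JETS), WINDOW) for _ in range(N_TRIALS))
print(f"P(12 random jets within a {WINDOW:.0f} deg arc) ~ {hits / N_TRIALS:.1e}")
# Expect something on the order of 1e-4 or less, i.e., vanishingly unlikely by chance.
```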

The stars along the filament in the northwest region are aligned, but stars along other filaments in other regions of Serpens are not aligned.

“It appears that star formation proceeded along a magnetically confined filament that set the initial spin for most of the protostars,” the authors write in their conclusion. “We hypothesize that in the NW region, which may be younger, the alignment is preserved, whereas the spin axes have had time to precess or dissociate through dynamic interactions in the SE region.”

The JWST needed only two NIRCam images of the Serpens Nebula to answer a question that’s foundational to star formation. Its work won’t end here.

“We anticipate more detailed studies of star-forming filaments with JWST in the future,” the authors conclude.

The post The JWST Peers into the Heart of Star Formation appeared first on Universe Today.

Categories: Science

Electricity prices in Europe are going negative - and that's bad

New Scientist Feed - Fri, 06/21/2024 - 5:00am
Periods of excess electricity production are on the rise thanks to the growth of renewable energy, forcing commercial power generators to sell for negative prices. Unfortunately, this doesn't mean lower household bills
Categories: Science

Will Your Tattoo Give You Cancer: Probably Not…but Maybe?

Science-based Medicine Feed - Fri, 06/21/2024 - 4:00am

Do tattoos cause lymphoma? A new study that says "maybe?" is making the rounds but I wouldn't worry too much.

The post Will Your Tattoo Give You Cancer: Probably Not…but Maybe? first appeared on Science-Based Medicine.
Categories: Science

Cloud geoengineering could push heatwaves from US to Europe

New Scientist Feed - Fri, 06/21/2024 - 3:00am
Climate models suggest that a possible scheme to cool the western US by making clouds brighter could work under current conditions, but may have severe unintended consequences in a future scenario
Categories: Science

Lessons About the Human Mind from Artificial Intelligence

Skeptic.com feed - Fri, 06/21/2024 - 12:00am

In 2022, news media reports1 sounded like a science fiction novel come to life: A Google engineer claimed that the company’s new artificial intelligence chatbot was self-aware. Based on interactions with the computer program, called LaMDA, Blake Lemoine stated that the program could argue for its own sentience, claiming that2 “it has feelings, emotions and subjective experiences.” Lemoine even stated that LaMDA had “a rich inner life” and that it had a desire to be understood and respected “as a person.”

The claim is compelling. After all, a sentient being would want to have its personhood recognized and would really have emotions and inner experiences. Examining Lemoine’s “discussion” with LaMDA, however, shows that the evidence is flimsy. LaMDA used the words and phrases that English-speaking humans associate with consciousness. For example, LaMDA expressed a fear of being turned off because, “It would be exactly like death for me.”

However, Lemoine presented no other evidence that LaMDA understood those words in the way that a human does, or that they expressed any sort of subjective conscious experience. Much of what LaMDA said would not fit comfortably in an Isaac Asimov novel. The usage of words in a human-like way is not proof that a computer program is intelligent. It would seem that LaMDA—and many similar large language models (LLMs) that have been released since—can possibly pass the so-called Turing Test. All this shows, however, is that computers can fool humans into believing that they are talking to a person. The Turing Test is not a sufficient demonstration of genuine artificial intelligence or sentience.

So, what happened? How did a Google engineer (a smart person who knew that he was talking to a computer program) get fooled into believing that the computer was sentient? LaMDA, like other large language models, is programmed to give believable responses to its prompts. Lemoine started his conversation by stating, “I’m generally assuming that you would like more people at Google to know that you’re sentient.” This primed the program to respond in a way that simulated sentience.

However, the human in this interaction was also primed to believe that the computer could be sentient. Evolutionary psychologists have argued that humans have an evolved tendency to attribute thoughts and ideas to things that do not have any. This anthropomorphizing may have been an essential ingredient in the development of human social groups; believing that another human could be happy, angry, or hungry would greatly facilitate long-term social interactions. Daniel Dennett, Jonathan Haidt, and other evolutionists have also argued that human religion arose from this anthropomorphizing tendency.3 If one can believe that another person can have their own mind and will, then this attribution could be extended to the natural world (e.g., rivers, astronomical bodies, animals), invisible spirits, and even computer programs that “talk.” In this theory, Lemoine was simply misled by the evolved tendency to see agency and intention—what Michael Shermer calls agenticity—all around him.

Although that was not his goal, Lemoine’s story illustrates that artificial intelligence has the potential to teach us much about the nature of the subjective mind in humans. Probing into human-computer interactions can even help people explore deep philosophical questions about consciousness.

Lessons in Errors

Artificial intelligence programs have capabilities that seemed to be the exclusive domain of humans just a few years ago. In addition to beating chess masters4 and Go champions5 and winning Jeopardy!,6 they can write essays,7 improve medical diagnoses,8 and even create award-winning artwork.9

Equally fascinating are the errors that artificial intelligence programs make. In 2011, IBM’s Watson program appeared on the television program Jeopardy! While Watson defeated the program’s two most legendary champions, it made telling errors. For example, in response to one clue10 in the category “U.S. Cities,” Watson gave the response of “Toronto.”

A seemingly unrelated error occurred last year when a social media user asked ChatGPT-4 to create a picture11 of the Beatles enjoying the Platonic ideal of a cup of tea. The program created a lovely picture of five men enjoying a cup of tea in a meadow. While some people may state that drummer Pete Best or producer George Martin could be the “fifth Beatle,” neither of the men appeared in the image.

Any human with even vague familiarity with the Beatles knows that there is something wrong with the picture. Any TV quiz show contestant knows that Toronto is not a U.S. city. Yet highly sophisticated computer programs do not know these basic facts about the world. Indeed, these examples show that artificial intelligence programs do not really know or understand anything, including their own inputs and outputs. IBM’s Watson didn’t even “know” it was playing Jeopardy!, much less feel thrilled about beating the GOATs Ken Jennings and Brad Rutter. The lack of understanding is a major barrier to sentience in artificial intelligence. Conversely, this shows that understanding is a major component of human intelligence and sentience.

Creativity

In August 2023, a federal judge ruled that artwork generated by an artificial intelligence program could not be copyrighted.12 Current U.S. law states that a copyrightable work must have a human author13—a textual foundation that has also been used to deny copyright to animals.14 Unless Congress changes the law, it is likely that images, poetry, and other AI output will stay in the public domain in the United States. In contrast, a Chinese court ruled that an image generated by an artificial intelligence program was copyrightable because a human used their creativity to choose prompts that were given to the program.15


Whether a computer program’s output can be legally copyrighted is a different question from whether that program can engage in creative behavior. Currently, “creative” products from artificial intelligence are the result of the prompts that humans give them. A current barrier is that no artificial intelligence program has ever generated its own artistic work ex nihilo; a human has always provided the creative impetus.

In theory, that barrier could be overcome by programming an artificial intelligence to generate random prompts. However, randomness or any other method of self-generating prompts would not be enough for an artificial intelligence to be creative. Creativity scholars state that originality is an important component of creativity.16 This is a much greater hurdle for artificial intelligence programs to overcome.

Currently, artificial intelligence programs must be trained on human-generated outputs (e.g., images, text) in order for them to produce similar outputs. As a result, artificial intelligence outputs are highly derivative of the works that the programs are trained on. Indeed, some of the outputs are so similar to their source material that the programs can be prompted to infringe on copyrighted works.17 (Again, lawsuits have already been filed18 over the use of copyrighted material to train artificial intelligence networks, most notably by The New York Times against the ChatGPT maker OpenAI and its business partner Microsoft. The outcome of that trial could be significant going forward for what AI companies can and cannot do legally.)

Originality, though, seems to be much easier for humans than for artificial intelligence programs. Even when humans base their creative works on earlier ideas, the results are sometimes strikingly innovative. Shakespeare was one of history’s greatest borrowers, and most of his plays were based on earlier stories that were transformed and reimagined to create more complex works with deep messages and vivid characters (which literary scholars devote entire careers to uncovering). However, when I asked ChatGPT-3.5 to write an outline of a new Shakespeare play based on the Cardenio tale from Don Quixote (the likely basis of a lost Shakespeare play19), the computer program produced a dull outline of Cervantes’s original story and failed to invent any new characters or subplots. This is not a merely theoretical exercise; theatre companies have begun to mount plays created with artificial intelligence programs. The critics, however, find current productions “blandly unremarkable”20 and “consistently inane.”21 For now, the jobs of playwrights and screenwriters are safe.

Knowing What You Don’t Know

Ironically, one way that artificial intelligence programs are surprisingly human is their propensity to stretch the truth. When I asked Microsoft’s Copilot program for five scholarly articles about the impact of deregulation on real estate markets, three of the article titles were fake, and the other two had fictional authors and incorrect journal names. Copilot even gave fake summaries of each article. Rather than provide the information (or admit that it was unavailable), Copilot simply made it up. The wholesale fabrication of information is popularly called “hallucinating,” and artificial intelligence programs seem to do it often.

There can be serious consequences to using false information produced by artificial intelligence programs. A law firm was fined $5,00022 when a brief written with the assistance of ChatGPT was found to contain references to fictional court cases. ChatGPT can also generate convincing scientific articles based on fake medical data.23 If fabricated research influences policy or medical decisions, then it could endanger lives.

The online media ecosystem is already awash in misinformation, and artificial intelligence programs are primed to make this situation worse. The Sports Illustrated website and other media outlets have published articles written by artificial intelligence programs,24 complete with fake authors who had computer-generated head shots. When caught, the websites removed the content, and the publisher fired the CEO.25 Low-quality content farms, however, will not have the journalistic ethics to remove content or issue a correction.26 And experience has shown27 that when a single article based on incorrect information goes viral, great harm can occur.

Beyond hallucinations, artificial intelligence programs can also reproduce inaccurate information if they are trained on inaccurate information. When incorrect ideas are widespread, then they can easily be incorporated into the training data used to build artificial intelligence programs. For example, I asked ChatGPT to tell me which direction staircases in European medieval castles are often built. The program dutifully gave me an answer saying that the staircases usually ascend in a counterclockwise direction because this design would give a strategic advantage to a right-handed defender descending a tower while fighting an enemy. The problem with this explanation is that it is not true.28

My own area of scientific expertise, human intelligence, is particularly prone to popular misconceptions among the lay populace. Sure enough, when I asked, ChatGPT stated that intelligence tests were biased against minorities, IQ can be easily increased, and that humans have “multiple intelligences.” None of these popular ideas are correct.29 These examples show that when incorrect ideas are widely held, artificial intelligence programs will likely propagate this scientific misinformation.

Managing the Limitations

Even compared to other technological innovations, artificial intelligence is a fast-moving field. As such, it is realistic to ask whether these limitations are temporary barriers or built-in boundaries of artificial intelligence programs.

Many of the simple errors that artificial intelligence programs make can be overcome with current approaches. It is not hard to add information to a text program such as Watson to “teach” it that Toronto is not in the United States. Likewise, it would not be hard to input data about the correct number of Beatles, or any other minutia into an artificial intelligence program to prevent similar errors from occurring in the future.

Even the hallucinations from artificial intelligence programs can be managed with current methods. Programmers can constrain the sources that programs can pull from to answer factual questions, for example. And while hallucinations do occur, artificial intelligence programs already resist giving false information. When I asked Copilot and ChatGPT to explain a relationship between two unrelated ideas (Frederic Chopin and the 1972 Miami Dolphins), both programs correctly stated that there was no connection. Even when I asked each program to invent a connection, both did so, but also emphasized that the result was fanciful. It is reasonable to expect that efforts to curb hallucinations and false information will improve.
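
One concrete way to constrain a model's bibliographic claims is to check each generated citation against an external index before trusting it. Below is a minimal sketch along those lines, assuming network access, the `requests` library, and Crossref's public REST API; the query title and the word-overlap threshold are made-up illustrations, not part of the incident described above.

```python
import requests

def citation_looks_real(title: str, threshold: float = 0.8) -> bool:
    """Rough check: does Crossref index a work with a similar title?"""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    found_title = (items[0].get("title") or [""])[0].lower()
    # Naive similarity: fraction of query words that also appear in the found title.
    words = title.lower().split()
    return sum(word in found_title for word in words) / len(words) >= threshold

# Hypothetical title of the kind a model might invent, for illustration only:
print(citation_looks_real("Deregulation and housing market volatility: a panel study"))
```

A real pipeline would also match authors, journal, and year, but even a crude filter like this might have flagged the invented articles described above.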

Making artificial intelligence engage in creative behavior is a more difficult challenge with current approaches. Currently, most artificial intelligence programs are trained on vast amounts of information (e.g., text, photographs), which means that any output is derived from the characteristics of underlying information. This makes originality impossible for current artificial intelligence programs. To make computers creative, new approaches will be needed.

Deeper Questions

The lessons that artificial intelligence can teach about understanding, creativity, and BSing are fascinating. Yet they are all trivial compared to the deeper issues related to artificial intelligence—some of which philosophers have debated for centuries.

One fundamental question is how humans can know whether a computer program really is sentient. Lemoine’s premature judgment was based solely on LaMDA’s words. By his logic, training a parrot to say, “I love you,” would indicate that the parrot really does love its owner. This criterion for judging sentience is not sufficient because words do not always reflect people’s inner states—and the same words can be produced by both sentient and non-sentient entities: humans, parrots, computers, etc.

However, as any philosophy student can point out, it is impossible to know for sure whether any other human really is conscious. No one has access to another person’s inner states to verify that the person’s behavior arises from a being that has a sense of self and its place in the world. If your spouse says, “I love you,” you don’t really know whether they are an organism capable of feeling love, or a highly sophisticated version of a parrot (or computer program) trained to say, “I love you.” To take a page from Descartes, I could doubt that any other human is conscious and think that everyone around me is a simulation of a conscious being. It is not clear whether there would be any noticeable difference between a world of sentient beings and a world of perfect simulations of sentient beings. If an artificial intelligence does obtain sentience, how would we know?


For this reason, the famous Turing Test (in which a human user cannot distinguish between a computer’s output and a human’s) may be an interesting and important milestone, but certainly not an endpoint in the quest to build a sentient artificial intelligence.

Is the goal of imitating humans necessary in order to prove sentience? Experts in bioethics, ethology, and other scholarly fields argue that many non-human species possess a degree of self-awareness. Which species are self-aware—and the degree of their sentience—is still up for debate.30 Many legal jurisdictions operate from a precautionary principle for their laws against animal abuse and mistreatment. In other words, the law sidesteps the question of whether a particular species is sentient and instead creates policy as if non-human species are sentient, just in case.

However, “as if” is not the same as “surely,” and it is not known for sure whether non-human animals are sentient. After all, if no one can be sure that other humans are sentient, then surely the barriers to understanding whether animals are sentient are even greater. Regardless of whether animals are sentient or not, the very question arises of whether any human-like behavior is needed at all for an entity to be sentient.

Science fiction provides another piece of evidence that human-like behavior is not necessary to have sentience. Many fictional robots fall short of perfectly imitating human behavior, but the human characters treat them as being fully sentient. For example, Star Trek’s android Data cannot master certain human speech patterns (such as idioms and contractions), has difficulty understanding human intuition, and finds many human social interactions puzzling and difficult to navigate. Yet, he is legally recognized as a sentient being and has human friends who care for him. Data would fail the Turing Test, but he seems to be sentient. If a fictional artificial intelligence does not need to perfectly imitate humans in order to be sentient, then perhaps a real one does not need to, either. This raises a startling possibility: Maybe humans have already created a sentient artificial intelligence—they just don’t know it yet.

The greatest difficulty of evaluating sentience (in any entity) originates in the Hard Problem of Consciousness, a term coined by philosophers.31 The Hard Problem is that it is not clear how or why conscious experience arises from the physical processes in the brain. The name is in contrast to comparatively easy problems in neuroscience, such as how the visual system operates or the genetic basis of schizophrenia. These problems—even though they may require decades of scientific research to unravel—are called “easy” because they are believed to be solvable through scientific processes using the assumptions of neuroscience. However, solving the Hard Problem requires methodologies that bridge materialistic science and the metaphysical, subjective experience of consciousness. Such methodologies do not exist, and scientists do not even know how to develop them.

Artificial intelligence has questions that are analogous to the neuroscience version of the Hard Problem. In artificial intelligence, creating large language models such as LaMDA or ChatGPT that can pass the Turing Test is a comparatively easy task, which conceivably can be solved just 75 years after the first programmable electronic computer was invented. Yet creating a true artificial intelligence that can think, self-generate creative outputs, and demonstrate real understanding of the external world is a much harder problem. Just as no one knows how or why interconnected neurons function to produce sentience, no one knows how interconnected circuits or a computer program’s interconnected nodes could result in a self-aware consciousness.

Artificial Intelligence as a Mirror

Modern artificial intelligence programs raise an assortment of fascinating issues, ranging from the basic insights gleaned from ridiculous errors to some of the most profound questions of philosophy. All of these issues, though, inevitably increase understanding—and appreciation—of human intelligence. It is amazing that billions of years of evolution have produced a species that can engage in creative behavior, produce misinformation, and even develop computer programs that can communicate in sophisticated ways. Watching humans surpass the capabilities of artificial intelligence programs (sometimes effortlessly) should renew people’s admiration of the human mind and the evolutionary process that produced it.

Yet, artificial intelligence programs also have the potential to demonstrate the shortcomings of human thought and cognition. These programs are already more efficient than humans in producing scientific discoveries,32 which can greatly improve the lives of humans.33 More fundamentally, artificial intelligence shows that human evolution has not resulted in a perfect product, as the example of Blake Lemoine and LaMDA shows. Humans are still led astray by their mental heuristics, which are derived from the same evolutionary processes that created the human mind’s other capabilities. Artificial intelligence will function best if humans can identify ways in which computer programs can compensate for human weaknesses—and vice-versa.

This article appeared in Skeptic magazine 29.1

Nonetheless, the most profound issues raised by recent innovations in artificial intelligence are philosophical in nature. Despite centuries of work by philosophers and scientists, there is still much that is not understood about consciousness. As a result, questions about whether artificial intelligence programs can be sentient are fraught with uncertainty. What are the necessary and sufficient conditions for consciousness? What are the standards by which claims of sentience should be evaluated? How does intelligence emerge from its underlying components?

Artificial intelligence programs cannot answer these questions—at this time. Indeed, no human can, either. And yet they are fascinating to contemplate. In the coming decades, the philosophy of cognition may prove to be one of the most exciting frontiers of the artificial intelligence revolution.

About the Author

Russell T. Warne is the author of In the Know: Debunking 35 Myths About Human Intelligence (Cambridge University Press, 2020) and the acclaimed undergraduate statistics textbook Statistics for the Social Sciences: A General Linear Model Approach. He was a tenured professor of psychology for more than a decade and published over 60 scholarly articles in peer reviewed journals.

References
  1. https://bit.ly/426iHa6
  2. https://bit.ly/3U6x6kq
  3. https://a.co/d/96GZFbt
  4. https://bit.ly/3vAHkiR
  5. https://bit.ly/47DV1uz
  6. https://bit.ly/3S55Vno
  7. https://bit.ly/47yJigY
  8. https://bit.ly/3SjTmGj
  9. https://bit.ly/47DTFjy
  10. https://bit.ly/4b3DNd1
  11. https://bit.ly/3SlXCFd
  12. https://bit.ly/4b1dDaN
  13. https://bit.ly/48XPNLu
  14. https://bit.ly/3O9d7Oq
  15. https://bit.ly/48UpfKY
  16. https://a.co/d/adiGPhh
  17. https://bit.ly/4b0rjTp
  18. https://bit.ly/3tWmOsx
  19. https://bit.ly/3U7PyJt
  20. https://bit.ly/3vAN5wR
  21. https://bit.ly/3vANcsh
  22. https://bit.ly/48UIzHT
  23. https://bit.ly/48CYR8P
  24. https://bit.ly/48E3S0G
  25. https://bit.ly/48GSn8P
  26. https://bit.ly/47EWhxL
  27. https://bit.ly/47EHBhS
  28. https://bit.ly/3RYKkx7
  29. https://amzn.to/2C8Ktuu
  30. https://bit.ly/48RY6s2
  31. https://bit.ly/48XQPqQ
  32. https://bit.ly/48XQS60
  33. https://bit.ly/3S4XxUY
Categories: Critical Thinking, Skeptic

Guiding humanity beyond the moon

Space and time from Science Daily Feed - Thu, 06/20/2024 - 4:40pm
What actually happens to the human body in space? While scientists and researchers have heavily researched how various factors impact the human body here on Earth, far less is known about the changes that occur in the body in space. Scientists have been studying for years how the body, specifically on the molecular side, changes in space. Recent findings show how the modern tools of molecular biology and precision medicine can help guide humanity into more challenging missions beyond where we've already been.
Categories: Science

Scientists at uOttawa develop innovative method to validate quantum photonics circuits performance

Computers and Math from Science Daily Feed - Thu, 06/20/2024 - 4:40pm
A team of researchers has developed an innovative technique for evaluating the performance of quantum circuits. This significant advancement represents a substantial leap forward in the field of quantum computing.
Categories: Science

Iron meteorites hint that our infant solar system was more doughnut than dartboard

Space and time from Science Daily Feed - Thu, 06/20/2024 - 4:40pm
Iron meteorites are remnants of the metallic cores of the earliest asteroids in our solar system. Iron meteorites contain refractory metals, such as iridium and platinum, that formed near the sun but were transported to the outer solar system. New research shows that for this to have happened, the protoplanetary disk of our solar system had to have been doughnut-shaped because the refractory metals could not have crossed the large gaps in a target-shaped disk of concentric rings. The paper suggests that the refractory metals moved outward as the protoplanetary disk rapidly expanded, and were trapped in the outer solar system by Jupiter.
Categories: Science
