News Feeds

Neural networks made of light

Matter and energy from Science Daily Feed - Fri, 07/12/2024 - 9:41am
Scientists propose a new way of implementing a neural network with an optical system which could make machine learning more sustainable in the future. In a new paper, the researchers have demonstrated a method much simpler than previous approaches.
Categories: Science

Scientists create computer program that 'paints' the structure of molecules in the style of Piet Mondrian

Matter and energy from Science Daily Feed - Fri, 07/12/2024 - 9:41am
Scientists have created a computer program that 'paints' the structure of molecules in the style of famous Dutch artist, Piet Mondrian. Researchers are opening eyes and minds to the beauty of molecular structure, as well as posing new questions about the form and function of the molecules themselves.
Categories: Science

How the 'heart and lungs' of a galaxy extend its life

Space and time from Science Daily Feed - Fri, 07/12/2024 - 9:40am
Galaxies avoid an early death because they have a 'heart and lungs' which effectively regulate their 'breathing' and prevent them growing out of control, a new study suggests. If they didn't, the universe would have aged much faster than it has and all we would see today is huge 'zombie' galaxies teeming with dead and dying stars. That's according to a new study that investigates one of the great mysteries of the Universe -- why galaxies are not as large as astronomers would expect.
Categories: Science

Your pupils change size as you breathe

New Scientist Feed - Fri, 07/12/2024 - 8:00am
Our pupils shrink and expand as we breathe in and out, but whether this affects vision is unclear
Categories: Science

In Science, fifteen New Zealand researchers criticize the initiative to teach indigenous “ways of knowing” as science

Why Evolution is True Feed - Fri, 07/12/2024 - 7:30am

Two letters have just been published in Science signed by a total of 15 scientists, all criticizing the first article below (published in Science last February), a piece arguing for teaching indigenous knowledge (including N.Z.’s version, Mātauranga Māori) alongside science in the science classroom. Now the authors, after being criticized, have denied that they really meant what they argued in this paper:

I also published a post in February criticizing Black and Tylianakis’s paper, and was pretty hard on their claims, which deserved such criticism. Science clearly published their article as part of the performative wokeness infecting major science journals, and it was full of assertions and short on facts. It was, in reality, an attempt to sacralize indigenous knowledge—a dangerous gambit.  Some quotes from my critique:

In the end, this article appears to me to be a DEI-ish contribution: something published to advance “the authority of the sacred victim” by arguing that indigenous knowledge and the ways to attain it are just as good as modern (sometimes called “Western”) science, and that teaching it will empower the oppressed. Here’s one line from the paper supporting my hypothesis:

In addition to a suite of known benefits to Indigenous students, we see the potential for all students to benefit from exposure to Indigenous knowledge, alongside a science curriculum, as a way of fostering sustainability and environmental integrity.

In other words, the argument here is really meant to buttress the self-image of indigenous people, not to buttress science. You can see this because there are hardly any examples given to support their thesis. Instead, there is a lot of palaver and evidence-free argument, as well as both tedious and tendentious writing.

The publication of this paper is something of a travesty, for it shows that the AAAS is becoming as woke as New Zealand, where the claim that you should NOT teach MM in the science classroom can get you fired!  If this kind of stuff continues, the authoritarians will eventually shut down anybody who makes counterarguments, as is happening in New Zealand, where counterspeech against the “scientific” nature of MM is demonized and punishable.  Did the AAAS [the American Association for the Advancement of Science] even get critical reviewers for this piece?

But it’s especially important for Kiwis themselves to push back on this paper, for authors Black and Tylianakis are both from New Zealand, and their paper could be seen as supporting the widespread but misleading idea that indigenous knowledge, at least in New Zealand but probably everywhere else, is coequal to modern science.

The first paper pushing back has fourteen authors, including all but one of the Auckland University researchers who signed the Listener Letter on science—the letter that ignited this conflagration. In fact, that letter, which argued that indigenous knowledge in NZ had a place in the classroom, but not the science classroom, is quite similar to what you’ll read below. But you can’t attack this stuff too often, for the postmodern-derived claim that “all ways of knowing are equal” must be debunked before it destroys New Zealand science (it’s already done a job on social science and the humanities).

Here’s Ahdar et al.’s argument against what Kiwis, in their drive to sacralize Māori language, call mana ōrite, defined below. An excerpt (I’ve highlighted the money quote):

We agree with A. Black and J. M. Tylianakis (“Teach Indigenous knowledge alongside science,” Policy Forum, 9 February, p. 592) that the arguments of those supporting the “mana ōrite” policy (translated as “equal status” or “equal value”) between Indigenous knowledge and science are largely based on ethics and morals; that science is typically considered discrete from nonscience academic disciplines, whereas Indigenous knowledge lacks such divisions; and that science and Indigenous knowledge systems are distinct in “methodologies, philosophies, worldview, and modes of transmission.” However, such distinctions (1, 2) are precisely why Indigenous knowledge—although it contains empirical and cultural knowledge of great value—should be taught as a distinct subject or as aspects of other subjects, not “alongside” science in science classes, as Black and Tylianakis suggest.

Black and Tylianakis fail to consider how to resolve conflicts between science and Indigenous knowledge in empirical content or methodology in the classroom. In Indigenous knowledge, empirical observations generally merge seamlessly with, and gain an authority not to be challenged from, spiritual and religious beliefs (3–5). Therefore, incorporating such observations into science curricula has led to, and will continue to lead to, the use of spiritual concepts in science classrooms (6).

Placing science and Indigenous knowledge alongside each other does a disservice to the coherence and understanding of both, and leading Māori scholars have cautioned against such comparisons (7, 8). Black and Tylianakis do not explain how science students might reconcile content from these two very distinct systems when taught as being of “equal value,” nor do they acknowledge that teaching Indigenous knowledge alongside science greatly limits the delivery of science curricula that meet international academic disciplinary standards.

 

Note how the authors use the Dennett-ian strategy of first showing where they agree with the paper they’re criticizing before they start hurling the brickbats.

And indeed, as I’ve written before, attempts to equate MM with science have led to confusing lessons incorporating Māori myths and the concept of “mauri”, or vitalism, into the science classroom (see here, here, and the many posts here). What’s new in this letter is the authors’ digging for the roots of mana ōrite, which, they say, lie in social constructivism (my bolding):

The mana ōrite policy (9) states that Indigenous knowledge and science should be given equal status, but equating such vastly different systems is meaningless and based on the relativist concept of social constructivism. This ideology posits that all knowledge depends entirely on its cultural context, which it cannot transcend, and therefore epistemic claims from one culture cannot challenge claims from another. This is inherently antiscience; science is open to all to pursue and critique, and it depends on every claim being open to challenge. Framing the mana ōrite policy in terms of “relative value” or “relative status” is the problem, not the solution, because it tips the discussion into an emotive moral judgment that purports to say something about the merit of cultural differences. Under this view, the contest of ideas becomes a battle of cultural and political power rather than a matter of empirical evidence and theoretical coherence.

Their letter goes on to say that because science is based on testing factual claims, while indigenous knowledge comes with a heavy dose of spirituality and other nonfactual stuff, the latter shouldn’t be taught in the science class or construed as a form of “knowledge”.  This parallels the Listener letter, but this and Matzke’s letter are more important because they are peer-reviewed letters in one of the world’s most prestigious science journals. It goes without saying that the letter could not have been published in New Zealand, and that’s very sad.

There’s another critique as well: a single-authored paper written by American Nick Matzke, now working at Auckland Uni. Nick may be familiar to you as a prolific author on The Panda’s Thumb website, and as a fighter against creationism as a member of the National Center for Science Education. Nick is now battling the Kiwi version of creationism: the spiritual/religious aspects of MM.  He’s argued against the vitalism of MM (“mauri“) in a video (see here), but in this letter, again peer reviewed, he criticizes the vitalism of New Zealand’s indigenous “ways of knowing”. Letters in Science have considerable clout, though of course Nick and the other 14 authors are up against powerful ideological and political forces in their own country and university.

Nick points out several examples where vitalism (“mauri“), a supernatural concept, remains in the Kiwi science curriculum—at the behest of NZ’s Ministry of Education:

A. Black and J. M. Tylianakis (“Teach Indigenous knowledge alongside science,” Policy Forum, 9 February, p. 592) give an overly rosy picture of New Zealand’s policy of “mana ōrite,” or equal status for mātauranga Māori, in science education, which they say teaches Indigenous knowledge “alongside” science rather than “as” science. They suggest that this policy avoids problems such as teaching creationist myths in science class. However, the New Zealand Ministry of Education placed supernatural content directly into science and math curricula with no clarification that it was nonscientific material.

The chemistry curriculum required students to “recognise that mauri is present in all matter which exists as particles held together by attractive forces” (1), with a glossary that defined mauri as “[t]he vital essence, life force of everything.” This concept, known as vitalism, has long been debunked (2). Teaching concepts that directly conflict with empirical evidence undermines the goals of science education. Dozens of science teachers opposed the inclusion of mauri in the chemistry curriculum, but the Ministry steamrolled their objections, citing “the requirement for mana ōrite” (1). The objective was only removed after 18 months of controversy, at a time when the 2023 election was looming. The Ministry, ignoring vitalism’s evidentiary flaws, claimed the reversal occurred because inserting concepts such as mauri into science curricula ran the “risk of recolonisation” (3), despite the fact that mana ōrite’s entire rationale was decolonization.

Problems remain in 2024. Despite its removal from exam objectives, mauri remains in the chemistry curriculum, in which students are told, “Revisit the concept of mauri” (4). “This learning can sit beside learnings in atomic theory” (5), and the Gulf Innovation Fund Together website (4) says that mauri is “the force that interpenetrates all things to bind and knit them together.” A math qualification on practical problems of “life in… the Pacific” asks trigonometry students to calculate how much flaxen rope the demigod Maui made to lasso the Sun, slowing it to lengthen the day (6). The text of the exercise is studiously agnostic about the literal truth of this story, describing it as a “narrative.” Black and Tylianakis might categorize this as teaching Indigenous knowledge alongside math, but teachers face the prospect of strife among students over whether it is appropriate to call it knowledge or myth and if students of various backgrounds are expected to defend or disclaim its verity.

The letter (limited to about 300 words) goes on to emphasize that the Ministry’s current policy puts supernatural content in the science classroom, and suggests, as is only sensible, that MM, if it’s to be taught as a whole, has to be in a “nonscience class or unit” that discusses the content and diversity of Māori beliefs.  Nick also wrote a brief backstory about this on The Panda’s Thumb website, where he makes two minor corrections to his letter.

Now of course the original authors, Amanda Black and Jason Tylianakis, got to respond, and they were given more words than the critics; I quote from their reply below.

I’m biased, of course, but I consider this response very weak, as it continues to defend the nonscientific aspects of MM, including mauri, as forms of “knowledge”.  In fact, I don’t think that they realize that all verifications of truths about the world, whether they come from science or sociology, are examples of what I call “science construed broadly”.  Here are some statements that weaken their response (my own comments are flush left):

Indigenous knowledge must retain its integrity as a separate, parallel knowledge system. Analogous to philosophy, Indigenous knowledge should be taught alongside science as a separate form of knowledge, not within the science curriculum.

Indigenous “ways of knowing” such as MM are not “parallel knowledge systems”. In fact, MM is not a “knowledge system” at all, for, although it does contain some empirical knowledge, it’s also laden with religion, tradition, superstition, ethics, social strictures, legend, vitalism, and so on.  This gemisch cannot be a knowledge system, though later on the authors try to argue that, for example, vitalism is also “knowledge.” Further, philosophy, a useful discipline when applied to real issues, is not a “way of knowing” but a “way of thinking”.  Philosophers can verify what’s true about the world only in the same way scientists do: via observation, replication, hypothesis testing, pervasive doubt, experiments, and so on. And that’s part of science, not philosophy. But wait! There’s more!

Matzke demotes Indigenous knowledge to a “belief system” rather than knowledge, and Ahdar et al. dispute the idea that “epistemic claims from one culture cannot challenge claims from another.” Philosophy, arts, and other social sciences and humanities are all valuable forms of knowledge that sit alongside science in the curriculum without positivist science proofs of their “verity,” as Matzke requires of Indigenous knowledge. We thus agree with scholars who have cautioned against using science to test nonscience concepts from other knowledge systems (2). (Ahdar et al. claim to agree with such scholars as well but contradict themselves.)

No, philosophy, art and much of the humanities are “ways of seeing,” not “ways of knowing”. Knowledge or empirical truth, defined as “justified true belief” accepted by most rational people, cannot be attained without using the methods of science. If you make a claim about what’s true in the world, then yes, you need science construed broadly to test that claim.  These authors are so immersed in their “all knowledge systems are true in their own way” mantra that they don’t seem to even know what science is.

Here they try to shoehorn mauri, indisputably a form of vitalism and supernaturalism, into science:

The concept of mauri, a key feature in the Māori worldview, has been frequently explored within the peer-reviewed scientific literature as a measure of ecological resilience (2) without being absorbed by or undermining science. Similar to the concept of health (4, 5), mauri is not directly measurable, but both health and mauri can be operationalized through quantifiable indicators, and both concepts are useful for communicating societal and environmental well-being to the public. Nonscience concepts (assuming that they are not presented as science) can have value for connecting with communities.

I’m not sure what the sweating authors are trying to say here. What do they mean by “operationalizing” vitalism “through quantifiable indicators”? If that is what they mean, then yes, the concept of mauri is testable in the same way that intercessory prayer as a way to cure disease is testable (and of course it’s failed: prayer doesn’t work). I’d put up many dollars against their finding a way to test whether vitalism is operating in nature. The authors’ last statement, that supernaturalism can be valuable in “connecting with communities”, is undoubtedly true, but irrelevant to the argument of these letters.

Here’s another example of their relative ignorance about indigenous knowledge. If they mean what they say below, let them give just ONE EXAMPLE:

Matzke’s concern about “whether it is appropriate to call it knowledge or myth” fails to acknowledge that Indigenous knowledge systems can encode knowledge within apparent myth (2), so neither English term may fit perfectly. Education on Indigenous knowledge would avert such misunderstandings.

Yes, true. Separate the empirical wheat from the supernatural chaff, and then plant the wheat alongside science.  But teaching myths that mix both empirical knowledge and superstition can only confuse students. Are the authors suggesting that teachers tell students that part of MM isn’t really true?  If so, they should admit that (this would get them into big-time trouble), but they should also clarify what they mean by this:

We believe that harm arises when nonscience is presented as science, and we remain unconvinced that the intent of the mana ōrite initiative (8) is to present Indigenous knowledge and culture as science or to compete with scientific concepts in science classes.

Well, ante up, Drs. Black and Tylianakis! MM is in fact being funded and taught as science, and there are personal penalties levied on those who criticize it.  In the end, Black and Tylianakis admit that MM, which is largely nonscience, should not be “presented as science”. So far, so good. But it’s clear that the mana ōrite initiative is indeed presenting myth and tradition as science and is pitting MM and other forms of indigenous “knowledge” against science.

Kiwis really need to debate this issue: in fact, this is the most important aspect of science that needs discussing in New Zealand right now.  What a pity it is that this discussion has effectively been banned. Remember Auckland Vice-Chancellor Dawn Freshwater’s promise to hold such a debate three years ago—a promise she never kept?

Categories: Science

Denisovan DNA may help modern humans adapt to different environments

New Scientist Feed - Fri, 07/12/2024 - 5:00am
Highland and lowland populations in Papua New Guinea have different gene variants derived from Denisovan archaic humans, indicating possible adaptations for lower oxygen levels and higher malaria risk
Categories: Science

A Hopping Robot Could Explore Europa Using Locally Harvested Water

Universe Today Feed - Fri, 07/12/2024 - 4:58am

Various forms of hopping robots have crept into development for use in different space exploration missions. We’ve reported on their use on asteroids and even our own Moon. But a study funded by NASA’s Innovative Advanced Concepts (NIAC) program in 2018 planned a mission to a type of world where hopping may not be as noticeable an advantage—Europa.

The mission, developed by engineers at NASA’s Jet Propulsion Laboratory, Purdue University, and Honeybee Robotics, is known as the Steam Propelled Autonomous Retrieval Robot for Ocean Worlds, or SPARROW. It’s about the size and shape of a soccer ball, with the logic, power, and control systems inside a spherical outer hollow shell. 

SPARROW wouldn’t be able to operate on its own, however. It would require a lander to deposit it onto the surface and serve as a refueling and sample collection storage base. Europa Clipper, the only currently planned NASA mission to the icy moon, would have been good for hitching a ride, but its lack of a lander made it unsuitable for SPARROW.

However, the hopping robot itself is well suited to the environment on Europa. Its designers intended to make it “terrain agnostic,” meaning it could traverse even the harshest terrain the icy moon could throw at it, including penitentes—shards of ice that can be meters tall and are difficult for ground-based robots to traverse.

SPARROW could fly over them, collect interesting samples, and return to the lander to refuel and deposit them. Then, it could go out again in a different direction. To model this system architecture, the JPL team spent Phase I trying to determine the best propulsion system for the robot and modeling control algorithms for the flights.

First, let’s tackle the propulsion system. The lander accompanying SPARROW would have to mine ice off the moon’s surface, then heat it and store it as water. When SPARROW returned from a hop, it would use the water to refuel. Five different propulsion methods were considered as part of the study, but the best turned out to be a “hot water thruster”: SPARROW would internally heat the water supplied by the lander, then eject it in a burst of propulsive force to launch the robot off the surface.
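To get a feel for what such hops could look like, here is a back-of-the-envelope calculation. Europa has no appreciable atmosphere, so each hop is simple projectile motion under Europa's gravity; the launch speed below is an illustrative assumption, not a figure from the SPARROW study.

import math

g_europa = 1.315                 # Europa surface gravity, m/s^2
launch_speed = 40.0              # assumed burst velocity from the thruster, m/s
launch_angle = math.radians(45)  # optimal angle for maximum range in a vacuum

# With no atmosphere, a hop is pure ballistic flight:
hop_range = launch_speed**2 * math.sin(2 * launch_angle) / g_europa
hop_time = 2 * launch_speed * math.sin(launch_angle) / g_europa

print(f"Range: {hop_range:.0f} m, airborne for {hop_time:.0f} s")
# ~1200 m per hop and ~43 s of flight for these assumed numbers

Even modest burst velocities would carry the robot over kilometer-scale stretches of terrain, which is the appeal of hopping on a low-gravity, airless world.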

The second major part of the paper was controlling that propulsion. Trajectory correction is critical to mission success, but in this case, the designers believe that no matter where the robot ends up, it will be able to collect a sample and return to the lander. This is due to its gimballed design, which allows the robot to consistently orient correctly, even after bouncing along a frozen surface for a while.

There is still much work to do before the mission is ready to go, though. Some of the most pressing questions are how to stop ice from forming in the robot’s propulsion nozzle and throughout its structural cage. Such blockages could easily throw off any existing trajectory calculations and theoretically immobilize the hopper entirely if they were severe enough.

However, no work is planned to solve those problems for now, as the project has yet to receive Phase II funding from NIAC, and work on it appears to have stalled. Dr. Gareth Meirion-Griffith, the principal investigator on the project, has moved on from JPL to take a job at Collins Aerospace. Even so, someday the authors’ ideas might be integrated into a Europa lander mission—we’ll have to wait and see.

Learn More:
JPL – This Hopping Robot Could Explore the Solar System’s Icy Moons
Meirion-Griffith et al. – SPARROW: Steam Propelled Autonomous Retrieval Robot for Ocean Worlds
UT – A Robot Hopper to Explore the Moon’s Dangerous Terrain
UT – Miniaturized Jumping Robots Could Study An Asteroid’s Gravity

Lead Image:
Artist’s depiction of SPARROW and its lander operating on an ocean world.
Credit – NASA JPL / Caltech

The post A Hopping Robot Could Explore Europa Using Locally Harvested Water appeared first on Universe Today.

Categories: Science

Russia is building ground-based kamikaze robots out of old hoverboards

New Scientist Feed - Fri, 07/12/2024 - 3:00am
Hoverboards, or self-balancing scooters, are already used by hobbyists as a basis for robots, but now a group in Russia is putting them to use on the battlefields of Ukraine
Categories: Science

Last common ancestor of all life emerged far earlier than thought

New Scientist Feed - Fri, 07/12/2024 - 3:00am
All life on Earth can trace its origin to LUCA, the last universal common ancestor – and now it seems this organism may have lived a few hundred million years after the planet formed
Categories: Science

Chatbots and the Pursuit of Artificial General Intelligence

Skeptic.com feed - Fri, 07/12/2024 - 12:00am

Since the release of ChatGPT and other large language models (LLMs) in the fall of 2022, the hype and hysteria about artificial intelligence has been dialed up to 11. ChatGPT, Bard, and other LLMs (hereinafter, “chatbots”) have been seen as significant advances toward the holy grail of artificial general intelligence (AGI).

Chatbots are the result of artificial neural nets of unprecedented size, processing enormous samples of natural language—some say essentially the entire World Wide Web (or, at least, all of Wikipedia). At least up to a certain date:

Me: What is today’s date?

ChatGPT-3 [on 7/18/2023]: Today’s date is November 8, 2021.

ChatGPT-3.5 knows today’s date, but is still time-limited:

Me: What happened yesterday?

ChatGPT-3.5: I’m sorry for any confusion, but as of my last knowledge update in January 2022, I don’t have real-time information, and I’m unable to provide details about events that occurred after that date. To find out what happened yesterday, I recommend checking the latest news updates, online news websites, or other reliable sources for the most recent information.

Chatbots produce astonishingly convincing simulacra of natural language. Their outputs, however, are the result purely of pattern-matching. Chatbots are autocomplete on steroids. Chatbots “understand” nothing, even as they fluently respond to user prompts. It used to be said that natural language computer programs had syntax but not semantics. Chatbots have neither syntax nor semantics, only statistics.
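To make the “autocomplete on steroids” point concrete, here is a minimal sketch (in Python, purely illustrative) of next-word prediction driven by nothing but co-occurrence statistics—no syntax, no semantics, just counts. Real chatbots use vastly larger learned models, but the objective is the same: emit a statistically plausible next token.

import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which: pure pattern-matching over the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, n=8):
    words = [start]
    for _ in range(n):
        counts = following[words[-1]]
        if not counts:
            break
        # Sample the next word in proportion to how often it followed.
        words.append(random.choices(list(counts), weights=counts.values())[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the cat sat on the rug . the dog"

The program has no idea what a cat or a rug is; it only knows which strings tended to follow which.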

• • • • • •

The notion that large language models might in some way resemble or parallel what’s actually going on in a real human brain is perhaps suggested by the phrase “artificial neural net[work].”

The idea that so-called artificial neural networks bear any resemblance to natural neural networks (in plain words, actual brains) is almost laughable. In artificial neural nets (including those in large language models), the “neurons” and the “synapses” are represented by one or two parameters (and the pattern of the network itself largely by a single number identifying the strength of the connection between pairs of neurons). Detailed biological models of actual neurons and synapses can consist of hundreds or even thousands of values for each neuron and each synapse—specifying, for example, the firing rate, the refractory period, and the strength of each neurotransmitter emitted—and all artificial neural nets omit entirely attempting to model the activity of the glial cells, which in actual brains influence neurons hormonally in ways that are scarcely understood.

In other words, although the latest LLMs are impressively large and their output impressively fluent, their models are oversimplified by two, three, or perhaps even more orders of magnitude. In the 1940s, artificial networks were said to be “modeled” on actual brains. Nowadays, they are only said to be “inspired” by natural neural networks—a word so vague as to disclaim any accuracy at all.
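For comparison, here is everything an “artificial neuron” amounts to in code—a sketch of the standard textbook formulation, in which each “synapse” is a single floating-point weight:

import math

def artificial_neuron(inputs, weights, bias):
    # One number per "synapse" (the weight), one bias term, and a fixed
    # squashing function. No firing rates, refractory periods,
    # neurotransmitters, or glial influence are modeled.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Three "synapses" reduced to three floats:
print(artificial_neuron([0.5, -1.2, 0.3], [0.8, 0.1, -0.4], bias=0.05))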

To the extent that anyone attempts to model language beyond the level of neurons and synapses, they would somehow have to map the semantics of thought onto the syntax of speech (and written language). But, as we’ve seen, large language models have neither syntax nor semantics. We have no idea how it all works, and we have no idea how to model it, either.

• • • • • •

That chatbots respond to prompts in natural language with natural language is the foundation of the illusion that there’s anything akin to comprehension occurring. But consider the psychological effect of reading this string:

100 000 010 000 111 000 101 000 110 000 011 001

Does this mean anything to you? Me neither. Now apply the following table:

[blank] = 000, [period] = 001, am = 010, body = 011, I = 100, just = 101, my = 110, not = 111

The string above can now be interpreted as:

I am not just my body.

Reading this sentence has quite a different effect. But from the point of view of the programming of a chatbot, they are exactly equivalent.
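Performing that substitution is a three-line mechanical exercise—which is precisely the point. A quick illustrative sketch:

code_table = {
    "000": " ", "001": ".", "010": "am", "011": "body",
    "100": "I", "101": "just", "110": "my", "111": "not",
}

bits = "100 000 010 000 111 000 101 000 110 000 011 001"
# Mechanical substitution, with no grasp of what the words mean:
print("".join(code_table[group] for group in bits.split()))
# -> I am not just my body.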

The training of a chatbot doesn’t even attempt to address what any of those words mean—that is, their referents in the real world. Back when AIs were programmed with some attempt to incorporate how sentences were constructed and interpreted, this was known as the syntactic bubble. As an example, let’s try to understand what “body” means solely by following Merriam-Webster. (We’ll ignore the fact that, whenever we look a word up, we have to determine the appropriate sense to choose the appropriate definition, which is sometimes unclear.) Our paper chase begins like this:

  • Body: the organized physical substance [of an animal or plant either living or dead].
  • Organized: having a formal organization to coordinate and carry out activities.
  • Formal: belonging to or constituting the form or essence of a thing; relating to or involving the outward form, structure, relationships, or arrangement of elements rather than content.
  • Organization: the condition or manner of being organized.

Well, that didn’t take long, did it? We’re caught in a loop from which we can never escape. (This is just the first loop encountered in chasing down “body.” Another loop goes from “physical” to “material” and back again. Nor was “body” chosen because it was expected to be particularly difficult; you can conduct the same exercise with any word at all.)
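The paper chase can even be automated. In this toy sketch (definitions abridged from the entries above), following the definitions mechanically runs straight into the loop:

definitions = {
    "body": ["organized", "physical", "substance"],
    "organized": ["formal", "organization"],
    "formal": ["form", "structure", "elements"],
    "organization": ["organized"],  # ...and we're back where we started
}

def chase(word, seen=()):
    if word in seen:
        print(" -> ".join(seen + (word,)), "(loop!)")
        return
    for term in definitions.get(word, []):
        chase(term, seen + (word,))

chase("body")
# body -> organized -> organization -> organized (loop!)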

Nowadays, this problem might be dubbed the chatbot bubble. Chatbots no more understand the prompts to which they respond and the outputs they produce than the printer that disgorges your store receipt is genuinely appreciative when it thanks you for your purchase.

Why do we find chatbots so convincing that some people assert that they have already achieved sentience and others actually believe that they share a love with their automated interlocutors?

Well, we’re easily fooled. Have you ever answered a phone call and, thinking you were listening to a real person, tried to reply to a recording? I know I have.

Almost immediately after birth, people normally develop what’s known, awkwardly and unfortunately, as theory of mind. (Hereinafter, I’ll refer to this instead as a mind model.) This is the irresistible sense or intuition that other people (and most animals) experience, well, experience, just as we ourselves do—thoughts and feelings, pleasure and pain, hunger and thirst, sensation and perception. We make this inferential leap long before we have any intellectual or philosophical justification for it (or the chance to dispute it—as do eliminative materialists).

Since the invention of written language, people have made the same inference on much narrower grounds—we infer sentience from sentences alone. (Even if we don’t believe that the cash register itself is sentient, we know that someone once wrote the words “THANK YOU” that the cash register mindlessly spits out on the slip of paper you get.) Very recently, we’ve been engaging in email and text message dialogs with people we’ve never met or even seen, and we inevitably infer that our conversations are also with actual people. Now, when we engage in a dialogue with a chatbot and it replies with uncanny plausibility, we irresistibly make the same inference.

This effect is not entirely accidental. ChatGPT, for example, was designed to reply in the first person as if it had both consciousness and intentionality. For example:

I am a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. I am trained on a massive amount of text data, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, I can provide summaries of factual topics or create stories.

Let’s see how much less compelling it becomes in the third person:

ChatGPT is a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. ChatGPT is trained on a massive amount of text data, and is able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, ChatGPT can provide summaries of factual topics or create stories.

Now, this reads as not coming from ChatGPT itself but from its designers—who of course were its real source. But the designers of ChatGPT have, even in their canned disclaimers, themselves attributed to ChatGPT awareness and intentionality, and invite us to do the same:

I am always learning. I try my best to provide accurate and informative answers to your questions, but I may sometimes make mistakes. If you ever notice that I have provided a false or misleading answer, please let me know and I will do my best to correct it.

I am aware [of the phenomenon of chatbot hallucination], and I am working to improve my ability to detect and avoid it.

The intentionality attributed to ChatGPT is, in short, intentional:

My use of language, such as saying “I apologize,” is part of the programmed response to provide a more human-like interaction and to acknowledge and address user concerns.

• • • • • •

When we watch a magician (or, as some performers more properly refer to themselves, an illusionist) perform, we don’t believe that we’re actually seeing real magic. In fact, the delight and astonishment comes from the paradoxical incongruity of what we have just seen with what we know of how the world works. (When an illusionist demonstrates and explains how an effect is achieved, the usual response is not a feeling of relief but of disappointment.) No one believes that, as illusions become increasingly elaborate and convincing, we’re getting any closer to achieving actual magic. Why, then, would we believe that as chatbots become more fluent and convincing we’re getting any closer to actual intelligence?

• • • • • •

A recurrent claim is that, as chatbots incorporate larger and larger neural nets with deeper and deeper layers, digest greater and greater amounts of text, and receive more and more feedback from users and others, they will become somehow “better.” To the extent that “better” means more convincing, improvements are likely to be imperceptible—since their output is already convincing enough, thank you very much. But little or no attempt is actually being made to verify the factuality of the source material or what is emitted. ChatGPT contradicts itself about this (notice the use of the passive voice to avoid the assumption of responsibility):

…[E]fforts have been made to ensure the accuracy of the information…provide[d.] …[T]he selection of books, articles, and websites is typically based on a combination of factors, including considerations of reliability, diversity, and relevance. OpenAI aims to include sources that are widely recognized as reputable and credible within their respective fields.

But:

…[T]he training process typically involves using as much text data as possible rather than a manual selection process.

• • • • • •

Whatever intelligence is displayed by a chatbot is the intelligence embodied in the source texts that it has ingested. Imagine a chatbot entirely based on Wikipedia. As a recent article makes clear (1), Wikipedia is ChatGPT’s largest single source, and the relationship between Wikipedia and chatbots such as ChatGPT is more intimate and complicated than perhaps has been recognized. Wikipedia provides an application programming interface (API), allowing programmers to pull in a particular entry automatically for digestion and incorporation into a chatbot’s knowledge base, or just for on-the-fly summarization or paraphrase.
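For instance, fetching an entry programmatically takes only a few lines (a sketch using Wikipedia's public REST API and the third-party requests package; the endpoint is current as of this writing):

import requests

def wikipedia_summary(title):
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    resp = requests.get(url, headers={"User-Agent": "demo-script/0.1"})
    resp.raise_for_status()
    return resp.json()["extract"]  # plain-text summary of the entry

print(wikipedia_summary("Large_language_model")[:200])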

In this case, why would anyone attribute intelligence to such a chatbot, rather than simply to the authors of the encyclopedia itself? And if a chatbot digested two such encyclopedias? Three? Three thousand? At what point should the attribution of intelligence shift from the authors of the sources to the chatbot itself?

Another major source is The New York Times (as well as other newspapers and magazines). In fact, the Times recently sued OpenAI (the creator of ChatGPT) and Microsoft (the creator of Bing and Bing Chat) for, in effect, plagiarism (2).

ChatGPT, however, argues both sides of the question of whether it displays genuine intelligence with equal conviction. The following are excerpts from much longer and more detailed text responses to prompts.

ChatGPT displays artificial intelligence (AI) because it is an advanced language model that can understand and generate human-like text responses.

But:

…[W]hile ChatGPT is an impressive language model capable of generating human-like text responses, it falls short of displaying true artificial intelligence due to its lack of understanding, limited domain, contextual limitations, inability to learn, and potential biases and errors.

ChatGPT will agreeably argue both sides of almost any dispute you choose. Try it yourself with your own favorite controversy.

• • • • • •

In the natural world, we usually recognize intelligence only in animals with a brain and, of course, in the products of such intelligence, such as the good design of a tool. The lives of animals (hereinafter referring to animals with a brain) display two fundamental attributes:

  • Embodiment: animals have bodies equipped with senses to perceive and mechanisms to respond and behave. Their engagement with the world is consistent with the body they actually have. An octopus, for example, uses all eight of its arms, but a chimpanzee doesn’t try to make use of the four limbs it doesn’t have.
  • Embeddedness: animals are alive in the world, with which they engage and interact.

In addition to a mind model, animals behave as if they have a model of the world, based on the following basic attributes of reality:

  • Tridimensionality: Reality subtends three spatial dimensions.
  • Extensionality: Objects and animals have extent in each of the three spatial dimensions (even if one or even two dimensions may be minimal, as is the case of a sheet of paper or a hair).
  • Time: Reality passes continuously in time from the past to the present and into the future.
  • Continuity: Over time, and to a greater or lesser extent, things and processes persist from one moment to the next, with gradual change the norm and abrupt change the exception.
  • Regularity and Periodicity: A great many natural phenomena exhibit regular and often periodic change, such as the day/night cycle; waking and sleeping; needs such as hunger and thirst alternating with satiety following satisfaction; the lunar cycle; the seasons; birth, growth, maturity, and death; and the like.
  • Uniformity: To a greater or lesser extent, things and processes are more or less the same everywhere.
  • Consistency: To a greater or lesser extent, things and processes do not exhibit paradoxical properties.
  • Causality: Events in the present are contingent upon and caused by events in the past; events in the future will be contingent upon and caused by previous events.

Other basic attributes of an animal’s reality include, among others, the following:

  • Affordance: Everything in the world, natural and artificial, has properties and features that can be used to accomplish one or more goals. For example, a door handle can be used to open a door, and it can also be used to hang your hat.
  • Choice: To a greater or lesser degree, an animal, having agency, can choose its behavior, based on its own goals and its knowledge of its situation.

Further, animals become familiar with certain ubiquitous characteristics of their environments, including such features as gravity, light, sound, solidity, liquidity, gaseousness, the flow of liquids (rivers, waves, tides, rain) and gases (winds), and so on.

All of these, and much more, are reflected in a world model that underlies an animal’s implicit understanding of reality. The mind model is a fundamental part of the world model. This world model is much deeper and more wide-ranging than any enumerable collection of facts could possibly be. Our world model and our ability to engage with the world based upon it arise almost immediately after birth, and may even be built into the brain by evolution, without the need for any experience at all to trigger them into being. In the case of humans, this implicit world model almost always comes to include language, explicit education, and training. To date, all attempts to incorporate a world model into an AI have failed.

To give an example of implicit knowledge: you intuitively know that a person can’t touch their left elbow with their left hand. What does ChatGPT-3 think?

Yes, a person can touch their left elbow with their left hand by crossing their left arm over their chest and reaching their left hand towards their left elbow.

ChatGPT-3.5 does a good deal better:

No, it is physically impossible for a person to touch their left elbow with their left hand. The range of motion of the human arm does not allow for the hand to reach the same-side elbow without external assistance or manipulation.

• • • • • •

If a chatbot—necessarily disembodied and unembedded—can’t achieve actual intelligence, what about a robot? At least some AI researchers believe that actual computerized electromechanical robots can solve the dual fundamental requirements of embodiment and embeddedness. How’s that been going?

Boston Dynamics appears to have the most advanced robots. In one demonstration (3), the robot runs and jumps when it could simply walk, seeming to be more concerned with demonstrating its parkour skills (parkour being the sport of traversing obstacles in urban environments by running, climbing, and jumping athletically) than anything else.

Perhaps the most conspicuous deficiency in such robots is their silence. No attempt has been made to incorporate a chatbot into one—and it’s entirely unclear how this could be done. In the demonstration of the Boston Dynamics robot cited earlier, a person uses an app to send instructions to the robot, and the app screen is pointedly left unshown.

By its very nature, a chatbot—being unembodied and unembedded—has a timeless view from nowhere and everywhere, whereas a robot is necessarily in a particular place and a particular position at a particular moment in time.

• • • • • •

In our intuitive apprehension of intelligence, it’s often depicted as a kind of spotlight (or, lately, laser) that can, with equal facility, illuminate and penetrate any subject to which attention is directed. This is much like the historical misconception of vision as a force originating in the eyes, like Superman’s X-ray vision. But intelligence is no such faculty. We’re better at understanding some things than others; our intelligence works better on some problems than others. My favorite example of this is the Monty Hall problem (4). But this is hardly the only example. Popular websites list the top 10 fallacies, or the top 16 or 24 or 26. Wikipedia’s list contains over fifty unenumerated fallacies in several different categories.
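The Monty Hall result is easy to verify empirically even when intuition rebels—a short illustrative simulation:

import random

def play(switch):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that hides a goat and isn't the contestant's pick.
    opened = next(d for d in doors if d != pick and d != car)
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"switch={switch}: win rate {wins / trials:.3f}")
# switch=False: ~0.333; switch=True: ~0.667 -- switching doubles your odds.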

• • • • • •

The concept of artificial intelligence was originally divided into two categories, weak and strong. The notion of strong artificial intelligence was one in which an AI would not only imitate a person but actually have conscious experience as does a person. Of course, eliminative materialists don’t believe that people have conscious experience, either, so this has never been a problem for them. Except for those who believe in the possibility of uploading a mind to a computer—an unlikely prospect—most AI researchers have shifted the distinction from weak/strong to narrow/general.

The concept of narrow intelligence is, well, self-contradictory. As we’ve already seen, the entire idea of intelligence is that it’s a general (even if not unlimited) ability to deal flexibly and creatively with novel, challenging situations. The closest natural equivalent to narrow artificial intelligence is perhaps people who used to be called idiot savants—now more politely just savants—although of course savants have enough broad natural intelligence to navigate their own worlds reasonably well. More recently, the term machine learning is often used to moot any claim of any kind of intelligence at all.

The distinction now is between good old-fashioned programming—which involves encoding explicit instructions for automating a task—and a computer program that, one way or another, learns to do whatever it’s supposed to. Before chatbots, probably the best-known example of machine learning was computer chess.

Because chess has traditionally been seen as a quintessential example of human intelligence, it’s likely that innumerable early computer programmers tried their hand at writing a program that could play the game. Before the days of machine learning, computer chess programs were written explicitly, and they were quite poor. But, with the incorporation of machine learning, a program eventually defeated the reigning world champion, and nowadays chess programs routinely defeat international grandmasters. Despite this, perhaps surprisingly, chess is now more popular than ever. One program, Stockfish, considered the best in the world, is even used to rate the performance of players in real time.

Make no mistake, however: the rules of the game still had to be explicitly programmed into Stockfish and every other chess program. Without further programming and training, not even Stockfish could play any of the hundreds of chess variants that have been devised.

Ironically, although ChatGPT-3 claims to be able to play chess, it can’t, as demonstrated in this simple example (I’ve edited the moves to conform to standard notation).

Me: Let’s play chess.

ChatGPT-3: Sure, I’d be happy to play a game of chess with you! Shall we start?

Me, playing White: e2 e4

ChatGPT-3, playing Black: e7 e5

White: N g1 h3

Black: N b8 c6

White: N b1 a3

Black: N g8 f6

White: f2 f3

Black: d7 d6

White: d2 d3

Black: e7 e6

Although, through its first four moves, ChatGPT-3 appears to know how to play, on its fifth move it tries to move a pawn to e6 from e7—a square that pawn vacated on Black’s very first move!

With ChatGPT-3.5, the response is equally absurd. After I play e4, it responds:

Now, it’s my move. I’ll respond with 1…c5, the Sicilian Defense:

But on the board it displays, it has made two moves, not just one (both e5 and c5)!

Other published examples of ChatGPT playing chess include it making impossible and illegal moves such as taking its own pieces.
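A conventional chess engine, by contrast, cannot make such errors, because the rules are explicitly encoded. As an illustration (using the third-party python-chess package), replaying the transcript above and testing ChatGPT's fifth move:

import chess

board = chess.Board()
# The game as transcribed above, in coordinate (UCI) form:
for uci in ["e2e4", "e7e5", "g1h3", "b8c6", "b1a3", "g8f6",
            "f2f3", "d7d6", "d2d3"]:
    board.push(chess.Move.from_uci(uci))

bad_move = chess.Move.from_uci("e7e6")  # ChatGPT-3's fifth move as Black
print(board.is_legal(bad_move))         # False -- no pawn remains on e7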

Although ChatGPT can apparently converse plausibly on any topic, including the rules of chess, it is as narrow a machine-learning program as any chess program. Its words are no more meaningful than N b8 c6, and it is no more a step toward artificial general intelligence than Stockfish—which can’t even play Tic-Tac-Toe—is.

• • • • • •

A major problem with machine learning programs is their opacity—they are completely and fundamentally black boxes. As ChatGPT helpfully explains:

A black box typically refers to a device or system that is opaque or mysterious in its inner workings or processes.

The term “black box” in this sense is from World War II, referring to secret electronic navigational devices literally housed in a non-reflective black housing (5).

As explained by Melanie Mitchell in her book Artificial Intelligence: A Guide for Thinking Humans, machine learning programs aren’t learning what we think they’re learning. The author gives several compelling examples, including a school bus identified by a machine learning program as an ostrich. The same program has also identified completely random television “snow” as a robin, a cheetah, an armadillo, a panda, a centipede, a peacock, a jackfruit, and a bubble, and abstract geometric patterns as a penguin, a starfish, a baseball, a guitar, a railroad car, and a remote control.

This article appeared in Skeptic magazine 29.1

Unlike traditional computer programs—which are opaque enough even when heavily documented and commented by their programmers—large language models consist virtually entirely of parameters (that is, numbers). For example, GPT-3 (a version of ChatGPT) contains 175 billion parameters based on a training corpus of about 500 billion tokens (words and parts of words such as the plural S), and uses a context window of 2,048 tokens in predicting the next word.
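A little arithmetic conveys the scale (the two-byte figure below assumes 16-bit parameters, an assumption for illustration rather than a published detail):

params = 175e9          # parameters in GPT-3
tokens = 500e9          # training tokens
bytes_per_param = 2     # assuming 16-bit floating point

print(f"~{params * bytes_per_param / 1e9:.0f} GB just to store the weights")  # ~350 GB
print(f"~{tokens / params:.1f} training tokens per parameter")                # ~2.9

Imagine trying to manually trace and understand the reason a chatbot produces a particular response to a particular prompt! Given this overwhelming complexity and such a daunting task, chatbots can only be regarded as oracles.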

[The term] oracle refers to a system or source of information that is relied upon for its accuracy or expertise, even though the inner workings or processes are not fully comprehensible or transparent. It is essentially a trusted authority or source of knowledge that provides guidance or answers to questions, without revealing its underlying methods or reasoning. This concept is often used metaphorically to describe situations where information or advice is accepted without full understanding or scrutiny.

Thank you, ChatGPT.

As a step toward artificial general intelligence, chatbots are a blind alley leading to a dead end.

About the Author

As a skeptical observer of the pursuit of artificial intelligence and other subjects, Peter Kassan has been a regular contributor to Skeptic since 2006. Over his long career in the software products industry, he has held positions at the technical, managerial, and executive levels. Kassan is the author of several software patents. He can be reached at kassan.peter@gmail.com.

References
  1. https://bit.ly/3RYnkzh
  2. https://bit.ly/3GZ3z4B
  3. https://bit.ly/3tFxuM3
  4. https://bit.ly/48qNm3W
  5. https://bit.ly/48eCyWO
Categories: Critical Thinking, Skeptic

Astronauts could drink their own urine with water-recycling spacesuit

New Scientist Feed - Thu, 07/11/2024 - 10:00pm
When astronauts go on a spacewalk, their urine is collected by what is essentially a large diaper before being thrown away, and they have less than a litre of drinking water available - but a new kind of spacesuit could solve both issues
Categories: Science

Resources on Mars Could Support Human Explorers

Universe Today Feed - Thu, 07/11/2024 - 9:52pm

In the coming decades, multiple space agencies and private companies plan to establish outposts on the Moon and Mars. These outposts will allow for long-duration stays and astrobiological research, and will facilitate future Solar System exploration. However, having crews operating far from Earth for extended periods will also present some serious logistical challenges. Given the distances and costs involved, sending resupply missions will be both impractical and expensive. For this reason, relying on local resources to meet mission needs—in-situ resource utilization (ISRU)—is the name of the game.

The need for ISRU is especially important on Mars, as resupply missions could take 6 to 9 months to get there. Luckily, Mars has abundant resources that can be harvested and used to provide everything from oxygen, propellant, and water to soil for growing food and building materials. In a recent study, a Freie Universität Berlin-led team evaluated the potential of harvesting resources from several previously identified deposits of hydrated minerals on the surface of Mars. They also presented estimates of how much water and minerals can be retrieved and how they may be used.

The team was led by Christoph Gross, a Postdoctoral researcher with the Planetary Sciences and Remote Sensing Group at the Institute of Geological Sciences, Freie Universität Berlin. They were joined by researchers from the SETI Institute, NASA’s Ames Research Center, the Institut d’Astrophysique Spatiale, and the Institute of Space Systems at the German Aerospace Center (DLR). Their research paper, “Prospecting in-situ resources for future crewed missions to Mars,” will be published in the October 2024 issue of Acta Astronautica.

The MOXIE unit is being placed into the Perseverance rover. Courtesy NASA/JPL.

As the authors note, NASA and other space agencies are invested in ISRU technologies to significantly reduce the overall mass that must be sent to the Moon or Mars to support human exploration efforts. In recent years, this has led to experiments like the Mars Oxygen In-Situ Resource Utilization Experiment (MOXIE) on NASA’s Perseverance rover, which produced oxygen gas from Mars’ atmospheric carbon dioxide. The ESA is also preparing an ISRU Demonstration Mission to demonstrate that water and oxygen can be produced from water ice harvested on the Moon.

Mars Express mineralogy maps. Credit: ESA/CNES/CNRS/IAS/UP-S, Orsay; NASA/JPL/JHUAPL/MOLA

These resources would have applications for life support systems, ensuring mission crews have breathable air and water for drinking and irrigation. However, they also have applications for power and propulsion, providing hydrogen gas for fuel cells or reactors and being used in combination to create liquid hydrogen (LH2) and liquid oxygen (LOX) propellant. Most of the water on Mars today is concentrated in the polar ice caps and permafrost, or in pockets of hydrated minerals where water once flowed on the surface.

For the sake of their study, Gross and his colleagues focused on hydrated mineral sites since they offer the potential for water extraction directly at the surface and at lower latitudes. But as Gross told Universe Today via email, these deposits also have potential resource applications that go beyond just water:

“The hydrated minerals on Mars are the largest water reservoir on Mars known to date (mainly sulphates and phyllosilicates). Water can relatively easily [be] extracted from sulphates and, as described in the paper, the minerals can also be used as fertilizer for food production. The phyllosilicates could be used as building material or, for example, for ceramics. Water is the most important resource, especially [for] propellant production. This may be more interesting for Mars due to the distance to Earth, gravity, etc.”

Next, Gross and his colleagues assessed different geographical locations where hydrated minerals have been identified based on data obtained by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) instrument aboard NASA’s Mars Reconnaissance Orbiter (MRO). This included Mawrth Vallis, an ancient flood channel that opens into the Chryse Planitia plains in Mars’ northern hemisphere, and Juventae Chasma, a 5 km (~3 mi) deep basin located north of Valles Marineris.

Water detection under the south pole of Mars. Credit: Context map: NASA/Viking; THEMIS background: NASA/JPL-Caltech/Arizona State University; MARSIS data: ESA/NASA/JPL/ASI/Univ. Rome; R. Orosei et al 2018

“Regions hosting a variety of different materials may be interesting,” said Gross. “Then, the site must be easily accessible (not in a canyon, etc.), and it should be close to interesting science sites. I would also support the idea of having a base in equatorial regions where the temperatures are not too cold. And there should be enough space around the base to grow with follow-up missions. Meridiani Planum is a hot candidate. We shall try to constrain the resources there, too.”

Gross and his colleagues also recommended how these resources should be extracted. According to the authors, the dehydration of mono- and poly-hydrated sulfates is theoretically the best approach, since several relatively straightforward, fast, and energy-efficient methods exist for doing this. They also recommend that robotic missions be sent in advance of astronauts to scout, assess, and begin harvesting and processing these resources in anticipation of their arrival.
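To give a rough sense of the quantities involved, here is an illustrative calculation using epsomite (MgSO4·7H2O), a common hydrated magnesium sulfate, as a stand-in example; the study's specific mineral inventories are not reproduced here.

M_H2O = 18.02                     # g/mol
M_MgSO4 = 120.37                  # g/mol
M_epsomite = M_MgSO4 + 7 * M_H2O  # ~246.5 g/mol

water_fraction = 7 * M_H2O / M_epsomite
print(f"Water by mass: {water_fraction:.1%}")                        # ~51%
print(f"Water from 1 tonne of ore: {1000 * water_fraction:.0f} kg")  # ~512 kg

In other words, roughly half the mass of such a mineral is recoverable water, which is why sulfate deposits are attractive ISRU targets.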

“Robotic precursor missions could start mining and refining the resources, especially for propellant production,” said Gross. “NASA and private companies are conducting many studies concerning this point. Also, for example, the robotic construction of habitats or the pre-production of oxygen are conceivable projects.”

This analysis presents new possibilities for exploration and long-term habitats on Mars. Although the polar regions are seen as a good place for building future habitats, mainly because of the abundant frozen water they offer access to, extracting this ice (especially from deep underground sources) will be expensive and restrictive. The possible use of hydrated minerals not only offers an alternative for ISRU operations on Mars but also opens sites in the equatorial region to exploration and habitat creation.

Further Reading: Acta Astronautica

The post Resources on Mars Could Support Human Explorers appeared first on Universe Today.

Categories: Science

A comprehensive derivative synthesis method for development of new antimicrobial drugs

Matter and energy from Science Daily Feed - Thu, 07/11/2024 - 6:56pm
A method to screen a wide variety of drug candidates without laborious purification steps could advance the fight against drug-resistant bacteria.
Categories: Science

Artificial intelligence speeds up heart scans, saving doctors' time, and could lead to better treatment for heart conditions

Computers and Math from Science Daily Feed - Thu, 07/11/2024 - 6:56pm
Researchers have developed a groundbreaking method for analysing heart MRI scans with the help of artificial intelligence, which could save valuable time and resources, as well as improve care for patients.
Categories: Science

Scientific definition of a planet says it must orbit our sun; A new proposal would change that

Space and time from Science Daily Feed - Thu, 07/11/2024 - 6:55pm
The International Astronomical Union defines a planet as a celestial body that orbits the sun, is massive enough that gravity has forced it into a spherical shape, and has cleared away other objects near its orbit around the sun. Scientists now recognize the existence of thousands of planets, but the IAU definition applies only to those within our solar system. The new proposed definition specifies that the body may orbit one or more stars, brown dwarfs or stellar remnants and sets mass limits that should apply to planets everywhere.
Categories: Science

User control of autoplay can alter awareness of online video 'rabbit holes'

Computers and Math from Science Daily Feed - Thu, 07/11/2024 - 6:55pm
A new study suggests that giving users control over the interface feature of autoplay can help them realize that they are going down a rabbit hole. The work has implications for responsibly designing online content viewing platforms and algorithms, as well as helping users better recognize extreme content.
Categories: Science

Ultrasound technology can be used to boost mindfulness, study finds

Matter and energy from Science Daily Feed - Thu, 07/11/2024 - 6:55pm
In a new study, researchers used low-intensity ultrasound technology to noninvasively alter a brain region associated with activities such as daydreaming, recalling memories and envisioning the future.
Categories: Science

Engineers' probe could help advance treatment for spinal cord disease, injury

Matter and energy from Science Daily Feed - Thu, 07/11/2024 - 6:55pm
Neuroscientists have used a nanosized sensor to record spinal cord neurons in free-moving mice, a feat that could lead to the development of better treatments for spinal cord disease and injury.
Categories: Science

Muscle machine: How water controls the speed of muscle contraction

Matter and energy from Science Daily Feed - Thu, 07/11/2024 - 6:55pm
The flow of water within a muscle fiber may dictate how quickly muscle can contract, according to a new study.
Categories: Science
