Reader Su called my attention to the AI website below, which you can join simply by giving your email and a password. And, of course, I couldn’t resist. Click on the link I just gave you, or on the screenshot below. The figures you can talk to (ask them anything!) include Charles Darwin, Florence Nightingale, Genghis Khan, Socrates, Aristotle, Isaac Newton, Galileo, Albert Einstein, Marie Curie, Catherine the Great, Alexander the Great, Alan Turing, Sigmund Freud, and Leonardo da Vinci. Clearly there are hours of fun to be had, and much time to be wasted. I asked a few of them questions, with the answers reproduced below. You’ll have to click on the conversations to enlarge them.
I started with Darwin, of course, and asked him about speciation. He clearly knew much more about species and speciation than he discussed in The Origin. His definition of species at the bottom is spot on. Click to enlarge:
I asked Freud if he was a fraud, and of course he was evasive:
Genghis Khan denied being a mass murderer:
I asked Socrates the Euthyphro question, and he gave a very good answer:
I asked Marie Curie how she felt about her work contributing to the atomic bomb. She gave a boilerplate answer, but it shows she (or AI) would make a good politician:
Asked about whether Gandhi was mistaken in insisting that India remain a country of simple farming and crafts, and not embrace modern technology, he equivocated.
This gives us a chance to revise history: to find out what can be, unburdened by what has been. Perhaps those of you with a philosophical bent would like to interact with philosophers of the past. In the meantime, I had better leave this site alone.
Often the answer to a binary question is “yes”. Is artificial intelligence (AI) a powerful and quickly advancing tool, or is it overhyped? Yes. Are opiates useful medicines or dangerous drugs? Yes. Is Elon Musk a technological visionary or an eccentric opportunist? Yes. This is because the world is usually more complex and nuanced than the simplistic thinking of a false dichotomy or false choice allows. People and things can contain disparate and seemingly contradictory traits – they can be two things at the same time.
This was therefore my immediate reaction to the question – are AI companions a potentially healthy and useful phenomenon, or are they weird and harmful? First let me address a core neuropsychological question underlying this issue – how effective are chatbot companions, whether for simple companionship, for counseling, or even as romantic partners? The bottom line is that the research consistently shows that they are very effective.
This is likely a consequence of how human brains are typically wired to function. Neurologically speaking, we do not distinguish between something that acts alive and something that is alive. Our brains have a category for things out there in the world that psychologists term “agents” – things that are acting on their own volition. There is a separate category for everything else: inanimate objects. There are literally different pathways in the brain for dealing with these two categories, agents and non-agents. Our brains also tend to overcall the agent category, and really only require that things move in a way that suggests agency (moving in a non-inertial frame, for example). Perhaps this makes evolutionary sense. We need to know, adaptively, what things out there might be acting on their own agenda. Does that thing over there want to eat me, or is it just a branch blowing in the wind?
Humans are also intensely social animals, and a large part of our brains are dedicated to social functions. Again, we tend to overcall what is a social agent in our world. We easily attribute emotion to cartoons, or inanimate objects that seem to be expressing emotions. Now that we have technology that can essentially fake human agency and emotion, this can hack into our evolved algorithms which never had to make a distinction between real and fake agents.
In short, if something acts like a person, we treat it like a person. This extends to our pets as well. So – do AI chatbots act like a real person? Sure, and they are getting better at it fast. It doesn’t matter if we consciously know the entity we are chatting with is an AI, that knowledge does not alter the pathways in our brain. We still process the conversation like a social interaction. What’s the potential good and bad here?
Let’s start with the good. We already have research showing that AI chatbots can be effective at providing some basic counseling. They have many potential advantages. They are good listeners, and are infinitely patient and attentive. They can adapt to the questions, personality, and style of the person they are chatting with, and remember prior information. They are good at reflecting, which is a basic component of therapy. People feel like they form a therapeutic alliance with these chatbots. They can also provide a judgement-free and completely private environment in which people can reflect on whatever issues they are dealing with. They can provide positive affirmation, while also challenging the person to confront important issues. At the very least, they can provide a cheap and readily available first line of defense.
Therapeutic relationships easily morph into personal or even romantic ones; in fact, this is always a very real risk for human counselors (a process called transference). So why wouldn’t this also happen with AI therapists? Indeed, it can be programmed to happen (a feature rather than a bug). All the advantages carry over – AI romantic partners can adapt to your personality, and have all the qualities you may want in a partner. They provide companionship that can lessen loneliness and be fulfilling in many ways.
What about the sexual component? Indicators so far are that this can be very fulfilling as well. I am not saying that anything is a real replacement for a mutually consenting physical relationship with another person. But as a second choice, it can have value. The most important sex organ, as they say, is the brain. We respond to erotic stimuli and imagery, and sex chatting can be exciting and even fulfilling to some degree. This likely varies from person to person, as does the ability to fantasize, but for some sexual encounters happening entirely in the mind can be intense. I will leave for another day what happens when we pair AI with robotics, and for now limit the discussion to AI alone. The in-between case is like Blade Runner 2049, where an AI girlfriend was paired with a hologram. We don’t have this tech today, but AI can be paired with pictures and animation.
What is the potential downside? That depends on how these apps are used. As a supplement to the full range of normal human interactions, there is likely little downside. It just extends our experience. But there are at least two potential types of problems here – dependence on AI relationships getting in the way of human relationships, and nurturing our worst instincts rather than developing relationship skills.
The first issue mainly applies to people who may find social relationships difficult for various reasons (but it could apply to most people to some extent). AI companions may be an easy solution, but the fear is that they would reduce the incentive to work on whatever issues make human relationships difficult, and reduce the motivation to do the hard work of finding and building relationships. We may choose the easy path, especially as functionality improves, rather than doing the hard work.
But the second issue, to me, is the bigger threat. AI companions can become like cheesecake – optimized to appeal to our desires, rather than being good for us. While there will likely be “health food” AI options developed, market forces will likely favor the “junk food” variety. AI companions, for example, may cater to our desires and our egos, make no demands on us, have no issues of their own we would need to deal with, and essentially give everything and take nothing. In short, they could spoil us for real human relationships. How long will it be before some frustrated person shouts in the middle of an argument, “Why aren’t you more like my AI girlfriend/boyfriend?” This means we may not have to build the skills necessary to be in a successful relationship, which often requires that we give a lot of ourselves, think of other people, put the needs of others above our own, compromise, and work out some of our issues.
This concept is not new. The 1974 movie, based on the 1972 book, The Stepford Wives, deals with a small Connecticut town where the men all replace their wives with robot replicas that are perfectly subservient. This has become a popular sci-fi theme, as it touches, I think, on this basic concept of having a relationship that is 100% about you and not having to do all the hard work of thinking about the needs of the other person.
The concern goes beyond the “Stepford Wife” manifestation – what if chatbot companions could either be exploited, or even are deliberately optimized, to cater to – darker – impulses? What are the implications of being in a relationship with an AI child, or slave? Would it be OK to be abusive to your AI companion? What if they “liked” it? Do they get a safe word? Would this provide a safe outlet for people with dark impulses, or nurture those impulses (preliminary evidence suggests it may be the latter). Would this be analogous to roleplaying, which can be useful in therapy but also can have risks?
In the end, whether or not AI companions are a net positive or negative depends upon how they are developed and used, and I suspect we will see the entire spectrum from very good and useful to creepy and harmful. Either way, they are now a part of our world.
The post AI Companions – Good or Bad? first appeared on NeuroLogica Blog.
Craters are a familiar sight on the lunar surface and indeed on many of the rocky planets in the Solar System. There are other circular features picked up in images from orbiters, but these pits are thought to be the collapsed roofs of lava tubes. A team of researchers has mapped one of these tubes using radar reflections and created the first 3D map of the tube’s entrance. Places like these could be ideal locations for research stations, protected from the harsh environment of an alien world.
Lava tubes have been hotly debated for the last 50 years. They are the result of ancient volcanic activity and develop when the surface of a lava flow cools and hardens. Below this, the molten lava continues to move and eventually drains away, leaving behind a hollow tunnel. Exploring these tunnels could let us learn more about the geological history of the Moon from the records preserved in the rocks.
The lava tubes have been the subject of analysis by NASA’s Lunar Reconnaissance Orbiter (LRO), which began its journey in 2009. Its purpose is to gather information about the Moon’s surface and environment, and to that end it carries a plethora of scientific equipment. LRO has been mapping the lunar surface with high-resolution imagery and measuring temperature, radiation levels, and water ice deposits – all with a view to identifying potential landing sites for future missions.
Artist’s rendering of Lunar Reconnaissance Orbiter (LRO) in orbit. Credit: ASU/LROC

A team of scientists from around the world has been working together to make a breakthrough in the quest to understand these tubes. The research was led by the University of Trento in Italy and the results published in Nature Astronomy. They have identified the first confirmed tunnel just under the surface of the Moon, which seems to be an empty lava tube. Until now, their existence was just a theory; now they are a reality.
The discovery would not have been possible without the LRO and its Miniature Radio-Frequency instrument. In 2010 it surveyed Mare Tranquillitatis – the location of Apollo 11’s historic lunar landing in 1969 – capturing data that included the region around a pit. As part of this new research, the data was reanalysed with modern signal-processing techniques. The analysis revealed previously unidentified radar reflections that are best explained by an underground cave or tunnel. Perhaps most exciting is that this is not just an underground tunnel on the Moon – it is an accessible one too.
Buzz Aldrin gazes at Tranquility Base during the Apollo 11 moonwalk in an image taken by Neil Armstrong. Credit: NASA

The discovery highlights the importance of continued analysis of historical data – even data from decades ago can hold hidden information that modern techniques can reveal. It also highlights the value of further remote sensing and lunar exploration from orbit to identify more lava tubes as potential safe havens for lunar explorers.
Travellers to the Moon can experience temperatures ranging from 127 degrees Celsius on the illuminated side down to -173 degrees on the night side. Radiation from the Sun can rocket – pardon the pun – to 150 times more powerful than here on Earth, and that’s not even considering the threat of meteorite impacts. On Earth we are protected from thousands of tonnes of this material thanks to the atmosphere, but there is no such protective shield on the Moon. Any structures built on the lunar surface must be able to withstand this hostile environment, but lava tubes make many of those problems naturally go away, making them a far safer and cheaper prospect for establishing a lunar presence.
Source : Existence of lunar lava tube cave demonstrated
The post The Entrance of a Lunar Lava Tube Mapped from Space appeared first on Universe Today.
The weirdest, creepiest, funniest, and just plain strange stories from the era of crewed space flight.
For years as an award-winning war reporter, Sebastian Junger traveled to many front lines and frequently put his life at risk. And yet the closest he ever came to death was the summer of 2020 while spending a quiet afternoon at the New England home he shared with his wife and two young children. Crippled by abdominal pain, Junger was rushed to the hospital by ambulance. Once there, he began slipping away. As blackness encroached, he was visited by his dead father, inviting Junger to join him. “It’s okay,” his father said. “There’s nothing to be scared of. I’ll take care of you.” That was the last thing Junger remembered until he came to the next day when he was told he had suffered a ruptured aneurysm that he should not have survived.
This experience spurred Junger—a confirmed atheist raised by his physicist father to respect the empirical—to undertake a scientific, philosophical, and deeply personal examination of mortality and what happens after we die. How do we begin to process the brutal fact that any of us might perish unexpectedly on what begins as an ordinary day? How do we grapple with phenomena that science may be unable to explain? And what happens to a person, emotionally and spiritually, when forced to reckon with such existential questions?
In My Time of Dying is part medical drama, part searing autobiography, and part rational inquiry into the ultimate unknowable mystery.
Sebastian Junger is the New York Times bestselling author of Tribe, War, Freedom, A Death in Belmont, Fire, and The Perfect Storm, and codirector of the documentary film Restrepo, which was nominated for an Academy Award. He is also the winner of a Peabody Award and the National Magazine Award for Reporting. Here is how a Wall Street Journal reviewer described him:
Sebastian Junger has lived multiple lives and almost died in many of them. There was his accident while working for a tree-felling company that inspired him to research a book on dangerous jobs, which ultimately became The Perfect Storm (1997). There was the time he almost drowned while surfing. Then there was his work as an embedded journalist in Afghanistan, where machine-gun fire missed him by inches. Later, there was the assignment he did not take, to war-torn Libya, which claimed the life of his frequent collaborator and close friend, the British photographer Tim Hetherington.
His new book is In My Time of Dying: How I Came Face-to-Face with the Idea of an Afterlife, a book-length memento mori: remember, you are going to die.
Shermer and Junger discuss:
If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.