David Lusseau always wanted to be a biologist. “Well, either biologist or clown,” he adds, “but I realized there was not much money in clowning.” When Marie the dolphin entered Lusseau’s life, she sealed the deal for him becoming a biologist. A bottlenose dolphin (Tursiops truncatus) who swam in the waters near the village of Cerbère on the border between France and Spain in the late 1980s, Marie set seventeen-year-old Lusseau on a path that would one day lead him to study social networks in her species. “When you look in the eyes of a dolphin, you realize there is a lot going on,” Lusseau says, reminiscing on his time with his cetacean friend. “It is something that is very hard to express or grasp or explain in a factual matter, but spending time with [Marie] got me interested in … trying to understand how dolphins work, [in what] I perceived as another intelligent species on the planet.”
As an undergraduate, Lusseau spent time as a research assistant working with a group studying bottlenose dolphins in Florida. When out in the water, he encountered dolphins swimming on their own or in pairs. On occasion he bumped into a trio, but dolphins always seemed to be doing their own thing, just in the company of one or two others. That view of dolphin sociality, or the lack of it, changed dramatically when Lusseau began his PhD research in the late 1990s at the University of Otago in New Zealand. His dissertation focused on conservation biology in bottlenose dolphins in a fjord called Doubtful Sound, but the social behavior of the dolphins there hit him like a ton of bricks. As soon as he got there, he encountered not lone dolphins, duos, or trios, but groups of thirty or more dolphins schooling and moving about in a coordinated manner. These were very different animals from the solo dolphins and very small dolphin groups he had studied in Florida.
Each day Lusseau rose at 4 a.m., grabbed some breakfast, swatted away an endless barrage of midges, and arrived at Doubtful Sound before the sun rose. He’d board a 14-foot boat, locate a group of dolphins, and do focal animal sampling, cycling through dolphins, each recognizable by natural markings on their dorsal fins, often from shark attacks. Doubtful Sound can be stunningly beautiful, but it is at a latitude called the “roaring forties” because of the strong winds from the west and six- to eight-foot waves at times, which make for rough going when watching dolphins from a boat.
As he spent time with the dolphins, Lusseau began thinking about how to understand their complex social dynamics, but he couldn’t quite figure out the best way to proceed. On one of his stints back at the University of Otago, Lusseau recalls reading a Proceedings of the National Academy of Sciences paper on social networks written by physicist Mark Newman and others. Soon after that, he emailed Newman, telling him, “I think you are doing really cool stuff and I can understand it, because you write so well. Would you like to have a look at what we’re doing?” Newman was interested. It wasn’t long before he and Lusseau were coauthoring papers on dolphin social networks. But before they penned any coauthored papers, Lusseau published a 2003 paper in the Proceedings of the Royal Society of London that is widely regarded as the first study explicitly on social networks in nonhumans.
Unlike animal social network papers in today’s journals, where readers are acquainted with how networks operate, to put readers in the right frame of mind in 2003, Lusseau opened his Royal Society paper using a strategy that Darwin had employed in On the Origin of Species. The idea was to introduce a phenomenon that readers already knew about (in Darwin’s case artificial selection, as in selection of different breeds of pigeons) and then make the case that what followed (natural selection), though it appeared radical, was really just another variety of what he had just discussed. In Lusseau’s paper, the opening sentences read: “Complex networks that contain many members such as human societies … the World Wide Web (WWW) … or electric power grids … permit all components (or vertices) in the network to be linked by a short chain of intermediate vertices.” And before readers knew it, they were learning about such social networks in dolphins.
Lusseau constructed dolphin networks based on thousands of observations, and one metric he looked at was network diameter, used here to mean the average shortest path between nodes. To introduce network diameter to readers, Lusseau first discussed psychologist Stanley Milgram’s “small world” research from the late 1960s. “The global human population seems to have a diameter of six,” wrote Milgram, “meaning that any two humans can be linked using five intermediate acquaintances.” The party version of Milgram’s small world is the parlor game “six degrees of Kevin Bacon.” The rules are simple: players choose a movie actor and then connect that actor to another that they played alongside in a film, repeating the process over and over, trying to link their original actor to movie star Kevin Bacon—who once quipped that he had “worked with everybody in Hollywood or someone who’s worked with them”—in no more than six connections. It turns out the dolphin small world in Doubtful Sound is smaller than the human one (including Kevin Bacon’s), both in the size of the network and in network diameter, the latter of which is approximately three, meaning any two dolphins in Doubtful Sound can be linked using two intermediate acquaintances.
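For readers who want to tinker, here is a minimal sketch of that averaged path-length measure, using the Python networkx library on an invented toy association network (not Lusseau’s actual data or code):

```python
# Toy sketch: "diameter" in the averaged sense used above, i.e., the
# mean shortest path between all pairs of individuals. The edge list
# is invented; an edge means two dolphins repeatedly schooled together.
import networkx as nx

edges = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C"),
         ("D", "E"), ("E", "F"), ("B", "F"), ("C", "F")]
G = nx.Graph(edges)

print(nx.average_shortest_path_length(G))  # averaged "small world" measure
print(nx.diameter(G))                      # strict longest-shortest-path diameter
```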
Lusseau wondered what would happen if the dolphin network was culled by, for example, shark predation. Using the network data he had collected, he built a computer algorithm that simulated predation by randomly removing 20 percent of the dolphins from the network. The small world of the dolphins, it turned out, was unaffected by such a reduction. But if instead of randomly selecting individuals to remove, Lusseau simulated removal of the 20 percent of dolphins who had the greatest number of ties to others, network diameter increased, which had the effect of slowing information transfer within the network.
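A hedged sketch of that removal experiment, again on stand-in data: the function below compares random culling with targeted removal of the best-connected individuals. Lusseau’s actual algorithm may well have differed in its details.

```python
import random
import networkx as nx

def avg_path_after_removal(G, fraction=0.2, targeted=False):
    """Remove a fraction of nodes (at random, or highest-degree first)
    and return the average shortest-path length of the largest
    remaining connected component."""
    H = G.copy()
    k = int(fraction * H.number_of_nodes())
    if targeted:
        ranked = sorted(H.degree, key=lambda nd: nd[1], reverse=True)
        victims = [node for node, _ in ranked[:k]]
    else:
        victims = random.sample(list(H.nodes), k)
    H.remove_nodes_from(victims)
    giant = H.subgraph(max(nx.connected_components(H), key=len))
    return nx.average_shortest_path_length(giant)

# On a random toy network, removing well-connected nodes typically
# stretches paths (slowing information transfer) far more than
# removing the same number of nodes at random.
G = nx.erdos_renyi_graph(60, 0.1, seed=1)
print("random removal:  ", avg_path_after_removal(G, targeted=False))
print("targeted removal:", avg_path_after_removal(G, targeted=True))
```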
As he came to know his dolphins better, Lusseau discovered that some individuals in Doubtful Sound give signals that affect group movement associated with finding new resources, including food. Side flopping, in which a dolphin leaps from the water and lands on its side, is seen only in males when they initiate a move to a new location, while upside-downing, in which an individual rolls onto its ventral side and slaps the water to signal an end to a group move, is seen almost exclusively in females. But only a few males do all the side flopping, and only a few females do all the upside-downing. Lusseau wanted to know if a network analysis would shed light on exactly which males and which females. It did. Males initiating and females terminating travel had higher betweenness—they were key hubs in this traveling/foraging network—than their non-signaling counterparts.
In a few populations of bottlenose dolphins on the other side of the planet, in Brazil, signaling and networking are not sometimes about feeding opportunities—they are always about them. And the dolphins have, rather remarkably, added humans to their feeding networks.
For more than three decades, ethologist Paulo Simões-Lopes has been studying dolphin populations in the lagoon systems along the coastline near Laguna, Brazil, about 800 kilometers south of São Paulo. The dolphins in nine populations along that stretch do something that no other dolphins—and almost no other animals, period—do. They not only network with each other, but cooperate with humans to secure more food for both themselves and their primate partner.
Each autumn, a huge mullet migration takes place in southern Brazil. Both the dolphins and the fishermen see the fish as prize prey. Up to fifty fishers, wading waist deep in very cold water, wait for the chance to cast large circular nylon nets called tarrafa over schools of mullet. The problem for the fishers is that the water is murky, and it is next to impossible to see the fish. The problem for the sixty or so dolphins at Laguna is that compared to their other prey, mullet are large and hard to catch. But dolphins aren’t especially troubled by murky water, as they detect mullet using echolocation, a built-in sonar system that would be the envy of most engineers.
Dolphins produce sound waves in their nasal sacs and focus those waves through fatty tissue and fluid in their foreheads. Once the sound waves are shot out into the water, they travel until they bump into an object, at which point they bounce back to the dolphins, who use their lower jaw as a receiver. From the lower jaw, the waves travel to the inner ear and then to the brain. Objects of different sizes and densities reflect back sound waves of different frequencies, and the dolphins use that information to “see” what is in the water around them. When their sonar detects mullet, dolphins signal fishers that the fish are present by curving their backs and then slapping their heads or their tails on the water surface. The fishers then cast their tarrafa and pull in loads of mullet. The confused mullet who escape the tarrafa often swim right into the mouths of waiting dolphins. It’s the perfect win-win situation.
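As a back-of-the-envelope aside (mine, not the author’s): the range to a target follows directly from the round-trip time of the echo, since sound travels at roughly 1,500 meters per second in seawater.

```python
# Echo ranging: the sound covers the distance to the target twice
# (out and back), so range = speed * delay / 2.
SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s, approximate

def target_range(echo_delay_s: float) -> float:
    return SPEED_OF_SOUND_SEAWATER * echo_delay_s / 2

print(target_range(0.02))  # an echo arriving after 20 ms -> ~15 m away
```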
Laguna newspapers from the late 1890s featured articles about this dolphin-human mutualism, and so Simões-Lopes knows that, at the very least, it has been going on for more than 130 years. And though many dolphins don’t signal fishers, every fisher knows which dolphins do. “It is famous [in southern Brazil],” Simões-Lopes says. “I grew up watching those dolphins … I would sit on a rock in the canal and watch for hours. I knew it was unusual … I knew there were dolphins in a big harbor farther south where dolphins and fishermen don’t interact.”
Today Simões-Lopes has a team of ten working with him, but he began on his own in 1988. Soon thereafter, he entered a PhD program and built his dissertation around his research on the dolphin-human foraging mutualism. Each day he brought a folding chair with him and set it up on a rock, watching the dolphins through his binoculars, taking photos—he had compiled a mug book with photos of all the dolphins in the lagoon—and filling notebook after notebook with data on dolphins signaling fishers.
Simões-Lopes began to know the fishers, and they began to know him. He also was starting to get a good feel for which dolphins at Laguna signaled the fishers and which did not. Not surprisingly, the fishers also kept tabs, telling Simões-Lopes about the “good dolphins” (who signaled fishers) and the “bad dolphins” (who did not). The fishers know not only which dolphins signal, but which dolphin will give which signal: “Each dolphin gives the signal in a different way,” one fisher said, “and we need to know [the different signals] in order to catch the fish.” Another fisher was more of a romantic, telling Simões-Lopes and his colleagues, “This is beautiful. It doesn’t happen everywhere.”
The more that Simões-Lopes thought about those “good” dolphins and “bad” dolphins, the more he wanted to understand them better. Years later Mauricio Cantor joined Simões-Lopes’s team; Cantor had worked with Hal Whitehead, a leader in early social network analysis. Simões-Lopes and Cantor decided that a network analysis might help them delve deeper into the between-species cooperation they observed on a daily basis. In 2008, they contacted David Lusseau, who had done the network studies on bottlenose dolphins in New Zealand, and asked if he would be interested in serving as a sort of conceptual consultant specializing in social networks. Lusseau was more than happy to join their team.
Simões-Lopes and his team assumed dolphins learn how to signal humans from other signalers they associate with, so for their social network analysis, they were especially interested in whether signaling dolphins preferred spending time with other signaling dolphins, both when they were chasing mullet into nets and, just as importantly, when they were not. To test whether there were cliques of signalers and cliques of dolphins who didn’t signal, Simões-Lopes’s team looked at clustering coefficients of sixteen cooperators and nineteen dolphins who did not signal and cooperate with fishers.
What they discovered were three cliques within the larger network of the thirty-five dolphins. Clique 1 had fifteen dolphins: each and every one of them cooperated with the local fishers. Dolphins in this clique associated with one another not just during the autumn mullet fishing season but the rest of the year as well. Clique 2 had a dozen dolphins, none of whom cooperated with fishers, and dolphins in this clique were not as well connected to one another as the individuals were in Clique 1. Clique 3 was made up of eight dolphins: seven never cooperated with fishers, but one—dolphin 20—did. And of all thirty-five dolphins in the network, it was dolphin 20 who spent the most time interacting across cliques, acting as what Simões-Lopes and his colleagues call a “social broker” between the signalers and non-signalers.
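For the curious, here is a hedged sketch of this kind of analysis: recovering cliques by community detection and flagging the cross-clique “social broker” by betweenness, using networkx on an invented miniature network. The team’s actual methods were more involved.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Invented miniature: two tight cliques bridged by one individual
# ("d20"), standing in for the Laguna structure described above.
G = nx.Graph()
G.add_edges_from(nx.complete_graph(["c1", "c2", "c3", "c4"]).edges)  # cooperators
G.add_edges_from(nx.complete_graph(["n1", "n2", "n3", "n4"]).edges)  # non-cooperators
G.add_edges_from([("d20", "c1"), ("d20", "c2"), ("d20", "n1"), ("d20", "n2")])

print(list(greedy_modularity_communities(G)))  # recovered cliques
print(nx.average_clustering(G))                # clustering coefficient of the toy network

bc = nx.betweenness_centrality(G)
broker = max(bc, key=bc.get)
print(broker, bc[broker])  # "d20" sits on the most between-clique paths
```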
This behavior is all wonderfully complex, and we humans—and I don’t just mean the artisanal fishers of Laguna—should be grateful to play a role in understanding it.
Excerpted and adapted by the author from The Well-Connected Animal: Social Networks and the Wondrous Complexity of Animal Societies by Lee Alan Dugatkin, published by The University of Chicago Press. © 2024 by Lee Alan Dugatkin. All rights reserved.
About the Author
Lee Alan Dugatkin is an evolutionary biologist and a historian of science in the Department of Biology at the University of Louisville. He is the author of sixteen books and more than 200 articles in such journals as Nature, The Proceedings of the National Academy of Sciences, and The Proceedings of the Royal Society of London. Dr. Dugatkin is a contributing author to Scientific American, The American Scientist, The New Scientist, and The Washington Post. His latest book is The Well-Connected Animal: Social Networks and the Wondrous Complexity of Animal Societies.
In this solo episode, Michael Shermer discusses the upcoming election, reflecting on the historical context of past elections and the political polarization that has intensified over the years.
The United States as Global Liberal Hegemon: How the U.S. Came to Lead the World examines America’s role as the global liberal hegemon. Using historical analysis to understand how the United States came to serve as the world leader, Goldberg explains why the role of a liberal hegemon is needed, asks whether the United States has the ability to fulfill this role, and weighs the pitfalls and liabilities of continuing in it. He also considers the impact that this role on the global stage has for the country as well as for individual citizens of the United States. Goldberg argues that the United States’s geographic location away from strong competitors, its role as the dominant economy for much of the 20th century, and its political culture of meritocracy all contributed to the United States taking this role in the 1940s. He also argues that the role of liberal hegemon has shifted to include not only being the international policeperson but also the world’s central banker, a role that at this time only the United States can fill.
Edward Goldberg is a leading expert on the area where global politics and economics intersect. He teaches International Political Economy at the New York University Center for Global Affairs, where he is an Adjunct Assistant Professor. He is also a Scholarly Practitioner at the Zicklin Graduate School of Business of Baruch College of the City University of New York, where he teaches courses on globalization. With over 30 years of experience in international business and as a former member of President Barack Obama’s election Foreign Policy Network Team, Dr. Goldberg is the author of Why Globalization Works For America: How Nationalist Trade Policies Destroy Countries and The Joint Ventured Nation: Why America Needs A New Foreign Policy. He is a much-quoted essayist and public speaker on globalization, European-American relations, and U.S. relations with Russia and China. He has commented on these issues on PBS, NPR, CBS, Bloomberg, and in The New York Times, The Hill, and the Huffington Post. His new book is The United States as Global Liberal Hegemon: How the U.S. Came to Lead the World.
Shermer and Goldberg discuss:
“In the 1940s, when America anointed itself hegemon, somewhat like in Great Britain in the nineteenth century, American foreign policy was largely, aside from Harry Truman and a few others, dominated by a group of men who generally all went to similar prep schools and graduated from Princeton, Yale, or Harvard. This has changed drastically. If there is one common domestic thread in American post-World War II history, it is how American society and political life has become noticeably more diverse.”
If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.
Seven creepy stories from seven listeners, and seven guesses by me.
All across America, a storm is gathering: from book bans in school libraries to anti-trans laws in state legislatures; fire-bombings of abortion clinics and protests against gay rights. The Christian Right, a cunning political force in America for more than half a century, has never been more powerful than it is right now—it propelled Donald Trump to power, and it won’t stop until it’s refashioned America in its own image.
In Wild Faith, critically acclaimed author Talia Lavin goes deep into what motivates the Christian Right, from its segregationist past to a future riddled with apocalyptic ideology. Using primary sources and firsthand accounts, Lavin introduces you to “deliverance ministers” who carry out exorcisms by the hundreds; modern-day, self-proclaimed prophets and apostles; Christian militias, cults, zealots, and showmen; and the people in power who are aiding them to achieve their goals. Along the way, she explores anti-abortion terrorists; the Christian Patriarchy movement, with its desire to place all women under absolute male control; the twisted theology that leads to rampant child abuse; and the ways conspiracy theorists and extremist Christians influence each other to mutual political benefit.
From school boards to the Supreme Court, Christian theocracy is ascendant in America—and only through exploring its motivations and impacts can we understand the crisis we face. In Wild Faith, Lavin fearlessly confronts whether our democracy can survive an organized, fervent theocratic movement, one that seeks to impose its religious beliefs on American citizens.
Talia Lavin is the author of the critically acclaimed book Culture Warlords. She is a journalist who has had bylines in the New Yorker, the New Republic, the New York Times Review of Books, the Washington Post, and more. She writes a newsletter, The Sword and the Sandwich, which is featured in Best American Food and Travel Writing 2024. She is a graduate of Harvard University with a degree in comparative literature, and was a Fulbright scholar who spent a year in Ukraine. Her first book was Culture Warlords: My Journey Into the Dark Web of White Supremacy. Her new book is Wild Faith: How the Christian Right is Taking Over America.
Shermer and Lavin discuss:
If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.
The great auk (Pinguinus impennis) displayed in the Natural History Museum of Denmark stands erect on its pedestal, its great beak jutting forward, apparently fearless. It is possessed of a certain dignity and grace. It demands my attention. It was probably killed off in Iceland, where I come from, and was one of the last of its kind. For thousands of years, these large, flightless birds swam the extensive waters of the North Atlantic and made their nests on islands and skerries, where each pair laid and incubated a single, uniquely patterned egg per year. According to most accounts, the last of the great auks were slaughtered on Eldey, an island off the southwest coast of Iceland, in June 1844. About eighty taxidermic examples of great auks exist in various museum collections, and most of them came from Eldey.
Alongside the great auk displayed in Copenhagen are four large glass jars. One is labeled: Iceland 1844. These jars contain the viscera of great auks killed on that famous (or infamous) expedition to Eldey. These are not all the birds’ organs; some are stored in another seven jars elsewhere in the museum, out of the public eye, along with another stuffed great auk. At my request, a museum guide takes me to see this second bird. It is posed somewhat differently than the one on display. Its beak is open, as if ready to address the visitor. Unlike the first bird’s stark black-and-white plumage, this one looks grayish and rather dull. I am told it is a true rarity; it is in winter plumage, while most great auks were captured while breeding, in early summer. Perhaps this second bird was caged alive and slaughtered in winter. Perhaps it was kept as a pet for some months, like the great auk owned by the Danish polymath Ole Worm (1588–1654), one of the leading figures of the Nordic Renaissance. Worm personally owned three great auks, one of which he sometimes walked on a leash, and he made a fine drawing of it before adding it—stuffed—to his Wunderkammer, or cabinet of curiosities, a precursor to the modern museum.
In its imposing old building in Copenhagen, only a fraction of the Danish museum’s “curiosities” are on display. In full, the collection comprises millions of animals from around the globe, and boasts exemplars of several species that have become extinct in recent centuries—such as a well-preserved skull of a dodo (Raphus cucullatus)—as well as fossils of dinosaurs and other organisms from previous eras of the earth’s history. Here, in this old and venerable museum, it is easy to detect the ideas that lay behind the collecting of natural objects for the past three and a half centuries. Collecting was seen as a way to educate the populace of various European nations, whose empires extended around the world, about the progression of time and about their place in the expanding universe. Such collections demonstrated the might and extent of each empire, and the value of research: all things can be named, catalogued, and categorized systematically.
Is such an approach still valid in our current era, now termed the Anthropocene, or Human Age? In our time, the “natural” habitat of the planet has been radically refashioned by humans. Vital links between species, developed over eons, have been severed swiftly, fundamentally impoverishing the living world and posing a serious threat of the mass extinction of many species. How, I wonder, can such a process possibly be cataloged or categorized, given the speed of change and the complexities involved— and what would be the point?
The bird species that no longer exist had, and still have, a special attraction. They have much to teach us.
Extinction
I never saw a great auk growing up in Iceland, a land where they had once been quite common. Neither did the nineteenth-century British naturalists John Wolley and Alfred Newton.
Like their contemporaries, Wolley and Newton busily collected birds’ eggs and specimens, classifying and recording them in the fashion of the Victorian age. When they set off for Iceland in 1858, they hoped to visit Eldey Island and study the rare great auk. They hoped to observe its behavior and habits and, perhaps, bring home an egg, or a skin, or a stuffed bird or two for their own cabinets of curiosities—unaware of the fact that the species had already been hunted to extinction. When they left Victorian England for Iceland, they teased that this was a “genuinely awkward expedition.” And so it proved to be, in many ways. They never made it to Eldey. Like me, they never saw a great auk on Iceland, not even a stuffed one.
Prior to the killing of the last great auks, extinction was either seen as an impossibility or trivialized as a “natural” thing. The great taxonomist Carl von Linné, or Linnaeus (1707–78), imagined that a living species could never disappear; for evolutionary theorist Charles Darwin (1809–82), species would naturally come and go in the long history of life. The great auk brought home the fact that a species could perish quite quickly and, moreover, not naturally, but primarily as a result of human activities. No other extinction had been documented as carefully.
During their historic expedition to Iceland in 1858, Wolley and Newton collected impressions of great auk hunting through substantial interviews with the men who took part in the last hunts and the women who skinned and mounted the birds, along with records of the birds’ prices and sales to foreign collectors of “curiosities.” These impressions were preserved in the set of five handwritten notebooks Wolley titled the Gare-Fowl Books. Now archived in Cambridge University Library in England, their hundreds of pages are written in several languages (English, Icelandic, Danish, and German). As an anthropologist and an Icelander, once I had seen the Gare-Fowl Books, there was no turning back: I had to dive into the text and visit zoological museums and archives. For me, the great auk opened an intellectual window into ideas of extinction and their relevance to the current mass disappearance of species.
De-extinction
Many sightings of great auks were reported after 1844 on North Atlantic skerries in Iceland (1846, 1870), Greenland (1859 or 1867), Newfoundland (1852, 1853), and northern Norway (1848). Some of the reports were certainly apocryphal: people had mistaken another species for a great auk, or had seen what they wanted to see. Others were deemed credible and were probably true: evidence of a few dispersed pairs of birds continuing to breed on islands or skerries for a few years. Such tales were often unjustly dismissed, and unnecessarily strict standards of proof and corroboration were applied. The consensus among scholars today seems to be that the last living great auk was seen off Newfoundland in 1852.
Once it seemed clear that the last great auks were dead, museums and collectors around the world scrambled to acquire skins, eggs, and bones of the extinct bird. The Victorian obsession with collecting was past its peak, but anything relating to the great auk remained a prize. There are some eighty stuffed great auks in collections around the world, and an unknown number of preserved skins and viscera. Only about twenty-four complete skeletons exist, while thousands of loose bones (some with knife marks) are kept in museum collections. The skeletons do not have the visual appeal of the stuffed birds, mounted to look so lifelike in their full plumage. However, the bones—what Wolley and Newton termed “relics”—tell a long and complex story of their own. And there are about seventy-five great auk eggs believed to be extant today, the vast majority being documented and numbered.
Now and then over the years, various species have been said to reappear suddenly, after having been thought long exterminated. Several birds have been confirmed to be such so-called “Lazarus species,” including the Bermuda petrel (Pterodroma cahow), whose eerie calls scared Spanish explorers away. Considered extinct for three centuries, it was rediscovered on one of the Bermuda islands in 1951. Likewise, the flightless takahē (Porphyrio hochstetteri) of New Zealand, declared extinct late in the nineteenth century, reappeared in 1948. In recent years, with intensive searching, social media, and growing awareness of the threat of mass extinction, such reports have escalated. However, the possibility of any surviving great auk “Lazarus” can be ruled out.
Charles Darwin made the point that species swept away by history would not return. They were gone for good. In On the Origin of Species, he wrote: “We can clearly understand why a species when once lost should never reappear, even if the very same conditions of life, organic and inorganic, should recur.” This has long seemed blindingly obvious. No doubt many people have wondered why Darwin saw reason to state it at all. Yet his words were perhaps necessary at the time. The meaning of extinction had not yet been fixed, and Darwin may well have felt it was time to dispel the fantasy regarding the resurrection of species.
Alfred Newton, on the contrary, entertained the idea that extinction processes could be reversed. And in our own time, discussion of the renaissance, even resurrection, of species is taken for granted—as if Bible stories and the natural sciences had coalesced into one, after centuries of enmity and conflict. Will we live to see the resurrection of Pinguinus impennis? Might genetics and cloning do the trick?
In the spring of 2015, a group of like-minded individuals met at the International Centre for Life in Newcastle, England, to discuss the possible reanimation of the great auk. The meeting was attended by more than twenty people, including scientists and others interested in bird conservation. They addressed the principal stages of “de-extinction,” from the sequencing of the full genome of the extinct animal to the successful releasing of a proxy animal population into the wild. They were interested in resurrecting the great auk quite literally, to see it thrive once more, in zoos or even on the skerries and islands of the North Atlantic.
Thomas Gilbert, a geneticist at the University of Copenhagen who has sequenced the great auk genome, was one of the scientists who attended. The de-extinction of a species, however, has proved to be a more complicated issue than was originally anticipated—both technically and ethically. Gilbert pointed out that a re-created species can never be exactly like the original, and that the question must be asked: What counts as “near enough”—ninety-five percent, ninety, …? If the element that is lacking, though it may only account for a few percent of the genome, turns out to be crucial, and makes it harder for a re-created species to survive or to reproduce, nothing will have been gained. A re-created great auk that could not swim, for instance, would not be “near enough.” Likewise, a great auk capable of flight might be “way too much.” For most people, whatever the species concept to which they subscribe—and there remains a thriving philosophical debate on that subject—a flying bird would hardly qualify as a legitimate member of the great auk species.
Yet a substitute bird that could swim would be welcomed by many, as it might fill the large gap left by the great auk’s extinction. A substitute species might contribute to the rewilding of the oceans, a task that has barely begun; indeed “the underwater realm has been trailing behind its terrestrial counterparts.” Interestingly, this idea echoes Philip Henry Gosse’s historic aquaria project, reversing the arrows, from land to sea, and operating on a much larger scale. The grand aquarium of the planet’s oceans, including the recently discovered seabirds’ hotspot in the middle of the North Atlantic, or so the idea goes, could be repopulated by relatively large charismatic animals, raised on land and later released into the oceans, where they would be managed and monitored by human divers. Gosse would be amused.
The expense of such de-extinction is high, however, and it is hard to decide which species should have priority: the mammoth? the dodo? the great auk? or perhaps one of the numerous species of tiny snails that rarely generate human concern? It’s tempting, and productive, to focus on tall birds and charismatic megafauna, but invertebrates such as snails and insects, which make up most of the animal kingdom (perhaps 99 percent), deserve attention too. In the Anthropocene, this age of mass human-caused extinctions, the selection of species is clearly an urgent, but difficult, concern. The re-creation of the great auk assuredly has symbolic significance, not least in light of the attention the species has garnered from both scholars and the public since its demise. The excessive price nowadays of great auk remains is significant too.
In January 2023, a great auk egg sold for $125,000 at Sotheby’s. But bringing the bird back to life is a gigantic challenge, if not an impossible one. Perhaps the funds that would be spent on the de-extinction of the great auk might be better spent elsewhere. Nor should we overlook the Law of Intended Actions, Unintended Consequences.
• • • • • •
Now that I know the great auk’s long history, I feel as if the stuffed birds in the Copenhagen museum were once my neighbors or acquaintances. As a scientist, I know that their viscera are stored in alcohol to preserve them and to enable people to study them. Still, I wonder whether the organs are in a constant state of inebriation from the alcohol, existing beyond the bounds of real time, in a sort of euphoric oblivion. Generations of visitors, of all ages and many nationalities, have passed by these jars of preserved bird parts over the past century and a half. What observations did they take home?
The hearts stored in one jar are no longer beating, but no doubt many visitors on my side of the glass have wondered, as I do, how they would have pulsed when the bird’s blood was still flowing—and whether they could be resuscitated, by electric shock or genetic reconstruction. The eyes of the last male great auk are kept in another jar. I see them staring, gazing into both the past and into my own eyes.
This essay was excerpted and adapted by the author from The Last of Its Kind: The Search for the Great Auk and the Discovery of Extinction. Copyright © 2024 by Gísli Pálsson. Reprinted by permission of Princeton University Press.
About the Author
Gísli Pálsson is professor emeritus of anthropology at the University of Iceland. He previously held positions in the Department of Anthropology at the University of Oslo, the Centre for Biomedicine & Society at King’s College, London, and at the Rosenstiel School of Marine, Atmospheric, and Earth Science at the University of Miami. His books include The Last of Its Kind: The Search for the Great Auk and the Discovery of Extinction, Down to Earth, and The Man Who Stole Himself.
Big Tech is driving us, our kids, and society mad. In the nick of time, Restoring Our Sanity Online presents the bold, revolutionary framework for an epic reboot. What would social media look like if it nourished our critical thinking, mental health, privacy, civil discourse, and democracy? Is that even possible?
Restoring Our Sanity Online is the entertaining, informative, and frequently jaw-dropping social reset by Mark Weinstein, contemporary tech leader, privacy expert, and one of the visionary inventors of social networking.
This book is for all of us. Casual and heavy users of social media, parents, teachers, students, techies, entrepreneurs, investors, and elected officials. Restoring Our Sanity Online is the catapult to an exciting, enriching, and authentic future. Readers will embark on a captivating journey leading to an inspiring and actionable reinvention.
Restoring Our Sanity Online offers thought-provoking insights, including:
Mark Weinstein is a world-renowned tech entrepreneur, privacy expert, and one of the visionary inventors of social networking, including SuperFamily and SuperFriends, two of the earliest social networks. In 2016 he founded MeWe, the Facebook alternative with the industry’s first Privacy Bill of Rights. MeWe’s membership grew to nearly 20 million users worldwide, and its advisory board includes Sir Tim Berners-Lee, the inventor of the Web; Steve “Woz” Wozniak, co-founder of Apple; Sherry Turkle, MIT academic and tech ethics leader; and Raj Sisodia, co-founder of the Conscious Capitalism movement. Mark is frequently interviewed and published in major media including the Wall Street Journal, The New York Times, Fox, CNN, BBC, PBS, Newsweek, Los Angeles Times, The Hill, and many more worldwide. He covers topics including social media, privacy, AI, free speech, antitrust, and protecting kids online. A leading privacy advocate, Mark gave the landmark 2020 TED Talk, “The Rise of Surveillance Capitalism,” which exposed the many infractions and manipulations by Big Tech and called for a privacy revolution. Mark has also been listed as one of the “Top 8 Minds in Online Privacy” and named “Privacy by Design Ambassador” by the Canadian government. His new book is Restoring Our Sanity Online: A Revolutionary Social Framework.
Shermer and Weinstein discuss:
If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.
I am always sniffing around (pun intended) for new and interesting technology, especially anything that I think is currently flying under the radar of public awareness but has the potential to transform our world in some way. I think electronic nose technology fits into this category.
The idea is to use electronic sensors that can detect chemicals, specifically those that are abundant in the air, such as volatile organic compounds (VOCs). Such technology has many potential uses, which I will get to below. The current state of the art is advancing quickly with the introduction of various nanomaterials, but at present these sensing arrays require multiple antennas coated with different materials. As a result, they are difficult and expensive to manufacture and energy intensive to operate. They work, and often are able to detect specific VOCs with 95% or greater accuracy. But their utility is limited by cost and inconvenience.
A new advance, however, is able to reproduce and even improve upon current performance with a single antenna and a single coating. The technology uses one graphene-oxide-coated antenna and ultrawideband microwave signals to detect specific VOCs. These molecules reflect different wavelengths differently depending on their chemical structure. That is how the device “sniffs” the air. The results are impressive.
The authors report that a “classification accuracy of 96.7 % is attained for multiple VOC gases.” This is comparable to current technology, but again with a simpler, cheaper, and less energy-hungry design. Further, it actually achieved better results in discriminating different isomers. Isomers are different configurations of the same molecular composition – the same atoms in the same ratios but arranged differently, so that the chemical properties may differ. This is a nice proof-of-concept advance in this technology.
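To make the classification step concrete, here is a toy sketch of the general approach (learn to map reflectance spectra to VOC identities) using synthetic spectra and an off-the-shelf scikit-learn classifier. This is purely my illustration, not the paper’s method or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_FREQ = 64  # number of sampled microwave frequencies (invented)

def synth_spectrum(voc_id: int) -> np.ndarray:
    """Fake reflectance spectrum: each VOC gets a distinct smooth
    signature, plus measurement noise."""
    f = np.linspace(0.0, 1.0, N_FREQ)
    return np.sin(2 * np.pi * (voc_id + 1) * f) + rng.normal(scale=0.3, size=N_FREQ)

# 100 noisy readings for each of 4 pretend VOCs.
X = np.array([synth_spectrum(v) for v in range(4) for _ in range(100)])
y = np.repeat(np.arange(4), 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("toy accuracy:", clf.score(X_te, y_te))
```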
Now the fun part – let’s speculate about how this technology might be used. The basic application for electronic noses is to automatically detect VOCs in the environment or associated with a specific item as a way of detecting something useful. For example, this could be used as a breath test to detect specific diseases. It could be a non-invasive bedside quick test that could reliably detect different infections, disease states, even things like cancer or Alzheimer’s disease. When disease alters the biochemistry of the body, it may be reflected in VOCs in the breath, or even the sweat, of a person.
VOC detection can also be used in manufacturing to monitor chemical processes for quality control or to warn about problems. Electronic noses could also be used to detect fire, gas leaks, contraband, or explosives. People and things are often surrounded by a cloud of chemical information, a cloud that would be difficult or impossible to hide from sensitive sniffers.
So far this may seem fairly mundane, and just an incremental extrapolation of stuff we already can do. That’s because it is. The real innovation here is doing all this with a much cheaper, smaller, and less energy intensive design. As an analogy, think about the iPhone, an icon of disruptive technology. The iPhone could not really do anything that we didn’t already have a device or app for. We already had phones, texting devices, PDAs, digital cameras, flashlights, MP3 players, web browsers, handheld gaming platforms, and GPS devices. But the iPhone put all this into one device you could fit in your pocket and carry around with you everywhere. Functionality then got added on with more apps and with motion sensors. But the main innovation that changed the world was the all-in-one portability and convenience. A digital camera, for example, is only useful when you have it on you, but are you really going to carry around a separate digital camera with you every day everywhere you go?
This new electronic nose technology has the potential to transform the utility of this tech for similar reasons – it’s potentially cheap enough to become ubiquitous and portable enough to carry with you. In fact, there is already talk about incorporating the technology into smartphones. That would be transformative. Imagine if you could also carry with you, everywhere and at all times, an electronic nose that could detect smoke or dangerous gas, warn that you or others might be ill, or tell you that your food is spoiled and potentially dangerous.
Imagine that most people are carrying such devices, and that they are networked together. Now we have millions of sensors out there in the community able to detect all these things. This could add up to an incredible early warning system for all sorts of dangers. It’s one of those things that is challenging to just sit here and think of all the potential specific uses. Once such technology gets out there, there will be millions of people figuring out innovative uses. But even the immediately obvious ones would be incredibly useful. I can think of several people I know personally whose lives would have been saved if they had such a device on them.
As I often have to say, this is in the proof-of-concept stage and it remains to be seen if this technology can scale and be commercialized. But it seems promising. Even if it does not end up in every smartphone, having dedicated artificial-nose devices in the hospital, in industry, and in the home could be extremely useful.
A thoroughly discredited idea, that the Mesoamerican Olmec people were Black Africans, continues to gain traction.
At a recent event Tesla showcased the capabilities of its humanoid autonomous robot, Optimus. The demonstration has come under some criticism, however, for not being fully transparent about the nature of the demonstration. We interviewed robotics expert Christian Hubicki on the SGU this week to discuss the details. Here are some of the points I found most interesting.
First, let’s deal with the controversy – to what extent were the robots autonomous, and how transparent was this to the crowd? The first question is easier to answer. There are basically three types of robot control: pre-programmed, autonomous, and teleoperated. Pre-programmed means they are following a predetermined set of instructions. Often if you see a robot dancing, for example, that is a pre-programmed routine. Autonomous means the robot has internal real-time control. Teleoperated means that a human in a motion-capture suit is controlling the movement of the robots. All three of these types of control have their utility.
These are humanoid robots, and they were able to walk on their own. Robot walking has to be autonomous or pre-programmed; it cannot be teleoperated. This is because balance requires real-time feedback of position and other information to produce the moment-to-moment adjustments that maintain balance. A teleoperator would not have this (at least not with current technology). The Optimus robots walked out, so this was autonomous.
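A highly simplified sketch (my own, not from the interview) of why latency breaks balance: a PD controller keeps an inverted-pendulum model upright when it acts on the current state, but the same controller fed a 200-millisecond-old state, roughly what a teleoperator would see, lets the tilt grow without bound.

```python
import math

def peak_tilt(delay_steps: int, dt: float = 0.001, steps: int = 5000) -> float:
    """Inverted pendulum: theta'' = (g/l)*sin(theta) - u, with PD
    feedback u computed from the state observed `delay_steps` ago.
    Returns the largest tilt (radians) reached during the run."""
    g_over_l, kp, kd = 9.81, 30.0, 5.0
    theta, omega = 0.05, 0.0                      # small initial tilt
    history = [(theta, omega)] * (delay_steps + 1)
    peak = abs(theta)
    for _ in range(steps):
        obs_theta, obs_omega = history[0]         # delayed observation
        u = kp * obs_theta + kd * obs_omega       # PD feedback "torque"
        omega += (g_over_l * math.sin(theta) - u) * dt
        theta += omega * dt
        history = history[1:] + [(theta, omega)]
        peak = max(peak, abs(theta))
    return peak

print("no delay:    ", peak_tilt(0))    # stays near upright
print("200 ms delay:", peak_tilt(200))  # tilt blows up; the robot falls
```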
Once in position, however, the robots began serving and interacting with the humans present. Christian noted that he and other roboticists were able to immediately tell that the upper body movements of the robots were teleoperated, just by the way they were moving. The verbal interactions also seemed teleoperated, as each robot had a different voice and the responses were immediate and included gesticulations.
Some might say – so what? The engineering of the robots themselves is impressive. They can autonomously walk, and none of them fell over or did anything weird. That much is a fairly impressive demonstration. It is actually quite dangerous to have fully autonomous robots interacting with people. The technology is not quite there yet. Robots are heavy and powerful, and just falling over might cause human injury. Reliability has to be extremely high before we will be comfortable putting fully autonomous robots in human spaces. Making robots lighter and softer is one solution, because then they would be less physically dangerous.
But the question for the Optimus demonstration is – how transparent was the teleoperation of the robots? Tesla, apparently, did not explicitly say the robots were being operated fully autonomously, nor did any of the robot operators lie when directly asked. But at the same time, the teleoperators were not in view, and Tesla did not go out of its way to transparently point out that they were being teleoperated. How big a deal is this? That is a matter of perception.
But Christian pointed out that there is a very specific question at the heart of the demonstration – where is Tesla compared to its competitors in terms of autonomous control? The demonstration, if you did not know there were teleoperators, makes the Optimus seem years ahead of where it really is. It made it seem as if Tesla were ahead of its competition when in fact it may not be.
While Tesla was operating in a bit of a transparency grey-zone, I think the pushback is healthy for the industry. The fact is that robotics demonstrations typically use various methods of making the robots seem more impressive than they are – speeding up videos, hiding teleoperation, only showing successes and not the failures, and glossing over significant limitations. This is OK if you are Disney and your intent is to create an entertaining illusion. This is not OK if you are a robotics company demonstrating the capabilities of your product.
What is happening as a result of the pushback and the exposure of this lack of transparency is an increasing use of transparency in robotics videos. This, in my opinion, should become standard, and anything less unacceptable. Videos, for example, can be labeled as “autonomous” or “teleoperated” and also can be labeled if they are being shown at a speed other than 1x. Here is a follow-up video from Tesla where they do just that. However, this video is in a controlled environment, we don’t know how many “takes” were required, and the Optimus demonstrates only some of what it did at the event. At live events, if there are teleoperators, they should not be hidden in any way.
This controversy aside, the Optimus is quite impressive just from a hardware point of view. But the real question is – what will be the market and the use of these robots? The applications will depend partly on safety and reliability, and therefore on autonomous capabilities. Tesla wants its robots to be all-purpose. This is an extremely high bar, and requires significant advances in autonomous control. This is why people are very particular about how transparent Tesla is being about where its autonomous technology stands.
Neal Stephenson is the #1 New York Times bestselling author of the novels Termination Shock, Fall; or, Dodge in Hell, Seveneves, Reamde, Anathem, The System of the World, The Confusion, Quicksilver, Cryptonomicon, The Diamond Age, Snow Crash, and Zodiac, and the groundbreaking nonfiction work In the Beginning … Was the Command Line. He is also the coauthor, with Nicole Galland, of The Rise and Fall of D.O.D.O. His works of speculative fiction have been variously categorized as science fiction, historical fiction, maximalism, cyberpunk, and post-cyberpunk. In his fiction, he explores fields such as mathematics, cryptography, philosophy, currency, and the history of science. Born in Fort Meade, Maryland (home of the NSA and the National Cryptologic Museum), Stephenson comes from a family comprising engineers and hard scientists he dubs “propeller heads.” He holds a degree in geography and physics from Boston University, where he spent a great deal of time on the university mainframe. He lives in Seattle, Washington. As The Atlantic has recently observed, “Perhaps no writer has been more clairvoyant about our current technological age than Neal Stephenson. His novels coined the term metaverse, laid the conceptual groundwork for cryptocurrency, and imagined a geoengineered planet. And nearly three decades before the release of ChatGPT, he presaged the current AI revolution.” His new novel is Polostan, the first installment in his Bomb Light cycle.
Shermer and Stephenson discuss:
If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.
A review of Informatica: Mastering Information Through the Ages by Alex Wright (2023) and Knowing What We Know: The Transmission of Knowledge, From Ancient Wisdom to Modern Magic by Simon Winchester (2023)
Can the history of how humans organize knowledge help us understand 21st century information overload? Two readable new books help us address this question with interdisciplinary narratives: Knowing What We Know: The Transmission of Knowledge: From Ancient Wisdom to Modern Magic by Simon Winchester, and Informatica: Mastering Information Through the Ages by Alex Wright.
To varying degrees and in slightly different ways, both books treat the history of information technologies as a helpful tool. Both cover the familiar chronology: the clay tablets and papyrus scrolls of ancient times, monks in the Middle Ages copying texts in their scriptoria, the 15th- and 19th-century technologies that made books cheaper and more common, the development of reference books, and the mid-20th-century innovations leading to modern computers and the World Wide Web. Both books are also stimulatingly interdisciplinary, discussing many more historical topics than I’ve mentioned above, but also grounded in science and technology. After these similarities, the books diverge.
Although Knowing What We Know is rich in history, it is not chronological. It instead progresses from the learning of information (education) to the storing of knowledge (museums, libraries, and encyclopedias), and then to the dissemination of knowledge, concluding in a thoughtful discussion of the implications of new technologies, such as AI-based Large Language Models (LLMs). These topics are corralled by Winchester’s background in journalism, and by the grounding of each topic in precise examples.
On education, for example, Winchester contrasts three striking 21st century cases. He vividly recalls the woman he interviewed who started a school in a poverty-stricken village in India. Those students’ joyous thirst for knowledge is contrasted against the high-stakes pressure in China, where a single exam taken in students’ teenage years determines their job opportunities for the rest of their lives. Winchester’s third example of education is the most striking—that of an illiterate island group whose oral storytelling tradition saved them, alone, from a tsunami.
Winchester progresses to knowledge summarized in encyclopedias, recalling his own youthful love of them and summarizing the rise and cessation of the leading print encyclopedia of the 19th and 20th centuries, Encyclopedia Britannica. How better to illustrate the complex issues of size and reliability surrounding the leading online encyclopedia, Wikipedia, than with Winchester’s own experience? Late in his research, he saw there that a pioneer of internet technology was listed as having died; he learned of the correction the next morning on social media.
And so it goes: Winchester focuses on a few extraordinary cases to illustrate each of his points. For the preservation of knowledge in museums, it is the remarkable story of the saving of museum treasures in China during political turmoil, and how the Chinese government has viewed this precious collection. Similarly, the rise of mass media is illustrated by the BBC because, Winchester notes, its style was influential in the development of radio news around the world. This flows naturally to the following chapter’s discussion of propaganda, focusing on the chilling example of the Nazis. His penultimate chapter is about polymaths and, finally, wisdom, focusing less on religion than on whether it was wise to drop the atomic bombs in 1945. The book concludes with the implications of ChatGPT and other new technology for our brains.
Winchester has a remarkable ability to turn what could be a dry recitation of facts into a series of compelling stories, with numbered subsections in each chapter. The one time I felt that he could have used a copy editor was during his overly long digression on Krakatoa, the subject of one of his previous books, though he did make even this topic surprisingly relevant. In his hands, such meandering is usually done masterfully.
Like a well-structured novel, all that came before leads Winchester to his conclusion. His fear is that technology, as currently progressing, can hurt our ability to think for ourselves. Characteristically, he illustrates this with a specific example: the complex skill set he stumbled through when his small boat needed to navigate toward land rather than be lost in the ocean in the days prior to GPS. Can people even read maps anymore? In one of the book’s few missed opportunities, he does not draw an extended parallel to the people who (accurately) decried in Gutenberg’s era that if books were mass produced, people’s ability to remember vast amounts of knowledge would decline, which it did (the skill of modern mnemonists, such as the late Harry Lorayne, notwithstanding).
If Winchester’s book is grounded in concise case studies, Wright’s contributions in Informatica are science and the history of structured systems for organizing knowledge. These merge when Wright discusses the biological classification scheme developed primarily by Carl Linnaeus, including an amusing anecdote involving Thomas Jefferson mailing the decaying body of a moose to acclaimed scientific theorist Comte de Buffon. Although science is mentioned several times in Winchester’s Knowing, Alex Wright’s Informatica opens with it, following the late biologist E.O. Wilson in speculating about the biological role of epigenetics in human knowledge transmission. Wright compares “networks and hierarchies” in the natural and the human worlds. He sees parallels between creations by groups that are unlikely to have communicated, such as the similarity between the plant taxonomies created by Western scholars and those formed through oral tradition in other societies.
Using more traditional evidence, Wright explicitly links the Linnaean classification scheme to the development of librarians’ attempts to organize books, culminating in the Dewey Decimal System at the turn of the 20th century. He appropriately refers to this 19th century arc as “the industrial library,” the creation of more elaborate organizational schemes being demanded by vastly increased numbers of published books, which was in turn allowed by new technology.
Successive chapters discuss early- to mid-20th century utopian information sharing projects using then-existing technology, including index cards and telegraphy, and the briefly famous Mundaneum (an institution that aimed to gather together all the world’s knowledge and classify it according to a system called the Universal Decimal Classification). In Informatica, Wright’s discussion of these utopian schemes does not flow as well as it could, the reader being left to make the connections.
Worse, Wright’s extended history of the developments leading to the modern internet is shoehorned into a subsection of the revised “Web That Wasn’t” chapter as “The Web That Was.” This combination of topics in the same chapter was tenable in Glut, but in Informatica the subsection discusses so many people and inventions, all of whose work made the World Wide Web possible, that it should have been a new chapter. Finally, Wright recycled some of his earlier writing and did not update it, such as referring to CD-ROMs and America Online (AOL) as leading technologies. This could have been fixed easily.
That said, the narrative in Informatica is more clearly chronological than in Knowing What We Know, yet Simon Winchester is so skilled a writer that his book generally reads more smoothly despite being more episodic. The exception is the book’s overall outline: I was halfway through before realizing that its main chapters follow a logical progression, from data acquisition to information display to the uses of knowledge and finally to wisdom. Winchester could have made this clear earlier in the book with just a few words.
One side topic bears noting: Winchester said in at least two media interviews that his discussion of the racism found in a leading mid-century encyclopedia was edited out of the published version of Knowing What We Know, on the grounds that it would be too controversial or offend too many readers. Perhaps it would have, but its inclusion would have been valuable, partly for highlighting the important point that even the most well-respected reference materials can be wrong. One might excuse this because Knowing is not written by an academic scholar, but a similar edit was made in G-Man, a book by Yale historian Beverly Gage (which I reviewed in an earlier issue of Skeptic): pages 62–63 twice lead the reader to guess, but never know for sure, which apparently offensive word is meant. The criticism that only elite scholars know about the history of racism will become a self-fulfilling prophecy if that history is left out of popular books.
On the other hand, Informatica and Knowing What We Know both have problems with the wording of their titles, and with such vast topics it would be easy to quibble over which subjects deserved attention. Informatica’s new title could make readers think they are getting a wholly different book, rather than an update of Glut (originally published in 2007) with uneven revisions and only a chapter’s worth of new material. In Knowing What We Know, it is the last third of the subtitle (“From Ancient Wisdom to Modern Magic”) that could mislead: elsewhere the phrase “Ancient Wisdom” often refers to religious traditions, but here it seems to mean ancient writing generally, and the book’s late discussion of wisdom is not primarily about religion.
The important point shared by Knowing What We Know and Informatica is that greater access to information also presents challenges. Informatica is the more theoretical and historical of the two; Knowing is more a historically informed snapshot of our present. Both are stimulating, and both are informative.
About the Author
Michelle Ainsworth holds an MA in History and is currently researching the cultural history of stage magic in the United States. She is a humanist and lives in New York City.
I wrote earlier this week about the latest successful test of Starship and the capture of the Super Heavy booster by the grabbing arms of the landing tower. This was quite a feat, but it should not eclipse what was perhaps even bigger space news this week – the launch of NASA’s Clipper probe to Europa. If all goes well, the probe will enter orbit around Jupiter in 2030.
Europa is one of the four large moons of Jupiter. It’s an icy world, but one with a subsurface ocean – an ocean that likely contains twice as much water as all of Earth’s oceans combined. Europa is also one of the most likely locations in our solar system for life outside Earth. It is possible that conditions in that ocean are habitable for some form of life. Europa, for example, has a rocky core, which may still be molten, heating Europa from the inside and seeding its ocean with minerals. Chemosynthetic organisms survive around volcanic vents on Earth, so we know that life can exist without photosynthesis, and Europa might have the right conditions for this.
But there is still a lot we don’t know about Europa. Previous probes to Jupiter have gathered some information, but Clipper will be the first dedicated Europa probe. It will make 49 close flybys over a four-year primary mission, during which it will survey Europa’s magnetic field, gravity, and chemical composition. Perhaps most exciting is that Clipper is equipped with instruments that can sample material in the space around Europa. The hope is that Clipper will be able to fly through a plume of material shooting up, geyser-like, from the surface. It could then determine the chemical composition of that material, looking especially for organic compounds.
Clipper is not equipped specifically to detect whether there is life on Europa. Rather, it is equipped to determine how habitable Europa is. If conditions are suitable for subsurface ocean life, and certainly if we detect organic compounds, that would justify another mission to Europa specifically to look for life. This may be our best chance at finding life off Earth.
Clipper is the largest probe that NASA has sent into space so far. It is about the size of an SUV and will be powered by solar panels that span 100 feet. Light intensity at Jupiter is only 3-4% of what it is at Earth, so the probe needs large panels to generate significant power. It also has batteries so that it can operate while in shadow. NASA reports that soon after launch, Clipper’s solar arrays successfully unfolded to their full span, so the probe should have power throughout the rest of its mission. These are the largest solar arrays on any NASA probe. At Jupiter they will generate 700 watts of power. NASA says they are “more sensitive” than typical commercial solar panels, but I could not find more specific technical information, such as their conversion efficiency. I did learn that the panels are much sturdier, in order to survive the frigid temperatures and heavy radiation environment around Jupiter.
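Those numbers are enough for a quick sanity check. Here is a back-of-envelope sketch in Python using only the figures quoted above – the Earth-equivalent output it derives is my inference from those figures, not an official NASA spec:

```python
# Back-of-envelope check on Clipper's quoted solar figures.
# Inputs are the numbers from the paragraph above; everything
# derived from them is an estimate, not an official spec.

power_at_jupiter_w = 700.0          # quoted output at Jupiter
intensity_fractions = (0.03, 0.04)  # sunlight at Jupiter: ~3-4% of Earth's

# If panel output scales roughly linearly with light intensity,
# the same arrays near Earth would produce:
for frac in intensity_fractions:
    earth_equiv_kw = power_at_jupiter_w / frac / 1000.0
    print(f"At {frac:.0%} intensity: ~{earth_equiv_kw:.0f} kW near Earth")

# Prints ~23 kW (at 3%) and ~18 kW (at 4%) -- which is why it takes
# arrays spanning 100 feet to yield just 700 W out at Jupiter.
```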
Clipper will take a somewhat indirect path, first flying to Mars for a gravity boost, then swinging back past Earth for a second one. Only then will it head for Jupiter, where it will arrive in 2030 and use its engines to enter orbit around Jupiter. The orbit is designed to bring it close to Europa – as close as 16 miles from the surface over its 49 flybys. At the end of its mission, NASA will crash Clipper into Ganymede, another of Jupiter’s large moons, in order to avoid any potential contamination of Europa itself.
I always get excited at the successful launch of another planetary probe, but then you have to wait years before the probe finally arrives at its destination. The solar system is big and it takes a long time to get anywhere. But it is likely to be worth the wait.
An even longer wait will be for what comes after Clipper. NASA is “discussing” a Europa lander. Such a mission will take years to design, engineer, and build, and then more years to arrive and land on Europa. We won’t get data back until the 2040s at the earliest. So let’s get hopping. The potential for finding life off Earth should be one of NASA’s top priorities.
In ChatGPT and the Future of AI, the sequel to The Deep Learning Revolution, Terrence Sejnowski offers a nuanced exploration of large language models (LLMs) like ChatGPT and what their future holds. How should we go about understanding LLMs? Do these language models truly understand what they are saying? Or is it possible that what appears to be intelligence in LLMs may be a mirror that merely reflects the intelligence of the interviewer? In this book, Sejnowski, a pioneer in computational approaches to understanding brain function, answers all our urgent questions about this astonishing new technology.
Sejnowski begins by describing the debates surrounding LLMs’ comprehension of language and exploring the notions of “thinking” and “intelligence.” He then takes a deep dive into the historical evolution of language models, focusing on the role of transformers, the correlation between computing power and model size, and the intricate mathematics shaping LLMs. Sejnowski also provides insight into the historical roots of LLMs and discusses the potential future of AI, focusing on next-generation LLMs inspired by nature and the importance of developing energy-efficient technologies.
Grounded in Sejnowski’s dual expertise in AI and neuroscience, ChatGPT and the Future of AI is the definitive guide to understanding the intersection of AI and human intelligence.
Terrence J. Sejnowski is Francis Crick Chair at The Salk Institute for Biological Studies and Distinguished Professor at the University of California at San Diego. He has published over 500 scientific papers and 12 books, including The Computational Brain with Patricia Churchland. He was instrumental in shaping the BRAIN Initiative that was announced by the White House in 2013, and he received the prestigious Gruber Prize in Neuroscience in 2022.
Sejnowski and Shermer discuss:
If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.
SpaceX has conducted their most successful test launch of a Starship system to date. The system they tested has three basic components – the Super Heavy first stage rocket booster, the Starship second stage (which is the actual space ship that will go places), and the “chopsticks”, which is a mechanical tower designed to catch the Super Heavy as it returns. All three components apparently functioned as hoped.
The Super Heavy lifted Starship into space (suborbital), then returned to the launch pad in southern Texas, where it maneuvered into the grasping mechanical arms of the chopsticks. The tower’s arms closed around the Super Heavy, successfully grabbing it. The engines then turned off and the rocket remained held in place. The idea is to replicate the reusable function of the Falcon rockets, which can return to a landing pad after lifting their cargo into orbit. The Falcons land on a platform on the water. SpaceX, however, envisions many Starship launches and wants to be able to return the rockets directly to the launch pad for quicker turnaround.
The Starship, for its part, also performed as expected. It came back down over the designated target in the Indian Ocean. Once it reached the surface, it rolled onto its side and exploded. SpaceX was never planning to recover this Starship, so that was an acceptable outcome. Of course, eventually they will need to land Starship safely on the ground.
The system that SpaceX came up with reflects some of the realities and challenges of space travel. The Earth is a massive gravity well, and it is difficult to get out of and back into that well. Getting into orbit requires massive rockets with lots of fuel, and falls prey to the rocket equation – you need fuel to carry the fuel, and fuel to carry that fuel, and so on. This is also why, if we want to use Starship to go to Mars, SpaceX will have to develop a system to refuel in orbit.
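To make the “fuel to carry the fuel” point concrete, here is a minimal Python sketch of the Tsiolkovsky rocket equation. The delta-v and exhaust-velocity values are generic textbook numbers for reaching low Earth orbit, not SpaceX figures:

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m1),
# where v_e is exhaust velocity, m0 is initial (wet) mass,
# and m1 is final (dry) mass after the propellant is burned.

def mass_ratio(delta_v_ms: float, exhaust_velocity_ms: float) -> float:
    """Wet-to-dry mass ratio needed to achieve a given delta-v."""
    return math.exp(delta_v_ms / exhaust_velocity_ms)

# Illustrative textbook values (assumptions, not SpaceX specs):
# ~9,400 m/s delta-v to reach low Earth orbit (including losses),
# ~3,300 m/s effective exhaust velocity for a methane/oxygen engine.
ratio = mass_ratio(9_400, 3_300)
print(f"Required wet-to-dry mass ratio: ~{ratio:.0f}")
# ~17, meaning roughly 94% of the liftoff mass must be propellant.
```

The exponential is the killer: every extra kilogram of dry mass multiplies the propellant required, which is why margins in rocketry are so thin.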
Getting back down to the ground is also a challenge. Orbital velocity is fast, and you have to lose all of that speed. Using the atmosphere for braking works, but air compression (not friction, as most people believe) causes significant heating, so reentering through the atmosphere requires heat shielding. You then have to slow down enough for a soft landing. You can use parachutes. You can splash down in the water. You can use inflatable cushions for a hard landing. Or you can use rockets. Or you can land like a plane, which was the Shuttle’s option. All of these methods are challenging.
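To put “you have to lose all that speed” in perspective, here is a quick Python estimate of the kinetic energy a returning craft must shed. The ~7.8 km/s figure is the standard low-Earth-orbit velocity, not anything specific to Starship:

```python
# Rough estimate of the energy a spacecraft must shed during
# reentry, per kilogram of vehicle, starting from low Earth orbit.

orbital_velocity_ms = 7_800.0  # ~7.8 km/s, standard LEO velocity

# Kinetic energy per kilogram: KE = 0.5 * v^2
ke_per_kg_mj = 0.5 * orbital_velocity_ms**2 / 1e6
print(f"~{ke_per_kg_mj:.0f} MJ per kg")  # ~30 MJ/kg

# For comparison, detonating a kilogram of TNT releases ~4.2 MJ,
# so each kilogram of returning spacecraft carries roughly seven
# times that energy -- nearly all of it dumped into the air as heat.
```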
If you want to reuse your rockets and upper stages, then a splashdown is problematic because salt water corrodes and damages the hardware. No one has gotten the cushion approach to work on Earth, although we have used it on Mars. The retro-rocket approach is what SpaceX is going with, and it works well. They have now added a new method, combining rockets with a tower and mechanical arms to grab the first stage. I think this is also the planned method for Starship itself.
On the Moon and Mars the plan is to land on legs. These worlds have lower gravity than Earth, so this method can work. In fact, NASA is planning to use Starship as its lunar lander for the Artemis program. We apparently can’t do this on Earth because the legs would have to be super strong to handle the weight of the Super Heavy or Starship, and would therefore be difficult to engineer. It does seem amazing that a tower with mechanical arms grabbing the rocket out of the air was considered an easier engineering feat than designing strong-enough landing legs, but there it is. Needing a tower does limit where you can land – you have to return to the landing pad exactly.
SpaceX, however, is already good at this. They perfected the technology with the Falcon rocket boosters, which can land precisely on a floating landing pad in the ocean. So they are going with technology they already have. But it does seem to me that it would be worth having an engineering team work on the strong-landing-legs problem. That would seem like a useful technology to have.
All of this is a reminder that the space program, as mature as it is, still operates at the very limits of our technology. That makes it all the more amazing that the Apollo program was able to send successful missions to the Moon. Apollo, too, solved these various issues by going with a complex system. As a reminder, the Saturn V used three stages to get into space for the Apollo program (although only two stages for Skylab). Then there was the spaceship that would go to the Moon, which consisted of a service module, a command module, and a lander. On the way to the Moon, it had to undergo “transposition, docking, and extraction”: the command module (with the service module attached) would separate from the spent third stage, turn around, dock with the lunar lander, and extract it from its housing atop that stage. The combined craft would then go into lunar orbit. The lander would detach and descend to the lunar surface, and eventually blast off back into orbit around the Moon, where it would dock again with the command module for the return to Earth.
This was considered a crazy idea at first within NASA, and many of the engineers worried they couldn’t pull it off. Docking in orbit was considered the riskiest aspect; had it failed, astronauts would have been stranded in lunar orbit. This is why the procedure was perfected in Earth orbit before going to the Moon.
All of this complexity is a response to the physical realities of getting a lot of mass out of Earth’s gravity well, and of having enough fuel to get to the Moon, land, take off again, return to Earth, and then get back down to the ground. The margins were super thin. It is amazing it all worked as well as it did. Here we are, more than 50 years later, and it is still a real challenge.
Spaceflight technology has not fundamentally changed in the last 50 years – rockets, fuel, and capsules are essentially the same in overall design, with some tweaks and refinements. Except for one thing – computer technology. This has been transformative, make no mistake. SpaceX’s reusable rockets would not be possible without advanced computer controls. Modern astronauts have the benefit of computer control of their craft, and are travelling with the Apollo-era equivalent of supercomputers. Computer advances have been the real game-changing technology for space travel.
Otherwise we are still using the same kinds of rocket fuel. We are still using stages and boosters to get into orbit. Modern capsule design would be recognizable to an Apollo-era astronaut, although the interior design is greatly improved, again due to the availability of computer technology. There are some advanced materials in certain components, but Starship is literally built out of steel.
Again, I am not downplaying the very real advances in the aerospace industry, especially in driving down costs and in reusability. My point is that there haven’t been any game-changing technological advances that aren’t dependent on computer technology. There is no super fuel or game-changing material. And we are still operating at the limits of physics, and have to make very real tradeoffs to make it all work. If I’m missing something, feel free to let me know in the comments.
In any case, I’m glad to see progress being made, and I look forward to the upcoming Artemis missions. I do hope that this time we are successful in building a permanent cis-lunar infrastructure. That, in turn, would be a stepping stone to Mars.
Situating her analyses within the broader intellectual landscape, First Amendment scholar and philosopher Tara Smith takes up the views of such historical figures as John Locke, Thomas Jefferson, and John Stuart Mill, while also addressing contemporary clashes over speech on social media, “cancel culture,” and “religious exemptions.” Along the way she examines the crucial difference between speech and action and the very vocabulary in which we discuss these issues, dissecting the exact meanings of “censorship” and “freedom.”
Tara Smith is Professor of Philosophy at the University of Texas at Austin, where she has taught since 1989. A specialist in moral, legal, and political philosophy, she is author of the books Judicial Review in an Objective Legal System (Cambridge University Press, 2015), Ayn Rand’s Normative Ethics: The Virtuous Egoist (Cambridge University Press, 2006), Viable Values: A Study of Life as the Root and Reward of Morality (Rowman and Littlefield, 2000), and Moral Rights and Political Freedom (Rowman and Littlefield, 1995). Smith’s scholarly articles span such subjects as rights conflicts, the morality of money, everyday justice, forgiveness, friendship, pride, moral perfection, and the value of spectator sports.
Shermer and Smith discuss:
If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.