How is the history of espionage relevant to the present? How does recent document declassification change our understanding of the Cold War? Spies, Lies, and Algorithms broadly and concisely surveys the hows and whys of the U.S. intelligence community from multiple perspectives. Spies: The Epic Intelligence War Between East and West deeply surveys a century of espionage by Russia against the U.S. and Britain. Both books offer new information and conclude with sharp warnings for the present.
When I was in graduate school, the professor of a class on Cold War history commented that a book he had initially assigned was already out of date just three years after its publication, due to information declassified in the interim. I recalled this often while reading Calder Walton’s Spies, where time and again the sources are documents that became accessible or were declassified as recently as 2022. As such, Walton’s book rewrites history, from Lenin to Putin. His thesis is that Russian espionage against the U.S. and Britain was as aggressive before and after the Cold War as it was during it.
Some of the book’s new or strengthened conclusions will please partisans on either side of U.S. political debates. Conservatives might find grim validation in the relentlessness and depth of Soviet—and then Russian—espionage. For example, Russian archives have not only confirmed the guilt of President Franklin Roosevelt’s adviser Lauchlin Currie and Treasury official Harry Dexter White (among many others of the Cold War era), but also reveal compelling new evidence of Russian assistance to liberal politician Henry Wallace, to later left-wing intellectuals, and to the multi-country antinuclear movement, and show that the Soviet Union used détente (and later its own collapse) to increase its espionage. Liberals, on the other hand, may be pleased by new evidence that U.S. Cold War policy did not take into consideration the Soviet perception of NATO, and that the “domino theory” of the founding U.S. Cold War document was based on a false premise (Kremlin documents now show that the Soviets did not initiate wars in the Third World).
Newly released material also suggests that from Lenin to Putin, Russian leaders’ refusal to tolerate criticism and alternative points of view severely damaged the Soviet Union (and later Russia), both internally and externally. In contrast, the openness of the U.S. and Britain made it smart for Russia to focus its efforts on human spies. Walton points out, for an example perhaps of particular interest to Skeptic readers, that Russian spies in the U.S. were remarkably successful in their technological espionage, not just in accelerating the Russian development of the atomic bomb, but also more recently in stealing military technology, so that “U.S. science and technology effectively drove both sides of the Cold War.”
Among the revelations that startled me: new evidence that Truman was never briefed on Korea prior to the outbreak of war; that, in Russia and the U.S., throughout the Cold War and since, most spies who were caught were unmasked by the opposing side’s defectors; and that both sides blundered in promoting people who committed treason. By contrast, Walton argues that, in most other areas, even intelligence historians continue to overemphasize the role of human spies and underestimate the role of communications interception (“signals intelligence”).
One reason for the overemphasis on human spies highlighted by Professor Amy Zegart in Spies, Lies, and Algorithms is the explosion in popularity of spy entertainment in recent decades. The ticking time bomb scenario where the hero saves the world is a staple of fiction, but has vanishingly few analogs in real life, according to Zegart, where real intelligence work involves multiple sources being weighed against each other. (Walton’s most dramatic example of how human and technological methods complement each other is new evidence of why President Kennedy was able to defuse the Cuban Missile Crisis.) However, this is not how things are portrayed in fiction. Zegart acknowledges how terrific it would be if fantasy were reality, and cites alarming evidence that the general public confuses the two. Her even more damning indictment is that entertainment has been mistaken for fact by senior policy makers in the 21st century, including by a U.S. Supreme Court Justice and in a confirmation hearing for a CIA director.
Zegart diagnoses the root of the problem as the necessity for secrecy: it is illegal for political scholars to examine most current intelligence, and older declassified documents sought by historians can arrive years after they’d been requested, and then only heavily redacted. Thus, Zegart finds it unsurprising that there are remarkably few articles about intelligence in academic journals, and incredibly few college courses on the history or politics of espionage. Zegart sees a similar dynamic at work when it comes to Congressional oversight, where elected representatives can’t talk about secret material.
To provide some much needed background, Zegart discusses the history of U.S. intelligence, including an additional chapter on highly placed traitors. The heart of the book is a chapter-by-chapter discussion of issues in the world of espionage. Real-life intelligence work is mostly tedious and mundane. Her coverage of it, and of what its results can and cannot do, is nuanced and sobering. Readers of Skeptic will not be surprised by the challenge of overcoming confirmation bias and human frailty at estimating size and probability. The evidence, she suggests, is that these are best overcome by outsider “devil’s advocate”-style counterscenario planning (a procedure bypassed in the case of Saddam Hussein’s alleged weapons of mass destruction). Her chapter on the paradox of presidential use of covert action analyzes why presidents of opposing views, in different times and facing different challenges, all criticize secret, morally questionable “active measures” but end up using them anyway.
Intelligence failures result from “the natural variations in the predictability of human events and the limitations of human cognition.” — AMY ZEGART, Spies, Lies, and Algorithms

Walton and Zegart agree that governments are losing their monopoly on intelligence gathering, essentially due to new technology. Zegart sees Google Earth, smart phones, and other public technology as having broken the monopoly that governments once had on the discovery of nuclear weapons sites and other military matters, and she discusses the potential dangers of premature revelation of that information, even if it is true. (Here and elsewhere, she emphasizes that the analysis of images and other data is a highly specialized and sophisticated skill learned by intelligence professionals, with many traps into which even well-meaning amateurs all too easily fall.) Where Zegart focuses on the activities of private citizens, Walton sees the future of intelligence lying with multinational private companies, selling satellite access or high-end encryption programs to whatever government or business is willing to pay their price, and thus with no chance of government oversight.
American Cryptology during the Cold War, 1945–1989, Book II: Centralization Wins, 1960–1972. (Source: National Security Agency/Central Security Service)

Both books cite FBI statistics documenting that China is by far the greatest threat to the U.S., with both government- and business-allied intelligence agencies sending a seemingly endless stream of highly trained agents to steal military and technological secrets from the U.S. Both authors also discuss the difficulty and urgency of reorienting an intelligence bureaucracy to new realities, Zegart’s treatment being the deeper of the two, and among the sources cited by Walton. Interestingly, both authors agree that history and current practice indicate that intelligence is most effective when multiple techniques—human spying, satellite imagery, and much more—are used in combination, and both agree that cutting intelligence budgets ends up costing more than it saves.
Both authors also discuss the relation between intelligence and conspiracy theories. First, both identify a few that were real, including the recent revelation of a Cold War deal with a leading manufacturer of government encoding machines. However, far more often people see conspiracies where none exist. Part of Walton’s data comes from England, and he suggests that “Those who tend to see … conspiracy overestimate the competency of those in Whitehall” (shorthand for the British government, much as “Langley” is shorthand for the American CIA). Zegart quips that, judging from her analysis of the impact of spytainment and her survey of Ivy League courses, students are more likely to hear a professor discuss U2 the rock band than the U-2 spy plane; one point being that ignorance of espionage history and practice is a great breeding ground for conspiracy theories. While Stalin’s paranoia is well known, Walton provides evidence of Lenin’s as well, and concludes that Putin is “a naturally inclined conspiracist.” (Note that Putin began his career in counterintelligence—ferreting out spies.)
As outstanding as both books are, no text of such depth can be perfect. The most serious problem with Walton’s Spies is that the bibliography is solely online (and often inaccessible), and its entries do not always include dates and publisher information. In the book itself, if part of Walton’s thesis is that the Cold War started in 1917, shouldn’t he have offered more than one example of early espionage? Recent scholarship in most areas is thoroughly covered (based on the endnotes), but some important books are missing, including G-Man by Beverly Gage (for its new data about the FBI’s work abroad [which it is not supposed to do], and which I reviewed in Skeptic).
Most of Walton’s 548 pages of main text are well used, but some ancillary material (such as recently declassified World War II British intelligence work unrelated to Russia) might have been edited out for length, however fascinating and new it is. I also wonder if, in a history book, it is best practice for an author to explicitly discuss implications for the present, which he does, with several opening pages on Russia’s 2022 invasion of Ukraine and a closing chapter on the relevance of the book’s conclusions to 21st century Chinese espionage. That said, this is telling, or at least amusing: research for this book revealed that a World War II Russian operative in Ukraine named Nikita Khrushchev might never have gone on to become Soviet leader had he tried to warn Stalin about German troops massing on the border of Russia. Another likely case of the futility of speaking truth to power in the old Soviet bloc is the anecdote about a lone non-Communist Czech minister, Jan Masaryk, who tried to warn the public about Soviet tyranny and soon died from falling out a window, officially ruled a suicide.
More to the point, Walton offers two examples of recent or new evidence that the world came closer to nuclear war than previously known: not only in the Cuban Missile Crisis, but possibly also in a 1980s military exercise that may have been mistaken for the real thing. Both resulted in increased dialogue between the U.S. and Russia.
The two books are masterclasses on their respective subjects. Walton doesn’t just incorporate recent research on Soviet Russian espionage, but he has investigated original documents (some declassified very recently) in Russia, Ukraine (for its intelligence about the Soviet Union), Britain, the U.S., and elsewhere. In addition to Zegart’s original research and government work, she has mastered a vast secondary literature and demonstrates her experience in explaining it. Both can be read by skeptics for evidence of the very real dangers posed by confirmation bias and lack of critical thinking by the highest government officials, as well as by the general public who, at least in some countries, empower them.
We are not close to mining asteroids, but the idea is intriguing enough to cause some serious study of the potential. The idea is simple enough – our solar system is full of chunks of rock with valuable minerals. If we could make it economically viable to mine even a tiny percentage of these asteroids, the potential would be immense, a game changer for many types of resources. How valuable are asteroids?
The range of potential value is extreme, but at the high end we have a large metal-rich asteroid like 16 Psyche in the asteroid belt. Astronomers estimate that the iron in 16 Psyche alone is worth about $10,000 quadrillion on today’s market. By comparison, the world’s current annual economic output is just over $100 trillion, so that’s 100,000 times the world’s annual economic output. Of course, the cost of extraction would be high and the market value would likely be dramatically affected by such a resource, but it shows the dramatic potential of mining asteroids. Some asteroids are rich in platinum-group metals or rare earths, which would be even more valuable. But even the more common carbonaceous asteroids would likely have minerals worth quadrillions.
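A quick back-of-the-envelope check of that ratio, using the rounded figures above:

```python
# Rough check of the figures above (rounded, illustrative).
psyche_iron_value = 10_000 * 10**15    # $10,000 quadrillion
world_annual_output = 100 * 10**12     # ~$100 trillion per year

ratio = psyche_iron_value / world_annual_output
print(f"{ratio:,.0f}")  # 100,000 -- years of world output in one asteroid
```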
Again, these figures are likely not the actual monetary value that would be realized from mining asteroids, but they indicate that it is very likely economically viable to do so. I am reminded of the fact that aluminum was more expensive than gold in the 19th century. Then a process for extracting and refining aluminum from ore was found, and now it is worth about $1.30 a pound. Still, the aluminum industry is worth about $300 billion today. Mining asteroids would have a similar effect on many industries.
There are two basic uses for the material mined from asteroids. The first is to provide resources for space exploration and settlement itself. It is really expensive to get things into space, and getting out of Earth’s gravity well is the vast majority of the cost. Once in Earth’s orbit, you are most of the way there (in terms of energy costs) to pretty much anywhere in the inner solar system. So extracting resources away from Earth would potentially be extremely cost-effective. The more local the better, but even mining an asteroid for material to be used on the Moon is a huge advantage over blasting material off the Earth.
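That claim can be made concrete with rough delta-v numbers, the “currency” of spaceflight energy cost. The values below are approximate textbook figures, not mission data:

```python
# Approximate delta-v budget in km/s (textbook ballpark values;
# real missions vary with trajectory and timing).
dv_surface_to_leo = 9.4        # Earth's surface to low Earth orbit,
                               # including gravity and drag losses
dv_leo_to_lunar_orbit = 4.1    # LEO to low lunar orbit
dv_leo_to_mars_transfer = 3.6  # LEO onto a Mars transfer trajectory

# Share of a LEO-to-Moon trip's delta-v spent just climbing out of
# Earth's gravity well:
total = dv_surface_to_leo + dv_leo_to_lunar_orbit
print(f"{dv_surface_to_leo / total:.0%}")  # 70%
```

And because the rocket equation makes propellant mass grow exponentially with delta-v, the launch leg’s share of the actual cost is even larger than its share of the delta-v.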
Further, many asteroids, and especially comets, have water-rich minerals or frozen volatiles. Having a steady water supply is essential if we want humans to live in space. Hydrogen from water is also potentially a source of fuel (not an energy source, just a way of storing energy in hydrogen).
The second use is to bring valuable minerals back to Earth. For this purpose we would want to target asteroids that are already close to Earth, and even come close to our orbit. We could even potentially alter the orbit of such asteroids to keep them in an Earth-lunar orbit, or to rest near a Lagrangian point (a “valley” in the combined gravitational fields of multiple objects that can hold objects in place). We could then mine them at our leisure.
Further, if we identify an asteroid whose orbit might intersect with Earth’s, and therefore pose a threat of impact, we could deal with it by simply mining it out of existence. We would then get a double benefit – we get the minerals and we eliminate a potential threat to the Earth.
Right now we are mostly studying asteroids (largely by studying meteorites) to determine their composition, how to identify their composition remotely, and the composition of specific asteroids that might be targets for future mining. To kickstart an asteroid mining industry we would likely want to pick the lowest-hanging fruit first – which means the easiest to mine, close to Earth, and chock full of highly valuable metals. Even so, this would require a massive investment with a very long horizon before returns are realized.
But once we get a toe-hold in this industry, the potential value is so extreme it will likely take off. We need to develop the technology for mining in low gravity environments, and develop cost-effective methods for returning the ore to Earth or perhaps even refining it in space for delivery to the Moon or Mars. Technological progress over the last two decades, specifically with reusable rockets dramatically lowering the cost of getting into space, makes mining asteroids more feasible, but further technological progress is still required.
It is easy to imagine that in a few hundred years something like the Belters of The Expanse might become a reality – people living permanently in the asteroid belt, mining it for its resources. It’s also possible that the industry would be entirely robotic – why put frail humans into the harsh environment of space unless they are absolutely necessary? Robotics and AI advances have also been extensive in the last decade, and it would certainly be more cost-effective to extract resources without the added expense of keeping people alive in space. Belters, in other words, are likely to be robots.
The post Mining Asteroids first appeared on NeuroLogica Blog.
An exploration of all the scientific possibilities by which ghosts might actually exist in this universe.
A new study reinforces the evidence for the safety and efficacy of the mRNA COVID-19 vaccines. That’s the TLDR, but let’s dive into the details.
Medical evidence is always rolled out in stages. First there is what we would consider preclinical evidence, or basic science. This could be initial uncontrolled clinical observations, or mechanistic animal or in vitro research. At some point we have sufficient evidence to generate a hypothesis that a specific treatment could be effective in treating a specific disease, enough to progress to human research. For FDA-qualifying research, there are four specific phases. Phase I trials look at the safety of the intervention, usually in healthy controls, while also answering basic questions about mechanism and effects. If there are no safety red flags, then the research progresses to a phase II trial, which looks for preliminary evidence of efficacy and further safety data. Again, if that data continues to look encouraging we can progress to a phase III trial, which is a larger and more rigorous trial designed to be definitive. Usually the FDA requires several phase III trials to grant approval of a drug for a specific indication. Then, once the drug is on the market, there are phase IV trials, which look at data from more widespread use to confirm safety and effectiveness in the real world.
Looked at another way, we do research in the lab, then on dozens of people, then on scores to hundreds of people, then hundreds to thousands, and then finally on thousands to millions. Each step of the way we gain the ability to detect less and less common side effects in a broader set of people. Further, the types of evidence are designed to be complementary. Phase III trials, for example, are rigorously experimental, with highly defined populations and randomization to control as many variables as possible. Phase IV trials, on the other hand, are generally observational, designed to look at very large numbers of people in an uncontrolled setting – to determine how safe and effective the treatment is in real-world conditions.
The mRNA vaccines for COVID all went through phase I–III trials before getting approval. Operation Warp Speed, which accelerated the process, was not about cutting corners, but about running the trials more in parallel rather than sequentially (recruiting for the phase III trial could at least begin while the phase II data was still being analyzed) and streamlining the red tape – the science still had to get done. Since the vaccine has been in use we have had the opportunity to gather phase IV-type data. Billions of people have received at least one dose of a COVID-19 vaccine, so that is a lot of data to pore through.
In the recent study:
“This cohort study used data from the French National Health Data System for all individuals in the French population aged 18 to 59 years who were alive on November 1, 2021. Data analysis was conducted from June 2024 to September 2025.”
Some countries with socialized medicine have centralized health data banks, which make very convenient sources for such observational research. This study was able to compare 22,767,546 vaccinated and 5,932,443 unvaccinated individuals. The strength of this kind of study is that it is very representative, because it is so inclusive, and it is statistically robust. The challenge is that it is uncontrolled, so there are always potential confounding factors – differences between those who choose to get vaccinated and those who do not. So how do the researchers deal with these confounding factors? Through statistical weighting.
They looked at sociodemographic characteristics and 41 comorbidities and then weighted the results accordingly. They could still be missing something, but that is a pretty thorough analysis. Their main outcomes were death due to COVID-19 and all-cause mortality over a four year period. They also did a separate analysis of all-cause mortality in the six months following vaccination. For the unvaccinated group, another end-point was getting vaccinated (after which, of course, they were no longer considered unvaccinated).
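The logic of that weighting can be sketched with a toy example. All of the counts below are invented for illustration; they are not the study’s data. The idea is to compare death rates within each stratum of a confounder, then average over the whole population’s mix of strata:

```python
# Toy illustration of confounder weighting (all counts invented).
# One confounder: age stratum. Older people both vaccinate more and
# die more, so a naive comparison is biased.

# (stratum, vaccinated, n, deaths)
rows = [
    ("younger", True,  8000,  8),   # 0.1% death rate
    ("younger", False, 2000,  2),   # 0.1%
    ("older",   True,  9000, 90),   # 1.0%
    ("older",   False, 1000, 40),   # 4.0%
]

# Naive comparison: pool everyone, ignoring the strata.
vax_deaths = sum(d for _, v, n, d in rows if v)
vax_n = sum(n for _, v, n, d in rows if v)
unvax_deaths = sum(d for _, v, n, d in rows if not v)
unvax_n = sum(n for _, v, n, d in rows if not v)
naive_rr = (vax_deaths / vax_n) / (unvax_deaths / unvax_n)

# Weighted (standardized) comparison: average each group's stratum-
# specific death rate over the *whole* population's stratum mix.
total_n = sum(n for _, _, n, _ in rows)

def standardized_rate(vaccinated):
    rate = 0.0
    for stratum in ("younger", "older"):
        stratum_n = sum(n for s, _, n, _ in rows if s == stratum)
        n, d = next((n, d) for s, v, n, d in rows
                    if s == stratum and v == vaccinated)
        rate += (stratum_n / total_n) * (d / n)
    return rate

weighted_rr = standardized_rate(True) / standardized_rate(False)
print(f"naive: {1 - naive_rr:.0%} lower risk")     # 59% -- understated
print(f"weighted: {1 - weighted_rr:.0%} lower risk")  # 73%
```

With these invented numbers, the naive comparison understates the benefit because the higher-risk (older) stratum is over-represented among the vaccinated; the actual study weights over 41 comorbidities plus sociodemographic characteristics rather than a single toy stratum.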
The results are fairly dramatic. The vaccinated group had a 74% lower risk of death from COVID-19, indicating that the vaccine is effective in preventing death from COVID. But also, over the four year period the vaccinated group had a 25% lower risk of all-cause mortality, even when you eliminate death from COVID. Mortality was 29% lower in the first six months after getting vaccinated.
This data pretty clearly shows that the mRNA vaccines were effective, at least in preventing death from severe COVID. The data is also very reassuring that the vaccines are safe. There could still be extremely rare, one-in-a-million type side effects, but there do not appear to be any significant negative effects from the vaccine that could contribute to the risk of death. Medical interventions are all about risk vs. benefit – no intervention is risk free, so zero risk is not a rational or reasonable criterion. What we like to see is a robust excess of benefit over risk.
The bottom line is that if you chose to get an mRNA COVID-19 vaccine in 2021 you were much less likely to die of either COVID-19 or all-cause mortality. Clearly there is significant benefit in excess of any risk, which all the data indicates is tiny.
The post New Study on the COVID-19 mRNA Vaccines first appeared on NeuroLogica Blog.
I was born in the late 1970s, back when “transgender” wasn’t a word you’d see on television, let alone in a school curriculum. Back then, there was only “he” or “she,” and if you didn’t fit neatly into one of those boxes, you were expected to hide it. I learned early that whatever I was didn’t fit, and that saying so could make me a target.
I remember being six years old, draping a towel over my head and pretending it was long hair. I wasn’t rebelling against anything. I was aligning myself, in the only way I knew how, with what felt true. It took years before I discovered there were others like me and decades before society began to admit that such people even existed. The shame came later, when I learned that such feelings were unspeakable.
My first experiences with desire were tangled up with fear. As a teenager, I was drawn to boys but couldn’t imagine anyone seeing me that way. Every crush came with an undercurrent of panic: If he knew who I really am, he’d hate me. And all of them did. The first time I came out to someone I liked, he laughed and told his friends. The next day at school, the whispering started. Within a week, I had no friends left.
For trans women of my generation, that kind of rejection was typical. You learn to move through the world invisible, because being seen too clearly can be dangerous. It still can be. Even now, in my 40s, I find myself editing how I walk, how I speak, how I dress in public. Not out of vanity, but self-preservation.
But when I say that being trans is the last thing I want people to notice about me, it’s not from shame, it’s because being trans should be irrelevant to my humanity. It’s a rare medical condition, not an identity that defines the whole of a person. I’d rather be recognized for my work, my sense of humor, my curiosity, and my contributions to the world than for the fact that I had to undergo medical treatment to live comfortably in my body. Transition isn’t a lifestyle. It’s a form of care that restores equilibrium. A way to make the physical self match the internal one so that life can finally move beyond gender altogether.
That’s why I bristle at the way trans discourse has evolved in the past few years. I’m grateful that young people today have the language, visibility, and community. But I also worry that online activists (many of them very young!) speak about gender transition as if it were a simple matter of identity affirmation rather than the profound, irreversible medical journey it is. Hormone therapy and surgery are not accessories to self-expression. They are life-altering interventions that carry serious physical and emotional consequences.
We are also witnessing an unprecedented rise in gender dysphoria among adolescents and young adults, particularly girls transitioning to be boys (I intentionally use “boys” here instead of “men”). While I don’t doubt that some number of them are trans—trans people have existed in every culture throughout history and we are not going anywhere anytime soon—the sudden increase suggests many are likely grappling with broader questions of identity, anxiety, and belonging rather than a deep-seated, persistent dysphoria, and latch onto gender identity because it is so visible and so celebrated today. But to conflate transient identity exploration with the rare and lifelong condition experienced by people like me is to risk harm. The medical establishment must be able to tell the difference, without fear of being called bigoted for doing so!
I say all this not to gatekeep, but to underscore the gravity of what transition entails. I’ve had surgeries that permanently changed my body. I inject hormones, knowing they’re likely to affect my liver, my bones, and my fertility. I made these decisions as an adult, after years of therapy and reflection. I don’t regret them for a second, but I also wouldn’t wish their necessity on anyone.
Hormones and surgeries are serious medical interventions that alter the body permanently, often with side effects that require lifelong management. For adults, with informed consent and psychological support, they can be lifesaving. For children and adolescents, whose identities are still in flux, such decisions must be approached with restraint and rigorous oversight. Caution is not cruelty. It’s compassion informed by reality.
Gender-affirming care saves lives, no matter what anyone says. I know this because it saved mine. But that doesn’t mean it should be prescribed without deep, individualized assessment, especially for children and adolescents who are still developing their sense of self. Puberty blockers and cross-sex hormones are not toys, and it’s not transphobic to say so. The medical community must balance compassion with caution. Both can coexist.
At the same time, we cannot let this conversation become an excuse for cruelty. The backlash against gender medicine has brought out voices who see our existence itself as pathology. They call for bans, restrictions, and “re-education,” pretending that trans lives can be legislated out of reality. And many more make disparaging jokes about genitals or trans people supposedly not knowing that plastic surgery cannot change biology. These people are not protecting children. They are using them as pawns.
What gets lost in the shouting is the truth most trans adults live quietly every day: We don’t want special treatment! We just want to be left in peace! To work, to love, to grow old without fear. The surgeries and hormones are not what make us who we are. They are tools that allow us to stop fighting our reflection and start living!
I wish more activists today understood that dignity doesn’t come from angry rhetoric or slogans. It comes from honesty. And honesty means acknowledging not just the courage, but also the risk and the pain it takes to become yourself. It also means being truthful about those things with our youth.
Trans rights are human rights, not because trans people are flawless or because half-naked activists shout it at a protest, but because no one should have to justify their existence. Defending trans rights means defending the right to live truthfully and safely. But truth also demands clarity: transition is not something to be entered into lightly, nor denied to those who need it. The middle ground—careful, evidence-based, compassionate medicine—is where reason lives. And it’s where our humanity should, too.
It should have been impossible for the CIA's Glomar Explorer to obtain the ship's bell from the K-129 submarine... but they did. How?
We have all likely had the experience that when we learn a task it becomes easier to learn a distinct but related task. Learning to cook one dish makes it easier to learn other dishes. Learning how to repair a radio helps you learn to repair other electronics. Even more abstractly – when you learn anything, it can improve your ability to learn in general. This is partly because primate brains are very flexible – we can repurpose knowledge and skills in other areas. This is related to the fact that we are good at finding patterns and connections among disparate items. Language is also a good example of this – puns and witty linguistic humor are often based on making a connection between words in different contexts (I tried to tell a joke about chemistry, but there was no reaction).
Neuroscientists are always trying to understand what we call the “neuroanatomical correlates” of cognitive function – what parts of the brain are responsible for specific tasks and abilities? There is no simple one-to-one correlation. I think the best current summary of how the brain is organized is that it is made of networks of modules. Modules are nodes in the brain that do specific processing, but they participate in multiple different networks or circuits, and may even have different functions in different networks. Networks can also be more or less widely distributed, with higher cognitive functions tending to involve more complex, widely distributed networks than specific simple tasks.
What, then, is happening in the brain when we exhibit this cognitive flexibility, repurposing elements of one learned task to help learn a new task? To address this question, Princeton researchers looked at rhesus macaques. Specifically, they wanted to know if primates engage in what is called “compositionality” – breaking down a task into specific components that can then be combined to perform the task. Those components can then be combined in new arrangements to compose a new task, like building with Legos.
They taught the macaques different tasks, such as discriminating between shapes or colors. The tasks had a range of difficulty; for example, they had to distinguish between red and blue, with some of the colors being vibrant and obvious while others were muted or ambiguous. To indicate which shape or color they were perceiving, they had to look either to the upper left or the lower right on some tasks, or the upper right and lower left on others. Essentially they had to link a sensory perception to a motor action. The question was – when the tasks were shuffled, would they use the same brain components (or what the researchers call “subspaces”) in a new combination to perform the new task? And the answer is – yes, that is exactly what they did.
Obviously, this is a rather simple construct, and it is only one study, but the evidence is consistent with the compositionality hypothesis. More research will be needed to confirm these results for different tasks with more complexity, and of course to replicate them in humans. I think the idea of compositionality makes sense, but not everything that makes sense in science turns out to be true. Some ideas in neuroscience are eventually discarded, like the notion of the “global workspace” (a single area of the brain serving as the common networking hub of all consciousness).
There is also already research indicating that compositionality is just one feature of learning, which probably exists on a continuum with another feature of learning – interference. The way you measure interference is to train someone on task A, then train them on related task B, and then retest them on task A. If learning task B reduces their performance on task A, that is interference. You have probably experienced this as well – you sometimes have to “unlearn” a new task to go back to an older one. My family has two cars, one with regenerative braking and one without, each requiring a slightly different driving style. With regenerative braking, lifting off the gas slows the car through resistance. Switching back and forth causes a bit of interference, and it takes a moment to adapt to the new task.
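The A-then-B-then-A procedure described above can be scored with a simple difference measure. A minimal sketch of such a score (the function name and the accuracy numbers are hypothetical illustrations, not taken from any cited study):

```python
def interference_score(acc_a_before: float, acc_a_after: float) -> float:
    """Interference = drop in task-A accuracy after training on task B.

    Positive values mean learning B hurt performance on A;
    zero or negative values mean no interference (or even facilitation).
    """
    return acc_a_before - acc_a_after

# Hypothetical accuracies from a train-A, train-B, retest-A design:
print(interference_score(0.92, 0.81))  # noticeable interference
print(interference_score(0.90, 0.90))  # no interference
```

A "lumper," in these terms, would tend to show larger positive scores on related task pairs than a "splitter" would.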
It turns out that humans and neural networks display similar patterns of compositionality and interference. People exist along a spectrum, with “lumpers,” who transfer skills from one task to another more easily but also display more interference, and “splitters,” who do not transfer skills as much but also do not suffer as much interference. It appears to be a tradeoff, with different people striking different balances between these two features of learning. In other words, if you reuse cognitive legos to build new tasks, that will make it easier to learn new related tasks because you can repurpose existing skills. But then those legos are networked with other tasks, which can cause interference with previously learned tasks using the same legos. Or – you build an entirely new network for a new task, which takes more time but does not repurpose and therefore does not cause interference with previously learned tasks. Which is better? There is likely no simple answer, as it is probably very context dependent.
Further, if people fall along the lumper to splitter spectrum, is that consistent across cognitive domains? Can one person be a lumper for some kinds of tasks and a splitter for others? Can we start as a lumper, but then morph into a splitter if we switch among tasks frequently over time, thereby reducing interference? Will different learning mechanisms favor adopting a lumper vs splitter strategy? Sometimes I want to be flexible and adapt quickly, at other times I may want to invest the time to minimize interference as I switch among tasks. Is there a way to get the best of both worlds?
That’s the thing with interesting research, it usually provokes more questions than it answers. Lots to do.
The post Cognitive Legos first appeared on NeuroLogica Blog.
It’s the most wonderful time of the year. Sure, there are all the twinkly lights on display and family dinners, but really, it’s all about the shopping. Isn’t it? After all, each year we spend thousands of dollars around this time (retail sales in the U.S. between Black Friday and Christmas will likely surpass $1 trillion for the first time this year, up from $994 billion in 2024)—and not all of it on gifts either. It’s the Super Bowl of consumerism.
It’s hard to resist. Every few minutes I’ll get some sort of notification of a once-a-year sale that I must take advantage of immediately. I’m being primed to want things that I never really even thought of because they happen to be 20 percent off. I get nagging follow-up messages to get the items I happened to have glanced at but didn’t succumb to—before it’s too late, before they are gone forever (or at least the discount is)! That’s the annual ritual.
Some, however, are more disciplined than I. They wait until this time of year to buy the things they actually want and need at a discount. They are the real unsung heroes of the season. Just the other day a woman in my writers’ group proudly showed off her new Apple Watch that she had waited all this time to get. “I got it for cheap,” she exclaimed. “I don’t know if it’s any good.”
As for me, I’ve been eyeing a Mason Pearson brush for at least 15 years. Girl math dictates that had I bought it 10 years ago, I would have gotten it for half off. I’m told the quality is so good that it will last long enough to pass on to my children—because that’s what every child craves: a used hairbrush. Maybe in a few years, when it’s twice the price?
The Mason Pearson website showcases “luxury and efficacy” alongside 130 years of heritage—a masterclass in how legacy brands use elegant design and storytelling to justify premium pricing.

Mason Pearson, though, represents a “legacy” brand in an increasingly disposable world. It is characterized by its longevity (think: Levi’s and Tiffany & Co.), rich history, perception of quality, and cultural relevance. Brands come and go, but Mason Pearson gets its name from its founder, an engineer who first created it in 1885. Multiple generations have enjoyed smooth hair from these high-quality durable brushes that continue to be handcrafted in England and are referred to as “the Ferrari of brushes.” It has cult status. I hear that spoilt pets like it too.
But this isn’t an ad for Mason Pearson. It’s a column about psychology.
Legacy brands tend to evoke nostalgia, one of the most powerful feelings a brand that wants to sell lots of products can invoke. It reminds us of simpler or happier times, or connects us to family members who might have used the same products. Or people like Marilyn Monroe, who to this day is still partially responsible for the sales of Erno Laszlo skincare products and Chanel No. 5 perfume. If you wear the latter to bed, you too can continue the legacy. This emotional resonance and sentimental bond help foster brand loyalty by transforming the product into something more meaningful.
The Ed Feingersh photograph that launched decades of marketing: Monroe with Chanel No. 5 in 1955. One interview answer in 1952 became eternal brand mythology—and continues selling perfume in 2025.

As Mad Men’s Don Draper describes it, nostalgia is a “twinge in your heart, far more powerful than memory alone.”
The legacy brand also comes with a story.
Successful legacy brands leverage their rich history and craftsmanship through compelling storytelling. This narrative allows consumers to feel like they are part of an ongoing legacy, connecting them with tradition and artistry that defines the brand. A good example of this type of marketing is deployed by Grado Labs, a company that produces headphones “handmade in Brooklyn, producing the finest audio products since 1953 in the building that our father/grandfather/great grandfather Pasquale bought back in 1918.”
The Maker Stories website featured Grado Labs’ exemplary legacy brand marketing: four generations of family craftsmanship in the same Brooklyn building since 1918, triggering nostalgia and trust in consumers.

Indeed, research shows that our brains respond to stories by triggering the release of oxytocin, a hormone that promotes trust. This helps explain the results of the 2009 Significant Objects experiment conducted by Rob Walker and Joshua Glenn, who found that pairing a story with a product could increase its perceived value by up to 2,706 percent.
According to clinical psychologist Clary Tepper, when consumers buy into a brand they are engaging in what is deemed by psychologists to be “symbolic consumption” whereby the brand becomes a representative of a set of ideals or values.
“From a psychological point of view, the principles of memory, identity, and emotional security are all at work here,” she tells me. “For consumers who feel like the world is constantly changing, legacy brands offer a sense of stability and continuity. Those brands also tap into shared cultural history, collective memory, and identity, all of which can foster a sense of belonging and trust. If consumers have had positive experiences with these brands in the past, engaging with them again can activate reward pathways in the brain.”
Deidre Popovich, Associate Professor of Marketing at Texas Tech University, agrees: “People are quite drawn to legacy brands because they feel very familiar and comforting. From a consumer psychology perspective, these brands may be tied to early personal memories, such as thinking back to certain family routines when you were a kid. When someone sees a legacy brand, they often feel this sense of recognition that links back to earlier points in their life. That is what creates a feeling of nostalgia. It is usually less about the product itself or its functional purpose, and more about reconnecting with family moments and/or positive feelings.”
Ownership of a legacy brand’s products can also be a way for consumers to signal their aspirations or social status. It’s part of an identity that they can choose to put on—or change.
Sarah Seung-McFarland is a psychologist and founder of Trulery, where her focus is specifically on fashion and design psychology. She tells me: “In psychology, we know that consumers don’t just buy a product, they buy into the identity, lifestyle, and social meaning connected to it. Brands like Louis Vuitton have spent decades being linked to wealth, status, and an aspirational lifestyle through film, celebrity culture, and consistent visual storytelling.”
Brands, in a sense, become a stand-in for a world that might be within our reach. Though, says Seung-McFarland, “For many, the desire came from the inaccessibility itself. Owning a legacy piece represented the version of themselves they hoped to become.”
It’s also a type of reassurance about the quality and reliability of the brand; its longevity is testimony to that. That’s why we’re seeing a revival of appreciation for long-standing brands.
The Reddit message board “BuyItForLife” receives 1.7 million weekly visitors. There the focus seems to be on brands that, as the title suggests, last. Some items are expensive status symbols like Rolex watches and Birkin bags, but others are more practical items like eiderdown bedding, Montblanc fountain pens, Le Creuset cookware, knives with a lifetime of sharpening, Canada Goose coats that can be passed on as family heirlooms, microwaves that don’t break within a year or two, Zojirushi rice cookers, Dyson vacuums, Viberg boots, Barbour waxed jackets, Herman Miller office chairs, and on the more affordable side—Stanley water bottles. And yes, my coveted Mason Pearson brush is also a common recommendation. But most surprisingly, there’s even a laptop that users believe can last a lifetime. The purpose—at whichever price point—is the pursuit of quality and longevity.
According to Popovich, going with a long-standing brand helps consumers reduce the cognitive effort involved in making a choice. “Shoppers don’t have to work hard to evaluate it; the feeling of familiarity makes it seem like an obvious decision,” she says, adding, “This is due to cognitive fluency, which is the feeling of ease we get when our brain can process information quickly and without effort. This feeling influences our judgments, making us more likely to perceive information as truthful and likable, simply because it’s familiar.”
I’ll keep that in mind as I inch toward buying that expensive hairbrush that somehow keeps feeling more and more like the “obvious” choice.
In the long tradition of scientific wagers, Skeptic magazine publisher and historian of science Dr. Michael Shermer has issued a $1000 bet that…
Discovery or disclosure of alien visitation to Earth in the form of UFOs, UAPs, or any other technological artifact or alien biological form, as confirmed by major scientific institutions and government agencies, will not happen by December 31, 02030.

Taking him up on that challenge is Harvard astronomer and Director of the Galileo Project Dr. Avi Loeb. The wager is placed through the Long Now Foundation’s Long Bets program (“an arena for competitive, accountable predictions”), which adds a 0 at the front of all dates on a 10,000 year calendar (“to foster better long-term thinking”), in keeping with their Clock of the Long Now, being built in Texas and designed to tick for 10,000 years. Details of the Shermer-Loeb wager may be found here.
Whoever wins, the $1000 stakes will be donated to the Galileo Project Foundation. Here are the terms for deciding who wins:
By Dec 31st 02030, at least two of these three scientific organizations—NASA, the National Science Foundation, and the American Astronomical Society—will affirm that discovery of extraterrestrial intelligence in the form of UAPs, UFOs, or any other interstellar objects that are determined to be ETI technological in nature, or any alien biological life form found here on Earth, has been made.

Here is Dr. Loeb’s argument:
The search for technological artifacts has just started in earnest in 2025 with the discovery of the anomalous interstellar object 3I/ATLAS, the launch of the Rubin Observatory and the construction of three Galileo Project Observatories.

Here is Dr. Shermer’s argument:
Since the founding of the Skeptics Society and Skeptic magazine in 1992, I have been documenting predictions by UFOlogists that discovery or disclosure of alien visitation to Earth is coming any day now.

The Long Bets program was started in 2003 by Stewart Brand and Kevin Kelly, and is part of a long tradition of scientific wagers dating back at least to 1870, when Alfred Russel Wallace, co-discoverer with Charles Darwin of natural selection, accepted a £500 wager (a workingman’s wages for one year) placed by flat-Earther John Hampden that scientists could not prove that the Earth is round.
Wallace proved it by demonstrating same-height poles placed at even intervals along a six-mile stretch of the Old Bedford Canal (north of London) appeared through a telescope lower by the exact “amount calculated from the known dimensions of the earth.”
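The “amount calculated from the known dimensions of the earth” follows from simple geometry: a point a distance d along the surface sits roughly d²/2R below a level sight line. A minimal sketch of that calculation (it ignores telescope height and atmospheric refraction, both of which Wallace’s real measurement had to contend with):

```python
# Expected drop of a distant marker below a level sight line due to
# Earth's curvature, using the small-angle approximation h ~ d^2 / (2R).
EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres
MILE_M = 1609.344           # metres per statute mile

def curvature_drop_m(distance_miles: float) -> float:
    """Approximate drop (metres) below the tangent sight line after
    distance_miles along the surface, valid when d << R."""
    d = distance_miles * MILE_M
    return d * d / (2 * EARTH_RADIUS_M)

for miles in (1, 3, 6):
    print(f"{miles} mi: {curvature_drop_m(miles):.2f} m")
```

At six miles the drop is about 7.3 meters (roughly 24 feet), and because it grows with the square of the distance, evenly spaced same-height poles trace a visible curve through the telescope rather than a straight line.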
Unfortunately, Wallace had to take Hampden to court to collect his winnings. Thus, it is important that such wagers be professionally adjudicated by neutral referees. Other wagers include:
If Loeb wins the bet, it will represent what would arguably be the greatest discovery in human history, namely that we are not alone in the universe.
If Shermer wins the bet, it does not mean that we are the only intelligence in the cosmos, only that claims of contact are likely greatly exaggerated and that we need to keep searching for the truth about extraterrestrial intelligence.
Yesterday I started a response to this article, which seems to me to fit cleanly into a science-denial format. The author is making a lawyer’s case against the notion of climate change, using classic denialist strategies. Yesterday I focused on his denial that scientists can ever form a meaningful consensus about the evidence, conflating consensus with the straw man that it is mere opinion rather than being based on the totality of the evidence. Today I am going to focus on the notion of “post-normal” science. Macrae gives this summary of what post-normal science is:
“The conclusions of post-normal science aren’t ultimately based, then, on empirical data, with theories that can be rigorously tested and falsified, but on “quality as assessed by internal and extended peer communities,” i.e., “consensus,” i.e., informed guesses.”
This is another straw man. He is creating a false dichotomy here, based on his misunderstanding of science (he is a journalist, not a scientist). Yesterday I gave this summary of how science works:
“Science is not a simple matter of proof. There are many different kinds of evidence – observational, experimental, theoretical, and modeling (computer modeling, animal models, etc.). Scientific evidence can use deduction, induction, can start with observation or start with a hypothesis, can use theoretical constructs, can make observations about the past and make predictions about the future. All of these various activities are part of the regular operation of science. No one type of evidence is supreme or perfect – they all represent different tradeoffs. Scientific conclusions are always a matter of inference – scientists make the best inference they can to the most probable explanation given all of the available evidence. This always involves judgement, and some opinion. How are different kinds of evidence weighted when they appear to conflict?”
He seems to believe that the only “real” science is one based on pure evidence, requiring no opinion or judgement – but this does not exist. There is no proof in science, only inference based on the evidence, which is always partial and imperfect. But this is the strategy of science denial – create an artificially narrow definition of science (which may sound reasonable to a non-scientist) then try to exclude the science you want to deny from “real” science. So, evolution deniers claim that no exploration of the past can be “real” science because you cannot do repeated experiments on the past. No one was there to observe it. Now Macrae is saying we cannot do science about the future, because you can’t experiment on the future, only make “guesses”.
Macrae is also repeating another common evolution-denial tactic of saying that climate change cannot be falsified. He has to go there because his notion that you cannot do science about the future is obviously false when you consider that science often functions by predicting what will happen in the future, and that such prediction can potentially be falsified. He claims climate models are not real science (they are one piece of doing climate science) because even if they are wrong, climate scientists don’t change them. But in order to make this point, he has to misrepresent how well the climate models over the past 50 years have matched actual warming.
To do this he again employs a common denial tactic – reference outliers that agree with your position. There are three prominent climate change denying scientists that always seem to be quoted – Lindzen, Spencer, and Christy. A thorough exploration of their claims is beyond this post, but suffice to say, they are a minority opinion, far from the mainstream of their field. Every field has such outliers. Again – this is why we look to see if there is a consensus in a discipline, to see where the weight of opinion is. Otherwise you can play – choose your own expert – to find whatever opinion suits you. In this case, Macrae cites Christy to claim that climate models have over-called warming. He states this as a fact, without disclosing that Christy’s analyses are controversial at best, and clearly in the minority.
Here is a good review of climate models by an academic source. They conclude:
“Climate models published since 1973 have generally been quite skillful in projecting future warming. While some were too low and some too high, they all show outcomes reasonably close to what has actually occurred, especially when discrepancies between predicted and actual CO2 concentrations and other climate forcings are taken into account.”
You have likely seen these projections, with a line surrounded by a zone of uncertainty, projecting temperature into the future. Actual warming has been within this zone (within 2 standard deviations of the average predicted warming). What they mean by taking discrepancies into account is this: if you ran a model in 1980 and plugged in a predicted amount of CO2 release, but the actual CO2 release was more or less than that, the model will be off not because it doesn’t work, but because the wrong amount of CO2 was entered. We can then run the model again with the correct CO2 and see how it predicts warming. But even without this correction, the models have done generally very well. They are not perfect, but they are accurate, and are being tweaked all the time to become more sophisticated and more accurate.
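The point about re-running a model with corrected inputs can be shown with a toy example. This is not a real climate model – just a hypothetical logarithmic warming response with made-up sensitivity and CO2 numbers – but it illustrates how a projection can miss while the model itself is sound:

```python
import math

# Toy illustration only -- NOT a real climate model. It assumes a simple
# logarithmic response T = S * log2(C / C0); the sensitivity S and all
# CO2 figures below are hypothetical, chosen just to show the idea.
S = 3.0     # assumed warming in deg C per doubling of CO2 (illustrative)
C0 = 340.0  # assumed baseline CO2 concentration in ppm (illustrative)

def projected_warming(co2_ppm: float) -> float:
    """Warming relative to the baseline concentration C0."""
    return S * math.log2(co2_ppm / C0)

assumed_co2 = 440.0  # emissions path a hypothetical 1980 run assumed
actual_co2 = 420.0   # hypothetical observed concentration today

# The original projection overshoots not because the physics is wrong,
# but because the assumed emissions input was too high. Re-running the
# same model with the observed CO2 gives the fair comparison:
print(projected_warming(assumed_co2))
print(projected_warming(actual_co2))
```

The gap between the two printed values is attributable entirely to the emissions assumption, not to the model’s physics – which is exactly the distinction the re-analysis of historical climate models draws.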
Much of what Macrae says after this is based on the false premise that climate models don’t work but scientists ignored this – hence climate science is not falsifiable. But this is nonsense – most analyses find that the climate models work just fine.
Macrae also, even within his false premise, is committing another denialist trope – saying that because the models were allegedly off (they weren’t) they are therefore wrong. Evolution deniers do this a lot as well – because scientists were wrong about the branching pattern of evolutionary relationship among certain species, perhaps evolution did not happen at all. Even though the cherry-picked outlier he chose shows the models were off, they still predicted warming and the globe is warming. They were correct about the direction and persistence of warming, just off in terms of magnitude (again, according to Christy, but not the majority of climate scientists).
Taken together these strategies that Macrae is using are common among many campaigns to deny accepted science (accepted because the totality of evidence favors those conclusions). But of course, he denies the denial, even while blatantly engaging in it.
The post Is Climate Science “Post Normal” Science – Part II first appeared on NeuroLogica Blog.
A roundup of evidence supporting the use of radiocarbon dating to assess the age of organic matter from 500 to 55,000 years old.
This article is from a year ago, but it was just sent to me as it is making the rounds in climate change denying circles. It is by Paul Macrae, who is an ex-journalist who now seems to be primarily engaged in climate change denial. The article (a chapter from his book on the subject) is full of the standard climate denial tropes – for the sake of space, I would like to focus on three specific points. The first is the claim that climate science is “settled”, the second is the notion of “post-normal science”, and the third is a factual claim about the accuracy of prior climate models.
Of course, if there is a consensus among climate scientists that global warming (I will get into more details on what this means) is “settled”, that makes it difficult, especially for a non-scientist, to question the conclusion. So order number one – deny that there is a consensus, deny that consensus is even a thing in science, and deny that science can ever be settled. I suspect I will never be able to slay this dragon, it is simply too useful rhetorically, but for those who are open to argument, here is my analysis.
First – consensus is absolutely a thing in the regular operations of science. A consensus can be built in a number of ways, but often panels of recognized world experts are assembled to review all existing scientific data and make a consensus statement about what the data shows. This is often done when there is a policy or practice question. For example, in medicine, practitioners need to know how to practice, and these consensus statements are used as practice guidelines. They also set the standard of care, so as a practitioner you should definitely be aware of them and not violate them unless you have a good reason. Obviously, the question of global warming is a serious policy question, and so providing scientific guidance to policy makers is the point, such as with the IPCC. Consensus is also used to set research and funding priorities, to establish terminology, and resolve controversies. But to be clear – these mechanisms of consensus do not determine what the science says. That is determined by the actual science. The point is to provide clarity regarding complex scientific evidence, especially when a practice or policy is at issue.
The reason we need such expert review is because scientific evidence, as regular readers here know, is complicated. Science is not a simple matter of proof. There are many different kinds of evidence – observational, experimental, theoretical, and modeling (computer modeling, animal models, etc.). Scientific evidence can use deduction, induction, can start with observation or start with a hypothesis, can use theoretical constructs, can make observations about the past and make predictions about the future. All of these various activities are part of the regular operation of science. No one type of evidence is supreme or perfect – they all represent different tradeoffs. Scientific conclusions are always a matter of inference – scientists make the best inference they can to the most probable explanation given all of the available evidence. This always involves judgement, and some opinion. How are different kinds of evidence weighted when they appear to conflict?
So it is a meaningful question to ask – do different scientists looking at the same question from different perspectives come to roughly the same conclusion? For example, do those doing ice-core analysis largely agree with those looking at tree rings? We have data from different kinds of temperature measurement, from physicists looking at the activity and influence of the sun, and on the role of CO2 as a greenhouse gas. We may even have data from planetary astronomers. Then we have various computer models, which take input from many sources of information. No one source of information is definitive. So scientists triangulate from different perspectives and see if they align on the same answer. The only way to know is to see if there is a consensus among the various scientists, each with different pieces to this enormous puzzle.
But of course, in order to be precise, you have to break down the question of global warming into its specific pieces – does CO2 drive warming, are there other sources of climate forcing, what is the climate sensitivity to CO2, is the planet actually warming and by how much, and what will be the consequences of different levels of warming? We can’t just say “climate change” or “global warming” – we have to address each component separately.
Is science ever “settled”? Macrae conflates this notion with “certain” (there are such straw men throughout his article). Science is never 100% certain, and it is never done. But there is a place for the notion that some claims in science are so well-established that they are functionally settled, meaning we no longer have to specifically establish them over and over again. We can take them as a given and move on to more detail and other sub-questions. For example, it is settled that the Earth is roughly a sphere, while planetary scientists continue to revise greater and greater detail. It is established that life on Earth is the result of organic evolution from a common ancestor, that the brain is the organ of the mind, that DNA is the molecule of inheritance, that plate tectonics is real, and that multiple sclerosis is an auto-immune inflammatory disease. Research in all these areas is ongoing, but there is very strong agreement (a consensus) that these basic fundamental questions are settled.
They could still, theoretically, be overturned, but the probability is so close to zero we can treat it functionally as zero. It is simply not a serious scientific possibility that the Earth is flat, that life was created 10,000 years ago, that consciousness lives in the heart, that proteins carry inheritance, that the Earth is completely static, or that MS is caused by an imbalance of the humors. Would Macrae agree that any of these questions are scientifically “settled”? Should we give serious consideration to flat-Earthers?
With regard to climate change, it is well-established (use whatever phrase you like) that CO2 is a greenhouse gas, and that the planet is warming. It is also well-established that industrial release of previously sequestered CO2 into the atmosphere and therefore the carbon cycle is forcing the climate to become warmer on average. There is a range of possible climate sensitivity, which is open to revision, but that range has statistical confidence intervals and the range is narrowing as our confidence increases through further research. Exactly how much warming will occur is open to further study and revision, but again there is a range with confidence intervals. What will the consequences be? This is difficult to predict, but there are some very reasonable statements that we can make, informed by climate models and what has already happened over the last 30 years. But there is a lot of uncertainty – and of course, that uncertainty cuts both ways. It could be better than the average prediction, or it could be worse.
What is not reasonable is to assume everything will be fine. This is like facing the possibility of cancer, with the same degree of uncertainty. Just hoping that it’s all fine and doing nothing is likely not a rational course of action. This doesn’t mean you have to opt for the most radical surgery either. There is often a range of options, which can be determined by the level of evidence and the resulting risks vs benefit. Doing nothing about climate change until we have a high degree of certainty is also not a rational course, because climate change gets harder and harder to deal with the longer it goes and the worse it gets. Solutions also take decades to unfold.
Typically, those in the denier camp use the most unreasonable or extreme version of climate mitigation strategies as if they are the only option. This is like alternative medicine proponents characterizing all cancer treatment as “cut, burn, and poison.” Macrae similarly writes:
“And shouldn’t we be especially wary when this science, with its attack on fossil fuels, threatens the very foundations of Western-style civilization?”
Now who’s the alarmist? Sure, there are extremists on the fringe of every movement. In terms of actual proposed policy, however, and the center of gravity of climate discussion, we are mostly talking about investing in R&D, investing in infrastructure, and jiggering the markets away from fossil fuels and towards green energy. There is no serious policy discussion about banning fossil fuels and collapsing western civilization. There is nothing like that in the Paris Accords, or in any UN recommendation. Whatever you think about the effectiveness of Biden’s policies, they were entirely carrots for industry to increase investment in green energy. About the most radical actual policy proposal is a carbon tax, which most economists agree would likely be effective. This would hardly collapse civilization.
These are all inevitable technologies because they are superior to burning fossil fuels on many levels – cleaner air, reduced health care costs, more energy independence. We just want to make them happen faster.
Tomorrow I will write part II, covering post-normal science and the accuracy of climate models.
The post Is Climate Science “Post Normal” Science? first appeared on NeuroLogica Blog.
Here we go…again. Another documentary film about how disclosure of alien contact is imminent. It’s a claim I’ve been hearing for over three decades, albeit this one is of a higher quality than the dozens of similar such docs available on Amazon Prime (and hundreds more on YouTube).
With The Age of Disclosure, filmmaker Dan Farah (Call Jane, Ready Player One, The Phenomenon) has lifted the genre to a higher level than the others (James Fox being the exception, with The Phenomenon—credit shared with Farah—well worth watching). I was tempted to offer a snarky “I watched it so you don’t have to,” but if you are relatively new to the UFO/UAP topic, I recommend investing the twenty bucks Amazon Prime charges to rent the film for 30 days ($25 to buy it). The artfully edited trailer hints at what is to come in the full film.
The Age of Disclosure is packaged and produced so well that naïve viewers may come away thinking that something strikingly original, shockingly new, and world-shaking is about to be loosed upon the world, everywhere the ceremony of innocence drowned (Yeats, of course).
Alas, it is not to be. Every fact, opinion, or anecdote in the film has been rehearsed elsewhere in recent years, and a good deal of the footage is from Congressional hearings, media reports, and stock interviews that have been circulating for years on CNN, Fox News, News Nation, and even the Wall Street Journal and the New York Times, along with other mainstream media sources and large-audience podcasts. But the fusillade of statements, interspersed with the familiar UAP grainy videos and UFO blurry photographs, leaves no doubt about the film’s conclusion:
“We are not alone in the universe.”
Wow, can we see these aliens and their spaceships?
Nope.
Why not?
The opening credentialing sequence gives us a clue: when you don’t actually have concrete evidence that we can all see, your case depends on eyewitness accounts, so you must establish that the witnesses’ words are trustworthy and reliable. The intrepid UAP proponent Lue Elizondo is a case in point.
The problem is that we can’t be in anyone else’s shoes, so we must rely on evidence that does not depend on a single eyewitness. “If only you could have been in my shoes that night when I saw Bigfoot—there would be zero shadow of a doubt….” In science, such anecdotes do not count as evidence; you need to be able to show actual physical evidence—in this case the body of a Bigfoot.
Continuing my biological analogy, in order to name a new species you have to present a type specimen—a holotype—that everyone can see, examine, photograph, analyze, etc. If you gave a talk at a biology conference about how you discovered a new species of bipedal primate, no one would take you seriously if you did not also present unmistakable evidence. If all you had were stories about what you saw, and maybe a couple of out-of-focus videos and grainy photographs, no one would believe you…and for good reason!
What scientists and skeptics are asking of the UFO and UAP community is to, at long last, show us the evidence. We have been hearing of pending disclosure for half a century and are always left wanting. We don’t need to know your credentials, how many years you worked for the U.S. government or military, or how strongly you believe that what you saw was aliens or alien craft; just show us what you claim is here and we will all believe. QED!
But no. Here is parapsychologist, remote viewing researcher, and UFOlogist Hal Puthoff:
“The classified data that we had access to when we joined the program was indisputable.”
Here is astrophysicist and UFOlogist Eric Davis:
“There is 80 years of data that the public isn’t even aware of.”
Here is Jay Stratton, prominently featured in the film as one of the defense officials who first investigated UAP:
“The things that I’ve seen, the clearest videos, the best evidence we have that these are non-human intelligence, remains classified. I have seen with my own eyes non-human craft and non-human beings.”
He saw it himself! No FOAF (Friend of a Friend) urban legend. O-kay, but can I see it with my own eyes? No? Then I remain skeptical, as one should be in science.
The film then reviews most of the standard UAP pilot accounts, such as this from Navy pilot Ryan Graves: “They [UAPs] were ubiquitous. We were seeing them almost daily.” If true, given that nearly every commercial airline passenger has a smart phone with a high-definition camera at the ready, there should be thousands of clear and unmistakable photographs and videos of these UAPs. To date there is not one. Nada. Zilch. Here the absence of evidence is evidence of absence.
A key message of the film is that there are political and even military ramifications of UAPs. Here is Stratton again: “They [UAPs] have both activated and deactivated nuclear weapons in both the U.S. and Russia.” In the category of “If this were true, what else would be true?”… where were the aliens in 1945 to stop the bombing of Hiroshima and Nagasaki? Why did they allow us to detonate the first atomic bomb in New Mexico? Why didn’t they curtail the hundreds of nuclear explosions in the Nevada desert and the South Pacific? The answer is classic hand-waving rationalization, as in Stanford University professor and UFOlogist Gary Nolan’s answer: “They [the aliens] were willing to let us see the consequences of our actions.”
To add urgency to the film, Elizondo tells us that “It [UAP sightings] is happening all over the world and it is happening with greater frequency.” The Bayesian reasoner in me asks: can we see some data on the base rate of sightings over the decades to make an assessment if, in fact, there has been an increase in frequency? No such data is provided.
Another standard theme throughout the film is explaining why—despite the unmitigated confidence that alien contact has been discovered (but not yet disclosed)—the evidence is not readily available. Several reasons are on offer, such as this one from Elizondo: “religious fundamentalists in the Pentagon who had a severe adversity to this topic…put their religion above national security.” Among the fundies, apparently, were those who told Stratton “these were demons and we were messing with Satan’s world.”
The documentary has attracted wide attention, including coverage from Bill Maher on HBO and Joe Rogan.
As for the larger issue of the consequences of disclosure on religious faith, numerous surveys over the years have consistently found that the vast majority of religious people would not find the discovery of extra-terrestrial intelligences (“non-human biologics” in the newfangled UAP jargon meant to legitimize an otherwise fringe movement) in any way a threat to their religious beliefs. Theologian Ted Peters, for example, queried 1,300 people on the matter, finding that most people do not think the discovery of extraterrestrial intelligence would shake their faith. The reason is as obvious as it is logical: If an omnipotent deity can create life on Earth, he could do it elsewhere in the universe. In a cosmos with a sextillion planets (1 followed by 21 zeros, or 1,000,000,000,000,000,000,000), what a terrible waste of space it would be (echoing Carl Sagan) to create a cosmos so vast as to house so many planets, only one of which would contain sentient, conscious beings worthy of saving.
What are these UAPs, exactly? Here the film segues into a chalkboard lecture by Elizondo, who explains that there are four hypotheses on offer:
Unfortunately, left off the list was…
For #5, I am fond of quoting from Leslie Kean’s 2010 book UFOs: Generals, Pilots and Government Officials Go on the Record, in which the UFOlogist admitted that “roughly 90 to 95 percent of UFO sightings can be explained” as:
weather balloons, flares, sky lanterns, planes flying in formation, secret military aircraft, birds reflecting the sun, planes reflecting the sun, blimps, helicopters, the planets Venus or Mars, meteors or meteorites, space junk, satellites, swamp gas, spinning eddies, sundogs, ball lightning, ice crystals, reflected light off clouds, lights on the ground or lights reflected on a cockpit window, temperature inversions, hole-punch clouds, and the list goes on!
Elizondo then ticked off six characteristics (“observables” because, well, it sounds more scientific) about UAP:
All of these assumptions are based on highly questionable interpretations of grainy videos and blurry photographs of UAPs/UFOs. For example, an incredibly grainy video, apparently filmed from the USS Omaha off the coast of San Diego in July 2019, shows a dark blob appearing to pass from above the waves to below. This, we are told, is clear and unmistakable evidence that UAPs can seemingly transition from the air into the ocean where, the speculation continues, they can move through the water at hundreds of miles per hour. What’s more likely? That all of physics and aerodynamics needs revising, or that someone has misinterpreted a low-resolution video?
An unidentified anomalous phenomenon (UAP) was filmed from the USS Omaha off the coast of San Diego in July 2019. CREDIT: Jeremy Corbell/Weaponized Podcast
I was surprised—even shocked—to see that the film included accusations that Lue Elizondo was not completely honest about his role with the U.S. government in the UAP program. To wit, we are told that Pentagon spokesman Christopher Sherwood said:
“Mr. Elizondo had no responsibilities with regard to the AATIP program.”
And Pentagon Spokesperson Susan Gough revealed:
“Luis Elizondo did not have any assigned responsibilities for AATIP.”
With these accusations included, I fully expected Elizondo to offer an explanation, or the filmmakers to include proof that Elizondo worked at AATIP. Surely they could have provided a contract or pay stubs or some employment paperwork for Elizondo and AATIP, but no. Did Elizondo work for AATIP? It’s hard to believe that he didn’t, given how much information he reveals about what was going on in that department. And why would anyone lie about something so easy to check? Who knows, but UFOlogist Bob Lazar (who said he worked at Area 51 and back-engineered alien spaceships) lied when he said he graduated with degrees in physics from MIT and Caltech when, in fact, he did not attend either institution. Lazar’s lie was exposed by UFOlogist Stan Friedman, and the explanation on offer is that “they” erased all traces of Lazar’s academic record.
The film includes several high-profile interviews, among them Secretary of State Marco Rubio.
Another theme in the film, one that almost everyone I’ve ever engaged with on this topic is confused about, is articulated by former CIA Director John Brennan: “I think it’s a bit presumptuous, if not arrogant, for us to believe that there’s no other form of life anywhere in the entire universe.”
Of course, but that is not what any of this is about, or else the filmmakers would have interviewed SETI scientists, who have been listening for ETI signals for decades. The question “are they out there somewhere?” is a different matter entirely than “have they come here?” My provisional answers are “yes” and “no”, although as a good Bayesian I am willing to update my priors and flip my credence from skepticism to belief…with sufficient evidence.
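That Bayesian talk can be made concrete with a single application of Bayes’ rule. The sketch below is purely illustrative; the prior and likelihood numbers are my own made-up assumptions, not data from the film or from any survey. The point it demonstrates is general: evidence that mundane causes produce almost as readily as the exotic hypothesis (blurry footage, grainy radar returns) barely moves a skeptical prior.

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Illustrative (made-up) numbers: a generous 1% prior that aliens are
# visiting, and blurry footage that mundane explanations (balloons,
# drones, sensor artifacts) produce almost as readily as alien craft.
prior = 0.01
p = posterior(prior, p_evidence_if_true=0.9, p_evidence_if_false=0.8)
print(round(p, 4))  # → 0.0112
```

The credence budges from 1 percent to about 1.1 percent. Only evidence that is far more likely under visitation than under mundane causes (a craft on the ground, a body on a table) would flip a rational skeptic’s credence to belief.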
What do the featured experts in this film think the aliens are? Elizondo suggests that they might be “cryptoterrestrial” (whatever that is—never explained) or some “proto-human” that branched off the family tree long ago and is “as natural to this planet as we are.”
That’s the sanest of the explanations. Hal Puthoff suggests that the UAP aliens might be time travelers, or some ancient civilization hiding here on Earth or on the seabed. Well, they must be hiding exceptionally well, because explorers (and satellites) have covered nearly every square meter of the planet and there is no sign of such an ancient civilization. (Maybe they have a cloaking device, like the one the Starship Enterprise used to monitor primitive civilizations on other planets.) “Whoever it is and wherever they are,” Puthoff concludes, “they’ve been operating here for a very long time.” How long? We are not told.
One segment of the film stands out, and that is the so-called “Legacy Program” that is a “crash retrieval program” to “back-engineer” alien spaceships. Now, to be sure, the U.S. government (along with other governments) has such programs to study downed/crashed airplanes, jets, drones, and spacecraft of other nations, because obviously we’d like to know what the other guy is up to technologically, and that, apparently, has been going on since the First World War (“what kind of altimeter are those German biplanes using, anyway?”). But if you Google search “Legacy Program” this is what you find:
Department of Defense Legacy Resource Management Program: This is a real, long-standing government program that funds projects to protect natural and cultural resources on military installations. Its mission is to balance military readiness with environmental stewardship.
According to this site:
The mission of the Legacy Resource Management Program is to provide coordinated, Department-wide, and partnership-based integration of military mission readiness with the conservation of irreplaceable natural and cultural resources.
When pressed to explain this Legacy crash-retrieval program, the Pentagon’s All-domain Anomaly Resolution Office (AARO) concluded in a 2024 report that “there is no evidence of such programs, attributing the claims to misidentified real events or circular reporting.”
Why the lacuna? Here is Lue Elizondo’s explanation: “The ‘Legacy Program’ was so secret that it was withheld from the Secretary of Defense, Congress, and even the President of the United States.” And: “We had a choice: keep silent while keeping Americans in the dark, or resign my position in protest and fulfill my obligations to the American people by telling the truth about what I know about UAP.”
Elizondo quit. How noble. It must fill one’s ego with massive pride to know that you have made the greatest discovery in the history of humanity and no one around you has any idea of this monumental event.
Throughout the film nods are made about UAPs as a “national security threat,” for example: “It could be China. It could be Russia.” Former Director of National Intelligence James Clapper: “any unexplained phenomena could pose a national security threat.” Stratton: “Violation of all nations’ sovereign airspace presents a safety of flight concern for all military and commercial aviation.”
Well, sure it could, but does it in fact? And why include all these admonitions about national security threats to our nation from other nations, when none of these people think that is the origin of UAPs? As stated at the beginning, they all think they’re space aliens.
An amusing (and to UFOlogists, irritating) question that skeptics such as me like to ask is, “Why do they keep crashing?” If the aliens are so advanced, so sophisticated, and have engineered anti-gravity propulsion systems that can use relativistic quantum space-time bubbles to jet about the galaxy in the blink of an eye, why can’t they seem to land in New Mexico (and elsewhere) without slamming into the ground?
The film’s experts have a ready-made answer: They’re not crashing at all! These are intentionally left “gifts” to humanity. Or they’re a giant IQ test. Or, as in the film 2001: A Space Odyssey, it’s the aliens’ way of imparting superior intelligence to one species of hominin, namely us.
Why can’t we all see the evidence that the film’s experts have seen with their own eyes? Because it would freak everyone out: the stock market would tank, economies would collapse, governments would fold, and the faithful would abandon their religions. That’s what we’re told, anyway, and the filmmakers insist that the coverup is so extensive and powerful that “99.99 percent of all scientists are skeptical.” Perhaps, but could it be that 99.99 percent of scientists think like scientists who demand extraordinary evidence for extraordinary claims?
Then there is the assertion that “they” are silencing people in the know with threats to their jobs, careers, and lives. Elizondo: “Historically, every time a military member had a UAP encounter, it was very quickly swept under the rug and they were discouraged from talking about it.”
Right, so then why are all these military eyewitnesses going on CNN, Fox News, and Joe Rogan to tell millions of people about their UAP encounters? If “they” are so effective at covering up the existence of aliens, how is it that there are thousands of articles and news stories, hundreds of books and documentaries, and endless podcast discussions ongoing, without a single person (that I know of) fired or killed for telling us all what they know about these programs?
Another tell in the film about the lack of actual photographic or video evidence of said alien spaceships (aside from the half dozen UAP videos that have been recycled endlessly for years—TicTac, Go-Fast, Gimbal, etc.) is the inclusion of artistic representations of hovering spaceships over U.S. military bases. If there are any photographs, videos, or security camera footage of any kind available—as surely there must be if these events happened as reported—they were not included.
Example: Vandenberg Air Force base, where Elon Musk’s SpaceX launches its rockets, appears to be a hotbed of alien surveillance. A former employee there says that there are over 60 cameras that record everything that ever happens during a rocket launch. And yet, mysteriously, on October 14, 2003, there was an “incursion” in which there was “a red square object hovering in the air above the launch pad at low altitude, making no noise, it had no obvious signs of propulsion, and it was just hovering silently. It was a security breach of the area. (…) It was massive. The size of a football field, almost rectangular in shape, it was just floating there, no propulsion system, no windows. It was flat black. Then it shot off thousands of miles an hour up the coast.”
Surely the filmmakers managed to wrangle from SpaceX or the base commanders at Vandenberg actual footage of this sighting? Nope. As usual we are left with our (and an artist’s) imagination.
The film wraps up with speculations about how, exactly, these UAPs manage to pull off such feats of propulsion and maneuverability, going into full science fiction mode with the pantheon of experts speculating about space-warping bubbles in which spaceships can zoom off in an instant because space itself is being warped, so the craft doesn’t need to move through normal space (or ocean). Puthoff: “So time moves differently for people inside the bubble versus people outside the bubble. (…) This could be the key to interstellar travel.” Hopefully Elon and his SpaceX engineers are taking notes.
On this matter I am reminded of comedian Mitch Hedberg’s riff on why photos of Bigfoot are blurry: “It’s not the photographers, it’s the subject. I think Bigfoot is just fuzzy. You know, I think there’s a large, out-of-focus monster roaming the countryside. Run, he’s fuzzy!”
In UAP circles, life imitates art. Radar signals, we are told in the film by Eric Davis, cannot detect UAPs “because the signal just moves around the bubble and doesn’t reflect back to the radar operator.” Here is Hal Puthoff in full Hedberg mode: “This explains why people who take a photo of a UAP get a fuzzy and distorted picture because they’re actually taking a photo through a spacetime barrier.”
Once you convince yourself that this is all real, it is natural to ask, “what is their energy source?” Continuing in full science fiction fantasy, Eric Davis calculates that “UAP performance implies the use of 1,100 billion watts of power. This is 100 times the daily electrical utility power generated in the U.S.” Where do the aliens find such energy? “Vacuum energy. Zero-point energy. Quantum entanglement.” The film ends with speculation that when disclosure of this technology comes online it will solve all our energy demands and replace oil, natural gas, and coal.
This is all very entertaining. Who doesn’t love science fiction? But The Age of Disclosure claims to be science fact. The evidence for it remains as elusive as it ever was, as I explained in my $1000 bet on the Long Now Foundation’s Long Bets site that “Discovery or disclosure of alien visitation to Earth in the form of UFOs, UAPs, or any other technological artifact or alien biological form, as confirmed by major scientific institutions, will not happen by December 31, 2030.”
Since posting this, Harvard astronomer Avi Loeb has accepted the bet and we each donated $500 to the Long Now Foundation, the proceeds of the winnings to go to the Galileo Project. I am reasonably confident I will win, but I am hoping to lose because I agree with the experts in The Age of Disclosure that this would indeed be the greatest discovery in the history of humanity.
The attempt on Donald Trump’s life in Butler, Pennsylvania, remains one of the most consequential security failures in recent political history. It deserves, and still lacks, a full public accounting. For months, legitimate questions have lingered about the background of the gunman, Thomas Matthew Crooks, and the lapses that allowed a 20-year-old with a rifle to reach an unsecured rooftop less than 200 yards from a former President and then current front-runner for the nation’s top job.
But the leap from unanswered questions to sweeping conspiratorial conclusions is a chasm worth avoiding. In recent days Tucker Carlson has encouraged precisely that leap. Rather than pressing for serious transparency, he has mixed factual gaps with political suspicion to construct a theory of concealed motives and hidden hands. The public deserves better than that, and so does the pursuit of truth.
Let’s start with what remains troubling. Federal investigators initially described Crooks as a quiet, socially isolated young man with a limited online presence. Yet Carlson, in a video posted on X on Friday, November 14, showcased material he claimed came from Crooks’ Google Drive and from social media accounts on YouTube, Snapchat, Quora, and Venmo. The content, he contended, suggested a trajectory of threats and firearms practice inconsistent with the FBI’s portrait.
The FBI has not publicly explained why these accounts were not part of its early description of Crooks’ digital activity. The haste to cremate the shooter and scrub his apartment, the rapid disappearance of his online postings, and the absence of a detailed biographical narrative have only fueled suspicion about the thoroughness of the FBI’s investigation. Americans can reasonably ask how a major assassination attempt generated to date so little public information about the perpetrator.
As I document in my 1993 book Case Closed, after Lee Harvey Oswald shot President John F. Kennedy, the FBI and the CIA quickly compiled a detailed account of Oswald’s life, in some cases documenting what he was up to by the day, hour, and even minute in the months and even years leading up to the assassination. All of that was available to the public a year after the assassination, when the Warren Report was published. And yet, over a year after Crooks’ attempted assassination of Donald Trump—and murder of Corey Comperatore, a volunteer firefighter and former fire chief who was in the audience—we know next to nothing about this shooter. How did he get on the roof of the adjacent building without anyone noticing? Why did no one in the Secret Service respond to the numerous verbal warnings by spectators at the rally (which can be heard on cell phone footage) that they saw a man with a rifle on the roof? And despite apparently not seeing Crooks on the roof, how did the Secret Service shoot and kill him within seconds of his opening fire on Trump?
Those questions merit full answers. A democratic society should not have to rely on private individuals to surface essential details about an attack on a national political figure. If intelligence agencies do their job, the country should not need to rely on podcasters for accurate and relevant information about important national events.
But Carlson’s speculation overshoots the available facts. His error is not in raising questions but in constructing a sprawling narrative of deliberate concealment. He suggests the FBI suppressed Crooks’ online footprint and implies a broader conspiracy behind the attack. His confidence in the authenticity of the accounts he identified is not investigative rigor; it is assumption presented as certainty. Even if Carlson’s files are authentic, nothing yet proves the FBI saw them and chose to hide them. It is plausible that Carlson’s source identified material investigators had not verified or did not view as conclusive.
The FBI’s Rapid Response account stated last week that the agency never claimed Crooks had “no online footprint.” FBI Director Kash Patel has emphasized the scope of the inquiry: more than 1,000 interviews; thousands of public tips; data from 13 digital devices; nearly half a million files reviewed; and financial activity across 10 accounts analyzed. Patel maintains investigators found no evidence Crooks worked with anyone or shared his intent.
This does not close the matter. Federal agencies have a long history of releasing information too slowly and too narrowly. But it also does not substantiate Carlson’s suggestion of a suppressed plot or a rogue bureau determined to hide the truth.
The deeper issue is this: By framing unanswered questions as proof of a coordinated deep-state conspiracy, Carlson undermines the very process required to get real answers. He transforms factual uncertainty into political advantage. This style of commentary turns national tragedies into narrative battlegrounds, where ambiguity becomes opportunity.
Prepackaged conspiracy narratives corrode the public’s ability to assess facts when they ultimately emerge. The Kennedy assassination offers a reminder: early opacity, mixed with political distrust, created a vacuum that conspiracy theories quickly filled. The result is an event still debated six decades later, long after credible evidence should have settled the matter.
Something similar is now taking shape. Gaps in public information about Crooks have fostered speculation. By framing those gaps as evidence of intent in some nebulous deep state plot, Carlson makes it harder for legitimate investigators—in Congress, in the press, and within federal agencies—to do their work without being accused of participating in a cover-up the moment an answer proves incomplete.
Americans deserve a clearer record of Crooks’ ideology, his online activity, and his movements before the shooting. Congress should press for more information about the security breakdowns that allowed the attack. The FBI should release as much documentation as possible. Transparency by the FBI is the only way to reassure the public that its conclusions rest on verified evidence, not institutional defensiveness or a coverup for an inadequate investigation.
Transparency does not require accepting Carlson’s conclusions. It requires accepting that the public has a right to know more than it does today and insisting that institutions meet that obligation.
Carlson is right about one thing: the story of Thomas Matthew Crooks is incomplete. But incompleteness is not proof of conspiracy. It is proof that work remains to be done. The path to clarity is careful inquiry, not sensational extrapolation. If the goal is truth rather than clicks, the method matters as much as the questions.
The Butler attack demands answers. It does not demand a conspiracy theory.
I am currently in Dubai at the Future Forum conference, and later today I am on a panel about the future of the mind with two other neuroscientists. I expect the conversation to be dynamic, but here is the core of what I want to say.
As I have been covering here over the years in bits and pieces, there seem to be several technologies converging on at least one critical component of research into consciousness and sentience. The first is the ability to image the functioning of the brain, in addition to the anatomy, in real time. We have functional MRI scanning, PET, and EEG mapping, which enable us to see cerebral blood flow, metabolism, and electrical activity. This allows researchers to ask questions such as: which parts of the brain light up when a subject is experiencing something or performing a specific task? The data is relatively low resolution (compared to the neuronal level of activity) and noisy, but we can pull meaningful patterns from this data to build our models of how the brain works.
The second technology which is having a significant impact on neuroscience research is computer technology, including but not limited to AI. All the technologies I listed above are dependent on computing, and as the software improves, so does the resulting imaging. AI is now also helping us make sense of the noisy data. But the computing technology flows in the other direction as well – we can use our knowledge of the brain to help us design computer circuits, whether in neural networks or even just virtually in software. This creates a feedback loop whereby we use computers to understand the brain, and the resulting neuroscience to build better computers.
The third technology is the brain machine interface (BMI). This allows biological brains to talk to computer software, and through that to robotic prosthetics and any other application that can be run digitally. So far it seems like our brains are happy to accept input from software and can learn to control robotic limbs. A robotic hand, for example, can have sensory feedback in addition to motor control, and this closes the loop in the brain so that the user feels as if they own and operate the robotic limb, more like their original biological limb.
All these technologies together, but especially the first two, are building toward a final goal (among many) of creating a human connectome – a map of all the circuits in the human brain at a functional level of resolution. Along the way there have been some interesting milestones. Back in 2011 researchers built the first computer model of a mouse cortical column, a complete circuit in the brain. Since then this kind of research has taken off.
There are several things happening at once: Researchers are modeling brains in some combination of hardware and software, and sometimes in “wetware” that mimics how neurons function. They can also create circuits that combine silicon and living neurons, which can function and learn. Further, they can map brain circuits virtually to see how they behave, learn, and function. They are also using our knowledge of how the brain and neurons work to design computers and AI that are perhaps more efficient and powerful.
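For a flavor of what modeling a circuit “virtually” can mean at its very simplest, here is a toy sketch of two leaky integrate-and-fire neurons coupled into a loop. This is my own illustrative example with arbitrary parameters, not the methodology of any project mentioned above; real cortical-column models involve vastly more detailed neuron physiology.

```python
def simulate_lif(steps=200, dt=1.0, tau=20.0, threshold=1.0, weight=0.8, drive=0.08):
    """Toy leaky integrate-and-fire pair: neuron 0 receives constant external
    drive, and each neuron's spike excites the other on the next time step."""
    v = [0.0, 0.0]          # membrane potentials
    spikes = [[], []]       # spike times per neuron
    incoming = [0.0, 0.0]   # synaptic input arriving this step
    for t in range(steps):
        nxt = [0.0, 0.0]
        for i in range(2):
            external = drive if i == 0 else 0.0
            # leaky integration: decay toward rest, plus external and synaptic input
            v[i] += dt * (-v[i] / tau) + external + incoming[i]
            if v[i] >= threshold:        # fire and reset
                spikes[i].append(t)
                v[i] = 0.0
                nxt[1 - i] += weight     # excite the partner next step
        incoming = nxt
    return spikes

spikes = simulate_lif()
# Neuron 0 fires from its drive; neuron 1 fires only because of the coupling,
# so its first spike always follows neuron 0's.
print(len(spikes[0]), len(spikes[1]))
```

Even this crude two-neuron loop exhibits the point being made: circuit-level behavior (here, a driven oscillation propagating through a synapse) emerges from nothing but the wiring and the update rule, which is exactly the premise behind scaling such models up toward a connectome.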
I see no reason why all of these technologies, working together, will not eventually achieve, through nothing but incremental advances, a complete model of a human brain, either virtually in software, or in hardware, or in some combination. This will enable us to answer (sort of) the ultimate question about the human mind: is the mind an emergent property of brain circuits functioning in real time? If so, then a virtual or silicon brain should be conscious.
Of course, we will not know if the virtual brain experiences its own existence, only that it acts as if it does. We may create an AI p-zombie – a philosophical zombie that acts sentient but does not experience its own existence (no qualia, as the philosophers say). But at this point I think we will be obligated to treat a virtual brain as if it is a sentient being, since it will be indistinguishable from one.
But even if we set this question aside, we will be able to model human brain function and measure its behavior and output. This would give us an amazing research tool. We could endlessly alter the circuits to see what happens. We could model psychiatric conditions, like schizophrenia. We could find out how different circuits behave and interact to create the human mind in all its aspects.
Back to BMI – we will also be able to network all of this with biological human brains. We will be able to merge with our AI, to extend our brain capacity with silicon. Will this work? Every indication so far says that it will. I imagine it can function like a third hemisphere. Each hemisphere of our brain is capable of generating independent consciousness – each hemisphere is you. They also contribute their unique function. But they are so robustly connected and networked together that they function like one mind (to our subjective experience).
So – a third silicon hemisphere, robustly connected to the two biological ones, should also function as part of a single mind, just one with expanded capacity and function. Imagine living much of your life with such a computer extension. It would become part of you – it would become you. If it were powerful enough, you may not even notice when the biological hemispheres are damaged or die – unless they are still needed to interface with your body. But if that could be duplicated as well, to create redundant connections from your silicon brain to all the brain’s inputs and outputs, then you would not notice.
But even if we cannot do that last part, your consciousness would continue, perhaps with little change. It could theoretically be placed in a virtual environment, or in an android, or (as in the series Altered Carbon) in another biological body.
These last applications are for the far future – but creating an entire human brain in some combination of hardware, wetware, and software will likely happen sometime this century. Perhaps it will run on a quantum computer, or some advanced neural network that models human neurons as closely as possible. Either way, it is the ultimate extension of the current paradigm of neuroscience – the mind is what the brain does; it is an emergent property of brain function. Silicon or virtual brains should demonstrate the same emergent behavior.
The post The Future of the Mind first appeared on NeuroLogica Blog.