Autism’s Cult of Redemption: My Adventure Searching for Help for My Son’s Autism Diagnosis in the World of Alternative Medicine & Anti-Vaxxers

Skeptic.com feed - Thu, 02/08/2024 - 12:00am

A pediatric neurologist at Boston Children’s Hospital diagnosed my son, Misha, with autism spectrum disorder at age three. At Massachusetts General Hospital, another pediatric neurologist answered my call for a second opinion only to rebuff my hope for a different one. “I did not find him to be very receptive to testing,” the expert sighed. Both neurologists observed that Misha didn’t respond to their request to identify colors, body parts, or animals, that he averted his eyes from theirs, that he pawed their examination table when he didn’t flap his arms. Autism, the doctors said, constituted a lifelong condition. Medical science didn’t understand its causes or cures, and scarcely comprehended the limits of its woes.

How could the neurologists deduce such a bleak judgment from 90 minutes in the bell jar of their examination rooms? If they knew so little about autism, then how could they gavel down a life sentence? I remembered reading somewhere that a properly trained neurologist ought to be able to argue both for and against any single diagnosis in a stepwise process of elimination. I opened the Diagnostic and Statistical Manual of Mental Disorders (DSM), leafed to the entry under autism, and plucked out of its basket several inculpating symptoms. Aggrieved, I sought out the Handbook of Differential Diagnosis, a companion volume, and underlined an admonitory passage: “Clinicians typically decide on the diagnosis within the first five minutes of meeting the patient and then spend the rest of the time during their evaluation interpreting (and often misinterpreting) elicited information through this diagnostic bias.” Now what?

As an educated citizen of progressive Cambridge, Massachusetts, I consumed large volumes of such second-hand, semi-digested information. I felt that I should, and believed that I could, develop my own, independent judgment about Misha’s condition. I would do my own research, and I would draw my own conclusions based on what I learned.

These virtues turned out to be constituent features of my error. My skepticism and sense of responsibility blended with my stubbornness as I struggled to evaluate a welter of “holistic” attitudes about medicine and health. Several fixed ideas confronted me. Autism, I read, is neither the psychopathology listed in the DSM nor the organic twist of disease supposed by neurologists. Autism, these alternative sources explained, is one among an epidemic of preventable chronic illnesses that American children contract from toxins in the environment. Holistic therapy, according to another, contains the requisite resources. Vitamin therapy, homeopathy, and antifungal treatment could heal children like Misha of their injuries.

The claim that autism is a treatable, toxin-induced chronic illness is a half-century old. Its history forms a pattern of culture and credulity imprinted on our own time. Today, indeed, as one in every 36 children receives the diagnosis, and as controversies swirl around COVID-19, more people than ever turn to holistic remedies to treat illnesses real and imagined. Homeopathic remedies fly off the shelves at pharmacies, alongside an array of alleged immunity-boosting, anti-inflammatory vitamins and herbal supplements.

Critics view the vogue for holism as the product of an irrational transaction between charlatans and suckers. As I reflect on my experience with Misha in the grassroots of autism agonistes, however, I find the issues don’t divide so tidily. The question isn’t whom to trust or what to believe, but how to make an existential choice between incommensurable propositions.

A family friend introduced me to Mary Coyle, a homeopath at the Real Child Center in New York. Coyle said Misha had likely contracted autism from contaminants in the environment. Was I aware of the epidemic of chronic illnesses afflicting children like him? Some of them, Coyle explained, received diagnoses of asthma, chronic fatigue, or dermatitis. Others were diagnosed with fibromyalgia, Lyme disease, or PANDAS (Pediatric Autoimmune Neuropsychiatric Disorder Associated with Streptococcal Infections). Pathogens lying at the nexus between the body and the environment fooled medical specialists at places like Boston Children’s Hospital and Massachusetts General Hospital. Coyle urged me to abandon their dead-end query, “Is your child on the autism spectrum?” To help Misha, I needed to switch the predicate and envisage a different question: “How toxic is your child?”

Why not find out? Although I had never heard of homeopathy or Coyle’s sub-specialty of homotoxicology, I believed that with some study I could probably draw the necessary distinction between evidence and interpretation in the test results. Coyle herself had been trained by conventional physicians before seeking out propaedeutic instruction in holistic medicine. Holism sounded nice.

We started out with an “Energetic Assessment.” Measuring Misha’s rates of “galvanic skin response,” Coyle said, would weigh the balance of electrical vibrations conducted through his pores. Toward this end, she deployed an electrodermal screening device that deciphered imbalances in his “meridians,” or “pathways.” Toxic metals, alas, appeared from the results to be obstructing his “flow” of energy.

With Coyle’s theory confirmed, she referred me to Lawrence Caprio to canvass for food and environmental allergens. Caprio, like Coyle, had defected from conventional to alternative medicine. I learned that while attending medical school at the University of Rome he had befriended a homeopath in the Italian countryside and lived “a very natural lifestyle”; the experience led him to pursue naturopathy.

Misha—Caprio now reported—turned out to be “intolerant” of bread, butter, eggplant, oatmeal, peanuts, potatoes, and tomatoes. Misha also displayed a “sensitivity” to bananas, car exhaust, cheese, chlorine, chocolate, cow milk, dust mites, garlic, onions, oranges, soy beans, and strawberries. Caprio flagged “phenolics” such as malvin (in corn sweeteners) and piperin (in nightshade vegetables and animal proteins).

Next, I mailed urine and stool samples to the Great Plains Laboratory in Kansas. The director there, William Shaw, had worked as a researcher in biochemistry, endocrinology, and immunology at the Centers for Disease Control before he quit and set up his own laboratory. Shaw suspected lithium in “the bottled water craze” and fluoridation in the public water supply as just two of the causes of autism. He came to believe that government scientists woefully misunderstood such sources. He compared their dereliction to the Red Cross’s failure to intervene in the Holocaust. Shaw also found toxic levels of yeast flooding Misha’s intestines.

Homeopathy, naturopathy, and renegade biochemistry cast me outside the institutions of science where Misha’s neurologists practiced. But to grasp how these new realms might be objective correlates of Misha’s condition—and how toxins, foods, and yeast might be culprits—I had only to remind myself of the progressive demonology that made the diagnosis seem plausible.

Industrial corporations have been chewing up the land, choking the air, and despoiling the water, I read, turning the whole country into a hazardous materials zone. I’d read Silent Spring, in which ecologist Rachel Carson claimed that our bodies weren’t shields, but permeable organisms that absorbed particulates. I’d heard Ralph Nader liken air and water pollution to “domestic chemical and biological warfare.” I’d finished Bill McKibben’s The End of Nature with the requisite dread. Listening to progressive news media about “forever chemicals” evoked moods that swung between indignation and paranoia. I paid for eco-friendly cribs, de-leaded the windows in our apartment, and tried to shop organic.

As Coyle, Caprio, and Shaw whispered in my ear, though, my imagination boggled with an even greater catalogue of possible pathogens. Our food contained more pesticides, hormones, and insecticides than I had suspected. Our air was filled with methanol and carbon monoxide. Chlorine, herbicides, and parasites degraded our tap water. Mold festered in our walls, floors, and ceilings. Formaldehyde lurked in our furniture. Heavy metals hid in our lotions, shampoos, and antiperspirants. Synthetic chemical compounds—polychlorinated biphenyls, phthalates, bisphenol A, polybrominated diphenyl ethers—seeped into our toys, diapers, bottles, soaps, and appliances. Even our Wi-Fi, cell phones, refrigerator, light bulbs, and microwave oven emitted radiation through electromagnetic fields.

Had the dystopia of the contemporary world poisoned my son? Coyle, Caprio, and Shaw not only defined autism as a preventable, “biomedical” illness, they traced the mechanism of harm to his pediatrician’s office.

Misha had received three-in-one vaccines against diphtheria, tetanus, and pertussis (DTP) and measles, mumps, and rubella (MMR) according to the recommended schedule. The holistic experts now told me that these vaccines contain dangerous metals, including mercury and aluminum. The vaccines' live viruses, I read, could have spread from Misha's arm to his gut and persisted long enough to perforate an intestinal wall. Mercury, a neurotoxin, could have leaked into his bloodstream and surreptitiously addled his brain. Or his pediatrician could have set off a chain reaction that had the same effect. The antibiotics she gave him for petty infections could have reduced the diversity of natural flora that controlled yeast in his gastrointestinal tract. An overabundance of yeast could have generated enzymes that perforated his intestines even if live-virus vaccines had not done so directly.

Either way, undigested food molecules such as gluten (in wheat) and casein (in dairy) could have joined forces with environmental toxins and heavy metals and attached to Misha’s opiate receptors, disrupting his neurotransmitters and triggering allergic reactions. The ballooning inflammation would have thwarted his immune responses. If so, then his “toxic load” could be starving his cells of nutrients. Escalating levels of “oxidative stress” could be congesting his metabolism. No wonder he lacked muscle tone, coordination, and balance!

How could I dismiss their diagnosis of “autism enterocolitis,” AKA “leaky gut”? My liberal education prized open-mindedness, after all. In 1998, a midlevel British lab researcher named Andrew Wakefield published a study warranting the diagnosis in The Lancet, one of the world’s most prestigious medical journals. Wakefield’s paper, it turned out, “entered his profession’s annals of shame as among the most unethical, dishonest, and damaging medical research to be unmasked in living memory,” according to Brian Deer’s The Doctor Who Fooled the World.

In the meantime, both liberal and conservative politicians echoed the implications of Wakefield’s hoax. “The science right now is inconclusive,” Barack Obama said in 2008. Thousands of media outlets around the world reported a controversy between two legitimate sides. “Fears raised over preservatives in vaccines,” a front-page headline in the Boston Globe announced. Wakefield appeared on television with articulate parents by his side. “You have to listen to the story the parents tell,” he said on CBS’s 60 Minutes. Reputable television programs did just that. ABC’s Nightline, Good Morning America, and 20/20, NBC’s Dateline, and The Oprah Winfrey Show broadcast the gravamen of the indictment out of the mouths of well-educated parents.

The accusation against antibiotics resonated with misgivings that I already held about the dispensations of American medicine. Doctors in the United States order more excessive diagnostic tests, perform more needless caesarean sections, and prescribe more superfluous antibiotics than their counterparts around the world. An overweening dependence on technology encourages American medicine to treat symptoms rather than people. From this indubitable truth, Coyle, Caprio, and Shaw drew an uncommon inference: that aggressive medical care had sabotaged Misha’s birthright immunity.

Misha, so endowed, could have repaired the damage done, no matter whether vaccines or antibiotics had upset his “primary pathways.” His body would have availed itself of “secondary pathways” such as his skin and mucous membrane. Coyle said his innate capacity for adaptation had been telegraphing itself in his fevers, his eczema, his ear infections, even his runny noses. Yet his pediatrician had stood blind before the hidden meaning of these irruptions. Reaching into her chamber of magic bullets, she prescribed steroid creams for his eczema, acetaminophen for his headaches, amoxicillin for his ear and sinus infections, antihistamines for his coughs and runny noses, and ibuprofen for his fevers. This “Whac-a-Mole mentality,” Coyle despaired, had plugged his “secondary pathways” as well.

A vicious cycle set in. Vaccines and/or antibiotics had predisposed Misha’s microbiome to harbor viruses, bacteria, and fungi. Turning toxic, they invaded his cells, tissues, and fluids. The foreign occupation precipitated allergies. The allergies provoked inflammation, which arrested metabolic energy, which led to anemia, which invited recurring infections. His pediatrician perpetuated those with cascading doses of foreign chemicals. “Rather than freak out and take medication and look to suppress,” Coyle counseled, “we should celebrate that the body is working and go and look at the primary pathways and clear out the blockages.” Up to 103 degrees Fahrenheit, “the fever might be a good thing.”

If I could accept that “allopathic” medicine did not stand apart and speak objectively, but instead reflected the sickness of American society, then the trio of virtuoso healers would help me sidestep the adulterated dialectic of science and health. A holistic treatment protocol would charm Misha’s autism out of its chronic condition and turn it into a treatable medical illness. “The body’s infinite wisdom,” Coyle said, “would take care of the rest.” As the protocol purged and flushed his toxins, the fawn of nature would close the holes in his intestines. His allergies would ebb, reducing inflammation, reviving cellular respiration, and reconnecting his neurotransmitters. The realignment of his meridians would reflow his energy. “Once you clear,” Caprio said, “the whole thing just changes dramatically.”

* * *

Autism parents first embraced holistic treatments in the 1960s and 1970s, when emphatic personal testimonials, printed and distributed in underground newsletters, led to the formation of grassroots groups such as Defeat Autism Now! (DAN!) and ushered in the “leaky gut” theory. DAN! grew out of the psychologist Bernard Rimland’s Autism Research Institute. Rimland’s 1964 book Infantile Autism blew up the prevailing psychogenic thesis of autism’s origins, which blamed mothers for failing to love their children enough.

The Today Show and The Dick Cavett Show had given psychologist Bruno Bettelheim, the chief exponent of the “refrigerator mothers” thesis, free rein to liken these mothers to concentration camp guards. Rimland’s Infantile Autism refuted that thesis. Letters poured into his Autism Research Institute from grateful parents attesting to the efficacy of the holistic approach: vitamin therapy, detoxification, and elimination dieting. Pharmaceutical companies rolled out new childhood vaccines for measles (1963), mumps (1967), and rubella (1969) and combined the immunizations against pertussis, diphtheria, and tetanus into one injection. Rimland began distributing an annual survey that queried parents about the effects.

Belief in an etiology variously called “leaky gut,” “autism enterocolitis,” or “toxic psychosis” awkwardly amalgamated elements from both ancient and modern medical philosophy. The old idea of disease as a sign of disharmony with nature queued behind the modern concept of infection through the invasion of microorganisms. But no theory of etiology needs to be complete for a treatment to work. “Help the child first,” Rimland urged, “worry later about exactly what it is that’s helping the child.”

Like anti-psychiatry activists, breast cancer patients, and AIDS activists, autism parents confronted physicians with the backlash doctrine of “consumer choice” in specialist medical care. “The parent who reads this book should assume that their family doctor, or even their neurologist or other specialist, may not know nearly as much as they do about autism,” William Shaw wrote in Biological Treatments for Autism.

The first television program to elevate parental intuitions, Vaccine Roulette, aired in 1982 on an NBC affiliate in Washington, DC. The show promoted the vaccine injury theory—and won an Emmy Award. Accelerating rates of the diagnosis over the next decades brought the injury theory from a simmer to a boil. In the 1960s, an average of one out of every 2,500 children received the diagnosis. By the first decade of the 21st century, the prevalence rose to one out of every 88, an increase of over 2,500 percent. Up to three-quarters of autism parents used some form of holistic treatment on their children.

A Congressional hearing in 2012 featured their cause, heaping suspicion on vaccines, speculating on gut flora, and praising the efficacy of vitamins, homeopathy, and elimination dieting. Dennis Kucinich, a Democrat from Ohio and one-time Presidential candidate, expressed outrage over the spectacle of “children all over the country turning up with autism.” Kucinich blamed “neurotoxic chemicals in the environment,” particularly emissions from coal-burning power plants. Like the autism parents in attendance at the hearing, Kucinich did his own research and drew his own conclusions.

“There’s no such thing as ‘conventional’ or ‘alternative’ or ‘complementary’ or ‘integrative’ or ‘holistic’ medicine,” alternative medicine skeptic Paul Offit complained the next year. “There’s only medicine that works and medicine that doesn’t.” Clever and concise, Offit’s polemic nonetheless begged the relevant questions. Who decides what works? Fundamental science is one thing; therapeutic interventions are quite another. “Evidence-based medicine,” introduced in 1991, supplies a template of criteria to translate medical science into clinical medicine. Atop its hierarchy sits the “randomized controlled trial,” a methodology loaded with social and financial biases. Even when a therapy works incontrovertibly, that fact doesn’t free its applications of ambiguity. Antibiotics work. We’ve known that since the 1930s. But which of their benefits are worth which of their costs?

When does an accumulation of confirmed research equal a consensus of reasonable certainty? In 1992, ABC’s 20/20 exposed a cluster of autism cases in Leominster, Massachusetts. A sunglasses manufacturer had long treated the city as a dumping ground for its chemical waste. After the company shuttered, a group of mothers counted 43 autistic children born to parents who had worked at the plant or resided near it. Commenting on the Leominster case, the eminently sane neurologist Oliver Sacks voiced a curious sentiment. “The question of whether autism can be caused by exposure to toxic agents has yet to be fully studied,” Sacks wrote, three years after epidemiologists from the Massachusetts Department of Public Health determined that no unusual cluster of cases had existed in that city in the first place. Who gets to decide the meaning of “fully studied”?

Bernard Rimland and the autism parents in his movement answered the question for themselves. “There are thousands of children who have recovered from autism as a result of the biomedical interventions pioneered by the innovative scientists and physicians in the DAN! movement,” Rimland insisted in the group’s 2005 treatment manual, Autism: Effective Biomedical Treatments.

William Shaw and Mary Coyle, both DAN! clinicians, adapted Rimland’s manual for Misha. Coyle vouched personally for the safety and efficacy of the holistic treatment therein. She swore she used it to “recover” her own son.

Interdicting toxins marked the first step on the “healing journey.” Taking it obliged me to decline Misha’s pneumococcal conjugate vaccine (for pneumonia) and his varicella vaccine (for chickenpox). Meanwhile, I eliminated from our cupboard and refrigerator the foods for which Caprio had proved Misha sensitive and intolerant, and I prepared a course of “optimal dose sub-lingual immunotherapy” to “de-sensitize” him. Coyle drew up a monthly schedule to detoxify him with homeopathic remedies from a manufacturer in Belgium. Shaw itemized vitamins and minerals to supplement Misha’s intake of nutrients, plus probiotics and antifungals to control his yeast and rehabilitate his intestinal tract. My kitchen turned into an ersatz pharmacy of unguents, powders, drops, and tablets.

Every morning, I inserted two tablets of a Chinese herbal supplement, Huang Lian Su, into an apple. This would crank-start his digestion. I added half a capsule of methylfolate into his breakfast. This would juice his metabolism. Ten minutes after he finished breakfast, I stirred Nystatin powder into warm coconut water, drew two ounces into a dropper, irrigated his mouth, and ensured that he abstained from eating or drinking for ten more minutes. Fifteen minutes before his midday snack, I squeezed six drops of a B12 vitamin under his tongue. Every evening, I slipped him two more Huang Lian Su tablets.

To fortify his glucose levels, I could elect to give him two vials of raisin water every other hour. To normalize his alkaline levels, I added a quarter-cup of baking soda to his baths. The “de-sensitizing drops,” however, had to be dribbled onto his wrists twice every day. Misha also needed regular, carefully calibrated doses of boron, chromium, folic acid, glutathione, iodine, magnesium, manganese, milk thistle, selenium, zinc, and vitamins A, C, D, and E.

Homotoxicology, the core modality, entailed his daily ingestion of homeopathic “drainage remedies” to purge toxins and open pathways. The bottles arrived in the mail. Coyle provided a table of equivalencies, linking particular remedies to organs: this compound for his small intestine, that one for his large intestine, this one for his kidneys, and that one for his mucous membranes.

At the same time, homeopathy’s whole-body scope of intervention claimed to relieve a wide range of illnesses. Shaw and his colleagues said the modality could treat autism, plus sensory integration disorder, central auditory processing disorder, speech and language problems, fine motor and gross motor problems, oppositional defiance disorder, obsessive compulsive disorder, eating disorders, headaches, eczema, and irritable bowel syndrome. The marketing materials that accompanied Misha’s compounds claimed that they could treat bloating, constipation, cramps, flatulence, nausea, night sweats, and sneezing.

I learned the shorthand rationale as part of my self-education. Homeopaths stake their claim on a manufacturing process that distinguishes their remedies from pharmaceutical medicaments. It’s called “succussion”: the remedy is serially diluted and vigorously shaken at each step. A label that reads “4X,” for example, indicates that the original ingredient has been diluted four successive times by a factor of 10, for a total dilution of 1 in 10,000. “12X” indicates a total dilution of 1 in one trillion.
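
The arithmetic behind these potencies is easy to check. The following back-of-the-envelope sketch assumes, purely for illustration, that the mother tincture starts with one mole of the original ingredient (the starting quantity and the function name are my own assumptions, not anything claimed on a label):

```python
# Back-of-the-envelope check on homeopathic "X" potencies.
# Illustrative assumption: the starting tincture holds one mole
# of the original ingredient (~6.022e23 molecules).

AVOGADRO = 6.022e23  # molecules per mole

def molecules_remaining(x_potency: int, starting_moles: float = 1.0) -> float:
    """Expected molecules left after an NX potency (dilution factor 10**N)."""
    return AVOGADRO * starting_moles / 10 ** x_potency

print(f"4X:  diluted 1 in {10**4:,}; ~{molecules_remaining(4):.1e} molecules left")
print(f"12X: diluted 1 in {10**12:,}; ~{molecules_remaining(12):.1e} molecules left")
print(f"24X: diluted 1 in {10**24:,}; ~{molecules_remaining(24):.1e} molecules left")
```

Under this assumption, even a 12X remedy retains only traces of the original ingredient, and somewhere past roughly 24X the expected count falls below a single molecule.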

The compounds prescribed for Misha said they contained asparagus, bark, boldo leaf, goldenrod, goldenseal, horsetail, juniper, marigold, milk thistle, parsley, passionflower, Scottish pine root, and other herbs and plants of which I’d never heard. Having been succussed, though, the remedies actually contained no active ingredients. In the bottles remained “the mother tincture,” a special kind of water said to “remember” the original ingredient. The only other ingredient listed on the label was an organic compound that served as a solvent and preservative. Some of Misha’s remedies were 31 percent ethanol, a concentration approaching that of vodka or gin. Coyle instructed me to “gas off the alcohol” on the stove before serving him.

Succussion confused me. Misha’s reaction worried me. He looked a fright. Black circles ringed his eyelids. Yeast blanketed his nostrils and lips. Rashes and red spots appeared all over his body. Pale and lethargic, he oscillated between diarrhea and constipation. He broke out with recurring fevers. He stopped gaining weight. Because he didn’t speak, or reliably communicate in any other manner, I couldn’t understand why his emotions seemed to be running at an unusually high pitch.

Coyle explained that different glands and organs in the body stored specific feelings. The kidneys stored fear. The pancreas stored frustration. The thyroid stored misunderstanding, the liver anger, the lungs grief, the bladder a sense of loss, and so forth. Those emotions poured out as his body excreted toxins. I shouldn’t regard the worsening of his symptoms as a side effect, but rather as a necessary condition of his recovery—“aggravations,” in homeopathy’s parlance. A Table of Homotoxicosis charted the correspondences with the precision and predictability of biochemistry. Nor should I abandon the treatment. To do so would be to “re-toxify” him. I must allow the treatment to run its full course. I must keep my nerve.

* * *

I lost my nerve. It took 18 months of gnawing doubt and thousands of dollars out the door. Then one day I swept all the vitamins, antigens, probiotics, antifungals, and homeopathic remedies into the trash bin. I restored Misha to a regular diet, caught him up on his vaccines, and demanded (and received) a full refund from Coyle.

I had blundered into a non sequitur. The environment is toxic. Conventional medicine does reflect the sickness of our culture. Yet that doesn’t render holism any better. The supplement industry, I came to understand, has pumped hundreds of millions of dollars into thousands of clinical studies without demonstrating that vitamins, herbal products, or mineral compounds are either safe or effective, much less necessary. The Food & Drug Administration (FDA) neither tests the industry’s marketing claims nor regulates its product standards.

Caprio and Coyle regarded Traditional Chinese Medicine (TCM) as a reproach to modern, Western medicine. TCM, they pointed out, is 5,000 years old. Actually, I learned, Chairman Mao Zedong contrived TCM after 1950 as a means of controlling China’s rural population and burnishing the regime’s reputation abroad. In 1972, during Richard Nixon’s tour of Chinese hospitals, his guides stage-managed a demonstration of TCM’s miracles. American media reported the healing event at face value and launched the holistic health movement stateside. Several years later, the FDA sought to regulate the vitamin and supplement industry. Manufacturers fought back with a marketing campaign centered on “freedom of choice” and convinced Americans to stand up for their right not to know which ingredients may (or may not) be contained in their daily vitamins.

I needed to file a public records request with the Connecticut Department of Public Health to discover that Lawrence Caprio had been censured and fined for improperly labeling medication, for practicing without a license, and for passing himself off as a medical doctor. I also learned that Caprio’s naturopathy license had been suspended for two years after the FDA determined his bogus “sensitivity tests” violated its regulations. Misha, an actual immunologist confirmed, had no food allergies in the first place.

Was my son ever really burdened by toxins? Coyle said the results of the “energetic assessments” revealed that Misha carried quantities of heavy metals. Degrees of dangerousness were measured against a standard range credited to “Dr. Richard L. Cowden.” I sent Misha’s results to Cowden. I stated my belated impression that meaningful ranges for heavy metals don’t exist—we all have traces—and my belief that autism cannot be reversed. “I have reversed advanced autism in many children,” Dr. Cowden snapped. “I saw reversal of more than a dozen cases of full-blown autism, including my own grandson. So I am pretty sure the parents of those dozen+ children would debate you on your IMPRESSION/BELIEF.”

Cowden advised me to repeat Misha’s energetic assessment through the Internet and to place him into an “infrared sauna” to detoxify him. I declined.

Even before Misha’s first energetic assessment, the FDA had accused the device’s manufacturer of making unapproved claims. The FDA had approved it only for measuring “galvanic skin response.” But the company’s marketing materials had crossed over into unapproved diagnostic and predictive territory when they claimed that the “software indicates what is referred to as Biological Preference and Biological Aversion.” The software was recalled. “Dr. Cowden,” I also learned too late, was not the “Board Certified cardiologist and internist” that he advertises. He surrendered his medical license in 2008 after the Texas Board of Medical Examiners twice reprimanded him for endangering his patients. According to the American Board of Internal Medicine, Cowden’s certifications are “inactive.”

The “homotoxicology” that Coyle practiced had sounded to me like a branch of toxicology. But the two fields turn out to have nothing in common. An analysis of clinical trials of homotoxicology established that it is “not a method based on accepted scientific principles or biological plausibility.” Actual toxicologists pass a rigorous examination for their board certifications and adhere to a code of ethics. Homotoxicologists become so simply by declaring themselves homotoxicologists.

As for vitamins, supplements, and homeopathic remedies: an exception in federal law places them outside the FDA’s approval process. Only their manufacturers know what these dummy drugs contain. Last year, after fielding numerous reports of “toxic” reactions, finding “many serious violations” of manufacturing controls, and recording “significant harm” to children, the FDA warned the consuming public.

Homeopathy offers no detectable mechanism of action, nor any reason to believe that “aggravating” the primary symptoms of an illness is necessary to cure it. Water does not “remember,” at least not if the laws of molecular physics hold true. The tinier the dosage, homeopaths insist, the more potent the therapeutic effect the mother tincture will deliver. By this logic, a patient who misses a day might die of an overdose.

As I steered Misha back toward medical science, though, I remembered the gap that holism fills for parents like me. I took him to a “neuro-biologist,” a “neuro-psychologist,” and a “neuro-immunologist.” His “neuro-ophthalmologist” ordered an MRI. His “neuro-radiologist” read the images with algorithms—and pronounced his brain “normal” due to the absence of indications of damage.

That determination proved only the vacuity of scientific materialism. The “biological revolution” that seized psychiatry in the 1980s aspired to network the anatomical, electrical, and chemical functions of the brain. A procession of neuroimaging technologies held out the promise of progress: electroencephalography (EEG); computerized axial tomography (CAT); positron emission tomography (PET); magnetic resonance spectroscopy (MRS); magnetic resonance imaging (MRI). The resulting studies have always fallen pitifully short of a credible evidentiary threshold and have never done anything to expand treatment options. Mainly, neuroimaging has furnished opportunities to market the research industry, a breakthrough culture that has never broken through.

Holism, by contrast, answers prayers in the immaterial world, bidding to restore harmony through an aesthetically elegant fusion of mind, body, and spirit. As Coyle explained on her website: “Homotoxicology utilizes complex homeopathic remedies designed to restore the child’s vital force and balance the biological flow system.”

One part of me still craves holism’s beautiful notions. Another part recognizes in their desiccated spiritualism the return of a repressed pagan unconscious. I can no more believe in goblets of magic water and occult energy than I can conceal my disappointment with “neuro-radiology.”

This article appeared in Skeptic magazine 28.4

Scientists long ago dispatched the “leaky gut” theory with a series of disproofs. Holistic parents, researchers, and clinicians, however, continue to reject what they contend are the false revelations of cold, mechanical instrumentalism. Tylenol, electromagnetic fields, “toxic baby food,” COVID-19 vaccines, HPV inoculation, “geo-engineering,” and genetically modified foods top the current indictment. William Shaw published a paper in 2020 purporting to demonstrate “rapid complete recovery from autism” through antifungal therapy. Mary Coyle attested last year to having healed her son’s chickenpox through “natural” remedies.

Many of the holistic advocacy organizations intermittently lost access to social media platforms during COVID. Yet censorship has deepened the martyrdom ingrained in this theodicy of misfortune. A spiritual war against invisible enemies animates their imaginations and elevates their personal disappointment to the status of a historical event. Rebaptized in nature’s holy immunity by ascetic protocols of abstinence and purification, they turn over a new leaf, as it were, and crave vindication above all else. “This book offers you two messages,” Bernard Rimland promised of the testimonials that he collected in Recovering Autistic Children: “You are not alone in your fight, and you can win.”

Here’s another message: Children need love and respect above all. As René Dubos wrote in Mirage of Health, “As far as life is concerned, there is no such thing as ‘Nature.’ There are only homes.”

About the Author

John Summers is a writer, historian, and Editor-in-Chief of Lingua Franca Media, Inc., an independent research institute in Cambridge, MA. He received his PhD in American history from the University of Rochester. For a decade, he taught at Harvard University, Boston College, and Columbia University. After leaving academia, he edited The Baffler magazine for five years. He is a father of a boy with autism.

Categories: Critical Thinking, Skeptic

Weaponized Pedantry and Reverse Gish Gallop

neurologicablog Feed - Tue, 02/06/2024 - 4:45am

Have you ever been in a discussion where the person with whom you disagree dismisses your position because you got some tiny detail wrong or didn’t know the tiny detail? This is a common debating technique. For example, opponents of gun safety regulations will often use the relative ignorance of proponents regarding gun culture and technical details about guns to argue that they therefore don’t know what they are talking about and their position is invalid. But, at the same time, GMO opponents will often base their arguments on a misunderstanding of the science of genetics and genetic engineering.

Dismissing an argument because of an irrelevant detail is a form of informal logical fallacy. Someone can be mistaken about a detail while still being correct about a more general conclusion. You don’t have to understand the physics of the photoelectric effect to conclude that solar power is a useful form of green energy.

There are also some details that are not irrelevant, but may not change an ultimate conclusion. If someone thinks that industrial release of CO2 is driving climate change, but does not understand the scientific literature on climate sensitivity, that doesn’t make them wrong. Understanding climate sensitivity is important to the climate change debate; it just happens to align with what proponents of anthropogenic global warming are concluding. In this case you need to understand what climate sensitivity is, and what the science says about it, in order to understand and counter some common arguments deniers use to argue against the science of climate change.

What these few examples show is a general feature of the informal logical fallacies – they are context dependent. Just because you can frame someone’s position as a logical fallacy does not make their argument wrong (thinking this is the case is the fallacy fallacy). What logical fallacy is using details to dismiss the bigger picture? I have heard this referred to as a “Reverse Gish Gallop.” I don’t use this term because I don’t think it captures the essence of the fallacy. I have used the term “weaponized pedantry” before and I think that is better.

It’s OK to be a little pedantic if the purpose is to be precise and accurate. That is consistent with good science and good scholarship. But such pedantry must be fair and in context. This requires a fair assessment of the implications of the detail. It is good to get the details right for their own sake, but some details don’t matter to a particular argument or position. There are a couple of ways to weaponize pedantry, using it not to advocate for genuinely good scholarship but as a hit job against a position you don’t like.

One way is to simply be biased in your search for and exposure of small mistakes. If you are only looking for them on one side or in one direction of an argument, then that is not good scholarship. It’s searching for ammunition to use as a weapon. The other method is to imply, or sometimes even explicitly state, that an error in a detail calls into question or even invalidates the bigger picture, even when it doesn’t. Sometimes this could just be a non sequitur argument – you made a mistake in describing the uranium cycle, therefore your opinion on nuclear power is not correct. And sometimes this can be an ad hominem fallacy – you don’t know the difference between a clip and a magazine so you are not allowed to have an opinion on gun safety.

Given this complexity, what is a good approach to pedantry about details and accuracy? First, I will reiterate my position that having a discussion or even an “argument” should not be about winning. Winning is for debate club and the courtroom. Having a discussion should be about understanding the other person’s position, understanding your own position better, understanding the topic better, and coming to as much common ground as possible. This means identifying the factual claims and resolving any differences, hopefully with reliable sources. Then you need to examine the logic of every claim and statement, including your own, to see if it is valid. You may also need to identify any value judgements that are subjective, or any areas where the facts are unknown or ambiguous.

With this approach, knowledge of logical fallacies is a good way to police your own arguments and thinking on a topic, and a good way to resolve differences and come to common ground. But if wielded as a rhetorical weapon, you are almost certain to commit the fallacy fallacy, including weaponized pedantry.

Specifically with reference to this fallacy – you need to ask the question, does this detail affect the larger claim? It may be entirely irrelevant, or it may be a tiny tweak, or it may be truly critical to the claim. If someone falsely thinks that Monsanto sued farmers solely for accidental contamination, that is not a tiny detail – that is core to one anti-GMO argument. Try to be as fair and neutral as possible in making that call, and then be honest about it (to yourself and anyone else involved in the discussion).

It’s OK to be that person who says, “Well, actually.” It’s OK to get the details right for the sake of getting the details right. We all should have a dedication to accuracy and precision. But it’s very easy to disguise biased advocacy as dedication to accuracy when it isn’t.

The post Weaponized Pedantry and Reverse Gish Gallop first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #922: Testing Alcubierre's Warp Drive

Skeptoid Feed - Tue, 02/06/2024 - 2:00am

Proponents of alien visitation often claim the Alcubierre drive makes faster-than-light travel possible. Here's why it can't exist.

Categories: Critical Thinking, Skeptic

Seth Stephens-Davidowitz — What Determines Who Succeeds in the NBA?

Skeptic.com feed - Tue, 02/06/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss403_Seth_Stephens-Davidowitz_2024_02_06.mp3 Download MP3

Former Google data scientist and bestselling author of Everybody Lies Seth Stephens-Davidowitz turns his analytic skills to the NBA.

Seth Stephens-Davidowitz is a contributing op-ed writer for the New York Times, a lecturer at The Wharton School, and a former Google data scientist. He received a BA from Stanford and a PhD from Harvard. He is the author of Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are and Don’t Trust Your Gut: Using Data to Get What You Really Want in Life.

Shermer and Stephens-Davidowitz discuss:

  • how he used AI to help write this book
  • players systematically undervalued in the draft
  • Are clutch shooters born or made?
  • the percent of 7-footers in the NBA
  • why tall NBA players are worse athletes than short NBA players
  • the greatest NBA players adjusted for height
  • names as proxies for success (or not)
  • why some countries produce so many more NBA players than others
  • who would be the best NBA player of all time if every player were the same height
  • What percent genetic is basketball talent? And how does this compare to other sports?
  • What advantages do NBA player fathers pass on to their sons?
  • How much do NBA coaches matter and what do they do?
  • Will any team win 11 NBA titles like Bill Russell’s Celtics did?
  • why no one hits .400 in baseball any more
  • Six sigma in sports and life
  • nature/nurture in sports and life
  • In a population of 8 billion today compared to centuries past, where are all the Mozarts, Beethovens, Da Vincis, Newtons, Darwins, etc.?
  • the Moneyball revolution in sports
  • how to apply the moneyball system in life
  • What makes people happy?
  • How much do good looks matter?
  • How much does height and competent faces influence elections?

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

Did They Find Amelia Earhart’s Plane

neurologicablog Feed - Mon, 02/05/2024 - 4:25am

Is this sonar image, taken at 16,000 feet below the surface about 100 miles from Howland Island, that of a downed Lockheed Model 10-E Electra? Tony Romeo hopes it is. He spent $9 million to purchase an underwater drone, the HUGIN 6000, then hired a crew and scoured 5,200 square miles in a 100-day search hoping to find exactly that. He was looking, of course, for the lost plane of Amelia Earhart. Has he found it? Let’s explore how we answer that question.

First, some quick background: most people know Amelia Earhart was a famous (and much beloved) early female pilot, the first woman to fly solo across the Atlantic. She was engaged in a mission to be the first female pilot (accompanied by her navigator, Fred Noonan) to circumnavigate the globe. She started off in Oakland, California, flying east. She made it all the way to Papua New Guinea. From there her plan was to fly to Howland Island, then Honolulu, and back to Oakland. So she had three legs of her journey left. However, she never made it to Howland Island. This is a small island in the middle of the Pacific Ocean, and navigating to it is an extreme challenge. The last communication from Earhart was that she was running low on fuel.

That was the last anyone heard from her. The primary assumption has always been that she never found Howland Island, her plane ran out of fuel, and it crashed into the ocean. This happened in 1937. But people love mysteries, and there has been endless speculation about what may have happened to her. Did she go off course and arrive at the Marshall Islands, 1,000 miles away? Was she captured by the Japanese (remember, this was right before WWII)? Every now and then a tidbit of suggestive evidence crops up, but it always evaporates on close inspection. It’s all just wishful thinking and anomaly hunting.

There have also been serious attempts to find her plane. However, assuming she was off course, and that’s why they never made it to their target, there could potentially be a huge area of the Pacific Ocean where her plane ended up. Romeo’s effort is the latest to look for her plane, and his approach was entirely reasonable – sonar scan the bottom of the ocean around Howland Island. He and his crew did this starting in September 2023. After the scanning mission was over, while going through the images, they found the image you can see above. Is this Earhart’s plane?

There are three possibilities to consider. One is that the image is not that of a plane at all, but just a random geological formation or something else. Remember that Romeo and his team pored over tons of data looking for a plane-like image. It’s not all that surprising that they found something. This could just be an example of the Face on Mars or the Martian Bigfoot – if you look at enough images looking for stuff you will find it.

The second possibility is that the sonar image is that of a plane, just not Earhart’s Lockheed Electra. There are lots of known missing aircraft. But more importantly perhaps, how many unknown missing aircraft are there? How many planes were lost during WWII and unaccounted for? There could be private unregistered planes, even drug smugglers. And of course, the third possibility is that this is Amelia Earhart’s plane. How can we know?

First, we can make some inferences from the information we have. Is the image that of a plane? I think this is a coin toss. It is reasonably symmetrical, with things that could be wings, a fuselage, and a tail. But again, it’s just a fuzzy image. It could just be a ledge and a rock. Neither outcome would shock me.

If it is a plane, could this be Earhart’s plane? The one data point that is in favor of this conclusion is the location – 100 miles off Howland Island. That is within the scope of where we would expect to find her plane. But there are two big things going against it being the Lockheed Electra. First, the Electra had straight wings, while, if this is a plane, the wings appear to be swept back. If this image is accurate, then the answer is no. But it is possible that the plane was damaged by the crash. Perhaps the wings broke and were pushed back by the fall through the water.

Also, the Lockheed Electra was a twin engine plane, with one large engine on each wing. They are not apparent in this image, and they should be. So we also have to speculate that the engines were lost in the process of the plane crashing and sinking, or that the image is too distorted to see them.

As you can see, speculation from the existing evidence is pretty thin. We need more data. What we have with the sonar image is not confirmatory evidence, just a clue that needs follow up. We need better images, hopefully with sufficient detail to provide forensic evidence. This will require a deep sea mission with lights and cameras, like the kind used to explore the wreckage of the Titanic. With such images it should be easy to tell if this is a Lockheed Electra. If it is, then it is almost certainly Earhart’s plane. But also, we may be able to read the registration numbers on the side of the plane, and that would be definitive.

Romeo is in the process of planning a follow up mission to investigate this sonar image. Unless and until this happens, we will not be able to say with any confidence if this is or is not Earhart’s plane.

The post Did They Find Amelia Earhart’s Plane first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #969 - Feb 3 2024

Skeptics Guide to the Universe Feed - Sat, 02/03/2024 - 8:00am
Interview with Justin Bates of Starset; News Items: Neuralink Implant, Love on the Brain, Amelia Earhart Plane Evidence, Hiding Sickness, Cicada Double Brood; Who's That Noisy; Your Questions and E-mails: Moon Timeline, Long Acting Insulin; Science or Fiction
Categories: Skeptic

Jessica Schleider — How to Build Meaningful Moments that Can Transform Your Mental Health

Skeptic.com feed - Sat, 02/03/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss402_Jessica_Schleider_2024_02_03.mp3 Download MP3

If you’ve ever wanted mental health support but haven’t been able to get it, you are not alone.

In fact, you’re part of the more than 50% of adults and more than 75% of young people worldwide with unmet psychological needs. Maybe you’ve faced months-long waiting lists, or you’re not sure if your problems are ‘bad enough’ to merit treatment? Maybe you tried therapy but stopped due to costs or time constraints? Perhaps you just don’t know where to start looking? The fact is, there are infinite reasons why mental health treatment is hard to get. There’s an urgent need for new ideas and pathways to help people heal.

Little Treatments, Big Effects integrates cutting-edge psychological science, lived experience narratives and practical self-help activities to introduce a new type of therapeutic experience to audiences worldwide: single-session interventions. Its chapters unpack why systemic change in mental healthcare is necessary; the science behind how single-session interventions make it possible; how others have created ‘meaningful moments’ in their recovery journeys (and how you can, too); and how single-session interventions could transform the mental healthcare system into one that’s accessible to all.

Jessica L. Schleider, Ph.D. is an American psychologist, author, and an associate professor of Medical Social Sciences at Northwestern University. She is the lab director of the Lab for Scalable Mental Health. She completed her PhD in Clinical Psychology at Harvard University and her Doctoral Internship in Clinical and Community Psychology at Yale School of Medicine. She has received numerous scientific awards for her work in this area and her work is frequently featured in major media outlets (Wall Street Journal, The Atlantic, Washington Post). In 2020, she was selected as one of Forbes Magazine’s ‘30 Under 30’ in Healthcare. She has developed six evidence-based, single-session mental health programmes, which have served more than 40,000 people to date. She is the author of The Growth Mindset Workbook for Teens and co-editor of the Oxford Guide to Brief and Low Intensity Interventions for Children and Young People. Her new book is Little Treatments, Big Effects: How to Build Meaningful Moments That Can Transform Your Mental Health.

Shermer and Schleider discuss:

  • her own experience with mental illness in an eating disorder
  • 80% of people meet criteria for a mental illness at some point in their life
  • What is the goal of therapy?
  • navigating therapy modalities, access, payments, insurance, etc
  • What prevents people from getting the mental health help they need?
  • a brief history of asylums, institutions, deinstitutionalization and othering of mental healthcare
  • disease model of mental illness
  • What are outcome measures to test different therapies? “Works”?
  • traditional therapy vs. single-session interventions
  • growth mindset: personality, academic performance and personal outcomes can be changed if we treat setbacks as opportunities to grow and improve
  • Cognitive Behavior Therapy (CBT)
  • You don’t have to feel ready for recovery to take steps towards it.
  • mental health issues that can be addressed through single session interventions: eating disorders, anxiety disorders, depression, suicidality, ADHD, substance/alcohol use disorder, OCD, self-injury
  • difference between goals and values (wellness/health, family, compassion/helping, wisdom/education, relationships/kinship, joy/pleasure, spirituality/religion, perseverance, independence, community)
  • action brings change.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

How To Prove Prevention Works

neurologicablog Feed - Fri, 02/02/2024 - 4:55am

Homer: Not a bear in sight. The Bear Patrol must be working like a charm.
Lisa: That’s specious reasoning, Dad.
Homer: Thank you, dear.
Lisa: By your logic I could claim that this rock keeps tigers away.
Homer: Oh, how does it work?
Lisa: It doesn’t work.
Homer: Uh-huh.
Lisa: It’s just a stupid rock.
Homer: Uh-huh.
Lisa: But I don’t see any tigers around, do you?
[Homer thinks of this, then pulls out some money]
Homer: Lisa, I want to buy your rock.
[Lisa refuses at first, then takes the exchange]

 

This memorable exchange from The Simpsons is one of the reasons the fictional character, Lisa Simpson, is a bit of a skeptical icon. From time to time on the show she does a decent job of defending science and reason, even toting a copy of “Jr. Skeptic” magazine (which was fictional at the time, but was later created as a companion to Skeptic magazine).

What the exchange highlights is that it can be difficult to demonstrate (let alone “prove”) that a preventive measure has worked. This is because we cannot know for sure what the alternate history or counterfactual would have been. If I take a measure to prevent contracting COVID and then I don’t get COVID, did the measure work, or was I not going to get COVID anyway? Historically the time this happened on a big scale was Y2K – a computer glitch set to go off when the year changed to 2000. Most computer code encoded the year as only two digits, assuming the first two digits were 19, so 1995 was encoded as 95. So when the year changed to 2000, computers around the world would think it was 1900 and chaos would ensue. Between $300 billion and $500 billion were spent worldwide to fix this bug by upgrading millions of lines of code to a four-digit year stamp.
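The mechanics of the bug are easy to sketch. The toy function below is my own illustration (not actual legacy code) of how date arithmetic that silently assumes a "19" prefix goes haywire at the rollover:

```python
# Sketch of the Y2K bug: years stored as two digits, "19xx" assumed.
def years_elapsed_two_digit(start_yy: int, end_yy: int) -> int:
    """Elapsed-time calculation as many legacy systems did it."""
    return (1900 + end_yy) - (1900 + start_yy)

# An account opened in 1995, checked in 1999: works fine.
print(years_elapsed_two_digit(95, 99))   # 4

# The same account checked in 2000, stored as "00": the system
# computes 1900 - 1995 and reports -95 years elapsed.
print(years_elapsed_two_digit(95, 0))    # -95
```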

Did it work? Well, the predicted disasters did not happen, so from that perspective it did. But we can’t know for sure what would have happened if we did not fix the code. This has led to speculation and even criticism about wasting all that time and money fixing a non-problem. There is good reason to think that the preventive measures worked, however.

At the other end of the spectrum, often doomsday cults, predicting that the world will end in some way on a specific date, have to deal with the day after. One strategy is to say that the faith of the group prevented doomsday (the tiger-rock strategy). They can now celebrate and start recruiting to prevent the next doomsday.

The question is – how do we know when our preventive efforts have been successful or if they were not needed. In either scenario above you can use the absence of anything bad happening as both evidence that the problem was fake all along, or that the preventive measures worked. The absence of disaster fits both narratives. The problem can get very complicated. When preventive measures are taken and negative outcomes happen anyway, can we argue that it would have been worse? Did the school closures during COVID prevent any deaths? What would have happened if we tried to keep schools open? The absence of a definitive answer means that anyone can use the history to justify their ideological narrative.

How do we determine if a preventive measure works? There are several valid methods, mostly involving statistics. There is no definitive proof (you can’t run history back again to see what happens), but you can show convincing correlation. Ideally the correlation will be repeatable with at least some control of confounding variables. For public health measures, for example, we can compare data from either a time or a place without the preventive measures to those with the preventive measures. This can vary by state, province, country, region, demographic population, or over historic time. In each country where the measles vaccine is rolled out, for example, there is an immediate sharp decline in the incidence of measles. And if vaccine compliance decreases there is a rise in measles. If this happens often enough, the statistical data can be incredibly robust.

This relates to a commonly invoked (but often misunderstood) logical fallacy, the confusion of correlation with causation. Often people will say “correlation does not equal causation.” This is true but can be misleading. Correlation is not necessarily due to a specific causation, but it can be. Overapplying this principle is a way to dismiss correlational data as useless – but it isn’t. The way scientists use correlation is to look for multiple correlations and triangulate to the one causation that is consistent with all of them. Smoking correlates with an increased risk of lung cancer. But duration and intensity also correlate, as does filtered vs. unfiltered, and quitting correlates with reduced risk over time back to baseline. There are multiple correlations that only make sense in total if smoking causes lung cancer. Interestingly, the tobacco industry argued for decades that this data does not prove smoking causes cancer, because it was just correlation.

Another potential line of evidence is simulations. We cannot rerun history, but we can simulate it to some degree. Our ability to do so is growing fast, as computers get more powerful and AI technology advances. So we can run the counterfactual and ask, what would have happened if we had not taken a specific measure. But of course, these conclusions are only as good as the simulations themselves, which are only as good as our models. Are we accounting for all variables? This, of course, is at the center of the global climate change debate. We can test our models both against historical data (would they have predicted what has already happened) and future data (did they predict what happened after the prediction). It turns out, the climate models have been very accurate, and are getting more precise. So we should probably pay attention to what they say is likely to happen with future release of greenhouse gases.

But I predict that if by some miracle we are able to prevent the worst of climate change through a massive effort of decarbonizing our industry, future deniers will argue that climate change was a hoax all along, because it didn’t happen. It will be Y2K all over again but on a more massive scale. That’s a problem I am willing to have, however.

Another way to evaluate claims for prevention is plausibility. The tiger-rock example that Lisa gives is brilliant for two reasons. First, the rock is clearly “just a stupid rock” that she randomly picked up off the ground. Second, there is no reason to think that there are any tigers anywhere near where they are. For any prevention claim, the empirical data from correlation or simulations has to be put into the context of plausibility. Is there a clear mechanism? The lower the plausibility (or prior probability, in statistical terms) then the greater the need for empirical evidence to show probable causation.
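The relationship between prior plausibility and the evidence required is just Bayes’ theorem. A quick illustration (with invented numbers, purely for demonstration) shows the same piece of evidence moving a plausible claim to near-certainty while barely budging a tiger-rock claim:

```python
# How prior plausibility shapes what evidence can establish (Bayes' rule).
def posterior(prior: float, p_ev_if_true: float, p_ev_if_false: float) -> float:
    """P(claim | evidence) given a prior and likelihoods for the evidence."""
    numerator = p_ev_if_true * prior
    return numerator / (numerator + p_ev_if_false * (1 - prior))

# The same evidence -- ten times likelier if the claim is true...
like_true, like_false = 0.5, 0.05

# ...applied to a mechanistically plausible claim (prior 0.5)
# versus a tiger-rock claim (prior one in a million):
print(posterior(0.5, like_true, like_false))   # ~0.91
print(posterior(1e-6, like_true, like_false))  # ~0.00001
```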

For Y2K, there was a clear and fully understood mechanism at play. They could also easily simulate what would happen, and computer systems did crash. For global climate change, there is a fairly mature science with thousands of papers published over decades. We have a pretty good handle on the greenhouse effect. We don’t know everything (we never do) and there are error-bars on our knowledge (climate sensitivity, for example) but we also don’t know nothing. Carbon dioxide does trap heat, and more CO2 in the atmosphere does increase the equilibrium point of the total heat in the Earth system. There is no serious debate about this, only about the precise relationship. Regarding smoking, we have a lot of basic science data showing how the carcinogens in tobacco smoke can cause cancer, so it’s no surprise that it does.

But if the putative mechanism is magic, then a simple unidirectional correlation would not be terribly convincing, and certainly not the absence of a single historical event.

Of course there are many complicated examples about which sincere experts can disagree, but it is good to at least understand the relevant logic.

The post How To Prove Prevention Works first appeared on NeuroLogica Blog.

Categories: Skeptic

Institutional Partners for Our New Show

Skeptoid Feed - Fri, 02/02/2024 - 2:00am

Skeptoid is looking for institutional partners and/or title sponsors for a proposed video series.

Categories: Critical Thinking, Skeptic

Some Future Tech Possibilities

neurologicablog Feed - Thu, 02/01/2024 - 5:10am

It’s difficult to pick winners and losers in the future tech game. In reality you just have to see what happens when you try out a new technology in the real world with actual people. Many technologies that look good on paper run into logistical problems, difficulty scaling, fall victim to economics, or discover that people just don’t like using the tech. Meanwhile, surprise hits become indispensable or can transform the way we live our lives.

Here are a few technologies from recent news that may or may not be part of our future.

Recharging Roads

Imagine recharging your electric vehicle wirelessly just by driving over a road. Sounds great, but is it practical and scalable? Detroit is running an experiment to help find out. On a 400-meter stretch of downtown road they installed induction cables under the ground and connected them to the city grid. EVs that have the $1,000 device attached to their battery can charge up while driving over this stretch of road.

The technology itself is proven, and is already common for recharging smartphones. It’s inductive charging, using a magnetic field to induce a current which recharges a battery. Is this a practical approach to range anxiety? Right now this technology costs $2 million per mile. Having any significant infrastructure of these roads would be incredibly costly, and it’s not clear the benefit is worth it. How much are they going to charge the EV? What is the efficiency? Will drivers fork out $1000 for minimal benefit?

I think this approach has a low probability of working. Where I think there might be a role, however, is in long stretches of interstate highway. This will still be an expensive option, but a 100-mile stretch of highway, for example, fit with these coils would cost $200 million. Hopefully with mass production and advances the cost will come down, so maybe it will be only $100 million. That is not a bank breaker for a Federal infrastructure project. This could significantly extend the range of EVs on long trips along such highways. Busy corridors, like I-95, could potentially benefit. You could also put the coils under parking spaces at rest stations.

Will this be better and more efficient than just plugging in? Probably not. I give this a low probability, but it’s possible there may be some limited applications.

 

The Virtual Office

I like VR, and still use it for occasional gaming. I don’t use an app just because it’s VR, but some VR games and apps are great. The technology, however, is not yet fully mature. Companies have tried to promote a virtual office in the past. Again it looks good on paper. Imagine having your office be a virtual space that you can configure any way you want, with everything you need to do right in front of you.

But these efforts all failed, because people simply don’t like wearing heavy goggles on their face for hours at a time. I get this – I can only play VR games for so long at once, then I need to stop. It can be exhausting (that is actually a feature for me, not a bug, to get off my chair, and at least stand up and move around). But for an 8 hour work day – no way.

Ideas that look good on paper often don’t die completely; they keep coming back. In this case, I think we will need to keep taking a look at this technology as it evolves. A recent spate of companies is doing just that, trying again for the virtual office. Now they are calling it “extended reality,” or XR, which combines augmented reality and virtual reality. There are some real advantages: training is more effective in XR than either in person or online, remote meetings are more cost effective than in-person ones, and it allows people to work more effectively from home, which also has potentially huge efficiency gains.

Still I think this is essentially a hardware problem. The goggles are still bulky and tiring. The experience is still limited by motion sickness. At some point, however, we will get to a critical point where the hardware is good enough for regular extended use, and then adoption may explode.

Apple is coming out with their long-awaited entry: the Vision Pro is being released tomorrow, Feb 2. It still looks pretty bulky, but it does look like a solid incremental advance. I would like the opportunity to test it out. If this does not turn out to be the killer tech, I think it’s inevitable that we will get there eventually.

 

AI Generated News Anchors

We have been talking about this for years now – when will AI generated characters get good enough to replace actors completely? Now we are starting to see AI generated news anchors. That makes sense, and is likely much easier than an AI character in a dramatic role in a movie. A TV anchor is often just a talking head (while on camera – I’m not saying they are not also sometimes serious journalists). But this way you completely separate the journalism from the good looking talking head part of TV news. The journalism is all done behind the scenes, and the attractive anchor is AI generated.

All they have to do is read the text, with a fairly narrow range of emotional expression. It’s actually perfect, if you think about it. I predict this will rapidly become a thing. Probably the biggest limiting factor is going to be protests, contracts, and other legal stuff. But the tech itself is ready, and perhaps perfectly suited to this application.

 

Those are just a few things in tech news that caught my attention this week. This will be a fun post to look back on in a few years to see how I did.

The post Some Future Tech Possibilities first appeared on NeuroLogica Blog.

Categories: Skeptic

Your Microbiome & Your Health: Prebiotics and Postbiotics — The Good, the Bad, and the Bugly

Skeptic.com feed - Thu, 02/01/2024 - 12:00am

The human colon may represent the most biodense ecosystem in the world. Though many may believe that our stool is primarily made up of undigested food, about 75 percent is pure bacteria—trillions and trillions, in fact, about half a trillion bacteria per teaspoon.

Do we get anything from these trillions of tenants taking up residence in our colon, or are they just squatting? They pay rent by boosting our immune system, making vitamins for us, improving our digestion, and balancing our hormones. We house and feed them, and they maintain and protect their house, our body. Prebiotics are what feed good bacteria. Probiotics are the good bacteria themselves. And postbiotics are what our bacteria make.

Our gut bacteria are known as a “forgotten organ,” as metabolically active as our liver and weighing as much as one of our kidneys. They may control as many as one in ten metabolites in our bloodstream. Each one of us has about 23,000 genes, but our gut bacteria, collectively, have about three million. About half of the cells in our body are not human. We are, in effect, a superorganism, a kind of “human-microbe hybrid.”

Having coevolved with us and our ancestors for millions of years, the relationship we have with our gut flora is so tightly knit as to affect most of our physiological functions. Yet our microbiome is probably the most adaptable component of our body. Gut bugs like Escherichia coli (E. coli) can divide every twenty minutes. The more than ten trillion bugs we churn out every day can therefore rapidly respond to changing life conditions. Every meal, we have the opportunity to nudge them in the right direction.

Thousands of years ago, Hippocrates is credited with saying that all diseases begin in the gut or, more ominously, that “death sits in the bowels.” Of course, he also thought women were hysterical because of their “wandering uterus.” (“Hysteria” comes from the Greek husterikos for “of the womb.”) So much for ancient medical wisdom. The pendulum then swung to the point of incredulity when the medical community refused to accept the role of one gut bug, Helicobacter pylori, as the cause of stomach and intestinal ulcers. Out of frustration, one of the pioneers chugged a brew of the bugs from one of his ulcer patients to prove the point, before finally being vindicated with the Nobel Prize in 2005 for his discovery.

In some ways, the pendulum has swung back, with overstated causal claims about the microbiome’s role in a wide range of disparate diseases that are casually bandied about. Perhaps the boldest such claim dates back more than a century to Élie Metchnikoff, who argued that senility and the disabilities of old age were caused by “putrefactive bacterial autotoxins” leaking from the colon. He was the first to emphasize the importance of the gut microbiome to aging. He attributed healthy aging to gut bacteria that fermented carbohydrates into beneficial metabolic end products like lactic acid and associated unhealthy aging with putrefaction, the process in which bacteria degrade protein into noxious metabolites as waste products.

There is no shortage throughout history of old-timey crackpots with quack medical theories, but Metchnikoff was no slouch. He was appointed Louis Pasteur’s successor, coined the terms “gerontology” and “probiotics,” and won the Nobel Prize in medicine to become the founding “father of cellular immunology.” More than a century later, some aspects of his theories on aging and the gut are now being vindicated.

Young at Gut

Full-term, vaginally delivered, breastfed babies are said to start out with the gold standard for a healthy microbiome, which then starts to diverge as we age. The microbiomes of children, adults, the elderly, and centenarians tend to cluster together, such that a “microbiomic clock” can be devised. Dozens of different classes of bacteria in our gut so reliably shift as we age that our age can be guessed based on a stool sample within about a six-year margin of error. If these changes turn out to play a causal role in the aging process, then, hypothetically, our future high-tech toilet may one day be able to predict our lifespan as well.

The transition from adulthood into old age is accompanied by pronounced changes to the microbiome. Given large interpersonal differences, there is no “typical” microbiome of the elderly, but the trends are in the very direction Metchnikoff described: a shift from the fermentation of fiber to the putrefaction of protein. This deviation from good bugs to bad is accompanied by an increase in gut leakiness, the spillage of bacterial toxins into the bloodstream, and a cascade of inflammatory effects. This has led to the proposal that this microbiome shift is a “primary cause of aging-associated pathologies and consequent premature death of elderly people.”

The most important role a healthy microbiome has for preserving health as we age is thought to be the prevention of systemic inflammation.

As profound as the change in microbiome composition is from early adulthood into old age, there’s an even bigger divergence between the elderly and centenarians. When researchers analyzed centenarian poop, they found a maintenance of short-chain fatty acid production from fiber fermentation. For example, in the Bama County longevity region in the Guangxi province of China, fecal sample analyses found that centenarians were churning out more than twice as much butyrate as those in their eighties or nineties living in the same region. Butyrate is an anti-inflammatory short-chain fatty acid critical for the maintenance of gut barrier integrity. At the same time, there were significantly fewer products of putrefaction, such as ammonia and uremic toxins like p-cresol. The researchers concluded that an increase of dietary fiber intake may therefore be a path toward longevity. An abundance of fiber feeders also distinguished healthy individuals ninety years and older from unhealthy nonagenarians.

Centenarian Scat

Interestingly, the microbiomes of Chinese centenarians shared some common features with Italian centenarians, suggesting that there could be certain universal signatures of a longevity-promoting microbiome. For example, centenarians have up to about a fifteenfold increase in butyrate producers.

A study of dozens of semi-supercentenarians (those aged 105 to 109) found higher levels of health-associated bacteria, such as Bifidobacteria and Akkermansia. In vaginally delivered, breastfed infants, Bifidobacteria make up 90 percent of colon bacteria, but the level may slip down to less than five percent in adult colons and even less in the elderly and those with inflammatory bowel disease. But centenarians carry more of the good bacteria in their gut.

Bifidobacteria are often used as probiotics, but anti-aging properties may exist in their postbiotics. Bifidobacteria are one of the many bacteria that secrete “exopolysaccharides,” a science-y word for slime. That’s what dental plaque is—the biofilm created by bacteria on our teeth. Exopolysaccharides produced from a strain of Bifidobacteria isolated from centenarian poop were found to have anti-aging properties in mice, reducing the accumulation of age pigment in their brains and boosting the antioxidant capacity of their blood and livers.

Akkermansia muciniphila is named after the late Dutch microbiologist Antoon Akkermans and from Latin and Greek for “mucus-lover.” The species is the dominant colonizer of the protective mucus layer in our gut that is secreted by our intestinal lining. Unfortunately, that mucus layer thins as we age, a problem exacerbated by low-fiber diets. When we eat a fiber-depleted diet, we starve our microbial selves. Our famished flora, the microbes in our gut, have to then compete for limited resources and may consume our own mucus barrier as an alternative energy source, thereby undermining our defenses. Mucus erosion from bacterial overgrazing can be switched on and off on a day-to-day basis in mice transplanted with human microbiomes on fiber-rich and fiber-free diets. You can even show it in a Petri dish. Researchers successfully recreated layers of human intestinal cells and showed that dripping fiber (from plantains and broccoli) onto the cells at dietary doses could “markedly reduce” the number of E. coli bacteria breaching the barrier. Aside from our eating fiber-rich foods, A. muciniphila helps to directly restore the protective layer by stimulating mucus secretion.

A. muciniphila is a likely candidate for a healthy aging biomarker, as its abundance is enriched in centenarians and it is particularly scarce in elders suffering from frailty. A comparative study was undertaken of the microbiomes of people in their seventies and eighties experiencing “healthy” versus “non-healthy” aging, defined as the absence or presence of cancer, diabetes, or heart, lung, or brain disease. Akkermansia, the species most associated with healthier aging, was three times more abundant in the fecal samples of the healthy versus non-healthy aging cohort. Among centenarians, a drop in A. muciniphila is one of the microbiome changes that seems to occur about seven months before death, despite no apparent changes in physical status, food intake, or appetite at the time. To prove a causal role in aging, researchers showed that feeding A. muciniphila to aging-accelerated mice significantly extended their lifespans.

Cause, Consequence, or Confounding

A recurring recommendation from centenarian poop studies is the promotion of high-fiber diets, one of the most consistently cited pieces of lifestyle advice in general for extreme longevity and health. An alternative proposal is a fecal transplant, from a cocktail of centenarian stool. Both approaches assume a cause-and-effect relationship between fiber-fueled feces and long lives, but there remains much controversy over whether age-related microbiome changes are cause, consequence, or confounding.

Aging is accompanied by dysbiosis, an unhealthy imbalance of gut flora characterized by a loss of fiber-fed species. Rather than a changing microbiome contributing to the aging process, it’s easier to imagine how aging could instead be contributing to a changing microbiome. Loss of taste, smell, and teeth with age could lead to decreased consumption of fiber-rich foods, replaced by salted, sweetened, easier-to-chew processed foods. The drop in the quantity and diversity of whole plant foods—the only naturally abundant source of fiber—could result in a dysbiosis that leads to early death and disability. Or, the decline in diet quality could directly predispose to disease, with the dysbiosis just an incidental marker of an unhealthy diet.

There are also ways aging can be connected to dysbiosis independent of diet. While the rates of antibiotic prescriptions in childhood and through middle age have dropped in recent years, prescription rates among the elderly have shot up. Even non-antibiotic pharmaceuticals can muck with our microbiome. A study pitting more than a thousand FDA-approved drugs against forty representative strains of gut bacteria found that 24 percent of marketed drugs inhibited the growth of at least one strain. Reduced physical activity could also contribute to sluggish, stagnant bowels that could leave our gut bugs no other choice but to turn to protein for putrefaction once preferred prebiotics are used up. Nursing home residents are often fed the kind of low-fiber diet that can contribute to the “decimation” of a healthy microbiome.

This article appeared in Skeptic magazine 28.4

So, while researchers have interpreted the link between dysbiosis and frailty as a poor diet leading to poor gut flora leading to poor health, the arrows of causality could potentially go in every which direction. Maybe there’s even a chicken-or-the-egg feedback loop in play. With so many interrelated factors, you can imagine how hard it is to tease out the causal chain of events.

These questions crop up all the time in microbiome research. For example, the microbiomes of centenarians aren’t just better at digesting fiber. They’re better at detoxifying industrial pollutants, such as petrochemicals; food preservatives like benzoate and naphthalene, used in petroleum refinement; and haloalkanes, widely used commercially as flame retardants, refrigerants, propellants, and solvents. None of these detoxification pathways was found in the microbiomes of the Hadza, one of the last hunter-gatherer tribes in Africa. Did the enhanced detoxification in centenarian guts (compared to younger individuals) contribute to their longevity, or did their longevity contribute to their enhanced detoxification (given their longer lifetime exposure and accumulation of chemicals)?

The microbiomes of centenarians and semi-supercentenarians are better able to metabolize plant fats than animal fats, but maybe that’s just due to their eating more plant-based diets. The Bama County longevity region centenarians who had such an abundance of fiber feeders were eating more than 70 percent more fiber (38 g versus only 22 g per 2,000 calories) compared to those aged eighty through ninety-nine living in the same region. The only way to know whether eating more healthfully over their long lives simply led to a better microbiome or whether their better microbiome actually contributed to their living longer is to put it to the test.
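The "more than 70 percent more fiber" figure checks out against the gram amounts quoted. A quick sanity check, using only the 38 g and 22 g per 2,000 calories reported for the Bama County comparison:

```python
# Sanity check of the fiber comparison quoted above:
# centenarians at 38 g vs. 22 g per 2,000 kcal for those aged 80-99.
centenarian_fiber = 38   # grams per 2,000 kcal
octogenarian_fiber = 22  # grams per 2,000 kcal

# Relative difference as a percentage.
pct_more = (centenarian_fiber - octogenarian_fiber) / octogenarian_fiber * 100
print(f"{pct_more:.0f}% more fiber")  # 73% more fiber
```

That works out to roughly 73 percent, consistent with the "more than 70 percent" phrasing.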

Fecal Transplant Experiments

Longevity researchers have good reason to suspect a causal, rather than bystander, role for age-related microbiome changes, given fecal transplant studies showing that the lives of old animals can be extended by receiving gut bugs from younger animals. Centenarian stool has anti-aging effects when fed to mice. Researchers fed mice fecal matter from a 70-year-old individual that contained Bilophila wadsworthia, a pro-inflammatory bacterium enriched by a diet high in animal products, versus feces from a 101-year-old containing more fiber feeders. Mice transplanted with the centenarian microbiome ended up displaying a range of youthful physiological indicators, including less age pigment in their brains. This raises the possibility that we will one day be using centenarian fecal matter to promote healthy aging. Why bathe in the blood of virgins when you can dine on the dung of the venerable?

Plugging Leaks with Fiber

One of the mechanisms by which intestinal dysbiosis may accelerate aging is a leaky gut. This can lead to tiny bits of undigested food, microbes, and toxins slipping through our gut lining and entering uninvited into our bloodstream, triggering chronic systemic inflammation. Thankfully, there’s something we can do about it.

To avoid gut dysbiosis, inflammation, and leakiness, plants should be preferred. Vegetarians tend to have a better intestinal microbiome balance, higher bacterial biodiversity, and enhanced integrity of the intestinal barrier, and they also produce markedly fewer uremic toxins in the gut, likely because fiber is the primary food for a healthy gut microbiome. Cause and effect was established in a randomized, double-blind, crossover study of pasta with or without added fiber.

Dysbiosis, Inflammation, Immunosuppression

The most important role a healthy microbiome has for preserving health as we age is thought to be the prevention of systemic inflammation. Inflammaging is a strong risk factor not only for premature death: those with higher-than-average levels of inflammatory markers in their blood for their age are also more likely to be hospitalized, frail, and less independent, and to suffer from a variety of diseases, including common infections.

In Japan, for example, more than 40 percent of all centenarian deaths are due to pneumonia and other infectious diseases. In one of the largest studies, involving nearly 36,000 British centenarians, pneumonia was the leading identifiable cause of death. Inflammaging has not only been shown to increase susceptibility to coming down with the leading cause of bacterial pneumonia but older adults with more inflammation also tend to suffer increased severity and decreased survival.

As we age, our immune system macrophages (from the Greek for “big eaters”) start to lose their ability to engulf and destroy bacteria. The same happens in regular mice. But mice raised microbe-free don’t suffer from the leaky gut, subsequent inflammation, and loss of macrophage function. To connect the dots between the inflammation and loss of function, researchers found that the macrophage impairment could be induced in microbe-free mice by infusing them with an inflammatory mediator, which, when dripped on macrophages in a Petri dish, could directly interfere with their ability to kill pneumonia bacteria. Because our immune system is also responsible for cancer defense, immune dysfunction caused by the inflammation resulting from dysbiosis may also help explain why cancer incidence increases so steeply as we age (and why microbe-free mice have fewer tumors and live longer).

Avoiding Dietary Antibiotics

Other than getting enough fiber, what else can we do to prevent dysbiosis in the first place? There are a number of factors that contribute to microbiome imbalance. For example, on any given day, an average of about two and a half doses of antibiotics are consumed for every one hundred people in Western countries. The havoc this can play on our microbiome may explain why antibiotic use predicts an increased risk of cancer, though confounding factors, such as smoking, that are associated with both, could also potentially explain this link.

Up to three-quarters of antibiotic use is of questionable therapeutic value. Avoiding unnecessary use of antibiotics and using targeted, narrow-spectrum agents whenever possible can help protect our gut flora, but most people may not realize they’re consuming antibiotic residues every day in the meat, dairy, and eggs they eat. As much as 80 percent of the antibiotics used in the United States doesn’t go to treat sick people but rather is fed to farm animals in part as a crutch to compensate for the squalid conditions that now characterize much of modern agribusiness. But do enough antibiotics make it onto our plates to make a difference?

Infections with multidrug-resistant bacteria are on target to become the world’s leading cause of disease and death by the year 2050, poised to surpass even cancer and heart disease. Excessive antibiotic use can result in our guts becoming colonized with these superbugs, so researchers set out to calculate how many animal products one would need to eat to achieve antibiotic concentrations in our colon to give resistant bugs an advantage. Single servings of beef, chicken, or pork were found to contain enough tetracycline, ciprofloxacin, tilmicosin, tylosin, sarafloxacin, and erythromycin to favor the growth of resistant bacteria. One and a half servings of fish (150 g) exceeded minimum selective concentrations of ciprofloxacin and erythromycin. Two cups of milk could tip the scales for tetracycline, ciprofloxacin, tilmicosin, tylosin, and lincomycin. And, legal levels of erythromycin and oxytetracycline in two eggs could also exceed safe levels.

We need to stop squandering lifesaving miracle drugs just to speed the growth of farm animals reared in unhygienic conditions, and we also need to stop the reckless overuse in medicine.

Excerpted from How Not to Age: The Scientific Approach to Getting Healthier as You Get Older by Michael Greger. Copyright © 2023 by Michael Greger. Reprinted with permission from Flatiron Books. All rights reserved.

About the Author

Michael Greger, M.D. FACLM is a graduate of the Cornell University School of Agriculture and the Tufts University School of Medicine. He is a practicing physician and author of Bird Flu: A Virus of Our Own Hatching and Carbophobia: The Scary Truth Behind America’s Low Carb Craze. Three of his recent books—How Not to Die, the How Not to Die Cookbook, and How Not to Diet—became instant New York Times Best Sellers. Greger has lectured at the Conference on World Affairs and the National Institute of Health, testified before Congress, and appeared on shows such as The Colbert Report and Oprah Winfrey.

Categories: Critical Thinking, Skeptic

Neuralink Implants Chip in Human

neurologicablog Feed - Tue, 01/30/2024 - 2:18pm

Elon Musk has announced that his company, Neuralink, has implanted their first wireless computer chip into a human. The chip, which they plan on calling Telepathy (not sure how I feel about that), connects with 64 thin, hair-like electrodes, is battery powered, and can be recharged remotely. This is exciting news, but of course needs to be put into context. First, let’s get the Musk thing out of the way.

Because this is Elon Musk the achievement gets more attention than it probably deserves, but also more criticism. It gets wrapped up in the Musk debate – is he a genuine innovator, or just an exploiter and showman? I think the truth is a little bit of both. Yes, the technologies he is famous for advancing (EVs, reusable rockets, digging tunnels, and now brain-machine interface) all existed before him (at least potentially) and were advancing without him. But he did more than just gobble up existing companies or people and slap his brand on it (as his harshest critics claim). Especially with Tesla and SpaceX, he invested his own fortune and provided a specific vision which pushed these companies through to successful products, and very likely advanced their respective industries considerably.

What about Neuralink and BMI (brain-machine interface) technology? I think Musk’s impact in this industry is much less than with EVs and reusable rockets. But he is increasing the profile of the industry, providing funding for research and development, and perhaps increasing the competition. In the end I think Neuralink will have a more modest, but perhaps not negligible, impact on bringing BMI applications to the world. I think it will end up being a net positive, and anything that accelerates this technology is a good thing.

So – how big a deal is this one advance, implanting a wireless chip into a human brain? Not very, at least not yet. Just the mere fact of implanting a chip is not a big deal. The real test is how long it lasts, how long it maintains its function, and how well it functions – none of which has yet been demonstrated. Also, other companies (although only a few) are ahead of the game already.

Here is a list of five companies (in addition to Neuralink) working on BMI technology (and I have written about many of them before). Synchron is taking a different approach, with their stentrodes. Instead of implanting in the brain, which is very invasive, they place their electrodes inside veins inside the brain, which gets them very close to brain tissue, and critically inside the skull. They completed their first human implant in 2022.

Blackrock Neurotech has a similar computer chip with an array of tiny electrodes that gets implanted in the brain. They are farther along than Neuralink and are the favorite to have a product available for use outside a research lab setting. Clearpoint Neuro is working with Blackrock to develop a robot to automatically implant their chips with the precision necessary to optimize function. They also are developing their own applications for BMI and also implants for drug delivery to brain tissue.

Braingate has also successfully implanted arrays of electrodes into humans that allow them to communicate wirelessly with external devices, controlling computer interfaces or robotic limbs.

These companies are all focusing on implanted devices. There is also research into using scalp surface electrodes for a BMI connection. The advantage here is that nothing has to be implanted. The disadvantage is that the quality of the signal is much less. Which option is better depends on the application. Neurable is working on external BMI that you wear like headphones. They envision this will be used like a virtual reality application, but with neuro-reality (VR through a neurological connection, rather than goggles).

All of these advances are exciting, and I have been following them closely and reporting on them over the years. The Neuralink announcement adds them to the list of companies who have implanted a BMI chip into a human, a very exclusive club, but does not advance the cutting edge beyond where it already is.

What has me the most excited recently, actually, is advances in AI. What we need for fairly mature BMI technology, the kind that can allow a paralyzed person to communicate effectively or control robotic limbs, is an implant (surface electrodes are not enough for these applications) that has many connections, is durable, is self-powered (or easily recharged), does not damage brain tissue, and maintains a consistent connection (does not move or migrate). We keep inching closer to this goal. The stentrode may be a great intermediary step, good enough for decades until we develop really good implantable electrodes, which will almost certainly have to be soft and flexible.

But as we slowly and incrementally advance toward this goal (basically the hardware) we also have to keep an eye on the software. I had thought that this had basically peaked and was more than advanced enough for what it needed to do: translate brain signals into what the person is thinking with enough fidelity to provide communication and control. But recent AI applications are showing how much more powerful this software can be. This is what AI is good at: taking lots of data and making sense of it. The same way it can make a deep fake of someone’s voice, or recreate a work of art in the style of a specific artist, it can take the jumble of blurry signals from the brain and assemble it into coherent speech (at least that’s the goal). This essentially means we can do much more with the hardware we have.

This is the kind of thing that might make the stentrode the leader of the pack: it sacrifices a little resolution in exchange for being much safer and less invasive. But that sacrifice may be more than compensated for with a good AI interface.

The bottom line is that this industry is advancing nicely. We are at the cusp of going from the laboratory to early medical applications. From there we will go to more advanced medical applications, and then eventually to consumer applications. It should be exciting to watch.

 

The post Neuralink Implants Chip in Human first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #921: Reconsidering the Seveso Dioxin Disaster

Skeptoid Feed - Tue, 01/30/2024 - 2:00am

Was this infamous 1976 dioxin disaster as bad as reported, or might it have been much worse than we thought?

Categories: Critical Thinking, Skeptic

Katherine Brodsky — How to Find and Free Your Voice in the Age of Outrage

Skeptic.com feed - Tue, 01/30/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss401_Katherine_Brodsky_2024_01_10.mp3 Download MP3

As a society we are self-censoring at record rates. Say the wrong thing at the wrong moment to the wrong person and the consequences can be dire. Think that everyone should be treated equally regardless of race? You’re a racist who needs to be kicked out of the online forum that you started. Believe there are biological differences between men and women? You’re a sexist who should be fired with cause. Argue that people should be able to speak freely within the bounds of the law? You’re a fascist who should be removed from your position of authority. When the truth is no defense and nuance is seen as an attack, self-censorship is a rational choice. Yet, our silence comes with a price. When we are too fearful to speak openly and honestly, we deprive ourselves of the ability to build genuine relationships, we yield all cultural and political power to those with opposing views, and we lose our ability to challenge ideas or change minds, even our own.

In No Apologies, Katherine Brodsky argues that it’s time for principled individuals to hit the unmute button and resist the authoritarians among us who name, shame, and punish. Recognizing that speaking authentically is easier said than done, she spent two years researching and interviewing those who have been subjected to public harassment and abuse for daring to transgress the new orthodoxy or criticize a new taboo. While she found that some of these individuals navigated the outrage mob better than others, and some suffered worse personal and professional effects than others, all of the individuals with whom she spoke remain unapologetic over their choice to express themselves authentically. In sharing their stories, which span the arts, education, journalism, and science, Brodsky uncovers lessons for all of us in the silenced majority to push back against the dangerous illiberalism of the vocal minority that tolerates no dissent— and to find and free our own voices.

Katherine Brodsky is a journalist, author, essayist and commentator who has been taking an especially keen interest in emerging technologies and their impact on society. She has contributed to publications such as Variety, the Washington Post, WIRED, The Guardian, Esquire, Newsweek, Mashable, and many others. Over the years she has interviewed a diverse range of intriguing personalities including numerous Oscar, Emmy, Tony, Pulitzer, and Nobel Prize winners and nominees—including the Dalai Lama.

Shermer and Brodsky discuss:

  • What it’s like growing up Jewish in the Soviet Union and Israel
  • Why the Jews
  • Why liberals (or progressives) no longer defend free speech
  • Cancel culture: data and anecdotes
  • Is Cancel Culture an imagined moral panic?
  • Cancel Culture on the political Left
  • Cancel Culture on the political Right
  • Social media and Cancel Culture
  • Free speech law vs. free speech norms
  • Pluralistic Ignorance and the spiral of silence
  • Solutions to cancel culture
  • Identity politics
  • Cancel culture, witch crazes, and virtue signaling
  • Free speech, hate speech and slippery slopes
  • How to stand up to cancel culture.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

Controlling the Narrative with AI

neurologicablog Feed - Mon, 01/29/2024 - 5:08am

There is an ongoing battle in our society to control the narrative, to influence the flow of information, and thereby move the needle on what people think and how they behave. This is nothing new, but the mechanisms for controlling the narrative are evolving as our communication technology evolves. The latest addition to this technology is the large language model AIs.

“The media”, of course, has been a large focus of this competition. On the right there are constant complaints about the “liberal bias” of the media, and on the left there are complaints about the rise of right-wing media, which they feel is biased and radicalizing. The culture wars focus mainly on schools, because those schools teach not only facts and knowledge but also convey the values of our society. The left views DEI (diversity, equity, and inclusion) initiatives as promoting social justice, while the right views them as brainwashing the next generation with liberal propaganda. This is an oversimplification, but it is the basic dynamic. Even industry has been targeted by the culture wars – which narratives are specific companies supporting? Is Disney pro-gay? Which companies fly BLM or LGBTQ flags?

But increasingly “the narrative” (the overall cultural conversation) is not being controlled by the media, educational system, or marketing campaigns. It’s being controlled by social media. This is why, when the power of social media started to become apparent, many people panicked. Suddenly it seemed we had ceded control of the narrative to a few tech companies, who had apparently decided that destroying democracy was a price they were prepared to pay for maximizing their clicks. We now live in a world where YouTube algorithms can destroy lives and relationships.

We have not yet finished panicking about the influence of social media and the tech giants who control it, and already another player has crashed the party – artificial intelligence, chatbots, and the large language models that run them. This is an extension of the social media infrastructure, but it is enough of a technological advance to be disruptive. Here is the concern: by shaping the flow of information to the masses, social media platforms and AI can have a significant effect on the narrative – enough to create populist movements, to alter the outcome of elections, or to make or destroy brands.

It seems likely that we will increasingly be giving control of the flow of information to AI. Now, instead of searching on Google for information, you can have a conversation with ChatGPT. Behind the scenes it’s still searching the web for information, but the interface is radically different. I have documented and discussed here many times how easy human brains are to fool. We have evolved circuits in our brains that construct our perception of reality and make certain judgements about how to do so. One subset of these circuits is dedicated to determining whether something out there in the world has agency (is it a person or just a thing), and once the agency-algorithm determines that something is an agent, that assessment connects to the emotional centers of our brain. We then have feelings toward that apparent agent and treat them as if they were a person. This extends to cartoons, digital entities, and even abstract shapes. Physical form, or the lack thereof, does not seem to matter, because it is not part of the agency algorithm.

It is increasingly well established that people respond to an even half-way decent chatbot as if that chatbot were a person. So now when we interface with “the internet”, looking for information, we may not just be searching for websites but talking with an entity – an entity that can sound friendly, understanding, and authoritative. Even though we may know completely that this is just an AI, we emotionally fall for it. It’s just how our brains are wired.

A recent study demonstrates the subtle power that such chatbots can have. Researchers asked subjects to talk with ChatGPT-3 about Black Lives Matter (BLM) and climate change, but gave them no other instructions. They also surveyed the subjects’ attitudes toward these topics before and after the conversation. Those who scored negatively toward BLM or climate change ranked their experience half a point lower on a five-point scale (which is significant), so they were unhappy when the AI told them things they did not agree with. But, more importantly, after the interaction their attitudes moved 6% in the direction of accepting climate change and the BLM movement. We don’t know from this study whether the effect is enduring, or whether it is enough to affect behavior, but at least temporarily ChatGPT did move the needle a little. This is a proof of concept.
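To make the kind of paired pre/post measurement described above concrete, here is a minimal sketch. The numbers are entirely hypothetical (the study's actual data and scale are not given in the post); the point is just how a mean attitude shift of roughly 6% would be computed from before/after survey scores.

```python
# Hypothetical pre/post attitude scores on a 0-100 scale for eight
# subjects -- illustrative only, not the study's data.
pre = [30, 45, 50, 62, 40, 55, 35, 48]
post = [33, 48, 53, 65, 43, 58, 38, 51]

# Per-subject change, then the average shift across subjects.
shifts = [b - a for a, b in zip(pre, post)]
mean_shift = sum(shifts) / len(shifts)

# Express the shift relative to the baseline mean, as a percentage.
pct_shift = 100 * mean_shift / (sum(pre) / len(pre))
print(round(mean_shift, 2), round(pct_shift, 1))  # 3.0 6.6
```

A real analysis would also test whether the shift is statistically distinguishable from zero (e.g., a paired t-test) and, as the post notes, whether it persists over time.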

So the question is – who controls these large language model AI chatbots, who we are rapidly making the gatekeepers to information on the internet?

One approach is to make it so that no one controls them (as much as possible). Through transparency, regulation, and voluntary standards, the large tech companies can try to keep their thumbs off the scale as much as possible and essentially “let the chips fall where they may.” But this is a problem, and early indications are that this approach likely won’t work. The problem is that even if they are trying not to influence the behavior of these AIs, they can’t help but have a large influence on them through the choices they make about how to program and train them. There is no neutral approach. Every decision has a large influence, and they have to make choices. What do they prioritize?

If, for example, they prioritize the user experience, well, as we see in this study, one way to improve the user experience is to tell people what they want to hear, rather than what the AI determines is the truth. How much does the AI caveat what it says? How authoritative should it sound? How thoroughly should it source whatever information it gives? And how does it weight the different sources that it is using? Further, we know that these AI applications can “hallucinate” – just make up fake information. How do we stop that, and to what extent (and how) do we build fact-checking processes into the AI?

These are all difficult and challenging questions, even for a well-meaning tech company acting in good faith. But of course, there are powerful actors out there who would not act in good faith. There is already deep concern about the rise of TikTok, and the ability of China to control the flow of information through that app to favor pro-China news and opinion. How long will it be before ChatGPT is accused of having a liberal bias, and ConservaGPT is created to combat that (just like Conservapedia, or Truth Social)?

The narrative wars go on, but they seem to be increasingly concentrated in fewer and fewer choke points of information. That, I think, is the real risk. And the best solution may be an antitrust approach – make sure there are lots of options out there, so that no single option, or small handful of options, dominates.

The post Controlling the Narrative with AI first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #968 - Jan 27 2024

Skeptics Guide to the Universe Feed - Sat, 01/27/2024 - 8:00am
Swindler's List: Deep Fake Robot Call; News Item: Oxygen Bottleneck, NASA Opens Osiris Rex Canister, Learning and Longevity, DNA Directed Assembly, Bleach Peddler Sentenced; Who's That Noisy; Your Questions and E-mails: Nuclear Batteries; Science or Fiction
Categories: Skeptic

Brian Klaas — Fluke: Chance, Chaos, and Why Everything We Do Matters

Skeptic.com feed - Sat, 01/27/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss400_Brian_Klaas_2024_01_03.mp3 Download MP3

If you could rewind your life to the very beginning and then press play, would everything turn out the same? Or could making an accidental phone call or missing an exit off the highway change not just your life, but history itself? And would you remain blind to the radically different possible world you unknowingly left behind?

In Fluke, myth-shattering social scientist Brian Klaas dives deeply into the phenomenon of random chance and the chaos it can sow, taking aim at most people’s neat and tidy storybook version of reality. The book’s argument is that we willfully ignore a bewildering truth: but for a few small changes, our lives—and our societies—could be radically different.

Offering an entirely new lens, Fluke explores how our world really works, driven by strange interactions and apparently random events. How did one couple’s vacation cause 100,000 people to die? Does our decision to hit the snooze button in the morning radically alter the trajectory of our lives? And has the evolution of humans been inevitable or are we simply the product of a series of freak accidents?

Drawing on social science, chaos theory, history, evolutionary biology, and philosophy, Klaas provides a brilliantly fresh look at why things happen—all while providing mind-bending lessons on how we can live smarter, be happier, and lead more fulfilling lives.

Brian Klaas grew up in Minnesota, earned his DPhil at Oxford, and is now a professor of global politics at University College London. He is a regular contributor for The Washington Post and The Atlantic, host of the award-winning Power Corrupts podcast, and frequent guest on national television. Klaas has conducted field research across the globe, interviewing despots, CEOs, torture victims, dissidents, cult leaders, criminals, and everyday power abusers. He has also advised major politicians and organizations including NATO, the European Union, and Amnesty International. His previous book, for which he appears on this podcast, was Corruptible: Who Gets Power and How it Changes Us. His new book is Fluke: Chance, Chaos and Why Everything We Do Matters. You can find him at BrianPKlaas.com and on X @brianklaas.

Shermer and Klaas discuss:

  • contingency and necessity/convergence
  • chance and randomness
  • complexity and chaos theory
  • Jorge Luis Borges “The Garden of Forking Paths”
  • self-organized criticality
  • limits of probability in a complex, ever-changing world
  • frequency- vs. belief-type probability
  • ceteris paribus, or “all else being equal” but things are never equal
  • economic forecasting
  • free will, determinism, and compatibilism
  • Holy Grail of Causality
  • Easy Problem of Social Research and the Hard Problem of Social Research
  • Was the original theory wrong, or did the world change?
  • When Clinton lost, Silver pointed to his model as a defense: 71.4 percent isn’t 100 percent! There was nearly a 30 percent chance of Clinton losing in the model, so the model wasn’t wrong—it was just something that would happen nearly a third of the time!
  • Special Order 191 and the turning point of the Civil War
  • Implicit in the baby Hitler thought experiment is the idea that without Hitler the Nazis wouldn’t rise to power in Germany, World War II wouldn’t happen, and the Holocaust would be avoided. It therefore assumes that Hitler was the sole, or at least the crucial, cause of those events. Many historians would take issue with that viewpoint, arguing that those cataclysms were all but inevitable. Hitler might have affected some outcomes, they’d say, but not the overall trajectory of events. The Nazis, the war, and the genocide were due to larger factors than just one man.
  • weak-link problem
  • complex world defined by tipping points, feedback loops, increasing returns, lock-in, emergence, and self-organized criticality
  • QWERTY and path dependency, Betamax vs. VHS, cassette v. CD v. streaming.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

How Humans Can Adapt to Space

neurologicablog Feed - Fri, 01/26/2024 - 5:11am

My recent article on settling Mars has generated a lot of discussion, some of it around the basic concept of how difficult it is for humans to live anywhere but a thin envelope of air hugging the surface of the Earth. This is undoubtedly true, as I have discussed before – we evolved to be finely adapted to Earth. We are only comfortable in a fairly narrow range of temperature. We need a fairly high percentage of oxygen (Earth’s is 21%) at sufficient pressure, and our atmosphere can’t have too much of other gases that might cause us problems. We are protected from most of the radiation that bathes the universe. Our skin and eyes have adapted to the light of our sun, both in frequency and intensity. And we are adapted to Earth’s surface gravity, with anything significantly higher or lower causing problems for our biology.

Space itself is an extremely unforgiving environment requiring a total human habitat, with the main current technological challenges being artificial gravity and radiation protection. But even on other worlds it is extremely unlikely that all of the variables will be within the range of human survival, let alone comfort and thriving. Mars, for example, has an atmosphere that is too thin and contains no oxygen, no magnetic field to protect from radiation, temperatures that are too cold, and surface gravity that is too low. It’s better than the cold vacuum of space, but not by much. You still need essentially a total habitat, and we will probably have to go underground for radiation protection. Mars’s gravity is 38% that of Earth’s, which is probably not ideal for human biology. In space, with microgravity, at least you can theoretically use rotation to simulate gravity.
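The rotation trade-off mentioned here follows from the centripetal-acceleration relation a = ω²r. As a rough back-of-the-envelope sketch (the 100 m radius is an illustrative number, not from the post):

```python
import math

def spin_rate_rpm(radius_m, g_target=9.81):
    """Rotations per minute needed so that centripetal acceleration
    (omega^2 * r) matches the target gravity at the rim."""
    omega = math.sqrt(g_target / radius_m)  # rad/s, from a = omega^2 * r
    return omega * 60 / (2 * math.pi)

# A 100 m radius station: spin for full Earth gravity vs. Mars-level 0.38 g.
print(round(spin_rate_rpm(100), 2))               # ~3 rpm for 1 g
print(round(spin_rate_rpm(100, 0.38 * 9.81), 2))  # slower spin for 0.38 g
```

The same formula shows why small rotating habitats are problematic: halving the radius raises the required spin rate by a factor of √2, and fast spin rates are thought to cause disorientation.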

In addition to adapting off-Earth environments to humans, is it feasible to adapt humans to other environments? Let me start with some far-future options then finish with what is likely to be the nearest-future options.

Perhaps the optimal way to most fully adapt humans to alien environments is to completely replace the human body with one that is adapted. This could be a robot body, a genetically engineered biological one, or a cyborg combination. How does one replace their body? One option might be taking virtual control of the “brain” of the avatar (yes, like in the movie, Avatar). This could be through a neural link, or even just through virtual reality. This way you can remain safely ensconced in a protective environment, while your Avatar runs around a world that would instantly kill you. We are closer to having robotic avatars than biological ones, and to a limited degree we are already doing this through virtual presence technology.

But this approach has a severe limitation – you have to be relatively close to your Avatar. If, for example, you wanted to explore the Martian surface with an avatar, you would need to be in Mars orbit or on the surface of Mars. You could not be on Earth, because the delay in communication would be too great. So essentially this approach is limited by the speed of light.
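The scale of that light-speed limitation is easy to put in numbers. Using the commonly cited range of Earth–Mars separations (roughly 54.6 million km at closest approach to about 401 million km at maximum):

```python
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_min(distance_km):
    """One-way light-travel time in minutes over the given distance."""
    return distance_km / C_KM_S / 60

print(round(one_way_delay_min(54.6e6), 1))  # ~3 minutes at closest approach
print(round(one_way_delay_min(401e6), 1))   # ~22 minutes at maximum separation
```

So a round-trip command-and-response loop from Earth to a Martian avatar would take between roughly 6 and 45 minutes, which rules out real-time teleoperation from Earth.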

You could also “upload” your mind into the Avatar, so that real time communication is not required. I put “upload” in quotes, because in reality you would be copying the structure and function of your brain. The avatar would not be you, it would be a mental copy of you operating the avatar (again, whether machine or biological). That copy would feel like it is you, and so that would be a way for “you” to explore a hostile environment, but it would not be the original you. However, it may also be possible, once the exploration has concluded, to copy the acquired memories back to you. It may also be possible to do this as a streaming function. In this case the distance does not matter as much, because you have a local copy with real time interaction, while you are receiving the feed in a constant stream, just delayed by the communication time. Because the avatar is a copy of you, the original you would not need to send instructions, only receive the feed. So you could be safely on Earth while your mental twin avatar is running around on Mars.

A more advanced version of this is similar to the series Altered Carbon. In this hypothetical future people can have their minds transferred (again, copied) to a “stack”, which is essentially a computer. The stack, which is now you, operates your body, which is called your “sleeve”. This means, however, that you can change sleeves by pulling out your stack and plugging it into a different sleeve. Such a sleeve could be genetically engineered for a specific environment, or again it could be a robot. This envisions a future in which humans are really digital information that can inhabit biological, robotic, or virtual entities.

So far these options are pretty far in the future. The closest would be using virtual reality to control a robot, which is currently very limited, but I can see this being fairly robust by the time we could, for example, get to Mars. Another approach that is also fairly near term (at least nearer than the other options) is to use genetic engineering, medical interventions, and cyborg implants to enhance our existing bodies. This does not involve any avatars or neural transfer, just making our existing bodies better able to handle harsh environments.

For existing adults, genetic engineering options are likely limited, but could still be helpful. For example, inserting a gene that produces a protein derived from tardigrades could protect our DNA from radiation damage. We could also adapt our skin to block out more radiation and be resistant to UV damage. We could adapt our bones and muscles to different surface gravities. We may even find ways to adapt to microgravity, allowing our bodies to better manage fluid distribution without gravity.

For adults, using medical interventions, such as drugs, is another option. Drugs could theoretically compensate for lower oxygen tension, radiation damage, altered cardiac function, and other physiological responses to alien environments, or neutralize toxins. Cyborg implants are yet another option, reinforcing our bones, enhancing cardiac function, shielding against light or radiation, or adapting us to low pressure.

But we could more profoundly adapt humans to alien environments with germ line genetic engineering – altering the genes that control development from an embryo. We could then make profound alterations to the anatomy and physiology of humans. This would create, in essence, a subspecies of humans adapted to a specific environment – Homo martianus or Homo lunus. Then we could theoretically include extreme adaptations to temperature, air pressure, oxygen tension, radiation exposure, and surface gravity. These subspecies would not be adapted to Earth, and might find Earth as hostile as we find Mars. They would be an offshoot of humanity.

Even the nearest of these technologies will take a long time to develop. For now we need to carry our Earth environment with us, even if it is within the confines of a spacesuit. But it seems likely we will find ways to adapt ourselves to space to some degree.

The post How Humans Can Adapt to Space first appeared on NeuroLogica Blog.

Categories: Skeptic

DNA Directed Assembly of Nanomaterials

neurologicablog Feed - Thu, 01/25/2024 - 4:54am

Arguably the type of advance that has the greatest impact on technology is material science. Technology can advance by doing more with the materials we have, but new materials can change the game entirely. It is no coincidence that we mark different technological ages by the dominant material used, such as the bronze age and iron age. But how do we invent new materials?

Historically new materials were mostly discovered, not invented. Or we discovered techniques that allowed us to use new materials. Metallurgy, for example, was largely about creating a fire hot enough to smelt different metals. Sometimes we literally discovered new elements, like aluminum or tungsten, with desirable properties. We also figured out how to make alloys, combining different elements to create a new material with unique or improved properties. Adding tin to copper made a much stronger and more durable metal, bronze. While the hunt for new usable elements is basically over, there are so many possible combinations that researching new alloys is still a viable way to find new materials. In fact a recent class of materials known as “superalloys” have incredible properties, such as extreme heat resistance.

If there are no new elements (other than really big and therefore unstable artificial elements), and we already have a mature science of making alloys, what’s next? There are also chemically based materials, such as polymers, resins, and composites, that can have excellent properties, including the ability to be manufactured easily. Plastics clearly had a dramatic effect on our technology, and some of the strongest and lightest materials we have are carbon composites. But again it feels like we have already picked the low-hanging fruit here. We still need new better materials.

It seems like the new frontier of material science is nanostructured material. Now it’s not only about the elements that a material is made from, it is how the atoms of that material are arranged on a nano-scale. We are just at the beginning of this technology. This approach has yielded what we call metamaterials – substances with properties determined by their structure, not just their composition. Some metamaterials can accomplish feats previously thought theoretically impossible, like focusing light beyond the diffraction limit. Another class of structured material is two-dimensional material, such as carbon nanofibers.

The challenge of nanostructured materials, however, is manufacturing them with high quality and high output. It’s one thing to use a precise technique in the lab as a proof of concept, but unless we can mass produce such material they will benefit only the highest end users. This is still great for institutions like NASA, but we probably won’t be seeing such materials on the desktop or in the home.

This brings us to the topic of today’s post – using DNA in order to direct the assembly of nanomaterials. This is already in use, and has been for about a decade, but a recent paper highlights some advances in this technique: Three-dimensional nanoscale metal, metal oxide, and semiconductor frameworks through DNA-programmable assembly and templating.

There are a few techniques being used here. DNA is a nanoscale molecule that essentially evolved to direct the assembly of proteins. The same process is not being used here, but rather the programmable structure of DNA means we can exploit it for other purposes. The first step in the process being outlined here is to use DNA in order to direct the assembly of a lattice out of inorganic material. They make the analogy that the lattice is like the frame of a house. It provides the basic structure, but then you install specific structures (like copper pipes for water and insulation) to provide specific functionality.

So they then use two different methods to infiltrate the lattice with specific materials to provide the desired properties – semiconductors, insulators, magnetic conduction, etc. One method is vapor-phase infiltration, which introduces the desired elements as a gas that can penetrate deeply into the lattice structure. The other is liquid-phase infiltration, which is better at depositing substances on the surface of the lattice.

This combination of methods addresses some of the challenges of DNA-directed assembly. First, the process is highly programmable. This is critical for allowing the production of a variety of 3D nanostructured materials with differing properties. Second, the process takes advantage of self-assembly, which is another concept critical to nanostructured materials. When you get down to the 30 nm scale, you can’t really place individual atoms or molecules in the desired locations. You need a manufacturing method that causes the molecules to automatically go where they are supposed to – to self-assemble. This is what happens with infiltration of the lattice.

The researchers also hope to develop a method that can work with a variety of materials to produce a range of desirable structures in a process that can be scaled up to manufacturing levels. They demonstrate at least the first two properties here, and show the potential for mass production, but of course that has yet to be actually demonstrated. They worked with a variety of materials, including “zinc, aluminum, copper, molybdenum, tungsten, indium, tin, and platinum, and composites such as aluminum-doped zinc oxide, indium tin oxide, and platinum/aluminum-doped zinc oxide.”

I don’t know if we are quite there yet, but this seems like a big step toward the ultimate goal of mass producing specific 3D nanostructured inorganic materials that we can program to have a range of desirable properties. One day the computer chips in your smartphone or desktop may come off an assembly line using a process similar to the one outlined in this paper. Or this may allow for new applications that are not even possible today.

The post DNA Directed Assembly of Nanomaterials first appeared on NeuroLogica Blog.

Categories: Skeptic

Leonardo da Vinci & Albert Einstein: Could the Renaissance Genius Have Grasped the Foundational Concepts of General Relativity?

Skeptic.com feed - Thu, 01/25/2024 - 12:00am

Leonardo da Vinci was a man of many talents. He was one of the few individuals to have made contributions to both the arts and science. His work extends to civil engineering, chemistry, geology, geometry, hydrodynamics, mathematics, mechanical engineering, optics, physics, pyrotechnics, warfare, and zoology.

Da Vinci was one of the best artists of his generation and many of his paintings are greatly admired today and command astronomical prices (his Salvator Mundi fetched the highest auction price ever). He was also an extraordinary illustrator, leaving thousands of manuscripts full of drawings of machines, fluid mechanics, humans, and many other topics. In addition, he was also a sculptor, architect, and more. As the type specimen of a Renaissance man, he put his mind to many different subjects, and he excelled at most of them. He was generally considered a genius by his contemporaries. In addition to all of this, he was described as a handsome and charming man, who was able to convince a whole room of the feasibility of something impossible.1 However, as it is sometimes said of promising but lazy children, some said that he would have been capable of even more accomplishments had he put his focus on them for longer and worked harder.

Revealingly, in his time, Leonardo was not considered to be at the same level as Michelangelo or even Raphael, perhaps because his notebooks were not published until much later. However, today many consider him superior to all his peers and—in a few extreme cases—some people fall into what we might call the “cult of Leonardo,” whose adherents believe that his genius was almost superhuman.

Consider a recently published article titled “Leonardo da Vinci’s Visualization of Gravity as a Form of Acceleration” by Morteza Gharib, Chris Roh, and Flavio Noca (henceforth GRN).2 In it, the authors propose that Leonardo understood gravity in a way that was not surpassed until the works of Galileo, Newton, and even Einstein. Had GRN presented their ideas in a less spectacular way, their article could have been a flawed, but mainly harmless one. Unfortunately, they chose to take the more risky path of venturing unfounded, under-researched, mind-blowing claims under the guise of solid scholarship, starting with the assertion that Leonardo saw gravity not as a force, but as an acceleration:

About 500 years ago, Leonardo da Vinci tried to uncover the mystery of gravity and its connection to acceleration through a series of ingenious experiments guided only by his imagination and masterful experimental techniques.

The shocking revelation that they put forth is that Leonardo “almost” (a bit of wiggle room there) anticipated Einstein’s General Theory of Relativity, in particular the so-called “Equivalence Principle” (see Figure 1):

As with Galileo, Leonardo’s geometrical representation of the equation of motion is as insightful as Newtonian mechanics’ representations of equations of motion. […] After Newton, Albert Einstein referred to the equivalency of gravity and acceleration, when he introduced the principles of “strong equivalency” while developing his theory of relativity in the early twentieth century.

Figure 1. (Click image to enlarge) Einstein’s equivalence principle states that gravity is indistinguishable from being in an accelerated system of reference. This was famously illustrated by Einstein using a thought experiment: imagine we are in a closed room. Is there any way we can know if the down force that we feel is due to gravity? Maybe the room is in a spaceship, away from big masses and accelerating upwards with acceleration g. Einstein concluded that both situations are equivalent.
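The closed-room scenario in Figure 1 can be written out compactly. This is a standard textbook statement of the principle, added here for clarity, not something from GRN's article:

```latex
% Inside the closed room, an observer measures a test mass m pressing
% on the floor with force F. Two hypotheses give identical readings:
%
%   (a) the room is at rest in a uniform gravitational field of strength g:
\[ F = m_{\text{grav}}\, g \]
%   (b) the room is in gravity-free space, accelerating upward at a = g:
\[ F = m_{\text{inert}}\, a = m_{\text{inert}}\, g \]
%
% No local experiment can distinguish (a) from (b) precisely because
% gravitational and inertial mass are (as far as we can measure) equal:
\[ m_{\text{grav}} = m_{\text{inert}} \]
```

This equality of gravitational and inertial mass is what lets gravity be reinterpreted as acceleration, the step Einstein took in building general relativity.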

Considering gravity as an acceleration instead of a force is indeed a crucial difference between Einstein’s and Newton’s conceptions. The assertion that Leonardo could have hit upon this insight centuries before Einstein is the most preposterous claim in GRN’s article and likely what has made it so ballyhooed in the popular press. To give just a couple of examples of some of those reviews, here is one from Ars Technica:

[Leonardo attempted] to draw a link between gravity and acceleration—well before Isaac Newton came up with his laws of motion, and centuries before Albert Einstein would demonstrate the equivalence principle with his general theory of relativity.3

Here’s another one from CNET:

Before Galileo, Newton, and Einstein, it seems to be Leonardo da Vinci who started piecing together the gravity puzzle […] Rather, it’s kind of the same thing as acceleration…. [Einstein] called it the equivalence principle, and soon, this eye-opening concept would blossom into the mind-bending theory of general relativity. The rest, as they say, is history.4

Let’s summarize GRN’s argument. First, they assert that Leonardo had a good understanding of how objects fall with constant acceleration under the effect of gravity. Second, they present a thought experiment devised by Leonardo that, they claim, shows he understood that gravity is equivalent to being in an accelerated frame of reference. Finally, they present a quantitative model, purportedly based on Leonardo’s manuscripts, and they compare it against Newtonian mechanics. Let’s consider each of these points.

Acceleration of Falling Objects

To support their claim that Leonardo understood that gravity produces a constant acceleration on falling objects, GRN provide the following quote from Leonardo’s M manuscript: “a weight that descends freely in every degree of time acquires…a degree of velocity”5 (ellipsis in their article). They further tell us that “many scholars of Leonardo note that this statement indicates that Leonardo correctly understood that the velocity of a falling object is a linear function of time.”

Now consider Leonardo’s quote in full: “The free-falling body acquires a degree of displacement over each degree of time, and over each degree of displacement it acquires a degree of velocity.”6 It is not completely clear what Leonardo meant by this, since the original sentence can be translated in slightly different ways; but the simplest interpretation is that Leonardo didn’t have a full understanding of acceleration. He repeats similar ideas in various places,7 including in drawings and calculations.8 For the full quote, I have used a translation from Prof. Enzo Macagno, one of the scholars that GRN cite in support of their hypothesis. Macagno has this to say about Leonardo’s understanding of gravity relative to this quote:

what Leonardo is trying to express is that over equal intervals of time there are constant increments for both distance traversed and for velocity. If this is understood, we may study critically what Leonardo said to detect how far he went in his descriptions of motion during free fall. Even if he did not add anything new to this question, or actually detracted from it, it is still important to know his “degree” of understanding.9

However, Macagno then observes that Leonardo's descriptions of accelerated motion "could not be correct because of an intrinsic inconsistency between velocity and displacement," an observation that hardly supports GRN's claim.
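The inconsistency Macagno points to can be made concrete with a small numerical sketch of my own (illustrative numbers, not from Leonardo or GRN): saying a body gains "a degree of velocity over each degree of displacement" amounts to v proportional to distance fallen, and a body starting from rest under that rule never begins to move at all, whereas v proportional to time gives ordinary accelerated fall.

```python
# My own illustration (not from Leonardo or GRN) of why "velocity
# proportional to distance fallen" is inconsistent for a body starting
# from rest: if v = k*s and s = 0 initially, s can never grow.

def fall_v_prop_time(g=9.8, dt=0.01, steps=100):
    """Correct law: v grows with time, so distance grows quadratically."""
    s, v = 0.0, 0.0
    for _ in range(steps):
        v += g * dt
        s += v * dt
    return s

def fall_v_prop_distance(k=9.8, dt=0.01, steps=100):
    """The inconsistent reading: v = k*s. Starting from rest, s stays 0."""
    s = 0.0
    for _ in range(steps):
        v = k * s
        s += v * dt
    return s

print(fall_v_prop_time())      # roughly 0.5*g*t^2, about 4.9 m after 1 s
print(fall_v_prop_distance())  # exactly 0.0: the body never starts moving
```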

Another point to consider is the concept of "free fall." Today we apply it to objects moving exclusively under the influence of the Earth's gravity. However, when Leonardo talks about free fall ("discienso libero"), he is probably referring to something different. Da Vinci was very conscious of the effect of air drag. In almost every case where he discusses falling objects, he mentions the effect of air, and he even includes it in his simplified calculations.10 In his manuscripts, he has many things to say about the effect of air on falling objects and vice versa. To me, it is much more likely that for Leonardo, free fall meant something closer to what we now call "terminal velocity," that is, the constant velocity that a falling object reaches when gravity and air resistance balance.

Further, Leonardo mentions several times that, on clear sunny days, the air is lighter at higher altitudes, so that the air becomes thicker as an object falls. This means he thought that, at terminal velocity, objects decelerate as they fall. This is actually true, although the effect is probably much weaker than Leonardo implies. None of these considerations, discussed at length by Da Vinci in his manuscripts, is mentioned in GRN's article.
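The terminal-velocity reading can be made concrete with the modern drag balance (a sketch of mine with illustrative numbers, not anything in Leonardo's manuscripts): a falling object stops accelerating when its weight m·g equals the drag force ½·ρ·Cd·A·v².

```python
import math

# Back-of-the-envelope sketch (modern physics, illustrative numbers):
# terminal velocity is the speed at which air drag balances weight,
# i.e. m*g = 0.5 * rho * Cd * A * v^2, solved for v.

def terminal_velocity(mass, area, drag_coeff=0.47, rho_air=1.225, g=9.81):
    """Solve m*g = 0.5*rho*Cd*A*v^2 for v (Cd ~ 0.47 for a sphere)."""
    return math.sqrt(2 * mass * g / (rho_air * drag_coeff * area))

# A small hailstone vs. a much larger one: heavier objects reach higher
# terminal speeds, consistent with Leonardo's observation that weight
# matters for fall speed through air. Note also that denser air near the
# ground lowers v, matching his claim that falling objects slow down.
for r, m in [(0.0025, 6e-5), (0.025, 6e-2)]:
    v = terminal_velocity(m, math.pi * r * r)
    print(f"radius {r*1000:.1f} mm: terminal velocity about {v:.1f} m/s")
```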

Leonardo’s Thought Experiment

Having argued that Leonardo thought that objects fell with constant acceleration, the next step in GRN’s article is to “prove” that Leonardo had a deeper understanding, namely that he was somehow aware that gravitation was not a force, but an acceleration in a manner similar to Einstein’s equivalence principle (see Figure 1). To do this, GRN analyze a thought experiment that Leonardo described in slightly different forms in various parts of his manuscripts.

Figure 2. Leonardo’s thought experiment. The jar moves from left to right releasing beads as it moves (Manuscript M, 143r).

The experiment consists of an open "container" (a jar, a funnel, and even a cloud in his various descriptions) that moves horizontally as it releases particles (beads or hail grains) that fall. Leonardo then considers the geometry of the system, giving special consideration to the case where the jar moves horizontally at the same speed at which the first released bead falls vertically. This can be seen in Figure 2 as drawn by Leonardo, where he explains that, in this particular case, the trajectory of the first bead, that of the jar, and the line connecting all the beads form an isosceles right triangle.

Figure 3. GRN’s interpretation of the experiment. All movements are accelerated, and the beads follow parabolas.

GRN analyze this problem using a modern Newtonian approach. As is commonly done in high school physics problems, they begin by neglecting the effect of the air, an unusual assumption in this case, given that Leonardo constantly discusses the effect of air on falling objects. They also assume, perhaps more reasonably, that each particle leaves the jar with the jar's own velocity, ignoring the fact that the beads must exit with some speed relative to the jar. They show their results in a graphic similar to Figure 3.

Figure 3 is not identical to the one drawn by Leonardo, but some salient features are still there: an isosceles right triangle, abn, defined by the movement of the jar (an), the falling trajectory of the first bead (ab), and the straight line that connects all the beads (bn). GRN assert that this is what Leonardo had in mind, and they use the fact that, in both cases, the line formed by the falling beads is straight as proof that their assumptions are correct. They contrast it with the case in which the jar moves at constant velocity while the beads fall accelerated by gravity, in which case the beads also align, but along a vertical line. They never entertain the more logical possibility: that Leonardo thought the beads fell vertically at more or less constant velocities.
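A quick numerical check of my own (not in GRN's article) shows why the straight line of beads cannot discriminate between the two readings: under GRN's assumptions (jar accelerating at g, beads inheriting the jar's velocity and falling on parabolas) and under the constant-velocity reading (jar and beads moving at the same constant speed, beads falling straight down), the beads lie on a 45-degree straight line in both cases.

```python
import numpy as np

# My own check (not in GRN's article): under BOTH readings of Leonardo's
# experiment, the beads are collinear on a slope-1 (45-degree) line, so
# the straight line proves nothing about acceleration.

g, t_now = 9.81, 1.0
taus = np.linspace(0.0, 0.9, 10)   # release times of the beads

# GRN's reading: jar accelerates at g; each bead keeps the jar's velocity
# at release, then falls on a parabola.
x_grn = 0.5 * g * taus**2 + g * taus * (t_now - taus)
z_grn = -0.5 * g * (t_now - taus)**2

# Constant-velocity reading: jar moves at speed v; beads drop straight
# down at the same speed v, with no horizontal motion after release.
v = 3.0
x_cv = v * taus
z_cv = -v * (t_now - taus)

for x, z in [(x_grn, z_grn), (x_cv, z_cv)]:
    slopes = np.diff(z) / np.diff(x)
    print(np.allclose(slopes, 1.0))   # True for both: a 45-degree line
```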

Then GRN go on to explain that this system can be better understood from the point of view of the accelerated frame of reference of the moving jar, a technique unavailable in Leonardo's time but part of the toolbox of Newtonian mechanics. They probably do this to remind us of Einstein's Equivalence Principle, in which the connection between gravity and acceleration is deeper and accelerated frames of reference are equivalent to gravitational fields. To me, it is clear that GRN's ulterior motive is to establish a connection with the General Theory of Relativity. Throughout the article they leave small hints of this; for example, they say that "Leonardo's studies of objects in free fall demonstrate that gravitational and pseudo-acceleration fields are indistinguishable locally when their magnitudes are the same." Here, the words "fields" and "indistinguishable locally" have nothing to do with anything Leonardo writes, but GRN use them anyway because the language sounds more Einsteinian. Elsewhere in the article, they say: "in other words, he [Leonardo] switched time with space to be able to conduct this experiment," a thinly veiled way of suggesting that Leonardo was wise to the space-time continuum.

Of course, Einstein’s Equivalence Principle is deeper than just comparing accelerations. That could have been done in Newtonian mechanics. The crucial point that Einstein understood is that the mass of an object subjected to a gravitational field plays no role in its dynamics. All objects are accelerated equally, even light! Leonardo never says that all objects fall at the same speed independently of their weight; quite the contrary. Leonardo gives various examples where they don’t, although he mentions air resistance as one reason. Famously, Galileo was the first person to prove that all objects fall at the same speed (not including the effect of air), and there is no reason to believe that Leonardo knew that before Galileo.

Figure 4. Leonardo’s Manuscript M 217r (left), and my translation (right). The image above has been mirrored from the original for ease of understanding. Leonardo wrote from right to left, using his left hand, to prevent smudging the ink as he wrote.

I have translated the page where Leonardo presents the experiment of the hail cloud (see Figure 4). My translation is quite literal, except that I have simplified the third paragraph, which to me was somewhat repetitive and confusing. It is clear that Leonardo thought the hail grains fell mostly vertically, without any appreciable horizontal velocity, as indicated by the vertical lines that connect every grain to the point from which it was released. Leonardo thought that the effect of the air would quickly stop any horizontal movement of a falling object (see Figure 5). The fact that he also thought this experiment could actually be performed supports the reading that he was considering objects falling at constant velocity. Are we really to believe that Leonardo imagined clouds accelerating to absurdly great speeds, or hail grains unaffected by air resistance?

Figure 5. Objects thrown at different angles. The image has been mirrored for ease of understanding (Codex Arundel 92v).

Simply by inspecting Figure 4 and the other pages that Leonardo devoted to this problem, it is clear that he was interested in a simpler geometrical problem: two things that start moving from the same point at the same constant speed but in perpendicular directions will have trajectories that define the two legs of an isosceles right triangle. And the trivial corollary is that if the velocities are different, the triangle will not be isosceles. Da Vinci draws examples of each of these cases and explains how this can be used to estimate the speed of the clouds.
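The geometrical use Leonardo apparently had in mind reduces to a proportion (my reconstruction of the article's description, with made-up numbers): since both motions start together, the legs of the triangle grow in proportion to the two speeds, so a known hail-fall speed and the ratio of the legs give the cloud's speed.

```python
# My reconstruction of the geometric estimate, with made-up numbers: the
# legs of the triangle grow in proportion to the two (constant) speeds.

def cloud_speed(horizontal_leg, vertical_leg, hail_speed):
    """Both motions start together, so leg lengths scale with the speeds."""
    return hail_speed * horizontal_leg / vertical_leg

# Equal legs -> isosceles right triangle -> cloud as fast as the hail falls.
print(cloud_speed(40.0, 40.0, hail_speed=8.0))   # 8.0 m/s
# Shorter horizontal leg -> non-isosceles triangle -> slower cloud.
print(cloud_speed(20.0, 40.0, hail_speed=8.0))   # 4.0 m/s
```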

If Leonardo really thought that the particles were following the beautiful parabolic trajectories shown in Figure 3, why didn’t he draw them that way rather than drawing, as he did, vertical lines of no clear meaning? GRN never comment on this obvious weakness in their claim.

Leonardo’s Model?

The next section in GRN’s article is truly strange. In what seems like a misguided attempt to perform a quantitative validation of Leonardo’s ideas on gravity, they make extraordinary assumptions and take huge leaps of faith. They interpret the line in Figure 4 labeled “equation of movement” (it can also be translated as “balanced movement”) as meaning that this figure encodes the actual physical equation of movement. After observing that Leonardo seemed to have bisected the axes, they decide that “presumably, the distance between consecutive bisecting locations represents the distance the object traveled during a fixed time step,” although Leonardo says nothing of the sort. He very clearly says that these bisections represent possible speeds of the cloud, relative to the speed of the hail. According to the supplementary materials provided by GRN, it seems that they came up with “Leonardo’s model” for gravity acceleration by looking at the figures, which may explain their misunderstanding.

GRN claim that “Leonardo’s model” is given by the formula: z(t) ∝ 2(t-1)n, where z is the vertical location of the object, t is time, n is the number of bisections, and ∝ means “proportional to.” It is a strange mixture of a discrete description in terms of bisections (n) and a continuous one in time (t). They recognize that this model is incorrect, but after a few additional assumptions which I will not discuss here, they realize that it is not as bad as it might seem initially. In fact, they say that in certain circumstances it is quite good. They write: “Leonardo’s gravitational constant is 0.9774 (95 percent confidence interval, 0.8535, 1.101), which is close to the nondimensional gravity of 1. These two observations suggest that Leonardo’s model of natural motion, while imperfect, was an accurate representation of his observation of falling objects.”

I don’t think this section requires detailed commentary. GRN start with their wrong interpretation of Leonardo’s manuscripts, invent a model based on what they think a figure means, make some unsupported assumptions, and end up with something that has nothing to do with what Leonardo might have had in mind. One could imagine that they wanted to end their article with some hard numerical results, and they distorted Leonardo’s meaning until it yielded something they could use.

This article appeared in Skeptic magazine 28.3

There is, however, an additional point I would like to mention. The model they attribute to Leonardo is invalid for times close to zero (ironically, the only ones for which air drag is insignificant). The plot of z against t that they show in their article and in the supplementary materials does not begin at the origin. The object starts falling only after it is already eight percent of its way down!

* * *

As we have seen, there is no basis to believe that Leonardo da Vinci, genius though he undoubtedly was, had a knowledge of gravity ahead of his time, much less at the level of Newton or Einstein. Every year, thousands of articles are written with the sole intention of entertaining casual readers, and their flaws are obvious to most knowledgeable readers. This article, however, was published in a peer-reviewed journal by a well-known academic institution. The authors claim to have studied the topic scientifically, and their conclusions are not easy to dismiss: one must dig into Leonardo's large corpus of manuscripts to properly analyze their claims, and few readers are willing, or have the language skills, to do so. I have tried my best to examine GRN's claims carefully, and after looking at all the evidence, I remain unconvinced.

Leonardo da Vinci was one of the greatest minds in history, unrivaled in having made significant contributions to both science and the arts. There is simply no need for GRN's hyperbole that he was a genius who foresaw relativity theory centuries ahead of his time. That claim is not supported by any fair reading of the original manuscripts. Rather, their paper is a generator of disinformation that has further lowered the already low signal-to-noise ratio in public conversations about science.

About the Author

José María González Ondina is an Associate Researcher at the University of Florida. He received his PhD from Cornell University. He spent most of his career as an ocean modeler, studying underwater sound propagation and sediment transport at the Plymouth Ocean Forecasting Centre. He also spent a decade at the Ocean & Coastal Research Group (University of Cantabria, Spain) developing numerical models for coastal engineering.

References
  1. Vasari, G. (1550). Lives of the Most Excellent Painters, Sculptors, and Architects.
  2. https://rb.gy/7sfix
  3. https://rb.gy/8lhni
  4. https://rb.gy/wvnhr
  5. Manuscript M, folio 45r, folio 43r.
  6. I am using the translation of Enzo Macagno from Leonardian Fluid Mechanics in the Manuscript M, page 18.
  7. For example here: “Hence, in each doubling of the quantity of time the body doubles the length of fall and the velocity of its motion.” from Manuscript M, folio 44v.
  8. Manuscript M, folio 45r.
  9. Enzo Macagno, Leonardian Fluid Mechanics in the Manuscript M, page 18.
  10. Manuscript M, folio 44v.