Ernest Scheyder — The Global Battle to Power Our Lives

Skeptic.com feed - Sat, 02/24/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss408_Ernest_Scheyder_2024_02_24.mp3 Download MP3

A new economic war for critical minerals has begun, and The War Below is an urgent dispatch from its front lines. To build electric vehicles, solar panels, cell phones, and millions of other devices means the world must dig more mines to extract lithium, copper, and other vital building blocks. But mines are deeply unpopular, even as they have a role to play in fighting climate change and powering crucial technologies. These tensions have sparked a worldwide reckoning over the sourcing of necessary materials, and no one understands the complexities of these issues better than Ernest Scheyder, whose exclusive access to sites around the globe has allowed him to gain unparalleled insights into a future without fossil fuels.

The War Below reveals the explosive brawl among industry titans, conservationists, community groups, policymakers, and many others over whether some places are too special to mine or whether the habitats of rare plants, sensitive ecosystems, Indigenous holy sites, and other places should be dug up for their riches.

With vivid and engaging writing, Scheyder shows the human toll of this war and explains why recycling and other newer technologies have struggled to gain widespread use. He also expertly chronicles Washington’s attempts to wean itself off supplies from China, the global leader in mineral production and processing. The War Below paints a powerfully honest and nuanced picture of what is at stake in this new fight for energy independence, revealing how America and the rest of the world’s hunt for the “new oil” directly affects us all.

Ernest Scheyder is a senior correspondent for Reuters, covering the green energy transition and the minerals that undergird it. He previously covered the US shale oil revolution, politics, and the environment, and held roles at the Associated Press and the Bangor Daily News. A native of Maine, Scheyder is a graduate of the University of Maine and Columbia Journalism School. Visit his website at ErnestScheyder.com and follow him on Twitter @ErnestScheyder.

Shermer and Scheyder discuss:

  • how, as a Reuters reporter, Scheyder came to this issue
  • rare earth metals
  • lithium and copper
  • aluminum and other precious metals
  • How much rare earth metal will we need by 2050, 2100, and beyond?
  • How do lithium-ion batteries work compared to lead-acid? What are the alternatives?
  • How crucial are these technologies to combating climate change?
  • Will EVs completely replace all other automobiles?
  • Can renewables completely replace fossil fuels without nuclear?
  • recycling electronic waste
  • how mining works in the U.S., China, Chile, Russia, elsewhere
  • types of mines: hard-rock vs. soft-rock, open-pit vs. deep earth
  • public vs. private ownership of mines (Bureau of Mines)
  • what companies like Apple and Tesla are doing about the coming problem
  • Native American rights to land containing valuable mines
  • third world labor exploitation
  • electric leaf blowers and weed whackers.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

Odysseus Lands on the Moon

neurologicablog Feed - Fri, 02/23/2024 - 5:00am

On December 11, 1972, Apollo 17 soft-landed on the lunar surface, carrying astronauts Gene Cernan and Harrison Schmitt. That was the last time anything American soft-landed on the Moon, over 50 years ago. It seems amazing that it’s been that long. On February 22, 2024, the Odysseus lander soft-landed on the Moon near the south pole. This was the first time a private company achieved this feat, and the first time an American craft has landed on the Moon since Apollo 17.

Only five countries have ever achieved a soft landing on the Moon: America, China, Russia, Japan, and India. Only America did so with a crewed mission; the rest were robotic. Even though this feat was first accomplished in 1966 by the Soviet Union, it is still an extremely difficult thing to pull off. Getting to the Moon requires a powerful rocket. Inserting into lunar orbit requires a great deal of control, on a craft that is too far away for real-time remote control. This means you either need pilots on the craft, or the craft must be able to carry out a pre-programmed sequence to accomplish this goal. Then landing on the lunar surface is tricky. There is no atmosphere to slow the craft down, but also no atmosphere to get in the way. As the ship descends it burns fuel, which constantly changes the mass of the vehicle. It has to remain upright with respect to the lunar surface and reduce its speed by just the right amount to touch down softly – either with a human pilot or all by itself.
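The changing-mass problem can be sketched in a few lines of Python. This is a toy one-dimensional model with made-up numbers (the lander mass, thrust, and engine specific impulse are all hypothetical), not any real lander's descent profile:

```python
G_MOON = 1.62   # m/s^2, lunar surface gravity
ISP = 311.0     # s, specific impulse of a hypothetical engine
G0 = 9.81       # m/s^2, standard gravity, used to convert Isp to mass flow

def braking_burn(mass, velocity, thrust, duration, dt=0.01):
    """Integrate a vertical braking burn; velocity is downward-positive (m/s)."""
    mdot = thrust / (ISP * G0)  # propellant consumed per second, kg/s
    t = 0.0
    while t < duration and velocity > 0:
        # Net deceleration grows over the burn because mass keeps shrinking.
        velocity -= (thrust / mass - G_MOON) * dt
        mass -= mdot * dt
        t += dt
    return velocity, mass

# A 1900 kg craft falling at 30 m/s, braking at a fixed 4800 N of thrust:
v_end, m_end = braking_burn(mass=1900.0, velocity=30.0, thrust=4800.0, duration=40.0)
print(f"speed after burn: {v_end:.2f} m/s, mass remaining: {m_end:.1f} kg")
```

Real landers throttle continuously (and Apollo's lunar module was flown partly by hand); the point here is just that a fixed thrust produces a steadily growing deceleration as propellant burns off, which the guidance computer – or pilot – must account for.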

The Odysseus mission is funded by NASA as part of its program to develop private industry's capacity to send instruments and supplies to the Moon. The goal of the Artemis program is to establish a permanent base on the Moon, which will need to be supported by regular supply runs. In January another company with a NASA grant under the same program, Astrobotic Technology, sent its own craft to the Moon, the Peregrine. However, a fuel leak prevented the craft from orienting its solar panels toward the sun, and the mission had to be abandoned. This left the door open for Odysseus to grab the achievement of being the first private craft to soft-land on the Moon.

One of the primary missions of Odysseus is to investigate the effect of the rocket's exhaust on the landing site. When the Apollo missions landed, the lander's exhaust blasted regolith from the lunar surface at up to 3–4 km/second, faster than a bullet. With no atmosphere to slow down these particles, they blasted everything in the area and traveled a long distance. When Apollo 12 landed somewhat near the Surveyor 3 robotic lander, the astronauts walked to the Surveyor to bring back pieces for study. They found that the Surveyor had been “sandblasted” by the lander's exhaust.

This is a much more serious problem for Artemis than it was for Apollo. Sandblasting on landing is not really a problem if there is nothing else of value nearby. But with a permanent base on the Moon, and possibly even equipment from other nations' lunar programs, this sandblasting can be dangerous and harm sensitive equipment. We need to know, therefore, how much damage it does, and how close landers can come to existing infrastructure.

There are potential ways to deal with the issue, including landing at a safe distance, but also erecting walls or curtains to block the blasted regolith from reaching infrastructure. A landing pad that is hardened and free of loose regolith is another option. These options, in turn, require a high degree of precision in the landing location. For the Apollo missions, the designated landing areas were huge, with the landers often ending up kilometers away from their targets. If the plan for Artemis is to land at a precise location, eventually onto a landing pad, then we need to not only pull off soft landings but hit a bullseye.

Fortunately, our technology is no longer in the Apollo era. SpaceX, for example, now routinely pulls off similar feats, with their reusable rockets that descend back down to Earth after launching their payload, and make a soft landing on a small target such as a floating platform.

The Odysseus craft will also carry out other experiments and missions to prepare the way for Artemis. This is also the first soft landing for the US near the south pole. All the Apollo missions landed near the equator. The craft will also be placing a laser retroreflector on the lunar surface. This is a reflector that can return a laser pointed at it directly back at the source. Such reflectors have been left on the Moon before and are used to do things like measure the precise distance between the Earth and Moon. NASA plans to place many retroreflectors on the Moon to use as a positioning system for spacecraft and satellites in lunar orbit.

This is all part of building an infrastructure for a permanent presence on the Moon. This, I think, is the right approach. NASA knows it needs to go beyond “flags and footprints” style one-off missions. Such missions are still useful for doing research and developing technology, but they are not sustainable. We should be focusing now on partnering with private industry, developing a commercial space industry, advancing international cooperation, and building long-term infrastructure and reusable technology. While I'm happy to see the Artemis program get underway, I also hope this is the last time NASA develops these expensive single-use rocket systems. Reusable systems are the way to go.

 

The post Odysseus Lands on the Moon first appeared on NeuroLogica Blog.

Categories: Skeptic

Legalization of Marijuana and Violent Crime in the Nicest Place in America

Skeptic.com feed - Fri, 02/23/2024 - 12:00am

Throughout most of the last century, both political Right1 and Left2 were unified, a rare occurrence in itself, in their opposition to the decriminalization of marijuana. By 2023, public opinion had shifted. Most Americans now support legalization for medical and recreational use,3 and this support extends across the political divide. Nearly two-thirds of the electorate supports legalization, making it one of the least divisive issues in the country.4 At this writing, 23 states have legalized recreational marijuana, along with Washington, DC, and Guam.5

The third that opposes legalization remains, though, and there are reasoned arguments against legalization. Significant research establishing the adverse effects of marijuana consumption exists, noting its correlation with neurophysical decline,6 cognitive impairment,7 highway deaths,8 lower educational attainment,9 addiction,10 and other adverse health effects.11 Within the last decade, correlations have been found between both distal and proximal drug use (including the use of marijuana) and sexual aggression.12

Buchanan, Michigan (Callie Lipkin / Gallery Stock), “The Nicest Place in America (2020)”

There are also reasonable arguments against legalization based on the burdens it is claimed legalization would place on society: the tax revenue received from the longstanding legal sale of alcohol and tobacco pales in comparison to the costs of healthcare for the individuals who consume them.13 So some argue marijuana legalization would only further increase the costs to the taxpayer.

In 2019, Alex Berenson of the New York Times published Tell Your Children: The Truth About Marijuana, Mental Illness, and Violence. In it, Berenson warned that paranoia, one of the established side effects of marijuana consumption, is likely to trigger violence in those suffering from psychosis.14 The book was predictably lauded by those pundits who saw it as a revelatory argument against legalization.15 Berenson cited stories such as that of Raina Thaiday, who stabbed eight children to death, seven of whom were her own (the eighth was her niece). Berenson noted the finding of schizophrenia in Thaiday's case, in which the presiding Justice wrote, “All the psychiatrists thought that it is likely that (Thaiday’s) long-term use of cannabis caused (Thaiday’s) mental illness schizophrenia to emerge.”16 Tell Your Children is chock full of historical tragedies such as Thaiday’s from the 1970s to the present day. The book describes scalping, mutilation, mass shootings, and spousal murder by psychotic perpetrators triggered by smoking marijuana. The author warned that today’s marijuana is considerably more potent (that is, has a higher concentration of THC) than that used 40+ years ago, and so predicted that such atrocities will only get worse. Yet Berenson’s argument is not new. The U.S. Department of Justice has argued for decades that cannabis induces violence.17

New research challenged the Department’s claims, examining the rates of violent crime in states that had legalized medical and recreational marijuana. The data suggested that legalization not only failed to increase violent crime rates, but possibly even led to a decline in crimes such as homicide, robbery, and aggravated assault.18 Likewise, Tell Your Children was challenged by many in the scientific community. They argued the author was guilty of confusing correlation with causation and of cherry-picking his data, and even likened his anecdotal data to the long-discredited “reefer madness” panic of the past.19

Having grown up during the 1980s at the height of the War on Drugs, I read Tell Your Children with interest, and asked myself if Berenson’s fears were valid. Was he right? Turns out, I live in a small Michigan community that offers an ideal cluster sample in which to test his claims. It’s called Buchanan.

In the fall of 2020, I heard a radio ad calling for nominations to be considered for Reader’s Digest’s Nicest Place in America. I wrote about Buchanan. My essay won.20 Reporters from around the world covered the story.21 Coincidentally, that same year, Buchanan fully implemented marijuana legalization.22 Michigan had passed a medical marijuana law in 2008, and we’d spent the previous 10 years respectfully debating whether or not to follow suit in our small town. In the fall of 2019, the city adopted a plan for six dispensaries.

Location of marijuana dispensaries in Buchanan, MI

The Nicest Place in America has since become the go-to destination for Michigan stoners. At this writing, there is one legal dispensary for every 860 residents, one of the highest per capita ratios in the state.23 We even have a local marijuana ambassador, Freddie “The Stoner” Miller, who’s been seen on the Jimmy Kimmel Live! TV show.24

Buchanan seemed like the perfect case study of the effects of marijuana legalization. Did The Nicest Place in America see an increase in violent crime rates in the years following its adoption of recreational marijuana? I began by looking up our demographics. I found that, in many ways, Buchanan is a microcosm of America. We have a population of 4,270 and enjoy a diverse citizenry that is 83.2 percent White, 11 percent Biracial, 4.38 percent African American, 0.445 percent Hispanic, and 0.445 percent Asian. We have a poverty rate of 7.85 percent and a median household income of $43,668.

Much of Buchanan’s demographic data is comparable to that of the United States as a whole, though the U.S. has a considerably larger Hispanic population (18.2 percent), a larger median household income ($64,994), and a higher poverty rate (12.8 percent). Buchanan’s industrial statistics are likewise similar to those of the nation, with the workforce distributed across manufacturing, education, retail trade, and professional and technical services.25 Perhaps most significantly, Buchanan’s unemployment insurance claims skyrocketed to record levels in April 2020, as did those throughout the country.

This article appeared in Skeptic magazine 28.4

I then called Sean Denison, Buchanan’s mayor. He told me he’d seen no evidence of an increase in violent crime since 2020. When I called Tim Ganus, our Police Chief, he told me that he also doubted crime had spiked. Still, though, to really know, you need data. I submitted a Freedom of Information Act request to the Buchanan Police Department to obtain arrest records for violent crimes from 2016 to 2022. Chief Ganus called me again to establish what I meant by “violent crime.” I told him he knew more about this than I did, so I’d leave it up to him. He suggested arrests for assault, along with arrests that would constitute a felony. I concurred. One week later, I had the information in hand. Each report encompassed one calendar year.

Here’s what I found:

There was a total of 855 adult arrests between January 1, 2016, and December 31, 2022. Of these, 105 (12.2 percent) were deemed “violent.” These offenses included nonaggravated assault, aggravated felonious assault, sexual assault, parental kidnapping, and robbery.

  • In 2016, there were 29 arrests, one for parental kidnapping, one for sexual assault, 21 for non-aggravated (misdemeanor level) assault, and six for aggravated (felonious) assault.
  • In 2017, there were 17 arrests, one for robbery, 12 for non-aggravated (misdemeanor level) assaults, and four for aggravated felonious assault.
  • In 2018, there were 23 arrests, one for robbery, 17 for non-aggravated assault, and five for aggravated felonious assault.
  • In 2019, there were 21 arrests, two for sexual assault, 17 for non-aggravated assault, and two for aggravated felonious assault.
  • In 2020, the first full year of implementation, there were 21 arrests, two for sexual assault, 15 for non-aggravated assault, and four for aggravated felonious assault.
  • In 2021, the second year of implementation, there were 19 arrests, one for parental kidnapping, three for sexual assault, one for forcible sexual contact, nine for non-aggravated assault, and four for aggravated felonious assault.
  • In 2022, the third year of the implementation, there were 22 arrests, 16 for non-aggravated assault and six for aggravated felonious assault.
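Using the FOIA counts above, a few lines of Python compare the average number of violent-crime arrests per year before and after full implementation of legalization (the before/after grouping is mine):

```python
# Yearly violent-crime arrest totals from the Buchanan FOIA data above.
violent_arrests = {
    2016: 29, 2017: 17, 2018: 23, 2019: 21,  # before full implementation
    2020: 21, 2021: 19, 2022: 22,            # after full implementation
}

before = [violent_arrests[y] for y in range(2016, 2020)]
after = [violent_arrests[y] for y in range(2020, 2023)]

mean_before = sum(before) / len(before)  # 22.5 arrests per year
mean_after = sum(after) / len(after)     # about 20.7 arrests per year

print(f"mean before legalization: {mean_before:.1f}/yr")
print(f"mean after legalization:  {mean_after:.1f}/yr")
```

The yearly average actually dipped slightly after 2020, consistent with the conclusion below, though with a sample this small the difference is not meaningful on its own.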

Did violent crime increase in Buchanan after 2020? Hardly. Any fears of increased violent crime following marijuana legalization in The Nicest Place in America proved unwarranted. We’re still safe, and so, I’m glad to report, is our title.

About the Author

John D. Van Dyke is an academic and science educator. His personal website is www.vandykerevue.org.

References
  1. https://rb.gy/17hag
  2. https://rb.gy/fzigf
  3. https://rb.gy/rkncw
  4. https://rb.gy/qx0x8
  5. https://rb.gy/m049q
  6. https://rb.gy/4a3e9
  7. https://rb.gy/36zd3
  8. https://rb.gy/eft8n
  9. https://rb.gy/m2mcd
  10. Shover, C.L., Davis, C.S., Gordon, S.C., & Humphreys, K. (2019). Association between medical cannabis laws and opioid overdose mortality has reversed over time. PNAS, 116(26).
  11. https://rb.gy/m2mcd
  12. https://rb.gy/wofyp
  13. https://rb.gy/ifprp
  14. https://rb.gy/hi3zc
  15. https://rb.gy/iibix
  16. Berenson, A. (2019). Tell Your Children: The Truth About Marijuana, Mental Illness, and Violence. Free Press.
  17. Inciardi, J. A., & Saum, C. A. (1998). Legalizing Drugs Would Increase Violent Crime (From Illegal Drugs, pp. 142–150, 1998, Charles P. Cozic, ed. See NCJ-169238).
  18. https://rb.gy/k7fmq
  19. https://rb.gy/luf2z
  20. https://rb.gy/c1xun
  21. https://rb.gy/annph
  22. https://rb.gy/cg4xr
  23. https://rb.gy/cqg5s
  24. https://rb.gy/1xmah
  25. https://rb.gy/0is0v
Categories: Critical Thinking, Skeptic

AI Video

neurologicablog Feed - Thu, 02/22/2024 - 5:05am

Recently OpenAI launched a website showcasing its latest AI application, Sora. This app, based on prompts similar to what you would use for ChatGPT or image-creation applications like Midjourney or DALL-E 2, creates a one-minute photorealistic video without sound. Take a look at the videos and then come back.

Pretty amazing. Of course, I have no idea how cherry picked these videos are. Were there hundreds of failures for each one we are seeing? Probably not, but we don’t know. They do give the prompts they used, and they state explicitly that these videos were created entirely by Sora from the prompt without any further editing.

I have been using Midjourney quite extensively since it came out, and more recently I have been using ChatGPT 4, which is linked to DALL-E 2, so that ChatGPT will create the prompt for you from more natural-language instructions. It’s pretty neat. I sometimes use it to create the images I attach to my blog posts. If I need, for example, a generic picture of a lion, I can just make one, rather than borrowing one from the internet and risking that some German firm will start harassing me about copyright violation and try to shake me down for a few hundred euros. I also make images for personal use, mostly gaming. It’s a lot of fun.

Now I am looking forward to getting my hands on Sora. They say that they are testing the app, having given it to some creators to give them feedback. They are also exploring ways in which the app can be exploited for evil and trying to make it safe. This is where the app raises some tricky questions.

But first I have a technical question – how long will it be before AI video creation is so good that it becomes indistinguishable (without technical analysis) from real video? Right now Sora is about as good at video as Midjourney is at pictures. It’s impressive, but there are some things it has difficulty doing. It doesn’t actually understand anything, like physics or cause and effect, and is just inferring, in its way, what something probably looks like. Probably the best representation of this is how these apps deal with words. They will create pseudo-letters and words, reconstructing word-like images without understanding language.

Here is a picture I made through ChatGPT and DALL-E 2, asking for an advanced spaceship with the SGU logo. Superficially very nice, but the words are not quite right (and this is after several iterations). You can see the same kind of thing in the Sora videos. Often there are errors in scale, in how things relate to each other, and objects just spawn out of nowhere. The video of the birthday party is interesting – I think everyone is supposed to be clapping, but it’s just weird.

So we are still right in the middle of the uncanny valley with AI generated video. Also, this is without sound. The hardest thing to do with photorealistic CG people is make them talk. As soon as their mouth starts moving, you know they are CG. They don’t even attempt that in these videos. My question is – how close are we to getting past the uncanny valley and fixing all the physics problems with these videos?

On the one hand it seems close. These videos are pretty impressive. But technologies like this (self-driving cars, speech recognition) have historically tended to follow a curve where the last 5% of quality is as hard or harder than the first 95%. So while we may seem close, fixing the current problems may be really hard. We will have to wait and see.

The trickier question is – once we do get through the uncanny valley and can essentially create realistic video of anything, paired with sound, indistinguishable from reality, what will the world be like? We can already make fairly good voice simulations (again, at the 95% level). OpenAI says it is addressing these questions, and that’s great, but once this code is out there in the world, who’s to say everyone will adhere to good AI hygiene?

There are some obvious abuses of this technology to consider. One is to create fake videos meant to confuse the public and influence elections or for general propaganda purposes. Democracy requires a certain amount of transparency and shared reality. We are already seeing what happens when different groups cannot even agree on basic facts. This problem also cuts both ways – people can make videos to create the impression that something happened that didn’t, but also real video can be dismissed as fake. That wasn’t me taking a bribe, it was an AI fake video. This creates easy plausible deniability.

This is a perfect scenario for dictators and authoritarians, who can simply create and claim whatever reality they wish. The average person will be left with no solid confidence in what reality is. You can’t trust anything, and so there is no shared truth. Best put our trust in a strongman who vows to protect us.

There are other ways to abuse this technology, such as violating other people’s privacy by using their image. This could also revolutionize the porn industry, although I wonder if that will be a good thing.

While I am excited to get my hands on this kind of software for my personal use, and I am excited to see what real artists and creators can do with the medium, I also worry that we again are at the precipice of a social disruption. It seems that we need to learn the lessons of recent history and try to get ahead of this technology with regulations and standards. We can’t just leave it up to individual companies. Even if most of them are responsible, there are bound to be ones that aren’t. Not only do we need some international standards, we need the technology to enforce them (if that’s even possible).

The trick is, even if AI-generated videos can be detected and revealed, the damage may already be done. The media will have to take a tremendous amount of responsibility for any video they show, and this includes social media giants. At the very least, any AI-generated video should be clearly labeled as such. There may need to be several layers of detection to make this effective. At the least we need to make it as difficult as possible, so that not every teenager with a cellphone can interfere with elections. At the creation end, AI-generated video can be watermarked, for example. There may also be several layers of digital watermarking to alert social media platforms so they can properly label such videos, or refuse to host them depending on content.

I don’t have the final answers, but I do have a strong feeling we should not just go blindly into this new world. I want a world in which I can write a screenplay, and have that screenplay automatically translated into a film. But I don’t want a world in which there is no shared reality, where everything is “fake news” and “alternative facts”. We are already too close to that reality, and taking another giant leap in that direction is intimidating.

The post AI Video first appeared on NeuroLogica Blog.

Categories: Skeptic

Scammers on the Rise

neurologicablog Feed - Tue, 02/20/2024 - 5:06am

Good rule of thumb – assume it’s a scam. If anyone contacts you out of the blue, or any encounter seems unusual, assume it’s a scam and you will probably be right. Recently I was called on my cell phone by someone claiming to be from Venmo. They asked me to confirm whether I had just made two fund transfers from my Venmo account, both in the several-hundred-dollar range. I had not. OK, they said, these were suspicious withdrawals, and if I did not make them then someone had hacked my account. They then transferred me to someone supposedly from the bank that my Venmo account is linked to.

I instantly knew this was a scam for several reasons, but even just the overall tone and feel of the exchange had my spidey senses tingling. The person was just a bit too helpful and friendly. They reassured me multiple times that they will not ask for any personal identifying information. And there was the constant and building pressure that I needed to act immediately to secure my account, but not to worry, they would walk me through what I needed to do. I played along, to learn what the scam was. At what point was the sting coming?

Meanwhile, I went directly to my bank account on a separate device and could see there were no such withdrawals. When I pointed this out, they said that was because the transactions were still pending (but I could stop them if I acted fast). Of course, my account would show pending transactions. When I pointed this out, I got a complicated answer that didn’t quite make sense. They gave me a report number that would identify this event, and I could use that number when they transferred me to someone allegedly from my bank to get further details. Again, I was reassured that they would not ask me for any identifying information. It all sounded very official. The bank person confirmed (even though it still did not appear on my account) that there was an attempt to withdraw funds and sent me back to the Venmo person, who would walk me through the remedy.

What I needed to do was open my Venmo account. Then I needed to hit the send button in order to send a report to Venmo. Ding, ding, ding! That was the sting. They wanted me to send money from my Venmo account to whatever account they tricked me into entering. “You mean the button that says ‘send money’ – that’s the button you want me to press?” Yes, because that would “send” a report to their fraud department to resolve the issue. I know, it sounds stupid, but it only has to work a fraction of the time. I certainly have elderly and not tech-savvy relatives who I could see falling for this. At this point I confronted the person with the fact that they were trying to scam me, but they remained friendly and did not drop the act, so eventually I just hung up.

Digital scammers like this are growing in number, and getting more sophisticated. By now you may have heard about the financial advice columnist who was scammed out of $50,000. Hearing the whole story at the end, knowing where it is all leading, does make it seem obvious. But you have to understand the panic that someone can feel when confronted with the possibility that their identity has been stolen or their life savings are at risk. That panic is then soothed by a comforting voice who will help you through this crisis. The FBI documented $10.2 billion in online fraud in 2022. This is big business.

We are now living in a world where everyone needs to know how to defend themselves from such scams. First, don’t assume you have to be stupid to fall for a scam. Con artists want you to think that – a false sense of security or invulnerability plays into their hands.

There are many articles detailing good internet hygiene to protect yourself, but frequent reminders are helpful, so here is my list. As I said up top – assume it’s a scam. Whenever anyone contacts me I assume it’s a scam until proven otherwise. That also means – do not call that number, do not click that link, do not give any information, do not do anything that someone who contacted you (by phone, text, e-mail, or even snail mail) asks you to do. In many cases you can just assume it’s a scam and comfortably ignore it. But if you have any doubt, then independently look up a contact number for the relevant institution and call them directly.

Do not be disarmed by a friendly voice. The primary vulnerability of your digital life is not some sophisticated computer hack, but a social hack – someone manipulating you, trying to get you to act impulsively or out of fear. They also know how to make people feel socially uncomfortable. If you push back, they will make it seem like you are being unreasonable, rude, or stupid for doing so. They will push whatever social and psychological buttons they can. This means you have to be prepared, you have to be armed with a defense against this manipulation. Perhaps the best defense is simply protocol. If you don’t want to be rude, then just say, “Sorry, I can’t do that.” Take the basic information and contact the relevant institution directly. Or – just hang up. Remember, they are trying to scam you. You owe them nothing. Even if they are legit, it’s their fault for breaking protocol – they should not be asking you to do something risky.

When in doubt, ask someone you know. Don’t be pressured by the alleged need to act fast. Don’t be pressured into not telling anyone, or into not contacting the institution directly. Always ask yourself – is there any possible way this could be a scam? If there is, then it probably is a scam.

It’s also important to know that anything can be spoofed. A scammer can make it seem like the call is coming from a legit organization, or someone you know. Now, with AI, it’s possible to fake someone’s voice. Standard protocol should always be, take the information, hang up, look up the number independently and contact them directly. Just assume, if they contacted you, it’s a scam. Nothing should reassure you that it isn’t.

The post Scammers on the Rise first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #924: Foo Fighters

Skeptoid Feed - Tue, 02/20/2024 - 2:00am

What were these early UFOs that chased and harried World War II fighter pilots?

Categories: Critical Thinking, Skeptic

Paul Offit — Deciphering Covid Myths and Navigating Our Post-Pandemic World

Skeptic.com feed - Tue, 02/20/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss407_Paul_Offit_2024_02_20.mp3 Download MP3

Four years on, Covid is clearly here to stay. So what do we do now? Drawing on his expertise as one of the world’s top virologists, Dr. Paul Offit helps weary readers address that crucial question in this brief, definitive guide.

As a member of the FDA Vaccine Advisory Committee and a former member of the Advisory Committee for Immunization Practices to the CDC, Offit has been in the room for the creation of policies that have affected hundreds of millions of people. In these pages, he marshals the power of hindsight to offer a fascinating frontline look at where we were, where we are, and where we’re heading in the now-permanent fight against the disease.

Accompanied by a companion website populated with breaking news and relevant commentary, this book contains everything you need to know to navigate Covid going forward. Offit addresses fundamental issues like boosters, immunity induced by natural infection, and what it means to be fully vaccinated. He explores the dueling origin stories of the disease, tracing today’s strident anti-vax rhetoric to twelve online sources and tracking the fallout. He breaks down long Covid—what it is, and what the known treatments are. And he looks to the future, revealing whether we can make a better vaccine, whether it should be mandated, and providing a crucial list of fourteen takeaways to eradicate further spread.

Paul A. Offit, M.D. is the Director of the Vaccine Education Center at the Children’s Hospital of Philadelphia and the Maurice R. Hilleman Professor of Vaccinology and Professor of Pediatrics at the University of Pennsylvania. He has appeared on The Today Show, Good Morning America, CBS This Morning, 60 Minutes, and many other programs. Offit has published more than 170 papers in medical and scientific journals in the areas of rotavirus-specific immune responses and vaccine safety. He is also the co-inventor of the rotavirus vaccine, RotaTeq, recommended for universal use in infants by the CDC and WHO. In 2021 he was awarded the Edward Jenner Lifetime Achievement Award in Vaccinology from the 15th Vaccine Congress. He is the author of numerous books including: Do You Believe in Magic?: Vitamins, Supplements and All things Natural; Vaccinated: From Cowpox to mRNA, the Remarkable Story of Vaccines; Deadly Choices: How the Anti-Vaccine Movement Threatens Us All; You Bet Your Life: From Blood Transfusion to Mass Vaccination, the Long and Risky History of Medical Innovation; and Pandora’s Lab: Seven Stories of Science Gone Wrong. His new book is Tell Me When It’s Over: An Expert’s Guide to Deciphering Covid Myths and Navigating Our Post-Pandemic World.

Shermer and Offit discuss:

  • How do you know that the Covid-19 vaccines are not the 8th story of science gone wrong, or part of the long and risky history of medical innovation?
  • Loss of trust in medical and scientific institutions (Anthony Fauci, Francis Collins)
  • Overall assessment of what went right and wrong with the Covid-19 pandemic
  • Pandemic vs. epidemic
  • Influenza caused 800,000 hospitalizations & 60,000 deaths
  • Testing, masking, social isolation
  • Mandates vs. recommendations
  • Is the cure worse than the disease?
  • Closing of schools, restaurants, salons, parks, beaches, hiking trails, etc.
  • The cost to the economy of the shut downs
  • The cost to the education of children of the shut downs
  • Comparative method: which countries and states did better or worse?
  • Viral: The Search for the Origin of Covid-19 by Alina Chan and Matt Ridley
  • Lab Leak hypothesis vs. Zoonotic hypothesis
  • Living with SARS-CoV-2 and its variants
  • Vaccines and autism
  • RFK, Jr. and his conspiracy theories
  • Debating anti-vaxxers (Rogan and elsewhere)
  • Treatments: hydroxychloroquine, ivermectin, remdesivir, Vitamin D, Paxlovid, Tamiflu, retroviral medicines, monoclonal antibodies, convalescent plasma
  • High risk vs. low risk groups; age, sex, race, pregnancy, weight, preconditions, immune compromised
  • Myocarditis, Robert Malone, mRNA vaccines, Joe Rogan, RFKJ, Peter Hotez, Del Bigtree
  • Spike protein made by Covid vaccines: toxic? (The spike protein the mRNA vaccines make cannot fuse to our cells. Normally, SARS-CoV-2 attaches to cells via the spike protein, then enters cells through a process called fusion. p. 107)
  • Stanford professor Jay Bhattacharya censored for signing the Great Barrington Declaration (“focused protection” of the people most at risk): his Wall Street Journal OpEd, “Is the Coronavirus as Deadly as They Say?”, argued there was little evidence to support shelter-in-place orders and quarantines. In March 2021, Bhattacharya called the Covid-19 lockdowns the “biggest public health mistake we’ve ever made” and argued that “the harm to people is catastrophic.” He was blacklisted by Twitter.

How civilization might change:

  • Medical: Coronavirus is here to stay—herd immunity naturally and through vaccines
  • Personal and Public Health: handshakes, hugs, and other human contact; masks, social distancing, hygiene
  • Economics and Business
  • Travel, conferences, meetings
  • Marriage, dating, sex, and home life
  • Entertainment, vacations, bars and restaurants
  • Education and schools
  • Politics and society (and a better understanding of freedom and why it is restricted).

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

Fake Fossils

neurologicablog Feed - Mon, 02/19/2024 - 4:46am

In 1931 a fossil lizard was recovered from the Italian Alps, believed to be a 280 million year old specimen. The fossil was also rare in that it appeared to have some preserved soft tissue. It was given the species designation Tridentinosaurus antiquus and was thought to be part of the Protorosauria group.

A recent detailed analysis of the specimen, hoping to learn more about the soft tissue elements of the fossil, revealed something unexpected. The fossil is a fake (at least mostly). What appears to have happened is that a real fossil that was poorly preserved was “enhanced” to make it more valuable. There are real fossilized femur bones and some bony scales on what was the back of the lizard. But the overall specimen was poorly preserved and not of much value. What the forger did was carve out the outline of the lizard around the preserved bones and then paint it black to make it stand out, giving the appearance of carbonized soft tissue.

How did such a fake go undetected for 93 years? Many factors contributed to this delay. First, there were real bones in the specimen and it was taken from an actual fossil deposit. Initial evaluation did reveal some kind of lacquer on the specimen, but this was common practice at the time as a way of preserving the fossils, so did not raise any red flags. Also, characterization of the nature of the black material required UV photography and microscopic examination using technology not available at the time. This doesn’t mean they couldn’t have revealed it as a fake back then, but it is certainly much easier now.

It also helps to understand how fossils are typically handled. Fossils are treated as rare and precious items. They are typically examined with non-destructive techniques. It is also common for casts to be made and photographs taken, with the original fossils then catalogued and stored away for safety. Not every fossil has a detailed examination before being put away in a museum drawer. There simply aren’t the resources for that.

No fossil fake can withstand detailed examination. There is no way to forge a fossil that cannot be detected by the many types of analysis we have available today. Some fakes are detected immediately, usually because of some feature that a paleontologist will recognize as fake. Others require high tech analysis. The most famous fake fossil, Piltdown Man, was a chimera of modern human and ape bones aged to look old. The fraud was exposed by drilling into the bones, which revealed they were not fossilized.

There was also an entire industry of fake fossils coming out of China. These are mostly for sale to private collectors, exploiting the genuine fossil deposits in China, especially of feathered dinosaurs. It is illegal to export real fossils from China, but not fakes. In at least one case, paleontologists were fooled for about a year by a well-crafted fake. Some of these fakes were modified real (but not very valuable) fossils while others were entire fabrications. The work was often so good, they could have just sold them as replicas for decent amounts of money. But still, claiming they were real inflated the price.

Creationists would have you believe that all fossils are fake, and will point to known cases as evidence. But this is an absurd claim. The Smithsonian alone boasts a collection of 40 million fossil specimens. Most fossils are discovered by paleontologists looking for them in geological locations that correspond to specific periods of time and have conditions amenable to fossil preservation. There is transparency, documentation, and a provenance to the fossils that would make a forgery impossible.

There are a few features that fake fossils have in common that in fact reinforce the nature of genuine fossils. Fake fossils generally were not found by scientists. They were found by amateurs who claim to have gotten lucky. The source and provenance of the fossils are therefore often questionable. This does not automatically mean they are fakes. There is a lot of non-scientific activity that can dig up fossils or other artifacts by chance. Ideally as soon as the artifacts are detected scientists are called in to examine them first hand, in situ. But that does not always happen.

Perhaps most importantly, fake fossils rarely have an enduring impact on science. Many are just knock-offs, and therefore even if they were real they are of little scientific value. They are just copies of real fossils. Fakes purported to be of unique fossil specimens, like Piltdown, have an inherent problem. If they are unique, then they would tell us something about the evolutionary history of the group. But if they are fake, they can’t be telling us something real. Chances are the fakes will not comport to the actual fossil record. They will be enigmas, and likely will be increasingly out of step with the actual fossil record as more genuine specimens are found.

That is exactly what happened with Piltdown. Some paleontologists were immediately skeptical of the find, and it was always thought of as a quirky specimen that scientists did not know how to fit into the tree of life. As more hominid specimens were found Piltdown became increasingly the exception, until finally scientists had enough, pulled the original specimens out of the vault, and showed them to be fakes. The same is essentially true of the Tridentinosaurus antiquus specimen. Paleontologists could not figure out exactly where it fit taxonomically, and did not know how it had apparent soft tissue preservation. It was an enigma, which prompted the analysis that revealed it to be a fake.

Paleontology is essentially the world’s largest puzzle, with each fossil specimen being a puzzle piece. A fake fossil is either redundant or a puzzle piece that does not fit.

The post Fake Fossils first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #971 - Feb 17 2024

Skeptics Guide to the Universe Feed - Sat, 02/17/2024 - 8:00am
Quickie with Bob - Metalenses; News Items: Flow Batteries, Green Roofs, LEGO MRI scanner, The Future Circular Collider, Mayo Clinic and Reiki; Who's That Noisy; Name That Logical Fallacy; Science or Fiction
Categories: Skeptic

Rob Henderson — Foster Care, Family, and Social Class

Skeptic.com feed - Sat, 02/17/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss406_Rob_Henderson_2024_02_17.mp3 Download MP3

In this raw coming-of-age memoir, in the vein of The Short and Tragic Life of Robert Peace, The Other Wes Moore, and Someone Has Led This Child to Believe, Rob Henderson vividly recounts growing up in foster care, enlisting in the US Air Force, attending elite universities, and pioneering the concept of “luxury beliefs” — ideas and opinions that confer status on the upper class while inflicting costs on the less fortunate.

Rob Henderson was born to a drug-addicted mother and a father he never met, ultimately shuttling between ten different foster homes in California. When he was adopted into a loving family, he hoped that life would finally be stable and safe. Divorce, tragedy, poverty, and violence marked his adolescent and teen years, propelling Henderson to join the military upon completing high school.

An unflinching portrait of shattered families, desperation, and determination, Troubled recounts Henderson’s expectation-defying young life and juxtaposes his story with those of his friends who wound up incarcerated or killed. He retreads the steps and missteps he took to escape the drama and disorder of his youth. As he navigates the peaks and valleys of social class, Henderson finds that he remains on the outside looking in. His greatest achievements — a military career, an undergraduate education from Yale, a PhD from Cambridge — feel like hollow measures of success. He argues that stability at home is more important than external accomplishments, and he illustrates the ways the most privileged among us benefit from a set of social standards that actively harm the most vulnerable.

Rob Henderson grew up in foster homes in Los Angeles and the rural town of Red Bluff, California. He joined the US Air Force at the age of seventeen. Once described as “self-made” by the New York Times, Rob subsequently received a BS from Yale University and a PhD in psychology from St. Catharine’s College, Cambridge. His writing has appeared in the New York Times, Wall Street Journal, Boston Globe, and more. His weekly newsletter is sent to more than forty thousand subscribers. Learn more at RobKHenderson.com. His new book is Troubled: A Memoir of Foster Care, Family, and Social Class.

Shermer and Henderson discuss:

  • Autobiographies and memoirs and the hindsight bias
  • Memoirs: Tara Westover, Educated; Amber Scorah, Leaving the Witness; J.D. Vance, Hillbilly Elegy; Yeonmi Park, In Order to Live: A North Korean Girl’s Journey to Freedom
  • Genes, Environment, and Luck/Contingency
  • Childhood: drug-addicted mother, absent father, 10 different foster homes
  • 60% of boys in foster care are later incarcerated; 3% graduate from college
  • Marriage, divorce and childhood outcomes; one parent vs. two
  • Poverty, welfare programs, and social safety nets
  • The trouble with boys and men: competitiveness, risk-taking, and violence, “the young male syndrome” (Margo Wilson and Martin Daly)
  • Alcohol, drugs, depression
  • Choice: top 1% of educational attainment or top 1% of childhood instability
  • Luxury beliefs of educated elites
  • College education vs. having a parent who cares enough to make sure you get to class
  • Wealthy but unstable home vs. low-income but stable home
  • How many who wield the most influence in society only pay lip service to inequality
  • What it was like in the military
  • What it was like at Yale
  • What it was like at Cambridge
  • What does it mean to be “self-made”?
  • Steven Pinker’s How the Mind Works
  • Jonathan Haidt’s lecture on the telos of universities
  • Nicholas Christakis and Yale’s privileged students
  • Jordan Peterson
  • The Coddling and Canceling of the American Mind
  • Self-Help books
  • Warrior-Scholar Project.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

Biofrequency Gadgets are a Total Scam

neurologicablog Feed - Fri, 02/16/2024 - 4:51am

I was recently asked what I thought about the Solex AO Scan. The website for the product includes this claim:

AO Scan Technology by Solex is an elegant, yet simple-to-use frequency technology based on Tesla, Einstein, and other prominent scientists’ discoveries. It uses delicate bio-frequencies and electromagnetic signals to communicate with the body.

The AO Scan Technology contains a library of over 170,000 unique Blueprint Frequencies and created a hand-held technology that allows you to compare your personal frequencies to these Blueprints in order to help you achieve homeostasis, the body’s natural state of balance.

This is all hogwash (to use the technical term). Throwing out the names Tesla and Einstein, right off, is a huge red flag. This is a good rule of thumb – whenever these names (or Galileo) are invoked to hawk a product, it is most likely a scam. I guess you can say that any electrical device is based on the work of any scientist who had anything to do with electromagnetism.

What are “delicate bio-frequencies”? Nothing – they don’t exist. The idea, an old one used in scam medical devices for decades, is that the cells in our bodies have their own unique “frequency” and you want these frequencies to be in balance and healthy. If the frequencies are blocked or off in some way, this causes illness. You can therefore read these frequencies to diagnose disease, and you can likewise alter these frequencies to restore health and balance. This is all complete nonsense, not based on anything in reality.

Living cells, of course, do have tiny electromagnetic fields associated with them. Electrical potential is maintained across all cellular membranes. Specialized cells, like muscles and nervous tissue, use this potential as the basis for their function. But there is no magic “frequency” associated with these fields. There is no “signature” or “blueprint”. That is all made up nonsense. They claim to have researched 170,000 “Blueprint Frequencies” but the relevant science appears to be completely absent from the published literature. And of course there are no reliable clinical trials indicating that any type of frequency-based intervention such as this has any health or medical application.

As an aside, there are brainwave frequencies (although this is not what they are referring to). These are caused by networks of neurons in the brain all firing together with a regular frequency. We can also pick up the electrical signals caused by the contraction of the heart – a collection of muscle cells all firing in synchrony. When you contract a skeletal muscle, we can also record that electrical activity – again, because there are lots of cells activating in coordination. Muscle contractions have a certain frequency to them. Motor units don’t just contract – they fire at an increasing frequency as they are recruited, peaking (in a healthy muscle) at 10 Hz. We measure these frequencies to look for nerve or motor neuron damage. If you cannot recruit as many motor units, the ones you can recruit will fire faster to compensate.

These are all specialized tests looking at specific organs with many cells firing in a synchronous fashion. If you are just looking at the body in general, not nervous tissue or muscles, the electrical signals are generally too tiny to measure and would just be white noise anyway. You will not pick up “frequencies”, and certainly not anything with any biological meaning.

In general, be very skeptical of any “frequency” based claims. That is just a science-sounding buzzword used by some to sell dubious products and claims.

The post Biofrequency Gadgets are a Total Scam first appeared on NeuroLogica Blog.

Categories: Skeptic

Using AI and Social Media to Measure Climate Change Denial

neurologicablog Feed - Thu, 02/15/2024 - 5:14am

A recent study finds that 14.8% of Americans do not believe in global climate change. This number is roughly in line with what recent surveys have found, such as this 2024 Yale study, which put the figure at 16%. In 2009, by comparison, the figure was 33% (although this was a peak – the 2008 result was 21%). The numbers are also encouraging when we ask about possible solutions, with 67% of Americans saying that we should prioritize development of green energy and take steps to become carbon neutral by 2050. The good news is that we now have a solid majority of Americans who accept the consensus on climate change and broadly support measures to reduce our carbon footprint.

But there is another layer to the study I first mentioned – the methods used in deriving the numbers. It was not a survey. It used artificial intelligence to analyze posts on X (Twitter) and their networks. The fact that the results align fairly well with more tried-and-true methods, like surveys, somewhat validates the approach. Of course surveys can be variable as well, depending on exactly how questions are asked and how populations are targeted. But multiple well-designed surveys by experienced institutions, like Pew, can create an accurate picture of public attitudes.

The advantage of analyzing social media is that it can more easily provide vast amounts of data. The authors report:

We used a Deep Learning text recognition model to classify 7.4 million geocoded tweets containing keywords related to climate change. Posted by 1.3 million unique users in the U.S., these tweets were collected between September 2017 and May 2019.
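The study’s actual model and data are not reproduced here, but the general technique – classifying short texts by stance – can be illustrated with a toy stand-in. The tiny naive Bayes classifier below is trained on four invented example tweets (all labels and texts are hypothetical, purely for illustration):

```python
import math
from collections import Counter

# Invented training data standing in for hand-labeled tweets.
train = [
    ("climate change is a hoax invented for grant money", "denial"),
    ("global warming is fake news wake up", "denial"),
    ("the climate crisis demands urgent action now", "accept"),
    ("carbon emissions are warming the planet", "accept"),
]

def train_nb(examples):
    """Count words per label; these counts are the whole 'model'."""
    word_counts = {"denial": Counter(), "accept": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest (log) naive Bayes score."""
    vocab = {w for counts in word_counts.values() for w in counts}
    best, best_score = None, -math.inf
    for label, counts in word_counts.items():
        total = sum(counts.values())
        score = math.log(label_counts[label])
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((counts[w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

wc, lc = train_nb(train)
print(classify("warming is a hoax", wc, lc))  # prints "denial"
```

The real study used a deep learning model over 7.4 million tweets; the point of the sketch is only that stance classification reduces to scoring each post against learned word statistics and picking the more likely label.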

That’s a lot of data. As is almost always the case, however, there is a price to pay for using methods that capture such vast amounts of data – the data is not strictly controlled. It’s observational. It is a self-selected group – people who post on X. It therefore may not be representative of the general population. Because the results broadly agree with more traditional survey methods, however, this does suggest that any such selection effects balanced out. Also, they adjusted for any skew toward certain demographic groups – so if younger people were overrepresented in the sample, they adjusted for that.
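The demographic adjustment described above amounts to reweighting: count each group’s responses in proportion to its share of the general population rather than its share of the sample. A minimal sketch, with invented toy numbers (not figures from the study):

```python
def poststratify(population_shares, group_rates):
    """Population-weighted average of per-group rates."""
    return sum(population_shares[g] * group_rates[g] for g in group_rates)

# Toy numbers: the sample skews young relative to the population.
population_shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_shares     = {"18-34": 0.55, "35-54": 0.30, "55+": 0.15}
denial_rate       = {"18-34": 0.10, "35-54": 0.15, "55+": 0.20}

# Raw estimate reflects the skewed sample; adjusted reflects the population.
raw = sum(sample_shares[g] * denial_rate[g] for g in denial_rate)
adjusted = poststratify(population_shares, denial_rate)
print(f"raw estimate: {raw:.4f}, adjusted: {adjusted:.4f}")
```

With these numbers the raw estimate understates denial, because the overrepresented young group denies least; reweighting corrects for exactly that kind of skew.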

The results also showed some more detail. Because the posts were geocoded, the analysis can look at regional differences. They found broadly that acceptance of global warming science was highest on the coasts, and lower in the Midwest and South. There were also significant county-level differences. They found:

Political affiliation has the strongest correlation, followed by level of education, COVID-19 vaccination rates, carbon intensity of the regional economy, and income.

Climate change denial, again in line with prior data, correlated strongly with identifying as a Republican. That was the dominant factor. It’s likely that other factors, like COVID-19 vaccination rates, also derive from political affiliation. But it does suggest that when one rejects the scientific consensus and the opinion of experts on climate change, one is more likely to do so on other issues as well.

Because they did network analysis, they were also able to analyze who was talking to whom, and who the big influencers were. They found, again unsurprisingly, that there are networks of users who accept climate change and networks that reject it, with very little communication between the two. This shows that the echo-chamber effect on social media is real, at least on this issue. This is a disturbing finding, perhaps the most disturbing of this study (even if we already knew it).

It reflects in data what many of us feel – that social media and the internet have transformed our society from one with a basic level of shared culture and facts to one in which different factions are siloed in different realities. There have always been different subcultures, with vastly different ideologies and life experiences. But the news was the news, perhaps with different spin and emphasis. Now it is possible for people to exist in completely different and relatively isolated information ecosystems. We don’t just have different priorities and perspectives – we live in different realities.

The study also identified individual influencers who were responsible for many of the climate change denial posts. Number one among them was Trump, followed by conservative media outlets. Trump is, of course, a polarizing figure, a poster child for the echo-chamber social media phenomenon. For many he represents either salvation or the destruction of American democracy.

On the bright side, it does seem there is still the possibility of movement in the middle. The middle may have shrunk, but still holds some sway in American politics, and there does seem to be a number of people who can be persuaded by facts and reason. We have moved the needle on many scientific issues, and attitudes have improved on topics such as climate change, GMOs, and nuclear power. The next challenge is fixing our dysfunctional political system so we can translate solid public majorities into tangible action.

The post Using AI and Social Media to Measure Climate Change Denial first appeared on NeuroLogica Blog.

Categories: Skeptic

Who Should You Trust? Why Appeals to Scientific Consensus Are Often Uncompelling

Skeptic.com feed - Thu, 02/15/2024 - 12:00am

The public is frequently told to “trust the science,” and then ridiculed for holding any views that differ from what is reported to be the scientific consensus. Should non-experts then naively accept the authorized narrative, or are there good reasons to be skeptical?

Is sugar-free gum good for your teeth?

When we’re told that four out of five dentists recommend sugarless gum, we assume that five dentists independently examined the evidence and four of them concluded that chewing gum is good for your dental health. However, those dentists aren’t examining completely independent evidence. They sat through the same lectures in dental school, they have ready access to the same studies, they go to the same conventions, and they talk to each other, so we should worry about correlated errors.

Even worse, most dentists may have never even read a study about chewing gum, let alone conducted one of their own. Suppose they heard that most dentists recommend sugarless gum; they might well figure those other dentists are probably doing so for good reason, and so they would recommend it too. In other words, the dentists are following the herd mentality and just going along to get along. Perhaps most dentists believe chewing gum is good for dental health because they believe that most other dentists believe this, even though few if any of them have any good, independent reason to think this is true.

Herding can be a rational behavior. It would not be a good use of time or money for every dentist to conduct an independent study to assess the evidence and determine whether sugarless gum is good for dental health. However, herding can lead an entire scientific community to converge on the wrong answer, and they typically won’t know whether they’ve converged on the right or the wrong answer.
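How an entire community can rationally converge on the wrong answer is easy to simulate. The toy model below is a rough sketch loosely in the spirit of the classic Bikhchandani–Hirshleifer–Welch information-cascade model (all parameters are invented): each dentist gets a private signal that is right 70% of the time, but once the public record of recommendations leans two or more to one side, everyone afterward follows the herd and ignores their own signal.

```python
import random

def cascade_run(signal_accuracy, n_agents, seed):
    """Simulate agents choosing in sequence. True = the correct belief."""
    rng = random.Random(seed)
    public_choices = []
    for _ in range(n_agents):
        signal = rng.random() < signal_accuracy  # noisy private evidence
        yes = sum(public_choices)
        no = len(public_choices) - yes
        # Counting heuristic standing in for the full Bayesian update:
        # once public choices lean by 2+, follow the herd.
        if yes - no >= 2:
            choice = True
        elif no - yes >= 2:
            choice = False
        else:
            choice = signal
        public_choices.append(choice)
    return public_choices

runs, wrong = 2000, 0
for i in range(runs):
    choices = cascade_run(signal_accuracy=0.7, n_agents=50, seed=i)
    if sum(choices) < len(choices) / 2:  # majority ended up wrong
        wrong += 1
print(f"{wrong / runs:.1%} of simulated communities herd onto the wrong answer")
```

Even though every individual signal is 70% accurate, a sizable minority of simulated communities lock onto the wrong answer, because two early wrong signals are enough to start a cascade that later, better-informed agents never break.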

We can see how a dangerous emperor-has-no-clothes situation could easily arise. Suppose a dentist questions whether chewing gum really is good for dental health. They consider raising the issue at a convention but then remember that most dentists recommend gum and worry that they’ll be mocked for questioning the consensus view. So they decide to keep quiet, the field moves on, nobody’s beliefs are challenged, and no new evidence is collected.

This may be a low-stakes example, and there probably are good scientific reasons to believe that chewing sugar-free gum is good for dental health. But herding is a problem in many scientific fields, including those studying arguably more important questions, such as the health of democracy.

How We Vote

Consider this example from an academic subfield I happen to know well. Among scholars of political behavior, there is a broad consensus that American voters don’t know or care much about policy, and their voting decisions are largely driven by party identity. Such claims are commonplace in academic papers, conferences, classrooms, textbooks, and public writings. To a member of the general public who has never taken a political science class, this claim might seem absurd. The average American may not be as informed as we would hope, and their policy preferences might diverge from ours. Yet even a brief conversation with a voter would likely reveal that they know and care about policy and think about it when they decide which candidates to support in elections. How can such a strong claim unsupported by good evidence be the scientific consensus?

When I challenged this scientific consensus,1 I received significant public and private criticism from scholars of political behavior. A few of my critics engaged with my arguments and evidence, but most did not. Instead, they typically made appeals to authority, such as, “How dare you challenge what’s been established wisdom for seven decades?”

In other words, they were herding. They assumed that something must be right because that’s been the consensus view in their field for a long time. They were not able or willing to provide further evidence or arguments in support of their position, and they simply dismissed anyone who challenged them, thereby creating a strong incentive for other scholars to uphold the consensus.

The Good and the Bad Scenario

Roughly speaking, there are two different ways in which an apparent scientific consensus might arise. In the good scenario, scientists are conducting genuinely good work, rigorously vetting each other’s work, and the theory, the evidence, and the analyses supporting the consensus view are all really strong. In this scenario, if reasonable, objective, intelligent individuals from outside the field examined all of the evidence, they too would be provisionally confident in the consensus.

In the bad scenario, the scientists are not always conducting good work, don’t rigorously vet each other’s work (or they engage in selective vetting based on whether or not they like and/or agree with the conclusions of a study), and the theory, the evidence, or the analyses supporting the consensus are not robust. In this scenario, a reasonable, objective, intelligent individual from outside the field who examined the evidence, the analyses, and the theory would be, at best, genuinely uncertain. Nevertheless, some scientists and all too many media pundits and politicos repeatedly state that there is a scientific consensus in support of their preferred view. Dissenters, whether scientists themselves or not, are ostracized.

Unfortunately, the bad scenario occurs too often—much more often than many scientists, commentators, and cultural leaders presume. We already saw one way in which the bad scenario can arise—herding. Here are some additional ways in which the bad scenario can arise and why skeptics should view appeals to scientific consensus, on their own, as uncompelling. I also discuss how non-experts can better distinguish between the good and bad scenarios, and how scientists can do more to avoid the latter.

The Illusion of Scientific Consensus

Commentators and leaders often assert that their position is the consensus view, but without providing direct evidence of that consensus. Just as social media and public discourse don’t accurately reflect the views of regular Americans, they also need not accurately reflect the views of scientists. Making it even more difficult to assess scientific consensus, those who do not hold the views of the purported consensus are often dismissed as not being legitimate members of the scientific community.

In the rare cases in which we are presented with systematic evidence on the views of the scientific community, the results are often underwhelming. Doran and Zimmerman conducted a survey of earth scientists to assess the extent of scientific consensus on climate change, and they concluded that “the debate on the authenticity of global warming and the role played by human activity is largely nonexistent among those who understand the nuances and scientific basis of long-term climate processes.”2 Specifically, in one question, they asked earth scientists “Do you think human activity is a significant contributing factor in changing mean global temperatures?” and 82 percent of them said yes. The meaning of significant is open to interpretation, and even among people who answer yes, there could be genuine disagreement about the extent to which climate change is a problem and the right ways to address it. Furthermore, the survey’s response rate was only 31 percent, and we don’t know if those responding are representative of all scientists who were contacted. Even still, nearly one in five scientists surveyed did not answer yes to this seemingly anodyne question. So maybe the consensus isn’t as strong as we’re frequently told.
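The worry about the 31 percent response rate can be made precise. If we assume nothing about the scientists who did not respond, the reported figures only pin the population-wide “yes” share down to a wide interval. A worst-case bounds calculation, using the survey’s reported numbers:

```python
def nonresponse_bounds(response_rate, rate_among_respondents):
    """Worst-case bounds on the population rate: nonrespondents could all
    have answered yes, or all have answered no."""
    observed = response_rate * rate_among_respondents
    return observed, observed + (1 - response_rate)

# 31% response rate; 82% of respondents answered yes.
low, high = nonresponse_bounds(0.31, 0.82)
print(f"true 'yes' share could be anywhere from {low:.0%} to {high:.0%}")
```

Under these worst-case assumptions the true share could fall anywhere from roughly a quarter to over nine in ten. In practice nonrespondents are rarely that extreme, but the width of the interval is exactly why representativeness matters.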

Doran and Zimmerman further find that the apparent scientific consensus on climate change gets stronger as they restrict their sample. For example, if they focus on scientists who actively and primarily publish papers on climate change, 97 percent of those scientists answered yes to the question above. One potential interpretation is that when people become immersed in climate science research, they increasingly converge to the truth. Another is that earth scientists who do not hold the desirable view on this question are prevented from publishing papers on climate science. The recent admissions of one climate scientist suggest that journals indeed will not publish the papers of authors who do not conform to the preferred narrative.3

Broad Consensus Doesn’t Mean High Certainty

Scientists in a particular field all have access to essentially the same information, so I would expect many of them to have similar beliefs on many scientific questions. How confident are they in those beliefs?

Even if 100 percent of earth scientists agreed that human activity is a significant contributing factor to an increase in mean temperature readings from around the globe, it would still tell us nothing about the certainty with which they held those beliefs. If someone is only 51 percent sure of a claim, they might answer yes to the forced-choice two-option question. So for all we know, although 82 percent of earth scientists answered yes, all of those individual scientists might still be genuinely uncertain.

For this reason, the percentage of scientists who agree with a statement is not a very informative statistic. How sure are they that human activity influences global mean temperature? (Also, how much do they think human activity influences temperature? If it’s a small effect, we’ll want to consider the other costs and benefits before making any rash decisions; if it’s a large effect, we should allocate more resources to accelerate the transition away from fossil fuels.) For some questions, 49 percent certainty might be more than enough to warrant taking a costly action— if you were 49 percent sure that your car was going to explode in the next minute, you would get out and run. For other questions, 51 percent certainty is not nearly enough—if you were 51 percent sure that you were going to win the lottery, you wouldn’t quit your job.

Unfortunately, surveys of scientists typically elicit no information about the certainty with which the respondents hold their beliefs. However, since 18 percent of scientists did not agree that human activity is a significant factor in changing mean global temperatures, and since those 18 percent have access to largely the same information as the majority, I would be surprised if all of the 82 percent who agree with the statement hold that belief with strong certainty. Indeed, it would be quite strange if 82 percent of experts were virtually certain while 18 percent of experts weren’t even sure enough to say yes to the binary question.
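To see how a forced binary question can manufacture the appearance of confident agreement, consider a toy simulation. The belief distribution below is invented purely for illustration; the point is only that scientists who are barely more than 50 percent sure must still answer “yes”:

```python
import random

random.seed(0)

def simulate_survey(n=10_000):
    """Each scientist holds a subjective probability that the claim is true,
    drawn here (purely for illustration) from a uniform range centered just
    above 50%. They answer 'yes' to a binary question iff that probability
    exceeds 0.5."""
    beliefs = [random.uniform(0.40, 0.70) for _ in range(n)]
    yes_share = sum(b > 0.5 for b in beliefs) / n
    mean_certainty = sum(beliefs) / n
    return yes_share, mean_certainty

yes, certainty = simulate_survey()
print(f"answered yes: {yes:.0%}, average subjective probability: {certainty:.0%}")
```

In this hypothetical population, roughly two-thirds of respondents answer yes even though the average respondent is only about 55 percent sure, which is exactly the gap a headline consensus figure conceals.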

Correlated Errors

A scientific estimate can diverge from the truth for many reasons, but the hope of the scientific community is that if we conduct a lot of studies, the errors will cancel out, and when we conduct meta-analyses, our estimates will converge to the truth.

The problem with this logic is that not all errors cancel each other out. Often, scientific studies are biased, meaning that even if we repeated them over and over with infinitely large sample sizes, we still wouldn’t get closer to the truth. Further, the biases of different, related studies are likely correlated with one another.

Consider the increasingly common claim that diet sodas are bad for your health. Although we currently lack a compelling biological explanation as to why, dozens of scientific studies report that consuming diet soda and other artificially sweetened beverages causes a host of health problems including obesity, diabetes, and heart attacks. What’s the evidence for this claim? People who regularly consume diet soda typically have more health problems than people who don’t consume sweet beverages (people who drink sugary beverages are usually excluded or analyzed separately).

Why is there a strong correlation between diet soda and health problems? It could be that diet soda causes health problems. Alternatively, health problems might cause people to drink diet soda. For example, perhaps people switch from regular soda to diet soda after they become obese or diabetic. Or there could be confounding factors that influence both diet soda consumption and health. For example, perhaps people with a sweet tooth are more likely to consume diet soda and also more likely to consume sugary desserts, which cause health problems. These latter possibilities are sources of bias. Because of reverse causation and confounding, the correlation between diet soda consumption and health is not, in and of itself, convincing evidence that diet soda is bad for you. For all we know, it could be good for you insofar as it’s a substitute for sugary foods and beverages.

It doesn’t matter how many observational, correlational studies we conduct on this topic. They will likely all yield similar results, and we still would not learn much about the actual effects of diet soda on health. If all the studies are biased in the same direction, a scientific consensus could emerge that is based on hundreds or even thousands of studies and still be wrong.
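The point about correlated biases can be made concrete with a toy simulation in which diet soda has, by construction, zero causal effect on health, yet a hidden confounder (a sweet tooth driving both soda consumption and sugary-dessert consumption) makes every replication of the observational study find soda drinkers sicker. All numbers are invented for illustration:

```python
import random

random.seed(1)

def observational_study(n=5_000):
    """One simulated observational study. 'sweet_tooth' is a hidden
    confounder that raises both diet-soda consumption and (via sugary
    desserts) health problems. The true causal effect of diet soda on
    health is zero by construction."""
    soda, sick = [], []
    for _ in range(n):
        sweet_tooth = random.random() < 0.5
        drinks_soda = random.random() < (0.6 if sweet_tooth else 0.2)
        has_problem = random.random() < (0.4 if sweet_tooth else 0.1)
        soda.append(drinks_soda)
        sick.append(has_problem)
    p_sick_drinkers = sum(s for s, d in zip(sick, soda) if d) / sum(soda)
    p_sick_abstainers = sum(s for s, d in zip(sick, soda) if not d) / (n - sum(soda))
    return p_sick_drinkers - p_sick_abstainers  # observed "risk difference"

# Every replication finds soda drinkers sicker, despite zero causal effect.
diffs = [observational_study() for _ in range(20)]
print(all(d > 0 for d in diffs))
```

Running the same flawed design twenty times produces twenty “significant” positive associations, illustrating how a unanimous literature can still be unanimously wrong.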

Selective Reporting

Scientific results that happen to align with the predispositions of journal editors and peer-reviewers are more likely to be written up and published than those that go against the accepted wisdom in a field. Scientists often conduct multiple tests and selectively report those that are the most publishable, meaning that the published record is often biased, even if each individual analysis is unbiased. In some cases, scientific results might be skewed in the direction of sensational, surprising, or newsworthy findings. However, once a field has settled upon an apparent consensus and desires to maintain it, perhaps we should worry that results affirming the consensus are much more likely to be published than those that conflict with that consensus.

Archives of Sexual Behavior, a scientific journal published by Springer Nature, recently retracted an article on rapid onset gender dysphoria in response to criticism from activists.4 The retraction note says nothing about the scientific validity of either the data or analysis in that article. Rather, the paper was purportedly retracted on the grounds that participants in a survey did not consent to participate in a study, a claim that the author of the study contests.5

In 2017, Hypatia, an academic philosophy journal, published a paper entitled “In Defense of Transracialism” [that is, changing one’s racial identity].6 Hundreds of academics signed an open letter asking the journal to retract the paper.7 The open letter did not seriously engage with the arguments in the paper; rather, it asserted that the availability of the paper causes harm. The associate editors of the journal issued an apology and condemned the paper. The editor-in-chief criticized the associate editors and defended the journal’s review process but resigned soon after.8 Ultimately, the paper was not retracted, but the philosophy community has signaled that certain arguments and conclusions would not be allowed in their field.

There are many more examples of academic studies being retracted or condemned for reasons unrelated to merit, credibility, integrity, or validity. And unfortunately, these cases are just the tip of the iceberg. For every public retraction, there are likely many more studies that never make it through peer review because of their undesired or unpalatable results. And for every one of these, there are likely many more studies that never get written up and submitted because the author reasonably infers that a paper with such results would either not be published or would harm their reputation.

Some scientists and journal editors openly admit to engaging in this kind of selective reporting. In 2022, the editors of Nature Human Behaviour published new ethics guidelines for their journal.9 They reserved the right to decline publication of any submitted paper and retract any published paper that might cause “substantial risk of harm.” In other words, the editors of the journal can reject or retract any study for reasons that are completely unrelated to its scientific validity. So, if the results and conclusion of one scientific result are deemed to be safe by journal editors, while those in another are deemed harmful, only the safe study gets published. This would lead to a scientific consensus around the safe result, even if it were factually wrong.

Fraud

Another obvious but important reason the scientific record might fail to reflect the truth is that some scientists engage in fraud. They might manipulate data points to make their results more favorable or even fabricate entire data sets out of whole cloth. We would all hope that this kind of outright scientific misconduct is rare, but it does happen. Two prominent behavioral scientists from Harvard and Duke University both independently appear to have intentionally manipulated or fabricated data in different parts of the same study—ironically, a study about dishonesty.10 The president of Stanford recently resigned (but kept his faculty appointment) after evidence came to light that strongly suggests he intentionally manipulated images in his neuroscience studies.11 And these are just recent, high-profile examples that made the news. There are likely more cases of fraud that don’t come to light, don’t make the news, and do not lead to a correction of the scientific record.

Career Incentives

Partly because of the phenomena discussed above, we can’t know if a scientist who publicly supports a conclusion genuinely holds that view. To publish papers, secure grants, get a good job, get tenure, receive praise, and avoid banishment, scientists must not question the key tenets of their field. Some of this is natural. A biochemist is not likely to make much progress in her field if she doesn’t accept the atomic theory or the periodicity of elements, and biochemistry as a field won’t make much progress if it has to devote significant journal space and lab time to questions that are already well settled. However, most scientific claims aren’t nearly so well-established, and we’ll never know if they’re truly right or wrong if scientists aren’t able to publish novel theoretical perspectives, data, or analyses that challenge them. Paradigms, as defined by Thomas Kuhn, could simply never shift.

In addition to the incentives for individual researchers, scientific fields as a whole often have a strong incentive to collectively uphold a consensus. Virologists won’t be able to secure as much funding and support for their research if the public and the rest of the scientific community were to think that virology researchers caused a global pandemic. As a result, others outside the field shouldn’t necessarily be persuaded by the sheer fact that virologists oppose the lab-leak theory of COVID-19 origins, which, it just so happens, would be very bad for their careers.

Science vs. Values

Not all important questions are scientific questions. What is the effect of eating bacon on my chances of having a heart attack? is a scientific question. Should I eat bacon? is not. When you consider whether or not to eat bacon, you’ll want to think about a lot of things that can be scientifically quantified such as health risks, nutritional value, economic costs, and so on. However, you’ll also want to think about other questions such as How much do I enjoy eating bacon?, What are the ethical implications of eating pig products?, and Does my enjoyment of bacon outweigh the health risks and ethical downsides?. These latter questions are about your personal values, and given the personal, experiential nature of these questions, scientists are probably less equipped to answer them than you are.

Just as individual decision-making involves values, so does public policy. So, if we banned the sale of bacon, how much would it increase unemployment in Iowa? is a scientific question, while Should we ban the sale of bacon? is not. And scientists’ values aren’t necessarily any more enlightened than those of citizens, elected officials, and bureaucrats in deciding the latter. So we should consider scientific evidence when assessing the costs or benefits of different policy decisions, but science alone cannot dictate which policies to implement.

Unfortunately, the difference between science questions and value judgments is often forgotten or ignored by scientists themselves. We’re often told things such as, there is a scientific consensus that we should raise the gas tax, economists support surge pricing for parking, education researchers oppose standardized testing, or a scientific journal endorses a political candidate,12 and similar statements should be roughly as persuasive as anthropologists prefer mayonnaise over mustard. Scientists shouldn’t be in the business of telling people what to do. They should provide people with information so that they can make better decisions and policies conditional on their values.

Politicization

This article presents a number of different explanations for the potential emergence of an unreliable scientific consensus. All of these concerns are exacerbated when a scientific question becomes politicized or is of great public interest. If a particular scientific claim happens to align with the values, policy preferences, or political objectives of scientists, you can imagine that the incentives for misrepresenting the scientific consensus, selectively reporting results, accepting the conclusions of biased studies, and herding become even greater. And if an undesirable result can lead a scientist to be ostracized by not just their peers but by journalists, friends, family, and activists, the distortionary incentives become even stronger.

This poses a vexing problem for the otherwise-promising practice of evidence-based policy. All else equal, the more relevant science is for policy, the less reliable it will likely be. This is because scientists, like everyone else, are individuals with their own values, biases, and incentives. They probably already had strong views about policy before they analyzed any data, which means they’re even more likely than normal to report results selectively, publish biased studies, herd on a politically desirable conclusion, and so on. Unfortunately, this means that we should be more skeptical of scientific findings when that question is particularly politicized or policy-relevant.

All that said, avoid nihilism or worse.

Consumers of scientific information should be skeptical of an apparent scientific consensus, and they should think about some of the factors discussed here when deciding how skeptical they should be. How politicized is this topic? What are the career incentives for the scientists? How easy would it be for scientists to selectively report only the favorable results? Would a study have been published if it had found the opposite result or a null result? The answers to these questions will not definitively tell us whether the scientific consensus is right or wrong, but they should help us decide the degree to which we should simply trust the consensus or investigate further.

Although skepticism is warranted, nihilism is not. Even when a topic is highly politicized and when there are good reasons to worry about biased studies, selective reporting, herding, and so on, the scientific community can still find the right answer. The debate over evolution by natural selection would seem to feature many of the problems I’ve discussed, and yet, the scientific consensus is almost surely right in that case. However, you shouldn’t think evolution is right just because it’s the scientific consensus. You should think it’s right because the evidence is strong. And if scientists want to convince more people about evolution, they shouldn’t simply appeal to scientific consensus. They should present and discuss the evidence.

Science is a process, not a result.

If we want to learn more about the universe for the sake of enjoyment or with the goal of improving our lives, science is our best hope. So don’t become a nihilist, and don’t replace science with something worse such as random guessing or deference to authority, religious or political. Remember that science is just the word we use to describe the process by which we generate new knowledge by questioning, experimenting, analyzing, and testing to see if we are wrong rather than confirming that we are right. It involves repeated iterations of hypothesizing, experimenting, analyzing, empirical testing, and arguing.

If a group of so-called scientists stop theorizing, testing, and challenging, then they’re no longer engaged in science. Perhaps they’re engaged in advocacy, which is a respectable thing to do, particularly if the theory, evidence, and arguments on their side are strong. Yet advocacy and science are distinctly different activities and shouldn’t be conflated.

Science is not a specific person13 or even a group of people. Science is not a particular result or conclusion. It is not content but method. It is to remain always open to skepticism while never succumbing to cynicism. The goal of science is not for everyone to agree or behave in the same way. To the extent that there is a goal or purpose of science, it’s for us to challenge what we thought we knew, to obtain new information, and thereby get successively closer to the truth. Of course, we don’t know what the truth is, and the scientific process is imperfect, so, as part of a healthy scientific process, we can sometimes move away from the truth. However, if we’re doing this correctly, we will get successively closer to truth more often than not.

This article appeared in Skeptic magazine 28.4

At various points in our history, there has been a scientific consensus that the sun revolves around the earth, that humans do not share a common ancestor with other animals, that force equals mass times velocity, and that bloodletting is an effective medical treatment. More recently, doctors told people for decades to treat soft-tissue injuries with ice, while the most current evidence now suggests that cold therapy delays healing.14

To its credit, the scientific process has allowed us to correct these mistakes. However, the scientific record is imperfect and ever-changing. So even if the scientific consensus might be right more often than not, we should not accept it on faith alone.

* * *

The scientific community should actively work to address the problems discussed here. It should try to set up better institutions and career incentives to reduce the prevalence of biased studies, selective reporting, and herding. It should do a better job of conveying the uncertainty associated with any scientific claims and beliefs. And it should not impose its values on others. In the meantime, members of the public should continue to be skeptical, but not cynical, while asking for better evidence and arguments before reflexively accepting a reported scientific consensus.

About the Author

Anthony Fowler is a Professor in the Harris School of Public Policy at the University of Chicago. He is the editor-in-chief of the Quarterly Journal of Political Science, an author of Thinking Clearly with Data, and a host of Not Another Politics Podcast.

References
  1. Fowler, A. (2020). Partisan Intoxication or Policy Voting?. Quarterly Journal of Political Science, 15(2), 141–179.
  2. Doran, P.T., & Zimmerman, M.K. (2009). Examining the Scientific Consensus on Climate Change. Eos, Transactions American Geophysical Union, 90(3), 22–23.
  3. https://rb.gy/9h54n
  4. https://rb.gy/23irm
  5. https://rb.gy/gulwn
  6. Tuvel, R. (2017). In Defense of Transracialism. Hypatia, 32(2), 263–278.
  7. https://rb.gy/fys22
  8. https://rb.gy/24iw4
  9. https://rb.gy/5qmym
  10. https://rb.gy/acict
  11. https://rb.gy/ihwxa
  12. https://rb.gy/vmjj7
  13. https://rb.gy/adckm
  14. Wang, Z. R., & Ni, G. X. (2021). Is It Time to Put Traditional Cold Therapy in Rehabilitation of Soft-Tissue Injuries Out to Pasture?. World Journal of Clinical Cases, 9(17), 4116.
Categories: Critical Thinking, Skeptic

Flow Batteries – Now With Nanofluids

neurologicablog Feed - Tue, 02/13/2024 - 5:12am

Battery technology has been advancing nicely over the last few decades, with a fairly predictable incremental increase in energy density, charging time, stability, and cycle life. We now have lithium-ion batteries with a specific energy of 296 Wh/kg – these are in use in existing Teslas. This translates to battery electric vehicles (BEVs) with ranges of 250-350 miles per charge, depending on the vehicle. That is more than enough range for most users. Incremental advances continue, and every year we should expect newer Li-ion batteries with slightly better specs, which add up quickly over time. But still, range anxiety is a thing, and batteries with that range are heavy.

What would be nice is a shift to a new battery technology with a leap in performance. There are many battery technologies being developed that promise just that. We actually already have one, shifting from graphite anodes to silicon anodes in the Li-ion battery, with an increase in specific energy to 500 Wh/kg. Amprius is producing these batteries, currently for aviation but with plans to produce them for BEVs within a couple of years. Panasonic, which builds 10% of the world’s EV batteries and contracts with Tesla, is also working on a silicon anode battery and promises to have one in production soon. That is basically a doubling of battery capacity from the average in use today, and puts us on a path to further incremental advances. Silicon anode lithium-ion batteries should triple battery capacity over the next decade, while also making a more stable battery that uses fewer (or no – they are working on this too) rare earth elements and no cobalt. So even without any new battery breakthroughs, there is a very bright future for battery technology.
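A quick back-of-envelope calculation shows why specific energy matters so much for weight. The 300-mile target and the 280 Wh/mile consumption figure below are illustrative assumptions, not the specs of any particular vehicle, and real packs carry structural overhead not counted here:

```python
def cell_mass_kg(range_miles=300, wh_per_mile=280, specific_energy=296):
    """Mass of battery cells needed to store enough energy for a given
    range: total energy required (Wh) divided by specific energy (Wh/kg).
    All default values are illustrative assumptions."""
    return range_miles * wh_per_mile / specific_energy

print(f"at 296 Wh/kg: {cell_mass_kg():.0f} kg of cells")
print(f"at 500 Wh/kg: {cell_mass_kg(specific_energy=500):.0f} kg of cells")
```

Under these assumptions, doubling the specific energy cuts well over a hundred kilograms of cell mass for the same range, which is why the jump from roughly 300 to 500 Wh/kg is such a big deal.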

But of course, we want more. Battery technology is critical to our green energy future, so while we are tweaking Li-ion technology and getting the most out of that tech, companies are working to develop something to replace (or at least complement) Li-ion batteries. Here is a good overview of the best technologies being developed, which include sodium-ion, lithium-sulphur, lithium-metal, and solid state lithium-air batteries. As an aside, the reason lithium is a common element here is because it is the third-lightest element (after hydrogen and helium) and the first that can be used for this sort of battery chemistry. Sodium is right below lithium on the periodic table, so it is the next lightest element with similar chemistry.

But for the rest of this article I want to focus on one potential successor to Li-ion batteries – flow batteries. Flow batteries are so named because they use two liquid electrochemical substances to carry their charge and create electrical current. Flow batteries are stable, less prone to fires than lithium batteries, and have a potential critical advantage – they can be recharged by swapping out the electrolyte. They can also be recharged in the conventional way, by plugging them in. So theoretically a flow battery could provide the same BEV experience as a current Li-ion battery, but with an added option. For “fast charging” you could pull into a station, connect a hose to your car, and swap out spent electrolyte for fresh electrolyte, fully charging your vehicle in the same time it would take to fill up a tank. This is the best of both worlds – for those who own their own off-street parking space (82% of Americans) routine charging at home is super convenient. But for longer trips, the option to just “fill the tank” is great.

But there is a problem. As I have outlined previously, battery technology is one of those tricky technologies that requires a suite of characteristics in order to be functional, and any one falling short is a deal-killer. For flow batteries the problem is that their energy density is only about 10% that of Li-ion batteries. This makes them unsuitable for BEVs. This is also an inherent limitation of chemistry – you can only dissolve so much solute in a liquid. However, as you likely have guessed based upon my headline, there is also a solution to this limitation – nanofluids. Nanoparticles suspended in a fluid can potentially have much greater energy density.

Research into this approach actually goes back to 2009, at Argonne National Laboratory and the Illinois Institute of Technology, who did the initial proof of concept. Then in 2013 ARPA-E, the Department of Energy’s advanced research projects agency, gave a grant to the same team to build a working prototype, which they did. Those same researchers then spun off a private company, Influit Energy, to develop a commercial product, with further government contracts for such development. As an aside, we see here an example of how academic researchers, government funding, and private industry work together to bring cutting-edge technology to market. It can be a fruitful arrangement, as long as the private companies ultimately repay the public support they built upon.

Where is this technology now? John Katsoudas, a founder and chief executive of Influit, claims that they are developing a battery with a specific energy of 550 to 850 Wh/kg, with the potential to go even higher. That’s roughly double to triple that of current EV batteries. They also claim these batteries (soup to nuts) will be cost competitive with Li-ion batteries. Of course, claims from company executives always need to be taken with a huge grain of salt, and I don’t get too excited until a product is actually in production, but this does all look very promising.

Part of the technology involves how many nanoparticles they can cram into their electrolyte fluid. They claim they are currently up to 50% by weight, but believe they can push that to 80%. At 80% nanoparticles, the fluid would have the viscosity of motor oil.
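To see why the loading fraction matters so much, here is a simple mass-weighted estimate. Both Wh/kg inputs are hypothetical placeholders chosen only to show the shape of the relationship, not Influit’s actual figures:

```python
def fluid_specific_energy(particle_fraction,
                          particle_wh_per_kg=1000,
                          carrier_wh_per_kg=50):
    """Back-of-envelope mass-weighted specific energy of a nanofluid
    electrolyte: energy-dense nanoparticles suspended in a much less
    energy-dense carrier fluid. Both Wh/kg values are hypothetical."""
    return (particle_fraction * particle_wh_per_kg
            + (1 - particle_fraction) * carrier_wh_per_kg)

for frac in (0.5, 0.8):
    print(f"{frac:.0%} loading -> {fluid_specific_energy(frac):.0f} Wh/kg")
```

Because the carrier fluid contributes almost nothing, the fluid’s specific energy scales nearly linearly with particle loading, which is why pushing from 50% to 80% by weight is the key engineering battle.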

A big part of any new technology, often neglected in the hype, is infrastructure. We are facing this issue with BEVs. The technology is great, but we need an infrastructure of charging stations. They are being built, but currently are a limiting factor to public acceptance of the technology (lack of chargers contributes to range anxiety). The same issue would exist with nanoparticle flow batteries. However, they would have at least as good an infrastructure for normal recharging as current BEVs. They would also benefit from pumping electrolyte fluid as a means of fast charging. Such fluid could be processed and recharged on site, but it could also be trucked or piped as with existing gasoline infrastructure. Still, this is not like flipping a switch. It could take a decade to build out an adequate infrastructure. But again, in the meantime such batteries can be charged as normal.

I don’t know if this battery technology will be the one to displace lithium-ion batteries. A lot will depend on which technologies make it to market first, and what infrastructure investments we make. It’s possible that the silicon anode Li-ion batteries may improve so quickly they will eclipse their competitors. Or the solid state batteries may make a big enough leap to crush the competition. Or companies may decide that pumping fluid is the path to public acceptance and go all-in on flow batteries. It’s a good problem to have, and will be fascinating to watch this technology race unfold.

The only prediction that seems certain is that battery technology is advancing quickly, and by the 2030s we should have batteries for electric vehicles with 2-3 times the energy density and specific energy of those in common use today. That will be a different world for BEVs.

 

The post Flow Batteries – Now With Nanofluids first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #923: The Voodoo Ax Murders

Skeptoid Feed - Tue, 02/13/2024 - 2:00am

Were two waves of ax murders in the American south in the early 20th century truly associated with Louisiana Voodoo?

Categories: Critical Thinking, Skeptic

Sandro Galea — How US Public Health Has Strayed From Its Liberal Roots

Skeptic.com feed - Tue, 02/13/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss405_Sandro_Galea_2024_02_13.mp3 Download MP3

The Covid-19 response was a crucible of politics and public health—a volatile combination that produced predictably bad results. As scientific expertise became entangled with political motivations, the public-health establishment found itself mired in political encampment.

It was, as Sandro Galea argues, a crisis of liberalism: a retreat from the principles of free speech, open debate, and the pursuit of knowledge through reasoned inquiry that should inform the work of public health.

Across fifty essays, Within Reason chronicles how public health became enmeshed in the insidious social trends that accelerated under Covid-19. Galea challenges this intellectual drift towards intolerance and absolutism while showing how similar regressions from reason undermined social progress during earlier eras. Within Reason builds an incisive case for a return to critical, open inquiry as a guiding principle for the future public health we want—and a future we must work to protect.


Dr. Sandro Galea is a physician, epidemiologist, author and the Robert A. Knox Professor at Boston University School of Public Health. He previously held academic and leadership positions at Columbia University, the University of Michigan, and the New York Academy of Medicine. He has published more than 1000 scientific journal articles, 75 chapters, and 24 books, and his research has been featured extensively in current periodicals and newspapers. Galea holds a medical degree from the University of Toronto and graduate degrees from Harvard University and Columbia University. Dr. Galea was named one of Time magazine’s epidemiology innovators and has been listed as one of the “World’s Most Influential Scientific Minds.” He is past chair of the board of the Association of Schools and Programs of Public Health and past president of the Society for Epidemiologic Research and of the Interdisciplinary Association for Population Health Science. He is an elected member of the National Academy of Medicine and the American Epidemiological Society. He is the author of The Contagion Next Time and Well: What We Need to Talk About When We Talk About Health. His new book is Within Reason: A Liberal Public Health for an Illiberal Time.

Shermer and Galea discuss:

  • his immigrant experience in the U.S. coming from Malta
  • why he left practicing medicine for public health
  • What is public health?
  • public health vs. private health
  • mask recommendations vs. mandates
  • vaccine recommendations vs. mandates
  • the case against moralism in public health
  • Galea’s progressive views: Medicare for all, UBI, generous social safety net, reparation for slavery, liberal immigration policies, commonsense gun safety reform
  • public health/healthcare and: race, class, sex/gender
  • moralizing and public health.
Show Notes

Stigma: smoking: “We can now plausibly say the choice to smoke or not smoke is, in a sense, a choice between right and wrong. The same was to some extent true of COVID-19. We did know that wearing masks and limiting our physical interaction would reduce the spread of the disease. Taking these steps was—there’s no getting around it—a matter of personal responsibility, a moral consideration, and it was right for us to acknowledge this.”

Remote work adversely affected the poor more than the rich, as did closing schools, restaurants, etc. “If we ignore the populations whose lives are shaped by conditions different from those that shape our own, we are acting contrary to the spirit of liberalism. COVID provided many examples of how such conditions create gaps in the lived experience of populations. We know, for example, that there is a clear link between income quartile and ability to physically distance by working remotely. Data from the Bureau of Labor Statistics has shown that 62 percent of earners in the top twenty-fifth quartile were able to work remotely, compared with just 9 percent of those in the bottom twenty-fifth. In stigmatizing those who do not adhere to physical distancing protocols, we risk targeting those with the least personal control over whether they do so.”

At a fundamental level, it would be characterized by social and economic justice. By economic justice, I mean a world where economic systems are geared toward fairness rather than the inequality that currently benefits the well-off few at the expense of the less well-off many. By social justice, I mean a world where no one is unfairly held back by characteristics of identity—whether race, sexual orientation, or gender.

Johns Hopkins University DEI Office, Diversity Word of the Month

“Privilege is a set of unearned benefits given to people who are in a specific social group. Privilege operates on personal, interpersonal, cultural and institutional levels, and it provides advantages and favors to members of dominant groups at the expense of members of other groups. In the United States, privilege is granted to people who have membership in one or more of these social identity groups: White people, able-bodied people, heterosexuals, cisgender people, males, Christians, middle or owning class people, middle-aged people, English-speaking people. Privilege is characteristically invisible to people who have it. People in dominant groups often believe they have earned the privileges they enjoy or that everyone could have access to these privileges if only they worked to earn them. In fact, privileges are unearned and are granted to people in the dominant groups whether they want those privileges or not, and regardless of their stated intent.”

Galea: “To answer every challenge with a call for complete upheaval of all that came before is to be neither serious nor effective as a movement. We may not want to use the language of overthrow when pragmatic reform is called for, just as we may not want to talk about incremental reform when our speech might support something bolder. If we continually cry “revolution” when we really need basic, commonsense reforms, we are liable to drive otherwise sympathetic partners out of our coalition. We also risk being taken less seriously when systemic change really is necessary, with our calls for bold action falling on ears that have long since ceased to listen.”

Great Barrington Declaration

It claimed harmful COVID-19 lockdowns could be avoided via the fringe notion of “focused protection”, by which those most at risk could purportedly be kept safe while society otherwise took no steps to prevent infection.

5 Obstacles to a Full Restoration of Public Health Liberal Ideals
  1. Science/public health have become politicized
  2. We have forgotten our roots (free speech and thought, reasoned methodology, pursuit of truth)
  3. We have become poor at weighing trade-offs
  4. Media feedback loops have become the new peer review
  5. We have prioritized the cultivation of influence over the pursuit of truth.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

The Exoplanet Radius Gap

neurologicablog Feed - Mon, 02/12/2024 - 5:03am

As of this writing, there are 5,573 confirmed exoplanets in 4,146 planetary systems. That is enough exoplanets, planets around stars other than our own sun, that we can do some statistics to describe what’s out there. One curious pattern that has emerged is a relative gap in the radii of exoplanets between 1.5 and 2.0 Earth radii. What is the significance, if any, of this gap?

First we have to consider whether this is an artifact of our detection methods. The most common method astronomers use to detect exoplanets is the transit method – carefully observe a star over time, precisely measuring its brightness. If a planet moves in front of the star, the brightness will dip, remain low while the planet transits, and then return to its baseline brightness. This produces a classic light curve that astronomers recognize as a planet orbiting that star in the plane of observation from the Earth. The first time such a dip is observed, it marks a suspected exoplanet; if the same dip is seen again, that confirms it. This also gives us the orbital period. This method is biased toward exoplanets with short periods, because they are easier to confirm. If an exoplanet has a period of 60 years, it would take 60 years to confirm, so we haven’t confirmed a lot of those.
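The transit logic described above can be sketched in a few lines of toy code. This is a minimal illustration with invented numbers, not real photometry: a flat light curve dips by a fixed depth during each transit, and repeated dips give the orbital period.

```python
# Toy illustration of the transit method. All values here are invented
# for the demo; real light curves are noisy and need statistical fitting.

def make_light_curve(n_days, period, duration, depth):
    """Baseline brightness 1.0, dipping by `depth` during each transit."""
    curve = []
    for day in range(n_days):
        in_transit = (day % period) < duration
        curve.append(1.0 - depth if in_transit else 1.0)
    return curve

def find_dips(curve, threshold=0.995):
    """Return the start day of each run of points below the threshold."""
    starts = []
    below = False
    for day, flux in enumerate(curve):
        if flux < threshold and not below:
            starts.append(day)
        below = flux < threshold
    return starts

curve = make_light_curve(n_days=100, period=20, duration=2, depth=0.01)
dips = find_dips(curve)
# A second matching dip confirms the candidate and gives the period.
period = dips[1] - dips[0] if len(dips) >= 2 else None
print(dips)    # [0, 20, 40, 60, 80]
print(period)  # 20
```

The short-period bias falls out naturally: a 20-day planet shows five dips in 100 days of observation, while a 60-year planet would show at most one.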

There is also the wobble method. We can observe the path that a star takes through the sky. If that path wobbles in a regular pattern, that is likely due to the gravitational tug from a large planet or other dark companion that is orbiting it. This method favors more massive planets closer to their parent star. Sometimes we can also directly observe exoplanets by blocking out their parent star and seeing the tiny bit of reflected light from the planet. This method favors large planets distant from their parent star. There are also a small number of exoplanets discovered through gravitational microlensing, an effect of general relativity.

None of these methods, however, explain the 1.5 to 2.0 radii gap. It’s also likely not a statistical fluke given the number of exoplanets we have discovered. Therefore it may be telling us something about planetary evolution. But there are lots of variables that determine the size of an exoplanet, so it can be difficult to pin down a single explanation.

One theory has to do with the atmospheres of planets. Exoplanets that are small and rocky but larger than Earth are called super-earths. Here is an example of a recent super-earth discovered in the habitable zone of a nearby red dwarf star – TOI-715 b. It has a mass of 3.02 Earth masses, and a radius 1.55 that of Earth. So it is right on the edge of the gap. I calculated the surface gravity of this planet, which is about 1.25 g. It has an orbital period of 19.3 days, which means it is likely tidally locked to its parent star. This planet was discovered by the TESS telescope using the transit method.
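That surface-gravity figure is easy to check. In Earth units, surface gravity scales as mass over radius squared (g ∝ M/R²), so no physical constants are needed:

```python
# Surface gravity of TOI-715 b in Earth units: g = M / R^2
# (mass and radius in Earth masses and Earth radii, as given in the post).
mass_earths = 3.02
radius_earths = 1.55

surface_gravity_g = mass_earths / radius_earths ** 2
print(round(surface_gravity_g, 2))  # 1.26
```

This comes out to about 1.26 g; the tiny difference from the 1.25 g quoted above is just rounding of the input values.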

Planets like TOI-715 b, at or below the gap, are likely close to their parent stars and have relatively thin atmospheres (something like Earth’s or less). If the same planet were farther out from its parent star, however, with that mass it would likely retain a thick atmosphere. This would increase the apparent radius of the planet as measured by the transit method (which cannot distinguish a rocky surface from a thick atmosphere), increasing its size to greater than two Earth radii – vaulting it across the gap. These worlds, above the gap, are called mini-Neptunes or sub-Neptunes. So according to this theory the main factor is distance from the parent star and whether or not the planet can retain a thick atmosphere. When small rocky worlds get big enough and far enough from their parent star, they jump to the sub-Neptune category by retaining a thick atmosphere.

But as I said, there are lots of variables here, such as the mass of the parent star.  A recent paper adds another layer – what about planets that migrate? One theory of planetary formation (mainly through simulations) holds that some planets may migrate either closer to or farther away from their parent stars over time. Also the existence of “hot Jupiters” – large gas planets very close to their parent stars – suggests migration, as such planets likely could not have formed where they are.  It is likely that Neptune and Uranus migrated farther away from the sun after their formation. This is part of a broader theory about the stability of planetary systems. Such systems, almost by definition, are stable. If they weren’t, they would not last for long, which means we would not observe many of them in the universe. Our own solar system has been relatively stable for billions of years.

There are several possible explanations for this remarkable stability. One is that this is simply how planetary systems evolve. The planets form from a rotating disc of material, which means they form roughly circular orbits all going in the same plane and same direction. But it is also possible that early stellar systems develop many more planets than ultimately survive. Those in stable orbits survive long term, while those in unstable orbits either fall into their parent star or get ejected from the system to become rogue planets wandering between the stars. There is therefore a selection for planets in stable orbits. There is also a third process likely happening, and that is planetary migration. Planets may migrate to more stable orbits over time. Eventually all the planets in a system jockey into position in stable orbits that can last billions of years.

Observing exoplanetary systems is one way to test our theories about how planetary systems form and evolve. The relative gap in planet size is one tiny piece of this puzzle. With migrating planets, the paper suggests, sub-Neptunes that migrate closer to their parent star have their thick atmospheres stripped away, leaving behind smaller rocky worlds below the gap. The authors also hypothesize that a very icy world may migrate closer to its parent star, melting the ice and forming a thick atmosphere, jumping the gap to the larger planetary size.

What all of these theories of the gap have in common is the presence or absence of a thick atmosphere, which makes sense. There are some exoplanets in the gap, but it’s just much less likely. It’s hard to get a planet right in the gap, because either it’s too light to have a thick atmosphere, or too massive not to have one. The gap can be seen as an unstable region of planetary formation.

The more time that goes by the more data we will have and the better our exoplanet statistics will be. Not only will we have more data, but longer observation periods allow for the confirmation of planets with longer orbital periods, so our data will become progressively more representative. Also, better telescopes will be able to detect smaller worlds in orbits more difficult to observe, so again the data will become more representative of what’s out there.

Finally, I have to add, with more than 5,000 exoplanets and counting, we have still not found an Earth analogue – a small rocky world of roughly Earth size and mass in the habitable zone of an orange or yellow star. Until we find one, it’s hard to do statistics, except to say that truly Earth-like planets are relatively rare. But I anxiously await the discovery of the first true Earth twin.

The post The Exoplanet Radius Gap first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #970 - Feb 10 2024

Skeptics Guide to the Universe Feed - Sat, 02/10/2024 - 8:00am
What's the Word: Cardinal; News Items: New Virus-Like Microbes Found, SLIM Lunar Lander, Misinformation and Wellness Influencers, Super Earth in Habitable Zone, Climate Change and Storms; Who's That Noisy, Name That Logical Fallacy, Science or Fiction
Categories: Skeptic

Ronald Lindsay on How the Left’s Dogmas on Race and Equity Harm Liberal Democracy and Invigorate Christian Nationalism

Skeptic.com feed - Sat, 02/10/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss404_Ronald_Lindsay_2024_02_09.mp3 Download MP3

In Against the New Politics of Identity, philosopher Ronald A. Lindsay offers a sustained criticism of the far-reaching cultural transformation occurring across much of the West by which individuals are defined primarily by their group identity, such as race, ethnicity, gender identity, and sexual orientation. Driven largely by the political Left, this transformation has led to the wholesale grouping of individuals into oppressed and oppressor classes in both theory and practice. He warns that the push for identity politics on the Left predictably elicits a parallel reaction from the Right, including the Right’s own version of identity politics in the form of Christian nationalism. As Lindsay makes clear, the symbiotic relationship that has formed between these two political poles risks producing even deeper threats to Enlightenment values and Western democracy. If we are to preserve a liberal democracy in which the rights of individuals are respected, he concludes, the dogmas of identity politics must be challenged and refuted. Against the New Politics of Identity offers a principled path for doing so.

Dr. Ronald Lindsay, a philosopher (PhD, Georgetown University) and lawyer (JD, University of Virginia) is the author of The Necessity of Secularism and Future Bioethics. Although his non-fiction works focus on different topics, two threads unite them: Lindsay’s gift for thinking critically about accepted narratives and his strong commitment to individual rights, whether it’s the right to assisted dying, the right to religious freedom, or the right of individuals to be judged on their own merit, as opposed to their group identity. In addition to his books, Lindsay has also written numerous philosophical and legal essays, including the entry on Euthanasia in the International Encyclopedia of Ethics. In his spare time, Lindsay plays baseball—baseball, not softball. The good news is he maintains a batting average near .300; the bad news is his fielding average is not much higher. A native of Boston, Ron Lindsay currently lives in Loudoun County, Virginia with his wife, Debra, where their presence is usually tolerated by their cat. His new book is: Against the New Politics of Identity: How the Left’s Dogmas on Race and Equity Harm Liberal Democracy and Invigorate Christian Nationalism.

Shermer and Lindsay discuss:

  • Who is worse, the Left or the Right?
  • Critical Race Theory (CRT)
  • Diversity, Equity and Inclusion (DEI)
  • identity politics: identity or politics?
  • overt racism vs. systemic racism
  • liberalism vs. illiberalism
  • What is progressive? What is woke?
  • What are the true motives of woke progressive leftists?
  • How widespread is the problem of woke ideology?
  • standpoint epistemology
  • equality vs. equity
  • race and class
  • cancel culture on the political Left and Right
  • Christian nationalism and its agenda
  • abortion
  • Why do Blacks make less money, own fewer and lower quality homes, work in less prestigious jobs, hold fewer seats in the Senate and House of Representatives, run fewer Fortune 500 companies, etc.?
Show Notes

From the Skeptic article “Systemic Racism—Explained”

The article, by Mahzarin R. Banaji, Susan T. Fiske & Douglas S. Massey, appeared in Skeptic 27.3.

Race is baked into the history of the U.S. going back to colonial times and continuing through early independence when slavery was quietly written into the nation’s Constitution. Although the 13th, 14th, and 15th Amendments to the Constitution ended slavery and granted due process, equal protection, and voting rights to the formerly enslaved, efforts to combat systemic racism in the U.S. faltered when Reconstruction collapsed in the disputed election of 1876, which triggered the withdrawal of federal troops from the South.

From 1876 to 1900, 90 percent of all African Americans lived in the South and were subject to the dictates of the repressive Jim Crow system; 83 percent lived in poor rural areas, occupying ramshackle dwellings clustered in small settlements in or near the plantations where they worked.

Between 1900 and 1970, millions of African Americans left the rural South in search of better lives in industrializing cities throughout the nation. As a result of this migration, by 1970 nearly half of all African Americans had come to live outside the South, 90 percent in urban areas. It was during this period of Black urbanization that the ghetto emerged as a structural feature of American urbanism, making Black residential segregation into the linchpin of a new system of racial stratification that prevailed throughout the U.S. irrespective of region.

In 1924, the National Association of Real Estate Brokers adopted a code of ethics stating that “a Realtor should never be instrumental in introducing into a neighborhood a character of property or occupancy, members of any race or nationality, or any individuals whose presence will clearly be detrimental to property values in that neighborhood.”

Redlining through the 1960s…

By 1970, high levels of Black residential segregation were universal throughout metropolitan America. As of 1970, 61 percent of Black Americans living in US metropolitan areas lived under hypersegregation, a circumstance unique to Americans. Although in theory, segregation should have withered away after the Civil Rights Era, it has not.

In 2010, the average index of Black–White segregation remained high and a third of all Black metropolitan residents continued to live in hypersegregated areas. This reality prevails despite the outlawing of racial discrimination in housing (the 1968 Fair Housing Act) and lending (the 1974 Equal Credit Opportunity Act and the 1977 Community Reinvestment Act).

In the early 1960s, more than 60 percent of White Americans agreed that Whites have a right to keep Blacks out of their neighborhoods. By the 1980s the percentage had dropped to 13 percent.

Although overt discrimination in housing and lending has clearly declined in response to legislation, covert discrimination continues. Rental and sales agents today are less likely to respond to emails from people with stereotypically Black names or to reply to phone messages left by speakers who “sound Black.” A recent meta-analysis of 16 experimental housing audit studies and 19 lending analyses conducted since 1970 revealed that sharp racial differentials in the number of units recommended by realtors and inspected by clients have persisted and that racial gaps in loan denial rates and borrowing cost have barely changed in 40 years.

Audit studies, conducted across the social and behavioral sciences, include a subset of resume studies in which researchers send the same resume out to apply for jobs, but change just one item: the candidate’s name is Lisa Smith or Lakisha Smith. Then, they wait to see who gets the callback. The bias is clear: employers avoid “Black-sounding” names.

No other group in the history of the U.S. has ever experienced such intense residential segregation in so many areas and over such a long period of time.


Categories: Critical Thinking, Skeptic

JET Fusion Experiment Sets New Record

neurologicablog Feed - Fri, 02/09/2024 - 5:06am

Don’t get excited. It’s always nice to see incremental progress being made with the various fusion experiments happening around the world, but we are still a long way off from commercial fusion power, and this experiment doesn’t really bring us any closer, despite the headlines. Before I get into the “maths”, here is some quick background.

Fusion is the process of combining light elements into heavier elements. This is the process that fuels stars. We have been dreaming about a future powered by clean, abundant fusion energy for at least 80 years. The problem is – it’s really hard. In order to get atoms to smash into each other with sufficient energy to fuse, you need high temperatures and pressures, like those at the core of our sun. We can’t replicate the density and pressure at a star’s core, so we have to compensate here on Earth with even higher temperatures.

There are a few basic fusion reactor designs. The tokamak design (like the JET reactor) is a torus, with a plasma of hydrogen isotopes (usually deuterium and tritium) inside the torus contained by powerful magnetic fields. The plasma is heated and squeezed by brute magnetic force until fusion happens. Another method, the pinch method, also uses magnetic fields, but with a stream of plasma that gets pinched at one point to high density and temperature. Then there is inertial confinement, which essentially uses an implosion created by powerful lasers to create a brief moment of high density and temperature. More recently a group has used sonic cavitation to create individual instances of fusion (rather than sustained fusion). These methods are essentially in a race to create commercial fusion. It’s an exciting (if very slow motion) race.

There are essentially three thresholds to keep an eye out for. The first is fusion – does the setup create any measurable fusion? You might think that this is the ultimate milestone, but it isn’t. Remember, the goal for commercial fusion is to create net energy. Fusion creates energy through heat, which can then be used to run a conventional turbine. So just achieving fusion, while super nice, is not even close to where we need to get. If you are putting thousands of times the energy into the process as you get out, that is not a commercial power plant. The next threshold is “ignition”, or sustained fusion, in which the heat energy created by fusion is sufficient to sustain the fusion process. (This is not relevant to the cavitation method, which does not even try to sustain fusion.) A couple of labs have recently achieved this milestone.

But wait, there’s more. Even though they achieved ignition, and (as was widely reported) produced net fusion energy, they are still far from a commercial plant. The fusion created more energy than went into the fusion reaction itself. But the entire process still used about 100 times the total energy output. So we are only about 1% of the way toward the ultimate goal of total net energy. When framed that way, it doesn’t sound like we are close at all. We need lasers or powerful magnets that are more than 100 times as efficient as the ones we are using now, or the entire method needs to pick up an order of magnitude or two of greater efficiency. That is no small task. It’s quite possible that we simply can’t do it with existing materials and technology. Fusion power may have to wait for some future unknown technology.

In the meantime we are learning an awful lot about plasmas and how to create and control fusion. It’s all good. It’s just not on a direct path to commercial fusion. It’s not just a matter of “scaling up”. We need to make some fundamental changes to the whole process.

So what record did the JET fusion experiment break? Using the tokamak design – a torus constrained by magnetic fields – they were able to create fusion and generate “69 megajoules of fusion energy for five seconds.” Although the BBC reports it produced “69 megajoules of energy over five seconds.” That is not a subtle difference. Was it 69 megajoules per second for five seconds, or was it 13.8 megajoules per second for five seconds, for a total of 69 megajoules? More to the point – what percentage of the energy input was this? I could not find anyone reporting it (and ChatGPT didn’t know). But I did find this – “In total, when JET runs, it consumes 700 – 800 MW of electrical power.” A joule is one watt of power for one second.

It’s easy to get the power vs energy units confused, and I’m trying not to do that here, but the sloppy reporting is no help. Watts are a measure of power. Watts multiplied by time are a measure of energy, so a watt-second or watt-hour is a unit of energy. From here:

1 Joule (J) is the MKS unit of energy, equal to the force of one Newton acting through one meter.
1 Watt is the power of a Joule of energy per second

So since joules are a measure of energy, it makes more sense that it would be a total amount of energy created over 5 seconds (so the BBC was more accurate). So 700 MW of power over 5 seconds is 3,500 megajoules of energy input, compared to 69 megajoules output. That is 1.97%, which is close to where the best fusion reactors are, so I think I got that right. However, that’s only counting the energy to run the reactor for the 5 seconds it was fusing. What about all the energy for starting up the process and everything else, soup to nuts?
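The back-of-envelope arithmetic above can be checked in a few lines, using the low-end 700 MW figure for JET's electrical draw (1 MW running for 1 second is 1 MJ):

```python
# Rough input/output energy comparison for the JET record run.
# 700 MW is the low end of JET's reported electrical consumption.
input_power_mw = 700
run_time_s = 5
output_energy_mj = 69          # reported fusion energy output

input_energy_mj = input_power_mw * run_time_s   # MW x s = MJ
ratio_percent = 100 * output_energy_mj / input_energy_mj
print(input_energy_mj)          # 3500
print(round(ratio_percent, 2))  # 1.97
```

Using the 800 MW high-end figure instead gives 4,000 MJ in and a ratio of about 1.7%, so the conclusion is the same either way.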

This is not close to a working fusion power plant. Some reporting says the scientists hope to double the efficiency with better superconducting magnets. That would be nice – but double is still nowhere close. We need two orders of magnitude, at least, just to break even. We probably need closer to three orders of magnitude for the whole thing to be worth it, cradle to grave. We have to create all that tritium too, remember. Then there is inefficiency in converting the excess heat energy to electricity. That may be an order of magnitude right there.

I am not down on fusion. I think we should continue to research it. Once we can generate net energy through fusion reactors, that will likely be our best energy source forever – at least for the foreseeable future. It would take super advanced technology to eclipse it. So it’s worth doing the research. But just being realistic, I think we are looking at the energy of the 22nd century, and maybe the end of this one. Not the 2040s as some optimists predict. I hope to be proven wrong on this one. But either way, premature hype is likely to be counterproductive. This is a long term research and development project. It’s possible no one alive today will see a working fusion plant.

At least, for the existing fusion reactor concepts I think this is true. The exception is the cavitation method, which does not even try to sustain fusion. They are just looking for a “putt putt putt” of individual fusion events, each creating heat. Perhaps this, or some other radical new approach, will cross over the finish line much sooner than anticipated and make me look foolish (although happily so).


The post JET Fusion Experiment Sets New Record first appeared on NeuroLogica Blog.

Categories: Skeptic
