Microbes Aboard the ISS

neurologicablog Feed - Tue, 01/23/2024 - 5:00am

As I have written many times, including in yesterday's post, keeping people in space is hard. The environment of space, or really anywhere not on Earth, is harsh and unforgiving. One issue, for example, rarely addressed in science fiction or even in discussions of space travel, is radiation. We don't really have a solution for radiation exposure outside the protective atmosphere and magnetic field of Earth.

There are other challenges, however, that stem not from space itself but from the fact that people living off Earth will have to be in an enclosed environment. Whether it is a space station or a habitat on the Moon or Mars, people will be living in a relatively small, finite physical space. These spaces will be closed environments – no opening a window to let in some fresh air. Our best experience so far with this type of environment is the International Space Station (ISS). By all accounts, the ISS smells terrible – a combination of antiseptic, body odor, sweat, and basically 22 years of accumulated funk.

Perhaps even worse, the ISS is colonized with numerous pathogenic bacteria and various types of fungus. The bacteria are mainly human-associated – the kinds of critters that live on and in humans. According to NASA:

The researchers found that microbes on the ISS were mostly human-associated. The most prominent bacteria were Staphylococcus (26% of total isolates), Pantoea (23%) and Bacillus (11%). They included organisms that are considered opportunistic pathogens on Earth, such as Staphylococcus aureus (10% of total isolates identified), which is commonly found on the skin and in the nasal passage, and Enterobacter, which is associated with the human gastrointestinal tract.

This is similar to what one might find in a gym or crowded office space, but worse. This is something I have often considered – when establishing a new environment off Earth, what will the microbiota look like? On the one hand, establishing a new base is an opportunity to avoid many infectious organisms. Strict quarantine procedures could create a settlement without flu viruses, COVID, HIV, or many of the other germs that plague humans. I can imagine strict medical examinations and isolation prior to gaining access to such a community. But can such efforts to make an infection-free settlement succeed?

What is unavoidable are human-associated organisms. We are colonized with bacteria, most of which are benign, but some of which are opportunistic pathogens – we live with them, but they will infect us given the chance. There are also viruses that many of us harbor in a dormant state but that can become reactivated, such as chicken pox. It would be nearly impossible to find people free of all such organisms. Also – in such an environment, would the population become vulnerable to infection because their immune systems weaken in the absence of a regular workout? (The answer is almost certainly yes.) And would this set them up for potentially catastrophic disease outbreaks when an opportunistic bug strikes?

In the end it is probably impossible to make an infection-free society. The best we can do is keep out the worst bugs, like HIV; we will likely never be free of the common cold, or of living with bacteria.

There is also another issue – food contamination. There has been a research program aboard the ISS to grow food on board, like lettuce, as a supplemental source of fresh produce. In the long term, however, NASA would like to develop an infrastructure for self-sustaining food production. If we are going to settle Mars, for example, it would be best to be able to produce all necessary food on Mars. But our food crops are not adapted to the microgravity of the ISS, or to the low gravity of the Moon or Mars. A recent study shows that this might produce unforeseen challenges.

First, prior research has shown that lettuce grown aboard the ISS is colonized with many different bacteria, including some groups capable of acting as pathogens. There have not been any cases of foodborne illness aboard the ISS, which is great – so far the amounts and specific bacteria involved have not caused disease (and thoroughly washing the lettuce is probably a good idea). But it shows the potential for bacterial contamination.

What the new study looks at is the behavior of the stomata of lettuce leaves under simulated microgravity (the plants are slowly rotated so they can never orient to gravity). Stomata are the little openings through which plants breathe. Plants can open and close their stomata under different conditions, and will generally close them when stressed by bacteria to prevent the bugs from entering and causing infection. Under simulated microgravity, however, the lettuce leaves opened rather than closed their stomata in response to a bacterial stress. This is not good, and would make them vulnerable to infection. Further, there are friendly bacteria that cause the stomata to close, helping plants defend against harmful bacteria. But in simulated microgravity these friendly bacteria failed to trigger stomatal closure.

This is concerning, but again we don't know how practically relevant it is. We have too little experience with locally grown plants aboard the ISS. It suggests, however, that we can choose, or perhaps cultivate or engineer, plants that are better adapted to microgravity. We can test which cultivars retain their defensive stomatal closure even in simulated microgravity. Once we do that, we may be able to determine which gene variants confer that adaptation. This is the direction the researchers hope to go next.

So yeah, while space is harsh and the challenges immense, people are clever and we can likely find solutions to whatever space throws at us. Likely we will need to develop crops that are adapted to microgravity, lunar gravity, and Martian gravity. We may need to develop plants that can grow in treated Martian soil, or lunar regolith. Or perhaps off Earth we need to go primarily hydroponic.

I also wonder how solvable the funk problem is. It seems likely that a sufficiently robust air purifier could make a huge impact. Environmental systems will not only need to scrub CO2, add oxygen, and manage humidity and temperature aboard a station, ship, or habitat – they will also need serious defunking ability.

 

The post Microbes Aboard the ISS first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #920: The Headless Goats of the Chattahoochee

Skeptoid Feed - Tue, 01/23/2024 - 2:00am

The carcasses of headless goats are floating in the Chattahoochee River – too many for a prosaic explanation.

Categories: Critical Thinking, Skeptic

Chris Anderson — Infectious Generosity: The Ultimate Idea Worth Spreading

Skeptic.com feed - Tue, 01/23/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss399_Chris_Anderson_2024_01_17.mp3 Download MP3

As head of TED, Chris Anderson has had a ringside view of the world’s boldest thinkers sharing their most uplifting ideas. Inspired by them, he believes that it’s within our grasp to turn outrage back into optimism. It all comes down to reimagining one of the most fundamental human virtues: generosity. What if generosity could become infectious generosity? Consider:

  • how a London barber began offering haircuts to people experiencing homelessness—and catalyzed a movement
  • how two anonymous donors gave $10,000 each to two hundred strangers and discovered that most recipients wanted to “pay it forward” with their own generous acts
  • how TED itself transformed from a niche annual summit into a global beacon of ideas by giving away talks online, allowing millions access to free learning.

In telling these inspiring stories, Anderson has given us “the first page-turner ever written about human generosity” (Elizabeth Dunn). More important, he offers a playbook for how to embark on our own generous acts—whether gifts of money, time, talent, connection, or kindness—and to prime them, thanks to the Internet, to have self-replicating, even world-changing, impact.

Chris Anderson has been the curator of TED since 2001. His TED mantra – "ideas worth spreading" – continues to blossom on an international scale. He lives in New York City and London but was born in a remote village in Pakistan and spent his early years in India, Pakistan, and Afghanistan, where his parents worked as medical missionaries. After boarding school in Bath, England, he went on to Oxford University, graduating in 1978 with a degree in philosophy, politics and economics. Chris then trained as a journalist, working in newspapers and radio, and founded Future Publishing, which focused on specialist computer publications but eventually expanded into other areas such as cycling, music, video games, technology, and design. He then built Imagine Media – publisher of Business 2.0 magazine and creator of IGN, the popular video game website – which published some 150 magazines and websites and employed 2,000 people. This success allowed Chris's nonprofit organization to acquire the TED Conference, then an annual meeting of luminaries in the fields of Technology, Entertainment and Design held in Monterey, California. He expanded the conference's remit to cover all topics, and TED now includes TED Fellows, the TED Prize, TEDx events, and the TED-Ed program, which offers free educational videos and tools to students and teachers. Astonishingly, TED talks have been translated into 100 languages and garner over 1 billion views a year. His new book is Infectious Generosity: The Ultimate Idea Worth Spreading.

Shermer and Anderson discuss:

  • how his life turned out (genes, environment, luck)
  • what makes TED successful while other platforms failed or stalled
  • TED talks go public for free vs. paying customers
  • power laws and giving: do 10% donate 90%?
  • Amanda Palmer gave away her music and asked people to pay: survivorship bias – how many people have tried this and failed?
  • blogs, podcasts, Substack … saturated markets
  • changing business landscape of charging vs. giving away
  • What makes things infectious?
  • What is generosity? Idea vs. character trait—virtue ethics
  • altruism and reciprocal altruism, reputation and self-reputation
  • religion and morality: do we need an “eye in the sky” to be good?
  • Can people be good without God?
  • philanthropy: 2,700 billionaires have more wealth than the 120 poorest countries combined
  • giving & philanthropy seems like a rich-person’s game. How can average people participate?
  • incentivizing giving as a selfish act: why “pay it forward”?
  • public vs. private solutions to social problems
  • How can one person make a difference?
  • The Mystery Experiment
  • Ndugu Effect
  • donor fatigue
  • Giving What We Can.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

Is Mars the New Frontier?

neurologicablog Feed - Mon, 01/22/2024 - 5:08am

In the excellent sci-fi show The Expanse, which takes place a couple hundred years in the future, Mars has been settled and is an independent, self-sustaining society. In fact, Mars is presented as the most scientifically and technologically advanced human society in the solar system. This is attributed to the fact that Martians have had to struggle to survive and build their world, which led to a culture of innovation and dynamism.

This is a version of the Turner thesis, which has been invoked as one justification for the extreme expense and difficulty of settling locations off Earth. I was recently pointed to this article discussing the Turner thesis in the context of space settlement, which I found interesting. The Turner thesis holds that the frontier mindset of the old West created a culture of individualism, dynamism, and democracy that is a critical part of the success of America in general. The theory was popular in the late 19th and early 20th centuries but fell out of academic favor in the second half of the 20th century. Recent papers trying to revive some version of it are less than compelling, showing that frontier exposure correlates only weakly with certain political and social features, and that those features are a mixed bag rather than an unalloyed good.

The article is generally critical of the notion that some version of the Turner thesis should be used to justify settling Mars – that humanity would benefit from a new frontier. I basically agree with the article: the Turner thesis is rather weak and complicated, and analogies between the American Western frontier and Mars (or other space locations) are highly problematic. In every material sense, it's a poor analogy. On the frontier there were already air, food, soil, water, and other people living there. None of those things (as far as we know) exists on Mars.

But I do think that something closer to The Expanse hypothesis is not unreasonable. Just as the Apollo program spawned a lot of innovation and technology, solving the problems of getting to and settling Mars would likely have some positive technological fallout. However, I would not put this forward as a major reason to explore and settle Mars. We could likely dream up many other technological projects here on Earth that would be better investments with a much higher ROI.

I do support space exploration, including human space exploration, however. I largely agree with those who argue that robots are much better adapted to space, and sending our robotic avatars into space is much cheaper and safer than trying to keep fragile biological organisms alive in the harsh environment of space. For this reason I think that most of our space exploration and development should be robotic.

I also think we should continue to develop our ability to send people into space. Yes, this is expensive and dangerous, but I think it would be worth it. One reason is that I think humanity should become a multi-world spacefaring species. This will be really hard in the early days (now), but there is every reason to believe that technological advancements will make it easier, cheaper, and safer. This is not just a hedge against extinction; it also opens up new possibilities for humanity. It is also part of the human psyche to be explorers, and this is one activity that can have a unifying effect on shared human culture (depending, of course, on how it's done).

There is still debate about the effectiveness of sending humans into space for scientific activity. Sure, our robots are capable and getting more capable, but for the time being they are no substitute for having people on site actively carrying out scientific exploration. Landers and rovers are great, but imagine if we had a team of scientists stationed on Mars, able to guide scientific investigations, react to findings, and take research in new directions without having to wait 20 years for the next mission to be designed and executed.

There are also romantic reasons which I don’t think can be dismissed. Being a species that explores and lives in space can have a profound effect on our collective psyche. If nothing else it can inspire generations of scientists and engineers, as the Apollo program did. Sometimes we just need to do big and great things. It gives us purpose and perspective and can inspire further greatness.

In terms of cost, the raw numbers are huge, but then anything the government does on that scale carries huge dollar figures. Comparatively, though, the amount of money we spend on space exploration is tiny next to activities of dubious or even whimsical value. NASA's annual budget is around $23 billion, but Americans spend over $12 billion on Halloween each year. I'm not throwing shade on Halloween, but it's hard to complain about the cost of NASA when we so blithely spend similar amounts on things of no practical value. NASA is only about 0.48% of our annual federal budget – almost a rounding error. I know all spending counts and it all adds up, but this does put things into perspective.
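As a quick sanity check on those numbers (a minimal sketch using only the figures cited in this post; the implied total federal budget is back-calculated from them, not independently sourced):

```python
# Back-of-the-envelope check using the figures cited in this post.
nasa_budget = 23e9      # NASA's annual budget (~$23 billion)
halloween = 12e9        # annual US Halloween spending (~$12 billion)
nasa_share = 0.0048     # NASA as a fraction of the federal budget (0.48%)

# The total federal budget implied by the two NASA figures above:
implied_budget = nasa_budget / nasa_share
print(f"Implied federal budget: ${implied_budget / 1e12:.1f} trillion")  # ~$4.8 trillion

# Halloween spending relative to NASA's budget:
print(f"Halloween / NASA: {halloween / nasa_budget:.0%}")  # ~52%
```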

Americans also spent $108 billion on lottery tickets in 2022. Those have, statistically speaking, almost no value. People are essentially buying the extremely unlikely dream of winning, which most will not. I would much rather buy the dream of space exploration. In fact, that may be a good way to supplement NASA’s funding. Sell the equivalent of NASA lottery tickets for a chance to take an orbital flight, or go to the ISS, or perhaps name a new feature or base on Mars. People spend more for less.

The post Is Mars the New Frontier? first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #967 - Jan 20 2024

Skeptics Guide to the Universe Feed - Sat, 01/20/2024 - 8:00am
Interview with Robert Sapolsky; News Items: Betavolt 50 Year Battery, Moon Landing Delayed, Cloned Monkeys, Converting CO2 into Carbon Nanofibers, Bad Feng Shui; Who's That Noisy; From TikTok: Jellyfish UFO; Science or Fiction
Categories: Skeptic

Why Do Species Evolve to Get Bigger or Smaller

neurologicablog Feed - Fri, 01/19/2024 - 4:58am

Have you heard of Cope's Rule or Foster's Rule? American paleontologist Edward Drinker Cope first noticed a trend in the fossil record: certain animal lineages tend to get bigger over evolutionary time. Most famously this was noticed in the horse lineage, beginning with small, dog-sized species and ending with the modern horse. Bristol Foster noticed a similar phenomenon specific to islands – populations that find their way to islands tend to either increase or decrease in size over time, depending on the availability of resources. This is also called island dwarfism or gigantism (or insular dwarfism or gigantism).

When both of these things happen in the same place there can be some interesting results. On the island of Flores a human lineage, Homo floresiensis (the "Hobbit" species), experienced island dwarfism, while the local rats experienced island gigantism. The result was people living with rats the relative size of large dogs.

Based on these observations, two questions emerge. The first (always important and not to be skipped) is – are these trends actually true, or are the initial observations just quirks of hyperactive pattern recognition? For example, there are many horse lineages, and not all of them got bigger over time. Is it just cherry-picking to notice the one lineage that survives today as modern horses? If some lineages are getting bigger and some are getting smaller, is this just random evolutionary change without any specific trend? I believe this question has been answered, and the consensus is that these trends are real, although more complicated than first observed.

This leads to the second question – why? We have to go beyond just saying “evolutionary pressure” to determine if there is any unifying specific evolutionary pressure that is a dominant determinant of trends in body size over time. Of course, it’s very likely that there isn’t one answer in every case. Evolution is complex and contingent, and statistical trends in evolution over time can emerge from many potential sources. But we do see these body size trends a lot, and it does suggest there may be a common factor.

Also, the island dwarfism/gigantism phenomenon seems to be real, and the fact that these trends correlate so consistently with migrating to an island suggests a common evolutionary pressure. Foster, who published his ideas in 1964, thought it was due to resources. Species get smaller if island resources are scarce, or bigger if island resources are abundant due to a relative lack of competition. Large mainland species that find themselves on islands have a smaller region in which to operate, with far fewer resources, so smaller critters have an advantage. Also, smaller species can have shorter gestation times and more rapid generational turnover, which provides an advantage in a stressed environment. Predator species may then become smaller in order to adapt to smaller prey (which apparently is also a thing).

At the small end of the size range, getting bigger has advantages. Larger animals can go longer between meals and can roam across a larger range looking for food. Again, this is what we see: the largest animals become smaller and the smallest animals become larger, meeting in the middle (hence the Hobbits with dog-sized rats).

Now a recent study looks at these ideas with computer evolutionary simulations. The simulations pretty much confirm what I summarized above, but also add some new wrinkles. They show that a key factor, beyond the availability of resources, is competition for those resources. First, the simulations showed a general trend of increasing body size due to competition between species: when different species compete in the same niche, the larger animals tend to win out. The researchers state this as Cope's Rule applying when interaction between species is determined largely by their body size.

The simulations also showed, however, that when the environment is stressed, the larger species are more vulnerable to extinction – larger species have relatively fewer individuals, with long gestation and generational times. Smaller species can weather the strain better and bounce back quicker, but then they undergo a slow increase in size once the environment stabilizes. This leads to what the researchers call a recurrent Cope's Rule – each subsequent pulse of gigantism gets even bigger.

The simulations also confirmed island dwarfism – species tend to shrink over time when there is overlap in niches and resource use, which contributes to decreased resource availability. The researchers call this an inverse Cope's Rule. They don't refer to Foster's Rule, I think because the simulations were independent of being on an island or in an insular environment (which is the core of Foster's observation). Rather, species become smaller when their interaction is determined more by the environment and resource availability than by their relative body size (which could be the case on islands).

So the simulations don't really change anything dramatically. They largely confirm Cope's Rule and Foster's Rule, and add the layer that niche overlap and competition are important, not just the total availability of resources.
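To make the dynamic concrete, here is a toy simulation of my own (emphatically not the model from the study): size-based competition ratchets body size upward, while rare environmental stress pulses preferentially cull the largest species, whose niches are then re-seeded by small survivors.

```python
import random

random.seed(42)
sizes = [1.0] * 50  # 50 competing species, equal starting body size

for gen in range(5001):
    # Size-based competition: in a random pairwise contest the larger
    # species tends to win; the loser's niche is taken over by a slightly
    # mutated copy of the winner, nudging mean body size upward.
    a, b = random.sample(range(len(sizes)), 2)
    winner, loser = (a, b) if sizes[a] >= sizes[b] else (b, a)
    sizes[loser] = max(0.1, sizes[winner] * random.gauss(1.0, 0.05))

    # Rare environmental stress pulse: extinction risk rises with body
    # size (fewer individuals, slower generations); vacated niches are
    # re-seeded by the smallest surviving species.
    if random.random() < 0.001:
        smallest = min(sizes)
        sizes = [smallest if random.random() < s / (s + 5) else s for s in sizes]

    if gen % 1000 == 0:
        print(f"gen {gen:5d}: mean body size {sum(sizes) / len(sizes):.2f}")
```

Under these assumptions each recovery starts from whatever the smallest survivor happens to be, so successive pulses of gigantism can ratchet upward over time, which is the recurrent pattern the researchers describe.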

The post Why Do Species Evolve to Get Bigger or Smaller first appeared on NeuroLogica Blog.

Categories: Skeptic

Converting CO2 to Carbon Nanofibers

neurologicablog Feed - Thu, 01/18/2024 - 4:56am

One of the dreams of a green economy – where the amount of CO2 in the atmosphere is stable, not slowly increasing – is the ability to draw CO2 from the atmosphere and convert it to a solid form. Often referred to as carbon capture, some form of this is going to be necessary eventually, and most climate projections include the notion of carbon capture coming online by 2050. Right now we don't have a way to pull significant CO2 from the air economically and on a massive industrial scale. There is some carbon capture in the US, for example, but it accounts for only 0.4% of CO2 emissions, and it is used near locations of high CO2 production, like coal-fired plants.

But there is a lot of research being done, mostly at the proof-of-concept stage. Scientists at the DOE's Brookhaven National Laboratory have published a process which seems to have promise: converting CO2 from the atmosphere into carbon nanofibers, a solid form of carbon with potential industrial uses. One potential use of these nanofibers would be as filler for concrete. This would bind up the carbon for at least 50 years, while making the concrete stronger.

In order to get from CO2 to carbon nanofibers they break the process into two steps. They figured out a way, using an iron-cobalt catalyst, to convert carbon monoxide (CO) into carbon nanofibers. This is a thermocatalytic process operating at 400 degrees C. That's hot, but practical for industrial processes. It's also much lower than the 1,000 degrees C required for a method that would go directly from CO2 to carbon nanofibers.

That's great, but first you have to convert the CO2 to CO, and that's actually the hard part. They decided to use a proven method with a commercially available catalyst – palladium supported on carbon. This is an electrocatalytic process that converts CO2 and H2O into CO and H2 (together called syngas). Both CO and H2 are high-energy molecules that are very useful in industry. Hydrogen, as I have written about extensively, has many uses, including in steel making, concrete, and energy production. CO is a feedstock for many useful reactions creating a range of hydrocarbons.
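In simplified stoichiometric terms, the two steps look something like the following (my sketch of plausible overall reactions, with the second step written as classic Boudouard-type carbon deposition; the paper's exact reaction scheme may differ):

```latex
\text{Step 1 (electrocatalytic, Pd/C):} \quad
\mathrm{CO_{2} + H_{2}O \;\longrightarrow\; CO + H_{2} + O_{2}}
\\[4pt]
\text{Step 2 (thermocatalytic, Fe\text{-}Co, } \sim 400\,^{\circ}\mathrm{C}\text{):} \quad
\mathrm{2\,CO \;\longrightarrow\; C_{(nanofiber)} + CO_{2}}
```

Note that the second step regenerates one CO2 for every two CO consumed; in a closed loop that CO2 can be recycled back into the first step, so the net effect is converting gaseous CO2 into solid carbon at the cost of the electrical energy supplied up front.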

But as I said – conversion of CO2 and H2O to CO and H2 is the hard part. There has been active research for years into creating an industrial-scale, economical, and energy-efficient process to do this, and you can find many science news items reporting on different approaches. This first step seems to be the real game, and from what I can tell it is not the new innovation in this research, which focuses on the second part – going from CO to carbon nanofibers.

The electrocatalytic process that goes from CO2 to CO uses electricity. Other processes are thermocatalytic and may use exothermic reactions to drive the conversion. Using a lot of energy is unavoidable, because we are essentially going from a low-energy molecule (CO2) to a higher-energy molecule (CO), which requires the addition of energy. This is the unavoidable reality of carbon capture in general – CO2 gets released in the process of making energy, and if we want to recapture that CO2 we need to put the energy back in.

The researchers (and pretty much everyone reporting on CO2-to-CO conversion research) state that if the electricity were provided by a green energy source (solar, wind, nuclear), then the entire process could be carbon neutral. But this is exactly why any carbon capture like this is not going to be practical or useful anytime soon. Why have a nuclear power plant powering a carbon capture facility that is essentially recapturing the carbon released from a coal-fired plant? Why not just connect the nuclear power plant to the grid and shut down the coal-fired plant? That's more direct and efficient.

What this means is that any industrial-scale carbon capture will only be useful after we have already converted our energy infrastructure to low or zero carbon. Once all the fossil fuel plants are shut down, and we get all our electricity from wind, solar, nuclear, hydro, and geothermal, then we can make some extra energy in order to capture back some of the CO2 that has already been released. This is why, when experts project out climate change for the rest of the century, they figure in carbon capture after 2050 – after we have already achieved zero-carbon energy. Carbon capture prior to that makes no sense, but after, it will be essential.

This is also why some in the climate science community think that premature promotion of carbon capture is a con and a diversion. The fossil fuel industry would like to use carbon capture as a way to keep burning fossil fuels, or to “cook their books” and make it seem like they are less carbon polluting than they are. But the whole concept is fatally flawed – why have a coal-fired plant to make electricity and a nuclear plant to recapture the CO2 produced, when you can just have a nuclear plant to make the electricity?

The silver lining here is that we have time. We won’t really need industrial scale carbon capture for 20-30 years, so we have time to perfect the technology and make it as efficient as possible. But then, the technology will become essential to avoid the worst risks of climate change.

 

The post Converting CO2 to Carbon Nanofibers first appeared on NeuroLogica Blog.

Categories: Skeptic

Educational Testing and the War on Reality & Common Sense

Skeptic.com feed - Thu, 01/18/2024 - 12:00am

The practice of discussing educational testing in the same sentence with the term “war” is not necessarily new or original.1 What may be new to readers, however, is to characterize current debates involving educational testing as involving a war against: (1) accurate perceptions about the way things really are (reality), and (2) sound judgment in practical matters (common sense).

Education, Testing, and the Real World

Education is compulsory in American society, and no one escapes testing—whether standardized or unstandardized—in their schooling experience, even before entering school. As newborns, infants are given Apgar scores to assess their overall health.2 When a child is ready to enter preschool, s/he may be assessed with a standardized test to determine school readiness in understanding basic concepts, cognitive and language development, and early academic achievement.

As children matriculate through the primary school years, they are required to pay attention to teacher lessons; resist natural impulses to fidget, talk out of turn, or bother one’s neighbor; complete worksheets quietly at one’s desk; complete and return homework assignments; and complete national or state-mandated standardized academic achievement tests that measure “what students know and can do.”3 In some cities, students must complete tests to determine eligibility for entrance into elite or specialty high schools,4 and students in some states must successfully complete tests in order to graduate high school.5 Well before students are scheduled to graduate, they have, until recently, been required to complete standardized college admissions tests in order for their applications to be competitive for colleges of their choice.6

Enter Basic Common Sense

When enough years are spent surrounded by age peers in schools, everyone—regardless of background, race, ethnicity, or socioeconomic status—intuitively understands that comparatively, some peers are intellectually smarter, other peers are roughly the same, and others are intellectually slower. These differences are most determinative of one’s overall level of academic achievement from kindergarten to high school graduation and beyond. Some pupils have a natural proclivity to be voracious readers and progress successfully through their academic programs much more quickly than others. They are able to grasp and understand difficult and abstract academic material more quickly, have a wide range of intellectual interests and hobbies, and are much more likely to be selected for admission to programs for the gifted and talented. These are generally the A and B students and tend to enroll in advanced foreign languages, trigonometry, pre-calculus, chemistry, and other advanced placement (AP) classes in high school.

Then there are students who struggle with school—particularly as the curriculum becomes more conceptual, complex, and abstract. These students have often been identified as “slow learners,” and school generally becomes a profoundly aversive experience. In higher grades, many tend to select vocational courses or may sometimes drop out of school before graduation, and these are generally known as the C and D students in their classes. The majority of pupils, however, fall somewhere in between those two extremes.7

When interacting with curricula, brighter students can generalize learning more easily to new classes of similar information never before encountered, while slower students have more difficulty in remembering what has been previously learned. Using a simple illustration from the early elementary school years (and barring specific reading disabilities), teachers can teach brighter students the phonetic rules for sounding out words such as “groan” and “moan.” Later, when these students encounter similar words that they have not seen before, such as “Joan” or “loan,” they can more easily apply what they have previously learned and correctly sound out these new words as well as understand their meaning. In contrast, when slower students encounter new words that have the same phonetic spelling and pronunciation as previously learned words, they find it more difficult to spontaneously apply what they have previously learned to sound out these new words, and consequently, word identification mastery takes them more time.8

Similarly, slower students will be easily confused over the rules that govern the correct pronunciation of words with the same “ei” letter combination but different pronunciations, e.g., neighbor, heist, and weird. In contrast, brighter students will internalize these nuances more quickly, readily identify these words correctly, and so move on to master more complex words. These differences in word identification skills also influence reading comprehension.

To be sure, slower students will eventually learn how to pronounce correctly similar words governed by different phonetic rules, but teaching such students requires instruction where broader learning objectives are broken down into smaller hierarchical steps, teaching is much more intentionally explicit, and greater amounts of time are devoted to learning and practice.9 If you learned academic subjects more quickly than other students, you probably have found other areas (e.g., art, music, athletics, home and auto repair, cooking, or just learning to get along with others) that took you longer than others, including those who took longer than you on the purely academic subjects.

Regardless of grade level, brighter students can more quickly internalize and consolidate the required mental schemata for representing material that is learned, and then use this knowledge as a foundation upon which to build new schemata.10 Slower students have more difficulty consolidating information to be learned, or at least require more time to consolidate prerequisite information compared to brighter age peers. When slower peers attempt to mentally consolidate new information built on a shaky foundation, the new information is poorly understood.

Brighter students can generally follow along at the pace of regular instruction, while slower students cannot, and eventually fall further and further behind as they get older. The older pupils are, the more they begin to self-select into secondary school tracks that are more suitable to their intellectual capabilities and interests, resulting in extremely wide individual differences in academic performance at higher grades. By the time students reach 11th and 12th grades, for example, brighter students are able to solve complex mathematical equations while slower students still struggle with mastering elementary fractions. As a result, the brightest students in high school tend to enroll in advanced placement courses such as foreign languages, pre-calculus, chemistry, and physics, while slower students gravitate to vocational courses.

Anti-testing hostility has found a powerful, organized voice whose prime directive is to diminish the influence—if not the outright banishing—of standardized testing.

Psychologists refer to this basic phenomenon as “individual differences in mental ability and learning potential,”11 and no one knows this better than teachers. In the elementary grades, for example, teachers regularly come into contact with wide individual differences in performance on standardized achievement tests, despite all students being taught the same material under the same teacher. That’s why it is a bit unfair to hold teachers solely responsible for the achievement test performance of their students, since students can perform poorly on achievement tests despite exemplary teaching, and can also perform well on achievement tests despite mediocre teaching.

Enter Painful Realities

There are no racial, ethnic, language, or socioeconomic subpopulation groups, anywhere on any continent on the globe, that display equal means in their respective distributions of mental test scores.12 These individual differences in mental test scores, when consolidated and averaged, will inevitably result in statistically significant average differences in academic achievement across subpopulation groups. Of course, there is also significant overlap among these groups. Although the full range of test scores and performance—from severe intellectual disability to mental genius—can be found within all racial and ethnic subpopulation groups,13 it is nevertheless true that these abilities are not equally distributed across such groups. Group differences have been observed since the beginning of standardized testing. In fact, they begin as early as three years of age, remain consistent over decades, and have proven stubbornly resistant to intervention.14 The largest gaps between subpopulation groups in both mental test scores and the achievement outcomes that result from such scores will be most noticeable at the extremes of their respective distributions.15 Because this is such a sensitive subject it should be noted that these are average differences between groups and tell us nothing about the ability of any single member of any group.

Differences in academic achievement are not equally distributed across socioeconomic groups or across communities and school districts, as these have different concentrations of low- to high-performing students. Studies consistently show that even massive allocation of funds to school districts, without other interventions, has no significant effect on raising academic achievement.16 School systems are keenly aware of this, which is why comparisons of achievement test scores across school districts are careful to use race, ethnicity, and socioeconomic status as covariates. That is, schools having similar concentrations of students from particular racial/ethnic groups and socioeconomic backgrounds are compared to other schools with similar backgrounds. This way, when schools with similar student backgrounds show significantly different levels of academic achievement, the higher-performing schools can be studied intensively to determine the key factors responsible for their relative success.17

For purposes of this analysis, the term education establishment refers to the constellation of education school professors, teacher education textbooks and journals, teacher certification training programs, and professional teaching associations (e.g., the American Educational Research Association, or AERA; the National Education Association, or NEA) that dominate thought and opinion within the education and teaching professions. Within that group, there are four arguments held by anti-testing critics that are given prominence that far outweighs their scientifically demonstrated validity.

Claims That Testing Harms Students

Eighteenth-century social philosopher Jean-Jacques Rousseau's notion of children born in freedom and innocence but eventually corrupted and enslaved by society18 is the basic assumption that undergirds hostility toward standardized testing among many educators. According to critics, standardized testing places undue emotional stress on students due to test scores' relation to important outcomes. They argue that testing fails to accurately measure the capabilities of students with different learning styles and penalizes pupils who are not good test takers.

Another common argument is that standardized testing fails to account for language deficiencies, empty stomachs, learning disabilities, difficult home lives, or cultural differences.19 The tests are said not to measure student progress or improve student performance, but rather penalize students’ critical thinking and creativity due to the multiple-choice testing format (or its opposite), namely, that tests confer an unfair advantage to students who perform well on multiple-choice tests by learning test-taking strategies without having deep knowledge of the subject matter.

Anti-testing hostility has found a powerful, organized voice in numerous movements whose prime directive is to diminish the influence—if not the outright banishing—of standardized testing in pre- and post-higher education. The opt-out movement, for example, began in New York in 2014 among mostly White, highly-educated, and politically liberal parents who were united in their refusal to have their children sit for standardized testing in schools.20 They claimed that judging teacher performance by students’ test scores is unfair and that testing unduly narrows the school curricula by creating a “teaching-to-the-test” instructional ethos. Some stated they were in outright opposition to the implementation of Common Core State Standards.21

It would not be an overstatement to say that certain criticisms have their origin in various neo-Marxist ideologies. There, standardized tests are portrayed as instruments of oppression designed by capitalistic test-construction companies to crush students’ dreams of a better life and trap them in the social classes in which they were born. One such critic writes:

Rather than providing for an objective and fair means of social mobility, the tests were a tracking mechanism limiting the odds of improving on one’s family’s economic and social position in America…. The SAT aptitude test in particular was designed from the beginning to facilitate social Darwinism, selecting for White Anglo-Saxon males; Jim Crow segregation, eugenics, and protecting the Ivy League’s racial stock provided the legal and cultural context in which the SAT was born.22

These criticisms are feeble, shallow, and above all, dishonest. Rebuttals to these fallacies, patiently documented and dissected by recognized testing scholars, are readily available to anyone with a fair and open mind.23

Claims of Cultural Bias in Tests

The critically acclaimed 1991 film Boyz N the Hood told the tale of three Black youths growing up in a South Central Los Angeles ghetto, and the differences in their eventual life outcomes as a function of having (or not having) a strong father figure. One of the boys has a strict but caring father figure (named Jason “Furious” Styles), while the other two do not. In numerous spots throughout the movie, Mr. Styles imparts pithy pearls of wisdom to the boys, intended to guide them throughout life. In one such sequence, he opines on the SAT requirement for college:

Most of those tests are culturally biased to begin with. The only part that is universal is the Math.24

Wrong. Although popularly believed, the claim that contemporary standardized mental testing is culturally biased is patently false, as revealed in hundreds of empirical studies.25 When critics accuse standardized tests of cultural bias, they typically mean that a test includes words, concepts, or ideas that are perceived to be more familiar to White middle-class examinees compared to other groups, or that a test’s standardization samples fail to include sufficient representation of non-White, lower socioeconomic status (SES) persons.26

Both of these conditions are alleged to foster an unfair disadvantage to lower SES non-White examinees, purportedly causing them to have lower average scores relative to more advantaged White test takers. While some critics may not be familiar with the content of tests or the racial/ethnic makeup of standardization samples, they nevertheless believe that standardized tests are biased simply because the average scores achieved by different subpopulation groups are not equal. Such a definition of test bias is widely rejected by contemporary testing experts.27

The cold reality, however, is that test companies, like all other companies that must be profitable in order to stay in business, routinely and carefully examine their test items for any evidence of statistical bias in the production phase, before any updated test revisions are published. Items that show actual evidence of statistical bias (i.e., items that statistically perform differently for test takers of different racial/ethnic groups) are discarded, and the results of statistical tests for biased test items are typically published in test manuals for open review by the general public.28

Crying Racism

Whenever attempts to tar and feather tests with charges of cultural bias fail, the next step is to simply smear them with the charge of racism. In today's heated political climate few things are more effective in attracting panicked attention than labeling persons, organizations, or products as "racist." In the 1990s, test critics began to point out that the term "aptitude" in the (then-called) Scholastic Aptitude Test (SAT) could be perceived as measuring something innate that is impervious to effort or instruction.29 This, coupled with the fact that these tests reflect the significant subpopulation group differences in mean scores discussed above, prompted the College Board to change the middle word of the SAT from "aptitude" to the blander descriptor "assessment" in 1993.30 That euphemism, however, did little to quell the ire of critics, who continued to accuse standardized college testing of being racist.31

In today’s heated political climate few things are more effective in attracting panicked attention than labeling persons, organizations, or products as “racist.”

To be fair, it is relatively easy to locate offensive quotes by 19th and early 20th-century testing supporters who freely ascribed the adjectives “inferior” and “superior” to racial groups on the basis of significant mean differences in IQ scores.32 It comes as little surprise, therefore, when Ibram X. Kendi, founder and director of the Center for Antiracist Research, declares that:

Standardized tests have become the most effective racist weapon ever devised to objectively degrade Black and Brown minds and legally exclude their bodies from prestigious schools.33

Kendi and many others never doubt that contemporary testing must be racist, based on the false belief that such testing was birthed out of a history of racism.34 There is no doubt that these types of claims are very effective in poisoning contemporary public discourse, but such invective does not hold up under critical examination or hard evidence.

First, many early researchers were extremely cautious about, and resistant to, interpreting group differences in test performance as ironclad indicators of any innate inferiority or superiority of groups. While racist attitudes were certainly more prevalent a century ago than today, many early American IQ test researchers were keenly aware of racial discrimination and the unequal social circumstances of racial groups during the times in which they wrote, and so urged their peers to avoid hasty and intemperate generalizations from test performance until environmental disadvantages could be properly ruled out.35

Second, not a few early 20th-century researchers intentionally showcased the exceptional IQ test performance of high-scoring non-White (particularly African-American) students, who achieved scores several standard deviations above the general mean.36 Their writings disprove the assertion that there is something intentionally nefarious deeply embedded within mental tests that unfairly suppresses the intellectual capabilities of examinees who are not White and/or middle class.

Third, one study using a large and representative dataset of school-aged students in California analyzed the sources that account for IQ test score variance (using Analysis of Variance, a long-standing, well-established, and widely-used statistical method), and demonstrated that the largest sources of IQ test score variability are within and between families that in many cases share the same racial group and social class.37 If two members of this same dataset are selected at random (regardless of race, ethnicity, social class, or family), the difference in their IQ scores is calculated, and the procedure is repeated an infinite number of times, the average difference between randomly selected pairs of IQ scores is 17 points.

Given that the mean of modern IQ tests is 100 and the standard deviation is 15, this average 17-point difference between randomly chosen pairs exceeds the average score difference between Black and White students in the dataset (i.e., 12 points). Simply stated, the average IQ point difference between siblings in the same family exceeds the average test score difference between African Americans and White Americans. Taken together, these findings demonstrate that the oft-repeated claims that IQ and other mental tests are inherently flawed and discriminate unfairly along racial lines are simply false. This won't convince Ibram X. Kendi, however, since his definition of racism is any group difference of any kind anywhere, thereby rendering the concept unfalsifiable.
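That 17-point figure is, incidentally, just what an idealized statistical model predicts. Treating IQ scores as normally distributed with a standard deviation of 15 (the standard scaling, though real score distributions are only approximately normal), the expected absolute difference between two independent draws works out as follows:

```latex
X, Y \;\overset{\text{iid}}{\sim}\; \mathcal{N}(\mu, \sigma^{2})
\;\Longrightarrow\;
X - Y \sim \mathcal{N}(0,\, 2\sigma^{2}),
\qquad
\mathbb{E}\,|X - Y| \;=\; \sqrt{2\sigma^{2}}\,\sqrt{\tfrac{2}{\pi}}
\;=\; \frac{2\sigma}{\sqrt{\pi}}
\;\approx\; \frac{2 \times 15}{1.7725}
\;\approx\; 16.9
```

So an average gap of about 17 points between random pairs is not an anomaly of that dataset; it is the baseline amount of variation the normal model itself implies.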

Lowering Standards

Whenever two or more subpopulation groups have unequal mean test scores, any fixed cutoff score that a college or university uses to determine acceptance or rejection will select unequal percentages of applicants from each group. That is a statistical reality. For admissions committees that champion Diversity, Equity, and Inclusion (DEI) mandates, standards must therefore be lowered for members of lower-scoring groups in a manner that camouflages what is actually being done.
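A minimal numerical sketch makes the point concrete (the means, standard deviation, and cutoff here are hypothetical, chosen only to illustrate the mechanism, not taken from any actual test or institution):

```python
from statistics import NormalDist

# Hypothetical example: two groups whose score distributions differ by
# one standard deviation, judged against the same fixed cutoff.
group_a = NormalDist(mu=100, sigma=15)  # higher-scoring group
group_b = NormalDist(mu=85, sigma=15)   # mean one SD lower

cutoff = 115  # fixed admissions threshold, one SD above group A's mean

rate_a = 1 - group_a.cdf(cutoff)  # fraction of group A above the cutoff
rate_b = 1 - group_b.cdf(cutoff)  # fraction of group B above the cutoff

print(f"Group A above cutoff: {rate_a:.1%}")           # ~15.9%
print(f"Group B above cutoff: {rate_b:.1%}")           # ~2.3%
print(f"Selection ratio A:B : {rate_a / rate_b:.1f}x")  # ~7x
```

The disparity is driven by the thin tails of the normal distribution: the same cutoff sits one standard deviation above one group's mean and two above the other's, so the selection rates differ roughly sevenfold even though the two distributions overlap substantially.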

Researchers have long acknowledged that obtaining data on college admissions decisions is an uphill battle, as colleges strive to prevent access to the criteria on which acceptance decisions are made. When such information is obtained, the results confirm what many have always suspected.

That is to say, Black and Latino applicants are admitted with test qualifications that are as much as one standard deviation or more below the average test scores of White and Asian applicants,38 and this practice has predictable consequences. To illustrate, many Black and Hispanic students find themselves on academic probation or switch majors (from the major into which they were initially admitted) to enter disciplines that are less demanding.39 Many of those so admitted will simply drop out and fail to graduate, creating “artificial failures” that would have been successful if properly matched to institutions that enroll students with comparable qualifications.40

This observation was solidly reinforced in Richard Sander and Stuart Taylor’s 2012 book Mismatch: How Affirmative Action Hurts Students It’s Intended to Help, and Why Universities Won’t Admit It. In it, the authors examined and compared enrollment, graduation rates, and doctorate/STEM graduate degrees of Black and Hispanic students in the state of California in the eras before and after Proposition 209 was passed in that state. Proposition 209 (Prop 209, also known as the California Civil Rights Initiative, or CCRI), was a ballot proposition approved in 1996, which prohibited state governmental institutions from considering race, sex, or ethnicity in public employment, contracting, and education.

When comparing the pre-Prop 209 to the post-Prop 209 eras, the number of Black students receiving bachelor degrees from University of California (UC) schools, the number of UC Black and Hispanic freshmen who went on to graduate in four years (as well as graduate with STEM degrees), and the number of graduates with GPAs of 3.5 or higher all significantly rose. This hard data was used to support the general thesis that when students are matched (through objective standardized test scores) to institutions where all students are admitted under the same standards (and standards are not artificially lowered to satisfy diversity goals), minority students benefit significantly.

These practices are so pervasive that Black students who meet the same college admissions requirements as their peers often write of their frustration and resentment at being unfairly judged by other students as having been admitted solely because of their race and under lower standards.41 In one particularly heartbreaking account, a successful Black journalist wrote of his frustrations in taking two years out of his professional life to teach journalism to Black students, admitted under lowered academic standards, at a small, historically Black college. He writes of his reluctant efforts to repeatedly lower basic academic expectations in order to accommodate a critical mass of students whose attitudes, values, achievement motivation, academic preparation and qualifications, and intellectual capabilities demonstrated that they had no business being at an institution of higher learning.42

One strategy for justifying lowering standards is for college admissions committees to claim that their admission standards are “holistic.”43 That is, criteria for admission presumably must take into account a wide range of factors that provide a more “three-dimensional picture of the whole person,” as opposed to the more “narrow” consideration of standardized test scores. Yet critics charge that the deep subjectivity of such practices represents little more than academic flimflam.44

The oft-repeated claims that IQ and other mental tests are inherently flawed and discriminate unfairly along racial lines, are simply false.

Recently, testing companies have come to serve as enablers of lowered college admissions standards. For example, the College Board spent two years (2017–2019) creating an “adversity index,” a 100-point scale that provides a rough measure of the degree of adversity versus privilege in the life of a prospective applicant. In theory, adversity index scores could be used to balance lower standardized test scores in an effort to justify lower admissions standards. Ultimately, however, these efforts of testing companies to placate their critics once again proved futile.45

Another strategy is to claim that empirical research supports the benefits of having diverse academic settings compared to those not as diverse. For example, a DEI advocate cited research support for claims that students who enroll in more diverse classrooms earn higher GPAs, more diverse college discussion groups generate “more novel and complex analyses,” and that greater exposure to diversity in college settings increases civic attitudes and engagement.46

However, studies of such an important topic as the benefits of diversity in college admissions require at minimum systematic replication as well as hundreds of studies by independent researchers (conducted at a wide variety of institutions) if they are to yield results that can be subjected to appropriate meta-analyses.

One study, however, is notable for its elegance, clarity, and simplicity. In 2002, researchers specifically evaluated the claim that increased racial diversity in college enrollments enriches students' educational experience and improves relations between students from different cultural groups.47 They argued that the prior self-report data claimed to demonstrate support for this notion were misleading, suffering from biased item wording, methodological flaws, and the tendency for responses to reflect social desirability effects.

To correct for these flaws, the researchers analyzed self-report data from a random sample of more than 4,000 American college students, faculty, and administrators who were asked to simply evaluate various aspects of their educational experience and campus environment, but without any direct references to racial/ethnic diversity. They then correlated their data with the percentage of Black student enrollment in predominantly White student bodies. They found that, contrary to what diversity advocates would predict, no consistent positive correlation was found between increased diversity and respondents’ assessments of educational satisfaction.

Delete Standards Altogether

Eventually, what was previously unthinkable has now become unavoidable: objective standards in and of themselves are seen as an impediment to the goals of achieving diversity, equity, and inclusion. Hence, testing necessary for demonstrating mastery of taught subject matter must itself be abolished.


In one example, the Oregon state legislature eliminated (for two years, until the state can re-evaluate its graduation policies) the long-standing requirement that students successfully pass a high school exit exam in order to demonstrate proficiency in reading, mathematics, and writing. This was done in response to criticisms that the testing requirement was inequitable because higher percentages of Black and Hispanic students failed the test.48

Various anti-testing writers and organizations applaud the news that more and more institutions of higher education no longer require standardized test scores as a condition for selection,49 under the pretense that “the social and academic costs of continuing to rely on…tests outweigh any possible benefits.”50

Where are we headed?

At the time of this writing, the U.S. Supreme Court has ruled that the admissions programs at Harvard University and the University of North Carolina (where race is used as one of many factors in student admissions) violate the equal protection clause of the 14th Amendment of the United States Constitution, which guarantees equal protection for all U.S. citizens.51 In a videotaped reaction to the decision, President Biden stated that the decision “effectively ends affirmative action in college admissions,”52 a sentiment echoed by many who support the continued and fair race-neutral use of standardized tests. Nevertheless, many commentators have also suggested ways in which admissions committees can circumvent the decision by no longer requiring standardized testing, or by changing the manner in which applicants write their college essays to signal their racial group membership.53

There is simply no way to produce a mental test that effectively measures the abilities and skills needed to predict success in educational programs but at the same time satisfies the political goals of racially proportional representation as demanded by DEI advocates.54 Given this reality, the war involving standardized testing has by no means ended, but rather is just beginning.

About the Author

Craig Frisby is Associate Professor Emeritus in School Psychology from the University of Missouri, Columbia. He has served as an Associate Editor for School Psychology Review, the official journal of the National Association of School Psychologists, and Associate Editor for Psychological Assessment, a journal published by the American Psychological Association. He currently serves as Associate Editor for the Journal of Open Inquiry in the Behavioral Sciences. He is the author of Meeting the Psychoeducational Needs of Minority Students: Data-based Guidelines for School Psychologists and Other School Personnel and co-editor of the recently published Ideological and Political Bias in Psychology: Nature, Scope and Solutions. Watch him on C-SPAN discussing education reforms to benefit the African American community.

References
  1. https://rb.gy/px4qc; https://rb.gy/ee7vq
  2. https://rb.gy/b0xfx
  3. https://rb.gy/2247j
  4. https://rb.gy/8lwgq
  5. https://rb.gy/uh0da
  6. https://rb.gy/0vkuf
  7. Frisby, C.L. (2013). General Cognitive Ability, Learning, and Instruction. In C.L. Frisby, Meeting the Psychoeducational Needs of Minority Students, 201–266. Wiley.
  8. Ibid.
  9. Ibid.
  10. Jensen, A.R. (1993). Psychometric G and Achievement. In B.R. Gifford (Ed.), Policy Perspectives on Educational Testing, 117–227. National Commission on Testing and Public Policy. Springer.
  11. https://rb.gy/row4i; Jensen, A.R. (1987). Individual Differences in Mental Ability. In J.A. Glover & R.R. Ronning (Eds.), A History of Educational Psychology, 61–88. Plenum.
  12. Lynn, R. & Vanhanen, T. (2006). IQ and Global Inequality. Washington Summit Publishers; Rushton, J.P. & Jensen, A.R. (2005). Thirty Years of Research on Race Differences in Cognitive Ability. Psychology, Public Policy, and Law, 11(2), 235–294.
  13. Gottfredson, L.A. (1997). Mainstream Science on Intelligence: An Editorial With 52 Signatories, History, and Bibliography. Intelligence, 24(1), 13–23.
  14. Rushton, J.P. & Jensen, A.R. (2005). Thirty Years of Research on Race Differences in Cognitive Ability. Psychology, Public Policy, and Law, 11(2), 235–294; Gottfredson, L. (2005). Implications of Cognitive Differences for Schooling Within Diverse Societies. In C.L. Frisby & C.R. Reynolds (Eds.), Comprehensive Handbook of Multicultural School Psychology, 517–554. Wiley.; https://rb.gy/24n08
  15. Ibid.
  16. https://rb.gy/pvhup; Greene, J.P. (2005). Education Myths: What Special Interest Groups Want You to Believe About Our Schools—and Why It Isn’t So. Rowman & Littlefield; https://rb.gy/65e0w
  17. https://rb.gy/jpry7; Whitman, D. (2008). Sweating the Small Stuff: Inner-City Schools and the New Paternalism. Thomas B. Fordham Institute Press.
  18. Rousseau, J. (2019). The Social Contract, or Principles of Political Right. (Trans. by G. Cole) Compass Circle.
  19. https://rb.gy/bdg1d
  20. https://rb.gy/l8bls
  21. https://rb.gy/r8xlx
  22. Soares, J.A. (Ed.) (2020). The Scandal of Standardized Tests: Why We Need to Drop the SAT and ACT (p. ix). Teachers College Press.
  23. Phelps, R.P. (2003). Kill the Messenger. Transaction; Phelps, R.P. (2005). Defending Standardized Testing. Erlbaum; Phelps, R.P. (2009). Educational Achievement Testing: Critiques and Rebuttals. In R.P. Phelps (Ed.), Correcting Fallacies About Educational and Psychological Testing, 89–146. American Psychological Association; https://rb.gy/8b1mv
  24. https://rb.gy/r449n
  25. Camara, W.J. (2009). College Admissions Testing: Myths and Realities in an Age of Admissions Hype. In R.P. Phelps (Ed.), Correcting Fallacies About Educational and Psychological Testing, 147–180. American Psychological Association; Reynolds, C.R., Altmann, R.A., & Allen, D.N. (2021). Chapter 15: The Problem of Bias in Psychological Assessment. In C.R. Reynolds, R.A. Altmann & D.N. Allen, Mastering Modern Psychological Testing: Theory and Methods (2nd Ed.), 573–614. Springer; Jensen, A.R. (1980). Bias in Mental Testing. Free Press.
  26. Jensen, A.R. (1980). Bias in Mental Testing. Free Press.
  27. Warne, R.T., Yoon, M. & Price, C.J. (2014). Exploring the Various Interpretations of ‘Test Bias’. Cultural Diversity and Ethnic Minority Psychology, 20(4), 570–582.
  28. Ibid.
  29. https://rb.gy/btc99
  30. https://rb.gy/4ixp7
  31. https://rb.gy/hvz82
  32. Galton, F. (1870). Hereditary Genius: An Inquiry Into Its Laws and Consequences. Appleton; Brigham, C. (1923). A Study of American Intelligence. Princeton University Press; Gould, S.J. (1996). The Mismeasure of Man (revised and expanded). W.W. Norton & Company.
  33. https://rb.gy/foaup
  34. https://rb.gy/nk8fu; https://rb.gy/3ec9n; https://rb.gy/3gnwb
  35. Bond, H.M. (1924). What the Army ‘Intelligence’ Tests Really Measured. Opportunity, 2, 197–198; Canady, H.G. (1942). The American Caste System and the Question of Negro Intelligence. The Journal of Educational Psychology, 33(3), 161–172; Canady, H.G., Buxton, C. & Gilliland, A.R. (1942). A Scale for the Measurement of the Social Environment of Negro Youth. The Journal of Negro Education, 11(1), 4–13; Klineberg, O. (1934). Cultural Factors in Intelligence Test Performance. The Journal of Negro Education, 3(3), 478–483; Long, H.H. (1925). On Mental Tests and Racial Psychology—a Critique. Opportunity, 134–138.
  36. Bond, H.M. (1927). Some Exceptional Negro Children. The Crisis, 34(8), 257–259, 278, 280; Bousfield, M.B. (1932). The Intelligence and School Achievement of Negro Children. The Journal of Negro Education, 1(3/4), 388–395; Jenkins, M.D. (1939). Psychological Study of Negro Children of Superior Intelligence. The Journal of Negro Education, 5(2), 175–190.
  37. Jensen, A.R. (1980). Bias in Mental Testing (p. 43). Free Press; Jensen, A.R. (1998). The G Factor: The Science of Mental Ability (p. 357). Praeger.
  38. Murray, C. (2021). Facing Reality: Two Truths About Race in America, 67–71. Encounter Books; Riley, J.L. (2014). Chapter 6: Affirmative Discrimination. In J.L. Riley, Please Stop Helping Us: How Liberals Make It Harder for Blacks to Succeed, 141–168. Encounter Books.
  39. https://rb.gy/9zbnl
  40. Riley, J. (2014). Please Stop Helping Us: How Liberals Make It Harder for Blacks to Succeed. Encounter Books; Sander, R.H. & Taylor, S. (2012). Mismatch: How Affirmative Action Hurts Students It’s Intended to Help, and Why Universities Won’t Admit It. Basic Books.
  41. Carter, S.L. (1992). Reflections of an Affirmative Action Baby. Basic Books; https://rb.gy/18cbl
  42. https://rb.gy/h1olw; https://rb.gy/4e8sn
  43. https://rb.gy/xkcte
  44. https://rb.gy/xlr6p
  45. Soares, J.A. (2020). The “Landscape” or “Dashboard Adversity Index” Distraction. In J.A. Soares (Ed.), The Scandal of Standardized Tests: Why We Need to Drop the SAT and ACT, 76–94. Teachers College Press.
  46. https://rb.gy/b884w
  47. Rothman, S., Lipset, S.M., & Nevitte, N. (2002). Does Enrollment Diversity Improve University Education? International Journal of Public Opinion Research, 15(1), 8–26.
  48. https://rb.gy/sw9u8; https://rb.gy/xqvc9
  49. https://rb.gy/oa9o4; https://rb.gy/md53l; https://rb.gy/1qw9z
  50. Schaeffer, R.A. (2020). The SAT/ACT Optional Admissions Growth Surge: More Colleges Conclude “Test Scores Do Not Equal Merit”. In J.A. Soares (Ed.), The Scandal of Standardized Tests: Why We Need to Drop the SAT and ACT, 97–113. Teachers College Press.
  51. https://rb.gy/za7v3
  52. https://rb.gy/z22hf
  53. https://rb.gy/r4k0d
  54. Gottfredson, L. (2000). Skills Gaps, Not Tests, Make Racial Proportionality Impossible. Psychology, Public Policy, and Law, 6(1), 129–143.
Categories: Critical Thinking, Skeptic

Betavoltaic Batteries

neurologicablog Feed - Tue, 01/16/2024 - 5:08am

In 1964 Isaac Asimov, asked to imagine the world 50 years in the future, wrote:

“The appliances of 2014 will have no electric cords, of course, for they will be powered by long-lived batteries running on radioisotopes. The isotopes will not be expensive for they will be by-products of the fission-power plants which, by 2014, will be supplying well over half the power needs of humanity.”

Today nuclear fission provides about 10% of the world’s electricity. Asimov can be forgiven for being off by such a large amount. He, as a science fiction futurist, was thinking more about the technology itself. Technology is easier to predict than things like public acceptance, irrational fear of anything nuclear, or even economics (which even economists have a hard time predicting).

But he was completely off about the notion that nuclear batteries would be running most everyday appliances and electronics. This now seems like a quaint retro-futuristic vision, something out of the Fallout franchise. Here the obstacle to widespread adoption of nuclear batteries has been primarily technological (issues of economics and public acceptance have not even come into play yet). Might Asimov’s vision still come true, just decades later than he thought? It’s theoretically possible, but there is still a major limitation that for now appears to be a deal-killer – the power output is still extremely low.

Nuclear batteries that run through thermoelectric energy production have been in use for decades by the aerospace industry. These work by converting the heat generated by the decay of nuclear isotopes into electricity. Their main advantage is that they can last a long time, so they are ideal for deep space probes. These batteries are heavy and operate at high temperatures – not suitable for powering your vacuum cleaner. There are also non-thermal nuclear batteries, which do not depend on a heat gradient to generate electricity. There are different types depending on the decay particle and the mechanism for converting it into electricity. These can be small, cool devices that can function safely in commercial applications. In fact, for a while nuclear-powered pacemakers were in common use, until lithium-ion batteries became powerful enough to replace them.

One type of non-thermal nuclear battery is the betavoltaic, which is widely seen as the most likely to achieve widespread commercial use. In these devices, beta particles are the source of energy –

“…energy is converted to electricity when the beta particles interact with a semiconductor p–n junction to create electron–hole pairs that are drawn off as current.”

Beta particles are essentially high-energy electrons or positrons emitted during certain types of radioactive decay. They are pretty safe, as radiation goes, and are most dangerous when inhaled. From outside the skin they are less dangerous, but high exposure can cause burns. The small amounts released within a battery are unlikely to be dangerous, and the whole idea is that they are captured and converted into electricity, not radiated away from the device. A betavoltaic device is often referred to as a “battery,” but it is not charged or recharged with energy. When made, it contains a finite amount of energy that it releases over time – but that time can be years or even decades.

Imagine having a betavoltaic power source in your smartphone. This “battery” never has to be charged and can last for 20-30 years. In such a scenario you might have one such battery that you transfer to subsequent phones. Such an energy source would also be ideal for medical uses, for remote applications, as backup power, and for everyday use. If they were cheap enough, I could imagine such batteries being ubiquitous in everyday electronics. Imagine if most devices were self-powered. How close are we to this future?

I wish I could say that we are close or that such a vision is inevitable, but there is a major limiting factor to betavoltaics – they have low power output. This is suitable for some applications, but not most. A recent announcement by a Chinese company, Betavolt, reminded me of this challenge. Their press release does read like some grade-A propaganda, but I tried to read between the lines.

Their battery uses nickel-63 as a power source, which decays safely into copper. The design incorporates a crystal diamond semiconductor, which is not new (nuclear diamond batteries have been in the news for years). In a device as small as a coin they can generate 100 microwatts (at 3 volts) for “50 years”. In reality, the nickel-63 has a half-life of about 100 years, which is a more precise way to describe its lifespan: in 100 years it will be generating half the energy it did when manufactured. So saying it has a functional life of 50 years is not unreasonable.
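
To make the numbers concrete, here is a minimal back-of-the-envelope sketch in Python, taking the quoted figures at face value (100 microwatts of initial output, a 100-year half-life, pure exponential decay):

HALF_LIFE_YEARS = 100.0   # approximate half-life of nickel-63
P0_WATTS = 100e-6         # claimed initial output: 100 microwatts

def power_at(years):
    """Output power after a given number of years of decay."""
    return P0_WATTS * 0.5 ** (years / HALF_LIFE_YEARS)

for t in (0, 25, 50, 100):
    print(f"year {t:3}: {power_at(t) * 1e6:6.1f} microwatts")

# At year 50 the cell still delivers ~71 microwatts, about 71% of its
# initial output, which is why a "50-year functional life" is fair.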

The problem is the 100 microwatts. A typical smartphone requires 3-5 watts of power. So the Betavolt battery produces only about 1/30,000th of the power necessary to run your smartphone. That’s four orders of magnitude. And yet, Betavolt claims they will produce a version of their battery that can produce 1 watt of power by 2025. Farther down in the article it says they plan –

“to continue to study the use of strontium 90, plethium [promethium] 147 and deuterium and other isotopes to develop atomic energy batteries with higher power and a service life of 2 to 30 years.”

I suspect these two things are related. What I mean is that when it comes to powering a device with nuclear decay, the half-life is directly tied to power output: if the radioisotope decays at half the rate, then it produces half the power (given a fixed mass). There are three variables that could affect power output. One is the starting mass of the isotope that is producing the beta particles. The second is the half-life of that substance. And the third is the efficiency of conversion to electricity. I doubt there are four orders of magnitude to be gained in efficiency.

From what I can find, betavoltaics are getting to about the 5% efficiency range. So maybe there is one order of magnitude to gain here, if we could design a device that is 50% efficient (which seems like a massive gain). Where are the other three orders of magnitude coming from? If you use an isotope with a much shorter half-life, say 1 year instead of 100 years, there are two orders of magnitude. I just don’t see where the other one is coming from. You would need 10 such batteries to run your smartphone, and even then, in one year you are operating at half power.
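
To see how those three variables trade off, here is a rough Python sketch of the underlying scaling: electrical power is the decay rate times the mean beta-particle energy times the conversion efficiency, so halving the half-life doubles the power for a fixed mass. The mass, mean beta energy, and efficiency used below are assumed round numbers for illustration, not Betavolt’s specifications:

import math

AVOGADRO = 6.022e23
EV_TO_J = 1.602e-19
SECONDS_PER_YEAR = 3.156e7

def electrical_watts(grams, atomic_mass, half_life_years, mean_beta_ev, efficiency):
    """Electrical output: (decays per second) x (mean beta energy) x (efficiency)."""
    atoms = grams / atomic_mass * AVOGADRO
    decays_per_second = atoms * math.log(2) / (half_life_years * SECONDS_PER_YEAR)
    return decays_per_second * mean_beta_ev * EV_TO_J * efficiency

# Nickel-63: ~100-year half-life, mean beta energy ~17 keV (assumed values).
p_ni63 = electrical_watts(0.5, 63, 100, 17e3, 0.05)
print(f"0.5 g of Ni-63 at 5% efficiency: {p_ni63 * 1e6:.0f} microwatts")
# -> roughly 140 microwatts, in the ballpark of Betavolt's claim

# Same mass and efficiency, but a hypothetical isotope with a 1-year
# half-life: power scales with the decay rate, i.e. about 100x.
p_short = electrical_watts(0.5, 63, 1, 17e3, 0.05)
print(f"gain from a 1-year half-life: ~{p_short / p_ni63:.0f}x")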

Also, nuclear batteries have constant energy output. You do not draw power from them as needed, as with a lithium-ion battery. They just produce electricity at a constant (and slowly decreasing) rate. Perhaps, then, such a battery could be paired with a lithium-ion battery (or other traditional battery). The nuclear battery slowly charges the traditional battery, which actually runs the device. This way the nuclear battery does not have to power the device directly, and can produce much less power than the device needs. If you use your device 10% of the time, the nuclear battery can keep it charged. Even if the nuclear battery does not produce all the energy the device needs, you would be able to go much longer between charges, and you would never be dead in the water: in an emergency, or when far from any power source, you could always wait and let some charge build up. So I can see a role for betavoltaic batteries, not only in devices that use tiny amounts of power, but in consumer devices as a source of “trickle” charging.
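
As a quick sanity check on the trickle-charging idea, here is one more Python sketch. The 3-watt active draw comes from the smartphone figure above; the intermediate trickle values are hypothetical:

def sustainable_minutes_per_day(trickle_watts, active_draw_watts):
    """Daily usage a constant trickle source can sustain indefinitely:
    energy in per day must cover energy out per day."""
    return 24 * 60 * trickle_watts / active_draw_watts

PHONE_DRAW_WATTS = 3.0  # assumed draw while the phone is in active use

for trickle in (100e-6, 0.01, 0.3, 1.0):
    minutes = sustainable_minutes_per_day(trickle, PHONE_DRAW_WATTS)
    print(f"{trickle * 1e3:8.1f} mW trickle -> {minutes:7.1f} minutes of use per day")

# 100 microwatts buys about 3 seconds of phone use per day - useless for
# a phone, but ample for a low-power sensor. Betavolt's promised 1-watt
# version would cover a full 8 hours of daily use.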

At first this might be gimmicky, and we will have to see if it provides a real-world benefit that is worth the expense. But it’s plausible. I can see it being very useful in some situations, and the real variable is how widely adopted such a technology would be.

The post Betavoltaic Batteries first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #919: Looking Back on the Chronovisor

Skeptoid Feed - Tue, 01/16/2024 - 2:00am

A Benedictine monk is said to have built a device allowing him to see and hear historical events.

Categories: Critical Thinking, Skeptic

Paul Halpern — Extra dimensions, Other Worlds, and Parallel Universes

Skeptic.com feed - Tue, 01/16/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss398_Paul_Halpern_2024_01_02.mp3 Download MP3

Our books, our movies—our imaginations—are obsessed with extra dimensions, alternate timelines, and the sense that all we see might not be all there is. In short, we can’t stop thinking about the multiverse. As it turns out, physicists are similarly captivated.

In The Allure of the Multiverse, physicist Paul Halpern tells the epic story of how science became besotted with the multiverse, and the controversies that ensued. The questions that brought scientists to this point are big and deep: Is reality such that anything can happen, must happen? How does quantum mechanics “choose” the outcomes of its apparently random processes? And why is the universe habitable? Each question quickly leads to the multiverse. Drawing on centuries of disputation and deep vision, from luminaries like Nietzsche, Einstein, and the creators of the Marvel Cinematic Universe, Halpern reveals the multiplicity of multiverses that scientists have imagined to make sense of our reality. Whether we live in one of many different possible universes, or simply the only one there is, might never be certain. But Halpern shows one thing for sure: how stimulating it can be to try to find out.

Dr. Paul Halpern is the author of 18 popular science books, exploring the subjects of space, time, higher dimensions, dark energy, dark matter, exoplanets, particle physics, and cosmology. The recipient of a Guggenheim Fellowship, a Fulbright Scholarship, and an Athenaeum Literary Award, he has contributed to Nature, Physics Today, Aeon, NOVA’s “The Nature of Reality” physics blog, and Forbes “Starts with a Bang!” He has appeared on numerous radio and television shows including “Future Quest,” “Science Friday,” “Radio Times,” “Coast to Coast AM,” “The Simpsons 20th Anniversary Special,” and C-SPAN’s “BookTV.” He appeared previously on the show for his book Synchronicity: The Epic Quest to Understand the Quantum Nature of Cause and Effect. His new book, The Allure of the Multiverse, describes the controversial history of higher dimensional and parallel universe schemes in science and culture. More information can be found at: allureofthemultiverse.com

Shermer and Halpern discuss:

  • universe and multiverse meaning
  • Is the multiverse science, metaphysics, or faith?
  • theists claim the “multiverse” is just handwaving around the God answer
  • types of multiverses
  • many worlds interpretation of quantum mechanics
  • inflationary cosmology and eternal inflation
  • Darwinian cosmology
  • infinity and eternity
  • multiple dimensions and the multiverse
  • string theory and the multiverse
  • cyclical universes and multiverses (the Big Bounce)
  • Anthropic Principle (weak, strong, participatory)
  • time travel and the multiverse
  • sliding doors, contingency, and the multiverse.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic
