The world of science communication has changed dramatically over the last two decades, and it’s useful to think about those changes, both for the people who produce science communication and for those who consume it. The big change, of course, is social media, which has disrupted journalism and communication in general.
Prior to this disruption the dominant model was that most science communication was done by science journalists backed up by science editors. Thrown into the mix was the occasional scientist who crossed over into public communication, people like Carl Sagan. Science journalists generally were not scientists, but had a range of science backgrounds. The number one rule for such science journalists was to communicate the consensus of expert opinion, not to substitute their own.
Science journalists are essentially a bridge between scientists and the public. They should have a high enough degree of science literacy that they can communicate directly with scientists and understand what they have to say. They then repackage that communication for the general public.
This can get tricky when dealing with controversial subjects. This is one of the challenges for good science journalists: they must fairly represent the controversy without making it seem more or less than it is, and without giving unbalanced attention to fringe or minority opinions. They need to speak to enough experts to put the controversy into a proper context.
Now let’s transition to the post-social media world. One of the best things about social media is that it makes it much easier for scientists to communicate directly to the public – for scientists to become journalists. While I think this has created a lot of fantastic content, of which I am an enthusiastic consumer and producer, it has created its own challenges.
The big challenge for science journalists who are not scientists is getting the science right. The big challenge for science journalists who are not journalists is getting the journalism right. Part of the challenge is that scientist science journalists blur the lines of expertise. This is because expertise itself is a fuzzy concept, and is more of a continuum. Expertise, in fact, is more of a pyramid.
At the base of this pyramid we have all scientists, who have some level of expertise in science itself – scientific principles and practices, and basic concepts like hypothesis, theory, experiment, and data. They should also have some knowledge of statistics and how they are used in science. So any scientist is in a good position to discuss any science with the general public. But “scientist” is also a broad and fuzzy concept. Are engineers scientists? Are clinicians? These are often considered applied sciences, but their practitioners may or may not be researchers.
A scientist also has topic expertise in their particular field. So they are especially well positioned to discuss topics within their field of expertise. But “field of expertise” is also a continuum. Using myself as an example, I am a medical doctor, so I have some level of expertise in science itself, a higher level of expertise in medicine, and then a specialty in neuroscience. Within neuroscience there are different subspecialties, and I have fellowship-level training in neuromuscular disease and am also certified in headache medicine. So I am more able to comment on these areas than, say, multiple sclerosis.
At what point am I considered an “expert”? This is obviously not a binary. I have high-level training in biological concepts and in medicine in general, a higher level of expertise in clinical neurology, and higher still in my subspecialties. So when I am communicating about such topics, am I communicating as a scientist or as a journalist? The big difference is that scientists, when commenting within their field of expertise, can exercise their own scientific judgement and include their own opinions. Non-scientist journalists should never do this. Scientist journalists have a spectrum of expertise – so where is the line for when they can start to weave in their own scientific opinions?
There is no right or wrong answer here, only judgement calls. Within academia – when communicating with other scientists – the general rule is that you should only communicate from a position of maximal expertise. When communicating to the public, however, there is no such rule (nor would it be practical). Scientist journalists, therefore, have to constantly be shifting their approach to a topic depending on their relative level of expertise, and this can be tricky.
For me, I do my best to understand what the consensus of scientific expert opinion is on a topic and to align my communication with that consensus. I try to be humble and to avoid substituting my own relatively less expert opinion for those with more expertise than me. I leverage my expertise, when I have it, to help understand the topic as a whole and to put it into a helpful context.
There is also another layer to my science communication – I am a scientific skeptic, and I think it is reasonable to consider myself an expert in pseudoscience, science denial, conspiracy thinking, and critical thinking. I often am communicating science through this skeptical lens. This is a type of expertise that is orthogonal to topic expertise. It’s like being a statistician – you are an expert in statistics regardless of the field of science to which they are applied.
There is yet another layer here which can be extremely helpful – we are not all just individual scientist communicators. We are part of a community. This means that we can check in with other scientist communicators with different topic expertise to check our understanding of a topic outside our expertise. This is one of the best things about the SGU podcast – I interview other scientists and ask them questions about the consensus of opinion within their area of expertise. I also have access to lots of colleagues in different fields so I can check my understanding of various topics.
Sometimes this also means that different fields of expertise have a different perspective on a topic that spans multiple fields. In the recent discussion of biological sex, for example, there is clearly a different approach for evolutionary biologists, developmental biologists, neuroscientists, and medical experts. All these views are legitimate, but they contain different perspectives. Again, as a scientist communicator it’s important not to confuse your perspective with the correct perspective.
This can all strengthen our community – if we are all individually willing to be humble, understand and explore the limits of our expertise, listen to our colleagues, and be open to different perspectives. We can also discuss the meta-lessons of how to be better science communicators.
The post Science Communication About Controversial Issues first appeared on NeuroLogica Blog.
It’s been a while since I discussed artificial intelligence (AI) generated art here. What I have said in the past is that AI art appears a bit soulless and there are details it has difficulty creating without bizarre distortions (hands are particularly difficult). But I also predicted that it would get better fast. So how is it doing? In brief – it’s getting better fast.
I was recently sent a link to this site which tests people on their ability to tell the difference between AI and human-generated art. Unfortunately the site is no longer taking submissions, but you can view a discussion of the results here. These pictures were highly selected, so they are not representative: they were not random choices, and any AI pictures with obvious errors were excluded. People were 60% accurate in determining which art was AI and which was human, which is only slightly better than chance. Also, the most liked picture in the line-up (the one above the fold here) was AI generated. People had the hardest time with the impressionist style, which makes sense.
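For a rough sense of how close 60% accuracy is to chance, here is a quick sketch. The sample size of 50 images is purely hypothetical (the article does not say how many images each person judged); the calculation just asks how likely it is to score at least that well by flipping a coin.

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k successes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical scenario: someone classifies 50 images and gets 60% (30) right.
# How likely is a score at least that high from pure guessing (p = 0.5)?
p_value = binom_sf(30, 50)
print(f"p = {p_value:.3f}")  # about 0.10 - weak evidence of real skill
```

With many more trials the same 60% hit rate would become clearly distinguishable from chance, which is why "only slightly better than chance" is the fair reading of a modest sample.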
Again – these were selected pictures. So I can think of three reasons that it may be getting harder to tell the difference between AI and humans in these kinds of tests, other than improvements in the AI themselves. First, people may be getting better at using AI as a tool for generating art. This would yield better results, even without any changes in the AI. Second, as more and more people use AI to generate art there are more examples out there, so it is easier to pick the cream of the crop, which is very difficult to tell from human art. This includes picking images without obvious tells, but also just picking ones that don’t feel like AI art. We are now familiar with AI art, having seen so many examples, and that familiarity can be used to subvert expectations by picking examples of AI art that are atypical. Finally, people are figuring out what AI does well and what it does not do so well. As mentioned, AI is really good at some genres, like impressionism. This could also just fall under getting better at using AI as a tool, but I thought it was distinct enough for its own mention.
And, of course, AI applications are being iterated and are objectively improving. I have been using them since they hit the scene and I can see the difference in results. I still think that most AI generated art is a bit soulless, but that is improving significantly. It does depend a great deal on style, and you still have to throw a lot of crap against the wall to get good results. But the compositions are improving and the details are improving. The lighting can be amazing, and the ability to generate highly detailed images can be astounding.
None of this changes the ongoing discussion of whether or not AI-generated images can truly be considered art, or whether this is fair to the artists whose work is being scraped to generate these images. On the former question, I still think the answer is – yes and no. It depends on how you define art. Some people I have had this discussion with use what might be called “high art” as the definition. This makes it easier to answer the question – if you are just putting a prompt into an AI algorithm and getting back an image, that is not high art. But of course there is a lot of human-generated art that also has minimal creative content and is not high art. I think the best way to define art as true artistic expression is by what went into it. How much thought, feeling, background, talent, and creativity went into the process? The more that went in, the more the result contains. You still get out what you put in, even if the AI is doing the technical heavy lifting.
But sure, this is still a gray zone.
Also, we are not just talking about images. Video, music, and voice replication are all also getting better pretty fast. There is a lot of buzz, for example, about this Star Trek piece, which is pretty impressive. You can still tell it’s AI. The characters are slightly off. The blinking is not quite right, so we are still a bit in the uncanny valley. Also, the characters don’t speak, which is always a dead giveaway.
I am also still blown away by this reimagining of the light saber fight between Obi-Wan and Darth Vader in A New Hope. It’s not hard to see where we are headed – a world in which convincing video of pretty much anything can be generated with the help of AI. This has both good and bad implications. The good implications, potentially, are for entertainment. You may be able to watch any story with any actors, living or dead, in any style. I want to see a film noir with a young Harrison Ford playing a hard-boiled detective in a 1920s thriller. You could recast any movie with different actors. We could create the seasons of Firefly that were never made.
The downside, of course, is that you cannot trust anything you see. I basically am already there. I do not trust any image or video, especially if there are any social or political implications, unless it has been adequately vetted. The problem is that many people don’t wait for a sufficient vetting. Also, the damage of disinformation is already done by the time it gets investigated and corrected. Even worse, this is all leading to a kind of intellectual fatigue where people throw up their arms and conclude – well, you can’t trust anything, so I might as well just believe what feels good. This is what my tribe believes – that’s good enough for me.
So sure, I would love to see a well-crafted season 4 of Star Trek TOS, but I would also like to trust my news. I am not willing to throw up my arms just yet. I think we need to explore ways to minimize disinformation and make it easier for people to feel confident in a shared reality. But if not, at least I can entertain myself with some nostalgia.
The post Update on AI Art first appeared on NeuroLogica Blog.
Humans (assuming you all experience roughly what I experience, which is a reasonable assumption) have a sense of self. This sense has several components – we feel as if we occupy our physical bodies, that our bodies are distinct entities separate from the rest of the universe, that we own our body parts, and that we have the agency to control our bodies. We can do stuff and affect the world around us. We also have a sense that we exist in time, that there is a continuity to our existence, that we existed yesterday and will likely exist tomorrow.
This may all seem too basic to bother pointing out, but it isn’t. These aspects of a sense of self also do not flow automatically from the fact of our own existence. There are circuits in the brain receiving input from sensory and cognitive information that generate these senses. We know this primarily from studying people in whom one or more of these circuits are disrupted, either temporarily or permanently. This is why people can have an “out of body” experience – disrupt those circuits which make us feel embodied. People can feel as if they do not own or control a body part (such as so-called alien hand syndrome). Or they can feel as if they own and control a body part that doesn’t exist. It’s possible for there to be a disconnect between physical reality and our subjective experience, because the subjective experience of self, of reality, and of time are constructed by our brains based upon sensory and other inputs.
Perhaps, however, there is another way to study the phenomenon of a sense of self. Rather than studying people who are missing one or more aspects of a sense of self, we can try to build up that sense, one component at a time, in robots. This is the subject of a paper by three researchers, a cognitive roboticist, a cognitive psychologist who works with robot-human interactions, and a psychiatrist. They explore how we can study the components of a sense of self in robots, and how we can use robots to do psychological research about human cognition and the sense of self.
Obviously we are a long way away from having artificial intelligence (AI) that reproduces human-level general cognition. But by now it’s pretty clear that we do not need this in order to at least simulate aspects of human-level cognition and beyond. One great example with reference to robotics is that we do not need human-level general AI to have a robot walk. Instead we can develop algorithms that respond in real time to sensory information so that robots can maintain themselves upright, traverse terrain, and respond to perturbations. This actually mimics how the human brain works. You don’t have to think too much about walking. There are subcortical pathways that do all the heavy lifting for you – algorithms that utilize sensory input to maintain anti-gravity posture, walk, and react to perturbations. The system is largely subconscious, although you can consciously direct it. Similarly, you don’t have to think about breathing. It’s automatic. But you can control your breathing if you want.
The idea with robots is not that we create a robot that has a full human-level sense of self, but that we start to build in specific components that are the building blocks of a sense of self. For example, robots could have sensors and algorithms that give them feedback that indicates they control their robotic body parts. As with the human brain, a circuit can compare the commands to move a body part with sensors that indicate how the body part actually moved. Similarly, when robots move there can be sensors feeding into algorithms that determine what the effect of that movement was on the outside world (a sense of agency).
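As a concrete illustration, the comparator described above can be sketched as a toy "forward model": the robot predicts the sensory consequence of its own motor command and checks the prediction against what its sensor actually reports. All function names and numbers here are hypothetical, just to show the shape of the idea.

```python
def predicted_angle(current_angle, command_velocity, dt=0.1):
    # Forward model: where the joint *should* end up if the command succeeds.
    return current_angle + command_velocity * dt

def agency_signal(current_angle, command_velocity, sensed_angle,
                  tolerance=0.05, dt=0.1):
    # Compare the prediction with the actual sensor reading. A small mismatch
    # is attributed to noise ("I did that"); a large one flags that the
    # movement was not the result of the robot's own command.
    error = abs(sensed_angle - predicted_angle(current_angle, command_velocity, dt))
    return error <= tolerance

# The commanded movement happened as predicted: sense of agency intact.
print(agency_signal(0.0, 1.0, 0.1))   # True
# The limb ended up somewhere else (e.g. pushed externally): agency violated.
print(agency_signal(0.0, 1.0, 0.35))  # False
```

This mirrors the efference-copy comparisons thought to underlie the human sense of agency, and it is simple enough to run on current robotic hardware.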
This would not be enough to give the robot a subjective experience of self, just as your brainstem would not give you a sense of self without a functioning cortex. But we can start to build the subconscious components of self. We can then do experiments to see how, if at all, these components affect the behavior of the robot. Perhaps this will enable them to control their movements more precisely, or adapt to the environment more quickly and effectively.
I think this is a good pathway for developing robotic AI in any case. Our brains evolved from the bottom up, starting with simple algorithms to control basic functions. It makes sense that we should build robotic intelligence from the bottom up also. Then, as we develop more and more sophisticated AI, we can plug these subconscious algorithms into them.
The big question is – how much will plugging in a bunch of narrow AI / subconscious algorithms into each other contribute to AI sentience and self-awareness? Will (like V-ger or Skynet from science fiction) awareness spontaneously emerge from a complex-enough network of narrow AIs? Is that how vertebrate self-awareness evolved? Arguably, human consciousness is ultimately a bunch of subconscious networks all talking to each other in real time with wakeful consciousness emerging from this process. You can take components away, changing the resulting consciousness, but if you take too many of them away, then wakeful consciousness cannot be maintained.
The other question I have concerns the difference between AI running on a computer and AI in a robot. Does an AI have to be embodied to have human-like self-awareness? Is a Max Headroom type of AI with a completely virtual existence possible? Probably – if they had a virtual body and it was programmed to function like a physical body in the virtual world. But since we are developing robotics anyways, developing robotic AI that mimics human-like embodiment and sense of self makes sense. It evolved for a reason, and we should explore how to leverage that to advance robotics. While we use our understanding of neuroscience to help advance AI and robotics, we can also use AI and robotics to study neuroscience.
As the authors propose, we can use our attempts at building the components of self into robots to see how those components function and what effect they have.
The post Robots and a Sense of Self first appeared on NeuroLogica Blog.
It’s interesting that there isn’t much discussion about this in the mainstream media, but the Biden administration recently pledged to triple US nuclear power capacity by 2050. At COP28 last year the US was among 25 signatories who also pledged to triple world nuclear power capacity by 2050. Last month the Biden administration announced $900 million to support startups of Gen III+ nuclear reactors in the US. This is on top of the nuclear subsidies in the IRA. Earlier this year they announced the creation of the Nuclear Power Project Management and Delivery working group to help streamline the nuclear industry and reduce cost overruns. In July Biden signed the bipartisan ADVANCE act which has sweeping support for the nuclear industry and streamlining of regulations.
What is most encouraging is that all this pro-nuclear action has bipartisan support. In Trump’s first term he was “broadly supportive” of nuclear power, and took some small initial steps. His campaign has again signaled support for “all forms of energy” and there is no reason to suspect that he will undo any of the recent positive steps.
Even environmental groups are split about nuclear power. Some still oppose nuclear, while others have embraced it as a necessary part of the solution to global warming. Regarding the recent pledge to triple nuclear capacity:
Environment America Executive Director Lisa Frank said in a statement that the plan risked “toxic meltdowns, wrecked landscapes and contaminated drinking water.” U.S. PIRG Energy and Utilities Program Director Abe Scarr in a separate statement called nuclear energy “dangerous, expensive and a distraction from cheaper, safer options like solar power” and said its expansion would “[waste] time and resources.”
These responses are not very compelling. Fearmongering about “toxic meltdowns” seems like it was scripted for a 1960s anti-nuclear demonstration. The world has operated hundreds of nuclear power plants (there are currently 440) for decades with only a few mishaps, mostly due to avoidable poor management. The technology is also significantly improving, with some reactor designs being essentially meltdown-proof. Nuclear power is actually the second safest energy source (in terms of deaths per unit of energy, at 0.03 deaths per terawatt-hour), just slightly behind solar (0.02). Coal, on the other hand, causes 1230 times as many deaths per unit of energy as solar.
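The coal comparison is simple arithmetic on the mortality rates cited above; note that the coal figure of about 24.6 deaths per terawatt-hour is my assumption here, drawn from commonly cited Our World in Data estimates, since the text only gives the ratio.

```python
# Deaths per terawatt-hour of electricity produced. Nuclear and solar figures
# are from the text; the coal figure (~24.6) is an assumed, commonly cited
# Our World in Data estimate.
deaths_per_twh = {"coal": 24.6, "nuclear": 0.03, "solar": 0.02}

ratio = deaths_per_twh["coal"] / deaths_per_twh["solar"]
print(round(ratio))  # 1230
```

The point of the arithmetic is the orders of magnitude: nuclear and solar are within rounding error of each other, while coal sits three orders of magnitude above both.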
This is the bottom line that even many environmentalists are starting to see clearly – first, you can’t just consider risk, you have to consider risk vs benefit. Second, you can’t just assess one option, you have to compare it to the other options. As should be clear to any frequent reader here, I am a huge advocate of solar power and renewable energy in general and favor steps to maximize these sources of energy. But our goal is to phase out fossil fuels as quickly as possible, while world energy demand will likely increase by 50% or more by 2050. For the next several decades, the more nuclear power we have, the fewer coal and natural gas plants we will have. It is simply implausible that we will fully displace fossil fuel in that time without nuclear.
Bringing up wrecked landscapes and contaminated drinking water is pretty naked fearmongering. One of the advantages of nuclear power is that it has a relatively small land footprint – much less than renewables for the amount of energy produced. It also has much less than fossil fuels, especially when you consider mining and fracking. Coal releases more radioactive material into the environment than nuclear power. It’s simply no contest. Environmentalists who oppose nuclear power will demonstrably hurt the environment.
Nuclear has some advantages over wind and solar, such as less land use. But also, a nuclear power plant can be plugged into existing connections to the grid. The plan should be, in fact, to swap out coal for nuclear one for one. Renewables require distributed connections to the grid and significant grid expansion. While we should be, and are, doing this, applications for new grid connections are backed up by years, more than a decade in some cases. Nuclear also is not an intermittent power source, and the newer reactor designs (such as salt-cooled) can more nimbly follow demand than older reactors. Nuclear also has the smallest carbon footprint of all power sources, lower than solar and wind.
Of course there are challenges with nuclear power, cost being one. But it is better to find solutions than to just accept worse options. The Biden administration, with bipartisan support, is finding solutions. Bill Gates is funding a salt-cooled reactor startup, also in the hopes of kickstarting a new reactor design that will be cheaper, safer, and more nimble. Opposition to nuclear is mixed and softening. The looming threat of global warming is simply changing the calculus, even for environmentalists.
The post Pledge to Triple Nuclear by 2050 first appeared on NeuroLogica Blog.
The world produces 350-400 million metric tons of plastic waste each year. Less than 10% of this waste is recycled, while 25% is mismanaged or littered. About 1.7 million tons ends up in the ocean. This is not sustainable, but whose responsibility is it to deal with this issue?
The debate about responsibility is often framed as personal responsibility vs systemic (at the government policy level). Industry famously likes to emphasize personal responsibility, as a transparent way to shield themselves from regulations. The Keep America Beautiful campaign (the crying Indian one) was actually an industry group using an anti-littering campaign to shift the focus away from the companies producing the litter to the consumer. It worked.
This is not to say we do not all have individual responsibility to be good citizens. There are hundreds of things adults should or should not do to care for their own health, the environment, the people around them, and their fellow citizens. But a century of research shows a very strong and consistent signal – campaigns to influence mass public behavior have limited efficacy. Getting most people to remember and act upon best behavior consistently is difficult. This likely reflects the fact that it is difficult for individuals to remember and act upon best behavior consistently – it’s cognitively demanding. As a general rule we tend to avoid cognitively demanding behavior and follow pathways of least resistance. We likely evolved an inherent laziness as a way of conserving energy and resources, which can make it challenging for us to navigate the complex massive technological society we have constructed for ourselves.
There is a general consensus among researchers who study such things that there are better ways to influence public behavior than shaming or guilting people. We have to change the culture. People will follow the crowd and social norms, so we have to essentially create ever-present peer pressure to do the right thing. While this approach is more effective than shaming, it is still remarkably ineffective overall. Influencing public behavior by 20%, say, is considered a massive win. What works best is to make the optimal behavior the pathway of least resistance. It has to be the default, the easiest option, or perhaps the only option.
We also know that industry is always going to follow the cheapest and most profitable pathway. Counting on industry to sacrifice their own shareholder profits in the name of some abstract common good is not a solid plan. Even if some companies do this, they will be out-competed by those who don’t. Good behavior, therefore, requires top-down policy, which brings us back to the plastic question.
I love plastic as much as the next person – it’s light, durable, cheap, and sanitary. Glass is heavier and is brittle. For many applications aluminum is a great option, however. But single use plastic is simply terrible for the environment. Technology is often about trade-offs, with some alternatives being better in some ways but worse in others. In a capitalist society we often let the market decide which trade-offs are optimal, and that’s great (the power of the market). But what happens when the trade-offs are different for different segments of society? Industry might prefer one set of trade-offs, consumers another, and environmentalists another. There is also the issue of externalizing costs – who pays for the public consequences of technology?
This is where government comes in. Their job is to protect consumer safety and interests, to protect public spaces and shared resources, and to make sure there is a level playing field and no one is cheating. Of course, this can be challenging as well, and carries its own set of trade-offs – a cumulative regulatory burden. Optimizing the balance between free markets, consumer choices, and government regulation is often tricky, and is best done, in my opinion, based upon evidence and review, not ideology.
So how do we apply all this to the plastic problem? A recent study sheds some light. Researchers found that just four government policies could reduce plastic waste by 91% and reduce the carbon footprint of the plastics industry by 37%. What sacrifices do we have to make to get these benefits?
“The policies are: mandate new products be made with 40% post-consumer recycled plastic; cap new plastic production at 2020 levels; invest significantly in plastic waste management — such as landfills and waste collection services; and implement a small fee on plastic packaging.”
These all sound like reasonable suggestions. They are not necessarily a package deal, and the specifics can be tweaked, but the study shows the potential to significantly reduce the environmental burden of plastic with these types of measures. These policy suggestions also reflect some of the kinds of things government policy can do. They can set a standard for industry, so that everyone has to comply and no one has an unfair advantage (like using 40% recycled material and limiting total production). They can invest in infrastructure, which can both facilitate a technology and also deal with negative impacts. Policy can also shift economic considerations by making harmful practices more expensive or more environmentally friendly practices relatively less expensive. In other words – don’t tell industry what to do, just tweak the economic incentives so that the most profitable path is the optimal one. This also lets industry figure out the details for themselves.
I am neither a free-market purist nor a believer in unlimited regulation without consideration of unintended consequences. These are ideological extremes that are ultimately harmful. As with many things, there is a balance between free-market forces and government regulation that follows Aristotle’s “golden mean”. Plus, this balance should be evidence-based and subject to review and revision. All players must have a seat at the table with no one interest dominating. In the end we can get some rational policies that make the world a better place for everyone.
The post Managing Plastic Waste first appeared on NeuroLogica Blog.
On September 11, 2001, as part of a planned terrorist attack, commercial planes were hijacked and flown into each of the two towers at the World Trade Center in New York. A third plane was flown into the Pentagon, and a fourth crashed after the passengers fought back. This, of course, was a huge world-affecting event. It is predictable that after such events, conspiracy theorists will come out of the woodwork and begin their anomaly hunting, breathing in the chaos that inevitably follows such events and spinning their sinister tales, largely out of their warped imagination. It is also not surprising that the theories that result, just like any pseudoscience, never truly die. They may fade to the fringe, but will not go away completely, waiting for a new generation to bamboozle. In the age of social media, everything also has a second and third life as a YouTube or TikTok video.
But still I found it interesting, after not hearing 9/11 conspiracy theories for years, to get an e-mail out of the blue promoting the same old 9/11 conspiracy that the WTC towers fell due to planned demolition, not the impact of the commercial jets. The e-mail pointed to this recent video, by dedicated conspiracy theorist Jonathan Cole. The video has absolutely nothing new to say, but just recycles the same debunked line of argument.
The main idea is that experts and engineers cannot fully explain the sequence of events that led to the collapse of the towers and also explain exactly how the towers fell as they did. To do this Cole uses the standard conspiracy theory playbook – look for anomalies and then insert your preferred conspiracy theory into the apparent gap in knowledge that you open up. The unstated major premise of this argument is that experts should be able to explain, to an arbitrary level of detail, exactly how a complex, unique, and one-off event unfolded – and they should be able to do this from whatever evidence happens to be available.
The definitive official report on the cause of the collapse of the two towers is the NIST report, which concludes:
“In WTC 1 , the fires weakened the core columns and caused the floors on the south side of the building to sag. The floors pulled the heated south perimeter columns inward, reducing their capacity to support the building above. Their neighboring columns quickly became overloaded as columns on the south wall buckled. The top section of the building tilted to the south and began its descent. The time from aircraft impact to collapse initiation was largely determined by how long it took for the fires to weaken the building core and to reach the south side of the building and weaken the perimeter columns and floors.”
The process in WTC 2 was similar, just with different details. Essentially the impact of the commercial jets dislodged the fireproofing from the core columns. The subsequent fires then heated up and weakened the steel columns, reducing their ability to bear load until they ultimately failed, initiating collapse. Once a collapse was initiated the extra load of the falling floors was greater than the ability of the lower floors to bear, so they also collapsed.
There really is no mystery here – a careful and thorough analysis by many experts using all available video evidence, engineering designs of the building, and computer simulations has provided an adequate and highly plausible explanation. But Cole believes you can just look at the videos and contradict the experts – he explicitly argues for this position, even that it is “obvious” what is happening and all the experts are wrong. He then cherry picks reasons for not accepting the expert conclusion, such as, why haven’t we seen this before? Where are the pancaked floors? But again, he is just anomaly hunting. What he fails to consider is that the WTC towers were the largest structures ever to collapse in this way, and that you cannot simply scale up smaller building collapses and think you can understand or predict what should have happened with the towers. The energies involved are different, and therefore the relevant physics will behave differently. This is like trying to understand what will happen if a person falls from a height by your experience with small insects falling from proportionally similar heights.
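To see why scale matters so much, here is a rough back-of-the-envelope sketch (my own illustration, not drawn from the NIST analysis, and with purely hypothetical numbers): for a building with a fixed footprint and uniform construction, the mass grows roughly with height, and the average drop distance of that mass also grows with height, so the gravitational energy released in a total collapse grows roughly with the square of the height.

```python
# Rough illustration: gravitational energy released by a total collapse
# scales roughly with height squared for a fixed footprint.
# All numbers here are hypothetical, chosen only to show the scaling.
G = 9.81  # gravitational acceleration, m/s^2

def collapse_energy(height_m, mass_per_meter_kg):
    """Potential energy released if the building's mass (distributed
    uniformly with height) falls, on average, half the building height."""
    total_mass = mass_per_meter_kg * height_m
    return total_mass * G * (height_m / 2)

# Assume the same construction density for both buildings (illustrative only)
small = collapse_energy(40, 1.0e4)    # roughly a 10-story building
tower = collapse_energy(415, 1.0e4)   # roughly WTC tower height

print(f"energy ratio: {tower / small:.0f}x")  # ~108x, not ~10x
```

A tower roughly ten times taller releases on the order of a hundred times the energy, which is why intuitions built on smaller collapses do not transfer.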
The two main anomalies he focuses on are the absence of recognizable debris and the apparent “explosions”. He says – where are the pancaked floors? Meanwhile, lower Manhattan was covered with a layer of concrete dust. Where do you think that dust came from? Again – at this scale and these energies the concrete was mostly pulverized into powder. This is not a mystery.
His second line of evidence (again, nothing new) is the apparent series of explosions ahead of the collapse. However, these explosions are simply the air pressure and immense power of the building collapsing down, causing an explosive sound as each floor was encountered by the collapse, and causing air to be blown out the windows. This is not an incredibly precise sequence of explosions ahead of the collapse – it is the collapse. I love how the “controlled demolition” advocates argue that the collapse looked like such a demolition. But actually look at videos of controlled demolitions – they look nothing like the collapse of the towers. In such cases you see the explosions, usually happening at roughly the same time, a moment before the collapse. The sequence is – explosions then collapse. But with the WTC collapses the collapse comes first, and the apparent “explosions” (which do not look like any demolition video I have seen) are at the leading edge of the collapse. This would require a fantastically timed sequence of demolitions that is virtually impossible.
In essence Cole and other die-hard 9/11 conspiracy theorists are replacing a well-modeled and well-evidenced explanation for the collapse with wild speculation, causing far more problems than the imaginary ones they conjure up.
There is also the fact that conspiracy theorists rarely provide any positive evidence for their conspiracy. They only try to poke holes in the official explanation, then insert a sinister interpretation. But here we are, 23 years later, and still there isn’t a lick of evidence (even through multiple subsequent administrations) for a conspiracy. The conspiracy narrative also doesn’t make sense. Why would they arrange to have commercial jets laden with fuel crash into the towers, and then also take on the risk of rigging them for controlled demolition, and then setting off the demolition in front of the world and countless cameras? And then take the risk that an official investigation, even in a later administration, would not reveal the truth. This is a bad movie plot, one that would break my suspension of disbelief.
There is no evidence for an inside job. There is no evidence that a massive project to plant explosives in both towers (or three, if you include WTC7) had occurred. There is no evidence from actual expert analysis that the towers fell due to controlled demolition. Cole’s analysis is not convincing to say the least. I find it childish and simplistic. But it is easy to use anomaly hunting to create the impression that something fishy is going on, and that is partly why these conspiracy theories persist, long past their expiration date.
The post 911 Conspiracy Theories Persist first appeared on NeuroLogica Blog.
Australia is planning a total ban on social media for children under 16 years old. Prime Minister Anthony Albanese argues that it is the only way to protect vulnerable children from the demonstrable harm that social media can do. This has sparked another round of debates about what to do, if anything, about social media.
When social media first appeared, there wasn’t much discussion or recognition about the potential downsides. Many viewed it as one way to fulfill the promise of the web – to connect people digitally. It was also viewed as the democratization of mass communication. Now anyone could start a blog, for example, and participate in public discourse without having to go through editors and gatekeepers or invest a lot of capital. And all of this was true. Here I am, two decades later, using my personal blog to do just that.
But the downsides also quickly became apparent. Bypassing gatekeepers also means that the primary mechanism for quality control (for what it was worth) was also gone. There are no journalistic standards on social media, no editorial policy, and no one can get fired for lying, spreading misinformation, or making stuff up. While legacy media still exists, social media caused a realignment in how most people access information.
In the social media world we have inadvertently created, the people with the most power are arguably the tech giants. This has consolidated a lot of power in the hands of a few billionaires with little oversight or regulation. Their primary tool for controlling the flow of information is computer algorithms, which are designed to maximize engagement. You need to get people to click and to stay on your website so that you can feed them ads. This also created a new paradigm in which the user (that’s you) is the product – apps and websites are used to gather information about users which is then sold to other corporations, largely for marketing purposes. In some cases, like the X platform, an individual can favor their own content and perspective, essentially turning a platform into a propaganda machine. Sometimes an authoritarian government controls the platform, and can push public discourse in whatever direction they want.
Perhaps worse, if the only feedback loop for algorithms is engagement, then this creates an interesting psychological experiment. What drives engagement is extremism, outrage, and reinforcing prejudices. This has resulted in a few derivative phenomena, including echo chambers. It became trivial, and almost automatic, for spaces to emerge on social media that reinforce a particular world view. Those who do not comply are deemed “trolls” and are banned. Rather than having a shared reality of core facts, people are largely isolated in cocoons of ideological purity. The result was increasing division – each half of the country (politically speaking) cannot imagine how the other half can possibly believe what they do.
In addition, getting people to engage meant feeding them increasingly radical content, which had the result of radicalizing a lot of people. This resulted in the rise of lunatic ideas like flat-eartherism, and conspiracy theories like QAnon. It also supercharged the spread of misinformation, and provided a convenient mechanism for the deliberate spread of disinformation. Bad actors and authoritarian governments quickly seized upon this opportunity.
There is also another layer here – mental health. Obsessively engaging online results in FOMO, bullying, low self-esteem, and depression. This is exacerbated by the fact that the layer of protection afforded by social media allows psychopaths, predators, and other bad actors to roam freely.
So I can understand the feeling that allowing young children to engage on social media is like throwing our children to the wolves, with predictable negative effects. But the question remains – what do we do about it? Australia is planning an experiment of their own, taking a bold step to outright ban social media use for children under 16. There is already a lot of pushback against this idea. In an open letter from 100 academics, they argue that banning is a blunt tool, and that it will leave children more vulnerable. They will not learn the skills to be able to navigate social media, they argue. They suggest that other methods would be better, without getting into too much detail about what those methods might be. The details of the banning also have to be worked out – how will it be enforced?
It is a genuine dilemma. There is no real solution, only different trade-offs. It is certainly worth having the conversation about what the options and trade-offs might be. Doing nothing is one option – just let the experiment play itself out, with the idea that society will adapt. While I think this will happen to some degree, we may not like where we end up. My problem with this approach is that it assumes that things will play out organically. Rather, powerful actors (tech giants, powerful corporations, and governments) will exploit the system to their own advantage and to the detriment of the public. We may have just provided the tools for authoritarian governments to exert ultimate control over society. It may not be a coincidence that democracies are in retreat around the world.
But even without an authoritarian thumb on the scale, misinformation seems to have a significant advantage in the world of social media. Perhaps even worse, we seem to be heading for a world in which truth is irrelevant. I spend a lot of time on TikTok, for example, trying to spread a little science and critical thinking. The platform has lots of good science communication on it, and lots of wholesome entertainment. But it is also overwhelmed with nonsense, including misinformation and disinformation. But perhaps the dominant trend is for something that is not so much misinformation but that is completely unconcerned with reality. Many videos are purely performative, to the point that I cannot figure out if the person making the video actually believes anything they say. It’s as if it doesn’t matter – it’s all about engagement. The very concept that one factual claim may be more reliable than an opposing claim seems anathema. It’s all opinion, and all that matters is clicks. Any argument otherwise is immediately dismissed as a conspiracy, or mere elitism.
We may already be living in the post-truth hellscape that critics predicted social media would lead to. I don’t think a ban is likely to be the solution, but I welcome the experiment. If Australia enacts the ban, we need to pay close attention to what results. Even if there are some net positive outcomes, it is not likely to be the only needed solution. We need to start talking more seriously about what measures should be taken to rein in some of the worst aspects of social media. Also, AI is about to supercharge everything, giving even more power to spreaders of misinformation. I liken it to an industry that is dumping tons of toxic substances into the environment. I don’t think we should just sit back and see what happens.
The post The Social Media Dilemma first appeared on NeuroLogica Blog.
At CSICON this year I gave a talk about topics over which skeptics have disagreed, and continue to disagree, with each other. My core theme was that these are the topics we absolutely should be discussing with each other, especially at skeptical conferences. Nothing should be taboo or too controversial. We are an intellectual community dedicated to science and reason, and have spent decades talking about how to find common ground and resolve differences, when it comes to empirical claims about reality. But the fact is we sometimes disagree, and this is a great learning opportunity. It’s also humbling, reminding ourselves that the journey toward critical thinking and reason never ends. On several topics self-identified skeptics disagree largely along political lines, which is a pretty sure sign we are not immune to ideology and partisanship.
I spent most of the talk, however, discussing the issue of biological sex in humans, which I perceive as the currently most controversial topic within skepticism. My goal was to explore where it is we actually disagree. Generally speaking skeptics don’t disagree about the facts or about the proper role of science in determining what is likely to be true. We tend to disagree for more subtle reasons, although often the reason does come down to a lack of specific topic expertise on questions that are highly technical. The most important thing is that we actually engage with each other’s arguments and positions, to make sure we truly understand what those who disagree with us are saying so that we can properly explore premises and logic.
Jerry Coyne, author of the book and blog Why Evolution is True, was also at CSICON and gave a talk essentially taking the opposing position to my own. His position is that biological sex in humans is binary, that this is the only scientific position, and anything else is simply ideology trumping science. His talk was after mine so I was very interested in how he would respond to my position. He essentially didn’t – he just gave the talk he was going to give and then included a single slide with his “responses” to my talk. Except, they weren’t responses at all, just a list of standard talking points that really had nothing to do with my talk.
Now he has written a blog post discussing my talk. Here is his opening paragraph:
“I’ve been busy at the CSICon conference, which included giving my own 30-minute presentation this morning. I had to modify it to take into account the misguided views of Steve Novella, who gave a talk yesterday about “When Skeptics Disagree.” It turned out to be largely a diatribe about how sex in humans is not binary, and in fact isn’t even to be defined by morphology or physiology. As far as I can see, Novella’s view of sex is that one is born with a “brain module” (which of course is biological) that determines which sex you are. No, not gender, but actual biological sex. You can have a “female” module or a “male module”, and regardless of gametes, hormones, genitalia, and so on, you are whatever sex your module dictates to your self-identification.”
It’s difficult to convey how disappointing this response is, from someone I previously admired for his science communication about evolution and creationism. His summary of my position is completely and utterly wrong. Calling it a strawman does not do justice to how off it is. He sat through my talk (as I sat through his), and yet I have to wonder if he actually listened to it. We do agree on one thing – CSI will be making the recorded talks available and people can watch and decide for themselves – please do.
Here is my actual position, as articulated (quite clearly, if the overwhelming feedback I got was any indication) in the talk. Biological sex in humans is multifactorial and complicated, pretty much like all of biology. While there are two pathways of sexual development (we are a sexually dimorphic species), humanity is not “strictly” binary because not everyone fits cleanly or unambiguously into one of two sexes. Pretty much every aspect of biological sex has variations, or “differences in sexual development”, or ambiguous features in some individuals. These are the facts, and you cannot meaningfully disagree about this. So how do skeptics disagree? Largely due to semantics (or I guess making the other side into a one-dimensional strawman).
As an example I used archaeopteryx – a “transitional” species with morphological features that are pretty much half modern bird and half theropod dinosaur (the non-avian branch, if you are using cladistics, which of course is just another tradition of categorization). Creationist Duane Gish famously said that archaeopteryx is simply a bird. It had feathers and it flew, so it was a bird. When confronted about the fact that it also had teeth he said – well, some birds had teeth. He also, interestingly, said that you could look at archaeopteryx as a dinosaur with feathers. He was apparently blissfully unaware of the fact that he was just arbitrarily choosing a subset of morphological features as the “defining characteristics”. You could say feathers and flying equals bird, or teeth, claws, bony tail (and other features) equals dinosaur. Or – you could look at all the features and say it was transitional between these two groups. But that, of course, is exactly what he was trying to avoid and deny.
My larger point was that, in the end, our labels are ultimately meaningless. They do not determine reality. Archaeopteryx is what it is regardless of what box we put it in. Saying that some birds have teeth and saying that some dinosaurs have feathers are equally correct, equally misleading, and equally non sequiturs.
I then gave as an example of biological sex in humans those with CAIS (complete androgen insensitivity syndrome). These are people who are XY, produce male levels of testosterone, have undescended testes, produce sperm, and do not develop a uterus, but they do not respond to testosterone so they develop otherwise along the female body plan. They have vaginas, a complete (and often even more so than average) suite of female secondary sexual characteristics, and typical female neurological development. So they have male genetics, hormones, and gametes, but female genitalia, body and brain. So – are they male or female? Do some men have vaginas, or do some women have XY chromosomes? You see the parallel?
There is no necessarily right or wrong answer. Categorization is ultimately arbitrary and context dependent. You have to ask – why are we dividing humanity up into two categories of biological sex in the first place? Is this just an exercise in abstract biological science, is this for social reasons, medical purposes, designing public bathrooms, or making rules for competitive sports? The answer may differ depending on the context. Also, CAIS is just one example. There are differences in every feature of biological sex, and often those features are ambiguous (there are conditions that are literally referred to as ambiguous genitalia, for example).
To me it is an unavoidable and simple fact that biological sex in humans is not strictly binary. I always use the modifier “strictly” to be as clear as possible. Even critics of this position admit humans are only “mostly” binary when it comes to sex. But they say – “mostly” binary equals binary. But this is nonsensical – it is a purely semantic argument.
What they are saying (pretty directly) is that biological sex in humans is binary, and anyone who does not fit into this binary simply doesn’t count. They also say that some features of sexual dimorphism in humans don’t count. Coyne’s position is that we should categorize biological sex by gametes and gametes only. Why? He only said during his talk that this is how it is done, so an appeal to tradition, I guess. He seems to be engaged in a bit of circular logic – biological sex is binary because of gametes, and we use gametes to define biological sex because they are essentially binary (with rare exceptions). I have also seen him make the justification (which I find ironic coming from an evolutionary biologist) that biological sex is about reproduction, and reproduction is all about gametes. But biological features, even if they evolved mainly for a specific purpose, often take on other purposes and aspects. I would argue that people are more than their gametes. Again – context matters. Should we have sperm and egg leagues in competitive sports? Should we just have pictures of sperm and eggs on signs outside public bathrooms?
Coyne argued that the percentage of people with differences of sexual development is too small to count. He used the very low end of estimates (without disclosing or discussing this fact), but even with this estimate we are talking about millions of people. And also where you draw the line is yet another arbitrary choice of categorization.
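To put rough numbers on this (the estimate values here are my own addition, not figures from either talk): commonly cited estimates of differences of sexual development range from about 0.018% at the low end to about 1.7% at the high end, depending on which conditions are counted. Even the low-end figure, applied to the world population, yields over a million people.

```python
# Illustrative arithmetic only; the percentage estimates are assumptions
# I am supplying, and they vary widely depending on what is counted.
world_population = 8_000_000_000

low_end = 0.00018 * world_population   # ~0.018%, a commonly cited low-end estimate
high_end = 0.017 * world_population    # ~1.7%, a commonly cited high-end estimate

print(f"low-end estimate:  {low_end:,.0f} people")   # ~1.4 million
print(f"high-end estimate: {high_end:,.0f} people")  # ~136 million
```

Either way, "too small to count" is doing a lot of work: even the most restrictive definition describes a population the size of a large city.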
He also explicitly argues that the brain does not count. He ridicules this notion in his response, again creating a shallow strawman, and uses quotation marks inappropriately. I never said, for example, “male module” and yet his use of quotes made it seem as if I did. He states that it is my position that only the brain counts – which is pretty close to the exact opposite of my position, which is that everything counts. All sexual dimorphism, from genes to neurological development, is part of biological sex. It is a complex, dynamic, and multifactorial system, with lots of variation. There is no one “correct” way to categorize male and female. Sure, you can make rules that are true for most people. There are typical schemes of development. But whatever system you come up with, there will be people who do not fit. And yet all people count, and not just because they are actual people. Even in the abstract – all data counts.
My final point was that I don’t think we would even be having this discussion were it not for the political controversy around the trans issue. The point, in my opinion, of saying that only gametes count is to argue that the brain specifically does not count. But the brain is a biological organ, and brain development is absolutely influenced by sex genes and hormones. The brain is a sexual organ, and part of sexual dimorphism. Why wouldn’t it count as part of biological sex? I specifically said that at this time we have not identified the specific neuroanatomic correlates of gender identity (which are likely to be complex). But the early research so far is pointing in the direction that gender identity is both real and a neurological trait. It has the features of other neurological traits – like core personality features, and sexual orientation. Most people who identify as trans knew their gender identity from a very young age, and their identity is remarkably persistent over their lives. They are likely not a homogenous group, which is to be expected for a complex phenomenon like gender identity. But again, most show clear signs of gender identity being a persistent neurological trait (something Coyne dismisses as mere “feelings”).
The best analogy here is sexual orientation, which also behaves like a stable neurological trait. People cannot be “turned” gay, nor converted from being gay. Sexual orientation is basically a brain phenomenon, influenced by biological sex, including genetics and the hormonal environment of the womb. And yet, all the same arguments against the claim that gender identity is real and neurological were used against sexual orientation being a neurological trait, including the lack of a “gay gene” (analogous to saying there is no “gender module” in the brain).
This is the basic science, and we should be able to find common ground here. We can have meaningful discussions about what all this means for sports, bathrooms, and medical care for those identifying as trans. That is the social, medical, and political debate we should be having, because this is a complex topic. The claim that biological sex is simply binary, and any other thought is (as Coyne writes) “full-out progressive woke” is just not true. It strikes me as a rhetorical strategy to win the debate by semantic fiat. Biological sex is binary (because gametes), therefore all other discussion about sports and medical care is merely an exercise in delusion.
The position is one giant non sequitur. What do we do in this specific context with people who do not unambiguously fit into a biological sex binary? Well, the answer seems to be, they don’t count because biological sex is binary because gametes. And they are a small percentage of humanity. But they exist and we still need to decide how best to accommodate them. Insisting sex is strictly binary is not only wrong, it doesn’t even address the issue.
Coyne makes some other claims (as if they are a rebuttal to my position) that are not based in reality (but are suspiciously similar to right-wing talking points and propaganda). He says, for example,
“And what about those people—yes, they exist—who think they really are in the wrong body, and should be a member of another species, like a horse or a cat? Does that actually make them a cat or horse? Of course not.”
First, reports of this phenomenon (called therians) are greatly exaggerated. They are often conflated with other groups (like furries). This is pretty clearly a social and psychological phenomenon, not a neurological trait, and has nothing to do with a system of neurological development (influenced by hormones, for example). He is making some sort of slippery slope argument, which does not hold up to examination. Some people have a strong psychological identity with an animal archetype, just like some people think they are aliens, or that they were reincarnated. This has literally nothing to do with gender identity and is not the same kind of phenomenon.
He also writes:
“One more example. There are people who are nonbinary, but are that way on a temporal basis: they change from feeling male to feeling female on a daily or even hourly basis.”
There are some people who are “gender fluid”. So what? What does that say about people who are not fluid, but remarkably stable in their gender identity (i.e., most people)? Nothing. It’s irrelevant. It’s similar to arguing that we cannot treat people with MS because a subset of people with MS have a type that does not respond to treatment.
I think this statement by Coyne highlights his misunderstanding:
“Does a full biological man, with the right genitalia, hormones and chromosomes, but who feels that he’s a woman, actually become a woman (or vice versa for women)? Of course not, unless you think that words mean whatever you want them to. This is why I believe that people can claim to be of any gender, but they can’t actually change their biological sex.”
This is a non sequitur, and has nothing to do with my actual position. It shows that he completely does not get it. Notice that he thinks there is a “right” genitalia. This is a strangely essentialist formulation for an evolutionary biologist. Notice also that he dismisses how someone “feels” as if this is not, in some cases, a function of developmental neurology. But the core fallacy here is the straw man that gender identity changes biological reality. Rather, it is just part of biological reality. The entire point of the “trans” identity is that gender identity (which behaves in most people like a neurological trait) does not align with aspects of sexual morphology. They are not delusional about their genitalia – otherwise they would not identify as trans. Every aspect of biological sex may line up or not line up in some people. That is just reality. Coyne’s entire framing, however, is as a conflict between subjective feelings (which are not real) and anatomy (which is real). And in people where the more tangible anatomy does not line up (like in CAIS)? Well, they don’t count. Only gametes count.
I don’t expect we will agree on the broader issues here. But at the very least we should be able to agree on the basic science. This means getting past pointless semantic arguments and past confidently mischaracterizing the “other side”. If I have misstated his position in any way I will happily make corrections, but I tried, fairly and with the principle of charity, to reflect what he has written. You can read it for yourself and see. My summary here is pretty much exactly what I said during my CSICON talk and have written elsewhere.
The post A Discussion about Biological Sex first appeared on NeuroLogica Blog.
I was away last week, first at CSICON and then at a conference in Dubai. I was invited to give a nine-hour seminar on scientific skepticism for the Dubai Future Foundation. That sounds like a lot of time, but it isn’t. It was a good reminder of the vast body of knowledge that is relevant to skepticism, from neuroscience to psychology and philosophy. Just the study of pseudoscience and conspiracy thinking could have filled the time by itself. It was my first time visiting the Middle East and I always find it fascinating to see the differences and similarities between cultures.
What does all this have to do with alternating vs direct current? Nothing, really, except that I found myself in a conversation about the topic with someone deeply involved in the power industry in the UAE. My class was an eclectic and international group of business people – all very smart and accomplished, but also mostly entirely new to the concept of scientific skepticism and without a formal science background. It was a great opportunity to gauge my American perspective against an international group.
I was struck, among other things, by how similar it was. I could have been talking to a similar crowd in the US. Sure, there was a layer of Arabic and Muslim culture on top, but otherwise the thinking and attitudes felt very familiar. Likely this is a result of the fact that Dubai is a wealthy international city. It is a good reminder that the urban-rural divide may be the most deterministic one in the world, and if you get urban and wealthy enough you tend to align with global culture.
Back to my conversation with the power industry exec – the power mix in the UAE is not very different from the US. They have about 20% nuclear (same as the US), 8% solar, and the rest fossil fuel, mostly natural gas. They have almost no wind and no hydropower. Their strategy to shift to low carbon power is all in on solar. They are rapidly increasing their power demand, and solar is the cheapest new energy. I don’t think their plan for the future is aggressive enough, but they are moving in the right direction.
What I did not encounter was any defensiveness about fossil fuels, denial of global warming, or any conspiracy nonsense. The UAE is the world’s 8th biggest oil producer, so I would not have been surprised if I had. At the end of the day, the science and the tradeoffs are pretty much the same. There are regional differences in terms of how much wind, sunshine, and water there is locally, and that affects the calculus, but everyone is dealing with the same technologies. But I still found it fascinating to be in a conversation with someone half-way around the world, from an entirely different culture, and hit all the same talking points that I have been discussing for years. We even discussed net metering (he was in favor) and Germany’s poor decision to shut down their nuclear industry.
And, of course, the conversation turned to the question of AC vs DC (which he brought up). Most nerds and technology history buffs know that there was a big fight between Edison and Tesla over whether the electricity infrastructure in the US should use alternating or direct current. Edison favored direct current, while Tesla favored alternating current. AC won out largely because it could be transmitted more efficiently over long distances and its voltage easily altered with transformers.
The question of AC vs DC is raising its head again, however, because technology has changed. I am not an expert in electrical engineering, but I have had enough conversations with experts to know that this topic is very technical and complex. So I am not going to try to explain the technical details, just discuss some of the main issues. There are essentially two reasons to rethink the AC vs DC choice. The first is that as technology has improved, the advantage of AC over DC has diminished. The transformer advantage still exists, but transmission efficiency is not as big an issue as it once was. AC and DC are not very different over short and medium distances, though AC still has an increasing advantage over longer distances.
But the second reason has to do with solar power and electric vehicles. An increasing number of homes have both, and even battery backup to boot. And, in the opinion of many experts, with whom I agree, it is a reasonable goal to maximize the number of residential homes that have all three – solar, EVs, and battery backup. All three of these technologies are natively DC. So in such a home the solar panels’ DC output is converted to AC by an inverter, and then converted back to DC to charge the EV. You can have either DC-coupled or AC-coupled battery systems – in the former the power remains DC, while in the latter it is converted to AC and then back to DC for storage in the battery. DC-coupled systems are more efficient (roughly 97.5% vs 90%).
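To make that efficiency gap concrete, here is a quick back-of-the-envelope sketch. The 97.5% and 90% figures are the coupling efficiencies cited above; the daily solar output is an arbitrary illustrative number, not a real household measurement:

```python
# Rough comparison of DC-coupled vs AC-coupled home battery storage.
# Efficiency figures (97.5% and 90%) are the round-trip numbers cited above;
# the daily solar output is a made-up illustrative value.

DC_COUPLED_EFFICIENCY = 0.975
AC_COUPLED_EFFICIENCY = 0.90

def energy_delivered(kwh_generated: float, efficiency: float) -> float:
    """Energy that actually reaches the load after storage losses."""
    return kwh_generated * efficiency

daily_solar_kwh = 30.0  # hypothetical rooftop output for one day

dc = energy_delivered(daily_solar_kwh, DC_COUPLED_EFFICIENCY)
ac = energy_delivered(daily_solar_kwh, AC_COUPLED_EFFICIENCY)

print(f"DC-coupled: {dc:.2f} kWh delivered")    # 29.25 kWh
print(f"AC-coupled: {ac:.2f} kWh delivered")    # 27.00 kWh
print(f"Difference: {dc - ac:.2f} kWh per day") # 2.25 kWh
```

Over a year, that hypothetical 2.25 kWh daily gap adds up to several hundred kWh of electricity lost to conversion.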
In a modern home, therefore, the power could flow entirely as DC – from the panels to the battery to the EV (which is just another battery). The car battery could then also more easily be used as additional storage without conversion. Every time you convert from AC to DC or back you lose about 3% of the energy, and an all-DC system would avoid those losses.
In terms of appliances, it’s a mix. Many of the bigger appliances, like refrigerators and dishwashers, use AC, while most smaller devices, like computers, light bulbs, and microwaves, ultimately run on DC power. To have a 100% DC home, therefore, all that is necessary is to convert a few large appliances to DC, or for them to have their own DC-to-AC converter. DC also makes sense for a distributed power system, rather than distant centralized power production – microgrids could be all DC. All of this leads some experts to advocate for a future with residential DC power grids and all-DC homes. We would likely still need a hybrid system, keeping AC for long-distance transmission. AC also retains the advantage of easy voltage conversion, but that is not a deal-breaker for DC if the home system ran at a single voltage.
The largest barrier, of course, is technological inertia. It is difficult to change over entire industries and their standards. At this point it’s difficult to predict what will happen, and the default will be no change. I suspect, however, that this conversation will grow as the penetration of solar power, home battery backup, and EVs increases. At some point “going DC” for the home may be a thing, with the advantage of knocking 10% or so off of electricity demand (by eliminating multiple conversions).
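That rough 10% figure is easy to sanity-check: if each AC/DC conversion stage loses about 3% (the figure mentioned earlier), the losses compound multiplicatively along the chain. The specific number of conversions in a real home varies; the range below is illustrative, not a measured household profile:

```python
# Compounding of per-conversion losses, assuming ~3% lost per AC<->DC
# conversion (figure cited above). The number of conversions in any
# real home varies; this range is purely illustrative.

PER_CONVERSION_LOSS = 0.03

def remaining_fraction(num_conversions: int, loss: float = PER_CONVERSION_LOSS) -> float:
    """Fraction of energy left after a chain of lossy conversions."""
    return (1 - loss) ** num_conversions

# Example chain: solar DC -> AC (inverter) -> DC (battery) -> AC (loads)
for n in range(1, 5):
    lost = 1 - remaining_fraction(n)
    print(f"{n} conversion(s): {lost:.1%} of the energy lost")
```

Three conversions lose about 8.7% and four about 11.5%, which is how you land in the ballpark of the 10% savings mentioned above.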
It may happen first in developing nations and those currently building a lot of new infrastructure, like the UAE, leaving older industrialized nations with their crusty technology.
The post AC vs DC and other Power Questions first appeared on NeuroLogica Blog.