My recent article on social media fostered good engagement, so I thought I would follow up with a discussion of the most urgent question regarding social media – should the US ban TikTok? The Biden administration signed into law legislation that would ban the social media app TikTok on January 19th (deliberately the day before Trump takes office) unless it is sold off to a company that is not believed to be beholden to the Chinese government. The law states it must be divested from ByteDance, the Chinese parent company that owns TikTok. This raises a few questions – is this constitutional, are the reasons for it legitimate, how will it work, and will it work?
A federal appeals court ruled that the ban is constitutional and can take place, and that decision is now before the Supreme Court. We will know soon how they rule, but indicators are they are leaning towards allowing the law to take effect. Trump, who previously tried to ban TikTok himself, now supports allowing the app and his lawyers have argued that he should be allowed to solve the issue. He apparently does not have any compelling legal argument for this. In any case, we will hear the Supreme Court’s decision soon.
If the ban is allowed to take place, how will it work? First, if you are not aware, TikTok is a short-form video-sharing app. I have been using it extensively over the past couple of years, along with most of the other popular platforms, to share skeptical videos, and have had good engagement. Apparently TikTok is popular because it has a good algorithm that people like. TikTok is already banned on devices owned by Federal employees. The new ban will force app stores in the US to remove the TikTok app and not allow any further updates or support. Existing TikTok users will continue to be able to use their existing apps, but they will not be able to get updates, so the app will eventually become unusable.
ByteDance will have time to comply with the law by divesting TikTok before the app becomes unusable, and many believe they are essentially waiting to see if the law will actually take effect. So it is possible that even if the law does take effect, not much will change for existing users – unless ByteDance refuses to comply, in which case the app will slowly fade away. In that case it is likely that the two main existing competitors, YouTube Shorts and Instagram, will benefit.
Will users be able to bypass the ban? Possibly. You can use a virtual private network (VPN) to change your apparent location to download the app from foreign stores. But even if it is technically possible, this would be a significant hurdle for some users and likely reduce use of the app in the US.
That is the background. Now let’s get to the most interesting question – are the stated reasons for wanting to ban the app legitimate? This is hotly debated, but I think there is a compelling argument to be made for the risks of the app, and they essentially echo many of the points I made in my previous post. Major social media platforms undeniably have an influence on the broader culture. If the platforms are left entirely open, this allows bad actors unfettered access to tools to spread misinformation, disinformation, radicalization, and hate speech. I have stated that my biggest fear is that these platforms will be used by authoritarian governments to control their society and people. The TikTok ban is about a hostile foreign power using an app to undermine the US.
There are essentially two components to the fear. The first is that TikTok is gathering information on US citizens that can then be weaponized against them or our society. The second is that the Chinese government will use TikTok to spread pro-communist China propaganda and anti-American propaganda, sow civil strife, and influence American politics. We actually don’t have to speculate about whether or not China will do this – TikTok has already admitted that they have identified and shut down massive Chinese government campaigns to influence US users – one with 110,000 accounts, and another with 141,000 accounts. You might argue that the fact that they took them down means they are not cooperating with the Chinese government, but we cannot conclude that. They may be making a public show of taking down some campaigns while leaving others in place. The more important fact here is that the Chinese government is using TikTok to influence US politics and society.
There are also more subtle ways than massive networks of accounts to influence the US through TikTok. American TikTok is different from the Chinese version, and analyses have found that the Chinese version has better quality informational content and more educational content than the US version. China could be playing the long game (actually, not that long, in my opinion) of dumbing down the US. Algorithms can put a light thumb on the scale of information in ways that have massive effects.
It was asked in the comments to my previous post whether all this discussion is premised on the notion that people are easily manipulated pawns in the hands of social media giants. Unfortunately, the answer to that question is a pretty clear yes. There is a lot of social psychology research to show that influence campaigns are effective. Obviously not everyone is affected, but moving the needle 10 or 20 percentage points (or even a lot less) can have a big impact on society. Again – I have been on TikTok for over a year. It is flooded with videos that seem crafted to spread ignorance and anti-intellectualism. I know that most of them are not crafted specifically for this purpose – but that is the effect they have, and if one did intend to craft content for this purpose, they could not do a better job than what is already on the platform. There is also a lot of great science communication content, but it is drowned out by nonsense.
Social media, regardless of who owns it, has all the risks and problems I discussed. But it does seem reasonable that we also do not want to add another layer of having a foreign adversary with significant influence over the platform. Some argue that it doesn’t really matter – social media can be used for influence campaigns regardless of who owns them. But that is hardly reassuring. At the very least I would argue we don’t really know, and this is probably not an experiment we want to add on top of the social media experiment itself.
The post Should the US Ban TikTok? first appeared on NeuroLogica Blog.
One of the things I have come to understand from following technology news for decades is that perhaps the most important breakthroughs, and often the least appreciated, are those in material science. We can get better at engineering and making stuff out of the materials we have, but new materials with superior properties change the game. They make new stuff possible and feasible. There are many futuristic technologies that are simply not yet possible, waiting on the back burner for enough breakthroughs in material science to make them feasible. Recently, for example, I wrote about fusion reactors. Is the addition of high temperature superconducting material sufficient to get us over the finish line of commercial fusion, or are more material breakthroughs required?
One area where material properties are becoming a limiting factor is electronics, and specifically computer technology. As we make smaller and smaller computer chips, we are running into the limits of materials like copper to efficiently conduct electrons. Further advance is therefore not just about better technology, but better materials. Also, the potential gain is not just about making computers smaller. It is also about making them more energy efficient by reducing losses to heat when processors work. Efficiency is arguably now a more important factor, as we are straining our energy grids with new data centers to run all those AI and cryptocurrency programs.
This is why a new study detailing a new nanoconducting material is actually more exciting than it might at first sound. Here is the editor’s summary:
Noncrystalline semimetal niobium phosphide has greater surface conductance as nanometer-scale films than the bulk material and could enable applications in nanoscale electronics. Khan et al. grew noncrystalline thin films of niobium phosphide—a material that is a topological semimetal as a crystalline material—as nanocrystals in an amorphous matrix. For films with 1.5-nanometer thickness, this material was more than twice as conductive as copper. —Phil Szuromi
Greater conductance at nanoscale means we can make smaller transistors. The study also claims that this material has lower resistance, which means greater efficiency – less waste heat. They also claim that manufacturing is similar to that of existing transistors, at similar temperatures, so it’s feasible to mass produce (at least it seems like it should be). But what about niobium? Another lesson I have learned from examining technology news is to look for weaknesses in any new technology, including the necessary raw materials. I see lots of battery and electronics news, for example, about designs that use platinum, which means they are not going to be economical.
Niobium is considered a rare metal, and is therefore relatively expensive, about $45 per kilogram. (By comparison copper goes for $9.45 per kg.) Most of the world’s niobium is sourced in Brazil (so at least it’s not a hostile or unstable country). It is not considered a “precious” metal like gold or platinum, so that is a plus. About 90% of niobium is currently used as a steel alloy, to make steel stronger and tougher. If we start producing advanced computer chips using niobium, what would that do to world demand? How would that affect the price of niobium? By definition we are talking about tiny amounts of niobium per chip – the wires are only a few molecules thick – but the world produces a lot of computer chips.
How all this will sort out is unclear, and the researchers don’t get into that kind of analysis. They basically are concerned with the material science and proving their concept works. This is often where the disconnect is between exciting-sounding technology news and ultimate real-world applications. Much of the stuff we read about never comes to fruition, because it simply cannot work at scale or is too expensive. Some breakthroughs do work, but we don’t see the results in the marketplace for 10-20 years, because that is how long it took to go from the lab to the factory. I have been doing this long enough now that I am seeing the results of lab breakthroughs I first reported on 20 years ago.
Even if a specific demonstration is not translatable into mass production, however, material scientists still learn from it. Each new discovery increases our knowledge of how materials work and how to engineer their properties. So even when the specific breakthrough may not translate, it may lead to other spin-offs which do. This is why such a proof-of-concept is exciting – it shows us what is possible and potential pathways to get there. Even if that specific material may not ultimately be practical, it still is a stepping stone to getting there.
What this means is that I have learned to be patient, to ignore the hype, but not dismiss science entirely. Everything is incremental. It all adds up and slowly churns out small advances that compound over time. Don’t worry about each individual breakthrough – track the overall progress over time. From 2000 to today, lithium-ion batteries have about tripled their energy capacity, for example, while solar panels have doubled their energy production efficiency. This was due to no one breakthrough, just the cumulative effects of hundreds of experiments. I still like to read about individual studies, but it’s important to put them into context.
The post New Material for Nanoconductors first appeared on NeuroLogica Blog.
Recently Meta decided to end their fact-checking program on Facebook and Instagram. The move has been both hailed and criticized. They are replacing the fact-checkers with an X-style “community notes” system. Mark Zuckerberg summed up the move this way: “It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
That is the essential tradeoff – whether you think false positives are more of a problem or false negatives. Are you concerned more with enabling free speech or minimizing hate speech and misinformation? Obviously both are important, and an ideal platform would maximize both freedom and content quality. It is becoming increasingly apparent that the choice matters. The major social media platforms are not mere vanity projects; they are increasingly the main source of news and information, and foster ideological communities. They affect the functioning of our democracy.
Let’s at least be clear about the choice that “we” are making (meaning that Zuckerberg is making for us). Maximal freedom without even basic fact-checking will significantly increase the amount of misinformation and disinformation on these platforms, as well as hate-speech. Community notes is a mostly impotent method of dealing with this. Essentially this leads to crowd-sourcing our collective perception of reality.
Free-speech optimists argue that this is all good, and that we should let the marketplace of ideas sort everything out. I do somewhat agree with this, and the free marketplace of ideas is an essential element of any free and open society. It is a source of strength. I also am concerned about giving any kind of censorship power to any centralized authority. So I buy the argument that this may be the lesser of two evils – but it still comes with some significant downsides that should not be minimized.
What I think the optimists are missing (whether out of ignorance or intention) is that a completely open platform is not a free marketplace of ideas. The free marketplace assumes that everyone is playing fair and acting in good faith. This is a 2005 level of naivete. It leaves the platform open to people who are deliberately exploiting it and using it as a tool of political disinformation. It also leaves it open to motivated and dedicated ideological groups that can flood the zone with extreme views. Corporations can use the platform for their own influence campaigns and self-serving propaganda. This is not a free and fair marketplace – it means people with money, resources, and motivation can dominate the narrative. We are simply taking control away from fact-checkers and handing it over to shadowy groups with nefarious motivations. And don’t think that authoritarian governments won’t find a way to thrive in this environment also.
So we have ourselves a Catch-22. We are damned if we do and damned if we don’t. This does not mean, however, that some policies are not better than others. There is a compromise in the middle that allows for the free marketplace of ideas without making it trivially easy to spread disinformation, to radicalize innocent users of the platform, and to allow for ideological capture. I don’t know exactly what those policies are; we need to continue to experiment and find them. But I don’t think we should throw up our hands in defeat (and acquiescence).
I think we should approach the issue like an editorial policy. Having editorial standards is not censorship. But who makes and enforces the editorial standards? Independent, transparent, and diverse groups with diffuse power and appeals processes would be a place to start. No such process will be perfect, but it is likely better than having no filter at all. Such a process should have a light touch, err on the side of tolerance, and focus on the worst blatant disinformation.
I also think that we need to take a serious look at social media algorithms. This also is not censorship, but Facebook, for example, gets to decide how to recommend new content to you. They tweak the algorithms to maximize engagement. How about tweaking the algorithms to maximize quality of content and diverse perspectives instead?
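As a toy illustration of the kind of tweak I mean (the signals and weights here are entirely hypothetical, not how any actual platform ranks content), a feed algorithm could score posts on more than predicted engagement:

```python
# Toy feed-ranking sketch: blend engagement with quality and diversity signals.
# The signal names and the weights are hypothetical, for illustration only.

WEIGHTS = {"engagement": 0.4, "quality": 0.4, "diversity": 0.2}

def rank_score(post):
    return (WEIGHTS["engagement"] * post["predicted_engagement"]
            + WEIGHTS["quality"] * post["source_quality"]        # e.g. a fact-check track record
            + WEIGHTS["diversity"] * post["viewpoint_novelty"])  # how different from what the user already sees

posts = [
    {"id": "outrage-bait", "predicted_engagement": 0.9, "source_quality": 0.2, "viewpoint_novelty": 0.1},
    {"id": "solid-explainer", "predicted_engagement": 0.5, "source_quality": 0.9, "viewpoint_novelty": 0.6},
]

feed = sorted(posts, key=rank_score, reverse=True)
print([p["id"] for p in feed])  # ['solid-explainer', 'outrage-bait'] under these weights
```

The hard part, of course, is not the arithmetic but deciding who defines and measures “quality” and “diversity” – which circles back to the editorial-standards question above.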
We may need to also address the question of whether or not giant social media platforms represent a monopoly. Let’s face it, they do, and they also concentrate a lot of media into a few hands. We have laws to protect against such things because we have long recognized the potential harm of so much concentrated power. Social media giants have simply side-stepped these laws because they are relatively new and exist in a gray zone. Our representatives have failed to really address these issues, and the public is conflicted so there isn’t a clear political will. I think the public is conflicted partly because this is all still relatively new, but also as a result of a deliberate ideological campaign to sow doubt and confusion. The tech giants are influencing the narrative on how we should deal with tech giants.
I know there is an inherent problem here – social media outlets work best when everyone is using them, i.e. they have a monopoly. But perhaps we need to find a way to maintain the advantage of an interconnected platform while breaking up the management of that platform into smaller pieces run independently. The other option is to just have a lot of smaller platforms, but what is happening there is that different platforms are becoming their own ideological echo chambers. We seem to have a knack for screwing up every option.
Right now there does not seem to be any way for any of these things to happen. The tech giants are in control and have little incentive to give up their power and monopoly. Government has been essentially hapless on this issue. And the public is divided. Many have a vague sense that something is wrong, but there is no clear consensus on what exactly the problem is and what to do about it.
The post What Kind of Social Media Do We Want? first appeared on NeuroLogica Blog.
How close are we to having fusion reactors actually sending electric power to the grid? This is a huge and complicated question, and one with massive implications for our civilization. I think we are still at the point where we cannot count on fusion reactors coming online anytime soon, but progress has been steady and in some ways we are getting tantalizingly close.
One company, Commonwealth Fusion Systems, claims it will have completed a fusion reactor capable of producing net energy by “the early 2030’s”. A working grid-scale fusion reactor within 10 years seems really optimistic, but there are reasons not to dismiss this claim entirely out of hand. After doing a deep dive my take is that the 2040’s or even 2050’s is a safer bet, but this may be the fusion design that crosses the finish line.
Let’s first give the background and reasons for optimism. I have written about fusion many times over the years. The basic idea is to fuse lighter elements into heavier elements, which is what fuels stars, in order to release excess energy. This process releases a lot of energy, much more than fission or any chemical process. In terms of just the physics, the best elements to fuse are one deuterium atom to one tritium atom, but deuterium to deuterium is also feasible. Other fusion elements are simply way outside our technological capability and so are not reasonable candidates.
There are also many reactor designs. Basically you have to squeeze the elements close together at high temperature so as to have a sufficiently high probability of fusion. Stars use gravitational confinement to achieve this condition at their cores. We cannot do that on Earth, so we use one of two basic methods – inertial confinement and magnetic confinement. Inertial confinement includes a variety of methods that squeeze hydrogen atoms together using inertia, usually from implosions. These methods have achieved ignition (burning plasma) but are not really a sustainable method of producing energy. Using laser inertial confinement, for example, destroys the container in the process.
By far the best method, and the one favored by physics, is magnetic confinement. Here too there are many designs, but the one that is closest to the finish line (and the one used by CFS) is called a tokamak design. This is a torus shaped in a specific way to control the flow of plasma just so, to avoid any kind of turbulence that would prevent fusion.
In order to achieve the energies necessary to create sustained fusion you need really powerful magnetic fields, and the industry has essentially been building larger and larger tokamaks to achieve this. CFS has the advantage of being the first to design a reactor using the latest high-temperature superconductors (HTS), which really are a game changer for tokamaks. They allow for a smaller design with more powerful magnets using less energy. Without these HTS magnets I don’t think there would even be a question of feasibility.
CFS is currently building a test facility called the SPARC reactor, which stands for the smallest possible ARC reactor, and ARC in turn stands for “affordable, robust, compact”. This is a test facility that will not be commercial. Meanwhile they are planning their first ARC reactor, which is commercial grid scale, in Virginia, and which they claim will produce 400 megawatts of power.
Reasons for optimism – the physics all seems to be good here. CFS was founded by engineers and scientists from MIT – essentially some of the best minds in fusion physics. They have mapped out the most viable path to commercial fusion, and the numbers all seem to add up.
Reasons for caution – they haven’t done it yet. This is not, at this point, so much a physics problem as an engineering problem. As they push to higher energies, and incorporate the mechanisms necessary to bleed off energy to heat water to run a turbine, they may run into problems they did not anticipate. They may hit a hurdle that suddenly adds 10 or 20 years to the development process. Again, my take is that the 2035 timeline assumes everything goes perfectly well. Any bumps in the road will keep adding years. This is a project at the very limits of our technology (as complex as going to the Moon), and delays are the rule, not the exception.
So – how close are they? The best so far is the JET tokamak, which produced 67% of the energy put into the plasma (a gain factor, or Q, of 0.67). That sounds close, but keep in mind, 100% is just break-even. Also – this is heat energy, not electricity. Modern fission reactors have about a 30% efficiency in converting heat to electricity, so that is a reasonable assumption for a fusion plant as well. And this is fusion energy gain, not total energy: it counts only the energy that goes into the plasma, not the total energy needed to run the reactor.
The bottom line is that they probably need to increase their energy output by an order of magnitude or more in order to be commercially viable. Just producing a little bit of net energy is not enough. They need massive excess energy (meaning electricity) in order to justify the expense. So really we are nowhere near net total energy in any fusion design. CFS is hoping that their fancy new HTS magnets will get them there. They actually might – but until they do, it’s still just an informed hope.
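To make that gap concrete, here is a rough back-of-the-envelope sketch. The 30% heat-to-electricity figure comes from the fission comparison above; the heating-system efficiency is an illustrative assumption, not a CFS number:

```python
# Back-of-the-envelope: what plasma gain Q is needed for net *electricity*?
# Illustrative assumptions only -- not CFS engineering figures.

eta_thermal = 0.30   # heat-to-electricity conversion, comparable to a fission plant
eta_heating = 0.50   # assumed wall-plug efficiency of the plasma heating systems

# Q = fusion power out / heating power delivered to the plasma.
# Electricity produced per unit of plasma heating power: eta_thermal * Q
# Electricity consumed to deliver that heating power:    1 / eta_heating
q_breakeven = 1 / (eta_thermal * eta_heating)
print(f"Q needed just to break even on electricity: {q_breakeven:.1f}")      # ~6.7

# JET's record corresponds to Q ~ 0.67
print(f"That is roughly {q_breakeven / 0.67:.0f}x beyond JET's best result")  # ~10x
```

And that ignores every other load in the plant, so the real bar is higher still – which is why “an order of magnitude or more” is, if anything, conservative.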
I do hope that my pessimism, born of decades of overhyped premature tech promises, is overcalling it in this case. I hope these MIT plasma jocks can get it done, somewhere close to the promised timeline. The sooner the better, in terms of global warming. Let’s explore for a bit what this would mean.
Obviously the advantage of fusion reactors like the planned ARC design, if it works, is that they produce a lot of carbon-free energy. They can be plugged into existing connections to the grid, and produce stable, predictable energy. They produce only low-level nuclear waste. They also have a relatively small land footprint for the energy produced. If the first ARC reactor works, we would need to build thousands around the world as fast as possible. If they are profitable, this will happen. But the industry can also be supported by targeted regulations. Such reactors could replace fossil fuel plants, and then eventually fission reactors.
Once we develop viable fusion energy, it is very likely that this will become our primary energy source literally forever – at least for hundreds if not thousands or tens of thousands of years. It gets hard to predict technology that far out, but there are really no candidates for advanced energy sources that are better. Matter-antimatter could theoretically work, but why bother messing around with antimatter, which is hard to make and contain? The advantage is probably not enough to justify it. Other energy sources, like black holes, are theoretical and extremely exotic, perhaps something for a civilization millions of years beyond where we are.
Even if some really advanced energy source becomes possible, fusion will likely remain in the sweet spot in terms of producing large amounts of energy cleanly and sustainably. Once we cross the line to being able to produce net total electricity with fusion, incremental advances in material science and the overall technology will just make fusion better – from that point forward, that is really all we need to do. There will likely still be a role for distributed energy like solar, but fusion will replace all centralized large sources of power.
The post Plan To Build First Commercial Fusion Reactor first appeared on NeuroLogica Blog.
The latest flap over drone sightings in New Jersey and other states in the Northeast appears to be – essentially nothing. Or rather, it’s a classic example of a mass panic. There are reports of “unusual” drone activity, which prompts people to look for drones, which results in people seeing drones or drone-like objects and therefore reporting them, leading to more drone sightings. Lather, rinse, repeat. The news media happily gets involved to maximize the sensationalism of the non-event. Federal agencies eventually comment in a “nothing to see here” style that just fosters more speculation. UFO and other fringe groups confidently conclude that whatever is happening is just more evidence for whatever they already believed in.
I am not exempting myself from the cycle either. Skeptics are now part of the process, eventually explaining how the whole thing is a classic example of some phenomenon of human self-deception, failure of critical thinking skills, and just another sign of our dysfunctional media ecosystem. But I do think this is a healthy part of the media cycle. One of the roles that career skeptics play is to be the institutional memory for weird stuff like this. We can put such events rapidly into perspective because we have studied the history and likely been through numerous such events before.
Before I get to that bigger picture, here is a quick recap. In November there were sightings in New Jersey of “mysterious” drone activity. I don’t know exactly what made them mysterious, but it led to numerous reports of other drone sightings. Some of those sightings were close to a military base, Joint Base McGuire-Dix-Lakehurst, and some people were concerned about a security threat. Even without the UFO/UAP angle, there is concern about foreign powers using drones for spying or potentially as a military threat. This is perhaps enhanced by all the reporting of the major role that drones are playing in the Russia-Ukraine war. Some towns in Southern New Jersey have temporarily banned the use of drones, and the FAA has also restricted some use.
A month after the first sightings, federal officials stated that the sightings that have been investigated have all turned out to be drones, planes mistaken for drones, and even stars mistaken for drones. None have turned out to be anything mysterious or nefarious. So the drones, it turns out, are mostly drones.
Also in November (which may or may not be related) a CT police officer came forward and reported a “UFO” sighting he had in 2022. Local news helpfully created a “reenactment” of the encounter (to accompany their breathless reporting), which is frankly ridiculous. The officer, Robert Klein, did capture the encounter on video with his smartphone. The video shows – a hovering light in the distance. That is all – 100% consistent with a drone.
So here’s the bigger picture – as technology evolves, so do sightings, to match that technology. Popular expectations also shape the sightings. Around the turn of the 20th century it was anticipated that someone would invent a flying machine, so there were lots of false sightings of such machines. After the first “flying saucer” was reported in 1947, UFO sightings often looked like flying saucers. As military aircraft increased in number and capability, sightings tracked along with them, being more common near military air bases. When ultralight aircraft became a thing, people reported UFOs that were silent floating craft (I saw one myself and was perplexed until I read in the news what it was). As rocket launches become more common, so do sightings of rocket launches mistaken for “UFOs”. There was the floating candle flap from over a decade ago – suddenly many people were releasing floating candles for celebrations, and people were reporting floating candle “UFOs”.
And now we are seeing a dramatic increase in drone activity. Drones are getting better, cheaper, and more common, so we should be having more drone sightings. This is not a mystery.
Interestingly there is one technological development that does not lead to more sightings but does lead to more evidence – smart phones. Most people are now walking around all the time with a camera and video. Just like with the CT cop, we not only have his sensational report but an accompanying video. What does this dramatic increase in photo and video evidence show? Mundane objects and blurry nothings. What do they not show? Unambiguous alien spacecraft. This is the point at which alien true-believers insert some form of special pleading to explain away the lack of objective evidence.
This pattern, of sightings tracking with technology, goes beyond alien activity. We see the same thing with ghost photos. It turns out that the specific way in which ghosts manifest on photographic film is highly dependent on camera technology. What we are actually seeing is different kinds of camera artifacts resulting from specific camera technology, and those artifacts being interpreted as ghosts or something paranormal. So back in the day when it was possible to accidentally create a double exposure, we had lots of double-exposure ghosts. Cameras whose shutters can create the “golden door” illusion created golden door phenomena. Cameras with straps created camera strap ghosts. When high-powered flashes became common we started to see lots of flashback ghosts. Now we are seeing lots of AI-generated fakes.
All of this is why it is important to study and understand history. Often those enamored of the paranormal or the notion of aliens are seeing the phenomenon in a tiny temporal bubble. It seems like this is all new and exciting, and major revelations are right around the corner. Of course it has seemed this way for decades, or even hundreds of years for some phenomena. Meanwhile it’s the same old thing. This was made obvious to me when I first read Sagan’s 1972 book, UFOs: A Scientific Debate. I read this three decades after it was first published – and virtually nothing had changed in the UFO community. It was deja vu all over again. I had the same reaction to the recent Pentagon UFO thing – same people selling the same crappy evidence and poor logic.
New cases are occasionally added, and as I said as the technology evolves so does some of the evidence. But what does not change is people, who are still making the same poor arguments based on flimsy evidence and dodgy logic.
The post The Jersey Drones Are Likely Drones first appeared on NeuroLogica Blog.
Some narratives are simply ubiquitous in our culture (every culture has its universal narratives). Sometimes these narratives emerge out of shared values, like liberty and freedom. Sometimes they emerge out of foundational beliefs (the US still has a puritanical bent). And sometimes they are the product of decades of marketing. Marketing-based narratives deserve incredible scrutiny because they are crafted to alter the commercial decision-making of people in society, not for the benefit of society or the public, but for the benefit of an industry. For example, I have tried to expose the fallacy of the “natural is always good, and chemicals are always bad” narrative. Nature, actually, is quite indifferent to humanity, and everything is made of chemicals.
Another narrative that is based entirely on propaganda meant to favor one industry and demonize its competition is the notion that organic farming is better for health and better for the environment. Actually, there is no evidence of any nutritional or health advantage from consuming organic produce. Further – and most people I talk to find this claim shocking – organic farming is worse for the environment than conventional or even “factory” farming. Stick with me and I will explain why this is the case.
A recent article in the NYT by Michael Grunwald nicely summarizes what I have been saying for years. First let me explain why I think there is such a disconnect between reality and public perception. This gets back to the narrative idea – people tend to view especially complex situations through simplistic narratives that give them a sense of understanding. We all do this because the world is complicated and we have to break it down. There is nothing inherently wrong with this – we use schematics, categories, and diagrams to simplify complex reality and chunk it into digestible bits. But we have to understand that this is what we are doing, and how it may distort our understanding of reality. There are also better and worse ways to do this.
One principle I like to use as a guide is the Moneyball approach. This refers to Paul DePodesta, who devised a new method of statistical analysis to find undervalued baseball players. Prior to DePodesta, talent scouts would find high-value players to recruit – players who had impressive classic statistics, like batting average. Teams would then pay high sums for these star players. DePodesta, however, realized that players without star-quality stats might still be solid players, and for their price could have a disproportionately positive effect on a team’s performance. If, therefore, you have a finite amount of funds to spread out over a team’s players, you might be better off shoring up your players at the low end rather than paying huge sums for star players. Famously, this approach worked extremely well (first applied to the Oakland Athletics).
So let’s “Moneyball” farming. We can start with the premise that we have to produce a certain amount of calories in order to feed the world. Even if we consider population reduction a long-term solution, it is a really long-term solution if we limit ourselves to ethically acceptable methods. I will add as a premise that it is not morally or politically feasible to reduce the human population through deliberate starvation. Right now there are 8.2 billion humans on Earth. Estimates are this will rise to about 10 billion before the population starts to come down again through ethical methods like poverty mitigation and better human rights. So for the next hundred years or so we will have to feed 8+ billion people.
If our goal is to feed humanity while minimizing any negative effect on the environment, then we have to consider what all the negative effects are of farming. As Grunwald points out – they are huge. Right now we are using about 38% of the land on Earth for farming. We are already using just about all of the arable land – arable land is actually a continuum, so it is more accurate to say we are using the most arable land. Any expansion of farmland will therefore expand into less and less arable land, at greater and greater cost and lower efficiency. Converting a natural ecosystem, whether a prairie, forest, meadow, or whatever, into farmland is what has, by far, the greatest negative effect on the ecosystem. This is what causes habitat loss, isolates populations, reduces biodiversity, and uses up water. The difference between different kinds of farming is tiny compared to the difference between farming and natural ecosystems.
This all means that the most important factor, by far, in determining the net effect of calorie production for humans on the environment is the amount of land dedicated to all the various kinds of farming. Organic farming simply uses more land than conventional farming – 20-40% more land on average. This fact overwhelms any other alleged advantage of organic farming. I say alleged because organic farms can and many do use pesticides – they just use natural pesticides, which are often less effective, requiring more applications. Sometimes they also rely on tilling, which releases carbon from the soil.
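To give a sense of the scale that 20-40% figure implies, here is a rough illustration. The global cropland baseline and the organic share are approximate, illustrative assumptions; only the yield penalty comes from the numbers above:

```python
# Rough scale of the organic land penalty. Baseline and share are illustrative assumptions.

cropland_ha = 1.5e9        # approximate global cropland, hectares (assumption for scale)
organic_share = 0.30       # hypothetical: 30% of production shifts to organic
extra_land_factor = 0.30   # midpoint of the 20-40% extra-land range cited above

extra_ha = cropland_ha * organic_share * extra_land_factor
print(f"Extra farmland needed: ~{extra_ha / 1e6:.0f} million hectares")  # ~135 million ha
```

That extra land has to come from somewhere, and per the argument above it would come from converting natural ecosystems – which is the dominant environmental cost of farming.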
But even if we compare maximally productive farming to the most science-based regenerative farming techniques, designed to minimize pesticide use and optimize soil health – maximally efficient farming wins the Moneyball game. It’s no contest. Also, the advantage of efficient factory farming will only get greater as agricultural science and technology improves. GMOs, for example, have the potential for massive improvements in crop efficiency, leaving organic farming progressively in the dust.
But all this does not fit the cultural narrative. We have been fed this constant image of the gentle farm, using regenerative practices, protecting the soil, with local mom and pop farmers producing food for local consumption. It’s a nice romantic image, and I have no problem with having some small local farms growing heirloom produce for local consumption. But this should be viewed as a niche luxury – not the primary source of our calories. Eating locally grown food from such farms is, in a way, a selfish act of privilege. It is condemning the environment so you can feel good about yourself. Again, it’s fine in moderation. But we need to get 95% of our calories from factory farms that are brutally efficient. This also does not mean that factory farms should not endeavor to be environmentally friendly, as long as it does not come at the cost of efficiency.
At this point many people will point out that we can improve farming efficiency by eliminating meat. It is true that overproducing meat for calories is hugely inefficient. But so is underproducing meat. What the evidence shows is that maximal efficiency comes from using each parcel of land for its optimal use. Grazing land for animals is in many cases the optimal use. Cattle, for example, can convert a lot of non-edible calories into edible calories. And finishing lots can also use low-grade feed not fit for humans to pack on high-grade calories for humans. Yes – many industrialized nations consume too much meat. Part of optimizing efficiency is also optimizing the ratio of which kinds of calories we consume. But zero meat is not maximally efficient. Also – half our fertilizer comes from manure, and we can’t just eliminate the source of half our fertilizer without creating a disaster.
It’s a complicated system. We no longer, however, have the luxury of just letting everyone do what they want to do and what they think is in their best interest. Optimally there would be some voluntary coordination for the world’s agricultural system to maximize efficiency and minimize land use. This can come through science-based standards, and funding to help poorer countries have access to more modern farming techniques, rather than just converting more land for inefficient farming.
But first we have to dispense with the comforting but ultimately fictional narrative that the old gentle methods of farming are the best. We need science-based maximal efficiency.
The post Factory Farming is Better Than Organic Farming first appeared on NeuroLogica Blog.
A recent BBC article highlights some of the risks of the new age of social media we have crafted for ourselves. The BBC investigated the number one ranked UK podcast, Diary of a CEO with host Steven Bartlett, for the accuracy of the medical claims recently made on the show. While the podcast started out focusing on tips from successful businesspeople, it has recently turned toward unconventional medical opinions, as this has boosted downloads.
“In an analysis of 15 health-related podcast episodes, BBC World Service found each contained an average of 14 harmful health claims that went against extensive scientific evidence.”
These include showcasing an anti-vaccine crank, Dr. Malhotra, who claimed that the “Covid vaccine was a net negative for society”. Meanwhile the WHO estimates that the COVID vaccine saved 14 million lives worldwide. A Lancet study estimates that in the European region alone the vaccine saved 1.4 million lives. This number could have been greater were it not for the very type of antivaccine misinformation spread by Dr. Malhotra.
Another guest promoted the keto diet as a treatment for cancer. Not only is there no evidence to support this claim, but dietary restrictions while undergoing treatment for cancer can be very dangerous and imperil the health of cancer patients.
This reminds me of the 2014 study that found that, “For recommendations in The Dr Oz Show, evidence supported 46%, contradicted 15%, and was not found for 39%.” Of course, evidence published in the BMJ does little to counter misinformation spread on extremely popular shows. The BBC article highlights the fact that in the UK podcasts are not covered by the media regulator Ofcom, which has standards of accuracy and fairness for legacy media.
I have discussed previously the double-edged sword of social media. It did democratize information publishing and has made it easier for experts to communicate directly with the public. But this has come at the expense of quality control – there is now no editorial filter, so the public is overwhelmed with low quality information, misinformation, and disinformation. I think it’s difficult to argue that this was a good trade-off for society, at least in the short run.
Journalism has never been perfect (nothing is), but at least there are standards and an editorial process. Much of those standards, however, were just norms. Even back in the 1980s there was a lot of handwringing about erosion of those norms by mass media. I remember those quaint days when people worried about The Phil Donahue Show, which dominated daytime television by having on sensational guests. Donahue justified the erosion of quality standards he was pioneering by saying that you have to get viewers. Then, occasionally, you can slip in some quality content. But of course Donahue was soon eclipsed by daytime talk shows that abandoned any pretense of being interested in quality content, and which fought to outdo each other in brazen sensationalism.
Perhaps most notorious was Morton Downey Jr., who all but encouraged fights on set. He did not last long, and in a desperate attempt to remain relevant even faked getting attacked by neo-nazis. His hoax was busted, however, because he drew the swastika on himself in the mirror and drew it backwards. Downey was eclipsed by so-called “trash TV” shows like Jerry Springer. These shows were little more than freak shows, without any pretense of being “news” or informative.
But at the same time we saw the rise of shows that did seem to go back to more of a Phil Donahue format of spreading information, not just highlighting the most dysfunctional lives they could find. The Queen of this format was Oprah Winfrey. Unfortunately, her stated goal was to spread her particular brand of spirituality, and she did it very well. She spawned many acolytes, including Dr. Oz, whose shows were based almost entirely on profitable misinformation.
So even before social media hit, there were major problems with the quality of information being fed to the public through mass media. Social media just cranked up the misinformation by a couple orders of magnitude, and swept away any remaining mechanisms of quality control. Social media gives a few superspreaders of misinformation the ability to have a magnified effect. Misinformation can be favored by algorithms that prioritize engagement over all else – and not just misinformation, but radicalizing information. One result is that people trust all news sources less. This leads to a situation where everyone can just believe what suits them, because all information is suspect. In some social media cultures it seems that truth is irrelevant – it’s no longer even a meaningful concept. These are trends that imperil democracy.
Steven Bartlett defends the low quality of the health information he spreads in the laziest of ways, saying that this is about free speech and airing opposing opinions. He is essentially absolving himself of any journalistic responsibility, so that he can be free to pursue maximal audience size at the expense of quality information. Of course, in an unregulated market that is the inevitable result. Most people will consume the information that most people consume, with popularity being driven by sensationalism and ideological support, not quality. Again – this is nothing new. It’s now just algorithmically assured, and there are no longer any brakes to slow the spread of misinformation. Worse, ideological and bad actors have learned how to exploit this situation to spread politically motivated disinformation.
Worse still, authoritarian governments now have a really easy time controlling information and therefore their populations. We may have (and this is my worst fear) created the ultimate authoritarian tools. In the big picture of history, this may lead to a ratcheting of societies in the authoritarian direction. We likely won’t see this happening until it’s too late. I know this will be triggering to many partisans, but I think it is reasonable to argue that we are seeing this in the US with the election of Trump, something that would likely have been impossible 20 years ago. His election (I know, it’s difficult to make sweeping conclusions like this) was partly due to the spread of misinformation and the successful leveraging of social media to control the narrative.
I don’t have any clear solutions to all this. We just have to find a way through it somehow. Individual critical thinking and media savvy are essential. But we do need to also have a conversation about the information ecosystem we have created for our societies.
The post Podcast Pseudoscience first appeared on NeuroLogica Blog.
Why does news reporting of science and technology have to be so terrible at baseline? I know the answers to this question – lack of expertise, lack of a business model to support dedicated science news infrastructure, the desire for click-bait and sensationalism – but it is still frustrating that this is the case. Social media outlets do allow actual scientists and informed science journalists to set the record straight, but they are also competing with millions of pseudoscientific, ideological, and other outlets far worse than mainstream media. In any case, I’m going to complain about it while I try to do my bit to set the record straight.
I wrote about nuclear diamond batteries in 2020. The concept is intriguing, but the applications are very limited and the cost likely prohibitive for most uses. The idea is that you take a bit of radioactive material and surround it with “diamond-like carbon”, which serves two purposes. It prevents leaking of radiation to the environment, and it captures the beta decay and converts it into a small amount of electricity. This is not really a battery (a storage of energy) but an energy cell that produces energy; still, it would have some battery-like applications.
The first battery based on this concept, capturing the beta decay of a radioactive substance to generate electricity, was made in 1913 by physicist Henry Moseley. So yes, despite the headlines about the “first of its kind” whatever, we have had nuclear batteries for over a hundred years. The concept of using diamond-like carbon goes back to 2016, with the first prototype created in 2018.
So of course I was disappointed when the recent news reporting on another such prototype declared this a “world first” without putting it into any context. It is reporting on a new prototype that does have a new feature, but it makes it sound like this is the first nuclear battery, when it’s not even the first diamond nuclear battery. The new prototype is a diamond nuclear battery using carbon-14 as the beta decay source. They make diamond-like carbon out of C-14 and surround it with diamond-like carbon made from non-radioactive carbon. C-14 has a half-life of 5,700 years, so they claim the battery lasts over 5,000 years.
The previous prototype nuclear diamond batteries used nickel-63, including this Chinese prototype from earlier this year, and the one from 2018. So sure, it’s the first prototype using C-14 as the beta decay source. But that is hardly clear from the reporting, nor is there any mention of other nuclear batteries and previous diamond nuclear batteries.
But worse, the reporting says explicitly that this technology could replace the alkaline or lithium-ion batteries you currently use in your devices. This will likely never be the case, for a simple reason – these devices have an extremely low power density and specific power. The power generated by these small diamond batteries is tiny – on the order of 10 microwatts per cubic centimeter (the power density). So you would need a 100 liter battery to produce one watt, which is about what a cell phone uses (depending on which features you are using).
But wait – that is for Ni-63, which has a half-life of 101.2 years. C-14 has a half-life of 5,700 years, which means it would produce about 56 times less current for 56 times longer for a given mass. This is just math and is unavoidable. So using a C-14 battery you would need about 5,600 liters of battery to power a cell phone. They don’t mention that in the reporting.
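The arithmetic is easy to check (the ~10 microwatts per cubic centimeter figure is the order-of-magnitude power density cited above; the rest follows from it):

```python
# Checking the order-of-magnitude arithmetic for diamond nuclear batteries.

power_density_ni63 = 10e-6   # watts per cubic centimeter (order of magnitude, Ni-63 prototypes)
target_power = 1.0           # ~1 W, roughly what a cell phone draws

vol_ni63_cm3 = target_power / power_density_ni63
print(f"Ni-63 volume for 1 W: ~{vol_ni63_cm3 / 1000:.0f} liters")   # ~100 L

# C-14 decays ~56x more slowly than Ni-63, so a given mass yields ~56x less power
half_life_ratio = 5700 / 101.2
vol_c14_cm3 = vol_ni63_cm3 * half_life_ratio
print(f"C-14 volume for 1 W: ~{vol_c14_cm3 / 1000:.0f} liters")     # ~5,600 L
```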
This does not mean there are no potential applications for such batteries. Right now they are mainly used for deep space probes or satellites – devices that we will never be able to recharge or service and that may need only a small amount of energy. Putting cost aside, there are some other applications feasible based on physics. We could recycle C-14 from nuclear power plants and make it into diamond batteries. This is a good way to deal with nuclear waste, and it would produce electricity as a bonus. Warehouses of such batteries could be connected to the grid to produce a small amount of steady power. A building 100 meters by 100 meters by 20 meters tall, if it were packed with such batteries, could produce about 35 kilowatts of power. Hmmm – probably not worth it.
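For what it’s worth, the warehouse estimate follows from the same numbers (assuming, generously, 100% packing with C-14 cells at the power density discussed above):

```python
# The warehouse thought experiment: a 100 m x 100 m x 20 m building packed with C-14 cells.
# Uses the same ~10 uW/cm^3 (Ni-63) figure scaled down by the half-life ratio,
# and assumes 100% packing -- both generous simplifications.

power_density_c14 = 10e-6 / (5700 / 101.2)    # W per cm^3
building_volume_cm3 = (100 * 100 * 20) * 1e6  # 200,000 m^3 -> cm^3

total_power_w = power_density_c14 * building_volume_cm3
print(f"Total output: ~{total_power_w / 1000:.0f} kW")   # ~36 kW
```

Tens of kilowatts from an entire warehouse is why the conclusion stands: probably not worth it as grid power.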
The low power density is just a deal-killer for any widespread or large application. You would have to use very short half-life materials to get the power density up, but then of course the lifespan is much shorter. Still, for some applications, a battery with a half-life of a few years would be very useful.
Another potential application, however, is not as a primary power source but as a source to trickle-charge another battery that has a much higher power density. But again, we have the question – is it worth it? I doubt there are many applications outside NASA that would be considered cost effective. Still, it is an interesting technology with some potential applications, just mostly niche. But reporters cannot help but hype this technology as if you are going to have everlasting cell phone batteries soon.
The post Diamond Batteries Again first appeared on NeuroLogica Blog.