There really is a significant mystery in the world of cosmology. This, in my opinion, is a good thing. Such mysteries point in the direction of new physics, or at least a new understanding of the universe. Resolving this mystery – called the Hubble Tension – is a major goal of cosmology. This is a scientific cliffhanger, one which will unfortunately take years or even decades to sort out. Recent studies have now made the Hubble Tension even more dramatic.
The Hubble Tension refers to discrepancies in measuring the rate of expansion of the universe using different models or techniques. We have known since 1929 that the universe is not static but expanding. This was the famous discovery of Edwin Hubble, who noticed that galaxies farther from Earth have a greater redshift, meaning they are moving away from us faster. This can only be explained as an expanding universe – everything (not gravitationally bound) is moving away from everything else. This became known as Hubble’s Law, and the rate of expansion as the Hubble Constant.
Then in 1998 two teams, the Supernova Cosmology Project and the High-Z Supernova Search Team, analyzing data from Type 1a supernovae, found that the expansion rate of the universe is actually accelerating – it is faster now than in the distant past. This discovery won the Nobel Prize in physics in 2011 for Adam Riess, Saul Perlmutter, and Brian Schmidt. The problem remains, however, that we have no idea what is causing this acceleration, or even any theory about what might have the necessary properties to cause it. This mysterious force was called “dark energy”, and instantly became the dominant form of mass-energy in the universe, making up 68-70% of the universe.
I have seen the Hubble Tension framed in two ways – as a disconnect between what our cosmological models predict and measurements of the rate of expansion, or as a disagreement between different methods of measuring that expansion rate. The two main methods of measuring the expansion rate are using Type 1a supernovae and measuring the cosmic microwave background radiation. Type 1a supernovae are considered standard candles because they have roughly the same absolute magnitude (intrinsic brightness). They are white dwarf stars in a binary system that are siphoning off mass from their partner. When they reach a critical mass, they go supernova. So every Type 1a goes supernova with about the same mass, and therefore about the same brightness. If we know an object’s absolute magnitude, then we can calculate its distance from how bright it appears. It was this data that led to the discovery that the expansion of the universe is accelerating.
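To make the standard-candle logic concrete, here is a minimal sketch of the textbook distance-modulus relation, using an assumed typical peak absolute magnitude for a Type 1a and a hypothetical observed brightness. The real survey pipelines also correct for light-curve shape, color, and dust; this is just the core idea.

```python
# Assumed typical peak absolute magnitude of a Type 1a supernova
M = -19.3
# Hypothetical observed (apparent) peak magnitude
m = 18.0

# Distance modulus: m - M = 5 * log10(d / 10 parsecs)
d_parsecs = 10 * 10 ** ((m - M) / 5)
print(f"distance ≈ {d_parsecs / 1e6:.0f} Mpc")  # ≈ 288 Mpc for these assumed values
```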
But using our models of physics, we can also calculate the expansion of the universe from the cosmic microwave background (CMB) radiation, the glow left over from the Big Bang. This radiation cools as the universe expands, and by fitting our cosmological model to its detailed properties we can infer the expansion rate. Here is where the Hubble Tension comes in. Using Type 1a supernovae, we calculate the Hubble Constant to be about 73 km/s per megaparsec (km/s/Mpc). Using the CMB, the calculation is about 67 km/s/Mpc. These numbers may look close, but the uncertainties of each method are small enough that they genuinely disagree.
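For a sense of scale, here is a toy comparison (my own illustrative distance, not a real analysis) of what the two values of the Hubble Constant predict for a galaxy at an assumed 100 Mpc, using Hubble’s Law v = H0 × d:

```python
d_mpc = 100  # hypothetical distance in megaparsecs

for label, H0 in [("Type 1a supernovae", 73.0), ("CMB", 67.0)]:
    v = H0 * d_mpc  # recession velocity in km/s
    print(f"{label}: H0 = {H0} km/s/Mpc -> v ≈ {v:,.0f} km/s")

# ~7,300 vs ~6,700 km/s: roughly a 9% disagreement, larger than the
# quoted uncertainties of either method.
```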
At first it was thought that perhaps the difference is due to imprecision in our measurements. As we gather more and better data (such as building a more complete sample of Type 1a supernovae), using newer and better instruments, some hoped that perhaps these two numbers would come into alignment. The opposite has happened – newer data has solidified the Hubble Tension.
A recent study, for example, uses the Dark Energy Spectroscopic Instrument (DESI) to make more precise measurements of Type 1a supernovae in the nearby Coma cluster. This is used to make a more precise calibration of our overall cosmic distance measurements. With this more precise data, the authors argue that the Hubble Tension should now be considered a “Hubble Crisis” (a term which then metastasized throughout reporting headlines). The bottom line is that there really is a disconnect between theory and measurement.
Even more interesting, another group has used updated Type 1a supernovae data to argue that perhaps dark energy does not have to exist at all. This is their argument: the calculation of the Hubble Constant used to establish an accelerating universe is based on the assumption of isotropy and homogeneity at the scale we are observing. Isotropy means that the universe has essentially the same density no matter which direction you look in, while homogeneity means that every piece of the universe is the same as every other piece. So no matter where you are and which direction you look in, you will observe about the same density of mass and energy. This is obviously not true at small scales, like within a galaxy, so the real question is – at what scale does the universe become isotropic and homogeneous? Essentially, cosmologists have used the assumption of isotropy and homogeneity at the scale of the observable universe to make their calculations regarding expansion. This is called the lambda CDM model (ΛCDM), where lambda is the cosmological constant and CDM is cold dark matter.
This group, however, argues that this assumption is not true. There are vast gaps with little matter, and matter tends to clump along filaments in the universe. If you instead take into account these variations in the density of matter throughout the universe, you get different results for the Hubble Constant. The primary reason for this is General Relativity – Einstein’s (highly verified) theory that matter affects spacetime. Where matter is dense, time runs relatively slower. This means that as we look out into the universe, the light we see has passed through empty regions where clocks run faster and through matter-rich regions where clocks run slower. So if you measure the expansion rate of the universe, it will appear faster in the gaps and slower in galaxy clusters. As the universe expands, the gaps expand, meaning the later universe has more gap volume and therefore measures a faster expansion, while the earlier universe had smaller gaps and therefore measures a slower expansion. They call this the timescape model.
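To put a rough number on “matter slows clocks down,” here is the textbook weak-field time-dilation factor from General Relativity, applied to an assumed galaxy-cluster mass and radius. This is not the timescape calculation itself, just an illustration that clocks inside a large mass concentration tick measurably slower than clocks in empty space:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s

# Illustrative assumptions: a ~10^15 solar-mass galaxy cluster, radius ~1 Mpc
M = 1e15 * 1.989e30   # kg
r = 3.086e22          # meters (about 1 megaparsec)

# Weak-field (Schwarzschild) time-dilation factor
factor = math.sqrt(1 - 2 * G * M / (r * c**2))
print(f"clock rate relative to a distant observer: {factor:.6f}")  # ≈ 0.999952
```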
If the timescape model is true, then the expansion of the universe is not accelerating (it’s just an illusion of our observations and assumptions), and therefore there is no need for dark energy. They further argue that their model is a better fit for the data than ΛCDM (but not by much). We need more and better data to definitively determine which model is correct. They are also not mutually exclusive – timescape may explain some but not all of the observed acceleration, still leaving room for some dark energy.
I find this all fascinating. I will admit I am rooting for timescape. I never liked the concept of dark energy. It was always a placeholder, and it has properties that are really counterintuitive. For example, dark energy does not dilute as spacetime expands. That does not mean it is false – the universe can be really counterintuitive to us apes with our very narrow perspective. I will also follow whatever the data says. But wouldn’t it be exciting if an underdog like timescape overturned a Nobel Prize-winning discovery and, for at least the second time in my lifetime, radically changed how we think about cosmology? It may also resolve the Hubble Tension to boot.
Whatever the answer turns out to be – clearly there is something wrong with our current cosmology. Resolving this “crisis” will expand our knowledge of the universe.
The post The Hubble Tension Hubbub first appeared on NeuroLogica Blog.
My recent article on social media has fostered good social media engagement, so I thought I would follow up with a discussion of the most urgent question regarding social media – should the US ban TikTok? The Biden administration signed into law legislation that bans the social media app TikTok on January 19th (deliberately the day before Trump takes office) unless it is sold to a company that is not, as ByteDance is believed to be, beholden to the Chinese government. The law states that TikTok must be divested from ByteDance, its Chinese parent company. This raises a few questions – is the law constitutional, are the reasons for it legitimate, how will it work, and will it work?
A federal appeals court ruled that the ban is constitutional and can take place, and that decision is now before the Supreme Court. We will know soon how they rule, but indicators are they are leaning towards allowing the law to take effect. Trump, who previously tried to ban TikTok himself, now supports allowing the app and his lawyers have argued that he should be allowed to solve the issue. He apparently does not have any compelling legal argument for this. In any case, we will hear the Supreme Court’s decision soon.
If the ban is allowed to take place, how will it work? First, if you are not aware, TikTok is a short-form video-sharing app. I have been using it extensively over the past couple of years, along with most of the other popular platforms, to share skeptical videos, and have had good engagement. Apparently TikTok is popular because it has a good algorithm that people like. TikTok is already banned on devices owned by federal employees. The new ban will force app stores in the US to remove the TikTok app and not allow any further updates or support. Existing TikTok users will continue to be able to use the app, but without updates it will eventually become unusable.
ByteDance will have time to comply with the law by divesting TikTok before the app becomes unusable, and many believe they are essentially waiting to see if the law will actually take effect. So it is possible that even if the law does take effect, not much will change for existing users, unless ByteDance refuses to comply, in which case the app will slowly fade away. In that case it is likely that the two existing main competitors, YouTube Shorts and Instagram, will benefit.
Will users be able to bypass the ban? Possibly. You can use a virtual private network (VPN) to change your apparent location to download the app from foreign stores. But even if it is technically possible, this would be a significant hurdle for some users and likely reduce use of the app in the US.
That is the background. Now let’s get to the most interesting question – are the stated reasons for wanting to ban the app legitimate? This is hotly debated, but I think there is a compelling argument to be made for the risks of the app, and it essentially echoes many of the points I made in my previous post. Major social media platforms undeniably have an influence on the broader culture. If the platforms are left entirely open, this allows bad actors unfettered access to tools for spreading misinformation, disinformation, radicalization, and hate speech. I have stated that my biggest fear is that these platforms will be used by authoritarian governments to control their societies and people. The TikTok ban is about a hostile foreign power using an app to undermine the US.
There are essentially two components to the fear. The first is that TikTok is gathering information on US citizens that can then be weaponized against them or our society. The second is that the Chinese government will use TikTok to spread pro-communist China propaganda and anti-American propaganda, sow civil strife, and influence American politics. We actually don’t have to speculate about whether China will do this – TikTok has already admitted that it has identified and shut down massive Chinese government campaigns to influence US users – one with 110,000 accounts and another with 141,000 accounts. You might argue that the fact that they took these down means they are not cooperating with the Chinese government, but we cannot conclude that. They may be making a public show of taking down some campaigns while leaving others in place. The more important fact here is that the Chinese government is using TikTok to influence US politics and society.
There are also more subtle ways than massive networks of accounts to influence the US through TikTok. American TikTok is different from the Chinese version, and analyses have found that the Chinese version has better-quality informational content and more educational content than the US version. China could be playing the long game (actually, not that long, in my opinion) of dumbing down the US. Algorithms can put a light thumb on the scale of information, with massive effects.
It was raised in the comments to my previous post whether all this discussion is premised on the notion that people are easily manipulated pawns in the hands of social media giants. Unfortunately, the answer is a pretty clear yes. There is a lot of social psychology research showing that influence campaigns are effective. Obviously not everyone is affected, but moving the needle 10 or 20 percentage points (or even a lot less) can have a big impact on society. Again – I have been on TikTok for over a year. It is flooded with videos that seem crafted to spread ignorance and anti-intellectualism. I know that most of them are not crafted specifically for this purpose – but that is the effect they have, and if one did intend to craft content for this purpose, they could not do a better job than what is already on the platform. There is also a lot of great science communication content, but it is drowned out by nonsense.
Social media, regardless of who owns it, has all the risks and problems I discussed. But it does seem reasonable that we also do not want to add another layer of having a foreign adversary with significant influence over the platform. Some argue that it doesn’t really matter – social media can be used for influence campaigns regardless of who owns it. But that is hardly reassuring. At the very least I would argue we don’t really know, and this is probably not an experiment we want to run on top of the social media experiment itself.
The post Should the US Ban TikTok? first appeared on NeuroLogica Blog.
One of the things I have come to understand from following technology news for decades is that perhaps the most important breakthroughs, and often the least appreciated, are those in material science. We can get better at engineering and making stuff out of the materials we have, but new materials with superior properties change the game. They make new stuff possible and feasible. Many futuristic technologies are simply not possible yet, waiting on the back burner for enough breakthroughs in material science to make them feasible. Recently, for example, I wrote about fusion reactors. Is the addition of high-temperature superconducting material sufficient to get us over the finish line of commercial fusion, or are more material breakthroughs required?
One area where material properties are becoming a limiting factor is electronics, and specifically computer technology. As we make smaller and smaller computer chips, we are running into the limits of materials like copper to efficiently conduct electrons. Further advance is therefore not just about better technology, but better materials. Also, the potential gain is not just about making computers smaller. It is also about making them more energy efficient by reducing losses to heat when processors work. Efficiency is arguably now a more important factor, as we are straining our energy grids with new data centers to run all those AI and cryptocurrency programs.
This is why a new study detailing a new nanoconducting material is actually more exciting than it might at first sound. Here is the editor’s summary:
Noncrystalline semimetal niobium phosphide has greater surface conductance as nanometer-scale films than the bulk material and could enable applications in nanoscale electronics. Khan et al. grew noncrystalline thin films of niobium phosphide—a material that is a topological semimetal as a crystalline material—as nanocrystals in an amorphous matrix. For films with 1.5-nanometer thickness, this material was more than twice as conductive as copper. —Phil Szuromi
Greater conductance at the nanoscale means we can make smaller transistors. The study also claims that this material has lower resistance, which means greater efficiency – less waste heat. The authors also claim that manufacturing is similar to that of existing transistors, at similar temperatures, so it’s feasible to mass produce (at least it seems like it should be). But what about niobium? Another lesson I have learned from examining technology news is to look for weaknesses in any new technology, including the necessary raw materials. I see lots of battery and electronics news, for example, involving platinum, which means it’s not going to be economical.
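As a rough sketch of why interconnect resistivity matters so much at this scale, here is the basic wire-resistance relation R = ρL/A with illustrative numbers. The copper figure is its well-known bulk resistivity; in films a few nanometers thick copper’s effective resistivity actually rises sharply due to surface and grain-boundary scattering, which is exactly the problem this material is meant to address. The thin-film and niobium phosphide values below are assumptions for illustration, not figures from the paper.

```python
# Wire geometry (illustrative): 1 micrometer long, 1.5 nm x 10 nm cross-section
L = 1e-6
A = 1.5e-9 * 10e-9

resistivities = {
    "copper (bulk)": 1.68e-8,               # ohm*m, well-known bulk value
    "copper (thin film, assumed)": 3.0e-8,  # assumed degraded thin-film value
    "NbP film (assumed)": 1.5e-8,           # assumed: roughly half the thin-film copper value
}

for name, rho in resistivities.items():
    R = rho * L / A  # R = rho * L / A
    print(f"{name}: R ≈ {R / 1e3:.1f} kΩ per micrometer of wire")
```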
Niobium is considered a rare metal, and is therefore relatively expensive, about $45 per kilogram. (By comparison, copper goes for $9.45 per kg.) Most of the world’s niobium is sourced in Brazil (so at least it’s not a hostile or unstable country). It is not considered a “precious” metal like gold or platinum, so that is a plus. About 90% of niobium is currently used as a steel alloy, to make steel stronger and tougher. If we start producing advanced computer chips using niobium, what would that do to world demand? How would that affect the price of niobium? By definition we are talking about tiny amounts of niobium per chip – the wires are only a few molecules thick – but the world produces a lot of computer chips.
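Here is a purely hypothetical back-of-envelope estimate of how much niobium a chip built this way might contain. Every number below is an assumption made up for illustration (none come from the study), but it shows the kind of arithmetic that would determine whether chipmaking moves the needle on niobium demand:

```python
# All values are illustrative assumptions, not from the paper
wire_length_m = 50.0                 # assumed total nanowire length per chip
cross_section_m2 = 1.5e-9 * 10e-9    # assumed 1.5 nm x 10 nm wires
density_nbp = 6500.0                 # kg/m^3, rough assumed density for NbP
nb_mass_fraction = 92.9 / (92.9 + 31.0)   # Nb fraction by mass in NbP, about 0.75
chips_per_year = 1e12                # assumed global chip production, order of magnitude

nb_per_chip_kg = wire_length_m * cross_section_m2 * density_nbp * nb_mass_fraction
print(f"Nb per chip ≈ {nb_per_chip_kg * 1e12:.1f} nanograms")
print(f"Nb per year ≈ {nb_per_chip_kg * chips_per_year:.1f} kg (under these assumptions)")
```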
How all this will sort out is unclear, and the researchers don’t get into that kind of analysis. They basically are concerned with the material science and proving their concept works. This is often where the disconnect is between exciting-sounding technology news and ultimate real-world applications. Much of the stuff we read about never comes to fruition, because it simply cannot work at scale or is too expensive. Some breakthroughs do work, but we don’t see the results in the marketplace for 10-20 years, because that is how long it took to go from the lab to the factory. I have been doing this long enough now that I am seeing the results of lab breakthroughs I first reported on 20 years ago.
Even if a specific demonstration is not translatable into mass production, however, material scientists still learn from it. Each new discovery increases our knowledge of how materials work and how to engineer their properties. So even when the specific breakthrough may not translate, it may lead to other spin-offs which do. This is why such a proof-of-concept is exciting – it shows us what is possible and potential pathways to get there. Even if that specific material may not ultimately be practical, it still is a stepping stone to getting there.
What this means is that I have learned to be patient, to ignore the hype, but not dismiss science entirely. Everything is incremental. It all adds up and slowly churns out small advances that compound over time. Don’t worry about each individual breakthrough – track the overall progress over time. From 2000 to today, lithium-ion batteries have about tripled their energy capacity, for example, while solar panels have doubled their energy production efficiency. This was due to no one breakthrough, just the cumulative effects of hundreds of experiments. I still like to read about individual studies, but it’s important to put them into context.
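The compounding arithmetic behind that point is worth making explicit: tripling over roughly a quarter century requires only a few percent improvement per year, which is exactly what a steady stream of small advances looks like.

```python
years = 25
factor = 3  # e.g., roughly tripled battery energy capacity since 2000
annual = factor ** (1 / years) - 1
print(f"{annual:.1%} per year compounds to {factor}x over {years} years")  # ~4.5% per year
```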
The post New Material for Nanoconductors first appeared on NeuroLogica Blog.
Recently Meta decided to end its fact-checking program on Facebook and Instagram. The move has been both hailed and criticized. They are replacing the fact-checkers with an X-style “community notes” system. Mark Zuckerberg summed up the move this way: “It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
That is the essential tradeoff – whether you think false positives or false negatives are the bigger problem. Are you more concerned with enabling free speech or with minimizing hate speech and misinformation? Obviously both are important, and an ideal platform would maximize both freedom and content quality. It is also becoming increasingly apparent that the choice matters. The major social media platforms are not mere vanity projects – they are increasingly the main source of news and information, and they foster ideological communities. They affect the functioning of our democracy.
Let’s at least be clear about the choice that “we” are making (meaning that Zuckerberg is making for us). Maximal freedom without even basic fact-checking will significantly increase the amount of misinformation and disinformation on these platforms, as well as hate-speech. Community notes is a mostly impotent method of dealing with this. Essentially this leads to crowd-sourcing our collective perception of reality.
Free-speech optimists argue that this is all good, and that we should let the marketplace of ideas sort everything out. I do somewhat agree with this, and the free marketplace of ideas is an essential element of any free and open society. It is a source of strength. I also am concerned about giving any kind of censorship power to any centralized authority. So I buy the argument that this may be the lesser of two evils – but it still comes with some significant downsides that should not be minimized.
What I think the optimists are missing (whether out of ignorance or intention) is that a completely open platform is not a free marketplace of ideas. The free marketplace assumes that everyone is playing fair and acting in good faith. This is a 2005 level of naivete. An open platform is exposed to people who deliberately exploit it as a tool of political disinformation. It is also open to motivated and dedicated ideological groups that can flood the zone with extreme views. Corporations can use the platform for their own influence campaigns and self-serving propaganda. This is not a free and fair marketplace – it means people with money, resources, and motivation can dominate the narrative. We are simply taking control away from fact-checkers and handing it to shadowy groups with nefarious motivations. And don’t think that authoritarian governments won’t find a way to thrive in this environment as well.
So we have ourselves a catch-22 – damned if we do and damned if we don’t. This does not mean, however, that some policies are not better than others. There is a compromise in the middle that allows for a free marketplace of ideas without making it trivially easy to spread disinformation, radicalize innocent users of the platform, or allow for ideological capture. I don’t know exactly what those policies are – we need to continue to experiment to find them. But I don’t think we should throw up our hands in defeat (and acquiescence).
I think we should approach the issue like an editorial policy. Having editorial standards is not censorship. But who makes and enforces the editorial standards? Independent, transparent, and diverse groups with diffuse power and appeals processes is a place to start. No such process will be perfect, but it is likely better than having no filter at all. Such a process should have a light touch, err on the side of tolerance, and focus on the worst blatant disinformation.
I also think that we need to take a serious look at social media algorithms. This also is not censorship, but Facebook, for example, gets to decide how to recommend new content to you. They tweak the algorithms to maximize engagement. How about tweaking the algorithms to maximize quality of content and diverse perspectives instead?
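As a toy illustration of what “tweaking the algorithms” could mean in practice, here is a minimal re-ranking sketch. The scores and weights are entirely hypothetical; the point is only that the objective function a platform optimizes is a design choice.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    engagement: float   # predicted clicks / watch time
    quality: float      # e.g., a source-reliability or fact-check score
    novelty: float      # how different it is from what the user already sees

def rank(posts, w_engagement=0.2, w_quality=0.5, w_novelty=0.3):
    # Weight quality and diverse perspectives above raw engagement
    score = lambda p: (w_engagement * p.engagement
                       + w_quality * p.quality
                       + w_novelty * p.novelty)
    return sorted(posts, key=score, reverse=True)

feed = rank([
    Post("outrage bait", engagement=0.9, quality=0.1, novelty=0.1),
    Post("solid explainer", engagement=0.5, quality=0.9, novelty=0.6),
])
print([p.title for p in feed])  # the explainer now outranks the outrage bait
```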
We may need to also address the question of whether or not giant social media platforms represent a monopoly. Let’s face it, they do, and they also concentrate a lot of media into a few hands. We have laws to protect against such things because we have long recognized the potential harm of so much concentrated power. Social media giants have simply side-stepped these laws because they are relatively new and exist in a gray zone. Our representatives have failed to really address these issues, and the public is conflicted so there isn’t a clear political will. I think the public is conflicted partly because this is all still relatively new, but also as a result of a deliberate ideological campaign to sow doubt and confusion. The tech giants are influencing the narrative on how we should deal with tech giants.
I know there is an inherent problem here – social media outlets work best when everyone is using them, i.e. when they have a monopoly. But perhaps we need to find a way to maintain the advantage of an interconnected platform while breaking up the management of that platform into smaller pieces run independently. The other option is to just have a lot of smaller platforms, but what is happening there is that different platforms are becoming their own ideological echo chambers. We seem to have a knack for screwing up every option.
Right now there does not seem to be any way for any of these things to happen. The tech giants are in control and have little incentive to give up their power and monopoly. Government has been essentially hapless on this issue. And the public is divided. Many have a vague sense that something is wrong, but there is no clear consensus on what exactly the problem is or what to do about it.
The post What Kind of Social Media Do We Want? first appeared on NeuroLogica Blog.
How close are we to having fusion reactors actually sending electric power to the grid? This is a huge and complicated question, and one with massive implications for our civilization. I think we are still at the point where we cannot count on fusion reactors coming online anytime soon, but progress has been steady, and in some ways we are getting tantalizingly close.
One company, Commonwealth Fusion Systems, claims it will have completed a fusion reactor capable of producing net energy by “the early 2030’s”. A working grid-scale fusion reactor within 10 years seems really optimistic, but there are reasons not to dismiss this claim entirely out of hand. After doing a deep dive my take is that the 2040’s or even 2050’s is a safer bet, but this may be the fusion design that crosses the finish line.
Let’s first give the background and reasons for optimism. I have written about fusion many times over the years. The basic idea is to fuse lighter elements into heavier elements, which is what fuels stars, in order to release excess energy. This process releases a lot of energy, much more than fission or any chemical process. In terms of just the physics, the best elements to fuse are one deuterium atom to one tritium atom, but deuterium to deuterium is also feasible. Other fusion elements are simply way outside our technological capability and so are not reasonable candidates.
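For reference, the deuterium–tritium reaction in question is the standard one below, releasing about 17.6 MeV per fusion event (millions of times more energy per reaction than a typical chemical bond of a few electron volts):

```latex
\[
{}^{2}_{1}\mathrm{D} \;+\; {}^{3}_{1}\mathrm{T} \;\longrightarrow\;
{}^{4}_{2}\mathrm{He}\,(3.5\ \mathrm{MeV}) \;+\; n\,(14.1\ \mathrm{MeV}),
\qquad Q \approx 17.6\ \mathrm{MeV}
\]
```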
There are also many reactor designs. Basically you have to squeeze the elements close together at high temperature so as to have a sufficiently high probability of fusion. Stars use gravitational confinement to achieve this condition at their cores. We cannot do that on Earth, so we use one of two basic methods – inertial confinement and magnetic confinement. Inertial confinement includes a variety of methods that squeeze hydrogen atoms together using inertia, usually from implosions. These methods have achieved ignition (burning plasma) but are not really a sustainable method of producing energy. Using laser inertial confinement, for example, destroys the container in the process.
By far the best method, and the one favored by the physics, is magnetic confinement. Here too there are many designs, but the one closest to the finish line (and the one used by CFS) is the tokamak. This is a torus shaped in a specific way to control the flow of plasma just so, to avoid any kind of turbulence that would prevent fusion.
In order to achieve the energies necessary to create sustained fusion you need really powerful magnetic fields, and the industry has essentially been building larger and larger tokamaks to achieve this. CFS has the advantage of being the first to design a reactor using the latest high-temperature superconductors (HTS), which really are a game changer for tokamaks. They allow for a smaller design with more powerful magnets that use less energy. Without these HTS I don’t think there would even be a question of feasibility.
CFS is currently building a test facility called the SPARC reactor, which stands for the smallest possible ARC reactor; ARC in turn stands for “affordable, robust, compact”. SPARC is a test facility that will not be commercial. Meanwhile they are planning their first ARC reactor – commercial grid scale – in Virginia, which they claim will produce 400 megawatts of power.
Reasons for optimism – the physics all seems to be good here. CFS was founded by engineers and scientists from MIT – essentially some of the best minds in fusion physics. They have mapped out the most viable path to commercial fusion, and the numbers all seem to add up.
Reasons for caution – they haven’t done it yet. This is not, at this point, so much a physics problem as an engineering problem. As they push to higher energies, and incorporate the mechanisms necessary to bleed off energy to heat water and run a turbine, they may run into problems they did not anticipate. They may hit a hurdle that suddenly adds 10 or 20 years to the development process. Again, my take is that the early-2030s timeline assumes everything goes perfectly well. Any bumps in the road will keep adding years. This is a project at the very limits of our technology (as complex as going to the Moon), and delays are the rule, not the exception.
So – how close are they? The best result so far is from the JET tokamak, which produced about 67% of break-even energy. That sounds close, but keep in mind that 100% is merely break-even. Also, this is heat energy, not electricity. Modern fission reactors convert heat to electricity with about 30% efficiency, and that is a reasonable assumption for a fusion plant as well. And this is plasma energy gain, not total energy – the input counted is the energy that goes into heating the plasma, not the total energy needed to run the reactor.
The bottom line is that they probably need to increase their energy output by an order of magnitude or more in order to be commercially viable. Just producing a little bit of net energy is not enough. They need massive excess energy (meaning electricity) in order to justify the expense. So really, we are nowhere near net total electricity in any fusion design. CFS is hoping that their fancy new HTS magnets will get them there. They actually might – but until they do, it’s still just an informed hope.
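A rough gain calculation makes the point. The heating power, conversion efficiency, and plant loads below are illustrative assumptions (not CFS’s actual figures), but they show why the plasma gain has to be roughly an order of magnitude above break-even before any net electricity flows:

```python
def net_electric_mw(q_plasma, heating_power_mw=50.0, thermal_efficiency=0.30,
                    other_plant_loads_mw=30.0):
    """Net electrical output for a given plasma gain, under assumed plant parameters."""
    fusion_power = q_plasma * heating_power_mw      # heat produced by fusion
    gross_electric = fusion_power * thermal_efficiency
    return gross_electric - heating_power_mw - other_plant_loads_mw

for q in (0.67, 1.0, 5.0, 10.0, 30.0):   # 0.67 is roughly JET's record plasma gain
    print(f"Q_plasma = {q:>5}: net electricity ≈ {net_electric_mw(q):7.1f} MW")
```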
I do hope that my pessimism, born of decades of overhyped premature tech promises, is overcalling it in this case. I hope these MIT plasma jocks can get it done, somewhere close to the promised timeline. The sooner the better, in terms of global warming. Let’s explore for a bit what this would mean.
Obviously the advantage of a fusion reactor like the planned ARC design, if it works, is that it produces a lot of carbon-free energy. Such reactors can be plugged into existing grid connections and produce stable, predictable energy. They produce only low-level nuclear waste. They also have a relatively small land footprint for the energy produced. If the first ARC reactor works, we would need to build thousands around the world as fast as possible. If they are profitable, this will happen. But the industry can also be supported by targeted regulations. Such reactors could replace fossil fuel plants, and then eventually fission reactors.
Once we develop viable fusion energy, it is very likely that this will become our primary energy source essentially forever – at least for hundreds if not thousands or tens of thousands of years. It gets hard to predict technology that far out, but there are really no candidates for advanced energy sources that are clearly better. Matter-antimatter could theoretically work, but why bother messing around with antimatter, which is hard to make and contain? The advantage is probably not enough to justify it. Other energy sources, like black holes, are theoretical and extremely exotic, perhaps something for a civilization millions of years beyond where we are.
Even if some really advanced energy source becomes possible, fusion will likely remain in the sweet spot in terms of producing large amounts of energy cleanly and sustainably. Once we cross the line of producing net total electricity with fusion, incremental advances in material science and the overall technology will just make fusion better. From that point forward, all we really need to do is make fusion better. There will likely still be a role for distributed energy like solar, but fusion will replace all large centralized sources of power.
The post Plan To Build First Commercial Fusion Reactor first appeared on NeuroLogica Blog.