As I predicted, the controversy over whether or not we have achieved general AI will likely persist for a long time before there is a consensus that we have. The latest round of this controversy comes from Vahid Kazemi of OpenAI. He posted on X:
“In my opinion we have already achieved AGI and it’s even more clear with O1. We have not achieved “better than any human at any task” but what we have is “better than most humans at most tasks”. Some say LLMs only know how to follow a recipe. Firstly, no one can really explain what a trillion parameter deep neural net can learn. But even if you believe that, the whole scientific method can be summarized as a recipe: observe, hypothesize, and verify. Good scientists can produce better hypothesis based on their intuition, but that intuition itself was built by many trial and errors. There’s nothing that can’t be learned with examples.”
I will set aside the possibility that this is all for publicity of OpenAI’s newest O1 platform. Taken at face value – what is the claim being made here? I actually am not sure (part of the problem of short-form venues like X). In order to say whether or not the OpenAI O1 platform qualifies as an artificial general intelligence (AGI), we need to operationally define what an AGI is. Right away, we get deep into the weeds, but here is a basic definition: “Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks.”
That may seem straightforward, but it is highly problematic for many reasons. Scientific American has a good discussion of the issues here. But at its core, two features pop up regularly in various definitions of general AI – the AI has to have wide-ranging abilities, and it has to equal or surpass human-level cognitive function. There is also debate about whether how the AI achieves its ends matters, or should matter. Does it matter if the AI is truly thinking or understanding? Does it matter if the AI is self-aware or sentient? Does the output have to represent true originality or creativity?
Kazemi puts his nickel down on how he operationally defines general AI – “better than most humans at most tasks”. As is often the case, one has to frame such claims as “If you define X this way, then this is X.” So, if you define AGI as being better than most humans at most tasks, then Kazemi’s claims are somewhat reasonable. There is still a lot to debate, but at least we have some clear parameters. This definition also eliminates the thorny question of understanding and awareness.
But not everyone agrees with this definition. There are still many experts who contend that modern LLMs are still just really good autocompletes. They are language prediction algorithms that simulate thought through simulating language, but are not capable of true thought, understanding, or creativity. What they are great at is sifting through massive amounts of data and finding patterns, and then regenerating those patterns.
This is not a mere discussion of “how” LLMs function but gets to the core of whether or not they are “better” than humans at what they do. I think the primary argument against LLMs being better than humans is that they function by using the output of humans. Kazemi essentially says this is just how they learn, they are following a recipe like people do. But I think that dodges the key question.
Let’s take art as an example. Humans create art, and some artists are truly creative and can bring into existence new and unique works. There are always influences and context, but there is also true creativity. AI art does not do this. It sifts through the work of humans, learns the patterns, and then generates imitations from those patterns. Since AI does not experience existence, it cannot draw upon experience or emotions or the feeling of what it is to be a human in order to manifest artistic creativity. It just regurgitates the work of humans. So how can we say that AI is better than humans at art when it is completely dependent on humans for what it does? The same is true for everything LLMs do, but it is just more obvious when it comes to art.
I am not denigrating LLMs or any modern AI – they are extremely useful tools. They are powerful, and fast, and can accomplish many great tasks. They are accelerating the rate of scientific research in many areas. They can improve the practice of medicine. They can help us control the tsunami of data that we are drowning ourselves in. And yes, they can do a lot of different tasks.
Perhaps it is easier to define what is not AGI. A chess-playing computer is not AGI, as it is programmed to do one task. In fact, the term AGI was developed by programmers to distinguish this effort from the crop of narrow AI applications that were popping up, like Chess and Go players. But is everything that is not a very narrow AI an AGI? Seems like we need more highly specific terms.
OpenAI and other LLMs are more than just the narrow AIs of old. But they are not thinking machines, nor do they have human-level intelligence. They are also certainly not self-aware. I think Kazemi’s point about a trillion parameter deep neural net misses the point. Sure, we don’t know exactly what it is doing, but we know what it is not doing, and we can infer from its output and also how it is programmed the general way that it is accomplishing its outcome. There is also the fact that LLMs are still “brittle” – a term that refers to the fact that narrow AIs can be easily “broken” when they are pushed beyond their parameters. It’s not hard to throw an LLM off its game and push the limits of its ability. It still has no true thinking or understanding, and this makes it brittle.
For that reason I don’t think that LLMs have achieved AGI. But I could be wrong, and even if we are not there yet we may be really close. But regardless, I think we need to go back to the drawing board, look at what we currently have in terms of AI, and have experts come up with new, perhaps more specific, operational definitions. We do this in medicine all the time – as our knowledge evolves, sometimes we need for experts to get together and revamp diagnostic definitions and make up new diagnoses to reflect that knowledge. Perhaps ANI and AGI are not enough.
To me LLMs seem like a multi-purpose ANI, and perhaps that is a good definition. Either “AGI” needs to be reserved for an AI that can truly derive new knowledge from a general understanding of the world, or we “downgrade” the term “AGI” to refer to what LLMs currently are (multi-purpose but otherwise narrow) and come up with a new term for true human-level thinking and understanding.
What’s exciting (and for some scary) is that AIs are advancing quickly enough to force a reconsideration of our definitions of what AIs actually are.
The post Have We Achieved General AI first appeared on NeuroLogica Blog.
What is Power-to-X (PtX)? It’s just a fancy marketing term for green hydrogen – using green energy, like wind, solar, nuclear, or hydroelectric, to make hydrogen from water. This process does not release any CO2, just oxygen, and when the hydrogen is burned back with that oxygen it creates only water as a byproduct. Essentially hydrogen is being used as an energy storage medium. This whole process does not create energy, it uses energy. The wind and solar etc. are what create the energy. The “X” refers to all the potential applications of hydrogen, from fuel to fertilizer. Part of the idea is that intermittent energy production can be tied to hydrogen production, so when there is excess energy available it can be used to make hydrogen.
A recent paper explores the question of why, despite all the hype surrounding PtX, there is little industry investment. Right now only 0.1% of the world’s hydrogen production is green. Most of the rest comes from fossil fuel (gray and brown hydrogen) and in many cases is actually worse than just burning the fossil fuel. Before I get into the paper, let’s review what hydrogen is currently used for. Hydrogen is essentially a high energy molecule and it can be used to drive a lot of reactions. It is mostly used in industry – making fertilizer, reducing the sulfur content of gas, producing industrial chemicals, and making biofuel. It can also be used for hydrogen fuel cell cars, which I think is a wasted application as BEVs are a better technology and any green hydrogen we do make has better uses. There are also emerging applications, like using hydrogen to refine iron ore, displacing the use of fossil fuels.
A cheap abundant source of green hydrogen would be a massive boost to multiple industries and would also be a key component to achieving net zero carbon emissions. So where is all the investment? This is the question the paper explores.
The short answer has to do with investment risk. Investors, especially when we are talking about billions of dollars, like predictability. Uncertainty increases their risk and is a huge disincentive to invest large sums of money. The paper concludes that there are two main sources of uncertainty that make PtX investments seem like they are high risk – regulatory uncertainty and lack of infrastructure.
Regulations in many countries are still in flux. This, fortunately, is an entirely solvable problem. Governments can put resources and priority into hammering out comprehensive regulations for the hydrogen and related industries, lock in those regulations for years, and provide the stability that investors want. Essentially the lack of proper regulations is a hurdle for green hydrogen investment, and governments simply need to do their job.
The second issue is lack of infrastructure, with further uncertainty about the completion of planned hydrogen projects –
“For instance, in October, the Danish government announced that a planned hydrogen pipeline to Germany would not be established until 2031 at the earliest, whereas the previous target was scheduled for 2028.”
The fossil fuel industry has the advantage of a mature infrastructure. Imagine if we had to develop all the oil rigs, oil wells, pipelines, trucking infrastructure, and gas stations from scratch. That would be a massive investment on an uncertain timeline. Hydrogen is facing the same issue. Again, this is a solvable issue – invest in hydrogen infrastructure. Make sure projects are sufficiently funded to keep on the originally promised timeline. Governments are supposed to craft regulation and invest in common infrastructure in order to facilitate private industry investing in new technologies. This may be all that is necessary to accelerate the green transition. At least we shouldn’t be holding it back because governments are failing to do their job.
The authors of the paper also explore another aspect of this issue – incentives for industry to specifically invest in green technology. This is essentially what the IRA did in the US. Here incentives fall into two broad categories, carrots and sticks. One type of carrot is to reduce risk for private investment. Beyond what I already mentioned, government can, for example, guarantee loans to reduce financial risk. They can also provide direct subsidies, such as tax breaks for investments in green technology. For context, the fossil fuel industry received $1.4 trillion in 2022 in direct subsidies worldwide. It is also estimated that the fossil fuel industry was allowed to externalize $5.6 trillion in health and environmental costs (whether or not you consider this a “subsidy”). This is for a mature industry with massive profits sitting on top of a massive infrastructure partly paid for with public dollars. The bottom line is that some targeted subsidies for green energy technology are perfectly reasonable, and in fact are a good investment.
But the authors argue that this might not be enough. They also recommend we add some sticks to the equation. This usually takes the form of some type of carbon tax, which would make fossil fuels less profitable. This seems perfectly reasonable. They also recommend mandated phase out of fossil fuel investments. This is trickier, and I think this type of approach should be a last resort if anything. You won’t have to mandate a phase out if you make green technologies more attractive through subsidies and infrastructure, and fossil fuels less attractive by eliminating subsidies and perhaps taxing carbon.
At the very least governments should be not slowing down the green transition because they are neglecting to do their basic job.
The post Power-To-X and Climate Change Policy first appeared on NeuroLogica Blog.
Astrophysicists come up with a lot of wacky ideas, some of which actually turn out to be possibly true (like the Big Bang, black holes, accelerating cosmic expansion, dark matter). Of course, all of these conclusions are provisional, but some are now backed by compelling evidence. Evidence is the real key – often the challenge is figuring out a way to find evidence that can potentially support or refute some hypothesis about the cosmos. Sometimes it’s challenging to figure out even theoretically (let alone practically) how we might prove or disprove a hypothesis. Decades may go by before we have the ability to run relevant experiments or make the kinds of observations necessary.
Black holes fell into that category. They were predicted by physics long before we could find evidence of their existence. There is a category of black hole, however, that we still have not confirmed through any observation – primordial black holes (PBH). As the name implies, these black holes may have been formed in the early universe, even before the first stars. In the early dense universe, fluctuations in the density of space could have led to the formation of black holes. These black holes could theoretically be of any size, since they are not dependent on a massive star collapsing to form them. This process could lead to black holes smaller than the smallest stellar-remnant black holes.
In fact, it is possible that there are enough small primordial black holes out there to account for the missing dark matter – matter we can detect through its gravitational effects but that we cannot otherwise see (hence dark). PBHs are considered a black hole candidate, but the evidence for this so far is not encouraging. For example, we might be able to detect black holes through microlensing. If a black hole happens to pass in front of a more distant star (from the perspective of an observer on Earth), then gravitational lensing will cause that star to appear to brighten, until the black hole passes. However, microlensing surveys have not found the number of microlensing events that would be necessary for PBHs to explain dark matter. Dark matter makes up 85% of the matter in the universe, so there would have to be lots of PBHs to be the sole cause of dark matter. It’s still possible that longer observation times would detect larger black holes (brightening events can take years if the black holes are large). But so far there is a negative result.
Observations of galaxies have also not shown the effects of swarms of PBHs, which (at least those larger than 10 solar masses) should have congregated in the centers of small galaxies over the age of the universe. This would have disturbed stars near the centers of these galaxies, causing the galaxies to appear fluffier. Observations of dwarf galaxies so far have not seen this effect, however.
A recent paper suggests two ways in which we might observe small PBHs, or at least their effects. These ideas are pretty out there, and are extreme long shots, which I think reflects the desperation for new ideas on how we might confirm the existence of PBHs. One idea is that small PBHs might have been gravitationally captured by planets. If the planet had a molten core, it is then possible that the PBH would consume the molten core, leaving behind a hollow solid shell. The researchers calculate that for planets with a radius smaller than one tenth that of Earth, the outer solid shell could remain intact and not collapse in on itself. This idea then requires that a later collision knocks the PBH out of the center of this hollowed-out small planet.
If this sequence of events occurs, then we could theoretically observe small hollow exoplanets to confirm PBHs. We could know a planet is hollow if we can calculate its size and mass, which we can do for some exoplanets. An object can have a mass much too small for its apparent size, meaning that it could be hollow. Yes, such an object would be unlikely, but the universe is a big place and even very unlikely events happen all the time. Being unlikely, however, means that such objects would be hard to find. That doesn’t matter if we can survey large parts of the universe, but finding exoplanets requires lots of observations. So far we have identified over 5 thousand exoplanets, with thousands of candidates waiting for confirmation. Most of these are larger worlds, which are easier to detect. In any case, it may be a long time before we find a small hollow world, if they are out there.
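The basic detection logic described above – compare a planet's measured mass to the mass its apparent size implies – can be sketched with a simple bulk-density check. This is only an illustration; the density threshold and the example mass and radius values below are hypothetical, not drawn from any real survey.

```python
import math

def bulk_density(mass_kg: float, radius_m: float) -> float:
    """Bulk density in kg/m^3 from a planet's measured mass and radius."""
    volume = (4 / 3) * math.pi * radius_m ** 3
    return mass_kg / volume

# Rough lower bound for solid rocky bodies (illustrative value).
ROCKY_MIN_DENSITY = 3000.0  # kg/m^3

def looks_hollow(mass_kg: float, radius_m: float) -> bool:
    # A solid rocky body should not fall far below typical rock density;
    # a dramatically lower bulk density would hint at a hollow interior.
    return bulk_density(mass_kg, radius_m) < 0.5 * ROCKY_MIN_DENSITY

# Hypothetical example: a Moon-sized radius (~1.7e6 m) with only ~1% of
# the Moon's mass (~7.3e20 kg) would stand out as anomalously light.
print(looks_hollow(7.3e20, 1.7e6))  # True
```

In practice the hard part is not the comparison but getting both mass (from radial velocity or transit timing) and radius (from transit depth) for the same small planet, which is why such candidates would be rare finds.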
The second proposed method is also highly speculative. The idea here is that there may be really small PBHs that formed in the early universe, which can theoretically have masses in the range of 10^17 to 10^24 grams. The authors calculate that a PBH with a mass of 10^22 grams, if it passed through a solid object at high speed, would leave behind a tunnel of radius 0.1 micrometers. This tunnel would make a long straight path, which is otherwise not something you would expect to see in a solid object.
Therefore, we can look at solid objects, especially really old solid objects, with light microscopy to see if any such tiny straight tunnels exist. If they do, that could be evidence of tiny PBHs. What is the probability of finding such microscopic tunnels? The authors calculate that the probability of a billion year old boulder containing such a tunnel is 0.000001. So on average you would have to examine a million such boulders to find a single PBH tunnel. This may seem like a daunting task – because it is. The authors argue that at least the procedure is not expensive (I guess they are not counting the people time needed).
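To put the authors' one-in-a-million figure in perspective, the expected search effort follows from elementary probability. This sketch uses only the 10^-6 per-boulder probability quoted above; the survey sizes are made up for illustration.

```python
# Probability that a single billion-year-old boulder contains a PBH
# tunnel, as quoted from the paper.
p_tunnel = 1e-6

# On average, one hit per 1/p boulders examined.
expected_boulders = 1 / p_tunnel
print(expected_boulders)  # 1000000.0

def prob_at_least_one(n_boulders: int, p: float = p_tunnel) -> float:
    """Chance of finding at least one tunnel in a survey of n boulders,
    treating boulders as independent: 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n_boulders

# Even examining a full million boulders gives only ~63% odds
# (the familiar 1 - 1/e limit for rare independent events).
print(round(prob_at_least_one(1_000_000), 3))  # 0.632
```

This is why the proposal only becomes plausible with automation or massive crowdsourcing: the per-boulder inspection is cheap, but the number of inspections is enormous.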
Perhaps there is some way to automate such a search, using robots or equipment designed for the purpose. I feel like if such an experiment were to occur, it would be in the future, when technology makes it feasible. The only other possibility is to crowdsource it in some way. We would need millions of volunteers.
The authors recognize that these are pretty mad ideas, but they also argue that at this point any idea for finding PBHs, or dark matter, is likely to be out there. Fair enough. But unless we can practically do the experiment, it is likely to remain just a thought experiment and not really get us closer to an answer.
The post Finding Small Primordial Black Holes first appeared on NeuroLogica Blog.
Climate change is a challenging issue on multiple levels – it’s challenging for scientists to understand all of the complexities of a changing climate, it’s difficult to know how to optimally communicate to the public about climate change, and of course we face an enormous challenge in figuring out how best to mitigate climate change. The situation is made significantly more difficult by the presence of a well-funded campaign of disinformation aimed at sowing doubt and confusion about the issue.
I recently interviewed climate scientist Michael Mann about some of these issues and he confirmed one trend that I had noticed, that climate change denier rhetoric has, to some extent, shifted to what he called “doomism”. I have written previously about some of the strategies of climate change denial, specifically the motte and bailey approach. This approach refers to a range of positions, all of which lead to the same conclusion – that we should essentially do nothing to mitigate climate change. We should continue to burn fossil fuels and not worry about the consequences. However, the exact position shifts based upon current circumstances. You can deny that climate change is even happening, when you have evidence or an argument that seems to support this position. But when that position is not rhetorically tenable, you can back off to more easily defended positions: that while climate change may be happening, we don’t know the causes and it may just be a natural trend. When that position fails, then you can fall back to the notion that climate change may not be a bad thing. And then, even if forced to admit that climate change is happening, is largely anthropogenic, and will have largely negative consequences, there isn’t anything we can do about it anyway.
This is where doomism comes in. It is a way of turning calls for climate action against themselves. Advocates for taking steps to mitigate climate change often emphasize how dire the situation is. The climate is already showing dangerous signs of warming, the world is doing too little to change course, the task at hand is enormous, and time is running out. That’s right, say the doomists, in fact it’s already too late and we will never muster the political will to do anything significant, so why bother trying. Again, the answer is – do nothing.
This means that science communicators dealing with climate change have to recalibrate. First, we always have to accurately portray what the science actually says (a limitation that does not burden the other side). But we also need to put this information into a proper context, and think carefully about our framing and emphasis. For example, we can focus on all the negative aspects of climate change and our political dysfunction, trying to convince people how urgent the situation is and the need for bold action. But if we just do this, that would feed the doomist narrative. We also need to emphasize the things we can do, the power we have to change course, the assets (technological and otherwise) at our disposal, and the fact that any change in course has the potential to make things better (or at least less bad). As Mann says – we have created the sense of urgency, and now we need to create a sense of agency.
The framing, therefore, should be one of strategic optimism. Pessimism is self-defeating and self-fulfilling. Admittedly, optimism can be challenging. Trump has pledged to nominate for energy secretary Chris Wright, an oil executive who essentially denies climate change as an issue. Apparently, he does not deny that human-released CO2 is warming the climate, he just thinks the negative consequences are overblown, that the costs of a green energy transition are too great, and that the efforts of the US will likely be offset by emerging industrial nations anyway. Again – do nothing. Just keep drilling. I would dispute all of these positions. Sure, the media overhypes everything, but climate scientists are generally being pretty conservative in their projections – some argue too conservative, if anything. Yes, the cost of the green transition will be great, but the cost of climate change will be greater. And for the investment we get less pollution, better health, and greater energy independence.
That last claim, essentially – why should the US bother to do anything unless everyone is making the same effort, is simply not logical. Climate change is not all or nothing, it is a continuum. Anything anyone does to mitigate greenhouse gas release will help. Also it’s pretty clear that the US has a leadership role to play in this issue, and when we take steps to mitigate climate change other countries tend to follow. Further still, the US has released more CO2 than any other nation, and we still have among the highest per capita CO2 release (mostly exceeded only by petro-states with high oil production and low populations), so it makes little sense to blame emerging economies with comparatively negligible impacts.
But if I’m trying to be optimistic I can focus on a couple of things. First, there is a momentum to technology that is not easily turned off. The IRA has provided billions in subsidies to industry to accelerate the green transition, and a lot of that money is going to red states. It’s doubtful that money will be clawed back. Further, wind and solar are increasing rapidly because they are cost effective, especially while the overall penetration of these sources is still relatively low. Electric vehicles are also getting better and cheaper. So my hope is that these industries have enough momentum to not only survive but to thrive on their own.
Also, there is one green energy technology that has bipartisan support – nuclear. As I discussed recently, we are making moves to significantly increase nuclear energy, and this does require government support to help revitalize the industry and transition to the next generation. Hopefully this will continue over the next four years. So while having someone like Wright as energy secretary (or someone like Trump as president, for that matter) is not ideal for our efforts to make a green energy transition, it is not unreasonable to hope that we can coast through the next four years without too much disruption. We’ll see.
There is also some good news – bad news on the climate front. The bad news is that the negative effects of climate change are happening faster than models predicted. One recent study, for example, shows that there are heat wave hot spots around the world that are difficult to model. Climate models have been great at predicting average global temperatures, but are less able to predict local variation. What is happening is called “tail-widening” – as average temperatures increase, the variability across regions also increases, leading to outlier hotspots. This is causing an increase in heat related deaths, and bringing extreme heat to areas that have not previously experienced it.
We are also seeing events like Hurricane Helene, which hit North Carolina. Scientists are confident that the amount of rainfall was significantly increased due to increases in global temperatures. Warmer air holds more moisture. Dropping more rain meant increased flooding, bringing extreme flooding events and catastrophic damage to an area that was not considered a flood risk and was therefore largely unprepared for such an event.
What’s the good news part of this? Events like extreme heat waves and hurricane destruction seem to be shifting the political center of gravity. It’s becoming harder to deny that climate change is happening with potential negative effects. This gets back to the doomism phenomenon – increasingly, doomism is all the climate change deniers have left. They are essentially saying, sorry, it’s too late. But it is objectively not too late, and it will never be too late to make changes that will have a positive impact, even if that impact is just making things less bad.
The Biden Administration actually showed a good way forward, using essentially all carrots and no sticks. Just give industry some incentives and assurances to make investments in green energy, and they will. We also need to invest in infrastructure, which is also something that tends to have bipartisan support. Climate activists do need to become strategic about their messaging (the other side certainly is). This might mean focusing on bipartisan wins – investing in industry, investing in infrastructure, becoming economic leaders in 21st century technology, and facilitating nuclear and geothermal energy. These are win-wins everyone should be able to get behind.
The post Some Climate Change Trends and Thoughts first appeared on NeuroLogica Blog.
The world of science communication has changed dramatically over the last two decades, and it’s useful to think about those changes, both for people who generate and consume science communication. The big change, of course, is social media, which has disrupted journalism and communication in general.
Prior to this disruption the dominant model was that most science communication was done by science journalists backed up by science editors. Thrown into the mix was the occasional scientist who crossed over into public communication, people like Carl Sagan. Science journalists generally were not scientists, but would have a range of science backgrounds. The number one rule for such science journalists is to communicate the consensus of expert opinion, not substitute their own opinion.
Science journalists are essentially a bridge between scientists and the public. They understand enough about science, and should have a fairly high degree of science literacy, that they can communicate directly with scientists and understand what they have to say. They then repackage that communication for the general public.
This can get tricky when dealing with controversial subjects. This is one of the challenges for good science journalists: they must fairly represent the controversy without making it seem more or less than it is, and without giving unbalanced attention to fringe or minority opinions. They need to speak to enough experts to put the controversy into a proper context.
Now let’s transition to the post-social media world. One of the best things about social media is that it makes it much easier for scientists to communicate directly to the public – for scientists to become journalists. While I think this has created a lot of fantastic content, of which I am an enthusiastic consumer and producer, it has created its own challenges.
The big challenge for science journalists who are not scientists is getting the science right. The big challenge for science journalists who are not journalists is getting the journalism right. Part of the challenge is that scientists who become science journalists blur the lines of expertise. This is because expertise itself is a fuzzy concept, and is more of a continuum. Expertise, in fact, is more of a pyramid.
At the base of this pyramid we have all scientists, who have some level of expertise in science itself – scientific principles and practices, and basic concepts like hypothesis, theory, experiment, and data. They should also have some knowledge of statistics and how they are used in science. So any scientist is in a good position to explain any science to the general public. But “scientist” is also a broad and fuzzy concept. Are engineers scientists? Are clinicians? These are often considered applied sciences, but they may or may not be researchers.
A scientist also has topic expertise in their particular field. So they are especially well positioned to discuss topics within their field of expertise. But “field of expertise” is also a continuum. Using myself as an example, I am a medical doctor, so I have some level of expertise in science itself, a higher level of expertise in medicine, and then a specialty in neuroscience. Within neuroscience there are different subspecialties, and I have fellowship-level training in neuromuscular disease and am also certified in headache medicine. So I am more able to comment on these areas than on multiple sclerosis, for example.
At what point am I considered an “expert”? This is obviously not a binary. I have high-level training in biological concepts, in medicine in general, but have a higher level of expertise in clinical neurology and even higher in my subspecialties. So when I am communicating about such topics, am I communicating as a scientist or as a journalist? The big difference is that scientists, when commenting within their field of expertise, can exercise their own scientific judgement and include their own opinions. Non-scientist journalists should never do this. The scientist journalists have a spectrum of expertise – so where is the line for when they can start to weave in their own scientific opinions?
There is no right or wrong answer here, only judgement calls. Within academia – when communicating with other scientists – the general rule is that you should only communicate from a position of maximal expertise. When communicating to the public, however, there is no such rule (nor would it be practical). Scientist journalists, therefore, have to constantly be shifting their approach to a topic depending on their relative level of expertise, and this can be tricky.
For me, I do my best to understand what the consensus of scientific expert opinion is on a topic and to align my communication with that consensus. I try to be humble and to avoid substituting my own relatively less expert opinion for those with more expertise than me. I leverage my expertise, when I have it, to help understand the topic as a whole and to put it into a helpful context.
There is also another layer to my science communication – I am a scientific skeptic, and I think it is reasonable to consider myself an expert in pseudoscience, science denial, conspiracy thinking, and critical thinking. I often am communicating science through this skeptical lens. This is a type of expertise that is orthogonal to topic expertise. It’s like being a statistician – you are an expert in statistics regardless of the field of science to which it is applied.
There is yet another layer here which can be extremely helpful – we are not all just individual scientist communicators. We are part of a community. This means that we can check in with other scientist communicators with different topic expertise to check our understanding of a topic outside our expertise. This is one of the best things about the SGU podcast – I interview other scientists and ask them questions about the consensus of opinion within their area of expertise. I also have access to lots of colleagues in different fields so I can check my understanding of various topics.
Sometimes this also means that different fields of expertise have a different perspective on a topic that spans multiple fields. In the recent discussion of biological sex, for example, there is clearly a different approach for evolutionary biologists, developmental biologists, neuroscientists, and medical experts. All these views are legitimate, but they contain different perspectives. Again, as a scientist communicator it’s important not to confuse your perspective with the correct perspective.
This can all strengthen our community – if we are all individually willing to be humble, understand and explore the limits of our expertise, listen to our colleagues, and be open to different perspectives. We can also discuss the meta-lessons of how to be better science communicators.
The post Science Communication About Controversial Issues first appeared on NeuroLogica Blog.
Skeptoid corrects another round of errors in previous episodes.
It’s been a while since I discussed artificial intelligence (AI) generated art here. What I have said in the past is that AI art appears a bit soulless and there are details it has difficulty creating without bizarre distortions (hands are particularly difficult). But I also predicted that it would get better fast. So how is it doing? In brief – it’s getting better fast.
I was recently sent a link to this site which tests people on their ability to tell the difference between AI and human-generated art. Unfortunately the site is no longer taking submissions, but you can view a discussion of the results here. These pictures were highly selected, not random choices, so they are not representative – any AI pictures with obvious errors were excluded. People were 60% accurate in determining which art was AI and which was human, which is only slightly better than chance. Also, the most liked picture in the line-up (the one above the fold here) was AI generated. People had the hardest time with the impressionist style, which makes sense.
Again – these were selected pictures. So I can think of three reasons that it may be getting harder to tell the difference between AI and humans in these kinds of tests other than improvements in the AI themselves. First, people may be getting better at using AI as a tool for generating art. This would yield better results, even without any changes in the AI. Second, as more and more people use AI to generate art there are more examples out there, so it is easier to pick the cream of the crop, which is very difficult to tell from human art. This includes picking images without obvious tells, but also just picking ones that don’t feel like AI art. We are now familiar with AI art, having seen so many examples, and that familiarity can be used to subvert expectations by picking examples of AI art that are atypical. Finally, people are figuring out what AI does well and what it does less well. As mentioned, AI is really good at some genres, like impressionism. This could also fall under getting better at using AI art, but I thought it was distinct enough for its own mention.
And, of course, AI applications are being iterated and are objectively improving. I have been using them since they hit the scene and I can see the difference in results. I still think that most AI generated art is a bit soulless, but that is improving significantly. It does depend a great deal on style, and you still have to throw a lot of crap against the wall to get good results. But the compositions are improving and the details are improving. The lighting can be amazing, and the ability to generate highly detailed images can be astounding.
None of this changes the ongoing discussion of whether or not AI-generated images can truly be considered art, and whether this is fair to the artists whose work is being scraped to generate these images. On the former question, I still think the answer is – yes and no. It depends on how you define art. Some people I have had this discussion with use what might be called “high art” as the definition. This makes it easier to answer the question – if you are just putting a prompt into an AI algorithm and getting back an image, that is not high art. But of course there is a lot of human-generated art that also has minimal creative content and is not high art. I think the best way to define art as true artistic expression is by what went into it. How much thought, feeling, background, talent, and creativity went into the process? The more that went in, the more it contains. You still get out what you put in, even if the AI is doing the technical heavy lifting.
But sure, this is still a gray zone.
Also, we are not just talking about images. Video, music, and voice replication are all getting better pretty fast. There is a lot of buzz, for example, about this Star Trek piece, which is pretty impressive. You can still tell it’s AI. The characters are slightly off. The blinking is not quite right, so we are still a bit in the uncanny valley. Also, the characters don’t speak, which is always a dead giveaway.
I am also still blown away by this reimagining of the light saber fight between Obi-Wan and Darth Vader in A New Hope. It’s not hard to see where we are headed – a world in which convincing video of pretty much anything can be generated with the help of AI. This has both good and bad implications. The good implications, potentially, are for entertainment. You may be able to watch any story with any actors, living or dead, in any style. I want to see a film noir with a young Harrison Ford playing a hard-boiled detective in a 1920s thriller. You could recast any movie with different actors. We could create the seasons of Firefly that were never made.
The downside, of course, is that you cannot trust anything you see. I basically am already there. I do not trust any image or video, especially if there are any social or political implications, unless it has been adequately vetted. The problem is that many people don’t wait for a sufficient vetting. Also, the damage of disinformation is already done by the time it gets investigated and corrected. Even worse, this is all leading to a kind of intellectual fatigue where people throw up their hands and conclude – well, you can’t trust anything, so I might as well just believe what feels good. This is what my tribe believes – that’s good enough for me.
So sure, I would love to see a well-crafted season 4 of Star Trek TOS, but I would also like to trust my news. I am not willing to throw up my hands just yet. I think we need to explore ways to minimize disinformation and make it easier for people to feel confident in a shared reality. But if not, at least I can entertain myself with some nostalgia.
The post Update on AI Art first appeared on NeuroLogica Blog.
Michael Shermer interviews Jon Mills, a psychoanalyst and philosopher, on a variety of topics, including the evolution of psychoanalysis, the dynamics of therapeutic relationships, and the psychological roots of aggression and trauma. Mills explains Freud’s lasting influence, the moral implications of aggression, and the role violence plays in society. The conversation also explores how trauma affects individuals and families across generations and the difficulty of understanding human behavior when faced with global challenges.
The discussion extends to broader issues such as individuality, the struggles faced by modern youth, and the evolution of belief in God. Shermer and Mills discuss how technology impacts mental health and the pursuit of spirituality without relying on traditional religion.
Jon Mills, PsyD, PhD, ABPP, is a philosopher, psychoanalyst, and clinical psychologist. His two latest books are Inventing God: Psychology of Belief and the Rise of Secular Spirituality, and End of the World: Civilization and its Fate.
End of the World: Civilization and Its Fate
Famine. Extreme climate change. Threats of global war and nuclear annihilation. Obscene wealth disparities. Is civilization destined for self-annihilation? In this timely book, philosopher and psychoanalyst Jon Mills explores the emergencies that could ignite an apocalypse. As we idly stand by in the face of ecological, economic, and societal collapse, we must seriously question whether humanity is under the sway of a collective unconscious death wish. Examining ominous existential risks and drawing on the psychological motivations, unconscious conflicts, and cultural complexes that drive human behavior and social relations, he offers fresh new perspectives on the looming fate of humanity based on a collective bystander disorder.
End of the World is a warning about the dangerous precipice we find ourselves careening toward and a call to action to take control of our own fate.
Inventing God: Psychology of Belief and the Rise of Secular Spirituality
In this controversial book, philosopher and psychoanalyst Jon Mills argues that God does not exist; and more provocatively, that God cannot exist as anything but an idea. Put concisely, God is a psychological creation signifying ultimate ideality. Mills argues that the idea or conception of God is the manifestation of humanity’s denial and response to natural deprivation; a self-relation to an internalized idealized object, the idealization of imagined value.
After demonstrating the lack of any empirical evidence and the logical impossibility of God, Mills explains the psychological motivations underlying humanity’s need to invent a supreme being. In a highly nuanced analysis of unconscious processes informing the psychology of belief and institutionalized social ideology, he concludes that belief in God is the failure to accept our impending death and mourn natural absence for the delusion of divine presence. As an alternative to theistic faith, he offers a secular spirituality that emphasizes the quality of lived experience, the primacy of feeling and value inquiry, ethical self-consciousness, aesthetic and ecological sensibility, and authentic relationality toward self, other, and world as the pursuit of a beautiful soul in search of the numinous.
Jon Mills, PsyD, PhD, ABPP, is a philosopher, psychoanalyst, and clinical psychologist. He is Honorary Professor, Department of Psychosocial & Psychoanalytic Studies, University of Essex, UK, on faculty in the Postgraduate Programs in Psychoanalysis & Psychotherapy, Gordon F. Derner School of Psychology, Adelphi University, USA, and on faculty and a Supervising Analyst at the New School for Existential Psychoanalysis, USA. Recipient of numerous awards for his scholarship, including 5 Gradiva Awards, he is the author and/or editor of over 30 books in psychoanalysis, philosophy, psychology, and cultural studies, including most recently Psyche, Culture, World. In 2015 he was given the Otto Weininger Memorial Award for Lifetime Achievement by the Canadian Psychological Association. He is based in Ontario, Canada.
If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.
This alleged sea serpent terrorized a New England fishing village for two years in the 19th century.