In 2023, a subatomic particle called a neutrino slammed into Earth with so much energy that it should have been impossible. In fact, there is no known source anywhere in the universe capable of producing such energy: 100,000 times more than that of the highest-energy particle ever produced by the Large Hadron Collider, the world's most powerful particle accelerator. However, a team of physicists at the University of Massachusetts Amherst recently hypothesized that something like this could happen when a special kind of black hole, called a "quasi-extremal primordial black hole," explodes.
There are a fair number of newbies coming on to the site, which is great, but a couple of them are hateful, like the one who tried to refer to your host yesterday as a “kike faggot who runs this site” with “a fine hooked nose as any other degenerate kike”. Needless to say, that person has been banished to the hinterland for antisemites for committing a big-time Roolz violation. But I wanted to let other new readers/commenters know that there are guidelines for commenting here, called, in Chicago argot, “Da Roolz”. You can find them on the left sidebar or at the preceding link. They may seem long, but I find them useful for ensuring civility and reasonable discussion on this website. If you haven’t read them, please do before posting.
And if you want to send me wildlife photos (I welcome good ones), read the sidebar post “How to send me wildlife photos.”
Thanks!
Telescopes are getting smaller. It’s strange to think: smartscopes have been with us for over half a decade now. Since 2020, we’ve tested units from Vaonis, Unistellar and more. In a short time, these smartscopes have revolutionized amateur astronomy, putting deep-sky imaging within reach of casual users. Recently, we had a chance to put Dwarf Lab’s latest unit, the Dwarf Mini, through its paces.
Mark Zuckerberg said a few months ago that AI is ushering in a third phase of social media. First social media was used to connect with family and friends, then it became a platform for content creators, and now creativity is being further unleashed with new AI-powered tools. That’s a pretty rosy view, and unsurprising coming from the creator of Facebook. Many people, however, are becoming increasingly concerned about what the net effect of AI-generated content will be, especially low-grade content (now colloquially referred to as AI slop).
One thing is clear – AI-generated content, because it is so easy and fast to produce, is increasingly flooding social media. AI’s influence takes two basic forms: AI-generated content, and recommendations driven by AI-powered algorithms. So an AI might be telling you to watch an AI-generated video. Recent studies show that about 70% of images on Facebook are now AI-generated, with 80% of recommendations being AI-powered. This is a fast-moving target, but across social media AI-generated content makes up somewhere between 20 and 40% of the total. This is not evenly distributed, with some sites being overwhelmed. The arts and crafts site Etsy has been overrun by AI slop, causing some users to abandon the platform.
We are already seeing a backlash and crackdown, but this is sporadic and of questionable effectiveness. Etsy, for example, has tried to limit AI slop on its site, but with limited success. So where is all this headed?
We need to consider the different types of content separately. Much AI slop is obviously fake and for entertainment purposes only. It may be cartoony or openly humorous, with no intent to pass as real or to deceive. Some content is meant to entertain (i.e., drive clicks and engagement) but is not obviously fake. Part of the appeal, in fact, may be the question of whether or not the content is real. Other content is meant to deceive – to influence public opinion or the behavior of the content consumer. This last type of content is obviously the most concerning.
There are also different types of concerns or potential negative outcomes. One of the biggest is that AI-generated content can be used to spread misinformation. This has both direct and indirect negative effects – it can spread false information and influence public opinion, but it also degrades trust in accurate information and responsible sources. True information can then be dismissed as possibly fake. The combined effect is that we no longer know what is true and what is not. Without any way to objectively referee which facts are reliable and which are likely fake (and yes, it’s a continuum, not a dichotomy), people will tend to just hunker down with their social tribe. Each group has its own reality, with no shared reality to bridge the gap.
There is also the Etsy problem – low-quality content is crowding out anything of value, and consumers are buried in slop. I use Etsy, and so have encountered this myself. It takes a lot of cognitive work to separate real work, especially art, from the flood of AI content. Highly cognitively demanding work is unsustainable – most people will not do it for long and will look for a less work-intensive path. This may mean abandoning a platform, throwing up their hands and saying it’s hopeless to tell the difference, or just giving in and not worrying about whether something is AI or not. This is a problem for non-AI content creators, and also a problem across the board. Mental AI-fatigue will affect everything, not just low-grade AI artwork. Etsy-fatigue can also influence how much mental energy we have left for political AI content (studies do show that mental energy is fungible in this way).
There is also a middle ground – not low-grade AI slop or deliberate deception, but AI used as a legitimate tool to create high-quality art or other content. This is the use I think can be valuable, making content creation better or more efficient. The problem with this content is not really for the end user but with questions of ownership and the displacement of human artists. For me, this is where the real dilemma is. I would love for the big video game companies to be able to double their output because of efficiencies gained through AI, and I also want to see how the latest AI can enhance certain game features (like interacting with AI-driven characters, or open-ended generative content). But these advances are being held back by the other concerns with AI, many of which are legitimate.
There are several approaches to the issue that I can see. One is to simply let the free market sort it all out. Users are pushing back somewhat against AI slop, and companies are responding. We will see how well they can manage the issue, but if the last few decades are any guide, I don’t have a lot of hope that big tech companies will put what’s best for the end user ahead of their own bottom line. Likely some individual platforms will push back heavily against AI, perhaps even creating AI-free social media platforms or websites.
A second approach is to craft some thoughtful legislation to try to wrangle this beast. The most important fix would simply be transparency – if AI-generated content had to be labeled as such, with heavy penalties for passing off AI content as real, this could significantly help. I would also like to see a conversation about how algorithms recommend content. It may also be feasible to make the use of AI-generated fakes for political persuasion illegal.
Both of these approaches, however, require a third approach – developing the technology to detect, label, and filter AI-generated content. A truly effective app to do this could be massively useful, and I think highly popular.
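To make this concrete, here is a minimal sketch, in Python, of what the filtering half of such an app might look like. Everything in it is hypothetical – the Post structure, the score_ai_likelihood detector, and the 0.8 threshold are placeholders for illustration, not a description of any existing product.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    declared_ai: bool  # True when the platform has labeled this post as AI-generated

def filter_feed(posts: List[Post],
                score_ai_likelihood: Callable[[Post], float],  # hypothetical detector: 0.0 = human, 1.0 = AI
                threshold: float = 0.8,
                hide_declared: bool = False) -> List[Post]:
    # Honestly labeled AI content is shown or hidden per user preference;
    # only undeclared content is screened by the detector.
    kept = []
    for post in posts:
        if post.declared_ai:
            if not hide_declared:
                kept.append(post)
        elif score_ai_likelihood(post) < threshold:
            kept.append(post)
    return kept

The design choice worth noting is that honestly labeled content skips the detector entirely, which rewards exactly the kind of transparency described above; only content trying to pass as human-made gets screened, and the user decides how aggressive the threshold should be.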
My biggest concern is that governments will use AI to enhance their ability to control their populations. This is part of the “information autocracy” problem. If you control what information your population sees, you can control what they think, and you can control what they do. This is already a problem, but AI-generated content and AI-driven algorithms can make it orders of magnitude more effective. Even without authoritarian governments, large corporations can use the same technology to influence their consumers. Or they can use it to promote their political views. A populace, both entertained and overwhelmed by AI slop, would be especially compliant.
Mr. Epstein was not only a world-class child abuser; he was also a big fan of theoretical high-energy physics and of theoretical physicists. Some of my colleagues, unfortunately, got to know him. A number who were famous and/or had John Brockman as a book agent were even invited to a physics conference on Epstein’s private island, well before he was first arrested. This was no secret; as I recall, a lot of us heard about the existence of this conference/trip, but we hadn’t heard Epstein’s name before and didn’t pay much attention (ho hum, just another weird billionaire).
Personally, I feel quite lucky. The Brockman agency rejected the proposal for my recent book without comment (thank you!), and my research is mostly considered unimportant by the Brian Greenes of the world. As a result, I was not invited to Epstein’s island, never made his acquaintance, and blissfully avoided the entire affair. Clearly there are some benefits to being considered ordinary. And so – I’m sorry/not-sorry to say – I can’t tell you much about Epstein at all, or about how certain physicists did and did not interact with him. As for my colleagues who did get to know him, I can’t speak for them, since I wasn’t there, and I don’t know to what extent Epstein hid his immoral activities when they were around. It’s up to them to tell their own stories if they feel the need to do so (and I hope a couple of them do, just to clear the air). Personally, I tend to give them the benefit of the doubt – probably some literally didn’t know what was up until Epstein’s conviction in 2008, while perhaps others felt there wasn’t much they could do about Epstein’s actions on his own private island. I imagine they are deeply embarrassed to have been caught in this horrible man’s ugly web.
Fans of physics come in all shapes and sizes, and some have large wallets, large egos, and/or large ambitions. Among the wealthy supporters we can count Alfred Nobel himself; billionaires sit on important scientific institute and university boards, and the more recent Breakthrough Prizes were funded by deep pockets. The extremely wealthy have outsized influence in our country and in our world, and one could argue that their influence in 2025 was not for the better. Usually, though, their influence in physics and related fields tends to be relatively benign, funding postdoctoral researchers and graduate students who deeply want to do science but also need to eat. That said, sometimes donors fund non-essential fields at the expense of critical ones, or favor theoretical research over the gathering of crucial experimental data, or push money on famous, rich organizations when there are poor ones that are equally deserving and far more needy.
When gazillionaires, on their own initiative, come calling on non-profit organizations, whether they be community centers, arts organizations, or universities, they pose a problem. On the one hand, it is the job of anyone in a non-profit organization to help raise money — fail to do that, and your organization will close. When a single person offers to permanently change the future of your program, you would be derelict in your duty if you did not consider that offer. On the other hand, donors who might have ethical or criminal problems could drag the organization’s name through the mud. Worse, they might be able to force the organization itself to do something ethically questionable or even illegal.
There is a clear lesson for young academics and other up-and-coming non-profit actors in the Epstein affair: the more money potentially offered to our organizations, the more carefully we must tread. Money is power; power corrupts; and every pursuit of dollars, even for the best causes, risks infection. We can’t be large-scale non-profit fundraisers without doing serious and thorough background checks of the biggest donors; we have to question motives, and we can’t look the other way when something seems amiss. Those of us with clear hearts and honest pursuits tend to assume the best in other people. But we have to beware of those hoping to bolster their reputations, or clean their consciences, by giving away “generously” what they never deserved to have.
Astronomers have been collecting data for generations, and the sad fact is that not all of it has been fully analyzed. There are still discoveries hiding in the dark recesses of data archives strewn throughout the astronomical world. Some of these archives are harder to access than others, such as physical plates recording star positions from more than a hundred years ago. But as more and more of this data is archived, astronomers keep coming up with ever more impressive tools to analyze it. A recent paper from Cyril Tasse of the Paris Observatory and his co-authors, published in Nature Astronomy, describes an algorithm that analyzes hundreds of thousands of previously unexamined data points in radio telescope archives - and it turned up some interesting features in them.
NASA’s Orion spacecraft, which will carry the Artemis II crew around the Moon, sits at the launch pad on Jan. 17, 2026, after rollout. It rests atop the SLS (Space Launch System) rocket. Orion can provide living space on missions for four astronauts for up to 21 days without docking to another spacecraft. Advances in technology […]
This is all based on the assumption that galaxies are receding from us. And I actually cheated a little.
The JWST found a system of at least five interacting galaxies only 800 million years after the Big Bang. The discovery adds weight to the growing understanding that galaxies were interacting and shaping their surroundings far earlier than scientists thought. There's also evidence that the interactions were redistributing heavy elements beyond the galaxies themselves.