Lots of things out in the Universe can cause a supernova, from the gravitational collapse of a massive star to the collision of white dwarfs. But most of the supernovae we observe are in other galaxies, too distant for us to see the details of the process. So, instead, we categorize supernovae by observed characteristics such as their light curves (how they brighten and fade) and the elements identified in their spectra. While this gives us some idea of the underlying cause, there are still things we don’t entirely understand. This is especially true for one kind of supernova known as Type Ia.
You have likely heard of Type Ia supernovae because they are central to our understanding of cosmology. Their key characteristic is a uniform maximum brightness, which means we can compare their apparent brightness to their intrinsic brightness and calculate their distance. For this reason, they are often referred to as standard candles, and they were the first way we learned that the Universe is not just expanding; it’s accelerating under the influence of dark energy.
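To make the standard-candle arithmetic concrete, here is a minimal Python sketch of the distance-modulus relation, m - M = 5 log10(d / 10 pc); the peak absolute magnitude of about -19.3 is a rough textbook value for Type Ia supernovae, and the apparent magnitude is an invented example.

```python
import math

def distance_from_magnitudes(apparent_mag, absolute_mag):
    """Distance in parsecs from the distance modulus: m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Type Ia supernovae peak near absolute magnitude M ~ -19.3 (a rough textbook
# value); the apparent magnitude below is an invented example.
d_pc = distance_from_magnitudes(15.7, -19.3)
print(f"distance ~ {d_pc:.2e} pc (~{d_pc * 3.2616 / 1e6:.0f} million light-years)")
```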
From the spectra of these supernovae, we can see that the initial brightness is powered by the radioactive decay of nickel-56, while much of the later brightness comes from the decay of cobalt-56. We also see the presence of ionized silicon near peak brightness, which no other type of supernova shows. This tells us that Type Ia supernovae are not caused by the core collapse of a star, but rather by some kind of thermal runaway effect.
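As a rough illustration of that two-step power source, here is a small Python sketch of the Ni-56 to Co-56 to Fe-56 decay chain, using standard half-lives of about 6.1 and 77 days. Turning these abundances into an actual light curve would also require the decay energies and the ejecta’s opacity, which are omitted here.

```python
import math

T_NI, T_CO = 6.08, 77.2       # half-lives in days (Ni-56, Co-56)
LAM_NI = math.log(2) / T_NI   # decay constants per day
LAM_CO = math.log(2) / T_CO

def ni56(t_days, n0=1.0):
    """Fraction of the initial Ni-56 remaining after t_days."""
    return n0 * math.exp(-LAM_NI * t_days)

def co56(t_days, n0=1.0):
    """Co-56 abundance from the Bateman equation for the two-step chain
    Ni-56 -> Co-56 -> Fe-56, starting from pure Ni-56."""
    return n0 * LAM_NI / (LAM_CO - LAM_NI) * (
        math.exp(-LAM_NI * t_days) - math.exp(-LAM_CO * t_days)
    )

for t in (0, 10, 30, 100):
    print(f"day {t:3d}: Ni-56 = {ni56(t):.3f}, Co-56 = {co56(t):.3f}")
```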
Single progenitor of a Type Ia supernova. Credit: NASA, ESA and A. Feild (STScI)
The most popular model for Type Ia supernovae is that they are caused by the thermonuclear detonation of a white dwarf. When a white dwarf is part of a close binary with an aging red giant, the white dwarf can capture some of the companion’s outer layers. Over time, the white dwarf accumulates enough mass that it crosses the Chandrasekhar limit, which triggers the supernova. Since the Chandrasekhar limit is always 1.4 solar masses, this would explain why Type Ia supernovae always have the same maximum brightness.
But as we’ve observed ever more supernovae, we’ve learned that Type Ia supernovae don’t always have the same maximum brightness. Some are notably brighter, with weaker silicon lines and stronger iron lines in their spectra. Others are much dimmer than usual, with strong titanium absorption lines. This doesn’t prevent their use as standard candles, since we can identify these outliers by their spectra and adjust our brightness calculations accordingly, but it does suggest that the single-progenitor model is incomplete.
Illustration of colliding white dwarf stars. Credit: European Southern Observatory
One possibility is that some Type Ia supernovae are caused by white dwarf collisions. Given the calculated number of binary white dwarf systems, collisions can’t account for all supernovae of this type, but stellar collisions are known to occur, and they wouldn’t be bound by the Chandrasekhar limit, thus allowing for supernovae that are brighter or dimmer than usual. It’s also possible that in some cases a white dwarf accretes matter from a close companion but isn’t destroyed by the resulting explosion, which could explain the dimmer subtypes of these supernovae.
Right now, there are lots of possibilities, and we simply don’t have enough data to pinpoint causes. But the good news is that with new observatories and sky surveys such as the Rubin Observatory coming online soon, we will gather a wealth of observational data, particularly from supernovae that occur within our own galaxy. This will provide us with the information we need to finally solve this longstanding astronomical problem.
Reference: Ruiter, Ashley J., and Ivo R. Seitenzahl. “Type Ia supernova progenitors: a contemporary view of a long-standing puzzle.” arXiv preprint arXiv:2412.01766 (2024).
The post Do We Really Know What Becomes a Type Ia Supernova? appeared first on Universe Today.
A spacecraft that can provide the propulsion necessary to reach other planets while also being reproducible, relatively light, and inexpensive would be a great boon to larger missions in the inner solar system. Microcosm, Inc., based in Hawthorne, California, proposed just such a system via a NASA Small Business Innovation Research (SBIR) grant. Its Hummingbird spacecraft would have provided a platform to visit nearby planets and asteroids and a payload to do some basic scouting of them.
Large space missions are expensive, so using a much less expensive spacecraft to collect preliminary data on the mission target could potentially help save money on the larger mission’s final design. That is the role that Hummingbird would play. It is designed essentially as a propulsion system, with slots for radiation-hardened CubeSat components as well as a larger exchangeable payload, such as a telescope.
The key component of the Hummingbird is its propulsion system. It uses a rocket engine that runs on hydrazine fuel. More importantly, it holds a lot of that fuel. A fully assembled system is expected to weigh 25 kg “Dry”—meaning without propellant installed—whereas a fully fueled “Wet” system would weigh an estimated 80 kg.
Travelling to a Lagrange point is one of the things Hummingbird could do. Fraser explains why these points in space are important.
That would give Hummingbird plenty of “oomph”: an estimated 3.5 km/s of delta-V, enough to reach hard-to-get-to objects like some near-Earth asteroids. It could also reach larger destinations, like Mars or even Venus, the various Lagrange points, or Mars’ moons.
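Those wet and dry masses plug directly into the Tsiolkovsky rocket equation. The sketch below uses the article’s 80 kg and 25 kg figures with an assumed specific impulse of about 230 s, typical for monopropellant hydrazine; the quoted 3.5 km/s would imply an effective Isp above 300 s, closer to a bipropellant engine, so treat the numbers as illustrative.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, m_wet_kg, m_dry_kg):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m_wet / m_dry)."""
    return isp_s * G0 * math.log(m_wet_kg / m_dry_kg)

# 80 kg wet / 25 kg dry are the article's figures; the specific impulse is an
# assumed value (monopropellant hydrazine thrusters typically run ~230 s).
dv = delta_v(230, 80, 25)
print(f"delta-v ~ {dv / 1000:.1f} km/s")  # ~2.6 km/s at Isp = 230 s
```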
When it got there, the prototype of Hummingbird described in a paper presented back in 2013 would take images of its target world using an Exelis telescope. The manufacturer of this telescope has since been bought by Harris Corporation, which was then rolled into L3Harris Technologies, the owner of Aerojet Rocketdyne. However, the authors stress that the payload itself was interchangeable and could be tailored to the mission it was meant to scout.
The Hummingbird bus also served as the fuel tank, and it had additional slots for CubeSat components, which could be used for further data collection or analysis. However, the paper doesn’t mention how Hummingbird would handle standard CubeSat operations, like attitude control or communications back to a ground station.
A CubeSat has already made its way to Mars, as described in the JPL video.
Those details could likely have been worked out in future iterations. Additionally, the final design was published before the cost of getting to orbit dropped dramatically; the authors don’t even mention a “Falcon” as a potential launch service. A lot has changed in the space industry in the last 11 years. Still, the idea behind Hummingbird, an inexpensive, adaptable platform for preliminary scouting missions to interesting places in the inner solar system, has yet to see its day in the sun. The project did not appear to receive a Phase II SBIR grant, which could have continued its development. But maybe, someday, it or a similar system will see the light of interplanetary space.
Learn More:
C. Taylor et al – Hummingbird: Versatile Interplanetary Mission Architecture
UT – What Happened to those CubeSats that were Launched with Artemis I?
UT – A CubeSat Mission to Phobos Could Map Staging Bases for a Mars Landing
UT – We Could SCATTER CubeSats Around Uranus To Track How It Changes
Lead Image:
Computer-generated mockup of the Hummingbird spacecraft
Credit – C. Taylor et al.
The post A Cheap Satellite with Large Fuel Tank Could Scout For Interplanetary Missions appeared first on Universe Today.
Don’t let the bright Moon deter you from seeing one of the best meteor showers of the year.
One of the best meteor showers of 2024 closes out the year this coming weekend. If skies are clear, watch for the Geminid meteors, peaking on the night of Friday into Saturday, December 13-14th.
The Geminids in 2024
To be sure, the Geminids have a few strikes against them this year. Not only is it cold outside, but the Moon is near Full, a 98% illuminated waxing gibbous at the shower’s maximum. But don’t despair: the shower hits its maximum at 3:00 Universal Time (UT) on December 14th (10:00 PM EST on the 13th) with a maximum Zenithal Hourly Rate (ZHR) of 120 meteors per hour. This means the shower will favor western Europe and North America, a plus. The radiant in Gemini near the bright star Castor (Alpha Geminorum) also means that the shower starts to be active in the late evening, before local midnight.
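Keep in mind that the ZHR is an idealized ceiling: it assumes the radiant is at the zenith under dark, magnitude-6.5 skies. For a rough sense of what the bright Moon costs you, here is a Python sketch of the standard correction; the radiant altitude, the moonlit limiting magnitude, and the Geminids’ population index of roughly 2.6 are illustrative assumptions.

```python
import math

def observed_rate(zhr, radiant_alt_deg, limiting_mag, r=2.6):
    """Hourly rate a single observer can expect, from the standard correction:
    HR = ZHR * sin(radiant altitude) / r**(6.5 - limiting magnitude)."""
    return zhr * math.sin(math.radians(radiant_alt_deg)) / r ** (6.5 - limiting_mag)

# ZHR = 120 is from the article; the radiant altitude, the moonlit limiting
# magnitude, and the population index r ~ 2.6 are illustrative assumptions.
print(f"bright Moon: ~{observed_rate(120, 60, 4.5):.0f} meteors/hour")
print(f"dark skies:  ~{observed_rate(120, 60, 6.5):.0f} meteors/hour")
```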
The Geminid radiant, looking east on the evening of December 13th. Credit: Stellarium.
The source of the Geminids is none other than the prolific ‘rock-comet’ 3200 Phaethon. Clearly, something intriguing is going on with this object. On a short 1.4-year orbit, 3200 Phaethon seems to blur the line between asteroid and semi-dormant comet nucleus. Japan wants to send its DESTINY+ mission to 3200 Phaethon in 2028 to get a closer look.
A radar animation of 3200 Phaethon. Credit: Arecibo/NASA/NSF
The Geminids have put on a show since 1862, though they seem to have really taken off in recent decades, surpassing the August Perseids as the best meteor shower of the year.
Fighting the Moon
The key to seeing any meteor shower at its best is to find dark skies and a clear, unobstructed horizon. The December Moon sits just one constellation away, in Taurus, at the shower’s peak… but keep in mind that the shower is also active on the evenings before and after the 14th. I plan to select my observing site with this in mind and block the Moon behind a hill or tree. Early morning predawn observing will put the Moon lower to the horizon.
A sequence of Geminid meteors from 2014. Credit: Mary McIntyre.
There’s a reason the Moon is currently so high in the sky: not only is it near the December Solstice point, occupying the slot the Sun will hold in June, but we’re also headed towards a once-every-18.6-years Major Lunar Standstill in 2025.
A Geminid meteor in an all-sky camera view. Credit: Eliot Herman.
Observing and contributing to meteor shower science is as easy as watching, recording what you see at a designated interval, and reporting that count to the International Meteor Organization (IMO). Keep in mind that several other meteor showers are still active in mid-December, including the November Taurid fireballs and the Ursids, which peak on December 22nd. For imaging, I like to simply automate the process: set a wide-field DSLR camera running on a tripod with an intervalometer to take timed exposures, and see what turns up later in post-processing. Aim the camera about 45 to 90 degrees off to one side of the radiant to catch the Geminid meteors in profile.
Don’t miss the 2024 Geminids: a fine way to round out a year of sky-watching.
The post Our Strategy to Catch the 2024 Geminid Meteors appeared first on Universe Today.
As I predicted, the controversy over whether or not we have achieved general AI will likely persist for a long time before there is a consensus that we have. The latest round of this controversy comes from Vahid Kazemi of OpenAI. He posted on X:
“In my opinion we have already achieved AGI and it’s even more clear with O1. We have not achieved “better than any human at any task” but what we have is “better than most humans at most tasks”. Some say LLMs only know how to follow a recipe. Firstly, no one can really explain what a trillion parameter deep neural net can learn. But even if you believe that, the whole scientific method can be summarized as a recipe: observe, hypothesize, and verify. Good scientists can produce better hypothesis based on their intuition, but that intuition itself was built by many trial and errors. There’s nothing that can’t be learned with examples.”
I will set aside the possibility that this is all publicity for OpenAI’s newest O1 platform. Taken at face value, what is the claim being made here? I actually am not sure (part of the problem of short-form venues like X). In order to say whether or not OpenAI’s O1 platform qualifies as an artificial general intelligence (AGI), we need to operationally define what an AGI is. Right away, we get deep into the weeds, but here is a basic definition: “Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks.”
That may seem straightforward, but it is highly problematic for many reasons. Scientific American has a good discussion of the issues here. But at its core, two features pop up regularly in various definitions of general AI: the AI has to have wide-ranging abilities, and it has to equal or surpass human-level cognitive function. There is also debate about whether how the AI achieves its ends matters or should matter. Does it matter if the AI is truly thinking or understanding? Does it matter if the AI is self-aware or sentient? Does the output have to represent true originality or creativity?
Kazemi puts his nickel down on how he operationally defines general AI: “better than most humans at most tasks”. As is often the case, one has to frame such claims as “If you define X this way, then this is X.” So, if you define AGI as being better than most humans at most tasks, then Kazemi’s claims are somewhat reasonable. There is still a lot to debate, but at least we have some clear parameters. This definition also eliminates the thorny questions of understanding and awareness.
But not everyone agrees with this definition. There are still many experts who contend that modern LLMs are just really good autocompletes. They are language-prediction algorithms that simulate thought by simulating language, but are not capable of true thought, understanding, or creativity. What they are great at is sifting through massive amounts of data, finding patterns, and then regenerating those patterns.
This is not a mere discussion of “how” LLMs function but gets to the core of whether or not they are “better” than humans at what they do. I think the primary argument against LLMs being better than humans is that they function by using the output of humans. Kazemi essentially says this is just how they learn, they are following a recipe like people do. But I think that dodges the key question.
Let’s take art as an example. Humans create art, and some artists are truly creative and can bring into existence new and unique works. There are always influences and context, but there is also true creativity. AI art does not do this. It sifts through the work of humans, learns the patterns, and then generates imitations from those patterns. Since AI does not experience existence, it cannot draw upon experience or emotions or the feeling of what it is to be a human in order to manifest artistic creativity. It just regurgitates the work of humans. So how can we say that AI is better than humans at art when it is completely dependent on humans for what it does? The same is true for everything LLMs do, but it is just more obvious when it comes to art.
To be clear, I am not denigrating LLMs or any modern AI; they are extremely useful tools. They are powerful and fast and can accomplish many great tasks. They are accelerating the rate of scientific research in many areas. They can improve the practice of medicine. They can help us manage the tsunami of data we are drowning in. And yes, they can do a lot of different tasks.
Perhaps it is easier to define what is not AGI. A chess-playing computer is not AGI, as it is programmed to do one task. In fact, the term AGI was developed by programmers to distinguish this effort from the crop of narrow AI applications that were popping up, like Chess and Go players. But is everything that is not a very narrow AI an AGI? Seems like we need more highly specific terms.
OpenAI’s models and other LLMs are more than just the narrow AIs of old. But they are not thinking machines, nor do they have human-level intelligence. They are also certainly not self-aware. I think Kazemi’s point about a trillion-parameter deep neural net misses the point. Sure, we don’t know exactly what it is doing, but we know what it is not doing, and we can infer from its output, and from how it is programmed, the general way it accomplishes its outcome. There is also the fact that LLMs are still “brittle”, a term that refers to how narrow AIs can be easily “broken” when pushed beyond their parameters. It’s not hard to throw an LLM off its game and push the limits of its ability. It still has no true thinking or understanding, and this makes it brittle.
For that reason, I don’t think that LLMs have achieved AGI. But I could be wrong, and even if we are not there yet, we may be really close. Regardless, I think we need to go back to the drawing board, look at what we currently have in terms of AI, and have experts come up with new, more specific operational definitions. We do this in medicine all the time: as our knowledge evolves, experts get together to revamp diagnostic definitions and create new diagnoses to reflect that knowledge. Perhaps ANI and AGI are not enough.
To me, LLMs seem like a multi-purpose ANI, and perhaps that is a good definition. Either “AGI” needs to be reserved for an AI that can truly derive new knowledge from a general understanding of the world, or we “downgrade” the term “AGI” to refer to what LLMs currently are (multi-purpose but otherwise narrow) and come up with a new term for true human-level thinking and understanding.
What’s exciting (and for some scary) is that AIs are advancing quickly enough to force a reconsideration of our definitions of what AIs actually are.
The post Have We Achieved General AI first appeared on NeuroLogica Blog.
Meanwhile, in Dobrzyn, Hili is on the alert:
Hili: The Chinese are coming!
A: You must be confused.
Hili: So maybe it’s somebody else.
In Polish:
Hili: Chińczycy idą!
Ja: Chyba ci się coś pomyliło.
Hili: To może jacyś inni.
On BlueSky, Jack Ashby is continuing his observations of the duckbilled platypus:
Perfect #platypus – you can see how they change from swimming to waddling to slithering depending on how deep the water is.#MonotremeMonday #fieldwork #Tasmania #MammalWatching #platypuses #WildOz
— Jack Ashby (@jackdashby.bsky.social) 2024-12-09T08:22:33.461Z
And Ze Frank is exploring how species are named:
Since the nomination of Dr. Jay Bhattacharya for NIH Director, I've been seeing a suggestion from certain contrarian doctors for a "randomized trial" of study sections vs. a "modified lottery" to determine which grant applications are funded by the NIH. Just what the heck is Dr. Vinay Prasad talking about?
The post Are NIH study sections a waste of time? first appeared on Science-Based Medicine.