Please send in your photos, or at least get them ready to send, as I’ll be gone from this Wednesday through Thursday, the 31st. Today we’re featuring the birds of Iceland taken by physicist and origami master Robert Lang, traveling on a June Center for Inquiry cruise featuring Richard Dawkins. (Robert’s flower pictures from the same trip are here.) Robert’s captions are indented, and you can enlarge the photos by clicking on them.
Iceland Birds (etc.)
Continuing my recent trip to islands of the northern Atlantic—heading out from Ireland taking in Orkney, Shetland, the Faroe Islands, and then Iceland—here are some of the birds (and a few bonus mammals) we saw along the way. Most of these are from Iceland. (I am not a birder, so IDs are from Merlin ID and/or Wikipedia; corrections are welcome.)
An Arctic Tern (Sterna paradisaea), taken at Grimsey Island, the northernmost spot of Iceland, a bit of which extends above the Arctic Circle. Visiting brought home how powerful the warming influence of the Gulf Stream is; it was light-jacket weather when we visited in June and the ground was covered in thick grassland. By contrast, six months earlier I was just across the Antarctic Circle along the Antarctic Peninsula (so also in midsummer), and all was glaciers, snow, and ice:
Also from Grimsey, a Common redshank (Tringa totanus), presumably the T. t. robusta subspecies (which, according to Wikipedia, breeds in Iceland).
We visited the tiny island of Vigur, which is a habitat for Common Eider ducks (Somateria mollissima). As the photo shows, they are strongly sexually dimorphic. The island is owned by a couple who gather the eider down for use in pillows, quilts, and the like; because there are no predators on the island and the ducks are used to humans wandering about, they are quite tolerant when some of those humans are visiting tourists. They have cute chicks:
Eider duckling:
A European golden plover (Pluvialis apricaria), also from the grasslands of Grimsey:
A Black guillemot (Cepphus grylle) (I think), a species that is widespread in the North Atlantic:
The juveniles are mottled:
A Northern fulmar (Fulmarus glacialis), nesting in the cliffs of Grimsey. (Wikipedia tells me there are both dark and light morphs; this must be the light one):
A snow bunting (Plectrophenax nivalis), the most northerly recorded passerine in the world. I saw this one on the main island of Iceland:
One of the more distinctive seagoing birds seen along the Grimsey cliffs is the Razorbill (Alca torda), the closest living relative of the extinct Great Auk:
But by far the most distinctive seagoing bird is the Atlantic puffin (Fratercula arctica), the iconic bird of the northern Atlantic, whose representations fill tchotchke shops all over:
Their clown-faced makeup is unbelievable!:
Although the majority of the wildlife we saw were birds, there were a few mammals here and there. This grey seal (Halichoerus grypus) seems to be floating quite high in the water; in fact, it’s basking on a barely submerged rock. (This is off the coast of Vigur island; that’s an Eider duck next to it):
And not an example of wildlife, but in honor of our host, I spotted this moggie wandering the streets of Ísafjörður, a tiny town in the northwest (and wildest) region of Iceland:
At a recent event, Tesla showcased the capabilities of its humanoid autonomous robot, Optimus. The demonstration has come under some criticism, however, for not being fully transparent about its nature. We interviewed robotics expert Christian Hubicki on the SGU this week to discuss the details. Here are some of the points I found most interesting.
First, let’s deal with the controversy – to what extent were the robots autonomous, and how transparent was this to the crowd? The first question is easier to answer. There are basically three types of robot control: pre-programmed, autonomous, and teleoperated. Pre-programmed means the robot is following a predetermined set of instructions. If you see a robot dancing, for example, that is often a pre-programmed routine. Autonomous means the robot has internal real-time control. Teleoperated means that a human in a motion-capture suit is controlling the movement of the robot. All three of these types of control have their utility.
These are humanoid robots, and they were able to walk on their own. Robot walking has to be autonomous or pre-programmed; it cannot be teleoperated. This is because balance requires real-time feedback of position and other information to produce the moment-to-moment adjustments that maintain balance. A teleoperator would not have this (at least not with current technology). The Optimus robots walked out, so this was autonomous.
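To make the latency point concrete, here is a minimal Python sketch of a balance feedback loop. The gains, rates, and sensor/actuator names are invented for illustration; this is not Tesla's actual control stack.

```python
# Toy balance loop: a proportional-derivative (PD) controller correcting body tilt.
# All gains, rates, and function names are illustrative placeholders.
import time

KP, KD = 400.0, 40.0        # illustrative PD gains
LOOP_HZ = 500               # humanoid balance loops typically run at hundreds of Hz

def read_imu():
    """Stand-in for an on-board IMU read: returns (tilt_rad, tilt_rate_rad_per_s)."""
    return 0.02, -0.1        # dummy values

def apply_ankle_torque(torque_nm):
    """Stand-in for sending a torque command to the ankle actuators."""
    pass

for _ in range(LOOP_HZ):                      # one second of control, for the demo
    tilt, tilt_rate = read_imu()              # fresh sensor data every 2 ms
    torque = -(KP * tilt + KD * tilt_rate)    # push back against the lean
    apply_ankle_torque(torque)
    time.sleep(1.0 / LOOP_HZ)
    # A remote operator with ~100 ms of round-trip latency would always be
    # about 50 control ticks behind -- far too slow to keep the robot upright.
```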
Once in position, however, the robots began serving and interacting with the humans present. Christian noted that he and other roboticists were able to tell immediately that the upper-body movements of the robots were teleoperated, just by the way they were moving. The verbal interaction also seemed teleoperated, as each robot had a different voice and the responses were immediate and included gesticulations.
Some might say – so what? The engineering of the robots themselves is impressive. They can walk autonomously, and none of them fell over or did anything weird. This much is a fairly impressive demonstration. It is actually quite dangerous to have fully autonomous robots interacting with people; the technology is not quite there yet. Robots are heavy and powerful, and just falling over might cause human injury. Reliability has to be extremely high before we will be comfortable putting fully autonomous robots in human spaces. Making robots lighter and softer is one solution, because they would be less physically dangerous.
But the question for the Optimus demonstration is – how transparent was the teleoperation of the robots? Tesla, apparently, did not explicitly say the robots were operating fully autonomously, nor did any of the robot operators lie when directly asked. But at the same time, the teleoperators were not in view, and Tesla did not go out of its way to point out that the robots were being teleoperated. How big a deal is this? That is a matter of perception.
But Christian pointed out that there is a very specific question at the heart of the demonstration – where is Tesla compared to its competitors in terms of autonomous control? The demonstration, if you did not know there were teleoperators, makes the Optimus seem years ahead of where it really is. It made it seem as if Tesla is ahead of their competition when in fact they may not be.
While Tesla was operating in a bit of a transparency grey-zone, I think the pushback is healthy for the industry. The fact is that robotics demonstrations typically use various methods of making the robots seem more impressive than they are – speeding up videos, hiding teleoperation, only showing successes and not the failures, and glossing over significant limitations. This is OK if you are Disney and your intent is to create an entertaining illusion. This is not OK if you are a robotics company demonstrating the capabilities of your product.
What is happening as a result of the pushback and the exposure of this lack of full transparency is an increasing use of transparency in robotics videos. This, in my opinion, should become standard, and anything less unacceptable. Videos, for example, can be labeled as “autonomous” or “teleoperated,” and can also be labeled if they are being shown at a speed other than 1x. Here is a follow-up video from Tesla where they do just that. However, this video was shot in a controlled environment, we don’t know how many “takes” were required, and the Optimus demonstrates only some of what it did at the event. At live events, if there are teleoperators, they should not be hidden in any way.
This controversy aside, the Optimus is quite impressive just from a hardware point of view. But the real question is – what will be the market and the use of these robots? The application will depend partly on the safety and reliability, and therefore on its autonomous capabilities. Tesla wants their robots to be all-purpose. This is an extremely high bar, and requires significant advances in autonomous control. This is why people are very particular about how transparent Tesla is being about where their autonomous technology is.
The post Tesla Demonstrated its Optimus Robot first appeared on NeuroLogica Blog.
Geneva, Switzerland, is not known for its sunny weather, and seeing the comet here was almost impossible, though I caught some glimpses. I hope many of you have seen it clearly by now. It’s dim enough now that dark skies and binoculars are increasingly essential.
I came here (rather than the clear skies of, say, Morocco, where a comet would be an easier target) to give a talk at the CERN laboratory — the lab that hosts the Large Hadron Collider [LHC], where the particle known as the Higgs boson was discovered twelve years ago. This past week, members of the CMS experiment, one of the two general purpose experiments at the LHC, ran a small, intensive workshop with a lofty goal: to record vastly more information from the LHC’s collisions than anyone would have thought possible when the LHC first turned on fifteen years ago.
The flood of LHC data is hard to wrap one’s head around. At CMS, as at the ATLAS and LHCb experiments, bunches of protons pass through each other roughly 40 million times per second (once every 25 billionths of a second). In each of these “bunch crossings”, dozens of proton-proton collisions happen simultaneously. As the debris from the collisions moves into and through the CMS experiment, many detailed measurements are made, generating roughly a megabyte of data even with significant data compression. If that were all recorded, it would translate to many terabytes produced per second, and hundreds of millions of terabytes per year. That’s well beyond what CMS can store, manage and process. ATLAS faces the same issues, and LHCb faces their own version.
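Here is the back-of-envelope arithmetic behind those numbers, in Python. The per-crossing data size and the yearly running time are round, assumed values, not official CMS figures.

```python
# Rough data-rate arithmetic for the paragraph above; all inputs are round numbers.
crossings_per_second  = 40e6     # bunch crossings at roughly 40 MHz
bytes_per_crossing    = 1e6      # ~1 MB per crossing after compression (approximate)
live_seconds_per_year = 1e7      # very rough LHC running time per year (assumed)

rate_tb_per_s = crossings_per_second * bytes_per_crossing / 1e12
tb_per_year   = rate_tb_per_s * live_seconds_per_year

print(f"{rate_tb_per_s:.0f} TB/s")     # ~40 TB/s if everything were kept
print(f"{tb_per_year:.1e} TB/year")    # ~4e8 TB/year: hundreds of millions of TB
```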
So what’s to be done? There’s only one option: throw most of that data away in the smartest way possible, and ensure that the data retained is processed and stored efficiently.
Data Overload and the Trigger

The automated system that has the job of selecting which data to throw away and which to keep is called the “trigger”; I wrote an extended article about it back in 2011. The trigger has to make a split-second judgment, based on limited information. It is meant to narrow a huge amount of data down to something manageable. It has to be thoughtfully designed and carefully monitored. But it isn’t going to be perfect.
Originally, at ATLAS and CMS, the trigger was a “yes/no” data processor. If “yes”, the data collected by the experiment during a bunch crossing was stored; otherwise it was fully discarded.
A natural if naive idea would be to do something more nuanced than this yes/no decision making. Instead of a strict “no” leading to total loss of all information about a bunch crossing, one could store a sketch of the information — perhaps a highly compressed version of the data from the detector, something that occupies a few kilobytes instead of a megabyte.
After all, the trigger, in order to make its decision, has to look at each bunch crossing in a quick and rough way, and figure out, as best it can, what particles may have been produced, where they went and how much energy they have. Why not store the crude information that it produces as it makes its decision? At worst, one would learn more about what the trigger is throwing away. At best, one might even be able to make a measurement or a discovery in data that was previously being lost.
It’s a good idea, but any such plan has costs in hardware, data storage and person-hours, and so it needs a strong justification. For example, if one just wants to check that the trigger is working properly, one could do what I just described using only a randomly-selected handful of bunch crossings per second. That sort of monitoring system would be cheap. (The experiments actually do something smarter than that [called “prescaled triggers”.])
Only if one were really bold would one suggest that the trigger’s crude information be stored for every single bunch crossing, in hopes that it could actually be used for scientific research. This would be tantamount to treating the trigger system as an automated physicist, a competent assistant whose preliminary analysis could later be put to use by human physicists.
Data “Scouting” a.k.a. Trigger-Level Analysis

More than ten years ago, some of the physicists at CMS became quite bold indeed, and proposed to do this for a certain fraction of the data produced by the trigger. They faced strong counter-arguments.
The problem, many claimed, is that the trigger is not a good enough physicist, and the information that it produces is too corrupted to be useful in scientific data analysis. From such a perspective, using this information in one’s scientific research would be akin to choosing a life-partner based on a dating profile. The trigger’s crude measurements would lead to all sorts of problems. They could hide a new phenomenon, or worse, create an artifact that would be mistaken for a new physical phenomenon. Any research done using this data, therefore, would never be taken seriously by the scientific community.
Nevertheless, the bold CMS physicists were eventually given the opportunity to give this a try, starting in 2011. This was the birth of “data scouting” — or, as the ATLAS experiment prefers to call it, “trigger-object-level analysis”, where “trigger-object” means “a particle or jet identified by the trigger system.”
The Two-Stage Trigger

In my description of the trigger, I’ve been oversimplifying. In each experiment, the trigger works in stages.
At CMS, the “Level-1 trigger” (L1T) is the swipe-left-or-right step of a 21st-century dating app; using a small fraction of the data from a bunch crossing, and taking an extremely fast glance at it using programmable hardware, it makes the decision as to whether to discard it or take a closer look.
The “High-Level Trigger” (HLT) is the read-the-dating-profile step. All the data from the bunch crossing is downloaded from the experiment, the particles in the debris of the proton-proton collision are identified to the extent possible, software examines the collection of particles from a variety of perspectives, and a rapid but more informed decision is made as to whether to discard or store the data from this bunch crossing.
The new strategy implemented by CMS in 2011 (as I described in more detail here) was to store more data using two pipelines; see Figure 1.
Effectively, the scouting pipeline uses the HLT trigger’s own data analysis to compress the full data from the bunch crossing down to a much smaller size, which makes storing it affordable.
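A minimal sketch of the two-stage cascade and its scouting branch, in Python; the thresholds, rates, and record sizes in the comments are illustrative round numbers, not official CMS settings.

```python
# Sketch of the Level-1 -> High-Level Trigger cascade with a scouting branch.
# Thresholds, rates, and sizes are illustrative, not real CMS configuration.

def level1_accept(coarse_info) -> bool:
    """Fast hardware decision on a small slice of the data (~40 MHz in, ~100 kHz out)."""
    return coarse_info["summed_energy"] > 100.0      # invented threshold

def hlt_accept(full_event) -> bool:
    """Software decision on the fully read-out event (~100 kHz in, ~1 kHz stored in full)."""
    return len(full_event["jets"]) >= 2              # invented criterion

def process_crossing(coarse_info, read_out_full_event):
    if not level1_accept(coarse_info):
        return None                                  # discarded at Level-1
    event = read_out_full_event()                    # ~1 MB of detector data
    if hlt_accept(event):
        return {"kind": "full", "data": event}       # keep the whole crossing
    # Scouting pipeline: keep only the HLT's own reconstructed objects,
    # a few kilobytes, instead of throwing the crossing away entirely.
    return {"kind": "scouting", "data": event["hlt_objects"]}
```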
Being bold paid off. It turned out that the HLT output could indeed be used for scientific research. Based on this early success, the HLT scouting program was expanded for the 2015-2018 run of the LHC (Figure 2), and has been expanded yet again for the current run, which began in 2023. At present, sketchy information is being kept for a significant fraction of the bunch crossings for which the Level-1 trigger says “yes” but the High-Level Trigger says “no”.
After CMS demonstrated this approach could work, ATLAS developed a parallel program. Separately, the LHCb experiment, which works somewhat differently, has introduced their own methods; but that’s a story for another day.
Dropping Down a Level

Seeing this, it’s natural to ask: if scouting works for the bunch crossings where the high-level trigger swipes left, might it work even when the level-1 trigger swipes left? A reasonable person might well think this is going too far. The information produced by the level-1 trigger as it makes its decision is far more limited and crude than that produced by the HLT, and so one could hardly imagine that anything useful could be done with it.
But that’s what people said the last time, and so the bold are again taking the risk of being called foolhardy. And they are breathtakingly courageous. Trying to do this “level-1 scouting” is frighteningly hard for numerous reasons, among them the following:
So what comes out of the level-1 trigger “no” votes is a gigantic amount of very sketchy information. Having more data is good when the data is high quality. Here, however, we are talking about an immense but relatively low-quality data set. There’s a risk of “garbage in, garbage out.”
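To get a rough sense of “gigantic,” here is a hedged estimate in Python; the size of a level-1 summary record is a pure assumption, chosen only to illustrate the scale.

```python
# Rough scale of level-1 scouting, assuming (purely for illustration) that the
# level-1 trigger's objects for one crossing can be packed into ~100 bytes.
crossings_per_second = 40e6          # every crossing, not just the accepted ones
bytes_per_crossing   = 100           # assumed size of a level-1 summary record

gb_per_second = crossings_per_second * bytes_per_crossing / 1e9
print(f"~{gb_per_second:.0f} GB/s of low-resolution summaries")   # ~4 GB/s
# Tiny compared with the ~40 TB/s of raw data, but still enormous to store and
# analyze -- and every record is crude, hence the garbage-in/garbage-out worry.
```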
Nevertheless, this “level-1 scouting” is already underway at CMS, as of last year, and attempts are being made to use it and improve it. These are early days, and only a few new measurements with the data from the current run, which lasts through 2026, are likely. But starting in 2029, when the upgraded LHC begins to produce data at an even higher rate — with the same number of bunch crossings, but four to five times as many proton-proton collisions per crossing — the upgraded level-1 trigger will then have access to a portion of the tracker’s data, allowing it to reconstruct particle tracks. Along with other improvements to the trigger and the detector, this will greatly enhance the depth and quality of the information produced by the level-1 trigger system, with the potential to make level-1 scouting much more valuable.
And so there are obvious questions, as we look ahead to 2029:
My task, in the run-up to this workshop, was to prepare a talk addressing the second question, which required me to understand, as best I could, the answer to the first. Unfortunately the questions are circular. Only with the answer to the second question is it clear how best to approach the first one, because the decision about how much to spend in personnel-time, technical resources and money depends on how much physics one can potentially learn from that expenditure. And so the only thing I could do in my talk was make tentative suggestions, hoping thereby to start a conversation between experimenters and theorists that will continue for some time to come.
Will an effort to store all this information actually lead to measurements and searches that can’t be done any other way? It seems likely that the answer is “yes”, though it’s not yet clear if the answer is “yes — many”. But I’m sure the effort will be useful. At worst, the experimenters will find new ways to exploit the level-1 trigger system, leading to improvements in standard triggering and high-level scouting, and allowing the retention of new classes of potentially interesting data. The result will be new opportunities for LHC data to teach us about unexpected phenomena both within and potentially beyond the Standard Model.
Ever since recombinant DNA has been used to develop and manufacture vaccines, antivaxxers have portrayed it as evil. This weekend, an antivaxxer decided that fear mongering about SV40 in COVID-19 vaccines wasn't enough. Here we go again...
The post “And we’d better not risk another frontal assault. That plasmid’s dynamite.” Antivaxxers vs. plasmid DNA first appeared on Science-Based Medicine.

Here’s Bill Maher’s monologue from his most recent Real Time show, arguing that voters should not expect an “October surprise”. He argues that because Trump has been so persistently awful in familiar ways, there will be no change in his character before the election (remember that it’s just about two weeks away). He’s in five lawsuits, there’s all of his awful treatment of women, and he keeps doing bizarre things. None of this has markedly helped or hurt his polling numbers. So. . . no surprise with Trump. (There are some funny asides, though.)
Thus, he urges Democrats and liberals not to put any stock in something bad happening that will knock Trump out of the race, dismissing several possibilities (see 4:15). He adds this:
“This is Kamala’s great dilemma: Trump is invulnerable to an October surprise, but she is very vulnerable, because she is the one who is still undefined. And as she showed in this week’s Bret Baier interview, her go-to when attacked for her own actions is usually ‘Trump is worse’. Okay, we know that, but now undecided voters want to hear about you. They want someone to vote for. . . the voters’ big doubt about Kamala is ‘Are you part of far-left insanity?’”
I saw the Fox interview, and watched Harris bob and weave rather than specify the positions she holds, especially ones that differ from Biden’s. (He shows a video.) Harris cannot argue both that she is not Joe Biden and that she will not have the same policies as Biden, and then refuse to tell us what those policies are.
Maher then (9:40) recites an answer that Harris could have given in response to a question about the immigration system but didn’t (she waffled). That answer, says Maher, would help her (he’s pro-Harris). But is admitting that something could be improved over what it was really going to help her? After all, she wants to be unburdened by the past.
Up to now, the oldest rocks known to contain living bacteria—microorganisms that have been alive since the rocks were formed—were sediments from about 100 million years ago. Now, a group of researchers from South Africa, Japan, and Germany report finding living bacteria in rocks 20 times older than that: over two billion years old. And those bacteria were alive, and presumably dividing.
This finding, published in Microbial Ecology, suggests that if there was once life on Mars, one might be able to find its remnants by examining rock samples the way these researchers did.
The paper can be accessed by clicking on the screenshot below. You can also find a pdf here and a short New Scientist article about the discovery here.
The details: the researchers drilled into 2-billion-year-old igneous “mafic rocks” in the Bushveld Igneous Complex of South Africa, described by Wikipedia as “the largest layered igneous intrusion within the Earth’s crust“. Drilling down 15 meters, using a special drilling fluid to lubricate and cool the drill bit, they extracted a 30-cm (about 12-inch) core of rock with a diameter of 8.5 cm (3.3 inches). They then carefully cut into this core, making sure not to contaminate it with modern bacteria.
Here’s a photo of part of the Bushveld intrusion showing the igneous rock (see caption for details):
(From Wikipedia): Chromitite (black) and anorthosite (light grey) layered igneous rocks in Critical Zone UG1 of the Bushveld Igneous Complex at the Mononono River outcrop, near Steelpoort. Photo: kevinzim / Kevin Walsh, CC BY 2.0, via Wikimedia Commons

Remember that igneous rock is formed when other rock is melted by extreme heat and then cooled. As this rock cooled, cracks formed in it that were filled with clay during the process, and, when the rock was solid, the clay was impervious to further intrusions. In other words, the clay in the rock cracks was 2 billion years old. But were the clay and its inhabitant bacteria that old? (See below.)
What they found. To test whether what they saw in the cracks (bacteria!) were really original, 2-billion-year-old bacteria rather than organisms that had entered the rock after formation or were contaminants from the drilling or handling, the authors added tiny fluorescent microspheres, smaller than bacteria, to the drilling fluid. Tests showed that although the microspheres were visible in the fluid sample, they were not seen within the rock (of course the researchers took great care not to contaminate the rock either during extraction or when it was cut and examined). Here is their schematic of how the cores were extracted and handled (figure from the paper). Note the flaming to kill anything living on the outside of the core (click all figures and photos to enlarge them):
Here is a fluorescent sample of drilling fluid (on the left), showing many microspheres, and a sample of the rock showing DNA-stained bacteria on the right, which appear as green rods. The scale is the same, so you can see that the microspheres are smaller than the bacteria:
(from paper): Microscopic inspection of the drill fluid sample. (A) 1000-fold magnification images of fluorescent microspheres and (B) microbial cells stained by SYBR Green I

The presence of living organisms (at one time) in the cracks was also confirmed by finding “amides I and II,” which, say the authors, “are diagnostic for proteins in microbial cells.” The New Scientist piece adds that the cell walls of the bacteria (if they are indeed “bacteria”!) were intact, which, says author Chen Ly, is “a sign that the cells were alive and active”.
What did the bacteria eat? The paper’s authors say that “indigenous microbes are immobile and survive in the veins by metabolizing inorganic and/or organic energy available around clay minerals.” They do add that there is doubt about the ages of the clay cracks, as they might actually have been formed much more recently than two billion years. Both the paper and the NS blurb are careful not to say that the bacteria have actually been in the rocks for two billion years, but that seems to be the tacit assumption.
Here are two photos from the paper of one of the bacteria-containing cracks. The color indicates, say the authors, spectra from silicate minerals and microbial cells:
The upshot and implications: These are by far the oldest rocks ever seen to contain indigenous (rather than externally derived) living organisms, presumably bacteria. It’s not 100% clear that the organisms are themselves 2 billion years old, but the assumption here is that they are. New Scientist floats the idea that we should do this kind of analysis to look for life on other planets, most notably Mars:
This discovery may also have important implications for the search for life on other planets. “The rocks in the Bushveld Igneous Complex are very similar to Martian rocks, especially in terms of age,” says Suzuki, so it is possible that microorganisms could be persisting beneath the surface of Mars. He believes that applying the same technique to differentiate between contaminant and indigenous microbes in Martian rock samples could help detect life on the Red Planet.
But they quote one critic who asks the same questions I do above, and insists that the bacteria aren’t as old as the rocks. (For one thing, bacteria couldn’t survive in an igneous rock when it was very hot during formation.)
“This study adds to the view that the deep subsurface is an important environment for microbial life,” says Manuel Reinhardt at the University of Göttingen, Germany. “But the microorganisms themselves are not 2 billion years old. They colonised the rocks after formation of cracks; the timing still needs to be investigated.”
Questions that remain:
1.) Are the bacteria themselves two billion years old? I’m not sure how they would investigate this if the clay could have entered the rock and then been sealed into the cracks a long time after the igneous rock was formed.
2.) If the bacteria are that old, were they dividing during that period? I don’t see any mention of dividing cells, and the authors say that the cells were effectively trapped in the clay. If so, could they still divide, or are we seeing the original bacteria, perhaps two billion years old and still kicking? This raises another question:
3.) Were the bacteria “alive” during this period? If they were really metabolizing over this period, then yes, they were alive. But if their metabolism was completely shut down, what do we mean by saying they were alive? The NS piece says that the presence of cell walls means that the bacteria were “alive and active”, but is that really true?
4.) Finally, if these things had stainable DNA, can it be sequenced? It would be interesting to get the DNA sequences of these bacteria, which they’d presumably have to do by culturing them. Although we now have methods to get the DNA sequence of a single bacterium by sequencing its RNA transcripts (see this report), you’d have to pry the bacteria out of the clay to do that. And if you can get the sequence, does it resemble that of any living bacteria, or are these ancient forms very different from today’s microbes? (If they do resemble modern bacteria—for evolution would be very slow when cell division takes millions of years—then perhaps we could culture them.)
The biggest question, of course, is #1 above. I’m hoping that these things really are two billion years old, for what we’d then have is a very, very ancient bacterial culture. But I’m very dubious that we’ll find bacteria in Martian rocks.
h/t: Matthew Cobb, for alerting me to the relevant tweet.
Today we have another batch of Hawaiian bird photos (part 3 of 4) taken by biologist John Avise. John’s captions are indented, and you can enlarge his photos by clicking on them.
Birds in Hawaii, Part 3
This week we again continue our photographic journey into native and introduced bird species that might be seen on a natural-history tour of the Hawaiian Islands.
Mallard (Anas platyrhynchos) (native to temperate North America and Eurasia, but introduced widely around the world), hen with duckling:
Laysan Duck pair (Anas laysanensis)(endemic to the Hawaiian Islands):
Northern Cardinal (Cardinalis cardinalis) (native to North America), male:
Northern Mockingbird (Mimus polyglottos) (native to North America):
Scaly-breasted Munia (Lonchura punctulata) (native to tropical Asia):
Pacific Golden Plover (Pluvialis fulva) (breeds in Alaska and Siberia, seen here on migration):
Pacific Golden Plover flying:
Red Junglefowl male (Gallus gallus) (native to South Asia, but domesticated and widely introduced):
Red Junglefowl hen:
Red Junglefowl chick:
Red-billed Tropicbird (Phaethon aethereus) (widespread in tropical oceans):
Red-billed Tropicbird flying:
I am certain I have posted both of these songs before, but I was just listening to “Big Love” by Lindsey Buckingham, performed by him with Fleetwood Mac on The Dance tour and album, and I thought I would pair it with what I see as the best solo performance by his erstwhile bandmate and partner Stevie Nicks. Both wrote their songs and both sing them here solo.
Apparently Nicks was doing a photo session for Rolling Stone in 1981, and the soundtrack for “Wild Heart” was playing in the background as she was made up. She began an impromptu version of the song, which is a gazillion times better than the recorded version. Her sister-in-law Lori Perry-Nicks comes in on harmony. Nicks could not stop herself from singing.
From Wikipedia:
The video was recorded during a Rolling Stone photo shoot in 1981. It starts with Nicks singing a rendition of “Love in Store“, a song by Fleetwood Mac’s Christine McVie. The video ends with a version of McVie’s “Wish You Were Here”. The video has been viewed over a million times on YouTube. The backing music was written by Lindsey Buckingham and comes from a demo that can also be found on YouTube. It can also be found on the “Deluxe” 2016 reissue of Fleetwood Mac’s Mirage album, as a track titled “Suma’s Walk”.
This is one of the best performances from a great singer and may be the best impromptu rock solo I know of.
And Buckingham, underrated as a guitarist, produces a lot of sound. He won the trifecta of musicianship: a great singer, a great songwriter, and a great instrumentalist.
Happy Saturday.
Well, I decided to add this one, too: the best song featuring just the two of them. Written by Nicks, it mesmerized me the first time I heard it. How callous of Rolling Stone to say this about it (from Wikipedia):
In a contemporary review, Rolling Stone wrote that Nicks seemed “lost and out of place” on “Landslide” and that her voice sounded “callow and mannered.”
If ever a musical judgment was wrong, it was this one.
As he so often did, Buckingham performed on an acoustic guitar without a pick, just using his fingers.
In this decade and the next, multiple space agencies will send crewed missions to the Moon for the first time since the Apollo Era. These missions will culminate in the creation of permanent lunar infrastructure, including habitats, built using local resources – aka in-situ resource utilization (ISRU). This will include lunar regolith, which robots equipped with additive manufacturing (3D printing) will use to fashion building materials. These operations will leverage advances in teleoperation, where controllers on Earth will remotely operate robots on the lunar surface.
According to new research by scientists at the University of Bristol, the technology is one step closer to realization. Through a virtual simulation, the team completed a sample collection task and sent commands to a robot that mimicked the simulation’s actions in real life. Meanwhile, the team monitored the simulation without requiring live camera streams, which are subject to a communications lag on the Moon. This project effectively demonstrates that the team’s method is well-suited for teleoperations on the lunar surface.
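The general approach the team demonstrated, sometimes called model-mediated teleoperation, can be sketched in a few lines of Python. Everything here, including the function names and the delay value, is an invented placeholder, not the Bristol team's software.

```python
# Sketch of "monitor the local simulation, not the delayed video feed."
# All names and numbers are invented placeholders.
import queue, time

ROUNDTRIP_DELAY_S = 10          # Earth-Moon teleoperation round trip, roughly 5-14 s

def simulate_step(sim_state, command):
    """Advance the local regolith-scooping simulation with the operator's command."""
    sim_state["scoop_angle"] += command       # trivial stand-in for real physics
    return sim_state

def main():
    sim_state = {"scoop_angle": 0.0}
    uplink = queue.Queue()                    # commands en route to the robot

    for command in [0.1, 0.1, -0.05]:         # operator inputs
        # 1. Apply each command to the local simulation immediately...
        sim_state = simulate_step(sim_state, command)
        print("operator sees simulated scoop at", round(sim_state["scoop_angle"], 2))
        # 2. ...and forward it to the real robot, which executes it ~10 s later.
        uplink.put((time.time() + ROUNDTRIP_DELAY_S, command))
    # The operator never waits for delayed camera frames; the distant robot
    # simply replays the same command sequence once it arrives.

if __name__ == "__main__":
    main()
```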
As part of NASA’s Artemis Program, the ESA’s Moon Village, and the Chinese Lunar Exploration Program (Chang’e), space agencies, research institutes, and commercial space companies are researching how to extract valuable resources from lunar regolith (aka. moon dust). These include water and oxygen, which can be used to provide for astronauts’ basic needs and create liquid hydrogen and oxygen propellant. Remote handling of regolith will be essential to these activities since moon dust is abrasive, electrostatically charged, and difficult to handle.
The teleoperated robot used by the research team from the University of Bristol (1 of 2). Credit: Joe Louca

The team comprised researchers from the University of Bristol’s School of Engineering Mathematics and Technology, who carried out the experiment at the European Space Agency’s European Centre for Space Applications and Telecommunications (ESA-ECSAT) in Harwell, UK. The study describing their experiment was presented at the 2024 International Conference on Intelligent Robots and Systems (IROS 2024) in Dubai and was published in the research journal run by the Institute of Electrical and Electronics Engineers (IEEE).
As lead author Joe Louca, a Doctor of Philosophy at Bristol’s School of Engineering Mathematics and Technology, explained:
“One option could be to have astronauts use this simulation to prepare for upcoming lunar exploration missions. We can adjust how strong gravity is in this model, and provide haptic feedback, so we could give astronauts a sense of how Moon dust would feel and behave in lunar conditions – which has a sixth of the gravitational pull of the Earth’s. This simulation could also help us to operate lunar robots remotely from Earth, avoiding the problem of signal delays.”
The virtual model the team created could also reduce the costs of developing lunar robots for institutes and companies researching the technology. Traditionally, experiments involving lunar construction have required the creation of simulants with the same properties as regolith, along with access to advanced facilities. Instead, developers can use this simulation to conduct initial tests of their systems without incurring those expenses.
The teleoperated robot used by the research team from the University of Bristol (2 of 2). Credit: Joe Louca

Looking ahead, the team plans to investigate the potential non-technical barriers to this technology. This will include how people interact with the system when communications suffer a round-trip delay of 5 to 14 seconds. This is the delay expected for the Artemis missions, longer than the roughly 3-second delay experienced by the Apollo missions, because of added latency in the Deep Space Network (DSN). Said Louca:
“The model predicted the outcome of a regolith simulant scooping task with sufficient accuracy to be considered effective and trustworthy 100% and 92.5% of the time. In the next decade, we’re going to see several crewed and uncrewed missions to the Moon, such as NASA’s Artemis program and China’s Chang’e program. This simulation could be a valuable tool to support preparation or operation for these missions.”
Further Reading: University of Bristol
The post New Simulation Will Help Future Missions Collect Moon Dust appeared first on Universe Today.
Neal Stephenson is the #1 New York Times bestselling author of the novels Termination Shock, Fall; or, Dodge in Hell, Seveneves, Reamde, Anathem, The System of the World, The Confusion, Quicksilver, Cryptonomicon, The Diamond Age, Snow Crash, and Zodiac, and the groundbreaking nonfiction work In the Beginning … Was the Command Line. He is also the coauthor, with Nicole Galland, of The Rise and Fall of D.O.D.O. His works of speculative fiction have been variously categorized as science fiction, historical fiction, maximalism, cyberpunk, and post-cyberpunk. In his fiction, he explores fields such as mathematics, cryptography, philosophy, currency, and the history of science. Born in Fort Meade, Maryland (home of the NSA and the National Cryptologic Museum), Stephenson comes from a family comprising engineers and hard scientists he dubs “propeller heads.” He holds a degree in geography and physics from Boston University, where he spent a great deal of time on the university mainframe. He lives in Seattle, Washington. As The Atlantic has recently observed, “Perhaps no writer has been more clairvoyant about our current technological age than Neal Stephenson. His novels coined the term metaverse, laid the conceptual groundwork for cryptocurrency, and imagined a geoengineered planet. And nearly three decades before the release of ChatGPT, he presaged the current AI revolution.” His new novel is Polostan, the first installment in his Bomb Light cycle.
Shermer and Stephenson discuss:
If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.
For most of human history, the Sun appeared stable. It was a stoic stellar presence, going about its business fusing hydrogen into helium beyond our awareness and helping Earth remain habitable. But in our modern technological age, that facade fell away.
We now know that the Sun is governed by its powerful magnetic fields, and as these fields cycle through their changes, the Sun becomes more active. Right now, according to NASA, the Sun is at its solar maximum, a time of increased activity.
Solar Maximum means pretty much what it sounds like. In this phase of the cycle, our star is exhibiting maximum activity. The Sun’s intense magnetic fields produce more sunspots and solar flares than at any other time in its 11-year cycle.
The Solar Maximum is all based on the Sun’s magnetic fields. These fields are measured in Gauss units, which describe magnetic flux density. The Sun’s poles measure about 1 to 2 gauss, but sunspots are much higher at about 3,000 gauss. (Earth is only 0.25 to 0.65 gauss at its surface.) Since the magnetic field is so much stronger where sunspots appear, they inhibit convective heating from deeper inside the Sun. As a result, sunspots appear as dark patches.
Sunspots are visual indicators of the Sun’s 11-year cycle. The National Oceanic and Atmospheric Administration and an international group called the Solar Cycle Prediction Panel watch sunspots to understand where the Sun is in its cycle.
“During solar maximum, the number of sunspots, and therefore, the amount of solar activity, increases,” said Jamie Favors, director of the Space Weather Program at NASA Headquarters in Washington. “This increase in activity provides an exciting opportunity to learn about our closest star — but also causes real effects at Earth and throughout our solar system.”
The effects came into focus for many of us recently. In May 2024, the Sun launched multiple CMEs. As the magnetic fields and charged particles reached Earth, they triggered the strongest geomagnetic storm in two decades. The storm created colourful aurorae that were visible much farther from the poles than usual. NASA says that these aurorae were likely among the strongest displays in the last 500 years.
Scientists know the Sun is at its solar maximum, but the maximum phase lasts for about a year. They won’t know when activity actually peaked until they’ve watched the Sun for months afterward and seen its activity decline.
“This announcement doesn’t mean that this is the peak of solar activity we’ll see this solar cycle,” said Elsayed Talaat, director of space weather operations at NOAA. “While the Sun has reached the solar maximum period, the month that solar activity peaks on the Sun will not be identified for months or years.”
Each cycle is different, making it difficult to label peak solar activity. Cycles have different durations, and some have higher or lower peaks than others.
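The peak is typically identified from a smoothed monthly sunspot number (a roughly 13-month running mean is the usual convention), which is why it can only be declared in hindsight. Here is a minimal Python sketch using made-up numbers.

```python
# Why the peak is only known in hindsight: smooth the monthly sunspot counts
# with a ~13-month running mean, then look for the turnover.  Data are fake.
monthly_sunspots = [95, 110, 120, 130, 125, 140, 155, 150, 160, 145,
                    150, 135, 140, 130, 125, 120, 110, 105]

def smoothed(series, window=13):
    half = window // 2
    return [sum(series[i - half:i + half + 1]) / window
            for i in range(half, len(series) - half)]

smooth = smoothed(monthly_sunspots)
peak = max(range(len(smooth)), key=smooth.__getitem__)
print("smoothed peak at month index", peak + 6)   # offset by the half-window
# The most recent ~6 months can't be smoothed yet, so the true maximum can
# only be declared well after the cycle has turned over.
```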
Understanding the Sun’s cycle is important because it creates space weather. During solar maximum, the increased sunspots and flares also mean more coronal mass ejections (CMEs). CMEs can strike Earth, and when they do, they can trigger aurorae and cause geomagnetic storms. CMEs, which are blobs of hot plasma, can also affect satellites, communications, and even electrical grids.
NASA’s Solar Dynamics Observatory captured these images of solar flares below, as seen in the bright flashes in the left image (May 8, 2024 flare) and the right image (May 7, 2024 flare). The image shows 131 angstrom light, a subset of extreme ultraviolet light that highlights the extremely hot material in flares and which is colourized in orange.
During the solar maximum, the Sun produces an average of three CMEs every day, while during solar minimum the rate drops to about one CME every five days. The CMEs’ effect on satellites causes the most concern. In 2003, satellites experienced 70 different failures, ranging from erroneous signals in a satellite’s electronics to the destruction of electrical components. The solar storm that occurred in 2003 was deemed responsible for 46 of those 70 failures.
CMEs are also a hazard for astronauts orbiting Earth. The increased radiation poses a health risk, and during storms, astronauts seek safety in the most shielded part of the ISS, Russia’s Zvezda Service Module.
Galileo and other astronomers noticed sunspots hundreds of years ago but didn’t know exactly what they were. In a 1612 pamphlet titled “Letters on Sunspots,” Galileo wrote ‘The sun, turning on its axis, carries them around without necessarily showing us the same spots, or in the same order, or having the same shape.’ This contrasted with others’ views on the spots, some of which suggested they were natural satellites of the Sun.
We’ve known about the Sun’s magnetic fields for about two hundred years, though at first scientists didn’t know the magnetism was coming from the Sun. In 1724, an English geophysicist noticed that his compass was behaving strangely and was deflected from magnetic north throughout the day. In 1882, other scientists correlated these magnetic effects with increased sunspots.
In recent decades, we’ve learned much more about our stellar companion thanks to spacecraft dedicated to studying it. NASA and the ESA launched the Solar and Heliospheric Observatory (SOHO) in 1995, and NASA launched the Solar Dynamics Observatory (SDO) in 2010. In 2011, we got our first 360-degree view of the Sun thanks to NASA’s two Solar TErrestrial RElations Observatory (STEREO) spacecraft. In 2018, NASA launched the Parker Solar Probe, which also happens to be humanity’s fastest spacecraft.
Our understanding of the Sun and its cycles is far more complete now. The current cycle, Cycle 25, is the 25th one since 1755.
This figure shows the number of sunspots over the previous twenty-four solar cycles. Scientists use sunspots to track solar cycle progress; the dark spots are associated with solar activity, often as the origins for giant explosions—such as solar flares or coronal mass ejections—that can spew light, energy, and solar material out into space. Image Credit: NOAA’s Space Weather Prediction Center

“Solar Cycle 25 sunspot activity has slightly exceeded expectations,” said Lisa Upton, co-chair of the Solar Cycle Prediction Panel and lead scientist at Southwest Research Institute in San Antonio, Texas. “However, despite seeing a few large storms, they aren’t larger than what we might expect during the maximum phase of the cycle.”
The most powerful flare so far in Cycle 25 was on October 3rd, when the Sun emitted an X9 class flare. But scientists anticipate more flares and activity to come. There can be significantly powerful storms even in the cycle’s declining phase, though they’re not as common.
On October 3, 2024, the Sun emitted a strong solar flare. As of this date, this solar flare is the largest of Solar Cycle 25 and is classified as an X9.0 flare. X-class denotes the most intense flares, while the number provides more information about its strength. NASA’s Solar Dynamics Observatory captured imagery of this solar flare – as seen in the bright flash in the center – on October 3, 2024. The image shows a blend of 171 Angstrom and 131 Angstrom light, subsets of extreme ultraviolet light.

The Sun’s 11-year cycle is just one of its cycles, nested in larger cycles. The Gleissberg cycle lasts between 80 and 90 years and modulates the 11-year cycle. The de Vries cycle, or Suess cycle, lasts between 200 and 210 years, and the Hallstatt cycle lasts about 2,300 years. Both of these longer cycles contribute to long-term solar variation.
However, even with all we know about the Sun, there are big gaps in our knowledge. The Sun’s magnetic poles switch during the 11-year cycle, and scientists aren’t sure why.
There’s a lot more to learn about the Sun, but we won’t run out of time to study it any time soon. It’s in the middle of its 10-billion-year lifetime and will be a main-sequence star for another five billion years.
The post The Sun Has Reached Its Solar Maximum and it Could Last for One Year appeared first on Universe Today.
On July 1st, 2023 (Canada Day!), the ESA’s Euclid mission lifted off from Cape Canaveral, Florida, atop a SpaceX Falcon 9 rocket. As part of the ESA’s Cosmic Vision Programme, the purpose of this medium-class mission was to observe the “Dark Universe.” This will consist of observing billions of galaxies up to 10 billion light-years away to create the most extensive 3D map of the Universe ever created. This map will allow astronomers and cosmologists to trace the evolution of the cosmos, helping to resolve the mysteries of Dark Matter and Dark Energy.
The first images captured by Euclid were released by the ESA in November 2023 and May 2024, which provided a glimpse at their quality. On October 15th, 2024, the first piece of Euclid‘s great map of the Universe was revealed at the International Astronautical Congress (IAC) in Milan. This 208-gigapixel mosaic contains 260 observations made between March 25th and April 8th, 2024, and provides detailed imagery of millions of stars and galaxies. This mosaic accounts for just 1% of the wide survey that Euclid will cover over its six-year mission and provides a sneak peek at what the final map will look like.
The IAC 2024 session, which took place from October 14th – 18th in Milan, was the 75th annual meeting of the Congress. The session welcomed over 8,000 experts from space agencies, the research sector, and the space industry to come together and discuss the use of space to support sustainability. The mosaic, presented by ESA Director General Josef Aschbacher and Director of Science Carole Mundell during the event, contains about 100 million sources, including stars in our Milky Way and galaxies beyond.
The main objective of the Euclid mission is to measure the hidden influence of Dark Matter and Dark Energy on the Universe. These will hopefully resolve questions that astronomers have been dealing with for decades. It all began in the 1960s when astronomers noted that the rotational curves of galaxies did not agree with the observed amounts of matter they contained. This led to speculation that there must be a mysterious, invisible mass that optical telescopes could not account for (aka. Dark Matter).
By the 1990s, thanks to observations made by the venerable Hubble Space Telescope, astronomers also noticed that the rate at which the Universe is expanding (characterized by the Hubble-Lemaître constant) has been accelerating with time. By observing the shapes, distances, and motions of billions of galaxies, Euclid‘s 3D map will provide the most accurate estimates of galactic masses and cosmic expansion over the past 10 billion years. Zooming very deep into the mosaic (see image below), the intricate structure of the Milky Way can be seen, as well as many galaxies beyond.
Another interesting feature is what looks like clouds between the stars in our galaxy, which appear light blue against the background of space. This is the gas and dust of the interstellar medium (ISM), known on a galactic scale as the “galactic cirrus” (because of its resemblance to cirrus clouds). Euclid‘s super-sensitive optical camera—the VISible instrument (VIS), composed of 36 charge-coupled devices (CCDs) with 4000 x 4000 pixels each—can see these clouds as they reflect optical light from the Milky Way. Said Euclid Project Scientist Valeria Pettorino in an ESA press release:
“This stunning image is the first piece of a map that, in six years, will reveal more than one-third of the sky. This is just 1% of the map, and yet it is full of a variety of sources that will help scientists discover new ways to describe the Universe.”
This graphic provides an overview of the mosaic and zoomed-in images released by ESA’s Euclid mission on October 15th, 2024. Credit: ESA/Euclid/Euclid Consortium/NASA/CEA Paris-Saclay/J.-C. Cuillandre, E. Bertin, G. Anselmi

As noted, the mosaic shows only 1% of what Euclid will observe during the course of its six-year mission. In just two weeks, the observatory covered 132 square degrees of the Southern Sky in pristine detail (more than 500 times the area of the full Moon). Since the mission began routine science observations in February, 12% of the survey has been completed. By March 2025, the ESA will release 53 square degrees of the survey, including a preview of the Euclid Deep Field areas. This will be followed by the release of the first year of cosmology data sometime in 2026.
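The quoted numbers hang together with a bit of arithmetic, sketched below in Python; the survey area and the Moon's apparent area are approximate values assumed here, not figures from the ESA release.

```python
# Rough arithmetic behind the survey numbers quoted above (all values approximate).
full_sky_sq_deg  = 41_253      # total sky area in square degrees
survey_sq_deg    = 14_000      # assumed Euclid wide-survey area, about 1/3 of the sky
mosaic_sq_deg    = 132         # area covered by this first mosaic
full_moon_sq_deg = 0.2         # approximate apparent area of the full Moon

print(f"survey fraction of sky:    {survey_sq_deg / full_sky_sq_deg:.0%}")   # ~34%
print(f"mosaic fraction of survey: {mosaic_sq_deg / survey_sq_deg:.1%}")     # ~0.9%, i.e. ~1%
print(f"mosaic in full Moons:      {mosaic_sq_deg / full_moon_sq_deg:.0f}")  # ~660
```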
The Euclid Consortium (EC) consists of more than 2000 scientists from 300 institutes in Europe, the USA, Canada, and Japan and is responsible for providing the mission’s instruments and data analysis.
Further Reading: ESA
The post Check Out This Sneak Peek of the Euclid mission’s Cosmic Atlas appeared first on Universe Today.
It’s Friday, and you may have noticed that I haven’t done a lot of braining lately and have put up virtually no science posts. That’s because I am going through another bout of insomnia (it’s now five nights since I had a decent sleep), and it’s hard to concentrate on anything. So bear with me; I’m doing my best. Instead of something intellectual, science-y, or literary this Friday, have a look at the world’s longest truck.
It’s in Australia, of course, where there are long stretches of straight road that can be navigated by “road trains”.