Ships passing in the night once communicated by Morse code, flashed with lanterns and shutters. The same basic principle has allowed NASA to communicate with Psyche, its mission to a metal-rich asteroid in the main belt. In this case, though, the “light” is infrared, the two sides cannot see each other, and Psyche is some 240 million miles from Earth. Oh, and the data rate it achieved still beats the old dial-up internet connections that were prevalent not so long ago.
This feat capped the first phase of NASA’s Deep Space Optical Communications experiment. Psyche is carrying a laser transceiver tuned to a specific frequency of infrared light, which can also be transmitted and received by two ground stations in California. The infrared frequency that mission planners at NASA’s Jet Propulsion Laboratory selected is much higher than the radio frequencies typically used for deep space missions, and in this case, higher frequency also means higher data rate.
As part of its Phase I operations, the experiment transmitted data to and from Psyche at an astonishing 267 megabits per second when the spacecraft was about as far from Earth as Mars is at its closest approach. That is comparable to a typical wired broadband connection back here on Earth. But it was made in space – with lasers.
Video that Psyche sent back to Earth.

In June, Psyche reached a new distance milestone: 390 million km from Earth, equivalent to the farthest separation between Earth and Mars. During this window, operators managed to maintain a 6.25 megabits per second downlink. While that is more than 40 times slower than the maximum rate achieved at the closer distance, it is still orders of magnitude faster than a radio-frequency link with the same power output could manage.
As part of this Phase I test, what else would NASA send from its spacecraft but a cat video—in this case, an ultra-high-definition video of a cat named Taters chasing a red laser pointer for 15 seconds straight. As a proof of concept for a high-speed communication line, most of the internet would agree that this is a good use of bandwidth.
Ultimately, the successful connection in June marked the end of the first phase of testing for the system. The project team confirmed that, as expected, the achievable data rate falls off with the inverse square of the distance between Earth and Psyche. In other words, doubling the distance between the spacecraft and the ground station cuts the data rate to roughly a quarter.
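For readers who want to see that scaling in action, here is a minimal Python sketch. The near distance (roughly 55 million km, about Mars at its closest approach) is an illustrative assumption; the 390 million km figure and the two data rates come from the reported results.

```python
# Minimal sketch of the inverse-square scaling of an optical link's data rate.
# The near distance is an illustrative assumption; the rates are from the article.

rate_near_mbps = 267.0   # demonstrated rate at roughly Mars-at-closest-approach distance
d_near_km = 55e6         # assumed Earth-Mars closest-approach distance
d_far_km = 390e6         # distance of the June test, from the article

# Received signal power, and hence achievable data rate, scales as 1/d^2.
predicted_far_mbps = rate_near_mbps * (d_near_km / d_far_km) ** 2
print(f"Predicted rate at {d_far_km / 1e6:.0f} million km: {predicted_far_mbps:.1f} Mbps")
# Prints about 5 Mbps -- the same ballpark as the 6.25 Mbps the team maintained.
```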
Taters probably didn’t understand how important it was that he catch the laser – but he was trying his best anyway.

A second phase of the experiment will pick up in November, when the laser transceiver is turned back on. That phase will demonstrate that the system can operate for more than a year, and the system will eventually be brought up to full operational mode later in 2024. Psyche is scheduled to arrive at its target asteroid in 2029, so the team will have plenty of time to prepare the system before then. There is also a backup radio-frequency communication system on Psyche in case the laser system fails – and even that is still faster than lanterns and shutters.
Learn More:
NASA JPL – NASA’s Laser Comms Demo Makes Deep Space Record, Completes First Phase
UT – Psyche Gives Us Its First Images of Space
UT – We’re Entering a New Age When Spacecraft Communicate With Lasers
UT – NASA’s Psyche Mission is off to Asteroid Psyche
Lead Image:
NASA’s Psyche spacecraft is depicted receiving a laser signal from the Deep Space Optical Communications uplink ground station at JPL’s Table Mountain Facility in this artist’s concept. The DSOC experiment consists of an uplink and downlink station, plus a flight laser transceiver flying with Psyche. Credit: NASA/JPL-Caltech
The post NASA Achieves Impressive Bandwidth with its New Laser Communications System appeared first on Universe Today.
The massive South Pole-Aitken (SPA) basin is one of the Moon’s dominant features, though it’s not visible from Earth. It’s on the lunar far side, and only visible to spacecraft. It’s one of the largest impact features in the Solar System, and there are many outstanding questions about it. What type of impactor created it? Where did the ejected material end up? Is it feasible or worthwhile to explore it?
But the biggest question could be: how old is it?
The SPA basin is about 2500 km (1600 mi) in diameter and between 6.2 and 8.2 km (3.9–5.1 mi) deep. Research shows that it’s the Moon’s oldest impact basin and likely formed between 4.2 and 4.3 billion years ago. That places it in the most intense period of bombardment in the inner Solar System. But there’s debate about the accuracy of that date. A more precise measurement would help scientists understand the history of the Solar System and the periods of bombardment that helped shape it.
Researchers at the University of Manchester and other institutions tackled the problem of the SPA’s age. Their results are in a paper in Nature Astronomy titled “Evidence of a 4.33 billion year age for the Moon’s South Pole–Aitken basin.” The lead author is Professor Katherine Joy from The University of Manchester.
“The implications of our findings reach far beyond the Moon. We know that the Earth and the Moon likely experienced similar impacts during their early history, but rock records from the Earth have been lost.”
– Co-author Dr. Romain Tartese, University of Manchester

Whatever struck the Moon, the impact was catastrophic. Some estimates suggest the impactor was around 200 km in diameter, far larger than the roughly 10 km Chicxulub impactor that ended the reign of the dinosaurs. This massive, energetic impact represents a key event in the inner Solar System’s history.
“Determining the timing of this catastrophic event is key to understanding the onset of the lunar basin-forming epoch, with implications for understanding the impact bombardment history of the inner Solar System,” the researchers write. “Despite this, the formation age of the SPA basin remains poorly constrained.”
The inner Solar System bodies have been pummelled by comets and asteroids. On Earth, the evidence of these impacts has mostly been wiped away by billions of years of plate tectonics and weathering, leaving only faint traces of most impacts. The Vredefort impact crater in South Africa was created by a massive impactor about two billion years ago. It’s so eroded that scientists aren’t certain how large the original impact structure was.
Since Earth’s impact features are incomplete, scientists study the lunar surface to understand both the Earth and the Moon’s bombardment history. Fortunately, some evidence from the lunar surface has made it to Earth in the form of samples collected by landers. Some serendipitous evidence also comes in the form of meteorites.
Study co-author Dr. Romain Tartese, Senior Lecturer at The University of Manchester, said, “The implications of our findings reach far beyond the Moon. We know that the Earth and the Moon likely experienced similar impacts during their early history, but rock records from the Earth have been lost. We can use what we have learnt about the Moon to provide us with clues about the conditions on Earth during the same period of time.”
When a large, fast-moving impactor strikes a rocky planet or moon, it releases an enormous amount of energy. The impact can spread debris around the surface and even launch some of it into space. Scientists have studied multiple meteorites that came from impacts on the Moon and Mars and have learned a great deal from them. In fact, there are so many of them that scientists have been able to categorize many meteorites according to their asteroidal parent bodies.
At least one piece of debris from the impact reached Earth: a lunar meteorite named Northwest Africa 2995.
Over the years, different researchers have examined NWA 2995. By comparing it to Apollo samples, they’ve found that it has the same oxygen isotope ratios, which points to a shared lunar origin. The meteorite’s minerals and texture are also very similar to crustal rocks from the lunar highlands.
The researchers write that the meteorite is in “good agreement with lithologies exposed within the southern region of the SPA basin.”
NWA 2995 was found in Algeria in 2005, and it hasn’t been on Earth for long – only a few thousand years. By analyzing the concentrations of certain cosmogenic nuclides, atoms produced by exposure to cosmic rays, scientists have determined that the rock travelled through space for only about 22 million years. So, although it was initially created in an ancient impact, it was only launched into space much later by a subsequent impact. NWA 2995 is relatively unchanged and can provide insights into the early Solar System.
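To illustrate the idea behind that exposure-age estimate, here is a minimal sketch for the simplest case: a stable cosmogenic nuclide accumulating at a constant rate while the rock is in space. The concentration and production rate below are made-up illustrative values, not numbers from the study.

```python
# Minimal sketch of a cosmic-ray exposure age for a stable cosmogenic nuclide.
# All numbers are illustrative assumptions; the study's actual nuclides,
# concentrations, and production rates are not given in this article.

measured_concentration = 4.4e8   # atoms of a stable cosmogenic nuclide per gram (assumed)
production_rate = 20.0           # atoms produced per gram per year in space (assumed)

# Cosmic rays produce the nuclide at a roughly constant rate while the rock is
# in space, so for a stable nuclide the concentration grows linearly with time
# and the exposure age is simply concentration / production rate.
exposure_age_years = measured_concentration / production_rate
print(f"Exposure age: {exposure_age_years / 1e6:.0f} million years")
# Prints 22 million years with these assumed inputs, matching the quoted age.
```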
NWA 2995 is what scientists call a regolith breccia. Regolith is the layer of unconsolidated rocky material that covers bedrock, and breccia is rock formed from angular fragments of rocks and minerals cemented together by fine-grained material. According to the authors, NWA 2995 represents an “ancient fused lunar soil, made up of many different rock and mineral components.”
The researchers examined NWA 2995 to constrain the age of the SPA basin, applying radiometric dating to a range of the meteorite’s mineral and rock components.
This image from the research shows a section of NWA 2995 in four different views. a is an optical scan, b is a back-scattered electron image from an electron microscope, c is a cathodoluminescence image that highlights certain minerals, and d is a composite false colour element map. The colours represent silica (blue), aluminum (white), magnesium (green), iron (red), titanium (pink), potassium (cyan) and calcium (yellow). Image Credit: Joy et al. 2024.

The researchers also compared NWA 2995 with orbital data from NASA’s Lunar Prospector, which used a low polar orbit to map the Moon’s surface composition. They created a map showing the probabilities that the meteorite originated in different regions on the Moon.
This figure from the research shows the probability that NWA 2995 came from different locations on the lunar surface. Image Credit: Joy et al. 2024.

They found that the meteorite most likely came from one of two locations, both inside the SPA. The nearby Cabannes craters are all the right size to eject a rock like NWA 2995.
c is from a unified geological map of the Moon, and d shows stratigraphic units by age. Image Credit: Joy et al. 2024.

The researchers analyzed uranium and lead isotopes in NWA 2995 to date its components. Overall, the results indicate that the SPA basin formed about 4.32–4.33 billion years ago – roughly 120 million years before the main cluster of other lunar basins such as Serenitatis, Nectaris, and Crisium.
This image shows thorium concentrations on the Moon. Thorium is used in conjunction with uranium in radiometric dating to help determine the Moon’s chronology. Radiometric data suggests that NWA 2995 came from the South Pole-Aitken Basin. Image Credit: Joy et al. 2024.

Dr Joshua Snape, Royal Society University Research Fellow at The University of Manchester, is one of the co-authors of the new research. “Over many years, scientists across the globe have been studying rocks collected during the Apollo, Luna, and Chang’e 5 missions, as well as lunar meteorites, and have built up a picture of when these impact events occurred,” Snape said.
“For several decades there has been general agreement that the most intense period of impact bombardment was concentrated between 4.2-3.8 billion years ago – in the first half a billion years of the Moon’s history,” said Snape. “But now, constraining the age of the South-Pole Aitken basin to 120 million years earlier weakens the argument for this narrow period of impact bombardment on the Moon and instead indicates there was a more gradual process of impacts over a longer period.”
These results will only grow stronger when future missions collect more samples from the area. “The proposed ancient 4.32 billion year old age of the South Pole-Aitken basin now needs to be tested by sample return missions collecting rocks from known localities within the crater itself,” said lead author Joy.
“Our proposed formation age for SPA will require confirmation from future radiometric dating of samples collected from the south of the Apollo basin area by the Chang’e–6 mission or from future proposed missions such as the Endurance-A rover concept that aims to collect 100 kg of samples from across the SPA basin floor,” the authors write in their conclusion.
The post Scientists Determine the Age of the Moon’s Oldest and Largest Impact Basin appeared first on Universe Today.
There’s not much new this week, and certainly nothing to inspire me to comment on science, current events, and so on. So it’s time to go back fifty years and compare the Billboard Top Ten Songs from then with the current ones. It turns out that the comparison isn’t as dire as it has been the last few times.
This may be for two reasons. First, rock had already reached its apogee before 1974, and while there are a couple of classics on the 1974 list, and certainly some great musicians, the list in general is not inspiring.
Second, it seems to me that pop music is getting infused with a soupçon of country music, and, given how bad recent pop music has been, this can only improve it.
First, the list from this week in 1974. I’ve put a link to the performance of each song.
The best songs on this list include #1 (the Spinners were underrated: “I’ll Be Around” is one of the great soul songs), and the addition of Dionne Warwick makes for a creditable tune. The Stevie Wonder song is okay, but not close to his greatest efforts (viz. “Isn’t she Lovely?” or “For Once in my Life“, etc.). I have little use for Bachman Turner Overdrive, but “Jazzman” is an excellent effort by Carole King. The Elton John song is an 8; I can dance to it. Bad Company’s song rates a 5 out of 10, and it’s downhill from there, save the classic “Sweet Home Alabama”. Ergo, I’d rate #1, #4, and #8 as music that will last. We shall forget about Tony Orlando, Mac Davis, and the Osmonds.
Here is the Billboard Top 10 from fifty years ago: October 21, 1974:
1.) “Then Came You” Dionne Warwicke and the Spinners
2.) “You Haven’t Done Nothing” Stevie Wonder
3.) “You Ain’t Seen Nothing Yet/Free Wheelin'” Bachman Turner Overdrive
4.) “Jazzman” Carole King
5.) “The Bitch is Back” Elton John
6.) “Can’t Get Enough” Bad Company
7.) “Steppin’ Out/Gonna Boogie Tonight” Tony Orlando and Dawn
8.) “Sweet Home Alabama” Lynyrd Skynyrd
9.) “Stop and Smell the Roses” Mac Davis
10.) “Love Me for a Reason” The Osmonds
And the latest Billboard Top 10 from October 19, 2024.
The music on the list below is surprisingly good given that it’s from today. I’m not a fan of “A Bar Song” as it’s too rap-py—but note the country tinge to it! My favorite on this list is Billie Eilish’s song (#2), which is quite lovely. #3 is largely a country/pop hybrid. It’s okay, but the melody and words are rather trite. We shall leave aside the talentless Sabrina Carpenter, which eliminates three songs from this list. The Bruno Mars/Lady Gaga duet has the trappings of country music (cowboy hats and boots, and big hair on Lady Gaga), but it’s just okay: neither catchy nor memorable. Chappell Roan appears to be a phenom these days, but I wasn’t impressed with this effort, which in the end is a standard love song, and the melody is trite. Skipping over Carpenter to Swims, we find a song that’s beginning to sound of a piece with much of modern music, but it’s okay (note the country intonations). Skipping Carpenter for the last time (yes, I listened to all the songs), we finish with Benson Boone, performing a countrified pop song, but again the tune is boring and the lyrics uncompelling.
1.) “A Bar Song (Tipsy)” Shaboozey
2.) “Birds of a Feather” Billie Eilish
3.) “I Had Some Help” Post Malone featuring Morgan Wallen
4.) “Espresso” Sabrina Carpenter
5.) “Die With a Smile” Lady Gaga with Bruno Mars
6.) “Good Luck, Babe!” Chappell Roan
7.) “Taste” Sabrina Carpenter
8.) “Lose Control” Teddy Swims
9.) “Please Please Please” Sabrina Carpenter
10.) “Beautiful Things” Benson Boone
All in all, the lists are pretty much tied, but 1974 wins (you knew it would!) because it has a couple of classics. The latest list, in my view, is redeemed by Billie Eilish’s song, and I should sample more of her music. I see she’s only 22 and her full name is Billie Eilish Pirate Baird O’Connell. (Note that Wonder’s “For Once in My Life” was recorded when he was just 18.)
Here’s “Birds of a Feather” by Billie Eilish, co-written with her brother, Finneas O’Connell
Please send in your photos, or at least get them ready to send, as I’ll be gone from this Wednesday through Thursday, the 31st. Today we’re featuring the birds of Iceland taken by physicist and origami master Robert Lang, traveling on a June Center for Inquiry cruise featuring Richard Dawkins. (Robert’s flower pictures from the same trip are here.) Robert’s captions are indented, and you can enlarge the photos by clicking on them.
Iceland Birds (etc.)
Continuing my recent trip to islands of the northern Atlantic—heading out from Ireland taking in Orkney, Shetland, the Faroe Islands, and then Iceland—here are some of the birds (and a few bonus mammals) we saw along the way. Most of these are from Iceland. (I am not a birder, so IDs are from Merlin ID and/or Wikipedia; corrections are welcome.)
An Arctic Tern (Sterna paradisaea), taken at Grimsey Island, the northernmost spot of Iceland with a bit extending above the Arctic Circle. Visiting brought home how powerful the warming influence of the Gulf Stream is; it was light-jacket weather when we visited in June and the ground was covered in thick grassland. By contrast, six months earlier, I was slightly across the Antarctic Circle along the Antarctic Peninsula (so also in midsummer), and all was glaciers, snow, and ice:
Also from Grimsey, a Common redshank (Tringa totanus), presumably the T. t. robusta subspecies (which, according to Wikipedia, breeds in Iceland).
We visited the tiny island of Vigur, which is a habitat for Common Eider ducks (Somateria mollissima). As the photo shows, they are strongly sexually dimorphic. The island is owned by a couple who gather the eider down for use in pillows, quilts, and the like; because there are no predators on the island and the ducks are used to humans wandering about, they are quite tolerant when some of those humans are visiting tourists. They have cute chicks:
Eider duckling:
A European golden plover (Pluvialis apricaria), also from the grasslands of Grimsey:
A Black guillemot (Cepphus grylle) (I think), a species that is widespread in the North Atlantic:
The juveniles are mottled:
A Northern fulmar (Fulmarus glacialis), nesting in the cliffs of Grimsey. (Wikipedia tells me there are both dark and light morphs; this must be the light one):
A snow bunting (Plectrophenax nivalis), the most northerly recorded passerine in the world. I saw this one on the main island of Iceland:
One of the more distinctive seagoing birds seen along the Grimsey cliffs is the Razorbill (Alca torda), the closest living relative of the extinct Great Auk:
But by far the most distinctive seagoing bird is the Atlantic puffin (Fratercula arctica), the iconic bird of the northern Atlantic, whose representations fill tchotchke shops all over:
Their clown-faced makeup is unbelievable!:
Although the majority of the wildlife we saw were birds, there were a few mammals here and there. This grey seal (Halichoerus grypus) seems to be floating quite high in the water; in fact, it’s basking on a barely submerged rock. (This is off the coast of Vigur island; that’s an Eider duck next to it):
And not an example of wildlife, but in honor of our host, I spotted this moggie wandering the streets of Ísafjörður, a tiny town in the northwest (and wildest) region of Iceland:
At a recent event, Tesla showcased the capabilities of its humanoid autonomous robot, Optimus. The demonstration has come under some criticism, however, for not being fully transparent about the nature of the demonstration. We interviewed robotics expert Christian Hubicki on the SGU this week to discuss the details. Here are some of the points I found most interesting.
First, let’s deal with the controversy – to what extent were the robots autonomous, and how transparent was this to the crowd? The first question is easier to answer. There are basically three types of robot control: pre-programmed, autonomous, and teleoperated. Pre-programmed means they are following a predetermined set of instructions. Often if you see a robot dancing, for example, that is a pre-programmed routine. Autonomous means the robot has internal real-time control. Teleoperated means that a human in a motion-capture suit is controlling the movement of the robots. All three of these types of control have their utility.
These are humanoid robots, and they were able to walk on their own. Robot walking has to be either autonomous or pre-programmed; it cannot be teleoperated. This is because balance requires real-time feedback of position and other information to produce the moment-to-moment adjustments that keep the robot upright. A teleoperator would not have this (at least not with current technology). The Optimus robots walked out, so this was autonomous.
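To see why latency matters so much for balance, here is a minimal toy sketch (not anything Tesla has described): a linearized inverted pendulum standing in for a balancing robot, controlled with a simple PD law. The pendulum length, gains, and latencies are all illustrative assumptions.

```python
import math

# Toy model: a linearized inverted pendulum with PD feedback that only sees
# measurements delayed by a fixed latency. All numbers are illustrative assumptions.

def simulate(feedback_delay_s, sim_time_s=10.0, dt=0.001):
    g, length = 9.81, 1.0                  # assumed ~1 m effective pendulum
    a = g / length                         # linearized dynamics: theta'' = a * theta + u
    kp, kd = 30.0, 8.0                     # PD gains chosen to stabilize the low-latency case
    theta, omega = 0.02, 0.0               # small initial lean, in radians
    delay_steps = max(1, int(feedback_delay_s / dt))
    history = [(theta, omega)] * delay_steps   # ring buffer of stale measurements
    for step in range(int(sim_time_s / dt)):
        seen_theta, seen_omega = history[step % delay_steps]  # what the controller sees
        u = -kp * seen_theta - kd * seen_omega                # corrective acceleration
        omega += (a * theta + u) * dt                         # semi-implicit Euler step
        theta += omega * dt
        history[step % delay_steps] = (theta, omega)          # record the true state
        if abs(theta) > math.pi / 4:                          # call 45 degrees a fall
            return f"fell after {step * dt:.2f} s"
    return f"still balanced after {sim_time_s:.0f} s"

print("onboard loop, ~2 ms latency: ", simulate(0.002))
print("remote operator, ~0.6 s lag: ", simulate(0.6))
```

With these assumed numbers, the low-latency loop stays upright while the delayed loop topples: the correction has to arrive faster than the fall develops, which is why walking and balancing are handled onboard rather than by a remote operator.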
Once in position, however, the robots began serving and interacting with the humans present. Christian noted that he and other roboticists could immediately tell that the upper-body movements of the robots were teleoperated, just by the way they were moving. The verbal interaction also seemed teleoperated, as each robot had a different voice and the responses were immediate and included gesticulations.
Some might say – so what? The engineering of the robots themselves is impressive. They can walk autonomously, and none of them fell over or did anything weird. That alone is a fairly impressive demonstration. It is actually quite dangerous to have fully autonomous robots interacting with people; the technology is not quite there yet. Robots are heavy and powerful, and just falling over might cause human injury. Reliability has to be extremely high before we will be comfortable putting fully autonomous robots in human spaces. Making robots lighter and softer is one solution, because then they would be less physically dangerous.
But the question for the Optimus demonstration is – how transparent was the teleoperation of the robots? Tesla, apparently, did not explicitly say the robots were being operated fully autonomously, nor did any of the robot operators lie when directly asked. But at the same time, the teleoperators were not in view, and Tesla did not go out of its way to point out that the robots were being teleoperated. How big a deal is this? That is a matter of perception.
But Christian pointed out that there is a very specific question at the heart of the demonstration – where is Tesla compared to its competitors in terms of autonomous control? The demonstration, if you did not know there were teleoperators, makes the Optimus seem years ahead of where it really is. It made it seem as if Tesla is ahead of their competition when in fact they may not be.
While Tesla was operating in a bit of a transparency grey-zone, I think the pushback is healthy for the industry. The fact is that robotics demonstrations typically use various methods of making the robots seem more impressive than they are – speeding up videos, hiding teleoperation, only showing successes and not the failures, and glossing over significant limitations. This is OK if you are Disney and your intent is to create an entertaining illusion. This is not OK if you are a robotics company demonstrating the capabilities of your product.
What is happening as a result of the pushback, and the exposure of this lack of total transparency, is an increasing use of transparency in robotics videos. This, in my opinion, should become standard, and anything less unacceptable. Videos, for example, can be labeled as “autonomous” or “teleoperated” and can also be labeled if they are being shown at a speed other than 1x. Here is a follow-up video from Tesla where they do just that. However, this video is in a controlled environment, we don’t know how many “takes” were required, and the Optimus demonstrates only some of what it did at the event. At live events, if there are teleoperators, they should not be hidden in any way.
This controversy aside, the Optimus is quite impressive just from a hardware point of view. But the real question is – what will be the market and the use of these robots? The application will depend partly on the safety and reliability, and therefore on its autonomous capabilities. Tesla wants their robots to be all-purpose. This is an extremely high bar, and requires significant advances in autonomous control. This is why people are very particular about how transparent Tesla is being about where their autonomous technology is.
The post Tesla Demonstrated its Optimus Robot first appeared on NeuroLogica Blog.
Geneva, Switzerland, is not known for its sunny weather, and seeing the comet here was almost impossible, though I caught some glimpses. I hope many of you have seen it clearly by now. It’s dim enough now that dark skies and binoculars are increasingly essential.
I came here (rather than the clear skies of, say, Morocco, where a comet would be an easier target) to give a talk at the CERN laboratory — the lab that hosts the Large Hadron Collider [LHC], where the particle known as the Higgs boson was discovered twelve years ago. This past week, members of the CMS experiment, one of the two general purpose experiments at the LHC, ran a small, intensive workshop with a lofty goal: to record vastly more information from the LHC’s collisions than anyone would have thought possible when the LHC first turned on fifteen years ago.
The flood of LHC data is hard to wrap one’s head around. At CMS, as at the ATLAS and LHCb experiments, two bunches of protons pass through each other roughly 40 million times every second. In each of these “bunch crossings”, dozens of proton-proton collisions happen simultaneously. As the debris from the collisions moves into and through the CMS experiment, many detailed measurements are made, generating roughly a megabyte of data even with significant data compression. If that were all recorded, it would translate to many terabytes produced per second, and hundreds of millions of terabytes per year. That’s well beyond what CMS can store, manage and process. ATLAS faces the same issues, and LHCb faces its own version.
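To get a feel for those numbers, here is a back-of-the-envelope sketch; the crossing rate, event size, and live time are round illustrative figures rather than official CMS values.

```python
# Back-of-the-envelope LHC data-volume arithmetic (round, illustrative numbers).
crossings_per_second = 40e6     # bunch crossings per second (one roughly every 25 ns)
bytes_per_crossing = 1e6        # ~1 megabyte of compressed detector data per crossing
live_seconds_per_year = 1e7     # rough number of seconds per year with colliding beams

terabytes_per_second = crossings_per_second * bytes_per_crossing / 1e12
terabytes_per_year = terabytes_per_second * live_seconds_per_year

print(f"If everything were kept: {terabytes_per_second:.0f} TB per second")
print(f"Over a year of running:  {terabytes_per_year:.1e} TB per year")
# ~40 TB/s and a few hundred million TB/year -- far more than can be stored.
```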
So what’s to be done? There’s only one option: throw most of that data away in the smartest way possible, and ensure that the data retained is processed and stored efficiently.
Data Overload and the Trigger

The automated system that has the job of selecting which data to throw away and which to keep is called the “trigger”; I wrote an extended article about it back in 2011. The trigger has to make a split-second judgment, based on limited information. It is meant to narrow a huge amount of data down to something manageable. It has to be thoughtfully designed and carefully monitored. But it isn’t going to be perfect.
Originally, at ATLAS and CMS, the trigger was a “yes/no” data processor. If “yes”, the data collected by the experiment during a bunch crossing was stored; otherwise it was fully discarded.
A natural if naive idea would be to do something more nuanced than this yes/no decision making. Instead of a strict “no” leading to total loss of all information about a bunch crossing, one could store a sketch of the information — perhaps a highly compressed version of the data from the detector, something that occupies a few kilobytes instead of a megabyte.
After all, the trigger, in order to make its decision, has to look at each bunch crossing in a quick and rough way, and figure out, as best it can, what particles may have been produced, where they went and how much energy they have. Why not store the crude information that it produces as it makes its decision? At worst, one would learn more about what the trigger is throwing away. At best, one might even be able to make a measurement or a discovery in data that was previously being lost.
It’s a good idea, but any such plan has costs in hardware, data storage and person-hours, and so it needs a strong justification. For example, if one just wants to check that the trigger is working properly, one could do what I just described using only a randomly-selected handful of bunch crossings per second. That sort of monitoring system would be cheap. (The experiments actually do something smarter than that [called “prescaled triggers”.])
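To make the idea concrete, here is a minimal sketch of a prescale-style monitoring path – a simplification of my own, not CMS software. The trigger decision and the compression step are hypothetical placeholders.

```python
import random

# Minimal sketch of a prescale-style monitoring path (my own simplification,
# not CMS code). trigger_accepts() and compress() are hypothetical stand-ins
# for the real trigger decision and the trigger-level summary of a crossing.

PRESCALE = 100_000   # keep roughly 1 in 100,000 rejected crossings for monitoring

def handle_crossing(event, trigger_accepts, compress):
    if trigger_accepts(event):
        return ("stored_full", event)                       # normal "yes" path: keep everything
    if random.randrange(PRESCALE) == 0:
        return ("stored_for_monitoring", compress(event))   # rare random sample of "no" events
    return ("discarded", None)                              # the vast majority is thrown away
```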
Only if one were really bold would one suggest that the trigger’s crude information be stored for every single bunch crossing, in hopes that it could actually be used for scientific research. This would be tantamount to treating the trigger system as an automated physicist, a competent assistant whose preliminary analysis could later be put to use by human physicists.
Data “Scouting” a.k.a. Trigger-Level Analysis

More than ten years ago, some of the physicists at CMS became quite bold indeed, and proposed to do this for a certain fraction of the data produced by the trigger. They faced strong counter-arguments.
The problem, many claimed, is that the trigger is not a good enough physicist, and the information that it produces is too corrupted to be useful in scientific data analysis. From such a perspective, using this information in one’s scientific research would be akin to choosing a life-partner based on a dating profile. The trigger’s crude measurements would lead to all sorts of problems. They could hide a new phenomenon, or worse, create an artifact that would be mistaken for a new physical phenomenon. Any research done using this data, therefore, would never be taken seriously by the scientific community.
Nevertheless, the bold CMS physicists were eventually given the opportunity to give this a try, starting in 2011. This was the birth of “data scouting” — or, as the ATLAS experiment prefers to call it, “trigger-object-level analysis”, where “trigger-object” means “a particle or jet identified by the trigger system.”
The Two-Stage Trigger

In my description of the trigger, I’ve been oversimplifying. In each experiment, the trigger works in stages.
At CMS, the “Level-1 trigger” (L1T) is the swipe-left-or-right step of a 21st-century dating app; using a small fraction of the data from a bunch crossing, and taking an extremely fast glance at it using programmable hardware, it makes the decision as to whether to discard it or take a closer look.
The “High-Level Trigger” (HLT) is the read-the-dating-profile step. All the data from the bunch crossing is downloaded from the experiment, the particles in the debris of the proton-proton collision are identified to the extent possible, software examines the collection of particles from a variety of perspectives, and a rapid but more informed decision is made as to whether to discard or store the data from this bunch crossing.
The new strategy implemented by CMS in 2011 (as I described in more detail here) was to store more data using two pipelines; see Figure 1.
Effectively, the scouting pipeline uses the HLT trigger’s own data analysis to compress the full data from the bunch crossing down to a much smaller size, which makes storing it affordable.
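Here is a minimal sketch of the flow just described – again my own pseudocode rather than CMS software – showing the Level-1 decision, the High-Level Trigger decision, and the scouting path that keeps the HLT’s compressed summary even when the full event is dropped. The function names are hypothetical placeholders.

```python
# Minimal sketch of the two-stage trigger with a scouting pipeline (not CMS code).
# level1_accepts, hlt_accepts, and hlt_reconstruct are hypothetical stand-ins
# for the real hardware/software decisions and the HLT's particle reconstruction.

def process_bunch_crossing(raw_data, level1_accepts, hlt_accepts, hlt_reconstruct):
    # Stage 1: fast hardware look at a small fraction of the data.
    if not level1_accepts(raw_data):
        return ("discarded", None)             # (level-1 scouting would keep a tiny sketch here)

    # Stage 2: full readout and a quick software reconstruction of the particles.
    trigger_objects = hlt_reconstruct(raw_data)   # ~kilobytes: jets, leptons, energies

    if hlt_accepts(trigger_objects):
        return ("stored_full", raw_data)          # ~1 MB: the traditional "yes" path

    # Scouting path: the full event is dropped, but the HLT's own compressed
    # summary is cheap enough to keep for physics analysis.
    return ("stored_scouting", trigger_objects)
```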
Being bold paid off. It turned out that the HLT output could indeed be used for scientific research. Based on this early success, the HLT scouting program was expanded for the 2015-2018 run of the LHC (Figure 2), and has been expanded yet again for the current run, which began in 2023. At present, sketchy information is being kept for a significant fraction of the bunch crossings for which the Level-1 trigger says “yes” but the High-Level Trigger says “no”.
After CMS demonstrated this approach could work, ATLAS developed a parallel program. Separately, the LHCb experiment, which works somewhat differently, has introduced their own methods; but that’s a story for another day.
Dropping Down a Level

Seeing this, it’s natural to ask: if scouting works for the bunch crossings where the high-level trigger swipes left, might it work even when the level-1 trigger swipes left? A reasonable person might well think this is going too far. The information produced by the level-1 trigger as it makes its decision is far more limited and crude than that produced by the HLT, and so one could hardly imagine that anything useful could be done with it.
But that’s what people said the last time, and so the bold are again taking the risk of being called foolhardy. And they are breathtakingly courageous. Trying to do this “level-1 scouting” is frighteningly hard for numerous reasons – not least the sheer rate of bunch crossings and the very limited, crude information the level-1 trigger produces.
So what comes out of the level-1 trigger “no” votes is a gigantic amount of very sketchy information. Having more data is good when the data is high quality. Here, however, we are talking about an immense but relatively low-quality data set. There’s a risk of “garbage in, garbage out.”
Nevertheless, this “level-1 scouting” is already underway at CMS, as of last year, and attempts are being made to use it and improve it. These are early days, and only a few new measurements with the data from the current run, which lasts through 2026, are likely. But starting in 2029, when the upgraded LHC begins to produce data at an even higher rate — with the same number of bunch crossings, but four to five times as many proton-proton collisions per crossing — the upgraded level-1 trigger will then have access to a portion of the tracker’s data, allowing it to reconstruct particle tracks. Along with other improvements to the trigger and the detector, this will greatly enhance the depth and quality of the information produced by the level-1 trigger system, with the potential to make level-1 scouting much more valuable.
And so there are obvious questions as we look ahead to 2029. First, what information can and should the level-1 scouting system record, and at what cost in hardware, storage, and person-hours? And second, what physics could actually be done with that information?
My task, in the run up to this workshop, was to prepare a talk addressing the second question, which required me to understand, as best I could, the answer to the first. Unfortunately the questions are circular. Only with the answer to the second question is it clear how best to approach the first one, because the decision about how much to spend in personnel-time, technical resources and money depends on how much physics one can potentially learn from that expenditure. And so the only thing I could do in my talk was make tentative suggestions, hoping thereby to start a conversation between experimenters and theorists that will continue for some time to come.
Will an effort to store all this information actually lead to measurements and searches that can’t be done any other way? It seems likely that the answer is “yes”, though it’s not yet clear if the answer is “yes — many”. But I’m sure the effort will be useful. At worst, the experimenters will find new ways to exploit the level-1 trigger system, leading to improvements in standard triggering and high-level scouting, and allowing the retention of new classes of potentially interesting data. The result will be new opportunities for LHC data to teach us about unexpected phenomena both within and potentially beyond the Standard Model.
Ever since recombinant DNA was first used to develop and manufacture vaccines, antivaxxers have portrayed it as evil. This weekend, an antivaxxer decided that fear mongering about SV40 in COVID-19 vaccines wasn't enough. Here we go again...
The post “And we’d better not risk another frontal assault. That plasmid’s dynamite.” Antivaxxers vs. plasmid DNA first appeared on Science-Based Medicine.