There’s a burgeoning arms race between Artificial Intelligence (AI) deepfake images and the methods used to detect them. The latest advancement on the detection side comes from astronomy. The intricate methods used to dissect and understand light in astronomical images can be brought to bear on deepfakes.
The word ‘deepfakes’ is a portmanteau of ‘deep learning’ and ‘fakes.’ Deepfake images are called that because they’re made with a certain type of AI called deep learning, itself a subset of machine learning. Deep learning AI can mimic something quite well after being shown many examples of what it’s being asked to fake. When it comes to images, deepfakes usually involve replacing the existing face in an image with a second person’s face to make it look like someone else is in a certain place, in the company of certain people, or engaging in certain activities.
Deepfakes are getting better and better, just like other forms of AI. But as it turns out, a new tool to uncover deepfakes already exists in astronomy. Astronomy is all about light, and the science of teasing out minute details in light from extremely distant and puzzling objects is developing just as rapidly as AI.
In a new article in Nature, science journalist Sarah Wild looked at how researchers are using astronomical methods to uncover deepfakes. Adejumoke Owolabi is a student at the University of Hull in the UK who studies data science and computer vision. Her master's thesis focused on how light reflected in eyeballs should be consistent, though not identical, between the left and right eye. Owolabi drew on a high-quality dataset of human faces from Flickr, then used an image generator to create fake faces. She then compared the two sets using two measurement techniques borrowed from astronomy, the CAS system and the Gini index, to analyze the light reflected in the eyeballs and determine which images were deepfakes.
CAS stands for concentration, asymmetry, and smoothness, and astronomers have used it for decades to study and quantify the light from distant galaxies. It has also made its way into biology and other areas where images need to be carefully examined. Noted astrophysicist Christopher J. Conselice was a key proponent of using CAS in astronomy.
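As a rough illustration of the "A" in CAS, the asymmetry term compares an image with a copy of itself rotated 180 degrees. This is a minimal sketch only: the full CAS definition also optimizes the rotation centre and subtracts a background term, both omitted here.

```python
import numpy as np

def asymmetry(image):
    """Rotational asymmetry, the 'A' in CAS.

    Rotate the image 180 degrees about its centre and compare with the
    original. 0 means perfectly point-symmetric; larger values mean more
    asymmetric. (Centre optimization and background correction from the
    full CAS definition are omitted in this sketch.)
    """
    img = np.asarray(image, dtype=float)
    rotated = np.rot90(img, 2)  # two 90-degree turns = 180-degree rotation
    return np.abs(img - rotated).sum() / np.abs(img).sum()

# A point-symmetric "light source" scores 0; any lopsidedness raises A.
symmetric = np.array([[1, 2, 1],
                      [2, 5, 2],
                      [1, 2, 1]])
print(asymmetry(symmetric))  # 0.0
```

The same comparison applied to the two eyes of a face plays an analogous role: reflections that should roughly mirror each other, but don't, raise a flag.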
The Gini index, or Gini coefficient, is also used to study galaxies. It’s named after the Italian statistician Corrado Gini, who developed it in 1912 to measure income inequality. Astronomers use it to measure how light is spread throughout a galaxy and whether it’s uniform or concentrated. It’s a tool that helps astronomers determine a galaxy’s morphology and classification.
In her research, Owolabi successfully determined which images were fake 70% of the time.
These eyes are all from deepfake images with inconsistent light reflection patterns. The ones on the right are coloured to highlight the inconsistencies. Image Credit: Adejumoke Owolabi (CC BY 4.0)

For her article, Wild spoke with Kevin Pimbblet, director of the Centre of Excellence for Data Science, Artificial Intelligence and Modelling at the University of Hull in the UK. Pimbblet presented the research at the UK Royal Astronomical Society's National Astronomy Meeting on July 15th.
“It’s not a silver bullet, because we do have false positives and false negatives,” said Pimbblet. “But this research provides a potential method, an important way forward, perhaps to add to the battery of tests that one can apply to try to figure out if an image is real or fake.”
This is a promising development. Open democratic societies are prone to disinformation attacks from enemies without and within. Public figures are prone to similar attacks. Disturbingly, the majority of deepfakes are pornographic and can depict public figures in private and sometimes degrading situations. Anything that can help combat them and bolster civil society is a welcome tool.
But as we know from history, arms races have no endpoint. They go on and on in an escalating series of countermeasures. Look at how the USA and the USSR kept one-upping each other during their nuclear arms race as warhead sizes reached absurd levels of destructive power. So, inasmuch as this work shows promise, the purveyors of deepfakes will learn from it and improve their AI deepfake methods.
Wild also spoke to Brant Robertson in her article. Robertson is an astrophysicist at the University of California, Santa Cruz, who studies astrophysics and astronomy, including big data and machine learning. "However, if you can calculate a metric that quantifies how realistic a deepfake image may appear, you can also train the AI model to produce even better deepfakes by optimizing that metric," he said, confirming what many would predict.
This isn’t the first time that astronomical methods have intersected with Earthly issues. When the Hubble Space Telescope was developed, it contained a powerful CCD (charge-coupled device). That technology made its way into a digital mammography biopsy system, which allowed doctors to take better images of breast tissue and identify suspicious tissue without a physical biopsy. Now, CCDs and the image sensors descended from them are at the heart of all of our digital cameras, including those in our mobile phones.
Might our internet browsers one day contain a deepfake detector based on Gini and CAS? How would that work? Would hostile actors unleash attacks on those detectors and then flood our media with deepfake images in an attempt to weaken our democratic societies? It’s the nature of an arms race.
It’s also in our nature to use deception to sway events. History shows that rulers with malevolent intent can more easily deceive populations that are in the grip of powerful emotions. AI deepfakes are just the newest tool at their disposal.
We all know that AI has downsides, and deepfakes are one of them. While their legality is fuzzy, as with many new technologies, we’re starting to see efforts to combat them. The United States government acknowledges the problem, and several laws have been proposed to deal with it. The “DEEPFAKES Accountability Act” was introduced in the US House of Representatives in September 2023. The “Protecting Consumers from Deceptive AI Act” is another related proposal. Both are floundering in the sometimes murky world of subcommittees for now, but they might breach the surface and become law eventually. Other countries and the EU are wrestling with the same issue.
But in the absence of a comprehensive legal framework dealing with AI deepfakes, and even after one is established, detection is still key.
Astronomy and astrophysics could be an unlikely ally in combatting them.
The post Astronomers Have Tools That Can Help Detect Deepfake Images appeared first on Universe Today.
Characterizing near-Earth asteroids (NEAs) is critical if we hope to eventually stop one from hitting us. But so far, missions to do so have been expensive, which is never good for space exploration. So a team led by Patrick Bambach of the Max Planck Institute for Solar System Research in Germany developed a mission concept that uses a pair of relatively inexpensive 6U CubeSats to characterize the interior of NEAs at only a fraction of the cost of previous missions.
The mission, known as the Deep Interior Scanning CubeSat mission to a rubble pile near-Earth asteroid, or DISCUS, was initially floated in 2018. Its central architecture involves two separate 6U CubeSats equipped with a powerful radar. They would travel to opposite sides of an NEA and send radar signals through its interior.
To understand more about the mission architecture, it’s best to look at the type of asteroid best suited to being visited by DISCUS. The authors suggest one about the size of Itokawa, the target of the first Hayabusa mission. It’s about 330 meters in diameter, right in the size range the mission planners were looking for, and is designated as a “rubble pile,” meaning the interior is relatively sparse.
Understanding how to stop an asteroid strike is one of DISCUS’s primary mission drivers. Fraser discusses how we can do it.

A sparse interior is critical to the mission objectives, as an asteroid’s density can dramatically affect the scientific toolkit needed to characterize it. For DISCUS, the mission team plans to use a radar antenna known as a half-dipole. It would transmit at a relatively low frequency, which is more likely to pass through larger objects. Additionally, they plan to use a radar technique known as stepped-frequency modulation, which steps the radar’s frequency across a band to allow for the broadest range of characterizations.
The opposing spacecraft on the other side of the asteroid would then receive these radar signals, analyze whatever waveform deformations occurred, and correlate that to the materials the radar had to pass through. Calculations show that this technique should enable a resolution of a few tens of meters for the interior of an asteroid about the size of Itokawa.
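That resolution figure follows from a basic radar relationship: a stepped-frequency system builds up a synthetic bandwidth B from its frequency steps, and the achievable range resolution is roughly c / (2B). The step count and step size below are hypothetical, chosen only to illustrate the scaling, not taken from the DISCUS paper.

```python
# Range resolution of a stepped-frequency radar.
# B = n_steps * step_hz is the synthetic bandwidth; resolution = c / (2B).
# The step count and step size here are hypothetical, for illustration only.
C = 299_792_458.0  # speed of light, m/s

def range_resolution(n_steps, step_hz):
    bandwidth = n_steps * step_hz
    return C / (2 * bandwidth)

# 100 frequency steps of 100 kHz -> 10 MHz of synthetic bandwidth.
print(range_resolution(100, 100e3))  # ~15 m, in the "tens of meters" regime
```

Doubling the synthetic bandwidth halves the resolution cell, which is why stepping across a wide band matters more than raw transmit power for imaging the interior.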
However, the signals also have to be run through another analysis technique called computed radar tomography. This technique is often used in radiology diagnoses on Earth, where the name CT scan comes from, but it can also be used to analyze the interiors of solid objects in the solar system.
The radar techniques DISCUS uses are also used on Earth, as described in this video on bistatic radar.

However, the science payload is only one part of the DISCUS package and would ideally take up just 1U of the 6U allotted on each probe. The remaining units would hold a series of off-the-shelf components, including a propulsion system (2U), a communication system (1U), and an avionics suite (1U). The dipole antenna and solar panels would deploy outside the standard CubeSat housing, allowing for better power collection and signal strength.
One of the most critical selections is the propulsion system, which would provide a delta-v of around 3.2 km/s, allowing DISCUS to match speeds with at least some NEAs. Alternatively, the mission could slingshot the craft around the Moon for a boost of up to 4 km/s and gain access to even more asteroids.
A particular asteroid stood out to the team as they developed the mission design in 2018. Asteroid 1993 BX3 came within 18.4 times the distance to the Moon back in 2021 and was traveling at a speed that DISCUS could match, so the mission design team was hoping to have a prototype up and running in time for a launch to that particular asteroid.
Unfortunately, that didn’t happen, and there hasn’t been much work on the mission concept since the paper was published in 2018. However, more and more missions are targeting NEAs, and CubeSats are becoming increasingly popular. Eventually, a CubeSat mission will visit one of these objects and will likely be based at least partially on some ideas from DISCUS.
Learn More:
Bambach et al. – DISCUS – The Deep Interior Scanning CubeSat mission to a rubble pile near-Earth asteroid
UT – Swarms of Orbiting Sensors Could Map An Asteroid’s Surface
UT – Swarming Satellites Could Autonomously Characterize an Asteroid
UT – Asteroid Samples Were Once Part of a Wetter World
Lead Image:
This illustration shows the ESA’s Hera spacecraft and its two CubeSats at the binary asteroid Didymos. Image Credit: ESA
The post A Pair of CubeSats Using Ground Penetrating Radar Could Map The Interior of Near Earth Asteroids appeared first on Universe Today.
You must have experienced this frustration: trying to get those stickers off of individual pieces of fruit without ripping the skin. I suppose it can be done with care, but I don’t have the time. Plus, there are now ways to mark fruit without stickers, such as laser etching.
My lunch apple, before:
My lunch apple, after sticker removal. The unavoidable crater appears:
Now clearly this isn’t a cosmic issue, but it’s one Andy Rooney would have talked about, and now that he’s gone somebody has to!