Far from the nightmarish scenarios of all-consuming artificial intelligence in the Terminator franchise, in reality neural networks are being deployed to protect humanity. One of the more recent applications of AI is monitoring space for deadly rocks, the likes of which ended the dinosaurs 66 million years ago.
In 2019, a previously unknown asteroid, named 2019 OK, flew by 45,000 miles from our planet, roughly five times closer than the Moon. In cosmic terms, that’s a very near miss. Scientists estimate that the asteroid could have been as big as 400 feet in diameter.
Had it hit Earth, the energy released would have equaled that of the most powerful nuclear device ever detonated, the “Tsar Bomba”: enough to level any metropolitan area to dust and rubble. Alarmingly, no one knew that 2019 OK was coming.
Even though space agencies worldwide monitor tens of thousands of space objects, it’s virtually impossible to catch every single one. The vastness of space is beyond human comprehension, and even the most powerful supercomputers struggle to crunch orbital data that is constantly reshuffled by unpredictable events in the Solar System.
That’s where an intelligence not bound by time can be of immense use. Last year, a group of scientists from Leiden University published a paper demonstrating that neural networks can accurately predict Earth-impactors. Their system, named the Hazardous Object Identifier (HOI), even found 11 previously undetected objects that pose a considerable risk to our planet over the next 20,000 years.
One of the scientists behind the project, John Hefele, a Texan living in the Netherlands, explained to CyberNews that AI-powered HOI could one day become the first line of defense against hostile space rocks.
“The idea is that every time we get new measurements for an asteroid, we want to go and put the asteroids with the new measurements through the system. And if AI indicates that this is something dangerous, then we’ll go and spend the computational resources necessary to calculate it,” he explained.
Hefele, a junior scientist at Cosine Measurement Systems, is among the first to deploy a neural network into Earth’s orbit. I sat down with him to discuss how AI could benefit astronomy and how an orbital neural network could benefit everyday life here on Earth.
One of your areas of focus is machine learning. Could you tell us how machine learning and AI technology tie into astronomy? What sort of benefits do they provide to the field?
Neural networks are something that a lot of astronomers should be using, and they aren’t. They should be using neural networks because there are massive amounts of data. Some of the sky surveys hold petabytes of data, stored on magnetic tape because it’s too expensive to keep them on traditional hard drives.
It’s not easy to sort through it all with traditional algorithms; you need some intelligence behind the data. You’d need an astronomer, but you would have to hire thousands of astronomers to go through all of it. And that’s where AI comes in. It can do semi-intelligent tasks much faster than, and about as accurately as, an astronomer can.
It’s something that we probably could have been doing five years ago, but the main thing is getting other scientists to accept your results. And the main problem with neural networks is that a neural network acts as a black box.
It’s very good at connecting input to output, but you don’t really know what’s going on inside. You can create an algorithm that goes and learns to connect the two. The biggest problem is that it’s hard to create error estimates around what the neural network does.
For example, you can make a neural network determine that a particular image depicts a specific type of galaxy and report that it is, say, 97% sure of the result. You can make a neural network spit out numbers between zero and one, but those numbers aren’t necessarily probabilities.

And that’s what scientists don’t like. That’s why this isn’t so widely used yet in astronomy. But I think it’s going to become more popular, because I’ve probably seen 10-20 papers by scientists using neural networks that estimate errors along with their predictions.
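A tiny, self-contained illustration of that point, using made-up scores and labels: a sigmoid squashes any raw network score into the interval (0, 1), but the resulting numbers need not match the frequencies you actually observe.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Made-up raw network scores for 8 images, with made-up true labels
# (1 = the image really shows the target type of galaxy).
raw = [3.0, 2.5, 2.0, 1.5, -1.0, -1.5, 0.5, 1.0]
labels = [1, 1, 0, 1, 0, 0, 0, 1]
scores = [sigmoid(s) for s in raw]

# Every output lands between zero and one...
assert all(0.0 < p < 1.0 for p in scores)

# ...but among images scored above 0.7, the fraction that are truly
# positive need not match the scores themselves: they are confidences,
# not calibrated probabilities.
confident = [(p, y) for p, y in zip(scores, labels) if p > 0.7]
hit_rate = sum(y for _, y in confident) / len(confident)
mean_score = sum(p for p, _ in confident) / len(confident)
print(f"mean confident score: {mean_score:.2f}")  # about 0.86
print(f"empirical hit rate:   {hit_rate:.2f}")    # 0.80
```

This gap between average confidence and empirical hit rate is exactly the kind of mismatch that the error-estimation papers Hefele mentions try to quantify.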
In a way, with the Hazardous Object Identifier (HOI), you did precisely that: you proved to your fellow scientists that neural networks can be beneficial in solving complex problems.
I think there are always people who are skeptical about what we’re doing, but the way you prove that the neural network is doing its job is by being careful about how you evaluate the neural network’s output.
And we just showed that this neural network is quite good at picking out dangerous objects. The most convincing evidence to me is that we simulated the objects that were selected by the network. And then, when we simulated those objects forward in time, they all came quite close to Earth.
And on top of that, those objects matched the set of objects that were already considered dangerous by NASA, and the neural network wasn’t trained on those objects. We left those objects out of the training process. The neural network didn’t see them, wasn’t trained on them. And the neural network was still able to identify around 90-95% of those potentially hazardous objects.
Could you tell us a little bit about how HOI works and what role neural networks play in the whole system?
The best way to think about HOI is as a black box. You train this black box to connect input with output, and the input we train the box with is known impactors. It’s tough to simulate an asteroid that hits Earth, so you don’t even go along that route.

What you do is “launch” asteroids from Earth’s surface and integrate them backward in time. Run those orbits forward again and, by construction, you have an example of a hazardous object. This is what we can give to the neural network as a training example. We created about 800,000 of these dangerous objects for the network to learn from.

We also gave it examples of objects that we don’t think are dangerous. And those are essentially all the observed asteroids, since it’s implausible that any particular asteroid you point to will be one that hits Earth. So you take all the observed objects minus the ones that we know are dangerous. You give this information to the neural network, so it has examples of dangerous and non-dangerous asteroids.

It learns to differentiate between the two. The network gives you a number between zero and one, which isn’t a probability, just a number between zero and one. And that number correlates with how dangerous an object is.
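The scheme described above is, at its core, binary classification. The sketch below stands in for HOI with a one-neuron logistic model and a single invented feature (a rough proxy for minimum orbit intersection distance, MOID); the real system trains a deep network on full orbital elements, so everything here is illustrative only.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Synthetic stand-in for the training set: each "asteroid" is reduced
# to one made-up feature, a MOID-like distance in au. Simulated
# impactors cluster at small distances; ordinary asteroids do not.
impactors     = [random.uniform(0.0, 0.05) for _ in range(200)]  # label 1
non_impactors = [random.uniform(0.02, 1.0) for _ in range(200)]  # label 0
data = [(x, 1) for x in impactors] + [(x, 0) for x in non_impactors]

# Train by batch gradient descent on the logistic loss.
w, b, lr = 0.0, 0.0, 1.0
for _ in range(2000):
    gw = gb = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

def score(moid):
    """A number between zero and one that correlates with danger."""
    return sigmoid(w * moid + b)

print(f"score(0.01 au) = {score(0.01):.2f}")  # close orbit: high score
print(f"score(0.80 au) = {score(0.80):.2f}")  # distant orbit: low score
```

As in the interview, the output is not a probability, just a number between zero and one that orders objects by how dangerous they look.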
It’s not uncommon for AI to solve problems that were designed specifically for AI to solve in the first place. Having that in mind, what’s the practical application of HOI?
In the paper, we pointed out a shortlist of asteroids. I think it was 11 that weren’t identified as being hazardous. We identified them as hazardous, did follow-up calculations, and concluded that there is a small, small chance that these asteroids can hit Earth.
And I really want to emphasize that it’s something like one in a hundred million. However, the other asteroids that are considered dangerous also have one-in-a-hundred-million probabilities. So, relative to the other things that we consider dangerous, we say that these should also be considered dangerous.
But as far as practical applications go, we’re looking to create a European version of NASA’s Sentry system. It calculates the probability of an object colliding with Earth by running Monte Carlo simulations over possible trajectories and their uncertainty distributions.

We say that those simulations can’t sufficiently cover the probability space of where an asteroid could hit, because of the solar system’s chaotic nature. So, we’re trying to create a combination system that joins AI with NASA’s Monte Carlo approach. The AI will act as a filter, pre-selecting several asteroids that it thinks are quite dangerous but that aren’t considered as such.

We can then run more extensive computations, using some of the techniques used by NASA. The thing is, those Monte Carlo simulations require a humongous amount of computing power, and you can’t sufficiently explore the close to 900,000 asteroids that we know of now. But if we can cut the list down to maybe 10,000 objects at most, it allows us to do a thorough investigation and properly quantify the impact properties.
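The filter-then-verify pipeline can be sketched as follows. Everything in the sketch is a placeholder: the scores would come from the network rather than a random generator, and the Monte Carlo routine is a trivial stand-in for the genuinely expensive orbital simulations.

```python
import random

random.seed(1)

# Hypothetical catalog: (name, network_score) pairs for ~900,000
# known asteroids. Names and scores are placeholders.
catalog = [(f"ast-{i:06d}", random.random()) for i in range(900_000)]

# Stage 1: the neural network acts as a cheap filter, keeping only
# the objects we can afford to study in depth.
BUDGET = 10_000
shortlist = sorted(catalog, key=lambda a: a[1], reverse=True)[:BUDGET]

# Stage 2: spend the expensive Monte Carlo computation only on the
# shortlist. This stand-in clones each orbit within its uncertainty
# and counts the fraction of "virtual impactors".
def monte_carlo_impact_probability(name, n_clones=100):
    rng = random.Random(name)  # deterministic per object, for illustration
    hits = sum(rng.random() < 1e-4 for _ in range(n_clones))
    return hits / n_clones

results = {name: monte_carlo_impact_probability(name)
           for name, _ in shortlist[:5]}  # demo on a handful
print(f"filtered {len(catalog):,} objects down to {len(shortlist):,}")
```

The point of the design is purely economic: the network is cheap enough to run on everything, so the humongous Monte Carlo budget is reserved for the ~1% of objects that look worth it.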
The idea is that every time we get new measurements for an asteroid, we want to go and put the asteroids with the new measurements through the system. And if AI indicates that this is something dangerous, then we’ll go and spend the computational resources necessary to calculate it. Maybe do a very extensive Monte Carlo search or use some other conventional techniques.
You are also a member of the project that ran the first neural network in space, the HyperScout 2. What does it do, and what’s its significance?
It’s a hyperspectral imager that creates these huge photos. They cover an area that’s 300 by 160 kilometers in a single frame. HyperScout can produce an image cube with 50 bands, with central wavelengths from 400 to 1,000 nanometers. About half of what the HyperScout sees isn’t even visible to our eyes.
The part that I’m involved in is around the Myriad chip that’s onboard the satellite. This chip can run a neural network efficiently, fast, and using very little power. And when you have a small satellite, those things are essential because you have a limited power budget, and every little process you’d do on board takes away from that power budget. People weren’t running neural networks in space before because they took up too much energy.
Now we can run the hyperspectral images through an AI chip and automatically recognize clouds. And maybe you’ll wonder: what’s the point of inferring a cloud mask in space? The reason we want to do it is that we don’t like clouds. We hate clouds in our images. We put this satellite into space so that we can take pictures of the Earth, and an image that’s mostly cloud isn’t worth downlinking.
Do you see any other potential uses of the technology beyond cloud detection and maybe weather-related applications?
The idea is we have these HyperScouts circling the Earth, continuously taking and analyzing photos. After one of those images gets analyzed, a red flag can go up, and that red flag means that, for example, there’s a fire.

Instead of downlinking the whole image, which is expensive and time-consuming, the HyperScout sends down a zero, meaning there is no fire, or a one, meaning something down there is burning. Then we can inform the respective local government. Or, for example, notice that, based on visible changes, a fire is likely to break out within a week.
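As a toy illustration of that downlink saving, the sketch below reduces a synthetic 50-band image cube to a single fire/no-fire bit onboard. The band index and threshold rule are invented for illustration; HyperScout 2’s actual onboard network works very differently.

```python
# Hypothetical onboard logic: a rows x cols x 50-band hyperspectral
# cube is reduced to one bit before downlink, instead of sending the
# whole cube to the ground.
def fire_flag(cube):
    """cube: rows x cols pixels, each a list of 50 values in [0, 1]."""
    SWIR_BAND = 45  # assumed hot-emission band near 1,000 nm (made up)
    hot_pixels = sum(
        1
        for row in cube
        for pixel in row
        if pixel[SWIR_BAND] > 0.9
    )
    return 1 if hot_pixels >= 3 else 0  # downlink one bit, not gigabytes

# Tiny synthetic 2x2 scene, 50 bands per pixel.
cold = [0.1] * 50
hot = [0.1] * 44 + [0.95] * 6
quiet_scene   = [[cold, cold], [cold, cold]]
burning_scene = [[hot, hot], [hot, cold]]

print(fire_flag(quiet_scene))    # -> 0
print(fire_flag(burning_scene))  # -> 1
```

A real system would replace the hand-written threshold with the neural network running on the Myriad chip, but the payoff is the same: one bit down the link instead of a full image cube.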
Another thing that’s quite important with hyperspectral imaging is crops and agriculture. With hyperspectral imaging, you can see whether crops have certain diseases or are stressed, and then, depending on the disease or the stress level, we can send down warning signals like “disease detected here” or a crop stress level.
The idea is to analyze an image intelligently, at a low cost, and efficiently. There are boundless applications for this technology. The use of AI in space is in its very first steps. And I think it’s the same situation as with astronomy. I believe getting AI to analyze troves of data is the kind of direction that things are going now.