Intel announced today that it is behind PhiSat-1, a small experimental satellite about the size of a shoebox, which launched on September 2 and has become the first satellite in orbit carrying artificial intelligence. It travels at more than 27,500 km/h in a sun-synchronous orbit at an altitude of about 530 km.
The satellite, equipped with a hyperspectral thermal camera, is powered by Intel's Movidius Myriad 2 vision processing unit and is one of a pair of satellites on a mission to monitor polar ice and soil moisture, while also testing inter-satellite communication systems for a future network of federated satellites. The on-board AI lets the satellite discard images in which clouds obscure the view, saving bandwidth. Full details after the jump.
The first problem Myriad 2 is helping to solve is how to handle the massive amount of data generated by high-fidelity cameras like the one on PhiSat-1. “The ability of sensors to produce data increases by a factor of 100 with each generation; however, our ability to download that data increases only by a factor of three, four or five per generation,” says Gianluca Furano, data systems and onboard computing lead at the European Space Agency (ESA), who led the collaboration behind PhiSat-1.
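Furano's figures imply a rapidly widening gap between what the sensors produce and what the downlink can carry. Here is a quick back-of-the-envelope sketch; the 100x and roughly 4x growth factors come from the quote, while compounding them per generation (and the function name) are our own illustrative assumptions:

```python
# Rough arithmetic behind Furano's point: if sensor output grows ~100x per
# generation while downlink capacity grows only ~4x, the fraction of captured
# data that can actually reach the ground shrinks geometrically.

def downlink_fraction(generations, sensor_growth=100.0, downlink_growth=4.0):
    """Fraction of sensor data the downlink can carry after N generations."""
    return (downlink_growth / sensor_growth) ** generations

for generation in range(1, 4):
    fraction = downlink_fraction(generation)
    print(f"gen {generation}: only {fraction:.4%} of the data fits the downlink")
```

After just three generations under these assumptions, well under a hundredth of a percent of the raw data could be sent down, which is why filtering on board becomes attractive.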
At the same time, about two-thirds of our planet’s surface is covered in clouds at any given moment. As a result, a great many useless cloud images are typically captured and saved, sent to Earth over precious downlink bandwidth, saved again, and reviewed by a scientist (or an algorithm) hours or days later, only to be deleted in the end.
“And AI came to rescue us, like the cavalry in a Western movie,” says Furano. The idea the team came up with was to use on-board processing to identify and discard cloud images, saving around 30% of bandwidth.
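Conceptually, the on-board filter only needs to score each image for cloud cover and drop anything above a threshold before downlink. Here is a minimal toy sketch of that idea; the threshold, the pixel-brightness heuristic standing in for the actual neural network running on the Myriad 2, and all function names are illustrative assumptions, not the mission's real code:

```python
# Toy sketch of on-board cloud filtering: images whose estimated cloud
# cover exceeds a threshold are discarded instead of being downlinked.

CLOUD_THRESHOLD = 0.7  # assumed cutoff: drop images more than 70% cloudy

def estimate_cloud_fraction(image):
    """Stand-in for the on-board AI model: fraction of 'cloudy' pixels.
    Toy rule: treat any bright pixel (value > 200) as cloud."""
    cloudy = sum(1 for px in image if px > 200)
    return cloudy / len(image)

def select_for_downlink(images, threshold=CLOUD_THRESHOLD):
    """Keep only images clear enough to be worth the downlink bandwidth."""
    return [img for img in images if estimate_cloud_fraction(img) <= threshold]

# Two toy 8-pixel "images": one mostly clear, one mostly cloudy.
clear = [50, 60, 80, 90, 210, 40, 70, 55]
cloudy = [230, 240, 250, 220, 210, 90, 235, 245]

kept = select_for_downlink([clear, cloudy])
print(f"{len(kept)} of 2 images selected for downlink")
```

The real system makes the keep/discard decision with a neural network on the Myriad 2 VPU, but the bandwidth saving comes from exactly this kind of on-board triage: cloudy frames never leave the satellite.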
“Space is cutting edge,” explains Aubrey Dunne, Ubotica’s Chief Technology Officer. The Irish startup built and tested PhiSat-1’s AI in collaboration with Cosine, the camera’s manufacturer, as well as the University of Pisa and Sinergise, to develop the complete solution. “Myriad was designed completely from the ground up to deliver impressive computing power at very low power. That is fully adapted to the needs of space applications.”
However, Myriad 2 was not originally intended for orbit. Spacecraft computers typically use highly specialized “radiation-hardened” chips that can be “up to two decades behind leading-edge commercial technology,” Dunne explains, and at the time AI was not part of the equation.
For this reason, Dunne and the Ubotica team performed a “radiation characterization,” putting the Myriad chip through a series of tests to determine how to handle any errors or degradation that might arise.
ESA “had never before tested a chip of this complexity for radiation,” says Furano. “We doubted whether we would be able to perform the tests correctly… We had to write the manual on how to do a comprehensive test and characterization of this chip from scratch.”
The first test, 36 straight hours of radiation beams at CERN in late 2018, “was a high-pressure situation,” Dunne explains. However, that test and its two follow-ups “luckily went well.” In the end, Myriad 2 passed the tests in its standard form, with no modifications required. After that, this low-power, high-performance computer vision chip was ready to venture beyond Earth’s atmosphere.