In April 2022, Palantir launched its Edge AI solution into space onboard Satellogic’s NewSat-27 as part of the SpaceX Transporter-4 mission. We’re excited to provide an update on our on-orbit imagery processing efforts. Between April and July, we performed various hardware and software tests in-orbit, and over the past few months we have been receiving some exciting results from our direct tasking and on-orbit processing pipelines onboard NewSat-27.
Where We Stand

As of November 2022, we have successfully demonstrated the capability for customers to task the satellite with multiple captures, resulting in over 100 images from NewSat-27's multispectral camera.
We had our most recent live image capture and onboard processing test on October 30th over Tartus, Syria. Let’s run through how we handled these images starting from the raw capture in-orbit all the way to results on the ground, utilizing Edge AI in space:
Raw images captured by the satellite consist of a single channel comprising four different 'bands' of information, each representing a specific wavelength of light. Palantir Edge AI then orchestrated our onboard imagery preprocessing services to convert batches of raw images into standard, three-channel RGB images. By processing images into a standardized format that our models expect, we can improve accuracy and create more confident results for our users. As part of this specific capture, we received 44 images that we processed into six RGB images.
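The band-to-RGB conversion described above can be sketched as follows. This is a minimal illustration, not Palantir's actual pipeline: the band ordering (R, G, B, NIR), the bit depth, and the percentile contrast stretch are all assumptions made for the example.

```python
import numpy as np

def raw_to_rgb(raw: np.ndarray) -> np.ndarray:
    """Convert a raw 4-band frame of shape (H, W, 4) into an 8-bit RGB image.

    Assumes the first three bands are R, G, B and the fourth is NIR;
    the 2nd-98th percentile stretch is an illustrative choice.
    """
    rgb = raw[..., :3].astype(np.float64)  # drop the assumed NIR band
    # Contrast-stretch each channel independently to its 2nd-98th percentile range.
    lo = np.percentile(rgb, 2, axis=(0, 1))
    hi = np.percentile(rgb, 98, axis=(0, 1))
    rgb = np.clip((rgb - lo) / np.maximum(hi - lo, 1e-9), 0.0, 1.0)
    return (rgb * 255).astype(np.uint8)

# Example: a synthetic 12-bit, 4-band frame.
frame = np.random.randint(0, 4096, size=(64, 64, 4), dtype=np.uint16)
img = raw_to_rgb(frame)
```

Normalizing every frame into the same 8-bit RGB layout is what lets a single model run unmodified across captures taken under different lighting and sensor conditions.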
After pre-processing was completed, we then ran AI models onboard the satellite. For this particular capture, Edge AI ran our in-house Palantir Omni model to identify buildings in the images. We received 210 building detections, or ‘inferences’, from the model. For each inference, our post-processing services created PNG thumbnails and computed geodetic coordinates by using the satellite telemetry and the onboard global elevation datasets. The outputs were then bundled and secured using various onboard cryptographic mechanisms, so we could validate the data once it was received on the ground.
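The bundling and validation step above can be sketched in miniature. The source does not specify which cryptographic mechanisms run onboard; this example stands in HMAC-SHA256 for whatever scheme is actually used, and the detection fields and coordinates are purely illustrative.

```python
import hashlib
import hmac
import json

def bundle_detections(detections: list, signing_key: bytes) -> dict:
    """Bundle model inferences with an integrity tag for downlink.

    HMAC-SHA256 is a stand-in here; a real system might use
    asymmetric signatures backed by onboard hardware.
    """
    payload = json.dumps(detections, sort_keys=True).encode()
    tag = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "hmac": tag}

def verify_bundle(bundle: dict, signing_key: bytes) -> bool:
    """Ground-side check that the downlinked payload was not altered."""
    expected = hmac.new(signing_key, bundle["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["hmac"])

# Illustrative detection record (field names and values are made up).
detections = [{"class": "building", "lat": 34.889, "lon": 35.887, "conf": 0.93}]
key = b"demo-key"
bundle = bundle_detections(detections, key)
```

On the ground, `verify_bundle` with the shared key confirms the payload arrived exactly as it was produced onboard.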
In our initial on-orbit tests, we discovered an edge-case bug in our pre-processing algorithm. To remedy the issue, we uplinked a small software patch to the satellite that modified how we converted these individual images into RGB images. Once our patch was uplinked, we were able to update our software onboard to account for this new case within seven minutes. With the upgrade infrastructure in place, we can continuously refine and augment our in-orbit software and algorithms.
Notably, in this live capture instance, we were able to demonstrate the software's capacity to process all 44 frames within seven minutes. In our previous post, we discussed the strict time constraints on each individual processing run of Edge AI. Even accounting for the update, our end-to-end processing time was comfortably within the thresholds that we had initially targeted. For even larger captures, our software features a built-in checkpointing system for resuming processing in the event that a run has to be halted.
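A checkpointing system like the one described can be sketched as below. This is a simplified illustration, not Palantir's implementation: the JSON checkpoint file, the frame identifiers, and the per-frame persistence strategy are all assumptions for the example.

```python
import json
import os

CHECKPOINT = "capture_checkpoint.json"  # illustrative checkpoint location

def load_done() -> set:
    """Return the set of frame IDs already processed before a halt."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return set(json.load(f))
    return set()

def process_frames(frames: list) -> set:
    """Process each frame once, persisting progress after every frame."""
    done = load_done()
    for frame_id in frames:
        if frame_id in done:
            continue  # skip work completed before the interruption
        # ... run preprocessing and inference on this frame here ...
        done.add(frame_id)
        with open(CHECKPOINT, "w") as f:
            json.dump(sorted(done), f)  # persist progress after each frame
    return done

processed = process_frames([f"frame_{i:02d}" for i in range(44)])
```

If processing is halted mid-capture, a later run reads the checkpoint and resumes with only the remaining frames, which matters when each pass must fit inside a fixed time budget.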
What’s Next?

While the previous version of our Omni model was geared towards identifying buildings of interest and focused on the onboard integration with the satellite, our next generation of in-house models can identify more specialized object classes, such as ships. These models are already running on the ground as we test their performance. We ran this same capture through one of our newer models and were able to identify various ships near the port of Tartus in Syria with high confidence. We will be sending this new model up to the satellite in our next upgrade cycle. This will allow us to demonstrate Edge AI’s ability to continuously update and manage models while in flight, in order to optimize inference results based on areas of interest.
Ships off the coast of Tartus, Syria. Detections come from Palantir’s new in-house ML models on imagery collected as part of our Tartus capture.

We have also integrated our Edge AI outputs with Palantir MetaConstellation. MetaConstellation provides end-to-end software around satellite imaging, including an operational UI for image analysis. It allows users to annotate imagery with features and easily compare multiple images from different vendors and sensors over a given area of interest.
Our outputs from the AIP Satellite — either the combined image with detections, or just the PNG thumbnails — can be viewed directly within MetaConstellation. This means that in future deployments we would be able to directly downlink from an Edge AI-equipped satellite to a tactical instance of MetaConstellation in the field, allowing for detections and imagery to be sent to operational users within minutes.
Palantir MetaConstellation makes imagery analysis readily accessible to users. Here, we compare imagery from our Tartus capture on October 30, 2022 with images that we had previously collected on September 17, 2022.
Our Ongoing Commitment

We are continuing to invest in our on-orbit capabilities and are currently focused on hardware-backed security mechanisms, upgraded model capabilities, and our in-house georegistration algorithm, which should dramatically increase the accuracy of our model inferences. We are also planning to introduce new communication options to facilitate direct downlink for data, which will allow Palantir to get inferences into the hands of our customers faster than ever before.
Authors Rob Imig, Senior Architect, USG, Palantir
Mueed Ur Rehman, Senior Software Engineer for Edge, Palantir
This post contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These statements may relate to, but are not limited to, expectations regarding the expected benefits and uses of our software platforms. Forward-looking statements are inherently subject to risks and uncertainties, some of which cannot be predicted or quantified. Forward-looking statements are based on information available at the time those statements are made and were based on current expectations as well as the beliefs and assumptions of management as of that time with respect to future events. These statements are subject to risks and uncertainties, many of which involve factors or circumstances that are beyond Palantir’s control. These risks and uncertainties include Palantir’s ability to meet the unique needs of its customers; the failure of its platforms and solutions to satisfy its customers or perform as desired; the frequency or severity of any software and implementation errors; its platforms’ reliability; and the ability to modify or terminate the partnership. Additional information regarding these and other risks and uncertainties is included in the filings Palantir makes with the Securities and Exchange Commission from time to time. Except as required by law, Palantir does not undertake any obligation to publicly update or revise any forward-looking statement, whether as a result of new information, future developments, or otherwise.
Updates from Palantir Edge AI in Space was originally published in Palantir Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.
Adapted from https://blog.palantir.com/updates-from-palantir-edge-ai-in-space-1b4d8b8f97a0?source=rss—-3c87dc14372f—4