How we solve the satellite data bottleneck with on-orbit computer vision and specialized CNN architectures.
Traditional Earth Observation (EO) relies on a "store-and-forward" model: satellites capture massive volumes of raw data, store them onboard, and wait for a ground-station pass to downlink them. The result is a latency of hours or even days between capture and insight.
SentianOrbit Edge AI flips this paradigm. By processing data onboard the satellite using low-power NPUs (Neural Processing Units) and FPGAs, we convert gigabytes of raw pixels into kilobytes of actionable insights. Only the detections are sent down, enabling sub-minute response times.
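The gigabytes-to-kilobytes claim is easy to make concrete. The sketch below is a hypothetical illustration (not SentianOrbit's actual flight software): it compares the size of a raw SAR scene against a compact JSON payload of detections, using an assumed detection format of class, confidence, and bounding box.

```python
import json

# Hypothetical sizing: a 20,000 x 20,000-pixel SAR scene at 2 bytes/pixel
# is 800 MB of raw data, while the detections downlinked in its place are
# a few hundred bytes.
RAW_SCENE_BYTES = 20_000 * 20_000 * 2  # 16-bit pixels

# Assumed detection format (illustrative only): class, confidence,
# geographic bounding box [lon_min, lat_min, lon_max, lat_max].
detections = [
    {"cls": "vessel", "conf": 0.91, "bbox": [12.401, 55.602, 12.403, 55.604]},
    {"cls": "vessel", "conf": 0.77, "bbox": [12.388, 55.611, 12.390, 55.613]},
]

# Compact JSON: no whitespace between separators.
payload = json.dumps(detections, separators=(",", ":")).encode()

print(f"raw scene:  {RAW_SCENE_BYTES / 1e6:.0f} MB")
print(f"downlinked: {len(payload)} bytes")
print(f"reduction:  {RAW_SCENE_BYTES / len(payload):.0f}x")
```

Even this toy payload is roughly six orders of magnitude smaller than the scene it summarizes, which is what makes sub-minute tasking loops possible on a constrained downlink.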
Computer Vision is a field of Artificial Intelligence that trains computers to interpret and understand the visual world. In our case, we apply it to Synthetic Aperture Radar (SAR) data, which, unlike optical imagery, can "see" through darkness and cloud cover.
Our computer vision pipeline includes:
Convolutional Neural Networks (CNNs) are the backbone of our detection layers. Unlike hand-engineered feature extractors, CNNs learn spatial hierarchies of features directly from data. Our YOLO26n architecture is a state-of-the-art CNN optimized for both speed and accuracy.
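The building block behind those spatial hierarchies is the 2-D convolution. This toy NumPy sketch applies a hand-set vertical-edge kernel to a tiny image; in a trained CNN the kernel values are learned from data rather than designed by hand, and many such filters are stacked into layers.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # Slide the kernel over the image and take a weighted sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy input: a dark region next to a bright region (a vertical edge).
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# Hand-set vertical-edge kernel; a real CNN learns these weights in training.
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)

response = conv2d(image, kernel)
print(response)  # nonzero only where the edge falls inside the window
```

Stacking layers of such filters (with nonlinearities and pooling in between) is what lets a CNN progress from edges to textures to whole objects.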
How a CNN works in orbit:
- Optimized for Xilinx Zynq UltraScale+ FPGAs, using DPU cores for hardware-accelerated tensor math.
- Trained specifically to handle the "speckle" noise inherent in radar imagery, reducing false positives.
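SAR speckle is multiplicative rather than additive, which is why generic denoising tends to fail on radar imagery. The sketch below, a minimal illustration rather than our production pipeline, simulates multi-look speckle with a gamma distribution and despeckles it with the classic Lee filter, a local linear estimator that smooths flat regions while preserving edges.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean toy scene: a bright "target" on a dark background.
clean = np.full((32, 32), 10.0)
clean[12:20, 12:20] = 50.0

# SAR speckle is multiplicative: observed = clean * noise, where the noise
# for L-look imagery is gamma-distributed with mean 1 and variance 1/L.
looks = 4
speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=clean.shape)
noisy = clean * speckle

def lee_filter(img, size=5, noise_var=1.0 / looks):
    """Classic Lee despeckling filter: a local linear MMSE estimate."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + size, j:j + size]
            mean, var = win.mean(), win.var()
            # Trust the raw pixel where local variance exceeds the speckle
            # variance (an edge); fall back to the local mean on flat areas.
            w = max(0.0, 1.0 - noise_var * mean**2 / (var + 1e-9))
            out[i, j] = mean + w * (img[i, j] - mean)
    return out

smooth = lee_filter(noisy)
# Filtering should pull pixel values back toward the clean scene.
print(np.abs(noisy - clean).mean(), np.abs(smooth - clean).mean())
```

Suppressing speckle before (or, as in our case, teaching the detector to tolerate it during training) is what keeps bright noise grains from being reported as false detections.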