
Protecting marine life with AI

Patrick Beukema / December 12, 2023

Consider the vast expanse of our oceans: they cover 70% of the Earth's surface, spanning over 360 million square kilometers. Most of this area lies outside the jurisdiction of any nation, in what are known as international waters or the high seas. These areas are remote and lawless, yet they hold enormous economic value. This combination has created the perfect storm for unchecked overexploitation of marine resources, putting ecosystems at risk.

Addressing these threats requires vigilant monitoring of the most remote areas of the planet. Historically, effective monitoring of the high seas has been unattainable due to their sheer size and inaccessibility. However, recent breakthroughs in artificial intelligence, coupled with extensive satellite imagery made publicly available by NASA and the European Space Agency, present an opportunity to turn the tide. High-performance computer vision, operating on a global scale, now makes it feasible to monitor even the most isolated areas in real time.

An orbiting Sentinel-2 satellite from the European Space Agency continuously images the planet. A closeup of a remote section of the Pacific Ocean reveals a transiting tanker and its wake (Images copyright ESA).

Over the last year, Ai2's computer vision and conservation teams have collaborated on building highly specialized computer vision models designed to accurately detect vessels across a diverse constellation of satellites. Each day, these models run inference over more than 10 million square kilometers of imagery, surfacing tens of thousands of vessel detections with pinpoint accuracy. They can even detect vessels through clouds using radar, and spot the faint lights on board ships at night.

All of this intelligence is made freely available within Skylight, a maritime intelligence platform used by more than 300 organizations across 70 countries. Users can leverage this technology to track down and intercept illegal activity as it occurs. Today, alongside a new paper describing these results, which won the best paper award at the 2023 NeurIPS Computational Sustainability meeting, we are releasing the model architectures, data annotations, and code, all under a permissive open-source license (Apache 2.0).

The Skylight platform showing vessel detections across three different satellites in a 24-hour period. Black boxes are ships correlated with broadcast GPS positions (AIS), and red boxes are ships that are not broadcasting their locations.

AI as a Force for Good

Manual identification of objects in satellite imagery is challenging for several reasons. Objects of interest typically appear from an unusual perspective and in an unfamiliar context. On top of that, the volume of data is significant, especially when running inference on imagery from multiple satellites: each week we process more than 700 GB of data, that volume is growing, and results are needed as quickly as possible. In addition, experts with deep knowledge of satellite imagery are needed to interpret it correctly, especially for less common modalities such as synthetic aperture radar or infrared (i.e., crowdsourcing services like Mechanical Turk are not an option). These constraints make automated computer vision, trained via supervised learning on expertly annotated datasets, an attractive choice for satellite object detection.
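To make the supervised setup concrete, here is a minimal sketch of fine-tuning an off-the-shelf detector on expert-style bounding-box annotations, assuming PyTorch and torchvision. It is purely illustrative: the production models use the custom per-satellite architectures described in the paper, and the image, box, and hyperparameter values below are invented.

# Illustrative supervised fine-tune of a generic detector on vessel boxes.
# NOTE: not the Skylight architectures; names, shapes, and values are assumptions.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + "vessel"

# Start from a detector pretrained on natural images and swap in a new box head.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

# One dummy "expert-annotated" example: a 256x256 image with a single vessel box.
# Real training iterates over a dataset of such (image, target) pairs.
image = torch.rand(3, 256, 256)
target = {"boxes": torch.tensor([[100.0, 110.0, 130.0, 150.0]]),
          "labels": torch.tensor([1])}

loss_dict = model([image], [target])  # classification + box-regression losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()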

Data-flow depiction of a real-time streaming computer vision service for vessel detection in satellite imagery. An orbiting satellite images a vessel. The image is downlinked to a ground station. That data is copied to Skylight-owned servers and processed by a computer vision model. The resulting vessel detections are reported to our users through a GUI and are available via an API.

Orbiting satellites image the earth beneath them as they pass overhead, so no single satellite provides coverage of the entire planet at every moment in time. We rely on multiple satellites, and each satellite has unique strengths and weaknesses. For example, VIIRS-equipped satellites capture the intensity of light (in watts), which is especially useful for detecting vessels at night (see panels A and D below), but at the cost of low spatial resolution (750 meters). The Sentinel-1 satellites capture synthetic aperture radar, which is not affected by clouds, a common cause of signal loss for other satellites. Sentinel-2 satellites capture optical imagery, similar to the human eye, at 10-meter resolution.

Example satellite imagery (top row) and sample detections (bottom row) from each satellite. VIIRS (A, D) near the Ecuadorian coast, an S1 image (B, E) from the North Sea and an S2 image (C, F) from the Maldives. Scale bars are approximate. Confidence scores ≥ 0.95.

Because the imagery generated by each satellite is so different, we have found that custom model architectures tuned to each satellite are needed to achieve high performance.

More details can be found in the paper.
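As a rough illustration of what per-satellite specialization implies for serving, the sketch below routes each image source to its own specialized detector. The class names, placeholder detectors, and dispatch logic are assumptions for illustration only, not the actual Skylight implementation; the modalities and resolutions are taken from the descriptions above.

# Illustrative per-satellite model registry (not the actual Skylight code).
# Each source gets its own detector, reflecting the very different modalities:
# VIIRS light intensity (750 m), Sentinel-1 SAR, Sentinel-2 optical (10 m).
from dataclasses import dataclass
from typing import Any, Callable, List, Optional

@dataclass
class SatelliteConfig:
    modality: str
    resolution_m: Optional[float]       # None where not stated in this post
    detector: Callable[[Any], List]     # hypothetical model entry point

def viirs_detector(scene):      # placeholder; a real model would run inference here
    return []

def sentinel1_detector(scene):  # placeholder
    return []

def sentinel2_detector(scene):  # placeholder
    return []

REGISTRY = {
    "VIIRS": SatelliteConfig("light intensity", 750.0, viirs_detector),
    "Sentinel-1": SatelliteConfig("synthetic aperture radar", None, sentinel1_detector),
    "Sentinel-2": SatelliteConfig("optical", 10.0, sentinel2_detector),
}

def detect_vessels(source: str, scene) -> List:
    cfg = REGISTRY[source]      # pick the model specialized for this satellite
    return cfg.detector(scene)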

Constant Iteration and Continual Improvement

High accuracy and low latency are critical to our users. False positives are especially problematic because our users cannot afford to waste fuel chasing a vessel that does not exist. We measured offline model performance at over 80% F1 for Sentinel-1 and Sentinel-2, and over 90% F1 for VIIRS. To our knowledge, these are the best-performing models in their class, and the just-released Sentinel-2 model is the only one of its kind running in production. More detailed performance evaluations are available in the paper.
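For readers unfamiliar with the metric, F1 is the harmonic mean of precision and recall, so it penalizes both false alarms and missed vessels. The short sketch below shows the computation; the counts are made-up numbers for illustration.

# F1 from detection counts (illustrative numbers only).
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)  # fraction of reported detections that are real vessels
    recall = tp / (tp + fn)     # fraction of real vessels that were detected
    return 2 * precision * recall / (precision + recall)

# Example: 90 correct detections, 10 false alarms, 10 missed vessels -> F1 = 0.90
print(f1_score(tp=90, fp=10, fn=10))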

It is important to emphasize that while the data sources are largely stable, ocean conditions are not, and models can exhibit performance degradations even when the imagery sources themselves do not drift. For example, marine infrastructure (wind turbines, oil platforms, etc.) is constantly under construction. To improve precision, we geofence detected marine infrastructure using data that is regularly updated by Ai2's Satlas team. In addition, users can grade the quality of each detection with a simple thumbs up (or down) feedback button, which can be used as a supervisory signal. We continually retrain and upgrade the models on a monthly cadence.
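As a hedged illustration of the geofencing step, the sketch below drops detections that fall inside known infrastructure footprints, assuming the shapely library. The polygon, coordinates, and buffer distance are invented, and the actual filtering against Satlas-maintained infrastructure data may work differently.

# Illustrative geofence filter: suppress detections inside known infrastructure.
# The wind farm footprint below is hypothetical.
from shapely.geometry import Point, Polygon

wind_farm = Polygon([(3.0, 54.0), (3.2, 54.0), (3.2, 54.2), (3.0, 54.2)]).buffer(0.01)
infrastructure = [wind_farm]

detections = [
    {"id": "a", "lon": 3.1, "lat": 54.1},   # falls inside the wind farm -> suppressed
    {"id": "b", "lon": 5.0, "lat": 55.0},   # open water -> kept
]

def is_infrastructure(det) -> bool:
    point = Point(det["lon"], det["lat"])
    return any(poly.contains(point) for poly in infrastructure)

vessels = [d for d in detections if not is_infrastructure(d)]
print([d["id"] for d in vessels])  # ['b']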

Summary

The oceans are vast and rich with life, but there is a limit to the resources that they can provide. If we continue to overexploit marine resources, we are at risk of depleting ocean life. In the context of environmental conservation, AI represents a paradigm shift that can be used as a powerful force for good. We are entering a new era in environmental protection, one where it's possible to monitor and safeguard our planet's oceans at a scale previously considered unattainable.

What lies ahead

We are excited about the impact these models have already had on successful interdictions on the high seas, but we believe this is only the beginning of what can be accomplished with AI. Next year we will release new deep sequence-to-sequence networks that can understand the behavior of ships at sea purely from sequences of GPS positions, without the need for satellite imagery, using technology similar to that which proved pivotal for LLMs.

Learn more

Use these models

You can download and install each of the computer vision services in just a few simple steps. For example, the following commands will download and run the containerized computer vision service for executing inference against VIIRS satellite imagery:

docker pull ghcr.io/allenai/vessel-detection-viirs:sha-1f774f7
docker run -d -p 5555:5555 ghcr.io/allenai/vessel-detection-viirs:sha-1f774f7

View a more detailed set of instructions for installing and querying the service on GitHub.
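Once the container is running, inference is typically exposed over HTTP on the published port. The snippet below is only a hypothetical sketch of such a request; the actual endpoint path, request fields, and response format are documented in the repository's README.

# Hypothetical request to the VIIRS service started above. The endpoint path and
# payload fields are placeholders; see the GitHub README for the real schema.
import requests

payload = {
    "start_time": "2023-12-01T00:00:00Z",  # placeholder field names and values
    "end_time": "2023-12-01T12:00:00Z",
}
resp = requests.post("http://localhost:5555/detections", json=payload, timeout=300)
resp.raise_for_status()
print(resp.json())  # expected to contain vessel detections (positions, confidence scores)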
