@globalfishingwatch/i18n-labels 1.3.0 → 1.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -197,7 +197,7 @@
  },
  "private-global-planet-presence": {
  "name": "Planet Imagery detections (Optical)",
- "description": "<h2>Overview</h2> <p> This layer shows vessels detected using optical satellite imagery collected by the European Space Agency's Sentinel-2 satellites. Optical imagery is similar to high-quality aerial photography from space, using reflected sunlight in visible and near-infrared wavelengths. This type of imagery provides high-resolution detail that allows us to spot small vessels, identify wake patterns, and better understand activity near shore. </p> <p> Global Fishing Watch uses a machine learning model that processes each image to identify vessels and estimate their length, orientation, and speed based on wake features. The detections are then filtered using a secondary classifier to remove objects that are not vessels, such as clouds, rocks or icebergs. Each detection is linked to a cropped image (a thumbnail) so users can visually inspect what the model identified. </p> <p> Because optical satellites rely on sunlight and clear skies, detections are only possible during the day and when the area is not obscured by clouds or haze. Despite these limitations, detections with optical imagery are especially helpful in identifying small untracked vessels that may not appear in other tracking systems. </p> <h2>Use cases</h2> <ul> <li> Monitor vessel presence (both fishing and non-fishing) in areas of interest such as marine protected areas (MPAs), exclusive economic zones (EEZs), inshore exclusion zones (IEZs) and Regional Fisheries Management Organisations (RFMOs). In some cases, activity like bottom trawling can be seen through disturbance to seabed sediment. </li> <li> Assess presence of vessels that don't show up on cooperative tracking systems—including automatic identification system (AIS) and vessel monitoring system (VMS)—near vulnerable marine ecosystems and essential fish habitats. 
</li> <li> Goes beyond vessel detection in other satellite remote sensors like Sentinel-1 SAR and VIIRS which simply detect the presence of an object, with Sentinel-2 users can often infer the object's activity based on the wake of a detection, and in some cases, the dataset can be used to identify fishing activity e.g. sediment plumes of trawlers, net encircling fish in purse seine vessels. </li> <li> Support analyses on small-scale fishing. While the 10m resolution is still too coarse to comprehensively map small-scale fishing, Sentinel-2 detections have been integrated into multiple analyses related to regional small-scale fishery and demonstrated the potential as a valuable addition to the limited vessel tracking data. </li> </ul> <h2>Limitations</h2> <ul> <li> Vessel detection with optical imagery requires daylight and clear skies <ul> <li> Unlike radar, optical satellites cannot see through clouds, fog, or haze. Detections are only possible during daylight hours when the view is unobstructed. </li> </ul> </li> <li> Not all geographies are covered equally <ul> <li> Sentinel-2 coverage is mostly limited to coastal waters. It revisits most areas every five days, but the image availability depends on the weather. Cloudy or hazy regions have lower effective revisit frequencies than regions with better weather conditions. </li> </ul> </li> <li> The detections may include false positives <ul> <li> Despite post-processing, the model may still produce occasional false detections—e.g., picking up buoys, debris, fixed infrastructure, or image artifacts. These false positives are reduced using a secondary classifier, but not completely eliminated. </li> </ul> </li> <li> Uncertainty in some vessel features <ul> <li> Smaller or slower-moving vessels may not produce visible wakes, making it more difficult to estimate their speed or heading. Therefore, these values may be inaccurate for small boats. 
</li> </ul> </li> <li> Not all detections unmatched to AIS are untracked vessels <ul> <li> The detections include both vessels on AIS and untracked vessels. We try to match detections to AIS tracks, but sometimes matching is not feasible due to large time gaps between AIS positions and in areas with high density of detections. </li> </ul> </li> </ul> <h2>Methods</h2> <h3>Optical imagery</h3> <p> This layer is based on images from the Sentinel-2 satellites operated by the European Space Agency (ESA). These satellites capture medium-resolution images (10 m per pixel) of the ocean using visible and near-infrared light (among several other bands). Combined, the satellites acquire images of most coastal waters and dedicated areas in the open ocean roughly every five days, and the imagery is made freely available by the ESA. </p> <h3>Image processing and selection</h3> <p> We use pre-processed Sentinel-2 images that have been corrected for geometric distortions and aligned to the Earth's surface. These images are split into manageable tiles, and we selected the tiles that cover only ocean areas (image tiles over land are excluded). We use four image bands: red, green, blue (RGB), and near-infrared (NIR), all at 10-meter resolution. These bands give us the detail and contrast needed to detect and classify vessels. </p> <h3>Vessel detection</h3> <p> Our machine learning model scans each image tile to detect vessels. It is trained to look for features such as the shape, brightness, and wake of a vessel. When it finds a likely candidate, the model predicts a score for vessel presence alongside estimates of the vessel's location, size, orientation, and speed. </p> <p> The detection model was trained on over 11,000 manually reviewed vessel examples across thousands of Sentinel-2 scenes. This training process included many small vessels and scenes from around the world, helping the model to perform well across different environments and vessel types. 
</p> <h3>Image thumbnails</h3> <p> Each detection includes a small visual \"chip\" showing the detected vessel at the center. These thumbnails come in two formats: a color version from the RGB bands, and a grayscale version from the near-infrared band. Each chip covers an area of 1 km². These thumbnails are helpful for visually confirming a detection or understanding its context. For very small vessels (under 15 meters), it may still be difficult to see them clearly. </p> <h3>Reducing false positives</h3> <p> Not everything that looks like a vessel in satellite imagery actually is one. To help remove false detections (like buoys, offshore platforms, sea ice, or clouds), we run each detection through a secondary classifier. This classifier is a machine learning model that uses both the image thumbnail and additional information about the detection (such as distance from shore, local depth, and vessel density nearby, among others) to decide whether the object is likely to be a vessel. We also flag detections that are close to known fixed infrastructure or in areas with substantial sea ice or iceberg presence. </p> <p> If a detection is classified as likely non-vessel or flagged as potential infrastructure or ice, we remove it from the map layer so only high-confidence detections are included. We also clip the satellite footprints (displayed on the map layer) to exclude the areas under the icy-region mask. However, we provide all the false positives with labels through the data download portal for stakeholders who require a more complete dataset. </p> <h3>AIS matching and vessel identity</h3> <p> AIS data can reveal the identity of vessels, their owners and corporations, and fishing activity. Not all vessels, however, are required to use AIS devices, as regulations vary by country, vessel size, and activity. Vessels engaged in illicit activities can also turn off their AIS transponders or manipulate the locations they broadcast. 
Also, large \"blind spots\" along coastal waters arise from nations that restrict access to AIS data that are captured by terrestrial receptors instead of satellites or from poor reception due to high vessel density and low-quality AIS devices. Unmatched imagery detections therefore provide the missing information about vessel traffic in the ocean. </p> <p> Matching imagery detections to vessels' GPS coordinates from AIS is challenging because the timestamps of the images and AIS records do not coincide, and a single AIS identity can potentially match to multiple vessels appearing in the image, and vice versa. To determine the likelihood that a vessel broadcasting AIS corresponded to a specific detection, we developed a matching approach based on probability rasters of where a vessel is likely to be minutes before and after an AIS position was recorded. These rasters were produced from one year of global AIS data from the Global Fishing Watch pipeline, which sources satellite data from Spire Global and Orbcomm. The probability rasters are based on roughly 10 billion vessel positions and are computed for six different vessel classes, considering six different speeds and 36 time intervals. So we obtain the likely position of a vessel that could match a detection based on the vessel class, speed and time interval. In addition to the spatiotemporal matching, we factor in the similarity between the model-inferred vessel length and the length from AIS identity data to avoid (likely incorrect) matches with large discrepancies in size, e.g., AIS of a tugboat and the detection of a large vessel behind it. </p> <h3>Detection footprints</h3> <p> To help users understand where detections were possible, we show the detection \"footprints\" on the map. These polygons are the portions of the satellite images that cover the ocean and that were used for detection. Thus, if you see a footprint but no detections, it means no vessels were detected in that area. 
If there is no footprint, no image was processed for that location and time. </p> <h3>Automation and updates</h3> <p> Our detection and matching system runs automatically each day. It checks for new Sentinel-2 images published to Google Cloud and processes those that meet our quality criteria. New detections are typically available within 1–2 days of the satellite capturing the image. The automated pipeline also re-checks any images published late to ensure any data gaps are filled. </p> <h2>Source data and citations</h2> <p> All vessel data are freely available through the Global Fishing Watch data portal at <a target=\"_blank\" rel=\"noopener noreferrer nofollow\" href=\"https://globalfishingwatch.org/data-download/\" >https://globalfishingwatch.org/data-download/</a >. </p> <h2>License</h2> <p> Non-Commercial Use Only. The Site and the Services are provided for Non-Commercial use only in accordance with the CC BY-NC 4.0 license. If you would like to use the Site and/or the Services for commercial purposes, please contact us. </p>",
+ "description": "<h2>Overview</h2> <p> This layer shows vessels detected using optical satellite imagery collected by Planet satellites operated by Planet Labs. Optical imagery is similar to high-quality aerial photography from space, using reflected sunlight in visible and near-infrared wavelengths. This type of imagery provides high-resolution detail that allows us to spot small vessels, identify wake patterns, and better understand activity near shore. </p> <p> Global Fishing Watch uses a machine learning model that processes each image to identify vessels and estimate their length, orientation, and speed based on wake features. The detections are then filtered using a secondary classifier to remove objects that are not vessels, such as clouds, rocks or icebergs. Each detection is linked to a cropped image (a thumbnail) so users can visually inspect what the model identified. </p> <p> Because optical satellites rely on sunlight and clear skies, detections are only possible during the day and when the area is not obscured by clouds or haze. Despite these limitations, detections with optical imagery are especially helpful in identifying small untracked vessels that may not appear in other tracking systems. </p> <h2>Use cases</h2> <ul> <li> Monitor vessel presence (both fishing and non-fishing) in areas of interest such as marine protected areas (MPAs), exclusive economic zones (EEZs), inshore exclusion zones (IEZs) and Regional Fisheries Management Organisations (RFMOs). In some cases, activity like bottom trawling can be seen through disturbance to seabed sediment. </li> <li> Assess presence of vessels that don't show up on cooperative tracking systems—including automatic identification system (AIS) and vessel monitoring system (VMS)—near vulnerable marine ecosystems and essential fish habitats. 
</li> <li> Goes beyond vessel detection in other satellite remote sensors like Sentinel-1 SAR and VIIRS which simply detect the presence of an object, with Planet users can often infer the object's activity based on the wake of a detection, and in some cases, the dataset can be used to identify fishing activity e.g. sediment plumes of trawlers, net encircling fish in purse seine vessels. </li> <li> Support analyses on small-scale fishing. While the 10m resolution is still too coarse to comprehensively map small-scale fishing, Planet detections have been integrated into multiple analyses related to regional small-scale fishery and demonstrated the potential as a valuable addition to the limited vessel tracking data. </li> </ul> <h2>Limitations</h2> <ul> <li> Vessel detection with optical imagery requires daylight and clear skies <ul> <li> Unlike radar, optical satellites cannot see through clouds, fog, or haze. Detections are only possible during daylight hours when the view is unobstructed. </li> </ul> </li> <li> Not all geographies are covered equally <ul> <li> Planet coverage is mostly limited to coastal waters. It revisits most areas every five days, but the image availability depends on the weather. Cloudy or hazy regions have lower effective revisit frequencies than regions with better weather conditions. </li> </ul> </li> <li> The detections may include false positives <ul> <li> Despite post-processing, the model may still produce occasional false detections—e.g., picking up buoys, debris, fixed infrastructure, or image artifacts. These false positives are reduced using a secondary classifier, but not completely eliminated. </li> </ul> </li> <li> Uncertainty in some vessel features <ul> <li> Smaller or slower-moving vessels may not produce visible wakes, making it more difficult to estimate their speed or heading. Therefore, these values may be inaccurate for small boats. 
</li> </ul> </li> <li> Not all detections unmatched to AIS are untracked vessels <ul> <li> The detections include both vessels on AIS and untracked vessels. We try to match detections to AIS tracks, but sometimes matching is not feasible due to large time gaps between AIS positions and in areas with high density of detections. </li> </ul> </li> </ul> <h2>Methods</h2> <h3>Optical imagery</h3> <p> This layer is based on images from the Planet satellites operated by Planet Labs. These satellites capture medium-resolution images (10 m per pixel) of the ocean using visible and near-infrared light (among several other bands). Combined, the satellites acquire images of most coastal waters and dedicated areas in the open ocean roughly every five days, and the imagery is made available by Planet Labs. </p> <h3>Image processing and selection</h3> <p> We use pre-processed Planet images that have been corrected for geometric distortions and aligned to the Earth's surface. These images are split into manageable tiles, and we selected the tiles that cover only ocean areas (image tiles over land are excluded). We use four image bands: red, green, blue (RGB), and near-infrared (NIR), all at 10-meter resolution. These bands give us the detail and contrast needed to detect and classify vessels. </p> <h3>Vessel detection</h3> <p> Our machine learning model scans each image tile to detect vessels. It is trained to look for features such as the shape, brightness, and wake of a vessel. When it finds a likely candidate, the model predicts a score for vessel presence alongside estimates of the vessel's location, size, orientation, and speed. </p> <p> The detection model was trained on over 11,000 manually reviewed vessel examples across thousands of Planet scenes. This training process included many small vessels and scenes from around the world, helping the model to perform well across different environments and vessel types. 
</p> <h3>Image thumbnails</h3> <p> Each detection includes a small visual \"chip\" showing the detected vessel at the center. These thumbnails come in two formats: a color version from the RGB bands, and a grayscale version from the near-infrared band. Each chip covers an area of 1 km². These thumbnails are helpful for visually confirming a detection or understanding its context. For very small vessels (under 15 meters), it may still be difficult to see them clearly. </p> <h3>Reducing false positives</h3> <p> Not everything that looks like a vessel in satellite imagery actually is one. To help remove false detections (like buoys, offshore platforms, sea ice, or clouds), we run each detection through a secondary classifier. This classifier is a machine learning model that uses both the image thumbnail and additional information about the detection (such as distance from shore, local depth, and vessel density nearby, among others) to decide whether the object is likely to be a vessel. We also flag detections that are close to known fixed infrastructure or in areas with substantial sea ice or iceberg presence. </p> <p> If a detection is classified as likely non-vessel or flagged as potential infrastructure or ice, we remove it from the map layer so only high-confidence detections are included. We also clip the satellite footprints (displayed on the map layer) to exclude the areas under the icy-region mask. However, we provide all the false positives with labels through the data download portal for stakeholders who require a more complete dataset. </p> <h3>AIS matching and vessel identity</h3> <p> AIS data can reveal the identity of vessels, their owners and corporations, and fishing activity. Not all vessels, however, are required to use AIS devices, as regulations vary by country, vessel size, and activity. Vessels engaged in illicit activities can also turn off their AIS transponders or manipulate the locations they broadcast. 
Also, large \"blind spots\" along coastal waters arise from nations that restrict access to AIS data that are captured by terrestrial receptors instead of satellites or from poor reception due to high vessel density and low-quality AIS devices. Unmatched imagery detections therefore provide the missing information about vessel traffic in the ocean. </p> <p> Matching imagery detections to vessels' GPS coordinates from AIS is challenging because the timestamps of the images and AIS records do not coincide, and a single AIS identity can potentially match to multiple vessels appearing in the image, and vice versa. To determine the likelihood that a vessel broadcasting AIS corresponded to a specific detection, we developed a matching approach based on probability rasters of where a vessel is likely to be minutes before and after an AIS position was recorded. These rasters were produced from one year of global AIS data from the Global Fishing Watch pipeline, which sources satellite data from Spire Global and Orbcomm. The probability rasters are based on roughly 10 billion vessel positions and are computed for six different vessel classes, considering six different speeds and 36 time intervals. So we obtain the likely position of a vessel that could match a detection based on the vessel class, speed and time interval. In addition to the spatiotemporal matching, we factor in the similarity between the model-inferred vessel length and the length from AIS identity data to avoid (likely incorrect) matches with large discrepancies in size, e.g., AIS of a tugboat and the detection of a large vessel behind it. </p> <h3>Detection footprints</h3> <p> To help users understand where detections were possible, we show the detection \"footprints\" on the map. These polygons are the portions of the satellite images that cover the ocean and that were used for detection. Thus, if you see a footprint but no detections, it means no vessels were detected in that area. 
If there is no footprint, no image was processed for that location and time. </p> <h3>Automation and updates</h3> <p> Our detection and matching system runs automatically each day. It checks for new Planet images published to Google Cloud and processes those that meet our quality criteria. New detections are typically available within 1–2 days of the satellite capturing the image. The automated pipeline also re-checks any images published late to ensure any data gaps are filled. </p> <h2>Source data and citations</h2> <p> All vessel data are freely available through the Global Fishing Watch data portal at <a target=\"_blank\" rel=\"noopener noreferrer nofollow\" href=\"https://globalfishingwatch.org/data-download/\" >https://globalfishingwatch.org/data-download/</a >. </p> <h2>License</h2> <p> Non-Commercial Use Only. The Site and the Services are provided for Non-Commercial use only in accordance with the CC BY-NC 4.0 license. If you would like to use the Site and/or the Services for commercial purposes, please contact us. </p>",
  "schema": {
  "length": {
  "keyword": "length"
@@ -666,6 +666,57 @@
  "Tainha": "Tainha",
  "Vermelhos (especificar)": "Vermelhos (especificar)"
  }
+ },
+ "fleet_code": {
+ "keyword": "fleet_code",
+ "enum": {
+ "1.1": "1.1",
+ "1.10": "1.10",
+ "1.12": "1.12",
+ "1.13": "1.13",
+ "1.14": "1.14",
+ "1.17": "1.17",
+ "1.18": "1.18",
+ "1.2": "1.2",
+ "1.3": "1.3",
+ "1.4": "1.4",
+ "1.5": "1.5",
+ "1.6": "1.6",
+ "1.7": "1.7",
+ "1.8": "1.8",
+ "1.9": "1.9",
+ "2.10": "2.10",
+ "2.11": "2.11",
+ "2.13": "2.13",
+ "2.2": "2.2",
+ "2.3": "2.3",
+ "2.4": "2.4",
+ "2.5": "2.5",
+ "3.1": "3.1",
+ "3.10": "3.10",
+ "3.11": "3.11",
+ "3.12": "3.12",
+ "3.13": "3.13",
+ "3.2": "3.2",
+ "3.3": "3.3",
+ "3.5": "3.5",
+ "3.6": "3.6",
+ "3.9": "3.9",
+ "4.1": "4.1",
+ "4.2": "4.2",
+ "4.3": "4.3",
+ "4.4": "4.4",
+ "4.6": "4.6",
+ "5.1": "5.1",
+ "5.10": "5.10",
+ "5.11": "5.11",
+ "5.2": "5.2",
+ "5.3": "5.3",
+ "5.4": "5.4",
+ "5.6": "5.6",
+ "5.9": "5.9",
+ "Sem código IN": "Sem código IN"
+ }
  }
  }
  },
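The added `fleet_code` block follows the same `keyword`/`enum` shape as the package's other schema entries: each enum key is a raw data value and each enum value is its display label (here they are identical, so the enum mainly declares the known codes). A consumer might resolve values through such an entry roughly as sketched below; the types and the `resolveLabel` helper are illustrative assumptions, not part of the `@globalfishingwatch/i18n-labels` API, and the enum is truncated to three entries.

```typescript
// Hedged sketch of consuming a schema enum entry like the "fleet_code"
// block added in 1.3.1. Shapes and helper are assumptions for illustration.
type SchemaEnum = { keyword: string; enum: Record<string, string> };

const fleetCode: SchemaEnum = {
  keyword: "fleet_code",
  // Excerpt of the published enum; the real entry lists ~45 codes.
  enum: { "1.1": "1.1", "5.9": "5.9", "Sem código IN": "Sem código IN" },
};

// Look up the display label for a raw value, falling back to the raw
// value itself when the code has no entry in the enum.
function resolveLabel(schema: SchemaEnum, value: string): string {
  return schema.enum[value] ?? value;
}
```

Because labels equal their keys in this entry, the lookup is effectively an identity with a pass-through fallback for unlisted codes.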
@@ -2224,26 +2275,6 @@
  }
  }
  },
- "proto-global-skylight-viirs": {
- "name": "Skylight Viirs",
- "description": "Skylight viirs",
- "schema": {
- "matched": {
- "keyword": "matched",
- "enum": {
- "true": "true",
- "false": "false"
- }
- },
- "radiance": {
- "keyword": "radiance",
- "enum": {
- "0": "0",
- "1000": "1000"
- }
- }
- }
- },
  "public-areas-to-be-avoided-1618836788619": {
  "name": "Areas to be Avoided by Cargo Shipping",
  "description": "25 nm buffer around islands recommending shipping diversion"
@@ -2511,6 +2542,10 @@
  "name": "Areas boundaries for eez",
  "description": "EEZs boundaries are shown as solid lines for '200 NM', 'Treaty', 'Median line', 'Joint regime', 'Connection Line', 'Unilateral claim (undisputed)' and dashed lines for 'Joint regime', 'Unsettled', 'Unsettled median line' based on the 'LINE_TYPE' field. Flanders Marine Institute (2019). Maritime Boundaries Geodatabase: Maritime Boundaries and Exclusive Economic Zones (200NM), version 11. Source: marineregions.org"
  },
+ "public-eez-land": {
+ "name": "EEZ (marineregions.org)",
+ "description": "Flanders Marine Institute (2019). Maritime Boundaries Geodatabase: Maritime Boundaries and Exclusive Economic Zones (200NM), version 11. Source: marineregions.org"
+ },
  "public-fao": {
  "name": "FAO",
  "description": "FAO Major Fishing Areas for Statistical Purposes are arbitrary areas, the boundaries of which were determined in consultation with international fishery agencies on various considerations, including (i) the boundary of natural regions and the natural divisions of oceans and seas; (ii) the boundaries of adjacent statistical fisheries bodies already established in inter-governmental conventions and treaties; (iii) existing national practices; (iv) national boundaries; (v) the longitude and latitude grid system; (vi) the distribution of the aquatic fauna; and (vii) the distribution of the resources and the environmental conditions within an area."
@@ -2519,6 +2554,20 @@
  "name": "FAO major fishing areas",
  "description": "FAO major fishing areas for statistical purposes are arbitrary areas, the boundaries of which were determined in consultation with international fishery agencies. The major fishing areas, inland and marine, are listed below by two-digit codes and their names. To access maps and description of boundaries of each fishing area click on the relevant item in the list below or in the map showing the 19 major marine fishing areas. <a href='https://www.fao.org/fishery/en/area/search' target='_blank'>Source</a>. See more detailed <a href='https://globalfishingwatch.org/faqs/reference-layer-sources/' target='_blank' rel='noopener'>metadata information</a> for this layer"
  },
+ "public-fixed-infrastructure": {
+ "name": "Fixed infrastructure",
+ "description": "SAR identified fixed infrastructure",
+ "schema": {
+ "label": {
+ "keyword": "label",
+ "enum": {
+ "oil": "oil",
+ "wind": "wind",
+ "unknown": "unknown"
+ }
+ }
+ }
+ },
  "public-fixed-infrastructure-filtered": {
  "name": "Offshore Fixed Infrastructure (SAR, Optical)",
  "description": "<h2>Overview</h2> <p>Offshore fixed infrastructure is a global dataset that uses AI and machine learning to detect and classify structures throughout the world’s oceans.</p> <p>Classification labels (oil, wind, and unknown) are provided, as well as confidence levels (high, medium, or low) reflecting our certainty in the assigned label. Detections can be filtered and colored on the map using both label and confidence level.<em></em>The data is updated on a monthly basis, and new classified detections are added at the beginning of every month. Viewing change using the timebar is simple, and allows anyone to recognize the rapid industrialization of the world’s oceans. For example, you can easily observe the expansion of wind farms in the North and East China Seas, or changes in oil infrastructure in the Gulf of Mexico or Persian Gulf.</p> <p>By overlaying the existing map layers, you can explore how vessels interact with oil and wind structures, visualise the density of synthetic aperture radar (SAR) and Visible Infrared Imaging Radiometer Suite (VIIRS) vessel detections around infrastructure, or determine which marine protected areas (MPAs) contain wind, oil, or other infrastructure types. These are only examples of the types of questions we can now ask. 
Offshore fixed infrastructure is a first of its kind dataset that not only brings to light the extensive industrialization of our oceans, but enables users across industries to use this information in research, monitoring and management.</p> <h2>Use cases</h2> <ul> <li>Maritime domain awareness</li> <ul> <li>Infrastructure locations can support maritime domain awareness, and understanding of other activities occurring at sea.</li> <li>Infrastructure data supports assessments of ocean industrialization, facilitating monitoring of areas experiencing build-up or new development</li> </ul> <li>Monitoring vessels</li> <ul> <li>Infrastructure locations can be used to analyse the behaviour of vessels associated with infrastructure, including grouping vessels based on their interaction with oil and wind structures.</li> <li>Interactions between vessels and infrastructure can help quantify the resources required to support offshore industrial activity</li> <li>The impacts of infrastructure on fishing, including attracting or deterring fishing, can be analysed.</li> </ul> <li>Marine protected areas (MPAs) and marine spatial planning</li> <ul> <li>During the planning stage in the designation of new protected areas, knowing the location of existing infrastructure will be vital to understand which stakeholders shall be included in the consultation process, to understand potential conflicts, and identify easy wins.</li> </ul> <li>Environmental impacts</li> <ul> <li>Infrastructure locations can be used to help detect marine pollution events, and to differentiate between types of pollution events (e.g. 
pollution from vessels versus pollution from platforms)</li> </ul> </ul> <h2>Caveats</h2> <ul> <li><strong>Sentinel-1 and Sentinel-2 satellites do not sample most of the open ocean.</strong></li> <ul> <li>Most industrial activity happens relatively close to shore.</li> <li>The extent and frequency of SAR acquisitions is determined by the mission priorities.</li> <li>For more info see: https://www.nature.com/articles/s41586-023-06825-8/figures/5</li> </ul> <li><strong>We do not provide detections of infrastructure within 1 km of shore</strong></li> <ul> <li>We do not classify objects within 1 km of shore because it is difficult to map where the shoreline begins, and ambiguous coastlines and rocks cause false positives.</li> <li>The bulk of industrial activities, including offshore development with medium-to-large oil rigs and wind farms, occur several kilometers from shore.</li> </ul> <li> <strong>False positives can be produced from noise artifacts.</strong> </li> <ul> <li>Rocks, small islands, sea ice, radar ambiguities (radar echoes), and image artifacts can cause false positives</li> <li>Detections in some areas including Southern Chile, the Arctic, and the Norwegian Sea have been filtered to remove noise.</li> </ul> <li><strong>Spatial coverage varies over time, which can produce different detection results year on year - <a target=\"_blank\" href=\"https://share.cleanshot.com/yG0qfF\"> <span style=\"color:rgb(0, 0, 0);\">Example</span> </a></strong> </li> <ul> <li>Infrastructure detections from 2017-01-01 to near real time are available, and updated on a monthly basis.</li> </ul> <li> <strong>Labels can change over time</strong> </li> <ul> <li>The label assigned to a structure is the greatest predicted label averaged across time. 
As we get more data, the label may change, and more accurately predict the true infrastructure type.</li> </ul> <li><strong>Global datasets aren’t perfect</strong></li> <ul> <li>We’ve done our best to create the most accurate product possible, but there will be infrastructure that isn’t detected, or has been classified incorrectly. This will be most evident when working at the project level.</li> <li>We strongly encourage users to provide feedback to the research team so that we may improve future versions of the model. All feedback is greatly appreciated.</li> </ul> </ul> <h2>Methods</h2> <h3>SAR imagery</h3> <p>We use SAR imagery from the Copernicus Sentinel-1 mission of the European Space Agency (ESA) [1]. The images are sourced from two satellites (S1A and S1B up until December 2021 when S1B stopped operating, and S1A only from 2022 onward) that orbit 180 degrees out of phase with each other in a polar, sun-synchronous orbit. Each satellite has a repeat-cycle of 12 days, so that together they provide a global mapping of coastal waters around the world approximately every six days for the period that both were operating. The number of images per location, however, varies greatly depending on mission priorities, latitude, and degree of overlap between adjacent satellite passes. Spatial coverage also varies over time [2]. 
Our data consist of dual-polarization images (VH and VV) from the Interferometric Wide (IW) swath mode, with a resolution of about 20 m.</p> <p>[1] <a target=\"_blank\" href=\"https://sedas.satapps.org/wp-content/uploads/2015/07/Sentinel-1_User_Handbook.pdf\"> <span style=\"color:rgb(0, 0, 0);\">https://sedas.satapps.org/wp-content/uploads/2015/07/Sentinel-1_User_Handbook.pdf</span> </a> </p> <p>[2] <a target=\"_blank\" href=\"https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1/observation-scenario\"> <span style=\"color:rgb(0, 0, 0);\">https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1/observation-scenario</span> </a> </p> <h3>Infrastructure detection by SAR</h3> <p>Detecting infrastructure with SAR is based on the widely used Constant False Alarm Rate (CFAR) algorithm, an anomaly detection method conceived for detecting ships in synthetic aperture radar images, which has been modified to remove non-stationary objects. This algorithm is designed to search for pixel values that are unusually bright (the targets) compared to those in the surrounding area (the sea clutter). This method sets a threshold based on the pixel values of the local background (within a window), scanning the whole image pixel by pixel. Pixel values above the threshold constitute an anomaly and are likely to be samples from a target.</p> <h3>Infrastructure classification</h3> <p>To classify every detected offshore infrastructure, we used deep learning and designed a ConvNet based on the ConvNeXt architecture. A novel aspect of our deep learning classification approach is the combination of SAR imagery from Sentinel-1 with optical imagery from Sentinel-2. From six-month composites of dual-band SAR (VH and VV) and four-band optical (RGB and NIR) images, we extracted small tiles for every detected fixed infrastructure, with the respective objects at the center of the tile. 
A single model output includes the probabilities for the specified classes: wind, oil, unknown, lake maracaibo, and noise.</p> <h3>Filtering</h3> <p>GFW post-processed the classified SAR detections to reduce noise (false positives), remove vessels, exclude areas with sea ice at high latitudes, and incorporate expert feedback. We used a clustering approach to identify detections across time (within a 50 m radius) that were likely the same structure but whose coordinates differed slightly, and assigned them the label with the greatest average predicted probability in the cluster. We also filled in gaps for fixed structures that were missing in one timestep but detected in the previous and following timesteps, and dropped detections appearing in a single timestep. Finally, the dataset underwent extensive manual review and editing by researchers and industry experts in order to refine the final product, and provide the most accurate dataset possible.</p> <h3>Data field descriptions</h3> <p>Each detection has a unique individual identifier (<em>detection_id</em>). A six-month image composite is used in the classification; therefore, the <em>detection_date</em> represents the middle of the six-month period. This helps to remove non-stationary objects (i.e. vessels), and avoid confusion in the model if a structure is being built, or there isn’t adequate imagery available. <em>structure_id</em> allows you to track a structure through time. There are therefore many <em>detection_id</em> (one for each month the structure is detected) for each <em>structure_id</em>. Labels of <em>wind</em> and <em>oil</em> represent any wind or oil related structure respectively. <em>Unknown</em> represents a structure that is not oil or wind related, such as bridges or navigational buoys. </p> <p>Label confidence levels of ‘High’, 
‘Medium’ and ‘Low’ are assigned to each structure, and are conditional on where the detections fell in relation to the boundaries of manually developed wind and oil polygons, and whether the label has changed from the previous month. The <em>label_confidence</em> field can be used to filter analysis. </p> <h2>Resources, code and other notes</h2> <p>Two repos are used in the automation process, both of which are private, and should not be shared publicly.</p> <p>Detection and classification: https://github.com/GlobalFishingWatch/sentinel-1-ee/tree/master</p> <p>Clustering and reclassification: https://github.com/GlobalFishingWatch/infrastructure-post-processing</p> <p>All code developed for the paper, Paolo, F.S., Kroodsma, D., Raynor, J. et al. Satellite mapping reveals extensive industrial activity at sea. Nature 625, 85–91 (2024). https://doi.org/10.1038/s41586-023-06825-8, including SAR detection, deep learning models, and analyses, is open source and freely available at https://github.com/GlobalFishingWatch/paper-industrial-activity.</p> <h2>Source data and citations</h2> <p>Copernicus Sentinel data 2017-current</p> <p>Lujala, Päivi; Jan Ketil Rød &amp; Nadia Thieme, 2007. 'Fighting over Oil: Introducing A New Dataset', Conflict Management and Peace Science 24(3), 239-256</p> <p>Sabbatino, M., Romeo, L., Baker, V., Bauer, J., Barkhurst, A., Bean, A., DiGiulio, J., Jones, K., Jones, T.J., Justman, D., Miller III, R., Rose, K., and Tong, A., Global Oil &amp; Gas Infrastructure Features Database Geocube Collection, 2019-03-25, https://edx.netl.doe.gov/dataset/global-oil-gas-infrastructure-features-database-geocube-collection, DOI: 10.18141/1502839</p> <h2>License</h2> <p>Non-Commercial Use Only. The Site and the Services are provided for Non-Commercial use only in accordance with the CC BY-NC 4.0 license. 
If you would like to use the Site and/or the Services for commercial purposes, please contact us.</p> <h2>Global Fishing Watch metadata</h2> <p>Infrastructure development methods should reference the paper:</p> <p>Paolo, F.S., Kroodsma, D., Raynor, J. et al. Satellite mapping reveals extensive industrial activity at sea. Nature 625, 85–91 (2024). https://doi.org/10.1038/s41586-023-06825-8</p> <p>All code developed for the paper, including SAR detection, deep learning models, and analyses, is open source and freely available at https://github.com/GlobalFishingWatch/paper-industrial-activity. All the data generated and used by these scripts can reference the following data repos:</p> <p>Analysis and Figures: https://doi.org/10.6084/m9.figshare.24309475</p> <p>Training and Evaluation: https://doi.org/10.6084/m9.figshare.24309469</p>",
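The CFAR detection step described in the Methods above can be illustrated with a toy cell-averaging sketch. This is a minimal, hypothetical implementation: the window size, guard band, and threshold factor `k` below are invented for illustration and are not GFW's actual parameters.

```python
import numpy as np

def cfar_detect(img, bg=20, guard=4, k=5.0):
    """Toy cell-averaging CFAR: flag pixels that are unusually bright
    compared to the surrounding background window (the "sea clutter").
    All parameters are illustrative assumptions, not GFW's settings."""
    h, w = img.shape
    hits = np.zeros((h, w), dtype=bool)
    r = bg // 2       # half-width of the background window
    g = guard // 2    # half-width of the guard band
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = img[y - r:y + r + 1, x - r:x + r + 1].astype(float).copy()
            # Mask out the guard cells around the pixel under test so the
            # target's own energy does not inflate the background estimate.
            window[r - g:r + g + 1, r - g:r + g + 1] = np.nan
            mu = np.nanmean(window)
            sigma = np.nanstd(window)
            # Threshold set from the local background statistics.
            hits[y, x] = img[y, x] > mu + k * sigma
    return hits
```

A real implementation would vectorize the sliding window and, as the text notes, add temporal filtering to drop non-stationary objects (vessels); this sketch only shows the per-pixel thresholding idea.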
@@ -2564,7 +2613,23 @@
2564
2613
  },
2565
2614
  "public-global-all-tracks": {
2566
2615
  "name": "Tracks",
2567
- "description": "The dataset contains the tracks from all vessels (AIS) - Version 3.0"
2616
+ "description": "The dataset contains the tracks from all vessels (AIS) - Version 3.0",
2617
+ "schema": {
2618
+ "elevation": {
2619
+ "keyword": "elevation",
2620
+ "enum": {
2621
+ "0": "0",
2622
+ "-2000": "-2000"
2623
+ }
2624
+ },
2625
+ "speed": {
2626
+ "keyword": "speed",
2627
+ "enum": {
2628
+ "0": "0",
2629
+ "20": "20"
2630
+ }
2631
+ }
2632
+ }
2568
2633
  },
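The `schema` block added above pairs a `keyword` with an `enum` whose keys are numeric bounds (`elevation` from -2000 to 0, `speed` from 0 to 20). A consuming app might interpret such a two-value enum as the endpoints of a range filter; this is an assumed usage pattern for illustration only, since the package ships only the labels.

```python
def enum_bounds(schema_field):
    """Interpret a two-value numeric enum as (min, max) filter bounds.
    Assumed consumption pattern; not documented behavior of the package."""
    values = sorted(float(v) for v in schema_field["enum"])
    return values[0], values[-1]

# Shape of the schema added for "public-global-all-tracks" in this release.
tracks_schema = {
    "elevation": {"keyword": "elevation", "enum": {"0": "0", "-2000": "-2000"}},
    "speed": {"keyword": "speed", "enum": {"0": "0", "20": "20"}},
}
```

For example, `enum_bounds(tracks_schema["speed"])` yields `(0.0, 20.0)`, which a UI could feed straight into a slider widget.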
2569
2634
  "public-global-bathymetry": {
2570
2635
  "name": "Bathymetry",
@@ -2863,6 +2928,26 @@
2863
2928
  }
2864
2929
  }
2865
2930
  },
2931
+ "public-global-skylight-viirs": {
2932
+ "name": "VIIRS (Skylight)",
2933
+ "description": "<h2>Overview</h2> <ul> <li> This layer shows vessels detected using the Visible Infrared Imaging Radiometer Suite (VIIRS) \"Day/Night Band\" on board the Suomi NPP, NOAA-20, and NOAA-21 satellites. These sensors are uniquely sensitive to low-level light, allowing them to detect anthropogenic light sources on the ocean surface, such as vessel deck lights or high-intensity lamps used to lure catch. </li> <li> Skylight processes these nightly global sweeps using a suite of seven parallel computer vision models. These models differentiate between actual vessels and \"noise,\" such as gas flares or lightning, to provide a near real-time map of illuminated maritime activity during the middle of the night (typically 1-4 a.m. local time). </li> </ul> <h2>Use cases</h2> <ul> <li> Identify industrial fishing operations, such as squid jiggers and purse seiners, that use bright lights but may not be broadcasting AIS or VMS positions. </li> <li> Fill surveillance gaps in vast areas of the ocean where other satellite coverage is infrequent, as Night Lights provides daily global revisits. </li> <li> Use Night Lights in tandem with Radar (SAR) or Optical (Sentinel-2) detections to build a 24-hour timeline of a vessel's presence in an area of interest. </li> </ul> <h2>Caveats</h2> <ul> <li> While the model filters most noise, heavy cloud cover can diffuse light (making detections appear larger), and extreme moonlight reflection (glint) may occasionally result in false positives. </li> <li> With a resolution of approximately 750 meters per pixel, multiple vessels in close proximity may appear as a single detection. </li> </ul> <h2>Methods</h2> <ul> <li> Multi-Model Computer Vision: Because the VIIRS sensor was originally designed for weather monitoring, Skylight uses specialized machine learning to isolate vessel signals. 
The system employs seven distinct models to filter out non-vessel light sources like oil platform gas flares, lightning strikes, and sensor noise from energetic particles (the South Atlantic Anomaly). </li> <li> AIS Matching &amp; Identification: Skylight automatically attempts to correlate each light detection with AIS records. By comparing the light's location and timing with known vessel tracks, the system can distinguish between AIS-transmitting vessels and \"unmatched\" detections. </li> <li> Global Daily Coverage: The constellation of three satellites follows a sun-synchronous polar orbit. This ensures that every point on Earth is imaged at least once per night, with occasional multiple passes that can help analysts infer a vessel's course based on consecutive detections. </li> </ul>",
2934
+ "schema": {
2935
+ "matched": {
2936
+ "keyword": "matched",
2937
+ "enum": {
2938
+ "true": "true",
2939
+ "false": "false"
2940
+ }
2941
+ },
2942
+ "radiance": {
2943
+ "keyword": "radiance",
2944
+ "enum": {
2945
+ "0": "0",
2946
+ "1000": "1000"
2947
+ }
2948
+ }
2949
+ }
2950
+ },
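The AIS-matching step described in the entry above can be illustrated with a toy nearest-neighbor check: accept the closest AIS ping within a distance and time gate, otherwise leave the detection "unmatched". The 10 km and 15 minute gates and the record fields are invented for illustration; Skylight's actual matching logic is not described in this file.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))

def match_detection(det, ais_pings, max_km=10, max_min=15):
    """Return the nearest AIS ping within the distance/time gates, else None.
    Record fields (lat, lon, t as epoch seconds) are illustrative."""
    best = None
    for ping in ais_pings:
        if abs(ping["t"] - det["t"]) > max_min * 60:
            continue  # outside the time gate
        d = haversine_km(det["lat"], det["lon"], ping["lat"], ping["lon"])
        if d <= max_km and (best is None or d < best[0]):
            best = (d, ping)
    return best[1] if best else None
```

Detections that return `None` here correspond to the "unmatched" category in the layer: lit vessels with no plausible AIS track nearby.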
2866
2951
  "public-global-sst-anomalies-max": {
2867
2952
  "name": "Sea Surface Temperatures anomalies (Max)",
2868
2953
  "description": "Sea surface temperatures anomalies (Max)"
@@ -3004,7 +3089,7 @@
3004
3089
  "description": "Vessel Insights from AIS"
3005
3090
  },
3006
3091
  "public-global-viirs-presence": {
3007
- "name": "Night light detections (VIIRS)",
3092
+ "name": "VIIRS (EOG)",
3008
3093
  "description": "The night lights vessel detections layer, known as visible infrared imaging radiometer suite or VIIRS, shows vessels at sea that satellites have detected by the light they emit at night. Though not exclusively associated with fishing vessels, this activity layer is likely to show vessels associated with activities like squid fishing, which use bright lights and fish at night. The satellite makes a single over-pass across the entire planet every night, detecting lights not obscured by clouds and designed to give at least one observation globally every day. Because the vessels are detected solely based on light emission, we can detect individual vessels and even entire fishing fleets that are not broadcasting automatic identification system (AIS) and so are not represented in the AIS apparent fishing effort layer. Lights from fixed offshore infrastructure and other non-vessel sources are excluded. Global Fishing Watch ingests boat detections processed from low light imaging data collected by the U.S. National Oceanic and Atmospheric Administration (NOAA) VIIRS. The boat detections are processed in near-real time by NOAA’s Earth Observation Group, located in Boulder, Colorado. The data, known as VIIRS boat detections, picks up the presence of fishing vessels using lights to attract catch or to conduct operations at night. More than 85% of the detections are from vessels that lack AIS or Vessel Monitoring System (VMS) transponders. Due to the orbit design of polar orbiting satellites, regions closer to the poles will have more over-passes per day, while equatorial regions have only one over-pass daily. 
Read more about this product, and download the data <a href=\"https://ngdc.noaa.gov/eog/viirs/download_boat.html\" target=\"_blank\" rel=\"noopener\">here</a>. Those using night light detections data should acknowledge the South Atlantic Anomaly (SAA), an area where the Earth's inner Van Allen radiation belt is at its lowest altitude, allowing more energetic particles from space to penetrate. When such particles hit the sensors on a satellite, this can create a false signal which might cause the algorithm to recognize it as a boat presence. A filtration algorithm has been applied but there may still be some mis-identification. The GFW layer includes quality flags (QF), including a filter to show only detections which NOAA has classified as vessels (QF1).",
3009
3094
  "schema": {
3010
3095
  "matched": {
@@ -3631,7 +3716,50 @@
3631
3716
  },
3632
3717
  "public-rfmo": {
3633
3718
  "name": "RFMO",
3634
- "description": "Regional fisheries management organizations (RFMOs) are international bodies formed by countries with a shared interest in managing or conserving fish stocks in a particular region. Some manage all the fish stocks found in a given area, while others focus on specific highly migratory species, notably tuna. The regional fisheries management organization on the Global Fishing Watch map currently includes the five tuna regional fisheries management organizations. See more detailed <a href='https://globalfishingwatch.org/faqs/reference-layer-sources/' target='_blank' rel=noopener'>metadata information</a> for this layer."
3719
+ "description": "Regional fisheries management organizations (RFMOs) are international bodies formed by countries with a shared interest in managing or conserving fish stocks in a particular region. Some manage all the fish stocks found in a given area, while others focus on specific highly migratory species, notably tuna. The regional fisheries management organization layer on the Global Fishing Watch map currently includes the five tuna regional fisheries management organizations. See more detailed <a href='https://globalfishingwatch.org/faqs/reference-layer-sources/' target='_blank' rel='noopener'>metadata information</a> for this layer.",
3720
+ "schema": {
3721
+ "ID": {
3722
+ "keyword": "ID",
3723
+ "enum": {
3724
+ "APFIC": "APFIC",
3725
+ "BOBP-IGO": "BOBP-IGO",
3726
+ "CCAMLR": "CCAMLR",
3727
+ "CCBSP": "CCBSP",
3728
+ "CCSBT": "CCSBT",
3729
+ "CCSBT Primary Area": "CCSBT Primary Area",
3730
+ "COREP": "COREP",
3731
+ "CPPS": "CPPS",
3732
+ "CRFM": "CRFM",
3733
+ "CTMFM": "CTMFM",
3734
+ "FCWC": "FCWC",
3735
+ "FFA": "FFA",
3736
+ "GFCM": "GFCM",
3737
+ "IATTC": "IATTC",
3738
+ "ICCAT": "ICCAT",
3739
+ "ICES": "ICES",
3740
+ "IOTC": "IOTC",
3741
+ "IPHC": "IPHC",
3742
+ "LTA": "LTA",
3743
+ "NAFO": "NAFO",
3744
+ "NAMMCO": "NAMMCO",
3745
+ "NASCO": "NASCO",
3746
+ "NEAFC": "NEAFC",
3747
+ "NPAFC": "NPAFC",
3748
+ "NPFC": "NPFC",
3749
+ "OSPESCA": "OSPESCA",
3750
+ "PERSGA": "PERSGA",
3751
+ "PICES": "PICES",
3752
+ "RECOFI": "RECOFI",
3753
+ "SEAFDEC": "SEAFDEC",
3754
+ "SIOFA": "SIOFA",
3755
+ "SPC": "SPC",
3756
+ "SPRFMO": "SPRFMO",
3757
+ "SRFC": "SRFC",
3758
+ "SWIOFC": "SWIOFC",
3759
+ "WCPFC": "WCPFC"
3760
+ }
3761
+ }
3762
+ }
3635
3763
  },
3636
3764
  "public-seagrasses": {
3637
3765
  "name": "Seagrasses",
@@ -3883,15 +4011,7 @@
3883
4011
  },
3884
4012
  "public-vms-bra-vessel-identity": {
3885
4013
  "name": "VMS Brazil",
3886
- "description": "Vessels (VMS Brazil)",
3887
- "schema": {
3888
- "selfReportedInfo.fishingLicenseCode": {
3889
- "keyword": "fishingLicenseCode"
3890
- },
3891
- "selfReportedInfo.vesselRegistrationCode": {
3892
- "keyword": "vesselRegistrationCode"
3893
- }
3894
- }
4014
+ "description": "Vessels (VMS Brazil)"
3895
4015
  },
3896
4016
  "public-vms-chl-fishing-effort": {
3897
4017
  "name": "VMS Chile",