insardev-toolkit 2025.2.20.dev0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,29 @@
+ BSD 3-Clause License
+
+ Copyright (c) 2025, Alexey Pechnikov, https://orcid.org/0000-0001-9626-8615 (ORCID)
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are met:
+
+ * Redistributions of source code must retain the above copyright notice, this
+   list of conditions and the following disclaimer.
+
+ * Redistributions in binary form must reproduce the above copyright notice,
+   this list of conditions and the following disclaimer in the documentation
+   and/or other materials provided with the distribution.
+
+ * Neither the name of the copyright holder nor the names of its
+   contributors may be used to endorse or promote products derived from
+   this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+ FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
@@ -0,0 +1,157 @@
+ Metadata-Version: 2.2
+ Name: insardev_toolkit
+ Version: 2025.2.20.dev0
+ Summary: InSAR.dev (Python InSAR): Geospatial Processing Toolkit
+ Home-page: https://github.com/AlexeyPechnikov/pygmtsar
+ Author: Alexey Pechnikov
+ Author-email: alexey@pechnikov.dev
+ License: BSD-3-Clause
+ Keywords: satellite interferometry,InSAR,remote sensing,geospatial analysis,Sentinel-1,SBAS,PSI
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Intended Audience :: Science/Research
+ Classifier: Natural Language :: English
+ Classifier: License :: OSI Approved :: BSD License
+ Classifier: Operating System :: POSIX
+ Classifier: Operating System :: POSIX :: Linux
+ Classifier: Operating System :: MacOS
+ Classifier: Programming Language :: Python
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ License-File: LICENSE.txt
+ Requires-Dist: xarray>=2024.1.0
+ Requires-Dist: numpy
+ Requires-Dist: numba
+ Requires-Dist: pandas>=2.2
+ Requires-Dist: geopandas
+ Requires-Dist: distributed>=2024.1.0
+ Requires-Dist: dask[complete]>=2024.4.1
+ Requires-Dist: opencv-python
+ Requires-Dist: joblib
+ Requires-Dist: tqdm
+ Requires-Dist: ipywidgets
+ Requires-Dist: scipy
+ Requires-Dist: shapely>=2.0.2
+ Requires-Dist: xmltodict
+ Requires-Dist: rioxarray
+ Requires-Dist: tifffile
+ Requires-Dist: h5netcdf>=1.3.0
+ Requires-Dist: netCDF4
+ Requires-Dist: nc-time-axis
+ Requires-Dist: remotezip
+ Requires-Dist: asf_search
+ Requires-Dist: matplotlib
+ Dynamic: author
+ Dynamic: author-email
+ Dynamic: classifier
+ Dynamic: description
+ Dynamic: description-content-type
+ Dynamic: home-page
+ Dynamic: keywords
+ Dynamic: license
+ Dynamic: requires-dist
+ Dynamic: requires-python
+ Dynamic: summary
+
+ [![View on GitHub](https://img.shields.io/badge/GitHub-View%20on%20GitHub-blue)](https://github.com/AlexeyPechnikov/pygmtsar)
+ [![Available on pypi](https://img.shields.io/pypi/v/pygmtsar.svg)](https://pypi.python.org/pypi/pygmtsar/)
+ [![Docker](https://badgen.net/badge/icon/docker?icon=docker&label)](https://hub.docker.com/r/pechnikov/pygmtsar)
+ [![DOI](https://zenodo.org/badge/398018212.svg)](https://zenodo.org/badge/latestdoi/398018212)
+ [![Support on Patreon](https://img.shields.io/badge/Patreon-Support-orange.svg)](https://www.patreon.com/pechnikov)
+ [![ChatGPT Assistant](https://img.shields.io/badge/ChatGPT-Assistant-green?logo=openai)](https://insar.dev/ai)
+
+ ## PyGMTSAR (Python InSAR): Powerful, Accessible Satellite Interferometry
+
+ <img src="assets/logo.jpg" width="15%" />
+
+ PyGMTSAR (Python InSAR) is designed for both occasional users and experts working with Sentinel-1 satellite interferometry. It supports a wide range of features, including SBAS, PSI, PSI-SBAS, and more. In addition to the examples below, you’ll find more Jupyter notebook use cases on [Patreon](https://www.patreon.com/pechnikov) and updates on [LinkedIn](https://www.linkedin.com/in/alexey-pechnikov/).
+
+ ## About PyGMTSAR
+
+ PyGMTSAR offers reproducible, high-performance Sentinel-1 interferometry accessible to everyone, whether you prefer Google Colab, cloud servers, or local processing. It automatically retrieves Sentinel-1 SLC scenes and bursts, DEMs, and orbits; computes interferograms and correlations; performs time-series analysis; and provides 3D visualization. This single library enables users to build a fully integrated InSAR project with minimal hassle. Whether you need a single interferogram or a multi-year analysis involving thousands of datasets, PyGMTSAR can handle the task efficiently, even on standard commodity hardware.
+
+ ## PyGMTSAR Live Examples on Google Colab
+
+ Google Colab is a free service that lets you run interactive notebooks directly in your browser, with no powerful computer, extensive disk space, or special installation needed. You can even do InSAR processing from a smartphone. These notebooks automate every step: installing the PyGMTSAR library and its dependencies on a Colab host (Ubuntu 22, Python 3.10); downloading Sentinel-1 SLCs, orbit files, SRTM DEM data (automatically converted to ellipsoidal heights via EGM96), and land mask data; and then performing complete interferometry with final mapping. You can also modify scene or burst names to analyze your own area of interest, and each notebook includes instant interactive 3D maps.
+
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1TARVTB7z8goZyEVDRWyTAKJpyuqZxzW2?usp=sharing) **Central Türkiye Earthquakes (2023).** The area is large, covering two consecutive Sentinel-1 scenes, or a total of 56 bursts.
+
+ <img src="assets/turkie_2023a.jpg" width="40%" /><img src="assets/turkie_2023b.jpg" width="40%" />
+
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1dDFG8BoF4WfB6tOF5sAi5mjdBKRbhxHo?usp=sharing) **Pico do Fogo Volcano Eruption, Fogo Island, Cape Verde (2014).** The interferogram for this event is compared to the study *The 2014–2015 eruption of Fogo volcano: Geodetic modeling of Sentinel-1 TOPS interferometry* (*Geophysical Research Letters*, DOI: [10.1002/2015GL066003](https://doi.org/10.1002/2015GL066003)).
+
+ <img src="assets/pico_2014a.jpg" width="40%" /><img src="assets/pico_2014b.jpg" width="40%" />
+
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1d9RcqBmWIKQDEwJYo8Dh6M4tMjJtvseC?usp=sharing) **La Cumbre Volcano Eruption, Ecuador (2020).** The results are compared with the report from Instituto Geofísico, Escuela Politécnica Nacional (IG-EPN) (InSAR software unspecified).
+
+ <img src="assets/la_cumbre_2020a.jpg" width="40%" /><img src="assets/la_cumbre_2020b.jpg" width="40%" />
+
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1shNGvUlUiXeyV7IcTmDbWaEM6XrB0014?usp=sharing) **Iran–Iraq Earthquake (2017).** The event has been well investigated, and the results are compared to outputs from GMTSAR, SNAP, and GAMMA software.
+
+ <img src="assets/iran_iraq_2017a.jpg" width="40%" /><img src="assets/iran_iraq_2017b.jpg" width="40%" />
+
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1h4XxJZwFfm7EC8NUzl34cCkOVUG2uJr4?usp=sharing) **Imperial Valley Subsidence, CA USA (2015).** This example is provided in the [GMTSAR project](https://topex.ucsd.edu/gmtsar/downloads/) in the archive file [S1A_Stack_CPGF_T173.tar.gz](http://topex.ucsd.edu/gmtsar/tar/S1A_Stack_CPGF_T173.tar.gz), titled 'Sentinel-1 TOPS Time Series'.
+
+ The resulting InSAR velocity map is available as a self-contained web page at: [Imperial_Valley_2015.html](https://insar.dev/ui/Imperial_Valley_2015.html)
+
+ <img src="assets/imperial_valley_2015a.jpg" width="40%" /> <img src="assets/imperial_valley_2015b.jpg" width="40%" />
+
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1aqAr9KWKzGx9XpVie1M000C3vUxzNDxu?usp=sharing) **Kalkarindji Flooding, NT Australia (2024).** Correlation loss serves to identify flooded areas.
+
+ <img src="assets/kalkarindji_2024.jpg" width="80%" />
+
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1ipiQGbvUF8duzjZER8v-_R48DSpSmgvQ?usp=sharing) **Golden Valley Subsidence, CA USA (2021).** This example demonstrates the case study 'Antelope Valley Freeway in Santa Clarita, CA,' as detailed in [SAR Technical Series Part 4 Sentinel-1 global velocity layer: Using global InSAR at scale](https://blog.descarteslabs.com/using-global-insar-at-scale) and [Sentinel-1 Technical Series Part 5 Targeted Analysis](https://blog.descarteslabs.com/sentinel-1-targeted-analysis), with a significant subsidence rate 'exceeding 5cm/year in places'.
+
+ <img src="assets/golden_valley_2021.jpg" width="80%" />
+
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1O3aZtZsTrQIldvCqlVRel13wJRLhmTJt?usp=sharing) **Lake Sarez Landslides, Tajikistan (2017).** The example reproduces the findings shared in the following paper: [Integration of satellite SAR and optical acquisitions for the characterization of the Lake Sarez landslides in Tajikistan](https://www.google.com/url?q=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F378176884_Integration_of_satellite_SAR_and_optical_acquisitions_for_the_characterization_of_the_Lake_Sarez_landslides_in_Tajikistan).
+
+ <img src="assets/lake_sarez_2017.jpg" width="80%" />
+
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19PLuebOZ4gaYX5ym1H7SwUbJKfl23qPr?usp=sharing) **Erzincan Elevation, Türkiye (2019).** This example reproduces the 29-page ESA document [DEM generation with Sentinel-1 IW](https://step.esa.int/docs/tutorials/S1TBX%20DEM%20generation%20with%20Sentinel-1%20IW%20Tutorial.pdf).
+
+ <img src="assets/erzincan_2019.jpg" width="80%" />
+
+ ## More PyGMTSAR Live Examples on Google Colab
+
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1yuuA1ES2ly4QG3hyPg8YYT0nnpGDiQDw?usp=sharing) **Mexico City Subsidence, Mexico (2016).** This example replicates the 29-page ESA manual [TRAINING KIT – HAZA03. LAND SUBSIDENCE WITH SENTINEL-1 using SNAP](https://eo4society.esa.int/wp-content/uploads/2022/01/HAZA03_Land-Subsidence_Mexico-city.pdf).
+
+ ## PyGMTSAR Live Examples on Google Colab Pro
+
+ I share additional InSAR projects on Google Colab Pro through my [Patreon page](https://www.patreon.com/pechnikov). These are ideal for InSAR learners, researchers, and industry professionals tackling challenging projects with large areas, big stacks of interferograms, low-coherence regions, or significant atmospheric delays. You can run these privately shared notebooks online with Colab Pro or locally/on remote servers.
+
+ ## Projects and Publications Using PyGMTSAR
+
+ See the [Projects and Publications](/pubs/README.md) page for real-world projects and academic research applying PyGMTSAR. This is not an exhaustive list; contact me if you’d like your project or publication included.
+
+ ## Resources
+
+ **PyGMTSAR projects and e-books**
+ Available on [Patreon](https://www.patreon.com/c/pechnikov/shop). Preview versions can be found in this GitHub repo:
+
+ - [PyGMTSAR Introduction Preview](https://github.com/AlexeyPechnikov/pygmtsar/blob/pygmtsar2/book/PyGMTSAR_preview.pdf)
+ - [PyGMTSAR Gaussian Filtering Preview](https://github.com/AlexeyPechnikov/pygmtsar/blob/pygmtsar2/book/Gaussian_preview.pdf)
+
+ <img src="assets/listing.jpg" width="40%" />
+
+ **Video Lessons and Notebooks**
+ Find PyGMTSAR (Python InSAR) video lessons and educational notebooks on [Patreon](https://www.patreon.com/collection/12458) and [YouTube](https://www.youtube.com/channel/UCSEeXKAn9f_bDiTjT6l87Lg).
+
+ **PyGMTSAR AI Assistant**
+ The [PyGMTSAR AI Assistant](https://insar.dev/ai), powered by OpenAI ChatGPT, can explain InSAR theory, guide you through examples, help build an InSAR processing pipeline, and troubleshoot.
+
+ <img width="40%" alt="PyGMTSAR AI Assistant" src="assets/ai.jpg" />
+
+ **PyGMTSAR on DockerHub**
+ Run InSAR processing on macOS, Linux, or Windows via [Docker images](https://hub.docker.com/r/pechnikov/pygmtsar).
+
+ **PyGMTSAR on PyPI**
+ Install the library from [PyPI](https://pypi.python.org/pypi/pygmtsar).
+
+ **PyGMTSAR Previous Versions**
+ 2023 releases are still on GitHub, PyPI, DockerHub, and Google Colab. Compare PyGMTSAR InSAR with other software by checking out the [PyGMTSAR 2023 Repository](https://github.com/AlexeyPechnikov/pygmtsar/tree/pygmtsar).
+
+ © Alexey Pechnikov, 2025
@@ -0,0 +1,468 @@
+ # ----------------------------------------------------------------------------
+ # InSAR.dev
+ #
+ # This file is part of the InSAR.dev project: https://InSAR.dev
+ #
+ # Copyright (c) 2025, Alexey Pechnikov
+ #
+ # Licensed under the BSD 3-Clause License (see LICENSE for details)
+ # ----------------------------------------------------------------------------
+ from .tqdm_joblib import tqdm_joblib
+ #from .S1 import S1
+
+ class ASF(tqdm_joblib):
+     import pandas as pd
+     from datetime import timedelta
+
+     # check for downloaded burst files
+     # like S1_370328_IW1_20150121T134421_VV_DBBE-BURST
+     template_burst = '{burstId}/*/{burst}.*'
+
+     def __init__(self, username=None, password=None):
+         import getpass
+         if username is None:
+             username = getpass.getpass('Please enter your ASF username and press the Enter key:')
+         if password is None:
+             password = getpass.getpass('Please enter your ASF password and press the Enter key:')
+         self.username = username
+         self.password = password
+
+     def _get_asf_session(self):
+         import asf_search
+         return asf_search.ASFSession().auth_with_creds(self.username, self.password)
+
+     # https://asf.alaska.edu/datasets/data-sets/derived-data-sets/sentinel-1-bursts/
+     def download(self, basedir, bursts, session=None, n_jobs=8, joblib_backend='loky', skip_exist=True,
+                  retries=30, timeout_second=3, debug=False):
+         """
+         Downloads the specified bursts extracted from Sentinel-1 SLC scenes.
+
+         Parameters
+         ----------
+         basedir : str
+             The directory where the downloaded bursts will be saved.
+         bursts : list of str
+             List of burst identifiers to download.
+         session : asf_search.ASFSession, optional
+             The session object for authentication. If None, a new session is created.
+         n_jobs : int, optional
+             The number of concurrent download jobs. Default is 8.
+         joblib_backend : str, optional
+             The backend for parallel processing. Default is 'loky'.
+         skip_exist : bool, optional
+             If True, skips downloading bursts that already exist. Default is True.
+         retries : int, optional
+             The number of download attempts per burst before giving up. Default is 30.
+         timeout_second : int, optional
+             The pause between download attempts, in seconds. Default is 3.
+         debug : bool, optional
+             If True, prints debugging information. Default is False.
+
+         Returns
+         -------
+         pandas.DataFrame
+             A DataFrame containing the list of downloaded bursts, or None when all
+             the requested bursts already exist.
+         """
+         import rioxarray as rio
+         from tifffile import TiffFile
+         import xmltodict
+         from xml.etree import ElementTree
+         import pandas as pd
+         import asf_search
+         import joblib
+         from tqdm.auto import tqdm
+         import os
+         import glob
+         from datetime import datetime, timedelta
+         import time
+         import warnings
+         # suppress asf_search 'UserWarning: File already exists, skipping download'
+         warnings.filterwarnings("ignore", category=UserWarning)
+
+         def filter_azimuth_time(items, start_utc_dt, stop_utc_dt, delta=3):
+             return [item for item in items if
+                     datetime.strptime(item['azimuthTime'], '%Y-%m-%dT%H:%M:%S.%f') >= start_utc_dt - timedelta(seconds=delta) and
+                     datetime.strptime(item['azimuthTime'], '%Y-%m-%dT%H:%M:%S.%f') <= stop_utc_dt + timedelta(seconds=delta)]
+
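The `filter_azimuth_time` helper above keeps only the annotation entries whose `azimuthTime` falls within the burst interval, padded by `delta` seconds on each side. A standalone sketch of the same logic with hypothetical timestamps:

```python
from datetime import datetime, timedelta

def filter_azimuth_time(items, start_utc_dt, stop_utc_dt, delta=3):
    # keep items whose azimuthTime lies in [start - delta, stop + delta]
    fmt = '%Y-%m-%dT%H:%M:%S.%f'
    return [item for item in items
            if start_utc_dt - timedelta(seconds=delta)
               <= datetime.strptime(item['azimuthTime'], fmt)
               <= stop_utc_dt + timedelta(seconds=delta)]

# hypothetical annotation entries
items = [{'azimuthTime': '2015-01-21T13:44:18.000000'},
         {'azimuthTime': '2015-01-21T13:44:21.500000'},
         {'azimuthTime': '2015-01-21T13:44:40.000000'}]
start = datetime(2015, 1, 21, 13, 44, 21)
stop = datetime(2015, 1, 21, 13, 44, 24)
# with delta=3 the window is 13:44:18 .. 13:44:27, so the last entry is dropped
selected = filter_azimuth_time(items, start, stop)
```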
+         if isinstance(bursts, str):
+             bursts = list(filter(None, map(str.strip, bursts.split('\n'))))
+
+         # create the directory if needed
+         os.makedirs(basedir, exist_ok=True)
+
+         # skip existing bursts
+         if skip_exist:
+             bursts_missed = []
+             for burst in bursts:
+                 #print (burst)
+                 # the orbital path is not included in the burst name
+                 fakeBurstId = '_'.join(['*'] + burst.split('_')[1:3])
+                 template = self.template_burst.format(burstId=fakeBurstId, burst=burst)
+                 #print ('template', template)
+                 files = glob.glob(template, root_dir=basedir)
+                 #print ('files', files)
+                 exts = [os.path.splitext(file)[-1] for file in files]
+                 #print ('exts', exts)
+                 if '.tiff' in exts and '.xml' in exts and len(exts) == 4:
+                     #print ('pass')
+                     pass
+                 else:
+                     bursts_missed.append(burst)
+         else:
+             # process all the defined scenes
+             bursts_missed = bursts
+         #print ('bursts_missed', len(bursts_missed))
+         # work offline: no internet connection is needed when all the bursts are already available
+         if len(bursts_missed) == 0:
+             return
+
+         def download_burst(result, basedir, session):
+             properties = result.geojson()['properties']
+             #print ('result properties', properties)
+             burst = properties['fileID']
+             burstId = properties['burst']['fullBurstID']
+             burstIndex = properties['burst']['burstIndex']
+             platform = properties['platform'][-2:]
+             polarization = properties['polarization']
+             #print ('polarization', polarization)
+             subswath = properties['burst']['subswath']
+
+             # create the directories if needed
+             burst_dir = os.path.join(basedir, burstId)
+             tif_dir = os.path.join(burst_dir, 'measurement')
+             xml_annot_dir = os.path.join(burst_dir, 'annotation')
+             xml_noise_dir = os.path.join(burst_dir, 'noise')
+             xml_calib_dir = os.path.join(burst_dir, 'calibration')
+             # save annotation using the burst and scene names
+             xml_file = os.path.join(xml_annot_dir, f'{burst}.xml')
+             xml_noise_file = os.path.join(xml_noise_dir, f'{burst}.xml')
+             xml_calib_file = os.path.join(xml_calib_dir, f'{burst}.xml')
+             #print ('xml_file', xml_file)
+             tif_file = os.path.join(tif_dir, f'{burst}.tiff')
+             #print ('tif_file', tif_file)
+             for dirname in [burst_dir, tif_dir, xml_annot_dir, xml_noise_dir, xml_calib_dir]:
+                 os.makedirs(dirname, exist_ok=True)
+
+             # download tif
+             # properties['bytes'] is not the exact file size; the actual file is about 40 kB larger
+             if os.path.exists(tif_file) and os.path.getsize(tif_file) >= int(properties['bytes']):
+                 #print (f'pass {tif_file}')
+                 pass
+             else:
+                 # remove potentially incomplete file if needed
+                 if os.path.exists(tif_file):
+                     os.remove(tif_file)
+                 # check if we can open the downloaded file without errors
+                 tmp_file = os.path.join(burst_dir, os.path.basename(tif_file))
+                 # remove potentially incomplete data file if needed
+                 if os.path.exists(tmp_file):
+                     os.remove(tmp_file)
+                 result.download(burst_dir, filename=os.path.basename(tif_file), session=session)
+                 if not os.path.exists(tmp_file):
+                     raise Exception(f'ERROR: TIFF file is not downloaded: {tmp_file}')
+                 if os.path.getsize(tmp_file) == 0:
+                     raise Exception(f'ERROR: TIFF file is empty: {tmp_file}')
+                 # check TIFF file validity by opening it
+                 with TiffFile(tmp_file) as tif:
+                     # get TIFF file information
+                     page = tif.pages[0]
+                     tags = page.tags
+                     data = page.asarray()
+                 # attention: rasterio can crash the interpreter on a corrupted TIFF file,
+                 # so perform this check as the final step
+                 with rio.open_rasterio(tmp_file) as raster:
+                     raster.load()
+                 # TIFF file is well loaded
+                 if not os.path.exists(tmp_file):
+                     raise Exception(f'ERROR: TIFF file is missed: {tmp_file}')
+                 # move to persistent name
+                 if os.path.exists(tmp_file):
+                     os.rename(tmp_file, tif_file)
+
+             # download xml
+             if os.path.exists(xml_file) and os.path.getsize(xml_file) > 0 \
+                 and os.path.exists(xml_noise_file) and os.path.getsize(xml_noise_file) > 0 \
+                 and os.path.exists(xml_calib_file) and os.path.getsize(xml_calib_file) > 0:
+                 #print (f'pass {xml_file}')
+                 pass
+             else:
+                 # get TIFF file information
+                 with TiffFile(tif_file) as tif:
+                     page = tif.pages[0]
+                     offset = page.dataoffsets[0]
+                 #print ('offset', offset)
+                 # get the file name
+                 basename = os.path.basename(properties['additionalUrls'][0])
+                 #print ('basename', '=>', basename)
+                 manifest_file = os.path.join(burst_dir, basename)
+                 # remove potentially incomplete manifest file if needed
+                 if os.path.exists(manifest_file):
+                     os.remove(manifest_file)
+                 asf_search.download_urls(urls=properties['additionalUrls'], path=burst_dir, session=session)
+                 if not os.path.exists(manifest_file):
+                     raise Exception(f'ERROR: manifest file is not downloaded: {manifest_file}')
+                 if os.path.getsize(manifest_file) == 0:
+                     raise Exception(f'ERROR: manifest file is empty: {manifest_file}')
+                 # check XML file validity by parsing it
+                 with open(manifest_file, 'r') as file:
+                     xml_content = file.read()
+                 _ = ElementTree.fromstring(xml_content)
+                 # xml file is well parsed
+                 if not os.path.exists(manifest_file):
+                     raise Exception(f'ERROR: manifest file is missed: {manifest_file}')
+                 # parse xml
+                 with open(manifest_file, 'r') as file:
+                     xml_content = file.read()
+                 # remove manifest file
+                 if os.path.exists(manifest_file):
+                     os.remove(manifest_file)
+
+                 subswathidx = int(subswath[-1:]) - 1
+                 content = xmltodict.parse(xml_content)['burst']['metadata']['product'][subswathidx]
+                 assert polarization == content['polarisation'], 'ERROR: XML polarization differs from burst polarization'
+                 annotation = content['content']
+
+                 annotation_burst = annotation['swathTiming']['burstList']['burst'][burstIndex]
+                 start_utc = annotation_burst['azimuthTime']
+                 start_utc_dt = datetime.strptime(start_utc, '%Y-%m-%dT%H:%M:%S.%f')
+                 #print ('start_utc', start_utc, start_utc_dt)
+
+                 length = int(annotation['swathTiming']['linesPerBurst'])
+                 #print (f'length={length}, burstIndex={burstIndex}')
+                 azimuth_time_interval = annotation['imageAnnotation']['imageInformation']['azimuthTimeInterval']
+                 burst_time_interval = timedelta(seconds=(length - 1) * float(azimuth_time_interval))
+                 stop_utc_dt = start_utc_dt + burst_time_interval
+                 stop_utc = stop_utc_dt.strftime('%Y-%m-%dT%H:%M:%S.%f')
+                 #print ('stop_utc', stop_utc, stop_utc_dt)
+
+                 # output xml
+                 product = {}
+
+                 adsHeader = annotation['adsHeader']
+                 adsHeader['startTime'] = start_utc
+                 adsHeader['stopTime'] = stop_utc
+                 adsHeader['imageNumber'] = '001'
+                 product = product | {'adsHeader': adsHeader}
+
+                 qualityInformation = {'productQualityIndex': annotation['qualityInformation']['productQualityIndex']} |\
+                                      {'qualityDataList': annotation['qualityInformation']['qualityDataList']}
+                 product = product | {'qualityInformation': qualityInformation}
+
+                 generalAnnotation = annotation['generalAnnotation']
+                 # filter annotation['generalAnnotation']['replicaInformationList'] by azimuthTime
+                 product = product | {'generalAnnotation': generalAnnotation}
+
+                 imageAnnotation = annotation['imageAnnotation']
+                 imageAnnotation['imageInformation']['productFirstLineUtcTime'] = start_utc
+                 imageAnnotation['imageInformation']['productLastLineUtcTime'] = stop_utc
+                 imageAnnotation['imageInformation']['productComposition'] = 'Assembled'
+                 imageAnnotation['imageInformation']['sliceNumber'] = '0'
+                 imageAnnotation['imageInformation']['sliceList'] = {'@count': '0'}
+                 imageAnnotation['imageInformation']['numberOfLines'] = str(length)
+                 # imageStatistics and inputDimensionsList are not updated
+                 product = product | {'imageAnnotation': imageAnnotation}
+
+                 dopplerCentroid = annotation['dopplerCentroid']
+                 items = filter_azimuth_time(dopplerCentroid['dcEstimateList']['dcEstimate'], start_utc_dt, stop_utc_dt)
+                 dopplerCentroid['dcEstimateList'] = {'@count': len(items), 'dcEstimate': items}
+                 product = product | {'dopplerCentroid': dopplerCentroid}
+
+                 antennaPattern = annotation['antennaPattern']
+                 items = filter_azimuth_time(antennaPattern['antennaPatternList']['antennaPattern'], start_utc_dt, stop_utc_dt)
+                 antennaPattern['antennaPatternList'] = {'@count': len(items), 'antennaPattern': items}
+                 product = product | {'antennaPattern': antennaPattern}
+
+                 swathTiming = annotation['swathTiming']
+                 items = filter_azimuth_time(swathTiming['burstList']['burst'], start_utc_dt, start_utc_dt, 1)
+                 assert len(items) == 1, 'ERROR: unexpected bursts count, should be 1'
+                 # add TIFF file information
+                 items[0]['byteOffset'] = offset
+                 swathTiming['burstList'] = {'@count': len(items), 'burst': items}
+                 product = product | {'swathTiming': swathTiming}
+
+                 geolocationGrid = annotation['geolocationGrid']
+                 items = filter_azimuth_time(geolocationGrid['geolocationGridPointList']['geolocationGridPoint'], start_utc_dt, stop_utc_dt, 1)
+                 # re-number line numbers for the burst
+                 for item in items:
+                     item['line'] = str(int(item['line']) - (length * burstIndex))
+                 geolocationGrid['geolocationGridPointList'] = {'@count': len(items), 'geolocationGridPoint': items}
+                 product = product | {'geolocationGrid': geolocationGrid}
+
+                 product = product | {'coordinateConversion': annotation['coordinateConversion']}
+                 product = product | {'swathMerging': annotation['swathMerging']}
+
+                 with open(xml_file, 'w') as file:
+                     file.write(xmltodict.unparse({'product': product}, pretty=True, indent=' '))
+
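Annotation entries carry line numbers relative to the whole subswath; the code above rebases them to the extracted burst by subtracting `length * burstIndex`. A minimal sketch with hypothetical values:

```python
# hypothetical: the 3rd burst (index 2) of a subswath with 1500 lines per burst,
# so the burst covers subswath lines 3000..4499
length, burst_index = 1500, 2
items = [{'line': '3000'}, {'line': '3750'}, {'line': '4499'}]
for item in items:
    # rebase subswath line numbers to burst-relative numbers
    item['line'] = str(int(item['line']) - (length * burst_index))
```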
+                 # output noise xml
+                 content = xmltodict.parse(xml_content)['burst']['metadata']['noise'][subswathidx]
+                 assert polarization == content['polarisation'], 'ERROR: XML polarization differs from burst polarization'
+                 annotation = content['content']
+
+                 noise = {}
+
+                 adsHeader = annotation['adsHeader']
+                 adsHeader['startTime'] = start_utc
+                 adsHeader['stopTime'] = stop_utc
+                 adsHeader['imageNumber'] = '001'
+                 noise = noise | {'adsHeader': adsHeader}
+
+                 if 'noiseVectorList' in annotation:
+                     noiseRangeVector = annotation['noiseVectorList']
+                     items = filter_azimuth_time(noiseRangeVector['noiseVector'], start_utc_dt, stop_utc_dt)
+                     # re-number line numbers for the burst
+                     for item in items:
+                         item['line'] = str(int(item['line']) - (length * burstIndex))
+                     noiseRangeVector = {'@count': len(items), 'noiseVector': items}
+                     noise = noise | {'noiseVectorList': noiseRangeVector}
+
+                 if 'noiseRangeVectorList' in annotation:
+                     noiseRangeVector = annotation['noiseRangeVectorList']
+                     items = filter_azimuth_time(noiseRangeVector['noiseRangeVector'], start_utc_dt, stop_utc_dt)
+                     # re-number line numbers for the burst
+                     for item in items:
+                         item['line'] = str(int(item['line']) - (length * burstIndex))
+                     noiseRangeVector = {'@count': len(items), 'noiseRangeVector': items}
+                     noise = noise | {'noiseRangeVectorList': noiseRangeVector}
+
+                 if 'noiseAzimuthVectorList' in annotation:
+                     noiseAzimuthVector = annotation['noiseAzimuthVectorList']
+                     items = noiseAzimuthVector['noiseAzimuthVector']['line']['#text'].split(' ')
+                     items = [int(item) for item in items]
+                     # use list fallbacks so that indexing still works when no sample brackets the burst
+                     lowers = [item for item in items if item <= burstIndex * length] or [items[0]]
+                     uppers = [item for item in items if item >= (burstIndex + 1) * length - 1] or [items[-1]]
+                     mask = [lowers[-1] <= item <= uppers[0] for item in items]
+                     items = [item - burstIndex * length for item, m in zip(items, mask) if m]
+                     noiseAzimuthVector['noiseAzimuthVector']['firstAzimuthLine'] = lowers[-1] - burstIndex * length
+                     noiseAzimuthVector['noiseAzimuthVector']['lastAzimuthLine'] = uppers[0] - burstIndex * length
+                     noiseAzimuthVector['noiseAzimuthVector']['line'] = {'@count': len(items), '#text': ' '.join([str(item) for item in items])}
+                     items = noiseAzimuthVector['noiseAzimuthVector']['noiseAzimuthLut']['#text'].split(' ')
+                     items = [item for item, m in zip(items, mask) if m]
+                     noiseAzimuthVector['noiseAzimuthVector']['noiseAzimuthLut'] = {'@count': len(items), '#text': ' '.join(items)}
+                     noise = noise | {'noiseAzimuthVectorList': noiseAzimuthVector}
+
+                 with open(xml_noise_file, 'w') as file:
+                     file.write(xmltodict.unparse({'noise': noise}, pretty=True, indent=' '))
+
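The noise-azimuth clipping above selects the annotation samples that bracket the burst interval `[burstIndex * length, (burstIndex + 1) * length - 1]`: the last sample at or below the burst start and the first sample at or past the burst end, so the bracketing samples may fall just outside the burst after rebasing. A standalone sketch with hypothetical sample lines (using list fallbacks, since the `or items[0]` form in the original would bind a bare int):

```python
# hypothetical noise-azimuth sample lines for a subswath;
# select the samples bracketing burst index 1 of length 10 (lines 10..19)
length, burst_index = 10, 1
lines = [0, 4, 9, 14, 19, 24]
lowers = [v for v in lines if v <= burst_index * length] or [lines[0]]
uppers = [v for v in lines if v >= (burst_index + 1) * length - 1] or [lines[-1]]
# keep samples between the last lower bound and the first upper bound
mask = [lowers[-1] <= v <= uppers[0] for v in lines]
# rebase the kept samples to burst-relative line numbers
selected = [v - burst_index * length for v, m in zip(lines, mask) if m]
```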
342
+ # output calibration xml
343
+ content = xmltodict.parse(xml_content)['burst']['metadata']['calibration'][subswathidx]
344
+ assert polarization == content['polarisation'], 'ERROR: XML polarization differs from burst polarization'
345
+ annotation = content['content']
346
+
347
+ calibration = {}
348
+
349
+ adsHeader = annotation['adsHeader']
350
+ adsHeader['startTime'] = start_utc
351
+ adsHeader['stopTime'] = stop_utc
352
+ adsHeader['imageNumber'] = '001'
353
+ calibration = calibration | {'adsHeader': adsHeader}
354
+
355
+ calibration = calibration | {'calibrationInformation': annotation['calibrationInformation']}
356
+
357
+ calibrationVector = annotation['calibrationVectorList']
358
+ items = filter_azimuth_time(calibrationVector['calibrationVector'], start_utc_dt, stop_utc_dt)
359
+ # re-numerate line numbers for the burst
360
+ for item in items: item['line'] = str(int(item['line']) - (length * burstIndex))
361
+ calibrationVector = {'@count': len(items), 'calibrationVector': items}
362
+ calibration = calibration | {'calibrationVectorList': calibrationVector}
363
+
364
+ with open(xml_calib_file, 'w') as file:
365
+ file.write(xmltodict.unparse({'calibration': calibration}, pretty=True, indent=' '))
366
+
367
+ # prepare an authorized connection
+ if session is None:
+ session = self._get_asf_session()
+
+ with tqdm(desc='ASF Downloading Bursts Catalog', total=1) as pbar:
+ results = asf_search.granule_search(bursts_missed)
+ pbar.update(1)
+
+ if n_jobs is None or debug:
+ print('Note: sequential joblib processing is applied when "n_jobs" is None or "debug" is True.')
+ joblib_backend = 'sequential'
+
+ def download_burst_with_retry(result, basedir, session, retries, timeout_second):
+ for retry in range(retries):
+ try:
+ download_burst(result, basedir, session)
+ return True
+ except Exception as e:
+ print(f'ERROR: download attempt {retry+1} failed for {result}: {e}')
+ if retry + 1 == retries:
+ return False
+ time.sleep(timeout_second)
+
+ # download bursts
+ with self.tqdm_joblib(tqdm(desc='ASF Downloading Sentinel-1 SLC Bursts', total=len(bursts_missed))) as progress_bar:
+ statuses = joblib.Parallel(n_jobs=n_jobs, backend=joblib_backend)(joblib.delayed(download_burst_with_retry)\
+ (result, basedir, session, retries=retries, timeout_second=timeout_second) for result in results)
+
+ failed_count = statuses.count(False)
+ if failed_count > 0:
+ raise Exception(f'Burst downloading failed for {failed_count} items.')
+ # collect the downloaded bursts into a dataframe
+ bursts_downloaded = pd.DataFrame(bursts_missed, columns=['burst'])
+ # return the results in a user-friendly dataframe
+ return bursts_downloaded
+
+ @staticmethod
+ def search(geometry, startTime=None, stopTime=None, flightDirection=None,
+ platform='SENTINEL-1', processingLevel='auto', polarization='VV', beamMode='IW'):
+ import geopandas as gpd
+ import asf_search
+ import shapely
+
+ # expand date-only bounds to cover the full day
+ if startTime is not None and len(startTime) == 10:
+ startTime = f'{startTime} 00:00:01'
+ if stopTime is not None and len(stopTime) == 10:
+ stopTime = f'{stopTime} 23:59:59'
+
+ if flightDirection == 'D':
+ flightDirection = 'DESCENDING'
+ elif flightDirection == 'A':
+ flightDirection = 'ASCENDING'
+
+ # convert to a single geometry
+ if isinstance(geometry, (gpd.GeoDataFrame, gpd.GeoSeries)):
+ geometry = geometry.geometry.union_all()
+ # convert a closed linestring to a polygon
+ if geometry.geom_type == 'LineString' and geometry.coords[0] == geometry.coords[-1]:
+ geometry = shapely.geometry.Polygon(geometry.coords)
+ if geometry.geom_type == 'Polygon':
+ # force counterclockwise orientation
+ geometry = shapely.geometry.polygon.orient(geometry, sign=1.0)
+
+ if processingLevel == 'auto' and platform == 'SENTINEL-1':
+ processingLevel = asf_search.PRODUCT_TYPE.BURST
+
+ # search bursts
+ results = asf_search.search(
+ start=startTime,
+ end=stopTime,
+ flightDirection=flightDirection,
+ intersectsWith=geometry.wkt,
+ platform=platform,
+ processingLevel=processingLevel,
+ polarization=polarization,
+ beamMode=beamMode,
+ )
+ return gpd.GeoDataFrame.from_features([product.geojson() for product in results], crs="EPSG:4326")
+
+ @staticmethod
+ def plot(bursts):
+ import pandas as pd
+ import matplotlib
+ import matplotlib.pyplot as plt
+
+ bursts['date'] = pd.to_datetime(bursts['startTime']).dt.strftime('%Y-%m-%d')
+ bursts['label'] = bursts.apply(lambda rec: f"{rec['flightDirection'].replace('E','')[:3]} {rec['date']} [{rec['pathNumber']}]", axis=1)
+ unique_labels = sorted(bursts['label'].unique())
+ unique_paths = sorted(bursts['pathNumber'].astype(str).unique())
+ colors = {label[-4:-1]: 'orange' if label[0] == 'A' else 'cyan' for label in unique_labels}
+ fig, ax = plt.subplots(figsize=(10, 8))
+ for label, group in bursts.groupby('label'):
+ group.plot(ax=ax, edgecolor=colors[label[-4:-1]], facecolor='none', linewidth=1, alpha=1, label=label)
+ burst_handles = [matplotlib.lines.Line2D([0], [0], color=colors[label[-4:-1]], lw=1, label=label) for label in unique_labels]
+ aoi_handle = matplotlib.lines.Line2D([0], [0], color='red', lw=1, label='AOI')
+ handles = burst_handles + [aoi_handle]
+ ax.legend(handles=handles, loc='upper right')
+ ax.set_title('Sentinel-1 Burst Footprints')
+ ax.set_xlabel('Longitude')
+ ax.set_ylabel('Latitude')