cavapy 0.1.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.


cavapy-0.1.0.dist-info/LICENSE ADDED
@@ -0,0 +1,21 @@
+ Copyright (c) Food and Agriculture Organization of the United Nations. All rights reserved.
+
+ MIT License
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
cavapy-0.1.0.dist-info/METADATA ADDED
@@ -0,0 +1,77 @@
+ Metadata-Version: 2.1
+ Name: cavapy
+ Version: 0.1.0
+ Summary: CAVA Python package. Retrieve and analyze climate data.
+ Home-page: https://github.com/Risk-Team/cavapy
+ License: MIT
+ Author: Riccardo Soldan
+ Author-email: riccardosoldan@hotmail.it
+ Requires-Python: >=3.11,<4.0
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Requires-Dist: bottleneck (>=1.4.2,<2.0.0)
+ Requires-Dist: dask (>=2024.11.2,<2025.0.0)
+ Requires-Dist: geopandas (>=0.14.4,<0.15.0)
+ Requires-Dist: llvmlite (==0.43.0)
+ Requires-Dist: matplotlib (>=3.9.2,<4.0.0)
+ Requires-Dist: netcdf4 (>=1.7.2,<2.0.0)
+ Requires-Dist: xclim (>=0.53.2,<0.54.0)
+ Project-URL: Repository, https://github.com/Risk-Team/cavapy
+ Description-Content-Type: text/markdown
+
+ #### Status
+
+ **Beta testing**
+
+ # Introduction
+ This project provides a Python function that automatically retrieves the climate data required to run pyAEZ. It can serve as an optional feature of the pyAEZ climate module.
+
+ ## Data source
+ The climate data is available on the THREDDS data server of the University of Cantabria as part of CAVA (Climate and Agriculture Risk Visualization and Assessment), a product developed by FAO, the University of Cantabria, the University of Cape Town and Predictia.
+ CAVA provides the CORDEX-CORE climate models: the high-resolution (25 km), dynamically downscaled climate models used in the IPCC AR5 report. Additionally, CAVA offers access to state-of-the-art reanalysis datasets, such as W5E5 and ERA5.
+
+ The currently available data is:
+
+ - CORDEX-CORE simulations (3 GCMs downscaled with 2 RCMs for two RCPs)
+ - W5E5 and ERA5 reanalysis datasets
+
+ Available variables are:
+
+ - Daily maximum temperature (tasmax)
+ - Daily minimum temperature (tasmin)
+ - Daily precipitation (pr)
+ - Daily relative humidity (hurs)
+ - Daily wind speed (sfcWind)
+ - Daily solar radiation (rsds)
+
+ ## Usage
+ The function can be downloaded from the script folder and imported, for example, as follows:
+
+ ```
+ import os
+ os.chdir('/path/to/function')
+ import cavapy
+ # check documentation
+ help(cavapy.get_climate_data)
+ ```
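+
+ Since this release is also published as a wheel, installing it with pip should make the module importable without the os.chdir step; a minimal sketch, assuming the package is available on PyPI under the name cavapy:
+
+ ```
+ pip install cavapy
+ ```
+
+ ```
+ import cavapy
+ help(cavapy.get_climate_data)
+ ```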
+ Depending on your needs, downloading climate data can be done in a few different ways. Note that GCM stands for General Circulation Model and RCM for Regional Climate Model. As the climate data comes from the CORDEX-CORE initiative, users can choose between 3 different GCMs, each downscaled with two RCMs. In total, there are six simulations for any given domain (except for CAS-22, where only three are available).
+
+ Since bias correction requires both the historical run of the CORDEX model and an observational dataset (in this case ERA5), the historical run is used for learning the bias-correction factor even when the historical argument is set to False.
+
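+ Internally, the bias correction is monthly empirical quantile mapping from xclim. A minimal sketch of the mechanism, mirroring the calls made in cavapy.py (here ref, hist and proj stand in for the observational, historical and projected DataArrays):
+
+ ```
+ from xclim import sdba
+
+ # Learn a monthly quantile-mapping correction from observations (ref)
+ # and the historical run (hist); kind="+" is additive, "*" multiplicative
+ QM = sdba.EmpiricalQuantileMapping.train(ref, hist, group="time.month", kind="+")
+ # Apply the learned correction factor to the projections
+ proj_bs = QM.adjust(proj, extrapolation="constant", interp="linear")
+ ```
+
+ The examples below show how these combinations are driven through get_climate_data:
+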
+ ```
+ ### Bias-corrected climate projections
+ Zambia_climate_data = cavapy.get_climate_data(
+     country="Zambia", cordex_domain="AFR-22", rcp="rcp26", gcm="MPI", rcm="REMO",
+     years_up_to=2030, obs=False, bias_correction=True, historical=False,
+ )
+ ### Non bias-corrected climate projections
+ Zambia_climate_data = cavapy.get_climate_data(
+     country="Zambia", cordex_domain="AFR-22", rcp="rcp26", gcm="MPI", rcm="REMO",
+     years_up_to=2030, obs=False, bias_correction=False, historical=False,
+ )
+ ### Bias-corrected climate projections plus the historical run. This is useful when
+ ### assessing changes in crop yield relative to the historical period. In this case,
+ ### the bias-corrected historical run of the climate models is returned together
+ ### with the projections.
+ Zambia_climate_data = cavapy.get_climate_data(
+     country="Zambia", cordex_domain="AFR-22", rcp="rcp26", gcm="MPI", rcm="REMO",
+     years_up_to=2030, obs=False, bias_correction=True, historical=True,
+ )
+ ### Observations only (ERA5)
+ Zambia_climate_data = cavapy.get_climate_data(
+     country="Zambia", cordex_domain="AFR-22", rcp="rcp26", gcm="MPI", rcm="REMO",
+     years_up_to=2030, obs=True, bias_correction=True, historical=True,
+     years_obs=range(1980, 2019),
+ )
+ ```
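+
+ Each call returns a dictionary with one xarray DataArray per variable (tasmax, tasmin, pr, hurs, sfcWind, rsds). A minimal sketch of inspecting the result, assuming the first example above has run:
+
+ ```
+ pr = Zambia_climate_data["pr"]
+ print(pr.dims, pr.attrs.get("units"))  # daily fields on a latitude/longitude grid, in mm
+ # Regional annual precipitation series
+ annual_pr = pr.resample(time="YS").sum().mean(dim=["latitude", "longitude"])
+ ```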
cavapy-0.1.0.dist-info/RECORD ADDED
@@ -0,0 +1,5 @@
+ cavapy.py,sha256=epmNP1X7JZxsvunWHFve_IpR4hX9o8UMkd_iTxgKj8A,21690
+ cavapy-0.1.0.dist-info/LICENSE,sha256=RQwEQjhceDnbQwzPXq77-QbteBn1Rh94TaOxgQN-CXw,1130
+ cavapy-0.1.0.dist-info/METADATA,sha256=Xi6vHoxzvflSblsYHMlWlQizNNcuXX1XR6K0SP1fLEs,4132
+ cavapy-0.1.0.dist-info/WHEEL,sha256=Nq82e9rUAnEjt98J6MlVmMCZb-t9cYE2Ir1kpBmnWfs,88
+ cavapy-0.1.0.dist-info/RECORD,,
cavapy-0.1.0.dist-info/WHEEL ADDED
@@ -0,0 +1,4 @@
+ Wheel-Version: 1.0
+ Generator: poetry-core 1.9.1
+ Root-Is-Purelib: true
+ Tag: py3-none-any
cavapy.py ADDED
@@ -0,0 +1,631 @@
+ import os
+ import multiprocessing as mp
+ from concurrent.futures import ThreadPoolExecutor
+ from functools import partial
+ import logging
+ import warnings
+
+ warnings.filterwarnings(
+     "ignore",
+     category=FutureWarning,
+     message=".*geopandas.dataset module is deprecated.*",
+ )
+ import geopandas as gpd  # noqa: E402
+ import pandas as pd  # noqa: E402
+ import xarray as xr  # noqa: E402
+ import numpy as np  # noqa: E402
+ from xclim import sdba  # noqa: E402
+
+
+ logger = logging.getLogger("climate")
+ logger.handlers = []  # Remove any existing handlers
+ handler = logging.StreamHandler()
+ formatter = logging.Formatter(
+     "%(asctime)s | %(name)s | %(process)d:%(thread)d [%(levelname)s]: %(message)s"
+ )
+ handler.setFormatter(formatter)
+ logger.addHandler(handler)
+ logger.setLevel(logging.DEBUG)
+
+ # Mapping from CORDEX variable names to their ERA5 counterparts
+ VARIABLES_MAP = {
+     "pr": "tp",
+     "tasmax": "t2mx",
+     "tasmin": "t2mn",
+     "hurs": "hurs",
+     "sfcWind": "sfcwind",
+     "rsds": "ssrd",
+ }
+ VALID_VARIABLES = list(VARIABLES_MAP)
+ # TODO: Throw an error if the selected country is not in the selected domain
+ VALID_DOMAINS = [
+     "NAM-22",
+     "EUR-22",
+     "AFR-22",
+     "EAS-22",
+     "SEA-22",
+     "WAS-22",
+     "AUS-22",
+     "SAM-22",
+     "CAM-22",
+ ]
+ VALID_RCPS = ["rcp26", "rcp85"]
+ VALID_GCM = ["MOHC", "MPI", "NCC"]
+ VALID_RCM = ["REMO", "Reg"]
+
+ INVENTORY_DATA_REMOTE_URL = (
+     "https://hub.ipcc.ifca.es/thredds/fileServer/inventories/cava.csv"
+ )
+ INVENTORY_DATA_LOCAL_PATH = os.path.join(
+     os.path.expanduser("~"), "shared/inventories/cava/inventory.csv"
+ )
+ ERA5_DATA_REMOTE_URL = (
+     "https://hub.ipcc.ifca.es/thredds/dodsC/fao/observations/ERA5/0.25/ERA5_025.ncml"
+ )
+ ERA5_DATA_LOCAL_PATH = os.path.join(
+     os.path.expanduser("~"), "shared/data/observations/ERA5/0.25/ERA5_025.ncml"
+ )
+ DEFAULT_YEARS_OBS = range(1980, 2006)
+
+
+ def get_climate_data(
+     *,
+     country: str | None,
+     cordex_domain: str,
+     rcp: str,
+     gcm: str,
+     rcm: str,
+     years_up_to: int,
+     years_obs: range | None = None,
+     bias_correction: bool = False,
+     historical: bool = False,
+     obs: bool = False,
+     buffer: int = 0,
+     xlim: tuple[float, float] | None = None,
+     ylim: tuple[float, float] | None = None,
+     remote: bool = True,
+     num_processes: int = len(VALID_VARIABLES),
+     max_threads_per_process: int = 8,
+ ) -> dict[str, xr.DataArray]:
+     """
+     Process climate data required by the pyAEZ climate module.
+     The function automatically accesses CORDEX-CORE models at 0.25° and the ERA5 dataset.
+
+     Args:
+         country (str): Name of the country for which data is to be processed.
+             Use None if specifying a region using xlim and ylim.
+         cordex_domain (str): CORDEX domain of the climate data. One of "NAM-22",
+             "EUR-22", "AFR-22", "EAS-22", "SEA-22", "WAS-22", "AUS-22", "SAM-22", "CAM-22".
+         rcp (str): Representative Concentration Pathway. One of "rcp26", "rcp85".
+         gcm (str): GCM name. One of "MOHC", "MPI", "NCC".
+         rcm (str): RCM name. One of "REMO", "Reg".
+         years_up_to (int): The ending year for the projected data. Projections start
+             in 2006 and end in 2100. Hence, if years_up_to is set to 2030, data will
+             be downloaded for the 2006-2030 period.
+         years_obs (range): Range of years for observational data (ERA5 only).
+             Only used when obs is True (default: None).
+         bias_correction (bool): Whether to apply bias correction (default: False).
+         historical (bool): Flag to indicate if processing historical data (default: False).
+             If True, historical data is provided together with projections.
+             Historical simulation runs for the CORDEX-CORE initiative cover the
+             1980-2005 time period.
+         obs (bool): Flag to indicate if processing observational data (default: False).
+         buffer (int): Buffer distance to expand the region of interest (default: 0).
+         xlim (tuple or None): Longitudinal bounds of the region of interest.
+             Use only when country is None (default: None).
+         ylim (tuple or None): Latitudinal bounds of the region of interest.
+             Use only when country is None (default: None).
+         remote (bool): Flag to work with remote data or not (default: True).
+         num_processes (int): Number of processes to use, one per variable.
+             Defaults to the number of available variables (6).
+         max_threads_per_process (int): Max number of threads within each process (default: 8).
+
+     Returns:
+         dict: A dictionary containing the processed climate data for each variable
+             as an xarray object.
+     """
+
+     if (xlim is None) != (ylim is None):
+         raise ValueError(
+             "xlim and ylim mismatch: they must be both specified or both unspecified"
+         )
+     if country is None and xlim is None:
+         raise ValueError("You must specify a country or (xlim, ylim)")
+     if country is not None and xlim is not None:
+         raise ValueError("You must specify either country or (xlim, ylim), not both")
+     verify_variables = {
+         "cordex_domain": VALID_DOMAINS,
+         "rcp": VALID_RCPS,
+         "gcm": VALID_GCM,
+         "rcm": VALID_RCM,
+     }
+     for var_name, valid_values in verify_variables.items():
+         var_value = locals()[var_name]
+         if var_value not in valid_values:
+             raise ValueError(
+                 f"Invalid {var_name}={var_value}. Must be one of {valid_values}"
+             )
+     if years_up_to <= 2006:
+         raise ValueError("years_up_to must be greater than 2006")
+     if years_obs is not None and not (1980 <= min(years_obs) <= max(years_obs) <= 2020):
+         raise ValueError("Years in years_obs must be within the range 1980 to 2020")
+     if obs and years_obs is None:
+         raise ValueError("years_obs must be provided when obs is True")
+     if not obs or years_obs is None:
+         # Make sure years_obs is set to the default when obs=False
+         years_obs = DEFAULT_YEARS_OBS
+
+     _validate_urls(gcm, rcm, rcp, remote, cordex_domain, obs)
+
+     bbox = _geo_localize(country, xlim, ylim, buffer, cordex_domain)
+
+     # One worker process per variable; each worker runs its own thread pool
+     with mp.Pool(processes=num_processes) as pool:
+         futures = []
+         for variable in VALID_VARIABLES:
+             futures.append(
+                 pool.apply_async(
+                     process_worker,
+                     args=(max_threads_per_process,),
+                     kwds={
+                         "variable": variable,
+                         "bbox": bbox,
+                         "cordex_domain": cordex_domain,
+                         "rcp": rcp,
+                         "gcm": gcm,
+                         "rcm": rcm,
+                         "years_up_to": years_up_to,
+                         "years_obs": years_obs,
+                         "obs": obs,
+                         "bias_correction": bias_correction,
+                         "historical": historical,
+                         "remote": remote,
+                     },
+                 )
+             )
+
+         results = {
+             variable: futures[i].get() for i, variable in enumerate(VALID_VARIABLES)
+         }
+
+         pool.close()  # Prevent any more tasks from being submitted to the pool
+         pool.join()  # Wait for all worker processes to finish
+
+     return results
+
+
+ def _validate_urls(
+     gcm: str | None = None,
+     rcm: str | None = None,
+     rcp: str | None = None,
+     remote: bool = True,
+     cordex_domain: str | None = None,
+     obs: bool = False,
+ ):
+     # Load the inventory and log the dataset URLs that will be used
+     log = logger.getChild("URLs validation")
+
+     if obs is False:
+         inventory_csv_url = (
+             INVENTORY_DATA_REMOTE_URL if remote else INVENTORY_DATA_LOCAL_PATH
+         )
+         data = pd.read_csv(inventory_csv_url)
+
+         # Set the column to use based on whether the data is remote or local
+         column_to_use = "location" if remote else "hub"
+
+         # Filter the inventory down to the requested GCM/RCM/experiment
+         filtered_data = data[
+             lambda x: (
+                 x["activity"].str.contains("FAO", na=False)
+                 & (x["domain"] == cordex_domain)
+                 & (x["model"].str.contains(gcm, na=False))
+                 & (x["rcm"].str.contains(rcm, na=False))
+                 & (x["experiment"].isin([rcp, "historical"]))
+             )
+         ][["experiment", column_to_use]]
+
+         num_rows = filtered_data.shape[0]
+         column_values = filtered_data[column_to_use]
+
+         if num_rows == 1:
+             # Only the projection run is available
+             row1 = column_values.iloc[0]
+             log.info(f"Projections: {row1}")
+         else:
+             # Historical run first, projection run second
+             row1 = column_values.iloc[0]
+             row2 = column_values.iloc[1]
+             log.info(f"Historical simulation: {row1}")
+             log.info(f"Projections: {row2}")
+     else:  # when obs is True
+         log.info(
+             f"Observations: {ERA5_DATA_REMOTE_URL if remote else ERA5_DATA_LOCAL_PATH}"
+         )
+
+
+ def _geo_localize(
+     country: str | None = None,
+     xlim: tuple[float, float] | None = None,
+     ylim: tuple[float, float] | None = None,
+     buffer: int = 0,
+     cordex_domain: str | None = None,
+ ) -> dict[str, tuple[float, float]]:
+     if country:
+         if xlim or ylim:
+             raise ValueError(
+                 "Specify either a country or bounding box limits (xlim, ylim), but not both."
+             )
+         # Load country shapefile and extract bounds
+         world = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))
+         country_shp = world[world.name == country]
+         if country_shp.empty:
+             raise ValueError(f"Country '{country}' is unknown.")
+         bounds = country_shp.total_bounds  # [minx, miny, maxx, maxy]
+         xlim, ylim = (bounds[0], bounds[2]), (bounds[1], bounds[3])
+     elif not (xlim and ylim):
+         raise ValueError(
+             "Either a country or bounding box limits (xlim, ylim) must be specified."
+         )
+
+     # Apply buffer
+     xlim = (xlim[0] - buffer, xlim[1] + buffer)
+     ylim = (ylim[0] - buffer, ylim[1] + buffer)
+
+     # Validate the CORDEX domain when one is given
+     if cordex_domain:
+         _validate_cordex_domain(xlim, ylim, cordex_domain)
+
+     return {"xlim": xlim, "ylim": ylim}
+
+
+ def _validate_cordex_domain(xlim, ylim, cordex_domain):
+     # Bounding boxes of the CORDEX domains
+     cordex_domains_df = pd.DataFrame(
+         {
+             "min_lon": [
+                 -33,
+                 -28.3,
+                 89.25,
+                 86.75,
+                 19.25,
+                 44.0,
+                 -106.25,
+                 -115.0,
+                 -24.25,
+                 10.75,
+             ],
+             "min_lat": [
+                 -28,
+                 -23,
+                 -15.25,
+                 -54.25,
+                 -15.75,
+                 -4.0,
+                 -58.25,
+                 -14.5,
+                 -46.25,
+                 17.75,
+             ],
+             "max_lon": [
+                 20,
+                 18,
+                 147.0,
+                 -152.75,
+                 116.25,
+                 -172.0,
+                 -16.25,
+                 -30.5,
+                 59.75,
+                 140.25,
+             ],
+             "max_lat": [28, 21.7, 26.5, 13.75, 45.75, 65.0, 18.75, 28.5, 42.75, 69.75],
+             "cordex_domain": [
+                 "NAM-22",
+                 "EUR-22",
+                 "SEA-22",
+                 "AUS-22",
+                 "WAS-22",
+                 "EAS-22",
+                 "SAM-22",
+                 "CAM-22",
+                 "AFR-22",
+                 "CAS-22",
+             ],
+         }
+     )
+
+     def is_bbox_contained(bbox, domain):
+         """Check if bbox is contained within the domain bounding box."""
+         return (
+             bbox[0] >= domain["min_lon"]
+             and bbox[1] >= domain["min_lat"]
+             and bbox[2] <= domain["max_lon"]
+             and bbox[3] <= domain["max_lat"]
+         )
+
+     user_bbox = [xlim[0], ylim[0], xlim[1], ylim[1]]
+     domain_row = cordex_domains_df[cordex_domains_df["cordex_domain"] == cordex_domain]
+
+     if domain_row.empty:
+         raise ValueError(f"CORDEX domain '{cordex_domain}' is not recognized.")
+
+     domain_bbox = domain_row.iloc[0]
+
+     if not is_bbox_contained(user_bbox, domain_bbox):
+         # Suggest any domain that does contain the requested bounding box
+         suggested_domains = cordex_domains_df[
+             cordex_domains_df.apply(
+                 lambda row: is_bbox_contained(user_bbox, row), axis=1
+             )
+         ]
+
+         if suggested_domains.empty:
+             raise ValueError(
+                 f"The bounding box {user_bbox} is outside of all available CORDEX domains."
+             )
+
+         suggested_domain = suggested_domains.iloc[0]["cordex_domain"]
+
+         raise ValueError(
+             f"Bounding box {user_bbox} is not within '{cordex_domain}'. Suggested domain: '{suggested_domain}'."
+         )
+
+
+ def process_worker(num_threads, **kwargs) -> xr.DataArray:
+     variable = kwargs["variable"]
+     log = logger.getChild(variable)
+     try:
+         # Each worker process uses its own thread pool for concurrent downloads
+         with ThreadPoolExecutor(
+             max_workers=num_threads, thread_name_prefix="climate"
+         ) as executor:
+             return _climate_data_for_variable(executor, **kwargs)
+     except Exception as e:
+         log.exception(f"Process worker failed: {e}")
+         raise
+
+
+ def _climate_data_for_variable(
+     executor: ThreadPoolExecutor,
+     *,
+     variable: str,
+     bbox: dict[str, tuple[float, float]],
+     cordex_domain: str,
+     rcp: str,
+     gcm: str,
+     rcm: str,
+     years_up_to: int,
+     years_obs: range,
+     obs: bool,
+     bias_correction: bool,
+     historical: bool,
+     remote: bool,
+ ) -> xr.DataArray:
+     log = logger.getChild(variable)
+
+     pd.options.mode.chained_assignment = None
+     inventory_csv_url = (
+         INVENTORY_DATA_REMOTE_URL if remote else INVENTORY_DATA_LOCAL_PATH
+     )
+     data = pd.read_csv(inventory_csv_url)
+     column_to_use = "location" if remote else "hub"
+     filtered_data = data[
+         lambda x: (x["activity"].str.contains("FAO", na=False))
+         & (x["domain"] == cordex_domain)
+         & (x["model"].str.contains(gcm, na=False))
+         & (x["rcm"].str.contains(rcm, na=False))
+         & (x["experiment"].isin([rcp, "historical"]))
+     ][["experiment", column_to_use]]
+
+     # Observations are needed both when obs=True and as the bias-correction reference
+     future_obs = None
+     if obs or bias_correction:
+         future_obs = executor.submit(
+             _thread_download_data,
+             url=None,
+             bbox=bbox,
+             variable=variable,
+             obs=True,
+             years_up_to=years_up_to,
+             years_obs=years_obs,
+             remote=remote,
+         )
+
+     if not obs:
+         download_fn = partial(
+             _thread_download_data,
+             bbox=bbox,
+             variable=variable,
+             obs=False,
+             years_obs=years_obs,
+             years_up_to=years_up_to,
+             remote=remote,
+         )
+         downloaded_models = list(
+             executor.map(download_fn, filtered_data[column_to_use])
+         )
+
+         # Add the downloaded models to the DataFrame
+         filtered_data["models"] = downloaded_models
+         log.info("Interpolating missing values")
+         hist = (
+             filtered_data["models"].iloc[0].interpolate_na(dim="time", method="linear")
+         )
+         proj = (
+             filtered_data["models"].iloc[1].interpolate_na(dim="time", method="linear")
+         )
+         log.info("Missing values interpolated")
+
+         if bias_correction and historical:
+             # Load observations for bias correction
+             ref = future_obs.result()
+             log.info("Training eqm with historical data")
+             QM_mo = sdba.EmpiricalQuantileMapping.train(
+                 ref,
+                 hist,
+                 group="time.month",
+                 kind="*" if variable in ["pr", "rsds", "sfcWind"] else "+",
+             )
+             log.info("Performing bias correction with eqm")
+             hist_bs = QM_mo.adjust(hist, extrapolation="constant", interp="linear")
+             proj_bs = QM_mo.adjust(proj, extrapolation="constant", interp="linear")
+             log.info("Done!")
+             if variable == "hurs":
+                 # Clamp relative humidity to the physical 0-100% range
+                 hist_bs = hist_bs.where(hist_bs <= 100, 100)
+                 hist_bs = hist_bs.where(hist_bs >= 0, 0)
+                 proj_bs = proj_bs.where(proj_bs <= 100, 100)
+                 proj_bs = proj_bs.where(proj_bs >= 0, 0)
+             combined = xr.concat([hist_bs, proj_bs], dim="time")
+             return combined
+
+         elif not bias_correction and historical:
+             combined = xr.concat([hist, proj], dim="time")
+             return combined
+
+         elif bias_correction and not historical:
+             ref = future_obs.result()
+             log.info("Training eqm with historical data")
+             QM_mo = sdba.EmpiricalQuantileMapping.train(
+                 ref,
+                 hist,
+                 group="time.month",
+                 kind="*" if variable in ["pr", "rsds", "sfcWind"] else "+",
+             )  # multiplicative approach for pr, rsds and wind speed
+             log.info("Performing bias correction with eqm")
+             proj_bs = QM_mo.adjust(proj, extrapolation="constant", interp="linear")
+             log.info("Done!")
+             if variable == "hurs":
+                 proj_bs = proj_bs.where(proj_bs <= 100, 100)
+                 proj_bs = proj_bs.where(proj_bs >= 0, 0)
+             return proj_bs
+
+         return proj
+
+     else:  # when observations are requested
+         downloaded_obs = future_obs.result()
+         log.info("Done!")
+         return downloaded_obs
+
+
+ def _thread_download_data(url: str | None, **kwargs):
+     variable = kwargs["variable"]
+     log = logger.getChild(variable)
+     try:
+         return _download_data(url=url, **kwargs)
+     except Exception as e:
+         log.exception(f"Failed to download data from {url}: {e}")
+         raise
+
+
+ def _download_data(
+     url: str | None,
+     bbox: dict[str, tuple[float, float]],
+     variable: str,
+     obs: bool,
+     years_obs: range,
+     years_up_to: int,
+     remote: bool,
+ ) -> xr.DataArray:
+     log = logger.getChild(variable)
+     if obs:
+         var = VARIABLES_MAP[variable]
+         log.info(f"Downloading observational data for {variable}({var})")
+         if remote:
+             ds_var = xr.open_dataset(ERA5_DATA_REMOTE_URL)[var]
+         else:
+             ds_var = xr.open_dataset(ERA5_DATA_LOCAL_PATH)[var]
+         log.info(f"Observational data for {variable}({var}) has been downloaded")
+
+         # Coordinate normalization and renaming for 'hurs'
+         if var == "hurs":
+             ds_var = ds_var.rename({"lat": "latitude", "lon": "longitude"})
+             ds_cropped = ds_var.sel(
+                 longitude=slice(bbox["xlim"][0], bbox["xlim"][1]),
+                 latitude=slice(bbox["ylim"][0], bbox["ylim"][1]),
+             )
+         else:
+             # Shift longitudes from 0..360 to -180..180 before cropping
+             ds_var.coords["longitude"] = (ds_var.coords["longitude"] + 180) % 360 - 180
+             ds_var = ds_var.sortby(ds_var.longitude)
+             ds_cropped = ds_var.sel(
+                 longitude=slice(bbox["xlim"][0], bbox["xlim"][1]),
+                 latitude=slice(bbox["ylim"][1], bbox["ylim"][0]),
+             )
+
+         # Unit conversion
+         if var in ["t2mx", "t2mn", "t2m"]:
+             ds_cropped -= 273.15  # Convert from Kelvin to Celsius
+             ds_cropped.attrs["units"] = "°C"
+         elif var == "tp":
+             ds_cropped *= 1000  # Convert precipitation from m to mm
+             ds_cropped.attrs["units"] = "mm"
+         elif var == "ssrd":
+             ds_cropped /= 86400  # Convert daily-accumulated J/m^2 to W/m^2
+             ds_cropped.attrs["units"] = "W m-2"
+         elif var == "sfcwind":
+             ds_cropped = ds_cropped * (
+                 4.87 / np.log((67.8 * 10) - 5.42)
+             )  # Scale wind speed from 10 m to 2 m (FAO-56 log wind profile)
+             ds_cropped.attrs["units"] = "m s-1"
+
+         # Select years
+         years = list(years_obs)
+         time_mask = (ds_cropped["time"].dt.year >= years[0]) & (
+             ds_cropped["time"].dt.year <= years[-1]
+         )
+
+     else:
+         log.info(f"Downloading CORDEX data for {variable}")
+         ds_var = xr.open_dataset(url)[variable]
+         log.info(f"CORDEX data for {variable} has been downloaded")
+         ds_cropped = ds_var.sel(
+             longitude=slice(bbox["xlim"][0], bbox["xlim"][1]),
+             latitude=slice(bbox["ylim"][1], bbox["ylim"][0]),
+         )
+
+         # Unit conversion
+         if variable in ["tas", "tasmax", "tasmin"]:
+             ds_cropped -= 273.15  # Convert from Kelvin to Celsius
+             ds_cropped.attrs["units"] = "°C"
+         elif variable == "pr":
+             ds_cropped *= 86400  # Convert from kg m^-2 s^-1 to mm/day
+             ds_cropped.attrs["units"] = "mm"
+         elif variable == "rsds":
+             ds_cropped.attrs["units"] = "W m-2"
+         elif variable == "sfcWind":
+             ds_cropped = ds_cropped * (
+                 4.87 / np.log((67.8 * 10) - 5.42)
+             )  # Scale wind speed from 10 m to 2 m (FAO-56 log wind profile)
+             ds_cropped.attrs["units"] = "m s-1"
+
+         # Select years based on the experiment (projection vs historical)
+         if "rcp" in url:
+             years = list(range(2006, years_up_to + 1))
+         else:
+             years = list(DEFAULT_YEARS_OBS)
+
+         # Add missing dates
+         ds_cropped = ds_cropped.convert_calendar(
+             calendar="gregorian", missing=np.nan, align_on="date"
+         )
+         log.debug(
+             "Non-Gregorian calendar converted into the Gregorian calendar; missing dates filled with NaN"
+         )
+
+         time_mask = (ds_cropped["time"].dt.year >= years[0]) & (
+             ds_cropped["time"].dt.year <= years[-1]
+         )
+
+     # Subset years
+     ds_cropped = ds_cropped.sel(time=time_mask)
+
+     assert isinstance(ds_cropped, xr.DataArray)
+
+     log.info(
+         f"{'Observational' if obs else 'CORDEX'} data for {variable} has been processed"
+     )
+
+     return ds_cropped
+
+
+ if __name__ == "__main__":
+     data = get_climate_data(
+         country="Zambia",
+         cordex_domain="AFR-22",
+         rcp="rcp26",
+         gcm="MPI",
+         rcm="REMO",
+         years_up_to=2030,
+         obs=False,
+         bias_correction=True,
+         historical=False,
+     )
+     print(data)