stac-fastapi-opensearch 4.0.0a2__tar.gz → 4.2.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (17)
  1. {stac_fastapi_opensearch-4.0.0a2 → stac_fastapi_opensearch-4.2.0}/PKG-INFO +8 -4
  2. {stac_fastapi_opensearch-4.0.0a2 → stac_fastapi_opensearch-4.2.0}/README.md +7 -3
  3. {stac_fastapi_opensearch-4.0.0a2 → stac_fastapi_opensearch-4.2.0}/setup.py +1 -1
  4. {stac_fastapi_opensearch-4.0.0a2 → stac_fastapi_opensearch-4.2.0}/stac_fastapi/opensearch/app.py +49 -23
  5. {stac_fastapi_opensearch-4.0.0a2 → stac_fastapi_opensearch-4.2.0}/stac_fastapi/opensearch/config.py +26 -2
  6. {stac_fastapi_opensearch-4.0.0a2 → stac_fastapi_opensearch-4.2.0}/stac_fastapi/opensearch/database_logic.py +411 -83
  7. stac_fastapi_opensearch-4.2.0/stac_fastapi/opensearch/version.py +2 -0
  8. {stac_fastapi_opensearch-4.0.0a2 → stac_fastapi_opensearch-4.2.0}/stac_fastapi_opensearch.egg-info/PKG-INFO +8 -4
  9. {stac_fastapi_opensearch-4.0.0a2 → stac_fastapi_opensearch-4.2.0}/stac_fastapi_opensearch.egg-info/requires.txt +1 -1
  10. stac_fastapi_opensearch-4.0.0a2/stac_fastapi/opensearch/version.py +0 -2
  11. {stac_fastapi_opensearch-4.0.0a2 → stac_fastapi_opensearch-4.2.0}/setup.cfg +0 -0
  12. {stac_fastapi_opensearch-4.0.0a2 → stac_fastapi_opensearch-4.2.0}/stac_fastapi/opensearch/__init__.py +0 -0
  13. {stac_fastapi_opensearch-4.0.0a2 → stac_fastapi_opensearch-4.2.0}/stac_fastapi_opensearch.egg-info/SOURCES.txt +0 -0
  14. {stac_fastapi_opensearch-4.0.0a2 → stac_fastapi_opensearch-4.2.0}/stac_fastapi_opensearch.egg-info/dependency_links.txt +0 -0
  15. {stac_fastapi_opensearch-4.0.0a2 → stac_fastapi_opensearch-4.2.0}/stac_fastapi_opensearch.egg-info/entry_points.txt +0 -0
  16. {stac_fastapi_opensearch-4.0.0a2 → stac_fastapi_opensearch-4.2.0}/stac_fastapi_opensearch.egg-info/not-zip-safe +0 -0
  17. {stac_fastapi_opensearch-4.0.0a2 → stac_fastapi_opensearch-4.2.0}/stac_fastapi_opensearch.egg-info/top_level.txt +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: stac_fastapi_opensearch
- Version: 4.0.0a2
+ Version: 4.2.0
  Summary: Opensearch stac-fastapi backend.
  Home-page: https://github.com/stac-utils/stac-fastapi-elasticsearch-opensearch
  License: MIT
@@ -125,6 +125,7 @@ You can customize additional settings in your `.env` file:
  | `STAC_FASTAPI_TITLE` | Title of the API in the documentation. | `stac-fastapi-elasticsearch` or `stac-fastapi-opensearch` | Optional |
  | `STAC_FASTAPI_DESCRIPTION` | Description of the API in the documentation. | N/A | Optional |
  | `STAC_FASTAPI_VERSION` | API version. | `2.1` | Optional |
+ | `STAC_FASTAPI_LANDING_PAGE_ID` | Landing page ID | `stac-fastapi` | Optional |
  | `APP_HOST` | Server bind address. | `0.0.0.0` | Optional |
  | `APP_PORT` | Server port. | `8080` | Optional |
  | `ENVIRONMENT` | Runtime environment. | `local` | Optional |
@@ -132,9 +133,12 @@ You can customize additional settings in your `.env` file:
  | `RELOAD` | Enable auto-reload for development. | `true` | Optional |
  | `STAC_FASTAPI_RATE_LIMIT` | API rate limit per client. | `200/minute` | Optional |
  | `BACKEND` | Tests-related variable | `elasticsearch` or `opensearch` based on the backend | Optional |
- | `ELASTICSEARCH_VERSION` | Version of Elasticsearch to use. | `8.11.0` | Optional |
- | `ENABLE_DIRECT_RESPONSE` | Enable direct response for maximum performance (disables all FastAPI dependencies, including authentication, custom status codes, and validation) | `false` | Optional |
- | `OPENSEARCH_VERSION` | OpenSearch version | `2.11.1` | Optional |
+ | `ELASTICSEARCH_VERSION` | Version of Elasticsearch to use. | `8.11.0` | Optional |
+ | `OPENSEARCH_VERSION` | OpenSearch version | `2.11.1` | Optional |
+ | `ENABLE_DIRECT_RESPONSE` | Enable direct response for maximum performance (disables all FastAPI dependencies, including authentication, custom status codes, and validation) | `false` | Optional |
+ | `RAISE_ON_BULK_ERROR` | Controls whether bulk insert operations raise exceptions on errors. If set to `true`, the operation will stop and raise an exception when an error occurs. If set to `false`, errors will be logged, and the operation will continue. **Note:** STAC Item and ItemCollection validation errors will always raise, regardless of this flag. | `false` | Optional |
+ | `DATABASE_REFRESH` | Controls whether database operations refresh the index immediately after changes. If set to `true`, changes will be immediately searchable. If set to `false`, changes may not be immediately visible but can improve performance for bulk operations. If set to `wait_for`, changes will wait for the next refresh cycle to become visible. | `false` | Optional |
+ | `ENABLE_TRANSACTIONS_EXTENSIONS` | Enables or disables the Transactions and Bulk Transactions API extensions. If set to `false`, the POST `/collections` route and related transaction endpoints (including bulk transaction operations) will be unavailable in the API. This is useful for deployments where mutating the catalog via the API should be prevented. | `true` | Optional |

  > [!NOTE]
  > The variables `ES_HOST`, `ES_PORT`, `ES_USE_SSL`, and `ES_VERIFY_CERTS` apply to both Elasticsearch and OpenSearch backends, so there is no need to rename the key names to `OS_` even if you're using OpenSearch.
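The three new settings above are plain environment variables read at startup. As a quick orientation (this snippet is editorial, not part of the package), the sketch below pins the documented defaults before the app module is imported; `os.environ.setdefault` is just one convenient way to set them.

```python
# Hypothetical bootstrap showing the new variables with their documented
# defaults; any other values change runtime behavior as described above.
import os

os.environ.setdefault("DATABASE_REFRESH", "false")  # "true", "false", or "wait_for"
os.environ.setdefault("RAISE_ON_BULK_ERROR", "false")  # raise vs. log-and-continue
os.environ.setdefault("ENABLE_TRANSACTIONS_EXTENSIONS", "true")  # expose transaction routes
```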
@@ -104,6 +104,7 @@ You can customize additional settings in your `.env` file:
  | `STAC_FASTAPI_TITLE` | Title of the API in the documentation. | `stac-fastapi-elasticsearch` or `stac-fastapi-opensearch` | Optional |
  | `STAC_FASTAPI_DESCRIPTION` | Description of the API in the documentation. | N/A | Optional |
  | `STAC_FASTAPI_VERSION` | API version. | `2.1` | Optional |
+ | `STAC_FASTAPI_LANDING_PAGE_ID` | Landing page ID | `stac-fastapi` | Optional |
  | `APP_HOST` | Server bind address. | `0.0.0.0` | Optional |
  | `APP_PORT` | Server port. | `8080` | Optional |
  | `ENVIRONMENT` | Runtime environment. | `local` | Optional |
@@ -111,9 +112,12 @@ You can customize additional settings in your `.env` file:
  | `RELOAD` | Enable auto-reload for development. | `true` | Optional |
  | `STAC_FASTAPI_RATE_LIMIT` | API rate limit per client. | `200/minute` | Optional |
  | `BACKEND` | Tests-related variable | `elasticsearch` or `opensearch` based on the backend | Optional |
- | `ELASTICSEARCH_VERSION` | Version of Elasticsearch to use. | `8.11.0` | Optional |
- | `ENABLE_DIRECT_RESPONSE` | Enable direct response for maximum performance (disables all FastAPI dependencies, including authentication, custom status codes, and validation) | `false` | Optional |
- | `OPENSEARCH_VERSION` | OpenSearch version | `2.11.1` | Optional |
+ | `ELASTICSEARCH_VERSION` | Version of Elasticsearch to use. | `8.11.0` | Optional |
+ | `OPENSEARCH_VERSION` | OpenSearch version | `2.11.1` | Optional |
+ | `ENABLE_DIRECT_RESPONSE` | Enable direct response for maximum performance (disables all FastAPI dependencies, including authentication, custom status codes, and validation) | `false` | Optional |
+ | `RAISE_ON_BULK_ERROR` | Controls whether bulk insert operations raise exceptions on errors. If set to `true`, the operation will stop and raise an exception when an error occurs. If set to `false`, errors will be logged, and the operation will continue. **Note:** STAC Item and ItemCollection validation errors will always raise, regardless of this flag. | `false` | Optional |
+ | `DATABASE_REFRESH` | Controls whether database operations refresh the index immediately after changes. If set to `true`, changes will be immediately searchable. If set to `false`, changes may not be immediately visible but can improve performance for bulk operations. If set to `wait_for`, changes will wait for the next refresh cycle to become visible. | `false` | Optional |
+ | `ENABLE_TRANSACTIONS_EXTENSIONS` | Enables or disables the Transactions and Bulk Transactions API extensions. If set to `false`, the POST `/collections` route and related transaction endpoints (including bulk transaction operations) will be unavailable in the API. This is useful for deployments where mutating the catalog via the API should be prevented. | `true` | Optional |

  > [!NOTE]
  > The variables `ES_HOST`, `ES_PORT`, `ES_USE_SSL`, and `ES_VERIFY_CERTS` apply to both Elasticsearch and OpenSearch backends, so there is no need to rename the key names to `OS_` even if you're using OpenSearch.
@@ -6,7 +6,7 @@ with open("README.md") as f:
  desc = f.read()

  install_requires = [
-     "stac-fastapi-core==4.0.0a2",
+     "stac-fastapi-core==4.2.0",
      "opensearch-py~=2.8.0",
      "opensearch-py[async]~=2.8.0",
      "uvicorn~=0.23.0",
@@ -1,6 +1,10 @@
  """FastAPI application."""

+ import logging
  import os
+ from contextlib import asynccontextmanager
+
+ from fastapi import FastAPI

  from stac_fastapi.api.app import StacApi
  from stac_fastapi.api.models import create_get_request_model, create_post_request_model
@@ -20,6 +24,7 @@ from stac_fastapi.core.extensions.fields import FieldsExtension
  from stac_fastapi.core.rate_limit import setup_rate_limit
  from stac_fastapi.core.route_dependencies import get_route_dependencies
  from stac_fastapi.core.session import Session
+ from stac_fastapi.core.utilities import get_bool_env
  from stac_fastapi.extensions.core import (
      AggregationExtension,
      FilterExtension,
@@ -36,6 +41,12 @@ from stac_fastapi.opensearch.database_logic import (
      create_index_templates,
  )

+ logging.basicConfig(level=logging.INFO)
+ logger = logging.getLogger(__name__)
+
+ TRANSACTIONS_EXTENSIONS = get_bool_env("ENABLE_TRANSACTIONS_EXTENSIONS", default=True)
+ logger.info("TRANSACTIONS_EXTENSIONS is set to %s", TRANSACTIONS_EXTENSIONS)
+
  settings = OpensearchSettings()
  session = Session.create_from_settings(settings)

@@ -57,19 +68,6 @@ aggregation_extension.POST = EsAggregationExtensionPostRequest
  aggregation_extension.GET = EsAggregationExtensionGetRequest

  search_extensions = [
-     TransactionExtension(
-         client=TransactionsClient(
-             database=database_logic, session=session, settings=settings
-         ),
-         settings=settings,
-     ),
-     BulkTransactionExtension(
-         client=BulkTransactionsClient(
-             database=database_logic,
-             session=session,
-             settings=settings,
-         )
-     ),
      FieldsExtension(),
      QueryExtension(),
      SortExtension(),
@@ -78,6 +76,28 @@ search_extensions = [
      FreeTextExtension(),
  ]

+
+ if TRANSACTIONS_EXTENSIONS:
+     search_extensions.insert(
+         0,
+         TransactionExtension(
+             client=TransactionsClient(
+                 database=database_logic, session=session, settings=settings
+             ),
+             settings=settings,
+         ),
+     )
+     search_extensions.insert(
+         1,
+         BulkTransactionExtension(
+             client=BulkTransactionsClient(
+                 database=database_logic,
+                 session=session,
+                 settings=settings,
+             )
+         ),
+     )
+
  extensions = [aggregation_extension] + search_extensions

  database_logic.extensions = [type(ext).__name__ for ext in extensions]
@@ -87,28 +107,34 @@ post_request_model = create_post_request_model(search_extensions)
  api = StacApi(
      title=os.getenv("STAC_FASTAPI_TITLE", "stac-fastapi-opensearch"),
      description=os.getenv("STAC_FASTAPI_DESCRIPTION", "stac-fastapi-opensearch"),
-     api_version=os.getenv("STAC_FASTAPI_VERSION", "4.0.0a2"),
+     api_version=os.getenv("STAC_FASTAPI_VERSION", "4.2.0"),
      settings=settings,
      extensions=extensions,
      client=CoreClient(
-         database=database_logic, session=session, post_request_model=post_request_model
+         database=database_logic,
+         session=session,
+         post_request_model=post_request_model,
+         landing_page_id=os.getenv("STAC_FASTAPI_LANDING_PAGE_ID", "stac-fastapi"),
      ),
      search_get_request_model=create_get_request_model(search_extensions),
      search_post_request_model=post_request_model,
      route_dependencies=get_route_dependencies(),
  )
- app = api.app
- app.root_path = os.getenv("STAC_FASTAPI_ROOT_PATH", "")
-
-
- # Add rate limit
- setup_rate_limit(app, rate_limit=os.getenv("STAC_FASTAPI_RATE_LIMIT"))


- @app.on_event("startup")
- async def _startup_event() -> None:
+ @asynccontextmanager
+ async def lifespan(app: FastAPI):
+     """Lifespan handler for FastAPI app. Initializes index templates and collections at startup."""
      await create_index_templates()
      await create_collection_index()
+     yield
+
+
+ app = api.app
+ app.router.lifespan_context = lifespan
+ app.root_path = os.getenv("STAC_FASTAPI_ROOT_PATH", "")
+ # Add rate limit
+ setup_rate_limit(app, rate_limit=os.getenv("STAC_FASTAPI_RATE_LIMIT"))


  def run() -> None:
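The hunk above migrates startup work from the deprecated `@app.on_event("startup")` hook to FastAPI's lifespan protocol. A minimal, self-contained sketch of the same pattern (with a hypothetical `init_backend` standing in for `create_index_templates`/`create_collection_index`):

```python
# Minimal sketch of the lifespan pattern adopted in the diff.
from contextlib import asynccontextmanager

from fastapi import FastAPI


async def init_backend() -> None:
    """Placeholder for startup work such as creating index templates."""


@asynccontextmanager
async def lifespan(app: FastAPI):
    await init_backend()  # runs once, before the server accepts requests
    yield  # the application serves traffic while suspended here


# When constructing the app yourself, pass lifespan directly. The diff
# instead assigns api.app.router.lifespan_context because StacApi builds
# the FastAPI instance internally.
app = FastAPI(lifespan=lifespan)
```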
@@ -2,13 +2,13 @@
  import logging
  import os
  import ssl
- from typing import Any, Dict, Set
+ from typing import Any, Dict, Set, Union

  import certifi
  from opensearchpy import AsyncOpenSearch, OpenSearch

  from stac_fastapi.core.base_settings import ApiBaseSettings
- from stac_fastapi.core.utilities import get_bool_env
+ from stac_fastapi.core.utilities import get_bool_env, validate_refresh
  from stac_fastapi.types.config import ApiSettings

@@ -83,6 +83,18 @@ class OpensearchSettings(ApiSettings, ApiBaseSettings):
      indexed_fields: Set[str] = {"datetime"}
      enable_response_models: bool = False
      enable_direct_response: bool = get_bool_env("ENABLE_DIRECT_RESPONSE", default=False)
+     raise_on_bulk_error: bool = get_bool_env("RAISE_ON_BULK_ERROR", default=False)
+
+     @property
+     def database_refresh(self) -> Union[bool, str]:
+         """
+         Get the value of the DATABASE_REFRESH environment variable.
+
+         Returns:
+             Union[bool, str]: The value of DATABASE_REFRESH, which can be True, False, or "wait_for".
+         """
+         value = os.getenv("DATABASE_REFRESH", "false")
+         return validate_refresh(value)

      @property
      def create_client(self):
@@ -103,6 +115,18 @@ class AsyncOpensearchSettings(ApiSettings, ApiBaseSettings):
      indexed_fields: Set[str] = {"datetime"}
      enable_response_models: bool = False
      enable_direct_response: bool = get_bool_env("ENABLE_DIRECT_RESPONSE", default=False)
+     raise_on_bulk_error: bool = get_bool_env("RAISE_ON_BULK_ERROR", default=False)
+
+     @property
+     def database_refresh(self) -> Union[bool, str]:
+         """
+         Get the value of the DATABASE_REFRESH environment variable.
+
+         Returns:
+             Union[bool, str]: The value of DATABASE_REFRESH, which can be True, False, or "wait_for".
+         """
+         value = os.getenv("DATABASE_REFRESH", "false")
+         return validate_refresh(value)

      @property
      def create_client(self):
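`validate_refresh` is imported from `stac_fastapi.core.utilities`, whose source is not part of this diff. A sketch of the normalization it presumably performs, inferred from the three documented values (an assumption, not the helper's actual code):

```python
# Assumed behavior only: normalize a refresh value to "true", "false",
# or "wait_for", falling back to the documented default of "false".
from typing import Union


def normalize_refresh(value: Union[str, bool]) -> str:
    value_str = str(value).strip().lower()
    if value_str in ("true", "false", "wait_for"):
        return value_str
    return "false"


assert normalize_refresh(True) == "true"
assert normalize_refresh("WAIT_FOR") == "wait_for"
```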
@@ -13,7 +13,6 @@ from opensearchpy.helpers.query import Q
  from opensearchpy.helpers.search import Search
  from starlette.requests import Request

- from stac_fastapi.core import serializers
  from stac_fastapi.core.base_database_logic import BaseDatabaseLogic
  from stac_fastapi.core.database_logic import (
      COLLECTIONS_INDEX,
@@ -31,7 +30,8 @@ from stac_fastapi.core.database_logic import (
      mk_item_id,
  )
  from stac_fastapi.core.extensions import filter
- from stac_fastapi.core.utilities import MAX_LIMIT, bbox2polygon
+ from stac_fastapi.core.serializers import CollectionSerializer, ItemSerializer
+ from stac_fastapi.core.utilities import MAX_LIMIT, bbox2polygon, validate_refresh
  from stac_fastapi.opensearch.config import (
      AsyncOpensearchSettings as AsyncSearchSettings,
  )
@@ -143,14 +143,20 @@ async def delete_item_index(collection_id: str) -> None:
  class DatabaseLogic(BaseDatabaseLogic):
      """Database logic."""

-     client = AsyncSearchSettings().create_client
-     sync_client = SyncSearchSettings().create_client
+     async_settings: AsyncSearchSettings = attr.ib(factory=AsyncSearchSettings)
+     sync_settings: SyncSearchSettings = attr.ib(factory=SyncSearchSettings)

-     item_serializer: Type[serializers.ItemSerializer] = attr.ib(
-         default=serializers.ItemSerializer
-     )
-     collection_serializer: Type[serializers.CollectionSerializer] = attr.ib(
-         default=serializers.CollectionSerializer
+     client = attr.ib(init=False)
+     sync_client = attr.ib(init=False)
+
+     def __attrs_post_init__(self):
+         """Initialize clients after the class is instantiated."""
+         self.client = self.async_settings.create_client
+         self.sync_client = self.sync_settings.create_client
+
+     item_serializer: Type[ItemSerializer] = attr.ib(default=ItemSerializer)
+     collection_serializer: Type[CollectionSerializer] = attr.ib(
+         default=CollectionSerializer
      )

      extensions: List[str] = attr.ib(default=attr.Factory(list))
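The change above stops binding clients at class-definition time: settings become attrs fields with factories, and the clients are derived in `__attrs_post_init__`, so alternate settings can be injected per instance (e.g. in tests). A minimal sketch of the same pattern with a hypothetical `Holder` class:

```python
# Sketch of the attrs injection pattern: the derived field is excluded
# from __init__ and filled in after initialization.
import attr


@attr.s
class Holder:
    settings = attr.ib(factory=dict)
    client = attr.ib(init=False, default=None)

    def __attrs_post_init__(self):
        # derive the client from whatever settings were injected
        self.client = dict(self.settings)


h = Holder(settings={"hosts": ["localhost:9202"]})
print(h.client)  # {'hosts': ['localhost:9202']}
```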
@@ -301,6 +307,34 @@ class DatabaseLogic(BaseDatabaseLogic):
          )
          return item["_source"]

+     async def get_queryables_mapping(self, collection_id: str = "*") -> dict:
+         """Retrieve mapping of Queryables for search.
+
+         Args:
+             collection_id (str, optional): The id of the Collection the Queryables
+                 belong to. Defaults to "*".
+
+         Returns:
+             dict: A dictionary containing the Queryables mappings.
+         """
+         queryables_mapping = {}
+
+         mappings = await self.client.indices.get_mapping(
+             index=f"{ITEMS_INDEX_PREFIX}{collection_id}",
+         )
+
+         for mapping in mappings.values():
+             fields = mapping["mappings"].get("properties", {})
+             properties = fields.pop("properties", {}).get("properties", {}).keys()
+
+             for field_key in fields:
+                 queryables_mapping[field_key] = field_key
+
+             for property_key in properties:
+                 queryables_mapping[property_key] = f"properties.{property_key}"
+
+         return queryables_mapping
+
      @staticmethod
      def make_search():
          """Database logic to create a Search instance."""
@@ -329,7 +363,7 @@ class DatabaseLogic(BaseDatabaseLogic):

      @staticmethod
      def apply_datetime_filter(search: Search, datetime_search):
-         """Apply a filter to search based on datetime field.
+         """Apply a filter to search based on datetime field, start_datetime, and end_datetime fields.

          Args:
              search (Search): The search object to filter.
@@ -338,17 +372,109 @@ class DatabaseLogic(BaseDatabaseLogic):
          Returns:
              Search: The filtered search object.
          """
+         should = []
+
+         # If the request is a single datetime return
+         # items with datetimes equal to the requested datetime OR
+         # the requested datetime is between their start and end datetimes
          if "eq" in datetime_search:
-             search = search.filter(
-                 "term", **{"properties__datetime": datetime_search["eq"]}
+             should.extend(
+                 [
+                     Q(
+                         "bool",
+                         filter=[
+                             Q(
+                                 "term",
+                                 properties__datetime=datetime_search["eq"],
+                             ),
+                         ],
+                     ),
+                     Q(
+                         "bool",
+                         filter=[
+                             Q(
+                                 "range",
+                                 properties__start_datetime={
+                                     "lte": datetime_search["eq"],
+                                 },
+                             ),
+                             Q(
+                                 "range",
+                                 properties__end_datetime={
+                                     "gte": datetime_search["eq"],
+                                 },
+                             ),
+                         ],
+                     ),
+                 ]
              )
+
+         # If the request is a date range return
+         # items with datetimes within the requested date range OR
+         # their start_datetime within the requested date range OR
+         # their end_datetime within the requested date range OR
+         # the requested date range within their start and end datetimes
          else:
-             search = search.filter(
-                 "range", properties__datetime={"lte": datetime_search["lte"]}
-             )
-             search = search.filter(
-                 "range", properties__datetime={"gte": datetime_search["gte"]}
+             should.extend(
+                 [
+                     Q(
+                         "bool",
+                         filter=[
+                             Q(
+                                 "range",
+                                 properties__datetime={
+                                     "gte": datetime_search["gte"],
+                                     "lte": datetime_search["lte"],
+                                 },
+                             ),
+                         ],
+                     ),
+                     Q(
+                         "bool",
+                         filter=[
+                             Q(
+                                 "range",
+                                 properties__start_datetime={
+                                     "gte": datetime_search["gte"],
+                                     "lte": datetime_search["lte"],
+                                 },
+                             ),
+                         ],
+                     ),
+                     Q(
+                         "bool",
+                         filter=[
+                             Q(
+                                 "range",
+                                 properties__end_datetime={
+                                     "gte": datetime_search["gte"],
+                                     "lte": datetime_search["lte"],
+                                 },
+                             ),
+                         ],
+                     ),
+                     Q(
+                         "bool",
+                         filter=[
+                             Q(
+                                 "range",
+                                 properties__start_datetime={
+                                     "lte": datetime_search["gte"]
+                                 },
+                             ),
+                             Q(
+                                 "range",
+                                 properties__end_datetime={
+                                     "gte": datetime_search["lte"]
+                                 },
+                             ),
+                         ],
+                     ),
+                 ]
              )
+
+         search = search.query(Q("bool", filter=[Q("bool", should=should)]))
+
          return search

      @staticmethod
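The range branch builds four `should` clauses, one per overlap case. A worked example of what each clause matches for a hypothetical search of 2024-01-10 through 2024-01-20 (item dicts are illustrative, not from the package):

```python
datetime_search = {"gte": "2024-01-10T00:00:00Z", "lte": "2024-01-20T00:00:00Z"}

candidate_items = [
    # clause 1: a point datetime inside the requested range
    {"properties": {"datetime": "2024-01-15T00:00:00Z"}},
    # clause 2: an interval whose start_datetime falls inside the range
    {"properties": {"start_datetime": "2024-01-12T00:00:00Z"}},
    # clause 3: an interval whose end_datetime falls inside the range
    {"properties": {"end_datetime": "2024-01-18T00:00:00Z"}},
    # clause 4: an interval that fully contains the requested range
    {"properties": {"start_datetime": "2024-01-01T00:00:00Z",
                    "end_datetime": "2024-02-01T00:00:00Z"}},
]
```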
@@ -437,8 +563,9 @@ class DatabaseLogic(BaseDatabaseLogic):

          return search

-     @staticmethod
-     def apply_cql2_filter(search: Search, _filter: Optional[Dict[str, Any]]):
+     async def apply_cql2_filter(
+         self, search: Search, _filter: Optional[Dict[str, Any]]
+     ):
          """
          Apply a CQL2 filter to an Opensearch Search object.

@@ -458,7 +585,7 @@ class DatabaseLogic(BaseDatabaseLogic):
              otherwise the original Search object.
          """
          if _filter is not None:
-             es_query = filter.to_es(_filter)
+             es_query = filter.to_es(await self.get_queryables_mapping(), _filter)
              search = search.filter(es_query)

          return search
@@ -633,7 +760,7 @@ class DatabaseLogic(BaseDatabaseLogic):
          if not await self.client.exists(index=COLLECTIONS_INDEX, id=collection_id):
              raise NotFoundError(f"Collection {collection_id} does not exist")

-     async def prep_create_item(
+     async def async_prep_create_item(
          self, item: Item, base_url: str, exist_ok: bool = False
      ) -> Item:
          """
@@ -663,49 +790,120 @@ class DatabaseLogic(BaseDatabaseLogic):

          return self.item_serializer.stac_to_db(item, base_url)

-     def sync_prep_create_item(
+     async def bulk_async_prep_create_item(
          self, item: Item, base_url: str, exist_ok: bool = False
      ) -> Item:
          """
          Prepare an item for insertion into the database.

-         This method performs pre-insertion preparation on the given `item`,
-         such as checking if the collection the item belongs to exists,
-         and optionally verifying that an item with the same ID does not already exist in the database.
+         This method performs pre-insertion preparation on the given `item`, such as:
+         - Verifying that the collection the item belongs to exists.
+         - Optionally checking if an item with the same ID already exists in the database.
+         - Serializing the item into a database-compatible format.

          Args:
-             item (Item): The item to be inserted into the database.
-             base_url (str): The base URL used for constructing URLs for the item.
-             exist_ok (bool): Indicates whether the item can exist already.
+             item (Item): The item to be prepared for insertion.
+             base_url (str): The base URL used to construct the item's self URL.
+             exist_ok (bool): Indicates whether the item can already exist in the database.
+                 If False, a `ConflictError` is raised if the item exists.

          Returns:
-             Item: The item after preparation is done.
+             Item: The prepared item, serialized into a database-compatible format.

          Raises:
              NotFoundError: If the collection that the item belongs to does not exist in the database.
-             ConflictError: If an item with the same ID already exists in the collection.
+             ConflictError: If an item with the same ID already exists in the collection and `exist_ok` is False,
+                 and `RAISE_ON_BULK_ERROR` is set to `true`.
          """
-         item_id = item["id"]
-         collection_id = item["collection"]
-         if not self.sync_client.exists(index=COLLECTIONS_INDEX, id=collection_id):
-             raise NotFoundError(f"Collection {collection_id} does not exist")
+         logger.debug(f"Preparing item {item['id']} in collection {item['collection']}.")

-         if not exist_ok and self.sync_client.exists(
-             index=index_alias_by_collection_id(collection_id),
-             id=mk_item_id(item_id, collection_id),
+         # Check if the collection exists
+         await self.check_collection_exists(collection_id=item["collection"])
+
+         # Check if the item already exists in the database
+         if not exist_ok and await self.client.exists(
+             index=index_alias_by_collection_id(item["collection"]),
+             id=mk_item_id(item["id"], item["collection"]),
          ):
-             raise ConflictError(
-                 f"Item {item_id} in collection {collection_id} already exists"
+             error_message = (
+                 f"Item {item['id']} in collection {item['collection']} already exists."
              )
+             if self.async_settings.raise_on_bulk_error:
+                 raise ConflictError(error_message)
+             else:
+                 logger.warning(
+                     f"{error_message} Continuing as `RAISE_ON_BULK_ERROR` is set to false."
+                 )
+         # Serialize the item into a database-compatible format
+         prepped_item = self.item_serializer.stac_to_db(item, base_url)
+         logger.debug(f"Item {item['id']} prepared successfully.")
+         return prepped_item
+
+     def bulk_sync_prep_create_item(
+         self, item: Item, base_url: str, exist_ok: bool = False
+     ) -> Item:
+         """
+         Prepare an item for insertion into the database.

-         return self.item_serializer.stac_to_db(item, base_url)
+         This method performs pre-insertion preparation on the given `item`, such as:
+         - Verifying that the collection the item belongs to exists.
+         - Optionally checking if an item with the same ID already exists in the database.
+         - Serializing the item into a database-compatible format.
+
+         Args:
+             item (Item): The item to be prepared for insertion.
+             base_url (str): The base URL used to construct the item's self URL.
+             exist_ok (bool): Indicates whether the item can already exist in the database.
+                 If False, a `ConflictError` is raised if the item exists.

-     async def create_item(self, item: Item, refresh: bool = False):
+         Returns:
+             Item: The prepared item, serialized into a database-compatible format.
+
+         Raises:
+             NotFoundError: If the collection that the item belongs to does not exist in the database.
+             ConflictError: If an item with the same ID already exists in the collection and `exist_ok` is False,
+                 and `RAISE_ON_BULK_ERROR` is set to `true`.
+         """
+         logger.debug(f"Preparing item {item['id']} in collection {item['collection']}.")
+
+         # Check if the collection exists
+         if not self.sync_client.exists(index=COLLECTIONS_INDEX, id=item["collection"]):
+             raise NotFoundError(f"Collection {item['collection']} does not exist")
+
+         # Check if the item already exists in the database
+         if not exist_ok and self.sync_client.exists(
+             index=index_alias_by_collection_id(item["collection"]),
+             id=mk_item_id(item["id"], item["collection"]),
+         ):
+             error_message = (
+                 f"Item {item['id']} in collection {item['collection']} already exists."
+             )
+             if self.sync_settings.raise_on_bulk_error:
+                 raise ConflictError(error_message)
+             else:
+                 logger.warning(
+                     f"{error_message} Continuing as `RAISE_ON_BULK_ERROR` is set to false."
+                 )
+
+         # Serialize the item into a database-compatible format
+         prepped_item = self.item_serializer.stac_to_db(item, base_url)
+         logger.debug(f"Item {item['id']} prepared successfully.")
+         return prepped_item
+
+     async def create_item(
+         self,
+         item: Item,
+         base_url: str = "",
+         exist_ok: bool = False,
+         **kwargs: Any,
+     ):
          """Database logic for creating one item.

          Args:
              item (Item): The item to be created.
-             refresh (bool, optional): Refresh the index after performing the operation. Defaults to False.
+             base_url (str, optional): The base URL for the item. Defaults to an empty string.
+             exist_ok (bool, optional): Whether to allow the item to exist already. Defaults to False.
+             **kwargs: Additional keyword arguments like refresh.

          Raises:
              ConflictError: If the item already exists in the database.
@@ -716,31 +914,52 @@ class DatabaseLogic(BaseDatabaseLogic):
          # todo: check if collection exists, but cache
          item_id = item["id"]
          collection_id = item["collection"]
-         es_resp = await self.client.index(
+
+         # Ensure kwargs is a dictionary
+         kwargs = kwargs or {}
+
+         # Resolve the `refresh` parameter
+         refresh = kwargs.get("refresh", self.async_settings.database_refresh)
+         refresh = validate_refresh(refresh)
+
+         # Log the creation attempt
+         logger.info(
+             f"Creating item {item_id} in collection {collection_id} with refresh={refresh}"
+         )
+
+         item = await self.async_prep_create_item(
+             item=item, base_url=base_url, exist_ok=exist_ok
+         )
+         await self.client.index(
              index=index_alias_by_collection_id(collection_id),
              id=mk_item_id(item_id, collection_id),
              body=item,
              refresh=refresh,
          )

-         if (meta := es_resp.get("meta")) and meta.get("status") == 409:
-             raise ConflictError(
-                 f"Item {item_id} in collection {collection_id} already exists"
-             )
-
-     async def delete_item(
-         self, item_id: str, collection_id: str, refresh: bool = False
-     ):
+     async def delete_item(self, item_id: str, collection_id: str, **kwargs: Any):
          """Delete a single item from the database.

          Args:
              item_id (str): The id of the Item to be deleted.
              collection_id (str): The id of the Collection that the Item belongs to.
-             refresh (bool, optional): Whether to refresh the index after the deletion. Default is False.
+             **kwargs: Additional keyword arguments like refresh.

          Raises:
              NotFoundError: If the Item does not exist in the database.
          """
+         # Ensure kwargs is a dictionary
+         kwargs = kwargs or {}
+
+         # Resolve the `refresh` parameter
+         refresh = kwargs.get("refresh", self.async_settings.database_refresh)
+         refresh = validate_refresh(refresh)
+
+         # Log the deletion attempt
+         logger.info(
+             f"Deleting item {item_id} from collection {collection_id} with refresh={refresh}"
+         )
+
          try:
              await self.client.delete(
                  index=index_alias_by_collection_id(collection_id),
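The mutating methods now share one refresh-resolution rule: an explicit `refresh` kwarg wins, otherwise the `DATABASE_REFRESH`-backed settings default applies. A hypothetical call site (assumes a reachable OpenSearch cluster configured through the usual `ES_*` variables, and a valid STAC Item dict in place of the minimal one shown):

```python
import asyncio

from stac_fastapi.opensearch.database_logic import DatabaseLogic


async def main() -> None:
    db = DatabaseLogic()
    item = {"id": "item-1", "collection": "my-collection"}  # minimal stand-in
    # per-call override: index is refreshed immediately
    await db.create_item(item, base_url="http://localhost:8080", refresh=True)
    # no kwarg: falls back to DATABASE_REFRESH
    await db.delete_item("item-1", "my-collection")


asyncio.run(main())
```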
@@ -770,12 +989,12 @@ class DatabaseLogic(BaseDatabaseLogic):
          except exceptions.NotFoundError:
              raise NotFoundError(f"Mapping for index {index_name} not found")

-     async def create_collection(self, collection: Collection, refresh: bool = False):
+     async def create_collection(self, collection: Collection, **kwargs: Any):
          """Create a single collection in the database.

          Args:
              collection (Collection): The Collection object to be created.
-             refresh (bool, optional): Whether to refresh the index after the creation. Default is False.
+             **kwargs: Additional keyword arguments like refresh.

          Raises:
              ConflictError: If a Collection with the same id already exists in the database.
@@ -785,6 +1004,16 @@ class DatabaseLogic(BaseDatabaseLogic):
          """
          collection_id = collection["id"]

+         # Ensure kwargs is a dictionary
+         kwargs = kwargs or {}
+
+         # Resolve the `refresh` parameter
+         refresh = kwargs.get("refresh", self.async_settings.database_refresh)
+         refresh = validate_refresh(refresh)
+
+         # Log the creation attempt
+         logger.info(f"Creating collection {collection_id} with refresh={refresh}")
+
          if await self.client.exists(index=COLLECTIONS_INDEX, id=collection_id):
              raise ConflictError(f"Collection {collection_id} already exists")

@@ -824,14 +1053,14 @@ class DatabaseLogic(BaseDatabaseLogic):
          return collection["_source"]

      async def update_collection(
-         self, collection_id: str, collection: Collection, refresh: bool = False
+         self, collection_id: str, collection: Collection, **kwargs: Any
      ):
          """Update a collection from the database.

          Args:
-             self: The instance of the object calling this function.
              collection_id (str): The ID of the collection to be updated.
              collection (Collection): The Collection object to be used for the update.
+             **kwargs: Additional keyword arguments like refresh.

          Raises:
              NotFoundError: If the collection with the given `collection_id` is not
@@ -842,9 +1071,23 @@ class DatabaseLogic(BaseDatabaseLogic):
          `collection_id` and with the collection specified in the `Collection` object.
          If the collection is not found, a `NotFoundError` is raised.
          """
+         # Ensure kwargs is a dictionary
+         kwargs = kwargs or {}
+
+         # Resolve the `refresh` parameter
+         refresh = kwargs.get("refresh", self.async_settings.database_refresh)
+         refresh = validate_refresh(refresh)
+
+         # Log the update attempt
+         logger.info(f"Updating collection {collection_id} with refresh={refresh}")
+
          await self.find_collection(collection_id=collection_id)

          if collection_id != collection["id"]:
+             logger.info(
+                 f"Collection ID change detected: {collection_id} -> {collection['id']}"
+             )
+
              await self.create_collection(collection, refresh=refresh)

              await self.client.reindex(
@@ -860,7 +1103,7 @@ class DatabaseLogic(BaseDatabaseLogic):
                  refresh=refresh,
              )

-             await self.delete_collection(collection_id)
+             await self.delete_collection(collection_id=collection_id, **kwargs)

          else:
              await self.client.index(
@@ -870,75 +1113,160 @@ class DatabaseLogic(BaseDatabaseLogic):
                  refresh=refresh,
              )

-     async def delete_collection(self, collection_id: str, refresh: bool = False):
+     async def delete_collection(self, collection_id: str, **kwargs: Any):
          """Delete a collection from the database.

          Parameters:
              self: The instance of the object calling this function.
              collection_id (str): The ID of the collection to be deleted.
-             refresh (bool): Whether to refresh the index after the deletion (default: False).
+             **kwargs: Additional keyword arguments like refresh.

          Raises:
              NotFoundError: If the collection with the given `collection_id` is not found in the database.

          Notes:
              This function first verifies that the collection with the specified `collection_id` exists in the database, and then
-             deletes the collection. If `refresh` is set to True, the index is refreshed after the deletion. Additionally, this
-             function also calls `delete_item_index` to delete the index for the items in the collection.
+             deletes the collection. If `refresh` is set to "true", "false", or "wait_for", the index is refreshed accordingly after
+             the deletion. Additionally, this function also calls `delete_item_index` to delete the index for the items in the collection.
          """
+         # Ensure kwargs is a dictionary
+         kwargs = kwargs or {}
+
          await self.find_collection(collection_id=collection_id)
+
+         # Resolve the `refresh` parameter
+         refresh = kwargs.get("refresh", self.async_settings.database_refresh)
+         refresh = validate_refresh(refresh)
+
+         # Log the deletion attempt
+         logger.info(f"Deleting collection {collection_id} with refresh={refresh}")
+
          await self.client.delete(
              index=COLLECTIONS_INDEX, id=collection_id, refresh=refresh
          )
          await delete_item_index(collection_id)

      async def bulk_async(
-         self, collection_id: str, processed_items: List[Item], refresh: bool = False
-     ) -> None:
-         """Perform a bulk insert of items into the database asynchronously.
+         self,
+         collection_id: str,
+         processed_items: List[Item],
+         **kwargs: Any,
+     ) -> Tuple[int, List[Dict[str, Any]]]:
+         """
+         Perform a bulk insert of items into the database asynchronously.

          Args:
-             self: The instance of the object calling this function.
              collection_id (str): The ID of the collection to which the items belong.
              processed_items (List[Item]): A list of `Item` objects to be inserted into the database.
-             refresh (bool): Whether to refresh the index after the bulk insert (default: False).
+             **kwargs (Any): Additional keyword arguments, including:
+                 - refresh (str or bool, optional): Whether to refresh the index after the bulk insert.
+                     Can be "true", "false", or "wait_for". Defaults to the value of `self.async_settings.database_refresh`.
+                 - raise_on_error (bool, optional): Whether to raise an error if any of the bulk operations fail.
+                     Defaults to the value of `self.async_settings.raise_on_bulk_error`.
+
+         Returns:
+             Tuple[int, List[Dict[str, Any]]]: A tuple containing:
+                 - The number of successfully processed actions (`success`).
+                 - A list of errors encountered during the bulk operation (`errors`).

          Notes:
-             This function performs a bulk insert of `processed_items` into the database using the specified `collection_id`. The
-             insert is performed asynchronously, and the event loop is used to run the operation in a separate executor. The
-             `mk_actions` function is called to generate a list of actions for the bulk insert. If `refresh` is set to True, the
-             index is refreshed after the bulk insert. The function does not return any value.
+             This function performs a bulk insert of `processed_items` into the database using the specified `collection_id`.
+             The insert is performed asynchronously, and the coroutine completes once the bulk operation has finished.
+             The `mk_actions` function is called to generate a list of actions for the bulk insert. The `refresh`
+             parameter determines whether the index is refreshed after the bulk insert:
+                 - "true": Forces an immediate refresh of the index.
+                 - "false": Does not refresh the index immediately (default behavior).
+                 - "wait_for": Waits for the next refresh cycle to make the changes visible.
          """
-         await helpers.async_bulk(
+         # Ensure kwargs is a dictionary
+         kwargs = kwargs or {}
+
+         # Resolve the `refresh` parameter
+         refresh = kwargs.get("refresh", self.async_settings.database_refresh)
+         refresh = validate_refresh(refresh)
+
+         # Log the bulk insert attempt
+         logger.info(
+             f"Performing bulk insert for collection {collection_id} with refresh={refresh}"
+         )
+
+         # Handle empty processed_items
+         if not processed_items:
+             logger.warning(f"No items to insert for collection {collection_id}")
+             return 0, []
+
+         raise_on_error = self.async_settings.raise_on_bulk_error
+         success, errors = await helpers.async_bulk(
              self.client,
              mk_actions(collection_id, processed_items),
              refresh=refresh,
-             raise_on_error=False,
+             raise_on_error=raise_on_error,
          )
+         # Log the result
+         logger.info(
+             f"Bulk insert completed for collection {collection_id}: {success} successes, {len(errors)} errors"
+         )
+         return success, errors

      def bulk_sync(
-         self, collection_id: str, processed_items: List[Item], refresh: bool = False
-     ) -> None:
-         """Perform a bulk insert of items into the database synchronously.
+         self,
+         collection_id: str,
+         processed_items: List[Item],
+         **kwargs: Any,
+     ) -> Tuple[int, List[Dict[str, Any]]]:
+         """
+         Perform a bulk insert of items into the database synchronously.

          Args:
-             self: The instance of the object calling this function.
              collection_id (str): The ID of the collection to which the items belong.
              processed_items (List[Item]): A list of `Item` objects to be inserted into the database.
-             refresh (bool): Whether to refresh the index after the bulk insert (default: False).
+             **kwargs (Any): Additional keyword arguments, including:
+                 - refresh (str or bool, optional): Whether to refresh the index after the bulk insert.
+                     Can be "true", "false", or "wait_for". Defaults to the value of `self.sync_settings.database_refresh`.
+                 - raise_on_error (bool, optional): Whether to raise an error if any of the bulk operations fail.
+                     Defaults to the value of `self.sync_settings.raise_on_bulk_error`.
+
+         Returns:
+             Tuple[int, List[Dict[str, Any]]]: A tuple containing:
+                 - The number of successfully processed actions (`success`).
+                 - A list of errors encountered during the bulk operation (`errors`).

          Notes:
-             This function performs a bulk insert of `processed_items` into the database using the specified `collection_id`. The
-             insert is performed synchronously and blocking, meaning that the function does not return until the insert has
-             completed. The `mk_actions` function is called to generate a list of actions for the bulk insert. If `refresh` is set to
-             True, the index is refreshed after the bulk insert. The function does not return any value.
+             This function performs a bulk insert of `processed_items` into the database using the specified `collection_id`.
+             The insert is performed synchronously and blocking, meaning that the function does not return until the insert has
+             completed. The `mk_actions` function is called to generate a list of actions for the bulk insert. The `refresh`
+             parameter determines whether the index is refreshed after the bulk insert:
+                 - "true": Forces an immediate refresh of the index.
+                 - "false": Does not refresh the index immediately (default behavior).
+                 - "wait_for": Waits for the next refresh cycle to make the changes visible.
          """
-         helpers.bulk(
+         # Ensure kwargs is a dictionary
+         kwargs = kwargs or {}
+
+         # Resolve the `refresh` parameter
+         refresh = kwargs.get("refresh", self.async_settings.database_refresh)
+         refresh = validate_refresh(refresh)
+
+         # Log the bulk insert attempt
+         logger.info(
+             f"Performing bulk insert for collection {collection_id} with refresh={refresh}"
+         )
+
+         # Handle empty processed_items
+         if not processed_items:
+             logger.warning(f"No items to insert for collection {collection_id}")
+             return 0, []
+
+         raise_on_error = self.sync_settings.raise_on_bulk_error
+         success, errors = helpers.bulk(
              self.sync_client,
              mk_actions(collection_id, processed_items),
              refresh=refresh,
-             raise_on_error=False,
+             raise_on_error=raise_on_error,
          )
+         return success, errors

      # DANGER
      async def delete_items(self) -> None:
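Both bulk methods now return a `(success, errors)` tuple instead of `None`. A hypothetical caller consuming the new return value; because the empty-list guard short-circuits before touching the cluster, this particular snippet runs without a live backend:

```python
from stac_fastapi.opensearch.database_logic import DatabaseLogic

db = DatabaseLogic()
prepped_items = []  # items already run through bulk_sync_prep_create_item

success, errors = db.bulk_sync("my-collection", prepped_items, refresh="wait_for")
if errors:
    for err in errors:
        print("bulk action failed:", err)
print(f"{success} items indexed")  # 0 here: the empty list short-circuits
```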
@@ -0,0 +1,2 @@
+ """library version."""
+ __version__ = "4.2.0"
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: stac-fastapi-opensearch
- Version: 4.0.0a2
+ Version: 4.2.0
  Summary: Opensearch stac-fastapi backend.
  Home-page: https://github.com/stac-utils/stac-fastapi-elasticsearch-opensearch
  License: MIT
@@ -125,6 +125,7 @@ You can customize additional settings in your `.env` file:
  | `STAC_FASTAPI_TITLE` | Title of the API in the documentation. | `stac-fastapi-elasticsearch` or `stac-fastapi-opensearch` | Optional |
  | `STAC_FASTAPI_DESCRIPTION` | Description of the API in the documentation. | N/A | Optional |
  | `STAC_FASTAPI_VERSION` | API version. | `2.1` | Optional |
+ | `STAC_FASTAPI_LANDING_PAGE_ID` | Landing page ID | `stac-fastapi` | Optional |
  | `APP_HOST` | Server bind address. | `0.0.0.0` | Optional |
  | `APP_PORT` | Server port. | `8080` | Optional |
  | `ENVIRONMENT` | Runtime environment. | `local` | Optional |
@@ -132,9 +133,12 @@ You can customize additional settings in your `.env` file:
  | `RELOAD` | Enable auto-reload for development. | `true` | Optional |
  | `STAC_FASTAPI_RATE_LIMIT` | API rate limit per client. | `200/minute` | Optional |
  | `BACKEND` | Tests-related variable | `elasticsearch` or `opensearch` based on the backend | Optional |
- | `ELASTICSEARCH_VERSION` | Version of Elasticsearch to use. | `8.11.0` | Optional |
- | `ENABLE_DIRECT_RESPONSE` | Enable direct response for maximum performance (disables all FastAPI dependencies, including authentication, custom status codes, and validation) | `false` | Optional |
- | `OPENSEARCH_VERSION` | OpenSearch version | `2.11.1` | Optional |
+ | `ELASTICSEARCH_VERSION` | Version of Elasticsearch to use. | `8.11.0` | Optional |
+ | `OPENSEARCH_VERSION` | OpenSearch version | `2.11.1` | Optional |
+ | `ENABLE_DIRECT_RESPONSE` | Enable direct response for maximum performance (disables all FastAPI dependencies, including authentication, custom status codes, and validation) | `false` | Optional |
+ | `RAISE_ON_BULK_ERROR` | Controls whether bulk insert operations raise exceptions on errors. If set to `true`, the operation will stop and raise an exception when an error occurs. If set to `false`, errors will be logged, and the operation will continue. **Note:** STAC Item and ItemCollection validation errors will always raise, regardless of this flag. | `false` | Optional |
+ | `DATABASE_REFRESH` | Controls whether database operations refresh the index immediately after changes. If set to `true`, changes will be immediately searchable. If set to `false`, changes may not be immediately visible but can improve performance for bulk operations. If set to `wait_for`, changes will wait for the next refresh cycle to become visible. | `false` | Optional |
+ | `ENABLE_TRANSACTIONS_EXTENSIONS` | Enables or disables the Transactions and Bulk Transactions API extensions. If set to `false`, the POST `/collections` route and related transaction endpoints (including bulk transaction operations) will be unavailable in the API. This is useful for deployments where mutating the catalog via the API should be prevented. | `true` | Optional |

  > [!NOTE]
  > The variables `ES_HOST`, `ES_PORT`, `ES_USE_SSL`, and `ES_VERIFY_CERTS` apply to both Elasticsearch and OpenSearch backends, so there is no need to rename the key names to `OS_` even if you're using OpenSearch.
@@ -1,4 +1,4 @@
- stac-fastapi-core==4.0.0a2
+ stac-fastapi-core==4.2.0
  opensearch-py~=2.8.0
  opensearch-py[async]~=2.8.0
  uvicorn~=0.23.0
@@ -1,2 +0,0 @@
- """library version."""
- __version__ = "4.0.0a2"