glean-indexing-sdk 0.0.3__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,482 @@
Metadata-Version: 2.4
Name: glean-indexing-sdk
Version: 0.0.3
Summary: SDK for building custom Glean indexing integrations
Project-URL: Source Code, https://github.com/glean-io/glean-indexing-sdk
Author-email: Steve Calvert <steve.calvert@glean.com>
License: MIT
License-File: LICENSE
Requires-Python: <4.0,>=3.10
Requires-Dist: glean-api-client>=0.6.3
Requires-Dist: jinja2>=3.1.2
Provides-Extra: dev
Requires-Dist: build>=1.0.0; extra == 'dev'
Requires-Dist: commitizen>=4.4.1; extra == 'dev'
Requires-Dist: pip-audit>=2.6.0; extra == 'dev'
Requires-Dist: twine>=4.0.0; extra == 'dev'
Provides-Extra: lint
Requires-Dist: ruff>=0.5; extra == 'lint'
Provides-Extra: test
Requires-Dist: pytest-asyncio>=0.23.2; extra == 'test'
Requires-Dist: pytest-cov>=4.1.0; extra == 'test'
Requires-Dist: pytest-httpx>=0.35.0; extra == 'test'
Requires-Dist: pytest-socket>=0.7.0; extra == 'test'
Requires-Dist: pytest-watcher>=0.3.4; extra == 'test'
Requires-Dist: pytest>=7.4.3; extra == 'test'
Requires-Dist: python-dotenv>=1.1.0; extra == 'test'
Provides-Extra: typing
Requires-Dist: pyright>=1.1.370; extra == 'typing'
Description-Content-Type: text/markdown

# Glean Indexing SDK

A Python SDK for building custom Glean indexing integrations. This package provides the base classes and utilities to create custom connectors for Glean's indexing APIs.

> [!WARNING]
> This is an experimental repository. APIs, interfaces, and functionality may change significantly without notice.

## Installation

```bash
pip install glean-indexing-sdk
```

## Architecture Overview

The Glean Indexing SDK follows a simple, predictable pattern for all connector types. Understanding this flow will help you implement any connector quickly:

```mermaid
sequenceDiagram
    participant User
    participant Connector as "Connector<br/>(BaseDatasourceConnector<br/>or BasePeopleConnector)"
    participant DataClient as "DataClient<br/>(BaseConnectorDataClient<br/>or StreamingConnectorDataClient)"
    participant External as "External System<br/>(API/Database)"
    participant Glean as "Glean API"

    User->>+Connector: 1. connector.index_data()<br/>or connector.index_people()
    Connector->>+DataClient: 2. get_source_data()
    DataClient->>+External: 3. Fetch data
    External-->>-DataClient: Raw source data
    DataClient-->>-Connector: Typed source data
    Connector->>Connector: 4. transform() or<br/>transform_people()
    Note over Connector: Transform to<br/>DocumentDefinition or<br/>EmployeeInfoDefinition
    Connector->>+Glean: 5. Batch upload documents<br/>or employee data
    Glean-->>-Connector: Upload response
    Connector-->>-User: Indexing complete
```

**Key Components:**

1. **DataClient** - Fetches raw data from your external system (API, database, files, etc.)
2. **Connector** - Transforms your data into Glean's format and handles the upload process

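The fetch-transform-upload flow can be sketched with plain Python classes. This is a stdlib-only illustration of the pattern, not the SDK's actual interfaces; the class and method names mirror the diagram but are simplified stand-ins:

```python
from typing import Generic, List, Sequence, TypeVar

T = TypeVar("T")


class DataClient(Generic[T]):
    """Fetches raw records from an external system."""

    def get_source_data(self) -> Sequence[T]:
        raise NotImplementedError


class Connector(Generic[T]):
    """Transforms records and uploads them in batches."""

    def __init__(self, data_client: DataClient[T]):
        self.data_client = data_client

    def transform(self, data: Sequence[T]) -> List[dict]:
        raise NotImplementedError

    def index_data(self) -> List[dict]:
        raw = self.data_client.get_source_data()  # fetch from the external system
        documents = self.transform(raw)           # convert to the target document shape
        return documents                          # the real SDK would batch-upload here
```

The separation means you can unit-test `transform()` with canned data and swap the data client for a mock without touching upload logic.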
---

## Datasource Connectors

Use datasource connectors to index documents, files, and content from external systems into Glean. This is the most common use case.

## Datasource Quickstart

### Environment Setup

1. Set up environment variables for Glean API access:

```bash
# Copy the environment template
cp env.template .env

# Set your Glean credentials
export GLEAN_INSTANCE="acme"
export GLEAN_INDEXING_API_TOKEN="your-indexing-api-token"
```

> [!TIP]
> **Choose the right connector type:**
>
> - **BaseDatasourceConnector** - for most use cases, where all data fits comfortably in memory and your API can return it efficiently in one call.
> - **BaseStreamingDatasourceConnector** - for very large datasets, memory-constrained environments, or APIs that require incremental/paginated access.
> - **Single Document Indexing** - for real-time updates of individual documents.

## BaseDatasourceConnector

### When to Use This

#### Perfect for

- Document repositories small enough to fit comfortably in memory
- Wikis, knowledge bases, documentation sites
- File systems with moderate amounts of content
- Systems where you can fetch all data in one call
- Sources whose APIs do not support pagination

#### Avoid when

- You have very large datasets that cannot fit in memory
- Documents are very large (> 10 MB each)
- Memory usage is a concern

### Step-by-Step Implementation

#### Step 1: Define Your Data Type

```python snippet=non_streaming/wiki_page_data.py
from typing import List, TypedDict


class WikiPageData(TypedDict):
    """Type definition for your source data structure."""

    id: str
    title: str
    content: str
    author: str
    created_at: str
    updated_at: str
    url: str
    tags: List[str]
```

#### Step 2: Create Your DataClient

```python snippet=non_streaming/wiki_data_client.py
from typing import Sequence

from glean.indexing.connectors.base_data_client import BaseConnectorDataClient

from .wiki_page_data import WikiPageData


class WikiDataClient(BaseConnectorDataClient[WikiPageData]):
    """Fetches data from your external system."""

    def __init__(self, wiki_base_url: str, api_token: str):
        self.wiki_base_url = wiki_base_url
        self.api_token = api_token

    def get_source_data(self, since=None) -> Sequence[WikiPageData]:
        """Fetch all your documents here."""
        # Your implementation here - call APIs, read files, query databases
        raise NotImplementedError
```

#### Step 3: Create Your Connector

```python snippet=non_streaming/wiki_connector.py
from datetime import datetime
from typing import List, Sequence

from glean.indexing.connectors import BaseDatasourceConnector
from glean.indexing.models import (
    ContentDefinition,
    CustomDatasourceConfig,
    DocumentDefinition,
    UserReferenceDefinition,
)

from .wiki_page_data import WikiPageData


class CompanyWikiConnector(BaseDatasourceConnector[WikiPageData]):
    """Transform and upload your data to Glean."""

    configuration: CustomDatasourceConfig = CustomDatasourceConfig(
        name="company_wiki",
        display_name="Company Wiki",
        url_regex=r"https://wiki\.company\.com/.*",
        trust_url_regex_for_view_activity=True,
        is_user_referenced_by_email=True,
    )

    def transform(self, data: Sequence[WikiPageData]) -> List[DocumentDefinition]:
        """Transform your data to Glean's format."""
        documents = []
        for page in data:
            document = DocumentDefinition(
                id=page["id"],
                title=page["title"],
                datasource=self.name,
                view_url=page["url"],
                body=ContentDefinition(mime_type="text/plain", text_content=page["content"]),
                author=UserReferenceDefinition(email=page["author"]),
                created_at=self._parse_timestamp(page["created_at"]),
                updated_at=self._parse_timestamp(page["updated_at"]),
                tags=page["tags"],
            )
            documents.append(document)
        return documents

    def _parse_timestamp(self, timestamp_str: str) -> int:
        """Convert ISO timestamp to Unix epoch seconds."""
        dt = datetime.fromisoformat(timestamp_str.replace("Z", "+00:00"))
        return int(dt.timestamp())
```
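A note on the timestamp helper: `datetime.fromisoformat` on Python 3.10 (the SDK's minimum) does not accept a trailing `Z`, which is why the code rewrites it as `+00:00` before parsing; Python 3.11+ parses `Z` natively. A quick stdlib-only check of the conversion:

```python
from datetime import datetime


def parse_timestamp(timestamp_str: str) -> int:
    """Convert an ISO-8601 timestamp (possibly Z-suffixed) to Unix epoch seconds."""
    dt = datetime.fromisoformat(timestamp_str.replace("Z", "+00:00"))
    return int(dt.timestamp())


print(parse_timestamp("2024-01-15T10:00:00Z"))  # 1705312800
```

Because the parsed datetime carries an explicit UTC offset, `timestamp()` is correct regardless of the machine's local timezone.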

#### Step 4: Run the Connector

```python snippet=non_streaming/run_connector.py
from glean.indexing.models import IndexingMode

from .wiki_connector import CompanyWikiConnector
from .wiki_data_client import WikiDataClient

# Initialize
data_client = WikiDataClient(wiki_base_url="https://wiki.company.com", api_token="your-wiki-token")
connector = CompanyWikiConnector(name="company_wiki", data_client=data_client)

# Configure the datasource in Glean
connector.configure_datasource()

# Index all documents
connector.index_data(mode=IndexingMode.FULL)
```

### Complete Example

```python snippet=non_streaming/complete.py
from datetime import datetime
from typing import List, Sequence, TypedDict

from glean.indexing.connectors import BaseConnectorDataClient, BaseDatasourceConnector
from glean.indexing.models import (
    ContentDefinition,
    CustomDatasourceConfig,
    DocumentDefinition,
    IndexingMode,
    UserReferenceDefinition,
)


class WikiPageData(TypedDict):
    id: str
    title: str
    content: str
    author: str
    created_at: str
    updated_at: str
    url: str
    tags: List[str]


class WikiDataClient(BaseConnectorDataClient[WikiPageData]):
    def __init__(self, wiki_base_url: str, api_token: str):
        self.wiki_base_url = wiki_base_url
        self.api_token = api_token

    def get_source_data(self, since=None) -> Sequence[WikiPageData]:
        # Example static data
        return [
            {
                "id": "page_123",
                "title": "Engineering Onboarding Guide",
                "content": "Welcome to the engineering team...",
                "author": "jane.smith@company.com",
                "created_at": "2024-01-15T10:00:00Z",
                "updated_at": "2024-02-01T14:30:00Z",
                "url": f"{self.wiki_base_url}/pages/123",
                "tags": ["onboarding", "engineering"],
            },
            {
                "id": "page_124",
                "title": "API Documentation Standards",
                "content": "Our standards for API documentation...",
                "author": "john.doe@company.com",
                "created_at": "2024-01-20T09:15:00Z",
                "updated_at": "2024-01-25T16:45:00Z",
                "url": f"{self.wiki_base_url}/pages/124",
                "tags": ["api", "documentation", "standards"],
            },
        ]


class CompanyWikiConnector(BaseDatasourceConnector[WikiPageData]):
    configuration: CustomDatasourceConfig = CustomDatasourceConfig(
        name="company_wiki",
        display_name="Company Wiki",
        url_regex=r"https://wiki\.company\.com/.*",
        trust_url_regex_for_view_activity=True,
        is_user_referenced_by_email=True,
    )

    def transform(self, data: Sequence[WikiPageData]) -> List[DocumentDefinition]:
        documents = []
        for page in data:
            documents.append(
                DocumentDefinition(
                    id=page["id"],
                    title=page["title"],
                    datasource=self.name,
                    view_url=page["url"],
                    body=ContentDefinition(mime_type="text/plain", text_content=page["content"]),
                    author=UserReferenceDefinition(email=page["author"]),
                    created_at=self._parse_timestamp(page["created_at"]),
                    updated_at=self._parse_timestamp(page["updated_at"]),
                    tags=page["tags"],
                )
            )
        return documents

    def _parse_timestamp(self, timestamp_str: str) -> int:
        dt = datetime.fromisoformat(timestamp_str.replace("Z", "+00:00"))
        return int(dt.timestamp())


data_client = WikiDataClient(wiki_base_url="https://wiki.company.com", api_token="your-wiki-token")
connector = CompanyWikiConnector(name="company_wiki", data_client=data_client)
connector.configure_datasource()
connector.index_data(mode=IndexingMode.FULL)
```

## BaseStreamingDatasourceConnector

### When to Use This

#### Perfect for

- Large document repositories that cannot fit in memory
- Memory-constrained environments
- Documents that are fetched via paginated APIs
- Very large individual documents (> 10 MB)
- When you want to process data incrementally

#### Avoid when

- You have a small document set that fits comfortably in memory
- Your API can return all data efficiently in one call
- Memory usage is not a concern

### Step-by-Step Implementation

#### Step 1: Define Your Data Type

```python snippet=streaming/article_data.py
from typing import TypedDict


class ArticleData(TypedDict):
    """Type definition for knowledge base article data."""

    id: str
    title: str
    content: str
    author: str
    updated_at: str
    url: str
```

#### Step 2: Create Your Streaming DataClient

```python snippet=streaming/article_data_client.py
from typing import Generator

import requests

from glean.indexing.connectors.base_streaming_data_client import StreamingConnectorDataClient

from .article_data import ArticleData


class LargeKnowledgeBaseClient(StreamingConnectorDataClient[ArticleData]):
    """Streaming client that yields data incrementally."""

    def __init__(self, kb_api_url: str, api_key: str):
        self.kb_api_url = kb_api_url
        self.api_key = api_key

    def get_source_data(self, since=None) -> Generator[ArticleData, None, None]:
        """Stream documents one page at a time to save memory."""
        page = 1
        page_size = 100

        while True:
            params = {"page": page, "size": page_size}
            if since:
                params["modified_since"] = since

            response = requests.get(
                f"{self.kb_api_url}/articles",
                headers={"Authorization": f"Bearer {self.api_key}"},
                params=params,
            )
            response.raise_for_status()

            data = response.json()
            articles = data.get("articles", [])

            if not articles:
                break

            for article in articles:
                yield ArticleData(**article)

            if len(articles) < page_size:
                break

            page += 1
```
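The pagination loop above can be isolated and tested without a live API. In this stdlib-only sketch, `fetch_page` is a hypothetical stand-in for the HTTP call; the generator logic is the same: yield each page's items, and stop on an empty or short page:

```python
from typing import Callable, Iterator, List


def paginate(fetch_page: Callable[[int, int], List[dict]], page_size: int = 100) -> Iterator[dict]:
    """Yield items one page at a time; stop on an empty or short page."""
    page = 1
    while True:
        items = fetch_page(page, page_size)
        if not items:
            break
        yield from items
        if len(items) < page_size:
            break
        page += 1


# Fake backend: 5 items served in pages of 2.
data = [{"id": str(i)} for i in range(5)]
fetch = lambda page, size: data[(page - 1) * size : page * size]
print(len(list(paginate(fetch, page_size=2))))  # 5
```

The short-page check saves one round trip versus always fetching until an empty page, at the cost of assuming the API never returns partially filled intermediate pages.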

#### Step 3: Create Your Streaming Connector

```python snippet=streaming/article_connector.py
from datetime import datetime
from typing import List, Sequence

from glean.indexing.connectors import BaseStreamingDatasourceConnector
from glean.indexing.models import (
    ContentDefinition,
    CustomDatasourceConfig,
    DocumentDefinition,
    UserReferenceDefinition,
)

from .article_data import ArticleData


class KnowledgeBaseConnector(BaseStreamingDatasourceConnector[ArticleData]):
    configuration: CustomDatasourceConfig = CustomDatasourceConfig(
        name="knowledge_base",
        display_name="Knowledge Base",
        url_regex=r"https://kb\.company\.com/.*",
        trust_url_regex_for_view_activity=True,
    )

    def __init__(self, name: str, data_client):
        super().__init__(name, data_client)
        self.batch_size = 50

    def transform(self, data: Sequence[ArticleData]) -> List[DocumentDefinition]:
        documents = []
        for article in data:
            documents.append(
                DocumentDefinition(
                    id=article["id"],
                    title=article["title"],
                    datasource=self.name,
                    view_url=article["url"],
                    body=ContentDefinition(mime_type="text/html", text_content=article["content"]),
                    author=UserReferenceDefinition(email=article["author"]),
                    updated_at=self._parse_timestamp(article["updated_at"]),
                )
            )
        return documents

    def _parse_timestamp(self, timestamp_str: str) -> int:
        dt = datetime.fromisoformat(timestamp_str.replace("Z", "+00:00"))
        return int(dt.timestamp())
```
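The `batch_size = 50` above caps how many documents go into each upload request; the base class handles the actual batching during `index_data()`. As a rough illustration of the idea (not the SDK's internal implementation), splitting a document list into fixed-size chunks looks like:

```python
from typing import Iterator, List, Sequence, TypeVar

T = TypeVar("T")


def batched(items: Sequence[T], batch_size: int) -> Iterator[List[T]]:
    """Split a sequence into consecutive chunks of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield list(items[start : start + batch_size])


print([len(b) for b in batched(list(range(120)), 50)])  # [50, 50, 20]
```

Smaller batches keep individual requests well under payload limits and make retries cheaper; larger batches reduce the number of round trips.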

#### Step 4: Run the Connector

```python snippet=streaming/run_connector.py
from glean.indexing.models import IndexingMode

from .article_connector import KnowledgeBaseConnector
from .article_data_client import LargeKnowledgeBaseClient

data_client = LargeKnowledgeBaseClient(
    kb_api_url="https://kb-api.company.com", api_key="your-kb-api-key"
)
connector = KnowledgeBaseConnector(name="knowledge_base", data_client=data_client)

connector.configure_datasource()
connector.index_data(mode=IndexingMode.FULL)
```
@@ -0,0 +1,27 @@
glean/indexing/__init__.py,sha256=4rk3Q9mlKf707DNKstmOf2l5cljagvYobwwHhYlD-Zw,1519
glean/indexing/models.py,sha256=UuaEDCx0ygvU4u0lRbSn4YXXZVo7D_pyD_whQtjORm8,1223
glean/indexing/py.typed,sha256=Nqnn8clbgv-5l0PgxcTOldg8mkMKrFn4TvPL-rYUUGg,1
glean/indexing/common/__init__.py,sha256=6COS3jP66xJ7VcNGI8I95tkF5zpqHy9QPVn82CB4m4I,513
glean/indexing/common/batch_processor.py,sha256=ZdKYPjjTTgQV_iyIvA23EjrXF58wLdeOI-VLEy6588o,893
glean/indexing/common/content_formatter.py,sha256=PkIUZRoRtaOf1w6tJbB3cDj4oV58I7Tw8zChActumt8,1269
glean/indexing/common/glean_client.py,sha256=tKRWK_C1Nja0gVy2FLnj9SmUbpIdOA3WKmpuuhIl7kk,488
glean/indexing/common/metrics.py,sha256=SWCWCYnNOkN4cnwCxyWyEF8iHVwQ4HZqhewi2lqyS84,1771
glean/indexing/common/mocks.py,sha256=-TbLzpZ7yUstQW58AICixiIQM2CV5_OPRXejjI_brhE,726
glean/indexing/connectors/__init__.py,sha256=YaHEmCj246zKIvPIAOjTBTDV2O-KvMLncc6jjmaEeOw,1035
glean/indexing/connectors/base_connector.py,sha256=Q435TzSLqs0OTFBrD3KCcjQnGSICQg11pdSfJ7C3XtI,2398
glean/indexing/connectors/base_data_client.py,sha256=krOFHJbwCZI-hCS6fr-z44TvjCbPCTCw54hkk0CZFsQ,1004
glean/indexing/connectors/base_datasource_connector.py,sha256=x0Fsc7uCKgTtTgyOus1yDFBr87JbVGHM3zHFp9mGgc4,12440
glean/indexing/connectors/base_people_connector.py,sha256=XuSCFyegenW271GZJ408IQgT19sBq9C9NkKHkiSxLKg,6239
glean/indexing/connectors/base_streaming_data_client.py,sha256=xW67crQ_rHaOnD0NFBi2zTGex9JGME886CjX4EqgbZM,1241
glean/indexing/connectors/base_streaming_datasource_connector.py,sha256=wUcsBPExzmgMQd6P24epR4bZFBl40aN6qm6di_F2hmA,7116
glean/indexing/observability/__init__.py,sha256=SuWJ7pHs5WFq5vL036B3RIsJSbjDsy6SI705u83874I,455
glean/indexing/observability/observability.py,sha256=cHlo-tbrmGie6YeWXqEUap0YE6JRtFvOKTnxWD-7yac,9222
glean/indexing/testing/__init__.py,sha256=h9mK0QjRZD5f470ePTeg635jZNwPBAd2S7g1DQO4LuE,448
glean/indexing/testing/connector_test_harness.py,sha256=CMQZmn0cOIrj_GdIHb3OwRN9jTaZrn3pYkHHz50rqK8,1988
glean/indexing/testing/mock_data_source.py,sha256=ICYbbHQZe9RVTzvrlwcxp_suxm9yXgjEAGiNCU-SkS4,1325
glean/indexing/testing/mock_glean_client.py,sha256=aY_Jfg_NJNPw2HSM1IshgT2lkT59SD9BJzOnvNFJhck,2528
glean/indexing/testing/response_validator.py,sha256=jehEtXlW0AQcOVck-_VPoDFtQM_vkHJQ10SUN1ftr1Q,1800
glean_indexing_sdk-0.0.3.dist-info/METADATA,sha256=uyukc_HxjJuhdYzolllnxL8tqpmyWani9g1_NVJhnI4,15619
glean_indexing_sdk-0.0.3.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
glean_indexing_sdk-0.0.3.dist-info/licenses/LICENSE,sha256=RAfePGwatR5BOtlNhW60zAKWCeHVgtGpaGBqZQadXNQ,1062
glean_indexing_sdk-0.0.3.dist-info/RECORD,,
@@ -0,0 +1,4 @@
Wheel-Version: 1.0
Generator: hatchling 1.27.0
Root-Is-Purelib: true
Tag: py3-none-any
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 Glean

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.