rustystats-0.1.5-cp313-cp313-manylinux_2_34_x86_64.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,476 @@
1
+ Metadata-Version: 2.4
2
+ Name: rustystats
3
+ Version: 0.1.5
4
+ Classifier: Programming Language :: Rust
5
+ Classifier: Programming Language :: Python :: Implementation :: CPython
6
+ Classifier: Programming Language :: Python :: 3
7
+ Classifier: Programming Language :: Python :: 3.9
8
+ Classifier: Programming Language :: Python :: 3.10
9
+ Classifier: Programming Language :: Python :: 3.11
10
+ Classifier: Programming Language :: Python :: 3.12
11
+ Classifier: Topic :: Scientific/Engineering :: Mathematics
12
+ Classifier: Intended Audience :: Financial and Insurance Industry
13
+ Classifier: Intended Audience :: Science/Research
14
+ Requires-Dist: numpy>=1.20
15
+ Requires-Dist: polars>=1.0
16
+ Requires-Dist: pytest>=7.0 ; extra == 'dev'
17
+ Requires-Dist: statsmodels>=0.14 ; extra == 'dev'
18
+ Requires-Dist: maturin>=1.4 ; extra == 'dev'
19
+ Requires-Dist: jupyter>=1.0 ; extra == 'dev'
20
+ Requires-Dist: pyarrow>=14.0 ; extra == 'dev'
21
+ Requires-Dist: psutil>=5.9 ; extra == 'dev'
22
+ Provides-Extra: dev
23
+ License-File: LICENSE
24
+ Summary: Fast Generalized Linear Models with a Rust backend - statsmodels compatible
25
+ Keywords: statistics,glm,actuarial,regression,modeling
26
+ License: MIT
27
+ Requires-Python: >=3.9
28
+ Description-Content-Type: text/markdown; charset=UTF-8; variant=GFM
29
+ Project-URL: Repository, https://github.com/PricingFrontier/rustystats
30
+ Project-URL: Documentation, https://github.com/PricingFrontier/rustystats#readme
31
+
32
+ # RustyStats 🦀📊
33
+
34
+ **High-performance Generalized Linear Models with a Rust backend and Python API**
35
+
36
+ **Codebase Documentation**: [pricingfrontier.github.io/rustystats/](https://pricingfrontier.github.io/rustystats/)
37
+
38
+ ## Performance Benchmarks
39
+
40
+ **RustyStats vs Statsmodels** — Synthetic data with 10 continuous and 10 categorical features (10 levels each), giving 101 design-matrix columns.
41
+
42
+ | Family | 10K rows | 250K rows | 500K rows |
43
+ |--------|----------|-----------|-----------|
44
+ | Gaussian | **15.6x** | **5.7x** | **4.3x** |
45
+ | Poisson | **16.3x** | **6.2x** | **4.2x** |
46
+ | Binomial | **19.5x** | **6.8x** | **4.4x** |
47
+ | Gamma | **33.7x** | **13.4x** | **8.4x** |
48
+ | NegBinomial | **26.7x** | **6.7x** | **5.0x** |
49
+
50
+ **Average speedup: 10.5x** (range: 4.2x – 33.7x)
51
+
52
+ ### Memory Usage
53
+
54
+ RustyStats uses significantly less RAM by reusing buffers and avoiding Python object overhead:
55
+
56
+ | Rows | RustyStats | Statsmodels | Reduction |
57
+ |------|------------|-------------|-----------|
58
+ | 10K | 38 MB | 72 MB | **1.9x** |
59
+ | 250K | 460 MB | 1,796 MB | **3.9x** |
60
+ | 500K | 836 MB | 3,590 MB | **4.3x** |
61
+
62
+ *Memory advantage grows with data size — at 500K rows, RustyStats uses ~4x less RAM.*
63
+
64
+ <details>
65
+ <summary>Full benchmark details</summary>
66
+
67
+ | Family | Rows | RustyStats | Statsmodels | Speedup |
68
+ |--------|------|------------|-------------|---------|
69
+ | Gaussian | 10,000 | 0.100s | 1.559s | **15.6x** |
70
+ | Gaussian | 250,000 | 1.991s | 11.363s | **5.7x** |
71
+ | Gaussian | 500,000 | 4.023s | 17.386s | **4.3x** |
72
+ | Poisson | 10,000 | 0.165s | 2.692s | **16.3x** |
73
+ | Poisson | 250,000 | 2.429s | 15.072s | **6.2x** |
74
+ | Poisson | 500,000 | 5.668s | 23.693s | **4.2x** |
75
+ | Binomial | 10,000 | 0.112s | 2.189s | **19.5x** |
76
+ | Binomial | 250,000 | 1.946s | 13.155s | **6.8x** |
77
+ | Binomial | 500,000 | 4.708s | 20.862s | **4.4x** |
78
+ | Gamma | 10,000 | 0.129s | 4.353s | **33.7x** |
79
+ | Gamma | 250,000 | 2.385s | 31.885s | **13.4x** |
80
+ | Gamma | 500,000 | 5.499s | 46.167s | **8.4x** |
81
+ | NegBinomial | 10,000 | 0.119s | 3.177s | **26.7x** |
82
+ | NegBinomial | 250,000 | 2.281s | 15.278s | **6.7x** |
83
+ | NegBinomial | 500,000 | 4.821s | 24.331s | **5.0x** |
84
+
85
+ *Times are median of 3 runs. Benchmark scripts in `benchmarks/`.*
86
+
87
+ </details>
88
+
89
+ ---
90
+
91
+ ## Features
92
+
93
+ - **Fast** - Parallel Rust backend, 4-34x faster than statsmodels in the benchmarks above
94
+ - **Memory Efficient** - 4x less RAM than statsmodels at scale
95
+ - **Stable** - Step-halving IRLS, warm starts for robust convergence
96
+ - **Splines** - B-splines `bs()` and natural splines `ns()` in formulas
97
+ - **Regularization** - Ridge, Lasso, and Elastic Net via coordinate descent
98
+ - **Complete** - 8 families, robust SEs, full diagnostics
99
+ - **Minimal** - Only `numpy` and `polars` required
100
+
101
+ ## Installation
102
+
103
+ ```bash
104
+ uv add rustystats
105
+ ```
106
+
107
+ ## Quick Start
108
+
109
+ ```python
110
+ import rustystats as rs
111
+ import polars as pl
112
+
113
+ # Load data
114
+ data = pl.read_parquet("insurance.parquet")
115
+
116
+ # Fit a Poisson GLM for claim frequency
117
+ result = rs.glm(
118
+ "ClaimCount ~ VehAge + VehPower + C(Area) + C(Region)",
119
+ data=data,
120
+ family="poisson",
121
+ offset="Exposure"
122
+ ).fit()
123
+
124
+ # View results
125
+ print(result.summary())
126
+ ```
127
+
128
+ ---
129
+
130
+ ## Families & Links
131
+
132
+ | Family | Default Link | Use Case |
133
+ |--------|--------------|----------|
134
+ | `gaussian` | identity | Linear regression |
135
+ | `poisson` | log | Claim frequency |
136
+ | `binomial` | logit | Binary outcomes |
137
+ | `gamma` | log | Claim severity |
138
+ | `tweedie` | log | Pure premium (var_power=1.5) |
139
+ | `quasipoisson` | log | Overdispersed counts |
140
+ | `quasibinomial` | logit | Overdispersed binary |
141
+ | `negbinomial` | log | Overdispersed counts (proper distribution) |
142
+
143
+ ---
144
+
145
+ ## Formula Syntax
146
+
147
+ ```python
148
+ # Main effects
149
+ "y ~ x1 + x2 + C(category)"
150
+
151
+ # Interactions
152
+ "y ~ x1*x2" # x1 + x2 + x1:x2
153
+ "y ~ C(area):age" # Area-specific age effects
154
+ "y ~ C(area)*C(brand)" # Categorical × categorical
155
+
156
+ # Splines (non-linear effects)
157
+ "y ~ bs(age, df=5)" # B-spline basis
158
+ "y ~ ns(income, df=4)" # Natural spline (better extrapolation)
159
+
160
+ # Target encoding (high-cardinality categoricals)
161
+ "y ~ TE(brand) + TE(model)"
162
+
163
+ # Combined
164
+ "y ~ bs(age, df=5) + C(region)*income + ns(vehicle_age, df=3) + TE(brand)"
165
+ ```
166
+
167
+ ---
168
+
169
+ ## Results Methods
170
+
171
+ ```python
172
+ # Coefficients & Inference
173
+ result.params # Coefficients
174
+ result.fittedvalues # Predicted means
175
+ result.deviance # Model deviance
176
+ result.bse() # Standard errors
177
+ result.tvalues() # z-statistics
178
+ result.pvalues() # P-values
179
+ result.conf_int(alpha) # Confidence intervals
180
+
181
+ # Robust Standard Errors (sandwich estimators)
182
+ result.bse_robust("HC1") # Robust SE (HC0, HC1, HC2, HC3)
183
+ result.tvalues_robust() # z-stats with robust SE
184
+ result.pvalues_robust() # P-values with robust SE
185
+ result.conf_int_robust() # Confidence intervals with robust SE
186
+ result.cov_robust() # Full robust covariance matrix
187
+
188
+ # Diagnostics (statsmodels-compatible)
189
+ result.resid_response() # Raw residuals (y - μ)
190
+ result.resid_pearson() # Pearson residuals
191
+ result.resid_deviance() # Deviance residuals
192
+ result.resid_working() # Working residuals
193
+ result.llf() # Log-likelihood
194
+ result.aic() # Akaike Information Criterion
195
+ result.bic() # Bayesian Information Criterion
196
+ result.null_deviance() # Null model deviance
197
+ result.pearson_chi2() # Pearson chi-squared
198
+ result.scale() # Dispersion (deviance-based)
199
+ result.scale_pearson() # Dispersion (Pearson-based)
200
+ result.family # Family name
201
+ ```
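For readers unfamiliar with sandwich estimators, what the robust-SE methods report can be sketched in plain numpy for the OLS case. This is an illustration of the technique, not the rustystats implementation (for GLMs the residuals and weights come from the IRLS fit):

```python
import numpy as np

# Sandwich ("robust") covariance for an OLS fit, in plain numpy.
# HC0: bread @ meat @ bread with meat = X' diag(e_i^2) X.
# HC1 rescales HC0 by n/(n-p) as a small-sample correction.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
# Heteroskedastic noise: the case where robust SEs matter.
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n) * (1.0 + np.abs(X[:, 1]))

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

bread = np.linalg.inv(X.T @ X)              # (X'X)^{-1}
meat = X.T @ (X * resid[:, None] ** 2)      # X' diag(e_i^2) X
cov_hc0 = bread @ meat @ bread
cov_hc1 = cov_hc0 * n / (n - X.shape[1])    # degrees-of-freedom correction
se_robust = np.sqrt(np.diag(cov_hc1))
```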
202
+
203
+ ---
204
+
205
+ ## Regularization
206
+
207
+ ```python
208
+ # Ridge (L2) - shrinks coefficients, keeps all variables
209
+ result = rs.glm("y ~ x1 + x2 + C(cat)", data, family="gaussian").fit(
210
+ alpha=0.1, l1_ratio=0.0
211
+ )
212
+
213
+ # Lasso (L1) - variable selection, zeros out weak predictors
214
+ result = rs.glm("y ~ x1 + x2 + C(cat)", data, family="poisson").fit(
215
+ alpha=0.1, l1_ratio=1.0
216
+ )
217
+ print(f"Selected {result.n_nonzero()} variables")
218
+ print(f"Features: {result.selected_features()}")
219
+
220
+ # Elastic Net - mix of L1 and L2
221
+ result = rs.glm("y ~ x1 + x2 + C(cat)", data, family="gaussian").fit(
222
+ alpha=0.1, l1_ratio=0.5
223
+ )
224
+ ```
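The coordinate-descent idea behind the Lasso path can be sketched in plain numpy. This illustrates soft-thresholding on a toy least-squares problem; it is not the rustystats solver:

```python
import numpy as np

# Coordinate descent for the lasso objective
#   (1/(2n)) * ||y - X @ beta||^2 + alpha * ||beta||_1
# Each coordinate update is a least-squares step soft-thresholded at n*alpha.
def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=200):
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove every feature's fit except feature j.
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r, n * alpha) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=100)  # only feature 0 is real
beta = lasso_cd(X, y, alpha=0.1)                # weak features shrink to zero
```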
225
+
226
+ ---
227
+
228
+ ## Interaction Terms
229
+
230
+ ```python
231
+ # Continuous × Continuous interaction (main effects + interaction)
232
+ result = rs.glm(
233
+ "ClaimNb ~ Age*VehPower", # Equivalent to Age + VehPower + Age:VehPower
234
+ data, family="poisson", offset="Exposure"
235
+ ).fit()
236
+
237
+ # Categorical × Continuous interaction
238
+ result = rs.glm(
239
+ "ClaimNb ~ C(Area)*Age", # Each area level has different age effect
240
+ data, family="poisson", offset="Exposure"
241
+ ).fit()
242
+
243
+ # Categorical × Categorical interaction
244
+ result = rs.glm(
245
+ "ClaimNb ~ C(Area)*C(VehBrand)",
246
+ data, family="poisson", offset="Exposure"
247
+ ).fit()
248
+
249
+ # Pure interaction (no main effects added)
250
+ result = rs.glm(
251
+ "ClaimNb ~ Age + C(Area):VehPower", # Area-specific VehPower slopes
252
+ data, family="poisson", offset="Exposure"
253
+ ).fit()
254
+ ```
255
+
256
+ ---
257
+
258
+ ## Spline Basis Functions
259
+
260
+ ```python
261
+ # Use splines in formulas - automatic parsing
262
+ result = rs.glm(
263
+ "ClaimNb ~ bs(Age, df=5) + ns(VehPower, df=4) + C(Region)",
264
+ data=data,
265
+ family="poisson",
266
+ offset="Exposure"
267
+ ).fit()
268
+
269
+ # Combine splines with interactions
270
+ result = rs.glm(
271
+ "y ~ bs(age, df=4)*C(gender) + ns(income, df=3)",
272
+ data=data,
273
+ family="gaussian"
274
+ ).fit()
275
+
276
+ # Direct basis computation for custom use
277
+ import numpy as np
278
+ x = np.linspace(0, 10, 100)
279
+ basis = rs.bs(x, df=5)       # B-spline basis with 5 degrees of freedom
280
+ basis_ns = rs.ns(x, df=5) # Natural splines - linear extrapolation at boundaries
281
+ ```
282
+
283
+ **When to use each spline type:**
284
+ - **B-splines (`bs`)**: Standard choice, more flexible at boundaries
285
+ - **Natural splines (`ns`)**: Better extrapolation, linear beyond boundaries (recommended for actuarial work)
286
+
287
+ ---
288
+
289
+ ## Quasi-Families for Overdispersion
290
+
291
+ ```python
292
+ # Fit a standard Poisson model first
293
+ result_poisson = rs.glm("ClaimNb ~ Age + C(Region)", data, family="poisson", offset="Exposure").fit()
294
+
295
+ # Check for overdispersion: Pearson χ² / df >> 1 indicates overdispersion
296
+ dispersion_ratio = result_poisson.pearson_chi2() / result_poisson.df_resid
297
+ print(f"Dispersion ratio: {dispersion_ratio:.2f}") # If >> 1, use quasi-family
298
+
299
+ # Fit QuasiPoisson if overdispersed
300
+ result_quasi = rs.glm("ClaimNb ~ Age + C(Region)", data, family="quasipoisson", offset="Exposure").fit()
301
+
302
+ # Coefficients are IDENTICAL to Poisson, but standard errors are inflated by √φ
303
+ print(f"Estimated dispersion (φ): {result_quasi.scale():.3f}")
304
+
305
+ # For binary data with overdispersion
306
+ result_qb = rs.glm("Binary ~ x1 + x2", data, family="quasibinomial").fit()
307
+ ```
308
+
309
+ **Key properties of quasi-families:**
310
+ - **Point estimates**: Identical to base family (Poisson/Binomial)
311
+ - **Standard errors**: Inflated by √φ where φ = Pearson χ²/(n-p)
312
+ - **P-values**: More conservative (larger), accounting for extra variance
313
+
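The √φ inflation is simple arithmetic once φ is estimated. A plain-numpy sketch (not the rustystats internals; `p` and `se_poisson` are assumed values for illustration):

```python
import numpy as np

# Simulate counts that are overdispersed relative to their fitted Poisson
# means mu, then estimate phi = Pearson chi^2 / (n - p).
rng = np.random.default_rng(1)
mu = rng.uniform(0.5, 3.0, size=500)                    # fitted means
theta = 1.5
y = rng.negative_binomial(theta, theta / (theta + mu))  # overdispersed counts

p = 4                                      # number of fitted parameters (assumed)
pearson_chi2 = np.sum((y - mu) ** 2 / mu)
phi = pearson_chi2 / (len(y) - p)          # > 1 here: overdispersion

se_poisson = 0.05                          # an illustrative Poisson-model SE
se_quasi = se_poisson * np.sqrt(phi)       # quasi-Poisson inflates SEs by sqrt(phi)
```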
314
+ ---
315
+
316
+ ## Negative Binomial for Overdispersed Counts
317
+
318
+ ```python
319
+ # Automatic θ estimation (default when theta not supplied)
320
+ result = rs.glm("ClaimNb ~ Age + C(Region)", data, family="negbinomial", offset="Exposure").fit()
321
+ print(result.family) # "NegativeBinomial(theta=2.1234)"
322
+
323
+ # Fixed θ value
324
+ result = rs.glm("ClaimNb ~ Age + C(Region)", data, family="negbinomial", theta=1.0, offset="Exposure").fit()
325
+
326
+ # θ controls overdispersion: Var(Y) = μ + μ²/θ
327
+ # - θ=0.5: Strong overdispersion (variance = μ + 2μ²)
328
+ # - θ=1.0: Moderate overdispersion (variance = μ + μ²)
329
+ # - θ→∞: Approaches Poisson (variance = μ)
330
+ ```
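The mean-variance relation can be checked numerically with numpy, whose `negative_binomial(n, p)` matches this parameterization with `n = θ` and `p = θ/(θ+μ)`:

```python
import numpy as np

# Check Var(Y) = mu + mu^2 / theta by simulation.
rng = np.random.default_rng(42)
mu, theta = 2.0, 1.0
y = rng.negative_binomial(theta, theta / (theta + mu), size=200_000)

print(y.mean())  # close to mu = 2.0
print(y.var())   # close to mu + mu**2 / theta = 6.0
```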
331
+
332
+ **NegativeBinomial vs QuasiPoisson:**
333
+ | Aspect | QuasiPoisson | NegativeBinomial |
334
+ |--------|--------------|------------------|
335
+ | **Variance** | φ × μ | μ + μ²/θ |
336
+ | **True distribution** | No (quasi) | Yes |
337
+ | **AIC/BIC valid** | Questionable | Yes |
338
+ | **Prediction intervals** | Not principled | Proper |
339
+
340
+ ---
341
+
342
+ ## Target Encoding for High-Cardinality Categoricals
343
+
344
+ ```python
345
+ # Formula API - TE() in formulas
346
+ result = rs.glm(
347
+ "ClaimNb ~ TE(Brand) + TE(Model) + Age + C(Region)",
348
+ data=data,
349
+ family="poisson",
350
+ offset="Exposure"
351
+ ).fit()
352
+
353
+ # With options
354
+ result = rs.glm(
355
+ "y ~ TE(brand, prior_weight=2.0, n_permutations=8) + age",
356
+ data=data,
357
+ family="gaussian"
358
+ ).fit()
359
+
360
+ # Sklearn-style API
361
+ encoder = rs.TargetEncoder(prior_weight=1.0, n_permutations=4)
362
+ train_encoded = encoder.fit_transform(train_categories, train_target)
363
+ test_encoded = encoder.transform(test_categories)
364
+ ```
365
+
366
+ **Key benefits:**
367
+ - **No target leakage**: Ordered target statistics
368
+ - **Regularization**: Prior weight controls shrinkage toward global mean
369
+ - **High-cardinality**: Single column instead of thousands of dummies
370
+
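The ordered-target-statistics scheme can be sketched in plain numpy. `ordered_target_encode` below is a hypothetical helper illustrating the idea, not the rustystats implementation:

```python
import numpy as np

# Each row is encoded using only target values seen earlier in a random
# permutation, shrunk toward the global mean by prior_weight. No row ever
# sees its own target value, which prevents leakage.
def ordered_target_encode(categories, target, prior_weight=1.0, seed=0):
    rng = np.random.default_rng(seed)
    global_mean = target.mean()
    sums, counts = {}, {}
    encoded = np.empty(len(target))
    for i in rng.permutation(len(target)):
        c = categories[i]
        s, n = sums.get(c, 0.0), counts.get(c, 0)
        # Shrunken running mean over the rows already visited.
        encoded[i] = (s + prior_weight * global_mean) / (n + prior_weight)
        sums[c] = s + target[i]
        counts[c] = n + 1
    return encoded

cats = np.array(["a", "a", "b", "b", "a", "b"])
y = np.array([1.0, 0.0, 1.0, 1.0, 1.0, 0.0])
enc = ordered_target_encode(cats, y)  # one numeric column, values in [0, 1]
```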
371
+ ---
372
+
373
+ ## Model Diagnostics
374
+
375
+ ```python
376
+ # Compute all diagnostics at once
377
+ diagnostics = result.diagnostics(
378
+ data=data,
379
+ categorical_factors=["Region", "VehBrand", "Area"], # Including non-fitted
380
+ continuous_factors=["Age", "Income", "VehPower"], # Including non-fitted
381
+ )
382
+
383
+ # Export as compact JSON (optimized for LLM consumption)
384
+ json_str = diagnostics.to_json()
385
+
386
+ # Pre-fit data exploration (no model needed)
387
+ exploration = rs.explore_data(
388
+ data=data,
389
+ response="ClaimNb",
390
+ categorical_factors=["Region", "VehBrand", "Area"],
391
+ continuous_factors=["Age", "VehPower", "Income"],
392
+ exposure="Exposure",
393
+ family="poisson",
394
+ detect_interactions=True,
395
+ )
396
+ ```
397
+
398
+ **Diagnostic Features:**
399
+ - **Calibration**: Overall A/E ratio, calibration by decile with CIs, Hosmer-Lemeshow test
400
+ - **Discrimination**: Gini coefficient, AUC, KS statistic, lift metrics
401
+ - **Factor Diagnostics**: A/E by level/bin for ALL factors (fitted and non-fitted)
402
+ - **Interaction Detection**: Greedy residual-based detection of potential interactions
403
+ - **Warnings**: Auto-generated alerts for high dispersion, poor calibration, missing factors
404
+
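The core calibration check, the actual/expected (A/E) ratio overall and by predicted decile, can be sketched in plain numpy (an illustration only; the real diagnostics report is richer and its output format differs):

```python
import numpy as np

# A calibrated model has A/E near 1.0 overall and in every decile.
rng = np.random.default_rng(7)
mu = rng.uniform(0.05, 0.3, size=10_000)   # predicted claim frequencies
y = rng.poisson(mu)                        # observed claim counts

ae_overall = y.sum() / mu.sum()

edges = np.quantile(mu, np.linspace(0.0, 1.0, 11))
bins = np.digitize(mu, edges[1:-1])        # decile index 0..9 per row
ae_by_decile = np.array(
    [y[bins == k].sum() / mu[bins == k].sum() for k in range(10)]
)
```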
405
+ ---
406
+
407
+ ## RustyStats vs Statsmodels
408
+
409
+ | Feature | RustyStats | Statsmodels |
410
+ |---------|------------|-------------|
411
+ | **Parallel IRLS Solver** | ✅ Multi-threaded via Rayon | ❌ Single-threaded only |
412
+ | **Native Polars Support** | ✅ Formula API works with Polars DataFrames | ❌ Pandas only |
413
+ | **Built-in Lasso/Elastic Net for GLMs** | ✅ Fast coordinate descent with all families | ⚠️ Limited |
414
+ | **Relativities Table** | ✅ `result.relativities()` for pricing | ❌ Must compute manually |
415
+ | **Robust Standard Errors** | ✅ HC0, HC1, HC2, HC3 sandwich estimators | ✅ HC0-HC3 |
416
+
417
+ ---
418
+
419
+ ## Project Structure
420
+
421
+ ```
422
+ rustystats/
423
+ ├── Cargo.toml # Workspace config
424
+ ├── pyproject.toml # Python package config
425
+
426
+ ├── crates/
427
+ │ ├── rustystats-core/ # Pure Rust GLM library
428
+ │ │ └── src/
429
+ │ │ ├── families/ # Gaussian, Poisson, Binomial, Gamma, Tweedie, Quasi, NegativeBinomial
430
+ │ │ ├── links/ # Identity, Log, Logit
431
+ │ │ ├── solvers/ # IRLS, coordinate descent
432
+ │ │ ├── inference/ # P-values, CIs, robust SE (HC0-HC3)
433
+ │ │ ├── interactions/ # Lazy interaction term computation
434
+ │ │ ├── splines/ # B-spline and natural spline basis functions
435
+ │ │ ├── design_matrix/ # Categorical encoding, interaction matrices
436
+ │ │ ├── formula/ # R-style formula parsing
437
+ │ │ ├── target_encoding/ # Ordered target statistics
438
+ │ │ └── diagnostics/ # Residuals, dispersion, AIC/BIC, calibration, loss
439
+ │ │
440
+ │ └── rustystats/ # Python bindings (PyO3)
441
+ │ └── src/lib.rs
442
+
443
+ ├── python/rustystats/ # Python package
444
+ │ ├── __init__.py # Main exports
445
+ │ ├── formula.py # Formula API with DataFrame support
446
+ │ ├── splines.py # bs() and ns() spline basis functions
447
+ │ ├── target_encoding.py # Target encoding
448
+ │ ├── diagnostics.py # Model diagnostics with JSON export
449
+ │ └── families.py # Family wrappers
450
+
451
+ ├── examples/
452
+ │ └── frequency.ipynb # Claim frequency example
453
+
454
+ └── tests/python/ # Python test suite
455
+ ```
456
+
457
+ ---
458
+
459
+ ## Dependencies
460
+
461
+ ### Rust
462
+ - `ndarray`, `nalgebra` - Linear algebra
463
+ - `rayon` - Parallel iterators (multi-threading)
464
+ - `statrs` - Statistical distributions
465
+ - `pyo3` - Python bindings
466
+
467
+ ### Python
468
+ - `numpy` - Array operations (required)
469
+ - `polars` - DataFrame support (required)
470
+
471
+ ---
472
+
473
+ ## License
474
+
475
+ MIT
476
+
@@ -0,0 +1,14 @@
1
+ rustystats-0.1.5.dist-info/METADATA,sha256=gcMU-FCDylYCfffKFDlg5l5U7IHcjpRvJIU7etxVcGM,15761
2
+ rustystats-0.1.5.dist-info/WHEEL,sha256=jmh_XJXNl6S4nP-PSN0xl3LQkj9yKhrwCYo0d4TS4ew,109
3
+ rustystats-0.1.5.dist-info/licenses/LICENSE,sha256=MMgZDMsAZZqE7jrwmPD2K_pwsdY6Cw4OBVmp5pJYOKg,1072
4
+ rustystats/__init__.py,sha256=XIfx7I1nVPLnp3iwJy297klDJpb2_2LOwErLhNPkoyk,4052
5
+ rustystats/_rustystats.cpython-313-x86_64-linux-gnu.so,sha256=u8-h8i0aZuJw-SF6Xu2cyF7A-0-fYWx4E4-I0EXQ5wU,1896704
6
+ rustystats/diagnostics.py,sha256=CNZTRibvf00PATHXcyQlinrLgrPxIrypRVUruYdWm58,90498
7
+ rustystats/families.py,sha256=PNgaXPd09C4T72sNBCswnGD2bIec81hjTq8zxzsjaUc,14510
8
+ rustystats/formula.py,sha256=U2nETIndcmjSvkEnpU6Qd-8WRNSU4H0nodTwkVqhXh4,36398
9
+ rustystats/glm.py,sha256=N2DKV3J1X-2x_LUYjNXHmoogjVGLqqePlNfAMfAEF-Q,8133
10
+ rustystats/interactions.py,sha256=3objuZgqXBG2XTuBXNY--flXoM2cqrpIRvicO8MuaV4,47405
11
+ rustystats/links.py,sha256=qvlSV5IJQPtDy92PNKkxQUEsni3Jfsw55lFzp9RXlVI,6754
12
+ rustystats/splines.py,sha256=y6nRxVEmzpXVgeT92UA7Fs9LkHutQ2qtdHY0xaaLsZU,11137
13
+ rustystats/target_encoding.py,sha256=M9KFs1AILqvv-yZr3MVGUw541GU9-DyOuQW5p0YzOC4,11563
14
+ rustystats-0.1.5.dist-info/RECORD,,
@@ -0,0 +1,4 @@
1
+ Wheel-Version: 1.0
2
+ Generator: maturin (1.10.2)
3
+ Root-Is-Purelib: false
4
+ Tag: cp313-cp313-manylinux_2_34_x86_64
@@ -0,0 +1,21 @@
1
+ MIT License
2
+
3
+ Copyright (c) 2024 PricingFrontier
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.