comfy-env 0.0.9__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (48)
  1. comfy_env-0.0.9/.github/workflows/publish.yml +28 -0
  2. comfy_env-0.0.9/.gitignore +29 -0
  3. comfy_env-0.0.9/CLAUDE.md +131 -0
  4. comfy_env-0.0.9/CRITICISM.md +316 -0
  5. comfy_env-0.0.9/LICENSE +21 -0
  6. comfy_env-0.0.9/PKG-INFO +228 -0
  7. comfy_env-0.0.9/README.md +201 -0
  8. comfy_env-0.0.9/examples/basic_node/__init__.py +5 -0
  9. comfy_env-0.0.9/examples/basic_node/comfy-env.toml +65 -0
  10. comfy_env-0.0.9/examples/basic_node/nodes.py +157 -0
  11. comfy_env-0.0.9/examples/basic_node/worker.py +79 -0
  12. comfy_env-0.0.9/examples/decorator_node/__init__.py +9 -0
  13. comfy_env-0.0.9/examples/decorator_node/nodes.py +182 -0
  14. comfy_env-0.0.9/pyproject.toml +46 -0
  15. comfy_env-0.0.9/src/comfy_env/__init__.py +161 -0
  16. comfy_env-0.0.9/src/comfy_env/cli.py +388 -0
  17. comfy_env-0.0.9/src/comfy_env/decorator.py +422 -0
  18. comfy_env-0.0.9/src/comfy_env/env/__init__.py +30 -0
  19. comfy_env-0.0.9/src/comfy_env/env/config.py +148 -0
  20. comfy_env-0.0.9/src/comfy_env/env/config_file.py +619 -0
  21. comfy_env-0.0.9/src/comfy_env/env/detection.py +176 -0
  22. comfy_env-0.0.9/src/comfy_env/env/manager.py +673 -0
  23. comfy_env-0.0.9/src/comfy_env/env/platform/__init__.py +21 -0
  24. comfy_env-0.0.9/src/comfy_env/env/platform/base.py +96 -0
  25. comfy_env-0.0.9/src/comfy_env/env/platform/darwin.py +53 -0
  26. comfy_env-0.0.9/src/comfy_env/env/platform/linux.py +68 -0
  27. comfy_env-0.0.9/src/comfy_env/env/platform/windows.py +377 -0
  28. comfy_env-0.0.9/src/comfy_env/env/security.py +267 -0
  29. comfy_env-0.0.9/src/comfy_env/errors.py +325 -0
  30. comfy_env-0.0.9/src/comfy_env/install.py +539 -0
  31. comfy_env-0.0.9/src/comfy_env/ipc/__init__.py +55 -0
  32. comfy_env-0.0.9/src/comfy_env/ipc/bridge.py +512 -0
  33. comfy_env-0.0.9/src/comfy_env/ipc/protocol.py +265 -0
  34. comfy_env-0.0.9/src/comfy_env/ipc/tensor.py +371 -0
  35. comfy_env-0.0.9/src/comfy_env/ipc/torch_bridge.py +401 -0
  36. comfy_env-0.0.9/src/comfy_env/ipc/transport.py +318 -0
  37. comfy_env-0.0.9/src/comfy_env/ipc/worker.py +221 -0
  38. comfy_env-0.0.9/src/comfy_env/registry.py +252 -0
  39. comfy_env-0.0.9/src/comfy_env/resolver.py +399 -0
  40. comfy_env-0.0.9/src/comfy_env/runner.py +273 -0
  41. comfy_env-0.0.9/src/comfy_env/stubs/__init__.py +1 -0
  42. comfy_env-0.0.9/src/comfy_env/stubs/folder_paths.py +57 -0
  43. comfy_env-0.0.9/src/comfy_env/workers/__init__.py +49 -0
  44. comfy_env-0.0.9/src/comfy_env/workers/base.py +82 -0
  45. comfy_env-0.0.9/src/comfy_env/workers/pool.py +241 -0
  46. comfy_env-0.0.9/src/comfy_env/workers/tensor_utils.py +188 -0
  47. comfy_env-0.0.9/src/comfy_env/workers/torch_mp.py +375 -0
  48. comfy_env-0.0.9/src/comfy_env/workers/venv.py +903 -0
@@ -0,0 +1,28 @@
+ name: Publish to PyPI
+
+ on:
+   release:
+     types: [published]
+
+ jobs:
+   publish:
+     runs-on: ubuntu-latest
+     environment: pypi
+     permissions:
+       id-token: write  # Required for trusted publishing
+
+     steps:
+       - uses: actions/checkout@v4
+
+       - uses: actions/setup-python@v5
+         with:
+           python-version: "3.11"
+
+       - name: Install build tools
+         run: pip install build
+
+       - name: Build package
+         run: python -m build
+
+       - name: Publish to PyPI
+         uses: pypa/gh-action-pypi-publish@release/v1
@@ -0,0 +1,29 @@
+ # Python
+ __pycache__/
+ *.py[cod]
+ *.so
+ *.egg
+ *.egg-info/
+ dist/
+ build/
+ .eggs/
+
+ # Virtual environments
+ .venv/
+ venv/
+ ENV/
+
+ # IDE
+ .idea/
+ .vscode/
+ *.swp
+ *.swo
+
+ # Testing
+ .pytest_cache/
+ .coverage
+ htmlcov/
+
+ # OS
+ .DS_Store
+ Thumbs.db
@@ -0,0 +1,131 @@
+ # CLAUDE.md
+
+ This file provides guidance to Claude Code when working with this repository.
+
+ ## Project Overview
+
+ **comfy-env** is a Python library for ComfyUI custom nodes that provides:
+
+ 1. **CUDA Wheel Resolution** - Deterministic wheel URL construction for CUDA packages (nvdiffrast, pytorch3d, etc.)
+ 2. **In-Place Installation** - Install CUDA wheels into the current environment without compilation
+ 3. **Process Isolation** - Run nodes in separate venvs with different dependencies
+
+ ## Architecture
+
+ ### Type 1 Nodes (Isolated Venv)
+ Nodes that need a separate venv due to conflicting dependencies:
+ ```
+ ComfyUI Main Process               Isolated Subprocess
+ ┌────────────────────────┐        ┌────────────────────────┐
+ │ @isolated decorator    │        │ runner.py entrypoint   │
+ │ intercepts FUNCTION    │  UDS/  │                        │
+ │ method calls           │  stdin │ Imports node module    │
+ │                        │  ──►   │ (decorator is no-op)   │
+ │ Tensor IPC via shared  │        │                        │
+ │ memory / CUDA IPC      │ ◄───── │ Returns result         │
+ └────────────────────────┘        └────────────────────────┘
+ ```
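As a rough illustration of the interception described above, a hypothetical, minimal version of the decorator might look like the sketch below. The real implementation lives in `src/comfy_env/decorator.py`; `isolated_sketch` and `_get_worker` are made-up names, and the worker call simply mirrors the documented `worker.call(fn, **kwargs)` API.

```python
# Hypothetical sketch of how an @isolated-style decorator could reroute the
# node's FUNCTION method through a worker; names and wiring are illustrative only.
import functools

from comfy_env import TorchMPWorker  # documented worker class, used here purely for illustration

_workers: dict[str, TorchMPWorker] = {}

def _get_worker(env: str) -> TorchMPWorker:
    # Assumption: one long-lived worker per environment name.
    if env not in _workers:
        _workers[env] = TorchMPWorker()
    return _workers[env]

def isolated_sketch(env: str):
    """Hypothetical stand-in for @isolated: reroute the FUNCTION method through a worker."""
    def wrap(cls):
        fn_name = cls.FUNCTION              # e.g. "process"
        original = getattr(cls, fn_name)

        @functools.wraps(original)
        def proxy(self, **kwargs):
            worker = _get_worker(env)
            bound = original.__get__(self)  # bind to the instance before shipping the call
            return worker.call(bound, **kwargs)

        setattr(cls, fn_name, proxy)
        return cls
    return wrap
```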
+
+ ### Type 2 Nodes (In-Place)
+ Nodes that just need CUDA wheels resolved:
+ ```
+ comfy-env.toml
+
+
+ ┌──────────────────────────────────────────┐
+ │ WheelResolver                            │
+ │  - Detects CUDA/PyTorch/Python versions  │
+ │  - Constructs exact wheel URLs           │
+ │  - pip install --no-deps                 │
+ └──────────────────────────────────────────┘
+ ```
+
+ ## Key Components
+
+ | File | Purpose |
+ |------|---------|
+ | `src/comfy_env/install.py` | `install()` function for both modes |
+ | `src/comfy_env/resolver.py` | Wheel URL resolution with template expansion |
+ | `src/comfy_env/cli.py` | `comfy-env` CLI commands |
+ | `src/comfy_env/decorator.py` | `@isolated` decorator for process isolation |
+ | `src/comfy_env/workers/` | Worker classes (TorchMPWorker, VenvWorker) |
+ | `src/comfy_env/env/manager.py` | venv creation with `uv` |
+ | `src/comfy_env/env/config_file.py` | TOML config parsing |
+
+ ## Development Commands
+
+ ```bash
+ # Install in development mode
+ cd /home/shadeform/comfy-env
+ pip install -e .
+
+ # Run CLI
+ comfy-env info
+ comfy-env install --dry-run
+ comfy-env resolve nvdiffrast==0.4.0
+ ```
+
+ ## Usage Patterns
+
+ ### In-Place Installation (Type 2)
+ ```python
+ from comfy_env import install
+
+ # Auto-discover config and install
+ install()
+
+ # Dry run
+ install(dry_run=True)
+ ```
+
+ ### Process Isolation (Type 1)
+ ```python
+ from comfy_env import isolated
+
+ @isolated(env="myenv")
+ class MyGPUNode:
+     FUNCTION = "process"
+     RETURN_TYPES = ("IMAGE",)
+
+     def process(self, image):
+         # Runs in an isolated subprocess
+         import heavy_dependency
+         result = heavy_dependency.run(image)  # placeholder for the node's real work
+         return (result,)
+ ```
+
+ ### Direct Worker Usage
+ ```python
+ from comfy_env import TorchMPWorker
+
+ worker = TorchMPWorker()
+ result = worker.call(my_function, image=tensor)
+ ```
+
+ ## Config File Format
+
+ ```toml
+ [env]
+ name = "my-node"
+ python = "3.10"
+ cuda = "auto"
+
+ [packages]
+ requirements = ["transformers>=4.56"]
+ no_deps = ["nvdiffrast==0.4.0"]
+
+ [sources]
+ wheel_sources = ["https://github.com/.../releases/download/"]
+ ```
+
+ ## Key Design Decisions
+
+ 1. **Deterministic Resolution**: Wheel URLs are constructed, not solved. If a URL 404s, fail fast with a clear message.
+
+ 2. **No Compilation on User Machines**: If the wheel doesn't exist, fail with an actionable error showing which combinations are available.
+
+ 3. **Template Variables**: `{cuda_short}`, `{torch_mm}`, `{py_short}`, `{platform}` for URL construction.
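As an illustration of how such a template might expand (the template string and the detected values below are examples, not the package's actual configuration):

```python
# Hypothetical illustration of template expansion; values are examples only.
template = (
    "https://github.com/user/nvdiffrast/releases/download/v0.4.0/"
    "nvdiffrast-0.4.0+{cuda_short}.{torch_mm}-{py_short}-{py_short}-{platform}.whl"
)

detected = {
    "cuda_short": "cu121",       # from the local CUDA runtime
    "torch_mm": "torch2.4",      # PyTorch major.minor
    "py_short": "cp311",         # CPython ABI tag
    "platform": "linux_x86_64",  # wheel platform tag
}

url = template.format(**detected)
print(url)
# -> .../nvdiffrast-0.4.0+cu121.torch2.4-cp311-cp311-linux_x86_64.whl
```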
+
+ ## Related Projects
+
+ - **pyisolate** - ComfyUI's official security-focused isolation
+ - **comfy-cli** - High-level ComfyUI management
@@ -0,0 +1,316 @@
+ # Critical Analysis: comfyui-isolation Architecture
+
+ *Analysis from the perspective of a senior systems engineer with GPU computing expertise*
+
+ **Date**: January 2025
+ **Compared Against**: pyisolate (ComfyUI's official isolation library)
+
+ ---
+
+ ## Executive Summary
+
+ The current implementation is a **pragmatic MVP** that solves the immediate problem (dependency isolation) but has fundamental architectural decisions that will become technical debt at scale. It prioritizes simplicity over performance, which is acceptable for prototyping but not for production GPU workloads.
+
+ **Overall Grade: B-** — Works, but with significant overhead and several design choices that need iteration.
+
+ ---
+
+ ## 1. IPC Mechanism: JSON over stdin/stdout
+
+ ### The Problem
+
+ ```python
+ # protocol.py - Every tensor transfer does this:
+ arr = obj.cpu().numpy()   # GPU → CPU copy
+ pickle.dumps(arr)         # Serialize to bytes
+ base64.b64encode(...)     # Encode to string (+33% size)
+ json.dumps(...)           # Wrap in JSON
+ ```
+
+ ### Critique
+
+ - **4x memory overhead minimum** for tensor data (original + numpy + pickle + base64)
+ - **2 copies minimum** per tensor (GPU→CPU, then pickle serialization)
+ - For a 1024×1024 RGBA image: ~16MB becomes ~21MB on the wire, with ~64MB peak memory during serialization
+ - JSON parsing is CPU-bound and single-threaded
+
+ ### What pyisolate Does Better
+
+ pyisolate uses **CUDA IPC handles** for zero-copy GPU tensor sharing:
+
+ ```python
+ # pyisolate/tensor_serializer.py
+ def _serialize_cuda_tensor(t: torch.Tensor) -> dict[str, Any]:
+     func, args = reductions.reduce_tensor(t)
+     # args[7] is the CUDA IPC handle - NO DATA COPY
+     return {
+         "__type__": "TensorRef",
+         "device": "cuda",
+         "handle": base64.b64encode(args[7]).decode('ascii'),
+         # ... metadata only, no tensor data
+     }
+ ```
+
+ **Performance difference for a 1GB tensor:**
+ - comfyui-isolation: ~500ms, 3.3GB memory
+ - pyisolate: ~0ms, ~0 extra memory
+
+ ### Recommended Fix
+
+ ```python
+ # Priority 1: Adopt CUDA IPC for tensors
+ import base64
+ import torch
+ import torch.multiprocessing.reductions as reductions
+
+ def serialize_tensor(t: torch.Tensor) -> dict:
+     if t.is_cuda:
+         func, args = reductions.reduce_tensor(t)
+         return {"__type__": "TensorRef", "device": "cuda",
+                 "handle": base64.b64encode(args[7]), ...}
+     else:
+         t.share_memory_()
+         # Use file_system strategy for CPU tensors
+         ...
+ ```
+
+ ---
+
+ ## 2. The fd-level Redirection Hack
+
+ ### Current Code
+
+ ```python
+ # runner.py lines 178-183
+ stdout_fd = original_stdout.fileno()
+ stderr_fd = sys.stderr.fileno()
+ stdout_fd_copy = os.dup(stdout_fd)
+ os.dup2(stderr_fd, stdout_fd)
+ ```
+
+ ### Critique
+
+ This is a **code smell** indicating a deeper architectural problem. We're fighting the IPC design rather than fixing it.
+
+ **Why This Exists:**
+ - JSON-RPC uses stdout for responses
+ - C libraries (pymeshfix) print to fd 1 directly
+ - Can't distinguish "library noise" from "JSON response"
+
+ ### The Real Fix
+
+ Don't use stdout for data. Use a **dedicated communication channel**:
+
+ ```python
+ # Option 1: Unix Domain Socket
+ sock_path = f"/tmp/comfyui-isolation-{pid}.sock"
+ server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
+ server.bind(sock_path)
+ # Pass sock_path to subprocess, both stdout/stderr go to logging
+
+ # Option 2: socketpair at spawn time
+ parent_sock, child_sock = socket.socketpair()
+ subprocess.Popen(..., pass_fds=[child_sock.fileno()])
+ ```
+
+ ### Why This Matters
+
+ - Current hack is **fragile** — any library could still break it with buffered writes
+ - **Not portable** — `select()` on stdin doesn't work the same on Windows
+ - **Race conditions** — fd manipulation during execution is not thread-safe
+
+ ---
+
+ ## 3. Process Lifecycle Management
+
+ ### Current Design
+
+ - One subprocess per `(env_name, node_package_dir)` tuple
+ - Process kept alive, reused for multiple calls
+ - Killed on timeout or error
+
+ ### Issues
+
+ **A. No GPU Memory Management:**
+ ```python
+ # After node execution, GPU memory is NOT freed
+ # The subprocess stays alive, holding VRAM
+ # The next node in the workflow inherits fragmented GPU state
+ ```
+
+ **B. No Graceful Degradation:**
+ ```python
+ # If the subprocess dies, you lose ALL state
+ # No checkpointing, no recovery
+ ```
+
+ **C. Single-Process Bottleneck:**
+ ```python
+ # One subprocess per env = sequential execution
+ # Can't parallelize across nodes even if they're independent
+ ```
+
+ ### Recommendations
+
+ - **Process pool** per environment with configurable size
+ - **Explicit VRAM management** — option to kill the subprocess after each call
+ - **Health checks** — periodic GPU memory queries, automatic restart if fragmented (see the sketch below)
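A minimal sketch of what the last two recommendations could look like; `RecyclingWorker`, `make_worker`, `query_vram_bytes`, and the worker's `close()` method are illustrative assumptions, not existing comfy-env or pyisolate APIs:

```python
# Illustrative sketch only: restart an isolated worker when its reported VRAM
# usage grows too large, or after every call if fresh_process=True.
from typing import Any, Callable

class RecyclingWorker:
    def __init__(self,
                 make_worker: Callable[[], Any],
                 query_vram_bytes: Callable[[Any], int],
                 vram_limit_bytes: int = 8 * 1024**3,
                 fresh_process: bool = False) -> None:
        self._make_worker = make_worker
        self._query_vram_bytes = query_vram_bytes
        self._vram_limit = vram_limit_bytes
        self._fresh_process = fresh_process
        self._worker = make_worker()

    def call(self, fn, **kwargs):
        result = self._worker.call(fn, **kwargs)
        if self._fresh_process or self._query_vram_bytes(self._worker) > self._vram_limit:
            # Kill the subprocess so its VRAM is returned to the driver, then respawn.
            self._worker.close()
            self._worker = self._make_worker()
        return result
```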
+
+ ---
+
+ ## 4. Serialization Protocol Design
+
+ ### Current Issues
+
+ **A. Type Detection by Shape Heuristics:**
+ ```python
+ # protocol.py lines 105-110
+ if len(shape) == 4 and shape[-1] in (3, 4):
+     obj_type = "comfyui_image"
+ elif len(shape) in (2, 3) and arr.dtype in ('float32', 'float64'):
+     if arr.min() >= 0 and arr.max() <= 1:
+         obj_type = "comfyui_mask"
+ ```
+
+ This is **brittle**. A 4D tensor that happens to have shape `(1, 100, 100, 3)` will be misidentified as an image even if it's not.
+
+ **B. Pickle Security:**
+ Using pickle for arbitrary objects is a **security risk**. Malicious pickle payloads can execute arbitrary code.
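A minimal, self-contained demonstration of the risk (the payload here merely runs a harmless `echo`, but it could be any command):

```python
import os
import pickle

class Evil:
    # __reduce__ tells pickle how to reconstruct the object; an attacker can
    # make it "reconstruct" by calling an arbitrary function instead.
    def __reduce__(self):
        return (os.system, ("echo code ran during unpickling",))

payload = pickle.dumps(Evil())
pickle.loads(payload)  # the shell command executes here, during deserialization
```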
+
+ **C. SimpleNamespace Fallback:**
+ ```python
+ # Objects become SimpleNamespace after round-trip
+ ns = SimpleNamespace(**data)
+ ns._class_name = obj.get("_class", "unknown")
+ ```
+
+ This **loses type identity**. Method calls on reconstructed objects will fail.
+
+ ### What pyisolate Does Better
+
+ - Explicit type registry with custom serializers (see the sketch after this list)
+ - Attempts to reconstruct original classes
+ - Separate serializers for known ComfyUI types
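For reference, an explicit registry in our protocol layer could be as small as the following sketch (hypothetical code, not pyisolate's or comfy-env's implementation):

```python
# Hypothetical type registry: explicit, per-type serializers instead of shape guessing.
from typing import Any, Callable, Dict, Tuple

class TypeRegistry:
    def __init__(self) -> None:
        # tag -> (type, to_wire, from_wire)
        self._by_tag: Dict[str, Tuple[type, Callable[[Any], dict], Callable[[dict], Any]]] = {}

    def register(self, tag: str, typ: type,
                 to_wire: Callable[[Any], dict],
                 from_wire: Callable[[dict], Any]) -> None:
        self._by_tag[tag] = (typ, to_wire, from_wire)

    def serialize(self, obj: Any) -> dict:
        for tag, (typ, to_wire, _) in self._by_tag.items():
            if isinstance(obj, typ):
                return {"__type__": tag, **to_wire(obj)}
        raise TypeError(f"No serializer registered for {type(obj).__name__}")

    def deserialize(self, msg: dict) -> Any:
        _, _, from_wire = self._by_tag[msg["__type__"]]
        return from_wire(msg)

# Example registration for a hypothetical "comfyui_image" wrapper type:
# registry.register("comfyui_image", ImageTensor, image_to_wire, image_from_wire)
```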
+
+ ---
+
+ ## 5. Comparison: comfyui-isolation vs pyisolate
+
+ | Aspect | comfyui-isolation | pyisolate |
+ |--------|-------------------|-----------|
+ | **Process Model** | `subprocess.Popen` | `multiprocessing.Process` (spawn) |
+ | **IPC Channel** | stdin/stdout (JSON) | `multiprocessing.Queue` OR Unix Domain Sockets |
+ | **Tensor Transfer** | CPU copy → pickle → base64 | `share_memory_()` + CUDA IPC handles |
+ | **Serialization** | Custom JSON protocol | Pickle (Queue) OR JSON (Sandbox mode) |
+ | **Security** | JSON-only (no pickle RCE) | Pickle + bwrap sandbox option |
+ | **API** | Simple decorator | Async + inheritance |
+
+ ### What We Do Better
+
+ 1. **Simpler User API** - Just add `@isolated` decorator
+ 2. **Complete Process Isolation** - subprocess.Popen means truly separate processes
+ 3. **JSON-only Security** - No pickle deserialization RCE risk by default
+ 4. **Config File Discovery** - Auto-discovers isolation config files
+
+ ### What pyisolate Does Better
+
+ 1. **Zero-Copy Tensor Sharing** - CUDA IPC handles, no data copy
+ 2. **Transport Abstraction** - Pluggable transports (Queue, UDS, JSON Socket)
+ 3. **TensorKeeper Pattern** - Prevents GC race conditions on shared tensors
+ 4. **Sandboxing** - bwrap integration for untrusted code
+
+ ---
+
+ ## 6. Recommended Improvements (Prioritized)
+
+ | Priority | Change | Effort | Impact |
+ |----------|--------|--------|--------|
+ | **P0** | Adopt CUDA IPC for tensor serialization | High | 100x+ faster for large tensors |
+ | **P0** | Replace stdout with Unix Domain Socket | Medium | Eliminates fd hack, cleaner design |
+ | **P1** | Add transport abstraction layer | Medium | Flexibility for future transports |
+ | **P1** | Explicit type registry for serialization | Low | Eliminates shape-guessing bugs |
+ | **P2** | Add `fresh_process=True` option | Low | Guaranteed VRAM cleanup when needed |
+ | **P2** | Process pool with GPU affinity | High | Multi-GPU support, parallelism |
+ | **P3** | Optional bwrap sandboxing | High | Security for untrusted extensions |
+
+ ---
+
+ ## 7. Implementation Notes
+
+ ### Adopting pyisolate's Tensor Serializer
+
+ The key file is `pyisolate/_internal/tensor_serializer.py`. Key patterns:
+
+ ```python
+ # TensorKeeper - prevents GC race condition
+ class TensorKeeper:
+     def keep(self, t: torch.Tensor) -> None:
+         # Hold reference for 30s to ensure receiver can open shared memory
+         self._keeper.append((time.time(), t))
+
+ # CPU tensor via file_system shared memory
+ def _serialize_cpu_tensor(t: torch.Tensor) -> dict:
+     _tensor_keeper.keep(t)
+     if not t.is_shared():
+         t.share_memory_()
+     storage = t.untyped_storage()
+     sfunc, sargs = reductions.reduce_storage(storage)
+     return {
+         "__type__": "TensorRef",
+         "strategy": "file_system",
+         "manager_path": sargs[1].decode('utf-8'),
+         "storage_key": sargs[2].decode('utf-8'),
+         ...
+     }
+
+ # CUDA tensor via IPC handle
+ def _serialize_cuda_tensor(t: torch.Tensor) -> dict:
+     _tensor_keeper.keep(t)
+     func, args = reductions.reduce_tensor(t)
+     return {
+         "__type__": "TensorRef",
+         "device": "cuda",
+         "handle": base64.b64encode(args[7]).decode('ascii'),
+         ...
+     }
+ ```
+
+ ### Unix Domain Socket Transport
+
+ ```python
+ import json
+ import socket
+ import struct
+ import threading
+ from typing import Any
+
+ class UDSTransport:
+     def __init__(self, sock: socket.socket):
+         self._sock = sock
+         self._lock = threading.Lock()
+
+     def send(self, obj: Any) -> None:
+         data = json.dumps(obj).encode('utf-8')
+         msg = struct.pack('>I', len(data)) + data  # Length-prefixed
+         with self._lock:
+             self._sock.sendall(msg)
+
+     def recv(self) -> Any:
+         raw_len = self._recvall(4)
+         msg_len = struct.unpack('>I', raw_len)[0]
+         data = self._recvall(msg_len)
+         return json.loads(data.decode('utf-8'))
+
+     def _recvall(self, n: int) -> bytes:
+         # Read exactly n bytes, looping until the socket delivers them all
+         buf = b""
+         while len(buf) < n:
+             chunk = self._sock.recv(n - len(buf))
+             if not chunk:
+                 raise ConnectionError("socket closed while reading")
+             buf += chunk
+         return buf
+ ```
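For completeness, a sketch of wiring this transport to the `socketpair` option from section 2; the worker module name and the `--sock-fd` flag are hypothetical, not part of comfy-env:

```python
# Illustrative wiring only: the parent creates a socketpair, passes one end to
# the subprocess by fd number, and wraps its own end in UDSTransport.
import socket
import subprocess
import sys

parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

proc = subprocess.Popen(
    [sys.executable, "-m", "comfy_env_worker", "--sock-fd", str(child_sock.fileno())],
    pass_fds=[child_sock.fileno()],   # keep the fd open across exec
    stdout=subprocess.DEVNULL,        # stdout no longer carries protocol data
    stderr=subprocess.PIPE,
)
child_sock.close()                    # the parent only needs its own end

transport = UDSTransport(parent_sock)
transport.send({"method": "ping"})
# reply = transport.recv()           # would block until the worker responds
```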
+
+ ---
+
+ ## 8. References
+
+ - **pyisolate source**: `/home/shadeform/pyisolate/`
+ - **pyisolate PR #3**: Production-ready update with RPC, sandboxing, tensor serialization
+ - **torch.multiprocessing.reductions**: PyTorch's IPC serialization primitives
+ - **CUDA IPC**: `cudaIpcGetMemHandle` / `cudaIpcOpenMemHandle`
+
+ ---
+
+ ## Conclusion
+
+ The core insight: **pyisolate solves the hard problems (tensor IPC, sandboxing) but has UX issues. comfyui-isolation has good UX but needs to adopt its tensor handling.**
+
+ Don't rewrite from scratch. Instead:
+ 1. Steal the tensor serializer from pyisolate
+ 2. Add Unix Domain Socket transport
+ 3. Keep the decorator API
+ 4. Consider optional sandbox mode for security
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 Andrea Pozzetti
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.