shared-tensor 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,181 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work.
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control
+ systems, and issue tracking systems that are managed by, or on behalf of,
+ the Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright notice to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ Copyright 2024 Athena Team
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
@@ -0,0 +1,30 @@
+ # Include the README and LICENSE files
+ include README.md
+ include LICENSE
+
+ # Include all files in the shared_tensor package
+ recursive-include shared_tensor *.py
+ recursive-include shared_tensor *.so *.dll *.dylib
+
+ # Include scripts
+ include scripts/start_server.sh
+
+ # Exclude test files and examples from the distribution
+ exclude tests/*
+ exclude examples/*
+ recursive-exclude tests *
+ recursive-exclude examples *
+
+ # Exclude development files
+ exclude .gitignore
+ exclude .git/*
+ exclude __pycache__/*
+ recursive-exclude * __pycache__
+ recursive-exclude * *.pyc
+ recursive-exclude * *.pyo
+ recursive-exclude * .DS_Store
+
+ # Exclude build artifacts
+ exclude build/*
+ exclude dist/*
+ exclude *.egg-info/*
@@ -0,0 +1,420 @@
+ Metadata-Version: 2.4
+ Name: shared-tensor
+ Version: 0.1.0
+ Summary: A library for sharing GPU memory objects across processes using IPC mechanisms
+ Author-email: Athena Team <contact@world-sim-dev.org>
+ Maintainer-email: Athena Team <contact@world-sim-dev.org>
+ License-Expression: Apache-2.0
+ Project-URL: Homepage, https://github.com/world-sim-dev/shared-tensor
+ Project-URL: Repository, https://github.com/world-sim-dev/shared-tensor
+ Project-URL: Documentation, https://github.com/world-sim-dev/shared-tensor/wiki
+ Project-URL: Bug Reports, https://github.com/world-sim-dev/shared-tensor/issues
+ Project-URL: Changelog, https://github.com/world-sim-dev/shared-tensor/releases
+ Keywords: gpu,memory,sharing,ipc,inter-process-communication,pytorch,tensorflow,cuda,model-serving,inference,distributed-computing
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Intended Audience :: Developers
+ Classifier: Intended Audience :: Science/Research
+ Classifier: Operating System :: POSIX :: Linux
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.8
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Classifier: Topic :: System :: Hardware :: Symmetric Multi-processing
+ Requires-Python: >=3.8
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: torch>=1.12.0
+ Requires-Dist: numpy>=1.20.0
+ Requires-Dist: requests>=2.25.0
+ Provides-Extra: dev
+ Requires-Dist: pytest>=6.0; extra == "dev"
+ Requires-Dist: pytest-cov>=2.0; extra == "dev"
+ Requires-Dist: black>=22.0; extra == "dev"
+ Requires-Dist: flake8>=4.0; extra == "dev"
+ Requires-Dist: isort>=5.0; extra == "dev"
+ Requires-Dist: mypy>=0.950; extra == "dev"
+ Requires-Dist: pre-commit>=2.0.0; extra == "dev"
+ Requires-Dist: build>=0.8.0; extra == "dev"
+ Requires-Dist: twine>=4.0.0; extra == "dev"
+ Provides-Extra: test
+ Requires-Dist: pytest>=6.0; extra == "test"
+ Requires-Dist: pytest-cov>=2.0; extra == "test"
+ Requires-Dist: pytest-benchmark>=3.0; extra == "test"
+ Requires-Dist: pytest-asyncio>=0.20.0; extra == "test"
+ Provides-Extra: docs
+ Requires-Dist: sphinx>=4.0.0; extra == "docs"
+ Requires-Dist: sphinx-rtd-theme>=1.0.0; extra == "docs"
+ Requires-Dist: myst-parser>=0.18.0; extra == "docs"
+ Dynamic: license-file
+
+ # Shared Tensor
+
+ [![Python Version](https://img.shields.io/badge/python-3.8%2B-blue.svg)](https://python.org)
+ [![PyTorch](https://img.shields.io/badge/PyTorch-1.12%2B-orange.svg)](https://pytorch.org)
+ [![License](https://img.shields.io/badge/license-Apache%202.0-green.svg)](LICENSE)
+
+ A high-performance library for sharing GPU memory objects across processes over IPC with the JSON-RPC 2.0 protocol, enabling an architecture that separates models from the inference engine.
+
+ ## 🚀 Project Overview
+
+ Shared Tensor is a cross-process communication library designed for deep learning and AI applications. It uses IPC mechanisms and the JSON-RPC protocol to provide:
+
+ - **Efficient GPU Memory Sharing**: Cross-process sharing of PyTorch tensors and models
+ - **Remote Function Execution**: Easy remote function calls through decorators
+ - **Async/Sync Support**: Flexible execution modes for different scenarios
+ - **Model Serving**: Deploy machine learning models as independent services
+ - **Distributed Inference**: Support for distributed computing in multi-GPU environments
+
+ ## 📋 Core Features
+
+ ### 🔄 Cross-Process Communication
+ - **JSON-RPC 2.0 Protocol**: Standardized remote procedure calls
+ - **HTTP Transport**: Reliable HTTP-based communication mechanism
+ - **Serialization Optimization**: Efficient PyTorch object serialization/deserialization
+
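Since JSON-RPC 2.0 is a public specification, the envelope that travels over the HTTP transport can be sketched with the standard library alone. The `add_numbers` method name and the flat-dict parameter encoding below are illustrative assumptions, not the library's actual schema:

```python
import json

def make_request(method, params, request_id=1):
    # Build a JSON-RPC 2.0 request envelope per the public spec.
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

def parse_response(raw):
    # Return the result of a JSON-RPC 2.0 response, or raise on an error object.
    msg = json.loads(raw)
    if "error" in msg:
        raise RuntimeError(f"JSON-RPC error {msg['error']['code']}: {msg['error']['message']}")
    return msg["result"]

# Hypothetical round trip: the server would answer a request like this
req = make_request("add_numbers", {"a": 2, "b": 3})
resp = json.dumps({"jsonrpc": "2.0", "result": 5, "id": 1})
print(parse_response(resp))  # 5
```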
+ ### 🎯 Function Sharing
+ - **Decorator Pattern**: Easy function sharing using `@provider.share`
+ - **Auto Discovery**: Smart function path resolution and import
+ - **Parameter Passing**: Support for complex data type parameters
+
+ ### ⚡ Async Support
+ - **Async Execution**: `AsyncSharedTensorProvider` supports non-blocking calls
+ - **Task Management**: Complete async task status tracking
+ - **Concurrent Processing**: Efficient concurrent request handling
+
+ ### 🖥️ GPU Compatibility
+ - **CUDA Support**: Native CUDA tensor sharing support
+ - **Device Management**: Smart data migration between devices
+ - **Memory Optimization**: Efficient GPU memory usage
+
+ ## 🛠️ Installation Guide
+
+ ### Requirements
+
+ - **Python**: 3.8+
+ - **Operating System**: Linux (recommended)
+ - **PyTorch**: 1.12.0+
+ - **CUDA**: Optional, for GPU support
+
+ ### Installation Methods
+
+ #### Install from Source
+
+ ```bash
+ # Clone the repository
+ git clone https://github.com/world-sim-dev/shared-tensor.git
+ cd shared-tensor
+
+ # Install dependencies
+ pip install -r requirements.txt
+
+ # Install the package
+ pip install -e .
+ ```
+
+ #### Development Installation
+
+ ```bash
+ # Install with development dependencies
+ pip install -e ".[dev]"
+
+ # Install with test dependencies
+ pip install -e ".[test]"
+ ```
+
+ ### Verify Installation
+
+ ```bash
+ # Check core functionality
+ python -c "import shared_tensor; print('✓ Shared Tensor installed successfully')"
+ ```
+
+ ## 🎯 Quick Start
+
+ ### 1. Basic Function Sharing
+
+ ```python
+ from shared_tensor.async_provider import AsyncSharedTensorProvider
+
+ # Create provider
+ provider = AsyncSharedTensorProvider()
+
+ # Share a simple function
+ @provider.share()
+ def add_numbers(a, b):
+     return a + b
+
+ # Share a PyTorch function
+ @provider.share()
+ def create_tensor(shape):
+     import torch
+     return torch.zeros(shape)
+
+ # Load a PyTorch model
+ @provider.share()
+ def load_model():
+     ...
+ ```
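The `@provider.share()` decorator registers the decorated function under a resolvable path so a server process can import and execute it later. The registration pattern itself can be sketched generically; the `RegistryProvider` class below is a toy stand-in with illustrative names, not the library's actual internals:

```python
import functools

class RegistryProvider:
    """Toy share-style provider: records each function under a
    module-qualified name so a server could import it by path."""

    def __init__(self):
        self._registered_functions = {}

    def share(self, name=None):
        def decorator(func):
            key = name or f"{func.__module__}.{func.__qualname__}"
            self._registered_functions[key] = func

            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                # A real provider would proxy this call to the server;
                # the toy version simply runs the function locally.
                return func(*args, **kwargs)
            return wrapper
        return decorator

provider = RegistryProvider()

@provider.share(name="add_numbers")
def add_numbers(a, b):
    return a + b

print(add_numbers(2, 3))                      # 5
print(list(provider._registered_functions))   # ['add_numbers']
```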
+
+ ### 2. Start Server
+
+ ```bash
+ # Method 1: use the command-line tool (single server)
+ shared-tensor-server
+
+ # Method 2: use torchrun
+ torchrun --nproc_per_node=4 --no-python shared-tensor-server
+
+ # Method 3: custom configuration
+ python shared_tensor/server.py
+ ```
+
+ ## 📖 Detailed Usage
+
+ ### Model Sharing Example
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ from shared_tensor.async_provider import AsyncSharedTensorProvider
+
+ # Create provider
+ provider = AsyncSharedTensorProvider()
+
+ # Define model
+ class SimpleNet(nn.Module):
+     def __init__(self, input_size, hidden_size, output_size):
+         super().__init__()
+         self.fc1 = nn.Linear(input_size, hidden_size)
+         self.relu = nn.ReLU()
+         self.fc2 = nn.Linear(hidden_size, output_size)
+
+     def forward(self, x):
+         x = self.fc1(x)
+         x = self.relu(x)
+         x = self.fc2(x)
+         return x
+
+ # Share model creation function
+ @provider.share(name="create_model")
+ def create_model(input_size=784, hidden_size=128, output_size=10):
+     model = SimpleNet(input_size, hidden_size, output_size)
+     return model
+
+ # Run inference on the shared model
+ model = create_model()
+ input_data = torch.randn(1, 784)
+ with torch.no_grad():
+     output = model(input_data)
+ ```
+
+ ## 🔧 Configuration Options
+
+ ### Server Configuration
+
+ ```python
+ from shared_tensor.server import SharedTensorServer
+
+ server = SharedTensorServer(
+     host="0.0.0.0",      # Listen address
+     port=2537,           # Port number
+     timeout=30,          # Request timeout (seconds)
+     max_workers=4,       # Maximum worker threads
+     enable_cache=True,   # Enable result caching
+     debug=False          # Debug mode
+ )
+ ```
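The server's core job — accept an HTTP POST, dispatch to a registered function, and return a JSON-RPC response — can be illustrated with a standard-library-only toy. The `FUNCTIONS` table and the dispatch details here are assumptions for the sketch, not the library's implementation:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Hypothetical function table the server dispatches to
FUNCTIONS = {"add_numbers": lambda a, b: a + b}

class JsonRpcHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON-RPC request body
        length = int(self.headers["Content-Length"])
        req = json.loads(self.rfile.read(length))
        # Dispatch to the registered function and wrap the result
        result = FUNCTIONS[req["method"]](**req["params"])
        body = json.dumps({"jsonrpc": "2.0", "result": result,
                           "id": req["id"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep example output quiet

# Serve on an ephemeral localhost port in a background thread
server = ThreadingHTTPServer(("127.0.0.1", 0), JsonRpcHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/jsonrpc"
payload = json.dumps({"jsonrpc": "2.0", "method": "add_numbers",
                      "params": {"a": 2, "b": 3}, "id": 1}).encode()
reply = json.loads(urllib.request.urlopen(
    urllib.request.Request(url, data=payload,
                           headers={"Content-Type": "application/json"})).read())
print(reply["result"])  # 5
server.shutdown()
```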
+
+ ## 🧪 Testing
+
+ ### Run Test Suite
+
+ ```bash
+ # Run all tests
+ python tests/run_tests.py
+
+ # Run specific category tests
+ python tests/run_tests.py --category unit
+ python tests/run_tests.py --category integration
+ python tests/run_tests.py --category pytorch
+
+ # Run only PyTorch related tests
+ python tests/run_tests.py --torch-only
+
+ # Verbose output
+ python tests/run_tests.py --verbose
+ ```
+
+ ### Test Environment Info
+
+ ```bash
+ # Check test environment
+ python tests/run_tests.py --env-info
+ ```
+
+ ### Individual Test Files
+
+ ```bash
+ # Test tensor serialization
+ python tests/pytorch_tests/test_tensor_serialization.py
+
+ # Test async system
+ python tests/integration/test_async_system.py
+
+ # Test client
+ python tests/integration/test_client.py
+ ```
+
+ ## 🏗️ Architecture Design
+
+ ### Core Components
+
+ ```
+ shared-tensor/
+ ├── shared_tensor/          # Core modules
+ │   ├── server.py           # JSON-RPC server
+ │   ├── client.py           # Sync client
+ │   ├── provider.py         # Sync provider
+ │   ├── async_client.py     # Async client
+ │   ├── async_provider.py   # Async provider
+ │   ├── async_task.py       # Async task management
+ │   ├── jsonrpc.py          # JSON-RPC protocol implementation
+ │   ├── utils.py            # Utility functions
+ │   └── errors.py           # Exception definitions
+ ├── examples/               # Usage examples
+ └── tests/                  # Test suite
+ ```
+
+ ### Communication Flow
+
+ ```mermaid
+ sequenceDiagram
+     participant CA as Client App
+     participant SC as SharedTensorClient
+     participant SS as SharedTensorServer
+     participant FE as Function Executor
+
+     Note over CA, FE: Client-Server Communication Flow
+
+     CA->>SC: call_function("model_inference", args)
+     SC->>SC: Serialize parameters
+     SC->>SS: HTTP POST /jsonrpc<br/>JSON-RPC Request
+
+     Note over SS: Server Processing
+     SS->>SS: Parse JSON-RPC request
+     SS->>SS: Resolve function path
+     SS->>FE: Import & execute function
+     FE->>FE: Deserialize parameters
+     FE->>FE: Execute function logic
+     FE->>SS: Return execution result
+
+     Note over SS: Response Preparation
+     SS->>SS: Serialize result
+     SS->>SS: Create JSON-RPC response
+     SS->>SC: HTTP Response<br/>JSON-RPC Result
+
+     Note over SC: Client Processing
+     SC->>SC: Parse response
+     SC->>SC: Deserialize result
+     SC->>CA: Return final result
+
+     Note over CA, FE: End-to-End Process Complete
+ ```
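The "Serialize parameters" and "Deserialize result" steps in the flow above amount to a round trip between Python objects and a JSON-safe string. A stdlib-only sketch follows; the library presumably uses a PyTorch-aware codec, so treat the pickle-plus-base64 encoding here as an illustrative assumption:

```python
import base64
import pickle

def encode_payload(obj):
    # Serialize an arbitrary Python object to a JSON-safe ASCII string.
    return base64.b64encode(pickle.dumps(obj)).decode("ascii")

def decode_payload(text):
    # Invert encode_payload.
    return pickle.loads(base64.b64decode(text.encode("ascii")))

# Round trip of the kind of arguments a client might send
args = {"shape": [2, 3], "dtype": "float32"}
wire = encode_payload(args)  # safe to embed in a JSON-RPC "params" field
assert decode_payload(wire) == args
```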
+
+ ### Debug Tips
+
+ 1. **Enable verbose logging**:
+    ```python
+    import logging
+    logging.basicConfig(level=logging.DEBUG)
+    ```
+
+ 2. **Use debug mode**:
+    ```python
+    provider = SharedTensorProvider(verbose_debug=True)
+    ```
+
+ 3. **Check function paths**:
+    ```python
+    provider = SharedTensorProvider()
+    print(provider._registered_functions)
+    ```
+
+ ## 🤝 Contributing
+
+ We welcome community contributions! Please follow these steps:
+
+ ### Development Environment Setup
+
+ ```bash
+ # Clone the repository
+ git clone https://github.com/world-sim-dev/shared-tensor.git
+ cd shared-tensor
+
+ # Create a virtual environment
+ python -m venv venv
+ source venv/bin/activate
+
+ # Install development dependencies
+ pip install -e ".[dev]"
+
+ # Install pre-commit hooks
+ pre-commit install
+
+ # Package & publish
+ python -m build
+ python -m twine upload --repository testpypi dist/*
+ python -m twine upload dist/*
+ ```
+
+ ### Code Standards
+
+ ```bash
+ # Code formatting
+ black shared_tensor/ tests/ examples/
+
+ # Import sorting
+ isort shared_tensor/ tests/ examples/
+
+ # Static checking
+ flake8 shared_tensor/
+ mypy shared_tensor/
+ ```
+
+ ### Submission Process
+
+ 1. Fork the project and create a feature branch
+ 2. Write code and tests
+ 3. Run the complete test suite
+ 4. Submit a Pull Request
+
+ ### Test Requirements
+
+ - New features must include tests
+ - Maintain test coverage > 90%
+ - All tests must pass
+
+ ## 📄 License
+
+ This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
+
+ ## 🙏 Acknowledgments
+
+ - [PyTorch](https://pytorch.org/) - Deep learning framework
+ - [JSON-RPC 2.0](https://www.jsonrpc.org/) - Remote procedure call protocol
+
+ ## 📞 Contact Us
+
+ - **Issues**: [GitHub Issues](https://github.com/world-sim-dev/shared-tensor/issues)
+ - **Documentation**: [Shared Tensor Documentation](https://github.com/world-sim-dev/shared-tensor/wiki)
+ - **Source**: [GitHub Repository](https://github.com/world-sim-dev/shared-tensor)
+
+ ---
+
+ **Shared Tensor** - Making GPU memory sharing simple and efficient 🚀