tensorneko 0.3.15__tar.gz → 0.3.17__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (103)
  1. tensorneko-0.3.17/PKG-INFO +696 -0
  2. {tensorneko-0.3.15 → tensorneko-0.3.17}/README.md +4 -1
  3. {tensorneko-0.3.15 → tensorneko-0.3.17}/setup.py +8 -1
  4. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/__init__.py +19 -6
  5. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/util/__init__.py +2 -0
  6. tensorneko-0.3.17/src/tensorneko/util/gc.py +11 -0
  7. tensorneko-0.3.17/src/tensorneko/version.txt +1 -0
  8. tensorneko-0.3.17/src/tensorneko.egg-info/PKG-INFO +696 -0
  9. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko.egg-info/SOURCES.txt +1 -1
  10. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko.egg-info/requires.txt +10 -8
  11. tensorneko-0.3.15/LICENSE +0 -21
  12. tensorneko-0.3.15/PKG-INFO +0 -695
  13. tensorneko-0.3.15/src/tensorneko/version.txt +0 -1
  14. tensorneko-0.3.15/src/tensorneko.egg-info/PKG-INFO +0 -695
  15. {tensorneko-0.3.15 → tensorneko-0.3.17}/setup.cfg +0 -0
  16. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/arch/__init__.py +0 -0
  17. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/arch/auto_encoder.py +0 -0
  18. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/arch/binary_classifier.py +0 -0
  19. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/arch/gan.py +0 -0
  20. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/arch/vqvae.py +0 -0
  21. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/arch/wgan.py +0 -0
  22. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/backend/__init__.py +0 -0
  23. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/callback/__init__.py +0 -0
  24. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/callback/display_metrics_callback.py +0 -0
  25. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/callback/earlystop_lr.py +0 -0
  26. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/callback/epoch_num_logger.py +0 -0
  27. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/callback/epoch_time_logger.py +0 -0
  28. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/callback/gpu_stats_logger.py +0 -0
  29. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/callback/lr_logger.py +0 -0
  30. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/callback/nil_callback.py +0 -0
  31. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/callback/system_stats_logger.py +0 -0
  32. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/dataset/__init__.py +0 -0
  33. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/dataset/list_dataset.py +0 -0
  34. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/dataset/nested_dataset.py +0 -0
  35. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/dataset/round_robin_dataset.py +0 -0
  36. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/dataset/sampler/__init__.py +0 -0
  37. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/dataset/sampler/sequential_iter_sampler.py +0 -0
  38. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/debug/__init__.py +0 -0
  39. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/evaluation/__init__.py +0 -0
  40. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/evaluation/enum.py +0 -0
  41. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/evaluation/fid.py +0 -0
  42. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/evaluation/iou.py +0 -0
  43. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/evaluation/psnr.py +0 -0
  44. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/evaluation/secs.py +0 -0
  45. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/evaluation/ssim.py +0 -0
  46. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/io/__init__.py +0 -0
  47. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/io/mesh/__init__.py +0 -0
  48. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/io/mesh/mesh_reader.py +0 -0
  49. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/io/mesh/mesh_writer.py +0 -0
  50. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/io/reader.py +0 -0
  51. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/io/weight/__init__.py +0 -0
  52. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/io/weight/weight_reader.py +0 -0
  53. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/io/weight/weight_writer.py +0 -0
  54. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/io/writer.py +0 -0
  55. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/layer/__init__.py +0 -0
  56. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/layer/aggregation.py +0 -0
  57. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/layer/attention.py +0 -0
  58. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/layer/concatenate.py +0 -0
  59. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/layer/conv.py +0 -0
  60. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/layer/linear.py +0 -0
  61. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/layer/log.py +0 -0
  62. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/layer/masked_conv2d.py +0 -0
  63. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/layer/noise.py +0 -0
  64. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/layer/patching.py +0 -0
  65. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/layer/positional_embedding.py +0 -0
  66. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/layer/reshape.py +0 -0
  67. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/layer/stack.py +0 -0
  68. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/layer/vector_quantizer.py +0 -0
  69. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/module/__init__.py +0 -0
  70. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/module/dense.py +0 -0
  71. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/module/gated_conv.py +0 -0
  72. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/module/inception.py +0 -0
  73. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/module/mlp.py +0 -0
  74. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/module/residual.py +0 -0
  75. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/module/transformer.py +0 -0
  76. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/msg/__init__.py +0 -0
  77. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/neko_model.py +0 -0
  78. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/neko_module.py +0 -0
  79. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/neko_trainer.py +0 -0
  80. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/notebook/__init__.py +0 -0
  81. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/optim/__init__.py +0 -0
  82. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/optim/lr_scheduler/__init__.py +0 -0
  83. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/preprocess/__init__.py +0 -0
  84. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/preprocess/crop.py +0 -0
  85. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/preprocess/enum.py +0 -0
  86. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/preprocess/face_detector/__init__.py +0 -0
  87. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/preprocess/pad.py +0 -0
  88. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/preprocess/resize.py +0 -0
  89. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/util/configuration.py +0 -0
  90. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/util/dispatched_misc.py +0 -0
  91. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/util/misc.py +0 -0
  92. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/util/reproducibility.py +0 -0
  93. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/util/string_getter.py +0 -0
  94. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/util/type.py +0 -0
  95. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/visualization/__init__.py +0 -0
  96. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/visualization/image_browser/__init__.py +0 -0
  97. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/visualization/log_graph.py +0 -0
  98. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/visualization/matplotlib.py +0 -0
  99. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko/visualization/watcher/__init__.py +0 -0
  100. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko.egg-info/dependency_links.txt +0 -0
  101. {tensorneko-0.3.15 → tensorneko-0.3.17}/src/tensorneko.egg-info/top_level.txt +0 -0
  102. {tensorneko-0.3.15 → tensorneko-0.3.17}/test/test_library_info.py +0 -0
  103. {tensorneko-0.3.15 → tensorneko-0.3.17}/test/test_version.py +0 -0
@@ -0,0 +1,696 @@
1
+ Metadata-Version: 2.1
2
+ Name: tensorneko
3
+ Version: 0.3.17
4
+ Summary: Tensor Neural Engine Kompanion. An util library based on PyTorch and PyTorch Lightning.
5
+ Home-page: https://github.com/ControlNet/tensorneko
6
+ Author: ControlNet
7
+ Author-email: smczx@hotmail.com
8
+ License: UNKNOWN
9
+ Project-URL: Bug Tracker, https://github.com/ControlNet/tensorneko/issues
10
+ Project-URL: Source Code, https://github.com/ControlNet/tensorneko
11
+ Description: <h1 style="text-align: center">TensorNeko</h1>
12
+
13
+ <div align="center">
14
+ <img src="https://img.shields.io/github/stars/ControlNet/tensorneko?style=flat-square">
15
+ <img src="https://img.shields.io/github/forks/ControlNet/tensorneko?style=flat-square">
16
+ <a href="https://github.com/ControlNet/tensorneko/issues"><img src="https://img.shields.io/github/issues/ControlNet/tensorneko?style=flat-square"></a>
17
+ <a href="https://pypi.org/project/tensorneko/"><img src="https://img.shields.io/pypi/v/tensorneko?style=flat-square"></a>
18
+ <a href="https://pypi.org/project/tensorneko/"><img src="https://img.shields.io/pypi/dm/tensorneko?style=flat-square"></a>
19
+ <img src="https://img.shields.io/github/license/ControlNet/tensorneko?style=flat-square">
20
+ </div>
21
+
22
+ <div align="center">
23
+ <a href="https://www.python.org/"><img src="https://img.shields.io/pypi/pyversions/tensorneko?style=flat-square"></a>
24
+ <a href="https://pytorch.org/"><img src="https://img.shields.io/badge/PyTorch-%3E%3D1.9.0-EE4C2C?style=flat-square&logo=pytorch"></a>
25
+ <a href="https://www.pytorchlightning.ai/"><img src="https://img.shields.io/badge/Lightning-2.0.*%20|%202.1.*%20|%202.2.*-792EE5?style=flat-square&logo=lightning"></a>
26
+ </div>
27
+
28
+ <div align="center">
29
+ <a href="https://github.com/ControlNet/tensorneko/actions"><img src="https://img.shields.io/github/actions/workflow/status/ControlNet/tensorneko/unittest.yml?branch=dev&label=unittest&style=flat-square"></a>
30
+ <a href="https://github.com/ControlNet/tensorneko/actions"><img src="https://img.shields.io/github/actions/workflow/status/ControlNet/tensorneko/release.yml?branch=master&label=release&style=flat-square"></a>
31
+ <a href="https://coveralls.io/github/ControlNet/tensorneko"><img src="https://img.shields.io/coverallsCoverage/github/ControlNet/tensorneko?style=flat-square"></a>
32
+ </div>
33
+
34
+ Tensor Neural Engine Kompanion. An util library based on PyTorch and PyTorch Lightning.
35
+
36
+ ## Install
37
+
38
+ TensorNeko requires PyTorch and, optionally, PyTorch Lightning. You can install it with the commands below.
39
+
40
+ ```shell
41
+ pip install tensorneko # for PyTorch only
42
+ pip install tensorneko[lightning] # for PyTorch and Lightning
43
+ ```
44
+
45
+ To use the library without PyTorch and PyTorch Lightning, you can install the util library (supporting Python 3.7 ~ 3.12 with limited features) with the following command.
46
+ ```shell
47
+ pip install tensorneko_util
48
+ ```
49
+
50
+ Some CPU-bound functions are implemented in Rust via `pyo3`, and you can install the optimized version with the command below.
51
+ ```shell
52
+ pip install tensorneko_lib
53
+ ```
54
+
55
+ Some CLI tools are provided in the `tensorneko_tool` package, and you can install it with the command below.
56
+ ```shell
57
+ pipx install tensorneko_tool # or `pip install tensorneko_tool`
58
+ ```
59
+
60
+ Then you can use the `tensorneko` CLI tool in the terminal.
61
+
62
+ ## Neko Layers, Modules and Architectures
63
+
64
+ Build an MLP with linear layers. The activation and normalization will be placed in the hidden layers.
65
+
66
+ 784 -> 1024 -> 512 -> 10
67
+
68
+ ```python
69
+ import tensorneko as neko
70
+ import torch.nn
71
+
72
+ mlp = neko.module.MLP(
73
+ neurons=[784, 1024, 512, 10],
74
+ build_activation=torch.nn.ReLU,
75
+ build_normalization=[
76
+ lambda: torch.nn.BatchNorm1d(1024),
77
+ lambda: torch.nn.BatchNorm1d(512)
78
+ ],
79
+ dropout_rate=0.5
80
+ )
81
+ ```
82
+
83
+ Build a Conv2d with activation and normalization.
84
+
85
+ ```python
86
+ import tensorneko as neko
87
+ import torch.nn
88
+
89
+ conv2d = neko.layer.Conv2d(
90
+ in_channels=256,
91
+ out_channels=1024,
92
+ kernel_size=(3, 3),
93
+ padding=(1, 1),
94
+ build_activation=torch.nn.ReLU,
95
+ build_normalization=lambda: torch.nn.BatchNorm2d(256),
96
+ normalization_after_activation=False
97
+ )
98
+ ```
99
+
100
+ #### All architectures, modules and layers
101
+
102
+ Layers:
103
+
104
+ - `Aggregation`
105
+ - `Concatenate`
106
+ - `Conv`, `Conv1d`, `Conv2d`, `Conv3d`
107
+ - `GaussianNoise`
108
+ - `ImageAttention`, `SeqAttention`
109
+ - `MaskedConv2d`, `MaskedConv2dA`, `MaskedConv2dB`
110
+ - `Linear`
111
+ - `Log`
112
+ - `PatchEmbedding2d`
113
+ - `PositionalEmbedding`
114
+ - `Reshape`
115
+ - `Stack`
116
+ - `VectorQuantizer`
117
+
118
+ Modules:
119
+
120
+ - `DenseBlock`
121
+ - `InceptionModule`
122
+ - `MLP`
123
+ - `ResidualBlock` and `ResidualModule`
124
+ - `AttentionModule`, `TransformerEncoderBlock` and `TransformerEncoder`
125
+ - `GatedConv`
126
+
127
+ Architectures:
128
+ - `AutoEncoder`
129
+ - `GAN`
130
+ - `WGAN`
131
+ - `VQVAE`
132
+
133
+ ## Neko modules
134
+
135
+ All `tensorneko.layer` and `tensorneko.module` are `NekoModule`. They can be used in
136
+ [fn.py](https://github.com/kachayev/fn.py) pipe operation.
137
+
138
+ ```python
139
+ from tensorneko.layer import Linear
140
+ from torch.nn import ReLU
141
+ import torch
142
+
143
+ linear0 = Linear(16, 128, build_activation=ReLU)
144
+ linear1 = Linear(128, 1)
145
+
146
+ f = linear0 >> linear1
147
+ print(f(torch.rand(16)).shape)
148
+ # torch.Size([1])
149
+ ```
150
+
151
+ ## Neko IO
152
+
153
+ Easily load and save different modal data.
154
+
155
+ ```python
156
+ import tensorneko as neko
157
+ from tensorneko.io import json_data
158
+ from typing import List
159
+
160
+ # read video (Temporal, Channel, Height, Width)
161
+ video_tensor, audio_tensor, video_info = neko.io.read.video("path/to/video.mp4")
162
+ # write video
163
+ neko.io.write.video("path/to/video.mp4",
164
+ video_tensor, video_info.video_fps,
165
+ audio_tensor, video_info.audio_fps
166
+ )
167
+
168
+ # read audio (Channel, Temporal)
169
+ audio_tensor, sample_rate = neko.io.read.audio("path/to/audio.wav")
170
+ # write audio
171
+ neko.io.write.audio("path/to/audio.wav", audio_tensor, sample_rate)
172
+
173
+ # read image (Channel, Height, Width) with float value in range [0, 1]
174
+ image_tensor = neko.io.read.image("path/to/image.png")
175
+ # write image
176
+ neko.io.write.image("path/to/image.png", image_tensor)
177
+ neko.io.write.image("path/to/image.jpg", image_tensor)
178
+
179
+ # read plain text
180
+ text_string = neko.io.read.text("path/to/text.txt")
181
+ # write plain text
182
+ neko.io.write.text("path/to/text.txt", text_string)
183
+
184
+ # read json as python dict or list
185
+ json_dict = neko.io.read.json("path/to/json.json")
186
+ # read json as an object
187
+ @json_data
188
+ class JsonData:
189
+ x: int
190
+ y: int
191
+
192
+ json_obj: List[JsonData] = neko.io.read.json("path/to/json.json", cls=List[JsonData])
193
+ # write json from python dict/list or json_data decorated object
194
+ neko.io.write.json("path/to/json.json", json_dict)
195
+ neko.io.write.json("path/to/json.json", json_obj)
196
+ ```
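The `@json_data` idea, mapping JSON fields onto typed class attributes, can be sketched in plain Python with `dataclasses` (a simplified illustration, not the library implementation; the `Point` class and `parse_json_list` helper are hypothetical names):

```python
import json
from dataclasses import dataclass, fields


@dataclass
class Point:
    x: int
    y: int


def parse_json_list(text, cls):
    # build one dataclass instance per JSON object, matched by field name
    return [cls(**{f.name: obj[f.name] for f in fields(cls)}) for obj in json.loads(text)]


points = parse_json_list('[{"x": 1, "y": 2}, {"x": 3, "y": 4}]', Point)
print(points[0].x, points[1].y)  # 1 4
```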
197
+
198
+ Besides, reading and writing `mat` and `pickle` files are also supported.
199
+
200
+
201
+ ## Neko preprocessing
202
+
203
+ ```python
204
+ import tensorneko as neko
205
+
206
+ # A video tensor with (120, 3, 720, 1280)
207
+ video = neko.io.read.video("example/video.mp4").video
208
+ # Get a resized tensor with (120, 3, 256, 256)
209
+ resized_video = neko.preprocess.resize_video(video, (256, 256))
210
+ ```
211
+
212
+ #### All preprocessing utils
213
+
214
+ - `resize_video`
215
+ - `resize_image`
216
+ - `padding_video`
217
+ - `padding_audio`
218
+ - `crop_with_padding`
219
+ - `frames2video`
220
+
221
+ If `ffmpeg` is available, you can use the ffmpeg wrappers below.
222
+
223
+ - `video2frames`
224
+ - `merge_video_audio`
225
+ - `resample_video_fps`
226
+ - `mp32wav`
227
+
228
+ ## Neko Visualization
229
+
230
+ ### Variable Web Watcher
231
+ Start a web server to watch variable states while the program (e.g. training, inference, data preprocessing) is running.
232
+ ```python
233
+ import time
234
+ from tensorneko.visualization.watcher import *
235
+ data_list = ... # a list of data
236
+ def preprocessing(d): ...
237
+
238
+ # initialize the components
239
+ pb = ProgressBar("Processing", total=len(data_list))
240
+ logger = Logger("Log message")
241
+ var = Variable("Some Value", 0)
242
+ line_chart = LineChart("Line Chart", x_label="x", y_label="y")
243
+ view = View("Data preprocessing").add_all()
244
+
245
+ t0 = time.time()
246
+ # open the server while the code block is running.
247
+ with Server(view, port=8000):
248
+ for i, data in enumerate(data_list):
249
+ preprocessing(data) # do some processing here
250
+
251
+ x = time.time() - t0 # time since the start of the program
252
+ y = i # processed number of data
253
+ line_chart.add(x, y) # add to the line chart
254
+ logger.log("Some messages") # log messages to the server
255
+ var.value = ... # keep tracking a variable
256
+ pb.add(1) # advance the progress bar by 1
257
+ ```
258
+ While the script is running, go to `127.0.0.1:8000` to track the status.
259
+
260
+ ### Tensorboard Server
261
+
262
+ Simply run a TensorBoard server from a Python script.
263
+ ```python
264
+ import tensorneko as neko
265
+
266
+ with neko.visualization.tensorboard.Server(port=6006):
267
+ trainer.fit(model, dm)
268
+ ```
269
+
270
+ ### Matplotlib wrappers
271
+ Display an image of (C, H, W) shape by `plt.imshow` wrapper.
272
+ ```python
273
+ import tensorneko as neko
274
+ import matplotlib.pyplot as plt
275
+
276
+ image_tensor = ... # an image tensor with shape (C, H, W)
277
+ neko.visualization.matplotlib.imshow(image_tensor)
278
+ plt.show()
279
+ ```
280
+
281
+ ### Predefined colors
282
+ Several aesthetic colors are predefined.
283
+
284
+ ```python
285
+ import tensorneko as neko
286
+ import matplotlib.pyplot as plt
287
+
288
+ # use with matplotlib
289
+ plt.plot(..., color=neko.visualization.Colors.RED)
290
+
291
+ # the palette for seaborn is also available
292
+ from tensorneko_util.visualization.seaborn import palette
293
+ import seaborn as sns
294
+ sns.set_palette(palette)
295
+ ```
296
+
297
+ ## Neko Model
298
+
299
+ Build and train a simple model for classifying MNIST with MLP.
300
+
301
+ ```python
302
+ from typing import Optional, Union, Sequence, Dict, List
303
+
304
+ import torch.nn
305
+ from torch import Tensor
306
+ from torch.optim import Adam
307
+ from torchmetrics import Accuracy
308
+ from lightning.pytorch.callbacks import ModelCheckpoint
309
+
310
+ import tensorneko as neko
311
+ from tensorneko.util import get_activation, get_loss
312
+
313
+
314
+ class MnistClassifier(neko.NekoModel):
315
+
316
+ def __init__(self, name: str, mlp_neurons: List[int], activation: str, dropout_rate: float, loss: str,
317
+ learning_rate: float, weight_decay: float
318
+ ):
319
+ super().__init__(name)
320
+ self.weight_decay = weight_decay
321
+ self.learning_rate = learning_rate
322
+
323
+ self.flatten = torch.nn.Flatten()
324
+ self.mlp = neko.module.MLP(
325
+ neurons=mlp_neurons,
326
+ build_activation=get_activation(activation),
327
+ dropout_rate=dropout_rate
328
+ )
329
+ self.loss_func = get_loss(loss)()
330
+ self.acc_func = Accuracy()
331
+
332
+ def forward(self, x):
333
+ # (batch, 28, 28)
334
+ x = self.flatten(x)
335
+ # (batch, 768)
336
+ x = self.mlp(x)
337
+ # (batch, 10)
338
+ return x
339
+
340
+ def training_step(self, batch: Optional[Union[Tensor, Sequence[Tensor]]] = None, batch_idx: Optional[int] = None,
341
+ optimizer_idx: Optional[int] = None, hiddens: Optional[Tensor] = None
342
+ ) -> Dict[str, Tensor]:
343
+ x, y = batch
344
+ logit = self(x)
345
+ prob = logit.sigmoid()
346
+ loss = self.loss_func(logit, y)
347
+ acc = self.acc_func(prob.max(dim=1)[1], y)
348
+ return {"loss": loss, "acc": acc}
349
+
350
+ def validation_step(self, batch: Optional[Union[Tensor, Sequence[Tensor]]] = None, batch_idx: Optional[int] = None,
351
+ dataloader_idx: Optional[int] = None
352
+ ) -> Dict[str, Tensor]:
353
+ x, y = batch
354
+ logit = self(x)
355
+ prob = logit.sigmoid()
356
+ loss = self.loss_func(logit, y)
357
+ acc = self.acc_func(prob.max(dim=1)[1], y)
358
+ return {"loss": loss, "acc": acc}
359
+
360
+ def configure_optimizers(self):
361
+ optimizer = Adam(self.parameters(), lr=self.learning_rate, betas=(0.5, 0.9), weight_decay=self.weight_decay)
362
+ return {
363
+ "optimizer": optimizer
364
+ }
365
+
366
+
367
+ model = MnistClassifier("mnist_mlp_classifier", [784, 1024, 512, 10], "ReLU", 0.5, "CrossEntropyLoss", 1e-4, 1e-4)
368
+
369
+ dm = ... # The MNIST datamodule from PyTorch Lightning
370
+
371
+ trainer = neko.NekoTrainer(log_every_n_steps=100, gpus=1, logger=model.name, precision=32,
372
+ callbacks=[ModelCheckpoint(dirpath="./ckpt",
373
+ save_last=True, filename=model.name + "-{epoch}-{val_acc:.3f}", monitor="val_acc", mode="max"
374
+ )])
375
+
376
+ trainer.fit(model, dm)
377
+ ```
378
+
379
+ ## Neko Callbacks
380
+
381
+ Some simple but useful PyTorch Lightning callbacks are provided.
382
+
383
+ - `DisplayMetricsCallback`
384
+ - `EarlyStoppingLR`: Stop training early when the learning rate reaches a threshold.
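The core logic of stopping on a learning-rate threshold can be sketched as a simple check (a hypothetical standalone version for illustration, not the library's actual callback class):

```python
class EarlyStopOnLR:
    """Signal that training should stop once the learning rate falls below a threshold."""

    def __init__(self, lr_threshold: float):
        self.lr_threshold = lr_threshold

    def should_stop(self, current_lr: float) -> bool:
        # typically checked once per epoch after the LR scheduler steps
        return current_lr < self.lr_threshold


stopper = EarlyStopOnLR(lr_threshold=1e-6)
print(stopper.should_stop(1e-4))  # False
print(stopper.should_stop(1e-7))  # True
```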
385
+
386
+ ## Neko Notebook Helpers
387
+ Here are some helper functions to better interact with Jupyter Notebook.
388
+ ```python
389
+ import tensorneko as neko
390
+ # display a video
391
+ neko.notebook.display.video("path/to/video.mp4")
392
+ # display an audio
393
+ neko.notebook.display.audio("path/to/audio.wav")
394
+ # display a code file
395
+ neko.notebook.display.code("path/to/code.java")
396
+ ```
397
+
398
+ ## Neko Debug Tools
399
+
400
+ Get the default values of `ArgumentParser` arguments. This is convenient to use in a notebook.
401
+ ```python
402
+ from argparse import ArgumentParser
403
+ from tensorneko.debug import get_parser_default_args
404
+
405
+ parser = ArgumentParser()
406
+ parser.add_argument("integers", type=int, nargs="+", default=[1, 2, 3])
407
+ parser.add_argument("--sum", dest="accumulate", action="store_const", const=sum, default=max)
408
+ args = get_parser_default_args(parser)
409
+
410
+ print(args.integers) # [1, 2, 3]
411
+ print(args.accumulate) # <function sum at ...>
412
+ ```
413
+
414
+ ## Neko Evaluation
415
+
416
+ Some metric functions for evaluation are provided.
417
+
418
+ - `iou_1d`
419
+ - `iou_2d`
420
+ - `psnr_video`
421
+ - `psnr_image`
422
+ - `ssim_video`
423
+ - `ssim_image`
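As a reference for what the PSNR metrics compute, here is the standard formula, 10 · log10(MAX² / MSE), in a minimal stdlib sketch (for illustration only, not the library implementation):

```python
import math


def psnr(pred, target, max_value=1.0):
    # peak signal-to-noise ratio in dB over flat pixel sequences
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    return 10 * math.log10(max_value ** 2 / mse)


# MSE = 0.01, so PSNR = 10 * log10(1 / 0.01) = 20 dB
print(round(psnr([0.0] * 64, [0.1] * 64), 2))  # 20.0
```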
424
+
425
+
426
+ ## Neko Utilities
427
+
428
+ ### Misc functions
429
+
430
+ `__`: Wraps a value as the argument to pipe operations. (Inspired by [fn.py](https://github.com/kachayev/fn.py))
431
+ ```python
432
+ from tensorneko.util import __, _
433
+ result = __(20) >> (_ + 1) >> (_ * 2) >> __.get
434
+ print(result)
435
+ # 42
436
+ ```
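The underlying idea of `__` can be emulated in a few lines by overloading `>>` to thread a value through functions (a minimal sketch, not the library implementation):

```python
class Pipe:
    # wraps a value; `>>` applies a function and rewraps the result
    def __init__(self, value):
        self.value = value

    def __rshift__(self, func):
        return Pipe(func(self.value))


result = (Pipe(20) >> (lambda x: x + 1) >> (lambda x: x * 2)).value
print(result)  # 42
```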
437
+
438
+ `Seq` and `Stream`: Collection wrappers for method chaining with concurrency support.
439
+ ```python
440
+ from tensorneko.util import Seq, Stream, _
441
+ from tensorneko_util.backend.parallel import ParallelType
442
+ # using method chaining
443
+ seq = Seq.of(1, 2, 3).map(_ + 1).filter(_ % 2 == 0).map(_ * 2).take(2).to_list()
444
+ # return [4, 8]
445
+
446
+ # using bit shift operator to chain the sequence
447
+ seq = Seq.of(1, 2, 3) << Seq.of(2, 3, 4) << [3, 4, 5]
448
+ # return Seq(1, 2, 3, 2, 3, 4, 3, 4, 5)
449
+
450
+ # run concurrently with `for_each` on a Stream
451
+ if __name__ == '__main__':
452
+ Stream.of(1, 2, 3, 4).for_each(print, progress_bar=True, parallel_type=ParallelType.PROCESS)
453
+ ```
454
+
455
+ `Option`: A monad for handling possibly missing data.
456
+ ```python
457
+ from tensorneko.util import return_option
458
+
459
+ @return_option
460
+ def get_data():
461
+ if some_condition:
462
+ return 1
463
+ else:
464
+ return None
465
+
466
+ def process_data(n: int):
467
+ if condition(n):
468
+ return n
469
+ else:
470
+ return None
471
+
472
+
473
+ data = get_data()
474
+ data = data.map(process_data).get_or_else(-1) # if the response is None, return -1
475
+ ```
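A minimal sketch of the `Option` idea in plain Python (not the library implementation): `map` skips the function when the value is absent, and `get_or_else` supplies a default.

```python
class Option:
    def __init__(self, value):
        self.value = value

    def map(self, func):
        # apply func only when a value is present
        return Option(func(self.value)) if self.value is not None else self

    def get_or_else(self, default):
        return self.value if self.value is not None else default


print(Option(2).map(lambda n: n * 10).get_or_else(-1))    # 20
print(Option(None).map(lambda n: n * 10).get_or_else(-1)) # -1
```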
476
+
477
+ `Eval`: A monad for lazy evaluation.
478
+ ```python
479
+ from tensorneko.util import Eval
480
+
481
+ @Eval.always
482
+ def call_by_name_var():
483
+ return 42
484
+
485
+ @Eval.later
486
+ def call_by_need_var():
487
+ return 43
488
+
489
+ @Eval.now
490
+ def call_by_value_var():
491
+ return 44
492
+
493
+
494
+ print(call_by_name_var.value) # 42
495
+ ```
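The three strategies differ in when the function runs: `now` at definition (call-by-value), `later` once on first access (call-by-need), `always` on every access (call-by-name). The call-by-need case can be sketched as a caching property (a simplified illustration, not the library implementation):

```python
class Lazy:
    # call-by-need: compute once on first access, then cache
    def __init__(self, func):
        self.func = func
        self.cached = None

    @property
    def value(self):
        if self.cached is None:
            self.cached = self.func()
        return self.cached


calls = []
lazy = Lazy(lambda: calls.append("run") or 43)
print(lazy.value, lazy.value)  # 43 43
print(len(calls))              # 1 (the function ran only once)
```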
496
+
497
+ ### Reactive
498
+ This library provides event-bus-based reactive tools. The API integrates with Python's type annotation syntax.
499
+
500
+ ```python
501
+ # useful decorators for default event bus
502
+ from tensorneko.util import subscribe
503
+ # Event base type
504
+ from tensorneko.util import Event, EventBus
505
+
506
+ class LogEvent(Event):
507
+ def __init__(self, message: str):
508
+ self.message = message
509
+
510
+ # the event argument should be annotated correctly
511
+ @subscribe # run in the main thread
512
+ def log_information(event: LogEvent):
513
+ print(event.message)
514
+
515
+
516
+ @subscribe.thread # run in a new thread
517
+ def log_information_thread(event: LogEvent):
518
+ print(event.message, "in another thread")
519
+
520
+
521
+ @subscribe.coro # run with async
522
+ async def log_information_async(event: LogEvent):
523
+ print(event.message, "async")
524
+
525
+
526
+ @subscribe.process # run in a new process
527
+ def log_information_process(event: LogEvent):
528
+ print(event.message, "in a new process")
529
+
530
+ if __name__ == '__main__':
531
+ # emit an event, and then the event handler will be invoked
532
+ # The sequential order is not guaranteed
533
+ LogEvent("Hello world!")
534
+ EventBus.default.wait() # emitting is non-blocking; call wait manually before exit.
535
+ # one possible output:
536
+ # Hello world! in another thread
537
+ # Hello world! async
538
+ # Hello world!
539
+ # Hello world! in a new process
540
+ ```
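The annotation-driven subscription can be sketched with `typing.get_type_hints`: the handler's parameter annotation decides which events it receives. Below is a simplified synchronous version for illustration (not the library implementation, which also supports threads, coroutines, and processes):

```python
from collections import defaultdict
from typing import get_type_hints

handlers = defaultdict(list)


def subscribe(func):
    # register the handler under its annotated event type
    event_type = next(iter(get_type_hints(func).values()))
    handlers[event_type].append(func)
    return func


class LogEvent:
    def __init__(self, message):
        self.message = message
        for handler in handlers[type(self)]:  # emit on construction
            handler(self)


received = []


@subscribe
def on_log(event: LogEvent):
    received.append(event.message)


LogEvent("Hello world!")
print(received)  # ['Hello world!']
```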
541
+
542
+ ### Multiple Dispatch
543
+
544
+ `dispatch`: Multi-dispatch implementation for Python.
545
+
546
+ To my knowledge, 3 popular multi-dispatch libraries still have critical limitations.
547
+ [plum](https://github.com/wesselb/plum) doesn't support static methods,
548
+ [multipledispatch](https://github.com/mrocklin/multipledispatch) doesn't support Python type annotation syntax, and
549
+ [multimethod](https://github.com/coady/multimethod) doesn't support default arguments. TensorNeko supports all of these.
550
+
551
+ ```python
552
+ from tensorneko.util import dispatch
553
+
554
+ class DispatchExample:
555
+
556
+ @staticmethod
557
+ @dispatch
558
+ def go() -> None:
559
+ print("Go0")
560
+
561
+ @staticmethod
562
+ @dispatch
563
+ def go(x: int) -> None:
564
+ print("Go1")
565
+
566
+ @staticmethod
567
+ @dispatch
568
+ def go(x: float, y: float = 1.0) -> None:
569
+ print("Go2")
570
+
571
+ @dispatch
572
+ def come(x: int) -> str:
573
+ return "Come1"
574
+
575
+ @dispatch.of(str)
576
+ def come(x) -> str:
577
+ return "Come2"
578
+ ```
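Dispatch by annotated parameter types can be sketched with a registry keyed by the argument-type tuple (a simplified illustration without default-argument or static-method handling, not the library implementation):

```python
from typing import get_type_hints

registry = {}


def dispatch(func):
    # register one implementation per annotated parameter-type tuple
    hints = get_type_hints(func)
    hints.pop("return", None)
    registry[(func.__name__, tuple(hints.values()))] = func

    def wrapper(*args):
        # look up the implementation matching the runtime argument types
        return registry[(func.__name__, tuple(type(a) for a in args))](*args)

    return wrapper


@dispatch
def go(x: int) -> str:
    return "int version"


@dispatch
def go(x: str) -> str:
    return "str version"


print(go(1), go("a"))  # int version str version
```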
579
+
580
+ ### Miscellaneous
581
+
582
+ `StringGetter`: Get a PyTorch class from a string.
583
+ ```python
584
+ import tensorneko as neko
585
+ activation = neko.util.get_activation("leakyRelu")()
586
+ ```
587
+
588
+ `Seed`: The universal seed for `numpy`, `torch` and Python `random`.
589
+ ```python
590
+ from tensorneko.util import Seed
591
+ from torch.utils.data import DataLoader
592
+
593
+ # set seed to 42 for all numpy, torch and python random
594
+ Seed.set(42)
595
+
596
+ # Apply seed to parallel workers of DataLoader
597
+ DataLoader(
598
+ train_dataset,
599
+ batch_size=batch_size,
600
+ num_workers=num_workers,
601
+ worker_init_fn=Seed.get_loader_worker_init(),
602
+ generator=Seed.get_torch_generator()
603
+ )
604
+ ```
605
+
606
+ `Timer`: A timer for measuring elapsed time.
607
+ ```python
608
+ from tensorneko.util import Timer
609
+ import time
610
+
611
+ # use as a context manager with single time
612
+ with Timer():
613
+ time.sleep(1)
614
+
615
+ # use as a context manager with multiple segments
616
+ with Timer() as t:
617
+ time.sleep(1)
618
+ t.time("sleep A")
619
+ time.sleep(1)
620
+ t.time("sleep B")
621
+ time.sleep(1)
622
+
623
+ # use as a decorator
624
+ @Timer()
625
+ def f():
626
+ time.sleep(1)
627
+ print("f")
628
+ ```
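Such a timer can be sketched as a context manager around `time.perf_counter` (a minimal illustration, not the library implementation; `SimpleTimer` is a hypothetical name):

```python
import time


class SimpleTimer:
    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, *exc):
        print(f"total: {time.perf_counter() - self.start:.3f}s")

    def time(self, label):
        # report elapsed time since entering the context
        print(f"{label}: {time.perf_counter() - self.start:.3f}s")


with SimpleTimer() as t:
    sum(range(1000))
    t.time("segment A")
```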
629
+
630
+ `Singleton`: A decorator to make a class a singleton. Inspired by Scala/Kotlin.
631
+ ```python
632
+ from tensorneko.util import Singleton
633
+
634
+ @Singleton
635
+ class MyObject:
636
+ def __init__(self):
637
+ self.value = 0
638
+
639
+ def add(self, value):
640
+ self.value += value
641
+ return self.value
642
+
643
+
644
+ print(MyObject.value) # 0
645
+ MyObject.add(1)
646
+ print(MyObject.value) # 1
647
+ ```
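The decorator can be sketched by replacing the class with its single instance at decoration time, which is why methods are then called directly on the name (a simplified illustration, not the library implementation):

```python
def singleton(cls):
    # replace the class with its one and only instance
    return cls()


@singleton
class Counter:
    def __init__(self):
        self.value = 0

    def add(self, amount):
        self.value += amount
        return self.value


Counter.add(1)
print(Counter.value)  # 1
```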
648
+
649
+ Besides, many miscellaneous functions are also provided.
650
+
651
+
652
+ Functions list (in `tensorneko_util`):
653
+ - `generate_inf_seq`
654
+ - `compose`
655
+ - `listdir`
656
+ - `with_printed`
657
+ - `ifelse`
658
+ - `dict_add`
659
+ - `as_list`
660
+ - `identity`
661
+ - `list_to_dict`
662
+ - `get_tensorneko_util_path`
663
+
664
+ Functions list (in `tensorneko`):
665
+ - `reduce_dict_by`
666
+ - `summarize_dict_by`
667
+ - `with_printed_shape`
668
+ - `is_bad_num`
669
+ - `count_parameters`
670
+
671
+ ## TensorNeko Tools
672
+
673
+ Some CLI tools are provided in the `tensorneko_tool` package.
674
+
675
+ The `gotify` subcommand sends a message to a Gotify server, provided the environment variables `GOTIFY_URL` and `GOTIFY_TOKEN` are set.
676
+
677
+ ```shell
678
+ tensorneko gotify "Script finished!"
679
+ ```
680
+
681
+ Keywords: deep learning,pytorch,AI,data processing
682
+ Platform: UNKNOWN
683
+ Classifier: Programming Language :: Python :: 3
684
+ Classifier: Programming Language :: Python :: 3.8
685
+ Classifier: Programming Language :: Python :: 3.9
686
+ Classifier: Programming Language :: Python :: 3.10
687
+ Classifier: Programming Language :: Python :: 3.11
688
+ Classifier: Programming Language :: Python :: 3.12
689
+ Classifier: License :: OSI Approved :: MIT License
690
+ Classifier: Operating System :: OS Independent
691
+ Classifier: Intended Audience :: Developers
692
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
693
+ Classifier: Topic :: Utilities
694
+ Requires-Python: >=3.8
695
+ Description-Content-Type: text/markdown
696
+ Provides-Extra: lightning
@@ -25,8 +25,11 @@ Tensor Neural Engine Kompanion. An util library based on PyTorch and PyTorch Lig
25
25
 
26
26
  ## Install
27
27
 
28
+ TensorNeko requires PyTorch and, optionally, PyTorch Lightning. You can install it with the commands below.
29
+
28
30
  ```shell
29
- pip install tensorneko
31
+ pip install tensorneko # for PyTorch only
32
+ pip install tensorneko[lightning] # for PyTorch and Lightning
30
33
  ```
31
34
 
32
35
  To use the library without PyTorch and PyTorch Lightning, you can install the util library (supporting Python 3.7 ~ 3.12 with limited features) with the following command.