jetson-examples 0.2.1__py3-none-any.whl → 0.2.3__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
1
- Metadata-Version: 2.1
1
+ Metadata-Version: 2.2
2
2
  Name: jetson-examples
3
- Version: 0.2.1
3
+ Version: 0.2.3
4
4
  Summary: Running Gen AI models and applications on NVIDIA Jetson devices with one-line command
5
5
  Author-email: luozhixin <zhixin.luo@seeed.cc>
6
6
  Project-URL: Homepage, https://github.com/Seeed-Projects/jetson-examples
@@ -60,6 +60,8 @@ Here are some examples that can be run:
60
60
 
61
61
  | Example | Type | Model/Data Size | Docker Image Size | Command |
62
62
  | ------------------------------------------------ | ------------------------ | --------------- | ---------- | --------------------------------------- |
63
+ | 🆕 [Ultralytics-yolo](/reComputer/scripts/ultralytics-yolo/README.md) | Computer Vision | | 15.4GB | `reComputer run ultralytics-yolo` |
64
+ | 🆕 [Deep-Live-Cam](/reComputer/scripts/deep-live-cam/README.md) | Face-swapping | 0.5GB | 20GB | `reComputer run deep-live-cam` |
63
65
  | 🆕 llama-factory | Finetune LLM | | 13.5GB | `reComputer run llama-factory` |
64
66
  | 🆕 [ComfyUI](/reComputer/scripts/comfyui/README.md) |Computer Vision | | 20GB | `reComputer run comfyui` |
65
67
  | [Depth-Anything-V2](/reComputer/scripts/depth-anything-v2/README.md) |Computer Vision | | 15GB | `reComputer run depth-anything-v2` |
@@ -74,7 +76,6 @@ Here are some examples that can be run:
74
76
  | [Nanodb](../reComputer/scripts/nanodb/readme.md) | Vector Database | 76GB | 7.0GB | `reComputer run nanodb` |
75
77
  | Whisper | Audio | 1.5GB | 6.0GB | `reComputer run whisper` |
76
78
  | [Yolov8-rail-inspection](/reComputer/scripts/yolov8-rail-inspection/readme.md) | Computer Vision | 6M | 13.8GB | `reComputer run yolov8-rail-inspection` |
77
- | [Ultralytics-yolo](/reComputer/scripts/ultralytics-yolo/README.md) | Computer Vision | | 15.4GB | `reComputer run ultralytics-yolo` |
78
79
  | [TensorFlow MoveNet Thunder](/reComputer/scripts/MoveNet-Thunder/readme.md) |Computer Vision | | 7.7GB | `reComputer run MoveNet-Thunder` |
79
80
  | [Parler-TTS mini: expresso](/reComputer/scripts/parler-tts/readme.md) | Audio | | 6.9GB | `reComputer run parler-tts` |
80
81
 
@@ -4,7 +4,7 @@ reComputer/scripts/check.sh,sha256=cUMwAjHpgJoaD5a8gTLJG7QWjF9CyKPgQ-ewRNK3FD8,1
4
4
  reComputer/scripts/clean.sh,sha256=TlGas1IKqSX7MEkZe4VvCQJgjDNEvfQyuAeXtKraNMA,898
5
5
  reComputer/scripts/run.sh,sha256=aKxkcj16zemZWf5ut6gHtTsgufFm4IK8GPd2b6MBQIQ,1127
6
6
  reComputer/scripts/update.sh,sha256=9Pw9-laO8NU2-4t4UisjFEwHPY5-ZAIoDi3AqWBzBbs,900
7
- reComputer/scripts/utils.sh,sha256=Hg-7vzvfSwy2znGFKPJc0yoCyzvBy88bpgJEvu3_74w,6871
7
+ reComputer/scripts/utils.sh,sha256=DGo3jkZhmGGFYBRlpO2hb1oTQb7MpuszUS06vNpyK5U,7104
8
8
  reComputer/scripts/MoveNet-Lightning/clean.sh,sha256=B-CJEj8KQPd5evJjl9XDpMgQRn6-hcaxB6oEUfVozrs,124
9
9
  reComputer/scripts/MoveNet-Lightning/getVersion.sh,sha256=pFaf2Uej2AhqeXkm-EJ5Zc9vqQpQwKUaK2CxKUVOfMY,1648
10
10
  reComputer/scripts/MoveNet-Lightning/init.sh,sha256=A_lpAHeXzm2POhyTBvzl3Zm7CQ9GsfyzmGwLuqOyBsA,61
@@ -32,6 +32,13 @@ reComputer/scripts/comfyui/clean.sh,sha256=mzTwIlHg8FJBIe0Q0lr1oGcLAwFV40EzD2Ezo
32
32
  reComputer/scripts/comfyui/config.yaml,sha256=ZMEclnn_wJ71Xw0oYVwUURhRz3CG4pvi5QBmmgJEmSU,636
33
33
  reComputer/scripts/comfyui/init.sh,sha256=owGwu9aVdsosjBzrPmZOQtunaLOtCjjxzkr7pb_zUoQ,226
34
34
  reComputer/scripts/comfyui/run.sh,sha256=OveMB_K3DCne4UzIM7_CrigjaE_vlOmhcHOxF8jACN0,911
35
+ reComputer/scripts/deep-live-cam/Dockerfile,sha256=y1OXujJWbLzjJQtT2FKQZgMf8HrThwzsWnDuFTOFt5Q,125
36
+ reComputer/scripts/deep-live-cam/LICENSE,sha256=feSgondqafLDgk7Dp9gnVBE5Y9K5X9mal_DGYKALzXQ,1073
37
+ reComputer/scripts/deep-live-cam/README.md,sha256=eEJH1SqNX0prSB8cj6vpoqyHA7UU9y_5XJLRJczLUjI,5280
38
+ reComputer/scripts/deep-live-cam/clean.sh,sha256=1XTzK3VmdfQ-tZ2LgPdzFqm_9BHsZZoTncgjh-oHWsE,199
39
+ reComputer/scripts/deep-live-cam/config.yaml,sha256=8jVA9m3iCzr5ngqfpgP5xaePXeZR9COMd__pAbQJsC4,637
40
+ reComputer/scripts/deep-live-cam/init.sh,sha256=t_mQOwQXtKy64i-F0_uSfc8VI9ZgzqU9o_1M4rpatzQ,155
41
+ reComputer/scripts/deep-live-cam/run.sh,sha256=6aob37QdaUmLMkPeMnLBIpm1rW9WKcnxWGqb75TO-mk,899
35
42
  reComputer/scripts/depth-anything/Dockerfile,sha256=gJ2Q1g2E06_z4hy9C-m1bA4X2IMqRWuo42izFfQ_E5Y,279
36
43
  reComputer/scripts/depth-anything/LICENSE,sha256=feSgondqafLDgk7Dp9gnVBE5Y9K5X9mal_DGYKALzXQ,1073
37
44
  reComputer/scripts/depth-anything/README.md,sha256=FeSMWdOFv2SeWA4x_abPzy7eK3AbaUhCcsHuNQzS0dk,4918
@@ -57,6 +64,10 @@ reComputer/scripts/llama3/clean.sh,sha256=lSFxp-uGD8vtzXMcZz8Id_CweQZfQbglrce7_s
57
64
  reComputer/scripts/llama3/config.yaml,sha256=pJTev2aiqRhhCqJ87wzlCpJgKRFLl8AZfx3uKiyWWCk,646
58
65
  reComputer/scripts/llama3/init.sh,sha256=T0IuhQo4oPM7BFUgcd95MXvK0Hwrig8djdm5akjijkc,569
59
66
  reComputer/scripts/llama3/run.sh,sha256=IkHvMwh_U8fp7AH2qWFUMWywHZDbZKHlp8ODzfHgqBQ,328
67
+ reComputer/scripts/llama3.2/clean.sh,sha256=pXpPOryyasC_19dGPqQfOsOV8AK-7pLkJGLwxFRadcg,1070
68
+ reComputer/scripts/llama3.2/config.yaml,sha256=FIZe09l4OAeUXbUYkq8-pQUqvabGVYGAohwvoBwOz6M,648
69
+ reComputer/scripts/llama3.2/init.sh,sha256=T0IuhQo4oPM7BFUgcd95MXvK0Hwrig8djdm5akjijkc,569
70
+ reComputer/scripts/llama3.2/run.sh,sha256=M0KoPTcjRbaDHmeOuCoIJcFxseyudNA31F4P5fS_M44,1067
60
71
  reComputer/scripts/llava/clean.sh,sha256=-NfJEQ0Wb00RwfajxyunwZAR90G_E_p0MBZeOeYDW-g,82
61
72
  reComputer/scripts/llava/config.yaml,sha256=pJTev2aiqRhhCqJ87wzlCpJgKRFLl8AZfx3uKiyWWCk,646
62
73
  reComputer/scripts/llava/init.sh,sha256=T0IuhQo4oPM7BFUgcd95MXvK0Hwrig8djdm5akjijkc,569
@@ -90,11 +101,11 @@ reComputer/scripts/text-generation-webui/config.yaml,sha256=fkt2vHLCSVRy0TMaA51-
90
101
  reComputer/scripts/text-generation-webui/init.sh,sha256=ujJXMfv_i904901wFBcEGgFAJlE83j2pAP_7orIurwg,570
91
102
  reComputer/scripts/text-generation-webui/run.sh,sha256=mRlWxZTArGXoyyeHbgZvPbVKTflQSsFCKUERKoHYHHE,407
92
103
  reComputer/scripts/ultralytics-yolo/LICENSE,sha256=feSgondqafLDgk7Dp9gnVBE5Y9K5X9mal_DGYKALzXQ,1073
93
- reComputer/scripts/ultralytics-yolo/README.md,sha256=InfR1206B9Qo9xPp7GdAmb2ZUdb9lrgHn0H_M1hBlGY,6085
94
- reComputer/scripts/ultralytics-yolo/clean.sh,sha256=51z_LiiS4XFPmE23t3Mi4P92UYEidefrLNGhBVENqZ0,186
95
- reComputer/scripts/ultralytics-yolo/config.yaml,sha256=fXWvsvEhz7toklEEgjipLZpv09xdvSNCwPPZWMIdNQQ,635
104
+ reComputer/scripts/ultralytics-yolo/README.md,sha256=_n7zjfuUYJkkfVW0sahE-bVngL4flwFt_K65ccnQ4hg,9064
105
+ reComputer/scripts/ultralytics-yolo/clean.sh,sha256=8eM7xIDKb-Heu_CxrS8AFe0bYo_sf8Kuf6wGsU-yr2k,1129
106
+ reComputer/scripts/ultralytics-yolo/config.yaml,sha256=jMeOm7Yig58gFfb6vgGmETE8hRJ86kCwkeYR0YCuST4,668
96
107
  reComputer/scripts/ultralytics-yolo/init.sh,sha256=Va_ESXeVYXmnz0PMqBXl9ysQL_SudOiUPDgHsrGLo0M,121
97
- reComputer/scripts/ultralytics-yolo/run.sh,sha256=LRs8Kwkevcr0-B0ZePZ0XZdiaCn9RCA7YVscEMgAtTA,845
108
+ reComputer/scripts/ultralytics-yolo/run.sh,sha256=F-Sa0qMpB2LwY1XVy8nnYgNwp8Qrc5imrAlOLa-scuQ,1979
98
109
  reComputer/scripts/whisper/config.yaml,sha256=fkt2vHLCSVRy0TMaA51-xXWCOpdJlgeb-BVspXFd7XI,646
99
110
  reComputer/scripts/whisper/init.sh,sha256=T0IuhQo4oPM7BFUgcd95MXvK0Hwrig8djdm5akjijkc,569
100
111
  reComputer/scripts/whisper/run.sh,sha256=UKiY7Ie5uyGrdvAob1XwPSlpdEL27HR5vcMtnVOrph4,146
@@ -108,9 +119,9 @@ reComputer/scripts/yolov8-rail-inspection/config.yaml,sha256=FhmaI16bv7G1IpXqhgn
108
119
  reComputer/scripts/yolov8-rail-inspection/init.sh,sha256=8IqV1qC0LlklcWEdfUVMBAVm8abFIXUdsawg4uAbf9Q,122
109
120
  reComputer/scripts/yolov8-rail-inspection/readme.md,sha256=awuvn2sLDnr-U4Q5pTTyieJMYNy27NRQyFhiQyiUNFI,2008
110
121
  reComputer/scripts/yolov8-rail-inspection/run.sh,sha256=rTpjiwMgn6iA3IJ6QDFR9wkOMDVCQ-qgTbFo2YKiX-c,809
111
- jetson_examples-0.2.1.dist-info/LICENSE,sha256=ac_LOi8ChcJhymEfBulX98Y06wTI2IMcQnqCXZ5yay4,1066
112
- jetson_examples-0.2.1.dist-info/METADATA,sha256=Rf9RQp0MDe_f_aE-eIJT_rPKjtMhfd8sVvVSdZRXpHw,6729
113
- jetson_examples-0.2.1.dist-info/WHEEL,sha256=ixB2d4u7mugx_bCBycvM9OzZ5yD7NmPXFRtKlORZS2Y,91
114
- jetson_examples-0.2.1.dist-info/entry_points.txt,sha256=5-OdcBifoDjVXE9KjNoN6tQa8l_XSXhdbBEgL2hxeDM,58
115
- jetson_examples-0.2.1.dist-info/top_level.txt,sha256=SI-liiUOkoGwOJfMP7d7k63JKgdcbiEj6DEC8QIKI90,11
116
- jetson_examples-0.2.1.dist-info/RECORD,,
122
+ jetson_examples-0.2.3.dist-info/LICENSE,sha256=ac_LOi8ChcJhymEfBulX98Y06wTI2IMcQnqCXZ5yay4,1066
123
+ jetson_examples-0.2.3.dist-info/METADATA,sha256=cVsRstj0NrvbJjfGm559xF4-nngoCc6oN5g83ULJ1_U,6870
124
+ jetson_examples-0.2.3.dist-info/WHEEL,sha256=In9FTNxeP60KnTkGw7wk6mJPYd_dQSjEZmXdBdMCI-8,91
125
+ jetson_examples-0.2.3.dist-info/entry_points.txt,sha256=5-OdcBifoDjVXE9KjNoN6tQa8l_XSXhdbBEgL2hxeDM,58
126
+ jetson_examples-0.2.3.dist-info/top_level.txt,sha256=SI-liiUOkoGwOJfMP7d7k63JKgdcbiEj6DEC8QIKI90,11
127
+ jetson_examples-0.2.3.dist-info/RECORD,,
@@ -1,5 +1,5 @@
1
1
  Wheel-Version: 1.0
2
- Generator: setuptools (74.1.0)
2
+ Generator: setuptools (75.8.0)
3
3
  Root-Is-Purelib: true
4
4
  Tag: py3-none-any
5
5
 
@@ -0,0 +1,6 @@
1
+
2
+ FROM yaohui1998/deep-live-cam:0.1
3
+
4
+ WORKDIR /usr/src/Deep-Live-Cam
5
+
6
+ CMD ["python3", "run.py", "--execution-provider", "cuda"]
@@ -0,0 +1,21 @@
1
+ MIT License
2
+
3
+ Copyright (c) [2024] [Seeed Studio]
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
@@ -0,0 +1,140 @@
1
+ # Jetson-Example: Run Deep Live Cam on Seeed Studio NVIDIA AGX Orin Developer Kit 🚀
2
+
3
+ This project provides a one-click deployment of the Deep Live Cam AI face-swapping project on the [Seeed Studio Jetson AGX Orin Developer Kit](https://www.seeedstudio.com/NVIDIArJetson-AGX-Orintm-64GB-Developer-Kit-p-5641.html), retaining all the features of the [original project](https://github.com/hacksider/Deep-Live-Cam) and supporting functionalities such as image-to-image, image-to-video, and image-to-webcam.
4
+
5
+ <p align="center">
6
+ <img src="images/WebUI.png" alt="WebUI">
7
+ </p>
8
+
9
+ All models and the inference engine used in this project come from the official [Deep-Live-Cam](https://github.com/hacksider/Deep-Live-Cam) project.
10
+
11
+ ## Get a Jetson Orin Device 🛒
12
+ | Device Model | Link |
13
+ |--------------|------|
14
+ | Jetson AGX Orin Dev Kit 32G | [Buy Here](https://www.seeedstudio.com/NVIDIA-Jetson-AGX-Orin-Developer-Kit-p-5314.html) |
15
+ | Jetson AGX Orin Dev Kit 64G | [Buy Here](https://www.seeedstudio.com/NVIDIArJetson-AGX-Orintm-64GB-Developer-Kit-p-5641.html) |
16
+
17
+ ## New Features 🔥
18
+ ### Resizable Preview Window
19
+
20
+ Dynamically improve performance by using the `--resizable` parameter
21
+ ![resizable-gif](./images/resizable.gif)
22
+
23
+ ### Face Mapping
24
+
25
+ Track faces and swap them on the fly
26
+
27
+ ![face_mapping_source](./images/face_mapping_source.gif)
28
+
29
+ source video
30
+
31
+ ![face-mapping](./images/face_mapping.png)
32
+
33
+ Tick this switch
34
+
35
+ ![face-mapping2](./images/face_mapping2.png)
36
+
37
+ Map the faces
38
+
39
+ ![face_mapping_result](./images/face_mapping_result.gif)
40
+
41
+ And see the magic!
42
+
43
+ > The images in the "New Features" section are sourced from the [GitHub community](https://github.com/hacksider/Deep-Live-Cam).
44
+
45
+ ## 🥳Getting Started
46
+ ### 📜Prerequisites
47
+ - AGX Orin Developer Kit [(🛒Buy Here)](https://www.seeedstudio.com/NVIDIArJetson-AGX-Orintm-64GB-Developer-Kit-p-5641.html)
48
+ - Jetpack 6.0
49
+ - USB Camera (optional)
50
+
51
+
52
+ ### Modify Docker Daemon Configuration (Optional)
53
+ To speed up model loading in Docker, add the following content to the `/etc/docker/daemon.json` file:
54
+
55
+ ```json
56
+ {
57
+ "default-runtime": "nvidia",
58
+ "runtimes": {
59
+ "nvidia": {
60
+ "path": "nvidia-container-runtime",
61
+ "runtimeArgs": []
62
+ }
63
+ },
64
+ "storage-driver": "overlay2",
65
+ "data-root": "/var/lib/docker",
66
+ "log-driver": "json-file",
67
+ "log-opts": {
68
+ "max-size": "100m",
69
+ "max-file": "3"
70
+ },
71
+ "no-new-privileges": true,
72
+ "experimental": false
73
+ }
74
+ ```
75
+
76
+ After modifying the `daemon.json` file, you need to restart the Docker service to apply the configuration:
77
+
78
+ ```sh
79
+ sudo systemctl restart docker
80
+ ```
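After the restart, the relevant keys of the configuration can be sanity-checked programmatically. A minimal Python sketch, using only the fields shown in the snippet above (checking the file on a real device would simply swap the inline string for `open("/etc/docker/daemon.json").read()`):

```python
import json

# The fragment of daemon.json that matters for GPU containers,
# taken from the snippet above.
daemon_cfg = json.loads("""
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
""")

# Confirm the nvidia runtime is registered and set as the default.
assert daemon_cfg["default-runtime"] == "nvidia"
assert daemon_cfg["runtimes"]["nvidia"]["path"] == "nvidia-container-runtime"
print("daemon.json looks good for GPU containers")
```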
81
+
82
+
83
+ ### 🚀Installation
84
+
85
+
86
+ PyPI (recommended)
87
+ ```sh
88
+ pip install jetson-examples
89
+ ```
90
+ Linux (GitHub install script)
91
+ ```sh
92
+ curl -fsSL https://raw.githubusercontent.com/Seeed-Projects/jetson-examples/main/install.sh | sh
93
+ ```
94
+ GitHub (for developers)
95
+ ```sh
96
+ git clone https://github.com/Seeed-Projects/jetson-examples
97
+ cd jetson-examples
98
+ pip install .
99
+ ```
100
+
101
+ ### 📋Usage
102
+ 1. Run code:
103
+ ```sh
104
+ reComputer run deep-live-cam
105
+ ```
106
+
107
+ 2. An `images` folder will be created in the user's home directory, where templates and the face images or videos to be swapped can be placed.
108
+
109
+ 3. Click `Select a face` to choose an image of a face.
110
+
111
+ 4. Click the `Select a target` button to choose a target face image.
112
+
113
+ 5. Click `Preview` to display the transformed result, and click `Start` to save the result to the specified directory without displaying it.
114
+ 
115
+ 6. You can choose the `Face enhancer` to enhance facial details and features.
116
+ 
117
+ 7. Click `Live` to open the webcam for real-time conversion. Please connect a USB camera before starting the program.
120
+
121
+ > ⚠️ **Note**: The first time you convert an image, it may take approximately two minutes.
122
+
123
+ ## 🙏🏻Thanks
124
+ [Deep-Live-Cam](https://github.com/hacksider/Deep-Live-Cam)
125
+
126
+ ## 💨Contributing
127
+
128
+ We welcome contributions from the community. Please fork the repository and create a pull request with your changes.
129
+
130
+
131
+ ## 🙅‍Disclaimer
132
+ This software is meant to be a productive contribution to the rapidly growing AI-generated media industry. It will help artists with tasks such as animating a custom character or using the character as a model for clothing etc.
133
+
134
+ The developers of this software are aware of its possible unethical applications and are committed to taking preventative measures against them. It has a built-in check which prevents the program from working on inappropriate media, including but not limited to nudity, graphic content, and sensitive material such as war footage. We will continue to develop this project in a positive direction while adhering to law and ethics. This project may be shut down or include watermarks on the output if requested by law.
135
+
136
+ Users of this software are expected to use this software responsibly while abiding by local laws. If the face of a real person is being used, users are required to get consent from the concerned person and clearly mention that it is a deepfake when posting content online. Developers of this software will not be responsible for actions of end-users.
137
+
138
+ ## ✅License
139
+
140
+ This project is licensed under the AGPL-3.0 License.
@@ -0,0 +1,9 @@
1
+ #!/bin/bash
2
+
3
+ CONTAINER_NAME="deep-live-cam"
4
+ IMAGE_NAME="yaohui1998/deep-live-cam:1.0"
5
+
6
+ sudo docker stop $CONTAINER_NAME
7
+ sudo docker rm $CONTAINER_NAME
8
+ sudo docker rmi $IMAGE_NAME
9
+ sudo rm -r ~/images
@@ -0,0 +1,30 @@
1
+
2
+ # The tested JetPack versions.
3
+ ALLOWED_L4T_VERSIONS:
4
+ - 36.3.0
5
+ REQUIRED_DISK_SPACE: 40 # in GB
6
+ REQUIRED_MEM_SPACE: 20
7
+ PACKAGES:
8
+ - nvidia-jetpack
9
+ - x11-xserver-utils
10
+ DOCKER:
11
+ ENABLE: true
12
+ DAEMON: |
13
+ {
14
+ "default-runtime": "nvidia",
15
+ "runtimes": {
16
+ "nvidia": {
17
+ "path": "nvidia-container-runtime",
18
+ "runtimeArgs": []
19
+ }
20
+ },
21
+ "storage-driver": "overlay2",
22
+ "data-root": "/var/lib/docker",
23
+ "log-driver": "json-file",
24
+ "log-opts": {
25
+ "max-size": "100m",
26
+ "max-file": "3"
27
+ },
28
+ "no-new-privileges": true,
29
+ "experimental": false
30
+ }
@@ -0,0 +1,6 @@
1
+ #!/bin/bash
2
+
3
+ # check the runtime environment.
4
+ source $(dirname "$(realpath "$0")")/../utils.sh
5
+ check_base_env "$(dirname "$(realpath "$0")")/config.yaml"
6
+
@@ -0,0 +1,29 @@
1
+ #!/bin/bash
+ 
+ CONTAINER_NAME="deep-live-cam"
2
+ IMAGE_NAME="yaohui1998/deep-live-cam:1.0"
3
+
4
+ # Pull the latest image
5
+ docker pull $IMAGE_NAME
6
+ # Set display id
7
+ xhost +local:docker
8
+ export DISPLAY=:0
9
+ # Create the shared images directory
10
+ mkdir -p ~/images
11
+ echo $DISPLAY
12
+ # Check if the container with the specified name already exists
13
+ if [ $(docker ps -a -q -f name=^/${CONTAINER_NAME}$) ]; then
14
+ echo "Container $CONTAINER_NAME already exists. Starting and attaching..."
15
+ docker start $CONTAINER_NAME
16
+ else
17
+ echo "Container $CONTAINER_NAME does not exist. Creating and starting..."
18
+ docker run -it --rm \
19
+ --name $CONTAINER_NAME \
20
+ --privileged \
21
+ --network host \
22
+ -v ~/images:/usr/src/Deep-Live-Cam/images \
23
+ -e DISPLAY=$DISPLAY \
24
+ -v /tmp/.X11-unix:/tmp/.X11-unix \
25
+ -v /dev/*:/dev/* \
26
+ -v /etc/localtime:/etc/localtime:ro \
27
+ --runtime nvidia \
28
+ $IMAGE_NAME
29
+ fi
@@ -0,0 +1,34 @@
1
+ #!/bin/bash
2
+
3
+ get_l4t_version() {
4
+ local l4t_version=""
5
+ local release_line=$(head -n 1 /etc/nv_tegra_release)
6
+ if [[ $release_line =~ R([0-9]+)\ *\(release\),\ REVISION:\ ([0-9]+\.[0-9]+) ]]; then
7
+ local major="${BASH_REMATCH[1]}"
8
+ local revision="${BASH_REMATCH[2]}"
9
+ l4t_version="${major}.${revision}"
10
+ fi
11
+ echo "$l4t_version"
12
+ }
13
+
14
+ L4T_VERSION=$(get_l4t_version)
15
+ echo "Detected L4T version: $L4T_VERSION"
16
+
17
+ # Determine the Docker image based on L4T version
18
+ if [[ "$L4T_VERSION" == "35.3.1" || "$L4T_VERSION" == "35.4.1" || "$L4T_VERSION" == "35.5.0" ]]; then
19
+ IMAGE_NAME="youjiang9977/ollama:r35.3.1"
20
+ elif [[ "$L4T_VERSION" == "36.3.0" || "$L4T_VERSION" == "36.4.0" ]]; then
21
+ IMAGE_NAME="youjiang9977/ollama:r36.3.0"
22
+ else
23
+ echo "Error: L4T version $L4T_VERSION is not supported."
24
+ exit 1
25
+ fi
26
+
27
+ if [ "$(docker images -q "$IMAGE_NAME")" ]; then
28
+ echo "Deleting $IMAGE_NAME..."
29
+ docker rmi "$IMAGE_NAME"
30
+ echo "Image $IMAGE_NAME has been successfully deleted."
31
+ else
32
+ echo "No image named $IMAGE_NAME was found."
33
+ fi
34
+
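The `get_l4t_version` helper above parses the first line of `/etc/nv_tegra_release` with a bash regex. The same logic in a short Python sketch, for illustration; the sample header line is a typical L4T 36.3 (JetPack 6.0) value:

```python
import re

def get_l4t_version(release_line: str) -> str:
    """Extract 'major.revision' from an /etc/nv_tegra_release header line."""
    m = re.search(r"R(\d+)\s*\(release\),\s*REVISION:\s*(\d+\.\d+)", release_line)
    return f"{m.group(1)}.{m.group(2)}" if m else ""

# Example header line as found on an L4T 36.3 device (GCID/BOARD values vary).
line = "# R36 (release), REVISION: 3.0, GCID: 36191598, BOARD: generic"
print(get_l4t_version(line))  # → 36.3.0
```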
@@ -0,0 +1,32 @@
1
+ # The tested JetPack versions.
2
+ ALLOWED_L4T_VERSIONS:
3
+ - 35.3.1
4
+ - 35.4.1
5
+ - 35.5.0
6
+ - 36.3.0
7
+ - 36.4.0
8
+ REQUIRED_DISK_SPACE: 15
9
+ REQUIRED_MEM_SPACE: 7
10
+ PACKAGES:
11
+ - nvidia-jetpack
12
+ DOCKER:
13
+ ENABLE: true
14
+ DAEMON: |
15
+ {
16
+ "default-runtime": "nvidia",
17
+ "runtimes": {
18
+ "nvidia": {
19
+ "path": "nvidia-container-runtime",
20
+ "runtimeArgs": []
21
+ }
22
+ },
23
+ "storage-driver": "overlay2",
24
+ "data-root": "/var/lib/docker",
25
+ "log-driver": "json-file",
26
+ "log-opts": {
27
+ "max-size": "100m",
28
+ "max-file": "3"
29
+ },
30
+ "no-new-privileges": true,
31
+ "experimental": false
32
+ }
@@ -0,0 +1,19 @@
1
+ #!/bin/bash
2
+
3
+ # check the runtime environment.
4
+ source $(dirname "$(realpath "$0")")/../utils.sh
5
+ check_base_env "$(dirname "$(realpath "$0")")/config.yaml"
6
+
7
+ BASE_PATH=/home/$USER/reComputer
8
+ mkdir -p $BASE_PATH/
9
+ JETSON_REPO_PATH="$BASE_PATH/jetson-containers"
10
+ BASE_JETSON_LAB_GIT="https://github.com/dusty-nv/jetson-containers"
11
+ if [ -d $JETSON_REPO_PATH ]; then
12
+ echo "jetson-ai-lab already exists."
13
+ else
14
+ echo "jetson-ai-lab is not installed. Starting init..."
15
+ cd $BASE_PATH/
16
+ git clone --depth=1 $BASE_JETSON_LAB_GIT
17
+ cd $JETSON_REPO_PATH
18
+ bash install.sh
19
+ fi
@@ -0,0 +1,35 @@
1
+ #!/bin/bash
2
+
3
+ BASE_PATH=/home/$USER/reComputer
4
+ JETSON_REPO_PATH="$BASE_PATH/jetson-containers"
5
+ cd $JETSON_REPO_PATH
6
+
7
+ get_l4t_version() {
8
+ local l4t_version=""
9
+ local release_line=$(head -n 1 /etc/nv_tegra_release)
10
+ if [[ $release_line =~ R([0-9]+)\ *\(release\),\ REVISION:\ ([0-9]+\.[0-9]+) ]]; then
11
+ local major="${BASH_REMATCH[1]}"
12
+ local revision="${BASH_REMATCH[2]}"
13
+ l4t_version="${major}.${revision}"
14
+ fi
15
+ echo "$l4t_version"
16
+ }
17
+
18
+ L4T_VERSION=$(get_l4t_version)
19
+ echo "Detected L4T version: $L4T_VERSION"
20
+
21
+ # Determine the Docker image based on L4T version
22
+ if [[ "$L4T_VERSION" == "35.3.1" || "$L4T_VERSION" == "35.4.1" || "$L4T_VERSION" == "35.5.0" ]]; then
23
+ IMAGE_NAME="youjiang9977/ollama:r35.3.1"
24
+ elif [[ "$L4T_VERSION" == "36.3.0" || "$L4T_VERSION" == "36.4.0" ]]; then
25
+ IMAGE_NAME="youjiang9977/ollama:r36.3.0"
26
+ else
27
+ echo "Error: L4T version $L4T_VERSION is not supported."
28
+ exit 1
29
+ fi
30
+
31
+ docker rm -f ollama
32
+ ./run.sh -d --name ollama $IMAGE_NAME
33
+ ./run.sh $IMAGE_NAME /bin/ollama run llama3.2
34
+ docker rm -f ollama
35
+
@@ -1,26 +1,43 @@
1
- # Jetson-Example: Run Ultralytics YOLO Platform Service on NVIDIA Jetson Orin 🚀
1
+ # Jetson-Example: Run Ultralytics YOLO Platform Service on NVIDIA Jetson Orin 🚀 (**YOLOv11 supported**)
2
2
 
3
- ## "One-Click Quick Deployment of Plug-and-Play Ultralytics YOLOv8 for All Task Models with Web UI and HTTP API Interface"
3
+ ## One-Click Quick Deployment of Plug-and-Play Ultralytics YOLO for All Task Models, with Web UI and HTTP API Interface
4
4
  <p align="center">
5
5
  <img src="images/Ultralytics-yolo.gif" alt="Ultralytics YOLO">
6
6
  </p>
7
7
 
8
8
  ## Introduction 📘
9
- In this project, you can quickly deploy all YOLOv8 task models on Nvidia Jetson Orin devices with one click. This setup enables object detection, segmentation, human pose estimation, and classification. It supports uploading local videos, images, and using a webcam, and also allows one-click TensorRT model conversion. By accessing [http://127.0.0.1:5001](http://127.0.0.1:5001) on your local machine or within the same LAN, you can quickly start using Ultralytics YOLO. Additionally, an HTTP API method has been added at [http://127.0.0.1:5001/results](http://127.0.0.1:5001/results) to display detection data results for any task, and an additional Python script is provided to read YOLOv8 detection data within Docker.
9
+ In this project, you can quickly deploy all Ultralytics YOLO task models on NVIDIA Jetson Orin devices with one click. This setup enables object detection, segmentation, human pose estimation, and classification. It supports uploading local videos and images, as well as webcam input, and also allows one-click TensorRT model conversion. By accessing [http://127.0.0.1:5000](http://127.0.0.1:5000) on your local machine or within the same LAN, you can quickly start using Ultralytics YOLO. Additionally, an HTTP API has been added at [http://127.0.0.1:5000/results](http://127.0.0.1:5000/results) to expose detection data results for any task, and an additional Python script is provided to read YOLO detection data within Docker.
10
10
 
11
11
  ## **Key Features**:
12
12
 
13
- 1. **One-Click Deployment and Plug-and-Play**: Quickly deploy all YOLOv8 task models on Nvidia Jetson Orin devices.
13
+ 1. **One-Click Deployment and Plug-and-Play**: Quickly deploy all YOLO task models on NVIDIA Jetson Orin devices.
14
14
  2. **Comprehensive Task Support**: Enables object detection, segmentation, human pose estimation, and classification.
15
15
  3. **Versatile Input Options**: Supports uploading local videos, images, and using a webcam.
16
16
  4. **TensorRT Model Conversion**: Allows one-click conversion of models to TensorRT.
17
- 5. **Web UI Access**: Easy access via [`http://127.0.0.1:5001`](http://127.0.0.1:5001) on the local machine or within the same LAN.
18
- 6. **HTTP API Interface**: Added HTTP API at [`http://127.0.0.1:5001/results`](http://127.0.0.1:5001/results) to display detection data results.
19
- 7. **Python Script Support**: Provides an additional Python script to read YOLOv8 detection data within Docker.
17
+ 5. **Web UI Access**: Easy access via [`http://127.0.0.1:5000`](http://127.0.0.1:5000) on the local machine or within the same LAN.
18
+ 6. **HTTP API Interface**: Added HTTP API at [`http://127.0.0.1:5000/results`](http://127.0.0.1:5000/results) to display detection data results.
19
+ 7. **Python Script Support**: Provides an additional Python script to read YOLO detection data within Docker.
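The `/results` endpoint described above returns JSON containing `boxes`, `masks`, `keypoints`, and a `names` mapping. A minimal client sketch in Python; the exact payload schema (field nesting, key types) is an assumption, so the offline example below only illustrates the idea:

```python
import json
from urllib.request import urlopen

def fetch_results(url: str = "http://127.0.0.1:5000/results") -> dict:
    """Fetch the latest detection results from the HTTP API."""
    with urlopen(url) as resp:
        return json.load(resp)

def detected_classes(results: dict) -> list:
    """Map numeric class ids in `boxes` to labels via the `names` table.
    The box/name layout here is a hypothetical shape, not the documented schema."""
    names = results.get("names", {})
    return [names.get(str(b.get("class")), "unknown") for b in results.get("boxes", [])]

# Offline example with a payload shaped like the fields described above.
sample = {"names": {"0": "person", "16": "dog"},
          "boxes": [{"class": 0}, {"class": 16}]}
print(detected_classes(sample))  # → ['person', 'dog']
```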
20
20
 
21
21
  [![My Project](images/tasks.png)](https://github.com/ultralytics/ultralytics?tab=readme-ov-file#models)
22
22
  All models implemented in this project are from the official [Ultralytics Yolo](https://github.com/ultralytics/ultralytics?tab=readme-ov-file#models).
23
23
 
24
+ ## Supported Task Models
25
+
26
+ | Model Type | Pre-trained Weights / Filenames | Task | Inference | Validation | Training | Export |
27
+ |-------------|--------------------------------------------------------------------------------------------------------------------------------------|----------------------|-----------|------------|----------|--------|
28
+ | YOLOv5u | yolov5nu, yolov5su, yolov5mu, yolov5lu, yolov5xu, yolov5n6u, yolov5s6u, yolov5m6u, yolov5l6u, yolov5x6u | Object Detection | ✅ | ✅ | ✅ | ✅ |
29
+ | YOLOv8 | yolov8n.pt, yolov8s.pt, yolov8m.pt, yolov8l.pt, yolov8x.pt | Detection | ✅ | ✅ | ✅ | ✅ |
30
+ | YOLOv8-seg | yolov8n-seg.pt, yolov8s-seg.pt, yolov8m-seg.pt, yolov8l-seg.pt, yolov8x-seg.pt | Instance Segmentation | ✅ | ✅ | ✅ | ✅ |
31
+ | YOLOv8-pose | yolov8n-pose.pt, yolov8s-pose.pt, yolov8m-pose.pt, yolov8l-pose.pt, yolov8x-pose-p6.pt | Pose/Keypoints | ✅ | ✅ | ✅ | ✅ |
32
+ | YOLOv8-obb | yolov8n-obb.pt, yolov8s-obb.pt, yolov8m-obb.pt, yolov8l-obb.pt, yolov8x-obb.pt | Oriented Detection | ✅ | ✅ | ✅ | ✅ |
33
+ | YOLOv8-cls | yolov8n-cls.pt, yolov8s-cls.pt, yolov8m-cls.pt, yolov8l-cls.pt, yolov8x-cls.pt | Classification | ✅ | ✅ | ✅ | ✅ |
34
+ | YOLOv11 | yolo11n.pt, yolo11s.pt, yolo11m.pt, yolo11l.pt, yolo11x.pt | Detection | ✅ | ✅ | ✅ | ✅ |
35
+ | YOLOv11-seg | yolo11n-seg.pt, yolo11s-seg.pt, yolo11m-seg.pt, yolo11l-seg.pt, yolo11x-seg.pt | Instance Segmentation | ✅ | ✅ | ✅ | ✅ |
36
+ | YOLOv11-pose| yolo11n-pose.pt, yolo11s-pose.pt, yolo11m-pose.pt, yolo11l-pose.pt, yolo11x-pose.pt | Pose/Keypoints | ✅ | ✅ | ✅ | ✅ |
37
+ | YOLOv11-obb | yolo11n-obb.pt, yolo11s-obb.pt, yolo11m-obb.pt, yolo11l-obb.pt, yolo11x-obb.pt | Oriented Detection | ✅ | ✅ | ✅ | ✅ |
38
+ | YOLOv11-cls | yolo11n-cls.pt, yolo11s-cls.pt, yolo11m-cls.pt, yolo11l-cls.pt, yolo11x-cls.pt | Classification | ✅ | ✅ | ✅ | ✅ |
39
+
40
+
24
41
  ### Get a Jetson Orin Device 🛒
25
42
  | Device Model | Description | Link |
26
43
  |--------------|-------------|------|
@@ -79,12 +96,12 @@ sudo systemctl restart docker
79
96
  <img src="images/ultralytics_fig1.png" alt="Ultralytics YOLO">
80
97
  </p>
81
98
 
82
- - **Choose Model**: Select YOLOv8 n, s, l, m, x models and various tasks such as object detection, classification, segmentation, human pose estimation, OBB, etc.
83
- - **Upload Custom Model**: Users can upload their own trained YOLOv8 models.
99
+ - **Choose Model**: Select the YOLO version and model for various tasks such as object detection, classification, segmentation, human pose estimation, OBB, etc.
100
+ - **Upload Custom Model**: Users can upload their own trained YOLO models.
84
101
  - **Choose Input Type**: Users can select to input locally uploaded images, videos, or real-time camera devices.
85
102
  - **Enable TensorRT**: Choose whether to convert and use the TensorRT model. The initial conversion may require varying amounts of time.
86
103
 
87
- 5. If you want to see the detection result data, you can enter [`http://127.0.0.1:5001/results`](http://127.0.0.1:5001/results) in your browser to view the `JSON` formatted data results. These results include `boxes` for object detection, `masks` for segmentation, `keypoints` for human pose estimation, and the `names` corresponding to all numerical categories.
104
+ 5. If you want to see the detection result data, you can enter [`http://127.0.0.1:5000/results`](http://127.0.0.1:5000/results) in your browser to view the `JSON` formatted data results. These results include `boxes` for object detection, `masks` for segmentation, `keypoints` for human pose estimation, and the `names` corresponding to all numerical categories.
88
105
  <p align="center">
89
106
  <img src="images/ultralytics_fig2.png" alt="Ultralytics YOLO">
90
107
  </p>
@@ -109,13 +126,13 @@ sudo systemctl restart docker
109
126
 
110
127
  ## Notes 📝
111
128
  - To stop detection at any time, press the Stop button.
112
- - When accessing the WebUI from other devices within the same LAN, use the URL: `http://{Jetson_IP}:5001`.
113
- - You can view the JSON formatted detection results by accessing http://{Jetson_IP}:5001/results.
129
+ - When accessing the WebUI from other devices within the same LAN, use the URL: `http://{Jetson_IP}:5000`.
130
+ - You can view the JSON formatted detection results by accessing http://{Jetson_IP}:5000/results.
114
131
  - The first model conversion may require different amounts of time depending on the hardware and network environment, so please be patient.
115
132
 
116
133
 
117
134
  ## Further Development 🔧
118
- - [Training a YOLOv8 Model](https://wiki.seeedstudio.com/How_to_Train_and_Deploy_YOLOv8_on_reComputer/)
135
+ - [Training a YOLO Model](https://wiki.seeedstudio.com/How_to_Train_and_Deploy_YOLOv8_on_reComputer/)
119
136
  - [TensorRT Acceleration](https://wiki.seeedstudio.com/YOLOv8-DeepStream-TRT-Jetson/)
120
137
  - [Multistreams using Deepstream](https://wiki.seeedstudio.com/YOLOv8-DeepStream-TRT-Jetson/#multistream-model-benchmarks) Tutorials.
121
138
 
@@ -1,7 +1,34 @@
1
- #!/bin/bash
2
1
  CONTAINER_NAME="ultralytics-yolo"
3
- IMAGE_NAME="yaohui1998/ultralytics-yolo:latest"
4
2
 
3
+ # Function to get L4T version
4
+ get_l4t_version() {
5
+ local l4t_version=""
6
+ local release_line=$(head -n 1 /etc/nv_tegra_release)
7
+ if [[ $release_line =~ R([0-9]+)\ *\(release\),\ REVISION:\ ([0-9]+\.[0-9]+) ]]; then
8
+ local major="${BASH_REMATCH[1]}"
9
+ local revision="${BASH_REMATCH[2]}"
10
+ l4t_version="${major}.${revision}"
11
+ fi
12
+ echo "$l4t_version"
13
+ }
14
+
15
+ L4T_VERSION=$(get_l4t_version)
16
+ echo "Detected L4T version: $L4T_VERSION"
17
+
18
+ # Determine the Docker image based on L4T version
19
+ if [[ "$L4T_VERSION" == "32.6.1" ]]; then
20
+ IMAGE_NAME="yaohui1998/ultralytics-jetpack4:1.0"
21
+ elif [[ "$L4T_VERSION" == "35.3.1" || "$L4T_VERSION" == "35.4.1" || "$L4T_VERSION" == "35.5.0" ]]; then
22
+ IMAGE_NAME="yaohui1998/ultralytics-jetpack5:1.0"
23
+ elif [[ "$L4T_VERSION" == "36.3.0" ]]; then
24
+ IMAGE_NAME="yaohui1998/ultralytics-jetpack6:1.0"
25
+ else
26
+ echo "Error: L4T version $L4T_VERSION is not supported."
27
+ exit 1
28
+ fi
29
+
30
+ echo "Using Docker image: $IMAGE_NAME"
31
+ sudo rm -r ~/yolo_models
5
32
  sudo docker stop $CONTAINER_NAME
6
33
  sudo docker rm $CONTAINER_NAME
7
34
  sudo docker rmi $IMAGE_NAME
@@ -1,9 +1,12 @@
1
1
  # The tested JetPack versions.
2
2
  ALLOWED_L4T_VERSIONS:
3
+ - 32.6.1
3
4
  - 35.3.1
4
5
  - 35.4.1
5
6
  - 35.5.0
6
- REQUIRED_DISK_SPACE: 20 # in GB
7
+ - 36.3.0
8
+ - 36.4.0
9
+ REQUIRED_DISK_SPACE: 16 # in GB
7
10
  REQUIRED_MEM_SPACE: 4
8
11
  PACKAGES:
9
12
  - nvidia-jetpack
@@ -1,15 +1,47 @@
1
1
  #!/bin/bash
2
2
 
3
3
  CONTAINER_NAME="ultralytics-yolo"
4
- IMAGE_NAME="yaohui1998/ultralytics-yolo:latest"
5
4
 
6
- # Pull the latest image
5
+ # Function to get L4T version
6
+ get_l4t_version() {
7
+ local l4t_version=""
8
+ local release_line=$(head -n 1 /etc/nv_tegra_release)
9
+ if [[ $release_line =~ R([0-9]+)\ *\(release\),\ REVISION:\ ([0-9]+\.[0-9]+) ]]; then
10
+ local major="${BASH_REMATCH[1]}"
11
+ local revision="${BASH_REMATCH[2]}"
12
+ l4t_version="${major}.${revision}"
13
+ fi
14
+ echo "$l4t_version"
15
+ }
16
+
17
+ L4T_VERSION=$(get_l4t_version)
18
+ echo "Detected L4T version: $L4T_VERSION"
19
+
20
+ # Determine the Docker image based on L4T version
21
+ if [[ "$L4T_VERSION" == "32.6.1" ]]; then
22
+ IMAGE_NAME="yaohui1998/ultralytics-jetpack4:1.0"
23
+ elif [[ "$L4T_VERSION" == "35.3.1" || "$L4T_VERSION" == "35.4.1" || "$L4T_VERSION" == "35.5.0" ]]; then
24
+ IMAGE_NAME="yaohui1998/ultralytics-jetpack5:1.0"
25
+ elif [[ "$L4T_VERSION" == "36.3.0" ]]; then
26
+ IMAGE_NAME="yaohui1998/ultralytics-jetpack6:1.0"
27
+ elif [[ "$L4T_VERSION" == "36.4.0" ]]; then
28
+ IMAGE_NAME="yaohui1998/ultralytics-jetpack61:v1.0"
29
+ else
30
+ echo "Error: L4T version $L4T_VERSION is not supported."
31
+ exit 1
32
+ fi
33
+
34
+ echo "Using Docker image: $IMAGE_NAME"
35
+
36
+ # Pull the Docker image
7
37
  docker pull $IMAGE_NAME
38
+ # Create a directory for saving models
39
+ mkdir -p ~/yolo_models
8
40
 
9
41
  # Check if the container with the specified name already exists
10
42
  if [ $(docker ps -a -q -f name=^/${CONTAINER_NAME}$) ]; then
11
43
  echo "Container $CONTAINER_NAME already exists. Starting and attaching..."
12
- echo "Please open http://127.0.0.1:5001 to access the WebUI."
44
+ echo "Please open http://127.0.0.1:5000 to access the WebUI."
13
45
  docker start $CONTAINER_NAME
14
46
  docker exec -it $CONTAINER_NAME /bin/bash
15
47
  else
@@ -18,6 +50,7 @@ else
18
50
  --name $CONTAINER_NAME \
19
51
  --privileged \
20
52
  --network host \
53
+ -v ~/yolo_models/:/usr/src/ultralytics/models/ \
21
54
  -v /tmp/.X11-unix:/tmp/.X11-unix \
22
55
  -v /dev/*:/dev/* \
23
56
  -v /etc/localtime:/etc/localtime:ro \
@@ -137,12 +137,14 @@ check_base_env()
137
137
  fi
138
138
  # 6.2 Modify the Docker configuration file
139
139
  DAEMON_JSON_PATH="/etc/docker/daemon.json"
140
+ NECESSARY_CONTENT=
140
141
  if [ ! -f "$DAEMON_JSON_PATH" ]; then
141
142
  echo "${BLUE}Creating $DAEMON_JSON_PATH with the desired content...${RESET}"
142
143
  echo "$DESIRED_DAEMON_JSON" | sudo tee $DAEMON_JSON_PATH > /dev/null
143
144
  sudo systemctl restart docker
144
145
  echo "${GREEN}$DAEMON_JSON_PATH has been created.${RESET}"
145
- elif [ "$(cat $DAEMON_JSON_PATH)" != "$DESIRED_DAEMON_JSON" ]; then
146
+ elif [ "$(jq -e '.["default-runtime"] == "nvidia" and .runtimes.nvidia.path == "nvidia-container-runtime" and (.runtimes.nvidia.runtimeArgs | length == 0)' "$DAEMON_JSON_PATH")" != "true" ]; then
147
+ # elif [ "$(cat $DAEMON_JSON_PATH)" != "$DESIRED_DAEMON_JSON" ]; then
146
148
  echo "${BLUE}Backing up the existing $DAEMON_JSON_PATH to /etc/docker/daemon_backup.json ...${RESET}"
147
149
  sudo cp "$DAEMON_JSON_PATH" "/etc/docker/daemon_backup.json"
148
150
  echo "${GREEN}Backup completed.${RESET}"
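The new `jq` test in `check_base_env` above only checks the three nvidia-runtime keys instead of comparing the whole `daemon.json` string, so user additions to the file no longer trigger a rewrite. An equivalent of that predicate in Python, for illustration:

```python
import json

def nvidia_runtime_ok(daemon_json_text: str) -> bool:
    """Python equivalent of the jq predicate used in check_base_env:
    default-runtime is nvidia, the nvidia runtime points at
    nvidia-container-runtime, and runtimeArgs is empty."""
    try:
        cfg = json.loads(daemon_json_text)
    except ValueError:
        return False  # not valid JSON
    nvidia = cfg.get("runtimes", {}).get("nvidia", {})
    return (cfg.get("default-runtime") == "nvidia"
            and nvidia.get("path") == "nvidia-container-runtime"
            and nvidia.get("runtimeArgs") == [])

good = ('{"default-runtime": "nvidia", "runtimes": {"nvidia": '
        '{"path": "nvidia-container-runtime", "runtimeArgs": []}}}')
print(nvidia_runtime_ok(good))  # → True
print(nvidia_runtime_ok("{}"))  # → False
```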