jetson-examples 0.1.5__py3-none-any.whl → 0.1.7__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (69)
  1. {jetson_examples-0.1.5.dist-info → jetson_examples-0.1.7.dist-info}/LICENSE +0 -0
  2. {jetson_examples-0.1.5.dist-info → jetson_examples-0.1.7.dist-info}/METADATA +35 -15
  3. jetson_examples-0.1.7.dist-info/RECORD +103 -0
  4. {jetson_examples-0.1.5.dist-info → jetson_examples-0.1.7.dist-info}/WHEEL +1 -1
  5. reComputer/main.py +1 -1
  6. reComputer/scripts/MoveNet-Lightning/clean.sh +8 -0
  7. reComputer/scripts/MoveNet-Lightning/getVersion.sh +59 -0
  8. reComputer/scripts/MoveNet-Lightning/init.sh +6 -0
  9. reComputer/scripts/MoveNet-Lightning/readme.md +30 -0
  10. reComputer/scripts/MoveNet-Lightning/run.sh +19 -0
  11. reComputer/scripts/MoveNet-Thunder/clean.sh +7 -0
  12. reComputer/scripts/MoveNet-Thunder/getVersion.sh +59 -0
  13. reComputer/scripts/MoveNet-Thunder/init.sh +6 -0
  14. reComputer/scripts/MoveNet-Thunder/readme.md +31 -0
  15. reComputer/scripts/MoveNet-Thunder/run.sh +18 -0
  16. reComputer/scripts/MoveNetJS/clean.sh +4 -0
  17. reComputer/scripts/MoveNetJS/readme.md +56 -0
  18. reComputer/scripts/MoveNetJS/run.sh +13 -0
  19. reComputer/scripts/comfyui/LICENSE +21 -0
  20. reComputer/scripts/comfyui/README.md +127 -0
  21. reComputer/scripts/comfyui/clean.sh +7 -0
  22. reComputer/scripts/comfyui/config.yaml +29 -0
  23. reComputer/scripts/comfyui/init.sh +163 -0
  24. reComputer/scripts/comfyui/run.sh +30 -0
  25. reComputer/scripts/depth-anything/README.md +33 -0
  26. reComputer/scripts/depth-anything/clean.sh +6 -1
  27. reComputer/scripts/depth-anything/config.yaml +31 -0
  28. reComputer/scripts/depth-anything/init.sh +164 -0
  29. reComputer/scripts/depth-anything/run.sh +20 -10
  30. reComputer/scripts/depth-anything-v2/Dockerfile +6 -0
  31. reComputer/scripts/depth-anything-v2/LICENSE +21 -0
  32. reComputer/scripts/depth-anything-v2/README.md +135 -0
  33. reComputer/scripts/depth-anything-v2/clean.sh +8 -0
  34. reComputer/scripts/depth-anything-v2/config.yaml +31 -0
  35. reComputer/scripts/depth-anything-v2/init.sh +164 -0
  36. reComputer/scripts/depth-anything-v2/run.sh +22 -0
  37. reComputer/scripts/llama-factory/README.md +5 -4
  38. reComputer/scripts/{ultralytics-yolo/images/Ultralytics-yolo.gif → llama-factory/assets/training.gif} +0 -0
  39. reComputer/scripts/llama-factory/init.sh +0 -0
  40. reComputer/scripts/llama3/clean.sh +22 -0
  41. reComputer/scripts/ollama/clean.sh +22 -0
  42. reComputer/scripts/parler-tts/clean.sh +7 -0
  43. reComputer/scripts/parler-tts/getVersion.sh +59 -0
  44. reComputer/scripts/parler-tts/init.sh +8 -0
  45. reComputer/scripts/parler-tts/readme.md +63 -0
  46. reComputer/scripts/parler-tts/run.sh +17 -0
  47. reComputer/scripts/run.sh +5 -0
  48. reComputer/scripts/ultralytics-yolo/LICENSE +0 -0
  49. reComputer/scripts/ultralytics-yolo/README.md +78 -4
  50. reComputer/scripts/ultralytics-yolo/clean.sh +5 -1
  51. reComputer/scripts/ultralytics-yolo/config.yaml +32 -0
  52. reComputer/scripts/ultralytics-yolo/init.sh +176 -0
  53. reComputer/scripts/ultralytics-yolo/run.sh +22 -9
  54. jetson_examples-0.1.5.dist-info/RECORD +0 -79
  55. reComputer/scripts/depth-anything/images/Autonomous Driving.png +0 -0
  56. reComputer/scripts/depth-anything/images/Indoor Scenes.png +0 -0
  57. reComputer/scripts/depth-anything/images/Opr.png +0 -0
  58. reComputer/scripts/depth-anything/images/Security.png +0 -0
  59. reComputer/scripts/depth-anything/images/Underwater Scenes.png +0 -0
  60. reComputer/scripts/depth-anything/images/WebUI.png +0 -0
  61. reComputer/scripts/depth-anything/images/teaser.png +0 -0
  62. reComputer/scripts/ultralytics-yolo/images/tasks.png +0 -0
  63. reComputer/scripts/yolov8:detect/Dockerfile +0 -9
  64. reComputer/scripts/yolov8:detect/README.txt +0 -32
  65. reComputer/scripts/yolov8:detect/app.py +0 -47
  66. reComputer/scripts/yolov8:detect/run.sh +0 -3
  67. reComputer/scripts/yolov8:detect/templates/index.html +0 -27
  68. {jetson_examples-0.1.5.dist-info → jetson_examples-0.1.7.dist-info}/entry_points.txt +0 -0
  69. {jetson_examples-0.1.5.dist-info → jetson_examples-0.1.7.dist-info}/top_level.txt +0 -0
@@ -0,0 +1,135 @@
+ # Jetson-Example: Run Depth Anything V2 on NVIDIA Jetson Orin 🚀
+ This project provides a one-click deployment of Depth Anything V2, a monocular depth estimation model developed by the University of Hong Kong and ByteDance. The deployment is demonstrated on the [reComputer J4012](https://www.seeedstudio.com/reComputer-J4012-p-5586.html) (Jetson Orin NX 16GB, 100 TOPS) and includes a WebUI for converting the model to TensorRT and for real-time depth estimation.
+ <p align="center">
+ <img src="images/WebUI.png" alt="WebUI">
+ </p>
+
+ All models and the inference engine used in this project come from the official [Depth Anything V2](https://github.com/DepthAnything/Depth-Anything-V2) repository.
+
+ ## 🔥Features
+
+ - One-click deployment of Depth Anything V2 models.
+ - WebUI for model conversion and depth estimation.
+ - Support for uploading videos/images or using a local camera.
+ - Supports the Depth Anything V2 S, B, and L models with an input size of 518.
+
+ ### 🗝️WebUI Features
+ - **Choose model**: Select a Depth Anything V2 model (S, B, or L).
+ - **Grayscale option**: Render the depth map in grayscale.
+ - **Choose source**: Select the input source (Video, Image, Camera).
+ - **Export Model**: Automatically download the model and convert it from ONNX to TensorRT format.
+ - **Start Estimation**: Begin depth estimation using the selected model and input source.
+ - **Stop Estimation**: Stop the ongoing depth estimation process.
+ <p align="center">
+ <img src="images/Opr.png" alt="Depthanything" width="320" height="360">
+ </p>
+
+ ## 🥳Getting Started
+ ### 📜Prerequisites
+ - reComputer J4012 [(🛒Buy Here)](https://www.seeedstudio.com/reComputer-J4012-p-5586.html)
+ - Docker installed on the reComputer
+ - USB camera (optional)
+
+
+ ### Modify Docker Daemon Configuration (Optional)
+ To speed up model loading in Docker, add the following content to the `/etc/docker/daemon.json` file:
+
+ ```json
+ {
+     "default-runtime": "nvidia",
+     "runtimes": {
+         "nvidia": {
+             "path": "nvidia-container-runtime",
+             "runtimeArgs": []
+         }
+     },
+     "storage-driver": "overlay2",
+     "data-root": "/var/lib/docker",
+     "log-driver": "json-file",
+     "log-opts": {
+         "max-size": "100m",
+         "max-file": "3"
+     },
+     "no-new-privileges": true,
+     "experimental": false
+ }
+ ```
+
+ After modifying the `daemon.json` file, restart the Docker service to apply the configuration:
+
+ ```sh
+ sudo systemctl restart docker
+ ```
+
+
+ ### 🚀Installation
+
+
+ PyPI (recommended)
+ ```sh
+ pip install jetson-examples
+ ```
+ Linux (install script from GitHub)
+ ```sh
+ curl -fsSL https://raw.githubusercontent.com/Seeed-Projects/jetson-examples/main/install.sh | sh
+ ```
+ GitHub (for developers)
+ ```sh
+ git clone https://github.com/Seeed-Projects/jetson-examples
+ cd jetson-examples
+ pip install .
+ ```
+
+ ### 📋Usage
+ 1. Run the example:
+ ```sh
+ reComputer run depth-anything-v2
+ ```
+
+ 2. Open a web browser and go to **http://{reComputer ip}:5000**. Use the WebUI to select the model and source.
+
+ 3. Click **Export Model** to download and convert the model.
+
+ 4. Click **Start Estimation** to begin the depth estimation process.
+
+ 5. View the real-time depth estimation results in the WebUI.
+
+ ## ⛏️Applications
+
+ - **Security**: Enhance surveillance systems with depth perception.
+ <p align="center">
+ <img src="images/Security.png" alt="Security" width="500" height="150">
+ </p>
+ - **Autonomous Driving**: Improve environmental sensing for autonomous vehicles.
+ <p align="center">
+ <img src="images/Autonomous Driving.png" alt="Autonomous Driving" width="500" height="150">
+ </p>
+ - **Underwater Scenes**: Apply depth estimation in underwater exploration.
+ <p align="center">
+ <img src="images/Underwater Scenes.png" alt="Underwater Scenes" width="500" height="150">
+ </p>
+ - **Indoor Scenes**: Use depth estimation for indoor navigation and analysis.
+ <p align="center">
+ <img src="images/Indoor Scenes.png" alt="Indoor Scenes" width="500" height="150">
+ </p>
+
+ ## Further Development 🔧
+ - [Depth Anything V2 Official](https://github.com/DepthAnything/Depth-Anything-V2)
+ - [Depth Anything V2 TensorRT](https://github.com/spacewalk01/depth-anything-tensorrt)
+ - [Depth Anything ONNX](https://github.com/fabio-sim/Depth-Anything-ONNX)
+ - [Depth Anything ROS](https://github.com/scepter914/DepthAnything-ROS)
+ - [Depth Anything Android](https://github.com/FeiGeChuanShu/ncnn-android-depth_anything)
+
+
+ ## 🙏🏻Contributing
+
+ We welcome contributions from the community. Please fork the repository and create a pull request with your changes.
+
+ ## ✅License
+
+ This project is licensed under the MIT License.
+
+ ## 🏷️Acknowledgements
+
+ - Depth Anything V2 official [project](https://github.com/DepthAnything/Depth-Anything-V2) by the University of Hong Kong and ByteDance.
+ - Seeed Studio team for their [support and resources](https://github.com/Seeed-Projects/jetson-examples).
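The optional `daemon.json` change described in the README above is easy to sanity-check before pulling any images. A minimal sketch using standard Docker and Linux commands (the expected `nvidia` value simply mirrors the configuration shown above, and `hostname -I` is one common way to find the address used in step 2 of the Usage section):

```sh
# Confirm Docker picked up the default runtime after the restart
docker info | grep -i 'default runtime'    # should report: nvidia

# Find the address to use for http://{reComputer ip}:5000
hostname -I | awk '{print $1}'
```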
@@ -0,0 +1,8 @@
+ #!/bin/bash
+
+ CONTAINER_NAME="depth-anything-v2"
+ IMAGE_NAME="yaohui1998/depthanything-v2-on-jetson-orin:latest"
+
+ sudo docker stop $CONTAINER_NAME
+ sudo docker rm $CONTAINER_NAME
+ sudo docker rmi $IMAGE_NAME
@@ -0,0 +1,31 @@
+ allowed_l4t_versions:
+ - 35.3.1
+ - 35.4.1
+ - 35.5.0
+ required_disk_space: 15 # in GB
+ min_mem_gb: 4
+ min_swap_gb: 4
+ nvidia_jetson_package: "nvidia-jetpack"
+ packages:
+ #- "ros-noetic-ros-base"
+ #- "flask"
+ docker:
+   desired_daemon_json: |
+     {
+       "default-runtime": "nvidia",
+       "runtimes": {
+         "nvidia": {
+           "path": "nvidia-container-runtime",
+           "runtimeArgs": []
+         }
+       },
+       "storage-driver": "overlay2",
+       "data-root": "/var/lib/docker",
+       "log-driver": "json-file",
+       "log-opts": {
+         "max-size": "100m",
+         "max-file": "3"
+       },
+       "no-new-privileges": true,
+       "experimental": false
+     }
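This is the configuration that the init script below reads with `yq`. A quick way to inspect the same values from a shell, assuming the Python `yq` wrapper (which accepts the `jq`-style `-r` flag the script uses) and the config path that the script itself constructs:

```sh
CONFIG_FILE=./jetson-examples/reComputer/scripts/depth-anything-v2/config.yaml
yq -r '.allowed_l4t_versions[]' "$CONFIG_FILE"    # 35.3.1 / 35.4.1 / 35.5.0
yq -r '.required_disk_space' "$CONFIG_FILE"       # 15 (GB)
yq -r '.docker.desired_daemon_json' "$CONFIG_FILE"
```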
@@ -0,0 +1,164 @@
+ #!/bin/bash
+ # set color values for terminal output
+ RED=$(tput setaf 1)
+ GREEN=$(tput setaf 2)
+ YELLOW=$(tput setaf 3)
+ BLUE=$(tput setaf 4)
+ MAGENTA=$(tput setaf 5)
+ CYAN=$(tput setaf 6)
+ RESET=$(tput sgr0)
+
+ echo "${CYAN}This script will install the necessary packages and configuration for running Depth Anything V2 on a Jetson Orin device.${RESET}"
+
+ # Install jq and the Python 'yq' wrapper used below to parse the YAML config (yq requires jq)
+ sudo apt-get update
+ sudo apt-get install -y jq python3-pip && sudo pip3 install yq
+
+ # Read configuration
+ CURRENT_DIR="depth-anything-v2"
+ CONFIG_FILE="./jetson-examples/reComputer/scripts/${CURRENT_DIR}/config.yaml"
+ ALLOWED_L4T_VERSIONS=$(yq -r '.allowed_l4t_versions[]' $CONFIG_FILE)
+ ALLOWED_L4T_VERSIONS_ARRAY=($ALLOWED_L4T_VERSIONS)
+ REQUIRED_DISK_SPACE=$(yq -r '.required_disk_space' $CONFIG_FILE)
+ MIN_MEM_GB=$(yq -r '.min_mem_gb' $CONFIG_FILE)
+ MIN_SWAP_GB=$(yq -r '.min_swap_gb' $CONFIG_FILE)
+ NVIDIA_JETSON_PACKAGE=$(yq -r '.nvidia_jetson_package' $CONFIG_FILE)
+ PACKAGES=$(yq -r '.packages[]' $CONFIG_FILE)
+ DESIRED_DAEMON_JSON=$(yq -r '.docker.desired_daemon_json' $CONFIG_FILE)
+ CURRENT_DISK_SPACE=$(df -BG --output=avail / | tail -1 | sed 's/[^0-9]*//g')
+ MEM_GB=$(free -g | awk '/^Mem:/{print $2}')
+ SWAP_GB=$(free -g | awk '/^Swap:/{print $2}')
+
+ echo "${MAGENTA}Allowed L4T versions:${RESET} ${GREEN}$ALLOWED_L4T_VERSIONS ${RESET}"
+ echo "${MAGENTA}Required disk space:${RESET} ${GREEN}${REQUIRED_DISK_SPACE}G ${RESET}"
+ echo "${MAGENTA}Minimum memory:${RESET} ${GREEN}${MIN_MEM_GB}G ${RESET}"
+ echo "${MAGENTA}Minimum swap:${RESET} ${GREEN}${MIN_SWAP_GB}G ${RESET}"
+ echo "${MAGENTA}NVIDIA Jetson package:${RESET} ${GREEN}$NVIDIA_JETSON_PACKAGE ${RESET}"
+ echo "${MAGENTA}Additional packages:${RESET} ${GREEN}$PACKAGES ${RESET}"
+
+ # Check if the NVIDIA Jetson package is installed
+ if ! dpkg -l | grep -qw "$NVIDIA_JETSON_PACKAGE"; then
+     echo "$NVIDIA_JETSON_PACKAGE is not installed. Installing $NVIDIA_JETSON_PACKAGE..."
+     sudo apt-get install -y $NVIDIA_JETSON_PACKAGE
+ else
+     echo "$NVIDIA_JETSON_PACKAGE is installed: ${GREEN}OK!${RESET}"
+ fi
+
+ # Install additional packages
+ for PACKAGE in $PACKAGES; do
+     if ! dpkg -l | grep -qw "$PACKAGE"; then
+         echo "$PACKAGE is not installed. Installing $PACKAGE..."
+         sudo apt-get install -y $PACKAGE
+     else
+         echo "$PACKAGE is installed: ${GREEN}OK!${RESET}"
+     fi
+ done
+
+ # Get system architecture
+ ARCH=$(uname -i)
+ if [ "$ARCH" = "aarch64" ]; then
+     # Check for the L4T version string
+     L4T_VERSION_STRING=$(head -n 1 /etc/nv_tegra_release)
+
+     if [ -z "$L4T_VERSION_STRING" ]; then
+         L4T_VERSION_STRING=$(dpkg-query --showformat='${Version}' --show nvidia-l4t-core)
+     fi
+
+     L4T_RELEASE=$(echo "$L4T_VERSION_STRING" | cut -f 2 -d ' ' | grep -Po '(?<=R)[^;]+')
+     L4T_REVISION=$(echo "$L4T_VERSION_STRING" | cut -f 2 -d ',' | grep -Po '(?<=REVISION: )[^;]+')
+     L4T_VERSION="$L4T_RELEASE.$L4T_REVISION"
+
+ else # any non-aarch64 architecture is unsupported
+     echo "${RED}Unsupported architecture: $ARCH${RESET}"
+     exit 1
+ fi
+
+
+ # Check the L4T version
+ if [[ " ${ALLOWED_L4T_VERSIONS_ARRAY[@]} " =~ " ${L4T_VERSION} " ]]; then
+     echo "L4T VERSION ${GREEN}${L4T_VERSION}${RESET} is in the allowed list: ${GREEN}OK!${RESET}"
+ else
+     echo "${RED}L4T VERSION ${GREEN}${L4T_VERSION}${RESET}${RED} is not in the allowed versions list.${RESET}"
+     exit 1
+ fi
+
+ # Check disk space
+ if [ "$CURRENT_DISK_SPACE" -lt "$REQUIRED_DISK_SPACE" ]; then
+     echo "${RED}Insufficient disk space. Required: ${REQUIRED_DISK_SPACE}G, Available: ${CURRENT_DISK_SPACE}G.${RESET}"
+     exit 1
+ else
+     echo "Required ${GREEN}${REQUIRED_DISK_SPACE}G${RESET} disk space: ${GREEN}OK!${RESET}"
+ fi
+
+ # Check memory and swap space
+ if [ "$MEM_GB" -lt "$MIN_MEM_GB" ]; then
+     echo "${RED}Insufficient memory: $MEM_GB GB (minimum required: $MIN_MEM_GB GB).${RESET}"
+     exit 1
+ else
+     echo "Required ${GREEN}${MIN_MEM_GB}GB${RESET} memory: ${GREEN}OK!${RESET}"
+ fi
+
+ if [ "$SWAP_GB" -lt "$MIN_SWAP_GB" ]; then
+     echo "${RED}Insufficient swap space: $SWAP_GB GB (minimum required: $MIN_SWAP_GB GB).${RESET}"
+     exit 1
+ else
+     echo "Required ${GREEN}${MIN_SWAP_GB}GB${RESET} swap space: ${GREEN}OK!${RESET}"
+ fi
+
+ # Check if Docker is installed
+ if ! command -v docker &> /dev/null; then
+     echo "${BLUE}Docker is not installed. Installing Docker...${RESET}"
+
+     sudo apt-get install -y \
+         apt-transport-https \
+         ca-certificates \
+         curl \
+         software-properties-common
+
+     curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
+     sudo add-apt-repository \
+     "deb [arch=arm64] https://download.docker.com/linux/ubuntu \
+ $(lsb_release -cs) \
+ stable"
+
+     sudo apt-get update
+     sudo apt-get install -y docker-ce
+     sudo systemctl enable docker
+     sudo systemctl start docker
+     sudo usermod -aG docker $USER
+     sudo systemctl restart docker
+     newgrp docker
+
+     echo "Docker has been installed and configured."
+ fi
+
+ # Check if the current user has permission to use Docker
+ if ! docker info &> /dev/null; then
+     echo "The current user does not have permission to use Docker. Adding permissions..."
+     sudo usermod -aG docker $USER
+     sudo systemctl restart docker
+     newgrp docker
+     echo "${BLUE}Permissions added. Please log out and log back in for the changes to take effect.${RESET}"
+ else
+     echo "${GREEN}Docker is installed and the current user has permission to use it.${RESET}"
+ fi
+
+ DAEMON_JSON_PATH="/etc/docker/daemon.json"
+ if [ ! -f "$DAEMON_JSON_PATH" ] || [ "$(cat $DAEMON_JSON_PATH)" != "$DESIRED_DAEMON_JSON" ]; then
+     echo "${BLUE}Creating/updating $DAEMON_JSON_PATH with the desired content...${RESET}"
+     echo "$DESIRED_DAEMON_JSON" | sudo tee $DAEMON_JSON_PATH > /dev/null
+     sudo systemctl restart docker
+     echo "${GREEN}$DAEMON_JSON_PATH has been created/updated.${RESET}"
+ else
+     echo "${GREEN}$DAEMON_JSON_PATH already exists and has the correct content.${RESET}"
+ fi
+
+ # Install any remaining additional packages (re-checked after the Docker setup)
+ for PACKAGE in $PACKAGES; do
+     if ! dpkg -l | grep -qw "$PACKAGE"; then
+         echo "${CYAN}$PACKAGE${RESET} ${BLUE}is not installed. Installing $PACKAGE...${RESET}"
+         sudo apt-get install -y $PACKAGE
+     else
+         echo "${GREEN}$PACKAGE${RESET} is already installed: ${GREEN}OK!${RESET}"
+     fi
+ done
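When one of the preflight checks above fails, the underlying values can be inspected by hand with the same commands the script relies on; a minimal sketch:

```sh
# L4T release/revision string, as parsed by the script
head -n 1 /etc/nv_tegra_release

# Free disk space on /, in GB (compared against required_disk_space)
df -BG --output=avail / | tail -1

# Memory and swap in GB (compared against min_mem_gb / min_swap_gb)
free -g | awk '/^Mem:|^Swap:/{print $1, $2}'
```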
@@ -0,0 +1,22 @@
+ CONTAINER_NAME="depth-anything-v2"
+ IMAGE_NAME="yaohui1998/depthanything-v2-on-jetson-orin:latest"
+
+ # Pull the latest image
+ docker pull $IMAGE_NAME
+
+ # Check if a container with the specified name already exists
+ if [ -n "$(docker ps -a -q -f name=^/${CONTAINER_NAME}$)" ]; then
+     echo "Container $CONTAINER_NAME already exists. Starting it..."
+     docker start $CONTAINER_NAME
+ else
+     echo "Container $CONTAINER_NAME does not exist. Creating and starting..."
+     docker run -it \
+         --name $CONTAINER_NAME \
+         --privileged \
+         --network host \
+         -v /tmp/.X11-unix:/tmp/.X11-unix \
+         -v /dev:/dev \
+         -v /etc/localtime:/etc/localtime:ro \
+         --runtime nvidia \
+         $IMAGE_NAME
+ fi
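Because the container is created with a fixed name and without `--rm`, it can be managed afterwards with the usual Docker lifecycle commands, for example:

```sh
docker logs -f depth-anything-v2     # follow the WebUI server output
docker stop depth-anything-v2        # stop the demo
docker start -ai depth-anything-v2   # start it again and reattach
```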
@@ -2,6 +2,7 @@
 
 
  ## Hello
+ Now you can tailor a custom private local LLM to meet your requirements.
 
  💡 Here's an example of quickly deploying [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) on Jetson device.
 
@@ -12,15 +13,15 @@
 
  🛠️ Follow the tutorial below to quickly experience the performance of Llama-Factory on edge computing devices.
 
- <!-- <div align="center">
- <img alt="yolov10" width="1200px" src="./assets/llama-factory-Jetson.png">
- </div> -->
+ <div align="center">
+ <img alt="training" width="1200px" src="./assets/training.gif">
+ </div>
 
  ## Get a Jetson Orin Device 🛒
  | Device Model | Description | Link |
  |--------------|-------------|------|
  | reComputer J4012, powered by Orin NX 16GB, 100 TOPS | Embedded computer powered by Orin NX | [Buy Here](https://www.seeedstudio.com/reComputer-J4012-p-5586.html) |
- | NVIDIA® Jetson AGX Orin™ 64GB Developer Kit | smallest and most powerful AI edge computer | [Buy Here](https://www.seeedstudio.com/NVIDIArJetson-AGX-Orintm-64GB-Developer-Kit-p-5641.html) |
+ | NVIDIA® Jetson AGX Orin™ 64GB Developer Kit | Smallest and most powerful AI edge computer | [Buy Here](https://www.seeedstudio.com/NVIDIArJetson-AGX-Orintm-64GB-Developer-Kit-p-5641.html) |
 
  ## Getting Started
 
File without changes
@@ -0,0 +1,22 @@
+ #!/bin/bash
+ BASE_PATH=/home/$USER/reComputer
+ JETSON_REPO_PATH="$BASE_PATH/jetson-containers"
+ # look up the locally tagged ollama image via autotag
+ img_tag=$($JETSON_REPO_PATH/autotag -p local ollama)
+ ret=$? # keep the autotag exit code before it is overwritten
+ if [ $ret -eq 0 ]; then
+     echo "Found image successfully."
+     sudo docker rmi $img_tag
+ else
+     echo "[warn] autotag failed with error code $ret. Skipping image deletion."
+ fi
+
+ # optionally remove the downloaded ollama model data
+ read -p "Delete all data for ollama? (y/n): " choice
+ if [[ $choice == "y" || $choice == "Y" ]]; then
+     echo "Delete => $JETSON_REPO_PATH/data/models/ollama/"
+     sudo rm -rf $JETSON_REPO_PATH/data/models/ollama/
+     echo "Clean Data Done."
+ else
+     echo "[warn] Skip Clean Data."
+ fi
@@ -0,0 +1,22 @@
+ #!/bin/bash
+ BASE_PATH=/home/$USER/reComputer
+ JETSON_REPO_PATH="$BASE_PATH/jetson-containers"
+ # look up the locally tagged ollama image via autotag
+ img_tag=$($JETSON_REPO_PATH/autotag -p local ollama)
+ ret=$? # keep the autotag exit code before it is overwritten
+ if [ $ret -eq 0 ]; then
+     echo "Found image successfully."
+     sudo docker rmi $img_tag
+ else
+     echo "[warn] autotag failed with error code $ret. Skipping image deletion."
+ fi
+
+ # optionally remove the downloaded ollama model data
+ read -p "Delete all data for ollama? (y/n): " choice
+ if [[ $choice == "y" || $choice == "Y" ]]; then
+     echo "Delete => $JETSON_REPO_PATH/data/models/ollama/"
+     sudo rm -rf $JETSON_REPO_PATH/data/models/ollama/
+     echo "Clean Data Done."
+ else
+     echo "[warn] Skip Clean Data."
+ fi
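Both clean scripts above remove the shared Ollama model directory under `jetson-containers/data`. A non-destructive way to see what would be deleted before answering `y` (paths taken from the scripts; adjust if your install location differs):

```sh
JETSON_REPO_PATH=/home/$USER/reComputer/jetson-containers
du -sh "$JETSON_REPO_PATH/data/models/ollama/"    # size of the model data that would be removed
docker images | grep -i ollama                    # local images that docker rmi would target
```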
@@ -0,0 +1,7 @@
+ #!/bin/bash
+
+ # get the image tag for this L4T version (exports IMAGE_TAG)
+ source ./getVersion.sh
+
+ # remove the docker image
+ sudo docker rmi feiticeir0/parler_tts:${IMAGE_TAG}
@@ -0,0 +1,59 @@
+ #!/bin/bash
+ # based on dusty - https://github.com/dusty-nv/jetson-containers/blob/master/jetson_containers/l4t_version.sh
+ # and the llama-factory init script
+
+ # images exist only for these L4T versions - the 36.2.0 image also works on 36.3.0
+ L4T_VERSIONS=("35.3.1" "35.4.1" "36.2.0" "36.3.0")
+
+ ARCH=$(uname -i)
+ # echo "ARCH: $ARCH"
+
+ if [ "$ARCH" = "aarch64" ]; then
+     L4T_VERSION_STRING=$(head -n 1 /etc/nv_tegra_release)
+
+     if [ -z "$L4T_VERSION_STRING" ]; then
+         #echo "reading L4T version from \"dpkg-query --show nvidia-l4t-core\""
+
+         L4T_VERSION_STRING=$(dpkg-query --showformat='${Version}' --show nvidia-l4t-core)
+         L4T_VERSION_ARRAY=(${L4T_VERSION_STRING//./ })
+
+         #echo ${L4T_VERSION_ARRAY[@]}
+         #echo ${#L4T_VERSION_ARRAY[@]}
+
+         L4T_RELEASE=${L4T_VERSION_ARRAY[0]}
+         L4T_REVISION=${L4T_VERSION_ARRAY[1]}
+     else
+         #echo "reading L4T version from /etc/nv_tegra_release"
+
+         L4T_RELEASE=$(echo $L4T_VERSION_STRING | cut -f 2 -d ' ' | grep -Po '(?<=R)[^;]+')
+         L4T_REVISION=$(echo $L4T_VERSION_STRING | cut -f 2 -d ',' | grep -Po '(?<=REVISION: )[^;]+')
+     fi
+
+     L4T_REVISION_MAJOR=${L4T_REVISION:0:1}
+     L4T_REVISION_MINOR=${L4T_REVISION:2:1}
+
+     L4T_VERSION="$L4T_RELEASE.$L4T_REVISION"
+
+     IMAGE_TAG=$L4T_VERSION
+
+     #echo "L4T_VERSION : $L4T_VERSION"
+     #echo "L4T_RELEASE : $L4T_RELEASE"
+     #echo "L4T_REVISION: $L4T_REVISION"
+
+ elif [ "$ARCH" != "x86_64" ]; then
+     echo "unsupported architecture: $ARCH"
+     exit 1
+ fi
+
+
+ if [[ ! " ${L4T_VERSIONS[@]} " =~ " ${L4T_VERSION} " ]]; then
+     echo "L4T_VERSION ${L4T_VERSION} is not in the allowed versions list. Exiting."
+     exit 1
+ fi
+
+ # for L4T 36.x, fall back to the 36.2.0 image tag
+ if [ ${L4T_RELEASE} -eq "36" ]; then
+     # image tag will be 36.2.0
+     IMAGE_TAG="36.2.0"
+ fi
+
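getVersion.sh is meant to be sourced rather than executed, so that `IMAGE_TAG` (and `L4T_VERSION`) remain set in the caller's shell; a minimal sketch of how the run and clean scripts use it:

```sh
source ./getVersion.sh
echo "Detected L4T ${L4T_VERSION}; using image tag ${IMAGE_TAG}"   # e.g. 36.2.0 on a JetPack 6 device
```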
@@ -0,0 +1,8 @@
+ #!/bin/bash
+
+ echo "Creating models directory at /home/$USER/models"
+
+ # Create the models dir in the user's home (no-op if it already exists)
+ mkdir -p /home/$USER/models
+
+
@@ -0,0 +1,63 @@
+ # Parler TTS Mini: Expresso
+
+
+ Parler-TTS Mini: Expresso is a fine-tuned version of Parler-TTS Mini v0.1 on the Expresso dataset. It is a lightweight text-to-speech (TTS) model that can generate high-quality, natural-sounding speech. Compared to the original model, Parler-TTS Expresso provides superior control over emotions (happy, confused, laughing, sad) and consistent voices (Jerry, Thomas, Elisabeth, Talia).
+
+ [You can get more information on Hugging Face](https://huggingface.co/parler-tts/parler-tts-mini-expresso)
+
+ ![Gradio Interface](audio1.png)
+ ![Gradio Interface result](audio2.png)
+
+ ## Getting started
+ #### Prerequisites
+ * SeeedStudio reComputer J4012 [Buy one](https://www.seeedstudio.com/reComputer-J4012-p-5586.html)
+ * Speakers or headphones for audio output
+ * Docker installed
+
+ ## Installation
+ PyPI (recommended)
+
+ ```bash
+ pip install jetson-examples
+ ```
+
+ ## Usage
+ ### Method 1
+ ##### If you're running directly on the reComputer
+ 1. Type the following command in a terminal
+ ```bash
+ reComputer run parler-tts
+ ```
+ 2. Open a web browser and go to [http://localhost:7860](http://localhost:7860)
+ 3. A Gradio interface will appear with two text boxes:
+     1. The first is for the text that will be converted to audio.
+     2. The second is for describing the speaker: male/female, tone, pitch, mood, etc. See the examples on the Parler-TTS page.
+ 4. When you press Submit, after a short wait, the audio will appear in the box on the right. You can also download the file if you want.
+
+ ### Method 2
+ ##### If you want to connect to the reComputer remotely over SSH
+ 1. Connect using SSH, forwarding port 7860
+ ```bash
+ ssh -L 7860:localhost:7860 <username>@<reComputer_IP>
+ ```
+ 2. Type the following command in a terminal
+ ```bash
+ reComputer run parler-tts
+ ```
+ 3. Open a web browser (on your machine) and go to [http://localhost:7860](http://localhost:7860)
+
+ 4. Follow the same instructions as above.
+
+ ## Manual Run
+
+ If you want to run the Docker image outside jetson-examples, here's the command:
+
+ ```bash
+ docker run --rm -p 7860:7860 --runtime=nvidia -v ${MODELS_DIR}:/app feiticeir0/parler_tts:r36.2.0
+ ```
+
+ **MODELS_DIR** is the directory where Hugging Face stores the models downloaded from its hub. If you run the image several times with the same directory, each model is only downloaded once.
+
+ This is controlled by the `HF_HOME` environment variable.
+
+ [More info about HF environment variables](https://huggingface.co/docs/huggingface_hub/package_reference/environment_variables)
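The Manual Run command above assumes `MODELS_DIR` is already set in the shell; a complete sketch (the host path is just an example, any directory with enough free space works):

```bash
MODELS_DIR=/home/$USER/models
mkdir -p "$MODELS_DIR"
docker run --rm -p 7860:7860 --runtime=nvidia \
  -v ${MODELS_DIR}:/app \
  feiticeir0/parler_tts:r36.2.0    # use the tag matching your L4T version, as selected by getVersion.sh
```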
@@ -0,0 +1,17 @@
+ #!/bin/bash
+
+ MODELS_DIR=/home/$USER/models
+
+ # get the L4T version
+ # this exports the IMAGE_TAG variable
+ source ./getVersion.sh
+
+ # image to run (docker run pulls it automatically if it is not present locally)
+ echo "Using image: feiticeir0/parler_tts:${IMAGE_TAG}"
+
+ docker run \
+     --rm \
+     -p 7860:7860 \
+     --runtime=nvidia \
+     -v ${MODELS_DIR}:/app \
+     feiticeir0/parler_tts:${IMAGE_TAG}
reComputer/scripts/run.sh CHANGED
@@ -1,4 +1,9 @@
  #!/bin/bash
+ handle_error() {
+     echo "An error occurred. Exiting..."
+     exit 1
+ }
+ trap 'handle_error' ERR
 
  check_is_jetson_or_not() {
  model_file="/proc/device-tree/model"
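The `trap ... ERR` handler added above makes the launcher abort on the first failing command instead of silently continuing. A standalone illustration of the same pattern (illustrative only, not part of the package):

```sh
#!/bin/bash
handle_error() {
    echo "An error occurred. Exiting..."
    exit 1
}
trap 'handle_error' ERR

false                     # any simple command returning non-zero triggers the trap
echo "This line is never reached."
```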
File without changes