@tier0/node-red-contrib-opcda-client 1.0.0 → 1.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,132 @@
1
+ # OPC DA to MQTT Bridge - Deployment Guide
2
+
3
+ ## 1. Prerequisites (Offline Package Contents)
4
+
5
+ Before you begin, ensure you have the following files from the provided offline package:
6
+
7
+ | # | File | Purpose |
8
+ |---|------|---------|
9
+ | 1 | `python-2.7.amd64.msi` | Python 2.7 runtime |
10
+ | 2 | `pywin32-221.win-amd64-py2.7.exe` | Windows COM/DCOM support for Python |
11
+ | 3 | `OpenOPC-1.3.1.win-amd64-py2.7.exe` | OPC DA client library |
12
+ | 4 | `paho-mqtt-1.6.1.tar.gz` | MQTT client library (offline) |
13
+ | 5 | `opctest.py` | Bridge script |
14
+
15
+ > **Note:** If the script fails with a DCOM/OPC error on a machine that does NOT have KEPServerEX or any OPC software installed, you may also need to install `OPC Core Components Redistributable (x64).msi` to register `opcdaauto.dll`.
16
+
17
+ ## 2. Installation Steps
18
+
19
+ ### Step 1: Install Python 2.7
20
+
21
+ 1. Run `python-2.7.amd64.msi`
22
+ 2. **Important:** In the installation options, check **"Add python.exe to Path"**
23
+ 3. Keep the default installation path: `C:\Python27\`
24
+ 4. After installation, open CMD and verify:
25
+ ```
26
+ C:\Python27\python.exe --version
27
+ ```
28
+ Expected output: `Python 2.7.x`
29
+
30
+ ### Step 2: Install pywin32 (DCOM Support)
31
+
32
+ 1. Run `pywin32-221.win-amd64-py2.7.exe`
33
+ 2. The installer will automatically detect the Python 2.7 path
34
+ 3. Click "Next" through the installation
35
+
36
+ ### Step 3: Install OpenOPC
37
+
38
+ 1. Run `OpenOPC-1.3.1.win-amd64-py2.7.exe`
39
+ 2. The installer will automatically detect the Python 2.7 path
40
+ 3. Click "Next" through the installation
41
+
42
+ ### Step 4: Install paho-mqtt (Offline)
43
+
44
+ 1. Extract `paho-mqtt-1.6.1.tar.gz` to a folder (e.g. `C:\temp\paho-mqtt-1.6.1\`)
45
+ 2. Open **Command Prompt (CMD)** and run:
46
+ ```
47
+ cd C:\temp\paho-mqtt-1.6.1
48
+ C:\Python27\python.exe setup.py install
49
+ ```
50
+
51
+ ## 3. Configuration
52
+
53
+ Open `opctest.py` in a text editor (e.g. Notepad) and update the following settings at the top of the file:
54
+
55
+ ```
56
+ OPC_SERVER = 'Kepware.KEPServerEX.V6' # OPC DA server ProgID
57
+ OPC_HOST = '192.168.31.75' # IP of the OPC DA server machine
58
+ MQTT_BROKER = '192.168.31.45' # IP of the MQTT broker (Node-RED)
59
+ MQTT_PORT = 1883 # MQTT port
60
+ POLL_INTERVAL = 1 # Read interval in seconds
61
+ ```
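The settings above drive a simple read-and-publish loop. A minimal sketch of that loop in Python — `read_tags` and `publish` are illustrative stand-ins for the OpenOPC read and paho-mqtt publish calls, and both the JSON payload shape and the `opcda/<device>/Metric/<field>` topic layout are assumptions based on the examples in this guide, not the actual `opctest.py` implementation:

```python
import json
import time

POLL_INTERVAL = 1  # read interval in seconds (matches the config above)
TAGS = [u'TI2022.PV', u'TI2022.SV', u'TI2022.MV']

def topic_for(tag):
    # 'TI2022.PV' -> 'opcda/TI2022/Metric/PV' (assumed topic layout)
    device, field = tag.split('.', 1)
    return 'opcda/%s/Metric/%s' % (device, field)

def poll_once(read_tags, publish, tags=TAGS):
    """Read every tag once and publish each value as JSON.

    read_tags(tags) -> iterable of (name, value, quality, timestamp)
    publish(topic, payload) -> sends one MQTT message
    Returns the number of tags published.
    """
    published = 0
    for name, value, quality, ts in read_tags(tags):
        payload = json.dumps({'value': value, 'quality': quality, 'timeStamp': ts})
        publish(topic_for(name), payload)
        published += 1
    return published

def run(read_tags, publish):
    """Publish every POLL_INTERVAL seconds until Ctrl+C."""
    try:
        while True:
            count = poll_once(read_tags, publish)
            print('Published %d tags' % count)
            time.sleep(POLL_INTERVAL)
    except KeyboardInterrupt:
        print('Stopped.')
```

In the real script the reader would be an `OpenOPC` client and the publisher a paho-mqtt client; injecting them as callables here just keeps the loop structure visible.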
62
+
63
+ Update the `TAGS` list with the actual OPC DA tag names:
64
+
65
+ ```
66
+ TAGS = [
67
+ u'TI2022.PV',
68
+ u'TI2022.SV',
69
+ u'TI2022.MV',
70
+ # Add more tags as needed...
71
+ ]
72
+ ```
73
+
74
+ ## 4. Run the Bridge
75
+
76
+ 1. Open **Command Prompt as Administrator** (right-click CMD > "Run as administrator")
77
+ 2. Navigate to the folder containing `opctest.py`:
78
+ ```
79
+ cd C:\path\to\opctest
80
+ ```
81
+ 3. Run the script:
82
+ ```
83
+ C:\Python27\python.exe opctest.py
84
+ ```
85
+ 4. You should see output like:
86
+ ```
87
+ [*] OPC DA -> MQTT Bridge
88
+ OPC Server: Kepware.KEPServerEX.V6 @ 192.168.31.75
89
+ MQTT Broker: 192.168.31.45:1883
90
+ Tags: 3
91
+
92
+ [+] UNS import JSON written to uns_import.json
93
+ [+] MQTT connected
94
+ [+] OPC DA connected
95
+ [*] Publishing data every 1s (Ctrl+C to stop)...
96
+ --------------------------------------------------
97
+ [18:30:01] Published 3 tags
98
+ [18:30:02] Published 3 tags
99
+ ```
100
+ 5. Press `Ctrl+C` to stop the bridge gracefully
101
+
102
+ ## 5. Import UNS Structure into Tier 0
103
+
104
+ 1. After the script runs, a file named `uns_import.json` is generated in the same folder
105
+ 2. Open the Tier 0 platform
106
+ 3. Navigate to the UNS import function
107
+ 4. Upload `uns_import.json`
108
+ 5. The topic tree will be created automatically (e.g. `opcda/TI2022/Metric/PV`)
109
+
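The topic tree from step 5 (`opcda/TI2022/Metric/PV`) can be derived mechanically from the `TAGS` list. A hedged sketch of that derivation — the nested-dict shape below is an illustration only, not the actual `uns_import.json` schema expected by Tier 0:

```python
import json

def build_topic_tree(tags):
    """Nest tag names into a dict keyed by topic path segments.

    'TI2022.PV' -> opcda / TI2022 / Metric / PV, mirroring the
    example topic shown in step 5 above.
    """
    tree = {}
    for tag in tags:
        device, field = tag.split('.', 1)
        node = tree.setdefault('opcda', {}).setdefault(device, {}).setdefault('Metric', {})
        node[field] = {}  # leaf node for this metric
    return tree

tags = ['TI2022.PV', 'TI2022.SV', 'TI2022.MV']
print(json.dumps(build_topic_tree(tags), indent=2, sort_keys=True))
```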
110
+ ## 6. Troubleshooting
111
+
112
+ | Error | Solution |
113
+ |-------|----------|
114
+ | `'python' is not recognized` | Python was not added to PATH. Use the full path: `C:\Python27\python.exe` |
115
+ | `No module named OpenOPC` | OpenOPC not installed. Re-run Step 3 |
116
+ | `No module named paho` | paho-mqtt not installed. Re-run Step 4 |
117
+ | `Access denied` / DCOM error | Run CMD as **Administrator** |
118
+ | `OPC server not found` | Verify `OPC_SERVER` name and that the OPC server is running on the target machine |
119
+ | `MQTT connection refused` | Verify `MQTT_BROKER` IP and that the MQTT broker is running on port 1883 |
120
+ | Script stops unexpectedly | Check network connectivity to both OPC and MQTT servers |
121
+
122
+ ## 7. Run as a Windows Service (Optional)
123
+
124
+ To keep the bridge running in the background after logoff, you can use **NSSM** (Non-Sucking Service Manager):
125
+
126
+ 1. Download `nssm.exe` and place it in the script folder
127
+ 2. Open CMD as Administrator and run:
128
+ ```
129
+ nssm install OPCDABridge "C:\Python27\python.exe" "C:\path\to\opctest.py"
130
+ nssm start OPCDABridge
131
+ ```
132
+ 3. The bridge will now run automatically on system startup
package/README.md CHANGED
@@ -1,20 +1,13 @@
1
1
  # @tier0/node-red-contrib-opcda-client
2
2
 
3
- Node-RED nodes for reading and writing to OPC DA servers via DCOM.
3
+ > **Fork Note:** This is a fork of the original `node-red-contrib-opcda-client` package.
4
+ > This version introduces **full NTLMv2 authentication support** and packet signing for modern Windows DCOM environments, utilizing a custom-built [`FREEZONEX/node-dcom`](https://github.com/FREEZONEX/node-dcom) fork. It also resolves `/opcda/browse` route conflicts with the original package.
4
5
 
5
- Forked from [node-red-contrib-opcda-client](https://github.com/emrebekar/node-red-contrib-opcda-client) by emrebekar.
6
+ These nodes can be used to read from and write to OPC DA servers.
6
7
 
7
- ## Installation
8
-
9
- ```bash
10
- npm install @tier0/node-red-contrib-opcda-client
11
- ```
12
-
13
- ## Nodes
14
-
15
- - **opcda-server** -- Connection configuration for an OPC DA server
16
- - **opcda-read** -- Read tags from an OPC DA server
17
- - **opcda-write** -- Write values to an OPC DA server
8
+ - opcda-server
9
+ - opcda-read
10
+ - opcda-write
18
11
 
19
12
  ## Input Parameters
20
13
  ### opcda-server
@@ -80,10 +73,10 @@ set msg.payload parameter with the following;
80
73
  #### Screenshots
81
74
 
82
75
  ##### opcda-server
83
- ![opcda-server](https://raw.githubusercontent.com/FREEZONEX/opcda-client/main/images/opcda_server.png)
76
+ ![opcda-server](https://raw.githubusercontent.com/emrebekar/node-red-contrib-opcda-client/master/images/opcda_server.png)
84
77
 
85
78
  ##### opcda-read
86
- ![opcda-read](https://raw.githubusercontent.com/FREEZONEX/opcda-client/main/images/opcda_read.png)
79
+ ![opcda-read](https://raw.githubusercontent.com/emrebekar/node-red-contrib-opcda-client/master/images/opcda_read.png)
87
80
 
88
81
  ##### opcda-write
89
- ![opcda-write](https://raw.githubusercontent.com/FREEZONEX/opcda-client/main/images/opcda_write.png)
82
+ ![opcda-write](https://raw.githubusercontent.com/emrebekar/node-red-contrib-opcda-client/master/images/opcda_write.png)
@@ -0,0 +1,460 @@
1
+ networks:
2
+ edge_network:
3
+ driver: bridge
4
+
5
+ services:
6
+ frontend:
7
+ image: tier0/tier0-frontend:1.0.1-R8
8
+ container_name: frontend
9
+ ports:
10
+ - "3010:3000"
11
+ - "4000:4000"
12
+ environment:
13
+ - LLM_MODEL=${LLM_MODEL}
14
+ - LLM_API_KEY=${LLM_API_KEY}
15
+ - LLM_TYPE=${LLM_TYPE}
16
+ - REACT_APP_OS_LANG=${LANGUAGE}
17
+ - TOKEN_MAX_AGE=${TOKEN_MAX_AGE}
18
+ - TZ=UTC
19
+ command: >
20
+ sh -c "
21
+ if [ -z \"$LLM_API_KEY\" ]; then
22
+ echo 'LLM_API_KEY is empty, skipping the Node.js service'
23
+ concurrently \"serve -s /app/web-dist -l 3000\"
24
+ else
25
+ echo 'LLM_API_KEY is set, starting all services'
26
+ concurrently \"serve -s /app/web-dist -l 3000\" \"node /app/services-express-dist/index.js\"
27
+ fi
28
+ "
29
+ volumes:
30
+ - /etc/docker/certs:/certs
31
+ networks:
32
+ - edge_network
33
+ restart: always
34
+ uns:
35
+ image: tier0/tier0-backend:1.0.1-R8
36
+ container_name: uns
37
+ environment:
38
+ - LLM_MODEL=${LLM_MODEL}
39
+ - LLM_API_KEY=${LLM_API_KEY}
40
+ - LLM_TYPE=${LLM_TYPE}
41
+ - REACT_APP_OS_LANG=${LANGUAGE}
42
+ - OAUTH_REDIRECT_URI=${BASE_URL}/inter-api/supos/auth/token
43
+ - OAUTH_SUPOS_HOME=${BASE_URL}/uns
44
+ - OAUTH_REALM=${OAUTH_REALM}
45
+ - OAUTH_CLIENT_NAME=${OAUTH_CLIENT_NAME}
46
+ - OAUTH_CLIENT_ID=${OAUTH_CLIENT_ID}
47
+ - OAUTH_CLIENT_SECRET=${OAUTH_CLIENT_SECRET}
48
+ - OAUTH_GRANT_TYPE=${OAUTH_GRANT_TYPE}
49
+ - OAUTH_ISSUER_URI=${OAUTH_ISSUER_URI}
50
+ - OAUTH_REFRESH_TOKEN_TIME=${OAUTH_REFRESH_TOKEN_TIME}
51
+ - SYS_OS_APP_TITLE=${OS_NAME}
52
+ - SYS_OS_LOGIN_PATH=${OS_LOGIN_PATH}
53
+ - SYS_OS_LANG=${LANGUAGE}
54
+ - TOKEN_MAX_AGE=${TOKEN_MAX_AGE}
55
+ - TZ=UTC
56
+ - GRPC_POOL_CORE_SIZE=2
57
+ - GRPC_POOL_MAX_SIZE=4
58
+ - dbDSN=${UNS_DB_URL}
59
+ - SINK_PG_URL=${SINK_PG_URL}
60
+ - SINK_TSDB_URL=${SINK_TSDB_URL}
61
+ - NODE_RED_HOST=nodered
62
+ - NODE_RED_PORT=1880
63
+ - SYS_OS_MULTIPLE_TOPIC=false
64
+ - SYS_OS_VERSION=${OS_VERSION}
65
+ - SYS_OS_AUTH_ENABLE=${OS_AUTH_ENABLE}
66
+ - SYS_OS_LLM_TYPE=${OS_LLM_TYPE}
67
+ - SYS_OS_MQTT_TCP_PORT=${OS_MQTT_TCP_PORT}
68
+ - SYS_OS_MQTT_WEBSOCKET_TSL_PORT=${OS_MQTT_WEBSOCKET_TSL_PORT}
69
+ - SYS_OS_PLATFORM_TYPE=${OS_PLATFORM_TYPE}
70
+ - SYS_OS_ENTRANCE_URL=${BASE_URL}
71
+ - SYS_OS_QUALITY_NAME=quality
72
+ - SYS_OS_TIMESTAMP_NAME=timeStamp
73
+ - SYS_OS_LAZY_TREE=${LAZY_TREE}
74
+ - UNS_ADD_BATCH_SIZE=1000
75
+ volumes:
76
+ - ${VOLUMES_PATH}/edge/system/:/app/go-edge/system/
77
+ - ${VOLUMES_PATH}/edge/attachment/:/app/go-edge/attachment/
78
+ - ${VOLUMES_PATH}/edge/apps/:/app/data/apps/
79
+ - /etc/docker/certs:/certs
80
+ ports:
81
+ - "18998:8080"
82
+ networks:
83
+ - edge_network
84
+ depends_on:
85
+ postgresql:
86
+ condition: service_healthy
87
+ restart: always
88
+ extra_hosts:
89
+ - "host.docker.internal:host-gateway"
90
+ marimo:
91
+ image: suposce/marimo:latest
92
+ container_name: marimo
93
+ environment:
94
+ - TSDB_URL=postgresql+psycopg2://postgres:${TSDB_PASSWORD}@tsdb:5432/postgres
95
+ - PG_URL=postgresql+psycopg2://postgres:${POSTGRES_PASSWORD}@postgresql:5432/postgres
96
+ volumes:
97
+ - ${VOLUMES_PATH}/marimo/data:/app
98
+ # command: marimo edit /app/main.py --host 0.0.0.0 --port 8080 --no-token
99
+ restart: always
100
+ deploy:
101
+ resources:
102
+ limits:
103
+ memory: 2g
104
+ reservations:
105
+ memory: 512M
106
+ networks:
107
+ - edge_network
108
+ emqx:
109
+ image: emqx/emqx:5.8
110
+ container_name: emqx
111
+ ports:
112
+ - "1883:1883" # MQTT port
113
+ - "8883:8883" # MQTT over TLS port
114
+ - "8083:8083" # WebSocket port
115
+ - "8084:8084" # WebSocket over TLS port
116
+ - "18083:18083" # EMQX Dashboard port
117
+ environment:
118
+ - EMQX_NAME=emqx
119
+ - EMQX_NODE__COOKIE=secretcookie # cookie for inter-node communication
120
+ - service_logo=emqx-original.svg
121
+ - service_description=aboutus.emqxDescription
122
+ - service_redirect_url=/emqx/home/
123
+ - service_account=admin
124
+ - service_password=public
125
+ volumes:
126
+ - /etc/localtime:/etc/localtime:ro
127
+ - ${VOLUMES_PATH}/emqx/data:/opt/emqx/data
128
+ - ${VOLUMES_PATH}/emqx/log:/opt/emqx/log
129
+ - ${VOLUMES_PATH}/emqx/config/emqx.conf:/opt/emqx/etc/emqx.conf
130
+ - ${VOLUMES_PATH}/emqx/config/default_api_key.conf:/opt/emqx/etc/default_api_key.conf
131
+ - ${VOLUMES_PATH}/emqx/config/acl.conf:/opt/emqx/etc/acl.conf
132
+ restart: always
133
+ networks:
134
+ - edge_network
135
+ nodered:
136
+ image: nodered/node-red:4.0.8-22
137
+ container_name: nodered
138
+ user: root
139
+ ports:
140
+ - "1880:1880" # Node-RED web UI port
141
+ environment:
142
+ - service_logo=nodered-original.svg
143
+ - service_description=aboutus.nodeRedDescription
144
+ - FLOWS=/data/flows.json
145
+ - USE_ALIAS_AS_TOPIC=false
146
+ - TIMESTAMP_NAME=timeStamp
147
+ - QUALITY_NAME=quality
148
+ - TZ=UTC
149
+ - OS_LANG=${LANGUAGE}
150
+ - NODE_OPTIONS=--openssl-legacy-provider
151
+ - NODE_HTTP_API_PREFIX=/nodered-api
152
+ volumes:
153
+ - /etc/localtime:/etc/localtime:ro
154
+ - ${VOLUMES_PATH}/node-red:/data # mount the local data directory
155
+ depends_on:
156
+ - emqx
157
+ restart: always
158
+ networks:
159
+ - edge_network
160
+ eventflow:
161
+ image: nodered/node-red:4.0.8-22
162
+ container_name: eventflow
163
+ user: root
164
+ ports:
165
+ - "1889:1889" # Node-RED web UI port
166
+ environment:
167
+ - service_logo=nodered-original.svg
168
+ - service_description=aboutus.nodeRedDescription
169
+ - FLOWS=/data/flows.json
170
+ - USE_ALIAS_AS_TOPIC=false
171
+ - QUALITY_NAME=status
172
+ - TIMESTAMP_NAME=timeStamp
173
+ - TZ=UTC
174
+ - OS_LANG=${LANGUAGE}
175
+ - NODE_OPTIONS=--openssl-legacy-provider
176
+ - NODE_HTTP_API_PREFIX=/eventflow-api
177
+ volumes:
178
+ - /etc/localtime:/etc/localtime:ro
179
+ - ${VOLUMES_PATH}/eventflow:/data # mount the local data directory
180
+ restart: always
181
+ networks:
182
+ - edge_network
183
+ grafana:
184
+ image: grafana/grafana:11.5.6
185
+ profiles:
186
+ - grafana
187
+ container_name: grafana
188
+ user: root
189
+ ports:
190
+ - "3000:3000" # Grafana web UI port
191
+ volumes:
192
+ - ${VOLUMES_PATH}/grafana/data:/var/lib/grafana
193
+ # - ${VOLUMES_PATH}/grafana/data/plugins:/var/lib/grafana/plugins # mount the local data directory
194
+ environment:
195
+ service_logo: grafana-original.svg
196
+ service_description: aboutus.grafanaDescription
197
+ service_redirect_url: /grafana/home/dashboards/
198
+ # Initial password for the admin user
199
+ GF_SECURITY_ADMIN_PASSWORD: "supos"
200
+ # Enable Grafana's Explore feature
201
+ GF_EXPLORE_ENABLED: "true"
202
+ # Install Grafana plugins
203
+ # GF_INSTALL_PLUGINS: "grafana-clock-panel,grafana-mqtt-datasource,tdengine-datasource,yesoreyeram-infinity-datasource"
204
+ # Grafana UI language
205
+ GF_VIEWER_LANGUAGE: "${GRAFANA_LANG:-en-US}"
206
+ GF_AUTH_ANONYMOUS_ENABLED: "true"
207
+ GF_AUTH_ANONYMOUS_ORG_ROLE: "Admin"
208
+ GF_SECURITY_ALLOW_EMBEDDING: "true"
209
+ GF_SERVER_ROOT_URL: "http://${ENTRANCE_DOMAIN}/grafana/home/"
210
+ GF_USERS_DEFAULT_THEME: "light"
211
+ GF_DATABASE_TYPE: postgres
212
+ GF_DATABASE_HOST: postgresql:5432
213
+ GF_DATABASE_NAME: grafana
214
+ GF_DATABASE_USER: postgres
215
+ GF_DATABASE_PASSWORD: ${POSTGRES_PASSWORD}
216
+ restart: always
217
+ depends_on:
218
+ postgresql:
219
+ condition: service_healthy # wait for PostgreSQL to be healthy
220
+ networks:
221
+ - edge_network
222
+ portainer:
223
+ image: portainer/portainer-ce:2.23.0
224
+ container_name: portainer
225
+ environment:
226
+ service_logo: portainer.svg
227
+ command: --admin-password="$$2y$$05$$ZTAqF7Tn.hil8X.ifVmQTuKiJQoZDiKDW3t1lRR2/VPR06QoHv4AC"
228
+ ports:
229
+ - "8000:8000"
230
+ - "9443:9443"
231
+ restart: always
232
+ volumes:
233
+ - /var/run/docker.sock:/var/run/docker.sock
234
+ - ${VOLUMES_PATH}/portainer:/data
235
+ networks:
236
+ - edge_network
237
+ postgresql:
238
+ image: timescale/timescaledb:2.20.0-pg17
239
+ container_name: postgresql
240
+ environment:
241
+ TZ: UTC # container time zone
242
+ service_logo: postgresql-original.svg
243
+ service_description: aboutus.postgresqlDescription
244
+ POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
245
+ ports:
246
+ - "5432:5432"
247
+ volumes:
248
+ - ${VOLUMES_PATH}/postgresql/conf/postgresql.conf:/etc/postgresql/custom.conf
249
+ - /etc/localtime:/etc/localtime:ro
250
+ - ${VOLUMES_PATH}/postgresql/pgdata:/var/lib/postgresql/data # persist data
251
+ - ${VOLUMES_PATH}/postgresql/init-scripts:/docker-entrypoint-initdb.d # load init scripts
252
+ command:
253
+ - postgres
254
+ - -c
255
+ - config_file=/etc/postgresql/custom.conf
256
+ healthcheck:
257
+ test: [ "CMD-SHELL", "psql -U postgres -d keycloak -c '\\dt public.initialization_complete' | grep -q 'initialization_complete'" ]
258
+ interval: 15s
259
+ timeout: 15s
260
+ retries: 60
261
+ start_period: 120s
262
+ start_interval: 10s
263
+ restart: always
264
+ networks:
265
+ - edge_network
266
+ keycloak:
267
+ image: keycloak/keycloak:26.0 # Keycloak 26.0 image
268
+ container_name: keycloak
269
+ ports:
270
+ - "8081:8080"
271
+ user: root
272
+ deploy:
273
+ resources:
274
+ limits:
275
+ memory: 2G
276
+ reservations:
277
+ memory: 512M
278
+ environment:
279
+ service_logo: keycloak-original.svg
280
+ JAVA_OPTS: >-
281
+ -Xms512m
282
+ -Xmx1g
283
+ -XX:MaxMetaspaceSize=256m
284
+ -XX:MetaspaceSize=256m
285
+ -Xss512k
286
+ -XX:+UseG1GC
287
+ service_description: aboutus.keycloakDescription
288
+ service_redirect_url: /keycloak/home/
289
+ service_account: admin
290
+ service_password: tier0
291
+ KC_SSL_REQUIRED: none # do not require SSL
292
+ KC_PROXY: passthrough # proxy mode
293
+ KC_HOSTNAME: "${ENTRANCE_DOMAIN}" # hostname
294
+ KC_FRONTEND_URL: "${BASE_URL}" # frontend URL
295
+ KC_BOOTSTRAP_ADMIN_USERNAME: admin
296
+ KC_BOOTSTRAP_ADMIN_PASSWORD: tier0
297
+ KC_COOKIE_SECURE: false # disable secure cookies
298
+ KC_DB: postgres
299
+ KC_DB_URL: "jdbc:postgresql://postgresql:5432/keycloak"
300
+ KC_DB_USERNAME: postgres
301
+ KC_DB_PASSWORD: ${POSTGRES_PASSWORD}
302
+ KC_DB_POOL_INITIAL_SIZE: 2
303
+ KC_DB_POOL_MIN_SIZE: 2
304
+ KC_DB_POOL_MAX_SIZE: 4
305
+ KC_DB_POOL_MAX_LIFETIME: 1800
306
+ KC_HEALTH_ENABLED: true
307
+ KC_FEATURES: token-exchange
308
+ JAVA_OPTS_APPEND: -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true
309
+ volumes:
310
+ - /etc/localtime:/etc/localtime:ro
311
+ - ${VOLUMES_PATH}/keycloak/data:/opt/keycloak/data # mount Keycloak's data directory locally
312
+ - ${VOLUMES_PATH}/keycloak/theme/keycloak.v2:/opt/keycloak/themes/wenhao
313
+ - ${VOLUMES_PATH}/keycloak/ca/ca.crt:/opt/keycloak/ca.crt
314
+ - ${VOLUMES_PATH}/keycloak/ca/init-ldaps-cert.sh:/opt/keycloak/init-ldaps-cert.sh
315
+ depends_on:
316
+ postgresql:
317
+ condition: service_healthy # wait for PostgreSQL to be healthy
318
+ command: start-dev --hostname ${BASE_URL}/keycloak/home/auth --proxy-headers forwarded
319
+ healthcheck:
320
+ test: >
321
+ /bin/bash -c " exec 3<>/dev/tcp/localhost/8080"
322
+ interval: 10s
323
+ timeout: 5s
324
+ retries: 60
325
+ start_period: 120s # 2-minute startup grace period
326
+ start_interval: 10s # check interval during the grace period
327
+ restart: always
328
+ networks:
329
+ - edge_network
330
+ tsdb:
331
+ image: timescale/timescaledb:2.20.0-pg17
332
+ container_name: tsdb
333
+ environment:
334
+ TZ: UTC # container time zone
335
+ service_logo: postgresql-original.svg
336
+ service_description: aboutus.postgresqlDescription
337
+ POSTGRES_PASSWORD: ${TSDB_PASSWORD}
338
+ ports:
339
+ - "2345:5432"
340
+ volumes:
341
+ - ${VOLUMES_PATH}/tsdb/conf/postgresql.conf:/etc/postgresql/custom.conf
342
+ - ${VOLUMES_PATH}/tsdb/data:/var/lib/postgresql/data # persist data
343
+ - ${VOLUMES_PATH}/tsdb/init-scripts:/docker-entrypoint-initdb.d # load init scripts
344
+ command:
345
+ - postgres
346
+ - -c
347
+ - config_file=/etc/postgresql/custom.conf
348
+ healthcheck:
349
+ test: [ "CMD-SHELL", "pg_isready -U postgres" ]
350
+ interval: 10s
351
+ timeout: 5s
352
+ retries: 10
353
+ start_period: 30s
354
+ start_interval: 60s
355
+ restart: always
356
+ networks:
357
+ - edge_network
358
+ kong:
359
+ image: kong:3.9.0
360
+ container_name: kong
361
+ environment:
362
+ KONG_WORKER_STATE_UPDATE_FREQUENCY: 30
363
+ KONG_DB_CACHE_TTL: 600
364
+ KONG_NGINX_WORKER_PROCESSES: 2
365
+ KONG_LUA_GC_PAUSE: 100 # raise the GC pause to trigger collection less often
366
+ KONG_LUA_GC_STEPMUL: 200 # increase per-cycle collection intensity
367
+ service_logo: konga-original.svg
368
+ service_description: aboutus.kongaDescription
369
+ service_redirect_url: /konga/home/
370
+ KONG_DATABASE: postgres
371
+ KONG_PG_HOST: postgresql
372
+ KONG_PG_PASSWORD: ${POSTGRES_PASSWORD}
373
+ KONG_PG_USER: postgres
374
+ KONG_ADMIN_LISTEN: 0.0.0.0:8001
375
+ KONG_SSL_CERT: /etc/kong/ssl/fullchain.cer
376
+ KONG_SSL_CERT_KEY: /etc/kong/ssl/private.key
377
+ KONG_PROXY_LISTEN: 0.0.0.0:8000, 0.0.0.0:8443 ssl
378
+ KONG_PLUGINS: bundled,supos-auth-checker,supos-url-transformer
379
+ KONG_PG_POOL_SIZE: 2
380
+ KONG_LOG_LEVEL: error
381
+ KONG_NGINX_HTTP_CLIENT_MAX_BODY_SIZE: 2048m
382
+ KONG_NGINX_PROXY_CLIENT_MAX_BODY_SIZE: 2048m
383
+ KONG_NGINX_HTTP_PROXY_CONNECT_TIMEOUT: 300000
384
+ KONG_NGINX_HTTP_PROXY_SEND_TIMEOUT: 300000
385
+ KONG_NGINX_HTTP_PROXY_READ_TIMEOUT: 300000
386
+ volumes:
387
+ - ${VOLUMES_PATH}/kong/certificationfile:/etc/kong/ssl:ro
388
+ - ${VOLUMES_PATH}/kong/kong_config.yml:/etc/kong/kong_config.yml
389
+ - ${VOLUMES_PATH}/kong/start.sh:/usr/local/bin/start.sh
390
+ - ${VOLUMES_PATH}/kong/kong-plugin-auth-checker:/usr/local/share/lua/5.1/kong/plugins/supos-auth-checker
391
+ - ${VOLUMES_PATH}/kong/kong-plugin-url-transformer:/usr/local/share/lua/5.1/kong/plugins/supos-url-transformer
392
+ ports:
393
+ - "${ENTRANCE_PORT:-8088}:8000"
394
+ - "${ENTRANCE_SSL_PORT:-8443}:8443"
395
+ - "8001:8001"
396
+ - "8444:8444"
397
+ depends_on:
398
+ - emqx
399
+ - uns
400
+ - keycloak
401
+ - postgresql
402
+ command: >
403
+ sh -c "kong migrations bootstrap &&
404
+ kong config db_import /etc/kong/kong_config.yml &&
405
+ kong start"
406
+ restart: always # ensure Kong restarts automatically
407
+ networks:
408
+ - edge_network
409
+ konga:
410
+ image: suposce/konga:1.0.0
411
+ container_name: konga
412
+ profiles:
413
+ - konga
414
+ environment:
415
+ # DB_ADAPTER: mysql # use MySQL as the database
416
+ # DB_HOST: konga_mysql_database # MySQL container hostname
417
+ # DB_USER: konga # MySQL user
418
+ # DB_PASSWORD: konga # MySQL user password
419
+ # DB_DATABASE: konga # MySQL database name
420
+ # NODE_ENV: production # set NODE_ENV to production
421
+ NO_AUTH: "true" # disable authentication
422
+ KONGA_SEED_KONG_NODE_DATA_SOURCE_FILE: /node.data
423
+ ports:
424
+ - "1337:1337" # map Konga's port to the host
425
+ volumes:
426
+ - ${VOLUMES_PATH}/konga/db/:/app/kongadata/
427
+ - ${VOLUMES_PATH}/konga/node.data:/node.data # persist database data
428
+ restart: always
429
+ networks:
430
+ - edge_network
431
+ minio:
432
+ image: minio/minio:RELEASE.2024-12-18T13-15-44Z
433
+ container_name: minio
434
+ profiles:
435
+ - minio
436
+ environment:
437
+ - service_logo=minio-original.svg
438
+ - service_description=aboutus.minioDescription
439
+ - service_redirect_url=/minio/home/
440
+ - MINIO_ACCESS_KEY=admin
441
+ - MINIO_SECRET_KEY=adminpassword
442
+ - MINIO_BROWSER_REDIRECT_URL=${BASE_URL}/minio/home
443
+ - MINIO_IDENTITY_OPENID_CONFIG_URL=${OAUTH_ISSUER_URI}/realms/supos/.well-known/openid-configuration
444
+ - MINIO_IDENTITY_OPENID_CLIENT_ID=${OAUTH_CLIENT_ID}
445
+ - MINIO_IDENTITY_OPENID_CLIENT_SECRET=${OAUTH_CLIENT_SECRET}
446
+ - MINIO_IDENTITY_OPENID_SCOPES=openid
447
+ - MINIO_IDENTITY_OPENID_ROLE_POLICY=public-delete-policy
448
+ - MINIO_IDENTITY_OPENID_REDIRECT_URI=${BASE_URL}/minio/home/oauth_callback
449
+ ports:
450
+ - "9000:9000" # Web UI port
451
+ - "9001:9001" # Admin API port (if admin access is needed)
452
+ volumes:
453
+ - ${VOLUMES_PATH}/minio/data:/data # data storage location
454
+ command: server /data --console-address ":9001"
455
+ depends_on:
456
+ keycloak:
457
+ condition: service_healthy
458
+ restart: always
459
+ networks:
460
+ - edge_network
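The compose file above leans on a number of required environment variables (`POSTGRES_PASSWORD`, `TSDB_PASSWORD`, `VOLUMES_PATH`, `BASE_URL`, and others). A small preflight check before `docker compose up` catches missing ones early; the variable list below is a subset chosen for illustration, not an exhaustive inventory:

```python
import os

# Subset of variables the compose file interpolates; extend as needed.
REQUIRED_VARS = [
    'POSTGRES_PASSWORD', 'TSDB_PASSWORD', 'VOLUMES_PATH',
    'BASE_URL', 'ENTRANCE_DOMAIN', 'LANGUAGE',
]

def missing_vars(env=os.environ, required=REQUIRED_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not env.get(name)]

if __name__ == '__main__':
    missing = missing_vars()
    if missing:
        raise SystemExit('Missing env vars: ' + ', '.join(missing))
    print('All required variables are set.')
```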
@@ -26,11 +26,35 @@ module.exports = function(RED) {
26
26
  };
27
27
 
28
28
  function resolveError(e) {
29
+ if (typeof e === "number") {
30
+ const u = e >>> 0;
31
+ if (errorCode[e] !== undefined) return errorCode[e];
32
+ if (errorCode[u] !== undefined) return errorCode[u];
33
+ return "HRESULT 0x" + u.toString(16).toUpperCase() + " (" + e + ")";
34
+ }
35
+ if (e instanceof Error && e.message) {
36
+ const asNum = Number(e.message);
37
+ if (!Number.isNaN(asNum) && String(asNum) === String(e.message).trim()) {
38
+ return resolveError(asNum);
39
+ }
40
+ return e.message;
41
+ }
29
42
  if (errorCode[e]) return errorCode[e];
30
- if (typeof e === 'number') return `DCOM error code: 0x${(e >>> 0).toString(16).toUpperCase()}`;
31
- if (e instanceof Error) return e.message || e.toString();
32
- if (typeof e === 'string') return e;
33
- try { return JSON.stringify(e); } catch (_) { return String(e); }
43
+ if (typeof e === "string") return e;
44
+ try {
45
+ return JSON.stringify(e);
46
+ } catch (_) {
47
+ return String(e);
48
+ }
49
+ }
50
+
51
+ /** IOPCItemMgt::Add per-item result (may be signed negative in JS). */
52
+ function describeOpcItemResult(code) {
53
+ if (code === 0) return "OK";
54
+ const unsigned = code >>> 0;
55
+ const fromOpc = opcda.constants.opc.errorDesc[String(unsigned)];
56
+ if (fromOpc) return fromOpc + " (0x" + unsigned.toString(16).toUpperCase() + ")";
57
+ return "OPC/DCOM result 0x" + unsigned.toString(16).toUpperCase() + " (" + code + ")";
34
58
  }
35
59
 
36
60
  function OPCDARead(config) {
@@ -105,9 +129,12 @@ module.exports = function(RED) {
105
129
  try{
106
130
  node.updateStatus('connecting');
107
131
 
108
- var timeout = parseInt(server.config.timeout);
132
+ var timeout = parseInt(server.config.timeout, 10);
133
+ if (!Number.isFinite(timeout) || timeout <= 0) {
134
+ timeout = 15000;
135
+ }
109
136
  var comSession = new Session();
110
-
137
+
111
138
  comSession = comSession.createSession(server.config.domain, server.credentials.username, server.credentials.password);
112
139
  comSession.setGlobalSocketTimeout(timeout);
113
140
 
@@ -144,7 +171,7 @@ module.exports = function(RED) {
144
171
  const item = itemsList[i];
145
172
 
146
173
  if (addedItem[0] !== 0) {
147
- node.warn(`Error adding item '${item.itemID}'`);
174
+ node.warn("Error adding item '" + item.itemID + "': " + describeOpcItemResult(addedItem[0]));
148
175
  }
149
176
  else {
150
177
  serverHandles.push(addedItem[1].serverHandle);