@apocaliss92/scrypted-advanced-notifier 4.8.39 → 5.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +4 -0
- package/README.md +2 -263
- package/dist/plugin.zip +0 -0
- package/package.json +5 -1
- package/docs/INTERFACE_DESCRIPTORS_MIGRATION.md +0 -184
- package/docs/PLAN-RECORDER-MIXIN-AND-UNIFIED-PIPELINE.md +0 -203
package/CHANGELOG.md
CHANGED

@@ -1,6 +1,10 @@
  <details>
  <summary>Changelog</summary>

+ ### 5.0.0
+ Finally, the app is complete. It's called CamStack and it's available on TestFlight and as a PWA on the local Scrypted instance. Also available at https://camstack.zentik.app
+ New docs are up at https://advanced-notifier-docs.zentik.app/docs/advanced-notifier
+
  ### 4.8.39

  Added a section on every battery camera to set up battery management based on customizable thresholds
package/README.md
CHANGED

@@ -1,267 +1,6 @@
- # Scrypted
+ # Scrypted Advanced Notifier

  ☕️ If this extension works well for you, consider buying me a coffee. Thanks!
  [Buy me a coffee!](https://buymeacoffee.com/apocaliss92)

- [
-
- # Getting started
-
- ## MQTT
-
- To enable MQTT exporting:
-
- - Enable `MQTT enabled` in the General -> General tab
- - Set up the authentication parameters in the General -> MQTT tab; check `Use MQTT plugin credentials` to use the credentials set on the MQTT plugin
- - Check `Use NVR detections` if you want the images stored on MQTT to be the clipped ones from NVR
- - Check `Audio pressure (dB) detection` if you want continuous reporting of the audio captured by the camera (dB)
- - Check `Check objects occupancy regularly` if you want regular occupancy data checks on the camera
- - Cameras enabled for the plugin will automatically be enabled for MQTT. This can be disabled in the camera's section Advanced notifier -> Report to MQTT
-
- The plugin will export the following entities to MQTT:
-
- - PTZ controls
- - Restart control
- - Notifications enabled
- - Basic detection information (motion, animal, person, vehicle, face, plate, generic object). The same information will be available for every rule associated with a camera
- - Latest image
- - Triggered
- - Last trigger time (disabled by default)
- - Amount of objects (if enabled)
- - Online status
- - Sleeping status
- - Battery status
- - Recording switch (NVR privacy mode)
- - Current dB level (if enabled)
-
- IMPORTANT
- If you edit a device name, to force the correct re-creation of the HA entities you will need to manually delete the entities on HA and restart the plugin
-
- ## Notifications
-
- The plugin provides a customized way to deliver notifications, based on rules. Each rule can be activated based on several factors, e.g. active sensors, time ranges, or security system status. Any notifier can be used, but the only fully supported ones currently are the following (support can be extended to any other):
-
- - Zentik - a full-featured notifier application. Perfect for Apple devices (iOS + watchOS, iPad, macOS), developed by me with an eye to Advanced notifier, supporting whatever was possible; available for beta testing:
-   - iOS at https://testflight.apple.com/join/dFqETQEm
-   - PWA (Android, web) at https://notifier.zentik.app
-   - Scrypted plugin available at https://www.npmjs.com/package/@apocaliss92/scrypted-zentik
- - Native Scrypted notifiers (e.g. the Scrypted iPhone app)
- - Ntfy
- - Home Assistant push notifications
- - Pushover
- - Telegram
-
- It is highly suggested to use Zentik, since it gives a nice experience for both push notifications and a history of past ones. Otherwise, combinations of other notifiers can be used: Pushover or Ntfy as notifier storage, combined with a Home Assistant or NVR notifier whose priority is set to the lowest. This allows a rich notification that is also stored on another notifier, because notifiers such as Pushover, Ntfy or Telegram do not have good support for actions. The following parameters are required to successfully send notifications:
-
- - `Scrypted token`: token stored on the Scrypted entity on Home Assistant
- - `NVR url`: URL pointing to the NVR instance; it should be accessible from outside
-
- Each notifier is fully configurable on every rule, with the possibility to set actions, addSnoozeActions, or priority.
- Default actions can be set on every camera and will be added to each notification
-
- All notifiers currently support critical notifications.
-
- Notifications can be disabled for a specific camera on the camera page, Advanced notifier => Notifier => `Notifications enabled` (available on MQTT as well)
- Notifications can be disabled globally on the general tab of the plugin
-
- ### Scrypted NVR notifiers
-
- The plugin supports scripting of the NVR built-in notifiers; the following features are available:
-
- - Discovery to MQTT
- - Notifier notifications disabled: completely disable notifications for a specific notifier
- - Camera notifications disabled: disable notifications for a specific camera
- - Schedule notifications on both cameras and notifiers
- - Translate notifications with the plugin `Texts` section (enabled by default)
- - Enable AI to generate descriptions. To make this work, each camera and notifier should be extended with the Advanced notifier plugin and the AI flag activated on both. The reason is that AI calls can be expensive and need to be explicitly enabled on both entities
-
- **NVR notifiers can be used both as plugin notifiers, with rules and everything, or just to enhance the NVR notifications.**
-
- - If you want to use one as a plugin notifier, you should keep the notifier enabled (at the very bottom of the page) BUT disable all the detection classes (on the device page of the device, e.g. `Scrypted iPhone App (user)`)
- - If you want the plugin to just enhance the NVR notifications, there is nothing to change to make it work with the plugin. Just extend the notifier with this plugin and use the features you like
-
- ## Rules
-
- Rules can be of the following types: Detection, Occupancy, Audio, Timelapse. These properties are common to all of them; some are hidden until `Show more configurations` is activated
-
- - `Activation type`: when the rule should be active
-   - Always
-   - Schedule, defined as a time range during the day
-   - OnActive, active only if the camera is listed in the `"OnActive" devices` selector (plugin => rules => general). This selector can be driven over MQTT with a topic specified in `Active entities topic` under General => MQTT. The message on this topic can contain a list of device IDs, names, or Home Assistant entityIds (check the Home Assistant section)
- - `Notifiers`: notifiers to notify; additional properties will be applied depending on the selected ones
-   - `Pushover priority`: priority to use on Pushover
-   - `Homeassistant Actions`: actions to show on the Home Assistant push notifications, of type `{"action":"open_door","title":"Open door","icon":"sfsymbols:door"}`; check the Home Assistant documentation for further info
- - `Open sensors`: which sensors should be open to enable the rule
- - `Closed sensors`: which sensors should be closed to enable the rule
- - `Alarm modes`: which alarm states should enable the rule. The alarm system device can be defined on the plugin page under Rules => `Security system`
- - `Notify with a clip`: available only for detection and occupancy rules; the plugin will activate a decoder to save the camera's last frames. When a rule triggers, a short clip will be generated and sent instead of a simple snapshot. Two types are supported:
-   - MP4: supported only by Home Assistant and partially by the others
-   - GIF: supported by Home Assistant and Pushover
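The `Homeassistant Actions` entries in the removed README follow the JSON shape shown there. As an illustration only (the `HaAction` type and `buildHaActions` helper are mine, not plugin code; only the field names `action`, `title`, `icon` come from the README example), a minimal sketch:

```typescript
// Hypothetical helper mirroring the `Homeassistant Actions` shape from the README.
interface HaAction {
  action: string; // identifier sent back when the action is tapped
  title: string;  // label shown on the notification
  icon?: string;  // e.g. "sfsymbols:door"
}

function buildHaActions(actions: HaAction[]): string {
  // Home Assistant expects a JSON array of action objects.
  return JSON.stringify(actions);
}

const payload = buildHaActions([
  { action: "open_door", title: "Open door", icon: "sfsymbols:door" },
]);
```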
- ### Detection
-
- These rules can be created for multiple cameras (on the plugin page) or per single camera. They allow specifying which object detections should trigger a notification:
-
- - Create a new rule by adding a new text in the `Detection rules` selector and hitting save. A new tab will appear
- - Set the activation type
- - Set the notifiers to notify on the detection
- - Check `Use NVR detections` to trigger the rule only as an effect of detections from the NVR plugin. This will include cropped images stored on MQTT and will be in sync with the NVR app events reel
- - Set the detection classes and the minimum score to trigger the notification
- - Set `Minimum notification delay` to debounce further notifications (overrides the camera settings)
- - Set `Minimum MQTT publish delay` to debounce the image update on MQTT for this rule
- - Set `Whitelisted zones` to use only detections in these zones
- - Set `Blacklisted zones` to ignore detections coming from these zones
- - Set `Disable recording in seconds` to enable NVR recording for some seconds and disable it afterwards
- - Set a `Custom text` if a specific text should be applied. By default, detection rules use the texts defined in the plugin tab `Texts`; many placeholders are available to enrich the content
- - Check `Enable AI to generate descriptions` if you want to let AI generate a description text from the image. AI settings are available on the plugin page under AI; currently supported: GoogleAi, OpenAi, Claude, Groq
- - Set `CLIP Description` to use semantic search and filter out even more detections. It is applied at the very end of the chain, after all the other filters have taken effect. Set `CLIP confidence level` to fine-tune the confidence level of the search
- - Set `AI filter` to send the image to the chosen AI tool to confirm the input prompt
- - Set `Image post processing` to process notification images:
-   - MarkBoundaries will draw a coloured rectangle around the detected object
-   - Crop will crop the image around the detected object
-
- #### Audio classification
-
- Detection rules can also detect audio labels, e.g. crying, scream, speech. This is possible by adding the "Audio" label in the detection classes setting. A new setting will appear to specify which labels to consider. The classifier to use can be set in the Audio analysis section of the camera device:
-
- - YAMNET ('YAMNet Audio Classification' plugin); this will be used by the plugin's onboard audio analyser
- - DISABLED: no classifier will be run; external plugins, such as Frigate Bridge, will still be able to forward events
-
- The impact of the onboard classifier is relatively small and it can completely replace the smart audio sensor
-
- ### Recording (only on camera)
-
- These rules let the cameras record configurable videoclips. Mostly use these on cameras where the NVR might be overkill; the app will still show the clips in a nice way
-
- They are based on some criteria:
-
- - Detection classes: what should initially trigger the recording
- - Score threshold: the minimum score to trigger the recording; leave empty for any detection
- - Minimum delay between clips: how many seconds to wait, at minimum, before recording the following clip
- - Post event seconds: how many seconds to record, at minimum, after the recording starts. The default camera prebuffer will add extra pre-event seconds on top
- - Max clip length: cap in seconds on the clip; subsequent detections will prolong the original clip length
- - Prolong clip on motion: prolong the clip for simple motion events as well
-
- ### Occupancy (only on camera)
-
- These rules will monitor a specific area to mark it as occupied or not
-
- - Make sure to set an object detector on the plugin page under Rules => `Object Detector`
- - Create a new rule by adding a new text in the `Occupancy rules` selector and hitting save. A new tab will appear
- - Set the activation type
- - Set the notifiers to notify on the occupancy change
- - Set the detection class to monitor
- - Set the camera zone to monitor; it must be an `Observe` type zone defined in the `Object detection` section of the camera
- - (Optional) set a capture zone to reduce the frame used for the detection; this may increase the success rate
- - Set `Zone type`
-   - `Intersect` if the objects can be considered detected when falling in any portion of the zone
-   - `Contain` if the objects should be completely included in the detection zone
- - Set a `Score threshold`; in case of static detections it should be pretty low (default 0.3)
- - Set `Occupancy confirmation`, a confirmation period in seconds to avoid false results. Set it depending on your specific case
- - Set `Force update in seconds` to force an occupancy check if no detection happens. Any detection running on the camera will in any case check all the occupancy rules
- - Set the `Max objects` the zone can contain. The zone will be marked as occupied if the detected objects are >= the number set here
- - Set a text in both `Zone occupied text` and `Zone not occupied text` for the notification texts
- - Activate `Confirm occupancy with AI` to confirm occupancy results and reduce false positives even further. The prompt can be customized under the plugin AI section. Results may vary depending on the model used
-
- ### Timelapse (only on camera)
-
- Define a timeframe; the plugin will collect frames from the camera and generate a clip out of them at the end of the defined range. All the generated timelapses will be available as videoclips in the NVR app, but only if `Enable Camera` on the plugin page is enabled.
-
- - Create a new rule by adding a new text in the `Timelapse rules` selector and hitting save. A new tab will appear
- - Define the week days and the start/end times, e.g. start at 11pm and end at 8am
- - Set the notifiers to notify with the generated clip
-   - If a Home Assistant notifier is used and the final clip is <50 MB, the clip will be shown as a preview of the push notification!
- - Set a `Notification text` for the notification message
- - Set a `Frames acquisition delay`; a frame will be captured according to this. Each non-motion detection will always add a frame
-   - In the future it will be possible to add frames based on specific detection classes and even small clips
- - Set a `Timelapse framerate`; this will depend on the timespan you choose and how long you want the final clip to be
- - Use the `Generate now` button to reuse the frames collected in the previous session. They are stored until the following session starts
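The removed Timelapse section relates the acquisition delay and the framerate to the final clip length. A small illustrative sketch of that arithmetic (the function is mine, not plugin code):

```typescript
// Illustrative arithmetic only: estimate the final timelapse clip length
// from the session span, the "Frames acquisition delay", and the
// "Timelapse framerate" described in the removed README.
function timelapseLengthSeconds(
  sessionSeconds: number,          // e.g. 11pm -> 8am = 9 hours
  acquisitionDelaySeconds: number, // one frame captured every N seconds
  framerate: number,               // playback frames per second
): number {
  const frames = Math.floor(sessionSeconds / acquisitionDelaySeconds);
  return frames / framerate;
}

// 9 hours of collection, one frame every 10 s, played back at 30 fps.
const clipSeconds = timelapseLengthSeconds(9 * 3600, 10, 30);
```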
- ### Audio (only on camera)
-
- **Audio rules will activate only if a source of audio measurement is active. The plugin provides an onboard audio analyser, which will activate when any audio rule is running.**
- Audio rules will monitor the audio received by the camera
-
- - Create a new rule by adding a new text in the `Audio rules` selector and hitting save. A new tab will appear
- - Set the notifiers to notify on the event
- - Set a `Notification text` for the notification message
- - Set a `Decibel threshold` for the audio level to alert on
- - Set `Duration in seconds` if the audio should last at least this many seconds to trigger a notification. Leave blank to notify right away
-
- ## Sequences
-
- In the sequences section, on the plugin page, you will be able to define a custom sequence with various steps. Each sequence can contain actions such as: Wait, Script, PTZ preset, Switch on/off, Lock, Entry open/close.
-
- Sequences can be attached to rules at different moments:
-
- - **On-activation sequences** (rules with activation type other than Always): run when the rule becomes active.
- - **On-deactivation sequences**: run when the rule becomes inactive.
- - **On-trigger sequences**: run when the rule is triggered (e.g. detection matched, occupancy change).
- - **On-reset sequences**: run when the rule is reset (e.g. after the trigger window ends).
- - **On-generated sequences**: run when all artifacts for that rule have been generated (video clip, gif, image). Available for Detection, Occupancy and Timelapse rules.
-
- When a sequence is run with a **payload** (e.g. On-generated), Script actions receive it as run variables. In Scrypted scripts, access it via `variables.payload`.
-
- ### Payload for On-generated sequences
-
- For **On-generated sequences**, the payload passed to scripts has the following shape:
-
- - **Detection and Occupancy rules**: `{ rule, videoUrl?, gifUrl?, imageUrl }`
-   - `rule`: the rule object (name, notifiers, settings, etc.).
-   - `videoUrl`: URL of the generated video clip (if clip type is MP4).
-   - `gifUrl`: URL of the generated GIF (if clip type is GIF).
-   - `imageUrl`: URL of the stored snapshot used for the notification.
-
- - **Timelapse rules**: `{ rule, videoUrl, imageUrl, videoPath?, imagePath? }`
-   - `rule`: the timelapse rule object.
-   - `videoUrl`: URL of the generated timelapse video.
-   - `imageUrl`: URL of the generated thumbnail image.
-   - `videoPath`, `imagePath`: filesystem paths where the video and image were saved (when available).
-
- (Initially only a few action types will be available; ask if you wish for something more specific)
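The On-generated payload shapes in the removed README can be sketched as TypeScript types. The property names come from the README; the type names, the simplified `RuleRef`, and the `describePayload` helper are illustrative assumptions:

```typescript
// Sketch of the On-generated payloads described in the removed README.
interface RuleRef { name: string } // simplified stand-in for the full rule object

interface DetectionOccupancyGeneratedPayload {
  rule: RuleRef;
  videoUrl?: string; // present when the clip type is MP4
  gifUrl?: string;   // present when the clip type is GIF
  imageUrl: string;  // stored snapshot used for the notification
}

// Example of consuming it inside a Script action, where it arrives
// as `variables.payload`:
function describePayload(p: DetectionOccupancyGeneratedPayload): string {
  const media = p.videoUrl ?? p.gifUrl ?? p.imageUrl;
  return `${p.rule.name}: ${media}`;
}

const text = describePayload({
  rule: { name: "frontDoor" },
  imageUrl: "http://example/img.jpg",
});
```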
- ## Stored images
-
- The plugin will store images on the filesystem, if configured, for every basic detection and rule. Set the following configuration on the plugin page under the Storage tab:
-
- - `Storage path`: if set, the images used to populate MQTT topics will also be stored at this drive path
-
- ## Additional camera settings
-
- - `Minimum snapshot acquisition delay`: minimum seconds to wait until a new snapshot can be taken from a camera; keep it around 5 seconds for cameras with weak hardware
- - `Off motion duration`: amount of seconds after which motion is considered ended for rules/detections affecting the camera. It will override the motion off events
- - `Snapshot from Decoder`: take snapshots from the camera's decoded stream. If set to `Always` it will be active only if any detection rule with videoclips, timelapse, or occupancy rule is running. If set to `OnMotion` it will run only during motion sessions, useful if your camera gives many snapshot timeout errors. `Auto` is the default and regulates it when required
- - Set `Minimum notification delay` to debounce further notifications
- - Set `Minimum MQTT publish delay` to debounce the image update on MQTT for the basic detections
- ## Webhooks
-
- Some basic webhooks are available
-
- ### Latest snapshot
-
- Provides the latest registered image for each type. The base URL is provided in the camera settings; {IMAGE_NAME} should be replaced with one of the following:
-
- - `object-detection__{ motion | any_object | animal | person | vehicle }`
- - `object-detection__{ motion | any_object | animal | person | vehicle }__{ NVR | Frigate }`
- - `object-detection__face-{ known person label }`
- - `object-detection__face-{ known person label }_{ NVR | Frigate }`
- - `ruleImage__{ ruleName }`
- - `ruleClip__{ ruleName }`
- - `ruleGif__{ ruleName }`
-
- The base path is available under each camera in the section Advanced Notifier -> Webhooks.
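The `{IMAGE_NAME}` substitution described in the removed Latest snapshot section can be sketched as a one-liner. The base URL used below is a made-up placeholder (the real base path comes from the camera's Webhooks section); only the `{IMAGE_NAME}` values come from the README:

```typescript
// Hypothetical helper: substitute {IMAGE_NAME} into the webhook base URL
// taken from the camera's Advanced Notifier -> Webhooks section.
function latestSnapshotUrl(baseUrl: string, imageName: string): string {
  return baseUrl.replace("{IMAGE_NAME}", encodeURIComponent(imageName));
}

const url = latestSnapshotUrl(
  "http://scrypted.local/endpoint/snapshots/{IMAGE_NAME}", // placeholder base
  "object-detection__person",
);
```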
- ### POST detection images
-
- Provides multiple URLs; for each detection, POSTs a b64 image with some additional metadata. Filter on some classes and define a minimum delay.
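The removed README only says these webhooks POST a base64 image plus "some additional metadata". Purely for illustration, and with field names that are assumptions rather than the plugin's actual schema, a receiver-side body might look like:

```typescript
// Illustrative only: a plausible POST body carrying a base64 image plus
// metadata. All field names here are assumptions, not the plugin's schema.
function buildDetectionPost(
  imageBytes: Uint8Array,
  detectionClass: string,
  deviceName: string,
) {
  return {
    image: Buffer.from(imageBytes).toString("base64"), // b64-encoded JPEG bytes
    detectionClass,
    deviceName,
    timestamp: 0, // fixed value for the example
  };
}

const body = buildDetectionPost(new Uint8Array([104, 105]), "person", "Front door");
```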
- ## Advanced Alarm System
-
- The plugin provides a security system hooked into the plugin detection rules. To use it, the following is required:
-
- - Create 1 or more detection rules on the plugin page with activation type `AdvancedSecuritySystem` and set 1 or more modes to activate the rule
- - Set up the provided `Advanced security system` device with your preferred preferences, such as texts or devices that can be bypassed during activation
-
- The device is discovered on MQTT and is completely compatible with HomeKit.
+ **Documentation:** [https://advanced-notifier-docs.zentik.app/](https://advanced-notifier-docs.zentik.app/)
package/dist/plugin.zip
CHANGED

Binary file
package/package.json
CHANGED

@@ -5,8 +5,12 @@
    "type": "git",
    "url": "https://github.com/apocaliss92/scrypted-advanced-notifier"
  },
- "version": "4.8.39",
+ "version": "5.0.1",
  "scripts": {
+   "docs:dev": "cd docs && npm run dev",
+   "docs:build": "cd docs && npm run build",
+   "docs:start": "cd docs && npm run start",
+   "docs:install": "cd docs && npm install",
    "scrypted-setup-project": "scrypted-setup-project",
    "prescrypted-setup-project": "scrypted-package-json",
    "build": "scrypted-webpack",
package/docs/INTERFACE_DESCRIPTORS_MIGRATION.md
REMOVED

@@ -1,184 +0,0 @@
- # Migration from HTTP webhooks to the socket SDK with interfaceDescriptors
-
- Plan for exposing the Events App methods via **interfaceDescriptors** (like [@scrypted/llm](https://github.com/scryptedapp/llm)), eliminating the REST calls and using only the socket SDK.
-
- ---
-
- ## 1. How interfaceDescriptors works (LLM plugin)
-
- From the [LLM package.json](https://github.com/scryptedapp/llm/blob/main/package.json):
-
- ```json
- {
-   "scrypted": {
-     "interfaces": ["DeviceProvider", "UserDatabase", ...],
-     "interfaceDescriptors": {
-       "UserDatabase": {
-         "name": "UserDatabase",
-         "methods": ["openDatabase"],
-         "properties": []
-       }
-     }
-   }
- }
- ```
-
- - **interfaceDescriptors** declares custom interfaces with methods and properties
- - The Scrypted server uses these descriptors to expose the methods via RPC over the socket
- - The client can call `device.openDatabase()` instead of going over HTTP
-
- ---
-
- ## 2. Events App methods to expose (handleEventsAppRequest)
-
- | apimethod | payload | Notes |
- |-----------|---------|-------|
- | GetConfigs | — | |
- | GetCamerasStatus | — | |
- | GetEvents | fromDate, tillDate, limit, offset, sources, cameras, detectionClasses, eventSource, filter, groupingRange | |
- | GetVideoclips | fromDate, tillDate, limit, offset, cameras, detectionClasses | |
- | GetCameraDayData | deviceId, day | |
- | GetClusteredDayData | deviceId, days, bucketMs, enabledClasses, classFilter | |
- | GetClusterEvents | clusterId, deviceId, startMs, endMs | |
- | GetArtifacts | deviceId, day | |
- | GetLatestRuleArtifacts | deviceId, limit | |
- | RemoteLog | level, message | |
-
- ---
-
- ## 3. Changes to the advanced-notifier plugin
-
- ### 3.1 package.json — add interfaceDescriptors
-
- ```json
- {
-   "scrypted": {
-     "interfaces": ["Settings", "DeviceProvider", "MixinProvider", "HttpRequestHandler", "Videoclips", "LauncherApplication", "PushHandler"],
-     "interfaceDescriptors": {
-       "EventsAppApi": {
-         "name": "EventsAppApi",
-         "methods": [
-           "getConfigs",
-           "getCamerasStatus",
-           "getEvents",
-           "getVideoclips",
-           "getCameraDayData",
-           "getClusteredDayData",
-           "getClusterEvents",
-           "getArtifacts",
-           "getLatestRuleArtifacts",
-           "remoteLog"
-         ],
-         "properties": []
-       }
-     }
-   }
- }
- ```
-
- ### 3.2 utils.ts — interface constant
-
- ```ts
- export const EVENTS_APP_API_INTERFACE = "EventsAppApi";
- ```
-
- ### 3.3 main.ts — add the interface to the data fetcher
-
- In `onDeviceDiscovered` for DATA_FETCHER_NATIVE_ID:
-
- ```ts
- interfaces: [
-   ScryptedInterface.VideoClips,
-   ScryptedInterface.EventRecorder,
-   ScryptedInterface.Settings,
-   EVENTS_APP_API_INTERFACE, // <-- add
- ],
- ```
-
- ### 3.4 dataFetcher.ts — implement EventsAppApi
-
- The `AdvancedNotifierDataFetcher` class must implement public methods that map 1:1 to the apimethods. Example:
-
- ```ts
- // EventsAppApi interface
- async getConfigs(): Promise<{ cameras: ...; enabledDetectionSources: string[] }> {
-   const { statusCode, body } = await this.handleEventsAppRequest('GetConfigs', {});
-   if (statusCode !== 200) throw new Error(JSON.stringify(body));
-   return body as any;
- }
- async getCamerasStatus(): Promise<CamerasStatusResponse> { ... }
- async getEvents(payload: GetEventsPayload): Promise<GetEventsResponse> { ... }
- // ... etc
- ```
-
- Or, more cleanly: extract the logic from `handleEventsAppRequest` into dedicated methods and have `handleEventsAppRequest` call them, avoiding duplication.
-
- ---
-
- ## 4. How the Scrypted server handles interfaceDescriptors
-
- The Scrypted server (koush/scrypted) reads `interfaceDescriptors` from the plugin's `package.json`. When a device declares an interface in `interfaces`, the server:
-
- 1. Verifies that the interface is in `interfaceDescriptors` (for custom interfaces)
- 2. Exposes the methods via RPC over the Engine.IO socket
- 3. The `@scrypted/client` client can call `device.getConfigs()` and the call gets serialized and sent over the socket
-
- No server changes are needed: the support is already present. The client just has to use `client.systemManager.getDeviceById(deviceId)` and call the methods on the returned object.
-
- ---
-
- ## 5. Changes to the client (camstack / scrypted-an-frontend)
-
- ### 5.1 Finding the Events App device
-
- The "Advanced notifier data fetcher" device has type `API` and implements `EventsAppApi`. To get its ID:
-
- ```ts
- const state = client.systemManager.getSystemState();
- const eventsAppDeviceId = Object.entries(state).find(
-   ([_, d]) => (d as any)?.interfaces?.includes?.('EventsAppApi')
- )?.[0];
- ```
-
- Or search by name/type if the state exposes it.
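The device lookup described just above can be exercised without a live Scrypted client. A standalone sketch (the `SystemState` shape is a simplified assumption for the test; the lookup logic matches the snippet in the removed doc):

```typescript
// Standalone version of the interface-based device lookup: given a
// system-state map, find the id of the device exposing a custom interface.
type SystemState = Record<string, { interfaces?: string[] }>;

function findDeviceIdByInterface(state: SystemState, iface: string): string | undefined {
  return Object.entries(state).find(([, d]) => d.interfaces?.includes(iface))?.[0];
}

// Mock state with two devices; only "34" implements EventsAppApi.
const state: SystemState = {
  "12": { interfaces: ["Camera"] },
  "34": { interfaces: ["Settings", "EventsAppApi"] },
};
const id = findDeviceIdByInterface(state, "EventsAppApi");
```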
- ### 5.2 Replace fetch with SDK calls
-
- **Before (HTTP):**
- ```ts
- const res = await fetch(`${baseUrl}/eventsApp`, {
-   method: 'POST',
-   body: JSON.stringify({ apimethod: 'GetClusteredDayData', payload: { deviceId, days, bucketMs } }),
-   headers: { 'Content-Type': 'application/json', Authorization: getAuthHeader(auth) },
- });
- const data = await res.json();
- ```
-
- **After (socket):**
- ```ts
- const client = await getScryptedClient(auth);
- const device = client.systemManager.getDeviceById(eventsAppDeviceId) as EventsAppApi;
- const data = await device.getClusteredDayData({ deviceId, days, bucketMs });
- ```
-
- ### 5.3 What stays on HTTP
-
- - **Image/thumbnail/video URLs**: used in `<Image src={url} />` and `<Video source={{ uri }} />` — they must remain HTTP URLs. The plugin keeps serving `/eventThumbnail/...`, `/eventImage/...`, `/eventVideoclip/...` via HttpRequestHandler.
- - **Authentication**: the socket SDK already uses the client credentials (login with username/password). Basic auth is no longer needed for the data calls.
-
- ---
-
- ## 6. Implementation order
-
- 1. **Plugin**: add `interfaceDescriptors` and `EVENTS_APP_API_INTERFACE`, and implement the methods on `AdvancedNotifierDataFetcher`
- 2. **Keep HttpRequestHandler**: for the `apimethod` POST to `/eventsApp` — optional during the transition (fallback)
- 3. **Client**: create an `eventsAppSdk.ts` that uses the socket; `eventsAppApi.ts` can switch to the SDK when the client is connected
- 4. **Remove** the fetch calls to `/eventsApp` from the client once the SDK is validated
-
- ---
-
- ## 7. References
-
- - [LLM plugin package.json](https://github.com/scryptedapp/llm/blob/main/package.json) — interfaceDescriptors example
- - [Scrypted Developer Docs](https://developer.scrypted.app/) — interfaces and plugins
- - [@scrypted/client](https://www.npmjs.com/package/@scrypted/client) — client SDK with socket
@@ -1,203 +0,0 @@
# Plan: Advanced Notifier Recorder mixin and unified pipeline

Plan for an extensive change to the Advanced Notifier plugin: **replace** the current decoder, audio ffmpeg, and recorder with **a single pipeline** that supports on-demand clips and recording with retention (motion, detection, etc.), and move all event/recording logic into a new **Advanced Notifier Recorder** mixin.

---

## 1. Current situation (to be replaced)

### 1.1 Separate components

| Component | File | Role | Input | Output |
|-----------|------|------|-------|--------|
| **Decoder** | `cameraMixin.ts` | Frame loop for motion/detection | `getVideoStream(decoderStream)` (video only, no audio) | JPEG in `lastFrame` + `storeDecoderFrame()` |
| **Audio** | `audioAnalyzerUtils.ts` | Volume analysis/classification | RTSP → ffmpeg `-vn -dn -sn` → PCM 16 kHz mono | `audio` events → `processAudioDetection` → `addMotionEvent` |
| **Recording** | `videoRecorderUtils.ts` | Clips on trigger (recording rules) | RTSP → ffmpeg `-c:v copy\|libx264` (no audio in practice) | `.mp4` files in `recordedEvents/` |

Problems:

- **Three separate consumers** of the stream (decoder, audio ffmpeg, recording ffmpeg) = more RTSP connections and more load.
- **No single video+audio pipeline**: the decoder has no audio, the recorder has no real audio, and audio comes from a second ffmpeg.
- **Events and recording** live in `cameraMixin` + `main.ts`; there is no dedicated "recorder/events" module.

### 1.2 Where events and clips live today

- **Event writes:** `main.ts` → `storeEventImage()`, `addMotionEvent()` → `enqueueDbWrite` → `writeEventsAndMotionBatch()` in `db.ts` (path `storagePath/{deviceId}/events/dbs/{YYYYMMDD}.json`).
- **Recording trigger:** `cameraMixin.processAccumulatedDetections()` → `startRecording({ triggerTime, rules, candidates })`; extension handled by `ensureRecordingMotionCheckInterval()`.
- **Clips:** `cameraMixin.getVideoClipsInternal()` reads from `recordedEventsPath` and from rule-generated paths; `VideoRtspFfmpegRecorder` writes to `recordedEvents/`.

---

## 2. Goals

1. **A single pipeline** per camera device:
   - One input (video stream, with or without audio).
   - From there: **frames for analysis** (motion/detection), **audio** (if present), and **recording segments** (buffer/scratch + final clips).
2. **On-demand clips:** generate a clip from a time interval (e.g. "the last 30 s", or "from 12:00:00 for 60 s") using the pipeline, without spawning a second ad hoc ffmpeg.
3. **Recording with retention rules:** keep supporting "record on motion / on detection" with configurable rules; retention (e.g. "keep 7 days", "only events with a person") is handled by the new mixin.
4. **New "Advanced Notifier Recorder" mixin:** contains all the event + recording + clip logic; the camera mixin remains "analysis + notifications", and the plugin orchestrates and exposes the API.
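Goal 2 accepts two shapes of request ("last N seconds" vs. "from T for N seconds"). An illustrative helper, not part of the plugin, that normalizes both into an absolute window:

```typescript
// Hypothetical helper for goal 2: normalize both clip-request shapes into an
// absolute [startTime, endTime] window in epoch milliseconds.
type ClipRequest =
    | { lastSeconds: number }
    | { startTime: number; durationSeconds: number };

function resolveClipRange(req: ClipRequest, nowMs: number) {
    if ('lastSeconds' in req) {
        // "the last 30 s" → window ending now
        return { startTime: nowMs - req.lastSeconds * 1000, endTime: nowMs };
    }
    // "from 12:00:00 for 60 s" → window anchored at startTime
    return {
        startTime: req.startTime,
        endTime: req.startTime + req.durationSeconds * 1000,
    };
}
```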

---

## 3. Target architecture

### 3.1 Single pipeline (per camera)

```
                  ┌────────────────────────────────────────────────────────┐
                  │                UNIFIED RECORDER PIPELINE               │
Stream (RTSP/     │ ┌─────────────┐    ┌──────────────┐    ┌─────────────┐ │
getVideoStream)   │ │ Ingest      │───▶│ Circular     │───▶│ Outputs     │ │
─────────────────▶│ │ (demux +    │    │ Buffer       │    │ - Analysis  │ │
                  │ │ optional    │    │ (e.g. 60s)   │    │   frames    │ │
                  │ │ audio)      │    │              │    │ - Audio     │ │
                  │ └─────────────┘    └──────┬───────┘    │   chunks    │ │
                  │                           │            │ - Clips     │ │
                  │                           │            │ (on-demand  │ │
                  │                           │            │  or rule)   │ │
                  │                           ▼            └─────────────┘ │
                  │                    ┌──────────────┐                    │
                  │                    │ Retention    │                    │
                  │                    │ & clip       │                    │
                  │                    │ writer       │                    │
                  └────────────────────┴──────────────┴────────────────────┘
```

- **Ingest:** one ffmpeg process (or a single Scrypted consumer) that reads **one** stream (video + audio when available): demuxes, decodes video (and optionally audio), and writes into a **circular buffer** (segments in memory or on disk, e.g. 60 s rings).
- **Pipeline outputs:**
  - **Analysis:** frame copies (or callbacks) fed to the existing decoder / motion / detection logic (so the camera mixin keeps receiving frames without opening a second stream).
  - **Audio:** the same PCM chunks used for analysis (thresholds, YAMNET) and, when needed, for muxing into clips.
  - **Clips:**
    - **On-demand:** from the circular buffer plus an optional "live tail" → a [start, end] segment → an .mp4 (or other) file produced by the pipeline (segments already in a suitable format, or a short second ffmpeg pass).
    - **Retention rules:** when a rule says "record", the pipeline writes from buffer + live into a file in `recordedEvents/` (or a configured path), with optional post-processing (thumbnail, metadata).

### 3.2 New mixin: Advanced Notifier Recorder

- **Proposed name:** `AdvancedNotifierRecorderMixin` (file e.g. `src/recorderMixin.ts`).
- **Scrypted interfaces to consider:** `EventRecorder`, `VideoClips` (if already in use), plus a possible new interface for "on-demand clips" (e.g. `getClipForTimeRange(deviceId, startTime, endTime)`).
- **Responsibilities:**
  - **Events:** receive "events" and "motion" from the camera mixin (or from the pipeline) and write them to the DB (delegating to `main.ts` for `enqueueDbWrite`, or absorbing that logic if the queue also moves into the recorder).
  - **Recording:** manage recording rules (motion, detection, retention); start/stop recording segments through the **single pipeline** (no more separate `VideoRtspFfmpegRecorder`).
  - **On-demand clips:** expose an API for "generate a clip for [deviceId, start, end]" using the pipeline's buffer + writer.
  - **Retention:** clean up clips/segments according to retention rules (days, event type, disk space); possibly integrate with the "remove clips older than X" logic already in `main.ts`.
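The responsibilities above suggest a small API surface. A hypothetical sketch only: none of these names exist in the plugin yet, they merely make the contract from this plan concrete.

```typescript
// Hypothetical shape of the recorder mixin surface described above.
interface RecorderEvent {
    deviceId: string;
    timestamp: number;          // epoch ms
    detectionClasses: string[]; // e.g. ['person']
}

interface RecorderApi {
    addEvent(event: RecorderEvent): void;
    getClipForTimeRange(deviceId: string, startTime: number, endTime: number): string;
}

// Minimal in-memory stub, just to show the contract in use.
class InMemoryRecorder implements RecorderApi {
    private events: RecorderEvent[] = [];

    addEvent(event: RecorderEvent) {
        this.events.push(event);
    }

    eventCount() {
        return this.events.length;
    }

    getClipForTimeRange(deviceId: string, startTime: number, endTime: number): string {
        // A real implementation would concatenate buffer segments; here we
        // only derive a deterministic output path for the requested window.
        return `recordedEvents/${deviceId}/${startTime}-${endTime}.mp4`;
    }
}
```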

### 3.3 Role of the camera mixin after the refactor

- **Keep:** detection rules, notifications, occupancy, timelapse, UI settings (decoder type, stream destination, etc.).
- **Change:**
  - **Decoder:** no longer a loop that calls `getVideoStream()` on its own; instead it **receives frames from the recorder's pipeline** (or reads from a recorder API such as "get next frame for analysis"). This way there is a single consumer of the stream.
  - **Audio:** no more `AudioRtspFfmpegStream` in the camera mixin; the recorder exposes audio chunks (or a callback) and the camera mixin keeps calling `processAudioDetection` with those chunks.
  - **Recording:** no calls to `startRecording` / `VideoRtspFfmpegRecorder` from the camera mixin; the camera mixin tells the recorder "there is an event/motion matching a recording rule" and the **recorder** starts/extends the segment through the pipeline.
  - **Events:** the camera mixin can keep calling `plugin.storeEventImage()` and `plugin.addMotionEvent()`; their implementation can move into the recorder mixin (with the plugin delegating to it), so all "event writing" lives in one place.
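"Frames come from the pipeline" can be modeled as a subscription: the pipeline owns the stream and pushes frames to whoever needs them. A hedged sketch with illustrative names (nothing here exists in the plugin):

```typescript
// Hypothetical frame feed: the ingest side publishes decoded frames; the
// camera mixin's analysis loop subscribes instead of opening its own stream.
type FrameListener = (frame: Uint8Array, timestamp: number) => void;

class FrameFeed {
    private listeners = new Set<FrameListener>();

    subscribe(listener: FrameListener): () => void {
        this.listeners.add(listener);
        // Return an unsubscribe handle so a released mixin can detach.
        return () => { this.listeners.delete(listener); };
    }

    // Called by the ingest side for every decoded frame.
    publish(frame: Uint8Array, timestamp: number) {
        for (const listener of this.listeners) listener(frame, timestamp);
    }
}
```

With this shape, motion/detection keeps its current logic and only the frame source changes.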

### 3.4 Plugin (main.ts)

- **Composition:** alongside `AdvancedNotifierCameraMixin` and `AdvancedNotifierNotifierMixin`, introduce `AdvancedNotifierRecorderMixin`.
  - Option A: the **recorder is a mixin on the same camera** (same device, three mixins: notifier, camera, recorder). The single pipeline is owned by the recorder; camera and notifier "use" it through the recorder.
  - Option B: the recorder is a **separate "Recorder" device** per camera (one-to-one). The pipeline lives in the Recorder device; the camera mixin talks to it via the plugin.
- **Paths and storage:** `getRecordedEventPath`, `getEventPaths`, `storeEventImage`, and `addMotionEvent` can stay in `main.ts` as a facade delegating to the recorder mixin (per camera device), so the plugin's public API does not change.
- **DB queue:** `dbWriteQueue` / `enqueueDbWrite` / `runDbWriteProcess` can stay in `main.ts` or move into the recorder; either way the recorder must be able to write events/motion to the DB.

---

## 4. Single pipeline: technical detail

### 4.1 Implementation choice

- **Option 1 – Single FFmpeg (demux + buffer + tee):** one ffmpeg process that:
  - Reads RTSP (or receives the stream from Scrypted).
  - Demuxes video + audio.
  - Writes to a circular **segment file** ring (e.g. `segment_%03d.m4s` or similar) or to a **named pipe / shared memory** read by Node.
  - Optional: `tee` to send a copy to a second output (e.g. analysis).
  - Pros: one process, fewer connections. Cons: buffer/segment complexity and synchronization with "clip from interval".
- **Option 2 – Scrypted consumer + buffer in Node:** a single `getVideoStream()` (with audio if the backend supports it); in Node, a consumer that:
  - Reads frames (and audio, if any) and puts them into a **circular buffer** (e.g. a ring of segments in memory or on disk).
  - Exposes "buffer slices" for on-demand clips and "write from start to end" for recording.
  - Pros: maximum control in JS. Cons: potential overhead and complexity (codecs, muxing) if frames arrive already encoded.
- **Option 3 – Hybrid:** ffmpeg for ingest and an on-disk buffer (short segments, e.g. 5-10 s); a Node service that keeps an index (startTime → file) and, for on-demand clips, concatenates/remuxes with ffmpeg. Rule-driven recording = copying segments already written + appending live until the event ends.

Recommendation: start with **Option 3** to get a well-defined on-disk buffer and reliable on-demand clips; still unify on **a single ingest ffmpeg** (video+audio) producing segments, plus a Node "RecorderPipeline" that manages the index, retention, and clip generation.
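A minimal sketch of what the Option 3 ingest command could look like. The flags (`-c copy`, `-f segment`, `-segment_time`, `-reset_timestamps`) are standard ffmpeg options; the function name and its defaults are assumptions of this plan, not existing plugin code.

```typescript
// Build the argv for a hypothetical segmenting ingest process (Option 3).
function buildIngestArgs(rtspUrl: string, segmentDir: string, segmentSeconds = 5): string[] {
    return [
        '-rtsp_transport', 'tcp',    // TCP is more reliable than UDP for ingest
        '-i', rtspUrl,
        '-c', 'copy',                // no re-encode: demux/remux only
        '-f', 'segment',             // ffmpeg segment muxer
        '-segment_time', String(segmentSeconds),
        '-segment_format', 'mpegts', // TS segments concatenate trivially
        '-reset_timestamps', '1',    // each segment starts at t=0
        `${segmentDir}/segment_%05d.ts`,
    ];
}
```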

### 4.2 Circular buffer / segments

- **Format:** short segments (e.g. 5-15 s) in a format suitable for concatenation (e.g. fMP4 or MPEG-TS segments).
- **Index:** a structure (in memory or on file) mapping `[startTime, endTime]` → the list of segment files.
- **Retention:** a periodic job that removes segments beyond the retention window (or beyond the maximum space); "recorded" clips (saved in `recordedEvents/`) are permanent copies until their own retention kicks in.
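The index described above can be sketched as a small in-memory structure. This is an assumption-level sketch (the plan's `RecorderPipeline` does not exist yet); the only real requirements it encodes are overlap lookup and pruning:

```typescript
// Hypothetical segment index for the circular buffer: maps time ranges to
// segment files and answers "which segments cover [start, end]?".
interface Segment {
    file: string;
    startTime: number; // epoch ms
    endTime: number;   // epoch ms
}

class SegmentIndex {
    private segments: Segment[] = [];

    add(segment: Segment) {
        this.segments.push(segment);
    }

    // All segments overlapping the requested window, in chronological order.
    findSegments(startTime: number, endTime: number): Segment[] {
        return this.segments
            .filter(s => s.endTime > startTime && s.startTime < endTime)
            .sort((a, b) => a.startTime - b.startTime);
    }

    // Retention pass: drop segments that ended before the horizon.
    prune(olderThan: number): number {
        const before = this.segments.length;
        this.segments = this.segments.filter(s => s.endTime >= olderThan);
        return before - this.segments.length;
    }
}
```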

### 4.3 On-demand clips

- **Input:** `deviceId`, `startTime`, `endTime` (Unix timestamps or ms).
- **Logic:** from the pipeline (segment index), find the segments covering [startTime, endTime]; concatenate them (ffmpeg concat demuxer or copy) and write an .mp4 file; optionally extract a thumbnail at the clip midpoint.
- **Output:** the clip file path (and thumbnail) exposed via the API (e.g. `VideoClips.getVideoClip` or a new `getClipForTimeRange`).
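The concat step can be sketched as two pure functions: one producing the concat-demuxer list file content, one producing the ffmpeg argv. The `-f concat` / `-safe 0` flags are standard ffmpeg; the function names are illustrative.

```typescript
// One `file '...'` line per segment, as the ffmpeg concat demuxer expects.
// Note: paths containing single quotes would need escaping; omitted here.
function buildConcatList(segmentFiles: string[]): string {
    return segmentFiles.map(f => `file '${f}'`).join('\n');
}

function buildConcatArgs(listPath: string, outputPath: string): string[] {
    return [
        '-f', 'concat',
        '-safe', '0',  // allow absolute paths in the list file
        '-i', listPath,
        '-c', 'copy',  // remux only, no re-encode
        outputPath,
    ];
}
```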

### 4.4 Recording with retention rules

- **Rules:** as today (motion, detection classes, etc.) but interpreted by the **recorder mixin**.
- **Trigger:** the camera mixin (or the pipeline) signals "motion on" / "detection X"; the recorder checks the rules and decides "start recording" / "extend".
- **Write:** instead of spawning a separate `VideoRtspFfmpegRecorder`, the recorder tells the pipeline "from now on, write into a file in `recordedEvents/` until the event ends (or max duration)". The pipeline can:
  - copy from the buffer (segments already written) for the "pre-trigger" part (e.g. 30 s before),
  - then append live until "motion off" plus the post-buffer.
- **Retention:** rules such as "keep 7 days" or "only events with a person"; the recorder applies the cleanup to the files in `recordedEvents/` (and possibly to the buffer segments).
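The "keep 7 days, but keep person events" retention policy above reduces to a pure selection function. A hedged sketch (the clip metadata shape is assumed, not taken from plugin code):

```typescript
// Hypothetical retention pass: delete a clip only when it is both older than
// the retention horizon AND contains no protected detection class.
interface RecordedClip {
    path: string;
    timestamp: number;          // epoch ms
    detectionClasses: string[];
}

function selectClipsToDelete(
    clips: RecordedClip[],
    retentionDays: number,
    keepClasses: string[],
    nowMs: number,
): string[] {
    const horizon = nowMs - retentionDays * 24 * 60 * 60 * 1000;
    return clips
        .filter(clip =>
            clip.timestamp < horizon &&
            !clip.detectionClasses.some(c => keepClasses.includes(c)))
        .map(clip => clip.path);
}
```

Keeping the selection separate from the actual `fs` deletion makes the policy easy to test and to run as a periodic job.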

---

## 5. Implementation plan (phases)

### Phase 1 – Recorder foundations and event move

1. **Create `recorderMixin.ts`** (Advanced Notifier Recorder mixin).
   - Interfaces: at least what is needed for "events" and "clips" (EventRecorder / VideoClips if already in use).
   - Implement **delegation** of `storeEventImage` and `addMotionEvent`: for a camera device with the recorder mixin, the plugin forwards to the recorder; the recorder calls the same DB-write logic (or `enqueueDbWrite` moves into the recorder).
2. **Register the mixin in `main.ts`:** for cameras, also create the recorder mixin (same device or child device); keep the `storeEventImage` / `addMotionEvent` API on the plugin, delegating to the recorder.
3. **Test:** verify that events and motion keep being written and read exactly as today (Events App, Data Fetcher).
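The Phase 1 delegation can be sketched as a facade that preserves the plugin's public API while routing per-device. All names are illustrative; the real methods live in `main.ts` / `recorderMixin.ts`:

```typescript
// Hypothetical Phase 1 facade: forward to the device's recorder when one is
// registered, otherwise keep the legacy main.ts write path unchanged.
interface EventSink {
    addMotionEvent(deviceId: string, timestamp: number): void;
}

class PluginFacade implements EventSink {
    private recorders = new Map<string, EventSink>();

    constructor(private legacySink: EventSink) {}

    registerRecorder(deviceId: string, recorder: EventSink) {
        this.recorders.set(deviceId, recorder);
    }

    addMotionEvent(deviceId: string, timestamp: number) {
        // Public API is unchanged; only the routing is new.
        const sink = this.recorders.get(deviceId) ?? this.legacySink;
        sink.addMotionEvent(deviceId, timestamp);
    }
}
```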

### Phase 2 – Single pipeline (ingest + buffer)

1. **"RecorderPipeline" module** (e.g. `src/recorderPipeline.ts` or under `src/recorder/`):
   - Ingest: one ffmpeg process reading **one** stream (RTSP, or a URL from `getVideoStream` where possible) with **video + audio**, outputting segments (fMP4 or TS).
   - Parameters: cameraId, stream URL, segment directory path, segment length, buffer length (e.g. 60 s = 12 segments of 5 s).
   - Segment writing plus index (startTime/endTime per segment).
2. **Integration in the recorder mixin:** start the pipeline for a camera at startup (or on demand, when recording/clips are needed); stop it when the camera is released.
3. **Replace the audio analyzer:** instead of starting `AudioRtspFfmpegStream`, the recorder reads audio from the pipeline (from the segments, or from a dedicated audio-only ffmpeg tee output). Provide the chunks to the camera mixin for `processAudioDetection` (same API).
4. **Replace the decoder:** the decoder no longer calls `getVideoStream()` directly; the pipeline exposes "frames for analysis" (e.g. frame extraction from segments with ffmpeg, or a video tee output consumed by the camera mixin). The camera mixin keeps running motion/detection on the frames provided this way.

### Phase 3 – Recording and clips from the pipeline

1. **Recording:** remove `VideoRtspFfmpegRecorder` and `startRecording` from the camera mixin. In the recorder:
   - On "start recording" (from the camera mixin or from internal rules), ask the pipeline to "save from buffer[start] plus live until stop".
   - Implement "extend on motion" by reading the motion state from the pipeline/camera mixin.
2. **On-demand clips:** implement `getClipForTimeRange(deviceId, startTime, endTime)` (or equivalent) using the segment index; concatenate, write the .mp4, and return the path or URL.
3. **Retention:** a recorder job applying retention rules to `recordedEvents/` and the buffer segments; integrate with the clip-removal logic already in `main.ts` (e.g. by moving it into the recorder).

### Phase 4 – Cleanup and optional items

1. **Remove** from `cameraMixin.ts`: `startRecording`, `stopRecording`, `ensureRecordingMotionCheckInterval`, the use of `VideoRtspFfmpegRecorder` and `AudioRtspFfmpegStream` (replaced by the pipeline), and the "standalone" decoder loop (replaced by pipeline frames).
2. **Deprecate** (or remove) `videoRecorderUtils.ts` and `audioAnalyzerUtils.ts` in their current form; possibly keep reusable helpers (e.g. thumbnail extraction) inside the pipeline/recorder module.
3. **Documentation:** update the README and docs for "single pipeline", "recorder mixin", and "retention rules".
4. **Settings:** move the "recording rules", "retention", "buffer length", and "decoder source" settings (pipeline vs. legacy, if a fallback is kept) into the recorder mixin or a shared "Recording" section.

---

## 6. Summary of touched / new files

| Action | File |
|--------|------|
| **New** | `src/recorderMixin.ts` – Advanced Notifier Recorder mixin (events, recording, clips, retention). |
| **New** | `src/recorderPipeline.ts` (or `src/recorder/`) – ffmpeg ingest, segment buffer, index, clip export. |
| **Change** | `src/main.ts` – Recorder mixin registration, delegation of `storeEventImage`/`addMotionEvent` to the recorder, possible DB queue move. |
| **Change** | `src/cameraMixin.ts` – Remove standalone decoder, audio analyzer, startRecording/VideoRtspFfmpegRecorder; receive frames and audio from the pipeline/recorder; keep detection, notifications, rules. |
| **Change** | `src/db.ts` – Only if event writes move into the recorder (same schema, different caller). |
| **Deprecate/remove** | `src/videoRecorderUtils.ts` – Replaced by the pipeline. |
| **Deprecate/remove** | `src/audioAnalyzerUtils.ts` – Replaced by pipeline audio. |

---

## 7. Risks and mitigations

- **Compatibility:** keep the plugin's public API (EventRecorder, VideoClips, getVideoClips, getRecordedEventPath, storeEventImage, addMotionEvent) so that CamStack and other clients do not need to change.
- **Performance:** a single ffmpeg per camera can be a single point of failure; plan automatic restart and backoff as in `VideoRtspFfmpegRecorder`/`AudioRtspFfmpegStream`.
- **Disk:** the on-disk circular buffer consumes space; configure a maximum length and a clear retention policy.
- **Migration:** for a gradual rollout, keep a "legacy mode" (decoder + audio ffmpeg + VideoRtspFfmpegRecorder) behind a "Use unified recorder pipeline" setting, enabling the new pipeline only when the setting is on.

---

This plan can be used as the basis for incremental issues, tasks, and PRs (one phase at a time).