@blueharford/scrypted-spatial-awareness 0.4.6 → 0.4.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,428 +1,152 @@
  # Spatial Awareness - Scrypted Plugin

- Cross-camera object tracking for Scrypted NVR with spatial awareness capabilities.
+ Cross-camera object tracking for Scrypted NVR. Track people, vehicles, and animals as they move across your property.

- ## Why This Plugin?
+ ## What It Does

- **Traditional camera notifications** tell you *what* was detected on *which* camera:
- > "Person detected on Front Door camera"
+ Instead of getting separate "person detected" alerts from each camera, get one coherent narrative:

- **Spatial Awareness** tells you *where they came from* and *where they're going*:
- > "Man in blue jacket walking from Garage towards Front Door"
+ > "Man in blue jacket entered via Driveway, walked to Front Door, left package, exited via Driveway" (2 min on property)

- This plugin **tracks objects across your entire camera system**, understanding that the person who got out of a car in your driveway is the same person now walking to your front door. Instead of getting 5 separate "person detected" alerts from 5 cameras, you get one coherent narrative of movement across your property.
+ | Traditional Alerts | Spatial Awareness |
+ |-------------------|-------------------|
+ | Alert per camera | One alert per movement |
+ | "Person on Camera X" | "Person moving from X to Y" |
+ | No identity tracking | Same object tracked across cameras |
+ | Basic detection | Movement patterns, dwell time, unusual paths |

- ### Key Differences from Normal Notifications
+ ## Quick Start

- | Feature | Normal Notifications | Spatial Awareness |
- |---------|---------------------|-------------------|
- | **Scope** | Single camera | Entire property |
- | **Identity** | New detection each camera | Same object tracked across cameras |
- | **Context** | "Person on Camera X" | "Person moving from X towards Y" |
- | **Alert Volume** | Alert per camera per detection | One alert per significant movement |
- | **Intelligence** | Basic detection | Movement patterns, unusual paths, dwell time |
- | **LLM Integration** | None | Rich descriptions like "Woman with dog" |
-
- ## Use Cases
-
- ### Home Security
- - **Delivery Tracking**: "Person arrived via Driveway, walked to Front Door, left package, exited via Driveway" (2 minutes on property)
- - **Suspicious Activity**: "Person entered via Back Fence, lingered 5 minutes near Garage, unusual path - did not use normal entry points"
- - **Family Awareness**: Know when family members arrive home and their path through the property
-
- ### Vehicle Monitoring
- - **Guest Arrivals**: "Black SUV entered via Street, parked in Driveway"
- - **Unusual Vehicles**: "Unknown vehicle circling property - seen on Street Camera, Side Camera, Street Camera again"
-
- ### Pet & Animal Tracking
- - **Pet Location**: Track your dog's movement through the yard
- - **Wildlife Alerts**: "Deer moving from Back Yard towards Garden"
-
- ### Property Management
- - **Worker Tracking**: Know when contractors arrive, where they go, when they leave
- - **Occupancy Patterns**: Understand traffic flow through your property
-
- ## Features
-
- ### Core Tracking
- - **Cross-Camera Tracking**: Correlate objects (people, vehicles, animals) as they move between cameras
- - **Journey History**: Complete path history for each tracked object across your property
- - **Entry/Exit Detection**: Know when objects enter or leave your property
- - **Movement Alerts**: Get notified when objects move between camera zones
- - **Smart Cooldowns**: Prevent alert spam with per-object cooldowns
- - **Loitering Threshold**: Only alert after objects are visible for a configurable duration
- - **Multiple Notifiers**: Send alerts to multiple notification services simultaneously
-
- ### LLM-Enhanced Descriptions
- - **Rich Contextual Alerts**: Get alerts like "Man in red shirt walking from garage towards front door" (requires LLM plugin)
- - **Configurable Rate Limiting**: Prevent LLM API overload with configurable debounce intervals
- - **Automatic Fallback**: Falls back to basic notifications when LLM is slow or unavailable
- - **Configurable Timeouts**: Set maximum wait time for LLM responses
-
- ### Visual Floor Plan Editor
- - **Drag-and-Drop**: Place cameras, landmarks, and connections visually
- - **Live Tracking Overlay**: See tracked objects move across your floor plan in real-time
- - **Journey Visualization**: Click any tracked object to see their complete path drawn on the floor plan
- - **Drawing Tools**: Add walls, rooms, and labels without needing an image
-
- ### Spatial Intelligence
- - **Landmarks & Static Objects**: Define landmarks like mailbox, shed, driveway, deck to give the system spatial context
- - **Camera Context**: Describe where each camera is mounted and what it can see for richer descriptions
- - **Field of View Configuration**: Define camera FOV (simple angle or polygon) to understand coverage overlap
- - **RAG-Powered Reasoning**: Uses Retrieval-Augmented Generation to understand property layout for intelligent descriptions
- - **AI Landmark Suggestions**: System learns to identify landmarks from camera footage over time
- - **Spatial Relationships**: Auto-inferred relationships between cameras and landmarks based on position
-
- ### Automatic Learning
- - **Transit Time Learning**: Automatically adjusts connection transit times based on observed movement patterns
- - **Connection Suggestions**: System suggests new camera connections based on observed object movements
- - **Confidence Scoring**: Suggestions include confidence scores based on consistency of observations
- - **One-Click Approval**: Accept or reject suggestions directly from the topology editor
-
- ### Training Mode (NEW in v0.4.0)
- - **Guided Walkthrough**: Walk your property and let the system learn your camera layout
- - **Mobile-Optimized UI**: Designed for phone use while walking around
- - **Auto Camera Detection**: System detects you automatically as you walk
- - **Transit Time Recording**: Learns actual transit times between cameras
- - **Overlap Detection**: Identifies where camera coverage overlaps
- - **Landmark Marking**: Mark landmarks (mailbox, gate, etc.) as you encounter them
- - **One-Click Setup**: Apply training results to generate your complete topology
-
- ### Integrations
- - **MQTT Integration**: Export tracking data to Home Assistant for automations
- - **REST API**: Query tracked objects and journeys programmatically
-
- ## Installation
-
- ### From NPM (Recommended)
+ ### 1. Install
  ```bash
- npm install @blueharford/scrypted-spatial-awareness
- ```
-
- ### From Scrypted Plugin Repository
- 1. Open Scrypted Management Console
- 2. Go to Plugins
- 3. Search for "@blueharford/scrypted-spatial-awareness"
- 4. Click Install
-
- ## Getting Started: Training Mode (NEW in v0.4.0)
-
- The fastest way to set up Spatial Awareness is using **Training Mode** - a guided walkthrough where you physically walk around your property while the system learns your camera layout.
-
- ### Why Training Mode?
-
- Instead of manually drawing connections and guessing transit times, simply:
- 1. Start training on your phone
- 2. Walk between cameras
- 3. The system automatically learns:
- - Which cameras can see you
- - How long it takes to walk between cameras
- - Where cameras overlap
- - Your property's layout
-
- ### Quick Start
-
- 1. **Open Training Mode**
- - Navigate to: `/endpoint/@blueharford/scrypted-spatial-awareness/ui/training`
- - Or scan the QR code in the plugin settings (mobile-optimized)
-
- 2. **Start Training**
- - Tap "Start Training"
- - The system begins listening for person detections
-
- 3. **Walk Your Property**
- - Walk to each camera on your property
- - The system detects you automatically and records:
- - Camera positions
- - Transit times between cameras
- - Camera overlaps (when both cameras see you)
-
- 4. **Mark Landmarks** (Optional)
- - Tap the "Mark" tab to add landmarks as you encounter them
- - Select type (mailbox, gate, shed, etc.) and name
- - Landmarks are associated with the current camera
-
- 5. **End Training**
- - When finished, tap "End Training"
- - Review the statistics: cameras visited, transits recorded, landmarks marked
-
- 6. **Apply Results**
- - Tap "Apply Results" to generate your topology
- - The system creates camera connections with learned transit times
- - Open the Topology Editor to fine-tune if needed
-
- ### Training Tips
-
- - **Walk naturally** - Don't rush between cameras, walk at your normal pace
- - **Hit every camera** - Try to be detected by each camera at least once
- - **Create multiple transits** - Walk back and forth between cameras to improve accuracy
- - **Mark key landmarks** - Mailbox, gates, driveway end, etc. help with contextual alerts
- - **Re-train anytime** - Run training again to improve accuracy or add new cameras
-
- ### Mobile Access
-
- Training Mode is designed to be used on your phone while walking. Access via:
- ```
- https://[your-scrypted-server]/endpoint/@blueharford/scrypted-spatial-awareness/ui/training
+ npx scrypted install @blueharford/scrypted-spatial-awareness
  ```

- ## Setup (Manual)
-
- 1. **Configure Topology**:
- - Open the plugin settings
- - Click "Open Topology Editor"
- - Upload a floor plan image (or use the drawing tools to create one)
- - Place cameras on the floor plan
- - Mark entry/exit points
- - Draw connections between cameras with expected transit times
-
- 2. **Configure Alerts**:
- - Select one or more notifiers (Pushover, email, Home Assistant, etc.)
- - Adjust loitering threshold (how long before alerting)
- - Adjust per-object cooldown (prevent duplicate alerts)
- - Enable/disable specific alert types
-
- 3. **Optional - Enable LLM Descriptions**:
- - Install an LLM plugin (OpenAI, Ollama, etc.)
- - Enable "Use LLM for Rich Descriptions" in settings
- - Configure rate limiting and fallback options
- - Get alerts like "Woman with stroller" instead of just "Person"
+ ### 2. Train Your Property

- 4. **Optional - Enable MQTT**:
- - Enable MQTT integration
- - Configure broker URL and credentials
- - Use in Home Assistant automations
+ The fastest setup is **Training Mode** - walk your property while the system learns:

- 5. **Optional - Enable Learning Features**:
- - Enable "Learn Transit Times" to auto-adjust connection timing
- - Enable "Suggest Camera Connections" to discover new paths
- - Enable "Learn Landmarks from AI" for automatic landmark discovery
+ 1. Open plugin settings → click **Training Mode**
+ 2. Tap **Start Training** on your phone
+ 3. Walk between cameras naturally
+ 4. System auto-detects you and records transit times
+ 5. Tap **End Training** → **Apply Results**

- ## How It Works
+ Done! Your camera topology is configured.

- The plugin listens to object detection events from all configured cameras. When an object (person, car, animal, package) is detected:
+ ### 3. Configure Alerts

- 1. **Same Camera**: If the object is already being tracked on this camera, the sighting is added to its history
- 2. **Cross-Camera Correlation**: If the object disappeared from another camera recently, the plugin attempts to correlate using:
- - **Timing (30%)**: Does the transit time match the expected range?
- - **Visual (35%)**: Do the visual embeddings match (if available)?
- - **Spatial (25%)**: Was the object in the exit zone of the previous camera and entry zone of the new camera?
- - **Class (10%)**: Is it the same type of object?
- 3. **New Object**: If no correlation is found, a new tracked object is created
+ - Select notifiers (Pushover, email, etc.)
+ - Set loitering threshold (default: 3s)
+ - Set per-object cooldown (default: 30s)

- ### Loitering & Cooldown Logic
+ ## Features

- To prevent alert spam and reduce noise:
+ ### Core
+ - **Cross-Camera Tracking** - Correlate objects across cameras using timing, visual similarity, and spatial position
+ - **Journey History** - Complete path for each tracked object
+ - **Entry/Exit Detection** - Know when objects enter or leave your property
+ - **Smart Alerts** - Loitering thresholds and per-object cooldowns prevent spam

- - **Loitering Threshold**: Object must be visible for X seconds before triggering any alerts (default: 3 seconds). This prevents alerts for someone briefly passing through frame.
- - **Per-Object Cooldown**: After alerting for a specific tracked object, won't alert again for that same object for Y seconds (default: 30 seconds). This prevents "Person moving from A to B", "Person moving from B to C", "Person moving from C to D" spam.
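For illustration, a minimal TypeScript sketch of this gating logic (the names `TrackedObject` and `shouldAlert` and their fields are hypothetical, not the plugin's actual code):

```typescript
// Hypothetical sketch of the loitering threshold + per-object cooldown described above.
interface TrackedObject {
  id: string;
  firstSeen: number;      // ms timestamp of the first sighting
  lastAlertAt?: number;   // ms timestamp of the last alert for this object
}

const LOITER_MS = 3_000;    // loitering threshold (default: 3s)
const COOLDOWN_MS = 30_000; // per-object cooldown (default: 30s)

function shouldAlert(obj: TrackedObject, now = Date.now()): boolean {
  const visibleLongEnough = now - obj.firstSeen >= LOITER_MS;       // skip brief pass-throughs
  const offCooldown = !obj.lastAlertAt || now - obj.lastAlertAt >= COOLDOWN_MS; // skip A→B, B→C spam
  return visibleLongEnough && offCooldown;
}
```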
+ ### Visual Editor
+ - **Floor Plan** - Upload image or draw with built-in tools
+ - **Drag & Drop** - Place cameras, draw connections
+ - **Live Tracking** - Watch objects move in real-time

- ### LLM Integration
+ ### AI Features (optional)
+ - **LLM Descriptions** - "Woman with stroller" instead of just "Person"
+ - **Auto-Learning** - Transit times adjust based on observations
+ - **Connection Suggestions** - System suggests new camera paths
+ - **Landmark Discovery** - AI identifies landmarks from footage

- When an LLM plugin is installed and enabled, the plugin will:
- 1. Check rate limiting (configurable, default: 10 second minimum between calls)
- 2. Capture a snapshot from the camera
- 3. Send it to the LLM with context about the movement
- 4. Apply timeout (configurable, default: 3 seconds) with automatic fallback
- 5. Get a rich description like "Man in blue jacket" or "Black pickup truck"
- 6. Include this in the notification
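A rough TypeScript sketch of that rate-limit → snapshot → LLM → timeout-with-fallback flow, assuming hypothetical `takeSnapshot` and `describeWithLlm` callbacks (this is not the plugin's actual implementation):

```typescript
const LLM_RATE_LIMIT_MS = 10_000; // default: 10s minimum between LLM calls
const LLM_TIMEOUT_MS = 3_000;     // default: 3s before falling back

let lastLlmCallAt = 0;

async function describeMovement(
  basicMessage: string,                                              // e.g. "Person moving from Garage towards Front Door"
  takeSnapshot: () => Promise<Uint8Array>,                           // hypothetical placeholder
  describeWithLlm: (snapshot: Uint8Array, context: string) => Promise<string>, // hypothetical placeholder
): Promise<string> {
  const now = Date.now();
  if (now - lastLlmCallAt < LLM_RATE_LIMIT_MS)
    return basicMessage;                        // rate limited: keep the basic text
  lastLlmCallAt = now;

  const fallback = new Promise<string>(resolve =>
    setTimeout(() => resolve(basicMessage), LLM_TIMEOUT_MS));
  try {
    const snapshot = await takeSnapshot();      // step 2: capture a snapshot
    // steps 3-5: ask the LLM, but never wait longer than the timeout
    return await Promise.race([describeWithLlm(snapshot, basicMessage), fallback]);
  } catch {
    return basicMessage;                        // LLM error: automatic fallback
  }
}
```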
+ ### Integrations
+ - **MQTT** - Home Assistant integration
+ - **REST API** - Query tracked objects programmatically

- This transforms generic alerts into contextual, actionable information.
+ ## Configuration

- ## Configuration Options
+ ### Settings

- ### Tracking Settings
- | Setting | Default | Description |
- |---------|---------|-------------|
- | Correlation Window | 30s | Maximum time to wait for cross-camera correlation |
- | Correlation Threshold | 0.6 | Minimum confidence (0-1) for automatic correlation |
- | Lost Timeout | 300s | Time before marking an object as lost |
- | Visual Matching | ON | Use visual embeddings for correlation |
- | Loitering Threshold | 3s | Object must be visible this long before alerting |
- | Per-Object Cooldown | 30s | Minimum time between alerts for same object |
-
- ### AI & Spatial Reasoning Settings (NEW in v0.3.0)
  | Setting | Default | Description |
  |---------|---------|-------------|
- | LLM Descriptions | ON | Use LLM plugin for rich descriptions |
- | LLM Rate Limit | 10s | Minimum time between LLM API calls |
- | Fallback to Basic | ON | Use basic notifications when LLM unavailable |
- | LLM Timeout | 3s | Maximum time to wait for LLM response |
- | Learn Transit Times | ON | Auto-adjust transit times from observations |
- | Suggest Connections | ON | Suggest new camera connections |
- | Learn Landmarks | ON | Allow AI to suggest landmarks |
- | Landmark Confidence | 0.7 | Minimum confidence for landmark suggestions |
+ | Correlation Window | 30s | Max time for cross-camera matching |
+ | Correlation Threshold | 0.6 | Min confidence for auto-correlation |
+ | Loitering Threshold | 3s | Time before triggering alerts |
+ | Per-Object Cooldown | 30s | Min time between alerts for same object |
+ | LLM Rate Limit | 10s | Min time between LLM API calls |

  ### Alert Types
- | Alert | Description | Default |
- |-------|-------------|---------|
- | Property Entry | Object entered via an entry point | Enabled |
- | Property Exit | Object exited via an exit point | Enabled |
- | Movement | Object moved between cameras | Enabled |
- | Unusual Path | Object took an unexpected route | Enabled |
- | Dwell Time | Object lingered >5 minutes | Enabled |
- | Restricted Zone | Object entered a restricted zone | Enabled |
- | Lost Tracking | Object disappeared without exiting | Disabled |

- ### Notification Settings
- - **Notifiers**: Select multiple notification services to receive alerts
- - **Thumbnails**: Automatically includes camera snapshot with notifications
+ | Alert | Description |
+ |-------|-------------|
+ | Property Entry | Object entered via entry point |
+ | Property Exit | Object exited via exit point |
+ | Movement | Object moved between cameras |
+ | Unusual Path | Object took unexpected route |
+ | Dwell Time | Object lingered >5 minutes |

- ## API Endpoints
+ ## API

- The plugin exposes a REST API via Scrypted's HTTP handler:
+ Base URL: `/endpoint/@blueharford/scrypted-spatial-awareness`

- ### Core Endpoints
  | Endpoint | Method | Description |
  |----------|--------|-------------|
- | `/api/tracked-objects` | GET | List all tracked objects |
- | `/api/journey/{id}` | GET | Get journey for specific object |
- | `/api/journey-path/{id}` | GET | Get journey path with positions for visualization |
- | `/api/topology` | GET | Get camera topology configuration |
- | `/api/topology` | PUT | Update camera topology |
- | `/api/alerts` | GET | Get recent alerts |
- | `/api/alert-rules` | GET/PUT | Get or update alert rules |
- | `/api/cameras` | GET | List available cameras |
- | `/api/floor-plan` | GET/POST | Get or upload floor plan image |
+ | `/api/tracked-objects` | GET | List tracked objects |
+ | `/api/journey/{id}` | GET | Get object journey |
+ | `/api/topology` | GET/PUT | Camera topology |
+ | `/api/alerts` | GET | Recent alerts |
+ | `/api/live-tracking` | GET | Real-time object positions |
  | `/ui/editor` | GET | Visual topology editor |
+ | `/ui/training` | GET | Training mode UI |
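As an illustration, tracked objects and their journeys can be fetched with plain `fetch` calls against the endpoints above (the host is a placeholder and response field names such as `id` are assumptions; inspect the actual responses):

```typescript
// Sketch only: list tracked objects, then fetch each one's journey.
const base = 'https://your-scrypted-server/endpoint/@blueharford/scrypted-spatial-awareness';

async function showJourneys(): Promise<void> {
  const objects = (await (await fetch(`${base}/api/tracked-objects`)).json()) as any[];
  for (const obj of objects) {
    // `id` is an assumed field name on the tracked-object payload.
    const journey = await (await fetch(`${base}/api/journey/${obj.id}`)).json();
    console.log(obj, journey);
  }
}
```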

- ### Live Tracking Endpoints (NEW in v0.3.0)
- | Endpoint | Method | Description |
- |----------|--------|-------------|
- | `/api/live-tracking` | GET | Get current state of all tracked objects |
-
- ### Landmark & Spatial Reasoning Endpoints
- | Endpoint | Method | Description |
- |----------|--------|-------------|
- | `/api/landmarks` | GET | List all configured landmarks |
- | `/api/landmarks` | POST | Add a new landmark |
- | `/api/landmarks/{id}` | GET/PUT/DELETE | Get, update, or delete a landmark |
- | `/api/landmark-suggestions` | GET | Get AI-suggested landmarks |
- | `/api/landmark-suggestions/{id}/accept` | POST | Accept an AI suggestion |
- | `/api/landmark-suggestions/{id}/reject` | POST | Reject an AI suggestion |
- | `/api/landmark-templates` | GET | Get landmark templates for quick setup |
- | `/api/infer-relationships` | GET | Get auto-inferred spatial relationships |
-
- ### Connection Suggestion Endpoints
- | Endpoint | Method | Description |
- |----------|--------|-------------|
- | `/api/connection-suggestions` | GET | Get suggested camera connections |
- | `/api/connection-suggestions/{id}/accept` | POST | Accept a connection suggestion |
- | `/api/connection-suggestions/{id}/reject` | POST | Reject a connection suggestion |
+ ### Training API

- ### Training Mode Endpoints (NEW in v0.4.0)
  | Endpoint | Method | Description |
  |----------|--------|-------------|
- | `/api/training/start` | POST | Start a new training session |
- | `/api/training/pause` | POST | Pause the current training session |
- | `/api/training/resume` | POST | Resume a paused training session |
- | `/api/training/end` | POST | End the training session and get results |
- | `/api/training/status` | GET | Get current training status and stats |
- | `/api/training/landmark` | POST | Mark a landmark during training |
- | `/api/training/apply` | POST | Apply training results to topology |
- | `/ui/training` | GET | Mobile-optimized training UI |
+ | `/api/training/start` | POST | Start training session |
+ | `/api/training/end` | POST | End session, get results |
+ | `/api/training/apply` | POST | Apply results to topology |
+ | `/api/training/status` | GET | Current training status |
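A hypothetical end-to-end training session using these endpoints (the host and the response shapes are assumptions, not documented behavior):

```typescript
// Sketch only: start a session, check status while walking, then end and apply.
const api = 'https://your-scrypted-server/endpoint/@blueharford/scrypted-spatial-awareness/api';

async function runTrainingSession(): Promise<void> {
  await fetch(`${api}/training/start`, { method: 'POST' });   // begin the walkthrough

  // ...walk the property; poll progress whenever you like...
  console.log('status:', await (await fetch(`${api}/training/status`)).json());

  const results = await (await fetch(`${api}/training/end`, { method: 'POST' })).json();
  console.log('results:', results);

  await fetch(`${api}/training/apply`, { method: 'POST' });   // write the learned topology
}
```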

  ## MQTT Topics

- When MQTT is enabled, the plugin publishes to:
+ Base: `scrypted/spatial-awareness`

  | Topic | Description |
  |-------|-------------|
- | `{baseTopic}/occupancy/state` | ON/OFF property occupancy |
- | `{baseTopic}/count/state` | Number of active tracked objects |
- | `{baseTopic}/person_count/state` | Number of people on property |
- | `{baseTopic}/vehicle_count/state` | Number of vehicles on property |
- | `{baseTopic}/state` | Full JSON state with all objects |
- | `{baseTopic}/alerts` | Alert events |
- | `{baseTopic}/events/entry` | Entry events |
- | `{baseTopic}/events/exit` | Exit events |
- | `{baseTopic}/events/transition` | Camera transition events |
-
- Default base topic: `scrypted/spatial-awareness`
-
- ## Virtual Devices
-
- The plugin creates these virtual devices in Scrypted:
-
- ### Global Object Tracker
- - **Type**: Occupancy Sensor
- - **Purpose**: Shows whether any objects are currently tracked on the property
- - **Use**: Trigger automations when property becomes occupied/unoccupied
-
- ### Tracking Zones (User-Created)
- - **Type**: Motion + Occupancy Sensor
- - **Purpose**: Monitor specific areas across one or more cameras
- - **Types**: Entry, Exit, Dwell, Restricted
- - **Use**: Create zone-specific automations and alerts
-
- ## Example Alert Messages
-
- With LLM enabled:
- - "Man in blue jacket walking from Garage towards Front Door (5s transit)"
- - "Black SUV driving from Street towards Driveway"
- - "Woman with dog walking from Back Yard towards Side Gate"
- - "Delivery person entered property via Driveway"
-
- Without LLM:
- - "Person moving from Garage towards Front Door (5s transit)"
- - "Car moving from Street towards Driveway"
- - "Dog moving from Back Yard towards Side Gate"
-
- ## Changelog
-
- ### v0.4.0
- - **Training Mode**: Guided walkthrough to train the system by walking your property
- - **Mobile-Optimized Training UI**: Phone-friendly interface for training while walking
- - **Auto Camera Detection**: System automatically detects you as you walk between cameras
- - **Transit Time Learning**: Records actual transit times during training
- - **Camera Overlap Detection**: Identifies where multiple cameras see the same area
- - **Landmark Marking**: Mark landmarks (mailbox, gate, etc.) during training sessions
- - **One-Click Topology Generation**: Apply training results to create complete topology
-
- ### v0.3.0
- - **Live Tracking Overlay**: View tracked objects in real-time on the floor plan
- - **Journey Visualization**: Click any tracked object to see their complete path
- - **Transit Time Learning**: Automatically adjusts connection times based on observations
- - **Connection Suggestions**: System suggests new camera connections
- - **LLM Rate Limiting**: Configurable debounce intervals to prevent API overload
- - **LLM Fallback**: Automatic fallback to basic notifications when LLM is slow
- - **LLM Timeout**: Configurable timeout with automatic fallback
-
- ### v0.2.0
- - **Landmark System**: Add landmarks for spatial context
- - **RAG Reasoning**: Context-aware movement descriptions
- - **AI Learning**: Automatic landmark suggestions
- - **Camera Context**: Rich camera descriptions for better alerts
-
- ### v0.1.0
- - Initial release with cross-camera tracking
- - Entry/exit detection
- - Movement alerts
- - MQTT integration
- - Visual topology editor
+ | `/occupancy/state` | ON/OFF property occupancy |
+ | `/count/state` | Active object count |
+ | `/person_count/state` | People on property |
+ | `/alerts` | Alert events |
+ | `/events/entry` | Entry events |
+ | `/events/exit` | Exit events |
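For example, a consumer could subscribe to these topics with the `mqtt` npm package (the broker URL is a placeholder; topic paths follow the table above under the default base topic):

```typescript
import mqtt from 'mqtt';

const base = 'scrypted/spatial-awareness';               // default base topic
const client = mqtt.connect('mqtt://broker.local:1883'); // placeholder broker

client.on('connect', () => {
  client.subscribe([`${base}/occupancy/state`, `${base}/alerts`]);
});

client.on('message', (topic, payload) => {
  if (topic === `${base}/occupancy/state`) {
    console.log('property occupied:', payload.toString() === 'ON');
  } else {
    console.log('alert:', payload.toString());
  }
});
```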
+
+ ## How Correlation Works
+
+ When an object is detected on a new camera, the system scores potential matches:
+
+ | Factor | Weight | Description |
+ |--------|--------|-------------|
+ | Timing | 30% | Transit time within expected range |
+ | Visual | 35% | Embedding similarity (if available) |
+ | Spatial | 25% | Exit zone → Entry zone coherence |
+ | Class | 10% | Object type match |
+
+ Objects are correlated if the total score exceeds the threshold (default: 0.6).
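A minimal TypeScript sketch of this weighted score (field names are illustrative, not the plugin's internals):

```typescript
// Each component score is assumed to be normalized to the range 0-1.
interface CandidateMatch {
  timingScore: number;  // transit time vs expected range
  visualScore: number;  // embedding similarity, 0 if unavailable
  spatialScore: number; // exit zone → entry zone coherence
  classScore: number;   // object type match
}

const THRESHOLD = 0.6;  // default correlation threshold

function isSameObject(m: CandidateMatch): boolean {
  const score =
    0.30 * m.timingScore +
    0.35 * m.visualScore +
    0.25 * m.spatialScore +
    0.10 * m.classScore;
  return score >= THRESHOLD;
}
```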

  ## Requirements

  - Scrypted with NVR plugin
- - Cameras with object detection enabled (via Scrypted NVR, OpenVINO, CoreML, ONNX, or TensorFlow Lite)
- - Optional: LLM plugin for rich descriptions (OpenAI, Ollama, etc.)
- - Optional: MQTT broker for Home Assistant integration
+ - Cameras with object detection (NVR, OpenVINO, CoreML, etc.)
+ - Optional: LLM plugin for rich descriptions
+ - Optional: MQTT broker for Home Assistant

  ## Development

  ```bash
- # Install dependencies
  npm install
-
- # Build
  npm run build
-
- # Deploy to local Scrypted
  npm run scrypted-deploy
-
- # Debug in VS Code
- # Edit .vscode/settings.json with your Scrypted server IP
- # Press F5 to start debugging
  ```

  ## License