@probelabs/visor 0.1.130 → 0.1.131

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (134)
  1. package/README.md +7 -0
  2. package/defaults/visor.yaml +5 -2
  3. package/dist/ai-review-service.d.ts +2 -0
  4. package/dist/ai-review-service.d.ts.map +1 -1
  5. package/dist/cli-main.d.ts.map +1 -1
  6. package/dist/cli.d.ts.map +1 -1
  7. package/dist/config/cli-handler.d.ts +5 -0
  8. package/dist/config/cli-handler.d.ts.map +1 -0
  9. package/dist/config/config-reloader.d.ts +24 -0
  10. package/dist/config/config-reloader.d.ts.map +1 -0
  11. package/dist/config/config-snapshot-store.d.ts +21 -0
  12. package/dist/config/config-snapshot-store.d.ts.map +1 -0
  13. package/dist/config/config-watcher.d.ts +19 -0
  14. package/dist/config/config-watcher.d.ts.map +1 -0
  15. package/dist/config/types.d.ts +16 -0
  16. package/dist/config/types.d.ts.map +1 -0
  17. package/dist/defaults/visor.yaml +5 -2
  18. package/dist/docs/ai-configuration.md +139 -0
  19. package/dist/docs/ai-custom-tools.md +30 -0
  20. package/dist/docs/capacity-planning.md +359 -0
  21. package/dist/docs/commands.md +35 -0
  22. package/dist/docs/database-operations.md +487 -0
  23. package/dist/docs/index.md +6 -1
  24. package/dist/docs/licensing.md +372 -0
  25. package/dist/docs/production-deployment.md +583 -0
  26. package/dist/examples/ai-with-bash.yaml +17 -0
  27. package/dist/generated/config-schema.d.ts +4 -0
  28. package/dist/generated/config-schema.d.ts.map +1 -1
  29. package/dist/index.js +9945 -10907
  30. package/dist/liquid-extensions.d.ts +7 -0
  31. package/dist/liquid-extensions.d.ts.map +1 -1
  32. package/dist/output/traces/{run-2026-02-11T16-20-59-999Z.ndjson → run-2026-02-15T19-14-20-379Z.ndjson} +84 -84
  33. package/dist/{traces/run-2026-02-11T16-21-47-711Z.ndjson → output/traces/run-2026-02-15T19-15-09-410Z.ndjson} +1019 -1019
  34. package/dist/providers/ai-check-provider.d.ts +5 -0
  35. package/dist/providers/ai-check-provider.d.ts.map +1 -1
  36. package/dist/providers/command-check-provider.d.ts.map +1 -1
  37. package/dist/providers/workflow-check-provider.d.ts.map +1 -1
  38. package/dist/scheduler/schedule-tool.d.ts.map +1 -1
  39. package/dist/sdk/{check-provider-registry-PANIXYRB.mjs → check-provider-registry-AAPPJ4CP.mjs} +7 -7
  40. package/dist/sdk/{check-provider-registry-M3Y6JMTW.mjs → check-provider-registry-S7BMQ2FC.mjs} +7 -7
  41. package/dist/sdk/check-provider-registry-ZOLEYDKM.mjs +28 -0
  42. package/dist/sdk/{chunk-VMLORODQ.mjs → chunk-2GCSK3PD.mjs} +4 -4
  43. package/dist/sdk/{chunk-EUUAQBTW.mjs → chunk-6ZZ4DPAA.mjs} +240 -48
  44. package/dist/sdk/chunk-6ZZ4DPAA.mjs.map +1 -0
  45. package/dist/sdk/{chunk-HOKQOO3G.mjs → chunk-EBTD2D4L.mjs} +2 -2
  46. package/dist/sdk/chunk-LDFUW34H.mjs +39912 -0
  47. package/dist/sdk/chunk-LDFUW34H.mjs.map +1 -0
  48. package/dist/sdk/{chunk-UCNT3PDT.mjs → chunk-LQ5B4T6L.mjs} +5 -1
  49. package/dist/sdk/chunk-LQ5B4T6L.mjs.map +1 -0
  50. package/dist/sdk/{chunk-S6CD7GFM.mjs → chunk-MQ57AB4U.mjs} +211 -35
  51. package/dist/sdk/chunk-MQ57AB4U.mjs.map +1 -0
  52. package/dist/sdk/chunk-N4I6ZDCJ.mjs +436 -0
  53. package/dist/sdk/chunk-N4I6ZDCJ.mjs.map +1 -0
  54. package/dist/sdk/chunk-OMFPM576.mjs +739 -0
  55. package/dist/sdk/chunk-OMFPM576.mjs.map +1 -0
  56. package/dist/sdk/chunk-RI77TA6V.mjs +436 -0
  57. package/dist/sdk/chunk-RI77TA6V.mjs.map +1 -0
  58. package/dist/sdk/chunk-VO4N6TEL.mjs +1502 -0
  59. package/dist/sdk/chunk-VO4N6TEL.mjs.map +1 -0
  60. package/dist/sdk/{chunk-V2IV3ILA.mjs → chunk-XJQKTK6V.mjs} +31 -5
  61. package/dist/sdk/chunk-XJQKTK6V.mjs.map +1 -0
  62. package/dist/sdk/{config-OGOS4ZU4.mjs → config-4EG7IQIU.mjs} +2 -2
  63. package/dist/sdk/{failure-condition-evaluator-HC3M5377.mjs → failure-condition-evaluator-GLHZZF47.mjs} +3 -3
  64. package/dist/sdk/failure-condition-evaluator-KN55WXRO.mjs +17 -0
  65. package/dist/sdk/{github-frontend-E2KJSC3Y.mjs → github-frontend-F4TE2JY7.mjs} +3 -3
  66. package/dist/sdk/github-frontend-HCOKL53D.mjs +1356 -0
  67. package/dist/sdk/github-frontend-HCOKL53D.mjs.map +1 -0
  68. package/dist/sdk/{host-EE6EJ2FM.mjs → host-SAT6RHDX.mjs} +2 -2
  69. package/dist/sdk/host-VA3ET7N6.mjs +63 -0
  70. package/dist/sdk/host-VA3ET7N6.mjs.map +1 -0
  71. package/dist/sdk/{liquid-extensions-E4EUOCES.mjs → liquid-extensions-YDIIH33Q.mjs} +2 -2
  72. package/dist/sdk/{routing-OZQWAGAI.mjs → routing-KFYQGOYU.mjs} +5 -5
  73. package/dist/sdk/routing-OXQKETSA.mjs +25 -0
  74. package/dist/sdk/{schedule-tool-handler-IEB2VS7O.mjs → schedule-tool-handler-G353DHS6.mjs} +7 -7
  75. package/dist/sdk/{schedule-tool-handler-B7TMSG6A.mjs → schedule-tool-handler-OQF57URO.mjs} +7 -7
  76. package/dist/sdk/schedule-tool-handler-PJVKWSYX.mjs +38 -0
  77. package/dist/sdk/schedule-tool-handler-PJVKWSYX.mjs.map +1 -0
  78. package/dist/sdk/sdk.d.mts +15 -0
  79. package/dist/sdk/sdk.d.ts +15 -0
  80. package/dist/sdk/sdk.js +621 -183
  81. package/dist/sdk/sdk.js.map +1 -1
  82. package/dist/sdk/sdk.mjs +6 -6
  83. package/dist/sdk/{trace-helpers-PP3YHTAM.mjs → trace-helpers-LOPBHYYX.mjs} +4 -2
  84. package/dist/sdk/trace-helpers-LOPBHYYX.mjs.map +1 -0
  85. package/dist/sdk/trace-helpers-R2ETIEC2.mjs +25 -0
  86. package/dist/sdk/trace-helpers-R2ETIEC2.mjs.map +1 -0
  87. package/dist/sdk/{workflow-check-provider-2ET3SFZH.mjs → workflow-check-provider-57KAR4Y4.mjs} +7 -7
  88. package/dist/sdk/workflow-check-provider-57KAR4Y4.mjs.map +1 -0
  89. package/dist/sdk/{workflow-check-provider-HB4XTD4Z.mjs → workflow-check-provider-LRWD52WN.mjs} +7 -7
  90. package/dist/sdk/workflow-check-provider-LRWD52WN.mjs.map +1 -0
  91. package/dist/sdk/workflow-check-provider-N2DRFQDB.mjs +28 -0
  92. package/dist/sdk/workflow-check-provider-N2DRFQDB.mjs.map +1 -0
  93. package/dist/slack/socket-runner.d.ts.map +1 -1
  94. package/dist/state-machine/context/build-engine-context.d.ts.map +1 -1
  95. package/dist/state-machine/runner.d.ts.map +1 -1
  96. package/dist/state-machine/states/completed.d.ts.map +1 -1
  97. package/dist/telemetry/trace-helpers.d.ts +5 -0
  98. package/dist/telemetry/trace-helpers.d.ts.map +1 -1
  99. package/dist/test-runner/evaluators.d.ts.map +1 -1
  100. package/dist/test-runner/index.d.ts +7 -0
  101. package/dist/test-runner/index.d.ts.map +1 -1
  102. package/dist/test-runner/validator.d.ts.map +1 -1
  103. package/dist/traces/{run-2026-02-11T16-20-59-999Z.ndjson → run-2026-02-15T19-14-20-379Z.ndjson} +84 -84
  104. package/dist/{output/traces/run-2026-02-11T16-21-47-711Z.ndjson → traces/run-2026-02-15T19-15-09-410Z.ndjson} +1019 -1019
  105. package/dist/tui/chat-runner.d.ts.map +1 -1
  106. package/dist/types/cli.d.ts +2 -0
  107. package/dist/types/cli.d.ts.map +1 -1
  108. package/dist/types/config.d.ts +15 -0
  109. package/dist/types/config.d.ts.map +1 -1
  110. package/dist/types/engine.d.ts +2 -0
  111. package/dist/types/engine.d.ts.map +1 -1
  112. package/package.json +3 -3
  113. package/defaults/.visor.yaml +0 -420
  114. package/dist/sdk/chunk-EUUAQBTW.mjs.map +0 -1
  115. package/dist/sdk/chunk-S6CD7GFM.mjs.map +0 -1
  116. package/dist/sdk/chunk-UCNT3PDT.mjs.map +0 -1
  117. package/dist/sdk/chunk-V2IV3ILA.mjs.map +0 -1
  118. package/dist/sdk/chunk-YJRBN3XS.mjs +0 -217
  119. package/dist/sdk/chunk-YJRBN3XS.mjs.map +0 -1
  120. /package/dist/sdk/{check-provider-registry-M3Y6JMTW.mjs.map → check-provider-registry-AAPPJ4CP.mjs.map} +0 -0
  121. /package/dist/sdk/{check-provider-registry-PANIXYRB.mjs.map → check-provider-registry-S7BMQ2FC.mjs.map} +0 -0
  122. /package/dist/sdk/{config-OGOS4ZU4.mjs.map → check-provider-registry-ZOLEYDKM.mjs.map} +0 -0
  123. /package/dist/sdk/{chunk-VMLORODQ.mjs.map → chunk-2GCSK3PD.mjs.map} +0 -0
  124. /package/dist/sdk/{chunk-HOKQOO3G.mjs.map → chunk-EBTD2D4L.mjs.map} +0 -0
  125. /package/dist/sdk/{failure-condition-evaluator-HC3M5377.mjs.map → config-4EG7IQIU.mjs.map} +0 -0
  126. /package/dist/sdk/{liquid-extensions-E4EUOCES.mjs.map → failure-condition-evaluator-GLHZZF47.mjs.map} +0 -0
  127. /package/dist/sdk/{routing-OZQWAGAI.mjs.map → failure-condition-evaluator-KN55WXRO.mjs.map} +0 -0
  128. /package/dist/sdk/{github-frontend-E2KJSC3Y.mjs.map → github-frontend-F4TE2JY7.mjs.map} +0 -0
  129. /package/dist/sdk/{host-EE6EJ2FM.mjs.map → host-SAT6RHDX.mjs.map} +0 -0
  130. /package/dist/sdk/{schedule-tool-handler-B7TMSG6A.mjs.map → liquid-extensions-YDIIH33Q.mjs.map} +0 -0
  131. /package/dist/sdk/{schedule-tool-handler-IEB2VS7O.mjs.map → routing-KFYQGOYU.mjs.map} +0 -0
  132. /package/dist/sdk/{trace-helpers-PP3YHTAM.mjs.map → routing-OXQKETSA.mjs.map} +0 -0
  133. /package/dist/sdk/{workflow-check-provider-2ET3SFZH.mjs.map → schedule-tool-handler-G353DHS6.mjs.map} +0 -0
  134. /package/dist/sdk/{workflow-check-provider-HB4XTD4Z.mjs.map → schedule-tool-handler-OQF57URO.mjs.map} +0 -0
@@ -0,0 +1,487 @@
# Database Operations Guide

> **Enterprise Edition feature.** PostgreSQL, MySQL, and MSSQL backends require a Visor EE license with the `scheduler-sql` feature.

This guide covers production database operations for Visor's SQL backends: backup, replication, failover, connection pooling, and migration from SQLite.

---

## Table of Contents

- [Overview](#overview)
- [Schema and Tables](#schema-and-tables)
- [PostgreSQL Operations](#postgresql-operations)
  - [Initial Setup](#initial-setup)
  - [Backup Strategies](#backup-strategies)
  - [Replication](#replication)
  - [Connection Pooling with PgBouncer](#connection-pooling-with-pgbouncer)
  - [Monitoring](#monitoring)
- [MySQL Operations](#mysql-operations)
- [MSSQL Operations](#mssql-operations)
- [Migrating from SQLite to PostgreSQL](#migrating-from-sqlite-to-postgresql)
- [Performance Tuning](#performance-tuning)
- [Disaster Recovery](#disaster-recovery)
- [Troubleshooting](#troubleshooting)

---

## Overview

Visor uses two databases in production:

| Database | Path / Table | Purpose | Backend |
|----------|-------------|---------|---------|
| Scheduler | `schedules`, `scheduler_locks` | Schedule state, HA locking | SQLite (OSS) or PostgreSQL/MySQL/MSSQL (EE) |
| Config Snapshots | `.visor/config.db` | Config snapshot history | SQLite only (local) |

The scheduler database is the critical stateful component. Config snapshots are local-only and do not require HA.

---

## Schema and Tables

Visor auto-creates tables on first connection. No manual migration is required.

### `schedules` Table

```sql
CREATE TABLE schedules (
  id VARCHAR(36) PRIMARY KEY,
  creator_id VARCHAR(255) NOT NULL,
  creator_context VARCHAR(255),
  creator_name VARCHAR(255),
  timezone VARCHAR(64) NOT NULL DEFAULT 'UTC',
  schedule_expr VARCHAR(255),
  run_at BIGINT,
  is_recurring BOOLEAN NOT NULL,
  original_expression TEXT,
  workflow VARCHAR(255),
  workflow_inputs TEXT,   -- JSON
  output_context TEXT,    -- JSON
  status VARCHAR(20) NOT NULL,
  created_at BIGINT NOT NULL,
  last_run_at BIGINT,
  next_run_at BIGINT,
  run_count INTEGER NOT NULL DEFAULT 0,
  failure_count INTEGER NOT NULL DEFAULT 0,
  last_error TEXT,
  previous_response TEXT,
  claimed_by VARCHAR(255),
  claimed_at BIGINT,
  lock_token VARCHAR(36)
);
```

### `scheduler_locks` Table (HA mode)

```sql
CREATE TABLE scheduler_locks (
  lock_id VARCHAR(255) PRIMARY KEY,
  node_id VARCHAR(255) NOT NULL,
  lock_token VARCHAR(36) NOT NULL,
  acquired_at BIGINT NOT NULL,
  expires_at BIGINT NOT NULL
);
```

### Indexes

```sql
CREATE INDEX idx_schedules_creator_id ON schedules(creator_id);
CREATE INDEX idx_schedules_status ON schedules(status);
CREATE INDEX idx_schedules_status_next_run ON schedules(status, next_run_at);
```
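The DDL above uses portable column types, so as a quick local sanity check it can be exercised against an in-memory SQLite database. This is a sketch using Python's stdlib, not part of Visor, and the abbreviated column list is illustrative:

```python
import sqlite3

# Abbreviated version of the schedules DDL above (illustrative subset of
# columns); SQLite treats VARCHAR/BIGINT/BOOLEAN as type affinities.
DDL = """
CREATE TABLE schedules (
  id VARCHAR(36) PRIMARY KEY,
  creator_id VARCHAR(255) NOT NULL,
  is_recurring BOOLEAN NOT NULL,
  status VARCHAR(20) NOT NULL,
  created_at BIGINT NOT NULL,
  run_count INTEGER NOT NULL DEFAULT 0
)
"""

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
conn.execute(
    "INSERT INTO schedules (id, creator_id, is_recurring, status, created_at)"
    " VALUES (?, ?, ?, ?, ?)",
    ("a1b2", "user-1", True, "active", 1760000000000),  # epoch milliseconds
)
row = conn.execute("SELECT status, run_count FROM schedules WHERE id = 'a1b2'").fetchone()
print(row)  # ('active', 0)
```

Timestamps (`created_at`, `next_run_at`, and friends) are stored as epoch milliseconds in `BIGINT` columns, which is why the stale-lock query later in this guide multiplies `EXTRACT(EPOCH ...)` by 1000.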

---

## PostgreSQL Operations

### Initial Setup

```bash
# Create database and user
psql -U postgres <<SQL
CREATE USER visor WITH PASSWORD 'changeme';
CREATE DATABASE visor OWNER visor;
GRANT ALL PRIVILEGES ON DATABASE visor TO visor;
SQL
```

Visor configuration:

```yaml
scheduler:
  storage:
    driver: postgresql
    connection:
      host: db.example.com
      port: 5432
      database: visor
      user: visor
      password: ${VISOR_DB_PASSWORD}
      ssl: true
      pool:
        min: 2
        max: 10
  ha:
    enabled: true
```

Visor creates all tables and indexes automatically on first startup.

### Backup Strategies

#### Logical Backup (pg_dump)

Best for small-to-medium deployments. With `-Fc`, `pg_dump` writes a compressed custom-format archive that `pg_restore` can restore, including selectively.

```bash
# Full backup
pg_dump -h db.example.com -U visor -Fc visor > visor-$(date +%Y%m%d).dump

# Restore
pg_restore -h db.example.com -U visor -d visor --clean visor-20260214.dump
```

#### Continuous Archiving (WAL)

Best for production with point-in-time recovery (PITR).

```ini
# postgresql.conf
wal_level = replica
archive_mode = on
archive_command = 'cp %p /var/lib/postgresql/wal_archive/%f'
```

```bash
# Base backup
pg_basebackup -h db.example.com -U replication -D /var/lib/postgresql/backup -Fp -Xs -P
```

#### Cloud-Managed Backups

- **AWS RDS**: Automated backups enabled by default (retention configurable up to 35 days).
- **Google Cloud SQL**: Automated + on-demand backups via Console or `gcloud`.
- **Azure Database**: Automated backups with geo-redundancy option.

#### Recommended Schedule

| Frequency | Method | Retention |
|-----------|--------|-----------|
| Hourly | WAL archiving | 7 days |
| Daily | pg_dump (full) | 30 days |
| Weekly | pg_dump to offsite | 90 days |
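The daily and weekly tiers imply pruning old dumps. A minimal retention sketch, where the file naming and directory layout are assumptions, not Visor conventions:

```python
import os
import tempfile
import time
from pathlib import Path

def prune_dumps(directory: Path, max_age_days: int) -> list:
    """Delete *.dump files older than max_age_days; return deleted names."""
    cutoff = time.time() - max_age_days * 86400
    deleted = []
    for dump in sorted(directory.glob("*.dump")):
        if dump.stat().st_mtime < cutoff:
            dump.unlink()
            deleted.append(dump.name)
    return deleted

# Demo against a temp directory: one stale dump, one fresh dump.
backups = Path(tempfile.mkdtemp())
stale = backups / "visor-20250101.dump"
fresh = backups / "visor-20260214.dump"
stale.write_bytes(b"")
fresh.write_bytes(b"")
os.utime(stale, (time.time() - 40 * 86400,) * 2)  # backdate mtime 40 days
removed = prune_dumps(backups, max_age_days=30)
print(removed)  # ['visor-20250101.dump']
```

Run from cron after the daily `pg_dump`, with `max_age_days` set per the retention column above.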

### Replication

#### Streaming Replication (read replicas)

On the primary:

```ini
# postgresql.conf
wal_level = replica
max_wal_senders = 3
```

```sql
-- Create replication user
CREATE USER replication WITH REPLICATION LOGIN PASSWORD 'replpass';
```

On the replica:

```bash
pg_basebackup -h primary.example.com -U replication -D /var/lib/postgresql/data -Fp -Xs -P -R
```

The `-R` flag generates `standby.signal` and connection settings automatically.

#### Cloud-Managed Replicas

- **AWS RDS**: Create Read Replica via Console or CLI.
- **Aurora**: Up to 15 read replicas with sub-10ms lag.
- **Azure**: Read replicas for Azure Database for PostgreSQL.

**Note**: Visor's scheduler should always connect to the **primary/writer** endpoint. Read replicas are useful for reporting queries only.

### Connection Pooling with PgBouncer

For high-concurrency deployments (many Visor instances), PgBouncer reduces connection overhead.

```ini
# pgbouncer.ini
[databases]
visor = host=db.example.com port=5432 dbname=visor

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
max_client_conn = 200
```

```
# userlist.txt
"visor" "md5hash"
```

Update Visor config to point to PgBouncer:

```yaml
scheduler:
  storage:
    driver: postgresql
    connection:
      host: pgbouncer.example.com
      port: 6432
      database: visor
      user: visor
      password: ${VISOR_DB_PASSWORD}
      pool:
        min: 0  # Let PgBouncer manage pooling
        max: 5  # Fewer connections per instance
```
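To see why `pool.max` shrinks behind PgBouncer, it helps to check the connection arithmetic. A sketch using the figures from the configs above; the instance counts are assumptions:

```python
def pgbouncer_budget_ok(instances, visor_pool_max,
                        max_client_conn, default_pool_size,
                        pg_max_connections):
    """Client side: each Visor instance may open up to pool.max connections
    to PgBouncer. Server side: PgBouncer keeps at most default_pool_size
    backend connections per database/user pair."""
    clients_ok = instances * visor_pool_max <= max_client_conn
    backend_ok = default_pool_size < pg_max_connections
    return clients_ok and backend_ok

# 10 instances x pool.max 5 = 50 clients, well under max_client_conn = 200,
# while PostgreSQL only ever sees 20 backend connections.
print(pgbouncer_budget_ok(10, 5, 200, 20, 50))  # True
# 50 instances x 5 = 250 clients would exceed max_client_conn.
print(pgbouncer_budget_ok(50, 5, 200, 20, 50))  # False
```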

### Monitoring

Key PostgreSQL metrics to watch:

```sql
-- Active connections
SELECT count(*) FROM pg_stat_activity WHERE datname = 'visor';

-- Table sizes
SELECT relname, pg_size_pretty(pg_total_relation_size(relid))
FROM pg_catalog.pg_statio_user_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(relid) DESC;

-- Lock contention
SELECT * FROM pg_locks WHERE NOT granted;

-- Slow queries
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
WHERE dbid = (SELECT oid FROM pg_database WHERE datname = 'visor')
ORDER BY mean_exec_time DESC LIMIT 10;
```

Enable `pg_stat_statements` for query performance tracking:

```ini
# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'
```

---

## MySQL Operations

### Initial Setup

```sql
CREATE DATABASE visor CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'visor'@'%' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON visor.* TO 'visor'@'%';
FLUSH PRIVILEGES;
```

### Backup

```bash
# Full backup
mysqldump -h db.example.com -u visor -p visor > visor-$(date +%Y%m%d).sql

# Restore
mysql -h db.example.com -u visor -p visor < visor-20260214.sql
```

### Connection Pooling

MySQL's default `max_connections` (151) is usually sufficient. For high concurrency, increase it and keep Visor's pool small:

```yaml
connection:
  pool:
    min: 0
    max: 5
```

---

## MSSQL Operations

### Initial Setup

```sql
CREATE DATABASE visor;
GO
CREATE LOGIN visor WITH PASSWORD = 'changeme';
GO
USE visor;
CREATE USER visor FOR LOGIN visor;
ALTER ROLE db_owner ADD MEMBER visor;
GO
```

### Backup

```sql
BACKUP DATABASE visor TO DISK = '/var/opt/mssql/backup/visor.bak';
```

---

## Migrating from SQLite to PostgreSQL

### Step 1: Export from SQLite

```bash
# Dump schedule data
sqlite3 .visor/schedules.db ".mode json" ".once schedules.json" "SELECT * FROM schedules;"
```

### Step 2: Configure PostgreSQL

```yaml
scheduler:
  storage:
    driver: postgresql
    connection:
      host: db.example.com
      database: visor
      user: visor
      password: ${VISOR_DB_PASSWORD}
      ssl: true
```

### Step 3: Start Visor

Visor will create the schema automatically. If you had an older JSON-based store (`.visor/schedules.json`), Visor auto-migrates it on first startup.

### Step 4: Import Data (optional)

For SQLite-to-PostgreSQL data transfer, use a script or tool:

```bash
# Using pgloader (if available)
pgloader sqlite:///.visor/schedules.db postgresql://visor:pass@db.example.com/visor

# Or export CSV directly from SQLite and load it with psql's \copy
# (COPY expects CSV, not the JSON produced in Step 1)
sqlite3 -header -csv .visor/schedules.db "SELECT * FROM schedules;" > schedules.csv
psql -h db.example.com -U visor -d visor \
  -c "\copy schedules FROM 'schedules.csv' WITH (FORMAT csv, HEADER true)"
```
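Note that `COPY ... WITH (FORMAT csv)` will not accept the JSON produced in Step 1 directly; it has to be flattened to CSV first. A stdlib conversion sketch, where the three columns shown are illustrative; use the full column set from your export:

```python
import csv
import io
import json

def schedules_json_to_csv(json_text):
    """Convert sqlite3's JSON export of the schedules table into CSV with a
    header row, suitable for COPY ... WITH (FORMAT csv, HEADER true)."""
    rows = json.loads(json_text)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()),
                            lineterminator="\n")
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

sample = '[{"id": "a1b2", "creator_id": "user-1", "status": "active"}]'
print(schedules_json_to_csv(sample))
# id,creator_id,status
# a1b2,user-1,active
```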

### Step 5: Enable HA

Once on PostgreSQL, enable HA for multi-instance deployments:

```yaml
scheduler:
  ha:
    enabled: true
    lock_ttl: 60
    heartbeat_interval: 15
```

---

## Performance Tuning

### PostgreSQL Settings

For a dedicated Visor database (small dataset, write-heavy locking):

```ini
# postgresql.conf
shared_buffers = 256MB
effective_cache_size = 512MB
work_mem = 4MB
maintenance_work_mem = 64MB
max_connections = 50
checkpoint_completion_target = 0.9
wal_buffers = 16MB
```

### Visor Pool Sizing

| Deployment | `pool.min` | `pool.max` | Notes |
|------------|-----------|-----------|-------|
| Single instance | 0 | 5 | Minimal overhead |
| 2-3 instances | 1 | 5 | Keep warm connections |
| 5+ instances | 0 | 3 | Use PgBouncer upstream |
| Serverless/Lambda | 0 | 2 | Short-lived, release fast |
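The table reads as a small decision function; a sketch, where the thresholds simply mirror the rows and the untabulated 4-instance case is resolved toward the conservative 5+ row:

```python
def pool_settings(instances, serverless=False):
    """Return (pool.min, pool.max) per the sizing table above."""
    if serverless:
        return (0, 2)   # Serverless/Lambda: short-lived, release fast
    if instances == 1:
        return (0, 5)   # Single instance: minimal overhead
    if instances <= 3:
        return (1, 5)   # 2-3 instances: keep warm connections
    return (0, 3)       # 5+ instances: use PgBouncer upstream

# Worst-case direct connections for 6 instances, against max_connections = 50
_, pool_max = pool_settings(6)
print(6 * pool_max)  # 18
```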

### Index Maintenance

Visor's tables are small (hundreds to low thousands of rows). Standard autovacuum is sufficient. No custom index maintenance is needed.

---

## Disaster Recovery

### Recovery Time Objective (RTO)

| Scenario | Recovery Method | Estimated RTO |
|----------|----------------|---------------|
| Instance crash | Container restart | < 1 min |
| Database corruption | Restore from pg_dump | 5-15 min |
| Full data loss | Restore from backup + WAL replay | 15-60 min |
| Region failure | Failover to standby region | Cloud-dependent |

### Recovery Procedure

1. **Stop all Visor instances** to prevent writes to a corrupted database.
2. **Restore the database** from the most recent backup.
3. **Verify data integrity**: `SELECT count(*) FROM schedules; SELECT count(*) FROM scheduler_locks;`
4. **Restart Visor instances**. HA locking will re-establish automatically.
5. **Check scheduler state**: `visor schedule list`

---

## Troubleshooting

### "knex is required" Error

Install the appropriate database driver:

```bash
npm install knex pg        # PostgreSQL
npm install knex mysql2    # MySQL
npm install knex tedious   # MSSQL
```

### Connection Pool Exhaustion

```
Error: Knex: Timeout acquiring a connection
```

- Reduce `pool.max` per instance and use PgBouncer.
- Check for long-running transactions or connection leaks.
- Increase `pool.max` if genuinely under high load.

### Lock Contention in HA Mode

If schedules are executing slowly or being skipped:

- Increase `lock_ttl` (default: 60s) if executions take longer.
- Decrease `heartbeat_interval` for faster lock renewal.
- Check the `scheduler_locks` table for stale locks:

```sql
SELECT * FROM scheduler_locks WHERE expires_at < EXTRACT(EPOCH FROM NOW()) * 1000;
```

### Duplicate Schedule Execution

In rare cases (network partition, lock expiry during execution):

- Ensure `lock_ttl` exceeds your longest workflow execution time.
- Use `heartbeat_interval < lock_ttl / 3` for safe renewal.
- Design workflows to be idempotent where possible.
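The first two rules can be checked mechanically. A sketch; the parameter names mirror the `scheduler.ha` config keys, and the workflow duration is an assumption you supply:

```python
def ha_lock_violations(lock_ttl, heartbeat_interval, longest_workflow_s):
    """Return a list of violated duplicate-execution safety rules."""
    problems = []
    if lock_ttl <= longest_workflow_s:
        problems.append("lock_ttl must exceed the longest workflow execution")
    if heartbeat_interval >= lock_ttl / 3:
        problems.append("heartbeat_interval should be < lock_ttl / 3")
    return problems

# Defaults from this guide (lock_ttl=60, heartbeat_interval=15) are safe
# for workflows up to ~45 seconds:
print(ha_lock_violations(60, 15, longest_workflow_s=45))  # []
```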
@@ -118,6 +118,7 @@ The test framework allows you to write integration tests for your Visor workflow
 | [Slack Integration](./slack-integration.md) | Bidirectional Slack integration via Socket Mode |
 | [Scheduler](./scheduler.md) | Schedule workflows and reminders to run at specified times |
 | [Deployment](./DEPLOYMENT.md) | Cloudflare Pages deployment for landing page |
+| [Production Deployment](./production-deployment.md) | Docker, Kubernetes, and multi-instance deployment guide |
 
 ---
 
@@ -141,7 +142,11 @@ The test framework allows you to write integration tests for your Visor workflow
 
 | Document | Description |
 |----------|-------------|
-| [Enterprise Policy Engine (OPA)](./enterprise-policy.md) | Comprehensive guide to the OPA-based policy engine: installation, licensing, Rego policies, configuration, and troubleshooting |
+| [Licensing](./licensing.md) | Obtaining, installing, managing, and troubleshooting EE licenses |
+| [Enterprise Policy Engine (OPA)](./enterprise-policy.md) | OPA-based policy engine: installation, Rego policies, configuration, and troubleshooting |
+| [Scheduler Storage](./scheduler-storage.md) | PostgreSQL, MySQL, and MSSQL backends for scheduler with cloud examples |
+| [Database Operations](./database-operations.md) | Backup, replication, failover, PgBouncer, and migration from SQLite |
+| [Capacity Planning](./capacity-planning.md) | Sizing guide, cost estimates, scaling guidelines, and load testing |
 
 ---