moflo 4.9.22 → 4.9.24
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/guidance/shipped/moflo-cli-reference.md +19 -16
- package/.claude/guidance/shipped/moflo-core-guidance.md +0 -2
- package/.claude/guidance/shipped/moflo-spell-runner.md +1 -0
- package/.claude/guidance/shipped/moflo-spell-scheduling.md +225 -0
- package/.claude/guidance/shipped/moflo-spell-troubleshooting.md +1 -0
- package/.claude/skills/fl/phases.md +67 -0
- package/.claude/skills/spell-schedule/SKILL.md +18 -5
- package/README.md +1 -1
- package/bin/index-guidance.mjs +32 -6
- package/bin/session-start-launcher.mjs +15 -8
- package/dist/src/cli/commands/daemon.js +13 -17
- package/dist/src/cli/commands/hooks.js +3 -6
- package/dist/src/cli/commands/spell-schedule.js +237 -49
- package/dist/src/cli/init/settings-generator.js +5 -6
- package/dist/src/cli/mcp-tools/memory-tools.js +16 -5
- package/dist/src/cli/memory/bridge-embedder.js +26 -6
- package/dist/src/cli/memory/bridge-entries.js +33 -15
- package/dist/src/cli/services/daemon-autostart-lifecycle.js +62 -0
- package/dist/src/cli/services/daemon-dashboard.js +192 -18
- package/dist/src/cli/services/daemon-readiness.js +19 -31
- package/dist/src/cli/services/ephemeral-namespace-purge.js +61 -33
- package/dist/src/cli/services/headless-worker-executor.js +7 -94
- package/dist/src/cli/services/worker-daemon.js +40 -66
- package/dist/src/cli/spells/core/runner.js +12 -0
- package/dist/src/cli/spells/scheduler/scheduler.js +24 -9
- package/dist/src/cli/spells/schema/validator.js +2 -1
- package/dist/src/cli/spells/schema/validators/top-level.js +18 -0
- package/dist/src/cli/version.js +1 -1
- package/package.json +4 -2
package/.claude/guidance/shipped/moflo-cli-reference.md
CHANGED
@@ -130,22 +130,25 @@ npx flo daemon start
 | `coverage-suggest` | Suggest coverage improvements | `--path` |
 | `coverage-gaps` | List coverage gaps with priorities | `--format`, `--limit` |
 
-###
-
-
-
-
-
-
-
-
-
-| `
-| `
-| `
-| `
-| `
-| `
+### Background Workers
+
+The daemon ships nine workers — four scheduled by default plus five
+manual-trigger only. The pre-#970 `audit`, `predict`, and `document`
+workers were removed because they ran without a surfacing layer for
+findings; if AI-driven security scanning returns it should be an opt-in
+`flo doctor` one-shot, not a recurring background task.
+
+| Worker | Priority | Default | Description |
+|---------------|----------|---------------|----------------------------|
+| `map` | normal | scheduled 15m | Codebase mapping |
+| `optimize` | high | scheduled 15m | Performance optimization |
+| `consolidate` | low | scheduled 30m | Memory consolidation |
+| `testgaps` | normal | scheduled 20m | Test coverage analysis |
+| `ultralearn` | normal | manual | Deep knowledge acquisition |
+| `refactor` | normal | manual | Refactoring suggestions |
+| `deepdive` | normal | manual | Deep code analysis |
+| `benchmark` | normal | manual | Performance benchmarking |
+| `preload` | low | manual | Resource preloading |
 
 ### Essential Hook Commands (MCP Preferred)
 
package/.claude/guidance/shipped/moflo-core-guidance.md
CHANGED
@@ -126,8 +126,6 @@ For the full `moflo.yaml` schema, gate toggles, model routing, and sandbox confi
 |---------|--------|---------|
 | After major refactor | `optimize` | Performance optimization |
 | After adding features | `testgaps` | Find missing test coverage |
-| After security changes | `audit` | Security analysis |
-| After API changes | `document` | Update documentation |
 | Every 5+ file changes | `map` | Update codebase map |
 | Complex debugging | `deepdive` | Deep code analysis |
 
package/.claude/guidance/shipped/moflo-spell-runner.md
CHANGED
@@ -128,6 +128,7 @@ Credential values listed in `RunnerOptions.credentialValues` are automatically r
 
 - `.claude/guidance/moflo-spell-engine.md` — Definition format, step types, variable interpolation
 - `.claude/guidance/moflo-spell-sandboxing.md` — Capability-based security and permission levels
+- `.claude/guidance/moflo-spell-scheduling.md` — Cron / interval / one-time scheduling, daemon lifecycle, catch-up window, `schedule-executions` audit trail
 - `.claude/guidance/moflo-spell-troubleshooting.md` — Common failure modes when running spells
 - `.claude/guidance/moflo-spell-custom-steps.md` — Pluggable step commands
 - `.claude/guidance/moflo-spell-connectors.md` — Resource connectors and the registry
package/.claude/guidance/shipped/moflo-spell-scheduling.md
ADDED
@@ -0,0 +1,225 @@
+# Spell Scheduling — Cron, Daemon, Catch-Up, and Failure Modes
+
+**Purpose:** Reference for spell scheduling — definition syntax, CLI subcommands, daemon lifecycle, catch-up window semantics, and the failure modes you will hit. For the user-driven `/spell-schedule` walkthrough, see `.claude/skills/spell-schedule/SKILL.md`. For the engine itself (steps, args, runner), see `.claude/guidance/moflo-spell-engine.md`.
+
+---
+
+## When to Schedule a Spell
+
+**Use scheduling when the spell must fire on a clock without a human present.** Don't reach for it for one-shot runs you can `flo spell cast` yourself.
+
+| Trigger | Use |
+|---------|-----|
+| "Run X every weekday at 9am" | Schedule with `--cron` |
+| "Run X every 6 hours" | Schedule with `--interval` |
+| "Run X once at <future time>" | Schedule with `--at` |
+| "Run X right now" | `flo spell cast -n X` — not a schedule |
+| "Run X when file Y changes" | Hook or worker — not a schedule |
+
+The scheduler is poll-based (1-minute floor). It is not a real-time trigger system.
+
+---
+
+## CLI Subcommands
+
+**All scheduling lives under `flo spell schedule`.** No external cron, no second runtime.
+
+| Subcommand | Purpose |
+|------------|---------|
+| `create -n <spell> --cron|--interval|--at <value>` | Create an ad-hoc schedule |
+| `list` (alias `ls`) | List all schedules (definitions only — does NOT prove they fired) |
+| `executions [--schedule <id>] [--limit N]` (alias `exec`, `history`) | Read the `schedule-executions` audit trail — the only way to confirm a schedule actually ran |
+| `cancel <schedule-id>` | Disable a schedule (soft delete — record stays, `enabled: false`) |
+
+**`list` shows what should fire. `executions` shows what did fire.** When verifying a new schedule, read `executions` — `list` only proves the record was written.
+
+---
+
+## Definition-Embedded Schedules
+
+**A spell can declare its schedule inline in YAML.** The daemon registers it on every start.
+
+```yaml
+name: nightly-audit
+schedule:
+  cron: "0 2 * * *"   # UTC, 5-field cron (minute hour day-of-month month day-of-week)
+  enabled: true       # default true; set false to keep the def without scheduling
+  mofloLevel: hooks   # optional cap (narrows scheduler-level cap, never widens)
+steps:
+  - id: audit
+    type: bash
+    config:
+      command: ./scripts/audit.sh
+```
+
+**Exactly one of `cron`, `interval`, or `at` must be set.** Validation rejects `cron: "invalid"`, `interval: "10w"` (only `s/m/h/d` units), or non-ISO datetimes — the spell fails to load before the scheduler ever sees it.
+
+| Field | Type | Notes |
+|-------|------|-------|
+| `cron` | string | 5-field, UTC, no seconds field |
+| `interval` | string | `<n>(s|m|h|d)` — `s` is allowed but ignored below 60s (poll floor) |
+| `at` | string | ISO 8601 datetime, must be in the future at load time |
+| `enabled` | bool | Defaults `true`; gate without deleting the record |
+| `mofloLevel` | enum | `read` < `hooks` < `swarm`; per-schedule cap — narrows only |
+
+Definition-embedded schedules get IDs of the form `sched-def-<spell-name>` (one per spell). Ad-hoc schedules get `sched-adhoc-<timestamp>-<rand>` (one per `flo spell schedule create`).
+
+---
+
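The `interval` rule the new guidance file describes (`<n>(s|m|h|d)`, positive value, sub-60s clamped to the poll floor, `10w` rejected) can be sketched as a small parser. This is an illustrative re-implementation under the stated rules, not moflo's actual validator; the function name and error messages are assumptions.

```javascript
// Sketch of the interval-string rule: <n>(s|m|h|d), positive, with
// sub-60s values clamped to the 60s poll floor described above.
const UNIT_MS = { s: 1000, m: 60000, h: 3600000, d: 86400000 };

function parseInterval(spec) {
  const match = /^(\d+)([smhd])$/.exec(spec);
  if (!match) throw new Error(`invalid interval: ${spec} (use <n>s|m|h|d)`);
  const ms = Number(match[1]) * UNIT_MS[match[2]];
  if (ms <= 0) throw new Error(`interval must be positive: ${spec}`);
  return Math.max(ms, 60000); // poll floor: nothing fires faster than once a minute
}
```

Note how `30s` parses successfully but is effectively promoted to the one-minute floor, matching the "`s` is allowed but ignored below 60s" row in the field table.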
+## Configuration in `moflo.yaml`
+
+**The scheduler is on by default.** Disable it without affecting other daemon workers via `scheduler.enabled: false`.
+
+```yaml
+scheduler:
+  enabled: true            # set false to disable scheduled spells
+  pollIntervalMs: 60000    # how often the scheduler checks for due spells
+  maxConcurrent: 2         # max concurrent scheduled spell executions
+  catchUpWindowMs: 3600000 # max age (ms) of a missed run that should still fire
+```
+
+All four fields are optional. **Non-positive values are rejected at load and replaced with the defaults** — `pollIntervalMs: 0` won't silently break the poll loop.
+
+Practical floors:
+- `pollIntervalMs` is the granularity floor. Sub-minute schedules don't fire faster.
+- `maxConcurrent: 1` serializes everything — useful when scheduled spells share a write lock.
+- `catchUpWindowMs: 0` disables catch-up entirely; missed runs are skipped on restart.
+
+---
+
+## Storage Namespaces
+
+**Two memory namespaces back the scheduler.** Both are project-scoped — schedules and history don't leak across projects.
+
+| Namespace | Contents | Written by | Read by |
+|-----------|----------|------------|---------|
+| `scheduled-spells` | `SpellSchedule` records (id, spellName, timing, nextRunAt, enabled, args) | `flo spell schedule create`, definition load | Scheduler poll loop, `flo spell schedule list` |
+| `schedule-executions` | `ScheduleExecution` audit records (startedAt, completedAt, success, error, duration, manualRun) | Scheduler at execute-start and execute-end | The Luminarium, `flo spell schedule executions` |
+
+When debugging "did my schedule fire?", read `schedule-executions` directly via `mcp__moflo__memory_list namespace=schedule-executions` if the CLI is unavailable.
+
+---
+
+## Catch-Up Window Semantics
+
+**On daemon startup, schedules whose `nextRunAt` is in the past are evaluated against `catchUpWindowMs`.** This is the single most common source of "why didn't my run fire?" confusion.
+
+| Lag (now − nextRunAt) | Behavior | Event emitted |
+|-----------------------|----------|---------------|
+| ≤ `pollIntervalMs` | Treated as routine cron drift; fires on next poll | `schedule:due` only |
+| > `pollIntervalMs` and ≤ `catchUpWindowMs` | Fires on next poll as a caught-up run | `schedule:catchup` then `schedule:due` |
+| > `catchUpWindowMs` | Skipped; `nextRunAt` advances past the missed slot | `schedule:skipped` |
+
+This prevents a daemon that was offline for days from firing dozens of stale schedules at once.
+
+**One-time `at:` schedules past their trigger get auto-disabled rather than rescheduled.** Re-enabling returns `null` because there's no future run to compute.
+
+---
+
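The lag table in the catch-up section reduces to a pure decision function. The sketch below is illustrative only (the names are assumptions, not moflo's scheduler source), but it encodes the same three cases and the events the table says each one emits:

```javascript
// Decision sketch for the catch-up table: inputs are all in milliseconds.
// Returns the event names emitted for a schedule whose nextRunAt is lagMs in the past.
function classifyLag(lagMs, pollIntervalMs, catchUpWindowMs) {
  if (lagMs <= pollIntervalMs) return ['schedule:due'];                      // routine cron drift
  if (lagMs <= catchUpWindowMs) return ['schedule:catchup', 'schedule:due']; // caught-up run
  return ['schedule:skipped'];                                               // too stale to replay
}
```

With the defaults (`pollIntervalMs: 60000`, `catchUpWindowMs: 3600000`), a 10-minute outage replays the missed run as a catch-up, while a 2-hour outage skips it.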
+## Concurrency and Overlap Rules
+
+**`maxConcurrent` (default 2) caps total in-flight scheduled spells.** Same-schedule overlap is never allowed regardless of `maxConcurrent`.
+
+| Situation | Outcome |
+|-----------|---------|
+| Same schedule's prior run still in flight when next fire is due | New fire skipped (`schedule:skipped`); regular cadence continues |
+| `maxConcurrent` saturated by other schedules | Due fire waits until next poll — nothing is queued |
+| Manual run via `runScheduleNow` (dashboard "Run now") | Runs outside the poll loop; respects per-schedule overlap; does NOT advance `nextRunAt` |
+
+There is no internal queue. A fire that didn't get a slot just shows up again on the next tick if it's still due.
+
+---
+
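The two overlap rules above (same-schedule overlap always skipped, saturation deferred to the next poll) can be sketched as an admission check. This is a hypothetical helper for illustration; the real logic lives in `dist/src/cli/spells/scheduler/scheduler.js`:

```javascript
// Admission sketch: `running` is a Set of schedule IDs currently in flight.
// 'skip' mirrors schedule:skipped; 'wait' means retry on the next poll tick.
function canFire(scheduleId, running, maxConcurrent) {
  if (running.has(scheduleId)) return 'skip';       // same-schedule overlap: never allowed
  if (running.size >= maxConcurrent) return 'wait'; // no queue: just shows up again next tick
  return 'fire';
}
```

A 'wait' result is not an error state; since there is no queue, the due fire simply competes again on the next poll if it is still due.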
+## `mofloLevel` Composition for Scheduled Runs
+
+**Three caps compose for every scheduled cast; the most restrictive wins.** Per-schedule caps can never widen the scheduler-level cap.
+
+```
+effectiveLevel = min(
+  daemon.defaultMofloLevel,  // moflo.yaml scheduler-level cap
+  spell.mofloLevel,          // spell definition cap
+  schedule.mofloLevel        // per-schedule cap
+)
+```
+
+Where `min` follows the level lattice `read < hooks < swarm`. A spell that needs `swarm` cannot run if any cap above it is `hooks` or `read` — it fails the capability gate at execute time, not at schedule create time.
+
+---
+
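The `min` over the `read < hooks < swarm` lattice can be made concrete with a rank table. An illustrative helper, not moflo's implementation; the treatment of an unset cap as non-restricting is an assumption consistent with the caps being optional:

```javascript
// Lattice-min sketch for the cap composition above.
const LEVEL_RANK = { read: 0, hooks: 1, swarm: 2 };

function effectiveLevel(...caps) {
  // Most restrictive (lowest-ranked) cap wins; unset caps don't restrict.
  return caps
    .filter((cap) => cap !== undefined)
    .reduce((a, b) => (LEVEL_RANK[a] <= LEVEL_RANK[b] ? a : b));
}
```

So a `swarm` spell under a `hooks` scheduler cap composes to `hooks`, which is why the capability gate rejects it at execute time.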
+## Daemon Prerequisite and Cross-Platform Autostart
+
+**Schedules only fire while the daemon is running.** The scheduler is just code inside the daemon worker pool — no daemon, no schedules.
+
+For survival across reboot, register the OS-native autostart service:
+
+```bash
+flo daemon install    # one-time setup; idempotent
+flo daemon status     # shows registration AND running-process state
+flo daemon uninstall  # remove the autostart hook
+```
+
+| Platform | Mechanism | Path |
+|----------|-----------|------|
+| macOS | launchd `LaunchAgent` | `~/Library/LaunchAgents/com.moflo.daemon.plist` |
+| Linux | systemd `--user` unit | `~/.config/systemd/user/moflo-daemon.service` |
+| Windows | Task Scheduler `ONLOGON` | Task name `MoFloDaemon` (via `schtasks`) |
+
+`flo spell schedule create` prompts to install the autostart service when none is registered, so a freshly-scheduled spell survives the next reboot without an extra step. Cancel the last enabled schedule and the service is auto-removed (so an idle daemon doesn't autostart forever).
+
+---
+
+## Scheduler Event Types
+
+**The scheduler emits typed events the daemon forwards to the dashboard event stream.** Subscribe via `scheduler.on(listener)`; the returned function unsubscribes. Listener exceptions are caught so a misbehaving subscriber can't break the poll loop.
+
+| Event | When |
+|-------|------|
+| `schedule:catchup` | A missed run (lag > one poll interval, within catch-up window) is about to fire |
+| `schedule:due` | A schedule is due (always emitted, with or without catch-up) |
+| `schedule:started` | Execution started; an execution record exists in `schedule-executions` |
+| `schedule:completed` | Execution finished with `success: true` |
+| `schedule:failed` | Execution finished with `success: false` or threw |
+| `schedule:skipped` | Execution skipped (overlap, expired catch-up, missing spell, sandbox-required mismatch) |
+| `schedule:disabled` | Schedule disabled (manual cancel or auto-disable because the spell vanished) |
+
+---
+
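The subscribe contract described above (`on` returns an unsubscribe function; a throwing listener cannot break emission) is a common emitter pattern. A minimal standalone sketch, not moflo's scheduler source:

```javascript
// Emitter sketch: on(listener) returns an unsubscribe function, and emit()
// swallows listener exceptions so the poll loop survives a bad subscriber.
function createEmitter() {
  const listeners = new Set();
  return {
    on(listener) {
      listeners.add(listener);
      return () => listeners.delete(listener); // returned fn unsubscribes
    },
    emit(event) {
      for (const listener of listeners) {
        try { listener(event); } catch { /* swallow: emit must not throw */ }
      }
    },
  };
}
```

The try/catch per listener is the load-bearing detail: without it, one broken dashboard subscriber would abort delivery to every later listener and could crash the tick.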
+## Common Failure Modes
+
+**Most "schedule isn't firing" reports trace to one of these.** Walk the list before reading scheduler source.
+
+| Symptom | Likely cause | Fix |
+|---------|--------------|-----|
+| `executions` is empty after the schedule's `nextRunAt` passed | Daemon not running | `flo daemon status` → `flo daemon start`; install autostart |
+| `executions` shows `schedule:skipped` repeatedly | Same-schedule overlap (prior run never finished) | Check the spell — likely hung; cancel, fix, recreate |
+| `executions` shows `schedule:skipped` once at startup | `nextRunAt` outside `catchUpWindowMs` | Expected after a long outage; next normal fire will land |
+| Run fires but spell errors `SANDBOX_REQUIRED` | Spell needs `sandbox: required` and host doesn't supply one | Install sandbox runtime (Docker/bwrap) or remove `sandbox.required` from the spell |
+| Schedule auto-disabled with `schedule:disabled` event | The spell name no longer resolves in the grimoire | Restore the spell file, then re-enable the schedule |
+| Cron fires an hour off | Cron is UTC; user-typed time was local | Convert to UTC before `--cron`; see `/spell-schedule` skill |
+| `executions` shows `success: true` but no side-effect | Spell ran but interpolation/credentials failed silently inside a step | Run the spell manually (`flo spell cast -n <name>`) and inspect step output |
+
+---
+
+## Verification Recipe (Schedule Round-Trip)
+
+**To confirm a fresh schedule works end-to-end without waiting for the cron tick:**
+
+1. Create the schedule with `--interval 1m` (or `--at` near the current time).
+2. `flo spell schedule list` → verify `nextRunAt` is in the next minute.
+3. `flo daemon status` → confirm running.
+4. Wait one poll cycle (~60s).
+5. `flo spell schedule executions --schedule <id>` → expect one row with `success: true`.
+6. Cancel and recreate with the real cadence.
+
+If step 5 is empty, jump straight to the failure-modes table above — don't loop in step 4.
+
+---
+
+## See Also
+
+- `.claude/skills/spell-schedule/SKILL.md` — User-facing walkthrough for creating a schedule (procedural counterpart to this reference)
+- `.claude/guidance/moflo-spell-engine.md` — Definition format, step types, variable interpolation
+- `.claude/guidance/moflo-spell-runner.md` — Execution lifecycle, dry-run, layering, errors
+- `.claude/guidance/moflo-spell-sandboxing.md` — Capability levels (`read`/`hooks`/`swarm`) referenced by the `mofloLevel` cap
+- `.claude/guidance/moflo-spell-troubleshooting.md` — Broader spell failure-mode catalog beyond scheduling
+- `.claude/guidance/moflo-core-guidance.md` — CLI, hooks, daemon, MCP reference hub
package/.claude/guidance/shipped/moflo-spell-troubleshooting.md
CHANGED
@@ -144,6 +144,7 @@ A step appears to run (`exitCode: 0`), produces no output, and downstream steps 
 - `.claude/guidance/moflo-spell-sandboxing.md` — Capability types, enforcement layers, permission levels (the model these failures exercise)
 - `.claude/guidance/moflo-spell-engine.md` — Step definition format and types
 - `.claude/guidance/moflo-spell-runner.md` — Dry-run validation, error codes, pause/resume
+- `.claude/guidance/moflo-spell-scheduling.md` — Scheduled-spell-specific failure modes (catch-up window, overlap, missing spell auto-disable, daemon-down)
 - `.claude/guidance/moflo-yaml-reference.md` — `sandbox:` block in `moflo.yaml` (master toggle, tier selection)
 - `src/cli/spells/core/bwrap-sandbox.ts` — Source for `--unshare-net` and namespace setup
 - `src/cli/spells/core/permission-resolver.ts` — Capability → permission level derivation
package/.claude/skills/fl/phases.md
CHANGED
@@ -2,6 +2,47 @@
 
 Phase-by-phase notes for the full `/flo <issue>` run. Phase 2 (Ticket) lives in `./ticket.md`.
 
+## Phase 0: Record run start (Flo Runs dashboard)
+
+Before research, write a row to the `tasklist` namespace so the Luminarium "Flo Runs" tab shows this run live and after the next session restart (#968). Skip this phase ONLY when `--epic-branch` is set — the epic orchestrator owns the parent record and the per-story spell engine writes its own row.
+
+Compute and **remember** for Phase 5:
+- `runId` — `flo-<issue-number-or-"new">-<startedAt-ms>` (sortable, unique).
+- `startedAt` — `Date.now()` snapshot (ms since epoch).
+
+Pick the matching `context.type`:
+| Mode | type | label format |
+|------|------|--------------|
+| Full / ticket on existing issue | `ticket` | `#<n> — <title>` |
+| `-r` research | `research` | `#<n> — Research` |
+| `-t` with title (no issue # yet) | `new-ticket` | `New: <title>` |
+| Epic detected | `epic` | `Epic #<n> — <title> (0/<total> stories)` |
+| `-wf <spell>` | `spell` | `<spell-name> → <args>` |
+
+Then call once:
+
+```
+mcp__moflo__memory_store
+  namespace: "tasklist"
+  key: "<runId>"
+  upsert: true
+  value: {
+    "status": "running",
+    "context": {
+      "type": "<ticket|research|new-ticket|epic|spell>",
+      "label": "<computed label>",
+      "issueNumber": <n | omit>,
+      "issueTitle": "<title | omit>",
+      "execMode": "<normal|swarm|hive>"
+    },
+    "spellName": "<same as label>",
+    "startedAt": <startedAt>,
+    "updatedAt": "<new Date().toISOString()>"
+  }
+```
+
+The schema mirrors `storeFloRunRecord` in `src/cli/services/daemon-dashboard.ts` — keep it in sync if you ever change one. The session-start launcher retains the most recent ~200 tasklist rows so this record outlives the session and renders in the Flo Runs tab on subsequent restarts.
+
 ## Phase 1: Research (also `-r`)
 
 ### 1.1 Fetch the issue + history (cheap, before any file exploration)
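The Phase 0 bookkeeping the diff adds can be sketched in JavaScript. `makeRunRecord` is a hypothetical helper, not part of the moflo source; the record shape follows the `memory_store` call the phase describes, and the `runId` is sortable because it embeds the `startedAt` millisecond timestamp:

```javascript
// Sketch of Phase 0: compute runId/startedAt and the initial tasklist row.
// Hypothetical helper for illustration; moflo's real schema lives in
// storeFloRunRecord (src/cli/services/daemon-dashboard.ts).
function makeRunRecord(issueNumber, label) {
  const startedAt = Date.now();
  const runId = `flo-${issueNumber ?? 'new'}-${startedAt}`; // sortable, unique
  return {
    runId,
    startedAt,
    record: {
      status: 'running',
      context: { type: issueNumber ? 'ticket' : 'new-ticket', label },
      spellName: label,
      startedAt,
      updatedAt: new Date(startedAt).toISOString(),
    },
  };
}
```

Keeping `startedAt` around matters because the Phase 5.5 finalize call computes `duration` as `Date.now() - startedAt` against this same value.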
@@ -150,3 +191,29 @@ Closes #<issue-number>"
 gh issue edit <issue-number> --remove-label "in-progress" --add-label "ready-for-review"
 gh issue comment <issue-number> --body "PR created: <pr-url>"
 ```
+
+### 5.5 Finalize run record (Flo Runs dashboard)
+
+Update the tasklist row written in Phase 0 with the terminal status. Same `runId`, `upsert: true`. On success:
+
+```
+mcp__moflo__memory_store
+  namespace: "tasklist"
+  key: "<runId>"   # same key from Phase 0
+  upsert: true
+  value: {
+    "status": "completed",
+    "success": true,
+    "context": <same context object as Phase 0>,
+    "spellName": "<same label as Phase 0>",
+    "startedAt": <startedAt from Phase 0>,
+    "duration": <Date.now() - startedAt>,
+    "updatedAt": "<new Date().toISOString()>"
+  }
+```
+
+On failure (tests still red after retries, or any aborting error): same shape with `"status": "failed"`, `"success": false`, and an `"error": "<short summary>"` field.
+
+This finalize call MUST also fire if the run aborts *before* reaching Phase 5 (early failure during research, ticket, or implement) — otherwise the dashboard shows a permanently "running" row for a dead run.
+
+Skip this when `--epic-branch` is set — the epic orchestrator records its own outcome.
package/.claude/skills/spell-schedule/SKILL.md
CHANGED
@@ -39,9 +39,19 @@ npx flo doctor 2>&1 | grep -i daemon
 
 If the daemon is not running, prompt the user:
 - "The moflo daemon isn't running. Schedules only fire while the daemon is up. Start it now?"
-- If yes: `npx flo daemon start
+- If yes: `npx flo daemon start`.
 - If they decline, warn the user that the schedule will be created but won't fire until the daemon is started.
 
+OS-native autostart (launchd / systemd / Task Scheduler) is **automatic**: the
+first `flo spell schedule create` registers the daemon as a login service so
+schedules survive reboot, and the cancel that takes the enabled-schedule count
+to 0 unregisters it. Users only need to think about it in two cases:
+
+- `--no-autostart` on `create` — skip registration (use in containers/CI where
+  the daemon is already managed externally).
+- `--keep-autostart` on `cancel` — keep the login service registered through a
+  cancel-then-recreate dance.
+
 ### Step 2 — Identify the target spell
 
 If `$ARGUMENTS` was provided, use it as the spell name/alias. Otherwise, list spells and let the user pick:
@@ -107,17 +117,20 @@ Capture the schedule ID from output and surface it to the user along with the ne
 
 ### Step 5 — Verify the wiring
 
-Tail the
+Tail the actual execution history for this schedule so the user can confirm the daemon picked it up:
 
 ```bash
-npx flo spell schedule
+npx flo spell schedule executions --schedule <schedule-id> 2>&1
 ```
 
-
+`executions` reads from the daemon-written `schedule-executions` namespace and shows started time, status (success/failed/running), duration, and whether the run was manual. This is the only command that proves a schedule actually fired — `flo spell schedule list` only shows the schedule definition.
+
+If the user wants to wait for the first fire (interval ≤ 5m), poll `flo spell schedule executions --schedule <id>` or watch The Luminarium (the daemon's localhost UI). Otherwise, summarize and exit:
 
 ```
 Scheduled: <schedule-id>
 Next run: <ISO datetime UTC> (<local-equivalent>)
+Verify: npx flo spell schedule executions --schedule <schedule-id>
 Cancel: npx flo spell schedule cancel <schedule-id>
 ```
 
@@ -139,7 +152,7 @@ If the user asks to **run now** without altering the cadence:
 
 ## Important — gotchas
 
-- **Daemon prerequisite**: schedules only fire while the daemon is running. Tell the user this explicitly.
+- **Daemon prerequisite**: schedules only fire while the daemon is running. Tell the user this explicitly. OS autostart for reboot survival is now wired automatically — see Step 1.
 - **Catch-up window** (default 1h, `scheduler.catchUpWindowMs` in `moflo.yaml`): if the daemon was offline when a run was due, runs within the window still fire on the next poll. Older missed runs are skipped with a `schedule:skipped` event.
 - **maxConcurrent** (default 2): caps the number of scheduled spells running concurrently. Same-schedule overlap is never allowed.
 - **No update CLI yet**: `flo spell schedule` exposes create/list/cancel only. To change a cadence, cancel + recreate.
package/README.md
CHANGED
@@ -419,7 +419,7 @@ flo daemon status # shows whether the service is registered AND running
 
 `flo spell schedule create` warns when the daemon isn't installed so you don't quietly miss runs.
 
-**Monitoring.** The daemon
+**Monitoring.** **The Luminarium** (the moflo daemon's localhost UI) surfaces live schedules, recent executions, and per-schedule controls (disable / re-enable / run now). It starts alongside the daemon at `http://localhost:3117` (override with `--dashboard-port` or disable with `--no-dashboard`).
 
 For full configuration (`scheduler:` block in `moflo.yaml`), event types, and the catch-up window after restarts, see [docs/SPELLS.md#scheduling](docs/SPELLS.md#scheduling).
 
package/bin/index-guidance.mjs
CHANGED
@@ -131,6 +131,18 @@ function loadGuidanceDirs() {
   // 3. CLAUDE.md files are NOT indexed — Claude loads them into context automatically.
   // Indexing them wastes vectors and creates duplicate keys across subprojects.
 
+  // 4. Project skills — index .claude/skills/<name>/SKILL.md
+  const projectSkillsDir = resolve(projectRoot, '.claude/skills');
+  if (existsSync(projectSkillsDir)) {
+    dirs.push({ path: '.claude/skills', prefix: 'skill', fileFilter: ['SKILL.md'], kind: 'skill' });
+  }
+
+  // 5. Bundled moflo skills — gated by isSelfRef to prevent double-indexing
+  const bundledSkillsDir = resolve(mofloRoot, '.claude/skills');
+  if (!isSelfRef && existsSync(bundledSkillsDir) && resolve(bundledSkillsDir) !== resolve(projectSkillsDir)) {
+    dirs.push({ path: bundledSkillsDir, prefix: 'skill-bundled', fileFilter: ['SKILL.md'], kind: 'skill', absolute: true });
+  }
+
   return dirs;
 }
 
@@ -513,10 +525,12 @@ function buildHierarchy(chunks, chunkPrefix) {
   return hierarchy;
 }
 
-function indexFile(db, filePath, keyPrefix) {
-  const fileName = basename(filePath, extname(filePath));
+function indexFile(db, filePath, keyPrefix, options = {}) {
+  const fileName = options.nameOverride || basename(filePath, extname(filePath));
   const docKey = `doc-${keyPrefix}-${fileName}`;
   const chunkPrefix = `chunk-${keyPrefix}-${fileName}`;
+  const extraMetadata = options.extraMetadata || {};
+  const extraTags = options.extraTags || [];
 
   try {
     const content = readFileSync(filePath, 'utf-8');
@@ -538,6 +552,7 @@ function indexFile(db, filePath, keyPrefix) {
 
     // 1. Store full document
     const docMetadata = {
+      ...extraMetadata,
       type: 'document',
       filePath: relativePath,
       fileSize: stats.size,
@@ -547,7 +562,7 @@ function indexFile(db, filePath, keyPrefix) {
       ragVersion: '2.0', // Mark as full RAG indexed
     };
 
-    storeEntry(db, docKey, content, docMetadata, [keyPrefix, 'document']);
+    storeEntry(db, docKey, content, docMetadata, [keyPrefix, 'document', ...extraTags]);
     debug(`Stored document: ${docKey}`);
 
     // 2. Chunk and store semantic pieces with full RAG linking
@@ -567,7 +582,7 @@ function indexFile(db, filePath, keyPrefix) {
       children: siblings,
       chunkCount: chunks.length,
     };
-    storeEntry(db, docKey, content, docChildrenMeta, [keyPrefix, 'document']);
+    storeEntry(db, docKey, content, docChildrenMeta, [keyPrefix, 'document', ...extraTags]);
 
     for (let i = 0; i < chunks.length; i++) {
       const chunk = chunks[i];
@@ -589,6 +604,7 @@ function indexFile(db, filePath, keyPrefix) {
       const hierInfo = hierarchy[chunkKey];
 
       const chunkMetadata = {
+        ...extraMetadata,
        type: 'chunk',
        ragVersion: '2.0',
 
@@ -647,7 +663,7 @@ function indexFile(db, filePath, keyPrefix) {
        chunkKey,
        searchableContent,
        chunkMetadata,
-       [keyPrefix, 'chunk', `level-${chunk.level}`, chunk.title.toLowerCase().replace(/[^a-z0-9]+/g, '-')]
+       [keyPrefix, 'chunk', `level-${chunk.level}`, chunk.title.toLowerCase().replace(/[^a-z0-9]+/g, '-'), ...extraTags]
      );
 
      debug(`  Stored chunk ${i}: ${chunk.title} (${chunk.content.length} chars, prev=${!!prevChunk}, next=${!!nextChunk})`);
@@ -699,7 +715,17 @@ function indexDirectory(db, dirConfig) {
     : allMdFiles;
 
   for (const filePath of filtered) {
-
+    let options = {};
+    if (dirConfig.kind === 'skill') {
+      // kind: 'skill' — key by parent dir name (skill folder), not SKILL.md
+      const skillName = basename(dirname(filePath));
+      options = {
+        nameOverride: skillName,
+        extraMetadata: { kind: 'skill', skill_name: skillName },
+        extraTags: ['skill', `skill-${skillName}`],
+      };
+    }
+    const result = indexFile(db, filePath, dirConfig.prefix, options);
     results.push(result);
   }
 
package/bin/session-start-launcher.mjs
CHANGED
@@ -1435,14 +1435,15 @@ try {
     } catch { /* writing the failure itself must not throw */ }
   }
 
-  // ── 3e-729. Purge ephemeral-namespace rows (#729)
-  //
-  //
-  //
-  //
-  //
-  // `purged: 0`
-  // so the foreground sql.js write isn't
+  // ── 3e-729. Purge ephemeral-namespace rows + trim tasklist (#729, #968) ─────
+  // Three namespaces (hive-mind, epic-state, test-bridge-fix) store internal
+  // run-tracking and get hard-deleted on every session start. The fourth
+  // embedding-skipped namespace, `tasklist`, backs the dashboard's Flo Runs
+  // tab — it's *trimmed* to a retention cap instead of purged so prior runs
+  // survive a session restart (#968). Idempotent: returns
+  // `{ purged: 0, trimmed: 0 }` when nothing needs cleaning. Runs BEFORE the
+  // background MCP/daemon spawn so the foreground sql.js write isn't
+  // overwritten by a concurrent flush.
   try {
     const purgePaths = [
       resolve(projectRoot, 'node_modules/moflo/dist/src/cli/services/ephemeral-namespace-purge.js'),
@@ -1458,6 +1459,12 @@ try {
         `${plural(result.purged, 'row')} from internal run-tracking`,
       );
     }
+    if (result?.trimmed > 0) {
+      emitMutation(
+        'trimmed flo run history',
+        `${plural(result.trimmed, 'old row')} beyond retention cap`,
+      );
+    }
   }
 } catch (err) {
   // Non-fatal — leftover rows just sit until the next session retries.