deepflow 0.1.89 → 0.1.91

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,42 +0,0 @@
- ---
- name: df:consolidate
- description: Remove duplicates and superseded entries from decisions file, promote stale provisionals
- ---
-
- # /df:consolidate — Consolidate Decisions
-
- Remove duplicates, superseded entries, and promote stale provisionals. Keep decisions.md dense and useful.
-
- **NEVER:** use EnterPlanMode, ExitPlanMode
-
- ## Behavior
-
- ### 1. LOAD
- Read `.deepflow/decisions.md` via `` !`cat .deepflow/decisions.md 2>/dev/null || echo 'NOT_FOUND'` ``. If missing/empty, report and exit.
-
- ### 2. ANALYZE (model-driven, not regex)
- - Identify duplicates (same meaning, different wording)
- - Identify superseded entries (later contradicts earlier)
- - Identify stale `[PROVISIONAL]` entries (>30 days old, no resolution)
-
- ### 3. CONSOLIDATE
- - Remove duplicates (keep more precise wording)
- - Remove superseded entries (later decision wins)
- - Promote stale `[PROVISIONAL]` → `[DEBT]`
- - Preserve `[APPROACH]` unless superseded, `[ASSUMPTION]` unless invalidated
- - Target: 200-500 lines if currently longer
- - When in doubt, keep both entries (conservative)
-
- ### 4. WRITE
- - Rewrite `.deepflow/decisions.md` with consolidated content
- - Write `{ "last_consolidated": "{ISO-8601}" }` to `.deepflow/last-consolidated.json`
-
- ### 5. REPORT
- `✓ Consolidated: {before} → {after} lines, {n} removed, {n} promoted to [DEBT]`
-
- ## Rules
-
- - Conservative: when in doubt, keep both entries
- - Never add new decisions — only remove, merge, or re-tag
- - `[DEBT]` is only produced by consolidation, never manually assigned
- - Preserve chronological ordering within sections
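The WRITE step of the removed `/df:consolidate` file specifies a `last-consolidated.json` marker. A minimal sketch of that write, assuming a Python implementation (the function name and directory argument are illustrative, not part of the package):

```python
import json
from datetime import datetime, timezone
from pathlib import Path


def record_consolidation(deepflow_dir: str = ".deepflow") -> dict:
    """Write the last-consolidated marker described in step 4 (WRITE)."""
    # ISO-8601 UTC timestamp, matching the `{ "last_consolidated": "{ISO-8601}" }` shape
    stamp = {"last_consolidated": datetime.now(timezone.utc).isoformat()}
    Path(deepflow_dir).mkdir(exist_ok=True)
    (Path(deepflow_dir) / "last-consolidated.json").write_text(json.dumps(stamp))
    return stamp
```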
@@ -1,73 +0,0 @@
- ---
- name: df:note
- description: Capture decisions that emerged during free conversations outside of deepflow commands
- ---
-
- # /df:note — Capture Decisions from Free Conversations
-
- ## Orchestrator Role
-
- Scan conversation for candidate decisions, present for user confirmation, persist to `.deepflow/decisions.md`.
-
- **NEVER:** Spawn agents, use Task tool, use Glob/Grep on source code, run git, use TaskOutput, EnterPlanMode, ExitPlanMode
-
- **ONLY:** Read `.deepflow/decisions.md`, present candidates via `AskUserQuestion`, append confirmed decisions
-
- ## Behavior
-
- ### 1. EXTRACT CANDIDATES
-
- Scan prior messages for resolved choices, adopted approaches, or stated assumptions. Look for:
- - **Approaches chosen**: "we'll use X instead of Y"
- - **Provisional choices**: "for now we'll use X"
- - **Stated assumptions**: "assuming X is true"
- - **Constraints accepted**: "X is out of scope"
- - **Naming/structural choices**: "we'll call it X", "X goes in the Y layer"
-
- Extract **at most 4 candidates**. For each, determine:
-
- | Field | Value |
- |-------|-------|
- | Tag | `[APPROACH]` (deliberate choice), `[PROVISIONAL]` (revisit later), or `[ASSUMPTION]` (unvalidated) |
- | Decision | One concise line describing the choice |
- | Rationale | One sentence explaining why |
-
- If <2 clear candidates found, say so and exit.
-
- ### 2. CHECK FOR CONTRADICTIONS
-
- Read `.deepflow/decisions.md` if it exists. If a candidate contradicts a prior entry: keep prior entry unchanged, amend candidate rationale to `was "X", now "Y" because Z`.
-
- ### 3. PRESENT VIA AskUserQuestion
-
- Single multi-select call. Each option: `label` = tag + decision text, `description` = rationale.
-
- ### 4. APPEND CONFIRMED DECISIONS
-
- For each selected option:
- 1. Create `.deepflow/decisions.md` with `# Decisions` header if absent
- 2. Append a dated section: `### YYYY-MM-DD — note`
- 3. Group all confirmed decisions under one section: `- [TAG] Decision text — rationale`
- 4. Never modify or delete prior entries
-
- ### 5. CONFIRM
-
- Report: `Saved N decision(s) to .deepflow/decisions.md` or `No decisions saved.`
-
- ## Decision Tags
-
- | Tag | Meaning | Source |
- |-----|---------|--------|
- | `[APPROACH]` | Firm decision | /df:note, auto-extraction |
- | `[PROVISIONAL]` | Revisit later | /df:note, auto-extraction |
- | `[ASSUMPTION]` | Unverified | /df:note, auto-extraction |
- | `[DEBT]` | Needs revisiting | /df:consolidate only, never manually assigned |
-
- ## Rules
-
- - Max 4 candidates per invocation (AskUserQuestion tool limit)
- - multiSelect: true — user confirms any subset
- - Never invent decisions — only extract what was discussed and resolved
- - Never modify prior entries in `.deepflow/decisions.md`
- - Source is always `note`; date is today (YYYY-MM-DD)
- - One AskUserQuestion call — all candidates in a single call
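The append format in step 4 of the removed `/df:note` file (one dated `### YYYY-MM-DD — note` section, one `- [TAG] Decision — rationale` bullet per confirmed decision, prior entries untouched) could be sketched like this; the function name and tuple shape are assumptions for illustration:

```python
from datetime import date
from pathlib import Path


def append_decisions(entries, path=".deepflow/decisions.md"):
    """Append confirmed decisions as one dated section (step 4).

    `entries` is a list of (tag, decision, rationale) tuples -- an
    illustrative shape, not taken from the package.
    """
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    # Create the file with a `# Decisions` header if absent
    text = p.read_text() if p.exists() else "# Decisions\n"
    # One dated section groups all decisions confirmed in this invocation
    section = f"\n### {date.today().isoformat()} — note\n"
    for tag, decision, rationale in entries:
        section += f"- [{tag}] {decision} — {rationale}\n"
    p.write_text(text + section)  # append only; prior entries are never modified
```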
@@ -1,75 +0,0 @@
- ---
- name: df:report
- description: Generate session cost report with token usage, cache hit ratio, per-task costs, and quota impact
- allowed-tools: [Read, Write, Bash]
- ---
-
- # /df:report — Session Cost Report
-
- > **DEPRECATED:** Use `/df:dashboard` instead to view deepflow metrics and status.
-
- ## Orchestrator Role
-
- Aggregate token usage data and produce a structured report.
-
- **NEVER:** Spawn agents, use Task tool, use AskUserQuestion, run git, EnterPlanMode, ExitPlanMode
-
- **ONLY:** Read data files, compute aggregates, write `.deepflow/report.json` and `.deepflow/report.md`
-
- ## Behavior
-
- ### 1. LOAD DATA SOURCES
-
- Read each source gracefully — missing files yield zero/empty values, never error out.
-
- | Source | Path | Shell injection | Key fields |
- |--------|------|-----------------|------------|
- | Token history | `.deepflow/token-history.jsonl` | `` !`cat .deepflow/token-history.jsonl 2>/dev/null \|\| echo ''` `` | `timestamp`, `input_tokens`, `cache_creation_input_tokens`, `cache_read_input_tokens`, `used_percentage`, `model`, `session_id` |
- | Quota history | `~/.claude/quota-history.jsonl` | `` !`tail -5 ~/.claude/quota-history.jsonl 2>/dev/null \|\| echo ''` `` | `timestamp`, `event`, API payload |
- | Task results | `.deepflow/results/T*.yaml` | `` !`ls .deepflow/results/T*.yaml 2>/dev/null \|\| echo ''` `` | `tokens` block: `start_percentage`, `end_percentage`, `delta_percentage`, `input_tokens`, `cache_creation_input_tokens`, `cache_read_input_tokens` |
- | Session metadata | `.deepflow/auto-memory.yaml` | `` !`cat .deepflow/auto-memory.yaml 2>/dev/null \|\| echo ''` `` | session_id, start time (optional) |
-
- ### 2. COMPUTE AGGREGATES
-
- ```
- total_input_tokens = sum(input_tokens)
- total_cache_creation = sum(cache_creation_input_tokens)
- total_cache_read = sum(cache_read_input_tokens)
- total_tokens_all = total_input_tokens + total_cache_creation + total_cache_read
- cache_hit_ratio = total_cache_read / total_tokens_all (0 if denominator=0, clamp [0,1], round 4 decimals)
- peak_context_percentage = max(used_percentage)
- model = most recent line's model
- ```
-
- ### 3. WRITE `.deepflow/report.json`
-
- Structure: `{ version: 1, generated: ISO-8601-UTC, session_summary: {total_input_tokens, total_cache_creation, total_cache_read, cache_hit_ratio, peak_context_percentage, model}, tasks: [{task_id, start_percentage, end_percentage, delta_percentage, input_tokens, cache_creation, cache_read}], quota: {available: bool, ...API fields if available} }`
-
- Rules: `version` always 1. `tasks` = `[]` if no results found. `quota.available` = false if missing. All token fields integers >= 0. `cache_hit_ratio` float in [0,1].
-
- ### 4. WRITE `.deepflow/report.md`
-
- Required sections with exact headings:
-
- **## Session Summary** — Table: Model, Total Input Tokens, Cache Creation Tokens, Cache Read Tokens, Cache Hit Ratio (with %), Peak Context Usage %.
-
- **## Per-Task Costs** — Table: Task, Start %, End %, Delta %, Input Tokens, Cache Creation, Cache Read. Show `_(No task data available)_` if empty.
-
- **## Quota Impact** — Quota fields table if `quota.available=true`, else exactly: `Not available (non-macOS or no token)`.
-
- ### 5. CONFIRM
-
- ```
- Report generated:
- .deepflow/report.json — machine-readable (version=1)
- .deepflow/report.md — human-readable summary
- ```
-
- List missing data sources as a note if any were absent.
-
- ## Rules
-
- - Graceful degradation — missing files yield zero/empty, never error
- - No hallucination — only values from actual file contents; 0 for missing fields
- - Idempotent — re-running overwrites both files with fresh data
- - ISO 8601 UTC timestamps for `generated` field
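The COMPUTE AGGREGATES pseudocode in the removed `/df:report` file maps directly to a small function. A sketch, assuming the `token-history.jsonl` field names listed in its data-sources table (the function name is hypothetical):

```python
import json


def compute_aggregates(jsonl_text: str) -> dict:
    """Mirror step 2 (COMPUTE AGGREGATES) over token-history.jsonl content."""
    rows = [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]
    total_input = sum(r.get("input_tokens", 0) for r in rows)
    total_create = sum(r.get("cache_creation_input_tokens", 0) for r in rows)
    total_read = sum(r.get("cache_read_input_tokens", 0) for r in rows)
    total_all = total_input + total_create + total_read
    # 0 if denominator=0, clamp to [0,1], round to 4 decimals
    ratio = 0.0 if total_all == 0 else round(min(max(total_read / total_all, 0.0), 1.0), 4)
    return {
        "total_input_tokens": total_input,
        "total_cache_creation": total_create,
        "total_cache_read": total_read,
        "cache_hit_ratio": ratio,
        "peak_context_percentage": max((r.get("used_percentage", 0) for r in rows), default=0),
        "model": rows[-1].get("model") if rows else None,  # most recent line's model
    }
```

An empty or missing history degrades to zeros rather than erroring, matching the file's graceful-degradation rule.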
@@ -1,47 +0,0 @@
- ---
- name: df:resume
- description: Synthesize project state into a briefing covering what happened, current decisions, and next steps
- allowed-tools: [Read, Grep, Glob, Bash]
- ---
-
- # /df:resume — Session Continuity Briefing
-
- ## Orchestrator Role
-
- Read project state from multiple sources, produce a concise briefing for resuming work. Pure read-only.
-
- **NEVER:** Write/create/modify files, run git write ops, use AskUserQuestion, spawn agents, use TaskOutput, EnterPlanMode, ExitPlanMode
-
- **ONLY:** Read files (Bash read-only git commands, Read, Glob, Grep), write briefing to stdout
-
- ## Behavior
-
- ### 1. GATHER SOURCES (parallel, all reads)
-
- | Source | Command/Path | Purpose |
- |--------|-------------|---------|
- | Git timeline | `` !`git log --oneline -20` `` | What changed and when |
- | Decisions | `` !`cat .deepflow/decisions.md 2>/dev/null \|\| echo 'NOT_FOUND'` `` | Live [APPROACH], [PROVISIONAL], [ASSUMPTION] entries |
- | Plan | `` !`cat PLAN.md 2>/dev/null \|\| echo 'NOT_FOUND'` `` | Task status (checked vs unchecked) |
- | Spec headers | `` !`head -20 specs/doing-*.md 2>/dev/null \|\| echo 'NOT_FOUND'` `` | In-flight features |
- | Experiments | `` !`ls .deepflow/experiments/ 2>/dev/null \|\| echo 'NOT_FOUND'` `` | Validated/failed approaches |
-
- Token budget: ~2500 tokens input. Skip missing sources silently.
-
- ### 2. SYNTHESIZE BRIEFING (200-500 words, 3 sections)
-
- **## Timeline** — 3-6 sentences: arc of work from git log + spec/PLAN state. What completed, in-flight, notable milestones. Reference dates/commits where informative.
-
- **## Live Decisions** — All `[APPROACH]`, `[PROVISIONAL]`, `[ASSUMPTION]` from `.deepflow/decisions.md` as bullets with tag + text + rationale. Show newest entry per topic if contradictions exist. State "No decisions recorded yet." if absent/empty.
-
- **## Next Steps** — From PLAN.md: unblocked `- [ ]` tasks first, then blocked tasks with blockers. If no PLAN.md: suggest `/df:plan`.
-
- ### 3. OUTPUT
-
- Print briefing to stdout. No file writes.
-
- ## Rules
-
- - Read sources in a single pass — no re-reads
- - Contradicted decisions: show newest per topic only
- - Token budget: ~2500 input tokens to produce ~500 words output
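The removed `/df:resume` file reads PLAN.md for "checked vs unchecked" task status. A minimal sketch of that split, assuming the `- [x]` / `- [ ]` checklist convention the file references (the function name is illustrative):

```python
import re


def plan_task_status(plan_text: str):
    """Split PLAN.md checklist lines into done vs open tasks (step 1)."""
    # `- [x]` (any case) marks a completed task, `- [ ]` an open one
    done = re.findall(r"^- \[x\] (.+)$", plan_text, flags=re.M | re.I)
    open_ = re.findall(r"^- \[ \] (.+)$", plan_text, flags=re.M)
    return done, open_
```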