pi-smart-compact 7.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md ADDED
@@ -0,0 +1,72 @@
+ # Changelog
+
+ ## [7.5.0] - 2026-05-17
+
+ ### Added
+
+ - **Redundancy-aware pre-pruning**: New `pruning.ts` module collapses redundant message sequences before compaction — duplicate file reads (keep last), collapsed error chains, agent acknowledgment message removal, long tool output truncation (>800 chars). Reduces compaction input by 15-30%.
+ - **Topic-level compression budgeting**: `allocateTopicBudgets()` assigns per-topic token allocations based on priority (critical 2x, high 1.5x), error density, recency weighting, and decision count. Assembly prompt includes budget hints per segment.
+ - **Project context fingerprint**: New `fingerprint.ts` module stores lightweight per-project metadata (language, framework, key directories, known files) across sessions. Compaction uses this for better file verification and context injection.
+ - **Post-compaction damage detection**: New `damage.ts` module monitors agent behavior after compaction for regression signals — re-reads of compacted files, user complaints, re-questions about compacted topics. Logs damage reports for future analysis.
+ - **Compaction preview context**: Project fingerprint and pruning stats are shown in notifications before compaction starts.
+
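The pre-pruning pass above can be sketched roughly as follows. This is a minimal illustration, not the actual `pruning.ts` API; `PrunableMessage` and `pruneRedundant` are hypothetical names, and only the keep-last and truncation rules are shown:

```typescript
// Hypothetical sketch of redundancy-aware pre-pruning: for repeated reads of the
// same file keep only the most recent copy, and truncate long tool output.
interface PrunableMessage {
  role: "user" | "assistant" | "tool";
  filePath?: string; // set for file-read tool results
  content: string;
}

const MAX_TOOL_OUTPUT_CHARS = 800;

function pruneRedundant(messages: PrunableMessage[]): PrunableMessage[] {
  // Walk backwards so the LAST read of each file is the one we keep.
  const seenReads = new Set<string>();
  const kept: PrunableMessage[] = [];
  for (let i = messages.length - 1; i >= 0; i--) {
    const msg = messages[i];
    if (msg.role === "tool" && msg.filePath) {
      if (seenReads.has(msg.filePath)) continue; // drop the earlier duplicate read
      seenReads.add(msg.filePath);
    }
    const content =
      msg.role === "tool" && msg.content.length > MAX_TOOL_OUTPUT_CHARS
        ? msg.content.slice(0, MAX_TOOL_OUTPUT_CHARS) + "\n[truncated]"
        : msg.content;
    kept.push({ ...msg, content });
  }
  return kept.reverse();
}
```

Walking from the end of the conversation makes "keep the last read" a simple set-membership check.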
+ ### Changed
+
+ - Single-pass prompt now includes project context from fingerprint.
+ - `preProcessSummaries` accepts optional `budgetTokens` parameter for topic-level budget allocation.
+
+ ## [7.4.0] - 2026-05-17
+
+ ### Added
+
+ - **Decision propagation**: Batch summarization now injects active decisions from previous segments into each batch prompt, preventing cross-batch decision amnesia and reducing semantic drift.
+ - **Deterministic patch**: New `patchDeterministic()` function injects verification gaps directly into the relevant summary sections (files → Files Modified, errors → Critical Context, etc.) without any LLM call.
+ - **Lazy verification**: Verification scores ≥ 85 skip patching entirely. Scores 75–84 use deterministic patch only. Scores < 75 fall back to LLM patch only if deterministic patch is insufficient.
+ - **Session-aware prompts**: `SESSION_TYPE_INSTRUCTIONS` map provides session-type-specific focus instructions (debugging, implementation, review, discussion) injected into single-pass synthesis.
+ - **Immutable Context framing**: Assembly prompt now presents deterministic data as "IMMUTABLE CONTEXT (do not modify)" with explicit rules against fabrication and contradiction.
+ - **Metrics memory cap**: `_metrics` array capped at 200 entries (pruning to 100 when exceeded) to prevent memory leaks in long-running processes.
+
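The lazy verification tiers can be expressed as a small decision function. This is a sketch; `planPatch` is a hypothetical name, and only the thresholds listed above are encoded:

```typescript
// Sketch of the verification-score tiers described above.
// planPatch is a hypothetical helper name, not part of the published API.
type PatchPlan = "skip" | "deterministic" | "deterministic-then-llm";

function planPatch(verificationScore: number): PatchPlan {
  if (verificationScore >= 85) return "skip"; // summary already trusted
  if (verificationScore >= 75) return "deterministic"; // cheap, zero-LLM patch
  return "deterministic-then-llm"; // escalate only if the deterministic patch falls short
}
```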
+ ### Changed
+
+ - **Renamed**: Extension renamed from `semantic-compact` to `pi-smart-compact`. Command changed from `/compact-semantic` to `/smart-compact`. Tool changed from `semantic_compact` to `smart_compact`. Config key changed from `semanticCompact` to `smartCompact`.
+ - **Token estimation**: Language-aware (Turkish/CE character penalty) and JSON-aware penalty. Per-provider calibration instead of global shared factor to prevent cross-session bleed.
+ - **Verification path matching**: Replaced basename-only `string.includes()` with path suffix array matching to reduce false positives (e.g., "index" no longer matches every file containing "index").
+ - **Boundary merging**: LLM boundaries no longer completely override heuristic boundaries. Low-confidence LLM boundaries (confidence < 0.4) are filtered. Remaining LLM boundaries are merged with heuristic boundaries that fill gaps.
+ - **Constraint mining**: Added Turkish diacritical character variants (önemli, şart, zorunlu, kesinlikle, asla, sakın) and new Turkish pattern categories (prohibition: yapma/kullanma/asla, preference: tercih/isterim/olsun).
+ - **Keep-boundary token calc**: Uses content text instead of JSON.stringify(message) to avoid metadata overhead in token estimation.
+
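The path-matching change can be sketched as trailing-segment comparison. This is a hypothetical `pathMatches` helper; the real implementation may differ, particularly in how it treats file extensions:

```typescript
// Sketch of suffix-based path matching: a candidate only matches when it lines up
// with whole trailing path segments, so "index" no longer matches "src/reindex.ts"
// or every path that merely contains the substring.
function pathMatches(filePath: string, candidate: string): boolean {
  const norm = (p: string) => p.split("/").filter(Boolean);
  const pathParts = norm(filePath);
  const candParts = norm(candidate);
  if (candParts.length === 0 || candParts.length > pathParts.length) return false;
  const tail = pathParts.slice(-candParts.length);
  const stripExt = (s: string) => s.replace(/\.[^.]+$/, "");
  return candParts.every((part, i) => {
    // Allow the final segment to match without its extension ("index" ~ "index.ts").
    if (i === candParts.length - 1) return stripExt(tail[i]) === stripExt(part);
    return tail[i] === part;
  });
}
```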
+ ### Performance
+
+ - **Adaptive exploration gate**: Simple sessions (≤3 topics, ≤1 unresolved error, ≤2 decisions, ≤2 directory groups) skip Phase 2 exploration entirely, saving 3-8 LLM calls.
+ - **Tool support cache**: Provider tool support results cached for 30 minutes with TTL-based eviction. Prevents repeated probe calls for known-unsupported providers.
+
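Under the stated thresholds, the exploration gate reduces to a four-way predicate. This is a sketch; `shouldSkipExploration` and the field names are illustrative, not the shipped API:

```typescript
// Sketch of the adaptive exploration gate: a session under all four thresholds
// skips the LLM exploration phase entirely.
interface SessionShape {
  topicCount: number;
  unresolvedErrors: number;
  decisionCount: number;
  directoryGroups: number;
}

function shouldSkipExploration(s: SessionShape): boolean {
  return (
    s.topicCount <= 3 &&
    s.unresolvedErrors <= 1 &&
    s.decisionCount <= 2 &&
    s.directoryGroups <= 2
  );
}
```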
+ ### Fixed
+
+ - Constraint regex bug: `önemli` (with ö) now correctly matches alongside `onemli`.
+ - `calibrateFromResponse` now requires provider parameter to scope calibration per-provider.
+
+ ## [7.3.1] - 2026-05-16
+
+ ### Added
+
+ - Full EESV pipeline: Extract → Explore → Synthesize → Verify
+ - Deterministic extraction of files, errors, decisions, constraints, topics
+ - LLM tool-calling exploration with 6 exploration tools
+ - Parallel batch synthesis with provider-aware concurrency
+ - Automated verification with coverage checks and hallucination detection
+ - Quality score (0-100) and gap patching
+ - Incremental compaction cache (1hr TTL, delta extraction)
+ - Live progress overlay and detailed result screen
+ - Metrics logging (`~/.pi/agent/.cache/compact-metrics.jsonl`)
+ - Cache-aware LLM calls with session affinity
+ - Three compression profiles: light, balanced, aggressive
+
+ ### Changed
+
+ - Refactored from single 2200-line file to modular `src/` architecture
+ - Improved token estimation with provider-specific ratios and EMA calibration
+ - Added `LlmMessage` and `StructuredExtraction` types replacing `any`
+
+ ### Fixed
+
+ - Tool loop safety (prevents orphaned tool result errors)
+ - No-op edit detection
+ - Graceful fallback when models don't support tool calling
package/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 Semantic Compact Contributors
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,200 @@
+ # Smart Compact
+
+ > EESV-powered smart compaction extension for the [Pi Coding Agent](https://github.com/earendil-works/pi-coding-agent).
+
+ [![Version](https://img.shields.io/badge/version-7.5.0-blue)](./package.json)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](./LICENSE)
+
+ ---
+
+ ## What is it?
+
+ **Smart Compact** compresses long conversation contexts by understanding *what happened* instead of blindly truncating. It uses the **EESV architecture**:
+
+ | Phase | What it does | LLM calls |
+ |-------|-------------|-----------|
+ | **Extract** | Deterministically pull files, errors, decisions, constraints, topics | 0 |
+ | **Explore** | Use LLM tools to verify boundaries and enrich context (skipped for simple sessions) | 0–8 |
+ | **Synthesize** | Parallel batch summarization + assembly | N + 1 |
+ | **Verify** | Check coverage, detect hallucinations, patch gaps | 0–1 |
+
+ **Result:** Shorter context that preserves the *meaning* and *state* of your session.
+
+ ---
+
+ ## Features
+
+ - 🔍 **Deterministic extraction** — zero-LLM-call file/error/decision mining
+ - 🧭 **Tool-calling exploration** — targeted investigation with `get_message_range`, `search_conversation`, `get_error_chain`
+ - ⚡ **Parallel batch synthesis** — provider-aware concurrency (2–5 in flight)
+ - ✅ **Automated verification** — coverage checks, hallucination detection, gap patching
+ - 📊 **Live metrics** — token savings, cache hit rate, latency per phase
+ - 🎛️ **Profiles** — `light` / `balanced` / `aggressive` compression
+ - 💾 **Backup & incremental cache** — safe rollback, delta re-compaction
+ - 🧠 **Adaptive exploration** — skips Phase 2 for simple sessions, saving 3–8 LLM calls
+ - 🔧 **Provider-aware token estimation** — language and JSON-aware with per-provider calibration
+
+ ---
+
+ ## Install
+
+ ```bash
+ # Inside your Pi agent extensions directory
+ cd ~/.pi/agent/extensions
+ git clone https://github.com/YOUR_USERNAME/pi-smart-compact.git
+ cd pi-smart-compact
+ bun install
+ ```
+
+ Add to your Pi `settings.json`:
+
+ ```json
+ {
+ "extensions": ["pi-smart-compact"]
+ }
+ ```
+
+ Or via `package.json` (already configured):
+
+ ```json
+ {
+ "pi": {
+ "extensions": ["./src/index.ts"]
+ }
+ }
+ ```
+
+ ---
+
+ ## Usage
+
+ ```bash
+ # TUI — pick model + profile
+ /smart-compact
+
+ # Direct — specific model + profile
+ /smart-compact anthropic/claude-sonnet-4 balanced
+
+ # Dry run — preview only
+ /smart-compact dry-run
+
+ # Verbose — detailed logging
+ /smart-compact debug
+
+ # Add a steering note
+ /smart-compact "focus on auth changes"
+ ```
+
+ ### Tool Usage
+
+ The extension also registers a tool named `smart_compact`:
+
+ ```json
+ {
+ "name": "smart_compact",
+ "parameters": {
+ "profile": "balanced",
+ "verbose": false,
+ "dry_run": false
+ }
+ }
+ ```
+
+ ---
+
+ ## Architecture
+
+ ```
+ ┌─────────────────────────────────────────────┐
+ │ Conversation (too long) │
+ └──────────────┬──────────────────────────────┘
+
+ ┌────────────▼────────────┐
+ │ Phase 1: EXTRACT │ ← deterministic (0 LLM calls)
+ │ • files modified/read │
+ │ • errors + retries │
+ │ • decisions │
+ │ • constraints │
+ └────────────┬────────────┘
+
+ ┌────────────▼────────────┐
+ │ Phase 2: EXPLORE │ ← LLM with tools (0–8 rounds)
+ │ • verify topic bounds │ ← skipped for simple sessions
+ │ • find cross-references│
+ │ • assess status │
+ └────────────┬────────────┘
+
+ ┌────────────▼────────────┐
+ │ Phase 3: SYNTHESIZE │ ← parallel batch summarize
+ │ • chunk messages │
+ │ • summarize batches │
+ │ • assemble final │
+ └────────────┬────────────┘
+
+ ┌────────────▼────────────┐
+ │ Phase 4: VERIFY │ ← deterministic checks
+ │ • coverage gaps? │
+ │ • hallucinated files? │
+ │ • patch if needed │
+ └────────────┬────────────┘
+
+ ┌────────────▼────────────┐
+ │ Compact context applied│
+ └─────────────────────────┘
+ ```
+
+ ---
+
+ ## Profiles
+
+ | Profile | Summary Budget | Keep Recent | Best For |
+ |---------|---------------|-------------|----------|
+ | **light** | 10K tokens | 30K tokens | Debugging, complex multi-file refactors |
+ | **balanced** | 6K tokens | 20K tokens | General development (default) |
+ | **aggressive** | 3K tokens | 10K tokens | Quick exploration, prototyping |
+
+ ---
+
+ ## Configuration
+
+ Create `~/.pi/agent/settings.json`:
+
+ ```json
+ {
+ "smartCompact": {
+ "profile": "balanced",
+ "summaryModel": "anthropic/claude-sonnet-4",
+ "segmentationModel": "anthropic/claude-haiku-3",
+ "autoTrigger": true,
+ "backupEnabled": true,
+ "profiles": {
+ "light": { "summaryBudgetTokens": 10000, "keepRecentTokens": 30000 }
+ }
+ }
+ }
+ ```
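A profile entry in your settings only needs the fields you want to change; presumably the rest fall back to the built-in defaults. A sketch of that per-field merge, under that assumption (`resolveProfiles` is a hypothetical helper, and the field set is trimmed to two fields for brevity):

```typescript
// Sketch: merge user profile overrides field-by-field over built-in defaults.
// Assumes per-field merge semantics; the extension's actual merge may differ.
interface ProfileConfig {
  summaryBudgetTokens: number;
  keepRecentTokens: number;
}

const DEFAULT_PROFILES: Record<string, ProfileConfig> = {
  light: { summaryBudgetTokens: 10000, keepRecentTokens: 30000 },
  balanced: { summaryBudgetTokens: 6000, keepRecentTokens: 20000 },
  aggressive: { summaryBudgetTokens: 3000, keepRecentTokens: 10000 },
};

function resolveProfiles(
  overrides: Record<string, Partial<ProfileConfig>> = {},
): Record<string, ProfileConfig> {
  const merged: Record<string, ProfileConfig> = {};
  for (const [name, defaults] of Object.entries(DEFAULT_PROFILES)) {
    merged[name] = { ...defaults, ...overrides[name] }; // user values win per field
  }
  return merged;
}
```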
+
+ ---
+
+ ## Development
+
+ ```bash
+ bun install
+ bun test # runs test suite
+ ```
+
+ ---
+
+ ## Contributing
+
+ 1. Fork it
+ 2. Create your feature branch (`git checkout -b feat/amazing-feature`)
+ 3. Commit your changes (`git commit -am 'Add amazing feature'`)
+ 4. Push (`git push origin feat/amazing-feature`)
+ 5. Open a Pull Request
+
+ ---
+
+ ## License
+
+ MIT © [Alper](https://github.com/alper)
package/package.json ADDED
@@ -0,0 +1,42 @@
+ {
+ "name": "pi-smart-compact",
+ "version": "7.5.0",
+ "description": "EESV smart compaction extension for Pi Coding Agent — deterministic extraction, exploration, synthesis, verification with redundancy pruning, project fingerprinting, and damage detection.",
+ "license": "MIT",
+ "author": "Alper Tarhan <alpertarhan@gmail.com>",
+ "repository": {
+ "type": "git",
+ "url": "git+https://github.com/alpertarhan/pi-smart-compact.git"
+ },
+ "homepage": "https://github.com/alpertarhan/pi-smart-compact#readme",
+ "bugs": {
+ "url": "https://github.com/alpertarhan/pi-smart-compact/issues"
+ },
+ "keywords": [
+ "pi",
+ "pi-extension",
+ "pi-coding-agent",
+ "pi-smart-compact",
+ "smart-compaction",
+ "context-compression",
+ "eesv",
+ "conversation-summarization",
+ "llm-context"
+ ],
+ "files": [
+ "src",
+ "README.md",
+ "LICENSE",
+ "CHANGELOG.md"
+ ],
+ "scripts": {
+ "test": "bun test",
+ "typecheck": "bunx tsc --noEmit"
+ },
+ "pi": {
+ "extensions": ["./src/index.ts"]
+ },
+ "publishConfig": {
+ "access": "public"
+ }
+ }
@@ -0,0 +1,140 @@
+ /**
+ * Constants, prompts, and profile defaults.
+ */
+
+ import type { CompressionProfile, ProfileConfig } from "./types.ts";
+
+ export const VERSION = "7.5.0";
+ export const CHARS_PER_TOKEN = 3.8;
+
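`CHARS_PER_TOKEN` drives a simple character-ratio estimate. A minimal sketch of how such an estimator might look; the shipped estimator additionally applies per-provider calibration and language/JSON penalties, and `estimateTokens` here is a hypothetical name:

```typescript
// Sketch: character-ratio token estimate. A penalty > 1 models denser tokenization
// (e.g. JSON or non-English text); the real module calibrates this per provider.
const CHARS_PER_TOKEN = 3.8;

function estimateTokens(text: string, penalty = 1.0): number {
  return Math.ceil((text.length / CHARS_PER_TOKEN) * penalty);
}
```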
+ export const COMPACT_SYSTEM_PREFIX =
+ "You are an expert conversation summarizer for a coding agent. " +
+ "Produce structured markdown summaries. " +
+ "Follow output format exactly. " +
+ "Use EXACT names — never paraphrase code identifiers. " +
+ "Trust deterministic extraction data over intuition.";
+
+ export const PROFILES: Record<CompressionProfile, ProfileConfig> = {
+ light: {
+ summaryBudgetTokens: 10000,
+ keepRecentTokens: 30000,
+ minChunkTokens: 800,
+ maxChunkTokens: 12000,
+ singlePassMaxTokens: 40000,
+ batchMaxTokens: 30000,
+ },
+ balanced: {
+ summaryBudgetTokens: 6000,
+ keepRecentTokens: 20000,
+ minChunkTokens: 500,
+ maxChunkTokens: 8000,
+ singlePassMaxTokens: 30000,
+ batchMaxTokens: 24000,
+ },
+ aggressive: {
+ summaryBudgetTokens: 3000,
+ keepRecentTokens: 10000,
+ minChunkTokens: 300,
+ maxChunkTokens: 6000,
+ singlePassMaxTokens: 20000,
+ batchMaxTokens: 18000,
+ },
+ };
+
+ export const DEFAULT_CONFIG = {
+ profile: "balanced" as CompressionProfile,
+ profiles: PROFILES,
+ summaryModel: null as string | null,
+ segmentationModel: null as string | null,
+ autoTrigger: true,
+ backupEnabled: true,
+ backupDir: "",
+ };
+
+ export const NO_OP_RE = /applied:\s*0|no changes applied|nothing to (?:do|change)|0 edits? applied/i;
+ export const SHIFT_RE = /simdi|peki|bide|bi de|gecelim|bakalim|yapalim|baska|sonra|tamam simdi|now let|also|next|let's|moving on|switch to/i;
+ export const CHOICE_RE = /use\s+\S+\s+(?:instead|not|rather)|don't\s+use|avoid\s+|switch\s+to\s+|go\s+with\s+|prefer\s+/i;
+
+ // ── Prompt Templates ──
+
+ export const SINGLE_PASS_PREFIX =
+ "Summarize this coding agent conversation. Produce ONE structured summary.\n" +
+ "\nRules for Accuracy:\n" +
+ "1. Session Type: read-only tool calls = REVIEW, not implementation\n" +
+ "2. Status: Check for user complaints before marking \"Done\"\n" +
+ "3. Exact Names: Quote specific variable/function/parameter names, don't paraphrase\n" +
+ "4. Files: Use the VERIFIED file lists above (deterministically extracted, zero hallucination risk)\n" +
+ "\nOutput EXACTLY this format:\n\n" +
+ "## Goal\n[What the user is trying to accomplish]\n" +
+ "## Constraints & Preferences\n- [CRITICAL: user requirements, preferences, constraints]\n" +
+ "## Progress\n### Done\n- [x] [Completed tasks with file references]\n### In Progress\n- [ ] [Current work state]\n### Blocked\n- [Issues]\n" +
+ "## Key Decisions\n- **[Decision]**: [Rationale]\n" +
+ "## Files Modified\n- [Verified list from deterministic extraction]\n" +
+ "## Files Read\n- [Verified list from deterministic extraction]\n" +
+ "## Next Steps\n1. [What should happen next]\n" +
+ "## Critical Context\n- [Specific data, patterns, info needed to continue]\n- [Error patterns or gotchas]\n" +
+ "## Topics Covered\n[Chronological bullet list with priority in brackets]\n";
+
+ export const SINGLE_PASS_SUFFIX =
+ "\n{PREV_CONTEXT}\n\n{EXTRACTION_CONTEXT}\n\n{EXPLORATION_CONTEXT}\n\n<conversation>\n{CONVERSATION}\n</conversation>";
+
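The `{PLACEHOLDER}` slots in these templates are filled in before the prompt is sent. A minimal substitution sketch (`fillTemplate` is a hypothetical helper, assuming unknown slots are left intact):

```typescript
// Sketch: substitute {UPPER_CASE} slots in a prompt template from a value map.
// Slots without a supplied value are left untouched.
function fillTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\{([A-Z_]+)\}/g, (match: string, key: string) =>
    key in values ? values[key] : match,
  );
}
```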
+ export const BATCH_PROMPT_PREFIX =
+ "Summarize these conversation segments.\n\nRules for Accuracy:\n" +
+ "1. Use EXACT file paths from extraction data\n" +
+ "2. Status: only mark \"done\" if there's clear evidence (successful test run, user confirmation)\n" +
+ "3. Quote specific values, don't paraphrase code\n\n" +
+ "For EACH segment produce:\n" +
+ "### {TOPIC_NAME}\n" +
+ "**Priority**: [critical|high|normal|low]\n" +
+ "**Summary**: [2-4 sentences: what happened, errors, code changes with paths]\n" +
+ "**Decisions**: [comma-separated, or \"None\"]\n" +
+ "**Modified**: [comma-separated paths, or \"None\"]\n" +
+ "**Read**: [comma-separated paths, or \"None\"]\n";
+
+ export const BATCH_PROMPT_SUFFIX = "\n{EXTRACTION_CONTEXT}\n\n<segments>\n{TEXT}\n</segments>";
+
+ export const ASSEMBLY_PROMPT_PREFIX =
+ "Merge these topic summaries into ONE coherent summary.\n\n" +
+ "## IMMUTABLE CONTEXT (do not modify or contradict these facts)\n" +
+ "These are deterministically verified from the original conversation. They take priority over ANY summary content below.\n\n" +
+ "Rules:\n" +
+ "1. Preserve ALL critical/high info. Condense normal, minimize low.\n" +
+ "2. Chronological order.\n" +
+ "3. The pre-processed data below is GROUND TRUTH — trust it over individual summaries.\n" +
+ "4. Files Modified list is deterministically verified — if a summary says a file was modified but it's NOT in the list above, omit it.\n" +
+ "5. Key Decisions below are verified — preserve them exactly, do not paraphrase the decision text.\n" +
+ "6. Do NOT fabricate file paths, function names, or error messages not present in the verified data.\n\n" +
+ "Format:\n" +
+ "## Goal\n[Overall objective]\n" +
+ "## Constraints & Preferences\n- [CRITICAL requirements, preferences, constraints]\n" +
+ "## Progress\n### Done\n- [x] [Completed tasks with file refs]\n### In Progress\n- [ ] [Current work state]\n### Blocked\n- [Issues]\n" +
+ "## Key Decisions\n- **[Decision]**: [Rationale]\n" +
+ "## Files Modified\n- [Verified deterministic list]\n" +
+ "## Files Read\n- [Verified deterministic list]\n" +
+ "## Next Steps\n1. [What should happen next]\n" +
+ "## Critical Context\n- [Data, patterns, info needed]\n" +
+ "## Topics Covered\n[Chronological bullets with priority]\n";
+
+ export const ASSEMBLY_PROMPT_SUFFIX =
+ "\nIMMUTABLE CONTEXT (verified deterministic data):\n- Key Decisions: {DECISIONS}\n- Files Modified (VERIFIED): {MODIFIED}\n- Files Read (VERIFIED): {READ}\n\n{EXPLORATION_CONTEXT}\n{PREV_CONTEXT}\n\n<summaries>{SUMMARIES}</summaries>";
+
+ // ── Session-type-specific prompt instructions ──
+
+ export const SESSION_TYPE_INSTRUCTIONS: Record<string, string> = {
+ debugging: "Focus on: error chains, root cause analysis, attempted fixes, resolution status. Prioritize error messages and stack traces. Mark files as Done only if all errors resolved.",
+ implementation: "Focus on: files created/modified, architectural decisions, feature completeness, test coverage. Prioritize code changes with exact paths.",
+ review: "Focus on: files read, issues found, recommendations, approval status. Prioritize findings over changes. Read-only tool calls = REVIEW, not implementation.",
+ discussion: "Focus on: decisions made, trade-offs discussed, consensus reached. Prioritize rationale over implementation details.",
+ };
+
+ export const EXPLORER_SYSTEM_PROMPT =
+ "You are a conversation analyst. You have deterministic extraction data and can query the raw conversation using tools.\n\n" +
+ "Your job:\n" +
+ "1. Verify/enrich the extracted boundaries (merge, split, or add as needed)\n" +
+ "2. Identify cross-topic relationships\n" +
+ "3. Find implicit constraints (user tone, frustration, urgency)\n" +
+ "4. Assess completion status accurately\n" +
+ "5. Extract the narrative arc\n\n" +
+ "Use tools BEFORE forming conclusions. You may make up to 8 tool calls.\n\n" +
+ "After exploration, output ONLY a JSON object (no markdown):\n" +
+ '{"boundaries":[{"afterIndex":N,"topic":"...","priority":"critical|high|normal|low","confidence":0.0-1.0}],"mainGoal":"...","sessionType":"implementation|review|debugging|discussion","enrichedConstraints":[...],"crossReferences":[...],"statusAssessment":{"done":[...],"inProgress":[...],"blocked":[...]},"criticalContext":[...],"keyDecisions":[...]}';