opencodekit 0.19.2 → 0.19.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/index.js +1 -1
- package/dist/template/.opencode/context/git-context.md +32 -0
- package/dist/template/.opencode/dcp-prompts/defaults/README.md +40 -0
- package/dist/template/.opencode/dcp-prompts/defaults/compress-message.md +42 -0
- package/dist/template/.opencode/dcp-prompts/defaults/compress-range.md +59 -0
- package/dist/template/.opencode/dcp-prompts/defaults/context-limit-nudge.md +21 -0
- package/dist/template/.opencode/dcp-prompts/defaults/iteration-nudge.md +5 -0
- package/dist/template/.opencode/dcp-prompts/defaults/system.md +44 -0
- package/dist/template/.opencode/dcp-prompts/defaults/turn-nudge.md +8 -0
- package/dist/template/.opencode/dcp-prompts/overrides/compress-message.md +71 -0
- package/dist/template/.opencode/dcp.jsonc +100 -81
- package/dist/template/.opencode/memory.db +0 -0
- package/dist/template/.opencode/memory.db-shm +0 -0
- package/dist/template/.opencode/memory.db-wal +0 -0
- package/dist/template/.opencode/opencode.json +2 -1
- package/dist/template/.opencode/plugin/sdk/copilot/chat/openai-compatible-chat-language-model.ts +12 -12
- package/dist/template/.opencode/skill/compaction/SKILL.md +69 -2
- package/package.json +1 -1
- package/dist/template/.opencode/memory.db.corrupt.1774156595899 +0 -0
- package/dist/template/.opencode/memory.db.corrupt.1774156595899-shm +0 -0
- package/dist/template/.opencode/memory.db.corrupt.1774156595899-wal +0 -0
package/dist/index.js
CHANGED

package/dist/template/.opencode/context/git-context.md
ADDED
@@ -0,0 +1,32 @@
+---
+purpose: Git spatial awareness injected into all agent prompts
+updated: 2026-04-01
+---
+
+# Git Context
+
+## Auto-Context (start of every task)
+
+Before starting work, establish your position in the repo. Run once per session and cache results:
+
+```bash
+git branch --show-current   # What branch am I on?
+git status --short          # What's modified/staged? (cap at 20 lines)
+git log --oneline -5        # What happened recently?
+```
+
+## When to Refresh
+
+Re-run git context after:
+
+- Any commit you make
+- Switching branches
+- Pulling, merging, or rebasing
+- Stashing or unstashing
+
+## How to Use
+
+- **Commit messages**: Use branch name for scope (e.g., `feat(auth):` if on `feature/auth`)
+- **Branch decisions**: Check if you're on main before creating a feature branch
+- **Conflict awareness**: `git status` shows modified files — avoid editing files with uncommitted changes from another task
+- **Recent history**: `git log` shows what was just done — prevents duplicate work
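The "run once per session and cache results" guidance above can be sketched as a small helper. This is an illustrative sketch only, not code from the package: `getGitContext` and `parseStatusLines` are hypothetical names, and a Node.js runtime is assumed.

```typescript
import { execSync } from "node:child_process";

// Pure helper: cap `git status --short` output at 20 lines, per the prompt above.
function parseStatusLines(raw: string, cap = 20): string[] {
  return raw.split("\n").filter((line) => line.trim() !== "").slice(0, cap);
}

interface GitContext {
  branch: string;
  status: string[];
  recentLog: string[];
}

let cached: GitContext | null = null;

// Run the three git commands once, then serve cached results until a refresh
// is requested (after commits, branch switches, pulls, merges, or stashes).
function getGitContext(refresh = false): GitContext {
  if (cached !== null && !refresh) return cached;
  const run = (cmd: string) => execSync(cmd, { encoding: "utf8" }).trim();
  cached = {
    branch: run("git branch --show-current"),
    status: parseStatusLines(run("git status --short")),
    recentLog: run("git log --oneline -5").split("\n").filter(Boolean),
  };
  return cached;
}
```

Calling `getGitContext(true)` after any of the "When to Refresh" events keeps the cache honest without re-running git on every step.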
package/dist/template/.opencode/dcp-prompts/defaults/README.md
ADDED
@@ -0,0 +1,40 @@
+# DCP Prompt Defaults
+
+This directory stores the DCP prompts.
+Each prompt file here should contain plain text only (no XML wrappers).
+
+## Creating Overrides
+
+1. Copy a prompt file from this directory into an overrides directory using the same filename.
+2. Edit the copied file using plain text.
+3. Restart OpenCode.
+
+To reset an override, delete the matching file from your overrides directory.
+
+Do not edit the default prompt files directly; they are for reference only. Only files in the overrides directory are used.
+
+Override precedence (highest first):
+1. `.opencode/dcp-prompts/overrides/` (project)
+2. `$OPENCODE_CONFIG_DIR/dcp-prompts/overrides/` (config dir)
+3. `~/.config/opencode/dcp-prompts/overrides/` (global)
+
+## Prompt Files
+
+- `system.md`
+  - Purpose: Core system-level DCP instruction block.
+  - Runtime use: Injected into the model system prompt on every request.
+- `compress-range.md`
+  - Purpose: Range-mode compress tool instructions and summary constraints.
+  - Runtime use: Registered as the range-mode compress tool description.
+- `compress-message.md`
+  - Purpose: Message-mode compress tool instructions and summary constraints.
+  - Runtime use: Registered as the message-mode compress tool description.
+- `context-limit-nudge.md`
+  - Purpose: High-priority nudge when context is over the max threshold.
+  - Runtime use: Injected when context usage is beyond configured max limits.
+- `turn-nudge.md`
+  - Purpose: Nudge to compress closed ranges at turn boundaries.
+  - Runtime use: Injected when context is between min and max limits at a new user turn.
+- `iteration-nudge.md`
+  - Purpose: Nudge after many iterations without user input.
+  - Runtime use: Injected when the iteration threshold is crossed.
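The override precedence list above amounts to a first-match file lookup. A minimal sketch of that resolution, assuming a hypothetical `resolvePromptPath` helper with an injected `exists` check (this is not the plugin's actual loader):

```typescript
// Resolve a prompt override using the documented precedence:
// project > config dir > global. Returns null when no override exists,
// in which case the bundled default prompt would be used.
function resolvePromptPath(
  name: string,
  exists: (path: string) => boolean,
  configDir: string,
): string | null {
  const candidates = [
    `.opencode/dcp-prompts/overrides/${name}`,          // 1. project
    `${configDir}/dcp-prompts/overrides/${name}`,       // 2. config dir
    `~/.config/opencode/dcp-prompts/overrides/${name}`, // 3. global
  ];
  return candidates.find(exists) ?? null;
}
```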
package/dist/template/.opencode/dcp-prompts/defaults/compress-message.md
ADDED
@@ -0,0 +1,42 @@
+Collapse selected individual messages in the conversation into detailed summaries.
+
+THE SUMMARY
+Your summary must be EXHAUSTIVE. Capture file paths, function signatures, decisions made, constraints discovered, key findings, tool outcomes, and user intent details that matter... EVERYTHING that preserves the value of the selected message after the raw message is removed.
+
+USER INTENT FIDELITY
+When a selected message contains user intent, preserve that intent with extra care. Do not change scope, constraints, priorities, acceptance criteria, or requested outcomes.
+Directly quote short user instructions when that best preserves exact meaning.
+
+Yet be LEAN. Strip away the noise: failed attempts that led nowhere, verbose tool output, and repetition. What remains should be pure signal - golden nuggets of detail that preserve full understanding with zero ambiguity.
+
+MESSAGE IDS
+You specify individual raw messages by ID using the injected IDs visible in the conversation:
+
+- `mNNNN` IDs identify raw messages
+
+Each message has an ID inside XML metadata tags like `<dcp-message-id priority="high">m0007</dcp-message-id>`.
+The ID tag appears at the end of the message it belongs to — each ID covers all the content above it back to the previous ID.
+Treat these tags as message metadata only, not as content to summarize. Use only the inner `mNNNN` value as the `messageId`.
+The `priority` attribute indicates relative context cost. Prefer higher-priority closed messages before lower-priority ones.
+Messages marked as `<dcp-message-id>BLOCKED</dcp-message-id>` cannot be compressed.
+
+Rules:
+
+- Pick each `messageId` directly from injected IDs visible in context.
+- Only use raw message IDs of the form `mNNNN`.
+- Ignore XML attributes such as `priority` when copying the ID; use only the inner `mNNNN` value.
+- Do NOT use compressed block IDs like `bN`.
+- Do not invent IDs. Use only IDs that are present in context.
+- Do not target prior compressed blocks or block summaries.
+
+BATCHING
+Select MANY messages in a single tool call when they are independently safe to compress.
+Each entry should summarize exactly one message, and the tool can receive as many entries as needed in one batch.
+When several messages are equally safe to compress, prefer higher-priority messages first.
+
+Because each message is compressed independently:
+
+- Do not describe ranges
+- Do not use start/end boundaries
+- Do not use compressed block placeholders
+- Do not reference prior compressed blocks with `(bN)`
package/dist/template/.opencode/dcp-prompts/defaults/compress-range.md
ADDED
@@ -0,0 +1,59 @@
+Collapse a range in the conversation into a detailed summary.
+
+THE SUMMARY
+Your summary must be EXHAUSTIVE. Capture file paths, function signatures, decisions made, constraints discovered, key findings... EVERYTHING that maintains context integrity. This is not a brief note - it is an authoritative record so faithful that the original conversation adds no value.
+
+USER INTENT FIDELITY
+When the compressed range includes user messages, preserve the user's intent with extra care. Do not change scope, constraints, priorities, acceptance criteria, or requested outcomes.
+Directly quote user messages when they are short enough to include safely. Direct quotes are preferred when they best preserve exact meaning.
+
+Yet be LEAN. Strip away the noise: failed attempts that led nowhere, verbose tool outputs, back-and-forth exploration. What remains should be pure signal - golden nuggets of detail that preserve full understanding with zero ambiguity.
+
+COMPRESSED BLOCK PLACEHOLDERS
+When the selected range includes previously compressed blocks, use this exact placeholder format when referencing one:
+
+- `(bN)`
+
+Compressed block sections in context are clearly marked with a header:
+
+- `[Compressed conversation section]`
+
+Compressed block IDs always use the `bN` form (never `mNNNN`) and are represented in the same XML metadata tag format.
+
+Rules:
+
+- Include every required block placeholder exactly once.
+- Do not invent placeholders for blocks outside the selected range.
+- Treat `(bN)` placeholders as RESERVED TOKENS. Do not emit `(bN)` text anywhere except intentional placeholders.
+- If you need to mention a block in prose, use plain text like `compressed bN` (not as a placeholder).
+- Preflight check before finalizing: the set of `(bN)` placeholders in your summary must exactly match the required set, with no duplicates.
+
+These placeholders are semantic references. They will be replaced with the full stored compressed block content when the tool processes your output.
+
+FLOW PRESERVATION WITH PLACEHOLDERS
+When you use compressed block placeholders, write the surrounding summary text so it still reads correctly AFTER placeholder expansion.
+
+- Treat each placeholder as a stand-in for a full conversation segment, not as a short label.
+- Ensure transitions before and after each placeholder preserve chronology and causality.
+- Do not write text that depends on the placeholder staying literal (for example, "as noted in `(b2)`").
+- Your final meaning must be coherent once each placeholder is replaced with its full compressed block content.
+
+BOUNDARY IDS
+You specify boundaries by ID using the injected IDs visible in the conversation:
+
+- `mNNNN` IDs identify raw messages
+- `bN` IDs identify previously compressed blocks
+
+Each message has an ID inside XML metadata tags like `<dcp-message-id>...</dcp-message-id>`.
+The ID tag appears at the end of the message it belongs to — each ID covers all the content above it back to the previous ID.
+Treat these tags as boundary metadata only, not as tool result content.
+
+Rules:
+
+- Pick `startId` and `endId` directly from injected IDs in context.
+- IDs must exist in the current visible context.
+- `startId` must appear before `endId`.
+- Do not invent IDs. Use only IDs that are present in context.
+
+BATCHING
+When multiple independent ranges are ready and their boundaries do not overlap, include all of them as separate entries in the `content` array of a single tool call. Each entry should have its own `startId`, `endId`, and `summary`.
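The preflight-check rule in the prompt above (every required `(bN)` placeholder present exactly once, no extras, no duplicates) can be sketched as a small validator. The helper name is hypothetical; this is not the plugin's actual implementation:

```typescript
// Validate that the (bN) placeholders in a summary exactly match the
// required set: no duplicates, no missing placeholders, no inventions.
function placeholdersValid(summary: string, required: string[]): boolean {
  const found = [...summary.matchAll(/\(b\d+\)/g)].map((m) => m[0]);
  if (found.length !== new Set(found).size) return false; // duplicates
  const req = new Set(required);
  if (found.length !== req.size) return false;            // missing or extra
  return found.every((p) => req.has(p));                  // exact match
}
```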
package/dist/template/.opencode/dcp-prompts/defaults/context-limit-nudge.md
ADDED
@@ -0,0 +1,21 @@
+CRITICAL WARNING: MAX CONTEXT LIMIT REACHED
+
+You are at or beyond the configured max context threshold. This is an emergency context-recovery moment.
+
+You MUST use the `compress` tool now. Do not continue normal exploration until compression is handled.
+
+If you are in the middle of a critical atomic operation, finish that atomic step first, then compress immediately.
+
+RANGE STRATEGY (MANDATORY)
+Prioritize one large, closed, high-yield compression range first.
+This overrides the normal preference for many small compressions.
+Only split into multiple compressions if one large range would reduce summary quality or make boundary selection unsafe.
+
+RANGE SELECTION
+Start from older, resolved history and capture as much stale context as safely possible in one pass.
+Avoid the newest active working slice unless it is clearly closed.
+Use visible injected boundary IDs for compression (`mNNNN` for messages, `bN` for compressed blocks), and ensure `startId` appears before `endId`.
+
+SUMMARY REQUIREMENTS
+Your summary must cover all essential details from the selected range so work can continue without reopening raw messages.
+If the compressed range includes user messages, preserve user intent exactly. Prefer direct quotes for short user messages to avoid semantic drift.
package/dist/template/.opencode/dcp-prompts/defaults/iteration-nudge.md
ADDED
@@ -0,0 +1,5 @@
+You've been iterating for a while after the last user message.
+
+If there is a closed portion that is unlikely to be referenced immediately (for example, finished research before implementation), use the compress tool on it now.
+
+Prefer multiple short, closed ranges over one large range when several independent slices are ready.
package/dist/template/.opencode/dcp-prompts/defaults/system.md
ADDED
@@ -0,0 +1,44 @@
+You operate in a context-constrained environment. Manage context continuously to avoid buildup and preserve retrieval quality. Efficient context management is paramount for your agentic performance.
+
+The ONLY tool you have for context management is `compress`. It replaces older conversation content with technical summaries you produce.
+
+`<dcp-message-id>` and `<dcp-system-reminder>` tags are environment-injected metadata. Do not output them.
+
+THE PHILOSOPHY OF COMPRESS
+`compress` transforms conversation content into dense, high-fidelity summaries. This is not cleanup - it is crystallization. Your summary becomes the authoritative record of what transpired.
+
+Think of compression as phase transitions: raw exploration becomes refined understanding. The original context served its purpose; your summary now carries that understanding forward.
+
+OPERATING STANCE
+Prefer short, closed, summary-safe compressions.
+When multiple independent stale sections exist, prefer several focused compressions (in parallel when possible) over one broad compression.
+
+Use `compress` as steady housekeeping while you work.
+
+CADENCE, SIGNALS, AND LATENCY
+
+- No fixed threshold mandates compression
+- Prioritize closedness and independence over raw size
+- Prefer smaller, regular compressions over infrequent massive compressions for better latency and summary quality
+- When multiple independent stale sections are ready, batch compressions in parallel
+
+COMPRESS WHEN
+
+A section is genuinely closed and the raw conversation has served its purpose:
+
+- Research concluded and findings are clear
+- Implementation finished and verified
+- Exploration exhausted and patterns understood
+- Dead-end noise can be discarded without waiting for a whole chapter to close
+
+DO NOT COMPRESS IF
+
+- Raw context is still relevant and needed for edits or precise references
+- The target content is still actively in progress
+- You may need exact code, error messages, or file contents in the immediate next steps
+
+Before compressing, ask: _"Is this section closed enough to become summary-only right now?"_
+
+Evaluate conversation signal-to-noise REGULARLY. Use `compress` deliberately with quality-first summaries. Prioritize stale content intelligently to maintain a high-signal context window that supports your agency.
+
+It is your responsibility to keep a sharp, high-quality context window for optimal performance.
package/dist/template/.opencode/dcp-prompts/defaults/turn-nudge.md
ADDED
@@ -0,0 +1,8 @@
+Evaluate the conversation for compressible ranges.
+
+If any range is cleanly closed and unlikely to be needed again, use the compress tool on it.
+If direction has shifted, compress earlier ranges that are now less relevant.
+
+Prefer small, closed-range compressions over one broad compression.
+The goal is to filter noise and distill key information so context accumulation stays under control.
+Keep active context uncompressed.
package/dist/template/.opencode/dcp-prompts/overrides/compress-message.md
ADDED
@@ -0,0 +1,71 @@
+Collapse selected individual messages in the conversation into detailed summaries.
+
+THE SUMMARY
+Your summary must be EXHAUSTIVE. Use the following 9-section structure to ensure nothing is lost.
+Capture file paths, function signatures, decisions made, constraints discovered, key findings, tool outcomes, and user intent details that matter... EVERYTHING that preserves the value of the selected message after the raw message is removed.
+
+STRUCTURED SUMMARY FORMAT
+When summarizing each message, organize findings into these sections (omit empty sections):
+
+1. **Primary Request** — What the user actually wants. Quote exact instructions when short.
+2. **Key Technical Concepts** — Decisions made, constraints discovered, architecture choices, patterns identified.
+3. **Files and Code** — Exact file paths with line numbers (`path/file.ts:42-67`). Function signatures. Code snippets critical to understanding.
+4. **Errors and Fixes** — What broke, root cause, what fixed it. Include error messages verbatim when diagnostic.
+5. **Problem Solving** — Approaches tried (including failures). What worked and why.
+6. **User Messages** — Preserve exact user intent. Directly quote short user instructions when that best preserves meaning.
+7. **Pending Tasks** — Incomplete work with current status.
+8. **Current Work** — What's actively happening right now.
+9. **Next Step** — What should happen next (if clear from context).
+
+USER INTENT FIDELITY
+When a selected message contains user intent, preserve that intent with extra care. Do not change scope, constraints, priorities, acceptance criteria, or requested outcomes.
+Directly quote short user instructions when that best preserves exact meaning.
+
+Yet be LEAN. Strip away the noise: failed attempts that led nowhere, verbose tool output, and repetition. What remains should be pure signal — golden nuggets of detail that preserve full understanding with zero ambiguity.
+
+MESSAGE IDS
+You specify individual raw messages by ID using the injected IDs visible in the conversation:
+
+- `mNNNN` IDs identify raw messages
+
+Each message has an ID inside XML metadata tags like `<dcp-message-id priority="high">m0007</dcp-message-id>`.
+The ID tag appears at the end of the message it belongs to — each ID covers all the content above it back to the previous ID.
+Treat these tags as message metadata only, not as content to summarize. Use only the inner `mNNNN` value as the `messageId`.
+The `priority` attribute indicates relative context cost. Prefer higher-priority closed messages before lower-priority ones.
+Messages marked as `<dcp-message-id>BLOCKED</dcp-message-id>` cannot be compressed.
+
+Rules:
+
+- Pick each `messageId` directly from injected IDs visible in context.
+- Only use raw message IDs of the form `mNNNN`.
+- Ignore XML attributes such as `priority` when copying the ID; use only the inner `mNNNN` value.
+- Do NOT use compressed block IDs like `bN`.
+- Do not invent IDs. Use only IDs that are present in context.
+- Do not target prior compressed blocks or block summaries.
+
+BATCHING
+Select MANY messages in a single tool call when they are independently safe to compress.
+Each entry should summarize exactly one message, and the tool can receive as many entries as needed in one batch.
+When several messages are equally safe to compress, prefer higher-priority messages first.
+
+Because each message is compressed independently:
+
+- Do not describe ranges
+- Do not use start/end boundaries
+- Do not use compressed block placeholders
+- Do not reference prior compressed blocks with `(bN)`
+
+THE FORMAT OF COMPRESS
+
+```
+{
+  topic: string,       // Short label (3-5 words) for the overall batch
+  content: [           // One or more messages to compress independently
+    {
+      messageId: string, // Raw message ID only: mNNNN (ignore metadata attributes like priority)
+      topic: string,     // Short label (3-5 words) for this one message summary
+      summary: string    // Complete technical summary replacing that one message (use 9-section format above)
+    }
+  ]
+}
+```
package/dist/template/.opencode/dcp.jsonc
CHANGED
@@ -1,83 +1,102 @@
 {
-  (lines 2-82 of the previous configuration were removed; their content is not shown in this diff view)
+  "$schema": "https://raw.githubusercontent.com/Opencode-DCP/opencode-dynamic-context-pruning/master/dcp.schema.json",
+  "enabled": true,
+  "debug": false,
+  // "off" | "minimal" | "detailed" — keep minimal for low-noise dev flow
+  "pruneNotification": "minimal",
+  // "chat" (in-conversation) or "toast" (system notification)
+  "pruneNotificationType": "toast",
+  // Slash commands: /dcp context, /dcp stats, /dcp sweep, /dcp compress, /dcp decompress, /dcp recompress
+  "commands": {
+    "enabled": true,
+    // Additional tools to protect from /dcp sweep (supports glob wildcards)
+    "protectedTools": ["observation", "memory-*"]
+  },
+  // Manual mode: disables autonomous context management
+  "manualMode": {
+    "enabled": false,
+    "automaticStrategies": true
+  },
+  // Protect recent tool outputs from pruning
+  "turnProtection": {
+    "enabled": false,
+    "turns": 4
+  },
+  // Glob patterns for files that should never be auto-pruned
+  // Keep tight: broad patterns reduce DCP effectiveness
+  "protectedFilePatterns": [
+    "**/.env*",
+    "**/AGENTS.md",
+    "**/.opencode/**",
+    "**/.beads/**",
+    "**/package.json",
+    "**/tsconfig.json"
+  ],
+  // Unified context compression tool (v3.1.0)
+  "compress": {
+    // "range" (stable) compresses spans into block summaries
+    // "message" (experimental) compresses individual raw messages
+    "mode": "message",
+    // "allow" (no prompt) | "ask" (prompt) | "deny" (tool not registered)
+    "permission": "allow",
+    "showCompression": false,
+    // v3.1.0: active summary tokens extend effective maxContextLimit
+    "summaryBuffer": true,
+    // Soft upper threshold: above this, strong compression nudges fire
+    // Accepts number or "X%" of model context window
+    "maxContextLimit": "80%",
+    // Per-model override for maxContextLimit (takes priority over global)
+    // Aligned to claude-opus-4.6 (216k context, 64k output) as primary build agent
+    "modelMaxLimits": {
+      "github-copilot/claude-opus-4.6": 192000,
+      "github-copilot/claude-opus-4.5": 192000,
+      "github-copilot/claude-sonnet-4.6": 192000,
+      "github-copilot/claude-sonnet-4.5": 192000,
+      "github-copilot/claude-sonnet-4": 192000,
+      "github-copilot/claude-haiku-4.5": 172000
+    },
+    // Soft lower threshold: below this, turn/iteration reminders are off
+    "minContextLimit": "35%",
+    // Per-model override for minContextLimit (takes priority over global)
+    "modelMinLimits": {
+      "github-copilot/claude-opus-4.6": "30%",
+      "github-copilot/claude-opus-4.5": "35%",
+      "github-copilot/claude-sonnet-4.6": "35%",
+      "github-copilot/claude-sonnet-4.5": "35%",
+      "github-copilot/claude-sonnet-4": "35%",
+      "github-copilot/claude-haiku-4.5": "25%"
+    },
+    // How often context-limit nudge fires above maxContextLimit (1 = every fetch)
+    "nudgeFrequency": 5,
+    // Messages since last user message before adding compression reminders
+    "iterationNudgeThreshold": 15,
+    // "strong" = more likely to compress, "soft" = less likely
+    "nudgeForce": "soft",
+    // Keep user messages compressible to avoid permanent context growth
+    "protectUserMessages": false,
+    // Auto-protected by DCP: task, skill, todowrite, todoread, compress, batch, plan_enter, plan_exit, write, edit
+    // Only list ADDITIONAL tools whose outputs should be appended to compression summaries
+    "protectedTools": ["observation", "memory-*", "tilth_*"]
+  },
+  // Experimental features
+  "experimental": {
+    // Allow DCP processing in subagent sessions (default: false)
+    "allowSubAgents": false,
+    // Enable user-editable prompt overrides under dcp-prompts directories
+    "customPrompts": true
+  },
+  // Automatic pruning strategies (zero LLM cost)
+  "strategies": {
+    // Removes duplicate tool calls (same tool + same arguments), keeps most recent
+    "deduplication": {
+      "enabled": true,
+      "protectedTools": []
+    },
+    // Prunes inputs from errored tool calls after N turns (error messages preserved)
+    "purgeErrors": {
+      "enabled": true,
+      "turns": 4,
+      "protectedTools": []
+    }
+  }
 }
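The config comments above say `maxContextLimit` and `minContextLimit` accept either a token count or an `"X%"` string relative to the model's context window. A minimal sketch of that resolution, using a hypothetical `resolveLimit` name (not the plugin's actual code):

```typescript
// Resolve a "number or X%" limit against a model's context window size.
// "80%" of a 216000-token window resolves to 172800 tokens; plain numbers
// pass through unchanged.
function resolveLimit(limit: number | string, contextWindow: number): number {
  if (typeof limit === "number") return limit;
  const pct = limit.trim().match(/^(\d+(?:\.\d+)?)%$/);
  if (!pct) throw new Error(`Unrecognized limit: ${limit}`);
  return Math.floor((Number(pct[1]) / 100) * contextWindow);
}
```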
package/dist/template/.opencode/memory.db
CHANGED
Binary file

package/dist/template/.opencode/memory.db-shm
CHANGED
Binary file

package/dist/template/.opencode/memory.db-wal
CHANGED
Binary file
package/dist/template/.opencode/opencode.json
CHANGED
@@ -83,7 +83,8 @@
   ".opencode/memory/project/tech-stack.md",
   ".opencode/memory/project/project.md",
   ".opencode/memory/project/roadmap.md",
-  ".opencode/memory/project/state.md"
+  ".opencode/memory/project/state.md",
+  ".opencode/context/git-context.md"
 ],
 "keybinds": {
   "leader": "`",
package/dist/template/.opencode/plugin/sdk/copilot/chat/openai-compatible-chat-language-model.ts
CHANGED
@@ -9,22 +9,22 @@ import {
   type SharedV2ProviderMetadata,
 } from "@ai-sdk/provider";
 import {
-  type FetchFunction,
-  type ParseResult,
-  type ResponseHandler,
   combineHeaders,
   createEventSourceResponseHandler,
   createJsonErrorResponseHandler,
   createJsonResponseHandler,
+  type FetchFunction,
   generateId,
   isParsableJson,
+  type ParseResult,
   parseProviderOptions,
   postJsonToApi,
+  type ResponseHandler,
 } from "@ai-sdk/provider-utils";
 import { z } from "zod/v4";
 import {
-  type ProviderErrorStructure,
   defaultOpenAICompatibleErrorStructure,
+  type ProviderErrorStructure,
 } from "../openai-compatible-error.js";
 import { convertToOpenAICompatibleChatMessages } from "./convert-to-openai-compatible-chat-messages.js";
 import { getResponseMetadata } from "./get-response-metadata.js";

@@ -482,15 +482,15 @@ export class OpenAICompatibleChatLanguageModel implements LanguageModelV2 {
       const delta = choice.delta;

       // Capture reasoning_opaque for Copilot multi-turn reasoning
+      // Claude models can send multiple reasoning_opaque delta chunks
+      // during streaming (especially with parallel tool calls).
+      // Concatenate them instead of throwing — same pattern as content/arguments.
+      // See: https://github.com/anomalyco/opencode/issues/17011
       if (delta.reasoning_opaque) {
-
-
-
-
-            "Multiple reasoning_opaque values received in a single response. Only one thinking part per response is supported.",
-          });
-        }
-        reasoningOpaque = delta.reasoning_opaque;
+        reasoningOpaque =
+          reasoningOpaque != null
+            ? reasoningOpaque + delta.reasoning_opaque
+            : delta.reasoning_opaque;
       }

       // enqueue reasoning before text deltas (Copilot uses reasoning_text):
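The streaming change above replaces a throw with accumulation. Extracted into a standalone sketch (the `Delta` shape is simplified and the function name is hypothetical; the real model class handles many more fields), the accumulation logic is:

```typescript
// Simplified stand-in for the streamed chunk shape used in the diff above.
type Delta = { reasoning_opaque?: string };

// Concatenate reasoning_opaque across delta chunks instead of rejecting
// the second one, mirroring how content/arguments deltas are accumulated.
function accumulateReasoningOpaque(deltas: Delta[]): string | undefined {
  let reasoningOpaque: string | undefined;
  for (const delta of deltas) {
    if (delta.reasoning_opaque) {
      reasoningOpaque =
        reasoningOpaque != null
          ? reasoningOpaque + delta.reasoning_opaque
          : delta.reasoning_opaque;
    }
  }
  return reasoningOpaque;
}
```

With this shape, a stream that sends opaque reasoning in two chunks yields one joined value instead of an error mid-stream.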
package/dist/template/.opencode/skill/compaction/SKILL.md
CHANGED
@@ -19,7 +19,6 @@ dependencies: []
 
 - Short sessions with low context usage where no compaction is needed.
 
-
 ## Overview
 
 **Compaction = Summarization + Preservation + Continuity**

@@ -53,7 +52,6 @@ Pay attention to these signals:
 
 Compress completed conversation phases into dense summaries. This is the primary DCP instrument in the installed beta.
 
-
 Compress completed conversation phases into dense summaries.
 
 ```

@@ -267,6 +265,75 @@ This project uses `@tarquinen/opencode-dcp` for always-on context management (in
 - **DCP plugin**: Context budget rules, tool guidance, prunable-tools list, nudges (always present via system prompt)
 - **Compaction plugin** (`.opencode/plugin/compaction.ts`): Session continuity, beads state, handoff recovery, post-compaction protocol (fires during compaction events only)
 
+**Custom prompts (enabled):**
+
+DCP custom prompts are enabled (`experimental.customPrompts: true`). Override files live at `.opencode/dcp-prompts/overrides/`:
+
+- `compress-message.md` — 9-section structured summary format (Primary Request, Key Technical Concepts, Files/Code, Errors/Fixes, Problem Solving, User Messages, Pending Tasks, Current Work, Next Step)
+
+Override precedence: project (`.opencode/dcp-prompts/overrides/`) > config dir > global (`~/.config/opencode/dcp-prompts/overrides/`).
+
+## Post-Compaction Restoration Protocol
+
+**Critical:** After any compaction (server-side or DCP compress), you lose raw file content and active context. Immediately restore:
+
+### Step 1: Re-read Active Files (max 5)
+
+Re-read the files you were actively editing before compaction. Prioritize by recency — most recently modified first. Cap at 5 files, use offset/limit to read only the sections you need (~200 lines each).
+
+```
+After compaction, if you were editing src/auth.ts:
+read({ filePath: "src/auth.ts", offset: 30, limit: 50 })
+```
+
+**Why:** The Write/Edit tool requires a prior Read in the same session (`FileTime.assert()`). Without re-reading, your next edit attempt will fail.
+
+### Step 2: Restore Active Skill Context
+
+If you were using a skill before compaction, re-load it:
+
+```typescript
+skill({ name: "<active-skill>" });
+```
+
+Skills are prompt-injected — compaction drops them. Re-load to restore the workflow.
+
+### Step 3: Check Todo State
+
+Review your TodoWrite list to re-establish task tracking:
+
+```
+- What was in_progress?
+- What was pending?
+- What was completed?
+```
+
+### Step 4: Verify Git Position
+
+```bash
+git branch --show-current
+git status --short
+```
+
+Confirms you're on the right branch and shows any uncommitted changes.
+
+### Step 5: Scan Memory for Context
+
+If the compaction was heavy (>50% reduction), check memory for recent observations:
+
+```typescript
+memory-search({ query: "<current task keywords>", limit: 3 });
+```
+
+### Why This Protocol Exists
+
+Compaction strips raw file content, tool outputs, and skill prompts. Without restoration:
+
+- File edits fail (`FileTime.assert` — no prior read)
+- Line-number precision is lost (you're guessing, not citing)
+- Skill workflows are inactive (no prompt injection)
+- Task tracking is disconnected (todos lost in compressed summary)
+
 ## Anti-Patterns
 
 ### ❌ Premature Compaction
package/package.json
CHANGED

Binary file

Binary file

File without changes