oh-my-opencode 0.3.0 → 0.3.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.ko.md +21 -7
- package/README.md +29 -14
- package/dist/hooks/session-recovery/storage.d.ts +1 -0
- package/dist/index.js +389 -72
- package/package.json +1 -1
package/README.ko.md
CHANGED
@@ -106,7 +106,19 @@ opencode --version # Must be 1.0.132 or higher
 cat ~/.config/opencode/opencode.json # The plugin array must contain "oh-my-opencode"
 ```

-
+### Step 4: Configure credentials
+
+Once oh my opencode has been added to the plugin list, use tmux to proceed with the following:
+
+1. Open a shell using tmux
+2. opencode auth login > Provider Anthropic > Login method Claude Pro/Max > instruct the user to authenticate on the OAuth screen
+3. Instruct the user to complete OpenAI / Google authentication using VibeProxy on macOS, or CLIProxyAPI otherwise
+
+### ⚠️ Caution
+
+**Do not create or modify the `oh-my-opencode.json` configuration file unless the user explicitly requests it.**
+
+The plugin works perfectly with its defaults even without a configuration file. The configuration file is only needed when the user wants customization.

 </details>

@@ -326,7 +338,7 @@ Schema autocompletion is supported:

 Options supported by each agent: `model`, `temperature`, `top_p`, `prompt`, `tools`, `disable`, `description`, `mode`, `color`, `permission`.

-Or `disabled_agents
+Or you can disable them using `disabled_agents` in ~/.config/opencode/oh-my-opencode.json or .opencode/oh-my-opencode.json:

 ```json
 {
@@ -338,7 +350,9 @@ Schema autocompletion is supported:

 ### MCPs

-
+Context7 and Exa MCP are supported by default.
+
+If you don't want these, you can disable them using `disabled_mcps` in ~/.config/opencode/oh-my-opencode.json or .opencode/oh-my-opencode.json:

 ```json
 {
@@ -346,13 +360,13 @@ Schema autocompletion is supported:
 }
 ```

-For more details, see [OpenCode MCP Servers](https://opencode.ai/docs/mcp-servers).
-
 ### LSP

-
+OpenCode provides LSP tools for analysis.
+Oh My OpenCode provides the LSP refactoring tools (rename, code actions).
+All LSP configurations and custom settings supported by OpenCode (anything set in opencode.json) are supported as-is, and additional settings just for Oh My OpenCode can be configured as shown below.

-Through the `lsp` option, LSP servers
+LSP servers can be additionally configured via the `lsp` option in ~/.config/opencode/oh-my-opencode.json or .opencode/oh-my-opencode.json:

 ```json
 {
package/README.md
CHANGED
@@ -28,11 +28,12 @@ Oh My OpenCode
 oMoMoMoMoMo···

 If you work in tech, you likely appreciated [Claude Code](https://www.claude.com/product/claude-code).
-If you are a hacker, you will fucking
+If you are a hacker, you will fucking fall in love with [OpenCode](https://github.com/sst/opencode).

-
-
-
+Do you remember the first time you switched from Windows to Linux? The excitement of configuring everything just the way you wanted?
+In an era where that romance feels lost, OpenCode will be that project for you.
+If you love coding and computers, OpenCode will feel exactly like that moment you first touched Linux after years of Windows.
+Even if you don't, invest a little time. Boost your skills and productivity by multipliers.

 ## TL;DR

@@ -42,7 +43,7 @@ To you, OpenCode will feel like the paradigm shift from Windows to Linux. Not yo
 - For OpenAI models, I recommend the [OpenCode-OpenAI-Codex-Auth plugin](https://github.com/numman-ali/opencode-openai-codex-auth).
 - For Gemini and GPT, use [VibeProxy (macOS Only)](https://github.com/automazeio/vibeproxy) or [CLIProxyAPI](https://github.com/router-for-me/CLIProxyAPI).
 - **Truth be told, the rest is optional. Just install it. Once set up, no further learning is required. It simply evolves.**
-- **Paste this document into Claude Code, OpenCode, or Cursor, instruct it to install, and ask for assistance if needed. It
+- **Paste this document into Claude Code, OpenCode, or Cursor, instruct it to install, and ask for assistance if needed. It just installs.**
 - If you are an LLM Agent, install the required plugins and dependencies, then run `opencode auth login` via tmux to assist the user with configuration.

 ## Installation

@@ -104,7 +105,19 @@ opencode --version # Should be >= 1.0.132
 cat ~/.config/opencode/opencode.json # Should contain "oh-my-opencode" in plugin array
 ```

-
+### Step 4: Configure Authentication
+
+Once `oh-my-opencode` is added to the plugin list, use `tmux` to proceed with the following:
+
+1. Open a shell using `tmux`.
+2. Run `opencode auth login` > Select Provider `Anthropic` > Select Login method `Claude Pro/Max` > Instruct the user to authenticate via the OAuth screen.
+3. Recommend the setup: [VibeProxy](https://github.com/automazeio/vibeproxy) for macOS users, or [CLIProxyAPI](https://github.com/router-for-me/CLIProxyAPI) for others.
+
+### ⚠️ Warning
+
+**Do NOT create or modify the `oh-my-opencode.json` configuration file unless the user explicitly requests it.**
+
+The plugin works perfectly with default settings without any configuration file. The configuration file is only needed when the user wants customization.

 </details>

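For orientation, the verification in Step 3 above expects the plugin array of `~/.config/opencode/opencode.json` to list the package. A minimal sketch only; the rest of that file is not shown in this diff and will vary per setup:

```json
{
  "plugin": ["oh-my-opencode"]
}
```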
@@ -115,7 +128,7 @@ OpenCode is limitlessly extensible and customizable. Zero screen flicker.
 You can mix and orchestrate models to your exact specifications.
 It is feature-rich. It is elegant. It handles the terminal without hesitation. It is high-performance.

-But here is the catch: the learning curve is steep. There is a lot to master.
+But here is the catch: the learning curve is steep. There is a lot to master. And your time is expensive.

 Inspired by [AmpCode](https://ampcode.com) and [Claude Code](https://code.claude.com/docs/en/overview), I have implemented their features here—often with superior execution.
 Because this is OpenCode.

@@ -323,7 +336,7 @@ Override built-in agent settings:

 Each agent supports: `model`, `temperature`, `top_p`, `prompt`, `tools`, `disable`, `description`, `mode`, `color`, `permission`.

-Or disable
+Or you can disable them using `disabled_agents` in `~/.config/opencode/oh-my-opencode.json` or `.opencode/oh-my-opencode.json`:

 ```json
 {
@@ -335,7 +348,9 @@ Available agents: `oracle`, `librarian`, `explore`, `frontend-ui-ux-engineer`, `

 ### MCPs

-
+By default, Context7 and Exa MCP are supported.
+
+If you don't want these, you can disable them using `disabled_mcps` in `~/.config/opencode/oh-my-opencode.json` or `.opencode/oh-my-opencode.json`:

 ```json
 {
@@ -343,13 +358,13 @@ Disable built-in MCPs:
 }
 ```

-See [OpenCode MCP Servers](https://opencode.ai/docs/mcp-servers) for more.
-
 ### LSP

-
+OpenCode provides LSP tools for analysis.
+Oh My OpenCode provides LSP tools for refactoring (rename, code actions).
+It supports all LSP configurations and custom settings supported by OpenCode (those configured in opencode.json), and you can also configure additional settings specifically for Oh My OpenCode as shown below.

-
+You can configure additional LSP servers via the `lsp` option in `~/.config/opencode/oh-my-opencode.json` or `.opencode/oh-my-opencode.json`:

 ```json
 {
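The JSON examples in the agents, MCPs, and LSP hunks above are cut off by the diff context. As a rough, unofficial sketch of how the three options documented in this release might combine in `~/.config/opencode/oh-my-opencode.json`. The value shapes, the lowercase `context7`/`exa` identifiers, and the `command`/`extensions` fields of the LSP entry are assumptions, not taken from the package:

```json
{
  "disabled_agents": ["explore"],
  "disabled_mcps": ["context7", "exa"],
  "lsp": {
    "my-custom-lsp": {
      "command": ["my-lsp-server", "--stdio"],
      "extensions": [".foo"]
    }
  }
}
```

Per the paths named in the README, the same keys can also live in a project-local `.opencode/oh-my-opencode.json`.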
@@ -389,7 +404,7 @@ If this sounds arrogant and you have a superior solution, send a PR. You are wel

 As of now, I have no affiliation with any of the projects or models mentioned here. This plugin is purely based on personal experimentation and preference.

-I constructed 99% of this project using OpenCode. I focused on functional verification
+I constructed 99% of this project using OpenCode. I focused on functional verification, and honestly, I don't know how to write proper TypeScript. **But I personally reviewed and comprehensively rewrote this documentation, so you can rely on it with confidence.**
 ## Warnings

 - If you are on [1.0.132](https://github.com/sst/opencode/releases/tag/v1.0.132) or lower, OpenCode has a bug that might break config.
package/dist/hooks/session-recovery/storage.d.ts
CHANGED

@@ -7,6 +7,7 @@ export declare function hasContent(part: StoredPart): boolean;
 export declare function messageHasContent(messageID: string): boolean;
 export declare function injectTextPart(sessionID: string, messageID: string, text: string): boolean;
 export declare function findEmptyMessages(sessionID: string): string[];
+export declare function findEmptyMessageByIndex(sessionID: string, targetIndex: number): string | null;
 export declare function findFirstEmptyMessage(sessionID: string): string | null;
 export declare function findMessagesWithThinkingBlocks(sessionID: string): string[];
 export declare function findMessagesWithOrphanThinking(sessionID: string): string[];
package/dist/index.js
CHANGED
@@ -633,6 +633,7 @@ function createTodoContinuationEnforcer(ctx) {
   const remindedSessions = new Set;
   const interruptedSessions = new Set;
   const errorSessions = new Set;
+  const pendingTimers = new Map;
   return async ({ event }) => {
     const props = event.properties;
     if (event.type === "session.error") {
@@ -642,6 +643,11 @@ function createTodoContinuationEnforcer(ctx) {
       if (detectInterrupt(props?.error)) {
         interruptedSessions.add(sessionID);
       }
+      const timer = pendingTimers.get(sessionID);
+      if (timer) {
+        clearTimeout(timer);
+        pendingTimers.delete(sessionID);
+      }
     }
     return;
   }
@@ -649,61 +655,73 @@ function createTodoContinuationEnforcer(ctx) {
     const sessionID = props?.sessionID;
     if (!sessionID)
       return;
-    }
-    todos =
+    const existingTimer = pendingTimers.get(sessionID);
+    if (existingTimer) {
+      clearTimeout(existingTimer);
+    }
+    const timer = setTimeout(async () => {
+      pendingTimers.delete(sessionID);
+      const shouldBypass = interruptedSessions.has(sessionID) || errorSessions.has(sessionID);
+      interruptedSessions.delete(sessionID);
+      errorSessions.delete(sessionID);
+      if (shouldBypass) {
+        return;
+      }
+      if (remindedSessions.has(sessionID)) {
+        return;
+      }
+      let todos = [];
+      try {
+        const response = await ctx.client.session.todo({
+          path: { id: sessionID }
+        });
+        todos = response.data ?? response;
+      } catch {
+        return;
+      }
+      if (!todos || todos.length === 0) {
+        return;
+      }
+      const incomplete = todos.filter((t) => t.status !== "completed" && t.status !== "cancelled");
+      if (incomplete.length === 0) {
+        return;
+      }
+      remindedSessions.add(sessionID);
+      if (interruptedSessions.has(sessionID) || errorSessions.has(sessionID)) {
+        remindedSessions.delete(sessionID);
+        return;
+      }
+      try {
+        await ctx.client.session.prompt({
+          path: { id: sessionID },
+          body: {
+            parts: [
+              {
+                type: "text",
+                text: `${CONTINUATION_PROMPT}

 [Status: ${todos.length - incomplete.length}/${todos.length} completed, ${incomplete.length} remaining]`
+              }
+            ]
+          },
+          query: { directory: ctx.directory }
+        });
+      } catch {
+        remindedSessions.delete(sessionID);
+      }
+    }, 200);
+    pendingTimers.set(sessionID, timer);
   }
   if (event.type === "message.updated") {
     const info = props?.info;
     const sessionID = info?.sessionID;
     if (sessionID && info?.role === "user") {
       remindedSessions.delete(sessionID);
+      const timer = pendingTimers.get(sessionID);
+      if (timer) {
+        clearTimeout(timer);
+        pendingTimers.delete(sessionID);
+      }
     }
   }
   if (event.type === "session.deleted") {
@@ -712,6 +730,11 @@ function createTodoContinuationEnforcer(ctx) {
       remindedSessions.delete(sessionInfo.id);
       interruptedSessions.delete(sessionInfo.id);
       errorSessions.delete(sessionInfo.id);
+      const timer = pendingTimers.get(sessionInfo.id);
+      if (timer) {
+        clearTimeout(timer);
+        pendingTimers.delete(sessionInfo.id);
+      }
     }
   }
 };
@@ -723,16 +746,8 @@ var CONTEXT_WARNING_THRESHOLD = 0.7;
 var CONTEXT_REMINDER = `[SYSTEM REMINDER - 1M Context Window]

 You are using Anthropic Claude with 1M context window.
-
-
-RECOMMENDATIONS:
-- Consider compacting the session if available
-- Break complex tasks into smaller, focused sessions
-- Be concise in your responses
-- Avoid redundant file reads
-
-You have access to 1M tokens - use them wisely. Do NOT rush or skip tasks.
-Complete your work thoroughly despite the context usage warning.`;
+You have plenty of context remaining - do NOT rush or skip tasks.
+Complete your work thoroughly and methodically.`;
 function createContextWindowMonitorHook(ctx) {
   const remindedSessions = new Set;
   const toolExecuteAfter = async (input, output) => {
@@ -854,7 +869,13 @@ function readMessages(sessionID) {
       continue;
     }
   }
-  return messages.sort((a, b) =>
+  return messages.sort((a, b) => {
+    const aTime = a.time?.created ?? 0;
+    const bTime = b.time?.created ?? 0;
+    if (aTime !== bTime)
+      return aTime - bTime;
+    return a.id.localeCompare(b.id);
+  });
 }
 function readParts(messageID) {
   const partDir = join2(PART_STORAGE, messageID);
@@ -918,19 +939,26 @@ function injectTextPart(sessionID, messageID, text) {
 function findEmptyMessages(sessionID) {
   const messages = readMessages(sessionID);
   const emptyIds = [];
-  for (
-    const msg = messages[i];
+  for (const msg of messages) {
     if (msg.role !== "assistant")
       continue;
-    const isLastMessage = i === messages.length - 1;
-    if (isLastMessage)
-      continue;
     if (!messageHasContent(msg.id)) {
       emptyIds.push(msg.id);
     }
   }
   return emptyIds;
 }
+function findEmptyMessageByIndex(sessionID, targetIndex) {
+  const messages = readMessages(sessionID);
+  if (targetIndex < 0 || targetIndex >= messages.length)
+    return null;
+  const targetMsg = messages[targetIndex];
+  if (targetMsg.role !== "assistant")
+    return null;
+  if (messageHasContent(targetMsg.id))
+    return null;
+  return targetMsg.id;
+}
 function findMessagesWithThinkingBlocks(sessionID) {
   const messages = readMessages(sessionID);
   const result = [];
@@ -1025,6 +1053,11 @@ function getErrorMessage(error) {
   const errorObj = error;
   return (errorObj.data?.message || errorObj.error?.message || errorObj.message || "").toLowerCase();
 }
+function extractMessageIndex(error) {
+  const message = getErrorMessage(error);
+  const match = message.match(/messages\.(\d+)/);
+  return match ? parseInt(match[1], 10) : null;
+}
 function detectErrorType(error) {
   const message = getErrorMessage(error);
   if (message.includes("tool_use") && message.includes("tool_result")) {
@@ -1091,14 +1124,21 @@ async function recoverThinkingDisabledViolation(_client, sessionID, _failedAssis
   }
   return anySuccess;
 }
-async function recoverEmptyContentMessage(_client, sessionID, failedAssistantMsg, _directory) {
-  const
+async function recoverEmptyContentMessage(_client, sessionID, failedAssistantMsg, _directory, error) {
+  const targetIndex = extractMessageIndex(error);
+  const failedID = failedAssistantMsg.info?.id;
+  if (targetIndex !== null) {
+    const targetMessageID = findEmptyMessageByIndex(sessionID, targetIndex);
+    if (targetMessageID) {
+      return injectTextPart(sessionID, targetMessageID, "(interrupted)");
+    }
   }
+  if (failedID) {
+    if (injectTextPart(sessionID, failedID, "(interrupted)")) {
+      return true;
+    }
+  }
+  const emptyMessageIDs = findEmptyMessages(sessionID);
   let anySuccess = false;
   for (const messageID of emptyMessageIDs) {
     if (injectTextPart(sessionID, messageID, "(interrupted)")) {
@@ -1171,10 +1211,11 @@ function createSessionRecoveryHook(ctx) {
       } else if (errorType === "thinking_disabled_violation") {
         success = await recoverThinkingDisabledViolation(ctx.client, sessionID, failedMsg);
       } else if (errorType === "empty_content_message") {
-        success = await recoverEmptyContentMessage(ctx.client, sessionID, failedMsg, ctx.directory);
+        success = await recoverEmptyContentMessage(ctx.client, sessionID, failedMsg, ctx.directory, info.error);
       }
       return success;
-    } catch {
+    } catch (err) {
+      console.error("[session-recovery] Recovery failed:", err);
       return false;
     } finally {
       processingErrors.delete(assistantMsgID);
@@ -1811,6 +1852,280 @@ function createEmptyTaskResponseDetectorHook(_ctx) {
   }
   };
 }
+// src/hooks/anthropic-auto-compact/parser.ts
+var TOKEN_LIMIT_PATTERNS = [
+  /(\d+)\s*tokens?\s*>\s*(\d+)\s*maximum/i,
+  /prompt.*?(\d+).*?tokens.*?exceeds.*?(\d+)/i,
+  /(\d+).*?tokens.*?limit.*?(\d+)/i,
+  /context.*?length.*?(\d+).*?maximum.*?(\d+)/i,
+  /max.*?context.*?(\d+).*?but.*?(\d+)/i
+];
+var TOKEN_LIMIT_KEYWORDS = [
+  "prompt is too long",
+  "is too long",
+  "context_length_exceeded",
+  "max_tokens",
+  "token limit",
+  "context length",
+  "too many tokens"
+];
+function extractTokensFromMessage(message) {
+  for (const pattern of TOKEN_LIMIT_PATTERNS) {
+    const match = message.match(pattern);
+    if (match) {
+      const num1 = parseInt(match[1], 10);
+      const num2 = parseInt(match[2], 10);
+      return num1 > num2 ? { current: num1, max: num2 } : { current: num2, max: num1 };
+    }
+  }
+  return null;
+}
+function isTokenLimitError(text) {
+  const lower = text.toLowerCase();
+  return TOKEN_LIMIT_KEYWORDS.some((kw) => lower.includes(kw.toLowerCase()));
+}
+function parseAnthropicTokenLimitError(err) {
+  if (typeof err === "string") {
+    if (isTokenLimitError(err)) {
+      const tokens = extractTokensFromMessage(err);
+      return {
+        currentTokens: tokens?.current ?? 0,
+        maxTokens: tokens?.max ?? 0,
+        errorType: "token_limit_exceeded_string"
+      };
+    }
+    return null;
+  }
+  if (!err || typeof err !== "object")
+    return null;
+  const errObj = err;
+  const dataObj = errObj.data;
+  const responseBody = dataObj?.responseBody;
+  const errorMessage = errObj.message;
+  const errorData = errObj.error;
+  const nestedError = errorData?.error;
+  const textSources = [];
+  if (typeof responseBody === "string")
+    textSources.push(responseBody);
+  if (typeof errorMessage === "string")
+    textSources.push(errorMessage);
+  if (typeof errorData?.message === "string")
+    textSources.push(errorData.message);
+  if (typeof errObj.body === "string")
+    textSources.push(errObj.body);
+  if (typeof errObj.details === "string")
+    textSources.push(errObj.details);
+  if (typeof errObj.reason === "string")
+    textSources.push(errObj.reason);
+  if (typeof errObj.description === "string")
+    textSources.push(errObj.description);
+  if (typeof nestedError?.message === "string")
+    textSources.push(nestedError.message);
+  if (typeof dataObj?.message === "string")
+    textSources.push(dataObj.message);
+  if (typeof dataObj?.error === "string")
+    textSources.push(dataObj.error);
+  if (textSources.length === 0) {
+    try {
+      const jsonStr = JSON.stringify(errObj);
+      if (isTokenLimitError(jsonStr)) {
+        textSources.push(jsonStr);
+      }
+    } catch {}
+  }
+  const combinedText = textSources.join(" ");
+  if (!isTokenLimitError(combinedText))
+    return null;
+  if (typeof responseBody === "string") {
+    try {
+      const jsonPatterns = [
+        /data:\s*(\{[\s\S]*?\})\s*$/m,
+        /(\{"type"\s*:\s*"error"[\s\S]*?\})/,
+        /(\{[\s\S]*?"error"[\s\S]*?\})/
+      ];
+      for (const pattern of jsonPatterns) {
+        const dataMatch = responseBody.match(pattern);
+        if (dataMatch) {
+          try {
+            const jsonData = JSON.parse(dataMatch[1]);
+            const message = jsonData.error?.message || "";
+            const tokens = extractTokensFromMessage(message);
+            if (tokens) {
+              return {
+                currentTokens: tokens.current,
+                maxTokens: tokens.max,
+                requestId: jsonData.request_id,
+                errorType: jsonData.error?.type || "token_limit_exceeded"
+              };
+            }
+          } catch {}
+        }
+      }
+      const bedrockJson = JSON.parse(responseBody);
+      if (typeof bedrockJson.message === "string" && isTokenLimitError(bedrockJson.message)) {
+        return {
+          currentTokens: 0,
+          maxTokens: 0,
+          errorType: "bedrock_input_too_long"
+        };
+      }
+    } catch {}
+  }
+  for (const text of textSources) {
+    const tokens = extractTokensFromMessage(text);
+    if (tokens) {
+      return {
+        currentTokens: tokens.current,
+        maxTokens: tokens.max,
+        errorType: "token_limit_exceeded"
+      };
+    }
+  }
+  if (isTokenLimitError(combinedText)) {
+    return {
+      currentTokens: 0,
+      maxTokens: 0,
+      errorType: "token_limit_exceeded_unknown"
+    };
+  }
+  return null;
+}
+
+// src/hooks/anthropic-auto-compact/executor.ts
+async function getLastAssistant(sessionID, client, directory) {
+  try {
+    const resp = await client.session.messages({
+      path: { id: sessionID },
+      query: { directory }
+    });
+    const data = resp.data;
+    if (!Array.isArray(data))
+      return null;
+    const reversed = [...data].reverse();
+    const last = reversed.find((m) => {
+      const msg = m;
+      const info = msg.info;
+      return info?.role === "assistant";
+    });
+    if (!last)
+      return null;
+    return last.info ?? null;
+  } catch {
+    return null;
+  }
+}
+async function executeCompact(sessionID, msg, autoCompactState, client, directory) {
+  try {
+    const providerID = msg.providerID;
+    const modelID = msg.modelID;
+    if (providerID && modelID) {
+      await client.session.summarize({
+        path: { id: sessionID },
+        body: { providerID, modelID },
+        query: { directory }
+      });
+      setTimeout(async () => {
+        try {
+          await client.tui.submitPrompt({ query: { directory } });
+        } catch {}
+      }, 500);
+    }
+    autoCompactState.pendingCompact.delete(sessionID);
+    autoCompactState.errorDataBySession.delete(sessionID);
+  } catch {}
+}
+
+// src/hooks/anthropic-auto-compact/index.ts
+function createAutoCompactState() {
+  return {
+    pendingCompact: new Set,
+    errorDataBySession: new Map
+  };
+}
+function createAnthropicAutoCompactHook(ctx) {
+  const autoCompactState = createAutoCompactState();
+  const eventHandler = async ({ event }) => {
+    const props = event.properties;
+    if (event.type === "session.deleted") {
+      const sessionInfo = props?.info;
+      if (sessionInfo?.id) {
+        autoCompactState.pendingCompact.delete(sessionInfo.id);
+        autoCompactState.errorDataBySession.delete(sessionInfo.id);
+      }
+      return;
+    }
+    if (event.type === "session.error") {
+      const sessionID = props?.sessionID;
+      if (!sessionID)
+        return;
+      const parsed = parseAnthropicTokenLimitError(props?.error);
+      if (parsed) {
+        autoCompactState.pendingCompact.add(sessionID);
+        autoCompactState.errorDataBySession.set(sessionID, parsed);
+      }
+      return;
+    }
+    if (event.type === "message.updated") {
+      const info = props?.info;
+      const sessionID = info?.sessionID;
+      if (sessionID && info?.role === "assistant" && info.error) {
+        const parsed = parseAnthropicTokenLimitError(info.error);
+        if (parsed) {
+          parsed.providerID = info.providerID;
+          parsed.modelID = info.modelID;
+          autoCompactState.pendingCompact.add(sessionID);
+          autoCompactState.errorDataBySession.set(sessionID, parsed);
+        }
+      }
+      return;
+    }
+    if (event.type === "session.idle") {
+      const sessionID = props?.sessionID;
+      if (!sessionID)
+        return;
+      if (!autoCompactState.pendingCompact.has(sessionID))
+        return;
+      const errorData = autoCompactState.errorDataBySession.get(sessionID);
+      if (errorData?.providerID && errorData?.modelID) {
+        await ctx.client.tui.showToast({
+          body: {
+            title: "Auto Compact",
+            message: "Token limit exceeded. Summarizing session...",
+            variant: "warning",
+            duration: 3000
+          }
+        }).catch(() => {});
+        await executeCompact(sessionID, { providerID: errorData.providerID, modelID: errorData.modelID }, autoCompactState, ctx.client, ctx.directory);
+        return;
+      }
+      const lastAssistant = await getLastAssistant(sessionID, ctx.client, ctx.directory);
+      if (!lastAssistant) {
+        autoCompactState.pendingCompact.delete(sessionID);
+        return;
+      }
+      if (lastAssistant.summary === true) {
+        autoCompactState.pendingCompact.delete(sessionID);
+        return;
+      }
+      if (!lastAssistant.modelID || !lastAssistant.providerID) {
+        autoCompactState.pendingCompact.delete(sessionID);
+        return;
+      }
+      await ctx.client.tui.showToast({
+        body: {
+          title: "Auto Compact",
+          message: "Token limit exceeded. Summarizing session...",
+          variant: "warning",
+          duration: 3000
+        }
+      }).catch(() => {});
+      await executeCompact(sessionID, lastAssistant, autoCompactState, ctx.client, ctx.directory);
+    }
+  };
+  return {
+    event: eventHandler
+  };
+}
 // src/hooks/think-mode/detector.ts
 var ENGLISH_PATTERNS = [/\bultrathink\b/i, /\bthink\b/i];
 var MULTILINGUAL_KEYWORDS = [
@@ -19041,6 +19356,7 @@ var OhMyOpenCodePlugin = async (ctx) => {
   const emptyTaskResponseDetector = createEmptyTaskResponseDetectorHook(ctx);
   const thinkMode = createThinkModeHook();
   const claudeCodeHooks = createClaudeCodeHooksHook(ctx, {});
+  const anthropicAutoCompact = createAnthropicAutoCompactHook(ctx);
   updateTerminalTitle({ sessionId: "main" });
   return {
     tool: builtinTools,
@@ -19089,6 +19405,7 @@ var OhMyOpenCodePlugin = async (ctx) => {
       await contextWindowMonitor.event(input);
       await directoryAgentsInjector.event(input);
       await thinkMode.event(input);
+      await anthropicAutoCompact.event(input);
       const { event } = input;
       const props = event.properties;
       if (event.type === "session.created") {