@kiritoko1029/opencode-add-oai-instruction 1.0.0 → 1.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +77 -0
- package/README.zh-CN.md +78 -0
- package/index.mjs +21 -4
- package/package.json +1 -1
package/README.md
ADDED
@@ -0,0 +1,77 @@
+# opencode-add-oai-instruction
+
+[简体中文](README.zh-CN.md)
+
+[OpenCode](https://github.com/anomalyco/opencode) plugin that injects an `instructions` string into OpenAI-compatible API requests.
+
+The instructions are loaded from a Markdown file based on the request's `model`.
+
+It also removes `max_output_tokens` / `max_tokens` for `gpt*` models (useful for some OpenAI-compatible providers that reject these fields).
+
+## Installation
+
+```bash
+cd ~/.config/opencode
+bun add github:kiritoko1029/opencode-add-oai-instruction
+```
+
+## Usage
+
+Add to your `opencode.json`:
+
+```jsonc
+{
+  "plugins": [
+    "@kiritoko1029/opencode-add-oai-instruction" // Add this line
+  ],
+  "provider": {
+    "my-provider": {
+      "npm": "@ai-sdk/openai",
+      "api": "https://your-api.com/v1",
+      "env": ["MY_API_KEY"],
+      "options": {
+        "addInstruction": true // Enable the plugin
+      },
+      "models": {
+        "gpt-5.2": {}
+      }
+    }
+  }
+}
+```
+
+## Prompt Files
+
+This package ships with built-in prompt Markdown files, so you typically don't need to create anything.
+
+Prompt source: https://github.com/openai/codex/tree/main/codex-rs/core
+
+### Custom prompts (optional)
+
+To use your own prompt (to override a built-in one, or to add a prompt for another model), put a Markdown file in the same directory as this plugin's `index.mjs`.
+
+File name format:
+
+- `<model>_prompt.md`
+
+Examples:
+
+- `model="gpt-5.2-codex"` → `gpt-5.2-codex_prompt.md`
+- `model="gpt-5.2"` → `gpt-5.2_prompt.md` (or `gpt_5_2_prompt.md`)
+
+Notes:
+
+- Matching is case-insensitive.
+- If no matching file is found, the plugin falls back to a short built-in `instructions` (unless the request already includes a non-empty `instructions`).
+
+## How It Works
+
+1. Hooks into `chat.params` to read `provider.options.addInstruction`.
+2. Intercepts `globalThis.fetch` and, when enabled:
+   - removes `max_output_tokens` / `max_tokens` when `model` contains `gpt`;
+   - loads `instructions` from the model-specific `_prompt.md` file.
+3. Caches the loaded instructions per model (in-memory).
+
+## License
+
+MIT
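The `<model>_prompt.md` lookup described above (case-insensitive, with an underscore-normalized variant) can be sketched roughly as follows. This is an illustrative sketch only: `toSafeFilenameKey` and `candidatePromptFiles` are assumed names, and the actual plugin may resolve files differently.

```javascript
// Hypothetical sketch of the prompt-file name resolution described above.
function toSafeFilenameKey(value) {
  // Lowercase, then collapse any run of non-alphanumerics to "_",
  // so "GPT-5.2" and "gpt-5.2" both map to "gpt_5_2".
  return value.toLowerCase().replace(/[^a-z0-9]+/g, "_");
}

function candidatePromptFiles(model) {
  // Try the literal (lowercased) model name first, then the normalized key.
  return [
    `${model.toLowerCase()}_prompt.md`,
    `${toSafeFilenameKey(model)}_prompt.md`,
  ];
}
```

For `model="gpt-5.2"`, this yields `gpt-5.2_prompt.md` and `gpt_5_2_prompt.md`, matching the two accepted file names in the example above.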
package/README.zh-CN.md
ADDED
@@ -0,0 +1,78 @@
+# opencode-add-oai-instruction
+
+[English](README.md) | [简体中文](README.zh-CN.md)
+
+[OpenCode](https://github.com/anomalyco/opencode) plugin: injects an `instructions` field into the request body of OpenAI-compatible API calls.
+
+The `instructions` content is loaded from the Markdown file matching the request's `model`.
+
+In addition, for `gpt*` models (any `model` containing `gpt`), the plugin removes `max_output_tokens` / `max_tokens`, for compatibility with OpenAI-compatible providers that don't support these fields.
+
+## Installation
+
+```bash
+cd ~/.config/opencode
+bun add github:kiritoko1029/opencode-add-oai-instruction
+```
+
+## Usage
+
+Add to your `opencode.json`:
+
+```jsonc
+{
+  "plugins": [
+    "@kiritoko1029/opencode-add-oai-instruction" // Add this line
+  ],
+  "provider": {
+    "my-provider": {
+      "npm": "@ai-sdk/openai",
+      "api": "https://your-api.com/v1",
+      "env": ["MY_API_KEY"],
+      "options": {
+        "addInstruction": true // Enable the plugin
+      },
+      "models": {
+        "gpt-5.2": {}
+      }
+    }
+  }
+}
+```
+
+## Prompt Files
+
+This package ships with built-in prompt Markdown files (users typically don't need to create any).
+
+Prompt source: https://github.com/openai/codex/tree/main/codex-rs/core
+
+### Custom prompts (optional)
+
+To use your own prompt (to override a built-in one, or to add a prompt for another model), place or replace the corresponding Markdown file in the plugin directory (next to `index.mjs`).
+
+File name format:
+
+- `<model>_prompt.md`
+
+Examples:
+
+- `model="gpt-5.2-codex"` → `gpt-5.2-codex_prompt.md`
+- `model="gpt-5.2"` → `gpt-5.2_prompt.md` (or `gpt_5_2_prompt.md`)
+
+Notes:
+
+- Matching is case-insensitive.
+- If no matching file is found, the plugin falls back to a short built-in `instructions` (unless the request already includes a non-empty `instructions`).
+- Loaded prompts are cached in memory per `model` (in-memory cache).
+
+## How It Works
+
+1. Reads `provider.options.addInstruction` in the `chat.params` hook.
+2. Intercepts `globalThis.fetch` and, when the plugin is enabled:
+   - removes `max_output_tokens` / `max_tokens` when `model` contains `gpt`;
+   - loads the matching `_prompt.md` file for the `model` and writes its content into the request body's `instructions`.
+3. Caches the prompt content per `model` to reduce file reads.
+
+## License
+
+MIT
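The `globalThis.fetch` interception both READMEs describe can be sketched like this. This is a minimal sketch under assumptions: the real `index.mjs` also checks the `addInstruction` flag, handles `Request` inputs, and injects `instructions`; `rewriteBody` is an illustrative name, not the plugin's actual helper.

```javascript
// Rewrite a JSON request body per the rules above; return the original
// string untouched if it isn't valid JSON.
function rewriteBody(body) {
  let parsed;
  try {
    parsed = JSON.parse(body);
  } catch {
    return body; // not JSON: pass through
  }
  const model = typeof parsed.model === "string" ? parsed.model : "";
  if (model.toLowerCase().includes("gpt")) {
    // Some OpenAI-compatible providers reject these fields.
    delete parsed.max_output_tokens;
    delete parsed.max_tokens;
  }
  return JSON.stringify(parsed);
}

// Patch fetch so every outgoing string body passes through rewriteBody.
const originalFetch = globalThis.fetch;
globalThis.fetch = function (input, init) {
  if (init && typeof init.body === "string") {
    init = { ...init, body: rewriteBody(init.body) };
  }
  return originalFetch.call(this, input, init);
};
```

Keeping a reference to the original `fetch` and delegating to it is what lets the plugin stay transparent to non-JSON and non-`gpt` requests.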
package/index.mjs
CHANGED
@@ -1,5 +1,7 @@
 /**
- * OpenCode plugin to
+ * OpenCode plugin to inject model-specific instructions from Markdown files.
+ *
+ * It also removes max_output_tokens/max_tokens for gpt* models.
  *
  * Configuration in opencode.json:
  *
@@ -29,6 +31,14 @@ let addInstruction = false;
 /** @type {Map<string, string | null>} */
 const instructionCache = new Map();
 
+const DEFAULT_INSTRUCTIONS = `You are a helpful coding assistant.
+- Be concise and direct.
+- Follow the user's requirements; ask one clarifying question if needed.
+- Don't guess or fabricate details.
+- Prefer minimal, safe changes.
+- Don't commit or push unless explicitly asked.
+- Reply in plain text.`;
+
 function toSafeFilenameKey(value) {
   return value
     .toLowerCase()
@@ -111,9 +121,16 @@ if (typeof originalFetch === "function") {
       }
 
       // Use a different prompt depending on the model
-      const
-      if (
-        parsed.instructions =
+      const fileInstructions = await loadInstructionForModel(model);
+      if (typeof fileInstructions === "string" && fileInstructions.trim() !== "") {
+        parsed.instructions = fileInstructions;
+      } else {
+        const existingInstructions = parsed.instructions;
+        const hasExistingInstructions =
+          typeof existingInstructions === "string" && existingInstructions.trim() !== "";
+        if (!hasExistingInstructions) {
+          parsed.instructions = DEFAULT_INSTRUCTIONS;
+        }
      }
 
      return originalFetch.call(this, input, { ...init, body: JSON.stringify(parsed) });
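The new hunk establishes a precedence order for `instructions`: a model-specific prompt file wins, then any non-empty `instructions` already on the request, then the new `DEFAULT_INSTRUCTIONS` constant. Extracted as a pure function it looks roughly like this (`resolveInstructions` is an illustrative name, and `DEFAULT_INSTRUCTIONS` is abbreviated here, not the plugin's full text):

```javascript
// Abbreviated stand-in for the plugin's DEFAULT_INSTRUCTIONS constant.
const DEFAULT_INSTRUCTIONS = "You are a helpful coding assistant.";

// Fallback order from the diff: file prompt > existing non-empty
// instructions > built-in default.
function resolveInstructions(fileInstructions, existingInstructions) {
  if (typeof fileInstructions === "string" && fileInstructions.trim() !== "") {
    return fileInstructions;
  }
  if (typeof existingInstructions === "string" && existingInstructions.trim() !== "") {
    return existingInstructions;
  }
  return DEFAULT_INSTRUCTIONS;
}
```

Note that in the diff the default is only applied when the request's own `instructions` is missing or blank, so callers who set their own prompt are never overridden by the fallback.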
package/package.json
CHANGED