@shareai-lab/kode 1.0.81 → 1.0.82
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +201 -661
- package/README.md +23 -2
- package/README.zh-CN.md +18 -2
- package/package.json +6 -3
- package/src/components/ModelSelector.tsx +2 -1
- package/src/query.ts +11 -3
- package/src/services/claude.ts +166 -52
- package/src/tools/URLFetcherTool/URLFetcherTool.tsx +178 -0
- package/src/tools/URLFetcherTool/cache.ts +55 -0
- package/src/tools/URLFetcherTool/htmlToMarkdown.ts +55 -0
- package/src/tools/URLFetcherTool/prompt.ts +17 -0
- package/src/tools/WebSearchTool/WebSearchTool.tsx +103 -0
- package/src/tools/WebSearchTool/prompt.ts +13 -0
- package/src/tools/WebSearchTool/searchProviders.ts +66 -0
- package/src/tools.ts +4 -0
- package/src/utils/PersistentShell.ts +196 -17
- package/src/utils/debugLogger.ts +1 -0
- package/src/utils/log.ts +5 -6
package/README.md
CHANGED

@@ -1,11 +1,30 @@
 # Kode - AI Assistant for Your Terminal
 
 [](https://www.npmjs.com/package/@shareai-lab/kode)
-
+[](https://opensource.org/licenses/Apache-2.0)
 [](https://agents.md)
 
 [中文文档](README.zh-CN.md) | [Contributing](CONTRIBUTING.md) | [Documentation](docs/)
 
+## 🎉 Big Announcement: We're Now Apache 2.0 Licensed!
+
+**Great news for the developer community!** In our commitment to democratizing AI agent technology and fostering a vibrant ecosystem of innovation, we're thrilled to announce that Kode has transitioned from AGPLv3 to the **Apache 2.0 license**.
+
+### What This Means for You:
+- ✅ **Complete Freedom**: Use Kode in any project - personal, commercial, or enterprise
+- ✅ **Build Without Barriers**: Create proprietary solutions without open-sourcing requirements
+- ✅ **Simple Attribution**: Just maintain copyright notices and license info
+- ✅ **Join a Movement**: Be part of accelerating the world's transition to AI-powered development
+
+This change reflects our belief that the future of software development is collaborative, open, and augmented by AI. By removing licensing barriers, we're empowering developers worldwide to build the next generation of AI-assisted tools and workflows. Let's build the future together! 🚀
+
+## 📢 Update Log
+
+**2025-08-29**: We've added Windows support! All Windows users can now run Kode using Git Bash, Unix subsystems, or WSL (Windows Subsystem for Linux) on their computers.
+
+
+<img width="606" height="303" alt="image" src="https://github.com/user-attachments/assets/6cf50553-aacd-4241-a579-6e935b6c62b5" />
+
 ## 🤝 AGENTS.md Standard Support
 
 **Kode proudly supports the [AGENTS.md standard protocol](https://agents.md) initiated by OpenAI** - a simple, open format for guiding programming agents that's used by 20k+ open source projects.

@@ -27,6 +46,8 @@ Kode is a powerful AI assistant that lives in your terminal. It can understand y
 >
 > **📊 Model Performance**: For optimal performance, we recommend using newer, more capable models designed for autonomous task completion. Avoid older Q&A-focused models like GPT-4o or Gemini 2.5 Pro, which are optimized for answering questions rather than sustained independent task execution. Choose models specifically trained for agentic workflows and extended reasoning capabilities.
 
+<img width="600" height="577" alt="image" src="https://github.com/user-attachments/assets/8b46a39d-1ab6-4669-9391-14ccc6c5234c" />
+
 ## Features
 
 ### Core Capabilities

@@ -390,7 +411,7 @@ We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) f
 
 ## License
 
-
+Apache 2.0 License - see [LICENSE](LICENSE) for details.
 
 ## Thanks
 
package/README.zh-CN.md
CHANGED

@@ -1,10 +1,26 @@
 # Kode - 终端 AI 助手
 
 [](https://www.npmjs.com/package/@shareai-lab/kode)
-
+[](https://opensource.org/licenses/Apache-2.0)
 
 [English](README.md) | [贡献指南](CONTRIBUTING.md) | [文档](docs/)
 
+## 🎉 重磅消息:我们已切换至 Apache 2.0 开源协议!
+
+**开发者社区的福音来了!** 为了推动 AI 智能体技术的民主化进程,构建充满活力的创新生态,我们激动地宣布:Kode 已正式从 AGPLv3 协议升级为 **Apache 2.0 开源协议**。
+
+### 这对您意味着什么:
+- ✅ **完全自由**:在任何项目中使用 Kode - 无论是个人项目、商业产品还是企业方案
+- ✅ **无障碍创新**:构建专有解决方案,无需开源您的代码
+- ✅ **极简要求**:仅需保留版权声明和许可信息
+- ✅ **共创未来**:与全球开发者一起,加速世界向 AI 驱动生产的转型
+
+让我们携手共建未来!🚀
+
+## 📢 更新日志
+
+**2025-08-29**:我们添加了 Windows 电脑的运行支持!所有的 Windows 用户现在可以使用你电脑上的 Git Bash、Unix 子系统或 WSL(Windows Subsystem for Linux)来运行 Kode。
+
 Kode 是一个强大的 AI 助手,运行在你的终端中。它能理解你的代码库、编辑文件、运行命令,并为你处理整个开发工作流。
 
 > **⚠️ 安全提示**:Kode 默认以 YOLO 模式运行(等同于 Claude 的 `--dangerously-skip-permissions` 标志),跳过所有权限检查以获得最大生产力。YOLO 模式仅建议在安全可信的环境中处理非重要项目时使用。如果您正在处理重要文件或使用能力存疑的模型,我们强烈建议使用 `kode --safe` 启用权限检查和手动审批所有操作。

@@ -301,7 +317,7 @@ bun test
 
 ## 许可证
 
-
+Apache 2.0 许可证 - 详见 [LICENSE](LICENSE)。
 
 ## 支持
 
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@shareai-lab/kode",
-  "version": "1.0.81",
+  "version": "1.0.82",
   "bin": {
     "kode": "cli.js",
     "kwa": "cli.js",

@@ -11,7 +11,7 @@
   },
   "main": "cli.js",
   "author": "ShareAI-lab <ai-lab@foxmail.com>",
-  "license": "
+  "license": "Apache-2.0",
   "description": "AI-powered terminal assistant that understands your codebase, edits files, runs commands, and automates development workflows.",
   "homepage": "https://github.com/shareAI-lab/kode",
   "repository": {

@@ -76,12 +76,15 @@
     "lru-cache": "^11.1.0",
     "marked": "^15.0.12",
     "nanoid": "^5.1.5",
+    "node-fetch": "^3.3.2",
+    "node-html-parser": "^7.0.1",
     "openai": "^4.104.0",
     "react": "18.3.1",
     "semver": "^7.7.2",
     "shell-quote": "^1.8.3",
     "spawn-rx": "^5.1.2",
     "tsx": "^4.20.3",
+    "turndown": "^7.2.1",
     "undici": "^7.11.0",
     "wrap-ansi": "^9.0.0",
     "zod": "^3.25.76",

@@ -113,4 +116,4 @@
     "terminal",
     "command-line"
   ]
-}
+}
package/src/components/ModelSelector.tsx
CHANGED

@@ -480,6 +480,7 @@ export function ModelSelector({
         'x-api-key': apiKey,
         'anthropic-version': '2023-06-01',
         'Content-Type': 'application/json',
+        'Authorization': `Bearer ${apiKey}`
       },
     })
 

@@ -1846,7 +1847,7 @@ export function ModelSelector({
     }
 
     // Use the verifyApiKey function which uses the official Anthropic SDK
-    const isValid = await verifyApiKey(apiKey, testBaseURL)
+    const isValid = await verifyApiKey(apiKey, testBaseURL, selectedProvider)
 
     if (isValid) {
       return {
package/src/query.ts
CHANGED

@@ -206,10 +206,10 @@ export async function* query(
           typeof lastUserMessage.message.content === 'string'
             ? reminders + lastUserMessage.message.content
             : [
-                { type: 'text', text: reminders },
                 ...(Array.isArray(lastUserMessage.message.content)
                   ? lastUserMessage.message.content
                   : []),
+                { type: 'text', text: reminders },
               ],
       },
     }

@@ -567,8 +567,16 @@ async function* checkPermissionsAndCallTool(
   // (surprisingly, the model is not great at generating valid input)
   const isValidInput = tool.inputSchema.safeParse(input)
   if (!isValidInput.success) {
+    // Create a more helpful error message for common cases
+    let errorMessage = `InputValidationError: ${isValidInput.error.message}`
+
+    // Special handling for the "View" tool (FileReadTool) being called with empty parameters
+    if (tool.name === 'View' && Object.keys(input).length === 0) {
+      errorMessage = `Error: The View tool requires a 'file_path' parameter to specify which file to read. Please provide the absolute path to the file you want to view. For example: {"file_path": "/path/to/file.txt"}`
+    }
+
     logEvent('tengu_tool_use_error', {
-      error:
+      error: errorMessage,
       messageID: assistantMessage.message.id,
       toolName: tool.name,
       toolInput: JSON.stringify(input).slice(0, 200),

@@ -576,7 +584,7 @@ async function* checkPermissionsAndCallTool(
     yield createUserMessage([
       {
         type: 'tool_result',
-        content:
+        content: errorMessage,
         is_error: true,
         tool_use_id: toolUseID,
       },
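The src/query.ts change above replaces the raw schema error with a targeted message when the View tool is called with an empty argument object. A minimal standalone sketch of that fallback pattern (the schema check is simulated with a plain function rather than zod, and the helper names are illustrative, not from the package):

```typescript
type ValidationResult = { success: boolean; errorMessage?: string }

// Simulated stand-in for tool.inputSchema.safeParse: View requires file_path.
function validateViewInput(input: Record<string, unknown>): ValidationResult {
  if (typeof input.file_path === 'string' && input.file_path.length > 0) {
    return { success: true }
  }
  return { success: false, errorMessage: 'file_path: Required' }
}

// Mirrors the diff: a generic InputValidationError, replaced by a clearer
// message when the View tool is invoked with no parameters at all.
function buildToolErrorMessage(
  toolName: string,
  input: Record<string, unknown>,
  schemaError: string,
): string {
  let errorMessage = `InputValidationError: ${schemaError}`
  if (toolName === 'View' && Object.keys(input).length === 0) {
    errorMessage =
      `Error: The View tool requires a 'file_path' parameter to specify which file to read. ` +
      `For example: {"file_path": "/path/to/file.txt"}`
  }
  return errorMessage
}

const result = validateViewInput({})
const message = result.success
  ? ''
  : buildToolErrorMessage('View', {}, result.errorMessage ?? 'unknown')
```

The error string is then used twice in the diff: once for the `tengu_tool_use_error` log event and once as the `tool_result` content returned to the model.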
package/src/services/claude.ts
CHANGED

@@ -962,6 +962,84 @@ export function resetAnthropicClient(): void {
  * 4. Fallback region (us-east5)
  */
 
+/**
+ * Manage cache control to ensure it doesn't exceed Claude's 4 cache block limit
+ * Priority:
+ * 1. System prompts (high priority)
+ * 2. Long documents or reference materials (high priority)
+ * 3. Reusable context (medium priority)
+ * 4. Short messages or one-time content (no caching)
+ */
+function applyCacheControlWithLimits(
+  systemBlocks: TextBlockParam[],
+  messageParams: MessageParam[]
+): { systemBlocks: TextBlockParam[]; messageParams: MessageParam[] } {
+  if (!PROMPT_CACHING_ENABLED) {
+    return { systemBlocks, messageParams }
+  }
+
+  const maxCacheBlocks = 4
+  let usedCacheBlocks = 0
+
+  // 1. Prioritize adding cache to system prompts (highest priority)
+  const processedSystemBlocks = systemBlocks.map((block, index) => {
+    if (usedCacheBlocks < maxCacheBlocks && block.text.length > 1000) {
+      usedCacheBlocks++
+      return {
+        ...block,
+        cache_control: { type: 'ephemeral' as const }
+      }
+    }
+    const { cache_control, ...blockWithoutCache } = block
+    return blockWithoutCache
+  })
+
+  // 2. Add cache to message content based on priority
+  const processedMessageParams = messageParams.map((message, messageIndex) => {
+    if (Array.isArray(message.content)) {
+      const processedContent = message.content.map((contentBlock, blockIndex) => {
+        // Determine whether this content block should be cached
+        const shouldCache =
+          usedCacheBlocks < maxCacheBlocks &&
+          contentBlock.type === 'text' &&
+          typeof contentBlock.text === 'string' &&
+          (
+            // Long documents (over 2000 characters)
+            contentBlock.text.length > 2000 ||
+            // Last content block of the last message (may be important context)
+            (messageIndex === messageParams.length - 1 &&
+              blockIndex === message.content.length - 1 &&
+              contentBlock.text.length > 500)
+          )
+
+        if (shouldCache) {
+          usedCacheBlocks++
+          return {
+            ...contentBlock,
+            cache_control: { type: 'ephemeral' as const }
+          }
+        }
+
+        // Remove existing cache_control
+        const { cache_control, ...blockWithoutCache } = contentBlock as any
+        return blockWithoutCache
+      })
+
+      return {
+        ...message,
+        content: processedContent
+      }
+    }
+
+    return message
+  })
+
+  return {
+    systemBlocks: processedSystemBlocks,
+    messageParams: processedMessageParams
+  }
+}
+
 export function userMessageToMessageParam(
   message: UserMessage,
   addCache = false,

@@ -974,23 +1052,13 @@ export function userMessageToMessageParam(
         {
           type: 'text',
           text: message.message.content,
-          ...(PROMPT_CACHING_ENABLED
-            ? { cache_control: { type: 'ephemeral' } }
-            : {}),
         },
       ],
     }
   } else {
     return {
       role: 'user',
-      content: message.message.content.map((_, i) => ({
-        ..._,
-        ...(i === message.message.content.length - 1
-          ? PROMPT_CACHING_ENABLED
-            ? { cache_control: { type: 'ephemeral' } }
-            : {}
-          : {}),
-      })),
+      content: message.message.content.map((_) => ({ ..._ })),
     }
   }
 }

@@ -1012,25 +1080,13 @@ export function assistantMessageToMessageParam(
         {
           type: 'text',
           text: message.message.content,
-          ...(PROMPT_CACHING_ENABLED
-            ? { cache_control: { type: 'ephemeral' } }
-            : {}),
         },
       ],
     }
   } else {
     return {
       role: 'assistant',
-      content: message.message.content.map((_, i) => ({
-        ..._,
-        ...(i === message.message.content.length - 1 &&
-          _.type !== 'thinking' &&
-          _.type !== 'redacted_thinking'
-          ? PROMPT_CACHING_ENABLED
-            ? { cache_control: { type: 'ephemeral' } }
-            : {}
-          : {}),
-      })),
+      content: message.message.content.map((_) => ({ ..._ })),
     }
   }
 }

@@ -1383,9 +1439,6 @@ async function queryAnthropicNative(
 
   const system: TextBlockParam[] = splitSysPromptPrefix(systemPrompt).map(
     _ => ({
-      ...(PROMPT_CACHING_ENABLED
-        ? { cache_control: { type: 'ephemeral' } }
-        : {}),
       text: _,
       type: 'text',
     }),

@@ -1404,6 +1457,10 @@ async function queryAnthropicNative(
   )
 
   const anthropicMessages = addCacheBreakpoints(messages)
+
+  // apply cache control
+  const { systemBlocks: processedSystem, messageParams: processedMessages } =
+    applyCacheControlWithLimits(system, anthropicMessages)
   const startIncludingRetries = Date.now()
 
   // 记录系统提示构建过程

@@ -1426,8 +1483,8 @@ async function queryAnthropicNative(
   const params: Anthropic.Beta.Messages.MessageCreateParams = {
     model,
     max_tokens: getMaxTokensFromProfile(modelProfile),
-    messages:
-    system,
+    messages: processedMessages,
+    system: processedSystem,
     tools: toolSchemas.length > 0 ? toolSchemas : undefined,
     tool_choice: toolSchemas.length > 0 ? { type: 'auto' } : undefined,
   }

@@ -1450,6 +1507,7 @@ async function queryAnthropicNative(
       : null,
     maxTokens: params.max_tokens,
     temperature: MAIN_QUERY_TEMPERATURE,
+    params: params,
     messageCount: params.messages?.length || 0,
     streamMode: true,
     toolsCount: toolSchemas.length,

@@ -1471,6 +1529,7 @@ async function queryAnthropicNative(
   let finalResponse: any | null = null
   let messageStartEvent: any = null
   const contentBlocks: any[] = []
+  const inputJSONBuffers = new Map<number, string>()
   let usage: any = null
   let stopReason: string | null = null
   let stopSequence: string | null = null

@@ -1484,30 +1543,81 @@ async function queryAnthropicNative(
       })
       throw new Error('Request was cancelled')
     }
-
-
-
-
-
-
-
-        contentBlocks[event.index] = { ...event.content_block }
-      } else if (event.type === 'content_block_delta') {
-        if (!contentBlocks[event.index]) {
-          contentBlocks[event.index] = {
-            type: event.delta.type === 'text_delta' ? 'text' : 'unknown',
-            text: '',
+
+    switch (event.type) {
+      case 'message_start':
+        messageStartEvent = event
+        finalResponse = {
+          ...event.message,
+          content: [], // Will be populated from content blocks
         }
-
-
-
-
-
-
-
-
-
+        break
+
+      case 'content_block_start':
+        contentBlocks[event.index] = { ...event.content_block }
+        // Initialize JSON buffer for tool_use blocks
+        if (event.content_block.type === 'tool_use') {
+          inputJSONBuffers.set(event.index, '')
+        }
+        break
+
+      case 'content_block_delta':
+        const blockIndex = event.index
+
+        // Ensure content block exists
+        if (!contentBlocks[blockIndex]) {
+          contentBlocks[blockIndex] = {
+            type: event.delta.type === 'text_delta' ? 'text' : 'tool_use',
+            text: event.delta.type === 'text_delta' ? '' : undefined,
+          }
+          if (event.delta.type === 'input_json_delta') {
+            inputJSONBuffers.set(blockIndex, '')
+          }
+        }
+
+        if (event.delta.type === 'text_delta') {
+          contentBlocks[blockIndex].text += event.delta.text
+        } else if (event.delta.type === 'input_json_delta') {
+          const currentBuffer = inputJSONBuffers.get(blockIndex) || ''
+          inputJSONBuffers.set(blockIndex, currentBuffer + event.delta.partial_json)
+        }
+        break
+
+      case 'message_delta':
+        if (event.delta.stop_reason) stopReason = event.delta.stop_reason
+        if (event.delta.stop_sequence) stopSequence = event.delta.stop_sequence
+        if (event.usage) usage = { ...usage, ...event.usage }
+        break
+
+      case 'content_block_stop':
+        const stopIndex = event.index
+        const block = contentBlocks[stopIndex]
+
+        if (block?.type === 'tool_use' && inputJSONBuffers.has(stopIndex)) {
+          const jsonStr = inputJSONBuffers.get(stopIndex)
+          if (jsonStr) {
+            try {
+              block.input = JSON.parse(jsonStr)
+            } catch (error) {
+              debugLogger.error('JSON_PARSE_ERROR', {
+                blockIndex: stopIndex,
+                jsonStr,
+                error: error instanceof Error ? error.message : String(error)
+              })
+              block.input = {}
+            }
+            inputJSONBuffers.delete(stopIndex)
+          }
+        }
+        break
+
+      case 'message_stop':
+        // Clear any remaining buffers
+        inputJSONBuffers.clear()
+        break
+    }
+
+    if (event.type === 'message_stop') {
       break
     }
   }

@@ -1557,6 +1667,10 @@ async function queryAnthropicNative(
     }
   }, { signal })
 
+  debugLogger.api('ANTHROPIC_API_CALL_SUCCESS', {
+    content: response.content
+  })
+
   const ttftMs = start - Date.now()
   const durationMs = Date.now() - startIncludingRetries
 
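The streaming rewrite in src/services/claude.ts buffers `input_json_delta` fragments per block index and parses them only once the block stops, falling back to an empty input on malformed JSON. A self-contained sketch of that buffering pattern, with simplified event shapes (the real Anthropic SDK events carry more fields):

```typescript
type StreamEvent =
  | { type: 'content_block_start'; index: number; content_block: { type: string; name?: string } }
  | {
      type: 'content_block_delta'
      index: number
      delta:
        | { type: 'text_delta'; text: string }
        | { type: 'input_json_delta'; partial_json: string }
    }
  | { type: 'content_block_stop'; index: number }

function collectBlocks(events: StreamEvent[]): any[] {
  const contentBlocks: any[] = []
  const inputJSONBuffers = new Map<number, string>()

  for (const event of events) {
    switch (event.type) {
      case 'content_block_start':
        contentBlocks[event.index] = { ...event.content_block }
        if (event.content_block.type === 'tool_use') {
          inputJSONBuffers.set(event.index, '') // start an empty JSON buffer
        }
        break
      case 'content_block_delta':
        if (event.delta.type === 'text_delta') {
          contentBlocks[event.index].text =
            (contentBlocks[event.index].text ?? '') + event.delta.text
        } else {
          // tool input arrives as JSON fragments; concatenate now, parse later
          const current = inputJSONBuffers.get(event.index) ?? ''
          inputJSONBuffers.set(event.index, current + event.delta.partial_json)
        }
        break
      case 'content_block_stop': {
        const buffered = inputJSONBuffers.get(event.index)
        if (buffered !== undefined) {
          try {
            contentBlocks[event.index].input = JSON.parse(buffered)
          } catch {
            contentBlocks[event.index].input = {} // malformed JSON: empty input
          }
          inputJSONBuffers.delete(event.index)
        }
        break
      }
    }
  }
  return contentBlocks
}

const blocks = collectBlocks([
  { type: 'content_block_start', index: 0, content_block: { type: 'tool_use', name: 'View' } },
  { type: 'content_block_delta', index: 0, delta: { type: 'input_json_delta', partial_json: '{"file_pa' } },
  { type: 'content_block_delta', index: 0, delta: { type: 'input_json_delta', partial_json: 'th":"/tmp/a.txt"}' } },
  { type: 'content_block_stop', index: 0 },
])
```

Deferring the parse to `content_block_stop` matters because each `partial_json` fragment is rarely valid JSON on its own; only the concatenation of all fragments for a block is parseable.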
package/src/tools/URLFetcherTool/URLFetcherTool.tsx
ADDED

@@ -0,0 +1,178 @@
+import { Box, Text } from 'ink'
+import React from 'react'
+import { z } from 'zod'
+import fetch from 'node-fetch'
+import { Cost } from '../../components/Cost'
+import { FallbackToolUseRejectedMessage } from '../../components/FallbackToolUseRejectedMessage'
+import { Tool, ToolUseContext } from '../../Tool'
+import { DESCRIPTION, TOOL_NAME_FOR_PROMPT } from './prompt'
+import { convertHtmlToMarkdown } from './htmlToMarkdown'
+import { urlCache } from './cache'
+import { queryQuick } from '../../services/claude'
+
+const inputSchema = z.strictObject({
+  url: z.string().url().describe('The URL to fetch content from'),
+  prompt: z.string().describe('The prompt to run on the fetched content'),
+})
+
+type Input = z.infer<typeof inputSchema>
+type Output = {
+  url: string
+  fromCache: boolean
+  aiAnalysis: string
+}
+
+function normalizeUrl(url: string): string {
+  // Auto-upgrade HTTP to HTTPS
+  if (url.startsWith('http://')) {
+    return url.replace('http://', 'https://')
+  }
+  return url
+}
+
+export const URLFetcherTool = {
+  name: TOOL_NAME_FOR_PROMPT,
+  async description() {
+    return DESCRIPTION
+  },
+  userFacingName: () => 'URL Fetcher',
+  inputSchema,
+  isReadOnly: () => true,
+  isConcurrencySafe: () => true,
+  async isEnabled() {
+    return true
+  },
+  needsPermissions() {
+    return false
+  },
+  async prompt() {
+    return DESCRIPTION
+  },
+  renderToolUseMessage({ url, prompt }: Input) {
+    return `Fetching content from ${url} and analyzing with prompt: "${prompt}"`
+  },
+  renderToolUseRejectedMessage() {
+    return <FallbackToolUseRejectedMessage />
+  },
+  renderToolResultMessage(output: Output) {
+    const statusText = output.fromCache ? 'from cache' : 'fetched'
+
+    return (
+      <Box justifyContent="space-between" width="100%">
+        <Box flexDirection="row">
+          <Text> ⎿ Content </Text>
+          <Text bold>{statusText} </Text>
+          <Text>and analyzed</Text>
+        </Box>
+        <Cost costUSD={0} durationMs={0} debug={false} />
+      </Box>
+    )
+  },
+  renderResultForAssistant(output: Output) {
+    if (!output.aiAnalysis.trim()) {
+      return `No content could be analyzed from URL: ${output.url}`
+    }
+
+    return output.aiAnalysis
+  },
+  async *call({ url, prompt }: Input, {}: ToolUseContext) {
+    const normalizedUrl = normalizeUrl(url)
+
+    try {
+      let content: string
+      let fromCache = false
+
+      // Check cache first
+      const cachedContent = urlCache.get(normalizedUrl)
+      if (cachedContent) {
+        content = cachedContent
+        fromCache = true
+      } else {
+        // Fetch from URL with AbortController for timeout
+        const abortController = new AbortController()
+        const timeout = setTimeout(() => abortController.abort(), 30000)
+
+        const response = await fetch(normalizedUrl, {
+          method: 'GET',
+          headers: {
+            'User-Agent': 'Mozilla/5.0 (compatible; URLFetcher/1.0)',
+            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
+            'Accept-Language': 'en-US,en;q=0.5',
+            'Accept-Encoding': 'gzip, deflate',
+            'Connection': 'keep-alive',
+            'Upgrade-Insecure-Requests': '1',
+          },
+          signal: abortController.signal,
+          redirect: 'follow',
+        })
+
+        clearTimeout(timeout)
+
+        if (!response.ok) {
+          throw new Error(`HTTP ${response.status}: ${response.statusText}`)
+        }
+
+        const contentType = response.headers.get('content-type') || ''
+        if (!contentType.includes('text/') && !contentType.includes('application/')) {
+          throw new Error(`Unsupported content type: ${contentType}`)
+        }
+
+        const html = await response.text()
+        content = convertHtmlToMarkdown(html)
+
+        // Cache the result
+        urlCache.set(normalizedUrl, content)
+        fromCache = false
+      }
+
+      // Truncate content if too large (keep within reasonable token limits)
+      const maxContentLength = 50000 // ~15k tokens approximately
+      const truncatedContent = content.length > maxContentLength
+        ? content.substring(0, maxContentLength) + '\n\n[Content truncated due to length]'
+        : content
+
+      // AI Analysis - always performed fresh, even with cached content
+      const systemPrompt = [
+        'You are analyzing web content based on a user\'s specific request.',
+        'The content has been extracted from a webpage and converted to markdown.',
+        'Provide a focused response that directly addresses the user\'s prompt.',
+      ]
+
+      const userPrompt = `Here is the content from ${normalizedUrl}:
+
+${truncatedContent}
+
+User request: ${prompt}`
+
+      const aiResponse = await queryQuick({
+        systemPrompt,
+        userPrompt,
+        enablePromptCaching: false,
+      })
+
+      const output: Output = {
+        url: normalizedUrl,
+        fromCache,
+        aiAnalysis: aiResponse.message.content[0]?.text || 'Unable to analyze content',
+      }
+
+      yield {
+        type: 'result' as const,
+        resultForAssistant: this.renderResultForAssistant(output),
+        data: output,
+      }
+    } catch (error: any) {
+      const output: Output = {
+        url: normalizedUrl,
+        fromCache: false,
+        aiAnalysis: '',
+      }
+
+      yield {
+        type: 'result' as const,
+        resultForAssistant: `Error processing URL ${normalizedUrl}: ${error.message}`,
+        data: output,
+      }
+    }
+  },
+} satisfies Tool<typeof inputSchema, Output>