joonecli 0.1.1 → 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (147)
  1. package/dist/cli/index.js +4 -1
  2. package/dist/cli/index.js.map +1 -1
  3. package/dist/commands/builtinCommands.js +6 -6
  4. package/dist/commands/builtinCommands.js.map +1 -1
  5. package/dist/commands/commandRegistry.d.ts +3 -1
  6. package/dist/commands/commandRegistry.js.map +1 -1
  7. package/dist/core/agentLoop.d.ts +3 -1
  8. package/dist/core/agentLoop.js +17 -7
  9. package/dist/core/agentLoop.js.map +1 -1
  10. package/dist/core/compactor.js +2 -2
  11. package/dist/core/compactor.js.map +1 -1
  12. package/dist/core/contextGuard.d.ts +5 -0
  13. package/dist/core/contextGuard.js +30 -3
  14. package/dist/core/contextGuard.js.map +1 -1
  15. package/dist/core/events.d.ts +45 -0
  16. package/dist/core/events.js +8 -0
  17. package/dist/core/events.js.map +1 -0
  18. package/dist/core/sessionStore.js +3 -2
  19. package/dist/core/sessionStore.js.map +1 -1
  20. package/dist/core/subAgent.js +2 -2
  21. package/dist/core/subAgent.js.map +1 -1
  22. package/dist/core/tokenCounter.d.ts +8 -1
  23. package/dist/core/tokenCounter.js +28 -0
  24. package/dist/core/tokenCounter.js.map +1 -1
  25. package/dist/middleware/permission.js +1 -0
  26. package/dist/middleware/permission.js.map +1 -1
  27. package/dist/tools/browser.js +4 -1
  28. package/dist/tools/browser.js.map +1 -1
  29. package/dist/tools/index.d.ts +2 -1
  30. package/dist/tools/index.js +11 -3
  31. package/dist/tools/index.js.map +1 -1
  32. package/dist/tools/installHostDeps.d.ts +2 -0
  33. package/dist/tools/installHostDeps.js +37 -0
  34. package/dist/tools/installHostDeps.js.map +1 -0
  35. package/dist/tools/router.js +1 -0
  36. package/dist/tools/router.js.map +1 -1
  37. package/dist/tools/spawnAgent.js +3 -1
  38. package/dist/tools/spawnAgent.js.map +1 -1
  39. package/dist/tracing/sessionTracer.d.ts +1 -0
  40. package/dist/tracing/sessionTracer.js +4 -1
  41. package/dist/tracing/sessionTracer.js.map +1 -1
  42. package/dist/ui/App.js +6 -1
  43. package/dist/ui/App.js.map +1 -1
  44. package/dist/ui/components/ActionLog.d.ts +7 -0
  45. package/dist/ui/components/ActionLog.js +63 -0
  46. package/dist/ui/components/ActionLog.js.map +1 -0
  47. package/dist/ui/components/FileBrowser.d.ts +2 -0
  48. package/dist/ui/components/FileBrowser.js +41 -0
  49. package/dist/ui/components/FileBrowser.js.map +1 -0
  50. package/package.json +3 -5
  51. package/AGENTS.md +0 -56
  52. package/Handover.md +0 -115
  53. package/PROGRESS.md +0 -160
  54. package/docs/01_insights_and_patterns.md +0 -27
  55. package/docs/02_edge_cases_and_mitigations.md +0 -143
  56. package/docs/03_initial_implementation_plan.md +0 -66
  57. package/docs/04_tech_stack_proposal.md +0 -20
  58. package/docs/05_prd.md +0 -87
  59. package/docs/06_user_stories.md +0 -72
  60. package/docs/07_system_architecture.md +0 -138
  61. package/docs/08_roadmap.md +0 -200
  62. package/e2b/Dockerfile +0 -26
  63. package/src/__tests__/bootstrap.test.ts +0 -111
  64. package/src/__tests__/config.test.ts +0 -97
  65. package/src/__tests__/m55.test.ts +0 -238
  66. package/src/__tests__/middleware.test.ts +0 -219
  67. package/src/__tests__/modelFactory.test.ts +0 -63
  68. package/src/__tests__/optimizations.test.ts +0 -201
  69. package/src/__tests__/promptBuilder.test.ts +0 -141
  70. package/src/__tests__/sandbox.test.ts +0 -102
  71. package/src/__tests__/security.test.ts +0 -122
  72. package/src/__tests__/streaming.test.ts +0 -82
  73. package/src/__tests__/toolRouter.test.ts +0 -52
  74. package/src/__tests__/tools.test.ts +0 -146
  75. package/src/__tests__/tracing.test.ts +0 -196
  76. package/src/agents/agentRegistry.ts +0 -69
  77. package/src/agents/agentSpec.ts +0 -67
  78. package/src/agents/builtinAgents.ts +0 -142
  79. package/src/cli/config.ts +0 -124
  80. package/src/cli/index.ts +0 -742
  81. package/src/cli/modelFactory.ts +0 -174
  82. package/src/cli/postinstall.ts +0 -28
  83. package/src/cli/providers.ts +0 -107
  84. package/src/commands/builtinCommands.ts +0 -293
  85. package/src/commands/commandRegistry.ts +0 -194
  86. package/src/core/agentLoop.d.ts.map +0 -1
  87. package/src/core/agentLoop.ts +0 -312
  88. package/src/core/autoSave.ts +0 -95
  89. package/src/core/compactor.ts +0 -252
  90. package/src/core/contextGuard.ts +0 -129
  91. package/src/core/errors.ts +0 -202
  92. package/src/core/promptBuilder.d.ts.map +0 -1
  93. package/src/core/promptBuilder.ts +0 -139
  94. package/src/core/reasoningRouter.ts +0 -121
  95. package/src/core/retry.ts +0 -75
  96. package/src/core/sessionResumer.ts +0 -90
  97. package/src/core/sessionStore.ts +0 -216
  98. package/src/core/subAgent.ts +0 -339
  99. package/src/core/tokenCounter.ts +0 -64
  100. package/src/evals/dataset.ts +0 -67
  101. package/src/evals/evaluator.ts +0 -81
  102. package/src/hitl/bridge.ts +0 -160
  103. package/src/middleware/commandSanitizer.ts +0 -60
  104. package/src/middleware/loopDetection.ts +0 -63
  105. package/src/middleware/permission.ts +0 -72
  106. package/src/middleware/pipeline.ts +0 -75
  107. package/src/middleware/preCompletion.ts +0 -94
  108. package/src/middleware/types.ts +0 -45
  109. package/src/sandbox/bootstrap.ts +0 -121
  110. package/src/sandbox/manager.ts +0 -239
  111. package/src/sandbox/sync.ts +0 -157
  112. package/src/skills/loader.ts +0 -143
  113. package/src/skills/tools.ts +0 -99
  114. package/src/skills/types.ts +0 -13
  115. package/src/test_cache.ts +0 -72
  116. package/src/tools/askUser.ts +0 -47
  117. package/src/tools/browser.ts +0 -137
  118. package/src/tools/index.d.ts.map +0 -1
  119. package/src/tools/index.ts +0 -237
  120. package/src/tools/registry.ts +0 -198
  121. package/src/tools/router.ts +0 -78
  122. package/src/tools/security.ts +0 -220
  123. package/src/tools/spawnAgent.ts +0 -158
  124. package/src/tools/webSearch.ts +0 -142
  125. package/src/tracing/analyzer.ts +0 -265
  126. package/src/tracing/langsmith.ts +0 -63
  127. package/src/tracing/sessionTracer.ts +0 -202
  128. package/src/tracing/types.ts +0 -49
  129. package/src/types/valyu.d.ts +0 -37
  130. package/src/ui/App.tsx +0 -404
  131. package/src/ui/components/HITLPrompt.tsx +0 -119
  132. package/src/ui/components/Header.tsx +0 -51
  133. package/src/ui/components/MessageBubble.tsx +0 -46
  134. package/src/ui/components/StatusBar.tsx +0 -138
  135. package/src/ui/components/StreamingText.tsx +0 -48
  136. package/src/ui/components/ToolCallPanel.tsx +0 -80
  137. package/tests/commands/commands.test.ts +0 -356
  138. package/tests/core/compactor.test.ts +0 -217
  139. package/tests/core/retryAndErrors.test.ts +0 -164
  140. package/tests/core/sessionResumer.test.ts +0 -95
  141. package/tests/core/sessionStore.test.ts +0 -84
  142. package/tests/core/stability.test.ts +0 -165
  143. package/tests/core/subAgent.test.ts +0 -238
  144. package/tests/hitl/hitlBridge.test.ts +0 -115
  145. package/tsconfig.json +0 -16
  146. package/vitest.config.ts +0 -10
  147. package/vitest.out +0 -48
package/docs/02_edge_cases_and_mitigations.md DELETED
@@ -1,143 +0,0 @@
- # Edge Cases & Mitigations
-
- When building a coding agent with Prompt Caching + Middlewares, these are the primary edge cases to design around:
-
- ## 1. Prompt Caching Edge Cases (Cost & Latency Traps)
-
- **The "Leaky Timestamp" Cache Breaker:**
- - _The Edge Case:_ If you inject dynamic data (like the current time, memory usage, or random UUIDs) into your Base System Prompt, you will achieve a **0% cache hit rate**. The cache relies on exact prefix matching.
- - _Mitigation:_ Put all static, immutable instructions at the top. Any dynamic state must be injected via a `<system-reminder>` inside the _Messages_ array (which sits at the end of the context).
- **The Mid-Session Model Switch:**
- - _The Edge Case:_ Switching models mid-thread (e.g., cheap model for summarizing, smart model for coding) means the new model has an empty cache and must re-process the entire prompt prefix from scratch.
- - _Mitigation:_ Avoid swapping models in the same thread. Spawn a "Sub-agent" thread and only pass the minimum necessary context.
- **Context Window Compaction (Amnesia):**
- - _The Edge Case:_ Summarizing a long conversation and starting a new prompt causes you to lose your cached prefix AND the agent forgets specific constraints.
- - _Mitigation:_ Implement **Cache-Safe Forking**. Keep the exact same System Prompt and Tool definitions. Start a new thread by passing the summary of the previous history as the first few messages, followed by the new task.
-
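The prefix-ordering rule above can be sketched in TypeScript. This is an illustrative sketch only — `buildPrompt`, the `Message` shape, and `STATIC_SYSTEM_PROMPT` are assumptions, not the package's actual `promptBuilder` API:

```typescript
// Cache-safe prompt assembly sketch: static, immutable instructions first,
// dynamic state injected only at the tail via a <system-reminder> message.
type Message = { role: "system" | "user" | "assistant"; content: string };

// Never changes mid-session, so the cached prefix stays byte-identical.
const STATIC_SYSTEM_PROMPT = "You are Joone, an autonomous coding agent.";

function buildPrompt(history: Message[], dynamicState: string): Message[] {
  return [
    { role: "system", content: STATIC_SYSTEM_PROMPT }, // cacheable prefix
    ...history,
    // Dynamic data (time, memory, sandbox status) lives at the end of the
    // context, so it never invalidates the prefix cache.
    { role: "user", content: `<system-reminder>${dynamicState}</system-reminder>` },
  ];
}
```

Anything that varies per turn goes through the reminder message; the system prompt itself is frozen for the lifetime of the session.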
- ## 2. Harness & Middleware Edge Cases (Logic Traps)
-
- **The "Massive File" Blunder:**
- - _The Edge Case:_ The agent reads a 10,000-line minified file. This floods the context window, pushes out important instructions, and ruins the session cache.
- - _Mitigation:_ Harness-level Guardrails. Restrict `read_file` to return chunks or force the agent to use `grep_search` / `view_file_outline`.
- **The "Blind Retry" Doom Loop:**
- - _The Edge Case:_ The agent misses a space in a search-and-replace, fails, and tries the exact same edit endlessly.
- - _Mitigation:_ Use `LoopDetectionMiddleware`. If the agent emits identical tool calls 3 times, intercept and inject: _"You have failed this 3 times. Stop trying this approach."_
- **The "Fake Success" Verification:**
- - _The Edge Case:_ The agent runs tests, they fail, but the agent hallucinates that the failure is acceptable and marks the task as Done. Older approaches relied on fragile string parsing (e.g., matching "failed" in output), which could easily be bypassed or confused by test output.
- - _Mitigation:_ The harness must programmatically parse terminal exit codes. By explicitly surfacing structured tool metadata (e.g., `ToolResult.metadata.exitCode`) from execution sandboxes, the `PreCompletionMiddleware` reliably blocks the agent from exiting if tests don't pass (`exitCode !== 0`).
- **Tool Schema Amnesia (with Lazy Loading):**
- - _The Edge Case:_ An agent loads a complex tool lazily, uses it once, and then later forgets how to format its JSON schema.
- - _Mitigation:_ If a tool is "discovered", it must remain in the "Messages" context as a system reminder so the schema is preserved.
- **The "Ghost Tool Call" (Context Desync):**
- - _The Edge Case:_ A model emits a tool call but occasionally forgets to attach an internal `tool_call_id` (this breaks the strict `AIMessage[tool_calls] -> ToolMessage[tool_call_id]` sequencing rules required by modern LangChain/Anthropic/OpenAI APIs). If you forge a fake ID or cast it as a string, the LLM rejects the context on the next turn.
- - _Mitigation:_ The "Soft Fail" approach. Intercept the malformed tool call in the `ExecutionHarness`. Do not execute the tool and do not emit a `ToolMessage`. Instead, emit a corrective `HumanMessage` stating: _"You attempted to call tool X, but didn't provide a tool_call_id. Please try again."_ This prevents context poisoning.
-
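The doom-loop detection described above can be sketched as a small stateful check. The class name, threshold, and message wording mirror the text, but this is an illustrative sketch, not the package's actual `LoopDetectionMiddleware`:

```typescript
// Detects N identical consecutive tool calls and returns a corrective
// message to inject into the conversation, breaking the retry loop.
class LoopDetector {
  private lastKey = "";
  private count = 0;

  constructor(private readonly threshold = 3) {}

  // Returns an intervention string once the same call repeats `threshold`
  // times in a row, otherwise null (let the call proceed).
  check(toolName: string, args: unknown): string | null {
    const key = `${toolName}:${JSON.stringify(args)}`;
    this.count = key === this.lastKey ? this.count + 1 : 1;
    this.lastKey = key;
    if (this.count >= this.threshold) {
      return `You have failed this ${this.count} times. Stop trying this approach.`;
    }
    return null; // a differing call resets the streak
  }
}
```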
- ## 3. Security & Execution Edge Cases (Tool Exploits)
-
- **Command Injection via Malicious Interpolation:**
- - _The Edge Case:_ Passing user-provided arguments directly into shell commands (e.g., `agent-browser --url "${args.url}"` or `gemini --file "${args.path}"`) allows attackers to escape quotes and execute arbitrary commands in the sandbox (e.g., `url = '"; cat /etc/passwd; "'`).
- - _Mitigation:_ Use strict Bash parameter escaping. All dynamic strings passed to shell commands are wrapped in single quotes, and any internal single quotes are escaped (`'\\''`).
- **Host Filesystem Path Traversal (The "Escaped Workspace" Vulnerability):**
- - _The Edge Case:_ Because `read_file` and `write_file` execute on the host machine to support live IDE syncing, a malicious prompt could instruct the agent to write to `~/.bashrc`, `C:\Windows\System32`, or `/.ssh/id_rsa`, compromising the user's host machine.
- - _Mitigation:_ Implement strict Workspace Jail boundaries. Before any host I/O operation, the resolved path is evaluated against `process.cwd()`. If the path attempts to escape the root workspace, the tool immediately rejects the call, returning a permissions error.
- **Silently Swallowed CLI Errors:**
- - _The Edge Case:_ A CLI tool (like OSV-Scanner) crashes due to a configuration error (exit code > 1) and prints an error to `stderr`. If the orchestration layer only checks `stdout` and swallows non-zero exit codes, silently falling back to another tool, the critical error trace is lost.
- - _Mitigation:_ Enforce strict exit code verification (e.g., `exitCode === 1` means vulnerabilities found) and emit clear warnings with the full `stderr` trace before attempting any fallback strategies.
- **The "Over-Eager Doom Loop" Reporter:**
- - _The Edge Case:_ When detecting a doom loop (calling the same tool with identical args continuously), firing an alert during the active iteration causes redundant, spammy issue reports (e.g., reporting loop counts 3, 4, and 5 as separate critical issues).
- - _Mitigation:_ Track the loop state continuously but defer pushing the `AnalysisIssue` to the report array until the loop is visibly broken by a differing action, or the trace ends.
- **The "Parallel Tool Expansion" Bug (TUI Memory Corruption):**
- - _The Edge Case:_ In a Terminal UI rendering loop, executing an array of tool calls _inside_ the UI rendering iteration causes the generated `ToolMessage` array to be appended to the conversation history $N$ times (for $N$ tools), massively inflating context usage with duplicated data.
-
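The two shell/filesystem mitigations above can be sketched concretely. Function names and the `root` parameter are illustrative (the text says the real jail checks against `process.cwd()`); the quoting rule is the standard Bash single-quote escape described above:

```typescript
import * as path from "node:path";

// Bash parameter escaping: wrap in single quotes; embedded single quotes
// become '\'' (close quote, escaped quote, reopen quote).
function escapeShellArg(value: string): string {
  return `'${value.replace(/'/g, `'\\''`)}'`;
}

// Workspace Jail: resolve the requested path against the workspace root and
// reject anything that escapes it before doing host I/O.
function assertInsideWorkspace(requested: string, root: string): string {
  const resolved = path.resolve(root, requested);
  if (resolved !== root && !resolved.startsWith(root + path.sep)) {
    throw new Error(`Permission denied: ${requested} escapes the workspace`);
  }
  return resolved;
}
```

With `escapeShellArg`, the `"; cat /etc/passwd; "` payload from the edge case becomes inert data inside single quotes rather than an injected command.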
- ## 4. Persistent Session Edge Cases (State Management)
-
- **File System Drift (Host Desync):**
- - _The Edge Case:_ The agent edits a file, the session is paused. A human edits the file externally before the session is resumed. The agent resumes, unaware of the external edits, and attempts a line-based replacement that corrupts the file.
- - _Mitigation:_ `SessionResumer` explicitly logs `mtime` file stats. Upon resumption, it flags recently modified workspace files and injects a "Wakeup Prompt" forcing the LLM to diff or re-read the file before acting.
- **Sandbox Ephemerality (The Amnesia Problem):**
- - _The Edge Case:_ A session running a background Express server in a cloud sandbox on Friday is resumed on Monday. The cloud provider killed the idle VM. The new VM lacks the running server, but the LLM's context history believes it is still running.
- - _Mitigation:_ Sandboxes are treated strictly statelessly. Upon session resumption, the agent is injected with a system message that the sandbox was recycled and it must manually restart required daemons/dev-servers.
- **"Mid-Breath" Interruption State (Corrupt Serialization):**
- - _The Edge Case:_ A forced exit (`SIGINT`/Power Loss) occurs exactly while the agent stream is halfway through emitting a JSON tool call chunk, serializing a broken `AIMessage` into history.
- - _Mitigation:_ The `SessionStore` must only trigger a `saveSession()` at strict execution boundaries (e.g. after a complete LLM generation cycle or successfully parsed CLI execution), guaranteeing invalid mid-stream JSON chunks never touch the disk.
- **Context Overflow (The Infinite Chat Log):**
- - _The Edge Case:_ A persistent session spanning weeks scales the context past 200k tokens, hitting API limits and exponentially inflating the per-turn token costs.
- - _Mitigation:_ Compaction is forced _before_ disk serialization. The session serializes and compresses turns older than $N$ iterations into a dense system summary block before writing to `.jsonl`.
- **Provider/Model Switching Mid-Task:**
- - _The Edge Case:_ Starting a complex reasoning loop with Opus, pausing, and resuming with a lightweight local model like Llama 3 8B. The history is filled with complex schema usages that confuse the smaller model.
- - _Mitigation:_ Serialize the `.jsonl` lines with `provider/model` metadata blocks. Upon resumption, the CLI explicitly warns if a provider downgrade is detected.
-
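The `mtime`-based drift check that `SessionResumer` performs can be sketched as follows. The function name and tracked-map shape are assumptions for illustration; only the mechanism (compare saved `mtime` against the file's current one) comes from the text:

```typescript
import * as fs from "node:fs";

// Given the mtimes recorded at save time, return files that changed (or
// disappeared) while the session was paused, so a "Wakeup Prompt" can force
// the LLM to re-read them before editing.
function findDriftedFiles(tracked: Record<string, number>): string[] {
  return Object.entries(tracked)
    .filter(([file, savedMtimeMs]) => {
      try {
        return fs.statSync(file).mtimeMs > savedMtimeMs;
      } catch {
        return true; // a file deleted externally also counts as drift
      }
    })
    .map(([file]) => file);
}
```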
- ## 5. Error Recovery & Retry Edge Cases
-
- **Transient LLM API Failure (429/5xx):**
- - _The Edge Case:_ The LLM provider returns a rate-limit (429) or server error (500/502/503) mid-turn, crashing the entire session.
- - _Mitigation:_ `retryWithBackoff()` wraps all LLM calls with exponential backoff (1s→2s→4s + jitter). Only `JooneError` instances with `retryable === true` trigger retries; auth failures (401/403) propagate immediately.
- **Exhausted Retries (Self-Recovery):**
- - _The Edge Case:_ After 3 retry attempts, the LLM API is still down. The session crashes and the user loses all progress.
- - _Mitigation:_ Instead of crashing, `ExecutionHarness` injects the error's `toRecoveryHint()` as a `SystemMessage` into the conversation, returning a synthetic `AIMessage`. The agent can observe the error context and adapt (e.g., wait, simplify, or ask the user).
- **Unclassified Provider Errors:**
- - _The Edge Case:_ A new LLM provider throws a non-standard error with no HTTP status code, bypassing the retry classification.
- - _Mitigation:_ `wrapLLMError()` inspects `.status`, `.statusCode`, `.code`, and `.response.status` on raw errors, covering the common patterns of LangChain, Axios, and native `fetch` errors.
-
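The retry policy described above (exponential backoff with jitter, retrying only errors flagged retryable) can be sketched like this. `RetryableError` is a stand-in for the package's `JooneError`; the signature is an assumption:

```typescript
// Minimal retryWithBackoff sketch: delays grow 1s -> 2s -> 4s with jitter,
// non-retryable errors (e.g. 401/403 auth failures) propagate immediately.
interface RetryableError extends Error {
  retryable?: boolean;
}

async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const e = err as RetryableError;
      if (!e.retryable || attempt + 1 >= maxAttempts) throw err;
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 250; // jitter
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```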
- ## 6. Human-in-the-Loop Edge Cases
-
- **Permission Timeout (User Away):**
- - _The Edge Case:_ The agent calls a dangerous tool (`bash`, `write_file`) while the user is away from the terminal. The agent blocks indefinitely waiting for permission.
- - _Mitigation:_ `HITLBridge.requestPermission()` has a configurable timeout (default 5 minutes) that auto-denies and returns a short-circuit string, letting the agent try an alternative.
- **Ask Question Timeout:**
- - _The Edge Case:_ The agent asks the user a clarifying question via `ask_user_question`, but the user doesn't respond.
- - _Mitigation:_ `HITLBridge.askUser()` resolves with `"[No response]"` after timeout, so the agent can proceed with a default assumption.
- **Permission Mode Misconfiguration:**
- - _The Edge Case:_ The user sets `"permissionMode": "ask_all"` and then every tool call — including harmless reads — triggers a prompt, making the agent unusable.
- - _Mitigation:_ `PermissionMiddleware` maintains a hardcoded `SAFE_TOOLS` whitelist (`read_file`, `search_skills`, `ask_user_question`, etc.) that bypasses approval even in `ask_all` mode.
-
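The timeout-with-default behavior that both `requestPermission()` and `askUser()` rely on can be sketched as a generic race helper. `withTimeout` is an illustrative name, not the package's actual API:

```typescript
// Race a pending human response against a timer; if the user is away, the
// timer wins and the agent receives a fallback value instead of blocking.
function withTimeout<T>(promise: Promise<T>, ms: number, fallback: T): Promise<T> {
  let timer: NodeJS.Timeout | undefined;
  const timeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms);
  });
  // Clear the timer either way so the process doesn't hang on it.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

Usage mirrors the text: `withTimeout(promptUser(), 5 * 60_000, "[No response]")` resolves with the user's answer or, after five minutes, with the short-circuit string.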
- ## 7. Skills Sync Edge Cases
-
- **Missing User Skills Directory:**
- - _The Edge Case:_ `~/.joone/skills/` doesn't exist on the user's machine. The sync crashes trying to walk a nonexistent path.
- - _Mitigation:_ `syncSkillsToSandbox()` checks `fs.existsSync()` before walking each skill directory and silently skips missing paths.
- **Skill Name Collision (Project vs. User):**
- - _The Edge Case:_ A user-level skill and a project-level skill have the same name. Both get synced to the sandbox, creating confusion.
- - _Mitigation:_ `SkillLoader.discoverSkills()` deduplicates by name with project-level priority. `syncSkillsToSandbox()` only uploads `source: "user"` skills since project-level skills are already inside `projectRoot`.
-
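The dedupe-with-project-priority rule can be sketched as follows. The `Skill` shape here is an assumption for illustration; the priority rule itself comes from the text:

```typescript
// Deduplicate skills by name; when a project-level and a user-level skill
// collide, the project-level one wins regardless of discovery order.
interface Skill {
  name: string;
  source: "project" | "user";
}

function dedupeSkills(skills: Skill[]): Skill[] {
  const byName = new Map<string, Skill>();
  for (const skill of skills) {
    const existing = byName.get(skill.name);
    if (!existing || (existing.source === "user" && skill.source === "project")) {
      byName.set(skill.name, skill);
    }
  }
  return [...byName.values()];
}
```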
- ## 8. Slash Command Edge Cases (M11)
-
- **Command Typos & Frustration:**
- - _The Edge Case:_ User types `/modle` instead of `/model` and the agent treats it as a prompt, wasting LLM tokens and failing to switch the model.
- - _Mitigation:_ Levenshtein distance check in `CommandRegistry`. If an unknown command is `< 3` edits away from a known command, the TUI intercepts it and suggests the correct command without calling the LLM.
- **State Mutation While Processing:**
- - _The Edge Case:_ User runs `/exit` or `/clear` while the agent is midway through generating a sequence of ToolCalls.
- - _Mitigation:_ App-level UI blocks input while `isProcessing === true`, so the commands are disabled.
- **Model Switch to Non-Existent Model:**
- - _The Edge Case:_ User runs `/model nonexistent`.
- - _Mitigation:_ The command validates the model string against `ConfigManager`'s available models and rejects it before updating internal state.
-
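The typo-suggestion mitigation can be sketched with the classic Levenshtein distance plus the `< 3` threshold from the text. `suggestCommand` is an illustrative wrapper, not the actual `CommandRegistry` API:

```typescript
// Standard dynamic-programming Levenshtein edit distance.
function levenshtein(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Suggest the closest known command when the typo is < 3 edits away,
// otherwise fall through (null) and treat the input normally.
function suggestCommand(input: string, known: string[]): string | null {
  let best: string | null = null;
  let bestDist = 3;
  for (const cmd of known) {
    const d = levenshtein(input, cmd);
    if (d < bestDist) {
      bestDist = d;
      best = cmd;
    }
  }
  return best;
}
```

`/modle` is two edits from `/model` (two substitutions for the swapped letters), so it falls under the threshold and triggers a suggestion instead of an LLM call.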
- ## 9. LLM-Powered Compaction Edge Cases (M12)
-
- **Compaction Data Loss (Amnesia 2.0):**
- - _The Edge Case:_ The LLM summarizes a 50-turn conversation but drops explicit file paths or tool choices, leaving the main agent blind when resuming.
- - _Mitigation:_ The built-in Compact Prompt explicitly mandates a structured format: `Files Modified`, `Decisions Made`, `Tools Used`. A handoff prompt (`[CONTEXT HANDOFF]`) is injected into the bottom of the history to glue the summary back to the agent's persona.
- **Double Compaction Fidelity Loss:**
- - _The Edge Case:_ A session exists so long it must be compacted twice. A "summary of a summary" loses critical resolution.
- - _Mitigation:_ `ConversationCompactor` detects prior summaries and includes them entirely in the eviction block, prompting the LLM to unify the old summary with the new evicted messages.
-
- ## 10. Sub-Agent Orchestration Edge Cases (M13)
-
- **The Sub-Agent Recursion Bomb:**
- - _The Edge Case:_ A sub-agent uses the `spawn_agent` tool to spawn another sub-agent, creating an infinite nesting loop.
- - _Mitigation:_ Hardcoded Depth-1 limit. Pre-configured sub-agents in `AgentRegistry` never include `spawn_agent` or `check_agent` in their allowed toolsets.
- **Async Resource Contention:**
- - _The Edge Case:_ The main agent loops over a directory and spawns 50 async `test_runner` agents concurrently.
- - _Mitigation:_ `SubAgentManager` maintains a hard cap of 3 concurrent async tasks. Further spawn requests are queued or rejected with a backpressure error tool response.
- **Stale Files in Sandbox:**
- - _The Edge Case:_ The main agent edits a file on the host, then immediately spawns a `bash` sub-agent. The sub-agent runs in the sandbox before the new host file is synced.
- - _Mitigation:_ The `SubAgentManager` shares the main harness's `FileSync` instance and always forces a `syncToSandbox()` pass _before_ the sub-agent takes its first step.
-
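The 3-task concurrency cap can be sketched as a tiny async semaphore (the queueing variant from the text; the class and method names are assumptions):

```typescript
// Caps the number of concurrently running async tasks; excess tasks wait in
// a FIFO queue until a running task completes.
class Semaphore {
  private active = 0;
  private waiters: (() => void)[] = [];

  constructor(private readonly limit = 3) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    if (this.active >= this.limit) {
      await new Promise<void>((resolve) => this.waiters.push(resolve));
    }
    this.active++;
    try {
      return await task();
    } finally {
      this.active--;
      this.waiters.shift()?.(); // wake exactly one queued task
    }
  }
}
```

Spawning 50 `test_runner` agents through `sem.run(...)` then means at most `limit` sandboxes are live at once; the rest queue instead of exhausting resources.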
- ## 11. Stability & Reliability Edge Cases (M14)
-
- **Context Window Overflows (Instant Death):**
- - _The Edge Case:_ Despite compaction thresholds, a single `read_file` returns a 120k-token string, instantly blowing past the 100% capacity mark. Compaction fails because the context is already overflowing.
- - _Mitigation:_ `ContextGuard` has a 95% "Emergency Truncation" threshold. Before hitting the API, if tokens > 95%, it _bypasses_ LLM compaction and brutally slices all but the last 4 messages, inserting a loud warning message directly into the stream, guaranteeing survival.
- **Process Death Serialization Tearing:**
- - _The Edge Case:_ The `AutoSave` triggers at the exact millisecond the user presses `Ctrl+C`. The Node process terminates while `fs.writeFileSync` is mid-chunk, corrupting the JSONL session file irreversibly.
- - _Mitigation:_ Atomic saves. `SessionStore.saveSession()` writes to an intermediate staging stream. On `process.on('SIGINT')`, a synchronous `forceSave()` is fired to cleanly flush state _before_ `process.exit(0)`.
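The atomic-save pattern above is the standard write-to-staging-then-rename idiom. A minimal sketch (the `.tmp` suffix and function name are assumptions; on POSIX filesystems `rename` within the same filesystem replaces the target atomically):

```typescript
import * as fs from "node:fs";

// Write the full payload to a staging file, then rename it over the live
// file. A crash mid-write can only corrupt the staging copy; the previous
// session file stays intact until the rename commits.
function atomicWriteFileSync(filePath: string, data: string): void {
  const staging = `${filePath}.tmp`;
  fs.writeFileSync(staging, data);
  fs.renameSync(staging, filePath);
}
```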
package/docs/03_initial_implementation_plan.md DELETED
@@ -1,66 +0,0 @@
- # Initial Implementation Plan
-
- ## Phase 1: Context Engine & Caching Layer
-
- Build a structured Prompt Builder that strictly enforces the Prefix Matching patterns so every task in a session enjoys a >90% cache hit rate.
-
- ```mermaid
- graph TD
- A[Base System Prompt] -->|Static Prefix| B
- B[Tool Schemas] -->|Static Prefix| C
- C[Project Memory e.g., README] -->|Project Prefix| D
- D[Session Context e.g., OS Info] -->|Session Prefix| E
- E[Conversation History] -->|Dynamic Appends| F[New User/Tool Message]
-
- style A fill:#1e4620,stroke:#2b662e,color:#fff
- style B fill:#1e4620,stroke:#2b662e,color:#fff
- style C fill:#1e4620,stroke:#2b662e,color:#fff
- style D fill:#2b465e,stroke:#3b6282,color:#fff
- style E fill:#4a3219,stroke:#664422,color:#fff
-
- subgraph Fully Cached Prefix
- A
- B
- C
- end
- ```
-
- ## Phase 2: Interoperable Tooling & Lazy Loading
-
- Implement tools as immutable objects for the session. Implement "Plan Mode" to alter agent rules without unloading tool schemas.
-
- - Define core tools: `read_file`, `write_file`, `bash_command`.
- - Implement dummy/stub tools for complex integrations.
- - Implement "Cache-Safe Forking" for compaction.
-
- ## Phase 3: The Middleware Harness
-
- Implement pre-completion checks and loop detection via a middleware pipeline.
-
- ```mermaid
- sequenceDiagram
- participant Agent as LLM Agent
- participant Harness as Execution Harness
- participant Middle as Middleware Pipeline
- participant Env as Environment (Bash/FS)
-
- Agent->>Harness: Request: Edit target_file.py
- Harness->>Middle: Emit: 'pre_tool_call'
- Middle-->>Harness: Check LoopDetection (Fail if > 4 tries)
- Harness->>Env: Execute Edit
- Env-->>Harness: Return File Diff
- Harness->>Agent: Send Tool Result
-
- Agent->>Harness: Request: Submit/Exit
- Harness->>Middle: Emit: 'pre_submit'
- Middle->>Harness: Inject 'PreCompletionChecklist' (Wait, did you run tests?)
- Harness->>Agent: System Reminder: "Please run tests to verify."
- Agent->>Harness: Request: Run `pytest`
- ```
-
- ## Phase 4: Tracing & Feedback Loop
-
- Build an automated pipeline that sends JSON traces of failed agent runs into an evaluation database.
-
- - Hook LLM API calls to save traces.
- - Implement `TraceAnalyzer` subagent to review failures.
package/docs/04_tech_stack_proposal.md DELETED
@@ -1,20 +0,0 @@
- # Tech Stack
-
- The technology stack has been finalized. We are moving forward with a combination of the strong typing of TypeScript and the robust AI orchestration ecosystem of LangChain.
-
- ## The Final Stack
-
- - **Language:** TypeScript (Node.js)
- - Provides end-to-end type safety, especially crucial for tool schemas (Zod) and avoiding runtime errors in the execution loop.
- - **Orchestration / LLM Framework:** LangChain.js / LangGraph.js
- - Using the TypeScript SDK for LangChain allows us to build complex, cyclical agent workflows (like Middlewares and self-correction loops) via LangGraph.
- - **Typing / Tool Schemas:** Zod
- - Seamless integration with LangChain for structural output parsing and strict tool definition.
- - **Tracing:** LangSmith
- - First-party integration with LangChain, providing deep visibility into token usage, prompt construction, and latency. Essential for debugging cache hit rates.
- - **CLI Framework (Optional):** Commander.js / Ink
- - To be used if we build a robust terminal interface for the agent.
-
- ## Why this combination?
-
- This marries the best of both originally proposed worlds. It gives us the frontend/backend interoperability and strict compile-time checks of TypeScript, while retaining the mature, graph-based agent orchestration and high-fidelity trace analysis typically dominated by Python's LangChain ecosystem.
package/docs/05_prd.md DELETED
@@ -1,87 +0,0 @@
- # Product Requirements Document (PRD)
-
- ## 1. Product Overview
-
- **Joone** is a CLI-based autonomous coding agent that leverages **Prompt Caching** and **Harness Engineering** to achieve high autonomy and robustness in complex coding tasks while minimizing token cost and latency. It executes all generated code inside **E2B sandboxed microVMs**, isolating the host machine from any destructive operations.
-
- ## 2. Target Audience
-
- - Software Engineers looking for an autonomous pair-programmer.
- - DevOps engineers looking to automate script fixes and verifications.
- - AI Researchers running benchmarks (e.g., Terminal Bench 2.0).
-
- ## 3. Core Features
-
- ### 3.1. CLI Interface & Provider Selection
-
- - **Installable CLI**: Packaged as an npm global binary (`npx joone` or `npm i -g joone`).
- - **Provider/Model Selection**: On first run (or via `joone config`), the user interactively selects their LLM provider and model. Stored at `~/.joone/config.json`.
- - **Supported Providers**: Anthropic, OpenAI, Google, Mistral, Groq, DeepSeek, Fireworks, Together AI, Ollama (local).
- - **Dynamic Provider Loading**: Provider packages are loaded on demand. If a package isn't installed, the CLI prints a helpful install command.
- - **Streaming Output**: Token-by-token streaming enabled by default for all providers. Tool calls are buffered until complete before execution.
-
- ### 3.2. API Key Security (Tiered)
-
- - **Tier 1 (Default)**: API keys stored in `~/.joone/config.json` with restrictive file permissions (`chmod 600`). Masked input during `joone config`.
- - **Tier 2 (Planned)**: OS Keychain integration (Windows Credential Manager / macOS Keychain) via `keytar`.
- - **Tier 3 (Planned)**: AES-256 encrypted config file with machine-derived key.
- - During onboarding, the user will eventually be able to choose their preferred security tier.
-
- ### 3.3. Cache-Optimized Context Engine
-
- - **Strict Prefix Ordering**: Separates static system instructions, tool definitions, project memory, and conversation history to align with LLM `cache_control` behaviors.
- - **`<system-reminder>` Injection**: Updates agent state natively via standard messages rather than system prompt overwrites, preserving the cache.
- - **Cache-Safe Compaction**: Forks and summarizes contexts seamlessly without full cache eviction.
-
- ### 3.4. Hybrid Sandbox Execution
-
- - **Architecture**: The agent uses a **Hybrid** model — file operations (`write_file`, `read_file`) run on the **host machine** so the user sees changes live in their IDE, while all **code execution** (`bash`, tests, scripts) runs inside an [E2B](https://e2b.dev) cloud microVM sandbox.
- - **File Sync Mechanism**: Before each sandbox execution, changed files are synced from host → sandbox.
- - _Tracking:_ Modifications are tracked via a "dirty paths" memory array. The `write_file` host tool explicitly marks paths as dirty upon successful write.
- - _Concurrency:_ Concurrent modifications are prevented by the `ExecutionHarness`, which executes agent tool calls sequentially and blocks the LLM loop until file I/O and sandbox syncs are fully complete.
- - _Conflict Resolution:_ The Host machine is the absolute source of truth. The sandbox filesystem is ephemeral and overwritten by the Host's dirty files before any command runs. Modifications made manually in the sandbox bypass the host and are lost upon destroy.
- - **Security**: The host machine is never exposed to agent-executed commands. Only file read/write touches the host.
-
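The dirty-path tracking in §3.4 can be sketched as follows. The class name and the `upload` callback are illustrative stand-ins for the real `FileSync`/sandbox API; the mechanism (mark on write, flush host → sandbox before execution, sequential uploads) comes from the text:

```typescript
// Tracks which host files the agent has modified since the last sandbox
// sync, and flushes only those files before the next command runs.
class DirtyPathTracker {
  private dirty = new Set<string>();

  // Called by the host-side write_file tool on every successful write.
  markDirty(filePath: string): void {
    this.dirty.add(filePath);
  }

  // Flush dirty files host -> sandbox sequentially, then clear the set.
  async syncToSandbox(upload: (filePath: string) => Promise<void>): Promise<string[]> {
    const paths = [...this.dirty];
    for (const p of paths) await upload(p);
    this.dirty.clear();
    return paths;
  }
}
```

Because the `Set` deduplicates, a file written five times between syncs is uploaded once; a second sync with no intervening writes uploads nothing, matching the "host is the source of truth" model.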
45
- ### 3.5. Middleware Harness
46
-
47
- - **Loop Detection (Anti-Doom Loop)**: Tracks agent action duplication and injects corrective context to break the loop.
48
- - **Pre-Completion Checklist**: Intercepts task submission to force a self-verification/testing phase.
49
- - **Guardrails for Scale**: Prevents loading oversized files (>1MB) entirely into memory; enforces chunked reads.
50
-
51
- ### 3.6. Lazy & Interoperable Tooling
52
-
53
- - **Immutable Tool Definition**: Prevents mid-session tool swapping to preserve cache.
54
- - **Tool Search**: Implements "stub" tools, allowing dynamic loading of complex tools only when actively requested.
55
-
56
- ### 3.7. Trace Analytics (V2)
57
-
58
- - Logs reasoning loops and tool execution traces to analyze points of failure.
59
- - Trace analyzer sub-agent that periodically reviews failures to suggest harness improvements.
60
-
61
- ## 4. Non-Functional Requirements
62
-
63
- - **Latency:** High cache hit rates (>80% for long sessions) leading to sub-second Time-To-First-Token.
64
- - **Cost:** Minimize redundant prefix token generation.
65
- - **Extensibility:** Middleware pipeline should make it trivial to add new guardrails.
66
- - **Development Process:** Strict Red-Green-Refactor TDD for all new features.
67
-
68
- ### 4.1 Error Handling & Degraded Modes
69
-
70
- - **Sandbox Failures:** The `SandboxManager` is architected with an `ISandboxWrapper` interface. If the primary cloud **E2B** sandbox fails to initialize (e.g., due to network drops, API outages, or invalid keys), the manager will automatically print a warning and gracefully degrade to a local **OpenSandbox** deployment (`localhost:8080`) to ensure execution continuity.
71
- - **LLM Failures (Planned):** If the remote LLM provider API goes down, the Model Factory should fall back seamlessly to a local Ollama instance if one is available.
72
-
73
- ### 4.2 Rate Limiting & Cost Controls (Planned)
74
-
75
- - **Session Budgets:** Users can configure a maximum per-session spend budget (e.g., $5.00). The `ExecutionHarness` tracks usage via the `SessionTracer` and forcefully halts the agent once the threshold is reached.
76
- - **Loop Circuit Breakers:** The `LoopDetectionMiddleware` acts as a behavioral rate limit, preventing the agent from infinitely burning tokens on failed bash commands.
77
-
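The budget check above reduces to a simple cost estimate compared against the configured cap. The sketch below is hypothetical: the per-million-token prices are placeholder values, and the function names are not joone's actual API.

```typescript
// Illustrative session-budget check; prices are placeholder assumptions.
interface Usage {
  inputTokens: number;
  outputTokens: number;
}

function sessionCostUsd(usage: Usage, inPricePerM: number, outPricePerM: number): number {
  return (usage.inputTokens / 1e6) * inPricePerM + (usage.outputTokens / 1e6) * outPricePerM;
}

// The harness would call this after each turn and halt the agent on true.
function shouldHalt(usage: Usage, budgetUsd: number, inPricePerM = 3, outPricePerM = 15): boolean {
  return sessionCostUsd(usage, inPricePerM, outPricePerM) >= budgetUsd;
}
```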
78
- ### 4.3 Sandbox Authentication Management
79
-
80
- - **E2B:** Authentication is managed via the `E2B_API_KEY` environment variable or a key securely stored in `~/.joone/config.json`.
81
- - **OpenSandbox (Fallback):** Handled via `OPENSANDBOX_API_KEY` and `OPENSANDBOX_DOMAIN` properties in the config, falling back to a local Docker `localhost:8080` endpoint.
82
-
83
- ### 4.4 Telemetry, Privacy & Data Retention
84
-
85
- - **Local-First Tracing:** All session reasoning and tool execution traces are logged to `~/.joone/traces/` as local JSON files for offline analysis using `TraceAnalyzer`.
86
- - **Data Retention:** Local traces are automatically rotated and deleted after 30 days to prevent unbounded disk usage.
87
- - **Privacy:** Project source code is _never_ sent to third-party telemetry providers unless the user intentionally enables the optional `LANGSMITH_API_KEY` for advanced dashboard debugging.
@@ -1,72 +0,0 @@
1
- # User Stories
2
-
3
- This document contains the foundational user stories for the Joone agent, organized by Epic. It does not include exhaustive acceptance criteria, but rather serves as a high-level requirements tracker for the core features.
4
-
5
- ## Epic 1: CLI & Configuration
6
-
7
- - **US 1.1**: As a user, I want to install joone globally via `npm i -g joone` and run it with `joone` in any project directory.
8
- - **US 1.2**: As a user, I want to select my preferred LLM provider and model on first run or via `joone config`, choosing from at least 9 providers (Anthropic, OpenAI, Google, Mistral, Groq, DeepSeek, Fireworks, Together AI, Ollama).
9
- - **US 1.3**: As a user, I want my API key collected via masked interactive input during `joone config`, so I never have to manually create `.env` files.
10
- - **US 1.4**: As a user, I want my preferences stored at `~/.joone/config.json` with restrictive file permissions, so I don't re-enter them every session.
11
- - **US 1.5**: As a user, I want the CLI to tell me which provider package to install if it's missing (e.g., `Run: npm install @langchain/groq`).
12
- - **US 1.6** _(Planned)_: As a security-conscious user, I want to choose during onboarding whether to store my API key in a plain config file, OS Keychain, or encrypted config.
13
-
14
- ## Epic 2: Streaming & Output
15
-
16
- - **US 2.1**: As a user, I want to see the agent's response stream token-by-token in my terminal, not wait for the entire response to finish.
17
- - **US 2.2**: As the system, I want to buffer tool call JSON during streaming until the full call is received, then execute it.
18
- - **US 2.3**: As a user, I want the option to disable streaming via `joone config` or a CLI flag (`--no-stream`).
19
-
20
- ## Epic 3: The Context & Prompt Layer
21
-
22
- - **US 3.1**: As a developer, I want the system prompt to be strictly divided into static and dynamic sections, so that I maximize prompt caching and reduce costs.
23
- - **US 3.2**: As the system, I need to inject state updates (like time or file changes) into the conversation history as simulated messages (`<system-reminder>`), so I avoid invalidating the static prefix cache.
24
- - **US 3.3**: As the system, when the context window reaches 90% capacity, I want to execute a cache-safe compaction that summarizes early history while keeping the system prompt matching the parent thread.
25
-
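US 3.2 can be sketched as a small helper that wraps a state update in a `<system-reminder>` tag and appends it as an ordinary conversation message, leaving the static system prompt (and its cache) untouched. The message shape below is an assumption for illustration.

```typescript
// Illustrative cache-safe state injection: the update is appended to history
// as a user-role message rather than edited into the system prompt.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function makeSystemReminder(update: string): ChatMessage {
  return { role: "user", content: `<system-reminder>${update}</system-reminder>` };
}
```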
26
- ## Epic 4: Hybrid Sandbox Execution
27
-
28
- - **US 4.1**: As a user, I want `write_file` and `read_file` to operate on my host filesystem, so I can see the agent's code changes in my IDE in real-time.
29
- - **US 4.4**: As the system, I want to create a new E2B sandbox at the start of each agent session and destroy it when the session ends or times out, so that each session has a clean isolated environment and resources are properly released.
30
- - **US 4.5**: As a developer, I want the tool router to automatically determine whether a tool runs on the host or in the sandbox based on tool type.
31
-
32
- ## Epic 5: Tooling & Lazy Loading
33
-
34
- - **US 5.1**: As an agent, I want access to core tools (`read_file`, `write_file`, `run_bash_command`) defined statically at the beginning of the session.
35
- - **US 5.2**: As an agent, I want to use a "Search Tools" endpoint to learn about complex or specific tools, rather than having all 50+ tool schemas loaded simultaneously into my context window.
36
- - **US 5.3**: As a developer, I want guardrails on `read_file` so the agent cannot accidentally load a 10MB file into the context window and blind itself.
37
-
38
- ## Epic 6: Middleware Guards & Execution Loops
39
-
40
- - **US 6.1**: As a developer, I want a `LoopDetectionMiddleware` that counts how many consecutive times an agent has failed a specific action.
41
- - **US 6.2**: As an agent stuck in a loop, I want the system to interrupt me and tell me to reconsider my approach, so I don't waste tokens repeating a failure.
42
- - **US 6.3**: As an agent trying to finish a task, I want a `PreCompletionMiddleware` to ask me if I have run tests. If I haven't, it should block completion and ask me to run verifications.
43
- - **US 6.4**: As the system, I want to parse test exit codes; if a test fails (`exit 1`), I want to block the agent from declaring the task "Done" unless a max retry limit is reached.
44
-
- ## Epic 7: Trace Analytics
-
45
- - **US 7.1**: As an operator, I want every agent decision, tool call, and token metric logged to a standard trace format so I can monitor cache hit rates.
46
- - **US 7.2**: As an operator, I want a script that can read failed traces and use an LLM to automatically summarize _why_ the agent failed tasks, allowing me to refine the harness.
47
-
48
- ## Epic 8: TUI Slash Commands (M11)
49
-
50
- - **US 8.1**: As a user, I want to type `/help` or `/?` to see a list of all available commands without making an LLM call.
51
- - **US 8.2**: As a user, I want to switch models mid-session securely by typing `/model <name>`.
52
- - **US 8.3**: As a user with a bloated history context, I want to type `/compact` to manually force a context summarization.
53
- - **US 8.4**: As an error-prone user, if I type `/cls` instead of `/clear`, I want the UI to suggest `/clear` via Levenshtein-distance matching instead of sending garbage tokens to the API.
54
-
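US 8.4 can be sketched with a standard Levenshtein distance plus a common-prefix tie-break (since `/cls` is equidistant from several commands, a bare edit distance alone is not enough). The threshold of 3 and the tie-break rule are assumptions, not joone's documented behavior.

```typescript
// Standard dynamic-programming Levenshtein edit distance.
function levenshtein(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost);
    }
  }
  return dp[a.length][b.length];
}

function prefixLen(a: string, b: string): number {
  let i = 0;
  while (i < a.length && i < b.length && a[i] === b[i]) i++;
  return i;
}

// Suggest the closest command within maxDistance edits; ties are broken by
// the longest shared prefix with the user's input.
function suggestCommand(input: string, commands: string[], maxDistance = 3): string | null {
  let best: string | null = null;
  let bestDist = maxDistance + 1;
  let bestPrefix = -1;
  for (const cmd of commands) {
    const d = levenshtein(input, cmd);
    const p = prefixLen(input, cmd);
    if (d < bestDist || (d === bestDist && p > bestPrefix)) {
      bestDist = d;
      bestPrefix = p;
      best = cmd;
    }
  }
  return best;
}
```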
55
- ## Epic 9: LLM-Powered Compaction (M12)
56
-
57
- - **US 9.1**: As an agent managing a huge conversation history, I want to delegate summarization of my older messages to an LLM, so the resulting summary is precise, preserving file paths and tool outcomes perfectly.
58
- - **US 9.2**: As the system, I want to automatically select a cheaper, faster LLM model (like `gpt-4o-mini` instead of `gpt-4o`) to perform the background compaction, saving the user money.
59
- - **US 9.3**: As a resumed agent, I want a seamless Handoff Prompt injected directly beneath the compaction summary, so I instantly understand my persona and context haven't broken.
60
-
61
- ## Epic 10: Sub-Agent Orchestration (M13)
62
-
63
- - **US 10.1**: As the main reasoning agent, I want the ability to spawn named "sub-agents" to handle specialized tasks (e.g., executing scripts, analyzing directories) so I don't clutter my own context window.
64
- - **US 10.2**: As the main agent, I want to spawn certain sub-agents asynchronously, allowing me to continue reasoning or writing files while the sub-agent scans tests in the background.
65
- - **US 10.3**: As an orchestrator, I want hard limitations (a Depth-1 limit) that strictly prevent a sub-agent from accidentally spawning another sub-agent ad infinitum.
66
-
67
- ## Epic 11: Stability & Reliability (M14)
68
-
69
- - **US 11.1**: As the core engine, I want a proactive `ContextGuard` that estimates API token payloads before sending the request to the provider, automatically triggering compaction at 80% usage.
70
- - **US 11.2**: As the core engine, I want an absolute Emergency Truncation trap door at 95% capacity to prevent immediate process death when compaction isn't fast enough.
71
- - **US 11.3**: As a user working on a long-running complex task, I want the `AutoSave` feature to quietly save my `.jsonl` session file atomically in the background every few turns.
72
- - **US 11.4**: As a user, when I hit `Ctrl+C` in my terminal, I want the CLI to intercept the shutdown signal, force a final instantaneous save, and clean up the sandbox before exiting.
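The two-tier guard in US 11.1 and US 11.2 reduces to a threshold check on estimated usage: compaction at 80%, emergency truncation at 95%. The function and action names below are illustrative, not joone's actual `ContextGuard` API.

```typescript
// Sketch of the two-tier context guard: 80% triggers compaction,
// 95% triggers the emergency truncation trap door.
type GuardAction = "ok" | "compact" | "emergency-truncate";

function guardAction(estimatedTokens: number, contextWindow: number): GuardAction {
  const usage = estimatedTokens / contextWindow;
  if (usage >= 0.95) return "emergency-truncate";
  if (usage >= 0.8) return "compact";
  return "ok";
}
```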
@@ -1,138 +0,0 @@
1
- # System Architecture
2
-
3
- ## High-Level Architecture Overview
4
-
5
- The system operates as a CLI-based REPL (Read-Eval-Print Loop) Agent Wrapper. The user runs `joone` in their project directory. The LLM is nested within an "Execution Harness" that mediates all inputs, actions, and memory. Responses are **streamed** token-by-token.
6
-
7
- ### Hybrid Sandbox Model
8
-
9
- Joone uses a **Hybrid** architecture for safety and developer experience:
10
-
11
- - **File operations** (`write_file`, `read_file`) run on the **host machine**, so the user sees changes in real-time in their IDE.
12
- - **Code execution** (`bash`, `npm test`, scripts) runs inside an **E2B sandboxed microVM**, protecting the host from destructive commands.
13
- - A **File Sync** layer mirrors changed files from host → sandbox before each execution.
14
-
15
- ```
16
- ┌─────────────────────────┐ ┌──────────────────────────┐
17
- │ HOST MACHINE │ sync │ E2B SANDBOX │
18
- │ │ ───────► │ │
19
- │ write_file ──► disk │ │ /workspace/ (mirror) │
20
- │ read_file ◄── disk │ │ │
21
- │ │ │ bash, npm test, scripts │
22
- │ User sees changes │ │ run here (isolated) │
23
- │ live in their IDE │ │ │
24
- └─────────────────────────┘ └──────────────────────────┘
25
- ```
26
-
27
- ## System Diagram
28
-
29
- ```mermaid
30
- graph TD
31
- Client["User CLI (joone)"] -->|Task Input| Config
32
- Config["Config Manager (~/.joone/config.json)"] -->|Provider + Key| Factory
33
- Factory[Model Factory] -->|BaseChatModel| MainLoop
34
-
35
- subgraph Agent Execution Harness
36
- MainLoop[Execution Engine]
37
- State[Conversation State Manager]
38
- PromptBuilder[Cache-Oriented Prompt Builder]
39
- StreamHandler[Stream Handler]
40
-
41
- State --> PromptBuilder
42
- MainLoop --> PromptBuilder
43
- PromptBuilder --> LLM((LLM API))
44
- LLM -->|Streamed Chunks| StreamHandler
45
- StreamHandler -->|Complete Tool Call| Middlewares
46
- StreamHandler -->|Text Tokens| Terminal[Terminal Output]
47
- end
48
-
49
- subgraph Middleware Pipeline
50
- Middlewares{Middleware Orchestrator}
51
- LoopDet[Loop Detection]
52
- PreComp[Pre-Completion Check]
53
- Guard[File Size Guardrails]
54
-
55
- Middlewares --> LoopDet
56
- Middlewares --> PreComp
57
- Middlewares --> Guard
58
- end
59
-
60
- subgraph "Tool Routing (Hybrid)"
61
- Middlewares -->|Approved Tool Call| Router{Tool Router}
62
- Router -->|"write_file, read_file"| HostFS["Host Filesystem (Node.js fs)"]
63
- Router -->|"bash, test, install"| Sync[File Sync Layer]
64
- Sync -->|Upload changed files| Sandbox["E2B MicroVM (Ubuntu)"]
65
- Sandbox -->|stdout/stderr| MainLoop
66
- HostFS -->|File content| MainLoop
67
- end
68
- ```
69
-
70
- ## Component Breakdown
71
-
72
- 1. **CLI & Config Layer** (`src/cli/`):
73
- - `index.ts`: Parses user commands (`joone`, `joone config`) via Commander.js.
74
- - `config.ts`: Reads/writes `~/.joone/config.json`. Stores provider, model, API key (plain text + `chmod 600`), streaming preference, and temperature.
75
- - `modelFactory.ts`: Factory that dynamically imports the correct LangChain provider package and returns a `BaseChatModel`. Supports 9+ providers.
76
-
77
- 2. **State Manager & Prompt Builder** (`src/core/promptBuilder.ts`):
78
- - Maintains the prefix-match invariant: compiles the static system prompt, appends project variables once, and thereafter only appends new messages.
79
-
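The prefix-match invariant can be made concrete with a small sketch: the static prefix is frozen at construction and later turns are strictly append-only, so the provider's prefix cache keeps matching. The class name and string-based message model are illustrative simplifications.

```typescript
// Illustrative prefix-stable prompt builder (not joone's actual class).
class CacheOrientedPromptBuilder {
  private readonly prefix: string; // frozen at construction, never mutated
  private messages: string[] = [];

  constructor(systemPrompt: string, projectVars: Record<string, string>) {
    // Project variables are compiled into the prefix exactly once.
    const vars = Object.entries(projectVars)
      .map(([k, v]) => `${k}=${v}`)
      .join("\n");
    this.prefix = `${systemPrompt}\n\n${vars}`;
  }

  append(message: string): void {
    this.messages.push(message); // append-only: the prefix stays byte-identical
  }

  build(): string {
    return [this.prefix, ...this.messages].join("\n");
  }
}
```

Every `build()` output starts with the previous one, which is exactly the property provider-side prompt caches reward.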
80
- 3. **Execution Engine** (`src/core/agentLoop.ts`):
81
- - Calls the LLM via `.stream()` (default) or `.invoke()`.
82
- - The **Stream Handler** prints text tokens to stdout in real-time and buffers tool call JSON chunks until complete.
83
- - Routes completed tool calls to the Middleware pipeline.
84
-
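The tool-call buffering step can be sketched as an accumulator that only yields a result once the buffered fragments parse as complete JSON. This assumes tool arguments stream as JSON object fragments; the class name is illustrative.

```typescript
// Illustrative tool-call buffer: accumulate JSON fragments during streaming
// and only hand back arguments once the buffer parses. Assumes tool args are
// JSON objects (a bare number fragment could parse prematurely).
class ToolCallBuffer {
  private buffer = "";

  // Returns the parsed arguments once the JSON is complete, else null.
  push(fragment: string): unknown {
    this.buffer += fragment;
    try {
      const parsed = JSON.parse(this.buffer);
      this.buffer = ""; // ready for the next tool call
      return parsed;
    } catch {
      return null; // still incomplete
    }
  }
}
```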
85
- 4. **Middleware Orchestrator** (`src/middleware/`):
86
- - Implements the Observer pattern over the `on_tool_call` and `on_submit` events.
87
- - Operates on a structured `ToolResult` interface (`{ content, metadata, isError }`) to robustly pass execution metadata (like process exit codes) through the pipeline without brittle string parsing.
88
- - Can _intercept_ or _modify_ a tool request before it hits the tools.
89
- - Can _inject_ `<system-reminder>` messages back to the Execution Engine.
90
-
91
- 5. **Tool Router & Hybrid Execution**:
92
- - **Host tools** (`write_file`, `read_file`): Execute directly on the host via Node.js `fs`. Changes appear instantly in the user's IDE.
93
- - **Sandbox tools** (`bash`, `run_tests`, `install_deps`): Route through the File Sync layer → E2B sandbox.
94
- - The split is determined by tool type, not configuration.
95
-
96
- 6. **File Sync Layer** (`src/sandbox/sync.ts`):
97
- - Tracks which files have changed on the host since the last sandbox sync.
98
- - Before each sandbox execution, uploads only the changed files to the sandbox's `/workspace/` directory.
99
- - Strategies: **upload-on-execute** (default) or **watch & mirror** (future).
100
-
101
- 7. **E2B Sandbox** (`src/sandbox/`):
102
- - Each agent session initializes an E2B cloud sandbox via the `e2b` TypeScript SDK.
103
- - All bash commands and code execution run via `sandbox.commands.run()`.
104
- - The sandbox is destroyed on session end or timeout.
105
- - The host machine is **never** exposed to agent-executed commands.
106
-
107
- ## Tool Routing Table
108
-
109
- | Tool | Runs On | Why |
110
- | ---------------------- | ----------- | ----------------------------------------- |
111
- | `write_file` | **Host** | User sees changes in IDE instantly |
112
- | `read_file` | **Host** | Reads the real project files |
113
- | `run_bash_command` | **Sandbox** | Protects host from destructive commands |
114
- | `run_tests` | **Sandbox** | Tests may have side-effects |
115
- | `install_dependencies` | **Sandbox** | npm install can execute arbitrary scripts |
116
- | `search_tools` | **Host** | Registry lookup, no execution |
117
-
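The table above implies a routing function keyed purely on tool name. A minimal sketch, with the tool sets taken from the table and the safe-by-default fallback as an assumption:

```typescript
// Sketch of type-based tool routing; tool names mirror the routing table.
const HOST_TOOLS = new Set(["write_file", "read_file", "search_tools"]);
const SANDBOX_TOOLS = new Set(["run_bash_command", "run_tests", "install_dependencies"]);

function routeTool(name: string): "host" | "sandbox" {
  if (HOST_TOOLS.has(name)) return "host";
  if (SANDBOX_TOOLS.has(name)) return "sandbox";
  // Assumed fallback: unknown tools go to the sandbox, since execution there
  // cannot harm the host.
  return "sandbox";
}
```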
118
- ## Supported LLM Providers
119
-
120
- | Provider | Package | Dynamic Import |
121
- | -------------- | ------------------------- | -------------------------- |
122
- | Anthropic | `@langchain/anthropic` | `ChatAnthropic` |
123
- | OpenAI | `@langchain/openai` | `ChatOpenAI` |
124
- | Google | `@langchain/google-genai` | `ChatGoogleGenerativeAI` |
125
- | Mistral | `@langchain/mistralai` | `ChatMistralAI` |
126
- | Groq | `@langchain/groq` | `ChatGroq` |
127
- | DeepSeek | OpenAI-compatible | `ChatOpenAI` with base URL |
128
- | Fireworks | `@langchain/community` | `ChatFireworks` |
129
- | Together AI | `@langchain/community` | `ChatTogetherAI` |
130
- | Ollama (Local) | `@langchain/ollama` | `ChatOllama` |
131
-
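The provider table doubles as the data for the US 1.5 install hint: when a dynamic import fails, the CLI can look up the missing package and print the exact `npm install` command. The mapping below mirrors the table; the helper names and provider keys are illustrative.

```typescript
// Provider → package mapping taken from the table above.
const PROVIDER_PACKAGES: Record<string, string> = {
  anthropic: "@langchain/anthropic",
  openai: "@langchain/openai",
  google: "@langchain/google-genai",
  mistral: "@langchain/mistralai",
  groq: "@langchain/groq",
  deepseek: "@langchain/openai", // OpenAI-compatible, custom base URL
  fireworks: "@langchain/community",
  together: "@langchain/community",
  ollama: "@langchain/ollama",
};

// Printed when the dynamic import of a provider package fails.
function missingPackageHint(provider: string): string {
  const pkg = PROVIDER_PACKAGES[provider];
  if (!pkg) return `Unknown provider: ${provider}`;
  return `Run: npm install ${pkg}`;
}
```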
132
- ## Security Roadmap
133
-
134
- | Tier | Method | Status |
135
- | ---- | -------------------------- | -------------------- |
136
- | 1 | Plain config + `chmod 600` | **Active (Default)** |
137
- | 2 | OS Keychain (`keytar`) | Planned |
138
- | 3 | AES-256 encrypted config | Planned |