mini-coder 0.0.18 → 0.0.20

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,158 @@
+ # MINI-CODER(1)
+
+ ## NAME
+ **mini-coder** (executable: `mc`) - A small, fast CLI coding agent built for developers.
+
+ ## SYNOPSIS
+ `mc [options] [prompt]`
+
+ ## DESCRIPTION
+ **mini-coder** is a developer-focused CLI coding agent. It prioritizes developer flow with no slow startup, no clunky GUI, and no vendor lock-in. It uses a minimalist terminal UI restricted to 16 ANSI colors to inherit the user's terminal theme, and is built entirely on Bun.js for maximum performance.
+
+ ## OPTIONS
+ **-m, --model <id>**
+ : Specify the model to use (e.g., `zen/claude-sonnet-4-6`).
+
+ **-c, --continue**
+ : Continue the most recent session.
+
+ **-r, --resume <id>**
+ : Resume a specific session by its ID.
+
+ **-l, --list**
+ : List recent sessions.
+
+ **--cwd <path>**
+ : Set the working directory (defaults to current directory).
+
+ **-h, --help**
+ : Display help information.
+
+ **[prompt]**
+ : Optional one-shot prompt text before entering interactive mode.
+
+ ## INTERACTIVE COMMANDS
+ Inside the interactive session, the following slash commands are available:
+
+ **/model**
+ : List all available models, indicating free models and context sizes.
+
+ **/model <id>**
+ : Switch to a specific model.
+
+ **/model effort <low|medium|high|xhigh|off>**
+ : Configure reasoning effort levels for models that support it.
+
+ **/reasoning [on|off]**
+ : Toggle the display of the model's reasoning/thought process.
+
+ **/context prune <off|balanced|aggressive>**
+ : Configure context window pruning strategies.
+
+ **/context cap <off|bytes|kb>**
+ : Set a hard payload cap size for tool results to avoid blowing out context.
+
+ **/cache <on|off>**
+ : Toggle prompt caching globally.
+
+ **/cache openai <in_memory|24h>**
+ : Set OpenAI prompt cache retention policies.
+
+ **/cache gemini <off|cachedContents/...>**
+ : Attach Google Gemini cached content.
+
+ **/plan**
+ : Toggle read-only planning mode.
+
+ **/ralph**
+ : Toggle autonomous execution looping.
+
+ **/undo**
+ : Revert the last turn and restore files.
+
+ **/new**
+ : Clear context and start a fresh session.
+
+ **/mcp list**
+ : List configured MCP servers.
+
+ **/mcp add <name> http <url>**
+ : Add an MCP server over HTTP.
+
+ **/mcp add <name> stdio <cmd> [args...]**
+ : Add an MCP server over stdio.
+
+ **/mcp remove <name>** (or **rm**)
+ : Remove an MCP server.
+
+ **/agent [name]**
+ : Set or clear the active primary custom agent.
+
+ **/help**
+ : Display command help.
+
+ **/exit, /quit, /q**
+ : Leave the session.
+
+ ## INLINE FEATURES
+ **Shell Integration**
+ : Prefix a prompt with `!` to run a shell command inline and feed its output directly into the context.
+
+ **File & Agent Referencing**
+ : Prefix words with `@` to reference files, custom agents, or skills within prompts (supports tab completion).
+
+ ## BUILT-IN TOOLS
+ The agent has access to the following tools:
+ * **glob**: Discover files by glob pattern across the project.
+ * **grep**: Search file contents using regular expressions.
+ * **read**: Read file contents with line-range pagination support.
+ * **create**: Write a new file or completely overwrite an existing one.
+ * **replace**: Replace or delete targeted lines using hashline anchors.
+ * **insert**: Insert new lines before/after an anchor without replacing existing content.
+ * **shell**: Execute bash commands and capture output.
+ * **subagent**: Spawn a focused mini-agent with a prompt.
+ * **webSearch**: Search the internet (requires EXA key).
+ * **webContent**: Fetch full page content from a URL (requires EXA key).
+
+ ## ENVIRONMENT
+ **OPENCODE_API_KEY**
+ : OpenCode Zen API key (recommended provider).
+
+ **ANTHROPIC_API_KEY**
+ : Direct Anthropic API key.
+
+ **OPENAI_API_KEY**
+ : Direct OpenAI API key.
+
+ **GOOGLE_API_KEY** (or **GEMINI_API_KEY**)
+ : Direct Google Gemini API key.
+
+ **OLLAMA_BASE_URL**
+ : Ollama local base URL (defaults to `http://localhost:11434`).
+
+ **EXA_API_KEY**
+ : Enables built-in `webSearch` and `webContent` tools.
+
+ ## FILES & DIRECTORIES
+ **~/.config/mini-coder/**
+ : Application data directory. Contains `sessions.db` (SQLite database for session history, tool snapshots, MCP server configs, and model metadata), `api.log`, and `errors.log`.
+
+ **.agents/ or .claude/ (Local or Global in ~/)**
+ : Configuration directories for advanced features:
+ * **commands/*.md**: Custom slash commands.
+ * **agents/*.md**: Custom behavioral wrappers or subagents.
+ * **skills/<name>/SKILL.md**: Isolated context/instruction snippets.
+ * **hooks/post-<tool>**: Executable scripts triggered upon tool execution.
+
+ **AGENTS.md / CLAUDE.md**
+ : Auto-loaded system context files for project-specific instructions.
+
+ ## CORE FEATURES & ARCHITECTURE
+ * **Multi-Provider LLM Routing**: Automatically discovers API keys to route to OpenCode (Zen), Anthropic, OpenAI, Google/Gemini, or local Ollama instances.
+ * **Session Memory**: Persists conversation history in a local SQLite database, allowing users to resume past sessions effortlessly.
+ * **Subagent Delegation**: Includes a tool to spawn parallel instances of itself to tackle independent subtasks simultaneously (up to 10 levels deep).
+ * **Autonomous Mode (Ralph)**: An autonomous looping mode that runs tasks in an isolated context loop (up to 20 iterations) until completion.
+ * **Plan Mode**: A read-only thinking mode utilizing read tools + MCP, safely analyzing code without making mutations or executing shell commands.
+ * **Model Context Protocol (MCP)**: Native support for connecting external tools via MCP servers over HTTP or stdio.
+ * **Prompt Caching**: Configurable caching behaviors for supported providers (OpenAI, Gemini).
+ * **Undo Functionality**: Roll back the last conversation turn, cleanly restoring previous file states and git history via snapshots.
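The `hooks/post-<tool>` entry above can be illustrated with a small script. This is a hedged sketch: the hook's invocation interface (arguments arriving as positional parameters, and the `HOOK_LOG` variable) is assumed for illustration and is not specified by this man page.

```shell
#!/usr/bin/env bash
# Hypothetical .agents/hooks/post-shell script: log each shell-tool run.
# ASSUMPTION: the tool's command line arrives as "$@" and HOOK_LOG names
# the log file; the real hook interface is not documented here.
set -euo pipefail
log="${HOOK_LOG:-/tmp/mc-hooks.log}"
printf '%s post-shell: %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "${*:-}" >> "$log"
```

Dropped into `.agents/hooks/post-shell` and marked executable, a script like this would run after each `shell` tool call.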
package/docs/skills.md CHANGED
@@ -1,28 +1,48 @@
  # Skills
 
- A skill is a reusable instruction file injected inline into your prompt.
- Use `@skill-name` to load it — the content is inserted into the message
- before it's sent to the LLM.
+ Skills are reusable instruction files discovered automatically from local and global directories.
 
- > **Skills are never auto-loaded.** They must be explicitly referenced
- > with `@skill-name` in your prompt. Nothing is injected automatically.
+ - The model sees **skill metadata only** by default (name, description, source).
+ - Full `SKILL.md` content is loaded **on demand**:
+   - when explicitly requested with the runtime skill tools (`listSkills` / `readSkill`), or
+   - when you reference `@skill-name` in your prompt.
 
- ## Where to put them
+ ## Discovery locations
 
- Each skill is a folder containing a `SKILL.md`:
+ Skills live in folders containing `SKILL.md`:
 
  | Location | Scope |
  |---|---|
- | `.agents/skills/<name>/SKILL.md` | Current repo only |
- | `~/.agents/skills/<name>/SKILL.md` | All projects (global) |
- | `.claude/skills/<name>/SKILL.md` | Current repo only (Claude-compatible) |
- | `~/.claude/skills/<name>/SKILL.md` | All projects (global, Claude-compatible) |
+ | `.agents/skills/<name>/SKILL.md` | Local |
+ | `.claude/skills/<name>/SKILL.md` | Local (Claude-compatible) |
+ | `~/.agents/skills/<name>/SKILL.md` | Global |
+ | `~/.claude/skills/<name>/SKILL.md` | Global (Claude-compatible) |
 
- Local skills override global ones with the same name. At the same scope, `.agents` wins over `.claude`.
+ Local discovery walks up from the current working directory to the git worktree root.
 
- ## Create a skill
+ ## Precedence rules
+
+ If multiple skills share the same `name`, precedence is deterministic:
+
+ 1. Nearest local directory wins over farther ancestor directories.
+ 2. Any local skill wins over global.
+ 3. At the same scope/path level, `.agents` wins over `.claude`.
+
+ ## Frontmatter validation
+
+ `SKILL.md` frontmatter must include:
 
- The folder name becomes the skill name (unless overridden by `name:` in frontmatter).
+ - `name` (required)
+ - `description` (required)
+
+ `name` constraints:
+
+ - lowercase alphanumeric and hyphen format (`^[a-z0-9]+(?:-[a-z0-9]+)*$`)
+ - 1–64 characters
+
+ Invalid skills are skipped with warnings. Unknown frontmatter fields are allowed.
+
+ ## Create a skill
 
  `.agents/skills/conventional-commits/SKILL.md`:
 
@@ -34,40 +54,25 @@ description: Conventional commit message format rules
 
  # Conventional Commits
 
- All commit messages must follow this format:
-
- <type>(<scope>): <short summary>
-
- Types: feat, fix, docs, refactor, test, chore
- - Summary is lowercase, no period at the end
- - Breaking changes: add `!` after type, e.g. `feat!:`
- - Body is optional, wrapped at 72 chars
+ Use:
+ <type>(<scope>): <short summary>
  ```
 
- Then in the REPL:
+ ## Use a skill explicitly
 
- ```
+ ```text
  @conventional-commits write a commit message for my staged changes
  ```
 
- The skill content is wrapped in `<skill name="…">…</skill>` tags and
- included in the message sent to the LLM.
-
- ## Frontmatter fields
-
- | Field | Required | Description |
- |---|---|---|
- | `name` | No | Skill name for `@` reference. Defaults to folder name. |
- | `description` | No | Shown in `/help`. Defaults to name. |
-
- ## Tab completion
-
- Type `@` and press `Tab` to autocomplete skill names alongside agents and files.
-
- ## Listing skills
+ `@skill-name` injects the raw skill body wrapped as:
 
+ ```xml
+ <skill name="conventional-commits">
+ ...
+ </skill>
  ```
- /help
- ```
 
- Skills are listed in yellow, tagged `(local)` or `(global)`.
+ ## Tab completion and help
+
+ - Type `@` then `Tab` to complete skill names.
+ - Run `/help` to list discovered skills with `(local)` / `(global)` tags.
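The frontmatter rules above can be exercised by hand. A minimal sketch, assuming a made-up `review-checklist` skill; note that `grep -E` does not support `(?:...)` non-capturing groups, so the documented pattern is rewritten with a plain group that matches the same strings:

```shell
# Create a hypothetical local skill (name, description, and body are invented).
mkdir -p /tmp/skill-demo/.agents/skills/review-checklist
cat > /tmp/skill-demo/.agents/skills/review-checklist/SKILL.md <<'EOF'
---
name: review-checklist
description: Checklist to apply when reviewing a pull request
---

# Review Checklist
- Tests updated?
- Docs updated?
EOF

# Validate the name against the documented constraints:
# pattern ^[a-z0-9]+(?:-[a-z0-9]+)*$ and length 1-64 characters.
name=review-checklist
echo "$name" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)*$' \
  && [ "${#name}" -ge 1 ] && [ "${#name}" -le 64 ] \
  && echo "valid: $name"
```

A name like `Review_Checklist` would fail the pattern check and, per the docs, the skill would be skipped with a warning.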
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "mini-coder",
-   "version": "0.0.18",
+   "version": "0.0.20",
    "description": "A small, fast CLI coding agent",
    "module": "src/index.ts",
    "type": "module",
@@ -19,12 +19,12 @@
      "jscpd": "jscpd src"
    },
    "dependencies": {
-     "@ai-sdk/anthropic": "^3.0.58",
-     "@ai-sdk/google": "^3.0.43",
-     "@ai-sdk/openai": "^3.0.41",
-     "@ai-sdk/openai-compatible": "^2.0.35",
+     "@ai-sdk/anthropic": "^3.0.60",
+     "@ai-sdk/google": "^3.0.51",
+     "@ai-sdk/openai": "^3.0.45",
+     "@ai-sdk/openai-compatible": "^2.0.36",
      "@modelcontextprotocol/sdk": "^1.27.1",
-     "ai": "^6.0.116",
+     "ai": "^6.0.127",
      "ignore": "^7.0.5",
      "yoctocolors": "^2.1.2",
      "zod": "^4.3.6"
@@ -1,68 +0,0 @@
- # ChatGPT/Codex subscription auth notes
-
- mini-coder does **not** currently support logging in with a ChatGPT Plus/Pro/Codex subscription.
-
- ## Why
-
- We looked at two implementations:
-
- - OpenCode in `/tmp/opencode-src`
- - official Codex in `/tmp/openai-codex/codex-rs`
-
- Both rely on OpenAI **first-party/private** auth and backend APIs rather than a documented public developer API.
-
- ## What those implementations do
-
- ### Auth
-
- They use OAuth-like flows against `https://auth.openai.com`, including:
-
- - browser login with PKCE and a localhost callback server
- - device-code / headless login
- - refresh tokens via `POST /oauth/token`
-
- Both also rely on a hardcoded first-party client id embedded in their source trees.
-
- Examples:
-
- - official Codex: `/tmp/openai-codex/codex-rs/core/src/auth.rs`
- - OpenCode: `/tmp/opencode-src/packages/opencode/src/plugin/codex.ts`
-
- ### Runtime API
-
- After login, requests are sent to ChatGPT backend endpoints such as:
-
- - `https://chatgpt.com/backend-api/codex`
- - `https://chatgpt.com/backend-api/codex/responses`
-
- with headers like:
-
- - `Authorization: Bearer <oauth access token>`
- - `ChatGPT-Account-Id: <account id>`
-
- Examples:
-
- - official Codex: `/tmp/openai-codex/codex-rs/core/src/model_provider_info.rs`
- - official Codex headers: `/tmp/openai-codex/codex-rs/backend-client/src/client.rs`
- - OpenCode rewrite layer: `/tmp/opencode-src/packages/opencode/src/plugin/codex.ts`
-
- ## Why mini-coder is not adopting this
-
- - It depends on undocumented/private auth endpoints.
- - It depends on a hardcoded first-party client id.
- - It depends on private ChatGPT backend routes.
- - Browser login would require running a local callback server.
- - Even the official Codex source does not expose a clean public API-based alternative here.
-
- ## Future stance
-
- We may revisit support if OpenAI exposes a stable, documented path for:
-
- - ChatGPT subscription login for third-party tools, or
- - a public Codex/ChatGPT backend API intended for external clients
-
- Until then, mini-coder only supports providers with clearer public integration paths.
-
- ## Note
-
- If you want a supported hosted integration instead of ChatGPT subscription auth, mini-coder already supports OpenCode Zen via `OPENCODE_API_KEY`. See the existing `zen/<model>` provider path.
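The supported alternative above relies on the key-discovery routing described in the man page. A sketch of that idea, mirroring the documented ENVIRONMENT variables; the precedence order shown is an assumption for illustration, not mini-coder's actual selection logic:

```shell
# Pick a provider based on which documented API keys are set.
# ASSUMPTION: the if/elif ordering below is illustrative only.
pick_provider() {
  if   [ -n "${OPENCODE_API_KEY:-}" ];  then echo zen
  elif [ -n "${ANTHROPIC_API_KEY:-}" ]; then echo anthropic
  elif [ -n "${OPENAI_API_KEY:-}" ];    then echo openai
  elif [ -n "${GOOGLE_API_KEY:-}${GEMINI_API_KEY:-}" ]; then echo google
  else echo ollama   # local fallback, served at OLLAMA_BASE_URL
  fi
}
pick_provider
```

With no keys exported, the sketch falls back to `ollama`, matching the man page's local-provider default.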