skill-capture 1.0.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1,247 @@
+ Metadata-Version: 2.4
+ Name: skill-capture
+ Version: 1.0.0
+ Summary: A privacy-first AI agent that learns your workflows and turns them into one-click Skills.
+ Author: SkillCapture Contributors
+ License: MIT
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: fastmcp>=2.0.0
+ Requires-Dist: pydantic>=2.0.0
+ Requires-Dist: openai>=1.0.0
+ Requires-Dist: python-frontmatter>=1.1.0
+ Requires-Dist: apscheduler>=3.10.0
+ Requires-Dist: python-dotenv>=1.0.0
+ Requires-Dist: anthropic>=0.40.0
+ Requires-Dist: google-genai>=1.0.0
+ Dynamic: license-file
+
+ # SkillCapture 🧠
+
+ A **privacy-first, local AI agent** that watches your daily chats, automatically learns your repetitive workflows, and turns them into one-click **Skills**, all stored safely on your own hard drive.
+
+ Built with [FastMCP](https://github.com/jlowin/fastmcp) · Works with Claude Desktop, Cursor, Windsurf, and any MCP-compatible client.
+
+ ---
+
+ ## How It Works
+
+ SkillCapture uses a **two-tier pipeline** inspired by how human memory consolidation works:
+
+ ### Day 1: Lightweight Draft (Cheap)
+ The AI scans your chat log and extracts potential workflows into a flat JSON cache. No heavy processing, just keywords and action summaries.
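A draft in that flat cache can be pictured as a small record. The sketch below uses a stdlib dataclass for illustration; the shipped schemas are Pydantic models in `core/models.py`, and the field names here are inferred from what the CLI prints for pending drafts.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class WorkflowDraft:
    # Field names mirror the CLI's pending-draft output;
    # the real Pydantic model in core/models.py may differ.
    action_summary: str
    keywords: list[str]
    first_seen: str       # ISO date of first observation
    occurrences: int = 1

draft = WorkflowDraft(
    action_summary="Build and push the Docker image",
    keywords=["docker", "build", "push"],
    first_seen="2026-01-15",
)
# data/pending.json would hold a flat list of such records.
print(json.dumps([asdict(draft)], indent=2))
```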
+
+ ### Day 2: Heavy Promotion (Only on Match)
+ If you repeat a workflow, the system detects the keyword overlap and *only then* triggers the expensive generation: building a full, reusable Skill with named variables, step-by-step actions, and trigger phrases.
+
+ ```
+ DISCOVERED → PENDING → PROMOTED → DEPRECATED
+  (Day 1)     (Cache)   (Vault)    (30d unused)
+ ```
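The Day 2 match step hinges on keyword overlap between a fresh draft and a cached one. A minimal sketch of what such a check could look like; the similarity metric, threshold, and function names are illustrative assumptions, not the shipped logic:

```python
def keyword_overlap(a: list[str], b: list[str]) -> float:
    """Jaccard similarity between two keyword lists (assumed metric)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def should_promote(cached: list[str], fresh: list[str], threshold: float = 0.5) -> bool:
    """Promote a cached draft once a repeat crosses the overlap threshold."""
    return keyword_overlap(cached, fresh) >= threshold

# A repeat of a Docker-deploy workflow overlaps enough to trigger promotion:
print(should_promote(["docker", "build", "push"], ["docker", "push", "tag"]))  # True
```

Only after a cheap check like this passes would the expensive LLM call run to generate the full Skill.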
+
+ ### The Storage Architecture
+
+ | Layer | Location | Purpose |
+ |-------|----------|---------|
+ | **The Sandbox** | `data/pending.json` | Lightweight Day 1 cache with fast read/write |
+ | **The Vault** | `skills/*.md` | Promoted skills as human-readable Markdown with YAML frontmatter |
+ | **The Index** | `skills/index.json` | Ultra-light manifest so the AI never overloads its context window |
+
+ Skills are stored as **Markdown files**: you can read, edit, and version-control them with Git.
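A promoted skill file could look like the sketch below. The frontmatter keys are inferred from the fields the CLI displays (`name`, `version`, `description`, `trigger_phrases`, `variables`) and are not confirmed against the actual generator:

```markdown
---
name: Deploy App
version: 1
description: Build the Docker image and roll it out to staging.
trigger_phrases:
  - "deploy the app"
  - "ship to staging"
variables:
  - IMAGE_TAG
---

1. Build the image with `docker build -t app:{IMAGE_TAG} .`
2. Push it to the registry.
3. Trigger the staging rollout.
```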
+
+ ---
+
+ ## Quick Start
+
+ ### 1. Install
+
+ **Python (uvx / pipx)** 🐍
+ ```bash
+ uvx --from skill-capture skill-capture-mcp
+ # or
+ pipx install skill-capture
+ ```
+
+ ### 2. Configure your LLM provider
+
+ ```bash
+ cp .env.example .env
+ # Edit .env with your provider and API key
+ ```
+
+ SkillCapture ships with **three built-in providers**. Set `LLM_PROVIDER` in `.env`:
+
+ | Provider | `LLM_PROVIDER` | API Key Env Var | Default Model |
+ |----------|----------------|-----------------|---------------|
+ | OpenAI | `openai` | `OPENAI_API_KEY` | `gpt-4o-mini` |
+ | Anthropic | `anthropic` | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514` |
+ | Google Gemini | `gemini` | `GOOGLE_API_KEY` | `gemini-2.0-flash` |
+
+ > **Extensible**: Need a different provider? Implement the `LLMClient.chat()` interface in `core/providers.py`.
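A custom provider only has to satisfy that one method. A sketch of the assumed interface with a toy provider; the actual `LLMClient` signature in `core/providers.py` may differ, so check it before implementing:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class LLMClient(Protocol):
    """Assumed shape of the provider interface; the real signature
    in core/providers.py may take different arguments."""
    def chat(self, system: str, user: str) -> str: ...

class EchoClient:
    """Toy provider for offline testing: echoes the prompt back."""
    def chat(self, system: str, user: str) -> str:
        return f"[{system}] {user}"

client: LLMClient = EchoClient()
print(client.chat("You summarize workflows.", "Summarize today's log."))
```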
+
+ ### 3. Run the MCP Server
+
+ ```bash
+ skill-capture-mcp
+ ```
+
+ Then connect from **Claude Desktop**, **Cursor**, **Windsurf**, or any MCP-compatible client.
+
+ ### 4. Connect to your MCP client
+
+ <details>
+ <summary><strong>Claude Desktop</strong></summary>
+
+ Add to `claude_desktop_config.json`:
+
+ ```json
+ {
+   "mcpServers": {
+     "skill-capture": {
+       "command": "python",
+       "args": ["/absolute/path/to/skill-capture/server.py"],
+       "env": { "LLM_PROVIDER": "openai", "OPENAI_API_KEY": "sk-..." }
+     }
+   }
+ }
+ ```
+ </details>
+
+ <details>
+ <summary><strong>Cursor</strong></summary>
+
+ Add to `~/.cursor/mcp.json` (global) or `.cursor/mcp.json` (project):
+
+ ```json
+ {
+   "mcpServers": {
+     "skill-capture": {
+       "command": "skill-capture-mcp",
+       "args": [],
+       "env": { "LLM_PROVIDER": "openai", "OPENAI_API_KEY": "sk-..." }
+     }
+   }
+ }
+ ```
+ </details>
+
+ <details>
+ <summary><strong>Windsurf</strong></summary>
+
+ Add to `~/.codeium/windsurf/mcp_config.json`:
+
+ ```json
+ {
+   "mcpServers": {
+     "skill-capture": {
+       "command": "skill-capture-mcp",
+       "args": [],
+       "env": { "LLM_PROVIDER": "openai", "OPENAI_API_KEY": "sk-..." }
+     }
+   }
+ }
+ ```
+ </details>
+
+ <details>
+ <summary><strong>Codex CLI</strong></summary>
+
+ Run:
+
+ ```bash
+ codex mcp add skill-capture -- skill-capture-mcp
+ ```
+
+ Or add to `~/.codex/config.toml`:
+
+ ```toml
+ [mcp_servers.skill-capture]
+ type = "stdio"
+ command = "skill-capture-mcp"
+ args = []
+
+ [mcp_servers.skill-capture.env]
+ LLM_PROVIDER = "openai"
+ OPENAI_API_KEY = "sk-..."
+ ```
+ </details>
+
+ ---
+
+ ## CLI Mode
+
+ Don't need MCP? Use SkillCapture standalone from the terminal:
+
+ ```bash
+ skill-capture-cli analyze              # Run the Day 1/Day 2 pipeline
+ skill-capture-cli list                 # List all promoted skills
+ skill-capture-cli pending              # View pending drafts in the sandbox
+ skill-capture-cli run "Deploy App"     # Load and display a specific skill
+ ```
+
+ ---
+
+ ## MCP Tools
+
+ Once connected, your AI client has access to these tools:
+
+ | Tool | Description |
+ |------|-------------|
+ | `list_skills()` | Browse all promoted skills (reads the lightweight index) |
+ | `run_skill(name)` | Load the full content of a specific skill from the Vault |
+ | `analyze_today()` | Manually trigger the Day 1/Day 2 pipeline |
+ | `get_pending()` | View workflow drafts sitting in the sandbox |
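Because `list_skills()` reads only the index, the context cost stays flat no matter how many skills accumulate. A sketch of that read path, assuming `skills/index.json` holds a flat JSON list of entries (the field names follow the CLI output and are not confirmed):

```python
import json
from pathlib import Path

def list_skills(index_path: str = "skills/index.json") -> list[dict]:
    """Return the lightweight manifest; full skill bodies stay on disk."""
    path = Path(index_path)
    if not path.exists():
        return []
    return json.loads(path.read_text(encoding="utf-8"))
```

`run_skill(name)` would then open only the one matching `skills/*.md` file.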
196
+
197
+ ---
198
+
199
+ ## Project Structure
200
+
201
+ ```
202
+ skill-capture/
203
+ โ”œโ”€โ”€ data/
204
+ โ”‚ โ””โ”€โ”€ pending.json # The Sandbox
205
+ โ”œโ”€โ”€ skills/
206
+ โ”‚ โ”œโ”€โ”€ index.json # The Index
207
+ โ”‚ โ””โ”€โ”€ *.md # The Vault
208
+ โ”œโ”€โ”€ logs/ # Daily chat logs (input)
209
+ โ”œโ”€โ”€ core/
210
+ โ”‚ โ”œโ”€โ”€ models.py # Two-tier Pydantic schemas
211
+ โ”‚ โ”œโ”€โ”€ storage.py # File-system I/O layer
212
+ โ”‚ โ”œโ”€โ”€ evaluator.py # LLM client interface + evaluator logic
213
+ โ”‚ โ”œโ”€โ”€ providers.py # OpenAI, Anthropic, Gemini clients
214
+ โ”‚ โ””โ”€โ”€ scheduler.py # APScheduler nightly worker
215
+ โ”œโ”€โ”€ server.py # FastMCP server
216
+ โ”œโ”€โ”€ cli.py # Standalone CLI interface
217
+ โ””โ”€โ”€ requirements.txt
218
+ ```
+
+ ---
+
+ ## Tech Stack
+
+ - **Python** - Core language
+ - **[FastMCP](https://github.com/jlowin/fastmcp)** - Model Context Protocol server framework
+ - **[Pydantic](https://docs.pydantic.dev/)** - Structured data validation
+ - **[OpenAI](https://platform.openai.com/) · [Anthropic](https://docs.anthropic.com/) · [Google Gemini](https://ai.google.dev/)** - LLM providers (swappable)
+ - **[python-frontmatter](https://github.com/eyeseast/python-frontmatter)** - Markdown + YAML parsing
+ - **[APScheduler](https://apscheduler.readthedocs.io/)** - Background task scheduling
+
+ ---
+
+ ## Contributing
+
+ Contributions are welcome! Some ideas:
+
+ - 🔌 Add more LLM providers (Ollama, local models)
+ - 🎨 Build a web UI for skill management
+ - 📊 Add usage analytics and skill effectiveness tracking
+ - 🧪 Improve the keyword matching with embeddings
+ - 📝 Add support for more chat log formats
+
+ ---
+
+ ## License
+
+ MIT License - see [LICENSE](LICENSE) for details.
@@ -0,0 +1,31 @@
+ [build-system]
+ requires = ["setuptools>=61.0"]
+ build-backend = "setuptools.build_meta"
+
+ [project]
+ name = "skill-capture"
+ version = "1.0.0"
+ description = "A privacy-first AI agent that learns your workflows and turns them into one-click Skills."
+ readme = "README.md"
+ requires-python = ">=3.10"
+ license = { text = "MIT" }
+ authors = [
+   { name = "SkillCapture Contributors" }
+ ]
+ dependencies = [
+   "fastmcp>=2.0.0",
+   "pydantic>=2.0.0",
+   "openai>=1.0.0",
+   "python-frontmatter>=1.1.0",
+   "apscheduler>=3.10.0",
+   "python-dotenv>=1.0.0",
+   "anthropic>=0.40.0",
+   "google-genai>=1.0.0"
+ ]
+
+ [project.scripts]
+ skill-capture-cli = "skill_capture.cli:main"
+ skill-capture-mcp = "skill_capture.server:main"
+
+ [tool.setuptools]
+ packages = ["skill_capture", "skill_capture.core"]
@@ -0,0 +1,4 @@
+ [egg_info]
+ tag_build =
+ tag_date = 0
+
@@ -0,0 +1,127 @@
+ """
+ CLI - Standalone command-line interface for SkillCapture.
+
+ Usage:
+     skill-capture-cli analyze            Run the Day 1/Day 2 pipeline
+     skill-capture-cli list               List all promoted skills
+     skill-capture-cli pending            View pending drafts in the sandbox
+     skill-capture-cli run <skill_name>   Load and display a promoted skill
+ """
+
+ import argparse
+ import json
+
+ from dotenv import load_dotenv
+
+ # Load .env before importing modules that read provider settings at import time.
+ load_dotenv()
+
+ from skill_capture.core.storage import load_index, load_skill_from_vault, load_pending
+ from skill_capture.core.scheduler import run_pipeline
+
+
+ def cmd_analyze(args: argparse.Namespace) -> None:
+     """Run the memory pipeline (Day 1 extraction or Day 2 promotion)."""
+     result = run_pipeline()
+     action = result.get("action", "unknown")
+     details = result.get("details", {})
+
+     if action == "skipped":
+         print(f"⏭ Skipped - {details.get('reason', 'no reason given')}")
+     elif action == "day1_extraction":
+         count = details.get("drafts_cached", 0)
+         print(f"📋 Day 1 - Extracted {count} draft(s) to pending cache.")
+         for s in details.get("summaries", []):
+             print(f"   • {s}")
+     elif action == "day2_promotion":
+         promoted = details.get("skills_promoted", [])
+         remaining = details.get("remaining_pending", 0)
+         print(f"🚀 Day 2 - Promoted {len(promoted)} skill(s), {remaining} draft(s) remain pending.")
+         for name in promoted:
+             print(f"   ✅ {name}")
+     else:
+         print(json.dumps(result, indent=2))
+
+
+ def cmd_list(args: argparse.Namespace) -> None:
+     """List all promoted skills from the index."""
+     entries = load_index()
+     if not entries:
+         print("No skills promoted yet. Run 'skill-capture-cli analyze' after logging workflows.")
+         return
+
+     print(f"📚 {len(entries)} skill(s) in the Vault:\n")
+     for e in entries:
+         print(f"   {e.name}")
+         print(f"      {e.trigger_description}")
+         print()
+
+
+ def cmd_pending(args: argparse.Namespace) -> None:
+     """View pending drafts in the sandbox."""
+     drafts = load_pending()
+     if not drafts:
+         print("Sandbox is empty - no pending drafts.")
+         return
+
+     print(f"📝 {len(drafts)} pending draft(s):\n")
+     for i, d in enumerate(drafts, 1):
+         print(f"   {i}. {d.action_summary}")
+         print(f"      Keywords: {', '.join(d.keywords)}")
+         print(f"      First seen: {d.first_seen} | Occurrences: {d.occurrences}")
+         print()
+
+
+ def cmd_run(args: argparse.Namespace) -> None:
+     """Load and display a promoted skill."""
+     skill = load_skill_from_vault(args.skill_name)
+     if not skill:
+         available = load_index()
+         names = [e.name for e in available]
+         print(f"❌ Skill '{args.skill_name}' not found.")
+         if names:
+             print(f"   Available: {', '.join(names)}")
+         return
+
+     print(f"🔧 {skill.name} (v{skill.version})\n")
+     print(f"   {skill.description}\n")
+     if skill.trigger_phrases:
+         print(f"   Triggers: {', '.join(skill.trigger_phrases)}")
+     if skill.variables:
+         print(f"   Variables: {', '.join(skill.variables)}")
+     print()
+     for action in skill.actions:
+         tool = f"  [{action.tool_call}]" if action.tool_call else ""
+         print(f"   {action.step_number}. {action.description}{tool}")
+
+
+ def main() -> None:
+     parser = argparse.ArgumentParser(
+         prog="skill-capture-cli",
+         description="SkillCapture - Turn repeated workflows into reusable Skills.",
+     )
+     subparsers = parser.add_subparsers(dest="command", required=True)
+
+     # analyze
+     sub_analyze = subparsers.add_parser("analyze", help="Run the Day 1/Day 2 pipeline")
+     sub_analyze.set_defaults(func=cmd_analyze)
+
+     # list
+     sub_list = subparsers.add_parser("list", help="List all promoted skills")
+     sub_list.set_defaults(func=cmd_list)
+
+     # pending
+     sub_pending = subparsers.add_parser("pending", help="View pending drafts")
+     sub_pending.set_defaults(func=cmd_pending)
+
+     # run
+     sub_run = subparsers.add_parser("run", help="Load and display a promoted skill")
+     sub_run.add_argument("skill_name", help="Name of the skill to load")
+     sub_run.set_defaults(func=cmd_run)
+
+     args = parser.parse_args()
+     args.func(args)
+
+
+ if __name__ == "__main__":
+     main()