progressive-skills-mcp 0.2.1.tar.gz → 0.2.2.tar.gz

This diff shows the content of publicly available package versions released to a supported registry. It is provided for informational purposes only and reflects the changes between the versions as they appear in that registry.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.3
  Name: progressive-skills-mcp
- Version: 0.2.1
+ Version: 0.2.2
  Summary: MCP server that exposes Claude-style skills to any MCP client.
  Keywords: mcp,skills,fastmcp,claude
  Author: Eleanor Berger
@@ -92,25 +92,92 @@ pip install progressive-skills-mcp
  progressive-skills-mcp
  ```
 
+ ## System Prompt Configuration
+
+ Progressive disclosure works by adding skill metadata to your LLM agent's system prompt. This tells the agent what skills are available **without** loading all the detailed instructions.
+
+ ### Step 1: Generate Metadata
+
+ ```bash
+ progressive-skills-mcp --generate-metadata > skills-metadata.txt
+ ```
+
+ This outputs:
+
+ ```markdown
+ ## Available Skills
+
+ You have access to specialized skills that provide detailed instructions for specific tasks. When a task requires specialized knowledge or a specific workflow, use the `load_skill` tool to get the full instructions.
+
+ ### How to Use Skills
+
+ 1. Check if a skill is relevant to the user's request
+ 2. Call `load_skill("skill-name")` to get detailed instructions
+ 3. Follow the instructions in the skill
+ 4. Use `read_skill_file()` if the skill references additional resources
+
+ **Available skills:**
+
+ - **context7-docs-lookup**: Look up documentation from Context7 for libraries and frameworks
+ ```
+
+ ### Step 2: Add to Agent System Prompt
+
+ Copy the metadata output and add it to your LLM agent's system prompt. For example, in **Onyx**, **LibreChat**, or **Open WebUI**:
+
+ ```
+ You are a helpful AI assistant.
+
+ [... other system prompt content ...]
+
+ ## Available Skills
+
+ You have access to specialized skills that provide detailed instructions for specific tasks...
+
+ - **context7-docs-lookup**: Look up documentation from Context7...
+ ```
+
+ ### Step 3: Agent Uses Skills
+
+ When relevant, the agent will:
+
+ 1. **See skill in system prompt** → "context7-docs-lookup is available"
+ 2. **Call load_skill** → `load_skill("context7-docs-lookup")`
+ 3. **Receive full instructions** → Complete SKILL.md content
+ 4. **Follow instructions** → Execute the skill workflow
+
+ ### Example Conversation
+
+ **User:** "How do I use React hooks in Next.js?"
+
+ **Agent thinks:** *The context7-docs-lookup skill can help with documentation lookup*
+
+ **Agent calls:** `load_skill("context7-docs-lookup")`
+
+ **Agent receives:** Full skill instructions on how to use Context7 API
+
+ **Agent executes:** Follows skill instructions to look up Next.js documentation
+
+ **Agent responds:** "Here's how to use React hooks in Next.js..." (with accurate docs)
+
  ## Progressive Disclosure
 
  ### Level 1: System Prompt (Once per conversation)
  ```markdown
  ## Available Skills
- - **weather**: Get weather forecasts
- - **pptx**: Create presentations
+ - **context7-docs-lookup**: Look up documentation
  ```
  **Cost:** ~200 tokens, sent ONCE
 
  ### Level 2: On-Demand Instructions
  ```python
- load_skill("pptx") # Returns full SKILL.md
+ load_skill("context7-docs-lookup") # Returns full SKILL.md
  ```
  **Cost:** 0 tokens until loaded
 
  ### Level 3: Referenced Resources
  ```python
- read_skill_file("pptx", "references/api.md")
+ read_skill_file("context7-docs-lookup", "references/api.md")
  ```
  **Cost:** 0 tokens until accessed
 
@@ -125,16 +192,22 @@ read_skill_file("pptx", "references/api.md")
  For Onyx or other MCP clients that support system prompts:
 
  ```bash
+ # Markdown format (default)
  progressive-skills-mcp --generate-metadata
- ```
 
- Output:
- ```markdown
- ## Available Skills
-
- You have access to specialized skills...
+ # JSON format
+ progressive-skills-mcp --generate-metadata --format json
+ ```
 
- - **context7-docs-lookup**: Look up documentation from Context7
+ JSON output:
+ ```json
+ [
+ {
+ "name": "context7-docs-lookup",
+ "description": "Look up documentation from Context7 for libraries and frameworks",
+ "allowed_tools": []
+ }
+ ]
  ```
 
  ## Usage
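
The JSON form added in this hunk makes it possible to assemble the system-prompt block programmatically instead of pasting the markdown output by hand. The snippet below is an editorial sketch, not part of the package diff; it assumes only the `--generate-metadata --format json` flags and the `name`/`description` fields shown in the JSON example above.

```python
# Illustrative sketch (not from the released package): build a system prompt
# block from the JSON metadata emitted by the CLI flags shown in the diff.
import json
import subprocess

raw = subprocess.run(
    ["progressive-skills-mcp", "--generate-metadata", "--format", "json"],
    capture_output=True,
    text=True,
    check=True,
).stdout

skills = json.loads(raw)

# Render one bullet per skill, mirroring the Level 1 "Available Skills" layout.
lines = ["## Available Skills", ""]
lines += [f"- **{s['name']}**: {s['description']}" for s in skills]
print("\n".join(lines))  # paste this block into the agent's system prompt
```

The printed block mirrors the Level 1 example in the README changes above and can be appended to the agent's system prompt by whatever configuration mechanism the client provides.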
@@ -51,25 +51,92 @@ pip install progressive-skills-mcp
  progressive-skills-mcp
  ```
 
+ ## System Prompt Configuration
+
+ Progressive disclosure works by adding skill metadata to your LLM agent's system prompt. This tells the agent what skills are available **without** loading all the detailed instructions.
+
+ ### Step 1: Generate Metadata
+
+ ```bash
+ progressive-skills-mcp --generate-metadata > skills-metadata.txt
+ ```
+
+ This outputs:
+
+ ```markdown
+ ## Available Skills
+
+ You have access to specialized skills that provide detailed instructions for specific tasks. When a task requires specialized knowledge or a specific workflow, use the `load_skill` tool to get the full instructions.
+
+ ### How to Use Skills
+
+ 1. Check if a skill is relevant to the user's request
+ 2. Call `load_skill("skill-name")` to get detailed instructions
+ 3. Follow the instructions in the skill
+ 4. Use `read_skill_file()` if the skill references additional resources
+
+ **Available skills:**
+
+ - **context7-docs-lookup**: Look up documentation from Context7 for libraries and frameworks
+ ```
+
+ ### Step 2: Add to Agent System Prompt
+
+ Copy the metadata output and add it to your LLM agent's system prompt. For example, in **Onyx**, **LibreChat**, or **Open WebUI**:
+
+ ```
+ You are a helpful AI assistant.
+
+ [... other system prompt content ...]
+
+ ## Available Skills
+
+ You have access to specialized skills that provide detailed instructions for specific tasks...
+
+ - **context7-docs-lookup**: Look up documentation from Context7...
+ ```
+
+ ### Step 3: Agent Uses Skills
+
+ When relevant, the agent will:
+
+ 1. **See skill in system prompt** → "context7-docs-lookup is available"
+ 2. **Call load_skill** → `load_skill("context7-docs-lookup")`
+ 3. **Receive full instructions** → Complete SKILL.md content
+ 4. **Follow instructions** → Execute the skill workflow
+
+ ### Example Conversation
+
+ **User:** "How do I use React hooks in Next.js?"
+
+ **Agent thinks:** *The context7-docs-lookup skill can help with documentation lookup*
+
+ **Agent calls:** `load_skill("context7-docs-lookup")`
+
+ **Agent receives:** Full skill instructions on how to use Context7 API
+
+ **Agent executes:** Follows skill instructions to look up Next.js documentation
+
+ **Agent responds:** "Here's how to use React hooks in Next.js..." (with accurate docs)
+
  ## Progressive Disclosure
 
  ### Level 1: System Prompt (Once per conversation)
  ```markdown
  ## Available Skills
- - **weather**: Get weather forecasts
- - **pptx**: Create presentations
+ - **context7-docs-lookup**: Look up documentation
  ```
  **Cost:** ~200 tokens, sent ONCE
 
  ### Level 2: On-Demand Instructions
  ```python
- load_skill("pptx") # Returns full SKILL.md
+ load_skill("context7-docs-lookup") # Returns full SKILL.md
  ```
  **Cost:** 0 tokens until loaded
 
  ### Level 3: Referenced Resources
  ```python
- read_skill_file("pptx", "references/api.md")
+ read_skill_file("context7-docs-lookup", "references/api.md")
  ```
  **Cost:** 0 tokens until accessed
 
@@ -84,16 +151,22 @@ read_skill_file("pptx", "references/api.md")
  For Onyx or other MCP clients that support system prompts:
 
  ```bash
+ # Markdown format (default)
  progressive-skills-mcp --generate-metadata
- ```
 
- Output:
- ```markdown
- ## Available Skills
-
- You have access to specialized skills...
+ # JSON format
+ progressive-skills-mcp --generate-metadata --format json
+ ```
 
- - **context7-docs-lookup**: Look up documentation from Context7
+ JSON output:
+ ```json
+ [
+ {
+ "name": "context7-docs-lookup",
+ "description": "Look up documentation from Context7 for libraries and frameworks",
+ "allowed_tools": []
+ }
+ ]
  ```
 
  ## Usage
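
The `load_skill` and `read_skill_file` calls shown in the Progressive Disclosure levels are ordinary MCP tool calls, so they can be exercised from any MCP client rather than only from an agent. Below is a minimal editorial sketch using the official `mcp` Python SDK over stdio; it is not part of the package diff, it assumes the `progressive-skills-mcp` command serves stdio by default, and the tool argument names (`name`, `path`) are assumptions; check the server's tool schemas with `list_tools()` for the actual names.

```python
# Illustrative sketch (not from the released package): call the skill tools
# through a generic MCP client using the official `mcp` Python SDK.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the server as a stdio subprocess (assumes stdio is the default transport).
    params = StdioServerParameters(command="progressive-skills-mcp", args=[])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Level 2: load the full SKILL.md only when the task needs it.
            # The "name" argument key is an assumption.
            skill = await session.call_tool(
                "load_skill", {"name": "context7-docs-lookup"}
            )
            print(skill.content)

            # Level 3: fetch a referenced resource only when the skill points to it.
            # The "path" argument key is an assumption.
            ref = await session.call_tool(
                "read_skill_file",
                {"name": "context7-docs-lookup", "path": "references/api.md"},
            )
            print(ref.content)


asyncio.run(main())
```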
@@ -1,6 +1,6 @@
  [project]
  name = "progressive-skills-mcp"
- version = "0.2.1"
+ version = "0.2.2"
  description = "MCP server that exposes Claude-style skills to any MCP client."
  readme = "README.md"
  authors = [
@@ -2,4 +2,4 @@
 
  __all__ = ["__version__"]
 
- __version__ = "0.2.1"
+ __version__ = "0.2.2"