@jaypie/mcp 0.7.4 → 0.7.7
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/suites/docs/index.js +1 -1
- package/package.json +1 -1
- package/release-notes/constructs/1.2.27.md +20 -0
- package/release-notes/constructs/1.2.28.md +12 -0
- package/release-notes/express/1.2.8.md +34 -0
- package/release-notes/mcp/0.7.5.md +13 -0
- package/release-notes/mcp/0.7.6.md +13 -0
- package/skills/agents.md +16 -3
- package/skills/cicd-actions.md +337 -0
- package/skills/cicd-deploy.md +332 -0
- package/skills/cicd-environments.md +184 -0
- package/skills/cicd.md +9 -1
- package/skills/development.md +3 -1
- package/skills/infrastructure.md +5 -2
- package/skills/monorepo.md +166 -0
- package/skills/secrets.md +108 -110
- package/skills/skills.md +2 -2
- package/skills/subpackage.md +219 -0
- package/skills/tools-llm.md +98 -0
- package/skills/tools.md +11 -1
package/skills/tools-llm.md
ADDED
@@ -0,0 +1,98 @@
+---
+description: LLM MCP tool for debugging provider responses
+related: llm, tools
+---
+
+# LLM MCP Tool
+
+Debug and inspect LLM provider responses. Useful for understanding how providers format responses and troubleshooting API integrations.
+
+## Usage
+
+```
+llm()                         # Show help
+llm("command", { ...params }) # Execute a command
+```
+
+## Commands
+
+| Command | Description |
+|---------|-------------|
+| `list_providers` | List available LLM providers and their status |
+| `debug_call` | Make a debug call to an LLM provider |
+
+## List Providers
+
+Check which providers are configured:
+
+```
+llm("list_providers")
+```
+
+Returns provider availability based on environment variables.
+
+## Debug Call
+
+Make a test call to inspect the raw response:
+
+```
+llm("debug_call", { provider: "openai", message: "Hello, world!" })
+llm("debug_call", { provider: "anthropic", message: "Hello, world!" })
+llm("debug_call", { provider: "openai", model: "o3-mini", message: "What is 15 * 17?" })
+```
+
+### Parameters
+
+| Parameter | Required | Description |
+|-----------|----------|-------------|
+| `provider` | Yes | Provider name: `openai`, `anthropic`, `gemini`, `openrouter` |
+| `message` | Yes | Message to send |
+| `model` | No | Specific model to use |
+
+### Response Fields
+
+| Field | Description |
+|-------|-------------|
+| `content` | The response text |
+| `reasoning` | Extracted reasoning/thinking content (if available) |
+| `reasoningTokens` | Count of reasoning tokens used |
+| `history` | Full conversation history |
+| `rawResponses` | Raw API responses for debugging |
+| `usage` | Token usage statistics |
+
+## Environment Variables
+
+| Variable | Description |
+|----------|-------------|
+| `OPENAI_API_KEY` | OpenAI API key |
+| `ANTHROPIC_API_KEY` | Anthropic API key |
+| `GOOGLE_API_KEY` | Google/Gemini API key |
+| `OPENROUTER_API_KEY` | OpenRouter API key |
+
+## Supported Providers
+
+| Provider | Models |
+|----------|--------|
+| `openai` | gpt-4o, gpt-4o-mini, o1, o3-mini, etc. |
+| `anthropic` | claude-sonnet-4-20250514, claude-opus-4-20250514, etc. |
+| `gemini` | gemini-2.0-flash, gemini-1.5-pro, etc. |
+| `openrouter` | Access to multiple providers |
+
+## Common Patterns
+
+### Compare Provider Responses
+
+```
+llm("debug_call", { provider: "openai", message: "Explain recursion briefly" })
+llm("debug_call", { provider: "anthropic", message: "Explain recursion briefly" })
+```
+
+### Test Reasoning Models
+
+```
+llm("debug_call", { provider: "openai", model: "o3-mini", message: "Solve: If 3x + 5 = 14, what is x?" })
+```
+
+### Check Token Usage
+
+Use `debug_call` to inspect the `usage` field for token consumption analysis.
package/skills/tools.md
CHANGED
@@ -1,6 +1,6 @@
 ---
 description: Available MCP tools reference
-related: tools-aws, tools-dynamodb, tools-
+related: tools-aws, tools-datadog, tools-dynamodb, tools-llm, debugging
 ---
 
 # MCP Tools Reference
@@ -79,6 +79,10 @@ datadog("monitors", { status: ["Alert", "Warn"] })
 
 ## LLM Tool
 
+Debug and inspect LLM provider responses.
+
+See **tools-llm** for complete documentation.
+
 ```
 llm()                  # Show help
 llm("list_providers")  # List available LLM providers
@@ -103,3 +107,9 @@ llm("debug_call", { provider: "openai", message: "Hello" })
 - `DD_SERVICE` - Default service filter
 - `DD_SOURCE` - Default log source
 
+### LLM Tools
+- `OPENAI_API_KEY` - OpenAI API key
+- `ANTHROPIC_API_KEY` - Anthropic API key
+- `GOOGLE_API_KEY` - Google/Gemini API key
+- `OPENROUTER_API_KEY` - OpenRouter API key
+
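The `usage` field that `debug_call` returns is described only as "token usage statistics"; its exact schema is not shown in this diff. A hedged sketch of aggregating it across several calls, assuming a hypothetical `promptTokens`/`completionTokens` shape for illustration:

```javascript
// Hypothetical helper that totals token usage across several
// debug_call results. The usage shape (promptTokens/completionTokens)
// is an assumption, not the tool's documented schema.
function totalUsage(responses) {
  return responses.reduce(
    (acc, r) => ({
      promptTokens: acc.promptTokens + (r.usage?.promptTokens ?? 0),
      completionTokens: acc.completionTokens + (r.usage?.completionTokens ?? 0),
    }),
    { promptTokens: 0, completionTokens: 0 },
  );
}

// Example with two made-up debug_call results.
const runs = [
  { usage: { promptTokens: 12, completionTokens: 40 } },
  { usage: { promptTokens: 9, completionTokens: 31 } },
];
console.log(totalUsage(runs)); // → { promptTokens: 21, completionTokens: 71 }
```

A summary like this makes it easy to compare token consumption between providers when running the "Compare Provider Responses" pattern from tools-llm.md.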