@mclawnet/agent 0.5.8 → 0.6.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/cli.js +168 -61
- package/dist/__tests__/cli.test.d.ts +2 -0
- package/dist/__tests__/cli.test.d.ts.map +1 -0
- package/dist/__tests__/service-config.test.d.ts +2 -0
- package/dist/__tests__/service-config.test.d.ts.map +1 -0
- package/dist/__tests__/service-linux.test.d.ts +2 -0
- package/dist/__tests__/service-linux.test.d.ts.map +1 -0
- package/dist/__tests__/service-macos.test.d.ts +2 -0
- package/dist/__tests__/service-macos.test.d.ts.map +1 -0
- package/dist/__tests__/service-windows.test.d.ts +2 -0
- package/dist/__tests__/service-windows.test.d.ts.map +1 -0
- package/dist/backend-adapter.d.ts +2 -0
- package/dist/backend-adapter.d.ts.map +1 -1
- package/dist/{chunk-KHPEQTWF.js → chunk-KITKMSBE.js} +166 -90
- package/dist/chunk-KITKMSBE.js.map +1 -0
- package/dist/chunk-W3LSW4XY.js +95 -0
- package/dist/chunk-W3LSW4XY.js.map +1 -0
- package/dist/hub-connection.d.ts.map +1 -1
- package/dist/index.js +1 -1
- package/dist/linux-5KQ4SCAA.js +175 -0
- package/dist/linux-5KQ4SCAA.js.map +1 -0
- package/dist/macos-FGY546NC.js +173 -0
- package/dist/macos-FGY546NC.js.map +1 -0
- package/dist/service/config.d.ts +19 -0
- package/dist/service/config.d.ts.map +1 -0
- package/dist/service/index.d.ts +6 -0
- package/dist/service/index.d.ts.map +1 -0
- package/dist/service/index.js +46 -0
- package/dist/service/index.js.map +1 -0
- package/dist/service/linux.d.ts +18 -0
- package/dist/service/linux.d.ts.map +1 -0
- package/dist/service/macos.d.ts +18 -0
- package/dist/service/macos.d.ts.map +1 -0
- package/dist/service/types.d.ts +19 -0
- package/dist/service/types.d.ts.map +1 -0
- package/dist/service/windows.d.ts +18 -0
- package/dist/service/windows.d.ts.map +1 -0
- package/dist/session-manager.d.ts +4 -7
- package/dist/session-manager.d.ts.map +1 -1
- package/dist/skill-loader.d.ts +8 -0
- package/dist/skill-loader.d.ts.map +1 -0
- package/dist/start.d.ts.map +1 -1
- package/dist/start.js +1 -1
- package/dist/windows-PIJ4CMWX.js +164 -0
- package/dist/windows-PIJ4CMWX.js.map +1 -0
- package/package.json +18 -16
- package/skills/academic-search/SKILL.md +147 -0
- package/skills/architecture/SKILL.md +294 -0
- package/skills/changelog-generator/SKILL.md +112 -0
- package/skills/chart-visualization/SKILL.md +183 -0
- package/skills/code-review/SKILL.md +304 -0
- package/skills/codebase-health/SKILL.md +281 -0
- package/skills/consulting-analysis/SKILL.md +584 -0
- package/skills/content-research-writer/SKILL.md +546 -0
- package/skills/data-analysis/SKILL.md +194 -0
- package/skills/deep-research/SKILL.md +198 -0
- package/skills/docx/SKILL.md +211 -0
- package/skills/github-deep-research/SKILL.md +207 -0
- package/skills/image-generation/SKILL.md +209 -0
- package/skills/lead-research-assistant/SKILL.md +207 -0
- package/skills/mcp-builder/SKILL.md +304 -0
- package/skills/meeting-insights-analyzer/SKILL.md +335 -0
- package/skills/pair-programming/SKILL.md +196 -0
- package/skills/pdf/SKILL.md +309 -0
- package/skills/performance-analysis/SKILL.md +261 -0
- package/skills/podcast-generation/SKILL.md +224 -0
- package/skills/pptx/SKILL.md +497 -0
- package/skills/project-learnings/SKILL.md +280 -0
- package/skills/security-audit/SKILL.md +211 -0
- package/skills/skill-creator/SKILL.md +200 -0
- package/skills/technical-writing/SKILL.md +286 -0
- package/skills/testing/SKILL.md +363 -0
- package/skills/video-generation/SKILL.md +247 -0
- package/skills/web-design-guidelines/SKILL.md +203 -0
- package/skills/webapp-testing/SKILL.md +162 -0
- package/skills/workflow-automation/SKILL.md +299 -0
- package/skills/xlsx/SKILL.md +305 -0
- package/dist/chunk-KHPEQTWF.js.map +0 -1
@@ -0,0 +1,304 @@
---
name: mcp-builder
description: Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).
---

# MCP Server Development Guide

Create high-quality MCP servers that enable LLMs to effectively interact with external services and APIs. Quality is measured by how well the server enables LLMs to accomplish real-world tasks.

## When to Use

- Building a new MCP server from scratch
- Integrating an external API or service for LLM access via MCP
- Designing tool schemas and resource definitions for MCP
- Evaluating and improving existing MCP server quality

## When NOT to Use

- **Using existing MCP servers** — just configure them in your MCP client settings
- **General API development** — this is specific to the MCP protocol for LLM tool use
- **Claude API / SDK integration** — use the `claude-api` skill
- **Simple shell scripts or automation** — use the `workflow-automation` skill

---

# Process

Creating a high-quality MCP server involves four main phases:

### Phase 1: Deep Research and Planning

#### 1.1 Understand Agent-Centric Design Principles

Before implementing, internalize these principles for designing tools for AI agents:

- **Build for Workflows, Not Just API Endpoints** - Consolidate related operations (e.g., `schedule_event` that both checks availability and creates the event). Focus on complete tasks, not individual API calls.
- **Optimize for Limited Context** - Return high-signal info, not data dumps. Provide "concise" vs "detailed" options. Default to human-readable identifiers over technical codes.
- **Design Actionable Error Messages** - Guide agents toward correct usage. Suggest next steps: "Try using filter='active_only' to reduce results."
- **Follow Natural Task Subdivisions** - Tool names should reflect how humans think about tasks. Group related tools with consistent prefixes.
- **Use Evaluation-Driven Development** - Create realistic evaluation scenarios early. Let agent feedback drive improvements.

#### 1.2 Study MCP Protocol Documentation

Use WebFetch to load the complete MCP specification: `https://modelcontextprotocol.io/llms-full.txt`

#### 1.3 Study SDK Documentation

- **Python SDK**: WebFetch `https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md`
- **TypeScript SDK**: WebFetch `https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md`

#### 1.4 MCP Best Practices Quick Reference

**Server Naming:**
- Python: `{service}_mcp` (e.g., `slack_mcp`)
- Node/TypeScript: `{service}-mcp-server` (e.g., `slack-mcp-server`)

**Tool Naming:**
- Use snake_case with a service prefix: `{service}_{action}_{resource}`
- Example: `slack_send_message`, `github_create_issue`
- Be action-oriented, specific, and consistent

**Response Formats:**
- Support both JSON (programmatic) and Markdown (human-readable)
- Default to Markdown; use a `response_format` parameter for flexibility

**Pagination:**
- Always respect the `limit` parameter; return `has_more`, `next_offset`, `total_count`
- Default to 20-50 items; never load all results into memory

**Character Limits:**
- Define a CHARACTER_LIMIT constant (typically 25,000)
- Truncate gracefully with clear messages and filtering guidance

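The pagination and character-limit rules above can be folded into two small helpers. This is a minimal sketch assuming a plain in-memory result list; `paginate` and `truncate` are illustrative names, not SDK calls:

```python
CHARACTER_LIMIT = 25_000  # typical cap suggested above

def paginate(items: list, limit: int = 20, offset: int = 0) -> dict:
    """Slice a result set and attach the pagination metadata tools should return."""
    page = items[offset:offset + limit]
    next_offset = offset + len(page)
    has_more = next_offset < len(items)
    return {
        "items": page,
        "total_count": len(items),
        "has_more": has_more,
        "next_offset": next_offset if has_more else None,
    }

def truncate(text: str, limit: int = CHARACTER_LIMIT) -> str:
    """Truncate gracefully, telling the agent how to narrow the query."""
    if len(text) <= limit:
        return text
    notice = f"\n[Truncated at {limit} characters. Pass a filter or a smaller 'limit' to narrow results.]"
    return text[:limit] + notice
```

Returning `next_offset` as `None` on the last page gives the agent an unambiguous stop signal.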
#### 1.5 Study Target API Documentation

Read through ALL available API documentation for the service you are integrating: official API reference, authentication, rate limiting, pagination patterns, error responses, data models.

Use web search and WebFetch as needed to gather comprehensive information.

#### 1.6 Create a Comprehensive Implementation Plan

Based on your research, create a detailed plan covering:

**Tool Selection:** Prioritize the most valuable endpoints that enable common workflows. Consider which tools work together for complex tasks.

**Shared Utilities:** Identify common API request patterns, pagination helpers, filtering/formatting utilities, and error handling strategies.

**Input/Output Design:** Define validation models (Pydantic for Python, Zod for TypeScript), consistent response formats, character limits, and truncation strategies.

**Error Handling Strategy:** Plan graceful failure modes with clear, actionable, LLM-friendly natural language error messages.

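As a sketch of what "actionable" means in practice, a helper can pair every problem with a suggested next step. `tool_error` is a hypothetical name, not an SDK call; the `isError` field is the MCP convention for reporting tool failures in-band:

```python
def tool_error(problem: str, suggestion: str) -> dict:
    """Report a failure inside the result object, phrased so the agent can recover."""
    return {
        "isError": True,  # tool failures go in the result object, not protocol-level errors
        "content": f"{problem} {suggestion}",
    }

# Instead of surfacing a raw exception or HTTP status code:
err = tool_error(
    "Search matched 4,812 results, which exceeds the response limit.",
    "Try using filter='active_only' or a more specific query to reduce results.",
)
```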
---

### Phase 2: Implementation

Now that you have a comprehensive plan, begin implementation following language-specific best practices.

#### 2.1 Set Up Project Structure

**For Python:**
- Create a single `.py` file, or organize into modules if complex
- Use the MCP Python SDK (`FastMCP`) for tool registration
- Define Pydantic models for input validation

**For Node/TypeScript:**
- Set up `package.json` and `tsconfig.json`
- Use the MCP TypeScript SDK with `server.registerTool`
- Define Zod schemas for input validation

#### 2.2 Implement Core Infrastructure First

Create shared utilities before implementing tools:
- API request helper functions
- Error handling utilities
- Response formatting functions (JSON and Markdown)
- Pagination helpers
- Authentication/token management

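One such shared utility, the dual-format response formatter, might look like this. A sketch only: the function name and record shape are made up for illustration.

```python
import json

def format_results(records: list, response_format: str = "markdown") -> str:
    """Render the same records as JSON (programmatic) or Markdown (human-readable)."""
    if response_format == "json":
        return json.dumps(records, indent=2)
    # Markdown default: one bullet per record, human-readable name first
    return "\n".join(f"- **{r['title']}** ({r['id']})" for r in records)
```

Every tool can then accept a `response_format` parameter and delegate here, so both formats stay consistent across the server.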
#### 2.3 Implement Tools Systematically

For each tool in the plan:

**Define Input Schema:**
- Use Pydantic (Python) or Zod (TypeScript) for validation
- Include proper constraints (min/max length, regex patterns, ranges)
- Provide clear, descriptive field descriptions with diverse examples

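The constraints above map directly onto validator declarations. Shown here as a dependency-free sketch (with Pydantic or Zod you would declare the same rules on the model instead); `validate_channel_query` and its limits are hypothetical:

```python
import re

def validate_channel_query(query: str, limit: int) -> list:
    """Collect every constraint violation so the agent sees them all at once."""
    errors = []
    if not 1 <= len(query) <= 200:
        errors.append("query must be 1-200 characters, e.g. 'deploy failures last week'")
    if not re.fullmatch(r"[\w\s.,'#-]+", query):
        errors.append("query may only contain words, spaces, and basic punctuation")
    if not 1 <= limit <= 100:
        errors.append("limit must be between 1 and 100 (default 20)")
    return errors
```

Returning all violations at once, each with an example of valid input, saves the agent a round trip per mistake.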
**Write Comprehensive Docstrings/Descriptions:**
- One-line summary of what the tool does
- Detailed explanation of purpose and functionality
- Explicit parameter types with examples
- Complete return type schema
- Usage examples (when to use, when not to use)
- Error handling documentation with guidance on how to proceed

**Implement Tool Logic:**
- Use shared utilities to avoid code duplication
- Follow async/await patterns for all I/O
- Implement proper error handling
- Support multiple response formats (JSON and Markdown)
- Respect pagination parameters; check character limits and truncate

**Add Tool Annotations:**
- `readOnlyHint`: true for read-only operations
- `destructiveHint`: false for non-destructive operations
- `idempotentHint`: true if repeated calls have the same effect
- `openWorldHint`: true if interacting with external systems

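For a hypothetical read-only search tool, those hints combine like this (shown as a plain mapping; how annotations are attached to a tool is SDK-specific):

```python
# Annotation values for a read-only search tool (illustrative only).
search_annotations = {
    "readOnlyHint": True,      # only reads data
    "destructiveHint": False,  # cannot delete or overwrite anything
    "idempotentHint": True,    # repeating the same search changes nothing
    "openWorldHint": True,     # talks to an external service
}
```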
#### 2.4 Follow Language-Specific Best Practices

**For Python, ensure:**
- MCP Python SDK (`FastMCP`) used with the `@mcp.tool` decorator
- Pydantic v2 models with `model_config`
- Type hints throughout
- Async/await for all I/O operations
- Module-level constants (CHARACTER_LIMIT, API_BASE_URL)
- Logging to stderr (never stdout, which interferes with stdio transport)

**For Node/TypeScript, ensure:**
- `server.registerTool` used properly
- Zod schemas with `.strict()`
- TypeScript strict mode enabled
- No `any` types - use proper types
- Explicit `Promise<T>` return types
- Build process configured (`npm run build`)

---

### Phase 3: Review and Refine

After initial implementation:

#### 3.1 Code Quality Review

Review the code for:
- **DRY Principle**: No duplicated code between tools
- **Composability**: Shared logic extracted into functions
- **Consistency**: Similar operations return similar formats
- **Error Handling**: All external calls have error handling
- **Type Safety**: Full type coverage (Python type hints, TypeScript types)
- **Documentation**: Every tool has comprehensive docstrings/descriptions

#### 3.2 Test and Build

**Important:** MCP servers are long-running processes that wait for requests over stdio or HTTP. Running them directly will cause your process to hang indefinitely.

**Safe ways to test:**
- Use the evaluation harness (Phase 4) - the recommended approach
- Run the server in tmux to keep it outside your main process
- Use a timeout: `timeout 5s python server.py`

**For Python:**
- Verify syntax: `python -m py_compile your_server.py`
- Check imports work correctly by reviewing the file

**For Node/TypeScript:**
- Run `npm run build` and ensure it completes without errors
- Verify `dist/index.js` is created

#### 3.3 Quality Checklist

Verify your implementation against these criteria:

**Python Quality Checklist:**
- [ ] FastMCP server initialized with a descriptive name
- [ ] All tools use the `@mcp.tool` decorator with annotations
- [ ] Pydantic v2 models for all input validation
- [ ] Type hints on all functions and parameters
- [ ] Async/await for all I/O operations
- [ ] CHARACTER_LIMIT constant defined and enforced
- [ ] Response format parameter (JSON/Markdown) supported
- [ ] Pagination with `limit`, `offset`, `has_more` metadata
- [ ] Actionable error messages (not raw exceptions)
- [ ] No logging to stdout

**TypeScript Quality Checklist:**
- [ ] Server uses `registerTool` with proper schemas
- [ ] Zod schemas with `.strict()` for all inputs
- [ ] TypeScript strict mode, no `any` types
- [ ] Explicit `Promise<T>` return types
- [ ] CHARACTER_LIMIT constant defined and enforced
- [ ] Response format parameter (JSON/Markdown) supported
- [ ] Pagination with `limit`, `offset`, `has_more` metadata
- [ ] Actionable error messages
- [ ] Build succeeds without errors (`npm run build`)

---

### Phase 4: Create Evaluations

After implementing your MCP server, create comprehensive evaluations to test its effectiveness.

#### 4.1 Understand Evaluation Purpose

Evaluations test whether LLMs can effectively use your MCP server to answer realistic, complex questions using only the tools provided.

#### 4.2 Create 10 Evaluation Questions

Follow this process:

1. **Tool Inspection**: List available tools and understand their capabilities
2. **Content Exploration**: Use READ-ONLY operations to explore available data
3. **Question Generation**: Create 10 complex, realistic questions
4. **Answer Verification**: Solve each question yourself to verify answers

#### 4.3 Evaluation Requirements

Each question must be:
- **Independent**: Not dependent on other questions
- **Read-only**: Only non-destructive operations required
- **Complex**: Requiring multiple tool calls and deep exploration
- **Realistic**: Based on real use cases humans would care about
- **Verifiable**: Single, clear answer verifiable by string comparison
- **Stable**: Answer won't change over time (use historical/closed data)

Questions should NOT be solvable with straightforward keyword search. Use synonyms, related concepts, or paraphrases. Require multiple searches and synthesis.

#### 4.4 Output Format

Create an XML file with this structure:

```xml
<evaluation>
  <qa_pair>
    <question>Find discussions about AI model launches with animal codenames. One model needed a specific safety designation that uses the format ASL-X. What number X was being determined for the model named after a spotted wild cat?</question>
    <answer>3</answer>
  </qa_pair>
  <!-- More qa_pairs... -->
</evaluation>
```

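Before running evaluations, the file can be sanity-checked with the standard library. A sketch: `load_qa_pairs` is a hypothetical helper, not part of any harness.

```python
import xml.etree.ElementTree as ET

def load_qa_pairs(xml_text: str) -> list:
    """Parse <qa_pair> entries, requiring a non-empty question and answer in each."""
    root = ET.fromstring(xml_text)
    pairs = []
    for qa in root.findall("qa_pair"):
        question = (qa.findtext("question") or "").strip()
        answer = (qa.findtext("answer") or "").strip()
        assert question and answer, "every qa_pair needs both a question and an answer"
        pairs.append((question, answer))
    return pairs
```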
---

# Reference Resources

## Documentation Library

Load these resources as needed during development:

### Core MCP Documentation (Load First)
- **MCP Protocol**: Fetch from `https://modelcontextprotocol.io/llms-full.txt`

### SDK Documentation (Load During Phase 1/2)
- **Python SDK**: Fetch from `https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md`
- **TypeScript SDK**: Fetch from `https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md`

### Transport Options

MCP servers support multiple transports:

| Transport | Best For | Clients | Communication |
|-----------|----------|---------|---------------|
| **Stdio** | Local/CLI tools | Single | Bidirectional |
| **HTTP** | Web services | Multiple | Request-Response |
| **SSE** | Real-time updates | Multiple | Server-Push |

### Security Essentials

- Store API keys in environment variables, never in code
- Validate all inputs with schema validation (Pydantic/Zod)
- Sanitize file paths to prevent directory traversal
- Use HTTPS for all network communication
- Report tool errors within result objects (set `isError: true`), not as protocol-level errors
- Don't expose internal errors to clients; provide helpful but not revealing messages
@@ -0,0 +1,335 @@
---
name: meeting-insights-analyzer
description: Analyze meeting transcripts to uncover communication patterns, behavioral insights, and actionable feedback on speaking habits. Use when reviewing meeting performance, coaching communication skills, or preparing for feedback sessions.
disable-model-invocation: true
---

# Meeting Insights Analyzer

This skill transforms your meeting transcripts into actionable insights about your communication patterns, helping you become a more effective communicator and leader.

## When to Use This Skill

- Analyzing your communication patterns across multiple meetings
- Getting feedback on your leadership and facilitation style
- Identifying when you avoid difficult conversations
- Understanding your speaking habits and filler words
- Tracking improvement in communication skills over time
- Preparing for performance reviews with concrete examples
- Coaching team members on their communication style

## When NOT to Use

- **Meeting summarization only** — if you just need a summary without behavioral analysis, a general prompt suffices
- **Audio transcription** — this skill analyzes text transcripts, not raw audio
- **Project status tracking** — use project management tools instead
- **Written communication review** — this is focused on verbal/meeting patterns

## What This Skill Does

1. **Pattern Recognition**: Identifies recurring behaviors across meetings, like:
   - Conflict avoidance or indirect communication
   - Speaking ratios and turn-taking
   - Question-asking vs. statement-making patterns
   - Active listening indicators
   - Decision-making approaches

2. **Communication Analysis**: Evaluates communication effectiveness:
   - Clarity and directness
   - Use of filler words and hedging language
   - Tone and sentiment patterns
   - Meeting control and facilitation

3. **Actionable Feedback**: Provides specific, timestamped examples with:
   - What happened
   - Why it matters
   - How to improve

4. **Trend Tracking**: Compares patterns over time when analyzing multiple meetings

## How to Use

### Basic Setup

1. Download your meeting transcripts to a folder (e.g., `~/meetings/`)
2. Navigate to that folder in Claude Code
3. Ask for the analysis you want

### Quick Start Examples

```
Analyze all meetings in this folder and tell me when I avoided conflict.
```

```
Look at my meetings from the past month and identify my communication patterns.
```

```
Compare my facilitation style between these two meeting folders.
```

### Advanced Analysis

```
Analyze all transcripts in this folder and:
1. Identify when I interrupted others
2. Calculate my speaking ratio
3. Find moments I avoided giving direct feedback
4. Track my use of filler words
5. Show examples of good active listening
```

## Instructions

When a user requests meeting analysis:

1. **Discover Available Data**
   - Scan the folder for transcript files (.txt, .md, .vtt, .srt, .docx)
   - Check whether files contain speaker labels and timestamps
   - Confirm the date range of meetings
   - Identify the user's name/identifier in transcripts

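The discovery step can be sketched with the standard library (the extensions match the list above; the date-prefixed filenames follow this skill's suggested naming convention):

```python
from pathlib import Path

TRANSCRIPT_EXTENSIONS = {".txt", ".md", ".vtt", ".srt", ".docx"}

def find_transcripts(folder: str) -> list:
    """Return transcript files sorted by name, which sorts by date under YYYY-MM-DD naming."""
    return sorted(
        p for p in Path(folder).iterdir()
        if p.suffix.lower() in TRANSCRIPT_EXTENSIONS
    )
```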
2. **Clarify Analysis Goals**

   If not specified, ask what they want to learn:
   - Specific behaviors (conflict avoidance, interruptions, filler words)
   - Communication effectiveness (clarity, directness, listening)
   - Meeting facilitation skills
   - Speaking patterns and ratios
   - Growth areas for improvement

3. **Analyze Patterns**

   For each requested insight:

   **Conflict Avoidance**:
   - Look for hedging language ("maybe", "kind of", "I think")
   - Indirect phrasing instead of direct requests
   - Changing the subject when tension arises
   - Agreeing without commitment ("yeah, but...")
   - Not addressing obvious problems

   **Speaking Ratios**:
   - Calculate percentage of meeting spent speaking
   - Count interruptions (by and of the user)
   - Measure average speaking turn length
   - Track question vs. statement ratios

   **Filler Words**:
   - Count "um", "uh", "like", "you know", "actually", etc.
   - Note frequency per minute or per speaking turn
   - Identify situations where they increase (nervous, uncertain)

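For a transcript with `Speaker: utterance` lines, the speaking-ratio and filler-word counts reduce to simple tallies. This is a rough word-count sketch; real transcripts need timestamp-aware parsing, and `speaking_stats` is an illustrative name:

```python
FILLERS = {"um", "uh", "like", "you know", "actually"}

def speaking_stats(transcript: str, me: str) -> dict:
    """Word-count share and filler tally for one speaker in a 'Name: text' transcript."""
    my_words, all_words, fillers = 0, 0, 0
    for line in transcript.splitlines():
        speaker, _, text = line.partition(":")
        words = text.split()
        all_words += len(words)
        if speaker.strip() == me:
            my_words += len(words)
            lowered = " " + text.lower() + " "
            for f in FILLERS:
                fillers += lowered.count(" " + f + " ")
    return {
        "speaking_ratio": my_words / all_words if all_words else 0.0,
        "filler_count": fillers,
    }
```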
   **Active Listening**:
   - Questions that reference others' previous points
   - Paraphrasing or summarizing others' ideas
   - Building on others' contributions
   - Asking clarifying questions

   **Leadership & Facilitation**:
   - Decision-making approach (directive vs. collaborative)
   - How disagreements are handled
   - Inclusion of quieter participants
   - Time management and agenda control
   - Follow-up and action item clarity

4. **Provide Specific Examples**

   For each pattern found, include:

```markdown
### [Pattern Name]

**Finding**: [One-sentence summary of the pattern]

**Frequency**: [X times across Y meetings]

**Examples**:

1. **[Meeting Name/Date]** - [Timestamp]

   **What Happened**:
   > [Actual quote from transcript]

   **Why This Matters**:
   [Explanation of the impact or missed opportunity]

   **Better Approach**:
   [Specific alternative phrasing or behavior]

[Repeat for 2-3 strongest examples]
```

5. **Synthesize Insights**

   After analyzing all patterns, provide:

```markdown
# Meeting Insights Summary

**Analysis Period**: [Date range]
**Meetings Analyzed**: [X meetings]
**Total Duration**: [X hours]

## Key Patterns Identified

### 1. [Primary Pattern]
- **Observed**: [What you saw]
- **Impact**: [Why it matters]
- **Recommendation**: [How to improve]

### 2. [Second Pattern]
[Same structure]

## Communication Strengths

1. [Strength 1 with example]
2. [Strength 2 with example]
3. [Strength 3 with example]

## Growth Opportunities

1. **[Area 1]**: [Specific, actionable advice]
2. **[Area 2]**: [Specific, actionable advice]
3. **[Area 3]**: [Specific, actionable advice]

## Speaking Statistics

- Average speaking time: [X% of meeting]
- Questions asked: [X per meeting average]
- Filler words: [X per minute]
- Interruptions: [X given / Y received per meeting]

## Next Steps

[3-5 concrete actions to improve communication]
```

6. **Offer Follow-Up Options**
   - Track these same metrics in future meetings
   - Deep dive into specific meetings or patterns
   - Compare to industry benchmarks
   - Create a personal communication development plan
   - Generate a summary for performance reviews

## Examples

### Example 1: Conflict Avoidance Analysis (Inspired by Dan Shipper)

**User**: "I download all of my meeting recordings and put them in a folder. Tell me all the times I've subtly avoided conflict."

**Output**:
```markdown
# Conflict Avoidance Patterns

Found 23 instances across 15 meetings where you used indirect
communication or avoided addressing tensions directly.

## Pattern: Hedging on Critical Feedback

**Frequency**: 8 times across 7 meetings

**Example 1: 1:1 with Sarah** - 00:14:32

**What Happened**:
> "So, I was thinking... maybe we could, like, potentially
> consider looking at the timeline again? I mean, if you
> think that makes sense. But whatever you think is best!"

**Why This Matters**:
You needed to address that Sarah's project was behind schedule,
but the hedging language ("maybe", "potentially", "if you think")
and the deflection at the end ("whatever you think") made it easy
for her to miss the urgency.

**Better Approach**:
"Sarah, the project is two weeks behind schedule. We need to
discuss what's blocking progress and create a new timeline
today."

[Additional examples...]

## Recommendations

1. **Name the issue directly** in the first sentence
2. **Remove hedging words** like "maybe," "kind of," "sort of"
3. **Ask specific questions** instead of hinting
4. **Schedule difficult conversations** instead of raising them casually
```

### Example 2: Leadership Facilitation

**User**: "Analyze my team meetings and tell me about my facilitation style."

**Output**: Provides insights on:
- How much you speak vs. team members (60% vs. 40%)
- Whether you ask questions or make statements (3:1 ratio)
- How you handle disagreements (tendency to resolve too quickly)
- Who speaks least and whether you draw them in
- Examples of good and missed facilitation moments

### Example 3: Personal Development Tracking

**User**: "Compare my meetings from Q1 vs. Q2 to see if I've improved my listening skills."

**Output**: Creates a comparative analysis showing:
- Decrease in interruptions (8 per meeting → 3 per meeting)
- Increase in clarifying questions (2 → 7 per meeting)
- Improvement in building on others' ideas
- Specific examples showing the difference
- Remaining areas for growth

## Setup Tips

### Getting Meeting Transcripts

**From Granola** (free with Lenny's newsletter subscription):
- Granola auto-transcribes your meetings
- Export transcripts to a folder: [Instructions on how]
- Point Claude Code to that folder

**From Zoom**:
- Enable cloud recording with transcription
- Download VTT or SRT files after meetings
- Store in a dedicated folder

**From Google Meet**:
- Use Google Docs auto-transcription
- Save transcript docs to a folder
- Download as .txt files or give Claude Code access

**From Fireflies.ai, Otter.ai, etc.**:
- Export transcripts in bulk
- Store in a local folder
- Run analysis on the folder

### Best Practices

1. **Consistent naming**: Use `YYYY-MM-DD - Meeting Name.txt` format
2. **Regular analysis**: Review monthly or quarterly for trends
3. **Specific queries**: Ask about one behavior at a time for depth
4. **Privacy**: Keep sensitive meeting data local
5. **Action-oriented**: Focus on one improvement area at a time

## Common Analysis Requests

- "When do I avoid difficult conversations?"
- "How often do I interrupt others?"
- "What's my speaking vs. listening ratio?"
- "Do I ask good questions?"
- "How do I handle disagreement?"
- "Am I inclusive of all voices?"
- "Do I use too many filler words?"
- "How clear are my action items?"
- "Do I stay on agenda or get sidetracked?"
- "How has my communication changed over time?"

## Related Use Cases

- Creating a personal development plan from insights
- Preparing performance review materials with examples
- Coaching direct reports on their communication
- Analyzing customer calls for sales or support patterns
- Studying negotiation tactics and outcomes