@ema.co/mcp-toolkit 1.5.1 → 1.6.0

# Local Workflow Generation

> Use the conversational-workflow-builder to generate workflows locally, then deploy via MCP tools.

## Overview

This integration allows Cursor rules to:

1. Generate Auto Builder prompts using MCP tools for guidance
2. Run the prompt through a local workflow generation pipeline
3. Get back validated `workflow_def` JSON
4. Deploy to Ema using `workflow(mode="deploy", ...)`

**Key Benefits:**

- Full Planner → Grapher → Validator pipeline with retry logic
- Runs independently of the Ema platform (needs only an Anthropic API key)
- Supports complex multi-agent workflows
- Returns deployable JSON directly

---

## Quick Start

### Prerequisites

```bash
# 1. Set up the conversational-workflow-builder
cd .refs/conversational-workflow-builder

# 2. Install dependencies (using uv)
uv sync

# 3. Set the environment variable
export ANTHROPIC_API_KEY="your-key-here"
```

### Basic Usage

```bash
# Generate a workflow graph from a prompt
python generate_local.py --prompt "Create an IT helpdesk bot that routes to KB search or ticket creation"

# Generate a complete workflow_def for deployment
python generate_local.py --prompt "..." --full --persona-id "my-bot-id" --pretty

# Pipe the prompt from a file
cat my_prompt.txt | python generate_local.py --verbose --pretty
```
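When a Cursor rule drives the CLI programmatically, it is safer to assemble the argument list in one place than to concatenate a shell string. A minimal sketch — the helper name and defaults are ours; the flags mirror the CLI reference below:

```python
import sys

def build_generate_cmd(prompt, full=False, persona_id=None,
                       template_name=None, pretty=True, output=None):
    """Assemble an argv list for generate_local.py (helper name is illustrative)."""
    cmd = [sys.executable, "generate_local.py", "--prompt", prompt]
    if full:
        cmd.append("--full")
    if persona_id:
        cmd += ["--persona-id", persona_id]
    if template_name:
        cmd += ["--template-name", template_name]
    if pretty:
        cmd.append("--pretty")
    if output:
        cmd += ["--output", output]
    return cmd
```

The resulting list can be passed directly to `subprocess.run(cmd, capture_output=True, text=True)` without shell quoting concerns.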

---

## Integration Workflow

### Step 1: Gather Requirements (MCP Tools)

```text
# Get structured questions to ask the user
template(questions=true)

# Get suggested agents for the use case
action(suggest="IT helpdesk with ServiceNow integration")

# Get a pattern template
template(pattern="tool-calling")
```

### Step 2: Generate Auto Builder Prompt

Based on the MCP tool responses, craft a detailed natural-language prompt:

```text
Create a Chat AI Employee for IT Helpdesk Support:

TRIGGER: chat_trigger

ROUTING:
- Use chat_categorizer to classify user intent
- Categories: Password Reset, Create Ticket, Check Status, Fallback

PASSWORD RESET PATH:
- Search knowledge base for password reset articles
- Use respond_with_sources to answer with citations

CREATE TICKET PATH:
- Use external_action_caller with ServiceNow Create_Ticket
- Add general_hitl for approval before creation
- On success: confirm ticket created
- On failure: apologize and offer alternatives

CHECK STATUS PATH:
- Use external_action_caller with ServiceNow Get_Ticket_Status
- Return status information

FALLBACK PATH:
- Ask clarifying question using fixed_response

All paths must connect to WORKFLOW_OUTPUT.
```

### Step 3: Run Local Generation

```bash
# From Cursor, run this terminal command:
cd .refs/conversational-workflow-builder && \
python generate_local.py \
  --prompt "Your detailed prompt here" \
  --full \
  --persona-id "it-helpdesk-bot" \
  --template-name "ITHelpdeskBot" \
  --pretty \
  --output /tmp/workflow.json
```

### Step 4: Review and Deploy

```text
# Read the generated workflow
cat /tmp/workflow.json

# If satisfied, deploy via MCP:
workflow(
  mode="deploy",
  persona_id="existing-persona-id",  # or create a new one first
  workflow_def={...from generated file...}
)

# Or create a new persona and deploy
persona(mode="create", name="IT Helpdesk", description="...", type="chat")
# Then deploy the workflow to the new persona
```

---

## CLI Reference

```text
usage: generate_local.py [-h] [--prompt PROMPT | --file FILE] [--output OUTPUT]
                         [--full] [--persona-id PERSONA_ID] [--template-name TEMPLATE_NAME]
                         [--type {chat,document,dashboard}] [--max-retries MAX_RETRIES]
                         [--verbose] [--pretty] [--graph-only] [--plan-only]

Options:
  --prompt, -p      Workflow description prompt
  --file, -f        File containing the workflow prompt
  --output, -o      Output file (default: stdout)
  --full            Generate the complete workflow_def (not just the graph)
  --persona-id      Persona ID for workflow_def generation
  --template-name   Template name for the workflow_def
  --type, -t        Persona type hint (chat, document, or dashboard)
  --max-retries     Maximum validation retries (default: 3)
  --verbose, -v     Print progress to stderr
  --pretty          Pretty-print JSON output
  --graph-only      Output only the workflow_graph
  --plan-only       Output only the workflow_plan (markdown)
```

---

## Output Formats

### Default Output (without `--graph-only` or `--full`)

```json
{
  "success": true,
  "workflow_graph": {
    "user_query": "description",
    "is_document": false,
    "graph": {
      "trigger": { "action_name": "chat_trigger", ... },
      "categorizer_1": { "action_name": "chat_categorizer", ... }
    }
  },
  "workflow_plan": "# Workflow Plan\n\n..."
}
```

### Full Output (`--full`)

```json
{
  "success": true,
  "workflow_def": {
    "workflowName": { ... },
    "actions": [ ... ],
    "results": { ... },
    "enumTypes": [ ... ]
  },
  "workflow_graph": { ... },
  "workflow_plan": "..."
}
```
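A consuming script can branch on these two shapes. A small sketch, assuming only the fields documented above:

```python
def extract_workflow_def(result: dict) -> dict:
    """Return the deployable payload from a generation result, or raise
    so the caller can surface the pipeline's error message."""
    if not result.get("success"):
        raise RuntimeError(result.get("error", "generation failed"))
    # --full runs include workflow_def; otherwise fall back to the graph
    return result.get("workflow_def") or result["workflow_graph"]
```

The `workflow_def` feeds `workflow(mode="deploy", ...)` directly; the graph-only shape is useful for review before a `--full` rerun.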

---

## Complete Example Flow

### 1. User Request

"Create a bot that helps employees find benefits information and submit PTO requests"

### 2. MCP Discovery

```text
action(suggest="HR bot for benefits and PTO")
→ Suggests: chat_categorizer, search, respond_with_sources, external_action_caller, general_hitl

template(pattern="intent-routing")
→ Returns pattern template with categorizer routing
```

### 3. Generate Detailed Prompt

```text
Create a Chat AI Employee for HR Self-Service:

TRIGGER: chat_trigger (receives employee questions)

ROUTING (chat_categorizer):
- "Benefits Information" - questions about health, dental, 401k
- "PTO Request" - requests to submit time off
- "PTO Balance" - checking remaining PTO
- "Fallback" - unclear requests

BENEFITS PATH:
→ conversation_to_search_query (convert to search)
→ search (query HR knowledge base)
→ respond_with_sources (answer with citations)
→ WORKFLOW_OUTPUT

PTO REQUEST PATH:
→ call_llm (extract dates and reason from conversation)
→ general_hitl (manager approval required)
  - On "HITL Success": external_action_caller (Workday submit PTO)
    → fixed_response ("Your PTO has been submitted!")
  - On "HITL Failure": fixed_response ("Request not approved")
→ WORKFLOW_OUTPUT

PTO BALANCE PATH:
→ external_action_caller (Workday get PTO balance)
→ respond_for_external_actions (format balance info)
→ WORKFLOW_OUTPUT

FALLBACK PATH:
→ fixed_response ("I can help with benefits info, PTO requests, or checking your PTO balance.")
→ WORKFLOW_OUTPUT
```

### 4. Run Generation

```bash
python generate_local.py \
  --prompt "$(cat prompt.txt)" \
  --full \
  --type chat \
  --persona-id "hr-self-service" \
  --verbose \
  --pretty \
  --output workflow.json
```

### 5. Validate with MCP

```text
# Validate connections
workflow(mode="analyze", workflow_def=<workflow_def>, include=["connections"])

# Check for issues
workflow(mode="analyze", workflow_def=<workflow_def>, include=["issues"])
```

### 6. Deploy

```text
# Create the persona
persona(
  mode="create",
  name="HR Self-Service Bot",
  description="Helps employees with benefits and PTO",
  type="chat"
)

# Deploy the workflow
workflow(
  mode="deploy",
  persona_id=<new_persona_id>,
  workflow_def=<generated workflow_def>
)
```

---

## Error Handling

### Generation Failures

```json
{
  "success": false,
  "error": "Failed to generate valid workflow after 3 attempts...",
  "workflow_graph": { ... },
  "workflow_plan": "..."
}
```

**Recovery:**

1. Review `workflow_plan` to understand what the Planner intended
2. Check `error` for the specific validation issues
3. Make the prompt more explicit about connections
4. Increase `--max-retries` if the run was close to success
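Recovery step 4 can be automated: rerun with a larger retry budget whenever a run comes back unsuccessful. A sketch under the assumption that you wrap the CLI in a callable returning its parsed JSON output (the wrapper itself is not shown):

```python
def generate_with_escalation(generate, prompt, budgets=(3, 5, 8)):
    """Call `generate(prompt, max_retries=n)` with growing budgets until success.

    `generate` is a stand-in for however you invoke generate_local.py
    (e.g. via subprocess); it must return the result JSON as a dict.
    """
    last = None
    for budget in budgets:
        last = generate(prompt, max_retries=budget)
        if last.get("success"):
            return last
    return last  # caller inspects last["error"] and last["workflow_plan"]

# Demo with a fake generator that succeeds once the budget reaches 5:
calls = []
def fake(prompt, max_retries):
    calls.append(max_retries)
    return {"success": max_retries >= 5}

result = generate_with_escalation(fake, "demo prompt")
# result["success"] is True; calls == [3, 5]
```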
310
+
311
+ ### Common Issues
312
+
313
+ | Error | Cause | Fix |
314
+ |-------|-------|-----|
315
+ | `Missing Fallback category` | Categorizer without fallback | Add "Fallback" to categories |
316
+ | `Type mismatch` | Incompatible connections | Use `reference(check_types={source: "...", target: "..."})` |
317
+ | `HITL needs both paths` | Missing success or failure | Define both HITL outcomes |
318
+ | `No WORKFLOW_OUTPUT` | Missing output connections | Ensure all paths reach OUTPUT |
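The first and last rows of this table can be caught locally before deployment. A sketch: the `categories` and `connections` keys are our assumptions about the node schema, so adapt the accessors to the real `workflow_graph` format.

```python
def lint_graph(graph: dict) -> list[str]:
    """Flag two common issues: categorizers without a Fallback category,
    and nodes with no outgoing connections (dead ends)."""
    problems = []
    for name, node in graph.items():
        if (node.get("action_name") == "chat_categorizer"
                and "Fallback" not in node.get("categories", [])):
            problems.append(f"{name}: missing Fallback category")
        if not node.get("connections") and name != "WORKFLOW_OUTPUT":
            problems.append(f"{name}: dead end (no path to WORKFLOW_OUTPUT)")
    return problems

graph = {
    "trigger": {"action_name": "chat_trigger", "connections": ["cat_1"]},
    "cat_1": {"action_name": "chat_categorizer", "categories": ["A"],
              "connections": []},
}
# Both issues are reported for cat_1
print(lint_graph(graph))
```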

---

## Best Practices

### 1. Be Explicit in Prompts

```text
BAD:
"Create a helpdesk bot"

GOOD:
"Create a Chat AI Employee for IT Helpdesk:
- TRIGGER: chat_trigger
- ROUTING: chat_categorizer with categories [A, B, C, Fallback]
- PATH A: search → respond_with_sources → WORKFLOW_OUTPUT
- PATH B: external_action_caller → WORKFLOW_OUTPUT
- FALLBACK: fixed_response → WORKFLOW_OUTPUT"
```

### 2. Always Include Fallback

Every categorizer must have a Fallback category.

### 3. Define HITL Paths Completely

```text
"general_hitl for approval:
- On HITL Success: proceed to create ticket
- On HITL Failure: send apology message"
```

### 4. Use MCP for Validation

```text
workflow(mode="analyze", workflow_def=<workflow_def>)
```

### 5. Test Incrementally

Start with a simple workflow, validate, then add complexity.

---

## Comparison: Local vs Auto Builder UI

| Aspect | Local Generation | Auto Builder UI |
|--------|-----------------|-----------------|
| Input | CLI prompt | Web form |
| Speed | ~60-90 seconds | ~60-90 seconds |
| Retry Logic | Automatic (3 attempts) | Manual |
| Output | JSON file | Deployed workflow |
| Batch Processing | Yes | No |
| CI/CD Integration | Easy | Hard |
| Debugging | Full logs | Limited |

---

## Related Documentation

- `../mcp/RULE.md` — MCP usage + resource-first guidance
- `../auto-builder/RULE.md` — Auto Builder prompt authoring rules
- `docs/mcp-tools-guide.md` — Canonical consolidated MCP tool semantics

---

## LLM Templating for Document Generation

### Recommended Approach: LLM Templating

For dynamic, context-dependent content, use **LLM templating** instead of hardcoded templates:

```text
search (gather context)
→ call_llm (generate structured content with section prompts)
→ generate_document (convert to document format)
→ [optional] send_email (with proper entity extraction)
```

### How to Use LLM Templating

**In your prompt to call_llm:**

```text
Generate structured content with clear ## section headers.
Let the LLM determine appropriate sections based on:
- Content type (the user's request)
- Audience (who will read this)
- Purpose (inform, analyze, recommend, etc.)

Use a structured prompt like:
"Based on the provided data, generate a professional document with:
- Clear section headers (## format)
- Key findings organized by themes
- Actionable recommendations
Format with markdown for conversion to document."
```

### Output Semantics Extraction

Instead of hardcoding document types, extract semantics via the LLM:

| Attribute | Example Values | Determined By |
|-----------|----------------|---------------|
| output_type | brief, report, analysis | LLM from context |
| tone | formal, professional, casual | Audience + relationship |
| style | analytical, informative | Purpose |
| audience | client, internal, executive | Context clues |
| length | brief, standard, detailed | Explicit or implied |

Use `generateOutputSemanticsPrompt(userInput)` to create the extraction prompt.
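`generateOutputSemanticsPrompt` lives in the builder codebase; purely as an illustration, an equivalent extraction prompt might be assembled like this (the attribute names come from the table above, the wording is ours):

```python
# Attribute -> candidate values, taken from the table above
ATTRIBUTES = {
    "output_type": "brief, report, analysis",
    "tone": "formal, professional, casual",
    "style": "analytical, informative",
    "audience": "client, internal, executive",
    "length": "brief, standard, detailed",
}

def output_semantics_prompt(user_input: str) -> str:
    """Illustrative stand-in for generateOutputSemanticsPrompt(userInput)."""
    rules = "\n".join(f"- {k}: one of [{v}]" for k, v in ATTRIBUTES.items())
    return (
        "Infer the following output semantics from the request and "
        f"return them as JSON:\n{rules}\n\nRequest: {user_input}"
    )

print(output_semantics_prompt("Draft a client-facing Q3 benefits summary"))
```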

### When to Use Template Engines (Rare)

Only use data source templates when:

- Strict regulatory formatting is required
- Pixel-perfect layouts are needed
- Content is purely data-driven (not narrative)

### Document Generation Chain

```yaml
# Recommended flow
- name: gather_context
  action: search
  # Search KB and/or web for relevant information

- name: generate_content
  action: call_llm
  inputs:
    query: user_request
    named_inputs_KB_Results:
      actionOutput: { actionName: gather_context, output: search_results }
  # Prompt includes: "Generate structured content with ## sections..."

- name: create_document
  action: generate_document
  inputs:
    markdown_file_contents:
      actionOutput: { actionName: generate_content, output: response_with_sources }

# If sending via email:
- name: extract_recipient
  action: entity_extraction
  # Extract email_address from conversation

- name: confirm_send
  action: general_hitl
  # ALWAYS confirm before sending

- name: send_email
  action: send_email_agent
  inputs:
    email_to:
      actionOutput: { actionName: extract_recipient, output: email_address }
    # Use named_inputs for DOCUMENT type attachment
    named_inputs_Attachment:
      actionOutput: { actionName: create_document, output: document_link }
```

---

## Email Best Practices

### Critical Rules

1. **email_to MUST come from entity_extraction** - Never from text summaries or LLM outputs
2. **ALWAYS use HITL before sending** - Emails have external side effects
3. **Use named_inputs for document attachments** - DOCUMENT type requires named_inputs, not attachment_links

### Anti-Patterns to Avoid

```yaml
# ❌ WRONG - Text output as email recipient
email_to:
  actionOutput: { actionName: summarizer, output: summarized_conversation }

# ✅ CORRECT - Extracted email address
email_to:
  actionOutput: { actionName: entity_extraction, output: email_address }
```
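Rule 1 can be enforced mechanically before deploy. A sketch over a dict keyed by action name, mirroring the illustrative YAML shape used in this section:

```python
def check_email_recipient(actions: dict) -> bool:
    """True if every send_email_agent's email_to is wired to an
    entity_extraction action (Critical Rule 1). The dict shape mirrors
    the illustrative YAML above and is an assumption, not the real schema."""
    for spec in actions.values():
        if spec.get("action") != "send_email_agent":
            continue
        src = spec["inputs"]["email_to"]["actionOutput"]["actionName"]
        if actions.get(src, {}).get("action") != "entity_extraction":
            return False
    return True

actions = {
    "extract_recipient": {"action": "entity_extraction"},
    "send_email": {
        "action": "send_email_agent",
        "inputs": {"email_to": {"actionOutput": {
            "actionName": "extract_recipient", "output": "email_address"}}},
    },
}
print(check_email_recipient(actions))  # True
```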

### Email Flow Pattern

```text
entity_extraction (extract email_address, recipient_name)
→ general_hitl (confirm recipient and content)
  → [HITL Success] send_email_agent
  → [HITL Failure] fixed_response (cancelled message)
```
---
# MCP Flow Diagrams

## ⚠️ THE KEY INSIGHT: ONE CALL

**Creating an AI Employee should be ONE MCP call, not 45.**

If an agent is making 10+ calls to create one persona, something is wrong with:

1. The tool descriptions (not clear enough)
2. The Cursor rules (not being applied)
3. The agent's understanding (ignoring guidance)

---

## Greenfield: Create New AI Employee

```mermaid
sequenceDiagram
    participant User
    participant Agent as Agent (LLM)
    participant MCP
    participant API as Ema API

    User->>Agent: "Create a Voice AI for sales..."

    Note over Agent: If requirements unclear...
    Agent->>MCP: template(questions=true, category="Voice")
    MCP-->>Agent: { questions: ["What intents?", "Data sources?", ...] }

    Agent->>User: Asks MCP-provided questions
    User->>Agent: Provides answers

    Note over Agent: ONE CALL with all info
    Agent->>MCP: persona(input="Voice AI SDR...", type="voice", name="Sales SDR", preview=false)

    Note over MCP: MCP handles everything internally:<br/>1. Template selection<br/>2. Config generation<br/>3. Widget formatting<br/>4. API orchestration

    MCP->>API: createAiEmployee(template_id)
    API-->>MCP: { persona_id: "abc" }
    MCP->>API: getPersonaById("abc")
    API-->>MCP: { proto_config, workflow_def }
    MCP->>API: updateAiEmployee({ proto_config })
    API-->>MCP: { success }

    MCP-->>Agent: { deployed_to: { persona_id, created: true } }
    Agent->>User: "Created Sales SDR (abc-123)"
```

## Key Points: Who Does What

### Agent (LLM) Responsibilities

1. **Ask MCP what to ask** - Call `template(questions=true)` to get qualifying questions
2. **Ask the user the MCP-provided questions** - Don't hardcode questions
3. **Format ONE MCP call** - A single `persona()` call with all the gathered info
4. **Report results** - Show the user what was created

### MCP Responsibilities (all internal, abstracted)

1. **Mode detection** - Greenfield vs brownfield vs analyze
2. **Input parsing** - Extract intents and settings from natural language
3. **Proto config generation** - Format widgets and settings properly
4. **Template selection** - Pick the correct template for the type
5. **API orchestration** - Create → Fetch → Merge → Update
6. **Field name handling** - Use the correct API field names
7. **Validation** - Check for issues before and after

### Ema API Responsibilities

1. **Create persona from template** - Returns a valid workflow structure
2. **Store persona** - Persist the config and workflow
3. **Return data** - Full persona with workflow_def

## Brownfield: Modify Existing AI Employee

```mermaid
sequenceDiagram
    participant User
    participant Agent as Agent (LLM)
    participant MCP
    participant API as Ema API

    User->>Agent: "Add HITL before sending emails"

    Note over Agent: Optional: Understand what exists
    Agent->>MCP: persona(id="abc-123")
    MCP->>API: getPersonaById("abc-123")
    API-->>MCP: { workflow_def, proto_config }
    MCP-->>Agent: { issues, nodes, connections }

    Note over Agent: ONE CALL to modify
    Agent->>MCP: persona(id="abc-123", input="add HITL before email", preview=false)

    Note over MCP: MCP handles internally:<br/>1. Fetch existing workflow<br/>2. Analyze modification<br/>3. Apply changes<br/>4. Validate<br/>5. Deploy

    MCP->>API: updateAiEmployee({ workflow })
    API-->>MCP: { success }

    MCP-->>Agent: { status: "deployed", changes_applied: [...] }
    Agent->>User: "Done! HITL added before email."
```

## Summary: One Tool, One Call

```mermaid
graph TD
    subgraph "The Right Way"
        USER[User Request] --> AGENT[Agent]
        AGENT -->|"If unclear"| TEMPLATE["template(questions=true)"]
        TEMPLATE --> ASK[Ask user questions]
        ASK --> AGENT
        AGENT -->|"ONE CALL"| PERSONA["persona(input=..., type=..., name=...)"]
        PERSONA --> SUCCESS[AI Employee Created]
    end

    subgraph "The Wrong Way ❌"
        BAD_USER[User Request] --> BAD_AGENT[Agent]
        BAD_AGENT --> CALL1["persona(mode=create)"]
        CALL1 --> CALL2["template(config=voice)"]
        CALL2 --> CALL3["persona(mode=update, proto_config)"]
        CALL3 --> CALL4["workflow(persona_id, workflow_def)"]
        CALL4 --> CALL5["workflow(mode=analyze)"]
        CALL5 --> CALL6["workflow(mode=optimize)"]
        CALL6 --> DOTS["... 40 more calls ..."]
        DOTS --> FAIL[Confusion & Wasted Time]
    end
```

## Tool Responsibility

| What | Use `persona` for | Do NOT use |
|------|-------------------|------------|
| Create new AI Employee | `persona(input=..., type=..., name=...)` | `workflow` (deprecated) |
| Modify existing | `persona(id=..., input=...)` | Manual orchestration |
| Analyze | `persona(id=...)` | Multiple separate calls |
| Optimize | `persona(id=..., optimize=true)` | Manual fixing |
| List/Search | `persona(all=true)` | N/A |

**The agent makes 1-2 simple calls. MCP handles all complexity.**