@agentled/cli 0.7.2 → 0.7.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -164,3 +164,334 @@ fieldMapping:
  - [ ] If delivered via email, the email template embeds `{{steps.share.shareUrl}}`
  - [ ] `knowledgeSync` persists to a list with a typed schema (not implicit)
  - [ ] `executedAt` (or a similar timestamp) is stored so rows can be trended
+
+ ---
+
+ ## Don't (anti-patterns AI authors hit constantly)
+
+ These four mistakes turn a report into a wall-of-text "looks AI-generated" output:
+
+ - ❌ **Wall of `markdown` blocks for a scored report.** A scored AI step produces a numeric verdict — surface it. Use `scoringHeader` + `dimensionScores` at the top, body markdown after.
+ - ❌ **`section` block with nested `blocks: [...]`.** The `section` renderer only accepts `fields: [{ name, label, display? }]`. Nested `blocks` arrays are silently dropped — every section that uses them renders as `null`. Use `grid` (which DOES accept `blocks[]`) or convert to a flat `markdown` block with a `title`.
+ - ❌ **Markdown block with `{{var}}` in `title`.** Markdown block titles are NOT template-resolved (see `MarkdownBlockRenderer.tsx:22`). Only `section` block titles are. Use a `section` block with `display: 'text'` if you need a runtime-resolved heading, or have the AI emit the resolved title as a separate response field and reference it via the section title.
+ - ❌ **Skipping `thresholds` on `scoringHeader` / `dimensionScores`.** Without thresholds the score chip renders gray and unreadable. Always wire threshold-to-color rules.
+
+ ✅ **scoringHeader + dimensionScores + rubric table** for any 0–100 scored output.
+ ✅ **`funnel` block** for any orchestrator digest with stage attrition.
+ ✅ **`banner` block at the bottom** for the CTA (apply, contact, upgrade).
+
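+ A minimal sketch of the second anti-pattern and its fix (block titles and field names here are illustrative):
+
+ ```jsonc
+ // ❌ Broken — `section` has no `blocks` support; this renders as null
+ { "blockType": "section", "title": "Findings", "blocks": [ /* silently dropped */ ] }
+
+ // ✅ Fixed — `grid` accepts `blocks[]`; `section` keeps flat `fields[]`
+ {
+   "blockType": "grid", "columns": 2,
+   "blocks": [
+     { "blockType": "section", "title": "Findings",
+       "fields": [{ "name": "findings", "label": "", "display": "text" }] }
+   ]
+ }
+ ```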
183
+ ---
184
+
185
+ ## Pattern A — Scored report (rubric + scoringHeader + dimensionScores)
186
+
187
+ Use case: any AI step that produces a 0–100 score plus per-dimension breakdown — startup scoring, lead scoring, GBP audits, candidate evaluation, fit reviews.
188
+
189
+ **Visual hierarchy:**
190
+
191
+ 1. `scoringHeader` — big colored hero with score + decision label + identity badges
192
+ 2. `dimensionScores` — per-rubric colored bars (each with their own thresholds)
193
+ 3. `table` "Why these scores" with rationale per dimension
194
+ 4. Body sections (markdown / section blocks)
195
+ 5. `banner` CTA at the bottom
196
+
197
+ **Threshold convention:**
198
+
199
+ - Total score (0–100): `>= 70` emerald, `>= 40` amber, `< 40` rose.
200
+ - Dimension score (0–20): `>= 14` emerald, `>= 8` amber, `< 8` rose.
201
+
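+ In renderer config the convention maps onto `thresholds` arrays like these (colors follow the convention above):
+
+ ```jsonc
+ // Total score (0–100)
+ "thresholds": [
+   { "min": 70, "color": "emerald" },
+   { "min": 40, "color": "amber" },
+   { "min": 0,  "color": "rose" }
+ ]
+
+ // Dimension score (0–20)
+ "thresholds": [
+   { "min": 14, "color": "emerald" },
+   { "min": 8,  "color": "amber" },
+   { "min": 0,  "color": "rose" }
+ ]
+ ```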
+ ### Required `responseStructure` shape
+
+ ```jsonc
+ {
+   "score_total": "REQUIRED integer 0-100",
+   "public_verdict": "REQUIRED short label e.g. 'Strong fit' | 'Promising' | 'Refine and resubmit'",
+   "rubric": "REQUIRED array of {dimension, label, score 0-20, max 20, rationale}, ordered to match dimensionScores",
+   "company_name": "REQUIRED string for scoringHeader.titlePath",
+   "founder_name": "REQUIRED string for identity badge",
+   "stage": "REQUIRED string for identity badge",
+   "themesObserved": "REQUIRED comma-separated string",
+   "headline": "6-10 word warm verdict",
+   "summary": "2-3 sentence executive summary",
+   // ...body fields referenced by markdown / section blocks
+ }
+ ```
+
+ ### Hard rules
+
+ 1. **`rubric` array order MUST match `dimensions[]` order in dimensionScores.** `valuePath: "rubric.0.score"` means the rubric's first entry. Misalignment silently shows the wrong score next to each label.
+ 2. **Sum of dimension scores SHOULD equal `score_total`.** State this explicitly in the prompt — without that constraint the LLM produces inconsistent numbers between hero and rubric.
+ 3. **`decisionPath` should resolve to a SHORT public-friendly label.** Don't echo internal status values (`qualified` / `declined` / `dead`). Map them in the prompt: `qualified → 'Strong fit'`, `declined → 'Promising — refine'`, `dead → 'Refine and resubmit'`.
+ 4. **Use `scoringHeader.identity.badges[]` for cover-card metadata** (founder, stage, themes, country, source) instead of an extra `kpiRow`. Avoids the disconnected "big card on top, separate text card below" feeling.
+
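+ A minimal alignment sketch for rule 1 (dimension names illustrative): the Nth rubric entry must describe the same dimension the Nth `dimensions[]` entry points at.
+
+ ```jsonc
+ // AI output
+ "rubric": [
+   { "label": "Product clarity", "score": 16 },    // ← resolved by rubric.0.score
+   { "label": "Market opportunity", "score": 9 }   // ← resolved by rubric.1.score
+ ]
+ // Renderer config
+ // dimensions[0].valuePath = "rubric.0.score" → label must be "Product clarity"
+ // dimensions[1].valuePath = "rubric.1.score" → label must be "Market opportunity"
+ ```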
+ ### Live worked example
+
+ Reference workflow: AngelHive Pitch Review step `founder_report` (`99a8f552-b822-40c3-855c-16d5bfa0fe1f`). Pull the current config:
+
+ ```bash
+ agentled steps get 99a8f552-b822-40c3-855c-16d5bfa0fe1f founder_report --source live
+ ```
+
+ Renderer config (verbatim, abbreviated for readability):
+
+ ```jsonc
+ {
+   "type": "Config",
+   "config": {
+     "downloadPdf": true,
+     "layout": {
+       "title": "Your AngelHive Review",
+       "subtitle": "{{steps.save_submission.company_name}} — {{steps.save_submission.themes}}",
+       "blocks": [
+         // 1. Big colored hero — score 0-100, decision label, identity badges
+         {
+           "blockType": "scoringHeader",
+           "scorePath": "score_total",
+           "max": 100,
+           "decisionPath": "public_verdict",
+           "summaryPath": "headline",
+           "titlePath": "company_name",
+           "thresholds": [
+             { "min": 70, "color": "emerald" },
+             { "min": 40, "color": "amber" },
+             { "min": 0, "color": "rose" }
+           ],
+           "identity": {
+             "namePath": "company_name",
+             "badges": [
+               { "label": "Founder", "valuePath": "founder_name" },
+               { "label": "Stage", "valuePath": "stage" },
+               { "label": "Themes", "valuePath": "themesObserved" }
+             ]
+           }
+         },
+         // 2. Per-rubric colored bars — each dimension /20 with its own thresholds
+         {
+           "blockType": "dimensionScores",
+           "title": "Pitch readiness rubric",
+           "dimensions": [
+             { "label": "Product clarity", "valuePath": "rubric.0.score", "max": 20,
+               "thresholds": [{"min":14,"color":"emerald"},{"min":8,"color":"amber"},{"min":0,"color":"rose"}] },
+             { "label": "Market opportunity", "valuePath": "rubric.1.score", "max": 20, "thresholds": [/*…same…*/] },
+             { "label": "Team strength", "valuePath": "rubric.2.score", "max": 20, "thresholds": [/*…same…*/] },
+             { "label": "Traction signals", "valuePath": "rubric.3.score", "max": 20, "thresholds": [/*…same…*/] },
+             { "label": "Pitch readiness", "valuePath": "rubric.4.score", "max": 20, "thresholds": [/*…same…*/] }
+           ]
+         },
+         // 3. Rationale table — readable breakdown of WHY each score
+         {
+           "blockType": "table",
+           "title": "Why these scores",
+           "arrayPath": "rubric",
+           "columns": [
+             { "field": "label", "header": "Dimension" },
+             { "field": "score", "header": "Score", "display": "score", "sortable": true },
+             { "field": "max", "header": "Max" },
+             { "field": "rationale", "header": "Why this score" }
+           ]
+         },
+         // 4. Body sections — section blocks (titles ARE template-resolved, so score can appear inline)
+         { "blockType": "markdown", "title": "Our Read", "contentPath": "summary" },
+         {
+           "blockType": "grid", "columns": 2,
+           "blocks": [
+             { "blockType": "section", "title": "Product · {{rubric.0.score}}/20",
+               "fields": [{ "name": "productAnalysis", "label": "", "display": "text" }] },
+             { "blockType": "section", "title": "Market · {{rubric.1.score}}/20",
+               "fields": [{ "name": "marketAnalysis", "label": "", "display": "text" }] }
+           ]
+         },
+         // …Team/Traction, Business Model/Go-to-Market grids…
+         // 5. Banner CTA at the bottom
+         {
+           "blockType": "banner",
+           "variant": "upsell",
+           "icon": "Sparkles",
+           "title": "Apply for an upcoming AngelHive Pitch Night",
+           "contentPath": "aboutAngelHive",
+           "actions": [
+             { "label": "Apply for a Pitch Night",
+               "url": "https://angelhive.pynn.ai/pitch-nights",
+               "icon": "ExternalLink", "variant": "primary" }
+           ]
+         }
+       ]
+     }
+   }
+ }
+ ```
+
+ ### Variant: scored report with radar chart
+
+ For audits that compare a target against a benchmark + competitor average (e.g. SEO/GBP audits), add a `chart` block with `chartType: "radar"`. Reference workflow: Agwanet GBP audit steps `score-target-3` and `generate-full-report` — they use `dimensionScores` with per-dimension `thresholds` PLUS a radar comparing `cible` / `meilleur concurrent` / `moyenne zone`:
+
+ ```jsonc
+ {
+   "blockType": "chart",
+   "chartType": "radar",
+   "title": "Comparison by dimension",
+   "arrayPath": "radar_data",
+   "categoryField": "dimension",
+   "valueFields": [
+     { "field": "cible", "label": "Target", "color": "#f43f5e" },
+     { "field": "meilleur", "label": "Leader", "color": "#10b981" },
+     { "field": "moyenne", "label": "Average", "color": "#f59e0b" }
+   ]
+ }
+ ```
+
+ Pulled live with:
+
+ ```bash
+ agentled steps get <agwanet-workflow-id> score-target-3 --source live
+ ```
+
+ ---
+
+ ## Pattern B — Funnel report (orchestrator digest)
+
+ Use case: orchestrator workflow that processes N items and produces a digest report — daily deal flow, weekly sourcing summary, batch outreach reports, pipeline health dashboards.
+
+ **Visual hierarchy:**
+
+ 1. `funnel` block — stage-by-stage attrition with conversion percentages and bottleneck highlighting
+ 2. `kpiRow` — totals (Total processed, Qualified, Contacted, Paid)
+ 3. `banner` (variant `info` or `warning`) — operator focus / "this run" summary
+ 4. `markdown` — executive summary
+ 5. `list` (style `card`) or `table` — itemized rows
+ 6. `signalList` (variant `risk`) — bottlenecks
+ 7. `list` (style `numbered`) — recommended actions
+
+ ### Funnel block schema
+
+ ```jsonc
+ {
+   "blockType": "funnel",
+   "title": "Pitch Night Funnel",
+   "description": "Sourcing → Scheduled conversion",
+   "stages": [
+     { "label": "Sourced", "valuePath": "report.funnel.sourced", "icon": "Search" },
+     { "label": "Qualified", "valuePath": "report.funnel.qualified", "icon": "CheckCircle" },
+     { "label": "Contacted", "valuePath": "report.funnel.contacted", "icon": "Mail" },
+     { "label": "Paid", "valuePath": "report.funnel.paid", "icon": "CreditCard" },
+     { "label": "Scheduled", "valuePath": "report.funnel.scheduled", "icon": "Calendar" }
+   ],
+   "showAbsolute": true,       // show raw counts next to bars
+   "showConversion": true,     // show stage-to-stage % between bars
+   "emphasizeBottleneck": true // highlight the largest drop in red
+ }
+ ```
+
+ The funnel block uses `stages: [{ label, valuePath, icon? }]` — NOT `arrayPath`. Each stage's count is resolved by walking `valuePath` against the AI step's output. Bottleneck detection is automatic: the largest stage-to-stage drop is highlighted rose; anti-funnel growth is highlighted emerald.
+
+ ### Required `responseStructure` shape
+
+ ```jsonc
+ {
+   "report": {
+     "title": "string",
+     "headline": "string — most urgent operator next action",
+     "summary": "string — 2 sentences",
+     "funnel": {
+       "sourced": "REQUIRED number, never null",
+       "qualified": "REQUIRED number, never null",
+       "contacted": "REQUIRED number, never null",
+       "paid": "REQUIRED number, never null",
+       "scheduled": "REQUIRED number, never null"
+     },
+     "weeklyDeltas": { "sourced": 0, "contacted": 0, "paid": 0 },
+     "thisRun": { "scored": 0, "qualified_total": 0, "to_contact": 0, "outreach_queued": 0 },
+     "bottlenecks": ["string"],
+     "recommendations": ["string"]
+   }
+ }
+ ```
+
+ **Hard rule on null counts:** the funnel block renders `null` as a dash, which breaks the conversion math and the bottleneck detection. State this explicitly in the prompt: *"For every count field, output `0` if the underlying metric is null/blank/missing. Never emit null."* Restate the rule at the bottom of the prompt as a final-pass check.
+
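+ A minimal sketch of compliant vs. non-compliant funnel output (counts illustrative):
+
+ ```jsonc
+ // ✅ Missing metrics emitted as 0 — conversion math and bottleneck detection stay intact
+ "funnel": { "sourced": 42, "qualified": 11, "contacted": 4, "paid": 0, "scheduled": 0 }
+
+ // ❌ null renders as a dash and breaks conversion % and bottleneck highlighting
+ "funnel": { "sourced": 42, "qualified": 11, "contacted": null, "paid": null, "scheduled": null }
+ ```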
+ ### Live worked example
+
+ Reference workflow: AngelHive Daily Funnel step `generate_report` (`58bb623f-cbc2-4e5c-bcb4-2855ce64bb56`). Pull the current config:
+
+ ```bash
+ agentled steps get 58bb623f-cbc2-4e5c-bcb4-2855ce64bb56 generate_report --source live
+ ```
+
+ Renderer config (verbatim, abbreviated):
+
+ ```jsonc
+ {
+   "type": "Config",
+   "config": {
+     "layout": {
+       "blocks": [
+         // 1. Funnel — stages with conversion + bottleneck
+         {
+           "blockType": "funnel",
+           "title": "Pitch Night Funnel",
+           "description": "Sourcing → Scheduled conversion",
+           "stages": [
+             { "icon": "Search", "label": "Sourced", "valuePath": "report.funnel.sourced" },
+             { "icon": "CheckCircle", "label": "Qualified", "valuePath": "report.funnel.qualified" },
+             { "icon": "Mail", "label": "Contacted", "valuePath": "report.funnel.contacted" },
+             { "icon": "CreditCard", "label": "Paid", "valuePath": "report.funnel.paid" },
+             { "icon": "Calendar", "label": "Scheduled", "valuePath": "report.funnel.scheduled" }
+           ],
+           "showAbsolute": true,
+           "showConversion": true,
+           "emphasizeBottleneck": true
+         },
+         // 2. KPI row — this-run + weekly deltas
+         {
+           "blockType": "kpiRow",
+           "kpis": [
+             { "icon": "Sparkles", "label": "Scored this run", "valuePath": "report.thisRun.scored", "format": "number" },
+             { "icon": "CheckCircle", "label": "Outreach queued", "valuePath": "report.thisRun.outreach_queued", "format": "number" },
+             { "icon": "TrendingUp", "label": "Sourced 7d", "valuePath": "report.weeklyDeltas.sourced", "format": "number" },
+             { "icon": "Mail", "label": "Contacted 7d", "valuePath": "report.weeklyDeltas.contacted", "format": "number" },
+             { "icon": "CreditCard", "label": "Paid 7d", "valuePath": "report.weeklyDeltas.paid", "format": "number" }
+           ]
+         },
+         // 3. Banners — this-run summary (info) + operator focus (warning)
+         {
+           "blockType": "banner", "variant": "info", "icon": "Activity",
+           "title": "This run", "contentPath": "report.thisRunSummary"
+         },
+         {
+           "blockType": "banner", "variant": "warning", "icon": "TriangleAlert",
+           "title": "Operator focus", "contentPath": "report.headline"
+         },
+         // 4. Executive summary
+         { "blockType": "markdown", "title": "Executive Summary", "contentPath": "report.summary" },
+         // 5. Itemized rows (card-list style for variable-length items)
+         {
+           "blockType": "list", "style": "card",
+           "title": "Upcoming Editions",
+           "arrayPath": "report.upcomingEditions",
+           "itemFields": [ /* per-item fields with thresholds + display hints */ ]
+         },
+         // 6. Side-by-side: source breakdown + bottlenecks
+         {
+           "blockType": "grid", "columns": 2,
+           "blocks": [
+             { "blockType": "table", "title": "Source Breakdown", "arrayPath": "report.sourceBreakdown",
+               "columns": [
+                 { "field": "source", "header": "Source" },
+                 { "field": "count", "header": "Count", "display": "threshold",
+                   "thresholds": { "red": 0, "orange": 3, "green": 8 }, "sortable": true }
+               ]
+             },
+             { "blockType": "signalList", "variant": "risk",
+               "title": "Bottlenecks", "arrayPath": "report.bottlenecks" }
+           ]
+         },
+         // 7. Recommendations — numbered list at the bottom
+         {
+           "blockType": "list", "style": "numbered",
+           "title": "Recommended Actions",
+           "arrayPath": "report.recommendations"
+         }
+       ]
+     }
+   }
+ }
+ ```
@@ -79,7 +79,7 @@
  "id": "update-external-system",
  "type": "appAction",
  "name": "Update External System",
- "app": { "id": "http-request", "actionId": "request", "source": "native" },
+ "app": { "id": "http-request", "actionId": "http-request.request", "source": "native" },
  "stepInputData": {
  "url": "https://example.com/api/entities/{{input.entity_id}}",
  "method": "PATCH",
@@ -91,7 +91,7 @@
  "id": "alert-on-breach",
  "type": "appAction",
  "name": "Slack Alert (breaches only)",
- "app": { "id": "webhook", "actionId": "trigger", "source": "native" },
+ "app": { "id": "webhook", "actionId": "webhook.trigger", "source": "native" },
  "entryConditions": {
  "onCriteriaFail": "skip",
  "conditionText": "Only alert when at least one threshold is breached.",
@@ -26,7 +26,7 @@
  "id": "read-leads",
  "type": "appAction",
  "name": "Read Leads",
- "app": { "id": "kg", "actionId": "read-list", "source": "native" },
+ "app": { "id": "kg", "actionId": "kg.read-list", "source": "native" },
  "stepInputData": {
  "listKey": "{{input.source_list_key}}",
  "limit": "500"
@@ -44,7 +44,7 @@
  "id": "read-candidates",
  "type": "appAction",
  "name": "Read Candidate Pool",
- "app": { "id": "kg", "actionId": "read-list", "source": "native" },
+ "app": { "id": "kg", "actionId": "kg.read-list", "source": "native" },
  "stepInputData": {
  "listKey": "{{input.candidate_list_key}}",
  "limit": "500"
@@ -348,6 +348,38 @@ A legacy `{ contextKey, value }` body shape is still accepted as a **compatibili
  6. Test: `start_workflow` with sample input
  7. Check results: `get_execution` to see step outputs
 
+ ## Workspace Surfaces
+
+ The workspace home and sidebar carry workspace-level UI state on `Workspace.metadata`. These surfaces are read-only via the MCP today (most tooling does not need to write them), but agents authoring templated workspaces should know they exist:
+
+ ### Pinned outputs (sidebar shortcuts)
+
+ `Workspace.metadata.pinnedOutputs[]` lists output pages that appear in the sidebar **after Knowledge & Data**, always visible regardless of which workflow is currently open. Operators set these manually from each output page's configuration sheet (toggle: *"Pin to workspace home"*). Pin sparingly: only pin output pages that are useful as direct workspace-level destinations, such as a recurring report, scoring dashboard, or canonical results list. Do not pin every output page, implementation detail, approval surface, or one-off execution artifact; normal workflow output pages remain accessible from the workflow itself. Each entry:
+
+ ```json
+ {
+   "pipelineId": "wfl_abc123",
+   "pipelinePathname": "deal-flow",
+   "outputPagePathname": "weekly-report",
+   "label": "Weekly Investor Report",
+   "iconName": "FileText",
+   "pipelineName": "Deal Sourcing",
+   "colorTextClass": "text-emerald-700 dark:text-emerald-400",
+   "pinnedAt": "2026-05-09T10:00:00Z"
+ }
+ ```
+
+ All snapshot fields capture the source values at pin time and do not auto-update if the workflow or page is later renamed or restyled. To refresh, the operator unpins and re-pins.
+
+ Sidebar rendering uses each field directly:
+ - `label` is the primary line.
+ - `pipelineName` renders in small muted text below the label, so two pins with the same `label` from different workflows are distinguishable.
+ - `iconName` resolves to a lucide icon (defaults to `Pin` if missing or unknown).
+ - `colorTextClass` is applied as a Tailwind class on the icon, mirroring the source workflow's accent. Use the same shape produced by the workflow style picker (e.g. `text-emerald-700 dark:text-emerald-400`).
+ - The pin row renders as **active** when the operator's current URL matches the pin's target.
+
+ When seeding a workspace via templates, pre-populate `pinnedOutputs[]` only with the curated artifacts the operator should access directly from the workspace sidebar on day one. Supply `pipelineName` and `colorTextClass` so pins render properly (otherwise they fall back to the muted default, which still works but loses the visual cue). Pin entries must reference output pages that exist on workflows in the same workspace; entries pointing at deleted or renamed pipelines still render, but the URL resolves to a 404 page until the pin is removed.
+
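+ A minimal template-seeding sketch showing where the entries sit on the workspace object (the workflow, page, and IDs here are illustrative, reusing the example entry above):
+
+ ```json
+ {
+   "metadata": {
+     "pinnedOutputs": [
+       {
+         "pipelineId": "wfl_abc123",
+         "pipelinePathname": "deal-flow",
+         "outputPagePathname": "weekly-report",
+         "label": "Weekly Investor Report",
+         "iconName": "FileText",
+         "pipelineName": "Deal Sourcing",
+         "colorTextClass": "text-emerald-700 dark:text-emerald-400",
+         "pinnedAt": "2026-05-09T10:00:00Z"
+       }
+     ]
+   }
+ }
+ ```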
  ## Workspace Awareness
 
  Be explicit about which Agentled workspace you are operating on.
@@ -431,7 +463,7 @@ Every workflow needs at minimum: a trigger step, one or more action steps, and a
431
463
  "id": "enrich",
432
464
  "type": "appAction",
433
465
  "name": "Enrich Company",
434
- "app": { "id": "agentled", "actionId": "get-linkedin-company-from-url", "source": "native" },
466
+ "app": { "id": "agentled", "actionId": "agentled.get-linkedin-company-from-url", "source": "native" },
435
467
  "stepInputData": { "profileUrls": "{{input.company_url}}" },
436
468
  "next": { "stepId": "next-step" }
437
469
  }