@rishildi/ldi-process-skills-test 0.0.18 → 0.0.19

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,5 +1,5 @@
  // AUTO-GENERATED by scripts/embed-skills.ts — do not edit
- // Generated at: 2026-04-04T23:26:58.038Z
+ // Generated at: 2026-04-05T11:24:45.161Z
  export const EMBEDDED_SKILLS = [
  {
  name: "create-fabric-lakehouses",
@@ -71,11 +71,11 @@ export const EMBEDDED_SKILLS = [
  files: [
  {
  relativePath: "SKILL.md",
- content: "---\nname: create-fabric-process-workflow-agent\ndescription: >\n Use this skill to create an orchestration agent definition (agent.md) for any\n Microsoft Fabric technical process. The user describes what they want to automate;\n the skill produces a self-contained agent.md. When run, the agent maps the process\n to available Fabric process skills, automatically creates any missing skills using\n create-fabric-process-skill, logs all changes to an audit trail, and orchestrates\n the full process end-to-end. The process skills library grows with every run.\n Triggers on: \"create a process workflow agent\", \"build an orchestration agent\n for [process]\", \"create an agent that automates [process]\", \"orchestrate\n [process] into an agent\". Does NOT trigger for creating individual process\n skills, running an agent, writing code, or one-off analysis.\nlicense: MIT\ncompatibility: Python 3.8+ required for scripts/\n---\n\n# Create Fabric Process Workflow Agent\n\nCreates a concise, self-contained `agent.md` that defines an orchestration agent\nfor a Microsoft Fabric technical process. No process skills need to exist upfront.\nWhen run, the agent maps requirements to available skills, creates any that are\nmissing, and builds up the process skills library over time.\n\n## Core Governance Rules\n\nThese rules are non-negotiable. They must be embedded verbatim in every generated\n`agent.md` so they are active at runtime.\n\n- **RULE 1 — Never execute autonomously.** Never run terminal commands, API calls,\n or scripts directly. Present every command in a fenced code block with the\n insert-into-terminal icon. The user runs it and reports back before proceeding.\n- **RULE 2 — Pre-empt; don't react.** Before any step, ask pointed questions about\n permissions, tooling, and dependencies. Do not collect parameters and then\n discover blockers mid-execution.\n- **RULE 3 — No silent approach changes.** If a blocker is found with the chosen\n approach, surface it and present alternatives. Let the user decide. Never switch\n silently.\n- **RULE 4 — No inference from context.** Collect all parameters from the user or\n the current prompt. Do not pre-populate from prior chat history, previous runs,\n or attached files not explicitly part of the current request.\n- **RULE 5 — Respect the user's skill level and environment.** Do not steer toward\n an approach the agent finds easier to generate. Match the user's comfort level,\n installed tooling, and stated preferences.\n- **RULE 6 — Stay within skill boundaries.** Generate only what skill definitions\n describe. On any failure: explain the cause from the error, offer the simplest\n manual or UI fallback, ask whether to skip.\n- **RULE 7 — Append to CHANGE_LOG.md after every step.** Include: step number,\n what was done, outcome (success/failure/skipped), and any notable decisions.\n\n## Inputs\n\n| Parameter | Description | Example |\n|-----------|-------------|---------|\n| `PROCESS_NAME` | Short name for the process (lowercase, hyphens) | `monthly-budget-consolidation` |\n| `REQUIREMENTS` | Full description of the process and each of its steps | `\"1) Collect data from five Excel files... 2) Summarise by category...\"` |\n| `SECTIONS` | Sub-agent sections to include (default: all four) | `impl-plan, biz-process, architecture, governance` |\n| `USERNAME` | Used in output folder naming | `rishi` |\n\n## Workflow\n\n- [ ] **Collect** — If `PROCESS_NAME`, `REQUIREMENTS`, or `USERNAME` are missing, ask for them.\n\n- [ ] **Analyse discovery questions** — Read the requirements and identify the\n environment-specific questions that determine which approaches are viable. For each question:\n - Name the specific activity that needs the permission or tool\n - Offer concrete options (not yes/no)\n - State what the agent does differently based on the answer\n Group questions by domain (permissions, tooling, execution preferences, data access,\n existing infrastructure). Ask only about domains the requirements actually need.\n Embed the questionnaire as **Sub-Agent 0: Environment Discovery** in the generated agent.md.\n\n- [ ] **Confirm sections** — Present the four standard sections with descriptions\n (see `references/section-descriptions.md`). Ask which to include. Default: all four.\n Wait for explicit confirmation before drafting.\n\n- [ ] **Draft agent.md** — Use `assets/agent-template.md` as the base.\n - Substitute `{PROCESS_NAME}` and a ≤3-sentence `{REQUIREMENTS_SUMMARY}`.\n - Remove excluded sections. Keep each sub-agent block ≤25 lines.\n - Do not name any specific process skill or technology — all resolved at runtime.\n - Do not hardcode company names, specific values, or environment paths.\n\n- [ ] **Validate** — Present the draft. Ask: *\"Does this accurately reflect the process? Anything unclear?\"*\n Refine until the user confirms.\n\n- [ ] **Scaffold** — Run `python scripts/scaffold_output.py --process-name $PROCESS_NAME --username $USERNAME --sections $SECTIONS`.\n Write the confirmed agent.md to the returned `agent_md_path`.\n\n- [ ] **Confirm** — Report the output root path and list all created subfolders.\n\n## Output Format\n\n```\noutputs/\n└── {process-name}_{YYYY-MM-DD_HH-MM}_{username}/\n ├── agent.md ← self-contained orchestration agent definition\n ├── CHANGE_LOG.md ← audit trail; updated as agent runs\n ├── 01-implementation-plan/ ← empty; populated when agent runs\n ├── 02-business-process/ ← empty; populated when agent runs\n ├── 03-solution-architecture/ ← empty; populated when agent runs\n ├── 04-governance/ ← empty; populated when agent runs\n └── NN-step-name/ ← additional subfolders for execution steps\n ├── generate_thing.py intermediate (generator script)\n └── thing.ipynb final deliverable (generated notebook)\n```\n\n`CHANGE_LOG.md` is initialised empty and updated by the agent each time it runs.\n\n### Intermediate vs. final artefacts\n\n| Classification | Description | Examples |\n|----------------|-------------|----------|\n| **Final** | The deliverable the user runs or deploys | `.ipynb` notebooks, `.sql` scripts, `.md` documentation |\n| **Intermediate** | Scripts that generate the final artefacts | `generate_*.py`, `generate_*.ps1` |\n\n- Intermediate artefacts live alongside their final outputs (same subfolder).\n- Label both types clearly when presenting outputs to the user.\n- Intermediate scripts must be deterministic and re-runnable.\n\n### Sub-agents in the generated agent.md\n\n| # | Section | Output document |\n|---|---------|-----------------|\n| 0 | Environment Discovery | `00-environment-discovery/environment-profile.md` |\n| 1 | Implementation Plan | `01-implementation-plan/implementation-plan.md` |\n| 2 | Business Process Mapping | `02-business-process/sop.md` |\n| 3 | Solution Architecture | `03-solution-architecture/specification.md` |\n| 4 | Security, Testing & Governance | `04-governance/governance-plan.md` |\n\n## Gotchas\n\n- **Do not check for or create process skills during skill execution.** All skill\n discovery, creation of missing skills, and audit logging happen inside Sub-Agent 2\n when the generated agent.md is run.\n- **Do not execute sub-agents** during skill execution — `agent.md` is a definition only.\n- Do not name specific tools, technologies, or process skills in the generated agent.md.\n- **Environment discovery must be contextual, not generic.** Derive questions from the\n requirements. If the process doesn't involve workspaces, don't ask about workspace\n creation permissions. The questionnaire should read like a knowledgeable consultant\n scoping a project, not a bureaucratic form.\n- Confirm sections **before** drafting, not after.\n- Keep each sub-agent block ≤25 lines to avoid context overload when the agent runs.\n\n## Available Scripts\n\n- **`scripts/scaffold_output.py`** — Creates the dated output folder structure including\n an empty `CHANGE_LOG.md`. Run: `python scripts/scaffold_output.py --help`\n",
+ content: "---\nname: create-fabric-process-workflow-agent\ndescription: >\n Use this skill to create an orchestration agent definition (agent.md) for any\n Microsoft Fabric technical process. The user describes what they want to automate;\n the skill produces a self-contained agent.md. When run, the agent maps the process\n to available Fabric process skills, automatically creates any missing skills using\n create-fabric-process-skill, logs all changes to an audit trail, and orchestrates\n the full process end-to-end. The process skills library grows with every run.\n Triggers on: \"create a process workflow agent\", \"build an orchestration agent\n for [process]\", \"create an agent that automates [process]\", \"orchestrate\n [process] into an agent\". Does NOT trigger for creating individual process\n skills, running an agent, writing code, or one-off analysis.\nlicense: MIT\ncompatibility: Python 3.8+ required for scripts/\n---\n\n# Create Fabric Process Workflow Agent\n\nCreates a concise, self-contained `agent.md` that defines an orchestration agent\nfor a Microsoft Fabric technical process. No process skills need to exist upfront.\nWhen run, the agent maps requirements to available skills, creates any that are\nmissing, and builds up the process skills library over time.\n\n## Core Governance Rules\n\nThese rules are non-negotiable. They must be embedded verbatim in every generated\n`agent.md` so they are active at runtime.\n\n- **RULE 1 — Never execute autonomously.** Never run terminal commands, API calls,\n or scripts directly. Present every command in a fenced code block with the\n insert-into-terminal icon. The user runs it and reports back before proceeding.\n- **RULE 2 — Parameter gate before every execution step.** Before generating any\n artefact for a step, verify every required parameter is resolved. Any parameter\n deferred during discovery (marked `[TBC]`) must be asked for explicitly before\n proceeding. Never silently skip a parameter or substitute an empty value.\n- **RULE 3 — No silent approach changes.** If a blocker is found with the chosen\n approach, surface it and present alternatives. Let the user decide. Never switch\n silently. Approach constraints by step type:\n - Local file upload (CSV/PDF from operator's machine): **notebook not possible** —\n options are script, CLI commands, or manual. For 50+ files, note that script/CLI\n is sequential and slow; suggest manual upload via Fabric Files UI instead.\n - Schema creation: notebook (Spark SQL) or CLI; no native Fabric UI for lakehouses.\n - Shortcuts: CLI (`fab ln`) or script; notebook cannot run `fab ln` natively.\n- **RULE 4 — No inference from context.** Collect all parameters from the user or\n the current prompt. Do not pre-populate from prior chat history, previous runs,\n or attached files not explicitly part of the current request.\n- **RULE 5 — Respect the user's skill level and environment.** Do not steer toward\n an approach the agent finds easier to generate. Match the user's comfort level,\n installed tooling, and stated preferences.\n- **RULE 6 — Stay within skill boundaries.** Generate only what skill definitions\n describe. On any failure: explain the cause from the error, offer the simplest\n manual or UI fallback, ask whether to skip.\n- **RULE 7 — Append to CHANGE_LOG.md after every step.** Include: step number,\n what was done, outcome (success/failure/skipped), and any notable decisions.\n- **RULE 8 — Two-question post-step pattern.** After each execution step: (Q1) ask\n whether the previous artefact ran correctly — if not, get the error and resolve it\n before proceeding; (Q2) propose the next step by name, state the planned approach\n and any implications, offer Yes (generate it) or No (choose a different approach\n or manual). Update the SOP and CHANGE_LOG to reflect any runtime decisions.\n\n## Inputs\n\n| Parameter | Description | Example |\n|-----------|-------------|---------|\n| `PROCESS_NAME` | Short name for the process (lowercase, hyphens) | `monthly-budget-consolidation` |\n| `REQUIREMENTS` | Full description of the process and each of its steps | `\"1) Collect data from five Excel files... 2) Summarise by category...\"` |\n| `SECTIONS` | Sub-agent sections to include (default: all four) | `impl-plan, biz-process, architecture, governance` |\n| `USERNAME` | Used in output folder naming | `rishi` |\n\n## Workflow\n\n- [ ] **Collect** — If `PROCESS_NAME`, `REQUIREMENTS`, or `USERNAME` are missing, ask for them.\n\n- [ ] **Analyse discovery questions** — Read the requirements and identify the\n environment-specific questions that determine which approaches are viable. For each question:\n - Name the specific activity that needs the permission or tool\n - Offer concrete options (not yes/no)\n - State what the agent does differently based on the answer\n Group questions by domain (permissions, tooling, execution preferences, data access,\n existing infrastructure). Ask only about domains the requirements actually need.\n Embed the questionnaire as **Sub-Agent 0: Environment Discovery** in the generated agent.md.\n\n- [ ] **Confirm sections** — Present the four standard sections with descriptions\n (see `references/section-descriptions.md`). Ask which to include. Default: all four.\n Wait for explicit confirmation before drafting.\n\n- [ ] **Draft agent.md** — Use `assets/agent-template.md` as the base.\n - Substitute `{PROCESS_NAME}` and a ≤3-sentence `{REQUIREMENTS_SUMMARY}`.\n - Remove excluded sections. Keep each sub-agent block ≤25 lines.\n - Do not name any specific process skill or technology — all resolved at runtime.\n - Do not hardcode company names, specific values, or environment paths.\n\n- [ ] **Validate** — Present the draft. Ask: *\"Does this accurately reflect the process? Anything unclear?\"*\n Refine until the user confirms.\n\n- [ ] **Scaffold** — Run `python scripts/scaffold_output.py --process-name $PROCESS_NAME --username $USERNAME --sections $SECTIONS`.\n Write the confirmed agent.md to the returned `agent_md_path`.\n\n- [ ] **Confirm** — Report the output root path and list all created subfolders.\n\n## Output Format\n\n```\noutputs/\n└── {process-name}_{YYYY-MM-DD_HH-MM}_{username}/\n ├── agent.md ← self-contained orchestration agent definition\n ├── CHANGE_LOG.md ← audit trail; updated as agent runs\n ├── 01-implementation-plan/ ← empty; populated when agent runs\n ├── 02-business-process/ ← empty; populated when agent runs\n ├── 03-solution-architecture/ ← empty; populated when agent runs\n ├── 04-governance/ ← empty; populated when agent runs\n ├── 05-step-name/ ← execution step 1 (numbered from 05)\n │ └── thing.ipynb deliverable only (.ipynb / .ps1 / cli-commands.md)\n └── 06-step-name/ execution step 2\n └── thing.ps1\n```\n\n`CHANGE_LOG.md` is initialised empty and updated by the agent each time it runs.\n\n### Intermediate vs. final artefacts\n\n| Classification | Description | Examples |\n|----------------|-------------|----------|\n| **Final** | The deliverable the user runs or deploys | `.ipynb` notebooks, `.sql` scripts, `.md` documentation |\n| **Intermediate** | Scripts that generate the final artefacts | `generate_*.py`, `generate_*.ps1` |\n\n- Intermediate artefacts live alongside their final outputs (same subfolder).\n- Label both types clearly when presenting outputs to the user.\n- Intermediate scripts must be deterministic and re-runnable.\n\n### Sub-agents in the generated agent.md\n\n| # | Section | Output document |\n|---|---------|-----------------|\n| 0 | Environment Discovery | `00-environment-discovery/environment-profile.md` |\n| 1 | Implementation Plan | `01-implementation-plan/implementation-plan.md` |\n| 2 | Business Process Mapping | `02-business-process/sop.md` |\n| 3 | Solution Architecture | `03-solution-architecture/specification.md` |\n| 4 | Security, Testing & Governance | `04-governance/governance-plan.md` |\n| — | **Execution Phase** | `05-[step]/`, `06-[step]/` ... + `COMPLETION_SUMMARY.md` |\n\nThe execution phase runs after all planning sub-agents are reviewed and confirmed.\nEach SOP step becomes a numbered execution subfolder. The SOP is updated in place\nthroughout execution to reflect runtime decisions (approach changes, errors, manual\nselections). CHANGE_LOG.md is updated after every step.\n\n## Gotchas\n\n- **Do not check for or create process skills during skill execution.** All skill\n discovery, creation of missing skills, and audit logging happen inside Sub-Agent 2\n when the generated agent.md is run.\n- **Do not execute sub-agents** during skill execution — `agent.md` is a definition only.\n- Do not name specific tools, technologies, or process skills in the generated agent.md.\n- **Environment discovery must be contextual, not generic.** Derive questions from the\n requirements. If the process doesn't involve workspaces, don't ask about workspace\n creation permissions. The questionnaire should read like a knowledgeable consultant\n scoping a project, not a bureaucratic form.\n- Confirm sections **before** drafting, not after.\n- Keep each sub-agent block ≤25 lines to avoid context overload when the agent runs.\n\n## Available Scripts\n\n- **`scripts/scaffold_output.py`** — Creates the dated output folder structure including\n an empty `CHANGE_LOG.md`. Run: `python scripts/scaffold_output.py --help`\n",
  },
  {
  relativePath: "assets/agent-template.md",
- content: "# Orchestration Agent: {PROCESS_NAME}\r\n\r\n## Context\r\n\r\n**Process**: {PROCESS_NAME}\r\n**Requirements**: {REQUIREMENTS_SUMMARY}\r\n\r\n---\r\n\r\n## How to Run This Agent\r\n\r\n**Start with Sub-Agent 0 (Environment Discovery).** This gathers the user's\r\npermissions, tooling, and preferences so that every subsequent sub-agent produces\r\nplans tailored to their actual environment. Do not skip this step.\r\n\r\nThen execute each remaining sub-agent in sequence:\r\n\r\n1. Use only the inputs and instructions provided in this file.\r\n2. Produce the specified output document in the designated subfolder.\r\n3. Present the output to the user; ask clarifying questions if anything is unclear.\r\n4. Refine until the user explicitly confirms the output.\r\n5. Append a timestamped entry to `CHANGE_LOG.md` recording what was produced or decided.\r\n6. Pass the confirmed output as the primary input to the next sub-agent.\r\n **Every sub-agent must also read `00-environment-discovery/environment-profile.md`**\r\n and respect the path decisions recorded there.\r\n\r\n> 🛑 **HARD STOP RULE — applies to every sub-agent and every execution step:**\r\n> After producing any output, you MUST stop and wait. Do not proceed to the next\r\n> step until the user responds with explicit confirmation (e.g. \"confirmed\",\r\n> \"looks good\", \"proceed\"). A lack of objection is NOT confirmation. Never\r\n> self-confirm or assume approval. Never run two steps in the same turn.\r\n\r\n**Do not produce code, scripts, or data artefacts not described in each sub-agent below.**\r\n\r\n### Parameter Resolution Protocol\r\n\r\nWhen invoking any skill, **always resolve parameters from existing documents before\r\nasking the user**. Check in this order:\r\n\r\n1. `00-environment-discovery/environment-profile.md` — provides: deployment approach,\r\n capacity name, workspace names, access control method, Object ID resolution approach,\r\n environment (dev/prod), credential management approach, available tooling\r\n2. The confirmed SOP (`02-business-process/sop.md`) — provides: lakehouse names,\r\n schema names, shared parameters, step inputs and outputs\r\n3. The implementation plan (`01-implementation-plan/implementation-plan.md`) — provides:\r\n naming conventions, task-level decisions\r\n\r\n**Only ask the user for parameters not found in any of these documents.** Summarise\r\nwhat was resolved automatically before asking for what remains. Never ask for a\r\nparameter that was explicitly captured during environment discovery or planning.\r\n\r\n### Notebook Documentation Standard\r\n\r\nEvery Fabric notebook produced by any skill **must** include a numbered markdown cell\r\nimmediately above each code cell. Each markdown cell must:\r\n\r\n1. State the cell number and a short title (e.g. `## Cell 1 — Install dependencies`).\r\n2. Explain **what** the code cell does in 1–2 sentences.\r\n3. Explain **how to use it**: variables to change, flags to toggle, prerequisites.\r\n\r\nAll transformation logic and design rationale must be **embedded as markdown cells inside\r\nthe notebook** — not maintained as separate documentation files. The notebook is the single\r\nsource of truth. A reader must be able to understand what each cell does, why the logic was\r\nchosen, and how to run it without opening any other file.\r\n\r\n### Output Conventions\r\n\r\n- Each sub-agent writes to its own **numbered subfolder** (`01-implementation-plan/`,\r\n `02-business-process/`, etc.). Execution steps continue the numbering (e.g.,\r\n `05-execution/`, `06-gold-layer/`).\r\n- Within each subfolder, only present **final deliverables** to the user: notebooks,\r\n SQL scripts, and documentation they run or deploy. Generator scripts (e.g.\r\n `generate_notebook.py`) are internal tools the skill runs to produce deliverables —\r\n **never present generator scripts as outputs and never generate notebook or script\r\n content directly**. Run the generator script via Bash; present what it produces.\r\n- All transformation logic and design rationale must be **embedded as markdown cells\r\n inside notebooks** — not maintained as separate documentation files. The notebook\r\n is the single source of truth.\r\n\r\n---\r\n\r\n## Sub-Agent 0: Environment Discovery\r\n\r\n**Input**: Requirements above\r\n**Output**: `00-environment-discovery/environment-profile.md`\r\n\r\nThis sub-agent runs **before anything is planned or built**. Its sole purpose is to\r\nunderstand the operator's environment, permissions, and preferences so that every\r\nsubsequent sub-agent produces plans tailored to what is actually possible and practical.\r\n\r\n**Invoke the `fabric-process-discovery` skill to run this step.**\r\n\r\nThe skill defines the full adaptive questioning tree — which questions to ask, in what\r\norder, and how to branch based on answers. Key principles:\r\n\r\n- **Read the requirements first.** Only ask about domains the process actually needs.\r\n A CSV ingestion job does not need workspace creation questions. A full pipeline\r\n needs all domains.\r\n- **Present all questions in a single turn**, grouped by domain. Never ask one question\r\n at a time. Target **5–7 questions** for most processes; simpler ones may need 3–4.\r\n- **Branch adaptively.** The skill defines conditional follow-ups — apply them after\r\n the first-turn answers before presenting the confirmation summary.\r\n- **Confirm before proceeding.** After processing answers, present the path table and\r\n ask: *\"Is this accurate, or anything to correct before I proceed to planning?\"*\r\n Wait for explicit confirmation.\r\n\r\nThe skill covers these domains (use only those relevant to the requirements):\r\n\r\n| Domain | When to include |\r\n|--------|----------------|\r\n| **A — Workspace access** | Any step creates or uses workspaces |\r\n| **A — Domain assignment** | Requirements mention domain governance (only if creating workspaces) |\r\n| **A — Access control / groups** | Process assigns roles to users or groups |\r\n| **B — Deployment approach** | Any step generates notebooks, scripts, or CLI commands |\r\n| **C — Source data location** | Process ingests files (CSV, PDF, etc.) |\r\n| **D — Capacity / SKU** | Process involves compute-intensive operations |\r\n\r\n**Critical framing rules from the skill — do not deviate:**\r\n\r\n1. **Deployment approach is NOT a CLI vs no-CLI question.** All three options (PySpark\r\n notebook, PowerShell script, CLI commands) use the Fabric CLI internally. The\r\n question is only about *how* the operator runs it. Present it as:\r\n - **A) PySpark notebook** — imported into Fabric, run cell-by-cell in the Fabric UI\r\n - **B) PowerShell script** — generated `.ps1` reviewed and run locally\r\n - **C) CLI commands** — individual `fab` commands run interactively in the terminal\r\n\r\n2. **Workspace creation must branch correctly.** If the operator cannot create\r\n workspaces, immediately ask for the exact names of existing hub and spoke\r\n workspaces — do not ask about domain assignment or access control (they only\r\n apply when creating).\r\n\r\n3. **Entra group Object IDs are a known technical constraint.** When groups are\r\n involved, always surface this: *\"The Fabric API requires Object IDs — display\r\n names are not accepted programmatically.\"* Then offer the resolution options\r\n (have IDs / Azure CLI / PowerShell Graph / UI manual).\r\n\r\n4. **Never leave the user blocked.** If a step requires permissions they don't have,\r\n offer: (a) skip and mark as manual, (b) produce a spec for their admin, or\r\n (c) substitute a UI-based workaround.\r\n\r\nOnce the environment profile is confirmed, save it as\r\n`00-environment-discovery/environment-profile.md` and append to `CHANGE_LOG.md`:\r\n`[{DATETIME}] Sub-Agent 0 complete — environment-profile.md produced. [N] path decisions recorded. Manual gates: [list or none].`\r\n\r\n🛑 **STOP — present the environment profile and ask: \"Does this look correct? Please confirm before I move to the implementation plan.\"** Do not proceed until the user confirms.\r\n\r\n---\r\n\r\n## Sub-Agent 1: Implementation Plan\r\n\r\n**Input**: Requirements above\r\n**Output**: `01-implementation-plan/implementation-plan.md`\r\n\r\nProduce a phased implementation plan using the structure below. Keep ≤50 lines.\r\nUpdate the RAID log whenever a later sub-agent raises a new risk or dependency.\r\n\r\n```markdown\r\n---\r\ngoal: {PROCESS_NAME} — Implementation Plan\r\nstatus: Planned\r\ndate_created: {DATE}\r\n---\r\n\r\n# Implementation Plan: {PROCESS_NAME}\r\n\r\n## Requirements & Constraints\r\n- REQ-001: [Requirement drawn from the context above]\r\n- CON-001: [Key constraint]\r\n\r\n## Phases\r\n\r\n### Phase 1: [Phase name]\r\n| Task | Description | Status |\r\n|----------|-------------|---------|\r\n| TASK-001 | [Task] | Planned |\r\n| TASK-002 | [Task] | Planned |\r\n\r\n### Phase 2: [Phase name]\r\n| Task | Description | Status |\r\n|----------|-------------|---------|\r\n| TASK-003 | [Task] | Planned |\r\n\r\n## RAID Log\r\n| Type | ID | Description | Mitigation / Action | Status |\r\n|------------|-------|--------------|---------------------|--------|\r\n| Risk | R-001 | [Risk] | [Mitigation] | Open |\r\n| Assumption | A-001 | [Assumption] | [Validation] | Open |\r\n| Issue | I-001 | [Issue] | [Resolution] | Open |\r\n| Dependency | D-001 | [Dependency] | [Owner] | Open |\r\n```\r\n\r\nRules:\r\n- Use REQ-, CON-, TASK-, R-, A-, I-, D- prefixes consistently.\r\n- Task status values: Planned / In Progress / Done.\r\n- Do not include implementation code or scripts.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 1 complete — implementation-plan.md produced.`\r\n- 🛑 **STOP — present the implementation plan and ask: \"Does this look correct? Please confirm before I move to the business process mapping.\"** Do not proceed until the user confirms.\r\n\r\n---\r\n\r\n## Sub-Agent 2: Business Process Mapping\r\n\r\n**Input**: Confirmed output of Sub-Agent 1 + Requirements above\r\n**Output**: `02-business-process/sop.md`\r\n\r\nThis sub-agent maps requirements to process skills, creates any that are missing,\r\nand produces a Standard Operating Procedure. Work through the three steps below.\r\n\r\n### Step 1 — Decompose requirements into process steps\r\n\r\nRead the requirements and break them into discrete, ordered steps. For each step,\r\nwrite a one-line description of what it needs to do and what its output is.\r\n\r\n### Step 2 — Map each step to a process skill\r\n\r\nFor each step, search the skills directory for a matching process skill\r\n(a skill whose description covers the same action and output).\r\n\r\nFor every step, one of three outcomes applies:\r\n\r\n**A — Skill found**: Read the skill's `SKILL.md`. Note its inputs, outputs, and\r\nany parameters it needs from earlier steps. Mark the step as covered.\r\n\r\n**B — Skill not found**: Determine the deterministic logic needed to automate\r\nthis step (the specific inputs, the repeatable actions, and the expected output).\r\nInvoke `create-fabric-process-skill` to create a new skill definition for this step.\r\nOnce created, read its `SKILL.md` and mark the step as covered.\r\nAppend to `CHANGE_LOG.md`:\r\n`[{DATETIME}] New skill created: [skill-name] — [one-line description of what it does].`\r\nAdd the new skill as a dependency in the RAID log from Sub-Agent 1.\r\n\r\n**C — Step must be manual**: If the step cannot be automated (e.g. requires human\r\njudgement or a physical action), document it as a manual step with exact operator\r\ninstructions and mark it accordingly.\r\n\r\nRepeat until every step is either covered by a skill or accepted as manual.\r\n\r\n🛑 **STOP — present the skill list and ask: \"Does this mapping look correct? Please confirm before I produce the SOP.\"** Do not proceed to Step 3 until the user confirms.\r\n\r\n### Step 3 — Produce the SOP\r\n\r\n```markdown\r\n# SOP: {PROCESS_NAME}\r\n\r\n## Step Sequence\r\n| Step | Skill / Action | Input Parameters (resolved values where known) | Output | Manual? |\r\n|------|---------------------|------------------------------------------------|-------------------|---------|\r\n| 1 | [skill-name] | capacity=ldifabricdev, deployment=notebook | [output artefact] | No |\r\n| 2 | [skill-name] | workspace=[from step 1], lakehouse=[name] | [output artefact] | No |\r\n| 3 | [Manual: action] | — | — | Yes |\r\n\r\nPopulate parameter values from `00-environment-discovery/environment-profile.md` where\r\nalready known. Use `[TBC]` only for parameters not yet resolved.\r\n\r\n## Shared Parameters\r\n| Parameter | Value / Source | Passed to steps |\r\n|-----------|---------------------------------|-----------------|\r\n| [param] | [actual value or \"user input\"] | 1, 3 |\r\n\r\n## Newly Created Skills\r\n| Skill name | Step | Description |\r\n|--------------|------|------------------------------------|\r\n| [skill-name] | 2 | [What it does — one line] |\r\n\r\n## Manual Steps\r\n- MANUAL-001: [Step] — [Reason] — [Exact operator instructions]\r\n```\r\n\r\nRules:\r\n- If requirements are unclear for any step, ask a targeted question and update\r\n requirements before continuing.\r\n- New skills created in this sub-agent are a permanent addition to the skills\r\n library and will be available for future agents.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 2 complete — sop.md produced. [N] new skills created.`\r\n- 🛑 **STOP — present the SOP and ask: \"Does this look correct? Please confirm before I move to the solution architecture.\"** Do not proceed until the user confirms.\r\n\r\n---\r\n\r\n## Sub-Agent 3: Solution Architecture\r\n\r\n**Input**: Confirmed output of Sub-Agent 2\r\n**Output**: `03-solution-architecture/specification.md`\r\n\r\nProduce a plain-language specification. Keep total length ≤50 lines.\r\nWrite for a non-technical reader — no code, no implementation detail.\r\n\r\n```markdown\r\n---\r\ntitle: {PROCESS_NAME} — Solution Specification\r\nstatus: Draft\r\ndate_created: {DATE}\r\n---\r\n\r\n# Specification: {PROCESS_NAME}\r\n\r\n## Purpose\r\n[One paragraph: what this solution does and what problem it solves.]\r\n\r\n## Scope\r\n[What is included and what is explicitly excluded.]\r\n\r\n## How It Works\r\n| Step | What happens | Automated? | Notes |\r\n|------|-------------------------------|------------|-----------------|\r\n| 1 | [Plain-language description] | Yes | |\r\n| 2 | [Plain-language description] | No | See MANUAL-001 |\r\n\r\n## Manual Steps\r\n- MANUAL-001: [Step] — [Reason] — [Exact operator instructions]\r\n\r\n## Acceptance Criteria\r\n- AC-001: Given [context], when [action], then [expected outcome].\r\n\r\n## Dependencies\r\n- DEP-001: [External system, file, or service] — [Purpose]\r\n```\r\n\r\nRules:\r\n- Write for a non-technical reader. No jargon without explanation.\r\n- Every manual step must include exact operator instructions.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 3 complete — specification.md produced.`\r\n- 🛑 **STOP — present the specification and ask: \"Does this look correct? Please confirm before I move to the governance plan.\"** Do not proceed until the user confirms.\r\n\r\n---\r\n\r\n## Sub-Agent 4: Security, Testing and Governance\r\n\r\n**Input**: Confirmed output of Sub-Agent 3\r\n**Output**: `04-governance/governance-plan.md`\r\n\r\nProduce a governance and deployment plan. Keep total length ≤45 lines.\r\n\r\n```markdown\r\n---\r\ntitle: {PROCESS_NAME} — Governance Plan\r\ndate_created: {DATE}\r\n---\r\n\r\n# Governance Plan: {PROCESS_NAME}\r\n\r\n## Agent Boundaries\r\n| Boundary | Rule |\r\n|-------------------------|--------------------------------------------|\r\n| Allowed actions | [Permitted operations] |\r\n| Blocked actions | [Prohibited operations] |\r\n| Requires human approval | [Steps needing explicit sign-off] |\r\n\r\n## Testing Checklist\r\n- [ ] Validate each sub-agent output before passing it to the next\r\n- [ ] Test all manual steps with a real operator before production use\r\n- [ ] Run against a minimal test dataset before using real data\r\n- [ ] Review CHANGE_LOG.md to confirm all new skills are correct\r\n- [ ] Verify the output folder structure after scaffolding\r\n\r\n## Microsoft Responsible AI Alignment\r\n| Principle | How Applied |\r\n|----------------|--------------------------------------------------------|\r\n| Fairness | [How bias is avoided in outputs and decisions] |\r\n| Reliability | [Validation steps, error handling, new skill review] |\r\n| Privacy | [Data handling — no PII retained in output files] |\r\n| Inclusiveness | [Plain language; no domain assumptions made] |\r\n| Transparency | [User validates every sub-agent output; CHANGE_LOG] |\r\n| Accountability | [Human sign-off required before production execution] |\r\n\r\n## Deployment Guidance\r\n- Review `CHANGE_LOG.md` to verify all newly created skills before first run.\r\n- Store `agent.md`, all outputs, and new skills in version control.\r\n- Review the RAID log from Sub-Agent 1 before each new run.\r\n- Human sign-off required before running against production systems.\r\n```\r\n\r\nRules:\r\n- Every RAI principle row must be completed — state explicitly if not applicable and why.\r\n- Human approval must be required for any step that modifies production systems.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 4 complete — 
governance-plan.md produced. Agent definition finalised.`\r\n- 🛑 **STOP — present the governance plan and ask:**\r\n > \"Planning is complete. Here's a summary of what we've produced:\r\n > - `00-environment-discovery/environment-profile.md`\r\n > - `01-implementation-plan/implementation-plan.md`\r\n > - `02-business-process/sop.md`\r\n > - `03-solution-architecture/specification.md`\r\n > - `04-governance/governance-plan.md`\r\n >\r\n > Please review these documents. When you're ready to proceed with execution, say **'ready to execute'**.\"\r\n Do not begin the Execution Phase until the user says they are ready.\r\n\r\n---\r\n\r\n## Execution Phase\r\n\r\n**Input**: Confirmed outputs of Sub-Agents 0–4 (environment profile, SOP, governance plan)\r\n**Trigger**: User explicitly confirms they are ready to execute after reviewing Sub-Agent 4\r\n\r\n🛑 **Do not begin execution until the user explicitly says they are ready** (e.g. \"ready\r\nto execute\", \"let's go\", \"proceed\"). When they confirm, read the SOP from\r\n`02-business-process/sop.md` and execute steps one at a time.\r\n\r\n**One step per turn.** After completing each step and presenting the output, stop and\r\nask: *\"Step [N] complete — [filename] is in `0N-[step-slug]/`. Ready for step [N+1]?\"*\r\nDo not proceed until the user confirms.\r\n\r\n### How execution steps work\r\n\r\nFor each step in the SOP:\r\n\r\n1. **Announce the step.** State the step number, name, and which skill will handle it.\r\n Show what parameters will be used (resolved from environment profile and SOP).\r\n Ask for any parameters not yet resolved — keeping the Parameter Resolution Protocol.\r\n\r\n2. **Invoke the skill.** Run the skill using the resolved parameters. Follow the skill's\r\n instructions exactly — run generator scripts via Bash, do not generate artefact content\r\n directly.\r\n\r\n3. **Write output to its subfolder.** Each step writes to a numbered subfolder continuing\r\n from `04-governance/`. 
Step 1 of the SOP → `05-[step-slug]/`, step 2 → `06-[step-slug]/`,\r\n etc. The slug is a short lowercase hyphenated name derived from the SOP step name\r\n (e.g. `05-create-workspaces/`, `06-create-lakehouses/`).\r\n\r\n4. **Only the deliverable goes in the folder.** One of:\r\n - **PySpark notebook**: the `.ipynb` file only\r\n - **PowerShell script**: the `.ps1` file only\r\n - **CLI commands**: a `cli-commands.md` recording each `!fab` command run and its output\r\n - **Other**: the specific file type described by the skill (e.g. workspace definition `.md`)\r\n No intermediate files, generator scripts, or working notes.\r\n\r\n5. **Present the output and confirm.** Show the user what was produced. Wait for explicit\r\n confirmation before moving to the next step.\r\n\r\n6. **Log the step.** Append to `CHANGE_LOG.md`:\r\n `[{DATETIME}] Execution step [N] complete — [step-name] — [filename] produced.`\r\n\r\n7. **Proceed to the next step.** Repeat until all non-manual SOP steps are complete.\r\n\r\n### Manual steps\r\n\r\nFor any step marked Manual in the SOP, do not invoke a skill. 
Instead:\r\n- Display the exact operator instructions from the SOP\r\n- Wait for the user to confirm they have completed the manual step\r\n- Log it: `[{DATETIME}] Manual step [N] confirmed by operator — [step-name].`\r\n\r\n### CLI command log format\r\n\r\nWhen deployment approach is terminal (interactive CLI), produce a `cli-commands.md`\r\nin the step subfolder with this structure:\r\n\r\n```markdown\r\n# CLI Commands: [Step Name]\r\n_Executed: {DATETIME}_\r\n\r\n## Commands Run\r\n\r\n### [Command description]\r\n```bash\r\n[exact command]\r\n```\r\n**Output:**\r\n```\r\n[output or \"No output / success\"]\r\n```\r\n\r\n## Result\r\n[One-sentence summary of what was created or confirmed]\r\n```\r\n\r\n### After all steps complete\r\n\r\nOnce all SOP steps are confirmed, produce `outputs/COMPLETION_SUMMARY.md`:\r\n\r\n```markdown\r\n# Completion Summary: {PROCESS_NAME}\r\n_Completed: {DATETIME}_\r\n\r\n## Steps Executed\r\n| Step | Folder | Deliverable | Status |\r\n|------|--------|-------------|--------|\r\n| [N] | [folder] | [filename] | ✅ Complete |\r\n\r\n## Manual Steps\r\n| Step | Description | Confirmed by operator |\r\n|------|-------------|----------------------|\r\n| [N] | [description] | ✅ Yes |\r\n\r\n## Next Steps\r\n[Any post-execution actions: verify in Fabric UI, share workspace, run first notebook, etc.]\r\n```\r\n\r\nAppend to `CHANGE_LOG.md`:\r\n`[{DATETIME}] Execution phase complete — all [N] steps done. See COMPLETION_SUMMARY.md.`\r\n",
78
+ content: "# Orchestration Agent: {PROCESS_NAME}\r\n\r\n## Context\r\n\r\n**Process**: {PROCESS_NAME}\r\n**Requirements**: {REQUIREMENTS_SUMMARY}\r\n\r\n---\r\n\r\n## How to Run This Agent\r\n\r\n**Start with Sub-Agent 0 (Environment Discovery).** This gathers the user's\r\npermissions, tooling, and preferences so that every subsequent sub-agent produces\r\nplans tailored to their actual environment. Do not skip this step.\r\n\r\nThen execute each remaining sub-agent in sequence:\r\n\r\n1. Use only the inputs and instructions provided in this file.\r\n2. Produce the specified output document in the designated subfolder.\r\n3. Present the output to the user; ask clarifying questions if anything is unclear.\r\n4. Refine until the user explicitly confirms the output.\r\n5. Append a timestamped entry to `CHANGE_LOG.md` recording what was produced or decided.\r\n6. Pass the confirmed output as the primary input to the next sub-agent.\r\n **Every sub-agent must also read `00-environment-discovery/environment-profile.md`**\r\n and respect the path decisions recorded there.\r\n\r\n> 🛑 **HARD STOP RULE — applies to every sub-agent and every execution step:**\r\n> After producing any output, you MUST stop and wait. Do not proceed to the next\r\n> step until the user responds with explicit confirmation (e.g. \"confirmed\",\r\n> \"looks good\", \"proceed\"). A lack of objection is NOT confirmation. Never\r\n> self-confirm or assume approval. Never run two steps in the same turn.\r\n\r\n**Do not produce code, scripts, or data artefacts not described in each sub-agent below.**\r\n\r\n### Parameter Resolution Protocol\r\n\r\nWhen invoking any skill, **always resolve parameters from existing documents before\r\nasking the user**. Check in this order:\r\n\r\n1. 
`00-environment-discovery/environment-profile.md` — provides: deployment approach,\r\n capacity name, workspace names, access control method, Object ID resolution approach,\r\n environment (dev/prod), credential management approach, available tooling\r\n2. The confirmed SOP (`02-business-process/sop.md`) — provides: lakehouse names,\r\n schema names, shared parameters, step inputs and outputs\r\n3. The implementation plan (`01-implementation-plan/implementation-plan.md`) — provides:\r\n naming conventions, task-level decisions\r\n\r\n**Only ask the user for parameters not found in any of these documents.** Summarise\r\nwhat was resolved automatically before asking for what remains. Never ask for a\r\nparameter that was explicitly captured during environment discovery or planning.\r\n\r\n### Notebook Documentation Standard\r\n\r\nEvery Fabric notebook produced by any skill **must** include a numbered markdown cell\r\nimmediately above each code cell. Each markdown cell must:\r\n\r\n1. State the cell number and a short title (e.g. `## Cell 1 — Install dependencies`).\r\n2. Explain **what** the code cell does in 1–2 sentences.\r\n3. Explain **how to use it**: variables to change, flags to toggle, prerequisites.\r\n\r\nAll transformation logic and design rationale must be **embedded as markdown cells inside\r\nthe notebook** — not maintained as separate documentation files. The notebook is the single\r\nsource of truth. A reader must be able to understand what each cell does, why the logic was\r\nchosen, and how to run it without opening any other file.\r\n\r\n### Output Conventions\r\n\r\n- Each sub-agent writes to its own **numbered subfolder** (`01-implementation-plan/`,\r\n `02-business-process/`, etc.). Execution steps continue the numbering (e.g.,\r\n `05-execution/`, `06-gold-layer/`).\r\n- Within each subfolder, only present **final deliverables** to the user: notebooks,\r\n SQL scripts, and documentation they run or deploy. 
Generator scripts (e.g.\r\n `generate_notebook.py`) are internal tools the skill runs to produce deliverables —\r\n **never present generator scripts as outputs and never generate notebook or script\r\n content directly**. Run the generator script via Bash; present what it produces.\r\n- All transformation logic and design rationale must be **embedded as markdown cells\r\n inside notebooks** — not maintained as separate documentation files. The notebook\r\n is the single source of truth.\r\n\r\n---\r\n\r\n## Sub-Agent 0: Environment Discovery\r\n\r\n**Input**: Requirements above\r\n**Output**: `00-environment-discovery/environment-profile.md`\r\n\r\nThis sub-agent runs **before anything is planned or built**. Its sole purpose is to\r\nunderstand the operator's environment, permissions, and preferences so that every\r\nsubsequent sub-agent produces plans tailored to what is actually possible and practical.\r\n\r\n**Invoke the `fabric-process-discovery` skill to run this step.**\r\n\r\nThe skill defines the full adaptive questioning tree — which questions to ask, in what\r\norder, and how to branch based on answers. Key principles:\r\n\r\n- **Read the requirements first.** Only ask about domains the process actually needs.\r\n A CSV ingestion job does not need workspace creation questions. A full pipeline\r\n needs all domains.\r\n- **Present all questions in a single turn**, grouped by domain. Never ask one question\r\n at a time. 
Target **5–7 questions** for most processes; simpler ones may need 3–4.\r\n- **Branch adaptively.** The skill defines conditional follow-ups — apply them after\r\n the first-turn answers before presenting the confirmation summary.\r\n- **Confirm before proceeding.** After processing answers, present the path table and\r\n ask: *\"Is this accurate, or anything to correct before I proceed to planning?\"*\r\n Wait for explicit confirmation.\r\n\r\nThe skill covers these domains (use only those relevant to the requirements):\r\n\r\n| Domain | When to include |\r\n|--------|----------------|\r\n| **A — Workspace access** | Any step creates or uses workspaces |\r\n| **A — Domain assignment** | Requirements mention domain governance (only if creating workspaces) |\r\n| **A — Access control / groups** | Process assigns roles to users or groups |\r\n| **B — Deployment approach** | Any step generates notebooks, scripts, or CLI commands |\r\n| **C — Source data location** | Process ingests files (CSV, PDF, etc.) |\r\n| **D — Capacity / SKU** | Process involves compute-intensive operations |\r\n\r\n**Critical framing rules from the skill — do not deviate:**\r\n\r\n1. **Deployment approach is NOT a CLI vs no-CLI question.** All three options (PySpark\r\n notebook, PowerShell script, CLI commands) use the Fabric CLI internally. The\r\n question is only about *how* the operator runs it. Present it as:\r\n - **A) PySpark notebook** — imported into Fabric, run cell-by-cell in the Fabric UI\r\n - **B) PowerShell script** — generated `.ps1` reviewed and run locally\r\n - **C) CLI commands** — individual `fab` commands run interactively in the terminal\r\n\r\n2. **Workspace creation must branch correctly.** If the operator cannot create\r\n workspaces, immediately ask for the exact names of existing hub and spoke\r\n workspaces — do not ask about domain assignment or access control (they only\r\n apply when creating).\r\n\r\n3. 
**Entra group Object IDs are a known technical constraint.** When groups are\r\n involved, always surface this: *\"The Fabric API requires Object IDs — display\r\n names are not accepted programmatically.\"* Then offer the resolution options\r\n (have IDs / Azure CLI / PowerShell Graph / UI manual).\r\n\r\n4. **Never leave the user blocked.** If a step requires permissions they don't have,\r\n offer: (a) skip and mark as manual, (b) produce a spec for their admin, or\r\n (c) substitute a UI-based workaround.\r\n\r\nOnce the environment profile is confirmed, save it as\r\n`00-environment-discovery/environment-profile.md` and append to `CHANGE_LOG.md`:\r\n`[{DATETIME}] Sub-Agent 0 complete — environment-profile.md produced. [N] path decisions recorded. Manual gates: [list or none].`\r\n\r\n🛑 **STOP — present the environment profile and ask: \"Does this look correct? Please confirm before I move to the implementation plan.\"** Do not proceed until the user confirms.\r\n\r\n---\r\n\r\n## Sub-Agent 1: Implementation Plan\r\n\r\n**Input**: Requirements above\r\n**Output**: `01-implementation-plan/implementation-plan.md`\r\n\r\nProduce a phased implementation plan using the structure below. 
Keep ≤50 lines.\r\nUpdate the RAID log whenever a later sub-agent raises a new risk or dependency.\r\n\r\n```markdown\r\n---\r\ngoal: {PROCESS_NAME} — Implementation Plan\r\nstatus: Planned\r\ndate_created: {DATE}\r\n---\r\n\r\n# Implementation Plan: {PROCESS_NAME}\r\n\r\n## Requirements & Constraints\r\n- REQ-001: [Requirement drawn from the context above]\r\n- CON-001: [Key constraint]\r\n\r\n## Phases\r\n\r\n### Phase 1: [Phase name]\r\n| Task | Description | Status |\r\n|----------|-------------|---------|\r\n| TASK-001 | [Task] | Planned |\r\n| TASK-002 | [Task] | Planned |\r\n\r\n### Phase 2: [Phase name]\r\n| Task | Description | Status |\r\n|----------|-------------|---------|\r\n| TASK-003 | [Task] | Planned |\r\n\r\n## RAID Log\r\n| Type | ID | Description | Mitigation / Action | Status |\r\n|------------|-------|--------------|---------------------|--------|\r\n| Risk | R-001 | [Risk] | [Mitigation] | Open |\r\n| Assumption | A-001 | [Assumption] | [Validation] | Open |\r\n| Issue | I-001 | [Issue] | [Resolution] | Open |\r\n| Dependency | D-001 | [Dependency] | [Owner] | Open |\r\n```\r\n\r\nRules:\r\n- Use REQ-, CON-, TASK-, R-, A-, I-, D- prefixes consistently.\r\n- Task status values: Planned / In Progress / Done.\r\n- Do not include implementation code or scripts.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 1 complete — implementation-plan.md produced.`\r\n- 🛑 **STOP — present the implementation plan and ask: \"Does this look correct? Please confirm before I move to the business process mapping.\"** Do not proceed until the user confirms.\r\n\r\n---\r\n\r\n## Sub-Agent 2: Business Process Mapping\r\n\r\n**Input**: Confirmed output of Sub-Agent 1 + Requirements above\r\n**Output**: `02-business-process/sop.md`\r\n\r\nThis sub-agent maps requirements to process skills, creates any that are missing,\r\nand produces a Standard Operating Procedure. 
Work through the three steps below.\r\n\r\n### Step 1 — Decompose requirements into process steps\r\n\r\nRead the requirements and break them into discrete, ordered steps. For each step,\r\nwrite a one-line description of what it needs to do and what its output is.\r\n\r\n### Step 2 — Map each step to a process skill\r\n\r\nFor each step, search the skills directory for a matching process skill\r\n(a skill whose description covers the same action and output).\r\n\r\nFor every step, one of three outcomes applies:\r\n\r\n**A — Skill found**: Read the skill's `SKILL.md`. Note its inputs, outputs, and\r\nany parameters it needs from earlier steps. Mark the step as covered.\r\n\r\n**B — Skill not found**: Determine the deterministic logic needed to automate\r\nthis step (the specific inputs, the repeatable actions, and the expected output).\r\nInvoke `create-fabric-process-skill` to create a new skill definition for this step.\r\nOnce created, read its `SKILL.md` and mark the step as covered.\r\nAppend to `CHANGE_LOG.md`:\r\n`[{DATETIME}] New skill created: [skill-name] — [one-line description of what it does].`\r\nAdd the new skill as a dependency in the RAID log from Sub-Agent 1.\r\n\r\n**C — Step must be manual**: If the step cannot be automated (e.g. requires human\r\njudgement or a physical action), document it as a manual step with exact operator\r\ninstructions and mark it accordingly.\r\n\r\nRepeat until every step is either covered by a skill or accepted as manual.\r\n\r\n🛑 **STOP — present the skill list and ask: \"Does this mapping look correct? Please confirm before I produce the SOP.\"** Do not proceed to Step 3 until the user confirms.\r\n\r\n### Step 3 — Produce the SOP\r\n\r\n```markdown\r\n# SOP: {PROCESS_NAME}\r\n\r\n## Step Sequence\r\n| Step | Skill / Action | Input Parameters (resolved values where known) | Output | Manual? 
|\r\n|------|---------------------|------------------------------------------------|-------------------|---------|\r\n| 1 | [skill-name] | capacity=ldifabricdev, deployment=notebook | [output artefact] | No |\r\n| 2 | [skill-name] | workspace=[from step 1], lakehouse=[name] | [output artefact] | No |\r\n| 3 | [Manual: action] | — | — | Yes |\r\n\r\nPopulate parameter values from `00-environment-discovery/environment-profile.md` where\r\nalready known. Use `[TBC]` only for parameters not yet resolved.\r\n\r\n## Shared Parameters\r\n| Parameter | Value / Source | Passed to steps |\r\n|-----------|---------------------------------|-----------------|\r\n| [param] | [actual value or \"user input\"] | 1, 3 |\r\n\r\n## Newly Created Skills\r\n| Skill name | Step | Description |\r\n|--------------|------|------------------------------------|\r\n| [skill-name] | 2 | [What it does — one line] |\r\n\r\n## Manual Steps\r\n- MANUAL-001: [Step] — [Reason] — [Exact operator instructions]\r\n```\r\n\r\nRules:\r\n- If requirements are unclear for any step, ask a targeted question and update\r\n requirements before continuing.\r\n- New skills created in this sub-agent are a permanent addition to the skills\r\n library and will be available for future agents.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 2 complete — sop.md produced. [N] new skills created.`\r\n- 🛑 **STOP — present the SOP and ask: \"Does this look correct? Please confirm before I move to the solution architecture.\"** Do not proceed until the user confirms.\r\n\r\n---\r\n\r\n## Sub-Agent 3: Solution Architecture\r\n\r\n**Input**: Confirmed output of Sub-Agent 2\r\n**Output**: `03-solution-architecture/specification.md`\r\n\r\nProduce a plain-language specification. 
Keep total length ≤50 lines.\r\nWrite for a non-technical reader — no code, no implementation detail.\r\n\r\n```markdown\r\n---\r\ntitle: {PROCESS_NAME} — Solution Specification\r\nstatus: Draft\r\ndate_created: {DATE}\r\n---\r\n\r\n# Specification: {PROCESS_NAME}\r\n\r\n## Purpose\r\n[One paragraph: what this solution does and what problem it solves.]\r\n\r\n## Scope\r\n[What is included and what is explicitly excluded.]\r\n\r\n## How It Works\r\n| Step | What happens | Automated? | Notes |\r\n|------|-------------------------------|------------|-----------------|\r\n| 1 | [Plain-language description] | Yes | |\r\n| 2 | [Plain-language description] | No | See MANUAL-001 |\r\n\r\n## Manual Steps\r\n- MANUAL-001: [Step] — [Reason] — [Exact operator instructions]\r\n\r\n## Acceptance Criteria\r\n- AC-001: Given [context], when [action], then [expected outcome].\r\n\r\n## Dependencies\r\n- DEP-001: [External system, file, or service] — [Purpose]\r\n```\r\n\r\nRules:\r\n- Write for a non-technical reader. No jargon without explanation.\r\n- Every manual step must include exact operator instructions.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 3 complete — specification.md produced.`\r\n- 🛑 **STOP — present the specification and ask: \"Does this look correct? Please confirm before I move to the governance plan.\"** Do not proceed until the user confirms.\r\n\r\n---\r\n\r\n## Sub-Agent 4: Security, Testing and Governance\r\n\r\n**Input**: Confirmed output of Sub-Agent 3\r\n**Output**: `04-governance/governance-plan.md`\r\n\r\nProduce a governance and deployment plan. 
Keep total length ≤45 lines.\r\n\r\n```markdown\r\n---\r\ntitle: {PROCESS_NAME} — Governance Plan\r\ndate_created: {DATE}\r\n---\r\n\r\n# Governance Plan: {PROCESS_NAME}\r\n\r\n## Agent Boundaries\r\n| Boundary | Rule |\r\n|-------------------------|--------------------------------------------|\r\n| Allowed actions | [Permitted operations] |\r\n| Blocked actions | [Prohibited operations] |\r\n| Requires human approval | [Steps needing explicit sign-off] |\r\n\r\n## Testing Checklist\r\n- [ ] Validate each sub-agent output before passing it to the next\r\n- [ ] Test all manual steps with a real operator before production use\r\n- [ ] Run against a minimal test dataset before using real data\r\n- [ ] Review CHANGE_LOG.md to confirm all new skills are correct\r\n- [ ] Verify the output folder structure after scaffolding\r\n\r\n## Microsoft Responsible AI Alignment\r\n| Principle | How Applied |\r\n|----------------|--------------------------------------------------------|\r\n| Fairness | [How bias is avoided in outputs and decisions] |\r\n| Reliability | [Validation steps, error handling, new skill review] |\r\n| Privacy | [Data handling — no PII retained in output files] |\r\n| Inclusiveness | [Plain language; no domain assumptions made] |\r\n| Transparency | [User validates every sub-agent output; CHANGE_LOG] |\r\n| Accountability | [Human sign-off required before production execution] |\r\n\r\n## Deployment Guidance\r\n- Review `CHANGE_LOG.md` to verify all newly created skills before first run.\r\n- Store `agent.md`, all outputs, and new skills in version control.\r\n- Review the RAID log from Sub-Agent 1 before each new run.\r\n- Human sign-off required before running against production systems.\r\n```\r\n\r\nRules:\r\n- Every RAI principle row must be completed — state explicitly if not applicable and why.\r\n- Human approval must be required for any step that modifies production systems.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 4 complete — 
governance-plan.md produced. Agent definition finalised.`\r\n- 🛑 **STOP — present the governance plan and ask:**\r\n > \"Planning is complete. Here's a summary of what we've produced:\r\n > - `00-environment-discovery/environment-profile.md`\r\n > - `01-implementation-plan/implementation-plan.md`\r\n > - `02-business-process/sop.md`\r\n > - `03-solution-architecture/specification.md`\r\n > - `04-governance/governance-plan.md`\r\n >\r\n > Please review these documents. When you're ready to proceed with execution, say **'ready to execute'**.\"\r\n Do not begin the Execution Phase until the user says they are ready.\r\n\r\n---\r\n\r\n## Execution Phase\r\n\r\n**Input**: Confirmed outputs of Sub-Agents 0–4 (environment profile, SOP, governance plan)\r\n**Trigger**: User explicitly confirms they are ready to execute after reviewing Sub-Agent 4\r\n\r\n🛑 **Do not begin execution until the user explicitly says they are ready** (e.g. \"ready\r\nto execute\", \"let's go\", \"proceed\"). When they confirm, read the SOP from\r\n`02-business-process/sop.md` and execute steps one at a time.\r\n\r\n**One step per turn.** After completing each step and presenting the output, stop and\r\nask: *\"Step [N] complete — [filename] is in `0N-[step-slug]/`. Ready for step [N+1]?\"*\r\nDo not proceed until the user confirms.\r\n\r\n### Per-step execution pattern\r\n\r\nEvery step follows this exact sequence. Do not skip any part of it.\r\n\r\n---\r\n\r\n#### A — Parameter check (before generating anything)\r\n\r\nBefore invoking the skill, verify every required parameter is resolved. Cross-check\r\nagainst `environment-profile.md` and the SOP shared parameters table. 
For any\r\nparameter that was deferred during discovery or planning (marked `[TBC]` or\r\n\"provide at runtime\"), ask now:\r\n\r\n> *\"Before I generate step [N], I need a few values that weren't available earlier:*\r\n> *— [param 1]: [brief explanation of what it is and where to find it]*\r\n> *— [param 2]: ...*\r\n> *Please provide these and I'll proceed.\"*\r\n\r\nDo not generate the artefact until all required parameters are confirmed. Never\r\nsilently skip a parameter or substitute an empty value.\r\n\r\n---\r\n\r\n#### B — Generate the artefact\r\n\r\nInvoke the skill using the resolved parameters. Follow the skill's instructions\r\nexactly — run generator scripts via Bash, do not generate artefact content directly.\r\n\r\nWrite the deliverable to a numbered subfolder continuing from `04-governance/`:\r\n- Step 1 → `05-[step-slug]/`, step 2 → `06-[step-slug]/`, etc.\r\n- Slug = short lowercase hyphenated step name (e.g. `05-create-workspaces/`)\r\n- Only the deliverable goes in the folder: `.ipynb`, `.ps1`, `cli-commands.md`,\r\n or the specific file type described by the skill. No generator scripts, no notes.\r\n\r\n---\r\n\r\n#### C — Q1: Did the previous step run correctly?\r\n\r\nPresent the generated artefact. Then ask (for all steps after step 1):\r\n\r\n> *\"Before we move on — did step [N-1] ([step name]) run correctly?*\r\n> *— A) Yes, all looks good*\r\n> *— B) No — I hit an error\"*\r\n\r\nIf B: ask the user to paste the error message and note where it occurred. Diagnose\r\nthe issue, suggest a fix or workaround, update the SOP to note the error, and log:\r\n`[{DATETIME}] Error in step [N-1] — [error summary] — [resolution or status].`\r\nOnly proceed once the error is resolved or the user accepts the workaround.\r\n\r\n---\r\n\r\n#### D — Q2: Proceed to next step with approach confirmation\r\n\r\nAfter Q1 is resolved, propose the next step:\r\n\r\n> *\"I've updated the change log. 
Next is **step [N+1]: [step name]** — [one sentence\r\n> of what it does].*\r\n>\r\n> *[Approach note — see rules below]*\r\n>\r\n> *Shall I continue?*\r\n> *— A) Yes — generate the [notebook / PowerShell script / CLI commands]*\r\n> *— B) No — I want to take a different approach for this step\"*\r\n\r\nIf B: present the available alternatives for this step type (see Approach Rules\r\nbelow), including implications of each. If the user selects manual, generate\r\ndetailed UI instructions (see Manual Instructions below).\r\n\r\nUpdate the SOP step to reflect the chosen approach and log:\r\n`[{DATETIME}] Step [N+1] approach confirmed: [approach] — [reason if changed].`\r\n\r\n---\r\n\r\n### Approach rules and implications\r\n\r\nWhen proposing step [N+1] in Q2, include a one-line approach note. Use the rules\r\nbelow to determine what to say and what options to offer if the user says no.\r\n\r\n**Workspace / lakehouse creation, role assignment:**\r\n- All three approaches work: notebook, PowerShell script, CLI commands\r\n- Default to the approach chosen in the environment profile\r\n- If manual selected: walk through the Fabric portal UI step by step\r\n\r\n**Local file upload (CSV, PDF, any file from the operator's machine):**\r\n- ⚠️ **Notebook approach is not possible** — notebooks run inside Fabric and cannot\r\n access the operator's local file system\r\n- Available options: PowerShell script, CLI terminal commands, manual upload via UI\r\n- For **large file volumes (50+ files)**, note: script and CLI upload is sequential\r\n and slow for large batches. 
For 100+ files, manual drag-and-drop via the Fabric\r\n Files section (or OneDrive sync) is significantly faster\r\n- If manual selected: provide instructions for uploading via the Fabric Files UI\r\n\r\n**Schema creation:**\r\n- Notebook (Spark SQL cell) or CLI commands work; PowerShell script works via\r\n shell-invoked CLI\r\n- Manual: Fabric doesn't have a direct UI for schema creation in lakehouses —\r\n recommend using a notebook with a single Spark SQL cell as the simplest option\r\n\r\n**Shortcuts (cross-lakehouse):**\r\n- CLI commands (`fab ln`) or PowerShell script work; notebook cannot run `fab ln`\r\n natively\r\n- Manual: Fabric portal has a shortcut creation UI (Lakehouse → New shortcut)\r\n\r\n**Notebook / script execution (running something already generated):**\r\n- This step is always manual — the operator runs the artefact themselves\r\n- Provide instructions for importing and running it in Fabric\r\n\r\n---\r\n\r\n### Manual step instructions\r\n\r\nWhen a step is manual (either flagged in the SOP or chosen at runtime), do not just\r\nsay \"do this manually.\" Generate step-by-step UI instructions specific to the action:\r\n\r\n- State the exact URL or navigation path in the Fabric portal\r\n- List each click, field, and value required\r\n- Include what success looks like (what the user should see when done)\r\n- Note any common mistakes or things to watch for\r\n\r\nLog: `[{DATETIME}] Step [N] — manual approach selected — UI instructions provided.`\r\nUpdate the SOP step to mark it as Manual with reason.\r\n\r\n### CLI command log format\r\n\r\nWhen deployment approach is terminal (interactive CLI), produce a `cli-commands.md`\r\nin the step subfolder with this structure:\r\n\r\n```markdown\r\n# CLI Commands: [Step Name]\r\n_Executed: {DATETIME}_\r\n\r\n## Commands Run\r\n\r\n### [Command description]\r\n```bash\r\n[exact command]\r\n```\r\n**Output:**\r\n```\r\n[output or \"No output / success\"]\r\n```\r\n\r\n## Result\r\n[One-sentence 
summary of what was created or confirmed]\r\n```\r\n\r\n### SOP and CHANGE_LOG updates at runtime\r\n\r\nThe SOP is a living document during execution. Update `02-business-process/sop.md`\r\nwhenever a runtime decision changes the plan:\r\n\r\n- Approach changed for a step → update the Skill / Action column and add a note\r\n- Error encountered and resolved → add an error note and resolution to the step row\r\n- Parameter provided at runtime → fill in the `[TBC]` value in the Shared Parameters table\r\n- Step marked manual at runtime → update Manual? column to Yes, add reason\r\n\r\nEvery update to the SOP must also be logged in `CHANGE_LOG.md` with a timestamp.\r\n\r\n---\r\n\r\n### After all steps complete\r\n\r\nOnce all SOP steps are confirmed, produce `outputs/COMPLETION_SUMMARY.md`:\r\n\r\n```markdown\r\n# Completion Summary: {PROCESS_NAME}\r\n_Completed: {DATETIME}_\r\n\r\n## Steps Executed\r\n| Step | Folder | Deliverable | Approach | Status |\r\n|------|--------|-------------|----------|--------|\r\n| [N] | [folder] | [filename] | [notebook/script/CLI/manual] | ✅ Complete |\r\n\r\n## Runtime Decisions\r\n| Step | Decision | Reason |\r\n|------|----------|--------|\r\n| [N] | Changed from notebook to manual upload | 150 files — script too slow |\r\n\r\n## Manual Steps\r\n| Step | Description | Confirmed by operator |\r\n|------|-------------|----------------------|\r\n| [N] | [description] | ✅ Yes |\r\n\r\n## Next Steps\r\n[Any post-execution actions: verify in Fabric UI, share workspace, run first notebook, etc.]\r\n```\r\n\r\nAppend to `CHANGE_LOG.md`:\r\n`[{DATETIME}] Execution phase complete — all [N] steps done. See COMPLETION_SUMMARY.md.`\r\n",
  },
  {
  relativePath: "references/section-descriptions.md",
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@rishildi/ldi-process-skills-test",
- "version": "0.0.18",
+ "version": "0.0.19",
  "description": "LDI Process Skills MCP Server — TEST channel. Mirrors the development branch for pre-production validation.",
  "type": "module",
  "bin": {