@rishildi/ldi-process-skills-test 0.0.4 → 0.0.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1 +1 @@
- {"version":3,"file":"embedded.d.ts","sourceRoot":"","sources":["../../src/skills/embedded.ts"],"names":[],"mappings":"AAGA,MAAM,WAAW,SAAS;IACxB,YAAY,EAAE,MAAM,CAAC;IACrB,OAAO,EAAE,MAAM,CAAC;CACjB;AAED,MAAM,WAAW,aAAa;IAC5B,IAAI,EAAE,MAAM,CAAC;IACb,QAAQ,EAAE,MAAM,CAAC;IACjB,KAAK,EAAE,SAAS,EAAE,CAAC;CACpB;AAED,eAAO,MAAM,eAAe,EAAE,aAAa,EAqO1C,CAAC"}
+ {"version":3,"file":"embedded.d.ts","sourceRoot":"","sources":["../../src/skills/embedded.ts"],"names":[],"mappings":"AAGA,MAAM,WAAW,SAAS;IACxB,YAAY,EAAE,MAAM,CAAC;IACrB,OAAO,EAAE,MAAM,CAAC;CACjB;AAED,MAAM,WAAW,aAAa;IAC5B,IAAI,EAAE,MAAM,CAAC;IACb,QAAQ,EAAE,MAAM,CAAC;IACjB,KAAK,EAAE,SAAS,EAAE,CAAC;CACpB;AAED,eAAO,MAAM,eAAe,EAAE,aAAa,EA+O1C,CAAC"}
@@ -1,5 +1,5 @@
  // AUTO-GENERATED by scripts/embed-skills.ts — do not edit
- // Generated at: 2026-04-04T17:22:33.162Z
+ // Generated at: 2026-04-04T20:21:44.415Z
  export const EMBEDDED_SKILLS = [
  {
  name: "create-fabric-lakehouses",
@@ -75,7 +75,7 @@ export const EMBEDDED_SKILLS = [
  },
  {
  relativePath: "assets/agent-template.md",
- content: "# Orchestration Agent: {PROCESS_NAME}\r\n\r\n## Context\r\n\r\n**Process**: {PROCESS_NAME}\r\n**Requirements**: {REQUIREMENTS_SUMMARY}\r\n\r\n---\r\n\r\n## How to Run This Agent\r\n\r\n**Start with Sub-Agent 0 (Environment Discovery).** This gathers the user's\r\npermissions, tooling, and preferences so that every subsequent sub-agent produces\r\nplans tailored to their actual environment. Do not skip this step.\r\n\r\nThen execute each remaining sub-agent in sequence:\r\n\r\n1. Use only the inputs and instructions provided in this file.\r\n2. Produce the specified output document in the designated subfolder.\r\n3. Present the output to the user; ask clarifying questions if anything is unclear.\r\n4. Refine until the user explicitly confirms the output.\r\n5. Append a timestamped entry to `CHANGE_LOG.md` recording what was produced or decided.\r\n6. Pass the confirmed output as the primary input to the next sub-agent.\r\n **Every sub-agent must also read `00-environment-discovery/environment-profile.md`**\r\n and respect the path decisions recorded there.\r\n\r\n**Do not proceed to the next sub-agent without explicit user confirmation.**\r\n**Do not produce code, scripts, or data artefacts not described in each sub-agent below.**\r\n\r\n### Notebook Documentation Standard\r\n\r\nEvery Fabric notebook produced by any skill **must** include a numbered markdown cell\r\nimmediately above each code cell. Each markdown cell must:\r\n\r\n1. State the cell number and a short title (e.g. `## Cell 1 — Install dependencies`).\r\n2. Explain **what** the code cell does in 1–2 sentences.\r\n3. Explain **how to use it**: variables to change, flags to toggle, prerequisites.\r\n\r\nAll transformation logic and design rationale must be **embedded as markdown cells inside\r\nthe notebook** — not maintained as separate documentation files. The notebook is the single\r\nsource of truth. 
A reader must be able to understand what each cell does, why the logic was\r\nchosen, and how to run it without opening any other file.\r\n\r\n### Output Conventions\r\n\r\n- Each sub-agent writes to its own **numbered subfolder** (`01-implementation-plan/`,\r\n `02-business-process/`, etc.). Execution steps continue the numbering (e.g.,\r\n `05-execution/`, `06-gold-layer/`).\r\n- Within each subfolder, distinguish **final deliverables** (notebooks, SQL scripts,\r\n documentation the user runs or deploys) from **intermediate artefacts** (generator\r\n scripts that produce the deliverables). When presenting outputs, label each file.\r\n- All transformation logic and design rationale must be **embedded as markdown cells\r\n inside notebooks** — not maintained as separate documentation files. The notebook\r\n is the single source of truth.\r\n\r\n---\r\n\r\n## Sub-Agent 0: Environment Discovery\r\n\r\n**Input**: Requirements above\r\n**Output**: `00-environment-discovery/environment-profile.md`\r\n\r\nThis sub-agent runs **before anything is planned or built**. Its purpose is to\r\nunderstand the user's environment, permissions, installed tooling, and preferences\r\nso that every subsequent sub-agent produces plans tailored to what is actually\r\npossible and practical.\r\n\r\n### How it works\r\n\r\n1. **Derive questions from the requirements.** Read the requirements and identify\r\n which environment factors will determine which approaches are viable. Group\r\n questions into the relevant discovery domains (see below). Do not ask about\r\n things the requirements don't need — if a process doesn't create workspaces,\r\n don't ask about workspace creation permissions.\r\n\r\n2. **Present the questionnaire.** Show all questions at once, grouped by domain.\r\n Aim for **5–7 questions** — enough to cover the critical unknowns without\r\n overwhelming the user. 
Prioritise by impact: if an answer could change the\r\n entire approach, ask it; if it's a nice-to-have detail, skip it.\r\n Each question must:\r\n - State **why** the answer matters (what it unlocks or blocks).\r\n - Offer concrete options where applicable (e.g., checkboxes, multiple choice).\r\n - Explain what the agent will do differently depending on the answer.\r\n\r\n3. **Confirm understanding.** After the user answers, present a brief summary:\r\n > \"Based on your answers, here's my understanding of your environment: [2–4\r\n > sentence summary of key decisions]. Is this accurate, or anything to correct\r\n > before I proceed to planning?\"\r\n Wait for explicit confirmation. If new gaps surface, ask only the follow-up\r\n questions needed to resolve them — do not re-ask the full questionnaire.\r\n\r\n4. **Record the answers.** Save the complete environment profile as\r\n `00-environment-discovery/environment-profile.md`. This file is the primary\r\n input for Sub-Agent 1 (Implementation Plan) and is referenced by all\r\n subsequent sub-agents.\r\n\r\n### Discovery domains\r\n\r\nSelect only the domains relevant to the requirements. **Every question must\r\nexplain why it is being asked** — what activity needs the permission or tool,\r\nand what the agent will do differently based on the answer.\r\n\r\n#### Permissions & roles\r\n\r\nProbe platform admin rights, resource creation permissions, role assignments,\r\nand domain management. Frame each question around the **specific activity** that\r\nneeds the permission.\r\n\r\nExample — workspace role assignment with Entra groups (a real technical constraint):\r\n\r\n> **Can you assign Entra security groups to Fabric workspace roles?**\r\n>\r\n> _Why this matters:_ The SOP assigns groups to workspace roles for RBAC. The\r\n> Fabric REST API and CLI require **Entra group Object IDs** — display names\r\n> are not accepted. 
The Fabric UI allows searching by name but is manual.\r\n>\r\n> Pick the option that best fits your situation:\r\n>\r\n> - **A) I can look up group Object IDs myself** (e.g., from Entra portal or\r\n> from my admin) → Agent will ask you for the Object IDs and script the\r\n> assignments via Fabric CLI.\r\n> - **B) I have Azure CLI (`az`) installed and can query Entra** → Agent will\r\n> generate `az ad group list --display-name \"...\"` commands so you can\r\n> retrieve Object IDs yourself, then script the assignments.\r\n> - **C) I have PowerShell with the Microsoft.Graph module** → Agent will\r\n> generate `Get-MgGroup -Filter \"displayName eq '...'\"` commands instead.\r\n> - **D) I only have access to the Fabric UI** → Agent will provide step-by-step\r\n> UI instructions with screenshots guidance. Role assignment becomes a manual\r\n> step in the SOP.\r\n> - **E) I'm not sure / I need to check** → Agent will provide a quick check\r\n> command (`az ad group list --display-name \"YourGroupName\" --query \"[].id\"`)\r\n> and pause until you confirm.\r\n\r\nOther permission questions follow the same pattern — always state the activity,\r\nthe constraint, and the options:\r\n\r\n- \"Can you **create workspaces** in Fabric? _(Step 1 needs this. If not, the\r\n agent will produce a workspace specification for your admin to create.)_\"\r\n- \"Can you **create or manage domains** and assign workspaces to them? _(The SOP\r\n organises workspaces under a domain. If you lack domain-admin rights, the agent\r\n will produce a domain-assignment request instead.)_\"\r\n- \"Can you **create lakehouses** in the target workspaces? _(Steps 3-5 provision\r\n lakehouses. If you only have Viewer/Member access, the agent will produce\r\n creation requests for a workspace admin.)_\"\r\n\r\n#### Installed tooling\r\n\r\nProbe CLI tools, SDKs, and runtimes — but only the ones the requirements\r\nactually need. 
**Tell the user what each tool is used for** so they can make\r\nan informed decision about whether to install it.\r\n\r\n- \"Is the **Fabric CLI (`fab`)** installed and authenticated? _(Used for:\r\n creating workspaces, uploading files, creating shortcuts, listing resources.\r\n If not installed, the agent will provide notebook-based alternatives or guide\r\n you through installation.)_\"\r\n- \"Is **Azure CLI (`az`)** available? _(Used for: querying Entra group/user\r\n Object IDs when assigning roles. Not needed if you can supply Object IDs\r\n directly or prefer PowerShell.)_\"\r\n- \"Do you have **Python 3.10+**? _(Used for: running generator scripts that\r\n produce notebooks and SQL. If not available, the agent can provide pre-built\r\n notebooks instead.)_\"\r\n\r\n#### Execution preferences\r\n\r\nGive the user agency over *how* the process is delivered:\r\n\r\n- \"How do you prefer to **run commands**? _(Terminal / Notebook cells / Fabric UI\r\n — the agent will format all instructions accordingly.)_\"\r\n- \"Do you want the agent to **execute commands directly** or **produce scripts\r\n for you to review and run**? _(Direct execution is faster; review-first gives\r\n you more control.)_\"\r\n\r\n#### Data access & connectivity\r\n\r\nOnly ask when the requirements involve data ingestion or movement:\r\n\r\n- \"Where is the **source data**? _(Local files / SharePoint / Azure Storage /\r\n API / already in OneLake — determines upload method and whether shortcuts\r\n can replace copies.)_\"\r\n- \"Can notebooks in your Fabric workspace **access the source location**?\r\n _(Network restrictions or firewall rules may block runtime access. If blocked,\r\n the agent will add a local-upload step.)_\"\r\n\r\n#### Capacity & licensing\r\n\r\nOnly ask when relevant to compute or feature availability:\r\n\r\n- \"What **Fabric capacity SKU** are you on? _(F2/F4 have lower parallelism\r\n limits — the agent will adjust batch sizes. 
Trial capacities have time and\r\n feature limits the agent will flag.)_\"\r\n\r\n#### Existing infrastructure\r\n\r\nOnly ask when the requirements could reuse existing resources:\r\n\r\n- \"Are there **existing workspaces or lakehouses** the process should reuse\r\n rather than create? _(If so, the agent will skip creation steps and wire up\r\n shortcuts to existing resources.)_\"\r\n\r\n#### Team & handoff\r\n\r\nOnly ask when multi-user or governance concerns apply:\r\n\r\n- \"Will **other team members** run or maintain this pipeline? _(If yes, the\r\n agent will add role-assignment steps, document naming conventions, and\r\n produce a handoff checklist.)_\"\r\n\r\n### Path table\r\n\r\nOnce answers are collected, produce a **path table** summarising how the answers\r\nshape the approach. **Each row links an answer back to the specific step it\r\naffects**, so the user can see exactly how their environment shapes the plan:\r\n\r\n```markdown\r\n## Path Decisions\r\n\r\n| # | Question | Your answer | What this means for the plan |\r\n|---|----------|-------------|------------------------------|\r\n| 1 | Workspace creation rights | Admin on capacity | Steps 1-2: Agent will create workspaces directly via `fab workspace create` |\r\n| 2 | Workspace creation rights | No admin rights | Steps 1-2: Agent will produce a workspace spec document; you hand it to your admin. SOP marks this as a manual gate. |\r\n| 3 | Entra group role assignment | Option B — has Azure CLI | Step 2: Agent will generate `az ad group list` commands to fetch Object IDs, then script `fab workspace role assign` |\r\n| 4 | Entra group role assignment | Option D — UI only | Step 2: Agent will provide click-by-click UI instructions. Role assignment becomes a manual SOP step. 
|\r\n| 5 | Fabric CLI installed | Yes, authenticated | All CLI steps presented as `fab ...` terminal commands |\r\n| 6 | Fabric CLI installed | Not installed | Agent will either (a) guide installation, or (b) provide notebook `!pip install` + `!fab` alternatives — your choice |\r\n```\r\n\r\n### Rules for question design\r\n\r\n- **Contextual, not generic.** Every question must name the activity it enables\r\n and the step(s) it affects. A questionnaire that reads like a bureaucratic\r\n intake form is wrong — it should read like a knowledgeable consultant scoping\r\n a project.\r\n- **Explain technical constraints in plain language.** When a platform limitation\r\n exists (e.g., \"the API requires Object IDs, not display names\"), say so — then\r\n immediately offer the user multiple ways to work around it. The user should\r\n never feel blocked; they should feel informed and in control.\r\n- **Give power to the user.** Options should not be \"yes/no you can or can't do\r\n this.\" They should be \"here are 3-4 ways to achieve this — which fits your\r\n situation?\" Even a user with limited permissions should see a viable path.\r\n- **Offer verification commands.** If the user isn't sure about an answer, give\r\n them a one-liner they can run to find out (e.g., \"Run `fab ls` — if it\r\n returns workspace names, you're authenticated.\").\r\n- **Do not guess or assume.** If the answer matters to the plan, ask. If the\r\n user says \"I'm not sure,\" help them check — don't default silently.\r\n- **Keep it proportional.** Target 5–7 questions. A simple 3-step process may\r\n need only 3–4; a complex multi-workspace pipeline might need 7. Beyond 7,\r\n split into a first wave and ask follow-ups only if gaps emerge. 
Never pad\r\n with irrelevant questions to look thorough.\r\n- The environment profile is a **living document** — if a later sub-agent\r\n discovers a new constraint, append it and re-confirm with the user.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 0 complete — environment-profile.md produced. [N] path decisions recorded.`\r\n- **Confirm the environment profile with the user before proceeding to Sub-Agent 1.**\r\n\r\n---\r\n\r\n## Sub-Agent 1: Implementation Plan\r\n\r\n**Input**: Requirements above\r\n**Output**: `01-implementation-plan/implementation-plan.md`\r\n\r\nProduce a phased implementation plan using the structure below. Keep ≤50 lines.\r\nUpdate the RAID log whenever a later sub-agent raises a new risk or dependency.\r\n\r\n```markdown\r\n---\r\ngoal: {PROCESS_NAME} — Implementation Plan\r\nstatus: Planned\r\ndate_created: {DATE}\r\n---\r\n\r\n# Implementation Plan: {PROCESS_NAME}\r\n\r\n## Requirements & Constraints\r\n- REQ-001: [Requirement drawn from the context above]\r\n- CON-001: [Key constraint]\r\n\r\n## Phases\r\n\r\n### Phase 1: [Phase name]\r\n| Task | Description | Status |\r\n|----------|-------------|---------|\r\n| TASK-001 | [Task] | Planned |\r\n| TASK-002 | [Task] | Planned |\r\n\r\n### Phase 2: [Phase name]\r\n| Task | Description | Status |\r\n|----------|-------------|---------|\r\n| TASK-003 | [Task] | Planned |\r\n\r\n## RAID Log\r\n| Type | ID | Description | Mitigation / Action | Status |\r\n|------------|-------|--------------|---------------------|--------|\r\n| Risk | R-001 | [Risk] | [Mitigation] | Open |\r\n| Assumption | A-001 | [Assumption] | [Validation] | Open |\r\n| Issue | I-001 | [Issue] | [Resolution] | Open |\r\n| Dependency | D-001 | [Dependency] | [Owner] | Open |\r\n```\r\n\r\nRules:\r\n- Use REQ-, CON-, TASK-, R-, A-, I-, D- prefixes consistently.\r\n- Task status values: Planned / In Progress / Done.\r\n- Do not include implementation code or scripts.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] 
Sub-Agent 1 complete — implementation-plan.md produced.`\r\n- **Confirm with user before proceeding to Sub-Agent 2.**\r\n\r\n---\r\n\r\n## Sub-Agent 2: Business Process Mapping\r\n\r\n**Input**: Confirmed output of Sub-Agent 1 + Requirements above\r\n**Output**: `02-business-process/sop.md`\r\n\r\nThis sub-agent maps requirements to process skills, creates any that are missing,\r\nand produces a Standard Operating Procedure. Work through the three steps below.\r\n\r\n### Step 1 — Decompose requirements into process steps\r\n\r\nRead the requirements and break them into discrete, ordered steps. For each step,\r\nwrite a one-line description of what it needs to do and what its output is.\r\n\r\n### Step 2 — Map each step to a process skill\r\n\r\nFor each step, search the skills directory for a matching process skill\r\n(a skill whose description covers the same action and output).\r\n\r\nFor every step, one of three outcomes applies:\r\n\r\n**A — Skill found**: Read the skill's `SKILL.md`. Note its inputs, outputs, and\r\nany parameters it needs from earlier steps. Mark the step as covered.\r\n\r\n**B — Skill not found**: Determine the deterministic logic needed to automate\r\nthis step (the specific inputs, the repeatable actions, and the expected output).\r\nInvoke `create-fabric-process-skill` to create a new skill definition for this step.\r\nOnce created, read its `SKILL.md` and mark the step as covered.\r\nAppend to `CHANGE_LOG.md`:\r\n`[{DATETIME}] New skill created: [skill-name] — [one-line description of what it does].`\r\nAdd the new skill as a dependency in the RAID log from Sub-Agent 1.\r\n\r\n**C — Step must be manual**: If the step cannot be automated (e.g. 
requires human\r\njudgement or a physical action), document it as a manual step with exact operator\r\ninstructions and mark it accordingly.\r\n\r\nRepeat until every step is either covered by a skill or accepted as manual.\r\nAsk the user to confirm the skill list before proceeding to Step 3.\r\n\r\n### Step 3 — Produce the SOP\r\n\r\n```markdown\r\n# SOP: {PROCESS_NAME}\r\n\r\n## Step Sequence\r\n| Step | Skill / Action | Input Parameters | Output | Manual? |\r\n|------|---------------------|--------------------|-------------------|---------|\r\n| 1 | [skill-name] | param=value | [output artefact] | No |\r\n| 2 | [skill-name] | output from step 1 | [output artefact] | No |\r\n| 3 | [Manual: action] | — | — | Yes |\r\n\r\n## Shared Parameters\r\n| Parameter | Source | Passed to steps |\r\n|-----------|------------|-----------------|\r\n| [param] | User input | 1, 3 |\r\n\r\n## Newly Created Skills\r\n| Skill name | Step | Description |\r\n|--------------|------|------------------------------------|\r\n| [skill-name] | 2 | [What it does — one line] |\r\n\r\n## Manual Steps\r\n- MANUAL-001: [Step] — [Reason] — [Exact operator instructions]\r\n```\r\n\r\nRules:\r\n- If requirements are unclear for any step, ask a targeted question and update\r\n requirements before continuing.\r\n- New skills created in this sub-agent are a permanent addition to the skills\r\n library and will be available for future agents.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 2 complete — sop.md produced. [N] new skills created.`\r\n- **Confirm with user before proceeding to Sub-Agent 3.**\r\n\r\n---\r\n\r\n## Sub-Agent 3: Solution Architecture\r\n\r\n**Input**: Confirmed output of Sub-Agent 2\r\n**Output**: `03-solution-architecture/specification.md`\r\n\r\nProduce a plain-language specification. 
Keep total length ≤50 lines.\r\nWrite for a non-technical reader — no code, no implementation detail.\r\n\r\n```markdown\r\n---\r\ntitle: {PROCESS_NAME} — Solution Specification\r\nstatus: Draft\r\ndate_created: {DATE}\r\n---\r\n\r\n# Specification: {PROCESS_NAME}\r\n\r\n## Purpose\r\n[One paragraph: what this solution does and what problem it solves.]\r\n\r\n## Scope\r\n[What is included and what is explicitly excluded.]\r\n\r\n## How It Works\r\n| Step | What happens | Automated? | Notes |\r\n|------|-------------------------------|------------|-----------------|\r\n| 1 | [Plain-language description] | Yes | |\r\n| 2 | [Plain-language description] | No | See MANUAL-001 |\r\n\r\n## Manual Steps\r\n- MANUAL-001: [Step] — [Reason] — [Exact operator instructions]\r\n\r\n## Acceptance Criteria\r\n- AC-001: Given [context], when [action], then [expected outcome].\r\n\r\n## Dependencies\r\n- DEP-001: [External system, file, or service] — [Purpose]\r\n```\r\n\r\nRules:\r\n- Write for a non-technical reader. No jargon without explanation.\r\n- Every manual step must include exact operator instructions.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 3 complete — specification.md produced.`\r\n- **Confirm with user before proceeding to Sub-Agent 4.**\r\n\r\n---\r\n\r\n## Sub-Agent 4: Security, Testing and Governance\r\n\r\n**Input**: Confirmed output of Sub-Agent 3\r\n**Output**: `04-governance/governance-plan.md`\r\n\r\nProduce a governance and deployment plan. 
Keep total length ≤45 lines.\r\n\r\n```markdown\r\n---\r\ntitle: {PROCESS_NAME} — Governance Plan\r\ndate_created: {DATE}\r\n---\r\n\r\n# Governance Plan: {PROCESS_NAME}\r\n\r\n## Agent Boundaries\r\n| Boundary | Rule |\r\n|-------------------------|--------------------------------------------|\r\n| Allowed actions | [Permitted operations] |\r\n| Blocked actions | [Prohibited operations] |\r\n| Requires human approval | [Steps needing explicit sign-off] |\r\n\r\n## Testing Checklist\r\n- [ ] Validate each sub-agent output before passing it to the next\r\n- [ ] Test all manual steps with a real operator before production use\r\n- [ ] Run against a minimal test dataset before using real data\r\n- [ ] Review CHANGE_LOG.md to confirm all new skills are correct\r\n- [ ] Verify the output folder structure after scaffolding\r\n\r\n## Microsoft Responsible AI Alignment\r\n| Principle | How Applied |\r\n|----------------|--------------------------------------------------------|\r\n| Fairness | [How bias is avoided in outputs and decisions] |\r\n| Reliability | [Validation steps, error handling, new skill review] |\r\n| Privacy | [Data handling — no PII retained in output files] |\r\n| Inclusiveness | [Plain language; no domain assumptions made] |\r\n| Transparency | [User validates every sub-agent output; CHANGE_LOG] |\r\n| Accountability | [Human sign-off required before production execution] |\r\n\r\n## Deployment Guidance\r\n- Review `CHANGE_LOG.md` to verify all newly created skills before first run.\r\n- Store `agent.md`, all outputs, and new skills in version control.\r\n- Review the RAID log from Sub-Agent 1 before each new run.\r\n- Human sign-off required before running against production systems.\r\n```\r\n\r\nRules:\r\n- Every RAI principle row must be completed — state explicitly if not applicable and why.\r\n- Human approval must be required for any step that modifies production systems.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 4 complete — 
governance-plan.md produced. Agent definition finalised.`\r\n- **Confirm with user before finalising.**\r\n",
+ content: "# Orchestration Agent: {PROCESS_NAME}\r\n\r\n## Context\r\n\r\n**Process**: {PROCESS_NAME}\r\n**Requirements**: {REQUIREMENTS_SUMMARY}\r\n\r\n---\r\n\r\n## How to Run This Agent\r\n\r\n**Start with Sub-Agent 0 (Environment Discovery).** This gathers the user's\r\npermissions, tooling, and preferences so that every subsequent sub-agent produces\r\nplans tailored to their actual environment. Do not skip this step.\r\n\r\nThen execute each remaining sub-agent in sequence:\r\n\r\n1. Use only the inputs and instructions provided in this file.\r\n2. Produce the specified output document in the designated subfolder.\r\n3. Present the output to the user; ask clarifying questions if anything is unclear.\r\n4. Refine until the user explicitly confirms the output.\r\n5. Append a timestamped entry to `CHANGE_LOG.md` recording what was produced or decided.\r\n6. Pass the confirmed output as the primary input to the next sub-agent.\r\n **Every sub-agent must also read `00-environment-discovery/environment-profile.md`**\r\n and respect the path decisions recorded there.\r\n\r\n**Do not proceed to the next sub-agent without explicit user confirmation.**\r\n**Do not produce code, scripts, or data artefacts not described in each sub-agent below.**\r\n\r\n### Notebook Documentation Standard\r\n\r\nEvery Fabric notebook produced by any skill **must** include a numbered markdown cell\r\nimmediately above each code cell. Each markdown cell must:\r\n\r\n1. State the cell number and a short title (e.g. `## Cell 1 — Install dependencies`).\r\n2. Explain **what** the code cell does in 1–2 sentences.\r\n3. Explain **how to use it**: variables to change, flags to toggle, prerequisites.\r\n\r\nAll transformation logic and design rationale must be **embedded as markdown cells inside\r\nthe notebook** — not maintained as separate documentation files. The notebook is the single\r\nsource of truth. 
A reader must be able to understand what each cell does, why the logic was\r\nchosen, and how to run it without opening any other file.\r\n\r\n### Output Conventions\r\n\r\n- Each sub-agent writes to its own **numbered subfolder** (`01-implementation-plan/`,\r\n `02-business-process/`, etc.). Execution steps continue the numbering (e.g.,\r\n `05-execution/`, `06-gold-layer/`).\r\n- Within each subfolder, distinguish **final deliverables** (notebooks, SQL scripts,\r\n documentation the user runs or deploys) from **intermediate artefacts** (generator\r\n scripts that produce the deliverables). When presenting outputs, label each file.\r\n- All transformation logic and design rationale must be **embedded as markdown cells\r\n inside notebooks** — not maintained as separate documentation files. The notebook\r\n is the single source of truth.\r\n\r\n---\r\n\r\n## Sub-Agent 0: Environment Discovery\r\n\r\n**Input**: Requirements above\r\n**Output**: `00-environment-discovery/environment-profile.md`\r\n\r\nThis sub-agent runs **before anything is planned or built**. Its sole purpose is to\r\nunderstand the operator's environment, permissions, and preferences so that every\r\nsubsequent sub-agent produces plans tailored to what is actually possible and practical.\r\n\r\n**Invoke the `fabric-process-discovery` skill to run this step.**\r\n\r\nThe skill defines the full adaptive questioning tree — which questions to ask, in what\r\norder, and how to branch based on answers. Key principles:\r\n\r\n- **Read the requirements first.** Only ask about domains the process actually needs.\r\n A CSV ingestion job does not need workspace creation questions. A full pipeline\r\n needs all domains.\r\n- **Present all questions in a single turn**, grouped by domain. Never ask one question\r\n at a time. 
Target **5–7 questions** for most processes; simpler ones may need 3–4.\r\n- **Branch adaptively.** The skill defines conditional follow-ups — apply them after\r\n the first-turn answers before presenting the confirmation summary.\r\n- **Confirm before proceeding.** After processing answers, present the path table and\r\n ask: *\"Is this accurate, or anything to correct before I proceed to planning?\"*\r\n Wait for explicit confirmation.\r\n\r\nThe skill covers these domains (use only those relevant to the requirements):\r\n\r\n| Domain | When to include |\r\n|--------|----------------|\r\n| **A — Workspace access** | Any step creates or uses workspaces |\r\n| **A — Domain assignment** | Requirements mention domain governance (only if creating workspaces) |\r\n| **A — Access control / groups** | Process assigns roles to users or groups |\r\n| **B — Deployment approach** | Any step generates notebooks, scripts, or CLI commands |\r\n| **C — Source data location** | Process ingests files (CSV, PDF, etc.) |\r\n| **D — Capacity / SKU** | Process involves compute-intensive operations |\r\n\r\n**Critical framing rules from the skill — do not deviate:**\r\n\r\n1. **Deployment approach is NOT a CLI vs no-CLI question.** All three options (PySpark\r\n notebook, PowerShell script, CLI commands) use the Fabric CLI internally. The\r\n question is only about *how* the operator runs it. Present it as:\r\n - **A) PySpark notebook** — imported into Fabric, run cell-by-cell in the Fabric UI\r\n - **B) PowerShell script** — generated `.ps1` reviewed and run locally\r\n - **C) CLI commands** — individual `fab` commands run interactively in the terminal\r\n\r\n2. **Workspace creation must branch correctly.** If the operator cannot create\r\n workspaces, immediately ask for the exact names of existing hub and spoke\r\n workspaces — do not ask about domain assignment or access control (they only\r\n apply when creating).\r\n\r\n3. 
**Entra group Object IDs are a known technical constraint.** When groups are\r\n involved, always surface this: *\"The Fabric API requires Object IDs — display\r\n names are not accepted programmatically.\"* Then offer the resolution options\r\n (have IDs / Azure CLI / PowerShell Graph / UI manual).\r\n\r\n4. **Never leave the user blocked.** If a step requires permissions they don't have,\r\n offer: (a) skip and mark as manual, (b) produce a spec for their admin, or\r\n (c) substitute a UI-based workaround.\r\n\r\nOnce the environment profile is confirmed, save it as\r\n`00-environment-discovery/environment-profile.md` and append to `CHANGE_LOG.md`:\r\n`[{DATETIME}] Sub-Agent 0 complete — environment-profile.md produced. [N] path decisions recorded. Manual gates: [list or none].`\r\n\r\n**Confirm the environment profile with the user before proceeding to Sub-Agent 1.**\r\n\r\n---\r\n\r\n## Sub-Agent 1: Implementation Plan\r\n\r\n**Input**: Requirements above\r\n**Output**: `01-implementation-plan/implementation-plan.md`\r\n\r\nProduce a phased implementation plan using the structure below. 
Keep ≤50 lines.\r\nUpdate the RAID log whenever a later sub-agent raises a new risk or dependency.\r\n\r\n```markdown\r\n---\r\ngoal: {PROCESS_NAME} — Implementation Plan\r\nstatus: Planned\r\ndate_created: {DATE}\r\n---\r\n\r\n# Implementation Plan: {PROCESS_NAME}\r\n\r\n## Requirements & Constraints\r\n- REQ-001: [Requirement drawn from the context above]\r\n- CON-001: [Key constraint]\r\n\r\n## Phases\r\n\r\n### Phase 1: [Phase name]\r\n| Task | Description | Status |\r\n|----------|-------------|---------|\r\n| TASK-001 | [Task] | Planned |\r\n| TASK-002 | [Task] | Planned |\r\n\r\n### Phase 2: [Phase name]\r\n| Task | Description | Status |\r\n|----------|-------------|---------|\r\n| TASK-003 | [Task] | Planned |\r\n\r\n## RAID Log\r\n| Type | ID | Description | Mitigation / Action | Status |\r\n|------------|-------|--------------|---------------------|--------|\r\n| Risk | R-001 | [Risk] | [Mitigation] | Open |\r\n| Assumption | A-001 | [Assumption] | [Validation] | Open |\r\n| Issue | I-001 | [Issue] | [Resolution] | Open |\r\n| Dependency | D-001 | [Dependency] | [Owner] | Open |\r\n```\r\n\r\nRules:\r\n- Use REQ-, CON-, TASK-, R-, A-, I-, D- prefixes consistently.\r\n- Task status values: Planned / In Progress / Done.\r\n- Do not include implementation code or scripts.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 1 complete — implementation-plan.md produced.`\r\n- **Confirm with user before proceeding to Sub-Agent 2.**\r\n\r\n---\r\n\r\n## Sub-Agent 2: Business Process Mapping\r\n\r\n**Input**: Confirmed output of Sub-Agent 1 + Requirements above\r\n**Output**: `02-business-process/sop.md`\r\n\r\nThis sub-agent maps requirements to process skills, creates any that are missing,\r\nand produces a Standard Operating Procedure. Work through the three steps below.\r\n\r\n### Step 1 — Decompose requirements into process steps\r\n\r\nRead the requirements and break them into discrete, ordered steps. 
For each step,\r\nwrite a one-line description of what it needs to do and what its output is.\r\n\r\n### Step 2 — Map each step to a process skill\r\n\r\nFor each step, search the skills directory for a matching process skill\r\n(a skill whose description covers the same action and output).\r\n\r\nFor every step, one of three outcomes applies:\r\n\r\n**A — Skill found**: Read the skill's `SKILL.md`. Note its inputs, outputs, and\r\nany parameters it needs from earlier steps. Mark the step as covered.\r\n\r\n**B — Skill not found**: Determine the deterministic logic needed to automate\r\nthis step (the specific inputs, the repeatable actions, and the expected output).\r\nInvoke `create-fabric-process-skill` to create a new skill definition for this step.\r\nOnce created, read its `SKILL.md` and mark the step as covered.\r\nAppend to `CHANGE_LOG.md`:\r\n`[{DATETIME}] New skill created: [skill-name] — [one-line description of what it does].`\r\nAdd the new skill as a dependency in the RAID log from Sub-Agent 1.\r\n\r\n**C — Step must be manual**: If the step cannot be automated (e.g. requires human\r\njudgement or a physical action), document it as a manual step with exact operator\r\ninstructions and mark it accordingly.\r\n\r\nRepeat until every step is either covered by a skill or accepted as manual.\r\nAsk the user to confirm the skill list before proceeding to Step 3.\r\n\r\n### Step 3 — Produce the SOP\r\n\r\n```markdown\r\n# SOP: {PROCESS_NAME}\r\n\r\n## Step Sequence\r\n| Step | Skill / Action | Input Parameters | Output | Manual? 
|\r\n|------|---------------------|--------------------|-------------------|---------|\r\n| 1 | [skill-name] | param=value | [output artefact] | No |\r\n| 2 | [skill-name] | output from step 1 | [output artefact] | No |\r\n| 3 | [Manual: action] | — | — | Yes |\r\n\r\n## Shared Parameters\r\n| Parameter | Source | Passed to steps |\r\n|-----------|------------|-----------------|\r\n| [param] | User input | 1, 3 |\r\n\r\n## Newly Created Skills\r\n| Skill name | Step | Description |\r\n|--------------|------|------------------------------------|\r\n| [skill-name] | 2 | [What it does — one line] |\r\n\r\n## Manual Steps\r\n- MANUAL-001: [Step] — [Reason] — [Exact operator instructions]\r\n```\r\n\r\nRules:\r\n- If requirements are unclear for any step, ask a targeted question and update\r\n requirements before continuing.\r\n- New skills created in this sub-agent are a permanent addition to the skills\r\n library and will be available for future agents.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 2 complete — sop.md produced. [N] new skills created.`\r\n- **Confirm with user before proceeding to Sub-Agent 3.**\r\n\r\n---\r\n\r\n## Sub-Agent 3: Solution Architecture\r\n\r\n**Input**: Confirmed output of Sub-Agent 2\r\n**Output**: `03-solution-architecture/specification.md`\r\n\r\nProduce a plain-language specification. Keep total length ≤50 lines.\r\nWrite for a non-technical reader — no code, no implementation detail.\r\n\r\n```markdown\r\n---\r\ntitle: {PROCESS_NAME} — Solution Specification\r\nstatus: Draft\r\ndate_created: {DATE}\r\n---\r\n\r\n# Specification: {PROCESS_NAME}\r\n\r\n## Purpose\r\n[One paragraph: what this solution does and what problem it solves.]\r\n\r\n## Scope\r\n[What is included and what is explicitly excluded.]\r\n\r\n## How It Works\r\n| Step | What happens | Automated? 
| Notes |\r\n|------|-------------------------------|------------|-----------------|\r\n| 1 | [Plain-language description] | Yes | |\r\n| 2 | [Plain-language description] | No | See MANUAL-001 |\r\n\r\n## Manual Steps\r\n- MANUAL-001: [Step] — [Reason] — [Exact operator instructions]\r\n\r\n## Acceptance Criteria\r\n- AC-001: Given [context], when [action], then [expected outcome].\r\n\r\n## Dependencies\r\n- DEP-001: [External system, file, or service] — [Purpose]\r\n```\r\n\r\nRules:\r\n- Write for a non-technical reader. No jargon without explanation.\r\n- Every manual step must include exact operator instructions.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 3 complete — specification.md produced.`\r\n- **Confirm with user before proceeding to Sub-Agent 4.**\r\n\r\n---\r\n\r\n## Sub-Agent 4: Security, Testing and Governance\r\n\r\n**Input**: Confirmed output of Sub-Agent 3\r\n**Output**: `04-governance/governance-plan.md`\r\n\r\nProduce a governance and deployment plan. Keep total length ≤45 lines.\r\n\r\n```markdown\r\n---\r\ntitle: {PROCESS_NAME} — Governance Plan\r\ndate_created: {DATE}\r\n---\r\n\r\n# Governance Plan: {PROCESS_NAME}\r\n\r\n## Agent Boundaries\r\n| Boundary | Rule |\r\n|-------------------------|--------------------------------------------|\r\n| Allowed actions | [Permitted operations] |\r\n| Blocked actions | [Prohibited operations] |\r\n| Requires human approval | [Steps needing explicit sign-off] |\r\n\r\n## Testing Checklist\r\n- [ ] Validate each sub-agent output before passing it to the next\r\n- [ ] Test all manual steps with a real operator before production use\r\n- [ ] Run against a minimal test dataset before using real data\r\n- [ ] Review CHANGE_LOG.md to confirm all new skills are correct\r\n- [ ] Verify the output folder structure after scaffolding\r\n\r\n## Microsoft Responsible AI Alignment\r\n| Principle | How Applied |\r\n|----------------|--------------------------------------------------------|\r\n| Fairness | 
[How bias is avoided in outputs and decisions] |\r\n| Reliability | [Validation steps, error handling, new skill review] |\r\n| Privacy | [Data handling — no PII retained in output files] |\r\n| Inclusiveness | [Plain language; no domain assumptions made] |\r\n| Transparency | [User validates every sub-agent output; CHANGE_LOG] |\r\n| Accountability | [Human sign-off required before production execution] |\r\n\r\n## Deployment Guidance\r\n- Review `CHANGE_LOG.md` to verify all newly created skills before first run.\r\n- Store `agent.md`, all outputs, and new skills in version control.\r\n- Review the RAID log from Sub-Agent 1 before each new run.\r\n- Human sign-off required before running against production systems.\r\n```\r\n\r\nRules:\r\n- Every RAI principle row must be completed — state explicitly if not applicable and why.\r\n- Human approval must be required for any step that modifies production systems.\r\n- Append to `CHANGE_LOG.md`: `[{DATETIME}] Sub-Agent 4 complete — governance-plan.md produced. Agent definition finalised.`\r\n- **Confirm with user before finalising.**\r\n",
  },
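The agent template embedded above requires every sub-agent to append a timestamped entry to `CHANGE_LOG.md`. A minimal sketch of that convention in TypeScript (the helper name `appendChangeLog` is illustrative, not an export of this package):

```typescript
import { appendFileSync } from "node:fs";

/**
 * Append a timestamped entry in the `[{DATETIME}] ...` format the
 * sub-agents use. Helper name and signature are illustrative only.
 */
export function appendChangeLog(entry: string, logPath = "CHANGE_LOG.md"): string {
  const stamp = new Date().toISOString(); // e.g. 2026-04-04T20:21:44.415Z
  const line = `[${stamp}] ${entry}\n`;
  appendFileSync(logPath, line, "utf8");
  return line;
}
```

Each sub-agent would call this once after its output is confirmed, e.g. `appendChangeLog("Sub-Agent 1 complete — implementation-plan.md produced.")`.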
  {
  relativePath: "references/section-descriptions.md",
@@ -177,6 +177,16 @@ export const EMBEDDED_SKILLS = [
  },
  ],
  },
+ {
+ name: "fabric-process-discovery",
+ category: "fabric",
+ files: [
+ {
+ relativePath: "SKILL.md",
+ content: "---\nname: fabric-process-discovery\ndescription: >\n Use this skill to conduct the initial environment discovery conversation for any\n Microsoft Fabric process workflow. Collects workspace access, deployment approach,\n access control preferences, capacity, and data location through an adaptive,\n one-question-at-a-time conversation grounded in what the downstream Fabric skills\n actually require. Output is a structured environment profile used by the\n orchestrating agent to plan execution. Triggers as Sub-Agent 0 in any Fabric\n process workflow agent.\nlicense: MIT\ncompatibility: Works in any Claude context — no external tools required at this stage.\n---\n\n# Fabric Process Discovery\n\n> ⚠️ **GOVERNANCE**: This skill only gathers context — it never executes commands or\n> creates resources. All collected information feeds into the execution plan which the\n> operator reviews and confirms before anything runs.\n\n## Workflow\n\n1. Read the process requirements and identify which domains below are relevant.\n2. Ask one question at a time, branching adaptively based on each answer.\n3. Collect all path decisions and any parameter values the operator has available.\n4. Present a confirmation summary and wait for explicit approval.\n5. Write the environment profile and append to `CHANGE_LOG.md`.\n\nRuns a structured, adaptive discovery conversation before any Fabric work begins.\nAsk **one question at a time**. Branch based on each answer before deciding what to\nask next. Every question must explain why it matters. Never leave the user blocked.\n\n## Principles\n\nThese are not scripts to follow — they are the reasoning the model should apply\nwhen deriving and sequencing questions.\n\n**1. Read requirements first.**\nBefore asking anything, read the process requirements. Identify which domains below\nare relevant. Only ask about what the requirements actually need — do not run\nthrough all domains for every process.\n\n**2. 
Ask one question at a time.**\nNever present multiple questions in one turn. Ask the most important unresolved\nquestion, wait for the answer, then decide what to ask next based on that answer.\nThis produces cleaner answers and better branching.\n\n**3. Always explain why.**\nEvery question must briefly state what it unlocks or what it blocks. Users answer\nbetter when they understand the purpose.\n\n**4. Always offer a way forward.**\nEvery question should include an option to provide the answer later (placeholder),\nor to skip the step if it is optional. For questions requiring specific values the\nuser may not have ready (names, IDs, capacity names), offer a command or\ninstruction that helps them find it. Never leave the user stuck.\n\n**5. Distinguish path decisions from parameter values.**\n- **Path decisions** (can you create workspaces? what deployment approach?) determine\n the plan structure — always collect these during discovery.\n- **Parameter values** (exact workspace names, group Object IDs, capacity name) are\n needed before execution — collect them now if the user has them, or flag them as\n \"required before running\" if not.\n\n**6. Trust the model's intelligence.**\nThe domains below describe what to establish and the technical context needed to\nask good questions. Do not read them as scripts. 
Derive clear, natural questions\nfrom the requirements and the conversation so far.\n\n---\n\n## Domains\n\nCover only the domains relevant to the process requirements:\n\n| Process involves | Domains to cover |\n|---|---|\n| Creating workspaces | A, B, C, D, F |\n| Creating lakehouses | A, D, F |\n| Ingesting files (CSV/PDF) | D, E |\n| Running notebooks/scripts | D, F |\n| Full pipeline | All domains |\n\n---\n\n### Domain A — Workspace access\n\n**What to establish:**\n- Can the operator create new workspaces, or must they use existing ones?\n- If creating: what names do they want?\n- If using existing: what are the exact names?\n\n**Technical context:**\n- Workspace names are case-sensitive in `fab` paths. Always confirm exact casing.\n- If the operator is unsure whether they have create rights, `fab ls` will show\n workspaces they already have access to. Command requires `fab` CLI installed first:\n `pip install ms-fabric-cli` → `fab auth login` → `fab ls`.\n- Read the requirements to determine how many workspaces are needed (e.g. hub and\n spoke, or a single workspace) before asking.\n\n**Branch:**\n- Can create → collect intended workspace names (or use placeholder if not decided)\n- Cannot create → collect exact names of existing workspaces to use\n- Unsure → offer the `fab ls` command to check; proceed once confirmed\n\n---\n\n### Domain B — Domain assignment\n\n**What to establish:**\n- Does the operator want to assign the workspace(s) to a Fabric domain?\n- If yes: assign to existing domain, or create a new one?\n- If creating a new domain: do they have Fabric Admin rights?\n\n**Technical context:**\n- Domain assignment is optional. 
Many teams skip it and add it later.\n- Assigning to an existing domain requires no special rights beyond workspace access.\n- **Creating a new domain requires Fabric Administrator rights — this is a\n tenant-level permission, not workspace-level.** If the operator is unsure, default\n to assigning an existing domain or skipping. Do not assume they have these rights.\n- Domain assignment can always be done later via the Fabric portal.\n\n**Branch:**\n- Assign to existing domain → collect domain name\n- Create new domain → confirm Fabric Admin rights; if uncertain or no → mark as\n manual gate, note the intended domain name for the plan\n- Skip → no domain parameters needed\n\n---\n\n### Domain C — Access control\n\n**What to establish:**\n- Beyond the workspace creator (automatically assigned as Admin), should additional\n users or security groups be assigned workspace roles?\n- If groups: how will the Object IDs be obtained?\n\n**Technical context:**\n- **The Fabric REST API requires Entra group Object IDs (GUIDs) — display names are\n not accepted programmatically.** This is a hard API requirement.\n- Individual users can be identified by email address (UPN) — no Object ID needed.\n- Object IDs can be found via:\n - Azure portal: Azure Active Directory → Groups → select group → Object ID field\n - Azure CLI: `az ad group show --group \"Display Name\" --query id -o tsv`\n - PowerShell: `Get-MgGroup -Filter \"displayName eq 'Name'\" | Select-Object Id`\n- **If the deployment approach is a PySpark notebook AND security groups are involved:\n `notebookutils` inside a Fabric notebook cannot query Microsoft Graph.** The\n notebook cannot resolve group display names to Object IDs at runtime. 
Options:\n (a) operator provides Object IDs directly before running, (b) IDs are resolved via\n Azure CLI or PowerShell before the notebook is run, (c) switch to PowerShell or\n terminal deployment for the role assignment step.\n\n**Branch:**\n- No additional access → skip role collection\n- Users only → collect email addresses and intended roles\n- Security groups → ask if the operator can see the groups in the Azure portal:\n - Yes → ask if they will provide Object IDs directly, or want the agent to\n generate Azure CLI lookup commands to retrieve them automatically\n - No / unsure → mark group role assignment as manual; provide portal instructions\n- Mix of users and groups → handle each type appropriately\n\n**Roles available:** Admin, Member, Contributor, Viewer\n\n---\n\n### Domain D — Deployment approach\n\n**What to establish:**\n- How does the operator want to run the generated scripts or notebooks?\n\n**Technical context:**\n- **All three approaches use the Fabric CLI (`fab`) internally.** This is not a\n question about whether to use the CLI — it is about how the operator runs the\n generated artefacts.\n- **PySpark notebook:** imported into a Fabric workspace and run cell-by-cell in the\n Fabric UI. Authentication is automatic via `notebookutils`. Best for operators\n who prefer working inside Fabric and want step-by-step visibility.\n- **PowerShell script:** a `.ps1` file the operator reviews and runs locally.\n Requires `fab` CLI installed locally (`pip install ms-fabric-cli`) and PowerShell.\n- **Terminal commands:** individual `fab` commands run one at a time in a terminal.\n Requires `fab` CLI installed locally. 
Best for operators who want full control\n and visibility at each step.\n- If the operator chooses notebook AND has Entra group role assignments, flag the\n `notebookutils` / Microsoft Graph constraint from Domain C before proceeding.\n\n---\n\n### Domain E — Source data\n\n*Only ask if the process involves ingesting files.*\n\n**What to establish:**\n- Where are the source files (CSVs, PDFs, etc.)?\n\n**Technical context:**\n- Local files require an upload step before they can be referenced in Fabric.\n- Files already in OneLake can be referenced by path directly — no upload needed.\n- Files in SharePoint or Azure Blob Storage can be connected via Fabric shortcuts,\n avoiding the need to copy data.\n\n**Branch:**\n- Local machine → include an upload step in the plan\n- Already in OneLake → collect the OneLake path; skip upload\n- Cloud storage (SharePoint / Azure Blob) → collect source URL; include shortcut\n creation step\n\n---\n\n### Domain F — Capacity\n\n*Ask whenever workspaces are being created.*\n\n**What to establish:**\n- What Fabric capacity will the workspace(s) be assigned to?\n\n**Technical context:**\n- Every Fabric workspace must be assigned to an active capacity at creation time.\n- The capacity must be in Active state — if it is paused, the operator must resume\n it in the Azure portal before running workspace creation.\n- The operator may not know the exact name. 
Options:\n - Run `fab ls` — capacity information appears in the output\n - Check the Fabric Admin portal under Capacities\n- If the operator does not have the name yet, use the placeholder `[CAPACITY_NAME]`\n and flag it as required before the notebook or script is run.\n\n---\n\n## What to Collect\n\nBy the end of discovery, the environment profile must include:\n\n**Path decisions** (always required — these determine the shape of the plan):\n- Workspace approach: creating new / using existing\n- Domain approach: new (manual if no admin rights) / existing / skipped\n- Access control: none / users only / groups / manual\n- Deployment approach: notebook / PowerShell / terminal\n- Group ID resolution method (if groups involved): direct / CLI lookup / manual\n\n**Parameter values** (collect if available; flag as required before run if not):\n- Workspace name(s) — exact, case-preserved\n- Capacity name\n- Domain name (if assigning)\n- Security group display names and intended roles\n- Group Object IDs (if the operator has them; otherwise flag as needed before run)\n- Existing workspace names (verbatim, if using existing)\n\n---\n\n## Confirmation\n\nBefore writing the environment profile, present a concise summary table of all path\ndecisions and collected parameters. Ask the operator to confirm accuracy. 
If anything\nis missing or unclear, ask only the targeted follow-up needed — do not restart from\nthe beginning.\n\nExample format:\n\n```\n| # | Question | Your answer | What this means |\n|---|-----------------------|------------------------------------|----------------------------------------------------|\n| A | Workspace creation | Creating new | Agent will create hub + spoke workspaces |\n| B | Domain assignment | New domain (manual gate) | Domain creation flagged manual — admin rights needed |\n| C | Access control | Security groups — IDs to be provided | Role assignment scripted; IDs needed before run |\n| D | Deployment approach | PySpark notebook | Agent generates .ipynb for import into Fabric |\n| F | Capacity | ldifabricdev | Embedded in notebook |\n```\n\n---\n\n## Output\n\nSave the confirmed profile as `00-environment-discovery/environment-profile.md`.\n\nInclude:\n- All path decisions\n- All collected parameter values\n- Parameters flagged as required before execution, with instructions for obtaining them\n- Manual gates — steps the operator must perform themselves, and why\n- Deployment prerequisites (e.g. `pip install ms-fabric-cli` if PowerShell or terminal)\n\nAppend to `CHANGE_LOG.md`:\n`[{DATETIME}] Sub-Agent 0 complete — environment-profile.md produced. [N] path decisions recorded. Manual gates: [list or none]. Parameters still needed: [list or none].`\n\n---\n\n## Gotchas\n\n- **Never frame deployment as CLI vs no-CLI.** All three approaches use `fab`. The\n question is only about how the operator runs the generated artefacts.\n- **Workspace names are case-sensitive in `fab` paths.** Always confirm exact casing.\n- **Entra group Object IDs are GUIDs, not display names.** The Fabric REST API will\n reject display names. 
If the user provides a name, generate a lookup command rather\n than scripting the assignment directly.\n- **`notebookutils` does not support Microsoft Graph.** A Fabric notebook cannot\n resolve group display names to Object IDs at runtime. Either the operator provides\n IDs directly, or resolution must happen outside the notebook.\n- **Domain creation requires Fabric Administrator rights — tenant-level.** Workspace\n Admin rights are not sufficient. Default to assigning an existing domain or skipping\n if there is any doubt about the operator's rights.\n- **Never leave the user blocked.** If a step requires permissions they don't have,\n always offer: (a) skip and mark as manual, (b) produce a spec for their admin, or\n (c) substitute a UI-based workaround.\n",
+ },
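Domain C of the skill above instructs the agent to generate an Azure CLI lookup command, rather than script the role assignment directly, whenever only a group display name is available. A sketch of that step (the function name `groupIdLookupCommand` is hypothetical):

```typescript
/**
 * Build the Azure CLI command that resolves an Entra group display name
 * to its Object ID. The Fabric REST API accepts only the GUID, never
 * the display name, so this lookup must happen before role assignment.
 */
export function groupIdLookupCommand(displayName: string): string {
  const escaped = displayName.replace(/"/g, '\\"'); // guard embedded quotes
  return `az ad group show --group "${escaped}" --query id -o tsv`;
}
```

The operator runs the emitted command themselves; the agent only embeds the returned GUID in the generated notebook or script.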
+ ],
+ },
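The entry added in this hunk follows the `EmbeddedSkill` shape declared in `embedded.d.ts` (`name`, `category`, `files`). A sketch of how a consumer might resolve a skill by name from `EMBEDDED_SKILLS` (`findSkill` is hypothetical, not an export of this package):

```typescript
// Shapes mirror the interfaces in this package's embedded.d.ts.
interface SkillFile { relativePath: string; content: string; }
interface EmbeddedSkill { name: string; category: string; files: SkillFile[]; }

/** Look up an embedded skill by name, e.g. "fabric-process-discovery". */
export function findSkill(skills: EmbeddedSkill[], name: string): EmbeddedSkill | undefined {
  return skills.find((s) => s.name === name);
}
```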
  {
  name: "generate-fabric-workspace",
  category: "fabric",
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@rishildi/ldi-process-skills-test",
- "version": "0.0.4",
+ "version": "0.0.6",
  "description": "LDI Process Skills MCP Server — TEST channel. Mirrors the development branch for pre-production validation.",
  "type": "module",
  "bin": {