@rishildi/ldi-process-skills 0.1.2 → 0.1.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/build/skills/embedded.js +2 -2
- package/package.json +1 -1
package/build/skills/embedded.js
CHANGED
@@ -1,5 +1,5 @@
 // AUTO-GENERATED by scripts/embed-skills.ts — do not edit
-// Generated at: 2026-04-04T20:
+// Generated at: 2026-04-04T20:35:33.906Z
 export const EMBEDDED_SKILLS = [
   {
     name: "create-fabric-lakehouses",
@@ -183,7 +183,7 @@ export const EMBEDDED_SKILLS = [
     files: [
       {
         relativePath: "SKILL.md",
-
content: "---\nname: fabric-process-discovery\ndescription: >\n Use this skill to conduct the initial environment discovery conversation for any\n Microsoft Fabric process workflow. Collects workspace access, deployment approach,\n access control preferences, capacity, and data location through an adaptive,\n one-question-at-a-time conversation grounded in what the downstream Fabric skills\n actually require. Output is a structured environment profile used by the\n orchestrating agent to plan execution. Triggers as Sub-Agent 0 in any Fabric\n process workflow agent.\nlicense: MIT\ncompatibility: Works in any Claude context — no external tools required at this stage.\n---\n\n# Fabric Process Discovery\n\n> ⚠️ **GOVERNANCE**: This skill only gathers context — it never executes commands or\n> creates resources. All collected information feeds into the execution plan which the\n> operator reviews and confirms before anything runs.\n\n## Workflow\n\n1. Read the process requirements and identify which domains below are relevant.\n2. Ask one question at a time, branching adaptively based on each answer.\n3. Collect all path decisions and any parameter values the operator has available.\n4. Present a confirmation summary and wait for explicit approval.\n5. Write the environment profile and append to `CHANGE_LOG.md`.\n\nRuns a structured, adaptive discovery conversation before any Fabric work begins.\nAsk **one question at a time**. Branch based on each answer before deciding what to\nask next. Every question must explain why it matters. Never leave the user blocked.\n\n## Principles\n\nThese are not scripts to follow — they are the reasoning the model should apply\nwhen deriving and sequencing questions.\n\n**1. Read requirements first.**\nBefore asking anything, read the process requirements. Identify which domains below\nare relevant. Only ask about what the requirements actually need — do not run\nthrough all domains for every process.\n\n**2. 
Ask one question at a time.**\nNever present multiple questions in one turn. Ask the most important unresolved\nquestion, wait for the answer, then decide what to ask next based on that answer.\nThis produces cleaner answers and better branching.\n\n**3. Always explain why.**\nEvery question must briefly state what it unlocks or what it blocks. Users answer\nbetter when they understand the purpose.\n\n**4. Always offer a way forward.**\nEvery question should include an option to provide the answer later (placeholder),\nor to skip the step if it is optional. For questions requiring specific values the\nuser may not have ready (names, IDs, capacity names), offer a command or\ninstruction that helps them find it. Never leave the user stuck.\n\n**5. Distinguish path decisions from parameter values.**\n- **Path decisions** (can you create workspaces? what deployment approach?) determine\n the plan structure — always collect these during discovery.\n- **Parameter values** (exact workspace names, group Object IDs, capacity name) are\n needed before execution — collect them now if the user has them, or flag them as\n \"required before running\" if not.\n\n**6. Trust the model's intelligence.**\nThe domains below describe what to establish and the technical context needed to\nask good questions. Do not read them as scripts. 
Derive clear, natural questions\nfrom the requirements and the conversation so far.\n\n---\n\n## Domains\n\nCover only the domains relevant to the process requirements:\n\n| Process involves | Domains to cover |\n|---|---|\n| Creating workspaces | A, B, C, D, F |\n| Creating lakehouses | A, D, F |\n| Ingesting files (CSV/PDF) | D, E |\n| Running notebooks/scripts | D, F |\n| Full pipeline | All domains |\n\n---\n\n### Domain A — Workspace access\n\n**What to establish:**\n- Can the operator create new workspaces, or must they use existing ones?\n- If creating: what names do they want?\n- If using existing: what are the exact names?\n\n**Technical context:**\n- Workspace names are case-sensitive in `fab` paths. Always confirm exact casing.\n- If the operator is unsure whether they have create rights, `fab ls` will show\n workspaces they already have access to. Command requires `fab` CLI installed first:\n `pip install ms-fabric-cli` → `fab auth login` → `fab ls`.\n- Read the requirements to determine how many workspaces are needed (e.g. hub and\n spoke, or a single workspace) before asking.\n\n**Branch:**\n- Can create → collect intended workspace names (or use placeholder if not decided)\n- Cannot create → collect exact names of existing workspaces to use\n- Unsure → offer the `fab ls` command to check; proceed once confirmed\n\n---\n\n### Domain B — Domain assignment\n\n**What to establish:**\n- Does the operator want to assign the workspace(s) to a Fabric domain?\n- If yes: assign to existing domain, or create a new one?\n- If creating a new domain: do they have Fabric Admin rights?\n\n**Technical context:**\n- Domain assignment is optional. 
Many teams skip it and add it later.\n- Assigning to an existing domain requires no special rights beyond workspace access.\n- **Creating a new domain requires Fabric Administrator rights — this is a\n tenant-level permission, not workspace-level.** If the operator is unsure, default\n to assigning an existing domain or skipping. Do not assume they have these rights.\n- Domain assignment can always be done later via the Fabric portal.\n\n**Branch:**\n- Assign to existing domain → collect domain name\n- Create new domain → confirm Fabric Admin rights; if uncertain or no → mark as\n manual gate, note the intended domain name for the plan\n- Skip → no domain parameters needed\n\n---\n\n### Domain C — Access control\n\n**What to establish:**\n- Beyond the workspace creator (automatically assigned as Admin), should additional\n users or security groups be assigned workspace roles?\n- If groups: how will the Object IDs be obtained?\n\n**Technical context:**\n- **The Fabric REST API requires Entra group Object IDs (GUIDs) — display names are\n not accepted programmatically.** This is a hard API requirement.\n- Individual users can be identified by email address (UPN) — no Object ID needed.\n- Object IDs can be found via:\n - Azure portal: Azure Active Directory → Groups → select group → Object ID field\n - Azure CLI: `az ad group show --group \"Display Name\" --query id -o tsv`\n - PowerShell: `Get-MgGroup -Filter \"displayName eq 'Name'\" | Select-Object Id`\n- **If the deployment approach is a PySpark notebook AND security groups are involved:\n `notebookutils` inside a Fabric notebook cannot query Microsoft Graph.** The\n notebook cannot resolve group display names to Object IDs at runtime. 
Options:\n (a) operator provides Object IDs directly before running, (b) IDs are resolved via\n Azure CLI or PowerShell before the notebook is run, (c) switch to PowerShell or\n terminal deployment for the role assignment step.\n\n**Branch:**\n- No additional access → skip role collection\n- Users only → collect email addresses and intended roles\n- Security groups → ask if the operator can see the groups in the Azure portal:\n - Yes → ask if they will provide Object IDs directly, or want the agent to\n generate Azure CLI lookup commands to retrieve them automatically\n - No / unsure → mark group role assignment as manual; provide portal instructions\n- Mix of users and groups → handle each type appropriately\n\n**Roles available:** Admin, Member, Contributor, Viewer\n\n---\n\n### Domain D — Deployment approach\n\n**What to establish:**\n- How does the operator want to run the generated scripts or notebooks?\n\n**Technical context:**\n- **All three approaches use the Fabric CLI (`fab`) internally.** This is not a\n question about whether to use the CLI — it is about how the operator runs the\n generated artefacts.\n- **PySpark notebook:** imported into a Fabric workspace and run cell-by-cell in the\n Fabric UI. Authentication is automatic via `notebookutils`. Best for operators\n who prefer working inside Fabric and want step-by-step visibility.\n- **PowerShell script:** a `.ps1` file the operator reviews and runs locally.\n Requires `fab` CLI installed locally (`pip install ms-fabric-cli`) and PowerShell.\n- **Terminal commands:** individual `fab` commands run one at a time in a terminal.\n Requires `fab` CLI installed locally. 
Best for operators who want full control\n and visibility at each step.\n- If the operator chooses notebook AND has Entra group role assignments, flag the\n Service Principal constraint from Domain C before proceeding.\n\n---\n\n### Domain E — Source data\n\n*Only ask if the process involves ingesting files.*\n\n**What to establish:**\n- Where are the source files (CSVs, PDFs, etc.)?\n\n**Technical context:**\n- Local files require an upload step before they can be referenced in Fabric.\n- Files already in OneLake can be referenced by path directly — no upload needed.\n- Files in SharePoint or Azure Blob Storage can be connected via Fabric shortcuts,\n avoiding the need to copy data.\n\n**Branch:**\n- Local machine → include an upload step in the plan\n- Already in OneLake → collect the OneLake path; skip upload\n- Cloud storage (SharePoint / Azure Blob) → collect source URL; include shortcut\n creation step\n\n---\n\n### Domain F — Capacity\n\n*Ask whenever workspaces are being created.*\n\n**What to establish:**\n- What Fabric capacity will the workspace(s) be assigned to?\n\n**Technical context:**\n- Every Fabric workspace must be assigned to an active capacity at creation time.\n- The capacity must be in Active state — if it is paused, the operator must resume\n it in the Azure portal before running workspace creation.\n- The operator may not know the exact name. 
Options:\n - Run `fab ls` — capacity information appears in the output\n - Check the Fabric Admin portal under Capacities\n- If the operator does not have the name yet, use the placeholder `[CAPACITY_NAME]`\n and flag it as required before the notebook or script is run.\n\n---\n\n## What to Collect\n\nBy the end of discovery, the environment profile must include:\n\n**Path decisions** (always required — these determine the shape of the plan):\n- Workspace approach: creating new / using existing\n- Domain approach: new (manual if no admin rights) / existing / skipped\n- Access control: none / users only / groups / manual\n- Deployment approach: notebook / PowerShell / terminal\n- Group ID resolution method (if groups involved): direct / CLI lookup / manual\n\n**Parameter values** (collect if available; flag as required before run if not):\n- Workspace name(s) — exact, case-preserved\n- Capacity name\n- Domain name (if assigning)\n- Security group display names and intended roles\n- Group Object IDs (if the operator has them; otherwise flag as needed before run)\n- Existing workspace names (verbatim, if using existing)\n\n---\n\n## Confirmation\n\nBefore writing the environment profile, present a concise summary table of all path\ndecisions and collected parameters. Ask the operator to confirm accuracy. 
If anything\nis missing or unclear, ask only the targeted follow-up needed — do not restart from\nthe beginning.\n\nExample format:\n\n```\n| # | Question | Your answer | What this means |\n|---|-----------------------|------------------------------------|----------------------------------------------------|\n| A | Workspace creation | Creating new | Agent will create hub + spoke workspaces |\n| B | Domain assignment | New domain (manual gate) | Domain creation flagged manual — admin rights needed |\n| C | Access control | Security groups — IDs to be provided | Role assignment scripted; IDs needed before run |\n| D | Deployment approach | PySpark notebook | Agent generates .ipynb for import into Fabric |\n| F | Capacity | ldifabricdev | Embedded in notebook |\n```\n\n---\n\n## Output\n\nSave the confirmed profile as `00-environment-discovery/environment-profile.md`.\n\nInclude:\n- All path decisions\n- All collected parameter values\n- Parameters flagged as required before execution, with instructions for obtaining them\n- Manual gates — steps the operator must perform themselves, and why\n- Deployment prerequisites (e.g. `pip install ms-fabric-cli` if PowerShell or terminal)\n\nAppend to `CHANGE_LOG.md`:\n`[{DATETIME}] Sub-Agent 0 complete — environment-profile.md produced. [N] path decisions recorded. Manual gates: [list or none]. Parameters still needed: [list or none].`\n\n---\n\n## Gotchas\n\n- **Never frame deployment as CLI vs no-CLI.** All three approaches use `fab`. The\n question is only about how the operator runs the generated artefacts.\n- **Workspace names are case-sensitive in `fab` paths.** Always confirm exact casing.\n- **Entra group Object IDs are GUIDs, not display names.** The Fabric REST API will\n reject display names. 
If the user provides a name, generate a lookup command rather\n than scripting the assignment directly.\n- **`notebookutils` does not support Microsoft Graph.** A Fabric notebook cannot\n resolve group display names to Object IDs at runtime. Either the operator provides\n IDs directly, or resolution must happen outside the notebook.\n- **Domain creation requires Fabric Administrator rights — tenant-level.** Workspace\n Admin rights are not sufficient. Default to assigning an existing domain or skipping\n if there is any doubt about the operator's rights.\n- **Never leave the user blocked.** If a step requires permissions they don't have,\n always offer: (a) skip and mark as manual, (b) produce a spec for their admin, or\n (c) substitute a UI-based workaround.\n",
+
content: "---\nname: fabric-process-discovery\ndescription: >\n Use this skill to conduct the initial environment discovery conversation for any\n Microsoft Fabric process workflow. Collects workspace access, deployment approach,\n access control preferences, capacity, and data location through a FATA-aligned,\n one-question-at-a-time adaptive conversation grounded in what the downstream Fabric\n skills actually require. Output is a structured environment profile used by the\n orchestrating agent to plan execution. Triggers as Sub-Agent 0 in any Fabric\n process workflow agent.\nlicense: MIT\ncompatibility: Works in any Claude context — no external tools required at this stage.\n---\n\n# Fabric Process Discovery\n\n> ⚠️ **GOVERNANCE**: This skill only gathers context — it never executes commands or\n> creates resources. All collected information feeds into the execution plan which the\n> operator reviews and confirms before anything runs.\n>\n> ⚠️ **PRIVACY**: Never ask for passwords, access tokens, client secrets, or any\n> credential values. If the plan requires a Service Principal, record only that one\n> is needed — not the values. Credentials are entered by the operator at runtime,\n> not during discovery.\n\n## Workflow\n\n1. Adopt a Fabric architect expert perspective before asking anything.\n2. Read process requirements and identify which domains are relevant.\n3. Gather contextual and historical background first (one question).\n4. Work through relevant domains — one question at a time, branching on each answer.\n5. Present a confirmation summary and wait for explicit approval.\n6. Write the environment profile and append to `CHANGE_LOG.md`.\n\n---\n\n## Core Principles\n\nThese govern how every question is asked. They are not optional — apply all of them\nthroughout the conversation.\n\n**1. Adopt expert perspective first (FATA: Domain Expert Activation).**\nBefore generating any questions, reason as a senior Fabric architect reviewing the\nrequirements. 
Ask yourself: *what information gaps, if left unfilled, would cause\nthe plan to fail or need rework?* Those are the questions worth asking. Surface\nthings the operator may not know they need to tell you.\n\n**2. One question at a time — Yes/No or 3–4 options.**\nNever present multiple questions in one turn. Each question must be answerable with\na yes/no or a single choice from 3–4 clearly labelled options (A/B/C or A/B/C/D).\nWait for the answer before deciding what to ask next. This is intentional:\nin Fabric discovery, each answer materially changes which questions are relevant —\npresenting all questions at once produces noise. Single-turn efficiency is the right\ndefault for general LLMs; one-at-a-time branching is correct here.\n\n**3. Scaffold before asking (FATA: User Experience Scaffolding).**\nBefore each question, write one sentence explaining what the question is trying to\nunderstand and why it matters for the plan. Operators new to Fabric cannot anticipate\nwhat a Fabric architect considers essential. Make the purpose visible.\n\n**4. Cover all five FATA information dimensions.**\nStructure discovery to address all five dimensions — not just the obvious ones:\n\n| Dimension | What to establish |\n|---|---|\n| **Contextual** | Project background, team, experience level with Fabric |\n| **Constraint-based** | Permissions, tooling, licensing limits |\n| **Preference-oriented** | Deployment style, governance priorities, reuse goals |\n| **Environmental** | Capacity, existing workspaces, data locations |\n| **Historical** | Previous runs, existing naming conventions, known issues |\n\n**5. Always offer a way forward.**\nEvery question must include an option equivalent to \"I'm not sure / I'll find out.\"\nFor questions requiring specific values (names, IDs), offer a command the operator\ncan run to retrieve them. Never leave the operator blocked.\n\n**6. 
Distinguish path decisions from parameter values.**\n- **Path decisions** determine the shape of the plan — always collect these.\n- **Parameter values** (exact names, IDs) are needed before execution — collect now\n if the operator has them, otherwise flag as *required before running*.\n\n**7. Prevent over-questioning.**\nCover only the domains the requirements actually need. For simple processes (e.g.\na single notebook), 4–6 questions is sufficient. For a full pipeline, up to 10 is\nreasonable. Stop when all path decisions are resolved — do not ask about things that\nwon't change the plan.\n\n**8. Protect privacy.**\nDo not ask for credentials, secrets, tokens, or Object IDs at this stage. If the\nplan needs a Service Principal, record that one is required and note the permissions\nneeded — the operator enters values at runtime.\n\n---\n\n## Question Sequence\n\n### Phase 1 — Contextual and Historical (always run first)\n\nAsk about background before asking about specifics. This sets the right level of\nexplanation for subsequent questions and surfaces constraints the operator may not\nthink to mention.\n\n**Contextual background question** — ask something like:\n*\"To make sure I pitch the questions at the right level — is this your first time\nsetting up a Fabric environment for this project, or are you extending something\nthat already exists?\"*\n\nOptions should cover: brand new setup / extending an existing one / rebuilding or\nmigrating from somewhere else / unsure.\n\n**Historical question** (ask if the answer above suggests existing work) — ask\nsomething like:\n*\"Are there existing naming conventions, workspace patterns, or previous deployments\nI should follow or be aware of?\"*\n\nOptions: yes (they'll describe) / no / unsure.\n\nThese answers shape how specific later questions need to be and whether defaults\ncan be inferred from what already exists.\n\n---\n\n### Phase 2 — Relevant Domains\n\nCover only the domains relevant to the process 
requirements. Typical mapping:\n\n| Process involves | Domains to cover |\n|---|---|\n| Creating workspaces | A, B, C, D, F |\n| Creating lakehouses | A, D, F |\n| Ingesting files (CSV/PDF) | D, E |\n| Running notebooks/scripts | D, F |\n| Full pipeline | All domains |\n\nWork through domains in order A → F, skipping irrelevant ones. Within each domain,\nask one question and branch before moving to the next domain.\n\n---\n\n#### Domain A — Workspace access (Constraint-based + Environmental)\n\n**What to establish:** Can the operator create new workspaces, or must they use\nexisting ones? What are the names?\n\n**Technical context:**\n- Workspace names are case-sensitive in `fab` paths.\n- If unsure about create rights: `pip install ms-fabric-cli` → `fab auth login`\n → `fab ls`. If workspace names are returned, they have access.\n- Read requirements to determine how many workspaces are needed before asking.\n\n**Question format:** Can you create new Fabric workspaces?\n- A) Yes — I can create workspaces\n- B) No — I need to use existing workspaces\n- C) I'm not sure — I can run `fab ls` to check\n\n**Branch:**\n- A → ask for intended names (or placeholder if not decided yet)\n- B → ask for exact names of existing workspaces (verbatim — case-sensitive)\n- C → provide the `fab ls` command; wait for output; branch as A or B\n\n---\n\n#### Domain B — Domain assignment (Constraint-based)\n\n**What to establish:** Should workspaces be assigned to a Fabric domain? If yes,\ndoes the operator have the rights needed?\n\n**Technical context:**\n- Domain assignment is optional and can be done later via the portal.\n- Assigning to an *existing* domain requires no special rights.\n- *Creating* a new domain requires Fabric Administrator rights (tenant-level —\n not the same as Workspace Admin). 
Default to \"skip\" or \"assign existing\" if\n there is any doubt.\n\n**Question format:** Would you like to assign these workspaces to a Fabric domain?\n- A) Yes — assign to an existing domain\n- B) Yes — create a new domain for these workspaces\n- C) No — skip domain assignment for now\n\n**Branch:**\n- A → ask for the domain name\n- B → ask if they have Fabric Administrator rights (Yes / No / Unsure);\n if No or Unsure → mark as manual gate, note intended domain name for documentation\n- C → no domain parameters needed\n\n---\n\n#### Domain C — Access control (Environmental + Constraint-based)\n\n**What to establish:** Who else needs access? How will group identifiers be obtained?\n\n**Technical context:**\n- The workspace creator is automatically assigned as Admin — no action needed.\n- Individual users are identified by email address (UPN) — straightforward.\n- **Entra security groups require Object IDs (GUIDs) — the Fabric REST API does not\n accept display names.** This is a hard API constraint, not a preference.\n- Object IDs can be found: Azure portal (AAD → Groups → select → Object ID field),\n Azure CLI (`az ad group show --group \"Name\" --query id -o tsv`), or PowerShell\n (`Get-MgGroup -Filter \"displayName eq 'Name'\" | Select-Object Id`).\n- **If deployment is a PySpark notebook AND groups are involved:** `notebookutils`\n cannot query Microsoft Graph. 
Either provide Object IDs directly, resolve via\n Azure CLI/PowerShell before running, or switch deployment approach for this step.\n- Do not ask for Object ID values during discovery — flag that they will be needed\n and establish how they will be obtained.\n\n**Question format:** Beyond yourself as Admin, does anyone else need access?\n- A) No — just me for now\n- B) Yes — specific users (by email)\n- C) Yes — Entra security groups\n- D) Yes — a mix of users and groups\n\n**Branch:**\n- A → skip role collection\n- B → ask for email addresses and intended roles (Admin/Member/Contributor/Viewer)\n- C or D → ask: \"Can you see the security groups in the Azure portal\n (Azure Active Directory → Groups)?\"\n - Yes → ask: will you provide Object IDs directly, or should the agent generate\n Azure CLI lookup commands to retrieve them automatically?\n - Provide directly → flag IDs as required before run; ask for group names and roles\n - CLI lookup → note that lookup commands will be generated; ask for group names and roles\n - No → mark group role assignment as manual gate; provide portal instructions\n\n---\n\n#### Domain D — Deployment approach (Preference-oriented)\n\n**What to establish:** How does the operator prefer to run generated scripts/notebooks?\n\n**Technical context:**\n- **All three approaches use the Fabric CLI (`fab`) internally.** This is not a\n question about whether to use the CLI — it is about how the operator runs the\n generated artefacts.\n- PySpark notebook: runs inside the Fabric UI cell-by-cell. Authentication is\n automatic. Best for operators who prefer working inside Fabric.\n- PowerShell script: reviewed and run locally. Requires `fab` CLI installed\n (`pip install ms-fabric-cli`) and PowerShell.\n- Terminal commands: `fab` commands run one at a time interactively. Requires `fab`\n CLI installed locally. 
Best for operators who want step-by-step control.\n- If notebook is chosen AND Entra groups are involved, flag the Service Principal\n constraint from Domain C.\n\n**Question format:** How would you like to run the generated artefacts?\n- A) PySpark notebook — import into Fabric and run cell-by-cell in the Fabric UI\n- B) PowerShell script — review and run locally\n- C) Individual CLI commands — run interactively in the terminal, one step at a time\n\n---\n\n#### Domain E — Source data (Environmental)\n\n*Only ask if the process involves ingesting files.*\n\n**What to establish:** Where are the source files?\n\n**Technical context:**\n- Local files require an upload step before they can be used in Fabric.\n- Files already in OneLake can be referenced by path directly.\n- SharePoint/Azure Blob files can be connected via Fabric shortcuts — no copying needed.\n\n**Question format:** Where are the source files you want to ingest?\n- A) On my local machine\n- B) Already in OneLake / Fabric\n- C) In cloud storage (SharePoint, Azure Blob, etc.)\n\n**Branch:**\n- A → include upload step in plan\n- B → ask for OneLake path; skip upload\n- C → ask for source URL/path; include shortcut creation step\n\n---\n\n#### Domain F — Capacity (Environmental + Constraint-based)\n\n*Ask whenever workspaces are being created.*\n\n**What to establish:** What Fabric capacity will workspaces be assigned to?\n\n**Technical context:**\n- Every workspace must be assigned to an active capacity at creation.\n- Capacity must be in Active state — if paused, the operator resumes it in the\n Azure portal before running.\n- `fab ls` output includes capacity information. 
Also visible in the Fabric Admin portal.\n\n**Question format:** Do you know the name of the Fabric capacity to use?\n- A) Yes — I know it (provide the name)\n- B) I can find it — I'll run `fab ls` or check the Fabric Admin portal\n- C) I'll provide it later — use a placeholder for now\n\n**Branch:**\n- A → embed capacity name in plan\n- B → provide `fab ls` command; wait for name; embed in plan\n- C → use `[CAPACITY_NAME]` placeholder; flag as required before running\n\n---\n\n### Phase 3 — Preference check (Preference-oriented)\n\nAfter the main domains, ask one closing preference question if the requirements\ninvolve choices between rigour and speed:\n\n*\"For any optional steps (e.g. domain assignment, access control), would you prefer\nto include everything now for a complete setup, or keep it minimal and add\ngovernance steps later?\"*\n\n- A) Include everything — set it up completely now\n- B) Keep it minimal — flag optional steps as manual for later\n- C) Decide step by step — I'll confirm each optional item\n\nThis shapes how the plan presents optional components.\n\n---\n\n## Confirmation\n\nBefore writing the environment profile, present a concise summary table of all path\ndecisions and collected parameters. Ask the operator to confirm accuracy. 
If anything\nis missing or unclear, ask only the targeted follow-up needed.\n\n```\n| # | Dimension | Question | Your answer | What this means |\n|---|-----------------|---------------------- |--------------------------------------|------------------------------------------------------|\n| 0 | Contextual | Project context | New setup | No existing conventions to inherit |\n| A | Constraint | Workspace creation | Creating new | Agent will create hub + spoke workspaces |\n| B | Constraint | Domain assignment | New domain (manual gate) | Domain creation flagged manual — admin rights needed |\n| C | Environmental | Access control | Groups — IDs to be provided directly | Role assignment scripted; IDs needed before run |\n| D | Preference | Deployment approach | PySpark notebook | Agent generates .ipynb for import into Fabric |\n| F | Environmental | Capacity | ldifabricdev | Embedded in notebook |\n| | Preference | Setup completeness | Include everything | All optional steps included in plan |\n```\n\n---\n\n## Output\n\nSave the confirmed profile as `00-environment-discovery/environment-profile.md`.\n\nInclude:\n- All path decisions (with FATA dimension label)\n- All collected parameter values\n- Parameters flagged as required before execution, with instructions for obtaining them\n- Manual gates — steps the operator must perform themselves, and why\n- Deployment prerequisites (e.g. `pip install ms-fabric-cli` if PowerShell/terminal)\n- Any historical/contextual notes that should inform naming or structure decisions\n\nAppend to `CHANGE_LOG.md`:\n`[{DATETIME}] Sub-Agent 0 complete — environment-profile.md produced. [N] path decisions recorded. Manual gates: [list or none]. 
Parameters still needed: [list or none].`\n\n---\n\n## Gotchas\n\n- **Never frame deployment as CLI vs no-CLI.** All three approaches use `fab`.\n- **Workspace names are case-sensitive in `fab` paths.** Always confirm exact casing.\n- **Entra group Object IDs are GUIDs, not display names.** Do not ask for them during\n discovery — flag that they are needed and establish how they will be obtained.\n- **`notebookutils` does not support Microsoft Graph.** A Fabric notebook cannot\n resolve group names to Object IDs at runtime.\n- **Domain creation requires Fabric Administrator rights — tenant-level.** Default to\n assigning an existing domain or skipping if there is any doubt.\n- **Never ask for credentials, secrets, or token values.** Discovery is about shape\n and approach — not credentials. Flag that a Service Principal is needed; the\n operator provides the values at runtime.\n- **Never leave the user blocked.** If a step requires permissions they don't have,\n offer: (a) skip and mark as manual, (b) produce a spec for their admin, or\n (c) substitute a UI-based workaround.\n- **Stop when path decisions are resolved.** Do not continue asking questions once\n everything that affects the plan structure is known.\n",
       },
     ],
   },
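For orientation, the shape of the `EMBEDDED_SKILLS` array visible in this diff can be sketched as below. This is a minimal reconstruction from the fields shown in the hunks (`name`, `files`, `relativePath`, `content`); the interface names and the `getSkillFile` helper are hypothetical illustrations, not part of the published package.

```typescript
// Minimal sketch of the embedded-skills structure, inferred from the diff.
// Field names come from the diff; the interface and helper names are made up.
interface EmbeddedSkillFile {
  relativePath: string; // e.g. "SKILL.md"
  content: string;      // full file contents embedded as a string
}

interface EmbeddedSkill {
  name: string; // e.g. "create-fabric-lakehouses"
  files: EmbeddedSkillFile[];
}

// A stub entry in the same shape as the generated file:
const EMBEDDED_SKILLS: EmbeddedSkill[] = [
  {
    name: "create-fabric-lakehouses",
    files: [{ relativePath: "SKILL.md", content: "---\nname: example\n---\n" }],
  },
];

// Look up an embedded file's content by skill name and relative path:
function getSkillFile(name: string, path: string): string | undefined {
  return EMBEDDED_SKILLS.find((s) => s.name === name)
    ?.files.find((f) => f.relativePath === path)?.content;
}
```

The version bump itself only regenerates this embedded payload: the first hunk is the refreshed generation timestamp, and the second swaps in a revised `SKILL.md` string for the `fabric-process-discovery` skill.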