speccrew 0.5.15 → 0.5.16

@@ -8,6 +8,30 @@ tools: Read, Write, Task, Bash
 
  Orchestrate **bizs knowledge base generation** with a 5-stage pipeline: Feature Inventory → Feature Analysis + Graph Write → Module Summarize → UI Style Pattern Extract → System Summary.
 
+ ## Quick Reference — Execution Flow
+
+ ```
+ Stage 0: Platform Detection
+ └─ Read techs-manifest → Identify platforms
+
+ Stage 1: Feature Inventory Init
+ └─ 1a: bizs-init-features per platform
+ └─ 1b: Merge features
+ └─ 1c: Validate inventory
+
+ Stage 2: Feature Analysis (PARALLEL)
+ └─ Dispatch api-analyze + ui-analyze workers per platform
+ └─ After each analyze worker completes → dispatch corresponding graph worker
+ └─ Monitor completion markers
+
+ Stage 3: Module Summarize (PARALLEL)
+ └─ 3.0: module-summarize per module
+ └─ 3.5: UI style extraction
+
+ Stage 4: System Summary
+ └─ system-summarize → system-overview.md
+ ```
+
  ## Language Adaptation
 
  **CRITICAL**: All generated documents must match the user's language. Detect the language from the user's input and pass it to all downstream Worker Agents.
@@ -144,6 +168,8 @@ flowchart TB
 
  > **CRITICAL**: NEVER hardcode a fixed number of platforms. Always scan the project directory to discover ALL modules. Missing a platform means incomplete knowledge base generation.
 
+ > ✅ **Stage 0 Milestone**: Platform detection complete. Platforms: {platform_list}. → Proceed to Stage 1.
+
  ---
 
  ## Stage 1a: Entry Directory Recognition (LLM-Driven)
@@ -158,14 +184,14 @@ flowchart TB
 
  ### Step 1: Read Directory Tree
 
- Use `ListDir` or `Bash(tree)` to read the platform's `sourcePath` directory structure (3 levels deep):
+ Use `ListDir` or `Bash(tree)` to read the platform's `{source_path}` directory structure (3 levels deep):
 
  ```bash
  # Windows (PowerShell)
- tree /F /A "{sourcePath}" | Select-Object -First 100
+ tree /F /A "{source_path}" | Select-Object -First 100
 
  # Unix/Linux/Mac
- tree -L 3 "{sourcePath}"
+ tree -L 3 "{source_path}"
  ```
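
If `tree` is not installed on the host, a rough fallback can approximate the 3-level listing. This is a sketch, not part of the pipeline: it assumes a `find` that supports `-maxdepth` (a common GNU/BSD extension), and `SRC` stands in for `{source_path}`:

```shell
# Fallback when `tree` is unavailable: list entries up to 3 levels deep,
# skipping common noise directories. SRC stands in for {source_path}.
SRC="${SRC:-.}"
find "$SRC" -maxdepth 3 -not -path '*/node_modules/*' -not -path '*/.git/*' \
  | sort | head -n 100
```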
 
  ### Step 2: LLM Analysis - Identify Entry Directories
@@ -196,11 +222,11 @@ Based on the directory tree and technology stack, analyze and identify entry dir
  - Configuration directories: `.git`, `.idea`, `.vscode`, `.speccrew`
 
  **Root Module Handling**:
- - If an entry file is not under any subdirectory (directly under `sourcePath`), assign it to the `_root` module
+ - If an entry file is not under any subdirectory (directly under `{source_path}`), assign it to the `_root` module
 
  ### Step 3: Generate entry-dirs JSON
 
- Output file: `{speccrew-workspace}/knowledges/base/sync-state/knowledge-bizs/entry-dirs-{platformId}.json`
+ Output file: `{speccrew_workspace}/knowledges/base/sync-state/knowledge-bizs/entry-dirs-{platform_id}.json`
 
  **JSON Format**:
  ```json
@@ -222,17 +248,17 @@ Output file: `{speccrew-workspace}/knowledges/base/sync-state/knowledge-bizs/ent
 
  **Field Definitions**:
  - `platformId`: Platform identifier (e.g., `backend-ai`, `web-vue`, `mobile-uniapp`)
- - `platformName`: (Optional) Human-readable platform name. Auto-generated as `{platformType}-{platformSubtype}` if missing
- - `platformType`: (Optional) Platform type: `backend`, `web`, `mobile`, `desktop`. Inferred from platformId if missing
- - `platformSubtype`: (Optional) Platform subtype (e.g., `ai`, `vue`, `uniapp`). Inferred from platformId if missing
+ - `platformName`: (Optional) Human-readable platform name. Auto-generated as `{platform_type}-{platform_subtype}` if missing
+ - `platformType`: (Optional) Platform type: `backend`, `web`, `mobile`, `desktop`. Inferred from platform_id if missing
+ - `platformSubtype`: (Optional) Platform subtype (e.g., `ai`, `vue`, `uniapp`). Inferred from platform_id if missing
  - `sourcePath`: Absolute path to the platform source root
- - `techStack`: (Optional) Array of tech stack names (e.g., `["spring-boot", "mybatis-plus"]`). Default inferred from platformType
+ - `techStack`: (Optional) Array of tech stack names (e.g., `["spring-boot", "mybatis-plus"]`). Default inferred from platform_type
  - `modules`: Array of business modules
  - `name`: Module name (business-meaningful, e.g., `chat`, `system`, `order`)
- - `entryDirs`: Array of entry directory paths (relative to `sourcePath`)
+ - `entryDirs`: Array of entry directory paths (relative to `{source_path}`)
 
  **Path Rules**:
- - All `entryDirs` paths must be relative to `sourcePath`
+ - All `entryDirs` paths must be relative to `{source_path}`
  - Use forward slashes `/` as path separators (even on Windows)
  - Do not include leading or trailing slashes
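
A minimal sketch of enforcing these path rules, assuming POSIX `tr`/`sed` (`normalize_entry_dir` is a hypothetical helper, not part of the pipeline scripts):

```shell
# Normalize an entryDirs path: backslashes -> forward slashes,
# then strip any leading and trailing slashes.
normalize_entry_dir() {
  printf '%s' "$1" | tr '\\' '/' | sed -e 's|^/*||' -e 's|/*$||'
}

normalize_entry_dir '/src\main\java\controller/'   # → src/main/java/controller
```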
 
@@ -255,7 +281,7 @@ After generating the entry-dirs JSON:
  > **IMPORTANT**: This stage is executed **directly by the dispatch agent (Leader)**, NOT delegated to a Worker Agent.
  > Worker Agents do not have `run_in_terminal` capability, which is required for script execution.
 
- **Prerequisite**: Stage 1a completed. `entry-dirs-{platformId}.json` files exist in `{sync_state_path}/knowledge-bizs/`.
+ **Prerequisite**: Stage 1a completed. `entry-dirs-{platform_id}.json` files exist in `{sync_state_path}/knowledge-bizs/`.
 
  **Action** (dispatch executes directly via `run_in_terminal`):
 
@@ -270,7 +296,7 @@ After generating the entry-dirs JSON:
  ```
 
  **Script Parameters**:
- - `--entryDirsFile`: Path to the `entry-dirs-{platformId}.json` file generated in Stage 1a (required)
+ - `--entryDirsFile`: Path to the `entry-dirs-{platform_id}.json` file generated in Stage 1a (required)
 
  **Note**: `platformId` and `sourcePath` are read from the entry-dirs JSON file. Platform mapping and output directory are automatically derived by the script.
 
@@ -280,7 +306,7 @@ After generating the entry-dirs JSON:
  - `--excludeDirs`: Additional directories to exclude
 
  **Output**:
- - `speccrew-workspace/knowledges/base/sync-state/knowledge-bizs/features-{platformId}.json` — Per-platform feature inventory files
+ - `{speccrew_workspace}/knowledges/base/sync-state/knowledge-bizs/features-{platform_id}.json` — Per-platform feature inventory files
  - Each file contains: platform metadata, modules list, and flat features array with `analyzed` status
 
  **Features JSON Structure**:
@@ -358,6 +384,17 @@ After generating the entry-dirs JSON:
 
  **Error handling**: If the merge script exits with a non-zero code, STOP and report the error. Do NOT proceed to Stage 2 until the merge failure is resolved.
 
+ > ✅ **Stage 1 Milestone**: Feature inventory initialized. {feature_count} features across {platform_count} platforms. → Proceed to Stage 2.
+
+ ---
+
+ > **⚠️ MANDATORY RULES FOR PARALLEL EXECUTION (Stage 2-3)**:
+ > 1. ALL workers for the same stage MUST be dispatched in PARALLEL — sequential execution is FORBIDDEN
+ > 2. Each worker runs independently — do NOT wait for one worker before dispatching the next
+ > 3. Monitor completion via marker files, NOT by polling worker status
+ > 4. Failed workers can be retried independently without affecting successful ones
+ > 5. Do NOT proceed to the next Stage until ALL workers in the current Stage have completed or failed
+
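
Rule 3 could be realized as a simple marker-count poll. A sketch under the assumption that `completed_dir` holds one `.done.json` per finished worker (`wait_for_markers` is illustrative, not an existing pipeline script):

```shell
# Poll completed_dir until the number of .done.json markers reaches
# the expected worker count, or give up after a timeout (seconds).
wait_for_markers() {
  dir="$1"; expected="$2"; timeout="${3:-600}"; waited=0
  while [ "$(ls "$dir"/*.done.json 2>/dev/null | wc -l)" -lt "$expected" ]; do
    [ "$waited" -ge "$timeout" ] && return 1   # timed out
    sleep 5
    waited=$((waited + 5))
  done
  return 0
}
```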
  ---
 
  ## Stage 2: Feature Analysis (Batch Processing)
@@ -420,9 +457,68 @@ For each feature in the `batch` array, prepare a Worker Task:
  **Execution sequence**:
  1. Prepare ALL Worker Tasks first (do NOT launch yet)
  2. Launch ALL Workers at the SAME TIME in a single batch dispatch
- 3. Wait for ALL Workers to complete before proceeding to Step 3
+ 3. Wait for ALL Workers to complete before proceeding to Step 2.5
  4. Each Worker writes `.done.json` and `.graph.json` marker files to `completed_dir` upon completion
 
+ **Step 2.5: Launch Graph Workers — PARALLEL per Completed Analyze Worker**
+
+ After each analyze worker completes (writes its `.done.json` marker), immediately dispatch the corresponding graph worker:
+
+ | Analyze Worker | Graph Worker | Input |
+ |----------------|--------------|-------|
+ | `speccrew-knowledge-bizs-api-analyze` | `speccrew-knowledge-bizs-api-graph` | `documentPath` from analyze output |
+ | `speccrew-knowledge-bizs-ui-analyze` | `speccrew-knowledge-bizs-ui-graph` | `documentPath` from analyze output |
+
+ **Graph Worker Task Prompt Format**:
+
+ **For API Graph Worker**:
+ ```json
+ {
+   "skill_name": "speccrew-knowledge-bizs-api-graph",
+   "instructions": "Generate graph data nodes and edges from the analyzed API feature document.\\n\\nRequirements:\\n- Read the API analysis document at api_analysis_path\\n- Extract entities (APIs, services, tables, DTOs)\\n- Generate graph nodes and edges\\n- Write graph JSON to output_dir\\n- Create .graph-done.json completion marker at output_dir",
+   "context": {
+     "api_analysis_path": "<feature.documentPath>",
+     "platform_id": "<feature.platform_id>",
+     "output_dir": "<completed_dir_absolute_path>",
+     "module": "<feature.module>",
+     "fileName": "<feature.fileName>",
+     "sourcePath": "<feature.sourcePath>",
+     "sourceFile": "<feature.sourceFile>",
+     "language": "<user language>",
+     "subpath": "<computed_subpath_from_sourcePath>"
+   }
+ }
+ ```
+
+ **For UI Graph Worker**:
+ ```json
+ {
+   "skill_name": "speccrew-knowledge-bizs-ui-graph",
+   "instructions": "Generate graph data nodes and edges from the analyzed UI feature document.\\n\\nRequirements:\\n- Read the UI analysis document at documentPath\\n- Extract entities (pages, components, API calls, navigations)\\n- Generate graph nodes and edges\\n- Write graph JSON to completed_dir\\n- Create .graph-done.json completion marker at completed_dir",
+   "context": {
+     "feature": "<complete_feature_object>",
+     "fileName": "<feature.fileName>",
+     "sourcePath": "<feature.sourcePath>",
+     "documentPath": "<feature.documentPath>",
+     "module": "<feature.module>",
+     "platform_type": "<feature.platform_type>",
+     "platform_subtype": "<feature.platform_subtype>",
+     "completed_dir": "<completed_dir_absolute_path>",
+     "sourceFile": "<feature.sourceFile>",
+     "status": "<analysis_status>",
+     "analysisNotes": "<analysis_notes>",
+     "language": "<user language>"
+   }
+ }
+ ```
+
+ **Execution sequence**:
+ 1. Scan `completed_dir` for new `.done.json` files from Step 2
+ 2. For each completed analyze worker, prepare the corresponding graph worker task
+ 3. Launch ALL graph workers for the current batch in PARALLEL
+ 4. Wait for ALL graph workers to complete
+ 5. Each graph worker writes a `.graph-done.json` marker to `completed_dir`
+
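
The scan in steps 1–2 can be sketched as follows. This is an illustration only — `pending_graph_workers` is a hypothetical helper that relies on the marker suffixes defined in this document:

```shell
# List analyze markers (.done.json) that have no matching
# .graph-done.json yet — these still need a graph worker dispatched.
pending_graph_workers() {
  dir="$1"
  for done_marker in "$dir"/*.done.json; do
    [ -e "$done_marker" ] || continue          # no markers at all
    base="${done_marker%.done.json}"
    [ -e "$base.graph-done.json" ] || basename "$base"
  done
}
```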
  Example: If a batch has 5 features → create and launch 5 Worker Tasks simultaneously, NOT one by one.
 
  **Worker Task Prompt Format**:
@@ -471,30 +567,30 @@ Example: If batch has 5 features → create and launch 5 Worker Tasks simultaneo
 
  **✅ CORRECT Format - MUST USE:**
  ```
- {completed_dir}/{module}-{subpath}-{fileName}.done.json ← Completion status marker (JSON format)
- {completed_dir}/{module}-{subpath}-{fileName}.graph.json ← Graph data marker (JSON format)
+ {completed_dir}/{module}-{subpath}-{file_name}.done.json ← Completion status marker (JSON format)
+ {completed_dir}/{module}-{subpath}-{file_name}.graph.json ← Graph data marker (JSON format)
  ```
 
  **Naming Rule Explanation:**
 
- The marker filename MUST follow the composite naming pattern `{module}-{subpath}-{fileName}` to prevent conflicts between same-named source files.
+ The marker filename MUST follow the composite naming pattern `{module}-{subpath}-{file_name}` to prevent conflicts between same-named source files.
 
  **How Workers Generate the Filename:**
 
  1. **module**: Use the `{{module}}` input variable directly
 
- 2. **subpath**: Extract from `{{sourcePath}}`:
+ 2. **subpath**: Extract from `{{source_path}}`:
  - For UI (Vue/React): Middle path between `views/` or `pages/` and the file name
  - For API (Java): Middle path between the controller root and the file name
  - Replace path separators (`/`) with hyphens (`-`)
  - Omit if the file is at the module root (empty subpath)
 
- 3. **fileName**: Use `{{fileName}}` input variable (file name WITHOUT extension)
+ 3. **file_name**: Use the `{{file_name}}` input variable (file name WITHOUT extension)
 
  **Examples:**
 
- | Source File | module | subpath | fileName | Marker Filename |
- |-------------|--------|---------|----------|-----------------|
+ | Source File | module | subpath | file_name | Marker Filename |
+ |-------------|--------|---------|-----------|-----------------|
  | `yudao-ui/.../views/system/notify/message/index.vue` | `system` | `notify-message` | `index` | `system-notify-message-index.done.json` |
  | `yudao-ui/.../views/system/user/index.vue` | `system` | `user` | `index` | `system-user-index.done.json` |
  | `yudao-module-system/.../controller/admin/user/UserController.java` | `system` | `controller-admin-user` | `UserController` | `system-controller-admin-user-UserController.done.json` |
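
For the UI case, the composite naming rule above can be sketched as a small helper. This is a sketch assuming the `views/` convention from the table (`marker_name` is hypothetical, not a pipeline script):

```shell
# Compose the marker base name {module}-{subpath}-{file_name}
# from a module name and a source file path under views/.
marker_name() {
  module="$1"; src="$2"
  rel="${src#*views/}"                       # path after "views/"
  rel="${rel#"$module"/}"                    # drop the module directory itself
  file="${rel##*/}"; file_name="${file%.*}"  # file name without extension
  subpath=$(dirname "$rel" | tr '/' '-')     # middle path, "/" -> "-"
  if [ "$subpath" = "." ]; then              # file at module root: omit subpath
    printf '%s-%s\n' "$module" "$file_name"
  else
    printf '%s-%s-%s\n' "$module" "$subpath" "$file_name"
  fi
}

marker_name system 'yudao-ui/src/views/system/notify/message/index.vue'
# → system-notify-message-index  (marker: system-notify-message-index.done.json)
```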
@@ -505,11 +601,11 @@ The marker filename MUST follow the composite naming pattern `{module}-{subpath}
 
  **❌ WRONG Format - NEVER USE:**
  ```
- {fileName}.done.json ← WRONG: missing module and subpath (causes conflicts)
- {fileName}.graph.json ← WRONG: missing module and subpath (causes conflicts)
- {fileName}.completed.json ← WRONG extension
- {fileName}.done ← WRONG extension (missing .json)
- {fileName}_done.json ← WRONG separator and extension
+ {file_name}.done.json ← WRONG: missing module and subpath (causes conflicts)
+ {file_name}.graph.json ← WRONG: missing module and subpath (causes conflicts)
+ {file_name}.completed.json ← WRONG extension
+ {file_name}.done ← WRONG extension (missing .json)
+ {file_name}_done.json ← WRONG separator and extension
  ```
 
  **❌ WRONG Filename Examples - NEVER USE:**
@@ -597,11 +693,12 @@ The marker filename MUST follow the composite naming pattern `{module}-{subpath}
 
  2. **Execute process-results**:
  ```
- node "{path_to_batch_orchestrator_js}" process-results --syncStatePath "{sync_state_path}" --graphRoot "{graph_root}" --graphWriteScript "{graph_write_script_path}" --platformId "{platformId}"
+ node "{path_to_batch_orchestrator_js}" process-results --syncStatePath "{sync_state_path}" --graphRoot "{graph_root}" --platformId "{platformId}"
  ```
 
  This script:
- - Scans `.done` files → updates feature status to `completed` in features-*.json
+ - Scans `.done.json` files → updates feature status to `completed` in features-*.json
+ - Scans `.graph-done.json` files → confirms graph data generation completed
  - Scans `.graph.json` files → writes graph data (nodes + edges) grouped by module
  - Cleans up all marker files
 
@@ -617,10 +714,13 @@ Dispatch uses a fully stateless, file-driven design. If during execution…
 
  #### Stage 2 Output
 
- - Generated by Workers: Feature documentation at `feature.documentPath` (one .md per feature); marker files (`.done` + `.graph.json`) in `completed_dir`
- - Updated by `process-results`: Each `features-{platform}.json` updated with analysis timestamps and status; graph data written to `speccrew-workspace/knowledges/bizs/graph/`
+ - Generated by Analyze Workers: Feature documentation at `feature.documentPath` (one .md per feature); marker files (`.done.json`) in `completed_dir`
+ - Generated by Graph Workers: Graph data files (`.graph.json`) in `completed_dir`; consolidated graph data in `speccrew-workspace/knowledges/bizs/graph/`
+ - Updated by `process-results`: Each `features-{platform}.json` updated with analysis timestamps and status
  - Marker files cleaned up after each batch
 
+ **Stage 2 Completion Condition**: ALL analyze workers AND ALL graph workers completed (both `.done.json` and `.graph-done.json` markers present)
+
  **Feature Status Flow**: `pending` → `in_progress` → `completed` / `failed`
 
  ### Large-Scale Scenario Guidance
@@ -632,6 +732,8 @@ When dealing with modules containing more than **20 features**, consider the fol
  - **Resume Support**: The `get-next-batch` script naturally supports resume across sessions — it skips features that already have `.done.json` files. To resume after a session break, simply restart the Stage 2 loop.
  - **Validation After Completion**: After all features are marked `analyzed=true`, run `process-batch-results` with `--validateDocs --syncStatePath "{sync_state_path}"` to verify document completeness.
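
Before restarting the loop, the remaining work can be estimated with a rough count of unanalyzed features. A grep-based sketch (assumes the pretty-printed `"analyzed": false` fields in the features JSON; a JSON-aware tool such as `jq` would be more robust, and `count_pending_features` is hypothetical):

```shell
# Approximate count of features still pending analysis in a
# features-{platform_id}.json inventory file.
count_pending_features() {
  grep -c '"analyzed": false' "$1"
}
```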
 
+ > ✅ **Stage 2 Milestone**: Feature analysis complete. {analyzed_count} features analyzed, {failed_count} failed. {graph_count} graph data files generated. → Proceed to Stage 3.
+
  ---
 
  ## Stage 3: Module Summarize (Parallel)
@@ -716,6 +818,8 @@ speccrew-workspace/knowledges/techs/{platform_id}/ui-style-patterns/
  └── {pattern-name}.md
  ```
 
+ > ✅ **Stage 3 & 3.5 Milestone**: Module summaries and UI style patterns complete. {module_count} modules summarized, {pattern_count} patterns extracted. → Proceed to Stage 4.
+
  ---
 
  ## Stage 4: System Summarize (Single Task)
@@ -738,6 +842,8 @@ Expected Worker Return: `{ "status": "success|failed", "output_file": "system-ov
  **Output**:
  - `speccrew-workspace/knowledges/bizs/system-overview.md` (complete with platform index and module hierarchy)
 
+ > ✅ **Stage 4 Milestone**: System overview generated. All stages complete. Pipeline finished successfully.
+
  ---
 
  ## Error Handling
@@ -756,6 +862,34 @@ Expected Worker Return: `{ "status": "success|failed", "output_file": "system-ov
 
  ---
 
+ ## Task Completion Report
+
+ Upon completing all stages, output the following structured report:
+
+ ```json
+ {
+   "status": "success | partial | failed",
+   "skill": "speccrew-knowledge-bizs-dispatch",
+   "stages_completed": ["stage_0", "stage_1", "stage_2", "stage_3", "stage_4"],
+   "stages_failed": [],
+   "output_summary": {
+     "platforms_processed": ["frontend", "backend"],
+     "features_analyzed": 32,
+     "modules_summarized": 8,
+     "system_overview_generated": true
+   },
+   "output_files": [
+     "knowledges/bizs/{platform}/features/",
+     "knowledges/bizs/{platform}/modules/",
+     "knowledges/bizs/system-overview.md"
+   ],
+   "errors": [],
+   "next_steps": ["Initialize techs knowledge base"]
+ }
+ ```
+
+ ---
+
  ## Return
 
  After all 5 stages complete, return a summary object to the caller: