sdtk-kit 0.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +131 -0
- package/assets/manifest/toolkit-bundle.manifest.json +303 -0
- package/assets/manifest/toolkit-bundle.sha256.txt +59 -0
- package/assets/toolkit/toolkit/AGENTS.md +103 -0
- package/assets/toolkit/toolkit/install.ps1 +155 -0
- package/assets/toolkit/toolkit/runtimes/claude/CLAUDE_TEMPLATE.md +32 -0
- package/assets/toolkit/toolkit/runtimes/codex/CODEX_TEMPLATE.md +32 -0
- package/assets/toolkit/toolkit/scripts/init-feature.ps1 +253 -0
- package/assets/toolkit/toolkit/scripts/install-codex-skills.ps1 +181 -0
- package/assets/toolkit/toolkit/scripts/uninstall-codex-skills.ps1 +116 -0
- package/assets/toolkit/toolkit/sdtk.config.json +28 -0
- package/assets/toolkit/toolkit/sdtk.config.profiles.example.json +50 -0
- package/assets/toolkit/toolkit/skills/sdtk-api-design-spec/SKILL.md +78 -0
- package/assets/toolkit/toolkit/skills/sdtk-api-design-spec/references/API_DESIGN_CREATION_RULES.md +212 -0
- package/assets/toolkit/toolkit/skills/sdtk-api-design-spec/references/FLOWCHART_CREATION_RULES.md +397 -0
- package/assets/toolkit/toolkit/skills/sdtk-api-design-spec/scripts/generate_api_design_detail.py +565 -0
- package/assets/toolkit/toolkit/skills/sdtk-api-doc/SKILL.md +36 -0
- package/assets/toolkit/toolkit/skills/sdtk-api-doc/references/FLOWCHART_CREATION_RULES.md +397 -0
- package/assets/toolkit/toolkit/skills/sdtk-arch/SKILL.md +43 -0
- package/assets/toolkit/toolkit/skills/sdtk-arch/references/API_DESIGN_CREATION_RULES.md +212 -0
- package/assets/toolkit/toolkit/skills/sdtk-arch/references/FLOWCHART_CREATION_RULES.md +397 -0
- package/assets/toolkit/toolkit/skills/sdtk-arch/references/FLOW_ACTION_SPEC_CREATION_RULES.md +136 -0
- package/assets/toolkit/toolkit/skills/sdtk-ba/SKILL.md +24 -0
- package/assets/toolkit/toolkit/skills/sdtk-design-layout/SKILL.md +21 -0
- package/assets/toolkit/toolkit/skills/sdtk-dev/SKILL.md +20 -0
- package/assets/toolkit/toolkit/skills/sdtk-dev-backend/SKILL.md +17 -0
- package/assets/toolkit/toolkit/skills/sdtk-dev-frontend/SKILL.md +15 -0
- package/assets/toolkit/toolkit/skills/sdtk-orchestrator/SKILL.md +44 -0
- package/assets/toolkit/toolkit/skills/sdtk-pm/SKILL.md +26 -0
- package/assets/toolkit/toolkit/skills/sdtk-qa/SKILL.md +22 -0
- package/assets/toolkit/toolkit/skills/sdtk-screen-design-spec/SKILL.md +59 -0
- package/assets/toolkit/toolkit/skills/sdtk-screen-design-spec/references/FLOW_ACTION_SPEC_CREATION_RULES.md +136 -0
- package/assets/toolkit/toolkit/skills/sdtk-screen-design-spec/references/excel-image-export.md +51 -0
- package/assets/toolkit/toolkit/skills/sdtk-screen-design-spec/references/figma-mcp.md +54 -0
- package/assets/toolkit/toolkit/skills/sdtk-screen-design-spec/references/numbering-rules.md +76 -0
- package/assets/toolkit/toolkit/skills/sdtk-screen-design-spec/scripts/renumber_flow_action_spec_global.py +136 -0
- package/assets/toolkit/toolkit/skills/sdtk-screen-design-spec/scripts/validate_flow_action_spec_numbering.py +249 -0
- package/assets/toolkit/toolkit/skills/sdtk-test-case-spec/SKILL.md +65 -0
- package/assets/toolkit/toolkit/skills/sdtk-test-case-spec/references/TEST_CASE_CREATION_RULES.md +129 -0
- package/assets/toolkit/toolkit/skills/sdtk-test-case-spec/scripts/validate_test_case_spec.py +97 -0
- package/assets/toolkit/toolkit/templates/QUALITY_CHECKLIST.md +124 -0
- package/assets/toolkit/toolkit/templates/README.md +56 -0
- package/assets/toolkit/toolkit/templates/SHARED_PLANNING.md +80 -0
- package/assets/toolkit/toolkit/templates/docs/api/API_DESIGN_CREATION_RULES.md +212 -0
- package/assets/toolkit/toolkit/templates/docs/api/API_DESIGN_DETAIL_TEMPLATE.md +62 -0
- package/assets/toolkit/toolkit/templates/docs/api/API_ENDPOINTS_TEMPLATE.md +229 -0
- package/assets/toolkit/toolkit/templates/docs/api/FEATURE_API_TEMPLATE.yaml +20 -0
- package/assets/toolkit/toolkit/templates/docs/api/FLOWCHART_CREATION_RULES.md +397 -0
- package/assets/toolkit/toolkit/templates/docs/api/feature_api_flow_list_TEMPLATE.txt +12 -0
- package/assets/toolkit/toolkit/templates/docs/architecture/ARCH_DESIGN_TEMPLATE.md +109 -0
- package/assets/toolkit/toolkit/templates/docs/database/DATABASE_SPEC_TEMPLATE.md +175 -0
- package/assets/toolkit/toolkit/templates/docs/design/DESIGN_LAYOUT_TEMPLATE.md +49 -0
- package/assets/toolkit/toolkit/templates/docs/dev/FEATURE_IMPL_PLAN_TEMPLATE.md +73 -0
- package/assets/toolkit/toolkit/templates/docs/product/BACKLOG_TEMPLATE.md +50 -0
- package/assets/toolkit/toolkit/templates/docs/product/PRD_TEMPLATE.md +66 -0
- package/assets/toolkit/toolkit/templates/docs/product/PROJECT_INITIATION_TEMPLATE.md +98 -0
- package/assets/toolkit/toolkit/templates/docs/qa/QA_RELEASE_REPORT_TEMPLATE.md +61 -0
- package/assets/toolkit/toolkit/templates/docs/qa/TEST_CASE_CREATION_RULES.md +129 -0
- package/assets/toolkit/toolkit/templates/docs/qa/TEST_CASE_TEMPLATE.md +104 -0
- package/assets/toolkit/toolkit/templates/docs/specs/BA_SPEC_TEMPLATE.md +139 -0
- package/assets/toolkit/toolkit/templates/docs/specs/FLOW_ACTION_SPEC_CREATION_RULES.md +136 -0
- package/assets/toolkit/toolkit/templates/docs/specs/FLOW_ACTION_SPEC_TEMPLATE.md +160 -0
- package/bin/sdtk.js +15 -0
- package/package.json +47 -0
- package/src/commands/auth.js +85 -0
- package/src/commands/generate.js +177 -0
- package/src/commands/help.js +69 -0
- package/src/commands/init.js +73 -0
- package/src/index.js +56 -0
- package/src/lib/args.js +116 -0
- package/src/lib/errors.js +41 -0
- package/src/lib/github-access.js +68 -0
- package/src/lib/powershell.js +85 -0
- package/src/lib/state.js +83 -0
- package/src/lib/toolkit-payload.js +99 -0
package/assets/toolkit/toolkit/skills/sdtk-api-design-spec/scripts/generate_api_design_detail.py
ADDED
@@ -0,0 +1,565 @@
+#!/usr/bin/env python3
+"""
+Generate [FEATURE_KEY]_API_DESIGN_DETAIL.md from OpenAPI YAML and flow list.
+
+This script follows API_DESIGN_CREATION_RULES.md and produces:
+- Markdown detail spec
+- Per-endpoint .puml files
+- Per-endpoint .svg flow images (if PlantUML render is available)
+"""
+
+from __future__ import annotations
+
+import argparse
+import copy
+import os
+import re
+import shutil
+import subprocess
+import sys
+from pathlib import Path
+from typing import Dict, List, Optional, Tuple
+
+import yaml
+
+
+HTTP_METHODS = {"get", "post", "put", "patch", "delete", "options", "head"}
+
+
+def parse_args() -> argparse.Namespace:
+    parser = argparse.ArgumentParser(description="Generate API design detail markdown from YAML + flow list.")
+    parser.add_argument("--feature-key", required=True, help="Feature key, e.g. SCHEDULE_WHITEBOARD")
+    parser.add_argument("--yaml", required=True, help="Path to OpenAPI YAML")
+    parser.add_argument("--flow-list", required=True, help="Path to API flow list txt containing PlantUML blocks")
+    parser.add_argument("--output", required=True, help="Output markdown path, e.g. docs/api/[FEATURE_KEY]_API_DESIGN_DETAIL.md")
+    parser.add_argument("--flows-dir", default="docs/api/flows", help="Directory for generated .puml files")
+    parser.add_argument("--images-dir", default="docs/api/images", help="Directory for rendered .svg files")
+    parser.add_argument(
+        "--include",
+        action="append",
+        default=[],
+        help='Optional filter. Repeatable. Format: "METHOD /path" or "/path"',
+    )
+    parser.add_argument("--plantuml-jar", default="", help="Optional explicit path to plantuml.jar")
+    parser.add_argument("--skip-render", action="store_true", help="Skip SVG rendering step")
+    return parser.parse_args()
+
+
+def load_yaml(path: Path) -> dict:
+    return yaml.safe_load(path.read_text(encoding="utf-8"))
+
+
+def normalize_feature_snake(feature_key: str) -> str:
+    return re.sub(r"[^a-z0-9]+", "_", feature_key.lower()).strip("_")
+
+
+def parse_include_filters(raw_filters: List[str]) -> List[Tuple[Optional[str], str]]:
+    parsed: List[Tuple[Optional[str], str]] = []
+    for raw in raw_filters:
+        text = raw.strip()
+        if not text:
+            continue
+        parts = text.split(maxsplit=1)
+        if len(parts) == 1:
+            if not parts[0].startswith("/"):
+                raise ValueError(f"Invalid --include format: {raw}")
+            parsed.append((None, parts[0]))
+            continue
+        method = parts[0].lower().strip()
+        path = parts[1].strip()
+        if method not in HTTP_METHODS:
+            raise ValueError(f"Invalid HTTP method in --include: {raw}")
+        if not path.startswith("/"):
+            raise ValueError(f"Path must start with '/' in --include: {raw}")
+        parsed.append((method, path))
+    return parsed
+
+
+def match_include(method: str, path: str, filters: List[Tuple[Optional[str], str]]) -> bool:
+    if not filters:
+        return True
+    for f_method, f_path in filters:
+        if f_path != path:
+            continue
+        if f_method is None or f_method == method:
+            return True
+    return False
+
+
+def collect_operations(spec: dict, include_filters: List[Tuple[Optional[str], str]]) -> List[Tuple[str, str, dict]]:
+    operations: List[Tuple[str, str, dict]] = []
+    for path, path_item in spec.get("paths", {}).items():
+        for method, op in path_item.items():
+            m = method.lower()
+            if m not in HTTP_METHODS:
+                continue
+            if not match_include(m, path, include_filters):
+                continue
+            operations.append((m, path, op))
+    return operations
+
+
+def collect_flow_map(flow_text: str) -> Dict[Tuple[str, str], str]:
+    flow_map: Dict[Tuple[str, str], str] = {}
+    blocks = re.findall(r"@startuml[\s\S]*?@enduml", flow_text)
+    for block in blocks:
+        m = re.search(r'partition\s+"([A-Z]+)\s+\*\*(.*?)\*\*', block)
+        if not m:
+            continue
+        method = m.group(1).lower().strip()
+        path = m.group(2).strip()
+        sanitized = re.sub(r";<<#[^>]+>>", ";", block.strip())
+        flow_map[(method, path)] = sanitized + "\n"
+    return flow_map
+
+
+def slugify(text: str) -> str:
+    s = re.sub(r"[^a-zA-Z0-9]+", "_", text.lower())
+    s = re.sub(r"_+", "_", s).strip("_")
+    return s
+
+
+def resolve_ref(spec: dict, ref: str):
+    if not ref.startswith("#/components/"):
+        raise ValueError(f"Unsupported ref: {ref}")
+    cur = spec
+    for p in ref.lstrip("#/").split("/"):
+        cur = cur[p]
+    return copy.deepcopy(cur)
+
+
+def deref(spec: dict, obj):
+    if isinstance(obj, dict) and "$ref" in obj:
+        base = deref(spec, resolve_ref(spec, obj["$ref"]))
+        merged = copy.deepcopy(base)
+        for k, v in obj.items():
+            if k == "$ref":
+                continue
+            merged[k] = deref(spec, v)
+        return merged
+    if isinstance(obj, dict):
+        return {k: deref(spec, v) for k, v in obj.items()}
+    if isinstance(obj, list):
+        return [deref(spec, v) for v in obj]
+    return obj
+
+
+def normalize_schema(spec: dict, schema):
+    s = deref(spec, schema)
+    if not isinstance(s, dict):
+        return {}
+    if "allOf" in s:
+        merged: Dict = {}
+        merged_required: List[str] = []
+        merged_props: Dict = {}
+        for part in s["allOf"]:
+            p = normalize_schema(spec, part)
+            for k in ("type", "description", "format", "nullable", "default", "enum", "additionalProperties"):
+                if k in p and k not in merged:
+                    merged[k] = p[k]
+            for req in p.get("required", []):
+                if req not in merged_required:
+                    merged_required.append(req)
+            for prop_k, prop_v in p.get("properties", {}).items():
+                merged_props[prop_k] = prop_v
+            if "items" in p:
+                merged["items"] = p["items"]
+        if merged_required:
+            merged["required"] = merged_required
+        if merged_props:
+            merged["properties"] = merged_props
+        rest = {k: v for k, v in s.items() if k != "allOf"}
+        merged.update(rest)
+        s = merged
+    return s
+
+
+def schema_type(spec: dict, schema) -> str:
+    s = normalize_schema(spec, schema)
+    t = s.get("type")
+    if not t:
+        if "properties" in s:
+            t = "object"
+        elif "items" in s:
+            t = "array"
+        elif s.get("additionalProperties"):
+            t = "object"
+        else:
+            t = "object"
+    if t == "array":
+        items = normalize_schema(spec, s.get("items", {}))
+        it = items.get("type", "object")
+        return f"array<{it}>"
+    return str(t)
+
+
+def schema_notes(spec: dict, schema) -> str:
+    s = normalize_schema(spec, schema)
+    notes: List[str] = []
+    desc = s.get("description")
+    if desc:
+        notes.append(str(desc).replace("\n", " ").strip())
+    if "enum" in s:
+        notes.append("enum(" + ",".join(map(str, s["enum"])) + ")")
+    if "default" in s:
+        notes.append(f"default={s['default']}")
+    if s.get("nullable") is True:
+        notes.append("nullable")
+    if s.get("additionalProperties") is True:
+        notes.append("additionalProperties=true")
+    return "; ".join(notes)
+
+
+def schema_format(spec: dict, schema) -> str:
+    s = normalize_schema(spec, schema)
+    return str(s.get("format", "")) if s.get("format") is not None else ""
+
+
+def schema_length(spec: dict, schema) -> str:
+    fmt = schema_format(spec, schema)
+    if fmt == "uuid":
+        return "36"
+    return ""
+
+
+def flatten_schema(spec: dict, schema) -> List[dict]:
+    root = normalize_schema(spec, schema)
+    rows: List[dict] = []
+
+    def walk(node, prefix: List[str]):
+        n = normalize_schema(spec, node)
+        props = n.get("properties", {})
+        required = set(n.get("required", []))
+        for key, value in props.items():
+            v = normalize_schema(spec, value)
+            levels = prefix + [key]
+            rows.append(
+                {
+                    "levels": levels,
+                    "type": schema_type(spec, v),
+                    "format": schema_format(spec, v),
+                    "length": schema_length(spec, v),
+                    "required": "Yes" if key in required else "No",
+                    "notes": schema_notes(spec, v),
+                }
+            )
+            t = v.get("type")
+            if t == "object" or "properties" in v:
+                if v.get("properties"):
+                    walk(v, levels)
+            elif t == "array":
+                items = normalize_schema(spec, v.get("items", {}))
+                if items.get("type") == "object" or items.get("properties"):
+                    walk(items, levels)
+
+    walk(root, [])
+    return rows
+
+
+def table(headers: List[str], rows: List[str]) -> str:
+    line1 = "| " + " | ".join(headers) + " |"
+    line2 = "|" + "|".join([" ---: " if i == 0 else " --- " for i, _ in enumerate(headers)]) + "|"
+    return "\n".join([line1, line2] + rows)
+
+
+def request_row(idx: int, row: dict) -> str:
+    levels = row["levels"][:6] + [""] * (6 - len(row["levels"][:6]))
+    cols = [
+        str(idx),
+        row["levels"][-1],
+        levels[0],
+        levels[1],
+        levels[2],
+        levels[3],
+        levels[4],
+        levels[5],
+        row["type"],
+        row["format"],
+        row["length"],
+        row["required"],
+        row["notes"],
+    ]
+    return "| " + " | ".join(cols) + " |"
+
+
+def response_row(idx: int, row: dict) -> str:
+    levels = row["levels"][:6] + [""] * (6 - len(row["levels"][:6]))
+    cols = [
+        str(idx),
+        row["levels"][-1],
+        levels[0],
+        levels[1],
+        levels[2],
+        levels[3],
+        levels[4],
+        levels[5],
+        row["type"],
+        row["notes"],
+    ]
+    return "| " + " | ".join(cols) + " |"
+
+
+def get_request_schema(spec: dict, op: dict):
+    rb = op.get("requestBody")
+    if not rb:
+        return None
+    rb = deref(spec, rb)
+    return (((rb.get("content") or {}).get("application/json") or {}).get("schema"))
+
+
+def get_response_schema(spec: dict, op: dict, code: str):
+    resp = (op.get("responses") or {}).get(code)
+    if not resp:
+        return None
+    resp = deref(spec, resp)
+    return (((resp.get("content") or {}).get("application/json") or {}).get("schema"))
+
+
+def get_parameters(spec: dict, path_item: dict, op: dict) -> List[dict]:
+    out: List[dict] = []
+    for p in path_item.get("parameters", []):
+        out.append(deref(spec, p))
+    for p in op.get("parameters", []):
+        out.append(deref(spec, p))
+    return out
+
+
+def find_plantuml_jar(explicit: str) -> Optional[Path]:
+    if explicit:
+        p = Path(explicit).expanduser().resolve()
+        return p if p.exists() else None
+
+    user_home = Path.home()
+    candidates = sorted((user_home / ".vscode" / "extensions").glob("jebbs.plantuml-*/plantuml.jar"))
+    if candidates:
+        return candidates[-1]
+    return None
+
+
+def render_svgs(plantuml_jar: Path, puml_files: List[Path], images_dir: Path) -> List[str]:
+    errors: List[str] = []
+    for puml in puml_files:
+        proc = subprocess.run(
+            ["java", "-jar", str(plantuml_jar), "-tsvg", str(puml)],
+            stdout=subprocess.PIPE,
+            stderr=subprocess.STDOUT,
+            text=True,
+            check=False,
+        )
+        svg_src = puml.with_suffix(".svg")
+        svg_dst = images_dir / svg_src.name
+        if not svg_src.exists():
+            errors.append(f"Render failed (no svg): {puml}")
+            continue
+        shutil.copy2(svg_src, svg_dst)
+        svg_src.unlink(missing_ok=True)
+
+        svg_text = svg_dst.read_text(encoding="utf-8", errors="ignore")
+        if any(x in svg_text for x in ("Cannot find group", "Syntax Error", "Some diagram description contains errors")):
+            errors.append(f"Rendered with error content: {svg_dst}")
+        if proc.returncode != 0:
+            # Keep file if generated, but still report CLI failure.
+            errors.append(f"PlantUML exit={proc.returncode} for {puml}")
+    return errors
+
+
+def main() -> int:
+    args = parse_args()
+
+    feature_key = args.feature_key.strip()
+    if not feature_key:
+        raise ValueError("feature-key is empty")
+    feature_snake = normalize_feature_snake(feature_key)
+
+    yaml_path = Path(args.yaml)
+    flow_path = Path(args.flow_list)
+    output_path = Path(args.output)
+    flows_dir = Path(args.flows_dir)
+    images_dir = Path(args.images_dir)
+
+    spec = load_yaml(yaml_path)
+    include_filters = parse_include_filters(args.include)
+    operations = collect_operations(spec, include_filters)
+    if not operations:
+        raise RuntimeError("No API operations selected from YAML")
+
+    flow_map = collect_flow_map(flow_path.read_text(encoding="utf-8"))
+    missing_flows: List[str] = []
+
+    flows_dir.mkdir(parents=True, exist_ok=True)
+    images_dir.mkdir(parents=True, exist_ok=True)
+    output_path.parent.mkdir(parents=True, exist_ok=True)
+
+    lines: List[str] = []
+    lines.append(f"# {feature_key} API DESIGN DETAIL")
+    lines.append("")
+    lines.append("## 0. Abbreviations")
+    lines.append("")
+    lines.append("| No | Term | Meaning |")
+    lines.append("| ---: | --- | --- |")
+    lines.append("| 1 | API | Application Programming Interface |")
+    lines.append("| 2 | UUID | Universally Unique Identifier |")
+    lines.append("| 3 | FE | Frontend |")
+    lines.append("| 4 | BE | Backend |")
+    lines.append("| 5 | DT | Datetime |")
+    lines.append("")
+    lines.append("## 1. Document Scope")
+    lines.append("")
+    lines.append("| No | Method | Endpoint | Reference Template |")
+    lines.append("| ---: | --- | --- | --- |")
+    for i, (method, path, _op) in enumerate(operations, 1):
+        lines.append(f"| {i} | {method.upper()} | `{path}` | `API_design.xlsx` style |")
+    lines.append("")
+
+    puml_files: List[Path] = []
+
+    for i, (method, path, op) in enumerate(operations, 1):
+        section_no = i + 1
+        title = str(op.get("summary", f"{method.upper()} {path}")).strip()
+        slug = f"{feature_snake}__{method}_{slugify(path)}"
+        puml_name = f"{slug}.puml"
+        svg_name = f"{slug}.svg"
+        puml_path = flows_dir / puml_name
+        svg_path = images_dir / svg_name
+        svg_rel = Path(os.path.relpath(svg_path, output_path.parent))
+
+        flow = flow_map.get((method, path))
+        if not flow:
+            missing_flows.append(f"{method.upper()} {path}")
+            flow = (
+                "@startuml\n"
+                f'partition "{method.upper()} **{path}**" {{\n'
+                "start\n"
+                ":TODO add flow;\n"
+                "stop\n"
+                "}\n"
+                "@enduml\n"
+            )
+
+        puml_path.write_text(flow, encoding="utf-8")
+        puml_files.append(puml_path)
+
+        lines.append(f"## {section_no}. API Detail {i} - {title}")
+        lines.append("")
+        lines.append(f"**Endpoint:** `{method.upper()} {path}`")
+        lines.append("")
+        lines.append(f"### {section_no}.1 Process Flow")
+        lines.append("")
+        lines.append(f"Source of truth: `{flow_path.as_posix()}`")
+        lines.append("")
+        lines.append("```text")
+        lines.extend(flow.rstrip("\n").split("\n"))
+        lines.append("```")
+        lines.append("")
+        lines.append(f"![{title}]({svg_rel.as_posix()})")
+        lines.append("")
+
+        lines.append(f"### {section_no}.2 Parameters")
+        lines.append("")
+        path_item = spec.get("paths", {}).get(path, {})
+        parameters = get_parameters(spec, path_item, op)
+        if parameters:
+            param_rows = ["| No | Parameter | Type | Required | Description |", "| ---: | --- | --- | --- | --- |"]
+            for p_idx, p in enumerate(parameters, 1):
+                sch = p.get("schema", {})
+                p_type = sch.get("type", "string")
+                p_fmt = sch.get("format")
+                type_text = f"{p.get('in', 'path')} {p_type}" + (f"({p_fmt})" if p_fmt else "")
+                req = "Yes" if p.get("required") else "No"
+                desc = str(p.get("description", "")).replace("\n", " ").strip()
+                param_rows.append(f"| {p_idx} | `{p.get('name', '')}` | {type_text} | {req} | {desc} |")
+            lines.extend(param_rows)
+        else:
+            lines.append("`None`")
+        lines.append("")
+
+        lines.append(f"### {section_no}.3 Request Parameters (JSON format)")
+        lines.append("")
+        req_schema = get_request_schema(spec, op)
+        if req_schema:
+            req_rows = flatten_schema(spec, req_schema)
+            req_md_rows = [request_row(x, r) for x, r in enumerate(req_rows, 1)]
+            headers = [
+                "No",
+                "Item Name",
+                "Level 1",
+                "Level 2",
+                "Level 3",
+                "Level 4",
+                "Level 5",
+                "Level 6",
+                "Type",
+                "Format",
+                "Length",
+                "Required",
+                "Notes",
+            ]
+            lines.append(table(headers, req_md_rows))
+        else:
+            lines.append("`None`")
+        lines.append("")
+
+        lines.append(f"### {section_no}.4 Success Response (JSON format)")
+        lines.append("")
+        ok_schema = get_response_schema(spec, op, "200")
+        if ok_schema:
+            ok_rows = flatten_schema(spec, ok_schema)
+            ok_md_rows = [response_row(x, r) for x, r in enumerate(ok_rows, 1)]
+            headers = ["No", "Item Name", "Level 1", "Level 2", "Level 3", "Level 4", "Level 5", "Level 6", "Type", "Notes"]
+            lines.append(table(headers, ok_md_rows))
+        else:
+            lines.append("`None`")
+        lines.append("")
+
+        lines.append(f"### {section_no}.5 Error Response (JSON format)")
+        lines.append("")
+        err_schema = get_response_schema(spec, op, "400")
+        if err_schema:
+            err_rows = flatten_schema(spec, err_schema)
+            err_md_rows = [response_row(x, r) for x, r in enumerate(err_rows, 1)]
+            headers = ["No", "Item Name", "Level 1", "Level 2", "Level 3", "Level 4", "Level 5", "Level 6", "Type", "Notes"]
+            lines.append(table(headers, err_md_rows))
+        else:
+            lines.append("`None`")
+        lines.append("")
+
+    final_section_no = len(operations) + 2
+    lines.append(f"## {final_section_no}. Flowchart Image Rendering Recommendation (for Markdown)")
+    lines.append("")
+    lines.append("1. Keep `.puml` files in `docs/api/flows/` as source of truth.")
+    lines.append("2. Render `.svg` files into `docs/api/images/` and embed them in markdown.")
+    lines.append("3. Keep process flow code block as `text` to avoid duplicate diagram rendering in markdown preview.")
+    lines.append("")
+
+    output_path.write_text("\n".join(lines) + "\n", encoding="utf-8")
+
+    render_errors: List[str] = []
+    if not args.skip_render:
+        jar = find_plantuml_jar(args.plantuml_jar)
+        if jar:
+            render_errors = render_svgs(jar, puml_files, images_dir)
+        else:
+            render_errors.append("PlantUML jar not found. Use --plantuml-jar or install VSCode PlantUML extension.")
+
+    print(f"[OK] Generated markdown: {output_path}")
+    print(f"[OK] Generated puml files: {len(puml_files)}")
+    if missing_flows:
+        print("[WARN] Missing flow blocks:")
+        for item in missing_flows:
+            print(f"  - {item}")
+    if render_errors:
+        print("[WARN] Render issues:")
+        for item in render_errors:
+            print(f"  - {item}")
+    else:
+        if not args.skip_render:
+            print("[OK] Rendered SVG flow images successfully")
+    return 0
+
+
+if __name__ == "__main__":
+    try:
+        raise SystemExit(main())
+    except Exception as exc:  # pragma: no cover
+        print(f"[ERROR] {exc}", file=sys.stderr)
+        raise
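The `--include` filter in the script above accepts repeatable `"METHOD /path"` or bare `"/path"` entries. A minimal standalone sketch of the same matching semantics (a re-implementation for sanity-checking filter strings, not an import from the package):

```python
from typing import List, Optional, Tuple

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "options", "head"}


def parse_include_filters(raw_filters: List[str]) -> List[Tuple[Optional[str], str]]:
    # "METHOD /path" pins both method and path; bare "/path" matches any method.
    parsed: List[Tuple[Optional[str], str]] = []
    for raw in raw_filters:
        parts = raw.strip().split(maxsplit=1)
        if not parts:
            continue
        if len(parts) == 1:
            if not parts[0].startswith("/"):
                raise ValueError(f"Invalid --include format: {raw}")
            parsed.append((None, parts[0]))
        else:
            method, path = parts[0].lower().strip(), parts[1].strip()
            if method not in HTTP_METHODS or not path.startswith("/"):
                raise ValueError(f"Invalid --include entry: {raw}")
            parsed.append((method, path))
    return parsed


def match_include(method: str, path: str, filters: List[Tuple[Optional[str], str]]) -> bool:
    # An empty filter list selects every operation.
    if not filters:
        return True
    return any(p == path and (m is None or m == method) for m, p in filters)
```

For example, `parse_include_filters(["GET /items", "/items/{id}"])` selects `GET /items` plus every method on `/items/{id}`, while all other paths are skipped.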
package/assets/toolkit/toolkit/skills/sdtk-api-doc/SKILL.md
ADDED
@@ -0,0 +1,36 @@
+---
+name: sdtk-api-doc
+description: Generate OpenAPI 3.x YAML and PlantUML flow diagrams for a feature following this toolkit's API conventions. Use when you need to create/update docs/api/* (API spec + flow list) from BA_SPEC/ARCH_DESIGN.
+---
+
+# SDTK API Documentation
+
+## Outputs
+- `docs/api/[FeaturePascal]_API.yaml`
+- `docs/api/[FEATURE_KEY]_ENDPOINTS.md`
+- `docs/api/[feature_snake]_api_flow_list.txt`
+- Optional downstream (via `sdtk-api-design-spec`):
+  - `docs/api/[FEATURE_KEY]_API_DESIGN_DETAIL.md`
+
+## Inputs (minimum)
+- Feature name/key
+- Entities + key fields
+- Use cases (UC-xx) + business rules (BR-xx)
+- Auth/permission model
+
+## Process
+1. Read `docs/specs/BA_SPEC_[FEATURE_KEY].md` and/or `docs/architecture/ARCH_DESIGN_[FEATURE_KEY].md`.
+2. Read and apply API/flowchart rules from `./references/FLOWCHART_CREATION_RULES.md`.
+3. Define endpoints mapped to UC-xx; keep path naming consistent across CRUD/search/list/mst patterns.
+4. For each endpoint, document request/response schemas and error cases.
+5. Generate/update endpoint markdown (`[FEATURE_KEY]_ENDPOINTS.md`) with summary tables, API type grouping, and screen-logic mapping.
+6. Generate PlantUML flows including: auth, permission check, validation, main logic, error exits.
+7. Ensure traceability notes reference UC/BR where relevant.
+8. Validate English output hygiene when generating English artifacts:
+   - no mixed-language leftovers in narrative text
+   - no mojibake/encoding corruption markers
+   - terminology consistency across endpoint detail, summary tables, and flow labels
+9. If orchestrator mode requires API design detail generation (`apiDesignDetailMode=auto/on`), hand off to `sdtk-api-design-spec` after the YAML + flow list are updated.
+
+## Reference
+- Deeper analysis: `docs/specs/API_DOC_SKILL_ANALYSIS.md` (if present).
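The `[FeaturePascal]`, `[FEATURE_KEY]`, and `[feature_snake]` placeholders in the sdtk-api-doc outputs can all be derived from one feature key. A minimal sketch, reusing the snake-case rule from `generate_api_design_detail.py`'s `normalize_feature_snake` (the `feature_forms` helper itself is hypothetical, not part of the toolkit):

```python
import re


def feature_forms(feature_key: str) -> dict:
    # "SCHEDULE_WHITEBOARD" -> snake "schedule_whiteboard", Pascal "ScheduleWhiteboard".
    snake = re.sub(r"[^a-z0-9]+", "_", feature_key.lower()).strip("_")
    pascal = "".join(part.capitalize() for part in snake.split("_"))
    return {
        "yaml": f"docs/api/{pascal}_API.yaml",
        "endpoints": f"docs/api/{feature_key}_ENDPOINTS.md",
        "flow_list": f"docs/api/{snake}_api_flow_list.txt",
    }
```

For example, `feature_forms("SCHEDULE_WHITEBOARD")["yaml"]` yields `docs/api/ScheduleWhiteboard_API.yaml`, matching the output naming pattern above.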