mdan-cli 2.2.0

# MDAN Integration — ChatGPT / OpenAI

## Setup

### GPT-4o / GPT-4 (Web)

1. Go to chat.openai.com
2. Start a new conversation
3. Paste the content of `core/orchestrator.md` as your first message, prefixed with:

   ```
   Please adopt the following role and follow these instructions for our entire conversation:

   [paste orchestrator.md content here]
   ```

### Custom GPT

1. Go to GPT Builder
2. In "Instructions", paste `core/orchestrator.md`
3. Upload agent files as knowledge files
4. Name your GPT "MDAN Core"

### API

```python
from openai import OpenAI

with open("core/orchestrator.md") as f:
    system_prompt = f.read()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I want to build [your project idea]"}
    ]
)
```

## Notes for ChatGPT

- Replace the bracket-style `[MDAN-AGENT]` tags with `## MDAN AGENT` markdown headers if the tags cause issues
- GPT-4o handles the full Universal Envelope well
- For long projects, use Projects (ChatGPT Plus) to maintain context
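The tag-to-header conversion from the first note can be scripted. A minimal sketch, assuming each `[MDAN-AGENT]` tag sits on its own line, optionally with a name after a colon (the exact tag syntax in your orchestrator file may differ):

```python
import re

def tags_to_headers(prompt: str) -> str:
    """Convert [MDAN-AGENT] bracket tags to markdown headers.

    Illustrative sketch, not part of mdan-cli. Assumes tags appear
    alone on a line, e.g. "[MDAN-AGENT]" or "[MDAN-AGENT: Dev]";
    the text after the colon is kept in the header.
    """
    def repl(match: re.Match) -> str:
        name = match.group(1)
        return f"## MDAN AGENT{': ' + name if name else ''}"

    return re.sub(
        r"^\[MDAN-AGENT(?::\s*([^\]]+))?\]\s*$",
        repl,
        prompt,
        flags=re.MULTILINE,
    )

converted = tags_to_headers("[MDAN-AGENT: Dev]\nImplement the feature.")
print(converted.splitlines()[0])  # → ## MDAN AGENT: Dev
```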

---

# MDAN Integration — Google Gemini

## Setup

### Gemini (Web)

Gemini works best with markdown-formatted prompts. Convert the orchestrator to markdown:

```
# MDAN Core — Orchestrator

You are MDAN Core, the central orchestrator...
[rest of orchestrator.md, with [MDAN-AGENT] blocks replaced by ## headers]
```

### Gemini API

```python
import google.generativeai as genai

with open("core/orchestrator.md") as f:
    system_prompt = f.read()

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",
    system_instruction=system_prompt
)
chat = model.start_chat()
response = chat.send_message("I want to build [your project idea]")
```

## Notes for Gemini

- Gemini handles markdown well; prefer `## Headers` over XML-style tags
- Gemini 1.5 Pro has a very large context window — excellent for long MDAN sessions
- Use Gemini Advanced for best results

---

# MDAN Integration — Alibaba Qwen

## Setup

### Qwen Web (Tongyi)

1. Go to tongyi.aliyun.com
2. Paste the orchestrator prompt as the first message
3. Qwen handles structured prompts well

### Qwen API

```python
from openai import OpenAI  # Qwen exposes an OpenAI-compatible API

with open("core/orchestrator.md") as f:
    system_prompt = f.read()

client = OpenAI(
    api_key="YOUR_DASHSCOPE_KEY",
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1"
)
response = client.chat.completions.create(
    model="qwen-max",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I want to build [your project idea]"}
    ]
)
```

## Notes for Qwen

- Qwen supports the Universal Envelope as-is
- Qwen-Max gives the best results for complex orchestration
- Chinese-language support is native — useful for Chinese-language projects

---

# MDAN Integration — Moonshot Kimi

## Setup

### Kimi API

```python
from openai import OpenAI

with open("core/orchestrator.md") as f:
    system_prompt = f.read()

client = OpenAI(
    api_key="YOUR_KIMI_KEY",
    base_url="https://api.moonshot.cn/v1"
)
response = client.chat.completions.create(
    model="moonshot-v1-128k",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I want to build [your project idea]"}
    ]
)
```

## Notes for Kimi

- Kimi's 128k context window makes it excellent for full project sessions
- Use markdown headers over XML tags for best compatibility
- moonshot-v1-128k is recommended for MDAN due to its large context support

---

# MDAN Integration — Zhipu GLM

## Setup

### GLM API

```python
from zhipuai import ZhipuAI

with open("core/orchestrator.md") as f:
    system_prompt = f.read()

client = ZhipuAI(api_key="YOUR_API_KEY")
response = client.chat.completions.create(
    model="glm-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I want to build [your project idea]"}
    ]
)
```

## Notes for GLM

- GLM-4 handles structured prompts well
- Use markdown formatting for best results
- GLM is particularly strong for Chinese-language software projects

---

# MDAN Integration — MiniMax

## Setup

### MiniMax API

```python
import requests

with open("core/orchestrator.md") as f:
    system_prompt = f.read()

response = requests.post(
    "https://api.minimax.chat/v1/text/chatcompletion_v2",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "abab6.5s-chat",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "I want to build [your project idea]"}
        ]
    }
)
```

## Notes for MiniMax

- MiniMax handles the Universal Envelope well
- Use structured markdown for agent prompts

---

# MDAN Integration — Opencode

## Setup

Opencode is a terminal-based AI coding tool. MDAN integrates via its system prompt configuration.

### Configuration

```bash
# In your project root, create opencode.json.
# jq --rawfile JSON-escapes the prompt; a quoted heredoc would keep
# "$(cat ...)" literal, and an unquoted one would inject raw newlines
# that break the JSON.
jq -n --rawfile system core/orchestrator.md '{system: $system}' > opencode.json
```

Or set via environment:

```bash
export OPENCODE_SYSTEM="$(cat core/orchestrator.md)"
opencode
```

### Usage

```bash
# Start MDAN session
opencode "I want to build [your project idea]"

# Reference agent prompts
opencode "@file .mdan/agents/dev.md Implement the user authentication feature"
```

## Notes for Opencode

- Opencode excels in the BUILD phase — it can directly edit files
- Use MDAN feature briefs as opencode tasks
- Place agent files in `.mdan/agents/` for easy reference
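The config file can also be generated with a short script, which sidesteps shell-quoting pitfalls entirely by letting `json.dumps` escape the prompt. A sketch: the `write_opencode_config` helper is hypothetical, not part of mdan-cli, and the top-level `system` key follows the configuration example in this section:

```python
import json
import tempfile
from pathlib import Path

def write_opencode_config(orchestrator: Path, dest: Path) -> None:
    """Write opencode.json with the prompt JSON-escaped.

    Newlines and quotes in the markdown would break a hand-assembled
    file; json.dumps handles the escaping. Illustrative sketch only.
    """
    prompt = orchestrator.read_text(encoding="utf-8")
    dest.write_text(json.dumps({"system": prompt}, indent=2), encoding="utf-8")

# Demo with a throwaway prompt file
d = Path(tempfile.mkdtemp())
(d / "orchestrator.md").write_text('You are MDAN Core.\n"Quotes" survive.', encoding="utf-8")
write_opencode_config(d / "orchestrator.md", d / "opencode.json")
roundtrip = json.loads((d / "opencode.json").read_text(encoding="utf-8"))
print(roundtrip["system"] == 'You are MDAN Core.\n"Quotes" survive.')  # → True
```

In a real project, point it at `core/orchestrator.md` and write `opencode.json` at the project root.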

---

# MDAN Integration — GitHub Copilot

## Setup

GitHub Copilot supports workspace instructions via `.github/copilot-instructions.md`.

### Step 1: Create workspace instructions

```bash
mkdir -p .github
cp core/orchestrator.md .github/copilot-instructions.md
```

### Step 2: Activate in VS Code

1. Open VS Code with the GitHub Copilot extension
2. Open Copilot Chat (Ctrl+Shift+I)
3. The workspace instructions are loaded automatically

### Step 3: Use MDAN phases

```
@workspace MDAN Phase 1: I want to build [your project idea]
```

## Limitations with GitHub Copilot

- Copilot Chat has a shorter context window than dedicated LLM APIs
- Orchestration is more limited than with Claude or GPT-4
- Best used for the BUILD phase with pre-written architecture docs
- Use `@workspace` to give Copilot context from your codebase

## Recommended Usage

Use Copilot primarily for the BUILD phase — once the architecture and PRD are written (using Claude, GPT-4, or Gemini), Copilot excels at implementing features directly in your IDE.

---

# MDAN Integration — Claude (Anthropic)

## Setup

Claude is the reference implementation for MDAN. It handles the Universal Envelope natively.

### Option 1: claude.ai (Web/App)

1. Open a new Claude conversation
2. Start your message with the content of `core/orchestrator.md`
3. Optionally add the relevant agent prompts for your current phase
4. Begin your project

### Option 2: API / Custom Claude App

```python
import anthropic

with open("core/orchestrator.md") as f:
    system_prompt = f.read()

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-opus-4-20250514",
    max_tokens=8096,
    system=system_prompt,
    messages=[
        {"role": "user", "content": "I want to build [your project idea]"}
    ]
)
```

### Option 3: Claude Projects

1. Create a new Claude Project
2. In Project Instructions, paste the content of `core/orchestrator.md`
3. Upload relevant agent files to Project Knowledge
4. Start new conversations within the project

## Tips for Claude

- Claude handles the bracket-style `[MDAN-AGENT]` tags natively — use them as-is
- For long projects, use Claude Projects to maintain context
- Claude responds well to "DESIGN APPROVED" / "PRD APPROVED" validation phrases
- Use extended thinking (if available) for complex architecture decisions

---

# MDAN Integration — Cursor IDE

## Setup

Cursor uses `.cursorrules` for persistent AI instructions at the project level.

### Step 1: Create `.cursorrules`

Copy the content of `core/orchestrator.md` into a `.cursorrules` file at your project root, then add:

```
## CURSOR-SPECIFIC INSTRUCTIONS

You are operating inside Cursor IDE. In addition to your MDAN orchestration role:

- You have access to the full codebase via @codebase
- You can read and write files directly
- Use @file to reference specific files when activating agents
- When the Dev Agent produces code, write it directly to the appropriate files
- When the Doc Agent produces documentation, write it to the mdan_output/ folder

### Agent File References
- Product Agent: @file .mdan/agents/product.md
- Architect Agent: @file .mdan/agents/architect.md
- UX Agent: @file .mdan/agents/ux.md
- Dev Agent: @file .mdan/agents/dev.md
- Test Agent: @file .mdan/agents/test.md
- Security Agent: @file .mdan/agents/security.md
- DevOps Agent: @file .mdan/agents/devops.md
- Doc Agent: @file .mdan/agents/doc.md
```

### Step 2: Copy agents to project

```bash
mkdir -p .mdan/agents
cp agents/*.md .mdan/agents/
```

### Step 3: Start using MDAN in Cursor

Open Cursor Chat (Cmd+L) and type:

```
MDAN: I want to build [your project idea]
```

## Cursor-Specific Features

### Composer Mode (Recommended)

Use Composer (Cmd+I) for the BUILD phase — it can write multiple files simultaneously when the Dev Agent produces code.

### Agent Mode

Enable Agent mode for autonomous implementation. MDAN Core will orchestrate, and Cursor's agent will execute file operations.

### @references

- `@codebase` — Give MDAN Core full project context
- `@file path/to/file` — Reference specific files for agent review
- `@docs` — Reference documentation

## Example project structure

```
.cursorrules          ← MDAN Core orchestrator prompt
.mdan/
  agents/
    product.md
    architect.md
    ux.md
    dev.md
    test.md
    security.md
    devops.md
    doc.md
```

---

# MDAN Integration — Windsurf IDE

## Setup

Windsurf uses `.windsurfrules` for persistent AI instructions.

### Step 1: Create `.windsurfrules`

```bash
cp core/orchestrator.md .windsurfrules
```

Then append at the end:

```
## WINDSURF-SPECIFIC INSTRUCTIONS

You are operating inside Windsurf IDE with Cascade AI.

- Cascade can autonomously execute multi-step coding tasks
- Use MDAN phases to structure Cascade's work
- When activating the Dev Agent, Cascade will implement and write files directly
- Use Cascade's flow awareness to maintain MDAN phase context across sessions

### File Organization
All MDAN artifacts should be saved to `.mdan/artifacts/` for reference.
```

### Step 2: Copy agents

```bash
mkdir -p .mdan/agents .mdan/artifacts
cp agents/*.md .mdan/agents/
```

### Step 3: Using MDAN with Cascade

Cascade's multi-step reasoning pairs well with MDAN's structured phases. When starting a phase:

```
@MDAN Phase 2: DESIGN — activate Architect Agent for [project name]
```

## Tips for Windsurf

- Windsurf's Cascade is excellent for the BUILD phase — it can implement entire features autonomously
- Use MDAN's Feature Briefs as Cascade tasks for predictable, structured implementation
- Save architecture documents to `.mdan/artifacts/` so Cascade can reference them in context

---

{
  "mdan_version": "2.0.0",
  "user": {
    "name": null
  },
  "project": {
    "name": "{{PROJECT_NAME}}",
    "type": null,
    "detected_profile": null,
    "created_at": "{{DATE}}",
    "last_updated": "{{DATE}}"
  },
  "current_phase": "DISCOVER",
  "phase_history": [
    {
      "phase": "DISCOVER",
      "started_at": "{{DATE}}",
      "completed_at": null,
      "status": "IN_PROGRESS",
      "artifacts": []
    }
  ],
  "agents_used": {
    "learn": null,
    "product": null,
    "architect": null,
    "ux": null,
    "dev": null,
    "test": null,
    "security": null,
    "devops": null,
    "doc": null
  },
  "features": [],
  "decisions": [],
  "open_issues": [],
  "tech_stack": {},
  "learned_knowledge": {
    "skills": [],
    "mcp_servers": [],
    "rulesets": []
  },
  "llm_history": []
}
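The `{{PROJECT_NAME}}` and `{{DATE}}` placeholders in this template can be filled programmatically. A minimal sketch, using a trimmed copy of the template (only a few of the fields above) for brevity:

```python
import json
from datetime import date

# Trimmed stand-in for the template file; the placeholder markers
# are the ones the template itself uses.
TEMPLATE = """{
  "mdan_version": "2.0.0",
  "project": {
    "name": "{{PROJECT_NAME}}",
    "created_at": "{{DATE}}",
    "last_updated": "{{DATE}}"
  },
  "current_phase": "DISCOVER"
}"""

def instantiate(template: str, project_name: str) -> dict:
    """Fill the placeholders and parse the result as JSON."""
    today = date.today().isoformat()
    filled = template.replace("{{PROJECT_NAME}}", project_name)
    filled = filled.replace("{{DATE}}", today)
    return json.loads(filled)

state = instantiate(TEMPLATE, "my-project")
print(state["project"]["name"])  # → my-project
```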

---

# MDAN Memory System

> MDAN's persistent memory across sessions.
> Each project generates and maintains an `MDAN-STATE.json` file at its root.

---

## Concept

The fundamental problem with AI agents: every new conversation starts from zero.
MDAN solves this with a versioned JSON state file that any LLM can read.

At the start of each session, the user pastes the content of `MDAN-STATE.json` into the conversation.
MDAN Core reads it, rebuilds the full context, and resumes exactly where the project left off.

---

## Structure of the MDAN-STATE.json file

```json
{
  "mdan_version": "2.0.0",
  "user": {
    "name": "Alex"
  },
  "project": {
    "name": "mon-projet",
    "type": "web-app",
    "created_at": "2025-01-15",
    "last_updated": "2025-01-20"
  },
  "current_phase": "BUILD",
  "phase_history": [
    {
      "phase": "DISCOVER",
      "started_at": "2025-01-15",
      "completed_at": "2025-01-15",
      "status": "VALIDATED",
      "artifacts": ["mdan_output/PRD.md"]
    },
    {
      "phase": "DESIGN",
      "started_at": "2025-01-16",
      "completed_at": "2025-01-17",
      "status": "VALIDATED",
      "artifacts": ["mdan_output/ARCHITECTURE.md", "mdan_output/UX-SPEC.md"]
    },
    {
      "phase": "BUILD",
      "started_at": "2025-01-18",
      "completed_at": null,
      "status": "IN_PROGRESS",
      "artifacts": []
    }
  ],
  "agents_used": {
    "product": "2.0.0",
    "architect": "2.0.0",
    "ux": "2.0.0",
    "dev": "2.0.0",
    "test": "2.0.0",
    "security": "2.0.0",
    "devops": "2.0.0",
    "doc": "2.0.0"
  },
  "features": [
    {
      "id": "US-001",
      "title": "User authentication",
      "status": "DONE",
      "implemented_at": "2025-01-18",
      "files": ["src/auth/auth.service.ts", "src/auth/auth.controller.ts"],
      "tests": "PASSING",
      "security_review": "APPROVED"
    },
    {
      "id": "US-002",
      "title": "User profile management",
      "status": "IN_PROGRESS",
      "implemented_at": null,
      "files": [],
      "tests": null,
      "security_review": null
    },
    {
      "id": "US-003",
      "title": "Dashboard",
      "status": "TODO",
      "implemented_at": null,
      "files": [],
      "tests": null,
      "security_review": null
    }
  ],
  "decisions": [
    {
      "id": "ADR-001",
      "title": "PostgreSQL over MongoDB",
      "made_at": "2025-01-16",
      "rationale": "Relational data, ACID needed"
    }
  ],
  "open_issues": [
    {
      "id": "ISSUE-001",
      "type": "BLOCKER",
      "description": "Rate limiting library choice not finalized",
      "phase": "BUILD"
    }
  ],
  "tech_stack": {
    "frontend": "React 18 + TypeScript",
    "backend": "Node.js 20 + Express",
    "database": "PostgreSQL 16",
    "cache": "Redis 7",
    "hosting": "Railway"
  },
  "llm_history": [
    { "session": 1, "llm": "Claude", "phase": "DISCOVER + DESIGN" },
    { "session": 2, "llm": "Cursor", "phase": "BUILD (US-001)" }
  ]
}
```

---

## Session resume protocol

### What the user does

```
1. Open the project's MDAN-STATE.json
2. Copy its content
3. Open your LLM with the MDAN Core prompt
4. Paste this message:

"MDAN RESUME SESSION
[paste the content of MDAN-STATE.json]"
```
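The resume message assembled by hand above can also be generated from the state file. A small sketch; the `build_resume_prompt` helper is illustrative, not part of mdan-cli:

```python
import json
import tempfile
from pathlib import Path

def build_resume_prompt(state_path: Path) -> str:
    """Build the "MDAN RESUME SESSION" message from MDAN-STATE.json.

    Mirrors the manual steps above: the header line, then the raw
    state JSON pasted underneath. Parses the JSON first so a
    corrupted state file fails fast instead of confusing the LLM.
    """
    state = state_path.read_text(encoding="utf-8")
    json.loads(state)  # fail fast on a corrupted state file
    return f"MDAN RESUME SESSION\n{state}"

# Demo with a throwaway state file
demo = Path(tempfile.mkdtemp()) / "MDAN-STATE.json"
demo.write_text('{"current_phase": "BUILD"}', encoding="utf-8")
print(build_resume_prompt(demo).splitlines()[0])  # → MDAN RESUME SESSION
```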

### What MDAN Core does automatically

```
[MDAN CORE — SESSION RESUME]

✅ Project loaded: [name]
✅ Current phase: [phase]
✅ Progress: [X/Y features complete]
✅ Last session: [LLM used, date]

Context rebuilt:
- PRD: [two-line summary]
- Architecture: [stack + pattern]
- Features: [list with statuses]
- Open issues: [list]

Recommended next action:
→ [precise action to take now]

Would you like to continue with [action], or do something else?
```

---

## Updating the state

At the end of each session, MDAN Core generates the updated MDAN-STATE.json:

```
[MDAN CORE — END OF SESSION]

Here is your updated MDAN-STATE.json.
Replace the existing file with this content:

[full updated JSON]
```
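The bookkeeping behind this update can be sketched in a few lines: mark a feature DONE and refresh `last_updated`. Field names follow the state structure above; this is an illustration, not the actual mdan-cli implementation:

```python
import json
from datetime import date

def mark_feature_done(state: dict, feature_id: str) -> dict:
    """Mark one feature DONE and refresh the timestamps.

    Illustrative sketch of the end-of-session bookkeeping; field
    names follow the MDAN-STATE.json structure documented above.
    """
    today = date.today().isoformat()
    for feature in state["features"]:
        if feature["id"] == feature_id:
            feature["status"] = "DONE"
            feature["implemented_at"] = today
    state["project"]["last_updated"] = today
    return state

state = {
    "project": {"last_updated": "2025-01-18"},
    "features": [{"id": "US-002", "status": "IN_PROGRESS", "implemented_at": None}],
}
state = mark_feature_done(state, "US-002")
print(state["features"][0]["status"])  # → DONE
```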

---

## CLI — Memory commands

```bash
# Initialize a project's state
mdan memory init mon-projet

# Show the current state
mdan memory status

# Update a feature's status
mdan memory feature US-001 done

# Generate the resume prompt
mdan memory resume

# Validate a phase
mdan memory phase-complete DISCOVER
```