@brunosps00/dev-workflow 0.4.0 → 0.4.2

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@brunosps00/dev-workflow",
- "version": "0.4.0",
+ "version": "0.4.2",
  "description": "AI-driven development workflow commands for any project. Scaffolds a complete PRD-to-PR pipeline with multi-platform AI assistant support.",
  "bin": {
  "dev-workflow": "./bin/dev-workflow.js"
@@ -56,18 +56,23 @@ Evaluate whether the topic requires deep research:

  If executed, use `standard` mode by default. Incorporate findings into subsequent steps.

- ### Step 3: Brainstorm
+ ### Step 3: Brainstorm (Interactive)

  Run `/dw-brainstorm` with accumulated context (intel + research).
  - Generate 3 directions
  - Automatically converge on the most pragmatic option for the project context
  - Do NOT wait for user approval (brainstorm is automatic in autopilot)

- ### Step 4: PRD
+ ### Step 4: PRD (Interactive — 7+ Questions)
+
+ <critical>The PRD MUST include an interactive interview with the user. Ask AT LEAST 7 clarification questions BEFORE writing the PRD. Do NOT answer questions automatically based on context — the user MUST respond.</critical>

  Run `/dw-create-prd` using brainstorm findings.
- - Follow all command instructions (clarification questions answered based on accumulated context)
- - Generate the complete PRD in `.dw/spec/prd-[name]/prd.md`
+ - Follow ALL command instructions, especially the clarification questions section
+ - Ask at least 7 questions about: problem, target users, critical features, scope, constraints, design, integration
+ - In each question, present a recommendation grounded in brainstorm and deep-research findings (if executed). E.g.: "Based on the research, I recommend X because [evidence]. Do you agree or prefer a different direction?"
+ - Wait for user responses to each question
+ - Only after receiving all responses, write the complete PRD in `.dw/spec/prd-[name]/prd.md`

  ### === GATE 1: PRD Approval ===

@@ -78,11 +83,16 @@ Present to the user:

  **Wait for explicit approval.** If the user requests changes, adjust and re-present.

- ### Step 5: TechSpec
+ ### Step 5: TechSpec (Interactive — 7+ Questions)
+
+ <critical>The TechSpec MUST include an interactive interview with the user. Ask AT LEAST 7 technical clarification questions BEFORE writing the TechSpec. Do NOT answer questions automatically — the user MUST respond.</critical>

  Run `/dw-create-techspec` from the approved PRD.
- - Follow all command instructions
- - Generate in `.dw/spec/prd-[name]/techspec.md`
+ - Follow ALL command instructions, especially the clarification questions section
+ - Ask at least 7 questions about: preferred architecture, existing vs new libs, testing strategy, integration with existing systems, infrastructure constraints, performance, security
+ - In each question, present a technical recommendation grounded in brainstorm, deep-research, and approved PRD findings. E.g.: "Research indicated lib X has better performance for this case [source]. Want to use X or have another preference?"
+ - Wait for user responses to each question
+ - Only after receiving all responses, generate in `.dw/spec/prd-[name]/techspec.md`

  ### Step 6: Tasks

@@ -115,6 +125,16 @@ Run `/dw-run-plan` with the PRD path.

  ### Step 9: Implementation Review (Loop)

+ <critical>BEFORE the PRD compliance review, run the project's build and lint. If they fail, fix and re-run until they pass. The implementation review CANNOT start with broken build or lint.</critical>
+
+ Run the project's build and lint:
+ 1. Identify build and lint commands in `package.json` (scripts `build`, `lint`, `lint:fix`, `type-check`, etc.)
+ 2. Run lint with `--fix` enabled (e.g., `npm run lint -- --fix` or `npx eslint . --fix`) to auto-correct what's possible
+ 3. Run build (e.g., `npm run build` or `npx tsc --noEmit`)
+ 4. If any fail after `--fix`: analyze errors, fix manually, and re-run
+ 5. Repeat until both build AND lint pass without errors
+ 6. Only then proceed to the review
+
  Run `/dw-review-implementation` to verify PRD compliance (Level 2).
  - If gaps found: fix automatically and re-run the review
  - Maximum 3 correction cycles
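The build-and-lint gate added in Step 9 could be sketched as a small POSIX-shell retry loop. This is a hypothetical helper, not part of the package; the npm script names in the comments are the examples the step itself gives and may differ per project:

```shell
#!/bin/sh
# Hypothetical sketch of the Step 9 gate: lint with --fix, then build,
# retrying until both pass or the attempt budget runs out.
run_gate() {
  lint_cmd=$1   # e.g. "npm run lint -- --fix"
  build_cmd=$2  # e.g. "npm run build" or "npx tsc --noEmit"
  max=$3
  n=1
  while [ "$n" -le "$max" ]; do
    if sh -c "$lint_cmd" && sh -c "$build_cmd"; then
      echo "gate passed on attempt $n"
      return 0
    fi
    # In the real workflow, manual fixes happen here before retrying.
    n=$((n + 1))
  done
  echo "gate failed after $max attempts" >&2
  return 1
}

# Demo with stand-in commands; substitute the project's npm scripts.
run_gate "true" "true" 3
```

Step 12 reuses the same sequence after QA fixes, so the same loop would apply there.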
@@ -134,6 +154,13 @@ If QA found bugs:

  ### Step 12: Implementation Review (Post-QA)

+ <critical>BEFORE the post-QA review, run build and lint again with --fix. QA fixes may have introduced new issues.</critical>
+
+ Run the project's build and lint (same sequence as Step 9):
+ 1. Lint with `--fix` enabled
+ 2. Build
+ 3. If any fail: fix and re-run until they pass
+
  Run `/dw-review-implementation` again to confirm QA fixes did not break PRD compliance.
  - If gaps found: fix and re-run
  - Maximum 3 cycles
@@ -273,6 +273,20 @@ Save to `{{PRD_PATH}}/dw-code-review.md`:

  **REJECTED**: Tests failing, RFs not implemented, serious rules violations, security issues, or CRITICAL issues.

+ ## Next Steps by Status
+
+ <critical>The suggested next step MUST match the review status. NEVER suggest /dw-fix-qa after code-review — that command is exclusively for bugs found by /dw-run-qa.</critical>
+
+ - **APPROVED**: Suggest `/dw-commit` followed by `/dw-generate-pr`
+ - **APPROVED WITH CAVEATS**: List the caveats. Suggest fixing the caveats, re-running build + lint with --fix, then re-running `/dw-code-review`
+ - **REJECTED**: List the findings that caused rejection. The correct flow is:
+ 1. Fix the findings listed in the report
+ 2. Run build and lint with `--fix` until they pass
+ 3. Re-run `/dw-code-review`
+ 4. Repeat until APPROVED
+ - Do NOT suggest `/dw-fix-qa` (that is for visual QA bugs)
+ - Do NOT suggest `/dw-run-qa` before resolving code-review findings
+
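The status-to-next-step mapping above amounts to a small dispatch table. A minimal sketch, purely illustrative (the slash commands are the package's own; the function and status spellings here are assumptions, not part of the package):

```shell
#!/bin/sh
# Hypothetical sketch of "Next Steps by Status": each review outcome
# maps to exactly one follow-up, and /dw-fix-qa is never suggested here.
next_step() {
  case $1 in
    APPROVED)
      echo "/dw-commit then /dw-generate-pr" ;;
    APPROVED_WITH_CAVEATS|REJECTED)
      echo "fix findings, run build+lint --fix, re-run /dw-code-review" ;;
    *)
      echo "unknown status" >&2
      return 1 ;;
  esac
}

next_step APPROVED
```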
  **Approval Decision Flow:**
  ```dot
  digraph approval {
@@ -97,7 +97,13 @@ Refer to `.dw/rules/` for project-specific URLs and frameworks.
  - `{{PRD_PATH}}/QA/screenshots/`
  - `{{PRD_PATH}}/QA/logs/`
  - `{{PRD_PATH}}/QA/scripts/`
- - Read `.dw/templates/qa-test-credentials.md` and choose the appropriate user/profile for the scenario
+ <critical>BEFORE executing any test involving login or authentication, search for test credentials in the codebase. Look for (in priority order):
+ 1. `.dw/templates/qa-test-credentials.md`
+ 2. Any file with "credenciais", "credentials", "test-users", "test-accounts", "auth", "login", "usuarios-teste" in the name (recursive glob search)
+ 3. Environment variables in `.env.test`, `.env.local`, `.env.development`
+ 4. Documentation in README or docs/ mentioning test users
+ If NO credentials are found, STOP and ask the user before continuing. Do NOT guess credentials or use fake data.</critical>
+ - Choose the appropriate user/profile for the test scenario
  - Verify the application is running on localhost
  - Use `browser_navigate` from Playwright MCP to access the application
  - Confirm the page loaded correctly with `browser_snapshot`
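The credential-search priority order in the hunk above can be sketched as a shell helper. This is a hypothetical sketch, not shipped code: the file names are the ones the instructions list, and a real project may keep credentials elsewhere (hence the final "stop and ask" branch):

```shell
#!/bin/sh
# Hypothetical sketch of the QA credential search, in priority order.
# Prints the first matching path; returns 1 if nothing is found,
# which is the "STOP and ask the user" case.
find_credentials() {
  root=$1
  # 1. Canonical template location
  if [ -f "$root/.dw/templates/qa-test-credentials.md" ]; then
    echo "$root/.dw/templates/qa-test-credentials.md"
    return 0
  fi
  # 2. Recursive search for credential-like file names
  match=$(find "$root" -type f \( -iname '*credentials*' -o -iname '*credenciais*' \
    -o -iname '*test-users*' -o -iname '*test-accounts*' \) 2>/dev/null | head -n 1)
  if [ -n "$match" ]; then
    echo "$match"
    return 0
  fi
  # 3. Env files that may hold test logins
  for f in .env.test .env.local .env.development; do
    [ -f "$root/$f" ] && { echo "$root/$f"; return 0; }
  done
  return 1
}

# Demo against the current directory; never guess when nothing matches.
find_credentials . || echo "no credentials found: ask the user"
```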
@@ -56,18 +56,23 @@ Evaluate whether the topic requires deep research:

  If executed, use `standard` mode by default. Incorporate the findings into the following steps.

- ### Step 3: Brainstorm
+ ### Step 3: Brainstorm (Interactive)

  Run `/dw-brainstorm` with the accumulated context (intel + research).
  - Generate 3 directions
  - Automatically converge on the most pragmatic option for the project context
  - Do NOT wait for user approval (brainstorm is automatic in autopilot)

- ### Step 4: PRD
+ ### Step 4: PRD (Interactive — 7+ Questions)
+
+ <critical>The PRD MUST include an interactive interview with the user. Ask AT LEAST 7 clarification questions BEFORE writing the PRD. Do NOT answer the questions automatically based on context — the user MUST respond.</critical>

  Run `/dw-create-prd` using the brainstorm findings.
- - Follow all command instructions (clarification questions answered based on accumulated context)
- - Generate the complete PRD in `.dw/spec/prd-[nome]/prd.md`
+ - Follow ALL command instructions, especially the clarification questions section
+ - Ask the user at least 7 questions about: problem, target users, critical features, scope, constraints, design, integration
+ - In each question, present a recommendation grounded in the brainstorm and deep-research findings (if executed). E.g.: "Based on the research, I recommend X because [evidence]. Do you agree or prefer a different direction?"
+ - Wait for the user's responses to each question
+ - Only after receiving all responses, write the complete PRD in `.dw/spec/prd-[nome]/prd.md`

  ### ═══ GATE 1: PRD Approval ═══

@@ -78,11 +83,16 @@ Present to the user:

  **Wait for explicit approval.** If the user requests changes, adjust and re-present.

- ### Step 5: TechSpec
+ ### Step 5: TechSpec (Interactive — 7+ Questions)
+
+ <critical>The TechSpec MUST include an interactive interview with the user. Ask AT LEAST 7 technical clarification questions BEFORE writing the TechSpec. Do NOT answer the questions automatically — the user MUST respond.</critical>

  Run `/dw-create-techspec` from the approved PRD.
- - Follow all command instructions
- - Generate in `.dw/spec/prd-[nome]/techspec.md`
+ - Follow ALL command instructions, especially the clarification questions section
+ - Ask the user at least 7 questions about: preferred architecture, existing vs new libs, testing strategy, integration with existing systems, infrastructure constraints, performance, security
+ - In each question, present a technical recommendation grounded in the brainstorm, deep-research, and approved PRD findings. E.g.: "The research indicated lib X has better performance for this case [source]. Want to use X, or do you have another preference?"
+ - Wait for the user's responses to each question
+ - Only after receiving all responses, generate in `.dw/spec/prd-[nome]/techspec.md`

  ### Step 6: Tasks

@@ -115,6 +125,16 @@ Run `/dw-run-plan` with the PRD path.

  ### Step 9: Implementation Review (Loop)

+ <critical>BEFORE the PRD compliance review, run the project's build and lint. If they fail, fix and re-run until they pass. The implementation review CANNOT start with a broken build or lint.</critical>
+
+ Run the project's build and lint:
+ 1. Identify the build and lint commands in `package.json` (scripts `build`, `lint`, `lint:fix`, `type-check`, etc.)
+ 2. Run lint with `--fix` enabled (e.g., `npm run lint -- --fix` or `npx eslint . --fix`) to auto-correct what's possible
+ 3. Run the build (e.g., `npm run build` or `npx tsc --noEmit`)
+ 4. If either fails after `--fix`: analyze the errors, fix manually, and re-run
+ 5. Repeat until build AND lint pass without errors
+ 6. Only then proceed to the review
+
  Run `/dw-review-implementation` to verify PRD compliance (Level 2).
  - If gaps are found: fix automatically and re-run the review
  - Maximum 3 correction cycles
@@ -134,6 +154,13 @@ If QA found bugs:

  ### Step 12: Implementation Review (Post-QA)

+ <critical>BEFORE the post-QA review, run build and lint again with --fix. QA fixes may have introduced new problems.</critical>
+
+ Run the project's build and lint (same sequence as Step 9):
+ 1. Lint with `--fix` enabled
+ 2. Build
+ 3. If either fails: fix and re-run until they pass
+
  Run `/dw-review-implementation` again to confirm the QA fixes did not break PRD compliance.
  - If gaps are found: fix and re-run
  - Maximum 3 cycles
@@ -253,6 +253,20 @@ Save to `{{PRD_PATH}}/dw-code-review.md`:

  **REJECTED**: Tests failing, RFs not implemented, serious rules violations, security issues, or CRITICAL issues.

+ ## Next Steps by Status
+
+ <critical>The suggested next step MUST match the review status. NEVER suggest /dw-fix-qa after code-review — that command is exclusively for bugs found by /dw-run-qa.</critical>
+
+ - **APPROVED**: Suggest `/dw-commit` followed by `/dw-generate-pr`
+ - **APPROVED WITH CAVEATS**: List the caveats. Suggest fixing the caveats, re-running build + lint with --fix, and then re-running `/dw-code-review`
+ - **REJECTED**: List the findings that caused the rejection. The correct flow is:
+ 1. Fix the findings listed in the report
+ 2. Run build and lint with `--fix` until they pass
+ 3. Re-run `/dw-code-review`
+ 4. Repeat until APPROVED
+ - Do NOT suggest `/dw-fix-qa` (that one is for visual QA bugs)
+ - Do NOT suggest `/dw-run-qa` before resolving the code-review findings
+
  **Approval Decision Flow:**
  ```dot
  digraph approval {
@@ -97,7 +97,13 @@ Refer to `.dw/rules/` for project-specific URLs and frameworks.
  - `{{PRD_PATH}}/QA/screenshots/`
  - `{{PRD_PATH}}/QA/logs/`
  - `{{PRD_PATH}}/QA/scripts/`
- - Read `.dw/templates/qa-test-credentials.md` and choose the appropriate user/profile for the scenario
+ <critical>BEFORE running any test that involves login or authentication, search the codebase for test credentials. Look for (in priority order):
+ 1. `.dw/templates/qa-test-credentials.md`
+ 2. Any file with "credenciais", "credentials", "test-users", "test-accounts", "auth", "login", "usuarios-teste" in the name (recursive glob search)
+ 3. Environment variables in `.env.test`, `.env.local`, `.env.development`
+ 4. Documentation in the README or docs/ that mentions test users
+ If NO credentials are found, STOP and ask the user before continuing. Do NOT try to guess credentials or use fake data.</critical>
+ - Choose the appropriate user/profile for the test scenario
  - Verify the application is running on localhost
  - Use `browser_navigate` from Playwright MCP to access the application
  - Confirm the page loaded correctly with `browser_snapshot`