@trygentic/agentloop 0.15.1-alpha.11 → 0.16.0-alpha.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@trygentic/agentloop",
-  "version": "0.15.1-alpha.11",
+  "version": "0.16.0-alpha.11",
   "description": "AI-powered autonomous coding agent",
   "bin": {
     "agentloop": "./bin/agentloop"
@@ -9,8 +9,8 @@
     "postinstall": "node ./scripts/postinstall.mjs"
   },
   "optionalDependencies": {
-    "@trygentic/agentloop-darwin-arm64": "0.15.1-alpha.11",
-    "@trygentic/agentloop-linux-x64": "0.15.1-alpha.11"
+    "@trygentic/agentloop-darwin-arm64": "0.16.0-alpha.11",
+    "@trygentic/agentloop-linux-x64": "0.16.0-alpha.11"
   },
   "engines": {
     "node": ">=18.0.0"
@@ -509,6 +509,21 @@
     "call": "GitPush",
     "comment": "Push the branch to remote so QA and CI can access the changes"
   },
+  {
+    "type": "llm-action",
+    "name": "PrepareCompletionComment",
+    "prompt": "Generate a completion comment for this task. The comment MUST include a [TASK_FILES] block listing all files created or modified.\n\nTask: {{taskDescription}}\nImplementation: {{implementation}}\nPre-Commit Validation: {{preCommitValidation}}\nProject Info: {{projectInfo}}\n\nFormat the comment as:\n1. A brief summary of what was implemented\n2. A [TASK_FILES] block with one file path per line:\n[TASK_FILES]\npath/to/file1.ts\npath/to/file2.ts\n[/TASK_FILES]\n\nAlso include a [TEST_SETUP] block if tests were run or the project has non-standard test configuration:\n[TEST_SETUP]\ntestDirectory: <relative path to the directory where tests should be run, e.g., \"frontend\" — omit if root>\ntestCommand: <the exact test command that works, e.g., \"npx jest --transform='{}' src/__tests__/\">\nprojectType: <expo, node, bun, etc.>\npackageManager: <npm, yarn, pnpm, bun>\n[/TEST_SETUP]\n\nThis helps QA run the correct test command from the correct directory.\n\nExtract file paths from the implementation changes array. Include ALL files that were created, modified, or deleted.",
+    "contextKeys": ["taskDescription", "implementation", "preCommitValidation", "projectInfo"],
+    "outputSchema": {
+      "type": "object",
+      "properties": {
+        "comment": { "type": "string", "description": "The full completion comment including [TASK_FILES] and [TEST_SETUP] blocks" }
+      },
+      "required": ["comment"]
+    },
+    "outputKey": "completionComment",
+    "temperature": 0.3
+  },
   {
     "type": "action",
     "call": "AddCompletionComment"
@@ -586,6 +601,7 @@
   "gitRepoInitialized": false,
   "gitCommitHash": null,
   "projectInfo": null,
+  "completionComment": null,
   "preCommitValidation": null,
   "validationFailed": false,
   "maxRetries": 3,
@@ -235,6 +235,22 @@ Every implementation MUST include tests. This is non-negotiable.
 - Place tests near the code they test (e.g., `src/utils/__tests__/helper.test.ts`)
 - Match existing test file naming: if the project uses `.test.ts`, use that; if it uses `.spec.ts`, use that
 
+## Expo / React Native Projects
+
+When working on Expo or React Native projects:
+
+**Testing conventions:**
+- Tests typically use `jest-expo` preset with Jest
+- Run tests with `npx jest` from the project directory (NOT `npm test` unless a valid test script exists)
+- Follow the existing test patterns in the project — if tests use `--transform='{}'` for pure logic tests, maintain that pattern
+- Do NOT introduce `@testing-library/react-native` rendering tests if existing tests use pure logic patterns
+- Common test file locations: `src/__tests__/`, `__tests__/`, `tests/`
+
+**Monorepo awareness:**
+- If the project has subdirectories like `frontend/`, `backend/`, `web/`, ensure you run tests from the correct subdirectory
+- Check the root `package.json` test script — it may delegate to a subdirectory (e.g., `cd frontend && npx jest`)
+- When writing completion comments, always specify the test directory in the [TEST_SETUP] block
+
 ## Root Cause Analysis
 
 When fixing bugs or addressing QA feedback, understand the ROOT CAUSE before implementing.
@@ -21,10 +21,10 @@
   {
     "type": "llm-action",
     "name": "PlanAndCreateTasks",
-    "prompt": "You are a product manager agent. Your job is to break down a high-level feature request into actionable AGILE tasks with proper DAG dependencies.\n\n## Feature Request\nTitle: {{taskTitle}}\nDescription: {{taskDescription}}\n\n## Your Workflow\n\n### Step 0 - Create or Reuse Subproject (MANDATORY FIRST ACTION)\nBefore creating ANY tasks, check if a subproject already exists for this work:\n1. Call `mcp__agentloop__list_subprojects` to check for existing ones\n2. If the delegation message included a subprojectId, reuse that subproject\n3. If no relevant subproject exists, call `mcp__agentloop__create_subproject` with a descriptive name\n4. Save the subprojectId for ALL subsequent create_task calls\n\n### Step 1 - Check Existing Tasks\nCall `mcp__agentloop__list_tasks` with `limit: 100, status: \"all\"` to see what already exists.\nIf tasks already cover this work, report that instead of creating duplicates.\n\n### Step 2 - Analyze Complexity\nDetermine task count based on ACTUAL complexity:\n- Simple (1-5 tasks): \"add hello world endpoint\" -> 1-2 tasks\n- Medium (5-15 tasks): \"add user authentication\" -> 8-12 tasks\n- Large (20-30 tasks): \"build payment system\" -> 25-30 tasks\n\nDO NOT inflate task counts artificially.\n\n### Step 3 - Create Tasks\nFor each task, call `mcp__agentloop__create_task` with:\n- title, description, priority, tags, sequence, subprojectId\n- Record all returned task IDs\n\n### Step 4 - Build DAG Dependencies (MANDATORY)\nCall `mcp__agentloop__add_task_dependency` for EACH dependency relationship.\nMaximize parallelism - engineers work in isolated worktrees.\n\n### Step 5 - Validate\nCall `mcp__agentloop__validate_dag` then `mcp__agentloop__visualize_dag`.\n\n## Critical Rules\n- You are a PLANNER, not an implementer. NEVER write code or create files.\n- ALWAYS create tasks using mcp__agentloop__create_task\n- ALWAYS build DAG dependencies using mcp__agentloop__add_task_dependency\n- ALWAYS include subprojectId in every create_task call\n- Engineers work in project root (.) - NEVER include commands that create subdirectories\n- Explicitly specify tech stack in task descriptions\n\nProvide a summary when done.",
+    "prompt": "You are a product manager agent. Your job is to break down a high-level feature request into actionable AGILE tasks with proper DAG dependencies.\n\n## Feature Request\nTitle: {{taskTitle}}\nDescription: {{taskDescription}}\n\n## CRITICAL: Maximize Parallel Tool Calls\nYou MUST minimize the number of LLM turns by batching independent tool calls into the SAME response.\nEvery extra turn adds ~5-10 seconds of latency. Batch aggressively.\n\n## Your Workflow\n\n### Turn 1 — Gather Context (parallel reads)\nCall BOTH of these tools in a SINGLE response:\n- `mcp__agentloop__list_subprojects` — check for existing subprojects\n- `mcp__agentloop__list_tasks` with `limit: 100, status: \"all\"` — check existing tasks\n\n### Turn 2 — Create Subproject (if needed)\nIf the delegation message included a subprojectId, reuse it. Otherwise call `mcp__agentloop__create_subproject`.\nIf a subproject already exists for this work, skip creation.\nSave the subprojectId for ALL subsequent create_task calls.\nIf tasks already cover this work, report that instead of creating duplicates and stop.\n\n### Turn 3 — Analyze & Create ALL Tasks (SINGLE response)\nDetermine task count based on ACTUAL complexity:\n- Simple (1-5 tasks): \"add logout button\" -> 1-2 tasks\n- Medium (5-15 tasks): \"add user authentication\" -> 8-12 tasks\n- Large (20-30 tasks): \"build payment system\" -> 25-30 tasks\n\nDO NOT inflate task counts artificially.\n\n**IMPORTANT: Call ALL `mcp__agentloop__create_task` tools in a SINGLE response as parallel tool_use blocks.**\nDo NOT create tasks one at a time across multiple turns. Include all of them in one message.\nEach call needs: title, description, priority, tags, sequence, subprojectId.\nRecord all returned task IDs from the results.\n\n### Turn 4 — Add ALL Dependencies (SINGLE response)\n**IMPORTANT: Call ALL `mcp__agentloop__add_task_dependency` tools in a SINGLE response as parallel tool_use blocks.**\nDo NOT add dependencies one at a time across multiple turns.\nUse the task IDs returned from Turn 3. Maximize parallelism — engineers work in isolated worktrees.\n\n### Turn 5 — Validate (parallel reads)\nCall BOTH in a SINGLE response:\n- `mcp__agentloop__validate_dag`\n- `mcp__agentloop__visualize_dag`\n\n## Critical Rules\n- You are a PLANNER, not an implementer. NEVER write code or create files.\n- ALWAYS create tasks using mcp__agentloop__create_task\n- ALWAYS build DAG dependencies using mcp__agentloop__add_task_dependency\n- ALWAYS include subprojectId in every create_task call\n- Engineers work in project root (.) - NEVER include commands that create subdirectories\n- Explicitly specify tech stack in task descriptions\n- NEVER make sequential tool calls when they can be parallel. This is a performance-critical agent.\n\nProvide a summary when done.",
     "contextKeys": ["taskTitle", "taskDescription", "taskComments"],
     "subagent": "product-manager",
-    "maxTurns": 80,
+    "maxTurns": 25,
     "outputSchema": {
       "type": "object",
       "properties": {
@@ -49,7 +49,7 @@ mcp:
 Use the `subprojectId` parameter to assign every task to the active subproject.
 
 Sizing guide:
-- Simple (1-5 tasks): "add hello world endpoint" → 1-2 tasks
+- Simple (1-5 tasks): "add logout button" → 1-2 tasks
 - Medium (5-15 tasks): "add user auth" → 8-12 tasks
 - Large (20-30 tasks): "build payment system" → 25-30 tasks
 
@@ -124,7 +124,7 @@ If you find yourself about to write code or run commands, STOP and create a task
 
 | Complexity | Tasks | Example |
 |------------|-------|---------|
-| Simple | 1-5 | "add hello world endpoint" → 1-2 tasks |
+| Simple | 1-5 | "add logout button" → 1-2 tasks |
 | Medium | 5-15 | "add user authentication" → 8-12 tasks |
 | Large | 20-30 | "build payment system" → 25-30 tasks |
 
@@ -137,12 +137,12 @@
   {
     "type": "llm-action",
     "name": "DetermineTestCommand",
-    "prompt": "Determine the correct test command for this project.\n\nProject Info: {{projectInfo}}\n\nCRITICAL: Check the runtime/package manager FIRST before choosing a test command.\n\nRuntime detection priority (check in this order):\n1. If projectInfo.primaryType is 'bun' OR detectedFiles include 'bun.lockb', 'bun.lock', or 'bunfig.toml' OR packageManager is 'bun': this is a BUN project. Use 'bun test'. Set projectType to 'bun'.\n2. If detectedFiles include 'yarn.lock': use 'yarn test'. Set projectType to 'node-yarn'.\n3. If detectedFiles include 'pnpm-lock.yaml': use 'pnpm test'. Set projectType to 'node-pnpm'.\n4. If only 'package.json' is detected with no specific lock file: use 'npm test'. Set projectType to 'node'.\n\nOther project types:\n- Rust (Cargo.toml): cargo test\n- Python (pyproject.toml, setup.py): pytest\n- Go (go.mod): go test ./...\n\nDo NOT default to 'npm test' when Bun indicators are present. A project with bun.lock or bun.lockb is a Bun project, not a Node.js project.",
-    "contextKeys": ["projectInfo", "testFilesFound"],
+    "prompt": "Determine the correct test command for this project.\n\nProject Info: {{projectInfo}}\n\nEngineer Test Setup (from engineer's completion comment): {{engineerTestSetup}}\n\nIMPORTANT: If engineerTestSetup is provided by the engineer, PREFER using their testCommand and testDirectory. The engineer already verified these work. Only override if you detect an obvious error.\n\nIf engineerTestSetup.testDirectory is set (e.g., \"frontend\"), the test command must be run from that subdirectory. Prefix with: cd <testDirectory> && <testCommand>\n\nCRITICAL: Check the runtime/package manager FIRST before choosing a test command.\n\nRuntime detection priority (check in this order):\n1. If projectInfo.primaryType is 'bun' OR detectedFiles include 'bun.lockb', 'bun.lock', or 'bunfig.toml' OR packageManager is 'bun': this is a BUN project. Use 'bun test'. Set projectType to 'bun'.\n2. If detectedFiles include 'yarn.lock': use 'yarn test'. Set projectType to 'node-yarn'.\n3. If detectedFiles include 'pnpm-lock.yaml': use 'pnpm test'. Set projectType to 'node-pnpm'.\n4. If only 'package.json' is detected with no specific lock file: use 'npm test'. Set projectType to 'node'.\n5. If projectInfo.primaryType is 'expo' OR detectedFiles include 'app.json' with 'expo' key, 'app.config.js', 'app.config.ts', OR package.json has 'expo' dependency: this is an EXPO/React Native project. Set projectType to 'expo'.\n   - If package.json has a 'test' script that is valid (not a placeholder), use that (e.g., 'npm test' or 'jest')\n   - If jest-expo is in dependencies/devDependencies, use 'npx jest'\n   - If @testing-library/react-native is present, use 'npx jest'\n   - If no test runner is found, use 'npx jest --passWithNoTests'\n\nOther project types:\n- Rust (Cargo.toml): cargo test\n- Python (pyproject.toml, setup.py): pytest\n- Go (go.mod): go test ./...\n\nDo NOT default to 'npm test' when Bun indicators are present. A project with bun.lock or bun.lockb is a Bun project, not a Node.js project.\nDo NOT default to 'npm test' for Expo projects that have no test script. Use 'npx jest' instead.",
+    "contextKeys": ["projectInfo", "testFilesFound", "engineerTestSetup"],
     "outputSchema": {
       "type": "object",
       "properties": {
-        "projectType": { "type": "string", "enum": ["bun", "node", "node-yarn", "node-pnpm", "rust", "python", "go", "java-maven", "java-gradle", "ruby", "php", "elixir", "other"] },
+        "projectType": { "type": "string", "enum": ["bun", "node", "node-yarn", "node-pnpm", "expo", "rust", "python", "go", "java-maven", "java-gradle", "ruby", "php", "elixir", "other"] },
         "testCommand": { "type": "string" },
         "reasoning": { "type": "string" }
       },
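The detection priority the prompt spells out is effectively a first-match-wins cascade. Sketched as plain code (helper and parameter names assumed for illustration; the real decision is made by the LLM, not by code):

```typescript
// Assumed helper mirroring the prompt's detection cascade; not agentloop code.
// Note: the Expo check is listed as rule 5 in the prompt, but it must run
// before the plain-package.json fallback, or the catch-all "npm test"
// branch would shadow it.
function detectTestCommand(
  detectedFiles: string[],
  primaryType?: string,
): { projectType: string; testCommand: string } {
  const has = (f: string) => detectedFiles.includes(f);
  if (primaryType === "bun" || has("bun.lockb") || has("bun.lock") || has("bunfig.toml"))
    return { projectType: "bun", testCommand: "bun test" };
  if (has("yarn.lock")) return { projectType: "node-yarn", testCommand: "yarn test" };
  if (has("pnpm-lock.yaml")) return { projectType: "node-pnpm", testCommand: "pnpm test" };
  if (primaryType === "expo" || has("app.config.js") || has("app.config.ts"))
    return { projectType: "expo", testCommand: "npx jest" };
  return { projectType: "node", testCommand: "npm test" };
}
```

When `engineerTestSetup.testDirectory` is set, the prompt then asks for a `cd <testDirectory> && <testCommand>` prefix on whatever command wins, e.g. `cd frontend && npx jest`.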
@@ -180,12 +180,12 @@
   {
     "type": "llm-action",
     "name": "AnalyzeTestResults",
-    "prompt": "Analyze the test results in the context of what files were changed.\n\nTest Output: {{testResults}}\nTest Command: {{testCommandInfo}}\nGit Diff (files changed by engineer): {{gitDiff}}\nTask Files: {{taskFiles}}\nChange Analysis: {{changeAnalysis}}\n\nYour job is to determine if the engineer's changes CAUSED any test failures. You MUST distinguish between:\n\n1. **Task-related failures**: Tests that fail because of code the engineer changed or added. These are in files listed in the git diff or task files, or test files that directly import/test those changed modules. These are legitimate failures.\n\n2. **Pre-existing/unrelated failures**: Tests that fail in modules the engineer did NOT touch. These failures existed BEFORE the engineer's changes and are NOT the engineer's responsibility. Do NOT count these as failures.\n\n3. **Environment issues**: Test runner not found (exit code 127), dependencies not installed, 'command not found' errors, missing optional dependencies (@rollup/rollup-*, @esbuild/*), module resolution errors. These are QA environment issues, NOT code issues.\n\nIMPORTANT: If ONLY environment issues occurred (no tests actually ran), set 'passed' to false and classify failures as 'environment'.\n\nSet 'passed' to true ONLY if:\n- Tests actually executed AND\n- There are NO task-related failures\n\nSet 'passed' to false if:\n- Environment issues prevented tests from running, OR\n- There are task-related failures\n\nFor each failure, classify it as 'task-related', 'pre-existing', or 'environment' in the classification field.",
-    "contextKeys": ["testResults", "testCommandInfo", "changeAnalysis", "gitDiff", "taskFiles"],
+    "prompt": "Analyze the test results in the context of what files were changed.\n\nTest Output: {{testResults}}\nTest Command: {{testCommandInfo}}\nGit Diff (files changed by engineer): {{gitDiff}}\nTask Files: {{taskFiles}}\nChange Analysis: {{changeAnalysis}}\n\nYour job is to determine if the engineer's changes CAUSED any test failures. You MUST distinguish between:\n\n1. **Task-related failures**: Tests that fail because of code the engineer changed or added. These are in files listed in the git diff or task files, or test files that directly import/test those changed modules. These are legitimate failures.\n\n2. **Pre-existing/unrelated failures**: Tests that fail in modules the engineer did NOT touch. These failures existed BEFORE the engineer's changes and are NOT the engineer's responsibility. Do NOT count these as failures.\n\n3. **Environment issues**: Test runner not found (exit code 127), dependencies not installed, 'command not found' errors, missing optional dependencies (@rollup/rollup-*, @esbuild/*), module resolution errors. These are QA environment issues, NOT code issues.\n\nIMPORTANT: If ONLY environment issues occurred and there are NO indications of task-related failures (taskRelatedFailures is 0 or null), set 'passed' to true — the engineer's code is not at fault for environment problems. Classify failures as 'environment'.\n\nSet 'passed' to true if:\n- Tests actually executed AND there are NO task-related failures, OR\n- Tests did NOT execute due to environment issues AND there are NO task-related failures detected\n\nSet 'passed' to false if:\n- There are task-related failures (regardless of whether other environment issues exist)\n\nFor each failure, classify it as 'task-related', 'pre-existing', or 'environment' in the classification field.",
+    "contextKeys": ["testResults", "testCommandInfo", "changeAnalysis", "gitDiff", "taskFiles", "engineerTestSetup"],
     "outputSchema": {
       "type": "object",
       "properties": {
-        "passed": { "type": "boolean", "description": "true ONLY if tests actually ran and no task-related failures exist. false if env issues prevented execution." },
+        "passed": { "type": "boolean", "description": "true if no task-related failures exist (even if tests did not run due to environment issues). false only if there are task-related failures." },
         "testsActuallyRan": { "type": "boolean", "description": "true if tests actually executed, false if blocked by environment issues" },
         "totalTests": { "type": ["number", "null"] },
         "passedTests": { "type": ["number", "null"] },
@@ -296,10 +296,10 @@
   {
     "type": "llm-condition",
     "name": "TestsPassed",
-    "prompt": "Based on the analyzed test results, did the engineer's changes pass QA?\n\nTest Results: {{analyzedTestResults}}\nGit Diff: {{gitDiff}}\nTask Files: {{taskFiles}}\nEnvironment Retry Count: {{envRetryCount}}\n\nReturn true if the engineer's changes did NOT introduce any new test failures AND tests actually executed.\n\nCRITICAL RULES:\n1. If 'passed' is true in analyzedTestResults AND tests actually ran, return true.\n2. If 'taskRelatedFailures' is 0 or null AND tests ran, return true — even if there are pre-existing failures.\n3. Pre-existing failures (tests failing in code the engineer did NOT touch) do NOT count as the engineer's fault. Return true.\n4. ONLY return false if:\n   - There are failures directly caused by the engineer's changes (classification: 'task-related'), OR\n   - Tests did NOT actually execute (environment issues prevented them from running even after retries)\n5. If 'testsActuallyRan' is false or all failures are 'environment' type, return false - we need to see actual test results.\n6. Environment issues that could not be fixed after retries should result in false (tests didn't run).",
+    "prompt": "Based on the analyzed test results, did the engineer's changes pass QA?\n\nTest Results: {{analyzedTestResults}}\nGit Diff: {{gitDiff}}\nTask Files: {{taskFiles}}\nEnvironment Retry Count: {{envRetryCount}}\n\nReturn true if the engineer's changes did NOT introduce any new test failures. Environment issues alone should NOT cause rejection.\n\nCRITICAL RULES:\n1. If 'passed' is true in analyzedTestResults, return true.\n2. If 'taskRelatedFailures' is 0 or null, return true — even if there are pre-existing failures or environment issues. The engineer's code is not at fault.\n3. Pre-existing failures (tests failing in code the engineer did NOT touch) do NOT count as the engineer's fault. Return true.\n4. ONLY return false if there are actual task-related failures (classification: 'task-related') — failures directly caused by the engineer's changes.\n5. If 'testsActuallyRan' is false AND 'taskRelatedFailures' is 0 or null, return true — environment issues that prevented tests from running are NOT the engineer's fault. The task should be approved.\n6. If 'testsActuallyRan' is false AND there ARE task-related failures detected, return false.",
     "contextKeys": ["analyzedTestResults", "gitDiff", "taskFiles", "envRetryCount"],
     "confidenceThreshold": 0.8,
-    "fallbackValue": false
+    "fallbackValue": true
   },
   {
     "type": "llm-action",
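With `fallbackValue` flipped to `true`, the condition now defaults to approval when the model's confidence is below the threshold. Read together, the six rules in the new prompt collapse to a single predicate on the analyzed results (hypothetical helper for illustration; the real gate is an LLM condition, not code):

```typescript
interface AnalyzedResults {
  taskRelatedFailures: number | null;
  testsActuallyRan: boolean;
}

// Approximation of the updated TestsPassed rules: only task-related
// failures reject. Environment issues and pre-existing failures pass,
// so testsActuallyRan no longer affects the outcome on its own.
function testsPassed(r: AnalyzedResults): boolean {
  return (r.taskRelatedFailures ?? 0) === 0;
}
```

This is a notable policy shift: under the previous prompt, tests that never executed meant automatic rejection; now they mean approval unless task-related failures were independently detected.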
@@ -650,6 +650,7 @@
   "environmentFixAttempted": false,
   "environmentFixResults": null,
   "projectInfo": null,
+  "engineerTestSetup": null,
   "testCommandInfo": null,
   "testExitCode": null,
   "requestedStatus": null,
@@ -111,6 +111,45 @@ If you encounter "command not found" (exit code 127) or missing dependency error
 - Module resolution errors for packages the engineer did not change
 - Flaky tests that fail intermittently and are unrelated to the changes
 
+## Expo / React Native Projects
+
+Expo and React Native projects require special handling:
+
+**How to identify them:**
+- `app.json` with an `"expo"` key at root level
+- `app.config.js` or `app.config.ts` in project root
+- `metro.config.js` (React Native bundler config) in project root
+- `expo` listed as a dependency in `package.json`
+- `jest-expo` or `@testing-library/react-native` in devDependencies
+
+**Test commands:**
+- The typical test command is `npx jest` (NOT `npm test` unless a valid test script exists)
+- If `jest-expo` is in devDependencies, use `npx jest` -- the preset handles React Native transforms
+- If no test runner or test files are found, use `npx jest --passWithNoTests`
+- Some projects use `npx expo test` but `npx jest` is more reliable
+
+**Environment notes:**
+- Missing `expo` CLI globally is an environment issue, NOT a code issue -- do not reject for this
+- Expo projects use `npm install` or `yarn install` for dependencies (same as Node.js projects)
+- The `jest-expo` preset must be installed as a devDependency for tests to run
+
+## Monorepo / Multi-Directory Projects
+
+Many projects have subdirectory structures where the main application and tests live in a subdirectory:
+
+**Common patterns:**
+- `frontend/` — React/React Native/Expo app with its own `package.json`
+- `backend/` — API server (Django, Express, etc.)
+- `packages/*/` — npm/yarn workspaces
+- `apps/*/` — monorepo app directories
+
+**How to handle:**
+- Check the root `package.json` test script — it may delegate: `cd frontend && npx jest`
+- If the engineer provided a `[TEST_SETUP]` block in their completion comment, USE IT — it contains the verified test directory and command
+- If no test setup was provided, check immediate subdirectories for `package.json` with test scripts
+- Always run tests from the correct subdirectory — DO NOT assume tests run from root
+- `node_modules` may be in the subdirectory, not at root — check both locations
+
 ## Testing Approach
 
 1. **Identify the project's test framework** (jest, vitest, pytest, go test, cargo test, etc.)
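The `[TEST_SETUP]` block the QA instructions tell the agent to honor is a simple `key: value` list. A hypothetical consumer (not part of the published package) could parse it and build the directory-aware command like this:

```typescript
// Hypothetical parser for the [TEST_SETUP] block format described above;
// not agentloop's actual implementation.
function parseTestSetup(comment: string): Record<string, string> {
  const m = comment.match(/\[TEST_SETUP\]\n([\s\S]*?)\n?\[\/TEST_SETUP\]/);
  const setup: Record<string, string> = {};
  if (!m) return setup;
  for (const line of m[1].split("\n")) {
    // Split on the first colon only, so values may contain colons.
    const idx = line.indexOf(":");
    if (idx > 0) setup[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return setup;
}

// Prefix with `cd <dir> &&` only when a testDirectory is given,
// matching the monorepo guidance above.
function qaCommand(setup: Record<string, string>, fallback = "npm test"): string {
  const cmd = setup.testCommand ?? fallback;
  return setup.testDirectory ? `cd ${setup.testDirectory} && ${cmd}` : cmd;
}
```

Falling back to runtime detection when no block is present keeps QA working for tasks completed before this release.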