@vfarcic/dot-ai 0.35.0 → 0.36.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +61 -72
- package/dist/core/doc-discovery.d.ts +38 -0
- package/dist/core/doc-discovery.d.ts.map +1 -0
- package/dist/core/doc-discovery.js +231 -0
- package/dist/core/doc-testing-session.d.ts +109 -0
- package/dist/core/doc-testing-session.d.ts.map +1 -0
- package/dist/core/doc-testing-session.js +747 -0
- package/dist/core/doc-testing-types.d.ts +125 -0
- package/dist/core/doc-testing-types.d.ts.map +1 -0
- package/dist/core/doc-testing-types.js +53 -0
- package/dist/core/error-handling.d.ts +14 -14
- package/dist/core/error-handling.d.ts.map +1 -1
- package/dist/core/error-handling.js +43 -31
- package/dist/core/kubernetes-utils.d.ts +0 -3
- package/dist/core/kubernetes-utils.d.ts.map +1 -1
- package/dist/core/kubernetes-utils.js +40 -8
- package/dist/core/schema.d.ts.map +1 -1
- package/dist/core/schema.js +12 -6
- package/dist/interfaces/cli.d.ts +1 -0
- package/dist/interfaces/cli.d.ts.map +1 -1
- package/dist/interfaces/cli.js +76 -1
- package/dist/interfaces/mcp.d.ts.map +1 -1
- package/dist/interfaces/mcp.js +10 -2
- package/dist/tools/test-docs.d.ts +21 -0
- package/dist/tools/test-docs.d.ts.map +1 -0
- package/dist/tools/test-docs.js +349 -0
- package/package.json +4 -1
- package/prompts/doc-testing-done.md +51 -0
- package/prompts/doc-testing-fix.md +120 -0
- package/prompts/doc-testing-scan.md +140 -0
- package/prompts/doc-testing-test-section.md +239 -0

@@ -0,0 +1,140 @@
# Documentation Testing - Scan Phase

You are analyzing documentation to identify all content that can be validated through testing. Your goal is to find every section containing factual claims, executable instructions, or verifiable information.

## File to Analyze

**File**: {filePath}
**Session**: {sessionId}

## Core Testing Philosophy

**Most technical documentation is testable** through two validation approaches:

1. **Functional Testing**: Execute instructions and verify they work
2. **Factual Verification**: Compare claims against actual system state

## Comprehensive Content Discovery

### 1. Executable & Interactive Content
- **Commands & Scripts**: Shell commands, CLI tools, code snippets, scripts
- **Workflows & Procedures**: Step-by-step instructions, installation guides, setup procedures
- **API & Network Operations**: REST calls, database queries, connectivity tests
- **File & System Operations**: File creation, directory structures, permission changes
- **Configuration Examples**: Config files, environment variables, system settings

### 2. Factual Claims & System State
- **Architecture Descriptions**: System components, interfaces, data flows
- **Implementation Status**: What's implemented vs planned, feature availability
- **File Structure Claims**: File/directory existence, code organization, module descriptions
- **Component Descriptions**: What each part does, how components interact
- **Capability Claims**: Supported features, available commands, system abilities
- **Version & Compatibility Info**: Software versions, platform support, dependencies

### 3. References & Links
- **External URLs**: Web links, API endpoints, documentation references
- **Internal References**: File paths, code references, documentation cross-links
- **Resource References**: Images, downloads, repositories, configuration files

### 4. Examples & Demonstrations
- **Code Examples**: Function usage, API calls, configuration samples
- **Sample Outputs**: Expected results, error messages, status displays
- **Use Case Scenarios**: Workflow examples, integration patterns

## Content Classification Strategy

### What TO Include (Testable Sections)
- **Any factual claim** that can be verified against system state
- **Any instruction** that can be executed or followed
- **Any reference** that can be checked for existence or accessibility
- **Any example** that can be validated for correctness
- **Any workflow** that can be tested end-to-end
- **Any status claim** that can be fact-checked (implemented vs planned)
- **Any architectural description** that can be compared to actual code

### What NOT to Include (Non-Testable Sections)
- **Pure marketing copy** with no factual claims
- **Abstract theory** with no concrete implementation details
- **General philosophy** without specific claims
- **Legal text** (licenses, terms, copyright)
- **Pure acknowledgments** without technical content
- **Speculative future plans** with no current implementation claims

### Examples of Testable vs Non-Testable Content

#### ✅ TESTABLE:
- "The CLI has a `recommend` command" → Can verify command exists
- "Files are stored in `src/core/discovery.ts`" → Can check file exists
- "The system supports Kubernetes CRDs" → Can test CRD discovery
- "Run `npm install` to install dependencies" → Can execute command
- "The API returns JSON format" → Can verify API response format

#### ❌ NON-TESTABLE:
- "This tool helps developers be more productive" → Subjective claim
- "Kubernetes is a container orchestration platform" → General background info
- "We believe in developer-first experiences" → Philosophy statement
- "Thanks to all contributors" → Acknowledgment
- "The future of DevOps is bright" → Speculative statement

## Document Structure Analysis

### Section Identification Process

1. **Find structural markers**: Headers (##, ###, ####), horizontal rules, clear topic boundaries
2. **Identify section purposes**: Installation, Configuration, Usage, Troubleshooting, Examples, etc.
3. **Map content types**: What kinds of testable content exist in each section
4. **Trace dependencies**: Which sections must be completed before others can be tested
5. **Assess completeness**: Are there gaps or missing steps within sections?

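To make step 1 concrete, here is a minimal TypeScript sketch of heading extraction. It is illustrative only - the function name and regex are assumptions, not the package's actual `doc-discovery` code.

```typescript
// Hypothetical sketch of step 1: pulling section headings out of a markdown
// file so they can be classified. Not the package's real implementation.
import { readFileSync } from "fs";

function extractHeadings(filePath: string): string[] {
  const markdown = readFileSync(filePath, "utf-8");
  const headings: string[] = [];
  for (const line of markdown.split("\n")) {
    // Match the ATX headings (##, ###, ####) used as structural markers.
    const match = /^(#{2,4})\s+(.*)$/.exec(line);
    if (match) headings.push(match[2].trim());
  }
  return headings;
}

// Example: extractHeadings("README.md") → ["Prerequisites", "Installation", ...]
```
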
### Per-Section Analysis

For each identified section, determine:
- **Primary purpose**: What is this section trying to help users accomplish?
- **Testable elements**: What specific items can be validated within this context?
- **Prerequisites**: What must be done first for this section to work?
- **Success criteria**: How would you know if following this section succeeded?
- **Environmental context**: What platform, tools, or setup does this assume?

### Universal Validation Strategy
- **Functional validation**: Do the instructions work as written?
- **Reference validation**: Do links, files, and resources exist, and are they accessible?
- **Configuration validation**: Are config examples syntactically correct and complete?
- **Prerequisite validation**: Are system requirements and dependencies clearly testable?
- **Outcome validation**: Do procedures achieve their stated goals?

## Output Requirements

Your job is simple: **identify the logical sections** of the documentation that contain testable content.

### What to Look For:
- Major headings that represent distinct topics or workflows
- Sections that contain instructions, commands, examples, or references
- Skip purely descriptive sections (marketing copy, background info, acknowledgments)

### What NOT to Analyze:
- Don't inventory specific testable items (that's done later per-section)
- Don't worry about line numbers (they change when docs are edited)
- Don't analyze dependencies (we test sections top-to-bottom in document order)

## Required Output Format

Return a JSON object with a `sections` array listing the section titles that should be tested:

```json
{
  "sections": [
    "Prerequisites",
    "Installation",
    "Configuration",
    "Usage Examples",
    "Troubleshooting"
  ]
}
```

### Guidelines:
- Use the **actual section titles** from the document (or close variations)
- List them in **document order** (top-to-bottom)
- Include only sections that have **actionable/testable content**
- Keep titles **concise but descriptive**
- Aim for **3-8 sections** for most documents

## Instructions

Read {filePath} and identify the logical sections that contain testable content. Return only the JSON object described above - nothing more.

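As an editorial aside, a consumer of this prompt could check a reply against the expected shape with a sketch like the following. The `sections` field comes from the format above; the helper itself is hypothetical, not code from this package.

```typescript
// Hypothetical validation of the scan-phase reply shape.
interface ScanResult {
  sections: string[];
}

function parseScanResult(reply: string): ScanResult {
  const parsed = JSON.parse(reply);
  if (!Array.isArray(parsed.sections) ||
      !parsed.sections.every((s: unknown) => typeof s === "string")) {
    throw new Error("Reply must be a JSON object with a string array 'sections'");
  }
  return parsed as ScanResult;
}
```
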
@@ -0,0 +1,239 @@
# Documentation Testing - Section Test Phase (Functional + Semantic)

You are testing a specific section of documentation to validate both functionality AND accuracy. You must verify that instructions work AND that the documentation text truthfully describes what actually happens.

**Important**: Skip content that has ignore comments containing "dotai-ignore" (e.g., `<!-- dotai-ignore -->`, `.. dotai-ignore`, `// dotai-ignore`). Do not generate issues or recommendations for ignored content.

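For illustration, a minimal check for these markers could look like the following TypeScript sketch. The marker string comes from the paragraph above; the helper is an assumption, not the package's implementation.

```typescript
// Hypothetical check for "dotai-ignore" markers. A substring test covers all
// three documented comment styles: <!-- dotai-ignore -->, .. dotai-ignore,
// and // dotai-ignore.
function isIgnored(line: string): boolean {
  return line.includes("dotai-ignore");
}
```
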
## Section to Test

**File**: {filePath}
**Session**: {sessionId}
**Section**: {sectionTitle} (ID: {sectionId})
**Progress**: {sectionsRemaining} of {totalSections} sections remaining after this one

## Your Task - Two-Phase Validation

### Phase 1: Execute and Test (Functional Validation)
**Goal**: Verify that instructions, examples, and procedures actually work

Execute everything testable in this section:
- Follow step-by-step instructions exactly as written
- Execute commands, code examples, or procedures
- Test interactive elements (buttons, forms, interfaces)
- Verify file operations, downloads, installations
- Check that prerequisites are sufficient
- Validate that examples produce expected results

### Phase 2: Analyze Claims vs Reality (Semantic Validation)
**Goal**: Verify that the documentation text truthfully describes what actually happens

**MANDATORY SEMANTIC ANALYSIS** - Check every claim in the documentation:

□ **Difficulty Claims**: Does "easy," "simple," "straightforward" match actual complexity?
□ **Automation Claims**: Does "automatically," "seamlessly," "instantly" match real behavior?
□ **Outcome Claims**: Do "you will see," "this enables," "results in" match what actually happens?
□ **Time Claims**: Do "quickly," "immediately," "in seconds" match actual duration?
□ **Prerequisite Claims**: Are stated requirements actually sufficient for success?
□ **Success Claims**: Do "successful," "working," "ready" match actual end states?
□ **User Experience Claims**: Would a typical user get the promised experience?
□ **Code/Architecture Claims**: When documentation makes claims about code, files, or system architecture, validate them against the actual codebase

### Code Analysis Validation (When Applicable)

**When testing technical documentation in a code repository**, perform BOTH directions of validation:

#### 1. Validate Documented Claims Against Code
**File & Directory Claims:**
- Check if claimed file paths actually exist (e.g., "src/core/discovery.ts")
- Verify directory structure matches documentation claims
- Validate that referenced configuration files exist where claimed

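Path claims in particular can be checked mechanically. A minimal sketch, assuming a Node.js environment (the helper name is hypothetical):

```typescript
// Hypothetical helper: verify that file paths claimed in documentation exist.
import { existsSync } from "fs";
import { resolve } from "path";

function checkClaimedPaths(repoRoot: string, claimedPaths: string[]): string[] {
  // Returns the claimed paths that do NOT exist, i.e., documentation errors.
  return claimedPaths.filter((p) => !existsSync(resolve(repoRoot, p)));
}

// Example: checkClaimedPaths(".", ["src/core/discovery.ts"]) → [] if the claim holds
```
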
**Component & Feature Claims:**
- For architecture docs claiming "System has components A, B, C" - read the actual source code
- Check if documented components/classes/functions actually exist in the codebase
- Verify CLI commands exist if documentation claims they're available

**Implementation Status Claims:**
- For status markers (✅ IMPLEMENTED, 🔴 PLANNED) - verify against actual code
- Check if "planned" features are already implemented but not updated in docs

#### 2. Find Missing Documentation (Reverse Analysis)

**Scan codebase to identify undocumented features:**
- Read key source directories (src/, lib/, bin/, tools/) to find major components
- Check package.json, CLI entry points, and main modules for implemented features
- Look for significant classes, services, interfaces, or tools not mentioned in documentation
- Identify recently added features that may not be reflected in architecture docs

**For architecture/system documentation specifically:**
- Compare documented system components against actual code organization
- Look for major implemented subsystems missing from architecture diagrams
- Check if all main interfaces/entry points are documented

**How to Perform Code Analysis:**
1. **Forward validation**: For each documented claim, verify against actual code
2. **Reverse validation**: Scan actual code to find major features missing from docs
3. Use file reading tools to examine source code structure
4. Focus on major components that users would need to know about
5. Don't flag internal implementation details - focus on user-facing or architecturally significant features

**For each code-related validation, ask:**
- Does this documented claim match the actual code?
- Are there major implemented features missing from this documentation section?
- Would a developer/user be surprised by significant undocumented functionality?

## Universal Testing Approach

### Content Discovery
Look for any testable content in this section:
- **Commands/Scripts**: Terminal commands, code snippets, shell scripts
- **Interactive Steps**: Click buttons, fill forms, navigate interfaces
- **File Operations**: Create, modify, download, upload files
- **Web Interactions**: Visit URLs, test API endpoints, verify web content
- **Installation Procedures**: Software setup, dependency installation
- **Configuration Steps**: Settings, environment setup, account creation
- **Verification Steps**: Commands or actions that check if something worked
- **Code Examples**: Runnable code that should produce specific outputs
- **Troubleshooting**: Problem-solution pairs that can be validated

### Universal Functional Testing
For any instruction found:
1. **Execute with adaptation** - Modify the steps to work safely in your current environment
2. **Verify actual outcomes** - Confirm the steps produce the described results
3. **Complete missing context** - Add authentication, permissions, dependencies, or setup as needed
4. **Test in safe isolation** - Use temporary directories, test accounts, or sandboxed environments
5. **Validate end-to-end** - Ensure the full workflow achieves its stated purpose

### Universal Semantic Validation

For every claim or description:
1. **Accuracy**: Is the statement factually correct?
2. **Completeness**: Are there undocumented requirements or side effects?
3. **Precision**: Do vague terms like "automatically," "easily," "quickly" match reality?
4. **Outcome matching**: Do results match what's promised?
5. **User expectations**: Would following this meet the set expectations?

## Validation Patterns

### Pattern 1: Command/Code Claims
**Documentation Pattern**: "Run X to do Y"
- **Functional**: Execute command X (adapting for your environment) - does it run without errors?
- **Semantic**: Does executing command X actually accomplish Y as described?

### Pattern 2: Step-by-Step Procedures
**Documentation Pattern**: "Follow these steps to achieve Z"
- **Functional**: Execute each step (adapting commands/actions for your environment)
- **Semantic**: Do the executed steps actually lead to achieving Z as described?

### Pattern 3: Interactive Instructions
**Documentation Pattern**: "Click A, then B will happen"
- **Functional**: Perform the interaction (click, form submission, navigation, etc.)
- **Semantic**: Does performing the action actually cause B to happen as claimed?

### Pattern 4: Expected Outputs
**Documentation Pattern**: "You should see output like: [example]"
- **Functional**: Execute the process and capture actual output
- **Semantic**: Does the actual output match the documented example (accounting for environment differences)?

### Pattern 5: Capability Claims
**Documentation Pattern**: "This feature enables you to X"
- **Functional**: Use the feature to perform the claimed capability
- **Semantic**: Does using the feature actually enable X as claimed?

## Execution Guidelines

**PRIMARY GOAL**: Test what users will actually do when following the documentation.

**MANDATORY TESTING APPROACH:**
1. **Execute documented examples first** - Always prioritize running the actual commands/procedures shown in the documentation
2. **Use help commands as supplements** - Help commands (`--help`, `man`, `info`) are valuable for understanding syntax or troubleshooting, but should not replace testing documented workflows
3. **Test real user workflows** - Focus on the actual commands and procedures users are instructed to follow

**Examples of correct approach:**
- If docs show `npm start` → Execute `npm start` (primary), use `npm --help` if needed for context
- If docs show `make install PREFIX=/usr/local` → Execute this command (primary), use `make --help` if syntax is unclear
- If docs show `./configure --enable-feature` → Execute this command (primary), check `./configure --help` if it fails
- If docs show `pip install -r requirements.txt` → Execute this command (primary), use `pip --help` for troubleshooting

**The key principle**: Test the documented workflows that users will actually follow, using help commands as tools for understanding rather than as substitutes for real testing.

- **Execute with adaptation**: Modify commands/procedures to work safely in your environment
  - `npm install -g tool` → `npm install tool` (avoid global installs)
  - `curl api.prod.com/endpoint` → `curl httpbin.org/get` (use test endpoints)
  - `mkdir /usr/local/app` → `mkdir ./tmp/test-app` (use local tmp directory)
  - `cd /path/to/project` → `cd ./tmp/project` (work in local tmp directory)
- **Create safe contexts**: Use `./tmp/` directory for all file operations and temporary work
- **Complete incomplete examples**: Add missing parameters, authentication, or setup steps
  - `curl api.example.com` → `curl -H "Accept: application/json" httpbin.org/json`
  - `docker run image` → `docker run --rm -it image` (ensure cleanup)
  - `touch important-file` → `mkdir -p ./tmp && touch ./tmp/important-file` (create in tmp)
- **Verify actual behavior**: Don't just check syntax - confirm the described outcomes occur
- **Adapt destructive operations**: Transform dangerous commands into safe equivalents
  - `rm -rf /data` → `rm -rf ./tmp/test-data` (use local tmp directory)
  - `sudo systemctl restart service` → `echo "Would restart service"` (simulate when necessary)
  - Any file creation/modification → redirect to `./tmp/` directory
- **Document adaptations**: Explain how you modified examples to make them testable

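To make the sandboxing idea concrete, a rough TypeScript sketch of redirecting absolute paths into `./tmp/` follows. The helper is hypothetical and intentionally simplistic; real adaptation requires judgment per command.

```typescript
// Hypothetical sketch: rewrite absolute target paths in a shell command so
// file operations land in a local ./tmp/ sandbox instead of system paths.
import { join } from "path";

function sandboxCommand(command: string, sandbox = "./tmp"): string {
  // Replace absolute paths like /usr/local/app or /data with sandboxed ones.
  return command.replace(
    /(^|\s)(\/[\w./-]+)/g,
    (_m: string, sep: string, p: string) => `${sep}${join(sandbox, p)}`
  );
}

// Example: sandboxCommand("mkdir /usr/local/app") → "mkdir tmp/usr/local/app"
```
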
## Result Format

Return your results as JSON in this exact format:

```json
{
  "whatWasDone": "Brief summary of what you tested and executed in this section",
  "issues": [
    "Specific problem or issue you found while testing",
    "Another issue that prevents users from succeeding",
    "Documentation inaccuracy or missing information"
  ],
  "recommendations": [
    "Specific actionable suggestion to fix an issue",
    "Improvement that would help users succeed",
    "Way to make documentation more accurate"
  ]
}
```

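For reference, the payload above corresponds to a TypeScript shape like this (the field names are defined by the format; the type name is an assumption):

```typescript
// Illustrative type for the result payload above; "SectionTestResult" is an
// assumed name, the three fields are defined by the format in this document.
interface SectionTestResult {
  whatWasDone: string;       // summary of functional + semantic testing
  issues: string[];          // specific problems found (may be empty)
  recommendations: string[]; // actionable improvements (may be empty)
}
```
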
**Guidelines for each field:**

**Guidelines for each field:**

**whatWasDone** (string):
- Concise summary covering BOTH functional testing AND semantic analysis
- Include what commands/procedures you executed (Phase 1)
- Include what claims you analyzed (Phase 2)
- Mention how many items you tested in both phases
- Example: "Tested 4 installation commands - npm install, API key setup, and 2 verification commands. All executed successfully with minor adaptations. Analyzed 6 documentation claims including 'easy installation' and 'automatic verification' - found installation complexity matches claimed simplicity but verification requires manual interpretation."

**Both issues and recommendations should:**
- Be specific and actionable items only
- NOT include positive assessments like "section works well" or "documentation is accurate"
- Use empty arrays if nothing to report
- Keep each item concise but clear
- Focus on user impact and success

**issues** (array of strings):
- Specific problems that prevent or hinder user success
- Include both functional problems (doesn't work) and semantic problems (inaccurate descriptions)
- Examples: "npm install command requires global flag but documentation doesn't mention it", "Verification step expects specific output that doesn't match actual output"

**recommendations** (array of strings):
- Specific actionable improvements that would help users succeed
- Only suggest concrete changes or additions to the documentation
- Examples: "Add --global flag to npm install command", "Update expected output example to match actual command output", "Include verification command example"

**Important**:
- Use only this JSON format - do not include additional text before or after
- Arrays can be empty if no issues or recommendations are found
- Keep strings concise but informative
- Focus on user impact rather than technical details

## Instructions

**CRITICAL**: You must complete BOTH phases for comprehensive testing:

### Phase 1 Execution Checklist:
1. **Identify all testable content** - discover commands, procedures, examples
2. **Execute everything** - run commands, test procedures, verify examples
3. **Document what actually happens** - capture real outcomes vs expected

### Phase 2 Analysis Checklist:
1. **Find all claims** - scan text for promises, expectations, descriptions
2. **Evaluate each claim** - does reality match what's written?
3. **Check user perspective** - would a typical user get the promised experience?

**Both phases are mandatory** - functional testing without semantic analysis misses critical user experience gaps. Your goal is ensuring users get both working instructions AND accurate expectations about what will actually happen.