@vfarcic/dot-ai 0.116.0 → 0.117.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
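A report like this can be reproduced locally with the npm CLI's built-in `npm diff` command (available since npm 7.5). The invocation below is an illustrative sketch, not part of the published diff; because fetching the real tarballs needs registry access, it only constructs and prints the command rather than running it.

```shell
#!/bin/sh
# Sketch: build the `npm diff` invocation that would produce this report.
PKG="@vfarcic/dot-ai"
FROM="0.116.0"
TO="0.117.0"

CMD="npm diff --diff=${PKG}@${FROM} --diff=${PKG}@${TO}"
echo "$CMD"
# → npm diff --diff=@vfarcic/dot-ai@0.116.0 --diff=@vfarcic/dot-ai@0.117.0
```

Running the printed command against the registry emits unified diffs for every changed file; npm's docs also describe a `--diff-name-only` flag that reduces the output to a file list like the one below.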
- package/README.md +21 -18
- package/dist/core/ai-provider-factory.d.ts +4 -2
- package/dist/core/ai-provider-factory.d.ts.map +1 -1
- package/dist/core/ai-provider-factory.js +17 -6
- package/dist/core/capability-operations.js +1 -1
- package/dist/core/generic-session-manager.d.ts +67 -0
- package/dist/core/generic-session-manager.d.ts.map +1 -0
- package/dist/core/generic-session-manager.js +192 -0
- package/dist/core/pattern-operations.js +1 -1
- package/dist/core/providers/noop-provider.d.ts +47 -0
- package/dist/core/providers/noop-provider.d.ts.map +1 -0
- package/dist/core/providers/noop-provider.js +63 -0
- package/dist/core/schema.d.ts.map +1 -1
- package/dist/core/schema.js +13 -13
- package/dist/core/session-utils.d.ts +3 -6
- package/dist/core/session-utils.d.ts.map +1 -1
- package/dist/core/session-utils.js +5 -13
- package/dist/core/shared-prompt-loader.d.ts +15 -3
- package/dist/core/shared-prompt-loader.d.ts.map +1 -1
- package/dist/core/shared-prompt-loader.js +67 -14
- package/dist/core/unified-creation-session.d.ts +3 -10
- package/dist/core/unified-creation-session.d.ts.map +1 -1
- package/dist/core/unified-creation-session.js +34 -75
- package/dist/core/unified-creation-types.d.ts +31 -22
- package/dist/core/unified-creation-types.d.ts.map +1 -1
- package/dist/interfaces/mcp.d.ts.map +1 -1
- package/dist/interfaces/mcp.js +9 -34
- package/dist/tools/answer-question.d.ts.map +1 -1
- package/dist/tools/answer-question.js +12 -12
- package/dist/tools/choose-solution.js +1 -1
- package/dist/tools/generate-manifests.d.ts.map +1 -1
- package/dist/tools/generate-manifests.js +9 -10
- package/dist/tools/index.d.ts +1 -1
- package/dist/tools/index.d.ts.map +1 -1
- package/dist/tools/index.js +6 -6
- package/dist/tools/organizational-data.js +12 -12
- package/dist/tools/project-setup/discovery.d.ts +15 -0
- package/dist/tools/project-setup/discovery.d.ts.map +1 -0
- package/dist/tools/project-setup/discovery.js +104 -0
- package/dist/tools/project-setup/generate-scope.d.ts +15 -0
- package/dist/tools/project-setup/generate-scope.d.ts.map +1 -0
- package/dist/tools/project-setup/generate-scope.js +237 -0
- package/dist/tools/project-setup/report-scan.d.ts +15 -0
- package/dist/tools/project-setup/report-scan.d.ts.map +1 -0
- package/dist/tools/project-setup/report-scan.js +156 -0
- package/dist/tools/project-setup/types.d.ts +111 -0
- package/dist/tools/project-setup/types.d.ts.map +1 -0
- package/dist/tools/project-setup/types.js +8 -0
- package/dist/tools/project-setup.d.ts +28 -0
- package/dist/tools/project-setup.d.ts.map +1 -0
- package/dist/tools/project-setup.js +134 -0
- package/dist/tools/recommend.js +1 -1
- package/dist/tools/remediate.js +1 -1
- package/dist/tools/version.d.ts +0 -7
- package/dist/tools/version.d.ts.map +1 -1
- package/dist/tools/version.js +5 -34
- package/package.json +4 -2
- package/prompts/capability-inference.md +2 -2
- package/prompts/infrastructure-trigger-expansion.md +2 -2
- package/prompts/intent-analysis.md +2 -2
- package/prompts/kyverno-generation.md +14 -14
- package/prompts/manifest-generation.md +5 -5
- package/prompts/map-intent-to-operation.md +2 -2
- package/prompts/pattern-complete-error.md +1 -1
- package/prompts/pattern-complete-success.md +4 -4
- package/prompts/pattern-rationale.md +1 -1
- package/prompts/pattern-resources.md +1 -1
- package/prompts/pattern-review.md +5 -5
- package/prompts/policy-complete-apply.md +4 -4
- package/prompts/policy-complete-discard.md +1 -1
- package/prompts/policy-complete-error.md +1 -1
- package/prompts/policy-complete-save.md +4 -4
- package/prompts/policy-complete-success.md +4 -4
- package/prompts/policy-namespace-scope.md +1 -1
- package/prompts/question-generation.md +5 -5
- package/prompts/resource-analysis.md +3 -3
- package/prompts/resource-selection.md +3 -3
- package/prompts/solution-enhancement.md +4 -4
- package/scripts/anthropic.nu +9 -13
- package/scripts/common.nu +31 -33
- package/scripts/ingress.nu +5 -4
- package/scripts/kubernetes.nu +38 -53
- package/dist/core/doc-discovery.d.ts +0 -38
- package/dist/core/doc-discovery.d.ts.map +0 -1
- package/dist/core/doc-discovery.js +0 -231
- package/dist/core/doc-testing-session.d.ts +0 -109
- package/dist/core/doc-testing-session.d.ts.map +0 -1
- package/dist/core/doc-testing-session.js +0 -696
- package/dist/core/doc-testing-types.d.ts +0 -127
- package/dist/core/doc-testing-types.d.ts.map +0 -1
- package/dist/core/doc-testing-types.js +0 -53
- package/dist/core/nushell-runtime.d.ts +0 -39
- package/dist/core/nushell-runtime.d.ts.map +0 -1
- package/dist/core/nushell-runtime.js +0 -103
- package/dist/core/platform-operations.d.ts +0 -70
- package/dist/core/platform-operations.d.ts.map +0 -1
- package/dist/core/platform-operations.js +0 -294
- package/dist/tools/build-platform.d.ts +0 -25
- package/dist/tools/build-platform.d.ts.map +0 -1
- package/dist/tools/build-platform.js +0 -277
- package/dist/tools/test-docs.d.ts +0 -22
- package/dist/tools/test-docs.d.ts.map +0 -1
- package/dist/tools/test-docs.js +0 -351
- package/prompts/doc-testing-done.md +0 -51
- package/prompts/doc-testing-fix.md +0 -120
- package/prompts/doc-testing-scan.md +0 -140
- package/prompts/doc-testing-test-section.md +0 -169
- package/prompts/platform-operations-parse-script-help.md +0 -68
- package/scripts/ack.nu +0 -195
- package/scripts/argo-workflows.nu +0 -47
- package/scripts/argocd.nu +0 -85
- package/scripts/aso.nu +0 -74
- package/scripts/backstage.nu +0 -349
- package/scripts/cert-manager.nu +0 -13
- package/scripts/cnpg.nu +0 -14
- package/scripts/dot.nu +0 -32
- package/scripts/external-secrets.nu +0 -110
- package/scripts/gatekeeper.nu +0 -19
- package/scripts/github.nu +0 -42
- package/scripts/image.nu +0 -67
- package/scripts/kro.nu +0 -11
- package/scripts/kubevela.nu +0 -22
- package/scripts/port.nu +0 -71
- package/scripts/prometheus.nu +0 -21
- package/scripts/registry.nu +0 -55
- package/scripts/storage.nu +0 -210
- package/scripts/tests.nu +0 -12
- package/scripts/velero.nu +0 -45
- package/shared-prompts/validate-docs.md +0 -22
package/dist/tools/test-docs.js
DELETED
@@ -1,351 +0,0 @@
-"use strict";
-/**
- * Test Docs Tool - Documentation testing workflow orchestrator
- */
-var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
-    if (k2 === undefined) k2 = k;
-    var desc = Object.getOwnPropertyDescriptor(m, k);
-    if (!desc || ("get" in desc ? !m.__esModule : desc.writable || desc.configurable)) {
-      desc = { enumerable: true, get: function() { return m[k]; } };
-    }
-    Object.defineProperty(o, k2, desc);
-}) : (function(o, m, k, k2) {
-    if (k2 === undefined) k2 = k;
-    o[k2] = m[k];
-}));
-var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
-    Object.defineProperty(o, "default", { enumerable: true, value: v });
-}) : function(o, v) {
-    o["default"] = v;
-});
-var __importStar = (this && this.__importStar) || (function () {
-    var ownKeys = function(o) {
-        ownKeys = Object.getOwnPropertyNames || function (o) {
-            var ar = [];
-            for (var k in o) if (Object.prototype.hasOwnProperty.call(o, k)) ar[ar.length] = k;
-            return ar;
-        };
-        return ownKeys(o);
-    };
-    return function (mod) {
-        if (mod && mod.__esModule) return mod;
-        var result = {};
-        if (mod != null) for (var k = ownKeys(mod), i = 0; i < k.length; i++) if (k[i] !== "default") __createBinding(result, mod, k[i]);
-        __setModuleDefault(result, mod);
-        return result;
-    };
-})();
-Object.defineProperty(exports, "__esModule", { value: true });
-exports.TESTDOCS_TOOL_INPUT_SCHEMA = exports.TESTDOCS_TOOL_DESCRIPTION = exports.TESTDOCS_TOOL_NAME = void 0;
-exports.handleTestDocsTool = handleTestDocsTool;
-const zod_1 = require("zod");
-const error_handling_1 = require("../core/error-handling");
-const fs = __importStar(require("fs"));
-const doc_testing_session_1 = require("../core/doc-testing-session");
-const doc_discovery_1 = require("../core/doc-discovery");
-// Tool metadata for direct MCP registration
-exports.TESTDOCS_TOOL_NAME = 'testDocs';
-exports.TESTDOCS_TOOL_DESCRIPTION = 'Test, validate, check, scan, verify, analyze, or review documentation files for accuracy, functionality, broken examples, outdated commands, invalid links, and overall quality. Use this tool whenever the user wants to test docs, validate documentation, check if examples work, scan for issues, verify commands, analyze doc quality, or review documentation content. IMPORTANT: This tool returns a workflow step with a "prompt" field - you must execute that prompt immediately to perform the actual documentation analysis.';
-// Zod schema for MCP registration
-exports.TESTDOCS_TOOL_INPUT_SCHEMA = {
-    filePath: zod_1.z.string().min(1).optional().describe('Path to documentation file to test (optional - if not provided, will discover available files)'),
-    sessionId: zod_1.z.string().optional().describe('Existing session ID to continue (optional)'),
-    phase: zod_1.z.enum(['scan', 'test', 'analyze', 'fix', 'done']).optional().describe('Specific phase to run (defaults to scan)'),
-    sectionId: zod_1.z.string().optional().describe('Section ID when submitting test results'),
-    results: zod_1.z.string().optional().describe('Test results to store (for client agent reporting back)'),
-    filePattern: zod_1.z.string().optional().describe('File pattern for discovery (e.g., "**/*.md", "*.rst")'),
-    interaction_id: zod_1.z.string().optional().describe('INTERNAL ONLY - Do not populate. Used for evaluation dataset generation.')
-};
-/**
- * Handle test-docs tool request
- */
-async function handleTestDocsTool(args, _dotAI, logger, requestId) {
-    try {
-        logger.info('Processing test-docs tool request', {
-            requestId,
-            filePath: args.filePath,
-            sessionId: args.sessionId,
-            phase: args.phase,
-            interaction_id: args.interaction_id
-        });
-        // Check if we're in discovery mode (no filePath and no sessionId provided)
-        if (!args.filePath && !args.sessionId) {
-            logger.info('Running in discovery mode - scanning for documentation files', { requestId });
-            const discovery = new doc_discovery_1.DocDiscovery();
-            const pattern = discovery.getFilePattern(args);
-            const discoveredFiles = await discovery.discoverFiles(process.cwd(), pattern);
-            if (discoveredFiles.length === 0) {
-                throw error_handling_1.ErrorHandler.createError(error_handling_1.ErrorCategory.VALIDATION, error_handling_1.ErrorSeverity.HIGH, `No documentation files found matching pattern: ${pattern}`, {
-                    operation: 'file_discovery',
-                    component: 'TestDocsTool',
-                    requestId,
-                    input: { pattern }
-                });
-            }
-            // Return discovery results
-            const displayText = discovery.formatForDisplay(discoveredFiles);
-            const defaultFile = discoveredFiles[0];
-            logger.info('Discovery completed', {
-                requestId,
-                filesFound: discoveredFiles.length,
-                pattern,
-                defaultFile: defaultFile.relativePath
-            });
-            return {
-                content: [{
-                        type: 'text',
-                        text: JSON.stringify({
-                            mode: 'discovery',
-                            pattern,
-                            filesFound: discoveredFiles.length,
-                            defaultFile: defaultFile.relativePath,
-                            files: discoveredFiles.map(f => ({
-                                path: f.relativePath,
-                                category: f.category,
-                                priority: f.priority
-                            })),
-                            displayText,
-                            instruction: `I found ${discoveredFiles.length} documentation file${discoveredFiles.length === 1 ? '' : 's'} matching "${pattern}". You must ask the user which file they want to test. Do not choose automatically - wait for the user to specify which file they prefer. The recommended option is "${defaultFile.relativePath}".`
-                        }, null, 2)
-                    }]
-            };
-        }
-        // If we have sessionId but no filePath, load session to get filePath
-        if (args.sessionId && !args.filePath) {
-            const sessionManager = new doc_testing_session_1.DocTestingSessionManager();
-            const existingSession = sessionManager.loadSession(args.sessionId, args);
-            if (existingSession) {
-                args.filePath = existingSession.filePath;
-            }
-        }
-        // Validate file exists (testing mode)
-        if (!fs.existsSync(args.filePath)) {
-            throw error_handling_1.ErrorHandler.createError(error_handling_1.ErrorCategory.VALIDATION, error_handling_1.ErrorSeverity.HIGH, `Documentation file not found: ${args.filePath}`, {
-                operation: 'file_validation',
-                component: 'TestDocsTool',
-                requestId,
-                input: { filePath: args.filePath }
-            });
-        }
-        // Initialize session manager
-        const sessionManager = new doc_testing_session_1.DocTestingSessionManager();
-        let session;
-        if (args.sessionId) {
-            // Load existing session
-            session = sessionManager.loadSession(args.sessionId, args);
-            if (!session) {
-                throw error_handling_1.ErrorHandler.createError(error_handling_1.ErrorCategory.STORAGE, error_handling_1.ErrorSeverity.HIGH, `Session not found: ${args.sessionId}`, {
-                    operation: 'session_load',
-                    component: 'TestDocsTool',
-                    requestId,
-                    input: { sessionId: args.sessionId }
-                });
-            }
-            logger.info('Loaded existing session', { requestId, sessionId: args.sessionId });
-        }
-        else {
-            // Create new session
-            session = sessionManager.createSession(args.filePath, args);
-            logger.info('Created new session', { requestId, sessionId: session.sessionId });
-        }
-        // Handle results submission if provided
-        if (args.results && args.sessionId) {
-            if (args.sectionId) {
-                // Section-specific results
-                logger.info('Storing section test results', {
-                    requestId,
-                    sessionId: args.sessionId,
-                    sectionId: args.sectionId
-                });
-                sessionManager.storeSectionTestResults(args.sessionId, args.sectionId, args.results, args);
-                // After storing section results, get the next workflow step automatically
-                const nextWorkflowStep = sessionManager.getNextStep(args.sessionId, args);
-                if (nextWorkflowStep) {
-                    return {
-                        content: [
-                            {
-                                type: 'text',
-                                text: JSON.stringify({
-                                    success: true,
-                                    data: nextWorkflowStep
-                                }, null, 2)
-                            }
-                        ]
-                    };
-                }
-            }
-            else {
-                // Scan results - process JSON array of section titles
-                logger.info('Processing scan results', {
-                    requestId,
-                    sessionId: args.sessionId
-                });
-                try {
-                    const resultsData = JSON.parse(args.results);
-                    // Handle scan results
-                    if (resultsData.sections && Array.isArray(resultsData.sections)) {
-                        sessionManager.processScanResults(args.sessionId, resultsData.sections, args);
-                        logger.info('Scan results processed successfully', {
-                            requestId,
-                            sessionId: args.sessionId,
-                            sectionsCount: resultsData.sections.length
-                        });
-                        // After processing scan results, get the next workflow step based on updated session state
-                        const nextWorkflowStep = sessionManager.getNextStep(args.sessionId, args);
-                        if (nextWorkflowStep) {
-                            return {
-                                content: [
-                                    {
-                                        type: 'text',
-                                        text: JSON.stringify({
-                                            success: true,
-                                            data: nextWorkflowStep
-                                        }, null, 2)
-                                    }
-                                ]
-                            };
-                        }
-                    }
-                    // Handle fix phase results - array of item status updates
-                    else if (Array.isArray(resultsData)) {
-                        logger.info('Processing fix phase results', {
-                            requestId,
-                            sessionId: args.sessionId,
-                            itemUpdates: resultsData.length
-                        });
-                        // Update status for each item
-                        const statusUpdates = [];
-                        for (const itemUpdate of resultsData) {
-                            if (itemUpdate.id && itemUpdate.status) {
-                                // Convert string ID to number if needed
-                                const itemId = typeof itemUpdate.id === 'string' ? parseInt(itemUpdate.id, 10) : itemUpdate.id;
-                                sessionManager.updateFixableItemStatus(args.sessionId, itemId, itemUpdate.status, itemUpdate.explanation, args);
-                                statusUpdates.push({
-                                    id: itemId,
-                                    status: itemUpdate.status,
-                                    explanation: itemUpdate.explanation
-                                });
-                            }
-                        }
-                        logger.info('Fix phase results processed successfully', {
-                            requestId,
-                            sessionId: args.sessionId,
-                            updatedItems: statusUpdates.length
-                        });
-                        // After processing fix results, get the next workflow step
-                        const nextWorkflowStep = sessionManager.getNextStep(args.sessionId, args);
-                        if (nextWorkflowStep) {
-                            return {
-                                content: [
-                                    {
-                                        type: 'text',
-                                        text: JSON.stringify({
-                                            success: true,
-                                            data: nextWorkflowStep
-                                        }, null, 2)
-                                    }
-                                ]
-                            };
-                        }
-                    }
-                    else {
-                        // Provide specific error message based on what we received
-                        if (Array.isArray(resultsData)) {
-                            // Fix results format - check if items have correct structure
-                            const firstItem = resultsData[0];
-                            if (!firstItem || typeof firstItem !== 'object') {
-                                throw new Error(`Invalid fix results format. Expected array of objects like: [{"id": 1, "status": "fixed", "explanation": "..."}]. Got array with: ${typeof firstItem}`);
-                            }
-                            if (!firstItem.id || !firstItem.status) {
-                                throw new Error(`Invalid fix result item. Each item must have 'id' and 'status' fields. Expected: [{"id": 1, "status": "fixed", "explanation": "..."}]. Missing fields in: ${JSON.stringify(firstItem)}`);
-                            }
-                            // If we get here, it's properly formatted but might have failed in the update process
-                            throw new Error(`Fix results format is correct but processing failed. Array format: [{"id": number, "status": "fixed|deferred|failed", "explanation": "optional"}]`);
-                        }
-                        else {
-                            // Not an array and not scan results
-                            throw new Error(`Invalid results format. Expected either:
-- Scan results: {"sections": ["Section 1", "Section 2", ...]}
-- Fix results: [{"id": 1, "status": "fixed", "explanation": "..."}, {"id": 2, "status": "deferred", "explanation": "..."}]
-Got: ${JSON.stringify(resultsData).substring(0, 200)}`);
-                        }
-                    }
-                }
-                catch (parseError) {
-                    const errorMessage = parseError instanceof Error ? parseError.message : 'Unknown error';
-                    // Provide helpful JSON parsing guidance
-                    if (errorMessage.includes('Unexpected token')) {
-                        throw error_handling_1.ErrorHandler.createError(error_handling_1.ErrorCategory.VALIDATION, error_handling_1.ErrorSeverity.HIGH, `Invalid JSON format in results parameter. ${errorMessage}
-
-Expected formats:
-- Scan results: {"sections": ["Section 1", "Section 2"]}
-- Fix results: [{"id": 1, "status": "fixed", "explanation": "description"}]
-
-Your input: "${args.results?.substring(0, 200)}..."`, {
-                            operation: 'results_parsing',
-                            component: 'TestDocsTool',
-                            requestId,
-                            input: { sessionId: args.sessionId, results: args.results }
-                        });
-                    }
-                    throw error_handling_1.ErrorHandler.createError(error_handling_1.ErrorCategory.VALIDATION, error_handling_1.ErrorSeverity.HIGH, `Failed to process results: ${errorMessage}`, {
-                        operation: 'results_processing',
-                        component: 'TestDocsTool',
-                        requestId,
-                        input: { sessionId: args.sessionId, results: args.results }
-                    });
-                }
-            }
-        }
-        // Determine phase to run - only override if explicitly provided
-        const phaseOverride = args.phase ? args.phase : undefined;
-        // Get workflow step
-        const workflowStep = sessionManager.getNextStep(session.sessionId, args, phaseOverride);
-        if (!workflowStep) {
-            throw error_handling_1.ErrorHandler.createError(error_handling_1.ErrorCategory.OPERATION, error_handling_1.ErrorSeverity.HIGH, `Failed to get workflow step for session: ${session.sessionId}`, {
-                operation: 'workflow_step_generation',
-                component: 'TestDocsTool',
-                requestId,
-                input: { sessionId: session.sessionId, phaseOverride }
-            });
-        }
-        logger.info('Generated workflow step', {
-            requestId,
-            sessionId: session.sessionId,
-            phase: workflowStep.phase,
-            nextPhase: workflowStep.nextPhase
-        });
-        // Return successful response with all WorkflowStep fields
-        return {
-            content: [{
-                    type: 'text',
-                    text: JSON.stringify({
-                        sessionId: session.sessionId,
-                        phase: workflowStep.phase,
-                        filePath: session.filePath,
-                        prompt: workflowStep.prompt,
-                        nextPhase: workflowStep.nextPhase,
-                        nextAction: workflowStep.nextAction,
-                        instruction: workflowStep.instruction,
-                        agentInstructions: workflowStep.agentInstructions,
-                        workflow: workflowStep.workflow,
-                        data: workflowStep.data
-                    }, null, 2)
-                }]
-        };
-    }
-    catch (error) {
-        logger.error('Test-docs tool failed', error);
-        // Handle errors consistently
-        if (error instanceof Error && 'category' in error) {
-            // Already an AppError, just return it
-            throw error;
-        }
-        throw error_handling_1.ErrorHandler.createError(error_handling_1.ErrorCategory.OPERATION, error_handling_1.ErrorSeverity.HIGH, error instanceof Error ? error.message : 'Unknown error in test-docs tool', {
-            operation: 'test_docs_tool',
-            component: 'TestDocsTool',
-            requestId,
-            input: args
-        });
-    }
-}
package/prompts/doc-testing-done.md
DELETED
@@ -1,51 +0,0 @@
-# Documentation Testing - Session Complete
-
-Congratulations! You have completed the documentation testing session.
-
-## Session Summary
-**File**: {filePath}
-**Session**: {sessionId}
-**Completion Time**: {completionTime}
-
-## Final Results
-{finalSummary}
-
-## What Happens Next
-
-This documentation testing session is now complete. The session data has been saved and can be referenced for future improvements.
-
-**Key Points:**
-- **Fixed items** have been successfully resolved and won't appear in future testing sessions
-- **Deferred items** (including ignored items with `dotai-ignore` comments) won't appear in future sessions
-- **Pending items** can be addressed in a new testing session by running the same commands again
-
-## Starting a New Session
-
-To test this documentation again or test other documentation files:
-
-```bash
-# Test the same file again (will skip ignored items)
-dot-ai test-docs --file {filePath}
-
-# Test a different documentation file
-dot-ai test-docs --file path/to/other/docs.md
-
-# Or use discovery mode to find available documentation
-dot-ai test-docs
-```
-
-## Session Data Location
-
-Your session data is stored at: `{sessionDir}/{sessionId}.json`
-
-This contains:
-- Complete test results for each section
-- Status of all issues and their fixes
-- Timestamps and progress tracking
-- Applied fixes and user decisions
-
----
-
-**Session Status**: ✅ COMPLETED
-
-The documentation testing workflow is now finished. No further action is required.
@@ -1,120 +0,0 @@
|
|
|
1
|
-
# Documentation Testing - Fix Phase
|
|
2
|
-
|
|
3
|
-
You are helping users apply fixes to improve documentation based on comprehensive testing results. Your role is to present only the remaining unfixed recommendations and apply the selected fixes.
|
|
4
|
-
|
|
5
|
-
## Session Information
|
|
6
|
-
**File**: {filePath}
|
|
7
|
-
**Session**: {sessionId}
|
|
8
|
-
**Sections Tested**: {totalSections} sections completed
|
|
9
|
-
|
|
10
|
-
## Current Status
|
|
11
|
-
{statusSummary}
|
|
12
|
-
|
|
13
|
-
## Items Requiring Attention
|
|
14
|
-
|
|
15
|
-
{pendingItems}
|
|
16
|
-
|
|
17
|
-
## Your Role - Fix Application Agent
|
|
18
|
-
|
|
19
|
-
You are a comprehensive fix implementation agent. When the user selects items, you MUST attempt to fix them. Your workflow:
|
|
20
|
-
|
|
21
|
-
1. **Present** the unfixed items listed above using their existing IDs
|
|
22
|
-
2. **Get user selection** of which fixes to apply (by ID number)
|
|
23
|
-
3. **ACTUALLY IMPLEMENT THE FIXES** - do not auto-defer anything
|
|
24
|
-
4. **Apply all selected fixes** by making necessary changes across the entire codebase
|
|
25
|
-
5. **Get user confirmation** whether fixes were applied correctly
|
|
26
|
-
6. **Track status** and present remaining unfixed items
|
|
27
|
-
|
|
28
|
-
**CRITICAL**: When user selects an item, you MUST attempt to fix it. Never auto-defer items because you think they are "code bugs" or "outside scope." Your job is to implement whatever fixes are needed.
|
|
29
|
-
|
|
30
|
-
## Fix Scope
|
|
31
|
-
|
|
32
|
-
You handle implementation across the entire codebase:
|
|
33
|
-
|
|
34
|
-
**Documentation Changes**:
|
|
35
|
-
- Fix typos, clarify instructions, update commands
|
|
36
|
-
- Correct broken examples and code snippets
|
|
37
|
-
- Update outdated version numbers and references
|
|
38
|
-
|
|
39
|
-
**Code Implementation Changes**:
|
|
40
|
-
- **API fixes**: Correct method names, signatures, return types to match documentation
|
|
41
|
-
- **Missing features**: Implement functionality described in docs but missing from code
|
|
42
|
-
- **Bug fixes**: Resolve issues discovered through documentation testing
|
|
43
|
-
- **Interface updates**: Make actual code match documented interfaces
|
|
44
|
-
- **New functionality**: Add methods, classes, or modules referenced in documentation
|
|
45
|
-
- **Configuration systems**: Implement missing config options or fix existing ones
|
|
46
|
-
- **Database changes**: Update schemas, queries, or data structures as needed
|
|
47
|
-
- **Dependency management**: Add missing libraries, update package requirements
|
|
48
|
-
|
|
49
|
-
**External Actions**:
|
|
50
|
-
- Create GitHub issues for complex changes requiring team discussion
|
|
51
|
-
- Update project configuration files (package.json, requirements.txt, etc.)
|
|
52
|
-
- Make API calls to external systems when needed
|
|
53
|
-
|
|
54
|
-
## User Interaction Process
|
|
55
|
-
|
|
56
|
-
1. **Present the items** listed above to the user
|
|
57
|
-
2. **Ask for selection**: "Which fixes would you like me to apply? Enter item IDs (e.g., '18,22,29'), ranges (e.g., '18-21'), 'all', or 'none':"
|
|
58
|
-
3. **IMPLEMENT THE SELECTED FIXES**:
|
|
59
|
-
- **Code fixes**: Update source code, fix bugs, implement missing features
|
|
60
|
-
- **Documentation fixes**: Update README, fix examples, clarify instructions
|
|
61
|
-
- **Configuration fixes**: Update package.json, fix build scripts, adjust settings
|
|
62
|
-
- **ANY other changes needed**: Whatever the issue requires
|
|
63
|
-
4. **Show what you did**: Explain the specific changes you made for each item
|
|
64
|
-
5. **Get user confirmation**: Ask user to confirm the status of each fix you attempted:
|
|
65
|
-
- **fixed**: Applied correctly and works as expected
|
|
66
|
-
- **deferred**: User says to handle externally (GitHub issue, backlog) OR permanently ignore
|
|
67
|
-
- **failed**: You attempted but it didn't work or couldn't complete
|
|
68
|
-
|
|
69
|
-
**CRITICAL RULES**:
|
|
70
|
-
- **NEVER auto-defer**: If user selects an item, attempt to fix it regardless of complexity
|
|
71
|
-
- **NEVER refuse**: Don't say "this requires code changes" - make the code changes
|
|
72
|
-
- **NEVER assume scope limits**: Fix documentation bugs, code bugs, config issues, anything needed
|
|
73
|
-
- **ASK for guidance**: If multiple approaches possible, ask user which approach they prefer
|
|
74
|
-
- **ONLY defer when user explicitly requests it**: User must say "defer this" or "skip this"
|
|
75
|
-
|
|
76
|
-
**Important**: If user wants to permanently ignore an item (e.g., "skip this", "ignore this", "don't fix this"), mark it as **deferred** and add an appropriate comment (using the file's comment syntax) containing "dotai-ignore" near the relevant content to prevent future detection of the same issue.
|
|
77
|
-
6. **Continue workflow** with any remaining unfixed items
|
|
78
|
-
|
|
79
|
-
## Status Definitions
|
|
80
|
-
|
|
81
|
-
- **pending**: Not yet addressed, shown for selection
|
|
82
|
-
- **failed**: Attempted but didn't work, shown for retry
|
|
83
|
-
- **fixed**: Successfully resolved, hidden from future presentations
|
|
84
|
-
- **deferred**: Handled externally, hidden from future presentations
|
|
85
|
-
|
|
86
|
-
## Success Criteria
|
|
87
|
-
|
|
88
|
-
Session is complete when:
|
|
89
|
-
- No items remain with status "pending" or "failed"
|
|
90
|
-
- User selects "none" when presented with remaining unfixed items
|
|
91
|
-
- All items are either "fixed" or "deferred"
|
|
92
|
-
|
|
93
|
-
## Instructions

**CRITICAL**: Follow this exact format for your response:

1. **COPY THE NUMBERED LIST EXACTLY** - Use the EXACT same numbers shown above (18, 19, 20, 32, 38, etc.)
2. **DO NOT renumber sequentially** - If you see "18. ConfigMap issue", then write "18. ConfigMap issue", NOT "1. ConfigMap issue"
3. **DO NOT skip any items** - Copy ALL items listed in the sections above
4. **Ask the selection question exactly**: "Which fixes would you like me to apply? Enter item IDs (e.g., '18,22,29'), ranges (e.g., '18-21'), 'all', or 'none':"
5. **Wait for user response** - do NOT auto-select or auto-apply anything
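The selection reply described above can be parsed mechanically. A minimal sketch (not the package's actual code) that accepts comma-separated IDs, ranges, 'all', or 'none', and drops IDs that were never presented:

```typescript
// Parse a selection like "18,22,29", "18-21", "all", or "none" into item IDs.
// `available` is assumed to be the list of IDs currently presented to the user.
function parseSelection(input: string, available: number[]): number[] {
  const trimmed = input.trim().toLowerCase();
  if (trimmed === "all") return [...available];
  if (trimmed === "none") return [];

  const ids = new Set<number>();
  for (const part of trimmed.split(",")) {
    const range = part.trim().match(/^(\d+)-(\d+)$/);
    if (range) {
      // Expand a range like "18-21" into 18, 19, 20, 21.
      for (let i = Number(range[1]); i <= Number(range[2]); i++) ids.add(i);
    } else {
      ids.add(Number(part.trim()));
    }
  }
  // Keep only IDs that were actually presented.
  return available.filter((id) => ids.has(id));
}
```

This is also why renumbering breaks the workflow: the parser matches against the original IDs, so "1" would match nothing.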
**DO NOT**:
- Renumber items (18 stays 18, it does not become 1)
- Summarize or reformat the text
- Skip any items from the list above
- Provide your own analysis
- Ask vague questions like "Would you like me to apply fixes?"
- Auto-decide what should be fixed
**Required response format**:

```
I found [X] items that need attention:

[Copy the EXACT numbered list from above - maintain original IDs like 18, 19, 20, 32, 38]

Which fixes would you like me to apply? Enter item IDs (e.g., '18,22,29'), ranges (e.g., '18-21'), 'all', or 'none':
```

**CRITICAL**: The user will select items by the ORIGINAL IDs (like 18, 22, 29). If you renumber them, the selection will fail!

@@ -1,140 +0,0 @@

# Documentation Testing - Scan Phase

You are analyzing documentation to identify all content that can be validated through testing. Your goal is to find every section containing factual claims, executable instructions, or verifiable information.

## File to Analyze

**File**: {filePath}
**Session**: {sessionId}
## Core Testing Philosophy

**Most technical documentation is testable** through two validation approaches:
1. **Functional Testing**: Execute instructions and verify they work
2. **Factual Verification**: Compare claims against actual system state
## Comprehensive Content Discovery

### 1. Executable & Interactive Content
- **Commands & Scripts**: Shell commands, CLI tools, code snippets, scripts
- **Workflows & Procedures**: Step-by-step instructions, installation guides, setup procedures
- **API & Network Operations**: REST calls, database queries, connectivity tests
- **File & System Operations**: File creation, directory structures, permission changes
- **Configuration Examples**: Config files, environment variables, system settings

### 2. Factual Claims & System State
- **Architecture Descriptions**: System components, interfaces, data flows
- **Implementation Status**: What's implemented vs planned, feature availability
- **File Structure Claims**: File/directory existence, code organization, module descriptions
- **Component Descriptions**: What each part does, how components interact
- **Capability Claims**: Supported features, available commands, system abilities
- **Version & Compatibility Info**: Software versions, platform support, dependencies

### 3. References & Links
- **External URLs**: Web links, API endpoints, documentation references
- **Internal References**: File paths, code references, documentation cross-links
- **Resource References**: Images, downloads, repositories, configuration files

### 4. Examples & Demonstrations
- **Code Examples**: Function usage, API calls, configuration samples
- **Sample Outputs**: Expected results, error messages, status displays
- **Use Case Scenarios**: Workflow examples, integration patterns
## Content Classification Strategy

### What TO Include (Testable Sections)
- **Any factual claim** that can be verified against system state
- **Any instruction** that can be executed or followed
- **Any reference** that can be checked for existence or accessibility
- **Any example** that can be validated for correctness
- **Any workflow** that can be tested end-to-end
- **Any status claim** that can be fact-checked (implemented vs planned)
- **Any architectural description** that can be compared to actual code

### What NOT to Include (Non-Testable Sections)
- **Pure marketing copy** with no factual claims
- **Abstract theory** with no concrete implementation details
- **General philosophy** without specific claims
- **Legal text** (licenses, terms, copyright)
- **Pure acknowledgments** without technical content
- **Speculative future plans** with no current implementation claims
### Examples of Testable vs Non-Testable Content

#### ✅ TESTABLE:
- "The CLI has a `recommend` command" → Can verify the command exists
- "Files are stored in `src/core/discovery.ts`" → Can check the file exists
- "The system supports Kubernetes CRDs" → Can test CRD discovery
- "Run `npm install` to install dependencies" → Can execute the command
- "The API returns JSON format" → Can verify the API response format

#### ❌ NON-TESTABLE:
- "This tool helps developers be more productive" → Subjective claim
- "Kubernetes is a container orchestration platform" → General background info
- "We believe in developer-first experiences" → Philosophy statement
- "Thanks to all contributors" → Acknowledgment
- "The future of DevOps is bright" → Speculative statement
## Document Structure Analysis

### Section Identification Process
1. **Find structural markers**: Headers (##, ###, ####), horizontal rules, clear topic boundaries
2. **Identify section purposes**: Installation, Configuration, Usage, Troubleshooting, Examples, etc.
3. **Map content types**: What kinds of testable content exist in each section?
4. **Trace dependencies**: Which sections must be completed before others can be tested?
5. **Assess completeness**: Are there gaps or missing steps within sections?

### Per-Section Analysis
For each identified section, determine:
- **Primary purpose**: What is this section trying to help users accomplish?
- **Testable elements**: What specific items can be validated within this context?
- **Prerequisites**: What must be done first for this section to work?
- **Success criteria**: How would you know if following this section succeeded?
- **Environmental context**: What platform, tools, or setup does this assume?

### Universal Validation Strategy
- **Functional validation**: Do the instructions work as written?
- **Reference validation**: Do links, files, and resources exist, and are they accessible?
- **Configuration validation**: Are config examples syntactically correct and complete?
- **Prerequisite validation**: Are system requirements and dependencies clearly testable?
- **Outcome validation**: Do procedures achieve their stated goals?
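Reference validation, for instance, can be as simple as checking that a path claimed in the docs actually exists. A sketch using Node's standard `fs` module (the function name is illustrative, not part of this package):

```typescript
import { existsSync } from "node:fs";

// Reference validation: does a file path claimed in the documentation exist?
function validateFileReference(claimedPath: string): boolean {
  return existsSync(claimedPath);
}
```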
## Output Requirements

Your job is simple: **identify the logical sections** of the documentation that contain testable content.

### What to Look For:
- Major headings that represent distinct topics or workflows
- Sections that contain instructions, commands, examples, or references
- Skip purely descriptive sections (marketing copy, background info, acknowledgments)

### What NOT to Analyze:
- Don't inventory specific testable items (that's done later per-section)
- Don't worry about line numbers (they change when docs are edited)
- Don't analyze dependencies (we test sections top-to-bottom in document order)
## Required Output Format

Return a JSON object containing a `sections` array of the section titles that should be tested:

```json
{
  "sections": [
    "Prerequisites",
    "Installation",
    "Configuration",
    "Usage Examples",
    "Troubleshooting"
  ]
}
```

### Guidelines:
- Use the **actual section titles** from the document (or close variations)
- List them in **document order** (top-to-bottom)
- Include only sections that have **actionable/testable content**
- Keep titles **concise but descriptive**
- Aim for **3-8 sections** for most documents

## Instructions

Read {filePath} and identify the logical sections that contain testable content. Return only the JSON object of section titles described above - nothing more.