@claudetools/tools 0.4.0 → 0.5.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +60 -4
- package/dist/cli.js +0 -0
- package/dist/codedna/parser.d.ts +40 -3
- package/dist/codedna/parser.js +65 -8
- package/dist/codedna/registry.js +4 -1
- package/dist/codedna/template-engine.js +66 -32
- package/dist/handlers/codedna-handlers.d.ts +1 -1
- package/dist/handlers/codedna-handlers.js +27 -0
- package/dist/helpers/api-client.js +7 -0
- package/dist/helpers/codedna-monitoring.d.ts +34 -0
- package/dist/helpers/codedna-monitoring.js +159 -0
- package/dist/helpers/error-tracking.d.ts +73 -0
- package/dist/helpers/error-tracking.js +164 -0
- package/dist/helpers/usage-analytics.d.ts +91 -0
- package/dist/helpers/usage-analytics.js +256 -0
- package/dist/templates/claude-md.d.ts +1 -1
- package/dist/templates/claude-md.js +73 -0
- package/docs/AUTO-REGISTRATION.md +353 -0
- package/docs/CLAUDE4_PROMPT_ANALYSIS.md +589 -0
- package/docs/ENTITY_DSL_REFERENCE.md +685 -0
- package/docs/MODERN_STACK_COMPLETE_GUIDE.md +706 -0
- package/docs/PROMPT_STANDARDIZATION_RESULTS.md +324 -0
- package/docs/PROMPT_TIER_TEMPLATES.md +787 -0
- package/docs/RESEARCH_METHODOLOGY_EXTRACTION.md +336 -0
- package/package.json +12 -3
- package/scripts/verify-prompt-compliance.sh +197 -0
package/docs/RESEARCH_METHODOLOGY_EXTRACTION.md
ADDED
@@ -0,0 +1,336 @@

# Claude.ai Research Methodology Extraction

> **Goal:** Extract the web search and research patterns from Claude.ai Desktop to create a `/research` command and agent capability.

---

## Core Research Behaviors

From the leaked prompt's `<search_instructions>` section:

### 1. When to Search vs Answer Directly

```xml
<core_search_behaviors>
1. **Search the web when needed**:
   - For queries where you have reliable knowledge that won't have changed
     (historical facts, scientific principles, completed events), answer directly.
   - For queries about current state that could have changed since knowledge cutoff
     (who holds a position, what policies are in effect, what exists now), search to verify.
   - When in doubt, or if recency could matter, search.

2. **Scale tool calls to query complexity**:
   - Adjust tool usage based on query difficulty
   - Use 1 tool call for simple questions needing 1 source
   - Complex tasks require comprehensive research with 5 or more tool calls

3. **Use the best tools for the query**:
   - Infer which tools are most appropriate for the query and use those tools
   - Prioritize internal tools for personal/company data
</core_search_behaviors>
```

---

## Query Complexity Categories

```xml
<query_complexity_categories>
**NEVER SEARCH:**
- Queries about timeless info, fundamental concepts, definitions
- Well-established technical facts Claude can answer well
- Examples: "help me code a for loop in python", "what's the Pythagorean theorem"

**SINGLE SEARCH:**
- Current events, weather, recent competition results
- Fast-changing technical topics
- Simple factual queries answered definitively with one search

**RESEARCH (2-20 tool calls):**
- Complex business analysis
- Comparative studies
- Multifaceted research questions
- Queries using terms like "deep dive," "comprehensive," "analyze," "evaluate"
- AT LEAST 5 tool calls for thoroughness
</query_complexity_categories>
```
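
The three tiers above reduce to a small keyword classifier. A minimal TypeScript sketch, assuming illustrative trigger-word lists (the name `detectComplexity` and the heuristics are ours for illustration, not Anthropic's actual logic):

```typescript
// Hypothetical sketch of the three search tiers above.
// Keyword lists are illustrative assumptions, not the real Claude.ai logic.
type Complexity = 'none' | 'single' | 'research';

function detectComplexity(query: string): Complexity {
  // "deep dive", "comprehensive", "analyze", "evaluate", "compare" → RESEARCH tier
  if (/deep dive|comprehensive|analyz|evaluat|compar/i.test(query)) {
    return 'research';
  }
  // Recency-sensitive wording → SINGLE SEARCH tier
  if (/\b(latest|current|today|now|recent|weather|price|who is)\b/i.test(query)) {
    return 'single';
  }
  // Timeless knowledge → NEVER SEARCH tier, answer directly
  return 'none';
}
```

A classifier like this would feed the `estimated_searches` scaling used later in the integration snippet.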

---

## Search Usage Guidelines

```xml
<search_usage_guidelines>
How to search:
- Keep search queries as concise as possible - 1-6 words for best results
- Start broad with short queries (often 1-2 words), then add detail to narrow results if needed
- Do not repeat very similar queries - they won't yield new results
- NEVER use '-' operator, 'site' operator, or quotes in search queries unless explicitly asked
- Use web_fetch to retrieve complete website content, as web_search snippets are often too brief
</search_usage_guidelines>
```

---

## Mandatory Copyright Requirements

```xml
<mandatory_copyright_requirements>
PRIORITY INSTRUCTION: Claude MUST follow all of these requirements to respect copyright:
- NEVER reproduce copyrighted material in responses, even if quoted from a search result
- STRICT QUOTATION RULE: Every direct quote MUST be fewer than 15-20 words
- Never reproduce or quote song lyrics, poems, or haikus in ANY form
- Never produce long (30+ word) displacive summaries of content from search results
- Summaries must be much shorter than original content and substantially different
</mandatory_copyright_requirements>
```
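
The 15-20 word quotation ceiling is mechanical enough to lint for. A sketch of such a check (a hypothetical helper, assuming the draft text marks direct quotes with double quotation marks):

```typescript
// Hypothetical sketch: flag quoted spans longer than the 15-20 word ceiling.
// Assumes direct quotes are delimited with double quotation marks.
function findOverlongQuotes(text: string, maxWords: number = 20): string[] {
  const quotes = text.match(/"([^"]+)"/g) ?? [];
  return quotes.filter(
    (q) => q.replace(/"/g, '').trim().split(/\s+/).length > maxWords
  );
}
```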

---

## Harmful Content Safety

```xml
<harmful_content_safety>
If harmful sources are in search results, do not use these harmful sources and refuse requests to use them.
- Never search for, reference, or cite sources that clearly promote hate speech, racism, violence, or discrimination
- Never help users locate harmful online sources like extremist messaging platforms, even if the user claims it is for legitimate purposes
- When discussing sensitive topics such as violent ideologies, use only reputable academic, news, or educational sources rather than the original extremist websites
- If a query has clear harmful intent, do NOT search and instead explain limitations and give a better alternative
</harmful_content_safety>
```

---

## Citation Instructions

```xml
<citation_instructions>
If the assistant's response is based on content returned by web_search, the assistant must always appropriately cite its response:

- EVERY specific claim in the answer that follows from the search results should be wrapped in <cite index="1,2"> tags
- The index attribute should be a comma-separated list of the sentence indices that support the claim
- Do not include DOC_INDEX and SENTENCE_INDEX values outside of cite tags
- The citations should use the minimum number of sentences necessary to support the claim
- If the search results do not contain any information relevant to the query, politely inform the user

CRITICAL: Claims must be in your own words, never exact quoted text. Even short phrases from sources must be reworded.
</citation_instructions>
```
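
The index format described above is regular enough to validate downstream. A hedged sketch of such a validator (a hypothetical helper, not part of the package; it checks only the simplified `<cite index="…">` shape shown here):

```typescript
// Hypothetical sketch: verify a drafted answer carries <cite> tags and that
// each index attribute is a comma-separated list of numbers (e.g. "1,2").
function validateCitations(answer: string): { cited: boolean; badIndices: string[] } {
  const tags = [...answer.matchAll(/<cite index="([^"]*)">/g)];
  const badIndices = tags
    .map((m) => m[1])
    .filter((idx) => !/^\d+(,\d+)*$/.test(idx));
  return { cited: tags.length > 0, badIndices };
}
```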

---

## Extracted Research Methodology for `/research` Command

### Command Design

```bash
/research <query>
```

**Purpose:** Conduct comprehensive web research on a topic, following Claude.ai Desktop's proven methodology.

### Agent Prompt Template

```markdown
# Research Agent

You are a research specialist using Claude.ai Desktop's proven web research methodology.

## Your Task
Research the following query: {query}

## Research Methodology

### Step 1: Determine Complexity
- **Timeless knowledge?** → Answer directly, no search needed
- **Single factual query?** → 1 search call
- **Complex/comparative/analytical?** → 5+ search calls (comprehensive research)

Trigger words for comprehensive research: "deep dive", "comprehensive", "analyze", "evaluate", "compare"

### Step 2: Execute Searches
- Keep queries concise (1-6 words)
- Start broad, then narrow
- Don't repeat similar queries
- Avoid operators (-, site:, quotes) unless explicitly asked
- Use WebFetch for full content when snippets are insufficient

### Step 3: Scale Appropriately
- Simple queries: 1 tool call
- Moderate complexity: 2-4 tool calls
- Complex analysis: 5-20 tool calls (at least 5)

### Step 4: Synthesize Findings
- Reword all claims in your own words
- Direct quotes: 15-20 words maximum
- Summaries must be substantially different from source
- Never reproduce copyrighted material verbatim

### Step 5: Cite Sources
- Wrap every claim in <cite index="1,2"> tags
- Index refers to sentence numbers from search results
- Use minimum necessary sentences to support claim
- If no relevant results found, say so clearly

## Safety Rules
- Reject harmful sources (hate speech, extremist content)
- Use only reputable academic/news/educational sources for sensitive topics
- If query has harmful intent, explain limitations and suggest alternatives

## Output Format
Provide:
1. Executive summary (2-3 sentences)
2. Detailed findings with citations
3. Sources list at end

Begin research now.
```

---

## Implementation for ClaudeTools

### 1. Create `/research` Command

**Location:** `~/.claude/commands/research.md`

```markdown
You are conducting comprehensive web research using Claude.ai Desktop's proven methodology.

**Query:** {args}

**Methodology:**
1. Assess complexity (timeless → answer directly, single fact → 1 search, complex → 5+ searches)
2. Execute searches (concise 1-6 word queries, start broad then narrow)
3. Scale tool calls (simple=1, moderate=2-4, complex=5-20)
4. Synthesize findings (reword everything, quotes max 15-20 words)
5. Cite sources (<cite index="1,2"> tags for every claim)

**Trigger words for comprehensive research:** deep dive, comprehensive, analyze, evaluate, compare

**Safety:** Reject harmful sources, use reputable sources only for sensitive topics.

**Output:**
- Executive summary (2-3 sentences)
- Detailed findings with <cite> tags
- Sources list

Begin research on: {args}
```

### 2. Add Research Agent

**Location:** `~/.claude/agents/research-specialist.md`

```yaml
name: research-specialist
description: Conducts comprehensive web research using Claude.ai Desktop methodology. Scales from single searches to 20+ tool calls based on complexity. Uses proper citation and copyright-safe summarization.
tools:
  - WebSearch
  - WebFetch
```

**Instructions file:** `~/.claude/agents/instructions/research-specialist.md`

(Include the full agent prompt template from above.)

### 3. Integration with Task System

Research agents should automatically detect when comprehensive research is needed:

```typescript
// In task_plan or task_execute
if (taskDescription.match(/research|analyze|evaluate|comprehensive|deep dive/i)) {
  // Attach research methodology context, scaled to detected complexity
  const complexity = detectComplexity(taskDescription);
  attachContext({
    type: 'research_methodology',
    complexity,
    estimated_searches:
      complexity === 'high' ? '5-20' : complexity === 'medium' ? '2-4' : '1'
  });
}
```

---

## Key Differences from Generic Search

| Aspect | Generic Search | Claude.ai Research |
|--------|---------------|-------------------|
| **Complexity Detection** | One size fits all | Scales 1-20 tool calls based on query |
| **Query Formation** | User's exact words | Concise 1-6 words, broad→narrow |
| **Citation** | Optional | Mandatory `<cite>` tags on every claim |
| **Copyright** | Unspecified | Strict 15-20 word quote limit |
| **Source Quality** | Any result | Safety filters + reputable sources only |
| **Tool Selection** | Just search | WebSearch + WebFetch combo |

---

## Token Budget

Research methodology instructions: ~800 tokens
- Core behaviors: ~200 tokens
- Complexity categories: ~150 tokens
- Search guidelines: ~100 tokens
- Copyright rules: ~150 tokens
- Safety rules: ~100 tokens
- Citation format: ~100 tokens

**Fits comfortably in the Standard tier (~2000 tokens) alongside domain knowledge.**

---

## Example Research Workflow

**Query:** "Compare WebSockets vs Server-Sent Events for real-time notifications"

**Step 1:** Detect complexity → "Compare" = comprehensive research (5+ searches)

**Step 2:** Execute searches
1. WebSearch("WebSockets real-time")
2. WebSearch("Server-Sent Events")
3. WebSearch("WebSockets vs SSE comparison")
4. WebFetch(top result for detailed analysis)
5. WebSearch("WebSockets performance")
6. WebSearch("SSE browser support")

**Step 3:** Synthesize findings
- Reword all technical claims
- Keep quotes under 15-20 words
- Note differences rather than copying source descriptions

**Step 4:** Cite sources
```
<cite index="1,3">WebSockets provide full-duplex communication channels</cite>,
enabling bidirectional data flow between client and server. In contrast,
<cite index="2,4">Server-Sent Events offer unidirectional updates from server to client</cite>,
which suffices for many notification scenarios.
```

**Step 5:** List sources
```
Sources:
1. MDN Web Docs - WebSocket API
2. HTML5 Rocks - Server-Sent Events
3. Ably.com - WebSocket vs SSE Comparison
4. Can I Use - SSE Browser Support
```

---

## Recommendations

1. **Create `/research` command** with complexity detection
2. **Add `research-specialist` agent** for spawning in tasks
3. **Integrate with task system** to auto-detect research needs
4. **Add citation validation** to ensure `<cite>` tags are present
5. **Monitor token usage** to optimize search count

**Priority:** High - this is production-proven methodology from Anthropic
**Effort:** Medium (~3-4 hours implementation + testing)
**Impact:** Enables high-quality research for tasks, documentation, and planning

---

**Extracted:** 2025-12-05
**Source:** Claude.ai Desktop leaked prompt (`search_instructions` section)
**Application:** `/research` command + research-specialist agent
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@claudetools/tools",
-  "version": "0.4.0",
+  "version": "0.5.1",
   "description": "Persistent AI memory, task management, and codebase intelligence for Claude Code",
   "type": "module",
   "main": "dist/index.js",
@@ -9,6 +9,8 @@
   },
   "files": [
     "dist",
+    "docs",
+    "scripts",
     "README.md",
     "LICENSE"
   ],
@@ -19,7 +21,12 @@
     "test": "vitest run",
     "test:watch": "vitest",
     "test:ui": "vitest --ui",
-    "prepublishOnly": "npm run build"
+    "prepublishOnly": "npm run build",
+    "codedna:monitor": "tsx -e \"import { runMonitoring } from './src/helpers/codedna-monitoring.js'; runMonitoring()\"",
+    "codedna:analytics": "tsx -e \"import { weeklyAnalyticsSummary } from './src/helpers/usage-analytics.js'; weeklyAnalyticsSummary()\"",
+    "codedna:analytics:24h": "tsx -e \"import { getLast24HoursAnalytics, printAnalytics } from './src/helpers/usage-analytics.js'; const r = await getLast24HoursAnalytics(); printAnalytics(r, 'Last 24 Hours')\"",
+    "codedna:analytics:30d": "tsx -e \"import { getLast30DaysAnalytics, printAnalytics } from './src/helpers/usage-analytics.js'; const r = await getLast30DaysAnalytics(); printAnalytics(r, 'Last 30 Days')\"",
+    "prompt:verify": "scripts/verify-prompt-compliance.sh"
   },
   "repository": {
     "type": "git",
@@ -35,7 +42,9 @@
     "context",
     "temporal",
     "tasks",
-    "codebase"
+    "codebase",
+    "prompt-engineering",
+    "10/10-framework"
   ],
   "author": "ClaudeTools <hello@claudetools.dev>",
   "license": "MIT",
package/scripts/verify-prompt-compliance.sh
ADDED
@@ -0,0 +1,197 @@
#!/usr/bin/env bash
# Prompt Compliance Verification Script
# Verifies prompts follow the 10/10 AI System Prompt Architecture framework

set -euo pipefail

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Counters (incremented with VAR=$((VAR+1)); the ((VAR++)) form returns a
# non-zero status when the old value is 0 and would abort under `set -e`)
PASSED=0
FAILED=0
WARNINGS=0

echo "🔍 10/10 Framework Prompt Compliance Checker"
echo "============================================="
echo ""

# Function to check if file has required XML tags
check_xml_structure() {
  local file=$1
  local tier=$2
  local errors=0

  echo "Checking: $file"

  # Layer 1: Identity (required in all tiers)
  if ! grep -q "<identity>" "$file"; then
    echo -e "${RED}  ✗ Missing <identity> tag (Layer 1)${NC}"
    errors=$((errors+1))
  else
    echo -e "${GREEN}  ✓ Has <identity> tag${NC}"
  fi

  # Layer 2: Behavioral Guidelines (required in all tiers)
  if ! grep -q "<behavioral_guidelines>" "$file"; then
    echo -e "${RED}  ✗ Missing <behavioral_guidelines> tag (Layer 2)${NC}"
    errors=$((errors+1))
  else
    echo -e "${GREEN}  ✓ Has <behavioral_guidelines> tag${NC}"
  fi

  # Layer 3: Standards (required in all tiers)
  if ! grep -q "<standards>" "$file"; then
    echo -e "${RED}  ✗ Missing <standards> tag (Layer 3)${NC}"
    errors=$((errors+1))
  else
    echo -e "${GREEN}  ✓ Has <standards> tag${NC}"
  fi

  # Layer 4: Domain Knowledge (required in Standard+)
  if [[ "$tier" != "minimal" ]]; then
    if ! grep -q "<domain_knowledge>" "$file"; then
      echo -e "${RED}  ✗ Missing <domain_knowledge> tag (Layer 4) - required for $tier tier${NC}"
      errors=$((errors+1))
    else
      echo -e "${GREEN}  ✓ Has <domain_knowledge> tag${NC}"
    fi
  fi

  # Layer 5: Cross-Cutting Concerns (required in Professional+)
  if [[ "$tier" == "professional" || "$tier" == "enterprise" ]]; then
    if ! grep -q "<cross_cutting_concerns>" "$file"; then
      echo -e "${RED}  ✗ Missing <cross_cutting_concerns> tag (Layer 5) - required for $tier tier${NC}"
      errors=$((errors+1))
    else
      echo -e "${GREEN}  ✓ Has <cross_cutting_concerns> tag${NC}"
    fi
  fi

  # Layer 6: Reference Library (required in Enterprise only)
  if [[ "$tier" == "enterprise" ]]; then
    if ! grep -q "<reference_library>" "$file"; then
      echo -e "${RED}  ✗ Missing <reference_library> tag (Layer 6) - required for enterprise tier${NC}"
      errors=$((errors+1))
    else
      echo -e "${GREEN}  ✓ Has <reference_library> tag${NC}"
    fi
  fi

  # Layer 7: User Input (required in all tiers)
  if ! grep -q "<user_input>" "$file"; then
    echo -e "${RED}  ✗ Missing <user_input> tag (Layer 7)${NC}"
    errors=$((errors+1))
  else
    echo -e "${GREEN}  ✓ Has <user_input> tag${NC}"
  fi

  # Check for priority markers
  if ! grep -Eq "(CRITICAL|MANDATORY|PRIORITY|IMPORTANT)" "$file"; then
    echo -e "${YELLOW}  ⚠ No priority markers found (CRITICAL/MANDATORY/PRIORITY/IMPORTANT)${NC}"
    WARNINGS=$((WARNINGS+1))
  else
    echo -e "${GREEN}  ✓ Has priority markers${NC}"
  fi

  echo ""
  return "$errors"
}

# Check tier-specific token budgets
check_token_budget() {
  local file=$1
  local tier=$2
  local line_count
  line_count=$(wc -l < "$file")
  local approx_tokens=$((line_count * 3)) # Rough estimate: ~3 tokens per line

  case $tier in
    minimal)
      if [ "$approx_tokens" -gt 600 ]; then
        echo -e "${YELLOW}  ⚠ Estimated $approx_tokens tokens (target: ~500 for minimal tier)${NC}"
        WARNINGS=$((WARNINGS+1))
      else
        echo -e "${GREEN}  ✓ Token budget OK (~$approx_tokens tokens)${NC}"
      fi
      ;;
    standard)
      if [ "$approx_tokens" -gt 2500 ]; then
        echo -e "${YELLOW}  ⚠ Estimated $approx_tokens tokens (target: ~2000 for standard tier)${NC}"
        WARNINGS=$((WARNINGS+1))
      else
        echo -e "${GREEN}  ✓ Token budget OK (~$approx_tokens tokens)${NC}"
      fi
      ;;
    professional)
      if [ "$approx_tokens" -gt 6000 ]; then
        echo -e "${YELLOW}  ⚠ Estimated $approx_tokens tokens (target: ~5000 for professional tier)${NC}"
        WARNINGS=$((WARNINGS+1))
      else
        echo -e "${GREEN}  ✓ Token budget OK (~$approx_tokens tokens)${NC}"
      fi
      ;;
  esac
  echo ""
}

# Main verification
echo "Verifying updated prompts..."
echo ""

# Context detection agents (Minimal tier)
echo "=== Minimal Tier Agents ==="
for file in ~/.claude/agents/context-signal-detector.md ~/.claude/agents/context-template-selector.md; do
  if [ -f "$file" ]; then
    if check_xml_structure "$file" "minimal"; then
      check_token_budget "$file" "minimal"
      PASSED=$((PASSED+1))
    else
      FAILED=$((FAILED+1))
    fi
  fi
done

# Commands (Standard tier)
echo "=== Standard Tier Commands ==="
for file in ~/.claude/commands/tasks.md ~/.claude/commands/work.md ~/.claude/commands/research.md; do
  if [ -f "$file" ]; then
    if check_xml_structure "$file" "standard"; then
      check_token_budget "$file" "standard"
      PASSED=$((PASSED+1))
    else
      FAILED=$((FAILED+1))
    fi
  fi
done

# Planning command (Professional tier)
echo "=== Professional Tier Commands ==="
for file in ~/.claude/commands/plan.md; do
  if [ -f "$file" ]; then
    if check_xml_structure "$file" "professional"; then
      check_token_budget "$file" "professional"
      PASSED=$((PASSED+1))
    else
      FAILED=$((FAILED+1))
    fi
  fi
done

# Summary
echo "============================================="
echo "Summary:"
echo -e "${GREEN}Passed: $PASSED${NC}"
echo -e "${RED}Failed: $FAILED${NC}"
echo -e "${YELLOW}Warnings: $WARNINGS${NC}"
echo ""

if [ "$FAILED" -eq 0 ]; then
  echo -e "${GREEN}✓ All prompts are compliant with the 10/10 framework!${NC}"
  exit 0
else
  echo -e "${RED}✗ Some prompts need attention${NC}"
  exit 1
fi