qlint 0.1.0__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- qlint-0.1.0/LICENSE +21 -0
- qlint-0.1.0/PKG-INFO +444 -0
- qlint-0.1.0/README.md +421 -0
- qlint-0.1.0/pry/__init__.py +1 -0
- qlint-0.1.0/pry/cli.py +169 -0
- qlint-0.1.0/pry/core/__init__.py +0 -0
- qlint-0.1.0/pry/core/complexity.py +76 -0
- qlint-0.1.0/pry/core/duplicates.py +51 -0
- qlint-0.1.0/pry/core/metrics.py +91 -0
- qlint-0.1.0/pry/core/quality.py +42 -0
- qlint-0.1.0/pry/core/security.py +120 -0
- qlint-0.1.0/pry/core/smells.py +90 -0
- qlint-0.1.0/pry/core/traversal.py +47 -0
- qlint-0.1.0/pry/reports/__init__.py +0 -0
- qlint-0.1.0/pry/reports/report_html.py +131 -0
- qlint-0.1.0/pry/reports/report_json.py +48 -0
- qlint-0.1.0/pyproject.toml +34 -0
- qlint-0.1.0/qlint.egg-info/PKG-INFO +444 -0
- qlint-0.1.0/qlint.egg-info/SOURCES.txt +21 -0
- qlint-0.1.0/qlint.egg-info/dependency_links.txt +1 -0
- qlint-0.1.0/qlint.egg-info/entry_points.txt +2 -0
- qlint-0.1.0/qlint.egg-info/top_level.txt +1 -0
- qlint-0.1.0/setup.cfg +4 -0
qlint-0.1.0/LICENSE
ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 ropean

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
qlint-0.1.0/PKG-INFO
ADDED
@@ -0,0 +1,444 @@
Metadata-Version: 2.4
Name: qlint
Version: 0.1.0
Summary: Multi-language code quality scanner — complexity, duplication, security, and smells
License: MIT
Project-URL: Homepage, https://github.com/ropean/pry
Project-URL: Repository, https://github.com/ropean/pry
Project-URL: Issues, https://github.com/ropean/pry/issues
Keywords: code-quality,static-analysis,security,complexity,linter
Classifier: Development Status :: 3 - Alpha
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Quality Assurance
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Dynamic: license-file

# Hackathon Part I: Build Your Code Scanner

**Focus:** DPI workflow + Active Partner + Feedback Loop + Encoding Priority
**Goal:** Build a working code scanner CLI from scratch — and build the infrastructure to make your agent effective on it

---

## This Is a Pressure Test

This hackathon is intentionally ambitious. The goal is NOT to test your coding skills — it's to test your ability to **leverage your coding agent to ship fast**.

**Expected experience:**
- First 10 minutes: Mild panic ("This is a lot!")
- Minutes 10-30: Finding rhythm ("OK, let the agent handle this")
- Rest of hackathon: Flow state ("This is actually working")

**If you're feeling overwhelmed, that's the point.** The solution isn't working harder — it's applying DPI. Design before building. Plan before implementing. Let the agent iterate while you steer.

---

## Overview

Use your coding agent to build a code scanner that:

1. Walks through any codebase and extracts metrics
2. Analyzes code quality (complexity, duplicates, or smells)
3. Generates reports (JSON + a human-readable format)

**Success = functional scanner + 2-3 analysis features working + infrastructure artifacts created**

You choose the language and tech stack. The agent does the heavy lifting. You steer, verify, and encode.

**What you'll produce:**
- A working CLI tool
- Ground rules for the project
- At least one custom command encoding a workflow from the build
- A feedback loop where the agent validates its own work

---

## Safety Net

**Git: commit early, commit often.** This hackathon moves fast.

- `git init` if starting fresh
- `git add . && git commit -m "message"` after each working milestone
- `git checkout -b experiment` before trying something risky
- `git stash` or `git checkout .` to reset if things break

**Suggested commit points:**
- After basic scanner works
- After each analysis feature
- After report generation works
- Before any major refactor

**Claude Code users:** `/rewind` lets you roll back to any previous prompt — conversation-level checkpoints alongside git's code-level checkpoints. Use both.

**Model strategy:** Consider Opus for design and planning prompts, Sonnet for implementation. The hackathon is long enough for this to matter.

---

## Your Tasks

### Step 1: Bootstrap Your Scanner (~15 min)

Prompt your agent to generate a basic scanner. Specify your preferred language and output format. Apply the DPI workflow: design the interface first, then let the agent implement.

<details>
<summary><strong>Starter Prompt</strong></summary>

```
Create a [Python/Node/Go] code scanner with:
- CLI that accepts a directory path
- Recursive file traversal (skip .git, node_modules, __pycache__)
- Count files by extension and total lines of code
- Output JSON summary: {totalFiles, totalLines, languages: {...}}
- Handle errors gracefully

Make it production-ready with proper structure.
```

</details>
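
For reference, the traversal core the agent produces will usually look something like this minimal Python sketch (names and structure here are illustrative, not qlint's actual modules):

```python
import json
import sys
from pathlib import Path

SKIP_DIRS = {".git", "node_modules", "__pycache__"}

def scan(root: str) -> dict:
    """Walk a directory tree, counting files and lines per extension."""
    summary = {"totalFiles": 0, "totalLines": 0, "languages": {}}
    for path in Path(root).rglob("*"):
        # Skip anything inside an ignored directory.
        if any(part in SKIP_DIRS for part in path.parts):
            continue
        if not path.is_file():
            continue
        try:
            lines = len(path.read_text(encoding="utf-8", errors="ignore").splitlines())
        except OSError:
            continue  # unreadable file: skip it rather than crash
        ext = path.suffix or "(none)"
        lang = summary["languages"].setdefault(ext, {"files": 0, "lines": 0})
        lang["files"] += 1
        lang["lines"] += lines
        summary["totalFiles"] += 1
        summary["totalLines"] += lines
    return summary

if __name__ == "__main__":
    print(json.dumps(scan(sys.argv[1] if len(sys.argv) > 1 else "."), indent=2))
```

If the agent's version differs, that's fine; what matters is that it skips the ignore list, survives unreadable files, and emits the agreed JSON shape.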

**Before moving on:**
- Did the agent ask clarifying questions, or did it just build? (Compliance Bias check)
- What prompts worked well? What did you have to clarify or retry?

**Commit your work:** `git add . && git commit -m "Basic scanner working"`

---

### Step 2: Add Core Analysis (~45 min)

Choose 2-3 analysis features to add to your scanner.

**Quick approach:** Pick features that interest you and prompt directly.

**DPI approach:** Ask the agent to compare options with trade-offs, then choose. "What are 3 ways to add complexity analysis? Compare trade-offs."

**Alignment check:** Before the agent writes code, ask "Tell me what you're going to build" to catch misunderstandings early.

<details>
<summary><strong>Complexity Analysis Prompt</strong></summary>

```
Extend my scanner to calculate cyclomatic complexity for functions.
Use existing tools or implement a simple version.
Flag functions with complexity > 10.
Add to JSON output.
```

</details>
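
If you implement it by hand for Python sources, one common approach (a rough sketch, not necessarily what qlint ships) is to count branch points with the standard `ast` module:

```python
import ast

# Node types that open a new branch; counting them approximates
# cyclomatic complexity closely enough to flag hotspots.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp, ast.Match)

def function_complexity(source: str) -> dict[str, int]:
    """Return {function_name: 1 + number of branch points}."""
    scores: dict[str, int] = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(child, BRANCH_NODES)
                           for child in ast.walk(node))
            scores[node.name] = 1 + branches
    return scores
```

Flagging is then just filtering for scores above 10 and merging the result into the existing JSON output.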

<details>
<summary><strong>Duplication Detection Prompt</strong></summary>

```
Add code duplication detection to my scanner.
Find duplicate blocks (exact or similar).
Calculate duplication percentage per file.
Show top 5 offenders.
```

</details>
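
A quick way to get "exact or similar" blocks without real AST diffing: hash sliding windows of whitespace-normalized lines. A sketch (the window size and normalization are arbitrary choices):

```python
from collections import defaultdict

def duplication_ratios(files: dict[str, str], window: int = 5) -> dict[str, float]:
    """Map each file path to the fraction of its lines that appear in a
    duplicated `window`-line block somewhere in the codebase."""
    seen: dict[tuple, list[tuple[str, int]]] = defaultdict(list)
    line_counts: dict[str, int] = {}
    for path, text in files.items():
        lines = [ln.strip() for ln in text.splitlines()]
        line_counts[path] = len(lines)
        for i in range(len(lines) - window + 1):
            block = tuple(lines[i:i + window])
            if any(block):  # ignore all-blank windows
                seen[block].append((path, i))
    dup_lines: dict[str, set[int]] = defaultdict(set)
    for occurrences in seen.values():
        if len(occurrences) > 1:  # the block occurs more than once
            for path, i in occurrences:
                dup_lines[path].update(range(i, i + window))
    return {path: len(dup_lines[path]) / max(line_counts[path], 1)
            for path in files}
```

Sorting that dict by value gives the "top 5 offenders" directly.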

<details>
<summary><strong>Security Scanning Prompt</strong></summary>

```
Add basic security scanning:
- Detect hardcoded secrets (API keys, passwords) using regex
- Flag dangerous functions (eval, exec, etc.)
- Output as security_issues array in JSON
```

</details>
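
A minimal regex-based version might look like this (the patterns are illustrative; real secret detection needs a much larger rule set, ideally with entropy checks):

```python
import re

SECRET_PATTERNS = {
    "api_key": re.compile(r"""(?i)(api[_-]?key|secret)\s*[:=]\s*['"][^'"]{8,}['"]"""),
    "password": re.compile(r"""(?i)password\s*[:=]\s*['"][^'"]+['"]"""),
}
DANGEROUS_CALLS = re.compile(r"\b(eval|exec|os\.system|pickle\.loads)\s*\(")

def security_issues(path: str, text: str) -> list[dict]:
    """Scan one file's text and return issues in the security_issues shape."""
    issues = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for kind, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                issues.append({"file": path, "line": lineno,
                               "type": f"hardcoded_{kind}", "severity": "high"})
        if DANGEROUS_CALLS.search(line):
            issues.append({"file": path, "line": lineno,
                           "type": "dangerous_function", "severity": "medium"})
    return issues
```

Expect false positives; that's acceptable for a hackathon scanner as long as the report says where each hit came from.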

<details>
<summary><strong>Code Smell Detection Prompt</strong></summary>

```
Add code smell detection:
- Long functions (> 50 lines)
- Deep nesting (> 4 levels)
- Long parameter lists (> 5 params)
- Add to JSON with severity ratings
```

</details>
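
All three smells are easy to measure on Python sources with `ast`. A sketch, with thresholds taken from the prompt above:

```python
import ast

NESTING_NODES = (ast.If, ast.For, ast.While, ast.With, ast.Try)

def max_nesting(node: ast.AST, depth: int = 0) -> int:
    """Deepest chain of nested block statements under `node`."""
    return max((max_nesting(child, depth + isinstance(child, NESTING_NODES))
                for child in ast.iter_child_nodes(node)), default=depth)

def code_smells(source: str) -> list[dict]:
    smells = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            continue
        length = node.end_lineno - node.lineno + 1
        if length > 50:
            smells.append({"name": node.name, "smell": "long_function",
                           "value": length, "severity": "medium"})
        params = len(node.args.args) + len(node.args.kwonlyargs)
        if params > 5:
            smells.append({"name": node.name, "smell": "long_parameter_list",
                           "value": params, "severity": "low"})
        depth = max_nesting(node)
        if depth > 4:
            smells.append({"name": node.name, "smell": "deep_nesting",
                           "value": depth, "severity": "high"})
    return smells
```

The severity ratings here are one reasonable assignment, not a standard; pick whatever scale your report format expects.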

**Before moving on:**
- Which features integrated smoothly vs. required iteration?
- Did you need to provide more context for certain features?
- Did you hit a point where the agent needed a fresh session? (Context Degradation signal)

**Commit your work:** `git add . && git commit -m "Added analysis features"`

---

### Step 3: Generate Reports (~30 min)

Add human-readable output formats to your scanner.

<details>
<summary><strong>HTML Report Prompt</strong></summary>

```
Create an HTML report generator for my scanner.
Include:
- Summary dashboard (total files, lines, quality score)
- File-by-file breakdown table
- 2-3 charts (language distribution, complexity)
- Professional styling

Use the JSON output from my scanner as input.
```

</details>
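
Even without charts, a useful HTML report is mostly string templating over the scan JSON. A bare-bones sketch (the file names are assumed; charts are left to a CDN library such as Chart.js, which the agent can wire in):

```python
import json
from pathlib import Path

def render_html(report: dict) -> str:
    """Render the scan JSON as a single standalone HTML page."""
    rows = "\n".join(
        f"<tr><td>{f['path']}</td><td>{f['language']}</td>"
        f"<td>{f['metrics']['loc']}</td><td>{f['metrics']['complexity']}</td></tr>"
        for f in report.get("files", [])
    )
    repo = report["repository"]
    return f"""<!doctype html>
<html><head><title>Scan Report</title>
<style>body{{font-family:sans-serif;margin:2rem}}table{{border-collapse:collapse}}
td,th{{border:1px solid #ccc;padding:4px 8px}}</style></head>
<body>
<h1>Scan Report</h1>
<p>{repo['totalFiles']} files, {repo['totalLines']} lines,
score {report.get('qualityScore', 'n/a')}</p>
<table><tr><th>File</th><th>Language</th><th>LOC</th><th>Complexity</th></tr>
{rows}</table>
</body></html>"""

if __name__ == "__main__":
    report = json.loads(Path("scan.json").read_text())  # assumed output file name
    Path("report.html").write_text(render_html(report))
```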

<details>
<summary><strong>Quality Score Prompt</strong></summary>

```
Add a quality scoring system (0-100) to my scanner:
- Weight complexity (30%), duplication (25%), code smells (25%), docs (20%)
- Output a letter grade (A/B/C/D/F)
- Include in both JSON and HTML reports
```

</details>
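
The scoring itself is a weighted average; the only design decisions are how each sub-score is normalized and where the grade cutoffs sit. A sketch (the cutoffs are an assumption, not a standard):

```python
def quality_score(complexity: float, duplication: float,
                  smells: float, docs: float) -> tuple[int, str]:
    """Combine four sub-scores (each already normalized to 0-100)
    using the weights from the prompt above."""
    score = round(0.30 * complexity + 0.25 * duplication
                  + 0.25 * smells + 0.20 * docs)
    # Conventional school-style cutoffs; adjust to taste.
    for grade, cutoff in [("A", 90), ("B", 80), ("C", 70), ("D", 60)]:
        if score >= cutoff:
            return score, grade
    return score, "F"
```

The hard part is upstream: turning "average complexity 7.2" or "12% duplication" into a 0-100 sub-score. Decide those mappings explicitly and write them down in `DESIGN.md`.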

**Before moving on:**
- How did the agent handle UI/visualization generation?
- What manual adjustments were needed?

**Commit your work:** `git add . && git commit -m "Report generation working"`

---

## Patterns to Practice

**Design Document** *(from DPI)*
Before prompting for implementation, sketch the interface: what does the CLI accept, what does the JSON output look like, what features are in scope? Even a 2-minute `DESIGN.md` saves 20 minutes of re-prompting.

**Active Partner** *(from DPI)*
The agent defaults to silent compliance. Push back: "What are the trade-offs?" "What would you do differently?" "Push back if something seems wrong."

**Feedback Loop** *(from Evolve Loop)*
Set up a cycle where the agent validates its own output: implement a feature → run the scanner on a real codebase → read the output → fix issues → re-run. The agent should be iterating, not you.
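
Concretely, give the agent a check it can run after every change. A sketch of such a harness (the `python -m pry <dir> --json` invocation is an assumption; substitute your scanner's real CLI):

```python
import json
import subprocess
import sys

def validate_scan(target_dir: str) -> bool:
    """Run the scanner on a real codebase and sanity-check its output."""
    result = subprocess.run(
        [sys.executable, "-m", "pry", target_dir, "--json"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print("scanner crashed:", result.stderr)
        return False
    report = json.loads(result.stdout)  # raises if the output isn't valid JSON
    ok = report.get("repository", {}).get("totalFiles", 0) > 0
    print(("output looks sane: " if ok else "output malformed: ") + target_dir)
    return ok

if __name__ == "__main__":
    sys.exit(0 if validate_scan(sys.argv[1]) else 1)
```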

**Encoding Priority** *(from Evolve Loop)*
As you work, notice what you're repeating. A prompt you've typed three times should become a rule. A multi-step workflow should become a command. Encode as you go — don't save it all for the end.

---

## Obstacles to Watch

**Compliance Bias** — The agent says "Sure thing!" even when confused. If it agrees instantly without asking questions, that's a signal. Force alignment: "Tell me what you're going to build before you build it."

**Context Degradation** — After many exchanges, the agent loses track of earlier decisions. Watch for contradictions or repeated mistakes. When you see it: start a fresh session with a summary of where you are.

**Silent Misalignment** — The agent builds confidently in the wrong direction. Confidence does not equal correctness. Run the scanner on real code frequently — real output catches misalignment faster than reading generated code.

---

## Definition of Done

**Core (required):**
- [ ] Functional scanner (walks a codebase, extracts metrics)
- [ ] 2-3 analysis features working
- [ ] JSON + one human-readable report format
- [ ] Ground rules file created (`CLAUDE.md` or `.cursor/rules/`) with at least 2-3 meaningful rules
- [ ] At least one custom command encoding a workflow from the build
- [ ] Feedback Loop applied: agent validates its work against scanner output (implement → scan → fix → rescan)

**Bonus (pick any):**
- [ ] Quality scoring system (0-100, letter grade)
- [ ] 3+ chart visualizations
- [ ] Unit tests for critical paths
- [ ] Model strategy applied (Opus for planning, Sonnet for implementation)
- [ ] `DESIGN.md` with architecture decisions documented
- [ ] Use **Worktrees** to run Parallel Implementations in isolated branches
- [ ] Use **Subagents** to spawn parallel specialists for independent tasks or phases (ask Claude about it)
- [ ] Use **Agent Teams** to coordinate multiple agents via a shared task list (ask Claude to search its latest docs - it's an experimental feature)

---

## Track What Works (and What Doesn't)

Keep a mental or written log as you build:

| Worked Well | Needed Iteration | Failed/Abandoned |
|---|---|---|
| e.g., "File traversal prompt worked first try" | e.g., "Complexity calc needed 3 attempts" | e.g., "Gave up on X, did Y instead" |

**Notice the patterns:**
- When did DPI (designing before implementing) save time?
- When did a fresh session beat continuing a long one?
- When did the Feedback Loop catch something you missed?
- What did you encode as a rule? What should you have encoded earlier?

---

## Milestones

**Milestone 1: Basic Scanner Works (~15 min)**
- Can scan any directory
- Outputs valid JSON
- Handles errors

**Milestone 2: Analysis Added (~45 min)**
- At least 2 analysis features working
- Enhanced JSON output
- Tested on real codebase

**Milestone 3: Reports Generated (~30 min)**
- Human-readable format (HTML or Markdown)
- Includes visualizations
- Looks presentable

**Milestone 4: Infrastructure Created (~remaining time)**
- Ground rules file with 2-3 rules
- Custom command for a workflow
- Feedback loop running

**Aim to reach Milestones 2 and 4 at minimum. Reaching Milestone 3 as well is excellent.**

---

## If You're Stuck

<details>
<summary><strong>10 minutes in, nothing working?</strong></summary>

```
I'm trying to build a code scanner in [LANGUAGE].
I need it to: traverse directories, count lines, detect file types.
Generate a complete working starter with CLI interface.
Keep it simple — I'll enhance it later.
```

</details>

<details>
<summary><strong>Analysis feature not integrating?</strong></summary>

```
I have a scanner that outputs JSON. I want to add [FEATURE].
Here's my current JSON output: [PASTE]
Generate code that adds [FEATURE] data to this structure.
```

</details>

<details>
<summary><strong>Report generation broken?</strong></summary>

```
Generate a standalone HTML file that:
- Reads my scanner's JSON output
- Displays a dashboard with [METRICS]
- Uses CDN libraries for visualizations
- Looks professional
Complete single-file solution.
```

</details>

<details>
<summary><strong>Don't know what rules to create?</strong></summary>

Ask the agent:

```
Review our conversation. What ground rules would have
prevented our correction cycles? Suggest 2-3 rules I should
save to [CLAUDE.md / .cursor/rules/].
```

</details>

<details>
<summary><strong>Don't know what to encode as a command?</strong></summary>

Look for a multi-step workflow you've done more than once. Common examples:
- "Run the scanner on this directory, then open the HTML report"
- "Run tests, then run the scanner, then compare quality scores"
- "Validate the scanner's JSON output against the expected schema"

</details>

---

## Detailed Requirements (Reference Only)

Use these to validate your implementation, not to build from scratch.

### Core Capabilities Checklist

- [ ] File system traversal with ignore patterns
- [ ] Multi-language detection (3+ languages)
- [ ] Basic metrics: LOC, comments, blanks
- [ ] Function/class counting
- [ ] 3+ advanced analysis features
- [ ] 2+ report formats (JSON + HTML/Markdown/CSV)
- [ ] Quality scoring (0-100)
- [ ] Visualization (2+ charts/graphs)
- [ ] Unit tests for critical paths

<details>
<summary><strong>Expected JSON Schema</strong></summary>

```json
{
  "scanId": "unique-identifier",
  "timestamp": "ISO-8601-timestamp",
  "repository": {
    "path": "/path/to/repo",
    "totalFiles": 0,
    "totalLines": 0,
    "languages": {}
  },
  "files": [
    {
      "path": "relative/path/to/file",
      "language": "detected-language",
      "metrics": {
        "loc": 0,
        "comments": 0,
        "complexity": 0
      }
    }
  ],
  "qualityScore": 85,
  "grade": "B"
}
```

</details>
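
This schema also makes a good feedback-loop target: a few lines of structural checking catch most regressions. A sketch (not a full JSON Schema validator):

```python
def check_schema(report: dict) -> list[str]:
    """Cheap structural check against the expected shape above."""
    problems = []
    for key in ("scanId", "timestamp", "repository", "files",
                "qualityScore", "grade"):
        if key not in report:
            problems.append(f"missing top-level key: {key}")
    for i, entry in enumerate(report.get("files", [])):
        for key in ("path", "language", "metrics"):
            if key not in entry:
                problems.append(f"files[{i}] missing: {key}")
    return problems
```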

---

## Part I Checkpoint

Before moving to Part II, you should have:

- **Core scanner working** — Can scan a codebase and extract metrics
- **Analysis features** — At least 2-3 features implemented
- **Reports** — At least JSON + one human-readable format
- **Infrastructure** — Ground rules + custom command + feedback loop

**Don't have all of this?** That's OK. Move to Part II anyway and build something creative with what you have. The infrastructure DoD items can be completed during Part II.

**Before Part II, ensure you have:**
- A clean git commit of your working Part I code
- Your scanner's JSON output format documented or understood
- Notes on what prompts and workflows worked well

**Context tip:** Part II is a good time for a fresh agent session. Summarize your Part I scanner in 2-3 sentences rather than pasting everything.