pi-rtk 0.1.0 → 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/package.json +13 -9
  2. package/RTK.md +0 -845
package/package.json CHANGED
@@ -1,19 +1,22 @@
 {
   "name": "pi-rtk",
-  "version": "0.1.0",
+  "version": "0.1.1",
   "description": "RTK token reduction extension for pi-coding-agent - reduces LLM token consumption 60-90% by intelligently filtering tool output",
   "type": "module",
-  "main": "./index.ts",
+  "main": "index.ts",
   "pi": {
     "extensions": [
       "./index.ts"
     ]
   },
   "scripts": {
-    "clean": "echo 'nothing to clean'",
-    "build": "echo 'nothing to build'",
     "check": "bunx tsc --noEmit"
   },
+  "files": [
+    "*.ts",
+    "techniques/*.ts",
+    "README.md"
+  ],
   "keywords": [
     "pi",
     "pi-coding-agent",
@@ -23,13 +26,14 @@
     "rtk",
     "ai"
   ],
-  "author": "",
+  "author": "Matt Cowger",
   "license": "MIT",
   "repository": {
     "type": "git",
-    "url": "https://github.com/mcowger/pi-rtk.git"
+    "url": "git+https://github.com/mcowger/pi-rtk.git"
+  },
+  "bugs": {
+    "url": "https://github.com/mcowger/pi-rtk/issues"
   },
-  "peerDependencies": {
-    "@mariozechner/pi-coding-agent": "*"
-  }
+  "homepage": "https://github.com/mcowger/pi-rtk#readme"
 }
package/RTK.md DELETED
@@ -1,845 +0,0 @@
# RTK Token Reduction Techniques

This document describes the token reduction techniques used by the Rust Token Killer (RTK) CLI proxy. These techniques are presented in pseudo-code for easy re-implementation in other languages.

## Overview

RTK is a command proxy that intercepts CLI tool output and applies intelligent filtering to reduce token consumption by 60-90% while preserving essential information. The core philosophy is: **Keep what matters, remove what doesn't**.

## Core Techniques

### 1. Language-Aware Source Code Filtering

When reading source code files, RTK applies three levels of filtering:

```
enum FilterLevel:
    NONE        # Pass through unchanged
    MINIMAL     # Remove comments, normalize whitespace
    AGGRESSIVE  # Keep only signatures and structure

function filterSourceCode(content, language, level):
    patterns = getCommentPatterns(language)
    result = empty string
    inBlockComment = false
    inDocstring = false

    for each line in content:
        trimmed = line.trim()

        # Handle block comments
        if patterns.blockStart exists and trimmed contains patterns.blockStart:
            if not a doc comment:
                inBlockComment = true

        if inBlockComment:
            if trimmed contains patterns.blockEnd:
                inBlockComment = false
            skip to next line

        # Handle Python docstrings (keep in minimal mode)
        if language is PYTHON and trimmed starts with '"""':
            inDocstring = not inDocstring
            append line to result
            continue

        if inDocstring:
            append line to result
            continue

        # Skip single-line comments (but keep doc comments)
        if patterns.lineComment exists:
            if trimmed starts with patterns.lineComment:
                if patterns.docComment exists and trimmed starts with patterns.docComment:
                    append line to result
                skip to next line

        # Keep blank lines for now; runs are collapsed below
        if trimmed is empty:
            append newline to result
            continue

        append line to result

    # Normalize multiple blank lines to max 2
    result = replace 3+ consecutive newlines with 2 newlines
    return result.trim()
```
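The MINIMAL level can be sketched briefly in TypeScript (the language this package ships). This is a reduced illustration of the pseudocode above, not RTK's actual implementation: it handles only single-line comments plus blank-line collapsing, and `minimalFilter`, `lineComment`, and `docComment` are names invented here.

```typescript
// Reduced MINIMAL-level filter: drop single-line comments (keeping doc
// comments) and collapse runs of blank lines down to one blank line.
// Block comments and Python docstrings from the full pseudocode are
// omitted for brevity.
function minimalFilter(
  content: string,
  lineComment: string,
  docComment?: string,
): string {
  const kept: string[] = [];
  for (const line of content.split("\n")) {
    const trimmed = line.trim();
    if (trimmed.startsWith(lineComment)) {
      // Doc comments survive; plain comments are dropped.
      if (docComment && trimmed.startsWith(docComment)) kept.push(line);
      continue;
    }
    kept.push(line);
  }
  // Normalize 3+ consecutive newlines down to 2.
  return kept.join("\n").replace(/\n{3,}/g, "\n\n").trim();
}
```

For Rust-style input, `minimalFilter(src, "//", "///")` drops `// notes` while keeping `/// docs`.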

**Aggressive Mode Extension:**

```
function aggressiveFilter(content, language):
    minimal = filterSourceCode(content, language, MINIMAL)
    result = empty string
    braceDepth = 0
    inImplementation = false

    # Patterns to preserve
    importPattern = regex "^(use |import |from |require\(|#include)"
    signaturePattern = regex "^(pub\s+)?(async\s+)?(fn|def|function|func|class|struct|enum|trait|interface|type)\s+\w+"

    for each line in minimal:
        trimmed = line.trim()

        # Always keep imports
        if importPattern matches trimmed:
            append line to result
            continue

        # Keep function/type signatures
        if signaturePattern matches trimmed:
            append line to result
            inImplementation = true
            braceDepth = 0
            continue

        # Track brace depth for bodies
        if inImplementation:
            openBraces = count '{' in trimmed
            closeBraces = count '}' in trimmed
            braceDepth += openBraces - closeBraces

            # Only keep opening/closing braces
            if braceDepth <= 1 and (trimmed is "{" or trimmed is "}" or trimmed ends with "{"):
                append line to result

            if braceDepth <= 0:
                inImplementation = false
                if trimmed is not empty and trimmed is not "}":
                    append "    // ... implementation" to result
            continue

        # Keep constants and type definitions
        if trimmed starts with "const " or "static " or "let " or "pub const " or "pub static ":
            append line to result

    return result.trim()
```

**Smart Truncation for Large Files:**

```
function smartTruncate(content, maxLines, language):
    lines = content.split into array
    if lines.length <= maxLines:
        return content

    result = empty array
    keptLines = 0
    skippedSection = false

    # Patterns for important lines
    importantPattern = regex for signatures, imports, exports, braces

    for each line in lines:
        trimmed = line.trim()
        isImportant = importantPattern matches trimmed

        if isImportant or keptLines < maxLines / 2:
            if skippedSection:
                append "    // ... N lines omitted" to result
                skippedSection = false
            append line to result
            keptLines += 1
        else:
            skippedSection = true

        if keptLines >= maxLines - 1:
            break

    if skippedSection or keptLines < lines.length:
        append "// ... N more lines (total: X)" to result

    return result.join("\n")
```

### 2. ANSI Escape Sequence Stripping

Remove color codes and formatting from terminal output:

```
function stripAnsi(text):
    ansiPattern = regex "\x1b\[[0-9;]*[a-zA-Z]"
    return replace all matches of ansiPattern with empty string
```
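A direct TypeScript rendering of this step. Illustrative only: as with the pseudocode, the pattern covers CSI sequences but not other escape families (e.g. OSC titles and hyperlinks).

```typescript
// Strip CSI escape sequences (colors, cursor movement) from terminal
// output. \x1b is ESC; the bracketed parameters end at a single letter.
function stripAnsi(text: string): string {
  const ansiPattern = /\x1b\[[0-9;]*[a-zA-Z]/g;
  return text.replace(ansiPattern, "");
}
```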

### 3. Text Truncation

```
function truncate(text, maxLength):
    charCount = count characters in text
    if charCount <= maxLength:
        return text

    if maxLength < 3:
        return "..."

    return first (maxLength - 3) characters + "..."
```
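The same logic in TypeScript, mirroring the pseudocode branch for branch; three characters of the budget are reserved for the ellipsis marker.

```typescript
// Character-budget truncation: pass short strings through, reserve
// three characters for "..." otherwise.
function truncate(text: string, maxLength: number): string {
  if (text.length <= maxLength) return text;
  if (maxLength < 3) return "...";
  return text.slice(0, maxLength - 3) + "...";
}
```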

### 4. Test Output Filtering

#### 4.1 Test Result Aggregation

```
function aggregateTestResults(output):
    # Parse test result lines
    resultPattern = regex "test result: (\w+)\.\s+(\d+) passed;\s+(\d+) failed;"

    aggregated = new TestSummary

    for each line in output:
        if resultPattern matches line:
            matches = resultPattern.capture(line)
            status = matches[1]
            passed = parseInt(matches[2])
            failed = parseInt(matches[3])

            aggregated.passed += passed
            aggregated.failed += failed
            aggregated.suites += 1

    if aggregated.failed == 0:
        return compactFormat(aggregated)
    else:
        return detailedFailureFormat(output)
```
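A TypeScript sketch of the aggregation step for cargo-style result lines. The regex follows the pseudocode; `compactFormat` and `detailedFailureFormat` are reduced here to inline strings for illustration, so the exact output shape is an assumption.

```typescript
// Sum "test result:" lines across suites. A fully green run collapses
// to one line; any failure falls back to the full output.
interface TestSummary {
  passed: number;
  failed: number;
  suites: number;
}

function aggregateTestResults(output: string): string {
  const resultPattern = /test result: (\w+)\.\s+(\d+) passed;\s+(\d+) failed;/;
  const agg: TestSummary = { passed: 0, failed: 0, suites: 0 };
  for (const line of output.split("\n")) {
    const m = resultPattern.exec(line);
    if (m) {
      agg.passed += parseInt(m[2], 10);
      agg.failed += parseInt(m[3], 10);
      agg.suites += 1;
    }
  }
  if (agg.failed === 0) {
    return `✓ ${agg.passed} passed across ${agg.suites} suites`;
  }
  return output; // keep the detailed failure output intact
}
```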

#### 4.2 Failure-Only Mode

```
function filterTestFailures(output):
    failures = empty list
    inFailureBlock = false
    currentFailure = empty list

    for each line in output:
        if line starts with "FAIL" or line contains "FAILED":
            if inFailureBlock and currentFailure not empty:
                add currentFailure to failures
            inFailureBlock = true
            currentFailure = [line]
        else if inFailureBlock:
            if line is empty and currentFailure.length > 3:
                add currentFailure to failures
                inFailureBlock = false
            else if line starts with whitespace or line starts with "----":
                add line to currentFailure
            else:
                add currentFailure to failures
                inFailureBlock = false

    if inFailureBlock and currentFailure not empty:
        add currentFailure to failures

    return formatFailures(failures)
```

### 5. Git Output Compaction

#### 5.1 Diff Compaction

```
function compactDiff(diffOutput, maxLines):
    result = empty list
    currentFile = ""
    added = 0
    removed = 0
    inHunk = false
    hunkLines = 0
    maxHunkLines = 10

    for each line in diffOutput:
        if line starts with "diff --git":
            # New file - flush previous
            if currentFile not empty and (added > 0 or removed > 0):
                append "  +N -M" to result

            currentFile = extract filename from line
            append "\n📄 " + currentFile to result
            added = 0
            removed = 0
            inHunk = false

        else if line starts with "@@":
            inHunk = true
            hunkLines = 0
            hunkInfo = extract between @@ markers
            append "  @@ " + hunkInfo + " @@" to result

        else if inHunk:
            if line starts with "+" and not "+++":
                added += 1
                if hunkLines < maxHunkLines:
                    append "  " + line to result
                    hunkLines += 1

            else if line starts with "-" and not "---":
                removed += 1
                if hunkLines < maxHunkLines:
                    append "  " + line to result
                    hunkLines += 1

            else if hunkLines < maxHunkLines and not line starts with "\\":
                if hunkLines > 0:
                    append "  " + line to result
                    hunkLines += 1

            if hunkLines == maxHunkLines:
                append "  ... (truncated)" to result
                hunkLines += 1

        if result.length >= maxLines:
            append "\n... (more changes truncated)" to result
            break

    # Flush last file stats
    if currentFile not empty and (added > 0 or removed > 0):
        append "  +N -M" to result

    return result.join("\n")
```

#### 5.2 Status Compaction

```
function compactStatus(porcelainOutput):
    lines = porcelainOutput.split into array

    if lines is empty:
        return "Clean working tree"

    staged = 0
    modified = 0
    untracked = 0
    conflicts = 0

    stagedFiles = empty list
    modifiedFiles = empty list
    untrackedFiles = empty list

    branchName = parse from first line (branch header)

    for each line in lines skip first (branch line):
        if line.length < 3:
            continue

        status = line[0..1]
        filename = line[3..]

        # Parse two-character status
        indexStatus = status[0]
        worktreeStatus = status[1]

        if indexStatus in ['M', 'A', 'D', 'R', 'C']:
            staged += 1
            add filename to stagedFiles

        if indexStatus == 'U':
            conflicts += 1

        if worktreeStatus in ['M', 'D']:
            modified += 1
            add filename to modifiedFiles

        if status == "??":
            untracked += 1
            add filename to untrackedFiles

    # Build summary output
    result = "📌 " + branchName + "\n"

    if staged > 0:
        result += "✅ Staged: N files\n"
        show up to 5 files from stagedFiles
        if more than 5: "... +N more"

    if modified > 0:
        result += "📝 Modified: N files\n"
        show up to 5 files from modifiedFiles
        if more than 5: "... +N more"

    if untracked > 0:
        result += "❓ Untracked: N files\n"
        show up to 3 files from untrackedFiles
        if more than 3: "... +N more"

    if conflicts > 0:
        result += "⚠️ Conflicts: N files\n"

    return result
```

#### 5.3 Log Compaction

```
function compactLog(logOutput, limit):
    lines = logOutput.split into array
    result = empty list

    for each line in lines take limit:
        if line.length > 80:
            line = first 77 characters + "..."
        add line to result

    return result.join("\n")
```
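This one translates almost mechanically to TypeScript; the 80/77-character limits come straight from the pseudocode.

```typescript
// Cap a log to `limit` lines and clip each kept line to 80 characters,
// reserving three for the "..." marker.
function compactLog(logOutput: string, limit: number): string {
  return logOutput
    .split("\n")
    .slice(0, limit)
    .map((line) => (line.length > 80 ? line.slice(0, 77) + "..." : line))
    .join("\n");
}
```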

### 6. Build Output Filtering

#### 6.1 Compilation Noise Removal

```
function filterBuildOutput(output):
    result = empty list
    errors = empty list
    warnings = empty list
    compiled = 0
    inErrorBlock = false
    currentError = empty list

    skipPatterns = [
        starts with "Compiling",
        starts with "Checking",
        starts with "Downloading",
        starts with "Downloaded",
        starts with "Finished"
    ]

    for each line in output:
        if any pattern matches line:
            if line starts with "Compiling" or line starts with "Checking":
                compiled += 1
            continue

        # Detect errors
        if line starts with "error[" or line starts with "error:":
            if inErrorBlock and currentError not empty:
                add currentError to errors
            inErrorBlock = true
            currentError = [line]

        else if line starts with "warning:" or line starts with "warning[":
            add line to warnings

        else if inErrorBlock:
            if line.trim() is empty and currentError.length > 3:
                add currentError to errors
                inErrorBlock = false
            else:
                add line to currentError

    if inErrorBlock and currentError not empty:
        add currentError to errors

    if errors is empty and warnings is empty:
        return "✓ Build successful (N crates compiled)"

    return formatErrorsAndWarnings(errors, warnings, compiled)
```

### 7. Search Result Grouping

```
function compactSearchResults(pattern, output, maxResults):
    results = parse lines into (file, lineNumber, content) tuples

    # Group by file
    byFile = new Map<file, list of (lineNumber, content)>
    for each result in results:
        byFile.getOrCreate(result.file).add(result)

    # Build output
    output = "🔍 N matches in F files:\n\n"

    files = sort by key(byFile)
    shown = 0

    for each (file, matches) in files:
        if shown >= maxResults:
            break

        compactFile = compactPath(file, 50)
        output += "📄 " + compactFile + " (N matches):\n"

        for each (lineNum, content) in matches take 10:
            cleaned = content.trim()
            if cleaned.length > maxLineLength:
                cleaned = truncate(cleaned, maxLineLength)
            output += "  " + lineNum + ": " + cleaned + "\n"
            shown += 1

        if matches.length > 10:
            output += "  +" + (matches.length - 10) + "\n"

        output += "\n"

    if results.length > shown:
        output += "... +" + (results.length - shown) + " more\n"

    return output
```

### 8. Path Compaction

```
function compactPath(path, maxLength):
    if path.length <= maxLength:
        return path

    parts = path.split by '/'
    if parts.length <= 3:
        return path

    return parts[0] + "/.../" + parts[parts.length - 2] + "/" + parts[parts.length - 1]
```
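In TypeScript, keeping the first component and the last two, as in the pseudocode:

```typescript
// Compact a long path to "first/.../second-last/last"; short or shallow
// paths pass through unchanged.
function compactPath(path: string, maxLength: number): string {
  if (path.length <= maxLength) return path;
  const parts = path.split("/");
  if (parts.length <= 3) return path;
  return `${parts[0]}/.../${parts[parts.length - 2]}/${parts[parts.length - 1]}`;
}
```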

### 9. JSON Structure Extraction

```
function extractJsonSchema(jsonString, maxDepth):
    value = parse jsonString
    return extractSchema(value, depth=0, maxDepth)

function extractSchema(value, depth, maxDepth):
    if depth > maxDepth:
        return indent(depth) + "..."

    indent = "  " repeated depth times

    switch value.type:
        case NULL:
            return indent + "null"

        case BOOLEAN:
            return indent + "bool"

        case NUMBER:
            if value is integer:
                return indent + "int"
            else:
                return indent + "float"

        case STRING:
            if value looks like URL:
                return indent + "url"
            else if value looks like date:
                return indent + "date?"
            else:
                return indent + "string"

        case ARRAY:
            if value is empty:
                return indent + "[]"
            else:
                firstSchema = extractSchema(value[0], depth + 1, maxDepth)
                if value.length == 1:
                    return indent + "[\n" + firstSchema + "\n" + indent + "]"
                else:
                    return indent + "[" + firstSchema.trim() + "] (" + value.length + ")"

        case OBJECT:
            if value is empty:
                return indent + "{}"

            lines = [indent + "{"]
            keys = sort(value.keys)

            for each key in keys take 15:
                childSchema = extractSchema(value[key], depth + 1, maxDepth)
                if value[key] is simple type:
                    lines.add(indent + "  " + key + ": " + childSchema.trim() + ",")
                else:
                    lines.add(indent + "  " + key + ":")
                    lines.add(childSchema)

            if keys.length > 15:
                lines.add(indent + "  ... +" + (keys.length - 15) + " more keys")

            lines.add(indent + "}")
            return lines.join("\n")
```
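A compact TypeScript sketch of the core idea: values become type names, arrays become the first element's schema plus a count, and objects become a capped key listing. It simplifies the pseudocode deliberately: the URL/date string heuristics and the multi-line indented rendering are replaced by a single-line form.

```typescript
// Summarize a parsed JSON value as a type skeleton. Arrays are assumed
// homogeneous: the first element stands in for all of them.
function extractSchema(value: unknown, depth = 0, maxDepth = 3): string {
  if (depth > maxDepth) return "...";
  if (value === null) return "null";
  if (typeof value === "boolean") return "bool";
  if (typeof value === "number") return Number.isInteger(value) ? "int" : "float";
  if (typeof value === "string") return "string";
  if (Array.isArray(value)) {
    if (value.length === 0) return "[]";
    return `[${extractSchema(value[0], depth + 1, maxDepth)}] (${value.length})`;
  }
  const obj = value as Record<string, unknown>;
  const keys = Object.keys(obj).sort();
  if (keys.length === 0) return "{}";
  const shown = keys
    .slice(0, 15)
    .map((k) => `${k}: ${extractSchema(obj[k], depth + 1, maxDepth)}`);
  if (keys.length > 15) shown.push(`... +${keys.length - 15} more keys`);
  return `{ ${shown.join(", ")} }`;
}
```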

### 10. Linter Output Aggregation

```
function aggregateLinterOutput(output, linterType):
    # Parse based on linter type (ESLint, Ruff, Pylint, etc.)
    issues = parseIssues(output, linterType)

    if issues is empty:
        return "✓ " + linterType + ": No issues found"

    # Count by severity
    errors = count where issue.severity == ERROR
    warnings = count where issue.severity == WARNING

    # Group by rule
    byRule = new Map<rule, count>
    for each issue in issues:
        byRule[issue.rule] += 1

    # Group by file
    byFile = new Map<file, count>
    for each issue in issues:
        byFile[issue.file] += 1

    # Build output
    result = linterType + ": " + errors + " errors, " + warnings + " warnings in " + byFile.length + " files\n"
    result += "═══════════════════════════════════════\n"

    # Top rules
    sortedRules = sort by value descending(byRule)
    result += "Top rules:\n"
    for each (rule, count) in sortedRules take 10:
        result += "  " + rule + " (" + count + "x)\n"

    # Top files
    result += "\nTop files:\n"
    sortedFiles = sort by value descending(byFile)
    for each (file, count) in sortedFiles take 10:
        compact = compactPath(file)
        result += "  " + compact + " (" + count + " issues)\n"

        # Show top 3 rules per file
        fileRules = filter issues where issue.file == file, group by rule
        sortedFileRules = sort by count descending(fileRules)
        for each (rule, count) in sortedFileRules take 3:
            result += "    " + rule + " (" + count + ")\n"

    return result
```

### 11. Generic Output Summarization

```
function summarizeOutput(output, command):
    lines = output.split into array

    # Detect output type
    if command contains "test" or (output contains "passed" and output contains "failed"):
        return summarizeTests(output)

    else if command contains "build" or output contains "compiling":
        return summarizeBuild(output)

    else if output starts with "{" or output starts with "[":
        return summarizeJson(output)

    else if all lines are short and not tab-separated:
        return summarizeList(output)

    else:
        return summarizeGeneric(output)

function summarizeTests(output):
    passed = extract count from "N passed"
    failed = extract count from "N failed"
    skipped = extract count from "N skipped"

    result = "📋 Test Results:\n"
    result += "  ✅ " + passed + " passed\n"
    if failed > 0:
        result += "  ❌ " + failed + " failed\n"
    if skipped > 0:
        result += "  ⏭️ " + skipped + " skipped\n"

    # Collect failure details
    failures = extract lines containing "FAIL" or failure markers
    if failures not empty:
        result += "\n  Failures:\n"
        for each failure in failures take 5:
            result += "    • " + truncate(failure, 70) + "\n"

    return result
```

### 12. Tee Recovery System

When filtering fails, save raw output for recovery:

```
function teeRawOutput(raw, commandSlug, exitCode, config):
    if not shouldTee(config, raw.length, exitCode):
        return null

    # Sanitize filename
    sanitized = commandSlug
        .replace non-alphanumeric chars (except _ and -) with _
        .truncate to 40 chars

    filename = timestamp + "_" + sanitized + ".log"
    filepath = teeDirectory + "/" + filename

    # Truncate if exceeds max file size
    if raw.length > config.maxFileSize:
        content = raw[0..config.maxFileSize] + "\n\n--- truncated at N bytes ---"
    else:
        content = raw

    write content to filepath

    # Rotate old files
    cleanupOldFiles(teeDirectory, config.maxFiles)

    return filepath

function shouldTee(config, rawLength, exitCode):
    if not config.enabled:
        return false

    if config.mode == NEVER:
        return false

    if config.mode == FAILURES and exitCode == 0:
        return false

    if rawLength < minimumTeeSize:
        return false

    return true
```

### 13. Multi-Tier Parsing Strategy

For maximum compatibility with changing output formats:

```
enum ParseResult:
    FULL(data)                # Tier 1: Complete structured parse
    DEGRADED(data, warnings)  # Tier 2: Partial/regex-based parse
    PASSTHROUGH(raw)          # Tier 3: Fallback to truncated raw output

function parseOutput(output, parsers):
    # Tier 1: Try structured parsing (JSON, XML, etc.)
    for each structuredParser in parsers.structured:
        try:
            data = structuredParser.parse(output)
            return ParseResult.FULL(data)
        catch ParseError:
            continue

    # Tier 2: Try regex extraction
    for each regexParser in parsers.regex:
        data = regexParser.extract(output)
        if data is valid:
            warnings = ["Structured parse failed, using regex fallback"]
            return ParseResult.DEGRADED(data, warnings)

    # Tier 3: Passthrough with truncation
    return ParseResult.PASSTHROUGH(truncate(output, 500))
```
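The three tiers map naturally onto a discriminated union in TypeScript. A sketch under assumed interfaces: here parsers are plain functions (structured ones throw on failure, regex extractors return `null`), which is an illustration rather than RTK's API.

```typescript
// Three-tier parse result: full structured data, degraded regex data
// with warnings, or truncated raw passthrough.
type ParseResult =
  | { tier: "full"; data: unknown }
  | { tier: "degraded"; data: unknown; warnings: string[] }
  | { tier: "passthrough"; raw: string };

function parseOutput(
  output: string,
  structured: Array<(s: string) => unknown>,
  regexes: Array<(s: string) => unknown | null>,
): ParseResult {
  // Tier 1: structured parsers; a throw means "try the next one".
  for (const parse of structured) {
    try {
      return { tier: "full", data: parse(output) };
    } catch {
      // fall through to the next parser
    }
  }
  // Tier 2: regex extractors; null means "no match".
  for (const extract of regexes) {
    const data = extract(output);
    if (data !== null) {
      return {
        tier: "degraded",
        data,
        warnings: ["Structured parse failed, using regex fallback"],
      };
    }
  }
  // Tier 3: truncated raw output, never a hard failure.
  const raw = output.length > 500 ? output.slice(0, 497) + "..." : output;
  return { tier: "passthrough", raw };
}
```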

### 14. Error Pattern Detection

```
function filterErrors(output):
    errorPatterns = [
        regex "(?i)^.*error[\s:\[].*$",
        regex "(?i)^.*\berr\b.*$",
        regex "(?i)^.*warning[\s:\[].*$",
        regex "(?i)^.*\bwarn\b.*$",
        regex "(?i)^.*failed.*$",
        regex "(?i)^.*failure.*$",
        regex "(?i)^.*exception.*$",
        regex "(?i)^.*panic.*$",
        # Language-specific patterns
        regex "^error\[E\d+\]:.*$",            # Rust
        regex "^\s*--> .*:\d+:\d+$",           # Rust location
        regex "^Traceback.*$",                 # Python
        regex "^\s*File \".*\", line \d+.*$",  # Python traceback
        regex "^\s*at .*:\d+:\d+.*$",          # JS/TS stack trace
        regex "^.*\.go:\d+:.*$"                # Go error
    ]

    result = empty list
    inErrorBlock = false
    blankCount = 0

    for each line in output:
        isErrorLine = any pattern matches line

        if isErrorLine:
            inErrorBlock = true
            blankCount = 0
            add line to result
        else if inErrorBlock:
            if line.trim() is empty:
                blankCount += 1
                if blankCount >= 2:
                    inErrorBlock = false
                else:
                    add line to result
            else if line starts with whitespace:
                # Continuation of error context
                add line to result
                blankCount = 0
            else:
                inErrorBlock = false

    return result.join("\n")
```
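A minimal TypeScript sketch with only a subset of the patterns above, to show the block-tracking shape; it also simplifies the blank-line handling, so it is an illustration rather than a faithful port.

```typescript
// A few representative error-line patterns from the list above.
const ERROR_PATTERNS: RegExp[] = [
  /error[\s:\[]/i,
  /warning[\s:\[]/i,
  /\bfailed\b/i,
  /\bpanic\b/i,
  /^Traceback/,
];

// Keep lines matching an error pattern, plus indented continuation
// lines while an error block is open; everything else closes the block.
function filterErrors(output: string): string {
  const result: string[] = [];
  let inErrorBlock = false;
  for (const line of output.split("\n")) {
    if (ERROR_PATTERNS.some((p) => p.test(line))) {
      inErrorBlock = true;
      result.push(line);
    } else if (inErrorBlock && /^\s+\S/.test(line)) {
      // Indented continuation of the error context (e.g. a code span).
      result.push(line);
    } else {
      inErrorBlock = false;
    }
  }
  return result.join("\n");
}
```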

### 15. Confirmation Message Formatting

```
function okConfirmation(action, detail):
    if detail is empty:
        return "ok " + action
    else:
        return "ok " + action + " " + detail

# Examples:
#   okConfirmation("merged", "#42")        => "ok merged #42"
#   okConfirmation("created", "PR #5 ...") => "ok created PR #5 ..."
#   okConfirmation("commented", "")        => "ok commented"
```

### 16. Token Counting

```
function countTokens(text):
    # Approximate token count using whitespace and punctuation splitting
    # This is a rough approximation - actual LLM tokenizers vary
    words = text.split by whitespace and punctuation

    # Apply token ratio (typically 0.75 tokens per word for English)
    return words.length * 0.75

function calculateSavings(original, filtered):
    originalTokens = countTokens(original)
    filteredTokens = countTokens(filtered)

    if originalTokens == 0:
        return 0

    savings = (originalTokens - filteredTokens) / originalTokens * 100
    return savings
```
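In TypeScript, one way to read "split by whitespace and punctuation" is a Unicode-aware regex split. As the pseudocode warns, this is a rough estimate; real BPE tokenizers will disagree.

```typescript
// Approximate token count: split on runs of whitespace/punctuation and
// scale by a rough 0.75 tokens-per-word ratio for English.
function countTokens(text: string): number {
  const words = text.split(/[\s\p{P}]+/u).filter((w) => w.length > 0);
  return words.length * 0.75;
}

// Percentage of estimated tokens removed by filtering.
function calculateSavings(original: string, filtered: string): number {
  const originalTokens = countTokens(original);
  if (originalTokens === 0) return 0;
  return ((originalTokens - countTokens(filtered)) / originalTokens) * 100;
}
```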

### 17. Tracking and Metrics

```
function trackCommand(originalCommand, rtkCommand, originalOutput, filteredOutput):
    record = {
        timestamp: now(),
        original_command: originalCommand,
        rtk_command: rtkCommand,
        input_tokens: countTokens(originalOutput),
        output_tokens: countTokens(filteredOutput),
        savings_percent: calculateSavings(originalOutput, filteredOutput)
    }

    save record to tracking database

    return record.savings_percent
```

## Summary of Key Patterns

1. **Remove noise**: Strip compilation messages, download progress, and other non-actionable output
2. **Group by category**: Aggregate results by file, rule, or error type rather than listing individually
3. **Show counts, not details**: Replace long lists with "N items" summaries
4. **Truncate intelligently**: Keep the beginnings and ends of important sections; omit the middle
5. **Preserve structure**: Keep file paths, line numbers, and error messages; drop surrounding context
6. **Fall back gracefully**: When parsing fails, provide degraded output rather than failing completely
7. **Track metrics**: Measure and report token savings to validate effectiveness