llm-checker 3.1.0 → 3.1.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +53 -0
- package/bin/CLAUDE.md +9 -0
- package/bin/mcp-server.mjs +208 -0
- package/package.json +10 -4
- package/src/CLAUDE.md +6 -0
- package/src/data/CLAUDE.md +6 -0
- package/src/hardware/CLAUDE.md +6 -0
- package/src/hardware/backends/CLAUDE.md +6 -0
- package/src/hardware/backends/rocm-detector.js +365 -44
- package/src/hardware/detector.js +8 -2
- package/src/models/CLAUDE.md +6 -0
- package/src/ollama/CLAUDE.md +6 -0
- package/src/plugins/CLAUDE.md +6 -0
- package/src/utils/CLAUDE.md +6 -0
package/README.md
CHANGED

@@ -23,6 +23,7 @@
 <p align="center">
   <a href="#installation">Installation</a> •
   <a href="#quick-start">Quick Start</a> •
+  <a href="#claude-code-mcp">Claude MCP</a> •
   <a href="#commands">Commands</a> •
   <a href="#scoring-system">Scoring</a> •
   <a href="#supported-hardware">Hardware</a>

@@ -91,6 +92,58 @@ llm-checker search qwen --use-case coding
 
 ---
 
+## Claude Code MCP
+
+LLM Checker includes a built-in [Model Context Protocol](https://modelcontextprotocol.io/) (MCP) server, allowing **Claude Code** and other MCP-compatible AI assistants to analyze your hardware and manage local models directly.
+
+### Setup (One Command)
+
+```bash
+# Install globally first
+npm install -g llm-checker
+
+# Add to Claude Code
+claude mcp add llm-checker -- llm-checker-mcp
+```
+
+Or with npx (no global install needed):
+
+```bash
+claude mcp add llm-checker -- npx llm-checker-mcp
+```
+
+Restart Claude Code and you're done.
+
+### Available MCP Tools
+
+Once connected, Claude can use these tools:
+
+| Tool | Description |
+|------|-------------|
+| `hw_detect` | Detect your hardware (CPU, GPU, RAM, acceleration backend) |
+| `check` | Full compatibility analysis with all models ranked by score |
+| `recommend` | Top model picks by category (coding, reasoning, multimodal, etc.) |
+| `installed` | Rank your already-downloaded Ollama models |
+| `search` | Search the Ollama model catalog with filters |
+| `smart_recommend` | Advanced recommendations using the full scoring engine |
+| `ollama_list` | List all downloaded Ollama models |
+| `ollama_pull` | Download a model from the Ollama registry |
+| `ollama_run` | Run a prompt against a local Ollama model |
+
+### Example Prompts
+
+After setup, you can ask Claude things like:
+
+- *"What's the best coding model for my hardware?"*
+- *"What models do I have installed and how do they rank?"*
+- *"Pull the top reasoning model for my system"*
+- *"Search for multimodal models under 8GB"*
+- *"Run this prompt on qwen2.5-coder"*
+
+Claude will automatically call the right tools and give you actionable results.
+
+---
+
 ## Commands
 
 ### Core Commands
package/bin/CLAUDE.md
CHANGED

@@ -9,4 +9,13 @@
 |----|------|---|-------|------|
 | #3492 | 10:24 PM | 🔵 | Enhanced CLI Structure - Lazy Loading with ASCII Art Branding | ~456 |
 | #3436 | 9:57 PM | 🔵 | Enhanced CLI Implementation - Command-Line Interface with ASCII Art and Ollama Integration | ~575 |
+
+### Feb 14, 2026
+
+| ID | Time | T | Title | Read |
+|----|------|---|-------|------|
+| #4341 | 6:49 PM | 🟣 | MCP server implementation committed to llm-checker repository | ~431 |
+| #4339 | " | 🟣 | MCP server implementation and documentation added to llm-checker repository | ~457 |
+| #4338 | " | ✅ | MCP server validated in llm-checker repository structure | ~245 |
+| #4333 | 6:48 PM | 🟣 | MCP server integrated into llm-checker package distribution | ~409 |
 </claude-mem-context>
package/bin/mcp-server.mjs
ADDED

@@ -0,0 +1,208 @@
+#!/usr/bin/env node
+
+/**
+ * LLM Checker MCP Server
+ *
+ * Model Context Protocol server that exposes llm-checker tools to Claude Code
+ * and other MCP-compatible AI assistants.
+ *
+ * Usage:
+ *   claude mcp add llm-checker -- npx llm-checker-mcp
+ *   # or
+ *   claude mcp add llm-checker -- node node_modules/llm-checker/bin/mcp-server.mjs
+ */
+
+import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
+import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
+import { z } from "zod";
+import { execFile } from "child_process";
+import { promisify } from "util";
+import { fileURLToPath } from "url";
+import { dirname, join } from "path";
+
+const exec = promisify(execFile);
+const __filename = fileURLToPath(import.meta.url);
+const __dirname = dirname(__filename);
+
+// Use the CLI from this package
+const CLI_PATH = join(__dirname, "enhanced_cli.js");
+
+// Strip ANSI escape codes for clean output
+function clean(text) {
+  return text
+    .replace(/\x1B\[[0-9;]*[a-zA-Z]/g, "")
+    .replace(/\x1B\[\?[0-9;]*[a-zA-Z]/g, "")
+    .replace(/\x1B\([A-Z]/g, "")
+    .trim();
+}
+
+async function run(args, timeout = 120000) {
+  try {
+    const { stdout, stderr } = await exec("node", [CLI_PATH, ...args], {
+      timeout,
+      env: { ...process.env, NODE_NO_WARNINGS: "1" },
+    });
+    return clean(stdout || stderr);
+  } catch (err) {
+    if (err.stdout) return clean(err.stdout);
+    throw new Error(`llm-checker failed: ${err.message}`);
+  }
+}
+
+const server = new McpServer({
+  name: "llm-checker",
+  version: "3.1.0",
+});
+
+// --- Tool: hw_detect ---
+server.tool(
+  "hw_detect",
+  "Detect hardware capabilities: CPU, GPU, RAM, acceleration backends, and recommended tier for running local LLMs",
+  {},
+  async () => {
+    const result = await run(["hw-detect"]);
+    return { content: [{ type: "text", text: result }] };
+  }
+);
+
+// --- Tool: check ---
+server.tool(
+  "check",
+  "Full system analysis: detect hardware, scan Ollama catalog, and return all compatible models ranked by score with memory estimates",
+  {},
+  async () => {
+    const result = await run(["check"], 180000);
+    return { content: [{ type: "text", text: result }] };
+  }
+);
+
+// --- Tool: recommend ---
+server.tool(
+  "recommend",
+  "Get top model recommendations for a specific use case category, ranked by the 4D scoring engine (Quality, Speed, Fit, Context)",
+  {
+    category: z
+      .enum(["general", "coding", "reasoning", "multimodal", "embedding", "small"])
+      .optional()
+      .describe("Use case category (omit for all categories)"),
+  },
+  async ({ category }) => {
+    const args = ["recommend"];
+    if (category) args.push(category);
+    const result = await run(args, 180000);
+    return { content: [{ type: "text", text: result }] };
+  }
+);
+
+// --- Tool: installed ---
+server.tool(
+  "installed",
+  "List and rank all locally installed Ollama models by compatibility score against current hardware",
+  {},
+  async () => {
+    const result = await run(["installed"], 60000);
+    return { content: [{ type: "text", text: result }] };
+  }
+);
+
+// --- Tool: search ---
+server.tool(
+  "search",
+  "Search the Ollama model catalog by keyword (e.g. 'code', 'vision', 'small'). Requires sql.js.",
+  {
+    query: z.string().describe("Search keyword (model name, family, or capability)"),
+    use_case: z
+      .enum(["general", "coding", "chat", "reasoning", "creative", "fast"])
+      .optional()
+      .describe("Optimize results for a specific use case"),
+    max_size: z.number().optional().describe("Maximum model size in GB"),
+  },
+  async ({ query, use_case, max_size }) => {
+    const args = ["search", query];
+    if (use_case) args.push("--use-case", use_case);
+    if (max_size) args.push("--max-size", String(max_size));
+    const result = await run(args, 60000);
+    return { content: [{ type: "text", text: result }] };
+  }
+);
+
+// --- Tool: smart_recommend ---
+server.tool(
+  "smart_recommend",
+  "Advanced recommendation using the full scoring engine with database integration. Requires sql.js.",
+  {
+    use_case: z
+      .enum(["general", "coding", "chat", "reasoning", "creative", "fast", "quality"])
+      .optional()
+      .describe("Use case to optimize for"),
+  },
+  async ({ use_case }) => {
+    const args = ["smart-recommend"];
+    if (use_case) args.push("--use-case", use_case);
+    const result = await run(args, 180000);
+    return { content: [{ type: "text", text: result }] };
+  }
+);
+
+// --- Tool: ollama_list ---
+server.tool(
+  "ollama_list",
+  "List all models currently downloaded in Ollama with their sizes",
+  {},
+  async () => {
+    try {
+      const { stdout } = await exec("ollama", ["list"], { timeout: 10000 });
+      return { content: [{ type: "text", text: clean(stdout) }] };
+    } catch (err) {
+      return {
+        content: [{ type: "text", text: `Ollama not running or not installed: ${err.message}` }],
+        isError: true,
+      };
+    }
+  }
+);
+
+// --- Tool: ollama_pull ---
+server.tool(
+  "ollama_pull",
+  "Download/pull a model from the Ollama registry to local storage",
+  {
+    model: z.string().describe("Model name to pull (e.g. 'qwen2.5-coder:7b', 'llama3.2:3b')"),
+  },
+  async ({ model }) => {
+    try {
+      const { stdout } = await exec("ollama", ["pull", model], { timeout: 600000 });
+      return { content: [{ type: "text", text: clean(stdout) || `Successfully pulled ${model}` }] };
+    } catch (err) {
+      return {
+        content: [{ type: "text", text: `Failed to pull ${model}: ${err.message}` }],
+        isError: true,
+      };
+    }
+  }
+);
+
+// --- Tool: ollama_run ---
+server.tool(
+  "ollama_run",
+  "Run a prompt against a local Ollama model and return the response",
+  {
+    model: z.string().describe("Model name (e.g. 'qwen2.5-coder:7b')"),
+    prompt: z.string().describe("The prompt to send to the model"),
+  },
+  async ({ model, prompt }) => {
+    try {
+      const { stdout } = await exec("ollama", ["run", model, prompt], { timeout: 300000 });
+      return { content: [{ type: "text", text: clean(stdout) }] };
+    } catch (err) {
+      return {
+        content: [{ type: "text", text: `Failed to run ${model}: ${err.message}` }],
+        isError: true,
+      };
+    }
+  }
+);
+
+// Start server
+const transport = new StdioServerTransport();
+await server.connect(transport);
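Since the MCP server shells out to the colorized CLI, the `clean()` helper above does the heavy lifting of turning terminal output into plain text for MCP clients. A standalone sketch reproducing that helper, runnable on its own:

```javascript
// Reproduction of clean() from bin/mcp-server.mjs: strips ANSI escape
// sequences so MCP clients receive plain text instead of terminal codes.
function clean(text) {
  return text
    .replace(/\x1B\[[0-9;]*[a-zA-Z]/g, "")   // CSI sequences (colors, cursor moves)
    .replace(/\x1B\[\?[0-9;]*[a-zA-Z]/g, "") // private-mode sequences
    .replace(/\x1B\([A-Z]/g, "")             // charset-selection sequences
    .trim();                                  // drop stray leading/trailing whitespace
}

console.log(clean("\x1B[32mok\x1B[0m llm-checker ready ")); // → "ok llm-checker ready"
```

The three patterns cover color/cursor CSI codes, private-mode toggles, and charset selection; anything else the CLI prints passes through untouched.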
package/package.json
CHANGED

@@ -1,10 +1,11 @@
 {
   "name": "llm-checker",
-  "version": "3.1.0",
+  "version": "3.1.1",
   "description": "Intelligent CLI tool with AI-powered model selection that analyzes your hardware and recommends optimal LLM models for your system",
   "bin": {
     "llm-checker": "bin/enhanced_cli.js",
-    "ollama-checker": "bin/enhanced_cli.js"
+    "ollama-checker": "bin/enhanced_cli.js",
+    "llm-checker-mcp": "bin/mcp-server.mjs"
   },
   "main": "src/index.js",
   "scripts": {

@@ -27,13 +28,15 @@
     "postinstall": "echo 'LLM Checker installed. Run: llm-checker hw-detect'"
   },
   "dependencies": {
+    "@modelcontextprotocol/sdk": "^1.26.0",
     "chalk": "^4.1.2",
     "commander": "^11.1.0",
     "inquirer": "^8.2.6",
     "node-fetch": "^2.7.0",
     "ora": "^5.4.1",
     "systeminformation": "^5.21.0",
-    "table": "^6.8.1"
+    "table": "^6.8.1",
+    "zod": "^3.23.0"
   },
   "optionalDependencies": {
     "sql.js": "^1.14.0"

@@ -63,7 +66,10 @@
     "performance",
     "benchmark",
     "apple-silicon",
-    "memory-optimization"
+    "memory-optimization",
+    "mcp",
+    "claude",
+    "model-context-protocol"
   ],
   "author": "Pavelevich (https://github.com/Pavelevich)",
   "license": "MIT",
package/src/CLAUDE.md
CHANGED

@@ -9,4 +9,10 @@
 |----|------|---|-------|------|
 | #3485 | 10:23 PM | 🔵 | Dead Code in Main Analysis Method - Lines 73-214 Unreachable | ~555 |
 | #3435 | 9:57 PM | 🔵 | Main LLMChecker Class Architecture - Platform-Specific Analysis Engine | ~630 |
+
+### Feb 14, 2026
+
+| ID | Time | T | Title | Read |
+|----|------|---|-------|------|
+| #4339 | 6:49 PM | 🟣 | MCP server implementation and documentation added to llm-checker repository | ~457 |
 </claude-mem-context>
package/src/data/CLAUDE.md
CHANGED

@@ -8,4 +8,10 @@
 | ID | Time | T | Title | Read |
 |----|------|---|-------|------|
 | #3464 | 10:03 PM | 🔵 | SQL Database Schema - Indexed Model Repository with Benchmarks | ~555 |
+
+### Feb 14, 2026
+
+| ID | Time | T | Title | Read |
+|----|------|---|-------|------|
+| #4339 | 6:49 PM | 🟣 | MCP server implementation and documentation added to llm-checker repository | ~457 |
 </claude-mem-context>
package/src/hardware/CLAUDE.md
CHANGED

@@ -9,4 +9,10 @@
 |----|------|---|-------|------|
 | #3490 | 10:24 PM | 🔵 | Hardware Detector Cache Implementation - 5-Minute TTL Without Force Refresh Option | ~536 |
 | #3440 | 9:58 PM | 🔵 | Hardware Detection System - Multi-GPU Support with Intelligent Selection | ~611 |
+
+### Feb 14, 2026
+
+| ID | Time | T | Title | Read |
+|----|------|---|-------|------|
+| #4339 | 6:49 PM | 🟣 | MCP server implementation and documentation added to llm-checker repository | ~457 |
 </claude-mem-context>

package/src/hardware/backends/CLAUDE.md
CHANGED

@@ -8,4 +8,10 @@
 | ID | Time | T | Title | Read |
 |----|------|---|-------|------|
 | #3453 | 10:01 PM | 🔵 | CUDA Detector Implementation - NVIDIA GPU Detection via nvidia-smi | ~497 |
+
+### Feb 14, 2026
+
+| ID | Time | T | Title | Read |
+|----|------|---|-------|------|
+| #4339 | 6:49 PM | 🟣 | MCP server implementation and documentation added to llm-checker repository | ~457 |
 </claude-mem-context>
|
|
@@ -1,25 +1,75 @@
|
|
|
1
1
|
/**
|
|
2
2
|
* ROCm Detector
|
|
3
|
-
* Detects AMD GPUs using rocm-smi
|
|
3
|
+
* Detects AMD GPUs using rocm-smi, rocminfo, lspci, and sysfs
|
|
4
4
|
* Supports multi-GPU setups and ROCm capabilities
|
|
5
|
+
* Falls back to lspci/sysfs when ROCm tools are not installed
|
|
5
6
|
*/
|
|
6
7
|
|
|
7
8
|
const { execSync } = require('child_process');
|
|
9
|
+
const fs = require('fs');
|
|
10
|
+
const path = require('path');
|
|
8
11
|
|
|
9
12
|
class ROCmDetector {
|
|
10
13
|
constructor() {
|
|
11
14
|
this.cache = null;
|
|
12
15
|
this.isAvailable = null;
|
|
16
|
+
this.detectionMethod = null; // 'rocm-smi', 'rocminfo', 'lspci', 'sysfs'
|
|
13
17
|
}
|
|
14
18
|
|
|
19
|
+
// AMD PCI device IDs for model name resolution
|
|
20
|
+
static AMD_DEVICE_IDS = {
|
|
21
|
+
// RDNA 3 (RX 7000 series)
|
|
22
|
+
'744c': { name: 'AMD Radeon RX 7900 XTX', vram: 24 },
|
|
23
|
+
'7448': { name: 'AMD Radeon RX 7900 XT', vram: 20 },
|
|
24
|
+
'7460': { name: 'AMD Radeon RX 7900 GRE', vram: 16 },
|
|
25
|
+
'7480': { name: 'AMD Radeon RX 7800 XT', vram: 16 },
|
|
26
|
+
'7481': { name: 'AMD Radeon RX 7700 XT', vram: 12 },
|
|
27
|
+
'7483': { name: 'AMD Radeon RX 7600', vram: 8 },
|
|
28
|
+
'7484': { name: 'AMD Radeon RX 7600 XT', vram: 16 },
|
|
29
|
+
// RDNA 2 (RX 6000 series)
|
|
30
|
+
'73a5': { name: 'AMD Radeon RX 6950 XT', vram: 16 },
|
|
31
|
+
'73bf': { name: 'AMD Radeon RX 6900 XT', vram: 16 },
|
|
32
|
+
'73a3': { name: 'AMD Radeon RX 6800 XT', vram: 16 },
|
|
33
|
+
'73a2': { name: 'AMD Radeon RX 6800', vram: 16 },
|
|
34
|
+
'73df': { name: 'AMD Radeon RX 6700 XT', vram: 12 },
|
|
35
|
+
'73ff': { name: 'AMD Radeon RX 6600 XT', vram: 8 },
|
|
36
|
+
'73e3': { name: 'AMD Radeon RX 6600', vram: 8 },
|
|
37
|
+
// CDNA / Instinct
|
|
38
|
+
'740f': { name: 'AMD Instinct MI300X', vram: 192 },
|
|
39
|
+
'740c': { name: 'AMD Instinct MI300A', vram: 128 },
|
|
40
|
+
'7408': { name: 'AMD Instinct MI250X', vram: 128 },
|
|
41
|
+
'740a': { name: 'AMD Instinct MI250', vram: 64 },
|
|
42
|
+
'738c': { name: 'AMD Instinct MI210', vram: 64 },
|
|
43
|
+
'7388': { name: 'AMD Instinct MI100', vram: 32 },
|
|
44
|
+
};
|
|
45
|
+
|
|
15
46
|
/**
|
|
16
|
-
* Check if
|
|
47
|
+
* Check if AMD GPU is available (ROCm tools, lspci, or sysfs)
|
|
17
48
|
*/
|
|
18
49
|
checkAvailability() {
|
|
19
50
|
if (this.isAvailable !== null) {
|
|
20
51
|
return this.isAvailable;
|
|
21
52
|
}
|
|
22
53
|
|
|
54
|
+
// Only check on Linux
|
|
55
|
+
if (process.platform !== 'linux') {
|
|
56
|
+
// On non-Linux, only ROCm tools matter
|
|
57
|
+
try {
|
|
58
|
+
execSync('rocm-smi --version', {
|
|
59
|
+
encoding: 'utf8',
|
|
60
|
+
timeout: 5000,
|
|
61
|
+
stdio: ['pipe', 'pipe', 'pipe']
|
|
62
|
+
});
|
|
63
|
+
this.isAvailable = true;
|
|
64
|
+
this.detectionMethod = 'rocm-smi';
|
|
65
|
+
return true;
|
|
66
|
+
} catch (e) {
|
|
67
|
+
this.isAvailable = false;
|
|
68
|
+
return false;
|
|
69
|
+
}
|
|
70
|
+
}
|
|
71
|
+
|
|
72
|
+
// 1. Try rocm-smi
|
|
23
73
|
try {
|
|
24
74
|
execSync('rocm-smi --version', {
|
|
25
75
|
encoding: 'utf8',
|
|
@@ -27,21 +77,66 @@ class ROCmDetector {
|
|
|
27
77
|
stdio: ['pipe', 'pipe', 'pipe']
|
|
28
78
|
});
|
|
29
79
|
this.isAvailable = true;
|
|
80
|
+
this.detectionMethod = 'rocm-smi';
|
|
81
|
+
return true;
|
|
30
82
|
} catch (e) {
|
|
31
|
-
//
|
|
32
|
-
|
|
33
|
-
|
|
34
|
-
|
|
35
|
-
|
|
36
|
-
|
|
37
|
-
|
|
83
|
+
// Continue to next method
|
|
84
|
+
}
|
|
85
|
+
|
|
86
|
+
// 2. Try rocminfo
|
|
87
|
+
try {
|
|
88
|
+
execSync('rocminfo', {
|
|
89
|
+
encoding: 'utf8',
|
|
90
|
+
timeout: 5000,
|
|
91
|
+
stdio: ['pipe', 'pipe', 'pipe']
|
|
92
|
+
});
|
|
93
|
+
this.isAvailable = true;
|
|
94
|
+
this.detectionMethod = 'rocminfo';
|
|
95
|
+
return true;
|
|
96
|
+
} catch (e) {
|
|
97
|
+
// Continue to next method
|
|
98
|
+
}
|
|
99
|
+
|
|
100
|
+
// 3. Try lspci for AMD GPUs (vendor ID 1002)
|
|
101
|
+
try {
|
|
102
|
+
const lspci = execSync('lspci | grep -i "VGA\\|3D\\|Display" | grep -i "AMD\\|ATI\\|Radeon"', {
|
|
103
|
+
encoding: 'utf8',
|
|
104
|
+
timeout: 5000,
|
|
105
|
+
stdio: ['pipe', 'pipe', 'pipe']
|
|
106
|
+
});
|
|
107
|
+
if (lspci.trim().length > 0) {
|
|
38
108
|
this.isAvailable = true;
|
|
39
|
-
|
|
40
|
-
|
|
109
|
+
this.detectionMethod = 'lspci';
|
|
110
|
+
return true;
|
|
41
111
|
}
|
|
112
|
+
} catch (e) {
|
|
113
|
+
// Continue to next method
|
|
114
|
+
}
|
|
115
|
+
|
|
116
|
+
// 4. Try sysfs for AMD GPUs (vendor 0x1002)
|
|
117
|
+
try {
|
|
118
|
+
const drmPath = '/sys/class/drm';
|
|
119
|
+
const entries = fs.readdirSync(drmPath);
|
|
120
|
+
const hasAMD = entries.some(node => {
|
|
121
|
+
try {
|
|
122
|
+
const vendorPath = path.join(drmPath, node, 'device/vendor');
|
|
123
|
+
const vendor = fs.readFileSync(vendorPath, 'utf8').trim();
|
|
124
|
+
return vendor === '0x1002'; // AMD vendor ID
|
|
125
|
+
} catch (e) {
|
|
126
|
+
return false;
|
|
127
|
+
}
|
|
128
|
+
});
|
|
129
|
+
if (hasAMD) {
|
|
130
|
+
this.isAvailable = true;
|
|
131
|
+
this.detectionMethod = 'sysfs';
|
|
132
|
+
return true;
|
|
133
|
+
}
|
|
134
|
+
} catch (e) {
|
|
135
|
+
// No sysfs access
|
|
42
136
|
}
|
|
43
137
|
|
|
44
|
-
|
|
138
|
+
this.isAvailable = false;
|
|
139
|
+
return false;
|
|
45
140
|
}
|
|
46
141
|
|
|
47
142
|
/**
|
|
@@ -66,7 +161,7 @@ class ROCmDetector {
|
|
|
66
161
|
}
|
|
67
162
|
|
|
68
163
|
/**
|
|
69
|
-
* Get detailed GPU information using
|
|
164
|
+
* Get detailed GPU information using best available method
|
|
70
165
|
*/
|
|
71
166
|
getGPUInfo() {
|
|
72
167
|
const result = {
|
|
@@ -78,6 +173,45 @@ class ROCmDetector {
|
|
|
78
173
|
speedCoefficient: 0
|
|
79
174
|
};
|
|
80
175
|
|
|
176
|
+
// Try methods in order of detail level
|
|
177
|
+
let detected = false;
|
|
178
|
+
|
|
179
|
+
// 1. Try rocm-smi (most detailed)
|
|
180
|
+
if (this.detectionMethod === 'rocm-smi' || !this.detectionMethod) {
|
|
181
|
+
detected = this._detectViaRocmSmi(result);
|
|
182
|
+
}
|
|
183
|
+
|
|
184
|
+
// 2. Try rocminfo
|
|
185
|
+
if (!detected && (this.detectionMethod === 'rocminfo' || !this.detectionMethod)) {
|
|
186
|
+
detected = this._detectViaRocmInfo(result);
|
|
187
|
+
}
|
|
188
|
+
|
|
189
|
+
// 3. Try lspci
|
|
190
|
+
if (!detected && (this.detectionMethod === 'lspci' || !this.detectionMethod)) {
|
|
191
|
+
detected = this._detectViaLspci(result);
|
|
192
|
+
}
|
|
193
|
+
|
|
194
|
+
// 4. Try sysfs
|
|
195
|
+
if (!detected && (this.detectionMethod === 'sysfs' || !this.detectionMethod)) {
|
|
196
|
+
detected = this._detectViaSysfs(result);
|
|
197
|
+
}
|
|
198
|
+
|
|
199
|
+
if (!detected || result.gpus.length === 0) {
|
|
200
|
+
return null;
|
|
201
|
+
}
|
|
202
|
+
|
|
203
|
+
result.isMultiGPU = result.gpus.length > 1;
|
|
204
|
+
result.speedCoefficient = result.gpus.length > 0
|
|
205
|
+
? Math.max(...result.gpus.map(g => g.speedCoefficient))
|
|
206
|
+
: 0;
|
|
207
|
+
|
|
208
|
+
return result;
|
|
209
|
+
}
|
|
210
|
+
|
|
211
|
+
/**
|
|
212
|
+
* Detect GPUs via rocm-smi
|
|
213
|
+
*/
|
|
214
|
+
_detectViaRocmSmi(result) {
|
|
81
215
|
// Get ROCm version
|
|
82
216
|
try {
|
|
83
217
|
const versionOutput = execSync('rocm-smi --version', {
|
|
@@ -89,7 +223,7 @@ class ROCmDetector {
|
|
|
89
223
|
result.rocmVersion = match[1];
|
|
90
224
|
}
|
|
91
225
|
} catch (e) {
|
|
92
|
-
|
|
226
|
+
return false;
|
|
93
227
|
}
|
|
94
228
|
|
|
95
229
|
try {
|
|
@@ -157,7 +291,7 @@ class ROCmDetector {
|
|
|
157
291
|
name: name,
|
|
158
292
|
memory: {
|
|
159
293
|
total: vram,
|
|
160
|
-
free: vram,
|
|
294
|
+
free: vram,
|
|
161
295
|
used: 0
|
|
162
296
|
},
|
|
163
297
|
temperature: temps[i] || 0,
|
|
@@ -169,44 +303,231 @@ class ROCmDetector {
|
|
|
169
303
|
result.gpus.push(gpu);
|
|
170
304
|
result.totalVRAM += vram;
|
|
171
305
|
}
|
|
306
|
+
|
|
307
|
+
return result.gpus.length > 0;
|
|
172
308
|
} catch (e) {
|
|
173
|
-
|
|
174
|
-
|
|
175
|
-
|
|
176
|
-
|
|
177
|
-
|
|
309
|
+
return false;
|
|
310
|
+
}
|
|
311
|
+
}
|
|
312
|
+
|
|
313
|
+
/**
|
|
314
|
+
* Detect GPUs via rocminfo
|
|
315
|
+
*/
|
|
316
|
+
_detectViaRocmInfo(result) {
|
|
317
|
+
try {
|
|
318
|
+
const rocmInfo = execSync('rocminfo', {
|
|
319
|
+
encoding: 'utf8',
|
|
320
|
+
timeout: 10000
|
|
321
|
+
});
|
|
322
|
+
|
|
323
|
+
const agentMatches = rocmInfo.matchAll(/Name:\s*(gfx\d+|AMD.*)/gi);
|
|
324
|
+
let idx = 0;
|
|
325
|
+
for (const match of agentMatches) {
|
|
326
|
+
const name = match[1].trim();
|
|
327
|
+
if (name.toLowerCase().includes('gfx') || name.toLowerCase().includes('amd')) {
|
|
328
|
+
const vram = this.estimateVRAMFromGfxName(name);
|
|
329
|
+
|
|
330
|
+
result.gpus.push({
|
|
331
|
+
index: idx,
|
|
332
|
+
name: name,
|
|
333
|
+
memory: { total: vram, free: vram, used: 0 },
|
|
334
|
+
capabilities: this.getGPUCapabilities(name),
|
|
335
|
+
speedCoefficient: this.calculateSpeedCoefficient(name, vram)
|
|
336
|
+
});
|
|
337
|
+
result.totalVRAM += vram;
|
|
338
|
+
idx++;
|
|
339
|
+
}
|
|
340
|
+
}
|
|
341
|
+
|
|
342
|
+
return result.gpus.length > 0;
|
|
343
|
+
} catch (e) {
|
|
344
|
+
return false;
|
|
345
|
+
}
|
|
346
|
+
}
|
|
347
|
+
|
|
348
|
+
/**
|
|
349
|
+
* Detect GPUs via lspci (fallback when ROCm is not installed)
|
|
350
|
+
*/
|
|
351
|
+
_detectViaLspci(result) {
|
|
352
|
+
try {
|
|
353
|
+
const lspciOutput = execSync('lspci -nn | grep -i "VGA\\|3D\\|Display"', {
|
|
354
|
+
encoding: 'utf8',
|
|
355
|
+
timeout: 10000,
|
|
356
|
+
stdio: ['pipe', 'pipe', 'pipe']
|
|
357
|
+
});
|
|
358
|
+
|
|
359
|
+
const lines = lspciOutput.trim().split('\n');
|
|
360
|
+
let idx = 0;
|
|
361
|
+
|
|
362
|
+
for (const line of lines) {
|
|
363
|
+
// Match AMD/ATI VGA devices: "03:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 31 [Radeon RX 7900 XT/7900 XTX/7900M] [1002:744c]"
|
|
364
|
+
const amdMatch = line.match(/\[(?:AMD|ATI)\].*?\[([0-9a-f]{4}):([0-9a-f]{4})\]/i) ||
|
|
365
|
+
line.match(/(?:AMD|ATI|Radeon).*?\[([0-9a-f]{4}):([0-9a-f]{4})\]/i);
|
|
366
|
+
|
|
367
|
+
if (!amdMatch) continue;
|
|
368
|
+
|
|
369
|
+
const vendorId = amdMatch[1].toLowerCase();
|
|
370
|
+
if (vendorId !== '1002') continue; // Not AMD
|
|
371
|
+
|
|
372
|
+
const deviceId = amdMatch[2].toLowerCase();
|
|
373
|
+
|
|
374
|
+
// Try to get model name from device ID map
|
|
375
|
+
const deviceInfo = ROCmDetector.AMD_DEVICE_IDS[deviceId];
|
|
376
|
+
|
|
377
|
+
// Also try to extract name from lspci line itself
|
|
378
|
+
let lspciName = null;
|
|
379
|
+
const nameMatch = line.match(/\[(?:AMD|ATI)\]\s*(.+?)\s*\[/);
|
|
380
|
+
if (nameMatch) {
|
|
381
|
+
lspciName = nameMatch[1].trim();
|
|
382
|
+
}
|
|
383
|
+
|
|
384
|
+
const name = deviceInfo?.name || this._resolveAMDModelName(lspciName, deviceId) || `AMD GPU (${deviceId})`;
|
|
385
|
+
const vram = deviceInfo?.vram || this.estimateVRAMFromModel(name);
|
|
386
|
+
|
|
387
|
+
// Try to get VRAM from sysfs for this specific device
|
|
388
|
+
const sysfsVram = this._getVRAMFromSysfsForDevice(deviceId);
|
|
389
|
+
|
|
390
|
+
result.gpus.push({
|
|
391
|
+
index: idx,
|
|
392
|
+
name: name,
|
|
393
|
+
memory: {
|
|
394
|
+
total: sysfsVram || vram,
|
|
395
|
+
free: sysfsVram || vram,
|
|
396
|
+
used: 0
|
|
397
|
+
},
|
|
398
|
+
capabilities: this.getGPUCapabilities(name),
|
|
399
|
+
speedCoefficient: this.calculateSpeedCoefficient(name, sysfsVram || vram)
|
|
178
400
|
});
|
|
401
|
+
result.totalVRAM += sysfsVram || vram;
|
|
402
|
+
idx++;
|
|
403
|
+
}
|
|
179
404
|
|
|
180
|
-
|
|
181
|
-
|
|
182
|
-
|
|
183
|
-
|
|
184
|
-
|
|
185
|
-
|
|
186
|
-
|
|
187
|
-
|
|
188
|
-
|
|
189
|
-
|
|
190
|
-
|
|
191
|
-
|
|
192
|
-
|
|
193
|
-
|
|
194
|
-
|
|
195
|
-
|
|
196
|
-
|
|
405
|
+
return result.gpus.length > 0;
|
|
406
|
+
} catch (e) {
|
|
407
|
+
return false;
|
|
408
|
+
}
|
|
409
|
+
}
|
|
410
|
+
|
|
411
|
+
/**
|
|
412
|
+
* Detect GPUs via sysfs (last resort fallback)
|
|
413
|
+
*/
|
|
414
|
+
_detectViaSysfs(result) {
|
|
415
|
+
try {
|
|
416
|
+
const drmPath = '/sys/class/drm';
|
|
417
|
+
const cards = fs.readdirSync(drmPath).filter(f => f.startsWith('card') && !f.includes('-'));
|
|
418
|
+
let idx = 0;
|
|
419
|
+
|
|
420
|
+
for (const card of cards) {
|
|
421
|
+
try {
|
|
422
|
+
const vendorPath = path.join(drmPath, card, 'device/vendor');
|
|
423
|
+
const vendor = fs.readFileSync(vendorPath, 'utf8').trim();
|
|
424
|
+
if (vendor !== '0x1002') continue; // Not AMD
|
|
425
|
+
|
|
426
|
+
const devicePath = path.join(drmPath, card, 'device/device');
|
|
427
|
+
const deviceId = fs.readFileSync(devicePath, 'utf8').trim().replace('0x', '').toLowerCase();
|
|
428
|
+
|
|
429
|
+
const deviceInfo = ROCmDetector.AMD_DEVICE_IDS[deviceId];
|
|
430
|
+
const name = deviceInfo?.name || `AMD GPU (${deviceId})`;
|
|
431
|
+
let vram = deviceInfo?.vram || 8;
|
|
432
|
+
|
|
433
|
+
// Try to read VRAM from sysfs
|
|
434
|
+
const vramPaths = [
|
|
435
|
+
path.join(drmPath, card, 'device/mem_info_vram_total'),
|
|
436
|
+
path.join(drmPath, card, 'device/resource'),
|
|
437
|
+
];
|
|
438
|
+
|
|
439
|
+
for (const vramPath of vramPaths) {
|
|
440
|
+
try {
|
|
441
|
+
if (vramPath.endsWith('mem_info_vram_total')) {
|
|
442
|
+
+                const bytes = parseInt(fs.readFileSync(vramPath, 'utf8').trim());
+                if (bytes > 0) {
+                  vram = Math.round(bytes / (1024 * 1024 * 1024));
+                  break;
+                }
+              }
+            } catch (e) {
+              continue;
+            }
           }
+
+          result.gpus.push({
+            index: idx,
+            name: name,
+            memory: { total: vram, free: vram, used: 0 },
+            capabilities: this.getGPUCapabilities(name),
+            speedCoefficient: this.calculateSpeedCoefficient(name, vram)
+          });
+          result.totalVRAM += vram;
+          idx++;
+        } catch (e) {
+          continue;
         }
-      } catch (e2) {
-        return null;
       }
+
+      return result.gpus.length > 0;
+    } catch (e) {
+      return false;
     }
+  }

-
-
-
-
+  /**
+   * Try to get VRAM from sysfs for a specific device ID
+   */
+  _getVRAMFromSysfsForDevice(deviceId) {
+    try {
+      const drmPath = '/sys/class/drm';
+      const cards = fs.readdirSync(drmPath).filter(f => f.startsWith('card') && !f.includes('-'));
+
+      for (const card of cards) {
+        try {
+          const devPath = path.join(drmPath, card, 'device/device');
+          const devId = fs.readFileSync(devPath, 'utf8').trim().replace('0x', '').toLowerCase();
+          if (devId !== deviceId) continue;
+
+          const vramPath = path.join(drmPath, card, 'device/mem_info_vram_total');
+          const bytes = parseInt(fs.readFileSync(vramPath, 'utf8').trim());
+          if (bytes > 0) {
+            return Math.round(bytes / (1024 * 1024 * 1024));
+          }
+        } catch (e) {
+          continue;
+        }
+      }
+    } catch (e) {
+      // sysfs not available
+    }
+    return null;
+  }

-
+  /**
+   * Resolve AMD model name from lspci description and device ID
+   */
+  _resolveAMDModelName(lspciName, deviceId) {
+    if (!lspciName) return null;
+
+    // lspci often shows names like "Navi 31 [Radeon RX 7900 XT/7900 XTX/7900M]"
+    // Extract the bracketed name if present
+    const bracketMatch = lspciName.match(/\[(.+?)\]/);
+    if (bracketMatch) {
+      const bracketName = bracketMatch[1];
+      // If it contains multiple variants separated by /, pick based on device ID
+      if (bracketName.includes('/')) {
+        const variants = bracketName.split('/').map(v => v.trim());
+        // Try to match device ID to specific variant
+        const deviceInfo = ROCmDetector.AMD_DEVICE_IDS[deviceId];
+        if (deviceInfo) return deviceInfo.name;
+        // Default to first variant with "AMD Radeon" prefix
+        return `AMD Radeon ${variants[0]}`;
+      }
+      return `AMD Radeon ${bracketName}`;
+    }
+
+    // If name already looks like a GPU model, use it
+    if (lspciName.match(/R[X5-9]\s*\d+/i) || lspciName.match(/MI\d+/i)) {
+      return `AMD ${lspciName}`;
+    }
+
+    return null;
   }

   /**
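The bracket-parsing path of `_resolveAMDModelName` in the hunk above can be tried in isolation. The sketch below is not the package's actual export: it omits the `ROCmDetector.AMD_DEVICE_IDS` lookup, so multi-variant bracket names always fall back to the first variant, and the sample inputs are illustrative lspci strings, not guaranteed real-world output.

```javascript
// Standalone sketch of the lspci name-resolution fallback (device-ID table omitted).
function resolveAMDModelName(lspciName) {
  if (!lspciName) return null;

  // lspci often reports names like "Navi 31 [Radeon RX 7900 XT/7900 XTX/7900M]"
  const bracketMatch = lspciName.match(/\[(.+?)\]/);
  if (bracketMatch) {
    const bracketName = bracketMatch[1];
    if (bracketName.includes('/')) {
      // Multiple variants: without the device-ID table, fall back to the first one
      const variants = bracketName.split('/').map(v => v.trim());
      return `AMD Radeon ${variants[0]}`;
    }
    return `AMD Radeon ${bracketName}`;
  }

  // Names that already look like a GPU model (RX/R5-R9 or Instinct MI) pass through
  if (lspciName.match(/R[X5-9]\s*\d+/i) || lspciName.match(/MI\d+/i)) {
    return `AMD ${lspciName}`;
  }

  return null;
}

console.log(resolveAMDModelName('Instinct MI210'));    // "AMD Instinct MI210"
console.log(resolveAMDModelName('Some Audio Device')); // null
```

Returning `null` for unrecognized names lets the caller keep the raw lspci string rather than mislabel a non-GPU device.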
package/src/hardware/detector.js
CHANGED
@@ -244,11 +244,17 @@ class HardwareDetector {
   isIntegratedGPU(model) {
     if (!model) return false;
     const modelLower = model.toLowerCase();
+
+    // Explicitly NOT integrated: dedicated AMD Radeon RX/Instinct cards
+    if (modelLower.includes('radeon rx') || modelLower.includes('radeon pro') ||
+        modelLower.includes('instinct') || modelLower.includes(' rx ')) {
+      return false;
+    }
+
     // Check if GPU is integrated (on-chip or shared memory, not discrete)
-    // Note: && has higher precedence than ||, each line is grouped with ()
     return (modelLower.includes('intel') && !modelLower.includes('arc')) ||
       (modelLower.includes('amd') && modelLower.includes('graphics') && !modelLower.includes(' rx ')) ||
-      (modelLower.includes('radeon') && modelLower.includes('graphics')) ||
+      (modelLower.includes('radeon') && modelLower.includes('graphics') && !modelLower.includes('rx')) ||
       modelLower.includes('iris') ||
       modelLower.includes('uhd') ||
       modelLower.includes('hd graphics') ||
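The updated heuristic can be sketched as a standalone function. This is a minimal reconstruction covering only the checks visible in this hunk; the diff truncates the expression after `'hd graphics' ||`, so the remaining conditions in the real method are omitted here and treated as false.

```javascript
// Minimal sketch of the updated isIntegratedGPU heuristic (hunk subset only).
function isIntegratedGPU(model) {
  if (!model) return false;
  const modelLower = model.toLowerCase();

  // New early exit: dedicated AMD Radeon RX / Pro / Instinct cards are never iGPUs
  if (modelLower.includes('radeon rx') || modelLower.includes('radeon pro') ||
      modelLower.includes('instinct') || modelLower.includes(' rx ')) {
    return false;
  }

  return (modelLower.includes('intel') && !modelLower.includes('arc')) ||
    (modelLower.includes('amd') && modelLower.includes('graphics') && !modelLower.includes(' rx ')) ||
    (modelLower.includes('radeon') && modelLower.includes('graphics') && !modelLower.includes('rx')) ||
    modelLower.includes('iris') ||
    modelLower.includes('uhd') ||
    modelLower.includes('hd graphics');
}

console.log(isIntegratedGPU('AMD Radeon RX 7900 XTX')); // false: caught by the new guard
console.log(isIntegratedGPU('AMD Radeon Graphics'));    // true: Ryzen APU iGPU
```

The early-exit guard is the substance of this change: before it, a name like "AMD Radeon RX 7900 XTX" could fall through to the `radeon` + `graphics` branch and be misclassified as integrated.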
package/src/models/CLAUDE.md
CHANGED
@@ -14,4 +14,10 @@
 | ID | Time | T | Title | Read |
 |----|------|---|-------|------|
 | #3699 | 12:05 AM | ✅ | Git Push Consolidated Architecture Changes to GitHub | ~367 |
+
+### Feb 14, 2026
+
+| ID | Time | T | Title | Read |
+|----|------|---|-------|------|
+| #4339 | 6:49 PM | 🟣 | MCP server implementation and documentation added to llm-checker repository | ~457 |
 </claude-mem-context>
package/src/ollama/CLAUDE.md
CHANGED
@@ -21,4 +21,10 @@
 | #3484 | 10:23 PM | 🔵 | Ollama Client Timeout Implementation - Mixed Patterns with AbortController | ~554 |
 | #3443 | 9:59 PM | 🔵 | Ollama Native Scraper - Web Scraping with Dual Cache Strategy | ~594 |
 | #3437 | 9:58 PM | 🔵 | Ollama Client Implementation - HTTP API Wrapper with Connection Management | ~605 |
+
+### Feb 14, 2026
+
+| ID | Time | T | Title | Read |
+|----|------|---|-------|------|
+| #4339 | 6:49 PM | 🟣 | MCP server implementation and documentation added to llm-checker repository | ~457 |
 </claude-mem-context>
package/src/plugins/CLAUDE.md
CHANGED
@@ -8,4 +8,10 @@
 | ID | Time | T | Title | Read |
 |----|------|---|-------|------|
 | #3462 | 10:02 PM | 🔵 | Plugin System Architecture - Hook-Based Extensibility Framework | ~648 |
+
+### Feb 14, 2026
+
+| ID | Time | T | Title | Read |
+|----|------|---|-------|------|
+| #4339 | 6:49 PM | 🟣 | MCP server implementation and documentation added to llm-checker repository | ~457 |
 </claude-mem-context>
package/src/utils/CLAUDE.md
CHANGED
@@ -8,4 +8,10 @@
 | ID | Time | T | Title | Read |
 |----|------|---|-------|------|
 | #3438 | 9:58 PM | 🔵 | Configuration Management System - Comprehensive Settings with Environment Overrides | ~580 |
+
+### Feb 14, 2026
+
+| ID | Time | T | Title | Read |
+|----|------|---|-------|------|
+| #4339 | 6:49 PM | 🟣 | MCP server implementation and documentation added to llm-checker repository | ~457 |
 </claude-mem-context>