@tecet/ollm 0.1.4 → 0.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (86)
  1. package/dist/cli.js +20 -14
  2. package/dist/cli.js.map +3 -3
  3. package/dist/services/documentService.d.ts.map +1 -1
  4. package/dist/services/documentService.js +12 -2
  5. package/dist/services/documentService.js.map +1 -1
  6. package/dist/ui/components/docs/DocsPanel.d.ts.map +1 -1
  7. package/dist/ui/components/docs/DocsPanel.js +1 -1
  8. package/dist/ui/components/docs/DocsPanel.js.map +1 -1
  9. package/dist/ui/components/launch/VersionBanner.js +1 -1
  10. package/dist/ui/components/launch/VersionBanner.js.map +1 -1
  11. package/dist/ui/components/layout/KeybindsLegend.d.ts.map +1 -1
  12. package/dist/ui/components/layout/KeybindsLegend.js +1 -1
  13. package/dist/ui/components/layout/KeybindsLegend.js.map +1 -1
  14. package/dist/ui/components/tabs/BugReportTab.js +1 -1
  15. package/dist/ui/components/tabs/BugReportTab.js.map +1 -1
  16. package/dist/ui/services/docsService.d.ts +12 -27
  17. package/dist/ui/services/docsService.d.ts.map +1 -1
  18. package/dist/ui/services/docsService.js +40 -67
  19. package/dist/ui/services/docsService.js.map +1 -1
  20. package/docs/README.md +3 -410
  21. package/package.json +10 -7
  22. package/scripts/copy-docs-to-user.cjs +34 -0
  23. package/docs/Context/CheckpointFlowDiagram.md +0 -673
  24. package/docs/Context/ContextArchitecture.md +0 -898
  25. package/docs/Context/ContextCompression.md +0 -1102
  26. package/docs/Context/ContextManagment.md +0 -750
  27. package/docs/Context/Index.md +0 -209
  28. package/docs/Context/README.md +0 -390
  29. package/docs/DevelopmentRoadmap/Index.md +0 -238
  30. package/docs/DevelopmentRoadmap/OLLM-CLI_Releases.md +0 -419
  31. package/docs/DevelopmentRoadmap/PlanedFeatures.md +0 -448
  32. package/docs/DevelopmentRoadmap/README.md +0 -174
  33. package/docs/DevelopmentRoadmap/Roadmap.md +0 -572
  34. package/docs/DevelopmentRoadmap/RoadmapVisual.md +0 -372
  35. package/docs/Hooks/Architecture.md +0 -885
  36. package/docs/Hooks/Index.md +0 -244
  37. package/docs/Hooks/KeyboardShortcuts.md +0 -248
  38. package/docs/Hooks/Protocol.md +0 -817
  39. package/docs/Hooks/README.md +0 -403
  40. package/docs/Hooks/UserGuide.md +0 -1483
  41. package/docs/Hooks/VisualGuide.md +0 -598
  42. package/docs/Index.md +0 -506
  43. package/docs/Installation.md +0 -586
  44. package/docs/Introduction.md +0 -367
  45. package/docs/LLM Models/Index.md +0 -239
  46. package/docs/LLM Models/LLM_GettingStarted.md +0 -748
  47. package/docs/LLM Models/LLM_Index.md +0 -701
  48. package/docs/LLM Models/LLM_MemorySystem.md +0 -337
  49. package/docs/LLM Models/LLM_ModelCompatibility.md +0 -499
  50. package/docs/LLM Models/LLM_ModelsArchitecture.md +0 -933
  51. package/docs/LLM Models/LLM_ModelsCommands.md +0 -839
  52. package/docs/LLM Models/LLM_ModelsConfiguration.md +0 -1094
  53. package/docs/LLM Models/LLM_ModelsList.md +0 -1071
  54. package/docs/LLM Models/LLM_ModelsList.md.backup +0 -400
  55. package/docs/LLM Models/README.md +0 -355
  56. package/docs/MCP/MCP_Architecture.md +0 -1086
  57. package/docs/MCP/MCP_Commands.md +0 -1111
  58. package/docs/MCP/MCP_GettingStarted.md +0 -590
  59. package/docs/MCP/MCP_Index.md +0 -524
  60. package/docs/MCP/MCP_Integration.md +0 -866
  61. package/docs/MCP/MCP_Marketplace.md +0 -160
  62. package/docs/MCP/README.md +0 -415
  63. package/docs/Prompts System/Architecture.md +0 -760
  64. package/docs/Prompts System/Index.md +0 -223
  65. package/docs/Prompts System/PromptsRouting.md +0 -1047
  66. package/docs/Prompts System/PromptsTemplates.md +0 -1102
  67. package/docs/Prompts System/README.md +0 -389
  68. package/docs/Prompts System/SystemPrompts.md +0 -856
  69. package/docs/Quickstart.md +0 -535
  70. package/docs/Tools/Architecture.md +0 -884
  71. package/docs/Tools/GettingStarted.md +0 -624
  72. package/docs/Tools/Index.md +0 -216
  73. package/docs/Tools/ManifestReference.md +0 -141
  74. package/docs/Tools/README.md +0 -440
  75. package/docs/Tools/UserGuide.md +0 -773
  76. package/docs/Troubleshooting.md +0 -1265
  77. package/docs/UI&Settings/Architecture.md +0 -729
  78. package/docs/UI&Settings/ColorASCII.md +0 -34
  79. package/docs/UI&Settings/Commands.md +0 -755
  80. package/docs/UI&Settings/Configuration.md +0 -872
  81. package/docs/UI&Settings/Index.md +0 -293
  82. package/docs/UI&Settings/Keybinds.md +0 -372
  83. package/docs/UI&Settings/README.md +0 -278
  84. package/docs/UI&Settings/Terminal.md +0 -637
  85. package/docs/UI&Settings/Themes.md +0 -604
  86. package/docs/UI&Settings/UIGuide.md +0 -550
@@ -1,499 +0,0 @@
# Model Compatibility Matrix

## Overview

This document provides compatibility information for various LLM models tested with OLLM CLI. It documents which features work with which models, known issues, workarounds, and recommendations for model selection based on your use case.

## Test Environment

- **OLLM CLI Version:** 0.1.0
- **Test Date:** 2026-01-15
- **Server:** Ollama (latest)
- **Test Framework:** Vitest with integration tests
- **Test Location:** `packages/test-utils/src/__tests__/modelCompatibility.integration.test.ts`

## How to Use This Document

- **✅ Pass** - Feature works as expected
- **❌ Fail** - Feature does not work
- **⚠️ Partial** - Feature works with limitations
- **⊘ Not Tested** - Feature not yet tested

## Summary

This compatibility matrix is maintained through automated testing. To run the compatibility tests yourself:

```bash
# Ensure Ollama server is running
ollama serve

# Pull the models you want to test
ollama pull llama3.1:8b
ollama pull codellama:7b
ollama pull phi3:mini

# Run the compatibility tests
npm test -- modelCompatibility.integration.test.ts
```

The tests automatically detect which models are available and test them. Results are logged to the console and can be used to update this document.

## Model Categories

### General-Purpose Models

Best for: General conversation, question answering, content generation, multi-domain tasks

**Recommended Models:**

- `llama3.1:8b` - Excellent balance of capability and performance
- `llama3.2:3b` - Faster, good for simpler tasks
- `mistral:7b` - Strong reasoning capabilities

### Code-Specialized Models

Best for: Code generation, code review, technical documentation, debugging

**Recommended Models:**

- `codellama:7b` - Optimized for code tasks
- `deepseek-coder:6.7b` - Strong code understanding
- `starcoder2:7b` - Multi-language code support

### Small/Fast Models

Best for: Quick responses, resource-constrained environments, high-throughput scenarios

**Recommended Models:**

- `phi3:mini` - Excellent quality for size
- `gemma:2b` - Fast and efficient
- `tinyllama:1.1b` - Ultra-fast for simple tasks

## Detailed Compatibility Results

### llama3.1:8b (General-Purpose)

| Feature             | Status       | Notes                            |
| ------------------- | ------------ | -------------------------------- |
| Basic Chat          | ⊘ Not Tested | Requires real server for testing |
| Streaming           | ⊘ Not Tested | Requires real server for testing |
| Native Tool Calling | ⊘ Not Tested | Requires real server for testing |
| ReAct Fallback      | ⊘ Not Tested | Requires real server for testing |
| Context 4K          | ⊘ Not Tested | Requires real server for testing |
| Context 8K          | ⊘ Not Tested | Requires real server for testing |
| Context 16K         | ⊘ Not Tested | Requires real server for testing |
| Context 32K         | ⊘ Not Tested | Requires real server for testing |
| Context 64K         | ⊘ Not Tested | Requires real server for testing |
| Context 128K        | ⊘ Not Tested | Requires real server for testing |

**Known Issues:**

- Performance may degrade at context sizes above 32K tokens
- Tool calling requires specific prompt formatting

**Recommendations:**

- Best for general-purpose tasks with moderate context requirements
- Use for conversational AI, content generation, and question answering
- Consider smaller models (llama3.2:3b) for faster responses
- Use context sizes up to 16K for optimal performance

**Workarounds:**

- For large context: Use context compression or summarization
- For tool calling: Ensure proper tool schema formatting
- For performance: Reduce the context window or use quantized versions

---

### llama3.2:3b (General-Purpose, Small)

| Feature             | Status       | Notes                            |
| ------------------- | ------------ | -------------------------------- |
| Basic Chat          | ⊘ Not Tested | Requires real server for testing |
| Streaming           | ⊘ Not Tested | Requires real server for testing |
| Native Tool Calling | ⊘ Not Tested | Requires real server for testing |
| ReAct Fallback      | ⊘ Not Tested | Requires real server for testing |
| Context 4K          | ⊘ Not Tested | Requires real server for testing |
| Context 8K          | ⊘ Not Tested | Requires real server for testing |
| Context 16K         | ⊘ Not Tested | Requires real server for testing |
| Context 32K         | ⊘ Not Tested | Requires real server for testing |
| Context 64K         | ⊘ Not Tested | Requires real server for testing |
| Context 128K        | ⊘ Not Tested | Requires real server for testing |

**Known Issues:**

- Smaller model may have reduced reasoning capabilities compared to the 8B version
- May struggle with complex multi-step tasks

**Recommendations:**

- Best for faster responses when quality can be slightly reduced
- Good for simple conversational tasks and quick queries
- Use when response time is more important than maximum quality
- Ideal for resource-constrained environments

**Workarounds:**

- For complex tasks: Use llama3.1:8b instead
- For better quality: Provide more detailed prompts
- For tool calling: Use simpler tool schemas

---

### codellama:7b (Code-Specialized)

| Feature             | Status       | Notes                            |
| ------------------- | ------------ | -------------------------------- |
| Basic Chat          | ⊘ Not Tested | Requires real server for testing |
| Streaming           | ⊘ Not Tested | Requires real server for testing |
| Native Tool Calling | ⊘ Not Tested | Likely requires ReAct fallback   |
| ReAct Fallback      | ⊘ Not Tested | Requires real server for testing |
| Context 4K          | ⊘ Not Tested | Requires real server for testing |
| Context 8K          | ⊘ Not Tested | Requires real server for testing |
| Context 16K         | ⊘ Not Tested | Requires real server for testing |
| Context 32K         | ⊘ Not Tested | Requires real server for testing |
| Context 64K         | ⊘ Not Tested | Requires real server for testing |
| Context 128K        | ⊘ Not Tested | Requires real server for testing |

**Known Issues:**

- May not support native tool calling (requires ReAct fallback)
- Optimized for code, may be less effective for general conversation
- Tool calling may require specific prompt formatting

**Recommendations:**

- Best for code generation, code review, and technical tasks
- Use for debugging, refactoring, and code explanation
- Excellent for multi-language code support
- Consider for technical documentation generation

**Workarounds:**

- For tool calling: Use ReAct format instead of native tool calling
- For general chat: Use a general-purpose model instead
- For better code quality: Provide code context and examples

---

### deepseek-coder:6.7b (Code-Specialized)

| Feature             | Status       | Notes                            |
| ------------------- | ------------ | -------------------------------- |
| Basic Chat          | ⊘ Not Tested | Requires real server for testing |
| Streaming           | ⊘ Not Tested | Requires real server for testing |
| Native Tool Calling | ⊘ Not Tested | Requires real server for testing |
| ReAct Fallback      | ⊘ Not Tested | Requires real server for testing |
| Context 4K          | ⊘ Not Tested | Requires real server for testing |
| Context 8K          | ⊘ Not Tested | Requires real server for testing |
| Context 16K         | ⊘ Not Tested | Requires real server for testing |
| Context 32K         | ⊘ Not Tested | Requires real server for testing |
| Context 64K         | ⊘ Not Tested | Requires real server for testing |
| Context 128K        | ⊘ Not Tested | Requires real server for testing |

**Known Issues:**

- May have different prompt format requirements than other models
- Tool calling support varies by version

**Recommendations:**

- Strong code understanding and generation capabilities
- Good for complex code refactoring tasks
- Use for code analysis and bug detection
- Excellent for explaining complex code

**Workarounds:**

- For tool calling: Test both native and ReAct formats
- For best results: Provide clear code context
- For performance: Use an appropriate quantization level

---

### phi3:mini (Small/Fast)

| Feature             | Status       | Notes                            |
| ------------------- | ------------ | -------------------------------- |
| Basic Chat          | ⊘ Not Tested | Requires real server for testing |
| Streaming           | ⊘ Not Tested | Requires real server for testing |
| Native Tool Calling | ⊘ Not Tested | Requires real server for testing |
| ReAct Fallback      | ⊘ Not Tested | Requires real server for testing |
| Context 4K          | ⊘ Not Tested | Requires real server for testing |
| Context 8K          | ⊘ Not Tested | Requires real server for testing |
| Context 16K         | ⊘ Not Tested | Requires real server for testing |
| Context 32K         | ⊘ Not Tested | Requires real server for testing |
| Context 64K         | ⊘ Not Tested | Requires real server for testing |
| Context 128K        | ⊘ Not Tested | Requires real server for testing |

**Known Issues:**

- Smaller context window may limit use cases
- May struggle with very complex reasoning tasks
- Tool calling capabilities may be limited

**Recommendations:**

- Excellent quality-to-size ratio
- Best for quick responses and simple tasks
- Use when speed is critical
- Good for resource-constrained environments
- Ideal for high-throughput scenarios

**Workarounds:**

- For complex tasks: Use a larger model
- For large context: Break into smaller chunks
- For tool calling: Use simpler tool schemas

---

### gemma:2b (Small/Fast)

| Feature             | Status       | Notes                            |
| ------------------- | ------------ | -------------------------------- |
| Basic Chat          | ⊘ Not Tested | Requires real server for testing |
| Streaming           | ⊘ Not Tested | Requires real server for testing |
| Native Tool Calling | ⊘ Not Tested | Requires real server for testing |
| ReAct Fallback      | ⊘ Not Tested | Requires real server for testing |
| Context 4K          | ⊘ Not Tested | Requires real server for testing |
| Context 8K          | ⊘ Not Tested | Requires real server for testing |
| Context 16K         | ⊘ Not Tested | Requires real server for testing |
| Context 32K         | ⊘ Not Tested | Requires real server for testing |
| Context 64K         | ⊘ Not Tested | Requires real server for testing |
| Context 128K        | ⊘ Not Tested | Requires real server for testing |

**Known Issues:**

- Very small model may have limited capabilities
- May not support advanced features like tool calling
- Context window is limited

**Recommendations:**

- Ultra-fast responses for simple queries
- Good for basic conversational tasks
- Use when minimal resource usage is required
- Suitable for embedded or edge deployments

**Workarounds:**

- For complex tasks: Use a larger model
- For tool calling: May need to use a larger model
- For better quality: Provide very clear, simple prompts

---

## Model Selection Guide

### By Use Case

**General Conversation & Q&A:**

- Primary: `llama3.1:8b`
- Fast alternative: `llama3.2:3b`
- Budget option: `phi3:mini`

**Code Generation & Review:**

- Primary: `codellama:7b`
- Alternative: `deepseek-coder:6.7b`
- Fast option: `phi3:mini` (for simple code tasks)

**Quick Responses & High Throughput:**

- Primary: `phi3:mini`
- Alternative: `gemma:2b`
- Balanced: `llama3.2:3b`

**Large Context Tasks:**

- Primary: `llama3.1:8b` (up to 32K)
- Alternative: Use context compression
- Note: Test performance at your required context size

**Tool Calling:**

- Native support: `llama3.1:8b`, `llama3.2:3b`
- ReAct fallback: `codellama:7b`, others
- Note: All models can use ReAct format

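The use-case table above can be captured in a small helper script. This is an illustrative sketch only: the `pick_model` function and its category names are hypothetical, not part of OLLM CLI.

```bash
# Hypothetical helper mapping a use case to this guide's primary pick.
pick_model() {
  case "$1" in
    chat) echo "llama3.1:8b"  ;;  # general conversation & Q&A
    code) echo "codellama:7b" ;;  # code generation & review
    fast) echo "phi3:mini"    ;;  # quick responses & high throughput
    *)    echo "llama3.1:8b"  ;;  # reasonable general-purpose default
  esac
}

pick_model code   # prints codellama:7b
```

It could then feed a real pull, e.g. `ollama pull "$(pick_model code)"`.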
### By Resource Constraints

**High Memory Available (16GB+ VRAM):**

- Use 8B+ parameter models
- Enable larger context windows
- Consider multiple models for different tasks

**Medium Memory (8-16GB VRAM):**

- Use 7B parameter models
- Moderate context windows (8-16K)
- Balance between quality and speed

**Low Memory (<8GB VRAM):**

- Use 3B or smaller models
- Limit context windows (4-8K)
- Consider quantized versions (q4_0)

**CPU Only:**

- Use smallest models (2B or less)
- Expect slower responses
- Consider cloud-based alternatives

### By Performance Requirements

**Latency-Critical (<1s response time):**

- Use `phi3:mini` or `gemma:2b`
- Limit context size
- Use quantized versions

**Quality-Critical:**

- Use `llama3.1:8b` or larger
- Allow longer response times
- Use higher precision (q8_0 or f16)

**Balanced:**

- Use `llama3.2:3b` or `phi3:mini`
- Moderate context sizes
- Standard quantization (q4_0)

## Known Issues & Workarounds

### Tool Calling

**Issue:** Some models don't support native tool calling
**Affected Models:** codellama:7b, older models
**Workaround:** OLLM CLI automatically falls back to ReAct format for tool calling
**Status:** Handled automatically by the system

**Issue:** Tool calling may require specific prompt formatting
**Affected Models:** All models
**Workaround:** Use OLLM CLI's built-in tool system, which handles formatting
**Status:** Handled automatically by the system

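For models that fall back to ReAct, the exchange is plain text rather than a structured tool-call object. The snippet below prints a generic illustration of the ReAct pattern; the `list_files` tool and the exact field labels are made up for illustration and are not OLLM CLI's actual protocol.

```bash
# Emit a generic ReAct-style turn; a client parses these plain-text lines
# instead of receiving a native tool-call object. Labels are illustrative only.
react=$(cat <<'EOF'
Thought: I need to see the project files before answering.
Action: list_files
Action Input: {"path": "."}
Observation: ["README.md", "src"]
Final Answer: The project root contains README.md and a src directory.
EOF
)
printf '%s\n' "$react"
```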
### Context Handling

**Issue:** Performance degrades at large context sizes (>32K tokens)
**Affected Models:** Most models
**Workaround:**

- Use context compression features
- Break large contexts into smaller chunks
- Use summarization for long conversations

**Status:** Mitigation available through OLLM CLI features

**Issue:** Context window limits vary by model
**Affected Models:** All models
**Workaround:** OLLM CLI automatically detects and enforces model-specific limits
**Status:** Handled automatically by the system

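If you chunk manually, note that context limits are in tokens while most shell tools count bytes. A common rough conversion is ~4 bytes of ASCII text per token, so an 8K-token budget is about 32 KB. A minimal sketch with standard `split` (the 4-bytes-per-token ratio is an assumption and varies by tokenizer):

```bash
# Split a long context file into ~8K-token chunks, assuming ~4 bytes/token.
ctx=$(mktemp)
head -c 100000 /dev/zero | tr '\0' 'a' > "$ctx"   # stand-in for a long document
split -b 32768 "$ctx" "${ctx}.part."              # 32 KB per chunk ≈ 8K tokens
ls "${ctx}".part.* | wc -l                        # 100000 / 32768 -> 4 chunks
```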
### Streaming

**Issue:** Some models may have inconsistent streaming behavior
**Affected Models:** Varies by model and server version
**Workaround:** OLLM CLI handles streaming inconsistencies transparently
**Status:** Handled automatically by the system

### Model Availability

**Issue:** Models must be pulled before use
**Affected Models:** All models
**Workaround:** Use `ollama pull <model>` before first use
**Status:** User action required

**Issue:** Large models require significant disk space
**Affected Models:** 8B+ parameter models
**Workaround:**

- Use quantized versions (q4_0 instead of f16)
- Remove unused models with `ollama rm <model>`
- Monitor disk space

**Status:** User management required

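A quick pre-pull check for the disk-space issue above. The models directory differs by platform and configuration, so the path here is only a guess, overridable via `OLLAMA_MODELS`:

```bash
# Warn before pulling if free space looks too low for a large model.
# The default path is an assumption; Ollama's models dir varies per platform.
models_dir="${OLLAMA_MODELS:-$HOME}"
free_gb=$(df -Pk "$models_dir" | awk 'NR==2 { print int($4 / 1024 / 1024) }')
echo "free space: ${free_gb} GB"
if [ "$free_gb" -lt 10 ]; then
  echo "warning: under 10 GB free; an 8B model at f16 may not fit" >&2
fi
```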
## Testing Methodology

### Automated Testing

The compatibility matrix is maintained through automated integration tests:

1. **Server Detection:** Tests automatically detect whether the Ollama server is running
2. **Model Discovery:** Tests check which models are available
3. **Capability Testing:** Each available model is tested for all capabilities
4. **Graceful Skipping:** Tests skip gracefully when the server or models are unavailable
5. **Result Documentation:** Test results are logged and can be used to update this document

### Test Capabilities

Each model is tested for:

- **Basic Chat:** Simple prompt/response interaction
- **Streaming:** Incremental response delivery
- **Native Tool Calling:** Built-in tool calling support
- **ReAct Fallback:** Text-based tool calling format
- **Context Sizes:** 4K, 8K, 16K, 32K, 64K, and 128K token contexts

### Running Tests Yourself

```bash
# Start Ollama server
ollama serve

# Pull models you want to test
ollama pull llama3.1:8b
ollama pull codellama:7b
ollama pull phi3:mini

# Run compatibility tests
npm test -- modelCompatibility.integration.test.ts

# View detailed results in console output
```

### Updating This Document

When test results change:

1. Run the compatibility tests with your models
2. Review the console output for test results
3. Update the feature status tables above
4. Add any new known issues discovered
5. Update recommendations based on test results
6. Commit changes with the test date and CLI version

## Contributing

If you discover issues or have recommendations:

1. Run the compatibility tests to verify the issue
2. Document the issue with the specific model and version
3. Provide reproduction steps
4. Suggest workarounds if available
5. Submit a pull request or issue

## Version History

| Date       | CLI Version | Changes                              |
| ---------- | ----------- | ------------------------------------ |
| 2026-01-15 | 0.1.0       | Initial compatibility matrix created |

## References

- [OLLM CLI Documentation](../../README.md)
- [Models Documentation](3%20projects/OLLM%20CLI/LLM%20Models/README.md)
- [Model Management](Models_architecture.md)
- [Testing Strategy](../../.kiro/specs/stage-08-testing-qa/design.md)
- [Integration Tests](../../packages/test-utils/src/__tests__/modelCompatibility.integration.test.ts)
- [Ollama Model Library](https://ollama.ai/library)

---

**Note:** This document reflects the current state of testing. A status of "⊘ Not Tested" indicates that the automated tests require a running server with the model installed. To get actual test results, run the integration tests with your models installed.