@autodev/codebase 0.0.5 → 0.0.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,336 +1,321 @@
- 
- 
  # @autodev/codebase
 
  <div align="center">
- <img src="src/images/image2.png" alt="Image 2" style="display: inline-block; width: 350px; margin: 0 10px;" />
- <img src="src/images/image3.png" alt="Image 3" style="display: inline-block; width: 200px; margin: 0 10px;" />
+
+ [![npm version](https://img.shields.io/npm/v/@autodev/codebase)](https://www.npmjs.com/package/@autodev/codebase)
+ [![GitHub stars](https://img.shields.io/github/stars/anrgct/autodev-codebase)](https://github.com/anrgct/autodev-codebase)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+
  </div>
 
- <br />
+ ```sh
+ ╭─ ~/workspace/autodev-codebase
+ ╰─❯ codebase --demo --search="user manage"
+ Found 3 results in 2 files for: "user manage"
+
+ ==================================================
+ File: "hello.js"
+ ==================================================
+ < class UserManager > (L7-20)
+ class UserManager {
+   constructor() {
+     this.users = [];
+   }
+
+   addUser(user) {
+     this.users.push(user);
+     console.log('User added:', user.name);
+   }
+
+   getUsers() {
+     return this.users;
+   }
+ }
 
- A platform-agnostic code analysis library with semantic search capabilities and MCP (Model Context Protocol) server support. This library provides intelligent code indexing, vector-based semantic search, and can be integrated into various development tools and IDEs.
+ ==================================================
+ File: "README.md" | 2 snippets
+ ==================================================
+ < md_h1 Demo Project > md_h2 Usage > md_h3 JavaScript Functions > (L16-20)
+ ### JavaScript Functions
+
+ - greetUser(name) - Greets a user by name
+ - UserManager - Class for managing user data
+
+ ─────
+ < md_h1 Demo Project > md_h2 Search Examples > (L27-38)
+ ## Search Examples
+
+ Try searching for:
+ - "greet user"
+ - "process data"
+ - "user management"
+ - "batch processing"
+ - "YOLO model"
+ - "computer vision"
+ - "object detection"
+ - "model training"
+
+ ```
+ A vector-embedding-based semantic code search tool with an MCP server and multi-model integration. It can be used as a pure CLI tool, and it supports Ollama for fully local embedding and reranking, enabling complete offline operation and keeping your code repository private.
 
  ## 🚀 Features
 
- - **Semantic Code Search**: Vector-based code search using embeddings
- - **MCP Server Support**: HTTP-based MCP server for IDE integration
- - **Terminal UI**: Interactive CLI with rich terminal interface
- - **Tree-sitter Parsing**: Advanced code parsing and analysis
- - **Vector Storage**: Qdrant vector database integration
- - **Flexible Embedding**: Support for various embedding models via Ollama
+ - **🔍 Semantic Code Search**: Vector-based search using advanced embedding models
+ - **🌐 MCP Server**: HTTP-based MCP server with SSE and stdio adapters
+ - **💻 Pure CLI Tool**: Standalone command-line interface without GUI dependencies
+ - **⚙️ Layered Configuration**: CLI, project, and global config management
+ - **🎯 Advanced Path Filtering**: Glob patterns with brace expansion and exclusions
+ - **🌲 Tree-sitter Parsing**: Support for 40+ programming languages
+ - **💾 Qdrant Integration**: High-performance vector database
+ - **🔄 Multiple Providers**: OpenAI, Ollama, Jina, Gemini, Mistral, OpenRouter, Vercel
+ - **📊 Real-time Watching**: Automatic index updates
+ - **⚡ Batch Processing**: Efficient parallel processing
 
  ## 📦 Installation
 
- ### 1. Install and Start Ollama
-
+ ### 1. Dependencies
  ```bash
- # Install Ollama (macOS)
- brew install ollama
-
- # Start Ollama service
+ # macOS (Homebrew); on other platforms, install Ollama and ripgrep per their docs
+ brew install ollama ripgrep
  ollama serve
+ ollama pull nomic-embed-text
+ ```
 
- # In a new terminal, pull the embedding model
- ollama pull dengcao/Qwen3-Embedding-0.6B:Q8_0
+ ### 2. Qdrant
+ ```bash
+ docker run -d -p 6333:6333 -p 6334:6334 --name qdrant qdrant/qdrant
  ```
 
- ### 2. Install ripgrep
+ ### 3. Install
+ ```bash
+ npm install -g @autodev/codebase
+ codebase --set-config embedderProvider=ollama,embedderModelId=nomic-embed-text
+ ```
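+
+ As a quick sanity check before indexing (a minimal sketch, assuming both services are on their default ports; these are the same endpoints the 0.0.5 README used for verification):
+ ```bash
+ # Ollama should list its models, Qdrant its collections
+ curl http://localhost:11434/api/tags
+ curl http://localhost:6333/collections
+ ```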
 
- `ripgrep` is required for fast codebase indexing. Install it with:
+ ## 🛠️ Quick Start
 
  ```bash
- # Install ripgrep (macOS)
- brew install ripgrep
+ # Demo mode (recommended for first-time use)
+ # Creates a demo directory in the current working directory for testing
 
- # Or on Ubuntu/Debian
- sudo apt-get install ripgrep
+ # Index & search
+ codebase --demo --index
+ codebase --demo --search="user greet"
 
- # Or on Arch Linux
- sudo pacman -S ripgrep
+ # MCP server
+ codebase --demo --serve
  ```
 
- ### 3. Install and Start Qdrant
-
- Start Qdrant using Docker:
+ ## 📋 Commands
 
+ ### Indexing & Search
  ```bash
- # Start Qdrant container
- docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant
- ```
+ # Index the codebase
+ codebase --index --path=/my/project --force
 
- Or download and run Qdrant directly:
+ # Search with filters
+ codebase --search="error handling" --path-filters="src/**/*.ts"
 
- ```bash
- # Download and run Qdrant
- wget https://github.com/qdrant/qdrant/releases/latest/download/qdrant-x86_64-unknown-linux-gnu.tar.gz
- tar -xzf qdrant-x86_64-unknown-linux-gnu.tar.gz
- ./qdrant
- ```
+ # Search with a custom limit and minimum score
+ codebase --search="authentication" --limit=20 --min-score=0.7
+ codebase --search="API" -l 30 -s 0.5
+
+ # Search in JSON format
+ codebase --search="authentication" --json
 
- ### 4. Verify Services Are Running
+ # Clear index data
+ codebase --clear --path=/my/project
+ ```
 
+ ### MCP Server
  ```bash
- # Check Ollama
- curl http://localhost:11434/api/tags
+ # HTTP mode (recommended)
+ codebase --serve --port=3001 --path=/my/project
 
- # Check Qdrant
- curl http://localhost:6333/collections
+ # Stdio adapter
+ codebase --stdio-adapter --server-url=http://localhost:3001/mcp
  ```
- ### 5. Install Autodev-codebase
 
+ ### Configuration
  ```bash
- npm install -g @autodev/codebase
- ```
+ # View config
+ codebase --get-config
+ codebase --get-config embedderProvider --json
 
- Alternatively, you can install it locally:
+ # Set config
+ codebase --set-config embedderProvider=ollama,embedderModelId=nomic-embed-text
+ codebase --set-config --global qdrantUrl=http://localhost:6333
  ```
- git clone https://github.com/anrgct/autodev-codebase
- cd autodev-codebase
- npm install
- npm run build
- npm link
- ```
- ## 🛠️ Usage
 
- ### Command Line Interface
+ ### Advanced Features
 
- The CLI provides two main modes:
+ #### 🔍 LLM-Powered Search Reranking
+ Enable LLM reranking to dramatically improve search relevance:
 
- #### 1. Interactive TUI Mode (Default)
  ```bash
- # Basic usage: index your current folder as the codebase.
- # Be cautious when running this command if you have a large number of files.
- codebase
+ # Enable reranking with Ollama (recommended)
+ codebase --set-config rerankerEnabled=true,rerankerProvider=ollama,rerankerOllamaModelId=qwen3-vl:4b-instruct
 
+ # Or use an OpenAI-compatible provider
+ codebase --set-config rerankerEnabled=true,rerankerProvider=openai-compatible,rerankerOpenAiCompatibleModelId=deepseek-chat
 
- # With custom options
- codebase --demo # Create a local demo directory and test the indexing service, recommend for setup
- codebase --path=/my/project
- codebase --path=/my/project --log-level=info
+ # Search with automatic reranking
+ codebase --search="user authentication"  # Results are automatically reranked by the LLM
  ```
 
- #### 2. MCP Server Mode (Recommended for IDE Integration)
+ **Benefits:**
+ - 🎯 **Higher precision**: The LLM understands semantic relevance beyond vector similarity
+ - 📊 **Smart scoring**: Results are reranked on a 0-10 scale based on query relevance
+ - ⚡ **Batch processing**: Efficiently handles large result sets with configurable batch sizes
+ - 🎛️ **Threshold control**: Filter results with `rerankerMinScore` to keep only high-quality matches (see the sketch below)
+
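+ A rough sketch of threshold control (assuming `rerankerMinScore` is settable via `--set-config` like the other reranker keys and uses the 0-10 scale described above; the value is illustrative):
+ ```bash
+ # Keep only matches the reranker scores at 7 or above
+ codebase --set-config rerankerEnabled=true,rerankerProvider=ollama,rerankerMinScore=7
+ ```
+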
+ #### Path Filtering & Export
  ```bash
- # Start long-running MCP server
- cd /my/project
- codebase mcp-server
+ # Path filtering with brace expansion and exclusions
+ codebase --search="API" --path-filters="src/**/*.ts,lib/**/*.js"
+ codebase --search="utils" --path-filters="{src,test}/**/*.ts"
 
- # With custom configuration
- codebase mcp-server --port=3001 --host=localhost
- codebase mcp-server --path=/workspace --port=3002
+ # Export results in JSON format for scripts
+ codebase --search="auth" --json
  ```
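+
+ The JSON output is handy in scripts. A minimal sketch (the output schema is not documented in this README, so the `jq` filter assumes a top-level array of result objects with a `file` field; adjust to the actual shape):
+ ```bash
+ # List the unique files that matched
+ codebase --search="auth" --json | jq -r '.[].file' | sort -u
+ ```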
 
-
  ## ⚙️ Configuration
 
- ### Configuration Files & Priority
-
- The library uses a layered configuration system, allowing you to customize settings at different levels. The priority order (highest to lowest) is:
-
- 1. **CLI Parameters** (e.g., `--model`, `--ollama-url`, `--qdrant-url`, `--config`, etc.)
- 2. **Project Config File** (`./autodev-config.json`)
- 3. **Global Config File** (`~/.autodev-cache/autodev-config.json`)
- 4. **Built-in Defaults**
-
- Settings specified at a higher level override those at lower levels. This lets you tailor the behavior for your environment or project as needed.
-
- **Config file locations:**
- - Global: `~/.autodev-cache/autodev-config.json`
- - Project: `./autodev-config.json`
- - CLI: Pass parameters directly when running commands
+ ### Config Layers (Priority Order)
+ 1. **CLI Arguments** - Runtime parameters (`--path`, `--config`, `--log-level`, `--force`, etc.)
+ 2. **Project Config** - `./autodev-config.json` (or custom path via `--config`)
+ 3. **Global Config** - `~/.autodev-cache/autodev-config.json`
+ 4. **Built-in Defaults** - Fallback values
 
+ **Note:** CLI arguments provide runtime overrides for paths, logging, and operational behavior. For persistent configuration (embedderProvider, API keys, search parameters), use `--set-config` to save it to a config file.
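+
+ In practice, the layering means a global default can be set once, refined per project, and still overridden for a single run (a sketch using only commands shown in this README):
+ ```bash
+ # Global default, written to ~/.autodev-cache/autodev-config.json
+ codebase --set-config --global qdrantUrl=http://localhost:6333
+ # Project-level choice, written to ./autodev-config.json
+ codebase --set-config embedderProvider=ollama,embedderModelId=nomic-embed-text
+ # One-off runtime override, highest priority
+ codebase --index --path=/my/project --log-level=info
+ ```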
 
- #### Global Configuration
-
- Create a global configuration file at `~/.autodev-cache/autodev-config.json`:
+ ### Common Config Examples
 
+ **Ollama:**
  ```json
  {
-   "isEnabled": true,
-   "embedder": {
-     "provider": "ollama",
-     "model": "dengcao/Qwen3-Embedding-0.6B:Q8_0",
-     "dimension": 1024,
-     "baseUrl": "http://localhost:11434"
-   },
-   "qdrantUrl": "http://localhost:6333",
-   "qdrantApiKey": "your-api-key-if-needed",
-   "searchMinScore": 0.4
+   "embedderProvider": "ollama",
+   "embedderModelId": "nomic-embed-text",
+   "qdrantUrl": "http://localhost:6333"
  }
  ```
 
- #### Project Configuration
-
- Create a project-specific configuration file at `./autodev-config.json`:
+ **OpenAI:**
+ ```json
+ {
+   "embedderProvider": "openai",
+   "embedderModelId": "text-embedding-3-small",
+   "embedderOpenAiApiKey": "sk-your-key",
+   "qdrantUrl": "http://localhost:6333"
+ }
+ ```
 
+ **OpenAI-Compatible:**
  ```json
  {
-   "embedder": {
-     "provider": "openai-compatible",
-     "apiKey": "sk-xxxxx",
-     "baseUrl": "http://localhost:2302/v1",
-     "model": "openai/text-embedding-3-smallnpm",
-     "dimension": 1536,
-   },
-   "qdrantUrl": "http://localhost:6334"
+   "embedderProvider": "openai-compatible",
+   "embedderModelId": "text-embedding-3-small",
+   "embedderOpenAiCompatibleApiKey": "sk-your-key",
+   "embedderOpenAiCompatibleBaseUrl": "https://api.openai.com/v1"
  }
  ```
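+
+ The same settings can be applied without editing a file by hand, reusing the comma-separated `key=value` syntax shown above (a sketch; the key names come straight from the JSON examples):
+ ```bash
+ codebase --set-config embedderProvider=openai-compatible,embedderModelId=text-embedding-3-small,embedderOpenAiCompatibleApiKey=sk-your-key,embedderOpenAiCompatibleBaseUrl=https://api.openai.com/v1
+ ```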
 
- #### Configuration Options
+ ### Key Configuration Options
 
- | Option | Type | Description | Default |
- |--------|------|-------------|---------|
- | `isEnabled` | boolean | Enable/disable code indexing feature | `true` |
- | `embedder.provider` | string | Embedding provider (`ollama`, `openai`, `openai-compatible`) | `ollama` |
- | `embedder.model` | string | Embedding model name | `dengcao/Qwen3-Embedding-0.6B:Q8_0` |
- | `embedder.dimension` | number | Vector dimension size | `1024` |
- | `embedder.baseUrl` | string | Provider API base URL | `http://localhost:11434` |
- | `embedder.apiKey` | string | API key (for OpenAI/compatible providers) | - |
- | `qdrantUrl` | string | Qdrant vector database URL | `http://localhost:6333` |
- | `qdrantApiKey` | string | Qdrant API key (if authentication enabled) | - |
- | `searchMinScore` | number | Minimum similarity score for search results | `0.4` |
+ | Category | Options | Description |
+ |----------|---------|-------------|
+ | **Embedding** | `embedderProvider`, `embedderModelId`, `embedderModelDimension` | Provider and model settings |
+ | **API Keys** | `embedderOpenAiApiKey`, `embedderOpenAiCompatibleApiKey` | Authentication |
+ | **Vector Store** | `qdrantUrl`, `qdrantApiKey` | Qdrant connection |
+ | **Search** | `vectorSearchMinScore`, `vectorSearchMaxResults` | Search behavior |
+ | **Reranker** | `rerankerEnabled`, `rerankerProvider` | Result reranking |
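+
+ Since search parameters persist via `--set-config` (see the note under Config Layers), search behavior can be tuned once rather than per invocation (a sketch; the values are illustrative):
+ ```bash
+ # Persist a stricter minimum score and a smaller result cap
+ codebase --set-config vectorSearchMinScore=0.6,vectorSearchMaxResults=15
+ ```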
 
- **Note**: The `isConfigured` field is automatically calculated based on the completeness of your configuration and should not be set manually. The system will determine if the configuration is valid based on the required fields for your chosen provider.
+ **Key CLI Arguments:**
+ - `--serve` / `--index` / `--search` - Core operations
+ - `--get-config` / `--set-config` - Configuration management
+ - `--path`, `--demo`, `--force` - Common options
+ - `--limit` / `-l <number>` - Maximum number of search results (default: from config, max 50)
+ - `--min-score` / `-s <number>` - Minimum similarity score for search results (0-1, default: from config)
+ - `--help` - Show all available options
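+
+ These arguments compose. For example (a sketch combining only flags listed above):
+ ```bash
+ # Scripted search over TypeScript sources only, with tight scoring
+ codebase --search="error handling" --path-filters="src/**/*.ts" -l 10 -s 0.6 --json
+ ```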
 
- #### Configuration Priority Examples
+ For complete CLI reference, see [CONFIG.md](CONFIG.md).
 
+ **Configuration Commands:**
  ```bash
- # Use global config defaults
- codebase
+ # View config
+ codebase --get-config
+ codebase --get-config --json
 
- # Override model via CLI (highest priority)
- codebase --model="custom-model"
+ # Set config (saves to file)
+ codebase --set-config embedderProvider=ollama,embedderModelId=nomic-embed-text
+ codebase --set-config --global embedderProvider=openai,embedderOpenAiApiKey=sk-xxx
 
- # Use project config with CLI overrides
- codebase --config=./my-config.json --qdrant-url=http://remote:6333
- ```
+ # Use a custom config file
+ codebase --config=/path/to/config.json --get-config
+ codebase --config=/path/to/config.json --set-config embedderProvider=ollama
 
- ## 🔧 CLI Options
-
- ### Global Options
- - `--path=<path>` - Workspace path (default: current directory)
- - `--demo` - Create demo files in workspace
- - `--force` - ignore cache force re-index
- - `--ollama-url=<url>` - Ollama API URL (default: http://localhost:11434)
- - `--qdrant-url=<url>` - Qdrant vector DB URL (default: http://localhost:6333)
- - `--model=<model>` - Embedding model (default: nomic-embed-text)
- - `--config=<path>` - Config file path
- - `--storage=<path>` - Storage directory path
- - `--cache=<path>` - Cache directory path
- - `--log-level=<level>` - Log level: error|warn|info|debug (default: error)
- - `--log-level=<level>` - Log level: error|warn|info|debug (default: error)
- - `--help, -h` - Show help
-
- ### MCP Server Options
- - `--port=<port>` - HTTP server port (default: 3001)
- - `--host=<host>` - HTTP server host (default: localhost)
+ # Runtime override (paths, logging, etc.)
+ codebase --index --path=/my/project --log-level=info --force
+ ```
 
+ For complete configuration reference, see [CONFIG.md](CONFIG.md).
 
- ### IDE Integration (Cursor/Claude)
+ ## 🔌 MCP Integration
 
- Configure your IDE to connect to the MCP server:
+ ### HTTP Streamable Mode (Recommended)
+ ```bash
+ codebase --serve --port=3001
+ ```
 
+ **IDE Config:**
  ```json
  {
    "mcpServers": {
      "codebase": {
-       "url": "http://localhost:3001/sse"
+       "url": "http://localhost:3001/mcp"
      }
    }
  }
  ```
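+
+ To smoke-test the server before wiring up an IDE (a sketch; the 0.0.5 README documented a `/health` JSON endpoint on this port, and this assumes it is still served in 0.0.6):
+ ```bash
+ # In one terminal
+ codebase --serve --port=3001
+ # In another terminal
+ curl http://localhost:3001/health
+ ```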
 
- For clients that do not support SSE MCP, you can use the following configuration:
+ ### Stdio Adapter
+ ```bash
+ # First start the MCP server in one terminal
+ codebase --serve --port=3001
 
+ # Then connect via stdio adapter in another terminal (for IDEs that require stdio)
+ codebase --stdio-adapter --server-url=http://localhost:3001/mcp
+ ```
+
+ **IDE Config:**
  ```json
  {
    "mcpServers": {
      "codebase": {
        "command": "codebase",
-       "args": [
-         "stdio-adapter",
-         "--server-url=http://localhost:3001/sse"
-       ]
+       "args": ["stdio-adapter", "--server-url=http://localhost:3001/mcp"]
      }
    }
  }
  ```
- ## 🌐 MCP Server Features
 
- ### Web Interface
- - **Home Page**: `http://localhost:3001` - Server status and configuration
- - **Health Check**: `http://localhost:3001/health` - JSON status endpoint
- - **MCP Endpoint**: `http://localhost:3001/sse` - SSE/HTTP MCP protocol endpoint
+ ## 🤝 Contributing
 
- ### Available MCP Tools
- - **`search_codebase`** - Semantic search through your codebase
-   - Parameters: `query` (string), `limit` (number), `filters` (object)
-   - Returns: Formatted search results with file paths, scores, and code blocks
+ Contributions are welcome! Please feel free to submit a Pull Request or open an Issue on [GitHub](https://github.com/anrgct/autodev-codebase).
 
+ ## 📄 License
 
+ This project is licensed under the [MIT License](https://opensource.org/licenses/MIT).
 
- ### Scripts
- ```bash
- # Development mode with demo files
- npm run dev
+ ## 🙏 Acknowledgments
+
+ This project is a fork and derivative work based on [Roo Code](https://github.com/RooCodeInc/Roo-Code). We've built upon their excellent foundation to create this specialized codebase analysis tool with enhanced features and MCP server capabilities.
 
- # Build for production
- npm run build
+ ---
 
- # Type checking
- npm run type-check
+ <div align="center">
 
- # Run TUI demo
- npm run demo-tui
+ **🌟 If you find this tool helpful, please give us a [star on GitHub](https://github.com/anrgct/autodev-codebase)!**
 
- # Start MCP server demo
- npm run mcp-server
- ```
+ Made with ❤️ for the developer community
 
- ## Embedding Models PK
-
- **Mainstream Embedding Models Performance**
-
- | Model | Dimension | Avg Precision@3 | Avg Precision@5 | Good Queries (≥66.7%) | Failed Queries (0%) |
- | --- | --- | --- | --- | --- | --- |
- | siliconflow/Qwen/Qwen3-Embedding-8B | 4096 | **76.7%** | 66.0% | 5/10 | 0/10 |
- | siliconflow/Qwen/Qwen3-Embedding-4B | 2560 | **73.3%** | 54.0% | 5/10 | 1/10 |
- | voyage/voyage-code-3 | 1024 | **73.3%** | 52.0% | 6/10 | 1/10 |
- | siliconflow/Qwen/Qwen3-Embedding-0.6B | 1024 | **63.3%** | 42.0% | 4/10 | 1/10 |
- | morph-embedding-v2 | 1536 | **56.7%** | 44.0% | 3/10 | 1/10 |
- | openai/text-embedding-ada-002 | 1536 | **53.3%** | 38.0% | 2/10 | 1/10 |
- | voyage/voyage-3-large | 1024 | **53.3%** | 42.0% | 3/10 | 2/10 |
- | openai/text-embedding-3-large | 3072 | **46.7%** | 38.0% | 1/10 | 3/10 |
- | voyage/voyage-3.5 | 1024 | **43.3%** | 38.0% | 1/10 | 2/10 |
- | voyage/voyage-3.5-lite | 1024 | **36.7%** | 28.0% | 1/10 | 2/10 |
- | openai/text-embedding-3-small | 1536 | **33.3%** | 28.0% | 1/10 | 4/10 |
- | siliconflow/BAAI/bge-large-en-v1.5 | 1024 | **30.0%** | 28.0% | 0/10 | 3/10 |
- | siliconflow/Pro/BAAI/bge-m3 | 1024 | **26.7%** | 24.0% | 0/10 | 2/10 |
- | ollama/nomic-embed-text | 768 | **16.7%** | 18.0% | 0/10 | 6/10 |
- | siliconflow/netease-youdao/bce-embedding-base_v1 | 1024 | **13.3%** | 16.0% | 0/10 | 6/10 |
-
- ------
-
- **Ollama-based Embedding Models Performance**
-
- | Model | Dimension | Precision@3 | Precision@5 | Good Queries (≥66.7%) | Failed Queries (0%) |
- | --- | --- | --- | --- | --- | --- |
- | ollama/dengcao/Qwen3-Embedding-4B:Q4_K_M | 2560 | 66.7% | 48.0% | 4/10 | 1/10 |
- | ollama/dengcao/Qwen3-Embedding-0.6B:f16 | 1024 | 63.3% | 44.0% | 3/10 | 0/10 |
- | ollama/dengcao/Qwen3-Embedding-0.6B:Q8_0 | 1024 | 63.3% | 44.0% | 3/10 | 0/10 |
- | ollama/dengcao/Qwen3-Embedding-4B:Q8_0 | 2560 | 60.0% | 48.0% | 3/10 | 1/10 |
- | lmstudio/taylor-jones/bge-code-v1-Q8_0-GGUF | 1536 | 60.0% | 54.0% | 4/10 | 1/10 |
- | ollama/dengcao/Qwen3-Embedding-8B:Q4_K_M | 4096 | 56.7% | 42.0% | 2/10 | 2/10 |
- | ollama/hf.co/nomic-ai/nomic-embed-code-GGUF:Q4_K_M | 3584 | 53.3% | 44.0% | 2/10 | 0/10 |
- | ollama/bge-m3:f16 | 1024 | 26.7% | 24.0% | 0/10 | 2/10 |
- | ollama/hf.co/nomic-ai/nomic-embed-text-v2-moe-GGUF:f16 | 768 | 26.7% | 20.0% | 0/10 | 2/10 |
- | ollama/granite-embedding:278m-fp16 | 768 | 23.3% | 18.0% | 0/10 | 4/10 |
- | ollama/unclemusclez/jina-embeddings-v2-base-code:f16 | 768 | 23.3% | 16.0% | 0/10 | 5/10 |
- | lmstudio/awhiteside/CodeRankEmbed-Q8_0-GGUF | 768 | 23.3% | 16.0% | 0/10 | 5/10 |
- | lmstudio/wsxiaoys/jina-embeddings-v2-base-code-Q8_0-GGUF | 768 | 23.3% | 16.0% | 0/10 | 5/10 |
- | ollama/dengcao/Dmeta-embedding-zh:F16 | 768 | 20.0% | 20.0% | 0/10 | 6/10 |
- | ollama/znbang/bge:small-en-v1.5-q8_0 | 384 | 16.7% | 16.0% | 0/10 | 6/10 |
- | lmstudio/nomic-ai/nomic-embed-text-v1.5-GGUF@Q4_K_M | 768 | 16.7% | 14.0% | 0/10 | 6/10 |
- | ollama/nomic-embed-text:f16 | 768 | 16.7% | 18.0% | 0/10 | 6/10 |
- | ollama/snowflake-arctic-embed2:568m:f16 | 1024 | 16.7% | 18.0% | 0/10 | 5/10 |
  </div>