@aci-metrics/score 0.0.1

package/README.md ADDED
# ACI Local Scorer (Tier 1)

Air-gapped local scoring executable for ACI (AI Collaboration Index) analysis.

## Requirements

- macOS with Apple Silicon (M1/M2/M3) recommended
- 16GB+ RAM
- Node.js 18+
- Xcode Command Line Tools (for native compilation)

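You can quickly confirm the toolchain prerequisites before installing:

```bash
node --version    # should print v18 or newer
xcode-select -p   # prints the Command Line Tools path if they are installed
```
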
## Setup

### 1. Install Dependencies

```bash
npm install
```

This installs:
- `node-llama-cpp` - Native bindings for llama.cpp (compiles on install)
- `simple-statistics` - Statistical analysis for norming and z-scores

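To confirm both dependencies are present after the install, list them with npm:

```bash
npm ls node-llama-cpp simple-statistics
```

If `node-llama-cpp` failed to build, see the Troubleshooting section below.
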
### 2. Download a Model

The local scorer supports multiple models. Download the one you want to use.

**Default: Qwen 2.5 1.5B (~900MB)**
```bash
huggingface-cli download bartowski/Qwen2.5-1.5B-Instruct-GGUF \
  Qwen2.5-1.5B-Instruct-Q4_K_M.gguf \
  --local-dir ./models
```

**Alternative: Llama 3.2 3B (~1.8GB)**
```bash
huggingface-cli download bartowski/Llama-3.2-3B-Instruct-GGUF \
  Llama-3.2-3B-Instruct-Q4_K_M.gguf \
  --local-dir ./models
```

**Alternative: Phi 3.5 Mini (~2.2GB)**
```bash
huggingface-cli download bartowski/Phi-3.5-mini-instruct-GGUF \
  Phi-3.5-mini-instruct-Q4_K_M.gguf \
  --local-dir ./models
```

**Alternative: Gemma 2 2B (~1.4GB)**
```bash
huggingface-cli download bartowski/gemma-2-2b-it-GGUF \
  gemma-2-2b-it-Q4_K_M.gguf \
  --local-dir ./models
```

### 3. Verify Installation

```bash
npm test
```

This runs the model test, which verifies that everything is configured correctly.

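`npm test` presumably wraps the verification script, so you can also run it directly (the command-line overrides below use the same entry point):

```bash
node test-model.js
```
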
## Configuration

All configuration is in the `config/` directory.

### Switching Models

Edit `config/default.json` and change the model id:

```json
{
  "model": {
    "id": "llama-3.2-3b"
  }
}
```

Available models:
- `qwen2.5-1.5b` (default)
- `phi-3.5-mini`
- `llama-3.2-3b`
- `gemma-2-2b`

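Each id maps to an entry in `config/models.json`. The entry below is only a guess at that registry's shape for illustration (the field names are assumptions; the GGUF filename and prompt template path come from elsewhere in this README), so check the shipped file for the real schema:

```json
{
  "qwen2.5-1.5b": {
    "file": "Qwen2.5-1.5B-Instruct-Q4_K_M.gguf",
    "promptTemplate": "prompts/qwen.txt"
  }
}
```
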
### Switching Inference Runtime

The scorer supports two inference backends:

**node-llama-cpp (default)** - Loads GGUF files directly
```json
{
  "runtime": {
    "provider": "node-llama-cpp"
  }
}
```

**Ollama** - Uses the Ollama HTTP API
```json
{
  "runtime": {
    "provider": "ollama"
  }
}
```

For Ollama, make sure the server is running and the model has been pulled:
```bash
ollama serve
ollama pull qwen2.5:1.5b
```
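
Under the hood, the Ollama backend talks to the local HTTP API (port 11434 by default). The sketch below only illustrates that kind of request and is not the package's actual `ollama.js` code; the model name and prompt are examples:

```javascript
// Illustrative Ollama /api/generate call with JSON-constrained output.
async function ollamaGenerate(prompt) {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'qwen2.5:1.5b', // must already be pulled
      prompt,
      format: 'json',        // ask Ollama to constrain output to valid JSON
      stream: false          // return the whole completion at once
    })
  });
  const data = await res.json();
  return JSON.parse(data.response); // Ollama puts the generated text in `response`
}
```
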
### Command Line Overrides

You can override config settings from the command line:

```bash
# Test with a different model
node test-model.js --model=llama-3.2-3b

# Test with Ollama backend
node test-model.js --provider=ollama

# Combine both
node test-model.js --model=gemma-2-2b --provider=ollama
```

## Directory Structure

```
local-scorer/
  config/
    default.json            # Runtime configuration
    models.json             # Model registry
  lib/
    provider-factory.js     # Creates the right provider
    providers/
      base.js               # Abstract provider interface
      node-llama-cpp.js     # GGUF file loader
      ollama.js             # Ollama HTTP client
  models/
    *.gguf                  # Downloaded model files
  prompts/
    qwen.txt                # Qwen prompt template
    llama.txt               # Llama prompt template
    phi.txt                 # Phi prompt template
    gemma.txt               # Gemma prompt template
  test-model.js             # Model verification script
  package.json
  README.md
```

## Available Models

| Model | Size | Best For |
|-------|------|----------|
| qwen2.5-1.5b | ~900MB | Fast, reliable JSON output |
| phi-3.5-mini | ~2.2GB | Strong reasoning |
| llama-3.2-3b | ~1.8GB | General capabilities |
| gemma-2-2b | ~1.4GB | Fast inference |

## Programmatic Usage

```javascript
const createProvider = require('./lib/provider-factory');

async function main() {
  // Create provider (reads from config/default.json)
  const provider = createProvider();

  // Or override settings:
  // const provider = createProvider({
  //   modelId: 'llama-3.2-3b',
  //   provider: 'node-llama-cpp'
  // });

  // Initialize (loads model)
  await provider.initialize();

  // Generate with schema enforcement
  const schema = {
    type: 'object',
    properties: {
      task_type: { type: 'string', enum: ['feature', 'bugfix'] },
      complexity: { type: 'number', minimum: 1, maximum: 10 }
    },
    required: ['task_type', 'complexity']
  };

  const result = await provider.generate('Classify this task...', schema);
  console.log(result);

  // Clean up
  await provider.destroy();
}

main().catch(console.error);
```

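The provider object returned by the factory exposes at least `initialize()`, `generate()`, and `destroy()` (the methods used above). A sketch of what the abstract interface in `lib/providers/base.js` might look like follows; only the three method names come from this README, everything else is an assumption:

```javascript
// Hypothetical sketch of the abstract provider interface (lib/providers/base.js).
// Method names are documented above; bodies and exact signatures are guesses.
class BaseProvider {
  // Load the model / open the connection to the runtime.
  async initialize() {
    throw new Error('initialize() must be implemented by a concrete provider');
  }

  // Generate output for `prompt`, constrained to the given JSON schema.
  async generate(prompt, schema) {
    throw new Error('generate() must be implemented by a concrete provider');
  }

  // Release the model / close the connection.
  async destroy() {
    throw new Error('destroy() must be implemented by a concrete provider');
  }
}

module.exports = BaseProvider;
```
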
## Troubleshooting

### node-llama-cpp fails to compile

Make sure you have Xcode Command Line Tools installed:
```bash
xcode-select --install
```

### Out of memory errors

The models require 2-4GB of RAM when loaded. Close other applications if you are running low on memory.

### Model file not found

Ensure the model file is in the `./models/` directory with the exact filename listed in `config/models.json`.
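
To compare what is on disk with what the registry expects:

```bash
ls -lh ./models/
cat config/models.json
```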

### Ollama connection refused

Make sure Ollama is running:
```bash
ollama serve
```

And the model is pulled:
```bash
ollama pull qwen2.5:1.5b
```