@harperfast/agent 0.13.7 → 0.13.8-ink
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +8 -8
- package/dist/agent.js +4681 -1482
- package/package.json +10 -1
package/README.md
CHANGED
@@ -74,24 +74,24 @@ By default, `harper-agent` uses OpenAI. You can switch to other models using the
 
 ```bash
 # Use Claude 3.5 Sonnet
-harper-agent --model claude-
+harper-agent --model claude-4-6-opus-latest
 
 # Use Gemini 1.5 Pro
-harper-agent --model gemini-
+harper-agent --model gemini-3-pro
 
 # Use a specific OpenAI model
-harper-agent --model gpt-
+harper-agent --model gpt-5.2
 ```
 
 You can also set the default model via the `HARPER_AGENT_MODEL` environment variable.
 
 ### Compaction Model
 
-By default, `harper-agent` uses `gpt-
+By default, `harper-agent` uses `gpt-5-nano` for session memory compaction. You can switch this to another model using the `--compaction-model` (or `-c`) flag:
 
 ```bash
 # Use a different compaction model
-harper-agent --compaction-model claude-
+harper-agent --compaction-model claude-4-5-haiku-latest
 ```
 
 You can also set the default compaction model via the `HARPER_AGENT_COMPACTION_MODEL` environment variable.
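The README text above gives a flag and an environment variable for the same setting; the natural reading is that an explicit `--model` flag wins over `HARPER_AGENT_MODEL`, which in turn wins over the built-in default. A minimal sketch of that presumed precedence as plain shell parameter expansion (illustrative only — the actual resolution happens inside `harper-agent`, and `flag_model` is a hypothetical name for the parsed flag value):

```shell
# No --model flag on the command line; the env var supplies the default.
flag_model=""
HARPER_AGENT_MODEL="claude-4-6-opus-latest"
model="${flag_model:-${HARPER_AGENT_MODEL:-gpt-5.2}}"
echo "$model"    # claude-4-6-opus-latest

# An explicit flag value takes precedence over the env var.
flag_model="gemini-3-pro"
model="${flag_model:-${HARPER_AGENT_MODEL:-gpt-5.2}}"
echo "$model"    # gemini-3-pro
```

The same expansion applies to `--compaction-model` / `HARPER_AGENT_COMPACTION_MODEL` with `gpt-5-nano` as the fallback.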
@@ -128,21 +128,21 @@ To use local models with [Ollama](https://ollama.com/), use the `ollama-` prefix
 
 ```bash
 # Use Llama 3 via Ollama
-harper-agent --model ollama-
+harper-agent --model ollama-qwen2.5-coder
 ```
 
 If your Ollama instance is running on a custom URL, you can set the `OLLAMA_BASE_URL` environment variable:
 
 ```bash
 export OLLAMA_BASE_URL=http://localhost:11434
-harper-agent --model ollama-
+harper-agent --model ollama-qwen2.5-coder
 ```
 
 ### OpenAI API Key Permissions
 
 If you are using a restricted API key, ensure the following permissions are enabled:
 
-- **Models**: `Write` access for `gpt-5.2` (the main model) and `gpt-
+- **Models**: `Write` access for `gpt-5.2` (the main model) and `gpt-5-nano` (the memory summarizer)
 
 - **Model capabilities**: `Write` (to allow tool calling and completions).
 
 No other permissions (like Assistants, Threads, or Files) are required as `harper-agent` runs its tools locally.