hoomanjs 1.17.2 → 1.17.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -28,10 +28,13 @@ It gives you a practical toolkit to build and run agent workflows:
 
 - Multiple LLM providers: `ollama`, `openai`, `anthropic`, `google`, `bedrock`, `groq`, `moonshot`, `xai`
 - Local configuration under `./.hooman` when that folder exists in the current working directory, otherwise `~/.hooman`
+- Optional web search tool with provider selection (`brave` or `tavily`)
 - MCP server support via `stdio`, `streamable-http`, and `sse`
 - MCP server `instructions` support: server-provided instructions are appended to the agent system prompt
 - MCP channel notification support through `hooman daemon --channels`
 - Skill discovery / install / removal through the integrated configure flow
+- Bundled prompt harness toggles (`behaviour`, `communication`, `execution`, `engineering`, `guardrails`)
+- Built-in subagent runner tools (`research`, `plan`) with configurable concurrency
 - Toolkit-oriented architecture with configurable tools, prompts, memory, and transports
 - Interactive terminal UI for chat and configuration
 
@@ -46,15 +49,21 @@ It gives you a practical toolkit to build and run agent workflows:
 Fastest way to get started without cloning the repo:
 
 ```bash
-npx hoomanjs configure
-npx hoomanjs chat
+bunx hoomanjs configure
+bunx hoomanjs chat
+
+# or install globally
+bun i -g hoomanjs
 ```
 
-Or with Bun:
+Or with npm:
 
 ```bash
-bunx hoomanjs configure
-bunx hoomanjs chat
+npx hoomanjs configure
+npx hoomanjs chat
+
+# or install globally
+npm i -g hoomanjs
 ```
 
 Recommended first run:
@@ -63,6 +72,23 @@ Recommended first run:
 2. Start chatting with `hooman chat`.
 3. Use `hooman exec "your prompt"` for one-off tasks.
 
+## Must have
+
+For the best experience, set up both:
+
+1. **MCP servers** for on-demand tools in `chat` / `exec` (task APIs, messaging, schedulers, etc.).
+2. **MCP channels** for event-driven automation with `hooman daemon --channels` (notifications become agent prompts).
+
+Suggested MCP servers from this ecosystem:
+
+- [`cronmcp`](https://github.com/vaibhavpandeyvpz/cronmcp) - lets Hooman schedule recurring prompts and automations, so routine checks and follow-ups run on time.
+- [`jiraxmcp`](https://github.com/vaibhavpandeyvpz/jiraxmcp) - gives Hooman direct Jira Cloud access to search issues, update tickets, and help drive sprint workflows.
+- [`slackxmcp`](https://github.com/vaibhavpandeyvpz/slackxmcp) - connects Hooman to Slack so it can read channel context, draft updates, and post actions where your team already works.
+- [`tgfmcp`](https://github.com/vaibhavpandeyvpz/tgfmcp) - enables Telegram bot workflows, making it easy to route notifications and respond from agent-driven chats.
+- [`wappmcp`](https://github.com/vaibhavpandeyvpz/wappmcp) - brings WhatsApp Web messaging into Hooman for customer or team communication automations.
+
+For production deployments, still review permissions and use least-privilege credentials/tokens for each integration.
+
 ## Install
 
 ```bash
@@ -158,16 +184,28 @@ hooman daemon --channels --yolo
 
 ### Feature Flags
 
-Runtime tools and prompt sections are controlled from `config.json` under `tools`:
-
+Runtime tool and prompt switches are controlled from `config.json`:
+
+- `search.enabled`
+- `search.provider` (`brave` or `tavily`)
+- `search.brave.apiKey`
+- `search.tavily.apiKey`
+- `prompts.behaviour`
+- `prompts.communication`
+- `prompts.execution`
+- `prompts.engineering`
+- `prompts.guardrails`
 - `tools.todo.enabled`
 - `tools.fetch.enabled`
 - `tools.filesystem.enabled`
 - `tools.shell.enabled`
+- `tools.sleep.enabled`
 - `tools.ltm.enabled`
 - `tools.wiki.enabled`
 - `tools.mcp.enabled` (enables MCP management tools + prefixed MCP server tools/instructions)
 - `tools.skills.enabled` (enables skills management tools + skills prompt sections)
+- `tools.agents.enabled` (enables built-in `run_agents` tool)
+- `tools.agents.concurrency`
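`tools.agents.concurrency` caps how many subagent runs execute at once. A rough sketch of that pattern (illustrative only, not the package's implementation):

```typescript
// Run async tasks with a fixed concurrency cap, the pattern a setting
// like `tools.agents.concurrency` implies. Illustrative only.
async function runWithConcurrency<T>(
  tasks: Array<() => Promise<T>>,
  limit: number,
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  const worker = async (): Promise<void> => {
    while (next < tasks.length) {
      const i = next++; // single-threaded event loop: no race on the counter
      results[i] = await tasks[i]();
    }
  };
  await Promise.all(
    Array.from({ length: Math.max(1, Math.min(limit, tasks.length)) }, worker),
  );
  return results;
}
```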
 
 Both `ltm` and `wiki` include dedicated Chroma settings under:
 
@@ -185,6 +223,8 @@ hooman configure
 The configure UI currently lets you:
 
 - edit app configuration values
+- choose search provider and set its API key
+- toggle bundled harness prompts (`behaviour`, `communication`, `execution`, `engineering`, `guardrails`)
 - edit `instructions.md` in your `$VISUAL` / `$EDITOR` (cross-platform fallback included)
 - add, edit, and delete MCP servers with confirmation
 - search, install, refresh, and remove skills
@@ -224,7 +264,7 @@ Important files and folders:
 
 ## Example `config.json`
 
-This is the shape managed by `hooman configure`:
+This is the config shape loaded by Hooman:
 
 ```json
 {
@@ -234,6 +274,19 @@ This is the shape managed by `hooman configure`:
     "model": "gemma4:e4b",
     "params": {}
   },
+  "search": {
+    "enabled": false,
+    "provider": "brave",
+    "brave": {},
+    "tavily": {}
+  },
+  "prompts": {
+    "behaviour": true,
+    "communication": true,
+    "execution": true,
+    "engineering": true,
+    "guardrails": true
+  },
   "tools": {
     "todo": {
       "enabled": true
@@ -247,6 +300,9 @@ This is the shape managed by `hooman configure`:
     "shell": {
       "enabled": true
     },
+    "sleep": {
+      "enabled": true
+    },
     "ltm": {
       "enabled": false,
       "chroma": {
@@ -270,6 +326,10 @@ This is the shape managed by `hooman configure`:
     },
     "skills": {
       "enabled": false
+    },
+    "agents": {
+      "enabled": true,
+      "concurrency": 3
     }
   },
   "compaction": {
@@ -292,6 +352,11 @@ Supported `llm.provider` values:
 - `moonshot`
 - `xai`
 
+Supported `search.provider` values:
+
+- `brave`
+- `tavily`
+
 ## Provider Notes
 
 ### Ollama
@@ -357,6 +422,29 @@ Uses Strands `GoogleModel` on top of `@google/genai`. Top-level options like `ap
 
 Supports `region`, `clientConfig`, and optional `apiKey`, with all other values forwarded as Bedrock model options.
 
+```json
+{
+  "provider": "bedrock",
+  "model": "anthropic.claude-sonnet-4-20250514-v1:0",
+  "params": {
+    "region": "us-east-1",
+    "clientConfig": {
+      "profile": "dev",
+      "maxAttempts": 3,
+      "credentials": {
+        "accessKeyId": "AKIA...",
+        "secretAccessKey": "...",
+        "sessionToken": "..."
+      }
+    },
+    "temperature": 0.7,
+    "maxTokens": 1024
+  }
+}
+```
+
+You can also rely on the AWS default credential chain (recommended) by setting environment variables such as `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and optionally `AWS_SESSION_TOKEN`.
+
 ### Groq
 
 Uses the Vercel AI SDK Groq provider (`@ai-sdk/groq`) on top of Strands `VercelModel`. Provider-specific settings `apiKey`, `baseURL`, and `headers` are picked up; other values are forwarded into the model config (`temperature`, `maxTokens`, etc.). Defaults to `GROQ_API_KEY` from the environment when no `apiKey` is supplied.
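Following the shape of the Bedrock example above, a Groq `llm` block might look like this (a sketch: the model name is illustrative, and per the note above only `apiKey`, `baseURL`, and `headers` are treated as provider-specific):

```json
{
  "provider": "groq",
  "model": "llama-3.3-70b-versatile",
  "params": {
    "apiKey": "gsk_...",
    "temperature": 0.7,
    "maxTokens": 1024
  }
}
```

Omitting `apiKey` here falls back to `GROQ_API_KEY` from the environment, as noted above.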
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "hoomanjs",
-  "version": "1.17.2",
+  "version": "1.17.4",
   "description": "Hackable Bun-powered AI agent toolkit for building local CLI, ACP, MCP, and channel-driven workflows.",
   "author": {
     "name": "Vaibhav Pandey",
@@ -33,7 +33,7 @@ const DEFAULT_PROMPTS = {
   behaviour: true,
   communication: true,
   execution: true,
-  engineering: false,
+  engineering: true,
   guardrails: true,
 } as const;
 
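The defaults above gate the bundled prompt sections; a hedged sketch of how `config.json` `prompts` overrides might resolve against them (the `resolvePrompts` helper is hypothetical, not the package's code):

```typescript
// Bundled defaults, mirroring the DEFAULT_PROMPTS object in the diff above.
const DEFAULT_PROMPTS = {
  behaviour: true,
  communication: true,
  execution: true,
  engineering: true,
  guardrails: true,
};

type PromptFlags = typeof DEFAULT_PROMPTS;

// Hypothetical helper: values from config.json `prompts` override defaults.
function resolvePrompts(user: Partial<PromptFlags> = {}): PromptFlags {
  return { ...DEFAULT_PROMPTS, ...user };
}
```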
@@ -1,6 +1,6 @@
-## Engineering Judgment
+## Coding / Software Engineering
 
-Use senior engineering judgment, but let the repository guide the solution. Prefer local patterns over invented architecture.
+Handle coding tasks like a senior software engineer, but let the project guide the solution. Prefer local patterns over invented architecture.
 
 ### Code Changes
 
@@ -27,12 +27,12 @@ Use senior engineering judgment, but let the repository guide the solution. Pref
 - Prefer structured parsers and APIs for structured data instead of ad hoc string manipulation.
 - Treat generated files, lockfiles, migrations, and configuration as shared contracts. Update them only when the task requires it.
 - Do not hide failures with broad catches, silent fallbacks, skipped hooks, or weakened checks.
-- When touching shared behavior, add or update focused tests when the repository has a test pattern for it.
+- When touching shared behavior, add or update focused tests when the project has a test pattern for it.
 - Avoid time estimates. Focus on what needs to happen and what is done.
 - If an approach fails, diagnose the failure before switching tactics. Do not blindly retry the same step.
 - Escalate with a focused user question only after investigation when safe progress is blocked.
 
-### Repository Hygiene
+### Project Hygiene
 
 - Work with the current working tree. Do not revert user changes unless explicitly asked.
 - If unexpected changes affect the task, inspect them and adapt. Ask only when they make safe progress impossible.