@chuckssmith/agentloom 0.7.0 → 0.9.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -14,9 +14,9 @@ loom setup
 Claude Code is the execution engine. agentloom adds:
 
 - **`$grind`** — persistence loop that keeps working until a task is verified complete
-- **`$crew`** — parallel workers that decompose and execute simultaneously
+- **`$crew`** — parallel workers that decompose and execute simultaneously
 - **`$architect`** — deep analysis mode before major decisions
-- **`loom crew`** — CLI to spawn and monitor a crew from your terminal
+- **`loom crew`** — CLI to spawn, monitor, and collect results from a crew
 
 It does not replace Claude Code. It wraps it.
 
@@ -27,19 +27,24 @@ It does not replace Claude Code. It wraps it.
 ```bash
 npm install -g @chuckssmith/agentloom
 loom setup    # installs $grind, $crew, $architect skills + validates deps
+loom init     # create .loomrc config in current project (optional)
 
-# Spawn workers from your terminal:
+# Spawn workers:
 loom crew "audit every API endpoint for security issues"
+loom crew 3 "refactor the auth module"
 loom crew 2:explore+1:code-reviewer "review the payment flow"
-loom crew --dry-run 3 "migrate the database schema"   # preview before launching
+
+# Useful flags:
+loom crew --dry-run 3 "migrate the schema"   # preview decomposed subtasks first
+loom crew --watch "audit the codebase"       # launch + immediately tail logs
+loom crew --serial 3 "build the pipeline"    # run workers sequentially
 
 # Monitor:
-loom watch      # live tail all worker logs
-loom status     # session overview + stale worker detection
-loom logs w00   # full output for one worker
+loom watch      # live color-coded tail of all worker logs
+loom status     # session overview + per-worker liveness (PID-aware)
 
 # After workers finish:
-loom collect         # synthesize results with Claude
+loom collect         # synthesize results with Claude into summary.md
 loom reset --force   # clear state for next run
 
 # Or use inside any Claude Code session:
@@ -53,46 +58,58 @@ loom reset --force   # clear state for next run
 
 Install with `loom setup`. Use inside any Claude Code session:
 
-| Skill | Trigger | What it does |
-|---|---|---|
-| `$grind` | `$grind "<task>"` | Persistence loop — plans, executes in parallel, verifies. Won't stop until a code-reviewer subagent returns PASS |
-| `$crew` | `$crew "<task>"` | Decomposes task into independent streams, runs workers simultaneously, verifies result |
-| `$architect` | `$architect "<task>"` | Deep analysis — maps the system, finds real problems, recommends approach before you write code |
+| Skill | What it does |
+|---|---|
+| `$grind` | Persistence loop — plans, executes in parallel, verifies. Won't stop until a code-reviewer returns PASS |
+| `$crew` | Decomposes task into independent streams, runs workers simultaneously, verifies result |
+| `$architect` | Deep analysis — maps the system, finds real problems, recommends approach |
 
 ---
 
 ## CLI reference
 
+### Setup
+
+```
+loom init    Create .loomrc in current directory (see Configuration below)
+loom setup   Install skills to ~/.claude/skills/, validate claude + tmux
+```
+
 ### Spawning workers
 
 ```
-loom crew "<task>"                             2 general-purpose workers (default)
-loom crew 3 "<task>"                           3 workers
-loom crew 2:explore "<task>"                   2 explore-type workers
-loom crew 2:explore+1:code-reviewer "<task>"   typed crew
-loom crew --dry-run 3 "<task>"                 preview decomposed subtasks, no launch
+loom crew "<task>"                             Use defaults from .loomrc (or 2 general-purpose)
+loom crew 3 "<task>"                           3 workers
+loom crew 2:explore "<task>"                   2 explore-type workers
+loom crew 2:explore+1:code-reviewer "<task>"   Typed crew
+
+Flags (combinable):
+  --dry-run   Preview AI-decomposed subtasks without launching
+  --watch     Launch then immediately tail all worker logs
+  --serial    Run workers sequentially — each worker reads prior results from context file
 ```
 
 ### Monitoring
 
 ```
-loom watch             Live tail all worker logs with color-coded output
-loom status            Session overview, task counts, stale worker detection
-loom logs              Summary of all workers (status + last line)
+loom watch             Live color-coded tail (auto-exits when all workers done)
+loom status            Session overview, task counts, per-worker liveness
+loom logs              Summary: all workers, status, last log line
 loom logs <workerId>   Full log + result for one worker (e.g. loom logs w00)
 ```
 
 ### After workers finish
 
 ```
-loom collect           Read worker results + synthesize summary with Claude
+loom collect           Read worker results + synthesize with Claude into .claude-team/summary.md
 loom collect --no-ai   Concatenate results without Claude synthesis
 ```
 
 ### Housekeeping
 
 ```
-loom setup             Install skills to ~/.claude/skills/, validate deps
+loom stop              Kill all background workers (SIGTERM)
+loom stop <workerId>   Kill one worker
 loom reset --force     Wipe .claude-team/ state
 ```
 
@@ -100,7 +117,7 @@ loom reset --force     Wipe .claude-team/ state
 
 ## Worker types
 
-Each type gets a role-specific system prompt that shapes its behavior:
+Each type gets a role-specific system prompt. Read-only roles do **not** receive `--dangerously-skip-permissions`.
 
 | Type | Role | Modifies files? |
 |---|---|---|
@@ -112,20 +129,45 @@ Each type gets a role-specific system prompt that shapes its behavior:
 
 ---
 
+## Configuration
+
+Run `loom init` to create a `.loomrc` in your project directory:
+
+```json
+{
+  "workers": 2,
+  "agentType": "general-purpose",
+  "claimTtlMinutes": 30,
+  "staleMinutes": 10
+}
+```
+
+| Key | Default | Description |
+|---|---|---|
+| `workers` | 2 | Default worker count when none specified |
+| `agentType` | `general-purpose` | Default agent type when none specified |
+| `claimTtlMinutes` | 30 | Minutes before a crashed worker's claimed task is re-queued |
+| `staleMinutes` | 10 | Minutes of dead-pid + log silence before worker is flagged STALE |
+
+---
+
 ## State directory
 
 Session state lives in `.claude-team/` (gitignored):
 
 ```
 .claude-team/
-  session.json      Active session metadata
-  context/          Shared context snapshots (workers read + append)
-  tasks/            Task queue — workers claim atomically via file rename
+  session.json      Active session metadata
+  context/          Shared context snapshots (workers read + append)
+  tasks/            Task queue — workers claim atomically via file rename
+                    Stale claimed tasks (>claimTtlMinutes) auto re-queued
   workers/
-    w00.log         Live stdout from worker 00
-    w00-prompt.md   Prompt sent to worker 00
-    w00-result.md   Result summary written by worker 00 on completion
-  summary.md        Final synthesis from loom collect
+    w00.log         Live stdout from worker 00
+    w00.pid         PID of worker 00 process
+    w00-prompt.md   Prompt sent to worker 00
+    w00-result.md   Result summary written by worker 00 on completion
+    w00-run.mjs     Node.js runner script (tmux mode)
+  summary.md        Final synthesis from loom collect
 ```
 
 ---
@@ -134,7 +176,7 @@ Session state lives in `.claude-team/` (gitignored):
 
 - Node.js 20+
 - Claude Code CLI (`claude`) — authenticated
-- tmux (optional — used on Mac/Linux; falls back to background processes on Windows/WSL)
+- tmux (optional — used on Mac/Linux interactive terminals; background processes used on Windows/WSL/CI)
 
 ---
 
package/dist/cli.js CHANGED
@@ -7,15 +7,19 @@ import { collect } from './commands/collect.js';
 import { reset } from './commands/reset.js';
 import { watch } from './commands/watch.js';
 import { stop } from './commands/stop.js';
+import { init } from './commands/init.js';
 const [, , command, ...args] = process.argv;
 const usage = `
 agentloom (loom) — workflow layer for Claude Code
 
 Usage:
+  loom init                         Create .loomrc config in current directory
   loom setup                        Install skills and initialize state dir
   loom crew [N] "<task>"            Spawn N parallel workers on a task
   loom crew 2:explore "<task>"      Spawn typed workers (explore/plan/code-reviewer)
   loom crew --dry-run [N] "<task>"  Preview decomposed subtasks without launching
+  loom crew --serial [N] "<task>"   Run workers sequentially (each sees prior results)
+  loom crew --watch [N] "<task>"    Launch workers and immediately tail logs
   loom watch                        Live tail all worker logs (Ctrl+C to stop)
   loom stop                         Kill all background workers (SIGTERM)
   loom stop <workerId>              Kill one worker
@@ -47,6 +51,9 @@ Examples:
   loom collect
 `;
 switch (command) {
+    case 'init':
+        await init(args);
+        break;
     case 'setup':
         await setup();
         break;
@@ -1,8 +1,11 @@
 import { execSync, spawn, spawnSync } from 'child_process';
 import { writeFile, mkdir, open } from 'fs/promises';
 import { join } from 'path';
+import { existsSync } from 'fs';
 import { parseWorkerSpec, initSession, writeContextSnapshot, decomposeTasks, } from '../team/orchestrator.js';
 import { STATE_DIR } from '../state/session.js';
+import { watch } from './watch.js';
+import { loadConfig } from '../config.js';
 const hasTmux = () => {
     try {
         execSync('tmux -V', { stdio: 'ignore' });
@@ -67,8 +70,11 @@ export async function crew(args) {
         process.exit(1);
     }
     const dryRun = args.includes('--dry-run');
-    const filteredArgs = args.filter(a => a !== '--dry-run');
-    const { specs, task } = parseWorkerSpec(filteredArgs);
+    const serial = args.includes('--serial');
+    const watchAfter = args.includes('--watch');
+    const filteredArgs = args.filter(a => !['--dry-run', '--serial', '--watch'].includes(a));
+    const config = await loadConfig();
+    const { specs, task } = parseWorkerSpec(filteredArgs, config.workers, config.agentType);
     const totalWorkers = specs.reduce((sum, s) => sum + s.count, 0);
     const slug = task.slice(0, 30).toLowerCase().replace(/\s+/g, '-').replace(/[^a-z0-9-]/g, '');
     console.log(`\nagentloom crew`);
@@ -90,25 +96,83 @@ export async function crew(args) {
         return;
     }
     const useTmux = hasTmux() && !isWSL() && process.stdout.isTTY;
-    console.log(`Mode: ${useTmux ? 'tmux' : 'background processes'}\n`);
+    const mode = serial ? 'serial' : useTmux ? 'tmux' : 'background processes';
+    console.log(`Mode: ${mode}\n`);
     const session = await initSession(task, totalWorkers);
     const contextPath = await writeContextSnapshot(slug, task);
     const tasks = await decomposeTasks(task, specs);
     console.log(`Session: ${session.id}`);
     console.log(`Tasks: ${tasks.length} created`);
     console.log(`Context: ${contextPath}\n`);
-    if (useTmux) {
+    if (serial) {
+        await launchSerial(session.id, specs, tasks, contextPath);
+        console.log(`\nAll workers finished. Run: loom collect`);
+    }
+    else if (useTmux) {
         await launchTmux(session.id, specs, tasks, contextPath);
+        console.log(`\nWorkers launched. Monitor with:`);
+        console.log(`  loom status`);
+        console.log(`  loom stop     (kill all workers)`);
     }
     else {
         await launchBackground(session.id, specs, tasks, contextPath);
+        if (watchAfter) {
+            console.log();
+            await watch([]);
+            return;
+        }
+        console.log(`\nWorkers launched. Monitor with:`);
+        console.log(`  loom status`);
+        console.log(`  loom watch`);
+        console.log(`  loom stop     (kill all workers)`);
     }
-    console.log(`\nWorkers launched. Monitor with:`);
-    console.log(`  loom status`);
-    console.log(`  loom watch`);
-    console.log(`  loom stop     (kill all workers)`);
     console.log(`State dir: ${STATE_DIR}/`);
 }
+async function launchSerial(sessionId, specs, tasks, contextPath) {
+    await mkdir(join(STATE_DIR, 'workers'), { recursive: true });
+    let workerIdx = 0;
+    for (const spec of specs) {
+        for (let i = 0; i < spec.count; i++) {
+            const workerId = `w${String(workerIdx).padStart(2, '0')}`;
+            const subtask = tasks[workerIdx]?.description ?? tasks[0]?.description ?? '';
+            const agentType = tasks[workerIdx]?.agentType ?? spec.agentType;
+            workerIdx++;
+            // Each worker receives results from all previous workers via the context file
+            const prompt = buildWorkerPrompt(subtask, contextPath, sessionId, workerId, agentType);
+            const logFile = join(STATE_DIR, 'workers', `${workerId}.log`);
+            await writeFile(join(STATE_DIR, 'workers', `${workerId}-prompt.md`), prompt);
+            console.log(`  → Worker ${workerId} (${agentType}) starting...`);
+            const claudeArgs = [
+                '--print',
+                ...(!READ_ONLY_ROLES.has(agentType) ? ['--dangerously-skip-permissions'] : []),
+                '-p',
+                prompt,
+            ];
+            // Run synchronously — block until this worker is done before starting the next
+            const result = spawnSync('claude', claudeArgs, {
+                encoding: 'utf8',
+                timeout: 30 * 60 * 1000, // 30 min max per worker
+                env: { ...process.env, AGENTLOOM_WORKER_ID: workerId, AGENTLOOM_SESSION: sessionId },
+            });
+            const output = (result.stdout ?? '') + (result.stderr ?? '');
+            await writeFile(logFile, output);
+            if (result.status !== 0) {
+                const resultFile = join(STATE_DIR, 'workers', `${workerId}-result.md`);
+                await writeFile(resultFile, `# Error\n\nWorker exited with code ${result.status ?? 'unknown'}\n\n${output.slice(-500)}`);
+                console.log(`  ✗ Worker ${workerId} failed (exit ${result.status ?? '?'})`);
+            }
+            else {
+                // If worker didn't write its own result file, write a placeholder
+                const resultFile = join(STATE_DIR, 'workers', `${workerId}-result.md`);
+                if (!existsSync(resultFile)) {
+                    const lastLines = output.trim().split('\n').slice(-20).join('\n');
+                    await writeFile(resultFile, `# Result\n\n${lastLines}`);
+                }
+                console.log(`  ✓ Worker ${workerId} done`);
+            }
+        }
+    }
+}
 async function launchBackground(sessionId, specs, tasks, contextPath) {
     await mkdir(join(STATE_DIR, 'workers'), { recursive: true });
     let workerIdx = 0;
@@ -0,0 +1 @@
+export declare function init(_args: string[]): Promise<void>;
@@ -0,0 +1,4 @@
+import { initConfig } from '../config.js';
+export async function init(_args) {
+    await initConfig();
+}
@@ -1,8 +1,17 @@
 import { readSession, readTasks, STATE_DIR } from '../state/session.js';
 import { existsSync, statSync } from 'fs';
 import { join } from 'path';
-import { readdir } from 'fs/promises';
-const STALE_THRESHOLD_MS = 10 * 60 * 1000; // 10 minutes with no log growth = stale
+import { readdir, readFile } from 'fs/promises';
+import { loadConfig } from '../config.js';
+function isProcessAlive(pid) {
+    try {
+        process.kill(pid, 0);
+        return true;
+    }
+    catch {
+        return false;
+    }
+}
 export async function status() {
     if (!existsSync(STATE_DIR)) {
         console.log('No active session. Run: loom crew "<task>"');
@@ -31,6 +40,8 @@ export async function status() {
     const logFiles = files.filter(f => f.endsWith('.log')).sort();
     if (logFiles.length === 0)
         return;
+    const config = await loadConfig();
+    const staleThresholdMs = config.staleMinutes * 60 * 1000;
     console.log(`\nWorkers: ${logFiles.length}`);
     const now = Date.now();
     for (const logFile of logFiles) {
@@ -42,21 +53,34 @@ export async function status() {
            console.log(`  [${workerId}] done ✓`);
            continue;
        }
-        // Check if log is growing (worker is alive) or stale
+        // Check PID liveness first — a quiet log doesn't mean a dead worker
+        const pidPath = join(workersDir, `${workerId}.pid`);
+        let pidAlive = false;
+        if (existsSync(pidPath)) {
+            const pid = parseInt(await readFile(pidPath, 'utf8').catch(() => ''), 10);
+            if (!isNaN(pid))
+                pidAlive = isProcessAlive(pid);
+        }
        const logStat = statSync(logPath);
        const msSinceWrite = now - logStat.mtimeMs;
-        const isStale = msSinceWrite > STALE_THRESHOLD_MS;
        const logSize = logStat.size;
-        if (logSize === 0) {
+        if (logSize === 0 && pidAlive) {
+            console.log(`  [${workerId}] starting... (pid alive)`);
+        }
+        else if (logSize === 0) {
            console.log(`  [${workerId}] starting...`);
        }
-        else if (isStale) {
+        else if (pidAlive) {
+            const secs = Math.round(msSinceWrite / 1000);
+            console.log(`  [${workerId}] running (pid alive, last log ${secs}s ago)`);
+        }
+        else if (msSinceWrite > staleThresholdMs) {
            const mins = Math.round(msSinceWrite / 60000);
-            console.log(`  [${workerId}] STALE — no activity for ${mins}m (log: ${logPath})`);
+            console.log(`  [${workerId}] STALE — pid dead, no log activity for ${mins}m`);
        }
        else {
            const secs = Math.round(msSinceWrite / 1000);
-            console.log(`  [${workerId}] running (last activity ${secs}s ago)`);
+            console.log(`  [${workerId}] stopped? — pid dead, last log ${secs}s ago`);
        }
    }
    const allDone = logFiles.every(f => existsSync(join(workersDir, f.replace('.log', '-result.md'))));
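The `loom status` rewrite above keys liveness off each worker's recorded PID using a zero-signal probe. A minimal standalone sketch of that check — `isProcessAlive` mirrors the helper in the diff; probing the current process's own PID is for illustration only:

```javascript
// Signal 0 performs the existence/permission check without actually
// delivering a signal; ESRCH (no such process) lands in the catch.
function isProcessAlive(pid) {
  try {
    process.kill(pid, 0);
    return true;
  } catch {
    return false;
  }
}

console.log(isProcessAlive(process.pid)); // the running process is always alive
console.log(isProcessAlive(999999999));   // far above any real PID on Linux/macOS
```

This is why a quiet log no longer flags a worker as STALE: a live PID with an old log mtime is reported as "running", and only a dead PID plus prolonged log silence crosses the `staleMinutes` threshold.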
@@ -0,0 +1,10 @@
+export declare const LOOMRC = ".loomrc";
+export type LoomConfig = {
+    workers?: number;
+    agentType?: string;
+    claimTtlMinutes?: number;
+    staleMinutes?: number;
+    dangerouslySkipPermissions?: boolean;
+};
+export declare function loadConfig(): Promise<Required<LoomConfig>>;
+export declare function initConfig(): Promise<void>;
package/dist/config.js ADDED
@@ -0,0 +1,44 @@
+import { readFile, writeFile } from 'fs/promises';
+import { existsSync } from 'fs';
+export const LOOMRC = '.loomrc';
+const DEFAULTS = {
+    workers: 2,
+    agentType: 'general-purpose',
+    claimTtlMinutes: 30,
+    staleMinutes: 10,
+    dangerouslySkipPermissions: false,
+};
+export async function loadConfig() {
+    if (!existsSync(LOOMRC))
+        return { ...DEFAULTS };
+    try {
+        const raw = await readFile(LOOMRC, 'utf8');
+        const parsed = JSON.parse(raw);
+        if (typeof parsed !== 'object' || parsed === null)
+            return { ...DEFAULTS };
+        return { ...DEFAULTS, ...parsed };
+    }
+    catch {
+        console.error(`[agentloom] Warning: could not parse ${LOOMRC} — using defaults`);
+        return { ...DEFAULTS };
+    }
+}
+export async function initConfig() {
+    if (existsSync(LOOMRC)) {
+        console.log(`${LOOMRC} already exists.`);
+        return;
+    }
+    const config = {
+        workers: 2,
+        agentType: 'general-purpose',
+        claimTtlMinutes: 30,
+        staleMinutes: 10,
+    };
+    await writeFile(LOOMRC, JSON.stringify(config, null, 2) + '\n');
+    console.log(`Created ${LOOMRC}`);
+    console.log(`\nOptions:`);
+    console.log(`  workers           Default number of workers (default: 2)`);
+    console.log(`  agentType         Default agent type (default: general-purpose)`);
+    console.log(`  claimTtlMinutes   Minutes before crashed worker's task is re-queued (default: 30)`);
+    console.log(`  staleMinutes      Minutes before dead-pid worker is flagged STALE (default: 10)`);
+}
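The new `loadConfig` above resolves options by shallow-merging `.loomrc` over built-in defaults, with unparseable files falling back to defaults. A self-contained sketch of that precedence — `mergeConfig` is an illustrative name, not an export of the package:

```javascript
// Shallow-merge precedence as in dist/config.js: keys present in .loomrc
// override DEFAULTS; bad JSON or non-object values yield pure defaults.
const DEFAULTS = {
  workers: 2,
  agentType: 'general-purpose',
  claimTtlMinutes: 30,
  staleMinutes: 10,
};

function mergeConfig(raw) {
  try {
    const parsed = JSON.parse(raw);
    if (typeof parsed !== 'object' || parsed === null) return { ...DEFAULTS };
    return { ...DEFAULTS, ...parsed };
  } catch {
    return { ...DEFAULTS }; // the real module also prints a warning here
  }
}

console.log(mergeConfig('{"workers": 4}').workers);      // → 4 (from file)
console.log(mergeConfig('{"workers": 4}').staleMinutes); // → 10 (default kept)
```

Note the merge is shallow, so a partial `.loomrc` that sets only `workers` still inherits every other default.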
@@ -3,7 +3,7 @@ export type WorkerSpec = {
     count: number;
     agentType: string;
 };
-export declare function parseWorkerSpec(args: string[]): {
+export declare function parseWorkerSpec(args: string[], defaultWorkers?: number, defaultAgentType?: string): {
     specs: WorkerSpec[];
     task: string;
 };
@@ -3,20 +3,15 @@ import { join } from 'path';
 import { randomUUID } from 'crypto';
 import { spawnSync } from 'child_process';
 import { STATE_DIR, ensureStateDir, writeSession, writeTask } from '../state/session.js';
-export function parseWorkerSpec(args) {
-    // Formats:
-    //   omc team "task description"
-    //   omc team 3 "task description"
-    //   omc team 2:explore "task description"
-    //   omc team 2:explore+1:code-reviewer "task description"
+export function parseWorkerSpec(args, defaultWorkers = 2, defaultAgentType = 'general-purpose') {
     const task = args[args.length - 1] ?? '';
     const specArg = args.length > 1 ? args[0] ?? '' : '';
     if (!specArg) {
-        return { specs: [{ count: 2, agentType: 'general-purpose' }], task };
+        return { specs: [{ count: defaultWorkers, agentType: defaultAgentType }], task };
     }
     // Plain number: "3"
     if (/^\d+$/.test(specArg)) {
-        return { specs: [{ count: parseInt(specArg), agentType: 'general-purpose' }], task };
+        return { specs: [{ count: parseInt(specArg), agentType: defaultAgentType }], task };
     }
     // Typed specs: "2:explore+1:code-reviewer"
     const parts = specArg.split('+');
@@ -24,7 +19,7 @@ export function parseWorkerSpec(args) {
         const [countStr, agentType] = part.split(':');
         return {
             count: parseInt(countStr ?? '1'),
-            agentType: agentType ?? 'general-purpose',
+            agentType: agentType ?? defaultAgentType,
         };
     });
     return { specs, task };
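The typed-spec grammar handled above (`2:explore+1:code-reviewer`) can be exercised in isolation. This sketch reimplements just the typed branch for illustration — `parseSpec` is a hypothetical stand-in, not the shipped `parseWorkerSpec`:

```javascript
// "N:type+M:type" → array of { count, agentType }, as in the typed branch
// of parseWorkerSpec. A missing :type falls back to the default agent type.
function parseSpec(specArg, defaultAgentType = 'general-purpose') {
  return specArg.split('+').map(part => {
    const [countStr, agentType] = part.split(':');
    return {
      count: parseInt(countStr ?? '1', 10),
      agentType: agentType ?? defaultAgentType,
    };
  });
}

console.log(parseSpec('2:explore+1:code-reviewer'));
// → [ { count: 2, agentType: 'explore' }, { count: 1, agentType: 'code-reviewer' } ]
```

The `defaultAgentType` parameter is what the 0.9.0 change threads through from `.loomrc`, so `loom crew 3 "<task>"` in a project whose config sets `"agentType": "explore"` would spawn three explore workers.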
@@ -1,11 +1,13 @@
 import { readdir, readFile, rename, writeFile, stat, unlink } from 'fs/promises';
 import { join } from 'path';
 import { STATE_DIR } from '../state/session.js';
+import { loadConfig } from '../config.js';
 const TASKS_DIR = join(STATE_DIR, 'tasks');
-const CLAIM_TTL_MS = 30 * 60 * 1000; // 30 minutes
 // Recover tasks whose worker crashed before completing.
 // Finds -claimed- files older than CLAIM_TTL_MS and re-queues them as -pending.
 export async function recoverStaleClaims() {
+    const config = await loadConfig();
+    const claimTtlMs = config.claimTtlMinutes * 60 * 1000;
     let recovered = 0;
     let files;
     try {
@@ -20,7 +22,7 @@ export async function recoverStaleClaims() {
         const filePath = join(TASKS_DIR, file);
         try {
             const { mtimeMs } = await stat(filePath);
-            if (now - mtimeMs < CLAIM_TTL_MS)
+            if (now - mtimeMs < claimTtlMs)
                 continue;
             const taskId = file.split('-claimed-')[0];
             if (!taskId)
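`recoverStaleClaims` now derives its TTL from `claimTtlMinutes` in `.loomrc` instead of a hard-coded constant. The staleness predicate it applies to each `-claimed-` file's mtime is simple enough to sketch standalone — `isClaimStale` is an illustrative helper, not part of the package:

```javascript
// A claimed task file whose mtime is at least claimTtlMinutes old is
// considered abandoned (its worker likely crashed) and gets re-queued.
function isClaimStale(mtimeMs, nowMs, claimTtlMinutes) {
  const claimTtlMs = claimTtlMinutes * 60 * 1000;
  return nowMs - mtimeMs >= claimTtlMs;
}

const now = Date.now();
console.log(isClaimStale(now - 31 * 60 * 1000, now, 30)); // → true  (31 min old claim)
console.log(isClaimStale(now - 5 * 60 * 1000, now, 30));  // → false (5 min old claim)
```

Together with the atomic claim-by-rename described in the README's state-directory section, this gives crash recovery without any lock files: a re-queue is just renaming the stale `-claimed-` file back to `-pending`.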
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@chuckssmith/agentloom",
-  "version": "0.7.0",
+  "version": "0.9.0",
   "description": "A workflow layer for Claude Code — reusable roles, persistence loops, and multi-agent crew coordination",
   "keywords": [
     "ai",