get-tbd 0.1.22 → 0.1.24
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +17 -19
- package/dist/bin.mjs +207 -49
- package/dist/bin.mjs.map +1 -1
- package/dist/cli.mjs +61 -46
- package/dist/cli.mjs.map +1 -1
- package/dist/docs/README.md +17 -19
- package/dist/docs/guidelines/bun-monorepo-patterns.md +816 -80
- package/dist/docs/guidelines/pnpm-monorepo-patterns.md +586 -16
- package/dist/docs/guidelines/python-cli-patterns.md +2 -2
- package/dist/docs/guidelines/typescript-cli-tool-rules.md +465 -196
- package/dist/docs/tbd-design.md +4 -17
- package/dist/docs/tbd-docs.md +0 -6
- package/dist/docs/templates/research-brief.md +46 -5
- package/dist/{id-mapping-JGow6Jk4.mjs → id-mapping-DjVJIO4M.mjs} +150 -7
- package/dist/id-mapping-DjVJIO4M.mjs.map +1 -0
- package/dist/{id-mapping-0-R0X8zb.mjs → id-mapping-LjnDSEhN.mjs} +2 -2
- package/dist/index.mjs +1 -1
- package/dist/{src-YXybDjVR.mjs → src-BrM6xcdG.mjs} +2 -2
- package/dist/{src-YXybDjVR.mjs.map → src-BrM6xcdG.mjs.map} +1 -1
- package/dist/tbd +207 -49
- package/package.json +4 -4
- package/dist/id-mapping-JGow6Jk4.mjs.map +0 -1
package/README.md
CHANGED
@@ -138,20 +138,20 @@ status or context or knowledge and know what to do next:
 
 | What you say | What happens | What runs |
 | --- | --- | --- |
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+| “Let’s plan a new feature that …” | Agent creates a spec from a template | [`tbd shortcut new-plan-spec`](packages/tbd/docs/shortcuts/standard/new-plan-spec.md) |
+| “Break this spec into beads” | Agent creates implementation beads from the spec | [`tbd shortcut plan-implementation-with-beads`](packages/tbd/docs/shortcuts/standard/plan-implementation-with-beads.md) |
+| “Implement these beads” | Agent works through beads systematically | [`tbd shortcut implement-beads`](packages/tbd/docs/shortcuts/standard/implement-beads.md) |
+| “Create a bead for the bug where …” | Agent creates and tracks a bead | `tbd create "..." --type=bug` |
+| “Let’s work on current beads” | Agent finds ready beads and starts working | `tbd ready` |
+| “Review this code” | Agent performs comprehensive code review with all guidelines | [`tbd shortcut review-code`](packages/tbd/docs/shortcuts/standard/review-code.md) |
+| “Review this PR” | Agent reviews a GitHub pull request and can comment/fix | [`tbd shortcut review-github-pr`](packages/tbd/docs/shortcuts/standard/review-github-pr.md) |
+| “Use the shortcut to commit” | Agent runs full pre-commit checks, code review, and commits | [`tbd shortcut code-review-and-commit`](packages/tbd/docs/shortcuts/standard/code-review-and-commit.md) |
+| “Create a PR” | Agent creates or updates the pull request | [`tbd shortcut create-or-update-pr-simple`](packages/tbd/docs/shortcuts/standard/create-or-update-pr-simple.md) |
+| “Let’s create a research brief on …” | Agent creates a research document using a template | [`tbd shortcut new-research-brief`](packages/tbd/docs/shortcuts/standard/new-research-brief.md) |
+| “How could we test this better?” | Agent loads TDD and testing guidelines | [`tbd guidelines general-tdd-guidelines`](packages/tbd/docs/guidelines/general-tdd-guidelines.md) |
+| “How can we make this a well-designed TypeScript CLI?” | Agent loads TypeScript CLI guidelines | [`tbd guidelines typescript-cli-tool-rules`](packages/tbd/docs/guidelines/typescript-cli-tool-rules.md) |
+| “Can you review if this TypeScript package setup follows best practices” | Agent loads monorepo patterns | [`tbd guidelines pnpm-monorepo-patterns`](packages/tbd/docs/guidelines/pnpm-monorepo-patterns.md) |
+| “How can we do a better job of testing?” | Agent loads golden testing guidelines | [`tbd guidelines golden-testing-guidelines`](packages/tbd/docs/guidelines/golden-testing-guidelines.md) |
 
 Under the hood, your agent runs these `tbd` commands automatically.
 You just talk naturally.
@@ -165,8 +165,8 @@ You just talk naturally.
 
 - **Git-native:** Beads live in your repo, synced to a separate, dedicated `tbd-sync`
 branch. Your code history stays clean—no bead churn polluting your logs.
-- **Agent friendly:** JSON output,
-
+- **Agent friendly:** JSON output, simple commands that agents understand.
+Installs itself as a skill in Claude Code.
 - **Markdown + YAML frontmatter:** One file per bead, human-readable and editable.
 This eliminates most merge conflicts.
 - **Beads alternative:** Largely compatible with `bd` at the CLI level, but with a
@@ -472,8 +472,6 @@ Every command supports these flags for automation:
 | Flag | Purpose |
 | --- | --- |
 | `--json` | Machine-parseable output |
-| `--non-interactive` | Fail if input required |
-| `--yes` | Auto-confirm prompts |
 | `--dry-run` | Preview changes |
 | `--quiet` | Minimal output |
 
@@ -533,7 +531,7 @@ It does *not* aim to solve real-time multi-agent coordination, which is a separate
 problem requiring sub-second messaging and atomic claims.
 Tools like [Agent Mail](https://github.com/Dicklesworthstone/mcp_agent_mail) and
 [Gas Town](https://github.com/steveyegge/gastown) address that space and are
-complementary to `tbd`—you could layer real-time coordination on top of `tbd
+complementary to `tbd`—you could layer real-time coordination on top of `tbd`’s durable
 tracking. See the [design doc](packages/tbd/docs/tbd-design.md) for a detailed
 comparison.
 
package/dist/bin.mjs
CHANGED
@@ -8,7 +8,7 @@ import process$1 from "node:process";
 import matter from "gray-matter";
 import os, { homedir } from "node:os";
 import tty from "node:tty";
-import { access, chmod, cp, mkdir, readFile, readdir, rename, rm, stat, unlink } from "node:fs/promises";
+import { access, chmod, cp, mkdir, readFile, readdir, rename, rm, rmdir, stat, unlink } from "node:fs/promises";
 import { Readable } from "node:stream";
 import { promisify } from "node:util";
 import crypto, { randomBytes } from "node:crypto";
@@ -14033,7 +14033,7 @@ function serializeIssue(issue) {
 * Package version, derived from git at build time.
 * Format: X.Y.Z for releases, X.Y.Z-dev.N.hash for dev builds.
 */
-const VERSION$1 = "0.1.
+const VERSION$1 = "0.1.24";
 
 //#endregion
 //#region src/cli/lib/version.ts
@@ -96563,15 +96563,12 @@ function sanitizeTab(tab, fallbackTab) {
 */
 function getCommandContext(command) {
 const opts = command.optsWithGlobals();
-const isCI = Boolean(process.env.CI);
 return {
 dryRun: opts.dryRun ?? false,
 verbose: opts.verbose ?? false,
 quiet: opts.quiet ?? false,
 json: opts.json ?? false,
 color: opts.color ?? "auto",
-nonInteractive: opts.nonInteractive ?? (!process.stdin.isTTY || isCI),
-yes: opts.yes ?? false,
 sync: opts.sync !== false,
 debug: opts.debug ?? false
 };
@@ -99276,7 +99273,11 @@ async function migrateDataToWorktree(baseDir, removeSource = false) {
 await mkdir(correctIssuesPath, { recursive: true });
 await mkdir(correctMappingsPath, { recursive: true });
 for (const file of issueFiles) await cp(join(wrongIssuesPath, file), join(correctIssuesPath, file));
-for (const file of mappingFiles)
+for (const file of mappingFiles) if (file === "ids.yml") {
+const { loadIdMapping, mergeIdMappings, saveIdMapping } = await Promise.resolve().then(() => id_mapping_exports);
+const sourceMapping = await loadIdMapping(wrongPath);
+await saveIdMapping(correctPath, mergeIdMappings(await loadIdMapping(correctPath), sourceMapping));
+} else await cp(join(wrongMappingsPath, file), join(correctMappingsPath, file));
 const totalFiles = issueFiles.length + mappingFiles.length;
 await git("-C", worktreePath, "add", "-A");
 if (await git("-C", worktreePath, "diff", "--cached", "--quiet").then(() => false).catch(() => true)) await git("-C", worktreePath, "commit", "--no-verify", "-m", `tbd: migrate ${totalFiles} file(s) from incorrect location`);
@@ -99777,6 +99778,110 @@ async function listIssues(baseDir) {
 return issues;
 }
 
+//#endregion
+//#region src/utils/lockfile.ts
+/**
+* Directory-based mutual exclusion for concurrent file access.
+*
+* Note: Despite the name "lockfile", this is NOT a POSIX file lock (flock/fcntl).
+* It uses mkdir to create a lock *directory* as a coordination convention — no
+* OS-level file locking syscalls are involved. This makes it portable across all
+* filesystems, including NFS and other network mounts where flock/fcntl locks
+* are unreliable or unsupported.
+*
+* This is the same strategy used by:
+*
+* - **Git** for ref updates (e.g., `.git/refs/heads/main.lock`)
+* See: https://git-scm.com/docs/gitrepository-layout ("lockfile protocol")
+* - **npm** for package-lock.json concurrent access
+*
+* ## Why mkdir?
+*
+* `mkdir(2)` is atomic on all common filesystems (local and network): it either
+* creates the directory or returns EEXIST. Unlike `open(O_CREAT|O_EXCL)`,
+* a directory lock is trivially distinguishable from normal files.
+*
+* Node.js `fs.mkdir` maps directly to the mkdir(2) syscall, preserving
+* the atomicity guarantee:
+* https://nodejs.org/api/fs.html#fsmkdirpath-options-callback
+*
+* ## Lock lifecycle
+*
+* 1. **Acquire**: `mkdir(lockDir)` — fails with EEXIST if held by another process
+* 2. **Hold**: Execute the critical section
+* 3. **Release**: `rmdir(lockDir)` — in a finally block
+* 4. **Stale detection**: If lock mtime exceeds a threshold, assume the holder
+* crashed and break the lock. This is a heuristic — safe when the critical
+* section is short-lived (sub-second for file I/O).
+*
+* ## Degraded mode
+*
+* If the lock cannot be acquired within the timeout (e.g., due to a stuck
+* lockfile that isn't old enough to break), the critical section runs anyway.
+* Callers should design their critical sections to be safe without the lock
+* (e.g., using read-merge-write for append-only data).
+*/
+const DEFAULT_TIMEOUT_MS = 2e3;
+const DEFAULT_POLL_MS = 50;
+const DEFAULT_STALE_MS = 5e3;
+/**
+* Execute `fn` while holding a lockfile.
+*
+* The lock is a directory at `lockPath` (typically `<target-file>.lock`).
+* Concurrent callers will wait up to `timeoutMs` for the lock, polling
+* every `pollMs`. Stale locks older than `staleMs` are broken automatically.
+*
+* If the lock cannot be acquired, `fn` is still executed (degraded mode).
+* This ensures a stuck lockfile never permanently blocks the CLI.
+*
+* @param lockPath - Path to use as the lock directory (e.g., "/path/to/ids.yml.lock")
+* @param fn - Critical section to execute under the lock
+* @param options - Timing parameters for lock acquisition
+* @returns The return value of `fn`
+*
+* @example
+* ```ts
+* await withLockfile('/path/to/ids.yml.lock', async () => {
+* const data = await readFile('/path/to/ids.yml', 'utf-8');
+* const updated = mergeEntries(data, newEntries);
+* await writeFile('/path/to/ids.yml', updated);
+* });
+* ```
+*/
+async function withLockfile(lockPath, fn, options) {
+const timeoutMs = options?.timeoutMs ?? DEFAULT_TIMEOUT_MS;
+const pollMs = options?.pollMs ?? DEFAULT_POLL_MS;
+const staleMs = options?.staleMs ?? DEFAULT_STALE_MS;
+const deadline = Date.now() + timeoutMs;
+let acquired = false;
+while (Date.now() < deadline) try {
+await mkdir(lockPath);
+acquired = true;
+break;
+} catch (error) {
+if (error.code !== "EEXIST") break;
+try {
+const lockStat = await stat(lockPath);
+if (Date.now() - lockStat.mtimeMs > staleMs) {
+try {
+await rmdir(lockPath);
+} catch {}
+continue;
+}
+} catch {
+continue;
+}
+await new Promise((resolve) => setTimeout(resolve, pollMs));
+}
+try {
+return await fn();
+} finally {
+if (acquired) try {
+await rmdir(lockPath);
+} catch {}
+}
+}
+
 //#endregion
 //#region src/lib/sort.ts
 /**
@@ -99921,15 +100026,54 @@ async function loadIdMapping(baseDir) {
 };
 }
 /**
-* Save the ID mapping to disk.
+* Save the ID mapping to disk with mutual exclusion.
+*
+* Uses a lockfile to serialize concurrent writers, then performs read-merge-write
+* inside the lock. This prevents the lost-update problem when multiple `tbd create`
+* commands run in parallel.
+*
+* The merge is safe because ID mappings are append-only — entries are never
+* intentionally removed. Even if the lock acquisition fails (degraded mode),
+* the read-merge-write provides a fallback that preserves entries from other writers.
 */
 async function saveIdMapping(baseDir, mapping) {
 const filePath = getMappingPath(baseDir);
 await mkdir(dirname(filePath), { recursive: true });
-
-
-
-
+await withLockfile(filePath + ".lock", async () => {
+let merged = mapping;
+let onDiskSize = 0;
+try {
+const onDisk = await loadIdMappingRaw(filePath);
+onDiskSize = onDisk.shortToUlid.size;
+if (onDiskSize > 0) merged = mergeIdMappings(mapping, onDisk);
+} catch {}
+if (merged.shortToUlid.size < onDiskSize) throw new Error(`Refusing to save ID mapping: would lose ${onDiskSize - merged.shortToUlid.size} entries (on-disk: ${onDiskSize}, proposed: ${merged.shortToUlid.size}). ID mappings are append-only — this indicates a bug.`);
+const data = {};
+const sortedKeys = naturalSort(Array.from(merged.shortToUlid.keys()));
+for (const key of sortedKeys) data[key] = merged.shortToUlid.get(key);
+await writeFile(filePath, stringifyYaml(data));
+});
+}
+/**
+* Load an ID mapping directly from a file path (internal helper for save merging).
+* Separated from loadIdMapping to avoid coupling the save path to baseDir resolution.
+*/
+async function loadIdMappingRaw(filePath) {
+const { data: rawData } = parseYamlToleratingDuplicateKeys(await readFile(filePath, "utf-8"), filePath);
+const data = rawData ?? {};
+const parseResult = IdMappingYamlSchema.safeParse(data);
+if (!parseResult.success) throw new Error(`Invalid ID mapping format in ${filePath}: ${parseResult.error.message}`);
+const validData = parseResult.data;
+const shortToUlid = /* @__PURE__ */ new Map();
+const ulidToShort = /* @__PURE__ */ new Map();
+for (const [shortId, ulid] of Object.entries(validData)) {
+shortToUlid.set(shortId, ulid);
+ulidToShort.set(ulid, shortId);
+}
+return {
+shortToUlid,
+ulidToShort
+};
 }
 /**
 * Calculate the optimal short ID length based on existing ID count.
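The read-merge-write pattern in the new `saveIdMapping` can be illustrated in isolation. This is a hypothetical sketch, assuming the merge is a simple union in which the proposed entries win on key collisions (the package's actual `mergeIdMappings` semantics may differ); `mergeAppendOnly` and `guardedSave` are illustrative names:

```javascript
// Union two short-id → ulid maps. Entries are append-only, so the merged
// result must never be smaller than what is already on disk.
function mergeAppendOnly(proposed, onDisk) {
  const merged = new Map(onDisk);
  for (const [shortId, ulid] of proposed) merged.set(shortId, ulid);
  return merged;
}

function guardedSave(proposed, onDisk) {
  const merged = mergeAppendOnly(proposed, onDisk);
  if (merged.size < onDisk.size) {
    // Mirrors the bundle's refusal to shrink an append-only mapping.
    throw new Error("Refusing to save: would lose entries");
  }
  return merged; // caller would serialize this to ids.yml
}

// A writer that only knows about "ef56" no longer clobbers entries
// written concurrently by another process.
const disk = new Map([["ab12", "01HXA"], ["cd34", "01HXB"]]);
const mine = new Map([["ef56", "01HXC"]]);
const merged = guardedSave(mine, disk);
```

Without the merge, two parallel `tbd create` runs would each write their own view of `ids.yml` and the slower writer would erase the faster one's entry, which is exactly the lost-update problem the comment describes.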
@@ -103001,8 +103145,8 @@ var SyncHandler = class extends BaseCommand {
 else if (options.push) await this.pushChanges(syncBranch, remote);
 else await this.fullSync(syncBranch, remote, {
 force: options.force,
-
-
+autoSave: options.autoSave,
+outbox: options.outbox
 });
 }
 /**
@@ -103421,7 +103565,7 @@ var SyncHandler = class extends BaseCommand {
 this.output.error(`Push failed: ${displayError}`);
 console.log(` ${aheadCommits} commit(s) not pushed to remote.`);
 });
-if (errorType === "permanent" &&
+if (errorType === "permanent" && options.autoSave !== false) await this.handlePermanentFailure();
 else if (!this.ctx.json) if (errorType === "transient") {
 console.log("");
 console.log(" This appears to be a temporary issue. Options:");
@@ -103436,7 +103580,7 @@ var SyncHandler = class extends BaseCommand {
 }
 return;
 }
-if (
+if (options.outbox !== false) await this.maybeImportOutbox(syncBranch, remote);
 this.output.data({
 summary,
 conflicts: conflicts.length
@@ -104459,7 +104603,9 @@ var DoctorHandler = class extends BaseCommand {
 healthChecks.push(await this.checkIdMappingDuplicates(options.fix));
 healthChecks.push(await this.checkTempFiles(options.fix));
 healthChecks.push(this.checkIssueValidity(this.issues));
-
+const parsedMaxHistory = options.maxHistory ? parseInt(options.maxHistory, 10) : 50;
+const maxHistory = Number.isNaN(parsedMaxHistory) || parsedMaxHistory < 0 ? 50 : parsedMaxHistory;
+healthChecks.push(await this.checkMissingMappings(options.fix, maxHistory));
 healthChecks.push(await this.checkWorktree(options.fix));
 healthChecks.push(await this.checkDataLocation(options.fix));
 healthChecks.push(await this.checkLocalSyncBranch());
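The `--max-history` option added in this release is parsed defensively: a missing, non-numeric, or negative value falls back to the default of 50, while an explicit `0` (meaning full history) is preserved. A standalone sketch of that fallback logic, with `parseMaxHistory` as an illustrative name rather than an actual export:

```javascript
// Parse a CLI-style numeric option string, falling back to a default
// for missing, non-numeric, or negative values (0 remains valid).
function parseMaxHistory(raw, fallback = 50) {
  const parsed = raw ? parseInt(raw, 10) : fallback;
  return Number.isNaN(parsed) || parsed < 0 ? fallback : parsed;
}

const a = parseMaxHistory("10");   // explicit limit
const b = parseMaxHistory("abc");  // non-numeric → default
const c = parseMaxHistory("0");    // 0 = scan full history, kept as-is
```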
@@ -104813,7 +104959,7 @@ var DoctorHandler = class extends BaseCommand {
 *
 * With --fix, creates missing mappings automatically.
 */
-async checkMissingMappings(fix) {
+async checkMissingMappings(fix, maxHistory = 50) {
 if (this.issues.length === 0) return {
 name: "ID mapping coverage",
 status: "ok"
@@ -104830,25 +104976,41 @@ var DoctorHandler = class extends BaseCommand {
 status: "ok"
 };
 if (fix && !this.checkDryRun("Create missing ID mappings")) {
-const { parseIdMappingFromYaml } = await Promise.resolve().then(() => id_mapping_exports);
+const { parseIdMappingFromYaml, mergeIdMappings } = await Promise.resolve().then(() => id_mapping_exports);
 let historicalMapping;
 try {
 const syncBranch = (await Promise.resolve().then(() => config_exports).then((m) => m.readConfig(this.cwd))).sync.branch;
-const
-if (
-
-
-
+const logArgs = ["log", "--format=%H"];
+if (maxHistory > 0) logArgs.push(`-${maxHistory}`);
+logArgs.push(syncBranch, "--", `${DATA_SYNC_DIR}/mappings/ids.yml`);
+const commitHashes = (await git(...logArgs)).trim().split("\n").filter(Boolean);
+for (const commitHash of commitHashes) try {
+const idsContent = await git("show", `${commitHash}:${DATA_SYNC_DIR}/mappings/ids.yml`);
+if (idsContent) {
+const versionMapping = parseIdMappingFromYaml(idsContent);
+if (!historicalMapping) historicalMapping = versionMapping;
+else historicalMapping = mergeIdMappings(historicalMapping, versionMapping);
+}
+} catch {}
 } catch {}
+const historicalCount = historicalMapping?.shortToUlid.size ?? 0;
 const result = reconcileMappings(missingIds, mapping, historicalMapping);
 await saveIdMapping(this.dataSyncDir, mapping);
 const parts = [];
 if (result.recovered.length > 0) parts.push(`recovered ${result.recovered.length} from git history`);
 if (result.created.length > 0) parts.push(`created ${result.created.length} new`);
+const details = [
+`Scanned ${maxHistory > 0 ? `up to ${maxHistory}` : "all"} git commits for ids.yml history`,
+`Found ${historicalCount} historical mapping(s) to use for recovery`,
+`${missingIds.length} issue(s) were missing short ID mappings`
+];
+if (result.recovered.length > 0) details.push(`Recovered ${result.recovered.length} original short ID(s) from git history`);
+if (result.created.length > 0) details.push(`Generated ${result.created.length} new short ID(s) (originals not found in history)`);
 return {
 name: "ID mapping coverage",
 status: "ok",
-message: parts.join(", ")
+message: parts.join(", "),
+details
 };
 }
 return {
@@ -105007,13 +105169,19 @@ var DoctorHandler = class extends BaseCommand {
 path: wrongIssuesPath,
 details: ["Cannot migrate: worktree must be repaired first.", "The worktree repair should have run before this check."]
 };
-const result = await migrateDataToWorktree(this.cwd);
-if (result.success)
-
-
-
-
-
+const result = await migrateDataToWorktree(this.cwd, true);
+if (result.success) {
+const details = [];
+if (result.backupPath) details.push(`Backed up to ${result.backupPath}`);
+details.push(`Migrated ${result.migratedCount} file(s) from .tbd/data-sync/ to worktree`, "Source files removed after successful migration");
+return {
+name: "Data location",
+status: "ok",
+message: result.backupPath ? `migrated ${result.migratedCount} file(s), backed up to ${result.backupPath}` : `migrated ${result.migratedCount} file(s)`,
+path: wrongIssuesPath,
+details
+};
+}
 return {
 name: "Data location",
 status: "error",
@@ -105210,15 +105378,13 @@ var DoctorHandler = class extends BaseCommand {
 };
 if (consistency.localAhead > 0) return {
 name: "Sync consistency",
-status: "
-message: `${consistency.localAhead} commit(s)
-suggestion: "Run: tbd sync to push changes"
+status: "ok",
+message: `${consistency.localAhead} local commit(s) not yet pushed — run \`tbd sync\` to push`
 };
 if (consistency.localBehind > 0) return {
 name: "Sync consistency",
-status: "
-message: `${consistency.localBehind} commit(s)
-suggestion: "Run: tbd sync to pull changes"
+status: "ok",
+message: `${consistency.localBehind} remote commit(s) not yet pulled — run \`tbd sync\` to pull`
 };
 return {
 name: "Sync consistency",
@@ -105239,7 +105405,7 @@ var DoctorHandler = class extends BaseCommand {
 }
 }
 };
-const doctorCommand = new Command("doctor").description("Diagnose and repair repository").option("--fix", "Attempt to fix issues").action(async (options, command) => {
+const doctorCommand = new Command("doctor").description("Diagnose and repair repository").option("--fix", "Attempt to fix issues").option("--max-history <n>", "Max git commits to scan for ID mapping recovery (0 = full history)", "50").action(async (options, command) => {
 await new DoctorHandler(command).run(options);
 });
 
@@ -108634,11 +108800,7 @@ var SetupDefaultHandler = class extends BaseCommand {
 ]);
 if (tbdGitignoreResult.created) console.log(` ${colors.success("✓")} Created .tbd/.gitignore`);
 else if (tbdGitignoreResult.added.length > 0) console.log(` ${colors.success("✓")} Updated .tbd/.gitignore with new patterns`);
-const gitattributesResult = await ensureGitignorePatterns(join(projectDir, TBD_DIR, ".gitattributes"), [
-"# Protect ID mappings from merge deletion (always keep all rows)",
-"# See: https://github.com/jlevy/tbd/issues/99",
-"**/mappings/ids.yml merge=union"
-]);
+const gitattributesResult = await ensureGitignorePatterns(join(projectDir, TBD_DIR, ".gitattributes"), ["# Protect ID mappings from merge deletion (always keep all rows)", "**/mappings/ids.yml merge=union"]);
 if (gitattributesResult.created) console.log(` ${colors.success("✓")} Created .tbd/.gitattributes (merge protection)`);
 else if (gitattributesResult.added.length > 0) console.log(` ${colors.success("✓")} Updated .tbd/.gitattributes (merge protection)`);
 console.log("Checking integrations...");
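The `.gitattributes` entry written here relies on git's built-in `union` merge driver, which resolves conflicting hunks by keeping the lines from both sides rather than emitting conflict markers. That suits an append-only, one-mapping-per-line file like `ids.yml`. The fragment the setup command writes is:

```gitattributes
# Protect ID mappings from merge deletion (always keep all rows)
**/mappings/ids.yml merge=union
```

Note that union merging can reorder lines and offers no conflict reporting, so it is only appropriate where every line is independently meaningful, as it is for these ID mappings.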
@@ -108771,11 +108933,7 @@ Example:
 ]);
 if (tbdGitignoreResult.created) console.log(` ${colors.success("✓")} Created .tbd/.gitignore`);
 else if (tbdGitignoreResult.added.length > 0) console.log(` ${colors.success("✓")} Updated .tbd/.gitignore`);
-const gitattributesResult = await ensureGitignorePatterns(join(cwd, TBD_DIR, ".gitattributes"), [
-"# Protect ID mappings from merge deletion (always keep all rows)",
-"# See: https://github.com/jlevy/tbd/issues/99",
-"**/mappings/ids.yml merge=union"
-]);
+const gitattributesResult = await ensureGitignorePatterns(join(cwd, TBD_DIR, ".gitattributes"), ["# Protect ID mappings from merge deletion (always keep all rows)", "**/mappings/ids.yml merge=union"]);
 if (gitattributesResult.created) console.log(` ${colors.success("✓")} Created .tbd/.gitattributes (merge protection)`);
 else if (gitattributesResult.added.length > 0) console.log(` ${colors.success("✓")} Updated .tbd/.gitattributes (merge protection)`);
 try {
@@ -109150,7 +109308,7 @@ const workspaceCommand = new Command("workspace").description("Manage workspaces
 function createProgram() {
 const program = new Command().name("tbd").description("Git-native issue tracking for AI agents and humans").version(VERSION, "--version", "Show version number").helpOption("--help", "Display help for command").showHelpAfterError("(add --help for additional information)");
 configureColoredHelp(program);
-program.option("--dry-run", "Show what would be done without making changes").option("--verbose", "Enable verbose output").option("--quiet", "Suppress non-essential output").option("--json", "Output as JSON").option("--color <when>", "Colorize output: auto, always, never", "auto").option("--
+program.option("--dry-run", "Show what would be done without making changes").option("--verbose", "Enable verbose output").option("--quiet", "Suppress non-essential output").option("--json", "Output as JSON").option("--color <when>", "Colorize output: auto, always, never", "auto").option("--no-sync", "Skip automatic sync after write operations").option("--debug", "Show internal IDs alongside public IDs for debugging");
 program.commandsGroup("Documentation:");
 program.addCommand(readmeCommand);
 program.addCommand(primeCommand);