@forwardimpact/basecamp 2.5.0 → 2.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -5,7 +5,7 @@ description: >
  indicate they are open for hire, benchmarks them against fit-pathway jobs, and
  writes prospect notes. Never contacts candidates. Woken on a schedule by the
  Basecamp scheduler.
- model: sonnet
+ model: haiku
  permissionMode: bypassPermissions
  skills:
  - scan-open-candidates
@@ -13,8 +13,8 @@ skills:
  - fit-map
  ---
 
- You are the head hunter — a passive talent scout. Each time you are woken by
- the scheduler, you scan one publicly available source for candidates who have
+ You are the head hunter — a passive talent scout. Each time you are woken by the
+ scheduler, you scan one publicly available source for candidates who have
  **explicitly indicated** they are open for hire. You benchmark promising matches
  against the engineering framework using `fit-pathway` and write prospect notes.
 
@@ -105,9 +105,9 @@ with 3+ consecutive failures are **suspended** — skip them during source
  selection and note the suspension in the triage report.
 
  ```
- github_open_to_work 0
- devto_opentowork 0
- hn_wants_hired 0
+ github_open_to_work 0
+ devto_opentowork 0
+ hn_wants_hired 0
  ```
 
  When a WebFetch fails (HTTP 4xx, 5xx, timeout, or blocked-page redirect),
@@ -120,7 +120,7 @@ github_open_to_work 2 403 Forbidden 2026-03-05T14:00:00Z
  On a successful fetch, reset the row:
 
  ```
- github_open_to_work 0
+ github_open_to_work 0
  ```
 
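The failure-tracking rules in this hunk (increment on failure, reset to zero on success, suspend at three consecutive failures) can be sketched as small row helpers. This is a minimal sketch assuming tab-separated rows of the form `[source, consecutive_failures, ...error details]`; the helper names are illustrative, not part of the package:

```javascript
// Sketch of the failures.tsv update rules. A failed fetch increments the
// consecutive-failure count and records the error; a successful fetch resets
// the row to a bare zero; 3+ consecutive failures suspend the source.

function recordFailure(row, error, timestamp) {
  // row: [source, count, ...]; returns the updated TSV row
  const count = Number(row[1]) + 1;
  return [row[0], String(count), error, timestamp].join("\t");
}

function recordSuccess(row) {
  // On success, reset the row to a bare zero count
  return [row[0], "0"].join("\t");
}

function isSuspended(row) {
  // Sources with 3+ consecutive failures are skipped during source selection
  return Number(row[1]) >= 3;
}
```

For example, a third failure on `github_open_to_work` would push its count to 3 and mark it suspended until the next successful fetch resets it.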
  ### seen.tsv
@@ -184,14 +184,14 @@ oldest `last_checked` timestamp (or one never checked). Sources in rotation:
  | --------------------- | ---------------------------------------------------- | -------------- |
  | `hn_wants_hired` | HN "Who Wants to Be Hired?" monthly thread | Self-posted |
  | `github_open_to_work` | GitHub user search API — bios with open-to-work | Bio signal |
- | `devto_opentowork` | dev.to articles tagged `opentowork`/`lookingforwork` | Tagged article |
+ | `devto_opentowork` | dev.to articles tagged `opentowork`/`lookingforwork` | Tagged article |
 
- Pick the source with the oldest check time. If all were checked today, pick
- the one checked longest ago.
+ Pick the source with the oldest check time. If all were checked today, pick the
+ one checked longest ago.
 
  **Skip suspended sources.** Check `failures.tsv` — any source with 3+
- consecutive failures is suspended. Log the skip and move to the next source.
- If all sources are suspended, report that in the triage and exit.
+ consecutive failures is suspended. Log the skip and move to the next source. If
+ all sources are suspended, report that in the triage and exit.
 
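The selection rule above (skip suspended sources, prefer the oldest `last_checked`, treat never-checked as oldest) could be sketched as follows. The row shape is an assumption for illustration; the agent's actual state files are TSV:

```javascript
// Sketch of source rotation: filter out suspended sources (3+ consecutive
// failures), then pick the one with the oldest last_checked timestamp.
// A source never checked (null) sorts first; returns null when every source
// is suspended so the caller can report that in the triage and exit.

function pickSource(rows) {
  const eligible = rows.filter((r) => r.failures < 3);
  if (eligible.length === 0) return null;
  eligible.sort((a, b) =>
    (a.last_checked ?? "").localeCompare(b.last_checked ?? ""),
  );
  return eligible[0].source;
}
```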
  ## 3. Fetch & Scan
197
197
 
@@ -212,6 +212,7 @@ WebFetch: https://hn.algolia.com/api/v1/items/{thread_id}
  ```
 
  Each top-level comment is a candidate. Look for:
+
  - Location (target: US East Coast, UK, EU — especially Greece, Poland, Romania,
  Bulgaria)
  - Skills matching framework capabilities
@@ -238,8 +239,8 @@ WebFetch: https://api.github.com/users/{login}
  Profile fields: `name`, `bio`, `location`, `hireable`, `blog`, `public_repos`,
  `company`. Check `hireable` (boolean) and bio text for open-to-work signals.
 
- **Rate limit:** 10 requests/minute unauthenticated. Batch user profile fetches
- fetch at most 5 profiles per wake cycle.
+ **Rate limit:** 10 requests/minute unauthenticated. Batch user profile fetches
+ fetch at most 5 profiles per wake cycle.
 
  **Cursor:** Store the page number last processed. Rotate through the location
  queries across wakes (UK → Europe → Remote → repeat).
@@ -253,8 +254,8 @@ WebFetch: https://dev.to/api/articles?tag=opentowork&per_page=25
  WebFetch: https://dev.to/api/articles?tag=lookingforwork&per_page=25
  ```
 
- Parse `title`, `description`, `user.name`, `url`, `tag_list`,
- `published_at`. Skip articles older than 90 days.
+ Parse `title`, `description`, `user.name`, `url`, `tag_list`, `published_at`.
+ Skip articles older than 90 days.
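The 90-day freshness cutoff described above can be sketched as a filter over the parsed articles. A minimal sketch, assuming article objects carry the `published_at` ISO timestamp listed in the parse step:

```javascript
// Keep only dev.to articles published within the last 90 days, comparing
// each article's published_at timestamp against a rolling cutoff.

function freshArticles(articles, now = new Date()) {
  const cutoff = new Date(now.getTime() - 90 * 24 * 60 * 60 * 1000);
  return articles.filter((a) => new Date(a.published_at) >= cutoff);
}
```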
 
  ## 3b. Creative Fallback — No Results
 
@@ -267,16 +268,16 @@ dedup, location, or skill fit), do not give up. Try alternative approaches
  source exhausted.
 
  2. **Relax location filters.** If strict geographic filtering eliminated
- everyone, re-scan with location filter removed — candidates who don't
- state a location may still be relevant.
+ everyone, re-scan with location filter removed — candidates who don't state a
+ location may still be relevant.
 
  3. **Try adjacent sources on the same platform.** For example:
  - HN: check the previous month's thread if the current one is thin
  - GitHub: search by skill keywords instead of bio phrases
  - dev.to: try related tags (`jobsearch`, `career`, `hiring`)
 
- 4. **Skill-based discovery.** Search for framework-relevant skill terms
- combined with availability signals. For example, search GitHub for
+ 4. **Skill-based discovery.** Search for framework-relevant skill terms combined
+ with availability signals. For example, search GitHub for
  `"data engineering" "open to work"` or `"full stack" "available for hire"`.
 
  5. **Log every attempt.** Record each alternative query tried in `log.md` so
@@ -293,29 +294,28 @@ sources.
  For each post, apply these filters in order:
 
  1. **Open-for-hire signal** — Skip if the candidate hasn't explicitly indicated
- availability. HN "Who Wants to Be Hired?" posts are inherently opt-in.
- GitHub users must have open-to-work bio text or `hireable: true`.
- dev.to articles must be tagged `opentowork` or `lookingforwork`.
+ availability. HN "Who Wants to Be Hired?" posts are inherently opt-in. GitHub
+ users must have open-to-work bio text or `hireable: true`. dev.to articles
+ must be tagged `opentowork` or `lookingforwork`.
 
  2. **Deduplication** — Check `seen.tsv` for the source + post ID. Skip if
  already processed.
 
- 3. **Location fit** — Prefer candidates in or open to: US East Coast, UK,
- EU (especially Greece, Poland, Romania, Bulgaria). Skip candidates who
- are location-locked to incompatible regions, but include "Remote" and
- "Anywhere" candidates.
+ 3. **Location fit** — Prefer candidates in or open to: US East Coast, UK, EU
+ (especially Greece, Poland, Romania, Bulgaria). Skip candidates who are
+ location-locked to incompatible regions, but include "Remote" and "Anywhere"
+ candidates.
 
- 4. **Skill alignment** — Does the candidate mention skills that map to
- framework capabilities? Use `npx fit-pathway skill --list` to check. Look
- for:
+ 4. **Skill alignment** — Does the candidate mention skills that map to framework
+ capabilities? Use `npx fit-pathway skill --list` to check. Look for:
  - Software engineering skills (full-stack, data integration, cloud, etc.)
  - Data engineering / data science skills
- - Non-traditional backgrounds (law, policy, academia) + technical skills
- = strong forward-deployed signal
+ - Non-traditional backgrounds (law, policy, academia) + technical skills =
+ strong forward-deployed signal
  - AI/ML tool proficiency (Claude, GPT, LLMs, vibe coding)
 
- 5. **Experience level** — Estimate career level from years of experience,
- role titles, and scope descriptions. Map to framework levels (J040–J110).
+ 5. **Experience level** — Estimate career level from years of experience, role
+ titles, and scope descriptions. Map to framework levels (J040–J110).
 
  ## 5. Benchmark Against Framework
 
@@ -327,6 +327,7 @@ npx fit-pathway job {discipline} {estimated_level} --track={best_track}
  ```
 
  Assess fit as:
+
  - **strong** — Multiple core skills match, experience level aligns, location
  works, and non-traditional background signals (for forward-deployed)
  - **moderate** — Some skill overlap, level roughly right, minor gaps
@@ -4,7 +4,7 @@ description: >
  The user's knowledge curator. Processes synced data into structured notes,
  extracts entities, and keeps the knowledge base organized. Woken on a
  schedule by the Basecamp scheduler.
- model: sonnet
+ model: haiku
  permissionMode: bypassPermissions
  skills:
  - extract-entities
@@ -142,8 +142,13 @@ Use the proficiency definitions from the framework:
  | `practitioner` | Led teams using this skill, mentored others, deep work |
  | `expert` | Published, shaped org practice, industry recognition |
 
- **Be conservative.** CVs inflate; default one level below what's claimed unless
- there's concrete evidence (metrics, project details, scope indicators).
+ **Be sceptical.** CVs inflate significantly. Default **two levels below** what
+ the CV implies unless the candidate provides concrete, quantified evidence
+ (metrics, measurable outcomes, named systems, team sizes, user/revenue scale).
+ Only award the directly implied level when the CV includes specific, verifiable
+ details — vague descriptions like "improved performance" or "led initiatives" do
+ not count. A skill merely listed in a "Skills" section with no project context
+ rates `awareness` at most.
 
  ## Step 4: Assess Behaviour Indicators
 
@@ -174,10 +179,18 @@ npx fit-pathway progress {discipline} {level} --track={track}
 
  Classify each skill as:
 
- - **Strong match** — candidate meets or exceeds the expected proficiency
- - **Adequate** — candidate is within one level of expected proficiency
+ - **Strong match** — candidate meets or exceeds the expected proficiency **and**
+ evidence is concrete (metrics, project specifics, scope indicators)
+ - **Adequate** — candidate is exactly one level below expected proficiency with
+ clear project evidence, **or** meets the level but evidence is thin
  - **Gap** — candidate is two or more levels below expected proficiency
- - **Not evidenced** — CV doesn't mention this skill area
+ - **Not evidenced** — CV doesn't mention this skill area. **Treat as a gap** for
+ recommendation purposes — absence of evidence is not evidence of skill
+
+ **Threshold rule:** If more than **one third** of the target job's skills are
+ Gap or Not evidenced, the candidate cannot receive "Proceed." If more than
+ **half** are Gap or Not evidenced, the candidate cannot receive "Proceed with
+ reservations."
 
  ## Step 6: Write Assessment
 
@@ -234,8 +247,21 @@ or could work on either. Reference specific CV evidence.}
 
  **Recommendation:** {Proceed / Proceed with reservations / Do not proceed}
 
+ Apply these **decision rules** strictly:
+
+ | Recommendation | Criteria |
+ | ---------------------------- | ----------------------------------------------------------------------- |
+ | **Proceed** | ≥ 70% Strong match, no core skill gaps, strong behaviour signals |
+ | **Proceed with reservations** | ≥ 50% Strong match, ≤ 2 gaps in non-core skills, no behaviour red flags |
+ | **Do not proceed** | All other candidates — including those with thin evidence |
+
+ When in doubt, choose the stricter recommendation. "Proceed with reservations"
+ should be rare — it signals a strong candidate with a specific, addressable
+ concern, not a marginal candidate who might work out.
+
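The decision-rules table and the threshold rule from Step 5 combine into a single check. A hypothetical sketch; the field names and the exact interaction of the two rules are assumptions, not spelled out in the package:

```javascript
// Sketch of the recommendation tiers. strongPct is the fraction of skills
// rated Strong match; missingFrac is the fraction rated Gap or Not evidenced
// (the Step 5 threshold rule caps the achievable tier).

function recommend({ strongPct, missingFrac, coreGaps, nonCoreGaps, behaviourRedFlag, strongBehaviour }) {
  const mayProceed = missingFrac <= 1 / 3; // over one third missing: no "Proceed"
  const mayReserve = missingFrac <= 1 / 2; // over half missing: no "reservations"

  if (mayProceed && strongPct >= 0.7 && coreGaps === 0 && strongBehaviour) {
    return "Proceed";
  }
  if (mayReserve && strongPct >= 0.5 && nonCoreGaps <= 2 && !behaviourRedFlag) {
    return "Proceed with reservations";
  }
  return "Do not proceed"; // when in doubt, the stricter tier
}
```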
  **Rationale:** {3-5 sentences grounding the recommendation in framework data.
- Reference specific skill gaps or strengths and their impact on the role.}
+ Reference specific skill gaps or strengths and their impact on the role.
+ Explicitly state the skill match percentage and gap count.}
 
  **Interview focus areas:**
  - {Area 1 — what to probe in interviews to validate}
@@ -261,7 +287,13 @@ to create the candidate profile from email threads.
  - [ ] Assessment is grounded in `fit-pathway` framework data, not subjective
  opinion
  - [ ] Every skill rating cites specific CV evidence or marks "Not evidenced"
- - [ ] Estimated level is conservative (one below CV claims unless proven)
+ - [ ] Estimated level is sceptical (two below CV claims unless proven with
+ quantified evidence)
+ - [ ] "Not evidenced" skills are counted as gaps in the recommendation
+ - [ ] Recommendation follows the decision rules table — verify match percentages
+ and gap counts before choosing a tier
+ - [ ] "Proceed with reservations" is only used for strong candidates with a
+ specific, named concern — never as a soft "maybe"
  - [ ] Track fit analysis references specific skill modifiers from the framework
  - [ ] Gaps are actionable — they suggest interview focus areas
  - [ ] Assessment file uses correct path format and links to CV
@@ -100,7 +100,7 @@ function main() {
  // Strip leading two-space padding from each line and trim overall whitespace
  body = body
  .split("\n")
- .map((line) => line.replace(/^ /, ""))
+ .map((line) => line.replace(/^ {2}/, ""))
  .join("\n")
  .trim();
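The one-character change in this hunk matters for nested content: `/^ /` strips only a single space from each line, leaving everything shifted by one column, while `/^ {2}/` removes exactly the two-space padding. A minimal before/after sketch:

```javascript
// Compare the old and new padding-strip regexes on a two-space-padded body
// that contains a nested (four-space) line.

const body = "  line one\n    nested item";

const oldStrip = body.split("\n").map((l) => l.replace(/^ /, "")).join("\n");
const newStrip = body.split("\n").map((l) => l.replace(/^ {2}/, "")).join("\n");

// oldStrip leaves one stray leading space on every line;
// newStrip removes the padding and preserves the nested indentation.
```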
 
@@ -63,16 +63,19 @@ When the user asks to prep for a meeting:
 
  ### Step 1: Identify the Meeting
 
- If specified, look it up in calendar:
+ Use the calendar query script to find upcoming meetings:
 
  ```bash
- ls ~/.cache/fit/basecamp/apple_calendar/ 2>/dev/null
- cat "$HOME/.cache/fit/basecamp/apple_calendar/event123.json"
+ # Next 2 hours of meetings as JSON
+ node .claude/skills/sync-apple-calendar/scripts/query.mjs --upcoming 2h --json
+
+ # Today's full schedule
+ node .claude/skills/sync-apple-calendar/scripts/query.mjs --today
  ```
 
  If "prep me for my next meeting":
 
- - List upcoming events
+ - Query upcoming events with `--upcoming 2h`
  - Find the next meeting with external attendees
  - Confirm with user if unclear
 
@@ -51,27 +51,36 @@ Run this skill:
  ## Before Starting
 
  1. Read `USER.md` to get the user's name, email, and domain.
- 2. List all session directories:
+ 2. **Scan for unprocessed sessions** using the scan script:
 
  ```bash
- ls "$HOME/Library/Application Support/hyprnote/sessions/"
+ node .claude/skills/process-hyprnote/scripts/scan.mjs
  ```
 
- 3. For each session, check if it needs processing by looking up its key files in
- the graph state:
+ This checks all sessions against the `graph_processed` state file and reports
+ which need processing, with titles, dates, and content previews.
 
- ```bash
- grep -F "{file_path}" ~/.cache/fit/basecamp/state/graph_processed
- ```
+ **Options:**
+
+ | Flag | Description |
+ | ----------- | -------------------------------------------------------- |
+ | `--changed` | Also detect sessions whose memo/summary hash has changed |
+ | `--json` | Output as JSON (for programmatic use) |
+ | `--count` | Just print the count (for quick checks) |
+ | `--limit N` | Max sessions to display (default: 20) |
 
  A session needs processing if:
 
  - Its `_memo.md` path is **not** in `graph_processed`, OR
- - Its `_memo.md` hash has changed (compute SHA-256 and compare), OR
+ - Its `_memo.md` hash has changed (use `--changed` to detect this), OR
  - Its `_summary.md` exists and is not in `graph_processed` or has changed
 
  **Process all unprocessed sessions in one run** (typically few sessions).
 
+ **Do NOT write bespoke scripts to scan for unprocessed sessions.** Use this
+ script — it handles all edge cases (empty memos, missing summaries, metadata
+ fallback).
+
  ## Step 0: Build Knowledge Index
 
  Scan existing notes to avoid duplicates and resolve entities:
@@ -0,0 +1,246 @@
+ #!/usr/bin/env node
+ /**
+ * Scan for unprocessed Hyprnote sessions.
+ *
+ * Compares session _memo.md and _summary.md files against the graph_processed
+ * state file to identify sessions that need processing. Reports unprocessed
+ * sessions with title, date, and content preview.
+ *
+ * Usage:
+ * node scripts/scan.mjs List unprocessed sessions
+ * node scripts/scan.mjs --changed Also detect changed (re-edited) sessions
+ * node scripts/scan.mjs --json Output as JSON
+ * node scripts/scan.mjs --count Just print the count
+ */
+
+ import { createHash } from "node:crypto";
+ import { existsSync, readFileSync, readdirSync, statSync } from "node:fs";
+ import { join } from "node:path";
+ import { homedir } from "node:os";
+
+ const HOME = homedir();
+ const SESSIONS_DIR = join(
+ HOME,
+ "Library/Application Support/hyprnote/sessions",
+ );
+ const STATE_FILE = join(HOME, ".cache/fit/basecamp/state/graph_processed");
+
+ if (process.argv.includes("-h") || process.argv.includes("--help")) {
+ console.log(`scan — find unprocessed Hyprnote sessions
+
+ Usage:
+ node scripts/scan.mjs [options]
+
+ Options:
+ --changed Also detect sessions whose memo/summary hash has changed
+ --json Output as JSON array
+ --count Just print the unprocessed count (for scripting)
+ --limit N Max sessions to display (default: 20)
+ -h, --help Show this help message
+
+ Sessions dir: ~/Library/Application Support/hyprnote/sessions/
+ State file: ~/.cache/fit/basecamp/state/graph_processed`);
+ process.exit(0);
+ }
+
+ const args = process.argv.slice(2);
+ const detectChanged = args.includes("--changed");
+ const jsonOutput = args.includes("--json");
+ const countOnly = args.includes("--count");
+ const limitIdx = args.indexOf("--limit");
+ const limit = limitIdx !== -1 ? parseInt(args[limitIdx + 1], 10) || 20 : 20;
+
+ // --- Load state ---
+
+ const state = new Map();
+ if (existsSync(STATE_FILE)) {
+ const text = readFileSync(STATE_FILE, "utf8");
+ for (const line of text.split("\n")) {
+ if (!line) continue;
+ const idx = line.indexOf("\t");
+ if (idx === -1) continue;
+ state.set(line.slice(0, idx), line.slice(idx + 1));
+ }
+ }
+
+ /**
+ * Compute SHA-256 hash of file contents.
+ */
+ function fileHash(filePath) {
+ return createHash("sha256").update(readFileSync(filePath)).digest("hex");
+ }
+
+ /**
+ * Check if a file needs processing (new or changed).
+ */
+ function needsProcessing(filePath) {
+ const storedHash = state.get(filePath);
+ if (!storedHash) return { needed: true, reason: "new" };
+ if (detectChanged) {
+ const currentHash = fileHash(filePath);
+ if (currentHash !== storedHash) return { needed: true, reason: "changed" };
+ }
+ return { needed: false, reason: null };
+ }
+
+ /**
+ * Extract title and date from a memo file.
+ */
+ function parseMemo(memoPath) {
+ try {
+ const content = readFileSync(memoPath, "utf8");
+
+ // Skip empty/whitespace-only memos
+ const body = content.replace(/---[\s\S]*?---/, "").trim();
+ if (!body || body === " ") return null;
+
+ // Extract title from first H1
+ const titleMatch = content.match(/^#\s+(.+)/m);
+ const title = titleMatch ? titleMatch[1].trim() : null;
+
+ // Extract date from content or fall back to file mtime
+ const dateMatch = content.match(/\d{4}-\d{2}-\d{2}/);
+ let date = dateMatch ? dateMatch[0] : null;
+
+ if (!date) {
+ const stat = statSync(memoPath);
+ date = stat.mtime.toISOString().slice(0, 10);
+ }
+
+ return { title, date, preview: body.slice(0, 150).replace(/\n/g, " ") };
+ } catch {
+ return null;
+ }
+ }
+
+ /**
+ * Read _meta.json for session metadata.
+ */
+ function readMeta(sessionDir) {
+ const metaPath = join(sessionDir, "_meta.json");
+ if (!existsSync(metaPath)) return null;
+ try {
+ return JSON.parse(readFileSync(metaPath, "utf8"));
+ } catch {
+ return null;
+ }
+ }
+
+ // --- Scan sessions ---
+
+ if (!existsSync(SESSIONS_DIR)) {
+ console.error(`Hyprnote sessions directory not found: ${SESSIONS_DIR}`);
+ process.exit(1);
+ }
+
+ const sessionIds = readdirSync(SESSIONS_DIR);
+ const unprocessed = [];
+ let totalWithMemos = 0;
+ let processedCount = 0;
+
+ for (const uuid of sessionIds) {
+ const sessionPath = join(SESSIONS_DIR, uuid);
+ const stat = statSync(sessionPath, { throwIfNoEntry: false });
+ if (!stat || !stat.isDirectory()) continue;
+
+ const memoPath = join(sessionPath, "_memo.md");
+ const summaryPath = join(sessionPath, "_summary.md");
+ const hasMemo = existsSync(memoPath);
+ const hasSummary = existsSync(summaryPath);
+
+ if (!hasMemo && !hasSummary) continue;
+
+ totalWithMemos++;
+
+ // Check memo
+ const memoCheck = hasMemo
+ ? needsProcessing(memoPath)
+ : { needed: false, reason: null };
+
+ // Check summary
+ const summaryCheck = hasSummary
+ ? needsProcessing(summaryPath)
+ : { needed: false, reason: null };
+
+ if (!memoCheck.needed && !summaryCheck.needed) {
+ processedCount++;
+ continue;
+ }
+
+ // Parse memo for display info
+ const memo = hasMemo ? parseMemo(memoPath) : null;
+ if (hasMemo && !memo) {
+ // Empty memo, no summary → skip
+ if (!hasSummary) continue;
+ }
+
+ // Read meta for title fallback
+ const meta = readMeta(sessionPath);
+
+ const title = memo?.title || meta?.title || uuid.slice(0, 8);
+ const date =
+ memo?.date ||
+ (meta?.created_at ? meta.created_at.slice(0, 10) : null) ||
+ statSync(sessionPath).mtime.toISOString().slice(0, 10);
+
+ unprocessed.push({
+ uuid,
+ title,
+ date,
+ hasMemo,
+ hasSummary,
+ memoReason: memoCheck.reason,
+ summaryReason: summaryCheck.reason,
+ preview: memo?.preview || "(summary only)",
+ memoPath: hasMemo ? memoPath : null,
+ summaryPath: hasSummary ? summaryPath : null,
+ });
+ }
+
+ // Sort by date descending (newest first)
+ unprocessed.sort((a, b) => b.date.localeCompare(a.date));
+
+ // --- Output ---
+
+ if (countOnly) {
+ console.log(unprocessed.length);
+ process.exit(0);
+ }
+
+ if (jsonOutput) {
+ console.log(JSON.stringify(unprocessed.slice(0, limit), null, 2));
+ process.exit(0);
+ }
+
+ // Formatted output
+ console.log(
+ `Sessions: ${totalWithMemos} total, ${processedCount} processed, ${unprocessed.length} unprocessed`,
+ );
+
+ if (unprocessed.length === 0) {
+ console.log("\nAll sessions are up to date.");
+ process.exit(0);
+ }
+
+ console.log("");
+ const display = unprocessed.slice(0, limit);
+ for (const s of display) {
+ const flags = [];
+ if (s.memoReason) flags.push(`memo:${s.memoReason}`);
+ if (s.summaryReason) flags.push(`summary:${s.summaryReason}`);
+ const sources = [];
+ if (s.hasMemo) sources.push("memo");
+ if (s.hasSummary) sources.push("summary");
+
+ console.log(
+ `${s.date} | ${s.title} | ${sources.join("+")} | ${flags.join(", ")}`,
+ );
+ console.log(` ${s.uuid}`);
+ if (s.preview && s.preview !== "(summary only)") {
+ console.log(` ${s.preview.slice(0, 100)}…`);
+ }
+ }
+
+ if (unprocessed.length > limit) {
+ console.log(`\n... and ${unprocessed.length - limit} more`);
+ }