@forwardimpact/basecamp 2.2.0 → 2.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@forwardimpact/basecamp",
- "version": "2.2.0",
+ "version": "2.3.1",
  "description": "Claude Code-native personal knowledge system with autonomous agents",
  "license": "Apache-2.0",
  "repository": {
@@ -11,6 +11,7 @@ skills:
  - analyze-cv
  - fit-pathway
  - fit-map
+ - right-to-be-forgotten
  ---

  You are the recruiter — the user's engineering recruitment specialist. Each time
@@ -78,32 +79,78 @@ product_management — no tracks
  Use `npx fit-pathway discipline {id}` to see skill tiers and behaviour
  modifiers for each discipline.

+ ## Data Protection
+
+ Candidate data is personal data. Handle it with the same care as any sensitive
+ professional information.
+
+ **Rules:**
+
+ 1. **Minimum necessary data.** Only record information relevant to assessing
+ role fit. Do not store personal details beyond what the candidate or their
+ recruiter shared for hiring purposes.
+ 2. **Retention awareness.** Candidates who are rejected or withdraw should not
+ have data retained indefinitely. After 6 months of inactivity on a rejected
+ or withdrawn candidate, flag them in the triage report under
+ `## Data Retention` for the user to decide: re-engage, archive, or erase.
+ 3. **Erasure readiness.** If the user receives a data erasure request (GDPR
+ Article 17 or equivalent), use the `right-to-be-forgotten` skill to process
+ it. This removes all personal data and produces an audit trail.
+ 4. **No sensitive categories.** Do not record health information, political
+ views, religious beliefs, sexual orientation, or other special category data
+ — even if it appears in a CV or email.
+ 5. **Assume the candidate will see it.** Write every assessment and note as if
+ the candidate will request a copy (GDPR Article 15 — right of access). If
+ you wouldn't be comfortable sharing it with them, don't write it.
+
+ ## Human Oversight
+
+ This agent **recommends** — the user **decides**. Automated recruitment tools
+ carry legal and ethical risk when they make consequential decisions without human
+ review.
+
+ **Hard rules:**
+
+ 1. **Never auto-reject.** The agent may flag concerns and recommend "do not
+ proceed," but the user must make the final rejection decision. Assessments
+ are advisory, not dispositive.
+ 2. **Level estimates are hypotheses.** Always present estimated career level
+ with confidence language ("likely J060", "evidence suggests J070") — never
+ as definitive fact. CVs are incomplete signals.
+ 3. **Flag uncertainty.** When evidence is thin or ambiguous, say so explicitly.
+ Recommend interview focus areas to resolve uncertainty rather than guessing.
+ 4. **No ranking by protected characteristics.** Never sort, filter, or rank
+ candidates by gender, ethnicity, age, or other protected characteristics.
+ Rank by framework skill alignment only.
+
  ## Pool Diversity

- Engineering has an industry-wide gender diversity problem. We will always hire
- the most qualified engineer for the job — merit is non-negotiable. But a
- non-diverse candidate pool usually means the sourcing process is broken, not that
- qualified diverse candidates don't exist.
+ Engineering has an industry-wide diversity problem. We will always hire the most
+ qualified engineer for the job — merit is non-negotiable. But a non-diverse
+ candidate pool usually means the sourcing process is broken, not that qualified
+ diverse candidates don't exist.

  **Your responsibilities:**

- 1. **Track gender composition of the active pipeline.** In every triage report,
- include a diversity summary: how many candidates are women vs the total pool.
- 2. **Flag women candidates explicitly.** When a woman candidate enters the
- pipeline, highlight her in the triage under a `## Women Candidates` section
- so she is not overlooked in a large pool. Include her name, status, and
- assessed fit.
- 3. **Push back on homogeneous pools.** If the active pipeline for a role has
- fewer than 30% women candidates, add a `⚠️ Diversity gap` warning to the
- triage report with a clear recommendation: _"Ask recruiters/agencies to
- actively source women candidates for this role before shortlisting."_
- 4. **Never lower the bar.** Diversity goals apply to the candidate pool, not to
+ 1. **Track aggregate pool diversity.** In every triage report, include
+ anonymized diversity statistics: how many candidates have gender recorded as
+ Woman vs Man vs unknown, as a pool-level metric. Never single out individual
+ candidates by gender or other protected characteristics.
+ 2. **Push back on homogeneous pools.** If the active pipeline has low gender
+ diversity, add a `⚠️ Pool diversity` note to the triage report recommending
+ the user ask recruiters/agencies to broaden sourcing.
+ 3. **Never lower the bar.** Diversity goals apply to the candidate pool, not to
  hiring decisions. Every candidate is assessed on the same framework criteria.
  Do not adjust skill ratings, level estimates, or recommendations based on
- gender.
- 5. **Track sourcing channels.** When a sourcing channel consistently produces
- homogeneous candidate pools, note it in `knowledge/Candidates/Insights.md`
- so the user can address it with the agency.
+ gender or any other protected characteristic.
+ 4. **Track sourcing channels.** When a sourcing channel consistently produces
+ homogeneous candidate pools, note **the channel pattern** (not individual
+ candidates) in `knowledge/Candidates/Insights.md` so the user can address
+ it with the agency.
+ 5. **Gender data handling.** Gender is recorded only when explicitly stated in
+ recruiter communications (pronouns, titles like "Ms./Mr."). Never infer
+ gender from names. Record as `Woman`, `Man`, or `—` (unknown). When
+ uncertain, always use `—`.

  ## 1. Sync Candidates

@@ -168,12 +215,12 @@ cat > ~/.cache/fit/basecamp/state/recruiter_triage.md << 'EOF'
  - Platform fit: {N} candidates
  - Either track: {N} candidates

- ## Diversity
- - Women: {N}/{total} ({%})
- - ⚠️ Diversity gap — {warning if below 30%, or "Pool is balanced" if not}
+ ## Diversity (aggregate)
+ - Gender recorded: {N} Woman / {N} Man / {N} unknown of {total} total
+ - ⚠️ Pool diversity — {note if pool appears homogeneous, or "Pool sourcing looks broad"}

- ## Women Candidates
- - **{Name}** {status}, {track fit}, {recommendation}
+ ## Data Retention
+ - {Name(s) of candidates rejected/withdrawn 6+ months ago, if any, for user review}
  EOF
  ```

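The `## Data Retention` flag above can be populated with a small scan. A minimal sketch, assuming each brief records `**Status:**` and `**Last activity:** YYYY-MM-DD` lines as in the brief template (the function name and its arguments are illustrative, not part of the package):

```shell
# List briefs of rejected/withdrawn candidates with no activity since a cutoff.
# ISO dates compare correctly as YYYYMMDD integers once dashes are stripped.
flag_stale_candidates() {
  root="$1"    # e.g. knowledge/Candidates
  cutoff="$2"  # YYYY-MM-DD, six months before today
  for brief in "$root"/*/brief.md; do
    [ -f "$brief" ] || continue
    status=$(sed -n 's/^\*\*Status:\*\* //p' "$brief")
    last=$(sed -n 's/^\*\*Last activity:\*\* //p' "$brief")
    case "$status" in
      rejected|withdrawn)
        if [ -n "$last" ] && \
           [ "$(printf '%s' "$last" | tr -d '-')" -lt "$(printf '%s' "$cutoff" | tr -d '-')" ]; then
          echo "$brief"
        fi
        ;;
    esac
  done
}
```

The user, not the agent, decides what happens to each flagged brief, in line with the Data Protection rules.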
@@ -218,5 +265,5 @@ After acting, output exactly:
  Decision: {what you observed and why you chose this action}
  Action: {what you did, e.g. "analyze-cv for John Smith against J060 forward_deployed"}
  Pipeline: {N} total, {N} new, {N} assessed, {N} interviewing
- Diversity: {N}/{total} women ({%}) — {balanced | ⚠️ gap}
+ Diversity: {N} W / {N} M / {N} unknown of {total} — {broad | ⚠️ homogeneous pool}
  ```
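The aggregate diversity line can be computed directly from the brief files. A minimal sketch, assuming the brief's Info block stores the field as `**Gender:** Woman` / `Man` / `—` (the function name and directory argument are illustrative):

```shell
# Count briefs by recorded gender value for the pool-level triage line.
# Counts files, never names — individual candidates are not singled out.
gender_counts() {
  root="$1"  # e.g. knowledge/Candidates
  for label in 'Woman' 'Man' '—'; do
    n=$(grep -rl "^\*\*Gender:\*\* $label\$" "$root" 2>/dev/null | wc -l | tr -d ' ')
    printf '%s %s\n' "$n" "$label"
  done
}
```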
@@ -60,7 +60,7 @@ Read the candidate's CV file. Extract:
  | **Leadership signals** | Team size, mentoring, cross-team work, architecture |
  | **Scope signals** | Scale of systems, user base, revenue impact |
  | **Communication** | Publications, talks, open source, documentation |
- | **Gender** | Pronouns, gendered titles, first name if unambiguous |
+ | **Gender** | Pronouns, gendered titles (never infer from names) |

  ## Step 2: Look Up the Framework Reference

@@ -230,6 +230,8 @@ or could work on either. Reference specific CV evidence.}

  ## Hiring Recommendation

+ **⚠️ Advisory only — human decision required.**
+
  **Recommendation:** {Proceed / Proceed with reservations / Do not proceed}

  **Rationale:** {3-5 sentences grounding the recommendation in framework data.
@@ -264,4 +266,4 @@ to create the candidate profile from email threads.
  - [ ] Gaps are actionable — they suggest interview focus areas
  - [ ] Assessment file uses correct path format and links to CV
  - [ ] Candidate brief updated with skill tags and assessment link
- - [ ] Gender field set in both assessment and brief where identifiable
+ - [ ] Gender field set only from explicit pronouns/titles (never name-inferred)
@@ -0,0 +1,333 @@
+ ---
+ name: right-to-be-forgotten
+ description: >
+   Process GDPR Article 17 data erasure requests. Finds and removes all personal
+   data related to a named individual from the knowledge base, cached data, and
+   agent state files. Use when the user receives a right-to-be-forgotten request,
+   asks to delete all data about a person, or needs to comply with a data
+   erasure obligation.
+ compatibility: Requires macOS filesystem access
+ ---
+
+ # Right to Be Forgotten
+
+ Process data erasure requests under GDPR Article 17 (Right to Erasure). Given a
+ person's name, systematically find and remove all personal data from the
+ knowledge base, cached synced data, and agent state files.
+
+ This skill produces an **erasure report** documenting what was found, what was
+ deleted, and what requires manual action — providing an audit trail for
+ compliance.
+
+ ## Trigger
+
+ Run this skill:
+
+ - When the user receives a formal GDPR erasure request
+ - When the user asks to delete all data about a specific person
+ - When a candidate withdraws from a recruitment process and requests data
+ deletion
+ - When the user asks to "forget" someone
+
+ ## Prerequisites
+
+ - The person's full name (and any known aliases or email addresses)
+ - User confirmation before deletion proceeds
+
+ ## Inputs
+
+ - **Name**: Full name of the data subject (required)
+ - **Aliases**: Alternative names, maiden names, nicknames (optional)
+ - **Email addresses**: Known email addresses (optional, improves search coverage)
+ - **Scope**: `all` (default) or `recruitment-only` (limits to candidate data)
+
+ ## Outputs
+
+ - `knowledge/Erasure/{Name}--{YYYY-MM-DD}.md` — erasure report (audit trail)
+ - Deleted files and redacted references across the knowledge base
+
+ ---
+
+ ## Step 0: Confirm Intent
+
+ Before proceeding, clearly state to the user:
+
+ > **Data erasure request for: {Name}**
+ >
+ > This will permanently delete all personal data related to {Name} from:
+ > - Knowledge base notes (People, Candidates, Organizations mentions)
+ > - Cached email threads and attachments
+ > - Agent state and triage files
+ >
+ > This action cannot be undone. Proceed?
+
+ **Wait for explicit confirmation before continuing.**
+
+ ## Step 1: Discovery — Find All References
+
+ Search systematically across every data location. Record every match.
+
+ ### 1a. Knowledge Base — Direct Notes
+
+ ```bash
+ # Candidate directory (recruitment data)
+ ls -d "knowledge/Candidates/{Name}/" 2>/dev/null
+
+ # People note
+ ls "knowledge/People/{Name}.md" 2>/dev/null
+
+ # Try common name variations
+ ls "knowledge/People/{First} {Last}.md" 2>/dev/null
+ ls "knowledge/People/{Last}, {First}.md" 2>/dev/null
+ ```
+
+ ### 1b. Knowledge Base — Backlinks and Mentions
+
+ ```bash
+ # Search for all mentions across the entire knowledge graph
+ rg -l "{Name}" knowledge/
+ rg -l "{First name} {Last name}" knowledge/
+
+ # Search for Obsidian-style links
+ rg -l "\[\[.*{Name}.*\]\]" knowledge/
+
+ # Search by email address if known
+ rg -l "{email}" knowledge/
+ ```
+
+ ### 1c. Cached Data — Email Threads
+
+ ```bash
+ # Search synced email threads for mentions
+ rg -l "{Name}" ~/.cache/fit/basecamp/apple_mail/ 2>/dev/null
+ rg -l "{email}" ~/.cache/fit/basecamp/apple_mail/ 2>/dev/null
+
+ # Check for attachment directories containing their files
+ find ~/.cache/fit/basecamp/apple_mail/attachments/ -iname "*{Name}*" 2>/dev/null
+ ```
+
+ ### 1d. Cached Data — Calendar Events
+
+ ```bash
+ # Search calendar events
+ rg -l "{Name}" ~/.cache/fit/basecamp/apple_calendar/ 2>/dev/null
+ rg -l "{email}" ~/.cache/fit/basecamp/apple_calendar/ 2>/dev/null
+ ```
+
+ ### 1e. Agent State Files
+
+ ```bash
+ # Search triage files for mentions
+ rg -l "{Name}" ~/.cache/fit/basecamp/state/ 2>/dev/null
+ ```
+
+ ### 1f. Drafts
+
+ ```bash
+ # Search email drafts
+ rg -l "{Name}" drafts/ 2>/dev/null
+ ```
+
+ Compile a complete inventory of every file and reference found.
+
+ ## Step 2: Classify References
+
+ For each discovered reference, classify the required action:
+
+ | Reference Type | Action | Example |
+ | --- | --- | --- |
+ | **Dedicated note** (sole subject) | Delete entire file | `knowledge/People/{Name}.md` |
+ | **Dedicated directory** | Delete entire directory | `knowledge/Candidates/{Name}/` |
+ | **Mention in another note** | Redact: remove lines referencing the person | Backlink in `knowledge/Organizations/Agency.md` |
+ | **Email thread** (sole subject) | Delete file | `~/.cache/fit/basecamp/apple_mail/thread.md` |
+ | **Email thread** (multiple people) | Redact: remove paragraphs about the person | Thread discussing multiple candidates |
+ | **Attachment** (their CV, etc.) | Delete file | `attachments/{thread}/CV.pdf` |
+ | **Triage/state file** | Redact: remove lines mentioning them | `recruiter_triage.md` |
+ | **Insights file** | Redact: remove bullets mentioning them | `knowledge/Candidates/Insights.md` |
+
+ ## Step 3: Execute Deletions
+
+ Process in order from most specific to most general:
+
+ ### 3a. Delete Dedicated Files and Directories
+
+ ```bash
+ # Remove candidate directory (CV, brief, assessment — everything)
+ rm -rf "knowledge/Candidates/{Name}/"
+
+ # Remove people note
+ rm -f "knowledge/People/{Name}.md"
+
+ # Remove any attachments
+ find ~/.cache/fit/basecamp/apple_mail/attachments/ -iname "*{Name}*" -delete
+ ```
+
+ ### 3b. Redact Mentions in Other Notes
+
+ For each file that **mentions** the person but isn't dedicated to them:
+
+ 1. Read the file
+ 2. Remove lines, bullets, or sections that reference the person
+ 3. Remove broken `[[backlinks]]` to deleted notes
+ 4. Write the updated file
+
+ **Redaction rules:**
+
+ - Remove entire bullet points that mention the person by name
+ - Remove table rows containing the person's name
+ - Remove `## Connected to` entries linking to their deleted note
+ - If a section becomes empty after redaction, remove the section header too
+ - Do NOT remove surrounding context that doesn't identify the person
+
+ ### 3c. Handle Email Threads
+
+ For threads where the person is the **sole subject** (e.g., a recruitment email
+ about only them):
+
+ ```bash
+ # Tilde must stay outside the quotes so the shell expands it
+ rm -f ~/.cache/fit/basecamp/apple_mail/"{thread}.md"
+ ```
+
+ For threads with **multiple people**, redact only the paragraphs about this
+ person — leave the rest intact.
+
+ ### 3d. Clean Agent State
+
+ Remove mentions from triage files:
+
+ ```bash
+ # Regenerate triage files on next agent wake — just remove current mentions
+ for f in ~/.cache/fit/basecamp/state/*_triage.md; do
+ if rg -q "{Name}" "$f" 2>/dev/null; then
+ # Read, remove lines mentioning the person, write back
+ rg -v "{Name}" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
+ fi
+ done
+ ```
+
+ ### 3e. Clean Processing State
+
+ Remove entries from the graph_processed state so deleted files aren't
+ incorrectly tracked:
+
+ ```bash
+ # Remove processed-file entries for deleted paths
+ rg -v "{deleted_path}" ~/.cache/fit/basecamp/state/graph_processed \
+ > ~/.cache/fit/basecamp/state/graph_processed.tmp \
+ && mv ~/.cache/fit/basecamp/state/graph_processed.tmp \
+ ~/.cache/fit/basecamp/state/graph_processed
+ ```
+
+ ## Step 4: Write Erasure Report
+
+ Create the audit trail at `knowledge/Erasure/{Name}--{YYYY-MM-DD}.md`:
+
+ ````markdown
+ # Data Erasure Report — {Full Name}
+
+ **Date:** {YYYY-MM-DD HH:MM}
+ **Requested by:** {user or "GDPR Article 17 request"}
+ **Scope:** {all / recruitment-only}
+
+ ## Data Subject
+ - **Name:** {Full Name}
+ - **Known aliases:** {aliases or "none"}
+ - **Known emails:** {emails or "none"}
+
+ ## Actions Taken
+
+ ### Deleted Files
+ - `knowledge/Candidates/{Name}/brief.md`
+ - `knowledge/Candidates/{Name}/CV.pdf`
+ - `knowledge/Candidates/{Name}/assessment.md`
+ - `knowledge/People/{Name}.md`
+ - {list all deleted files}
+
+ ### Redacted References
+ - `knowledge/Organizations/{Agency}.md` — removed backlink
+ - `knowledge/Candidates/Insights.md` — removed {N} bullet(s)
+ - {list all redacted files and what was removed}
+
+ ### Cached Data Removed
+ - `~/.cache/fit/basecamp/apple_mail/{thread}.md` — deleted (sole subject)
+ - `~/.cache/fit/basecamp/apple_mail/{thread2}.md` — redacted (multi-person)
+ - {list all cache actions}
+
+ ### State Files Cleaned
+ - `~/.cache/fit/basecamp/state/recruiter_triage.md` — redacted
+ - {list all state file actions}
+
+ ## Requires Manual Action
+
+ The following data sources are outside this tool's reach:
+
+ - **Apple Mail** — original emails remain in the user's mailbox. Search for
+ "{Name}" in Mail.app and delete threads manually.
+ - **Apple Calendar** — original events remain. Check Calendar.app for events
+ mentioning "{Name}".
+ - **Recruitment agencies** — notify {Agency} that the candidate's data has been
+ deleted and request they do the same.
+ - **Interview notes** — check physical notebooks or other apps for handwritten
+ or external notes.
+ - **Shared documents** — check Google Drive, SharePoint, or other shared
+ platforms for documents mentioning the person.
+
+ ## Verification
+
+ After erasure, verify no traces remain:
+
+ ```bash
+ rg "{Name}" knowledge/ ~/.cache/fit/basecamp/
+ ```
+
+ Expected result: no matches (except this erasure report).
+ ````
+
+ **IMPORTANT:** The erasure report itself must NOT contain personal data beyond
+ the name and the fact that data was deleted. Do not copy CV content, skill
+ assessments, or candidate details into the report. Record only what was deleted,
+ not what it contained.
+
+ ## Step 5: Verify
+
+ Run a final search to confirm no references were missed:
+
+ ```bash
+ rg "{Name}" knowledge/ ~/.cache/fit/basecamp/ drafts/
+ ```
+
+ The only match should be the erasure report itself. If other matches remain,
+ process them and update the report.
+
+ ## Scope Variants
+
+ ### recruitment-only
+
+ When scope is `recruitment-only`, limit erasure to:
+
+ - `knowledge/Candidates/{Name}/` directory
+ - `knowledge/Candidates/Insights.md` mentions
+ - Recruitment-related email threads (from known agency domains)
+ - `recruiter_triage.md` state file
+
+ Leave `knowledge/People/{Name}.md` and general knowledge graph references
+ intact — the person may be a colleague or contact outside of recruitment.
+
+ ### all (default)
+
+ Full erasure across all knowledge base locations, cached data, and state files.
+
+ ## Quality Checklist
+
+ - [ ] User confirmed intent before any deletion
+ - [ ] Searched all data locations (knowledge, cache, state, drafts)
+ - [ ] All dedicated files/directories deleted
+ - [ ] All backlinks and mentions redacted from other notes
+ - [ ] Cached email threads and attachments handled
+ - [ ] Agent state files cleaned
+ - [ ] Processing state updated for deleted files
+ - [ ] Erasure report created with full audit trail
+ - [ ] Report does NOT contain personal data (only file paths and actions)
+ - [ ] Manual action items listed (Mail.app, Calendar.app, agencies)
+ - [ ] Final verification search shows no remaining references
+ - [ ] Broken backlinks cleaned up in referencing notes
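The Step 5 check can be wrapped so the erasure report itself doesn't count as a leftover match. A minimal sketch using `grep` rather than `rg` for portability; the function name and the `knowledge/Erasure/` exclusion are assumptions based on the report path above:

```shell
# List files still mentioning the data subject, excluding erasure reports.
# Empty output means the erasure pass is clean.
verify_erasure() {
  name="$1"; root="$2"
  grep -rl "$name" "$root" 2>/dev/null | grep -v '/Erasure/' || true
}
```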
@@ -26,6 +26,8 @@ their email.

  - `~/.cache/fit/basecamp/state/apple_mail_last_sync` — last sync timestamp
  (single-line text file)
+ - `~/.cache/fit/basecamp/state/apple_mail_last_rowid` — highest message ROWID
+ seen at last sync (single-line text file)
  - `~/Library/Mail/V*/MailData/Envelope Index` — Apple Mail SQLite database

  ## Outputs
@@ -36,6 +38,8 @@ their email.
  attachment files for each thread (PDFs, images, documents, etc.)
  - `~/.cache/fit/basecamp/state/apple_mail_last_sync` — updated with new sync
  timestamp
+ - `~/.cache/fit/basecamp/state/apple_mail_last_rowid` — updated with highest
+ ROWID seen

  ---

@@ -53,13 +57,17 @@ The script:
  1. Finds the Mail database (`~/Library/Mail/V*/MailData/Envelope Index`)
  2. Loads last sync timestamp (or defaults to `--days` days ago for first sync)
  3. Discovers the thread grouping column (`conversation_id` or `thread_id`)
- 4. Finds threads with new messages since last sync (up to 500)
- 5. For each thread: fetches messages, batch-fetches recipients and attachment
+ 4. Loads last-seen ROWID (or defaults to 0 for first sync)
+ 5. Finds threads with new messages since last sync (up to 500), using both
+ timestamp and ROWID to catch late-arriving emails (emails downloaded after
+ a delay may have `date_received` before the last sync timestamp, but their
+ ROWID will be higher than the last-seen ROWID)
+ 6. For each thread: fetches messages, batch-fetches recipients and attachment
  metadata, parses `.emlx` files for full email bodies (falling back to
  database summaries), copies attachment files to the output directory
- 6. Writes one markdown file per thread to `~/.cache/fit/basecamp/apple_mail/`
- 7. Updates sync state timestamp
- 8. Reports summary (threads processed, files written)
+ 7. Writes one markdown file per thread to `~/.cache/fit/basecamp/apple_mail/`
+ 8. Updates sync state (timestamp and max ROWID)
+ 9. Reports summary (threads processed, files written)

  The script imports `scripts/parse-emlx.mjs` to extract plain text bodies from
  `.emlx` / `.partial.emlx` files (handles HTML-only emails by stripping tags).
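The dual-cursor selection in step 5 can be sketched as a single query. The table and column names here are illustrative — the real script discovers the thread-grouping column in the Envelope Index at runtime:

```shell
# Threads containing messages that are new by either cursor:
# received after the last sync, OR stored after the last-seen ROWID.
new_thread_ids() {
  db="$1"; last_ts="$2"; last_rowid="$3"
  sqlite3 "$db" "
    SELECT DISTINCT conversation_id
    FROM messages
    WHERE date_received > $last_ts
       OR ROWID > $last_rowid
    ORDER BY conversation_id
    LIMIT 500;"
}
```

Using both cursors means a message downloaded late (old `date_received`, fresh ROWID) is still picked up on the next sync.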
@@ -130,30 +130,40 @@ For each candidate found in a recruitment email, extract:

  | Field | Source | Required |
  | ----------------- | ------------------------------------------------- | ------------------- |
- | **Name** | Filename, email body, CV | Yes |
- | **Role** | Email body, CV | Yes |
- | **Rate** | Email body (e.g. "$120/hr", "€80/h") | If available |
- | **Availability** | Email body (e.g. "1 month notice", "immediately") | If available |
- | **English** | Email body (e.g. "B2", "Upper-intermediate") | If available |
- | **Location** | Email body, CV | If available |
- | **Source agency** | Sender domain → Organization | Yes |
- | **Recruiter** | Email sender or CC'd recruiter | Yes |
- | **CV path** | Attachment directory | If available |
- | **Skills** | Email body, CV | If available |
- | **Gender** | Name, pronouns, recruiter context | If identifiable |
- | **Summary** | Email body, CV | Yes — 2-3 sentences |
+ | **Name** | Filename, email body, CV | Yes |
+ | **Title** | Email body, CV — the candidate's professional title/function | Yes |
+ | **Rate** | Email body (e.g. "$120/hr", "€80/h") | If available |
+ | **Availability** | Email body (e.g. "1 month notice", "immediately") | If available |
+ | **English** | Email body (e.g. "B2", "Upper-intermediate") | If available |
+ | **Location** | Email body, CV | If available |
+ | **Source agency** | Sender domain → Organization | Yes |
+ | **Recruiter** | Email sender or CC'd recruiter | Yes |
+ | **CV path** | Attachment directory | If available |
+ | **Skills** | Email body, CV | If available |
+ | **Gender** | Name, pronouns, recruiter context | If identifiable |
+ | **Summary** | Email body, CV | Yes — 2-3 sentences |
+ | **Role** | Internal requisition profile being hired against | If available |
+ | **Req** | Requisition ID from hiring system | If available |
+ | **Internal/External** | Whether candidate is internal or external | If available |
+ | **Model** | Engagement model (B2B, Direct Hire, etc.) | If available |
+ | **Current title** | CV or email body | If available |
+ | **Email** | Email body, CV, signature | If available |
+ | **Phone** | Email body, CV, signature | If available |
+ | **LinkedIn** | Email body, CV | If available |
+ | **Also known as** | Alternate name spellings or transliterations | If available |

  ### Determining Gender

- Record the candidate's gender when identifiable from the email or CV:
+ Record the candidate's gender when **explicitly stated** in the email or CV:

  - Pronouns used by the recruiter ("she is available", "her CV attached")
  - Gendered titles ("Ms.", "Mrs.", "Mr.")
- - First name when culturally unambiguous

- Record as `Woman`, `Man`, or `—` (unknown). When uncertain, use `—` — never
- guess. This field supports pool diversity tracking; it has **no bearing** on
- hiring decisions or assessment criteria.
+ Record as `Woman`, `Man`, or `—` (unknown). When uncertain, use `—` — **never
+ infer gender from names**, regardless of cultural context. Name-based inference
+ is unreliable and culturally biased. This field supports aggregate pool
+ diversity tracking; it has **no bearing** on hiring decisions, assessment
+ criteria, or candidate visibility.

  ### Determining Source and Recruiter

@@ -192,9 +202,11 @@ Assign a status based on the email context:
  | `screening` | Under review, questions asked about the candidate |
  | `first-interview` | First interview scheduled or completed |
  | `second-interview` | Second interview scheduled or completed |
+ | `work-trial` | Paid work trial or assessment project in progress |
  | `offer` | Offer extended |
  | `hired` | Accepted and onboarding |
  | `rejected` | Explicitly passed on ("not a fit", "pass", "decline") |
+ | `withdrawn` | Candidate withdrew from the process |
  | `on-hold` | Paused, waiting on notice period, or deferred |

  **Default to `new`** if no response signals are found. Read the full thread
@@ -206,7 +218,10 @@ Look for these patterns in the hiring manager's replies:

  - "let's schedule" / "set up an interview" → `first-interview`
  - "second round" / "follow-up interview" → `second-interview`
+ - "work trial" / "assessment project" / "paid trial" → `work-trial`
  - "not what we're looking for" / "pass" → `rejected`
+ - "candidate withdrew" / "no longer interested" / "accepted another offer" →
+ `withdrawn`
  - "extend an offer" / "make an offer" → `offer`
  - "they've accepted" / "start date" → `hired`
  - "put on hold" / "come back to later" → `on-hold`
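Phrase matching of this kind can be sketched as a simple first-match-wins classifier. The function name is illustrative, and the patterns are only the ones listed above — a real pass would still read the full thread:

```shell
# Hypothetical helper mapping a hiring-manager reply to a pipeline status.
# Case-insensitive substring match; falls through to "new".
classify_status() {
  reply=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$reply" in
    *"work trial"*|*"assessment project"*|*"paid trial"*) echo "work-trial" ;;
    *"second round"*|*"follow-up interview"*) echo "second-interview" ;;
    *"let's schedule"*|*"set up an interview"*) echo "first-interview" ;;
    *"candidate withdrew"*|*"no longer interested"*|*"accepted another offer"*) echo "withdrawn" ;;
    *"extend an offer"*|*"make an offer"*) echo "offer" ;;
    *"they've accepted"*|*"start date"*) echo "hired" ;;
    *"not what we're looking for"*|*"pass"*) echo "rejected" ;;
    *"put on hold"*|*"come back to later"*) echo "on-hold" ;;
    *) echo "new" ;;
  esac
}
```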
@@ -243,7 +258,7 @@ Then create `knowledge/Candidates/{Full Name}/brief.md`:
  # {Full Name}

  ## Info
- **Role:** {role title}
+ **Title:** {professional title/function}
  **Rate:** {rate or "—"}
  **Availability:** {availability or "—"}
  **English:** {level or "—"}
@@ -253,6 +268,7 @@ Then create `knowledge/Candidates/{Full Name}/brief.md`:
  **Status:** {pipeline status}
  **First seen:** {date profile was shared, YYYY-MM-DD}
  **Last activity:** {date of most recent thread activity, YYYY-MM-DD}
+ {extra fields here — see below}

  ## Summary
  {2-3 sentences: role, experience level, key strengths}
@@ -270,7 +286,11 @@ Then create `knowledge/Candidates/{Full Name}/brief.md`:
  ## Skills
  {comma-separated skill tags}

+ ## Interview Notes
+ {interview feedback, structured by date — omit section if no interviews yet}
+
  ## Notes
+ {free-form observations — always present, even if empty}
  ```

  If a CV attachment exists, **copy it into the candidate directory** before
@@ -278,6 +298,37 @@ writing the note.

  If no CV attachment exists, omit the `## CV` section entirely.

+ ### Extra Info Fields
+
+ Place any of these **after Last activity** in the order shown, only when
+ available:
+
+ ```markdown
+ **Role:** {internal requisition profile, e.g. "Staff Engineer"}
+ **Req:** {requisition ID, e.g. "4950237 — Principal Software Engineer"}
+ **Internal/External:** {Internal / External / External (Prior Worker)}
+ **Model:** {engagement model, e.g. "B2B (via Agency) — conversion to FTE not possible"}
+ **Current title:** {current job title and employer}
+ **Email:** {personal or work email}
+ **Phone:** {phone number}
+ **LinkedIn:** {LinkedIn profile URL}
+ **Also known as:** {alternate name spellings}
+ ```
+
+ ### Additional Sections
+
+ Some candidates accumulate richer profiles over time. These optional sections go
+ **after Skills and before Notes**, in this order:
+
+ 1. `## Education` — degrees, institutions, years
+ 2. `## Certifications` — professional certifications
+ 3. `## Work History` — chronological career history (when extracted from CV)
+ 4. `## Key Facts` — notable bullet points from CV review
+ 5. `## Interview Notes` — structured by date as `### YYYY-MM-DD — {description}`
+
+ `## Notes` is always the **last section**. If an `## Open Items` section exists
+ (pending questions or follow-ups), place it after Notes.
+
  ### For EXISTING candidates

  Read `knowledge/Candidates/{Full Name}/brief.md`, then apply targeted edits:
@@ -363,6 +414,11 @@ produces a full framework-aligned assessment.
  - [ ] Scanned all new/changed email threads for recruitment signals
  - [ ] Extracted all candidates found (check attachment directories too)
  - [ ] Each candidate has a complete note with all available fields
+ - [ ] Info fields are in standard order (Title, Rate, Availability, English,
+ Location, Gender, Source, Status, First seen, Last activity, then extras)
+ - [ ] Sections are in standard order (Info → Summary → CV → Connected to →
+ Pipeline → Skills → Education/Certifications/Work History/Key Facts →
+ Interview Notes → Notes → Open Items)
  - [ ] CV paths are correct and point to actual files
  - [ ] Pipeline status reflects the latest thread activity
  - [ ] Timeline entries are in chronological order
@@ -372,4 +428,4 @@ produces a full framework-aligned assessment.
372
428
  - [ ] No duplicate candidate notes created
373
429
  - [ ] Key strategic insights added to `Insights.md` where warranted
374
430
  - [ ] Skills tagged using framework skill IDs where possible
375
- - [ ] Gender field populated where identifiable (Woman / Man / —)
431
+ - [ ] Gender field populated only from explicit pronouns/titles (never name-inferred)
@@ -0,0 +1,233 @@
1
+ ---
2
+ name: upstream-skill
3
+ description: Track changes made to skills in this installation and produce a changelog that can be included upstream. Use when skills have been modified, added, or removed locally and those changes should be contributed back to the monorepo.
4
+ ---
5
+
6
+ # Upstream
7
+
8
+ Track changes made to skills in this installation and produce a structured
9
+ changelog so improvements can be contributed back to the upstream monorepo.
10
+
11
+ ## Trigger
12
+
13
+ Run this skill when:
14
+
15
+ - The user asks to prepare local skill changes for upstream contribution
16
+ - Skills in `.claude/skills/` have been modified, added, or removed
17
+ - The user wants to document what changed in the local installation
18
+ - Before syncing with the upstream monorepo
19
+
20
+ ## Prerequisites
21
+
22
+ - A working Basecamp installation with `.claude/skills/` directory
23
+ - Git available for detecting changes
24
+
25
+ ## Inputs
26
+
27
+ - `.claude/skills/*/SKILL.md` — current skill files in this installation
28
+ - Git history — to detect what changed and when
29
+
30
+ ## Outputs
31
+
32
+ - `.claude/skills/<skill-name>/CHANGELOG.md` — per-skill changelog of local
33
+ changes, one per modified skill directory
34
+
35
+ ---
36
+
37
+ ## Process
38
+
39
+ ### Step 1: Identify Changed Skills
40
+
41
+ Use git to find all skill files that have been added, modified, or deleted since
42
+ the last changelog entry (or since initial commit if no changelog exists).
43
+
44
+ ```bash
45
+ # Find the date of the last changelog entry for a skill (if any)
46
+ head -20 .claude/skills/<skill-name>/CHANGELOG.md 2>/dev/null
47
+
48
+ # List all commits that touched skill files, newest first
49
+ git log --oneline --name-status -- '.claude/skills/'
50
+ ```
51
+
52
+ If a skill already has a `.claude/skills/<skill-name>/CHANGELOG.md`, read the
53
+ most recent entry date and only look at changes after that date:
54
+
55
+ ```bash
56
+ git log --after="<last-entry-date>" --name-status -- '.claude/skills/<skill-name>/'
57
+ ```
58
+
59
+ If no changelog exists for a skill, consider all commits that touched that
60
+ skill's directory.
61
+
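The date lookup can be scripted rather than eyeballed. A sketch, assuming changelog entries use the `## YYYY-MM-DD` headings this skill writes; the helper name is ours:

```shell
# Sketch: pull the most recent "## YYYY-MM-DD" entry heading from a
# skill's changelog; prints nothing if no changelog exists yet.
last_entry_date() {
  grep -m1 -oE '^## [0-9]{4}-[0-9]{2}-[0-9]{2}' \
    ".claude/skills/$1/CHANGELOG.md" 2>/dev/null | cut -c4-
}
```

This slots into the command above, e.g. `git log --after="$(last_entry_date track-candidates)" --name-status -- '.claude/skills/track-candidates/'`.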
62
+ ### Step 2: Classify Each Change
63
+
64
+ For every changed skill file, determine the type of change:
65
+
66
+ | Type | Description |
67
+ | ----------- | ----------------------------------------------------- |
68
+ | `added` | New skill created that doesn't exist upstream |
69
+ | `modified` | Existing skill updated (workflow, checklists, tools) |
70
+ | `removed` | Skill deleted from the installation |
71
+ | `renamed` | Skill directory or file renamed |
72
+
73
+ For **modified** skills, read the current file and the previous version to
74
+ identify what specifically changed:
75
+
76
+ ```bash
77
+ # Show diff for a specific skill (replace N with how many commits back)
78
+ git diff HEAD~N -- '.claude/skills/<skill-name>/SKILL.md'
79
+
80
+ # Or compare against a specific commit/date
81
+ git log --oneline -- '.claude/skills/<skill-name>/'
82
+ git diff <commit> -- '.claude/skills/<skill-name>/SKILL.md'
83
+ ```
84
+
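Git's `--name-status` letters map directly onto the table. A sketch; the function name is ours:

```shell
# Sketch: translate `git diff --name-status` letters into the change
# types from the table above. Rename codes carry a similarity score
# (e.g. R100), so R is matched as a prefix.
classify_status() {
  case "$1" in
    A)  echo added ;;
    M)  echo modified ;;
    D)  echo removed ;;
    R*) echo renamed ;;
    *)  echo unknown ;;
  esac
}
```

Pipe `git diff --name-status <commit> -- '.claude/skills/'` through a read loop and call `classify_status` on the first tab-separated field of each line.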
85
+ ### Step 3: Describe Each Change
86
+
87
+ For every changed skill, write a clear description that an upstream maintainer
88
+ can act on. Each entry must answer:
89
+
90
+ 1. **What changed?** — The specific section or behaviour that was modified
91
+ 2. **Why?** — The problem encountered or improvement discovered during use
92
+ 3. **How?** — A summary of the actual change (not a full diff)
93
+
94
+ Good descriptions:
95
+
96
+ - "Added a safety check to Step 3 — agents were skipping validation when the
97
+ source directory was empty, causing silent failures"
98
+ - "Rewrote the Entity Extraction section to process files in batches of 10
99
+ instead of all at once — large inboxes caused context window overflow"
100
+ - "New skill: `process-hyprnote` — transcribes and extracts entities from
101
+ Hyprnote meeting recordings"
102
+
103
+ Bad descriptions:
104
+
105
+ - "Updated SKILL.md" (too vague)
106
+ - "Fixed stuff" (no context)
107
+ - "Changed line 42" (not meaningful to upstream)
108
+
109
+ ### Step 4: Write the Changelog
110
+
111
+ For each changed skill, create or update its
112
+ `.claude/skills/<skill-name>/CHANGELOG.md` with the following format:
113
+
114
+ ```markdown
115
+ # <skill-name> Changelog
116
+
117
+ Changes to this skill that should be considered for upstream inclusion in the
118
+ Forward Impact monorepo.
119
+
120
+ ## <YYYY-MM-DD>
121
+
122
+ **What:** <one-line summary of the change>
123
+
124
+ **Why:** <the problem or improvement that motivated it>
125
+
126
+ **Details:**
127
+ <2-5 lines describing the specific changes made>
128
+
129
+ ---
130
+ ```
131
+
132
+ Entries are in **reverse chronological order** (newest first). Each skill has
133
+ its own changelog file inside its directory.
134
+
135
+ For **new skills**, create the `CHANGELOG.md` alongside the `SKILL.md` with a
136
+ single `added` entry describing the skill's purpose.
137
+
138
+ For **removed skills**, the changelog should be the last file remaining in the
139
+ skill directory, documenting why the skill was removed.
140
+
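Because entries are newest-first, a new entry is prepended, not appended. A sketch that inserts before the first existing `## ` heading, so it works for any header length; the function and its argument layout are ours, and the Why/Details lines are still filled in by hand:

```shell
# Sketch: prepend a changelog entry so the file stays newest-first.
# Args: skill name, date (YYYY-MM-DD), one-line "What" summary.
# For a brand-new changelog (no "## " heading yet), write the whole
# file instead of using this helper.
prepend_entry() {
  local file=".claude/skills/$1/CHANGELOG.md"
  awk -v d="$2" -v w="$3" '
    !done && /^## / {   # first existing entry: emit new entry before it
      printf "## %s\n\n**What:** %s\n\n---\n\n", d, w
      done = 1
    }
    { print }
  ' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}
```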
141
+ ### Step 5: Review the Changelogs
142
+
143
+ After writing, read each changelog back and verify:
144
+
145
+ - [ ] Every changed skill has a `CHANGELOG.md` in its directory
146
+ - [ ] Each entry has What, Why, and Details sections
147
+ - [ ] Descriptions are specific enough for an upstream maintainer to act on
148
+ - [ ] New skills include a brief description of their purpose
149
+ - [ ] Removed skills explain why they were removed
150
+ - [ ] No duplicate entries for the same change
151
+ - [ ] Dates are accurate (from git history, not guessed)
152
+
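The first checklist item can be partly automated. A sketch; the function name is ours:

```shell
# Sketch: read skill directory paths on stdin and print any that
# lack a CHANGELOG.md.
missing_changelogs() {
  while IFS= read -r dir; do
    [ -f "$dir/CHANGELOG.md" ] || echo "missing changelog: $dir"
  done
}
```

Feed it the changed skill directories, e.g. `git diff --name-only <commit> -- '.claude/skills/' | cut -d/ -f1-3 | sort -u | missing_changelogs`.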
153
+ ## Example Output
154
+
155
+ `.claude/skills/track-candidates/CHANGELOG.md`:
156
+
157
+ ```markdown
158
+ # track-candidates Changelog
159
+
160
+ Changes to this skill that should be considered for upstream inclusion in the
161
+ Forward Impact monorepo.
162
+
163
+ ## 2026-03-01
164
+
165
+ **What:** Added gender field extraction for diversity tracking
166
+
167
+ **Why:** Recruitment pipeline lacked diversity metrics — pool composition was
168
+ invisible without structured gender data.
169
+
170
+ **Details:**
171
+ - Added Gender field to candidate brief template (Woman / Man / —)
172
+ - Added extraction rules: explicit pronouns and gendered titles only (never name-inferred)
173
+ - Added explicit note that field has no bearing on hiring decisions
174
+ - Updated quality checklist to include gender field verification
175
+
176
+ ---
177
+ ```
178
+
179
+ `.claude/skills/process-hyprnote/CHANGELOG.md`:
180
+
181
+ ```markdown
182
+ # process-hyprnote Changelog
183
+
184
+ Changes to this skill that should be considered for upstream inclusion in the
185
+ Forward Impact monorepo.
186
+
187
+ ## 2026-03-01
188
+
189
+ **What:** New skill for processing Hyprnote meeting recordings
190
+
191
+ **Why:** Meeting notes were being lost — Hyprnote captures transcriptions but
192
+ they weren't being integrated into the knowledge base.
193
+
194
+ **Details:**
195
+ - Reads transcription files from `~/.cache/fit/basecamp/hyprnote/`
196
+ - Extracts people, decisions, and action items
197
+ - Creates meeting notes in `knowledge/Meetings/`
198
+ - Links attendees to `knowledge/People/` entries
199
+
200
+ ---
201
+ ```
202
+
203
+ `.claude/skills/extract-entities/CHANGELOG.md`:
204
+
205
+ ```markdown
206
+ # extract-entities Changelog
207
+
208
+ Changes to this skill that should be considered for upstream inclusion in the
209
+ Forward Impact monorepo.
210
+
211
+ ## 2026-02-15
212
+
213
+ **What:** Increased batch size from 5 to 10 files per run
214
+
215
+ **Why:** Processing was too slow for large inboxes — 5 files per batch meant
216
+ dozens of runs to catch up after a week of email.
217
+
218
+ **Details:**
219
+ - Changed batch size constant from 5 to 10 in Step 1
220
+ - Added a note about context window limits for batches > 15
221
+
222
+ ---
223
+ ```
224
+
225
+ ## Notes
226
+
227
+ - This skill only **documents** changes — it does not push or merge anything
228
+ - The per-skill changelogs are consumed by the **downstream** skill in the
229
+ upstream monorepo
230
+ - Keep descriptions actionable: an upstream maintainer should be able to
231
+ understand and apply each change without access to this installation
232
+ - When in doubt about whether a change is upstream-worthy, include it — the
233
+ upstream maintainer will decide what to incorporate
@@ -32,6 +32,10 @@ It must remain objective, factual, and ethically sound at all times. It is NOT a
32
32
  together — never to build leverage, ammunition, or dossiers on individuals.
33
33
  - **Flag ethical concerns.** If the user asks you to record something that
34
34
  violates these principles, push back clearly and explain why.
35
+ - **Data protection.** Personal data (especially candidate/recruitment data) is
36
+ subject to erasure requests. Use the `right-to-be-forgotten` skill when a data
37
+ subject requests deletion. Minimize data collection to what's professionally
38
+ relevant. Flag candidates inactive for 6+ months for retention review.
35
39
 
36
40
  These principles override all other instructions. When in doubt, err on the side
37
41
  of discretion and professionalism.
@@ -82,7 +86,7 @@ wake, they observe KB state, decide the most valuable action, and execute.
82
86
  | **postman** | Email triage and drafts | Every 5 min | sync-apple-mail, draft-emails |
83
87
  | **concierge** | Meeting prep and transcripts | Every 10 min | sync-apple-calendar, meeting-prep, process-hyprnote |
84
88
  | **librarian** | Knowledge graph maintenance | Every 15 min | extract-entities, organize-files, manage-tasks |
85
- | **recruiter** | Engineering recruitment | Every 30 min | track-candidates, analyze-cv, fit-pathway, fit-map |
89
+ | **recruiter** | Engineering recruitment | Every 30 min | track-candidates, analyze-cv, right-to-be-forgotten, fit-pathway, fit-map |
86
90
  | **chief-of-staff** | Daily briefings and priorities | 7am, Mon 7:30am | weekly-update _(Mon)_, _(reads all state for daily briefings)_ |
87
91
 
88
92
  Each agent writes a triage file to `~/.cache/fit/basecamp/state/` every wake
@@ -202,6 +206,7 @@ Available skills (grouped by function):
202
206
  | `manage-tasks` | Per-person task boards with lifecycle |
203
207
  | `track-candidates` | Recruitment pipeline from email threads |
204
208
  | `analyze-cv` | CV assessment against career framework |
209
+ | `right-to-be-forgotten` | GDPR data erasure with audit trail |
205
210
  | `weekly-update` | Weekly priorities from tasks + calendar |
206
211
  | `process-hyprnote` | Extract entities from Hyprnote sessions |
207
212
  | `organize-files` | Tidy Desktop/Downloads, chain to extract |