@forwardimpact/basecamp 2.6.1 → 2.8.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -58,6 +58,9 @@ Run this skill:
  - `knowledge/Organizations/*.md` — organization notes
  - `knowledge/Projects/*.md` — project notes
  - `knowledge/Topics/*.md` — topic notes
+ - `knowledge/Roles/*.md` — role/requisition files (created or enriched)
+ - `knowledge/Candidates/*/brief.md` — candidate briefs (enriched with inferred metadata)
  - `~/.cache/fit/basecamp/state/graph_processed` — updated with newly processed files
 
@@ -385,6 +388,75 @@ Log state changes in activity with `[Field → value]` notation:
  - **2025-01-20** (email): Leadership approved pilot. [Status → active]
  ```
 
+ ## Step 7b: Detect Recruitment Signals
+
+ When processing emails and calendar events, look for signals that relate to the
+ recruitment pipeline. These signals enrich `knowledge/Roles/` and
+ `knowledge/Candidates/` with metadata that cannot be derived from any single
+ source.
+
+ ### Requisition Number Detection
+
+ Scan email subjects and bodies for requisition numbers (e.g. 7-digit Workday
+ IDs). When found:
+
+ 1. Check if a Role file exists: `ls knowledge/Roles/ | grep "{req_number}"`
+ 2. If **no Role file exists**, create a stub (see `track-candidates` Step 0b
+    for the template). Search the knowledge graph for context:
+    ```bash
+    rg "{req_number}" knowledge/
+    ```
+ 3. If a Role file **does exist**, check if the email provides new metadata
+    (hiring manager, recruiter, locations) and update the Role file.
+
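The detection flow above can be sketched in shell. The subject line, req number, and directory layout here are illustrative assumptions, not values from any real mailbox:

```shell
# Hypothetical email subject -- real subjects vary.
subject="RE: Interview loop for req 1234567 - Platform Engineer"

# Extract the first 7-digit run as a candidate Workday-style req ID.
req_number=$(printf '%s\n' "$subject" | grep -oE '[0-9]{7}' | head -n 1)

# Step 1: does any Role file already reference this req?
roles_dir=$(mktemp -d)/Roles   # stand-in for knowledge/Roles/
mkdir -p "$roles_dir"
if ls "$roles_dir" | grep -q "$req_number"; then
  echo "enrich existing Role file"
else
  echo "create stub for req $req_number"
fi
```

With an empty Roles directory this takes the stub-creation branch, mirroring step 2 above.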
+ ### Hiring Manager Inference from Calendar Events
+
+ When a calendar event title matches interview-related patterns:
+ - "Interview", "Screening", "Screen", "Decomposition", "Panel", "Technical
+   Assessment", "Candidate"
+ - Combined with a person name (cross-reference `knowledge/Candidates/`)
+
+ Extract the **organizer** of the event. If the organizer is NOT the user (from
+ `USER.md`), they are likely the hiring manager for this role. To confirm:
+
+ 1. Look up the organizer in `knowledge/People/` — check if they have a role
+    indicating they manage a team or are described as a hiring manager.
+ 2. Look up which candidate is being interviewed — check their `brief.md` for
+    a `Req` field.
+ 3. If a Req is known, update the corresponding `knowledge/Roles/*.md` file's
+    `Hiring manager` field (only if currently `—`).
+ 4. Update the candidate's `brief.md` `Hiring manager` field if currently `—`.
+
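A minimal sketch of the title check, assuming plain-text event titles (the example titles are made up):

```shell
# Does an event title contain any interview-related signal word?
is_interview_event() {
  printf '%s' "$1" | grep -qiE 'interview|screen(ing)?|decomposition|panel|technical assessment|candidate'
}

if is_interview_event "Technical Assessment - req 1234567"; then
  echo "check organizer against USER.md"
fi
```

A non-matching title such as a sprint retro falls through without output, so only plausibly interview-related events trigger the organizer lookup.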
+ ### Recruiter Inference from Email Threads
+
+ When processing email threads that reference candidates (by name match against
+ `knowledge/Candidates/`), check the To/CC fields for internal recruiters:
+
+ 1. Cross-reference To/CC addresses against `knowledge/People/` notes.
+ 2. If a CC'd person's note mentions "recruiter", "talent acquisition", or a
+    similar recruiting role, they are likely the internal recruiter for this
+    candidate's role.
+ 3. Update the candidate's `brief.md` recruiter field and the corresponding
+    Role file if the field is currently `—`.
+
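Steps 1–2 can be sketched against a throwaway People note; the name and wording below are invented for illustration:

```shell
people_dir=$(mktemp -d)        # stand-in for knowledge/People/
cat > "$people_dir/Alex Kim.md" <<'EOF'
# Alex Kim
Role: Senior Recruiter, Talent Acquisition
EOF

# Does a CC'd person's note mention a recruiting role?
cc_name="Alex Kim"
likely_recruiter=no
if grep -qiE 'recruiter|talent acquisition' "$people_dir/$cc_name.md"; then
  likely_recruiter=yes
fi
echo "$cc_name likely recruiter: $likely_recruiter"
```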
+ ### Domain Lead Resolution
+
+ When a hiring manager is newly identified (from calendar or email inference),
+ attempt to resolve the domain lead:
+
+ 1. Read the hiring manager's People note for a `**Reports to:**` field.
+ 2. Walk up the reporting chain until reaching a VP or senior leader listed in
+    a stakeholder map or organizational hierarchy note.
+ 3. Update both the Role file's `Domain lead` and the candidate brief's
+    `Domain lead`.
+
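The chain walk can be sketched with `sed` over `**Reports to:**` lines. The three notes and names below are fabricated, and the real stop condition (a stakeholder map match) is reduced here to "no `Reports to` field":

```shell
people_dir=$(mktemp -d)        # stand-in for knowledge/People/
printf '**Reports to:** Dana Fox\n'  > "$people_dir/Sam Lee.md"
printf '**Reports to:** Vic Mason\n' > "$people_dir/Dana Fox.md"
printf 'VP of Engineering\n'         > "$people_dir/Vic Mason.md"

person="Sam Lee"               # newly identified hiring manager
for _ in 1 2 3 4 5; do         # depth cap guards against report cycles
  next=$(sed -n 's/^\*\*Reports to:\*\* //p' "$people_dir/$person.md")
  [ -z "$next" ] && break      # top of the known chain
  person=$next
done
echo "candidate domain lead: $person"
```

The depth cap is a deliberate design choice: reporting data inferred from notes can contain cycles, and an unbounded walk would hang.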
+ **Be conservative:** Only set hiring manager/domain lead/recruiter when the
+ evidence is strong. A single calendar invite organized by someone is suggestive
+ but not conclusive — confirm against People notes or multiple data points before
+ setting the field.
+
+ ---
+
  ## Step 8: Check for Duplicates
 
  Before writing:
@@ -462,4 +534,8 @@ Before completing, verify:
  - [ ] Open items are commitments (no "find their email" tasks)
  - [ ] State changes logged with `[Field → value]` notation
  - [ ] Bidirectional links are consistent
+ - [ ] Requisition numbers detected and Role files created/enriched
+ - [ ] Hiring manager inferred from calendar event organizers where applicable
+ - [ ] Recruiter inferred from email CC fields where applicable
+ - [ ] Domain lead resolved from hiring manager reporting chain where applicable
  - [ ] Graph state updated for processed files
@@ -0,0 +1,366 @@
+ ---
+ name: hiring-decision
+ description: >
+   Synthesize all evidence (CV screening, interview assessments, transcripts)
+   into a final hiring recommendation. Produces a comprehensive recommendation
+   document with full evidence trail, level/track/discipline confirmation, and
+   a clear hire/no-hire decision. Use when all interview stages are complete.
+ ---
+
+ # Hiring Decision
+
+ Synthesize all available evidence for a candidate into a final hiring
+ recommendation. This is the culmination of the hiring pipeline — every prior
+ assessment feeds into this document.
+
+ This is Stage 3 of a three-stage hiring pipeline:
+
+ 1. **Screen CV** — CV arrives → interview or pass
+ 2. **Assess Interview** — transcript arrives → updated evidence profile
+ 3. **Hiring Decision** (this skill) — all stages complete → hire or not
+
+ ## Trigger
+
+ Run this skill:
+
+ - When the user asks for a final hiring recommendation
+ - When all planned interview stages for a candidate are complete
+ - When the user asks "should we hire {Name}?"
+ - When the user needs to compare finalists for a position
+
+ ## Prerequisites
+
+ - `fit-pathway` CLI installed (`npx fit-pathway` must work)
+ - Screening assessment: `knowledge/Candidates/{Name}/screening.md`
+ - At least one interview assessment: `knowledge/Candidates/{Name}/interview-*.md`
+ - Candidate brief: `knowledge/Candidates/{Name}/brief.md`
+ - Transcripts provide additional context but are not strictly required if
+   interview assessments exist
+
+ ## Inputs
+
+ - All candidate artifacts in `knowledge/Candidates/{Name}/`
+ - Framework data via `fit-pathway`
+ - `knowledge/Candidates/Insights.md` for cross-candidate context
+ - `knowledge/Roles/*.md` — the Role file for this candidate's requisition
+   (provides remaining positions, hiring manager, domain lead priorities)
+ - Other active candidates at the same level (for relative positioning)
+
+ ## Outputs
+
+ - `knowledge/Candidates/{Name}/recommendation.md` — final hiring recommendation
+ - Updated `knowledge/Candidates/{Name}/brief.md` — status and links
+ - Updated `knowledge/Candidates/Insights.md` — cross-candidate observations
+
+ ---
+
+ ## Step 1: Gather All Evidence
+
+ Collect every artifact for this candidate:
+
+ ```bash
+ ls knowledge/Candidates/{Name}/
+ ```
+
+ Read in order:
+
+ 1. `brief.md` — pipeline history, status, context
+ 2. `screening.md` — CV screening (Stage 1 output)
+ 3. `interview-*.md` — all interview assessments (Stage 2 outputs)
+ 4. `transcript-*.md` — raw transcripts for additional detail
+ 5. `panel.md` — panel brief if it exists (for continuity)
+
+ Build a chronological evidence timeline:
+
+ | Date | Stage | Source | Key Finding |
+ |------|-------|--------|-------------|
+ | {date} | CV Screening | screening.md | {key finding} |
+ | {date} | {Interview type} | interview-{date}.md | {key finding} |
+
+ ## Step 2: Build Final Skill Profile
+
+ For each skill in the target job's matrix, determine the **final rating** by
+ selecting the highest-fidelity evidence available:
+
+ **Evidence hierarchy** (highest to lowest fidelity):
+
+ 1. **Live demonstration** — candidate showed the skill in an interview exercise
+ 2. **Detailed interview discussion** — candidate gave specific answers to probing questions
+ 3. **Interviewer observation** — interviewer noted strength or concern
+ 4. **CV evidence with quantification** — concrete metrics and named projects
+ 5. **CV evidence without quantification** — described but vague
+ 6. **Not evidenced** — never surfaced across any stage
+
+ ```bash
+ # Load the framework reference for final comparison
+ npx fit-pathway job {discipline} {level} --track={track} --skills
+
+ # Check progression if level is borderline
+ npx fit-pathway progress {discipline} {level} --track={track}
+ ```
+
+ For each skill, record:
+
+ - **Final rating** — the proficiency level supported by the best evidence
+ - **Best evidence source** — which stage provided the strongest signal
+ - **Trajectory** — did the rating improve, decline, or hold across stages?
+ - **Confidence** — high (demonstrated live), medium (discussed well), low
+   (CV only or thin evidence)
+
+ ## Step 3: Build Final Behaviour Profile
111
+
112
+ For each framework behaviour, determine the **final maturity** using the same
113
+ evidence hierarchy. Behaviours assessed in interviews carry far more weight
114
+ than CV signals.
115
+
116
+ ```bash
117
+ npx fit-pathway behaviour --list
118
+ ```
119
+
120
+ For each behaviour:
121
+
122
+ - **Final maturity** — the maturity level supported by interview evidence
123
+ - **Best evidence** — specific moment or pattern across interviews
124
+ - **Consistency** — was the behaviour shown once or throughout?
125
+
126
+ ## Step 4: Confirm Level and Track
127
+
128
+ Using the complete evidence profile, make a final level and track recommendation:
129
+
130
+ ```bash
131
+ # Compare adjacent levels
132
+ npx fit-pathway job {discipline} {lower_level} --track={track}
133
+ npx fit-pathway job {discipline} {target_level} --track={track}
134
+ npx fit-pathway progress {discipline} {lower_level} --track={track}
135
+ ```
136
+
137
+ | Question | Answer informs |
138
+ | ----------------------------------------------------------- | ----------------------- |
139
+ | Does the candidate meet ≥ 70% of skills at the target level? | Level confirmation |
140
+ | Were level concerns from screening resolved in interviews? | Level upgrade/downgrade |
141
+ | Did interviewers explicitly suggest a different level? | Strong level signal |
142
+ | Does the candidate's scope and autonomy match the level? | Level fit |
143
+ | Which track energized the candidate in interviews? | Track confirmation |
144
+
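The ≥ 70% check from the first row of the table is plain integer arithmetic; the tallies below are hypothetical:

```shell
# Hypothetical counts from the final skill profile.
strong=8 adequate=2 gap=1
total=$((strong + adequate + gap))
strong_pct=$((100 * strong / total))   # integer percentage of "Strong match"

if [ "$strong_pct" -ge 70 ]; then
  echo "target level supported ($strong_pct% strong match)"
else
  echo "compare against the adjacent lower level"
fi
```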
+ ## Step 4b: Read Role Context
+
+ If the candidate has a `Req` field in their brief, read the corresponding Role
+ file:
+
+ ```bash
+ ls knowledge/Roles/ | grep "{req_number}"
+ cat "knowledge/Roles/{matching file}"
+ ```
+
+ Extract and include in the recommendation:
+
+ - **Remaining positions** on this req (from the Role file's `Positions` count
+   minus filled candidates in the Candidates table)
+ - **Hiring manager** and their expectations (from the Role file and their People
+   note)
+ - **Domain lead** and their hiring priorities (from recent meetings/emails)
+ - **Other candidates** on the same req — how does this candidate compare to the
+   pipeline for this specific role?
+ - **Channel** — is this a vendor candidate or HR candidate? This affects
+   onboarding timeline and engagement model.
+
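Remaining positions can be derived mechanically from the Role file. This excerpt invents the exact field and status spellings (`**Positions:**`, a `hired` status cell) for illustration; the real Role template may differ:

```shell
role=$(mktemp)                 # stand-in for a knowledge/Roles/ file
cat > "$role" <<'EOF'
**Positions:** 3

| Candidate | Status |
| --- | --- |
| A. One   | hired |
| B. Two   | interviewing |
| C. Three | hired |
EOF

positions=$(sed -n 's/^\*\*Positions:\*\* //p' "$role")
filled=$(grep -c '| hired |' "$role")   # rows whose status cell is "hired"
echo "remaining: $((positions - filled)) of $positions"
```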
+ ## Step 5: Assess Against Active Pipeline
+
+ Check how this candidate compares to others at the same level:
+
+ ```bash
+ # Read cross-candidate insights
+ cat knowledge/Candidates/Insights.md
+
+ # Check other candidates at the same level
+ for dir in knowledge/Candidates/*/; do
+   if [ -f "$dir/screening.md" ]; then
+     head -n 5 "$dir/screening.md"
+   fi
+ done
+ ```
+
+ This is not a ranking exercise — it provides context. Note:
+
+ - Whether this candidate fills a gap in the current pipeline
+ - Whether stronger candidates exist for the same role
+ - Whether this candidate is better suited to a different open position
+
+ ## Step 6: Write Hiring Recommendation
+
+ Create `knowledge/Candidates/{Name}/recommendation.md`:
+
+ ```markdown
+ # Hiring Recommendation — {Full Name}
+
+ **Role:** {Discipline} {Level} — {Track}
+ **Req:** {Req number and title, or "—"}
+ **Hiring manager:** {Name from Role file, or "—"}
+ **Domain lead:** {Name from Role file, or "—"}
+ **Channel:** {hr / vendor}
+ **Date:** {YYYY-MM-DD}
+ **Prepared by:** Recruiter agent (with framework analysis)
+
+ **⚠️ Advisory only — human decision required.**
+
+ ---
+
+ ## Recommendation
+
+ **{Hire / Hire at {adjusted level} / Do not hire}**
+
+ {3-5 sentence executive summary. Lead with the decision and the single
+ strongest reason. Then address the primary risk and whether interviews
+ resolved it. End with a clear statement of confidence.}
+
+ ---
+
+ ## Evidence Summary
+
+ ### Process Timeline
+
+ | Date | Stage | Interviewer(s) | Outcome |
+ |------|-------|----------------|---------|
+ | {date} | CV Screening | — | {outcome} |
+ | {date} | {Interview type} | {names} | {outcome} |
+
+ ### Final Skill Profile
+
+ Framework reference: `{discipline} {level} --track={track}`
+
+ | Skill | Expected | Final Rating | Confidence | Best Evidence | Trajectory |
+ | --- | --- | --- | --- | --- | --- |
+ | {skill} | {level} | {level} | {High/Med/Low} | {Stage that provided best evidence} | {↑ / ↓ / ―} |
+
+ **Skill match:** {N}% Strong match, {N}% Adequate, {N}% Gap
+
+ ### Final Behaviour Profile
+
+ | Behaviour | Expected | Final Maturity | Best Evidence |
+ | --- | --- | --- | --- |
+ | {behaviour} | {maturity} | {maturity} | {specific observation} |
+
+ ### Level Confirmation
+
+ **Target level:** {original target}
+ **Recommended level:** {confirmed or adjusted}
+
+ {Paragraph explaining the level decision. Reference the framework progression
+ criteria and specific evidence from interviews. If the level changed during
+ the process, explain the journey (e.g. "Initially assessed at J100, screening
+ identified scope concerns, decomposition confirmed J090 as the right fit").}
+
+ ### Track Confirmation
+
+ **Recommended track:** {forward_deployed / platform / either}
+
+ {Paragraph explaining track fit. Reference specific interview moments that
+ revealed the candidate's natural orientation.}
+
+ ---
+
+ ## Decision Framework
+
+ Apply these **decision rules** strictly:
+
+ | Recommendation | Criteria |
+ | ----------------------------- | ------------------------------------------------------------------------------------- |
+ | **Hire** | ≥ 70% Strong match at final level, no unresolved core skill gaps, strong behaviours |
+ | **Hire at adjusted level** | Strong candidate but evidence supports a different level than originally targeted |
+ | **Do not hire** | Unresolved core skill gaps, behaviour concerns, or insufficient evidence after interviews |
+
+ ### Hire Criteria Check
+
+ - [ ] ≥ 70% of skills rated Strong match at the recommended level
+ - [ ] No core skill gaps remain unresolved (core = top-tier skills for the
+   discipline/track combination)
+ - [ ] All framework behaviours at or above expected maturity
+ - [ ] Level confirmed by interview evidence (not just CV)
+ - [ ] Track fit confirmed by interview evidence
+ - [ ] No red flags from any interview stage
+
+ ### Risk Assessment
+
+ **Primary risk:** {The single biggest concern and whether it was resolved}
+ **Mitigation:** {How the risk can be managed if hiring proceeds}
+
+ **Secondary risks:**
+ - {Risk 2 — severity and mitigation}
+
+ ---
+
+ ## Strengths to Leverage
+
+ {What this person will bring to the team from day one. Frame in terms of
+ business impact, not just technical capability.}
+
+ 1. **{Strength}:** {evidence and expected impact}
+ 2. **{Strength}:** {evidence and expected impact}
+
+ ## Development Areas
+
+ {What this person will need to grow into. Frame as investment, not weakness.}
+
+ 1. **{Area}:** {current level → target level, suggested development path}
+ 2. **{Area}:** {current level → target level, suggested development path}
+
+ ---
+
+ ## Role Context
+
+ {Context from the Role file to inform the hiring decision.}
+
+ - **Requisition:** {Req number — title}
+ - **Remaining positions:** {N of M filled}
+ - **Hiring manager:** {name} | **Domain lead:** {name}
+ - **Channel:** {hr / vendor — and implications for engagement model}
+
+ ## Pipeline Context
+
+ {How this candidate compares to the active pipeline. Not a ranking — context
+ for the hiring decision.}
+
+ - **Pipeline position:** {Where this candidate sits relative to others on the
+   same requisition}
+ - **Unique value:** {What this candidate offers that others in the pipeline don't}
+ - **Alternative fit:** {If not hired for this role, could they fit another
+   open position? Reference other Role files.}
+
+ ---
+
+ *This recommendation synthesizes all available evidence from {N} assessment
+ stages conducted between {first date} and {last date}. The final decision
+ rests with the hiring manager. All assessments are advisory.*
+ ```
+
+ ## Step 7: Update Candidate Brief and Insights
+
+ Update `knowledge/Candidates/{Name}/brief.md`:
+
+ - Update **Status** to reflect the recommendation (`recommended` / `not-recommended`)
+ - Add link: `- [Hiring Recommendation](./recommendation.md)`
+ - Add final pipeline entry with the recommendation date and outcome
+
+ Update `knowledge/Candidates/Insights.md` with any cross-candidate observations:
+
+ - If this candidate is the strongest at their level, note it
+ - If this candidate revealed a pattern about a sourcing channel, note the channel
+ - If the level adjustment has implications for other candidates, note it
+
+ **Use precise edits — don't rewrite entire files.**
+
+ ## Quality Checklist
+
+ - [ ] Every skill rating traces to a specific evidence source (stage + detail)
+ - [ ] Evidence hierarchy is respected — interview evidence outranks CV evidence
+ - [ ] Level recommendation is grounded in framework progression criteria
+ - [ ] Track recommendation cites interview evidence, not just CV signals
+ - [ ] Decision rules are applied strictly — verify percentages and gap counts
+ - [ ] Risk assessment is honest — don't minimize real concerns to justify hiring
+ - [ ] Development areas are specific and actionable
+ - [ ] Pipeline context is factual — no ranking by protected characteristics
+ - [ ] Cross-candidate insights added to Insights.md where relevant
+ - [ ] Brief updated with recommendation link and status
+ - [ ] "Do not hire" is explained with the same rigour as "Hire" — the candidate
+   deserves to know why if they request access (GDPR Article 15)
+ - [ ] Recommendation could be shown to the candidate without embarrassment
@@ -25,6 +25,8 @@ meetings.
  - `knowledge/People/*.md` — attendee context
  - `knowledge/Organizations/*.md` — company context
  - `knowledge/Projects/*.md` — project context
+ - `knowledge/Candidates/*/brief.md` — candidate context (for interview meetings)
+ - `knowledge/Roles/*.md` — role/requisition context (for interview meetings)
 
  ## Outputs
 
@@ -130,6 +132,40 @@ Suggested Talking Points
  - Talking points: concrete, not generic
  - If no notes exist for a person, mention that and offer to create one
 
+ ### Interview Meeting Context
+
+ When preparing for interview meetings (title contains "Interview", "Screening",
+ "Decomposition", "Panel", or a candidate name from `knowledge/Candidates/`):
+
+ 1. **Read the candidate brief:** `knowledge/Candidates/{Name}/brief.md`
+ 2. **Read the Role file:** Look up the `Req` field and read the corresponding
+    `knowledge/Roles/*.md` file.
+ 3. **Include in the briefing:**
+    - Candidate's current status, skills, and screening recommendation
+    - Role context: hiring manager, domain lead, remaining positions
+    - Other candidates on the same requisition and their statuses (from the
+      Role file's Candidates table)
+    - Previous interview assessments if this is a second/later stage
+    - Panel brief if one exists (`panel.md`)
+ 4. **Format as a dedicated section:**
+
+    ```markdown
+    ## Candidate: {Name}
+    **Role:** {Title from Role file} | **Req:** {req number}
+    **Hiring manager:** {name} | **Domain lead:** {name}
+    **Status:** {current pipeline status}
+    **Screening:** {Interview / Interview with focus areas / Pass}
+
+    ### Pipeline for this req
+    - {N} candidates total, {N} interviewed, {N} remaining positions
+
+    ### Key strengths
+    - {from screening.md}
+
+    ### Focus areas for this interview
+    - {from screening.md or panel.md}
+    ```
+
  ## Constraints
 
  - Only prep for meetings with external attendees
@@ -241,7 +241,7 @@ Create the audit trail at `knowledge/Erasure/{Name}--{YYYY-MM-DD}.md`:
  ### Deleted Files
  - `knowledge/Candidates/{Name}/brief.md`
  - `knowledge/Candidates/{Name}/CV.pdf`
- - `knowledge/Candidates/{Name}/assessment.md`
+ - `knowledge/Candidates/{Name}/screening.md`
  - `knowledge/People/{Name}.md`
  - {list all deleted files}