@forwardimpact/basecamp 2.6.1 → 2.8.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,28 +1,33 @@
  ---
- name: analyze-cv
+ name: screen-cv
  description: >
- Analyze candidate CVs against the engineering career framework using
- fit-pathway as the reference point. Assess skill alignment, identify track
- fit (forward_deployed vs platform), estimate career level, and produce
- structured assessments. Use when the user asks to evaluate a CV, compare a
- candidate to a role, or assess engineering fit.
+ Screen candidate CVs against the engineering career framework to decide
+ whether to invest interview time. Produces a structured screening assessment
+ with interview/pass recommendation and suggested interview focus areas.
+ Use when the user asks to evaluate a CV or when a new CV is detected.
  ---

- # Analyze CV
+ # Screen CV

- Analyze a candidate's CV against the engineering career framework defined in
- `fit-pathway`. Produces a structured assessment: estimated career level, track
- fit, skill alignment, gaps, and a hiring recommendation. Every assessment is
- grounded in the framework — no subjective impressions.
+ Screen a candidate's CV against the engineering career framework defined in
+ `fit-pathway`. The sole question this skill answers: **is this candidate worth
+ interviewing?** Every assessment is grounded in the framework — no subjective
+ impressions.
+
+ This is Stage 1 of a three-stage hiring pipeline:
+
+ 1. **Screen CV** (this skill) — CV arrives → interview or pass
+ 2. **Assess Interview** — transcript arrives → updated evidence profile
+ 3. **Hiring Decision** — all stages complete → hire or not

  ## Trigger

  Run this skill:

- - When the user asks to analyze, evaluate, or assess a CV
  - When a new CV is added to `knowledge/Candidates/{Name}/`
- - When the user asks "is this person a fit for {role}?"
- - When comparing a candidate's background against a specific job level and track
+ - When a CV appears in `~/Downloads/` and is associated with a candidate
+ - When the user asks to screen, evaluate, or assess a CV
+ - When the user asks "is this person worth interviewing?"

  ## Prerequisites

@@ -37,13 +42,28 @@ Run this skill:
  - Target role (optional): `{discipline} {level} --track={track}`
  - Existing candidate brief (if available):
  `knowledge/Candidates/{Name}/brief.md`
+ - Role file (if candidate has a `Req`): `knowledge/Roles/*.md` — provides
+ `Level`, `Discipline`, `Hiring manager`, and `Domain lead` for
+ more accurate screening

  ## Outputs

- - `knowledge/Candidates/{Name}/assessment.md` — structured CV assessment
+ - `knowledge/Candidates/{Name}/screening.md` — structured screening assessment
  - Updated `knowledge/Candidates/{Name}/brief.md` — skills and summary enriched
  from CV analysis

+ ## Output Filename Convention
+
+ The screening assessment MUST be written to `screening.md` — no other filename.
+ Do not use `assessment.md`, `cv-screening.md`, `evaluation.md`, or any variant.
+
+ Before writing, check whether a file with a different name already exists that
+ contains a CV screening (look for `# CV Screening` in the header). If found,
+ delete the misnamed file after writing `screening.md` to avoid duplicates.
+
+ When linking from `brief.md`, always use the exact text:
+ `- [CV Screening](./screening.md)`
+
  ---

  ## Step 1: Read the CV
@@ -62,11 +82,31 @@ Read the candidate's CV file. Extract:
  | **Communication** | Publications, talks, open source, documentation |
  | **Gender** | Pronouns, gendered titles (never infer from names) |

+ ## Step 1b: Read Role File for Context
+
+ If the candidate's brief has a `Req` field, look up the corresponding Role file:
+
+ ```bash
+ ls knowledge/Roles/ | grep "{req_number}"
+ cat "knowledge/Roles/{matching file}"
+ ```
+
+ The Role file provides:
+
+ - **Level** and **Discipline** — use as the target role if no explicit target
+ was specified by the user. This is more accurate than estimating from the CV.
+ - **Hiring manager** and **Domain lead** — include in the screening output
+ header for context.
+
+ If the Role file specifies a Level and Discipline, use them as the target role
+ for framework comparison (unless the user explicitly specified a different
+ target).
+
  ## Step 2: Look Up the Framework Reference

  Use `fit-pathway` to load the reference data for assessment.

- ### If a target role is specified
+ ### If a target role is specified (or derived from the Role file)

  ```bash
  # Get the full job definition
@@ -185,42 +225,43 @@ Classify each skill as:
  clear project evidence, **or** meets the level but evidence is thin
  - **Gap** — candidate is two or more levels below expected proficiency
  - **Not evidenced** — CV doesn't mention this skill area. **Treat as a gap** for
- recommendation purposes — absence of evidence is not evidence of skill
+ screening purposes — absence of evidence is not evidence of skill

- **Threshold rule:** If more than **one third** of the target job's skills are
- Gap or Not evidenced, the candidate cannot receive "Proceed." If more than
- **half** are Gap or Not evidenced, the candidate cannot receive "Proceed with
- reservations."
-
- ## Step 6: Write Assessment
+ ## Step 6: Write Screening Assessment

- Create `knowledge/Candidates/{Name}/assessment.md`:
+ Create `knowledge/Candidates/{Name}/screening.md`:

  ```markdown
- # CV Assessment — {Full Name}
+ # CV Screening — {Full Name}

  **Assessed against:** {Discipline} {Level} — {Track}
+ **Req:** {Req number and title, or "—" if no req}
+ **Hiring manager:** {Name from Role file, or "—"}
+ **Domain lead:** {Name from Role file, or "—"}
  **Date:** {YYYY-MM-DD}
  **CV source:** [(unknown)](./(unknown))

  ## Summary

- {2-3 sentence summary: overall fit, key strengths, primary concerns}
+ {2-3 sentence summary: overall fit, key strengths, primary concerns.
+ Frame around the screening question: is this worth an interview?}

  ## Estimated Profile

- | Dimension | Assessment |
- | ---------------- | ----------------------------------------- |
- | **Level** | {estimated level and confidence} |
- | **Track fit** | {forward_deployed / platform / either} |
- | **Discipline** | {best discipline match} |
- | **Gender** | {Woman / Man / —} |
+ | Dimension | Assessment |
+ | -------------- | -------------------------------------- |
+ | **Level** | {estimated level and confidence} |
+ | **Track fit** | {forward_deployed / platform / either} |
+ | **Discipline** | {best discipline match} |
+ | **Gender** | {Woman / Man / —} |

  ## Skill Alignment

+ Framework reference: `{discipline} {level} --track={track}`
+
  | Skill | Expected | Estimated | Status |
  | --- | --- | --- | --- |
- | {skill} | {framework level} | {CV-based estimate} | {Strong/Adequate/Gap/Not evidenced} |
+ | {skill} | {framework level} | {CV-based estimate} | {Strong match / Adequate / Gap / Not evidenced} |

  ### Key Strengths
  - {Strength 1 — with CV evidence}
@@ -234,28 +275,33 @@ Create `knowledge/Candidates/{Name}/assessment.md`:

  | Behaviour | Expected Maturity | CV Evidence | Signal |
  | --- | --- | --- | --- |
- | {behaviour} | {maturity} | {evidence or "—"} | {Strong/Weak/None} |
+ | {behaviour} | {maturity} | {evidence or "—"} | {Strong / Weak / None} |

  ## Track Fit Analysis

  {Paragraph explaining why this candidate fits forward_deployed, platform,
  or could work on either. Reference specific CV evidence.}

- ## Hiring Recommendation
+ ## Screening Recommendation

  **⚠️ Advisory only — human decision required.**

- **Recommendation:** {Proceed / Proceed with reservations / Do not proceed}
+ **Recommendation:** {Interview / Interview with focus areas / Pass}

  Apply these **decision rules** strictly:

- | Recommendation | Criteria |
- | ---------------------------- | ----------------------------------------------------------------------- |
- | **Proceed** | ≥ 70% Strong match, no core skill gaps, strong behaviour signals |
- | **Proceed with reservations** | ≥ 50% Strong match, ≤ 2 gaps in non-core skills, no behaviour red flags |
- | **Do not proceed** | All other candidates — including those with thin evidence |
+ | Recommendation | Criteria |
+ | -------------------------------- | ------------------------------------------------------------------------- |
+ | **Interview** | ≥ 70% Strong match, no core skill gaps, strong behaviour signals |
+ | **Interview with focus areas** | ≥ 50% Strong match, ≤ 2 gaps in non-core skills, no behaviour red flags |
+ | **Pass** | All other candidates — including those with thin evidence |

- When in doubt, choose the stricter recommendation. "Proceed with reservations"
+ **Threshold rule:** If more than **one third** of the target job's skills are
+ Gap or Not evidenced, the candidate cannot receive "Interview." If more than
+ **half** are Gap or Not evidenced, the candidate cannot receive "Interview with
+ focus areas."
+
+ When in doubt, choose the stricter recommendation. "Interview with focus areas"
  should be rare — it signals a strong candidate with a specific, addressable
  concern, not a marginal candidate who might work out.

@@ -263,9 +309,24 @@ concern, not a marginal candidate who might work out.
  Reference specific skill gaps or strengths and their impact on the role.
  Explicitly state the skill match percentage and gap count.}

- **Interview focus areas:**
- - {Area 1 — what to probe in interviews to validate}
- - {Area 2 what to probe in interviews to validate}
+ ## Interview Focus Areas
+
+ {Only present if recommendation is Interview or Interview with focus areas.
+ These are the specific uncertainties that interviews must resolve.}
+
+ - **{Area 1}:** {What to probe and why — link to a specific gap or thin evidence}
+ - **{Area 2}:** {What to probe and why — link to a specific gap or thin evidence}
+
+ ### Suggested Interview Questions
+
+ {Generate role-specific questions using the framework:}
+
+ ```bash
+ npx fit-pathway interview {discipline} {level} --track={track}
+ ```
+
+ {Select 3-5 questions most relevant to the identified gaps and focus areas.
+ For each question, note which gap or uncertainty it targets.}
  ```

  ## Step 7: Enrich Candidate Brief
@@ -275,7 +336,7 @@ If `knowledge/Candidates/{Name}/brief.md` exists, update it with findings:
  - Add or update the **Skills** section with framework skill IDs
  - Update **Summary** if the CV provides better context
  - Set the **Gender** field if identifiable from the CV and not already set
- - Add a link to the assessment: `- [CV Assessment](./assessment.md)`
+ - Add a link to the screening: `- [CV Screening](./screening.md)`

  **Use precise edits — don't rewrite the entire file.**

@@ -292,10 +353,16 @@ to create the candidate profile from email threads.
  - [ ] "Not evidenced" skills are counted as gaps in the recommendation
  - [ ] Recommendation follows the decision rules table — verify match percentages
  and gap counts before choosing a tier
- - [ ] "Proceed with reservations" is only used for strong candidates with a
+ - [ ] "Interview with focus areas" is only used for strong candidates with a
  specific, named concern — never as a soft "maybe"
  - [ ] Track fit analysis references specific skill modifiers from the framework
- - [ ] Gaps are actionable they suggest interview focus areas
+ - [ ] Interview focus areas are specific and tied to identified gaps
+ - [ ] Suggested interview questions target the right uncertainties
+ - [ ] Output file is named exactly `screening.md` — not `assessment.md` or any
+ variant
+ - [ ] No duplicate screening file exists under a different name in the candidate
+ folder
  - [ ] Assessment file uses correct path format and links to CV
- - [ ] Candidate brief updated with skill tags and assessment link
+ - [ ] Candidate brief links to screening using exact text
+ `[CV Screening](./screening.md)`
  - [ ] Gender field set only from explicit pronouns/titles (never name-inferred)
@@ -28,6 +28,9 @@ Run this skill:

  - `~/.cache/fit/basecamp/apple_mail/*.md` — synced email threads
  - `~/.cache/fit/basecamp/apple_mail/attachments/` — CV/resume attachments
+ - `~/.cache/fit/basecamp/apple_calendar/*.json` — synced calendar events (for
+ cross-source inference)
+ - `knowledge/Roles/*.md` — open role/requisition files (for metadata inheritance)
  - `~/.cache/fit/basecamp/state/graph_processed` — tracks processed files (shared
  with `extract-entities`)
  - `USER.md` — user identity for self-exclusion
@@ -37,6 +40,7 @@ Run this skill:
  - `knowledge/Candidates/{Full Name}/brief.md` — candidate profile note
  - `knowledge/Candidates/{Full Name}/CV.pdf` — local copy of CV (or `CV.docx`)
  - `knowledge/Candidates/{Full Name}/headshot.jpeg` — candidate headshot photo
+ - `knowledge/Roles/*.md` — created/updated role files (candidate tables rebuilt)
  - `~/.cache/fit/basecamp/state/graph_processed` — updated with processed threads

  ---
@@ -77,6 +81,101 @@ Also scan `knowledge/People/`, `knowledge/Organizations/`, and
  `knowledge/Projects/` to resolve recruiter names, agency orgs, and project
  links.

+ ## Step 0b: Role Sync
+
+ Synchronize `knowledge/Roles/` with the current candidate pipeline. This ensures
+ role files stay current and enables metadata inheritance.
+
+ ### Build Role Index
+
+ ```bash
+ ls knowledge/Roles/
+ ```
+
+ Read each Role file's Info block to build a lookup of: Req → Role file path,
+ plus Hiring manager, Domain lead, recruiter, Channel for each role.
+
+ ### Ensure Role Files Exist
+
+ Scan all candidate briefs for `**Req:**` values:
+
+ ```bash
+ rg "^\*\*Req:\*\*" knowledge/Candidates/*/brief.md
+ ```
+
+ For each distinct req number found in candidate briefs that does **not** have a
+ matching Role file, create a stub Role file:
+
+ ```markdown
+ # {Title from candidate Req field}
+
+ ## Info
+ **Req:** {req number}
+ **Title:** {title from Req field}
+ **Level:** —
+ **Track:** —
+ **Discipline:** —
+ **Domain lead:** —
+ **Hiring manager:** —
+ **Locations:** —
+ **Positions:** —
+ **Channel:** hr
+ **Status:** open
+ **Opened:** —
+ **Last activity:** {today}
+
+ ## Connected to
+ - Staffing/recruitment project
+
+ ## Candidates
+ <!-- Rebuilt each cycle -->
+
+ ## Notes
+ - Stub created automatically — enrich with data from emails, calendar, and imports.
+ ```
+
+ Then attempt to enrich the stub by searching for the req number across the
+ knowledge graph:
+
+ ```bash
+ rg "{req_number}" knowledge/
+ ```
+
+ Look for mentions in project timeline entries, People notes, and email threads.
+ Extract hiring manager, domain lead, recruiter, locations, and level from the
+ surrounding context.
+
+ ### Rebuild Candidate Tables
+
+ For each Role file, rebuild the `## Candidates` table by scanning briefs:
+
+ ```bash
+ rg -l "Req:.*{req_number}" knowledge/Candidates/*/brief.md
+ ```
+
+ For each matching candidate, read their Status and First seen, then rebuild the
+ table:
+
+ ```markdown
+ ## Candidates
+ | Candidate | Status | Channel | First seen |
+ |---|---|---|---|
+ | [[Candidates/{Name}/brief\|{Name}]] | {status} | {channel} | {date} |
+ ```
+
+ Sort by First seen (newest first).
+
+ ### Resolve Domain Lead
+
+ If a Role file has a hiring manager but no domain lead, attempt to resolve it:
+
+ 1. Read the hiring manager's People note for a `**Reports to:**` field.
+ 2. Walk up the reporting chain until reaching a VP or senior leader listed in
+ a stakeholder map or organizational hierarchy note.
+ 3. Set `Domain lead` on the Role file.
+
+ ---
+
  ## Step 1: Identify Recruitment Emails

  For each new/changed email thread, determine if it contains candidate
@@ -145,6 +244,9 @@ For each candidate found in a recruitment email, extract:
  | **Summary** | Email body, CV | Yes — 2-3 sentences |
  | **Role** | Internal requisition profile being hired against | If available |
  | **Req** | Requisition ID from hiring system | If available |
+ | **Channel** | `hr` or `vendor` — see derivation rules below | Yes |
+ | **Hiring manager** | Cross-source inference — see below | If determinable |
+ | **Domain lead** | Resolved from hiring manager reporting chain | If determinable |
  | **Internal/External** | Whether candidate is internal or external | If available |
  | **Model** | Engagement model (B2B, Direct Hire, etc.) | If available |
  | **Current title** | CV or email body | If available |
@@ -153,6 +255,52 @@ For each candidate found in a recruitment email, extract:
  | **LinkedIn** | Email body, CV | If available |
  | **Also known as** | Alternate name spellings or transliterations | If available |

+ ### Determining Channel
+
+ Set `Channel` based on the candidate's source:
+
+ - **`vendor`** — if the `Source` field links to an `[[Organizations/...]]` that
+ is a recruitment vendor/partner (check the org note for keywords: supplier,
+ recruitment partner, contractor, staffing), or if the `Req` field contains
+ "via {vendor name}" rather than a system ID.
+ - **`hr`** — if the candidate came through a hiring system (has a numeric Req),
+ applied internally, or was submitted by an internal recruiter.
+
+ ### Cross-Source Inference for Hiring Manager and Domain Lead
+
+ These fields are rarely available in a single email. Use the following
+ resolution chain, stopping at the first match:
+
+ 1. **Req-first inheritance:** If the candidate has a `Req`, look up the matching
+ `knowledge/Roles/*.md` file. Inherit `Hiring manager` and `Domain lead`
+ from the Role file.
+
+ 2. **Calendar inference:** Search synced calendar events for interview events
+ mentioning the candidate's name:
+ ```bash
+ rg -l "{Candidate Name}" ~/.cache/fit/basecamp/apple_calendar/
+ ```
+ Read matching events. The **organizer** of an interview event (who is not the
+ user from `USER.md`) is likely the hiring manager. Record this on the
+ candidate brief and update the Role file if it was missing.
+
+ 3. **Email inference:** In the email thread where the candidate was submitted,
+ check the To/CC fields for internal recipients (besides the user).
+ Cross-reference against `knowledge/People/` notes — if a CC'd person's role
+ indicates they are the hiring manager, record them.
+
+ 4. **Reporting chain resolution:** Once a hiring manager is known, look up their
+ People note for a `**Reports to:**` field. Walk up the reporting chain until
+ reaching a VP or senior leader listed in a stakeholder map or organizational
+ hierarchy note — that person is the domain lead.
+
+ 5. **Staffing project timeline:** Search for the candidate name or their vendor
+ in the staffing/recruitment project notes. Surrounding context often mentions
+ the hiring manager.
+
+ If none of these resolve a value, use `—` and leave it for enrichment in future
+ cycles as more data arrives.
+
  ### Determining Gender

  Record the candidate's gender when **explicitly stated** in the email or CV:
@@ -322,6 +470,9 @@ Then create `knowledge/Candidates/{Full Name}/brief.md`:
  ## Connected to
  - [[Organizations/{Agency}]] — sourced by
  - [[People/{Recruiter}]] — recruiter
+ - [[Roles/{Role filename without .md}]] — applied to
+ - [[People/{Hiring manager}]] — hiring manager
+ - [[People/{Domain lead}]] — domain lead

  ## Pipeline
  - **{date}**: {event}
@@ -348,7 +499,10 @@ available:

  ```markdown
  **Role:** {internal requisition profile, e.g. "Staff Engineer"}
- **Req:** {requisition ID, e.g. "4950237 — Principal Software Engineer"}
+ **Req:** {requisition ID — backlink to Role file, e.g. "[[Roles/4950237 — PSE Forward Deployed|4950237]]"}
+ **Channel:** {hr / vendor}
+ **Hiring manager:** {[[People/{name}]] or "—"}
+ **Domain lead:** {[[People/{name}]] or "—"}
  **Internal/External:** {Internal / External / External (Prior Worker)}
  **Model:** {engagement model, e.g. "B2B (via Agency) — conversion to FTE not possible"}
  **Current title:** {current job title and employer}
@@ -358,6 +512,12 @@ available:
  **Also known as:** {alternate name spellings}
  ```

+ When a `Req` is known, the value should backlink to the corresponding Role file
+ in `knowledge/Roles/`. Use the format:
+ `[[Roles/{filename without .md}|{req number}]] — {title}` for system reqs, or
+ `[[Roles/{filename without .md}|Vendor]] — {description}` for vendor pipeline
+ candidates.
+
  ### Additional Sections

  Some candidates accumulate richer profiles over time. These optional sections go
@@ -413,11 +573,12 @@ Format: one bullet per insight under `## Placement Notes`, with

  After writing candidate notes, verify links go both ways:

- | If you add... | Then also add... |
- | ------------------------ | ------------------------------------------- |
- | Candidate → Organization | Organization → Candidate |
- | Candidate → Recruiter | Recruiter → Candidate (in Activity section) |
- | Candidate → Project | Project → Candidate (in People section) |
+ | If you add... | Then also add... |
+ | ------------------------ | --------------------------------------------------------- |
+ | Candidate → Organization | Organization → Candidate |
+ | Candidate → Recruiter | Recruiter → Candidate (in Activity section) |
+ | Candidate → Project | Project → Candidate (in People section) |
+ | Candidate → Role | Role → Candidate (in Candidates table — rebuilt by sync) |

  Use absolute paths: `[[Candidates/Name/brief|Name]]`,
  `[[Organizations/Agency]]`, `[[People/Recruiter]]`.
@@ -449,8 +610,8 @@ Use framework skill IDs (e.g. `data_integration`, `full_stack_development`,
  `architecture_and_design`) in the **Skills** section of the candidate brief
  instead of free-form tags. This enables consistent cross-candidate comparison.

- If a candidate has a CV attachment, flag them for the `analyze-cv` skill which
- produces a full framework-aligned assessment.
+ If a candidate has a CV attachment, flag them for the `screen-cv` skill, which
+ produces a framework-aligned screening assessment.

  ## Quality Checklist

@@ -473,5 +634,11 @@ produces a full framework-aligned assessment.
  - [ ] Skills tagged using framework skill IDs where possible
  - [ ] Gender field populated only from explicit pronouns/titles (never
  name-inferred)
+ - [ ] Channel field set on every candidate (`hr` or `vendor`)
+ - [ ] Hiring manager and Domain lead populated via cross-source inference where
+ determinable
+ - [ ] Req field backlinks to corresponding Role file in `knowledge/Roles/`
+ - [ ] Connected to section includes backlink to Role file
+ - [ ] Role files have up-to-date Candidates tables (rebuilt by Step 0b)
  - [ ] Headshots searched in email attachments and `~/Downloads/` (recursive)
  - [ ] Found headshots copied as `headshot.jpeg` into candidate directory
@@ -43,6 +43,7 @@ Run this skill:

  - `knowledge/Candidates/{Full Name}/brief.md` — candidate profile note
  - `knowledge/Candidates/{Full Name}/CV.md` — resume text rendered as markdown
+ - `knowledge/Roles/{Req ID} — {Title}.md` — created or updated role file
  - Updated existing candidate briefs if candidate already exists

  ---
@@ -161,6 +162,76 @@ The full output is a JSON object with:
  - `requisition` — metadata (id, title, location, hiringManager, recruiter)
  - `candidates` — array of candidate objects with all extracted fields

+ ## Step 1b: Create or Update Role File
+
+ After parsing the export, create or update the corresponding Role file in
+ `knowledge/Roles/`. The filename convention is `{Req ID} — {Short Title}.md`.
+
+ ```bash
+ ls knowledge/Roles/ | grep "{Req ID}"
+ ```
+
+ ### If the Role file does NOT exist
+
+ Create it using the requisition metadata from the export:
+
+ ```markdown
+ # {Requisition Title}
+
+ ## Info
+ **Req:** {Req ID}
+ **Title:** {Full title from export}
+ **Level:** {Infer from title: "Principal" → J100, "Staff" → J090, "Director" → J100 M-track, "Senior" → J070}
+ **Track:** {P-track for IC roles, M-track for Director/Manager roles}
+ **Discipline:** {Infer: "Software Engineer" → software_engineering, "Data Engineer" → data_engineering, "Data Scientist" → data_science}
+ **Domain lead:** —
+ **Hiring manager:** {From export metadata if available, or "—"}
+ **Locations:** {Primary Location from export}
+ **Positions:** —
+ **Channel:** hr
+ **Status:** open
+ **Opened:** {Recruiting Start Date from export}
+ **Last activity:** {today}
+
+ ## Connected to
+ - Staffing/recruitment project
+
+ ## Candidates
+ <!-- Rebuilt by track-candidates role sync -->
+
+ ## Notes
+ - Created from requisition export on {today}.
+ ```
+
+ ### Resolving Domain Lead
+
+ The export rarely contains organizational hierarchy information directly. Use
+ cross-referencing to resolve it:
+
+ 1. **Search the knowledge graph** for mentions of the req number:
+ ```bash
+ rg "{Req ID}" knowledge/
+ ```
+ Look in project timelines, People notes, and Topics for context about which
+ area/VP owns this req.
+
+ 2. **Check the Hiring Manager** (if available from export): look up their People
+ note for `**Reports to:**` and walk up the chain to a VP or senior leader in
+ a stakeholder map or organizational hierarchy note.
+
+ 3. **Fallback**: If neither resolves, set `Domain lead: —` for enrichment by
+ later cycles of `track-candidates` or `extract-entities`.
+
+ ### If the Role file ALREADY exists
+
+ Update it with any new metadata from the export:
+
+ - Set `Hiring manager` if the export provides it and the Role file has `—`
+ - Update `Last activity` to today
+ - Add a Notes entry: `- Requisition export processed on {today}: {N} candidates`
+
+ ---
+
  ## Step 2: Build Candidate Index

  Scan existing candidate notes to avoid duplicates:
@@ -259,7 +330,10 @@ Then create `knowledge/Candidates/{Clean Name}/brief.md` using the
  **Status:** {pipeline status from Step 3}
  **First seen:** {Date Applied, YYYY-MM-DD}
  **Last activity:** {Date Applied, YYYY-MM-DD}
- **Req:** {Req ID} — {Req Title}
+ **Req:** [[Roles/{Role filename without .md}|{Req ID}]] — {Req Title}
+ **Channel:** hr
+ **Hiring manager:** {From Role file or "—"}
+ **Domain lead:** {From Role file or "—"}
  **Internal/External:** {Internal / External / External (Prior Worker)}
  **Current title:** {Current Job Title at Current Company}
  **Email:** {Email or "—"}
@@ -273,6 +347,9 @@ strengths. If no resume text, use Current Job Title + Total Years Experience.}
  - [CV.md](./CV.md)

  ## Connected to
+ - [[Roles/{Role filename without .md}]] — applied to
+ - {[[People/{Hiring manager}]] — hiring manager, if known}
+ - {[[People/{Domain lead}]] — domain lead, if known}
  - {Referred by person, if present}

  ## Pipeline
@@ -351,8 +428,8 @@ npx fit-pathway skill --list
  ```

  Use framework skill IDs in the **Skills** section of each brief. If a candidate
- has a CV.md, flag them for the `analyze-cv` skill for a full framework-aligned
- assessment.
+ has a CV.md, flag them for the `screen-cv` skill for a framework-aligned
+ screening assessment.

  ## Quality Checklist

@@ -369,6 +446,11 @@ assessment.
  - [ ] Name annotations stripped from directory names and headings
  - [ ] Existing candidates updated (not duplicated) with precise edits
  - [ ] Skills tagged using framework skill IDs where possible
- - [ ] Gender field set to `—` (Workday exports don't include gender signals)
+ - [ ] Gender field set to `—` (exports don't include gender signals)
+ - [ ] Role file created or updated in `knowledge/Roles/`
+ - [ ] Channel set to `hr` on all imported candidates
+ - [ ] Hiring manager and Domain lead inherited from Role file where available
+ - [ ] Req field backlinks to Role file
+ - [ ] Connected to section includes backlink to Role file
  - [ ] Insights.md updated with strategic observations
  - [ ] No duplicate candidate directories created