@forwardimpact/basecamp 2.3.0 → 2.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +20 -0
- package/package.json +4 -1
- package/template/.claude/skills/sync-apple-mail/SKILL.md +13 -5
- package/template/.claude/skills/track-candidates/SKILL.md +68 -13
- package/template/.claude/skills/upstream-skill/SKILL.md +53 -27
- package/template/.claude/skills/workday-requisition/SKILL.md +341 -0
- package/template/.claude/skills/workday-requisition/scripts/parse-workday.mjs +243 -0
- package/template/CLAUDE.md +18 -17
package/README.md
CHANGED

@@ -165,6 +165,25 @@ vi ~/.fit/basecamp/scheduler.json
 { "type": "once", "runAt": "2025-02-12T10:00:00Z" }
 ```
 
+## Updating
+
+When you upgrade Basecamp (install a new `.pkg`), the installer automatically
+runs `--update` on all configured knowledge bases. This pushes the latest
+`CLAUDE.md`, skills, and agents into each KB without touching your data.
+
+You can also run it manually at any time:
+
+```bash
+# Update all configured knowledge bases
+/Applications/Basecamp.app/Contents/MacOS/fit-basecamp --update
+
+# Update a specific knowledge base
+/Applications/Basecamp.app/Contents/MacOS/fit-basecamp --update ~/Documents/Personal
+```
+
+The update merges `.claude/settings.json` non-destructively — new entries are
+added but your existing permissions are preserved.
+
 ## CLI Reference
 
 ```
@@ -172,6 +191,7 @@ fit-basecamp Run due tasks once and exit
 fit-basecamp --daemon         Run continuously (poll every 60s)
 fit-basecamp --run <task>     Run a specific task immediately
 fit-basecamp --init <path>    Initialize a new knowledge base
+fit-basecamp --update [path]  Update KB skills, agents, and CLAUDE.md
 fit-basecamp --status         Show knowledge bases and task status
 fit-basecamp --validate       Validate agents and skills exist
 fit-basecamp --help           Show this help
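The non-destructive settings merge described above can be sketched as follows. This is a minimal editorial illustration, not the installer's actual code: entries shipped in the template are added, but any key the user already has keeps its existing value.

```javascript
// Sketch (hypothetical): template keys are added, existing user keys win,
// so permissions the user has already granted are never overwritten.
function mergeSettings(existing, template) {
  return { ...template, ...existing };
}

const existing = { permissions: { allow: ["Bash(git:*)"] } };
const template = { permissions: { allow: ["Bash(ls:*)"] }, newFlag: true };
const merged = mergeSettings(existing, template);
console.log(merged.permissions.allow); // user's permissions preserved
console.log(merged.newFlag);           // new template entry added
```

A real merge would likely recurse into nested objects; the shallow spread above only shows the "existing wins" rule.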
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@forwardimpact/basecamp",
-  "version": "2.3.0",
+  "version": "2.4.0",
   "description": "Claude Code-native personal knowledge system with autonomous agents",
   "license": "Apache-2.0",
   "repository": {
@@ -28,5 +28,8 @@
   ],
   "engines": {
     "node": ">=18.0.0"
+  },
+  "dependencies": {
+    "xlsx": "^0.18.5"
   }
 }
package/template/.claude/skills/sync-apple-mail/SKILL.md
CHANGED

@@ -26,6 +26,8 @@ their email.
 
 - `~/.cache/fit/basecamp/state/apple_mail_last_sync` — last sync timestamp
   (single-line text file)
+- `~/.cache/fit/basecamp/state/apple_mail_last_rowid` — highest message ROWID
+  seen at last sync (single-line text file)
 - `~/Library/Mail/V*/MailData/Envelope Index` — Apple Mail SQLite database
 
 ## Outputs
@@ -36,6 +38,8 @@ their email.
   attachment files for each thread (PDFs, images, documents, etc.)
 - `~/.cache/fit/basecamp/state/apple_mail_last_sync` — updated with new sync
   timestamp
+- `~/.cache/fit/basecamp/state/apple_mail_last_rowid` — updated with highest
+  ROWID seen
 
 ---
 
@@ -53,13 +57,17 @@ The script:
 1. Finds the Mail database (`~/Library/Mail/V*/MailData/Envelope Index`)
 2. Loads last sync timestamp (or defaults to `--days` days ago for first sync)
 3. Discovers the thread grouping column (`conversation_id` or `thread_id`)
-4.
-5.
+4. Loads last-seen ROWID (or defaults to 0 for first sync)
+5. Finds threads with new messages since last sync (up to 500), using both
+   timestamp and ROWID to catch late-arriving emails (emails downloaded after
+   a delay may have `date_received` before the last sync timestamp, but their
+   ROWID will be higher than the last-seen ROWID)
+6. For each thread: fetches messages, batch-fetches recipients and attachment
   metadata, parses `.emlx` files for full email bodies (falling back to
   database summaries), copies attachment files to the output directory
-
-
-
+7. Writes one markdown file per thread to `~/.cache/fit/basecamp/apple_mail/`
+8. Updates sync state (timestamp and max ROWID)
+9. Reports summary (threads processed, files written)
 
 The script imports `scripts/parse-emlx.mjs` to extract plain text bodies from
 `.emlx` / `.partial.emlx` files (handles HTML-only emails by stripping tags).
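The dual timestamp/ROWID cursor added in step 5 above can be sketched as follows. This is an editorial illustration of the filtering rule, not the script's actual query; field names mirror the Envelope Index columns mentioned in the diff.

```javascript
// Sketch: a message is "new" if EITHER its receive date is after the last
// sync OR its ROWID is above the last-seen ROWID. The second condition
// catches late-arriving mail whose date_received predates the last sync.
function isNewMessage(msg, lastSyncEpoch, lastRowid) {
  return msg.date_received > lastSyncEpoch || msg.ROWID > lastRowid;
}

// A message downloaded late: dated before the last sync, but with a
// fresh ROWID. Timestamp alone would miss it; the ROWID cursor does not.
const late = { date_received: 1000, ROWID: 501 };
console.log(isNewMessage(late, 2000, 500)); // true
```

In SQL form this is simply `WHERE date_received > ? OR ROWID > ?` over the messages table.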
package/template/.claude/skills/track-candidates/SKILL.md
CHANGED

@@ -130,18 +130,27 @@ For each candidate found in a recruitment email, extract:
 
 | Field | Source | Required |
 | ----------------- | ------------------------------------------------- | ------------------- |
-| **Name**
-| **
-| **Rate**
-| **Availability**
-| **English**
-| **Location**
-| **Source agency**
-| **Recruiter**
-| **CV path**
-| **Skills**
-| **Gender**
-| **Summary**
+| **Name** | Filename, email body, CV | Yes |
+| **Title** | Email body, CV — the candidate's professional title/function | Yes |
+| **Rate** | Email body (e.g. "$120/hr", "€80/h") | If available |
+| **Availability** | Email body (e.g. "1 month notice", "immediately") | If available |
+| **English** | Email body (e.g. "B2", "Upper-intermediate") | If available |
+| **Location** | Email body, CV | If available |
+| **Source agency** | Sender domain → Organization | Yes |
+| **Recruiter** | Email sender or CC'd recruiter | Yes |
+| **CV path** | Attachment directory | If available |
+| **Skills** | Email body, CV | If available |
+| **Gender** | Name, pronouns, recruiter context | If identifiable |
+| **Summary** | Email body, CV | Yes — 2-3 sentences |
+| **Role** | Internal requisition profile being hired against | If available |
+| **Req** | Requisition ID from hiring system | If available |
+| **Internal/External** | Whether candidate is internal or external | If available |
+| **Model** | Engagement model (B2B, Direct Hire, etc.) | If available |
+| **Current title** | CV or email body | If available |
+| **Email** | Email body, CV, signature | If available |
+| **Phone** | Email body, CV, signature | If available |
+| **LinkedIn** | Email body, CV | If available |
+| **Also known as** | Alternate name spellings or transliterations | If available |
 
 ### Determining Gender
 
@@ -193,9 +202,11 @@ Assign a status based on the email context:
 | `screening` | Under review, questions asked about the candidate |
 | `first-interview` | First interview scheduled or completed |
 | `second-interview` | Second interview scheduled or completed |
+| `work-trial` | Paid work trial or assessment project in progress |
 | `offer` | Offer extended |
 | `hired` | Accepted and onboarding |
 | `rejected` | Explicitly passed on ("not a fit", "pass", "decline") |
+| `withdrawn` | Candidate withdrew from the process |
 | `on-hold` | Paused, waiting on notice period, or deferred |
 
 **Default to `new`** if no response signals are found. Read the full thread
@@ -207,7 +218,10 @@ Look for these patterns in the hiring manager's replies:
 
 - "let's schedule" / "set up an interview" → `first-interview`
 - "second round" / "follow-up interview" → `second-interview`
+- "work trial" / "assessment project" / "paid trial" → `work-trial`
 - "not what we're looking for" / "pass" → `rejected`
+- "candidate withdrew" / "no longer interested" / "accepted another offer" →
+  `withdrawn`
 - "extend an offer" / "make an offer" → `offer`
 - "they've accepted" / "start date" → `hired`
 - "put on hold" / "come back to later" → `on-hold`
@@ -244,7 +258,7 @@ Then create `knowledge/Candidates/{Full Name}/brief.md`:
 # {Full Name}
 
 ## Info
-**
+**Title:** {professional title/function}
 **Rate:** {rate or "—"}
 **Availability:** {availability or "—"}
 **English:** {level or "—"}
@@ -254,6 +268,7 @@ Then create `knowledge/Candidates/{Full Name}/brief.md`:
 **Status:** {pipeline status}
 **First seen:** {date profile was shared, YYYY-MM-DD}
 **Last activity:** {date of most recent thread activity, YYYY-MM-DD}
+{extra fields here — see below}
 
 ## Summary
 {2-3 sentences: role, experience level, key strengths}
@@ -271,7 +286,11 @@ Then create `knowledge/Candidates/{Full Name}/brief.md`:
 ## Skills
 {comma-separated skill tags}
 
+## Interview Notes
+{interview feedback, structured by date — omit section if no interviews yet}
+
 ## Notes
+{free-form observations — always present, even if empty}
 ```
 
 If a CV attachment exists, **copy it into the candidate directory** before
@@ -279,6 +298,37 @@ writing the note.
 
 If no CV attachment exists, omit the `## CV` section entirely.
 
+### Extra Info Fields
+
+Place any of these **after Last activity** in the order shown, only when
+available:
+
+```markdown
+**Role:** {internal requisition profile, e.g. "Staff Engineer"}
+**Req:** {requisition ID, e.g. "4950237 — Principal Software Engineer"}
+**Internal/External:** {Internal / External / External (Prior Worker)}
+**Model:** {engagement model, e.g. "B2B (via Agency) — conversion to FTE not possible"}
+**Current title:** {current job title and employer}
+**Email:** {personal or work email}
+**Phone:** {phone number}
+**LinkedIn:** {LinkedIn profile URL}
+**Also known as:** {alternate name spellings}
+```
+
+### Additional Sections
+
+Some candidates accumulate richer profiles over time. These optional sections go
+**after Skills and before Notes**, in this order:
+
+1. `## Education` — degrees, institutions, years
+2. `## Certifications` — professional certifications
+3. `## Work History` — chronological career history (when extracted from CV)
+4. `## Key Facts` — notable bullet points from CV review
+5. `## Interview Notes` — structured by date as `### YYYY-MM-DD — {description}`
+
+`## Notes` is always the **last section**. If an `## Open Items` section exists
+(pending questions or follow-ups), place it after Notes.
+
 ### For EXISTING candidates
 
 Read `knowledge/Candidates/{Full Name}/brief.md`, then apply targeted edits:
@@ -364,6 +414,11 @@ produces a full framework-aligned assessment.
 - [ ] Scanned all new/changed email threads for recruitment signals
 - [ ] Extracted all candidates found (check attachment directories too)
 - [ ] Each candidate has a complete note with all available fields
+- [ ] Info fields are in standard order (Title, Rate, Availability, English,
+      Location, Gender, Source, Status, First seen, Last activity, then extras)
+- [ ] Sections are in standard order (Info → Summary → CV → Connected to →
+      Pipeline → Skills → Education/Certifications/Work History/Key Facts →
+      Interview Notes → Notes → Open Items)
 - [ ] CV paths are correct and point to actual files
 - [ ] Pipeline status reflects the latest thread activity
 - [ ] Timeline entries are in chronological order
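The reply-pattern signals added to the track-candidates skill above can be sketched as a keyword table. This is an editorial illustration of the classification rule, not code from the package; the regexes are assumptions drawn from the listed phrases.

```javascript
// Sketch: map hiring-manager reply phrases to pipeline statuses, falling
// back to "new" when no response signal is found (as the skill specifies).
const STATUS_PATTERNS = [
  [/work trial|assessment project|paid trial/i, "work-trial"],
  [/candidate withdrew|no longer interested|accepted another offer/i, "withdrawn"],
  [/second round|follow-up interview/i, "second-interview"],
  [/let's schedule|set up an interview/i, "first-interview"],
  [/extend an offer|make an offer/i, "offer"],
  [/not what we're looking for|\bpass\b/i, "rejected"],
];

function deriveStatus(replyText) {
  for (const [pattern, status] of STATUS_PATTERNS) {
    if (pattern.test(replyText)) return status;
  }
  return "new"; // default when no response signals are found
}

console.log(deriveStatus("They accepted another offer last week")); // withdrawn
```

A real classifier would read the full thread, not a single reply, per the skill's instructions.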
package/template/.claude/skills/upstream-skill/SKILL.md
CHANGED

@@ -29,7 +29,8 @@ Run this skill when:
 
 ## Outputs
 
-- `.claude/skills
+- `.claude/skills/<skill-name>/CHANGELOG.md` — per-skill changelog of local
+  changes, one per modified skill directory
 
 ---
 
@@ -41,21 +42,22 @@ Use git to find all skill files that have been added, modified, or deleted since
 the last changelog entry (or since initial commit if no changelog exists).
 
 ```bash
-# Find the date of the last changelog entry (if any)
-head -20 .claude/skills
+# Find the date of the last changelog entry for a skill (if any)
+head -20 .claude/skills/<skill-name>/CHANGELOG.md 2>/dev/null
 
 # List changed skill files since last documented change
 git log --oneline --name-status -- '.claude/skills/'
 ```
 
-If `.claude/skills
-and only look at changes after that date:
+If a skill already has a `.claude/skills/<skill-name>/CHANGELOG.md`, read the
+most recent entry date and only look at changes after that date:
 
 ```bash
-git log --after="<last-entry-date>" --name-status -- '.claude/skills
+git log --after="<last-entry-date>" --name-status -- '.claude/skills/<skill-name>/'
 ```
 
-If no changelog exists, consider all commits that touched
+If no changelog exists for a skill, consider all commits that touched that
+skill's directory.
 
 ### Step 2: Classify Each Change
 
@@ -106,18 +108,17 @@ Bad descriptions:
 
 ### Step 4: Write the Changelog
 
-
+For each changed skill, create or update its
+`.claude/skills/<skill-name>/CHANGELOG.md` with the following format:
 
 ```markdown
-#
+# <skill-name> Changelog
 
-Changes to
-
+Changes to this skill that should be considered for upstream inclusion in the
+Forward Impact monorepo.
 
 ## <YYYY-MM-DD>
 
-### <skill-name> (added|modified|removed)
-
 **What:** <one-line summary of the change>
 
 **Why:** <the problem or improvement that motivated it>
@@ -128,14 +129,20 @@ inclusion in the Forward Impact monorepo.
 ---
 ```
 
-Entries are in **reverse chronological order** (newest first).
-
+Entries are in **reverse chronological order** (newest first). Each skill has
+its own changelog file inside its directory.
+
+For **new skills**, create the `CHANGELOG.md` alongside the `SKILL.md` with a
+single `added` entry describing the skill's purpose.
+
+For **removed skills**, the changelog should be the last file remaining in the
+skill directory, documenting why the skill was removed.
 
-### Step 5: Review the
+### Step 5: Review the Changelogs
 
-After writing, read
+After writing, read each changelog back and verify:
 
-- [ ] Every changed skill has
+- [ ] Every changed skill has a `CHANGELOG.md` in its directory
 - [ ] Each entry has What, Why, and Details sections
 - [ ] Descriptions are specific enough for an upstream maintainer to act on
 - [ ] New skills include a brief description of their purpose
@@ -145,16 +152,16 @@ After writing, read the changelog back and verify:
 
 ## Example Output
 
+`.claude/skills/track-candidates/CHANGELOG.md`:
+
 ```markdown
-#
+# track-candidates Changelog
 
-Changes to
-
+Changes to this skill that should be considered for upstream inclusion in the
+Forward Impact monorepo.
 
 ## 2026-03-01
 
-### track-candidates (modified)
-
 **What:** Added gender field extraction for diversity tracking
 
 **Why:** Recruitment pipeline lacked diversity metrics — pool composition was
@@ -166,7 +173,18 @@ invisible without structured gender data.
 - Added explicit note that field has no bearing on hiring decisions
 - Updated quality checklist to include gender field verification
 
-
+---
+```
+
+`.claude/skills/process-hyprnote/CHANGELOG.md`:
+
+```markdown
+# process-hyprnote Changelog
+
+Changes to this skill that should be considered for upstream inclusion in the
+Forward Impact monorepo.
+
+## 2026-03-01
 
 **What:** New skill for processing Hyprnote meeting recordings
 
@@ -180,10 +198,17 @@ they weren't being integrated into the knowledge base.
 - Links attendees to `knowledge/People/` entries
 
 ---
+```
 
-
+`.claude/skills/extract-entities/CHANGELOG.md`:
 
-
+```markdown
+# extract-entities Changelog
+
+Changes to this skill that should be considered for upstream inclusion in the
+Forward Impact monorepo.
+
+## 2026-02-15
 
 **What:** Increased batch size from 5 to 10 files per run
 
@@ -200,7 +225,8 @@ dozens of runs to catch up after a week of email.
 ## Notes
 
 - This skill only **documents** changes — it does not push or merge anything
-- The
+- The per-skill changelogs are consumed by the **downstream** skill in the
+  upstream monorepo
 - Keep descriptions actionable: an upstream maintainer should be able to
   understand and apply each change without access to this installation
 - When in doubt about whether a change is upstream-worthy, include it — the
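The "read the most recent entry date" step above can be sketched as a small parser. This is an editorial illustration, not package code; it only assumes the `## <YYYY-MM-DD>` heading format and newest-first ordering stated in the diff.

```javascript
// Sketch: pull the most recent entry date out of a CHANGELOG.md.
// Entries are newest-first, so the first "## YYYY-MM-DD" heading wins.
function lastEntryDate(changelog) {
  const m = changelog.match(/^## (\d{4}-\d{2}-\d{2})/m);
  return m ? m[1] : null; // null means no entries yet (use all commits)
}

const sample = "# foo Changelog\n\nChanges to this skill...\n\n## 2026-03-01\n\n**What:** ...\n";
console.log(lastEntryDate(sample)); // 2026-03-01
```

The returned date would then feed `git log --after="<last-entry-date>"` as shown in the skill's Step 1.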
package/template/.claude/skills/workday-requisition/SKILL.md
ADDED

@@ -0,0 +1,341 @@
+---
+name: workday-requisition
+description: >
+  Import candidates from a Workday requisition export (.xlsx) into
+  knowledge/Candidates/. Parses requisition metadata and candidate data,
+  creates candidate briefs and CV.md files from resume text, and integrates
+  with the existing track-candidates pipeline. Use when the user provides a
+  Workday export file or asks to import candidates from an XLSX requisition
+  export.
+---
+
+# Workday Requisition Import
+
+Import candidates from a Workday requisition export (`.xlsx`) into
+`knowledge/Candidates/`. Extracts requisition metadata and candidate profiles,
+creates standardized candidate briefs and `CV.md` files from the embedded resume
+text, and integrates with the existing `track-candidates` pipeline format.
+
+## Trigger
+
+Run this skill:
+
+- When the user provides a Workday requisition export file (`.xlsx`)
+- When the user asks to import candidates from Workday or an XLSX export
+- When the user mentions a requisition ID and asks to process the export
+
+## Prerequisites
+
+- A Workday requisition export file (`.xlsx`) accessible on the filesystem
+  (typically in `~/Downloads/`)
+- The `xlsx` npm package installed in the KB root:
+  ```bash
+  npm install xlsx
+  ```
+- User identity configured in `USER.md`
+
+## Inputs
+
+- Path to the `.xlsx` file (e.g.
+  `~/Downloads/4951493_Principal_Software_Engineer_–_Forward_Deployed_(Open).xlsx`)
+
+## Outputs
+
+- `knowledge/Candidates/{Full Name}/brief.md` — candidate profile note
+- `knowledge/Candidates/{Full Name}/CV.md` — resume text rendered as markdown
+- Updated existing candidate briefs if candidate already exists
+
+---
+
+## Workday Export Format
+
+The Workday requisition export contains multiple sheets. This skill uses:
+
+### Sheet 1 — Requisition Metadata
+
+Key-value pairs, one per row:
+
+| Row | Field                 | Example                                |
+| --- | --------------------- | -------------------------------------- |
+| 1   | Title header          | `4951493 Principal Software Engineer…` |
+| 2   | Recruiting Start Date | `02/10/2026`                           |
+| 3   | Target Hire Date      | `02/10/2026`                           |
+| 4   | Primary Location      | `USA - NY - Headquarters`              |
+| 5   | Hiring Manager Title  | `Hiring Manager`                       |
+| 6   | Hiring Manager        | Name                                   |
+| 7   | Recruiter Title       | `Recruiter`                            |
+| 8   | Recruiter             | Name                                   |
+
+### Sheet 3 — Candidates
+
+Row 3 contains column headers. Data rows start at row 4. After the last
+candidate, stage-summary rows appear (these are not candidates).
+
+| Column | Field                  | Maps to brief field…     |
+| ------ | ---------------------- | ------------------------ |
+| B      | Candidate name         | `# {Name}`               |
+| C      | Stage                  | Status derivation        |
+| D      | Step / Disposition     | Status derivation        |
+| G      | Resume filename        | Reference only (no file) |
+| H      | Date Applied           | **First seen**           |
+| I      | Current Job Title      | **Current title**, Title |
+| J      | Current Company        | **Current title** suffix |
+| K      | Source                 | **Source**               |
+| L      | Referred by            | **Source** suffix        |
+| N      | Availability Date      | **Availability**         |
+| O      | Visa Requirement       | Notes                    |
+| P      | Eligible to Work       | Notes                    |
+| Q      | Relocation             | Notes                    |
+| R      | Salary Expectations    | **Rate**                 |
+| S      | Non-Compete            | Notes                    |
+| T      | Candidate Location     | **Location**             |
+| U      | Phone                  | **Phone**                |
+| V      | Email                  | **Email**                |
+| W      | Total Years Experience | Summary context          |
+| X      | All Job Titles         | Work History context     |
+| Y      | Companies              | Work History context     |
+| Z      | Degrees                | Education                |
+| AA     | Fields of Study        | Education                |
+| AB     | Language               | **English** / Language   |
+| AC     | Resume Text            | `CV.md` content          |
+
+#### Name Annotations
+
+Names may include parenthetical annotations:
+
+- `(Prior Worker)` → Internal/External = `External (Prior Worker)`
+- `(Internal)` → Internal/External = `Internal`
+- No annotation + source contains "Internal" → `Internal`
+- Otherwise → `External`
+
+## Before Starting
+
+1. Read `USER.md` to get the user's name, email, and domain.
+2. Confirm the XLSX file path with the user (or use the provided path).
+3. Ensure the `xlsx` package is installed:
+   ```bash
+   npm list xlsx 2>/dev/null || npm install xlsx
+   ```
+
+## Step 1: Parse the Export
+
+Run the parse script to extract structured data:
+
+```bash
+node .claude/skills/workday-requisition/scripts/parse-workday.mjs "<path-to-xlsx>" --summary
+```
+
+This prints a summary of the requisition and all candidates. Review the output
+to confirm the file parsed correctly and note the total candidate count.
+
+For the full JSON output (used in subsequent steps):
+
+```bash
+node .claude/skills/workday-requisition/scripts/parse-workday.mjs "<path-to-xlsx>"
+```
+
+The full output is a JSON object with:
+
+- `requisition` — metadata (id, title, location, hiringManager, recruiter)
+- `candidates` — array of candidate objects with all extracted fields
+
+## Step 2: Build Candidate Index
+
+Scan existing candidate notes to avoid duplicates:
+
+```bash
+ls -d knowledge/Candidates/*/ 2>/dev/null
+```
+
+For each existing candidate, check if they match any imported candidate by name.
+Use fuzzy matching — the Workday name may differ slightly from an existing note
+(e.g. middle names, accents, spelling variations).
+
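The loose name match described in Step 2 above can be sketched as follows. This is an editorial illustration with hypothetical helper names, not code shipped in the package: strip annotations and accents, then compare first and last name tokens.

```javascript
// Sketch: normalize a candidate name for duplicate detection.
function normalizeName(name) {
  return name
    .replace(/\(.*?\)/g, "")                           // drop "(Prior Worker)" etc.
    .normalize("NFD").replace(/[\u0300-\u036f]/g, "")  // strip accents
    .toLowerCase().trim().split(/\s+/);
}

// Match on first + last token, tolerating middle names in either form.
function sameCandidate(a, b) {
  const [na, nb] = [normalizeName(a), normalizeName(b)];
  return na[0] === nb[0] && na.at(-1) === nb.at(-1);
}

console.log(sameCandidate("José García (Prior Worker)", "Jose M. Garcia")); // true
```

Transliterations and spelling variants would need a fuzzier comparison (edit distance or the "Also known as" field) than this token match.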
+## Step 3: Determine Pipeline Status
+
+Map Workday stage and step/disposition to the `track-candidates` pipeline
+status:
+
+| Workday Step / Disposition   | Pipeline Status    |
+| ---------------------------- | ------------------ |
+| `Considered`                 | `new`              |
+| `Manager Resume Screen`      | `screening`        |
+| `Assessment`                 | `screening`        |
+| `Interview` / `Phone Screen` | `first-interview`  |
+| `Second Interview`           | `second-interview` |
+| `Reference Check`            | `second-interview` |
+| `Offer`                      | `offer`            |
+| `Employment Agreement`       | `offer`            |
+| `Background Check`           | `hired`            |
+| `Ready for Hire`             | `hired`            |
+| `Rejected` / `Declined`      | `rejected`         |
+
+If the step is identical to the stage (e.g. both "Considered"), default to
+`new`.
+
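The Step 3 mapping table above can be sketched in code. This is an editorial illustration, not the parse script's implementation; the lookup simply mirrors the table, with `new` as the fallback.

```javascript
// Sketch: Workday step/disposition -> track-candidates pipeline status.
const WORKDAY_STATUS = {
  "Considered": "new",
  "Manager Resume Screen": "screening",
  "Assessment": "screening",
  "Interview": "first-interview",
  "Phone Screen": "first-interview",
  "Second Interview": "second-interview",
  "Reference Check": "second-interview",
  "Offer": "offer",
  "Employment Agreement": "offer",
  "Background Check": "hired",
  "Ready for Hire": "hired",
  "Rejected": "rejected",
  "Declined": "rejected",
};

function pipelineStatus(stage, step) {
  const mapped = WORKDAY_STATUS[step];
  if (mapped) return mapped;
  // Unmapped step, or a step that merely repeats the stage: default to "new".
  return "new";
}

console.log(pipelineStatus("Review", "Manager Resume Screen")); // screening
```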
176
|
+
## Step 4: Create CV.md from Resume Text
|
|
177
|
+
|
|
178
|
+
For each candidate with resume text, create
|
|
179
|
+
`knowledge/Candidates/{Clean Name}/CV.md`:
|
|
180
|
+
|
|
181
|
+
```markdown
|
|
182
|
+
# {Clean Name} — Resume
|
|
183
|
+
|
|
184
|
+
> Extracted from Workday requisition export {Req ID} on {today's date}.
|
|
185
|
+
> Original file: {Resume filename from column G}
|
|
186
|
+
|
|
187
|
+
---
|
|
188
|
+
|
|
189
|
+
{Resume text from column AC, preserving original formatting}
|
|
190
|
+
```
|
|
191
|
+
|
|
192
|
+
**Formatting rules for resume text:**
|
|
193
|
+
|
|
194
|
+
- Preserve paragraph breaks (double newlines)
|
|
195
|
+
- Convert ALL-CAPS section headers to `## Heading` format
|
|
196
|
+
- Preserve bullet points and lists
|
|
197
|
+
- Clean up excessive whitespace but keep structure
|
|
198
|
+
- Do not rewrite or summarize — reproduce faithfully
|
|
199
|
+
|
|
200
|
+
If a candidate has no resume text, skip the CV.md file.
|
|
201
|
+
|
|
202
|
+
## Step 5: Write Candidate Brief
|
|
203
|
+
|
|
204
|
+
### For NEW candidates
|
|
205
|
+
|
|
206
|
+
Create the candidate directory and brief:
|
|
207
|
+
|
|
208
|
+
```bash
|
|
209
|
+
mkdir -p "knowledge/Candidates/{Clean Name}"
|
|
210
|
+
```
|
|
211
|
+
|
|
212
|
+
Then create `knowledge/Candidates/{Clean Name}/brief.md` using the
|
|
213
|
+
`track-candidates` format:
|
|
214
|
+
|
|
215
|
+
```markdown
|
|
216
|
+
# {Clean Name}
|
|
217
|
+
|
|
218
|
+
## Info
|
|
219
|
+
**Title:** {Current Job Title or "—"}
|
|
220
|
+
**Rate:** {Salary Expectations or "—"}
|
|
221
|
+
**Availability:** {Availability Date or "—"}
|
|
222
|
+
**English:** {Language field or "—"}
|
|
223
|
+
**Location:** {Candidate Location or "—"}
|
|
224
|
+
**Gender:** —
|
|
225
|
+
**Source:** {Source} {via Referred by, if present}
|
|
226
|
+
**Status:** {pipeline status from Step 3}
|
|
227
|
+
**First seen:** {Date Applied, YYYY-MM-DD}
|
|
228
|
+
**Last activity:** {Date Applied, YYYY-MM-DD}
|
|
229
|
+
**Req:** {Req ID} — {Req Title}
|
|
230
|
+
**Internal/External:** {Internal / External / External (Prior Worker)}
|
|
231
|
+
**Current title:** {Current Job Title at Current Company}
|
|
232
|
+
**Email:** {Email or "—"}
|
|
233
|
+
**Phone:** {Phone or "—"}
|
|
234
|
+
|
|
235
|
+
## Summary
|
|
236
|
+
{2-3 sentences based on resume text: role focus, years of experience, key
|
|
237
|
+
strengths. If no resume text, use Current Job Title + Total Years Experience.}
|
|
238
|
+
|
|
239
|
+
## CV
|
|
240
|
+
- [CV.md](./CV.md)
|
|
241
|
+
|
|
242
|
+
## Connected to
|
|
243
|
+
- {Referred by person, if present}
|
|
244
|
+
|
|
245
|
+
## Pipeline
|
|
246
|
+
- **{Date Applied}**: Applied via {Source}
|
|
247
|
+
|
|
248
|
+
## Skills
|
|
249
|
+
{Extract key technical skills from resume text — use framework IDs where
|
|
250
|
+
possible via `npx fit-pathway skill --list`}
|
|
251
|
+
|
|
252
|
+
## Education
|
|
253
|
+
{Degrees and Fields of Study from the export columns}
|
|
254
|
+
|
|
255
|
+
## Work History
|
|
256
|
+
{All Job Titles and Companies from the export columns, formatted as a list}
|
|
257
|
+
|
|
258
|
+
## Notes
|
|
259
|
+
{Include any noteworthy fields here:}
|
|
260
|
+
{- Visa requirement (if present)}
|
|
261
|
+
{- Eligible to work (if present)}
|
|
262
|
+
{- Relocation willingness (if present)}
|
|
263
|
+
{- Non-compete status (if present)}
|
|
264
|
+
{- Total years of experience}
|
|
265
|
+
```
|
|
266
|
+
|
|
267
|
+
**Extra fields** (after Last activity, in order): Req, Internal/External,
|
|
268
|
+
Current title, Email, Phone, LinkedIn — include only when available. Follow the
|
|
269
|
+
order defined in the `track-candidates` skill.
|
|
270
|
+
|
|
271
|
+
### For EXISTING candidates
|
|
272
|
+
|
|
273
|
+
Read `knowledge/Candidates/{Name}/brief.md`, then apply targeted edits:
|
|
274
|
+
|
|
275
|
+
- Add or update **Req** field with this requisition's ID
|
|
276
|
+
- Update **Status** if the Workday stage is more advanced
|
|
277
|
+
- Update **Last activity** date if this application is more recent
|
|
278
|
+
- Add a new **Pipeline** entry:
|
|
279
|
+
`**{Date Applied}**: Applied to {Req ID} — {Req Title} via {Source}`
|
|
280
|
+
- Update any missing fields (Email, Phone, Location) from the export
|
|
281
|
+
- Do NOT overwrite existing richer data with sparser Workday data
|
|
282
|
+
|
|
283
|
+
**Use precise edits — don't rewrite the entire file.**
|
|
284
|
+
|
|
285
|
+
## Step 6: Process in Batches

Workday exports can contain many candidates. Process in batches of **10
candidates per run** to stay within context limits.

For each batch:

1. Parse the JSON output (or re-run the parse script)
2. Process 10 candidates: create/update brief + CV.md
3. Report progress: `Processed {N}/{Total} candidates`

If the export has more than 10 candidates, tell the user how many remain and
offer to continue.
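The batching above can be sketched in plain Node. This is a minimal illustration, not part of the skill itself: the mock `candidates` array stands in for the parser's real JSON output, and the per-candidate work is left as a comment.

```javascript
// Sketch of Step 6's batching over a parsed export (mock data, hypothetical shape).
const parsed = {
  candidates: Array.from({ length: 23 }, (_, i) => ({ cleanName: `Candidate ${i + 1}` })),
};

const BATCH_SIZE = 10;

// Yield consecutive slices of at most `size` items.
function* batches(items, size) {
  for (let i = 0; i < items.length; i += size) {
    yield items.slice(i, i + size);
  }
}

let processed = 0;
for (const batch of batches(parsed.candidates, BATCH_SIZE)) {
  // ...create/update brief.md and CV.md for each candidate in the batch...
  processed += batch.length;
  console.log(`Processed ${processed}/${parsed.candidates.length} candidates`);
}
```

With 23 mock candidates this prints three progress lines (10/23, 20/23, 23/23), which is the point at which the skill would report how many remain and offer to continue.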
## Step 7: Capture Key Insights

After processing all candidates, review the batch for strategic observations and
add them to `knowledge/Candidates/Insights.md`:

- Candidates who stand out as strong matches
- Candidates better suited for a different role
- Notable patterns (source quality, experience distribution, skill gaps)

Follow the `track-candidates` Insights format: one bullet per insight under
`## Placement Notes` with `[[Candidates/Name/brief|Name]]` links.
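For example, an `Insights.md` entry might look like this (the name and observation are invented):

```markdown
## Placement Notes

- [[Candidates/Jane Doe/brief|Jane Doe]] — strong match for this requisition; consider fast-tracking to screen
```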
## Step 8: Tag Skills with Framework IDs

When resume text mentions technical skills, map them to the engineering
framework:

```bash
npx fit-pathway skill --list
```

Use framework skill IDs in the **Skills** section of each brief. If a candidate
has a CV.md, flag them for the `analyze-cv` skill for a full framework-aligned
assessment.
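The mapping step can be as simple as an alias lookup. The IDs and aliases below are invented for illustration; the real list comes from `npx fit-pathway skill --list`.

```javascript
// Hypothetical framework entries — replace with the real output of
// `npx fit-pathway skill --list`.
const FRAMEWORK_SKILLS = [
  { id: "backend-typescript", aliases: ["typescript", "ts", "node.js", "nodejs"] },
  { id: "cloud-aws", aliases: ["aws", "amazon web services"] },
];

// Map a free-text skill mention to a framework ID, or null if unmapped.
function matchSkill(mention) {
  const needle = String(mention).trim().toLowerCase();
  const hit = FRAMEWORK_SKILLS.find((s) => s.aliases.includes(needle));
  return hit ? hit.id : null;
}

console.log(matchSkill("Node.js")); // → "backend-typescript"
console.log(matchSkill("COBOL")); // → null (keep the raw mention in the brief)
```

Unmapped mentions should stay in the brief verbatim rather than being dropped, so nothing from the resume is lost.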
## Quality Checklist

- [ ] XLSX parsed correctly — verify candidate count matches summary
- [ ] Requisition metadata extracted (ID, title, hiring manager, recruiter)
- [ ] Each candidate has a directory under `knowledge/Candidates/{Clean Name}/`
- [ ] CV.md created for every candidate with resume text
- [ ] CV.md faithfully reproduces resume text (no rewriting or summarizing)
- [ ] Brief follows `track-candidates` format exactly
- [ ] Info fields in standard order (Title → Rate → Availability → English →
      Location → Gender → Source → Status → First seen → Last activity → extras)
- [ ] Pipeline status correctly mapped from Workday stage/step
- [ ] Internal/External correctly derived from name annotations and source
- [ ] Name annotations stripped from directory names and headings
- [ ] Existing candidates updated (not duplicated) with precise edits
- [ ] Skills tagged using framework skill IDs where possible
- [ ] Gender field set to `—` (Workday exports don't include gender signals)
- [ ] Insights.md updated with strategic observations
- [ ] No duplicate candidate directories created
@@ -0,0 +1,243 @@
#!/usr/bin/env node
/**
 * Parse a Workday requisition export (.xlsx) and output structured JSON.
 *
 * Reads Sheet1 for requisition metadata and the "Candidates" sheet for
 * candidate data. Outputs a JSON object to stdout with:
 *   - requisition: { id, title, startDate, targetHireDate, location,
 *                    hiringManager, recruiter }
 *   - candidates: [ { name, cleanName, stage, step, resumeFile, dateApplied,
 *                     currentTitle, currentCompany, source, referredBy,
 *                     availabilityDate, visaRequirement, eligibleToWork,
 *                     relocation, salaryExpectations, nonCompete, location,
 *                     phone, email, totalYearsExperience, allJobTitles,
 *                     companies, degrees, fieldsOfStudy, language,
 *                     resumeText, internalExternal } ]
 *
 * Usage:
 *   node scripts/parse-workday.mjs <path-to-xlsx>
 *   node scripts/parse-workday.mjs <path-to-xlsx> --summary
 *   node scripts/parse-workday.mjs -h|--help
 *
 * Requires: npm install xlsx
 */

import { readFileSync } from "node:fs";

if (
  process.argv.includes("-h") ||
  process.argv.includes("--help") ||
  process.argv.length < 3
) {
  console.log(`parse-workday — extract candidates from a Workday requisition export

Usage:
  node scripts/parse-workday.mjs <path-to-xlsx>            Full JSON output
  node scripts/parse-workday.mjs <path-to-xlsx> --summary  Name + status only
  node scripts/parse-workday.mjs -h|--help                 Show this help

Output (JSON):
  { requisition: { id, title, ... }, candidates: [ { name, ... }, ... ] }

Requires: npm install xlsx`);
  process.exit(process.argv.length < 3 ? 1 : 0);
}

let XLSX;
try {
  XLSX = await import("xlsx");
} catch {
  console.error(
    "Error: xlsx package not found. Install it first:\n  npm install xlsx",
  );
  process.exit(1);
}

const filePath = process.argv[2];
const summaryMode = process.argv.includes("--summary");

const data = readFileSync(filePath);
const wb = XLSX.read(data, { type: "buffer", cellDates: true });

// --- Sheet 1: Requisition metadata ---

const ws1 = wb.Sheets[wb.SheetNames[0]];
const sheet1Rows = XLSX.utils.sheet_to_json(ws1, { header: 1, defval: "" });

/** Extract the requisition ID and title from the header row. */
function parseReqHeader(headerText) {
  // Format: "4951493 Principal Software Engineer – Forward Deployed: 4951493 ..."
  const text = String(headerText).split(":")[0].trim();
  const match = text.match(/^(\d+)\s+(.+)$/);
  if (match) return { id: match[1], title: match[2] };
  return { id: "", title: text };
}

/** Build a key-value map from Sheet1 rows (column A = label, column B = value). */
function buildReqMetadata(rows) {
  const meta = {};
  for (const row of rows) {
    const key = String(row[0] || "").trim();
    const val = String(row[1] || "").trim();
    if (key && val) meta[key] = val;
  }
  return meta;
}

const reqHeader = parseReqHeader(sheet1Rows[0]?.[0] || "");
const reqMeta = buildReqMetadata(sheet1Rows.slice(1));

/** Clean a metadata date string (e.g. "02/10/2026 - 22 days ago" → "2026-02-10"). */
function cleanMetaDate(val) {
  if (!val) return "";
  const clean = val.replace(/\s*-\s*\d+\s+days?\s+ago$/i, "").trim();
  // Convert MM/DD/YYYY → YYYY-MM-DD
  const match = clean.match(/^(\d{2})\/(\d{2})\/(\d{4})$/);
  if (match) return `${match[3]}-${match[1]}-${match[2]}`;
  return clean;
}

const requisition = {
  id: reqHeader.id,
  title: reqHeader.title,
  startDate: cleanMetaDate(reqMeta["Recruiting Start Date"]),
  targetHireDate: cleanMetaDate(reqMeta["Target Hire Date"]),
  location: reqMeta["Primary Location"] || "",
  hiringManager: reqMeta["Hiring Manager"] || "",
  recruiter: reqMeta["Recruiter"] || "",
};

// --- Sheet 3: Candidates ---

// Find the "Candidates" sheet (usually index 2, but search by name to be safe)
const candSheetName =
  wb.SheetNames.find((n) => n.toLowerCase() === "candidates") ||
  wb.SheetNames[2];
const ws3 = wb.Sheets[candSheetName];
const candRows = XLSX.utils.sheet_to_json(ws3, { header: 1, defval: "" });

// Row 3 (index 2) has column headers. Data starts at row 4 (index 3).
// Stage summary rows start when column A has a non-empty value that looks like
// a label or number — detect by checking if column C (Stage) is empty and
// column A has a value.
const DATA_START = 3;

/**
 * Clean a candidate name by stripping annotations like (Prior Worker),
 * (Internal), etc. Returns { cleanName, internalExternal }.
 */
function parseName(raw) {
  const name = String(raw).trim();
  if (!name) return { cleanName: "", internalExternal: "" };

  const match = name.match(/^(.+?)\s*\(([^)]+)\)\s*$/);
  if (match) {
    const annotation = match[2].trim();
    let ie = "";
    if (/prior\s*worker/i.test(annotation)) ie = "External (Prior Worker)";
    else if (/internal/i.test(annotation)) ie = "Internal";
    else ie = annotation;
    return { cleanName: match[1].trim(), internalExternal: ie };
  }
  return { cleanName: name, internalExternal: "" };
}

/** Detect source-based internal/external when name annotation is absent. */
function inferInternalExternal(source, nameAnnotation) {
  if (nameAnnotation) return nameAnnotation;
  if (/internal/i.test(source)) return "Internal";
  return "External";
}

/** Format a date value (may be a Date object or a string). */
function fmtDate(val) {
  if (!val) return "";
  if (val instanceof Date) {
    // Use local date parts to avoid UTC offset shifting the day
    const y = val.getFullYear();
    const m = String(val.getMonth() + 1).padStart(2, "0");
    const d = String(val.getDate()).padStart(2, "0");
    return `${y}-${m}-${d}`;
  }
  const s = String(val).trim();
  // Strip trailing " 00:00:00" and relative text like " - 22 days ago"
  return s
    .replace(/\s+\d{2}:\d{2}:\d{2}$/, "")
    .replace(/\s*-\s*\d+\s+days?\s+ago$/i, "");
}

/** Normalise multiline cell values into clean comma-separated lists. */
function multiline(val) {
  if (!val) return "";
  return String(val)
    .split("\n")
    .map((l) => l.trim())
    .filter(Boolean)
    .join(", ");
}

const candidates = [];

for (let i = DATA_START; i < candRows.length; i++) {
  const row = candRows[i];
  const rawName = String(row[1] || "").trim(); // Column B (index 1)
  const stage = String(row[2] || "").trim(); // Column C (index 2)

  // Stop at stage-summary rows: column A has a value, column C (stage) is empty
  if (!stage && String(row[0] || "").trim()) break;
  // Skip blank spacer rows without ending the scan early
  if (!rawName) continue;

  const { cleanName, internalExternal: nameIE } = parseName(rawName);
  const source = String(row[10] || "").trim();

  candidates.push({
    name: rawName,
    cleanName,
    stage,
    step: String(row[3] || "").trim(),
    awaitingMe: String(row[4] || "").trim(),
    awaitingAction: String(row[5] || "").trim(),
    resumeFile: String(row[6] || "").trim(),
    dateApplied: fmtDate(row[7]),
    currentTitle: String(row[8] || "").trim(),
    currentCompany: String(row[9] || "").trim(),
    source,
    referredBy: String(row[11] || "").trim(),
    availabilityDate: fmtDate(row[13]),
    visaRequirement: String(row[14] || "").trim(),
    eligibleToWork: String(row[15] || "").trim(),
    relocation: String(row[16] || "").trim(),
    salaryExpectations: String(row[17] || "").trim(),
    nonCompete: String(row[18] || "").trim(),
    location: String(row[19] || "").trim(),
    phone: String(row[20] || "").trim(),
    email: String(row[21] || "").trim(),
    totalYearsExperience: String(row[22] || "").trim(),
    allJobTitles: multiline(row[23]),
    companies: multiline(row[24]),
    degrees: multiline(row[25]),
    fieldsOfStudy: multiline(row[26]),
    language: multiline(row[27]),
    resumeText: String(row[28] || "").trim(),
    internalExternal: inferInternalExternal(source, nameIE),
  });
}

// --- Output ---

if (summaryMode) {
  console.log(`Requisition: ${requisition.id} — ${requisition.title}`);
  console.log(`Location: ${requisition.location}`);
  console.log(`Hiring Manager: ${requisition.hiringManager}`);
  console.log(`Recruiter: ${requisition.recruiter}`);
  console.log(`Candidates: ${candidates.length}`);
  console.log();
  for (const c of candidates) {
    const resume = c.resumeText ? "has resume" : "no resume";
    console.log(
      `  ${c.cleanName} — ${c.step || c.stage} (${c.internalExternal}, ${resume})`,
    );
  }
} else {
  console.log(JSON.stringify({ requisition, candidates }, null, 2));
}
package/template/CLAUDE.md
CHANGED
@@ -81,13 +81,13 @@ This knowledge base is maintained by a team of agents, each defined in
`.claude/agents/`. They are woken on a schedule by the Basecamp scheduler. Each
wake, they observe KB state, decide the most valuable action, and execute.

- | Agent | Domain | Schedule | Skills
- | ------------------ | ------------------------------ | --------------- |
- | **postman** | Email triage and drafts | Every 5 min | sync-apple-mail, draft-emails
- | **concierge** | Meeting prep and transcripts | Every 10 min | sync-apple-calendar, meeting-prep, process-hyprnote
- | **librarian** | Knowledge graph maintenance | Every 15 min | extract-entities, organize-files, manage-tasks
- | **recruiter** | Engineering recruitment | Every 30 min | track-candidates, analyze-cv, right-to-be-forgotten, fit-pathway, fit-map |
- | **chief-of-staff** | Daily briefings and priorities | 7am, Mon 7:30am | weekly-update _(Mon)_, _(reads all state for daily briefings)_
+ | Agent | Domain | Schedule | Skills |
+ | ------------------ | ------------------------------ | --------------- | ---------------------------------------------------------------------------------------------- |
+ | **postman** | Email triage and drafts | Every 5 min | sync-apple-mail, draft-emails |
+ | **concierge** | Meeting prep and transcripts | Every 10 min | sync-apple-calendar, meeting-prep, process-hyprnote |
+ | **librarian** | Knowledge graph maintenance | Every 15 min | extract-entities, organize-files, manage-tasks |
+ | **recruiter** | Engineering recruitment | Every 30 min | track-candidates, analyze-cv, workday-requisition, right-to-be-forgotten, fit-pathway, fit-map |
+ | **chief-of-staff** | Daily briefings and priorities | 7am, Mon 7:30am | weekly-update _(Mon)_, _(reads all state for daily briefings)_ |

Each agent writes a triage file to `~/.cache/fit/basecamp/state/` every wake
cycle. The naming convention is `{agent}_triage.md`:
@@ -200,16 +200,17 @@ Available skills (grouped by function):

**Knowledge graph** — build and maintain structured notes:

- | Skill
- |
- | `extract-entities`
- | `manage-tasks`
- | `track-candidates`
- | `
- | `
- | `
- | `
- | `
+ | Skill | Purpose |
+ | ----------------------- | ---------------------------------------- |
+ | `extract-entities` | Process synced data into knowledge notes |
+ | `manage-tasks` | Per-person task boards with lifecycle |
+ | `track-candidates` | Recruitment pipeline from email threads |
+ | `workday-requisition` | Import candidates from Workday XLSX |
+ | `analyze-cv` | CV assessment against career framework |
+ | `right-to-be-forgotten` | GDPR data erasure with audit trail |
+ | `weekly-update` | Weekly priorities from tasks + calendar |
+ | `process-hyprnote` | Extract entities from Hyprnote sessions |
+ | `organize-files` | Tidy Desktop/Downloads, chain to extract |

**Communication** — draft, send, and present: