interviewsignal-0.1.0.tar.gz

Files changed (30)
  1. interviewsignal-0.1.0/PKG-INFO +376 -0
  2. interviewsignal-0.1.0/README.md +354 -0
  3. interviewsignal-0.1.0/interview/__init__.py +1 -0
  4. interviewsignal-0.1.0/interview/cli.py +521 -0
  5. interviewsignal-0.1.0/interview/core/__init__.py +1 -0
  6. interviewsignal-0.1.0/interview/core/audit.py +338 -0
  7. interviewsignal-0.1.0/interview/core/decisions.py +176 -0
  8. interviewsignal-0.1.0/interview/core/email_sender.py +220 -0
  9. interviewsignal-0.1.0/interview/core/grader.py +424 -0
  10. interviewsignal-0.1.0/interview/core/report.py +312 -0
  11. interviewsignal-0.1.0/interview/core/session.py +367 -0
  12. interviewsignal-0.1.0/interview/core/setup.py +180 -0
  13. interviewsignal-0.1.0/interview/core/transport.py +384 -0
  14. interviewsignal-0.1.0/interview/dashboard/__init__.py +1 -0
  15. interviewsignal-0.1.0/interview/dashboard/serve.py +904 -0
  16. interviewsignal-0.1.0/interview/hooks/__init__.py +1 -0
  17. interviewsignal-0.1.0/interview/hooks/claude_hook.py +175 -0
  18. interviewsignal-0.1.0/interview/relay/__init__.py +0 -0
  19. interviewsignal-0.1.0/interview/relay/server.py +373 -0
  20. interviewsignal-0.1.0/interview/relay/store.py +391 -0
  21. interviewsignal-0.1.0/interview/skills/interview/SKILL.md +298 -0
  22. interviewsignal-0.1.0/interview/skills/submit/SKILL.md +62 -0
  23. interviewsignal-0.1.0/interviewsignal.egg-info/PKG-INFO +376 -0
  24. interviewsignal-0.1.0/interviewsignal.egg-info/SOURCES.txt +28 -0
  25. interviewsignal-0.1.0/interviewsignal.egg-info/dependency_links.txt +1 -0
  26. interviewsignal-0.1.0/interviewsignal.egg-info/entry_points.txt +2 -0
  27. interviewsignal-0.1.0/interviewsignal.egg-info/requires.txt +5 -0
  28. interviewsignal-0.1.0/interviewsignal.egg-info/top_level.txt +1 -0
  29. interviewsignal-0.1.0/pyproject.toml +47 -0
  30. interviewsignal-0.1.0/setup.cfg +4 -0
Metadata-Version: 2.4
Name: interviewsignal
Version: 0.1.0
Summary: AI-native interview platform — capture thought process, not puzzle performance
License-Expression: MIT
Project-URL: Homepage, https://github.com/NikhilSKashyap/interviewsignal
Project-URL: Issues, https://github.com/NikhilSKashyap/interviewsignal/issues
Keywords: interview,hiring,claude-code,ai,developer-experience
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development :: Testing
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Provides-Extra: dev
Requires-Dist: pytest>=8; extra == "dev"
Requires-Dist: ruff; extra == "dev"
Requires-Dist: mypy; extra == "dev"
# interviewsignal

[![PyPI](https://img.shields.io/pypi/v/interviewsignal)](https://pypi.org/project/interviewsignal/)
[![Downloads](https://static.pepy.tech/badge/interviewsignal/month)](https://pepy.tech/project/interviewsignal)
[![CI](https://github.com/NikhilSKashyap/interviewsignal/actions/workflows/ci.yml/badge.svg)](https://github.com/NikhilSKashyap/interviewsignal/actions)
[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)

**An AI-native interview platform.** Type `/interview` in Claude Code, Codex, Cursor, or any AI coding assistant — it captures your entire thought process as you solve a problem and sends a structured, tamper-evident audit to the hiring manager.

No contrived puzzles. No whiteboard anxiety. No bias. Just signal.
---

## The problem

You're hiring a software engineer. You've spent six hours watching three candidates struggle with binary-tree problems none of them will ever encounter on the job. One froze. One solved it but couldn't explain their thinking. One was brilliant but had a bad day.

You still don't know who can actually build software.

Meanwhile, every one of those candidates uses AI coding assistants every day. You tested them without their tools — like testing a surgeon without instruments.

**There's a better signal: how they think.**

---

## What you get

**For hiring managers:**
- A timestamped audit of everything the candidate did — every prompt, every tool call, every file written
- AI grading against your own rubric (not a canned scoring system)
- Anonymous-first dashboard: you see scores before names, eliminating identity bias
- Grade-before-Reveal enforcement with a tamper-evident audit trail — defensible in a DEI audit
- Comment threads, hire/reject decisions, all hash-chained and email-anchored

**For candidates:**
- Work the way you actually work — with AI assistance, in your own environment
- Get evaluated on your thinking, not your ability to memorise algorithms
- No file transfers, no email attachments — just a short code

**For teams:**
- Close the interview loop in half the time
- A written record of the hiring decision from problem to offer
- Works inside your existing toolchain — no new platform to log into

---
## Install

```bash
pip install interviewsignal && interview install
```

Requires Python 3.10+ and one of: [Claude Code](https://claude.ai/code), [Codex](https://openai.com/codex), [Cursor](https://cursor.com), [Gemini CLI](https://github.com/google-gemini/gemini-cli), [Aider](https://aider.chat).

Then configure grading and (optionally) the relay:

```bash
interview configure-api-key   # Anthropic API key — for grading
interview configure-relay     # relay URL — auto-registers your HM account
```

> **Enterprise / no personal API key?** See [Enterprise configuration](#enterprise-configuration) below.

---
## Quickstart

### Hiring manager

```
/interview hm
```

You'll be asked for:
- Problem statement
- Grading rubric (plain language — "weight decomposition 40%, code quality 30%, tests 30%")
- Your email + CC list (HR, co-interviewer, etc.)
- Audit recipient (for DEI compliance)
- Time limit (optional)
- Anonymize candidates? (yes / no)

You get back a code like `INT-4829-XK`. Share it with your candidate — that's all they need.

### Candidate

```bash
pip install interviewsignal && interview install
```

```
/interview INT-4829-XK
```

The problem appears, and the relay is auto-configured — no API keys or file transfers required. Work normally: ask the AI questions, write code, run tests. The session records everything automatically.

When done:

```
/submit
```

The session is sealed and sent to the relay. The hiring manager's dashboard updates automatically.

### Hiring manager — review

```bash
interview dashboard
# → http://localhost:7832
```

Candidates appear as "Candidate A", "Candidate B" — scores first, names second. Click into any candidate to see the full transcript, dimension scores, and diff. Add comments. Record your decision. Click Reveal when you're ready to unmask.

---
## How it works

interviewsignal installs as a skill into your AI coding assistant. It hooks into every tool call — reads, writes, bash commands — and builds an append-only, hash-chained session log. On `/submit`, the log is sealed and pushed to the relay. The HM grades from their dashboard using their own AI key.

```
Candidate side                        HM side
─────────────────────────             ───────────────────────
interview configure-relay             interview configure-relay
  ↓ auto-registered via relay           ↓ gets unique hm_key

                                      /interview hm  ← share code INT-4829-XK
                                        ↓ creates interview
                                        ↓ pushes package to relay

/interview INT-4829-XK                interview dashboard
  ↓ fetches problem from relay          ↓ localhost:7832
  ↓ relay auto-configured locally       ↓ candidates arrive
Session starts                          ↓ anonymous by default
  ↓ hooks capture every tool call       ↓ Grade All / Grade Selected
  ↓ append-only events.jsonl            ↓ score before name
  ↓ hash chain (tamper-evident)         ↓ Reveal unlocks after grading
/submit                                 ↓ comment thread
  ↓ session sealed                      ↓ hire / next round / reject
  ↓ pushed to relay                     ↓ full audit trail
```
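The candidate-side capture column above can be sketched in a few lines: each event embeds the previous event's hash, so the log is append-only and any edit invalidates everything after it. This is an illustrative sketch only — the field names and the `append_event` helper are assumptions, not the actual schema used by `interview/core/session.py`:

```python
import hashlib
import json
import time

# Illustrative sketch of an append-only, hash-chained event log.
# Field names here are assumptions, not the package's real schema.

def append_event(events: list[dict], event_type: str, payload: dict) -> dict:
    """Append an event whose hash covers the previous event's hash."""
    prev_hash = events[-1]["hash"] if events else "0" * 64
    body = {
        "ts": time.time(),
        "type": event_type,
        "payload": payload,
        "prev_hash": prev_hash,
    }
    # Canonical JSON (sorted keys) so the digest is deterministic.
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    events.append(body)
    return body

log: list[dict] = []
append_event(log, "bash", {"command": "pytest", "exit_code": 0})
append_event(log, "file_write", {"path": "solution.py"})
```

Because each event's digest covers `prev_hash`, rewriting any earlier event forces every later hash to change — which is what makes the log tamper-evident.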
**Three passes on submit:**
1. `session seal` — finalises the hash chain and captures the git diff (start → end)
2. Push to relay — the sealed session (events + manifest + report) is stored server-side
3. HM grades from the dashboard — the timeline, rubric, and diff are sent to their AI key, which returns structured JSON scores

**The integrity model:**

Every HM action — grading, revealing identity, adding a comment, recording a decision — is logged with a SHA-256 hash chain. Key events are silently emailed to a designated audit recipient (typically HR). The mail server's timestamp is outside the HM's control. Reveal stays disabled until a grade is saved, so blind evaluation is provable.
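The email-anchoring step might look roughly like this — composing one minimal message per key event so the receiving mail server timestamps it independently. The sender address and event field names below are placeholders, not the implementation in `interview/core/email_sender.py`:

```python
from email.message import EmailMessage

# Hypothetical sketch of an audit-anchor email. The receiving server's
# Received timestamp exists outside the HM's control, anchoring the event.

def audit_anchor_email(event: dict, recipient: str) -> EmailMessage:
    msg = EmailMessage()
    msg["From"] = "audit@interviewsignal.local"  # placeholder sender
    msg["To"] = recipient
    msg["Subject"] = (
        f"[audit] {event['type']} {event['interview']} hash={event['hash'][:8]}"
    )
    msg.set_content(
        f"{event['ts']} {event['type']} {event['interview']} hash={event['hash']}"
    )
    return msg

msg = audit_anchor_email(
    {"ts": "2026-04-13T10:47:22Z", "type": "grade_recorded",
     "interview": "INT-4829-XK", "hash": "d4abe5e6" + "0" * 56},
    "hr-audit@example.com",
)
```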
```
[2026-04-13T10:47:22Z] grade_recorded     INT-4829-XK hash=d4abe5e6
[2026-04-13T10:52:09Z] identity_revealed  INT-4829-XK hash=2370be19
```

*"Identity revealed 4.8 minutes after grade was recorded."* That one line is your DEI proof.
---
## Relay

The relay stores interview packages and candidate sessions so hiring managers and candidates only need to share a short code — no file transfers, no email attachments.

Run `interview configure-relay` to choose:

```
How do you want to deliver interview sessions?
──────────────────────────────────────────────
1. Your own relay   Railway / Render / self-hosted — private, ~$5/mo
2. Email only       SMTP — no server, reports arrive by email
```

### Option 1 — Your own relay (~$5/mo, fully private)

Deploy in one click:

[![Deploy on Railway](https://railway.com/button.svg)](https://railway.com/new/template?template=https://github.com/NikhilSKashyap/interviewsignal)

After deploying:
1. Set the environment variable `RELAY_API_KEY` (any random string) in Railway → Variables
2. Add a `/data` volume when prompted — this is where sessions are stored
3. Copy your Railway URL (e.g. `https://myrelay.up.railway.app`)
4. Run `interview configure-relay` → option 1 → paste the URL

Your data stays in your own Railway account. Cost is ~$5/month (Railway Hobby plan).

Or run it anywhere with Docker:

```bash
docker run -e RELAY_API_KEY=secret -v /data:/data -p 8080:8080 \
  ghcr.io/nikhilskashyap/interviewsignal:latest
```

See [docs/self-hosting.md](docs/self-hosting.md) for data layout, backup, and key rotation.
### Option 2 — Email only (free, no server)

```bash
interview configure-relay    # choose 2
interview configure-email    # set up SMTP credentials
```

Candidates run `/submit` → the report is emailed directly to the HM. The HM saves the JSON attachment to `~/.interview/received/` and it appears in the dashboard. No server needed, no ongoing cost. Trade-off: manual file handling, and the dashboard can't re-grade from raw events.

---
## Enterprise configuration

Companies that don't issue personal API keys — or that route all AI traffic through an internal gateway — can configure a custom LLM endpoint:

```bash
interview configure-llm
```

This covers three patterns:

| Pattern | What to set |
|---|---|
| Anthropic direct | API key only (default) |
| Internal proxy (Floodgate, corporate gateway) | Base URL + optional key; proxy handles auth |
| OpenAI-compatible endpoint | Base URL + key + `format=openai` |

**What gets configured (`~/.interview/config.json`):**

```json
{
  "anthropic_base_url": "https://ai-gateway.corp.internal/anthropic",
  "anthropic_api_key": "",
  "api_format": "anthropic",
  "grading_model": "claude-3-5-haiku",
  "anthropic_extra_headers": {"X-Team-ID": "ml-hiring"}
}
```

The API key is optional — leave it blank if your proxy handles auth at the network or SSO level. Extra headers let you pass team/project routing headers required by some gateways.

Environment variable overrides (useful for CI or shared machines):

```bash
ANTHROPIC_API_KEY=...          # API key
ANTHROPIC_BASE_URL=...         # base URL override
INTERVIEW_GRADING_MODEL=...    # model name override
```
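One conventional way such overrides are layered — environment variables winning over the config file, with sensible defaults last — is sketched below. `resolve_grading_config` is a hypothetical helper for illustration, not the package's actual loader:

```python
import json
import os
from pathlib import Path

# Hypothetical sketch of env-over-file precedence for the settings above.
# Precedence: environment variable > config.json > built-in default.

def resolve_grading_config(config_path: Path) -> dict:
    cfg = json.loads(config_path.read_text()) if config_path.exists() else {}
    return {
        "api_key": os.environ.get("ANTHROPIC_API_KEY")
        or cfg.get("anthropic_api_key", ""),
        "base_url": os.environ.get("ANTHROPIC_BASE_URL")
        or cfg.get("anthropic_base_url", "https://api.anthropic.com"),
        "model": os.environ.get("INTERVIEW_GRADING_MODEL")
        or cfg.get("grading_model", "claude-3-5-haiku"),
    }
```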
---

## True meritocracy

Most companies say they hire on merit. Almost none of them do — because the process doesn't allow it.

When you know who someone is before you evaluate them, bias isn't a failure of character. It's a failure of process. The name on the resume, the university on the screen share, the face on the Zoom call — they all get into your head before the first line of code is written.

`interviewsignal` makes meritocracy structurally possible:

**Same tools, same environment.** Every candidate works with the same AI assistant they use on the job. The person who drilled Leetcode for six months has no advantage over the person who just builds things. The only variable is how well they think.

**You can't prevent a candidate from having a second screen. Neither can a Leetcode proctoring tool.** The difference is that with interviewsignal, gaming it well requires understanding the problem — and that's the signal.

**Score before name, always.** Grades are locked in before identity is revealed — not as a policy, but as a technical constraint. You cannot click Reveal until a score is saved. The order of events is cryptographically provable.

**A tamper-evident record.** Every action — grading, revealing identity, adding a comment, recording a decision — is hash-chained and email-anchored to a timestamp outside your control. The audit trail doesn't just log what happened. It proves it.

```
[2026-04-13T10:47:22Z] grade_recorded     INT-4829-XK hash=d4abe5e6
[2026-04-13T10:52:09Z] identity_revealed  INT-4829-XK hash=2370be19
```

*"Identity revealed 4.8 minutes after grade was recorded."*

That one line is the whole argument. You hired the person with the best score. You can prove it. Not because you were careful, but because the system made any other sequence impossible.

---
## Platform support

| Platform | Install | Hook mechanism |
|---|---|---|
| Claude Code (Linux/Mac/Windows) | `interview install` | PreToolUse + PostToolUse hooks |
| Codex | `interview install --platform codex` | PreToolUse hook + AGENTS.md |
| Cursor | `interview install --platform cursor` | `.cursor/rules/interview.mdc` |
| Gemini CLI | `interview install --platform gemini` | BeforeTool hook + GEMINI.md |
| Aider | `interview install --platform aider` | AGENTS.md |

---
## What gets captured

| Event | Captured |
|---|---|
| File reads | Path |
| File writes | Path + content hash |
| Bash commands | Command + exit code |
| File edits | Path + change summary |
| Git state | Branch + commit at start and end |
| Git diff | Full diff (start → submit) |
| Timestamps | Millisecond precision on every event |

The session log is append-only and hash-chained. Any tampering breaks the chain.

What is **not** captured: file contents (only hashes and paths, for privacy). The grader evaluates the timeline and diff, not raw file contents.
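The file-write row above can be illustrated with a small sketch: the event carries the path and a SHA-256 digest, never the file's bytes. The event shape is hypothetical, not the package's actual schema:

```python
import hashlib
from pathlib import Path

# Hypothetical event shape for a file write: path + content digest only.
# The digest proves *which* content was written without storing it.

def file_write_event(path: Path) -> dict:
    return {
        "type": "file_write",
        "path": str(path),
        "content_sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
    }
```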
---
## Configuration reference

```bash
# Grading
interview configure-api-key   # Anthropic API key (direct access)
interview configure-llm       # Enterprise: custom endpoint, proxy, format, extra headers

# Delivery
interview configure-relay     # Relay URL + auto-register HM account
interview configure-email     # SMTP fallback (no relay)

# Runtime
interview dashboard           # Local HM dashboard at localhost:7832
interview status              # Check active session
interview install --help      # Platform install options
```

All config is stored in `~/.interview/config.json` (permissions: 600).

---
## Privacy

**Candidate sessions** are stored on the relay (or locally, in email mode). The relay stores: `events.jsonl`, `manifest.json`, `report.html`, and `report.json`. Raw file contents are never stored — only paths, hashes, and command summaries.

**Grading** sends the session timeline and git diff to the configured AI endpoint (Anthropic API by default, or your enterprise proxy). No raw file contents. The grading call uses your own API key — interviewsignal never sees it.

**Self-hosted relay:** Run the relay inside your own infrastructure and nothing leaves your network. See [docs/self-hosting.md](docs/self-hosting.md).

No telemetry. No analytics. No tracking.

---
## Built with

Python stdlib only (no external dependencies for core or relay). Grading via the [Anthropic Messages API](https://docs.anthropic.com/en/api) or any compatible endpoint. The dashboard is a self-contained local HTTP server. Reports are single-file HTML. The relay is a single-process stdlib HTTP server backed by flat files.

---

## Contributing

**Worked examples** are the most valuable contribution. Run a real interview session, save the output to `worked/{slug}/`, write an honest `review.md` evaluating what the grading got right and wrong, and open a PR.

**Platform support** — each new platform is a ~30-line adapter in `cli.py`. If you use an AI coding tool not listed above, adding support is straightforward.

See [ARCHITECTURE.md](ARCHITECTURE.md) for module responsibilities and [docs/relay-api.md](docs/relay-api.md) for the full relay API contract.

---

<p align="center">
  <em>Thought process, not puzzles. Pure signal.</em>
</p>