agentic-threat-hunting-framework 0.2.3__py3-none-any.whl → 0.3.0__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {agentic_threat_hunting_framework-0.2.3.dist-info → agentic_threat_hunting_framework-0.3.0.dist-info}/METADATA +38 -40
- agentic_threat_hunting_framework-0.3.0.dist-info/RECORD +51 -0
- athf/__version__.py +1 -1
- athf/cli.py +7 -2
- athf/commands/__init__.py +4 -0
- athf/commands/agent.py +452 -0
- athf/commands/context.py +6 -9
- athf/commands/env.py +2 -2
- athf/commands/hunt.py +3 -3
- athf/commands/init.py +45 -0
- athf/commands/research.py +530 -0
- athf/commands/similar.py +5 -5
- athf/core/research_manager.py +419 -0
- athf/core/web_search.py +340 -0
- athf/data/__init__.py +19 -0
- athf/data/docs/CHANGELOG.md +147 -0
- athf/data/docs/CLI_REFERENCE.md +1797 -0
- athf/data/docs/INSTALL.md +594 -0
- athf/data/docs/README.md +31 -0
- athf/data/docs/environment.md +256 -0
- athf/data/docs/getting-started.md +419 -0
- athf/data/docs/level4-agentic-workflows.md +480 -0
- athf/data/docs/lock-pattern.md +149 -0
- athf/data/docs/maturity-model.md +400 -0
- athf/data/docs/why-athf.md +44 -0
- athf/data/hunts/FORMAT_GUIDELINES.md +507 -0
- athf/data/hunts/H-0001.md +453 -0
- athf/data/hunts/H-0002.md +436 -0
- athf/data/hunts/H-0003.md +546 -0
- athf/data/hunts/README.md +231 -0
- athf/data/integrations/MCP_CATALOG.md +45 -0
- athf/data/integrations/README.md +129 -0
- athf/data/integrations/quickstart/splunk.md +162 -0
- athf/data/knowledge/hunting-knowledge.md +2375 -0
- athf/data/prompts/README.md +172 -0
- athf/data/prompts/ai-workflow.md +581 -0
- athf/data/prompts/basic-prompts.md +316 -0
- athf/data/templates/HUNT_LOCK.md +228 -0
- agentic_threat_hunting_framework-0.2.3.dist-info/RECORD +0 -23
- {agentic_threat_hunting_framework-0.2.3.dist-info → agentic_threat_hunting_framework-0.3.0.dist-info}/WHEEL +0 -0
- {agentic_threat_hunting_framework-0.2.3.dist-info → agentic_threat_hunting_framework-0.3.0.dist-info}/entry_points.txt +0 -0
- {agentic_threat_hunting_framework-0.2.3.dist-info → agentic_threat_hunting_framework-0.3.0.dist-info}/licenses/LICENSE +0 -0
- {agentic_threat_hunting_framework-0.2.3.dist-info → agentic_threat_hunting_framework-0.3.0.dist-info}/top_level.txt +0 -0

athf/data/docs/level4-agentic-workflows.md
@@ -0,0 +1,480 @@

# Level 4: Agentic Workflows

At Level 4, you move from **reactive assistance** to **proactive automation**. Instead of asking your AI for help with each task, you deploy autonomous agents that monitor, reason, and act based on objectives you define.

**The key difference from Level 3:** Agents operate autonomously rather than waiting for your prompts. They detect events, make decisions within guardrails, and coordinate with each other through shared memory (your LOCK-structured hunts).

---

## What Changes at Level 4

### Level 3 vs. Level 4

**Level 3 (Interactive):**

- You: "Execute hunt H-0042"
- Claude: [Runs query, creates ticket, updates hunt]
- **Pattern:** You direct every action

**Level 4 (Autonomous):**

- Agent: [Monitors CTI feed every 6 hours]
- Agent: [Detects new Qakbot campaign, searches past hunts]
- Agent: [Generates draft hunt H-0156.md, flags for review]
- Agent: [Posts to Slack: "New hunt ready for review"]
- **Pattern:** Agent acts on objectives, you validate

### Success at Level 4

- Agents **monitor** CTI feeds without your intervention
- Agents **generate** draft hunts based on new threats
- Agents **coordinate** through shared LOCK memory
- You **validate and approve** rather than create from scratch

---

## Multi-Agent Architecture

At Level 4, multiple specialized agents work together, coordinating through your LOCK-structured hunt repository.

### Example Agent Roles

#### CTI Monitor Agent

**Role:** Watch threat feeds and identify relevant TTPs

**Triggers:**

- Scheduled: Every 6 hours
- Webhook: New threat intel published

**Actions:**

1. Query CTI feeds (MISP, AlienVault OTX, vendor feeds)
2. Extract MITRE ATT&CK techniques
3. Search past hunts: "Have we covered this technique?" (see the sketch below)
4. If new technique → trigger Hypothesis Generator Agent

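The coverage check in step 3 is easy to prototype outside any framework. A minimal sketch, assuming hunts mention MITRE technique IDs (e.g. `T1059.003`) somewhere in their Markdown body; the `has_coverage` helper and the `hunts/` path are illustrative, not part of ATHF:

```python
from pathlib import Path

def has_coverage(technique_id: str, hunt_dir: str = "hunts") -> list[str]:
    """Return the hunt files that already mention a MITRE ATT&CK technique ID."""
    matches = []
    for hunt_file in Path(hunt_dir).glob("H-*.md"):
        if technique_id in hunt_file.read_text(encoding="utf-8"):
            matches.append(hunt_file.name)
    return matches

# Example: decide whether to hand off to the Hypothesis Generator Agent
if not has_coverage("T1059.003"):
    print("No prior coverage - trigger hypothesis_generator")
```
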
#### Hypothesis Generator Agent

**Role:** Create draft hunt files in LOCK format

**Triggers:**

- Agent event from CTI Monitor: "New technique detected"

**Actions:**

1. Search past hunts for related TTPs
2. Review lessons learned from similar hunts
3. Generate LOCK-formatted hypothesis
4. Validate query syntax
5. Create draft hunt file (hunts/H-XXXX.md)
6. Trigger Validator Agent

#### Validator Agent

**Role:** Review draft hunts for feasibility

**Triggers:**

- Agent event from Hypothesis Generator: "Draft ready"

**Actions:**

1. Check query against data sources (from AGENTS.md)
2. Validate MITRE technique IDs (see the sketch below)
3. Verify data source availability
4. Flag issues or approve for review
5. Trigger Notifier Agent

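Two of the validator's checks reduce to small pure functions. A minimal sketch, assuming technique IDs follow the `Txxxx` / `Txxxx.yyy` format and that your Splunk queries express time bounds with `earliest=` / `latest=`; both helpers are illustrative:

```python
import re

MITRE_TECHNIQUE = re.compile(r"^T\d{4}(\.\d{3})?$")  # e.g. T1059 or T1059.003

def valid_technique_id(technique_id: str) -> bool:
    """Check that an ID looks like a MITRE ATT&CK technique or sub-technique."""
    return bool(MITRE_TECHNIQUE.match(technique_id))

def has_time_bounds(spl_query: str) -> bool:
    """Rough check that a Splunk query carries an explicit time bound."""
    return "earliest=" in spl_query or "latest=" in spl_query

assert valid_technique_id("T1059.003")
assert not valid_technique_id("1059.003")
```
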
#### Notifier Agent

**Role:** Alert analysts when human review is needed

**Triggers:**

- Agent event from Validator: "Review needed"

**Actions:**

1. Post to Slack (#threat-hunting channel), as sketched below
2. Create GitHub issue with label "hunt-review"
3. Send email summary to security team
4. Update hunt tracking dashboard

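The Slack step can be a single HTTP POST to an incoming webhook. A minimal sketch using only the standard library; the webhook URL is a placeholder you would take from your own Slack configuration, and error handling is omitted:

```python
import json
import urllib.request

def notify_slack(webhook_url: str, hunt_id: str, technique_id: str) -> None:
    """Post a review request for a draft hunt to a Slack incoming webhook."""
    payload = {"text": f"New hunt {hunt_id} ready for review (technique {technique_id})"}
    request = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # Slack replies with "ok" on success

# Placeholder URL - use your own incoming webhook
notify_slack("https://hooks.slack.com/services/XXX/YYY/ZZZ", "H-0156", "T1059.003")
```
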
---

## Example Multi-Agent Workflow

### Scenario: New Qakbot Campaign Detected

**1. CTI Monitor Agent (Autonomous - 06:00 UTC)**

```
[Agent runs scheduled job]
- Queries MISP feed for new indicators
- Detects: Qakbot campaign using T1059.003 (Windows Command Shell)
- Searches past hunts: grep "T1059.003" hunts/*.md
- Result: No prior coverage of this sub-technique
- Decision: New threat detected, trigger Hypothesis Generator
```

**2. Hypothesis Generator Agent (06:02 UTC)**

```
[Triggered by CTI Monitor]
- Reviews similar hunts: H-0042 (PowerShell), H-0089 (Process Execution)
- Extracts lessons: "Include parent-child process chains", "Filter System32 parents"
- Generates LOCK hypothesis:

  Learn: Qakbot campaign using T1059.003 detected in CTI
  Observe: Adversaries spawn cmd.exe from suspicious parents (Office, browsers)
  Check: [Generated Splunk query with bounds and limits]
  Keep: [Placeholder for execution results]

- Creates: hunts/H-0156.md
- Validates query syntax
- Decision: Draft ready, trigger Validator
```

**3. Validator Agent (06:03 UTC)**

```
[Triggered by Hypothesis Generator]
- Reads AGENTS.md for data source availability
- Checks: index=sysmon exists ✓
- Checks: EventCode=1 available ✓
- Validates: MITRE technique T1059.003 format ✓
- Reviews: Query has time bounds ✓
- Reviews: Query has result limits ✓
- Decision: Hunt validated, trigger Notifier
```

**4. Notifier Agent (06:04 UTC)**

```
[Triggered by Validator]
- Posts to Slack #threat-hunting:
  "🔍 New hunt H-0156 ready for review
  - Technique: T1059.003 (Windows Command Shell)
  - Threat: Qakbot campaign
  - Status: Draft generated, validation passed
  - Review: https://github.com/org/hunts/pull/156"

- Creates GitHub issue #156:
  Title: "Review hunt H-0156: Qakbot T1059.003 detection"
  Labels: hunt-review, auto-generated
  Assigned: @security-team

- Decision: Notification sent, workflow complete
```

**5. You Wake Up (08:30 UTC)**

```
Slack notification: "3 new draft hunts created overnight"

You review H-0156.md:
- Learn section: Clear threat context ✓
- Observe section: Specific hypothesis ✓
- Check section: Well-bounded query ✓
- Data sources: Available in our environment ✓

Decision: Approve and execute

You: "Execute hunt H-0156"
Claude: [Runs via Splunk MCP, finds 2 suspicious events, creates tickets]
```

**Result:** From threat detection to actionable investigation in under 3 hours, most of it automated.

---

## Example Agent Configuration

Below is a **conceptual example** showing how agents could be configured. This is not included in the repository - it represents a pattern you would implement using your chosen agent framework.

```yaml
# Example: config/agent_workflow.yaml
# Conceptual configuration for autonomous agents

agents:
  - name: cti_monitor
    role: Watch CTI feeds and identify relevant threats
    triggers:
      - schedule: "every 6 hours"
      - webhook: "/api/cti/new"
    actions:
      - search_hunts(technique_id) # Check if we've hunted this before
      - trigger_agent("hypothesis_generator") if new_technique
    guardrails:
      - log_all_searches: true
      - max_triggers_per_day: 10

  - name: hypothesis_generator
    role: Create LOCK-formatted hunt hypotheses
    triggers:
      - agent_event: "cti_monitor.new_technique"
    actions:
      - search_hunts(technique_id) # Get historical context
      - apply_lessons_learned()
      - generate_lock_hypothesis()
      - validate_query_syntax()
      - create_draft_hunt_file()
      - trigger_agent("validator")
    guardrails:
      - require_data_source_validation: true
      - max_drafts_per_day: 5

  - name: validator
    role: Review and validate draft hunts
    triggers:
      - agent_event: "hypothesis_generator.draft_ready"
    actions:
      - validate_query(query, platform)
      - check_data_source_compatibility()
      - verify_mitre_technique_format()
      - flag_for_human_review() if issues_found
      - trigger_agent("notifier") if validation_passed
    guardrails:
      - block_if_data_source_missing: true
      - require_time_bounds: true

  - name: notifier
    role: Alert analysts when hunts need review
    triggers:
      - agent_event: "validator.review_needed"
      - agent_event: "validator.validation_passed"
    actions:
      - post_to_slack(channel="#threat-hunting", hunt_id)
      - create_github_issue(labels=["hunt-review", "auto-generated"])
      - send_email_summary(recipients=security_team)
    guardrails:
      - rate_limit: "max 20 notifications per day"

# Global Guardrails
guardrails:
  - all_hunts_require_human_approval: true
  - no_automatic_query_execution: true
  - log_all_agent_actions: true
  - daily_summary_report: true
  - halt_on_error: true

# Shared Memory
memory:
  - hunt_repository: "hunts/"
  - context_files: ["AGENTS.md", "knowledge/hunting-knowledge.md"]
  - lessons_learned: "automatically extracted from Keep sections"
```

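Even a config like this can drive a very small dispatcher if you skip frameworks entirely. A minimal sketch, assuming PyYAML is installed and that you map each agent name to a plain Python function of your own; the `agent.event` strings follow the convention used in the config above:

```python
import yaml  # PyYAML

def load_agents(path: str = "config/agent_workflow.yaml") -> list[dict]:
    with open(path, encoding="utf-8") as f:
        return yaml.safe_load(f)["agents"]

def dispatch(event: str, agents: list[dict], handlers: dict) -> None:
    """Run every agent whose triggers list the given "agent.event" string."""
    for agent in agents:
        for trigger in agent.get("triggers", []):
            if trigger.get("agent_event") == event:
                handlers[agent["name"]]()  # your own function per agent

# Example wiring (handler functions are placeholders you implement yourself)
agents = load_agents()
handlers = {
    "hypothesis_generator": lambda: print("drafting hunt..."),
    "validator": lambda: print("validating draft..."),
    "notifier": lambda: print("notifying team..."),
}
dispatch("cti_monitor.new_technique", agents, handlers)
```
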
---

## Implementation Options

Level 4 can be built using various agent frameworks. Choose based on your team's experience and requirements.

### LangGraph

**Best for:** Stateful, multi-step workflows

**Strengths:**

- Built on the LangChain ecosystem
- Graph-based workflow definition
- State management between steps
- Good for complex orchestration

**Example use case:** Multi-agent pipeline with conditional branching based on validation results

### CrewAI

**Best for:** Role-based agent collaboration

**Strengths:**

- Define agents by role (researcher, analyst, writer)
- Natural delegation between agents
- Built-in task management
- Good for team-based patterns

**Example use case:** CTI researcher agent + hypothesis writer agent + validator agent working together

### AutoGen

**Best for:** Conversational agent patterns

**Strengths:**

- Microsoft-backed framework
- Multi-agent conversations
- Human-in-the-loop patterns
- Good for collaborative workflows

**Example use case:** Agents discussing and refining hunts through conversation before presenting to humans

### Custom Orchestration

**Best for:** Purpose-built solutions

**Strengths:**

- Full control over architecture
- Integrate exactly the tools you use
- No framework overhead
- Optimized for your environment

**Example use case:** Simple Python scripts with cron triggers and API calls to your specific stack (see the sketch below)

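As one concrete illustration of the custom route, the sketch below drafts LOCK-formatted stubs from a cron job. It assumes something upstream writes newly observed technique IDs to `new_techniques.txt` (a hypothetical hand-off file, one ID per line); it is a starting point under those assumptions, not a reference implementation:

```python
from pathlib import Path

HUNTS = Path("hunts")

def next_hunt_id() -> str:
    """Pick the next H-XXXX number based on files already in hunts/."""
    numbers = [int(p.stem.split("-")[1]) for p in HUNTS.glob("H-*.md")]
    return f"H-{max(numbers, default=0) + 1:04d}"

def draft_hunt(technique_id: str) -> Path:
    """Write a LOCK-formatted stub for a human to review and complete."""
    hunt_id = next_hunt_id()
    path = HUNTS / f"{hunt_id}.md"
    path.write_text(
        f"# {hunt_id}: Draft hunt for {technique_id}\n\n"
        f"**Learn**\nNew CTI references {technique_id}.\n\n"
        "**Observe**\n[Hypothesis to refine]\n\n"
        "**Check**\n[Bounded query to add]\n\n"
        "**Keep**\n[Results after execution]\n",
        encoding="utf-8",
    )
    return path

# Run from cron, e.g. every 6 hours
HUNTS.mkdir(exist_ok=True)
for technique_id in Path("new_techniques.txt").read_text().split():
    if not any(technique_id in p.read_text() for p in HUNTS.glob("H-*.md")):
        print(f"Drafted {draft_hunt(technique_id)} - flag for human review")
```
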
---

## Guardrails and Safety

At Level 4, guardrails are critical. Agents operate autonomously, so you must define boundaries.

### Essential Guardrails

**1. Human Approval Required**

```yaml
guardrails:
  - all_hunts_require_human_approval: true
  - no_automatic_query_execution: true
```

Agents can draft hunts, but humans must approve before execution.

**2. Logging Everything**

```yaml
guardrails:
  - log_all_agent_actions: true
  - audit_trail: true
```

Every agent action must be logged for review.

**3. Rate Limiting**

```yaml
guardrails:
  - max_drafts_per_day: 10
  - max_notifications_per_day: 20
```

Prevent runaway agents from flooding your team.

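Rate limits do not need framework support either; a counter persisted to disk is enough for a first pass. A minimal sketch in which the `.agent_state.json` file name and per-day keying are assumptions rather than an ATHF convention:

```python
import json
from datetime import date
from pathlib import Path

STATE = Path(".agent_state.json")

def under_limit(action: str, max_per_day: int) -> bool:
    """Increment today's counter for an action and report whether it is still allowed."""
    state = json.loads(STATE.read_text()) if STATE.exists() else {}
    key = f"{action}:{date.today().isoformat()}"
    state[key] = state.get(key, 0) + 1
    STATE.write_text(json.dumps(state))
    return state[key] <= max_per_day

if under_limit("drafts", max_per_day=10):
    print("OK to create another draft hunt")
else:
    print("Daily draft limit reached - skipping")
```
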
**4. Validation Gates**

```yaml
guardrails:
  - require_data_source_validation: true
  - require_time_bounds_in_queries: true
```

Agents must validate hunts against your environment before flagging for review.

**5. Halt on Error**

```yaml
guardrails:
  - halt_on_error: true
  - notify_on_failure: true
```

If an agent encounters an error, stop the workflow and alert humans.

---

## Getting Started with Level 4

### Phase 1: Planning (Week 1)

1. **Define objectives:** What should agents do autonomously?
2. **Identify workflows:** Which manual tasks are repetitive?
3. **Choose framework:** LangGraph, CrewAI, AutoGen, or custom?
4. **Design guardrails:** What are your safety boundaries?

### Phase 2: Single Agent (Weeks 2-4)

1. **Start simple:** Deploy one monitoring agent (CTI Monitor)
2. **Test in sandbox:** Run the agent in an isolated environment
3. **Validate outputs:** Review agent-generated drafts
4. **Tune parameters:** Adjust triggers and thresholds

### Phase 3: Multi-Agent (Weeks 5-8)

1. **Add a second agent:** Hypothesis Generator
2. **Test coordination:** Verify agents communicate correctly
3. **Add a third agent:** Validator
4. **Complete pipeline:** CTI Monitor → Generator → Validator → Notifier

### Phase 4: Production (Weeks 9-12)

1. **Deploy to production:** Run agents on real CTI feeds
2. **Monitor closely:** Review all agent actions daily
3. **Iterate on guardrails:** Adjust based on false positives
4. **Measure impact:** Track time saved and hunt quality

---

## Success Metrics

Track these metrics to measure Level 4 success:

**Automation Metrics:**

- **Draft hunts generated per week** (target: 3-5)
- **Time from threat detection to hunt draft** (target: <1 hour)
- **Human review time per draft** (target: <10 minutes)

**Quality Metrics:**

- **Draft approval rate** (target: >80%)
- **Query validation success rate** (target: >95%)
- **False positive rate in agent-generated hunts** (target: <20%)

**Impact Metrics:**

- **Total time saved per week** (target: 5-10 hours)
- **Hunts executed per month** (increase over Level 3 baseline)
- **Mean time to detect new threats** (decrease over manual process)

---

## Common Patterns

### Pattern 1: CTI-Driven Hunt Generation

**Flow:** CTI Feed → Technique Extraction → Hunt Generation → Human Review

**Agents:** CTI Monitor + Hypothesis Generator + Validator + Notifier

### Pattern 2: Alert-Driven Investigation

**Flow:** SIEM Alert → Historical Search → Draft Response → Ticket Creation

**Agents:** Alert Monitor + Context Researcher + Response Generator + Ticket Creator

### Pattern 3: Scheduled Hunt Refresh

**Flow:** Daily Schedule → Review Old Hunts → Re-execute Queries → Update Results

**Agents:** Scheduler + Hunt Executor + Results Analyzer + Documentation Updater

---

## Learn More

- **Maturity Model:** [maturity-model.md](maturity-model.md)
- **Level 3 Examples:** [../integrations/README.md](../integrations/README.md)
- **Getting Started:** [getting-started.md](getting-started.md)
- **Integration Catalog:** [../integrations/MCP_CATALOG.md](../integrations/MCP_CATALOG.md)

---

## Remember

**Success can look like many things at Level 4.** You might have agents that autonomously execute queries using MCP servers, or agents that orchestrate multi-step workflows. At this stage, you're mature enough to make architectural decisions based on your team's needs and risk tolerance.

**The key:** All agents share the same memory layer - your LOCK-structured hunts - ensuring consistency and enabling true coordination.

athf/data/docs/lock-pattern.md
@@ -0,0 +1,149 @@

# The LOCK Pattern

Every threat hunt follows the same basic loop: **Learn → Observe → Check → Keep**.

ATHF formalizes that loop with the **LOCK Pattern**, a lightweight structure that is readable by both humans and AI tools.

**Why LOCK?** It's small enough to use consistently, yet strict enough for agents to interpret.



## The Four Phases

### Learn: Gather Context

Gather context from threat intelligence, alerts, or anomalies.

**Example:**
> "We received CTI indicating increased use of Rundll32 for execution (T1218.011)."

**What to include:**

- Threat intelligence that motivated the hunt
- Recent incidents or alerts
- Available data sources (Sysmon, EDR, security logs)
- MITRE ATT&CK techniques if known

### Observe: Form Hypothesis

Form a hypothesis about what the adversary might be doing.

**Example:**
> "Adversaries may be using Rundll32 to load unsigned DLLs to bypass security controls."

**What to include:**

- Specific adversary behavior you're looking for
- Why this behavior is suspicious
- What makes it detectable
- Expected indicators or patterns

### Check: Test Hypothesis

Test the hypothesis using bounded queries or scripts.

**Example (Splunk):**

```spl
index=winlogs EventCode=4688 CommandLine="*rundll32*" NOT Signed="TRUE"
```

**What to include:**

- The actual query or detection logic
- Data source and time range used
- Query constraints (time bounds, result limits)
- Any filtering or correlation logic

### Keep: Record Findings

Record findings and lessons learned.

**Example:**
> "No evidence of execution found in the past 14 days. Query should be expanded to include encoded commands next run."

**What to include:**

- Results (found/not found)
- True positives and false positives
- Lessons learned
- Next steps or follow-up actions
- Links to related hunts

## Example Hunt Using LOCK

```markdown
# H-0031: Detecting Remote Management Abuse via PowerShell and WMI

**Learn**
Incident response from a recent ransomware case showed adversaries using PowerShell remoting and WMI to move laterally between Windows hosts.
These techniques often bypass EDR detections that look only for credential theft or file-based artifacts.
Telemetry sources available: Sysmon (Event IDs 1, 3, 10), Windows Security Logs (Event ID 4624), and EDR process trees.

**Observe**
Adversaries may execute PowerShell commands remotely or invoke WMI for lateral movement using existing admin credentials.
Suspicious behavior includes PowerShell or wmiprvse.exe processes initiated by non-admin accounts or targeting multiple remote systems in a short time window.

**Check**
index=sysmon OR index=edr
(EventCode=1 OR EventCode=10)
| search (Image="*powershell.exe" OR Image="*wmiprvse.exe")
| stats count dc(DestinationHostname) as unique_targets by User, Computer, CommandLine
| where unique_targets > 3
| sort - unique_targets

**Keep**
Detected two accounts showing lateral movement patterns:
- `svc_backup` executed PowerShell sessions on five hosts in under ten minutes
- `itadmin-temp` invoked wmiprvse.exe from a workstation instead of a jump server

Confirmed `svc_backup` activity as legitimate backup automation.
Marked `itadmin-temp` as suspicious; account disabled pending review.

Next iteration: expand to include remote registry and PSExec telemetry for broader coverage.
```

**See full hunt examples:**
- [H-0001: macOS Information Stealer Detection](../hunts/H-0001.md) - Complete hunt with YAML frontmatter, detailed LOCK sections, query evolution, and results
- [H-0002: Linux Crontab Persistence Detection](../hunts/H-0002.md) - Multi-query approach with behavioral analysis
- [H-0003: AWS Lambda Persistence Detection](../hunts/H-0003.md) - Cloud hunting with CloudTrail correlation
- [Hunt Showcase](../../../SHOWCASE.md) - Side-by-side comparison of all three hunts

## Best Practices

**For Learn:**

- Reference specific threat intelligence or incidents
- List available data sources
- Include MITRE ATT&CK technique IDs

**For Observe:**

- Be specific about the behavior you're hunting
- Explain why it's suspicious
- State what makes it detectable

**For Check:**

- Always include time bounds in queries
- Limit result sets to avoid expensive operations
- Document the query language (Splunk, KQL, SQL, etc.)

**For Keep:**

- Be honest about false positives
- Document what worked and what didn't
- Include next steps for iteration
- Link to related hunts

## Why LOCK Works

**Without LOCK:** Every hunt is a fresh tab explosion.

**With LOCK:** Every hunt becomes part of the memory layer.

By capturing every hunt in this format, ATHF makes it possible for AI assistants to recall prior work, generate new hypotheses, and suggest refined queries based on past results.

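That recall is largely a parsing problem. A minimal sketch of pulling Keep sections out of a hunt repository, assuming the plain `**Keep**` heading style used in the example above; hunts written with other heading styles would need a different pattern:

```python
import re
from pathlib import Path

# Capture everything after "**Keep**" up to the next bold heading or end of file
KEEP_SECTION = re.compile(r"\*\*Keep\*\*\s*\n(.*?)(?=\n\*\*[A-Z]|\Z)", re.DOTALL)

def lessons_learned(hunt_dir: str = "hunts") -> dict[str, str]:
    """Map each hunt file to the text of its Keep section, if one is present."""
    lessons = {}
    for hunt_file in Path(hunt_dir).glob("H-*.md"):
        match = KEEP_SECTION.search(hunt_file.read_text(encoding="utf-8"))
        if match:
            lessons[hunt_file.name] = match.group(1).strip()
    return lessons

# Feed past findings to an assistant before drafting the next hypothesis
for hunt, keep_text in lessons_learned().items():
    print(f"--- {hunt} ---\n{keep_text}\n")
```
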
## Templates

See [templates/](../templates/) for ready-to-use LOCK hunt templates.