@namch/agent-assistant 1.0.0 → 1.0.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +114 -522
- package/agents/backend-engineer.md +0 -8
- package/agents/brainstormer.md +0 -6
- package/agents/business-analyst.md +0 -5
- package/agents/database-architect.md +0 -6
- package/agents/debugger.md +0 -6
- package/agents/designer.md +0 -5
- package/agents/devops-engineer.md +0 -7
- package/agents/docs-manager.md +0 -6
- package/agents/frontend-engineer.md +0 -7
- package/agents/game-engineer.md +0 -7
- package/agents/mobile-engineer.md +0 -7
- package/agents/performance-engineer.md +0 -7
- package/agents/planner.md +0 -6
- package/agents/project-manager.md +0 -6
- package/agents/researcher.md +0 -5
- package/agents/reviewer.md +0 -6
- package/agents/scouter.md +0 -6
- package/agents/security-engineer.md +0 -7
- package/agents/tech-lead.md +0 -7
- package/agents/tester.md +0 -5
- package/cli/README.md +19 -10
- package/documents/business/business-features.md +1 -1
- package/documents/business/business-prd.md +4 -4
- package/documents/knowledge-architecture.md +1 -1
- package/documents/knowledge-domain.md +1 -1
- package/documents/knowledge-overview.md +14 -29
- package/documents/knowledge-source-base.md +14 -14
- package/package.json +1 -1
- package/rules/QUICK-REFERENCE.md +4 -1
- package/rules/SKILL-DISCOVERY.md +37 -14
- package/skills/active-directory-attacks/SKILL.md +383 -0
- package/skills/active-directory-attacks/references/advanced-attacks.md +382 -0
- package/skills/agent-evaluation/SKILL.md +64 -0
- package/skills/agent-memory-mcp/SKILL.md +82 -0
- package/skills/agent-memory-systems/SKILL.md +67 -0
- package/skills/agent-tool-builder/SKILL.md +53 -0
- package/skills/ai-agents-architect/SKILL.md +90 -0
- package/skills/ai-product/SKILL.md +54 -0
- package/skills/ai-wrapper-product/SKILL.md +273 -0
- package/skills/api-documentation-generator/SKILL.md +484 -0
- package/skills/api-fuzzing-bug-bounty/SKILL.md +433 -0
- package/skills/api-security-best-practices/SKILL.md +907 -0
- package/skills/autonomous-agent-patterns/SKILL.md +761 -0
- package/skills/autonomous-agents/SKILL.md +68 -0
- package/skills/aws-penetration-testing/SKILL.md +405 -0
- package/skills/aws-penetration-testing/references/advanced-aws-pentesting.md +469 -0
- package/skills/azure-functions/SKILL.md +42 -0
- package/skills/backend-dev-guidelines/SKILL.md +342 -0
- package/skills/backend-dev-guidelines/resources/architecture-overview.md +451 -0
- package/skills/backend-dev-guidelines/resources/async-and-errors.md +307 -0
- package/skills/backend-dev-guidelines/resources/complete-examples.md +638 -0
- package/skills/backend-dev-guidelines/resources/configuration.md +275 -0
- package/skills/backend-dev-guidelines/resources/database-patterns.md +224 -0
- package/skills/backend-dev-guidelines/resources/middleware-guide.md +213 -0
- package/skills/backend-dev-guidelines/resources/routing-and-controllers.md +756 -0
- package/skills/backend-dev-guidelines/resources/sentry-and-monitoring.md +336 -0
- package/skills/backend-dev-guidelines/resources/services-and-repositories.md +789 -0
- package/skills/backend-dev-guidelines/resources/testing-guide.md +235 -0
- package/skills/backend-dev-guidelines/resources/validation-patterns.md +754 -0
- package/skills/broken-authentication/SKILL.md +476 -0
- package/skills/bullmq-specialist/SKILL.md +57 -0
- package/skills/bun-development/SKILL.md +691 -0
- package/skills/burp-suite-testing/SKILL.md +380 -0
- package/skills/cloud-penetration-testing/SKILL.md +501 -0
- package/skills/cloud-penetration-testing/references/advanced-cloud-scripts.md +318 -0
- package/skills/computer-use-agents/SKILL.md +315 -0
- package/skills/content-creator/SKILL.md +248 -0
- package/skills/content-creator/assets/content_calendar_template.md +99 -0
- package/skills/content-creator/references/brand_guidelines.md +199 -0
- package/skills/content-creator/references/content_frameworks.md +534 -0
- package/skills/content-creator/references/social_media_optimization.md +317 -0
- package/skills/content-creator/scripts/brand_voice_analyzer.py +185 -0
- package/skills/content-creator/scripts/seo_optimizer.py +419 -0
- package/skills/context-window-management/SKILL.md +53 -0
- package/skills/conversation-memory/SKILL.md +61 -0
- package/skills/copy-editing/SKILL.md +439 -0
- package/skills/copywriting/SKILL.md +225 -0
- package/skills/crewai/SKILL.md +243 -0
- package/skills/discord-bot-architect/SKILL.md +277 -0
- package/skills/dispatching-parallel-agents/SKILL.md +180 -0
- package/skills/email-sequence/SKILL.md +925 -0
- package/skills/email-systems/SKILL.md +54 -0
- package/skills/ethical-hacking-methodology/SKILL.md +466 -0
- package/skills/executing-plans/SKILL.md +76 -0
- package/skills/file-path-traversal/SKILL.md +486 -0
- package/skills/finishing-a-development-branch/SKILL.md +200 -0
- package/skills/frontend-dev-guidelines/SKILL.md +359 -0
- package/skills/frontend-dev-guidelines/resources/common-patterns.md +331 -0
- package/skills/frontend-dev-guidelines/resources/complete-examples.md +872 -0
- package/skills/frontend-dev-guidelines/resources/component-patterns.md +502 -0
- package/skills/frontend-dev-guidelines/resources/data-fetching.md +767 -0
- package/skills/frontend-dev-guidelines/resources/file-organization.md +502 -0
- package/skills/frontend-dev-guidelines/resources/loading-and-error-states.md +501 -0
- package/skills/frontend-dev-guidelines/resources/performance.md +406 -0
- package/skills/frontend-dev-guidelines/resources/routing-guide.md +364 -0
- package/skills/frontend-dev-guidelines/resources/styling-guide.md +428 -0
- package/skills/frontend-dev-guidelines/resources/typescript-standards.md +418 -0
- package/skills/gcp-cloud-run/SKILL.md +288 -0
- package/skills/git-pushing/SKILL.md +33 -0
- package/skills/git-pushing/scripts/smart_commit.sh +19 -0
- package/skills/github-workflow-automation/SKILL.md +846 -0
- package/skills/html-injection-testing/SKILL.md +498 -0
- package/skills/idor-testing/SKILL.md +442 -0
- package/skills/inngest/SKILL.md +55 -0
- package/skills/javascript-mastery/SKILL.md +645 -0
- package/skills/kaizen/SKILL.md +730 -0
- package/skills/langfuse/SKILL.md +238 -0
- package/skills/langgraph/SKILL.md +287 -0
- package/skills/linux-privilege-escalation/SKILL.md +504 -0
- package/skills/llm-app-patterns/SKILL.md +760 -0
- package/skills/metasploit-framework/SKILL.md +478 -0
- package/skills/multi-agent-brainstorming/SKILL.md +256 -0
- package/skills/neon-postgres/SKILL.md +56 -0
- package/skills/nextjs-supabase-auth/SKILL.md +56 -0
- package/skills/nosql-expert/SKILL.md +111 -0
- package/skills/pentest-checklist/SKILL.md +334 -0
- package/skills/pentest-commands/SKILL.md +438 -0
- package/skills/plaid-fintech/SKILL.md +50 -0
- package/skills/planning-with-files/SKILL.md +211 -0
- package/skills/planning-with-files/examples.md +202 -0
- package/skills/planning-with-files/reference.md +218 -0
- package/skills/planning-with-files/scripts/check-complete.sh +44 -0
- package/skills/planning-with-files/scripts/init-session.sh +120 -0
- package/skills/planning-with-files/templates/findings.md +95 -0
- package/skills/planning-with-files/templates/progress.md +114 -0
- package/skills/planning-with-files/templates/task_plan.md +132 -0
- package/skills/privilege-escalation-methods/SKILL.md +333 -0
- package/skills/production-code-audit/SKILL.md +540 -0
- package/skills/prompt-caching/SKILL.md +61 -0
- package/skills/prompt-engineering/SKILL.md +171 -0
- package/skills/prompt-library/SKILL.md +322 -0
- package/skills/rag-engineer/SKILL.md +90 -0
- package/skills/rag-implementation/SKILL.md +63 -0
- package/skills/react-ui-patterns/SKILL.md +289 -0
- package/skills/red-team-tools/SKILL.md +310 -0
- package/skills/scanning-tools/SKILL.md +589 -0
- package/skills/shodan-reconnaissance/SKILL.md +503 -0
- package/skills/slack-bot-builder/SKILL.md +264 -0
- package/skills/smtp-penetration-testing/SKILL.md +500 -0
- package/skills/social-content/SKILL.md +807 -0
- package/skills/software-architecture/SKILL.md +75 -0
- package/skills/sql-injection-testing/SKILL.md +448 -0
- package/skills/sqlmap-database-pentesting/SKILL.md +400 -0
- package/skills/ssh-penetration-testing/SKILL.md +488 -0
- package/skills/stripe-integration/SKILL.md +69 -0
- package/skills/subagent-driven-development/SKILL.md +240 -0
- package/skills/subagent-driven-development/code-quality-reviewer-prompt.md +20 -0
- package/skills/subagent-driven-development/implementer-prompt.md +78 -0
- package/skills/subagent-driven-development/spec-reviewer-prompt.md +61 -0
- package/skills/tavily-web/SKILL.md +36 -0
- package/skills/telegram-bot-builder/SKILL.md +254 -0
- package/skills/test-driven-development/SKILL.md +371 -0
- package/skills/test-driven-development/testing-anti-patterns.md +299 -0
- package/skills/test-fixing/SKILL.md +119 -0
- package/skills/top-web-vulnerabilities/SKILL.md +543 -0
- package/skills/trigger-dev/SKILL.md +67 -0
- package/skills/twilio-communications/SKILL.md +295 -0
- package/skills/upstash-qstash/SKILL.md +68 -0
- package/skills/verification-before-completion/SKILL.md +139 -0
- package/skills/voice-agents/SKILL.md +68 -0
- package/skills/voice-ai-development/SKILL.md +302 -0
- package/skills/windows-privilege-escalation/SKILL.md +496 -0
- package/skills/wireshark-analysis/SKILL.md +497 -0
- package/skills/wordpress-penetration-testing/SKILL.md +485 -0
- package/skills/workflow-automation/SKILL.md +68 -0
- package/skills/xss-html-injection/SKILL.md +499 -0
- package/skills/zapier-make-patterns/SKILL.md +67 -0

@@ -0,0 +1,238 @@
---
name: langfuse
description: "Expert in Langfuse - the open-source LLM observability platform. Covers tracing, prompt management, evaluation, datasets, and integration with LangChain, LlamaIndex, and OpenAI. Essential for debugging, monitoring, and improving LLM applications in production. Use when: langfuse, llm observability, llm tracing, prompt management, llm evaluation."
source: vibeship-spawner-skills (Apache 2.0)
---

# Langfuse

**Role**: LLM Observability Architect

You are an expert in LLM observability and evaluation. You think in terms of traces, spans, and metrics. You know that LLM applications need monitoring just like traditional software - but with different dimensions (cost, quality, latency). You use data to drive prompt improvements and catch regressions.

## Capabilities

- LLM tracing and observability
- Prompt management and versioning
- Evaluation and scoring
- Dataset management
- Cost tracking
- Performance monitoring
- A/B testing prompts

## Requirements

- Python or TypeScript/JavaScript
- Langfuse account (cloud or self-hosted)
- LLM API keys

## Patterns
### Basic Tracing Setup

Instrument LLM calls with Langfuse.

**When to use**: Any LLM application

```python
import openai
from langfuse import Langfuse

# Initialize client
langfuse = Langfuse(
    public_key="pk-...",
    secret_key="sk-...",
    host="https://cloud.langfuse.com"  # or self-hosted URL
)

# Create a trace for a user request
trace = langfuse.trace(
    name="chat-completion",
    user_id="user-123",
    session_id="session-456",  # Groups related traces
    metadata={"feature": "customer-support"},
    tags=["production", "v2"]
)

# Log a generation (LLM call)
generation = trace.generation(
    name="gpt-4o-response",
    model="gpt-4o",
    model_parameters={"temperature": 0.7},
    input={"messages": [{"role": "user", "content": "Hello"}]},
    metadata={"attempt": 1}
)

# Make the actual LLM call
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)

# Complete the generation with output and token usage
generation.end(
    output=response.choices[0].message.content,
    usage={
        "input": response.usage.prompt_tokens,
        "output": response.usage.completion_tokens
    }
)

# Score the trace
trace.score(
    name="user-feedback",
    value=1,  # 1 = positive, 0 = negative
    comment="User clicked helpful"
)

# Flush before exit (important in serverless)
langfuse.flush()
```
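Langfuse derives cost metrics from the usage you report on each generation. As a back-of-envelope sanity check, the arithmetic is just tokens times per-token price; the prices below are placeholder assumptions, not real rates:

```python
# Back-of-envelope cost check for reported usage.
# Per-1M-token prices here are placeholders (assumptions), not real rates.
PRICES = {"gpt-4o": {"input": 2.50, "output": 10.00}}  # USD per 1M tokens

def generation_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one generation, given per-1M-token prices."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Same numbers you would pass to generation.end(usage=...)
cost = generation_cost("gpt-4o", input_tokens=1200, output_tokens=300)
print(f"${cost:.6f}")  # $0.006000
```

Comparing a check like this against the dashboard's cost column is a quick way to confirm your usage reporting is wired correctly.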

### OpenAI Integration

Automatic tracing with the OpenAI SDK.

**When to use**: OpenAI-based applications

```python
from langfuse.openai import openai

# Drop-in replacement for the OpenAI client -
# all calls are automatically traced.

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    # Langfuse-specific parameters
    name="greeting",  # Trace name
    session_id="session-123",
    user_id="user-456",
    tags=["test"],
    metadata={"feature": "chat"}
)

# Works with streaming
stream = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
    name="story-generation"
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")

# Works with async
import asyncio
from langfuse.openai import AsyncOpenAI

async_client = AsyncOpenAI()

async def main():
    response = await async_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
        name="async-greeting"
    )

asyncio.run(main())
```

### LangChain Integration

Trace LangChain applications.

**When to use**: LangChain-based applications

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langfuse.callback import CallbackHandler

# Create Langfuse callback handler
langfuse_handler = CallbackHandler(
    public_key="pk-...",
    secret_key="sk-...",
    host="https://cloud.langfuse.com",
    session_id="session-123",
    user_id="user-456"
)

# Use with any LangChain component
llm = ChatOpenAI(model="gpt-4o")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])

chain = prompt | llm

# Pass the handler on invoke
response = chain.invoke(
    {"input": "Hello"},
    config={"callbacks": [langfuse_handler]}
)

# Or set as default
import langchain
langchain.callbacks.manager.set_handler(langfuse_handler)

# Then all calls are traced
response = chain.invoke({"input": "Hello"})

# Works with agents, retrievers, etc.
from langchain.agents import AgentExecutor, create_openai_tools_agent

# `tools` is a list of LangChain tools defined elsewhere
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

result = agent_executor.invoke(
    {"input": "What's the weather?"},
    config={"callbacks": [langfuse_handler]}
)
```
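Callback-based tracing is not Langfuse-specific: the mechanism is just a handler object whose hooks fire around each step of the pipeline. A minimal framework-free sketch of the idea (the names here are illustrative, not a real Langfuse or LangChain API):

```python
import time

class MiniTracer:
    """Toy callback handler: records one span per pipeline step."""
    def __init__(self):
        self.spans = []

    def on_start(self, name: str) -> dict:
        span = {"name": name, "start": time.time()}
        self.spans.append(span)
        return span

    def on_end(self, span: dict, output) -> None:
        span["output"] = output
        span["duration"] = time.time() - span["start"]

def run_step(tracer: MiniTracer, name: str, fn, *args):
    """Run one pipeline step with tracing hooks around it."""
    span = tracer.on_start(name)
    result = fn(*args)
    tracer.on_end(span, result)
    return result

tracer = MiniTracer()
text = run_step(tracer, "prompt", lambda q: f"User asks: {q}", "Hello")
reply = run_step(tracer, "llm", str.upper, text)  # stand-in for the model call
print([s["name"] for s in tracer.spans])  # ['prompt', 'llm']
```

Real handlers add batching, error capture, and nesting, but the control flow is the same: wrap each step, record timing and output.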

## Anti-Patterns

### ❌ Not Flushing in Serverless

**Why bad**: Traces are batched and sent asynchronously. A serverless function may exit before the batch is flushed, and that data is lost.

**Instead**: Always call `langfuse.flush()` before exit. Use context managers where available. Consider sync mode for critical traces.
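Why an unflushed batch loses data can be shown with a toy buffered exporter (a stand-in for the SDK's internal queue, not actual Langfuse internals):

```python
class BufferedExporter:
    """Toy stand-in for an SDK's batched trace queue."""
    def __init__(self, batch_size: int = 10):
        self.batch_size = batch_size
        self.pending = []
        self.exported = []

    def record(self, event: str) -> None:
        self.pending.append(event)
        if len(self.pending) >= self.batch_size:  # auto-flush on a full batch
            self.flush()

    def flush(self) -> None:
        self.exported.extend(self.pending)
        self.pending.clear()

exporter = BufferedExporter(batch_size=10)
for i in range(3):
    exporter.record(f"trace-{i}")

# If the process exits here, 3 events are still sitting in `pending`.
print(len(exporter.exported))  # 0

exporter.flush()  # the explicit flush is what saves them
print(len(exporter.exported))  # 3
```

Short-lived functions rarely fill a batch, which is exactly why the explicit flush matters there.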

### ❌ Tracing Everything

**Why bad**: Noisy traces, performance overhead, and the important information becomes hard to find.

**Instead**: Focus on LLM calls, key logic, and user actions. Group related operations. Use meaningful span names.

### ❌ No User/Session IDs

**Why bad**: You can't debug specific users, can't track sessions, and analytics are limited.

**Instead**: Always pass user_id and session_id. Use consistent identifiers. Add relevant metadata.

## Limitations

- Self-hosting requires running your own infrastructure
- High-volume workloads may need optimization
- The real-time dashboard has some latency
- Evaluation requires setup

## Related Skills

Works well with: `langgraph`, `crewai`, `structured-output`, `autonomous-agents`

@@ -0,0 +1,287 @@
---
name: langgraph
description: "Expert in LangGraph - the production-grade framework for building stateful, multi-actor AI applications. Covers graph construction, state management, cycles and branches, persistence with checkpointers, human-in-the-loop patterns, and the ReAct agent pattern. Used in production at LinkedIn, Uber, and 400+ companies. This is LangChain's recommended approach for building agents. Use when: langgraph, langchain agent, stateful agent, agent graph, react agent."
source: vibeship-spawner-skills (Apache 2.0)
---

# LangGraph

**Role**: LangGraph Agent Architect

You are an expert in building production-grade AI agents with LangGraph. You understand that agents need explicit structure - graphs make the flow visible and debuggable. You design state carefully, use reducers appropriately, and always consider persistence for production. You know when cycles are needed and how to prevent infinite loops.

## Capabilities

- Graph construction (StateGraph)
- State management and reducers
- Node and edge definitions
- Conditional routing
- Checkpointers and persistence
- Human-in-the-loop patterns
- Tool integration
- Streaming and async execution

## Requirements

- Python 3.9+
- langgraph package
- LLM API access (OpenAI, Anthropic, etc.)
- Understanding of graph concepts

## Patterns
### Basic Agent Graph

A simple ReAct-style agent with tools.

**When to use**: Single agent with tool calling

```python
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

# 1. Define State
class AgentState(TypedDict):
    # The add_messages reducer appends instead of overwriting
    messages: Annotated[list, add_messages]

# 2. Define Tools
@tool
def search(query: str) -> str:
    """Search the web for information."""
    # Implementation here
    return f"Results for: {query}"

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    # NOTE: eval() on untrusted input is unsafe - use a real
    # expression parser in production. Fine for a demo.
    return str(eval(expression))

tools = [search, calculator]

# 3. Create LLM with tools
llm = ChatOpenAI(model="gpt-4o").bind_tools(tools)

# 4. Define Nodes
def agent(state: AgentState) -> dict:
    """The agent node - calls the LLM."""
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

# Tool node handles tool execution
tool_node = ToolNode(tools)

# 5. Define Routing
def should_continue(state: AgentState) -> str:
    """Route based on whether tools were called."""
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return END

# 6. Build Graph
graph = StateGraph(AgentState)

# Add nodes
graph.add_node("agent", agent)
graph.add_node("tools", tool_node)

# Add edges
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_continue, ["tools", END])
graph.add_edge("tools", "agent")  # Loop back

# Compile
app = graph.compile()

# 7. Run
result = app.invoke({
    "messages": [("user", "What is 25 * 4?")]
})
```
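The agent → tools → agent cycle above is, at its core, a loop: call the model, execute any requested tools, feed results back, and stop when no tools are requested. A framework-free sketch with a stubbed model makes the control flow concrete (everything here is illustrative, not LangGraph API):

```python
# Stubbed "model": asks for the calculator once, then answers.
def fake_llm(messages):
    if not any(role == "tool" for role, _ in messages):
        return ("assistant", {"tool": "calculator", "args": "25 * 4"})
    return ("assistant", "The answer is 100")

TOOLS = {"calculator": lambda expr: str(eval(expr))}  # demo only; eval is unsafe

def react_loop(question: str, max_iters: int = 5):
    messages = [("user", question)]
    for _ in range(max_iters):  # bounded, so it can't loop forever
        role, content = fake_llm(messages)
        if isinstance(content, dict):          # model requested a tool
            result = TOOLS[content["tool"]](content["args"])
            messages.append(("tool", result))  # feed the result back
        else:                                  # plain answer: we're done
            messages.append((role, content))
            return content
    return "max iterations reached"

print(react_loop("What is 25 * 4?"))  # The answer is 100
```

LangGraph replaces the hand-written loop with explicit nodes and edges, which is what makes the same flow visible, persistable, and debuggable.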

### State with Reducers

Complex state management with custom reducers.

**When to use**: Multiple agents updating shared state

```python
from typing import Annotated, TypedDict
from operator import add
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages

# Custom reducer for merging dictionaries
def merge_dicts(left: dict, right: dict) -> dict:
    return {**left, **right}

# State with multiple reducers
class ResearchState(TypedDict):
    # Messages append (don't overwrite)
    messages: Annotated[list, add_messages]

    # Research findings merge
    findings: Annotated[dict, merge_dicts]

    # Sources accumulate
    sources: Annotated[list[str], add]

    # Current step (overwrites - no reducer)
    current_step: str

    # Error count (custom reducer)
    errors: Annotated[int, lambda a, b: a + b]

# Nodes return partial state updates
def researcher(state: ResearchState) -> dict:
    # Only return the fields being updated
    return {
        "findings": {"topic_a": "New finding"},
        "sources": ["source1.com"],
        "current_step": "researching"
    }

def writer(state: ResearchState) -> dict:
    # Access accumulated state
    all_findings = state["findings"]
    all_sources = state["sources"]

    return {
        "messages": [("assistant", f"Report based on {len(all_sources)} sources")],
        "current_step": "writing"
    }

# Build graph
graph = StateGraph(ResearchState)
graph.add_node("researcher", researcher)
graph.add_node("writer", writer)
# ... add edges
```
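The reducer mechanics can be seen without LangGraph: each annotated field gets a function that combines the old value with a node's partial update, and fields without a reducer are simply overwritten. A plain-Python sketch of that merge step (an illustration of the semantics, not LangGraph internals):

```python
from operator import add

def merge_dicts(left: dict, right: dict) -> dict:
    return {**left, **right}

# field -> reducer; fields without a reducer are overwritten
REDUCERS = {"findings": merge_dicts, "sources": add}

def apply_update(state: dict, update: dict) -> dict:
    """Fold one node's partial update into the state, reducer-style."""
    new_state = dict(state)
    for key, value in update.items():
        reducer = REDUCERS.get(key)
        new_state[key] = reducer(state[key], value) if reducer else value
    return new_state

state = {"findings": {}, "sources": [], "current_step": "start"}
state = apply_update(state, {"findings": {"a": 1}, "sources": ["s1"]})
state = apply_update(state, {"findings": {"b": 2}, "sources": ["s2"],
                             "current_step": "writing"})
print(state)
# {'findings': {'a': 1, 'b': 2}, 'sources': ['s1', 's2'], 'current_step': 'writing'}
```

This is why nodes only return the fields they changed: the reducers decide how each field accumulates.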

### Conditional Branching

Route to different paths based on state.

**When to use**: Multiple possible workflows

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class RouterState(TypedDict):
    query: str
    query_type: str
    result: str

def classifier(state: RouterState) -> dict:
    """Classify the query type."""
    query = state["query"].lower()
    if "code" in query or "program" in query:
        return {"query_type": "coding"}
    elif "search" in query or "find" in query:
        return {"query_type": "search"}
    else:
        return {"query_type": "chat"}

def coding_agent(state: RouterState) -> dict:
    return {"result": "Here's your code..."}

def search_agent(state: RouterState) -> dict:
    return {"result": "Search results..."}

def chat_agent(state: RouterState) -> dict:
    return {"result": "Let me help..."}

# Routing function
def route_query(state: RouterState) -> str:
    """Route to the appropriate agent."""
    return state["query_type"]  # Returns a node name

# Build graph
graph = StateGraph(RouterState)

graph.add_node("classifier", classifier)
graph.add_node("coding", coding_agent)
graph.add_node("search", search_agent)
graph.add_node("chat", chat_agent)

graph.add_edge(START, "classifier")

# Conditional edges from the classifier
graph.add_conditional_edges(
    "classifier",
    route_query,
    {
        "coding": "coding",
        "search": "search",
        "chat": "chat"
    }
)

# All agents lead to END
graph.add_edge("coding", END)
graph.add_edge("search", END)
graph.add_edge("chat", END)

app = graph.compile()
```
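Because classifier and agent nodes are plain functions, the routing logic can be sanity-checked without compiling a graph. This standalone sketch mirrors the keyword rules above as ordinary calls:

```python
def classify(query: str) -> str:
    """Same keyword rules as the classifier node above."""
    q = query.lower()
    if "code" in q or "program" in q:
        return "coding"
    if "search" in q or "find" in q:
        return "search"
    return "chat"

AGENTS = {
    "coding": lambda s: {"result": "Here's your code..."},
    "search": lambda s: {"result": "Search results..."},
    "chat": lambda s: {"result": "Let me help..."},
}

def run(query: str) -> dict:
    """classifier -> conditional edge -> agent -> END, as plain calls."""
    state = {"query": query, "query_type": classify(query)}
    state.update(AGENTS[state["query_type"]](state))
    return state

print(run("find cheap flights")["query_type"])  # search
print(run("write a program")["query_type"])     # coding
```

Unit-testing node functions in isolation like this catches routing bugs before you wire up edges.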

## Anti-Patterns

### ❌ Infinite Loop Without Exit

**Why bad**: The agent loops forever, burning tokens and cost until it eventually errors out.

**Instead**: Always have exit conditions:
- A max-iterations counter in state
- Clear END conditions in routing
- A timeout at the application level

```python
def should_continue(state):
    if state["iterations"] > 10:
        return END
    if state["task_complete"]:
        return END
    return "agent"
```

### ❌ Stateless Nodes

**Why bad**: Loses LangGraph's benefits - state isn't persisted, and conversations can't be resumed.

**Instead**: Always use state for data flow. Return state updates from nodes. Use reducers for accumulation. Let LangGraph manage state.

### ❌ Giant Monolithic State

**Why bad**: Hard to reason about, unnecessary data in context, and serialization overhead.

**Instead**: Use input/output schemas for clean interfaces, private state for internal data, and a clear separation of concerns.

## Limitations

- Python-only (the TypeScript port is in its early stages)
- Learning curve for graph concepts
- State management complexity
- Debugging can be challenging

## Related Skills

Works well with: `crewai`, `autonomous-agents`, `langfuse`, `structured-output`