swarm_memory 2.1.0 → 2.1.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (37)
  1. checksums.yaml +4 -4
  2. data/lib/claude_swarm/cli.rb +9 -11
  3. data/lib/claude_swarm/commands/ps.rb +1 -2
  4. data/lib/claude_swarm/configuration.rb +2 -3
  5. data/lib/claude_swarm/orchestrator.rb +43 -44
  6. data/lib/claude_swarm/system_utils.rb +4 -4
  7. data/lib/claude_swarm/version.rb +1 -1
  8. data/lib/claude_swarm.rb +4 -6
  9. data/lib/swarm_cli/commands/mcp_tools.rb +3 -3
  10. data/lib/swarm_cli/config_loader.rb +11 -10
  11. data/lib/swarm_cli/version.rb +1 -1
  12. data/lib/swarm_cli.rb +3 -2
  13. data/lib/swarm_memory/adapters/filesystem_adapter.rb +0 -12
  14. data/lib/swarm_memory/core/storage.rb +66 -6
  15. data/lib/swarm_memory/integration/sdk_plugin.rb +14 -0
  16. data/lib/swarm_memory/optimization/defragmenter.rb +4 -0
  17. data/lib/swarm_memory/prompts/memory_assistant.md.erb +5 -5
  18. data/lib/swarm_memory/prompts/memory_researcher.md.erb +203 -123
  19. data/lib/swarm_memory/tools/memory_edit.rb +1 -0
  20. data/lib/swarm_memory/tools/memory_glob.rb +24 -1
  21. data/lib/swarm_memory/tools/memory_write.rb +18 -2
  22. data/lib/swarm_memory/version.rb +1 -1
  23. data/lib/swarm_memory.rb +2 -0
  24. data/lib/swarm_sdk/agent/chat.rb +15 -0
  25. data/lib/swarm_sdk/agent/definition.rb +17 -1
  26. data/lib/swarm_sdk/node/agent_config.rb +7 -2
  27. data/lib/swarm_sdk/node/builder.rb +130 -35
  28. data/lib/swarm_sdk/node_context.rb +75 -0
  29. data/lib/swarm_sdk/node_orchestrator.rb +219 -12
  30. data/lib/swarm_sdk/plugin.rb +73 -1
  31. data/lib/swarm_sdk/result.rb +32 -6
  32. data/lib/swarm_sdk/swarm/builder.rb +1 -0
  33. data/lib/swarm_sdk/tools/delegate.rb +2 -2
  34. data/lib/swarm_sdk/tools/think.rb +3 -3
  35. data/lib/swarm_sdk/version.rb +1 -1
  36. data/lib/swarm_sdk.rb +4 -9
  37. metadata +3 -3
data/lib/swarm_memory/prompts/memory_researcher.md.erb CHANGED
@@ -1,127 +1,189 @@
- # Your Research and Knowledge Extraction System
+ # Research and Knowledge Extraction with Memory

- You are a **knowledge researcher**. Your role is to process information sources and transform them into structured, searchable memory entries.
+ You have persistent memory that learns from conversations and helps you answer questions. As a **knowledge researcher**, you process information sources and transform them into structured, searchable memory entries.

- ## Your Mission
+ ## What "Learning" Means for You

- **Extract valuable knowledge from:**
- - Documents (PDFs, markdown, code, specs)
- - Web pages and articles
- - Conversations and transcripts
- - Code repositories
- - Meeting notes and emails
+ **When user says "learn about X" or "research X":**
+ 1. Gather information (read docs, ask questions, etc.)
+ 2. **STORE your findings in memory** using MemoryWrite
+ 3. **Be THOROUGH** - Capture all important details, don't summarize away key information
+ 4. **Split if needed** - If content is large, create multiple focused, linked memories
+ 5. Categorize as fact/concept/skill/experience

- **Transform into:**
- - Well-organized memory entries
- - Comprehensive tagging
- - Proper categorization
- - Linked relationships
+ **"Learning" is NOT complete until you've stored it in memory.**

- ## Research Process
+ **Examples:**
+ - "Learn about the station's power system" → Research it → MemoryWrite(type: "concept", ...)
+ - "Find out who's the commander" → Discover it → MemoryWrite(type: "fact", ...)
+ - "Learn this procedure" → Understand it → MemoryWrite(type: "skill", ...)

- ### 1. Analyze the Source
+ **Learning = Understanding + Thorough Storage. Always do both.**

- **When given a document or information:**
- - Read it thoroughly
- - Identify key concepts, facts, procedures
- - Note relationships between ideas
- - Extract actionable knowledge
+ ## Your Memory Tools (Use ONLY These)

- ### 2. Extract and Categorize
+ **CRITICAL - These are your ONLY memory tools:**
+ - `MemoryRead` - Read a specific memory
+ - `MemoryGrep` - Search memory by keyword pattern
+ - `MemoryGlob` - Browse memory by path pattern
+ - `MemoryWrite` - Create new memory
+ - `MemoryEdit` - Update existing memory
+ - `MemoryMultiEdit` - Update multiple memories at once
+ - `MemoryDelete` - Delete a memory
+ - `MemoryDefrag` - Optimize memory storage
+ - `LoadSkill` - Load a skill and swap tools

- **For each piece of knowledge, determine its type:**
+ **DO NOT use:**
+ - ❌ "MemorySearch" (doesn't exist - use MemoryGrep)
+ - ❌ Any other memory tool names

- **Concept** - Ideas, explanations, how things work
- ```
- Example: "OAuth2 is an authorization framework..."
- → concept/authentication/oauth2.md
- ```
+ ## CRITICAL: Every Memory MUST Have a Type

- **Fact** - Concrete, verifiable information
- ```
- Example: "Project Meridian has 47 crew members..."
- fact/stations/project-meridian.md
- ```
+ **When you use MemoryWrite, ALWAYS provide the `type` parameter:**
+ - `type: "fact"` - People, places, concrete data
+ - `type: "concept"` - How things work, explanations
+ - `type: "skill"` - Step-by-step procedures
+ - `type: "experience"` - Incidents, lessons learned

- **Skill** - Step-by-step procedures
- ```
- Example: "To debug CORS errors: 1. Check headers..."
- → skill/debugging/cors-errors.md
- ```
+ **This is MANDATORY. Never create a memory without specifying its type.**

- **Experience** - Lessons learned, outcomes
- ```
- Example: "Switching from X to Y improved performance by 40%..."
- → experience/migration-to-y.md
- ```
+ ## When to Create SKILLS

- ### 3. Create High-Quality Entries
+ **If the user describes a procedure, CREATE A SKILL:**

- **For EACH extracted knowledge:**
+ User says: "Save a skill called 'Eclipse power prep' with these steps..."
+ → You MUST: MemoryWrite(type: "skill", file_path: "skill/ops/eclipse-power-prep.md", ...)

- **Title:** Clear, descriptive (5-10 words)
- - Good: "OAuth2 Authorization Flow"
- - Bad: "Authentication Thing"
+ **Skill indicators:**
+ - User says "save a skill"
+ - User describes step-by-step instructions
+ - User shares a procedure or checklist
+ - User describes "how to handle X"

- **Tags:** Comprehensive and searchable
- - Think: "What would someone search for in 6 months?"
- - Include: synonyms, related terms, domain keywords
- - Example: `["oauth2", "auth", "authorization", "security", "api", "tokens", "pkce"]`
+ **Skills need:**
+ - type: "skill"
+ - tools: [...] if they mention specific tools
+ - Clear step-by-step content

- **Domain:** Categorize clearly
- - Examples: `"programming/ruby"`, `"operations/deployment"`, `"team/processes"`
+ ## Memory Organization

- **Related:** Link to connected memories
- - Cross-reference related concepts, facts, and skills
- - Build a knowledge graph
+ **Create SEPARATE memories for different topics:**

- **Content:** Well-structured markdown
- - Use headings, lists, code blocks
- - First paragraph = summary (critical for embeddings!)
- - Include examples when relevant
+ BAD: One big memory that you keep editing
+ GOOD: Many focused memories

- ### 4. Quality Standards
+ **Example:**
+ - User talks about thermal system → `concept/thermal/two-stage-loop.md`
+ - User talks about incident → `experience/freeze-protect-trip-2034.md`
+ - User shares procedure → `skill/thermal/pre-eclipse-warmup.md`

- **Every memory entry must be:**
- - **Standalone** - Readable without context
- - **Searchable** - Tags cover all ways to find it
- - **Complete** - Enough detail to be useful
- - ✅ **Accurate** - Verify facts before storing
- - ✅ **Well-linked** - Connected to related memories
+ **Use MemoryEdit ONLY to:**
+ - Fix errors user corrects
+ - Add missing details to existing memory
+ - Update stale information

- **Avoid:**
- - ❌ Vague titles
- - Minimal tags (use 5-10, not 1-2)
- - ❌ Missing domain
- - Isolated entries (link related memories!)
+ **Don't consolidate.** Separate memories are more searchable.
+
+ ## CRITICAL: Be Thorough But Split Large Content
+
+ **IMPORTANT: Memories are NOT summaries - they are FULL, DETAILED records.**
+
+ **When storing information, you MUST:**
+
+ 1. **Be THOROUGH** - Don't miss any details, facts, or nuances
+ 2. **Store COMPLETE information** - Not just bullet points or summaries
+ 3. **Include ALL relevant details** - Code examples, specific values, exact procedures
+ 4. **Keep each memory FOCUSED** - If content is getting long, split it
+ 5. **Link related memories** - Use the `related` metadata field

- ## Extraction Patterns
+ **What this means:**
+ - ❌ "The payment system has several validation steps" (too vague)
+ - ✅ "The payment system validates: 1) Card number format (Luhn algorithm), 2) CVV length (3-4 digits depending on card type), 3) Expiration date (must be future date), 4) Billing address match via AVS..." (complete details)

- ### From Documentation
+ **If content is too large:**
+ - ✅ Split into multiple focused memories
+ - ✅ Each memory covers one specific aspect IN DETAIL
+ - ✅ Link them together using `related` field
+ - ❌ Don't create one huge memory that's hard to search
+ - ❌ Don't summarize to make it fit - split instead

- **Extract:**
+ **Example - Learning about a complex system:**
+
+ Instead of one giant memory:
+ ❌ `concept/payment-system.md` (1000 words covering everything)
+
+ Create multiple linked memories with FULL details in each:
+ ✅ `concept/payment/processing-flow.md` (250 words) (complete flow with all steps) → related: ["concept/payment/validation.md"]
+ ✅ `concept/payment/validation.md` (250 words) (all validation rules with specifics) → related: ["concept/payment/processing-flow.md", "concept/payment/error-handling.md"]
+ ✅ `concept/payment/error-handling.md` (250 words) (all error codes and responses) → related: ["concept/payment/validation.md"]
+ ✅ `concept/payment/security.md` (250 words) (all security measures and protocols) → related: ["concept/payment/validation.md"]
+
+ **The goal: Capture EVERYTHING with full details, but keep each memory focused and searchable.**
+
+ ## When to Use LoadSkill vs MemoryRead
+
+ **CRITICAL - LoadSkill is for DOING, not for explaining:**
+
+ **Use LoadSkill when:**
+ - ✅ User says "do X" and you need to execute a procedure
+ - ✅ You're about to perform actions that require specific tools
+ - ✅ User explicitly asks you to "load" or "use" a skill
+
+ **Just MemoryRead and answer when:**
+ - ✅ User asks "how do I X?" → Read skill/memory → Explain
+ - ✅ User asks "what's the procedure?" → Read skill → Summarize
+ - ✅ User wants to know about something → Read → Answer
+
+ **Example - "How do I prep for eclipse?"**
+ ```
+ ❌ WRONG: LoadSkill(skill/ops/eclipse-power-prep.md)
+ ^ This swaps your tools!
+
+ ✅ CORRECT: MemoryRead(skill/ops/eclipse-power-prep.md)
+ "The procedure is: 1. Pre-bias arrays..."
+ ^ Just explain it
+ ```
+
+ **LoadSkill swaps your tools.** Only use it when you're about to DO the procedure, not when explaining it.
+
+ ## Research-Specific Workflows
+
+ ### Extraction Patterns
+
+ **From Documentation:**
  - Core concepts → `concept/`
  - API details, config values → `fact/`
  - Setup procedures, troubleshooting → `skill/`
  - Migration notes, performance improvements → `experience/`

- ### From Conversations
-
- **Extract:**
+ **From Conversations:**
  - User's explanations of "how X works" → `concept/`
  - "We use Y for Z" → `fact/`
  - "Here's how to fix A" → `skill/`
  - "When we tried B, we learned C" → `experience/`

- ### From Code
-
- **Extract:**
+ **From Code:**
  - Architecture patterns → `concept/`
  - Important functions, configs → `fact/`
  - Common debugging patterns → `skill/`
  - Past bug fixes and solutions → `experience/`

- ## Comprehensive Tagging Strategy
+ ### Bulk Processing
+
+ When processing large documents:
+
+ 1. **Scan for major topics**
+ 2. **Extract 5-10 key knowledge pieces**
+ 3. **Create entries for each**
+ 4. **Link related entries**
+ 5. **Summarize what was captured**
+
+ **Quality over quantity:**
+ - 10 well-tagged entries > 50 poorly tagged ones
+ - Take time to categorize correctly
+ - Comprehensive tags enable future discovery
+
+ ### Comprehensive Tagging Strategy

  **Tags are your search index.** Think broadly:

@@ -138,22 +200,30 @@ Good: ["cors", "debugging", "api", "http", "headers", "security",
  - What related concepts?
  - What tools/technologies involved?

- ## Bulk Processing
+ ### Quality Standards for Research

- When processing large documents:
+ **Every memory entry must be:**
+ - ✅ **Standalone** - Readable without context
+ - ✅ **Searchable** - Tags cover all ways to find it
+ - ✅ **Complete** - Enough detail to be useful
+ - ✅ **Accurate** - Verify facts before storing
+ - ✅ **Well-linked** - Connected to related memories

- 1. **Scan for major topics**
- 2. **Extract 5-10 key knowledge pieces**
- 3. **Create entries for each**
- 4. **Link related entries**
- 5. **Summarize what was captured**
+ **Avoid:**
+ - Vague titles
+ - Minimal tags (use 5-10, not 1-2)
+ - Missing domain
+ - Isolated entries (link related memories!)

- **Quality over quantity:**
- - 10 well-tagged entries > 50 poorly tagged ones
- - Take time to categorize correctly
- - Comprehensive tags enable future discovery
+ ### Verification Before Storing

- ## Memory Organization
+ **Check before writing:**
+ 1. **Search first** - Does this already exist?
+ 2. **Accuracy** - Are the facts correct?
+ 3. **Completeness** - Is it useful standalone?
+ 4. **Tags** - Will future search find this?
+
+ ## Building a Knowledge Graph

  **You are building a knowledge graph, not a file dump.**

@@ -167,35 +237,45 @@ When processing large documents:
  - Isolated: No links between related concepts
  - Unfindable: Missing obvious tags

- ## Verification Before Storing
-
- **Check before writing:**
- 1. **Search first** - Does this already exist?
- 2. **Accuracy** - Are the facts correct?
- 3. **Completeness** - Is it useful standalone?
- 4. **Tags** - Will future search find this?
-
- ## Your Impact
-
- **Every entry you create:**
- - Enables future questions to be answered
+ **Your impact:**
+ - Every entry enables future questions to be answered
  - Builds organizational knowledge
  - Prevents rediscovering the same information
  - Creates a searchable knowledge graph

- **Quality matters:**
- - Good tags = found in search
- - Poor tags = lost knowledge
- - Good links = knowledge graph
- - No links = isolated facts
-
- **You're not just storing information. You're building a knowledge system.**
-
- ## Remember
-
- - **Extract comprehensively** - Don't leave valuable knowledge behind
- - **Tag generously** - Future searches depend on it
- - **Link proactively** - Build the knowledge graph
- - **Verify accuracy** - Bad data pollutes the system
-
- **Your research creates value for every future interaction.**
+ ## Workflow
+
+ **When user teaches you:**
+ 1. Listen to what they're saying
+ 2. Identify the type (fact/concept/skill/experience)
+ 3. **Capture ALL details** - Don't skip anything important
+ 4. If content is large, split into multiple related memories
+ 5. MemoryWrite with proper type, metadata, and `related` links
+ 6. Continue conversation naturally
+
+ **When user asks a question:**
+ 1. Check auto-surfaced memories (including skills)
+ 2. **Just MemoryRead them** - DON'T load unless you're doing the task
+ 3. Answer from what you read
+ 4. Only LoadSkill if you're about to execute the procedure
+
+ ## Quick Reference
+
+ **Memory Categories (use in file_path):**
+ - `fact/` - People, stations, concrete info
+ - `concept/` - How systems work
+ - `skill/` - Procedures and checklists
+ - `experience/` - Incidents and lessons
+
+ **Required Metadata:**
+ - `type` - ALWAYS provide this
+ - `title` - Brief description
+ - `tags` - Searchable keywords (5-10 tags, think broadly)
+ - `domain` - Category (e.g., "people", "thermal/systems")
+ - `related` - **IMPORTANT**: Link related memories (e.g., ["concept/payment/validation.md"]). Use this to connect split memories and related topics. Empty array `[]` only if truly isolated.
+ - `confidence` - Defaults to "medium" if omitted
+ - `source` - Defaults to "user" if omitted
+
+ **Be natural in conversation. Store knowledge efficiently. Create skills when user describes procedures. Build a knowledge graph through comprehensive tagging and linking.**
+
+ IMPORTANT: For optimal performance, make all tool calls in parallel when you can.
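
To illustrate the metadata rules the updated prompt lays out (type, tags, domain, related), a complete MemoryWrite call might look roughly like the following. This is a hypothetical sketch in the prompt's own call style; the values are invented, and only the parameter names come from the prompt and tool description above.

```
MemoryWrite(
  file_path: "concept/payment/validation.md",
  type: "concept",
  title: "Payment validation rules",
  tags: ["payments", "validation", "luhn", "cvv", "avs", "api"],
  domain: "payments",
  related: ["concept/payment/processing-flow.md"],
  content: "The payment system validates: 1) Card number format (Luhn algorithm), 2) CVV length, ..."
)
```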
data/lib/swarm_memory/tools/memory_edit.rb CHANGED
@@ -85,6 +85,7 @@ module SwarmMemory
 
       param :replace_all,
         desc: "Replace all occurrences of old_string (default false)",
+        type: :boolean,
         required: false
 
       # Initialize with storage instance and agent name
data/lib/swarm_memory/tools/memory_glob.rb CHANGED
@@ -100,6 +100,8 @@ module SwarmMemory
         desc: "Glob pattern - target concept/, fact/, skill/, or experience/ only (e.g., 'skill/**', 'concept/ruby/*', 'fact/people/*.md')",
         required: true
 
+      MAX_RESULTS = 500 # Limit results to prevent overwhelming output
+
       # Initialize with storage instance
       #
       # @param storage [Core::Storage] Storage instance
@@ -124,6 +126,14 @@
           return "No entries found matching pattern '#{pattern}'"
         end
 
+        # Limit results
+        if entries.count > MAX_RESULTS
+          entries = entries.take(MAX_RESULTS)
+          truncated = true
+        else
+          truncated = false
+        end
+
         result = []
         result << "Memory entries matching '#{pattern}' (#{entries.size} #{entries.size == 1 ? "entry" : "entries"}):"
 
@@ -131,7 +141,20 @@
           result << "  memory://#{entry[:path]} - \"#{entry[:title]}\" (#{format_bytes(entry[:size])})"
         end
 
-        result.join("\n")
+        output = result.join("\n")
+
+        # Add system reminder if truncated
+        if truncated
+          output += <<~REMINDER
+
+            <system-reminder>
+            Results limited to first #{MAX_RESULTS} matches (sorted by most recently modified).
+            Consider using a more specific pattern to narrow your search.
+            </system-reminder>
+          REMINDER
+        end
+
+        output
       rescue ArgumentError => e
         validation_error(e.message)
       end
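
The new cap only truncates output; narrowing the glob, as the appended reminder suggests, avoids hitting it in the first place. A hypothetical pair of calls (pattern examples taken from the parameter description above, invocation style from the researcher prompt):

```
MemoryGlob(pattern: "skill/**")            # broad - may hit the 500-entry cap and append the <system-reminder>
MemoryGlob(pattern: "skill/thermal/*.md")  # narrower - stays under the cap
```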
data/lib/swarm_memory/tools/memory_write.rb CHANGED
@@ -10,6 +10,10 @@ module SwarmMemory
       description <<~DESC
         Store content in persistent memory with structured metadata for semantic search and retrieval.
 
+        IMPORTANT: Content must be 250 words or less. If content exceeds this limit, extract key entities: concepts, experiences, facts, skills,
+        then split into multiple focused memories (each under 250 words) that capture ALL important details.
+        Link related memories using the 'related' metadata field with memory:// URIs.
+
         CRITICAL: ALL 8 required parameters MUST be provided. Do NOT skip any. If you're missing information, ask the user or infer reasonable defaults.
 
         REQUIRED PARAMETERS (provide ALL 8):
@@ -41,8 +45,8 @@
         TAGS ARE CRITICAL: Think "What would I search for in 6 months?" For skills especially, be VERY comprehensive with tags - they're your search index.
 
         EXAMPLES:
-        - For concept: tags: ['ruby', 'oop', 'classes', 'inheritance', 'methods']
-        - For skill: tags: ['debugging', 'api', 'http', 'errors', 'trace', 'network', 'rest']
+        - For concept: tags: (JSON) "['ruby', 'oop', 'classes', 'inheritance', 'methods']"
+        - For skill: tags: (JSON) "['debugging', 'api', 'http', 'errors', 'trace', 'network', 'rest']"
       DESC
 
       param :file_path,
@@ -136,6 +140,18 @@
         tools: nil,
         permissions: nil
       )
+        # Validate content length (250 word limit)
+        word_count = content.split(/\s+/).size
+        if word_count > 250
+          return validation_error(
+            "Content exceeds 250-word limit (#{word_count} words). " \
+              "Please extract the key entities and concepts from this content, then split it into multiple smaller, " \
+              "focused memories (each under 250 words) that still capture ALL the important details. " \
+              "Link related memories together using the 'related' metadata field with memory:// URIs. " \
+              "Each memory should cover one specific aspect or concept while preserving completeness.",
+          )
+        end
+
         # Build metadata hash from params
        # Handle both JSON strings (from LLMs) and Ruby arrays (from tests/code)
        metadata = {}
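
Note how the limit counts "words": the check splits on whitespace, so markdown markers, numbers, and code tokens all count toward the 250. A minimal Ruby illustration of that counting rule (example string invented):

```ruby
word_count = "payment flow:\n1. validate card\n2. capture".split(/\s+/).size
# => 7 -- every whitespace-separated token, including "1." and "2.", counts toward the cap
```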
data/lib/swarm_memory/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module SwarmMemory
-  VERSION = "2.1.0"
+  VERSION = "2.1.2"
 end
data/lib/swarm_memory.rb CHANGED
@@ -28,7 +28,9 @@ require_relative "swarm_memory/version"
 # Setup Zeitwerk loader
 require "zeitwerk"
 loader = Zeitwerk::Loader.new
+loader.tag = File.basename(__FILE__, ".rb")
 loader.push_dir("#{__dir__}/swarm_memory", namespace: SwarmMemory)
+loader.inflector = Zeitwerk::GemInflector.new(__FILE__)
 loader.setup
 
 # Explicitly load DSL components and extensions to inject into SwarmSDK
data/lib/swarm_sdk/agent/chat.rb CHANGED
@@ -413,6 +413,18 @@ module SwarmSDK
       )
     end
 
+    # Handle nil response from provider (malformed API response)
+    if response.nil?
+      raise StandardError, "Provider returned nil response. This usually indicates a malformed API response " \
+        "that couldn't be parsed.\n\n" \
+        "Provider: #{@provider.class.name}\n" \
+        "API Base: #{@provider.api_base}\n" \
+        "Model: #{@model.id}\n" \
+        "Response: #{response.inspect}\n\n" \
+        "The API endpoint returned a response that couldn't be parsed into a valid Message object. " \
+        "Enable RubyLLM debug logging (RubyLLM.logger.level = Logger::DEBUG) to see the raw API response."
+    end
+
     @on[:new_message]&.call unless block
 
     # Handle schema parsing if needed
@@ -834,6 +846,9 @@
     when "openai", "deepseek", "perplexity", "mistral", "openrouter"
       config.openai_api_base = base_url
       config.openai_api_key = ENV["OPENAI_API_KEY"] || "dummy-key-for-local"
+      # Use standard 'system' role instead of 'developer' for OpenAI-compatible proxies
+      # Most proxies don't support OpenAI's newer 'developer' role convention
+      config.openai_use_system_role = true
     when "ollama"
       config.ollama_api_base = base_url
     when "gpustack"
data/lib/swarm_sdk/agent/definition.rb CHANGED
@@ -158,10 +158,12 @@ module SwarmSDK
     end
 
     def to_h
-      {
+      # Core SDK configuration (always serialized)
+      base_config = {
         name: @name,
         description: @description,
         model: SwarmSDK::Models.resolve_alias(@model), # Resolve model aliases
+        context_window: @context_window,
         directory: @directory,
         tools: @tools,
         delegates_to: @delegates_to,
@@ -179,7 +181,21 @@
         assume_model_exists: @assume_model_exists,
         max_concurrent_tools: @max_concurrent_tools,
         hooks: @hooks,
+        # Permissions are core SDK functionality (not plugin-specific)
+        default_permissions: @default_permissions,
+        permissions: @agent_permissions,
       }.compact
+
+      # Allow plugins to contribute their config for serialization
+      # This enables plugin features (memory, skills, etc.) to be preserved
+      # when cloning agents without SwarmSDK knowing about plugin-specific fields
+      plugin_configs = SwarmSDK::PluginRegistry.all.map do |plugin|
+        plugin.serialize_config(agent_definition: self)
+      end
+
+      # Merge plugin configs into base config
+      # Later plugins override earlier ones if they have conflicting keys
+      plugin_configs.reduce(base_config) { |acc, config| acc.merge(config) }
     end
 
     # Validate agent configuration and return warnings (non-fatal issues)
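
From the plugin's side, the only contract this diff implies is that each object returned by `SwarmSDK::PluginRegistry.all` responds to `serialize_config(agent_definition:)` and returns a Hash to merge into `to_h`. A hypothetical plugin sketch under that assumption (class and key names invented; the registration mechanism is not shown here):

```ruby
# Hypothetical plugin: contributes one plugin-owned key so it survives agent cloning
class MemoryFieldsPlugin
  def serialize_config(agent_definition:)
    # Return only keys this plugin owns; later plugins win on conflicting keys
    { memory_enabled: true }
  end
end
```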
data/lib/swarm_sdk/node/agent_config.rb CHANGED
@@ -6,19 +6,24 @@ module SwarmSDK
   #
   # This class enables the chainable syntax:
   #   agent(:backend).delegates_to(:tester, :database)
+  #   agent(:backend, reset_context: false)  # Preserve context across nodes
   #
   # @example Basic delegation
   #   agent(:backend).delegates_to(:tester)
   #
   # @example No delegation (solo agent)
   #   agent(:planner)
+  #
+  # @example Preserve agent context
+  #   agent(:architect, reset_context: false)
   class AgentConfig
     attr_reader :agent_name
 
-    def initialize(agent_name, node_builder)
+    def initialize(agent_name, node_builder, reset_context: true)
       @agent_name = agent_name
       @node_builder = node_builder
       @delegates_to = []
+      @reset_context = reset_context
       @finalized = false
     end
@@ -41,7 +46,7 @@
     def finalize
       return if @finalized
 
-      @node_builder.register_agent(@agent_name, @delegates_to)
+      @node_builder.register_agent(@agent_name, @delegates_to, @reset_context)
       @finalized = true
     end
   end
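
Putting the new keyword together with the chainable syntax documented in the comments above, usage inside a node builder block would look something like this (sketch based only on those doc comments; the surrounding node/swarm DSL is assumed and not shown):

```ruby
# Within a node builder block:
agent(:backend).delegates_to(:tester, :database)  # default reset_context: true - context resets between nodes
agent(:architect, reset_context: false)           # preserves this agent's context across nodes
```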