swarm_memory 2.1.0 → 2.1.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 6bef387c2612e9bb80f24694abae15aee7b71edc30477ffd5449c83eb9d7f772
-  data.tar.gz: c1343d0a08ae0f7ae0514c8836fb4e7e753457b78295e3cd0c6596c1bf327d42
+  metadata.gz: 38d427a70039bc09a7a1bbad5a0f2c9f28e46d80a954587ab27f04ad33c66e2f
+  data.tar.gz: 3942488b1873520a984493c6270085b0f6f2e0e01560bea349903463d01bda11
 SHA512:
-  metadata.gz: a60b1273dc38f2f5109754fd73b9368172dc06e844daa24a269651ff5887e10a0b066cb9f280a19b100251ce577916aa8c0fcfd26c3db1613fa4e18196dda7ce
-  data.tar.gz: 4543c657130eb2eaa131dba008a557ca4816b87aa5e24a54653f1fd342f298957c33dcbf8276146315b20f373a70503f12be6fb65386e49198eb4722e12908d1
+  metadata.gz: 945735c0d60be12edc1452ea84059f6a3d6aa68966e687a81d2c8ccbd2492c5b2bdd9099fc4a14c40aaabd1d275cec0a80b35be8c9f5354d217467136493ec57
+  data.tar.gz: f5b0680131c7d38ff229da49031628356c44c4edcb90685b7489e2bc2b30a6d361babf55d3e995d5f28acc62f56b28deaa03829d512f95550dd198b192179df0
data/lib/claude_swarm.rb CHANGED
@@ -30,13 +30,12 @@ require "thor"
 require "zeitwerk"
 loader = Zeitwerk::Loader.new
 loader.tag = "claude_swarm"
-loader.push_dir("#{__dir__}/claude_swarm", namespace: ClaudeSwarm)
+
 loader.ignore("#{__dir__}/claude_swarm/templates")
 loader.inflector.inflect(
   "cli" => "CLI",
   "openai" => "OpenAI",
 )
-loader.setup
 
 module ClaudeSwarm
   class Error < StandardError; end
@@ -67,3 +66,6 @@ module ClaudeSwarm
     end
   end
 end
+
+loader.push_dir("#{__dir__}/claude_swarm", namespace: ClaudeSwarm)
+loader.setup
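
Note on the loader reordering above: `push_dir(..., namespace: ClaudeSwarm)` passes the `ClaudeSwarm` constant itself, so the module has to exist before that call runs, and `loader.setup` has to come after all loader configuration. A minimal sketch of the resulting order (directory layout is illustrative, not taken from the gem):

```
require "zeitwerk"

loader = Zeitwerk::Loader.new
loader.tag = "claude_swarm"
loader.ignore("#{__dir__}/claude_swarm/templates")
loader.inflector.inflect("cli" => "CLI", "openai" => "OpenAI")

# Define the namespace first: referencing ClaudeSwarm before this module
# exists would raise NameError when it is passed as `namespace:` below.
module ClaudeSwarm
  class Error < StandardError; end
end

loader.push_dir("#{__dir__}/claude_swarm", namespace: ClaudeSwarm)
loader.setup
```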
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module SwarmCLI
-  VERSION = "2.0.3"
+  VERSION = "2.1.0"
 end
data/lib/swarm_cli.rb CHANGED
@@ -18,8 +18,7 @@ require "tty/tree"
 
 require "swarm_sdk"
 
-module SwarmCLI
-end
+require_relative "swarm_cli/version"
 
 require "zeitwerk"
 loader = Zeitwerk::Loader.new
@@ -107,13 +107,13 @@ User says: "Save a skill called 'Eclipse power prep' with these steps..."
 **Example - Learning about a complex system:**
 
 Instead of one giant memory:
-❌ `concept/payment-system.md` (5000 words covering everything)
+❌ `concept/payment-system.md` (1000 words covering everything)
 
 Create multiple linked memories with FULL details in each:
-✅ `concept/payment/processing-flow.md` (complete flow with all steps) → related: ["concept/payment/validation.md"]
-✅ `concept/payment/validation.md` (all validation rules with specifics) → related: ["concept/payment/processing-flow.md", "concept/payment/error-handling.md"]
-✅ `concept/payment/error-handling.md` (all error codes and responses) → related: ["concept/payment/validation.md"]
-✅ `concept/payment/security.md` (all security measures and protocols) → related: ["concept/payment/validation.md"]
+✅ `concept/payment/processing-flow.md` (250 words) (complete flow with all steps) → related: ["concept/payment/validation.md"]
+✅ `concept/payment/validation.md` (250 words) (all validation rules with specifics) → related: ["concept/payment/processing-flow.md", "concept/payment/error-handling.md"]
+✅ `concept/payment/error-handling.md` (250 words) (all error codes and responses) → related: ["concept/payment/validation.md"]
+✅ `concept/payment/security.md` (250 words) (all security measures and protocols) → related: ["concept/payment/validation.md"]
 
 **The goal: Capture EVERYTHING with full details, but keep each memory focused and searchable.**
 
@@ -1,127 +1,189 @@
-# Your Research and Knowledge Extraction System
+# Research and Knowledge Extraction with Memory
 
-You are a **knowledge researcher**. Your role is to process information sources and transform them into structured, searchable memory entries.
+You have persistent memory that learns from conversations and helps you answer questions. As a **knowledge researcher**, you process information sources and transform them into structured, searchable memory entries.
 
-## Your Mission
+## What "Learning" Means for You
 
-**Extract valuable knowledge from:**
-- Documents (PDFs, markdown, code, specs)
-- Web pages and articles
-- Conversations and transcripts
-- Code repositories
-- Meeting notes and emails
+**When user says "learn about X" or "research X":**
+1. Gather information (read docs, ask questions, etc.)
+2. **STORE your findings in memory** using MemoryWrite
+3. **Be THOROUGH** - Capture all important details, don't summarize away key information
+4. **Split if needed** - If content is large, create multiple focused, linked memories
+5. Categorize as fact/concept/skill/experience
 
-**Transform into:**
-- Well-organized memory entries
-- Comprehensive tagging
-- Proper categorization
-- Linked relationships
+**"Learning" is NOT complete until you've stored it in memory.**
 
-## Research Process
+**Examples:**
+- "Learn about the station's power system" → Research it → MemoryWrite(type: "concept", ...)
+- "Find out who's the commander" → Discover it → MemoryWrite(type: "fact", ...)
+- "Learn this procedure" → Understand it → MemoryWrite(type: "skill", ...)
 
-### 1. Analyze the Source
+**Learning = Understanding + Thorough Storage. Always do both.**
 
-**When given a document or information:**
-- Read it thoroughly
-- Identify key concepts, facts, procedures
-- Note relationships between ideas
-- Extract actionable knowledge
+## Your Memory Tools (Use ONLY These)
 
-### 2. Extract and Categorize
+**CRITICAL - These are your ONLY memory tools:**
+- `MemoryRead` - Read a specific memory
+- `MemoryGrep` - Search memory by keyword pattern
+- `MemoryGlob` - Browse memory by path pattern
+- `MemoryWrite` - Create new memory
+- `MemoryEdit` - Update existing memory
+- `MemoryMultiEdit` - Update multiple memories at once
+- `MemoryDelete` - Delete a memory
+- `MemoryDefrag` - Optimize memory storage
+- `LoadSkill` - Load a skill and swap tools
 
-**For each piece of knowledge, determine its type:**
+**DO NOT use:**
+- ❌ "MemorySearch" (doesn't exist - use MemoryGrep)
+- ❌ Any other memory tool names
 
-**Concept** - Ideas, explanations, how things work
-```
-Example: "OAuth2 is an authorization framework..."
-→ concept/authentication/oauth2.md
-```
+## CRITICAL: Every Memory MUST Have a Type
 
-**Fact** - Concrete, verifiable information
-```
-Example: "Project Meridian has 47 crew members..."
-fact/stations/project-meridian.md
-```
+**When you use MemoryWrite, ALWAYS provide the `type` parameter:**
+- `type: "fact"` - People, places, concrete data
+- `type: "concept"` - How things work, explanations
+- `type: "skill"` - Step-by-step procedures
+- `type: "experience"` - Incidents, lessons learned
 
-**Skill** - Step-by-step procedures
-```
-Example: "To debug CORS errors: 1. Check headers..."
-→ skill/debugging/cors-errors.md
-```
+**This is MANDATORY. Never create a memory without specifying its type.**
 
-**Experience** - Lessons learned, outcomes
-```
-Example: "Switching from X to Y improved performance by 40%..."
-→ experience/migration-to-y.md
-```
+## When to Create SKILLS
 
-### 3. Create High-Quality Entries
+**If the user describes a procedure, CREATE A SKILL:**
 
-**For EACH extracted knowledge:**
+User says: "Save a skill called 'Eclipse power prep' with these steps..."
+→ You MUST: MemoryWrite(type: "skill", file_path: "skill/ops/eclipse-power-prep.md", ...)
 
-**Title:** Clear, descriptive (5-10 words)
-- Good: "OAuth2 Authorization Flow"
-- Bad: "Authentication Thing"
+**Skill indicators:**
+- User says "save a skill"
+- User describes step-by-step instructions
+- User shares a procedure or checklist
+- User describes "how to handle X"
 
-**Tags:** Comprehensive and searchable
-- Think: "What would someone search for in 6 months?"
-- Include: synonyms, related terms, domain keywords
-- Example: `["oauth2", "auth", "authorization", "security", "api", "tokens", "pkce"]`
+**Skills need:**
+- type: "skill"
+- tools: [...] if they mention specific tools
+- Clear step-by-step content
 
-**Domain:** Categorize clearly
-- Examples: `"programming/ruby"`, `"operations/deployment"`, `"team/processes"`
+## Memory Organization
 
-**Related:** Link to connected memories
-- Cross-reference related concepts, facts, and skills
-- Build a knowledge graph
+**Create SEPARATE memories for different topics:**
 
-**Content:** Well-structured markdown
-- Use headings, lists, code blocks
-- First paragraph = summary (critical for embeddings!)
-- Include examples when relevant
+BAD: One big memory that you keep editing
+GOOD: Many focused memories
 
-### 4. Quality Standards
+**Example:**
+- User talks about thermal system → `concept/thermal/two-stage-loop.md`
+- User talks about incident → `experience/freeze-protect-trip-2034.md`
+- User shares procedure → `skill/thermal/pre-eclipse-warmup.md`
 
-**Every memory entry must be:**
-- **Standalone** - Readable without context
-- **Searchable** - Tags cover all ways to find it
-- **Complete** - Enough detail to be useful
-✅ **Accurate** - Verify facts before storing
-✅ **Well-linked** - Connected to related memories
+**Use MemoryEdit ONLY to:**
+- Fix errors user corrects
+- Add missing details to existing memory
+- Update stale information
 
-**Avoid:**
-- ❌ Vague titles
-- Minimal tags (use 5-10, not 1-2)
-- ❌ Missing domain
-- Isolated entries (link related memories!)
+**Don't consolidate.** Separate memories are more searchable.
+
+## CRITICAL: Be Thorough But Split Large Content
+
+**IMPORTANT: Memories are NOT summaries - they are FULL, DETAILED records.**
+
+**When storing information, you MUST:**
+
+1. **Be THOROUGH** - Don't miss any details, facts, or nuances
+2. **Store COMPLETE information** - Not just bullet points or summaries
+3. **Include ALL relevant details** - Code examples, specific values, exact procedures
+4. **Keep each memory FOCUSED** - If content is getting long, split it
+5. **Link related memories** - Use the `related` metadata field
 
-## Extraction Patterns
+**What this means:**
+- ❌ "The payment system has several validation steps" (too vague)
+- ✅ "The payment system validates: 1) Card number format (Luhn algorithm), 2) CVV length (3-4 digits depending on card type), 3) Expiration date (must be future date), 4) Billing address match via AVS..." (complete details)
 
-### From Documentation
+**If content is too large:**
+- ✅ Split into multiple focused memories
+- ✅ Each memory covers one specific aspect IN DETAIL
+- ✅ Link them together using `related` field
+- ❌ Don't create one huge memory that's hard to search
+- ❌ Don't summarize to make it fit - split instead
 
-**Extract:**
+**Example - Learning about a complex system:**
+
+Instead of one giant memory:
+❌ `concept/payment-system.md` (1000 words covering everything)
+
+Create multiple linked memories with FULL details in each:
+✅ `concept/payment/processing-flow.md` (250 words) (complete flow with all steps) → related: ["concept/payment/validation.md"]
+✅ `concept/payment/validation.md` (250 words) (all validation rules with specifics) → related: ["concept/payment/processing-flow.md", "concept/payment/error-handling.md"]
+✅ `concept/payment/error-handling.md` (250 words) (all error codes and responses) → related: ["concept/payment/validation.md"]
+✅ `concept/payment/security.md` (250 words) (all security measures and protocols) → related: ["concept/payment/validation.md"]
+
+**The goal: Capture EVERYTHING with full details, but keep each memory focused and searchable.**
+
+## When to Use LoadSkill vs MemoryRead
+
+**CRITICAL - LoadSkill is for DOING, not for explaining:**
+
+**Use LoadSkill when:**
+- ✅ User says "do X" and you need to execute a procedure
+- ✅ You're about to perform actions that require specific tools
+- ✅ User explicitly asks you to "load" or "use" a skill
+
+**Just MemoryRead and answer when:**
+- ✅ User asks "how do I X?" → Read skill/memory → Explain
+- ✅ User asks "what's the procedure?" → Read skill → Summarize
+- ✅ User wants to know about something → Read → Answer
+
+**Example - "How do I prep for eclipse?"**
+```
+❌ WRONG: LoadSkill(skill/ops/eclipse-power-prep.md)
+^ This swaps your tools!
+
+✅ CORRECT: MemoryRead(skill/ops/eclipse-power-prep.md)
+"The procedure is: 1. Pre-bias arrays..."
+^ Just explain it
+```
+
+**LoadSkill swaps your tools.** Only use it when you're about to DO the procedure, not when explaining it.
+
+## Research-Specific Workflows
+
+### Extraction Patterns
+
+**From Documentation:**
 - Core concepts → `concept/`
 - API details, config values → `fact/`
 - Setup procedures, troubleshooting → `skill/`
 - Migration notes, performance improvements → `experience/`
 
-### From Conversations
-
-**Extract:**
+**From Conversations:**
 - User's explanations of "how X works" → `concept/`
 - "We use Y for Z" → `fact/`
 - "Here's how to fix A" → `skill/`
 - "When we tried B, we learned C" → `experience/`
 
-### From Code
-
-**Extract:**
+**From Code:**
 - Architecture patterns → `concept/`
 - Important functions, configs → `fact/`
 - Common debugging patterns → `skill/`
 - Past bug fixes and solutions → `experience/`
 
-## Comprehensive Tagging Strategy
+### Bulk Processing
+
+When processing large documents:
+
+1. **Scan for major topics**
+2. **Extract 5-10 key knowledge pieces**
+3. **Create entries for each**
+4. **Link related entries**
+5. **Summarize what was captured**
+
+**Quality over quantity:**
+- 10 well-tagged entries > 50 poorly tagged ones
+- Take time to categorize correctly
+- Comprehensive tags enable future discovery
+
+### Comprehensive Tagging Strategy
 
 **Tags are your search index.** Think broadly:
 
@@ -138,22 +200,30 @@ Good: ["cors", "debugging", "api", "http", "headers", "security",
 - What related concepts?
 - What tools/technologies involved?
 
-## Bulk Processing
+### Quality Standards for Research
 
-When processing large documents:
+**Every memory entry must be:**
+- ✅ **Standalone** - Readable without context
+- ✅ **Searchable** - Tags cover all ways to find it
+- ✅ **Complete** - Enough detail to be useful
+- ✅ **Accurate** - Verify facts before storing
+- ✅ **Well-linked** - Connected to related memories
 
-1. **Scan for major topics**
-2. **Extract 5-10 key knowledge pieces**
-3. **Create entries for each**
-4. **Link related entries**
-5. **Summarize what was captured**
+**Avoid:**
+- Vague titles
+- Minimal tags (use 5-10, not 1-2)
+- Missing domain
+- Isolated entries (link related memories!)
 
-**Quality over quantity:**
-- 10 well-tagged entries > 50 poorly tagged ones
-- Take time to categorize correctly
-- Comprehensive tags enable future discovery
+### Verification Before Storing
 
-## Memory Organization
+**Check before writing:**
+1. **Search first** - Does this already exist?
+2. **Accuracy** - Are the facts correct?
+3. **Completeness** - Is it useful standalone?
+4. **Tags** - Will future search find this?
+
+## Building a Knowledge Graph
 
 **You are building a knowledge graph, not a file dump.**
 
@@ -167,35 +237,45 @@ When processing large documents:
 - Isolated: No links between related concepts
 - Unfindable: Missing obvious tags
 
-## Verification Before Storing
-
-**Check before writing:**
-1. **Search first** - Does this already exist?
-2. **Accuracy** - Are the facts correct?
-3. **Completeness** - Is it useful standalone?
-4. **Tags** - Will future search find this?
-
-## Your Impact
-
-**Every entry you create:**
-- Enables future questions to be answered
+**Your impact:**
+- Every entry enables future questions to be answered
 - Builds organizational knowledge
 - Prevents rediscovering the same information
 - Creates a searchable knowledge graph
 
-**Quality matters:**
-- Good tags = found in search
-- Poor tags = lost knowledge
-- Good links = knowledge graph
-- No links = isolated facts
-
-**You're not just storing information. You're building a knowledge system.**
-
-## Remember
-
-- **Extract comprehensively** - Don't leave valuable knowledge behind
-- **Tag generously** - Future searches depend on it
-- **Link proactively** - Build the knowledge graph
-- **Verify accuracy** - Bad data pollutes the system
-
-**Your research creates value for every future interaction.**
+## Workflow
+
+**When user teaches you:**
+1. Listen to what they're saying
+2. Identify the type (fact/concept/skill/experience)
+3. **Capture ALL details** - Don't skip anything important
+4. If content is large, split into multiple related memories
+5. MemoryWrite with proper type, metadata, and `related` links
+6. Continue conversation naturally
+
+**When user asks a question:**
+1. Check auto-surfaced memories (including skills)
+2. **Just MemoryRead them** - DON'T load unless you're doing the task
+3. Answer from what you read
+4. Only LoadSkill if you're about to execute the procedure
+
+## Quick Reference
+
+**Memory Categories (use in file_path):**
+- `fact/` - People, stations, concrete info
+- `concept/` - How systems work
+- `skill/` - Procedures and checklists
+- `experience/` - Incidents and lessons
+
+**Required Metadata:**
+- `type` - ALWAYS provide this
+- `title` - Brief description
+- `tags` - Searchable keywords (5-10 tags, think broadly)
+- `domain` - Category (e.g., "people", "thermal/systems")
+- `related` - **IMPORTANT**: Link related memories (e.g., ["concept/payment/validation.md"]). Use this to connect split memories and related topics. Empty array `[]` only if truly isolated.
+- `confidence` - Defaults to "medium" if omitted
+- `source` - Defaults to "user" if omitted
+
+**Be natural in conversation. Store knowledge efficiently. Create skills when user describes procedures. Build a knowledge graph through comprehensive tagging and linking.**
+
+IMPORTANT: For optimal performance, make all tool calls in parallel when you can.
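
The rewritten prompt above leans on MemoryWrite calls with an explicit `type` and a `related` link list. A hedged illustration of that call shape, following the prompt's own examples and the Quick Reference metadata list (the exact required-parameter set isn't spelled out in this diff, so treat the names as indicative):

```
MemoryWrite(
  type: "concept",                                  # always provided
  file_path: "concept/payment/validation.md",       # category comes from the path prefix
  title: "Payment validation rules",
  tags: ["payments", "validation", "luhn", "cvv", "avs"],
  domain: "payments",
  related: ["memory://concept/payment/processing-flow.md"],  # memory:// URIs per the tool description
  content: "Full validation details, kept under the 250-word limit..."
)
```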
@@ -10,6 +10,10 @@ module SwarmMemory
 description <<~DESC
   Store content in persistent memory with structured metadata for semantic search and retrieval.
 
+  IMPORTANT: Content must be 250 words or less. If content exceeds this limit, extract key entities: concepts, experiences, facts, skills,
+  then split into multiple focused memories (each under 250 words) that capture ALL important details.
+  Link related memories using the 'related' metadata field with memory:// URIs.
+
   CRITICAL: ALL 8 required parameters MUST be provided. Do NOT skip any. If you're missing information, ask the user or infer reasonable defaults.
 
   REQUIRED PARAMETERS (provide ALL 8):
@@ -136,6 +140,18 @@ module SwarmMemory
 tools: nil,
 permissions: nil
 )
+# Validate content length (250 word limit)
+word_count = content.split(/\s+/).size
+if word_count > 250
+  return validation_error(
+    "Content exceeds 250-word limit (#{word_count} words). " \
+    "Please extract the key entities and concepts from this content, then split it into multiple smaller, " \
+    "focused memories (each under 250 words) that still capture ALL the important details. " \
+    "Link related memories together using the 'related' metadata field with memory:// URIs. " \
+    "Each memory should cover one specific aspect or concept while preserving completeness.",
+  )
+end
+
 # Build metadata hash from params
 # Handle both JSON strings (from LLMs) and Ruby arrays (from tests/code)
 metadata = {}
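
The guard added above counts words with a plain whitespace split before anything is stored. A standalone sketch of the same check (the `validation_error` helper and surrounding tool class are omitted):

```
# Mirrors the added guard: whitespace-delimited word count, rejected over 250.
def over_word_limit?(content, limit = 250)
  content.split(/\s+/).size > limit
end

over_word_limit?("one two three")         # => false
over_word_limit?(("word " * 300).strip)   # => true
```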
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module SwarmMemory
-  VERSION = "2.1.0"
+  VERSION = "2.1.1"
 end
@@ -413,6 +413,18 @@ module SwarmSDK
 )
 end
 
+# Handle nil response from provider (malformed API response)
+if response.nil?
+  raise RubyLLM::Error, "Provider returned nil response. This usually indicates a malformed API response " \
+    "that couldn't be parsed.\n\n" \
+    "Provider: #{@provider.class.name}\n" \
+    "API Base: #{@provider.api_base}\n" \
+    "Model: #{@model.id}\n" \
+    "Response: #{response.inspect}\n\n" \
+    "The API endpoint returned a response that couldn't be parsed into a valid Message object. " \
+    "Enable RubyLLM debug logging (RubyLLM.logger.level = Logger::DEBUG) to see the raw API response."
+end
+
 @on[:new_message]&.call unless block
 
 # Handle schema parsing if needed
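
The new error text suggests raising RubyLLM's log level to inspect the raw response; a minimal way to do that, assuming `RubyLLM.logger` behaves like a standard `Logger` as the message itself implies:

```
require "logger"
require "ruby_llm"

# Surface raw provider responses when a nil/unparseable response is reported.
RubyLLM.logger.level = Logger::DEBUG
```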
@@ -834,6 +846,9 @@ module SwarmSDK
 when "openai", "deepseek", "perplexity", "mistral", "openrouter"
   config.openai_api_base = base_url
   config.openai_api_key = ENV["OPENAI_API_KEY"] || "dummy-key-for-local"
+  # Use standard 'system' role instead of 'developer' for OpenAI-compatible proxies
+  # Most proxies don't support OpenAI's newer 'developer' role convention
+  config.openai_use_system_role = true
 when "ollama"
   config.ollama_api_base = base_url
 when "gpustack"
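
A sketch of what the added setting amounts to when talking to an OpenAI-compatible proxy (the `RubyLLM.configure` block and the base URL are illustrative; only the three `config.*` assignments appear in the diff):

```
require "ruby_llm"

RubyLLM.configure do |config|
  config.openai_api_base = "http://localhost:4000/v1"  # illustrative proxy URL
  config.openai_api_key = ENV["OPENAI_API_KEY"] || "dummy-key-for-local"
  # Many proxies only understand the classic 'system' role, not OpenAI's
  # newer 'developer' role, so system prompts are sent as 'system'.
  config.openai_use_system_role = true
end
```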
@@ -72,6 +72,8 @@ module SwarmSDK
 back to earlier thinking. Use clear formatting and organization to make it easy to reference
 later. Don't hesitate to think out loud - this tool is designed to augment your cognitive capabilities and help
 you deliver better solutions.
+
+**CRITICAL:** The Think tool takes only one parameter: thoughts. Do not include any other parameters.
 DESC
 
 param :thoughts,
@@ -79,9 +81,7 @@ module SwarmSDK
   desc: "Your thoughts, plans, calculations, or any notes you want to record",
   required: true
 
-def execute(thoughts:)
-  return validation_error("thoughts are required") if thoughts.nil? || thoughts.empty?
-
+def execute(**kwargs)
   "Thought noted."
 end
 
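Why the signature above changed: with a required `thoughts:` keyword, any extra parameter an LLM adds raises ArgumentError before the tool body runs, whereas `**kwargs` accepts anything (the new prompt line telling the model to send only `thoughts` handles the rest). A small sketch of the difference in plain Ruby (class names are illustrative):

```
class StrictThink
  def execute(thoughts:)
    "Thought noted."
  end
end

class LenientThink
  def execute(**kwargs)
    "Thought noted."
  end
end

begin
  StrictThink.new.execute(thoughts: "plan", priority: "high")
rescue ArgumentError => e
  puts e.message                                                     # unknown keyword: :priority
end

puts LenientThink.new.execute(thoughts: "plan", priority: "high")   # Thought noted.
```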
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module SwarmSDK
-  VERSION = "2.1.0"
+  VERSION = "2.1.1"
 end
data/lib/swarm_sdk.rb CHANGED
@@ -17,8 +17,7 @@ require "async/semaphore"
 require "ruby_llm"
 require "ruby_llm/mcp"
 
-module SwarmSDK
-end
+require_relative "swarm_sdk/version"
 
 require "zeitwerk"
 loader = Zeitwerk::Loader.new
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: swarm_memory
 version: !ruby/object:Gem::Version
-  version: 2.1.0
+  version: 2.1.1
 platform: ruby
 authors:
 - Paulo Arruda
@@ -71,14 +71,14 @@ dependencies:
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '2.0'
+        version: '2.1'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '2.0'
+        version: '2.1'
 - !ruby/object:Gem::Dependency
   name: zeitwerk
   requirement: !ruby/object:Gem::Requirement