supermemory 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml ADDED
---
SHA256:
  metadata.gz: f86b60e2a7c71dc78f86188b483ef92b4307978bef0ccabc262e30d0c3e9fccf
  data.tar.gz: 8ecd09829f9cb22d6440348fb62ef539526572167e1172dd3f88f2fb275cb819
SHA512:
  metadata.gz: 9e2c210643a3d7ccdf910f1edb856442c1486c762d6938d778e57c80106e40c9cdcfb27816e1e889958ccaaa07111789eaf297da3d9e93868689fd4f0cb4efe7
  data.tar.gz: 8610cabaaa9e900942962146d251e789d512c2fa2d8141b505d8aa241474cb0ea6c6ec0305d8f1cd866e2f9bfa85f11421fff791333f1df72ca041de69679e0e
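These digests can be checked against a locally downloaded copy of the gem's archives. A minimal sketch using Ruby's standard `digest` library (the file path is hypothetical, not part of the package):

```ruby
require "digest"

# Hypothetical path to a locally downloaded archive from this gem.
path = "metadata.gz"

if File.exist?(path)
  actual   = Digest::SHA256.file(path).hexdigest
  expected = "f86b60e2a7c71dc78f86188b483ef92b4307978bef0ccabc262e30d0c3e9fccf"
  puts(actual == expected ? "checksum OK" : "checksum MISMATCH")
end

# The same class works on in-memory data, e.g. the standard "abc" test vector:
Digest::SHA256.hexdigest("abc")
# => "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
```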
data/CHANGELOG.md ADDED
# Changelog

## [0.1.0] - 2025-02-21

### Added

- Core Supermemory client with full API coverage
  - Documents: add, batch_add, get, update, delete, list, delete_bulk, list_processing, upload_file
  - Search: documents (v3), memories (v4), execute
  - Memories: forget, update_memory
  - Settings: get, update
  - Connections: create, list, configure, get_by_id, get_by_tag, delete_by_id, delete_by_provider, import, list_documents, resources
  - Profile: top-level profile method
- Error handling with retry logic (exponential backoff with jitter)
- ruby-openai integration
  - `SupermemoryTools` for function calling
  - `with_supermemory` wrapper for automatic memory injection
- graph-agent integration
  - Pre-built node functions (recall_memories, store_memory, search_memories, add_memory)
  - `build_memory_graph` helper for quick setup
  - Memory-aware state schema
- langchainrb integration
  - `SupermemoryTool` with ToolDefinition for search_memory, add_memory, get_profile, forget_memory
  - `SupermemoryMemory` helper for manual memory context management
data/LICENSE ADDED
MIT License

Copyright (c) 2025 Supermemory

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
data/README.md ADDED
# Supermemory Ruby SDK

Ruby SDK for [Supermemory](https://supermemory.ai) — Memory API for the AI era.

Add persistent memory to AI applications with document management, semantic search, user profiling, and integrations with popular Ruby AI frameworks.

[![Gem Version](https://badge.fury.io/rb/supermemory.svg)](https://rubygems.org/gems/supermemory)

## Installation

Add to your Gemfile:

```ruby
gem "supermemory"
```

Or install directly:

```bash
gem install supermemory
```

## Quick Start

```ruby
require "supermemory"

client = Supermemory::Client.new(api_key: ENV["SUPERMEMORY_API_KEY"])

# Add a document
result = client.add(content: "The user prefers dark mode and uses Ruby daily.")
puts result["id"]

# Search memories
results = client.search.memories(q: "user preferences", container_tag: "user-123")
results["results"].each { |r| puts r["memory"] }

# Get user profile
profile = client.profile(container_tag: "user-123")
puts profile["profile"]["static"]  # Long-term facts
puts profile["profile"]["dynamic"] # Recent context
```

## Configuration

### Global configuration

```ruby
Supermemory.configure do |config|
  config.api_key = ENV["SUPERMEMORY_API_KEY"]
  config.base_url = "https://api.supermemory.ai" # default
  config.timeout = 60     # seconds, default
  config.max_retries = 2  # default
end

client = Supermemory::Client.new
```

### Per-client configuration

```ruby
client = Supermemory::Client.new(
  api_key: "sk-...",
  base_url: "https://custom-endpoint.com",
  timeout: 30,
  max_retries: 3
)
```

### Environment variables

| Variable | Description |
|----------|-------------|
| `SUPERMEMORY_API_KEY` | Default API key |
| `SUPERMEMORY_BASE_URL` | Default base URL |

## Core API

### Documents

```ruby
# Add a document
client.documents.add(
  content: "User prefers functional programming patterns",
  container_tag: "user-123",
  custom_id: "pref-001",
  metadata: { topic: "preferences", source: "chat" },
  entity_context: "This is a user preference about coding style"
)

# Batch add
client.documents.batch_add(
  documents: [
    { content: "Fact one", container_tag: "user-123" },
    { content: "Fact two", container_tag: "user-123" }
  ]
)
# Or with plain strings:
client.documents.batch_add(
  documents: ["Fact one", "Fact two"],
  container_tag: "user-123"
)

# Get, update, delete
doc = client.documents.get("doc-id")
client.documents.update("doc-id", content: "Updated content")
client.documents.delete("doc-id")

# List with filters
client.documents.list(
  filters: {
    "AND" => [
      { key: "topic", value: "preferences" },
      { key: "source", value: "chat" }
    ]
  },
  limit: 20,
  sort: "createdAt",
  order: "desc"
)

# Bulk delete
client.documents.delete_bulk(ids: ["id1", "id2", "id3"])

# List processing documents
client.documents.list_processing

# Upload a file
file = Faraday::Multipart::FilePart.new("/path/to/doc.pdf", "application/pdf")
client.documents.upload_file(file: file, container_tag: "user-123")
```

### Search

```ruby
# Document search (v3 - chunk-level)
results = client.search.documents(
  q: "async programming",
  limit: 10,
  rerank: true,
  include_summary: true,
  filters: { "AND" => [{ key: "topic", value: "python" }] }
)

results["results"].each do |r|
  puts "#{r["title"]} (score: #{r["score"]})"
  r["chunks"].each { |c| puts "  #{c["content"]}" if c["isRelevant"] }
end

# Memory search (v4 - low latency, conversational)
results = client.search.memories(
  q: "user preferences",
  container_tag: "user-123",
  search_mode: "hybrid", # "memories", "hybrid", or "documents"
  limit: 5,
  threshold: 0.5
)

results["results"].each do |r|
  puts r["memory"] || r["chunk"]
end
```
### Memories

```ruby
# Forget a memory
client.memories.forget(container_tag: "user-123", id: "mem-id")
# Or by content:
client.memories.forget(
  container_tag: "user-123",
  content: "User prefers dark mode",
  reason: "User updated preference"
)

# Update a memory (creates a new version)
client.memories.update_memory(
  container_tag: "user-123",
  id: "mem-id",
  new_content: "User prefers light mode now"
)
```

### User Profile

```ruby
result = client.profile(container_tag: "user-123", q: "coding preferences")

puts result["profile"]["static"]  # ["Prefers Ruby", "10 years experience"]
puts result["profile"]["dynamic"] # ["Working on SDK", "Learning Rust"]

# Search results are included when q is provided
if result["searchResults"]
  result["searchResults"]["results"].each { |r| puts r["memory"] }
end
```

### Settings

```ruby
settings = client.settings.get
client.settings.update(chunk_size: 1500, should_llm_filter: true)
```

### Connections

```ruby
# Create an OAuth connection
conn = client.connections.create("github",
  redirect_url: "https://myapp.com/callback",
  document_limit: 100
)
puts conn["authLink"] # Redirect the user here

# List connections
client.connections.list

# Import/sync
client.connections.import("github")

# List documents from a connection
client.connections.list_documents("github")
```

## Error Handling

```ruby
begin
  client.add(content: "test")
rescue Supermemory::AuthenticationError => e
  puts "Invalid API key: #{e.message}"
rescue Supermemory::RateLimitError => e
  puts "Rate limited: #{e.message}"
rescue Supermemory::NotFoundError => e
  puts "Not found: #{e.message}"
rescue Supermemory::InternalServerError => e
  puts "Server error (retries exhausted): #{e.message}"
rescue Supermemory::APIConnectionError => e
  puts "Connection failed: #{e.message}"
rescue Supermemory::APITimeoutError => e
  puts "Request timed out: #{e.message}"
end
```

Automatic retries with exponential backoff and jitter are applied for status codes 408, 409, 429, and 5xx.

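As a rough sketch of how such a retry schedule behaves (the constants and method names below are illustrative, not the gem's internals): each retryable failure waits up to an exponentially growing, capped delay, with random jitter so concurrent clients don't retry in lockstep.

```ruby
RETRYABLE_STATUSES = [408, 409, 429].freeze

# A request is retried on specific 4xx codes and on any 5xx.
def retryable?(status)
  RETRYABLE_STATUSES.include?(status) || status >= 500
end

# "Full jitter" backoff: the ceiling doubles each attempt up to a cap,
# and the actual sleep is uniformly random below that ceiling.
def backoff_delay(attempt, base: 0.5, cap: 8.0)
  ceiling = [base * (2**attempt), cap].min
  rand * ceiling
end

(0..3).each do |attempt|
  printf("attempt %d: sleep up to %.2fs\n", attempt, [0.5 * 2**attempt, 8.0].min)
end
```

With these example values the ceilings are 0.5s, 1s, 2s, 4s, ... capped at 8s; jitter spreads the actual waits across that window.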
---

## Integrations

### ruby-openai Integration

Works with [ruby-openai](https://github.com/alexrudall/ruby-openai).

#### Approach 1: `with_supermemory` Wrapper (Automatic)

Wraps your OpenAI client to auto-inject memories into system prompts:

```ruby
require "supermemory/integrations/openai"
require "openai"

openai = OpenAI::Client.new(access_token: ENV["OPENAI_API_KEY"])

# Wrap with Supermemory — memories are automatically included
client = Supermemory::Integrations::OpenAI.with_supermemory(openai, "user-123",
  mode: "full",        # "profile", "query", or "full"
  add_memory: "always" # "always" or "never"
)

# Use exactly like a normal OpenAI client
response = client.chat(parameters: {
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What's my favorite language?" }
  ]
})
puts response.dig("choices", 0, "message", "content")
```

**Modes:**

| Mode | Description |
|------|-------------|
| `profile` | Injects user profile (static + dynamic facts) |
| `query` | Searches memories based on user message |
| `full` | Both profile and search (best for chatbots) |

#### Approach 2: Function Calling Tools (Explicit)

The model decides when to search or add memories:

```ruby
require "supermemory/integrations/openai"
require "openai"

openai = OpenAI::Client.new(access_token: ENV["OPENAI_API_KEY"])
tools = Supermemory::Integrations::OpenAI::SupermemoryTools.new(
  api_key: ENV["SUPERMEMORY_API_KEY"],
  config: { container_tag: "user-123" }
)

response = openai.chat(parameters: {
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant with memory." },
    { role: "user", content: "Remember that I prefer tea over coffee" }
  ],
  tools: tools.get_tool_definitions
})

# Handle tool calls
message = response.dig("choices", 0, "message")
if message["tool_calls"]
  tool_results = Supermemory::Integrations::OpenAI.execute_memory_tool_calls(
    api_key: ENV["SUPERMEMORY_API_KEY"],
    tool_calls: message["tool_calls"],
    config: { container_tag: "user-123" }
  )

  # Feed results back for the final response
  messages = [
    { role: "system", content: "You are a helpful assistant with memory." },
    { role: "user", content: "Remember that I prefer tea over coffee" },
    message,
    *tool_results
  ]

  final = openai.chat(parameters: { model: "gpt-4o", messages: messages })
  puts final.dig("choices", 0, "message", "content")
end
```
### graph-agent Integration

Works with [graph-agent](https://github.com/ai-firstly/graph-agent).

#### Quick Start with `build_memory_graph`

```ruby
require "supermemory/integrations/graph_agent"
require "graph_agent"

# Define your LLM node
llm_node = ->(state, _config) {
  context = state[:memory_context]
  # Call your LLM here with memory context
  response_text = "Based on what I know: #{context.empty? ? "nothing yet" : context}"
  { messages: [{ role: "assistant", content: response_text }] }
}

# Build a complete memory-augmented graph
app = Supermemory::Integrations::GraphAgent.build_memory_graph(
  api_key: ENV["SUPERMEMORY_API_KEY"],
  llm_node: llm_node
)

# Run it
result = app.invoke(
  { messages: [{ role: "user", content: "What are my preferences?" }], user_id: "user-123" }
)
puts result[:messages].last[:content]
```

#### Custom Graph with Memory Nodes

```ruby
require "supermemory/integrations/graph_agent"
require "graph_agent"

nodes = Supermemory::Integrations::GraphAgent::Nodes.new(
  api_key: ENV["SUPERMEMORY_API_KEY"]
)

schema = Supermemory::Integrations::GraphAgent.memory_schema(
  extra_fields: { intent: { type: String, default: "" } }
)

graph = GraphAgent::Graph::StateGraph.new(schema)

# Add memory nodes
graph.add_node("recall", nodes.method(:recall_memories))
graph.add_node("store", nodes.method(:store_memory))

# Add your custom nodes
graph.add_node("classify") do |state|
  { intent: state[:messages].last[:content].match?(/remember|save/i) ? "store" : "query" }
end

graph.add_node("respond") do |state|
  context = state[:memory_context]
  { messages: [{ role: "assistant", content: "Response with context: #{context}" }] }
end

# Wire edges
graph.add_edge(GraphAgent::START, "recall")
graph.add_edge("recall", "classify")
graph.add_conditional_edges("classify", ->(s) { s[:intent] }, {
  "store" => "store",
  "query" => "respond"
})
graph.add_edge("respond", "store")
graph.add_edge("store", GraphAgent::END_NODE)

app = graph.compile
result = app.invoke(
  { messages: [{ role: "user", content: "Remember I like Ruby" }], user_id: "user-123" }
)
```

#### Available Node Functions

| Node | Purpose | State Input | State Output |
|------|---------|-------------|--------------|
| `recall_memories` | Fetch profile + relevant memories | `:messages`, `:user_id` | `:memories`, `:memory_context` |
| `store_memory` | Store latest conversation exchange | `:messages`, `:user_id` | (none) |
| `search_memories` | Search with specific query | `:query` or `:messages`, `:user_id` | `:memories` |
| `add_memory` | Store specific content | `:memory_content`, `:user_id` | (none) |

### langchainrb Integration

Works with [langchainrb](https://github.com/patterns-ai-core/langchainrb).

#### As a Tool with Langchain::Assistant

```ruby
require "supermemory/integrations/langchain"
require "langchain"

llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
memory_tool = Supermemory::Integrations::Langchain::SupermemoryTool.new(
  api_key: ENV["SUPERMEMORY_API_KEY"],
  container_tag: "user-123"
)

assistant = Langchain::Assistant.new(
  llm: llm,
  tools: [memory_tool],
  instructions: "You are a helpful assistant with persistent memory. " \
                "Use memory tools to remember and recall information."
)

# The assistant will automatically use search_memory/add_memory as needed
assistant.add_message_and_run!(content: "Remember that my favorite language is Ruby")
assistant.add_message_and_run!(content: "What's my favorite programming language?")

puts assistant.messages.last.content
```

#### Available Tool Functions

| Function | Description |
|----------|-------------|
| `search_memory` | Search memories by query (supports memories/hybrid/documents modes) |
| `add_memory` | Save information to long-term memory |
| `get_profile` | Get user profile with optional search |
| `forget_memory` | Remove specific memory by content |

#### Manual Memory Management

For non-tool-based memory injection:

```ruby
require "supermemory/integrations/langchain"

memory = Supermemory::Integrations::Langchain::SupermemoryMemory.new(
  api_key: ENV["SUPERMEMORY_API_KEY"],
  container_tag: "user-123"
)

# Get context for the system prompt
context = memory.context(query: "user preferences")

# Use with any LLM call
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
response = llm.chat(messages: [
  { role: "system", content: "You are a helpful assistant.\n\n#{context}" },
  { role: "user", content: "What are my preferences?" }
])

# Store the exchange
memory.store(
  user_message: "What are my preferences?",
  assistant_message: response.chat_completion
)

# Direct search
results = memory.search(query: "coding preferences", limit: 5)
```

## Development

```bash
git clone https://github.com/supermemoryai/ruby-sdk.git
cd ruby-sdk
bundle install
bundle exec rspec
```

## License

MIT License. See [LICENSE](LICENSE).