agentmemory-openai 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 Devasish Banerjee

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1,203 @@
Metadata-Version: 2.4
Name: agentmemory-openai
Version: 0.1.0
Summary: Persistent memory for OpenAI agents — store, recall, and forget.
Home-page: https://github.com/bdeva1975/agentmemory
Author: Devasish Banerjee
Author-email: your_email@gmail.com
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: openai>=1.0.0
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: httpx>=0.27.0
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: license-file
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

# 🧠 AgentMemory

**Persistent memory for your OpenAI agents — store, recall, and forget.**

![Python](https://img.shields.io/badge/python-3.10%2B-blue)
![License](https://img.shields.io/badge/license-MIT-green)
![OpenAI](https://img.shields.io/badge/powered%20by-OpenAI-412991)

---

## The problem

Every OpenAI agent starts from zero: no memory of past conversations,
preferences, decisions, or context. Every session is a blank slate.

**AgentMemory solves this — giving your agent durable, queryable memory
that persists across sessions.**

---

## Quickstart

Install the dependencies:

```bash
pip install openai python-dotenv httpx
```

Set your OpenAI API key:

```bash
# .env
OPENAI_API_KEY=your_key_here
```

Run your first memory session:

```python
from agentmemory import AgentMemory

memory = AgentMemory(memory_path="agent_memory.json")

# Store memories from any text
memory.remember(
    "I am building a RAG application using Python and OpenAI. "
    "I prefer concise answers and clean code."
)

# Recall relevant memories
results = memory.recall("What is the user building?")
print(results)
```

Output:

```
Found 2 relevant memories:
 • [CONTEXT] User is building a RAG application using Python and OpenAI
 • [CONTEXT] User prefers concise answers and clean code
```
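The recalled memories print as plain strings, which makes them easy to feed back to the model. A minimal sketch of the usual pattern of prepending recalled memories to the system prompt, with the recalled strings hard-coded so the example stands alone:

```python
# Illustrative sketch: inject recalled memories into a chat system prompt.
# These strings stand in for what memory.recall(...) returns; with the real
# package you would build them from str(results) or results.memories.
recalled = [
    "[CONTEXT] User is building a RAG application using Python and OpenAI",
    "[CONTEXT] User prefers concise answers and clean code",
]

system_prompt = (
    "You are a helpful assistant.\n"
    "Relevant things you remember about the user:\n"
    + "\n".join(f"- {m}" for m in recalled)
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Suggest a chunking strategy."},
]

print(system_prompt)
```

The `messages` list can then be passed to any chat-completion call, so the agent answers with the stored context in view.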

---

## API Reference

```python
memory = AgentMemory(
    memory_path="agent_memory.json",  # local storage file
    model="gpt-4o-mini",              # judge model
    max_memories=100,                 # max memories to store
)

memory.remember(text)         # extract and store memories from text
memory.recall(query, top_k)   # retrieve relevant memories
memory.forget(memory_id)      # delete a specific memory
memory.forget_all()           # clear all memories
memory.list_all()             # return all stored memories
memory.count                  # total number of memories
```
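The `max_memories` cap is enforced in `memory.py` with a keep-the-newest slice: once the store exceeds the cap, the oldest entries are dropped first. A minimal sketch of that rule:

```python
# Sketch of the trimming rule applied by remember():
# keep only the newest max_memories entries; the oldest are dropped first.
def trim(memories: list, max_memories: int) -> list:
    if len(memories) > max_memories:
        return memories[-max_memories:]
    return memories

store = [f"memory-{i}" for i in range(7)]
print(trim(store, 5))  # → ['memory-2', 'memory-3', 'memory-4', 'memory-5', 'memory-6']
```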

---

## Memory categories

| Category | Description |
|----------|-------------|
| `fact` | Factual statements about the user or domain |
| `preference` | User preferences and style choices |
| `decision` | Decisions made during conversations |
| `context` | Project or situational context |

---

## Streamlit demo

Run the interactive demo locally:

```bash
streamlit run app.py
```

Features:

- Store memories from any text input
- Search memories with semantic recall
- View all stored memories in the sidebar
- Delete individual memories or clear all

---

## How it works

1. `remember(text)` sends the text to GPT-4o-mini, which extracts
   individual factual claims, preferences, decisions, and context.
2. Each memory is stored as a structured object in a local JSON file.
3. `recall(query)` sends the query and all stored memories to
   GPT-4o-mini, which scores each memory for relevance.
4. Memories are returned ranked by relevance score.

All storage is **local** — no cloud, no database, no infrastructure.
Just a JSON file on your machine.
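On disk, that file is just a JSON array of memory objects whose fields mirror the `Memory` dataclass (`id`, `content`, `category`, `created_at`, `relevance`). A standard-library sketch of the same read/write round-trip, with illustrative values:

```python
import json
import os
import tempfile

# One illustrative entry — field names match the Memory dataclass in
# models.py; the id and timestamp values here are made up.
entries = [
    {
        "id": "3f2c9a1e-1111-2222-3333-444455556666",
        "content": "User is building a RAG application",
        "category": "context",
        "created_at": "2025-01-01T12:00:00",
        "relevance": 0.0,
    }
]

path = os.path.join(tempfile.gettempdir(), "agent_memory.json")

# _save() writes the list of memory dicts as pretty-printed JSON ...
with open(path, "w", encoding="utf-8") as f:
    json.dump(entries, f, indent=2, ensure_ascii=False)

# ... and _load() reads it straight back.
with open(path, "r", encoding="utf-8") as f:
    loaded = json.load(f)

print(loaded[0]["category"])  # → context
```

Because the format is plain JSON, the store can be inspected, versioned, or edited with any text editor.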

---

## Cost

Each `remember()` call: **~$0.001**

Each `recall()` call: **~$0.001**

Both use GPT-4o-mini as the judge.

---

## Roadmap

- [ ] Vector embedding search for faster recall at scale
- [ ] Memory expiry and TTL support
- [ ] Memory categories and filtering
- [ ] LangChain and LlamaIndex integration hooks
- [ ] Multi-agent shared memory support

---

## Project structure

```
agentmemory/
├── agentmemory/
│   ├── __init__.py        # public API
│   ├── memory.py          # AgentMemory core class
│   └── models.py          # Memory and MemorySearchResult dataclasses
├── app.py                 # Streamlit demo
├── example.py             # quickstart example
├── requirements.txt
├── .env.example
└── README.md
```

---

## License

MIT — free to use, modify, and distribute.

---

## Contributing

Pull requests are welcome. Please open an issue first to discuss
what you would like to change.

---

*Built with OpenAI GPT-4o-mini as the memory extraction and recall judge.*
@@ -0,0 +1,5 @@
from agentmemory.memory import AgentMemory
from agentmemory.models import Memory, MemorySearchResult

__version__ = "0.1.0"
__all__ = ["AgentMemory", "Memory", "MemorySearchResult"]
@@ -0,0 +1,272 @@
import os
import json
import uuid
import httpx
from openai import OpenAI
from dotenv import load_dotenv
from typing import List
from agentmemory.models import Memory, MemorySearchResult

load_dotenv()

_client = None


def _get_client() -> OpenAI:
    """Lazy-initialise the OpenAI client once."""
    global _client
    if _client is None:
        api_key = os.getenv("OPENAI_API_KEY")
        if not api_key:
            raise EnvironmentError(
                "OPENAI_API_KEY not found. "
                "Set it in your .env file or as an environment variable."
            )
        # Keep TLS certificate verification enabled (httpx's default);
        # passing verify=False here would silently disable it.
        http_client = httpx.Client()
        _client = OpenAI(api_key=api_key, http_client=http_client)
    return _client


_EXTRACT_PROMPT = """
You are a memory extraction assistant.

Given a conversation message, extract all factual claims, preferences,
decisions, or important context that an AI agent should remember for
future conversations.

Return ONLY a valid JSON object with a single key "memories" containing an array.
No explanation. No markdown. Raw JSON only.

Each item must follow this schema:
{
  "content": "the memory to store as a short clear sentence",
  "category": "fact | preference | decision | context"
}

Example output:
{"memories": [
  {"content": "User prefers Python over JavaScript", "category": "preference"},
  {"content": "User is building a RAG application", "category": "context"}
]}

If there is nothing meaningful to extract, return {"memories": []}
""".strip()

_SEARCH_PROMPT = """
You are a memory relevance judge.

Given a query and a list of memories, return the indices of memories
that are relevant to the query, along with a relevance score.

Return ONLY a valid JSON object with a single key "results" containing an array.
No explanation. No markdown. Raw JSON only.

Each item must follow this schema:
{
  "index": 0,
  "relevance": 0.95
}

Return {"results": []} if no memories are relevant.
Relevance scores must be between 0.0 and 1.0.
Only include memories with relevance >= 0.5.
""".strip()


class AgentMemory:
    """
    Persistent memory store for OpenAI agents.

    Stores memories in a local JSON file and retrieves
    relevant memories using GPT-4o-mini as a semantic judge.

    Usage:
        memory = AgentMemory(memory_path="agent_memory.json")
        memory.remember("User told me they prefer concise answers.")
        results = memory.recall("What does the user prefer?")
        print(results)
    """

    def __init__(
        self,
        memory_path: str = "agent_memory.json",
        model: str = "gpt-4o-mini",
        max_memories: int = 100,
    ):
        self.memory_path = memory_path
        self.model = model
        self.max_memories = max_memories
        self._memories: List[Memory] = []
        self._load()

    def _load(self):
        """Load memories from the JSON file."""
        if os.path.exists(self.memory_path):
            try:
                with open(self.memory_path, "r", encoding="utf-8") as f:
                    data = json.load(f)
                self._memories = [Memory(**m) for m in data]
            except Exception:
                # A corrupt or unreadable file starts the store fresh.
                self._memories = []

    def _save(self):
        """Save memories to the JSON file."""
        with open(self.memory_path, "w", encoding="utf-8") as f:
            json.dump(
                [m.__dict__ for m in self._memories],
                f,
                indent=2,
                ensure_ascii=False,
            )

    def remember(self, text: str) -> List[Memory]:
        """
        Extract and store memories from a text input.

        Args:
            text: conversation message or any text to extract memories from

        Returns:
            List of Memory objects that were stored
        """
        if not text or not text.strip():
            return []

        client = _get_client()

        try:
            completion = client.chat.completions.create(
                model=self.model,
                messages=[
                    {"role": "system", "content": _EXTRACT_PROMPT},
                    {"role": "user", "content": text.strip()},
                ],
                temperature=0,
                response_format={"type": "json_object"},
            )
            raw = completion.choices[0].message.content
        except Exception:
            return []

        try:
            parsed = json.loads(raw)
            items = parsed.get("memories", [])
        except json.JSONDecodeError:
            return []

        new_memories = []
        for item in items:
            if isinstance(item, dict) and "content" in item:
                memory = Memory(
                    id=str(uuid.uuid4()),
                    content=item["content"],
                    category=item.get("category", "fact"),
                )
                self._memories.append(memory)
                new_memories.append(memory)

        # Enforce the cap by dropping the oldest memories first.
        if len(self._memories) > self.max_memories:
            self._memories = self._memories[-self.max_memories:]

        self._save()
        return new_memories

    def recall(
        self,
        query: str,
        top_k: int = 5,
    ) -> MemorySearchResult:
        """
        Retrieve memories relevant to a query.

        Args:
            query: the question or context to search memories for
            top_k: maximum number of memories to return

        Returns:
            MemorySearchResult with relevant memories and scores
        """
        if not self._memories:
            return MemorySearchResult(query=query)

        if not query or not query.strip():
            return MemorySearchResult(query=query)

        client = _get_client()

        memories_list = "\n".join(
            f"{i}: {m.content} [{m.category}]"
            for i, m in enumerate(self._memories)
        )

        try:
            completion = client.chat.completions.create(
                model=self.model,
                messages=[
                    {"role": "system", "content": _SEARCH_PROMPT},
                    {
                        "role": "user",
                        "content": f"QUERY: {query.strip()}\n\nMEMORIES:\n{memories_list}",
                    },
                ],
                temperature=0,
                response_format={"type": "json_object"},
            )
            raw = completion.choices[0].message.content
        except Exception:
            return MemorySearchResult(query=query)

        try:
            parsed = json.loads(raw)
            items = parsed.get("results", [])
        except json.JSONDecodeError:
            return MemorySearchResult(query=query)

        relevant = []
        for item in items:
            if isinstance(item, dict) and "index" in item:
                idx = item["index"]
                # Guard against non-integer or out-of-range indices from the model.
                if isinstance(idx, int) and 0 <= idx < len(self._memories):
                    memory = self._memories[idx]
                    memory.relevance = float(item.get("relevance", 0.5))
                    relevant.append(memory)

        relevant.sort(key=lambda m: m.relevance, reverse=True)
        relevant = relevant[:top_k]

        return MemorySearchResult(
            memories=relevant,
            query=query,
            total_found=len(relevant),
        )

    def forget(self, memory_id: str) -> bool:
        """
        Delete a specific memory by ID.

        Args:
            memory_id: the ID of the memory to delete

        Returns:
            True if deleted, False if not found
        """
        original_count = len(self._memories)
        self._memories = [m for m in self._memories if m.id != memory_id]
        if len(self._memories) < original_count:
            self._save()
            return True
        return False

    def forget_all(self):
        """Clear all memories."""
        self._memories = []
        self._save()

    def list_all(self) -> List[Memory]:
        """Return all stored memories."""
        return self._memories.copy()

    @property
    def count(self) -> int:
        """Total number of stored memories."""
        return len(self._memories)
@@ -0,0 +1,48 @@
from dataclasses import dataclass, field
from typing import List
from datetime import datetime


@dataclass
class Memory:
    """
    A single memory entry stored by the agent.

    Attributes:
        id         : unique memory identifier
        content    : the fact or information extracted from conversation
        category   : type of memory (fact, preference, decision, context)
        created_at : timestamp when the memory was created
        relevance  : relevance score when retrieved (0.0 - 1.0)
    """
    id: str
    content: str
    category: str = "fact"
    created_at: str = field(default_factory=lambda: datetime.now().isoformat())
    relevance: float = 0.0

    def __str__(self) -> str:
        return f"[{self.category.upper()}] {self.content}"


@dataclass
class MemorySearchResult:
    """
    Result returned when searching agent memories.

    Attributes:
        memories    : list of relevant Memory objects
        query       : the search query used
        total_found : total number of memories found
    """
    memories: List[Memory] = field(default_factory=list)
    query: str = ""
    total_found: int = 0

    def __str__(self) -> str:
        if not self.memories:
            return "No relevant memories found."
        lines = [f"Found {self.total_found} relevant memories:"]
        for m in self.memories:
            lines.append(f" • {m}")
        return "\n".join(lines)
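A quick way to see the formatting these classes produce (the same style as the Quickstart output): the sketch below inlines trimmed copies of the two dataclasses, with docstrings and the `created_at` default omitted, so it runs standalone:

```python
from dataclasses import dataclass, field
from typing import List

# Trimmed copies of the models.py dataclasses, inlined for a standalone demo.
@dataclass
class Memory:
    id: str
    content: str
    category: str = "fact"
    relevance: float = 0.0

    def __str__(self) -> str:
        # Category tag in upper case, then the memory text.
        return f"[{self.category.upper()}] {self.content}"

@dataclass
class MemorySearchResult:
    memories: List[Memory] = field(default_factory=list)
    query: str = ""
    total_found: int = 0

    def __str__(self) -> str:
        if not self.memories:
            return "No relevant memories found."
        lines = [f"Found {self.total_found} relevant memories:"]
        for m in self.memories:
            lines.append(f" • {m}")
        return "\n".join(lines)

result = MemorySearchResult(
    memories=[Memory(id="1", content="User prefers Python", category="preference")],
    query="language preference",
    total_found=1,
)
print(result)
# Found 1 relevant memories:
#  • [PREFERENCE] User prefers Python
```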
@@ -0,0 +1,11 @@
LICENSE
README.md
setup.py
agentmemory/__init__.py
agentmemory/memory.py
agentmemory/models.py
agentmemory_openai.egg-info/PKG-INFO
agentmemory_openai.egg-info/SOURCES.txt
agentmemory_openai.egg-info/dependency_links.txt
agentmemory_openai.egg-info/requires.txt
agentmemory_openai.egg-info/top_level.txt
@@ -0,0 +1,3 @@
openai>=1.0.0
python-dotenv>=1.0.0
httpx>=0.27.0
@@ -0,0 +1,4 @@
[egg_info]
tag_build =
tag_date = 0

@@ -0,0 +1,28 @@
from setuptools import setup, find_packages

with open("README.md", "r", encoding="utf-8") as f:
    long_description = f.read()

setup(
    name="agentmemory-openai",
    version="0.1.0",
    author="Devasish Banerjee",
    author_email="your_email@gmail.com",
    description="Persistent memory for OpenAI agents — store, recall, and forget.",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/bdeva1975/agentmemory",
    packages=find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
        "Topic :: Scientific/Engineering :: Artificial Intelligence",
    ],
    python_requires=">=3.10",
    install_requires=[
        "openai>=1.0.0",
        "python-dotenv>=1.0.0",
        "httpx>=0.27.0",
    ],
)