ace-framework 0.1.0__tar.gz

@@ -0,0 +1,42 @@
+ # API Keys for LLM Providers
+ # Copy this file to .env and add your actual API keys
+
+ # OpenAI
+ OPENAI_API_KEY=your-openai-api-key-here
+
+ # Anthropic (Claude)
+ ANTHROPIC_API_KEY=your-anthropic-api-key-here
+
+ # Google (Gemini)
+ GOOGLE_API_KEY=your-google-api-key-here
+
+ # Cohere
+ COHERE_API_KEY=your-cohere-api-key-here
+
+ # Azure OpenAI (optional)
+ AZURE_API_KEY=your-azure-api-key-here
+ AZURE_API_BASE=https://your-resource.openai.azure.com
+ AZURE_API_VERSION=2024-02-15-preview
+
+ # AWS Bedrock (optional)
+ AWS_ACCESS_KEY_ID=your-aws-access-key
+ AWS_SECRET_ACCESS_KEY=your-aws-secret-key
+ AWS_REGION_NAME=us-east-1
+
+ # Hugging Face (optional)
+ HUGGINGFACE_API_KEY=your-huggingface-api-key-here
+
+ # Replicate (optional)
+ REPLICATE_API_KEY=your-replicate-api-key-here
+
+ # Together AI (optional)
+ TOGETHER_API_KEY=your-together-api-key-here
+
+ # Model Configuration (optional)
+ DEFAULT_MODEL=gpt-4o-mini
+ DEFAULT_TEMPERATURE=0.0
+ DEFAULT_MAX_TOKENS=512
+
+ # Cost Tracking (optional)
+ TRACK_COSTS=true
+ MAX_BUDGET=10.0
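
For illustration, a stdlib-only sketch of how an application might consume the optional model-configuration values above (in this package the `.env` file would normally be loaded first via python-dotenv's `load_dotenv()`; the variable names mirror the example file, and the `config` dict is purely hypothetical, not part of the library):

```python
import os

# Defaults mirror the example file; any value set in the environment wins
config = {
    "model": os.getenv("DEFAULT_MODEL", "gpt-4o-mini"),
    "temperature": float(os.getenv("DEFAULT_TEMPERATURE", "0.0")),
    "max_tokens": int(os.getenv("DEFAULT_MAX_TOKENS", "512")),
    "track_costs": os.getenv("TRACK_COSTS", "true").lower() == "true",
    "max_budget": float(os.getenv("MAX_BUDGET", "10.0")),
}
```

Numeric values arrive as strings from the environment, so the explicit `float()`/`int()` conversions are what keep downstream code type-safe.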
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2024 Kayba.ai
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1,11 @@
+ include README.md
+ include LICENSE
+ include requirements.txt
+ include requirements-optional.txt
+ include .env.example
+ recursive-include ace *.py
+ recursive-include examples *.py
+ recursive-include tests *.py
+ global-exclude __pycache__
+ global-exclude *.py[co]
+ global-exclude .DS_Store
@@ -0,0 +1,316 @@
+ Metadata-Version: 2.4
+ Name: ace-framework
+ Version: 0.1.0
+ Summary: Build self-improving AI agents that learn from experience
+ Home-page: https://github.com/Kayba-ai/agentic-context-engine
+ Author: Kayba.ai
+ Author-email: "Kayba.ai" <hello@kayba.ai>
+ Maintainer-email: "Kayba.ai" <hello@kayba.ai>
+ License: MIT
+ Project-URL: Homepage, https://kayba.ai
+ Project-URL: Documentation, https://github.com/Kayba-ai/agentic-context-engine#readme
+ Project-URL: Repository, https://github.com/Kayba-ai/agentic-context-engine
+ Project-URL: Issues, https://github.com/Kayba-ai/agentic-context-engine/issues
+ Keywords: ai,llm,agents,machine-learning,self-improvement,context-engineering,ace,openai,anthropic,claude,gpt
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Requires-Python: >=3.9
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: pydantic>=2.0.0
+ Requires-Dist: python-dotenv>=1.0.0
+ Provides-Extra: all
+ Requires-Dist: litellm>=1.0.0; extra == "all"
+ Requires-Dist: transformers>=4.0.0; extra == "all"
+ Requires-Dist: torch>=2.0.0; extra == "all"
+ Provides-Extra: litellm
+ Requires-Dist: litellm>=1.0.0; extra == "litellm"
+ Provides-Extra: transformers
+ Requires-Dist: transformers>=4.0.0; extra == "transformers"
+ Requires-Dist: torch>=2.0.0; extra == "transformers"
+ Provides-Extra: dev
+ Requires-Dist: pytest>=7.0.0; extra == "dev"
+ Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
+ Requires-Dist: black>=23.0.0; extra == "dev"
+ Requires-Dist: mypy>=1.0.0; extra == "dev"
+ Dynamic: author
+ Dynamic: home-page
+ Dynamic: license-file
+ Dynamic: requires-python
+
+ # Agentic Context Engine (ACE) 🚀
+
+ **Build self-improving AI agents that learn from experience**
+
+ [![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ [![Paper](https://img.shields.io/badge/Paper-arXiv:2510.04618-red.svg)](https://arxiv.org/abs/2510.04618)
+
+ 🧠 **ACE** is a framework for building AI agents that get smarter over time by learning from their mistakes and successes.
+
+ 💡 Based on the paper "Agentic Context Engineering" from Stanford/SambaNova - ACE helps your LLM agents build a "playbook" of strategies that improves with each task.
+
+ 🔌 **Works with any LLM** - OpenAI, Anthropic Claude, Google Gemini, and 100+ more providers out of the box!
+
+ ## Quick Start
+
+ **Minimum Python 3.9 required**
+
+ ### Install ACE:
+ ```bash
+ pip install ace-framework
+ # or for development:
+ pip install -r requirements.txt
+ ```
+
+ ### Set up your API key:
+ ```bash
+ # Copy the example environment file
+ cp .env.example .env
+
+ # Add your OpenAI key (or Anthropic, Google, etc.)
+ echo "OPENAI_API_KEY=your-key-here" >> .env
+ ```
+
+ ### Run your first agent:
+ ```python
+ from ace import LiteLLMClient, OfflineAdapter, Generator, Reflector, Curator
+ from ace import Playbook, Sample, TaskEnvironment, EnvironmentResult
+ from dotenv import load_dotenv
+
+ load_dotenv()
+
+ # Create your agent with any LLM
+ client = LiteLLMClient(model="gpt-3.5-turbo")  # or claude-3, gemini-pro, etc.
+
+ # Set up ACE components
+ adapter = OfflineAdapter(
+     playbook=Playbook(),
+     generator=Generator(client),
+     reflector=Reflector(client),
+     curator=Curator(client)
+ )
+
+ # Define a simple task
+ class SimpleEnv(TaskEnvironment):
+     def evaluate(self, sample, output):
+         correct = sample.ground_truth.lower() in output.final_answer.lower()
+         return EnvironmentResult(
+             feedback="Correct!" if correct else "Try again",
+             ground_truth=sample.ground_truth
+         )
+
+ # Train your agent
+ samples = [
+     Sample(question="What is 2+2?", ground_truth="4"),
+     Sample(question="Capital of France?", ground_truth="Paris"),
+ ]
+
+ results = adapter.run(samples, SimpleEnv(), epochs=1)
+ print(f"Agent learned {len(adapter.playbook.bullets())} strategies!")
+ ```
+
+ ## How It Works
+
+ ACE uses three AI "roles" that work together to help your agent improve:
+
+ 1. **🎯 Generator** - Tries to solve tasks using the current playbook
+ 2. **🔍 Reflector** - Analyzes what went wrong (or right)
+ 3. **📝 Curator** - Updates the playbook with new strategies
+
+ Think of it like a sports team reviewing game footage to get better!
+
+ ## Examples
+
+ ### Simple Q&A Agent
+ ```bash
+ python examples/simple_ace_example.py
+ ```
+
+ ### Advanced Examples with Different LLMs
+ ```bash
+ python examples/quickstart_litellm.py
+ ```
+
+ Check out the `examples/` folder for more!
+
+ ## Supported LLM Providers
+
+ ACE works with **100+ LLM providers** through LiteLLM:
+
+ - **OpenAI** - GPT-4, GPT-3.5-turbo
+ - **Anthropic** - Claude 3 (Opus, Sonnet, Haiku)
+ - **Google** - Gemini Pro, PaLM
+ - **Cohere** - Command models
+ - **Local Models** - Ollama, Transformers
+ - **And many more!**
+
+ Just change the model name:
+ ```python
+ # OpenAI
+ client = LiteLLMClient(model="gpt-4")
+
+ # Anthropic Claude
+ client = LiteLLMClient(model="claude-3-sonnet-20240229")
+
+ # Google Gemini
+ client = LiteLLMClient(model="gemini-pro")
+
+ # With fallbacks for reliability
+ client = LiteLLMClient(
+     model="gpt-4",
+     fallbacks=["claude-3-haiku", "gpt-3.5-turbo"]
+ )
+ ```
+
+ ## Key Features
+
+ - ✅ **Self-Improving** - Agents learn from experience and build knowledge
+ - ✅ **Provider Agnostic** - Switch LLMs with one line of code
+ - ✅ **Production Ready** - Automatic retries, fallbacks, and error handling
+ - ✅ **Cost Efficient** - Track costs and use cheaper models as fallbacks
+ - ✅ **Async Support** - Built for high-performance applications
+ - ✅ **Fully Typed** - Great IDE support and type safety
+
+ ## Advanced Usage
+
+ ### Online Learning (Learn While Running)
+ ```python
+ from ace import OnlineAdapter
+
+ # Agent improves while processing real tasks
+ adapter = OnlineAdapter(
+     playbook=existing_playbook,  # Can start with existing knowledge
+     generator=Generator(client),
+     reflector=Reflector(client),
+     curator=Curator(client)
+ )
+
+ # Process tasks one by one, learning from each
+ for task in real_world_tasks:
+     result = adapter.process(task, environment)
+     # Agent automatically updates its strategies
+ ```
+
+ ### Custom Task Environments
+ ```python
+ class CodeTestingEnv(TaskEnvironment):
+     def evaluate(self, sample, output):
+         # Run the generated code
+         test_passed = run_tests(output.final_answer)
+
+         return EnvironmentResult(
+             feedback=f"Tests {'passed' if test_passed else 'failed'}",
+             ground_truth=sample.ground_truth,
+             metrics={"pass_rate": 1.0 if test_passed else 0.0}
+         )
+ ```
+
+ ### Streaming Responses
+ ```python
+ # Get responses token by token
+ for chunk in client.complete_with_stream("Write a story"):
+     print(chunk, end="", flush=True)
+ ```
+
+ ### Async Operations
+ ```python
+ import asyncio
+
+ async def main():
+     response = await client.acomplete("Solve this problem...")
+     print(response.text)
+
+ asyncio.run(main())
+ ```
+
+ ## Architecture
+
+ ACE implements the Agentic Context Engineering method from the research paper:
+
+ - **Playbook**: A structured memory that stores successful strategies
+ - **Bullets**: Individual strategies with helpful/harmful counters
+ - **Delta Operations**: Incremental updates that preserve knowledge
+ - **Three Roles**: Generator, Reflector, and Curator working together
+
+ The framework prevents "context collapse" - a common problem where agents forget important information over time.
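
The playbook/bullet/delta design above can be sketched with a toy structure (illustrative only; `MiniPlaybook`, `Bullet`, and `apply_delta` are hypothetical names for this sketch, not the library's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Bullet:
    """A single strategy, with counters the Reflector can vote on."""
    text: str
    helpful: int = 0
    harmful: int = 0

@dataclass
class MiniPlaybook:
    """Toy playbook updated by incremental deltas, never rewritten wholesale."""
    bullets: dict = field(default_factory=dict)

    def apply_delta(self, op, key, text=""):
        # Each delta touches exactly one bullet
        if op == "add":
            self.bullets[key] = Bullet(text)
        elif op == "upvote":
            self.bullets[key].helpful += 1
        elif op == "downvote":
            self.bullets[key].harmful += 1
        elif op == "remove":
            self.bullets.pop(key, None)

pb = MiniPlaybook()
pb.apply_delta("add", "units", "Double-check units before giving a numeric answer")
pb.apply_delta("upvote", "units")
print(pb.bullets["units"].helpful)  # 1
```

Because each update edits one bullet at a time, earlier strategies survive later edits, which is the property that guards against context collapse.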
245
+
246
+ ## Repository Structure
247
+
248
+ ```
249
+ ace/
250
+ ├── ace/ # Core library
251
+ │ ├── playbook.py # Strategy storage
252
+ │ ├── roles.py # Generator, Reflector, Curator
253
+ │ ├── adaptation.py # Training loops
254
+ │ └── llm_providers/ # LLM integrations
255
+ ├── examples/ # Ready-to-run examples
256
+ ├── tests/ # Unit tests
257
+ └── docs/ # Documentation
258
+ ```
259
+
260
+ ## Contributing
261
+
262
+ We welcome contributions! Feel free to:
263
+ - 🐛 Report bugs
264
+ - 💡 Suggest features
265
+ - 🔧 Submit PRs
266
+ - 📚 Improve documentation
267
+
268
+ ## Citation
269
+
270
+ If you use ACE in your research or project, please cite the original papers:
271
+
272
+ ### ACE Paper (Primary Reference)
273
+ ```bibtex
274
+ @article{zhang2024ace,
275
+ title={Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models},
276
+ author={Zhang, Qizheng and Hu, Changran and Upasani, Shubhangi and Ma, Boyuan and Hong, Fenglu and
277
+ Kamanuru, Vamsidhar and Rainton, Jay and Wu, Chen and Ji, Mengmeng and Li, Hanchen and
278
+ Thakker, Urmish and Zou, James and Olukotun, Kunle},
279
+ journal={arXiv preprint arXiv:2510.04618},
280
+ year={2024}
+ }
+ ```
+
+ ### Dynamic Cheatsheet (Foundation Work)
+ ACE builds upon the adaptive memory concepts from Dynamic Cheatsheet:
+
+ ```bibtex
+ @article{suzgun2025dynamiccheatsheet,
+   title={Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory},
+   author={Suzgun, Mirac and Yuksekgonul, Mert and Bianchi, Federico and Jurafsky, Dan and Zou, James},
+   year={2025},
+   eprint={2504.07952},
+   archivePrefix={arXiv},
+   primaryClass={cs.LG},
+   url={https://arxiv.org/abs/2504.07952}
+ }
+ ```
+
+ ### This Implementation
+ If you use this specific implementation, you can also reference:
+
+ ```
+ This repository: https://github.com/Kayba-ai/agentic-context-engine
+ PyPI package: https://pypi.org/project/ace-framework/
+ Based on the open reproduction at: https://github.com/sci-m-wang/ACE-open
+ ```
+
+ ## License
+
+ MIT License - see [LICENSE](LICENSE) file for details.
+
+ ---
+
+ **Note**: This is an independent implementation based on the ACE paper (arXiv:2510.04618) and builds upon concepts from Dynamic Cheatsheet. For the original reproduction scaffold, see [sci-m-wang/ACE-open](https://github.com/sci-m-wang/ACE-open).
+
+ Made with ❤️ by [Kayba](https://kayba.ai) and the open-source community
@@ -0,0 +1,268 @@
+ # Agentic Context Engine (ACE) 🚀
+
+ **Build self-improving AI agents that learn from experience**
+
+ [![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ [![Paper](https://img.shields.io/badge/Paper-arXiv:2510.04618-red.svg)](https://arxiv.org/abs/2510.04618)
+
+ 🧠 **ACE** is a framework for building AI agents that get smarter over time by learning from their mistakes and successes.
+
+ 💡 Based on the paper "Agentic Context Engineering" from Stanford/SambaNova - ACE helps your LLM agents build a "playbook" of strategies that improves with each task.
+
+ 🔌 **Works with any LLM** - OpenAI, Anthropic Claude, Google Gemini, and 100+ more providers out of the box!
+
+ ## Quick Start
+
+ **Minimum Python 3.9 required**
+
+ ### Install ACE:
+ ```bash
+ pip install ace-framework
+ # or for development:
+ pip install -r requirements.txt
+ ```
+
+ ### Set up your API key:
+ ```bash
+ # Copy the example environment file
+ cp .env.example .env
+
+ # Add your OpenAI key (or Anthropic, Google, etc.)
+ echo "OPENAI_API_KEY=your-key-here" >> .env
+ ```
+
+ ### Run your first agent:
+ ```python
+ from ace import LiteLLMClient, OfflineAdapter, Generator, Reflector, Curator
+ from ace import Playbook, Sample, TaskEnvironment, EnvironmentResult
+ from dotenv import load_dotenv
+
+ load_dotenv()
+
+ # Create your agent with any LLM
+ client = LiteLLMClient(model="gpt-3.5-turbo")  # or claude-3, gemini-pro, etc.
+
+ # Set up ACE components
+ adapter = OfflineAdapter(
+     playbook=Playbook(),
+     generator=Generator(client),
+     reflector=Reflector(client),
+     curator=Curator(client)
+ )
+
+ # Define a simple task
+ class SimpleEnv(TaskEnvironment):
+     def evaluate(self, sample, output):
+         correct = sample.ground_truth.lower() in output.final_answer.lower()
+         return EnvironmentResult(
+             feedback="Correct!" if correct else "Try again",
+             ground_truth=sample.ground_truth
+         )
+
+ # Train your agent
+ samples = [
+     Sample(question="What is 2+2?", ground_truth="4"),
+     Sample(question="Capital of France?", ground_truth="Paris"),
+ ]
+
+ results = adapter.run(samples, SimpleEnv(), epochs=1)
+ print(f"Agent learned {len(adapter.playbook.bullets())} strategies!")
+ ```
+
+ ## How It Works
+
+ ACE uses three AI "roles" that work together to help your agent improve:
+
+ 1. **🎯 Generator** - Tries to solve tasks using the current playbook
+ 2. **🔍 Reflector** - Analyzes what went wrong (or right)
+ 3. **📝 Curator** - Updates the playbook with new strategies
+
+ Think of it like a sports team reviewing game footage to get better!
+
+ ## Examples
+
+ ### Simple Q&A Agent
+ ```bash
+ python examples/simple_ace_example.py
+ ```
+
+ ### Advanced Examples with Different LLMs
+ ```bash
+ python examples/quickstart_litellm.py
+ ```
+
+ Check out the `examples/` folder for more!
+
+ ## Supported LLM Providers
+
+ ACE works with **100+ LLM providers** through LiteLLM:
+
+ - **OpenAI** - GPT-4, GPT-3.5-turbo
+ - **Anthropic** - Claude 3 (Opus, Sonnet, Haiku)
+ - **Google** - Gemini Pro, PaLM
+ - **Cohere** - Command models
+ - **Local Models** - Ollama, Transformers
+ - **And many more!**
+
+ Just change the model name:
+ ```python
+ # OpenAI
+ client = LiteLLMClient(model="gpt-4")
+
+ # Anthropic Claude
+ client = LiteLLMClient(model="claude-3-sonnet-20240229")
+
+ # Google Gemini
+ client = LiteLLMClient(model="gemini-pro")
+
+ # With fallbacks for reliability
+ client = LiteLLMClient(
+     model="gpt-4",
+     fallbacks=["claude-3-haiku", "gpt-3.5-turbo"]
+ )
+ ```
+
+ ## Key Features
+
+ - ✅ **Self-Improving** - Agents learn from experience and build knowledge
+ - ✅ **Provider Agnostic** - Switch LLMs with one line of code
+ - ✅ **Production Ready** - Automatic retries, fallbacks, and error handling
+ - ✅ **Cost Efficient** - Track costs and use cheaper models as fallbacks
+ - ✅ **Async Support** - Built for high-performance applications
+ - ✅ **Fully Typed** - Great IDE support and type safety
+
+ ## Advanced Usage
+
+ ### Online Learning (Learn While Running)
+ ```python
+ from ace import OnlineAdapter
+
+ # Agent improves while processing real tasks
+ adapter = OnlineAdapter(
+     playbook=existing_playbook,  # Can start with existing knowledge
+     generator=Generator(client),
+     reflector=Reflector(client),
+     curator=Curator(client)
+ )
+
+ # Process tasks one by one, learning from each
+ for task in real_world_tasks:
+     result = adapter.process(task, environment)
+     # Agent automatically updates its strategies
+ ```
+
+ ### Custom Task Environments
+ ```python
+ class CodeTestingEnv(TaskEnvironment):
+     def evaluate(self, sample, output):
+         # Run the generated code
+         test_passed = run_tests(output.final_answer)
+
+         return EnvironmentResult(
+             feedback=f"Tests {'passed' if test_passed else 'failed'}",
+             ground_truth=sample.ground_truth,
+             metrics={"pass_rate": 1.0 if test_passed else 0.0}
+         )
+ ```
+
+ ### Streaming Responses
+ ```python
+ # Get responses token by token
+ for chunk in client.complete_with_stream("Write a story"):
+     print(chunk, end="", flush=True)
+ ```
+
+ ### Async Operations
+ ```python
+ import asyncio
+
+ async def main():
+     response = await client.acomplete("Solve this problem...")
+     print(response.text)
+
+ asyncio.run(main())
+ ```
+
+ ## Architecture
+
+ ACE implements the Agentic Context Engineering method from the research paper:
+
+ - **Playbook**: A structured memory that stores successful strategies
+ - **Bullets**: Individual strategies with helpful/harmful counters
+ - **Delta Operations**: Incremental updates that preserve knowledge
+ - **Three Roles**: Generator, Reflector, and Curator working together
+
+ The framework prevents "context collapse" - a common problem where agents forget important information over time.
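
The playbook/bullet/delta design above can be sketched with a toy structure (illustrative only; `MiniPlaybook`, `Bullet`, and `apply_delta` are hypothetical names for this sketch, not the library's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Bullet:
    """A single strategy, with counters the Reflector can vote on."""
    text: str
    helpful: int = 0
    harmful: int = 0

@dataclass
class MiniPlaybook:
    """Toy playbook updated by incremental deltas, never rewritten wholesale."""
    bullets: dict = field(default_factory=dict)

    def apply_delta(self, op, key, text=""):
        # Each delta touches exactly one bullet
        if op == "add":
            self.bullets[key] = Bullet(text)
        elif op == "upvote":
            self.bullets[key].helpful += 1
        elif op == "downvote":
            self.bullets[key].harmful += 1
        elif op == "remove":
            self.bullets.pop(key, None)

pb = MiniPlaybook()
pb.apply_delta("add", "units", "Double-check units before giving a numeric answer")
pb.apply_delta("upvote", "units")
print(pb.bullets["units"].helpful)  # 1
```

Because each update edits one bullet at a time, earlier strategies survive later edits, which is the property that guards against context collapse.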
+
+ ## Repository Structure
+
+ ```
+ ace/
+ ├── ace/               # Core library
+ │   ├── playbook.py    # Strategy storage
+ │   ├── roles.py       # Generator, Reflector, Curator
+ │   ├── adaptation.py  # Training loops
+ │   └── llm_providers/ # LLM integrations
+ ├── examples/          # Ready-to-run examples
+ ├── tests/             # Unit tests
+ └── docs/              # Documentation
+ ```
+
+ ## Contributing
+
+ We welcome contributions! Feel free to:
+ - 🐛 Report bugs
+ - 💡 Suggest features
+ - 🔧 Submit PRs
+ - 📚 Improve documentation
+
+ ## Citation
+
+ If you use ACE in your research or project, please cite the original papers:
+
+ ### ACE Paper (Primary Reference)
+ ```bibtex
+ @article{zhang2025ace,
+   title={Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models},
+   author={Zhang, Qizheng and Hu, Changran and Upasani, Shubhangi and Ma, Boyuan and Hong, Fenglu and
+           Kamanuru, Vamsidhar and Rainton, Jay and Wu, Chen and Ji, Mengmeng and Li, Hanchen and
+           Thakker, Urmish and Zou, James and Olukotun, Kunle},
+   journal={arXiv preprint arXiv:2510.04618},
+   year={2025}
+ }
+ ```
+
+ ### Dynamic Cheatsheet (Foundation Work)
+ ACE builds upon the adaptive memory concepts from Dynamic Cheatsheet:
+
+ ```bibtex
+ @article{suzgun2025dynamiccheatsheet,
+   title={Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory},
+   author={Suzgun, Mirac and Yuksekgonul, Mert and Bianchi, Federico and Jurafsky, Dan and Zou, James},
+   year={2025},
+   eprint={2504.07952},
+   archivePrefix={arXiv},
+   primaryClass={cs.LG},
+   url={https://arxiv.org/abs/2504.07952}
+ }
+ ```
+
+ ### This Implementation
+ If you use this specific implementation, you can also reference:
+
+ ```
+ This repository: https://github.com/Kayba-ai/agentic-context-engine
+ PyPI package: https://pypi.org/project/ace-framework/
+ Based on the open reproduction at: https://github.com/sci-m-wang/ACE-open
+ ```
+
+ ## License
+
+ MIT License - see [LICENSE](LICENSE) file for details.
+
+ ---
+
+ **Note**: This is an independent implementation based on the ACE paper (arXiv:2510.04618) and builds upon concepts from Dynamic Cheatsheet. For the original reproduction scaffold, see [sci-m-wang/ACE-open](https://github.com/sci-m-wang/ACE-open).
+
+ Made with ❤️ by [Kayba](https://kayba.ai) and the open-source community