mcal_ai-0.1.0-py3-none-any.whl → mcal_ai-0.2.0-py3-none-any.whl
This diff compares the contents of two publicly released versions of the package as they appear in their public registry, and is provided for informational purposes only.
- mcal_ai-0.2.0.dist-info/METADATA +168 -0
- {mcal_ai-0.1.0.dist-info → mcal_ai-0.2.0.dist-info}/RECORD +6 -6
- mcal_ai-0.1.0.dist-info/METADATA +0 -319
- {mcal_ai-0.1.0.dist-info → mcal_ai-0.2.0.dist-info}/WHEEL +0 -0
- {mcal_ai-0.1.0.dist-info → mcal_ai-0.2.0.dist-info}/entry_points.txt +0 -0
- {mcal_ai-0.1.0.dist-info → mcal_ai-0.2.0.dist-info}/licenses/LICENSE +0 -0
- {mcal_ai-0.1.0.dist-info → mcal_ai-0.2.0.dist-info}/top_level.txt +0 -0
mcal_ai-0.2.0.dist-info/METADATA ADDED
@@ -0,0 +1,168 @@
+ Metadata-Version: 2.4
+ Name: mcal-ai
+ Version: 0.2.0
+ Summary: Memory-Context Alignment Layer for Goal-Coherent AI Agents
+ Author: MCAL Team
+ License: MIT
+ Project-URL: Homepage, https://github.com/Shivakoreddi/mcal-ai
+ Project-URL: Documentation, https://github.com/Shivakoreddi/mcal-ai#readme
+ Project-URL: Repository, https://github.com/Shivakoreddi/mcal-ai.git
+ Project-URL: Issues, https://github.com/Shivakoreddi/mcal-ai/issues
+ Keywords: llm,memory,agents,context,ai,nlp
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Intended Audience :: Developers
+ Classifier: Intended Audience :: Science/Research
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Requires-Python: >=3.11
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: anthropic>=0.18.0
+ Requires-Dist: openai>=1.0.0
+ Requires-Dist: boto3>=1.28.0
+ Requires-Dist: pydantic>=2.0.0
+ Requires-Dist: numpy>=1.24.0
+ Requires-Dist: faiss-cpu>=1.7.4
+ Requires-Dist: sentence-transformers>=2.2.0
+ Requires-Dist: sqlalchemy>=2.0.0
+ Requires-Dist: aiosqlite>=0.19.0
+ Requires-Dist: tiktoken>=0.5.0
+ Requires-Dist: tenacity>=8.2.0
+ Requires-Dist: rich>=13.0.0
+ Requires-Dist: python-dotenv>=1.0.0
+ Provides-Extra: langgraph
+ Requires-Dist: langgraph>=0.0.40; extra == "langgraph"
+ Requires-Dist: langchain-core>=0.1.0; extra == "langgraph"
+ Provides-Extra: crewai
+ Requires-Dist: crewai>=0.28.0; extra == "crewai"
+ Provides-Extra: autogen
+ Requires-Dist: pyautogen>=0.2.0; extra == "autogen"
+ Provides-Extra: langchain
+ Requires-Dist: langchain>=0.1.0; extra == "langchain"
+ Requires-Dist: langchain-core>=0.1.0; extra == "langchain"
+ Provides-Extra: integrations
+ Requires-Dist: langgraph>=0.0.40; extra == "integrations"
+ Requires-Dist: langchain-core>=0.1.0; extra == "integrations"
+ Requires-Dist: crewai>=0.28.0; extra == "integrations"
+ Requires-Dist: pyautogen>=0.2.0; extra == "integrations"
+ Provides-Extra: mem0
+ Requires-Dist: mem0ai>=0.1.0; extra == "mem0"
+ Provides-Extra: dev
+ Requires-Dist: pytest>=7.4.0; extra == "dev"
+ Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
+ Requires-Dist: pytest-cov>=4.1.0; extra == "dev"
+ Requires-Dist: black>=23.0.0; extra == "dev"
+ Requires-Dist: ruff>=0.1.0; extra == "dev"
+ Requires-Dist: mypy>=1.5.0; extra == "dev"
+ Requires-Dist: pre-commit>=3.4.0; extra == "dev"
+ Requires-Dist: ipykernel>=6.25.0; extra == "dev"
+ Requires-Dist: jupyter>=1.0.0; extra == "dev"
+ Provides-Extra: eval
+ Requires-Dist: pandas>=2.0.0; extra == "eval"
+ Requires-Dist: matplotlib>=3.7.0; extra == "eval"
+ Requires-Dist: seaborn>=0.12.0; extra == "eval"
+ Requires-Dist: wandb>=0.15.0; extra == "eval"
+ Requires-Dist: scipy>=1.11.0; extra == "eval"
+ Provides-Extra: all
+ Requires-Dist: mcal[dev,eval,integrations]; extra == "all"
+ Dynamic: license-file
+
+ # MCAL: Memory-Context Alignment Layer
+
+ > **Intent-Preserving Memory for Goal-Coherent AI Agents**
+
+ [](https://pypi.org/project/mcal-ai/)
+ [](https://www.python.org/downloads/)
+ [](https://opensource.org/licenses/MIT)
+
+ ## Why MCAL?
+
+ Current AI memory systems store **facts** but lose **meaning**:
+
+ | What's Stored | What's Lost |
+ |---------------|-------------|
+ | "User chose PostgreSQL" | **WHY** they chose it over MongoDB |
+ | "User wants to visit Japan" | **HOW** this fits their overall travel goals |
+
+ MCAL preserves the **reasoning behind decisions**, not just the conclusions.
+
+ ## Installation
+
+ ```bash
+ pip install mcal-ai
+ ```
+
+ **Framework integrations:**
+ ```bash
+ pip install mcal-ai-langgraph   # LangGraph integration
+ pip install mcal-ai-crewai      # CrewAI integration
+ pip install mcal-ai-autogen     # AutoGen integration
+ ```
+
+ ## Quick Start
+
+ ```python
+ import asyncio
+ from mcal import MCAL
+
+ async def main():
+     mcal = MCAL(
+         llm_provider="anthropic",      # or "openai", "bedrock"
+         embedding_provider="openai",   # or "bedrock"
+     )
+
+     messages = [
+         {"role": "user", "content": "I'm building a fraud detection pipeline"},
+         {"role": "assistant", "content": "Let's start with data ingestion..."},
+         {"role": "user", "content": "I chose PostgreSQL over MongoDB for storage"},
+     ]
+
+     # Extract goals, decisions, and reasoning
+     result = await mcal.add(messages, user_id="user_123")
+     print(f"Extracted {result.unified_graph.node_count} nodes")
+
+     # Search with goal-aware retrieval
+     results = await mcal.search("What database?", user_id="user_123")
+
+     # Get context for LLM prompts
+     context = mcal.get_context("What's next?", user_id="user_123")
+
+ asyncio.run(main())
+ ```
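A minimal sketch of how the context returned by `get_context` might be wired into an LLM call — assuming it returns a formatted string suitable for prompt injection; the helper name, model name, and system-prompt framing below are illustrative, not part of the package:

```python
import anthropic
from mcal import MCAL

def answer_with_memory(question: str, user_id: str) -> str:
    mcal = MCAL(llm_provider="anthropic", embedding_provider="openai")

    # Assumption: get_context returns a formatted string of goals, decisions,
    # and facts (the Quick Start calls it without await, so it is used synchronously here).
    context = mcal.get_context(question, user_id=user_id)

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=512,
        system=f"Relevant memory for this user:\n{context}",
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text

print(answer_with_memory("What's next for the fraud pipeline?", "user_123"))
```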
+
+ ## Key Features
+
+ - **Intent Graph** - Hierarchical goal structures (Mission → Goal → Task)
+ - **Reasoning Chains** - Store WHY decisions were made, not just conclusions
+ - **Goal-Aware Retrieval** - Retrieve based on objective alignment, not just similarity
+ - **Multi-Provider** - Works with Anthropic, OpenAI, and AWS Bedrock
+ - **Standalone** - No external dependencies, JSON file persistence
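The Intent Graph bullet above describes a Mission → Goal → Task hierarchy. A minimal sketch of that shape — hypothetical class and field names, not MCAL's actual node schema:

```python
from dataclasses import dataclass, field

@dataclass
class IntentNode:
    """One node in a Mission -> Goal -> Task hierarchy (illustrative only)."""
    kind: str                      # "mission" | "goal" | "task"
    description: str
    status: str = "pending"        # e.g. "pending", "active", "completed"
    rationale: str | None = None   # WHY this node exists, echoing the Reasoning Chains idea
    children: list["IntentNode"] = field(default_factory=list)

pipeline = IntentNode("mission", "Ship the fraud detection pipeline", status="active", children=[
    IntentNode("goal", "Pick a data store", status="completed",
               rationale="PostgreSQL over MongoDB: structured, relational fraud data"),
    IntentNode("goal", "Build ingestion", status="active", children=[
        IntentNode("task", "Define schemas"),
        IntentNode("task", "Write loaders"),
    ]),
])
```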
+
+ ## Environment Variables
+
+ ```bash
+ # Choose your LLM provider
+ ANTHROPIC_API_KEY=sk-ant-...   # For Claude
+ OPENAI_API_KEY=sk-...          # For GPT-4 / embeddings
+
+ # Optional: AWS Bedrock
+ AWS_ACCESS_KEY_ID=...
+ AWS_SECRET_ACCESS_KEY=...
+ AWS_DEFAULT_REGION=us-east-1
+ ```
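For local development these variables can be loaded from a `.env` file with `python-dotenv`, which is already a declared dependency. A minimal sketch; the constructor arguments simply mirror the Quick Start above:

```python
import os

from dotenv import load_dotenv
from mcal import MCAL

load_dotenv()  # copies values from ./.env into os.environ

# Pick an LLM provider based on which key is present; the provider SDKs
# read their API keys from the environment themselves.
llm = "anthropic" if os.getenv("ANTHROPIC_API_KEY") else "openai"

mcal = MCAL(
    llm_provider=llm,             # "anthropic", "openai", or "bedrock"
    embedding_provider="openai",  # or "bedrock"
)
```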
+
+ ## Documentation
+
+ - [GitHub Repository](https://github.com/Shivakoreddi/mcal-ai)
+ - [Design Document](https://github.com/Shivakoreddi/mcal-ai/blob/main/docs/MCAL_DESIGN.md)
+
+ ## License
+
+ MIT License - see [LICENSE](https://github.com/Shivakoreddi/mcal-ai/blob/main/LICENSE) for details.
+
+ ## Author
+
+ Created by [Shiva Koreddi](https://github.com/Shivakoreddi)
{mcal_ai-0.1.0.dist-info → mcal_ai-0.2.0.dist-info}/RECORD
@@ -24,9 +24,9 @@ mcal/integrations/langchain.py,sha256=J_xOUIUoWYcTA5iFiwyt9rTLt1R4lltAJ3hj957B4V
  mcal/integrations/langgraph.py,sha256=nIE7U9aQQLgG_qRQmZuy2ZOJbg_ZRCIbmYOWc0bZklA,1257
  mcal/providers/bedrock.py,sha256=zUI07KouzOAXfMgDwloJuor0COElIBrE5_PVtjn6g7E,8632
  mcal/storage/__init__.py,sha256=9CsHK4EtBOoSk04EcOFzCAP8LjvqX3K2ZE-hxA96jPU,29
- mcal_ai-0.
- mcal_ai-0.
- mcal_ai-0.
- mcal_ai-0.
- mcal_ai-0.
- mcal_ai-0.
+ mcal_ai-0.2.0.dist-info/licenses/LICENSE,sha256=zdp5kxDzb-kYvBiEZ_h1Hi96z-o6e5oXoXFx2IIefCs,1062
+ mcal_ai-0.2.0.dist-info/METADATA,sha256=TXhbGb66uMq1_5EVuUJ9ZYEGPI3BRU3RpI1zaCrlaa8,5897
+ mcal_ai-0.2.0.dist-info/WHEEL,sha256=wUyA8OaulRlbfwMtmQsvNngGrxQHAvkKcvRmdizlJi0,92
+ mcal_ai-0.2.0.dist-info/entry_points.txt,sha256=ICilsPI5krXkoM9espdv7jtJnRMO2hTdTcwiocNDfJs,39
+ mcal_ai-0.2.0.dist-info/top_level.txt,sha256=aJ6Ay5tUlQgkrlbGXc90wuOWQpncaHNFN036i9hIWj0,5
+ mcal_ai-0.2.0.dist-info/RECORD,,
mcal_ai-0.1.0.dist-info/METADATA DELETED
@@ -1,319 +0,0 @@
- Metadata-Version: 2.4
- Name: mcal-ai
- Version: 0.1.0
- Summary: Memory-Context Alignment Layer for Goal-Coherent AI Agents
- Author: MCAL Team
- License: MIT
- Project-URL: Homepage, https://github.com/Shivakoreddi/mcal-ai
- Project-URL: Documentation, https://github.com/Shivakoreddi/mcal-ai#readme
- Project-URL: Repository, https://github.com/Shivakoreddi/mcal-ai.git
- Project-URL: Issues, https://github.com/Shivakoreddi/mcal-ai/issues
- Keywords: llm,memory,agents,context,ai,nlp
- Classifier: Development Status :: 3 - Alpha
- Classifier: Intended Audience :: Developers
- Classifier: Intended Audience :: Science/Research
- Classifier: License :: OSI Approved :: MIT License
- Classifier: Programming Language :: Python :: 3
- Classifier: Programming Language :: Python :: 3.11
- Classifier: Programming Language :: Python :: 3.12
- Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
- Requires-Python: >=3.11
- Description-Content-Type: text/markdown
- License-File: LICENSE
- Requires-Dist: anthropic>=0.18.0
- Requires-Dist: openai>=1.0.0
- Requires-Dist: boto3>=1.28.0
- Requires-Dist: pydantic>=2.0.0
- Requires-Dist: numpy>=1.24.0
- Requires-Dist: faiss-cpu>=1.7.4
- Requires-Dist: sentence-transformers>=2.2.0
- Requires-Dist: sqlalchemy>=2.0.0
- Requires-Dist: aiosqlite>=0.19.0
- Requires-Dist: tiktoken>=0.5.0
- Requires-Dist: tenacity>=8.2.0
- Requires-Dist: rich>=13.0.0
- Requires-Dist: python-dotenv>=1.0.0
- Provides-Extra: langgraph
- Requires-Dist: langgraph>=0.0.40; extra == "langgraph"
- Requires-Dist: langchain-core>=0.1.0; extra == "langgraph"
- Provides-Extra: crewai
- Requires-Dist: crewai>=0.28.0; extra == "crewai"
- Provides-Extra: autogen
- Requires-Dist: pyautogen>=0.2.0; extra == "autogen"
- Provides-Extra: langchain
- Requires-Dist: langchain>=0.1.0; extra == "langchain"
- Requires-Dist: langchain-core>=0.1.0; extra == "langchain"
- Provides-Extra: integrations
- Requires-Dist: langgraph>=0.0.40; extra == "integrations"
- Requires-Dist: langchain-core>=0.1.0; extra == "integrations"
- Requires-Dist: crewai>=0.28.0; extra == "integrations"
- Requires-Dist: pyautogen>=0.2.0; extra == "integrations"
- Provides-Extra: mem0
- Requires-Dist: mem0ai>=0.1.0; extra == "mem0"
- Provides-Extra: dev
- Requires-Dist: pytest>=7.4.0; extra == "dev"
- Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
- Requires-Dist: pytest-cov>=4.1.0; extra == "dev"
- Requires-Dist: black>=23.0.0; extra == "dev"
- Requires-Dist: ruff>=0.1.0; extra == "dev"
- Requires-Dist: mypy>=1.5.0; extra == "dev"
- Requires-Dist: pre-commit>=3.4.0; extra == "dev"
- Requires-Dist: ipykernel>=6.25.0; extra == "dev"
- Requires-Dist: jupyter>=1.0.0; extra == "dev"
- Provides-Extra: eval
- Requires-Dist: pandas>=2.0.0; extra == "eval"
- Requires-Dist: matplotlib>=3.7.0; extra == "eval"
- Requires-Dist: seaborn>=0.12.0; extra == "eval"
- Requires-Dist: wandb>=0.15.0; extra == "eval"
- Requires-Dist: scipy>=1.11.0; extra == "eval"
- Provides-Extra: all
- Requires-Dist: mcal[dev,eval,integrations]; extra == "all"
- Dynamic: license-file
-
- # MCAL: Memory-Context Alignment Layer
-
- > **Beyond Retrieval:** Intent-Preserving Memory for Goal-Coherent AI Agents
-
- [](https://www.python.org/downloads/)
- [](https://opensource.org/licenses/MIT)
- [](https://arxiv.org/)
- [](docs/MCAL_DESIGN.md)
-
- ## What's New in v2.0
-
- MCAL is now **fully standalone** - no external dependencies on Mem0 or other memory providers. The architecture includes:
-
- - **Built-in Embedding Service** - OpenAI or Bedrock embeddings
- - **Native Vector Search** - Cosine similarity with HNSW-like indexing
- - **Graph Deduplication** - Automatic node merging with similarity detection
- - **JSON Persistence** - Zero-config file-based storage
-
- ## The Problem
-
- Current AI agent memory systems store **facts** but lose **meaning**:
-
- | What's Stored | What's Lost |
- |--------------------------------------|--------------------------------------------------|
- | "User wants to visit Japan" | **WHY** they chose Japan over other destinations |
- | "User booked a hotel in Shibuya" | **WHAT** alternatives were considered (Shinjuku, Ginza, Asakusa) |
- | "User plans to visit Kyoto" | **HOW** this fits into the overall trip plan |
-
- This creates the **Memory-Context Alignment Paradox**: as conversations grow, agents remember *what* was said but forget *why* it mattered.
-
- ## Our Solution: Three Pillars
-
- ### 1. Intent Graph Preservation
- Hierarchical goal structures that persist across sessions:
- ```
- MISSION: Plan a 2-week vacation to Japan
- ├── GOAL: Book travel [✓ COMPLETED]
- │   ├── TASK: Find flights [✓]
- │   └── TASK: Reserve hotels [✓]
- ├── GOAL: Plan activities [ACTIVE]
- │   ├── TASK: Research Tokyo attractions [✓]
- │   ├── TASK: Plan Kyoto day trips [IN PROGRESS]
- │   └── TASK: Book restaurants [PENDING]
- └── GOAL: Pack and prepare [PENDING]
- ```
-
- ### 2. Reasoning Chain Storage
- Preserve **WHY** decisions were made, not just conclusions:
- ```
- Decision: "Stay in Shibuya for Tokyo accommodation"
- ├── Alternatives: [Shinjuku, Ginza, Asakusa]
- ├── Rationale: "Central location, good nightlife, easy metro access"
- ├── Evidence: ["User wants to explore at night", "Prefers walkable areas"]
- └── Trade-offs: ["More expensive but saves daily transit time"]
- ```
-
- ### 3. Goal-Aware Retrieval
- Retrieve based on **objective achievement**, not just similarity:
- ```
- Score = α × semantic_similarity
-       + β × goal_alignment       ← NEW
-       + γ × decision_relevance   ← NEW
-       + δ × recency_decay
- ```
-
- ## Installation
-
- ```bash
- # Install from source (recommended)
- git clone https://github.com/Shivakoreddi/mcla-research.git
- cd mcla-research
- pip install -e .
-
- # Development installation with test dependencies
- pip install -e ".[dev]"
-
- # Optional: Install with legacy Mem0 support
- pip install -e ".[mem0]"
- ```
-
- ### Requirements
- - Python 3.11+
- - `anthropic` - For LLM extraction (Claude)
- - `openai` - For embeddings (optional, can use Bedrock instead)
-
- ## Quick Start
-
- ```python
- import asyncio
- from mcal import MCAL
-
- async def main():
-     # Initialize MCAL (standalone by default)
-     mcal = MCAL(
-         llm_provider="anthropic",      # or "bedrock", "openai"
-         embedding_provider="openai",   # or "bedrock"
-     )
-
-     # Add conversation messages
-     messages = [
-         {"role": "user", "content": "I'm building a fraud detection ML pipeline"},
-         {"role": "assistant", "content": "Great! Let's start with data ingestion..."},
-         {"role": "user", "content": "I chose PostgreSQL over MongoDB for the data store"},
-         {"role": "assistant", "content": "PostgreSQL is a solid choice for structured fraud data..."}
-     ]
-
-     result = await mcal.add(messages, user_id="user_123")
-
-     # Access the unified graph with goals, decisions, and reasoning
-     print(f"Extracted {result.unified_graph.node_count} nodes")
-     print(f"Active goals: {result.unified_graph.get_active_goals()}")
-     print(f"Decisions: {result.unified_graph.get_all_decisions_with_detail()}")
-
-     # Search for relevant context
-     search_results = await mcal.search(
-         query="What database did the user choose?",
-         user_id="user_123"
-     )
-
-     # Get formatted context for LLM
-     context = mcal.get_context(
-         query="What should we focus on next?",
-         user_id="user_123",
-         max_tokens=4000
-     )
-
- asyncio.run(main())
-
- ## Architecture
-
- ```
- ┌─────────────────────────────────────────────────────────────────┐
- │                            MCAL v2.0                            │
- │  ┌───────────────────────────────────────────────────────────┐  │
- │  │                     Application Layer                     │  │
- │  │    mcal.add()   │   mcal.search()   │  mcal.get_context() │  │
- │  └───────────────────────────────────────────────────────────┘  │
- │                               │                                  │
- │  ┌───────────────────────────────────────────────────────────┐  │
- │  │                   Unified Deep Extractor                  │  │
- │  │     Single LLM call extracts: GOALS | DECISIONS | FACTS   │  │
- │  └───────────────────────────────────────────────────────────┘  │
- │                               │                                  │
- │  ┌───────────────────────────────────────────────────────────┐  │
- │  │              Unified Graph (6 Nodes, 13 Edges)             │  │
- │  │    PERSON | THING | CONCEPT | GOAL | DECISION | ACTION    │  │
- │  └───────────────────────────────────────────────────────────┘  │
- │                               │                                  │
- │  ┌───────────────────────────────────────────────────────────┐  │
- │  │                    Standalone Services                    │  │
- │  │   ┌──────────────┐  ┌─────────────┐  ┌─────────────────┐  │  │
- │  │   │  Embeddings  │  │Vector Search│  │  Deduplication  │  │  │
- │  │   │  (OpenAI/    │  │(Cosine Sim) │  │  (Similarity    │  │  │
- │  │   │   Bedrock)   │  │             │  │   Merging)      │  │  │
- │  │   └──────────────┘  └─────────────┘  └─────────────────┘  │  │
- │  └───────────────────────────────────────────────────────────┘  │
- │                               │                                  │
- │  ┌───────────────────────────────────────────────────────────┐  │
- │  │                   JSON File Persistence                   │  │
- │  │            ~/.mcal/users/{user_id}/graph.json             │  │
- │  └───────────────────────────────────────────────────────────┘  │
- └─────────────────────────────────────────────────────────────────┘
- ```
-
- ## Project Structure
-
- ```
- mcla-research/
- ├── src/mcal/
- │   ├── mcal.py                    # Main MCAL class
- │   ├── core/
- │   │   ├── unified_extractor.py   # Single-pass extraction
- │   │   ├── unified_graph.py       # Graph with rich attributes
- │   │   ├── embedding_service.py   # Embedding generation
- │   │   ├── vector_index.py        # Similarity search
- │   │   └── deduplication.py       # Node merging
- │   ├── providers/
- │   │   └── llm_providers.py       # Anthropic, OpenAI, Bedrock
- │   └── storage/
- │       └── sqlite_store.py        # Persistence layer
- ├── experiments/
- ├── data/
- │   ├── synthetic/                 # Generated conversations
- │   └── benchmarks/                # MCAL-Bench dataset
- ├── tests/
- └── docs/
- ```
-
- ## Evaluation: MCAL-Bench
-
- We introduce **MCAL-Bench**, the first benchmark for reasoning preservation and goal coherence:
-
- | Metric | What It Measures |
- |--------|------------------|
- | **RPS** (Reasoning Preservation Score) | Can the system explain WHY a decision was made? |
- | **GCS** (Goal Coherence Score) | Do responses align with user's active objectives? |
- | **TER** (Token Efficiency Ratio) | Quality-per-token vs full context baseline |
-
- ## Results (Preliminary)
-
- | System | RPS | GCS | TER |
- |--------|-----|-----|-----|
- | Full Context | 0.85 | 0.82 | 1.0x |
- | Summarization | 0.45 | 0.58 | 2.1x |
- | Mem0 | 0.52 | 0.61 | 3.2x |
- | **MCAL (Ours)** | **0.78** | **0.79** | **3.8x** |
-
- ## Roadmap
-
- - [x] Problem formulation & research
- - [ ] Week 1: Foundation (baseline + data)
- - [ ] Week 2: Core algorithms
- - [ ] Week 3: Benchmark & evaluation
- - [ ] Week 4: Paper draft
- - [ ] Week 5: Release & arXiv
-
- ## Citation
-
- ```bibtex
- @article{mcal2026,
-   title={MCAL: Memory-Context Alignment for Goal-Coherent AI Agents},
-   author={Koreddi, Shiva},
-   journal={arXiv preprint},
-   year={2026}
- }
- ```
-
- ## License
-
- MIT License - see [LICENSE](LICENSE) for details.
-
- ## Acknowledgments
-
- Built on insights from:
- - [MemGPT](https://github.com/cpacker/MemGPT) - OS-inspired memory hierarchy
- - [Reflexion](https://arxiv.org/abs/2303.11366) - Verbal self-reflection
-
- ---
-
- ## Migration from v1.x
-
- If you were using MCAL with Mem0 backend, see [STANDALONE_MIGRATION.md](docs/STANDALONE_MIGRATION.md) for migration guide.
-
- **Key changes:**
- - `mem0_config` and `mem0_api_key` parameters are deprecated
- - `use_standalone_backend` is deprecated (standalone is now default)
- - Install `mcal[mem0]` for legacy Mem0 support
Files without changes:
- {mcal_ai-0.1.0.dist-info → mcal_ai-0.2.0.dist-info}/WHEEL
- {mcal_ai-0.1.0.dist-info → mcal_ai-0.2.0.dist-info}/entry_points.txt
- {mcal_ai-0.1.0.dist-info → mcal_ai-0.2.0.dist-info}/licenses/LICENSE
- {mcal_ai-0.1.0.dist-info → mcal_ai-0.2.0.dist-info}/top_level.txt