memorysafe-0.1.0.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,17 @@
+ # Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ Copyright 2026 MemorySafe Labs
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
@@ -0,0 +1,11 @@
+ Metadata-Version: 2.4
+ Name: memorysafe
+ Version: 0.1.0
+ Summary: MemorySafe: MVI-driven memory governance for continual learning systems.
+ Author: Carla P. Centeno
+ License: MIT
+ Requires-Python: >=3.10
+ License-File: LICENSE
+ Requires-Dist: mvi-metrics
+ Requires-Dist: numpy
+ Dynamic: license-file
@@ -0,0 +1,186 @@
+ # MemorySafe Labs
+ **Memory Governance for Continual Learning Systems**
+
+ ## 🚀 Public Demo (Colab)
+
+ Run the public MemorySafe demo, which compares the MemorySafe-Taste buffer against a FIFO baseline:
+
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/MemorySafe-Labs/memorysafe/blob/main/benchmarks/Taste_Demo_MemorySafe_Taste_vs_FIFO_%20(1).ipynb)
+
+
+ MemorySafe is a memory governance framework for continual learning systems, designed to prevent safety-critical information from being forgotten under memory and compute constraints.
+
+ Rather than modifying how models learn, MemorySafe governs **what they retain**.
+
+ It operates as a policy-level decision layer for memory, acting as an intelligent governor that controls data retention independently of the learning algorithm.
+
+ Use cases include medical AI, edge systems, fraud detection, robotics, and privacy-aware AI.
+
+ ---
+
+ ## Problem
+
+ Modern continual learning systems implicitly conflate **exposure with importance**.
+
+ Under standard replay strategies:
+ - frequent samples dominate memory,
+ - rare but critical events are overwritten,
+ - long-term reliability degrades.
+
+ This is especially harmful in:
+ - medical AI,
+ - edge systems,
+ - fraud detection,
+ - safety-critical robotics.
+
+ MemorySafe reframes memory as a **resource allocation and lifecycle management problem**, explicitly separating:
+
+ > **Risk ≠ Value ≠ Decision**
+
+ ---
+
+ ## Core Concepts
+
+ ### 1. Memory Vulnerability Index (MVI)
+ MVI estimates how likely a memory is to be forgotten under future learning pressure.
+
+ It captures:
+ - interference from new tasks,
+ - sensitivity to gradient updates,
+ - temporal competition effects.
+
+ MVI is:
+ - predictive (not retrospective),
+ - continuous and interpretable,
+ - model-agnostic.
+
+ ### 2. Memory Relevance (Value)
+ Relevance estimates how valuable a memory is, independently of its vulnerability.
+
+ Guiding principles:
+ - repetition ≠ importance,
+ - rare but salient events retain value,
+ - relevance decays over time.
+
+ ### 3. ProtectScore (Decision Signal)
+ ProtectScore combines MVI and Relevance into a deterministic decision signal that governs:
+ - protection,
+ - consolidation,
+ - eviction under fixed capacity.
+
+ This treats forgetting as an **active, functional choice**, not a system failure.
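+
+ The MVI and Relevance estimators themselves are proprietary and not included in this public package, but as a rough illustration the ProtectScore combination can be pictured as a deterministic weighted blend (the function, names, and weight below are placeholders, not the shipped policy):
+
+ ```python
+ def protect_score(mvi: float, relevance: float, alpha: float = 0.5) -> float:
+     """Illustrative only: blend vulnerability (MVI) and value (Relevance)
+     into a single deterministic decision signal in [0, 1]."""
+     score = alpha * mvi + (1.0 - alpha) * relevance
+     return min(max(score, 0.0), 1.0)
+ ```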
+
+ ---
+
+ ## What MemorySafe Is (and Is Not)
+
+ **MemorySafe is:**
+ - a memory governance layer,
+ - compatible with any learning algorithm,
+ - model-agnostic and pluggable,
+ - interpretable and low-overhead.
+
+ **MemorySafe is not:**
+ - a replacement for training,
+ - a full continual learning algorithm,
+ - a benchmark-optimized model.
+
+ It governs **memory decisions**, not learning dynamics.
+
+ ---
+
+ ## Architecture
+
+ MemorySafe acts as a policy layer on top of replay buffers or memory modules.
+
+ It maintains per-memory attributes (sketched below):
+ - task_id
+ - MVI
+ - relevance
+ - protect_score
+ - replay_count
+ - protected flag
+
+ Guarantees:
+ - no gradients inside memory logic,
+ - no dataset-specific heuristics,
+ - deterministic and auditable behavior.
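+
+ A minimal sketch of such a per-memory record, using only the attributes listed above (field names are illustrative; the public demo buffer's `MemoryItem` in `buffer.py` tracks a simplified subset):
+
+ ```python
+ from dataclasses import dataclass
+
+ @dataclass
+ class MemoryRecord:
+     task_id: int          # task the sample came from
+     mvi: float            # vulnerability: likelihood of being forgotten
+     relevance: float      # value, independent of vulnerability
+     protect_score: float  # deterministic decision signal derived from the two
+     replay_count: int     # how often the sample has been replayed
+     protected: bool       # currently shielded from eviction
+ ```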
+
+ ---
+
+ ## Generalization & Validation
+
+ A single, unchanged MemorySafe policy was evaluated across heterogeneous continual learning benchmarks:
+
+ - MNIST
+ - Fashion-MNIST
+ - CIFAR-10
+ - CIFAR-100
+ - Omniglot
+ - Permuted MNIST
+ - PneumoniaMNIST (medical imaging)
+
+ Without dataset-specific tuning, MemorySafe demonstrated:
+ - consistent Task-0 protection,
+ - strong rare-event retention,
+ - stability across increasing task difficulty.
+
+ These results indicate **zero-shot generalization of a memory allocation policy**.
+
+ ---
+
+ ## Use Cases
+
+ - **Medical AI**
+   Protection of rare pathologies and safety-critical cases.
+
+ - **Edge AI (Jetson / embedded)**
+   Memory governance under tight RAM and compute budgets.
+
+ - **Fraud Detection**
+   Retention of delayed or rare anomalies.
+
+ - **Robotics**
+   Preservation of critical failures and safety events.
+
+ - **Privacy-Aware AI**
+   Predictive forgetting and sensitive data governance (MVI-P).
+
+ ---
+
+ ## Integration
+
+ MemorySafe is compatible with:
+
+ - Experience Replay
+ - GEM / A-GEM
+ - PackNet
+ - Progressive Neural Networks
+ - Custom replay buffers
+
+ It can be integrated as:
+ - a replay gate (see the sketch below),
+ - an eviction governor,
+ - a diagnostic layer for memory risk.
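+
+ For example, the public demo buffer shipped in this package (`TasteMemorySafeBuffer`, a non-proprietary stand-in for the full policy) can act as a simple replay gate inside a training loop; the toy model, random data, and optimizer below are stand-ins, not part of the package:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+ from memorysafe import TasteMemorySafeBuffer
+
+ # Toy setup purely for illustration: a linear classifier on random data.
+ model = torch.nn.Linear(32, 10)
+ optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
+ buffer = TasteMemorySafeBuffer(capacity=300)
+
+ for step in range(100):
+     x = torch.randn(16, 32)
+     y = torch.randint(0, 10, (16,))
+
+     logits = model(x)
+     loss = F.cross_entropy(logits, y)
+
+     # Score the incoming batch and let the buffer govern retention/eviction.
+     score = buffer.compute_protect_score(y, logits)
+     buffer.add_batch(x, y, score)
+
+     # Replay governed memories alongside the current batch.
+     replay = buffer.sample(n=64)
+     if replay is not None:
+         rx, ry = replay
+         loss = loss + F.cross_entropy(model(rx), ry)
+
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+
+ print(buffer.stats())
+ ```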
+
+ ---
+
+ ## Technology Stack
+
+ - PyTorch
+ - CUDA / GPU-accelerated training
+ - Designed for integration with modern ML pipelines
+ - Research prototype (Alpha)
+
+ ---
+
+ ## One-Sentence Essence
+
+ **MemorySafe treats AI memory as a governed resource, separating vulnerability from value to enable intentional, interpretable, and generalizable memory decisions under real-world constraints.**
+
+ ## License
+
+ MemorySafe is released under the Apache 2.0 License.
+
+ This allows free use, modification, and distribution, including for commercial purposes.
+ For enterprise support, integrations, or commercial partnerships, contact MemorySafe Labs.
@@ -0,0 +1,8 @@
+ [project]
+ name = "memorysafe"
+ version = "0.1.0"
+ description = "MemorySafe: MVI-driven memory governance for continual learning systems."
+ authors = [{name = "Carla P. Centeno"}]
+ dependencies = ["mvi-metrics", "numpy"]
+ requires-python = ">=3.10"
+ license = {text = "MIT"}
@@ -0,0 +1,4 @@
+ [egg_info]
+ tag_build =
+ tag_date = 0
+
@@ -0,0 +1,5 @@
+ __version__ = "0.1.0"
+
+ from .buffer import TasteMemorySafeBuffer
+
+ __all__ = ["__version__", "TasteMemorySafeBuffer"]
@@ -0,0 +1,91 @@
+ from dataclasses import dataclass
+ from typing import List, Optional, Tuple, Dict
+ import numpy as np
+ import torch
+ import torch.nn.functional as F
+
+
+ @dataclass
+ class MemoryItem:
+     x: torch.Tensor
+     y: int
+     protect_score: float
+     seen_step: int
+
+
+ class TasteMemorySafeBuffer:
+     """
+     Public demo buffer (non-IP). Not the proprietary MemorySafe policy/MVI.
+     Keeps top protect_score items under fixed capacity and replays samples.
+     """
+
+     def __init__(self, capacity: int = 300):
+         self.capacity = int(capacity)
+         self.items: List[MemoryItem] = []
+         self.step = 0
+
+     @torch.no_grad()
+     def compute_protect_score(self, y: torch.Tensor, logits: torch.Tensor) -> torch.Tensor:
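+         """Heuristic demo score in [0, 1]: 0.6 * batch-normalized loss proxy
+         (-log p_true) + 0.4 * inverted top-2 probability margin; higher means
+         the sample is harder / less confident and thus more likely to be kept."""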
+         probs = F.softmax(logits, dim=1)
+         p_true = probs.gather(1, y.view(-1, 1)).squeeze(1)
+
+         loss_proxy = (-torch.log(p_true + 1e-8))
+         top2 = torch.topk(probs, k=2, dim=1).values
+         margin = (top2[:, 0] - top2[:, 1]).clamp(min=0.0)
+
+         loss_n = (loss_proxy / (loss_proxy.max() + 1e-8)).clamp(0, 1)
+         margin_n = (1.0 - (margin / (margin.max() + 1e-8))).clamp(0, 1)
+
+         score = 0.6 * loss_n + 0.4 * margin_n
+         return score.clamp(0, 1)
+
+     def add_batch(self, x: torch.Tensor, y: torch.Tensor, score: torch.Tensor) -> None:
+         x = x.detach().cpu()
+         y = y.detach().cpu()
+         score = score.detach().cpu()
+
+         for i in range(x.size(0)):
+             self.items.append(
+                 MemoryItem(
+                     x=x[i],
+                     y=int(y[i]),
+                     protect_score=float(score[i].item()),
+                     seen_step=self.step,
+                 )
+             )
+         self.step += 1
+
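+         # Evict: when over capacity, keep only the highest-protect_score items.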
+         if len(self.items) > self.capacity:
+             self.items.sort(key=lambda it: it.protect_score, reverse=True)
+             self.items = self.items[: self.capacity]
+
+     def sample(
+         self,
+         n: int = 64,
+         device: Optional[torch.device] = None,
+     ) -> Optional[Tuple[torch.Tensor, torch.Tensor]]:
+         if not self.items:
+             return None
+
+         n = min(int(n), len(self.items))
+         idx = np.random.choice(len(self.items), size=n, replace=False)
+
+         xs = torch.stack([self.items[i].x for i in idx])
+         ys = torch.tensor([self.items[i].y for i in idx], dtype=torch.long)
+
+         if device is not None:
+             xs = xs.to(device)
+             ys = ys.to(device)
+
+         return xs, ys
+
+     def stats(self) -> Dict[str, float]:
+         if not self.items:
+             return {"size": 0.0, "avg_score": 0.0, "p90_score": 0.0}
+
+         scores = np.array([it.protect_score for it in self.items], dtype=float)
+         return {
+             "size": float(len(self.items)),
+             "avg_score": float(scores.mean()),
+             "p90_score": float(np.percentile(scores, 90)),
+         }
File without changes
@@ -0,0 +1,11 @@
+ Metadata-Version: 2.4
+ Name: memorysafe
+ Version: 0.1.0
+ Summary: MemorySafe: MVI-driven memory governance for continual learning systems.
+ Author: Carla P. Centeno
+ License: MIT
+ Requires-Python: >=3.10
+ License-File: LICENSE
+ Requires-Dist: mvi-metrics
+ Requires-Dist: numpy
+ Dynamic: license-file
@@ -0,0 +1,11 @@
+ LICENSE
+ README.md
+ pyproject.toml
+ src/memorysafe/__init__.py
+ src/memorysafe/buffer.py
+ src/memorysafe/demo.py
+ src/memorysafe.egg-info/PKG-INFO
+ src/memorysafe.egg-info/SOURCES.txt
+ src/memorysafe.egg-info/dependency_links.txt
+ src/memorysafe.egg-info/requires.txt
+ src/memorysafe.egg-info/top_level.txt
@@ -0,0 +1,2 @@
+ mvi-metrics
+ numpy
@@ -0,0 +1 @@
+ memorysafe