neruva-record 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,184 @@
+ Metadata-Version: 2.4
+ Name: neruva-record
+ Version: 0.1.0
+ Summary: Drop-in auto-record wrapper for the Anthropic Python SDK — every LLM turn captured to Neruva Memory. Recall later via the Neruva MCP / API. Cross-session, cross-vendor, portable.
+ Author-email: Clouthier Simulation Labs <info@neruva.io>
+ License: MIT
+ Project-URL: Homepage, https://neruva.io/
+ Project-URL: Documentation, https://neruva.io/docs/
+ Project-URL: Source, https://github.com/CloutSimLabs/neruva
+ Keywords: neruva,anthropic,claude,agent-memory,auto-record,observability,llm
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Topic :: Software Development :: Libraries
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ Requires-Dist: httpx>=0.27
+ Provides-Extra: anthropic
+ Requires-Dist: anthropic>=0.30; extra == "anthropic"
+
+ # neruva-record
+
+ Drop-in auto-record wrapper for the Anthropic Python SDK. Every
+ `messages.create()` call captures both directions to a Neruva Memory
+ namespace as a side-effect. Stop manually saving things; query the
+ namespace later for "what did this agent do?"
+
+ Pairs with [`@neruva/mcp`](https://www.npmjs.com/package/@neruva/mcp)
+ (captures MCP tool calls). Together: full agent-loop capture.
+
+ ## Install
+
+ ```bash
+ pip install neruva-record anthropic
+ export NERUVA_API_KEY=...  # https://app.neruva.io
+ ```
+
+ ## Use
+
+ ```python
+ import anthropic
+ from neruva_record import auto_record
+
+ client = auto_record(
+     anthropic.Anthropic(),
+     index="brain",      # one per user/account
+     namespace="main",   # one per agent (free-form)
+     ttl_days=30,        # optional: auto-expire records after N days
+ )
+
+ # Drop-in: client behaves identically to bare Anthropic.
+ response = client.messages.create(
+     model="claude-opus-4-7",
+     max_tokens=200,
+     messages=[{"role": "user", "content": "Hi!"}],
+ )
+
+ # Recording happened as a side-effect. Query it later via the MCP
+ # or the REST API.
+ ```
+
+ ## Async
+
+ ```python
+ from anthropic import AsyncAnthropic
+ from neruva_record import auto_record
+
+ client = auto_record(
+     AsyncAnthropic(),
+     index="brain", namespace="main",
+ )
+
+ response = await client.messages.create(
+     model="claude-opus-4-7", max_tokens=200,
+     messages=[{"role": "user", "content": "Hi!"}],
+ )
+ ```
+
+ ## Naming convention
+
+ | Field | What | Example |
+ |---|---|---|
+ | `index` | One per user/account. Agent-type-neutral. | `brain` |
+ | `namespace` | One per agent the user runs. Free-form. | `main`, `support-bot`, `research` |
+
+ ```python
+ # multi-agent setup -- same brain, one namespace per agent
+ support = auto_record(Anthropic(), index="brain", namespace="support-bot")
+ research = auto_record(Anthropic(), index="brain", namespace="research")
+ ops = auto_record(Anthropic(), index="brain", namespace="orchestrator")
+ ```
+
+ ## What gets recorded
+
+ Per LLM turn, one record:
+
+ ```json
+ {
+   "id": "llm-<unix-ms>-<rand>",
+   "text": "USER: <user-message>\n\nASSISTANT: <response>",
+   "metadata": {
+     "kind": "llm_turn",
+     "vendor": "anthropic",
+     "model": "claude-opus-4-7",
+     "stop_reason": "end_turn",
+     "input_tokens": 12,
+     "output_tokens": 87,
+     "latency_ms": 1240,
+     "ts": <unix-ms>,
+     "_auto_expire_at": <unix-ms-or-omitted>
+   }
+ }
+ ```
+
+ User text is truncated at 600 chars, assistant at 1800 chars (for
+ embedding quality and storage cost). The full conversation history
+ isn't stored — only the most recent user message and the response.
+ For full trace capture, also use
+ [`@neruva/mcp`](https://www.npmjs.com/package/@neruva/mcp) which
+ records every tool call alongside this.
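The truncation rule above can be sketched in a few lines. This is a hypothetical re-implementation for illustration (the names `clip` and `record_text` are not part of the package's public API), using the README's stated limits:

```python
USER_LIMIT, ASSISTANT_LIMIT = 600, 1800

def clip(s: str, n: int) -> str:
    # Truncate with a "..." marker once the limit is exceeded.
    return s if len(s) <= n else s[:n] + "..."

def record_text(user: str, assistant: str) -> str:
    # Assemble the stored "text" field: latest user message + response.
    return (
        f"USER: {clip(user, USER_LIMIT)}\n\n"
        f"ASSISTANT: {clip(assistant, ASSISTANT_LIMIT)}"
    )
```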
+
+ ## Querying back
+
+ Use the [Neruva MCP](https://www.npmjs.com/package/@neruva/mcp) or the
+ REST API to query the recorded turns:
+
+ ```bash
+ curl -X POST https://api.neruva.io/v1/indexes/brain/text/query \
+   -H "Api-Key: $NERUVA_API_KEY" \
+   -H "Content-Type: application/json" \
+   -d '{
+     "namespace": "main",
+     "text": "what did I ask about deployment?",
+     "topK": 5,
+     "includeMetadata": true,
+     "filter": { "vendor": { "$eq": "anthropic" } }
+   }'
+ ```
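The same query can be issued from Python. A minimal sketch, assuming the endpoint and payload shape shown in the curl example (the helper name `build_query` is hypothetical; `httpx` is the package's own dependency):

```python
from urllib.parse import quote

def build_query(index: str, namespace: str, text: str, top_k: int = 5):
    # Build the URL and JSON body for the query endpoint above.
    # Sending is then: httpx.post(url, json=payload, headers={"Api-Key": key})
    url = f"https://api.neruva.io/v1/indexes/{quote(index, safe='')}/text/query"
    payload = {
        "namespace": namespace,
        "text": text,
        "topK": top_k,
        "includeMetadata": True,
    }
    return url, payload
```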
+
+ ## Failure mode
+
+ Recording is **fire-and-forget**. If the recording POST fails for any
+ reason, your `messages.create()` call returns normally — the recorder
+ swallows its own errors. You will never get fewer responses than you
+ asked for; you may get fewer records than you'd see in a perfect world.
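The pattern behind this guarantee is simple to sketch. A minimal illustration (the function names here are hypothetical, not package API):

```python
def record_safely(post, record):
    # Fire-and-forget: any failure in the recording path is swallowed,
    # so the caller's LLM response is never affected.
    try:
        post(record)
        return True
    except Exception:
        return False

def failing_post(record):
    # Stand-in for a recording backend that is down.
    raise RuntimeError("recording backend down")
```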
+
+ ## TTL / decay
+
+ Set `ttl_days=N` and each record carries `_auto_expire_at = ts + N
+ days`. The Neruva server-side decay sweep automatically removes
+ records past their expiry on the next query/upsert touch (max once
+ per 15 minutes per namespace). No background job to run.
+
+ ```python
+ client = auto_record(
+     anthropic.Anthropic(),
+     index="brain", namespace="main",
+     ttl_days=7,  # only the last week is retained
+ )
+ ```
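The expiry stamp is plain arithmetic in unix milliseconds. A sketch of the rule stated above (`auto_expire_at` is an illustrative name, not package API):

```python
MS_PER_DAY = 86_400_000  # 24 * 60 * 60 * 1000

def auto_expire_at(ts_ms: int, ttl_days: int) -> int:
    # _auto_expire_at = record timestamp + TTL, all in unix milliseconds.
    return ts_ms + ttl_days * MS_PER_DAY
```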
+
+ ## Roadmap
+
+ - v0.1: Anthropic Python (sync + async, non-streaming)
+ - v0.2: Anthropic streaming + tool-use turn capture
+ - v0.3: OpenAI Python
+ - v0.4: Google Gemini Python
+ - v0.5: TypeScript versions of all the above
+ - v1.0: Stable API, security audit
+
+ ## Related
+
+ - [`@neruva/mcp`](https://www.npmjs.com/package/@neruva/mcp) — MCP tool-call auto-record
+ - [`neruva-mcp`](https://pypi.org/project/neruva-mcp/) — Python MCP server
+ - [`neruva.io/docs`](https://neruva.io/docs/) — full documentation
+
+ ## License
+
+ MIT — Clouthier Simulation Labs.
@@ -0,0 +1,51 @@
+ [build-system]
+ requires = ["setuptools>=68", "wheel"]
+ build-backend = "setuptools.build_meta"
+
+ [project]
+ name = "neruva-record"
+ version = "0.1.0"
+ description = "Drop-in auto-record wrapper for the Anthropic Python SDK — every LLM turn captured to Neruva Memory. Recall later via the Neruva MCP / API. Cross-session, cross-vendor, portable."
+ readme = "README.md"
+ requires-python = ">=3.10"
+ license = { text = "MIT" }
+ authors = [
+     { name = "Clouthier Simulation Labs", email = "info@neruva.io" },
+ ]
+ keywords = [
+     "neruva",
+     "anthropic",
+     "claude",
+     "agent-memory",
+     "auto-record",
+     "observability",
+     "llm",
+ ]
+ classifiers = [
+     "Development Status :: 4 - Beta",
+     "Intended Audience :: Developers",
+     "License :: OSI Approved :: MIT License",
+     "Operating System :: OS Independent",
+     "Programming Language :: Python :: 3",
+     "Programming Language :: Python :: 3.10",
+     "Programming Language :: Python :: 3.11",
+     "Programming Language :: Python :: 3.12",
+     "Topic :: Software Development :: Libraries",
+ ]
+ dependencies = [
+     "httpx>=0.27",
+ ]
+
+ [project.optional-dependencies]
+ anthropic = ["anthropic>=0.30"]
+
+ [project.urls]
+ Homepage = "https://neruva.io/"
+ Documentation = "https://neruva.io/docs/"
+ Source = "https://github.com/CloutSimLabs/neruva"
+
+ [tool.setuptools.packages.find]
+ where = ["src"]
+
+ [tool.setuptools.package-data]
+ neruva_record = ["py.typed"]
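Because `anthropic` is declared as an optional extra rather than a hard dependency, the SDK can be pulled in through the extra instead of a separate install (a sketch of standard pip extras syntax):

```shell
# Installs the wrapper plus the Anthropic SDK via the "anthropic" extra.
pip install "neruva-record[anthropic]"
```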
@@ -0,0 +1,4 @@
+ [egg_info]
+ tag_build =
+ tag_date = 0
+
@@ -0,0 +1,39 @@
+ """Neruva auto-record SDK wrappers.
+
+ Drop-in observability for LLM clients. Every turn is upserted to a
+ Neruva Memory namespace as a side-effect, with rich metadata so you
+ can recall, replay, and audit later.
+
+ Currently supported:
+
+     auto_record(client, ...)    Anthropic Python SDK (sync + async)
+
+ More vendors (OpenAI, Google, xAI/Mistral) ship in subsequent releases.
+
+ Quick start:
+
+     import anthropic
+     from neruva_record import auto_record
+
+     client = auto_record(
+         anthropic.Anthropic(),
+         index="brain",
+         namespace="main",
+         ttl_days=30,  # optional: auto-expire records after N days
+     )
+
+     # client behaves identically to bare Anthropic; recording is a side-effect
+     response = client.messages.create(
+         model="claude-opus-4-7",
+         max_tokens=200,
+         messages=[{"role": "user", "content": "Hi!"}],
+     )
+
+ Set ``NERUVA_API_KEY`` in the environment, or pass ``neruva_api_key=...``.
+ """
+ from __future__ import annotations
+
+ from .anthropic import auto_record
+
+ __all__ = ["auto_record"]
+ __version__ = "0.1.0"
@@ -0,0 +1,112 @@
+ """Internal recorder: HTTP client + record formatting.
+
+ Used by the per-vendor wrappers (anthropic, openai, ...) to ship
+ captured turns to a Neruva Memory namespace via fire-and-forget HTTP
+ POSTs. Failures are swallowed -- recording must never break the
+ user's call.
+ """
+ from __future__ import annotations
+
+ import os
+ import time
+ from typing import Any, Optional
+ from urllib.parse import quote
+
+ import httpx
+
+
+ _DEFAULT_URL = "https://api.neruva.io"
+
+
+ class Recorder:
+     """Holds connection state for a single (index, namespace) target."""
+
+     def __init__(
+         self,
+         api_key: str,
+         index: str,
+         namespace: str = "",
+         ttl_days: Optional[int] = None,
+         base_url: Optional[str] = None,
+         timeout: float = 10.0,
+     ) -> None:
+         if not api_key:
+             raise ValueError(
+                 "neruva-record requires NERUVA_API_KEY. "
+                 "Set the env var or pass neruva_api_key=... to auto_record()."
+             )
+         if not index:
+             raise ValueError("auto_record(index=...) is required")
+         self.api_key = api_key
+         self.index = index
+         self.namespace = namespace or ""
+         self.ttl_days = ttl_days if (ttl_days and ttl_days > 0) else None
+         self.base_url = (base_url or os.environ.get(
+             "NERUVA_URL", _DEFAULT_URL,
+         )).rstrip("/")
+         self._client = httpx.Client(
+             base_url=self.base_url,
+             headers={
+                 "Api-Key": self.api_key,
+                 "Content-Type": "application/json",
+             },
+             timeout=timeout,
+         )
+
+     def close(self) -> None:
+         try:
+             self._client.close()
+         except Exception:
+             pass
+
+     def upsert_turn(
+         self,
+         record_id: str,
+         text: str,
+         metadata: dict[str, Any],
+     ) -> None:
+         """POST a single turn record. Swallows all errors silently."""
+         if self.ttl_days and "_auto_expire_at" not in metadata:
+             metadata = {
+                 **metadata,
+                 "_auto_expire_at": int(time.time() * 1000) + self.ttl_days * 86_400_000,
+             }
+         path = f"/v1/indexes/{quote(self.index, safe='')}/text/upsert"
+         try:
+             self._client.post(
+                 path,
+                 json={
+                     "namespace": self.namespace,
+                     "items": [{
+                         "id": record_id,
+                         "text": text,
+                         "metadata": metadata,
+                     }],
+                 },
+             )
+         except Exception:
+             # Never break the caller.
+             pass
+
+
+ def make_recorder(
+     *,
+     api_key: Optional[str],
+     index: str,
+     namespace: str = "",
+     ttl_days: Optional[int] = None,
+     base_url: Optional[str] = None,
+ ) -> Recorder:
+     """Build a Recorder, resolving API key from env if not passed."""
+     key = api_key or os.environ.get("NERUVA_API_KEY")
+     return Recorder(
+         api_key=key or "",
+         index=index,
+         namespace=namespace,
+         ttl_days=ttl_days,
+         base_url=base_url,
+     )
+
+
+ def short(s: str, n: int) -> str:
+     return s if len(s) <= n else s[:n] + "..."
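`make_recorder`'s key resolution is worth pinning down: an explicit `neruva_api_key` argument wins, otherwise the `NERUVA_API_KEY` env var, and an empty result makes `Recorder.__init__` raise. A standalone sketch of that fallback (the helper name `resolve_api_key` is illustrative, not package API):

```python
import os

def resolve_api_key(explicit=None):
    # Mirrors make_recorder's fallback: explicit argument first,
    # then the NERUVA_API_KEY environment variable; an empty result
    # is what triggers Recorder's ValueError.
    return explicit or os.environ.get("NERUVA_API_KEY") or ""
```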
@@ -0,0 +1,205 @@
+ """Anthropic Python SDK auto-record wrapper.
+
+ Drop-in: every ``client.messages.create()`` call captures both the user
+ message and the assistant response into a Neruva Memory namespace.
+
+ Synchronous and async (``AsyncAnthropic``) clients are both supported.
+ Streaming (``messages.stream()``) is not yet recorded -- coming in a
+ later release.
+ """
+ from __future__ import annotations
+
+ import time
+ import uuid
+ from typing import Any, Optional, TypeVar
+
+ from ._recorder import Recorder, make_recorder, short
+
+
+ C = TypeVar("C")
+
+
+ _USER_PREVIEW = 600
+ _ASSISTANT_PREVIEW = 1800
+
+
+ def _extract_user_text(messages: Any) -> str:
+     """Pull the most recent user-role message text out of a messages list."""
+     if not isinstance(messages, list):
+         return ""
+     for msg in reversed(messages):
+         if not isinstance(msg, dict) or msg.get("role") != "user":
+             continue
+         content = msg.get("content")
+         if isinstance(content, str):
+             return content
+         if isinstance(content, list):
+             parts = []
+             for block in content:
+                 if isinstance(block, dict) and block.get("type") == "text":
+                     parts.append(str(block.get("text", "")))
+             return " ".join(parts)
+         return str(content) if content is not None else ""
+     return ""
+
+
+ def _extract_assistant_text(response: Any) -> str:
+     """Pull the assistant text out of an Anthropic Messages response."""
+     content = getattr(response, "content", None)
+     if not content:
+         return ""
+     parts = []
+     for block in content:
+         # Object form (anthropic SDK) or dict form (raw API).
+         btype = getattr(block, "type", None) or (
+             block.get("type") if isinstance(block, dict) else None
+         )
+         if btype == "text":
+             text = getattr(block, "text", None) or (
+                 block.get("text") if isinstance(block, dict) else None
+             )
+             if text:
+                 parts.append(str(text))
+     return "\n".join(parts)
+
+
+ def _build_record(
+     request_kwargs: dict[str, Any],
+     response: Any,
+     latency_ms: int,
+ ) -> tuple[str, str, dict[str, Any]]:
+     """Return (record_id, text, metadata) for one turn."""
+     user_text = _extract_user_text(request_kwargs.get("messages"))
+     asst_text = _extract_assistant_text(response)
+     text = (
+         f"USER: {short(user_text, _USER_PREVIEW)}\n\n"
+         f"ASSISTANT: {short(asst_text, _ASSISTANT_PREVIEW)}"
+     )
+
+     usage = getattr(response, "usage", None)
+     input_tokens = getattr(usage, "input_tokens", None) if usage else None
+     output_tokens = getattr(usage, "output_tokens", None) if usage else None
+
+     meta: dict[str, Any] = {
+         "kind": "llm_turn",
+         "vendor": "anthropic",
+         "model": request_kwargs.get("model"),
+         "stop_reason": getattr(response, "stop_reason", None),
+         "input_tokens": int(input_tokens) if input_tokens is not None else None,
+         "output_tokens": int(output_tokens) if output_tokens is not None else None,
+         "latency_ms": int(latency_ms),
+         "ts": int(time.time() * 1000),
+     }
+     # Strip None values for clean storage.
+     meta = {k: v for k, v in meta.items() if v is not None}
+
+     rid = f"llm-{int(time.time() * 1000)}-{uuid.uuid4().hex[:8]}"
+     return rid, text, meta
+
+
+ def _wrap_messages_create(client: Any, recorder: Recorder) -> None:
+     """Replace client.messages.create with a recording version."""
+     original_create = client.messages.create
+
+     def wrapped(**kwargs):
+         t0 = time.perf_counter()
+         response = original_create(**kwargs)
+         latency_ms = int((time.perf_counter() - t0) * 1000)
+         try:
+             rid, text, meta = _build_record(kwargs, response, latency_ms)
+             recorder.upsert_turn(rid, text, meta)
+         except Exception:
+             pass  # never break the caller
+         return response
+
+     client.messages.create = wrapped
+
+
+ def _wrap_messages_create_async(client: Any, recorder: Recorder) -> None:
+     """Replace AsyncAnthropic client.messages.create with a recording version."""
+     original_create = client.messages.create
+
+     async def wrapped(**kwargs):
+         t0 = time.perf_counter()
+         response = await original_create(**kwargs)
+         latency_ms = int((time.perf_counter() - t0) * 1000)
+         try:
+             rid, text, meta = _build_record(kwargs, response, latency_ms)
+             recorder.upsert_turn(rid, text, meta)
+         except Exception:
+             pass
+         return response
+
+     client.messages.create = wrapped
+
+
+ def auto_record(
+     client: C,
+     *,
+     index: str,
+     namespace: str = "",
+     ttl_days: Optional[int] = None,
+     neruva_api_key: Optional[str] = None,
+     neruva_url: Optional[str] = None,
+ ) -> C:
+     """Wrap an Anthropic ``Anthropic`` or ``AsyncAnthropic`` client so every
+     ``messages.create()`` call upserts the turn into Neruva Memory.
+
+     The wrapped client behaves identically to the original; recording
+     happens as a side-effect via fire-and-forget HTTP POST. Errors in
+     recording are swallowed silently and never propagate to the caller.
+
+     Parameters
+     ----------
+     client : anthropic.Anthropic or anthropic.AsyncAnthropic
+         The client to wrap. The instance is mutated in place; the same
+         object is returned for chaining convenience.
+     index : str
+         Neruva Memory index name. Use one per user/account.
+     namespace : str, optional
+         Neruva namespace. Use one per agent (e.g. "main", "support-bot",
+         "research"). Defaults to empty string (the default namespace).
+     ttl_days : int, optional
+         If set, each recorded turn carries ``_auto_expire_at`` metadata
+         and the Neruva server-side decay sweep removes records older
+         than ``ttl_days`` days.
+     neruva_api_key : str, optional
+         Neruva API key. Falls back to the ``NERUVA_API_KEY`` env var.
+     neruva_url : str, optional
+         Neruva API base URL. Falls back to the ``NERUVA_URL`` env var
+         or ``https://api.neruva.io``.
+
+     Returns
+     -------
+     The same client instance, with ``messages.create`` wrapped.
+
+     Example
+     -------
+     >>> import anthropic
+     >>> from neruva_record import auto_record
+     >>> client = auto_record(
+     ...     anthropic.Anthropic(),
+     ...     index="brain", namespace="main", ttl_days=30,
+     ... )
+     >>> response = client.messages.create(
+     ...     model="claude-opus-4-7", max_tokens=200,
+     ...     messages=[{"role": "user", "content": "Hi"}],
+     ... )
+     """
+     recorder = make_recorder(
+         api_key=neruva_api_key, index=index, namespace=namespace,
+         ttl_days=ttl_days, base_url=neruva_url,
+     )
+
+     # Detect sync vs async by class name -- avoids requiring anthropic
+     # to be installed at import time.
+     cls_name = type(client).__name__
+     if "Async" in cls_name:
+         _wrap_messages_create_async(client, recorder)
+     else:
+         _wrap_messages_create(client, recorder)
+
+     # Stash recorder on the client so users can introspect / close it.
+     setattr(client, "_neruva_recorder", recorder)
+     return client
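The extraction logic in `_extract_user_text` is the subtlest part of the wrapper: it walks the messages list backwards and handles both string content and the list-of-blocks content form. A hedged standalone mirror of that behavior, for illustration only (`extract_user_text` is not the package's public API):

```python
def extract_user_text(messages):
    # Mirrors the wrapper's extraction: the most recent user-role
    # message wins; list content joins its "text" blocks with spaces.
    for msg in reversed(messages):
        if not isinstance(msg, dict) or msg.get("role") != "user":
            continue
        content = msg.get("content")
        if isinstance(content, str):
            return content
        if isinstance(content, list):
            return " ".join(
                str(b.get("text", "")) for b in content
                if isinstance(b, dict) and b.get("type") == "text"
            )
        return ""
    return ""
```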
File without changes
@@ -0,0 +1,11 @@
+ README.md
+ pyproject.toml
+ src/neruva_record/__init__.py
+ src/neruva_record/_recorder.py
+ src/neruva_record/anthropic.py
+ src/neruva_record/py.typed
+ src/neruva_record.egg-info/PKG-INFO
+ src/neruva_record.egg-info/SOURCES.txt
+ src/neruva_record.egg-info/dependency_links.txt
+ src/neruva_record.egg-info/requires.txt
+ src/neruva_record.egg-info/top_level.txt
@@ -0,0 +1,4 @@
+ httpx>=0.27
+
+ [anthropic]
+ anthropic>=0.30
@@ -0,0 +1 @@
+ neruva_record