hindsight-embed 0.1.0.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,42 @@
+ # Python-generated files
+ __pycache__/
+ *.py[oc]
+ build/
+ dist/
+ wheels/
+ *.egg-info
+
+ # Virtual environments
+ .venv
+
+ # Node
+ node_modules/
+
+ # Environment variables
+ .env
+
+ # IDE
+ .idea/
+ .vscode/
+ *.swp
+ *.swo
+
+ # NLTK data (will be downloaded automatically)
+ nltk_data/
+
+ # Large benchmark datasets (will be downloaded automatically)
+ **/longmemeval_s_cleaned.json
+
+ # Debug logs
+ logs/
+
+ .DS_Store
+
+ # Generated docs files
+ hindsight-docs/static/llms-full.txt
+
+
+ hindsight-dev/benchmarks/locomo/results/
+ hindsight-dev/benchmarks/longmemeval/results/
+ hindsight-cli/target
+ hindsight-clients/rust/target
@@ -0,0 +1,79 @@
+ Metadata-Version: 2.4
+ Name: hindsight-embed
+ Version: 0.1.0
+ Summary: Hindsight embedded CLI - local memory operations without a server
+ Requires-Python: >=3.11
+ Requires-Dist: hindsight-api>=0.1.11
+ Requires-Dist: questionary>=2.0.0
+ Description-Content-Type: text/markdown
+
+ # hindsight-embed
+
+ Hindsight embedded CLI - local memory operations without a server.
+
+ This package provides a simple CLI for storing and recalling memories using Hindsight's memory engine with an embedded PostgreSQL database (pg0). No external server or database setup required.
+
+ ## Installation
+
+ ```bash
+ pip install hindsight-embed
+ # or with uvx (no install needed)
+ uvx hindsight-embed --help
+ ```
+
+ ## Quick Start
+
+ ```bash
+ # Set your LLM API key
+ export OPENAI_API_KEY=sk-...
+
+ # Store a memory
+ hindsight-embed retain "User prefers dark mode"
+
+ # Recall memories
+ hindsight-embed recall "What are user preferences?"
+ ```
+
+ ## Commands
+
+ ### retain
+
+ Store a memory:
+
+ ```bash
+ hindsight-embed retain "User prefers dark mode"
+ hindsight-embed retain "Meeting on Monday" --context work
+ ```
+
+ ### recall
+
+ Search memories:
+
+ ```bash
+ hindsight-embed recall "user preferences"
+ hindsight-embed recall "upcoming events" --budget high
+ hindsight-embed recall "project details" -v  # verbose output
+ ```
+
+ ## Environment Variables
+
+ | Variable | Description | Default |
+ |----------|-------------|---------|
+ | `HINDSIGHT_EMBED_LLM_API_KEY` | LLM API key (or use `OPENAI_API_KEY`) | Required |
+ | `HINDSIGHT_EMBED_LLM_PROVIDER` | LLM provider (`openai`, `anthropic`, `google`, `ollama`) | `openai` |
+ | `HINDSIGHT_EMBED_LLM_MODEL` | LLM model | `gpt-4o-mini` |
+ | `HINDSIGHT_EMBED_BANK_ID` | Memory bank ID | `default` |
+
+ ## Use with AI Coding Assistants
+
+ This CLI is designed to work with AI coding assistants like Claude Code, OpenCode, and Codex CLI. Install the Hindsight skill:
+
+ ```bash
+ curl -fsSL https://hindsight.vectorize.io/get-skill | bash
+ ```
+
+ This will configure the LLM provider and install the skill to your assistant's skills directory.
+
+ ## License
+
+ Apache 2.0
@@ -0,0 +1,70 @@
+ # hindsight-embed
+
+ Hindsight embedded CLI - local memory operations without a server.
+
+ This package provides a simple CLI for storing and recalling memories using Hindsight's memory engine with an embedded PostgreSQL database (pg0). No external server or database setup required.
+
+ ## Installation
+
+ ```bash
+ pip install hindsight-embed
+ # or with uvx (no install needed)
+ uvx hindsight-embed --help
+ ```
+
+ ## Quick Start
+
+ ```bash
+ # Set your LLM API key
+ export OPENAI_API_KEY=sk-...
+
+ # Store a memory
+ hindsight-embed retain "User prefers dark mode"
+
+ # Recall memories
+ hindsight-embed recall "What are user preferences?"
+ ```
+
+ ## Commands
+
+ ### retain
+
+ Store a memory:
+
+ ```bash
+ hindsight-embed retain "User prefers dark mode"
+ hindsight-embed retain "Meeting on Monday" --context work
+ ```
+
+ ### recall
+
+ Search memories:
+
+ ```bash
+ hindsight-embed recall "user preferences"
+ hindsight-embed recall "upcoming events" --budget high
+ hindsight-embed recall "project details" -v  # verbose output
+ ```
+
+ ## Environment Variables
+
+ | Variable | Description | Default |
+ |----------|-------------|---------|
+ | `HINDSIGHT_EMBED_LLM_API_KEY` | LLM API key (or use `OPENAI_API_KEY`) | Required |
+ | `HINDSIGHT_EMBED_LLM_PROVIDER` | LLM provider (`openai`, `anthropic`, `google`, `ollama`) | `openai` |
+ | `HINDSIGHT_EMBED_LLM_MODEL` | LLM model | `gpt-4o-mini` |
+ | `HINDSIGHT_EMBED_BANK_ID` | Memory bank ID | `default` |
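+
+ These variables can also be kept in `~/.hindsight/embed`, the file written by `hindsight-embed configure` (plain `KEY=value` lines; an optional `export ` prefix is accepted, and `#` lines are comments). A sketch of such a file, with illustrative values only:
+
+ ```bash
+ # ~/.hindsight/embed  (example values, not defaults)
+ HINDSIGHT_EMBED_LLM_PROVIDER=openai
+ HINDSIGHT_EMBED_LLM_MODEL=gpt-4o-mini
+ HINDSIGHT_EMBED_BANK_ID=default
+ HINDSIGHT_EMBED_LLM_API_KEY=sk-...
+ ```
+
+ Values already present in the environment take precedence over this file.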
+
+ ## Use with AI Coding Assistants
+
+ This CLI is designed to work with AI coding assistants like Claude Code, OpenCode, and Codex CLI. Install the Hindsight skill:
+
+ ```bash
+ curl -fsSL https://hindsight.vectorize.io/get-skill | bash
+ ```
+
+ This will configure the LLM provider and install the skill to your assistant's skills directory.
+
+ ## License
+
+ Apache 2.0
@@ -0,0 +1,3 @@
+ """Hindsight embedded CLI - local memory operations without a server."""
+
+ __version__ = "0.1.0"
@@ -0,0 +1,423 @@
+ """
+ Hindsight Embedded CLI.
+
+ A simple CLI for local memory operations using embedded PostgreSQL (pg0).
+ No external server required - runs everything locally.
+
+ Usage:
+     hindsight-embed configure  # Interactive setup
+     hindsight-embed retain "User prefers dark mode"
+     hindsight-embed recall "What are user preferences?"
+
+ Environment variables:
+     HINDSIGHT_EMBED_LLM_API_KEY: Required. API key for LLM provider.
+     HINDSIGHT_EMBED_LLM_PROVIDER: Optional. LLM provider (default: "openai").
+     HINDSIGHT_EMBED_LLM_MODEL: Optional. LLM model (default: "gpt-4o-mini").
+     HINDSIGHT_EMBED_BANK_ID: Optional. Memory bank ID (default: "default").
+     HINDSIGHT_EMBED_LOG_LEVEL: Optional. Log level (default: "warning").
+ """
+
+ import argparse
+ import asyncio
+ import logging
+ import os
+ import sys
+ from pathlib import Path
+
+ CONFIG_DIR = Path.home() / ".hindsight"
+ CONFIG_FILE = CONFIG_DIR / "embed"
+
+
+ def setup_logging(verbose: bool = False):
+     """Configure logging."""
+     level_str = os.environ.get("HINDSIGHT_EMBED_LOG_LEVEL", "warning").lower()
+     if verbose:
+         level_str = "debug"
+
+     level_map = {
+         "debug": logging.DEBUG,
+         "info": logging.INFO,
+         "warning": logging.WARNING,
+         "error": logging.ERROR,
+     }
+     level = level_map.get(level_str, logging.WARNING)
+
+     logging.basicConfig(
+         level=level,
+         format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
+         stream=sys.stderr,
+     )
+     return logging.getLogger(__name__)
+
+
+ def load_config_file():
+     """Load configuration from file if it exists."""
+     if CONFIG_FILE.exists():
+         with open(CONFIG_FILE) as f:
+             for line in f:
+                 line = line.strip()
+                 if line and not line.startswith("#") and "=" in line:
+                     # Handle 'export VAR=value' format
+                     if line.startswith("export "):
+                         line = line[7:]
+                     key, value = line.split("=", 1)
+                     if key not in os.environ:  # Don't override env vars
+                         os.environ[key] = value
+
+
+ def get_config():
+     """Get configuration from environment variables."""
+     load_config_file()
+     return {
+         "llm_api_key": os.environ.get("HINDSIGHT_EMBED_LLM_API_KEY")
+         or os.environ.get("HINDSIGHT_API_LLM_API_KEY")
+         or os.environ.get("OPENAI_API_KEY"),
+         "llm_provider": os.environ.get("HINDSIGHT_EMBED_LLM_PROVIDER")
+         or os.environ.get("HINDSIGHT_API_LLM_PROVIDER", "openai"),
+         "llm_model": os.environ.get("HINDSIGHT_EMBED_LLM_MODEL")
+         or os.environ.get("HINDSIGHT_API_LLM_MODEL", "gpt-4o-mini"),
+         "bank_id": os.environ.get("HINDSIGHT_EMBED_BANK_ID", "default"),
+     }
+
+
+ def do_configure(args):
+     """Interactive configuration setup with beautiful TUI."""
+     import questionary
+     from questionary import Style
+
+     # Custom style for the prompts
+     custom_style = Style([
+         ('qmark', 'fg:cyan bold'),
+         ('question', 'fg:white bold'),
+         ('answer', 'fg:cyan'),
+         ('pointer', 'fg:cyan bold'),
+         ('highlighted', 'fg:cyan bold'),
+         ('selected', 'fg:green'),
+         ('text', 'fg:white'),
+     ])
+
+     print()
+     print("\033[1m\033[36m ╭─────────────────────────────────────╮\033[0m")
+     print("\033[1m\033[36m │    Hindsight Embed Configuration    │\033[0m")
+     print("\033[1m\033[36m ╰─────────────────────────────────────╯\033[0m")
+     print()
+
+     # Check existing config
+     if CONFIG_FILE.exists():
+         if not questionary.confirm(
+             "Existing configuration found. Reconfigure?",
+             default=False,
+             style=custom_style,
+         ).ask():
+             print("\n\033[32m✓\033[0m Keeping existing configuration.")
+             return 0
+         print()
+
+     # Provider selection with descriptions
+     providers = [
+         questionary.Choice("OpenAI (recommended)", value=("openai", "o3-mini", "OpenAI")),
+         questionary.Choice("Groq (fast & free tier)", value=("groq", "openai/gpt-oss-20b", "Groq")),
+         questionary.Choice("Google Gemini", value=("google", "gemini-2.0-flash", "Google")),
+         questionary.Choice("Ollama (local, no API key)", value=("ollama", "llama3.2", None)),
+     ]
+
+     result = questionary.select(
+         "Select your LLM provider:",
+         choices=providers,
+         style=custom_style,
+     ).ask()
+
+     if result is None:  # User cancelled
+         print("\n\033[33m⚠\033[0m Configuration cancelled.")
+         return 1
+
+     provider, default_model, key_name = result
+
+     # API key
+     api_key = ""
+     if key_name:
+         env_keys = {
+             "OpenAI": "OPENAI_API_KEY",
+             "Groq": "GROQ_API_KEY",
+             "Google": "GOOGLE_API_KEY",
+         }
+         env_key = env_keys.get(key_name, "")
+         existing = os.environ.get(env_key, "")
+
+         if existing:
+             masked = existing[:8] + "..." + existing[-4:] if len(existing) > 12 else "***"
+             if questionary.confirm(
+                 f"Found {key_name} key in ${env_key} ({masked}). Use it?",
+                 default=True,
+                 style=custom_style,
+             ).ask():
+                 api_key = existing
+
+         if not api_key:
+             api_key = questionary.password(
+                 f"Enter your {key_name} API key:",
+                 style=custom_style,
+             ).ask()
+
+         if not api_key:
+             print("\n\033[31m✗\033[0m API key is required.", file=sys.stderr)
+             return 1
+
+     # Model selection
+     model = questionary.text(
+         "Model name:",
+         default=default_model,
+         style=custom_style,
+     ).ask()
+
+     if model is None:
+         return 1
+
+     # Bank ID
+     bank_id = questionary.text(
+         "Memory bank ID:",
+         default="default",
+         style=custom_style,
+     ).ask()
+
+     if bank_id is None:
+         return 1
+
+     # Save configuration
+     CONFIG_DIR.mkdir(parents=True, exist_ok=True)
+
+     with open(CONFIG_FILE, "w") as f:
+         f.write("# Hindsight Embed Configuration\n")
+         f.write("# Generated by hindsight-embed configure\n\n")
+         f.write(f"HINDSIGHT_EMBED_LLM_PROVIDER={provider}\n")
+         f.write(f"HINDSIGHT_EMBED_LLM_MODEL={model}\n")
+         f.write(f"HINDSIGHT_EMBED_BANK_ID={bank_id}\n")
+         if api_key:
+             f.write(f"HINDSIGHT_EMBED_LLM_API_KEY={api_key}\n")
+
+     CONFIG_FILE.chmod(0o600)
+
+     print()
+     print("\033[32m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\033[0m")
+     print("\033[32m ✓ Configuration saved!\033[0m")
+     print("\033[32m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\033[0m")
+     print()
+     print(f" \033[2mConfig:\033[0m {CONFIG_FILE}")
+     print()
+     print(" \033[2mTest with:\033[0m")
+     print('   \033[36mhindsight-embed retain "Test memory"\033[0m')
+     print('   \033[36mhindsight-embed recall "test"\033[0m')
+     print()
+
+     return 0
+
+
+ async def _create_engine(config: dict, logger):
+     """Create and initialize the memory engine."""
+     logger.debug("Setting up environment variables...")
+
+     # Set hindsight-api environment variables from our config
+     if config["llm_api_key"]:
+         os.environ["HINDSIGHT_API_LLM_API_KEY"] = config["llm_api_key"]
+     if config["llm_provider"]:
+         os.environ["HINDSIGHT_API_LLM_PROVIDER"] = config["llm_provider"]
+     if config["llm_model"]:
+         os.environ["HINDSIGHT_API_LLM_MODEL"] = config["llm_model"]
+
+     logger.debug("Importing MemoryEngine...")
+
+     # Import after setting env vars
+     from hindsight_api import MemoryEngine
+     from hindsight_api.engine.task_backend import SyncTaskBackend
+
+     # Use pg0 embedded database
+     db_name = f"hindsight-embed-{config['bank_id']}"
+     logger.debug(f"Creating MemoryEngine with pg0://{db_name}")
+
+     # Use SyncTaskBackend to avoid background workers that prevent clean exit
+     memory = MemoryEngine(
+         db_url=f"pg0://{db_name}",
+         task_backend=SyncTaskBackend(),
+     )
+
+     logger.debug("Initializing engine...")
+     await memory.initialize()
+
+     logger.debug("Engine initialized")
+     return memory
+
+
+ async def do_retain(args, config: dict, logger):
+     """Execute retain command."""
+     from hindsight_api.models import RequestContext
+
+     logger.info(f"Retaining memory: {args.content[:50]}...")
+
+     memory = await _create_engine(config, logger)
+
+     try:
+         logger.debug("Calling retain_batch_async...")
+         await memory.retain_batch_async(
+             bank_id=config["bank_id"],
+             contents=[{
+                 "content": args.content,
+                 "context": args.context or "general",
+             }],
+             request_context=RequestContext(),
+         )
+         msg = f"Stored memory: {args.content[:50]}..." if len(args.content) > 50 else f"Stored memory: {args.content}"
+         print(msg, flush=True)
+         return 0
+     except Exception as e:
+         logger.error(f"Retain failed: {e}", exc_info=True)
+         print(f"Error: {e}", file=sys.stderr)
+         return 1
+
+
+ async def do_recall(args, config: dict, logger):
+     """Execute recall command."""
+     from hindsight_api.engine.memory_engine import Budget
+     from hindsight_api.engine.response_models import VALID_RECALL_FACT_TYPES
+     from hindsight_api.models import RequestContext
+
+     logger.info(f"Recalling with query: {args.query}")
+
+     memory = await _create_engine(config, logger)
+
+     try:
+         budget_map = {"low": Budget.LOW, "mid": Budget.MID, "high": Budget.HIGH}
+         budget_enum = budget_map.get(args.budget.lower(), Budget.LOW)
+
+         logger.debug(f"Calling recall_async with budget={budget_enum}...")
+         result = await memory.recall_async(
+             bank_id=config["bank_id"],
+             query=args.query,
+             fact_type=list(VALID_RECALL_FACT_TYPES),
+             budget=budget_enum,
+             max_tokens=args.max_tokens,
+             request_context=RequestContext(),
+         )
+
+         logger.debug(f"Recall returned {len(result.results)} results")
+
+         if result.results:
+             print("Memories found:", flush=True)
+             print("-" * 40, flush=True)
+             for fact in result.results:
+                 print(f"- {fact.text}", flush=True)
+                 if args.verbose and fact.occurred_start:
+                     print(f"  (Date: {fact.occurred_start})", flush=True)
+             print("-" * 40, flush=True)
+             print(f"Total: {len(result.results)} memories", flush=True)
+         else:
+             print("No relevant memories found.", flush=True)
+
+         return 0
+     except Exception as e:
+         logger.error(f"Recall failed: {e}", exc_info=True)
+         print(f"Error: {e}", file=sys.stderr)
+         return 1
+
+
+ def main():
+     """Main entry point."""
+     parser = argparse.ArgumentParser(
+         description="Hindsight Embedded CLI - local memory operations without a server",
+         formatter_class=argparse.RawDescriptionHelpFormatter,
+         epilog="""
+ Examples:
+   hindsight-embed configure  # Interactive setup
+   hindsight-embed retain "User prefers dark mode"
+   hindsight-embed retain "Meeting on Monday" -c work
+   hindsight-embed recall "user preferences"
+   hindsight-embed recall "meetings" --budget high
+ """
+     )
+
+     parser.add_argument(
+         "--verbose", "-v",
+         action="store_true",
+         help="Enable verbose/debug logging"
+     )
+
+     subparsers = parser.add_subparsers(dest="command", help="Commands")
+
+     # Configure command
+     subparsers.add_parser("configure", help="Interactive configuration setup")
+
+     # Retain command
+     retain_parser = subparsers.add_parser("retain", help="Store a memory")
+     retain_parser.add_argument("content", help="The memory content to store")
+     retain_parser.add_argument(
+         "--context", "-c",
+         help="Category for the memory (e.g., 'preferences', 'work')",
+         default="general"
+     )
+
+     # Recall command
+     recall_parser = subparsers.add_parser("recall", help="Search memories")
+     recall_parser.add_argument("query", help="Search query")
+     recall_parser.add_argument(
+         "--budget", "-b",
+         choices=["low", "mid", "high"],
+         default="low",
+         help="Search budget level (default: low)"
+     )
+     recall_parser.add_argument(
+         "--max-tokens", "-m",
+         type=int,
+         default=4096,
+         help="Maximum tokens in results (default: 4096)"
+     )
+     recall_parser.add_argument(
+         "--verbose", "-v",
+         action="store_true",
+         help="Show additional details"
+     )
+
+     args = parser.parse_args()
+
+     # Setup logging
+     verbose = getattr(args, 'verbose', False)
+     logger = setup_logging(verbose)
+
+     if not args.command:
+         parser.print_help()
+         sys.exit(1)
+
+     # Handle configure separately (no config needed)
+     if args.command == "configure":
+         exit_code = do_configure(args)
+         sys.exit(exit_code)
+
+     config = get_config()
+
+     # Check for LLM API key
+     if not config["llm_api_key"]:
+         print("Error: LLM API key is required.", file=sys.stderr)
+         print("Run 'hindsight-embed configure' to set up.", file=sys.stderr)
+         sys.exit(1)
+
+     # Run the appropriate command
+     exit_code = 1
+     try:
+         if args.command == "retain":
+             exit_code = asyncio.run(do_retain(args, config, logger))
+         elif args.command == "recall":
+             exit_code = asyncio.run(do_recall(args, config, logger))
+         else:
+             parser.print_help()
+             exit_code = 1
+     except KeyboardInterrupt:
+         logger.debug("Interrupted")
+         exit_code = 130
+     except Exception as e:
+         logger.error(f"Unexpected error: {e}", exc_info=True)
+         print(f"Error: {e}", file=sys.stderr)
+         exit_code = 1
+
+     sys.exit(exit_code)
+
+
+ if __name__ == "__main__":
+     main()
@@ -0,0 +1,23 @@
+ [build-system]
+ requires = ["hatchling"]
+ build-backend = "hatchling.build"
+
+ [project]
+ name = "hindsight-embed"
+ version = "0.1.0"
+ description = "Hindsight embedded CLI - local memory operations without a server"
+ readme = "README.md"
+ requires-python = ">=3.11"
+ dependencies = [
+     "hindsight-api>=0.1.11",
+     "questionary>=2.0.0",
+ ]
+
+ [project.scripts]
+ hindsight-embed = "hindsight_embed.cli:main"
+
+ [tool.hatch.build.targets.wheel]
+ packages = ["hindsight_embed"]
+
+ [tool.uv.sources]
+ hindsight-api = { workspace = true }
@@ -0,0 +1,68 @@
+ #!/bin/bash
+ #
+ # Simple smoke test for hindsight-embed CLI
+ # Tests retain and recall operations with embedded PostgreSQL
+ #
+
+ set -e
+
+ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+
+ echo "=== Hindsight Embed Smoke Test ==="
+
+ # Check required environment
+ if [ -z "$HINDSIGHT_EMBED_LLM_API_KEY" ]; then
+     echo "Error: HINDSIGHT_EMBED_LLM_API_KEY is required"
+     exit 1
+ fi
+
+ # Use a unique bank ID for this test run
+ export HINDSIGHT_EMBED_BANK_ID="test-$$-$(date +%s)"
+ echo "Using bank ID: $HINDSIGHT_EMBED_BANK_ID"
+
+ # Test 1: Retain a memory
+ echo ""
+ echo "Test 1: Retaining a memory..."
+ OUTPUT=$(uv run --project "$SCRIPT_DIR" hindsight-embed retain "The user's favorite color is blue" 2>&1)
+ echo "$OUTPUT"
+ if ! echo "$OUTPUT" | grep -q "Stored memory"; then
+     echo "FAIL: Expected 'Stored memory' in output"
+     exit 1
+ fi
+ echo "PASS: Memory retained successfully"
+
+ # Test 2: Recall the memory
+ echo ""
+ echo "Test 2: Recalling memories..."
+ OUTPUT=$(uv run --project "$SCRIPT_DIR" hindsight-embed recall "What is the user's favorite color?" 2>&1)
+ echo "$OUTPUT"
+ if ! echo "$OUTPUT" | grep -qi "blue"; then
+     echo "FAIL: Expected 'blue' in recall output"
+     exit 1
+ fi
+ echo "PASS: Memory recalled successfully"
+
+ # Test 3: Retain with context
+ echo ""
+ echo "Test 3: Retaining memory with context..."
+ OUTPUT=$(uv run --project "$SCRIPT_DIR" hindsight-embed retain "User prefers Python over JavaScript" --context work 2>&1)
+ echo "$OUTPUT"
+ if ! echo "$OUTPUT" | grep -q "Stored memory"; then
+     echo "FAIL: Expected 'Stored memory' in output"
+     exit 1
+ fi
+ echo "PASS: Memory with context retained successfully"
+
+ # Test 4: Recall with budget
+ echo ""
+ echo "Test 4: Recalling with budget..."
+ OUTPUT=$(uv run --project "$SCRIPT_DIR" hindsight-embed recall "programming preferences" --budget mid 2>&1)
+ echo "$OUTPUT"
+ if ! echo "$OUTPUT" | grep -qi "python"; then
+     echo "FAIL: Expected 'Python' in recall output"
+     exit 1
+ fi
+ echo "PASS: Memory recalled with budget successfully"
+
+ echo ""
+ echo "=== All tests passed! ==="