empathy-framework 3.7.1__py3-none-any.whl → 3.8.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,7 +1,7 @@
1
1
  Metadata-Version: 2.4
2
2
  Name: empathy-framework
3
- Version: 3.7.1
4
- Summary: AI collaboration framework with XML-enhanced prompts, persistent memory, CrewAI integration, and multi-agent orchestration. Includes HIPAA-compliant healthcare wizards and 53% reduction in hallucinations.
3
+ Version: 3.8.1
4
+ Summary: AI collaboration framework with intelligent caching (up to 57% cache hit rate), tier routing (34-86% cost savings depending on task complexity), XML-enhanced prompts, persistent memory, CrewAI integration, and multi-agent orchestration. Includes HIPAA-compliant healthcare wizards.
5
5
  Author-email: Patrick Roebuck <admin@smartaimemory.com>
6
6
  Maintainer-email: Smart-AI-Memory <admin@smartaimemory.com>
7
7
  License: # Fair Source License, version 0.9
@@ -177,8 +177,8 @@ Requires-Dist: typer<1.0.0,>=0.9.0
177
177
  Requires-Dist: pyyaml<7.0,>=6.0
178
178
  Requires-Dist: anthropic<1.0.0,>=0.25.0
179
179
  Requires-Dist: crewai<1.0.0,>=0.1.0
180
- Requires-Dist: langchain<1.0.0,>=0.1.0
181
- Requires-Dist: langchain-core<1.0.0,>=0.1.0
180
+ Requires-Dist: langchain<2.0.0,>=0.1.0
181
+ Requires-Dist: langchain-core<2.0.0,>=0.1.0
182
182
  Provides-Extra: anthropic
183
183
  Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "anthropic"
184
184
  Provides-Extra: openai
@@ -200,6 +200,10 @@ Requires-Dist: langgraph-checkpoint<4.0.0,>=3.0.0; extra == "agents"
200
200
  Requires-Dist: marshmallow<5.0.0,>=4.1.2; extra == "agents"
201
201
  Provides-Extra: crewai
202
202
  Requires-Dist: crewai<1.0.0,>=0.1.0; extra == "crewai"
203
+ Provides-Extra: cache
204
+ Requires-Dist: sentence-transformers<4.0.0,>=2.0.0; extra == "cache"
205
+ Requires-Dist: torch<3.0.0,>=2.0.0; extra == "cache"
206
+ Requires-Dist: numpy<3.0.0,>=1.24.0; extra == "cache"
203
207
  Provides-Extra: healthcare
204
208
  Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "healthcare"
205
209
  Requires-Dist: openai<2.0.0,>=1.12.0; extra == "healthcare"
@@ -363,9 +367,34 @@ Dynamic: license-file
363
367
  pip install empathy-framework[developer] # Lightweight for individual developers
364
368
  ```
365
369
 
366
- ## What's New in v3.7.0
370
+ ## What's New in v3.8.0
367
371
 
368
- ### 🚀 **XML-Enhanced Prompting: 15-35% Token Reduction + Graceful Validation**
372
+ ### 🎯 **Transparent Cost Claims: Honest Role-Based Savings (34-86%)**
373
+
374
+ **Real savings depend on your work role.** Architects using 60% PREMIUM tasks see 34% savings, while junior devs see 86%. See [role-based analysis](docs/cost-analysis/COST_SAVINGS_BY_ROLE_AND_PROVIDER.md) for your specific case.
375
+
376
+ ### 🚀 **Intelligent Response Caching: Up to 57% Hit Rate (Benchmarked)**
377
+
378
+ **Hash-only cache**: 100% hit rate on identical prompts, ~5μs lookups
379
+ **Hybrid cache**: Up to 57% hit rate on semantically similar prompts (measured on security audit workflow)
380
+
381
+ ```python
382
+ from empathy_os.cache import create_cache
383
+
384
+ # Hash-only mode (fast, exact matches)
385
+ cache = create_cache(cache_type="hash")
386
+
387
+ # Hybrid mode (semantic similarity)
388
+ cache = create_cache(cache_type="hybrid", similarity_threshold=0.95)
389
+ ```
390
+
391
+ See [caching docs](docs/caching/) for benchmarks and configuration.
392
+
393
+ ---
394
+
395
+ ### Previous Release: v3.7.0
396
+
397
+ #### 🚀 **XML-Enhanced Prompting: 15-35% Token Reduction + Graceful Validation**
369
398
 
370
399
  **Slash your API costs and eliminate response parsing errors with production-ready XML enhancements.**
371
400
 
@@ -521,9 +550,11 @@ See ESLintParser, PylintParser, or MyPyParser for examples.
521
550
  ### Previous (v3.0.x)
522
551
 
523
552
  - **Multi-Model Provider System** — Anthropic, OpenAI, Google Gemini, Ollama, or Hybrid mode
524
- - **80-96% Cost Savings** — Smart tier routing: cheap models detect, best models decide
553
+ - **34-86% Cost Savings** — Smart tier routing varies by role: architects 34%, senior devs 65%, junior devs 86%*
525
554
  - **VSCode Dashboard** — 10 integrated workflows with input history persistence
526
555
 
556
+ *See [Cost Savings Analysis](docs/cost-analysis/COST_SAVINGS_BY_ROLE_AND_PROVIDER.md) for your specific use case
557
+
527
558
  ---
528
559
 
529
560
  ## Quick Start (2 Minutes)
@@ -597,12 +628,106 @@ print(result.prevention_steps) # How to prevent it
597
628
  | **Predicts future issues** | 30-90 days ahead | No | No |
598
629
  | **Persistent memory** | Redis + patterns | No | No |
599
630
  | **Multi-provider support** | Claude, GPT-4, Gemini, Ollama | N/A | GPT only |
600
- | **Cost optimization** | 80-96% savings | N/A | No |
631
+ | **Cost optimization** | 34-86% savings* | N/A | No |
601
632
  | **Your data stays local** | Yes | Cloud | Cloud |
602
633
  | **Free for small teams** | ≤5 employees | No | No |
603
634
 
604
635
  ---
605
636
 
637
+ ## What's New in v3.8.0
638
+
639
+ ### 🚀 **Intelligent Response Caching: Benchmarked Performance**
640
+
641
+ **Stop paying full price for repeated LLM calls. Measured results: up to 99.8% faster, 40% cost reduction on test generation, 57% cache hit rate on security audits.**
642
+
643
+ #### Hybrid Cache: Hash + Semantic Matching
644
+
645
+ ```python
646
+ from empathy_os.workflows import SecurityAuditWorkflow
647
+
648
+ # That's it - caching auto-configured!
649
+ workflow = SecurityAuditWorkflow(enable_cache=True)
650
+ result = await workflow.execute(target_path="./src")
651
+
652
+ # Check savings
653
+ print(f"Cost: ${result.cost_report.total_cost:.4f}")
654
+ print(f"Cache hit rate: {result.cost_report.cache_hit_rate:.1f}%")
655
+ print(f"Savings: ${result.cost_report.savings_from_cache:.4f}")
656
+ ```
657
+
658
+ **Real Results** (v3.8.0 benchmark - see [CACHING_BENCHMARK_REPORT.md](CACHING_BENCHMARK_REPORT.md)):
659
+
660
+ - **Hash-only cache**: 30.3% average hit rate across 12 workflows, up to 99.8% faster (code review: 17.8s → 0.03s)
661
+ - **Hybrid cache**: Up to 57% hit rate on similar prompts (security audit - benchmarked)
662
+ - **Cost reduction**: 40% on test-generation workflow (measured)
663
+
664
+ #### Two Cache Strategies
665
+
666
+ **Hash-Only Cache** (Default - Zero Dependencies):
667
+ - Perfect for CI/CD and testing
668
+ - 100% hit rate on identical prompts
669
+ - ~5μs lookup time
670
+ - No ML dependencies needed
671
+
672
+ **Hybrid Cache** (Semantic Matching):
673
+
674
+ - Up to 57% hit rate on similar prompts (benchmarked)
675
+ - Understands intent, not just text
676
+ - Install: `pip install empathy-framework[cache]`
677
+ - Best for development and production
678
+
679
+ #### Automatic Setup
680
+
681
+ Framework detects your environment and configures optimal caching:
682
+
683
+ ```python
684
+ # First run: Framework checks for sentence-transformers
685
+ # - Found? Uses hybrid cache (semantic matching, up to 57% hit rate)
686
+ # - Missing? Prompts: "Install for semantic matching? (y/n)"
687
+ # - Declined? Falls back to hash-only (100% hit rate on identical prompts)
688
+ # - Any errors? Disables gracefully, workflow continues
689
+
690
+ # Subsequent runs: Cache just works
691
+ ```
692
+
693
+ #### The Caching Paradox: Adaptive Workflows
694
+
695
+ **Discovered during v3.8.0 development**: Some workflows (Security Audit, Bug Prediction) cost MORE on Run 2 with cache enabled - and that's a FEATURE.
696
+
697
+ **Why?** Adaptive workflows use cache to free up time for deeper analysis:
698
+
699
+ ```
700
+ Security Audit without cache:
701
+ Run 1: $0.11, 45 seconds - surface scan finds 3 issues
702
+
703
+ Security Audit with cache:
704
+ Run 2: $0.13, 15 seconds - cache frees 30s for deep analysis
705
+ → Uses saved time for PREMIUM tier vulnerability research
706
+ → Finds 7 issues including critical SQLi we missed
707
+ → Extra $0.02 cost = prevented security breach
708
+ ```
709
+
710
+ **Result**: Cache makes workflows SMARTER, not just cheaper.
711
+
712
+ See [Adaptive Workflows Documentation](docs/caching/ADAPTIVE_WORKFLOWS.md) for full explanation.
713
+
714
+ #### Complete Documentation
715
+
716
+ - **[Quick Reference](docs/caching/QUICK_REFERENCE.md)** - Common scenarios, 1-page cheat sheet
717
+ - **[Configuration Guide](docs/caching/CONFIGURATION_GUIDE.md)** - All options, when to use each
718
+ - **[Adaptive Workflows](docs/caching/ADAPTIVE_WORKFLOWS.md)** - Why Run 2 can cost more (it's good!)
719
+
720
+ **Test it yourself**:
721
+ ```bash
722
+ # Quick test (2-3 minutes)
723
+ python benchmark_caching_simple.py
724
+
725
+ # Full benchmark (15-20 minutes, all 12 workflows)
726
+ python benchmark_caching.py
727
+ ```
728
+
729
+ ---
730
+
606
731
  ## Become a Power User
607
732
 
608
733
  ### Level 1: Basic Usage
@@ -615,18 +740,37 @@ pip install empathy-framework[developer]
615
740
  - Works out of the box with sensible defaults
616
741
  - Auto-detects your API keys
617
742
 
618
- ### Level 2: Cost Optimization
743
+ ### Level 2: Cost Optimization (Role-Based Savings)
744
+
745
+ **Tier routing automatically routes tasks to appropriate models, saving 34-86% depending on your work role.**
619
746
 
620
747
  ```bash
621
- # Enable hybrid mode for 80-96% cost savings
748
+ # Enable hybrid mode
622
749
  python -m empathy_os.models.cli provider --set hybrid
623
750
  ```
624
751
 
625
- | Tier | Model | Use Case | Cost |
626
- |------|-------|----------|------|
627
- | Cheap | GPT-4o-mini / Haiku | Summarization, simple tasks | $0.15-0.25/M |
628
- | Capable | GPT-4o / Sonnet | Bug fixing, code review | $2.50-3.00/M |
629
- | Premium | o1 / Opus | Architecture, complex decisions | $15/M |
752
+ #### Tier Pricing
753
+
754
+ | Tier | Model | Use Case | Cost per Task* |
755
+ |------|-------|----------|----------------|
756
+ | CHEAP | GPT-4o-mini / Haiku | Summarization, formatting, simple tasks | $0.0045-0.0075 |
757
+ | CAPABLE | GPT-4o / Sonnet | Bug fixing, code review, analysis | $0.0725-0.090 |
758
+ | PREMIUM | o1 / Opus | Architecture, complex decisions, design | $0.435-0.450 |
759
+
760
+ *Typical task: 5,000 input tokens, 1,000 output tokens
761
+
762
+ #### Actual Savings by Role
763
+
764
+ | Your Role | PREMIUM % | CAPABLE % | CHEAP % | Actual Savings | Notes |
765
+ |-----------|-----------|-----------|---------|----------------|-------|
766
+ | **Architect / Designer** | 60% | 30% | 10% | **34%** | Design work requires complex reasoning |
767
+ | **Senior Developer** | 25% | 50% | 25% | **65%** | Mix of architecture and implementation |
768
+ | **Mid-Level Developer** | 15% | 60% | 25% | **73%** | Mostly implementation and bug fixes |
769
+ | **Junior Developer** | 5% | 40% | 55% | **86%** | Simple features, tests, documentation |
770
+ | **QA Engineer** | 10% | 35% | 55% | **80%** | Test generation, reports, automation |
771
+ | **DevOps Engineer** | 20% | 50% | 30% | **69%** | Infrastructure planning + automation |
772
+
773
+ **See [Complete Cost Analysis](docs/cost-analysis/COST_SAVINGS_BY_ROLE_AND_PROVIDER.md) for provider comparisons (Anthropic vs OpenAI vs Ollama) and detailed calculations.**
630
774
 
631
775
  ### Level 3: Multi-Model Workflows
632
776
 
@@ -20,7 +20,7 @@ coach_wizards/refactoring_wizard.py,sha256=X0MTx3BHpOlOMAYDow-3HX5GyryY70JGAF5vA
20
20
  coach_wizards/scaling_wizard.py,sha256=n1RLtpWmj1RSEGSWssMiUPwCdpskO3z2Z3yhLlTdXro,2598
21
21
  coach_wizards/security_wizard.py,sha256=19SOClSxo6N-QqUc_QsFXOE7yEquiZF4kLi7jRomA7g,2605
22
22
  coach_wizards/testing_wizard.py,sha256=vKFgFG4uJfAVFmCIQbkrWNvZhIfLC6ve_XbvWZKrPg4,2563
23
- empathy_framework-3.7.1.dist-info/licenses/LICENSE,sha256=IJ9eeI5KSrD5P7alsn7sI_6_1bDihxBA5S4Sen4jf2k,4937
23
+ empathy_framework-3.8.1.dist-info/licenses/LICENSE,sha256=IJ9eeI5KSrD5P7alsn7sI_6_1bDihxBA5S4Sen4jf2k,4937
24
24
  empathy_healthcare_plugin/__init__.py,sha256=4NioL1_86UXzkd-QNkQZUSZ8rKTQGSP0TC9VXP32kQs,295
25
25
  empathy_healthcare_plugin/monitors/__init__.py,sha256=Udp8qfZR504QAq5_eQjvtIaE7v06Yguc7nuF40KllQc,196
26
26
  empathy_healthcare_plugin/monitors/clinical_protocol_monitor.py,sha256=GkNh2Yuw9cvuKuPh3mriWtKJZFq_sTxBD7Ci8lFV9gQ,11620
@@ -93,11 +93,11 @@ empathy_llm_toolkit/wizards/healthcare_wizard.py,sha256=zIdeXqS5jPTRFhUTi0MyPYqh
93
93
  empathy_llm_toolkit/wizards/patient_assessment_README.md,sha256=DInK_x7LgM8Qi9YSHgXtm7_sQupioJRf0M43_vml4ck,1586
94
94
  empathy_llm_toolkit/wizards/patient_assessment_wizard.py,sha256=dsvoOq0AYCBigmn6HPoaSBnBPk9YV7IzAFZkJYx1iZQ,5423
95
95
  empathy_llm_toolkit/wizards/technology_wizard.py,sha256=8hQirzzGQp7UVtj1hFCoaoLLtqAtx9HFf4mdUWV1xH0,7533
96
- empathy_os/__init__.py,sha256=MTqYiqWxdkAUY-KvT4nkQTMVwRpydF6OFdoQyj56qlE,7074
96
+ empathy_os/__init__.py,sha256=Wo1dH8xdalai4janV1Y8E7YMiAy4PsBEHgX4yeElj6w,7069
97
97
  empathy_os/agent_monitoring.py,sha256=s4seLC_J4AtQ3PYWrRPO8YHM-Fbm0Q36kPEdlTHf2HI,13375
98
98
  empathy_os/cli.py,sha256=6gTjhG35PoG1KmRmaW3nQW-lm52S3UhVhDtSR21jvKo,99330
99
99
  empathy_os/cli_unified.py,sha256=7W0WU-bL71P4GWRURTH513gAEKqCn3GLpfeWLnQlLS0,19376
100
- empathy_os/config.py,sha256=Ec2WJUZuDgNyyD00VCpsaGz2LbKdoJlgnrSFvEf7LsU,14706
100
+ empathy_os/config.py,sha256=J-0p-pm6-L-mWoLNYYI3PX0ZRBhtf3u8eT__r6lknrg,14785
101
101
  empathy_os/coordination.py,sha256=E2HvHxKk1xbYswtgxhnVKB6DRxfXUV5pCt-XWHOvNKM,28509
102
102
  empathy_os/core.py,sha256=PvrPs8JQPwQjPZZQj6zy6xQTFGHKziaeRxCIlIs3iHA,52951
103
103
  empathy_os/cost_tracker.py,sha256=Cd07k2ecpYH2Xq968Ed1YsyqvEXQ5Cn6RqoAENVEE_k,12524
@@ -119,6 +119,12 @@ empathy_os/wizard_factory_cli.py,sha256=BMOOTIKf9bAVOuvtHi_HgkIBqnLpLBCkuvA51kxV
119
119
  empathy_os/workflow_commands.py,sha256=PqzpsXVB7Vk2raMbSYTDICETb1PZJ4u7IXb7HQdKSYo,25174
120
120
  empathy_os/adaptive/__init__.py,sha256=slgz00rOK-9bTENJod7fu5mxNiAc8eOJaBN6hQmAnZo,477
121
121
  empathy_os/adaptive/task_complexity.py,sha256=XhQgubrTxqV2V6CBD4kKZ_0kPCjchzg94HaUFIY6Q3c,3783
122
+ empathy_os/cache/__init__.py,sha256=nKHyWnQmyBU6f24UOq5iZ4ULUFys5Dhl6dFZlU0jlHU,3232
123
+ empathy_os/cache/base.py,sha256=1waj2OZ_5gRhwYhSfsSfgQs1N_ftZcAq6UAV9RctSek,3968
124
+ empathy_os/cache/dependency_manager.py,sha256=Khwpi4LiT5xFp1Pka4XQHO0bLzWUG1ElpRQ_Zmup_iM,7614
125
+ empathy_os/cache/hash_only.py,sha256=CdyZvl81lbargRyVhWVBJB4VDUNay4KIf3knUmqvZCo,7427
126
+ empathy_os/cache/hybrid.py,sha256=d8R-r1IarW5tEKtjyBsFVSFKv_mGmn9jPhRUzqaST6o,12588
127
+ empathy_os/cache/storage.py,sha256=VrvvpAeQF29T-d6Ly6L-bqvu-XHFJRWICb-gfiC0Hqw,7659
122
128
  empathy_os/config/__init__.py,sha256=jkdGF6eWEqOayld3Xcl1tsk92YIXIBXI-QcecMUBbXE,1577
123
129
  empathy_os/config/xml_config.py,sha256=x_UQxkHWJpoLW1ZkpXHpRcxbpjaSFIHg_h-DbKc4JA4,7362
124
130
  empathy_os/dashboard/__init__.py,sha256=DG1UAaL9DoO-h7DbijgFSFmHJdfPKhKrsxtVWKBc3Nc,363
@@ -129,7 +135,7 @@ empathy_os/memory/config.py,sha256=EoBBZLgyY5sHT3kLeyWjyI4b6hRA3Xiaoo3xdMLLGos,6
129
135
  empathy_os/memory/control_panel.py,sha256=TNUOexd5kTE2ytCRgOZrZdPa8h9xHVvOZha97_85kkw,43201
130
136
  empathy_os/memory/edges.py,sha256=ljku2CYeetJ7qDnQUJobrPZdBaYFxkn_PfGSNa9sVDE,5891
131
137
  empathy_os/memory/graph.py,sha256=Up4PCG8BbK5fmGym7rNEPZsCNjS7TQ3_F9-S6-mS2L4,18820
132
- empathy_os/memory/long_term.py,sha256=BTxUk4bxLw_q-x5EpkKFUoLBzWzg4x6Rru5s8OzCIKY,40312
138
+ empathy_os/memory/long_term.py,sha256=GFKGpPWa4jwiO2Xz447JDuRi7Qz2jxvBH8GhBpUAlLQ,40543
133
139
  empathy_os/memory/nodes.py,sha256=lVLN9pp8cFSBrO2Nh8d0zTOglM3QBaYCtwfO5P8dryc,4752
134
140
  empathy_os/memory/redis_bootstrap.py,sha256=G71loEE2LTcn0BCp_mN96Bk37ygFz1ImORiLoEmg9jA,16508
135
141
  empathy_os/memory/short_term.py,sha256=OHOFh2Tvt20rNuEV1t4-WowJIh1FkyH99T3S89MZYvE,67051
@@ -193,7 +199,7 @@ empathy_os/trust/circuit_breaker.py,sha256=VMuVmH_lZ_RB0E-tjE150Qtbk7WnkLQXc-fp_
193
199
  empathy_os/validation/__init__.py,sha256=3Mi5H0uzEelapCdbsHskTDPGSTkQXLvxBpfFhrIWfKw,514
194
200
  empathy_os/validation/xml_validator.py,sha256=aOr6AZKfjBq-tOKzBSGEl5WU0Ca8S5ewz7Xvcwiy-UQ,8773
195
201
  empathy_os/workflows/__init__.py,sha256=UYXKvpkaLEAGqnMpUHrdR8TZiqmR8k47lpdEs2cs9B0,12022
196
- empathy_os/workflows/base.py,sha256=Yj7ZmheuMmNDIN3YMmuTpqKsffr0JQJ5wEh-I8OnYHY,52029
202
+ empathy_os/workflows/base.py,sha256=njeDLvZE5ZwH_tVzsVdrMGnrvD6fugA0ITiVOSSDKCQ,57556
197
203
  empathy_os/workflows/bug_predict.py,sha256=Td2XtawwTSqBOOIqlziNXcOt4YRMMeya2W1tFOJKytY,35726
198
204
  empathy_os/workflows/code_review.py,sha256=SWNXSuJ2v4to8sZiHSQ2Z06gVCJ10L1LQr77Jf1SUyM,35647
199
205
  empathy_os/workflows/code_review_adapters.py,sha256=9aGUDAgE1B1EUJ-Haz2Agwo4RAwY2aqHtNYKEbJq2yc,11065
@@ -205,20 +211,20 @@ empathy_os/workflows/documentation_orchestrator.py,sha256=sSzTsLJPkSlHcE_o-pfM-2
205
211
  empathy_os/workflows/health_check.py,sha256=1yxNPV4wpLNEoMlbTm_S9Dlbp7q4L3WZM5kL_1LKVDQ,24633
206
212
  empathy_os/workflows/manage_documentation.py,sha256=gknIse4MzTLxRowIAS07WSXNqWAoWCfxmoIJSbTYBNM,29419
207
213
  empathy_os/workflows/new_sample_workflow1.py,sha256=GBUdgGtjrA1h7Xrdt14GsWjcMYvRMIPgRDM1gZ7Qex0,3978
208
- empathy_os/workflows/new_sample_workflow1_README.md,sha256=ZaSvtRAIuYYFJsGp_l4lc9lIgTJUhsGvqP3qWUISA0A,2365
214
+ empathy_os/workflows/new_sample_workflow1_README.md,sha256=bzLyqukgaKilG1OnCdLsc5GNWsYEagI7mn3n80BPMHY,2366
209
215
  empathy_os/workflows/perf_audit.py,sha256=JzJjuU5WQAa7Eisdwmwz4CQTB5W5v-mphVJ_yhORM5A,25275
210
216
  empathy_os/workflows/pr_review.py,sha256=H6YukqPrHYA8uDmddISYy0PVp8axmaoeBeB6dZ0GpLE,26193
211
217
  empathy_os/workflows/progress.py,sha256=OOyTOOeO2OUL0dQxoU97gpSRivNurab6hmtfLHFnJzI,14531
212
218
  empathy_os/workflows/progress_server.py,sha256=aLDrcBuxqN9_mfJRcRLJypuB4uRMFq9u4C_li9HioOM,10233
213
- empathy_os/workflows/refactor_plan.py,sha256=YbuvnKMVPN0YgXQ7psPIJng0yko5oQJBwy5L6GO0EDY,25336
219
+ empathy_os/workflows/refactor_plan.py,sha256=G7VQ3Yx9DJWGS4EKJnaJEr10t16S9fNx0a3vRRiZiu0,25581
214
220
  empathy_os/workflows/release_prep.py,sha256=BTXDJpjHev8rSESStvxdRCNKHRjfnibhEaWBx1bMBi0,27553
215
221
  empathy_os/workflows/research_synthesis.py,sha256=7KozOOlFJADiVR1Y9LP3-DwwNG6JhRRnOLgCDGkdvSM,13967
216
222
  empathy_os/workflows/secure_release.py,sha256=IzJjaQau90-Q7So1E_Lx0uuechAqI_0D2U0RwO3uilg,20966
217
223
  empathy_os/workflows/security_adapters.py,sha256=UX4vRQwuSmA_V8t5sh5BGzT4RRkCk6lt6jy_VkhqJ1s,10945
218
- empathy_os/workflows/security_audit.py,sha256=ovGKbSvnCZsr7Huhc2clu0f7qLeEKop4lfKm7CEqx_A,40119
224
+ empathy_os/workflows/security_audit.py,sha256=LtGel4Tu5Nc3QGRr69w-UjMONmcVyXg8Noqhj4N1Ssk,40039
219
225
  empathy_os/workflows/step_config.py,sha256=CdRNAQ1SiPsuAl10s58ioyk5w8XCArecSS9AjyHWQJM,7213
220
226
  empathy_os/workflows/test5.py,sha256=6o_sFk4dAIyOIVY9nDilgQWaJIGjl551wzphbcnXwTI,3767
221
- empathy_os/workflows/test5_README.md,sha256=SvCm0jOqe-3aqZXsMWHSr56SKukH5l7SbULlqVctG2s,2405
227
+ empathy_os/workflows/test5_README.md,sha256=bnYhbwyNVGN0dbIcnAUhEJbwSf4cE-UAkD09p_gvThc,2406
222
228
  empathy_os/workflows/test_gen.py,sha256=u9cwbavrrS_2XPJHjjwrXFkEF55i3cpwwqIO1Epgoao,70563
223
229
  empathy_os/workflows/test_lifecycle.py,sha256=c6UDSd6kOQdCHmaJviwAnUVceVQuSdLNQ9eKbVooiMY,16890
224
230
  empathy_os/workflows/test_maintenance.py,sha256=jiMeYX7Qg3CnRU5xW8LuOXnARxV7uqfygDKkIsEgL0s,22941
@@ -278,7 +284,7 @@ empathy_software_plugin/wizards/testing/coverage_analyzer.py,sha256=k37_GJL895OK
278
284
  empathy_software_plugin/wizards/testing/quality_analyzer.py,sha256=dvpc0ryaFXafKeKws9dEUbkFk0fj1U2FIIRCzD4vAnA,17744
279
285
  empathy_software_plugin/wizards/testing/test_suggester.py,sha256=EVq1AWD2vRI6Jd9vSQu8NrVnOOyit2oOx4Xyw9_4eg8,17873
280
286
  hot_reload/README.md,sha256=FnQWSX-dfTIJvXHjjiWRWBjMK2DR-eyOfDHJlGIzm0o,10406
281
- hot_reload/__init__.py,sha256=oZfpbPB9T8W44XJ-DkOVBBzXv8-qo8Rgg34J-ZeUMbM,1688
287
+ hot_reload/__init__.py,sha256=RZ4Ow7Than1w-FWkqOqkDLpBwUn-AUidv5dKyfcSCI0,1628
282
288
  hot_reload/config.py,sha256=y6H36dmkTBmc6rbC8hnlK_QrmIEGFBnf-xxpCme70S8,2322
283
289
  hot_reload/integration.py,sha256=R-gRov0aqbnyAOE4Zcy-L8mPkFK4-riUy2kIyjBk8-w,6665
284
290
  hot_reload/reloader.py,sha256=akqz5t9nn5kc-yj1Dj2NuyPO1p1cWaAhUf-CVlr0XRE,9365
@@ -315,13 +321,13 @@ workflow_patterns/behavior.py,sha256=faYRxqTqRliFBqYP08OptBsJga21n9D8BHAs8B3HK18
315
321
  workflow_patterns/core.py,sha256=H52xnB4IqMdzJpOoySk5s7XcuKNTsqAt1RKpbdz_cyw,2607
316
322
  workflow_patterns/output.py,sha256=ArpR4D_z5MtRlWCKlKUmSWfXlMw-nkBukM3bYM98-XA,3106
317
323
  workflow_patterns/registry.py,sha256=0U_XT0hdQ5fLHuEJlrvzjaCBUyeWDA675_hEyvHxT0o,7461
318
- workflow_patterns/structural.py,sha256=rj74ZwG9cg--bE0YkiH_YtsDiRYLi122xY_NANMOXWY,9470
324
+ workflow_patterns/structural.py,sha256=nDwWuZYnXbm21Gsr0yMoMQiOcU07VuNlpsUGPyZ2efk,9470
319
325
  workflow_scaffolding/__init__.py,sha256=UpX5vjjjPjIaAKyIV1D4GxJzLUZy5DzdzgSkePYMES0,222
320
326
  workflow_scaffolding/__main__.py,sha256=0qspuNoadTDqyskXTlT8Sahqau-XIxN35NHTSGVW6z4,236
321
327
  workflow_scaffolding/cli.py,sha256=R4rCTDENRMil2c3v32MnisqteFRDfilS6RHBNlYV39Q,6752
322
328
  workflow_scaffolding/generator.py,sha256=whWbBmWEA0rN3M3X9EzTjfbwBxHcF40Jin8-nbj0S0E,8858
323
- empathy_framework-3.7.1.dist-info/METADATA,sha256=xkw8apOjV9Naj5JOtvdf70YtMwfev78EDW8KyJqIAJQ,40989
324
- empathy_framework-3.7.1.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
325
- empathy_framework-3.7.1.dist-info/entry_points.txt,sha256=zMu7sKCiLndbEEXjTecltS-1P_JZoEUKrifuRBBbroc,1268
326
- empathy_framework-3.7.1.dist-info/top_level.txt,sha256=KmPrj9gMAqXeKLPLp3VEAPDTNDE-LQjET3Ew3LU8sbs,180
327
- empathy_framework-3.7.1.dist-info/RECORD,,
329
+ empathy_framework-3.8.1.dist-info/METADATA,sha256=Wb0_Me2BAYnFSuXbddG0mg4ksxR27aNYwbifJ8eWpks,46740
330
+ empathy_framework-3.8.1.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
331
+ empathy_framework-3.8.1.dist-info/entry_points.txt,sha256=zMu7sKCiLndbEEXjTecltS-1P_JZoEUKrifuRBBbroc,1268
332
+ empathy_framework-3.8.1.dist-info/top_level.txt,sha256=KmPrj9gMAqXeKLPLp3VEAPDTNDE-LQjET3Ew3LU8sbs,180
333
+ empathy_framework-3.8.1.dist-info/RECORD,,
empathy_os/__init__.py CHANGED
@@ -55,7 +55,7 @@ Copyright 2025 Smart AI Memory, LLC
55
55
  Licensed under Fair Source 0.9
56
56
  """
57
57
 
58
- __version__ = "1.0.0-beta"
58
+ __version__ = "3.8.1"
59
59
  __author__ = "Patrick Roebuck"
60
60
  __email__ = "hello@deepstudy.ai"
61
61
 
@@ -0,0 +1,117 @@
1
+ """Response caching for Empathy Framework workflows.
2
+
3
+ Provides hybrid hash + semantic similarity caching to reduce API costs by 70%.
4
+
5
+ Usage:
6
+ from empathy_os.cache import create_cache
7
+
8
+ # Auto-detect best cache (hybrid if deps available, hash-only otherwise)
9
+ cache = create_cache()
10
+
11
+ # Manual cache selection
12
+ from empathy_os.cache import HashOnlyCache, HybridCache
13
+
14
+ cache = HashOnlyCache() # Always available
15
+ cache = HybridCache() # Requires sentence-transformers
16
+
17
+ Copyright 2025 Smart-AI-Memory
18
+ Licensed under Fair Source License 0.9
19
+ """
20
+
21
+ import logging
22
+ from typing import Optional
23
+
24
+ from .base import BaseCache, CacheEntry, CacheStats
25
+ from .hash_only import HashOnlyCache
26
+
27
+ logger = logging.getLogger(__name__)
28
+
29
+ # Try to import HybridCache (requires optional dependencies)
30
+ try:
31
+ from .hybrid import HybridCache
32
+
33
+ HYBRID_AVAILABLE = True
34
+ except ImportError:
35
+ HYBRID_AVAILABLE = False
36
+ HybridCache = None # type: ignore
37
+
38
+
39
+ def create_cache(
40
+ cache_type: str | None = None,
41
+ **kwargs,
42
+ ) -> BaseCache:
43
+ """Create appropriate cache based on available dependencies.
44
+
45
+ Auto-detects if sentence-transformers is available and creates
46
+ HybridCache if possible, otherwise falls back to HashOnlyCache.
47
+
48
+ Args:
49
+ cache_type: Force specific cache type ("hash" | "hybrid" | None for auto).
50
+ **kwargs: Additional arguments passed to cache constructor.
51
+
52
+ Returns:
53
+ BaseCache instance (HybridCache or HashOnlyCache).
54
+
55
+ Example:
56
+ # Auto-detect (recommended)
57
+ cache = create_cache()
58
+
59
+ # Force hash-only
60
+ cache = create_cache(cache_type="hash")
61
+
62
+ # Force hybrid (raises ImportError if deps missing)
63
+ cache = create_cache(cache_type="hybrid")
64
+
65
+ """
66
+ # Force hash-only
67
+ if cache_type == "hash":
68
+ logger.info("Using hash-only cache (explicit)")
69
+ return HashOnlyCache(**kwargs)
70
+
71
+ # Force hybrid
72
+ if cache_type == "hybrid":
73
+ if not HYBRID_AVAILABLE:
74
+ raise ImportError(
75
+ "HybridCache requires sentence-transformers. "
76
+ "Install with: pip install empathy-framework[cache]"
77
+ )
78
+ logger.info("Using hybrid cache (explicit)")
79
+ return HybridCache(**kwargs)
80
+
81
+ # Auto-detect (default)
82
+ if HYBRID_AVAILABLE:
83
+ logger.info("Using hybrid cache (auto-detected)")
84
+ return HybridCache(**kwargs)
85
+ else:
86
+ logger.info(
87
+ "Using hash-only cache (sentence-transformers not available). "
88
+ "For 70% cost savings, install with: pip install empathy-framework[cache]"
89
+ )
90
+ return HashOnlyCache(**kwargs)
91
+
92
+
93
+ def auto_setup_cache() -> None:
94
+ """Auto-setup cache with one-time prompt if dependencies missing.
95
+
96
+ Called automatically by BaseWorkflow on first run.
97
+ Prompts user to install cache dependencies if not available.
98
+
99
+ """
100
+ from .dependency_manager import DependencyManager
101
+
102
+ manager = DependencyManager()
103
+
104
+ if manager.should_prompt_cache_install():
105
+ manager.prompt_cache_install()
106
+
107
+
108
+ __all__ = [
109
+ "BaseCache",
110
+ "CacheEntry",
111
+ "CacheStats",
112
+ "HashOnlyCache",
113
+ "HybridCache",
114
+ "create_cache",
115
+ "auto_setup_cache",
116
+ "HYBRID_AVAILABLE",
117
+ ]
@@ -0,0 +1,166 @@
1
+ """Base cache interface for Empathy Framework response caching.
2
+
3
+ Copyright 2025 Smart-AI-Memory
4
+ Licensed under Fair Source License 0.9
5
+ """
6
+
7
+ from abc import ABC, abstractmethod
8
+ from dataclasses import dataclass
9
+ from typing import Any
10
+
11
+
12
+ @dataclass
13
+ class CacheEntry:
14
+ """Cached LLM response with metadata."""
15
+
16
+ key: str
17
+ response: Any
18
+ workflow: str
19
+ stage: str
20
+ model: str
21
+ prompt_hash: str
22
+ timestamp: float
23
+ ttl: int | None = None # Time-to-live in seconds
24
+
25
+ def is_expired(self, current_time: float) -> bool:
26
+ """Check if entry has expired."""
27
+ if self.ttl is None:
28
+ return False
29
+ return (current_time - self.timestamp) > self.ttl
30
+
31
+
32
+ @dataclass
33
+ class CacheStats:
34
+ """Cache hit/miss statistics."""
35
+
36
+ hits: int = 0
37
+ misses: int = 0
38
+ evictions: int = 0
39
+
40
+ @property
41
+ def total(self) -> int:
42
+ """Total cache lookups."""
43
+ return self.hits + self.misses
44
+
45
+ @property
46
+ def hit_rate(self) -> float:
47
+ """Cache hit rate percentage."""
48
+ if self.total == 0:
49
+ return 0.0
50
+ return (self.hits / self.total) * 100.0
51
+
52
+ def to_dict(self) -> dict[str, Any]:
53
+ """Convert to dictionary for logging."""
54
+ return {
55
+ "hits": self.hits,
56
+ "misses": self.misses,
57
+ "evictions": self.evictions,
58
+ "total": self.total,
59
+ "hit_rate": round(self.hit_rate, 1),
60
+ }
61
+
62
+
63
+ class BaseCache(ABC):
64
+ """Abstract base class for LLM response caching."""
65
+
66
+ def __init__(self, max_size_mb: int = 500, default_ttl: int = 86400):
67
+ """Initialize cache.
68
+
69
+ Args:
70
+ max_size_mb: Maximum cache size in megabytes.
71
+ default_ttl: Default time-to-live in seconds (default: 24 hours).
72
+
73
+ """
74
+ self.max_size_mb = max_size_mb
75
+ self.default_ttl = default_ttl
76
+ self.stats = CacheStats()
77
+
78
+ @abstractmethod
79
+ def get(
80
+ self,
81
+ workflow: str,
82
+ stage: str,
83
+ prompt: str,
84
+ model: str,
85
+ ) -> Any | None:
86
+ """Get cached response.
87
+
88
+ Args:
89
+ workflow: Workflow name (e.g., "code-review").
90
+ stage: Stage name (e.g., "scan").
91
+ prompt: Prompt text.
92
+ model: Model identifier (e.g., "claude-3-5-sonnet-20241022").
93
+
94
+ Returns:
95
+ Cached response if found, None otherwise.
96
+
97
+ """
98
+ pass
99
+
100
+ @abstractmethod
101
+ def put(
102
+ self,
103
+ workflow: str,
104
+ stage: str,
105
+ prompt: str,
106
+ model: str,
107
+ response: Any,
108
+ ttl: int | None = None,
109
+ ) -> None:
110
+ """Store response in cache.
111
+
112
+ Args:
113
+ workflow: Workflow name.
114
+ stage: Stage name.
115
+ prompt: Prompt text.
116
+ model: Model identifier.
117
+ response: LLM response to cache.
118
+ ttl: Optional custom TTL (uses default if None).
119
+
120
+ """
121
+ pass
122
+
123
+ @abstractmethod
124
+ def clear(self) -> None:
125
+ """Clear all cached entries."""
126
+ pass
127
+
128
+ @abstractmethod
129
+ def get_stats(self) -> CacheStats:
130
+ """Get cache statistics.
131
+
132
+ Returns:
133
+ CacheStats with hit/miss counts.
134
+
135
+ """
136
+ pass
137
+
138
+ def _create_cache_key(
139
+ self,
140
+ workflow: str,
141
+ stage: str,
142
+ prompt: str,
143
+ model: str,
144
+ ) -> str:
145
+ """Create cache key from workflow, stage, prompt, and model.
146
+
147
+ Uses SHA256 hash of concatenated values.
148
+
149
+ Args:
150
+ workflow: Workflow name.
151
+ stage: Stage name.
152
+ prompt: Prompt text.
153
+ model: Model identifier.
154
+
155
+ Returns:
156
+ Cache key (SHA256 hash).
157
+
158
+ """
159
+ import hashlib
160
+
161
+ # Combine all inputs for cache key
162
+ key_parts = [workflow, stage, prompt, model]
163
+ key_string = "|".join(key_parts)
164
+
165
+ # SHA256 hash for consistent key length
166
+ return hashlib.sha256(key_string.encode()).hexdigest()