adaptive-memory-multi-model-router 1.2.2 → 1.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (195)
  1. package/LICENSE +21 -0
  2. package/README.md +146 -66
  3. package/dist/index.d.ts +1 -1
  4. package/dist/index.js +1 -1
  5. package/dist/integrations/airtable.js +20 -0
  6. package/dist/integrations/discord.js +18 -0
  7. package/dist/integrations/github.js +23 -0
  8. package/dist/integrations/gmail.js +19 -0
  9. package/dist/integrations/google-calendar.js +18 -0
  10. package/dist/integrations/index.js +61 -0
  11. package/dist/integrations/jira.js +21 -0
  12. package/dist/integrations/linear.js +19 -0
  13. package/dist/integrations/notion.js +19 -0
  14. package/dist/integrations/slack.js +18 -0
  15. package/dist/integrations/telegram.js +19 -0
  16. package/dist/providers/registry.js +7 -3
  17. package/docs/ARCHITECTURAL-IMPROVEMENTS-2025.md +1391 -0
  18. package/docs/ARCHITECTURAL-IMPROVEMENTS-REVISED-2025.md +1051 -0
  19. package/docs/CONFIGURATION.md +476 -0
  20. package/docs/COUNCIL_DECISION.json +308 -0
  21. package/docs/COUNCIL_SUMMARY.md +265 -0
  22. package/docs/COUNCIL_V2.2_DECISION.md +416 -0
  23. package/docs/IMPROVEMENT_ROADMAP.md +515 -0
  24. package/docs/LLM_COUNCIL_DECISION.md +508 -0
  25. package/docs/QUICK_START_VISIBILITY.md +782 -0
  26. package/docs/REDDIT_GAP_ANALYSIS.md +299 -0
  27. package/docs/RESEARCH_BACKED_IMPROVEMENTS.md +1180 -0
  28. package/docs/TMLPD_QNA.md +751 -0
  29. package/docs/TMLPD_V2.1_COMPLETE.md +763 -0
  30. package/docs/TMLPD_V2.2_RESEARCH_ROADMAP.md +754 -0
  31. package/docs/V2.2_IMPLEMENTATION_COMPLETE.md +446 -0
  32. package/docs/V2_IMPLEMENTATION_GUIDE.md +388 -0
  33. package/docs/VISIBILITY_ADOPTION_PLAN.md +1005 -0
  34. package/docs/launch-content/LAUNCH_EXECUTION_CHECKLIST.md +421 -0
  35. package/docs/launch-content/README.md +457 -0
  36. package/docs/launch-content/assets/cost_comparison_100_tasks.png +0 -0
  37. package/docs/launch-content/assets/cumulative_savings.png +0 -0
  38. package/docs/launch-content/assets/parallel_speedup.png +0 -0
  39. package/docs/launch-content/assets/provider_pricing_comparison.png +0 -0
  40. package/docs/launch-content/assets/task_breakdown_comparison.png +0 -0
  41. package/docs/launch-content/generate_charts.py +313 -0
  42. package/docs/launch-content/hn_show_post.md +139 -0
  43. package/docs/launch-content/partner_outreach_templates.md +745 -0
  44. package/docs/launch-content/reddit_posts.md +467 -0
  45. package/docs/launch-content/twitter_thread.txt +460 -0
  46. package/examples/QUICKSTART.md +1 -1
  47. package/openclaw-alexa-bridge/ALL_REMAINING_FIXES_PLAN.md +313 -0
  48. package/openclaw-alexa-bridge/REMAINING_FIXES_SUMMARY.md +277 -0
  49. package/openclaw-alexa-bridge/src/alexa_handler_no_tmlpd.js +1234 -0
  50. package/openclaw-alexa-bridge/test_fixes.js +77 -0
  51. package/package.json +120 -29
  52. package/package.json.tmp +0 -0
  53. package/qna/TMLPD_QNA.md +3 -3
  54. package/skill/SKILL.md +2 -2
  55. package/src/__tests__/integration/tmpld_integration.test.py +540 -0
  56. package/src/agents/skill_enhanced_agent.py +318 -0
  57. package/src/memory/__init__.py +15 -0
  58. package/src/memory/agentic_memory.py +353 -0
  59. package/src/memory/semantic_memory.py +444 -0
  60. package/src/memory/simple_memory.py +466 -0
  61. package/src/memory/working_memory.py +447 -0
  62. package/src/orchestration/__init__.py +52 -0
  63. package/src/orchestration/execution_engine.py +353 -0
  64. package/src/orchestration/halo_orchestrator.py +367 -0
  65. package/src/orchestration/mcts_workflow.py +498 -0
  66. package/src/orchestration/role_assigner.py +473 -0
  67. package/src/orchestration/task_planner.py +522 -0
  68. package/src/providers/__init__.py +67 -0
  69. package/src/providers/anthropic.py +304 -0
  70. package/src/providers/base.py +241 -0
  71. package/src/providers/cerebras.py +373 -0
  72. package/src/providers/registry.py +476 -0
  73. package/src/routing/__init__.py +30 -0
  74. package/src/routing/universal_router.py +621 -0
  75. package/src/skills/TMLPD-QUICKREF.md +210 -0
  76. package/src/skills/TMLPD-SETUP-SUMMARY.md +157 -0
  77. package/src/skills/TMLPD.md +540 -0
  78. package/src/skills/__tests__/skill_manager.test.ts +328 -0
  79. package/src/skills/skill_manager.py +385 -0
  80. package/src/skills/test-tmlpd.sh +108 -0
  81. package/src/skills/tmlpd-category.yaml +67 -0
  82. package/src/skills/tmlpd-monitoring.yaml +188 -0
  83. package/src/skills/tmlpd-phase.yaml +132 -0
  84. package/src/state/__init__.py +17 -0
  85. package/src/state/simple_checkpoint.py +508 -0
  86. package/src/tmlpd_agent.py +464 -0
  87. package/src/tmpld_v2.py +427 -0
  88. package/src/workflows/__init__.py +18 -0
  89. package/src/workflows/advanced_difficulty_classifier.py +377 -0
  90. package/src/workflows/chaining_executor.py +417 -0
  91. package/src/workflows/difficulty_integration.py +209 -0
  92. package/src/workflows/orchestrator.py +469 -0
  93. package/src/workflows/orchestrator_executor.py +456 -0
  94. package/src/workflows/parallelization_executor.py +382 -0
  95. package/src/workflows/router.py +311 -0
  96. package/test_integration_simple.py +86 -0
  97. package/test_mcts_workflow.py +150 -0
  98. package/test_templd_integration.py +262 -0
  99. package/test_universal_router.py +275 -0
  100. package/tmlpd-pi-extension/README.md +36 -0
  101. package/tmlpd-pi-extension/dist/cache/prefixCache.d.ts +114 -0
  102. package/tmlpd-pi-extension/dist/cache/prefixCache.d.ts.map +1 -0
  103. package/tmlpd-pi-extension/dist/cache/prefixCache.js +285 -0
  104. package/tmlpd-pi-extension/dist/cache/prefixCache.js.map +1 -0
  105. package/tmlpd-pi-extension/dist/cache/responseCache.d.ts +58 -0
  106. package/tmlpd-pi-extension/dist/cache/responseCache.d.ts.map +1 -0
  107. package/tmlpd-pi-extension/dist/cache/responseCache.js +153 -0
  108. package/tmlpd-pi-extension/dist/cache/responseCache.js.map +1 -0
  109. package/tmlpd-pi-extension/dist/cli.js +59 -0
  110. package/tmlpd-pi-extension/dist/cost/costTracker.d.ts +95 -0
  111. package/tmlpd-pi-extension/dist/cost/costTracker.d.ts.map +1 -0
  112. package/tmlpd-pi-extension/dist/cost/costTracker.js +240 -0
  113. package/tmlpd-pi-extension/dist/cost/costTracker.js.map +1 -0
  114. package/tmlpd-pi-extension/dist/index.d.ts +723 -0
  115. package/tmlpd-pi-extension/dist/index.d.ts.map +1 -0
  116. package/tmlpd-pi-extension/dist/index.js +239 -0
  117. package/tmlpd-pi-extension/dist/index.js.map +1 -0
  118. package/tmlpd-pi-extension/dist/memory/episodicMemory.d.ts +82 -0
  119. package/tmlpd-pi-extension/dist/memory/episodicMemory.d.ts.map +1 -0
  120. package/tmlpd-pi-extension/dist/memory/episodicMemory.js +145 -0
  121. package/tmlpd-pi-extension/dist/memory/episodicMemory.js.map +1 -0
  122. package/tmlpd-pi-extension/dist/orchestration/haloOrchestrator.d.ts +102 -0
  123. package/tmlpd-pi-extension/dist/orchestration/haloOrchestrator.d.ts.map +1 -0
  124. package/tmlpd-pi-extension/dist/orchestration/haloOrchestrator.js +207 -0
  125. package/tmlpd-pi-extension/dist/orchestration/haloOrchestrator.js.map +1 -0
  126. package/tmlpd-pi-extension/dist/orchestration/mctsWorkflow.d.ts +85 -0
  127. package/tmlpd-pi-extension/dist/orchestration/mctsWorkflow.d.ts.map +1 -0
  128. package/tmlpd-pi-extension/dist/orchestration/mctsWorkflow.js +210 -0
  129. package/tmlpd-pi-extension/dist/orchestration/mctsWorkflow.js.map +1 -0
  130. package/tmlpd-pi-extension/dist/providers/localProvider.d.ts +102 -0
  131. package/tmlpd-pi-extension/dist/providers/localProvider.d.ts.map +1 -0
  132. package/tmlpd-pi-extension/dist/providers/localProvider.js +338 -0
  133. package/tmlpd-pi-extension/dist/providers/localProvider.js.map +1 -0
  134. package/tmlpd-pi-extension/dist/providers/registry.d.ts +55 -0
  135. package/tmlpd-pi-extension/dist/providers/registry.d.ts.map +1 -0
  136. package/tmlpd-pi-extension/dist/providers/registry.js +138 -0
  137. package/tmlpd-pi-extension/dist/providers/registry.js.map +1 -0
  138. package/tmlpd-pi-extension/dist/routing/advancedRouter.d.ts +68 -0
  139. package/tmlpd-pi-extension/dist/routing/advancedRouter.d.ts.map +1 -0
  140. package/tmlpd-pi-extension/dist/routing/advancedRouter.js +332 -0
  141. package/tmlpd-pi-extension/dist/routing/advancedRouter.js.map +1 -0
  142. package/tmlpd-pi-extension/dist/tools/tmlpdTools.d.ts +101 -0
  143. package/tmlpd-pi-extension/dist/tools/tmlpdTools.d.ts.map +1 -0
  144. package/tmlpd-pi-extension/dist/tools/tmlpdTools.js +368 -0
  145. package/tmlpd-pi-extension/dist/tools/tmlpdTools.js.map +1 -0
  146. package/tmlpd-pi-extension/dist/utils/batchProcessor.d.ts +96 -0
  147. package/tmlpd-pi-extension/dist/utils/batchProcessor.d.ts.map +1 -0
  148. package/tmlpd-pi-extension/dist/utils/batchProcessor.js +170 -0
  149. package/tmlpd-pi-extension/dist/utils/batchProcessor.js.map +1 -0
  150. package/tmlpd-pi-extension/dist/utils/compression.d.ts +61 -0
  151. package/tmlpd-pi-extension/dist/utils/compression.d.ts.map +1 -0
  152. package/tmlpd-pi-extension/dist/utils/compression.js +281 -0
  153. package/tmlpd-pi-extension/dist/utils/compression.js.map +1 -0
  154. package/tmlpd-pi-extension/dist/utils/reliability.d.ts +74 -0
  155. package/tmlpd-pi-extension/dist/utils/reliability.d.ts.map +1 -0
  156. package/tmlpd-pi-extension/dist/utils/reliability.js +177 -0
  157. package/tmlpd-pi-extension/dist/utils/reliability.js.map +1 -0
  158. package/tmlpd-pi-extension/dist/utils/speculativeDecoding.d.ts +117 -0
  159. package/tmlpd-pi-extension/dist/utils/speculativeDecoding.d.ts.map +1 -0
  160. package/tmlpd-pi-extension/dist/utils/speculativeDecoding.js +246 -0
  161. package/tmlpd-pi-extension/dist/utils/speculativeDecoding.js.map +1 -0
  162. package/tmlpd-pi-extension/dist/utils/tokenUtils.d.ts +50 -0
  163. package/tmlpd-pi-extension/dist/utils/tokenUtils.d.ts.map +1 -0
  164. package/tmlpd-pi-extension/dist/utils/tokenUtils.js +124 -0
  165. package/tmlpd-pi-extension/dist/utils/tokenUtils.js.map +1 -0
  166. package/tmlpd-pi-extension/examples/QUICKSTART.md +183 -0
  167. package/tmlpd-pi-extension/package-lock.json +75 -0
  168. package/tmlpd-pi-extension/package.json +172 -0
  169. package/tmlpd-pi-extension/python/examples.py +53 -0
  170. package/tmlpd-pi-extension/python/integrations.py +330 -0
  171. package/tmlpd-pi-extension/python/setup.py +28 -0
  172. package/tmlpd-pi-extension/python/tmlpd.py +369 -0
  173. package/tmlpd-pi-extension/qna/REDDIT_GAP_ANALYSIS.md +299 -0
  174. package/tmlpd-pi-extension/qna/TMLPD_QNA.md +751 -0
  175. package/tmlpd-pi-extension/skill/SKILL.md +238 -0
  176. package/{src → tmlpd-pi-extension/src}/index.ts +1 -1
  177. package/tmlpd-pi-extension/tsconfig.json +18 -0
  178. package/demo/research-demo.js +0 -266
  179. package/notebooks/quickstart.ipynb +0 -157
  180. package/rust/tmlpd.h +0 -268
  181. package/src/cache/prefixCache.ts +0 -365
  182. package/src/routing/advancedRouter.ts +0 -406
  183. package/src/utils/speculativeDecoding.ts +0 -344
  184. /package/{src → tmlpd-pi-extension/src}/cache/responseCache.ts +0 -0
  185. /package/{src → tmlpd-pi-extension/src}/cost/costTracker.ts +0 -0
  186. /package/{src → tmlpd-pi-extension/src}/memory/episodicMemory.ts +0 -0
  187. /package/{src → tmlpd-pi-extension/src}/orchestration/haloOrchestrator.ts +0 -0
  188. /package/{src → tmlpd-pi-extension/src}/orchestration/mctsWorkflow.ts +0 -0
  189. /package/{src → tmlpd-pi-extension/src}/providers/localProvider.ts +0 -0
  190. /package/{src → tmlpd-pi-extension/src}/providers/registry.ts +0 -0
  191. /package/{src → tmlpd-pi-extension/src}/tools/tmlpdTools.ts +0 -0
  192. /package/{src → tmlpd-pi-extension/src}/utils/batchProcessor.ts +0 -0
  193. /package/{src → tmlpd-pi-extension/src}/utils/compression.ts +0 -0
  194. /package/{src → tmlpd-pi-extension/src}/utils/reliability.ts +0 -0
  195. /package/{src → tmlpd-pi-extension/src}/utils/tokenUtils.ts +0 -0
@@ -0,0 +1,751 @@
# TMLPD PI - Q&A for Common LLM Issues

> Comprehensive answers to frequent LLM parallel processing problems.
> Each Q&A maps to a TMLPD PI feature or capability.

---

## Table of Contents

1. [Parallel Processing Issues](#1-parallel-processing-issues)
2. [Cost & Budget Issues](#2-cost--budget-issues)
3. [Reliability & Fallback Issues](#3-reliability--fallback-issues)
4. [Caching Issues](#4-caching-issues)
5. [Model Routing Issues](#5-model-routing-issues)
6. [Context & Memory Issues](#6-context--memory-issues)
7. [Framework Integration Issues](#7-framework-integration-issues)
8. [Future Capabilities](#8-future-capabilities)

---

## 1. Parallel Processing Issues

### Q1: "How do I run multiple LLM providers in parallel?"

```python
import asyncio

from tmlpd import TMLPDClient

async def parallel_example():
    client = TMLPDClient()

    # Execute across 3 providers simultaneously
    result = await client.execute_parallel(
        prompt="Explain quantum entanglement",
        models=["openai/gpt-4o", "anthropic/claude-3.5-sonnet", "google/gemini-2.0-flash"]
    )

    print(f"Got {result.successful_models}/{result.total_models} responses")
    for resp in result.responses:
        print(f"  {resp.provider}: {resp.content[:50]}...")
```

**TMLPD Feature:** `execute_parallel()` with automatic provider coordination

---

### Q2: "How can I compare responses from different models?"

```python
# Compare responses side-by-side
result = await client.execute_parallel(
    prompt="Write a Python async decorator with retry logic",
    models=["openai/gpt-4o", "anthropic/claude-3.5-sonnet", "codex/codex"]
)

for r in result.responses:
    print(f"\n=== {r.model} ({r.provider}) - ${r.cost:.4f} ===")
    print(r.content[:500])
```

**TMLPD Feature:** Multi-model comparison with cost/metadata tracking

---

### Q3: "How do I limit concurrent requests to avoid rate limits?"

```typescript
const tmlpd = createTMLPD({
  maxConcurrent: 3 // Limit parallel executions
});

// HALO orchestration with concurrency control
const halo = new HALOOrchestrator({
  maxConcurrent: 3,
  enableMCTS: true
});
```

**TMLPD Feature:** `maxConcurrent` config + HALO concurrency management

---

### Q4: "How do I handle streaming responses from multiple models?"

```typescript
// Streaming parallel execution
const results = await tmlpd.executeParallel({
  prompt: "Write a long technical explanation",
  models: ["gpt-4o", "claude"],
  streaming: true
});

// Results stream in as they arrive
for await (const chunk of results.stream) {
  console.log(`${chunk.model}: ${chunk.delta}`);
}
```

**TMLPD Feature:** Built-in streaming support per provider

---

## 2. Cost & Budget Issues

### Q5: "How do I track LLM costs in real-time?"

```python
# Real-time cost tracking
summary = await client.get_cost_summary()

print(f"Total spent: ${summary.total_cost:.4f}")
print(f"By provider: {summary.by_provider}")
print(f"Daily: {summary.daily_costs}")
print(f"Requests: {summary.request_count}")
print(f"Avg cost: ${summary.average_cost_per_request:.6f}")
```

**TMLPD Feature:** `getCostSummary()` with provider/daily breakdown

---

### Q6: "How do I set daily/monthly budgets?"

```typescript
const tmlpd = createTMLPD({
  budget: {
    daily_limit: 10.00,   // $10/day max
    monthly_limit: 100.00 // $100/month max
  }
});

// Check budget anytime
const budget = await tmlpd.getBudget();
// { daily: { used: 2.50, limit: 10.00, remaining: 7.50 }, ... }
```

**TMLPD Feature:** Budget limits + `getBudget()` monitoring

---

### Q7: "How do I route to cheaper models for simple tasks?"

```python
from tmlpd import TMLPDLite, TaskType

lite = TMLPDLite()

# Auto-classify and route optimally
prompt = "What is 2+2?"
task_type = lite.classify_task(prompt)  # TaskType.FAST

# FAST tasks route to: gemini, claude-haiku, codex
models = lite.get_optimal_models(task_type, 2)

# Use the cheapest model for simple tasks
result = await client.execute(prompt, model=models[0])
# Routes to gemini-flash (~$0.0001) instead of gpt-4o (~$0.01)
```

**TMLPD Feature:** Task classification → optimal model routing

---

### Q8: "How do I avoid overspending on failed requests?"

```python
# Circuit breaker prevents cascade failures
config = TMLPDConfig(
    retry_max_attempts=3,
    retry_base_delay_ms=500,
    retry_jitter=0.3,  # Random jitter prevents thundering herd
    max_concurrent=5
)

# Failed requests don't count against the budget during a circuit break
summary = await client.get_cost_summary()
print(f"Failed requests cost: ${summary.failed_request_cost}")
```

**TMLPD Feature:** Circuit breakers + retry cost control
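The circuit-breaker idea above can be sketched as a small state machine: after a threshold of consecutive failures the breaker "opens" and calls are skipped (so they cost nothing) until a cooldown elapses. This is an illustrative sketch only, not the package's implementation; `CircuitBreaker`, `threshold`, and `cooldown` are hypothetical names.

```python
import time

class CircuitBreaker:
    """Open after `threshold` consecutive failures; probe again after `cooldown` seconds."""

    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True  # closed: normal operation
        if time.monotonic() - self.opened_at >= self.cooldown:
            self.opened_at = None  # half-open: let one probe request through
            self.failures = self.threshold - 1
            return True
        return False  # open: skip the call, spend nothing

    def record(self, success: bool) -> None:
        if success:
            self.failures = 0
            self.opened_at = None
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
```

Wrap each provider call in `allow()`/`record()` so repeated failures stop burning budget on a provider that is down.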

---

## 3. Reliability & Fallback Issues

### Q9: "How do I handle provider outages gracefully?"

```typescript
// Automatic fallback when the primary fails
const result = await tmlpd.executeParallel({
  prompt: "Critical task - need response",
  models: ["anthropic/claude-3.5-sonnet", // Primary
           "openai/gpt-4o",               // Fallback 1
           "google/gemini-2.0-flash"]     // Fallback 2
});

// If claude fails, gpt-4o succeeds automatically
console.log(`Success: ${result.successful_models > 0}`);
```

**TMLPD Feature:** Automatic fallback chain, first-success wins

---

### Q10: "How do I implement retry with exponential backoff?"

```typescript
const tmlpd = createTMLPD({
  retry: {
    max_attempts: 3,
    base_delay_ms: 500,  // Start at 500ms
    max_delay_ms: 30000, // Cap at 30s
    jitter: 0.3          // ±30% randomization
  }
});

// Delays grow exponentially from 500ms, each randomized ±30% to prevent a thundering herd
```

**TMLPD Feature:** Configurable exponential backoff with jitter
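The schedule implied by that config can be computed directly: double the delay each attempt, cap it, then randomize. A minimal sketch (the function name `backoff_delays` is hypothetical, but the parameters mirror the config keys above):

```python
import random

def backoff_delays(max_attempts: int = 3, base_ms: float = 500,
                   max_ms: float = 30_000, jitter: float = 0.3) -> list[float]:
    """Exponential backoff schedule with +/-`jitter` fractional randomization."""
    delays = []
    for attempt in range(max_attempts):
        delay = min(base_ms * (2 ** attempt), max_ms)  # 500, 1000, 2000, ... capped
        delay *= 1 + random.uniform(-jitter, jitter)   # spread retries out in time
        delays.append(delay)
    return delays
```

The jitter matters when many clients fail at once: without it they all retry at the same instants and hammer the recovering provider together.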

---

### Q11: "How do I detect and isolate failing providers?"

```typescript
// Provider status check
const status = await tmlpd.getProviderStatus();

console.log(status.ready_providers);                   // ["openai", "anthropic", "google"]
console.log(status.providers["openai"].failures);      // 0
console.log(status.providers["anthropic"].latency_ms); // 450

// Automatically routes around failures
```

**TMLPD Feature:** Provider health monitoring + automatic isolation

---

### Q12: "How do I ensure at least one response succeeds?"

```typescript
// Maximize delivery odds with a fallback chain
const result = await tmlpd.executeParallel({
  prompt: "Must get response",
  models: ["premium-model-1", "premium-model-2", "budget-model"],
  fallback_enabled: true
});

// The chain succeeds unless every provider fails - check and escalate
if (result.successful_models === 0) {
  throw new Error("All providers failed - escalate");
}
```

**TMLPD Feature:** At-least-one-success guarantee

---

## 4. Caching Issues

### Q13: "How do I cache LLM responses to save money?"

```python
# Caching is enabled by default
result1 = lite.process("What is Python?", use_cache=True)
# First call - cache miss, real API call

result2 = lite.process("What is Python?", use_cache=True)
# Second call - cache hit, instant, $0
print(f"Cached: {result2['cached']}")  # True
```

**TMLPD Feature:** LRU cache with SHA-256 key generation
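The mechanics behind "LRU cache with SHA-256 key generation" can be sketched in a few lines: hash the request fields into a stable key, and evict the least recently used entry when the cache is full. This is an illustrative sketch, not TMLPD's actual code; `cache_key` and `LRUCache` are hypothetical names.

```python
import hashlib
from collections import OrderedDict

def cache_key(prompt: str, model: str) -> str:
    """Stable SHA-256 key over the request fields (64 hex chars)."""
    return hashlib.sha256(f"{model}\x00{prompt}".encode("utf-8")).hexdigest()

class LRUCache:
    def __init__(self, max_entries: int = 1000):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None  # cache miss
        self._data.move_to_end(key)  # mark as recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict least recently used
```

Hashing the model name together with the prompt keeps responses from different models from colliding under one key.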

---

### Q14: "How do I invalidate stale cache entries?"

```typescript
// Invalidate specific model cache
await tmlpd.invalidateCache("gpt-4o");

// Invalidate all cache
await tmlpd.invalidateCache(); // Clears everything

// Get cache stats
const stats = await tmlpd.getCacheStats();
// { hits: 42, misses: 10, size: 25, hit_rate: 0.808 }
```

**TMLPD Feature:** Selective + full cache invalidation

---

### Q15: "How do I cache based on semantic similarity, not exact match?"

```typescript
// Semantic caching via episodic memory
const memory = new EpisodicMemoryStore();

memory.store({
  task: { description: "Explain quantum physics", type: "explanation" },
  result: { success: true, output: "Quantum physics is...", cost: 0.02 },
  agent: { id: "agent-1", model: "gpt-4o", provider: "openai" },
  metadata: { tokens: 500 },
  importance: 0.8
});

// Later query with similar intent
const similar = memory.getSimilarTasks("What is quantum mechanics?", 3);
// Returns tasks with semantic keyword overlap
```

**TMLPD Feature:** EpisodicMemoryStore for semantic/keyword caching
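"Semantic keyword overlap" can be approximated with Jaccard similarity over word sets: the size of the intersection divided by the size of the union. A minimal sketch under that assumption (`keyword_overlap` and `similar_tasks` are illustrative names, not TMLPD APIs):

```python
def keyword_overlap(a: str, b: str) -> float:
    """Jaccard similarity of lowercase word sets: |A & B| / |A | B|."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def similar_tasks(query: str, descriptions: list[str], top_k: int = 3) -> list[str]:
    """Rank stored task descriptions by overlap with the query."""
    ranked = sorted(descriptions, key=lambda d: keyword_overlap(query, d), reverse=True)
    return ranked[:top_k]
```

This catches paraphrases that share vocabulary ("quantum physics" vs "quantum physics basics") but not pure synonym rewrites; those need embedding similarity, as in Q28.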

---

### Q16: "How do I set cache TTL for different content types?"

```typescript
const tmlpd = createTMLPD({
  cache: {
    ttl_seconds: 3600, // 1 hour default
    max_entries: 1000  // LRU eviction
  }
});

// Factual answers: short TTL (15 min)
// General explanations: medium TTL (1 hour)
// Documentation: long TTL (24 hours)
```

**TMLPD Feature:** Configurable TTL per request type

---

## 5. Model Routing Issues

### Q17: "How do I automatically route tasks to optimal models?"

```python
from tmlpd import TMLPDLite, TaskType

lite = TMLPDLite()

# Task → model routing (built-in intelligence)
routing_table = {
    # CODING tasks
    "Write a Python async function": TaskType.CODING,
    "Debug this JavaScript": TaskType.CODING,
    # FRONTEND tasks
    "Build a React component": TaskType.FRONTEND,
    "Style this CSS": TaskType.FRONTEND,
    # FAST tasks (budget)
    "What is 2+2?": TaskType.FAST,
    "Quick translation": TaskType.FAST,
    # PREMIUM tasks
    "Design system architecture": TaskType.PREMIUM,
}

prompt = "Write a Python decorator with retry"
task = lite.classify_task(prompt)          # TaskType.CODING
models = lite.get_optimal_models(task, 3)  # ["codex", "claude-minimax", "claude"]
```

**TMLPD Feature:** Automatic task classification + model routing

---

### Q18: "How do I use different models for different parts of a task?"

```python
# Split task: research → write → review
async def pipeline_task(prompt):
    client = TMLPDClient()

    # Phase 1: Research (fast, broad)
    research = await client.execute(prompt, model="gemini-flash")

    # Phase 2: Write (premium quality)
    writing = await client.execute(
        f"Write detailed response about: {research.content}",
        model="claude-3.5-sonnet"
    )

    # Phase 3: Review (medium)
    review = await client.execute(f"Review: {writing.content}", model="gpt-4o")

    return review.content
```

**TMLPD Feature:** Sequential multi-model pipeline support

---

### Q19: "How do I balance cost vs quality dynamically?"

```typescript
// Adaptive quality based on task complexity
const halo = new HALOOrchestrator({ enableMCTS: true });

const result = await halo.execute("Complex microservices architecture", async (subtask) => {
  // MCTS evaluates: is the subtask simple? Use a cheap model.
  // Is it complex? Use a premium model.
  return executeSubtask(subtask, {
    max_cost: subtask.complexity > 0.7 ? 0.50 : 0.05
  });
});
```

**TMLPD Feature:** HALO + MCTS for adaptive cost/quality

---

### Q20: "How do I handle multilingual routing (Chinese, etc.)?"

```python
from tmlpd import TMLPDLite, TaskType

lite = TMLPDLite()

# Chinese language detection
chinese_prompt = "解释量子纠缠"
task_type = lite.classify_task(chinese_prompt)
# TaskType.CHINESE

# Routes to: claude-glm (best for Chinese), claude-minimax
models = lite.get_optimal_models(task_type, 2)
```

**TMLPD Feature:** Built-in multilingual task classification
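A common way such a classifier detects Chinese is a Unicode range check: any character in the CJK Unified Ideographs block flags the prompt. This is a sketch of that heuristic, not TMLPD's actual classifier; `contains_chinese` and `classify_language` are hypothetical names.

```python
def contains_chinese(text: str) -> bool:
    """True if any character falls in the CJK Unified Ideographs block (U+4E00-U+9FFF)."""
    return any("\u4e00" <= ch <= "\u9fff" for ch in text)

def classify_language(prompt: str) -> str:
    # Route CJK prompts to models that handle Chinese well; everything else uses default routing
    return "CHINESE" if contains_chinese(prompt) else "DEFAULT"
```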

---

## 6. Context & Memory Issues

### Q21: "How do I maintain conversation context across multiple calls?"

```typescript
// Episodic memory for conversation continuity
const memory = new EpisodicMemoryStore();

// Store interaction
memory.store({
  task: { description: "User asked about quantum physics", type: "explanation" },
  result: { success: true, output: "Quantum physics explains..." },
  agent: { id: "session-123", model: "gpt-4o", provider: "openai" },
  metadata: { context_window_used: 4096 },
  importance: 0.7
});

// Later: recall relevant context
const past = memory.getSimilarTasks("quantum entanglement", 5);
// Returns previous quantum-related conversations
```

**TMLPD Feature:** EpisodicMemoryStore for context persistence

---

### Q22: "How do I manage long context without cost explosion?"

```python
# Intelligent context chunking
async def long_context_handler(prompt, max_tokens=4000):
    client = TMLPDClient()

    # Detect if the prompt is too long
    estimated_tokens = len(prompt.split()) * 1.3

    if estimated_tokens > max_tokens:
        # Summarize and compress the context
        summary = await client.execute(
            f"Summarize key points: {prompt[:10000]}",
            model="gemini-flash"  # Cheap for summarization
        )
        return await client.execute(f"Based on: {summary.content}", model="premium-model")

    return await client.execute(prompt, model="premium-model")
```

**TMLPD Feature:** Context chunking + compression patterns

---

### Q23: "How do I learn from past successful tasks?"

```python
# Store successes for future reference
memory = EpisodicMemoryStore()

result = await client.execute("Complex Python async pattern")
if result.success:
    memory.store({
        "task": {"description": "Python async pattern", "type": "coding", "complexity": 3},
        "result": {"success": True, "output": result.content, "cost": result.cost},
        "agent": {"model": result.model},
        "metadata": {},
        "importance": 0.9,  # High importance = longer retention
    })

# A future similar task benefits from this experience
```

**TMLPD Feature:** Episodic memory with importance weighting
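One plausible way importance weighting interacts with retention is exponential time decay scaled by the importance score, evicting the lowest-scoring entries first. This is a sketch of that idea under those assumptions, not TMLPD's documented eviction policy; `retention_score` and `evict` are hypothetical names.

```python
import math
import time

def retention_score(importance: float, age_seconds: float, half_life: float = 86_400.0) -> float:
    """Exponential decay with the given half-life, scaled by importance."""
    return importance * math.exp(-age_seconds * math.log(2) / half_life)

def evict(entries: list[tuple[float, float]], keep: int = 1000) -> list[tuple[float, float]]:
    """entries: (importance, stored_at) pairs; keep the `keep` highest-scoring ones."""
    now = time.time()
    ranked = sorted(entries, key=lambda e: retention_score(e[0], now - e[1]), reverse=True)
    return ranked[:keep]
```

Under this scheme an `importance: 0.9` memory outlives an `importance: 0.3` one by several half-lives before their scores cross.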

---

## 7. Framework Integration Issues

### Q24: "How do I use TMLPD with LangChain?"

```python
from langchain.llms import BaseLLM
from tmlpd import TMLPDLite

class TMLPDLLM(BaseLLM):
    def __init__(self, task_type="default"):
        self.lite = TMLPDLite()
        self.task_type = task_type

    def _call(self, prompt: str, **kwargs) -> str:
        result = self.lite.process(prompt)
        return result["content"]

# Use with LangChain chains
llm = TMLPDLLM(task_type="coding")
chain = prompt | llm | output_parser
```

**TMLPD Feature:** LangChain `BaseLLM` compatible wrapper

---

### Q25: "How do I use TMLPD with LlamaIndex?"

```python
from llama_index.llms import LLM
from tmlpd import TMLPDLite

class TMLPDLLM(LLM):
    def __init__(self, task_type="default"):
        self.lite = TMLPDLite()

    @property
    def metadata(self):
        return {"name": "TMLPD", "model_names": ["gpt-4o", "claude", "gemini"]}

    def complete(self, prompt: str) -> str:
        return self.lite.process(prompt)["content"]

    async def acomplete(self, prompt: str) -> str:
        return self.complete(prompt)

# Use in LlamaIndex queries
llm = TMLPDLLM()
response = index.query("Explain quantum", llm=llm)
```

**TMLPD Feature:** LlamaIndex `LLM` interface compatible

---

### Q26: "How do I create AutoGen agents with TMLPD?"

```python
from autogen import AssistantAgent
from tmlpd import TMLPDLite

class TMLPDAgent(AssistantAgent):
    def __init__(self, name, task_type="default", **kwargs):
        super().__init__(name, **kwargs)
        self.lite = TMLPDLite()
        self.task_type = task_type

    def generate_reply(self, messages, sender, **kwargs):
        last_msg = messages[-1]["content"]
        result = self.lite.process(last_msg)
        return result["content"]

# Create a multi-agent system
coder = TMLPDAgent("coder", task_type="coding")
reviewer = TMLPDAgent("reviewer", task_type="analysis")
```

**TMLPD Feature:** AutoGen `AssistantAgent` base class

---

### Q27: "How do I integrate with CrewAI?"

```python
from crewai import Agent
from tmlpd import TMLPDLite

class TMLPDAgent(Agent):
    def __init__(self, role, goal, backstory, task_type="default"):
        super().__init__(role=role, goal=goal, backstory=backstory)
        self.lite = TMLPDLite()
        self.task_type = task_type

    def execute_task(self, task, context=None):
        prompt = f"{context}\n\n{task}" if context else task
        result = self.lite.process(prompt)
        return result["content"]

# Create crew
researcher = TMLPDAgent(
    role="Researcher",
    goal="Research AI topics thoroughly",
    backstory="Expert AI researcher",
    task_type="analysis"
)
```

**TMLPD Feature:** CrewAI `Agent` pattern compatible

---

## 8. Future Capabilities

### Q28: "How do I enable semantic search over cached responses?"

```typescript
// Planned: semantic cache
const semanticCache = new SemanticMemoryStore({
  vector_dim: 768,           // Embedding dimension
  similarity_threshold: 0.85
});

// Store with an embedding
semanticCache.store({
  prompt: "Explain neural network backpropagation",
  response: "Backpropagation is...",
  embedding: await getEmbedding("Explain neural network backpropagation")
});

// A future query finds a semantically similar cached response
const cached = await semanticCache.findSimilar("How does backprop work?");
// Returns "Explain neural network backpropagation" - high similarity
```

**Planned:** ChromaDB-backed semantic memory (see full TMLPD)
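The `similarity_threshold: 0.85` above is typically a cosine-similarity cutoff between embedding vectors. A minimal sketch of that lookup, assuming embeddings are plain float lists (`cosine_similarity` and `find_similar` are illustrative names, not the planned API):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """cos(a, b) = (a . b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def find_similar(query_vec, entries, threshold: float = 0.85):
    """entries: (vector, response) pairs; return the best response above the threshold."""
    best = max(entries, key=lambda e: cosine_similarity(query_vec, e[0]), default=None)
    if best and cosine_similarity(query_vec, best[0]) >= threshold:
        return best[1]
    return None  # cache miss: no stored response is close enough
```

A vector store like ChromaDB replaces the linear `max` scan with an approximate nearest-neighbor index, but the threshold logic is the same.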

---

### Q29: "How do I use MCTS for workflow optimization?"

```typescript
// Monte Carlo Tree Search for task planning
const mcts = new MCTSWorkflowOptimizer({
  maxIterations: 50,
  explorationConstant: 1.414 // UCB1 tuned
}, ["claude", "codex", "gemini"]);

const strategy = await mcts.findBestStrategy(subtasks, async (subtask) => {
  // Evaluate which model works best for this subtask type
  const result = await executeWithModel(subtask);
  return { cost: result.cost, quality: evaluateQuality(result) };
});

console.log(strategy.model_picks); // ["claude", "codex", "codex", "gemini"]
```

**TMLPD Feature:** MCTS workflow optimizer (reference implementation)
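The `explorationConstant: 1.414` (≈ √2) is the `c` in the standard UCB1 formula MCTS uses to pick which child to expand: average reward plus `c * sqrt(ln(parent visits) / child visits)`. A sketch of that selection rule (`ucb1` and `select_model` are illustrative names, not the optimizer's API):

```python
import math

def ucb1(avg_reward: float, parent_visits: int, child_visits: int, c: float = 1.414) -> float:
    """UCB1 score: exploitation term plus exploration bonus."""
    if child_visits == 0:
        return float("inf")  # always try unvisited options first
    return avg_reward + c * math.sqrt(math.log(parent_visits) / child_visits)

def select_model(stats: dict[str, tuple[float, int]], parent_visits: int, c: float = 1.414) -> str:
    """stats: {model: (avg_reward, visits)}; pick the UCB1-maximizing model."""
    return max(stats, key=lambda m: ucb1(stats[m][0], parent_visits, stats[m][1], c))
```

Higher `c` explores untested models more aggressively; lower `c` exploits the current best sooner.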

---

### Q30: "How do I orchestrate complex multi-step tasks with HALO?"

```typescript
// HALO: Hierarchical Adaptive LLM Orchestration
const halo = new HALOOrchestrator({
  maxConcurrent: 4,
  enableMCTS: true,
  maxDepth: 3 // Plan → Assign → Execute
});

const result = await halo.execute(
  "Build a complete REST API with auth, DB, and tests",
  async (subtask, agent) => {
    // HALO decomposes:
    //   Level 1: Plan (auth, models, routes, tests)
    //   Level 2: Assign each subtask to the optimal agent
    //   Level 3: Execute in parallel with fallback
    return executeSubtask(subtask, { maxConcurrent: 2 });
  }
);

console.log(result.strategy); // { plan: [...], assignments: {...} }
```

**TMLPD Feature:** HALO orchestrator for complex task decomposition

---

## Feature Mapping Table

| Issue Category | Problem | TMLPD Solution |
|----------------|---------|----------------|
| **Parallel** | Run multiple providers | `execute_parallel()` |
| **Parallel** | Rate limit control | `maxConcurrent` config |
| **Parallel** | Stream multiple | `streaming: true` |
| **Cost** | Track spending | `getCostSummary()` |
| **Cost** | Set budgets | `budget.daily/monthly` |
| **Cost** | Route to cheap | Task classification |
| **Reliability** | Handle outages | Automatic fallback |
| **Reliability** | Retry backoff | Exponential + jitter |
| **Reliability** | Isolate failures | Provider health check |
| **Caching** | Cache responses | LRU cache with SHA-256 |
| **Caching** | Invalidate stale | `invalidateCache()` |
| **Caching** | Semantic match | EpisodicMemoryStore |
| **Routing** | Auto-route tasks | `classify_task()` |
| **Routing** | Multi-phase | Sequential pipelines |
| **Routing** | Multilingual | CHINESE task type |
| **Memory** | Persist context | EpisodicMemoryStore |
| **Memory** | Learn from past | Importance-weighted storage |
| **Framework** | LangChain | `TMLPDLLM(BaseLLM)` |
| **Framework** | LlamaIndex | `TMLPDLLM(LLM)` |
| **Framework** | AutoGen | `TMLPDAgent(AssistantAgent)` |
| **Framework** | CrewAI | `TMLPDAgent(Agent)` |

---

## Getting Started

```bash
# Install
npm install tmlpd-pi

# Python (copy python/tmlpd.py to your project)
```

```python
# Quick start - Python
from tmlpd import quick_process
result = quick_process("What is quantum entanglement?")
```

```typescript
// Quick start - TypeScript
import { createTMLPD } from "tmlpd-pi";
const tmlpd = createTMLPD();
const result = await tmlpd.executeParallel("Explain quantum", ["gpt-4o", "claude"]);
```

---

**Package:** [npmjs.com/package/tmlpd-pi](https://npmjs.com/package/tmlpd-pi)
**Repository:** [github.com/Das-rebel/tmlpd-skill](https://github.com/Das-rebel/tmlpd-skill)