crca 1.4.0__py3-none-any.whl → 1.5.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (306)
  1. CRCA.py +172 -7
  2. MODEL_CARD.md +53 -0
  3. PKG-INFO +8 -2
  4. RELEASE_NOTES.md +17 -0
  5. STABILITY.md +19 -0
  6. architecture/hybrid/consistency_engine.py +362 -0
  7. architecture/hybrid/conversation_manager.py +421 -0
  8. architecture/hybrid/explanation_generator.py +452 -0
  9. architecture/hybrid/few_shot_learner.py +533 -0
  10. architecture/hybrid/graph_compressor.py +286 -0
  11. architecture/hybrid/hybrid_agent.py +4398 -0
  12. architecture/hybrid/language_compiler.py +623 -0
  13. architecture/hybrid/main,py +0 -0
  14. architecture/hybrid/reasoning_tracker.py +322 -0
  15. architecture/hybrid/self_verifier.py +524 -0
  16. architecture/hybrid/task_decomposer.py +567 -0
  17. architecture/hybrid/text_corrector.py +341 -0
  18. benchmark_results/crca_core_benchmarks.json +178 -0
  19. branches/crca_sd/crca_sd_realtime.py +6 -2
  20. branches/general_agent/__init__.py +102 -0
  21. branches/general_agent/general_agent.py +1400 -0
  22. branches/general_agent/personality.py +169 -0
  23. branches/general_agent/utils/__init__.py +19 -0
  24. branches/general_agent/utils/prompt_builder.py +170 -0
  25. {crca-1.4.0.dist-info → crca-1.5.0.dist-info}/METADATA +8 -2
  26. {crca-1.4.0.dist-info → crca-1.5.0.dist-info}/RECORD +303 -20
  27. crca_core/__init__.py +35 -0
  28. crca_core/benchmarks/__init__.py +14 -0
  29. crca_core/benchmarks/synthetic_scm.py +103 -0
  30. crca_core/core/__init__.py +23 -0
  31. crca_core/core/api.py +120 -0
  32. crca_core/core/estimate.py +208 -0
  33. crca_core/core/godclass.py +72 -0
  34. crca_core/core/intervention_design.py +174 -0
  35. crca_core/core/lifecycle.py +48 -0
  36. crca_core/discovery/__init__.py +9 -0
  37. crca_core/discovery/tabular.py +193 -0
  38. crca_core/identify/__init__.py +171 -0
  39. crca_core/identify/backdoor.py +39 -0
  40. crca_core/identify/frontdoor.py +48 -0
  41. crca_core/identify/graph.py +106 -0
  42. crca_core/identify/id_algorithm.py +43 -0
  43. crca_core/identify/iv.py +48 -0
  44. crca_core/models/__init__.py +67 -0
  45. crca_core/models/provenance.py +56 -0
  46. crca_core/models/refusal.py +39 -0
  47. crca_core/models/result.py +83 -0
  48. crca_core/models/spec.py +151 -0
  49. crca_core/models/validation.py +68 -0
  50. crca_core/scm/__init__.py +9 -0
  51. crca_core/scm/linear_gaussian.py +198 -0
  52. crca_core/timeseries/__init__.py +6 -0
  53. crca_core/timeseries/pcmci.py +181 -0
  54. crca_llm/__init__.py +12 -0
  55. crca_llm/client.py +85 -0
  56. crca_llm/coauthor.py +118 -0
  57. crca_llm/orchestrator.py +289 -0
  58. crca_llm/types.py +21 -0
  59. crca_reasoning/__init__.py +16 -0
  60. crca_reasoning/critique.py +54 -0
  61. crca_reasoning/godclass.py +206 -0
  62. crca_reasoning/memory.py +24 -0
  63. crca_reasoning/rationale.py +10 -0
  64. crca_reasoning/react_controller.py +81 -0
  65. crca_reasoning/tool_router.py +97 -0
  66. crca_reasoning/types.py +40 -0
  67. crca_sd/__init__.py +15 -0
  68. crca_sd/crca_sd_core.py +2 -0
  69. crca_sd/crca_sd_governance.py +2 -0
  70. crca_sd/crca_sd_mpc.py +2 -0
  71. crca_sd/crca_sd_realtime.py +2 -0
  72. crca_sd/crca_sd_tui.py +2 -0
  73. cuda-keyring_1.1-1_all.deb +0 -0
  74. cuda-keyring_1.1-1_all.deb.1 +0 -0
  75. docs/IMAGE_ANNOTATION_USAGE.md +539 -0
  76. docs/INSTALL_DEEPSPEED.md +125 -0
  77. docs/api/branches/crca-cg.md +19 -0
  78. docs/api/branches/crca-q.md +27 -0
  79. docs/api/branches/crca-sd.md +37 -0
  80. docs/api/branches/general-agent.md +24 -0
  81. docs/api/branches/overview.md +19 -0
  82. docs/api/crca/agent-methods.md +62 -0
  83. docs/api/crca/operations.md +79 -0
  84. docs/api/crca/overview.md +32 -0
  85. docs/api/image-annotation/engine.md +52 -0
  86. docs/api/image-annotation/overview.md +17 -0
  87. docs/api/schemas/annotation.md +34 -0
  88. docs/api/schemas/core-schemas.md +82 -0
  89. docs/api/schemas/overview.md +32 -0
  90. docs/api/schemas/policy.md +30 -0
  91. docs/api/utils/conversation.md +22 -0
  92. docs/api/utils/graph-reasoner.md +32 -0
  93. docs/api/utils/overview.md +21 -0
  94. docs/api/utils/router.md +19 -0
  95. docs/api/utils/utilities.md +97 -0
  96. docs/architecture/causal-graphs.md +41 -0
  97. docs/architecture/data-flow.md +29 -0
  98. docs/architecture/design-principles.md +33 -0
  99. docs/architecture/hybrid-agent/components.md +38 -0
  100. docs/architecture/hybrid-agent/consistency.md +26 -0
  101. docs/architecture/hybrid-agent/overview.md +44 -0
  102. docs/architecture/hybrid-agent/reasoning.md +22 -0
  103. docs/architecture/llm-integration.md +26 -0
  104. docs/architecture/modular-structure.md +37 -0
  105. docs/architecture/overview.md +69 -0
  106. docs/architecture/policy-engine-arch.md +29 -0
  107. docs/branches/crca-cg/corposwarm.md +39 -0
  108. docs/branches/crca-cg/esg-scoring.md +30 -0
  109. docs/branches/crca-cg/multi-agent.md +35 -0
  110. docs/branches/crca-cg/overview.md +40 -0
  111. docs/branches/crca-q/alternative-data.md +55 -0
  112. docs/branches/crca-q/architecture.md +71 -0
  113. docs/branches/crca-q/backtesting.md +45 -0
  114. docs/branches/crca-q/causal-engine.md +33 -0
  115. docs/branches/crca-q/execution.md +39 -0
  116. docs/branches/crca-q/market-data.md +60 -0
  117. docs/branches/crca-q/overview.md +58 -0
  118. docs/branches/crca-q/philosophy.md +60 -0
  119. docs/branches/crca-q/portfolio-optimization.md +66 -0
  120. docs/branches/crca-q/risk-management.md +102 -0
  121. docs/branches/crca-q/setup.md +65 -0
  122. docs/branches/crca-q/signal-generation.md +61 -0
  123. docs/branches/crca-q/signal-validation.md +43 -0
  124. docs/branches/crca-sd/core.md +84 -0
  125. docs/branches/crca-sd/governance.md +53 -0
  126. docs/branches/crca-sd/mpc-solver.md +65 -0
  127. docs/branches/crca-sd/overview.md +59 -0
  128. docs/branches/crca-sd/realtime.md +28 -0
  129. docs/branches/crca-sd/tui.md +20 -0
  130. docs/branches/general-agent/overview.md +37 -0
  131. docs/branches/general-agent/personality.md +36 -0
  132. docs/branches/general-agent/prompt-builder.md +30 -0
  133. docs/changelog/index.md +79 -0
  134. docs/contributing/code-style.md +69 -0
  135. docs/contributing/documentation.md +43 -0
  136. docs/contributing/overview.md +29 -0
  137. docs/contributing/testing.md +29 -0
  138. docs/core/crcagent/async-operations.md +65 -0
  139. docs/core/crcagent/automatic-extraction.md +107 -0
  140. docs/core/crcagent/batch-prediction.md +80 -0
  141. docs/core/crcagent/bayesian-inference.md +60 -0
  142. docs/core/crcagent/causal-graph.md +92 -0
  143. docs/core/crcagent/counterfactuals.md +96 -0
  144. docs/core/crcagent/deterministic-simulation.md +78 -0
  145. docs/core/crcagent/dual-mode-operation.md +82 -0
  146. docs/core/crcagent/initialization.md +88 -0
  147. docs/core/crcagent/optimization.md +65 -0
  148. docs/core/crcagent/overview.md +63 -0
  149. docs/core/crcagent/time-series.md +57 -0
  150. docs/core/schemas/annotation.md +30 -0
  151. docs/core/schemas/core-schemas.md +82 -0
  152. docs/core/schemas/overview.md +30 -0
  153. docs/core/schemas/policy.md +41 -0
  154. docs/core/templates/base-agent.md +31 -0
  155. docs/core/templates/feature-mixins.md +31 -0
  156. docs/core/templates/overview.md +29 -0
  157. docs/core/templates/templates-guide.md +75 -0
  158. docs/core/tools/mcp-client.md +34 -0
  159. docs/core/tools/overview.md +24 -0
  160. docs/core/utils/conversation.md +27 -0
  161. docs/core/utils/graph-reasoner.md +29 -0
  162. docs/core/utils/overview.md +27 -0
  163. docs/core/utils/router.md +27 -0
  164. docs/core/utils/utilities.md +97 -0
  165. docs/css/custom.css +84 -0
  166. docs/examples/basic-usage.md +57 -0
  167. docs/examples/general-agent/general-agent-examples.md +50 -0
  168. docs/examples/hybrid-agent/hybrid-agent-examples.md +56 -0
  169. docs/examples/image-annotation/image-annotation-examples.md +54 -0
  170. docs/examples/integration/integration-examples.md +58 -0
  171. docs/examples/overview.md +37 -0
  172. docs/examples/trading/trading-examples.md +46 -0
  173. docs/features/causal-reasoning/advanced-topics.md +101 -0
  174. docs/features/causal-reasoning/counterfactuals.md +43 -0
  175. docs/features/causal-reasoning/do-calculus.md +50 -0
  176. docs/features/causal-reasoning/overview.md +47 -0
  177. docs/features/causal-reasoning/structural-models.md +52 -0
  178. docs/features/hybrid-agent/advanced-components.md +55 -0
  179. docs/features/hybrid-agent/core-components.md +64 -0
  180. docs/features/hybrid-agent/overview.md +34 -0
  181. docs/features/image-annotation/engine.md +82 -0
  182. docs/features/image-annotation/features.md +113 -0
  183. docs/features/image-annotation/integration.md +75 -0
  184. docs/features/image-annotation/overview.md +53 -0
  185. docs/features/image-annotation/quickstart.md +73 -0
  186. docs/features/policy-engine/doctrine-ledger.md +105 -0
  187. docs/features/policy-engine/monitoring.md +44 -0
  188. docs/features/policy-engine/mpc-control.md +89 -0
  189. docs/features/policy-engine/overview.md +46 -0
  190. docs/getting-started/configuration.md +225 -0
  191. docs/getting-started/first-agent.md +164 -0
  192. docs/getting-started/installation.md +144 -0
  193. docs/getting-started/quickstart.md +137 -0
  194. docs/index.md +118 -0
  195. docs/js/mathjax.js +13 -0
  196. docs/lrm/discovery_proof_notes.md +25 -0
  197. docs/lrm/finetune_full.md +83 -0
  198. docs/lrm/math_appendix.md +120 -0
  199. docs/lrm/overview.md +32 -0
  200. docs/mkdocs.yml +238 -0
  201. docs/stylesheets/extra.css +21 -0
  202. docs_generated/crca_core/CounterfactualResult.md +12 -0
  203. docs_generated/crca_core/DiscoveryHypothesisResult.md +13 -0
  204. docs_generated/crca_core/DraftSpec.md +13 -0
  205. docs_generated/crca_core/EstimateResult.md +13 -0
  206. docs_generated/crca_core/IdentificationResult.md +17 -0
  207. docs_generated/crca_core/InterventionDesignResult.md +12 -0
  208. docs_generated/crca_core/LockedSpec.md +15 -0
  209. docs_generated/crca_core/RefusalResult.md +12 -0
  210. docs_generated/crca_core/ValidationReport.md +9 -0
  211. docs_generated/crca_core/index.md +13 -0
  212. examples/general_agent_example.py +277 -0
  213. examples/general_agent_quickstart.py +202 -0
  214. examples/general_agent_simple.py +92 -0
  215. examples/hybrid_agent_auto_extraction.py +84 -0
  216. examples/hybrid_agent_dictionary_demo.py +104 -0
  217. examples/hybrid_agent_enhanced.py +179 -0
  218. examples/hybrid_agent_general_knowledge.py +107 -0
  219. examples/image_annotation_quickstart.py +328 -0
  220. examples/test_hybrid_fixes.py +77 -0
  221. image_annotation/__init__.py +27 -0
  222. image_annotation/annotation_engine.py +2593 -0
  223. install_cuda_wsl2.sh +59 -0
  224. install_deepspeed.sh +56 -0
  225. install_deepspeed_simple.sh +87 -0
  226. mkdocs.yml +252 -0
  227. ollama/Modelfile +8 -0
  228. prompts/__init__.py +2 -1
  229. prompts/default_crca.py +9 -1
  230. prompts/general_agent.py +227 -0
  231. prompts/image_annotation.py +56 -0
  232. pyproject.toml +17 -2
  233. requirements-docs.txt +10 -0
  234. requirements.txt +21 -2
  235. schemas/__init__.py +26 -1
  236. schemas/annotation.py +222 -0
  237. schemas/conversation.py +193 -0
  238. schemas/hybrid.py +211 -0
  239. schemas/reasoning.py +276 -0
  240. schemas_export/crca_core/CounterfactualResult.schema.json +108 -0
  241. schemas_export/crca_core/DiscoveryHypothesisResult.schema.json +113 -0
  242. schemas_export/crca_core/DraftSpec.schema.json +635 -0
  243. schemas_export/crca_core/EstimateResult.schema.json +113 -0
  244. schemas_export/crca_core/IdentificationResult.schema.json +145 -0
  245. schemas_export/crca_core/InterventionDesignResult.schema.json +111 -0
  246. schemas_export/crca_core/LockedSpec.schema.json +646 -0
  247. schemas_export/crca_core/RefusalResult.schema.json +90 -0
  248. schemas_export/crca_core/ValidationReport.schema.json +62 -0
  249. scripts/build_lrm_dataset.py +80 -0
  250. scripts/export_crca_core_schemas.py +54 -0
  251. scripts/export_hf_lrm.py +37 -0
  252. scripts/export_ollama_gguf.py +45 -0
  253. scripts/generate_changelog.py +157 -0
  254. scripts/generate_crca_core_docs_from_schemas.py +86 -0
  255. scripts/run_crca_core_benchmarks.py +163 -0
  256. scripts/run_full_finetune.py +198 -0
  257. scripts/run_lrm_eval.py +31 -0
  258. templates/graph_management.py +29 -0
  259. tests/conftest.py +9 -0
  260. tests/test_core.py +2 -3
  261. tests/test_crca_core_discovery_tabular.py +15 -0
  262. tests/test_crca_core_estimate_dowhy.py +36 -0
  263. tests/test_crca_core_identify.py +18 -0
  264. tests/test_crca_core_intervention_design.py +36 -0
  265. tests/test_crca_core_linear_gaussian_scm.py +69 -0
  266. tests/test_crca_core_spec.py +25 -0
  267. tests/test_crca_core_timeseries_pcmci.py +15 -0
  268. tests/test_crca_llm_coauthor.py +12 -0
  269. tests/test_crca_llm_orchestrator.py +80 -0
  270. tests/test_hybrid_agent_llm_enhanced.py +556 -0
  271. tests/test_image_annotation_demo.py +376 -0
  272. tests/test_image_annotation_operational.py +408 -0
  273. tests/test_image_annotation_unit.py +551 -0
  274. tests/test_training_moe.py +13 -0
  275. training/__init__.py +42 -0
  276. training/datasets.py +140 -0
  277. training/deepspeed_zero2_0_5b.json +22 -0
  278. training/deepspeed_zero2_1_5b.json +22 -0
  279. training/deepspeed_zero3_0_5b.json +28 -0
  280. training/deepspeed_zero3_14b.json +28 -0
  281. training/deepspeed_zero3_h100_3gpu.json +20 -0
  282. training/deepspeed_zero3_offload.json +28 -0
  283. training/eval.py +92 -0
  284. training/finetune.py +516 -0
  285. training/public_datasets.py +89 -0
  286. training_data/react_train.jsonl +7473 -0
  287. utils/agent_discovery.py +311 -0
  288. utils/batch_processor.py +317 -0
  289. utils/conversation.py +78 -0
  290. utils/edit_distance.py +118 -0
  291. utils/formatter.py +33 -0
  292. utils/graph_reasoner.py +530 -0
  293. utils/rate_limiter.py +283 -0
  294. utils/router.py +2 -2
  295. utils/tool_discovery.py +307 -0
  296. webui/__init__.py +10 -0
  297. webui/app.py +229 -0
  298. webui/config.py +104 -0
  299. webui/static/css/style.css +332 -0
  300. webui/static/js/main.js +284 -0
  301. webui/templates/index.html +42 -0
  302. tests/test_crca_excel.py +0 -166
  303. tests/test_data_broker.py +0 -424
  304. tests/test_palantir.py +0 -349
  305. {crca-1.4.0.dist-info → crca-1.5.0.dist-info}/WHEEL +0 -0
  306. {crca-1.4.0.dist-info → crca-1.5.0.dist-info}/licenses/LICENSE +0 -0
@@ -0,0 +1,101 @@
+ # Advanced Causal Reasoning Topics
+
+ Advanced topics in causal reasoning, including causal discovery, interventions, and confounder identification.
+
+ ## Causal Discovery
+
+ Causal discovery learns causal structure from observational data.
+
+ ### Mathematical Foundation
+
+ Causal discovery uses various methods:
+
+ **Constraint-Based Methods**: Test conditional independencies:
+
+ $$X \perp Y | Z \implies \text{No direct edge } X \to Y$$
+
+ **Score-Based Methods**: Search for graph $G$ that maximizes:
+
+ $$Score(G, D) = \sum_{i=1}^n \log P(V_i | Pa(V_i), \theta_i) - \text{Complexity}(G)$$
+
+ **Functional Causal Models**: Assume functional relationships:
+
+ $$Y = f(X, U_Y)$$
+
+ Where $U_Y$ is independent of $X$.
+
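+ As a minimal, package-independent sketch of the constraint-based idea: a partial-correlation test (linear-Gaussian assumption, numpy only) decides whether the dependence between $X$ and $Y$ survives conditioning on $Z$; the data-generating model here is made up for illustration.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ n = 5000
+ Z = rng.normal(size=n)
+ X = 0.8 * Z + rng.normal(size=n)          # Z -> X
+ Y = 0.8 * Z + rng.normal(size=n)          # Z -> Y, but no direct X -> Y edge
+
+ def partial_corr(x, y, z):
+     """Correlation of x and y after regressing out z (a simple linear CI test)."""
+     zd = np.column_stack([np.ones_like(z), z])
+     rx = x - zd @ np.linalg.lstsq(zd, x, rcond=None)[0]
+     ry = y - zd @ np.linalg.lstsq(zd, y, rcond=None)[0]
+     return np.corrcoef(rx, ry)[0, 1]
+
+ print(np.corrcoef(X, Y)[0, 1])   # marginally correlated (via Z)
+ print(partial_corr(X, Y, Z))     # ~0 given Z, so no direct edge is inferred
+ ```
+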
+ ### Usage
+
+ Causal discovery is integrated into automatic extraction:
+
+ ```python
+ from CRCA import CRCAAgent
+
+ agent = CRCAAgent(enable_automatic_extraction=True)
+ result = agent.run("Analyze relationships in data")
+ ```
+
+ ## Causal Interventions
+
+ Interventions represent actions that change the value of a variable.
+
+ ### Mathematical Foundation
+
+ **Intervention vs. Observation**:
+
+ **Observation**: $P(Y | X=x)$ - Conditional probability
+
+ **Intervention**: $P(Y | do(X=x))$ - Causal effect
+
+ The key difference:
+
+ $$P(Y | X=x) = \sum_z P(Y | X=x, Z=z) P(Z=z | X=x)$$
+ $$P(Y | do(X=x)) = \sum_z P(Y | X=x, Z=z) P(Z=z)$$
+
+ **Average Causal Effect (ACE)**:
+
+ $$ACE = E[Y | do(X=1)] - E[Y | do(X=0)]$$
+
+ ### Usage
+
+ ```python
+ effect = agent.compute_causal_effect(
+     treatment="X",
+     outcome="Y",
+     value_treated=1,
+     value_control=0
+ )
+ ```
+
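+ The gap between $P(Y | X=x)$ and $P(Y | do(X=x))$ can be reproduced on synthetic data. The sketch below (numpy only, independent of the agent API above) compares the naive conditional contrast with the backdoor-adjusted ACE when a confounder $Z$ drives both $X$ and $Y$; the true effect is 0.2 by construction.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ n = 200_000
+ Z = rng.binomial(1, 0.5, n)                    # confounder
+ X = rng.binomial(1, 0.2 + 0.6 * Z)             # Z -> X
+ Y = rng.binomial(1, 0.1 + 0.2 * X + 0.5 * Z)   # X -> Y and Z -> Y; true ACE = 0.2
+
+ naive = Y[X == 1].mean() - Y[X == 0].mean()    # E[Y|X=1] - E[Y|X=0], biased by Z
+
+ # Backdoor adjustment: sum over z of (E[Y|X=1,Z=z] - E[Y|X=0,Z=z]) * P(Z=z)
+ ace = sum(
+     (Y[(X == 1) & (Z == z)].mean() - Y[(X == 0) & (Z == z)].mean()) * (Z == z).mean()
+     for z in (0, 1)
+ )
+ print(round(naive, 3), round(ace, 3))          # naive is inflated; ace is close to 0.2
+ ```
+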
+ ## Confounder Identification
+
+ Confounders are variables that cause both treatment and outcome.
+
+ ### Mathematical Foundation
+
+ **Confounder Definition**: $Z$ is a confounder for $X \to Y$ if:
+
+ $$Z \to X \text{ and } Z \to Y$$
+
+ This creates a backdoor path: $X \leftarrow Z \to Y$.
+
+ **Backdoor Criterion**: To identify $E[Y | do(X=x)]$, we need to block all backdoor paths. A set $Z$ satisfies the backdoor criterion if:
+
+ 1. $Z$ blocks all backdoor paths from $X$ to $Y$
+ 2. $Z$ contains no descendants of $X$
+
+ Then:
+
+ $$E[Y | do(X=x)] = \sum_z E[Y | X=x, Z=z] P(Z=z)$$
+
+ ### Usage
+
+ ```python
+ from utils.graph_reasoner import GraphReasoner
+
+ reasoner = GraphReasoner(graph)
+ confounders = reasoner.identify_confounders("X", "Y")
+ adjustment_set = reasoner.find_adjustment_set("X", "Y")
+ ```
+
+ ## Next Steps
+
+ - [Overview](overview.md) - Return to overview
@@ -0,0 +1,43 @@
+ # Counterfactual Reasoning
+
+ Counterfactual reasoning answers "what-if" questions about alternative scenarios.
+
+ ## Overview
+
+ Counterfactuals enable reasoning about what would have happened under different conditions, given what actually occurred.
+
+ ## Mathematical Foundation
+
+ ### Counterfactual Query
+
+ A counterfactual query is:
+
+ $$P(Y_{x'} | X=x, Y=y)$$
+
+ Where:
+ - $Y_{x'}$: Potential outcome under intervention $do(X=x')$
+ - $X=x, Y=y$: Observed values
+
+ ### Three-Step Process
+
+ 1. **Abduction**: Infer exogenous variables from observations
+    $$U_Y = f^{-1}(X=x, Z=z, Y=y)$$
+
+ 2. **Action**: Set $X = x'$
+
+ 3. **Prediction**: Compute $Y$ using the structural equations (traced numerically in the sketch below)
+    $$Y_{x'} = f(X=x', Z=z, U_Y)$$
+
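+ The three steps can be followed on a toy linear structural equation $Y = 2X + U_Y$; the model and numbers are made up purely for illustration and are independent of the agent API shown in the Usage section below.
+
+ ```python
+ # Assumed structural equation for this sketch: Y = 2 * X + U_Y
+ def f(x, u_y):
+     return 2 * x + u_y
+
+ x_obs, y_obs = 1.0, 3.5     # observed world: X = 1, Y = 3.5
+
+ # 1. Abduction: invert the structural equation to recover the noise term
+ u_y = y_obs - 2 * x_obs     # U_Y = 1.5
+
+ # 2. Action: intervene, setting X to the counterfactual value x'
+ x_cf = 2.0
+
+ # 3. Prediction: push the recovered noise through the modified model
+ y_cf = f(x_cf, u_y)
+ print(y_cf)                 # 5.5 -- the value Y would have taken had X been 2
+ ```
+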
+ ## Usage
+
+ ```python
+ counterfactuals = agent.generate_counterfactuals(
+     intervention={"education": "college"},
+     outcome="income",
+     observed={"education": "high_school", "income": 30000}
+ )
+ ```
+
+ ## Next Steps
+
+ - [Causal Discovery](causal-discovery.md) - Learning causal structure
@@ -0,0 +1,50 @@
+ # Do-Calculus
+
+ Do-calculus provides rules for identifying causal effects from observational data.
+
+ ## Overview
+
+ Do-calculus, developed by Judea Pearl, provides three rules for manipulating probability expressions involving the do-operator.
+
+ ## Mathematical Foundation
+
+ ### Do-Operator
+
+ The do-operator $do(X=x)$ represents intervention, setting $X$ to value $x$:
+
+ $$P(Y | do(X=x)) \neq P(Y | X=x)$$
+
+ The key difference: observation vs. intervention.
+
+ ### Do-Calculus Rules
+
+ **Rule 1** (Insertion/deletion of observations):
+ $$P(Y | do(X), Z, W) = P(Y | do(X), W)$$
+
+ If $(Y \perp Z \mid X, W)$ holds in $G_{\overline{X}}$, the graph with all edges into $X$ removed.
+
+ **Rule 2** (Action/observation exchange):
+ $$P(Y | do(X), do(Z), W) = P(Y | do(X), Z, W)$$
+
+ If $(Y \perp Z \mid X, W)$ holds in $G_{\overline{X}\underline{Z}}$, the graph with edges into $X$ and edges out of $Z$ removed.
+
+ **Rule 3** (Insertion/deletion of actions):
+ $$P(Y | do(X), do(Z), W) = P(Y | do(X), W)$$
+
+ If $(Y \perp Z \mid X, W)$ holds in $G_{\overline{X}\,\overline{Z(W)}}$, where $Z(W)$ is the set of $Z$-nodes that are not ancestors of any $W$-node in $G_{\overline{X}}$.
+
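+ Each rule's graphical condition is a d-separation test in a mutilated graph. A minimal sketch of the Rule 1 check, assuming networkx (which provides `nx.d_separated` from version 2.8) and a toy graph invented for this example:
+
+ ```python
+ import networkx as nx
+
+ # Toy graph: Z -> X -> Y, plus W -> Y
+ G = nx.DiGraph([("Z", "X"), ("X", "Y"), ("W", "Y")])
+
+ # Build G with all edges into X removed (the graph under do(X))
+ G_do_x = G.copy()
+ G_do_x.remove_edges_from(list(G_do_x.in_edges("X")))
+
+ # Rule 1 licenses dropping Z from P(Y | do(X), Z, W) iff Y and Z are
+ # d-separated by {X, W} in that mutilated graph.
+ print(nx.d_separated(G_do_x, {"Y"}, {"Z"}, {"X", "W"}))   # True: Z can be dropped
+ ```
+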
+ ## Causal Effect Identification
+
+ Using do-calculus, we can identify:
+
+ $$E[Y | do(X=x)] = \sum_{z} E[Y | X=x, Z=z] P(Z=z)$$
+
+ Where $Z$ is a valid adjustment set.
+
+ ## Usage
+
+ CR-CA automatically applies do-calculus rules when computing causal effects.
+
+ ## Next Steps
+
+ - [Counterfactuals](counterfactuals.md) - Counterfactual reasoning
@@ -0,0 +1,47 @@
+ # Causal Reasoning Overview
+
+ Causal reasoning framework based on Judea Pearl's Structural Causal Models.
+
+ ## Overview
+
+ Causal reasoning enables understanding of cause-and-effect relationships, going beyond correlation to identify true causal mechanisms.
+
+ ## Key Concepts
+
+ - **Structural Causal Models (SCMs)**: Mathematical framework for causality
+ - **Do-Calculus**: Rules for causal inference
+ - **Counterfactuals**: "What-if" reasoning
+ - **Causal Discovery**: Learning causal structure from data
+ - **Interventions**: Reasoning about actions
+
+ ## Mathematical Foundation
+
+ ### Structural Causal Models
+
+ An SCM is a triple $(U, V, F)$ where:
+ - $U$: Exogenous variables
+ - $V$: Endogenous variables
+ - $F$: Structural equations
+
+ Each variable has:
+
+ $$V_i = f_i(Pa(V_i), U_i)$$
+
+ Where $Pa(V_i)$ are parents of $V_i$.
+
+ ### Do-Calculus
+
+ The do-operator represents intervention:
+
+ $$P(Y | do(X=x)) = \sum_{z} P(Y | X=x, Z=z) P(Z=z)$$
+
+ ## Documentation
+
+ - **[Structural Models](structural-models.md)**: SCMs
+ - **[Do-Calculus](do-calculus.md)**: Do-calculus implementation
+ - **[Counterfactuals](counterfactuals.md)**: Counterfactual reasoning
+
+ ## Next Steps
+
+ - [Structural Models](structural-models.md) - SCMs
+ - [Do-Calculus](do-calculus.md) - Do-calculus
@@ -0,0 +1,52 @@
+ # Structural Causal Models (SCMs)
+
+ Structural Causal Models provide the mathematical foundation for causal reasoning.
+
+ ## Overview
+
+ SCMs represent causal relationships through structural equations that define how variables depend on their causes.
+
+ ## Mathematical Foundation
+
+ ### Definition
+
+ An SCM is a triple:
+
+ $$(U, V, F)$$
+
+ Where:
+ - $U = \{U_1, \ldots, U_n\}$: Exogenous (unobserved) variables
+ - $V = \{V_1, \ldots, V_m\}$: Endogenous (observed) variables
+ - $F = \{f_1, \ldots, f_m\}$: Structural equations
+
+ ### Structural Equations
+
+ Each endogenous variable has a structural equation:
+
+ $$V_i = f_i(Pa(V_i), U_i)$$
+
+ Where $Pa(V_i)$ are the parents of $V_i$ in the causal graph.
+
+ ### Example
+
+ For education and income:
+
+ $$Education = f_E(U_E)$$
+ $$Experience = f_{Exp}(Education, U_{Exp})$$
+ $$Income = f_I(Education, Experience, U_I)$$
+
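+ A minimal simulation of a linear-Gaussian version of these equations (coefficients and noise scales are chosen arbitrarily for illustration; this uses numpy only and is independent of the package API shown below):
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ n = 10_000
+
+ # Exogenous noise terms U_E, U_Exp, U_I
+ u_e, u_exp, u_i = (rng.normal(size=n) for _ in range(3))
+
+ # Structural equations, evaluated in topological order
+ education = u_e
+ experience = 0.5 * education + u_exp
+ income = 0.6 * education + 0.4 * experience + u_i
+
+ # do(education = 1) simply replaces the first equation; downstream equations are reused
+ education_do = np.ones(n)
+ experience_do = 0.5 * education_do + u_exp
+ income_do = 0.6 * education_do + 0.4 * experience_do + u_i
+ print(income_do.mean() - income.mean())   # ~0.8 = 0.6 + 0.4 * 0.5 (total effect)
+ ```
+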
+ ## Usage
+
+ ```python
+ from CRCA import CRCAAgent
+
+ agent = CRCAAgent(model_name="gpt-4o-mini")
+
+ # Build SCM
+ agent.add_causal_relationship("education", "income", strength=0.6)
+ agent.add_causal_relationship("experience", "income", strength=0.4)
+ ```
+
+ ## Next Steps
+
+ - [Do-Calculus](do-calculus.md) - Do-calculus implementation
@@ -0,0 +1,55 @@
+ # Hybrid Agent Advanced Components
+
+ Advanced components of the hybrid agent architecture.
+
+ ## Explanation Generator
+
+ Generates human-readable explanations for reasoning steps.
+
+ The explanation generator translates reasoning steps into natural language explanations.
+
+ ## Few-Shot Learner
+
+ Enables learning from a small number of examples.
+
+ ### Mathematical Foundation
+
+ Few-shot prediction is modeled as a posterior predictive distribution:
+
+ $$P(y | x, D_{few}) = \int P(y | x, \theta) P(\theta | D_{few}) d\theta$$
+
+ Where $D_{few}$ is the few-shot dataset.
+
+ ## Graph Compressor
+
+ Reduces causal graph complexity while preserving essential relationships.
+
+ ### Mathematical Foundation
+
+ Graph compression preserves:
+
+ $$\text{Preserve}(G, G') = \frac{|Paths(G) \cap Paths(G')|}{|Paths(G)|}$$
+
+ Where $G'$ is the compressed graph.
+
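+ A minimal sketch of this preservation ratio, assuming networkx and interpreting $Paths$ as the set of simple directed paths between all node pairs (an interpretation chosen only for this example; the compressor may define it differently):
+
+ ```python
+ import networkx as nx
+ from itertools import permutations
+
+ def simple_paths(g):
+     """All simple directed paths between distinct node pairs, as tuples."""
+     return {
+         tuple(p)
+         for u, v in permutations(g.nodes, 2)
+         for p in nx.all_simple_paths(g, u, v)
+     }
+
+ G = nx.DiGraph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")])
+ G_compressed = nx.DiGraph([("A", "C"), ("C", "D")])   # hypothetical compressed graph
+
+ paths, paths_c = simple_paths(G), simple_paths(G_compressed)
+ print(len(paths & paths_c) / len(paths))   # fraction of original paths preserved
+ ```
+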
+ ## Language Compiler
+
+ Translates natural language into structured representations.
+
+ The language compiler converts natural language tasks into structured formats suitable for causal reasoning.
+
+ ## Reasoning Tracker
+
+ Monitors and logs reasoning steps.
+
+ The reasoning tracker maintains a log of all reasoning steps for analysis and debugging.
+
+ ## Text Corrector
+
+ Fixes errors in generated text.
+
+ The text corrector identifies and corrects errors in agent-generated text.
+
+ ## Next Steps
+
+ - [Overview](overview.md) - Return to overview
@@ -0,0 +1,64 @@
+ # Hybrid Agent Core Components
+
+ Core components of the hybrid agent architecture.
+
+ ## Consistency Engine
+
+ Ensures logical consistency across reasoning steps.
+
+ ### Mathematical Foundation
+
+ Consistency is checked using:
+
+ $$\text{Consistent}(S_1, S_2) = \begin{cases}
+ 1 & \text{if } S_1 \not\models \neg S_2 \\
+ 0 & \text{otherwise}
+ \end{cases}$$
+
+ Where $S_1, S_2$ are reasoning steps.
+
+ ## Conversation Manager
+
+ Manages multi-turn dialogue and conversational context.
+
+ ### Mathematical Foundation
+
+ Conversation state is maintained as:
+
+ $$C_t = \{m_1, m_2, \ldots, m_t\}$$
+
+ Where $m_i$ are messages. Context is extracted as:
+
+ $$Context_t = f(C_t)$$
+
+ ## Task Decomposer
+
+ Decomposes complex tasks into manageable subtasks.
+
+ ### Mathematical Foundation
+
+ A task $T$ is decomposed into subtasks:
+
+ $$\text{Decompose}(T) = \{T_1, T_2, \ldots, T_n\}$$
+
+ Where:
+ - $T$: Original task
+ - $T_i$: Subtasks
+ - Dependencies: $T_i \prec T_j$ means $T_i$ must complete before $T_j$ (see the sketch below)
+
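+ The precedence relation can be respected with a plain topological sort; a minimal sketch using the standard-library `graphlib` (Python 3.9+), with subtask names invented for illustration:
+
+ ```python
+ from graphlib import TopologicalSorter
+
+ # T_i ≺ T_j is encoded as "T_j depends on T_i"
+ dependencies = {
+     "build_graph": {"load_data"},
+     "estimate_effect": {"load_data", "build_graph"},
+     "report": {"estimate_effect"},
+ }
+
+ order = list(TopologicalSorter(dependencies).static_order())
+ print(order)   # e.g. ['load_data', 'build_graph', 'estimate_effect', 'report']
+ ```
+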
+ ## Self Verifier
+
+ Verifies reasoning steps for correctness.
+
+ ### Mathematical Foundation
+
+ Verification checks:
+
+ $$\text{Valid}(S) = \begin{cases}
+ 1 & \text{if } S \text{ is logically consistent} \\
+ 0 & \text{otherwise}
+ \end{cases}$$
+
+ ## Next Steps
+
+ - [Advanced Components](advanced-components.md) - Advanced components
@@ -0,0 +1,34 @@
+ # Hybrid Agent Overview
+
+ The hybrid agent architecture combines multiple reasoning components for robust causal analysis.
+
+ ## Overview
+
+ The hybrid agent integrates a consistency engine, a conversation manager, an explanation generator, and other components for enhanced reasoning.
+
+ ## Key Components
+
+ - **Consistency Engine**: Ensures logical consistency
+ - **Conversation Manager**: Manages dialogue
+ - **Explanation Generator**: Generates explanations
+ - **Task Decomposer**: Decomposes complex tasks
+ - **Self Verifier**: Verifies reasoning steps
+
+ ## Mathematical Foundation
+
+ The hybrid agent combines reasoning modes as a weighted mixture:
+
+ $$R_{hybrid} = \alpha R_{LLM} + \beta R_{deterministic} + \gamma R_{symbolic}$$
+
+ Where $\alpha + \beta + \gamma = 1$ and each $R_i$ is a reasoning component.
+
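+ Read as confidence scores, the mixture is a plain convex combination; a tiny sketch with weights and component scores invented for illustration (none of the numbers come from the package):
+
+ ```python
+ # Component scores produced by each reasoning mode (illustrative values)
+ r_llm, r_deterministic, r_symbolic = 0.7, 0.9, 0.6
+
+ # Mixture weights; they must sum to 1
+ alpha, beta, gamma = 0.5, 0.3, 0.2
+ assert abs(alpha + beta + gamma - 1.0) < 1e-9
+
+ r_hybrid = alpha * r_llm + beta * r_deterministic + gamma * r_symbolic
+ print(r_hybrid)   # ~0.74
+ ```
+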
+ ## Documentation
+
+ - **[Consistency Engine](consistency-engine.md)**: Consistency mechanisms
+ - **[Conversation Manager](conversation-manager.md)**: Dialogue management
+ - **[Task Decomposer](task-decomposer.md)**: Task decomposition
+
+ ## Next Steps
+
+ - [Consistency Engine](consistency-engine.md) - Consistency mechanisms
+ - [Conversation Manager](conversation-manager.md) - Dialogue management
@@ -0,0 +1,82 @@
+ # AnnotationEngine
+
+ The AnnotationEngine class provides the core image annotation functionality.
+
+ ## Overview
+
+ AnnotationEngine processes images to extract geometric primitives, generate semantic labels, and enable natural language queries.
+
+ ## Initialization
+
+ ```python
+ from image_annotation import ImageAnnotationEngine
+ from image_annotation.annotation_engine import AnnotationConfig
+
+ # Basic initialization
+ engine = ImageAnnotationEngine()
+
+ # With custom configuration
+ config = AnnotationConfig(
+     gpt_model="gpt-4o-mini",
+     enable_temporal_tracking=True,
+     auto_detect_type=True,
+     output_format="all"
+ )
+ engine = ImageAnnotationEngine(config=config)
+ ```
+
+ ## Main Methods
+
+ ### annotate()
+
+ Annotate image(s) with full automation:
+
+ ```python
+ result = engine.annotate(
+     input="image.png",
+     frame_id=None,
+     output="all"
+ )
+ ```
+
+ Returns an `AnnotationResult` object with the following fields (see the short example below):
+ - `annotation_graph`: Graph of entities, labels, and relations
+ - `overlay_image`: Annotated image overlay
+ - `formal_report`: Textual report
+ - `json_output`: JSON representation
+
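+ A short sketch reading those fields after a call to `annotate()` (field and attribute names as listed above; the concrete types of `overlay_image` and `json_output` are not documented here, so they are only assigned):
+
+ ```python
+ result = engine.annotate(input="image.png", output="all")
+
+ graph = result.annotation_graph
+ print(f"{len(graph.entities)} entities detected")   # entities exposed on the graph
+ print(result.formal_report)                         # human-readable report
+ report_json = result.json_output                    # machine-readable representation
+ ```
+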
+ ### query()
+
+ Query an image with natural language:
+
+ ```python
+ result = engine.query(
+     input="image.png",
+     query="find all circles",
+     frame_id=None
+ )
+ ```
+
+ Returns a dictionary with:
+ - `answer`: Natural language answer
+ - `entities`: List of relevant entities
+ - `measurements`: Dict of measurements
+ - `graph`: AnnotationGraph if needed
+
+ ## Mathematical Foundation
+
+ The annotation process involves:
+
+ 1. **Geometric Detection**: Extract primitives using computer vision
+ 2. **Semantic Classification**: Assign labels using an LLM
+
+ For entity $E_i$ in image $I$:
+
+ $$P(L_i | E_i, I) = \text{LLM}(E_i, I)$$
+
+ Where $L_i$ is the semantic label.
+
+ ## Next Steps
+
+ - [Geometric Primitives](geometric-primitives.md) - Primitive extraction
+ - [Semantic Labeling](semantic-labeling.md) - Labeling system
@@ -0,0 +1,113 @@
+ # Image Annotation Features
+
+ Comprehensive guide to image annotation features.
+
+ ## Geometric Primitive Extraction
+
+ Extracts geometric primitives from images using computer vision algorithms.
+
+ ### Mathematical Foundation
+
+ **Circle Detection**: Circles are detected using the Hough circle transform:
+
+ $$(x - x_c)^2 + (y - y_c)^2 = r^2$$
+
+ Where $(x_c, y_c)$ is the center and $r$ is the radius.
+
+ **Line Detection**: Lines are detected using the Hough line transform:
+
+ $$\rho = x \cos \theta + y \sin \theta$$
+
+ Where $\rho$ is the distance from the origin and $\theta$ is the angle.
+
+ **Contour Detection**: Contours are extracted using:
+
+ $$C = \text{findContours}(\nabla I)$$
+
+ Where $\nabla I$ is the image gradient.
+
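+ For reference, the same primitives can be detected directly with OpenCV's Hough transforms; a minimal sketch outside the engine (parameter values are arbitrary and would need tuning per image):
+
+ ```python
+ import cv2
+ import numpy as np
+
+ gray = cv2.cvtColor(cv2.imread("image.png"), cv2.COLOR_BGR2GRAY)
+ blurred = cv2.medianBlur(gray, 5)
+
+ # Hough circle transform: each detection is (x_c, y_c, r)
+ circles = cv2.HoughCircles(
+     blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
+     param1=100, param2=40, minRadius=5, maxRadius=100,
+ )
+
+ # Probabilistic Hough line transform on an edge map: each detection is (x1, y1, x2, y2)
+ edges = cv2.Canny(blurred, 50, 150)
+ lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
+                         minLineLength=30, maxLineGap=5)
+
+ print(0 if circles is None else len(circles[0]), 0 if lines is None else len(lines))
+ ```
+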
+ ### Usage
+
+ ```python
+ result = engine.annotate("image.png", output="all")
+ entities = result.annotation_graph.entities
+ circles = [e for e in entities if e.primitive_type == "circle"]
+ ```
+
+ ## Semantic Labeling
+
+ Assigns meaningful labels to detected geometric primitives.
+
+ ### Mathematical Foundation
+
+ For entity $E_i$ with geometric properties $G_i$, the semantic label $L_i$ is:
+
+ $$L_i = \arg\max_{l} P(l | E_i, G_i, I)$$
+
+ Where $I$ is the image context, and the probability is estimated using an LLM:
+
+ $$P(l | E_i, G_i, I) \approx \text{LLM}(E_i, G_i, I)$$
+
+ ### Usage
+
+ ```python
+ labels = result.annotation_graph.labels
+ for label in labels:
+     print(f"Entity {label.entity_id}: {label.label}")
+ ```
+
+ ## Query Capabilities
+
+ Natural language queries about images.
+
+ ### Query Types
+
+ - **Object Detection**: "find all circles", "identify military bases"
+ - **Measurement**: "measure distance", "calculate area"
+ - **Counting**: "how many objects", "count lines"
+ - **Spatial**: "what is near X", "find objects between A and B"
+
+ ### Mathematical Foundation
+
+ Queries are processed by:
+
+ 1. Parsing the natural language query $Q$
+ 2. Identifying relevant entities $E_Q \subseteq E$
+ 3. Computing requested measurements $M_Q$
+ 4. Generating answer $A = f(Q, E_Q, M_Q)$
+
+ ### Usage
+
+ ```python
+ result = engine.query("image.png", "find all circles")
+ print(result["answer"])
+ ```
+
+ ## Temporal Tracking
+
+ Tracks entities across video frames using Kalman filters.
+
+ ### Mathematical Foundation
+
+ Temporal tracking uses Kalman filters for state estimation:
+
+ **State Model**:
+ $$\mathbf{x}_t = \mathbf{F} \mathbf{x}_{t-1} + \mathbf{B} \mathbf{u}_t + \mathbf{w}_t$$
+
+ **Observation Model**:
+ $$\mathbf{z}_t = \mathbf{H} \mathbf{x}_t + \mathbf{v}_t$$
+
+ **Kalman Update**:
+ $$\hat{\mathbf{x}}_{t|t} = \hat{\mathbf{x}}_{t|t-1} + \mathbf{K}_t (\mathbf{z}_t - \mathbf{H} \hat{\mathbf{x}}_{t|t-1})$$
+
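+ A minimal numpy sketch of one predict/update cycle for a 1-D constant-velocity track (the matrices are illustrative; the engine's internal filter configuration is not specified in this document, and the control term $\mathbf{B}\mathbf{u}_t$ is omitted):
+
+ ```python
+ import numpy as np
+
+ dt = 1.0
+ F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition over [position, velocity]
+ H = np.array([[1.0, 0.0]])              # only position is observed
+ Q = 0.01 * np.eye(2)                    # process noise covariance
+ R = np.array([[0.25]])                  # observation noise covariance
+
+ x = np.array([[0.0], [1.0]])            # prior state estimate
+ P = np.eye(2)                           # prior state covariance
+ z = np.array([[1.2]])                   # new position measurement
+
+ # Predict
+ x_pred = F @ x
+ P_pred = F @ P @ F.T + Q
+
+ # Update: Kalman gain K_t, then the correction shown above
+ S = H @ P_pred @ H.T + R
+ K = P_pred @ H.T @ np.linalg.inv(S)
+ x = x_pred + K @ (z - H @ x_pred)
+ P = (np.eye(2) - K @ H) @ P_pred
+ print(x.ravel())                        # updated [position, velocity] estimate
+ ```
+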
+ ### Usage
+
+ ```python
+ engine = ImageAnnotationEngine(enable_temporal_tracking=True)
+ for frame_id, image_path in enumerate(frame_paths):
+     result = engine.annotate(image_path, frame_id=frame_id)
+ ```
+
+ ## Next Steps
+
+ - [Integration](integration.md) - Integration with agents