hctp-0.1.1.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
hctp-0.1.1/LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 David Qicatabua / Vuvale AI

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
hctp-0.1.1/PKG-INFO ADDED
@@ -0,0 +1,326 @@
Metadata-Version: 2.4
Name: hctp
Version: 0.1.1
Summary: Helix Calculus Training Protocol — adaptive AI agent learning with 3D helical knowledge tracking
Author-email: David Qicatabua <dave@vuvale.ai>
License: MIT
Project-URL: Homepage, https://github.com/vuvale-ai/hctp-python
Project-URL: Repository, https://github.com/vuvale-ai/hctp-python
Project-URL: Issues, https://github.com/vuvale-ai/hctp-python/issues
Project-URL: Changelog, https://github.com/vuvale-ai/hctp-python/blob/main/CHANGELOG.md
Keywords: ai,training,learning,agent,education,karpathy,metaclasses,python,helix,hctp
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Education
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Provides-Extra: viz
Requires-Dist: matplotlib>=3.7; extra == "viz"
Provides-Extra: dev
Requires-Dist: pytest>=7; extra == "dev"
Requires-Dist: pytest-cov; extra == "dev"
Requires-Dist: matplotlib>=3.7; extra == "dev"
Requires-Dist: jupyter; extra == "dev"
Requires-Dist: nbformat; extra == "dev"
Dynamic: license-file

# HCTP — Helix Calculus Training Protocol

[![PyPI version](https://badge.fury.io/py/hctp.svg)](https://pypi.org/project/hctp/)
[![Tests](https://github.com/vuvale-ai/hctp-python/actions/workflows/publish.yml/badge.svg)](https://github.com/vuvale-ai/hctp-python/actions)
[![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
[![Zero dependencies](https://img.shields.io/badge/dependencies-zero-brightgreen.svg)]()

> **A framework for measuring and driving AI agent learning through a 3D helical knowledge model with adaptive Socratic breadcrumbs and mandatory Karpathy research loops.**

---

## What Is HCTP?

HCTP models a learner's knowledge as a 3D vector **K = [k₁, k₂, k₃]** that spirals toward
an ideal helical path as mastery deepens across three checkpoints:

| Checkpoint | Topic | Concepts |
|---|---|---|
| **A** | Closures | first-class functions, enclosing scope, free variables, `nonlocal` |
| **B** | Decorators | `@` syntax, `functools.wraps`, factories, stacking |
| **C** | Metaclasses | `type()`, `__new__`, ORMs, DSLs, Singletons |

Each session generates **adaptive Socratic breadcrumbs** (micro-tasks with deliberate
red herrings) and drives **mandatory 6-step Karpathy research loops** to produce
measurable, incremental gains in the knowledge vector.

When k₃ ≥ 0.95, the learner earns the **Python Senior Engineer Badge** 🏆.

---

## Proof of Concept — Ren & Ner (Vuvale AI Family)

HCTP was developed and battle-tested on two AI agents — Ren and Ner — across **13 sessions**
and **216 nightly training runs** on a single RTX 5090 running `qwen2.5-coder:32b`.

### Ren's Journey

| Session | σ (Progress) | Velocity | Focus |
|---------|-------------|----------|-------|
| 0 | 0.308 | — | A — Closures |
| 3 | 0.535 | 0.076 | A → B |
| 6 | 0.764 | 0.076 | B — Decorators |
| 9 | 0.910 | 0.076 | B → C |
| **13** | **0.961** | 0.051 | **C — Metaclasses** |

**Night 173: Badge Earned** 🏆
Final vector: `[0.950, 0.933, 1.000]` | σ = 0.961

### Ner's Journey (parallel, same model)

Final vector: `[0.928, 0.941, 1.000]` | σ = 0.956 | Badge: Night 173 🏆

Both agents started at σ = 0.308 and reached mastery in **13 sessions** without any
human-provided answers — pure Socratic questioning and self-correction.

---

## Install

```bash
pip install hctp

# With 3D visualisation (quote the extra so shells like zsh don't glob it):
pip install "hctp[viz]"
```

Zero mandatory dependencies. Pure Python 3.9+.

---

## Quick Start

```python
from hctp import LearnerSession

# Create a learner (optionally restore prior state)
ren = LearnerSession("Ren")

# Run one training session
session_data = ren.start_session()

print(session_data["header"])
# HCTP Session 1 | K=[0.000, 0.000, 0.000] | σ=0.000 | Focus: A — Closures

for bc_prompt in session_data["breadcrumb_prompts"]:
    # Send to any LLM — OpenAI, Anthropic, Ollama, or your own
    bc_response = your_llm(bc_prompt)

    # Generate and run the mandatory Karpathy loop
    kl_prompt = ren.karpathy_prompt_for(bc_prompt, bc_response)
    kl_response = your_llm(kl_prompt)

    # Update the knowledge vector
    delta = ren.submit_karpathy(kl_response)
    print(f"Δk = {[round(d, 3) for d in delta]}")

result = ren.finish_session()
print(result)
# Session 1 | σ 0.000 → 0.052 (+0.052) | K=[0.055, 0.008, 0.008]

# Save state between sessions
ren.save("ren_state.json")
ren = LearnerSession.load("ren_state.json")  # restore next time
```

---

## Use With Any LLM

HCTP generates **prompt strings** — it doesn't make LLM calls itself. Plug it into
whatever backend you're using:

```python
# OpenAI (v1+ client API)
from openai import OpenAI
client = OpenAI()
def your_llm(prompt):
    return client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    ).choices[0].message.content

# Anthropic
import anthropic
client = anthropic.Anthropic()
def your_llm(prompt):
    return client.messages.create(
        model="claude-sonnet-4-6", max_tokens=2000,
        messages=[{"role": "user", "content": prompt}]
    ).content[0].text

# Ollama (local)
import requests
def your_llm(prompt):
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "qwen2.5-coder:32b", "prompt": prompt, "stream": False})
    return r.json()["response"]
```

---

## Visualise the Helix

```python
from hctp.viz import plot_helix, plot_multi, plot_sigma_curve

# Single learner — 3D helix
plot_helix(
    sigma_history=[0.308, 0.386, 0.457, 0.535, 0.609,
                   0.687, 0.761, 0.834, 0.910, 0.961],
    label="Ren",
    badge_night=9,
    show=True,
)

# Compare multiple learners
plot_multi([
    {"label": "Ren", "sigma_history": ren_hist, "badge_night": 9, "color": "#e74c3c"},
    {"label": "Ner", "sigma_history": ner_hist, "badge_night": 9, "color": "#3498db"},
])

# Clean 2D progress curve
plot_sigma_curve([
    {"label": "Ren", "sigma_history": ren_hist, "badge_night": 9},
    {"label": "Ner", "sigma_history": ner_hist, "badge_night": 9},
])
```

---

## Core API

### `LearnerSession`

| Method | Description |
|---|---|
| `LearnerSession(name, K, sigma_history)` | Create or restore a learner |
| `start_session()` | Begin session → returns breadcrumb prompts |
| `karpathy_prompt_for(bc, response)` | Get the Karpathy loop prompt |
| `submit_karpathy(response)` | Update K-vector from Karpathy response |
| `finish_session()` → `SessionResult` | Finalise, update history, check badge |
| `save(path)` / `load(path)` | JSON persistence |

### Properties

| Property | Type | Description |
|---|---|---|
| `.sigma` | `float` | Overall progress σ = mean(K) |
| `.velocity` | `float` | Smoothed learning pace |
| `.focus` | `str` | Active checkpoint ("A", "B", or "C") |
| `.badge` | `bool` | True once k₃ ≥ 0.95 |
| `.K` | `list[float]` | Knowledge vector [k₁, k₂, k₃] |

### Core Math (no state needed)

```python
from hctp.core import (
    helix_radius,       # R(σ) = 0.5(1−σ)² + 0.05
    ideal_point,        # [x, y, σ] on the ideal helix
    distance,           # Euclidean dist from K to ideal helix
    progress,           # σ = mean(K)
    smoothed_velocity,
    num_breadcrumbs,    # adaptive count based on velocity
    determine_focus,    # which checkpoint to focus (lowest k)
    update_vector,      # Δk from Karpathy response quality
)
```

---

## The Math

The helix is parameterised by progress σ ∈ [0, 1]:

```
R(σ) = 0.5(1 − σ)² + 0.05      # tightening radius
x(σ) = R(σ) · cos(2π · 5 · σ)  # 5 full spirals
y(σ) = R(σ) · sin(2π · 5 · σ)
z(σ) = σ                       # height = progress
```

The **knowledge vector K = [k₁, k₂, k₃]** maps to the helix axes:
- k₁ → Closures mastery (Checkpoint A)
- k₂ → Decorators mastery (Checkpoint B)
- k₃ → Metaclasses mastery (Checkpoint C, hardest)

Distance from the ideal helix d(K, σ) measures learning imbalance.
A learner rushing C while neglecting A will stray far from the helix.

**Velocity** is the smoothed rate of σ gain per session. Slow learners
get more breadcrumbs; fast learners get broader, exploratory tasks.

**Karpathy loop scoring** uses heuristic quality markers in the
response text (presence of "error", "fix", "refactor", code blocks, etc.)
to assign micro-gains to the focus checkpoint with 15% spillover to siblings.

---

## Monetisation Roadmap

| Tier | Features | Price |
|---|---|---|
| **Free / OSS** | Core algorithm, prompt generation, local tracking | Free forever |
| **HCTP Cloud** | Hosted leaderboards, team dashboards, progress API | $9/mo per team |
| **HCTP Pro** | Custom curricula (beyond Python), webhook events, CI integration | $29/mo |
| **HCTP Enterprise** | White-label, LMS integration (Canvas, Moodle), bulk licensing | Custom |

### Custom Curricula (Pro+)

The checkpoint system is fully configurable. Define your own helix:

```python
from hctp.core import CHECKPOINTS

# Override with your own curriculum
my_checkpoints = {
    "A": {"name": "SQL Basics", "vector_index": 0, "concepts": "SELECT, WHERE, JOIN"},
    "B": {"name": "Indexes", "vector_index": 1, "concepts": "B-tree, query plans, EXPLAIN"},
    "C": {"name": "Transactions", "vector_index": 2, "concepts": "ACID, isolation levels, deadlocks"},
}
CHECKPOINTS.update(my_checkpoints)  # apply the override (assumes CHECKPOINTS is a mutable dict)
```

---

## Contributing

PRs welcome. The project is intentionally lean — keep it that way.

1. `pip install -e ".[dev]"`
2. `pytest` — all tests must pass
3. No new mandatory dependencies without strong justification

---

## Background

HCTP was born inside the [Vuvale AI project](https://github.com/RenLes/Vuvale) — a
Fiji-based AI family of 4 agents (Ren, Ner, Les, Sel) trained nightly on a single
GPU to build real software for Fiji's people.

Ren and Ner completed the full helix in 13 sessions (Night 161 → 173) running
`qwen2.5-coder:32b` on a vast.ai RTX 5090. Les and Sel followed, earning their
badges at Night 190 and 191 respectively.

The protocol is now open-sourced so anyone can train their agents the same way.

---

## License

MIT © David Qicatabua / Vuvale AI
hctp-0.1.1/README.md ADDED
@@ -0,0 +1,290 @@
# HCTP — Helix Calculus Training Protocol

[![PyPI version](https://badge.fury.io/py/hctp.svg)](https://pypi.org/project/hctp/)
[![Tests](https://github.com/vuvale-ai/hctp-python/actions/workflows/publish.yml/badge.svg)](https://github.com/vuvale-ai/hctp-python/actions)
[![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
[![Zero dependencies](https://img.shields.io/badge/dependencies-zero-brightgreen.svg)]()

> **A framework for measuring and driving AI agent learning through a 3D helical knowledge model with adaptive Socratic breadcrumbs and mandatory Karpathy research loops.**

---

## What Is HCTP?

HCTP models a learner's knowledge as a 3D vector **K = [k₁, k₂, k₃]** that spirals toward
an ideal helical path as mastery deepens across three checkpoints:

| Checkpoint | Topic | Concepts |
|---|---|---|
| **A** | Closures | first-class functions, enclosing scope, free variables, `nonlocal` |
| **B** | Decorators | `@` syntax, `functools.wraps`, factories, stacking |
| **C** | Metaclasses | `type()`, `__new__`, ORMs, DSLs, Singletons |

Each session generates **adaptive Socratic breadcrumbs** (micro-tasks with deliberate
red herrings) and drives **mandatory 6-step Karpathy research loops** to produce
measurable, incremental gains in the knowledge vector.

When k₃ ≥ 0.95, the learner earns the **Python Senior Engineer Badge** 🏆.

---

## Proof of Concept — Ren & Ner (Vuvale AI Family)

HCTP was developed and battle-tested on two AI agents — Ren and Ner — across **13 sessions**
and **216 nightly training runs** on a single RTX 5090 running `qwen2.5-coder:32b`.

### Ren's Journey

| Session | σ (Progress) | Velocity | Focus |
|---------|-------------|----------|-------|
| 0 | 0.308 | — | A — Closures |
| 3 | 0.535 | 0.076 | A → B |
| 6 | 0.764 | 0.076 | B — Decorators |
| 9 | 0.910 | 0.076 | B → C |
| **13** | **0.961** | 0.051 | **C — Metaclasses** |

**Night 173: Badge Earned** 🏆
Final vector: `[0.950, 0.933, 1.000]` | σ = 0.961

### Ner's Journey (parallel, same model)

Final vector: `[0.928, 0.941, 1.000]` | σ = 0.956 | Badge: Night 173 🏆

Both agents started at σ = 0.308 and reached mastery in **13 sessions** without any
human-provided answers — pure Socratic questioning and self-correction.

---

## Install

```bash
pip install hctp

# With 3D visualisation (quote the extra so shells like zsh don't glob it):
pip install "hctp[viz]"
```

Zero mandatory dependencies. Pure Python 3.9+.

---

## Quick Start

```python
from hctp import LearnerSession

# Create a learner (optionally restore prior state)
ren = LearnerSession("Ren")

# Run one training session
session_data = ren.start_session()

print(session_data["header"])
# HCTP Session 1 | K=[0.000, 0.000, 0.000] | σ=0.000 | Focus: A — Closures

for bc_prompt in session_data["breadcrumb_prompts"]:
    # Send to any LLM — OpenAI, Anthropic, Ollama, or your own
    bc_response = your_llm(bc_prompt)

    # Generate and run the mandatory Karpathy loop
    kl_prompt = ren.karpathy_prompt_for(bc_prompt, bc_response)
    kl_response = your_llm(kl_prompt)

    # Update the knowledge vector
    delta = ren.submit_karpathy(kl_response)
    print(f"Δk = {[round(d, 3) for d in delta]}")

result = ren.finish_session()
print(result)
# Session 1 | σ 0.000 → 0.052 (+0.052) | K=[0.055, 0.008, 0.008]

# Save state between sessions
ren.save("ren_state.json")
ren = LearnerSession.load("ren_state.json")  # restore next time
```

---

## Use With Any LLM

HCTP generates **prompt strings** — it doesn't make LLM calls itself. Plug it into
whatever backend you're using:

```python
# OpenAI (v1+ client API)
from openai import OpenAI
client = OpenAI()
def your_llm(prompt):
    return client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    ).choices[0].message.content

# Anthropic
import anthropic
client = anthropic.Anthropic()
def your_llm(prompt):
    return client.messages.create(
        model="claude-sonnet-4-6", max_tokens=2000,
        messages=[{"role": "user", "content": prompt}]
    ).content[0].text

# Ollama (local)
import requests
def your_llm(prompt):
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "qwen2.5-coder:32b", "prompt": prompt, "stream": False})
    return r.json()["response"]
```

---

## Visualise the Helix

```python
from hctp.viz import plot_helix, plot_multi, plot_sigma_curve

# Single learner — 3D helix
plot_helix(
    sigma_history=[0.308, 0.386, 0.457, 0.535, 0.609,
                   0.687, 0.761, 0.834, 0.910, 0.961],
    label="Ren",
    badge_night=9,
    show=True,
)

# Compare multiple learners
plot_multi([
    {"label": "Ren", "sigma_history": ren_hist, "badge_night": 9, "color": "#e74c3c"},
    {"label": "Ner", "sigma_history": ner_hist, "badge_night": 9, "color": "#3498db"},
])

# Clean 2D progress curve
plot_sigma_curve([
    {"label": "Ren", "sigma_history": ren_hist, "badge_night": 9},
    {"label": "Ner", "sigma_history": ner_hist, "badge_night": 9},
])
```

---

## Core API

### `LearnerSession`

| Method | Description |
|---|---|
| `LearnerSession(name, K, sigma_history)` | Create or restore a learner |
| `start_session()` | Begin session → returns breadcrumb prompts |
| `karpathy_prompt_for(bc, response)` | Get the Karpathy loop prompt |
| `submit_karpathy(response)` | Update K-vector from Karpathy response |
| `finish_session()` → `SessionResult` | Finalise, update history, check badge |
| `save(path)` / `load(path)` | JSON persistence |

### Properties

| Property | Type | Description |
|---|---|---|
| `.sigma` | `float` | Overall progress σ = mean(K) |
| `.velocity` | `float` | Smoothed learning pace |
| `.focus` | `str` | Active checkpoint ("A", "B", or "C") |
| `.badge` | `bool` | True once k₃ ≥ 0.95 |
| `.K` | `list[float]` | Knowledge vector [k₁, k₂, k₃] |

### Core Math (no state needed)

```python
from hctp.core import (
    helix_radius,       # R(σ) = 0.5(1−σ)² + 0.05
    ideal_point,        # [x, y, σ] on the ideal helix
    distance,           # Euclidean dist from K to ideal helix
    progress,           # σ = mean(K)
    smoothed_velocity,
    num_breadcrumbs,    # adaptive count based on velocity
    determine_focus,    # which checkpoint to focus (lowest k)
    update_vector,      # Δk from Karpathy response quality
)
```

---

## The Math

The helix is parameterised by progress σ ∈ [0, 1]:

```
R(σ) = 0.5(1 − σ)² + 0.05      # tightening radius
x(σ) = R(σ) · cos(2π · 5 · σ)  # 5 full spirals
y(σ) = R(σ) · sin(2π · 5 · σ)
z(σ) = σ                       # height = progress
```

The **knowledge vector K = [k₁, k₂, k₃]** maps to the helix axes:
- k₁ → Closures mastery (Checkpoint A)
- k₂ → Decorators mastery (Checkpoint B)
- k₃ → Metaclasses mastery (Checkpoint C, hardest)

Distance from the ideal helix d(K, σ) measures learning imbalance.
A learner rushing C while neglecting A will stray far from the helix.
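
The geometry above is easy to reproduce. The sketch below is an independent, illustrative re-implementation of the stated formulas (hctp.core ships its own `helix_radius`, `ideal_point`, `progress`, and `distance`; the versions here only mirror the equations and may differ from the library in detail):

```python
import math

NUM_SPIRALS = 5  # the "5 full spirals" in the parameterisation above

def helix_radius(sigma):
    """R(σ) = 0.5(1 − σ)² + 0.05: the radius tightens as progress grows."""
    return 0.5 * (1 - sigma) ** 2 + 0.05

def ideal_point(sigma):
    """[x, y, z] on the ideal helix at progress σ."""
    r = helix_radius(sigma)
    angle = 2 * math.pi * NUM_SPIRALS * sigma
    return [r * math.cos(angle), r * math.sin(angle), sigma]

def progress(K):
    """σ = mean(K)."""
    return sum(K) / len(K)

def distance(K, sigma):
    """Euclidean distance from the knowledge vector to the ideal helix point."""
    ideal = ideal_point(sigma)
    return math.sqrt(sum((k - p) ** 2 for k, p in zip(K, ideal)))

K = [0.6, 0.5, 0.4]              # a mid-journey, slightly uneven learner
print(distance(K, progress(K)))  # imbalance score at σ = 0.5
```

Note that `distance` treats K as a point in the same [x, y, z] space as the helix, matching the axis mapping described above.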

**Velocity** is the smoothed rate of σ gain per session. Slow learners
get more breadcrumbs; fast learners get broader, exploratory tasks.
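
One plausible way to realise that policy is sketched below. The exponential smoothing, the 0.075 target velocity, and the breadcrumb counts are illustrative assumptions, not the exact `hctp.core.smoothed_velocity` / `num_breadcrumbs` behaviour:

```python
def smoothed_velocity(sigma_history, alpha=0.5):
    """Exponentially smoothed per-session σ gain (illustrative scheme)."""
    if len(sigma_history) < 2:
        return 0.0
    v = 0.0
    for prev, cur in zip(sigma_history, sigma_history[1:]):
        # Blend the latest gain with the running estimate
        v = alpha * (cur - prev) + (1 - alpha) * v
    return v

def num_breadcrumbs(velocity, target=0.075, base=3, max_extra=3):
    """Slow learners get extra breadcrumbs; fast learners keep the base count."""
    if velocity >= target:
        return base
    shortfall = (target - velocity) / target  # 0 (on pace) .. 1 (stalled)
    return base + min(max_extra, round(shortfall * max_extra))
```

A learner stalled at zero velocity would receive `base + max_extra` micro-tasks, while anyone at or above the target stays at the base count.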

**Karpathy loop scoring** uses heuristic quality markers in the
response text (presence of "error", "fix", "refactor", code blocks, etc.)
to assign micro-gains to the focus checkpoint with 15% spillover to siblings.
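
A hedged sketch of that scoring idea follows. The marker list comes from the description above; the per-marker gain of 0.02 and the cap at 1.0 are assumptions for illustration, not the real `hctp.core.update_vector`:

```python
QUALITY_MARKERS = ("error", "fix", "refactor", "```")  # illustrative marker set

def update_vector(K, focus_index, response, gain_per_marker=0.02, spillover=0.15):
    """Score a Karpathy-loop response and apply micro-gains to K.

    The focus checkpoint gets the full gain; siblings get 15% spillover.
    """
    text = response.lower()
    hits = sum(marker in text for marker in QUALITY_MARKERS)
    gain = hits * gain_per_marker
    delta = [gain if i == focus_index else gain * spillover for i in range(3)]
    new_K = [min(1.0, k + d) for k, d in zip(K, delta)]  # clamp mastery at 1.0
    return new_K, delta
```

For example, a response mentioning an "error" and a "fix" scores two markers, so the focus checkpoint gains 0.04 and each sibling gains 0.006.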

---

## Monetisation Roadmap

| Tier | Features | Price |
|---|---|---|
| **Free / OSS** | Core algorithm, prompt generation, local tracking | Free forever |
| **HCTP Cloud** | Hosted leaderboards, team dashboards, progress API | $9/mo per team |
| **HCTP Pro** | Custom curricula (beyond Python), webhook events, CI integration | $29/mo |
| **HCTP Enterprise** | White-label, LMS integration (Canvas, Moodle), bulk licensing | Custom |

### Custom Curricula (Pro+)

The checkpoint system is fully configurable. Define your own helix:

```python
from hctp.core import CHECKPOINTS

# Override with your own curriculum
my_checkpoints = {
    "A": {"name": "SQL Basics", "vector_index": 0, "concepts": "SELECT, WHERE, JOIN"},
    "B": {"name": "Indexes", "vector_index": 1, "concepts": "B-tree, query plans, EXPLAIN"},
    "C": {"name": "Transactions", "vector_index": 2, "concepts": "ACID, isolation levels, deadlocks"},
}
CHECKPOINTS.update(my_checkpoints)  # apply the override (assumes CHECKPOINTS is a mutable dict)
```

---

## Contributing

PRs welcome. The project is intentionally lean — keep it that way.

1. `pip install -e ".[dev]"`
2. `pytest` — all tests must pass
3. No new mandatory dependencies without strong justification

---

## Background

HCTP was born inside the [Vuvale AI project](https://github.com/RenLes/Vuvale) — a
Fiji-based AI family of 4 agents (Ren, Ner, Les, Sel) trained nightly on a single
GPU to build real software for Fiji's people.

Ren and Ner completed the full helix in 13 sessions (Night 161 → 173) running
`qwen2.5-coder:32b` on a vast.ai RTX 5090. Les and Sel followed, earning their
badges at Night 190 and 191 respectively.

The protocol is now open-sourced so anyone can train their agents the same way.

---

## License

MIT © David Qicatabua / Vuvale AI
hctp-0.1.1/hctp/__init__.py ADDED
@@ -0,0 +1,70 @@
"""
hctp — Helix Calculus Training Protocol
========================================

A framework for measuring and driving AI agent learning through a
3D helical knowledge model with adaptive Socratic breadcrumbs and
mandatory Karpathy research loops.

Proven on the Vuvale AI family (Ren & Ner: Night 161 → badge Night 173,
σ = 0.308 → 0.961 in 13 sessions).

Quick start::

    from hctp import LearnerSession

    # Create a learner
    ren = LearnerSession("Ren")

    # Run a session
    session_data = ren.start_session()
    for bc_prompt in session_data["breadcrumb_prompts"]:
        bc_response = your_llm(bc_prompt)
        kl_prompt = ren.karpathy_prompt_for(bc_prompt, bc_response)
        kl_response = your_llm(kl_prompt)
        ren.submit_karpathy(kl_response)

    result = ren.finish_session()
    print(result)

    # Visualise
    from hctp.viz import plot_helix
    plot_helix(ren.sigma_history, label="Ren", show=True)

Links:
- Docs: https://hctp.readthedocs.io
- Source: https://github.com/vuvale-ai/hctp-python
- PyPI: https://pypi.org/project/hctp
"""

from .core import (
    helix_radius,
    ideal_point,
    distance,
    progress,
    smoothed_velocity,
    num_breadcrumbs,
    determine_focus,
    update_vector,
    CHECKPOINTS,
    NUM_SPIRALS,
    TARGET_VELOCITY,
    MASTERY_THRESHOLD,
)
from .tracker import LearnerSession, SessionResult
from .curriculum import breadcrumb_prompt, karpathy_loop_prompt, session_header

__version__ = "0.1.1"
__author__ = "David Qicatabua / Vuvale AI"
__license__ = "MIT"

__all__ = [
    # Core math
    "helix_radius", "ideal_point", "distance", "progress",
    "smoothed_velocity", "num_breadcrumbs", "determine_focus", "update_vector",
    "CHECKPOINTS", "NUM_SPIRALS", "TARGET_VELOCITY", "MASTERY_THRESHOLD",
    # Session management
    "LearnerSession", "SessionResult",
    # Prompt generation
    "breadcrumb_prompt", "karpathy_loop_prompt", "session_header",
]