kolmgformers 0.0.3__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (43)
  1. kolmgformers-0.0.3/LICENSE +179 -0
  2. kolmgformers-0.0.3/MANIFEST.in +4 -0
  3. kolmgformers-0.0.3/PKG-INFO +219 -0
  4. kolmgformers-0.0.3/README.md +184 -0
  5. kolmgformers-0.0.3/kolmgformers/__init__.py +270 -0
  6. kolmgformers-0.0.3/kolmgformers/advanced.py +1204 -0
  7. kolmgformers-0.0.3/kolmgformers/attention.py +753 -0
  8. kolmgformers-0.0.3/kolmgformers/configuration.py +557 -0
  9. kolmgformers-0.0.3/kolmgformers/di.py +324 -0
  10. kolmgformers-0.0.3/kolmgformers/diffusion.py +932 -0
  11. kolmgformers-0.0.3/kolmgformers/dla/__init__.py +16 -0
  12. kolmgformers-0.0.3/kolmgformers/dla/absorption.py +245 -0
  13. kolmgformers-0.0.3/kolmgformers/dla/buffer.py +226 -0
  14. kolmgformers-0.0.3/kolmgformers/dla/pipeline.py +356 -0
  15. kolmgformers-0.0.3/kolmgformers/kolmog_benchmarks.py +377 -0
  16. kolmgformers-0.0.3/kolmgformers/kolmog_configuration.py +163 -0
  17. kolmgformers-0.0.3/kolmgformers/kolmog_tokenization.py +438 -0
  18. kolmgformers-0.0.3/kolmgformers/kolmog_training.py +676 -0
  19. kolmgformers-0.0.3/kolmgformers/layers.py +889 -0
  20. kolmgformers-0.0.3/kolmgformers/lora.py +447 -0
  21. kolmgformers-0.0.3/kolmgformers/mfs.py +326 -0
  22. kolmgformers-0.0.3/kolmgformers/modeling.py +1450 -0
  23. kolmgformers-0.0.3/kolmgformers/modeling_kolmogformer.py +1013 -0
  24. kolmgformers-0.0.3/kolmgformers/moe.py +603 -0
  25. kolmgformers-0.0.3/kolmgformers/nca/__init__.py +20 -0
  26. kolmgformers-0.0.3/kolmgformers/nca/ee.py +281 -0
  27. kolmgformers-0.0.3/kolmgformers/nca/fae.py +147 -0
  28. kolmgformers-0.0.3/kolmgformers/nca/novelty.py +166 -0
  29. kolmgformers-0.0.3/kolmgformers/nca/prl.py +171 -0
  30. kolmgformers-0.0.3/kolmgformers/nca/router.py +215 -0
  31. kolmgformers-0.0.3/kolmgformers/py.typed +0 -0
  32. kolmgformers-0.0.3/kolmgformers/rank_compression.py +496 -0
  33. kolmgformers-0.0.3/kolmgformers/tasa.py +258 -0
  34. kolmgformers-0.0.3/kolmgformers/tokenizer.py +670 -0
  35. kolmgformers-0.0.3/kolmgformers/training.py +2039 -0
  36. kolmgformers-0.0.3/kolmgformers/twe.py +465 -0
  37. kolmgformers-0.0.3/kolmgformers.egg-info/PKG-INFO +219 -0
  38. kolmgformers-0.0.3/kolmgformers.egg-info/SOURCES.txt +41 -0
  39. kolmgformers-0.0.3/kolmgformers.egg-info/dependency_links.txt +1 -0
  40. kolmgformers-0.0.3/kolmgformers.egg-info/requires.txt +20 -0
  41. kolmgformers-0.0.3/kolmgformers.egg-info/top_level.txt +1 -0
  42. kolmgformers-0.0.3/pyproject.toml +45 -0
  43. kolmgformers-0.0.3/setup.cfg +4 -0
@@ -0,0 +1,179 @@
1
+ Apache License
2
+ Version 2.0, January 2004
3
+ http://www.apache.org/licenses/
4
+
5
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6
+
7
+ 1. Definitions.
8
+
9
+ "License" shall mean the terms and conditions for use, reproduction,
10
+ and distribution as defined by Sections 1 through 9 of this document.
11
+
12
+ "Licensor" shall mean the copyright owner or entity authorized by
13
+ the copyright owner that is granting the License.
14
+
15
+ "Legal Entity" shall mean the union of the acting entity and all
16
+ other entities that control, are controlled by, or are under common
17
+ control with that entity. For the purposes of this definition,
18
+ "control" means (i) the power, direct or indirect, to cause the
19
+ direction or management of such entity, whether by contract or
20
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
21
+ outstanding shares, or (iii) beneficial ownership of such entity.
22
+
23
+ "You" (or "Your") shall mean an individual or Legal Entity
24
+ exercising permissions granted by this License.
25
+
26
+ "Source" form shall mean the preferred form for making modifications,
27
+ including but not limited to software source code, documentation
28
+ source, and configuration files.
29
+
30
+ "Object" form shall mean any form resulting from mechanical
31
+ transformation or translation of a Source form, including but
32
+ not limited to compiled object code, generated documentation,
33
+ and conversions to other media types.
34
+
35
+ "Work" shall mean the work of authorship made available under
36
+ the License, as indicated by a copyright notice that is included in
37
+ or attached to the work (an example is provided in the Appendix below).
38
+
39
+ "Derivative Works" shall mean any work, whether in Source or Object
40
+ form, that is based on (or derived from) the Work and for which the
41
+ editorial revisions, annotations, elaborations, or other modifications
42
+ represent, as a whole, an original work of authorship. For the purposes
43
+ of this License, Derivative Works shall not include works that remain
44
+ separable from, or merely link (or bind by name) to the interfaces of,
45
+ the Work and Derivative Works thereof.
46
+
47
+ "Contribution" shall mean, as submitted to the Licensor for inclusion
48
+ in the Work by the copyright owner or by an individual or Legal Entity
49
+ authorized to submit on behalf of the copyright owner. For the purposes
50
+ of this definition, "submitted" means any form of electronic, verbal,
51
+ or written communication sent to the Licensor or its representatives,
52
+ including but not limited to communication on electronic mailing lists,
53
+ source code control systems, and issue tracking systems that are managed
54
+ by, or on behalf of, the Licensor for the purpose of discussing and
55
+ improving the Work, but excluding communication that is conspicuously
56
+ marked or designated in writing by the copyright owner as "Not a
57
+ Contribution."
58
+
59
+ "Contributor" shall mean Licensor and any Legal Entity on behalf of
60
+ whom a Contribution has been received by the Licensor and included
61
+ within the Work.
62
+
63
+ 2. Grant of Copyright License. Subject to the terms and conditions of
64
+ this License, each Contributor hereby grants to You a perpetual,
65
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
66
+ copyright license to reproduce, prepare Derivative Works of,
67
+ publicly display, publicly perform, sublicense, and distribute the
68
+ Work and such Derivative Works in Source or Object form.
69
+
70
+ 3. Grant of Patent License. Subject to the terms and conditions of
71
+ this License, each Contributor hereby grants to You a perpetual,
72
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
73
+ (except as stated in this section) patent license to make, have made,
74
+ use, offer to sell, sell, import, and otherwise transfer the Work,
75
+ where such license applies only to those patent claims licensable
76
+ by such Contributor that are necessarily infringed by their
77
+ Contribution(s) alone or by the combination of their Contribution(s)
78
+ with the Work to which such Contribution(s) was submitted. If You
79
+ institute patent litigation against any entity (including a cross-claim
80
+ or counterclaim in a lawsuit) alleging that the Work or any
81
+ Contribution embodied within the Work constitutes direct or
82
+ contributory patent infringement, then any patent licenses granted to
83
+ You under this License for that Work shall terminate as of the date
84
+ such litigation is filed.
85
+
86
+ 4. Redistribution. You may reproduce and distribute copies of the
87
+ Work or Derivative Works thereof in any medium, with or without
88
+ modifications, and in Source or Object form, provided that You
89
+ meet the following conditions:
90
+
91
+ (a) You must give any other recipients of the Work or Derivative Works
92
+ a copy of this License; and
93
+
94
+ (b) You must cause any modified files to carry prominent notices
95
+ stating that You changed the files; and
96
+
97
+ (c) You must retain, in the Source form of any Derivative Works
98
+ that You distribute, all copyright, patent, trademark, and
99
+ attribution notices from the Source form of the Work,
100
+ excluding those notices that do not pertain to any part of
101
+ the Derivative Works; and
102
+
103
+ (d) If the Work includes a "NOTICE" text file as part of its
104
+ distribution, You must include a readable copy of the attribution
105
+ notices contained within such NOTICE file, in at least one of the
106
+ following places: within a NOTICE text provided as part of the
107
+ Derivative Works; within the Source form or documentation, if
108
+ provided along with the Derivative Works; or, within a display
109
+ generated by the Derivative Works, if and wherever such
110
+ third-party notices normally appear. The contents of the NOTICE
111
+ file are for informational purposes only and do not modify the
112
+ License. You may add Your own attribution notices within
113
+ Derivative Works that You distribute, alongside or as an addendum
114
+ to the NOTICE text from the Work, provided that such additional
115
+ attribution notices cannot be construed as modifying the License.
116
+
117
+ You may add Your own license statement for Your modifications and
118
+ may provide additional grant of rights to use, copy, modify, merge,
119
+ publish, distribute, sublicense, and/or sell copies of the
120
+ Contribution, subject to the terms and conditions of this License.
121
+
122
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
123
+ any Contribution intentionally submitted for inclusion in the Work
124
+ by You to the Licensor shall be under the terms and conditions of
125
+ this License, without any additional terms or conditions.
126
+
127
+ 6. Trademarks. This License does not grant permission to use the trade
128
+ names, trademarks, service marks, or product names of the Licensor,
129
+ except as required for reasonable and customary use in describing the
130
+ origin of the Work and reproducing the content of the NOTICE file.
131
+
132
+ 7. Disclaimer of Warranty. Unless required by applicable law or agreed
133
+ to in writing, Licensor provides the Work (and each Contributor
134
+ provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES
135
+ OR CONDITIONS OF ANY KIND, either express or implied, including,
136
+ without limitation, any warranties or conditions of TITLE,
137
+ NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR
138
+ PURPOSE. You are solely responsible for determining the
139
+ appropriateness of using or reproducing the Work and assume any
140
+ risks associated with Your exercise of permissions under this License.
141
+
142
+ 8. Limitation of Liability. In no event and under no legal theory,
143
+ whether in tort (including negligence), contract, or otherwise,
144
+ unless required by applicable law (such as deliberate and grossly
145
+ negligent acts) or agreed to in writing, shall any Contributor be
146
+ liable to You for damages, including any direct, indirect, special,
147
+ incidental, or exemplary damages of any character arising as a result
148
+ of this License or out of the use or inability to use the Work
149
+ (including but not limited to damages for loss of goodwill, work
150
+ stoppage, computer failure or malfunction, or all other commercial
151
+ damages or losses), even if such Contributor has been advised of the
152
+ possibility of such damages.
153
+
154
+ 9. Accepting Warranty or Additional Liability. While redistributing
155
+ the Work or Derivative Works thereof, You may choose to offer,
156
+ and charge a fee for, acceptance of support, warranty, indemnity,
157
+ or other liability obligations and/or rights consistent with this
158
+ License. However, in accepting such obligations, You may act only
159
+ on Your own behalf and on Your sole responsibility, not on behalf
160
+ of any other Contributor, and only if You agree to indemnify,
161
+ defend, and hold each Contributor harmless for any liability
162
+ incurred by, or claims asserted against, such Contributor by reason
163
+ of your accepting any such warranty or additional liability.
164
+
165
+ END OF TERMS AND CONDITIONS
166
+
167
+ Copyright 2024 OMGFormers Contributors
168
+
169
+ Licensed under the Apache License, Version 2.0 (the "License");
170
+ you may not use this file except in compliance with the License.
171
+ You may obtain a copy of the License at
172
+
173
+ http://www.apache.org/licenses/LICENSE-2.0
174
+
175
+ Unless required by applicable law or agreed to in writing, software
176
+ distributed under the License is distributed on an "AS IS" BASIS,
177
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
178
+ See the License for the specific language governing permissions and
179
+ limitations under the License.
@@ -0,0 +1,4 @@
1
+ include README.md
2
+ include LICENSE
3
+ include pyproject.toml
4
+ recursive-include kolmgformers *.py *.typed
@@ -0,0 +1,219 @@
1
+ Metadata-Version: 2.4
2
+ Name: kolmgformers
3
+ Version: 0.0.3
4
+ Summary: KOLMGformers: Unified KAN attention-free sequence modeling (KOLMOGformers) + Parallel Diffusion LM (OMGformers)
5
+ Author: Ömür Bera Işık
6
+ License-Expression: Apache-2.0
7
+ Keywords: deep-learning,transformers,diffusion,language-model,nlp,pytorch,kolmogorov-arnold,kan,attention-free,lora,moe
8
+ Classifier: Development Status :: 3 - Alpha
9
+ Classifier: Intended Audience :: Science/Research
10
+ Classifier: Programming Language :: Python :: 3
11
+ Classifier: Programming Language :: Python :: 3.9
12
+ Classifier: Programming Language :: Python :: 3.10
13
+ Classifier: Programming Language :: Python :: 3.11
14
+ Classifier: Programming Language :: Python :: 3.12
15
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
16
+ Requires-Python: >=3.9
17
+ Description-Content-Type: text/markdown
18
+ License-File: LICENSE
19
+ Requires-Dist: torch>=2.0
20
+ Provides-Extra: hf
21
+ Requires-Dist: transformers>=4.35; extra == "hf"
22
+ Requires-Dist: tokenizers>=0.15; extra == "hf"
23
+ Provides-Extra: safetensors
24
+ Requires-Dist: safetensors>=0.4; extra == "safetensors"
25
+ Provides-Extra: flash
26
+ Requires-Dist: flash-attn>=2.0; extra == "flash"
27
+ Provides-Extra: quant
28
+ Requires-Dist: bitsandbytes>=0.41; extra == "quant"
29
+ Provides-Extra: all
30
+ Requires-Dist: transformers>=4.35; extra == "all"
31
+ Requires-Dist: tokenizers>=0.15; extra == "all"
32
+ Requires-Dist: safetensors>=0.4; extra == "all"
33
+ Requires-Dist: bitsandbytes>=0.41; extra == "all"
34
+ Dynamic: license-file
35
+
36
+ # KOLMGformers v0.0.3
37
+
38
+ Unified Python library merging two research-grade model families:
39
+
40
+ | Family | Architecture | Key property |
41
+ |--------|-------------|-------------|
42
+ | **KOLMOG** | Kolmogorov-Arnold Cumulative Context | Attention-free, O(d) memory |
43
+ | **OMG** | Parallel Diffusion Transformer | Masked diffusion, full feature set |
44
+
45
+ ---
46
+
47
+ ## What's New in v0.0.3 — Bug Fixes & Improvements
48
+
49
+ ### Bug Fixes (KOLMOG)
50
+
51
+ | ID | Component | Issue | Fix |
52
+ |----|-----------|-------|-----|
53
+ | #K5 | `KOLMOGformerLayer` | `phi` received raw `kappa_out` and `context` — shape contract was implicit and fragile | Documented and type-checked; context extractor now propagates `attention_mask` |
54
+ | #K6 | `generate()` | Repetition penalty used a Python loop over `set(generated[b].tolist())` — O(seq·vocab) per step | Vectorised with `tensor.unique()` + `scatter_` |
55
+ | #K7 | `generate()` | Top-p nucleus sampling: `cumprobs − softmax(sorted)` double-subtracted the pivot token, causing off-by-one exclusions | Replaced with correct shifted-cumsum implementation |
56
+ | #K8 | `PLKANLayer` | `breakpoints` initialized via `expand().clone()` left non-contiguous memory; subtle autograd issues under in-place ops | Replaced with `linspace(...).repeat()` → always contiguous |
57
+ | #K9 | `InnerKolmogorovFunction` | Always used slow B-spline `KANLayer` for φ layers, ignoring `config.use_plkan` | `build_kan_layer` factory now honoured for φ too (~3–5× speedup with PLKAN) |
58
+ | #K10 | `CumulativeContextExtractor` | Causal pad+shift produced wrong exclusive prefix at position 0 (current token leaked into its own context) | Replaced with correct exclusive cumsum: `C^{<i} = cumsum[i] − kappa_w[i]` |
59
+ | #K11 | `CumulativeContextExtractor` | `attention_mask` was accepted by model but never threaded to the context extractor; pad tokens polluted context vectors | Mask is now applied to `kappa` before accumulation at every layer |
60
+ | #K12 | `KOLMOGformerForCausalLM` | Logits returned only `[:, :-1, :]` (already shifted), breaking downstream use of the full logit tensor | Full-sequence logits returned; shift applied only inside loss computation (matches HF API) |
61
+ | #K13 | `save_pretrained` | Direct `torch.save` to final path could leave a corrupted checkpoint on interruption | Atomic write via `tempfile` + `os.replace`; safetensors support added |
62
+ | #K14 | `KOLMOGformerModel` | No gradient checkpointing — OOM on long sequences during training | `enable_gradient_checkpointing()` added; controlled via `TrainingArguments.gradient_checkpointing` |
63
+ | #K15 | `KANLayer.b_splines` | Grid buffer could be float32 while activations are bfloat16/float16, causing dtype mismatch | Grid is cast to `x.dtype` on every forward pass |
64
+
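To make fixes #K10 and #K11 concrete, here is a minimal sketch of an exclusive, mask-aware cumulative context in plain PyTorch. It is illustrative only — the tensor names (`kappa`, `attention_mask`) follow the table above, but the real extractor's signature and internals may differ.

```python
import torch

def exclusive_masked_context(kappa: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Causal context C^{<i}: sum of kappa over positions j < i, ignoring padding.

    kappa:          (batch, seq, dim) per-token context contributions
    attention_mask: (batch, seq)      1 for real tokens, 0 for padding
    """
    # #K11: zero out pad positions before accumulating, so padding never
    # leaks into the context of real tokens.
    kappa = kappa * attention_mask.unsqueeze(-1).to(kappa.dtype)
    # #K10: exclusive prefix sum — cumsum[i] includes kappa[i], so subtract it
    # back out; position 0 then correctly sees an all-zero context.
    return kappa.cumsum(dim=1) - kappa

# Toy check: position 0 gets zeros, and the padded position contributes nothing.
k = torch.arange(6, dtype=torch.float32).reshape(1, 3, 2)
m = torch.tensor([[1, 1, 0]])
print(exclusive_masked_context(k, m))
```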
65
+ ### Bug Fixes (Training)
66
+
67
+ | ID | Component | Issue | Fix |
68
+ |----|-----------|-------|-----|
69
+ | #T1 | `Trainer` | No early stopping — training continued even after convergence | `early_stopping_patience` added to `TrainingArguments` |
70
+ | #T2 | `DataCollatorForCausalLM` | Over-long sequences were silently truncated or passed through with no warning | `warn_length` parameter warns when batch sequences exceed the model's position limit |
71
+ | #T3 | `Trainer._save_checkpoint` | Saved only model weights — optimizer/scheduler lost, training couldn't truly resume | Optimizer + scheduler state saved in `trainer_state.pt` |
72
+ | #T4 | `Trainer.load_checkpoint` | Restored only model weights and step count | Now restores optimizer, scheduler, early-stopping state |
73
+ | #T5 | `get_scheduler` | Missing `"constant_with_warmup"` type | Added |
74
+ | #T6 | `Trainer` | `bf16=True` on CPU silently fell back to float32 with no warning | Warning emitted; autocast errors caught gracefully |
75
+ | #T7 | `DataCollatorForMaskedLM` | Random-replacement tokens were drawn from `[0, vocab_size)` including `[PAD]`/`[BOS]`/`[EOS]` | Now draws from `[num_special_tokens, vocab_size)` |
76
+
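A minimal sketch of the #T7 behaviour: the random-replacement branch of MLM corruption now samples from `[num_special_tokens, vocab_size)`, so `[PAD]`/`[BOS]`/`[EOS]` can never be injected. The helper below is illustrative, not the collator's actual code.

```python
import torch

def random_replacement_ids(shape, vocab_size: int, num_special_tokens: int) -> torch.Tensor:
    """Sample ids for the 'replace with a random token' branch of MLM corruption.

    The pre-v0.0.3 behaviour was torch.randint(0, vocab_size, shape), which could
    put special tokens into the corrupted input; raising the lower bound excludes them.
    """
    return torch.randint(num_special_tokens, vocab_size, shape)

# Corrupt ~10% of positions with random *non-special* tokens.
input_ids = torch.randint(4, 1000, (2, 8))
replace = torch.rand(input_ids.shape) < 0.10
input_ids[replace] = random_replacement_ids(input_ids.shape, vocab_size=1000, num_special_tokens=4)[replace]
```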
77
+ ### New Features (v0.0.3)
78
+
79
+ **`KOLMOGformerConfig` additions:**
80
+ - `context_dropout` — independent dropout on the context vector path (default `0.0`).
81
+ - `ffn_type` — FFN activation: `"gelu"` (default) | `"silu"` | `"swiglu"`.
82
+ - `max_position_embeddings_dynamic` — RoPE cache auto-extends beyond limit instead of erroring (default `True`).
83
+ - `validate()` — called in `__post_init__`; surfaces config errors early with helpful messages.
84
+ - `__repr__` — readable summary of key config fields.
85
+
86
+ **`TrainingArguments` additions:**
87
+ - `early_stopping_patience` — stop after N evaluations without improvement.
88
+ - `gradient_checkpointing` — have the trainer enable gradient checkpointing on the model for memory-efficient training.
89
+
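As a rough illustration of how `early_stopping_patience` is intended to behave (the trainer's actual bookkeeping may differ), the sketch below tracks the best eval loss and stops once it has not improved for `patience` evaluations in a row.

```python
def should_stop(eval_losses, patience: int) -> bool:
    """Return True once the best eval loss has gone `patience` evaluations without improving."""
    best, since_best = float("inf"), 0
    for loss in eval_losses:
        if loss < best:
            best, since_best = loss, 0      # new best: reset the counter
        else:
            since_best += 1                 # no improvement this evaluation
        if since_best >= patience:
            return True
    return False

# With patience=3, three consecutive non-improving evals end training.
print(should_stop([2.1, 1.9, 1.8, 1.85, 1.83, 1.81], patience=3))  # True
```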
90
+ ---
91
+
92
+ ## Installation
93
+
94
+ ```bash
95
+ pip install -e .
96
+ # Optional extras
97
+ pip install -e ".[hf]" # HuggingFace tokenizers
98
+ pip install -e ".[flash]" # Flash Attention 2
99
+ pip install -e ".[all]" # Everything
100
+ ```
101
+
102
+ ---
103
+
104
+ ## Quick Start
105
+
106
+ ### KOLMOG — Attention-Free Causal LM
107
+
108
+ ```python
109
+ from kolmgformers import KOLMOGformerConfig, KOLMOGformerForCausalLM
110
+ import torch
111
+
112
+ config = KOLMOGformerConfig(
113
+ vocab_size=32000,
114
+ hidden_size=512,
115
+ num_channels=8,
116
+ num_layers=6,
117
+ causal=True,
118
+ use_nce=True, # Normalized Context Extraction
119
+ use_wcc=True, # Weighted Cumulative Context (v0.0.2+)
120
+ use_plkan=True, # Piecewise Linear KAN — 3-5x faster (v0.0.2+)
121
+ )
122
+ model = KOLMOGformerForCausalLM(config)
123
+ print(config) # v0.0.3: readable repr
124
+
125
+ ids = torch.tensor([[1, 42, 100]])
126
+ out = model.generate(ids, max_new_tokens=50, temperature=0.8)
127
+ ```
128
+
129
+ ### KOLMOG — Training with Early Stopping
130
+
131
+ ```python
132
+ from kolmgformers import (
133
+ KOLMOGTrainer, KOLMOGTrainingArguments,
134
+ KOLMOGDataCollatorForCausalLM,
135
+ )
136
+
137
+ args = KOLMOGTrainingArguments(
138
+ output_dir="runs/my_run",
139
+ num_train_epochs=10,
140
+ early_stopping_patience=3, # v0.0.3: stop after 3 bad evals
141
+ gradient_checkpointing=True, # v0.0.3: save memory on long seqs
142
+ evaluation_strategy="steps",
143
+ eval_steps=500,
144
+ )
145
+ trainer = KOLMOGTrainer(
146
+ model=model,
147
+ args=args,
148
+ train_dataset=train_ds,
149
+ eval_dataset=val_ds,
150
+ data_collator=KOLMOGDataCollatorForCausalLM(pad_token_id=0),
151
+ )
152
+ trainer.train()
153
+ ```
154
+
155
+ ### KOLMOG — True Checkpoint Resume
156
+
157
+ ```python
158
+ # v0.0.3: optimizer + scheduler state is saved, enabling true resume
159
+ trainer.load_checkpoint("runs/my_run/checkpoint-5000")
160
+ trainer.train() # continues from exact state
161
+ ```
162
+
163
+ ### OMG — Diffusion LM
164
+
165
+ ```python
166
+ from kolmgformers import OMGConfig, OMGModel
167
+
168
+ config = OMGConfig(vocab_size=32000, hidden_size=768, num_layers=12)
169
+ model = OMGModel(config)
170
+
171
+ import torch
172
+ prompt = torch.tensor([[1, 42]])
173
+ out = model.generate(prompt, new_tokens=128, steps=10)
174
+ ```
175
+
176
+ ---
177
+
178
+ ## Architecture: KOLMOG
179
+
180
+ Based on the Kolmogorov-Arnold representation theorem:
181
+
182
+ ```
183
+ F(X) = Σ_q Φ_q( Σ_i φ_{q,i}( xᵢ ⊕ eᵢ ⊕ c_{q,i} ) )
184
+ ```
185
+
186
+ Key innovations:
187
+ - **KAN layers** — learnable B-spline activations per edge (not fixed non-linearities)
188
+ - **NCE** — Normalized Context Extraction: jackknife leave-one-out mean context
189
+ - **WCC** — Weighted Cumulative Context: attention-like token selectivity at O(n·d)
190
+ - **PLKAN** — Piecewise Linear KAN: 3–5× faster than B-spline, same expressivity
191
+ - **No attention** — O(n·d) time, O(d) memory (independent of sequence length)
192
+
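For intuition on NCE, here is a toy sketch (assumed semantics, not the package's implementation) of a jackknife leave-one-out mean context: each position's context is the mean of every *other* position, computed in O(n·d) from a single sum.

```python
import torch

def jackknife_context(x: torch.Tensor) -> torch.Tensor:
    """Leave-one-out mean context: c_i = (sum_j x_j - x_i) / (n - 1).

    x: (batch, seq, dim), seq >= 2. One sum over the sequence plus a broadcast
    subtraction — no pairwise, attention-like interaction matrix is ever built.
    """
    n = x.size(1)
    total = x.sum(dim=1, keepdim=True)
    return (total - x) / (n - 1)

x = torch.randn(2, 5, 8)
c = jackknife_context(x)
# c[:, 0] is the mean of x over positions 1..4, i.e. everything except position 0.
assert torch.allclose(c[:, 0], x[:, 1:].mean(dim=1), atol=1e-6)
```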
193
+ ---
194
+
195
+ ## Architecture: OMG
196
+
197
+ Parallel Diffusion Language Model with:
198
+ - GQA / MLA / Sliding-Window / Linear / Block-Sparse attention
199
+ - MoE (dense + soft MoE)
200
+ - DS-PDLM dual-stream (understanding + generation)
201
+ - LoRA / DoRA PEFT
202
+ - TASA + MFS + DI efficiency trilogy
203
+ - TWE temporal embeddings, NCA neuro-creative routing
204
+
205
+ ---
206
+
207
+ ## Bug Fix History
208
+
209
+ | Version | Fixes |
210
+ |---------|-------|
211
+ | v0.0.1 | #K1–#K4 (config, imports, KANLayer rightmost knot) |
212
+ | v0.0.2 | WCC + PLKANLayer added |
213
+ | v0.0.3 | #K5–#K15 (context mask, generate, PLKAN, phi layers, causal prefix, gradient checkpointing) + #T1–#T7 (early stopping, optimizer save/load, bf16 CPU, special token masking) |
214
+
215
+ ---
216
+
217
+ ## License
218
+
219
+ Apache-2.0
@@ -0,0 +1,184 @@
1
+ # KOLMGformers v0.0.3
2
+
3
+ Unified Python library merging two research-grade model families:
4
+
5
+ | Family | Architecture | Key property |
6
+ |--------|-------------|-------------|
7
+ | **KOLMOG** | Kolmogorov-Arnold Cumulative Context | Attention-free, O(d) memory |
8
+ | **OMG** | Parallel Diffusion Transformer | Masked diffusion, full feature set |
9
+
10
+ ---
11
+
12
+ ## What's New in v0.0.3 — Bug Fixes & Improvements
13
+
14
+ ### Bug Fixes (KOLMOG)
15
+
16
+ | ID | Component | Issue | Fix |
17
+ |----|-----------|-------|-----|
18
+ | #K5 | `KOLMOGformerLayer` | `phi` received raw `kappa_out` and `context` — shape contract was implicit and fragile | Documented and type-checked; context extractor now propagates `attention_mask` |
19
+ | #K6 | `generate()` | Repetition penalty used a Python loop over `set(generated[b].tolist())` — O(seq·vocab) per step | Vectorised with `tensor.unique()` + `scatter_` |
20
+ | #K7 | `generate()` | Top-p nucleus sampling: `cumprobs − softmax(sorted)` double-subtracted the pivot token, causing off-by-one exclusions | Replaced with correct shifted-cumsum implementation |
21
+ | #K8 | `PLKANLayer` | `breakpoints` initialized via `expand().clone()` left non-contiguous memory; subtle autograd issues under in-place ops | Replaced with `linspace(...).repeat()` → always contiguous |
22
+ | #K9 | `InnerKolmogorovFunction` | Always used slow B-spline `KANLayer` for φ layers, ignoring `config.use_plkan` | `build_kan_layer` factory now honoured for φ too (~3–5× speedup with PLKAN) |
23
+ | #K10 | `CumulativeContextExtractor` | Causal pad+shift produced wrong exclusive prefix at position 0 (current token leaked into its own context) | Replaced with correct exclusive cumsum: `C^{<i} = cumsum[i] − kappa_w[i]` |
24
+ | #K11 | `CumulativeContextExtractor` | `attention_mask` was accepted by model but never threaded to the context extractor; pad tokens polluted context vectors | Mask is now applied to `kappa` before accumulation at every layer |
25
+ | #K12 | `KOLMOGformerForCausalLM` | Logits returned only `[:, :-1, :]` (already shifted), breaking downstream use of the full logit tensor | Full-sequence logits returned; shift applied only inside loss computation (matches HF API) |
26
+ | #K13 | `save_pretrained` | Direct `torch.save` to final path could leave a corrupted checkpoint on interruption | Atomic write via `tempfile` + `os.replace`; safetensors support added |
27
+ | #K14 | `KOLMOGformerModel` | No gradient checkpointing — OOM on long sequences during training | `enable_gradient_checkpointing()` added; controlled via `TrainingArguments.gradient_checkpointing` |
28
+ | #K15 | `KANLayer.b_splines` | Grid buffer could be float32 while activations are bfloat16/float16, causing dtype mismatch | Grid is cast to `x.dtype` on every forward pass |
29
+
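To make fixes #K10 and #K11 concrete, here is a minimal sketch of an exclusive, mask-aware cumulative context in plain PyTorch. It is illustrative only — the tensor names (`kappa`, `attention_mask`) follow the table above, but the real extractor's signature and internals may differ.

```python
import torch

def exclusive_masked_context(kappa: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Causal context C^{<i}: sum of kappa over positions j < i, ignoring padding.

    kappa:          (batch, seq, dim) per-token context contributions
    attention_mask: (batch, seq)      1 for real tokens, 0 for padding
    """
    # #K11: zero out pad positions before accumulating, so padding never
    # leaks into the context of real tokens.
    kappa = kappa * attention_mask.unsqueeze(-1).to(kappa.dtype)
    # #K10: exclusive prefix sum — cumsum[i] includes kappa[i], so subtract it
    # back out; position 0 then correctly sees an all-zero context.
    return kappa.cumsum(dim=1) - kappa

# Toy check: position 0 gets zeros, and the padded position contributes nothing.
k = torch.arange(6, dtype=torch.float32).reshape(1, 3, 2)
m = torch.tensor([[1, 1, 0]])
print(exclusive_masked_context(k, m))
```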
30
+ ### Bug Fixes (Training)
31
+
32
+ | ID | Component | Issue | Fix |
33
+ |----|-----------|-------|-----|
34
+ | #T1 | `Trainer` | No early stopping — training continued even after convergence | `early_stopping_patience` added to `TrainingArguments` |
35
+ | #T2 | `DataCollatorForCausalLM` | Over-long sequences were silently truncated or passed through with no warning | `warn_length` parameter warns when batch sequences exceed the model's position limit |
36
+ | #T3 | `Trainer._save_checkpoint` | Saved only model weights — optimizer/scheduler lost, training couldn't truly resume | Optimizer + scheduler state saved in `trainer_state.pt` |
37
+ | #T4 | `Trainer.load_checkpoint` | Restored only model weights and step count | Now restores optimizer, scheduler, early-stopping state |
38
+ | #T5 | `get_scheduler` | Missing `"constant_with_warmup"` type | Added |
39
+ | #T6 | `Trainer` | `bf16=True` on CPU silently fell back to float32 with no warning | Warning emitted; autocast errors caught gracefully |
40
+ | #T7 | `DataCollatorForMaskedLM` | Random-replacement tokens were drawn from `[0, vocab_size)` including `[PAD]`/`[BOS]`/`[EOS]` | Now draws from `[num_special_tokens, vocab_size)` |
41
+
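A minimal sketch of the #T7 behaviour: the random-replacement branch of MLM corruption now samples from `[num_special_tokens, vocab_size)`, so `[PAD]`/`[BOS]`/`[EOS]` can never be injected. The helper below is illustrative, not the collator's actual code.

```python
import torch

def random_replacement_ids(shape, vocab_size: int, num_special_tokens: int) -> torch.Tensor:
    """Sample ids for the 'replace with a random token' branch of MLM corruption.

    The pre-v0.0.3 behaviour was torch.randint(0, vocab_size, shape), which could
    put special tokens into the corrupted input; raising the lower bound excludes them.
    """
    return torch.randint(num_special_tokens, vocab_size, shape)

# Corrupt ~10% of positions with random *non-special* tokens.
input_ids = torch.randint(4, 1000, (2, 8))
replace = torch.rand(input_ids.shape) < 0.10
input_ids[replace] = random_replacement_ids(input_ids.shape, vocab_size=1000, num_special_tokens=4)[replace]
```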
42
+ ### New Features (v0.0.3)
43
+
44
+ **`KOLMOGformerConfig` additions:**
45
+ - `context_dropout` — independent dropout on the context vector path (default `0.0`).
46
+ - `ffn_type` — FFN activation: `"gelu"` (default) | `"silu"` | `"swiglu"`.
47
+ - `max_position_embeddings_dynamic` — RoPE cache auto-extends beyond limit instead of erroring (default `True`).
48
+ - `validate()` — called in `__post_init__`; surfaces config errors early with helpful messages.
49
+ - `__repr__` — readable summary of key config fields.
50
+
51
+ **`TrainingArguments` additions:**
52
+ - `early_stopping_patience` — stop after N evaluations without improvement.
53
+ - `gradient_checkpointing` — have the trainer enable gradient checkpointing on the model for memory-efficient training.
54
+
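As a rough illustration of how `early_stopping_patience` is intended to behave (the trainer's actual bookkeeping may differ), the sketch below tracks the best eval loss and stops once it has not improved for `patience` evaluations in a row.

```python
def should_stop(eval_losses, patience: int) -> bool:
    """Return True once the best eval loss has gone `patience` evaluations without improving."""
    best, since_best = float("inf"), 0
    for loss in eval_losses:
        if loss < best:
            best, since_best = loss, 0      # new best: reset the counter
        else:
            since_best += 1                 # no improvement this evaluation
        if since_best >= patience:
            return True
    return False

# With patience=3, three consecutive non-improving evals end training.
print(should_stop([2.1, 1.9, 1.8, 1.85, 1.83, 1.81], patience=3))  # True
```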
55
+ ---
56
+
57
+ ## Installation
58
+
59
+ ```bash
60
+ pip install -e .
61
+ # Optional extras
62
+ pip install -e ".[hf]" # HuggingFace tokenizers
63
+ pip install -e ".[flash]" # Flash Attention 2
64
+ pip install -e ".[all]" # Everything
65
+ ```
66
+
67
+ ---
68
+
69
+ ## Quick Start
70
+
71
+ ### KOLMOG — Attention-Free Causal LM
72
+
73
+ ```python
74
+ from kolmgformers import KOLMOGformerConfig, KOLMOGformerForCausalLM
75
+ import torch
76
+
77
+ config = KOLMOGformerConfig(
78
+ vocab_size=32000,
79
+ hidden_size=512,
80
+ num_channels=8,
81
+ num_layers=6,
82
+ causal=True,
83
+ use_nce=True, # Normalized Context Extraction
84
+ use_wcc=True, # Weighted Cumulative Context (v0.0.2+)
85
+ use_plkan=True, # Piecewise Linear KAN — 3-5x faster (v0.0.2+)
86
+ )
87
+ model = KOLMOGformerForCausalLM(config)
88
+ print(config) # v0.0.3: readable repr
89
+
90
+ ids = torch.tensor([[1, 42, 100]])
91
+ out = model.generate(ids, max_new_tokens=50, temperature=0.8)
92
+ ```
93
+
94
+ ### KOLMOG — Training with Early Stopping
95
+
96
+ ```python
97
+ from kolmgformers import (
98
+ KOLMOGTrainer, KOLMOGTrainingArguments,
99
+ KOLMOGDataCollatorForCausalLM,
100
+ )
101
+
102
+ args = KOLMOGTrainingArguments(
103
+ output_dir="runs/my_run",
104
+ num_train_epochs=10,
105
+ early_stopping_patience=3, # v0.0.3: stop after 3 bad evals
106
+ gradient_checkpointing=True, # v0.0.3: save memory on long seqs
107
+ evaluation_strategy="steps",
108
+ eval_steps=500,
109
+ )
110
+ trainer = KOLMOGTrainer(
111
+ model=model,
112
+ args=args,
113
+ train_dataset=train_ds,
114
+ eval_dataset=val_ds,
115
+ data_collator=KOLMOGDataCollatorForCausalLM(pad_token_id=0),
116
+ )
117
+ trainer.train()
118
+ ```
119
+
120
+ ### KOLMOG — True Checkpoint Resume
121
+
122
+ ```python
123
+ # v0.0.3: optimizer + scheduler state is saved, enabling true resume
124
+ trainer.load_checkpoint("runs/my_run/checkpoint-5000")
125
+ trainer.train() # continues from exact state
126
+ ```
127
+
128
+ ### OMG — Diffusion LM
129
+
130
+ ```python
131
+ from kolmgformers import OMGConfig, OMGModel
132
+
133
+ config = OMGConfig(vocab_size=32000, hidden_size=768, num_layers=12)
134
+ model = OMGModel(config)
135
+
136
+ import torch
137
+ prompt = torch.tensor([[1, 42]])
138
+ out = model.generate(prompt, new_tokens=128, steps=10)
139
+ ```
140
+
141
+ ---
142
+
143
+ ## Architecture: KOLMOG
144
+
145
+ Based on the Kolmogorov-Arnold representation theorem:
146
+
147
+ ```
148
+ F(X) = Σ_q Φ_q( Σ_i φ_{q,i}( xᵢ ⊕ eᵢ ⊕ c_{q,i} ) )
149
+ ```
150
+
151
+ Key innovations:
152
+ - **KAN layers** — learnable B-spline activations per edge (not fixed non-linearities)
153
+ - **NCE** — Normalized Context Extraction: jackknife leave-one-out mean context
154
+ - **WCC** — Weighted Cumulative Context: attention-like token selectivity at O(n·d)
155
+ - **PLKAN** — Piecewise Linear KAN: 3–5× faster than B-spline, same expressivity
156
+ - **No attention** — O(n·d) time, O(d) memory (independent of sequence length)
157
+
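For intuition on NCE, here is a toy sketch (assumed semantics, not the package's implementation) of a jackknife leave-one-out mean context: each position's context is the mean of every *other* position, computed in O(n·d) from a single sum.

```python
import torch

def jackknife_context(x: torch.Tensor) -> torch.Tensor:
    """Leave-one-out mean context: c_i = (sum_j x_j - x_i) / (n - 1).

    x: (batch, seq, dim), seq >= 2. One sum over the sequence plus a broadcast
    subtraction — no pairwise, attention-like interaction matrix is ever built.
    """
    n = x.size(1)
    total = x.sum(dim=1, keepdim=True)
    return (total - x) / (n - 1)

x = torch.randn(2, 5, 8)
c = jackknife_context(x)
# c[:, 0] is the mean of x over positions 1..4, i.e. everything except position 0.
assert torch.allclose(c[:, 0], x[:, 1:].mean(dim=1), atol=1e-6)
```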
158
+ ---
159
+
160
+ ## Architecture: OMG
161
+
162
+ Parallel Diffusion Language Model with:
163
+ - GQA / MLA / Sliding-Window / Linear / Block-Sparse attention
164
+ - MoE (dense + soft MoE)
165
+ - DS-PDLM dual-stream (understanding + generation)
166
+ - LoRA / DoRA PEFT
167
+ - TASA + MFS + DI efficiency trilogy
168
+ - TWE temporal embeddings, NCA neuro-creative routing
169
+
170
+ ---
171
+
172
+ ## Bug Fix History
173
+
174
+ | Version | Fixes |
175
+ |---------|-------|
176
+ | v0.0.1 | #K1–#K4 (config, imports, KANLayer rightmost knot) |
177
+ | v0.0.2 | WCC + PLKANLayer added |
178
+ | v0.0.3 | #K5–#K15 (context mask, generate, PLKAN, phi layers, causal prefix, gradient checkpointing) + #T1–#T7 (early stopping, optimizer save/load, bf16 CPU, special token masking) |
179
+
180
+ ---
181
+
182
+ ## License
183
+
184
+ Apache-2.0