archscope-0.2.2-py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- archscope/__init__.py +30 -0
- archscope/_utils.py +113 -0
- archscope/attribute.py +201 -0
- archscope/backends.py +236 -0
- archscope/bench.py +262 -0
- archscope/circuits.py +255 -0
- archscope/cli.py +120 -0
- archscope/diff.py +212 -0
- archscope/kazdov_backend.py +141 -0
- archscope/lens.py +304 -0
- archscope/neurons.py +118 -0
- archscope/probes.py +160 -0
- archscope/sae.py +127 -0
- archscope/transfer.py +188 -0
- archscope-0.2.2.dist-info/METADATA +324 -0
- archscope-0.2.2.dist-info/RECORD +20 -0
- archscope-0.2.2.dist-info/WHEEL +5 -0
- archscope-0.2.2.dist-info/entry_points.txt +2 -0
- archscope-0.2.2.dist-info/licenses/LICENSE +17 -0
- archscope-0.2.2.dist-info/top_level.txt +1 -0
@@ -0,0 +1,324 @@
Metadata-Version: 2.4
Name: archscope
Version: 0.2.2
Summary: Lightweight workbench for cross-architecture mechanistic interpretability experiments on small models
Author: Juan Cruz Dovzak
License: Apache-2.0
Keywords: mechanistic-interpretability,sparse-autoencoders,probes,RNN,Mamba,transformer
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: torch>=2.1.0
Requires-Dist: numpy>=1.26.0
Requires-Dist: einops>=0.7.0
Requires-Dist: click>=8.1.0
Requires-Dist: rich>=13.0.0
Requires-Dist: transformers>=4.40.0
Requires-Dist: datasets>=2.19.0
Requires-Dist: scikit-learn>=1.4.0
Provides-Extra: jax
Requires-Dist: jax>=0.4.30; extra == "jax"
Requires-Dist: flax>=0.8.4; extra == "jax"
Provides-Extra: mamba
Requires-Dist: mamba-ssm>=1.2; extra == "mamba"
Provides-Extra: dev
Requires-Dist: pytest>=8.0; extra == "dev"
Requires-Dist: ruff>=0.4.0; extra == "dev"
Dynamic: license-file

# archscope

**Mechanistic interpretability experiments across architectures — Transformers, SSMs/Mamba, recurrent models, and hybrids.**

[CI](https://github.com/OriginalKazdov/archscope/actions/workflows/ci.yml)
[Python 3.10+](https://www.python.org)
[License: Apache-2.0](LICENSE)

## What archscope is

`archscope` is a **small-model interpretability workbench**. It's designed for quick, reproducible experiments across model families — not for large-scale SAE training, production model auditing, or replacing mature Transformer-specific tools.

Use it when you want to ask:
- *Can I extract comparable activations from different architectures?*
- *Do linear probes transfer across model families?*
- *Do induction-like behaviors appear outside attention?*
- *Did a fine-tuned model drift in specific layers?*
- *Do dense or rank-1 SAEs reconstruct this model family better at this layer?*

It is **not**: a competitor to `transformer_lens` or `nnsight` (both are broader and more mature), a production audit tool, or a SaaS. It's a small, hackable workbench.

```python
import archscope as mi
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
model = AutoModelForCausalLM.from_pretrained("state-spaces/mamba-130m-hf")

backend = mi.backends.Backend.for_model(model, hint="mamba")

# Extract Mamba's recurrent SSM state h_t (in addition to residual stream)
ssm = backend.extract(tok("text", return_tensors="pt"), layers=["layer_12.ssm_state"])[0]
# Shape: (B, intermediate_size, ssm_state_size) = (B, 1536, 16) for mamba-130m
```

---

## What's inside

### Core mech-interp methods

| Module | What it does | Source |
|---|---|---|
| `probes` | Linear/MLP probes on hidden states | Drop the Act (arXiv:2605.11467) |
| `sae` | Dense + Rank-1 factored sparse autoencoders | WriteSAE (arXiv:2605.12770) |
| `neurons` | Top-K contrastive neuron modulation | Targeted Neuron Mod (arXiv:2605.12290) |
| `attribute` | Activation patching + DIM decomposition | Multi-Agent Sycophancy (arXiv:2605.12991) |
| `circuits` | Induction, copy, attention-concentration detectors | Olsson et al 2022 |
| `lens` | Logit lens + Tuned lens | Belrose et al 2023 |
| `diff` | Model-diff: base vs fine-tuned, find what changed | this library |

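To make the `attribute` row concrete, here is a minimal, generic sketch of activation patching using plain PyTorch hooks on a Hugging Face model. It is *not* the `archscope.attribute` API; the block index, prompts, and metric are illustrative assumptions.

```python
# Generic activation-patching sketch (plain PyTorch hooks, not archscope.attribute).
# Idea: cache a clean activation, splice it into a corrupted run, and measure how
# much of the clean behaviour it restores.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

clean = tok("The Eiffel Tower is in the city of", return_tensors="pt")
corrupt = tok("The Colosseum is in the city of", return_tensors="pt")
paris = tok(" Paris", add_special_tokens=False).input_ids[0]

block = model.transformer.h[6]   # which block to patch (arbitrary choice)
cache = {}

def save_hook(module, inputs, output):
    cache["clean"] = output[0].detach()           # residual stream after block 6

def patch_hook(module, inputs, output):
    out_h = output[0].clone()
    out_h[:, -1, :] = cache["clean"][:, -1, :]    # splice clean act at last position
    return (out_h,) + output[1:]

with torch.no_grad():
    handle = block.register_forward_hook(save_hook)
    model(**clean)                                # cache the clean activation
    handle.remove()

    base = model(**corrupt).logits[0, -1, paris]
    handle = block.register_forward_hook(patch_hook)
    patched = model(**corrupt).logits[0, -1, paris]
    handle.remove()

print(f"logit(' Paris'): corrupted {base:.2f} -> patched {patched:.2f}")
```
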
### Experiment infrastructure

| Module | What it does |
|---|---|
| `backends` | Unified extraction API across architectures |
| `transfer` | Cross-arch probe transfer via paired-activation linear alignment |
| `bench` | InterpProfile — standardized comparable profile (`mi.bench.benchmark()`) |
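
The `transfer` row rests on paired-activation linear alignment. A minimal sketch of that idea, using ordinary least squares rather than the `archscope.transfer` API (all shapes and variable names below are illustrative stand-ins):

```python
# Sketch of paired-activation linear alignment (not the archscope.transfer API).
# Fit a least-squares map W with acts_a @ W ≈ acts_b on paired texts, then pull
# a probe trained in model B's activation space back into model A's space.
import torch

n_pairs, d_a, d_b = 2048, 768, 1024
acts_a = torch.randn(n_pairs, d_a)   # stand-in: model A activations on paired texts
acts_b = torch.randn(n_pairs, d_b)   # stand-in: model B activations on the same texts

W = torch.linalg.lstsq(acts_a, acts_b).solution   # (d_a, d_b), minimizes ||acts_a @ W - acts_b||

probe_b = torch.randn(d_b)            # a linear probe direction learned on model B
probe_a = W @ probe_b                 # transferred direction for model A
scores_a = acts_a @ probe_a           # probe scores on model A activations
print(scores_a.shape)                 # torch.Size([2048])
```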
### Backends

| Backend | Models | What it exposes |
|---|---|---|
| `transformer` | Pythia, GPT-2, Llama, Mistral, Qwen, MPT, Falcon, GPT-Neo | residual stream |
| `mamba` | Mamba, Mamba-2 | residual + explicit `.ssm_state` (recurrent h_t) |
| `kazdov` | Kazdov-α hybrid MoBE-BCN+MHA | residual per custom block |
| `recurrent` | Generic RNN (user subclass) | hidden state per layer |
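
The point of the unified API is that the same two calls work across families; only the hint and layer name change. A short sketch, assuming the models and tokenized batch from the examples elsewhere in this README (layer names follow the `layer_<i>.residual` / `layer_<i>.ssm_state` pattern shown there; the indices are arbitrary):

```python
import archscope as mi

# pythia_model, mamba_model, and tok_batch are assumed to be loaded as in the
# earlier examples; the layer indices below are arbitrary illustrative picks.
for model, hint, layer in [
    (pythia_model, "transformer", "layer_5.residual"),
    (mamba_model, "mamba", "layer_12.ssm_state"),
]:
    backend = mi.backends.Backend.for_model(model, hint=hint)
    acts = backend.extract(tok_batch, layers=[layer])[0]
    print(hint, layer, tuple(acts.activations.shape))
```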
---

## Install

```bash
pip install archscope   # once on PyPI
# or:
git clone https://github.com/OriginalKazdov/archscope.git
cd archscope && pip install -e .
```

For Mamba on CPU you don't need `mamba-ssm` — HF's slow path works. On CUDA install `mamba-ssm` for the fast path.

---

## Quick examples

### Train a probe on any architecture

```python
import archscope as mi
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m")
tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")
tk = lambda txts: tok(txts, return_tensors="pt", padding=True, truncation=True)

probe = mi.probes.fit_probe(
    model,
    inputs_pos=tk(["I love this", "Wonderful!", "Amazing"]),
    inputs_neg=tk(["I hate this", "Awful", "Terrible"]),
    layer_name="layer_5.residual",
    backend_hint="transformer",
)
print(probe.metrics)  # {'train_auroc': 1.0, ...}
```

### Extract Mamba's SSM recurrent state

```python
backend = mi.backends.Backend.for_model(mamba_model, hint="mamba")
rec = backend.extract(tk("Hello world"), layers=["layer_12.ssm_state"])[0]
# rec.activations.shape == (B, intermediate_size, ssm_state_size)
# This is the actual recurrent memory h_t of Mamba — exposed via the same
# extraction API used for Transformer residual streams.
```

### Logit lens / tuned lens — see what each layer "thinks"

```python
result = mi.lens.logit_lens(
    model, tok,
    prompt="The capital of France is",
    target_token=" Paris",
    backend_hint="transformer",
)
print(result.to_markdown())

# Tuned lens — learned per-layer projections (Belrose et al 2023):
tl = mi.lens.TunedLens.fit(model, tok, calibration_texts, backend_hint="transformer")
tl.predict(model, tok, "...", backend_hint="transformer")
```

### Model Diff — what did fine-tuning change?

```python
from archscope.diff import compare

result = compare(
    base_model, fine_tuned_model, tokenizer,
    calibration_texts=texts,
    backend_hint="transformer",
)
print(result.to_markdown())
# Per-layer residual drift, top shifted neurons, circuit deltas.
```

### Detect circuits cross-arch

```python
scores = mi.circuits.run_all_circuits(model, tokenizer=tok)
print(scores["induction_head"].relative)  # × chance
print(scores["copy_circuit"].score)       # accuracy
```

### InterpBench — standardized model profile

```python
profile = mi.bench.benchmark(
    "EleutherAI/pythia-160m", model, tok,
    backend_hint="transformer", arch_family="transformer",
    tokenize_fn=tk,
)
print(mi.bench.profile_to_markdown(profile))
```

CLI:
```bash
archscope info
archscope bench EleutherAI/pythia-160m --arch transformer --out pythia.json
archscope bench state-spaces/mamba-130m-hf --arch mamba
```

---

## Findings — running archscope on a mini-zoo of 7 small models

Each model was profiled with `bench.benchmark()` (probes + circuits + dense vs rank-1 SAE); ~10 min total compute on CPU.

### Reproduce

```bash
python scripts/reproduce_mini_zoo.py
# → _research/mini_zoo_leaderboard.json
# → _research/mini_zoo_leaderboard.md
```

Skip specific models with `--skip Mamba-370m` if memory is tight. Kazdov-α is included only if the local checkpoint is available.

| Model | Arch | Params | Induction (× chance) | SAE-dense | SAE-rank1 | SSM var |
|---|---|---|---|---|---|---|
| Pythia-160m | transformer | 162M | 490× | 0.019 | 0.025 | — |
| Pythia-410m | transformer | 405M | 3,261× | 0.075 | 0.135 | — |
| GPT-2 | transformer | 124M | 6,393× | 5.731 | **0.608** | — |
| Mamba-130m | SSM | 129M | 6,378× | 0.048 | **0.032** | 0.54 |
| Mamba-370m | SSM | 372M | **7,730×** | 0.022 | 0.027 | 0.73 |
| Qwen2.5-0.5B | transformer | 494M | **17,637×** | 0.092 | 0.068 | — |
| kazdov-α | hybrid | 98M | 2,700× | 0.043 | **0.004** | — |

**Open questions raised by this run** (single-seed observations, not formal claims):

- **Does induction-like behavior require attention heads?** Mamba — which has no attention mechanism — scores 6,378–7,730× chance on our behavioral induction test, comparable to or above similarly sized Transformers. The test is behavioral (output-based), so it doesn't presume any specific mechanism (a minimal version of the test is sketched after this list). What in SSMs implements this behavior?
- **Why does naive logit lens degrade with depth on Mamba?** Applying each model's own `lm_head` to its intermediate residuals surfaces the target with depth on Pythia (target rank 5117 → 77 across 12 layers on "capital of France is _Paris_"). The same procedure on Mamba moves the target *away* from top-1 (rank 197 → 1049 across 24 layers). Does this hold across more SSM checkpoints? Is tuned-lens enough to fix it?
- **Is rank-1 SAE preference architecture-driven or layer-driven?** In this run, GPT-2, both Mambas, and kazdov-α reconstructed better with rank-1 factored SAEs at the tested mid-layer; both Pythias preferred dense; Qwen was marginal. Suggestive, but it needs layer sweeps and multiple seeds before claiming a pattern.
- **How much do training recipe, tokenizer, and data affect induction-like behavior?** Qwen2.5-0.5B shows 17,637× induction — 5.4× higher than Pythia-410m at similar size. Plausibly attributable to data curation and training-stability advances since 2023, but we haven't isolated the cause.
- **Does Mamba's SSM-state utilization scale with model size?** In this run, the input-dependent variance ratio rose from 0.54 (Mamba-130m) to 0.73 (Mamba-370m). Does this trend hold across more checkpoints?

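As referenced in the first bullet, the behavioral induction test reduces to a few lines: repeat a random token sequence and check how often the model predicts the correct continuation in the second pass. This is a generic sketch against a plain Hugging Face model, *not* the `archscope.circuits` implementation; the sequence length and trial count are arbitrary.

```python
# Minimal behavioral induction test (generic sketch, not archscope.circuits).
# Feed [A1 ... An, A1 ... An] and ask how often the model's top-1 prediction
# in the second half matches the token that followed the same position the
# first time around.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m").eval()
vocab = model.config.vocab_size
n, trials, hits, total = 32, 20, 0, 0

with torch.no_grad():
    for _ in range(trials):
        seq = torch.randint(0, vocab, (1, n))
        ids = torch.cat([seq, seq], dim=1)               # repeat the sequence
        logits = model(ids).logits
        preds = logits[0, n - 1 : 2 * n - 1].argmax(-1)  # predictions over 2nd half
        hits += (preds == seq[0]).sum().item()
        total += n

print(f"repeat-copy accuracy: {hits / total:.3f}  (chance ≈ {1 / vocab:.2e})")
```
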
These aren't published findings — they're observations from a single mini-zoo run. Methodological corrections welcome.

### Metrics caveats

- **Induction score** is behavioral (output-based), not proof of a specific circuit. It tells you the model copies `A→B` associations in-context; it doesn't tell you *how*.
- **SAE reconstruction error** is measured on a small sample of mid-layer activations. Lower is better. Numbers are not comparable across layers with different residual magnitudes (e.g., Pythia L11 has very large residuals, which dominate dense SAE reconstruction).
- **SSM-state variance ratio** is descriptive — it tells you whether the state changes meaningfully across inputs, not whether the state is *causally used* downstream.
- **Logit lens** results are diagnostic, not a guarantee of representational alignment. Naive logit lens applies the *final* `lm_head` to intermediate residuals — when that fails, it just means the residuals aren't in the final-layer vocab space (e.g., Mamba); see the sketch after this list. `TunedLens` is the fix.
- All probes/SAEs/circuit tests in InterpBench are **single-seed**. Treat differences <2× as noise.

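The naive procedure described in the logit-lens caveat can be reproduced with plain Hugging Face calls — project each layer's hidden state through the model's own output embedding and track the target token's rank. This is a generic sketch (it even skips the final norm), not `mi.lens.logit_lens`:

```python
# Naive logit lens sketch (generic HF code, not mi.lens.logit_lens):
# apply the final unembedding to every intermediate hidden state and watch
# where the target token ranks at each depth.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/pythia-160m"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

ids = tok("The capital of France is", return_tensors="pt")
target = tok(" Paris", add_special_tokens=False).input_ids[0]

with torch.no_grad():
    hs = model(**ids, output_hidden_states=True).hidden_states  # embeddings + each layer
    unembed = model.get_output_embeddings()                     # the final lm_head / embed_out
    for i, h in enumerate(hs[1:], start=1):
        logits = unembed(h[0, -1])                              # last position only
        rank = int((logits > logits[target]).sum()) + 1
        print(f"layer {i:2d}: rank of ' Paris' = {rank}")
```
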
---

## Honest limits

`archscope` is a v0.2 release. What it does well: cross-architecture mech-interp primitives, a unified API, real observable findings, validated on multiple architectures. What it doesn't do yet:

- No causal scrubbing (gold-standard circuit verification)
- No interactive notebook viz (matplotlib helpers are TBD)
- Circuit detection is limited to induction / copy / attention-concentration — no IOI, name-mover, or successor heads yet
- Mamba-2 backend support is partial (Mamba-1 fully supported)
- No pretrained SAE collection (you train your own per layer)
- Probe transfer assumes same-tokenizer paired data

See [`CONTRIBUTING.md`](CONTRIBUTING.md) for what we welcome (new backends, new circuit detectors, viz helpers).

For mature Transformer-centric workflows, prefer [`transformer_lens`](https://github.com/TransformerLensOrg/TransformerLens) or [`nnsight`](https://nnsight.net/). They are broader and more mature; `archscope` focuses on lightweight cross-architecture experiments and small / non-standard model workflows.

---

## Citation

```bibtex
@misc{dovzak2026archscope,
  title  = {archscope: Cross-architecture mechanistic interpretability experiments},
  author = {Juan Cruz Dovzak},
  year   = {2026},
  url    = {https://github.com/OriginalKazdov/archscope}
}
```

Source papers reimplemented or wrapped:
- WriteSAE — arXiv:2605.12770
- Drop the Act / ProFIL — arXiv:2605.11467
- Targeted Neuron Modulation — arXiv:2605.12290
- Multi-Agent Sycophancy — arXiv:2605.12991
- Tuned Lens (Belrose et al, 2023)
- Induction heads (Olsson et al, 2022)

---

## Troubleshooting

### "The fast path is not available because ..." (Mamba on CPU)

This is normal. Mamba falls back to a slow pure-PyTorch path that works correctly (~30s per benchmark vs ~1s on CUDA). Install `pip install mamba-ssm causal-conv1d` only on CUDA machines.

### Custom backend not auto-detected

Pass `Backend.for_model(model, hint="my_backend")` explicitly. Auto-detection uses `config.model_type`.

### `RuntimeError: Trying to backward through the graph a second time`

Activations from `Backend.extract()` carry the autograd graph by default. Call `.detach()` before reusing them, or extract inside `torch.no_grad()`. The high-level `probes.fit_probe()` does this for you.

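A minimal sketch of both fixes (assuming `backend` and `tk` from the earlier examples; the layer name here is just an illustrative pick):

```python
import torch

# Option 1: never build the autograd graph in the first place.
with torch.no_grad():
    acts = backend.extract(tk(["some text"]), layers=["layer_5.residual"])[0]

# Option 2: detach before reusing the activations a second time.
acts = backend.extract(tk(["some text"]), layers=["layer_5.residual"])[0]
features = acts.activations.detach()   # safe to reuse / train probes on
```
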
---

## Roadmap (post-0.2.0)

- Multi-token circuit detection: IOI, name-mover, successor heads
- Mamba-2 backend with same `.ssm_state` API
- Cross-arch SAE feature alignment (extend `transfer.py` from probes to features)
- Pretrained SAE collection for common small models
- Plotly/matplotlib viz helpers
- HuggingFace Space demo

PRs welcome — see [`CONTRIBUTING.md`](CONTRIBUTING.md).

---

## License

Apache-2.0

@@ -0,0 +1,20 @@
archscope/__init__.py,sha256=-_03oEIhC5V8l8ejAubb9fF7l8wlqMhLMvWHt_t-qNU,1156
archscope/_utils.py,sha256=xnTK6KyX07UXU5MTKng0kArAacxmmF4S-QoT0LbJoG8,3995
archscope/attribute.py,sha256=VyQoL5EgQaFWsU820pp8t11OsHFCVjbZWCQt3R-mcQ0,6960
archscope/backends.py,sha256=Wk7J21AItHoVkN_FA0amBimP7B5_wvcHPVKn9oeHN7Q,9782
archscope/bench.py,sha256=hqtCTuAIpBnCSHe7UhYXtjBRiQFjk5YK-avTyG9Htvw,11034
archscope/circuits.py,sha256=HxEDJY8XrwnxoK8AooOIWsW7unJAY_Rcndzp1bhXZm4,9197
archscope/cli.py,sha256=f7lEhEm8WuRiTNr7OpCgAX0FbSClYIzMD7J9Z2TRJCg,4451
archscope/diff.py,sha256=IRVZTDUfPrQSNILwHUJ8M2DAGogQgzZ0RCAlOtPggwE,8498
archscope/kazdov_backend.py,sha256=XPaIZma_lN9hEzKwZBRe1V-ILxIUYtqL6fdLGRcqJwM,4988
archscope/lens.py,sha256=arnLsnz2O7q0U6qOz0Dh6Sy_pb4I9ogM0cj7ziZt7fA,12126
archscope/neurons.py,sha256=w6psUvThLoh7oGHafwSJWcdRB7QOPxCkAui_O2MDzD4,4079
archscope/probes.py,sha256=Pq7k5bsMnVXSw5MPs9Xkxh6xC7U9FBD5duX8zRY16G0,5903
archscope/sae.py,sha256=5fWpgFuQY9kKqZPK-qemiQCLLQoMbzZFrc-YFtSaIKI,4393
archscope/transfer.py,sha256=NonW6fDD69QOiXhW9us3wtuNEFIC93HDUKu9uM6BDK0,7644
archscope-0.2.2.dist-info/licenses/LICENSE,sha256=0vKwygHdmKeIpQ8AuymAmD1Mv5jTU14wp7eHWmngxy8,631
archscope-0.2.2.dist-info/METADATA,sha256=aQ5ZDy0AdQOIxreUFozSvyB_2d7HtIHNVIn7NHzMpac,13912
archscope-0.2.2.dist-info/WHEEL,sha256=aeYiig01lYGDzBgS8HxWXOg3uV61G9ijOsup-k9o1sk,91
archscope-0.2.2.dist-info/entry_points.txt,sha256=B_jZgDwUQYMY2G9ygPGQ4eGheou7idX-nRbEykNscZ0,48
archscope-0.2.2.dist-info/top_level.txt,sha256=epSWCpK_x3rrCj2dURZjQRD3WRcg_5jQGgsnsSuOzFw,10
archscope-0.2.2.dist-info/RECORD,,
@@ -0,0 +1,17 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

Copyright 2026 Juan Cruz Dovzak

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
@@ -0,0 +1 @@
archscope