nlos 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.cursor/commands/COMMAND-MAP.md +252 -0
- package/.cursor/commands/assume.md +208 -0
- package/.cursor/commands/enhance-prompt.md +39 -0
- package/.cursor/commands/hype.md +709 -0
- package/.cursor/commands/kernel-boot.md +254 -0
- package/.cursor/commands/note.md +28 -0
- package/.cursor/commands/scratchpad.md +81 -0
- package/.cursor/commands/sys-ref.md +81 -0
- package/AGENTS.md +67 -0
- package/KERNEL.md +428 -0
- package/KERNEL.yaml +189 -0
- package/LICENSE +21 -0
- package/QUICKSTART.md +230 -0
- package/README.md +202 -0
- package/axioms.yaml +437 -0
- package/bin/nlos.js +403 -0
- package/memory.md +493 -0
- package/package.json +56 -0
- package/personalities.md +363 -0
- package/portable/README.md +209 -0
- package/portable/TEST-PLAN.md +213 -0
- package/portable/kernel-payload-full.json +40 -0
- package/portable/kernel-payload-full.md +2046 -0
- package/portable/kernel-payload.json +24 -0
- package/portable/kernel-payload.md +1072 -0
- package/projects/README.md +146 -0
- package/scripts/generate-kernel-payload.py +339 -0
- package/scripts/kernel-boot-llama-cpp.sh +192 -0
- package/scripts/kernel-boot-lm-studio.sh +206 -0
- package/scripts/kernel-boot-ollama.sh +214 -0

@@ -0,0 +1,146 @@
---
type: document
last_updated: 2025-12-03
description: |
  Directory overview and organization guide for all project folders in the Capturebox workspace. Summarizes folder structure, system definitions, archival process, natural language navigation patterns, and current status of principal active project systems.
---

# Projects Directory

## Structure

### active/

Work in progress — artifacts being actively developed.

- **active/cisco/** — Cisco work projects, drafts, analyses, decks
- **active/personal/** — Personal tasks, side projects, experiments

### systems/

Reusable systems and their operating files. These are frameworks/tools that generate artifacts.

| System | Purpose | Command(s) |
|----------------------------|--------------------------------------------------------------------|-------------------------------------|
| **design-pipeline** | Tracker for UX design work — gates, artifacts, reminders, progress | `/dp`, `/dp-status`, `/dp-gate` |
| **design-thinking-system** | Constraint-based design analysis, XDR principles evaluation | `/evaluate-design`, `/design-spec` |
| **feature-forge** | Orchestrator experiment for full automation (laboratory status) | `/feature-forge` |
| **hype-system** | Context-aware creative momentum and forward-looking observations | `/hype` |
| **journalpad-system** | Interactive journaling tool with adaptive Q/A and explore flavors | `/journalpad` |
| **lateral-os** | LSP Operating System — intelligence layer for ideation | `/lsp-*` commands |
| **natural-language-os** | Book project: "LLMs as substrate for domain-specific operating systems" | — |
| **persona-as-agent** | Security persona agents (SAM, REMI, ALEX, KIT, NIK) for HCD process | `/persona-system`, `/persona-adapt` |
| **problem-solver-system** | Lightweight connector aggregating problem-solving techniques from lateral-os, design-thinking-system, and signal-to-action | `/problem-solver` |
| **skills-engine-system** | Define, store, and route reusable skills as low-flavor background capabilities | `/skills` |
| **self-writer-system** | Performance reviews, personal reflection, growth journaling | `/perf-writer`, `/self-reflect` |
| **signal-to-action** | Transform unstructured input into structured artifacts via recipes | `/run-recipe` |
| **ux-blog-system** | 6-phase systematic blog post creation | `/ux-blog` |
| **ux-writer-system** | Context-aware UI copy generation (tooltips, microcopy, voice) | `/ux-writer`, `/ux-voice-check` |
| **visual-design-system** | Gestalt-based perceptual design principles, constraints, and framework evaluation | — |

### tools/

Standalone utilities and helpers.

- **prompt-maker-for-ai-assistant/** — Example build prompt for UI components (see `/prompt-maker-ui` command)

---

## Natural Language Guidance

| Query | Path |
|--------------------------------|-----------------------------------------|
| "Show me active work" | `active/` |
| "Show me Cisco projects" | `active/cisco/` |
| "Show me personal projects" | `active/personal/` |
| "What systems are available?" | `systems/` |
| "System outputs go where?" | `active/` (drafts) or `docs/` (final) |

---

## Archive Pattern

When a project is complete:

- Move from `active/cisco/` → `archive/projects/cisco/`
- Move from `active/personal/` → `archive/projects/personal/`
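
For example, a minimal shell sketch of the move (the project name is illustrative):

```bash
# Archive a finished personal project by moving it out of active/
project="example-project"
mkdir -p archive/projects/personal
mv "active/personal/$project" "archive/projects/personal/$project"
```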

---

## Current Active Projects

---

## System Status

| System | Status | Last Updated |
|------------------------|---------------------------|--------------|
| design-pipeline | Experimental/Pursuing | 2025-12-20 |
| design-thinking-system | Active | 2025-11-29 |
| feature-forge | Experimental/Laboratory | 2025-12-20 |
| hype-system | Active | 2025-11-30 |
| journalpad-system | Active | 2025-12-19 |
| lateral-os | Operational | 2025-11-28 |
| natural-language-os | First Draft | 2025-12-01 |
| persona-as-agent | Production | 2025-11-29 |
| self-writer-system | Active | 2025-11-27 |
| signal-to-action | Active (v2 testing) | 2025-11-30 |
| ux-blog-system | Active | 2025-11-25 |
| ux-writer-system | Active | 2025-11-24 |
| visual-design-system | Active | 2025-12-09 |

---

## Philosophy

The systems in this directory share a common architecture: **human-in-the-loop epistemic control**.

These are not automation engines. They are cognitive accelerators.

### The Inversion

The system doesn't produce answers for the human. The human works to produce their own understanding *through* using the system.

This inverts the typical AI framing where the model is the intelligent agent and the human is the beneficiary. Here, the human is the intelligent agent. The model runs the operating system.

### How It Works

Each system follows a recursive pattern:

1. **Take input** — unstructured material, constraints, context
2. **Transform it** — into structured scaffolds, interpretable artifacts
3. **Hand it back** — for interrogation, reshaping, redirection
4. **Use human shaping** — as the next instruction

The system is not "working for" the operator. The operator is working *through* the system.

### Why This Matters

Understanding emerges through recursive interaction. Each pass through the system is a learning cycle:

| Interaction | What the Human Gains |
|-------------|----------------------|
| Reading outputs | Seeing material reflected in new structure |
| Interpreting meaning | Connecting system transforms to real intent |
| Refining direction | Clarifying and focusing what actually needs to be known |
| Reshaping artifacts | Discovering gaps in topic understanding |
| Adjusting protocols | Encoding insight into future iterations |

The system doesn't need to be "right" — it needs to be *useful for thinking*. Every interaction surfaces something: a connection you missed, a framing you hadn't considered, a question you didn't know to ask.

The human learns. The system accelerates the learning.

What emerges is a hybrid computational model:

> The machine transforms information.
> The human transforms the system.
> And the system transforms the human.

### The Doctrine

A Natural Language Operating System is an expert companion, not a final authority: a structured way of thinking interactively with the machine. The model transforms your inputs, and you use those transforms to see more clearly, decide more deliberately, and learn faster.

> **Definition**: A Natural Language Operating System is a human-directed cognitive instrument that enables learning, reasoning, and decision-making through structured machine-mediated iteration.

---

*Last updated: 2025-12-20*

@@ -0,0 +1,339 @@
#!/usr/bin/env python3
"""
generate-kernel-payload.py - Generate portable NL-OS kernel payloads

Creates standalone files that can be fed to ANY LLM as system prompt/context.
The generated payload allows any capable model to "boot" into Capturebox NL-OS mode.

Usage:
    python3 scripts/generate-kernel-payload.py [options]

Options:
    --tier TIER      Payload tier: mandatory, lazy, full (default: mandatory)
    --format FORMAT  Output format: markdown, json, text (default: markdown)
    --output PATH    Output file path (default: portable/kernel-payload.md)
    --all            Generate all tiers and formats
    --verify         Verify all source files exist
    --tokens         Show token estimates only, don't generate

Examples:
    python3 scripts/generate-kernel-payload.py                  # Default payload
    python3 scripts/generate-kernel-payload.py --tier full      # Full kernel
    python3 scripts/generate-kernel-payload.py --format json    # JSON for APIs
    python3 scripts/generate-kernel-payload.py --all            # All variants
    python3 scripts/generate-kernel-payload.py --verify         # Verify files
"""

import argparse
import json
import sys
from datetime import datetime
from pathlib import Path

# Resolve capturebox root
SCRIPT_DIR = Path(__file__).parent
CAPTUREBOX_ROOT = SCRIPT_DIR.parent

# File definitions with token estimates (based on ~4 chars per token)
KERNEL_FILES = {
    'mandatory': [
        ('memory.md', 4600),
        ('AGENTS.md', 1200),
        ('axioms.yaml', 4800),
    ],
    'lazy': [
        ('personalities.md', 3600),
        ('.cursor/commands/COMMAND-MAP.md', 1350),
    ],
    'extended': [
        ('projects/README.md', 1000),
        ('KERNEL.yaml', 500),
    ],
}
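
# Note: tiers are cumulative (see estimate_tokens/generate_payload below).
# 'mandatory' loads only the mandatory files, 'lazy' adds the lazy files,
# and 'full' adds the extended files on top of both.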


def read_file(path: Path) -> str:
    """Read file contents, return placeholder if missing."""
    full_path = CAPTUREBOX_ROOT / path
    if full_path.exists():
        return full_path.read_text()
    return f"# File not found: {path}\n"


def verify_files() -> bool:
    """Verify all kernel files exist."""
    all_exist = True

    print("Verifying kernel files...")
    print()

    for tier_name, files in KERNEL_FILES.items():
        print(f"  {tier_name.upper()} tier:")
        for filename, tokens in files:
            full_path = CAPTUREBOX_ROOT / filename
            exists = full_path.exists()
            status = "[x]" if exists else "[ ]"
            size = f"({full_path.stat().st_size:,} bytes)" if exists else "(MISSING)"
            print(f"    {status} {filename} {size}")
            if not exists and tier_name == 'mandatory':
                all_exist = False
        print()

    return all_exist


def estimate_tokens(tier: str = 'mandatory') -> dict:
    """Calculate token estimates for a tier."""
    files_to_include = KERNEL_FILES['mandatory'].copy()

    if tier in ('lazy', 'full'):
        files_to_include.extend(KERNEL_FILES['lazy'])
    if tier == 'full':
        files_to_include.extend(KERNEL_FILES['extended'])

    total_tokens = sum(tokens for _, tokens in files_to_include)

    return {
        'tier': tier,
        'file_count': len(files_to_include),
        'estimated_tokens': total_tokens,
        'files': [(f, t) for f, t in files_to_include],
    }
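
# For reference, with the estimates above this yields roughly 10,600 tokens
# for 'mandatory' (3 files), 15,550 for 'lazy' (5 files), and 17,050 for
# 'full' (7 files); these are derived from the constants, not measured.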


def generate_payload(tier: str = 'mandatory', format: str = 'markdown') -> str:
    """Generate kernel payload for specified tier and format."""

    files_to_load = KERNEL_FILES['mandatory'].copy()
    if tier in ('lazy', 'full'):
        files_to_load.extend(KERNEL_FILES['lazy'])
    if tier == 'full':
        files_to_load.extend(KERNEL_FILES['extended'])

    # Load file contents
    sections = []
    total_tokens = 0

    for filename, tokens in files_to_load:
        content = read_file(Path(filename))
        actual_tokens = len(content) // 4  # Rough estimate
        sections.append({
            'file': filename,
            'content': content,
            'estimated_tokens': tokens,
            'actual_chars': len(content),
        })
        total_tokens += actual_tokens

    # Format output
    if format == 'json':
        return json.dumps({
            'metadata': {
                'generated': datetime.now().isoformat(),
                'generator': 'generate-kernel-payload.py',
                'tier': tier,
                'total_estimated_tokens': total_tokens,
                'file_count': len(sections),
            },
            'instructions': (
                "Feed this payload to any LLM as system prompt or context. "
                "The model will boot into Capturebox NL-OS mode. "
                "After loading, the model should acknowledge: "
                "'Kernel loaded. Ready for capturebox operations.'"
            ),
            'files': [
                {
                    'filename': s['file'],
                    'content': s['content'],
                }
                for s in sections
            ],
        }, indent=2, ensure_ascii=False)

    elif format == 'text':
        # Plain concatenation for simple use
        return '\n\n'.join([s['content'] for s in sections])

    else:  # markdown (default)
        header = f"""# Capturebox NL-OS Kernel Payload

**Generated**: {datetime.now().strftime('%Y-%m-%d %H:%M')}
**Tier**: {tier}
**Estimated tokens**: ~{total_tokens:,}
**Files**: {len(sections)}

---

## How to Use This Payload

Feed this entire file as system prompt or context to any capable LLM.
The LLM will "boot" into Capturebox NL-OS mode.

### Supported Runtimes:
- Claude Code / Claude API
- Cursor IDE
- Ollama (any model)
- llama.cpp
- LM Studio
- OpenAI-compatible APIs
- Any LLM with system prompt capability

### After Loading

The model should acknowledge: **"Kernel loaded. Ready for capturebox operations."**

### Quick Start

**Ollama:**
```bash
ollama run qwen2.5:3b --system "$(cat portable/kernel-payload.md)"
```

**LM Studio:**
1. Open LM Studio
2. Paste this file's contents into System Prompt
3. Start chatting

**API (OpenAI-compatible):**
```python
messages = [
    {{"role": "system", "content": open("portable/kernel-payload.md").read()}},
    {{"role": "user", "content": "Acknowledge kernel boot."}}
]
```

---

# KERNEL CONTEXT BEGINS

"""

        body_parts = []
        for s in sections:
            body_parts.append(f"""---

## {s['file']}

{s['content']}
""")

        footer = """
---

# KERNEL CONTEXT ENDS

After reading the above kernel context, acknowledge with:
"Kernel loaded. Ready for capturebox operations."
"""

        return header + '\n'.join(body_parts) + footer


def main():
    parser = argparse.ArgumentParser(
        description='Generate portable NL-OS kernel payloads',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__
    )
    parser.add_argument(
        '--tier',
        choices=['mandatory', 'lazy', 'full'],
        default='mandatory',
        help='Payload tier (default: mandatory)'
    )
    parser.add_argument(
        '--format',
        choices=['markdown', 'json', 'text'],
        default='markdown',
        help='Output format (default: markdown)'
    )
    parser.add_argument(
        '--output',
        type=Path,
        help='Output file path (default: portable/kernel-payload.md)'
    )
    parser.add_argument(
        '--all',
        action='store_true',
        help='Generate all tiers and formats'
    )
    parser.add_argument(
        '--verify',
        action='store_true',
        help='Verify all source files exist'
    )
    parser.add_argument(
        '--tokens',
        action='store_true',
        help='Show token estimates only'
    )

    args = parser.parse_args()

    # Verify mode
    if args.verify:
        success = verify_files()
        sys.exit(0 if success else 1)

    # Token estimate mode
    if args.tokens:
        for tier in ['mandatory', 'lazy', 'full']:
            info = estimate_tokens(tier)
            print(f"\n{tier.upper()} tier: ~{info['estimated_tokens']:,} tokens")
            for filename, tokens in info['files']:
                print(f"  - {filename}: ~{tokens:,}")
        sys.exit(0)

    # Ensure output directory exists
    output_dir = CAPTUREBOX_ROOT / 'portable'
    output_dir.mkdir(parents=True, exist_ok=True)

    # Generate all mode
    if args.all:
        generated = []

        # Generate the mandatory and full tiers, each in markdown and JSON
        for tier in ['mandatory', 'full']:
            for fmt in ['markdown', 'json']:
                suffix = '' if tier == 'mandatory' else f'-{tier}'
                ext = 'md' if fmt == 'markdown' else fmt
                output_path = output_dir / f'kernel-payload{suffix}.{ext}'

                payload = generate_payload(tier=tier, format=fmt)
                output_path.write_text(payload)

                info = estimate_tokens(tier)
                generated.append({
                    'file': output_path.name,
                    'tier': tier,
                    'format': fmt,
                    'tokens': info['estimated_tokens'],
                })
                print(f"Generated: {output_path}")

        print(f"\nGenerated {len(generated)} payload files in {output_dir}/")
        return

    # Single file generation
    if args.output:
        output_path = args.output
    else:
        suffix = '' if args.tier == 'mandatory' else f'-{args.tier}'
        ext = 'md' if args.format == 'markdown' else args.format
        output_path = output_dir / f'kernel-payload{suffix}.{ext}'

    # Generate payload
    payload = generate_payload(tier=args.tier, format=args.format)

    # Write output
    output_path.parent.mkdir(parents=True, exist_ok=True)
    output_path.write_text(payload)

    info = estimate_tokens(args.tier)
    print(f"Generated {args.tier} kernel payload: {output_path}")
    print(f"Estimated tokens: ~{info['estimated_tokens']:,}")


if __name__ == '__main__':
    main()

@@ -0,0 +1,192 @@
#!/usr/bin/env bash
# kernel-boot-llama-cpp.sh - Boot NL-OS kernel via llama.cpp
#
# Usage: ./scripts/kernel-boot-llama-cpp.sh [--model PATH] [--full] [--output FILE]
#
# Generates a prompt file from kernel files for use with the llama.cpp CLI.
# You can then run: llama-cli -m model.gguf -f prompt.txt --interactive
#
# Options:
#   --model PATH    Path to GGUF model file (optional, for direct launch)
#   --full          Load full tier including personalities and command map
#   --output FILE   Output prompt file (default: /tmp/capturebox-kernel-prompt.txt)
#   --ctx-size N    Context size in tokens (default: 16384)
#   --launch        Launch llama-cli directly (requires --model)
#   --help          Show this help message
#
# Examples:
#   ./scripts/kernel-boot-llama-cpp.sh                                # Generate prompt file
#   ./scripts/kernel-boot-llama-cpp.sh --full                         # Full kernel context
#   ./scripts/kernel-boot-llama-cpp.sh --output ~/kernel.txt          # Custom output path
#   ./scripts/kernel-boot-llama-cpp.sh --model ~/llama3.gguf --launch # Direct launch

set -euo pipefail

# Resolve capturebox root directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CAPTUREBOX_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"

# Defaults
MODEL_PATH=""
FULL_BOOT=false
OUTPUT_FILE="/tmp/capturebox-kernel-prompt.txt"
CTX_SIZE=16384
LAUNCH=false

# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Parse arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        --model)
            MODEL_PATH="$2"
            shift 2
            ;;
        --full)
            FULL_BOOT=true
            shift
            ;;
        --output)
            OUTPUT_FILE="$2"
            shift 2
            ;;
        --ctx-size)
            CTX_SIZE="$2"
            shift 2
            ;;
        --launch)
            LAUNCH=true
            shift
            ;;
        --help|-h)
            head -21 "$0" | tail -20  # print the header comment block (script lines 2-21)
            exit 0
            ;;
        *)
            echo -e "${RED}Unknown option: $1${NC}"
            echo "Use --help for usage information"
            exit 1
            ;;
    esac
done

# Verify required files exist
echo -e "${BLUE}Verifying kernel files...${NC}"

MANDATORY_FILES=(
    "$CAPTUREBOX_ROOT/memory.md"
    "$CAPTUREBOX_ROOT/AGENTS.md"
    "$CAPTUREBOX_ROOT/axioms.yaml"
)

for file in "${MANDATORY_FILES[@]}"; do
    if [[ ! -f "$file" ]]; then
        echo -e "${RED}CRITICAL: Missing mandatory file: $file${NC}"
        exit 1
    fi
done

# Build kernel payload
echo -e "${BLUE}Building kernel prompt file...${NC}"

# Llama 3 chat-template prompt with a system instruction (other model families may need different special tokens)
PROMPT="<|begin_of_text|><|start_header_id|>system<|end_header_id|>
|
|
99
|
+
|
|
100
|
+
You are booting into Capturebox NL-OS. The following kernel context defines your operational parameters. Read and internalize these instructions before responding.
|
|
101
|
+
|
|
102
|
+
# Capturebox NL-OS Kernel Context
|
|
103
|
+
|
|
104
|
+
## memory.md (Behavioral Directives)
|
|
105
|
+
|
|
106
|
+
$(cat "$CAPTUREBOX_ROOT/memory.md")
|
|
107
|
+
|
|
108
|
+
---
|
|
109
|
+
|
|
110
|
+
## AGENTS.md (Hard Invariants)
|
|
111
|
+
|
|
112
|
+
$(cat "$CAPTUREBOX_ROOT/AGENTS.md")
|
|
113
|
+
|
|
114
|
+
---
|
|
115
|
+
|
|
116
|
+
## axioms.yaml (Canonical Definitions)
|
|
117
|
+
|
|
118
|
+
$(cat "$CAPTUREBOX_ROOT/axioms.yaml")
|
|
119
|
+
"
|
|
120
|
+
|
|
121
|
+
if [[ "$FULL_BOOT" == true ]]; then
|
|
122
|
+
echo -e "${BLUE}Including lazy tier files...${NC}"
|
|
123
|
+
|
|
124
|
+
PROMPT+="
|
|
125
|
+
|
|
126
|
+
---
|
|
127
|
+
|
|
128
|
+
## personalities.md (Voice Presets)
|
|
129
|
+
|
|
130
|
+
$(cat "$CAPTUREBOX_ROOT/personalities.md")
|
|
131
|
+
|
|
132
|
+
---
|
|
133
|
+
|
|
134
|
+
## COMMAND-MAP.md (Command Registry)
|
|
135
|
+
|
|
136
|
+
$(cat "$CAPTUREBOX_ROOT/.cursor/commands/COMMAND-MAP.md")
|
|
137
|
+
"
|
|
138
|
+
fi
|
|
139
|
+
|
|
140
|
+
# Close system header and add user prompt
|
|
141
|
+
PROMPT+="
|
|
142
|
+
|
|
143
|
+
After processing this context, acknowledge with: \"Kernel loaded. Ready for capturebox operations.\"
|
|
144
|
+
|
|
145
|
+
<|eot_id|><|start_header_id|>user<|end_header_id|>
|
|
146
|
+
|
|
147
|
+
Please acknowledge that you have loaded the Capturebox NL-OS kernel.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
|
|
148
|
+
|
|
149
|
+
"
|
|
150
|
+
|
|
151
|
+
# Calculate approximate token count
|
|
152
|
+
CHAR_COUNT=${#PROMPT}
|
|
153
|
+
TOKEN_ESTIMATE=$((CHAR_COUNT / 4))
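# (The ~4 chars per token heuristic matches the estimate used by
# scripts/generate-kernel-payload.py.)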

# Write prompt file
echo "$PROMPT" > "$OUTPUT_FILE"

echo -e "${GREEN}Kernel prompt file created: $OUTPUT_FILE${NC}"
echo -e "${GREEN}Approximate tokens: ~$TOKEN_ESTIMATE${NC}"
echo -e "${BLUE}Tier: $(if [[ "$FULL_BOOT" == true ]]; then echo "FULL"; else echo "MANDATORY"; fi)${NC}"

# Provide usage instructions
echo ""
echo -e "${YELLOW}To use with llama.cpp:${NC}"
echo "  llama-cli -m /path/to/model.gguf -f $OUTPUT_FILE --interactive --ctx-size $CTX_SIZE"
echo ""
echo "Or with llama-server:"
echo "  llama-server -m /path/to/model.gguf --ctx-size $CTX_SIZE"
echo "  Then POST the prompt content to /completion endpoint"

# Launch if requested
if [[ "$LAUNCH" == true ]]; then
    if [[ -z "$MODEL_PATH" ]]; then
        echo -e "${RED}Error: --launch requires --model PATH${NC}"
        exit 1
    fi

    if [[ ! -f "$MODEL_PATH" ]]; then
        echo -e "${RED}Error: Model file not found: $MODEL_PATH${NC}"
        exit 1
    fi

    if ! command -v llama-cli &> /dev/null; then
        echo -e "${RED}Error: llama-cli not found in PATH${NC}"
        echo "Build llama.cpp and add to PATH, or use the prompt file manually"
        exit 1
    fi

    echo ""
    echo -e "${GREEN}Launching llama-cli with kernel context...${NC}"
    llama-cli -m "$MODEL_PATH" -f "$OUTPUT_FILE" --interactive --ctx-size "$CTX_SIZE"
fi