nlos 1.0.0

package/QUICKSTART.md ADDED
# NL-OS Quick Start Guide

Get the Natural Language Operating System running in under 5 minutes.

---

## Prerequisites

Choose ONE of:
- **Ollama** (recommended) - [ollama.ai](https://ollama.ai)
- **LM Studio** - [lmstudio.ai](https://lmstudio.ai)
- **llama.cpp** - [github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)
- **Any LLM with a chat interface** (Claude, ChatGPT, etc.)

---

## Method 1: npm Install (Recommended)

```bash
# Install the CLI globally
npm install -g nlos

# Pull a model (if using Ollama)
ollama pull qwen2.5:3b

# Boot NL-OS
nlos boot
```

That's it. You're in NL-OS mode.

### CLI Commands

```bash
nlos boot                     # Boot with the default model (qwen2.5:3b)
nlos boot --model llama3.1:8b # Boot with a specific model
nlos boot --full              # Load the full kernel (includes personalities)
nlos boot --dry-run           # Preview the system prompt without launching

nlos payload                  # Generate a portable payload file
nlos payload --tier full      # Full kernel payload
nlos payload --format json    # JSON format for API use

nlos verify                   # Check that kernel files exist
nlos tokens                   # Show token estimates
```

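These commands compose naturally in scripts. A minimal sketch (the `preflight_boot` wrapper is hypothetical, built only from the commands listed above): check the kernel files and token cost, then boot, forwarding any flags.

```bash
# preflight_boot: hypothetical wrapper around the commands above --
# verify kernel files and show token estimates before booting;
# extra flags are forwarded to `nlos boot`.
preflight_boot() {
  nlos verify && nlos tokens && nlos boot "$@"
}

# usage: preflight_boot --model llama3.1:8b
```

If `nlos verify` fails, the `&&` chain stops before booting, so you never launch against a broken kernel directory.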
48
---

## Method 2: Clone and Run

```bash
# Clone the repository
git clone https://github.com/yourusername/capturebox.git
cd capturebox

# Option A: Use the boot script (Ollama)
./scripts/kernel-boot-ollama.sh

# Option B: Use Make
make kernel.boot
```

---

## Method 3: Manual Paste (Any LLM)

Works with Claude, ChatGPT, Gemini, or any chat interface.

1. Open the file: `portable/kernel-payload.md`
2. Copy the entire contents (Cmd+A, Cmd+C)
3. Paste it into a new chat as your first message
4. Wait for the acknowledgment: "Kernel loaded. Ready for capturebox operations."

**Tip for macOS:**
```bash
cat portable/kernel-payload.md | pbcopy
# Now paste into any chat
```

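On Linux the same trick works with `xclip` in place of `pbcopy`. A small sketch (the `copy_payload` helper is hypothetical) that picks whichever clipboard tool exists and falls back to printing the payload so you can copy it by hand:

```bash
# copy_payload: hypothetical helper -- send a file to the clipboard via
# pbcopy (macOS) or xclip (Linux/X11); if neither exists, print the file.
copy_payload() {
  if command -v pbcopy >/dev/null 2>&1; then
    pbcopy < "$1"
  elif command -v xclip >/dev/null 2>&1; then
    xclip -selection clipboard < "$1"
  else
    cat "$1"
  fi
}

# usage: copy_payload portable/kernel-payload.md
```
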
81
---

## Verify It's Working

After boot, test these prompts:

```
What constraints are you operating under?
```
The model should mention: no emojis, append-only logs, patch-style edits.

```
./hype
```
The model should generate a context-aware creative boost.

```
What systems are available?
```
The model should list cognitive frameworks such as lateral-os, signal-to-action, etc.

102
---

## Configuration

### Change Default Model

Edit `KERNEL.yaml`:

```yaml
platforms:
  ollama:
    default_model: llama3.1:8b  # Change from qwen2.5:3b
```

Or use the command line:
```bash
nlos boot --model mistral:7b
```

### Operational Profiles

| Profile | Model | Use Case |
|---------|-------|----------|
| speed | qwen2.5:3b | Fast responses, batch work |
| balanced | mistral:7b | General use |
| quality | llama3.1:8b | Deep analysis, synthesis |

```bash
nlos boot --profile quality
```

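If you script your own launchers, the profile table above is easy to mirror in shell. A minimal sketch (the `profile_model` helper is hypothetical; the mappings are exactly the ones from the table):

```bash
# profile_model: hypothetical helper mapping a profile name to its model,
# mirroring the Operational Profiles table; unknown names fall back to
# the default model.
profile_model() {
  case "$1" in
    speed)    echo "qwen2.5:3b" ;;
    balanced) echo "mistral:7b" ;;
    quality)  echo "llama3.1:8b" ;;
    *)        echo "qwen2.5:3b" ;;
  esac
}

# usage: nlos boot --model "$(profile_model quality)"
```
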
133
---

## Common Workflows

### Daily Start
```bash
nlos boot
# You're now in NL-OS mode for the session
```

### Creative Work
```bash
nlos boot --full --profile quality
# Full kernel with personalities, using the best model
```

### Quick Extraction
```bash
nlos boot --profile speed
# Fast model for quick tasks
```

### Offline Work
```bash
# Ensure the model is pulled first
ollama pull llama3.1:8b

# Boot offline (no internet needed)
nlos boot --model llama3.1:8b
```

164
---

## Troubleshooting

### "Ollama not found"
```bash
# Install Ollama
brew install ollama  # macOS
# or download from ollama.ai

# Start the server
ollama serve
```

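To confirm the server is actually reachable before booting, you can poll Ollama's HTTP API, which listens on `localhost:11434` by default. A sketch (the `ollama_ready` helper is hypothetical):

```bash
# ollama_ready: hypothetical check -- succeeds when the Ollama server
# answers on its default port (11434) via the /api/tags endpoint.
ollama_ready() {
  curl -sf "http://localhost:11434/api/tags" >/dev/null
}

ollama_ready && echo "server up" || echo "run: ollama serve"
```
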
178
### "Model not found"
```bash
# Pull the model first
ollama pull qwen2.5:3b
ollama pull llama3.1:8b  # for quality mode
```

### "Context too long"
Use a model with a larger context window, or load only the mandatory tier:
```bash
nlos boot --tier mandatory
```

### Model doesn't follow rules
Some smaller models may not follow all kernel rules. Try:
- a larger model (7B+)
- repeating key constraints in your first message
- `--profile quality`

197
---

## Next Steps

1. **Explore commands**: Run `./sys-ref` to see all 57+ slash commands
2. **Try personalities**: Run `./assume Quentin` for a different voice
3. **Build workflows**: Create your own commands in `.cursor/commands/`
4. **Read the philosophy**: See `projects/systems/natural-language-os/` for the book

---

## File Locations

| What | Where |
|------|-------|
| Kernel entry point | `KERNEL.md` |
| Configuration | `KERNEL.yaml` |
| Behavioral rules | `memory.md` |
| Safety invariants | `AGENTS.md` |
| Slash commands | `.cursor/commands/` |
| Cognitive systems | `projects/systems/` |
| Portable payloads | `portable/` |

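`nlos verify` covers this for you, but the same check is easy to do by hand; a minimal loop over the kernel files from the table above:

```bash
# Report any missing kernel file from the table above
# (same idea as `nlos verify`).
for f in KERNEL.md KERNEL.yaml memory.md AGENTS.md; do
  [ -f "$f" ] || echo "missing: $f"
done
```
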
220
---

## Getting Help

- **In-session**: Ask "What commands are available?"
- **Documentation**: See `portable/README.md`
- **Issues**: [github.com/yourusername/capturebox/issues](https://github.com/yourusername/capturebox/issues)

---

*Time to boot: ~2 minutes with Ollama, ~30 seconds with a manual paste.*
package/README.md ADDED
# NL-OS: Natural Language Operating System

A model-agnostic kernel that turns any LLM into a cognitive operating system.

```bash
npm install -g nlos
nlos boot
```

---

## What is NL-OS?

NL-OS inverts the traditional AI relationship: **you are the intelligent agent; the model is the substrate.**

Instead of asking an AI to do things for you, NL-OS provides a structured environment where you think *through* the model. The kernel defines behavioral rules, operational constraints, and cognitive frameworks that persist across sessions and work with any capable LLM.

## Key Features

- **Model-agnostic** - Works with Claude, GPT-4, Llama, Mistral, Qwen, or any LLM
- **Portable kernel** - Same behavioral rules across all runtimes
- **57+ slash commands** - Executable workflows defined in natural language
- **17 cognitive systems** - Reusable frameworks for ideation, writing, design, and more
- **Zero API dependencies** - Pure text-based instruction set

## Quick Start

### Option 1: npm (Recommended)

```bash
# Install globally
npm install -g nlos

# Boot with Ollama (default)
nlos boot

# Boot with a specific model
nlos boot --model llama3.1:8b

# Generate a portable payload
nlos payload
```

44
### Option 2: Direct Use

```bash
# Clone the kernel
git clone https://github.com/yourusername/capturebox.git
cd capturebox

# Boot via Ollama
./scripts/kernel-boot-ollama.sh

# Or generate a payload and paste it into any LLM
cat portable/kernel-payload.md | pbcopy
```

### Option 3: Manual Paste

1. Open [portable/kernel-payload.md](portable/kernel-payload.md)
2. Copy the entire contents
3. Paste it into any LLM chat as your first message
4. The model acknowledges: "Kernel loaded. Ready for capturebox operations."

65
## Supported Runtimes

| Runtime | Command | Notes |
|---------|---------|-------|
| Ollama | `nlos boot` | Default; any Ollama model |
| llama.cpp | `nlos boot --runtime llama-cpp` | GGUF models |
| LM Studio | `nlos boot --runtime lm-studio` | GUI or API |
| Claude Code | Native | Auto-loads from the directory |
| Cursor IDE | Native | Via `.cursorrules` |
| Any LLM | `nlos payload` | Copy/paste the system prompt |

## Architecture

```
KERNEL.md          # Entry point (auto-loaded by Claude Code/Cursor)
KERNEL.yaml        # Platform/model configuration
memory.md          # Behavioral directives (~4.6K tokens)
AGENTS.md          # Hard invariants and safety rules (~1.2K tokens)
axioms.yaml        # Canonical definitions (~4.8K tokens)
personalities.md   # Voice presets (lazy-loaded)
.cursor/commands/  # 57+ slash command specs
projects/systems/  # 17 cognitive frameworks
portable/          # Standalone boot payloads
```

### Boot Tiers

| Tier | Tokens | Contents |
|------|--------|----------|
| Mandatory | ~10,600 | memory.md, AGENTS.md, axioms.yaml |
| Lazy | +4,950 | personalities.md, COMMAND-MAP.md |
| Full | ~15,500 | All kernel files |

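The full tier is simply the mandatory tier plus the lazy files, and the estimates above roughly add up:

```bash
# Mandatory (~10,600) + Lazy (+4,950) tokens from the Boot Tiers table
echo $((10600 + 4950))   # prints 15550, matching the ~15,500 "Full" estimate
```
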
98
## Core Concepts

### The Fundamental Inversion

Traditional AI: "AI, write me a report."
NL-OS: "I will think through this problem using the model as my cognitive substrate."

The model doesn't produce answers *for* you. You produce understanding *through* the model.

### Slash Commands

Commands are protocol specifications, not code. Any LLM that reads the spec can execute it.

```
./kernel-boot     # Initialize NL-OS
./hype            # Creative momentum boost
./ux-writer       # Generate UI copy
./design-spec     # Create design specifications
./assume Quentin  # Adopt a personality preset
```

### Cognitive Systems

Reusable frameworks for different thinking modes:

- **lateral-os** - Ideation with dimensional analysis
- **signal-to-action** - Transform inputs into structured outputs
- **persona-as-agent** - Research through security personas
- **self-writer-system** - Reflection and performance reviews
- **hype-system** - Creative momentum tracking

129
## Configuration

Edit `KERNEL.yaml` to customize:

```yaml
runtime:
  current: ollama  # or: claude_code, cursor, llama_cpp, lm_studio

platforms:
  ollama:
    default_model: qwen2.5:3b

local_llm:
  default_profile: balanced  # or: speed, quality, memory_constrained
```

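Scripts can read these values back without a YAML parser because the layout is flat. A naive sketch (it assumes exactly the layout shown above, not arbitrary YAML):

```bash
# Pull platforms.ollama.default_model out of KERNEL.yaml with awk.
# Naive: matches the first default_model line after "platforms:".
if [ -f KERNEL.yaml ]; then
  model=$(awk '/^platforms:/{in_p=1} in_p && /default_model:/{print $2; exit}' KERNEL.yaml)
  echo "default model: $model"
fi
```
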
145
## Development

```bash
# Install dependencies
npm install

# Run tests
npm test

# Generate all payloads
make kernel.payload

# Verify kernel files
make kernel.verify
```

## Philosophy

1. **Human epistemic control** - You decide what's true; the model helps you think
2. **Cognitive acceleration** - Systems scaffold thinking, not replace it
3. **Evidence-first** - All claims trace to sources
4. **Natural language as infrastructure** - Everything is human-readable
5. **Portability by design** - Works anywhere, depends on nothing

169
## FAQ

**Q: Why not just use Claude/GPT directly?**

A: You can. NL-OS adds persistent behavioral rules, reusable workflows, and cognitive frameworks that survive across sessions. It's the difference between a blank canvas and an equipped studio.

**Q: Does this require an internet connection?**

A: No. Run entirely offline with Ollama, llama.cpp, or LM Studio. The kernel is just text files.

**Q: What models work best?**

A: Any model with a 16K+ context window and good instruction-following. Tested with Claude, GPT-4, Llama 3.1, Mistral, and Qwen 2.5.

**Q: Can I customize the kernel?**

A: Yes. Edit `memory.md` for behavioral rules, add commands to `.cursor/commands/`, and create new systems in `projects/systems/`.

## License

MIT

## Contributing

1. Fork the repository
2. Create a feature branch
3. Test with multiple LLM runtimes
4. Submit a pull request

See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

---

*NL-OS: Think through machines, not at them.*