@hanzo/dev 2.1.1 → 3.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,359 +1,358 @@
1
- # @hanzo/dev
1
+ <img src="docs/images/every-logo.png" alt="Every Code Logo" width="400">
2
2
 
3
- > State-of-the-art AI development platform with swarm intelligence
3
+ &ensp;
4
4
 
5
- [![npm version](https://badge.fury.io/js/@hanzo%2Fdev.svg)](https://www.npmjs.com/package/@hanzo/dev)
6
- [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
5
+ **Every Code** (Code for short) is a fast, local coding agent for your terminal. It's a community-driven fork of `openai/codex` focused on real developer ergonomics: browser integration, multi-agent commands, theming, and reasoning control — all while staying compatible with upstream.
7
6
 
8
- @hanzo/dev is an advanced AI development platform that orchestrates multiple AI agents working in parallel. Built with swarm intelligence and Model Context Protocol (MCP) at its core, it achieves industry-leading performance on software engineering benchmarks.
7
+ &ensp;
8
+ ## What's new in v0.6.0 (December 2025)
9
9
 
10
- ## Features
10
+ - **Auto Review** – background ghost-commit watcher runs reviews in a separate worktree whenever a turn changes code; uses `codex-5.1-mini-high` and reports issues plus ready-to-apply fixes without blocking the main thread.
11
+ - **Code Bridge** – Sentry-style local bridge that streams errors, console, screenshots, and control from running apps into Code; ships an MCP server; install by asking Code to pull `https://github.com/just-every/code-bridge`.
12
+ - **Plays well with Auto Drive** – reviews run in parallel with long Auto Drive tasks so quality checks land while the flow keeps moving.
13
+ - **Quality-first focus** – the release shifts emphasis from "can the model write this file" to "did we verify it works".
14
+ - _From v0.5.0:_ rename to Every Code, upgraded `/auto` planning/recovery, unified `/settings`, faster streaming/history with card-based activity, and more reliable `/resume` + `/undo`.
11
15
 
12
- - 🤖 **Multi-AI Support**: Integrate with Claude, OpenAI, Gemini, and local AI models
13
- - 🔧 **Tool Unification**: Single interface for all AI coding assistants
14
- - 🌐 **MCP Integration**: Full Model Context Protocol support for extensible tools
15
- - 👥 **Peer Agent Networks**: Spawn multiple agents that collaborate via MCP
16
- - 🎯 **CodeAct Agent**: Automatic planning, execution, and self-correction
17
- - 🌍 **Browser Automation**: Control browsers via Hanzo Browser/Extension
18
- - 📝 **Advanced Editing**: File manipulation with undo, chunk localization
19
- - 🚀 **Parallel Execution**: Run multiple tasks concurrently across agents
20
- - 🔍 **SWE-bench Ready**: Optimized for software engineering benchmarks
16
+ [Read the full notes in RELEASE_NOTES.md](docs/release-notes/RELEASE_NOTES.md)
21
17
 
22
- ## Installation
18
+ &ensp;
19
+ ## Why Every Code
20
+
21
+ - 🚀 **Auto Drive orchestration** – Multi-agent automation that now self-heals and ships complete tasks.
22
+ - 🌐 **Browser Integration** – CDP support, headless browsing, screenshots captured inline.
23
+ - 🤖 **Multi-agent commands** – `/plan`, `/code` and `/solve` coordinate multiple CLI agents.
24
+ - 🧭 **Unified settings hub** – `/settings` overlay for limits, theming, approvals, and provider wiring.
25
+ - 🎨 **Theme system** – Switch between accessible presets, customize accents, and preview live via `/themes`.
26
+ - 🔌 **MCP support** – Extend with filesystem, DBs, APIs, or your own tools.
27
+ - 🔒 **Safety modes** – Read-only, approvals, and workspace sandboxing.
28
+
29
+ &ensp;
30
+ ## AI Videos
31
+
32
+ &ensp;
33
+ <p align="center">
34
+ <a href="https://www.youtube.com/watch?v=Ra3q8IVpIOc">
35
+ <img src="docs/images/video-auto-review-play.jpg" alt="Play Auto Review video" width="100%">
36
+ </a><br>
37
+ <strong>Auto Review</strong>
38
+ </p>
39
+
40
+ &ensp;
41
+ <p align="center">
42
+ <a href="https://youtu.be/UOASHZPruQk">
43
+ <img src="docs/images/video-auto-drive-new-play.jpg" alt="Play Introducing Auto Drive video" width="100%">
44
+ </a><br>
45
+ <strong>Auto Drive Overview</strong>
46
+ </p>
47
+
48
+ &ensp;
49
+ <p align="center">
50
+ <a href="https://youtu.be/sV317OhiysQ">
51
+ <img src="docs/images/video-v03-play.jpg" alt="Play Multi-Agent Support video" width="100%">
52
+ </a><br>
53
+ <strong>Multi-Agent Promo</strong>
54
+ </p>
55
+
56
+
57
+
58
+ &ensp;
59
+ ## Quickstart
60
+
61
+ ### Run
23
62
 
24
63
  ```bash
25
- npm install -g @hanzo/dev
64
+ npx -y @just-every/code
26
65
  ```
27
66
 
28
- Or use directly with npx:
67
+ ### Install & Run
29
68
 
30
69
  ```bash
31
- npx @hanzo/dev
70
+ npm install -g @just-every/code
71
+ code # or `coder` if you're using VS Code
32
72
  ```
33
73
 
34
- ## Quick Start
74
+ Note: If another tool already provides a `code` command (e.g. VS Code), our CLI is also installed as `coder`. Use `coder` to avoid conflicts.
35
75
 
36
- ### Interactive Mode
76
+ **Authenticate** (one of the following):
77
+ - **Sign in with ChatGPT** (Plus/Pro/Team; uses models available to your plan)
78
+ - Run `code` and pick "Sign in with ChatGPT"
79
+ - **API key** (usage-based)
80
+ - Set `export OPENAI_API_KEY=xyz` and run `code`
81
+
82
+ ### Install Claude & Gemini (optional)
83
+
84
+ Every Code supports orchestrating other AI CLI tools. Install and configure these to use them alongside Code.
37
85
 
38
86
  ```bash
39
- dev
87
+ # Ensure Node.js 20+ is available locally (installs into ~/.n)
88
+ npm install -g n
89
+ export N_PREFIX="$HOME/.n"
90
+ export PATH="$N_PREFIX/bin:$PATH"
91
+ n 20.18.1
92
+
93
+ # Install the companion CLIs
94
+ export npm_config_prefix="${npm_config_prefix:-$HOME/.npm-global}"
95
+ mkdir -p "$npm_config_prefix/bin"
96
+ export PATH="$npm_config_prefix/bin:$PATH"
97
+ npm install -g @anthropic-ai/claude-code @google/gemini-cli @qwen-code/qwen-code
98
+
99
+ # Quick smoke tests
100
+ claude --version
101
+ gemini --version
102
+ qwen --version
40
103
  ```
41
104
 
42
- This launches an interactive menu where you can:
43
- - Select your preferred AI tool
44
- - Configure API keys
45
- - Access specialized commands
105
+ > ℹ️ Add `export N_PREFIX="$HOME/.n"` and `export PATH="$N_PREFIX/bin:$PATH"` (plus the `npm_config_prefix` bin path) to your shell profile so the CLIs stay on `PATH` in future sessions.
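Those exports can be persisted in one step. A minimal sketch, assuming a bash-style profile; `PROFILE` is a stand-in for whichever file your shell actually sources (`~/.bashrc`, `~/.zshrc`, ...):

```shell
# Append the toolchain paths to your shell profile so future sessions keep them.
# PROFILE is a placeholder; point it at the profile file your shell reads.
PROFILE="${PROFILE:-$HOME/.bashrc}"
cat >> "$PROFILE" <<'EOF'
export N_PREFIX="$HOME/.n"
export PATH="$N_PREFIX/bin:$PATH"
export npm_config_prefix="$HOME/.npm-global"
export PATH="$npm_config_prefix/bin:$PATH"
EOF
```

Open a new terminal (or `source "$PROFILE"`) and re-run the version checks to confirm the CLIs still resolve.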
46
106
 
47
- ### Direct Tool Access
107
+ &ensp;
108
+ ## Commands
48
109
 
110
+ ### Browser
49
111
  ```bash
50
- # Launch with specific AI provider
51
- dev --claude
52
- dev --openai
53
- dev --gemini
54
- dev --grok
55
- dev --local
56
-
57
- # Advanced modes
58
- dev --workspace # Unified workspace mode
59
- dev --benchmark # Run SWE-bench evaluation
60
-
61
- # Swarm mode - edit multiple files in parallel
62
- dev --claude --swarm 5 -p "Add copyright header to all files"
63
- dev --openai --swarm 10 -p "Fix all ESLint errors"
64
- dev --gemini --swarm 20 -p "Add JSDoc comments to all functions"
65
- dev --local --swarm 100 -p "Format all files with prettier"
66
- ```
112
+ # Connect Code to an external Chrome browser (with CDP enabled)
113
+ /chrome # Connect with auto-detect port
114
+ /chrome 9222 # Connect to specific port
67
115
 
68
- ### Environment Configuration
69
-
70
- Create a `.env` file in your project root:
116
+ # Switch to internal browser mode
117
+ /browser # Use internal headless browser
118
+ /browser https://example.com # Open URL in internal browser
119
+ ```
71
120
 
72
- ```env
73
- # API Keys
74
- ANTHROPIC_API_KEY=your_key_here
75
- OPENAI_API_KEY=your_key_here
76
- GEMINI_API_KEY=your_key_here
77
- TOGETHER_API_KEY=your_key_here
121
+ ### Agents
122
+ ```bash
123
+ # Plan code changes (Claude, Gemini and GPT-5 consensus)
124
+ # All agents review task and create a consolidated plan
125
+ /plan "Stop the AI from ordering pizza at 3AM"
78
126
 
79
- # Local AI
80
- HANZO_APP_URL=http://localhost:8080
81
- LOCAL_LLM_URL=http://localhost:11434
127
+ # Solve complex problems (Claude, Gemini and GPT-5 race)
128
+ # Fastest preferred (see https://arxiv.org/abs/2505.17813)
129
+ /solve "Why does deleting one user drop the whole database?"
82
130
 
83
- # Browser Integration
84
- HANZO_BROWSER_URL=http://localhost:9223
85
- HANZO_EXTENSION_WS=ws://localhost:9222
131
+ # Write code! (Claude, Gemini and GPT-5 consensus)
132
+ # Creates multiple worktrees then implements the optimal solution
133
+ /code "Show dark mode when I feel cranky"
86
134
  ```
87
135
 
88
- ## Advanced Features
89
-
90
- ### Swarm Mode
136
+ ### Auto Drive
137
+ ```bash
138
+ # Hand off a multi-step task; Auto Drive will coordinate agents and approvals
139
+ /auto "Refactor the auth flow and add device login"
91
140
 
92
- Launch multiple agents to edit files in parallel across your codebase:
141
+ # Resume or inspect an active Auto Drive run
142
+ /auto status
143
+ ```
93
144
 
145
+ ### General
94
146
  ```bash
95
- # Basic swarm usage
96
- dev --claude --swarm 5 -p "Add copyright header to all files"
147
+ # Try a new theme!
148
+ /themes
97
149
 
98
- # Process specific file types
99
- dev --openai --swarm 20 -p "Add type annotations" --pattern "**/*.ts"
150
+ # Change reasoning level
151
+ /reasoning low|medium|high
100
152
 
101
- # Maximum parallelism (up to 100 agents)
102
- dev --gemini --swarm 100 -p "Fix linting errors"
153
+ # Switch models or effort presets
154
+ /model
103
155
 
104
- # Using local provider for cost efficiency
105
- dev --local --swarm 50 -p "Format with prettier"
156
+ # Start new conversation
157
+ /new
106
158
  ```
107
159
 
108
- Features:
109
- - **Lazy agent spawning**: Agents are created as needed, not all at once
110
- - **Automatic authentication**: Handles provider login if API keys are available
111
- - **Parallel execution**: Each agent processes a different file simultaneously
112
- - **Smart file detection**: Automatically finds all editable files in your project
113
- - **Progress tracking**: Real-time status updates as files are processed
160
+ ## CLI reference
161
+
162
+ ```shell
163
+ code [options] [prompt]
164
+
165
+ Options:
166
+ --model <name> Override the model for the active provider (e.g. gpt-5.1)
167
+ --read-only Prevent file modifications
168
+ --no-approval Skip approval prompts (use with caution)
169
+ --config <key=val> Override config values
170
+ --oss Use local open source models
171
+ --sandbox <mode> Set sandbox level (read-only, workspace-write, etc.)
172
+ --help Show help information
173
+ --debug Log API requests and responses to file
174
+ --version Show version number
175
+ ```
114
176
 
115
- Example: Adding copyright headers to 5 files in parallel:
177
+ Note: `--model` only changes the model name sent to the active provider. To use a different provider, set `model_provider` in `config.toml`. Providers must expose an OpenAI-compatible API (Chat Completions or Responses).
116
178
 
117
- ```bash
118
- # Navigate to your test directory
119
- cd test-swarm
179
+ &ensp;
180
+ ## Memory & project docs
181
+
182
+ Every Code can remember context across sessions:
120
183
 
121
- # Run swarm with Claude
122
- dev --claude --swarm 5 -p "Add copyright header '// Copyright 2025 Hanzo Industries Inc.' at the top of each file"
184
+ 1. **Create an `AGENTS.md` or `CLAUDE.md` file** in your project root:
185
+ ```markdown
186
+ # Project Context
187
+ This is a React TypeScript application with:
188
+ - Authentication via JWT
189
+ - PostgreSQL database
190
+ - Express.js backend
191
+
192
+ ## Key files:
193
+ - `/src/auth/` - Authentication logic
194
+ - `/src/api/` - API client code
195
+ - `/server/` - Backend services
123
196
  ```
124
197
 
125
- The swarm will:
126
- 1. Find all editable files in the directory
127
- 2. Spawn up to 5 Claude agents
128
- 3. Assign each agent a file to process
129
- 4. Execute edits in parallel
130
- 5. Report results when complete
198
+ 2. **Session memory**: Every Code maintains conversation history
199
+ 3. **Codebase analysis**: Automatically understands project structure
131
200
 
132
- Supported providers:
133
- - `--claude`: Claude AI (requires ANTHROPIC_API_KEY or claude login)
134
- - `--openai`: OpenAI GPT (requires OPENAI_API_KEY)
135
- - `--gemini`: Google Gemini (requires GOOGLE_API_KEY)
136
- - `--grok`: Grok AI (requires GROK_API_KEY)
137
- - `--local`: Local Hanzo agent (no API key required)
201
+ &ensp;
202
+ ## Non-interactive / CI mode
138
203
 
139
- ### Workspace Mode
204
+ For automation and CI/CD:
140
205
 
141
- Open a unified workspace with all tools available:
206
+ ```shell
207
+ # Run a specific task
208
+ code --no-approval "run tests and fix any failures"
142
209
 
143
- ```bash
144
- dev workspace
145
- ```
210
+ # Generate reports
211
+ code --read-only "analyze code quality and generate report"
146
212
 
147
- Features:
148
- - Integrated shell, editor, browser, and planner
149
- - Persistent session state
150
- - Tool switching without context loss
151
- - Unified command interface
152
-
153
- ### MCP Server Configuration
154
-
155
- Configure MCP servers in `.mcp.json`:
156
-
157
- ```json
158
- {
159
- "servers": [
160
- {
161
- "name": "filesystem",
162
- "command": "npx",
163
- "args": ["@modelcontextprotocol/server-filesystem"],
164
- "env": { "MCP_ALLOWED_PATHS": "." }
165
- },
166
- {
167
- "name": "git",
168
- "command": "npx",
169
- "args": ["@modelcontextprotocol/server-git"]
170
- },
171
- {
172
- "name": "custom",
173
- "command": "python",
174
- "args": ["my-mcp-server.py"],
175
- "transport": "stdio"
176
- }
177
- ]
178
- }
213
+ # Batch processing
214
+ code --config output_format=json "list all TODO comments"
179
215
  ```
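In a pipeline you usually want to keep the output as an artifact and degrade gracefully when the CLI is unavailable. A sketch using the flags from the examples above; the file name and the empty-report fallback are illustrative assumptions:

```shell
# Capture a read-only report for CI artifacts; tolerate a missing `code` binary
# (e.g. on a runner that hasn't installed it yet).
set -u
if command -v code >/dev/null; then
  code --read-only --config output_format=json "list all TODO comments" > code-report.json
else
  echo '{"todos": []}' > code-report.json   # placeholder so later steps still run
fi
```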
180
216
 
181
- ## Architecture
182
-
183
- ### Core Components
184
-
185
- 1. **Editor Module** (`lib/editor.ts`)
186
- - View, create, and edit files
187
- - String replacement with validation
188
- - Chunk localization for large files
189
- - Undo/redo functionality
190
-
191
- 2. **MCP Client** (`lib/mcp-client.ts`)
192
- - Stdio and WebSocket transports
193
- - Dynamic tool discovery
194
- - Session management
195
- - JSON-RPC protocol
196
-
197
- 3. **CodeAct Agent** (`lib/code-act-agent.ts`)
198
- - Automatic task planning
199
- - Parallel step execution
200
- - Self-correction with retries
201
- - State and observation tracking
202
-
203
- 4. **Peer Agent Network** (`lib/peer-agent-network.ts`)
204
- - Agent spawning strategies
205
- - Inter-agent communication
206
- - MCP tool exposure
207
- - Swarm optimization
208
-
209
- 5. **Agent Loop** (`lib/agent-loop.ts`)
210
- - LLM provider abstraction
211
- - Browser automation
212
- - Tool orchestration
213
- - Execution management
214
-
215
- ## API Usage
216
-
217
- ### Programmatic Access
218
-
219
- ```typescript
220
- import { CodeActAgent, PeerAgentNetwork, ConfigurableAgentLoop } from '@hanzo/dev';
221
-
222
- // Create an agent
223
- const agent = new CodeActAgent('my-agent', functionCallingSystem);
224
- await agent.plan('Fix the login bug');
225
- const result = await agent.execute();
226
-
227
- // Create a peer network
228
- const network = new PeerAgentNetwork();
229
- await network.spawnAgentsForCodebase('./src', 'claude-code', 'one-per-file');
230
-
231
- // Configure agent loop
232
- const loop = new ConfigurableAgentLoop({
233
- provider: {
234
- name: 'Claude',
235
- type: 'anthropic',
236
- apiKey: process.env.ANTHROPIC_API_KEY,
237
- model: 'claude-3-opus-20240229',
238
- supportsTools: true,
239
- supportsStreaming: true
240
- },
241
- maxIterations: 10,
242
- enableMCP: true,
243
- enableBrowser: true,
244
- enableSwarm: true
245
- });
246
-
247
- await loop.initialize();
248
- await loop.execute('Refactor the authentication module');
249
- ```
217
+ &ensp;
218
+ ## Model Context Protocol (MCP)
219
+
220
+ Every Code supports MCP for extended capabilities:
250
221
 
251
- ### Custom Tool Registration
252
-
253
- ```typescript
254
- import { FunctionCallingSystem } from '@hanzo/dev';
255
-
256
- const functionCalling = new FunctionCallingSystem();
257
-
258
- // Register custom tool
259
- functionCalling.registerTool({
260
- name: 'my_custom_tool',
261
- description: 'Does something special',
262
- parameters: {
263
- type: 'object',
264
- properties: {
265
- input: { type: 'string', description: 'Tool input' }
266
- },
267
- required: ['input']
268
- },
269
- handler: async (args) => {
270
- // Tool implementation
271
- return { success: true, result: `Processed: ${args.input}` };
272
- }
273
- });
222
+ - **File operations**: Advanced file system access
223
+ - **Database connections**: Query and modify databases
224
+ - **API integrations**: Connect to external services
225
+ - **Custom tools**: Build your own extensions
226
+
227
+ Configure MCP in `~/.code/config.toml`. Define each server under a named table like `[mcp_servers.<name>]` (this maps to the JSON `mcpServers` object used by other clients):
228
+
229
+ ```toml
230
+ [mcp_servers.filesystem]
231
+ command = "npx"
232
+ args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
274
233
  ```
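Additional servers follow the same pattern, one table per tool. A sketch reusing the `@modelcontextprotocol/server-git` package; adjust the table name and args to the servers you actually run:

```toml
[mcp_servers.git]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-git"]
```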
275
234
 
276
- ## Performance & Benchmarks
235
+ &ensp;
236
+ ## Configuration
277
237
 
278
- ### SWE-bench Results
238
+ Main config file: `~/.code/config.toml`
279
239
 
280
- Our platform is continuously evaluated on the Software Engineering Benchmark:
240
+ > [!NOTE]
241
+ > Every Code reads from both `~/.code/` and `~/.codex/` for backwards compatibility, but it only writes updates to `~/.code/`. If you switch back to Codex and it fails to start, remove `~/.codex/config.toml`. If Every Code appears to miss settings after upgrading, copy your legacy `~/.codex/config.toml` into `~/.code/`.
281
242
 
282
- | Metric | Score | Details |
283
- |--------|-------|---------|
284
- | Success Rate | 15%+ | Solving real GitHub issues |
285
- | Avg Resolution Time | 90s | Per task completion |
286
- | Cost Efficiency | $0.10/task | Using swarm optimization |
287
- | Parallel Speedup | 4.2x | With 5-agent swarm |
243
+ ```toml
244
+ # Model settings
245
+ model = "gpt-5.1"
246
+ model_provider = "openai"
288
247
 
289
- ### Running Benchmarks
248
+ # Behavior
249
+ approval_policy = "on-request" # untrusted | on-failure | on-request | never
250
+ model_reasoning_effort = "medium" # low | medium | high
251
+ sandbox_mode = "workspace-write"
290
252
 
291
- ```bash
292
- # Run full SWE-bench evaluation
293
- dev --benchmark swe-bench
294
-
295
- # Run on specific dataset
296
- dev --benchmark swe-bench --dataset lite
297
-
298
- # Custom benchmark configuration
299
- dev --benchmark swe-bench \
300
- --agents 10 \
301
- --parallel \
302
- --timeout 300 \
303
- --output results.json
253
+ # UI preferences (see THEME_CONFIG.md)
254
+ [tui.theme]
255
+ name = "light-photon"
256
+
257
+ # Add config for specific models
258
+ [profiles.gpt-5]
259
+ model = "gpt-5.1"
260
+ model_provider = "openai"
261
+ approval_policy = "never"
262
+ model_reasoning_effort = "high"
263
+ model_reasoning_summary = "detailed"
304
264
  ```
305
265
 
306
- ### Performance Optimizations
266
+ ### Environment variables
307
267
 
308
- 1. **Swarm Intelligence**: Multiple agents work on different aspects simultaneously
309
- 2. **Local Orchestration**: Hanzo Zen manages coordination locally, reducing API calls
310
- 3. **Smart Caching**: MCP tools cache results across agents
311
- 4. **Parallel Execution**: CodeAct identifies independent steps and runs them concurrently
268
+ - `CODE_HOME`: Override config directory location
269
+ - `OPENAI_API_KEY`: Use API key instead of ChatGPT auth
270
+ - `OPENAI_BASE_URL`: Use OpenAI-compatible API endpoints (chat or responses)
271
+ - `OPENAI_WIRE_API`: Force the built-in OpenAI provider to use `chat` or `responses` wiring
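As a sketch of how these combine, you can point the CLI at any OpenAI-compatible server by exporting the variables before launching. The key and endpoint below are placeholders, not working values:

```shell
# Illustrative values only: swap in your real key and endpoint.
export OPENAI_API_KEY="sk-example"                   # usage-based auth instead of ChatGPT sign-in
export OPENAI_BASE_URL="http://localhost:11434/v1"   # any OpenAI-compatible endpoint
export OPENAI_WIRE_API="chat"                        # force Chat Completions wiring
if command -v code >/dev/null; then
  code --read-only "summarize this repo"
fi
```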
312
272
 
313
- ## Testing
273
+ &ensp;
274
+ ## FAQ
314
275
 
315
- ```bash
316
- # Run all tests
317
- npm test
276
+ **How is this different from the original?**
277
+ > This fork adds browser integration, multi-agent commands (`/plan`, `/solve`, `/code`), theme system, and enhanced reasoning controls while maintaining full compatibility.
318
278
 
319
- # Run specific test suite
320
- npm run test:swe-bench
279
+ **Can I use my existing Codex configuration?**
280
+ > Yes. Every Code reads from both `~/.code/` (primary) and legacy `~/.codex/` directories. We only write to `~/.code/`, so Codex will keep running if you switch back; copy or remove legacy files if you notice conflicts.
321
281
 
322
- # Watch mode
323
- npm run test:watch
282
+ **Does this work with ChatGPT Plus?**
283
+ > Absolutely. Use the same "Sign in with ChatGPT" flow as the original.
324
284
 
325
- # Coverage report
326
- npm run test:coverage
327
- ```
285
+ **Is my data secure?**
286
+ > Yes. Authentication stays on your machine, and we don't proxy your credentials or conversations.
328
287
 
288
+ &ensp;
329
289
  ## Contributing
330
290
 
331
- We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
291
+ We welcome contributions! Every Code maintains compatibility with upstream while adding community-requested features.
332
292
 
333
- ### Development Setup
293
+ ### Development workflow
334
294
 
335
295
  ```bash
336
- # Clone the repository
337
- git clone https://github.com/hanzoai/dev.git
338
- cd dev/packages/dev
339
-
340
- # Install dependencies
296
+ # Clone and setup
297
+ git clone https://github.com/just-every/code.git
298
+ cd code
341
299
  npm install
342
300
 
343
- # Run in development mode
344
- npm run dev
301
+ # Build (use fast build for development)
302
+ ./build-fast.sh
303
+
304
+ # Run locally
305
+ ./code-rs/target/dev-fast/code
306
+ ```
345
307
 
346
- # Build
347
- npm run build
308
+ #### Git hooks
348
309
 
349
- # Run tests
350
- npm test
310
+ This repo ships shared hooks under `.githooks/`. To enable them locally:
311
+
312
+ ```bash
313
+ git config core.hooksPath .githooks
351
314
  ```
352
315
 
316
+ The `pre-push` hook runs `./pre-release.sh` automatically when pushing to `main`.
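For reference, the branch gating relies on the standard pre-push protocol: git feeds the hook one `<local_ref> <local_sha> <remote_ref> <remote_sha>` line per pushed ref on stdin. A sketch (not the shipped hook, which lives in `.githooks/pre-push`):

```shell
#!/usr/bin/env bash
# Sketch of a pre-push hook body.
# Succeeds if any pushed ref targets refs/heads/main.
targets_main() {
  while read -r _local_ref _local_sha remote_ref _remote_sha; do
    if [ "$remote_ref" = "refs/heads/main" ]; then
      return 0
    fi
  done
  return 1
}

# In the hook itself you would gate the expensive check on the branch:
#   targets_main && exec ./pre-release.sh
```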
317
+
318
+ ### Opening a pull request
319
+
320
+ 1. Fork the repository
321
+ 2. Create a feature branch: `git checkout -b feature/amazing-feature`
322
+ 3. Make your changes
323
+ 4. Run tests: `cargo test`
324
+ 5. Build successfully: `./build-fast.sh`
325
+ 6. Submit a pull request
326
+
327
+
328
+ &ensp;
329
+ ## Legal & Use
330
+
331
+ ### License & attribution
332
+ - This project is a community fork of `openai/codex` under **Apache-2.0**. We preserve upstream LICENSE and NOTICE files.
333
+ - **Every Code** (Code) is **not** affiliated with, sponsored by, or endorsed by OpenAI.
334
+
335
+ ### Your responsibilities
336
+ Using OpenAI, Anthropic or Google services through Every Code means you agree to **their Terms and policies**. In particular:
337
+ - **Don't** programmatically scrape/extract content outside intended flows.
338
+ - **Don't** bypass or interfere with rate limits, quotas, or safety mitigations.
339
+ - Use your **own** account; don't share or rotate accounts to evade limits.
340
+ - If you configure other model providers, you're responsible for their terms.
341
+
342
+ ### Privacy
343
+ - Your auth file lives at `~/.code/auth.json`
344
+ - Inputs/outputs you send to AI providers are handled under their Terms and Privacy Policy; consult those documents (and any org-level data-sharing settings).
345
+
346
+ ### Subject to change
347
+ AI providers can change eligibility, limits, models, or authentication flows. Every Code supports **both** ChatGPT sign-in and API-key modes so you can pick what fits (local/hobby vs CI/automation).
348
+
349
+ &ensp;
353
350
  ## License
354
351
 
355
- MIT © [Hanzo AI](https://hanzo.ai)
352
+ Apache 2.0 - See [LICENSE](LICENSE) file for details.
356
353
 
357
- ## Acknowledgments
354
+ Every Code is a community fork of the original Codex CLI. We maintain compatibility while adding enhanced features requested by the developer community.
358
355
 
359
- Built by [Hanzo AI](https://hanzo.ai) - Advancing AI infrastructure for developers worldwide.
356
+ &ensp;
357
+ ---
358
+ **Need help?** Open an issue on [GitHub](https://github.com/just-every/code/issues) or check our documentation.