lite-cc 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
lite_cc-0.1.0/LICENSE ADDED
@@ -0,0 +1,21 @@
1
+ MIT License
2
+
3
+ Copyright (c) 2026 Keyang Ru
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
lite_cc-0.1.0/PKG-INFO ADDED
@@ -0,0 +1,309 @@
1
+ Metadata-Version: 2.4
2
+ Name: lite-cc
3
+ Version: 0.1.0
4
+ Summary: Lightweight multi-model coding agent CLI
5
+ Author-email: Keyang Ru <rukeyang@gmail.com>
6
+ License-Expression: MIT
7
+ Project-URL: Homepage, https://github.com/key4ng/lite-cc
8
+ Project-URL: Repository, https://github.com/key4ng/lite-cc
9
+ Keywords: llm,agent,cli,coding-agent,litellm,multi-model
10
+ Classifier: Development Status :: 3 - Alpha
11
+ Classifier: Intended Audience :: Developers
12
+ Classifier: Programming Language :: Python :: 3
13
+ Classifier: Programming Language :: Python :: 3.11
14
+ Classifier: Programming Language :: Python :: 3.12
15
+ Classifier: Programming Language :: Python :: 3.13
16
+ Classifier: Topic :: Software Development :: Libraries
17
+ Requires-Python: >=3.11
18
+ Description-Content-Type: text/markdown
19
+ License-File: LICENSE
20
+ Requires-Dist: click>=8.0
21
+ Requires-Dist: litellm>=1.40
22
+ Requires-Dist: oci>=2.0
23
+ Requires-Dist: pyyaml>=6.0
24
+ Dynamic: license-file
25
+
26
+ <p align="center">
27
+ <h1 align="center">lite-cc</h1>
28
+ <p align="center">
29
+ A minimal, multi-model coding agent runtime for the terminal.
30
+ </p>
31
+ </p>
32
+
33
+ <p align="center">
34
+ <a href="#quick-start">Quick Start</a> &middot;
35
+ <a href="#how-it-works">How It Works</a> &middot;
36
+ <a href="#plugins--skills">Plugins & Skills</a> &middot;
37
+ <a href="#configuration">Configuration</a> &middot;
38
+ <a href="#safety">Safety</a>
39
+ </p>
40
+
41
+ ---
42
+
43
+ <p align="center">
44
+ <img src="assets/demo.gif" alt="litecc demo" width="720">
45
+ </p>
46
+
47
+ **lite-cc** (`litecc`) is a lightweight, provider-agnostic coding agent for the terminal. It connects to any LLM via [LiteLLM](https://docs.litellm.ai/docs/providers), runs an autonomous tool loop, and extends its capabilities through plugins and skills.
48
+
49
+ ## Key Features
50
+
51
+ - **Multi-model** — OpenAI, Anthropic, OCI, Gemini, Groq, Ollama, and [any provider LiteLLM supports](https://docs.litellm.ai/docs/providers).
52
+ - **Autonomous tool loop** — Reasons, calls tools, observes results, and iterates until the task is done.
53
+ - **Plugin & skill system** — Load plugins to inject domain knowledge. Skills are loaded on demand to keep context lean.
54
+ - **Safe by default** — Dangerous commands are blocked. File access is scoped to the project directory. Fully autonomous, no prompts.
55
+ - **Structured output** — Colored, timestamped progress logs show exactly what the agent is doing.
56
+
57
+ ## Quick Start
58
+
59
+ ```bash
60
+ # Install
61
+ git clone https://github.com/key4ng/lite-cc.git
62
+ cd lite-cc
63
+ pip install -e .
64
+
65
+ # Run
66
+ litecc run "list all Python files and describe what each one does"
67
+
68
+ # Run with the example plugin
69
+ litecc run "analyze cc/agent.py" --plugin-dir examples/code-analyst
70
+ ```
71
+
72
+ Output:
73
+
74
+ ```
75
+ 14:32:05 [litecc] Using model: gpt-5.2
76
+ 14:32:05 [litecc] Starting task...
77
+ 14:32:06 [tool] list_files: **/*.py
78
+ 14:32:07 [tool] read_file: cc/agent.py
79
+ 14:32:08 [gpt-5.2] I'll describe each file...
80
+ 14:32:09 [litecc] Here are the Python files...
81
+ ```
82
+
83
+ ## How It Works
84
+
85
+ ```
86
+ litecc run "fix the failing tests"
+                  ↓
+ Load config, plugins, skills
+                  ↓
+ Build system prompt + tool definitions
+                  ↓
+ ┌─ Agent Loop ──────────────────────────┐
+ │ 1. Send messages + tools to LLM       │
+ │ 2. LLM returns tool calls             │
+ │    → execute safely → append results  │
+ │ 3. LLM returns text → done            │
+ └───────────────────────────────────────┘
101
+ ```
102
+
103
+ The loop runs until the model produces a final answer or hits the max iteration limit.
104
+
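+ In code, the loop boils down to a handful of lines. The sketch below is illustrative only: `run_agent`, `execute_tool`, and the single demo tool are assumptions made for the example, not lite-cc's actual internals, but it shows the shape of the LiteLLM-based loop.
+
+ ```python
+ # Sketch of the core agent loop on top of LiteLLM (illustrative names, not lite-cc internals).
+ import json
+ import subprocess
+
+ import litellm
+
+ # One demo tool definition in the OpenAI-style function-calling format LiteLLM expects.
+ TOOLS = [{
+     "type": "function",
+     "function": {
+         "name": "bash",
+         "description": "Run a shell command and return its output.",
+         "parameters": {
+             "type": "object",
+             "properties": {"command": {"type": "string"}},
+             "required": ["command"],
+         },
+     },
+ }]
+
+ def execute_tool(name: str, args: dict) -> str:
+     if name == "bash":
+         proc = subprocess.run(args["command"], shell=True, capture_output=True,
+                               text=True, timeout=120)
+         return proc.stdout + proc.stderr
+     return f"unknown tool: {name}"
+
+ def run_agent(prompt: str, model: str, max_iterations: int = 50) -> str:
+     messages = [{"role": "user", "content": prompt}]
+     for _ in range(max_iterations):
+         response = litellm.completion(model=model, messages=messages, tools=TOOLS)
+         message = response.choices[0].message
+         if not message.tool_calls:       # plain text answer: the task is done
+             return message.content
+         messages.append(message)         # keep the assistant turn with its tool calls
+         for call in message.tool_calls:
+             result = execute_tool(call.function.name, json.loads(call.function.arguments))
+             messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
+     return "Stopped: reached the max iteration limit."
+ ```
+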
105
+ ## Usage
106
+
107
+ ```bash
108
+ # Basic task
109
+ litecc run "fix the failing tests"
110
+
111
+ # Choose a model
112
+ litecc run "refactor this module" --model anthropic/claude-3-sonnet-20240229
113
+
114
+ # Load plugins
115
+ litecc run "triage the latest ticket" --plugin-dir ~/my-plugin
116
+ litecc run "check health" --plugin-dir ~/plugin-a --plugin-dir ~/plugin-b
117
+
118
+ # Different project directory
119
+ litecc run "explain the architecture" --project-dir ~/other-repo
120
+
121
+ # Verbose output (show tool results, full reasoning)
122
+ litecc run "explore the codebase" -v
123
+
124
+ # Limit iterations
125
+ litecc run "explore the codebase" --max-iterations 20
126
+ ```
127
+
128
+ <details>
129
+ <summary>CLI Reference</summary>
130
+
131
+ ```
132
+ Usage: litecc run [OPTIONS] PROMPT
133
+
134
+ Options:
135
+ --plugin-dir TEXT Plugin directory (repeatable)
136
+ --model TEXT LiteLLM model string
137
+ --max-iterations INT Max tool loop iterations (default: 50)
138
+ --project-dir TEXT Working directory (default: cwd)
139
+ -v, --verbose Show detailed tool output
140
+ --help Show this message and exit
141
+ ```
142
+
143
+ </details>
144
+
145
+ ## Plugins & Skills
146
+
147
+ lite-cc uses a plugin format compatible with Claude Code. Plugins provide domain knowledge and reusable workflows.
148
+
149
+ ### Plugin Structure
150
+
151
+ ```
152
+ my-plugin/
153
+ .claude-plugin/
154
+ plugin.json # Manifest (required)
155
+ CLAUDE.md # Instructions injected into system prompt
156
+ pipeline/
157
+ deploy-check/
158
+ SKILL.md # Skill with YAML frontmatter
159
+ commands/
160
+ triage.md # Command-style skill
161
+ ```
162
+
163
+ ### Skill Format
164
+
165
+ ````markdown
+ ---
+ name: deploy-check
+ description: Verify a deployment is healthy by checking pod status and logs.
+ ---
+
+ # Deploy Check
+
+ ## Steps
+
+ 1. Check pod status:
+
+    ```bash
+    kubectl get pods -n <NAMESPACE> -o wide
+    ```
+
+ 2. Review recent events and summarize findings.
+ ````
182
+
183
+ ### How It Works
184
+
185
+ 1. On startup, `--plugin-dir` directories are scanned for `.claude-plugin/plugin.json`
186
+ 2. `CLAUDE.md` is injected into the system prompt
187
+ 3. Skills are indexed by name and description — the model sees the list but not the full content
188
+ 4. When needed, the model calls `use_skill("deploy-check")` to load the full instructions
189
+ 5. The skill content is injected into the conversation and the model follows the steps
190
+
191
+ This keeps context lean — only the skills actually needed are loaded.
192
+
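+ A rough sketch of that discovery and lazy loading is shown below. The class and helper names (`Skill`, `load_plugin`, `parse_frontmatter`, `use_skill`) are assumptions made for the example, not lite-cc's real modules.
+
+ ```python
+ # Illustrative plugin discovery and on-demand skill loading (assumed names).
+ import json
+ from dataclasses import dataclass
+ from pathlib import Path
+
+ @dataclass
+ class Skill:
+     name: str
+     description: str
+     path: Path
+
+ def parse_frontmatter(text: str) -> dict:
+     # Tiny YAML-frontmatter reader: "key: value" pairs between the --- markers.
+     block = text.split("---")[1]
+     return {k.strip(): v.strip() for k, v in
+             (line.split(":", 1) for line in block.strip().splitlines() if ":" in line)}
+
+ def load_plugin(plugin_dir: Path) -> tuple[str, list[Skill]]:
+     json.loads((plugin_dir / ".claude-plugin" / "plugin.json").read_text())  # manifest is required
+     claude_md = plugin_dir / "CLAUDE.md"
+     instructions = claude_md.read_text() if claude_md.exists() else ""
+     skills = []
+     for skill_file in plugin_dir.rglob("SKILL.md"):  # command-style skills would be indexed similarly
+         fm = parse_frontmatter(skill_file.read_text())
+         skills.append(Skill(fm["name"], fm["description"], skill_file))
+     return instructions, skills
+
+ def skill_index(skills: list[Skill]) -> str:
+     # Only name + description go into the system prompt; skill bodies stay on disk.
+     return "\n".join(f"- {s.name}: {s.description}" for s in skills)
+
+ def use_skill(skills: list[Skill], name: str) -> str:
+     # What a use_skill("deploy-check") call resolves to: the full SKILL.md text.
+     for skill in skills:
+         if skill.name == name:
+             return skill.path.read_text()
+     return f"Unknown skill: {name}"
+ ```
+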
193
+ ## Configuration
194
+
195
+ Config is resolved in order of precedence (highest wins):
196
+
197
+ | Priority | Source | Example |
198
+ |----------|--------|---------|
199
+ | 1 | CLI flags | `--model openai/gpt-4o` |
200
+ | 2 | Environment variables | `CC_MODEL=openai/gpt-4o` |
201
+ | 3 | Config file | `~/.cc/config.yaml` |
202
+ | 4 | Defaults | `oci/openai.gpt-5.2` |
203
+
204
+ ### Environment Variables
205
+
206
+ | Variable | Default | Description |
207
+ |----------|---------|-------------|
208
+ | `CC_MODEL` | `oci/openai.gpt-5.2` | LiteLLM model identifier |
209
+ | `CC_OCI_REGION` | `us-chicago-1` | OCI region for inference |
210
+ | `CC_OCI_COMPARTMENT` | — | OCI compartment OCID (required for `oci/` models) |
211
+ | `CC_OCI_CONFIG_PROFILE` | `DEFAULT` | OCI config profile |
212
+ | `CC_MAX_ITERATIONS` | `50` | Max agent loop iterations |
213
+ | `CC_TIMEOUT` | `120` | Per-command timeout (seconds) |
214
+
215
+ <details>
216
+ <summary>YAML config example</summary>
217
+
218
+ Create `~/.cc/config.yaml`:
219
+
220
+ ```yaml
221
+ model: oci/openai.gpt-5.2
222
+ oci_region: us-chicago-1
223
+ oci_compartment: ocid1.tenancy.oc1..aaaaaaaexample
224
+ max_iterations: 50
225
+ timeout: 120
226
+ ```
227
+
228
+ </details>
229
+
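+ As a rough illustration, the layered lookup could be implemented like this. It is a sketch with assumed names; `resolve` and `DEFAULTS` are not lite-cc's real API, and real code would also cast numeric values.
+
+ ```python
+ # Sketch of CLI > env > YAML > defaults resolution (assumed helper names).
+ import os
+ from pathlib import Path
+
+ import yaml
+
+ DEFAULTS = {"model": "oci/openai.gpt-5.2", "max_iterations": 50, "timeout": 120}
+
+ def resolve(key: str, cli_value=None):
+     if cli_value is not None:                        # 1. CLI flag wins
+         return cli_value
+     env_value = os.environ.get(f"CC_{key.upper()}")  # 2. then a CC_* environment variable
+     if env_value is not None:
+         return env_value
+     config_path = Path.home() / ".cc" / "config.yaml"
+     if config_path.exists():                         # 3. then ~/.cc/config.yaml
+         file_cfg = yaml.safe_load(config_path.read_text()) or {}
+         if key in file_cfg:
+             return file_cfg[key]
+     return DEFAULTS.get(key)                         # 4. finally the built-in default
+
+ # resolve("model", cli_value="openai/gpt-4o") -> "openai/gpt-4o"
+ ```
+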
230
+ ### Supported Models
231
+
232
+ Any [LiteLLM provider](https://docs.litellm.ai/docs/providers) works out of the box:
233
+
234
+ | Provider | Model Example | Auth |
235
+ |----------|--------------|------|
236
+ | OpenAI | `openai/gpt-4o` | `OPENAI_API_KEY` |
237
+ | Anthropic | `anthropic/claude-3-sonnet-20240229` | `ANTHROPIC_API_KEY` |
238
+ | OCI GenAI | `oci/openai.gpt-5.2` | `~/.oci/config` session token |
239
+ | Gemini | `gemini/gemini-pro` | `GEMINI_API_KEY` |
240
+ | Groq | `groq/llama3-70b-8192` | `GROQ_API_KEY` |
241
+ | Ollama | `ollama/llama3` | Local server |
242
+
243
+ ## Built-in Tools
244
+
245
+ | Tool | Description |
246
+ |------|-------------|
247
+ | `bash` | Shell execution with safety checks, output truncation, and timeout |
248
+ | `read_file` | Read files with optional line range (`offset`, `limit`) |
249
+ | `write_file` | Create or overwrite files (auto-creates parent dirs) |
250
+ | `list_files` | Glob pattern search (e.g., `**/*.py`) |
251
+ | `grep` | Recursive regex search across files |
252
+ | `use_skill` | Load a skill's instructions into the conversation |
253
+
254
+ All file tools are scoped to the project directory. Bash commands are checked against a deny list before execution.
255
+
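+ For instance, the `bash` tool's guardrails (deny check, timeout, and output truncation) can be approximated as below. This is a sketch with assumed names; the limits mirror the defaults described under Safety, and `is_blocked` is a placeholder for the deny-list check shown there.
+
+ ```python
+ # Sketch of a guarded shell tool: deny check, timeout, and output truncation.
+ import subprocess
+
+ MAX_LINES, MAX_CHARS, TIMEOUT = 2000, 100_000, 120
+
+ def is_blocked(command: str) -> bool:
+     return False  # placeholder; see the deny-list sketch in the Safety section
+
+ def run_bash(command: str) -> str:
+     if is_blocked(command):
+         return f"Blocked by safety policy: {command}"
+     try:
+         proc = subprocess.run(command, shell=True, capture_output=True,
+                               text=True, timeout=TIMEOUT)
+     except subprocess.TimeoutExpired:
+         return f"Timed out after {TIMEOUT}s"
+     output = (proc.stdout + proc.stderr)[:MAX_CHARS]   # size cap, roughly 100KB of text
+     lines = output.splitlines()
+     if len(lines) > MAX_LINES:                         # line cap
+         lines = lines[:MAX_LINES] + ["... [output truncated]"]
+     return "\n".join(lines)
+ ```
+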
256
+ ## Safety
257
+
258
+ lite-cc enforces safety guardrails at the tool execution layer — no user prompts, just deny and report.
259
+
260
+ ### Blocked Commands
261
+
262
+ | Category | Patterns |
263
+ |----------|----------|
264
+ | File deletion | `rm`, `rmdir`, `unlink` |
265
+ | Privilege escalation | `sudo`, `su`, `doas` |
266
+ | System control | `shutdown`, `reboot`, `halt` |
267
+ | Disk operations | `mkfs`, `fdisk`, `dd` |
268
+ | Process control | `kill`, `killall`, `pkill` |
269
+ | Destructive git | `git push --force`, `git clean` |
270
+ | Remote code exec | `curl ... \| sh`, `wget ... \| bash` |
271
+
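+ A deny list like this is typically enforced with pattern matching before the command ever runs. The sketch below uses a small illustrative subset of patterns, not the project's exact list.
+
+ ```python
+ # Illustrative deny-list check; the patterns are a small subset, not the full list.
+ import re
+
+ DENY_PATTERNS = [
+     r"\brm\b", r"\brmdir\b", r"\bsudo\b", r"\bshutdown\b", r"\bmkfs\b",
+     r"\bkill(all)?\b", r"\bgit\s+push\s+--force\b",
+     r"\b(curl|wget)\b.*\|\s*(sh|bash)\b",
+ ]
+
+ def is_blocked(command: str) -> bool:
+     return any(re.search(pattern, command) for pattern in DENY_PATTERNS)
+
+ assert is_blocked("sudo rm -rf /")
+ assert not is_blocked("ls -la")
+ ```
+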
272
+ ### Path Restrictions
273
+
274
+ - All file operations resolve inside the project directory
275
+ - Path traversal (`../../etc/passwd`) is detected and blocked
276
+ - Sensitive paths blocked: `~/.ssh`, `~/.aws`, `/etc`, `/private` (see the sketch below)
277
+
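+ A minimal sketch of these path checks, assuming a hypothetical `check_path` helper rather than lite-cc's real implementation:
+
+ ```python
+ # Sketch of path scoping: resolve the requested path and require it to stay
+ # inside the project directory; refuse a few sensitive locations outright.
+ from pathlib import Path
+
+ SENSITIVE = [Path.home() / ".ssh", Path.home() / ".aws", Path("/etc"), Path("/private")]
+
+ def check_path(project_dir: str, requested: str) -> Path:
+     root = Path(project_dir).resolve()
+     target = (root / requested).resolve()      # collapses any ../.. traversal
+     if not target.is_relative_to(root):
+         raise PermissionError(f"Path escapes project directory: {requested}")
+     for blocked in SENSITIVE:
+         if target.is_relative_to(blocked):
+             raise PermissionError(f"Sensitive path blocked: {requested}")
+     return target
+
+ # check_path(".", "../../etc/passwd") raises PermissionError
+ ```
+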
278
+ ### Output Limits
279
+
280
+ - **2000 lines** or **100KB** per command (whichever limit is hit first)
281
+ - **120s** timeout (configurable via `CC_TIMEOUT`)
282
+
283
+ > The safety layer is a guardrail, not a security boundary. It prevents common destructive operations in an autonomous loop.
284
+
285
+ ## Architecture
286
+
287
+ ```
288
+ cc/
289
+ cli.py # Click CLI entry point
290
+ config.py # Layered config (CLI > env > yaml > defaults)
291
+ agent.py # Core tool loop with progress logging
292
+ llm.py # LiteLLM wrapper with OCI auth
293
+ safety.py # Command deny list + path checks
294
+ output.py # Colored terminal output
295
+ tools/ # Built-in tool implementations
296
+ plugins/ # Plugin discovery + skill indexing
297
+ ```
298
+
299
+ ## Development
300
+
301
+ ```bash
302
+ pip install -e . # Install
303
+ pytest -v # Run all 35 tests
304
+ pytest -k "safety" -v # Run tests by pattern
305
+ ```
306
+
307
+ ## License
308
+
309
+ MIT
@@ -0,0 +1 @@
1
+ """lite-cc — lightweight multi-model coding agent CLI."""
@@ -0,0 +1,4 @@
1
+ from cc.cli import main
2
+
3
+ if __name__ == "__main__":
4
+ main()