ollaagent-0.1.0.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,10 @@
+ # Python-generated files
+ __pycache__/
+ *.py[oc]
+ build/
+ dist/
+ wheels/
+ *.egg-info
+
+ # Virtual environments
+ .venv
@@ -0,0 +1 @@
+ 3.11
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 github010000
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1,117 @@
+ Metadata-Version: 2.4
+ Name: ollaagent
+ Version: 0.1.0
+ Summary: Local LLM agent powered by ollama — memory, plan mode, subagents
+ License-File: LICENSE
+ Requires-Python: >=3.11
+ Requires-Dist: ollama==0.6.1
+ Requires-Dist: pydantic==2.12.5
+ Requires-Dist: python-dotenv==1.2.2
+ Requires-Dist: pyyaml>=6.0
+ Requires-Dist: rich==14.3.3
+ Description-Content-Type: text/markdown
+
+ # ollaAgent
+
+ A local LLM agent powered by [ollama](https://ollama.com) — with persistent memory, plan mode, and parallel subagents.
+
+ [![Python](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
+
+ ## Features
+
+ | Feature | Description |
+ |---------|-------------|
+ | **Agent Loop** | Iterative tool-calling loop with `run_python`, `run_bash`, `read_file`, `write_file`, `list_files` |
+ | **Persistent Memory** | JSON-backed memory with `/memory add/list/search/clear` commands |
+ | **Session Saving** | Conversation history auto-saved to `.agents/sessions/` on exit |
+ | **Plan Mode** | `/plan <task>` — generates a structured step-by-step plan without executing tools |
+ | **Subagents** | `/subagent` — runs multiple ollama instances in parallel via `multiprocessing.Pool` |
+ | **Permission Control** | Configurable allow/deny patterns for bash commands |
+ | **Cloudflare Access** | Supports CF-Access headers for tunneled ollama endpoints |
+
+ ## Requirements
+
+ - Python 3.11+
+ - [ollama](https://ollama.com) running locally (or via Cloudflare Access)
+ - [uv](https://docs.astral.sh/uv/)
+
+ ## Installation
+
+ ```bash
+ git clone https://github.com/github010000/ollaAgent
+ cd ollaAgent
+ uv sync
+ ```
+
+ ## Usage
+
+ ```bash
+ # Start the agent
+ uv run ollaagent
+
+ # With a specific model
+ uv run ollaagent --model qwen2.5-coder:7b
+
+ # With a remote ollama host
+ uv run ollaagent --host https://your-ollama.example.com
+ ```
+
+ ## Built-in Commands
+
+ | Command | Description |
+ |---------|-------------|
+ | `/plan <task>` | Generate a step-by-step plan (no execution) |
+ | `/subagent` | Run tasks in parallel across multiple ollama instances |
+ | `/memory add <text>` | Add an entry to persistent memory |
+ | `/memory list` | List all memory entries |
+ | `/memory search <query>` | Search memory by keyword |
+ | `/memory clear` | Clear all memory entries |
+ | `/exit` | Exit the agent |
+
+ ## Subagent Usage
+
+ Single model across all tasks:
+ ```
+ /subagent
+ > --model llama3:8b task one | task two | task three
+ ```
+
+ Per-task model assignment:
+ ```
+ > @qwen2.5-coder:7b write a sorting algorithm | @llama3:8b explain it
+ ```
+
+ ## Configuration
+
+ Create a `.env` file in the project root:
+
+ ```env
+ OLLAMA_HOST=http://localhost:11434
+ CF_CLIENT_ID=
+ CF_CLIENT_SECRET=
+ ```
+
+ ## Project Structure
+
+ ```
+ ollaAgent/
+ ├── agent.py          # Main agent loop & CLI entry point
+ ├── memory.py         # Persistent memory (JSON)
+ ├── plan_mode.py      # Plan-only mode (tools=[])
+ ├── subagent.py       # Parallel subagents via multiprocessing
+ ├── tool_bash.py      # Bash tool with permission control
+ ├── permissions.py    # Allow/deny pattern matching
+ ├── config_loader.py  # YAML config & system prompt builder
+ └── ollama_client.py  # Ollama client factory
+ ```
+
+ ## Running Tests
+
+ ```bash
+ uv run pytest
+ ```
+
+ ## License
+
+ MIT — see [LICENSE](LICENSE) for details.
@@ -0,0 +1,104 @@
+ # ollaAgent
+
+ A local LLM agent powered by [ollama](https://ollama.com) — with persistent memory, plan mode, and parallel subagents.
+
+ [![Python](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
+
+ ## Features
+
+ | Feature | Description |
+ |---------|-------------|
+ | **Agent Loop** | Iterative tool-calling loop with `run_python`, `run_bash`, `read_file`, `write_file`, `list_files` |
+ | **Persistent Memory** | JSON-backed memory with `/memory add/list/search/clear` commands |
+ | **Session Saving** | Conversation history auto-saved to `.agents/sessions/` on exit |
+ | **Plan Mode** | `/plan <task>` — generates a structured step-by-step plan without executing tools |
+ | **Subagents** | `/subagent` — runs multiple ollama instances in parallel via `multiprocessing.Pool` |
+ | **Permission Control** | Configurable allow/deny patterns for bash commands |
+ | **Cloudflare Access** | Supports CF-Access headers for tunneled ollama endpoints |
+
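The session-saving feature listed above can be sketched in a few lines. This is an illustrative sketch only: the `save_session` name, the timestamped file naming, and the default directory are assumptions, not ollaagent's actual code.

```python
import json
import time
from pathlib import Path

def save_session(
    messages: list[dict],
    sessions_dir: Path = Path(".agents/sessions"),  # assumed default location
) -> Path:
    """Dump a chat history to a timestamped JSON file and return its path.

    Hypothetical sketch of the auto-save-on-exit behavior; the real
    module layout and file naming in ollaagent may differ.
    """
    sessions_dir.mkdir(parents=True, exist_ok=True)
    path = sessions_dir / f"session-{int(time.time())}.json"
    path.write_text(json.dumps(messages, ensure_ascii=False, indent=2))
    return path
```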
+ ## Requirements
+
+ - Python 3.11+
+ - [ollama](https://ollama.com) running locally (or via Cloudflare Access)
+ - [uv](https://docs.astral.sh/uv/)
+
+ ## Installation
+
+ ```bash
+ git clone https://github.com/github010000/ollaAgent
+ cd ollaAgent
+ uv sync
+ ```
+
+ ## Usage
+
+ ```bash
+ # Start the agent
+ uv run ollaagent
+
+ # With a specific model
+ uv run ollaagent --model qwen2.5-coder:7b
+
+ # With a remote ollama host
+ uv run ollaagent --host https://your-ollama.example.com
+ ```
+
+ ## Built-in Commands
+
+ | Command | Description |
+ |---------|-------------|
+ | `/plan <task>` | Generate a step-by-step plan (no execution) |
+ | `/subagent` | Run tasks in parallel across multiple ollama instances |
+ | `/memory add <text>` | Add an entry to persistent memory |
+ | `/memory list` | List all memory entries |
+ | `/memory search <query>` | Search memory by keyword |
+ | `/memory clear` | Clear all memory entries |
+ | `/exit` | Exit the agent |
+
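A JSON-backed store with the add/search/clear semantics of the `/memory` commands above could look roughly like this. The `Memory` class here is a hypothetical sketch, not the package's actual `memory.py`.

```python
import json
from pathlib import Path

class Memory:
    """Minimal sketch of a JSON-backed memory store (illustrative only)."""

    def __init__(self, path: Path = Path(".agents/memory.json")):
        self.path = path
        # Load prior entries if the file already exists.
        self.entries: list[str] = (
            json.loads(path.read_text()) if path.exists() else []
        )

    def _save(self) -> None:
        self.path.parent.mkdir(parents=True, exist_ok=True)
        self.path.write_text(json.dumps(self.entries, ensure_ascii=False))

    def add(self, text: str) -> None:          # /memory add <text>
        self.entries.append(text)
        self._save()

    def search(self, query: str) -> list[str]:  # /memory search <query>
        q = query.lower()
        return [e for e in self.entries if q in e.lower()]

    def clear(self) -> None:                    # /memory clear
        self.entries = []
        self._save()
```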
+ ## Subagent Usage
+
+ Single model across all tasks:
+ ```
+ /subagent
+ > --model llama3:8b task one | task two | task three
+ ```
+
+ Per-task model assignment:
+ ```
+ > @qwen2.5-coder:7b write a sorting algorithm | @llama3:8b explain it
+ ```
+
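One plausible way to parse the per-task `@model` syntax shown above. The function name and the fallback model are assumptions; the package's actual parser is not shown in this diff.

```python
def parse_subagent_line(line: str, default_model: str = "llama3:8b"):
    """Split 'task | task | ...' input into (model, task) pairs.

    A leading '@model' token overrides the default model for that task.
    Illustrative sketch of the syntax only, not ollaagent's parser.
    """
    jobs: list[tuple[str, str]] = []
    for chunk in line.split("|"):
        chunk = chunk.strip()
        if chunk.startswith("@"):
            model, _, task = chunk.partition(" ")
            jobs.append((model[1:], task.strip()))
        else:
            jobs.append((default_model, chunk))
    return jobs
```

Each `(model, task)` pair would then be handed to a worker process (the README says this fan-out uses `multiprocessing.Pool`).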
+ ## Configuration
+
+ Create a `.env` file in the project root:
+
+ ```env
+ OLLAMA_HOST=http://localhost:11434
+ CF_CLIENT_ID=
+ CF_CLIENT_SECRET=
+ ```
+
+ ## Project Structure
+
+ ```
+ ollaAgent/
+ ├── agent.py          # Main agent loop & CLI entry point
+ ├── memory.py         # Persistent memory (JSON)
+ ├── plan_mode.py      # Plan-only mode (tools=[])
+ ├── subagent.py       # Parallel subagents via multiprocessing
+ ├── tool_bash.py      # Bash tool with permission control
+ ├── permissions.py    # Allow/deny pattern matching
+ ├── config_loader.py  # YAML config & system prompt builder
+ └── ollama_client.py  # Ollama client factory
+ ```
+
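The allow/deny pattern matching in `permissions.py` could plausibly be glob-based. A minimal sketch using the standard-library `fnmatch` module, assuming deny patterns take precedence over allow patterns (the package's real semantics may differ):

```python
from fnmatch import fnmatch

def is_allowed(command: str, allow: list[str], deny: list[str]) -> bool:
    """Check a bash command against allow/deny glob patterns.

    Sketch of the permission-control idea only; here a deny match
    always wins, and anything matching no allow pattern is refused.
    """
    if any(fnmatch(command, pat) for pat in deny):
        return False
    return any(fnmatch(command, pat) for pat in allow)
```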
+ ## Running Tests
+
+ ```bash
+ uv run pytest
+ ```
+
+ ## License
+
+ MIT — see [LICENSE](LICENSE) for details.
@@ -0,0 +1,104 @@
+ # ollaAgent
+
+ A local LLM agent based on [ollama](https://ollama.com) — supports persistent memory, plan mode, and parallel subagents.
+
+ [![Python](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
+
+ ## Features
+
+ | Feature | Description |
+ |---------|-------------|
+ | **Agent Loop** | Iterative execution loop using the `run_python`, `run_bash`, `read_file`, `write_file`, `list_files` tools |
+ | **Persistent Memory** | JSON-file-backed memory, managed with the `/memory add/list/search/clear` commands |
+ | **Session Saving** | Conversation history auto-saved to `.agents/sessions/` on exit |
+ | **Plan Mode** | `/plan <task>` — generates only a step-by-step plan, without executing tools |
+ | **Subagents** | `/subagent` — runs multiple ollama instances in parallel via `multiprocessing.Pool` |
+ | **Permission Control** | Configurable allow/deny patterns for bash commands |
+ | **Cloudflare Access** | CF-Access header support for tunneled ollama endpoints |
+
+ ## Requirements
+
+ - Python 3.11 or later
+ - [ollama](https://ollama.com) running locally (or via Cloudflare Access)
+ - [uv](https://docs.astral.sh/uv/)
+
+ ## Installation
+
+ ```bash
+ git clone https://github.com/github010000/ollaAgent
+ cd ollaAgent
+ uv sync
+ ```
+
+ ## Usage
+
+ ```bash
+ # Start the agent
+ uv run ollaagent
+
+ # Specify a model
+ uv run ollaagent --model qwen2.5-coder:7b
+
+ # Specify a remote ollama host
+ uv run ollaagent --host https://your-ollama.example.com
+ ```
+
+ ## Built-in Commands
+
+ | Command | Description |
+ |---------|-------------|
+ | `/plan <task>` | Generate a step-by-step plan (no tool execution) |
+ | `/subagent` | Run multiple tasks in parallel |
+ | `/memory add <text>` | Add an entry to persistent memory |
+ | `/memory list` | List all memory entries |
+ | `/memory search <keyword>` | Search memory by keyword |
+ | `/memory clear` | Clear all memory entries |
+ | `/exit` | Exit the agent |
+
+ ## Subagent Usage
+
+ Run multiple tasks in parallel with a single model:
+ ```
+ /subagent
+ > --model llama3:8b task1 | task2 | task3
+ ```
+
+ Per-task model assignment:
+ ```
+ > @qwen2.5-coder:7b write a sorting algorithm | @llama3:8b explain the code
+ ```
+
+ ## Configuration
+
+ Create a `.env` file in the project root:
+
+ ```env
+ OLLAMA_HOST=http://localhost:11434
+ CF_CLIENT_ID=
+ CF_CLIENT_SECRET=
+ ```
+
+ ## Project Structure
+
+ ```
+ ollaAgent/
+ ├── agent.py          # Main agent loop & CLI entry point
+ ├── memory.py         # Persistent memory (JSON)
+ ├── plan_mode.py      # Plan-only mode (tools=[])
+ ├── subagent.py       # Parallel subagents via multiprocessing
+ ├── tool_bash.py      # bash tool with permission control
+ ├── permissions.py    # Allow/deny pattern matching
+ ├── config_loader.py  # YAML config & system prompt builder
+ └── ollama_client.py  # ollama client factory
+ ```
+
+ ## Running Tests
+
+ ```bash
+ uv run pytest
+ ```
+
+ ## License
+
+ MIT — see [LICENSE](LICENSE) for details.
@@ -0,0 +1,6 @@
+ def main():
+     print("Hello from ollaagent!")
+
+
+ if __name__ == "__main__":
+     main()