@mndrk/agx 1.0.2 → 1.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,11 @@
+ {
+   "permissions": {
+     "allow": [
+       "Bash(node --check:*)",
+       "Bash(node index.js:*)",
+       "Bash(python:*)",
+       "Bash(npm publish:*)",
+       "Bash(npm uninstall:*)"
+     ]
+   }
+ }
package/README.md CHANGED
@@ -1,83 +1,141 @@
  # agx

- Unified AI Agent Wrapper for Gemini, Claude, and Ollama.
+ Unified AI Agent CLI with persistent memory. Wraps Claude, Gemini, and Ollama with automatic state management via [mem](https://github.com/ramarlina/memx).

- ## Installation
-
- From the `agx` directory:
  ```bash
- npm link
+ npm install -g @mndrk/agx
  ```
- Now you can use `agx` globally.

- ## Usage
+ ## Quick Start

  ```bash
- agx <provider> [options] --prompt "<prompt>"
- ```
+ # Simple prompt
+ agx claude -p "explain this code"

- ### Providers
+ # Use default provider
+ agx -p "what does this function do?"

- | Provider | Aliases | Backend |
- |----------|---------|---------|
- | `gemini` | `gem`, `g` | Google Gemini CLI |
- | `claude` | `cl`, `c` | Anthropic Claude CLI |
- | `ollama` | `ol`, `o` | Local Ollama via Claude interface |
+ # With persistent memory (auto-detected)
+ agx claude -p "continue working on the todo app"

- ### Options
+ # Auto-create task (for agents, non-interactive)
+ agx claude --auto-task -p "Build a todo app with React"
+ ```

- | Option | Short | Description |
- |--------|-------|-------------|
- | `--prompt <text>` | `-p` | The prompt to send |
- | `--model <name>` | `-m` | Model name to use |
- | `--yolo` | `-y` | Skip permission prompts |
- | `--print` | | Non-interactive mode (output and exit) |
- | `--interactive` | `-i` | Force interactive mode |
- | `--sandbox` | `-s` | Enable sandbox (gemini only) |
- | `--debug` | `-d` | Enable debug output |
- | `--mcp <config>` | | MCP config file (claude/ollama only) |
+ ## Memory Integration

- ### Raw Passthrough
+ agx integrates with [mem](https://github.com/ramarlina/memx) for persistent state across sessions:

- Use `--` to pass arguments directly to the underlying CLI:
  ```bash
- agx claude -- --resume
+ # If ~/.mem has a task mapped to cwd, context is auto-loaded
+ cd ~/Projects/my-app
+ agx claude -p "continue"  # Knows where it left off
+
+ # Create task with explicit criteria
+ agx claude --task todo-app \
+   --criteria "CRUD working" \
+   --criteria "Tests passing" \
+   --criteria "Deployed to Vercel" \
+   -p "Build a todo app"
  ```

- ## LLM-Predictable Command Patterns
+ ## Output Markers

- For LLMs constructing commands, use these canonical patterns:
+ Agents control state via markers in their output:

- ```bash
- # Pattern: agx <provider> --prompt "<prompt>"
- agx claude --prompt "explain this code"
- agx gemini --prompt "summarize the file"
- agx ollama --prompt "write a function"
+ ```
+ [checkpoint: Hero section complete]    # Save progress
+ [learn: Tailwind is fast]              # Record learning
+ [next: Add auth system]                # Set next step
+ [criteria: 2]                          # Mark criterion #2 done
+ [approve: Deploy to production?]       # Halt for approval
+ [blocked: Need API key from client]    # Mark stuck
+ [pause]                                # Stop, resume later
+ [continue]                             # Keep going (daemon)
+ [done]                                 # Task complete
+ [split: auth "Handle authentication"]  # Create subtask
+ ```
+
+ ## Providers
+
+ | Provider | Aliases | Description |
+ |----------|---------|-------------|
+ | claude   | c, cl   | Anthropic Claude Code |
+ | gemini   | g, gem  | Google Gemini CLI |
+ | ollama   | o, ol   | Local Ollama models |

- # Pattern: agx <provider> --model <model> --prompt "<prompt>"
- agx claude --model claude-sonnet-4-20250514 --prompt "fix the bug"
- agx gemini --model gemini-2.0-flash --prompt "optimize this"
- agx ollama --model qwen3:8b --prompt "refactor"
+ ## Options
+
+ ```
+ --prompt, -p <text>   Prompt to send
+ --model, -m <name>    Model name
+ --yolo, -y            Skip permission prompts
+ --print               Non-interactive output
+ --interactive, -i     Force interactive mode
+ --mem                 Enable mem integration (auto-detected)
+ --no-mem              Disable mem integration
+ --auto-task           Auto-create task from prompt
+ --task <name>         Specific task name
+ --criteria <text>     Success criterion (repeatable)
+ --daemon              Loop on [continue] marker
+ ```

- # Pattern: agx <provider> --yolo --prompt "<prompt>"
- agx claude --yolo --prompt "run the tests"
+ ## Commands

- # Pattern: agx <provider> --print --prompt "<prompt>"
- agx claude --print --prompt "what is 2+2"
+ ```bash
+ agx init           # Setup wizard
+ agx config         # Configuration menu
+ agx status         # Show current config
+ agx skill          # View LLM skill
+ agx skill install  # Install skill to Claude/Gemini
  ```

- ### Command Structure
+ ## Loop Control
+
+ The agent controls execution flow via markers:
+
+ - `[done]` → Task complete, exit
+ - `[pause]` → Save state, exit (resume later with same command)
+ - `[blocked: reason]` → Mark stuck, notify human, exit
+ - `[continue]` → Keep going (daemon mode loops)
+ - `[approve: question]` → Halt until human approves
+
+ ## Task Splitting
+
+ Break large tasks into subtasks:

  ```
- agx <provider> [--model <name>] [--yolo] [--print] --prompt "<prompt>"
+ Agent output:
+ This is too big. Breaking it down.
+
+ [split: setup "Project scaffolding"]
+ [split: auth "Authentication system"]
+ [split: crud "CRUD operations"]
+ [next: Start with setup subtask]
+ [pause]
  ```

- **Rules for LLMs:**
- 1. Always use `--prompt` flag for the prompt text
- 2. Quote the prompt with double quotes
- 3. Place options before `--prompt`
- 4. Use full provider names (`claude`, `gemini`, `ollama`) for clarity
+ agx creates subtask branches in ~/.mem linked to the parent.
+
+ ## Example: Full Workflow
+
+ ```bash
+ # Day 1: Start project
+ mkdir ~/Projects/my-app && cd ~/Projects/my-app
+ agx claude --auto-task -p "Build a React todo app with auth"
+
+ # Agent works, outputs markers
+ # [checkpoint: Scaffolded with Vite]
+ # [learn: Vite is faster than CRA]
+ # [next: Add todo list component]
+ # [pause]
+
+ # Day 2: Continue
+ cd ~/Projects/my-app
+ agx claude -p "continue"
+ # Context auto-loaded, agent picks up where it left off
+ ```

- ## Ollama Support
+ ## License

- `agx ollama` automatically configures the environment to use a local Ollama instance as the backend for Claude Code. Default model is `glm-4.7:cloud` unless specified with `--model`.
+ MIT
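
The new README's Output Markers section defines a line-oriented `[verb: detail]` protocol. As a quick illustration of that protocol (a standalone sketch, not agx's actual implementation; the transcript text is invented), a wrapper could separate marker lines from narrative output with standard `grep`:

```shell
# Hypothetical agent transcript using the documented marker forms.
transcript='Scaffolded the app with Vite.
[checkpoint: Hero section complete]
[learn: Tailwind is fast]
[next: Add auth system]
[done]'

# Keep only marker lines; a wrapper would dispatch on each verb.
printf '%s\n' "$transcript" |
  grep -E '^\[(checkpoint|learn|next|criteria|approve|blocked|pause|continue|done|split)(:.*)?\]$'
```

Running this prints the four marker lines and drops the narrative sentence, which is all a state manager needs to act on checkpoints, learnings, and completion.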
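
The Loop Control rules added to the README (`[done]`, `[pause]`, and `[blocked: …]` stop execution; `[continue]` loops in daemon mode) can be sketched as a plain shell loop. This is an illustrative approximation, not agx's code; `run_agent` is a stand-in with canned output where a real run would invoke something like `agx claude --daemon -p "..."`:

```shell
# Stand-in for one agent invocation; canned output for demonstration.
run_agent() {
  echo '[checkpoint: step complete]'
  echo '[done]'
}

# Re-invoke the agent until a terminal marker appears in its output.
while :; do
  out="$(run_agent)"
  case "$out" in
    *'[done]'*)    echo 'task complete'; break ;;  # task finished, exit
    *'[pause]'*)   echo 'paused';        break ;;  # state saved, resume later
    *'[blocked:'*) echo 'blocked';       break ;;  # needs a human, exit
    *)             : ;;                             # [continue] → loop again
  esac
done
```

With the canned output above, the loop exits on its first iteration via the `[done]` branch and prints `task complete`.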