@goldensheepai/toknxr-cli 0.2.0 → 0.3.0

package/README.md CHANGED
@@ -5,17 +5,21 @@
  ## 🔥 Quick Start (ONE COMMAND!)
 
  ### Option 1: Ultimate One-Command Setup (Recommended)
+
  ```bash
  # From the ToknXR-CLI directory, run:
  ./toknxr-cli/start.sh
  ```
+
  This single command does **EVERYTHING**:
+
  - ✅ Sets up all configuration files
  - ✅ Creates environment variables template
  - ✅ Starts the server with all 5 providers
  - ✅ Opens your AI analytics dashboard
 
  ### Option 2: NPM Commands
+
  ```bash
  # 1. Navigate to the toknxr-cli directory
  cd toknxr-cli
@@ -28,6 +32,7 @@ npm run go
  ```
 
  ### Option 3: Manual Setup (if you prefer)
+
  ```bash
  cd toknxr-cli
  npm run quickstart # Sets up everything
@@ -40,6 +45,7 @@ npm start # Start tracking
  ## 📦 What's Included
 
  ✅ **5 AI Providers** with full configuration support:
+
  - **Ollama-Llama3** (Local AI - Free)
  - **Gemini-Pro** (Google AI - Paid)
  - **Gemini-Free** (Google AI - Free tier)
@@ -47,6 +53,7 @@ npm start # Start tracking
  - **Anthropic-Claude** (Claude - Paid)
 
  ✅ **Advanced Analytics**:
+
  - Real-time token usage tracking
  - Cost monitoring and budget alerts
  - Code quality analysis for coding requests
@@ -54,14 +61,256 @@ npm start # Start tracking
  - Effectiveness scoring
 
  ✅ **Smart Features**:
+
  - Automatic provider routing
  - Budget enforcement with alerts
  - Comprehensive logging
  - Web dashboard for visualization
 
+ ## 🎭 See It In Action - Complete User Journey
+
+ ### **Role-Play: A developer's workday - Alex wants to track AI usage and optimize costs**
+
+ ```bash
+ # Terminal opens - Alex sees the welcome screen
+ $ toknxr
+
+ ████████╗  ██████╗  ██╗  ██╗ ███╗   ██╗ ██╗  ██╗ ██████╗ 
+ ╚══██╔══╝ ██╔═══██╗ ██║ ██╔╝ ████╗  ██║ ╚██╗██╔╝ ██╔══██╗
+    ██║    ██║   ██║ █████╔╝  ██╔██╗ ██║  ╚███╔╝  ██████╔╝
+    ██║    ██║   ██║ ██╔═██╗  ██║╚██╗██║  ██╔██╗  ██╔══██╗
+    ██║    ╚██████╔╝ ██║  ██╗ ██║ ╚████║ ██╔╝ ██╗ ██║  ██║
+    ╚═╝     ╚═════╝  ╚═╝  ╚═╝ ╚═╝  ╚═══╝ ╚═╝  ╚═╝ ╚═╝  ╚═╝
+
+ 👁 Powered by Golden Sheep AI
+ ```
+
+ **Alex (thinking):** "Okay, I need to set up tracking for my OpenAI and Google AI usage. Let me configure this."
+
+ ```bash
+ # 1. Initialize project with config and policies
+ $ toknxr init
+
+ Created .env
+ Created toknxr.config.json
+ Created toknxr.policy.json
+
+ $ cat toknxr.config.json
+ {
+   "providers": [
+     {
+       "name": "Gemini-Pro",
+       "routePrefix": "/gemini",
+       "targetUrl": "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent",
+       "apiKeyEnvVar": "GEMINI_API_KEY",
+       "authHeader": "x-goog-api-key",
+       "tokenMapping": {
+         "prompt": "usageMetadata.promptTokenCount",
+         "completion": "usageMetadata.candidatesTokenCount",
+         "total": "usageMetadata.totalTokenCount"
+       }
+     }
+   ]
+ }
+
+ # 2. Alex sets up an API key
+ $ echo "GEMINI_API_KEY=your_actual_key_here" >> .env
+ # 3. Sets budget policies
+ $ vim toknxr.policy.json
+ # Edits: monthlyUSD: 100, perProviderMonthlyUSD: {"Gemini-Pro": 50}
+ ```
+
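+ The `tokenMapping` entries above are just dot-paths into the provider's JSON response body. As a rough sketch of the idea (not ToknXR's actual implementation), resolving them could look like this:
+
+ ```javascript
+ // Minimal sketch: resolve dot-paths such as "usageMetadata.promptTokenCount"
+ // against a provider response body. Purely illustrative, not the CLI's real code.
+ const getPath = (obj, path) =>
+   path.split('.').reduce((value, key) => (value == null ? undefined : value[key]), obj);
+
+ // Shape taken from the Gemini-style response shown later in this walkthrough
+ const body = {
+   usageMetadata: { promptTokenCount: 45, candidatesTokenCount: 78, totalTokenCount: 123 },
+ };
+
+ const mapping = {
+   prompt: 'usageMetadata.promptTokenCount',
+   completion: 'usageMetadata.candidatesTokenCount',
+   total: 'usageMetadata.totalTokenCount',
+ };
+
+ const usage = Object.fromEntries(
+   Object.entries(mapping).map(([name, path]) => [name, getPath(body, path)])
+ );
+ console.log(usage); // { prompt: 45, completion: 78, total: 123 }
+ ```
+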
+ **Alex:** "Great! Now I need to authenticate so my data syncs to the dashboard."
+
+ ```bash
+ # 4. Login to the web dashboard
+ $ toknxr login
+
+ [Auth] Opening https://your-toknxr-app.com/cli-login
+ [Auth] Please authenticate in your browser...
+
+ # Alex opens a browser, logs in with email/password
+ # CLI detects successful auth, stores token locally
+
+ [Auth] ✅ Successfully authenticated! You can now sync data to your dashboard.
+ ```
+
+ **Alex:** "Perfect! Now let me start tracking my AI usage automatically."
+
+ ```bash
+ # 5. Start the proxy server
+ $ toknxr start
+
+ [Proxy] Server listening on http://localhost:8788
+ [Proxy] Loaded providers: Gemini-Pro
+ [Proxy] ⏳ Ready to intercept and analyze your AI requests...
+
+ 👁 TokNXR is watching your AI usage...
+ ```
+
+ **Alex (working on a coding task):** "Let me ask Gemini to write some code for me."
+
+ ```bash
+ # 6. Alex uses their normal AI workflow, but points it at the proxy
+ $ curl "http://localhost:8788/gemini" \
+     -H "Content-Type: application/json" \
+     -d '{
+       "contents": [{
+         "parts": [{"text": "Write a Python function to calculate fibonacci numbers efficiently"}]
+       }]
+     }'
+
+ # 7. Proxy intercepts, forwards to Gemini, analyzes the response
+ [Proxy] Received request: POST /gemini | requestId=1234-abcd
+ [Proxy] Matched provider: Gemini-Pro
+ [Proxy] Forwarding request to https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent
+ [Proxy] Running AI analysis pipeline... Confidence: 5.2%, Likely: false
+ [Proxy] Running code quality analysis... Quality: 87/100, Effectiveness: 92/100
+ [Proxy] Interaction successfully logged to interactions.log
+
+ # Returns the actual AI response to Alex's curl command
+ {
+   "candidates": [{
+     "content": {
+       "parts": [{
+         "text": "def fibonacci(n, memo={}):\n    if n in memo: return memo[n]\n    if n <= 1: return n\n    memo[n] = fibonacci(n-1, memo) + fibonacci(n-2, memo)\n    return memo[n]"
+       }]
+     },
+     "usageMetadata": {
+       "promptTokenCount": 45,
+       "candidatesTokenCount": 78,
+       "totalTokenCount": 123
+     }
+   }]
+ }
+ ```
+
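+ What exactly lands in `interactions.log` isn't shown in this walkthrough. Judging from the stats and dashboard fields below, a single logged record presumably carries something like the following; the field names here are illustrative assumptions, not the real schema:
+
+ ```javascript
+ // Hypothetical interactions.log record - the real field names and layout may differ.
+ const interaction = {
+   requestId: '1234-abcd',
+   provider: 'Gemini-Pro',
+   tokens: { prompt: 45, completion: 78, total: 123 },
+   costUSD: 0.0005,
+   codeQuality: 87,   // 0-100 score from the code quality analysis
+   effectiveness: 92, // 0-100 prompt-vs-result score
+   hallucination: { likely: false, confidence: 0.052 }, // mirrors "Confidence: 5.2%, Likely: false"
+ };
+ console.log(JSON.stringify(interaction));
+ ```
+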
+ **Alex:** "Nice! The proxy automatically captured that interaction. Let me check the local analytics."
+
+ ```bash
+ # 8. Check local stats and analytics
+ $ toknxr stats
+
+ Token Usage Statistics
+
+ Provider: Gemini-Pro
+ Total Requests: 1
+ Total Tokens: 123
+ - Prompt Tokens: 45
+ - Completion Tokens: 78
+ Cost (USD): $0.0005
+
+ Grand Totals
+ Requests: 1
+ Tokens: 123
+ - Prompt: 45
+ - Completion: 78
+ Cost (USD): $0.0005
+
+ Code Quality Insights:
+ Coding Requests: 1
+ Avg Code Quality: 87/100
+ Avg Effectiveness: 92/100
+
+ ✅ Your AI coding setup is working well
+ ```
+
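+ The `$0.0005` above is simply token counts multiplied by whatever per-token pricing is configured for Gemini-Pro. With made-up placeholder rates (the real ones come from your pricing configuration), the arithmetic looks like this:
+
+ ```javascript
+ // Placeholder rates in USD per million tokens - NOT Gemini's actual pricing.
+ const inputUsdPerMillion = 1.0;
+ const outputUsdPerMillion = 5.0;
+
+ const promptTokens = 45;
+ const completionTokens = 78;
+
+ const costUsd =
+   (promptTokens * inputUsdPerMillion + completionTokens * outputUsdPerMillion) / 1_000_000;
+ console.log(costUsd.toFixed(4)); // "0.0004" with these placeholder rates
+ ```
+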
+ **Alex:** "Sweet! Let me see a more detailed analysis."
+
+ ```bash
+ # 9. Deep dive into code analysis
+ $ toknxr code-analysis
+
+ AI Code Quality Analysis
+
+ Language Distribution:
+ python: 1 requests
+
+ Code Quality Scores:
+ Excellent (90-100): 0
+ Good (75-89): 1
+ Fair (60-74): 0
+ Poor (0-59): 0
+
+ Effectiveness Scores (Prompt ↔ Result):
+ Excellent (90-100): 1
+
+ Recent Low-Quality Code Examples:
+ (none - everything looks good!)
+
+ Improvement Suggestions:
+ 💡 Great! Your AI coding setup is working well
+ 💡 Consider establishing code review processes for edge cases
+ ```
+
+ **Alex:** "Perfect! Now I want to sync this data to my web dashboard so my team can see it."
+
+ ```bash
+ # 10. Sync local logs to the Supabase dashboard
+ $ toknxr sync
+
+ [Sync] Reading 1 interactions from interactions.log
+ [Sync] Analyzing interactions for quality insights
+ [Sync] Preparing to sync 1 interactions to Supabase
+ [Sync] Syncing interaction 1/1...
+ [Sync] Successfully synced to dashboard
+ [Sync] Data now available at https://your-toknxr-app.com/dashboard
+
+ Synced 1 interactions to your dashboard ✅
+ ```
+
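+ Conceptually, `toknxr sync` reads those local records and pushes them up to Supabase. A rough sketch of that idea (the JSON-lines log format, table name, and row shape are assumptions here, not the CLI's actual code):
+
+ ```javascript
+ import { readFileSync } from 'node:fs';
+ import { createClient } from '@supabase/supabase-js';
+
+ const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_ANON_KEY);
+
+ // Treat interactions.log as JSON Lines: one JSON record per line (assumed format)
+ const rows = readFileSync('interactions.log', 'utf8')
+   .split('\n')
+   .filter(Boolean)
+   .map((line) => JSON.parse(line));
+
+ // "interactions" is a hypothetical table name
+ const { error } = await supabase.from('interactions').insert(rows);
+ if (error) console.error('Sync failed:', error.message);
+ ```
+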
+ **Alex opens the dashboard:** "Let me check the web interface."
+
+ ### **Web Dashboard Results:**
+
+ - **Stats Cards:** Total cost $0.0005, 1 interaction, 0% waste, 0% hallucination rate
+ - **Charts:** Cost trends, quality scores over time
+ - **Recent Interactions:** Shows the fibonacci function request with its quality score
+ - **Analysis:** "Great session! Low costs, high-quality AI responses"
+
+ **Alex:** "Excellent! I have full visibility into my AI usage. Let me work more and then check if there are any budget concerns."
+
+ ```bash
+ # Alex works through the day; the proxy tracks everything automatically
+
+ # Later that afternoon...
+ $ toknxr stats
+
+ # Shows accumulated usage across the day
+ # Alerts if approaching budget limits
+ # Identifies patterns in AI effectiveness
+
+ $ toknxr providers
+
+ AI Provider Comparison
+
+ 🏢 Gemini-Pro:
+ Total Interactions: 47
+ Hallucination Rate: 3.2%
+ Avg Quality Score: 84/100
+ Avg Effectiveness: 89/100
+
+ 🏆 Performance Summary:
+ Best Provider: Gemini-Pro (84/100 quality)
+ ```
+
+ **Alex (end of day):** "Wow! I've automatically tracked 47 AI interactions, analyzed code quality, caught some hallucinations early, and stayed under budget. I'll sync this to the team dashboard."
+
+ ---
+
+ ## 🎭 The End Result
+
+ - ✅ **Automatic tracking** of ALL AI usage (tokens, costs, quality)
+ - ✅ **Real-time alerts** for budgets and hallucinations
+ - ✅ **Team visibility** through a synced dashboard
+ - ✅ **Data-driven optimization** of AI workflows
+ - ✅ **Cost control** and quality insights
+
+ _Alex became an **AI efficiency expert** without extra work - just normal development with automatic superpowers! 🦸‍♂️_
+
  ## 🔧 Setup Details
 
  ### Environment Variables (.env file)
+
  ```bash
  # Required: Google AI API Key (for Gemini models)
  GEMINI_API_KEY=your_gemini_api_key_here
@@ -80,17 +329,18 @@ WEBHOOK_URL=https://your-webhook-url.com/alerts
 
  Once running, your AI providers will be available at:
 
- | Provider | Endpoint | Status |
- |----------|----------|---------|
- | **Ollama-Llama3** | `http://localhost:8788/ollama` | ✅ Ready |
- | **Gemini-Pro** | `http://localhost:8788/gemini` | ✅ Ready |
- | **Gemini-Free** | `http://localhost:8788/gemini-free` | ✅ Ready |
- | **OpenAI-GPT4** | `http://localhost:8788/openai` | ✅ Ready |
- | **Anthropic-Claude** | `http://localhost:8788/anthropic` | ✅ Ready |
+ | Provider             | Endpoint                            | Status   |
+ | -------------------- | ----------------------------------- | -------- |
+ | **Ollama-Llama3**    | `http://localhost:8788/ollama`      | ✅ Ready |
+ | **Gemini-Pro**       | `http://localhost:8788/gemini`      | ✅ Ready |
+ | **Gemini-Free**      | `http://localhost:8788/gemini-free` | ✅ Ready |
+ | **OpenAI-GPT4**      | `http://localhost:8788/openai`      | ✅ Ready |
+ | **Anthropic-Claude** | `http://localhost:8788/anthropic`   | ✅ Ready |
 
  ## 💡 Usage Examples
 
  ### Using with curl
+
  ```bash
  # Test Gemini-Free (no API key needed for testing)
  curl -X POST http://localhost:8788/gemini-free \
@@ -104,18 +354,20 @@ curl -X POST http://localhost:8788/gemini \
  ```
 
  ### Using with JavaScript/Node.js
+
  ```javascript
  const response = await fetch('http://localhost:8788/gemini-free', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
-     contents: [{ parts: [{ text: 'Your prompt here' }] }]
-   })
+     contents: [{ parts: [{ text: 'Your prompt here' }] }],
+   }),
  });
  const data = await response.json();
  ```
 
  ### Using with Python
+
  ```python
  import requests
 
@@ -128,6 +380,7 @@ result = response.json()
  ## 📊 Analytics & Monitoring
 
  ### View Usage Statistics
+
  ```bash
  # View token usage and cost statistics
  npm run cli stats
@@ -140,6 +393,7 @@ npm run cli hallucination-analysis
  ```
 
  ### Dashboard Access
+
  - **Main Dashboard**: `http://localhost:8788/dashboard`
  - **Health Check**: `http://localhost:8788/health`
  - **API Stats**: `http://localhost:8788/api/stats`
@@ -147,6 +401,7 @@ npm run cli hallucination-analysis
  ## 🛠️ Advanced Configuration
 
  ### Budget Management
+
  The system includes intelligent budget management:
 
  ```bash
@@ -163,7 +418,9 @@ npm run cli policy:init
  ```
 
  ### Custom Configuration
+
  Edit `toknxr.config.json` to:
+
  - Add new AI providers
  - Modify token mapping
  - Update API endpoints
@@ -174,6 +431,7 @@ Edit `toknxr.config.json` to:
  ### Common Issues
 
  **Port 8788 already in use:**
+
  ```bash
  # Kill existing process
  pkill -f "npm run start"
@@ -182,15 +440,18 @@ npm start
  ```
 
  **API key not working:**
+
  - Verify your API keys in the `.env` file
  - Check that keys have the correct permissions
  - Test keys directly with the provider's API
 
  **Ollama not available:**
+
  - Ensure Ollama is running: `ollama serve`
  - Check that it's accessible at `http://localhost:11434`
 
  ### Getting Help
+
  ```bash
  # View all available commands
  npm run cli --help