cli-apimon 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,350 @@
Metadata-Version: 2.4
Name: cli-apimon
Version: 0.1.0
Summary: CLI tool for API monitoring, analytics, and LLM-powered improvement insights
Author-email: apimon <hello@apimon.dev>
License: MIT
Project-URL: Homepage, https://github.com/apimon/apimon
Project-URL: Repository, https://github.com/apimon/apimon
Project-URL: Issues, https://github.com/apimon/apimon/issues
Keywords: api,monitoring,cli,analytics,proxy,llm,openai,gemini,anthropic,ai-agent
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Libraries :: Application Frameworks
Classifier: Topic :: System :: Monitoring
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: aiohttp>=3.9.0
Requires-Dist: sqlalchemy>=2.0.0
Requires-Dist: click>=8.1.0
Requires-Dist: rich>=13.0.0
Requires-Dist: textual>=0.50.0
Requires-Dist: python-dateutil>=2.8.0
Provides-Extra: llm
Requires-Dist: openai>=1.0.0; extra == "llm"
Requires-Dist: google-genai>=0.3.0; extra == "llm"
Requires-Dist: anthropic>=0.18.0; extra == "llm"

# apimon

A CLI tool for API monitoring, analytics, and AI-powered improvement insights. Designed for both humans and AI agents.

## Features

- **Proxy-based Monitoring**: Sits between clients and your API server to capture all traffic
- **Rich Analytics**: Hit counts, response times, error rates, percentiles (p50/p90/p95/p99)
- **Pattern Detection**: Automatically identifies slow routes, chatty APIs, caching opportunities, and security anomalies
- **LLM-Powered Insights**: Generate AI-driven analysis using OpenAI, Google Gemini, or Anthropic Claude
- **Agent-Friendly**: Full `--json` support and `--ai` mode for non-interactive automation
- **Interactive TUI**: Textual-based terminal UI for live monitoring

## Installation

The distribution is published as `cli-apimon`; the installed command is `apimon`:

```bash
pip install cli-apimon
```

For LLM integration (optional):

```bash
pip install "cli-apimon[llm]"
```

## Quick Start

### 1. Start the proxy server

```bash
apimon proxy --target-port 3000
```

### 2. Make requests through the proxy

```bash
curl http://localhost:8080/api/users
```

### 3. View analytics

```bash
# Human-friendly dashboard
apimon dashboard

# Or interactive TUI
apimon ui
```

### 4. Generate AI insights

```bash
export OPENAI_API_KEY="sk-..."
apimon insights --provider openai
```

---

## Commands Reference

| Command                | Description                        |
| ---------------------- | ---------------------------------- |
| `apimon proxy`         | Start the proxy server             |
| `apimon ui`            | Interactive TUI dashboard          |
| `apimon ui --ai`       | **Machine-readable JSON snapshot** |
| `apimon dashboard`     | Terminal analytics dashboard       |
| `apimon stats`         | Route statistics                   |
| `apimon requests`      | Recent requests                    |
| `apimon request <id>`  | Detailed view of a request         |
| `apimon suggestions`   | Rule-based improvement suggestions |
| `apimon insights`      | LLM-powered AI analysis            |
| `apimon graph`         | ASCII graphs of activity           |
| `apimon export <file>` | Export data to JSON file           |
| `apimon clear`         | Clear all stored data              |

---

## AI Agent Integration

apimon is designed to be used by AI agents. Every command supports `--json` output, and `apimon ui --ai` provides a complete snapshot for automated analysis.

### Non-Interactive Mode (`--ai`)

```bash
# Get full analytics snapshot as JSON (no TUI)
apimon ui --ai

# Include LLM analysis
apimon ui --ai --provider openai

# Pipe to jq for specific fields
apimon ui --ai | jq .analytics_summary
apimon ui --ai | jq .cache_candidates
apimon ui --ai | jq .unique_error_messages
```

### JSON Output Fields

The `--ai` mode returns a comprehensive JSON object:

```json
{
  "apimon_version": "0.1.0",
  "db_path": "apimon.db",
  "hours_analyzed": 24,
  "analytics_summary": {
    "total_requests": 8158,
    "error_rate": 48.1,
    "avg_response_time_ms": 26.15,
    "unique_routes": 7
  },
  "response_time_percentiles": {
    "p50": 1.16,
    "p90": 96.52,
    "p95": 123.33,
    "p99": 146.41
  },
  "route_stats": [...],
  "top_routes_by_traffic": [...],
  "route_percentiles": [...],
  "status_code_distribution": [...],
  "method_distribution": [...],
  "error_summary": [...],
  "unique_error_messages": [...],
  "slowest_routes": [...],
  "cache_candidates": [...],
  "hourly_summary": [...],
  "error_rate_trend": [...],
  "suggestions": [...],
  "llm_prompt": "...",
  "llm_provider": "openai",
  "llm_insights": "...",
  "llm_error": null
}
```

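The percentile figures in this snapshot can be recomputed from raw request latencies. Below is a minimal nearest-rank sketch with hypothetical sample data; the exact method apimon uses internally is not documented here:

```python
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile: the smallest sample >= pct% of all samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical per-request latencies in milliseconds
latencies_ms = [0.9, 1.0, 1.2, 2.1, 3.4, 96.0, 120.5, 145.0]
summary = {f"p{p}": percentile(latencies_ms, p) for p in (50, 90, 95, 99)}
```

With these eight samples the sketch yields p50 = 2.1 ms and p99 = 145.0 ms; real figures come from the stored request log.
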
### Key Fields for Agents

| Field                       | Description                                                  |
| --------------------------- | ------------------------------------------------------------ |
| `analytics_summary`         | Overall stats: total requests, error rate, avg response time |
| `response_time_percentiles` | Global p50, p90, p95, p99 latencies                          |
| `top_routes_by_traffic`     | Routes ranked by hit count with traffic share %              |
| `route_percentiles`         | Per-route latency percentiles                                |
| `unique_error_messages`     | Actual error response bodies grouped by route/status         |
| `cache_candidates`          | GET routes suitable for caching, with benefit scores         |
| `error_rate_trend`          | Hourly error rates to detect spikes                          |
| `suggestions`               | Rule-based improvement suggestions                           |
| `llm_prompt`                | The exact prompt sent to the LLM (for debugging/reuse)       |
| `llm_insights`              | LLM response (if `--provider` was specified)                 |

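If jq is unavailable, the snapshot is just as easy to consume from Python. A sketch with made-up data; only the top-level keys come from the table above, and the fields inside each entry are illustrative:

```python
import json

# In a live pipeline this string would be the stdout of: apimon ui --ai
raw = """{
  "suggestions": [
    {"severity": "high", "title": "High error rate on /api/users"},
    {"severity": "low", "title": "Consider pagination on /api/items"}
  ],
  "cache_candidates": [{"route": "/api/items", "benefit_score": 0.82}]
}"""

snapshot = json.loads(raw)

# Keep only high-severity suggestions, mirroring the jq select() filters
critical = [s for s in snapshot["suggestions"] if s["severity"] == "high"]
cacheable = [c["route"] for c in snapshot["cache_candidates"]]
```
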
### JSON Mode for Individual Commands

```bash
# Route statistics
apimon stats --json
apimon stats --json | jq '.[] | select(.error_rate > 10)'

# Recent requests
apimon requests --json --limit 100
apimon requests --json --method POST | jq '.[] | select(.response_status >= 400)'

# Single request detail
apimon request 42 --json

# Suggestions
apimon suggestions --json
apimon suggestions --json | jq '.[] | select(.severity == "high")'

# LLM insights
apimon insights --json --provider openai
apimon insights --json --provider openai | jq -r .insights

# Graph data
apimon graph --json
apimon graph --json | jq .time_series
```

### Example: Agent Workflow

```bash
# 1. Get full snapshot
DATA=$(apimon ui --ai --provider openai)

# 2. Check for critical issues
echo "$DATA" | jq '.suggestions[] | select(.severity == "high")'

# 3. Get caching recommendations
echo "$DATA" | jq '.cache_candidates'

# 4. Read LLM analysis
echo "$DATA" | jq -r '.llm_insights'

# 5. Get the prompt for custom LLM calls
echo "$DATA" | jq -r '.llm_prompt' > prompt.txt
```

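An agent can extend this workflow with its own checks, for example scanning `error_rate_trend` for sudden spikes. A rough sketch; the `hour`/`error_rate` field names inside each entry are assumptions, only the top-level key is documented:

```python
def find_spikes(trend, factor=2.0, floor=5.0):
    """Flag hours whose error rate is at least `floor`% and `factor`x the average of earlier hours."""
    spikes, earlier = [], []
    for point in trend:
        rate = point["error_rate"]
        if earlier:
            avg = sum(earlier) / len(earlier)
            if rate >= floor and rate >= factor * avg:
                spikes.append(point)
        earlier.append(rate)
    return spikes

# Made-up hourly data: a jump from ~2% to 21% should be flagged
trend = [
    {"hour": "10:00", "error_rate": 2.0},
    {"hour": "11:00", "error_rate": 2.5},
    {"hour": "12:00", "error_rate": 21.0},
]
```

The thresholds are arbitrary; the point is that the snapshot's trend data is structured enough for simple anomaly checks without an LLM.
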
---

## LLM Providers

### Environment Variables

| Provider  | Environment Variable |
| --------- | -------------------- |
| OpenAI    | `OPENAI_API_KEY`     |
| Gemini    | `GEMINI_API_KEY`     |
| Anthropic | `ANTHROPIC_API_KEY`  |

### Usage

```bash
# OpenAI (default)
export OPENAI_API_KEY="sk-..."
apimon insights --provider openai
apimon ui --ai --provider openai

# Google Gemini
export GEMINI_API_KEY="..."
apimon insights --provider gemini

# Anthropic Claude
export ANTHROPIC_API_KEY="sk-ant-..."
apimon insights --provider anthropic

# Pass key directly (not recommended for scripts)
apimon insights --provider openai --api-key sk-...
```

### LLM Prompt Contents

The LLM receives comprehensive data, including:

- Overall analytics summary
- Response time percentiles (global and per-route)
- Traffic distribution by route
- Unique error messages with response bodies
- Caching candidates with benefit scores
- Error rate trends by hour
- Full route statistics

The prompt asks for:

1. Critical issues requiring immediate attention
2. Performance bottleneck analysis
3. Caching strategy with TTL recommendations
4. Error pattern analysis
5. Architecture recommendations
6. Prioritized action items

---

## Interactive TUI

```bash
apimon ui
```

On startup, you'll be prompted to configure an LLM provider (optional). Press **Skip** to use the TUI without LLM features.

### Keyboard Shortcuts

| Key | Action           |
| --- | ---------------- |
| `1` | Routes tab       |
| `2` | Requests tab     |
| `3` | Analytics tab    |
| `r` | Refresh data     |
| `d` | Toggle dark mode |
| `q` | Quit             |

The Analytics tab includes a **"Get LLM Insights"** button that calls the configured LLM and displays results inline.

---

## Proxy Options

```bash
apimon proxy --target-host localhost --target-port 3000 --port 8080
```

| Option          | Default     | Description          |
| --------------- | ----------- | -------------------- |
| `--target-host` | `localhost` | Your API server host |
| `--target-port` | `3000`      | Your API server port |
| `--port`        | `8080`      | Proxy listen port    |
| `--db-path`     | `apimon.db` | SQLite database path |

---

## Data Storage

All data is stored in a local SQLite database (`apimon.db` by default).

### Route Pattern Normalization

The proxy automatically normalizes parameterized routes:

- `/users/123` → `/users/{id}`
- `/posts/abc-def-123` → `/posts/{id}`
- `/api/v2/items` → `/api/{version}/items`

This enables meaningful aggregation of statistics.

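The mapping above can be approximated with a few regular expressions. A rough sketch of the idea (apimon's actual rules may differ):

```python
import re

def normalize_route(path: str) -> str:
    # Version segments such as /v2 -> /{version}
    path = re.sub(r"/v\d+(?=/|$)", "/{version}", path)
    # Purely numeric segments -> /{id}
    path = re.sub(r"/\d+(?=/|$)", "/{id}", path)
    # Slug- or UUID-like segments (letters/digits/dashes containing a digit) -> /{id}
    path = re.sub(r"/(?=[^/]*\d)[A-Za-z0-9-]+(?=/|$)", "/{id}", path)
    return path
```

The three examples above round-trip through this sketch; real-world edge cases (purely alphabetic slugs, underscores, UUID-only matching) would need additional rules.
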
### Clear Data

```bash
apimon clear --yes  # Non-interactive (for scripts/agents)
apimon clear        # Prompts for confirmation
```

---

## License

MIT
@@ -0,0 +1,34 @@
"""apimon - API monitoring and analytics CLI tool."""

__version__ = "0.1.0"
__author__ = "apimon"
__license__ = "MIT"

from apimon.proxy import ProxyServer
from apimon.analytics import AnalyticsEngine
from apimon.storage import DataStore
from apimon.llm import (
    LLMProvider,
    LLMClient,
    LLMInsight,
    OpenAIClient,
    GeminiClient,
    AnthropicClient,
    create_llm_client,
    LLMInsightGenerator,
)

__all__ = [
    "ProxyServer",
    "AnalyticsEngine",
    "DataStore",
    "LLMProvider",
    "LLMClient",
    "LLMInsight",
    "OpenAIClient",
    "GeminiClient",
    "AnthropicClient",
    "create_llm_client",
    "LLMInsightGenerator",
    "__version__",
]