polydev-ai 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED

# Polydev AI Website

An advanced Model Context Protocol platform with comprehensive multi-LLM integration, subscription-based CLI access, OAuth bridges, and rich tooling for AI development.

## Features

### 🤖 **Comprehensive LLM Integration**
- **API-Based Providers**: Direct integration with 8+ providers (Anthropic, OpenAI, Google, etc.)
- **Subscription-Based CLI Access**: Use your existing ChatGPT Plus, Claude Pro, GitHub Copilot subscriptions
- **Unified Interface**: Single API for all providers with consistent streaming responses
- **Auto-Detection**: Automatic CLI tool discovery and path configuration

### 🔧 **CLI Provider Support**
- **Codex CLI**: Access GPT-5 with high reasoning through your ChatGPT subscription
- **Claude Code CLI**: Use Claude via an Anthropic subscription
- **Gemini CLI**: Google Cloud authentication integration
- **GitHub Copilot**: VS Code Language Model API integration

### 🛠 **Advanced Tooling**
- **Model Context Protocol (MCP)**: Hosted MCP server with OAuth authentication (like Vercel)
- **Multi-Authentication**: Both OAuth and API token support for maximum flexibility
- **Process Execution**: Cross-platform CLI management with timeout handling
- **Path Auto-Discovery**: Smart detection of CLI installations across Windows, macOS, and Linux
- **Real-time Status**: Live CLI availability and authentication checking

### 🔒 **Security & Authentication**
- **Encrypted Storage**: Browser-based API key encryption
- **OAuth Bridges**: Secure authentication flows
- **Subscription Auth**: No API costs - use existing subscriptions
- **Local Storage**: Keys never leave your device

### 📊 **Monitoring & Analytics**
- **PostHog Integration**: Advanced user analytics and feature tracking
- **BetterStack Monitoring**: System health and performance monitoring
- **Upstash Redis**: High-performance caching layer
- **Supabase Auth**: Robust authentication system

## Tech Stack

### **Frontend**
- **Framework**: Next.js 15 with App Router
- **UI Library**: React 18 with TypeScript
- **Styling**: Tailwind CSS with shadcn/ui components
- **Icons**: Lucide React
- **State Management**: React hooks with custom providers

### **LLM Integration**
- **API Handlers**: Custom TypeScript handlers for each provider
- **CLI Integration**: Cross-platform process execution utilities
- **Streaming**: Server-Sent Events for real-time responses (see the sketch below)
- **Authentication**: Both API key and subscription-based authentication
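
As an illustration of the streaming model, here is a minimal sketch of consuming a Server-Sent Events response in the browser; the endpoint path and payload shape are assumptions for this example, not the actual Polydev API.

```typescript
// Minimal SSE consumption sketch (hypothetical endpoint and payload shape).
async function streamChat(prompt: string, onToken: (text: string) => void): Promise<void> {
  // POST the prompt; the server is assumed to answer with `text/event-stream`.
  const res = await fetch('/api/chat/stream', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages: [{ role: 'user', content: prompt }] }),
  })
  if (!res.ok || !res.body) throw new Error(`Stream failed: ${res.status}`)

  const reader = res.body.getReader()
  const decoder = new TextDecoder()
  let buffer = ''

  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    buffer += decoder.decode(value, { stream: true })

    // SSE events are separated by blank lines; each data line carries a JSON chunk.
    const events = buffer.split('\n\n')
    buffer = events.pop() ?? ''
    for (const event of events) {
      const data = event.replace(/^data: /, '').trim()
      if (data === '[DONE]') return
      onToken(JSON.parse(data).delta ?? '')
    }
  }
}
```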

### **Backend Services**
- **Analytics**: PostHog for user tracking and feature analytics
- **Monitoring**: BetterStack for system health and logging
- **Caching**: Upstash Redis for high-performance data caching
- **Authentication**: Supabase for user management and auth flows
- **Database**: Supabase PostgreSQL for user data

### **Security & Storage**
- **Encryption**: Browser SubtleCrypto API for client-side encryption
- **Storage**: Local browser storage with encrypted API keys
- **CORS**: Configured for secure cross-origin requests

### **Development & Deployment**
- **Package Manager**: npm with Node.js 18+
- **Build System**: Next.js with TypeScript compilation
- **Deployment**: Vercel with automatic deployments
- **Environment**: Support for multiple deployment environments

## Getting Started

### **Prerequisites**
- Node.js 18+
- npm or yarn package manager
- (Optional) CLI tools for subscription-based access:
  - Codex CLI for ChatGPT Plus integration
  - Claude Code CLI for Anthropic Pro integration
  - Gemini CLI for Google Cloud integration
  - VS Code with GitHub Copilot for Copilot integration

### **Installation**

1. **Clone the repository**
   ```bash
   git clone <repository-url>
   cd polydev-website
   ```

2. **Install dependencies**
   ```bash
   npm install
   ```

3. **Set up environment variables** (see the Environment Variables section)

4. **Start the development server**
   ```bash
   npm run dev
   ```

5. **Open the application**
   Navigate to [http://localhost:3000](http://localhost:3000) to view the application.

### **Quick Configuration**

1. **API Key Setup**: Go to Settings → API Keys tab to configure traditional API access
2. **CLI Setup**: Go to Settings → CLI Subscriptions tab to set up subscription-based access
3. **Provider Selection**: Choose your preferred LLM provider from the dropdown
4. **Test Integration**: Use the chat interface to test your configuration

## Environment Variables

Create a `.env.local` file with the following variables:

```env
# Supabase
NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key
SUPABASE_SERVICE_ROLE_KEY=your_supabase_service_role_key

# PostHog Analytics
NEXT_PUBLIC_POSTHOG_KEY=your_posthog_key
NEXT_PUBLIC_POSTHOG_HOST=https://us.i.posthog.com

# Upstash Redis
UPSTASH_REDIS_REST_URL=your_upstash_redis_url
UPSTASH_REDIS_REST_TOKEN=your_upstash_redis_token

# BetterStack Logging
BETTERSTACK_LOGS_TOKEN=your_betterstack_token
```
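
A quick way to catch missing configuration early is to assert on these variables at startup. The helper below is a minimal sketch; the function name and file placement are illustrative, not part of the existing codebase.

```typescript
// Minimal startup check for the variables listed above (illustrative, not part of the repo).
const REQUIRED_ENV_VARS = [
  'NEXT_PUBLIC_SUPABASE_URL',
  'NEXT_PUBLIC_SUPABASE_ANON_KEY',
  'SUPABASE_SERVICE_ROLE_KEY',
  'NEXT_PUBLIC_POSTHOG_KEY',
  'UPSTASH_REDIS_REST_URL',
  'UPSTASH_REDIS_REST_TOKEN',
  'BETTERSTACK_LOGS_TOKEN',
] as const

export function assertEnv(): void {
  const missing = REQUIRED_ENV_VARS.filter((name) => !process.env[name])
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`)
  }
}
```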

## CLI Provider Setup

### **Codex CLI (ChatGPT Plus Integration)**

1. **Install Codex CLI**:
   - Download from [OpenAI's official repository](https://openai.com/chatgpt/desktop)
   - Ensure you have an active ChatGPT Plus subscription

2. **Authentication**:
   ```bash
   codex auth
   ```

3. **Verify Installation**:
   ```bash
   codex --version
   ```

### **Claude Code CLI (Anthropic Pro Integration)**

1. **Install Claude Code CLI**:
   - Follow the instructions at [Claude Code Documentation](https://docs.anthropic.com/en/docs/claude-code)
   - Requires an active Claude Pro subscription

2. **Authentication**:
   ```bash
   claude login
   ```

3. **Verify Installation**:
   ```bash
   claude --version
   ```

### **Gemini CLI (Google Cloud Integration)**

1. **Install Google Cloud CLI**:
   ```bash
   # macOS
   brew install google-cloud-sdk

   # Windows
   # Download from https://cloud.google.com/sdk/docs/install

   # Linux
   curl https://sdk.cloud.google.com | bash
   ```

2. **Authentication**:
   ```bash
   gcloud auth login
   gcloud auth application-default login
   ```

### **GitHub Copilot Integration**

1. **Install VS Code** with the GitHub Copilot extension
2. **Authentication**: Sign in with a GitHub account that has Copilot access
3. **Verification**: The application detects VS Code and Copilot availability automatically
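
The CLI path auto-discovery described above can be approximated with a small cross-platform probe. This is an illustrative sketch only; the function name and structure are assumptions, not the project's actual `src/lib` utilities.

```typescript
// Illustrative cross-platform CLI discovery sketch (not the project's actual implementation).
import { execFile } from 'node:child_process'
import { promisify } from 'node:util'

const execFileAsync = promisify(execFile)

// Resolve an executable on PATH using `where` on Windows and `which` elsewhere.
export async function findCli(command: string): Promise<string | null> {
  const locator = process.platform === 'win32' ? 'where' : 'which'
  try {
    const { stdout } = await execFileAsync(locator, [command])
    const firstMatch = stdout.split(/\r?\n/).find((line) => line.trim().length > 0)
    return firstMatch?.trim() ?? null
  } catch {
    return null // command not installed or not on PATH
  }
}

// Example: probe the CLIs this README describes.
for (const cli of ['codex', 'claude', 'gcloud']) {
  findCli(cli).then((path) => console.log(`${cli}: ${path ?? 'not found'}`))
}
```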

## API Provider Configuration

### **Setting Up API Keys**

1. Navigate to Settings → API Keys tab
2. Select your preferred provider from the dropdown
3. Enter your API key (encrypted automatically)
4. Choose your preferred model
5. Test the configuration

### **Supported API Providers**

| Provider | Models | Context Window | Features |
|----------|--------|----------------|----------|
| **Anthropic** | Claude 3.5 Sonnet, Haiku, Opus | 200K tokens | Best for reasoning and code |
| **OpenAI** | GPT-4o, GPT-4 Turbo, GPT-3.5 | 128K tokens | Versatile, widely adopted |
| **Google Gemini** | Gemini 1.5 Pro, Flash | 1M+ tokens | Large context window |
| **OpenRouter** | 100+ models | Varies | Access to multiple providers |
| **Groq** | Open-source models | Varies | Ultra-fast inference |
| **Perplexity** | Search-optimized models | Varies | AI search and reasoning |
| **DeepSeek** | Reasoning models | Varies | Advanced reasoning capabilities |
| **Mistral AI** | European AI models | Varies | Strong performance, EU-based |
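
For illustration, a provider registry corresponding to the table above might be shaped roughly like the following. The type and field names here are assumptions for this sketch, not the project's actual `PROVIDERS` constant.

```typescript
// Illustrative provider registry shaped after the table above (field names are assumptions).
type ApiProviderId = 'anthropic' | 'openai' | 'gemini' | 'openrouter' | 'groq' | 'perplexity' | 'deepseek' | 'mistral'

interface ProviderInfo {
  label: string
  models: string[]          // display names; actual model IDs vary by provider
  contextWindow?: number    // in tokens, when a single figure applies
}

const PROVIDER_REGISTRY: Record<ApiProviderId, ProviderInfo> = {
  anthropic: { label: 'Anthropic', models: ['Claude 3.5 Sonnet', 'Haiku', 'Opus'], contextWindow: 200_000 },
  openai: { label: 'OpenAI', models: ['GPT-4o', 'GPT-4 Turbo', 'GPT-3.5'], contextWindow: 128_000 },
  gemini: { label: 'Google Gemini', models: ['Gemini 1.5 Pro', 'Flash'], contextWindow: 1_000_000 },
  openrouter: { label: 'OpenRouter', models: ['100+ models'] },
  groq: { label: 'Groq', models: ['Open-source models'] },
  perplexity: { label: 'Perplexity', models: ['Search-optimized models'] },
  deepseek: { label: 'DeepSeek', models: ['Reasoning models'] },
  mistral: { label: 'Mistral AI', models: ['European AI models'] },
}
```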

## Usage Examples

### **Basic Chat Interface**

1. Configure your preferred provider (API key or CLI)
2. Select a model from the dropdown
3. Start chatting in the main interface
4. Switch providers anytime without losing conversation history

### **CLI Provider Usage**

```typescript
// The application automatically detects CLI availability
// Users can configure custom paths if needed

// Example: Using Codex CLI for high reasoning
const response = await llmService.createCliMessage(
  'codex',
  'You are a helpful AI assistant',
  [{ role: 'user', content: 'Explain quantum computing' }],
  { temperature: 0.7 }
)
```

### **API Provider Usage**

```typescript
// Standard API key usage
const response = await llmService.createMessage(
  'You are a helpful AI assistant',
  [{ role: 'user', content: 'Write a Python function' }],
  { provider: 'anthropic', model: 'claude-3-5-sonnet-20241022' }
)
```

## Architecture Overview

### **CLI Integration Architecture**

```
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│   Frontend UI   │─────│  Process Utils   │─────│    CLI Tools    │
│   (React/TS)    │     │    (Node.js)     │     │   (External)    │
└─────────────────┘     └──────────────────┘     └─────────────────┘
         │                        │                       │
         │                        │                       │
         ▼                        ▼                       ▼
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│   LLM Service   │     │   CLI Handlers   │     │  Subscriptions  │
│  (Unified API)  │     │  (Per Provider)  │     │ (ChatGPT+, etc) │
└─────────────────┘     └──────────────────┘     └─────────────────┘
```
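
To make the diagram concrete, the following is a rough sketch of how a unified service could route a request either to an API handler or to a CLI handler. The interfaces are illustrative assumptions and are not the project's actual `LLMService` code.

```typescript
// Rough routing sketch matching the diagram (interfaces are illustrative, not the real LLMService).
interface ChatMessage { role: 'user' | 'assistant' | 'system'; content: string }

interface Handler {
  createMessage(system: string, messages: ChatMessage[]): AsyncIterable<string>
}

class UnifiedLLMService {
  constructor(
    private apiHandlers: Map<string, Handler>,   // e.g. 'anthropic', 'openai'
    private cliHandlers: Map<string, Handler>,   // e.g. 'codex', 'claude'
  ) {}

  // Pick a CLI handler when the user relies on a subscription, otherwise an API handler.
  route(provider: string, useSubscription: boolean): Handler {
    const pool = useSubscription ? this.cliHandlers : this.apiHandlers
    const handler = pool.get(provider)
    if (!handler) throw new Error(`No handler registered for provider "${provider}"`)
    return handler
  }

  async *createMessage(provider: string, useSubscription: boolean, system: string, messages: ChatMessage[]) {
    yield* this.route(provider, useSubscription).createMessage(system, messages)
  }
}
```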

### **Security Model**

- **API Keys**: Encrypted using browser SubtleCrypto API
- **Local Storage**: Keys never leave your device
- **CLI Authentication**: Uses existing subscription authentication
- **Process Isolation**: CLI processes run in isolated environments
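
As a rough illustration of client-side key encryption with SubtleCrypto, the generic sketch below derives an AES key from a passphrase and encrypts an API key. The key-derivation parameters, IV handling, and storage format are assumptions, not the app's actual scheme.

```typescript
// Generic AES-GCM sketch using the browser SubtleCrypto API (not the app's actual scheme).
async function encryptApiKey(plaintext: string, passphrase: string): Promise<string> {
  const enc = new TextEncoder()

  // Derive an AES key from the passphrase with PBKDF2 (parameters are illustrative).
  const salt = crypto.getRandomValues(new Uint8Array(16))
  const baseKey = await crypto.subtle.importKey('raw', enc.encode(passphrase), 'PBKDF2', false, ['deriveKey'])
  const aesKey = await crypto.subtle.deriveKey(
    { name: 'PBKDF2', salt, iterations: 100_000, hash: 'SHA-256' },
    baseKey,
    { name: 'AES-GCM', length: 256 },
    false,
    ['encrypt'],
  )

  // Encrypt with a random IV and store salt + IV + ciphertext together (base64).
  const iv = crypto.getRandomValues(new Uint8Array(12))
  const ciphertext = new Uint8Array(await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, aesKey, enc.encode(plaintext)))
  const packed = new Uint8Array([...salt, ...iv, ...ciphertext])
  return btoa(String.fromCharCode(...packed))
}
```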

## Troubleshooting

### **CLI Provider Issues**

**Codex CLI not detected:**
1. Verify your ChatGPT Plus subscription is active
2. Check the installation path: `which codex`
3. Re-run authentication: `codex auth`
4. Configure a custom path in Settings → CLI Subscriptions

**Claude Code CLI authentication failed:**
1. Ensure your Claude Pro subscription is active
2. Run `claude login` manually
3. Check network connectivity
4. Verify CLI version compatibility

**Gemini CLI setup issues:**
1. Install the Google Cloud SDK completely
2. Run both `gcloud auth` commands
3. Enable the required APIs in Google Cloud Console
4. Check quota limits

**GitHub Copilot not available:**
1. Install VS Code with the Copilot extension
2. Sign in to a GitHub account with Copilot access
3. Restart the application
4. Check VS Code Language Model API availability

### **API Provider Issues**

**API key validation failed:**
1. Verify the key is correctly copied (no extra spaces)
2. Check that the key has the required permissions
3. Ensure sufficient account credits/quota
4. Try regenerating the API key

**Connection timeout:**
1. Check internet connectivity
2. Verify firewall settings
3. Try a different model/provider
4. Increase the timeout in settings

**Model not available:**
1. Check the provider documentation for model availability
2. Verify your account tier supports the model
3. Try alternative models from the same provider

## Development

### **Adding New CLI Providers**

1. **Create a handler** in `src/lib/llm/handlers/`
2. **Update types** in `src/types/api-configuration.ts`
3. **Add configuration** to the `CLI_PROVIDERS` constant
4. **Update the UI** in `CliProviderConfiguration.tsx`
5. **Register the handler** in `LLMService`

### **Adding New API Providers**

1. **Create a handler** implementing the `ApiHandler` interface (see the sketch below)
2. **Update the `ApiProvider` type** and `PROVIDERS` configuration
3. **Add API key fields** to `ApiConfiguration`
4. **Update UI** components for the new provider
5. **Add authentication logic**
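
As a starting point, a new provider handler might be shaped roughly like this. The interface and method names below are assumptions for illustration, since the actual `ApiHandler` interface lives in the project's source.

```typescript
// Hypothetical skeleton for a new API provider handler; method and option names are assumptions.
interface ChatMessage { role: 'user' | 'assistant' | 'system'; content: string }

interface ApiHandlerLike {
  createMessage(systemPrompt: string, messages: ChatMessage[]): AsyncIterable<string>
}

class ExampleProviderHandler implements ApiHandlerLike {
  constructor(private apiKey: string, private model: string) {}

  async *createMessage(systemPrompt: string, messages: ChatMessage[]) {
    // Call the provider's HTTP API (endpoint and payload shape are placeholders).
    const res = await fetch('https://api.example-provider.com/v1/chat', {
      method: 'POST',
      headers: { Authorization: `Bearer ${this.apiKey}`, 'Content-Type': 'application/json' },
      body: JSON.stringify({ model: this.model, system: systemPrompt, messages }),
    })
    if (!res.ok) throw new Error(`Provider error: ${res.status}`)

    // For brevity this sketch yields the full completion at once; a real handler would stream.
    const data = await res.json()
    yield data.output ?? ''
  }
}
```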

## Performance Optimization

### **CLI Response Optimization**
- CLI processes are cached and reused when possible
- Streaming responses reduce perceived latency
- Process timeouts prevent hanging connections
- Cross-platform path detection minimizes setup time

### **API Response Optimization**
- Server-Sent Events for real-time streaming
- Connection pooling for API requests
- Response caching for repeated queries
- Automatic retry logic with exponential backoff (see the sketch below)
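
The retry behavior described above usually looks something like the generic sketch below; the attempt count and delays are illustrative, not the app's actual values.

```typescript
// Generic exponential-backoff retry sketch; attempt count and delays are illustrative.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 4, baseDelayMs = 500): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      // Wait 500ms, 1s, 2s, ... plus a little jitter before retrying.
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 100
      await new Promise((resolve) => setTimeout(resolve, delay))
    }
  }
  throw lastError
}

// Usage: wrap any flaky API call.
// const result = await withRetry(() => fetch('/api/chat').then((r) => r.json()))
```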

## Health Check

The application includes a health check endpoint at `/api/health` for monitoring purposes.
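
For example, an uptime monitor or a quick script can poll it like this; the deployment URL is a placeholder and the response body shape is an assumption, so check the route handler for the actual fields.

```typescript
// Poll the health endpoint; the hostname and response fields shown are assumptions.
const res = await fetch('https://your-deployment.example.com/api/health')
if (!res.ok) {
  console.error(`Health check failed with status ${res.status}`)
} else {
  console.log('Service healthy:', await res.json())
}
```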