claudish 4.5.0 → 4.5.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/AI_AGENT_GUIDE.md CHANGED
@@ -13,7 +13,7 @@
 claudish --models --json
 
 # 2. Run task with specific model (OpenRouter)
-claudish --model openai/gpt-5.2 "your task here"
+claudish --model openai/gpt-5.3 "your task here"
 
 # 3. Run with direct Gemini API
 claudish --model g/gemini-2.0-flash "your task here"
@@ -22,7 +22,7 @@ claudish --model g/gemini-2.0-flash "your task here"
 claudish --model ollama/llama3.2 "your task here"
 
 # 5. For large prompts, use stdin
-echo "your task" | claudish --stdin --model openai/gpt-5.2
+echo "your task" | claudish --stdin --model openai/gpt-5.3
 ```
 
 ## What is Claudish?
@@ -41,7 +41,7 @@ Claudish = Claude Code + Any AI Model
 
 | Prefix | Backend | Example |
 |--------|---------|---------|
-| _(none)_ | OpenRouter | `openai/gpt-5.2` |
+| _(none)_ | OpenRouter | `openai/gpt-5.3` |
 | `g/` `gemini/` | Google Gemini | `g/gemini-2.0-flash` |
 | `v/` `vertex/` | Vertex AI | `v/gemini-2.5-flash` |
 | `oai/` `openai/` | OpenAI | `oai/gpt-4o` |
@@ -96,7 +96,7 @@ claudish --model vertex/openai/gpt-oss-120b-maas "reason"
 
 | Model ID | Provider | Category | Best For |
 |----------|----------|----------|----------|
-| `openai/gpt-5.2` | OpenAI | Reasoning | **Default** - Most advanced reasoning |
+| `openai/gpt-5.3` | OpenAI | Reasoning | **Default** - Most advanced reasoning |
 | `minimax/minimax-m2.1` | MiniMax | Coding | Budget-friendly, fast |
 | `z-ai/glm-4.7` | Z.AI | Coding | Balanced performance |
 | `google/gemini-3-pro-preview` | Google | Reasoning | 1M context window |
@@ -353,7 +353,7 @@ User: "What models support vision?"
 Claude: [calls search_models tool with query="vision"]
 
 User: "Compare how GPT-5 and Gemini explain this concept"
-Claude: [calls compare_models tool with models=["openai/gpt-5.2", "google/gemini-3-pro-preview"]]
+Claude: [calls compare_models tool with models=["openai/gpt-5.3", "google/gemini-3-pro-preview"]]
 ```
 
 ### MCP vs CLI Mode
@@ -394,7 +394,7 @@ Claude: [calls compare_models tool with models=["openai/gpt-5.2", "google/gemini
 **compare_models**
 ```typescript
 {
-  models: string[], // e.g., ["openai/gpt-5.2", "x-ai/grok-code-fast-1"]
+  models: string[], // e.g., ["openai/gpt-5.3", "x-ai/grok-code-fast-1"]
   prompt: string, // Prompt to send to all models
   system_prompt?: string // Optional system prompt
 }