milaidy 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +8 -0
- package/README.md +538 -0
- package/dist/argv-CfSowvEA.js +63 -0
- package/dist/config-B-mboG4v.js +4 -0
- package/dist/eliza-CPJjgw-e.js +1491 -0
- package/dist/eliza.js +2192 -0
- package/dist/entry.js +232 -0
- package/dist/index.js +209 -0
- package/dist/links-BFKlWqSe.js +15 -0
- package/dist/paths-D_yh1DEJ.js +69 -0
- package/dist/plugins-cli-B7kSre2c.js +134 -0
- package/dist/program-6KwWwKKh.js +510 -0
- package/dist/register.agents-CPVmSjMG.js +17 -0
- package/dist/register.browser-B2ooXxNx.js +15 -0
- package/dist/register.channels-CMYQ6K6Y.js +42 -0
- package/dist/register.cron-D91lY1_Y.js +9 -0
- package/dist/register.devices-rU5I5L_y.js +13 -0
- package/dist/register.gateway-82SLAvw3.js +22 -0
- package/dist/register.hooks-B_XTBEkt.js +9 -0
- package/dist/register.logs-BgEGcPd8.js +10 -0
- package/dist/register.models-BJt9eVgZ.js +26 -0
- package/dist/register.nodes-B5xY1s8a.js +9 -0
- package/dist/register.skills-SFQqYIhg.js +10 -0
- package/dist/register.subclis-uF_AsbWR.js +187 -0
- package/dist/run-main-XODklzS-.js +56 -0
- package/dist/theme-DBvtuGeq.js +36 -0
- package/dist/utils-C1AUpp_V.js +42 -0
- package/dist/version-Cpn3yr5D.js +26 -0
- package/dist/workspace-Co3Wul2D.js +206 -0
- package/dist/workspace-DCA6MNVK.js +350 -0
- package/docs/.i18n/README.md +31 -0
- package/docs/.i18n/glossary.zh-CN.json +210 -0
- package/docs/.i18n/zh-CN.tm.jsonl +1329 -0
- package/docs/CNAME +1 -0
- package/docs/automation/cron-jobs.md +468 -0
- package/docs/automation/cron-vs-heartbeat.md +254 -0
- package/docs/automation/gmail-pubsub.md +256 -0
- package/docs/automation/poll.md +69 -0
- package/docs/automation/webhook.md +163 -0
- package/docs/bedrock.md +176 -0
- package/docs/brave-search.md +41 -0
- package/docs/broadcast-groups.md +442 -0
- package/docs/cli/acp.md +170 -0
- package/docs/cli/agent.md +24 -0
- package/docs/cli/agents.md +75 -0
- package/docs/cli/approvals.md +50 -0
- package/docs/cli/browser.md +107 -0
- package/docs/cli/channels.md +79 -0
- package/docs/cli/config.md +50 -0
- package/docs/cli/configure.md +33 -0
- package/docs/cli/cron.md +42 -0
- package/docs/cli/dashboard.md +16 -0
- package/docs/cli/devices.md +67 -0
- package/docs/cli/directory.md +63 -0
- package/docs/cli/dns.md +23 -0
- package/docs/cli/docs.md +15 -0
- package/docs/cli/doctor.md +41 -0
- package/docs/cli/gateway.md +199 -0
- package/docs/cli/health.md +21 -0
- package/docs/cli/hooks.md +291 -0
- package/docs/cli/index.md +1029 -0
- package/docs/cli/logs.md +24 -0
- package/docs/cli/memory.md +45 -0
- package/docs/cli/message.md +239 -0
- package/docs/cli/models.md +79 -0
- package/docs/cli/node.md +112 -0
- package/docs/cli/nodes.md +73 -0
- package/docs/cli/onboard.md +29 -0
- package/docs/cli/pairing.md +21 -0
- package/docs/cli/plugins.md +62 -0
- package/docs/cli/reset.md +17 -0
- package/docs/cli/sandbox.md +152 -0
- package/docs/cli/security.md +26 -0
- package/docs/cli/sessions.md +16 -0
- package/docs/cli/setup.md +29 -0
- package/docs/cli/skills.md +26 -0
- package/docs/cli/status.md +26 -0
- package/docs/cli/system.md +60 -0
- package/docs/cli/tui.md +23 -0
- package/docs/cli/uninstall.md +17 -0
- package/docs/cli/update.md +98 -0
- package/docs/cli/voicecall.md +34 -0
- package/docs/cli/webhooks.md +25 -0
- package/docs/concepts/agent-loop.md +146 -0
- package/docs/concepts/agent-workspace.md +229 -0
- package/docs/concepts/agent.md +122 -0
- package/docs/concepts/architecture.md +129 -0
- package/docs/concepts/channel-routing.md +114 -0
- package/docs/concepts/compaction.md +61 -0
- package/docs/concepts/context.md +159 -0
- package/docs/concepts/features.md +53 -0
- package/docs/concepts/group-messages.md +84 -0
- package/docs/concepts/groups.md +373 -0
- package/docs/concepts/markdown-formatting.md +130 -0
- package/docs/concepts/memory.md +546 -0
- package/docs/concepts/messages.md +154 -0
- package/docs/concepts/model-failover.md +149 -0
- package/docs/concepts/model-providers.md +315 -0
- package/docs/concepts/models.md +208 -0
- package/docs/concepts/multi-agent.md +376 -0
- package/docs/concepts/oauth.md +145 -0
- package/docs/concepts/plugins.md +454 -0
- package/docs/concepts/presence.md +102 -0
- package/docs/concepts/queue.md +89 -0
- package/docs/concepts/retry.md +69 -0
- package/docs/concepts/secrets.md +300 -0
- package/docs/concepts/session-pruning.md +122 -0
- package/docs/concepts/session-tool.md +193 -0
- package/docs/concepts/session.md +188 -0
- package/docs/concepts/sessions.md +10 -0
- package/docs/concepts/skills.md +392 -0
- package/docs/concepts/streaming.md +135 -0
- package/docs/concepts/system-prompt.md +114 -0
- package/docs/concepts/timezone.md +91 -0
- package/docs/concepts/typebox.md +289 -0
- package/docs/concepts/typing-indicators.md +68 -0
- package/docs/concepts/usage-tracking.md +35 -0
- package/docs/custom.css +4 -0
- package/docs/date-time.md +128 -0
- package/docs/debugging.md +162 -0
- package/docs/docs.json +1599 -0
- package/docs/environment.md +81 -0
- package/docs/hooks.md +876 -0
- package/docs/index.md +179 -0
- package/docs/install/ansible.md +208 -0
- package/docs/install/bun.md +59 -0
- package/docs/install/development-channels.md +75 -0
- package/docs/install/docker.md +567 -0
- package/docs/install/index.md +185 -0
- package/docs/install/installer.md +123 -0
- package/docs/install/migrating.md +192 -0
- package/docs/install/nix.md +96 -0
- package/docs/install/node.md +78 -0
- package/docs/install/uninstall.md +128 -0
- package/docs/install/updating.md +228 -0
- package/docs/logging.md +350 -0
- package/docs/multi-agent-sandbox-tools.md +395 -0
- package/docs/network.md +54 -0
- package/docs/nodes/audio.md +114 -0
- package/docs/nodes/camera.md +156 -0
- package/docs/nodes/images.md +72 -0
- package/docs/nodes/index.md +341 -0
- package/docs/nodes/location-command.md +113 -0
- package/docs/nodes/media-understanding.md +379 -0
- package/docs/nodes/talk.md +90 -0
- package/docs/nodes/voicewake.md +65 -0
- package/docs/northflank.mdx +53 -0
- package/docs/perplexity.md +80 -0
- package/docs/platforms/android.md +129 -0
- package/docs/platforms/digitalocean.md +262 -0
- package/docs/platforms/exe-dev.md +125 -0
- package/docs/platforms/fly.md +486 -0
- package/docs/platforms/gcp.md +503 -0
- package/docs/platforms/hetzner.md +330 -0
- package/docs/platforms/index.md +53 -0
- package/docs/platforms/ios.md +106 -0
- package/docs/platforms/linux.md +94 -0
- package/docs/platforms/mac/bundled-gateway.md +73 -0
- package/docs/platforms/mac/canvas.md +125 -0
- package/docs/platforms/mac/child-process.md +69 -0
- package/docs/platforms/mac/dev-setup.md +102 -0
- package/docs/platforms/mac/health.md +34 -0
- package/docs/platforms/mac/icon.md +31 -0
- package/docs/platforms/mac/logging.md +57 -0
- package/docs/platforms/mac/menu-bar.md +81 -0
- package/docs/platforms/mac/peekaboo.md +65 -0
- package/docs/platforms/mac/permissions.md +44 -0
- package/docs/platforms/mac/release.md +85 -0
- package/docs/platforms/mac/remote.md +83 -0
- package/docs/platforms/mac/signing.md +47 -0
- package/docs/platforms/mac/skills.md +33 -0
- package/docs/platforms/mac/voice-overlay.md +60 -0
- package/docs/platforms/mac/voicewake.md +67 -0
- package/docs/platforms/mac/webchat.md +41 -0
- package/docs/platforms/mac/xpc.md +61 -0
- package/docs/platforms/macos-vm.md +281 -0
- package/docs/platforms/macos.md +203 -0
- package/docs/platforms/oracle.md +303 -0
- package/docs/platforms/raspberry-pi.md +358 -0
- package/docs/platforms/windows.md +159 -0
- package/docs/plugin.md +651 -0
- package/docs/plugins/agent-tools.md +99 -0
- package/docs/plugins/manifest.md +71 -0
- package/docs/plugins/voice-call.md +273 -0
- package/docs/plugins/zalouser.md +70 -0
- package/docs/providers/anthropic.md +152 -0
- package/docs/providers/claude-max-api-proxy.md +148 -0
- package/docs/providers/cloudflare-ai-gateway.md +71 -0
- package/docs/providers/deepgram.md +93 -0
- package/docs/providers/glm.md +33 -0
- package/docs/providers/index.md +63 -0
- package/docs/providers/minimax.md +208 -0
- package/docs/providers/models.md +51 -0
- package/docs/providers/moonshot.md +142 -0
- package/docs/providers/ollama.md +223 -0
- package/docs/providers/openai.md +62 -0
- package/docs/providers/opencode.md +36 -0
- package/docs/providers/openrouter.md +37 -0
- package/docs/providers/qwen.md +53 -0
- package/docs/providers/synthetic.md +99 -0
- package/docs/providers/venice.md +267 -0
- package/docs/providers/vercel-ai-gateway.md +50 -0
- package/docs/providers/xiaomi.md +64 -0
- package/docs/providers/zai.md +36 -0
- package/docs/railway.mdx +99 -0
- package/docs/reference/templates/AGENTS.md +9 -0
- package/docs/reference/templates/BOOTSTRAP.md +3 -0
- package/docs/reference/templates/HEARTBEAT.md +3 -0
- package/docs/reference/templates/IDENTITY.md +3 -0
- package/docs/reference/templates/TOOLS.md +3 -0
- package/docs/reference/templates/USER.md +3 -0
- package/docs/render.mdx +165 -0
- package/docs/start/docs-directory.md +63 -0
- package/docs/start/getting-started.md +212 -0
- package/docs/start/milaidy.md +247 -0
- package/docs/start/onboarding.md +258 -0
- package/docs/start/pairing.md +86 -0
- package/docs/start/quickstart.md +81 -0
- package/docs/start/setup.md +149 -0
- package/docs/start/showcase.md +416 -0
- package/docs/start/wizard.md +418 -0
- package/docs/testing.md +368 -0
- package/docs/token-use.md +112 -0
- package/docs/tools/agent-send.md +53 -0
- package/docs/tools/apply-patch.md +50 -0
- package/docs/tools/browser-linux-troubleshooting.md +139 -0
- package/docs/tools/browser-login.md +68 -0
- package/docs/tools/browser.md +576 -0
- package/docs/tools/chrome-extension.md +178 -0
- package/docs/tools/clawhub.md +257 -0
- package/docs/tools/creating-skills.md +54 -0
- package/docs/tools/elevated.md +57 -0
- package/docs/tools/exec-approvals.md +246 -0
- package/docs/tools/exec.md +179 -0
- package/docs/tools/firecrawl.md +61 -0
- package/docs/tools/index.md +508 -0
- package/docs/tools/llm-task.md +115 -0
- package/docs/tools/reactions.md +22 -0
- package/docs/tools/skills-config.md +76 -0
- package/docs/tools/skills.md +300 -0
- package/docs/tools/slash-commands.md +196 -0
- package/docs/tools/subagents.md +151 -0
- package/docs/tools/thinking.md +73 -0
- package/docs/tools/web.md +261 -0
- package/docs/tui.md +159 -0
- package/docs/vps.md +43 -0
- package/docs/web/control-ui.md +221 -0
- package/docs/web/dashboard.md +46 -0
- package/docs/web/index.md +116 -0
- package/docs/web/webchat.md +49 -0
- package/milaidy.mjs +14 -0
- package/package.json +271 -0
- package/skills/.cache/catalog.json +88519 -0
@@ -0,0 +1,148 @@
---
summary: "Use Claude Max/Pro subscription as an OpenAI-compatible API endpoint"
read_when:
- You want to use Claude Max subscription with OpenAI-compatible tools
- You want a local API server that wraps Claude Code CLI
- You want to save money by using subscription instead of API keys
title: "Claude Max API Proxy"
---

# Claude Max API Proxy

**claude-max-api-proxy** is a community tool that exposes your Claude Max/Pro subscription as an OpenAI-compatible API endpoint. This allows you to use your subscription with any tool that supports the OpenAI API format.

## Why Use This?

| Approach                | Cost                                                | Best For                                   |
| ----------------------- | --------------------------------------------------- | ------------------------------------------ |
| Anthropic API           | Pay per token (~$15/M input, $75/M output for Opus) | Production apps, high volume               |
| Claude Max subscription | $200/month flat                                     | Personal use, development, unlimited usage |

If you have a Claude Max subscription and want to use it with OpenAI-compatible tools, this proxy can save you significant money.

## How It Works

```
Your App → claude-max-api-proxy → Claude Code CLI → Anthropic (via subscription)
(OpenAI format)      (converts format)      (uses your login)
```

The proxy:

1. Accepts OpenAI-format requests at `http://localhost:3456/v1/chat/completions`
2. Converts them to Claude Code CLI commands
3. Returns responses in OpenAI format (streaming supported)
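The conversion in step 2 can be pictured as flattening the OpenAI `messages` array into a single prompt for the CLI. This is an illustrative sketch only; the proxy's actual conversion logic may handle roles, system prompts, and tool calls differently:

```python
def messages_to_prompt(messages: list[dict]) -> str:
    """Flatten an OpenAI-style messages array into one prompt string.

    Illustrative only: the real claude-max-api-proxy conversion
    may differ (roles, system prompts, tool calls, etc.).
    """
    parts = [f"{m['role'].capitalize()}: {m['content']}" for m in messages]
    return "\n\n".join(parts)

prompt = messages_to_prompt([
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```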

## Installation

```bash
# Requires Node.js 20+ and the Claude Code CLI
npm install -g claude-max-api-proxy

# Verify the Claude CLI is installed (it must already be authenticated)
claude --version
```

## Usage

### Start the server

```bash
claude-max-api
# Server runs at http://localhost:3456
```

### Test it

```bash
# Health check
curl http://localhost:3456/health

# List models
curl http://localhost:3456/v1/models

# Chat completion
curl http://localhost:3456/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-opus-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

### With Milaidy

You can point Milaidy at the proxy as a custom OpenAI-compatible endpoint:

```json5
{
  env: {
    OPENAI_API_KEY: "not-needed",
    OPENAI_BASE_URL: "http://localhost:3456/v1",
  },
  agents: {
    defaults: {
      model: { primary: "openai/claude-opus-4" },
    },
  },
}
```

## Available Models

| Model ID          | Maps To         |
| ----------------- | --------------- |
| `claude-opus-4`   | Claude Opus 4   |
| `claude-sonnet-4` | Claude Sonnet 4 |
| `claude-haiku-4`  | Claude Haiku 4  |

## Auto-Start on macOS

Create a LaunchAgent to run the proxy automatically:

```bash
cat > ~/Library/LaunchAgents/com.claude-max-api.plist << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.claude-max-api</string>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/node</string>
    <string>/usr/local/lib/node_modules/claude-max-api-proxy/dist/server/standalone.js</string>
  </array>
  <key>EnvironmentVariables</key>
  <dict>
    <key>PATH</key>
    <string>/usr/local/bin:/opt/homebrew/bin:~/.local/bin:/usr/bin:/bin</string>
  </dict>
</dict>
</plist>
EOF

launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.claude-max-api.plist
```

## Links

- **npm:** https://www.npmjs.com/package/claude-max-api-proxy
- **GitHub:** https://github.com/atalovesyou/claude-max-api-proxy
- **Issues:** https://github.com/atalovesyou/claude-max-api-proxy/issues

## Notes

- This is a **community tool**, not officially supported by Anthropic or Milaidy
- Requires an active Claude Max/Pro subscription with Claude Code CLI authenticated
- The proxy runs locally and does not send data to any third-party servers
- Streaming responses are fully supported
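Streamed replies should follow the standard OpenAI SSE shape (`data: {...}` lines ending with `data: [DONE]`). A minimal client-side accumulator, assuming that format (not taken from the proxy's source):

```python
import json

def accumulate_stream(lines):
    """Collect assistant text from OpenAI-style SSE `data:` lines.

    Assumes the standard OpenAI streaming chunk shape; written as an
    illustration, not taken from claude-max-api-proxy's implementation.
    """
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        text.append(delta.get("content", ""))
    return "".join(text)

stream = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo!"}}]}',
    "data: [DONE]",
]
print(accumulate_stream(stream))  # Hello!
```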

## See Also

- [Anthropic provider](/providers/anthropic) - Native Milaidy integration with Claude setup-token or API keys
- [OpenAI provider](/providers/openai) - For OpenAI/Codex subscriptions
@@ -0,0 +1,71 @@
---
title: "Cloudflare AI Gateway"
summary: "Cloudflare AI Gateway setup (auth + model selection)"
read_when:
- You want to use Cloudflare AI Gateway with Milaidy
- You need the account ID, gateway ID, or API key env var
---

# Cloudflare AI Gateway

Cloudflare AI Gateway sits in front of provider APIs and lets you add analytics, caching, and controls. For Anthropic, Milaidy uses the Anthropic Messages API through your Gateway endpoint.

- Provider: `cloudflare-ai-gateway`
- Base URL: `https://gateway.ai.cloudflare.com/v1/<account_id>/<gateway_id>/anthropic`
- Default model: `cloudflare-ai-gateway/claude-sonnet-4-5`
- API key: `CLOUDFLARE_AI_GATEWAY_API_KEY` (your provider API key for requests through the Gateway)

For Anthropic models, use your Anthropic API key.
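The base URL simply embeds your account and gateway IDs; a tiny helper makes the pattern explicit (illustrative, not part of Milaidy):

```python
def gateway_base_url(account_id: str, gateway_id: str, provider: str = "anthropic") -> str:
    """Build a Cloudflare AI Gateway base URL for a given upstream provider."""
    return f"https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/{provider}"

print(gateway_base_url("abc123", "my-gateway"))
# https://gateway.ai.cloudflare.com/v1/abc123/my-gateway/anthropic
```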

## Quick start

1. Set the provider API key and Gateway details:

   ```bash
   milaidy onboard --auth-choice cloudflare-ai-gateway-api-key
   ```

2. Set a default model:

   ```json5
   {
     agents: {
       defaults: {
         model: { primary: "cloudflare-ai-gateway/claude-sonnet-4-5" },
       },
     },
   }
   ```

## Non-interactive example

```bash
milaidy onboard --non-interactive \
  --mode local \
  --auth-choice cloudflare-ai-gateway-api-key \
  --cloudflare-ai-gateway-account-id "your-account-id" \
  --cloudflare-ai-gateway-gateway-id "your-gateway-id" \
  --cloudflare-ai-gateway-api-key "$CLOUDFLARE_AI_GATEWAY_API_KEY"
```

## Authenticated gateways

If you enabled Gateway authentication in Cloudflare, add the `cf-aig-authorization` header (this is in addition to your provider API key).

```json5
{
  models: {
    providers: {
      "cloudflare-ai-gateway": {
        headers: {
          "cf-aig-authorization": "Bearer <cloudflare-ai-gateway-token>",
        },
      },
    },
  },
}
```

## Environment note

If the Milaidy gateway runs as a daemon (launchd/systemd), make sure `CLOUDFLARE_AI_GATEWAY_API_KEY` is available to that process (for example, in `~/.milaidy/.env` or via `env.shellEnv`).
@@ -0,0 +1,93 @@
---
summary: "Deepgram transcription for inbound voice notes"
read_when:
- You want Deepgram speech-to-text for audio attachments
- You need a quick Deepgram config example
title: "Deepgram"
---

# Deepgram (Audio Transcription)

Deepgram is a speech-to-text API. In Milaidy it is used for **inbound audio/voice note transcription** via `tools.media.audio`.

When enabled, Milaidy uploads the audio file to Deepgram and injects the transcript into the reply pipeline (`{{Transcript}}` + `[Audio]` block). This is **not streaming**; it uses the pre-recorded transcription endpoint.

Website: https://deepgram.com
Docs: https://developers.deepgram.com

## Quick start

1. Set your API key:

   ```
   DEEPGRAM_API_KEY=dg_...
   ```

2. Enable the provider:

   ```json5
   {
     tools: {
       media: {
         audio: {
           enabled: true,
           models: [{ provider: "deepgram", model: "nova-3" }],
         },
       },
     },
   }
   ```

## Options

- `model`: Deepgram model id (default: `nova-3`)
- `language`: language hint (optional)
- `tools.media.audio.providerOptions.deepgram.detect_language`: enable language detection (optional)
- `tools.media.audio.providerOptions.deepgram.punctuate`: enable punctuation (optional)
- `tools.media.audio.providerOptions.deepgram.smart_format`: enable smart formatting (optional)
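These flags correspond to query parameters on Deepgram's pre-recorded transcription endpoint. A sketch of how such a request URL is assembled (parameter names come from Deepgram's API; how Milaidy wires them internally is an assumption here):

```python
from urllib.parse import urlencode

def deepgram_listen_url(model: str = "nova-3", **options) -> str:
    """Build a Deepgram pre-recorded transcription URL with query options.

    Options such as detect_language, punctuate, and smart_format map
    directly to Deepgram query parameters. Illustrative sketch only.
    """
    params = {"model": model, **{k: str(v).lower() for k, v in options.items()}}
    return "https://api.deepgram.com/v1/listen?" + urlencode(params)

print(deepgram_listen_url(punctuate=True, smart_format=True))
```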

Example with language:

```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "deepgram", model: "nova-3", language: "en" }],
      },
    },
  },
}
```

Example with Deepgram options:

```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        providerOptions: {
          deepgram: {
            detect_language: true,
            punctuate: true,
            smart_format: true,
          },
        },
        models: [{ provider: "deepgram", model: "nova-3" }],
      },
    },
  },
}
```

## Notes

- Authentication follows the standard provider auth order; `DEEPGRAM_API_KEY` is the simplest path.
- Override endpoints or headers with `tools.media.audio.baseUrl` and `tools.media.audio.headers` when using a proxy.
- Output follows the same audio rules as other providers (size caps, timeouts, transcript injection).
@@ -0,0 +1,33 @@
---
summary: "GLM model family overview + how to use it in Milaidy"
read_when:
- You want GLM models in Milaidy
- You need the model naming convention and setup
title: "GLM Models"
---

# GLM models

GLM is a **model family** (not a company) available through the Z.AI platform. In Milaidy, GLM models are accessed via the `zai` provider and model IDs like `zai/glm-4.7`.

## CLI setup

```bash
milaidy onboard --auth-choice zai-api-key
```

## Config snippet

```json5
{
  env: { ZAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "zai/glm-4.7" } } },
}
```

## Notes

- GLM versions and availability can change; check Z.AI's docs for the latest.
- Example model IDs include `glm-4.7` and `glm-4.6`.
- For provider details, see [/providers/zai](/providers/zai).
@@ -0,0 +1,63 @@
---
summary: "Model providers (LLMs) supported by Milaidy"
read_when:
- You want to choose a model provider
- You need a quick overview of supported LLM backends
title: "Model Providers"
---

# Model Providers

Milaidy can use many LLM providers. Pick a provider, authenticate, then set the default model as `provider/model`.
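Model refs take the `provider/model` form, split on the first slash. A sketch of the convention (Milaidy's actual parser may differ):

```python
def parse_model_ref(ref: str) -> tuple[str, str]:
    """Split a `provider/model` ref on the first slash.

    Splitting on the first slash matters for model IDs that themselves
    contain slashes (e.g. OpenRouter refs). Illustrative sketch only.
    """
    provider, _, model = ref.partition("/")
    if not model:
        raise ValueError(f"expected provider/model, got {ref!r}")
    return provider, model

print(parse_model_ref("anthropic/claude-opus-4-5"))
print(parse_model_ref("openrouter/meta-llama/llama-3.3-70b"))
```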

Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin)/etc.)? See [Channels](/channels).

## Highlight: Venice (Venice AI)

Venice is our recommended setup for privacy-first inference, with the option to fall back to Opus for hard tasks.

- Default: `venice/llama-3.3-70b`
- Best overall: `venice/claude-opus-45` (Opus remains the strongest)

See [Venice AI](/providers/venice).

## Quick start

1. Authenticate with the provider (usually via `milaidy onboard`).
2. Set the default model:

   ```json5
   {
     agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
   }
   ```

## Provider docs

- [OpenAI (API + Codex)](/providers/openai)
- [Anthropic (API + Claude Code CLI)](/providers/anthropic)
- [Qwen (OAuth)](/providers/qwen)
- [OpenRouter](/providers/openrouter)
- [Vercel AI Gateway](/providers/vercel-ai-gateway)
- [Cloudflare AI Gateway](/providers/cloudflare-ai-gateway)
- [Moonshot AI (Kimi + Kimi Coding)](/providers/moonshot)
- [OpenCode Zen](/providers/opencode)
- [Amazon Bedrock](/bedrock)
- [Z.AI](/providers/zai)
- [Xiaomi](/providers/xiaomi)
- [GLM models](/providers/glm)
- [MiniMax](/providers/minimax)
- [Venice (Venice AI, privacy-focused)](/providers/venice)
- [Ollama (local models)](/providers/ollama)

## Transcription providers

- [Deepgram (audio transcription)](/providers/deepgram)

## Community tools

- [Claude Max API Proxy](/providers/claude-max-api-proxy) - Use Claude Max/Pro subscription as an OpenAI-compatible API endpoint

For the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration, see [Model providers](/concepts/model-providers).
@@ -0,0 +1,208 @@
---
summary: "Use MiniMax M2.1 in Milaidy"
read_when:
- You want MiniMax models in Milaidy
- You need MiniMax setup guidance
title: "MiniMax"
---

# MiniMax

MiniMax is an AI company that builds the **M2/M2.1** model family. The current coding-focused release is **MiniMax M2.1** (December 23, 2025), built for real-world complex tasks.

Source: [MiniMax M2.1 release note](https://www.minimax.io/news/minimax-m21)

## Model overview (M2.1)

MiniMax highlights these improvements in M2.1:

- Stronger **multi-language coding** (Rust, Java, Go, C++, Kotlin, Objective-C, TS/JS).
- Better **web/app development** and aesthetic output quality (including native mobile).
- Improved **composite instruction** handling for office-style workflows, building on interleaved thinking and integrated constraint execution.
- **More concise responses** with lower token usage and faster iteration loops.
- Stronger **tool/agent framework** compatibility and context management (Claude Code, Droid/Factory AI, Cline, Kilo Code, Roo Code, BlackBox).
- Higher-quality **dialogue and technical writing** outputs.

## MiniMax M2.1 vs MiniMax M2.1 Lightning

- **Speed:** Lightning is the “fast” variant in MiniMax’s pricing docs.
- **Cost:** Pricing shows the same input cost, but Lightning has higher output cost.
- **Coding plan routing:** The Lightning back-end isn’t directly available on the MiniMax coding plan. MiniMax auto-routes most requests to Lightning, but falls back to the regular M2.1 back-end during traffic spikes.

## Choose a setup

### MiniMax OAuth (Coding Plan) — recommended

**Best for:** quick setup with MiniMax Coding Plan via OAuth, no API key required.

Enable the bundled OAuth plugin and authenticate:

```bash
milaidy plugins enable minimax-portal-auth  # skip if already loaded
milaidy gateway restart                     # restart if the gateway is already running
milaidy onboard --auth-choice minimax-portal
```

You will be prompted to select an endpoint:

- **Global** - International users (`api.minimax.io`)
- **CN** - Users in China (`api.minimaxi.com`)

See the MiniMax OAuth plugin documentation for details.

### MiniMax M2.1 (API key)

**Best for:** hosted MiniMax with Anthropic-compatible API.

Configure via CLI:

- Run `milaidy configure`
- Select **Model/auth**
- Choose **MiniMax M2.1**

```json5
{
  env: { MINIMAX_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "minimax/MiniMax-M2.1" } } },
  models: {
    mode: "merge",
    providers: {
      minimax: {
        baseUrl: "https://api.minimax.io/anthropic",
        apiKey: "${MINIMAX_API_KEY}",
        api: "anthropic-messages",
        models: [
          {
            id: "MiniMax-M2.1",
            name: "MiniMax M2.1",
            reasoning: false,
            input: ["text"],
            cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
            contextWindow: 200000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```
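The `cost` fields feed usage tracking. Assuming they are priced per million tokens (an assumption; check the conventions in your `models.json`), a request's cost can be estimated as:

```python
def estimate_cost(usage: dict, cost: dict) -> float:
    """Estimate request cost from token usage and a cost table.

    Assumes cost fields are per million tokens; verify the units used
    in your models.json before relying on this sketch.
    """
    per_m = lambda tokens, rate: tokens / 1_000_000 * rate
    return (
        per_m(usage.get("input", 0), cost["input"])
        + per_m(usage.get("output", 0), cost["output"])
        + per_m(usage.get("cacheRead", 0), cost["cacheRead"])
        + per_m(usage.get("cacheWrite", 0), cost["cacheWrite"])
    )

cost_table = {"input": 15, "output": 60, "cacheRead": 2, "cacheWrite": 10}
print(estimate_cost({"input": 10_000, "output": 2_000}, cost_table))
```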

### MiniMax M2.1 as fallback (Opus primary)

**Best for:** keep Opus 4.5 as primary, fail over to MiniMax M2.1.

```json5
{
  env: { MINIMAX_API_KEY: "sk-..." },
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-5": { alias: "opus" },
        "minimax/MiniMax-M2.1": { alias: "minimax" },
      },
      model: {
        primary: "anthropic/claude-opus-4-5",
        fallbacks: ["minimax/MiniMax-M2.1"],
      },
    },
  },
}
```
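The `primary`/`fallbacks` behavior can be pictured as trying each model in order until one succeeds. An illustrative sketch, not Milaidy's internal failover logic:

```python
def complete_with_failover(prompt, models, clients):
    """Try the primary model first, then each fallback in order.

    `clients` maps model ref -> callable. Illustrative only; a real
    implementation would distinguish retryable from fatal errors.
    """
    errors = []
    for ref in models:
        try:
            return ref, clients[ref](prompt)
        except Exception as exc:
            errors.append((ref, exc))
    raise RuntimeError(f"all models failed: {errors}")

def flaky(prompt):
    raise TimeoutError("upstream overloaded")

clients = {
    "anthropic/claude-opus-4-5": flaky,          # primary fails...
    "minimax/MiniMax-M2.1": lambda p: f"echo: {p}",  # ...fallback answers
}
used, reply = complete_with_failover(
    "hi", ["anthropic/claude-opus-4-5", "minimax/MiniMax-M2.1"], clients
)
print(used, reply)  # minimax/MiniMax-M2.1 echo: hi
```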

### Optional: Local via LM Studio (manual)

**Best for:** local inference with LM Studio.

We have seen strong results with MiniMax M2.1 on powerful hardware (e.g. a desktop/server) using LM Studio's local server.

Configure manually via `milaidy.json`:

```json5
{
  agents: {
    defaults: {
      model: { primary: "lmstudio/minimax-m2.1-gs32" },
      models: { "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" } },
    },
  },
  models: {
    mode: "merge",
    providers: {
      lmstudio: {
        baseUrl: "http://127.0.0.1:1234/v1",
        apiKey: "lmstudio",
        api: "openai-responses",
        models: [
          {
            id: "minimax-m2.1-gs32",
            name: "MiniMax M2.1 GS32",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 196608,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```
|
|
158
|
+
|
|
159
|
+
## Configure via `milaidy configure`

Use the interactive config wizard to set up MiniMax without editing JSON:

1. Run `milaidy configure`.
2. Select **Model/auth**.
3. Choose **MiniMax M2.1**.
4. Pick your default model when prompted.

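The wizard writes the same keys you would otherwise set by hand. A minimal sketch of the equivalent `milaidy.json` (assuming you only set the API key and default model; the wizard may touch additional keys):

```json5
{
  env: { MINIMAX_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "minimax/MiniMax-M2.1" },
    },
  },
}
```
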
## Configuration options

- `models.providers.minimax.baseUrl`: prefer `https://api.minimax.io/anthropic` (Anthropic-compatible); `https://api.minimax.io/v1` is available for OpenAI-compatible payloads.
- `models.providers.minimax.api`: prefer `anthropic-messages`; `openai-completions` is available for OpenAI-compatible payloads.
- `models.providers.minimax.apiKey`: your MiniMax API key (`MINIMAX_API_KEY`).
- `models.providers.minimax.models`: define `id`, `name`, `reasoning`, `contextWindow`, `maxTokens`, and `cost`.
- `agents.defaults.models`: alias the models you want in the allowlist.
- `models.mode`: keep `merge` if you want to add MiniMax alongside the built-ins.

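Put together, the options above map onto a provider block like this (a sketch; the `reasoning`, `contextWindow`, `maxTokens`, and `cost` values are placeholders, not published MiniMax specs):

```json5
{
  models: {
    mode: "merge",
    providers: {
      minimax: {
        baseUrl: "https://api.minimax.io/anthropic",
        api: "anthropic-messages",
        apiKey: "sk-...", // or rely on MINIMAX_API_KEY
        models: [
          {
            id: "MiniMax-M2.1",
            name: "MiniMax M2.1",
            reasoning: false,      // placeholder
            contextWindow: 196608, // placeholder
            maxTokens: 8192,       // placeholder
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 }, // update for exact cost tracking
          },
        ],
      },
    },
  },
  agents: {
    defaults: { models: { "minimax/MiniMax-M2.1": { alias: "minimax" } } },
  },
}
```
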
## Notes

- Model refs take the form `minimax/<model>`.
- Coding Plan usage API: `https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains` (requires a coding plan key).
- Update the pricing values in `models.json` if you need exact cost tracking.
- Referral link for the MiniMax Coding Plan (10% off): https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link
- See [/concepts/model-providers](/concepts/model-providers) for provider rules.
- Use `milaidy models list` and `milaidy models set minimax/MiniMax-M2.1` to switch models.

## Troubleshooting

### "Unknown model: minimax/MiniMax-M2.1"

This usually means the **MiniMax provider isn't configured** (no provider entry
and no MiniMax auth profile/env key found). A fix for this detection ships in
**2026.1.12** (unreleased at the time of writing). Fix it by:

- Upgrading to **2026.1.12** (or running from source `main`), then restarting the gateway.
- Running `milaidy configure` and selecting **MiniMax M2.1**, or
- Adding the `models.providers.minimax` block manually, or
- Setting `MINIMAX_API_KEY` (or a MiniMax auth profile) so the provider can be injected.

Model ids are **case-sensitive**; use exactly:

- `minimax/MiniMax-M2.1`
- `minimax/MiniMax-M2.1-lightning`

Then recheck with:

```bash
milaidy models list
```
---
summary: "Model providers (LLMs) supported by Milaidy"
read_when:
  - You want to choose a model provider
  - You want quick setup examples for LLM auth + model selection
title: "Model Provider Quickstart"
---

# Model Providers

Milaidy can use many LLM providers. Pick one, authenticate, then set the default
model as `provider/model`.

## Highlight: Venice (Venice AI)

Venice is our recommended setup for privacy-first inference, with the option of using Opus for the hardest tasks.

- Default: `venice/llama-3.3-70b`
- Best overall: `venice/claude-opus-45` (Opus remains the strongest)

See [Venice AI](/providers/venice).

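Setting the Venice default follows the same `milaidy.json` pattern used throughout these docs (a sketch; swap in `venice/claude-opus-45` if you want Opus by default):

```json5
{
  agents: { defaults: { model: { primary: "venice/llama-3.3-70b" } } },
}
```
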
## Quick start (two steps)

1. Authenticate with the provider (usually via `milaidy onboard`).
2. Set the default model:

```json5
{
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
}
```

## Supported providers (starter set)

- [OpenAI (API + Codex)](/providers/openai)
- [Anthropic (API + Claude Code CLI)](/providers/anthropic)
- [OpenRouter](/providers/openrouter)
- [Vercel AI Gateway](/providers/vercel-ai-gateway)
- [Cloudflare AI Gateway](/providers/cloudflare-ai-gateway)
- [Moonshot AI (Kimi + Kimi Coding)](/providers/moonshot)
- [Synthetic](/providers/synthetic)
- [OpenCode Zen](/providers/opencode)
- [Z.AI](/providers/zai)
- [GLM models](/providers/glm)
- [MiniMax](/providers/minimax)
- [Venice (Venice AI)](/providers/venice)
- [Amazon Bedrock](/bedrock)

For the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration,
see [Model providers](/concepts/model-providers).