wechat-to-anything 0.6.4 → 0.6.6

package/README.en.md ADDED
@@ -0,0 +1,194 @@
+ <p align="center">
+ <img src="docs/banner.png" alt="wechat-to-anything" />
+ </p>
+
+ <h1 align="center">wechat-to-anything</h1>
+
+ <p align="center">
+ <a href="https://www.npmjs.com/package/wechat-to-anything"><img src="https://img.shields.io/npm/v/wechat-to-anything?style=flat-square&color=cb3837" alt="npm" /></a>
+ <a href="https://github.com/kellyvv/wechat-to-anything"><img src="https://img.shields.io/github/stars/kellyvv/wechat-to-anything?style=flat-square&color=yellow" alt="stars" /></a>
+ <a href="LICENSE"><img src="https://img.shields.io/github/license/kellyvv/wechat-to-anything?style=flat-square" alt="license" /></a>
+ <a href="https://github.com/kellyvv/wechat-to-anything"><img src="https://img.shields.io/badge/node-%3E%3D22-brightgreen?style=flat-square" alt="node" /></a>
+ </p>
+
+ <p align="center">
+ <a href="#quick-start">Quick Start</a> · <a href="#full-multimodal-matrix">Multimodal</a> · <a href="#media-protocol">Media Protocol</a> · <a href="#multi-agent-mode">Multi-Agent</a> · <a href="#proactive-send-api">Send API</a> · <a href="#bring-your-own-agent">Custom Agent</a>
+ </p>
+
+ <p align="center">
+ <a href="README.md">中文</a> | English
+ </p>
+
+ > ⭐ If this project helps you, please give it a Star!
+
+ **The first open-source project** to support full multimodal bidirectional communication between WeChat and AI Agents — text, images, voice, video, and files, both sending and receiving.
+
+ <p align="center">
+ <img src="docs/wechat-image-send.png" width="250" alt="Agent sends files, images, voice" />
+ <img src="docs/wechat-image-receive.png" width="250" alt="Agent sends images, video, voice" />
+ <a href="https://github.com/kellyvv/wechat-to-anything/raw/main/docs/wechat-voice-demo.mp4">
+ <img src="docs/wechat-voice-demo.gif" width="250" alt="Voice demo (click for audio)" />
+ </a>
+ </p>
+
+ ## Features
+
+ - 🔌 **Zero-config setup** — One `npx` command, no cloning, no configuration
+ - 🧠 **Agent-agnostic** — Works with any OpenAI-compatible API (Codex / Gemini / Claude / custom)
+ - 📡 **Full multimodal** — Text, images, voice, video, files — bidirectional
+ - 🤖 **Multi-Agent** — Connect multiple Agents simultaneously, route with an `@` prefix
+ - ⌨️ **Typing indicator** — Shows "typing..." while the Agent is thinking
+ - 📤 **Proactive Send API** — The Agent can push multiple messages to simulate a human typing rhythm
+
+ ### Full Multimodal Matrix
+
+ | Modality | WeChat → Agent | Agent → WeChat |
+ |------|:---:|:---:|
+ | 📝 Text | ✅ | ✅ |
+ | 📷 Image | ✅ Auto-detect | ✅ HD original |
+ | 🎤 Voice | ✅ Speech-to-text | ✅ Voice bubble |
+ | 🎬 Video | ✅ Auto-receive | ✅ With thumbnail |
+ | 📄 File | ✅ Content extraction | ✅ Downloadable |
+
+ ## Quick Start
+
+ ```bash
+ # Pick your favorite Agent:
+ npx wechat-to-anything --codex # OpenAI Codex
+ npx wechat-to-anything --gemini # Google Gemini
+ npx wechat-to-anything --claude # Claude Code
+ npx wechat-to-anything --openclaw # OpenClaw
+
+ # Or pass a URL directly:
+ npx wechat-to-anything http://your-agent:8000/v1
+ ```
+
+ > First run: a QR code pops up in the terminal → scan it with WeChat → done. Login is cached automatically.
+
+ ### Dependencies
+
+ ```bash
+ # 1. Node.js >= 22
+ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash
+ nvm install 22
+
+ # 2. Python 3 + pip
+ brew install python3 # macOS
+ apt install python3 python3-pip # Linux
+
+ # 3. ffmpeg
+ brew install ffmpeg # macOS
+ apt install ffmpeg # Linux
+
+ # 4. pilk
+ pip install pilk
+ ```
+
+ ## How It Works
+
+ ```
+ WeChat User ←→ Tencent ilinkai API ←→ wechat-to-anything ←→ Your Agent (HTTP)
+ ```
+
+ Directly calls Tencent's ilinkai API to send and receive WeChat messages. No middleware, no reverse engineering, no web client. Your Agent just needs an OpenAI-compatible HTTP endpoint.
+
+ ## Bring Your Own Agent
+
+ Any language — just expose `POST /v1/chat/completions`:
+
+ ```python
+ @app.post("/v1/chat/completions")
+ def chat(request):
+ message = request.json["messages"][-1]["content"]
+ reply = your_agent(message)
+ return {"choices": [{"message": {"role": "assistant", "content": reply}}]}
+ ```
+
+ Then: `npx wechat-to-anything http://your-agent:8000/v1`
+
+ ## Media Protocol
+
+ Include specific formats in Agent responses to automatically send media:
+
+ | Type | Agent Response Format | Notes |
+ |------|----------------------|-------|
+ | Image | `![desc](URL or path)` | URL, local path, or data URI |
+ | Voice | `[audio:path or URL]` | MP3/WAV/OGG, requires `ffmpeg` + `pilk` |
+ | Video | `[video:path or URL]` | Requires `ffmpeg` |
+ | File | `[file:path or URL]` | Any file type |
+
+ **Image receiving** (WeChat → Agent) follows the [OpenAI Vision API](https://platform.openai.com/docs/guides/vision):
+
+ ```json
+ {
+ "messages": [{
+ "role": "user",
+ "content": [
+ { "type": "text", "text": "What is this?" },
+ { "type": "image_url", "image_url": { "url": "data:image/jpeg;base64,..." } }
+ ]
+ }]
+ }
+ ```
+
+ > Examples: [image-test.mjs](examples/image-test.mjs) · [voice-test.mjs](examples/voice-test.mjs) · [video-test-local.mjs](examples/video-test-local.mjs) · [file-test.mjs](examples/file-test.mjs)
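To make both directions of the protocol concrete, here is a small agent-side sketch: one helper pulls the text out of a Vision-style `content` array, the other prefixes a reply with an `[audio:...]` marker so the bridge turns it into a voice bubble. The helper names are illustrative, not part of the package:

```python
def extract_user_text(content):
    # WeChat → Agent: `content` is either a plain string or a Vision-style
    # list of {"type": "text" | "image_url", ...} parts.
    if isinstance(content, str):
        return content
    return " ".join(p["text"] for p in content if p.get("type") == "text")

def voice_reply(text, audio_path=None):
    # Agent → WeChat: an [audio:...] marker makes the bridge send a voice
    # bubble; the remaining text is delivered as a normal message.
    return f"[audio:{audio_path}]\n{text}" if audio_path else text
```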
+
+ ## Multi-Agent Mode
+
+ Connect multiple Agents simultaneously and route with an `@` prefix. Supports the OpenAI format and [ACP](https://agentcommunicationprotocol.dev/):
+
+ ```bash
+ npx wechat-to-anything \
+ --agent codex=http://localhost:3001/v1 \
+ --agent gemini=http://localhost:3002/v1 \
+ --agent bee=acp://localhost:8000/chat \
+ --default codex
+ ```
+
+ | WeChat Message | Effect |
+ |---|---|
+ | `Hello` | Sent to the default Agent |
+ | `@codex write a sort` | Routes to Codex |
+ | `@gemini review code` | Routes to Gemini |
+ | `@list` | List all Agents |
+ | `@switch gemini` | Switch the default |
+
+ ## Proactive Send API
+
+ The bridge starts an HTTP API on `localhost:9099`. Agents can proactively push multiple messages (simulating a human typing rhythm):
+
+ ```bash
+ curl -X POST http://localhost:9099/api/send \
+ -H "Content-Type: application/json" \
+ -d '{"to": "user_id", "content": "Hmm..."}'
+ ```
+
+ - `to` — WeChat user ID (the bridge passes this via the `user` field when calling agents)
+ - `content` — Same formats as agent responses (plain text, `![](url)`, `[audio:path]`, etc.)
+ - Use `--port PORT` to customize the port
+
+ **Use case**: the Agent splits one reply into multiple segments with controlled timing:
+
+ ```python
+ import requests, time
+ def send(to, text):
+ requests.post("http://localhost:9099/api/send", json={"to": to, "content": text})
+
+ send(user_id, "Hmm...")
+ time.sleep(1.5)
+ send(user_id, "Let me think")
+ time.sleep(2)
+ # Final segment returned as the normal response
+ ```
+
+ ## Credentials
+
+ Login credentials are saved in `~/.wechat-to-anything/credentials.json`. Delete the file to log in again.
+
+ ## Star History
+
+ If this project helped you, please give it a ⭐ Star — it's the best support!
+
+ ## License
+
+ [MIT](LICENSE)
package/README.md CHANGED
@@ -1,79 +1,100 @@
- # wechat-to-anything
+ <p align="center">
+ <img src="docs/banner.png" alt="wechat-to-anything" />
+ </p>
 
- > 把微信变成任何 AI Agent 的前端。零依赖,一条命令。
- >
- > ⭐ 如果这个项目对你有帮助,请给个 Star!本项目仅用于技术学习和交流,开源不易。
+ <h1 align="center">wechat-to-anything</h1>
 
- 微信双向支持 Agent 多种模态消息发送和接收,支持文本、图片、语音、视频、文件。
+ <p align="center">
+ <a href="https://www.npmjs.com/package/wechat-to-anything"><img src="https://img.shields.io/npm/v/wechat-to-anything?style=flat-square&color=cb3837" alt="npm" /></a>
+ <a href="https://github.com/kellyvv/wechat-to-anything"><img src="https://img.shields.io/github/stars/kellyvv/wechat-to-anything?style=flat-square&color=yellow" alt="stars" /></a>
+ <a href="LICENSE"><img src="https://img.shields.io/github/license/kellyvv/wechat-to-anything?style=flat-square" alt="license" /></a>
+ <a href="https://github.com/kellyvv/wechat-to-anything"><img src="https://img.shields.io/badge/node-%3E%3D22-brightgreen?style=flat-square" alt="node" /></a>
+ </p>
 
  <p align="center">
- <img src="docs/wechat-image-send.png" width="250" alt="发送图片给 Agent 识别" />
- <img src="docs/wechat-image-receive.png" width="250" alt="Agent 发送图片到微信" />
- <a href="https://github.com/kellyvv/wechat-to-anything/raw/main/docs/wechat-voice-demo.mp4">
- <img src="docs/wechat-voice-demo.gif" width="250" alt="语音发送演示(点击播放有声版)" />
- </a>
+ <a href="#快速开始">快速开始</a> · <a href="#全模态支持矩阵">全模态</a> · <a href="#多媒体协议">多媒体协议</a> · <a href="#多-agent-模式">多 Agent</a> · <a href="#主动发送-api">主动发送</a> · <a href="#接入自己的-agent">自定义 Agent</a>
  </p>
 
- ## 原理
+ <p align="center">
+ 中文 | <a href="README.en.md">English</a>
+ </p>
 
- ```
- 微信 ←→ ilinkai API (腾讯) ←→ wechat-to-anything ←→ 你的 Agent (HTTP)
- ```
+ > ⭐ 如果这个项目对你有帮助,请给个 Star!
+
+ **全网首个**支持微信与任何 AI Agent 全模态双向通信的开源项目 —— 文本、图片、语音、视频、文件,发送和接收全覆盖。
 
- 直接调用腾讯 ilinkai 接口收发微信消息,无中间层。你的 Agent 只需暴露一个 OpenAI 兼容的 HTTP 接口(`POST /v1/chat/completions`),任何语言都行。
+ <p align="center">
+ <img src="docs/wechat-image-send.png" width="250" alt="Agent 发送文件、图片、语音" />
+ <img src="docs/wechat-image-receive.png" width="250" alt="Agent 发送图片、视频、语音" />
+ <a href="https://github.com/kellyvv/wechat-to-anything/raw/main/docs/wechat-voice-demo.mp4">
+ <img src="docs/wechat-voice-demo.gif" width="250" alt="语音演示(点击播放有声版)" />
+ </a>
+ </p>
 
- ### 多种模态支持
+ ## 特性
 
- | 方向 | 图片 | 语音 | 视频 | 文件 |
- |---|---|---|---|---|
- | **微信 → Agent** | ✅ 自动识别 | ✅ 语音转文字 | ✅ 自动接收 | ✅ 提取文本 |
- | **Agent → 微信** | 自动发图 | ✅ 语音消息 | ✅ 视频消息 | 文本回复 |
+ - 🔌 **零依赖接入** — `npx` 一条命令,无需 clone、无需配置
+ - 🧠 **Agent 无关** — 支持任何 OpenAI 兼容 API(Codex / Gemini / Claude / 自建)
+ - 📡 **全模态** — 文本、图片、语音、视频、文件,双向全覆盖
+ - 🤖 **多 Agent** — 同时接入多个 Agent,`@` 路由切换
+ - ⌨️ **打字指示器** — Agent 思考时显示"对方正在输入"
+ - 📤 **主动发送 API** — Agent 可推送多条消息,模拟真人打字节奏
 
- ## 前置条件
+ ### 全模态支持矩阵
 
- - Node.js >= 22(`nvm install 22`)
+ | 模态 | 微信 → Agent | Agent → 微信 |
+ |------|:---:|:---:|
+ | 📝 文本 | ✅ | ✅ |
+ | 📷 图片 | ✅ 自动识别 | ✅ HD 原图 |
+ | 🎤 语音 | ✅ 语音转文字 | ✅ 语音气泡 |
+ | 🎬 视频 | ✅ 自动接收 | ✅ 带缩略图 |
+ | 📄 文件 | ✅ 提取内容 | ✅ 可下载 |
 
  ## 快速开始
 
  ```bash
- # 一条命令,选你喜欢的 Agent:
+ # 选你喜欢的 Agent:
  npx wechat-to-anything --codex # OpenAI Codex
  npx wechat-to-anything --gemini # Google Gemini
  npx wechat-to-anything --claude # Claude Code
  npx wechat-to-anything --openclaw # OpenClaw
 
- # Agent 同时用:
- npx wechat-to-anything --codex --gemini
+ # 或直接传 URL:
+ npx wechat-to-anything http://your-agent:8000/v1
  ```
 
- > 需要先安装对应 CLI:`npm i -g @openai/codex` / `@google/gemini-cli` / `@anthropic-ai/claude-code` / `openclaw`
- >
- > 也支持直接传 URL:`npx wechat-to-anything http://your-agent:8000/v1`
+ > 首次使用:终端弹出二维码 → 微信扫码 → 完成。之后自动复用登录。
 
- ### 接入 OpenClaw
+ ### 环境依赖
 
  ```bash
- # 1. 安装并配置 OpenClaw
- npm i -g openclaw
- openclaw configure # 设置模型(如 Gemini / OpenAI)
+ # 1. Node.js >= 22
+ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash
+ nvm install 22
 
- # 2. 启动 Gateway
- openclaw gateway
+ # 2. Python 3 + pip
+ brew install python3 # macOS
+ apt install python3 python3-pip # Linux
 
- # 3. 启动桥(另一个终端)
- npx wechat-to-anything --openclaw
+ # 3. ffmpeg
+ brew install ffmpeg # macOS
+ apt install ffmpeg # Linux
+
+ # 4. pilk
+ pip install pilk
  ```
 
- > OpenClaw 的 Gateway 需要先配好模型 provider(运行 `openclaw configure`)。
- > 如果 OpenClaw 已有 `openclaw-weixin` 插件,需先禁用以避免消息冲突。
+ ## 原理
 
- ### 首次使用
+ ```
+ 微信用户 ←→ 腾讯 ilinkai API ←→ wechat-to-anything ←→ 你的 Agent (HTTP)
+ ```
 
- 终端会弹出二维码 → 微信扫码 → 完成。之后自动复用登录凭证。
+ 直接调用腾讯 ilinkai 接口收发微信消息,无中间层、无逆向、无网页版。Agent 只需暴露一个 OpenAI 兼容的 HTTP 接口。
 
- ## 接入你自己的 Agent
+ ## 接入自己的 Agent
 
- 暴露 `POST /v1/chat/completions` 即可,任何语言:
+ 任何语言,暴露 `POST /v1/chat/completions` 即可:
 
  ```python
  @app.post("/v1/chat/completions")
@@ -83,11 +104,38 @@ def chat(request):
  return {"choices": [{"message": {"role": "assistant", "content": reply}}]}
  ```
 
- 然后 `npx wechat-to-anything http://your-agent:8000/v1`。
+ 然后:`npx wechat-to-anything http://your-agent:8000/v1`
+
+ ## 多媒体协议
+
+ Agent 回复中包含特定格式即可自动发送多媒体:
+
+ | 类型 | Agent 回复格式 | 说明 |
+ |------|--------------|------|
+ | 图片 | `![描述](URL或路径)` | 支持 URL、本地路径、data URI |
+ | 语音 | `[audio:路径或URL]` | MP3/WAV/OGG,需 `ffmpeg` + `pilk` |
+ | 视频 | `[video:路径或URL]` | 需 `ffmpeg` |
+ | 文件 | `[file:路径或URL]` | 任意文件类型 |
+
+ **图片接收**(微信 → Agent)遵循 [OpenAI Vision API](https://platform.openai.com/docs/guides/vision):
+
+ ```json
+ {
+ "messages": [{
+ "role": "user",
+ "content": [
+ { "type": "text", "text": "这是什么?" },
+ { "type": "image_url", "image_url": { "url": "data:image/jpeg;base64,..." } }
+ ]
+ }]
+ }
+ ```
+
+ > 示例:[image-test.mjs](examples/image-test.mjs) · [voice-test.mjs](examples/voice-test.mjs) · [video-test-local.mjs](examples/video-test-local.mjs) · [file-test.mjs](examples/file-test.mjs)
 
  ## 多 Agent 模式
 
- 同时接入多个 Agent,通过 `@` 前缀路由消息。支持 OpenAI 兼容格式和 [ACP (Agent Communication Protocol)](https://agentcommunicationprotocol.dev/) 两种协议:
+ 同时接入多个 Agent,`@` 前缀路由。支持 OpenAI 格式和 [ACP 协议](https://agentcommunicationprotocol.dev/):
 
  ```bash
  npx wechat-to-anything \
@@ -97,66 +145,46 @@ npx wechat-to-anything \
  --default codex
  ```
 
- > `http://` → OpenAI 格式,`acp://` → ACP 协议,自动识别。
-
- 微信里使用:
-
- | 消息 | 效果 |
+ | 微信消息 | 效果 |
  |---|---|
  | `你好` | 发给默认 Agent |
  | `@codex 写个排序` | 路由到 Codex |
  | `@gemini 审查代码` | 路由到 Gemini |
- | `@bee 分析数据` | 路由到 ACP Agent |
- | `@list` | 查看已注册的 Agent |
- | `@切换 gemini` | 切换默认 Agent |
-
- 多 Agent 模式下回复自动带 `[agentName]` 前缀标识来源。每个用户独立维护默认 Agent。
+ | `@list` | 查看所有 Agent |
+ | `@切换 gemini` | 切换默认 |
 
- ## 多媒体格式
+ ## 主动发送 API
 
- **图片(微信 → Agent)**:遵循 [OpenAI Vision API](https://platform.openai.com/docs/guides/vision),`content` 为数组:
+ Bridge 启动时会在 `localhost:9099` 暴露 HTTP API,Agent 可主动推送多条消息(模拟真人打字节奏):
 
- ```json
- {
- "messages": [{
- "role": "user",
- "content": [
- { "type": "text", "text": "这是什么?" },
- { "type": "image_url", "image_url": { "url": "data:image/jpeg;base64,..." } }
- ]
- }]
- }
+ ```bash
+ curl -X POST http://localhost:9099/api/send \
+ -H "Content-Type: application/json" \
+ -d '{"to": "user_id", "content": "嗯……"}'
  ```
 
- **图片(Agent → 微信)**:回复中包含 `![desc](https://...)` 自动发图(HD 原图质量,自动生成缩略图)。支持 URL、本地路径、data URI。
-
- **语音(Agent → 微信)**:回复中包含 `[audio:path 或 URL]` 自动发语音气泡。支持 MP3、WAV、OGG 等。需要 `ffmpeg` 和 `pip install pilk`。
-
- ```python
- @app.post("/v1/chat/completions")
- def chat(request):
- audio_path = your_tts(message) # → /tmp/reply.mp3
- reply = f"[audio:{audio_path}]\n这是文字版内容"
- return {"choices": [{"message": {"role": "assistant", "content": reply}}]}
- ```
+ - `to` — 微信用户 ID(bridge 调用 Agent 时通过 `user` 字段传入)
+ - `content` — 支持和 Agent 回复相同的格式(纯文本、`![](url)`、`[audio:path]` 等)
+ - `--port PORT` 自定义端口
 
- **视频(Agent → 微信)**:回复中包含 `[video:path 或 URL]` 自动发视频消息(含缩略图)。需要 `ffmpeg`。
+ **用途**:Agent 对一条消息可分多段回复,控制发送间隔:
 
  ```python
- @app.post("/v1/chat/completions")
- def chat(request):
- reply = "[video:/tmp/demo.mp4]\n这是视频描述"
- return {"choices": [{"message": {"role": "assistant", "content": reply}}]}
+ import requests, time
+ def send(to, text):
+ requests.post("http://localhost:9099/api/send", json={"to": to, "content": text})
+
+ send(user_id, "嗯……")
+ time.sleep(1.5)
+ send(user_id, "让我想想")
+ time.sleep(2)
+ # 最后一段作为正常 response 返回
  ```
 
- > 示例:[examples/image-test.mjs](examples/image-test.mjs) · [examples/voice-test.mjs](examples/voice-test.mjs) · [examples/video-test-local.mjs](examples/video-test-local.mjs)
-
  ## 凭证
 
  登录凭证保存在 `~/.wechat-to-anything/credentials.json`,删除即可重新登录。
 
-
-
  ## Star History
 
  如果这个项目帮到了你,请给个 ⭐ Star,这是对我们最大的支持!
package/bin/cli.mjs CHANGED
@@ -32,6 +32,12 @@ ${pc.bold("参数:")}
    --openclaw ${pc.dim("内置 OpenClaw(需先 npm i -g openclaw)")}
    --agent ${pc.dim("name=url")} ${pc.dim("注册自定义 Agent")}
    --default ${pc.dim("name")} ${pc.dim("设置默认 Agent")}
+   --port ${pc.dim("PORT")} ${pc.dim("API 端口(默认 9099),暴露 POST /api/send")}
+
+   ${pc.bold("API:")}
+   POST http://localhost:PORT/api/send
+   ${pc.dim('{ "to": "user_id", "content": "消息内容" }')}
+   ${pc.dim("Agent 可主动推送多条消息,模拟真人节奏")}
 
    ${pc.dim("Docs: https://github.com/kellyvv/wechat-to-anything")}
  `);
@@ -40,6 +46,7 @@ ${pc.dim("Docs: https://github.com/kellyvv/wechat-to-anything")}
 
  // 解析参数
  let i = 0;
+ let port = 9099;
  while (i < args.length) {
    if (args[i] === "--codex") {
      agents.set("codex", "cli://codex");
@@ -71,6 +78,13 @@ while (i < args.length) {
    } else if (args[i] === "--default" && args[i + 1]) {
      defaultAgent = args[i + 1].toLowerCase();
      i += 2;
+   } else if (args[i] === "--port" && args[i + 1]) {
+     port = parseInt(args[i + 1], 10);
+     if (isNaN(port)) {
+       console.error(pc.red(`无效的端口号: ${args[i + 1]}`));
+       process.exit(1);
+     }
+     i += 2;
    } else if (!args[i].startsWith("--")) {
      if (!args[i].startsWith("acp://")) {
        try { new URL(args[i]); } catch {
@@ -114,7 +128,7 @@ if (agents.size === 1 && agents.has("default")) {
  }
  console.log();
 
- import("../cli/bridge.mjs").then((mod) => mod.start(agents, defaultAgent)).catch((err) => {
+ import("../cli/bridge.mjs").then((mod) => mod.start(agents, defaultAgent, { port })).catch((err) => {
    console.error(pc.red(err.message));
    process.exit(1);
  });
@@ -17,10 +17,10 @@ import { randomBytes } from "node:crypto";
  /**
   * 统一调用接口 — 根据 URL 自动选择适配器
   */
- export async function callAgentAuto(url, messages) {
-   if (url.startsWith("acp://")) return callACP(url, messages);
+ export async function callAgentAuto(url, messages, userId) {
+   if (url.startsWith("acp://")) return callACP(url, messages, userId);
    if (url.startsWith("cli://")) return callCLI(url, messages);
-   return callOpenAI(url, messages);
+   return callOpenAI(url, messages, userId);
  }
 
  /**
@@ -50,11 +50,11 @@ export async function checkAgent(url) {
 
  // ========== OpenAI 适配器 ==========
 
- async function callOpenAI(agentUrl, messages) {
+ async function callOpenAI(agentUrl, messages, userId) {
    const res = await fetch(`${agentUrl}/chat/completions`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
-     body: JSON.stringify({ messages }),
+     body: JSON.stringify({ messages, user: userId || undefined }),
      signal: AbortSignal.timeout(300_000),
    });
    if (!res.ok) {
@@ -74,7 +74,7 @@ function parseACPUrl(acpUrl) {
    return { httpUrl: `http://${withoutScheme.slice(0, slashIdx)}`, agentName: withoutScheme.slice(slashIdx + 1) };
  }
 
- async function callACP(acpUrl, messages) {
+ async function callACP(acpUrl, messages, userId) {
    const { httpUrl, agentName } = parseACPUrl(acpUrl);
    const input = messages.map((msg) => {
      if (typeof msg.content === "string") {
package/cli/bridge.mjs CHANGED
@@ -1,4 +1,5 @@
  import pc from "picocolors";
+ import { createServer } from "node:http";
  import {
    loadCredentials, loginWithQR, getUpdates,
    sendMessage, sendImageByUrl, sendVideoByUrl,
@@ -14,7 +15,7 @@ import { stripMarkdown } from "./markdown.mjs";
   * 启动桥:WeChat ilinkai API ←→ Agent HTTP
   * 支持文本 + 图片 + 语音 + 文件,双向
   */
- export async function start(agents, defaultAgent) {
+ export async function start(agents, defaultAgent, { port = 9099 } = {}) {
    // 兼容旧的单 URL 调用
    if (typeof agents === "string") {
      const url = agents;
@@ -103,6 +104,177 @@ export async function start(agents, defaultAgent) {
    const pendingImages = new Map(); // userId → { base64, timestamp }
    const IMAGE_BUFFER_TTL = 5 * 60_000; // 5 min 过期
 
+   // per-user contextToken 缓存(供 /api/send 使用)
+   const userContextTokens = new Map(); // userId → contextToken
+
+   /**
+    * 统一发送内容(纯文本 / 图片 / 语音 / 视频 / 文件)
+    * 复用 Agent 回复的多媒体协议格式
+    */
+   async function sendContent(to, content, tag = "") {
+     const ct = userContextTokens.get(to) || "";
+
+     // 检查回复是否包含 [audio:path/url]
+     const audioMatch = content.match(/\[audio:(.*?)\]/);
+     // 检查回复是否包含图片(markdown 格式,支持 URL 和 data URI)
+     const imageMatch = content.match(/!\[.*?\]\(((?:https?:\/\/|data:image\/|\/).+?)\)/);
+     // 检查回复是否包含 [video:path/url]
+     const videoMatch = content.match(/\[video:(.*?)\]/);
+     // 检查回复是否包含 [file:path/url]
+     const fileMatch = content.match(/\[file:(.*?)\]/);
+
+     if (audioMatch) {
+       const audioSrc = audioMatch[1];
+       const textPart = content.replace(/\[audio:.*?\]/g, "").trim();
+       console.log(pc.green(`→ [send] [语音] ${audioSrc.slice(0, 60)}`));
+       try {
+         const { execSync } = await import("node:child_process");
+         const { statSync, writeFileSync } = await import("node:fs");
+         const { uploadToCdn } = await import("./cdn.mjs");
+         const { buildHeaders, BASE_URL: baseUrl } = await import("./weixin.mjs");
+
+         let audioFile = audioSrc;
+         if (audioSrc.startsWith("http://") || audioSrc.startsWith("https://")) {
+           const resp = await fetch(audioSrc);
+           if (!resp.ok) throw new Error(`下载失败: ${resp.status}`);
+           writeFileSync("/tmp/wxta_audio_in.mp3", Buffer.from(await resp.arrayBuffer()));
+           audioFile = "/tmp/wxta_audio_in.mp3";
+         }
+
+         execSync(`ffmpeg -y -i "${audioFile}" -ar 16000 -ac 1 -f s16le /tmp/wxta_audio.pcm 2>/dev/null`);
+         execSync(`python3 -c "import pilk; pilk.encode('/tmp/wxta_audio.pcm', '/tmp/wxta_audio.silk', pcm_rate=16000, tencent=True)"`);
+         const pcmSize = statSync("/tmp/wxta_audio.pcm").size;
+         const durationMs = Math.round((pcmSize / 32000) * 1000);
+
+         const cdn = await uploadToCdn("/tmp/wxta_audio.silk", to, creds.token, 4);
+         const aesKeyB64 = Buffer.from(cdn.aeskey).toString("base64");
+         const crypto = await import("node:crypto");
+
+         const body = JSON.stringify({
+           msg: {
+             from_user_id: "", to_user_id: to,
+             client_id: crypto.randomUUID(),
+             message_type: 2, message_state: 2,
+             item_list: [{
+               type: 3,
+               voice_item: {
+                 media: { encrypt_query_param: cdn.downloadParam, aes_key: aesKeyB64 },
+                 encode_type: 4, bits_per_sample: 16, sample_rate: 16000, playtime: durationMs,
+               },
+             }],
+             context_token: ct,
+           },
+           base_info: {},
+         });
+         await fetch(`${baseUrl}/ilink/bot/sendmessage`, {
+           method: "POST", headers: buildHeaders(creds.token, body), body,
+         });
+         console.log(pc.green(`→ [语音] 已发送 (${durationMs}ms)`));
+         if (textPart) await sendMessage(creds.token, to, tag + stripMarkdown(textPart), ct);
+       } catch (err) {
+         console.error(pc.red(`  语音发送失败: ${err.message}`));
+         await sendMessage(creds.token, to, tag + content.replace(/\[audio:.*?\]/g, "").trim() || content, ct);
+       }
+     } else if (imageMatch) {
+       const imageUrl = imageMatch[1];
+       const textPart = content.replace(/!\[.*?\]\(((?:https?:\/\/|data:image\/|\/).+?)\)/g, "").trim();
+       console.log(pc.green(`→ [send] [图片] ${imageUrl.slice(0, 60)}`));
+       try {
+         if (textPart) await sendMessage(creds.token, to, tag + stripMarkdown(textPart), ct);
+         await sendImageByUrl(creds.token, to, ct, imageUrl);
+       } catch (err) {
+         console.error(pc.red(`  图片发送失败: ${err.message}`));
+         await sendMessage(creds.token, to, tag + content, ct);
+       }
+     } else if (videoMatch) {
+       const videoSrc = videoMatch[1];
+       const textPart = content.replace(/\[video:.*?\]/g, "").trim();
+       console.log(pc.green(`→ [send] [视频] ${videoSrc.slice(0, 60)}`));
+       try {
+         if (textPart) await sendMessage(creds.token, to, tag + stripMarkdown(textPart), ct);
+         await sendVideoByUrl(creds.token, to, ct, videoSrc);
+       } catch (err) {
+         console.error(pc.red(`  视频发送失败: ${err.message}`));
+         await sendMessage(creds.token, to, tag + stripMarkdown(content), ct);
+       }
+     } else if (fileMatch) {
+       const fileSrc = fileMatch[1];
+       const textPart = content.replace(/\[file:.*?\]/g, "").trim();
+       const fileName = fileSrc.split("/").pop() || "file";
+       console.log(pc.green(`→ [send] [文件] ${fileSrc.slice(0, 60)}`));
+       try {
+         const { writeFileSync, unlinkSync } = await import("node:fs");
+         const { tmpdir } = await import("node:os");
+         const { join } = await import("node:path");
+         const resp = await fetch(fileSrc);
+         if (!resp.ok) throw new Error(`file download failed: ${resp.status}`);
+         const buf = Buffer.from(await resp.arrayBuffer());
+         const tmpPath = join(tmpdir(), `wx-file-${Date.now()}-${fileName}`);
+         writeFileSync(tmpPath, buf);
+         try {
+           const uploaded = await uploadToCdn(tmpPath, to, creds.token, 3);
+           const { sendFileMessage } = await import("./weixin.mjs");
+           await sendFileMessage(creds.token, to, ct, uploaded, fileName);
+           if (textPart) await sendMessage(creds.token, to, tag + stripMarkdown(textPart), ct);
+         } finally {
+           try { unlinkSync(tmpPath); } catch {}
+         }
+       } catch (err) {
+         console.error(pc.red(`  文件发送失败: ${err.message}`));
+         await sendMessage(creds.token, to, tag + stripMarkdown(content), ct);
+       }
+     } else {
+       // 纯文本
+       console.log(pc.green(`→ [send] ${content.slice(0, 80)}${content.length > 80 ? "..." : ""}`));
+       await sendMessage(creds.token, to, tag + stripMarkdown(content), ct);
+     }
+   }
+
+   // ─── HTTP API Server (/api/send) ────────────────────────────────
+   const httpServer = createServer(async (req, res) => {
+     // CORS
+     res.setHeader("Access-Control-Allow-Origin", "*");
+     res.setHeader("Access-Control-Allow-Methods", "POST, OPTIONS");
+     res.setHeader("Access-Control-Allow-Headers", "Content-Type");
+     if (req.method === "OPTIONS") { res.writeHead(204); res.end(); return; }
+
+     if (req.method === "POST" && req.url === "/api/send") {
+       let body = "";
+       for await (const chunk of req) body += chunk;
+       try {
+         const { to, content } = JSON.parse(body);
+         if (!to || !content) {
+           res.writeHead(400, { "Content-Type": "application/json" });
+           res.end(JSON.stringify({ error: "missing 'to' or 'content'" }));
+           return;
+         }
+         console.log(pc.cyan(`← [API] → ${to.slice(0, 12)}...: ${content.slice(0, 60)}`));
+         await sendContent(to, content);
+         res.writeHead(200, { "Content-Type": "application/json" });
+         res.end("{}");
+       } catch (err) {
+         console.error(pc.red(`  /api/send 错误: ${err.message}`));
+         res.writeHead(500, { "Content-Type": "application/json" });
+         res.end(JSON.stringify({ error: err.message }));
+       }
+       return;
+     }
+
+     res.writeHead(404, { "Content-Type": "application/json" });
+     res.end(JSON.stringify({ error: "not found" }));
+   });
+
+   httpServer.on("error", (err) => {
+     if (err.code === "EADDRINUSE") {
+       console.warn(pc.yellow(`⚠️ 端口 ${port} 已被占用,API 未启动(bridge 继续运行)`));
+     } else {
+       console.error(pc.red(`API 服务器错误: ${err.message}`));
+     }
+   });
+   httpServer.listen(port, () => {
+     console.log(pc.green(`📡 API 已启动: http://localhost:${port}/api/send`));
+   });
+
    const loop = async () => {
      while (true) {
        try {
@@ -117,6 +289,9 @@ export async function start(agents, defaultAgent) {
    const contextToken = msg.context_token || "";
    if (!from) continue;
 
+   // 缓存 contextToken 供 /api/send 使用
+   if (contextToken) userContextTokens.set(from, contextToken);
+
    const text = extractText(msg);
    const media = extractMedia(msg);
 
@@ -325,138 +500,10 @@ export async function start(agents, defaultAgent) {
    // 调用 Agent
    try {
      if (typing) await typing.onReplyStart();
-     const reply = await callAgentAuto(agentUrl, agentMessages);
+     const reply = await callAgentAuto(agentUrl, agentMessages, from);
      if (typing) typing.onIdle();
      const agentTag = multiMode ? `[${targetAgent}] ` : "";
-
-     // 检查回复是否包含 [audio:path/url]
-     const audioMatch = reply.match(/\[audio:(.*?)\]/);
-     // 检查回复是否包含图片(markdown 格式,支持 URL 和 data URI)
-     const imageMatch = reply.match(/!\[.*?\]\(((?:https?:\/\/|data:image\/)[^\s)]+)\)/);
-     // 检查回复是否包含 [video:path/url]
-     const videoMatch = reply.match(/\[video:(.*?)\]/);
-     // 检查回复是否包含 [file:path/url]
-     const fileMatch = reply.match(/\[file:(.*?)\]/);
-
-     if (audioMatch) {
-       const audioSrc = audioMatch[1];
-       const textPart = reply.replace(/\[audio:.*?\]/g, "").trim();
-       console.log(pc.green(`→ [${targetAgent}] [语音] ${audioSrc.slice(0, 60)}`));
-       try {
-         const { execSync } = await import("node:child_process");
-         const { statSync, writeFileSync } = await import("node:fs");
-         const { uploadToCdn } = await import("./cdn.mjs");
-         const { buildHeaders, BASE_URL: baseUrl } = await import("./weixin.mjs");
-
-         // 下载或使用本地文件
-         let audioFile = audioSrc;
-         if (audioSrc.startsWith("http://") || audioSrc.startsWith("https://")) {
-           const resp = await fetch(audioSrc);
-           if (!resp.ok) throw new Error(`下载失败: ${resp.status}`);
-           writeFileSync("/tmp/wxta_audio_in.mp3", Buffer.from(await resp.arrayBuffer()));
-           audioFile = "/tmp/wxta_audio_in.mp3";
-         }
-
-         // 转码: audio → PCM(16kHz) → SILK
-         execSync(`ffmpeg -y -i "${audioFile}" -ar 16000 -ac 1 -f s16le /tmp/wxta_audio.pcm 2>/dev/null`);
-         execSync(`python3 -c "import pilk; pilk.encode('/tmp/wxta_audio.pcm', '/tmp/wxta_audio.silk', pcm_rate=16000, tencent=True)"`);
-         const pcmSize = statSync("/tmp/wxta_audio.pcm").size;
-         const durationMs = Math.round((pcmSize / 32000) * 1000);
-
-         // CDN 上传 + 发送语音(与"语音测试"相同格式)
-         const cdn = await uploadToCdn("/tmp/wxta_audio.silk", from, creds.token, 4);
-         const aesKeyB64 = Buffer.from(cdn.aeskey).toString("base64");
-         const crypto = await import("node:crypto");
-
-         const body = JSON.stringify({
-           msg: {
-             from_user_id: "", to_user_id: from,
-             client_id: crypto.randomUUID(),
-             message_type: 2, message_state: 2,
-             item_list: [{
-               type: 3,
-               voice_item: {
-                 media: {
-                   encrypt_query_param: cdn.downloadParam,
-                   aes_key: aesKeyB64,
-                 },
-                 encode_type: 4,
-                 bits_per_sample: 16,
-                 sample_rate: 16000,
-                 playtime: durationMs,
-               },
-             }],
-             context_token: contextToken,
-           },
-           base_info: {},
-         });
-         await fetch(`${baseUrl}/ilink/bot/sendmessage`, {
-           method: "POST",
-           headers: buildHeaders(creds.token, body),
-           body,
-         });
-         console.log(pc.green(`→ [语音] 已发送 (${durationMs}ms)`));
-         if (textPart) await sendMessage(creds.token, from, agentTag + stripMarkdown(textPart), contextToken);
-       } catch (err) {
-         console.error(pc.red(`  语音发送失败: ${err.message}`));
-         await sendMessage(creds.token, from, agentTag + reply.replace(/\[audio:.*?\]/g, "").trim() || reply, contextToken);
-       }
-     } else if (imageMatch) {
-       // Agent 回复了图片 URL → 直接发到微信
-       const imageUrl = imageMatch[1];
-       const textPart = reply.replace(/!\[.*?\]\(https?:\/\/[^\s)]+\)/g, "").trim();
-       console.log(pc.green(`→ [${targetAgent}] [图片] ${imageUrl.slice(0, 60)}`));
-       try {
-         if (textPart) await sendMessage(creds.token, from, agentTag + stripMarkdown(textPart), contextToken);
-         await sendImageByUrl(creds.token, from, contextToken, imageUrl);
-       } catch (err) {
-         console.error(pc.red(`  图片发送失败: ${err.message}`));
-         await sendMessage(creds.token, from, agentTag + reply, contextToken);
-       }
-     } else if (videoMatch) {
-       // Agent 回复了视频 → CDN 上传发到微信
-       const videoSrc = videoMatch[1];
-       const textPart = reply.replace(/\[video:.*?\]/g, "").trim();
-       console.log(pc.green(`→ [${targetAgent}] [视频] ${videoSrc.slice(0, 60)}`));
-       try {
-         if (textPart) await sendMessage(creds.token, from, agentTag + stripMarkdown(textPart), contextToken);
- if (textPart) await sendMessage(creds.token, from, agentTag + stripMarkdown(textPart), contextToken);
423
- await sendVideoByUrl(creds.token, from, contextToken, videoSrc);
424
- } catch (err) {
425
- console.error(pc.red(` 视频发送失败: ${err.message}`));
426
- await sendMessage(creds.token, from, agentTag + stripMarkdown(reply), contextToken);
427
- }
428
- } else if (fileMatch) {
429
- // Agent 回复了文件 → CDN 上传发到微信
430
- const fileSrc = fileMatch[1];
431
- const textPart = reply.replace(/\[file:.*?\]/g, "").trim();
432
- const fileName = fileSrc.split("/").pop() || "file";
433
- console.log(pc.green(`→ [${targetAgent}] [文件] ${fileSrc.slice(0, 60)}`));
434
- try {
435
- const { writeFileSync, unlinkSync } = await import("node:fs");
436
- const { tmpdir } = await import("node:os");
437
- const { join } = await import("node:path");
438
- const resp = await fetch(fileSrc);
439
- if (!resp.ok) throw new Error(`file download failed: ${resp.status}`);
440
- const buf = Buffer.from(await resp.arrayBuffer());
441
- const tmpPath = join(tmpdir(), `wx-file-${Date.now()}-${fileName}`);
442
- writeFileSync(tmpPath, buf);
443
- try {
444
- const uploaded = await uploadToCdn(tmpPath, from, creds.token, 3);
445
- const { sendFileMessage } = await import("./weixin.mjs");
446
- await sendFileMessage(creds.token, from, contextToken, uploaded, fileName);
447
- if (textPart) await sendMessage(creds.token, from, agentTag + stripMarkdown(textPart), contextToken);
448
- } finally {
449
- try { unlinkSync(tmpPath); } catch {}
450
- }
451
- } catch (err) {
452
- console.error(pc.red(` 文件发送失败: ${err.message}`));
453
- await sendMessage(creds.token, from, agentTag + stripMarkdown(reply), contextToken);
454
- }
455
- } else {
456
- // 纯文本回复
457
- console.log(pc.green(`→ [${targetAgent}] ${reply.slice(0, 80)}${reply.length > 80 ? "..." : ""}`));
458
- await sendMessage(creds.token, from, agentTag + stripMarkdown(reply), contextToken);
459
- }
506
+ await sendContent(from, reply, agentTag);
460
507
  } catch (err) {
461
508
  if (typing) typing.onCleanup();
462
509
  console.error(pc.red(` ${targetAgent} 错误: ${err.message}`));
@@ -471,6 +518,7 @@ export async function start(agents, defaultAgent) {
471
518
  };
472
519
 
473
520
  process.on("SIGINT", () => {
521
+ httpServer.close();
474
522
  console.log(pc.dim("\n桥已停止"));
475
523
  process.exit(0);
476
524
  });
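All four removed branches open the same way: find at most one media marker in the agent reply, split it from the surrounding text, then dispatch by type. Independent of the new `sendContent` (whose body is not part of this diff), that detection step can be sketched as a pure helper that reuses the exact regexes from the removed code; the helper name `parseReplyMedia` is hypothetical, not part of the package:

```javascript
// Sketch only: mirrors the marker regexes from the removed code above.
// Supported markers: [audio:...], [video:...], [file:...] and markdown
// images (http(s) URL or data URI), checked in the same priority order.
function parseReplyMedia(reply) {
  const patterns = {
    audio: /\[audio:(.*?)\]/,
    image: /!\[.*?\]\(((?:https?:\/\/|data:image\/)[^\s)]+)\)/,
    video: /\[video:(.*?)\]/,
    file: /\[file:(.*?)\]/,
  };
  for (const [type, re] of Object.entries(patterns)) {
    const m = reply.match(re);
    if (m) {
      // Strip every occurrence of this marker; what remains is the text part.
      const text = reply.replace(new RegExp(re.source, "g"), "").trim();
      return { type, src: m[1], text };
    }
  }
  // No marker found → treat the whole reply as plain text.
  return { type: "text", src: null, text: reply };
}
```

Factoring the detection out like this is what lets the per-type send logic move behind a single `sendContent(from, reply, agentTag)` call.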
Binary file
Binary file
Binary file
@@ -0,0 +1,39 @@
+/**
+ * File-send test script
+ *
+ * Usage: node examples/file-test.mjs [file-path]
+ *
+ * Uploads a file via CDN and sends it to your own WeChat account (for testing).
+ * Requires a prior QR-code login to obtain credentials.
+ */
+import { readFileSync } from "fs";
+import { homedir, tmpdir } from "os";
+import { join, basename } from "path";
+
+const creds = JSON.parse(readFileSync(homedir() + "/.wechat-to-anything/credentials.json", "utf-8"));
+
+// Test file: taken from the command line, or generated automatically
+let filePath = process.argv[2];
+let fileName;
+if (filePath) {
+  fileName = basename(filePath);
+} else {
+  const { writeFileSync } = await import("fs");
+  filePath = join(tmpdir(), "wxta-test-file.txt");
+  writeFileSync(filePath, `Hello from wechat-to-anything!\n测试文件 ${new Date().toISOString()}\n`);
+  fileName = "wxta-test-file.txt";
+  console.log("未指定文件,已生成测试文件:", filePath);
+}
+
+const { uploadToCdn } = await import("../cli/cdn.mjs");
+const { getUpdates, sendFileMessage } = await import("../cli/weixin.mjs");
+
+const msgs = await getUpdates(creds.token);
+const ct = msgs?.context_token || "";
+
+console.log("上传文件:", fileName);
+const uploaded = await uploadToCdn(filePath, creds.userId, creds.token, 3);
+console.log("✅ CDN 上传完成 | 大小:", uploaded.fileSize, "bytes | md5:", uploaded.rawMd5);
+
+await sendFileMessage(creds.token, creds.userId, ct, uploaded, fileName);
+console.log("✅ 发送成功!请检查微信");
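The voice branch removed in the bridge diff above computes `playtime` as `Math.round((pcmSize / 32000) * 1000)`. The constant 32000 is the byte rate of the intermediate PCM stream produced by ffmpeg: 16 000 samples/s × 2 bytes/sample (s16le) × 1 channel. A generalized sketch of that arithmetic (the function name is hypothetical, for illustration only):

```javascript
// Sketch: duration in milliseconds of a raw PCM buffer.
// Defaults match the bridge's ffmpeg output: 16 kHz, 16-bit (2 bytes), mono,
// which gives 16000 * 2 * 1 = 32000 bytes per second of audio.
function pcmDurationMs(pcmByteLength, sampleRate = 16000, bytesPerSample = 2, channels = 1) {
  const bytesPerSecond = sampleRate * bytesPerSample * channels;
  return Math.round((pcmByteLength / bytesPerSecond) * 1000);
}
```

With the defaults, a 32 000-byte PCM file is exactly one second of audio, matching the hard-coded `pcmSize / 32000` in the bridge.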
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "wechat-to-anything",
-  "version": "0.6.4",
+  "version": "0.6.6",
   "description": "一条命令,把微信变成任何 AI Agent 的入口",
   "type": "module",
   "bin": {