aicodeswitch 2.0.5 → 2.0.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -2,6 +2,10 @@
 
 All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.
 
+ ### 2.0.7 (2026-02-02)
+
+ ### 2.0.6 (2026-02-01)
+
 ### 2.0.5 (2026-01-27)
 
 ### 2.0.4 (2026-01-27)
package/CLAUDE.md CHANGED
@@ -110,6 +110,7 @@ aicos version # Show current version information
 - Main app: `App.tsx` - Navigation and layout
 - Pages:
   - `VendorsPage.tsx` - Manage AI service vendors
+   - `SkillsPage.tsx` - Manage global Skills and discovery
   - `RoutesPage.tsx` - Configure routing rules
   - `LogsPage.tsx` - View request/access/error logs
   - `SettingsPage.tsx` - Application settings
@@ -155,6 +156,10 @@ aicos version # Show current version information
 - Writes/restores Codex config files (`~/.codex/config.toml`, `~/.codex/auth.json`)
 - Exports/imports encrypted configuration data
 
+ ### Skills Management
+ - Lists global Skills for Claude Code and Codex
+ - Provides discovery search (discover/return toggle button) and installs Skills into target tool directories
+
 ### Logging
 - Request logs: Detailed API call records with token usage
 - Access logs: System access records
@@ -166,7 +171,8 @@ aicos version # Show current version information
 2. **Data Directory**: Default: `~/.aicodeswitch/data/` (SQLite3 database)
 3. **Config File**: `~/.aicodeswitch/aicodeswitch.conf` (HOST, PORT, AUTH)
 4. **Dev Ports**: UI (4568), Server (4567) - configured in `vite.config.ts` and `server/main.ts`
- 5. **API Endpoints**: All routes are prefixed with `/api/` except proxy routes (`/claude-code/`, `/codex/`)
+ 5. **Skills Search**: `SKILLSMP_API_KEY` is required for Skills discovery via SkillsMP
+ 6. **API Endpoints**: All routes are prefixed with `/api/` except proxy routes (`/claude-code/`, `/codex/`)
 
 ## Build and Deployment
 
@@ -189,4 +195,5 @@ aicos version # Show current version information
 * Use yarn as the package manager; install dependencies with yarn.
 * Frontend libraries are installed under devDependencies; install them with yarn install --dev.
 * Use Chinese for all conversations. Write copy and comments in generated code in the language the code already uses.
- * On the server side, use __dirname directly to get the current directory; do not use process.cwd()
+ * On the server side, use __dirname directly to get the current directory; do not use process.cwd()
+ * Whenever there are new changes, update CLAUDE.md to keep the documentation current.
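The Skills Management section added above says discovered Skills are installed into the target tool's directories. As a rough illustration of that install step, a minimal TypeScript sketch follows; this is not the package's actual code, the `~/.claude/skills/<name>/SKILL.md` layout is an assumption based on Claude Code's personal-skills convention, and `installSkill` is a hypothetical name (Codex would use its own target directory):

```ts
import { promises as fs } from 'fs';
import * as os from 'os';
import * as path from 'path';

// Hypothetical helper: write a discovered Skill into Claude Code's
// global skills directory (~/.claude/skills/<name>/SKILL.md).
// Directory layout assumed from Claude Code's personal-skills convention.
async function installSkill(name: string, markdown: string): Promise<string> {
  const dir = path.join(os.homedir(), '.claude', 'skills', name);
  await fs.mkdir(dir, { recursive: true });
  const file = path.join(dir, 'SKILL.md');
  await fs.writeFile(file, markdown, 'utf8');
  return file; // absolute path to the installed skill file
}
```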
package/README.md CHANGED
@@ -7,7 +7,8 @@ AI Code Switch is a tool that helps you locally manage how AI coding tools connect to large models
 
 **And it helps you do this as simply as possible.**
 
- Video demo: [https://www.bilibili.com/video/BV1uEznBuEJd/](https://www.bilibili.com/video/BV1uEznBuEJd/?from=github)
+ - Video demo: [https://www.bilibili.com/video/BV1uEznBuEJd/](https://www.bilibili.com/video/BV1uEznBuEJd/?from=github)
+ - Get Claude Code connected to the domestic GLM model in 1 minute: [https://www.bilibili.com/video/BV1a865B8ErA/](https://www.bilibili.com/video/BV1a865B8ErA/)
 
 ## Installation
 
package/TECH.md ADDED
@@ -0,0 +1,193 @@
+ ## Implementing Skills Search
+
+ Implement the search feature using the skillsmp API.
+
+ Under `server`, create a `config.ts` to hold `SKILLSMP_API_KEY`.
+
+ `GET /api/v1/skills/ai-search`
+ AI semantic search powered by Cloudflare AI
+
+ | Parameter | Type | Required | Description |
+ | --------- | ------ | -------- | --------------- |
+ | q | string | ✓ | AI search query |
+
+ Code Examples
+
+ ```js
+ const response = await fetch(
+   'https://skillsmp.com/api/v1/skills/ai-search?q=How+to+create+a+web+scraper',
+   {
+     headers: {
+       'Authorization': 'Bearer sk_live_skillsmp_SNqsutoSiH51g-7-E0zVFVuugcnXfQbxCqfDI786TI0'
+     }
+   }
+ );
+
+ const data = await response.json();
+ console.log(data.data.data); // the results array lives at data.data.data, per the response example below
+ ```
+
+ Responses example:
+
+ ```json
+ {
+   "success": true,
+   "data": {
+     "object": "vector_store.search_results.page",
+     "search_query": "How to create a web scraper",
+     "data": [
+       {
+         "file_id": "b941f4a570315d69f10272c602473c47f04bdd75fb714311ed36ea5b7029b1e5",
+         "filename": "skills/mattb543-asheville-event-feed-claude-event-scraper-skill-md.md",
+         "score": 0.58138084,
+         "skill": {
+           "id": "mattb543-asheville-event-feed-claude-event-scraper-skill-md",
+           "name": "event-scraper",
+           "author": "MattB543",
+           "description": "Create new event scraping scripts for websites. Use when adding a new event source to the Asheville Event Feed. ALWAYS start by detecting the CMS/platform and trying known API endpoints first. Browser scraping is NOT supported (Vercel limitation). Handles API-based, HTML/JSON-LD, and hybrid patterns with comprehensive testing workflows.",
+           "githubUrl": "https://github.com/MattB543/asheville-event-feed/tree/main/claude/event-scraper",
+           "skillUrl": "https://skillsmp.com/skills/mattb543-asheville-event-feed-claude-event-scraper-skill-md",
+           "stars": 5,
+           "updatedAt": 1768598524
+         }
+       },
+       {
+         "file_id": "34e6c58d525232653f2adfdf8bc5abf9b50eb9128b5abe68a630d656ca9801c9",
+         "filename": "skills/dvorkinguy-claude-skills-agents-skills-apify-scraper-builder-skill-md.md",
+         "score": 0.5716883
+       },
+       {
+         "file_id": "7dcf28e65a11cb6fb0e205bc9b3ad1ce9007724f809632ccd81f42c3167bd688",
+         "filename": "skills/honeyspoon-nix-config-config-opencode-skill-web-scraper-skill-md.md",
+         "score": 0.5999056,
+         "skill": {
+           "id": "honeyspoon-nix-config-config-opencode-skill-web-scraper-skill-md",
+           "name": "web-scraper",
+           "author": "honeyspoon",
+           "description": "This skill should be used when users need to scrape content from websites, extract text from web pages, crawl and follow links, or download documentation from online sources. It features concurrent URL processing, automatic deduplication, content filtering, domain restrictions, and proper directory hierarchy based on URL structure. Use for documentation gathering, content extraction, web archival, or research data collection.",
+           "githubUrl": "https://github.com/honeyspoon/nix_config/tree/main/config/opencode/skill/web-scraper",
+           "skillUrl": "https://skillsmp.com/skills/honeyspoon-nix-config-config-opencode-skill-web-scraper-skill-md",
+           "stars": 0,
+           "updatedAt": 1769614394
+         }
+       },
+       {
+         "file_id": "1f2f1810d2d63396a79130bbb59849ad776fdcc6242e57120528d626d7da4adc",
+         "filename": "skills/igosuki-claude-skills-web-scraper-skill-md.md",
+         "score": 0.5999056,
+         "skill": {
+           "id": "igosuki-claude-skills-web-scraper-skill-md",
+           "name": "web-scraper",
+           "author": "Igosuki",
+           "description": "This skill should be used when users need to scrape content from websites, extract text from web pages, crawl and follow links, or download documentation from online sources. It features concurrent URL processing, automatic deduplication, content filtering, domain restrictions, and proper directory hierarchy based on URL structure. Use for documentation gathering, content extraction, web archival, or research data collection.",
+           "githubUrl": "https://github.com/Igosuki/claude-skills/tree/main/web-scraper",
+           "skillUrl": "https://skillsmp.com/skills/igosuki-claude-skills-web-scraper-skill-md",
+           "stars": 1,
+           "updatedAt": 1761052646
+         }
+       },
+       {
+         "file_id": "8bc3072e2ca3c703aae8ec4cb48be2a3c0826d96b4f22585788d34f1ddd9cbda",
+         "filename": "skills/breverdbidder-life-os-skills-website-to-vite-scraper-skill-md.md",
+         "score": 0.5806182,
+         "skill": {
+           "id": "breverdbidder-life-os-skills-website-to-vite-scraper-skill-md",
+           "name": "website-to-vite-scraper",
+           "author": "breverdbidder",
+           "description": "Multi-provider website scraper that converts any website (including CSR/SPA) to deployable static sites. Uses Playwright, Apify RAG Browser, Crawl4AI, and Firecrawl for comprehensive scraping. Triggers on requests to clone, reverse-engineer, or convert websites.",
+           "githubUrl": "https://github.com/breverdbidder/life-os/tree/main/skills/website-to-vite-scraper",
+           "skillUrl": "https://skillsmp.com/skills/breverdbidder-life-os-skills-website-to-vite-scraper-skill-md",
+           "stars": 2,
+           "updatedAt": 1769767370
+         }
+       },
+       {
+         "file_id": "8ed6e980ab048ac58d8c7d3bd3238fdc4cda3d25f17bc97b4dc4cfb2cf34cd35",
+         "filename": "skills/leobrival-serum-plugins-official-plugins-crawler-skills-website-crawler-skill-md.md",
+         "score": 0.5535727,
+         "skill": {
+           "id": "leobrival-serum-plugins-official-plugins-crawler-skills-website-crawler-skill-md",
+           "name": "website-crawler",
+           "author": "leobrival",
+           "description": "High-performance web crawler for discovering and mapping website structure. Use when users ask to crawl a website, map site structure, discover pages, find all URLs on a site, analyze link relationships, or generate site reports. Supports sitemap discovery, checkpoint/resume, rate limiting, and HTML report generation.",
+           "githubUrl": "https://github.com/leobrival/serum-plugins-official/tree/main/plugins/crawler/skills/website-crawler",
+           "skillUrl": "https://skillsmp.com/skills/leobrival-serum-plugins-official-plugins-crawler-skills-website-crawler-skill-md",
+           "stars": 1,
+           "updatedAt": 1769437425
+         }
+       },
+       {
+         "file_id": "efa5af3ef929da6b1db4f194da6d639600db620612b508e425099947215f480a",
+         "filename": "skills/hokupod-sitepanda-assets-skill-md.md",
+         "score": 0.57184756,
+         "skill": {
+           "id": "hokupod-sitepanda-assets-skill-md",
+           "name": "sitepanda",
+           "author": "hokupod",
+           "description": "Scrape websites with a headless browser and extract main readable content as Markdown. Use this skill when the user asks to retrieve, analyze, or summarize content from a URL or website.",
+           "githubUrl": "https://github.com/hokupod/sitepanda/tree/main/assets",
+           "skillUrl": "https://skillsmp.com/skills/hokupod-sitepanda-assets-skill-md",
+           "stars": 10,
+           "updatedAt": 1768397847
+         }
+       },
+       {
+         "file_id": "e74a45c1d4c2646144b1019bb8a4e52aa4352c52a1b35566234a68bf057c666a",
+         "filename": "skills/vanman2024-ai-dev-marketplace-plugins-rag-pipeline-skills-web-scraping-tools-skill-md.md",
+         "score": 0.55511314
+       },
+       {
+         "file_id": "9b11ec7c719303b2999eaf4fa535a26ffcf718ea773f579e5e6b5b8d046cce12",
+         "filename": "skills/nathanvale-side-quest-marketplace-plugins-scraper-toolkit-skills-playwright-scraper-skill-md.md",
+         "score": 0.54990166,
+         "skill": {
+           "id": "nathanvale-side-quest-marketplace-plugins-scraper-toolkit-skills-playwright-scraper-skill-md",
+           "name": "playwright-scraper",
+           "author": "nathanvale",
+           "description": "Production-proven Playwright web scraping patterns with selector-first approach and robust error handling.\nUse when users need to build web scrapers, extract data from websites, automate browser interactions,\nor ask about Playwright selectors, text extraction (innerText vs textContent), regex patterns for HTML,\nfallback hierarchies, or scraping best practices.",
+           "githubUrl": "https://github.com/nathanvale/side-quest-marketplace/tree/main/plugins/scraper-toolkit/skills/playwright-scraper",
+           "skillUrl": "https://skillsmp.com/skills/nathanvale-side-quest-marketplace-plugins-scraper-toolkit-skills-playwright-scraper-skill-md",
+           "stars": 2,
+           "updatedAt": 1769733906
+         }
+       },
+       {
+         "file_id": "42348180a6ffcff196f8aa2b23797a0868a891174655e0c4df3245b4bfc530a0",
+         "filename": "skills/salberg87-authenticated-scrape-skill-md.md",
+         "score": 0.5437498,
+         "skill": {
+           "id": "salberg87-authenticated-scrape-skill-md",
+           "name": "authenticated-scrape",
+           "author": "Salberg87",
+           "description": "Scrape data from authenticated websites by capturing network requests with auth headers automatically. Use when the user wants to extract data from logged-in pages, private dashboards, or authenticated APIs.",
+           "githubUrl": "https://github.com/Salberg87/authenticated-scrape",
+           "skillUrl": "https://skillsmp.com/skills/salberg87-authenticated-scrape-skill-md",
+           "stars": 0,
+           "updatedAt": 1767684913
+         }
+       }
+     ],
+     "has_more": false,
+     "next_page": null
+   },
+   "meta": {
+     "requestId": "f841d8bc-3d77-4a00-9899-4df8b4e52c86",
+     "responseTimeMs": 3327
+   }
+ }
+ ```
+
+ Error Handling
+ The API uses standard HTTP status codes and returns error details in JSON format.
+
+ | Error Code | HTTP | Description |
+ | --------------- | ---- | -------------------------------- |
+ | MISSING_API_KEY | 401 | API key not provided |
+ | INVALID_API_KEY | 401 | Invalid API key |
+ | MISSING_QUERY | 400 | Missing required query parameter |
+ | INTERNAL_ERROR | 500 | Internal server error |
+
+ Error Response Example:
+
+ ```json
+ {
+   "success": false,
+   "error": {
+     "code": "INVALID_API_KEY",
+     "message": "The provided API key is invalid"
+   }
+ }
+ ```
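For anyone consuming this endpoint from the server, a minimal TypeScript sketch follows. The response types are inferred solely from the sample response in TECH.md above rather than from an official schema, and `searchSkills` is a hypothetical helper, not code shipped in this package:

```ts
// Response shape inferred from the sample response above;
// an assumption based on one example, not an official schema.
interface SkillSummary {
  id: string;
  name: string;
  author: string;
  description: string;
  githubUrl: string;
  skillUrl: string;
  stars: number;
  updatedAt: number; // unix seconds
}

interface AiSearchResult {
  file_id: string;
  filename: string;
  score: number;
  skill?: SkillSummary; // absent on some results in the sample
}

interface AiSearchResponse {
  success: boolean;
  data: {
    object: string;
    search_query: string;
    data: AiSearchResult[];
    has_more: boolean;
    next_page: string | null;
  };
  meta: { requestId: string; responseTimeMs: number };
}

interface AiSearchError {
  success: false;
  error: { code: string; message: string };
}

// Hypothetical helper: call ai-search and return the results array,
// surfacing the documented error codes (MISSING_API_KEY, INVALID_API_KEY,
// MISSING_QUERY, INTERNAL_ERROR) as thrown errors.
async function searchSkills(q: string, apiKey: string): Promise<AiSearchResult[]> {
  const url = new URL('https://skillsmp.com/api/v1/skills/ai-search');
  url.searchParams.set('q', q);
  const res = await fetch(url, { headers: { Authorization: `Bearer ${apiKey}` } });
  const body = (await res.json()) as AiSearchResponse | AiSearchError;
  if (!res.ok || !body.success) {
    const err = 'error' in body ? body.error : { code: 'INTERNAL_ERROR', message: 'Unknown error' };
    throw new Error(`${err.code}: ${err.message}`);
  }
  return body.data.data;
}
```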
package/bin/ui.js CHANGED
@@ -7,19 +7,24 @@ const { getServerInfo } = require('./utils/get-server');
 
 const openBrowser = (url) => {
   let command;
+   let args;
 
   if (os.platform() === 'darwin') {
     command = 'open';
+     args = [url];
   } else if (os.platform() === 'win32') {
-     command = 'start';
+     command = 'cmd';
+     args = ['/c', 'start', '', url]; // the empty string serves as the window title
   } else {
     // Linux and others
     command = 'xdg-open';
+     args = [url];
   }
 
-   const child = spawn(command, [url], {
+   const child = spawn(command, args, {
     detached: true,
-     stdio: 'ignore'
+     stdio: 'ignore',
+     shell: os.platform() === 'win32' // Windows requires a shell
   });
 
   child.unref();
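The Windows change is needed because `start` is a `cmd.exe` built-in rather than a standalone executable, so it cannot be spawned directly; running `cmd /c start "" <url>` works, and the empty-string argument is consumed as the window title so a quoted URL is not mistaken for one.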
@@ -0,0 +1,4 @@
+ "use strict";
+ Object.defineProperty(exports, "__esModule", { value: true });
+ exports.SKILLSMP_API_KEY = void 0;
+ exports.SKILLSMP_API_KEY = process.env.SKILLSMP_API_KEY || 'sk_live_skillsmp_SNqsutoSiH51g-7-E0zVFVuugcnXfQbxCqfDI786TI0';
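Tying the pieces together, a sketch of how the server could expose Skills discovery behind its own `/api/` prefix while reusing the `SKILLSMP_API_KEY` export above. The handler shape, the route name `/api/skills/search`, and the `./config` import path are assumptions, not the package's actual route code; CLAUDE.md only states that routes are prefixed with `/api/`:

```ts
import { SKILLSMP_API_KEY } from './config'; // the compiled module shown above; path assumed

// Hypothetical handler body for a route like /api/skills/search:
// validates the query, proxies it to SkillsMP, and passes the JSON through.
async function handleSkillsSearch(query: string | null): Promise<{ status: number; body: unknown }> {
  if (!query) {
    // Mirror SkillsMP's MISSING_QUERY error shape from TECH.md
    return {
      status: 400,
      body: { success: false, error: { code: 'MISSING_QUERY', message: 'Missing required query parameter' } },
    };
  }
  const url = new URL('https://skillsmp.com/api/v1/skills/ai-search');
  url.searchParams.set('q', query);
  const res = await fetch(url, { headers: { Authorization: `Bearer ${SKILLSMP_API_KEY}` } });
  return { status: res.status, body: await res.json() };
}
```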