harness-auto-docs 0.3.2 → 0.3.3

package/AGENTS.md CHANGED
@@ -63,7 +63,15 @@ doc targets, calls an LLM (Claude or GPT), writes Markdown files, and opens a Gi
  | Variable | Description |
  |----------|-------------|
  | `AI_MODEL` | Model name: `claude-sonnet-4-6`, `gpt-4o`, etc. |
- | `AI_API_KEY` | Anthropic or OpenAI API key |
+ | `ANTHROPIC_API_KEY` | Anthropic API key (required for `claude-*` models) |
+ | `OPENAI_API_KEY` | OpenAI API key (required for `gpt-*` models) |
+ | `MINIMAX_API_KEY` | MiniMax API key (required for `MiniMax-*` models) |
+ | `QWEN_API_KEY` | Alibaba Qwen API key (required for `qwen-*` models) |
+ | `ZHIPU_API_KEY` | Zhipu AI key (required for `glm-*` models) |
+ | `DEEPSEEK_API_KEY` | DeepSeek API key (required for `deepseek-*` models) |
+ | `DOUBAO_API_KEY` | Doubao API key (required for `doubao-*` models) |
+ | `KIMI_API_KEY` | Moonshot AI key (required for `moonshot-*` models) |
+ | `GROK_API_KEY` | xAI Grok key (required for `grok-*` models) |
  | `GITHUB_TOKEN` | GitHub PAT or Actions token |
 
  ## How to extend
package/README.md CHANGED
@@ -9,7 +9,7 @@ Auto-generate [Harness Engineering](https://openai.com/research/harness-engineer
 
  Add `.github/workflows/harness-docs.yml` to your project (see `examples/github-workflow.yml`), then set:
 
- - `AI_API_KEY` — your Anthropic, OpenAI, or MiniMax API key (repository secret)
+ - `ANTHROPIC_API_KEY` / `OPENAI_API_KEY` / `MINIMAX_API_KEY` / … — API key for the chosen provider (repository secret; see Supported models below)
  - `AI_MODEL` — model name, e.g. `claude-sonnet-4-6`, `gpt-4o`, or `MiniMax-Text-01`
  - `GITHUB_TOKEN` — provided automatically by GitHub Actions
 
@@ -37,18 +37,24 @@ When you push a tag (`git tag v1.2.0 && git push --tags`), a PR is automatically
 
  ## Local run
 
  ```bash
- AI_MODEL=claude-sonnet-4-6 AI_API_KEY=sk-ant-... GITHUB_TOKEN=ghp_... npx harness-auto-docs
+ AI_MODEL=claude-sonnet-4-6 ANTHROPIC_API_KEY=sk-ant-... GITHUB_TOKEN=ghp_... npx harness-auto-docs
  ```
 
  ## Supported models
 
  Model routing is determined by the prefix of `AI_MODEL`:
 
- | Prefix | Provider | Example models |
- |--------|----------|----------------|
- | `claude-*` | Anthropic | `claude-sonnet-4-6`, `claude-opus-4-6`, `claude-haiku-4-5-20251001` |
- | `gpt-*` | OpenAI | `gpt-4o`, `gpt-4o-mini`, `o3` |
- | `MiniMax-*` | MiniMax (Anthropic-compatible) | `MiniMax-Text-01` |
+ | Prefix | Provider | API key env var | Example models |
+ |--------|----------|-----------------|----------------|
+ | `claude-*` | Anthropic | `ANTHROPIC_API_KEY` | `claude-sonnet-4-6`, `claude-opus-4-6` |
+ | `gpt-*` | OpenAI | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o3` |
+ | `MiniMax-*` | MiniMax | `MINIMAX_API_KEY` | `MiniMax-Text-01` |
+ | `qwen-*` | Qwen (Alibaba Cloud) | `QWEN_API_KEY` | `qwen-turbo`, `qwen-plus`, `qwen-max` |
+ | `glm-*` | Zhipu AI | `ZHIPU_API_KEY` | `glm-4`, `glm-4-flash` |
+ | `deepseek-*` | DeepSeek | `DEEPSEEK_API_KEY` | `deepseek-chat`, `deepseek-coder` |
+ | `doubao-*` | Doubao (ByteDance) | `DOUBAO_API_KEY` | `doubao-pro-4k` |
+ | `moonshot-*` | Kimi (Moonshot) | `KIMI_API_KEY` | `moonshot-v1-8k`, `moonshot-v1-32k` |
+ | `grok-*` | Grok (xAI) | `GROK_API_KEY` | `grok-2`, `grok-beta` |
 
  ## Future
 
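The README's prefix-to-key mapping can be sketched as a small lookup. This is a hedged illustration, not the package's API: the `PREFIX_TO_KEY` table and `requiredKeyVar` helper are ours; only the prefixes and env var names come from the table above.

```javascript
// Map an AI_MODEL value to the env var that must hold the provider's
// API key, per the routing table above.
const PREFIX_TO_KEY = [
  ['claude-', 'ANTHROPIC_API_KEY'],
  ['gpt-', 'OPENAI_API_KEY'],
  ['MiniMax-', 'MINIMAX_API_KEY'],
  ['qwen-', 'QWEN_API_KEY'],
  ['glm-', 'ZHIPU_API_KEY'],
  ['deepseek-', 'DEEPSEEK_API_KEY'],
  ['doubao-', 'DOUBAO_API_KEY'],
  ['moonshot-', 'KIMI_API_KEY'],
  ['grok-', 'GROK_API_KEY'],
];

function requiredKeyVar(model) {
  // First matching prefix wins; unknown prefixes are an error.
  const match = PREFIX_TO_KEY.find(([prefix]) => model.startsWith(prefix));
  if (!match) throw new Error(`Unknown model "${model}"`);
  return match[1];
}

console.log(requiredKeyVar('claude-sonnet-4-6')); // ANTHROPIC_API_KEY
```

A check like this could run before a CI job to fail fast when the wrong secret is configured for the chosen model.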
package/README_ja.md CHANGED
@@ -9,7 +9,7 @@
 
  Add `.github/workflows/harness-docs.yml` to your project (see `examples/github-workflow.yml`) and set the following:
 
- - `AI_API_KEY` — your Anthropic, OpenAI, or MiniMax API key (repository secret)
+ - `ANTHROPIC_API_KEY` / `OPENAI_API_KEY` / `MINIMAX_API_KEY` / … — API key for the chosen provider (see the Supported models table for details)
  - `AI_MODEL` — model name (e.g. `claude-sonnet-4-6`, `gpt-4o`, `MiniMax-Text-01`)
  - `GITHUB_TOKEN` — provided automatically by GitHub Actions
 
@@ -37,18 +37,24 @@
 
  ## Local run
 
  ```bash
- AI_MODEL=claude-sonnet-4-6 AI_API_KEY=sk-ant-... GITHUB_TOKEN=ghp_... npx harness-auto-docs
+ AI_MODEL=claude-sonnet-4-6 ANTHROPIC_API_KEY=sk-ant-... GITHUB_TOKEN=ghp_... npx harness-auto-docs
  ```
 
  ## Supported models
 
  Model routing is determined by the prefix of `AI_MODEL`:
 
- | Prefix | Provider | Supported models |
- |--------|----------|------------------|
- | `claude-*` | Anthropic | `claude-sonnet-4-6`, `claude-opus-4-6`, `claude-haiku-4-5-20251001` |
- | `gpt-*` | OpenAI | `gpt-4o`, `gpt-4o-mini`, `o3` |
- | `MiniMax-*` | MiniMax (Anthropic-compatible) | `MiniMax-Text-01` |
+ | Prefix | Provider | API key env var | Example models |
+ |--------|----------|-----------------|----------------|
+ | `claude-*` | Anthropic | `ANTHROPIC_API_KEY` | `claude-sonnet-4-6`, `claude-opus-4-6` |
+ | `gpt-*` | OpenAI | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o3` |
+ | `MiniMax-*` | MiniMax | `MINIMAX_API_KEY` | `MiniMax-Text-01` |
+ | `qwen-*` | Qwen (Alibaba Cloud) | `QWEN_API_KEY` | `qwen-turbo`, `qwen-plus`, `qwen-max` |
+ | `glm-*` | Zhipu AI | `ZHIPU_API_KEY` | `glm-4`, `glm-4-flash` |
+ | `deepseek-*` | DeepSeek | `DEEPSEEK_API_KEY` | `deepseek-chat`, `deepseek-coder` |
+ | `doubao-*` | Doubao (ByteDance) | `DOUBAO_API_KEY` | `doubao-pro-4k` |
+ | `moonshot-*` | Kimi (Moonshot AI) | `KIMI_API_KEY` | `moonshot-v1-8k`, `moonshot-v1-32k` |
+ | `grok-*` | Grok (xAI) | `GROK_API_KEY` | `grok-2`, `grok-beta` |
 
  ## Future plans
 
package/README_zh.md CHANGED
@@ -9,7 +9,7 @@
 
  Add `.github/workflows/harness-docs.yml` to your project (see `examples/github-workflow.yml`), then set:
 
- - `AI_API_KEY` — your Anthropic, OpenAI, or MiniMax API key (repository secret)
+ - `ANTHROPIC_API_KEY` / `OPENAI_API_KEY` / `MINIMAX_API_KEY` / … — API key for the chosen provider (see the Supported models table below)
  - `AI_MODEL` — model name, e.g. `claude-sonnet-4-6`, `gpt-4o`, or `MiniMax-Text-01`
  - `GITHUB_TOKEN` — provided automatically by GitHub Actions
 
@@ -37,18 +37,24 @@
 
  ## Local run
 
  ```bash
- AI_MODEL=claude-sonnet-4-6 AI_API_KEY=sk-ant-... GITHUB_TOKEN=ghp_... npx harness-auto-docs
+ AI_MODEL=claude-sonnet-4-6 ANTHROPIC_API_KEY=sk-ant-... GITHUB_TOKEN=ghp_... npx harness-auto-docs
  ```
 
  ## Supported models
 
  Model routing is determined by the prefix of `AI_MODEL`:
 
- | Prefix | Provider | Example models |
- |--------|----------|----------------|
- | `claude-*` | Anthropic | `claude-sonnet-4-6`, `claude-opus-4-6`, `claude-haiku-4-5-20251001` |
- | `gpt-*` | OpenAI | `gpt-4o`, `gpt-4o-mini`, `o3` |
- | `MiniMax-*` | MiniMax (Anthropic-compatible) | `MiniMax-Text-01` |
+ | Prefix | Provider | API key env var | Example models |
+ |--------|----------|-----------------|----------------|
+ | `claude-*` | Anthropic | `ANTHROPIC_API_KEY` | `claude-sonnet-4-6`, `claude-opus-4-6` |
+ | `gpt-*` | OpenAI | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o3` |
+ | `MiniMax-*` | MiniMax | `MINIMAX_API_KEY` | `MiniMax-Text-01` |
+ | `qwen-*` | Qwen (Alibaba Cloud) | `QWEN_API_KEY` | `qwen-turbo`, `qwen-plus`, `qwen-max` |
+ | `glm-*` | Zhipu AI | `ZHIPU_API_KEY` | `glm-4`, `glm-4-flash` |
+ | `deepseek-*` | DeepSeek | `DEEPSEEK_API_KEY` | `deepseek-chat`, `deepseek-coder` |
+ | `doubao-*` | Doubao (ByteDance) | `DOUBAO_API_KEY` | `doubao-pro-4k` |
+ | `moonshot-*` | Kimi (Moonshot AI) | `KIMI_API_KEY` | `moonshot-v1-8k`, `moonshot-v1-32k` |
+ | `grok-*` | Grok (xAI) | `GROK_API_KEY` | `grok-2`, `grok-beta` |
 
  ## Future plans
 
package/dist/ai/deepseek.d.ts ADDED
@@ -0,0 +1,7 @@
+ import type { AIProvider } from './interface.js';
+ export declare class DeepSeekProvider implements AIProvider {
+     private client;
+     private model;
+     constructor(apiKey: string, model: string);
+     generate(prompt: string): Promise<string>;
+ }
package/dist/ai/deepseek.js ADDED
@@ -0,0 +1,16 @@
+ import OpenAI from 'openai';
+ export class DeepSeekProvider {
+     client;
+     model;
+     constructor(apiKey, model) {
+         this.client = new OpenAI({ apiKey, baseURL: 'https://api.deepseek.com' });
+         this.model = model;
+     }
+     async generate(prompt) {
+         const completion = await this.client.chat.completions.create({
+             model: this.model,
+             messages: [{ role: 'user', content: prompt }],
+         });
+         return completion.choices[0].message.content ?? '';
+     }
+ }
package/dist/ai/doubao.d.ts ADDED
@@ -0,0 +1,7 @@
+ import type { AIProvider } from './interface.js';
+ export declare class DoubaoProvider implements AIProvider {
+     private client;
+     private model;
+     constructor(apiKey: string, model: string);
+     generate(prompt: string): Promise<string>;
+ }
package/dist/ai/doubao.js ADDED
@@ -0,0 +1,16 @@
+ import OpenAI from 'openai';
+ export class DoubaoProvider {
+     client;
+     model;
+     constructor(apiKey, model) {
+         this.client = new OpenAI({ apiKey, baseURL: 'https://ark.cn-beijing.volces.com/api/v3' });
+         this.model = model;
+     }
+     async generate(prompt) {
+         const completion = await this.client.chat.completions.create({
+             model: this.model,
+             messages: [{ role: 'user', content: prompt }],
+         });
+         return completion.choices[0].message.content ?? '';
+     }
+ }
package/dist/ai/grok.d.ts ADDED
@@ -0,0 +1,7 @@
+ import type { AIProvider } from './interface.js';
+ export declare class GrokProvider implements AIProvider {
+     private client;
+     private model;
+     constructor(apiKey: string, model: string);
+     generate(prompt: string): Promise<string>;
+ }
package/dist/ai/grok.js ADDED
@@ -0,0 +1,16 @@
+ import OpenAI from 'openai';
+ export class GrokProvider {
+     client;
+     model;
+     constructor(apiKey, model) {
+         this.client = new OpenAI({ apiKey, baseURL: 'https://api.x.ai/v1' });
+         this.model = model;
+     }
+     async generate(prompt) {
+         const completion = await this.client.chat.completions.create({
+             model: this.model,
+             messages: [{ role: 'user', content: prompt }],
+         });
+         return completion.choices[0].message.content ?? '';
+     }
+ }
package/dist/ai/kimi.d.ts ADDED
@@ -0,0 +1,7 @@
+ import type { AIProvider } from './interface.js';
+ export declare class KimiProvider implements AIProvider {
+     private client;
+     private model;
+     constructor(apiKey: string, model: string);
+     generate(prompt: string): Promise<string>;
+ }
package/dist/ai/kimi.js ADDED
@@ -0,0 +1,16 @@
+ import OpenAI from 'openai';
+ export class KimiProvider {
+     client;
+     model;
+     constructor(apiKey, model) {
+         this.client = new OpenAI({ apiKey, baseURL: 'https://api.moonshot.cn/v1' });
+         this.model = model;
+     }
+     async generate(prompt) {
+         const completion = await this.client.chat.completions.create({
+             model: this.model,
+             messages: [{ role: 'user', content: prompt }],
+         });
+         return completion.choices[0].message.content ?? '';
+     }
+ }
package/dist/ai/qwen.d.ts ADDED
@@ -0,0 +1,7 @@
+ import type { AIProvider } from './interface.js';
+ export declare class QwenProvider implements AIProvider {
+     private client;
+     private model;
+     constructor(apiKey: string, model: string);
+     generate(prompt: string): Promise<string>;
+ }
package/dist/ai/qwen.js ADDED
@@ -0,0 +1,16 @@
+ import OpenAI from 'openai';
+ export class QwenProvider {
+     client;
+     model;
+     constructor(apiKey, model) {
+         this.client = new OpenAI({ apiKey, baseURL: 'https://dashscope.aliyuncs.com/compatible-mode/v1' });
+         this.model = model;
+     }
+     async generate(prompt) {
+         const completion = await this.client.chat.completions.create({
+             model: this.model,
+             messages: [{ role: 'user', content: prompt }],
+         });
+         return completion.choices[0].message.content ?? '';
+     }
+ }
package/dist/ai/zhipu.d.ts ADDED
@@ -0,0 +1,7 @@
+ import type { AIProvider } from './interface.js';
+ export declare class ZhipuProvider implements AIProvider {
+     private client;
+     private model;
+     constructor(apiKey: string, model: string);
+     generate(prompt: string): Promise<string>;
+ }
package/dist/ai/zhipu.js ADDED
@@ -0,0 +1,16 @@
+ import OpenAI from 'openai';
+ export class ZhipuProvider {
+     client;
+     model;
+     constructor(apiKey, model) {
+         this.client = new OpenAI({ apiKey, baseURL: 'https://open.bigmodel.cn/api/paas/v4' });
+         this.model = model;
+     }
+     async generate(prompt) {
+         const completion = await this.client.chat.completions.create({
+             model: this.model,
+             messages: [{ role: 'user', content: prompt }],
+         });
+         return completion.choices[0].message.content ?? '';
+     }
+ }
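The six new providers are identical OpenAI-compatible clients that differ only in base URL. A hypothetical consolidation (not part of the package; the table name and `baseUrlFor` helper are ours) would reduce each provider to one table entry:

```javascript
// Base URLs taken verbatim from the six provider files above.
const OPENAI_COMPATIBLE_BASE_URLS = {
  deepseek: 'https://api.deepseek.com',
  doubao: 'https://ark.cn-beijing.volces.com/api/v3',
  grok: 'https://api.x.ai/v1',
  kimi: 'https://api.moonshot.cn/v1',
  qwen: 'https://dashscope.aliyuncs.com/compatible-mode/v1',
  zhipu: 'https://open.bigmodel.cn/api/paas/v4',
};

function baseUrlFor(provider) {
  const url = OPENAI_COMPATIBLE_BASE_URLS[provider];
  if (!url) throw new Error(`No OpenAI-compatible endpoint for "${provider}"`);
  return url;
}

console.log(baseUrlFor('kimi')); // https://api.moonshot.cn/v1
```

With such a table, each provider class could be replaced by `new OpenAI({ apiKey, baseURL: baseUrlFor(name) })`, at the cost of losing per-provider extension points.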
package/dist/cli.js CHANGED
@@ -9,6 +9,12 @@ import { GitLabProvider } from './providers/gitlab.js';
  import { AnthropicProvider } from './ai/anthropic.js';
  import { OpenAIProvider } from './ai/openai.js';
  import { MiniMaxProvider } from './ai/minimax.js';
+ import { QwenProvider } from './ai/qwen.js';
+ import { ZhipuProvider } from './ai/zhipu.js';
+ import { DeepSeekProvider } from './ai/deepseek.js';
+ import { DoubaoProvider } from './ai/doubao.js';
+ import { KimiProvider } from './ai/kimi.js';
+ import { GrokProvider } from './ai/grok.js';
  function requireEnv(name) {
      const value = process.env[name];
      if (!value) {
@@ -31,14 +37,26 @@ function detectPlatform() {
      }
      return new GitHubProvider(token);
  }
- function selectAI(model, apiKey) {
+ function selectAI(model) {
      if (model.startsWith('claude-'))
-         return new AnthropicProvider(apiKey, model);
+         return new AnthropicProvider(requireEnv('ANTHROPIC_API_KEY'), model);
      if (model.startsWith('gpt-'))
-         return new OpenAIProvider(apiKey, model);
+         return new OpenAIProvider(requireEnv('OPENAI_API_KEY'), model);
      if (model.startsWith('MiniMax-'))
-         return new MiniMaxProvider(apiKey, model);
-     console.error(`Error: Unknown model "${model}". Use a claude-*, gpt-*, or MiniMax-* model.`);
+         return new MiniMaxProvider(requireEnv('MINIMAX_API_KEY'), model);
+     if (model.startsWith('qwen-'))
+         return new QwenProvider(requireEnv('QWEN_API_KEY'), model);
+     if (model.startsWith('glm-'))
+         return new ZhipuProvider(requireEnv('ZHIPU_API_KEY'), model);
+     if (model.startsWith('deepseek-'))
+         return new DeepSeekProvider(requireEnv('DEEPSEEK_API_KEY'), model);
+     if (model.startsWith('doubao-'))
+         return new DoubaoProvider(requireEnv('DOUBAO_API_KEY'), model);
+     if (model.startsWith('moonshot-'))
+         return new KimiProvider(requireEnv('KIMI_API_KEY'), model); // Kimi is the product; Moonshot AI is the provider
+     if (model.startsWith('grok-'))
+         return new GrokProvider(requireEnv('GROK_API_KEY'), model);
+     console.error(`Error: Unknown model "${model}". Supported prefixes: claude-*, gpt-*, MiniMax-*, qwen-*, glm-*, deepseek-*, doubao-*, moonshot-*, grok-*`);
      process.exit(1);
  }
  function getDefaultBranch() {
@@ -52,14 +70,13 @@ function getDefaultBranch() {
  }
  async function main() {
      const model = requireEnv('AI_MODEL');
-     const apiKey = requireEnv('AI_API_KEY');
      const diff = extractDiff();
      if (!diff.raw.trim()) {
          console.log('No diff between tags. Nothing to document.');
          process.exit(0);
      }
      console.log(`Generating docs: ${diff.prevTag} → ${diff.currentTag}`);
-     const ai = selectAI(model, apiKey);
+     const ai = selectAI(model);
      const targets = selectTargets(diff.fileGroups, diff.changedFiles);
      console.log(`Updating ${targets.length} documents: ${targets.join(', ')}`);
      const results = await generateDocs(ai, diff, targets);
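The reworked `selectAI` can be re-sketched standalone, with the provider constructors replaced by plain labels so it runs without any SDKs. The label strings and function name are ours; only the prefixes and the unknown-model error path mirror `dist/cli.js`:

```javascript
// Prefix dispatch in declaration order, as in selectAI above;
// unknown prefixes raise an error listing the supported ones.
function selectProviderLabel(model) {
  const routes = [
    ['claude-', 'anthropic'],
    ['gpt-', 'openai'],
    ['MiniMax-', 'minimax'],
    ['qwen-', 'qwen'],
    ['glm-', 'zhipu'],
    ['deepseek-', 'deepseek'],
    ['doubao-', 'doubao'],
    ['moonshot-', 'kimi'],
    ['grok-', 'grok'],
  ];
  for (const [prefix, label] of routes) {
    if (model.startsWith(prefix)) return label;
  }
  throw new Error(
    `Unknown model "${model}". Supported prefixes: ` +
      routes.map(([p]) => `${p}*`).join(', ')
  );
}

console.log(selectProviderLabel('deepseek-chat')); // deepseek
```

Note the routing is case-sensitive: `MiniMax-Text-01` matches, but `minimax-text-01` would not, matching the `startsWith` checks in the CLI.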