harness-auto-docs 0.3.1 → 0.3.3

package/AGENTS.md CHANGED
@@ -63,7 +63,15 @@ doc targets, calls an LLM (Claude or GPT), writes Markdown files, and opens a Gi
  | Variable | Description |
  |----------|-------------|
  | `AI_MODEL` | Model name: `claude-sonnet-4-6`, `gpt-4o`, etc. |
- | `AI_API_KEY` | Anthropic or OpenAI API key |
+ | `ANTHROPIC_API_KEY` | Anthropic API key (required for `claude-*` models) |
+ | `OPENAI_API_KEY` | OpenAI API key (required for `gpt-*` models) |
+ | `MINIMAX_API_KEY` | MiniMax API key (required for `MiniMax-*` models) |
+ | `QWEN_API_KEY` | Alibaba Cloud Qwen API key (required for `qwen-*` models) |
+ | `ZHIPU_API_KEY` | Zhipu AI API key (required for `glm-*` models) |
+ | `DEEPSEEK_API_KEY` | DeepSeek API key (required for `deepseek-*` models) |
+ | `DOUBAO_API_KEY` | Doubao API key (required for `doubao-*` models) |
+ | `KIMI_API_KEY` | Moonshot AI API key (required for `moonshot-*` models) |
+ | `GROK_API_KEY` | xAI Grok API key (required for `grok-*` models) |
  | `GITHUB_TOKEN` | GitHub PAT or Actions token |

  ## How to extend
package/README.md CHANGED
@@ -1,3 +1,6 @@
+ [![translation](https://img.shields.io/badge/translation->中文-blue?style=flat-square&logo=googletranslate)](README_zh.md)
+ [![translation](https://img.shields.io/badge/translation->日本語-blue?style=flat-square&logo=googletranslate)](README_ja.md)
+
  # harness-auto-docs

  Auto-generate [Harness Engineering](https://openai.com/research/harness-engineering) style documentation when your project creates a git tag. Opens a PR with updated docs.
@@ -6,7 +9,7 @@ Auto-generate [Harness Engineering](https://openai.com/research/harness-engineer

  Add `.github/workflows/harness-docs.yml` to your project (see `examples/github-workflow.yml`), then set:

- - `AI_API_KEY` — your Anthropic, OpenAI, or MiniMax API key (repository secret)
+ - `ANTHROPIC_API_KEY` / `OPENAI_API_KEY` / `MINIMAX_API_KEY` / … — API key for the chosen provider (see Supported models below)
  - `AI_MODEL` — model name, e.g. `claude-sonnet-4-6`, `gpt-4o`, or `MiniMax-Text-01`
  - `GITHUB_TOKEN` — provided automatically by GitHub Actions

@@ -34,18 +37,24 @@ When you push a tag (`git tag v1.2.0 && git push --tags`), a PR is automatically
  ## Local run

  ```bash
- AI_MODEL=claude-sonnet-4-6 AI_API_KEY=sk-ant-... GITHUB_TOKEN=ghp_... npx harness-auto-docs
+ AI_MODEL=claude-sonnet-4-6 ANTHROPIC_API_KEY=sk-ant-... GITHUB_TOKEN=ghp_... npx harness-auto-docs
  ```

  ## Supported models

  Model routing is determined by the prefix of `AI_MODEL`:

- | Prefix | Provider | Example models |
- |--------|----------|----------------|
- | `claude-*` | Anthropic | `claude-sonnet-4-6`, `claude-opus-4-6`, `claude-haiku-4-5-20251001` |
- | `gpt-*` | OpenAI | `gpt-4o`, `gpt-4o-mini`, `o3` |
- | `MiniMax-*` | MiniMax (Anthropic-compatible) | `MiniMax-Text-01` |
+ | Prefix | Provider | API key env var | Example models |
+ |--------|----------|-----------------|----------------|
+ | `claude-*` | Anthropic | `ANTHROPIC_API_KEY` | `claude-sonnet-4-6`, `claude-opus-4-6` |
+ | `gpt-*` | OpenAI | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o3` |
+ | `MiniMax-*` | MiniMax | `MINIMAX_API_KEY` | `MiniMax-Text-01` |
+ | `qwen-*` | Qwen (Alibaba Cloud) | `QWEN_API_KEY` | `qwen-turbo`, `qwen-plus`, `qwen-max` |
+ | `glm-*` | Zhipu AI | `ZHIPU_API_KEY` | `glm-4`, `glm-4-flash` |
+ | `deepseek-*` | DeepSeek | `DEEPSEEK_API_KEY` | `deepseek-chat`, `deepseek-coder` |
+ | `doubao-*` | Doubao (ByteDance) | `DOUBAO_API_KEY` | `doubao-pro-4k` |
+ | `moonshot-*` | Kimi (Moonshot AI) | `KIMI_API_KEY` | `moonshot-v1-8k`, `moonshot-v1-32k` |
+ | `grok-*` | Grok (xAI) | `GROK_API_KEY` | `grok-2`, `grok-beta` |

  ## Future

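The prefix routing the README documents reduces to a first-match lookup over that table. A minimal sketch, assuming only the prefixes listed above (`keyEnvFor` and `ROUTES` are illustrative names, not the package's API):

```javascript
// Map an AI_MODEL prefix to the env var that must hold the provider's key.
// Mirrors the Supported models table; first matching prefix wins.
const ROUTES = [
  ['claude-', 'ANTHROPIC_API_KEY'],
  ['gpt-', 'OPENAI_API_KEY'],
  ['MiniMax-', 'MINIMAX_API_KEY'],
  ['qwen-', 'QWEN_API_KEY'],
  ['glm-', 'ZHIPU_API_KEY'],
  ['deepseek-', 'DEEPSEEK_API_KEY'],
  ['doubao-', 'DOUBAO_API_KEY'],
  ['moonshot-', 'KIMI_API_KEY'],
  ['grok-', 'GROK_API_KEY'],
];

function keyEnvFor(model) {
  const route = ROUTES.find(([prefix]) => model.startsWith(prefix));
  if (!route) throw new Error(`Unknown model "${model}"`);
  return route[1];
}
```

Note the matching is case-sensitive: `MiniMax-Text-01` routes only because its prefix is spelled exactly `MiniMax-`.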
package/README_ja.md ADDED
@@ -0,0 +1,61 @@
+ [![translation](https://img.shields.io/badge/translation->English-blue?style=flat-square&logo=googletranslate)](README.md)
+ [![translation](https://img.shields.io/badge/translation->中文-blue?style=flat-square&logo=googletranslate)](README_zh.md)
+
+ # harness-auto-docs
+
+ Auto-generates [Harness Engineering](https://openai.com/research/harness-engineering) style documentation when your project creates a git tag, and opens a PR with the updated docs.
+
+ ## Usage
+
+ Add `.github/workflows/harness-docs.yml` to your project (see `examples/github-workflow.yml`), then set:
+
+ - `ANTHROPIC_API_KEY` / `OPENAI_API_KEY` / `MINIMAX_API_KEY` / … — API key for the chosen provider (see the Supported models table for details)
+ - `AI_MODEL` — model name (e.g. `claude-sonnet-4-6`, `gpt-4o`, `MiniMax-Text-01`)
+ - `GITHUB_TOKEN` — provided automatically by GitHub Actions
+
+ When you push a tag (`git tag v1.2.0 && git push --tags`), a PR with updated documentation is opened automatically.
+
+ ## Updated files
+
+ | File | Always | Conditionally |
+ |------|--------|---------------|
+ | `AGENTS.md` | ✓ | |
+ | `ARCHITECTURE.md` | ✓ | |
+ | `docs/DESIGN.md` | ✓ | |
+ | `docs/QUALITY_SCORE.md` | ✓ | |
+ | `changelog/vX.Y.Z.md` | ✓ | |
+ | `docs/design-docs/vX.Y.Z.md` | ✓ | |
+ | `docs/design-docs/index.md` | ✓ | |
+ | `docs/exec-plans/tech-debt-tracker.md` | ✓ | |
+ | `docs/FRONTEND.md` | | When frontend files change |
+ | `docs/SECURITY.md` | | When auth/security files change |
+ | `docs/RELIABILITY.md` | | When infrastructure files change |
+ | `docs/generated/db-schema.md` | | When SQL/schema files change |
+ | `docs/product-specs/index.md` | | When new features are detected |
+ | `docs/references/` | | When new dependencies are added |
+
+ ## Local run
+
+ ```bash
+ AI_MODEL=claude-sonnet-4-6 ANTHROPIC_API_KEY=sk-ant-... GITHUB_TOKEN=ghp_... npx harness-auto-docs
+ ```
+
+ ## Supported models
+
+ Model routing is determined by the prefix of `AI_MODEL`:
+
+ | Prefix | Provider | API key env var | Example models |
+ |--------|----------|-----------------|----------------|
+ | `claude-*` | Anthropic | `ANTHROPIC_API_KEY` | `claude-sonnet-4-6`, `claude-opus-4-6` |
+ | `gpt-*` | OpenAI | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o3` |
+ | `MiniMax-*` | MiniMax | `MINIMAX_API_KEY` | `MiniMax-Text-01` |
+ | `qwen-*` | Qwen (Alibaba Cloud) | `QWEN_API_KEY` | `qwen-turbo`, `qwen-plus`, `qwen-max` |
+ | `glm-*` | Zhipu AI | `ZHIPU_API_KEY` | `glm-4`, `glm-4-flash` |
+ | `deepseek-*` | DeepSeek | `DEEPSEEK_API_KEY` | `deepseek-chat`, `deepseek-coder` |
+ | `doubao-*` | Doubao (ByteDance) | `DOUBAO_API_KEY` | `doubao-pro-4k` |
+ | `moonshot-*` | Kimi (Moonshot AI) | `KIMI_API_KEY` | `moonshot-v1-8k`, `moonshot-v1-32k` |
+ | `grok-*` | Grok (xAI) | `GROK_API_KEY` | `grok-2`, `grok-beta` |
+
+ ## Future plans
+
+ - GitLab support (MR creation)
package/README_zh.md ADDED
@@ -0,0 +1,61 @@
+ [![translation](https://img.shields.io/badge/translation->English-blue?style=flat-square&logo=googletranslate)](README.md)
+ [![translation](https://img.shields.io/badge/translation->日本語-blue?style=flat-square&logo=googletranslate)](README_ja.md)
+
+ # harness-auto-docs
+
+ Automatically generates [Harness Engineering](https://openai.com/research/harness-engineering) style documentation when your project creates a git tag, and opens a PR to update the docs.
+
+ ## Usage
+
+ Add `.github/workflows/harness-docs.yml` to your project (see `examples/github-workflow.yml`), then set:
+
+ - `ANTHROPIC_API_KEY` / `OPENAI_API_KEY` / `MINIMAX_API_KEY` / … — API key for the chosen provider (see the Supported models table below)
+ - `AI_MODEL` — model name, e.g. `claude-sonnet-4-6`, `gpt-4o`, or `MiniMax-Text-01`
+ - `GITHUB_TOKEN` — provided automatically by GitHub Actions
+
+ When you push a tag (`git tag v1.2.0 && git push --tags`), a PR containing the updated documentation is opened automatically.
+
+ ## What gets updated
+
+ | File | Always | Conditionally |
+ |------|--------|---------------|
+ | `AGENTS.md` | ✓ | |
+ | `ARCHITECTURE.md` | ✓ | |
+ | `docs/DESIGN.md` | ✓ | |
+ | `docs/QUALITY_SCORE.md` | ✓ | |
+ | `changelog/vX.Y.Z.md` | ✓ | |
+ | `docs/design-docs/vX.Y.Z.md` | ✓ | |
+ | `docs/design-docs/index.md` | ✓ | |
+ | `docs/exec-plans/tech-debt-tracker.md` | ✓ | |
+ | `docs/FRONTEND.md` | | When frontend files change |
+ | `docs/SECURITY.md` | | When auth/security files change |
+ | `docs/RELIABILITY.md` | | When infrastructure files change |
+ | `docs/generated/db-schema.md` | | When SQL/schema files change |
+ | `docs/product-specs/index.md` | | When new features are detected |
+ | `docs/references/` | | When new dependencies are added |
+
+ ## Local run
+
+ ```bash
+ AI_MODEL=claude-sonnet-4-6 ANTHROPIC_API_KEY=sk-ant-... GITHUB_TOKEN=ghp_... npx harness-auto-docs
+ ```
+
+ ## Supported models
+
+ Model routing is determined by the prefix of `AI_MODEL`:
+
+ | Prefix | Provider | API key env var | Example models |
+ |--------|----------|-----------------|----------------|
+ | `claude-*` | Anthropic | `ANTHROPIC_API_KEY` | `claude-sonnet-4-6`, `claude-opus-4-6` |
+ | `gpt-*` | OpenAI | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o3` |
+ | `MiniMax-*` | MiniMax | `MINIMAX_API_KEY` | `MiniMax-Text-01` |
+ | `qwen-*` | Qwen (Alibaba Cloud) | `QWEN_API_KEY` | `qwen-turbo`, `qwen-plus`, `qwen-max` |
+ | `glm-*` | Zhipu AI | `ZHIPU_API_KEY` | `glm-4`, `glm-4-flash` |
+ | `deepseek-*` | DeepSeek | `DEEPSEEK_API_KEY` | `deepseek-chat`, `deepseek-coder` |
+ | `doubao-*` | Doubao (ByteDance) | `DOUBAO_API_KEY` | `doubao-pro-4k` |
+ | `moonshot-*` | Kimi (Moonshot AI) | `KIMI_API_KEY` | `moonshot-v1-8k`, `moonshot-v1-32k` |
+ | `grok-*` | Grok (xAI) | `GROK_API_KEY` | `grok-2`, `grok-beta` |
+
+ ## Future plans
+
+ - GitLab support (MR creation)
package/dist/ai/deepseek.d.ts ADDED
@@ -0,0 +1,7 @@
+ import type { AIProvider } from './interface.js';
+ export declare class DeepSeekProvider implements AIProvider {
+     private client;
+     private model;
+     constructor(apiKey: string, model: string);
+     generate(prompt: string): Promise<string>;
+ }
package/dist/ai/deepseek.js ADDED
@@ -0,0 +1,16 @@
+ import OpenAI from 'openai';
+ export class DeepSeekProvider {
+     client;
+     model;
+     constructor(apiKey, model) {
+         this.client = new OpenAI({ apiKey, baseURL: 'https://api.deepseek.com' });
+         this.model = model;
+     }
+     async generate(prompt) {
+         const completion = await this.client.chat.completions.create({
+             model: this.model,
+             messages: [{ role: 'user', content: prompt }],
+         });
+         return completion.choices[0].message.content ?? '';
+     }
+ }
package/dist/ai/doubao.d.ts ADDED
@@ -0,0 +1,7 @@
+ import type { AIProvider } from './interface.js';
+ export declare class DoubaoProvider implements AIProvider {
+     private client;
+     private model;
+     constructor(apiKey: string, model: string);
+     generate(prompt: string): Promise<string>;
+ }
package/dist/ai/doubao.js ADDED
@@ -0,0 +1,16 @@
+ import OpenAI from 'openai';
+ export class DoubaoProvider {
+     client;
+     model;
+     constructor(apiKey, model) {
+         this.client = new OpenAI({ apiKey, baseURL: 'https://ark.cn-beijing.volces.com/api/v3' });
+         this.model = model;
+     }
+     async generate(prompt) {
+         const completion = await this.client.chat.completions.create({
+             model: this.model,
+             messages: [{ role: 'user', content: prompt }],
+         });
+         return completion.choices[0].message.content ?? '';
+     }
+ }
package/dist/ai/grok.d.ts ADDED
@@ -0,0 +1,7 @@
+ import type { AIProvider } from './interface.js';
+ export declare class GrokProvider implements AIProvider {
+     private client;
+     private model;
+     constructor(apiKey: string, model: string);
+     generate(prompt: string): Promise<string>;
+ }
package/dist/ai/grok.js ADDED
@@ -0,0 +1,16 @@
+ import OpenAI from 'openai';
+ export class GrokProvider {
+     client;
+     model;
+     constructor(apiKey, model) {
+         this.client = new OpenAI({ apiKey, baseURL: 'https://api.x.ai/v1' });
+         this.model = model;
+     }
+     async generate(prompt) {
+         const completion = await this.client.chat.completions.create({
+             model: this.model,
+             messages: [{ role: 'user', content: prompt }],
+         });
+         return completion.choices[0].message.content ?? '';
+     }
+ }
package/dist/ai/kimi.d.ts ADDED
@@ -0,0 +1,7 @@
+ import type { AIProvider } from './interface.js';
+ export declare class KimiProvider implements AIProvider {
+     private client;
+     private model;
+     constructor(apiKey: string, model: string);
+     generate(prompt: string): Promise<string>;
+ }
package/dist/ai/kimi.js ADDED
@@ -0,0 +1,16 @@
+ import OpenAI from 'openai';
+ export class KimiProvider {
+     client;
+     model;
+     constructor(apiKey, model) {
+         this.client = new OpenAI({ apiKey, baseURL: 'https://api.moonshot.cn/v1' });
+         this.model = model;
+     }
+     async generate(prompt) {
+         const completion = await this.client.chat.completions.create({
+             model: this.model,
+             messages: [{ role: 'user', content: prompt }],
+         });
+         return completion.choices[0].message.content ?? '';
+     }
+ }
package/dist/ai/qwen.d.ts ADDED
@@ -0,0 +1,7 @@
+ import type { AIProvider } from './interface.js';
+ export declare class QwenProvider implements AIProvider {
+     private client;
+     private model;
+     constructor(apiKey: string, model: string);
+     generate(prompt: string): Promise<string>;
+ }
package/dist/ai/qwen.js ADDED
@@ -0,0 +1,16 @@
+ import OpenAI from 'openai';
+ export class QwenProvider {
+     client;
+     model;
+     constructor(apiKey, model) {
+         this.client = new OpenAI({ apiKey, baseURL: 'https://dashscope.aliyuncs.com/compatible-mode/v1' });
+         this.model = model;
+     }
+     async generate(prompt) {
+         const completion = await this.client.chat.completions.create({
+             model: this.model,
+             messages: [{ role: 'user', content: prompt }],
+         });
+         return completion.choices[0].message.content ?? '';
+     }
+ }
package/dist/ai/zhipu.d.ts ADDED
@@ -0,0 +1,7 @@
+ import type { AIProvider } from './interface.js';
+ export declare class ZhipuProvider implements AIProvider {
+     private client;
+     private model;
+     constructor(apiKey: string, model: string);
+     generate(prompt: string): Promise<string>;
+ }
package/dist/ai/zhipu.js ADDED
@@ -0,0 +1,16 @@
+ import OpenAI from 'openai';
+ export class ZhipuProvider {
+     client;
+     model;
+     constructor(apiKey, model) {
+         this.client = new OpenAI({ apiKey, baseURL: 'https://open.bigmodel.cn/api/paas/v4' });
+         this.model = model;
+     }
+     async generate(prompt) {
+         const completion = await this.client.chat.completions.create({
+             model: this.model,
+             messages: [{ role: 'user', content: prompt }],
+         });
+         return completion.choices[0].message.content ?? '';
+     }
+ }
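All six providers added in this release are identical apart from the endpoint passed to the OpenAI SDK constructor. Collecting those base URLs in one map (a sketch only; the package itself keeps six separate classes) makes the shared pattern, and a possible future single-factory refactor, easy to see. `endpointFor` is an illustrative name, not a package export:

```javascript
// Base URLs from the six OpenAI-compatible provider constructors above,
// keyed by the model-name prefix that routes to each provider.
const OPENAI_COMPATIBLE_ENDPOINTS = {
  'qwen-': 'https://dashscope.aliyuncs.com/compatible-mode/v1',
  'glm-': 'https://open.bigmodel.cn/api/paas/v4',
  'deepseek-': 'https://api.deepseek.com',
  'doubao-': 'https://ark.cn-beijing.volces.com/api/v3',
  'moonshot-': 'https://api.moonshot.cn/v1',
  'grok-': 'https://api.x.ai/v1',
};

// First-match resolution by model-name prefix; null means the model
// belongs to a provider with its own SDK (Anthropic, OpenAI, MiniMax)
// or is unknown.
function endpointFor(model) {
  const hit = Object.entries(OPENAI_COMPATIBLE_ENDPOINTS)
    .find(([prefix]) => model.startsWith(prefix));
  return hit ? hit[1] : null;
}
```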
package/dist/cli.js CHANGED
@@ -9,6 +9,12 @@ import { GitLabProvider } from './providers/gitlab.js';
  import { AnthropicProvider } from './ai/anthropic.js';
  import { OpenAIProvider } from './ai/openai.js';
  import { MiniMaxProvider } from './ai/minimax.js';
+ import { QwenProvider } from './ai/qwen.js';
+ import { ZhipuProvider } from './ai/zhipu.js';
+ import { DeepSeekProvider } from './ai/deepseek.js';
+ import { DoubaoProvider } from './ai/doubao.js';
+ import { KimiProvider } from './ai/kimi.js';
+ import { GrokProvider } from './ai/grok.js';
  function requireEnv(name) {
      const value = process.env[name];
      if (!value) {
@@ -31,14 +37,26 @@ function detectPlatform() {
      }
      return new GitHubProvider(token);
  }
- function selectAI(model, apiKey) {
+ function selectAI(model) {
      if (model.startsWith('claude-'))
-         return new AnthropicProvider(apiKey, model);
+         return new AnthropicProvider(requireEnv('ANTHROPIC_API_KEY'), model);
      if (model.startsWith('gpt-'))
-         return new OpenAIProvider(apiKey, model);
+         return new OpenAIProvider(requireEnv('OPENAI_API_KEY'), model);
      if (model.startsWith('MiniMax-'))
-         return new MiniMaxProvider(apiKey, model);
-     console.error(`Error: Unknown model "${model}". Use a claude-*, gpt-*, or MiniMax-* model.`);
+         return new MiniMaxProvider(requireEnv('MINIMAX_API_KEY'), model);
+     if (model.startsWith('qwen-'))
+         return new QwenProvider(requireEnv('QWEN_API_KEY'), model);
+     if (model.startsWith('glm-'))
+         return new ZhipuProvider(requireEnv('ZHIPU_API_KEY'), model);
+     if (model.startsWith('deepseek-'))
+         return new DeepSeekProvider(requireEnv('DEEPSEEK_API_KEY'), model);
+     if (model.startsWith('doubao-'))
+         return new DoubaoProvider(requireEnv('DOUBAO_API_KEY'), model);
+     if (model.startsWith('moonshot-'))
+         return new KimiProvider(requireEnv('KIMI_API_KEY'), model); // Kimi is the product; Moonshot AI is the provider
+     if (model.startsWith('grok-'))
+         return new GrokProvider(requireEnv('GROK_API_KEY'), model);
+     console.error(`Error: Unknown model "${model}". Supported prefixes: claude-*, gpt-*, MiniMax-*, qwen-*, glm-*, deepseek-*, doubao-*, moonshot-*, grok-*`);
      process.exit(1);
  }
  function getDefaultBranch() {
@@ -52,14 +70,13 @@ function getDefaultBranch() {
  }
  async function main() {
      const model = requireEnv('AI_MODEL');
-     const apiKey = requireEnv('AI_API_KEY');
      const diff = extractDiff();
      if (!diff.raw.trim()) {
          console.log('No diff between tags. Nothing to document.');
          process.exit(0);
      }
      console.log(`Generating docs: ${diff.prevTag} → ${diff.currentTag}`);
-     const ai = selectAI(model, apiKey);
+     const ai = selectAI(model);
      const targets = selectTargets(diff.fileGroups, diff.changedFiles);
      console.log(`Updating ${targets.length} documents: ${targets.join(', ')}`);
      const results = await generateDocs(ai, diff, targets);
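The behavioral consequence of the `cli.js` change is that the API key is no longer read up front in `main`: `selectAI` resolves the env var lazily, so only the key for the chosen model family must be set. A dependency-injected sketch of that behavior, assuming an explicit `env` object so it can be exercised without touching `process.env` (`requireFrom` and `apiKeyFor` are illustrative names, not the CLI's exports):

```javascript
// requireEnv-style lookup against a passed-in env object.
function requireFrom(env, name) {
  const value = env[name];
  if (!value) throw new Error(`Missing required env var: ${name}`);
  return value;
}

// Only the prefix that matches the model triggers a key lookup;
// keys for every other provider may be absent without error.
function apiKeyFor(env, model) {
  if (model.startsWith('claude-')) return requireFrom(env, 'ANTHROPIC_API_KEY');
  if (model.startsWith('gpt-')) return requireFrom(env, 'OPENAI_API_KEY');
  if (model.startsWith('deepseek-')) return requireFrom(env, 'DEEPSEEK_API_KEY');
  // ...the remaining prefixes follow the same pattern as selectAI above
  throw new Error(`Unknown model "${model}"`);
}
```

An environment holding only `DEEPSEEK_API_KEY` now suffices for a `deepseek-*` run, whereas the old single `AI_API_KEY` was always required.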