kafka-mcp 0.1.1 → 0.1.4

package/README.md CHANGED
@@ -9,23 +9,49 @@
  - Fetch messages from a specified Topic, with search support
  - List consumer groups
  - View a consumer group's consumption progress on a given Topic
+ - Write and view `~/.config/kafka-mcp/config.json` via commands
+ - Install skill templates for Cursor, Claude, and Codex

  ## Features and Commands

- The CLI currently provides the following 5 commands:
+ The CLI currently provides the following 8 commands:

- ### 1. `topics`
+ ### 1. `config`
+
+ Manage the user config file. Config is written to:
+
+ ```bash
+ ~/.config/kafka-mcp/config.json
+ ```
+
+ Common subcommands:
+
+ - `config path`: print the config file path
+ - `config show`: show the current user config
+ - `config set`: write config
+
+ Examples:
+
+ ```bash
+ npx kafka-mcp config path
+ npx kafka-mcp config show
+ npx kafka-mcp config set --brokers localhost:9092 --client-id kafka-mcp-cli
+ npx kafka-mcp config set --brokers localhost:9092,localhost:9093 --ssl
+ npx kafka-mcp config set --brokers localhost:9092 --sasl-mechanism plain --sasl-username demo --sasl-password secret
+ ```
+
+ ### 2. `topics`

  List all Topics, with support for searching by name.

  Examples:

  ```bash
- node dist/cli.js topics
- node dist/cli.js topics --search order
+ npx kafka-mcp topics
+ npx kafka-mcp topics --search order
  ```

- ### 2. `topic`
+ ### 3. `topic`

  View a Topic's metadata, including:
@@ -37,10 +63,10 @@ node dist/cli.js topics --search order
  Example:

  ```bash
- node dist/cli.js topic --topic orders
+ npx kafka-mcp topic --topic orders
  ```

- ### 3. `messages`
+ ### 4. `messages`

  Read messages from a specified Topic, with keyword search support.

@@ -62,23 +88,40 @@ node dist/cli.js topic --topic orders
  Examples:

  ```bash
- node dist/cli.js messages --topic orders
- node dist/cli.js messages --topic orders --search paid --limit 20
- node dist/cli.js messages --topic orders --latest
+ npx kafka-mcp messages --topic orders
+ npx kafka-mcp messages --topic orders --search paid --limit 20
+ npx kafka-mcp messages --topic orders --latest
  ```

- ### 4. `groups`
+ ### 5. `groups`

  List consumer groups, with support for searching by group name.

  Examples:

  ```bash
- node dist/cli.js groups
- node dist/cli.js groups --search billing
+ npx kafka-mcp groups
+ npx kafka-mcp groups --search billing
+ ```
+
+ ### 6. `group describe`
+
+ View a consumer group's complete information in one go, including:
+
+ - basic group info
+ - state
+ - protocol / protocolType
+ - member list
+ - the topics/partitions each member is responsible for
+ - committed offset / latest offset / lag for every topic the group touches
+
+ Example:
+
+ ```bash
+ npx kafka-mcp group describe --group billing-consumers
  ```

- ### 5. `group-progress`
+ ### 7. `group-progress`

  View a consumer group's consumption progress on a specific Topic.

@@ -93,7 +136,35 @@ node dist/cli.js groups --search billing
  Example:

  ```bash
- node dist/cli.js group-progress --group billing-consumers --topic orders
+ npx kafka-mcp group-progress --group billing-consumers --topic orders
+ ```
+
+ ### 8. `skill-install`
+
+ Install reusable skill templates. Supported targets:
+
+ - `cursor`
+ - `claude`
+ - `codex`
+
+ Default install locations:
+
+ - `cursor`: `.cursor/rules/kafka-mcp.mdc` under the current directory
+ - `claude`: `.claude/commands/kafka-mcp.md` under the current directory
+ - `codex`: `$CODEX_HOME/skills/kafka-cli-inspector/SKILL.md`
+
+ Common options:
+
+ - `--dir`: specify the install base directory
+ - `--force`: overwrite existing files
+
+ Examples:
+
+ ```bash
+ npx kafka-mcp skill-install cursor
+ npx kafka-mcp skill-install claude
+ npx kafka-mcp skill-install codex
+ npx kafka-mcp skill-install cursor --dir /path/to/project --force
  ```

  ## Configuration
@@ -104,6 +175,12 @@ The CLI reads config in the following order and uses the first config source found
  2. `~/.config/kafka-mcp/config.json`
  3. Environment variables override the config above

+ Writing the user config with the command is recommended:
+
+ ```bash
+ npx kafka-mcp config set --brokers localhost:9092 --client-id kafka-mcp-cli
+ ```
+
  Minimal config example:

  ```json
@@ -139,32 +216,69 @@ The CLI reads config in the following order and uses the first config source found

  ## Installation

+ Local development:
+
  ```bash
  npm install
  npm run build
  ```

- To use it as a global command, you can run this after it is published to npm:
+ There are two ways to use the version already published to npm.
+
+ Option 1: global install

  ```bash
  npm install -g kafka-mcp
  ```

+ After installation you can run it directly:
+
+ ```bash
+ kafka-mcp --help
+ kafka-mcp topics
+ kafka-mcp messages --topic orders --search paid
+ ```
+
+ Option 2: run directly with `npx`, no global install required
+
+ ```bash
+ npx kafka-mcp --help
+ npx kafka-mcp topics
+ npx kafka-mcp messages --topic orders --search paid
+ ```
+
+ To pin a version, you can also run:
+
+ ```bash
+ npx kafka-mcp@0.1.4 --help
+ ```
+
  ## Usage Examples

  ```bash
- npx tsx src/cli.ts topics --search order
- npx tsx src/cli.ts topic --topic orders
- npx tsx src/cli.ts messages --topic orders --search paid --limit 20
- npx tsx src/cli.ts groups --search billing
- npx tsx src/cli.ts group-progress --group billing-consumers --topic orders
+ npx kafka-mcp topics --search order
+ npx kafka-mcp topic --topic orders
+ npx kafka-mcp messages --topic orders --search paid --limit 20
+ npx kafka-mcp groups --search billing
+ npx kafka-mcp group describe --group billing-consumers
+ npx kafka-mcp group-progress --group billing-consumers --topic orders
+ npx kafka-mcp config show
+ npx kafka-mcp skill-install cursor
  ```

- If you have already run the build, you can also run it directly:
+ If you use the published npm package, the corresponding commands can be written as:

  ```bash
- node dist/cli.js topics
- node dist/cli.js messages --topic orders --search paid
+ kafka-mcp topics --search order
+ kafka-mcp topic --topic orders
+ kafka-mcp messages --topic orders --search paid --limit 20
+ kafka-mcp groups --search billing
+ kafka-mcp group describe --group billing-consumers
+ kafka-mcp group-progress --group billing-consumers --topic orders
+ kafka-mcp config show
+ kafka-mcp skill-install cursor
+ kafka-mcp skill-install claude
+ kafka-mcp skill-install codex
  ```

  ## Skill
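
> Editor's note: for orientation while reading this diff, the user config that `config set` writes would plausibly look like the example below. The field names come from `sanitizeConfigInput` in `dist/lib/config.js` further down; the concrete values are illustrative assumptions, not package output.

```json
{
  "brokers": ["localhost:9092", "localhost:9093"],
  "clientId": "kafka-mcp-cli",
  "ssl": true,
  "sasl": {
    "mechanism": "plain",
    "username": "demo",
    "password": "secret"
  }
}
```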
package/dist/cli.js CHANGED
@@ -1,7 +1,9 @@
  #!/usr/bin/env node
  import { Command } from "commander";
+ import { readUserConfig, sanitizeConfigInput, USER_CONFIG_PATH, writeUserConfig } from "./lib/config.js";
  import { formatRows } from "./lib/format.js";
  import { KafkaCliService } from "./lib/kafka.js";
+ import { installSkill } from "./lib/skill-installer.js";
  const program = new Command();
  let service;
  function getService() {
@@ -11,7 +13,88 @@ function getService() {
  program
  .name("kafka-mcp")
  .description("Kafka CLI for topics, messages, and consumer group inspection")
- .version("0.1.0");
+ .version("0.1.4");
+ const configCommand = new Command("config")
+ .description("Manage kafka-mcp config under ~/.config/kafka-mcp/config.json");
+ configCommand
+ .command("path")
+ .description("Print the user config file path")
+ .action(() => {
+ console.log(USER_CONFIG_PATH);
+ });
+ configCommand
+ .command("show")
+ .description("Show the user config stored under ~/.config")
+ .action(() => {
+ console.log(JSON.stringify(readUserConfig(), null, 2));
+ });
+ configCommand
+ .command("set")
+ .description("Write Kafka connection config to ~/.config/kafka-mcp/config.json")
+ .requiredOption("-b, --brokers <brokers>", "Comma-separated broker list, for example localhost:9092,localhost:9093")
+ .option("-c, --client-id <clientId>", "Kafka client id", "kafka-mcp-cli")
+ .option("--ssl", "Enable SSL")
+ .option("--no-ssl", "Disable SSL")
+ .option("--sasl-mechanism <mechanism>", "SASL mechanism: plain | scram-sha-256 | scram-sha-512")
+ .option("--sasl-username <username>", "SASL username")
+ .option("--sasl-password <password>", "SASL password")
+ .action(async (options) => {
+ const nextConfig = sanitizeConfigInput({
+ brokers: options.brokers.split(","),
+ clientId: options.clientId,
+ ssl: options.ssl,
+ sasl: options.saslMechanism
+ ? {
+ mechanism: options.saslMechanism,
+ username: options.saslUsername ?? "",
+ password: options.saslPassword ?? ""
+ }
+ : undefined
+ });
+ if (options.saslMechanism && (!options.saslUsername || !options.saslPassword)) {
+ throw new Error("SASL username and password are required when sasl-mechanism is set.");
+ }
+ await writeUserConfig(nextConfig);
+ console.log(`Wrote config to: ${USER_CONFIG_PATH}`);
+ });
+ program.addCommand(configCommand);
+ const groupCommand = new Command("group")
+ .description("Run detailed consumer group inspection commands");
+ groupCommand
+ .command("describe")
+ .description("Show a detailed consumer group report across all assigned topics")
+ .requiredOption("-g, --group <groupId>", "Consumer group id")
+ .action(async (options) => {
+ const group = await getService().getGroupDescribe(options.group);
+ console.log(`group: ${group.groupId}`);
+ console.log(`state: ${group.state}`);
+ console.log(`protocol: ${group.protocol}`);
+ console.log(`protocolType: ${group.protocolType}`);
+ console.log("");
+ console.log("members:");
+ console.log(formatRows(group.members.map((member) => ({
+ memberId: member.memberId,
+ clientId: member.clientId,
+ consumerHost: member.consumerId,
+ assignments: member.assignments
+ .map((assignment) => `${assignment.topic}[${assignment.partitions.join(",")}]`)
+ .join(" ")
+ }))));
+ for (const topic of group.topics) {
+ console.log("");
+ console.log(`topic: ${topic.topic}`);
+ console.log(formatRows(topic.partitions.map((partition) => ({
+ partition: partition.partition,
+ memberId: partition.memberId ?? "",
+ clientId: partition.clientId ?? "",
+ consumerHost: partition.consumerId ?? "",
+ committed: partition.committedOffset,
+ latest: partition.latestOffset,
+ lag: partition.lag ?? ""
+ }))));
+ }
+ });
+ program.addCommand(groupCommand);
  program
  .command("topics")
  .description("List topics")
@@ -89,6 +172,26 @@ program
  lag: partition.lag ?? ""
  }))));
  });
+ program
+ .command("skill-install")
+ .description("Install a reusable skill template for Cursor, Claude, or Codex")
+ .argument("<target>", "Target platform: cursor | claude | codex")
+ .option("-d, --dir <path>", "Custom base directory for installation")
+ .option("-f, --force", "Overwrite an existing installed file")
+ .action(async (target, options) => {
+ if (!isSkillTarget(target)) {
+ throw new Error(`Unsupported target: ${target}. Expected one of: cursor, claude, codex.`);
+ }
+ const result = await installSkill({
+ target,
+ baseDir: options.dir,
+ force: options.force
+ });
+ console.log(`Installed ${result.target} skill to: ${result.outputPath}`);
+ });
+ function isSkillTarget(value) {
+ return value === "cursor" || value === "claude" || value === "codex";
+ }
  program.parseAsync(process.argv).catch((error) => {
  const message = error instanceof Error ? error.message : String(error);
  console.error(`Error: ${message}`);
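
> Editor's note: in the `config set` action above, the SASL credential check runs after `sanitizeConfigInput` has already built the config object. Both orderings reject the same inputs; the standalone sketch below (the helper name `validateSasl` is mine, not from the package) just shows the check failing fast before any object is built.

```javascript
// Illustrative sketch: reject a SASL mechanism supplied without credentials,
// mirroring the guard inside the `config set` action above.
function validateSasl(options) {
  if (options.saslMechanism && (!options.saslUsername || !options.saslPassword)) {
    throw new Error("SASL username and password are required when sasl-mechanism is set.");
  }
}
```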
@@ -1,2 +1,6 @@
  import type { KafkaCliConfig } from "./types.js";
+ export declare const USER_CONFIG_PATH: string;
+ export declare function readUserConfig(): Partial<KafkaCliConfig>;
+ export declare function writeUserConfig(config: Partial<KafkaCliConfig>): Promise<void>;
+ export declare function sanitizeConfigInput(config: Partial<KafkaCliConfig>): Partial<KafkaCliConfig>;
  export declare function loadConfig(): KafkaCliConfig;
@@ -1,10 +1,12 @@
  import { existsSync, readFileSync } from "node:fs";
+ import { mkdir, writeFile } from "node:fs/promises";
  import { homedir } from "node:os";
- import { join, resolve } from "node:path";
+ import { dirname, join, resolve } from "node:path";
  const DEFAULT_CLIENT_ID = "kafka-mcp-cli";
+ export const USER_CONFIG_PATH = join(homedir(), ".config", "kafka-mcp", "config.json");
  const CONFIG_CANDIDATES = [
  resolve(process.cwd(), ".kafka-mcp.json"),
- join(homedir(), ".config", "kafka-mcp", "config.json")
+ USER_CONFIG_PATH
  ];
  function isSupportedMechanism(value) {
  return value === "plain" || value === "scram-sha-256" || value === "scram-sha-512";
@@ -19,6 +21,39 @@ function readConfigFile() {
  }
  return {};
  }
+ export function readUserConfig() {
+ if (!existsSync(USER_CONFIG_PATH)) {
+ return {};
+ }
+ return JSON.parse(readFileSync(USER_CONFIG_PATH, "utf8"));
+ }
+ export async function writeUserConfig(config) {
+ await mkdir(dirname(USER_CONFIG_PATH), { recursive: true });
+ await writeFile(USER_CONFIG_PATH, `${JSON.stringify(config, null, 2)}\n`, "utf8");
+ }
+ export function sanitizeConfigInput(config) {
+ const sanitized = {};
+ if (config.brokers) {
+ sanitized.brokers = config.brokers.map((item) => item.trim()).filter(Boolean);
+ }
+ if (config.clientId) {
+ sanitized.clientId = config.clientId;
+ }
+ if (config.ssl !== undefined) {
+ sanitized.ssl = config.ssl;
+ }
+ if (config.sasl) {
+ if (!isSupportedMechanism(config.sasl.mechanism)) {
+ throw new Error(`Unsupported SASL mechanism: ${config.sasl.mechanism}`);
+ }
+ sanitized.sasl = {
+ mechanism: config.sasl.mechanism,
+ username: config.sasl.username,
+ password: config.sasl.password
+ };
+ }
+ return sanitized;
+ }
  export function loadConfig() {
  const fileConfig = readConfigFile();
  const brokersFromEnv = process.env.KAFKA_BROKERS
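
> Editor's note: the hunk above ends just as `loadConfig()` starts reading `KAFKA_BROKERS`. The standalone sketch below restates the precedence the README documents (environment variables override file config); `resolveBrokers` is an illustrative helper, not a function in the package.

```javascript
// Env-over-file broker precedence, restated as a pure function:
// KAFKA_BROKERS, when set, is split on commas, trimmed, and wins over
// whatever brokers the config file supplies.
function resolveBrokers(fileConfig, env) {
  const fromEnv = env.KAFKA_BROKERS
    ? env.KAFKA_BROKERS.split(",").map((item) => item.trim()).filter(Boolean)
    : undefined;
  return fromEnv ?? fileConfig.brokers ?? [];
}
```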
@@ -1,5 +1,5 @@
  import { type ITopicMetadata } from "kafkajs";
- import type { GroupTopicProgress, TopicMessage } from "./types.js";
+ import type { GroupDescribe, GroupTopicProgress, TopicMessage } from "./types.js";
  export declare class KafkaCliService {
  private readonly kafka;
  constructor();
@@ -14,4 +14,5 @@ export declare class KafkaCliService {
  }): Promise<TopicMessage[]>;
  listConsumerGroups(search?: string): Promise<string[]>;
  getGroupTopicProgress(groupId: string, topic: string): Promise<GroupTopicProgress>;
+ getGroupDescribe(groupId: string): Promise<GroupDescribe>;
  }
package/dist/lib/kafka.js CHANGED
@@ -159,6 +159,65 @@ export class KafkaCliService {
  await admin.disconnect();
  }
  }
+ async getGroupDescribe(groupId) {
+ const admin = this.kafka.admin();
+ await admin.connect();
+ try {
+ const groupDescriptions = await admin.describeGroups([groupId]);
+ const description = groupDescriptions.groups[0];
+ if (!description) {
+ throw new Error(`Consumer group not found: ${groupId}`);
+ }
+ const groupOffsets = await admin.fetchOffsets({ groupId });
+ const topics = await Promise.all(groupOffsets.map(async (groupTopicOffsets) => {
+ const topicAssignments = decodeAssignmentsForTopic(description, groupTopicOffsets.topic);
+ const assignmentMap = buildAssignmentMap(description, topicAssignments);
+ const topicOffsets = await admin.fetchTopicOffsets(groupTopicOffsets.topic);
+ const partitions = topicOffsets.map((partitionOffset) => {
+ const committed = groupTopicOffsets.partitions.find((partition) => partition.partition === partitionOffset.partition);
+ const latest = toBigIntOrNull(partitionOffset.high);
+ const committedOffset = toBigIntOrNull(committed?.offset ?? "-1");
+ const lag = latest !== null && committedOffset !== null && latest >= committedOffset
+ ? (latest - committedOffset).toString()
+ : null;
+ const member = assignmentMap.get(partitionOffset.partition);
+ return {
+ partition: partitionOffset.partition,
+ memberId: member?.memberId ?? null,
+ clientId: member?.clientId ?? null,
+ consumerId: member?.consumerId ?? null,
+ committedOffset: committed?.offset ?? "-1",
+ latestOffset: partitionOffset.high,
+ lag
+ };
+ });
+ return {
+ topic: groupTopicOffsets.topic,
+ partitions
+ };
+ }));
+ const memberAssignments = decodeAssignmentsForAllTopics(description);
+ return {
+ groupId,
+ state: description.state,
+ protocol: description.protocol,
+ protocolType: description.protocolType,
+ members: description.members.map((member) => ({
+ memberId: member.memberId,
+ clientId: member.clientId,
+ consumerId: member.clientHost,
+ assignments: Array.from(memberAssignments.get(member.memberId)?.entries() ?? []).map(([topic, partitions]) => ({
+ topic,
+ partitions
+ })),
+ })),
+ topics: topics.sort((left, right) => left.topic.localeCompare(right.topic))
+ };
+ }
+ finally {
+ await admin.disconnect();
+ }
+ }
  }
  function buildAssignmentMap(description, decodedAssignments) {
  const assignment = new Map();
@@ -181,6 +240,18 @@ function decodeAssignmentsForTopic(description, topic) {
  }
  return assignments;
  }
+ function decodeAssignmentsForAllTopics(description) {
+ const assignments = new Map();
+ for (const member of description.members) {
+ const decoded = AssignerProtocol.MemberAssignment.decode(member.memberAssignment);
+ const memberAssignments = new Map();
+ for (const [topic, partitions] of Object.entries(decoded?.assignment ?? {})) {
+ memberAssignments.set(topic, partitions);
+ }
+ assignments.set(member.memberId, memberAssignments);
+ }
+ return assignments;
+ }
  function decodeMessageValue(message, field) {
  return decodeValue(message[field]);
  }
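
> Editor's note: the lag arithmetic inside `getGroupDescribe` above can be restated as a small pure function. This is a sketch of the same calculation (`partitionLag` is my name for it, not a function in the package): kafkajs reports offsets as strings, so `BigInt` avoids precision loss, and a committed offset of `"-1"` is Kafka's marker for "no commit yet".

```javascript
// Lag for one partition: latest offset minus committed offset, as a string,
// or null when the committed offset is ahead of the fetched latest offset.
function partitionLag(latestOffset, committedOffset) {
  const latest = BigInt(latestOffset);
  const committed = BigInt(committedOffset);
  return latest >= committed ? (latest - committed).toString() : null;
}
```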
@@ -0,0 +1,11 @@
+ export type SkillTarget = "cursor" | "claude" | "codex";
+ export interface InstallSkillOptions {
+ target: SkillTarget;
+ baseDir?: string;
+ force?: boolean;
+ }
+ export interface InstallSkillResult {
+ target: SkillTarget;
+ outputPath: string;
+ }
+ export declare function installSkill(options: InstallSkillOptions): Promise<InstallSkillResult>;
@@ -0,0 +1,121 @@
+ import { existsSync } from "node:fs";
+ import { mkdir, writeFile } from "node:fs/promises";
+ import { homedir } from "node:os";
+ import { dirname, resolve } from "node:path";
+ export async function installSkill(options) {
+ const outputPath = resolveInstallPath(options.target, options.baseDir);
+ const content = renderSkillContent(options.target);
+ await mkdir(dirname(outputPath), { recursive: true });
+ if (!options.force && existsSync(outputPath)) {
+ throw new Error(`Target file already exists: ${outputPath}. Re-run with --force to overwrite.`);
+ }
+ await writeFile(outputPath, content, "utf8");
+ return {
+ target: options.target,
+ outputPath
+ };
+ }
+ function resolveInstallPath(target, baseDir) {
+ if (target === "cursor") {
+ return resolve(baseDir ?? process.cwd(), ".cursor", "rules", "kafka-mcp.mdc");
+ }
+ if (target === "claude") {
+ return resolve(baseDir ?? process.cwd(), ".claude", "commands", "kafka-mcp.md");
+ }
+ const codexHome = process.env.CODEX_HOME ?? resolve(homedir(), ".codex");
+ return resolve(baseDir ?? codexHome, "skills", "kafka-cli-inspector", "SKILL.md");
+ }
+ function renderSkillContent(target) {
+ if (target === "cursor") {
+ return `---
+ description: Kafka inspection workflow through the published kafka-mcp CLI
+ globs:
+ alwaysApply: false
+ ---
+
+ - Use \`npx kafka-mcp\` for read-only Kafka inspection tasks.
+ - If configuration may be missing, start with \`npx kafka-mcp config show\` or write it with \`npx kafka-mcp config set --brokers <host:port>\`.
+ - Use \`npx kafka-mcp topics --search <keyword>\` when the exact topic is unknown.
+ - Use \`npx kafka-mcp topic --topic <topic>\` to inspect topic partition metadata.
+ - Use \`npx kafka-mcp messages --topic <topic> --search <keyword> --limit <n>\` to preview messages.
+ - Use \`npx kafka-mcp groups --search <keyword>\` to find consumer groups.
+ - Use \`npx kafka-mcp group describe --group <groupId>\` for a broad group report across all assigned topics.
+ - Use \`npx kafka-mcp group-progress --group <groupId> --topic <topic>\` for focused lag analysis on one topic.
+ - Keep limits small unless the user explicitly asks for broader inspection.
+ - Summarize only the relevant topics, messages, members, or lagging partitions.
+ `;
+ }
+ if (target === "claude") {
+ return `---
+ description: Inspect Kafka topics, messages, and consumer lag with kafka-mcp
+ argument-hint: [goal or topic/group details]
+ ---
+
+ Use the published \`kafka-mcp\` CLI through \`npx kafka-mcp\` to investigate Kafka in read-only mode.
+
+ Preferred workflow:
+
+ 1. If configuration may be missing, run \`npx kafka-mcp config show\`. If needed, instruct the user to configure brokers with \`npx kafka-mcp config set --brokers <host:port>\`.
+ 2. If the exact topic is unknown, run \`npx kafka-mcp topics --search $ARGUMENTS\`.
+ 3. If the exact consumer group is unknown, run \`npx kafka-mcp groups --search $ARGUMENTS\`.
+ 4. For topic structure, run \`npx kafka-mcp topic --topic <topic>\`.
+ 5. For message lookup, run \`npx kafka-mcp messages --topic <topic> --search <keyword> --limit 20\`.
+ 6. For a broad group report, run \`npx kafka-mcp group describe --group <groupId>\`.
+ 7. For focused lag inspection on one topic, run \`npx kafka-mcp group-progress --group <groupId> --topic <topic>\`.
+
+ When responding:
+
+ - Summarize only the relevant topics, messages, members, or lagging partitions.
+ - Mention topic, partition, committed offset, latest offset, and lag when reporting progress issues.
+ - If using \`group describe\`, call out group state, affected topics, and which member currently owns lagging partitions.
+ - Call out clearly when there are no matches.
+ `;
+ }
+ return `---
+ name: kafka-cli-inspector
+ description: Use this skill when you need to inspect Kafka topics, preview messages, search Kafka data, configure Kafka brokers, or analyze a consumer group's state and lag through the published kafka-mcp npm CLI.
+ ---
+
+ # Kafka CLI Inspector
+
+ Use this skill when a user wants Kafka read-only inspection through the published \`kafka-mcp\` npm CLI.
+
+ ## Command catalog
+
+ - \`npx kafka-mcp config path\` prints the user config path.
+ - \`npx kafka-mcp config show\` shows the user config stored under \`~/.config/kafka-mcp/config.json\`.
+ - \`npx kafka-mcp config set --brokers <host:port>\` writes user config.
+ - \`npx kafka-mcp topics --search <keyword>\` lists topics and supports fuzzy substring search.
+ - \`npx kafka-mcp topic --topic <topic>\` shows partition metadata for a topic.
+ - \`npx kafka-mcp messages --topic <topic> --search <keyword> --limit <n>\` previews messages from a topic and filters by topic, key, value, or headers.
+ - \`npx kafka-mcp groups --search <keyword>\` lists consumer groups and supports search.
+ - \`npx kafka-mcp group describe --group <groupId>\` returns a broad consumer group report across all assigned topics, including members, assignments, committed offsets, latest offsets, and lag.
+ - \`npx kafka-mcp group-progress --group <groupId> --topic <topic>\` shows committed offsets, latest offsets, lag, and the current member assignment per partition.
+
+ ## Workflow
+
+ 1. Start with configuration. If brokers may not be set, run \`npx kafka-mcp config show\`. If nothing is configured, tell the user to run \`npx kafka-mcp config set --brokers <host:port>\`.
+ 2. Discover unknown names first. Use \`topics --search\` for topic discovery and \`groups --search\` for consumer group discovery.
+ 3. Inspect topic structure before deep analysis when needed. Use \`topic --topic <topic>\` to confirm partitions and leaders.
+ 4. Investigate message content with \`messages\`. Prefer small limits and a targeted search term.
+ 5. Use \`group describe\` when the task is broad: overall group health, which topics the group touches, how partitions are assigned, or which members appear responsible for lag.
+ 6. Use \`group-progress\` when the task is narrow: lag on one known topic, partition-by-partition ownership, or verifying whether one topic is stuck.
+
+ ## Recommended patterns
+
+ - Unknown topic name: \`npx kafka-mcp topics --search order\`
+ - Unknown consumer group: \`npx kafka-mcp groups --search billing\`
+ - Verify topic layout: \`npx kafka-mcp topic --topic orders\`
+ - Search recent business events in messages: \`npx kafka-mcp messages --topic orders --search paid --limit 20\`
+ - Broad consumer diagnosis: \`npx kafka-mcp group describe --group billing-consumers\`
+ - Focused lag diagnosis: \`npx kafka-mcp group-progress --group billing-consumers --topic orders\`
+
+ ## Output guidance
+
+ - Summarize only the relevant topics, messages, lagging partitions, and responsible members.
+ - Call out when a search returns no results rather than dumping empty tables.
+ - When reporting lag, mention the topic, partition, committed offset, latest offset, lag value, and assigned member when available.
+ - When using \`group describe\`, mention group state, protocol type, affected topics, and any partitions with notable lag.
+ - If configuration is missing, say that explicitly before suggesting deeper Kafka commands.
+ `;
+ }
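
> Editor's note: `resolveInstallPath` above decides where the codex skill lands: `$CODEX_HOME` when set, otherwise `~/.codex`. The sketch below isolates that fallback as a pure string function (`codexSkillDir` is an illustrative name, not part of the package, and it skips the platform-aware `resolve()` the real code uses).

```javascript
// $CODEX_HOME wins when set; otherwise fall back to ~/.codex,
// then append the fixed skills/kafka-cli-inspector subpath.
function codexSkillDir(env, home) {
  const base = env.CODEX_HOME ?? `${home}/.codex`;
  return `${base}/skills/kafka-cli-inspector`;
}
```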
@@ -38,3 +38,23 @@ export interface GroupTopicProgress {
  }>;
  partitions: GroupPartitionProgress[];
  }
+ export interface GroupDescribeTopic {
+ topic: string;
+ partitions: GroupPartitionProgress[];
+ }
+ export interface GroupDescribe {
+ groupId: string;
+ state: string;
+ protocol: string;
+ protocolType: string;
+ members: Array<{
+ memberId: string;
+ clientId: string;
+ consumerId: string;
+ assignments: Array<{
+ topic: string;
+ partitions: number[];
+ }>;
+ }>;
+ topics: GroupDescribeTopic[];
+ }
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "kafka-mcp",
- "version": "0.1.1",
+ "version": "0.1.4",
  "description": "Kafka inspection CLI with topic, message, and consumer group utilities",
  "type": "module",
  "main": "./dist/cli.js",
@@ -1,29 +1,46 @@
  ---
  name: kafka-cli-inspector
- description: Use this skill when you need to inspect Kafka topics, preview messages, search Kafka data, or analyze a consumer group's progress for a topic through the local kafka-mcp CLI.
+ description: Use this skill when you need to inspect Kafka topics, preview messages, search Kafka data, configure Kafka brokers, or analyze a consumer group's state and lag through the published kafka-mcp npm CLI.
  ---

  # Kafka CLI Inspector

- Use this skill when a user wants Kafka read-only inspection from the local workspace.
+ Use this skill when a user wants Kafka read-only inspection through the published `kafka-mcp` npm CLI.

- ## Commands
+ ## Command catalog

- - `npx tsx src/cli.ts topics --search <keyword>` lists topics and supports fuzzy substring search.
- - `npx tsx src/cli.ts topic --topic <topic>` shows partition metadata for a topic.
- - `npx tsx src/cli.ts messages --topic <topic> --search <keyword> --limit <n>` previews messages from a topic and filters by topic, key, value, or headers.
- - `npx tsx src/cli.ts groups --search <keyword>` lists consumer groups and supports search.
- - `npx tsx src/cli.ts group-progress --group <groupId> --topic <topic>` shows committed offsets, latest offsets, lag, and the current member assignment per partition.
+ - `npx kafka-mcp config path` prints the user config path.
+ - `npx kafka-mcp config show` shows the user config stored under `~/.config/kafka-mcp/config.json`.
+ - `npx kafka-mcp config set --brokers <host:port>` writes user config.
+ - `npx kafka-mcp topics --search <keyword>` lists topics and supports fuzzy substring search.
+ - `npx kafka-mcp topic --topic <topic>` shows partition metadata for a topic.
+ - `npx kafka-mcp messages --topic <topic> --search <keyword> --limit <n>` previews messages from a topic and filters by topic, key, value, or headers.
+ - `npx kafka-mcp groups --search <keyword>` lists consumer groups and supports search.
+ - `npx kafka-mcp group describe --group <groupId>` returns a broad consumer group report across all assigned topics, including members, assignments, committed offsets, latest offsets, and lag.
+ - `npx kafka-mcp group-progress --group <groupId> --topic <topic>` shows committed offsets, latest offsets, lag, and the current member assignment per partition.

  ## Workflow

- 1. Check that Kafka brokers are configured through `.kafka-mcp.json`, `~/.config/kafka-mcp/config.json`, or `KAFKA_BROKERS`.
- 2. Use `topics` or `groups` first if the exact topic or consumer group is unknown.
- 3. Use `messages` for lightweight investigation. Prefer small limits and a search term.
- 4. Use `group-progress` when the task is about backlog, lag, stuck partitions, or which consumer currently owns a partition.
+ 1. Start with configuration. If brokers may not be set, run `npx kafka-mcp config show`. If nothing is configured, tell the user to run `npx kafka-mcp config set --brokers <host:port>`.
+ 2. Discover unknown names first. Use `topics --search` for topic discovery and `groups --search` for consumer group discovery.
+ 3. Inspect topic structure before deep analysis when needed. Use `topic --topic <topic>` to confirm partitions and leaders.
+ 4. Investigate message content with `messages`. Prefer small limits and a targeted search term.
+ 5. Use `group describe` when the task is broad: overall group health, which topics the group touches, how partitions are assigned, or which members appear responsible for lag.
+ 6. Use `group-progress` when the task is narrow: lag on one known topic, partition-by-partition ownership, or verifying whether one topic is stuck.
+
+ ## Recommended patterns
+
+ - Unknown topic name: `npx kafka-mcp topics --search order`
+ - Unknown consumer group: `npx kafka-mcp groups --search billing`
+ - Verify topic layout: `npx kafka-mcp topic --topic orders`
+ - Search recent business events in messages: `npx kafka-mcp messages --topic orders --search paid --limit 20`
+ - Broad consumer diagnosis: `npx kafka-mcp group describe --group billing-consumers`
+ - Focused lag diagnosis: `npx kafka-mcp group-progress --group billing-consumers --topic orders`

  ## Output guidance

- - Summarize only the relevant topics, messages, or lagging partitions.
+ - Summarize only the relevant topics, messages, lagging partitions, and responsible members.
  - Call out when a search returns no results rather than dumping empty tables.
- - When reporting lag, mention the topic, partition, committed offset, latest offset, and lag value.
+ - When reporting lag, mention the topic, partition, committed offset, latest offset, lag value, and assigned member when available.
+ - When using `group describe`, mention group state, protocol type, affected topics, and any partitions with notable lag.
+ - If configuration is missing, say that explicitly before suggesting deeper Kafka commands.