kafka-mcp 0.1.2 → 0.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -9,24 +9,49 @@
  - Fetch messages from a given topic, with search support
  - List consumer groups
  - Inspect a consumer group's consumption progress on a topic
+ - Write and view `~/.config/kafka-mcp/config.json` via commands
  - Install skill templates for Cursor, Claude, and Codex
 
  ## Features and commands
 
- The CLI currently provides the following 6 commands:
+ The CLI currently provides the following 8 commands:
 
- ### 1. `topics`
+ ### 1. `config`
+
+ Manage the user config file. The config is written to:
+
+ ```bash
+ ~/.config/kafka-mcp/config.json
+ ```
+
+ Common subcommands:
+
+ - `config path`: print the config file path
+ - `config show`: show the current user config
+ - `config set`: write config
+
+ Examples:
+
+ ```bash
+ npx kafka-mcp config path
+ npx kafka-mcp config show
+ npx kafka-mcp config set --brokers localhost:9092 --client-id kafka-mcp-cli
+ npx kafka-mcp config set --brokers localhost:9092,localhost:9093 --ssl
+ npx kafka-mcp config set --brokers localhost:9092 --sasl-mechanism plain --sasl-username demo --sasl-password secret
+ ```
+
+ ### 2. `topics`
 
  List all topics, with search by name.
 
  Examples:
 
  ```bash
- node dist/cli.js topics
- node dist/cli.js topics --search order
+ npx kafka-mcp topics
+ npx kafka-mcp topics --search order
  ```
 
- ### 2. `topic`
+ ### 3. `topic`
 
  Show metadata for a topic, including:
 
@@ -38,10 +63,10 @@ node dist/cli.js topics --search order
  Examples:
 
  ```bash
- node dist/cli.js topic --topic orders
+ npx kafka-mcp topic --topic orders
  ```
 
- ### 3. `messages`
+ ### 4. `messages`
 
  Read messages from a given topic, with keyword search.
 
@@ -63,23 +88,40 @@ node dist/cli.js topic --topic orders
  Examples:
 
  ```bash
- node dist/cli.js messages --topic orders
- node dist/cli.js messages --topic orders --search paid --limit 20
- node dist/cli.js messages --topic orders --latest
+ npx kafka-mcp messages --topic orders
+ npx kafka-mcp messages --topic orders --search paid --limit 20
+ npx kafka-mcp messages --topic orders --latest
  ```
 
- ### 4. `groups`
+ ### 5. `groups`
 
  List consumer groups, with search by group name.
 
  Examples:
 
  ```bash
- node dist/cli.js groups
- node dist/cli.js groups --search billing
+ npx kafka-mcp groups
+ npx kafka-mcp groups --search billing
  ```
 
- ### 5. `group-progress`
+ ### 6. `group describe`
+
+ Show complete information for a consumer group in one shot, including:
+
+ - basic group info
+ - state
+ - protocol / protocolType
+ - member list
+ - the topics/partitions each member owns
+ - committed offset / latest offset / lag for every topic the group touches
+
+ Example:
+
+ ```bash
+ npx kafka-mcp group describe --group billing-consumers
+ ```
+
+ ### 7. `group-progress`
 
  Inspect a consumer group's consumption progress on a topic.
 
@@ -94,10 +136,10 @@ node dist/cli.js groups --search billing
  Examples:
 
  ```bash
- node dist/cli.js group-progress --group billing-consumers --topic orders
+ npx kafka-mcp group-progress --group billing-consumers --topic orders
  ```
 
- ### 6. `skill-install`
+ ### 8. `skill-install`
 
  Install reusable skill templates, supporting:
 
@@ -119,10 +161,10 @@ node dist/cli.js group-progress --group billing-consumers --topic orders
  Examples:
 
  ```bash
- node dist/cli.js skill-install cursor
- node dist/cli.js skill-install claude
- node dist/cli.js skill-install codex
- node dist/cli.js skill-install cursor --dir /path/to/project --force
+ npx kafka-mcp skill-install cursor
+ npx kafka-mcp skill-install claude
+ npx kafka-mcp skill-install codex
+ npx kafka-mcp skill-install cursor --dir /path/to/project --force
  ```
 
  ## Configuration
@@ -133,6 +175,12 @@ The CLI reads config in the following order and uses the first source it finds
  2. `~/.config/kafka-mcp/config.json`
  3. Environment variables override the above config
 
+ It is recommended to write the user config directly with the command:
+
+ ```bash
+ npx kafka-mcp config set --brokers localhost:9092 --client-id kafka-mcp-cli
+ ```
+
  Minimal config example:
 
  ```json
@@ -202,25 +250,20 @@ npx kafka-mcp messages --topic orders --search paid
  To pin a version, you can also run:
 
  ```bash
- npx kafka-mcp@0.1.1 --help
+ npx kafka-mcp@0.1.4 --help
  ```
 
  ## Usage examples
 
  ```bash
- npx tsx src/cli.ts topics --search order
- npx tsx src/cli.ts topic --topic orders
- npx tsx src/cli.ts messages --topic orders --search paid --limit 20
- npx tsx src/cli.ts groups --search billing
- npx tsx src/cli.ts group-progress --group billing-consumers --topic orders
- npx tsx src/cli.ts skill-install cursor
- ```
-
- If you have already built the project, you can also run directly:
-
- ```bash
- node dist/cli.js topics
- node dist/cli.js messages --topic orders --search paid
+ npx kafka-mcp topics --search order
+ npx kafka-mcp topic --topic orders
+ npx kafka-mcp messages --topic orders --search paid --limit 20
+ npx kafka-mcp groups --search billing
+ npx kafka-mcp group describe --group billing-consumers
+ npx kafka-mcp group-progress --group billing-consumers --topic orders
+ npx kafka-mcp config show
+ npx kafka-mcp skill-install cursor
  ```
 
  With the published npm package, the corresponding commands can be written as:
@@ -230,7 +273,9 @@ kafka-mcp topics --search order
  kafka-mcp topic --topic orders
  kafka-mcp messages --topic orders --search paid --limit 20
  kafka-mcp groups --search billing
+ kafka-mcp group describe --group billing-consumers
  kafka-mcp group-progress --group billing-consumers --topic orders
+ kafka-mcp config show
  kafka-mcp skill-install cursor
  kafka-mcp skill-install claude
  kafka-mcp skill-install codex
package/dist/cli.js CHANGED
@@ -1,5 +1,6 @@
  #!/usr/bin/env node
  import { Command } from "commander";
+ import { readUserConfig, sanitizeConfigInput, USER_CONFIG_PATH, writeUserConfig } from "./lib/config.js";
  import { formatRows } from "./lib/format.js";
  import { KafkaCliService } from "./lib/kafka.js";
  import { installSkill } from "./lib/skill-installer.js";
@@ -12,7 +13,88 @@ function getService() {
  program
  .name("kafka-mcp")
  .description("Kafka CLI for topics, messages, and consumer group inspection")
- .version("0.1.2");
+ .version("0.1.4");
+ const configCommand = new Command("config")
+ .description("Manage kafka-mcp config under ~/.config/kafka-mcp/config.json");
+ configCommand
+ .command("path")
+ .description("Print the user config file path")
+ .action(() => {
+ console.log(USER_CONFIG_PATH);
+ });
+ configCommand
+ .command("show")
+ .description("Show the user config stored under ~/.config")
+ .action(() => {
+ console.log(JSON.stringify(readUserConfig(), null, 2));
+ });
+ configCommand
+ .command("set")
+ .description("Write Kafka connection config to ~/.config/kafka-mcp/config.json")
+ .requiredOption("-b, --brokers <brokers>", "Comma-separated broker list, for example localhost:9092,localhost:9093")
+ .option("-c, --client-id <clientId>", "Kafka client id", "kafka-mcp-cli")
+ .option("--ssl", "Enable SSL")
+ .option("--no-ssl", "Disable SSL")
+ .option("--sasl-mechanism <mechanism>", "SASL mechanism: plain | scram-sha-256 | scram-sha-512")
+ .option("--sasl-username <username>", "SASL username")
+ .option("--sasl-password <password>", "SASL password")
+ .action(async (options) => {
+ const nextConfig = sanitizeConfigInput({
+ brokers: options.brokers.split(","),
+ clientId: options.clientId,
+ ssl: options.ssl,
+ sasl: options.saslMechanism
+ ? {
+ mechanism: options.saslMechanism,
+ username: options.saslUsername ?? "",
+ password: options.saslPassword ?? ""
+ }
+ : undefined
+ });
+ if (options.saslMechanism && (!options.saslUsername || !options.saslPassword)) {
+ throw new Error("SASL username and password are required when sasl-mechanism is set.");
+ }
+ await writeUserConfig(nextConfig);
+ console.log(`Wrote config to: ${USER_CONFIG_PATH}`);
+ });
+ program.addCommand(configCommand);
+ const groupCommand = new Command("group")
+ .description("Run detailed consumer group inspection commands");
+ groupCommand
+ .command("describe")
+ .description("Show a detailed consumer group report across all assigned topics")
+ .requiredOption("-g, --group <groupId>", "Consumer group id")
+ .action(async (options) => {
+ const group = await getService().getGroupDescribe(options.group);
+ console.log(`group: ${group.groupId}`);
+ console.log(`state: ${group.state}`);
+ console.log(`protocol: ${group.protocol}`);
+ console.log(`protocolType: ${group.protocolType}`);
+ console.log("");
+ console.log("members:");
+ console.log(formatRows(group.members.map((member) => ({
+ memberId: member.memberId,
+ clientId: member.clientId,
+ consumerHost: member.consumerId,
+ assignments: member.assignments
+ .map((assignment) => `${assignment.topic}[${assignment.partitions.join(",")}]`)
+ .join(" ")
+ }))));
+ for (const topic of group.topics) {
+ console.log("");
+ console.log(`topic: ${topic.topic}`);
+ console.log(formatRows(topic.partitions.map((partition) => ({
+ partition: partition.partition,
+ memberId: partition.memberId ?? "",
+ clientId: partition.clientId ?? "",
+ consumerHost: partition.consumerId ?? "",
+ committed: partition.committedOffset,
+ latest: partition.latestOffset,
+ lag: partition.lag ?? ""
+ }))));
+ }
+ });
+ program.addCommand(groupCommand);
  program
  .command("topics")
  .description("List topics")
package/dist/lib/config.d.ts CHANGED
@@ -1,2 +1,6 @@
  import type { KafkaCliConfig } from "./types.js";
+ export declare const USER_CONFIG_PATH: string;
+ export declare function readUserConfig(): Partial<KafkaCliConfig>;
+ export declare function writeUserConfig(config: Partial<KafkaCliConfig>): Promise<void>;
+ export declare function sanitizeConfigInput(config: Partial<KafkaCliConfig>): Partial<KafkaCliConfig>;
  export declare function loadConfig(): KafkaCliConfig;
package/dist/lib/config.js CHANGED
@@ -1,10 +1,12 @@
  import { existsSync, readFileSync } from "node:fs";
+ import { mkdir, writeFile } from "node:fs/promises";
  import { homedir } from "node:os";
- import { join, resolve } from "node:path";
+ import { dirname, join, resolve } from "node:path";
  const DEFAULT_CLIENT_ID = "kafka-mcp-cli";
+ export const USER_CONFIG_PATH = join(homedir(), ".config", "kafka-mcp", "config.json");
  const CONFIG_CANDIDATES = [
  resolve(process.cwd(), ".kafka-mcp.json"),
- join(homedir(), ".config", "kafka-mcp", "config.json")
+ USER_CONFIG_PATH
  ];
  function isSupportedMechanism(value) {
  return value === "plain" || value === "scram-sha-256" || value === "scram-sha-512";
@@ -19,6 +21,39 @@ function readConfigFile() {
  }
  return {};
  }
+ export function readUserConfig() {
+ if (!existsSync(USER_CONFIG_PATH)) {
+ return {};
+ }
+ return JSON.parse(readFileSync(USER_CONFIG_PATH, "utf8"));
+ }
+ export async function writeUserConfig(config) {
+ await mkdir(dirname(USER_CONFIG_PATH), { recursive: true });
+ await writeFile(USER_CONFIG_PATH, `${JSON.stringify(config, null, 2)}\n`, "utf8");
+ }
+ export function sanitizeConfigInput(config) {
+ const sanitized = {};
+ if (config.brokers) {
+ sanitized.brokers = config.brokers.map((item) => item.trim()).filter(Boolean);
+ }
+ if (config.clientId) {
+ sanitized.clientId = config.clientId;
+ }
+ if (config.ssl !== undefined) {
+ sanitized.ssl = config.ssl;
+ }
+ if (config.sasl) {
+ if (!isSupportedMechanism(config.sasl.mechanism)) {
+ throw new Error(`Unsupported SASL mechanism: ${config.sasl.mechanism}`);
+ }
+ sanitized.sasl = {
+ mechanism: config.sasl.mechanism,
+ username: config.sasl.username,
+ password: config.sasl.password
+ };
+ }
+ return sanitized;
+ }
  export function loadConfig() {
  const fileConfig = readConfigFile();
  const brokersFromEnv = process.env.KAFKA_BROKERS
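
For reference, a minimal standalone sketch of the sanitization behavior added above (the SASL branch is omitted, and the command wiring is simplified for illustration): broker entries are trimmed, empty entries are dropped, and only explicitly provided fields are persisted.

```javascript
// Standalone sketch mirroring the sanitizeConfigInput logic above
// (illustration only; the real function also validates a SASL block).
function sanitizeConfigInput(config) {
  const sanitized = {};
  if (config.brokers) {
    sanitized.brokers = config.brokers.map((item) => item.trim()).filter(Boolean);
  }
  if (config.clientId) {
    sanitized.clientId = config.clientId;
  }
  if (config.ssl !== undefined) {
    sanitized.ssl = config.ssl;
  }
  return sanitized;
}

// Roughly what `config set --brokers "localhost:9092, localhost:9093" --ssl`
// would feed in before writing ~/.config/kafka-mcp/config.json:
const next = sanitizeConfigInput({
  brokers: "localhost:9092, localhost:9093".split(","),
  clientId: "kafka-mcp-cli",
  ssl: true,
});
console.log(JSON.stringify(next, null, 2));
```

Note that `--no-ssl` works the same way: commander sets `options.ssl` to `false`, which still passes the `!== undefined` check and is persisted explicitly.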
package/dist/lib/kafka.d.ts CHANGED
@@ -1,5 +1,5 @@
  import { type ITopicMetadata } from "kafkajs";
- import type { GroupTopicProgress, TopicMessage } from "./types.js";
+ import type { GroupDescribe, GroupTopicProgress, TopicMessage } from "./types.js";
  export declare class KafkaCliService {
  private readonly kafka;
  constructor();
@@ -14,4 +14,5 @@ export declare class KafkaCliService {
  }): Promise<TopicMessage[]>;
  listConsumerGroups(search?: string): Promise<string[]>;
  getGroupTopicProgress(groupId: string, topic: string): Promise<GroupTopicProgress>;
+ getGroupDescribe(groupId: string): Promise<GroupDescribe>;
  }
package/dist/lib/kafka.js CHANGED
@@ -159,6 +159,65 @@ export class KafkaCliService {
  await admin.disconnect();
  }
  }
+ async getGroupDescribe(groupId) {
+ const admin = this.kafka.admin();
+ await admin.connect();
+ try {
+ const groupDescriptions = await admin.describeGroups([groupId]);
+ const description = groupDescriptions.groups[0];
+ if (!description) {
+ throw new Error(`Consumer group not found: ${groupId}`);
+ }
+ const groupOffsets = await admin.fetchOffsets({ groupId });
+ const topics = await Promise.all(groupOffsets.map(async (groupTopicOffsets) => {
+ const topicAssignments = decodeAssignmentsForTopic(description, groupTopicOffsets.topic);
+ const assignmentMap = buildAssignmentMap(description, topicAssignments);
+ const topicOffsets = await admin.fetchTopicOffsets(groupTopicOffsets.topic);
+ const partitions = topicOffsets.map((partitionOffset) => {
+ const committed = groupTopicOffsets.partitions.find((partition) => partition.partition === partitionOffset.partition);
+ const latest = toBigIntOrNull(partitionOffset.high);
+ const committedOffset = toBigIntOrNull(committed?.offset ?? "-1");
+ const lag = latest !== null && committedOffset !== null && latest >= committedOffset
+ ? (latest - committedOffset).toString()
+ : null;
+ const member = assignmentMap.get(partitionOffset.partition);
+ return {
+ partition: partitionOffset.partition,
+ memberId: member?.memberId ?? null,
+ clientId: member?.clientId ?? null,
+ consumerId: member?.consumerId ?? null,
+ committedOffset: committed?.offset ?? "-1",
+ latestOffset: partitionOffset.high,
+ lag
+ };
+ });
+ return {
+ topic: groupTopicOffsets.topic,
+ partitions
+ };
+ }));
+ const memberAssignments = decodeAssignmentsForAllTopics(description);
+ return {
+ groupId,
+ state: description.state,
+ protocol: description.protocol,
+ protocolType: description.protocolType,
+ members: description.members.map((member) => ({
+ memberId: member.memberId,
+ clientId: member.clientId,
+ consumerId: member.clientHost,
+ assignments: Array.from(memberAssignments.get(member.memberId)?.entries() ?? []).map(([topic, partitions]) => ({
+ topic,
+ partitions
+ })),
+ })),
+ topics: topics.sort((left, right) => left.topic.localeCompare(right.topic))
+ };
+ }
+ finally {
+ await admin.disconnect();
+ }
+ }
  }
  function buildAssignmentMap(description, decodedAssignments) {
  const assignment = new Map();
@@ -181,6 +240,18 @@ function decodeAssignmentsForTopic(description, topic) {
  }
  return assignments;
  }
+ function decodeAssignmentsForAllTopics(description) {
+ const assignments = new Map();
+ for (const member of description.members) {
+ const decoded = AssignerProtocol.MemberAssignment.decode(member.memberAssignment);
+ const memberAssignments = new Map();
+ for (const [topic, partitions] of Object.entries(decoded?.assignment ?? {})) {
+ memberAssignments.set(topic, partitions);
+ }
+ assignments.set(member.memberId, memberAssignments);
+ }
+ return assignments;
+ }
  function decodeMessageValue(message, field) {
  return decodeValue(message[field]);
  }
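
The lag arithmetic in the `getGroupDescribe` hunk above can be restated as a small sketch. Here `toBigIntOrNull` is re-sketched for illustration (the real helper lives elsewhere in this file and may differ in detail):

```javascript
// Illustrative re-sketch of the offset-parsing helper used above.
function toBigIntOrNull(value) {
  try {
    return BigInt(value);
  } catch {
    return null;
  }
}

// lag = latest - committed, computed with BigInt so large offsets stay exact;
// null when either offset is unparsable or the committed offset is ahead.
// A missing commit defaults to "-1", matching the code above, so lag reads
// as latest + 1 for partitions the group has never committed.
function computeLag(latestOffset, committedOffset) {
  const latest = toBigIntOrNull(latestOffset);
  const committed = toBigIntOrNull(committedOffset ?? "-1");
  return latest !== null && committed !== null && latest >= committed
    ? (latest - committed).toString()
    : null;
}

console.log(computeLag("100", "90")); // "10"
```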
package/dist/lib/skill-installer.js CHANGED
@@ -34,12 +34,15 @@ alwaysApply: false
  ---
 
  - Use \`npx kafka-mcp\` for read-only Kafka inspection tasks.
- - Start with \`npx kafka-mcp topics --search <keyword>\` when the exact topic is unknown.
+ - If configuration may be missing, start with \`npx kafka-mcp config show\` or write it with \`npx kafka-mcp config set --brokers <host:port>\`.
+ - Use \`npx kafka-mcp topics --search <keyword>\` when the exact topic is unknown.
+ - Use \`npx kafka-mcp topic --topic <topic>\` to inspect topic partition metadata.
  - Use \`npx kafka-mcp messages --topic <topic> --search <keyword> --limit <n>\` to preview messages.
  - Use \`npx kafka-mcp groups --search <keyword>\` to find consumer groups.
- - Use \`npx kafka-mcp group-progress --group <groupId> --topic <topic>\` to inspect committed offsets, latest offsets, and lag.
+ - Use \`npx kafka-mcp group describe --group <groupId>\` for a broad group report across all assigned topics.
+ - Use \`npx kafka-mcp group-progress --group <groupId> --topic <topic>\` for focused lag analysis on one topic.
  - Keep limits small unless the user explicitly asks for broader inspection.
- - Summarize only the relevant topics, messages, or lagging partitions.
+ - Summarize only the relevant topics, messages, members, or lagging partitions.
  \`;
  }
  if (target === "claude") {
@@ -52,46 +55,67 @@ Use the published \`kafka-mcp\` CLI through \`npx kafka-mcp\` to investigate Kaf
 
  Preferred workflow:
 
- 1. If the exact topic is unknown, run \`npx kafka-mcp topics --search $ARGUMENTS\`.
- 2. If the exact consumer group is unknown, run \`npx kafka-mcp groups --search $ARGUMENTS\`.
- 3. For message lookup, run \`npx kafka-mcp messages --topic <topic> --search <keyword> --limit 20\`.
- 4. For consumer lag, run \`npx kafka-mcp group-progress --group <groupId> --topic <topic>\`.
+ 1. If configuration may be missing, run \`npx kafka-mcp config show\`. If needed, instruct the user to configure brokers with \`npx kafka-mcp config set --brokers <host:port>\`.
+ 2. If the exact topic is unknown, run \`npx kafka-mcp topics --search $ARGUMENTS\`.
+ 3. If the exact consumer group is unknown, run \`npx kafka-mcp groups --search $ARGUMENTS\`.
+ 4. For topic structure, run \`npx kafka-mcp topic --topic <topic>\`.
+ 5. For message lookup, run \`npx kafka-mcp messages --topic <topic> --search <keyword> --limit 20\`.
+ 6. For a broad group report, run \`npx kafka-mcp group describe --group <groupId>\`.
+ 7. For focused lag inspection on one topic, run \`npx kafka-mcp group-progress --group <groupId> --topic <topic>\`.
 
  When responding:
 
- - Summarize only the relevant topics, messages, or lagging partitions.
+ - Summarize only the relevant topics, messages, members, or lagging partitions.
  - Mention topic, partition, committed offset, latest offset, and lag when reporting progress issues.
+ - If using \`group describe\`, call out group state, affected topics, and which member currently owns lagging partitions.
  - Call out clearly when there are no matches.
  \`;
  }
  return \`---
  name: kafka-cli-inspector
- description: Use this skill when you need to inspect Kafka topics, preview messages, search Kafka data, or analyze a consumer group's progress for a topic through the published kafka-mcp npm CLI.
+ description: Use this skill when you need to inspect Kafka topics, preview messages, search Kafka data, configure Kafka brokers, or analyze a consumer group's state and lag through the published kafka-mcp npm CLI.
  ---
 
  # Kafka CLI Inspector
 
  Use this skill when a user wants Kafka read-only inspection through the published \`kafka-mcp\` npm CLI.
 
- ## Commands
+ ## Command catalog
 
+ - \`npx kafka-mcp config path\` prints the user config path.
+ - \`npx kafka-mcp config show\` shows the user config stored under \`~/.config/kafka-mcp/config.json\`.
+ - \`npx kafka-mcp config set --brokers <host:port>\` writes user config.
  - \`npx kafka-mcp topics --search <keyword>\` lists topics and supports fuzzy substring search.
  - \`npx kafka-mcp topic --topic <topic>\` shows partition metadata for a topic.
  - \`npx kafka-mcp messages --topic <topic> --search <keyword> --limit <n>\` previews messages from a topic and filters by topic, key, value, or headers.
  - \`npx kafka-mcp groups --search <keyword>\` lists consumer groups and supports search.
+ - \`npx kafka-mcp group describe --group <groupId>\` returns a broad consumer group report across all assigned topics, including members, assignments, committed offsets, latest offsets, and lag.
  - \`npx kafka-mcp group-progress --group <groupId> --topic <topic>\` shows committed offsets, latest offsets, lag, and the current member assignment per partition.
 
  ## Workflow
 
- 1. Check that Kafka brokers are configured through \`.kafka-mcp.json\`, \`~/.config/kafka-mcp/config.json\`, or \`KAFKA_BROKERS\`.
- 2. Use \`topics\` or \`groups\` first if the exact topic or consumer group is unknown.
- 3. Use \`messages\` for lightweight investigation. Prefer small limits and a search term.
- 4. Use \`group-progress\` when the task is about backlog, lag, stuck partitions, or which consumer currently owns a partition.
+ 1. Start with configuration. If brokers may not be set, run \`npx kafka-mcp config show\`. If nothing is configured, tell the user to run \`npx kafka-mcp config set --brokers <host:port>\`.
+ 2. Discover unknown names first. Use \`topics --search\` for topic discovery and \`groups --search\` for consumer group discovery.
+ 3. Inspect topic structure before deep analysis when needed. Use \`topic --topic <topic>\` to confirm partitions and leaders.
+ 4. Investigate message content with \`messages\`. Prefer small limits and a targeted search term.
+ 5. Use \`group describe\` when the task is broad: overall group health, which topics the group touches, how partitions are assigned, or which members appear responsible for lag.
+ 6. Use \`group-progress\` when the task is narrow: lag on one known topic, partition-by-partition ownership, or verifying whether one topic is stuck.
+
+ ## Recommended patterns
+
+ - Unknown topic name: \`npx kafka-mcp topics --search order\`
+ - Unknown consumer group: \`npx kafka-mcp groups --search billing\`
+ - Verify topic layout: \`npx kafka-mcp topic --topic orders\`
+ - Search recent business events in messages: \`npx kafka-mcp messages --topic orders --search paid --limit 20\`
+ - Broad consumer diagnosis: \`npx kafka-mcp group describe --group billing-consumers\`
+ - Focused lag diagnosis: \`npx kafka-mcp group-progress --group billing-consumers --topic orders\`
 
  ## Output guidance
 
- - Summarize only the relevant topics, messages, or lagging partitions.
+ - Summarize only the relevant topics, messages, lagging partitions, and responsible members.
  - Call out when a search returns no results rather than dumping empty tables.
- - When reporting lag, mention the topic, partition, committed offset, latest offset, and lag value.
+ - When reporting lag, mention the topic, partition, committed offset, latest offset, lag value, and assigned member when available.
+ - When using \`group describe\`, mention group state, protocol type, affected topics, and any partitions with notable lag.
+ - If configuration is missing, say that explicitly before suggesting deeper Kafka commands.
  \`;
  }
package/dist/lib/types.d.ts CHANGED
@@ -38,3 +38,23 @@ export interface GroupTopicProgress {
  }>;
  partitions: GroupPartitionProgress[];
  }
+ export interface GroupDescribeTopic {
+ topic: string;
+ partitions: GroupPartitionProgress[];
+ }
+ export interface GroupDescribe {
+ groupId: string;
+ state: string;
+ protocol: string;
+ protocolType: string;
+ members: Array<{
+ memberId: string;
+ clientId: string;
+ consumerId: string;
+ assignments: Array<{
+ topic: string;
+ partitions: number[];
+ }>;
+ }>;
+ topics: GroupDescribeTopic[];
+ }
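
For orientation, a hypothetical value matching the `GroupDescribe` shape above might look like the following. Every name and number here is invented for illustration; note that `consumerId` carries the client host, as populated in `kafka.js` above.

```javascript
// Hypothetical GroupDescribe value; all identifiers are invented.
const example = {
  groupId: "billing-consumers",
  state: "Stable",
  protocol: "RoundRobinAssigner",
  protocolType: "consumer",
  members: [
    {
      memberId: "consumer-1-d9e2",
      clientId: "consumer-1",
      consumerId: "/10.0.0.5", // client host, per the mapping in kafka.js
      assignments: [{ topic: "orders", partitions: [0, 1] }],
    },
  ],
  topics: [
    {
      topic: "orders",
      partitions: [
        {
          partition: 0,
          memberId: "consumer-1-d9e2",
          clientId: "consumer-1",
          consumerId: "/10.0.0.5",
          committedOffset: "90", // offsets are strings, lag is their difference
          latestOffset: "100",
          lag: "10",
        },
      ],
    },
  ],
};
console.log(example.topics[0].partitions[0].lag); // "10"
```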
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "kafka-mcp",
- "version": "0.1.2",
+ "version": "0.1.4",
  "description": "Kafka inspection CLI with topic, message, and consumer group utilities",
  "type": "module",
  "main": "./dist/cli.js",
@@ -1,29 +1,46 @@
  ---
  name: kafka-cli-inspector
- description: Use this skill when you need to inspect Kafka topics, preview messages, search Kafka data, or analyze a consumer group's progress for a topic through the local kafka-mcp CLI.
+ description: Use this skill when you need to inspect Kafka topics, preview messages, search Kafka data, configure Kafka brokers, or analyze a consumer group's state and lag through the published kafka-mcp npm CLI.
  ---
 
  # Kafka CLI Inspector
 
  Use this skill when a user wants Kafka read-only inspection through the published `kafka-mcp` npm CLI.
 
- ## Commands
+ ## Command catalog
 
+ - `npx kafka-mcp config path` prints the user config path.
+ - `npx kafka-mcp config show` shows the user config stored under `~/.config/kafka-mcp/config.json`.
+ - `npx kafka-mcp config set --brokers <host:port>` writes user config.
  - `npx kafka-mcp topics --search <keyword>` lists topics and supports fuzzy substring search.
  - `npx kafka-mcp topic --topic <topic>` shows partition metadata for a topic.
  - `npx kafka-mcp messages --topic <topic> --search <keyword> --limit <n>` previews messages from a topic and filters by topic, key, value, or headers.
  - `npx kafka-mcp groups --search <keyword>` lists consumer groups and supports search.
+ - `npx kafka-mcp group describe --group <groupId>` returns a broad consumer group report across all assigned topics, including members, assignments, committed offsets, latest offsets, and lag.
  - `npx kafka-mcp group-progress --group <groupId> --topic <topic>` shows committed offsets, latest offsets, lag, and the current member assignment per partition.
 
  ## Workflow
 
- 1. Check that Kafka brokers are configured through `.kafka-mcp.json`, `~/.config/kafka-mcp/config.json`, or `KAFKA_BROKERS`.
- 2. Use `topics` or `groups` first if the exact topic or consumer group is unknown.
- 3. Use `messages` for lightweight investigation. Prefer small limits and a search term.
- 4. Use `group-progress` when the task is about backlog, lag, stuck partitions, or which consumer currently owns a partition.
+ 1. Start with configuration. If brokers may not be set, run `npx kafka-mcp config show`. If nothing is configured, tell the user to run `npx kafka-mcp config set --brokers <host:port>`.
+ 2. Discover unknown names first. Use `topics --search` for topic discovery and `groups --search` for consumer group discovery.
+ 3. Inspect topic structure before deep analysis when needed. Use `topic --topic <topic>` to confirm partitions and leaders.
+ 4. Investigate message content with `messages`. Prefer small limits and a targeted search term.
+ 5. Use `group describe` when the task is broad: overall group health, which topics the group touches, how partitions are assigned, or which members appear responsible for lag.
+ 6. Use `group-progress` when the task is narrow: lag on one known topic, partition-by-partition ownership, or verifying whether one topic is stuck.
+
+ ## Recommended patterns
+
+ - Unknown topic name: `npx kafka-mcp topics --search order`
+ - Unknown consumer group: `npx kafka-mcp groups --search billing`
+ - Verify topic layout: `npx kafka-mcp topic --topic orders`
+ - Search recent business events in messages: `npx kafka-mcp messages --topic orders --search paid --limit 20`
+ - Broad consumer diagnosis: `npx kafka-mcp group describe --group billing-consumers`
+ - Focused lag diagnosis: `npx kafka-mcp group-progress --group billing-consumers --topic orders`
 
  ## Output guidance
 
- - Summarize only the relevant topics, messages, or lagging partitions.
+ - Summarize only the relevant topics, messages, lagging partitions, and responsible members.
  - Call out when a search returns no results rather than dumping empty tables.
- - When reporting lag, mention the topic, partition, committed offset, latest offset, and lag value.
+ - When reporting lag, mention the topic, partition, committed offset, latest offset, lag value, and assigned member when available.
+ - When using `group describe`, mention group state, protocol type, affected topics, and any partitions with notable lag.
+ - If configuration is missing, say that explicitly before suggesting deeper Kafka commands.