kafka-mcp 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,172 @@
# Kafka MCP CLI

`kafka-mcp` is a Kafka query CLI built with TypeScript, shipped with a Skill definition that AI agents can use.

Currently supported capabilities:

- Read Kafka broker addresses from a config file or environment variables and connect
- List topics, with search
- Read messages from a given topic, with search
- List consumer groups
- Inspect a consumer group's progress on a given topic

## Features and commands

The CLI currently provides the following five commands:

### 1. `topics`

List all topics, with optional name search.

Examples:

```bash
node dist/cli.js topics
node dist/cli.js topics --search order
```

### 2. `topic`

Show metadata for a topic, including:

- Partition id
- Leader
- Replicas
- ISR

Example:

```bash
node dist/cli.js topic --topic orders
```

### 3. `messages`

Read messages from a topic, with optional keyword search.

The search covers:

- Topic name
- Message key
- Message value
- Message headers

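A message matches when any of those fields contains the keyword, case-insensitively. A minimal sketch of that matching rule (the `PreviewMessage` shape here is an illustrative stand-in, not the CLI's exact type):

```typescript
// Case-insensitive keyword match across topic, key, value, and headers.
// PreviewMessage is a simplified stand-in for the CLI's message shape.
interface PreviewMessage {
  topic: string;
  key: string | null;
  value: string | null;
  headers: Record<string, string>;
}

function matchesKeyword(message: PreviewMessage, search?: string): boolean {
  if (!search) {
    return true; // no keyword: every message matches
  }
  const needle = search.toLowerCase();
  const haystacks = [
    message.topic,
    message.key ?? "",
    message.value ?? "",
    JSON.stringify(message.headers)
  ];
  return haystacks.some((haystack) => haystack.toLowerCase().includes(needle));
}
```

A search for `paid` would therefore also match a message whose header or key contains `PAID`.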
Common options:

- `--topic`: the topic to read
- `--search`: keyword to filter by
- `--limit`: maximum number of matching messages to return, default `10`
- `--timeout-ms`: read timeout in milliseconds, default `5000`
- `--latest`: read from the latest offsets instead of the earliest

Examples:

```bash
node dist/cli.js messages --topic orders
node dist/cli.js messages --topic orders --search paid --limit 20
node dist/cli.js messages --topic orders --latest
```

### 4. `groups`

List consumer groups, with optional group-id search.

Examples:

```bash
node dist/cli.js groups
node dist/cli.js groups --search billing
```

### 5. `group-progress`

Show a consumer group's progress on a topic.

The output includes:

- Partition id
- The consumer member currently assigned to the partition
- Committed offset
- Latest topic offset
- Lag

Example:

```bash
node dist/cli.js group-progress --group billing-consumers --topic orders
```
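
Lag is the gap between the topic's latest offset and the group's committed offset. A minimal sketch of that arithmetic (assuming, as a simplification, that offsets arrive as decimal strings and `-1` marks a partition with no committed offset):

```typescript
// Lag = latest offset - committed offset, using BigInt so very large
// offsets keep full precision. "-1" means the offset is unavailable.
function computeLag(latestOffset: string, committedOffset: string): string | null {
  if (latestOffset === "-1" || committedOffset === "-1") {
    return null; // nothing to compare yet
  }
  const lag = BigInt(latestOffset) - BigInt(committedOffset);
  return lag >= 0n ? lag.toString() : null;
}
```

A partition with latest offset `120` and committed offset `100` therefore reports a lag of `20`.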

## Configuration

The CLI reads configuration from the following sources in order, using the first one it finds:

1. `.kafka-mcp.json` in the current directory
2. `~/.config/kafka-mcp/config.json`
3. Environment variables, which override values from either file

Minimal config example:

```json
{
  "brokers": ["localhost:9092"],
  "clientId": "kafka-mcp-cli",
  "ssl": false
}
```

If your Kafka cluster has SASL enabled, add the credentials to the config file:

```json
{
  "brokers": ["localhost:9092"],
  "clientId": "kafka-mcp-cli",
  "sasl": {
    "mechanism": "plain",
    "username": "username",
    "password": "password"
  }
}
```

Supported environment variables:

- `KAFKA_BROKERS`
- `KAFKA_CLIENT_ID`
- `KAFKA_SSL`
- `KAFKA_SASL_MECHANISM`
- `KAFKA_SASL_USERNAME`
- `KAFKA_SASL_PASSWORD`

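For example, the same settings as the minimal JSON config can be supplied through the environment instead (broker addresses here are placeholders):

```shell
# Placeholder broker addresses; KAFKA_BROKERS is a comma-separated list.
export KAFKA_BROKERS="broker1:9092,broker2:9092"
export KAFKA_CLIENT_ID="kafka-mcp-cli"
export KAFKA_SSL="false"
# SASL credentials, if the cluster requires them:
# export KAFKA_SASL_MECHANISM="plain"
# export KAFKA_SASL_USERNAME="username"
# export KAFKA_SASL_PASSWORD="password"
```
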
## Installation

```bash
npm install
npm run build
```

To use it as a global command once it has been published to npm:

```bash
npm install -g kafka-mcp
```

## Usage examples

```bash
npx tsx src/cli.ts topics --search order
npx tsx src/cli.ts topic --topic orders
npx tsx src/cli.ts messages --topic orders --search paid --limit 20
npx tsx src/cli.ts groups --search billing
npx tsx src/cli.ts group-progress --group billing-consumers --topic orders
```

If the project has already been built, you can run the compiled output directly:

```bash
node dist/cli.js topics
node dist/cli.js messages --topic orders --search paid
```

## Skill

The reusable Skill definition lives in `skills/kafka-cli/SKILL.md`. Building on this CLI, it can be extended into Kafka tooling for Codex, Claude, or other agents.
package/dist/cli.d.ts ADDED
@@ -0,0 +1,2 @@
#!/usr/bin/env node
export {};
package/dist/cli.js ADDED
@@ -0,0 +1,96 @@
#!/usr/bin/env node
import { Command } from "commander";
import { formatRows } from "./lib/format.js";
import { KafkaCliService } from "./lib/kafka.js";
const program = new Command();
let service;
function getService() {
  service ??= new KafkaCliService();
  return service;
}
program
  .name("kafka-mcp")
  .description("Kafka CLI for topics, messages, and consumer group inspection")
  .version("0.1.0");
program
  .command("topics")
  .description("List topics")
  .option("-s, --search <keyword>", "Filter topic names")
  .action(async (options) => {
    const topics = await getService().listTopics(options.search);
    console.log(formatRows(topics.map((topic) => ({ topic }))));
  });
program
  .command("topic")
  .description("Show metadata for a topic")
  .requiredOption("-t, --topic <topic>", "Topic name")
  .action(async (options) => {
    const topic = await getService().getTopicMetadata(options.topic);
    if (!topic) {
      throw new Error(`Topic not found: ${options.topic}`);
    }
    console.log(formatRows(topic.partitions.map((partition) => ({
      partition: partition.partitionId,
      leader: partition.leader,
      replicas: partition.replicas.join(","),
      isr: partition.isr.join(",")
    }))));
  });
program
  .command("messages")
  .description("Read messages from a topic")
  .requiredOption("-t, --topic <topic>", "Topic name")
  .option("-s, --search <keyword>", "Filter by topic, key, value, or headers")
  .option("-l, --limit <number>", "Max matching messages", "10")
  .option("--timeout-ms <number>", "Stop after N milliseconds", "5000")
  .option("--latest", "Read from latest offsets instead of beginning")
  .action(async (options) => {
    const messages = await getService().consumeMessages({
      topic: options.topic,
      search: options.search,
      limit: Number.parseInt(options.limit, 10),
      timeoutMs: Number.parseInt(options.timeoutMs, 10),
      fromBeginning: !options.latest
    });
    console.log(formatRows(messages.map((message) => ({
      partition: message.partition,
      offset: message.offset,
      timestamp: message.timestamp ?? "",
      key: message.key ?? "",
      value: message.value ?? ""
    }))));
  });
program
  .command("groups")
  .description("List consumer groups")
  .option("-s, --search <keyword>", "Filter group ids")
  .action(async (options) => {
    const groups = await getService().listConsumerGroups(options.search);
    console.log(formatRows(groups.map((groupId) => ({ groupId }))));
  });
program
  .command("group-progress")
  .description("Show a consumer group's progress on a topic")
  .requiredOption("-g, --group <groupId>", "Consumer group id")
  .requiredOption("-t, --topic <topic>", "Topic name")
  .action(async (options) => {
    const progress = await getService().getGroupTopicProgress(options.group, options.topic);
    console.log(`group: ${progress.groupId}`);
    console.log(`topic: ${progress.topic}`);
    console.log(`state: ${progress.state}`);
    console.log("");
    console.log(formatRows(progress.partitions.map((partition) => ({
      partition: partition.partition,
      memberId: partition.memberId ?? "",
      clientId: partition.clientId ?? "",
      consumerHost: partition.consumerId ?? "",
      committed: partition.committedOffset,
      latest: partition.latestOffset,
      lag: partition.lag ?? ""
    }))));
  });
program.parseAsync(process.argv).catch((error) => {
  const message = error instanceof Error ? error.message : String(error);
  console.error(`Error: ${message}`);
  process.exitCode = 1;
});
@@ -0,0 +1,2 @@
import type { KafkaCliConfig } from "./types.js";
export declare function loadConfig(): KafkaCliConfig;
@@ -0,0 +1,60 @@
import { existsSync, readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join, resolve } from "node:path";
const DEFAULT_CLIENT_ID = "kafka-mcp-cli";
const CONFIG_CANDIDATES = [
  resolve(process.cwd(), ".kafka-mcp.json"),
  join(homedir(), ".config", "kafka-mcp", "config.json")
];
function isSupportedMechanism(value) {
  return value === "plain" || value === "scram-sha-256" || value === "scram-sha-512";
}
function readConfigFile() {
  for (const filePath of CONFIG_CANDIDATES) {
    if (!existsSync(filePath)) {
      continue;
    }
    const parsed = JSON.parse(readFileSync(filePath, "utf8"));
    return parsed;
  }
  return {};
}
export function loadConfig() {
  const fileConfig = readConfigFile();
  const brokersFromEnv = process.env.KAFKA_BROKERS
    ?.split(",")
    .map((item) => item.trim())
    .filter(Boolean);
  const brokers = brokersFromEnv ?? fileConfig.brokers ?? [];
  if (brokers.length === 0) {
    throw new Error("Kafka brokers are missing. Set KAFKA_BROKERS or create .kafka-mcp.json / ~/.config/kafka-mcp/config.json.");
  }
  const config = {
    brokers,
    clientId: process.env.KAFKA_CLIENT_ID || fileConfig.clientId || DEFAULT_CLIENT_ID
  };
  const ssl = process.env.KAFKA_SSL;
  if (ssl !== undefined) {
    config.ssl = ssl === "true";
  }
  else if (fileConfig.ssl !== undefined) {
    config.ssl = fileConfig.ssl;
  }
  const mechanism = process.env.KAFKA_SASL_MECHANISM ?? fileConfig.sasl?.mechanism;
  const username = process.env.KAFKA_SASL_USERNAME ?? fileConfig.sasl?.username;
  const password = process.env.KAFKA_SASL_PASSWORD ?? fileConfig.sasl?.password;
  if (mechanism && username && password) {
    if (!isSupportedMechanism(mechanism)) {
      throw new Error(`Unsupported SASL mechanism: ${mechanism}`);
    }
    config.sasl = {
      mechanism,
      username,
      password
    };
  }
  return config;
}
@@ -0,0 +1 @@
export declare function formatRows(rows: Array<Record<string, string | number | null>>): string;
@@ -0,0 +1,16 @@
export function formatRows(rows) {
  if (rows.length === 0) {
    return "No results.";
  }
  const headers = Object.keys(rows[0]);
  const widths = headers.map((header) => Math.max(header.length, ...rows.map((row) => String(row[header] ?? "").length)));
  const formatLine = (values) => values.map((value, index) => value.padEnd(widths[index])).join(" ");
  const lines = [
    formatLine(headers),
    formatLine(widths.map((width) => "-".repeat(width)))
  ];
  for (const row of rows) {
    lines.push(formatLine(headers.map((header) => String(row[header] ?? ""))));
  }
  return lines.join("\n");
}
@@ -0,0 +1,17 @@
import { type ITopicMetadata } from "kafkajs";
import type { GroupTopicProgress, TopicMessage } from "./types.js";
export declare class KafkaCliService {
  private readonly kafka;
  constructor();
  listTopics(search?: string): Promise<string[]>;
  getTopicMetadata(topic: string): Promise<ITopicMetadata | undefined>;
  consumeMessages(options: {
    topic: string;
    search?: string;
    limit: number;
    timeoutMs: number;
    fromBeginning: boolean;
  }): Promise<TopicMessage[]>;
  listConsumerGroups(search?: string): Promise<string[]>;
  getGroupTopicProgress(groupId: string, topic: string): Promise<GroupTopicProgress>;
}
@@ -0,0 +1,210 @@
import { AssignerProtocol, Kafka } from "kafkajs";
import { loadConfig } from "./config.js";
import { filterTopics, matchMessage } from "./search.js";
function decodeValue(value) {
  if (typeof value === "string") {
    return value;
  }
  return value ? value.toString("utf8") : null;
}
function decodeHeaderValue(value) {
  if (value === undefined) {
    return "";
  }
  if (Array.isArray(value)) {
    return value.map((item) => decodeValue(item) ?? "").join(",");
  }
  return decodeValue(value) ?? "";
}
function toBigIntOrNull(value) {
  if (!value || value === "-1") {
    return null;
  }
  return BigInt(value);
}
export class KafkaCliService {
  kafka;
  constructor() {
    const config = loadConfig();
    this.kafka = new Kafka({
      clientId: config.clientId,
      brokers: config.brokers,
      ssl: config.ssl,
      sasl: toKafkaSasl(config.sasl)
    });
  }
  async listTopics(search) {
    const admin = this.kafka.admin();
    await admin.connect();
    try {
      const metadata = await admin.fetchTopicMetadata();
      return filterTopics(metadata.topics.map((topic) => topic.name).sort((left, right) => left.localeCompare(right)), search);
    }
    finally {
      await admin.disconnect();
    }
  }
  async getTopicMetadata(topic) {
    const admin = this.kafka.admin();
    await admin.connect();
    try {
      const metadata = await admin.fetchTopicMetadata({ topics: [topic] });
      return metadata.topics.find((item) => item.name === topic);
    }
    finally {
      await admin.disconnect();
    }
  }
  async consumeMessages(options) {
    const consumer = this.kafka.consumer({
      groupId: `kafka-mcp-cli-preview-${Date.now()}`
    });
    const matches = [];
    await consumer.connect();
    await consumer.subscribe({ topic: options.topic, fromBeginning: options.fromBeginning });
    const timer = setTimeout(async () => {
      await consumer.stop();
    }, options.timeoutMs);
    try {
      await consumer.run({
        autoCommit: false,
        eachMessage: async ({ topic, partition, message }) => {
          const normalized = {
            topic,
            partition,
            offset: message.offset,
            timestamp: message.timestamp,
            key: decodeMessageValue(message, "key"),
            value: decodeMessageValue(message, "value"),
            headers: Object.fromEntries(Object.entries(message.headers ?? {}).map(([header, value]) => [
              header,
              decodeHeaderValue(value)
            ]))
          };
          if (matchMessage(normalized, options.search)) {
            matches.push(normalized);
          }
          if (matches.length >= options.limit) {
            await consumer.stop();
          }
        }
      });
    }
    finally {
      clearTimeout(timer);
      await consumer.disconnect();
    }
    return matches;
  }
  async listConsumerGroups(search) {
    const admin = this.kafka.admin();
    await admin.connect();
    try {
      const groups = await admin.listGroups();
      return groups.groups
        .map((group) => group.groupId)
        .filter((groupId) => !search || groupId.toLowerCase().includes(search.toLowerCase()))
        .sort((left, right) => left.localeCompare(right));
    }
    finally {
      await admin.disconnect();
    }
  }
  async getGroupTopicProgress(groupId, topic) {
    const admin = this.kafka.admin();
    await admin.connect();
    try {
      const groupDescriptions = await admin.describeGroups([groupId]);
      const description = groupDescriptions.groups[0];
      if (!description) {
        throw new Error(`Consumer group not found: ${groupId}`);
      }
      const topicOffsets = await admin.fetchTopicOffsets(topic);
      const groupOffsets = await admin.fetchOffsets({ groupId, topics: [topic] });
      const groupTopicOffsets = groupOffsets[0];
      const topicAssignments = decodeAssignmentsForTopic(description, topic);
      const assignmentMap = buildAssignmentMap(description, topicAssignments);
      const partitions = topicOffsets.map((partitionOffset) => {
        const committed = groupTopicOffsets?.partitions.find((partition) => partition.partition === partitionOffset.partition);
        const latest = toBigIntOrNull(partitionOffset.high);
        const committedOffset = toBigIntOrNull(committed?.offset ?? "-1");
        const lag = latest !== null && committedOffset !== null && latest >= committedOffset
          ? (latest - committedOffset).toString()
          : null;
        const member = assignmentMap.get(partitionOffset.partition);
        return {
          partition: partitionOffset.partition,
          memberId: member?.memberId ?? null,
          clientId: member?.clientId ?? null,
          consumerId: member?.consumerId ?? null,
          committedOffset: committed?.offset ?? "-1",
          latestOffset: partitionOffset.high,
          lag
        };
      });
      return {
        groupId,
        topic,
        state: description.state,
        members: description.members.map((member) => ({
          memberId: member.memberId,
          clientId: member.clientId,
          consumerId: member.clientHost,
          partitions: topicAssignments.get(member.memberId) ?? []
        })),
        partitions
      };
    }
    finally {
      await admin.disconnect();
    }
  }
}
function buildAssignmentMap(description, decodedAssignments) {
  const assignment = new Map();
  for (const member of description.members) {
    for (const partition of decodedAssignments.get(member.memberId) ?? []) {
      assignment.set(partition, {
        memberId: member.memberId,
        clientId: member.clientId,
        consumerId: member.clientHost
      });
    }
  }
  return assignment;
}
function decodeAssignmentsForTopic(description, topic) {
  const assignments = new Map();
  for (const member of description.members) {
    const decoded = AssignerProtocol.MemberAssignment.decode(member.memberAssignment);
    assignments.set(member.memberId, decoded?.assignment[topic] ?? []);
  }
  return assignments;
}
function decodeMessageValue(message, field) {
  return decodeValue(message[field]);
}
function toKafkaSasl(sasl) {
  if (!sasl) {
    return undefined;
  }
  if (sasl.mechanism === "plain") {
    return {
      mechanism: "plain",
      username: sasl.username,
      password: sasl.password
    };
  }
  if (sasl.mechanism === "scram-sha-256") {
    return {
      mechanism: "scram-sha-256",
      username: sasl.username,
      password: sasl.password
    };
  }
  return {
    mechanism: "scram-sha-512",
    username: sasl.username,
    password: sasl.password
  };
}
@@ -0,0 +1,4 @@
import type { TopicMessage } from "./types.js";
export declare function includesInsensitive(input: string, search?: string): boolean;
export declare function filterTopics(topics: string[], search?: string): string[];
export declare function matchMessage(message: TopicMessage, search?: string): boolean;
@@ -0,0 +1,21 @@
export function includesInsensitive(input, search) {
  if (!search) {
    return true;
  }
  return input.toLowerCase().includes(search.toLowerCase());
}
export function filterTopics(topics, search) {
  return topics.filter((topic) => includesInsensitive(topic, search));
}
export function matchMessage(message, search) {
  if (!search) {
    return true;
  }
  const haystacks = [
    message.topic,
    message.key ?? "",
    message.value ?? "",
    JSON.stringify(message.headers)
  ];
  return haystacks.some((item) => includesInsensitive(item, search));
}
@@ -0,0 +1,40 @@
export interface KafkaCliConfig {
  brokers: string[];
  clientId: string;
  ssl?: boolean;
  sasl?: {
    mechanism: "plain" | "scram-sha-256" | "scram-sha-512";
    username: string;
    password: string;
  };
}
export interface TopicMessage {
  topic: string;
  partition: number;
  offset: string;
  timestamp?: string;
  key: string | null;
  value: string | null;
  headers: Record<string, string>;
}
export interface GroupPartitionProgress {
  partition: number;
  memberId: string | null;
  clientId: string | null;
  consumerId: string | null;
  committedOffset: string;
  latestOffset: string;
  lag: string | null;
}
export interface GroupTopicProgress {
  groupId: string;
  topic: string;
  state: string;
  members: Array<{
    memberId: string;
    clientId: string;
    consumerId: string;
    partitions: number[];
  }>;
  partitions: GroupPartitionProgress[];
}
@@ -0,0 +1 @@
export {};
package/package.json ADDED
@@ -0,0 +1,44 @@
{
  "name": "kafka-mcp",
  "version": "0.1.0",
  "description": "Kafka inspection CLI with topic, message, and consumer group utilities",
  "type": "module",
  "main": "./dist/cli.js",
  "types": "./dist/cli.d.ts",
  "bin": {
    "kafka-mcp": "./dist/cli.js"
  },
  "files": [
    "dist",
    "README.md",
    "skills"
  ],
  "scripts": {
    "build": "tsc -p tsconfig.json",
    "dev": "tsx src/cli.ts",
    "test": "vitest run",
    "lint": "tsc --noEmit -p tsconfig.vitest.json",
    "prepublishOnly": "npm run build && npm run lint && npm test"
  },
  "keywords": [
    "kafka",
    "cli",
    "consumer-group",
    "topic",
    "codex-skill"
  ],
  "engines": {
    "node": ">=18.18"
  },
  "license": "MIT",
  "dependencies": {
    "commander": "^14.0.0",
    "kafkajs": "^2.2.4"
  },
  "devDependencies": {
    "@types/node": "^24.6.0",
    "tsx": "^4.20.5",
    "typescript": "^5.9.3",
    "vitest": "^3.2.4"
  }
}
@@ -0,0 +1,29 @@
1
+ ---
2
+ name: kafka-cli-inspector
3
+ description: Use this skill when you need to inspect Kafka topics, preview messages, search Kafka data, or analyze a consumer group's progress for a topic through the local kafka-mcp CLI.
4
+ ---
5
+
6
+ # Kafka CLI Inspector
7
+
8
+ Use this skill when a user wants Kafka read-only inspection from the local workspace.
9
+
10
+ ## Commands
11
+
12
+ - `npx tsx src/cli.ts topics --search <keyword>` lists topics and supports fuzzy substring search.
13
+ - `npx tsx src/cli.ts topic --topic <topic>` shows partition metadata for a topic.
14
+ - `npx tsx src/cli.ts messages --topic <topic> --search <keyword> --limit <n>` previews messages from a topic and filters by topic, key, value, or headers.
15
+ - `npx tsx src/cli.ts groups --search <keyword>` lists consumer groups and supports search.
16
+ - `npx tsx src/cli.ts group-progress --group <groupId> --topic <topic>` shows committed offsets, latest offsets, lag, and the current member assignment per partition.
17
+
18
+ ## Workflow
19
+
20
+ 1. Check that Kafka brokers are configured through `.kafka-mcp.json`, `~/.config/kafka-mcp/config.json`, or `KAFKA_BROKERS`.
21
+ 2. Use `topics` or `groups` first if the exact topic or consumer group is unknown.
22
+ 3. Use `messages` for lightweight investigation. Prefer small limits and a search term.
23
+ 4. Use `group-progress` when the task is about backlog, lag, stuck partitions, or which consumer currently owns a partition.
24
+
25
+ ## Output guidance
26
+
27
+ - Summarize only the relevant topics, messages, or lagging partitions.
28
+ - Call out when a search returns no results rather than dumping empty tables.
29
+ - When reporting lag, mention the topic, partition, committed offset, latest offset, and lag value.