@llryiop/avatar-boot-cli 1.0.1 → 1.0.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/docs/exam-question-generate-api.md +163 -0
- package/package.json +1 -1
- package/src/prompts.js +3 -3
- package/src/transform.js +1 -1
- package/templates/.claude/skills/avatar-boot-starter-feign/README.md +243 -0
- package/templates/.claude/skills/avatar-boot-starter-feign/SKILL.md +47 -219
- package/templates/.claude/skills/avatar-boot-starter-feign/references//345/212/237/350/203/275/350/257/246/350/247/243.md +65 -0
- package/templates/.claude/skills/avatar-boot-starter-feign/references//345/277/253/351/200/237/346/216/245/345/205/245/346/214/207/345/215/227.md +75 -0
- package/templates/.claude/skills/avatar-boot-starter-feign/references//351/205/215/347/275/256/345/217/202/350/200/203.md +70 -0
- package/templates/.claude/skills/avatar-boot-starter-job/README.md +437 -0
- package/templates/.claude/skills/avatar-boot-starter-job/SKILL.md +35 -414
- package/templates/.claude/skills/avatar-boot-starter-job/references//345/270/270/350/247/201/351/227/256/351/242/230.md +55 -0
- package/templates/.claude/skills/avatar-boot-starter-job/references//345/277/253/351/200/237/346/216/245/345/205/245/344/270/216/351/205/215/347/275/256.md +124 -0
- package/templates/.claude/skills/avatar-boot-starter-job/references//347/233/221/346/216/247/346/214/207/346/240/207.md +72 -0
- package/templates/.claude/skills/avatar-boot-starter-kafka/README.md +580 -0
- package/templates/.claude/skills/avatar-boot-starter-kafka/SKILL.md +36 -560
- package/templates/.claude/skills/avatar-boot-starter-kafka/references//346/234/200/344/275/263/345/256/236/350/267/265.md +43 -0
- package/templates/.claude/skills/avatar-boot-starter-kafka/references//346/240/270/345/277/203/345/212/237/350/203/275.md +117 -0
- package/templates/.claude/skills/avatar-boot-starter-kafka/references//351/205/215/347/275/256/345/217/202/350/200/203.md +54 -0
- package/templates/.claude/skills/avatar-boot-starter-mysql/README.md +572 -0
- package/templates/.claude/skills/avatar-boot-starter-mysql/SKILL.md +40 -550
- package/templates/.claude/skills/avatar-boot-starter-mysql/references//345/256/236/344/275/223/344/270/216/345/212/237/350/203/275.md +96 -0
- package/templates/.claude/skills/avatar-boot-starter-mysql/references//345/277/253/351/200/237/346/216/245/345/205/245/344/270/216/346/225/260/346/215/256/346/272/220.md +91 -0
- package/templates/.claude/skills/avatar-boot-starter-mysql/references//351/253/230/347/272/247/347/211/271/346/200/247/344/270/216/351/205/215/347/275/256.md +59 -0
- package/templates/.claude/skills/avatar-boot-starter-nacos/README.md +901 -0
- package/templates/.claude/skills/avatar-boot-starter-nacos/SKILL.md +40 -879
- package/templates/.claude/skills/avatar-boot-starter-nacos/references//345/212/237/350/203/275/344/275/277/347/224/250.md +134 -0
- package/templates/.claude/skills/avatar-boot-starter-nacos/references//345/277/253/351/200/237/346/216/245/345/205/245/344/270/216/351/205/215/347/275/256.md +96 -0
- package/templates/.claude/skills/avatar-boot-starter-nacos/references//346/225/205/351/232/234/346/216/222/346/237/245.md +64 -0
- package/templates/.claude/skills/avatar-boot-starter-oss/README.md +594 -0
- package/templates/.claude/skills/avatar-boot-starter-oss/SKILL.md +52 -570
- package/templates/.claude/skills/avatar-boot-starter-oss/references//345/277/253/351/200/237/346/216/245/345/205/245/344/270/216/351/205/215/347/275/256.md +77 -0
- package/templates/.claude/skills/avatar-boot-starter-oss/references//346/240/270/345/277/203/345/212/237/350/203/275.md +94 -0
- package/templates/.claude/skills/avatar-boot-starter-oss/references//350/247/204/350/214/203/344/270/216/346/263/250/346/204/217/344/272/213/351/241/271.md +61 -0
- package/templates/.claude/skills/avatar-boot-starter-redis/README.md +586 -0
- package/templates/.claude/skills/avatar-boot-starter-redis/SKILL.md +42 -566
- package/templates/.claude/skills/avatar-boot-starter-redis/references//345/277/253/351/200/237/346/216/245/345/205/245/344/270/216/351/205/215/347/275/256.md +78 -0
- package/templates/.claude/skills/avatar-boot-starter-redis/references//346/225/260/346/215/256/346/223/215/344/275/234.md +111 -0
- package/templates/.claude/skills/avatar-boot-starter-redis/references//351/253/230/347/272/247/345/212/237/350/203/275.md +90 -0
- package/templates/.claude/skills/avatar-boot-starter-rocketmq/README.md +662 -0
- package/templates/.claude/skills/avatar-boot-starter-rocketmq/SKILL.md +48 -640
- package/templates/.claude/skills/avatar-boot-starter-rocketmq/references//346/240/270/345/277/203/345/212/237/350/203/275.md +101 -0
- package/templates/.claude/skills/avatar-boot-starter-rocketmq/references//351/205/215/347/275/256/344/270/216/346/263/250/346/204/217/344/272/213/351/241/271.md +44 -0
- package/templates/.claude/skills/avatar-boot-starter-rocketmq/references//351/253/230/347/272/247/347/211/271/346/200/247.md +71 -0
- package/templates/.claude/skills/avatar-boot-starter-web/README.md +1007 -0
- package/templates/.claude/skills/avatar-boot-starter-web/SKILL.md +150 -1003
- package/templates/.claude/skills/avatar-boot-starter-web/references//345/212/237/350/203/275-LogInfo/346/263/250/350/247/243.md +75 -0
- package/templates/.claude/skills/avatar-boot-starter-web/references//345/212/237/350/203/275-/345/205/250/345/261/200/345/274/202/345/270/270/345/244/204/347/220/206.md +90 -0
- package/templates/.claude/skills/avatar-boot-starter-web/references//345/212/237/350/203/275-/346/214/207/346/240/207/347/233/221/346/216/247.md +74 -0
- package/templates/.claude/skills/avatar-boot-starter-web/references//345/212/237/350/203/275-/346/227/245/345/277/227/344/275/223/347/263/273.md +73 -0
- package/templates/.claude/skills/avatar-boot-starter-web/references//345/212/237/350/203/275-/350/257/267/346/261/202/344/270/212/344/270/213/346/226/207.md +77 -0
- package/templates/.claude/skills/avatar-boot-starter-web/references//345/277/253/351/200/237/346/216/245/345/205/245/346/214/207/345/215/227.md +52 -0
- package/templates/.claude/skills/avatar-boot-starter-web/references//346/263/250/346/204/217/344/272/213/351/241/271.md +68 -0
- package/templates/.claude/skills/avatar-boot-starter-web/references//350/207/252/345/256/232/344/271/211/346/211/251/345/261/225/346/214/207/345/215/227.md +107 -0
- package/templates/.claude/skills/avatar-boot-starter-web/references//351/205/215/347/275/256/345/217/202/350/200/203.md +107 -0
- package/templates/.claude/skills/crud-generator/SKILL.md +133 -64
- package/templates/.claude/skills/database-design/README.md +207 -0
- package/templates/.claude/skills/database-design/SKILL.md +469 -82
- package/templates/.claude/skills/database-design/references//345/221/275/345/220/215/350/247/204/350/214/203.md +232 -0
- package/templates/.claude/skills/database-design/references//345/255/227/346/256/265/347/261/273/345/236/213/350/247/204/350/214/203.md +400 -0
- package/templates/.claude/skills/database-design/references//347/264/242/345/274/225/350/247/204/350/214/203.md +506 -0
- package/templates/avatar-scaffold-api/pom.xml +0 -5
- package/templates/avatar-scaffold-service/pom.xml +25 -87
- package/templates/avatar-scaffold-service/src/main/resources/application-dev.yaml +3 -5
- package/templates/avatar-scaffold-service/src/main/resources/application-local.yaml +2 -2
- package/templates/pom.xml +9 -18
@@ -0,0 +1,72 @@

# XXL-Job Monitoring Metrics

## Enabling Monitoring

Monitoring is enabled by default; the Prometheus endpoint still has to be exposed in `application.yml`:

```yaml
management:
  endpoints:
    web:
      exposure:
        include: prometheus,health,info
  prometheus:
    metrics:
      export:
        enabled: true
```

Disabling monitoring:

```yaml
xxl:
  job:
    metrics:
      enabled: false            # disable all monitoring
      execution-enabled: true   # enable only job-execution metrics
      executor-enabled: true    # enable only executor-status metrics
```

## Available Metrics

### Job Execution Metrics

| Metric | Type | Tags | Description |
|:--|:--|:--|:--|
| `xxl.job.execution.count` | Counter | `job_handler`, `status` | Number of executions (status: success/failure) |
| `xxl.job.execution.time` | Timer | `job_handler`, `status` | Execution duration |
| `xxl.job.execution.active` | Gauge | none | Number of jobs currently executing |

### Executor Status Metrics

| Metric | Type | Tags | Description |
|:--|:--|:--|:--|
| `xxl.job.executor.registered` | Gauge | `app_name` | Registration status (1 = registered, 0 = not registered) |
| `xxl.job.executor.handlers.total` | Gauge | `app_name` | Total number of registered JobHandlers |

## Prometheus Query Examples

```promql
# Job success rate
sum(rate(xxl_job_execution_count_total{status="success"}[5m]))
  / sum(rate(xxl_job_execution_count_total[5m])) * 100

# Average execution time (seconds)
rate(xxl_job_execution_time_seconds_sum[5m])
  / rate(xxl_job_execution_time_seconds_count[5m])

# Failure count for a specific job
sum(xxl_job_execution_count_total{job_handler="demoJobHandler", status="failure"})

# Currently active jobs
xxl_job_execution_active

# P95 execution time
histogram_quantile(0.95, rate(xxl_job_execution_time_seconds_bucket[5m]))
```

## Recommended Grafana Panels

- **Execution overview**: total executions, success rate, average duration
- **Execution trends**: execution counts over time, grouped by job_handler
- **Executor status**: registration status, handler count, active job count
- **Performance analysis**: P95/P99 latency distribution
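The success-rate query above is just a ratio of two counter increases over the same window. A plain-Java restatement of that arithmetic, using hypothetical counter samples (not the Micrometer or Prometheus API):

```java
public class SuccessRateSketch {

    /**
     * Mirrors the PromQL ratio: increase of the success counter over a window,
     * divided by the increase of the total counter, as a percentage.
     */
    static double successRate(double successBefore, double successAfter,
                              double totalBefore, double totalAfter) {
        double successIncrease = successAfter - successBefore;
        double totalIncrease = totalAfter - totalBefore;
        return totalIncrease == 0 ? 100.0 : successIncrease / totalIncrease * 100.0;
    }

    public static void main(String[] args) {
        // Hypothetical samples 5 minutes apart:
        // success counter went 940 -> 988, total counter went 1000 -> 1050.
        System.out.println(successRate(940, 988, 1000, 1050)); // 96.0
    }
}
```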
@@ -0,0 +1,580 @@

---
name: avatar-boot-starter-kafka
description: Use this skill for anything involving Kafka, message queues, message production/consumption, or event-driven features - provides spring-kafka-based messaging for Spring Boot 3.5.3 applications, including producer/consumer auto-configuration, JSON serialization, dead-letter topics, and idempotent consumption.
---

Avatar Boot's Kafka integration module, built on spring-kafka (version managed by the Spring Boot BOM), provides out-of-the-box message production and consumption.

## Features

- ✅ **Auto-configuration** - Wires up KafkaTemplate and consumers via Spring Boot's auto-configuration mechanism
- ✅ **JSON serialization** - Built-in JSON serialization/deserialization for transparently sending Java objects
- ✅ **Consumer group management** - Configurable consumer groups and partition assignment strategies
- ✅ **Dead-letter topics** - Built-in DLT (Dead Letter Topic) support for messages that fail processing
- ✅ **Idempotent consumption** - Idempotent consumption patterns to prevent duplicate processing
- ✅ **Monitoring integration** - Micrometer metrics for producer/consumer state

## Quick Start

### 1. Add the Dependency

Add the dependency to your project's `pom.xml`:

```xml
<dependency>
    <groupId>com.iflytek.avatar.boot</groupId>
    <artifactId>avatar-boot-starter-kafka</artifactId>
</dependency>
```

> The version is managed by the Avatar Boot BOM, so no `version` element is needed. The spring-kafka version is managed by the Spring Boot 3.5.3 parent BOM.

### 2. Configuration

Add the Kafka configuration to `application.yml`:

```yaml
spring:
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      retries: 3                    # retry count
      batch-size: 16384             # batch size (16 KB)
      buffer-memory: 33554432       # buffer size (32 MB)
      acks: all                     # acknowledgment: all replicas must confirm
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
      properties:
        linger.ms: 10               # linger time (ms); larger values improve throughput
        max.request.size: 1048576   # maximum request size (1 MB)
        enable.idempotence: true    # enable the idempotent producer
    consumer:
      group-id: ${spring.application.name}-group
      auto-offset-reset: earliest   # start from the earliest offset when none exists
      enable-auto-commit: false     # disable auto-commit (use manual acknowledgment)
      max-poll-records: 500         # maximum records per poll
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring.json.trusted.packages: com.example.*   # packages trusted for deserialization
        session.timeout.ms: 30000
        heartbeat.interval.ms: 10000
        max.poll.interval.ms: 300000    # maximum processing time (5 minutes)
    listener:
      ack-mode: MANUAL_IMMEDIATE    # manual acknowledgment mode
      concurrency: 3                # consumer concurrency
```

## Producer

### Basic Message Sending

```java
package com.example.producer;

import com.example.event.OrderCreatedEvent;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Service;

import java.util.concurrent.CompletableFuture;

@Slf4j
@Service
@RequiredArgsConstructor
public class OrderEventProducer {

    private final KafkaTemplate<String, Object> kafkaTemplate;

    private static final String TOPIC = "order-events";

    /**
     * Send asynchronously (recommended).
     */
    public void sendOrderCreatedEvent(OrderCreatedEvent event) {
        CompletableFuture<SendResult<String, Object>> future =
                kafkaTemplate.send(TOPIC, event.getOrderNo(), event);

        future.whenComplete((result, ex) -> {
            if (ex == null) {
                log.info("Message sent: topic={}, partition={}, offset={}",
                        result.getRecordMetadata().topic(),
                        result.getRecordMetadata().partition(),
                        result.getRecordMetadata().offset());
            } else {
                log.error("Failed to send message: event={}", event, ex);
                // Compensate here: persist for retry, raise an alert, etc.
            }
        });
    }

    /**
     * Send synchronously (when the result must be awaited).
     */
    public SendResult<String, Object> sendSync(String topic, String key, Object message) {
        try {
            return kafkaTemplate.send(topic, key, message).get();
        } catch (Exception e) {
            log.error("Synchronous send failed: topic={}, key={}", topic, key, e);
            throw new RuntimeException("Message send failed", e);
        }
    }

    /**
     * Send to a specific partition.
     */
    public void sendToPartition(String topic, int partition, String key, Object message) {
        kafkaTemplate.send(topic, partition, key, message);
    }
}
```

### Event Definition

```java
package com.example.event;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

import java.io.Serializable;
import java.math.BigDecimal;
import java.time.LocalDateTime;

@Data
@NoArgsConstructor
@AllArgsConstructor
public class OrderCreatedEvent implements Serializable {

    private String orderNo;
    private Long userId;
    private BigDecimal totalAmount;
    private LocalDateTime createTime;
    private String eventId;   // used for idempotent de-duplication
}
```

## Consumer

### Basic Consumption

```java
package com.example.consumer;

import com.example.event.OrderCreatedEvent;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

import java.util.List;

@Slf4j
@Component
public class OrderEventConsumer {

    /**
     * Single-record consumption (manual acknowledgment).
     */
    @KafkaListener(topics = "order-events", groupId = "order-service-group")
    public void onOrderCreated(ConsumerRecord<String, OrderCreatedEvent> record,
                               Acknowledgment ack) {
        try {
            OrderCreatedEvent event = record.value();
            log.info("Received order event: orderNo={}, partition={}, offset={}",
                    event.getOrderNo(), record.partition(), record.offset());

            // Business logic
            processOrder(event);

            // Manual acknowledgment
            ack.acknowledge();
        } catch (Exception e) {
            log.error("Failed to process order event: record={}", record, e);
            // Not acknowledging means the message will be redelivered.
            // Alternatively, call ack.acknowledge() and route the failed message to a dead-letter topic.
        }
    }

    /**
     * Batch consumption.
     */
    @KafkaListener(topics = "batch-events", groupId = "batch-group",
            containerFactory = "batchKafkaListenerContainerFactory")
    public void onBatchMessages(List<ConsumerRecord<String, OrderCreatedEvent>> records,
                                Acknowledgment ack) {
        log.info("Received batch of {} messages", records.size());
        try {
            for (ConsumerRecord<String, OrderCreatedEvent> record : records) {
                processOrder(record.value());
            }
            ack.acknowledge();
        } catch (Exception e) {
            log.error("Batch processing failed", e);
        }
    }

    private void processOrder(OrderCreatedEvent event) {
        // Business logic
    }
}
```

### Consuming Multiple Topics

```java
@KafkaListener(topics = {"order-events", "payment-events"},
        groupId = "notification-group")
public void onMultipleTopics(ConsumerRecord<String, Object> record, Acknowledgment ack) {
    String topic = record.topic();
    switch (topic) {
        case "order-events" -> handleOrderEvent(record);
        case "payment-events" -> handlePaymentEvent(record);
    }
    ack.acknowledge();
}
```

## JSON Serialization/Deserialization

### Custom Serialization Configuration

```java
package com.example.config;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.*;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerializer;

import java.util.HashMap;
import java.util.Map;

@Configuration
public class KafkaConfig {

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public ConsumerFactory<String, Object> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "default-group");

        JsonDeserializer<Object> deserializer = new JsonDeserializer<>();
        deserializer.addTrustedPackages("com.example.*");
        deserializer.setUseTypeHeaders(true);

        return new DefaultKafkaConsumerFactory<>(props,
                new StringDeserializer(), deserializer);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Object>
            kafkaListenerContainerFactory(ConsumerFactory<String, Object> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, Object> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setConcurrency(3);
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        return factory;
    }
}
```

## Idempotent Consumption

### Option 1: Database Unique-Key De-duplication

```java
@Slf4j
@Component
@RequiredArgsConstructor
public class IdempotentOrderConsumer {

    private final OrderService orderService;
    private final MessageLogMapper messageLogMapper;

    @KafkaListener(topics = "order-events", groupId = "idempotent-group")
    public void onMessage(ConsumerRecord<String, OrderCreatedEvent> record, Acknowledgment ack) {
        OrderCreatedEvent event = record.value();
        String eventId = event.getEventId();

        try {
            // 1. Check whether the message was already processed (database unique key)
            MessageLog existing = messageLogMapper.selectById(eventId);
            if (existing != null) {
                log.info("Message already processed, skipping: eventId={}", eventId);
                ack.acknowledge();
                return;
            }

            // 2. Record a message-log entry (the unique-key constraint guards against concurrent duplicates)
            MessageLog logEntry = new MessageLog();
            logEntry.setEventId(eventId);
            logEntry.setTopic(record.topic());
            logEntry.setContent(JSON.toJSONString(event));
            logEntry.setStatus("PROCESSING");
            messageLogMapper.insert(logEntry);

            // 3. Run the business logic
            orderService.processOrder(event);

            // 4. Update the status
            logEntry.setStatus("SUCCESS");
            messageLogMapper.updateById(logEntry);

            ack.acknowledge();
        } catch (DuplicateKeyException e) {
            log.info("Concurrent duplicate, message already processed: eventId={}", eventId);
            ack.acknowledge();
        } catch (Exception e) {
            log.error("Failed to process message: eventId={}", eventId, e);
        }
    }
}
```

### Option 2: Redis De-duplication

```java
@Slf4j
@Component
@RequiredArgsConstructor
public class RedisIdempotentConsumer {

    private final StringRedisTemplate redisTemplate;
    private final OrderService orderService;

    private static final String DEDUP_KEY_PREFIX = "avatar:kafka:dedup:";
    private static final long DEDUP_TTL_HOURS = 24;

    @KafkaListener(topics = "order-events", groupId = "redis-dedup-group")
    public void onMessage(ConsumerRecord<String, OrderCreatedEvent> record, Acknowledgment ack) {
        OrderCreatedEvent event = record.value();
        String dedupKey = DEDUP_KEY_PREFIX + event.getEventId();

        try {
            // Atomic SETNX-style de-duplication
            Boolean isNew = redisTemplate.opsForValue()
                    .setIfAbsent(dedupKey, "1", DEDUP_TTL_HOURS, TimeUnit.HOURS);

            if (Boolean.FALSE.equals(isNew)) {
                log.info("Message already processed (Redis de-dup), skipping: eventId={}", event.getEventId());
                ack.acknowledge();
                return;
            }

            // Run the business logic
            orderService.processOrder(event);
            ack.acknowledge();
        } catch (Exception e) {
            log.error("Failed to process message: eventId={}", event.getEventId(), e);
            // Remove the de-dup marker so the message can be retried
            redisTemplate.delete(dedupKey);
        }
    }
}
```
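The claim-then-process logic above can be exercised without Redis: `putIfAbsent` on a `ConcurrentHashMap` has the same atomic claim semantics as the `SETNX` call. A hypothetical plain-Java sketch of the pattern (the real module uses `StringRedisTemplate` as shown; `DedupSketch` and its counter are stand-ins):

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class DedupSketch {

    // Stand-in for Redis: eventId -> marker (SETNX ≈ putIfAbsent)
    private final ConcurrentMap<String, String> seen = new ConcurrentHashMap<>();
    private final AtomicInteger processed = new AtomicInteger();

    /** Returns true if this delivery actually ran the business logic. */
    public boolean onMessage(String eventId) {
        // Claim the event atomically; a second delivery sees the marker and skips.
        boolean isNew = seen.putIfAbsent(eventId, "1") == null;
        if (!isNew) {
            return false; // duplicate delivery: acknowledge and skip
        }
        try {
            processed.incrementAndGet(); // business-logic placeholder
            return true;
        } catch (RuntimeException e) {
            seen.remove(eventId); // release the claim so a redelivery can retry
            throw e;
        }
    }

    public int processedCount() {
        return processed.get();
    }

    public static void main(String[] args) {
        DedupSketch consumer = new DedupSketch();
        // Kafka may redeliver e1 and e2 after a rebalance; only the first delivery processes each.
        for (String eventId : List.of("e1", "e2", "e1", "e3", "e2")) {
            consumer.onMessage(eventId);
        }
        System.out.println(consumer.processedCount()); // 3
    }
}
```

The same failure-handling choice appears in both versions: on error the claim is released, trading "exactly once" for "at least once plus de-dup".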

## Dead-Letter Topic (DLT) Configuration

### Configuring a DLT with DefaultErrorHandler

```java
package com.example.config;

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.common.TopicPartition;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.CommonErrorHandler;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Slf4j
@Configuration
public class KafkaErrorConfig {

    /**
     * Dead-letter handler: retry 3 times at 1-second intervals,
     * then publish the failed record to the DLT.
     */
    @Bean
    public CommonErrorHandler errorHandler(KafkaTemplate<String, Object> kafkaTemplate) {
        DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate,
                (record, ex) -> {
                    log.error("Message processing failed, publishing to DLT: topic={}, key={}, error={}",
                            record.topic(), record.key(), ex.getMessage());
                    // The default destination is {originalTopic}.DLT
                    return new TopicPartition(record.topic() + ".DLT", record.partition());
                });

        // Retry 3 times, 1 second apart
        DefaultErrorHandler errorHandler = new DefaultErrorHandler(recoverer,
                new FixedBackOff(1000L, 3L));

        // Do not retry certain exceptions
        errorHandler.addNotRetryableExceptions(
                IllegalArgumentException.class,
                com.fasterxml.jackson.core.JsonParseException.class);

        return errorHandler;
    }
}
```
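The retry-then-recover flow that the error handler implements can be sketched in plain Java. This is a simplified model, not the spring-kafka API, and the 1-second back-off sleep is elided so the sketch runs instantly:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class RetryThenRecoverSketch {

    /**
     * Mimics DefaultErrorHandler with FixedBackOff(interval, maxRetries):
     * one initial attempt plus maxRetries redeliveries; if all fail,
     * the record is handed to the recoverer (e.g. publish to topic + ".DLT").
     */
    static void deliver(String record, int maxRetries,
                        Consumer<String> listener, Consumer<String> recoverer) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                listener.accept(record);
                return; // processed successfully; the offset would be committed here
            } catch (RuntimeException e) {
                // the real handler sleeps for the back-off interval (1000 ms) here
            }
        }
        recoverer.accept(record); // retries exhausted -> dead-letter
    }

    public static void main(String[] args) {
        List<String> deadLetters = new ArrayList<>();
        int[] attempts = {0};

        deliver("order-1", 3,
                r -> { attempts[0]++; throw new IllegalStateException("boom"); },
                deadLetters::add);

        System.out.println(attempts[0]);  // 4 attempts: 1 initial + 3 retries
        System.out.println(deadLetters);  // [order-1]
    }
}
```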

### Consuming the Dead-Letter Topic

```java
@Slf4j
@Component
public class DeadLetterConsumer {

    @KafkaListener(topics = "order-events.DLT", groupId = "dlt-group")
    public void onDeadLetter(ConsumerRecord<String, Object> record, Acknowledgment ack) {
        log.error("Received dead-letter message: topic={}, key={}, value={}",
                record.topic(), record.key(), record.value());

        // Persist to a database or push to an alerting system
        // alertService.sendAlert("Kafka dead-letter message", record.toString());

        ack.acknowledge();
    }
}
```

## Consumer Lag Monitoring

### Actuator Configuration

```yaml
management:
  endpoints:
    web:
      exposure:
        include: health,prometheus,kafka
  health:
    kafka:
      enabled: true   # enable the Kafka health check
```

### Checking Lag with the Kafka CLI

```bash
# Show consumer-group offsets and lag
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group order-service-group --describe

# Watch the LAG column:
# TOPIC          PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG
# order-events   0          1000            1050            50
```
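The LAG column is simply the distance between the partition's log end and the group's committed offset. A minimal plain-Java illustration of that arithmetic, with hypothetical offsets (not the Kafka AdminClient API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConsumerLagSketch {

    /** lag = logEndOffset - committedOffset, per partition. */
    static long lag(long logEndOffset, long committedOffset) {
        return logEndOffset - committedOffset;
    }

    public static void main(String[] args) {
        // partition -> {committedOffset, logEndOffset}; partition 0 matches the CLI sample above
        Map<Integer, long[]> partitions = new LinkedHashMap<>();
        partitions.put(0, new long[]{1000, 1050});
        partitions.put(1, new long[]{400, 400});

        long totalLag = 0;
        for (Map.Entry<Integer, long[]> e : partitions.entrySet()) {
            long lag = lag(e.getValue()[1], e.getValue()[0]);
            totalLag += lag;
            System.out.println("partition " + e.getKey() + " lag=" + lag);
        }
        System.out.println("total lag=" + totalLag); // total lag=50
    }
}
```

A total lag that grows monotonically, rather than oscillating around a small value, is the signal that consumers cannot keep up (see Common Issues below).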

## Best Practices

### 1. Producer

- **Enable idempotent production**: set `enable.idempotence=true` so retries cannot duplicate messages
- **Set acks=all**: wait for all in-sync replicas to confirm, guaranteeing durability
- **Set retries sensibly**: 3 is recommended, combined with `retry.backoff.ms`
- **Partition by key**: messages with the same key go to the same partition, preserving their order
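Why same-key messages stay ordered: the default partitioner maps a key deterministically to one partition, and a partition is an ordered log. A simplified plain-Java sketch of that mapping (Kafka actually hashes the serialized key bytes with murmur2; `hashCode` here is a stand-in):

```java
import java.util.List;

public class KeyPartitionSketch {

    /**
     * Simplified model of Kafka's default partitioner for keyed records:
     * a deterministic hash of the key, modulo the partition count.
     */
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 3;
        // Every event for order ORD-42 hashes to the same partition,
        // so the broker preserves their relative order.
        for (String event : List.of("created", "paid", "shipped")) {
            System.out.println("ORD-42/" + event + " -> partition "
                    + partitionFor("ORD-42", partitions));
        }
    }
}
```

The flip side of this determinism is the data-skew risk called out under Topic Design: a hot key pins all of its traffic to one partition.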

### 2. Consumer

- **Disable auto-commit**: use manual acknowledgment (`enable-auto-commit: false`)
- **Consume idempotently**: de-duplicate via a database unique key or Redis
- **Set concurrency sensibly**: consumer concurrency should not exceed the partition count
- **Set max.poll.interval.ms**: larger than the worst-case processing time, to avoid rebalances

### 3. Topic Design

- **Size the partition count**: based on expected throughput, roughly 10 MB/s per partition
- **Design keys carefully**: choose evenly distributed keys to avoid data skew
- **Bound message size**: keep messages under 1 MB; for large payloads, store the data in OSS and send a reference

### 4. Serialization

- **Keep producer and consumer serialization consistent**: use JsonSerializer / JsonDeserializer on both sides
- **Configure trusted.packages**: the consumer must trust the message classes' package paths
- **Version the message body**: makes later message-format upgrades easier
## Common Issues

### 1. Frequent Consumer Rebalances

**Cause**: processing takes longer than `max.poll.interval.ms`

**Fixes**:
- Increase `max.poll.interval.ms` (default 5 minutes)
- Decrease `max.poll.records` (messages fetched per poll)
- Optimize the consumer logic to shorten per-message processing time
- Review the `session.timeout.ms` and `heartbeat.interval.ms` settings

### 2. Deserialization Errors

**Cause**: producer and consumer serialization do not match

**Fixes**:
- Keep the producer's JsonSerializer and the consumer's JsonDeserializer versions aligned
- Include the message class's package in `spring.json.trusted.packages`
- Make sure the message class has a no-args constructor and getters/setters
- Wrap the deserializer in `ErrorHandlingDeserializer` when an old message format changes

### 3. Continuously Growing Consumer Lag

**Cause**: consumption cannot keep up with production

**Fixes**:
- Increase consumer concurrency (the `concurrency` setting)
- Add topic partitions (expand partitions first, then add consumers)
- Optimize the consumer logic and use batch processing
- Check for individual messages that take too long to process

### 4. Message Loss

**Cause**: the producer did not wait for acknowledgment, or the consumer auto-committed offsets

**Fixes**:
- Set `acks=all` on the producer and enable idempotence
- Disable auto-commit on the consumer and acknowledge manually
- Check the topic replication factor (>= 3 recommended)
- Check the `min.insync.replicas` setting

### 5. Duplicate Consumption

**Cause**: a consumer rebalance, or a crash before the manual commit

**Fixes**:
- Consume idempotently (database unique key or Redis de-duplication)
- Include a unique event ID in the message body
- Use the `MANUAL_IMMEDIATE` acknowledgment mode and commit right after processing
## Dependency Versions

- spring-kafka: managed by the Spring Boot 3.5.3 BOM
- Spring Boot: 3.5.3
- Java: 21

## References

- [Spring for Apache Kafka reference](https://docs.spring.io/spring-kafka/reference/)
- [Apache Kafka documentation](https://kafka.apache.org/documentation/)
- [Spring Boot Kafka configuration](https://docs.spring.io/spring-boot/reference/messaging/kafka.html)