specweave 0.23.10 → 0.23.14

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (135)
  1. package/.claude-plugin/marketplace.json +7 -7
  2. package/CLAUDE.md +384 -1449
  3. package/dist/src/cli/commands/cleanup-cache.d.ts +14 -0
  4. package/dist/src/cli/commands/cleanup-cache.d.ts.map +1 -0
  5. package/dist/src/cli/commands/cleanup-cache.js +63 -0
  6. package/dist/src/cli/commands/cleanup-cache.js.map +1 -0
  7. package/dist/src/cli/commands/init.js +40 -0
  8. package/dist/src/cli/commands/init.js.map +1 -1
  9. package/dist/src/cli/helpers/async-project-loader.d.ts +148 -0
  10. package/dist/src/cli/helpers/async-project-loader.d.ts.map +1 -0
  11. package/dist/src/cli/helpers/async-project-loader.js +351 -0
  12. package/dist/src/cli/helpers/async-project-loader.js.map +1 -0
  13. package/dist/src/cli/helpers/cancelation-handler.d.ts +123 -0
  14. package/dist/src/cli/helpers/cancelation-handler.d.ts.map +1 -0
  15. package/dist/src/cli/helpers/cancelation-handler.js +187 -0
  16. package/dist/src/cli/helpers/cancelation-handler.js.map +1 -0
  17. package/dist/src/cli/helpers/import-strategy-prompter.d.ts +43 -0
  18. package/dist/src/cli/helpers/import-strategy-prompter.d.ts.map +1 -0
  19. package/dist/src/cli/helpers/import-strategy-prompter.js +136 -0
  20. package/dist/src/cli/helpers/import-strategy-prompter.js.map +1 -0
  21. package/dist/src/cli/helpers/issue-tracker/ado.d.ts +5 -2
  22. package/dist/src/cli/helpers/issue-tracker/ado.d.ts.map +1 -1
  23. package/dist/src/cli/helpers/issue-tracker/ado.js +90 -40
  24. package/dist/src/cli/helpers/issue-tracker/ado.js.map +1 -1
  25. package/dist/src/cli/helpers/issue-tracker/jira.d.ts +2 -1
  26. package/dist/src/cli/helpers/issue-tracker/jira.d.ts.map +1 -1
  27. package/dist/src/cli/helpers/issue-tracker/jira.js +120 -35
  28. package/dist/src/cli/helpers/issue-tracker/jira.js.map +1 -1
  29. package/dist/src/cli/helpers/progress-tracker.d.ts +121 -0
  30. package/dist/src/cli/helpers/progress-tracker.d.ts.map +1 -0
  31. package/dist/src/cli/helpers/progress-tracker.js +202 -0
  32. package/dist/src/cli/helpers/progress-tracker.js.map +1 -0
  33. package/dist/src/cli/helpers/project-count-fetcher.d.ts +69 -0
  34. package/dist/src/cli/helpers/project-count-fetcher.d.ts.map +1 -0
  35. package/dist/src/cli/helpers/project-count-fetcher.js +173 -0
  36. package/dist/src/cli/helpers/project-count-fetcher.js.map +1 -0
  37. package/dist/src/config/types.d.ts +14 -14
  38. package/dist/src/core/cache/cache-manager.d.ts +119 -0
  39. package/dist/src/core/cache/cache-manager.d.ts.map +1 -0
  40. package/dist/src/core/cache/cache-manager.js +304 -0
  41. package/dist/src/core/cache/cache-manager.js.map +1 -0
  42. package/dist/src/core/cache/rate-limit-checker.d.ts +92 -0
  43. package/dist/src/core/cache/rate-limit-checker.d.ts.map +1 -0
  44. package/dist/src/core/cache/rate-limit-checker.js +160 -0
  45. package/dist/src/core/cache/rate-limit-checker.js.map +1 -0
  46. package/dist/src/core/progress/cancelation-handler.d.ts +79 -0
  47. package/dist/src/core/progress/cancelation-handler.d.ts.map +1 -0
  48. package/dist/src/core/progress/cancelation-handler.js +111 -0
  49. package/dist/src/core/progress/cancelation-handler.js.map +1 -0
  50. package/dist/src/core/progress/error-logger.d.ts +58 -0
  51. package/dist/src/core/progress/error-logger.d.ts.map +1 -0
  52. package/dist/src/core/progress/error-logger.js +99 -0
  53. package/dist/src/core/progress/error-logger.js.map +1 -0
  54. package/dist/src/core/progress/import-state.d.ts +71 -0
  55. package/dist/src/core/progress/import-state.d.ts.map +1 -0
  56. package/dist/src/core/progress/import-state.js +96 -0
  57. package/dist/src/core/progress/import-state.js.map +1 -0
  58. package/dist/src/core/progress/progress-tracker.d.ts +139 -0
  59. package/dist/src/core/progress/progress-tracker.d.ts.map +1 -0
  60. package/dist/src/core/progress/progress-tracker.js +223 -0
  61. package/dist/src/core/progress/progress-tracker.js.map +1 -0
  62. package/dist/src/init/architecture/types.d.ts +6 -6
  63. package/dist/src/integrations/ado/ado-client.d.ts +25 -0
  64. package/dist/src/integrations/ado/ado-client.d.ts.map +1 -1
  65. package/dist/src/integrations/ado/ado-client.js +67 -0
  66. package/dist/src/integrations/ado/ado-client.js.map +1 -1
  67. package/dist/src/integrations/ado/ado-dependency-loader.d.ts +99 -0
  68. package/dist/src/integrations/ado/ado-dependency-loader.d.ts.map +1 -0
  69. package/dist/src/integrations/ado/ado-dependency-loader.js +207 -0
  70. package/dist/src/integrations/ado/ado-dependency-loader.js.map +1 -0
  71. package/dist/src/integrations/jira/jira-client.d.ts +32 -0
  72. package/dist/src/integrations/jira/jira-client.d.ts.map +1 -1
  73. package/dist/src/integrations/jira/jira-client.js +81 -0
  74. package/dist/src/integrations/jira/jira-client.js.map +1 -1
  75. package/dist/src/integrations/jira/jira-dependency-loader.d.ts +101 -0
  76. package/dist/src/integrations/jira/jira-dependency-loader.d.ts.map +1 -0
  77. package/dist/src/integrations/jira/jira-dependency-loader.js +200 -0
  78. package/dist/src/integrations/jira/jira-dependency-loader.js.map +1 -0
  79. package/package.json +1 -1
  80. package/plugins/specweave/.claude-plugin/plugin.json +20 -0
  81. package/plugins/specweave/agents/architect/AGENT.md +100 -602
  82. package/plugins/specweave/agents/pm/AGENT.md +96 -597
  83. package/plugins/specweave/agents/pm/AGENT.md.bak +1893 -0
  84. package/plugins/specweave/agents/pm/AGENT.md.bak2 +1754 -0
  85. package/plugins/specweave/commands/check-hooks.md +257 -0
  86. package/plugins/specweave/hooks/docs-changed.sh +9 -1
  87. package/plugins/specweave/hooks/docs-changed.sh.backup +79 -0
  88. package/plugins/specweave/hooks/human-input-required.sh +9 -1
  89. package/plugins/specweave/hooks/human-input-required.sh.backup +75 -0
  90. package/plugins/specweave/hooks/post-edit-spec.sh +202 -31
  91. package/plugins/specweave/hooks/post-first-increment.sh.backup +61 -0
  92. package/plugins/specweave/hooks/post-increment-change.sh +6 -1
  93. package/plugins/specweave/hooks/post-increment-change.sh.backup +98 -0
  94. package/plugins/specweave/hooks/post-increment-completion.sh +6 -1
  95. package/plugins/specweave/hooks/post-increment-completion.sh.backup +231 -0
  96. package/plugins/specweave/hooks/post-increment-planning.sh +6 -1
  97. package/plugins/specweave/hooks/post-increment-planning.sh.backup +1048 -0
  98. package/plugins/specweave/hooks/post-increment-status-change.sh +6 -1
  99. package/plugins/specweave/hooks/post-increment-status-change.sh.backup +147 -0
  100. package/plugins/specweave/hooks/post-metadata-change.sh +7 -1
  101. package/plugins/specweave/hooks/post-spec-update.sh.backup +158 -0
  102. package/plugins/specweave/hooks/post-task-completion.sh +225 -228
  103. package/plugins/specweave/hooks/post-user-story-complete.sh.backup +179 -0
  104. package/plugins/specweave/hooks/post-write-spec.sh +207 -31
  105. package/plugins/specweave/hooks/pre-command-deduplication.sh.backup +83 -0
  106. package/plugins/specweave/hooks/pre-edit-spec.sh +151 -0
  107. package/plugins/specweave/hooks/pre-implementation.sh +9 -1
  108. package/plugins/specweave/hooks/pre-implementation.sh.backup +67 -0
  109. package/plugins/specweave/hooks/pre-task-completion.sh +14 -8
  110. package/plugins/specweave/hooks/pre-task-completion.sh.backup +194 -0
  111. package/plugins/specweave/hooks/pre-tool-use.sh +9 -1
  112. package/plugins/specweave/hooks/pre-tool-use.sh.backup +133 -0
  113. package/plugins/specweave/hooks/pre-write-spec.sh +151 -0
  114. package/plugins/specweave/hooks/test-pretooluse-env.sh +72 -0
  115. package/plugins/specweave/hooks/user-prompt-submit.sh.backup +386 -0
  116. package/plugins/specweave/skills/compliance-architecture/SKILL.md +374 -0
  117. package/plugins/specweave/skills/external-sync-wizard/SKILL.md +610 -0
  118. package/plugins/specweave/skills/pm-closure-validation/SKILL.md +541 -0
  119. package/plugins/specweave/skills/roadmap-planner/SKILL.md +473 -0
  120. package/plugins/specweave-ado/commands/refresh-cache.js +25 -0
  121. package/plugins/specweave-ado/commands/refresh-cache.ts +40 -0
  122. package/plugins/specweave-ado/hooks/post-living-docs-update.sh +9 -2
  123. package/plugins/specweave-ado/hooks/post-living-docs-update.sh.backup +353 -0
  124. package/plugins/specweave-ado/hooks/post-task-completion.sh +10 -2
  125. package/plugins/specweave-ado/hooks/post-task-completion.sh.backup +172 -0
  126. package/plugins/specweave-github/hooks/post-task-completion.sh +10 -2
  127. package/plugins/specweave-github/hooks/post-task-completion.sh.backup +258 -0
  128. package/plugins/specweave-jira/commands/refresh-cache.js +25 -0
  129. package/plugins/specweave-jira/commands/refresh-cache.ts +40 -0
  130. package/plugins/specweave-jira/hooks/post-task-completion.sh +10 -2
  131. package/plugins/specweave-jira/hooks/post-task-completion.sh.backup +172 -0
  132. package/plugins/specweave-kafka-streams/commands/topology.md +437 -0
  133. package/plugins/specweave-n8n/commands/workflow-template.md +262 -0
  134. package/plugins/specweave-release/hooks/.specweave/logs/dora-tracking.log +252 -6465
  135. package/plugins/specweave-release/hooks/post-task-completion.sh.backup +110 -0
package/plugins/specweave-kafka-streams/commands/topology.md
@@ -0,0 +1,437 @@
---
name: specweave-kafka-streams:topology
description: Generate Kafka Streams topology code (Java/Kotlin) with KStream/KTable patterns. Creates stream processing applications with windowing, joins, state stores, and exactly-once semantics.
---

# Generate Kafka Streams Topology

Create production-ready Kafka Streams applications with best practices baked in.

## What This Command Does

1. **Select Pattern**: Choose a topology pattern (word count, enrichment, aggregation, etc.)
2. **Configure Topics**: Input/output topics and schemas
3. **Define Operations**: Filter, map, join, aggregate, window
4. **Generate Code**: Java or Kotlin implementation
5. **Add Tests**: Topology Test Driver unit tests
6. **Build Configuration**: Gradle/Maven, dependencies, configs

## Available Patterns

### Pattern 1: Stream Processing (Filter + Transform)
**Use Case**: Data cleansing and transformation

**Topology**:
```java
KStream<String, Event> events = builder.stream("raw-events");

KStream<String, ProcessedEvent> processed = events
    .filter((key, value) -> value.isValid())
    .mapValues(value -> ProcessedEvent.from(value)) // Event -> ProcessedEvent (placeholder mapper)
    .selectKey((key, value) -> value.getUserId());  // re-key by user ID (triggers a repartition)

processed.to("processed-events");
```

### Pattern 2: Stream-Table Join (Enrichment)
**Use Case**: Enrich events with reference data

**Topology**:
```java
// Users table (changelog stream)
KTable<Long, User> users = builder.table("users");

// Click events
KStream<Long, ClickEvent> clicks = builder.stream("clicks");

// Enrich clicks with user data
KStream<Long, EnrichedClick> enriched = clicks.leftJoin(
    users,
    (click, user) -> new EnrichedClick(
        click.getPage(),
        user != null ? user.getName() : "unknown",
        click.getTimestamp()
    )
);

enriched.to("enriched-clicks");
```
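
A stream-table join like this requires `clicks` and `users` to be co-partitioned (same key type and partition count). When that doesn't hold, a `GlobalKTable` variant avoids the constraint by replicating the full table to every instance; a minimal sketch using the same hypothetical domain types, where `click.getUserId()` is an assumed accessor:

```java
// GlobalKTable variant: no co-partitioning requirement, but the entire
// "users" topic is materialized on every application instance.
GlobalKTable<Long, User> usersGlobal = builder.globalTable("users");

KStream<Long, EnrichedClick> enriched = clicks.leftJoin(
    usersGlobal,
    (clickKey, click) -> click.getUserId(),  // maps each record to a table lookup key
    (click, user) -> new EnrichedClick(
        click.getPage(),
        user != null ? user.getName() : "unknown",
        click.getTimestamp()
    )
);
```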

### Pattern 3: Windowed Aggregation
**Use Case**: Time-based metrics (counts, sums, averages)

**Topology**:
```java
KTable<Windowed<String>, Long> counts = events
    .groupByKey()
    .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
    .count(Materialized.as("event-counts"));

counts.toStream()
    .map((windowedKey, count) -> {
        String key = windowedKey.key();
        Instant start = windowedKey.window().startTime();
        return KeyValue.pair(key, new WindowedCount(key, start, count));
    })
    .to("event-counts-output");
```
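
This emits an updated count for every incoming record. If downstream consumers want exactly one final result per window, a `suppress` step can hold updates until the window closes; a minimal sketch of that variant (same topic and store names as above):

```java
// Emit one final count per 5-minute window instead of per-record updates.
// With ofSizeWithNoGrace, a window closes as soon as stream time passes its end.
KTable<Windowed<String>, Long> finalCounts = events
    .groupByKey()
    .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
    .count(Materialized.as("event-counts"))
    .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()));
```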

### Pattern 4: Stateful Deduplication
**Use Case**: Remove duplicate events within a time window

**Topology**:
```java
// The dedup store must be registered with the topology before use;
// transformValues takes state store *names*, not a Materialized instance.
builder.addStateStore(Stores.keyValueStoreBuilder(
    Stores.persistentKeyValueStore("dedup-store"),
    Serdes.String(),
    Serdes.Long()));

KStream<String, Event> deduplicated = events
    .transformValues(
        () -> new DeduplicationTransformer(Duration.ofMinutes(10)),
        "dedup-store"
    )
    .filter((key, value) -> value != null);  // transformer returns null for duplicates

deduplicated.to("unique-events");
```
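
`DeduplicationTransformer` is a custom class the generator would emit alongside the topology. The generated code may differ; a minimal sketch, assuming a String `getEventId()` accessor on `Event` and the `dedup-store` registered above:

```java
import java.time.Duration;

import org.apache.kafka.streams.kstream.ValueTransformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;

// Hypothetical sketch: drops events whose ID was already seen within the TTL
// by returning null (filtered out downstream); remembers first-seen timestamps.
public class DeduplicationTransformer implements ValueTransformer<Event, Event> {
    private final Duration ttl;
    private KeyValueStore<String, Long> store;

    public DeduplicationTransformer(Duration ttl) {
        this.ttl = ttl;
    }

    @Override
    public void init(ProcessorContext context) {
        store = context.getStateStore("dedup-store");
    }

    @Override
    public Event transform(Event value) {
        long now = System.currentTimeMillis();
        Long firstSeen = store.get(value.getEventId());  // getEventId() is an assumption
        if (firstSeen != null && now - firstSeen < ttl.toMillis()) {
            return null;  // duplicate within TTL -> drop
        }
        store.put(value.getEventId(), now);
        return value;
    }

    @Override
    public void close() {}
}
```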

## Example Usage

```bash
# Generate topology
/specweave-kafka-streams:topology

# I'll ask:
# 1. Language? (Java or Kotlin)
# 2. Pattern? (Filter/Transform, Join, Aggregation, Deduplication)
# 3. Input topic(s)?
# 4. Output topic(s)?
# 5. Windowing? (if aggregation)
# 6. State store? (if stateful)
# 7. Build tool? (Gradle or Maven)

# Then I'll generate:
# - src/main/java/MyApp.java (application code)
# - src/test/java/MyAppTest.java (unit tests)
# - build.gradle or pom.xml
# - application.properties
# - README.md with setup instructions
```

## Generated Files

**1. StreamsApplication.java**: Main topology
```java
package com.example.streams;

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.*;
import org.apache.kafka.streams.kstream.*;

public class StreamsApplication {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
                  StreamsConfig.EXACTLY_ONCE_V2);
        // Default String serdes for the simple topology below
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Topology code here
        KStream<String, String> input = builder.stream("input-topic");
        KStream<String, String> processed = input
            .filter((key, value) -> value != null)
            .mapValues(value -> value.toUpperCase());
        processed.to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Graceful shutdown
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

**2. StreamsApplicationTest.java**: Unit tests with Topology Test Driver
```java
package com.example.streams;

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.*;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.jupiter.api.*;

public class StreamsApplicationTest {
    private TopologyTestDriver testDriver;
    private TestInputTopic<String, String> inputTopic;
    private TestOutputTopic<String, String> outputTopic;

    @BeforeEach
    public void setup() {
        StreamsBuilder builder = new StreamsBuilder();
        // Build the same topology as StreamsApplication
        KStream<String, String> input = builder.stream("input-topic");
        input.filter((key, value) -> value != null)
             .mapValues(value -> value.toUpperCase())
             .to("output-topic");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        testDriver = new TopologyTestDriver(builder.build(), props);
        inputTopic = testDriver.createInputTopic("input-topic",
            Serdes.String().serializer(),
            Serdes.String().serializer());
        outputTopic = testDriver.createOutputTopic("output-topic",
            Serdes.String().deserializer(),
            Serdes.String().deserializer());
    }

    @Test
    public void testTransformation() {
        // Send test data
        inputTopic.pipeInput("key1", "hello");

        // Assert output
        KeyValue<String, String> output = outputTopic.readKeyValue();
        assertEquals("HELLO", output.value);
    }

    @AfterEach
    public void tearDown() {
        testDriver.close();
    }
}
```

**3. build.gradle**: Gradle build configuration
```groovy
plugins {
    id 'java'
    id 'application'
}

group = 'com.example'
version = '1.0.0'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.apache.kafka:kafka-streams:3.6.0'
    implementation 'org.slf4j:slf4j-simple:2.0.9'

    testImplementation 'org.apache.kafka:kafka-streams-test-utils:3.6.0'
    testImplementation 'org.junit.jupiter:junit-jupiter:5.10.0'
}

application {
    mainClass = 'com.example.streams.StreamsApplication'
}

test {
    useJUnitPlatform()
}
```

**4. application.properties**: Runtime configuration
```properties
bootstrap.servers=localhost:9092
application.id=my-streams-app
processing.guarantee=exactly_once_v2
commit.interval.ms=100
cache.max.bytes.buffering=10485760
num.stream.threads=2
replication.factor=3
```
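
The main class above hard-codes its configuration; if the generated application should read this file instead, a small sketch (assuming the file sits on the classpath, and adding `java.io.InputStream` to the imports):

```java
// Load application.properties from the classpath instead of hard-coding values
Properties props = new Properties();
try (InputStream in = StreamsApplication.class.getClassLoader()
        .getResourceAsStream("application.properties")) {
    props.load(in);
}
```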

**5. README.md**: Setup instructions
````markdown
# Kafka Streams Application

## Build

```bash
# Gradle
./gradlew build

# Maven
mvn clean package
```

## Run

```bash
# Gradle
./gradlew run

# Maven
mvn exec:java
```

## Test

```bash
# Unit tests
./gradlew test

# Integration tests (requires Kafka cluster)
./gradlew integrationTest
```

## Docker

```bash
# Build image
docker build -t my-streams-app .

# Run
docker run -e BOOTSTRAP_SERVERS=kafka:9092 my-streams-app
```
````

## Configuration Options

### Exactly-Once Semantics (EOS v2)
```java
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
          StreamsConfig.EXACTLY_ONCE_V2);
```

### Multiple Stream Threads
```java
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);
```

### State Store Configuration
```java
StoreBuilder<KeyValueStore<String, Long>> storeBuilder =
    Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("my-store"),
        Serdes.String(),
        Serdes.Long()
    )
    .withCachingEnabled()
    .withLoggingEnabled(Map.of("retention.ms", "86400000"));
```
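
The store builder on its own does nothing until it is attached to the topology; processors and transformers then reference it by the name given above:

```java
// Register the store with the topology; transformers that declare
// "my-store" as a state store name can then look it up in init().
builder.addStateStore(storeBuilder);
```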

### Custom Serdes
```java
import com.fasterxml.jackson.databind.ObjectMapper;

import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;

// JSON Serde (using Jackson)
public class JsonSerde<T> implements Serde<T> {
    private final ObjectMapper mapper = new ObjectMapper();
    private final Class<T> type;

    public JsonSerde(Class<T> type) {
        this.type = type;
    }

    @Override
    public Serializer<T> serializer() {
        return (topic, data) -> {
            try {
                return mapper.writeValueAsBytes(data);
            } catch (Exception e) {
                throw new SerializationException(e);
            }
        };
    }

    @Override
    public Deserializer<T> deserializer() {
        return (topic, data) -> {
            try {
                return mapper.readValue(data, type);
            } catch (Exception e) {
                throw new SerializationException(e);
            }
        };
    }
}
```
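
Wiring the serde into a topology then looks like this (an `Event` POJO is assumed):

```java
// Use the custom JSON serde for the value side of a source topic
Serde<Event> eventSerde = new JsonSerde<>(Event.class);
KStream<String, Event> events =
    builder.stream("raw-events", Consumed.with(Serdes.String(), eventSerde));
```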

## Testing Strategies

### 1. Unit Tests (Topology Test Driver)
```java
// No Kafka cluster required
TopologyTestDriver testDriver = new TopologyTestDriver(topology, props);
```

### 2. Integration Tests (Embedded Kafka)
```java
@ExtendWith(EmbeddedKafkaExtension.class)
public class IntegrationTest {
    @Test
    public void testWithRealKafka(EmbeddedKafka kafka) {
        // Real Kafka cluster
    }
}
```

### 3. Performance Tests (Load Testing)
```bash
# Generate test load
kafka-producer-perf-test.sh \
  --topic input-topic \
  --num-records 1000000 \
  --throughput 10000 \
  --record-size 1024 \
  --producer-props bootstrap.servers=localhost:9092
```

## Monitoring

### JMX Metrics
```java
// Record fine-grained metrics (exposed via JMX)
props.put(StreamsConfig.METRICS_RECORDING_LEVEL_CONFIG, "DEBUG");

// Optionally plug in an external metrics reporter
props.put("metric.reporters",
          "io.confluent.metrics.reporter.ConfluentMetricsReporter");
```

### Key Metrics to Monitor
- **Consumer Lag**: `kafka.consumer.fetch.manager.records.lag.max`
- **Processing Rate**: `kafka.streams.stream.task.process.rate`
- **State Store Size**: `kafka.streams.state.store.bytes.total`
- **Rebalance Frequency**: `kafka.streams.consumer.coordinator.rebalance.total`

## Troubleshooting

### Issue 1: Rebalancing Too Frequently
**Solution**: Increase the session timeout (a consumer config, forwarded to the embedded consumer)
```java
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
```

### Issue 2: State Store Too Large
**Solution**: Enable compaction, reduce retention
```java
storeBuilder.withLoggingEnabled(Map.of(
    "cleanup.policy", "compact",
    "retention.ms", "86400000"
));
```

### Issue 3: Slow Processing
**Solution**: Increase parallelism
```java
// More threads
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 8);
```
```bash
# More partitions (requires topic reconfiguration)
kafka-topics.sh --alter --topic input-topic --partitions 8 \
  --bootstrap-server localhost:9092
```

## Related Commands

- `/specweave-kafka:dev-env` - Set up local Kafka cluster for testing
- `/specweave-kafka:monitor-setup` - Configure Prometheus + Grafana monitoring

## Documentation

- **Kafka Streams Docs**: https://kafka.apache.org/documentation/streams/
- **Topology Patterns**: `.specweave/docs/public/guides/kafka-streams-patterns.md`
- **State Stores**: `.specweave/docs/public/guides/kafka-streams-state.md`
- **Testing Guide**: `.specweave/docs/public/guides/kafka-streams-testing.md`

---

**Plugin**: specweave-kafka-streams
**Version**: 1.0.0
**Status**: ✅ Production Ready
package/plugins/specweave-n8n/commands/workflow-template.md
@@ -0,0 +1,262 @@
---
name: specweave-n8n:workflow-template
description: Generate n8n workflow JSON template with Kafka trigger/producer nodes. Creates event-driven workflow patterns (fan-out, retry+DLQ, enrichment, CDC).
---

# Generate n8n Workflow Template

Create ready-to-use n8n workflow JSON files with Kafka integration patterns.

## What This Command Does

1. **Select Pattern**: Choose from common event-driven patterns
2. **Configure Kafka**: Specify topics, consumer groups, brokers
3. **Customize Workflow**: Add enrichment, transformations, error handling
4. **Generate JSON**: Export n8n-compatible workflow file
5. **Import Instructions**: How to load into n8n UI

## Available Patterns

### Pattern 1: Kafka Trigger → HTTP Enrichment → Kafka Producer
**Use Case**: Event enrichment with external API

**Workflow**:
```
[Kafka Trigger] → [HTTP Request] → [Set/Transform] → [Kafka Producer]
      ↓                 ↓                                    ↓
 Input topic       Enrich data                         Output topic
```

**Configuration**:
- Input topic (e.g., `orders`)
- API endpoint (e.g., `https://api.example.com/customers/{id}`)
- Output topic (e.g., `enriched-orders`)

### Pattern 2: Kafka Trigger → Fan-Out
**Use Case**: Single event triggers multiple downstream topics

**Workflow**:
```
[Kafka Trigger] → [Switch] ─→ [Kafka Producer] (high-priority)
      ↓                    ├─→ [Kafka Producer] (all-events)
 Input topic               └─→ [Kafka Producer] (analytics)
```

### Pattern 3: Retry with Dead Letter Queue
**Use Case**: Fault-tolerant processing with retry logic

**Workflow**:
```
[Kafka Trigger] → [Try] → [Process] → [Kafka Producer] (success)
      ↓             ↓
 Input topic     [Catch] → [Increment Retry Count]
                                   ↓
                              retry < 3 ?
                          yes ↙         ↘ no
         [Kafka Producer]                  [Kafka Producer]
           (retry-topic)                     (dlq-topic)
```
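
This pattern assumes the retry and DLQ topics already exist; they can be created up front (the topic names below are placeholders):

```bash
# Create retry and dead-letter topics before activating the workflow
kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic orders-retry --partitions 3 --replication-factor 1
kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic orders-dlq --partitions 3 --replication-factor 1
```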

### Pattern 4: Change Data Capture (CDC)
**Use Case**: Database polling → Kafka events

**Workflow**:
```
[Cron: Every 1m] → [PostgreSQL Query] → [Compare] → [Kafka Producer]
                          ↓                  ↓               ↓
                    Get new rows       Detect changes   Publish CDC events
```
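
The PostgreSQL node typically keys its poll on a timestamp column; a sketch of the kind of query it could run each tick (the table and column names are assumptions):

```bash
# Hypothetical polling query: rows changed since the last 1-minute tick
psql "$DATABASE_URL" -c \
  "SELECT id, status, updated_at FROM orders
   WHERE updated_at > NOW() - INTERVAL '1 minute';"
```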

## Example Usage

```bash
# Generate workflow template
/specweave-n8n:workflow-template

# I'll ask:
# 1. Which pattern? (Enrichment, Fan-Out, Retry+DLQ, CDC)
# 2. Input topic name?
# 3. Output topic(s)?
# 4. Kafka broker (default: localhost:9092)?
# 5. Consumer group name?

# Then I'll generate:
# - workflow.json (importable into n8n)
# - README.md with setup instructions
# - .env.example with required variables
```

## Generated Files

**1. workflow.json**: n8n workflow definition
```json
{
  "name": "Kafka Event Enrichment",
  "nodes": [
    {
      "type": "n8n-nodes-base.kafkaTrigger",
      "name": "Kafka Trigger",
      "parameters": {
        "topic": "orders",
        "groupId": "order-processor",
        "brokers": "localhost:9092"
      }
    },
    {
      "type": "n8n-nodes-base.httpRequest",
      "name": "Enrich Customer Data",
      "parameters": {
        "url": "=https://api.example.com/customers/{{$json.customerId}}",
        "authentication": "genericCredentialType"
      }
    },
    {
      "type": "n8n-nodes-base.set",
      "name": "Transform",
      "parameters": {
        "values": {
          "orderId": "={{$json.order.id}}",
          "customerName": "={{$json.customer.name}}"
        }
      }
    },
    {
      "type": "n8n-nodes-base.kafka",
      "name": "Kafka Producer",
      "parameters": {
        "topic": "enriched-orders",
        "brokers": "localhost:9092"
      }
    }
  ],
  "connections": { ... }
}
```

**2. README.md**: Import instructions
```markdown
# Import Workflow into n8n

1. Open n8n UI (http://localhost:5678)
2. Click "Workflows" → "Import from File"
3. Select workflow.json
4. Configure credentials (Kafka, HTTP API)
5. Activate workflow
6. Test with sample event
```

**3. .env.example**: Required environment variables
```bash
KAFKA_BROKERS=localhost:9092
KAFKA_SASL_USERNAME=your-username
KAFKA_SASL_PASSWORD=your-password
API_ENDPOINT=https://api.example.com
API_TOKEN=your-api-token
```

## Import into n8n

**Via UI**:
1. n8n Dashboard → Workflows → Import from File
2. Select generated workflow.json
3. Configure Kafka credentials
4. Activate workflow

**Via CLI**:
```bash
# Import workflow
n8n import:workflow --input=workflow.json

# List workflows
n8n list:workflow
```

## Configuration Options

### Kafka Settings
- **Brokers**: Comma-separated list (e.g., `broker1:9092,broker2:9092`)
- **Consumer Group**: Unique identifier for this workflow
- **Offset**: `earliest` (replay) or `latest` (new messages only)
- **Auto Commit**: `true` (recommended) or `false` (manual)
- **SSL/SASL**: Authentication credentials for secure clusters (see the kcat check below)
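
For secured clusters, the same connection settings can be sanity-checked outside n8n with kcat before wiring them into the trigger node (broker addresses and credentials below are placeholders):

```bash
# Fetch broker metadata over SASL/SSL to verify connectivity and credentials
kcat -L -b broker1:9092,broker2:9092 \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanisms=PLAIN \
  -X sasl.username="$KAFKA_SASL_USERNAME" \
  -X sasl.password="$KAFKA_SASL_PASSWORD"
```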

### Error Handling
- **Retry Count**: Maximum retries before DLQ (default: 3)
- **Backoff Strategy**: Exponential (1s, 2s, 4s, 8s)
- **DLQ Topic**: Dead letter queue for failed messages
- **Alert on Failure**: Send Slack/email notification

### Performance
- **Batch Size**: Process N messages at once (default: 1)
- **Batch Timeout**: Wait up to N ms for a batch (default: 5000)
- **Parallel Execution**: Enable for HTTP enrichment (default: disabled)
- **Max Memory**: Limit workflow memory usage

## Testing

**Manual Test**:
```bash
# 1. Produce test event
echo '{"orderId": 123, "customerId": 456}' | \
  kcat -P -b localhost:9092 -t orders

# 2. Check n8n execution log
# n8n UI → Executions → View latest run

# 3. Consume output
kcat -C -b localhost:9092 -t enriched-orders
```

**Automated Test**:
```bash
# Execute the workflow via CLI
n8n execute --file workflow.json

# Expected output: success status
```

## Troubleshooting

### Issue 1: Workflow Not Triggering
**Solution**: Check Kafka connection
```bash
# Test Kafka connectivity
kcat -L -b localhost:9092

# Verify consumer group registered
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group order-processor
```

### Issue 2: Messages Not Being Consumed
**Solution**: Check offset position
- n8n UI → Workflow → Kafka Trigger → Offset
- Set to `earliest` to replay all messages

### Issue 3: HTTP Enrichment Timeout
**Solution**: Enable batching on the HTTP node
- Workflow → HTTP Request → Batching → Enable
- Set batch size: 100
- Set timeout: 30s

## Related Commands

- `/specweave-kafka:dev-env` - Set up local Kafka cluster
- `/specweave-n8n:test-workflow` - Test workflow with sample data (coming soon)

## Documentation

- **n8n Kafka Nodes**: https://docs.n8n.io/integrations/builtin/app-nodes/n8n-nodes-base.kafka/
- **Workflow Patterns**: `.specweave/docs/public/guides/n8n-kafka-patterns.md`
- **Error Handling**: `.specweave/docs/public/guides/n8n-error-handling.md`

---

**Plugin**: specweave-n8n
**Version**: 1.0.0
**Status**: ✅ Production Ready