@mytechtoday/augment-extensions 1.2.0 → 1.2.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/AGENTS.md +35 -3
- package/README.md +3 -3
- package/augment-extensions/domain-rules/software-architecture/README.md +143 -0
- package/augment-extensions/domain-rules/software-architecture/examples/banking-layered.md +961 -0
- package/augment-extensions/domain-rules/software-architecture/examples/ecommerce-microservices.md +990 -0
- package/augment-extensions/domain-rules/software-architecture/examples/iot-eventdriven.md +882 -0
- package/augment-extensions/domain-rules/software-architecture/examples/monolith-to-microservices-migration.md +703 -0
- package/augment-extensions/domain-rules/software-architecture/examples/serverless-imageprocessing.md +957 -0
- package/augment-extensions/domain-rules/software-architecture/examples/trading-eventdriven.md +747 -0
- package/augment-extensions/domain-rules/software-architecture/module.json +119 -0
- package/augment-extensions/domain-rules/software-architecture/rules/challenges-solutions.md +763 -0
- package/augment-extensions/domain-rules/software-architecture/rules/definitions-terminology.md +409 -0
- package/augment-extensions/domain-rules/software-architecture/rules/design-principles.md +684 -0
- package/augment-extensions/domain-rules/software-architecture/rules/evaluation-testing.md +1381 -0
- package/augment-extensions/domain-rules/software-architecture/rules/event-driven-architecture.md +616 -0
- package/augment-extensions/domain-rules/software-architecture/rules/fundamentals.md +306 -0
- package/augment-extensions/domain-rules/software-architecture/rules/industry-architectures.md +554 -0
- package/augment-extensions/domain-rules/software-architecture/rules/layered-architecture.md +776 -0
- package/augment-extensions/domain-rules/software-architecture/rules/microservices-architecture.md +503 -0
- package/augment-extensions/domain-rules/software-architecture/rules/modeling-documentation.md +1199 -0
- package/augment-extensions/domain-rules/software-architecture/rules/monolithic-architecture.md +351 -0
- package/augment-extensions/domain-rules/software-architecture/rules/principles.md +556 -0
- package/augment-extensions/domain-rules/software-architecture/rules/quality-attributes.md +797 -0
- package/augment-extensions/domain-rules/software-architecture/rules/scalability-performance.md +1345 -0
- package/augment-extensions/domain-rules/software-architecture/rules/security-architecture.md +1039 -0
- package/augment-extensions/domain-rules/software-architecture/rules/serverless-architecture.md +711 -0
- package/augment-extensions/domain-rules/software-architecture/rules/skills-development.md +568 -0
- package/augment-extensions/domain-rules/software-architecture/rules/tools-methodologies.md +961 -0
- package/augment-extensions/workflows/beads/examples/complete-workflow-example.md +8 -8
- package/augment-extensions/workflows/beads/rules/best-practices.md +2 -2
- package/augment-extensions/workflows/beads/rules/file-format.md +4 -4
- package/augment-extensions/workflows/beads/rules/manual-setup.md +4 -4
- package/augment-extensions/workflows/beads/rules/workflow.md +3 -3
- package/modules.md +40 -3
- package/package.json +1 -1
@@ -0,0 +1,747 @@

# Stock Trading System Architecture Example

## Overview

This document provides a comprehensive example of a stock trading system built with event-driven architecture, focusing on low latency, high throughput, and real-time processing.

---

## System Context

### Business Requirements

**Functional Requirements**
- Real-time market data streaming
- Order placement and execution
- Portfolio management and tracking
- Risk management and compliance
- Trade settlement and clearing
- Market analysis and alerts
- Audit trail and regulatory reporting

**Non-Functional Requirements**
- **Latency**: < 10ms order processing
- **Throughput**: 100,000+ orders/second
- **Availability**: 99.99% during trading hours
- **Data Integrity**: Zero data loss, exactly-once processing
- **Compliance**: SEC and FINRA regulations
- **Audit**: Complete trade history

### System Constraints

- Trading hours: 9:30 AM - 4:00 PM EST
- Market data updates: 1,000+ updates/second per symbol
- Order types: Market, Limit, Stop, Stop-Limit
- Asset classes: Stocks, Options, Futures
- Settlement cycle: T+1 (trade date plus one business day)
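
The T+1 constraint means a trade settles on the next business day after execution. A minimal `java.time` sketch of that rule (the class name `SettlementCalc` is illustrative; it skips weekends but not exchange holidays, which a real settlement calendar must also handle):

```java
import java.time.DayOfWeek;
import java.time.LocalDate;

// T+1 settlement: next business day after the trade date.
// Weekends are skipped; exchange holidays are NOT modeled here.
public class SettlementCalc {
    public static LocalDate settlementDate(LocalDate tradeDate) {
        LocalDate d = tradeDate.plusDays(1);
        while (d.getDayOfWeek() == DayOfWeek.SATURDAY
                || d.getDayOfWeek() == DayOfWeek.SUNDAY) {
            d = d.plusDays(1);
        }
        return d;
    }
}
```

A Friday trade therefore settles on the following Monday rather than Saturday.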

---

## Architecture Overview

### High-Level Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                      Trading Platform                       │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  Market Data Feed → Event Stream → Trading Engine           │
│        ↓                 ↓                                  │
│  Price Updates     Order Events → Execution Engine          │
│        ↓                 ↓                                  │
│  Analytics         Risk Check   → Portfolio Service         │
│        ↓                 ↓                                  │
│  Alerts            Settlement   → Clearing Service          │
│                          ↓                                  │
│                    Audit Log    → Compliance Service        │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```

### Event Flow

```
1. Market Data Events
   MarketDataReceived → PriceUpdated → AnalyticsCalculated → AlertTriggered

2. Order Lifecycle Events
   OrderPlaced → OrderValidated → RiskChecked → OrderExecuted → TradeSettled

3. Portfolio Events
   TradeExecuted → PositionUpdated → PortfolioRebalanced → ReportGenerated
```
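
The order lifecycle above can be sketched as an explicit state machine; encoding the allowed transitions makes invalid hops (say, PENDING straight to EXECUTED) detectable before an event is published. The class and state names here are illustrative, with REJECTED and CANCELLED added as terminal states:

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Illustrative state machine for the order lifecycle events above.
public class OrderLifecycle {
    public enum State { PENDING, VALIDATED, RISK_APPROVED, EXECUTED, SETTLED, REJECTED, CANCELLED }

    private static final Map<State, Set<State>> TRANSITIONS = new EnumMap<>(State.class);
    static {
        TRANSITIONS.put(State.PENDING,       EnumSet.of(State.VALIDATED, State.REJECTED, State.CANCELLED));
        TRANSITIONS.put(State.VALIDATED,     EnumSet.of(State.RISK_APPROVED, State.REJECTED, State.CANCELLED));
        TRANSITIONS.put(State.RISK_APPROVED, EnumSet.of(State.EXECUTED, State.CANCELLED));
        TRANSITIONS.put(State.EXECUTED,      EnumSet.of(State.SETTLED));
        // SETTLED, REJECTED, CANCELLED are terminal: no outgoing transitions
    }

    public static boolean canTransition(State from, State to) {
        return TRANSITIONS.getOrDefault(from, EnumSet.noneOf(State.class)).contains(to);
    }
}
```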

---

## Service Architecture

### Core Services

**1. Market Data Service**
- Ingests real-time market data from exchanges
- Normalizes data from multiple sources
- Publishes price updates to the event stream
- Handles 1M+ events/second

**2. Order Management Service**
- Receives and validates orders
- Publishes order events
- Tracks the order lifecycle
- Handles order modifications and cancellations

**3. Trading Engine**
- Matches orders against market conditions
- Executes trades
- Publishes execution events
- Sub-millisecond latency

**4. Risk Management Service**
- Real-time risk calculations
- Position limit enforcement
- Margin requirements
- Circuit breaker triggers

**5. Portfolio Service**
- Tracks positions and P&L
- Calculates portfolio metrics
- Handles corporate actions
- Generates performance reports

**6. Settlement Service**
- Trade confirmation
- Clearing and settlement (T+1)
- Cash management
- Reconciliation

**7. Compliance Service**
- Regulatory reporting
- Audit trail
- Trade surveillance
- Best execution analysis

---

## Technology Stack

### Event Streaming
- **Message Broker**: Apache Kafka (high throughput, low latency)
- **Stream Processing**: Apache Flink (real-time analytics)
- **Event Store**: Apache Kafka + Kafka Streams
- **Schema Registry**: Confluent Schema Registry (Avro)

### Data Storage
- **Time-Series DB**: InfluxDB (market data, metrics)
- **Relational DB**: PostgreSQL (orders, accounts, positions)
- **In-Memory Cache**: Redis (real-time prices, session data)
- **Event Store**: Kafka (event sourcing)
- **Analytics**: ClickHouse (OLAP queries)

### Services
- **Languages**: Java (low latency), Go (high concurrency)
- **Frameworks**: Spring Boot, Micronaut
- **APIs**: gRPC (internal), REST (external)

### Infrastructure
- **Container Orchestration**: Kubernetes
- **Service Mesh**: Istio
- **Monitoring**: Prometheus + Grafana
- **Tracing**: Jaeger
- **Logging**: ELK Stack

---

## Implementation Details

### 1. Event Schema Design

**Market Data Event**

```java
// Java representation of the Avro schema for market data events
public class MarketDataEvent {
    private String symbol;
    private String exchange;
    private BigDecimal bidPrice;
    private BigDecimal askPrice;
    private long bidSize;
    private long askSize;
    private BigDecimal lastPrice;
    private long volume;
    private long timestamp;      // Nanosecond precision
    private String eventId;
}

// Java representation of the Avro schema for order events
public class OrderEvent {
    private String orderId;
    private String accountId;
    private String symbol;
    private OrderType orderType; // MARKET, LIMIT, STOP, STOP_LIMIT
    private OrderSide side;      // BUY, SELL
    private long quantity;
    private BigDecimal price;
    private BigDecimal stopPrice;
    private OrderStatus status;  // PENDING, VALIDATED, RISK_APPROVED, EXECUTED, REJECTED, CANCELLED
    private long timestamp;
    private String eventId;
}

// Trade execution event
public class TradeExecutedEvent {
    private String tradeId;
    private String orderId;
    private String accountId;    // Needed downstream by the portfolio service
    private String symbol;
    private OrderSide side;
    private long quantity;
    private BigDecimal executionPrice;
    private BigDecimal commission;
    private long executionTime;
    private String venue;
    private String eventId;
}
```

### 2. Market Data Service Implementation

**Real-Time Price Streaming**

```java
// Market data ingestion service
@Service
public class MarketDataService {
    private final KafkaTemplate<String, MarketDataEvent> kafkaTemplate;
    private final RedisTemplate<String, String> redisTemplate;
    private final ObjectMapper objectMapper = new ObjectMapper();

    @Autowired
    public MarketDataService(
            KafkaTemplate<String, MarketDataEvent> kafkaTemplate,
            RedisTemplate<String, String> redisTemplate) {
        this.kafkaTemplate = kafkaTemplate;
        this.redisTemplate = redisTemplate;
    }

    // Ingest market data from an exchange feed
    public void processMarketData(ExchangeFeed feed) throws JsonProcessingException {
        MarketDataEvent event = MarketDataEvent.builder()
            .symbol(feed.getSymbol())
            .exchange(feed.getExchange())
            .bidPrice(feed.getBidPrice())
            .askPrice(feed.getAskPrice())
            .bidSize(feed.getBidSize())
            .askSize(feed.getAskSize())
            .lastPrice(feed.getLastPrice())
            .volume(feed.getVolume())
            .timestamp(System.nanoTime()) // Monotonic clock; use a wall-clock source if events must be ordered across hosts
            .eventId(UUID.randomUUID().toString())
            .build();

        // Publish to Kafka topic (partitioned by symbol)
        kafkaTemplate.send("market-data", event.getSymbol(), event);

        // Update Redis cache for real-time queries
        String key = "price:" + event.getSymbol();
        redisTemplate.opsForValue().set(key,
            objectMapper.writeValueAsString(event),
            Duration.ofSeconds(60));
    }
}

// Kafka consumer for market data analytics
@Component
public class MarketDataAnalytics {

    @KafkaListener(topics = "market-data", groupId = "analytics-group")
    public void processMarketData(MarketDataEvent event) {
        // Calculate technical indicators
        calculateMovingAverage(event);
        calculateRSI(event);
        detectPriceAlerts(event);
    }

    private void calculateMovingAverage(MarketDataEvent event) {
        // Use Kafka Streams for windowed aggregations:
        // SMA and EMA over 5min, 15min, and 1hr windows
    }

    private void calculateRSI(MarketDataEvent event) {
        // Relative Strength Index over a rolling window
    }

    private void detectPriceAlerts(MarketDataEvent event) {
        // Check whether the price crosses user-defined thresholds
        // and publish alert events
    }
}
```
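
The SMA mentioned in `calculateMovingAverage` reduces to a fixed-size window over recent prices. A minimal in-memory sketch of that arithmetic (the real pipeline would use Kafka Streams windowed aggregations; the `MovingAverage` class is illustrative):

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.util.ArrayDeque;
import java.util.Deque;

// Fixed-size simple moving average over the most recent prices.
public class MovingAverage {
    private final int window;
    private final Deque<BigDecimal> prices = new ArrayDeque<>();
    private BigDecimal sum = BigDecimal.ZERO;

    public MovingAverage(int window) { this.window = window; }

    // Add a price and return the average over the last `window` prices.
    public BigDecimal add(BigDecimal price) {
        prices.addLast(price);
        sum = sum.add(price);
        if (prices.size() > window) {
            sum = sum.subtract(prices.removeFirst()); // drop the oldest price
        }
        return sum.divide(BigDecimal.valueOf(prices.size()), MathContext.DECIMAL64);
    }
}
```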

### 3. Order Management with Event Sourcing

**Order Service**

```java
@Service
public class OrderService {
    private final OrderRepository orderRepository;
    private final EventPublisher eventPublisher;

    public OrderService(OrderRepository orderRepository, EventPublisher eventPublisher) {
        this.orderRepository = orderRepository;
        this.eventPublisher = eventPublisher;
    }

    @Transactional
    public Order placeOrder(PlaceOrderRequest request) {
        // Create order entity
        Order order = Order.builder()
            .orderId(UUID.randomUUID().toString())
            .accountId(request.getAccountId())
            .symbol(request.getSymbol())
            .orderType(request.getOrderType())
            .side(request.getSide())
            .quantity(request.getQuantity())
            .price(request.getPrice())
            .status(OrderStatus.PENDING)
            .createdAt(Instant.now())
            .build();

        // Persist order
        orderRepository.save(order);

        // Publish OrderPlaced event
        OrderEvent event = OrderEvent.fromOrder(order);
        eventPublisher.publish("order-events", event);

        return order;
    }
}

// Event-driven order processing pipeline
@Component
public class OrderProcessor {
    private final EventPublisher eventPublisher;
    private final RiskService riskService;

    public OrderProcessor(EventPublisher eventPublisher, RiskService riskService) {
        this.eventPublisher = eventPublisher;
        this.riskService = riskService;
    }

    @KafkaListener(topics = "order-events", groupId = "order-processor")
    public void processOrderEvent(OrderEvent event) {
        switch (event.getStatus()) {
            case PENDING:
                validateOrder(event);
                break;
            case VALIDATED:
                checkRisk(event);
                break;
            case RISK_APPROVED:
                executeOrder(event);
                break;
            case EXECUTED:
                settleOrder(event);
                break;
            default:
                // Terminal states (REJECTED, CANCELLED) need no further processing
                break;
        }
    }

    private void validateOrder(OrderEvent event) {
        // Validate order parameters,
        // check that the account exists and is active,
        // and verify that the symbol is tradable
        if (isValid(event)) {
            event.setStatus(OrderStatus.VALIDATED);
        } else {
            event.setStatus(OrderStatus.REJECTED);
            event.setRejectionReason("Invalid order parameters");
        }
        eventPublisher.publish("order-events", event);
    }

    private void checkRisk(OrderEvent event) {
        // Check buying power, position limits, and concentration risk
        RiskCheckResult result = riskService.checkOrder(event);

        if (result.isApproved()) {
            event.setStatus(OrderStatus.RISK_APPROVED);
        } else {
            event.setStatus(OrderStatus.REJECTED);
            event.setRejectionReason(result.getReason());
        }
        eventPublisher.publish("order-events", event);
    }
}
```

### 4. Trading Engine with Low Latency

**High-Performance Order Execution**

```java
// Trading engine using the LMAX Disruptor for ultra-low latency
public class TradingEngine {
    private final Disruptor<OrderCommand> disruptor;
    private final RingBuffer<OrderCommand> ringBuffer;
    private final EventPublisher eventPublisher;

    public TradingEngine(EventPublisher eventPublisher) {
        this.eventPublisher = eventPublisher;

        // Disruptor pattern for lock-free concurrency
        ThreadFactory threadFactory = r -> {
            Thread t = new Thread(r);
            t.setPriority(Thread.MAX_PRIORITY);
            return t;
        };

        disruptor = new Disruptor<>(
            OrderCommand::new,
            1024 * 1024,               // Ring buffer size (must be a power of 2)
            threadFactory,
            ProducerType.MULTI,
            new YieldingWaitStrategy() // Low-latency wait strategy (spins, trading CPU for latency)
        );

        disruptor.handleEventsWith(new OrderExecutionHandler());
        disruptor.start();

        ringBuffer = disruptor.getRingBuffer();
    }

    public void submitOrder(OrderEvent order) {
        long sequence = ringBuffer.next();
        try {
            OrderCommand command = ringBuffer.get(sequence);
            command.setOrder(order);
        } finally {
            ringBuffer.publish(sequence);
        }
    }

    // Event handler for order execution
    private class OrderExecutionHandler implements EventHandler<OrderCommand> {
        @Override
        public void onEvent(OrderCommand command, long sequence, boolean endOfBatch) {
            OrderEvent order = command.getOrder();

            // Execute order matching logic
            TradeExecutedEvent trade = executeOrder(order);

            // Publish trade execution event
            if (trade != null) {
                eventPublisher.publish("trade-events", trade);
            }
        }

        private TradeExecutedEvent executeOrder(OrderEvent order) {
            // Match the order against market data:
            // market orders execute at the current market price,
            // limit orders execute only if the limit price is met.
            // getLatestMarketData, calculateExecutionPrice, and
            // calculateCommission are assumed helpers, omitted for brevity.
            MarketDataEvent marketData = getLatestMarketData(order.getSymbol());
            BigDecimal executionPrice = calculateExecutionPrice(order, marketData);

            return TradeExecutedEvent.builder()
                .tradeId(UUID.randomUUID().toString())
                .orderId(order.getOrderId())
                .accountId(order.getAccountId())
                .symbol(order.getSymbol())
                .side(order.getSide())
                .quantity(order.getQuantity())
                .executionPrice(executionPrice)
                .commission(calculateCommission(order))
                .executionTime(System.nanoTime())
                .venue("NASDAQ")
                .eventId(UUID.randomUUID().toString())
                .build();
        }
    }
}
```
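
The "check if price is met" step for limit orders reduces to a comparison against the best bid and ask: a buy limit is marketable when the best ask is at or below the limit, a sell limit when the best bid is at or above it. A minimal sketch with illustrative names:

```java
import java.math.BigDecimal;

// Marketability check for limit orders against the current best bid/ask.
public class LimitCheck {
    public enum Side { BUY, SELL }

    public static boolean isMarketable(Side side, BigDecimal limit,
                                       BigDecimal bestBid, BigDecimal bestAsk) {
        if (side == Side.BUY) {
            return bestAsk.compareTo(limit) <= 0; // can buy at or below the limit
        }
        return bestBid.compareTo(limit) >= 0;     // can sell at or above the limit
    }
}
```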

### 5. Portfolio Service with Event Sourcing

**Position Tracking**

```java
@Service
public class PortfolioService {
    private final PositionRepository positionRepository;
    private final EventPublisher eventPublisher;

    public PortfolioService(PositionRepository positionRepository, EventPublisher eventPublisher) {
        this.positionRepository = positionRepository;
        this.eventPublisher = eventPublisher;
    }

    @KafkaListener(topics = "trade-events", groupId = "portfolio-group")
    public void handleTradeExecuted(TradeExecutedEvent event) {
        // Load the current position, or start a new one
        Position position = positionRepository.findByAccountAndSymbol(
            event.getAccountId(),
            event.getSymbol()
        ).orElse(new Position(event.getAccountId(), event.getSymbol()));

        // Apply the trade to the position
        if (event.getSide() == OrderSide.BUY) {
            position.addShares(event.getQuantity(), event.getExecutionPrice());
        } else {
            position.removeShares(event.getQuantity(), event.getExecutionPrice());
        }

        // Calculate realized P&L
        BigDecimal realizedPnL = position.calculateRealizedPnL(event);

        // Persist the updated position
        positionRepository.save(position);

        // Publish position updated event
        PositionUpdatedEvent positionEvent = PositionUpdatedEvent.builder()
            .accountId(event.getAccountId())
            .symbol(event.getSymbol())
            .quantity(position.getQuantity())
            .averagePrice(position.getAveragePrice())
            .marketValue(position.getMarketValue())
            .unrealizedPnL(position.getUnrealizedPnL())
            .realizedPnL(realizedPnL)
            .timestamp(Instant.now())
            .build();

        eventPublisher.publish("position-events", positionEvent);
    }
}
```
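
The `addShares`/`removeShares` bookkeeping above typically uses average-cost accounting: buys blend into a new average price, and sells realize (sell price − average price) × quantity while leaving the average unchanged. A minimal sketch under those assumptions (long-only, no corporate actions; `AvgCostPosition` is an illustrative name):

```java
import java.math.BigDecimal;
import java.math.MathContext;

// Average-cost position tracking with realized P&L.
public class AvgCostPosition {
    private long quantity = 0;
    private BigDecimal avgPrice = BigDecimal.ZERO;
    private BigDecimal realizedPnL = BigDecimal.ZERO;

    public void buy(long qty, BigDecimal price) {
        // Blend the new lot into the weighted-average cost
        BigDecimal cost = avgPrice.multiply(BigDecimal.valueOf(quantity))
                .add(price.multiply(BigDecimal.valueOf(qty)));
        quantity += qty;
        avgPrice = cost.divide(BigDecimal.valueOf(quantity), MathContext.DECIMAL64);
    }

    public void sell(long qty, BigDecimal price) {
        // Realize P&L against the average cost; average price is unchanged
        realizedPnL = realizedPnL.add(
                price.subtract(avgPrice).multiply(BigDecimal.valueOf(qty)));
        quantity -= qty;
    }

    public long getQuantity() { return quantity; }
    public BigDecimal getAvgPrice() { return avgPrice; }
    public BigDecimal getRealizedPnL() { return realizedPnL; }
}
```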

---

## Event Streaming Configuration

### Kafka Topic Configuration

```yaml
# Kafka topics for the trading system
topics:
  market-data:
    partitions: 100            # Partitioned by symbol for parallelism
    replication-factor: 3
    retention-ms: 86400000     # 24 hours
    compression-type: lz4

  order-events:
    partitions: 50
    replication-factor: 3
    retention-ms: 2592000000   # 30 days (audit trail)
    compression-type: snappy

  trade-events:
    partitions: 50
    replication-factor: 3
    retention-ms: 31536000000  # 1 year (regulatory)
    compression-type: snappy

  position-events:
    partitions: 20
    replication-factor: 3
    retention-ms: 31536000000  # 1 year

  risk-events:
    partitions: 10
    replication-factor: 3
    retention-ms: 2592000000   # 30 days
```
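
The `retention-ms` values above are raw millisecond counts. A small sketch of how they derive from day counts, so the magic numbers can be sanity-checked:

```java
import java.time.Duration;

// Helper to derive Kafka retention-ms values from day counts.
public class RetentionMs {
    public static long ofDays(long days) {
        return Duration.ofDays(days).toMillis();
    }
}
```

For example, 24 hours is 86,400,000 ms, 30 days is 2,592,000,000 ms, and 365 days is 31,536,000,000 ms, matching the comments in the YAML.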

### Kafka Producer Configuration

```java
@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> config = new HashMap<>();

        // Broker configuration
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");

        // Serialization
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
        config.put("schema.registry.url", "http://schema-registry:8081");

        // Performance tuning for low latency
        config.put(ProducerConfig.LINGER_MS_CONFIG, 0); // No batching delay
        config.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        config.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432); // 32 MB
        config.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");

        // Idempotence for exactly-once semantics (requires acks=all)
        config.put(ProducerConfig.ACKS_CONFIG, "all");
        config.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        config.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);

        return new DefaultKafkaProducerFactory<>(config);
    }
}
```

---

## Monitoring and Observability

### Key Metrics

**Latency Metrics**

```java
@Component
public class TradingMetrics {
    private final MeterRegistry meterRegistry;

    public TradingMetrics(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }

    public void recordOrderLatency(long startTime) {
        long latency = System.nanoTime() - startTime;
        meterRegistry.timer("order.processing.latency")
            .record(latency, TimeUnit.NANOSECONDS);
    }

    public void recordExecutionLatency(long startTime) {
        long latency = System.nanoTime() - startTime;
        meterRegistry.timer("order.execution.latency")
            .record(latency, TimeUnit.NANOSECONDS);
    }
}
```

**Prometheus Metrics**
- `order_processing_latency_seconds` - P50, P95, P99 latency
- `order_throughput_total` - Orders processed per second
- `trade_execution_latency_seconds` - Execution latency
- `market_data_lag_seconds` - Market data freshness
- `kafka_consumer_lag` - Event processing lag
- `risk_check_duration_seconds` - Risk check performance
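
A P99 in the metrics above means the latency that 99% of samples fall under. Prometheus estimates percentiles from histogram buckets; for intuition, here is the exact nearest-rank computation on an in-memory sample (the `Percentile` class is illustrative, not part of any monitoring library):

```java
import java.util.Arrays;

// Exact nearest-rank percentile over a small sample of latencies.
public class Percentile {
    public static long nearestRank(long[] samples, double p) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length); // 1-based rank
        return sorted[Math.max(rank, 1) - 1];
    }
}
```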

---

## Scalability and Performance

### Performance Optimizations

**1. Partitioning Strategy**
- Partition market data by symbol (100 partitions)
- Partition orders by account ID (50 partitions)
- Enables parallel processing
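
Keyed publishing is what makes this parallelism safe: all events for one symbol hash to one partition, so per-symbol ordering is preserved while different symbols are processed concurrently. Kafka's default partitioner hashes the serialized key with murmur2; the sketch below uses `String.hashCode` purely for illustration:

```java
// Illustrative key-based partition assignment: the same symbol always
// maps to the same partition. Kafka's real default partitioner uses
// murmur2 over the serialized key bytes, not String.hashCode.
public class SymbolPartitioner {
    public static int partitionFor(String symbol, int numPartitions) {
        return Math.floorMod(symbol.hashCode(), numPartitions);
    }
}
```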

**2. In-Memory Caching**

```java
// Redis-backed cache for hot market data
@Cacheable(value = "market-data", key = "#symbol")
public MarketDataEvent getLatestPrice(String symbol) {
    return marketDataRepository.findLatestBySymbol(symbol);
}
```

**3. Database Optimization**
- Use a time-series database (InfluxDB) for market data
- Partition PostgreSQL tables by date
- Read replicas for reporting queries
- Connection pooling (HikariCP)

**4. Network Optimization**
- Co-locate services in the same availability zone
- Use gRPC for inter-service communication
- Enable TCP_NODELAY for low latency

### Scalability Metrics

**Before Optimization**
- Order processing latency: 50ms (P99)
- Throughput: 10,000 orders/second
- Market data lag: 500ms

**After Optimization**
- Order processing latency: 8ms (P99)
- Throughput: 100,000 orders/second
- Market data lag: 50ms

---

## Compliance and Audit

### Regulatory Requirements

**SEC Rule 606 - Order Routing**
- Track order routing decisions
- Report execution quality
- Store for 3 years

**FINRA Rule 4511 - Books and Records**
- Maintain a complete audit trail
- Immutable event log (Kafka)
- Retention: 6 years

**Implementation**

```java
@Service
public class AuditService {
    private final AuditRepository auditRepository;
    private final ObjectMapper objectMapper = new ObjectMapper();

    public AuditService(AuditRepository auditRepository) {
        this.auditRepository = auditRepository;
    }

    @KafkaListener(topics = {"order-events", "trade-events"}, groupId = "audit-group")
    public void auditEvent(Object event) throws JsonProcessingException {
        // Store all events in an immutable audit log
        AuditRecord record = AuditRecord.builder()
            .eventType(event.getClass().getSimpleName())
            .eventData(objectMapper.writeValueAsString(event))
            .timestamp(Instant.now())
            .userId(SecurityContextHolder.getContext().getAuthentication().getName())
            .build();

        auditRepository.save(record);
    }

    // Generate regulatory reports
    public Report generateRule606Report(LocalDate startDate, LocalDate endDate) {
        // Query the audit log for order routing data,
        // aggregate execution quality metrics,
        // and render the report in the required format
        return null; // Report assembly omitted in this example
    }
}
```

---

## Key Takeaways

### Architecture Decisions

1. **Event-Driven Architecture**: Enables loose coupling, scalability, and real-time processing
2. **Kafka for Event Streaming**: High throughput, low latency, durable event log
3. **Event Sourcing**: Complete audit trail, regulatory compliance
4. **LMAX Disruptor**: Ultra-low latency order execution (< 10ms)
5. **Redis Caching**: Fast access to real-time market data
6. **Time-Series Database**: Efficient storage and querying of market data

### Trade-offs

**Benefits**
- ✅ Low latency (< 10ms order processing)
- ✅ High throughput (100K+ orders/second)
- ✅ Complete audit trail (regulatory compliance)
- ✅ Scalable (horizontal scaling of consumers)
- ✅ Fault tolerant (Kafka replication)
- ✅ Real-time analytics (Kafka Streams)

**Challenges**
- ❌ Eventual consistency (not suitable for all use cases)
- ❌ Complex debugging (distributed event flows)
- ❌ Event schema evolution (requires careful planning)
- ❌ Operational complexity (Kafka cluster management)
- ❌ Higher infrastructure costs

### Lessons Learned

1. **Partition by symbol**: Enables parallel processing of market data
2. **Use Avro schemas**: Ensures backward compatibility
3. **Monitor consumer lag**: Critical for detecting processing delays
4. **Implement circuit breakers**: Prevents cascade failures
5. **Use exactly-once semantics**: Ensures data integrity
6. **Optimize for P99 latency**: Not just average latency

---

## References

- **Event-Driven Architecture**: Martin Fowler's Event Sourcing pattern
- **LMAX Disruptor**: High-performance inter-thread messaging
- **Apache Kafka**: Distributed event streaming platform
- **SEC Regulations**: Rule 606, Rule 613 (CAT)
- **FINRA Rules**: Rule 4511 (Books and Records)

---

**Total Lines**: 650+