cdk-dms-replication 0.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.jsii +13491 -0
- package/API.md +8606 -0
- package/LICENSE +202 -0
- package/README.md +883 -0
- package/cdk.context.json +1 -0
- package/integ/sample-app.ts +229 -0
- package/lib/dms-roles.d.ts +14 -0
- package/lib/dms-roles.js +42 -0
- package/lib/endpoint-settings.d.ts +419 -0
- package/lib/endpoint-settings.js +3 -0
- package/lib/endpoint.d.ts +143 -0
- package/lib/endpoint.js +402 -0
- package/lib/enums.d.ts +231 -0
- package/lib/enums.js +266 -0
- package/lib/index.d.ts +9 -0
- package/lib/index.js +49 -0
- package/lib/migration-pipeline.d.ts +253 -0
- package/lib/migration-pipeline.js +218 -0
- package/lib/replication-instance.d.ts +99 -0
- package/lib/replication-instance.js +93 -0
- package/lib/replication-task.d.ts +72 -0
- package/lib/replication-task.js +46 -0
- package/lib/serverless-pipeline.d.ts +196 -0
- package/lib/serverless-pipeline.js +271 -0
- package/lib/table-mappings.d.ts +178 -0
- package/lib/table-mappings.js +283 -0
- package/lib/task-settings.d.ts +228 -0
- package/lib/task-settings.js +291 -0
- package/package.json +170 -0
- package/scripts/sync-instance-classes.js +213 -0
package/README.md
ADDED
@@ -0,0 +1,883 @@
# cdk-dms-replication

[![npm version](https://badge.fury.io/js/cdk-dms-replication.svg)](https://badge.fury.io/js/cdk-dms-replication)
[![PyPI version](https://badge.fury.io/py/cdk-dms-replication.svg)](https://badge.fury.io/py/cdk-dms-replication)
[![Build](https://github.com/kckempf/cdk-dms-replication/actions/workflows/build.yml/badge.svg)](https://github.com/kckempf/cdk-dms-replication/actions/workflows/build.yml)

L3 CDK constructs for [AWS Database Migration Service (DMS)](https://aws.amazon.com/dms/). Provision a complete migration pipeline — replication instance, endpoints, and task — in a few lines of code, with secure defaults and support for every DMS engine and migration pattern.

## Features

- **All migration patterns** — full load, CDC, and full-load-and-CDC
- **Classic and Serverless** — `DmsMigrationPipeline` (fixed instance) or `DmsServerlessPipeline` (auto-scales DCUs)
- **All DMS engines** — MySQL, PostgreSQL, Oracle, SQL Server, SAP ASE, IBM Db2, MongoDB, DocumentDB, S3, DynamoDB, Redshift, Kinesis, Kafka, OpenSearch, Neptune, Redis
- **Secure defaults** — replication instance placed in private subnets, KMS encryption at rest, security group auto-created
- **Fluent builders** — `TableMappings` and `TaskSettings` builders produce the JSON DMS expects without hand-crafting strings
- **Multi-language** — TypeScript, Python, Java, .NET, Go (via JSII)
- **Escape hatches** — pass existing endpoints or a pre-existing replication instance

## Installation

### TypeScript / JavaScript

```bash
npm install cdk-dms-replication
```

### Python

```bash
pip install cdk-dms-replication
```

### Java

```xml
<dependency>
  <groupId>io.github.kckempf</groupId>
  <artifactId>cdk-dms-replication</artifactId>
  <version>VERSION</version>
</dependency>
```

### .NET

```bash
dotnet add package KcKempf.CdkDmsReplication
```

### Go

```bash
go get github.com/kckempf/cdk-dms-replication-go
```

---

## Quick start

### Classic pipeline (fixed replication instance)

`DmsMigrationPipeline` provisions a replication instance, both endpoints, and a replication task in one construct.

```typescript
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import {
  DmsMigrationPipeline,
  EndpointEngine,
  MigrationType,
  TableMappings,
} from 'cdk-dms-replication';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'MigrationStack');

const vpc = ec2.Vpc.fromLookup(stack, 'Vpc', { isDefault: false });

new DmsMigrationPipeline(stack, 'Pipeline', {
  vpc,
  migrationType: MigrationType.FULL_LOAD_AND_CDC,

  sourceEndpoint: {
    engine: EndpointEngine.MYSQL,
    serverName: 'mysql.internal.example.com',
    port: 3306,
    username: 'dms_user',
    password: cdk.SecretValue.secretsManager('mysql-dms-password'),
    databaseName: 'orders',
  },

  // `cluster` is an Aurora PostgreSQL DatabaseCluster defined elsewhere in the stack
  targetEndpoint: {
    engine: EndpointEngine.AURORA_POSTGRESQL,
    serverName: cluster.clusterEndpoint.hostname,
    port: 5432,
    username: 'dms_user',
    password: cdk.SecretValue.secretsManager('aurora-dms-password'),
    databaseName: 'orders',
  },

  tableMappings: new TableMappings()
    .includeSchema('public')
    .excludeTable('public', 'audit_log')
    .toJson(),
});
```

> **Note:** When `tableMappings` is omitted, the default is to include all tables in all schemas.
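For reference, that implicit default corresponds to a single wildcard selection rule in DMS's table-mapping JSON format (the rule id and name below are illustrative labels, not values the construct is known to use):

```typescript
// An include-everything mapping in DMS's table-mapping JSON shape.
// '%' is the DMS wildcard for "any schema" / "any table".
const defaultTableMappings = {
  rules: [
    {
      'rule-type': 'selection',
      'rule-id': '1',
      'rule-name': '1',
      'object-locator': {
        'schema-name': '%', // any schema
        'table-name': '%', // any table
      },
      'rule-action': 'include',
    },
  ],
};

console.log(JSON.stringify(defaultTableMappings));
```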

### Serverless pipeline (auto-scaling)

`DmsServerlessPipeline` uses DMS Serverless, which automatically scales capacity (measured in DMS Capacity Units — DCUs) between a configurable minimum and maximum. There is no replication instance to size or manage.

```typescript
import * as cdk from 'aws-cdk-lib';
import {
  DmsServerlessPipeline,
  EndpointEngine,
  MigrationType,
} from 'cdk-dms-replication';

new DmsServerlessPipeline(stack, 'Pipeline', {
  vpc,
  maxCapacityUnits: 16, // required; valid values: 1, 2, 4, 8, 16, 32, 64, 128, 192, 256, 384
  minCapacityUnits: 2, // optional; DMS auto-determines if omitted
  migrationType: MigrationType.FULL_LOAD_AND_CDC,

  sourceEndpoint: {
    engine: EndpointEngine.MYSQL,
    serverName: 'mysql.internal.example.com',
    port: 3306,
    username: 'dms_user',
    password: cdk.SecretValue.secretsManager('mysql-dms-password'),
    databaseName: 'orders',
  },

  // `cluster` is an Aurora PostgreSQL DatabaseCluster defined elsewhere in the stack
  targetEndpoint: {
    engine: EndpointEngine.AURORA_POSTGRESQL,
    serverName: cluster.clusterEndpoint.hostname,
    port: 5432,
    username: 'dms_user',
    password: cdk.SecretValue.secretsManager('aurora-dms-password'),
    databaseName: 'orders',
  },
});
```

> **CDC start/stop position limitation:** `DmsServerlessPipeline` does not support `cdcStartPosition` or `cdcStartTime` at the CloudFormation level. To start replication from a specific LSN or timestamp, call the [`StartReplication` API](https://docs.aws.amazon.com/dms/latest/APIReference/API_StartReplication.html) after the config is created.
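As a sketch of that post-deploy step using the AWS SDK for JavaScript v3 (the replication config ARN is a placeholder; read the real one from your stack's outputs):

```typescript
import {
  DatabaseMigrationServiceClient,
  StartReplicationCommand,
} from '@aws-sdk/client-database-migration-service';

const dms = new DatabaseMigrationServiceClient({});

// Start the serverless replication from a specific binlog position.
// The replication config is created by DmsServerlessPipeline at deploy time.
await dms.send(
  new StartReplicationCommand({
    ReplicationConfigArn:
      'arn:aws:dms:us-east-1:123456789012:replication-config:EXAMPLE',
    StartReplicationType: 'start-replication',
    CdcStartPosition: 'mysql-bin-changelog.000024:373',
  }),
);
```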

---

## Migration patterns

### Full load

Migrates all existing data once, then stops.

```typescript
new DmsMigrationPipeline(stack, 'Pipeline', {
  vpc,
  migrationType: MigrationType.FULL_LOAD,
  sourceEndpoint: { ... },
  targetEndpoint: { ... },
});
```

### CDC only

Replicates ongoing changes starting from a specific position or time. The target must already be seeded with data.

```typescript
new DmsMigrationPipeline(stack, 'Pipeline', {
  vpc,
  migrationType: MigrationType.CDC,
  cdcStartPosition: 'mysql-bin-changelog.000024:373', // binlog position
  // — or —
  cdcStartTime: '2024-01-01T00:00:00Z', // ISO-8601 timestamp
  sourceEndpoint: { ... },
  targetEndpoint: { ... },
});
```

### Full load then CDC

Migrates existing data, then automatically switches to continuous replication.

```typescript
new DmsMigrationPipeline(stack, 'Pipeline', {
  vpc,
  migrationType: MigrationType.FULL_LOAD_AND_CDC,
  sourceEndpoint: { ... },
  targetEndpoint: { ... },
});
```

---

## Endpoint examples

### MySQL / MariaDB / Aurora MySQL

```typescript
sourceEndpoint: {
  engine: EndpointEngine.MYSQL, // or MARIADB, AURORA_MYSQL
  serverName: 'mysql.example.com',
  port: 3306,
  username: 'dms_user',
  password: cdk.SecretValue.secretsManager('db-secret'),
  databaseName: 'mydb',
  mySqlSettings: {
    parallelLoadThreads: 4,
    serverTimezone: 'UTC',
  },
},
```

### PostgreSQL / Aurora PostgreSQL

```typescript
sourceEndpoint: {
  engine: EndpointEngine.POSTGRES, // or AURORA_POSTGRESQL
  serverName: 'pg.example.com',
  port: 5432,
  username: 'dms_user',
  password: cdk.SecretValue.secretsManager('db-secret'),
  databaseName: 'appdb',
  postgreSqlSettings: {
    captureDdls: true,
    slotName: 'dms_replication_slot',
    pluginName: PostgresCdcPlugin.PG_LOGICAL,
    heartbeatEnable: true,
  },
},
```

### Oracle

```typescript
sourceEndpoint: {
  engine: EndpointEngine.ORACLE,
  serverName: 'oracle.example.com',
  port: 1521,
  username: 'dms_user',
  password: cdk.SecretValue.secretsManager('oracle-secret'),
  databaseName: 'ORCL',
  oracleSettings: {
    addSupplementalLogging: true,
    useLogminerReader: true, // true = LogMiner, false = Binary Reader
  },
},
```

### SQL Server

```typescript
sourceEndpoint: {
  engine: EndpointEngine.SQLSERVER,
  serverName: 'sqlserver.example.com',
  port: 1433,
  username: 'dms_user',
  password: cdk.SecretValue.secretsManager('sqlserver-secret'),
  databaseName: 'AdventureWorks',
  sqlServerSettings: {
    readBackupOnly: false,
    safeguardPolicy: SqlServerSafeguardPolicy.RELY_ON_SQL_SERVER_REPLICATION_AGENT,
  },
},
```

### MongoDB / DocumentDB

```typescript
sourceEndpoint: {
  engine: EndpointEngine.MONGODB, // or DOCDB
  serverName: 'mongo.example.com',
  port: 27017,
  username: 'dms_user',
  password: cdk.SecretValue.secretsManager('mongo-secret'),
  mongoDbSettings: {
    authType: MongoAuthType.PASSWORD,
    authMechanism: MongoAuthMechanism.SCRAM_SHA_1,
    nestingLevel: MongoNestingLevel.ONE,
  },
},
```

### Amazon S3 (source or target)

```typescript
// As a target
targetEndpoint: {
  engine: EndpointEngine.S3,
  s3Settings: {
    bucketName: 'my-migration-data',
    bucketFolder: 'dms-output',
    serviceAccessRoleArn: s3Role.roleArn,
    dataFormat: S3DataFormat.PARQUET,
    parquetVersion: ParquetVersion.PARQUET_2_0,
    datePartitionEnabled: true,
    datePartitionSequence: DatePartitionSequence.YYYYMMDD,
    encryptionMode: EncryptionMode.SSE_KMS,
    serverSideEncryptionKmsKeyId: myKey.keyArn,
  },
},
```

### Amazon Redshift

```typescript
targetEndpoint: {
  engine: EndpointEngine.REDSHIFT,
  serverName: 'my-cluster.abc123.us-east-1.redshift.amazonaws.com',
  port: 5439,
  username: 'dms_user',
  password: cdk.SecretValue.secretsManager('redshift-secret'),
  databaseName: 'dev',
  redshiftSettings: {
    bucketName: 'my-redshift-staging',
    serviceAccessRoleArn: redshiftRole.roleArn,
    encryptionMode: EncryptionMode.SSE_KMS,
    serverSideEncryptionKmsKeyId: myKey.keyArn,
    truncateColumns: true,
    emptyAsNull: true,
  },
},
```

### Amazon Kinesis Data Streams

```typescript
targetEndpoint: {
  engine: EndpointEngine.KINESIS,
  kinesisSettings: {
    streamArn: stream.streamArn,
    serviceAccessRoleArn: kinesisRole.roleArn,
    messageFormat: MessageFormat.JSON,
    includeTransactionDetails: true,
    includeTableAlterOperations: true,
  },
},
```

### Apache Kafka / Amazon MSK

```typescript
targetEndpoint: {
  engine: EndpointEngine.KAFKA,
  kafkaSettings: {
    broker: 'b-1.my-cluster.abc123.kafka.us-east-1.amazonaws.com:9096', // MSK SASL/SCRAM listener port
    topic: 'dms-changes',
    messageFormat: MessageFormat.JSON,
    securityProtocol: KafkaSecurityProtocol.SASL_SSL,
    saslUsername: 'dms_user',
    saslPassword: cdk.SecretValue.secretsManager('kafka-sasl-password'),
  },
},
```

### Amazon OpenSearch Service

```typescript
targetEndpoint: {
  engine: EndpointEngine.OPENSEARCH,
  openSearchSettings: {
    endpointUri: 'https://search-my-domain.us-east-1.es.amazonaws.com',
    serviceAccessRoleArn: openSearchRole.roleArn,
    fullLoadErrorPercentage: 10,
    errorRetryDuration: 300,
  },
},
```

### Amazon Neptune

```typescript
targetEndpoint: {
  engine: EndpointEngine.NEPTUNE,
  neptuneSettings: {
    s3BucketName: 'my-neptune-staging',
    s3BucketFolder: 'dms',
    serviceAccessRoleArn: neptuneRole.roleArn,
    iamAuthEnabled: true,
  },
},
```

### Amazon DynamoDB

```typescript
targetEndpoint: {
  engine: EndpointEngine.DYNAMODB,
  dynamoDbSettings: {
    serviceAccessRoleArn: dynamoRole.roleArn,
  },
},
```

---

## Table mappings

Use the `TableMappings` fluent builder to control which tables are migrated and how they are named on the target.

### Selection rules

```typescript
new TableMappings()
  .includeSchema('public') // include all tables in 'public'
  .includeSchema('%') // include all schemas (wildcard)
  .excludeTable('public', 'audit_log') // exclude a specific table
  .excludeSchema('internal') // exclude an entire schema
  .explicitTable('public', 'orders') // migrate only this one table
  .toJson()
```

### Transformation rules

```typescript
new TableMappings()
  .includeSchema('%')
  .renameSchema('legacy', 'v2') // rename schema on target
  .toLowerCaseTable('public', '%') // lowercase all table names
  .toUpperCaseSchema('%') // uppercase all schema names
  .addPrefixToTable('public', '%', 'migrated_')
  .renameTable('public', 'usr', 'users') // rename a specific table
  .renameColumn('public', 'orders', 'cust_id', 'customer_id')
  .removeColumn('public', 'orders', 'internal_notes')
  .toJson()
```

### Adding columns

```typescript
new TableMappings()
  .includeSchema('public')
  .addColumn('public', 'orders', {
    columnName: 'migrated_at',
    columnType: ColumnDataType.DATETIME,
    expression: '$timestamp', // DMS built-in expression
  })
  .addColumn('public', 'orders', {
    columnName: 'migration_version',
    columnType: ColumnDataType.STRING,
    columnLength: 10,
    columnValue: 'v2.0', // constant value
  })
  .toJson()
```

---

## Task settings

Use `TaskSettings` to tune LOB handling, error behaviour, full-load parallelism, and CDC batching.

```typescript
import { TaskSettings, LobMode, ErrorAction, LoggingLevel } from 'cdk-dms-replication';

const settings = new TaskSettings()
  // LOB handling
  .withLobMode(LobMode.LIMITED_LOB, 64) // truncate LOBs at 64 KB

  // Full load tuning
  .withFullLoadSubTasks(16) // 16 parallel table loads
  .withTargetTablePrepMode('DROP_AND_CREATE')
  .withCommitRate(50000) // commit every 50k rows

  // CDC batch apply
  .withBatchApply(true, 5, 60) // batch changes, 5–60 second window

  // Error handling
  .withDataErrorPolicy(ErrorAction.IGNORE_RECORD, 1000)
  .withRecovery(-1, 5) // unlimited retries, 5s interval

  // Logging
  .withLogging('SOURCE_UNLOAD', LoggingLevel.LOGGER_SEVERITY_DEBUG)
  .withLogging('TARGET_LOAD', LoggingLevel.LOGGER_SEVERITY_DEFAULT)
  .toJson();

new DmsMigrationPipeline(stack, 'Pipeline', {
  vpc,
  migrationType: MigrationType.FULL_LOAD_AND_CDC,
  taskSettings: settings,
  sourceEndpoint: { ... },
  targetEndpoint: { ... },
});
```

---

## Replication instance options

```typescript
new DmsMigrationPipeline(stack, 'Pipeline', {
  vpc,
  migrationType: MigrationType.FULL_LOAD_AND_CDC,

  // Instance sizing (default: R6I_LARGE)
  replicationInstanceClass: ReplicationInstanceClass.R6I_4XLARGE,
  allocatedStorage: 500, // GB

  // High availability
  multiAz: true,

  // Encryption — bring your own KMS key
  encryptionKey: myKmsKey,

  // Subnet placement
  vpcSubnets: {
    subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
  },

  sourceEndpoint: { ... },
  targetEndpoint: { ... },
});
```

---

## Using lower-level constructs directly

If you need more control, use the individual constructs and wire them together yourself.

```typescript
import {
  DmsReplicationInstance,
  DmsEndpoint,
  DmsReplicationTask,
  EndpointType,
  EndpointEngine,
  MigrationType,
  ReplicationInstanceClass,
  TableMappings,
} from 'cdk-dms-replication';

// 1. Replication instance
const instance = new DmsReplicationInstance(stack, 'Instance', {
  vpc,
  replicationInstanceClass: ReplicationInstanceClass.R6I_LARGE,
  multiAz: true,
});

// Allow the source DB security group to accept connections from DMS
instance.allowInboundFrom(
  ec2.Peer.securityGroupId(myDbSg.securityGroupId),
  ec2.Port.tcp(3306),
);

// 2. Endpoints
const source = new DmsEndpoint(stack, 'Source', {
  endpointType: EndpointType.SOURCE,
  engine: EndpointEngine.MYSQL,
  serverName: 'mysql.example.com',
  port: 3306,
  username: 'dms_user',
  password: cdk.SecretValue.secretsManager('db-secret'),
  databaseName: 'mydb',
});

const target = new DmsEndpoint(stack, 'Target', {
  endpointType: EndpointType.TARGET,
  engine: EndpointEngine.S3,
  s3Settings: {
    bucketName: 'my-bucket',
    serviceAccessRoleArn: s3Role.roleArn,
  },
});

// 3. Replication task
new DmsReplicationTask(stack, 'Task', {
  replicationInstanceArn: instance.replicationInstanceArn,
  sourceEndpoint: source,
  targetEndpoint: target,
  migrationType: MigrationType.FULL_LOAD_AND_CDC,
  tableMappings: new TableMappings().includeSchema('public').toJson(),
});
```

---

## Using existing endpoints

Bring your own endpoints if they already exist (e.g., created outside CDK or in a different stack):

```typescript
import { IDmsEndpoint } from 'cdk-dms-replication';

// Reference an existing endpoint by ARN
const existingSource: IDmsEndpoint = {
  endpointArn: 'arn:aws:dms:us-east-1:123456789012:endpoint:ABCDEF',
};

new DmsMigrationPipeline(stack, 'Pipeline', {
  vpc,
  migrationType: MigrationType.CDC,
  existingSourceEndpoint: existingSource,
  targetEndpoint: {
    engine: EndpointEngine.S3,
    s3Settings: { ... },
  },
});
```

---

## Secrets Manager integration

For production workloads, store credentials in AWS Secrets Manager and let DMS retrieve them directly (no plaintext in CloudFormation):

```typescript
sourceEndpoint: {
  engine: EndpointEngine.MYSQL,
  serverName: 'mysql.example.com',
  port: 3306,
  mySqlSettings: {
    secretsManagerSecretId: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:dms/mysql-abc123',
    secretsManagerAccessRoleArn: dmsSecretsRole.roleArn,
  },
},
```

The secret must contain `username` and `password` keys. See [Using AWS Secrets Manager to access database credentials](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager) for the required secret format and IAM permissions.
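If you manage the secret with CDK, one way to produce that shape is to template the username and let Secrets Manager generate the password (a sketch: `stack`, the secret name, and the username are assumptions; depending on the engine, DMS may also expect `host` and `port` keys, so check the linked guide):

```typescript
import * as secretsmanager from 'aws-cdk-lib/aws-secretsmanager';

// JSON secret with the keys DMS reads: a fixed 'username' plus a
// generated 'password'. The plaintext never lands in the template.
const dbSecret = new secretsmanager.Secret(stack, 'DmsDbSecret', {
  secretName: 'dms/mysql',
  generateSecretString: {
    secretStringTemplate: JSON.stringify({ username: 'dms_user' }),
    generateStringKey: 'password',
    excludeCharacters: '/@"',
  },
});
```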

---

## Cross-account migrations

DMS supports migrating data between AWS accounts. The replication instance always lives in one account (the **DMS account**) and connects to source and target databases over the network, regardless of which account owns them.

### Prerequisites

Two things must be true before the construct can help:

1. **Network connectivity** — The replication instance's VPC must be able to reach both endpoints. Establish this with [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html), [AWS Transit Gateway](https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html), or [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) before deploying. The construct has no visibility into routing — it will synthesise correctly regardless, but the task will fail at runtime if the endpoints are unreachable.

2. **IAM cross-account trust** — For AWS-managed targets (S3, Kinesis, Redshift, DynamoDB, etc.) owned by a different account, DMS needs an IAM role in the **target account** that trusts the DMS service principal in the **DMS account**.
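For the connectivity prerequisite, a cross-account VPC peering request can at least be initiated from the DMS account stack. This is a sketch: the peer VPC ID, account, and acceptor role are placeholders, the peer account must accept the connection, and route tables and security groups on both sides still need entries.

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';

// Requester side, in the DMS account (111111111111).
new ec2.CfnVPCPeeringConnection(stack, 'PeeringToSourceVpc', {
  vpcId: vpc.vpcId, // the replication instance's VPC
  peerVpcId: 'vpc-0peer1234567890abc', // VPC in account 222222222222
  peerOwnerId: '222222222222',
  // Cross-account peering via CloudFormation needs a role in the
  // peer account that permits accepting the connection.
  peerRoleArn: 'arn:aws:iam::222222222222:role/VpcPeeringAcceptRole',
});
```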

### CDK stack setup

CDK cannot pass constructs like `ec2.IVpc` across account boundaries. Use `Vpc.fromLookup` in the DMS account stack to reference the VPC by ID:

```typescript
// DMS account stack (e.g. account 111111111111)
const vpc = ec2.Vpc.fromLookup(stack, 'Vpc', {
  vpcId: 'vpc-0abc1234def567890',
});
```

### Database endpoints (any engine)

For source or target databases running in another account, provide the hostname that is reachable from the replication instance's VPC (private IP, private DNS name, or VPC peering DNS). No special construct configuration is needed beyond what you would use for a same-account migration.

```typescript
new DmsMigrationPipeline(stack, 'Pipeline', {
  vpc,
  migrationType: MigrationType.FULL_LOAD_AND_CDC,

  // Source DB in account 222222222222, reachable via Transit Gateway
  sourceEndpoint: {
    engine: EndpointEngine.ORACLE,
    serverName: '10.1.2.3', // private IP from peered VPC
    port: 1521,
    username: 'dms_user',
    password: cdk.SecretValue.secretsManager('oracle-dms-secret'),
    databaseName: 'ORCL',
  },

  // Target in the same DMS account — normal config
  targetEndpoint: {
    engine: EndpointEngine.AURORA_POSTGRESQL,
    serverName: cluster.clusterEndpoint.hostname,
    port: 5432,
    username: 'dms_user',
    password: cdk.SecretValue.secretsManager('aurora-dms-secret'),
    databaseName: 'mydb',
  },
});
```

### AWS-managed targets in another account (S3, Kinesis, Redshift, etc.)

When the target service lives in a different account, create an IAM role in the **target account** that DMS (running in the DMS account) can assume. Pass its ARN via `serviceAccessRoleArn`.

**Step 1 — Create the cross-account role in the target account (222222222222):**

```typescript
// In a stack deployed to the TARGET account (222222222222)
const crossAccountDmsRole = new iam.Role(targetStack, 'DmsCrossAccountRole', {
  assumedBy: new iam.CompositePrincipal(
    // Allow DMS in the DMS account to assume this role
    new iam.ArnPrincipal('arn:aws:iam::111111111111:role/dms-vpc-role'),
    new iam.ServicePrincipal('dms.amazonaws.com'),
  ),
  inlinePolicies: {
    S3Access: new iam.PolicyDocument({
      statements: [
        new iam.PolicyStatement({
          actions: ['s3:PutObject', 's3:DeleteObject', 's3:ListBucket'],
          resources: [
            targetBucket.bucketArn,
            `${targetBucket.bucketArn}/*`,
          ],
        }),
      ],
    }),
  },
});
```

**Step 2 — Reference the role ARN in the DMS account stack:**

```typescript
// In the DMS account stack (111111111111)
new DmsMigrationPipeline(stack, 'Pipeline', {
  vpc,
  migrationType: MigrationType.FULL_LOAD,
  sourceEndpoint: {
    engine: EndpointEngine.MYSQL,
    serverName: 'mysql.internal.example.com',
    port: 3306,
    username: 'dms_user',
    password: cdk.SecretValue.secretsManager('mysql-dms-secret'),
    databaseName: 'orders',
  },
  targetEndpoint: {
    engine: EndpointEngine.S3,
    s3Settings: {
      bucketName: 'target-account-bucket', // bucket in account 222222222222
      serviceAccessRoleArn: 'arn:aws:iam::222222222222:role/DmsCrossAccountRole',
    },
  },
});
```

The same pattern applies to Kinesis, Redshift, DynamoDB, and other AWS-managed targets — create the role in the target account, grant it the permissions that service needs, and pass its ARN to the relevant `serviceAccessRoleArn` field.

### Cross-account Secrets Manager

If the database credentials are stored in Secrets Manager in the source account (222222222222) but DMS runs in a different account (111111111111):

1. Add a [resource-based policy](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_resource-based-policies.html) to the secret in account 222222222222 that allows the DMS account's role to call `secretsmanager:GetSecretValue`.
2. Pass the full secret ARN and the cross-account access role ARN to the endpoint settings:

```typescript
sourceEndpoint: {
  engine: EndpointEngine.MYSQL,
  serverName: 'mysql.internal.example.com',
  port: 3306,
  mySqlSettings: {
    secretsManagerSecretId:
      'arn:aws:secretsmanager:us-east-1:222222222222:secret:dms/mysql-abc123',
    secretsManagerAccessRoleArn:
      'arn:aws:iam::111111111111:role/DmsSecretsManagerRole',
  },
},
```
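Step 1's resource policy can be expressed in CDK from the source account's stack. A sketch, assuming `dbSecret` is the `secretsmanager.Secret` holding the credentials and the role name matches the access role used by DMS:

```typescript
import * as iam from 'aws-cdk-lib/aws-iam';

// In the SOURCE account (222222222222): let the DMS account's
// access role read this secret.
dbSecret.addToResourcePolicy(
  new iam.PolicyStatement({
    principals: [
      new iam.ArnPrincipal('arn:aws:iam::111111111111:role/DmsSecretsManagerRole'),
    ],
    actions: ['secretsmanager:GetSecretValue'],
    resources: ['*'], // in a resource policy, '*' means this secret
  }),
);
```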
|
|
754
|
+
|

### Cross-account KMS encryption

The construct creates a KMS key in the DMS account for replication instance storage. If you need the replication instance to write to a KMS-encrypted target in another account, bring your own key and add a cross-account statement to its key policy:

```typescript
// Key in the DMS account (111111111111), with cross-account decrypt permission
const encryptionKey = new kms.Key(stack, 'ReplicationKey', {
  enableKeyRotation: true,
  policy: new iam.PolicyDocument({
    statements: [
      // Standard key admin/use permissions ...
      new iam.PolicyStatement({
        principals: [new iam.AccountPrincipal('222222222222')],
        actions: ['kms:Decrypt', 'kms:GenerateDataKey'],
        resources: ['*'],
      }),
    ],
  }),
});

new DmsMigrationPipeline(stack, 'Pipeline', {
  vpc,
  encryptionKey,
  migrationType: MigrationType.FULL_LOAD_AND_CDC,
  sourceEndpoint: { ... },
  targetEndpoint: { ... },
});
```

### Summary of cross-account responsibilities

| Concern | Who sets it up | How to configure |
|---------|----------------|------------------|
| Network connectivity | You (VPC peering / TGW / PrivateLink) | Prerequisite — no construct prop |
| Database endpoint hostname | You | `serverName` — use private IP or private DNS |
| Cross-account service role (S3, Kinesis, etc.) | You (role in target account) | `serviceAccessRoleArn` |
| Cross-account Secrets Manager | You (resource policy on secret) | `secretsManagerSecretId` + `secretsManagerAccessRoleArn` |
| Cross-account KMS | You (key policy) | `encryptionKey` (bring your own) |
| Replication instance, subnet group, task | This construct | Fully managed |

---

## Observability

A CloudWatch Logs log group is created by default. Customise the retention period or disable it:

```typescript
import * as logs from 'aws-cdk-lib/aws-logs';

new DmsMigrationPipeline(stack, 'Pipeline', {
  vpc,
  migrationType: MigrationType.FULL_LOAD_AND_CDC,
  enableCloudWatchLogs: true,
  logRetention: logs.RetentionDays.THREE_MONTHS,
  sourceEndpoint: { ... },
  targetEndpoint: { ... },
});
```

---

## API reference

Full API documentation is available in [API.md](https://github.com/kckempf/cdk-dms-replication/blob/main/API.md) and on [Construct Hub](https://constructs.dev/packages/cdk-dms-replication).

---

## Supported source engines

| Engine | `EndpointEngine` value |
|--------|------------------------|
| MySQL | `MYSQL` |
| Amazon Aurora (MySQL) | `AURORA_MYSQL` |
| PostgreSQL | `POSTGRES` |
| Amazon Aurora (PostgreSQL) | `AURORA_POSTGRESQL` |
| Oracle | `ORACLE` |
| Microsoft SQL Server | `SQLSERVER` |
| MariaDB | `MARIADB` |
| SAP ASE (Sybase) | `SAP_ASE` |
| IBM Db2 LUW | `IBM_DB2` |
| IBM Db2 for z/OS | `IBM_DB2_ZOS` |
| MongoDB | `MONGODB` |
| Amazon DocumentDB | `DOCDB` |
| Amazon S3 | `S3` |

## Supported target engines

All source engines above, plus:

| Engine | `EndpointEngine` value |
|--------|------------------------|
| Amazon S3 | `S3` |
| Amazon DynamoDB | `DYNAMODB` |
| Amazon Redshift | `REDSHIFT` |
| Amazon Kinesis Data Streams | `KINESIS` |
| Apache Kafka / Amazon MSK | `KAFKA` |
| Amazon OpenSearch Service | `OPENSEARCH` |
| Amazon Neptune | `NEPTUNE` |
| Amazon ElastiCache for Redis | `REDIS` |
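As an illustration, a streaming target can be declared with the same endpoint shape used for sources earlier. This is a hypothetical sketch only: the `kinesisSettings` property name is an assumption based on the `mySqlSettings` pattern above, and the stream and role ARNs are placeholders; check [API.md](https://github.com/kckempf/cdk-dms-replication/blob/main/API.md) for the exact property names.

```typescript
// Hypothetical sketch — verify property names against API.md before use.
targetEndpoint: {
  engine: EndpointEngine.KINESIS,
  kinesisSettings: {
    streamArn: 'arn:aws:kinesis:us-east-1:111111111111:stream/dms-cdc-stream',
    serviceAccessRoleArn: 'arn:aws:iam::111111111111:role/DmsKinesisRole',
  },
},
```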

---

## Security considerations

- The replication instance is placed in **private subnets** by default. Set `publiclyAccessible: true` only if required.
- Storage is encrypted at rest using a **KMS customer-managed key** (auto-created if you don't provide one).
- **Do not use the `password` prop in production.** The resolved value is written as plaintext into the CloudFormation template and state file. Use Secrets Manager instead: set `secretsManagerSecretId` and `secretsManagerAccessRoleArn` in the engine-specific settings and omit `password` entirely.
- Grant DMS only the minimum IAM permissions required for each target engine.
- For CDC with PostgreSQL, the replication user needs the `rds_replication` role (RDS) or `REPLICATION` privilege (self-managed).
- For CDC with Oracle, supplemental logging must be enabled on the source database.
- **`dms-vpc-role` and `dms-cloudwatch-logs-role`** are account-level IAM roles created automatically by this construct the first time a pipeline is deployed. Subsequent pipelines in the same account reuse the existing roles. If you manage these roles outside of CDK, import them via your stack before deploying.

---

## Contributing

Bug reports and pull requests are welcome. Please open an issue at [github.com/kckempf/cdk-dms-replication/issues](https://github.com/kckempf/cdk-dms-replication/issues) before starting significant work.

---

## Author

[Kevin Kempf](https://github.com/kckempf)

---

## License

Apache-2.0 — see [LICENSE](https://github.com/kckempf/cdk-dms-replication/blob/main/LICENSE)