forge-sql-orm 1.0.30 → 1.1.31
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +242 -661
- package/dist/ForgeSQLORM.js +541 -568
- package/dist/ForgeSQLORM.js.map +1 -1
- package/dist/ForgeSQLORM.mjs +539 -555
- package/dist/ForgeSQLORM.mjs.map +1 -1
- package/dist/core/ForgeSQLCrudOperations.d.ts +101 -130
- package/dist/core/ForgeSQLCrudOperations.d.ts.map +1 -1
- package/dist/core/ForgeSQLORM.d.ts +11 -10
- package/dist/core/ForgeSQLORM.d.ts.map +1 -1
- package/dist/core/ForgeSQLQueryBuilder.d.ts +271 -113
- package/dist/core/ForgeSQLQueryBuilder.d.ts.map +1 -1
- package/dist/core/ForgeSQLSelectOperations.d.ts +65 -22
- package/dist/core/ForgeSQLSelectOperations.d.ts.map +1 -1
- package/dist/core/SystemTables.d.ts +59 -0
- package/dist/core/SystemTables.d.ts.map +1 -0
- package/dist/index.d.ts +1 -2
- package/dist/index.d.ts.map +1 -1
- package/dist/utils/sqlUtils.d.ts +53 -6
- package/dist/utils/sqlUtils.d.ts.map +1 -1
- package/dist-cli/cli.js +561 -360
- package/dist-cli/cli.js.map +1 -1
- package/dist-cli/cli.mjs +561 -360
- package/dist-cli/cli.mjs.map +1 -1
- package/package.json +26 -26
- package/src/core/ForgeSQLCrudOperations.ts +360 -473
- package/src/core/ForgeSQLORM.ts +40 -78
- package/src/core/ForgeSQLQueryBuilder.ts +250 -133
- package/src/core/ForgeSQLSelectOperations.ts +182 -72
- package/src/core/SystemTables.ts +7 -0
- package/src/index.ts +1 -2
- package/src/utils/sqlUtils.ts +155 -23
- package/dist/core/ComplexQuerySchemaBuilder.d.ts +0 -38
- package/dist/core/ComplexQuerySchemaBuilder.d.ts.map +0 -1
- package/dist/knex/index.d.ts +0 -4
- package/dist/knex/index.d.ts.map +0 -1
- package/src/core/ComplexQuerySchemaBuilder.ts +0 -63
- package/src/knex/index.ts +0 -4
package/README.md
CHANGED
@@ -2,17 +2,17 @@
 
 [](https://github.com/vzakharchenko/forge-sql-orm/actions/workflows/node.js.yml)
 
-**Forge-SQL-ORM** is an ORM designed for working with [@forge/sql](https://developer.atlassian.com/platform/forge/storage-reference/sql-tutorial/) in **Atlassian Forge**. It is built on top of [
+**Forge-SQL-ORM** is an ORM designed for working with [@forge/sql](https://developer.atlassian.com/platform/forge/storage-reference/sql-tutorial/) in **Atlassian Forge**. It is built on top of [Drizzle ORM](https://orm.drizzle.team) and provides advanced capabilities for working with relational databases inside Forge.
 
 ## Key Features
-
-- ✅ **
-- ✅ **
-- ✅ **
-- ✅ **
-- ✅ **
-- ✅ **
-- ✅ **
+- ✅ **Supports complex SQL queries** with joins and filtering using Drizzle ORM
+- ✅ **Batch insert support** with duplicate key handling
+- ✅ **Schema migration support**, allowing automatic schema evolution
+- ✅ **Automatic entity generation** from MySQL/TiDB databases
+- ✅ **Automatic migration generation** from MySQL/TiDB databases
+- ✅ **Drop Migrations**: generate a migration to drop all tables and clear the migration history for subsequent schema recreation
+- ✅ **Optimistic Locking**: ensures data consistency by preventing conflicts when multiple users update the same record
+- ✅ **Type Safety**: full TypeScript support with proper type inference
 
 ## Installation
 
@@ -23,51 +23,22 @@ Forge-SQL-ORM is designed to work with @forge/sql and requires some additional s
 ```sh
 npm install forge-sql-orm -S
 npm install @forge/sql -S
-npm
-npm
+npm install drizzle-orm mysql2
+npm install mysql2 @types/mysql2 -D
 ```
 
 This will:
-
-Install
-Install
-
-✅ Step 2: Configure Post-Installation Patch
-By default, MikroORM and Knex include some features that are not compatible with Forge's restricted runtime.
-To fix this, we need to patch these libraries after installation.
-
-Run:
-
-```sh
-npm pkg set scripts.postinstall="forge-sql-orm patch:mikroorm"
-```
-
-✅ Step 3: Apply the Patch
-After setting up the postinstall script, run:
-
-```sh
-npm i
-```
-
-This will:
-
-Trigger the postinstall hook, which applies the necessary patches to MikroORM and Knex.
-Ensure everything is correctly configured for running inside Forge.
-
-🔧 Why is the Patch Required?
-Atlassian Forge has a restricted execution environment, which does not allow:
-
-- Dynamic import(id) calls, commonly used in MikroORM.
-- Direct file system access, which MikroORM sometimes relies on.
-- Unsupported database dialects, such as PostgreSQL or SQLite.
-- The patch removes these unsupported features to ensure full compatibility.
+- Install Forge-SQL-ORM (the ORM for @forge/sql)
+- Install @forge/sql, the Forge database layer
+- Install Drizzle ORM and its MySQL driver
+- Install TypeScript types for MySQL
 
 ## Step-by-Step Migration Workflow
 
-1. **Generate initial
+1. **Generate initial schema from an existing database**
 
 ```sh
-npx forge-sql-orm generate:model --dbName testDb --output ./database/
+npx forge-sql-orm generate:model --dbName testDb --output ./database/schema
 ```
 
 _(This is done only once when setting up the project)_
@@ -75,7 +46,7 @@ Atlassian Forge has a restricted execution environment, which does not allow:
 2. **Create the first migration**
 
 ```sh
-npx forge-sql-orm migrations:create --dbName testDb --entitiesPath ./database/
+npx forge-sql-orm migrations:create --dbName testDb --entitiesPath ./database/schema --output ./database/migration
 ```
 
 _(This initializes the database migration structure, also done once)_
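The migration commands above emit runner modules built around an enqueue/run pattern (a generated example appears later in this diff). A minimal sketch of that pattern, using a hypothetical `InMemoryRunner` as a stand-in for @forge/sql's `MigrationRunner` — the class, ids, and SQL strings here are illustrative only:

```typescript
// Hypothetical in-memory stand-in for @forge/sql's MigrationRunner, used only
// to illustrate the enqueue/run pattern the generated migration modules follow.
class InMemoryRunner {
  private queue: Array<{ id: string; sql: string }> = [];
  private applied = new Set<string>(); // simulates the __migrations history table

  enqueue(id: string, sql: string): this {
    this.queue.push({ id, sql });
    return this;
  }

  // Applies each enqueued statement at most once, in registration order.
  run(): string[] {
    const ran: string[] = [];
    for (const { id } of this.queue) {
      if (!this.applied.has(id)) {
        this.applied.add(id);
        ran.push(id);
      }
    }
    return ran;
  }
}

// A generated migration module registers its statements against the runner:
const migration = (runner: InMemoryRunner): InMemoryRunner =>
  runner
    .enqueue("v1_MIGRATION0", "CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(200))")
    .enqueue("v2_MIGRATION0", "ALTER TABLE users ADD email VARCHAR(255)");

const runner = new InMemoryRunner();
migration(runner);
console.log(runner.run()); // both migration ids, in order
console.log(runner.run()); // [] - already-applied migrations are skipped
```

This is why steps 5 and 7 must not be swapped: the migration is computed as the difference between the generated schema and the database, so regenerating the schema first leaves nothing to enqueue.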
@@ -92,575 +63,312 @@ Atlassian Forge has a restricted execution environment, which does not allow:
 5. **Update the migration**
 
 ```sh
-npx forge-sql-orm migrations:update --dbName testDb --entitiesPath ./database/
+npx forge-sql-orm migrations:update --dbName testDb --entitiesPath ./database/schema --output ./database/migration
 ```
 
-- ⚠️ **Do NOT update
-- If
+- ⚠️ **Do NOT update the schema before this step!**
+- If the schema is updated first, the migration will be empty!
 
 6. **Deploy to Forge and verify that the migration runs without issues**
 
 - Run the updated migration on Forge.
 
-7. **Update the
+7. **Update the schema**
 
 ```sh
-npx forge-sql-orm generate:model --dbName testDb --output ./database/
+npx forge-sql-orm generate:model --dbName testDb --output ./database/schema
 ```
 
 8. **Repeat steps 4-7 as needed**
 
 **⚠️ WARNING:**
 
-- **Do NOT swap steps 7 and 5!** If you update
-- Always generate the **migration first**, then update the **
-
----
-
-# Connection to ORM
-
-```js
-import ForgeSQL from "forge-sql-orm";
-import { Orders } from "./entities/Orders";
-import { Users } from "./entities/Users";
-import ENTITIES from "./entities";
-
-const forgeSQL = new ForgeSQL(ENTITIES);
-```
-
-- Fetch Data:
-
-```js
-const formattedQuery = forgeSQL
-  .createQueryBuilder(Users)
-  .select("*")
-  .limit(limit)
-  .offset(offset)
-  .getFormattedQuery();
-// select `u0`.* from `users` as `u0` limit 10 offset 1
-return await forgeSQL.fetch().executeSchemaSQL(formattedQuery, UsersSchema);
-```
-
-- Raw Fetch Data
-
-```js
-const users = await forgeSQL.fetch().executeRawSQL<Users>("SELECT * FROM users");
-```
-
-- Complex Query
-
-```js
-// Define schema for join result
-const innerJoinSchema = forgeSQL.fetch().createComplexQuerySchema();
-schemaBuilder.addField(Users.meta.properties.name);
-schemaBuilder.addField(Orders.meta.properties.product);
-
-// Execute query
-const query = forgeSQL.createQueryBuilder(Orders, "order")
-  .limit(10).offset(10)
-  .innerJoin("user", "user")
-  .select(["user.name", "order.product"])
-  .getFormattedQuery();
-// select `user`.`name`, `order`.`product` from `orders` as `order` inner join `users` as `user` on `order`.`user_id` = `user`.`id` limit 10 offset 10
-const results = await forgeSQL.fetch().executeSchemaSQL(query, innerJoinSchema);
-console.log(results);
-```
-
-Below is an example of how you can extend your README's CRUD Operations section with information and examples for both `updateFieldById` and `updateFields` methods:
-
----
-
-🛠 **CRUD Operations**
-
-- **Insert Data**
-
-```js
-// INSERT INTO users (id, name) VALUES (1, 'Smith')
-const userId = await forgeSQL.crud().insert(UsersSchema, [{ id: 1, name: "Smith" }]);
-```
-
-- **Insert Bulk Data**
-
-```js
-// INSERT INTO users (id, name) VALUES (2, 'Smith'), (3, 'Vasyl')
-await forgeSQL.crud().insert(UsersSchema, [
-  { id: 2, name: "Smith" },
-  { id: 3, name: "Vasyl" },
-]);
-```
-
-- **Insert Data with Duplicates**
-
-```js
-// INSERT INTO users (id, name) VALUES (4, 'Smith'), (4, 'Vasyl')
-// ON DUPLICATE KEY UPDATE name = VALUES(name)
-await forgeSQL.crud().insert(
-  UsersSchema,
-  [
-    { id: 4, name: "Smith" },
-    { id: 4, name: "Vasyl" },
-  ],
-  true,
-);
-```
-
-- **Update Data by Primary Key**
-
-```js
-// This uses the updateById method which wraps updateFieldById (with optimistic locking if configured)
-await forgeSQL.crud().updateById({ id: 1, name: "Smith Updated" }, UsersSchema);
-```
-
-- **Update Specific Fields by Primary Key**
+- **Do NOT swap steps 7 and 5!** If you update the schema before generating a migration, the migration will be empty!
+- Always generate the **migration first**, then update the **schema**.
 
-
-// Updates specific fields of a record identified by its primary key.
-// Note: The primary key field (e.g. id) must be included in the fields array.
-await forgeSQL.crud().updateFieldById({ id: 1, name: "Updated Name" }, ["id", "name"], UsersSchema);
-```
-
-- **Update Fields Without Primary Key and Versioning**
-
-```js
-// Updates specified fields for records matching the given conditions.
-// In this example, the "name" and "age" fields are updated for users where the email is 'smith@example.com'.
-const affectedRows = await forgeSQL.crud().updateFields(
-  { name: "New Name", age: 35, email: "smith@example.com" },
-  ["name", "age"],
-  UsersSchema
-);
-console.log(`Rows affected: ${affectedRows}`);
-
-// Alternatively, you can provide an explicit WHERE condition:
-const affectedRowsWithWhere = await forgeSQL.crud().updateFields(
-  { name: "New Name", age: 35 },
-  ["name", "age"],
-  UsersSchema,
-  { email: "smith@example.com" }
-);
-console.log(`Rows affected: ${affectedRowsWithWhere}`);
-```
-
-- **Delete Data**
-
-```js
-await forgeSQL.crud().deleteById(1, UsersSchema);
-```
-
-## Quick Start
-
-### 1. Designing the Database
-
-You can start by designing a **MySQL/TiDB database** using tools like [DbSchema](https://dbschema.com/) or by using an existing MySQL/TiDB database.
+## Drop Migrations
 
-
-
+The Drop Migrations feature allows you to completely reset your database schema in Atlassian Forge SQL. This is useful when you need to:
+- Start fresh with a new schema
+- Reset all tables and their data
+- Clear migration history
+- Ensure your local schema matches the deployed database
 
-
+### Important Requirements
 
-
-
-
-
-  name VARCHAR(200)
-) engine=InnoDB;
+Before using Drop Migrations, ensure that:
+1. Your local schema exactly matches the current database schema deployed in Atlassian Forge SQL
+2. You have a backup of your data if needed
+3. You understand that this operation will delete all tables and data
 
-
-  id INT NOT NULL PRIMARY KEY,
-  user_id INT NOT NULL,
-  product VARCHAR(200)
-) engine=InnoDB;
+### Usage
 
-
-```
-
-
-Run the following command to generate entity models based on your database:
-
-```sh
-npx forge-sql-orm generate:model --host localhost --port 3306 --user root --password secret --dbName testDb --output ./database/entities
-```
-
-This will generate **entity schemas** in the `./database/entities` directory.
-Users Model and Schema:
-
-```js
-import { EntitySchema } from 'forge-sql-orm';
-
-export class Users {
-  id!: number;
-  name?: string;
-}
-
-export const UsersSchema = new EntitySchema({
-  class: Users,
-  properties: {
-    id: { primary: true, type: 'integer', unsigned: false },
-    name: { type: 'string', length: 200, nullable: true },
-  },
-});
-```
-
-Orders Model and Schema:
-
-```js
-import { EntitySchema } from 'forge-sql-orm';
-import { Users } from './Users';
-
-export class Orders {
-  id!: number;
-  user!: Users;
-  userId!: number;
-  product?: string;
-}
-
-export const OrdersSchema = new EntitySchema({
-  class: Orders,
-  properties: {
-    id: { primary: true, type: 'integer', unsigned: false, autoincrement: false },
-    user: {
-      kind: 'm:1',
-      entity: () => Users,
-      fieldName: 'user_id',
-      index: 'fk_orders_users',
-    },
-    userId: {
-      type: 'integer',
-      fieldName: 'user_id',
-      persist: false,
-      index: 'fk_orders_users',
-    },
-    product: { type: 'string', length: 200, nullable: true },
-  },
-});
-```
-
-index.ts
-
-```js
-import { Orders } from "./Orders";
-import { Users } from "./Users";
+1. First, ensure your local schema matches the deployed database:
+```bash
+npx forge-sql-orm generate:model --output ./database/schema
+```
 
-
-```
+2. Generate the drop migration:
+```bash
+npx forge-sql-orm migrations:drop --entitiesPath ./database/schema --output ./database/migration
+```
 
-
+3. Deploy and run the migration in your Forge app:
+```js
+import migrationRunner from "./database/migration";
+import { MigrationRunner } from "@forge/sql/out/migration";
 
-
+const runner = new MigrationRunner();
+await migrationRunner(runner);
+await runner.run();
+```
 
-
-
+4. After dropping all tables, you can create a new migration to recreate the schema:
+```bash
+npx forge-sql-orm migrations:create --entitiesPath ./database/schema --output ./database/migration --force
+```
+The `--force` parameter is required here because we're creating a new migration after dropping all tables.
 
-
+### Example Migration Output
 
+The generated drop migration will look like this:
 ```js
 import { MigrationRunner } from "@forge/sql/out/migration";
 
 export default (migrationRunner: MigrationRunner): MigrationRunner => {
   return migrationRunner
-    .enqueue("v1_MIGRATION0", "
-    .enqueue("v1_MIGRATION1", "
-    .enqueue("v1_MIGRATION2", "
+    .enqueue("v1_MIGRATION0", "ALTER TABLE `orders` DROP FOREIGN KEY `fk_orders_users`")
+    .enqueue("v1_MIGRATION1", "DROP INDEX `idx_orders_user_id` ON `orders`")
+    .enqueue("v1_MIGRATION2", "DROP TABLE IF EXISTS `orders`")
+    .enqueue("v1_MIGRATION3", "DROP TABLE IF EXISTS `users`")
+    .enqueue("MIGRATION_V1_1234567890", "DELETE FROM __migrations");
 };
 ```
 
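The generated drop migration above follows a fixed ordering rule: foreign keys first, then indexes, then tables, so no statement fails on a dangling reference. A minimal sketch of that ordering logic — the `DropOp` type, function name, and SQL strings are illustrative, not the package's internals:

```typescript
// Sketch of the drop-ordering rule: foreign keys before indexes before tables.
type DropOp = { kind: "fk" | "index" | "table"; sql: string };

const rank: Record<DropOp["kind"], number> = { fk: 0, index: 1, table: 2 };

function orderDrops(ops: DropOp[]): string[] {
  // Array.prototype.sort is stable, so ops of the same kind keep their order.
  return [...ops].sort((a, b) => rank[a.kind] - rank[b.kind]).map((o) => o.sql);
}

const ordered = orderDrops([
  { kind: "table", sql: "DROP TABLE IF EXISTS `orders`" },
  { kind: "fk", sql: "ALTER TABLE `orders` DROP FOREIGN KEY `fk_orders_users`" },
  { kind: "index", sql: "DROP INDEX `idx_orders_user_id` ON `orders`" },
  { kind: "table", sql: "DROP TABLE IF EXISTS `users`" },
]);
console.log(ordered[0]); // the FOREIGN KEY drop is emitted first
```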
-###
+### ⚠️ Important Notes
 
-
-
-console.log("Provisioning the database");
-await sql._provision();
-
-console.log("Running schema migrations");
-const migrations = await migration(migrationRunner);
-const successfulMigrations = await migrations.run();
-console.log("Migrations applied:", successfulMigrations);
-
-const migrationHistory = (await migrationRunner.list())
-  .map((y) => `${y.id}, ${y.name}, ${y.migratedAt.toUTCString()}`)
-  .join("\n");
-
-console.log("Migrations history:\nid, name, migrated_at\n", migrationHistory);
-
-return {
-  headers: { "Content-Type": ["application/json"] },
-  statusCode: 200,
-  statusText: "OK",
-  body: "Migrations successfully executed",
-};
-};
-```
+- This operation is **irreversible** - all data will be lost
+- Make sure your local schema is up-to-date with the deployed database
+- Consider backing up your data before running drop migrations
+- The migration will clear the `__migrations` table to allow for fresh migration history
+- Drop operations are performed in the correct order: first foreign keys, then indexes, then tables
 
-
+---
 
-
+# Connection to ORM
 
 ```js
-import Entities from "./entities";
 import ForgeSQL from "forge-sql-orm";
-import { UsersSchema, Users } from "./entities/Users";
-
-const forgeSQL = new ForgeSQL(ENTITIES);
-
-// Insert Data
-const user = new Users();
-user.name = "John Doe";
-const userId = await forgeSQL.crud().insert(UsersSchema, [user]);
-console.log("Inserted User ID:", userId);
 
-
-const users = await forgeSQL.fetch().executeSchemaSQL("SELECT * FROM users", UsersSchema);
-console.log(users);
+const forgeSQL = new ForgeSQL();
 ```
 
-
+## Fetch Data
 
-
-Modify the schema in DbSchema
-
-or manually run:
-
-```sh
-ALTER TABLE `users` ADD email VARCHAR(255);
-```
-
-Then, generate a new migration:
-
-```sh
-npx forge-sql-orm migrations:update --dbName testDb --entitiesPath ./database/entities --output ./database/migration
-```
-
-Generated migration:
+### Basic Fetch Operations
 
 ```js
-
-
-
-
-
-
+// Using executeQuery for a single result
+const user = await forgeSQL
+  .fetch()
+  .executeQuery(
+    forgeSQL.getDrizzleQueryBuilder()
+      .select("*").from(Users)
+      .where(eq(Users.id, 1))
+  );
+// Returns: { id: 1, name: "John Doe" }
 
-
+// Using executeQueryOnlyOne for a single result with error handling
+const user = await forgeSQL
+  .fetch()
+  .executeQueryOnlyOne(
+    forgeSQL
+      .getDrizzleQueryBuilder()
+      .select("*").from(Users)
+      .where(eq(Users.id, 1))
+  );
+// Returns: { id: 1, name: "John Doe" }
+// Throws an error if multiple records are found
+// Returns undefined if no records are found
 
-
+// Using executeQuery with aliases
+const usersAlias = alias(Users, "u");
+const result = await forgeSQL
+  .fetch()
+  .executeQuery(
+    forgeSQL
+      .getDrizzleQueryBuilder()
+      .select({
+        userId: rawSql`${usersAlias.id} as \`userId\``,
+        userName: rawSql`${usersAlias.name} as \`userName\``
+      }).from(usersAlias)
+  );
+// Returns: { userId: 1, userName: "John Doe" }
 
-
-
+// Using executeQuery with joins
+const orderWithUser = await forgeSQL
+  .fetch()
+  .executeQuery(
+    forgeSQL
+      .getDrizzleQueryBuilder()
+      .select({
+        orderId: rawSql`${Orders.id} as \`orderId\``,
+        product: Orders.product,
+        userName: rawSql`${Users.name} as \`userName\``
+      }).from(Orders)
+      .innerJoin(Users, eq(Orders.userId, Users.id))
+      .where(eq(Orders.id, 1))
+  );
+// Returns: { orderId: 1, product: "Product 1", userName: "John Doe" }
 ```
 
|
-
|
|
225
|
+
### Complex Queries with Aggregations
|
|
459
226
|
|
|
460
227
|
```js
|
|
461
|
-
|
|
462
|
-
|
|
463
|
-
|
|
464
|
-
|
|
465
|
-
|
|
466
|
-
|
|
467
|
-
|
|
468
|
-
|
|
469
|
-
|
|
470
|
-
|
|
471
|
-
|
|
472
|
-
|
|
473
|
-
|
|
474
|
-
|
|
475
|
-
},
|
|
476
|
-
});
|
|
228
|
+
// Finding duplicates
|
|
229
|
+
const duplicates = await forgeSQL
|
|
230
|
+
.fetch()
|
|
231
|
+
.executeQuery(
|
|
232
|
+
forgeSQL
|
|
233
|
+
.getDrizzleQueryBuilder()
|
|
234
|
+
.select({
|
|
235
|
+
name: Users.name,
|
|
236
|
+
count: rawSql`COUNT(*) as \`count\``
|
|
237
|
+
}).from(Users)
|
|
238
|
+
.groupBy(Users.name)
|
|
239
|
+
.having(rawSql`COUNT(*) > 1`)
|
|
240
|
+
);
|
|
241
|
+
// Returns: { name: "John Doe", count: 2 }
|
|
477
242
|
|
|
243
|
+
// Using executeQueryOnlyOne for unique results
|
|
244
|
+
const userStats = await forgeSQL
|
|
245
|
+
.fetch()
|
|
246
|
+
.executeQueryOnlyOne(
|
|
247
|
+
forgeSQL
|
|
248
|
+
.getDrizzleQueryBuilder()
|
|
249
|
+
.select({
|
|
250
|
+
totalUsers: rawSql`COUNT(*) as \`totalUsers\``,
|
|
251
|
+
uniqueNames: rawSql`COUNT(DISTINCT name) as \`uniqueNames\``
|
|
252
|
+
}).from(Users)
|
|
253
|
+
);
|
|
254
|
+
// Returns: { totalUsers: 100, uniqueNames: 80 }
|
|
255
|
+
// Throws error if multiple records found
|
|
478
256
|
```
|
|
479
257
|
|
|
480
|
-
|
|
258
|
+
### Raw SQL Queries
|
|
481
259
|
|
|
482
|
-
|
|
483
|
-
|
|
484
|
-
|
|
485
|
-
|
|
486
|
-
|
|
487
|
-
For example, fetching **users and their purchased products**:
|
|
488
|
-
|
|
489
|
-
```ts
|
|
490
|
-
import ForgeSQL, { EntitySchema } from "forge-sql-orm";
|
|
491
|
-
import { Orders } from "./entities/Orders";
|
|
492
|
-
import { Users } from "./entities/Users";
|
|
493
|
-
import ENTITIES from "./entities";
|
|
494
|
-
|
|
495
|
-
const forgeSQL = new ForgeSQL(ENTITIES);
|
|
496
|
-
|
|
497
|
-
// Define schema for join result
|
|
498
|
-
class InnerJoinResult {
|
|
499
|
-
name!: string;
|
|
500
|
-
product!: string;
|
|
501
|
-
}
|
|
502
|
-
|
|
503
|
-
export const innerJoinSchema = new EntitySchema<InnerJoinResult>({
|
|
504
|
-
class: InnerJoinResult,
|
|
505
|
-
properties: {
|
|
506
|
-
name: { type: "string", fieldName: "name" },
|
|
507
|
-
product: { type: "string", fieldName: "product" },
|
|
508
|
-
},
|
|
509
|
-
});
|
|
510
|
-
innerJoinSchema.init();
|
|
511
|
-
|
|
512
|
-
// Execute query
|
|
513
|
-
const query = forgeSQL
|
|
514
|
-
.createQueryBuilder(Orders, "order")
|
|
515
|
-
.limit(10)
|
|
516
|
-
.offset(0)
|
|
517
|
-
.innerJoin("user", "user")
|
|
518
|
-
.select(["user.name", "order.product"])
|
|
519
|
-
.getFormattedQuery();
|
|
520
|
-
|
|
521
|
-
const results = await forgeSQL.fetch().executeSchemaSQL(query, innerJoinSchema);
|
|
522
|
-
console.log(results);
|
|
260
|
+
```js
|
|
261
|
+
// Using executeRawSQL for direct SQL queries
|
|
262
|
+
const users = await forgeSQL
|
|
263
|
+
.fetch()
|
|
264
|
+
.executeRawSQL<Users>("SELECT * FROM users");
|
|
523
265
|
```
|
|
524
266
|
|
|
525
|
-
|
|
267
|
+
## CRUD Operations
|
|
526
268
|
|
|
527
|
-
|
|
269
|
+
### Insert Operations
|
|
528
270
|
|
|
529
|
-
|
|
271
|
+
```js
|
|
272
|
+
// Single insert
|
|
273
|
+
const userId = await forgeSQL.crud().insert(Users, [{ id: 1, name: "Smith" }]);
|
|
530
274
|
|
|
531
|
-
|
|
532
|
-
|
|
533
|
-
|
|
534
|
-
|
|
275
|
+
// Bulk insert
|
|
276
|
+
await forgeSQL.crud().insert(Users, [
|
|
277
|
+
{ id: 2, name: "Smith" },
|
|
278
|
+
{ id: 3, name: "Vasyl" },
|
|
279
|
+
]);
|
|
535
280
|
|
|
536
|
-
|
|
281
|
+
// Insert with duplicate handling
|
|
282
|
+
await forgeSQL.crud().insert(
|
|
283
|
+
Users,
|
|
284
|
+
[
|
|
285
|
+
{ id: 4, name: "Smith" },
|
|
286
|
+
{ id: 4, name: "Vasyl" },
|
|
287
|
+
],
|
|
288
|
+
true
|
|
289
|
+
);
|
|
290
|
+
```
|
|
537
291
|
|
|
538
|
-
###
|
|
292
|
+
### Update Operations
|
|
539
293
|
|
|
540
|
-
```
|
|
541
|
-
|
|
542
|
-
|
|
543
|
-
import { Users } from "./entities/Users";
|
|
544
|
-
import ENTITIES from "./entities";
|
|
294
|
+
```js
|
|
295
|
+
// Update by ID with optimistic locking
|
|
296
|
+
await forgeSQL.crud().updateById({ id: 1, name: "Smith Updated" }, Users);
|
|
545
297
|
|
|
546
|
-
|
|
547
|
-
|
|
548
|
-
}
|
|
298
|
+
// Update specific fields
|
|
299
|
+
await forgeSQL.crud().updateById(
|
|
300
|
+
{ id: 1, age: 35 },
|
|
301
|
+
Users
|
|
302
|
+
);
|
|
549
303
|
|
|
550
|
-
|
|
304
|
+
// Update with custom WHERE condition
|
|
305
|
+
await forgeSQL.crud().updateFields(
|
|
306
|
+
{ name: "New Name", age: 35 },
|
|
307
|
+
Users,
|
|
308
|
+
eq(Users.email, "smith@example.com")
|
|
309
|
+
);
|
|
551
310
|
```
|
|
552
311
|
|
|
553
|
-
|
|
554
|
-
|
|
555
|
-
The `getKnex()` method allows direct interaction with Knex.js, enabling execution of raw SQL queries and complex query building.
|
|
312
|
+
### Delete Operations
|
|
556
313
|
|
|
557
|
-
|
|
558
|
-
|
|
559
|
-
|
|
560
|
-
const fields: string[] = ["name", "email"];
|
|
561
|
-
|
|
562
|
-
// Define selected fields, including a count of duplicate occurrences
|
|
563
|
-
const selectFields: Array<string | Knex.Raw> = [
|
|
564
|
-
...fields,
|
|
565
|
-
forgeSQL.getKnex().raw("COUNT(*) as count"),
|
|
566
|
-
];
|
|
567
|
-
|
|
568
|
-
// Create a QueryBuilder with grouping and filtering for duplicates
|
|
569
|
-
let selectQueryBuilder = forgeSQL
|
|
570
|
-
.createQueryBuilder(UsersSchema)
|
|
571
|
-
.select(selectFields as unknown as string[])
|
|
572
|
-
.groupBy(fields)
|
|
573
|
-
.having("COUNT(*) > 1");
|
|
574
|
-
|
|
575
|
-
// Generate the final SQL query with ordering by count
|
|
576
|
-
const query = selectQueryBuilder.getKnexQuery().orderByRaw("count ASC").toSQL().sql;
|
|
577
|
-
|
|
578
|
-
/*
|
|
579
|
-
SQL Query:
|
|
580
|
-
SELECT `u0`.`name`, `u0`.`email`, COUNT(*) as count
|
|
581
|
-
FROM `users` AS `u0`
|
|
582
|
-
GROUP BY `u0`.`name`, `u0`.`email`
|
|
583
|
-
HAVING COUNT(*) > 1
|
|
584
|
-
ORDER BY count ASC;
|
|
585
|
-
*/
|
|
586
|
-
|
|
587
|
-
// Execute the SQL query and retrieve results
|
|
588
|
-
const duplicateResult = await forgeSQL
|
|
589
|
-
.fetch()
|
|
590
|
-
.executeSchemaSQL<DuplicateResult>(query, DuplicateSchema);
|
|
314
|
+
```js
|
|
315
|
+
// Delete by ID
|
|
316
|
+
await forgeSQL.crud().deleteById(1, Users);
|
|
591
317
|
```
|
|
592
318
|
|
|
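The third argument to `insert` in the new README enables duplicate handling, which the old README documented as MySQL's `INSERT ... ON DUPLICATE KEY UPDATE`. A sketch of those semantics simulated with a `Map` — the `insertRows` helper and `Row` type are illustrative only:

```typescript
// Simulated ON DUPLICATE KEY UPDATE: with the flag set, a row whose primary
// key already exists overwrites the stored row instead of failing.
type Row = { id: number; name: string };

function insertRows(table: Map<number, Row>, rows: Row[], updateOnDuplicate = false): void {
  for (const row of rows) {
    if (table.has(row.id) && !updateOnDuplicate) {
      throw new Error(`Duplicate entry '${row.id}' for key 'PRIMARY'`);
    }
    table.set(row.id, row);
  }
}

const table = new Map<number, Row>();
insertRows(table, [{ id: 4, name: "Smith" }, { id: 4, name: "Vasyl" }], true);
console.log(table.get(4)); // the later row wins: { id: 4, name: "Vasyl" }
```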
593
|
-
🔹 **What does this example do?**
|
|
594
|
-
|
|
595
|
-
1. Selects `name` and `email`, along with the count of duplicate occurrences (`COUNT(*) as count`).
|
|
596
|
-
2. Groups the data by `name` and `email` to identify duplicates.
|
|
597
|
-
3. Filters the results to include only groups with more than one record (`HAVING COUNT(*) > 1`).
|
|
598
|
-
4. Sorts the final results in ascending order by count (`ORDER BY count ASC`).
|
|
599
|
-
5. Executes the SQL query and returns the duplicate records.
|
|
600
|
-
|
|
601
|
-
---
|
|
602
|
-
|
|
603
319
|
## Optimistic Locking
|
|
604
320
|
|
|
605
|
-
Optimistic locking is a concurrency control mechanism that prevents data conflicts when multiple transactions attempt to update the same record concurrently. Instead of using locks, this technique relies on a version field in your entity models.
|
|
321
|
+
Optimistic locking is a concurrency control mechanism that prevents data conflicts when multiple transactions attempt to update the same record concurrently. Instead of using locks, this technique relies on a version field in your entity models.
|
|
606
322
|
|
|
607
|
-
###
|
|
323
|
+
### Supported Version Field Types
|
|
608
324
|
|
|
609
|
-
-
|
|
610
|
-
|
|
325
|
+
- `datetime` - Timestamp-based versioning
|
|
326
|
+
- `timestamp` - Timestamp-based versioning
|
|
327
|
+
- `integer` - Numeric version increment
|
|
328
|
+
- `decimal` - Numeric version increment
|
|
611
329
|
|
|
612
|
-
|
|
613
|
-
-The version field must be of type `datetime`, `timestamp`, `integer`, or `decimal` and must be non-nullable. If the field's type does not meet these requirements, a warning message will be logged to the console during model generation.
+### Configuration
 
-
-
-
-
-
-
-
-
-
-
+```typescript
+const options = {
+  additionalMetadata: {
+    users: {
+      tableName: "users",
+      versionField: {
+        fieldName: "updatedAt",
+      }
+    }
+  }
+};
 
-
-export class TestEntityVersion {
-  id!: number;
-  name?: string;
-  version!: number;
-}
-
-export const TestEntityVersionSchema = new EntitySchema({
-  class: TestEntityVersion,
-  properties: {
-    id: { primary: true, type: "integer", unsigned: false, autoincrement: false },
-    name: { type: "string", nullable: true },
-    version: { type: "integer", nullable: false, version: true },
-  },
-});
+const forgeSQL = new ForgeSQL(options);
 ```
 
-
-
----
-
-## Usage with MikroORM Generator
+### Example Usage
 
-
-
-
-
+```typescript
+// The version field will be automatically handled
+await forgeSQL.crud().updateById(
+  {
+    id: 1,
+    name: "Updated Name",
+    updatedAt: new Date() // Will be automatically set if not provided
+  },
+  Users
+);
 ```
 
-
-
-## Forge SQL ORM CLI Documentation
 ## ForgeSqlOrmOptions
 
-The
+The `ForgeSqlOrmOptions` object allows customization of ORM behavior:
 
-
+| Option                     | Type      | Description |
+| -------------------------- | --------- | ----------- |
+| `logRawSqlQuery`           | `boolean` | Enables logging of raw SQL queries in the Atlassian Forge Developer Console. Useful for debugging and monitoring. Defaults to `false`. |
+| `disableOptimisticLocking` | `boolean` | Disables optimistic locking. When set to `true`, no additional condition (e.g., a version check) is added during record updates, which can improve performance. However, this may lead to conflicts when multiple transactions attempt to update the same record concurrently. |
+| `additionalMetadata`       | `object`  | Allows adding custom metadata to all entities. This is useful for tracking common fields across all tables (e.g., `createdAt`, `updatedAt`, `createdBy`, etc.). The metadata will be automatically added to all generated entities. |
 
-
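The options above can be sketched as a plain object. The `ForgeSqlOrmOptions` interface below is an illustrative shape derived from the table, not the package's actual exported type definition.

```typescript
// Hypothetical sketch of the options shape described in the table above.
interface ForgeSqlOrmOptions {
  logRawSqlQuery?: boolean;
  disableOptimisticLocking?: boolean;
  additionalMetadata?: Record<string, unknown>;
}

const options: ForgeSqlOrmOptions = {
  logRawSqlQuery: true,            // log raw SQL in the Forge Developer Console
  disableOptimisticLocking: false, // keep the version check on updates
};
```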
+## CLI Commands
 
 ```sh
 $ npx forge-sql-orm --help
@@ -672,140 +380,13 @@ Options:
   -h, --help                   Display help for command
 
 Commands:
-  generate:model [options]     Generate
-  migrations:create [options]  Generate an initial migration for the entire database
-  migrations:update [options]  Generate a migration to update the database schema
-
-  help [command]               Display help for a specific command
+  generate:model [options]     Generate Drizzle models from the database
+  migrations:create [options]  Generate an initial migration for the entire database
+  migrations:update [options]  Generate a migration to update the database schema
+  migrations:drop [options]    Generate a migration to drop all tables
+  help [command]               Display help for a specific command
 ```
 
-
-
-### 📌 Entity Generation
-
-```sh
-npx forge-sql-orm generate:model --host localhost --port 3306 --user root --password secret --dbName mydb --output ./src/database/entities --versionField updatedAt --saveEnv
-```
-
-This command will:
-
-- Connect to `mydb` on `localhost:3306`.
-- Generate MikroORM entity classes.
-- Save them in `./src/database/entities`.
-- Create an `index.ts` file with all entities.
-- **`--versionField updatedAt`**: Specifies the field used for entity versioning.
-- **`--saveEnv`**: Saves configuration settings to `.env` for future use.
-
-#### 🔹 VersionField Explanation
-
-The `--versionField` option is crucial for handling entity versioning. It should be a field of type `datetime`, `integer`, or `decimal`. This field is used to track changes to entities, ensuring that updates follow proper versioning strategies.
-
-**Example:**
-
-- `updatedAt` (datetime) - Commonly used for timestamp-based versioning.
-- `versionNumber` (integer) - Can be used for numeric version increments.
-
-If the specified field does not meet the required criteria, warnings will be logged.
-
----
-
-### 📌 Database Migrations
-
-```sh
-npx forge-sql-orm migrations:create --host localhost --port 3306 --user root --password secret --dbName mydb --output ./src/database/migration --entitiesPath ./src/database/entities --saveEnv
-```
-
-This command will:
-
-- Create the initial migration based on all detected entities.
-- Save migration files in `./src/database/migration`.
-- Create `index.ts` for automatic migration execution.
-- **`--saveEnv`**: Saves configuration settings to `.env` for future use.
-
----
-
-### 📌 Update Schema Migration
-
-```sh
-npx forge-sql-orm migrations:update --host localhost --port 3306 --user root --password secret --dbName mydb --output ./src/database/migration --entitiesPath ./src/database/entities --saveEnv
-```
-
-This command will:
-
-- Detect schema changes (new tables, columns, indexes).
-- Generate only the required migrations.
-- Update `index.ts` to include new migrations.
-- **`--saveEnv`**: Saves configuration settings to `.env` for future use.
-
----
-
-### 📌 Using the patch:mikroorm Command
-
-If needed, you can manually apply the patch at any time using:
-
-```sh
-npx forge-sql-orm patch:mikroorm
-```
-
-This command:
-
-- Removes unsupported database dialects (e.g., PostgreSQL, SQLite).
-- Fixes dynamic imports to work in Forge.
-- Ensures Knex and MikroORM work properly inside Forge.
-
----
-
-### 📌 Configuration Methods
-
-You can define database credentials using:
-
-1️⃣ **Command-line arguments**:
-
-```sh
---host, --port, --user, --password, --dbName, --output, --versionField, --saveEnv
-```
-
-2️⃣ **Environment variables**:
-
-```bash
-export FORGE_SQL_ORM_HOST=localhost
-export FORGE_SQL_ORM_PORT=3306
-export FORGE_SQL_ORM_USER=root
-export FORGE_SQL_ORM_PASSWORD=secret
-export FORGE_SQL_ORM_DBNAME=mydb
-```
-
-3️⃣ **Using a `.env` file**:
-
-```sh
-FORGE_SQL_ORM_HOST=localhost
-FORGE_SQL_ORM_PORT=3306
-FORGE_SQL_ORM_USER=root
-FORGE_SQL_ORM_PASSWORD=secret
-FORGE_SQL_ORM_DBNAME=mydb
-```
-
-4️⃣ **Interactive prompts** (if missing parameters, the CLI will ask for input).
-
----
-
-### 📌 Manual Migration Execution
-
-To manually execute migrations in your application:
-
-```js
-import migrationRunner from "./src/database/migration";
-import { MigrationRunner } from "@forge/sql/out/migration";
-
-const runner = new MigrationRunner();
-await migrationRunner(runner);
-await runner.run(); // ✅ Apply migrations
-```
-
-This approach allows you to apply migrations programmatically in a Forge application.
-
----
-
-📜 **License**
+## License
 This project is licensed under the **MIT License**.
 Feel free to use it for commercial and personal projects.