jsonbadger 0.5.0 → 0.6.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +36 -18
- package/docs/api/connection.md +144 -0
- package/docs/api/delta-tracker.md +106 -0
- package/docs/api/document.md +77 -0
- package/docs/api/field-types.md +329 -0
- package/docs/api/index.md +35 -0
- package/docs/api/model.md +392 -0
- package/docs/api/query-builder.md +81 -0
- package/docs/api/schema.md +204 -0
- package/docs/architecture-flow.md +397 -0
- package/docs/examples.md +495 -218
- package/docs/jsonb-ops.md +171 -0
- package/docs/lifecycle/model-compilation.md +111 -0
- package/docs/lifecycle.md +146 -0
- package/docs/query-translation.md +11 -10
- package/package.json +10 -3
- package/src/connection/connect.js +12 -17
- package/src/connection/connection.js +128 -0
- package/src/connection/server-capabilities.js +60 -59
- package/src/constants/defaults.js +32 -19
- package/src/constants/{id-strategies.js → id-strategy.js} +28 -29
- package/src/constants/intake-mode.js +8 -0
- package/src/debug/debug-logger.js +17 -15
- package/src/errors/model-overwrite-error.js +25 -0
- package/src/errors/query-error.js +25 -23
- package/src/errors/validation-error.js +25 -23
- package/src/field-types/base-field-type.js +137 -140
- package/src/field-types/builtins/advanced.js +365 -365
- package/src/field-types/builtins/index.js +579 -585
- package/src/field-types/field-type-namespace.js +9 -0
- package/src/field-types/registry.js +149 -122
- package/src/index.js +26 -36
- package/src/migration/ensure-index.js +157 -154
- package/src/migration/ensure-schema.js +27 -15
- package/src/migration/ensure-table.js +44 -31
- package/src/migration/schema-indexes-resolver.js +8 -6
- package/src/model/document-instance.js +29 -540
- package/src/model/document.js +60 -0
- package/src/model/factory/constants.js +36 -0
- package/src/model/factory/index.js +58 -0
- package/src/model/model.js +875 -0
- package/src/model/operations/delete-one.js +39 -0
- package/src/model/operations/insert-one.js +35 -0
- package/src/model/operations/query-builder.js +132 -0
- package/src/model/operations/update-one.js +333 -0
- package/src/model/state.js +34 -0
- package/src/schema/field-definition-parser.js +213 -218
- package/src/schema/path-introspection.js +87 -82
- package/src/schema/schema-compiler.js +126 -212
- package/src/schema/schema.js +621 -138
- package/src/sql/index.js +17 -0
- package/src/sql/jsonb/ops.js +153 -0
- package/src/{query → sql/jsonb}/path-parser.js +54 -43
- package/src/sql/jsonb/read/elem-match.js +133 -0
- package/src/{query → sql/jsonb/read}/operators/contains.js +13 -7
- package/src/sql/jsonb/read/operators/elem-match.js +9 -0
- package/src/{query → sql/jsonb/read}/operators/has-all-keys.js +17 -11
- package/src/{query → sql/jsonb/read}/operators/has-any-keys.js +18 -11
- package/src/sql/jsonb/read/operators/has-key.js +12 -0
- package/src/{query → sql/jsonb/read}/operators/jsonpath-exists.js +22 -15
- package/src/{query → sql/jsonb/read}/operators/jsonpath-match.js +22 -15
- package/src/{query → sql/jsonb/read}/operators/size.js +23 -16
- package/src/sql/parameter-binder.js +18 -13
- package/src/sql/read/build-count-query.js +12 -0
- package/src/sql/read/build-find-query.js +25 -0
- package/src/sql/read/limit-skip.js +21 -0
- package/src/sql/read/sort.js +85 -0
- package/src/sql/read/where/base-fields.js +310 -0
- package/src/sql/read/where/casting.js +90 -0
- package/src/sql/read/where/context.js +79 -0
- package/src/sql/read/where/field-clause.js +58 -0
- package/src/sql/read/where/index.js +38 -0
- package/src/sql/read/where/operator-entries.js +29 -0
- package/src/{query → sql/read/where}/operators/all.js +16 -10
- package/src/sql/read/where/operators/eq.js +12 -0
- package/src/{query → sql/read/where}/operators/gt.js +23 -16
- package/src/{query → sql/read/where}/operators/gte.js +23 -16
- package/src/{query → sql/read/where}/operators/in.js +18 -12
- package/src/sql/read/where/operators/index.js +40 -0
- package/src/{query → sql/read/where}/operators/lt.js +23 -16
- package/src/{query → sql/read/where}/operators/lte.js +23 -16
- package/src/sql/read/where/operators/ne.js +12 -0
- package/src/{query → sql/read/where}/operators/nin.js +18 -12
- package/src/{query → sql/read/where}/operators/regex.js +14 -8
- package/src/sql/read/where/operators.js +126 -0
- package/src/sql/read/where/text-operators.js +83 -0
- package/src/sql/run.js +46 -0
- package/src/sql/write/build-delete-query.js +33 -0
- package/src/sql/write/build-insert-query.js +42 -0
- package/src/sql/write/build-update-query.js +65 -0
- package/src/utils/assert.js +34 -27
- package/src/utils/delta-tracker/.archive/1 tracker-redesign-codex-v2.md +250 -0
- package/src/utils/delta-tracker/.archive/1 tracker-redesign-gemini.md +101 -0
- package/src/utils/delta-tracker/.archive/2 evaluation by gemini.txt +65 -0
- package/src/utils/delta-tracker/.archive/2 evaluation by grok.txt +39 -0
- package/src/utils/delta-tracker/.archive/3 gemini evaluate grok.txt +37 -0
- package/src/utils/delta-tracker/.archive/3 grok evaluate gemini.txt +63 -0
- package/src/utils/delta-tracker/.archive/4 gemini veredict.txt +16 -0
- package/src/utils/delta-tracker/.archive/index.1.js +587 -0
- package/src/utils/delta-tracker/.archive/index.2.js +612 -0
- package/src/utils/delta-tracker/index.js +592 -0
- package/src/utils/dirty-tracker/inline.js +335 -0
- package/src/utils/dirty-tracker/instance.js +414 -0
- package/src/utils/dirty-tracker/static.js +343 -0
- package/src/utils/json-safe.js +13 -9
- package/src/utils/object-path.js +227 -33
- package/src/utils/object.js +408 -168
- package/src/utils/string.js +55 -0
- package/src/utils/value.js +169 -30
- package/docs/api.md +0 -152
- package/src/connection/disconnect.js +0 -16
- package/src/connection/pool-store.js +0 -46
- package/src/model/model-factory.js +0 -555
- package/src/query/limit-skip-compiler.js +0 -31
- package/src/query/operators/elem-match.js +0 -3
- package/src/query/operators/eq.js +0 -6
- package/src/query/operators/has-key.js +0 -6
- package/src/query/operators/index.js +0 -60
- package/src/query/operators/ne.js +0 -6
- package/src/query/query-builder.js +0 -93
- package/src/query/sort-compiler.js +0 -30
- package/src/query/where-compiler.js +0 -477
- package/src/sql/sql-runner.js +0 -31
package/src/sql/run.js
ADDED
@@ -0,0 +1,46 @@

```js
/*
 * MODULE RESPONSIBILITY
 * Execute SQL text against the resolved connection and normalize query failures.
 */
import debug_logger from '#src/debug/debug-logger.js';
import QueryError from '#src/errors/query-error.js';
import {assert_condition} from '#src/utils/assert.js';
import {is_function} from '#src/utils/value.js';

async function run(sql_text, sql_params, connection) {
    const params = sql_params || [];
    const debug_mode = resolve_debug_mode(connection);
    const pool_instance = resolve_pool_instance(connection);

    debug_logger(debug_mode, 'sql_query', {
        sql_text: sql_text,
        params: params
    });

    try {
        return await pool_instance.query(sql_text, params);
    } catch(error) {
        debug_logger(debug_mode, 'sql_error', {
            sql_text: sql_text,
            params: params,
            message: error.message
        });

        throw new QueryError('SQL execution failed', {
            sql_text: sql_text,
            params: params,
            cause: error.message
        });
    }
}

function resolve_pool_instance(connection) {
    assert_condition(connection && is_function(connection.pool_instance?.query), 'run requires connection.pool_instance');
    return connection.pool_instance;
}

function resolve_debug_mode(connection) {
    return connection?.options?.debug === true;
}

export default run;
```
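The catch block above collapses every driver failure into a single `QueryError` shape carrying the SQL text, the bound parameters, and the underlying message. A minimal standalone sketch of that normalization pattern — `QueryError` is modeled here as a plain `Error` subclass, since the package's real class in `#src/errors/query-error.js` is not importable in an isolated snippet:

```javascript
// Hypothetical stand-in for the package's QueryError class.
class QueryError extends Error {
    constructor(message, details) {
        super(message);
        this.name = 'QueryError';
        this.details = details;
    }
}

// Normalize any driver failure into one predictable error shape,
// mirroring the catch block in run() above.
function normalize_query_failure(error, sql_text, params) {
    return new QueryError('SQL execution failed', {
        sql_text: sql_text,
        params: params,
        cause: error.message
    });
}

const driver_error = new Error('relation "users" does not exist');
const normalized = normalize_query_failure(driver_error, 'SELECT * FROM users', []);
```

Callers can then match on one error type regardless of which pg driver error originally fired.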
package/src/sql/write/build-delete-query.js
ADDED
@@ -0,0 +1,33 @@

```js
/*
 * MODULE RESPONSIBILITY
 * Build SQL text and parameters for delete writes.
 */
/**
 * Build one compiled delete query payload.
 *
 * @param {object} query_context
 * @param {string} query_context.table_identifier
 * @param {string} query_context.data_identifier
 * @param {object} query_context.where_result
 * @param {string} query_context.where_result.sql
 * @param {Array<*>} query_context.where_result.params
 * @returns {{sql_text: string, sql_params: Array<*>}}
 */
function build_delete_query(query_context) {
    const sql_text =
        `WITH target_row AS (SELECT id FROM ${query_context.table_identifier} WHERE ${query_context.where_result.sql} LIMIT 1) ` +
        `DELETE FROM ${query_context.table_identifier} AS target_table ` +
        `USING target_row ` +
        `WHERE target_table.id = target_row.id ` +
        `RETURNING target_table.id::text AS id, ` +
        `target_table.${query_context.data_identifier} AS data, ` +
        `target_table.created_at AS created_at, ` +
        `target_table.updated_at AS updated_at`;

    return {
        sql_text,
        sql_params: query_context.where_result.params
    };
}

export default build_delete_query;
```
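A worked example of what this builder emits. The function body is reproduced inline from the diff above so the snippet is self-contained; the table/column identifiers and the compiled WHERE clause are hypothetical (real clauses come from the `sql/read/where` compiler):

```javascript
// build_delete_query reproduced from the diff above.
function build_delete_query(query_context) {
    const sql_text =
        `WITH target_row AS (SELECT id FROM ${query_context.table_identifier} WHERE ${query_context.where_result.sql} LIMIT 1) ` +
        `DELETE FROM ${query_context.table_identifier} AS target_table ` +
        `USING target_row ` +
        `WHERE target_table.id = target_row.id ` +
        `RETURNING target_table.id::text AS id, ` +
        `target_table.${query_context.data_identifier} AS data, ` +
        `target_table.created_at AS created_at, ` +
        `target_table.updated_at AS updated_at`;

    return {sql_text, sql_params: query_context.where_result.params};
}

// Hypothetical inputs for illustration.
const query = build_delete_query({
    table_identifier: '"users"',
    data_identifier: '"data"',
    where_result: {
        sql: 'id = $1::uuid',
        params: ['123e4567-e89b-12d3-a456-426614174000']
    }
});
```

The `WITH ... LIMIT 1` CTE is what makes this a delete-one: at most a single matching row is targeted, and the deleted row is returned in the same hydrated shape (`id`, `data`, `created_at`, `updated_at`) that reads produce.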
package/src/sql/write/build-insert-query.js
ADDED
@@ -0,0 +1,42 @@

```js
/*
 * MODULE RESPONSIBILITY
 * Build SQL text and parameters for insert writes.
 */
import {jsonb_stringify} from '#src/utils/json.js';

/**
 * Build one insert query payload for a model row insert.
 *
 * @param {object} query_context
 * @param {string} query_context.table_identifier
 * @param {string} query_context.data_identifier
 * @param {object} query_context.payload
 * @param {object} query_context.base_fields
 * @returns {object}
 */
function build_insert_query(query_context) {
    const sql_columns = [query_context.data_identifier];
    const sql_params = [jsonb_stringify(query_context.payload)];
    const sql_values = ['$1::jsonb'];

    if(query_context.base_fields.id !== undefined && query_context.base_fields.id !== null) {
        sql_columns.push('id');
        sql_params.push(String(query_context.base_fields.id));
        sql_values.push('$' + sql_params.length + '::uuid');
    }

    for(const key of ['created_at', 'updated_at']) {
        sql_columns.push(key);
        sql_params.push(query_context.base_fields[key]);
        sql_values.push('$' + sql_params.length + '::timestamptz');
    }

    return {
        sql_text:
            `INSERT INTO ${query_context.table_identifier} (${sql_columns.join(', ')}) VALUES (${sql_values.join(', ')}) ` +
            `RETURNING id::text AS id, ${query_context.data_identifier} AS data, created_at AS created_at, updated_at AS updated_at`,
        sql_params
    };
}

export default build_insert_query;
```
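The parameter numbering above is positional: `sql_params.length` after each push yields the next `$N` placeholder. A self-contained sketch showing the resulting column/placeholder layout when a caller supplies an explicit `id` — `jsonb_stringify` is stubbed with `JSON.stringify` here (the real helper lives in `#src/utils/json.js`), and all identifiers and values are hypothetical:

```javascript
// Assumed stand-in for the package's jsonb_stringify helper.
const jsonb_stringify = (value) => JSON.stringify(value);

// build_insert_query reproduced from the diff above, using the stub.
function build_insert_query(query_context) {
    const sql_columns = [query_context.data_identifier];
    const sql_params = [jsonb_stringify(query_context.payload)];
    const sql_values = ['$1::jsonb'];

    if(query_context.base_fields.id !== undefined && query_context.base_fields.id !== null) {
        sql_columns.push('id');
        sql_params.push(String(query_context.base_fields.id));
        sql_values.push('$' + sql_params.length + '::uuid');
    }

    for(const key of ['created_at', 'updated_at']) {
        sql_columns.push(key);
        sql_params.push(query_context.base_fields[key]);
        sql_values.push('$' + sql_params.length + '::timestamptz');
    }

    return {
        sql_text:
            `INSERT INTO ${query_context.table_identifier} (${sql_columns.join(', ')}) VALUES (${sql_values.join(', ')}) ` +
            `RETURNING id::text AS id, ${query_context.data_identifier} AS data, created_at AS created_at, updated_at AS updated_at`,
        sql_params
    };
}

const query = build_insert_query({
    table_identifier: '"users"',
    data_identifier: '"data"',
    payload: {name: 'Ada'},
    base_fields: {
        id: '123e4567-e89b-12d3-a456-426614174000',
        created_at: '2024-01-01T00:00:00.000Z',
        updated_at: '2024-01-01T00:00:00.000Z'
    }
});
```

With `id` present, the placeholders come out as `$1::jsonb, $2::uuid, $3::timestamptz, $4::timestamptz`; omit `id` and the timestamps shift to `$2`/`$3`, which is why the builder derives each placeholder from the live `sql_params.length` instead of hardcoding positions.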
package/src/sql/write/build-update-query.js
ADDED
@@ -0,0 +1,65 @@

```js
/*
 * MODULE RESPONSIBILITY
 * Build SQL text and parameters for update writes.
 */
import {bind_parameter} from '#src/sql/parameter-binder.js';
import {has_own} from '#src/utils/object.js';

/**
 * Build one compiled update query payload.
 *
 * @param {object} query_context
 * @param {string} query_context.table_identifier
 * @param {string} query_context.data_identifier
 * @param {object} query_context.update_expression
 * @param {object} query_context.update_expression.jsonb_ops
 * @param {object} query_context.update_expression.timestamp_set
 * @param {object} query_context.parameter_state
 * @param {Array<*>} query_context.parameter_state.params
 * @param {object} query_context.where_result
 * @param {string} query_context.where_result.sql
 * @param {Array<*>} query_context.where_result.params
 * @returns {{sql_text: string, sql_params: Array<*>}}
 */
function build_update_query(query_context) {
    const {data_identifier, update_expression, parameter_state, table_identifier, where_result} = query_context;

    // Finalize the JSONB mutation expression.
    // The SQL boundary owns JSONB compilation and should compile via
    // `jsonb_ops.compile(parameter_state)` as the new contract lands.
    const compiled_data_expression = update_expression.jsonb_ops.compile(parameter_state);

    const assignments = [`${data_identifier} = ${compiled_data_expression}`];
    const timestamps = update_expression.timestamp_set;
    const params = parameter_state;

    if(has_own(timestamps, 'created_at')) {
        const created_at_param = bind_parameter(params, timestamps.created_at);
        assignments.push(`created_at = ${created_at_param}::timestamptz`);
    }

    if(has_own(timestamps, 'updated_at')) {
        const updated_at_param = bind_parameter(params, timestamps.updated_at);
        assignments.push(`updated_at = ${updated_at_param}::timestamptz`);
    }

    const sql_text =
        `WITH target_row AS (SELECT id FROM ${table_identifier} WHERE ${where_result.sql} LIMIT 1) ` +
        `UPDATE ${table_identifier} AS target_table ` +
        `SET ${assignments.join(', ')} ` +
        `FROM target_row ` +
        `WHERE target_table.id = target_row.id ` +
        `RETURNING target_table.id::text AS id, ` +
        `target_table.${data_identifier} AS data, ` +
        `target_table.created_at AS created_at, ` +
        `target_table.updated_at AS updated_at`;

    const sql_params = [...where_result.params, ...parameter_state.params];

    return {
        sql_text,
        sql_params
    };
}

export default build_update_query;
```
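Note the final parameter list: `[...where_result.params, ...parameter_state.params]` places the WHERE parameters first, so placeholders produced through `bind_parameter` must continue numbering after them. The real contract lives in `#src/sql/parameter-binder.js`; the sketch below is one assumed way that numbering could stay consistent (the `offset` field is purely illustrative and not confirmed by this diff):

```javascript
// Assumed bind_parameter contract, for illustration only: append the
// value to the shared parameter list and return its one-based `$N`
// placeholder, shifted past any parameters bound earlier (e.g. by the
// WHERE compiler). The hypothetical `offset` field models that shift.
function bind_parameter(parameter_state, value) {
    parameter_state.params.push(value);
    return '$' + (parameter_state.offset + parameter_state.params.length);
}

// With two WHERE parameters already bound, new bindings start at $3.
const parameter_state = {params: [], offset: 2};
const created_at_param = bind_parameter(parameter_state, '2024-01-01T00:00:00.000Z');
const updated_at_param = bind_parameter(parameter_state, '2024-06-01T00:00:00.000Z');
```

Whatever the actual mechanism, the invariant is the same: placeholder numbers must match positions in the concatenated `[...where_result.params, ...parameter_state.params]` array.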
package/src/utils/assert.js
CHANGED
@@ -1,27 +1,34 @@

```diff
 const identifier_pattern = /^[a-zA-Z_][a-zA-Z0-9_]*$/;
 const path_pattern = /^[a-zA-Z_][a-zA-Z0-9_]*(\.[a-zA-Z_][a-zA-Z0-9_]*)*$/;
 
 function assert_condition(condition_value, message) {
     if(!condition_value) {
         throw new Error(message || 'Assertion failed');
     }
 }
 
 function assert_identifier(identifier_value, label) {
     const label_value = label || 'identifier';
 
     assert_condition(typeof identifier_value === 'string', label_value + ' must be a string');
     assert_condition(identifier_pattern.test(identifier_value), label_value + ' has invalid characters');
 }
 
 function assert_path(path_value, label) {
     const label_value = label || 'path';
 
     assert_condition(typeof path_value === 'string', label_value + ' must be a string');
     assert_condition(path_pattern.test(path_value), label_value + ' has invalid characters');
 }
 
 function quote_identifier(identifier_value) {
     assert_identifier(identifier_value, 'identifier');
     return '"' + identifier_value + '"';
 }
+
+export {
+    assert_condition,
+    assert_identifier,
+    assert_path,
+    quote_identifier
+};
```
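These validators are the injection guard for every identifier that gets interpolated into SQL text (such as `table_identifier` and `data_identifier` in the write builders above): anything outside `[a-zA-Z0-9_]` is rejected before it can reach a query string. A self-contained usage example, with the helpers copied from the new assert.js:

```javascript
// Helpers copied from the new assert.js shown above.
const identifier_pattern = /^[a-zA-Z_][a-zA-Z0-9_]*$/;

function assert_condition(condition_value, message) {
    if(!condition_value) {
        throw new Error(message || 'Assertion failed');
    }
}

function assert_identifier(identifier_value, label) {
    const label_value = label || 'identifier';

    assert_condition(typeof identifier_value === 'string', label_value + ' must be a string');
    assert_condition(identifier_pattern.test(identifier_value), label_value + ' has invalid characters');
}

function quote_identifier(identifier_value) {
    assert_identifier(identifier_value, 'identifier');
    return '"' + identifier_value + '"';
}

// A well-formed identifier is validated, then double-quoted.
const quoted = quote_identifier('user_data');

// A malicious identifier fails validation before it can reach SQL text.
let rejected = false;
try {
    quote_identifier('users"; DROP TABLE users; --');
} catch {
    rejected = true;
}
```

Double-quoting alone would not be enough for arbitrary input (embedded quotes would need escaping); restricting the character set first keeps `quote_identifier` trivially safe.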
package/src/utils/delta-tracker/.archive/1 tracker-redesign-codex-v2.md
ADDED
@@ -0,0 +1,250 @@

````markdown
## Cleaner Redesign For JSONB Updates

```js
const document = DirtyTracker(new Document(row), {track: ['data']});

document.data.preferences.theme = 'dark';
delete document.data.preferences.legacy_flag;

await document.save();
```

The most efficient design is to stop deriving SQL updates from a dirty-path list after the fact. Instead, track a canonical change plan as mutations happen, then compile that plan directly into SQL.

## Core Idea

Build the system around one internal artifact:

```js
[
    {kind: 'set', path: ['preferences', 'theme'], value: 'dark'},
    {kind: 'unset', path: ['preferences', 'legacy_flag']},
    {kind: 'replace_root', value: {...}}
]
```

Do not make `model.js` rediscover intent from snapshots and dirty strings. Let the tracker produce normalized operations up front.

Because the tracker is initialized as `DirtyTracker(document, {track: ['data']})`, this change list should describe only mutations relative to the tracked branch. In the model layer, those changes map to `document.data`. Base fields like `id`, `created_at`, and `updated_at` still belong to the outer model and SQL update flow.

## Why This Is Better

- No special-case guessing for `data` vs `data.foo.bar`.
- No fake root replacement through `$set`.
- No silent deletion drift.
- No need to reconstruct lost intent from the current object graph.
- The SQL layer receives exactly the operation type it needs to compile.

## Recommended Architecture

### 1. Keep `Document` plain

`Document` should stay as the simple state holder and path utility surface.

It should own:
- `id`
- `data`
- `created_at`
- `updated_at`
- plain helpers like `get()`, `set()`, and maybe `unset()`

It should not know PostgreSQL details.

### 2. Replace “dirty fields” with semantic changes

The tracker should maintain a change list instead of only a `dirty_keys` set.

Suggested internal model:

```js
{
    base_state: {...},
    changes: [],
    change_index: new Map(),
    watchers: [],
    proxy_cache: new WeakMap()
}
```

The tracker API can still expose friendly helpers:

```js
document.$has_changes()
document.$get_changes()
document.$reset_changes()
document.$rebase_changes()
```

`$get_dirty_fields()` can remain as a compatibility helper if needed, but it should become a derived view, not the source of truth.

### 3. Track semantic operations, not just paths

When the tracked branch changes, record operations like:

- `set(path, value)`
- `unset(path)`
- `replace_root(value)`

This removes ambiguity.

Examples:

```js
document.data.profile.age = 31;
// => {kind: 'set', path: ['profile', 'age'], value: 31}

delete document.data.profile.age;
// => {kind: 'unset', path: ['profile', 'age']}

document.data = {profile: {age: 31}};
// => {kind: 'replace_root', value: {profile: {age: 31}}}
```

### 4. Add `deleteProperty` support to the tracker

Right now the tracker is focused on `get` and `set`. A proper JSONB sync story also needs `deleteProperty`.

That is the clean place to capture leaf removal without inventing `undefined` semantics.

### 5. Treat root replacement as a first-class operation

Do not encode tracked-root replacement as:

```js
{$set: {...}}
```

That is not root replacement. It is only a collection of path assignments.

Use a real operation:

```js
{kind: 'replace_root', value: next_value}
```

Then compile it directly to:

```sql
data = $1::jsonb
```

## SQL Compiler Design

### 1. Compile from changes

The PostgreSQL layer should accept normalized operations, not infer them from ad hoc object shapes.

Suggested internal compiler contract:

```js
build_payload_update_expression(data_identifier, changes)
```

The payload compiler should only assemble the expression for the `.data` column. The outer update compiler can combine that with other assignments like `updated_at`.

### 2. Supported operation mapping

- `set(path, value)` -> `jsonb_set(...)`
- `unset(path)` -> `#-`
- `replace_root(value)` -> direct `data = $1::jsonb`

Example:

```js
[
    {kind: 'set', path: ['preferences', 'theme'], value: 'dark'},
    {kind: 'unset', path: ['preferences', 'legacy_flag']}
]
```

becomes roughly:

```sql
(jsonb_set(data, '{preferences,theme}', $1::jsonb, true) #- '{preferences,legacy_flag}')
```

### 3. Resolve conflicts before SQL compilation

Do not let the SQL compiler guess which op wins.

Normalize the operation list before compilation:

- `replace_root` wipes earlier changes
- last write wins for identical paths
- parent/child conflicts collapse to the minimal valid form

Example:

```js
set(profile, {...})
set(profile.age, 31)
```

Normalize to one final operation set before SQL generation.

## Suggested Rules

### Payload rules

- Track only `document.data` for JSONB persistence.
- Keep base columns outside payload tracking.
- Rebase after successful insert/hydrate/update.

### Mutation rules

- `set path` means JSONB set
- `delete path` means JSONB unset
- `assign document.data = ...` means payload root replacement
- `assign undefined` should not silently mean delete

### Array rules

Keep arrays simple unless you truly need patch-level array semantics.

Recommended default:
- replacing an array records one `set` on that array path
- mutating arrays in place can also collapse to one parent-array `set`
- add explicit array operators later only if needed

That avoids a lot of fragile path math and still gives efficient updates for normal usage.

## Clean Save Flow

```js
async function save_model(model_instance) {
    const document = model_instance.document;
    const changes = document.$get_changes();

    if(!changes.length) {
        return model_instance;
    }

    const data_expression = build_payload_update_expression('data', changes);
    const updated_row = await exec_update_one(model, {id: document.id}, {data_expression});

    document.init(updated_row);
    document.$rebase_changes();

    return model_instance;
}
```

This is the key simplification:

- `model.js` should orchestrate
- the tracker should describe changes
- the payload SQL builder should compile the `.data` update

Each layer does one job.

## Recommendation

The most beautiful version is:

- plain `Document`
- tracker mounted at the document root with `track: ['data']`
- tracker emits semantic change operations
- SQL compiler consumes those changes for the `.data` column
- no reverse-engineering of intent from dirty strings

That design removes nearly all of the hacks around root replacement, deletions, and JSONB path edge cases because those cases stop being edge cases. They become normal operation types.
````
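The archived note's third compiler rule — normalize the change list before SQL generation — can be sketched concretely. This is an illustration of the note's stated rules (`replace_root` wipes earlier changes; last write wins per path), not code from the package; the function name is hypothetical, and the parent/child collapsing rule is omitted for brevity:

```javascript
// Sketch of the pre-compilation normalization pass the note proposes.
function normalize_changes(changes) {
    let normalized = [];

    for(const change of changes) {
        if(change.kind === 'replace_root') {
            // replace_root wipes all earlier operations.
            normalized = [change];
            continue;
        }

        // Last write wins for identical paths.
        const path_key = change.path.join('.');
        normalized = normalized.filter((existing) =>
            existing.kind === 'replace_root' || existing.path.join('.') !== path_key);
        normalized.push(change);
    }

    return normalized;
}

const result = normalize_changes([
    {kind: 'set', path: ['profile', 'age'], value: 30},
    {kind: 'set', path: ['profile', 'age'], value: 31},
    {kind: 'unset', path: ['profile', 'nickname']}
]);
```

The two writes to `profile.age` collapse to the final one, so the SQL compiler never has to decide which operation wins.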
@@ -0,0 +1,101 @@
|
|
|
1
|
+
Zooming out and redesigning this without constraints is exactly what this architecture needs right now. We’ve been trying to force a square peg (a rich, reactive proxy diff) into a round hole (a query builder that currently only understands simple `$set` root paths).
|
|
2
|
+
|
|
3
|
+
If we have the freedom to refactor upstream, we can eliminate all the string-stripping hacks, the `undefined` edge-case skipping, and the `extract_diff` middleman entirely.
|
|
4
|
+
|
|
5
|
+
Here is what the "Holy Grail" architecture looks like for a reactive, JSONB-backed Postgres ORM.
|
|
6
|
+
|
|
7
|
+
### 1. The Tracker Layer: Emit Deltas, Not Just Paths
|
|
8
|
+
Right now, the tracker just keeps a `Set` of dirty strings (`['data.age', 'data.preferences']`). We are leaving too much semantic meaning on the table.
|
|
9
|
+
|
|
10
|
+
**The Fix:** We refactor the Proxy to intercept `deleteProperty` as well as `set`. Instead of returning an array of strings, the tracker exposes a `$get_delta()` method that returns a structured patch ready for a NoSQL-style query builder.
|
|
11
|
+
|
|
12
|
+
```javascript
|
|
13
|
+
// What model.document.$get_delta() would return:
|
|
14
|
+
{
|
|
15
|
+
$set: {
|
|
16
|
+
'preferences.theme': 'dark', // Native stripping of 'data.' done at the proxy config level
|
|
17
|
+
'age': 30
|
|
18
|
+
},
|
|
19
|
+
$unset: [
|
|
20
|
+
'obsolete_key' // Caught via the `delete` keyword or explicit undefined
|
|
21
|
+
],
|
|
22
|
+
$replace_root: false // Flips to true if they did `model.data = { ... }`
|
|
23
|
+
}
|
|
24
|
+
```
|
|
25
|
+
*No `extract_diff` required. The tracker naturally speaks the language of the database.*
|
|
26
|
+
|
|
27
|
+
### 2. The Model Layer: The Clean Handoff
|
|
28
|
+
In `model.js`, `Model.prototype.update` becomes incredibly thin and elegant. It just grabs the delta from the tracker and passes it straight into the update executor.
|
|
29
|
+
|
|
30
|
+
```javascript
|
|
31
|
+
Model.prototype.update = async function() {
|
|
32
|
+
if (!this.document.$has_dirty_fields()) return this;
|
|
33
|
+
|
|
34
|
+
const delta = this.document.$get_delta();
|
|
35
|
+
|
|
36
|
+
// If the developer replaced the entire object, bypass JSONB surgery
|
|
37
|
+
if (delta.$replace_root) {
|
|
38
|
+
await exec_update_one(model, { id: this.id }, { $replace: this.payload });
|
|
39
|
+
} else {
|
|
40
|
+
// Pass the surgical set/unset instructions directly
|
|
41
|
+
await exec_update_one(model, { id: this.id }, { $set: delta.$set, $unset: delta.$unset });
|
|
42
|
+
}
|
|
43
|
+
|
|
44
|
+
this.document.$rebase_dirty_fields();
|
|
45
|
+
return this;
|
|
46
|
+
};
|
|
47
|
+
```
|
|
48
|
+
|
|
49
|
+
### 3. The Query Builder Layer: Chained JSONB SQL (The Magic)
|
|
50
|
+
This is where the real power of Postgres comes in. We update `create-update-query.js` (and its expression compilers) to construct a **Chained JSONB Expression**.
|
|
51
|
+
|
|
52
|
+
Postgres allows you to nest `jsonb_set` and the deletion operator (`#-`) to apply multiple surgical changes to a JSON column in a single pass without overriding the whole column.
|
|
53
|
+
|
|
54
|
+
If our delta is:
|
|
55
|
+
`$set: { 'a.b': 1, 'c': 2 }`, `$unset: ['d.e']`
|
|
56
|
+
|
|
57
|
+
The query compiler generates this SQL:
|
|
58
|
+
```sql
|
|
59
|
+
UPDATE table_name
|
|
60
|
+
SET
|
|
61
|
+
data = jsonb_set(
|
|
62
|
+
jsonb_set(
|
|
63
|
+
data #- '{d,e}', -- 1. Unset the deleted leaf first
|
|
64
|
+
'{a,b}', $1::jsonb -- 2. Set the first new leaf
|
|
65
|
+
),
|
|
66
|
+
'{c}', $2::jsonb -- 3. Set the second new leaf
|
|
67
|
+
),
|
|
68
|
+
updated_at = NOW()
|
|
69
|
+
WHERE id = $3
|
|
70
|
+
```
|
|
71
|
+
|
|
72
|
+
**How we build this upstream (`create-update-query.js` context):**
|
|
73
|
+
You start with the base column name as your expression string, and iteratively wrap it.
|
|
74
|
+
```javascript
|
|
75
|
+
let data_expr = 'data';
|
|
76
|
+
|
|
77
|
+
// 1. Chain all unsets
|
|
78
|
+
for (const path of unset_paths) {
|
|
79
|
+
const pg_path = format_pg_array(path); // 'd.e' -> '{d,e}'
|
|
80
|
+
data_expr = `${data_expr} #- '${pg_path}'`;
|
|
81
|
+
}
|
|
82
|
+
|
|
83
|
+
// 2. Chain all sets
|
|
84
|
+
for (const [path, val] of Object.entries(set_payload)) {
|
|
85
|
+
const pg_path = format_pg_array(path); // 'a.b' -> '{a,b}'
|
|
86
|
+
const param_id = bind_parameter(params, val);
|
|
87
|
+
data_expr = `jsonb_set(${data_expr}, '${pg_path}', ${param_id}::jsonb)`;
|
|
88
|
+
}
|
|
89
|
+
|
|
90
|
+
// Resulting in a single clean assignment for the query builder:
|
|
91
|
+
assignments.push(`data = ${data_expr}`);
|
|
92
|
+
```

### Why this approach is "Beautiful"
1. **Zero Hacks:** We stop writing code to protect the query builder from things it can't do, and instead teach the query builder how to do them natively.
2. **True State Synchronization:** If a frontend sends a payload missing a key, or a developer does `delete model.payload.feature_flag`, the database correctly executes a `#-` deletion instead of leaving stale data floating around in the JSONB column.
3. **Perfect Encapsulation:** The tracker handles the "what changed" logic. The query builder handles the "how to write it to Postgres" logic. `model.js` just hands the baton between them.

If we refactor towards this, we throw away `extract-diff.js` completely. We upgrade `instance.js` to expose `$get_delta()`, and we add a few lines to your update query helpers to support `$unset` and `$replace`.

Does this architectural direction feel like the kind of robust, scalable solution you are aiming for?

@@ -0,0 +1,65 @@

This is a fascinating architectural pivot. Both proposals correctly identify the root flaw in the current `instance.js` implementation: trying to reverse-engineer database intent from a flattened `Set` of dirty string paths is a fragile anti-pattern. By moving the intent capture to the Proxy level, you completely eliminate the brittle parsing logic.

Here is an evaluation of both approaches, breaking down their strengths, weaknesses, and how they interact with your current `DirtyTracker`.

### Architectural Comparison

| Feature | Codex V2 (Semantic Operations List) | Gemini (Mongo-style Delta Map) |
| :--- | :--- | :--- |
| **Data Structure** | Array of objects (`[{kind: 'set', ...}]`) | Grouped object (`{$set: {...}, $unset: [...]}`) |
| **Mutation Tracking** | Chronological (records every change in order) | State-collapsed (last write wins natively via object keys) |
| **Root Replacement** | Explicit operation: `{kind: 'replace_root', value: ...}` | Boolean flag: `$replace_root: true` (relies on outer state) |
| **SQL Generation** | Requires a pre-compilation step to normalize conflicts | Direct mapping (iterate unsets, then iterate sets) |
| **Proxy Implementation** | Heavier: must append to an array and manage path overlaps | Lighter: just updates the `$set` or `$unset` object keys |

---

### Feature Ratings & Analysis

**1. The Tracker API & State Construction**
* **Winner: Gemini (Mongo-style Delta Map)**
* **Why:** In the Codex approach, tracking every operation in an array means you have to write a complex conflict-resolution algorithm before generating SQL. If a user sets `profile.age` to 30, then 31, then deletes `profile`, the Codex array holds three entries that must be collapsed. The Gemini approach uses object keys, so state collapses naturally as it mutates: a later set simply overwrites the earlier key in the `$set` object.
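
That collapse behavior can be sketched with a toy delta map (the class and method names here are illustrative, not the actual `DirtyTracker` API): applying the set-30, set-31, delete-`profile` sequence leaves exactly one pending operation.

```javascript
// Hypothetical delta-map tracker illustrating last-write-wins collapse.
class DeltaMap {
  constructor() {
    this.$set = {};
    this.$unset = new Set();
  }
  set(path, value) {
    this.$unset.delete(path);   // cross-cancellation
    this.$set[path] = value;    // a later write overwrites the same key
  }
  unset(path) {
    // Deleting a parent cancels any pending sets at or beneath it.
    for (const key of Object.keys(this.$set)) {
      if (key === path || key.startsWith(`${path}.`)) delete this.$set[key];
    }
    this.$unset.add(path);
  }
}

const delta = new DeltaMap();
delta.set('profile.age', 30);
delta.set('profile.age', 31); // collapses onto the same key
delta.unset('profile');       // cancels the child set entirely

console.log(Object.keys(delta.$set)); // []
console.log([...delta.$unset]);       // ['profile']
```

No replay or conflict resolution is needed at SQL time; the structure is already normalized.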

**2. SQL Compilation Pipeline**
* **Winner: Gemini**
* **Why:** Generating Postgres JSONB chaining is drastically simpler with a grouped delta. You can reliably execute all `$unset` operations first, followed by all `$set` operations. This ensures that a parent deletion doesn't accidentally wipe out a subsequent child insertion. The Codex array would require sorting or graph analysis to achieve the same safety guarantee.

**3. Handling Root Replacements (`document.data = {...}`)**
* **Winner: Codex V2**
* **Why:** The Gemini proposal suggests `{ $replace_root: true }` and then relies on `this.payload` in the model to provide the data. This breaks the encapsulation of the tracker: the delta should be a pure, self-contained set of instructions. Codex handles this cleanly by making root replacement a distinct operation that carries its own payload: `{ kind: 'replace_root', value: next_value }`.

**4. Handling Deletions**
* **Winner: Tie**
* **Why:** Both proposals correctly identify that `instance.js` is missing a `deleteProperty` trap in the Proxy. Right now, `delete document.data.foo` circumvents the dirty tracker entirely. Both architectures require adding this trap to capture `$unset` operations.
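
The gap is easy to demonstrate with a toy proxy (this is a minimal sketch, not the actual `build_proxy` implementation): without a `deleteProperty` trap, the deletion really happens on the target, but the tracker never hears about it.

```javascript
// Toy proxy with only a `set` trap, mimicking the current gap.
const dirty = new Set();
const target = { foo: 1, bar: 2 };
const doc = new Proxy(target, {
  set(obj, key, value) {
    dirty.add(key);       // writes are tracked...
    obj[key] = value;
    return true;
  },
  // ...but with no deleteProperty trap, deletions fall through
  // to the default behavior and are never recorded.
});

doc.bar = 3;    // tracked
delete doc.foo; // silently bypasses tracking

console.log(dirty.has('bar'));  // true
console.log(dirty.has('foo'));  // false -- the deletion was never recorded
console.log('foo' in target);   // false -- yet it really happened
```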

---

### The Synthesized Architecture (The Best of Both)

The ideal solution merges the clean NoSQL data structure of the Gemini proposal with the strict operational encapsulation of the Codex proposal.

By updating `instance.js`, you can modify the Proxy to output a normalized delta object directly, which the SQL builder can consume without any middleman logic.

```javascript
// The ideal output from document.$get_delta()
{
  replace_root: null, // or the complete new object
  set: {
    'preferences.theme': 'dark',
    'profile.age': 31
  },
  unset: [
    'preferences.legacy_flag'
  ]
}
```

**How this changes `instance.js`:**
1. **Add `deleteProperty`**: You must implement the `deleteProperty` trap in `build_proxy` to catch leaf removals and push them into an internal `store.unset_keys` Set.
2. **Intercept Object Assignment**: When `set` intercepts a write to the root tracked property (e.g., the entire `data` object is replaced), clear the `set` and `unset` trackers and assign the new object to `store.replace_root`.
3. **Cross-Cancellation**: If a key is added to `set`, remove it from `unset`. If a key is added to `unset`, remove it from `set`.
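
A minimal sketch of changes 1 and 3 in a `build_proxy`-style factory (the `store` shape, `build_tracked` name, and `prefix` parameter are assumptions, not the current `instance.js` internals; root-replacement interception and recursive nesting are omitted for brevity):

```javascript
// Hypothetical trap wiring for set/delete with cross-cancellation.
function build_tracked(target, store, prefix = '') {
  const to_path = (key) => (prefix ? `${prefix}.${String(key)}` : String(key));
  return new Proxy(target, {
    set(obj, key, value) {
      const path = to_path(key);
      store.unset_keys.delete(path);    // 3. cross-cancellation
      store.set_paths.set(path, value);
      obj[key] = value;
      return true;
    },
    deleteProperty(obj, key) {          // 1. catch leaf removals
      const path = to_path(key);
      store.set_paths.delete(path);     // 3. cross-cancellation
      store.unset_keys.add(path);
      delete obj[key];
      return true;
    },
  });
}

const store = { set_paths: new Map(), unset_keys: new Set(), replace_root: null };
const doc = build_tracked({ theme: 'light', legacy_flag: true }, store);

doc.theme = 'dark';        // recorded in set_paths
doc.legacy_flag = false;   // recorded in set_paths...
delete doc.legacy_flag;    // ...then cancelled and moved to unset_keys
```

After this sequence, `store` holds one pending set (`theme`) and one pending unset (`legacy_flag`), ready to be emitted by `$get_delta()`.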

**How this changes the SQL layer:**
You pass this delta directly to the query builder. If `replace_root` has a value, compile a standard `data = $1::jsonb` query. Otherwise, loop through `unset` using the `#-` operator, then wrap that result by looping through `set` with `jsonb_set`.
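
That branch logic is small enough to sketch end to end (function name, path formatting, and parameter binding here are illustrative, not the actual query-builder API):

```javascript
// Hypothetical compiler from the delta object to a single SET assignment.
function compile_data_assignment(delta, params) {
  if (delta.replace_root !== null) {
    // Root replacement short-circuits everything else.
    params.push(JSON.stringify(delta.replace_root));
    return `data = $${params.length}::jsonb`;
  }
  const to_pg_path = (path) => `{${path.split('.').join(',')}}`;
  let expr = 'data';
  for (const path of delta.unset) {
    expr = `${expr} #- '${to_pg_path(path)}'`; // deletions first
  }
  for (const [path, value] of Object.entries(delta.set)) {
    params.push(JSON.stringify(value));
    expr = `jsonb_set(${expr}, '${to_pg_path(path)}', $${params.length}::jsonb)`;
  }
  return `data = ${expr}`;
}

const params = [];
const sql = compile_data_assignment(
  { replace_root: null, set: { 'profile.age': 31 }, unset: ['preferences.legacy_flag'] },
  params
);
console.log(sql);
// data = jsonb_set(data #- '{preferences,legacy_flag}', '{profile,age}', $1::jsonb)
```

Note this sketch interpolates paths straight into the SQL text; a production version should also parameterize or sanitize the path arrays.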

Would you like to draft the exact `deleteProperty` trap and delta-generation methods to slot into the `build_proxy` function of `instance.js`?