cry-synced-db-client 0.1.140 → 0.1.143
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +60 -0
- package/dist/index.js +173 -125
- package/dist/src/db/DexieDb.d.ts +2 -1
- package/dist/src/db/SyncedDb.d.ts +2 -1
- package/dist/src/db/managers/ConnectionManager.d.ts +39 -33
- package/dist/src/db/managers/CrossTabSyncManager.d.ts +6 -3
- package/dist/src/db/types/managers.d.ts +16 -8
- package/dist/src/types/I_DexieDb.d.ts +8 -0
- package/dist/src/types/I_SyncedDb.d.ts +25 -4
- package/dist/src/types/index.d.ts +1 -1
- package/package.json +1 -1
package/CHANGELOG.md CHANGED
@@ -2,6 +2,66 @@
 
 ## Unreleased
 
+### `getDirtyMeta()` for lightweight dirty-state inspection
+
+- New `SyncedDb.getDirtyMeta()` returns dirty-entry meta (everything except the
+  `changes` payload) grouped per collection, only for collections with ≥1 dirty
+  record. Mirrors `getDirty()` shape but avoids loading change payloads —
+  useful for counts, timestamps, and indicator UIs.
+- New `I_DexieDb.getDirtyMeta(collection)` returning `DirtyMeta[]`.
+- New exported `DirtyMeta` type (`Omit<DirtyChange, "changes">`) and
+  `DirtyChange` type surfaced from the package entry.
+
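A minimal consumer sketch of the new API. The `DirtyMeta` field list comes from the bundled `DexieDb.getDirtyMeta` implementation later in this diff; the collection names, field types, and the mocked return value are illustrative, and the real method is `async` on a `SyncedDb` instance.

```typescript
// Shape per the changelog: DirtyMeta = Omit<DirtyChange, "changes">,
// i.e. per-entry sync metadata without the change payload.
type DirtyMeta = {
  collection: string;
  id: string;
  baseTs?: unknown;
  baseRev?: number;
  createdAt: number; // timestamp type assumed for illustration
  updatedAt: number; // timestamp type assumed for illustration
};

// Hypothetical result of `await db.getDirtyMeta()` — only collections
// with at least one dirty record appear as keys.
const dirtyMeta: Record<string, readonly DirtyMeta[]> = {
  todos: [
    { collection: "todos", id: "a1", baseRev: 3, createdAt: 1, updatedAt: 5 },
    { collection: "todos", id: "b2", baseRev: 1, createdAt: 2, updatedAt: 6 },
  ],
  // "projects" has no dirty records, so it is absent entirely.
};

// Typical indicator-UI use: total dirty count and most recent local edit,
// without ever loading the change payloads.
const allMetas = Object.values(dirtyMeta).flat();
const totalDirty = allMetas.length;
const lastEditedAt = Math.max(...allMetas.map((m) => m.updatedAt));
console.log(totalDirty, lastEditedAt); // → 2 6
```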
+### Fix: multi-tab divergence when offline edits cross leader/follower
+
+Fixes a bug where, after both tabs edited different records offline and came
+online with leader-first, the leader ended up with stale record content
+carrying the new server `_rev`. Because `resolveConflict` ignores server
+echoes with equal-or-lower `_rev`, the divergence was permanent until page
+reload. Follower-first came out clean; leader-first did not.
+
+Two contributing causes, both fixed:
+
+- `SyncEngine` post-upload in-mem patch no longer spreads stale `getInMemById`
+  result over server-returned `_rev`/`_ts`. In-mem is now fed the freshly
+  patched Dexie item (authoritative content + server meta), so the tab that
+  uploaded on behalf of another tab's dirty write ends up with matching
+  content and `_rev` in-mem.
+- `CrossTabSyncManager.broadcastMetaUpdate` no longer gated by `isLeader()`.
+  Non-leader tabs now broadcast their local writes so the leader's in-mem
+  cache learns of them via the existing shared-Dexie reload path. Reload
+  broadcasts (post-full-sync) remain leader-only.
+
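The first cause can be shown in isolation. This is a sketch of the spread-order pitfall; `staleInMem` and `dexiePatched` are hypothetical stand-ins for the values the SyncEngine juggles, not the library's actual data structures.

```typescript
// Leader's in-mem copy is stale: it never saw the follower's offline edit.
const staleInMem = { _id: "r1", title: "old title", _rev: 3, _ts: 100 };

// Dexie item after the upload response was patched in: authoritative
// content plus the server-assigned meta.
const dexiePatched = { _id: "r1", title: "follower edit", _rev: 4, _ts: 200 };

// Old behavior: spread stale content, then overwrite only _rev/_ts.
// The stale title now carries the NEW rev, so resolveConflict discards
// the server echo (equal-or-lower _rev) and the divergence sticks.
const oldInMem = { ...staleInMem, _rev: dexiePatched._rev, _ts: dexiePatched._ts };

// Fixed behavior: feed in-mem the freshly patched Dexie item directly.
const newInMem = { ...dexiePatched };

console.log(oldInMem.title, oldInMem._rev); // → old title 4
console.log(newInMem.title, newInMem._rev); // → follower edit 4
```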
+### BREAKING: Self-healing sync/reconnect lifecycle
+
+Fixes a class of bugs where the 60s auto-sync scheduler silently died after a
+sync failure or leader flap and was never re-armed until a page reload (see
+`/tmp/cry-synced-db-client-sync-interval-bug.md`). Reproducers spanned 5
+tenants, 62–296 min of dead scheduler with dirty items accumulating.
+
+- Removed `onForcedOffline` callback and `ConnectionManager.goOffline()` method.
+- Added `onSyncFailed(reason)` callback — fires on each sync failure but does
+  **not** mutate online state. The next auto-sync / reconnect tick retries.
+- Added `onlineRetryIntervalMs` config (default 60000, 0 = disable) — periodic
+  `tryGoOnline()` probe while offline but not forcedOffline. Always-live from
+  `init()` to `close()` so recovery does not depend on external signals.
+- `autoSyncTimer` and the new reconnect timer are both always-live from `init()`
+  to `close()`. `setOnline(false)`, `forceOffline(true)`, and sync failure no
+  longer clear timers — only flip flags; each tick is defensive.
+- `SyncedDb.sync()` now opportunistically calls `tryGoOnline()` when internally
+  offline (but not forcedOffline). Syncs while `forceOffline(true)` still throw
+  `Cannot sync while in forced offline mode`.
+- `onBecameLeader` now triggers `tryGoOnline()` when offline and post-init,
+  covering visibility re-claim and leader flap after mobile browser discards
+  state.
+- `syncLock` is now acquired before the early `tryGoOnline()` in
+  `SyncedDb.sync()` so the internal `INITIAL SYNC` that `tryGoOnline()` kicks
+  off is a no-op inside the outer call (avoids double sync).
+
+**Migration for consumers:**
+`onForcedOffline: (reason) => log(reason)` → `onSyncFailed: (reason) => log(reason)`.
+Signature is identical. No other callback changes.
+
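The migration above is a one-field rename in the SyncedDb config. A sketch, with the two config types reduced to just the callback in question (the real `SyncedDbConfig` has many more fields); the logging target is illustrative.

```typescript
// Reduced config shapes: only the renamed callback is shown.
type ConfigBefore = { onForcedOffline?: (reason: string) => void }; // 0.1.140
type ConfigAfter = { onSyncFailed?: (reason: string) => void };     // 0.1.143

const log: string[] = [];

// Before: the name implied the client had been forced offline.
const before: ConfigBefore = {
  onForcedOffline: (reason) => log.push(reason),
};
void before; // unused here; shown only for the rename

// After: same (reason: string) => void signature, new name. Semantically
// it now only reports that one sync attempt failed and will be retried;
// it no longer mutates online state.
const after: ConfigAfter = {
  onSyncFailed: (reason) => log.push(reason),
};

after.onSyncFailed?.("Sync failed: fetch timeout");
console.log(log.length); // → 1
```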
 - Add `refreshInBackground` `QueryOpts` option for `findById` / `findByIds`
 - Stale-while-revalidate: cache-hit returns local result immediately and
   triggers a background fetch that updates Dexie + in-mem through conflict
package/dist/index.js CHANGED
@@ -634,13 +634,14 @@ var CrossTabSyncManager = class {
   }
   /**
    * Broadcast updated IDs to other tabs (debounced).
-   *
+   * Any tab with local writes broadcasts so other tabs refresh their in-mem
+   * from shared Dexie. Otherwise non-leader writes stay invisible to the leader's
+   * in-mem cache, and a later upload patches new _rev onto stale content.
    * While a server sync is in progress, suppresses delta broadcasts and only
    * records which collections were affected (for the post-sync reload broadcast).
    */
   broadcastMetaUpdate(updates) {
     if (!this.metaUpdateChannel) return;
-    if (!this.deps.isLeader()) return;
     if (this.serverSyncInProgress) {
       for (const collection of Object.keys(updates)) {
         this.syncAffectedCollections.add(collection);
@@ -842,46 +843,41 @@
 };
 
 // src/db/managers/ConnectionManager.ts
+var DEFAULT_ONLINE_RETRY_INTERVAL_MS = 6e4;
 var ConnectionManager = class {
   constructor(config) {
     this.online = false;
     this.forcedOffline = false;
+    this.tryGoOnlineInFlight = false;
+    this.closed = false;
+    var _a;
     this.restInterface = config.restInterface;
     this.restTimeoutMs = config.restTimeoutMs;
     this.syncTimeoutMs = config.syncTimeoutMs;
     this.autoSyncIntervalMs = config.autoSyncIntervalMs;
+    this.onlineRetryIntervalMs = (_a = config.onlineRetryIntervalMs) != null ? _a : DEFAULT_ONLINE_RETRY_INTERVAL_MS;
     this.callbacks = config.callbacks;
     this.deps = config.deps;
   }
-  /**
-   * Current online status (considering forcedOffline).
-   */
+  /** Current online status (considering forcedOffline). */
   isOnline() {
     return this.online && !this.forcedOffline;
   }
-  /**
-   * Is forced offline mode active?
-   */
   isForcedOffline() {
     return this.forcedOffline;
   }
-  /**
-   * Can we sync with server?
-   */
   canSync() {
     return this.online && !this.forcedOffline;
   }
-  /**
-   * Can we receive server updates?
-   */
   canReceiveServerUpdates() {
     return !this.forcedOffline;
   }
   /**
-   * Set online status.
-   *
-   *
-   *
+   * Set online status. Does NOT stop any timers.
+   *
+   * - `setOnline(true)` attempts `tryGoOnline` (ping → flip state).
+   * - `setOnline(false)` flips state to offline and fires `onOnlineStatusChange`.
+   * The reconnect timer will continue attempting to come back online.
    */
   async setOnline(online) {
     var _a, _b;
@@ -889,19 +885,19 @@
     if (online) {
       await this.tryGoOnline();
     } else {
-      this.online =
-      this.stopAutoSync();
+      this.online = false;
       (_b = (_a = this.callbacks).onOnlineStatusChange) == null ? void 0 : _b.call(_a, false);
     }
   }
   /**
-   * Force offline mode.
+   * Force offline mode. Does NOT stop timers — reconnect timer will still
+   * check `forcedOffline` and skip while true. When released, `tryGoOnline`
+   * fires immediately to avoid waiting for the next tick.
    */
   forceOffline(forced) {
     if (this.forcedOffline === forced) return;
     this.forcedOffline = forced;
     if (forced) {
-      this.stopAutoSync();
       this.deps.releaseLeaderLock();
     } else {
       this.deps.tryBecomeLeader();
@@ -911,50 +907,86 @@
     }
   }
   /**
-   *
-   *
-   * a tab should remain leader within its window to process WebSocket notifications
-   * even while offline due to network issues.
+   * Attempt to transition from offline to online.
+   * Idempotent, guards against concurrent calls and forcedOffline.
    */
-
-  var _a, _b;
-
-  this.
-  this.
-
-
-
-
+  async tryGoOnline() {
+    var _a, _b, _c;
+    if (this.closed) return;
+    if (this.forcedOffline) return;
+    if (this.tryGoOnlineInFlight) return;
+    this.tryGoOnlineInFlight = true;
+    try {
+      const wasOffline = !this.online;
+      if (wasOffline) {
+        let pingResult;
+        try {
+          pingResult = await this.withSyncTimeout(
+            this.restInterface.ping(),
+            "ping"
+          );
+        } catch (err) {
+          console.warn("tryGoOnline: ping failed:", err);
+          this.online = false;
+          return;
+        }
+        if (!pingResult) {
+          const url = (_a = this.restInterface.endpoint) != null ? _a : "unknown";
+          console.warn(`Ping to ${url} failed - staying offline`);
+          return;
+        }
+        this.online = true;
+        (_c = (_b = this.callbacks).onOnlineStatusChange) == null ? void 0 : _c.call(_b, true);
+        if (!this.deps.isLeader()) {
+          this.deps.tryBecomeLeader();
+        }
+      }
       try {
-        this.
+        await this.deps.sync("INITIAL SYNC");
       } catch (err) {
-        console.
+        console.warn("INITIAL SYNC after tryGoOnline failed (stays online):", err);
       }
+    } finally {
+      this.tryGoOnlineInFlight = false;
     }
   }
   /**
-   * Start
+   * Start both timers. Idempotent. Called by SyncedDb.init().
    */
-
-  this.
-  if (this.
-
+  startTimers() {
+    this.closed = false;
+    if (!this.autoSyncTimer && this.autoSyncIntervalMs && this.autoSyncIntervalMs > 0) {
+      const intervalMs = this.autoSyncIntervalMs;
+      this.autoSyncTimer = setInterval(() => {
+        if (this.forcedOffline || !this.online) return;
+        this.deps.sync(`interval ${intervalMs}ms`).catch((err) => {
+          console.error("Auto-sync failed:", err);
+        });
+      }, intervalMs);
+    }
+    if (!this.reconnectTimer && this.onlineRetryIntervalMs && this.onlineRetryIntervalMs > 0) {
+      const retryMs = this.onlineRetryIntervalMs;
+      this.reconnectTimer = setInterval(() => {
+        if (this.forcedOffline || this.online || this.tryGoOnlineInFlight) return;
+        this.tryGoOnline().catch((err) => {
+          console.error("Reconnect tryGoOnline failed:", err);
+        });
+      }, retryMs);
     }
-    const intervalMs = this.autoSyncIntervalMs;
-    this.autoSyncTimer = setInterval(() => {
-      this.deps.sync(`interval ${intervalMs}ms`).catch((err) => {
-        console.error("Auto-sync failed:", err);
-      });
-    }, intervalMs);
   }
   /**
-   * Stop
+   * Stop both timers. Called by SyncedDb.close().
    */
-
+  stopTimers() {
+    this.closed = true;
     if (this.autoSyncTimer) {
       clearInterval(this.autoSyncTimer);
       this.autoSyncTimer = void 0;
     }
+    if (this.reconnectTimer) {
+      clearInterval(this.reconnectTimer);
+      this.reconnectTimer = void 0;
+    }
   }
   /**
    * Ping server.
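The always-live timers in the hunk above stay armed and make each tick defensive instead of tearing timers down on state changes. A sketch of just the two guard conditions, with the interval bodies extracted into plain methods (`Timers`, `syncRuns`, and `probeRuns` are illustrative names) so they can be exercised without real timers:

```typescript
// Minimal stand-in for ConnectionManager's two interval bodies.
class Timers {
  online = false;
  forcedOffline = false;
  tryGoOnlineInFlight = false;
  syncRuns = 0;
  probeRuns = 0;

  // autoSyncTimer tick: only sync when truly online.
  autoSyncTick(): void {
    if (this.forcedOffline || !this.online) return;
    this.syncRuns++;
  }

  // reconnectTimer tick: only probe when offline, not forced offline,
  // and no probe is already in flight.
  reconnectTick(): void {
    if (this.forcedOffline || this.online || this.tryGoOnlineInFlight) return;
    this.probeRuns++;
  }
}

const t = new Timers();
t.reconnectTick();   // offline → probe runs
t.online = true;
t.autoSyncTick();    // online → sync runs
t.reconnectTick();   // online → probe skipped
t.forcedOffline = true;
t.autoSyncTick();    // forced offline → sync skipped; the timer stays armed
console.log(t.syncRuns, t.probeRuns); // → 1 1
```

Because the flags are only flipped and never clear the intervals, a failed sync can no longer leave the scheduler permanently dead.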
@@ -1025,6 +1057,19 @@ var ConnectionManager = class {
       }
     }
   }
+  /**
+   * Notify consumers of a sync failure. Does not mutate state.
+   * Called from SyncEngine via deps.onSyncFailed wiring.
+   */
+  callOnSyncFailed(reason) {
+    if (this.callbacks.onSyncFailed) {
+      try {
+        this.callbacks.onSyncFailed(reason);
+      } catch (err) {
+        console.error("onSyncFailed callback failed:", err);
+      }
+    }
+  }
   /**
    * Call onWsConnect callback.
    */
@@ -1083,43 +1128,6 @@ var ConnectionManager = class {
   getOnWsReconnect() {
     return this.callbacks.onWsReconnect;
   }
-  // ============================================================
-  // Private Methods
-  // ============================================================
-  async tryGoOnline() {
-    var _a, _b, _c, _d, _e;
-    if (this.forcedOffline) {
-      return;
-    }
-    try {
-      const pingResult = await this.withSyncTimeout(
-        this.restInterface.ping(),
-        "ping"
-      );
-      if (!pingResult) {
-        const url = (_a = this.restInterface.endpoint) != null ? _a : "unknown";
-        console.warn(`Ping to ${url} failed - staying offline`);
-        return;
-      }
-      const wasOffline = !this.online;
-      this.online = true;
-      if (wasOffline) {
-        (_c = (_b = this.callbacks).onOnlineStatusChange) == null ? void 0 : _c.call(_b, true);
-        if (!this.deps.isLeader()) {
-          this.deps.tryBecomeLeader();
-        }
-      }
-      this.startAutoSync();
-      await this.deps.sync("INITIAL SYNC");
-    } catch (err) {
-      console.warn("Failed to go online (ping failed or timed out):", err);
-      const wasOnline = this.online;
-      this.online = false;
-      if (wasOnline) {
-        (_e = (_d = this.callbacks).onOnlineStatusChange) == null ? void 0 : _e.call(_d, false);
-      }
-    }
-  }
 };
 
 // node_modules/superjson/dist/double-indexed-kv.js
@@ -2541,8 +2549,8 @@ var _SyncEngine = class _SyncEngine {
       });
     } catch (err) {
       const reason = err instanceof Error ? err.message : String(err);
-      console.error("Sync failed
-      this.deps.
+      console.error("Sync failed:", err);
+      this.deps.onSyncFailed(`Sync failed: ${reason}`);
       this.callOnSyncEnd({
         durationMs: Date.now() - startTime,
         receivedCount,
@@ -2661,13 +2669,7 @@ var _SyncEngine = class _SyncEngine {
         dexieDeleteIds.push(entity._id);
       } else {
         dexieSaveBatch.push(dexieItem);
-
-        if (inMemItem) {
-          inMemUpdateBatch.push(__spreadProps(__spreadValues({}, inMemItem), {
-            _rev: entity._rev,
-            _ts: entity._ts
-          }));
-        }
+        inMemUpdateBatch.push(dexieItem);
       }
     }
   }
@@ -3511,7 +3513,20 @@ var _SyncedDb = class _SyncedDb {
       tenant: this.tenant,
       windowId,
       callbacks: {
-        onBecameLeader:
+        onBecameLeader: () => {
+          if (this.initialized && !this.connectionManager.isOnline() && !this.connectionManager.isForcedOffline()) {
+            this.connectionManager.tryGoOnline().catch((err) => {
+              console.error("tryGoOnline on becameLeader failed:", err);
+            });
+          }
+          if (config.onBecameLeader) {
+            try {
+              config.onBecameLeader();
+            } catch (err) {
+              console.error("onBecameLeader callback failed:", err);
+            }
+          }
+        },
         onLostLeadership: config.onLostLeadership,
         onInfrastructureError: config.onInfrastructureError ? (type, message, error) => {
           config.onInfrastructureError({
@@ -3556,9 +3571,10 @@ var _SyncedDb = class _SyncedDb {
       restTimeoutMs: (_h = config.restTimeoutMs) != null ? _h : 9e4,
       syncTimeoutMs: (_i = config.syncTimeoutMs) != null ? _i : 12e4,
       autoSyncIntervalMs: config.autoSyncIntervalMs,
+      onlineRetryIntervalMs: config.onlineRetryIntervalMs,
       callbacks: {
         onOnlineStatusChange: config.onOnlineStatusChange,
-
+        onSyncFailed: config.onSyncFailed,
         onWsConnect: config.onWsConnect,
         onWsDisconnect: config.onWsDisconnect,
         onWsReconnect: config.onWsReconnect,
@@ -3628,7 +3644,7 @@ var _SyncedDb = class _SyncedDb {
       },
       getInMemById: (collection, id) => this.inMemDb.getById(collection, id),
       withSyncTimeout: (promise, operation) => this.connectionManager.withSyncTimeout(promise, operation),
-
+      onSyncFailed: (reason) => this.connectionManager.callOnSyncFailed(reason),
       flushAllPendingChanges: () => this.pendingChanges.flushAll(),
       cancelRestUploadTimer: () => this.pendingChanges.cancelRestUploadTimer(),
       awaitRestUpload: () => this.pendingChanges.awaitRestUpload(),
@@ -3769,6 +3785,7 @@ var _SyncedDb = class _SyncedDb {
     this.crossTabSync.init();
     (_a = this.wakeSync) == null ? void 0 : _a.init();
     (_b = this.networkStatus) == null ? void 0 : _b.init();
+    this.connectionManager.startTimers();
     if (this.serverUpdateNotifier) {
       if (this.serverUpdateNotifier.setCallbacks) {
         const cleanup = this.serverUpdateNotifier.setCallbacks({
@@ -3875,7 +3892,7 @@ var _SyncedDb = class _SyncedDb {
     var _a, _b;
     this.leaderElection.setClosing(true);
     this.pendingChanges.cancelRestUploadTimer();
-    this.connectionManager.
+    this.connectionManager.stopTimers();
     await this.pendingChanges.flushAll();
     (_a = this.networkStatus) == null ? void 0 : _a.dispose();
     (_b = this.wakeSync) == null ? void 0 : _b.dispose();
@@ -4423,39 +4440,45 @@ var _SyncedDb = class _SyncedDb {
   }
   // ==================== Sync Operations ====================
   async sync(calledFrom) {
-    if (!this.connectionManager.canSync()) {
-      if (this.connectionManager.isForcedOffline()) {
-        throw new Error("Cannot sync while in forced offline mode");
-      }
-      return;
-    }
     if (this.syncLock) return;
     this.syncLock = true;
-    this.syncing = true;
-    this.crossTabSync.startServerSync();
     try {
-
-
-
-
-
-
+      if (!this.connectionManager.isOnline() && !this.connectionManager.isForcedOffline()) {
+        await this.connectionManager.tryGoOnline();
+      }
+      if (!this.connectionManager.canSync()) {
+        if (this.connectionManager.isForcedOffline()) {
+          throw new Error("Cannot sync while in forced offline mode");
+        }
+        return;
+      }
+      this.syncing = true;
+      this.crossTabSync.startServerSync();
+      try {
+        await this.syncEngine.sync(calledFrom);
+        if (!this.syncOnlyCollections) {
+          const now = /* @__PURE__ */ new Date();
+          if (!this._lastFullSyncDate) {
+            this._setLastInitialSync(now).catch((err) => {
+              console.error("Failed to persist lastInitialSync:", err);
+            });
+          }
+          this._setLastFullSync(now).catch((err) => {
+            console.error("Failed to persist lastFullSync:", err);
           });
         }
-
-
-
+      } finally {
+        this.syncing = false;
+        this.crossTabSync.endServerSync();
+        await this.processQueuedWsUpdates();
+        try {
+          await this.maybeAutoEvict();
+        } catch (err) {
+          console.error("Auto-eviction failed:", err);
+        }
       }
     } finally {
-      this.syncing = false;
       this.syncLock = false;
-      this.crossTabSync.endServerSync();
-      await this.processQueuedWsUpdates();
-      try {
-        await this.maybeAutoEvict();
-      } catch (err) {
-        console.error("Auto-eviction failed:", err);
-      }
     }
   }
   async processQueuedWsUpdates() {
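The hunk above moves the `syncLock` acquisition ahead of the `tryGoOnline()` probe so that the `INITIAL SYNC` the probe kicks off re-enters `sync()`, sees the lock, and returns without doing a second engine pass. A reduced sketch of that reentrancy (synchronous for brevity, whereas the real methods are async; `Db` and `engineRuns` are illustrative names):

```typescript
// Sketch of the lock-before-probe ordering from SyncedDb.sync().
class Db {
  private syncLock = false;
  engineRuns = 0;

  sync(calledFrom: string): void {
    if (this.syncLock) return;          // inner "INITIAL SYNC" no-ops here
    this.syncLock = true;
    try {
      if (calledFrom === "manual") {
        this.tryGoOnline();             // may itself call sync("INITIAL SYNC")
      }
      this.engineRuns++;                // the actual engine sync, runs once
    } finally {
      this.syncLock = false;
    }
  }

  private tryGoOnline(): void {
    this.sync("INITIAL SYNC");          // reentrant call, swallowed by the lock
  }
}

const db = new Db();
db.sync("manual");
console.log(db.engineRuns); // → 1 (no double sync)
```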
@@ -4522,6 +4545,16 @@ var _SyncedDb = class _SyncedDb {
     }
     return result;
   }
+  async getDirtyMeta() {
+    const result = {};
+    for (const [collectionName] of this.collections) {
+      const metas = await this.dexieDb.getDirtyMeta(collectionName);
+      if (metas.length > 0) {
+        result[collectionName] = metas;
+      }
+    }
+    return result;
+  }
   // ==================== Data Deletion ====================
   async dropCollection(collection, force = false) {
     this.assertCollection(collection);
@@ -5142,6 +5175,21 @@ var DexieDb = class extends Dexie {
     }
     return result;
   }
+  async getDirtyMeta(collection) {
+    const dirtyEntries = await this.dirtyChanges.where("[collection+id]").between([collection, Dexie.minKey], [collection, Dexie.maxKey]).toArray();
+    const result = [];
+    for (const entry of dirtyEntries) {
+      result.push({
+        collection: entry.collection,
+        id: entry.id,
+        baseTs: entry.baseTs,
+        baseRev: entry.baseRev,
+        createdAt: entry.createdAt,
+        updatedAt: entry.updatedAt
+      });
+    }
+    return result;
+  }
   async addDirtyChange(collection, id, changes, baseMeta) {
     const stringId = this.idToString(id);
     const existing = await this.dirtyChanges.get([collection, stringId]);
package/dist/src/db/DexieDb.d.ts CHANGED
@@ -1,5 +1,5 @@
 import Dexie from "dexie";
-import type { DirtyChange, I_DexieDb, SyncMeta } from "../types/I_DexieDb";
+import type { DirtyChange, DirtyMeta, I_DexieDb, SyncMeta } from "../types/I_DexieDb";
 import type { CollectionConfig } from "../types/CollectionConfig";
 import type { Id, LocalDbEntity } from "../types/DbEntity";
 /**
@@ -31,6 +31,7 @@ export declare class DexieDb extends Dexie implements I_DexieDb {
     forEachBatch<T extends LocalDbEntity>(collection: string, batchSize: number, callback: (items: T[]) => Promise<void>): Promise<void>;
     count(collection: string): Promise<number>;
     getDirty<T extends LocalDbEntity>(collection: string): Promise<Partial<T>[]>;
+    getDirtyMeta(collection: string): Promise<DirtyMeta[]>;
     addDirtyChange(collection: string, id: Id, changes: Record<string, any>, baseMeta?: {
         _ts?: any;
         _rev?: number;
package/dist/src/db/SyncedDb.d.ts CHANGED
@@ -1,6 +1,6 @@
 import type { AggregateOptions } from "mongodb";
 import type { I_SyncedDb, SyncedDbConfig, WsNotificationInfo, EvictionInfo, EvictionCollectionInfo } from "../types/I_SyncedDb";
-import type { MetaUpdateBroadcast } from "../types/I_DexieDb";
+import type { DirtyMeta, MetaUpdateBroadcast } from "../types/I_DexieDb";
 import type { QuerySpec, QueryOpts, UpdateSpec, InsertSpec, BatchSpec } from "../types/I_RestInterface";
 import type { Id, DbEntity } from "../types/DbEntity";
 /**
@@ -161,6 +161,7 @@ export declare class SyncedDb implements I_SyncedDb {
     getOnWsNotification(): ((info: WsNotificationInfo) => void) | undefined;
     getOnWakeSync(): ((info: import("./types/managers").WakeSyncInfo) => void) | undefined;
     getDirty<T extends DbEntity>(): Promise<Readonly<Record<string, readonly T[]>>>;
+    getDirtyMeta(): Promise<Readonly<Record<string, readonly DirtyMeta[]>>>;
     dropCollection(collection: string, force?: boolean): Promise<void>;
     dropDatabase(force?: boolean): Promise<void>;
     /**
package/dist/src/db/managers/ConnectionManager.d.ts CHANGED
@@ -1,11 +1,19 @@
 /**
- * ConnectionManager - Manages online/offline state
+ * ConnectionManager - Manages online/offline state, auto-sync and reconnect.
  *
- *
- * -
- *
- * -
- *
+ * Invariants:
+ * - `autoSyncTimer` and `reconnectTimer` are always-live from `startTimers()`
+ *   (called by SyncedDb during init) until `stopTimers()` (called by close()).
+ * - Neither timer is cleared by `setOnline(false)`, `forceOffline(true)`, or
+ *   sync failure — state changes only flip flags. Each tick is defensive and
+ *   no-ops when inapplicable.
+ * - `autoSyncTimer` tick: run `sync()` iff `online && !forcedOffline`.
+ * - `reconnectTimer` tick: call `tryGoOnline()` iff `!online && !forcedOffline`
+ *   and no `tryGoOnline` is in flight.
+ * - `tryGoOnline()` pings the server; on success flips `online=true` (next
+ *   auto-sync tick then runs). Historical `goOffline(reason)` method and
+ *   `onForcedOffline` callback are removed — use `onSyncFailed` (logging-only
+ *   callback fired from SyncEngine) or explicit `forceOffline(true)`.
 */
 import type { I_ConnectionManager, ConnectionManagerConfig } from "../types/managers";
 export declare class ConnectionManager implements I_ConnectionManager {
@@ -13,54 +21,48 @@ export declare class ConnectionManager implements I_ConnectionManager {
     private readonly restTimeoutMs;
     private readonly syncTimeoutMs;
     private readonly autoSyncIntervalMs?;
+    private readonly onlineRetryIntervalMs;
     private readonly callbacks;
     private readonly deps;
     private online;
     private forcedOffline;
     private autoSyncTimer?;
+    private reconnectTimer?;
+    private tryGoOnlineInFlight;
+    private closed;
     constructor(config: ConnectionManagerConfig);
-    /**
-     * Current online status (considering forcedOffline).
-     */
+    /** Current online status (considering forcedOffline). */
    isOnline(): boolean;
-    /**
-     * Is forced offline mode active?
-     */
    isForcedOffline(): boolean;
-    /**
-     * Can we sync with server?
-     */
    canSync(): boolean;
-    /**
-     * Can we receive server updates?
-     */
    canReceiveServerUpdates(): boolean;
    /**
-     * Set online status.
-     *
-     *
-     *
+     * Set online status. Does NOT stop any timers.
+     *
+     * - `setOnline(true)` attempts `tryGoOnline` (ping → flip state).
+     * - `setOnline(false)` flips state to offline and fires `onOnlineStatusChange`.
+     * The reconnect timer will continue attempting to come back online.
     */
    setOnline(online: boolean): Promise<void>;
    /**
-     * Force offline mode.
+     * Force offline mode. Does NOT stop timers — reconnect timer will still
+     * check `forcedOffline` and skip while true. When released, `tryGoOnline`
+     * fires immediately to avoid waiting for the next tick.
     */
    forceOffline(forced: boolean): void;
    /**
-     *
-     *
-     * a tab should remain leader within its window to process WebSocket notifications
-     * even while offline due to network issues.
+     * Attempt to transition from offline to online.
+     * Idempotent, guards against concurrent calls and forcedOffline.
     */
-
+    tryGoOnline(): Promise<void>;
    /**
-     * Start
+     * Start both timers. Idempotent. Called by SyncedDb.init().
     */
-
+    startTimers(): void;
    /**
-     * Stop
+     * Stop both timers. Called by SyncedDb.close().
     */
-
+    stopTimers(): void;
    /**
     * Ping server.
     */
@@ -77,6 +79,11 @@ export declare class ConnectionManager implements I_ConnectionManager {
     * Report infrastructure error.
     */
    reportInfrastructureError(type: string, message: string, error?: Error): void;
+    /**
+     * Notify consumers of a sync failure. Does not mutate state.
+     * Called from SyncEngine via deps.onSyncFailed wiring.
+     */
+    callOnSyncFailed(reason: string): void;
    /**
     * Call onWsConnect callback.
     */
@@ -101,5 +108,4 @@ export declare class ConnectionManager implements I_ConnectionManager {
     * Get onWsReconnect callback.
     */
    getOnWsReconnect(): ((attempt: number) => void) | undefined;
-    private tryGoOnline;
 }
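The three state predicates declared above reduce to pure functions of the two flags, restating the formulas visible in the bundled `dist/index.js` earlier in this diff (the standalone-function form here is for illustration; in the package they are methods):

```typescript
// Formulas from the bundle: forcedOffline dominates everything.
const isOnline = (online: boolean, forcedOffline: boolean) =>
  online && !forcedOffline;
const canSync = (online: boolean, forcedOffline: boolean) =>
  online && !forcedOffline;
const canReceiveServerUpdates = (_online: boolean, forcedOffline: boolean) =>
  !forcedOffline;

// Even with online=true, forced offline blocks reporting online and syncing...
console.log(isOnline(true, true), canSync(true, true)); // → false false
// ...while server updates are accepted whenever not forced offline, even
// while internally offline (e.g. between reconnect-timer probes).
console.log(canReceiveServerUpdates(false, false)); // → true
```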
package/dist/src/db/managers/CrossTabSyncManager.d.ts
@@ -1,8 +1,9 @@
 /**
  * CrossTabSyncManager - Manages cross-tab synchronization via BroadcastChannel.
  *
- *
- *
+ * Any tab with local writes (leader or follower) broadcasts the IDs of updated
+ * records so other tabs refresh their in-memory state from shared Dexie.
+ * Reload broadcasts (post-full-sync) remain leader-only.
  */
 import type { MetaUpdateBroadcast } from "../../types/I_DexieDb";
 import type { I_CrossTabSyncManager, CrossTabSyncConfig } from "../types/managers";
@@ -31,7 +32,9 @@ export declare class CrossTabSyncManager implements I_CrossTabSyncManager {
     init(): void;
     /**
      * Broadcast updated IDs to other tabs (debounced).
-     *
+     * Any tab with local writes broadcasts so other tabs refresh their in-mem
+     * from shared Dexie. Otherwise non-leader writes stay invisible to the leader's
+     * in-mem cache, and a later upload patches new _rev onto stale content.
      * While a server sync is in progress, suppresses delta broadcasts and only
      * records which collections were affected (for the post-sync reload broadcast).
      */
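The debounced "broadcast updated IDs" behavior described in the comment above can be sketched as follows. This is an illustrative reimplementation, not the package's actual code; the class and field names are assumptions.

```typescript
// Illustrative sketch of debounced cross-tab ID broadcasting (not the package's
// actual implementation). Accumulates updated record IDs per collection and
// flushes them in a single message per debounce window.
type FlushFn = (collection: string, ids: string[]) => void;

class DebouncedIdBroadcaster {
  private pending = new Map<string, Set<string>>();
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(private flush: FlushFn, private debounceMs = 100) {}

  notifyUpdated(collection: string, id: string): void {
    if (!this.pending.has(collection)) this.pending.set(collection, new Set());
    this.pending.get(collection)!.add(id);
    // Restarting the timer coalesces a burst of writes into one broadcast.
    if (this.timer) clearTimeout(this.timer);
    this.timer = setTimeout(() => {
      for (const [coll, ids] of this.pending) this.flush(coll, [...ids]);
      this.pending.clear();
      this.timer = null;
    }, this.debounceMs);
  }
}
```

In the real manager the flush step would post over a `BroadcastChannel`; here it is injected so the coalescing logic stands alone.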
package/dist/src/db/types/managers.d.ts
@@ -91,7 +91,10 @@ export interface I_CrossTabSyncManager {
 }
 export interface ConnectionCallbacks {
     onOnlineStatusChange?: (online: boolean) => void;
-
+    /**
+     * Fired on sync failure. Does NOT mutate online state. For logging only.
+     */
+    onSyncFailed?: (reason: string) => void;
     onWsConnect?: () => void;
     onWsDisconnect?: (reason: string) => void;
     onWsReconnect?: (attempt: number) => void;
@@ -112,6 +115,7 @@ export interface ConnectionManagerConfig {
     restTimeoutMs: number;
     syncTimeoutMs: number;
     autoSyncIntervalMs?: number;
+    onlineRetryIntervalMs?: number;
     callbacks: ConnectionCallbacks;
     deps: ConnectionManagerDeps;
 }
@@ -128,12 +132,15 @@ export interface I_ConnectionManager {
     setOnline(online: boolean): Promise<void>;
     /** Force offline mode. */
     forceOffline(forced: boolean): void;
-    /**
-
-
-
-
-
+    /**
+     * Attempt to transition from internal-offline to online (ping + start timers
+     * if successful). No-op if already online, forcedOffline, or a try is in flight.
+     */
+    tryGoOnline(): Promise<void>;
+    /** Start both auto-sync and reconnect timers (idempotent). */
+    startTimers(): void;
+    /** Stop both auto-sync and reconnect timers. Called from close(). */
+    stopTimers(): void;
     /** Ping server. */
     ping(timeoutMs?: number): Promise<boolean>;
     /** Wrap promise with sync timeout. */
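The "no-op if already online, forcedOffline, or a try is in flight" contract of `tryGoOnline()` can be sketched with a simple in-flight guard. The surrounding class is hypothetical; only the guard pattern is the point.

```typescript
// Hypothetical sketch of the tryGoOnline() guard described above.
// ping() is injected so the transition logic stays testable.
class ConnectionSketch {
  online = false;
  forcedOffline = false;
  private inFlight: Promise<void> | null = null;

  constructor(private ping: () => Promise<boolean>) {}

  tryGoOnline(): Promise<void> {
    // Idempotent: bail out when already online, forced offline, or mid-attempt.
    if (this.online || this.forcedOffline) return Promise.resolve();
    if (this.inFlight) return this.inFlight;
    this.inFlight = this.ping()
      .then((ok) => {
        if (ok) this.online = true; // the real class would also start timers here
      })
      .finally(() => {
        this.inFlight = null;
      });
    return this.inFlight;
  }
}
```

Concurrent callers share the same pending promise, so a burst of calls produces exactly one ping.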
@@ -256,7 +263,8 @@ export interface SyncEngineDeps {
     writeToInMemBatch: <T extends DbEntity>(collection: string, items: T[], operation: "upsert" | "delete") => void;
     getInMemById: <T extends DbEntity>(collection: string, id: Id) => T | undefined;
     withSyncTimeout: <T>(promise: Promise<T>, operation: string) => Promise<T>;
-
+    /** Notify consumers that a sync cycle failed. Does not mutate online state. */
+    onSyncFailed: (reason: string) => void;
     flushAllPendingChanges: () => Promise<void>;
     cancelRestUploadTimer: () => void;
     awaitRestUpload: () => Promise<void>;
package/dist/src/types/I_DexieDb.d.ts
@@ -27,6 +27,12 @@ export interface DirtyChange {
     /** When last change was accumulated */
     updatedAt: number;
 }
+/**
+ * Meta fields of a DirtyChange entry, without the `changes` payload.
+ * Used by `getDirtyMeta` for lightweight dirty-state inspection
+ * (counts, timestamps) without loading change payloads into memory.
+ */
+export type DirtyMeta = Omit<DirtyChange, "changes">;
 /** Shared fields for all cross-tab broadcast messages */
 interface BroadcastBase {
     /** Unique ID of the SyncedDb instance that sent this broadcast */
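The `Omit` relationship above means a `DirtyMeta` entry carries everything but the payload. A minimal sketch of the idea — the field set besides `updatedAt` is assumed for illustration, only `updatedAt` and `changes` appear in the diff:

```typescript
// Illustrative mirror of the package's types (field names other than updatedAt
// and changes are assumptions). DirtyMeta drops only the heavy `changes` payload.
interface DirtyChange {
  _id: string;
  changes: Record<string, unknown>; // potentially large; excluded from meta
  updatedAt: number;
}
type DirtyMeta = Omit<DirtyChange, "changes">;

// Lightweight inspection: count dirty entries per collection without loading
// any change payloads into memory.
function dirtyCounts(meta: Record<string, DirtyMeta[]>): Record<string, number> {
  return Object.fromEntries(
    Object.entries(meta).map(([coll, entries]) => [coll, entries.length])
  );
}
```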
@@ -84,6 +90,8 @@ export interface I_DexieDb {
     forEachBatch<T extends LocalDbEntity>(collection: string, batchSize: number, callback: (items: T[]) => Promise<void>): Promise<void>;
     /** Returns all dirty objects (with local changes) - returns only changed fields + _id + metadata */
     getDirty<T extends LocalDbEntity>(collection: string): Promise<Partial<T>[]>;
+    /** Returns the meta data of all dirty entries for a collection (without the `changes` payload) */
+    getDirtyMeta(collection: string): Promise<DirtyMeta[]>;
     /** Add or accumulate changes for a record */
     addDirtyChange(collection: string, id: Id, changes: Record<string, any>, baseMeta?: {
         _ts?: any;
package/dist/src/types/I_SyncedDb.d.ts
@@ -1,7 +1,7 @@
 import type { AggregateOptions } from "mongodb";
 import type { Id, DbEntity, LocalDbEntity } from "./DbEntity";
 import type { QuerySpec, QueryOpts, UpdateSpec, InsertSpec, BatchSpec, I_RestInterface, CollectionUpdateRequest, CollectionUpdateResult, GetNewerSpec } from "./I_RestInterface";
-import type { I_DexieDb } from "./I_DexieDb";
+import type { DirtyMeta, I_DexieDb } from "./I_DexieDb";
 import type { I_InMemDb } from "./I_InMemDb";
 import type { I_ServerUpdateNotifier } from "./I_ServerUpdateNotifier";
 import type { WakeSyncInfo, NetworkStatusChangeInfo } from "../db/types/managers";
@@ -317,8 +317,13 @@ export interface SyncedDbConfig {
     debounceDexieWritesMs?: number;
     /** Debounce time for sending to REST in ms (default: 100) - after a successful Dexie write */
     debounceRestWritesMs?: number;
-    /**
-
+    /**
+     * Callback fired on each sync failure. Unlike the removed `onForcedOffline`,
+     * this does NOT mutate online state — the library keeps trying on the next
+     * auto-sync tick. Use this callback for logging/telemetry only. To actually
+     * force the database offline, call {@link forceOffline}.
+     */
+    onSyncFailed?: (reason: string) => void;
     /** Callback fired once during init() when the IndexedDB database was created fresh (first ever open). */
     onDatabaseCreated?: () => void;
     /** Callback at the start of each sync cycle. initialSync=true if no full sync has completed yet. */
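The `onSyncFailed` contract — observe, never mutate — can be sketched as a log-only handler. The config shape below is reduced to the one field under discussion; everything else in `SyncedDbConfig` is omitted.

```typescript
// Sketch: onSyncFailed as logging/telemetry only, per the contract above.
// This reduced interface is illustrative, not the package's full SyncedDbConfig.
interface SyncFailureConfigSketch {
  onSyncFailed?: (reason: string) => void;
}

// A handler that records failure reasons but never flips any online flag itself;
// the library keeps retrying on the next auto-sync tick regardless.
function makeFailureRecorder() {
  const reasons: string[] = [];
  const config: SyncFailureConfigSketch = {
    onSyncFailed: (reason) => reasons.push(reason), // log only; no state mutation
  };
  return { config, reasons };
}
```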
@@ -381,8 +386,19 @@
      * If provided, sync() is called automatically at this interval while online.
      * Auto-sync runs only while online and will not interfere with explicit sync() calls
      * (it uses the same syncLock mechanism).
+     *
+     * The timer is started in init() and stopped in close(). While offline or
+     * forcedOffline, ticks are no-ops (the timer is not killed; the next tick
+     * tries again). Self-healing: a sync error does not stop the timer.
      */
     autoSyncIntervalMs?: number;
+    /**
+     * Interval, in ms, for periodically attempting to switch from offline to online.
+     * Always-alive timer: from init() to close(). If offline and not forcedOffline,
+     * each tick calls `tryGoOnline()` (ping → if successful, we go online and the
+     * next auto-sync tick will synchronize). Default: 60000 (60 s). 0/undefined disables it.
+     */
+    onlineRetryIntervalMs?: number;
     /** Callback when WebSocket connects */
     onWsConnect?: () => void;
     /** Callback when WebSocket disconnects */
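The always-alive retry timer described for `onlineRetryIntervalMs` can be sketched as an interval whose ticks are no-ops unless the state calls for a retry. Function and parameter names here are illustrative assumptions.

```typescript
// Illustrative always-alive retry timer: ticks no-op unless offline and not
// forcedOffline; the timer itself never stops on failure.
function startOnlineRetryTimer(
  state: { online: boolean; forcedOffline: boolean },
  tryGoOnline: () => Promise<void>,
  intervalMs: number
): () => void {
  if (!intervalMs) return () => {}; // 0/undefined disables the timer
  const id = setInterval(() => {
    // Each tick re-checks state; a failed attempt just waits for the next tick.
    if (!state.online && !state.forcedOffline) void tryGoOnline();
  }, intervalMs);
  return () => clearInterval(id); // caller stops it in close()
}
```

Keeping the timer alive and gating each tick (instead of starting/stopping the timer on state changes) is what makes the behavior self-healing: no code path can forget to restart it.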
@@ -391,7 +407,7 @@ export interface SyncedDbConfig {
     onWsReconnect?: (attempt: number) => void;
     /** Callback when a WebSocket notification is received */
     onWsNotification?: (info: WsNotificationInfo) => void;
-    /** Callback when online status changes (after ping success/failure in tryGoOnline
+    /** Callback when online status changes (after ping success/failure in tryGoOnline) */
     onOnlineStatusChange?: (online: boolean) => void;
     /** Debounce interval for cross-tab sync broadcasts in ms (default: 100) */
     crossTabSyncDebounceMs?: number;
@@ -684,6 +700,11 @@ export interface I_SyncedDb {
     getDebounceRestWritesMs(): number;
     /** Returns all dirty objects from all collections */
     getDirty<T extends DbEntity>(): Promise<Readonly<Record<string, readonly T[]>>>;
+    /**
+     * Returns the meta data of dirty entries (without the `changes` payload),
+     * grouped by collection, for collections with at least one dirty record.
+     * Collections without dirty entries are not included.
+     */
+    getDirtyMeta(): Promise<Readonly<Record<string, readonly DirtyMeta[]>>>;
     /**
      * Drops a collection, ensuring no data loss.
      * - Throws if offline or forcedOffline (unless force=true)
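A hypothetical consumer of the `getDirtyMeta()` shape above, for the "indicator UI" use case the changelog mentions. The `db` handle and badge wiring in the usage comment are assumptions; only the pure summarizing helper is shown in full.

```typescript
// Sketch of consuming a getDirtyMeta()-shaped result for an "unsaved changes"
// indicator. Field names besides updatedAt are assumptions.
type DirtyMetaSketch = { _id: string; updatedAt: number };

function summarizeDirty(
  meta: Readonly<Record<string, readonly DirtyMetaSketch[]>>
): { total: number; lastChangeAt: number } {
  let total = 0;
  let lastChangeAt = 0;
  for (const entries of Object.values(meta)) {
    total += entries.length;
    for (const e of entries) lastChangeAt = Math.max(lastChangeAt, e.updatedAt);
  }
  return { total, lastChangeAt };
}

// Usage (assuming `db` is an initialized SyncedDb and `badge` a DOM element):
//   const summary = summarizeDirty(await db.getDirtyMeta());
//   badge.textContent = summary.total > 0 ? String(summary.total) : "";
```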
package/dist/src/types/index.d.ts
@@ -2,7 +2,7 @@ export type { Id, Entity, IdOrEntity, DbEntity, LocalDbEntity } from "./DbEntity
 export type { PublishableOperation, PublishRevsPayloadInsert, PublishRevsPayloadUpdate, PublishRevsPayloadDelete, PublishRevsPayloadUpdateMany, PublishRevsPayloadDeleteMany, PublishRevsPayloadBatchItem, PublishRevsPayloadBatch, PublishRevsPayload, PublishRevsSpec, PublishDataPayloadBase, PublishDataPayloadInsert, PublishDataPayloadUpdate, PublishDataPayloadDelete, PublishDataPayloadBatch, PublishDataPayload, PublishDataSpec, PublishSpec, } from "./PublishRevsPayload";
 export type { Obj, QuerySpec, Projection, QueryOpts, KeyOf, InsertKeyOf, InsertSpec, UpdateSpec, BatchSpec, UpsertOptions, GetNewerSpec, I_RestInterface as RestInterface, } from "./I_RestInterface";
 export type { I_InMemDb as InMemDb } from "./I_InMemDb";
-export type { I_DexieDb as DexieDb, SyncMeta } from "./I_DexieDb";
+export type { I_DexieDb as DexieDb, SyncMeta, DirtyChange, DirtyMeta } from "./I_DexieDb";
 export type { I_ServerUpdateNotifier as ServerUpdateNotifier, ServerUpdateCallback, ServerUpdateNotifierCallbacks } from "./I_ServerUpdateNotifier";
 export type { I_SyncedDb as SyncedDb, SyncedDbConfig, CollectionConfig, CollectionSyncConfig, SyncInfo, ServerWriteRequestInfo, ServerWriteResultInfo, FindNewerManyCallInfo, FindNewerManyResultInfo, DexieWriteRequestInfo, DexieWriteResultInfo, LocalstorageWriteResultInfo, WsNotificationInfo, InfrastructureErrorType, InfrastructureErrorInfo, ConflictSource, ConflictResolutionReport, CrossTabSyncInfo, EvictionInfo, EvictionCollectionInfo, } from "./I_SyncedDb";
 export type { NetworkStatusChangeInfo } from "../db/types/managers";