@dboio/cli 0.20.0 → 0.20.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@dboio/cli",
- "version": "0.20.0",
+ "version": "0.20.3",
  "description": "CLI for the DBO.io framework",
  "type": "module",
  "bin": {
@@ -0,0 +1,135 @@
+ # Dual-Platform Maintenance Strategy
+
+ ## Overview
+
+ The dbo.io API runs on two parallel codebases during the .NET Core migration period:
+
+ - **`src/webapp-framework/`** — ASP.NET 4.7.2 (Windows), the current production platform
+ - **`src/webapp-core/`** — .NET Core 8 (Linux), functionally tested, being prepared for production
+
+ Both share identical `App_Code/` directories containing all business logic (~249 .cs files). Going forward, new feature work must be applied to both codebases until webapp-framework is retired.
+
+ ## File Structure
+
+ ### Shared Code (`App_Code/dboio/`)
+
+ These directories exist in both `webapp-framework` and `webapp-core` and must stay in sync for feature changes:
+
+ - `Data/` — entities, data sources, API add-ons, app operations, output/query system
+ - `Data/App/` — AppManager, application-level operations
+ - `Data/DataSource/` — MySQL, SQL Server, ODBC data source implementations
+ - `Data/Output/` — output rendering, query building
+ - `Web/Controllers/` — all MVC/API controllers
+ - `Web/Security/` — authentication, authorization, SAML
+ - `Web/` — caching (ICacheProvider, Redis, HttpRuntime), session, delivery pipeline
+ - `Util/` — encryption, email, file/zip utilities
+
+ ### webapp-framework Only (Windows)
+
+ - `App_Start/` — legacy ASP.NET startup
+ - `Web.config` + `web.*.config` — IIS configuration files
+ - `Global.asax` / `Global.asax.cs` — application lifecycle
+ - `Account/` — legacy account pages
+
+ ### webapp-core Only (Linux)
+
+ - `Program.cs` — .NET Core entry point and middleware pipeline
+ - `Middleware/` — 9 middleware classes (ported from HttpModules)
+ - `Shims/` — compatibility shims (HttpContext, Cache, ConfigurationManager, CaseInsensitiveFileProvider)
+ - `appsettings.json` / `appsettings.*.json` — .NET Core configuration
+ - `log4net.config` — Core-specific logging configuration
+
+ ## Change Categories
+
+ ### Category 1: Core-Only Infrastructure
+
+ Changes that only apply to `webapp-core`. No transfer to `webapp-framework` needed.
+
+ Examples:
+ - Middleware files (`Middleware/*.cs`)
+ - Shim files (`Shims/*.cs`)
+ - `Program.cs`
+ - `appsettings*.json`, `log4net.config`
+ - `App_Code/Properties/AssemblyInfo.cs` (platform-diverged)
+ - `App_Code/Global.asax.cs` (platform-diverged)
+ - Build/deploy scripts (`build/deploy-core.sh`, `build/boot-deploy.sh`)
+ - AWS infrastructure (EFS, CloudWatch, Launch Templates)
+
+ When a transfer review issue is created for Category 1 files, close it as "not planned."
+
+ ### Category 2: Shared Business Logic
+
+ Changes to business logic in `App_Code/dboio/` that affect both platforms.
+
+ Examples:
+ - Entity classes, data access (`Data/`, `Data/App/`, `Data/DataSource/`)
+ - Controllers (`Web/Controllers/`)
+ - Security (`Web/Security/`)
+ - Cache logic (`Web/` cache providers)
+ - Utility classes (`Util/`)
+ - Any bug fix or feature in shared code
+
+ When a transfer review issue is created for Category 2 files, apply the equivalent change to `src/webapp-framework/App_Code/`, then close the issue.
+
+ ## Automated Transfer Review Workflow
+
+ A GitHub Action (`.github/workflows/transfer-review.yml`) monitors pushes to all branches.
+
+ ### How it works
+
+ 1. Triggers on any push that modifies files under `src/webapp-core/App_Code/`
+ 2. Filters out excluded subdirectories (configurable in the workflow YAML)
+ 3. Creates a GitHub issue labeled `netcore-aspnet-transfer`
+ 4. Lists all changed App_Code files, the commit hash, branch, and author
+
+ ### Reviewing transfer issues
+
+ 1. Open the issue and review the changed files
+ 2. If all changes are Category 1 (Core-only): close as "not planned"
+ 3. If any changes are Category 2 (shared): apply to `webapp-framework`, then close with a reference to the transfer commit
+
+ ### Exclusion list
+
+ The workflow YAML contains an `EXCLUDE_PATTERNS` array. Add subdirectory names to suppress issues for known Core-only paths within `App_Code/`:
+
+ ```bash
+ EXCLUDE_PATTERNS=(
+ # "Properties"
+ # Add more as patterns emerge
+ )
+ ```
+
+ ## Commit Message Conventions
+
+ No strict format required, but these prefixes help identify change scope:
+
+ - **`webapp-core:`** — Core-only changes (Category 1, no transfer needed)
+ - **`webapp-framework:`** — Framework-only changes
+ - **No prefix** — changes that apply to both platforms, or general feature work
+
+ Examples:
+ - `webapp-core: fix middleware ordering for auth` (Category 1)
+ - `Task XXXX | entity: add new validation rule #NNN` (Category 2, needs transfer)
+
+ ## Development Flow
+
+ 1. New business logic typically starts in `webapp-framework` (production platform)
+ 2. After committing, manually apply the same change to `webapp-core`
+ 3. Core-only infrastructure work happens only in `webapp-core`
+ 4. The transfer review workflow catches the reverse case (Core-first changes that may need Framework transfer)
+
+ ## Retirement Criteria for webapp-framework
+
+ `webapp-framework` can be retired when ALL of the following are met:
+
+ 1. **Functional parity** — webapp-core handles all production traffic without regressions
+ 2. **Production cutover** — load balancer routes 100% of traffic to Core servers
+ 3. **Stability period** — at least 30 days of production traffic on Core with no regressions
+ 4. **Client sign-off** — explicit approval to decommission Windows servers
+ 5. **Rollback plan verified** — confirmed ability to revert to webapp-framework if needed
+
+ After retirement:
+ - Archive `src/webapp-framework/` (do not delete immediately)
+ - Remove the transfer review workflow
+ - Remove the `netcore-aspnet-transfer` label
+ - Update this document
@@ -93,9 +93,10 @@ const ROOT_FILE_TEMPLATES = {
  */
  async function buildBinMetadata(filePath, entity, appConfig, structure) {
  const rel = relative(process.cwd(), filePath).replace(/\\/g, '/');
- const ext = extname(filePath).replace('.', '').toLowerCase();
  const fileName = basename(filePath);
- const base = basename(filePath, extname(filePath));
+ const ext = extname(filePath).replace('.', '').toLowerCase();
+ const rawBase = basename(filePath, extname(filePath));
+ const base = rawBase || fileName; // dotfiles (.dboignore): rawBase='' → use full filename
  const fileDir = dirname(rel);
  const bin = findBinByPath(fileDir, structure);
  const binPath = bin?.path || '';
@@ -103,9 +104,9 @@ async function buildBinMetadata(filePath, entity, appConfig, structure) {
  const metaPath = join(dirname(filePath), `${base}.metadata.json`);

  if (entity === 'content') {
- const contentPath = binPath
- ? `${binPath}/${base}.${ext}`
- : `${base}.${ext}`;
+ // For dotfiles, base === fileName (no real ext), so path = base; otherwise base.ext
+ const localName = (base === fileName) ? fileName : `${base}.${ext}`;
+ const contentPath = binPath ? `${binPath}/${localName}` : localName;
  const meta = {
  _entity: 'content',
  _companionReferenceColumns: ['Content'],
@@ -165,7 +166,7 @@ async function adoptSingleFile(filePath, entityArg, options) {
  const appBin = findBinByPath('app', structure);
  const binsAppDir = join(process.cwd(), BINS_DIR, 'app');
  const ext = extname(fileName).replace('.', '').toUpperCase() || 'TXT';
- const stem = basename(fileName, extname(fileName));
+ const stem = fileName;

  const metaFilename = `${stem}.metadata.json`;
  const metaPath = join(binsAppDir, metaFilename);
@@ -174,15 +175,15 @@ async function adoptSingleFile(filePath, entityArg, options) {
  let existingMeta = null;
  try { existingMeta = JSON.parse(await readFile(metaPath, 'utf8')); } catch {}
  if (existingMeta) {
- if (existingMeta.UID || existingMeta._CreatedOn) {
- log.warn(`"${fileName}" is already on the server (has UID/_CreatedOn) — skipping.`);
+ if (existingMeta._CreatedOn || existingMeta._LastUpdated) {
+ log.warn(`"${fileName}" is already on the server (_CreatedOn/_LastUpdated present) — skipping.`);
  return;
  }
  if (!options.yes) {
  const inquirer = (await import('inquirer')).default;
  const { overwrite } = await inquirer.prompt([{
  type: 'confirm', name: 'overwrite',
- message: `Metadata already exists for "${fileName}" (no UID). Overwrite?`,
+ message: `Metadata already exists for "${fileName}" (not yet on server). Overwrite?`,
  default: false,
  }]);
  if (!overwrite) return;
@@ -215,8 +216,8 @@ async function adoptSingleFile(filePath, entityArg, options) {
  }

  const dir = dirname(filePath);
- const ext = extname(filePath);
- const base = basename(filePath, ext);
+ const rawBase = basename(filePath, extname(filePath));
+ const base = rawBase || basename(filePath); // dotfiles: '' → full filename

  // Check for existing metadata
  const metaPath = join(dir, `${base}.metadata.json`);
@@ -226,17 +227,17 @@ async function adoptSingleFile(filePath, entityArg, options) {
  } catch { /* no file — that's fine */ }

  if (existingMeta) {
- if (existingMeta.UID || existingMeta._CreatedOn) {
- log.warn(`"${fileName}" is already on the server (has UID/_CreatedOn) — skipping.`);
+ if (existingMeta._CreatedOn || existingMeta._LastUpdated) {
+ log.warn(`"${fileName}" is already on the server (_CreatedOn/_LastUpdated present) — skipping.`);
  return;
  }
- // Metadata exists but no server record
+ // Metadata exists but record not yet on server
  if (!options.yes) {
  const inquirer = (await import('inquirer')).default;
  const { overwrite } = await inquirer.prompt([{
  type: 'confirm',
  name: 'overwrite',
- message: `Metadata already exists for "${fileName}" (no UID). Overwrite?`,
+ message: `Metadata already exists for "${fileName}" (not yet on server). Overwrite?`,
  default: false,
  }]);
  if (!overwrite) return;
@@ -355,7 +356,8 @@ async function adoptSingleFile(filePath, entityArg, options) {
  async function runInteractiveWizard(filePath, options) {
  const inquirer = (await import('inquirer')).default;
  const fileName = basename(filePath);
- const base = basename(filePath, extname(filePath));
+ const rawBase = basename(filePath, extname(filePath));
+ const base = rawBase || fileName; // dotfiles: '' → full filename
  const metaPath = join(dirname(filePath), `${base}.metadata.json`);

  log.plain('');
@@ -14,7 +14,7 @@ import { checkDomainChange } from '../lib/domain-guard.js';
  import { applyTrashIcon, ensureTrashIcon, tagProjectFiles } from '../lib/tagging.js';
  import { loadMetadataSchema, saveMetadataSchema, getTemplateCols, setTemplateCols, buildTemplateFromCloneRecord, generateMetadataFromSchema, parseReferenceExpression, mergeDescriptorSchemaFromDependencies } from '../lib/metadata-schema.js';
  import { fetchSchema, loadSchema, saveSchema, isSchemaStale } from '../lib/schema.js';
- import { appMetadataPath } from '../lib/config.js';
+ import { appMetadataPath, baselinePath, metadataSchemaPath } from '../lib/config.js';
  import { runPendingMigrations } from '../lib/migrations.js';
  import { upsertDeployEntry } from '../lib/deploy-config.js';
  import { syncDependencies, parseDependenciesColumn } from '../lib/dependencies.js';
@@ -250,6 +250,114 @@ export async function detectAndTrashOrphans(appJson, ig, sync, options) {
  }
  }

+ /**
+ * Use the explicit `appJson.deleted` map (returned by the server in delta/baseline responses)
+ * to find and trash local files for records the server has deleted.
+ *
+ * Unlike detectAndTrashOrphans() which diffs all local UIDs against all server UIDs,
+ * this function is authoritative: if the server says a UID was deleted, it moves the
+ * local files immediately — no full UID scan needed. This makes it safe to call in
+ * pull/delta mode where appJson.children may only contain changed records.
+ *
+ * @param {object} appJson - App JSON possibly containing a `deleted` map
+ * @param {import('ignore').Ignore} ig - Ignore instance for findMetadataFiles
+ * @param {object} sync - Parsed synchronize.json { delete, edit, add }
+ * @param {object} options - Clone options
+ */
+ export async function trashServerDeletedRecords(appJson, ig, sync, options) {
+ if (options.entityFilter) return;
+ if (!appJson?.deleted || typeof appJson.deleted !== 'object') return;
+
+ // Build set of UIDs to trash from all entities in deleted map
+ const deletedUids = new Map(); // uid → { entity, name }
+ for (const [entity, entries] of Object.entries(appJson.deleted)) {
+ if (!Array.isArray(entries)) continue;
+ for (const entry of entries) {
+ if (entry?.UID) {
+ deletedUids.set(String(entry.UID), { entity, name: entry.Name || entry.UID });
+ }
+ }
+ }
+
+ if (deletedUids.size === 0) return;
+
+ // UIDs already queued for deletion in synchronize.json — skip them
+ const stagedDeleteUids = new Set(
+ (sync.delete || []).map(e => e.UID).filter(Boolean).map(String)
+ );
+
+ const metaFiles = await findMetadataFiles(process.cwd(), ig);
+ if (metaFiles.length === 0) return;
+
+ const trashDir = join(process.cwd(), 'trash');
+ const toTrash = [];
+
+ for (const metaPath of metaFiles) {
+ let meta;
+ try {
+ meta = JSON.parse(await readFile(metaPath, 'utf8'));
+ } catch {
+ continue;
+ }
+
+ if (!meta.UID) continue;
+ const uid = String(meta.UID);
+ if (!deletedUids.has(uid)) continue;
+ if (stagedDeleteUids.has(uid)) continue;
+
+ const metaDir = dirname(metaPath);
+ const filesToMove = [metaPath];
+
+ for (const col of (meta._companionReferenceColumns || meta._contentColumns || [])) {
+ const ref = meta[col];
+ if (ref && String(ref).startsWith('@')) {
+ const refName = String(ref).substring(1);
+ const companionPath = refName.startsWith('/')
+ ? join(process.cwd(), refName)
+ : join(metaDir, refName);
+ if (await fileExists(companionPath)) filesToMove.push(companionPath);
+ }
+ }
+
+ if (meta._mediaFile && String(meta._mediaFile).startsWith('@')) {
+ const refName = String(meta._mediaFile).substring(1);
+ const mediaPath = refName.startsWith('/')
+ ? join(process.cwd(), refName)
+ : join(metaDir, refName);
+ if (await fileExists(mediaPath)) filesToMove.push(mediaPath);
+ }
+
+ const { entity } = deletedUids.get(uid);
+ toTrash.push({ metaPath, uid, entity, filesToMove });
+ }
+
+ if (toTrash.length === 0) return;
+
+ await mkdir(trashDir, { recursive: true });
+
+ let trashed = 0;
+ for (const { metaPath, uid, entity, filesToMove } of toTrash) {
+ log.dim(` Trashed (server deleted): ${basename(metaPath)} (${entity}:${uid})`);
+ for (const filePath of filesToMove) {
+ const destBase = basename(filePath);
+ let destPath = join(trashDir, destBase);
+ try { await stat(destPath); destPath = `${destPath}.${Date.now()}`; } catch {}
+ try {
+ await rename(filePath, destPath);
+ trashed++;
+ } catch (err) {
+ log.warn(` Could not trash: ${filePath} — ${err.message}`);
+ }
+ }
+ }
+
+ if (trashed > 0) {
+ await ensureTrashIcon(trashDir);
+ log.plain('');
+ log.warn(`Moved ${toTrash.length} server-deleted record(s) to trash`);
+ }
+ }
+

  /**
  * Resolve a content Path to a directory under Bins/.
  *
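The new `trashServerDeletedRecords()` first flattens the server's per-entity `deleted` map into a single UID index. A hypothetical payload (the record values here are illustrative) run through the same flattening loop:

```javascript
// Hypothetical server payload matching the shape the function reads:
// one array of { UID, Name } entries per entity name.
const appJson = {
  deleted: {
    content: [{ UID: '101', Name: 'old-page' }],
    site: [{ UID: '202' }], // Name optional — the UID doubles as the display name
  },
};

// Same flattening the function performs: uid → { entity, name }
const deletedUids = new Map();
for (const [entity, entries] of Object.entries(appJson.deleted)) {
  if (!Array.isArray(entries)) continue;
  for (const entry of entries) {
    if (entry?.UID) deletedUids.set(String(entry.UID), { entity, name: entry.Name || entry.UID });
  }
}

console.log(deletedUids.get('202')); // { entity: 'site', name: '202' }
```

Keying by stringified UID up front is what lets the later pass over local metadata files stay a single `Map` lookup per file.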
@@ -353,8 +461,9 @@ export function resolveRecordPaths(entityName, record, structure, placementPref)
  const uid = String(record.UID || record._id || 'untitled');
  // Companion: natural name, no UID
  const filename = sanitizeFilename(buildContentFileName(record, uid));
- // Metadata: name.metadata.json
- const metaPath = join(dir, buildMetaFilename(name));
+ // Metadata: filename.metadata.json (includes extension to avoid collisions between records
+ // with the same Name but different Extension, e.g. codeTest.js vs codeTest.css)
+ const metaPath = join(dir, buildMetaFilename(filename));

  return { dir, filename, metaPath };
  }
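The collision this fixes is easy to see if one assumes `buildMetaFilename` (not shown in this diff) simply appends the metadata suffix to its stem:

```javascript
// Assumed helper shape — buildMetaFilename's real implementation is not in this diff.
const buildMetaFilename = (stem) => `${stem}.metadata.json`;

// Old scheme — Name-based stem: codeTest.js and codeTest.css map to ONE file.
console.log(buildMetaFilename('codeTest'));     // 'codeTest.metadata.json' (shared, collides)

// New scheme — full filename as stem: one metadata file per record.
console.log(buildMetaFilename('codeTest.js'));  // 'codeTest.js.metadata.json'
console.log(buildMetaFilename('codeTest.css')); // 'codeTest.css.metadata.json'
```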
@@ -1232,7 +1341,24 @@ export async function performClone(source, options = {}) {
  // processExtensionEntries() later loads null (wrong file) and descriptor
  // sub-directories + companion @reference entries are lost.
  if (!options.pullMode && appJson?.ShortName) {
- await updateConfigWithApp({ AppShortName: appJson.ShortName });
+ // If the app's ShortName changed, rename the .app/ files that are keyed by it
+ // before updating config so the old paths can still be resolved.
+ const oldShortName = config.AppShortName;
+ const newShortName = appJson.ShortName;
+ if (oldShortName && oldShortName !== newShortName) {
+ const oldBaseline = await baselinePath();
+ const oldAppMeta = await appMetadataPath();
+ const oldSchema = await metadataSchemaPath();
+ await updateConfigWithApp({ AppShortName: newShortName });
+ const newBaseline = await baselinePath();
+ const newAppMeta = await appMetadataPath();
+ const newSchema = await metadataSchemaPath();
+ for (const [oldPath, newPath] of [[oldBaseline, newBaseline], [oldAppMeta, newAppMeta], [oldSchema, newSchema]]) {
+ try { await access(oldPath); await rename(oldPath, newPath); log.dim(` Renamed ${basename(oldPath)} → ${basename(newPath)}`); } catch { /* file absent, nothing to rename */ }
+ }
+ } else {
+ await updateConfigWithApp({ AppShortName: newShortName });
+ }
  }

  // Regenerate metadata_schema.json for any new entity types
@@ -1525,6 +1651,9 @@ export async function performClone(source, options = {}) {
  if (!entityFilter) {
  const ig = await loadIgnore();
  const sync = await loadSynchronize();
+ // Use explicit deleted list from server first (authoritative, works in delta/pull mode)
+ await trashServerDeletedRecords(appJson, ig, sync, { ...options, entityFilter });
+ // Fall back to full UID diff for records absent from server but not in deleted list
  await detectAndTrashOrphans(appJson, ig, sync, { ...options, entityFilter });
  }

@@ -2075,7 +2204,7 @@ async function processEntityDirEntries(entityName, entries, options, serverTz) {
  // Skip __WILL_DELETE__-prefixed files — treat as "no existing file"
  const willDeleteEntityMeta = join(dirName, `${WILL_DELETE_PREFIX}${basename(metaPath)}`);
  const entityMetaExists = await fileExists(metaPath) && !await fileExists(willDeleteEntityMeta);
- if (entityMetaExists && !options.yes && !hasNewExtractions) {
+ if (entityMetaExists && !options.yes && !options.force && !hasNewExtractions) {
  if (bulkAction.value === 'skip_all') {
  log.dim(` Skipped ${name}`);
  refs.push({ uid: record.UID, metaPath });
@@ -3389,7 +3518,8 @@ async function processRecord(entityName, record, structure, options, usedNames,
  const uid = String(record.UID || record._id || 'untitled');
  // Companion: natural name, no UID (use collision-resolved override if available)
  const fileName = filenameOverride || sanitizeFilename(buildContentFileName(record, uid));
- // Metadata: name.metadata.json; usedNames retained for non-UID edge case tracking
+ // Metadata: filename.metadata.json (includes extension to avoid collisions between records
+ // with the same Name but different Extension, e.g. codeTest.js vs codeTest.css)
  const nameKey = `${dir}/${name}`;
  usedNames.set(nameKey, (usedNames.get(nameKey) || 0) + 1);

@@ -3401,7 +3531,20 @@ async function processRecord(entityName, record, structure, options, usedNames,
  );

  const filePath = join(dir, fileName);
- const metaPath = join(dir, buildMetaFilename(name));
+ const metaPath = join(dir, buildMetaFilename(fileName));
+
+ // Legacy migration: rename old name.metadata.json → new filename.metadata.json
+ // (repos cloned before this fix used the base name without extension as the metadata stem)
+ const legacyMetaPath = join(dir, buildMetaFilename(name));
+ if (legacyMetaPath !== metaPath && !await fileExists(metaPath) && await fileExists(legacyMetaPath)) {
+ try {
+ const legacyMeta = JSON.parse(await readFile(legacyMetaPath, 'utf8'));
+ if (legacyMeta.UID === uid) {
+ const { rename: fsRename } = await import('fs/promises');
+ await fsRename(legacyMetaPath, metaPath);
+ }
+ } catch { /* non-critical */ }
+ }

  // Rename legacy ~UID companion files to natural names if needed
  if (await fileExists(metaPath)) {
@@ -4431,8 +4574,18 @@ async function _generateRootFileStub(filename, appJson) {
  }

  if (filenameLower === 'claude.md') {
+ const cfg = await loadConfig();
+ const domain = cfg.domain || '';
+ const appShortName = appJson.ShortName || '';
+ const siteRecords = appJson.children?.site || [];
+ const siteLines = siteRecords.map(s => {
+ const url = `//${domain}/app/${appShortName}/${s.ShortName}`;
+ const label = s.Title || s.Name || s.ShortName;
+ return `- \`${url}\` — ${label} (add \`?dev=true\` to serve uncompiled JS; add \`&console=true\` for verbose debug logging)`;
+ });
  const stub = [
  `# ${appName}`,
+ ...(siteLines.length > 0 ? [``, `## App Sites`, ``, ...siteLines] : []),
  ``,
  `## DBO CLI`,
  ``,
@@ -4478,7 +4631,19 @@ async function _generateRootFileStub(filename, appJson) {
  }

  if (filenameLower === 'readme.md') {
+ const cfg = await loadConfig();
+ const domain = cfg.domain || '';
+ const appShortName = appJson.ShortName || '';
+ const siteRecords = appJson.children?.site || [];
  const parts = [`# ${appName}`];
+ if (siteRecords.length > 0) {
+ parts.push('');
+ for (const s of siteRecords) {
+ const url = `//${domain}/app/${appShortName}/${s.ShortName}`;
+ const label = s.Title || s.Name || s.ShortName;
+ parts.push(`- [${label}](${url})`);
+ }
+ }
  if (description) parts.push('', description);
  parts.push('');
  await writeFile(rootPath, parts.join('\n'));
@@ -6,9 +6,9 @@ import { buildInputBody, checkSubmitErrors, getSessionUserOverride } from '../li
  import { formatResponse, formatError } from '../lib/formatter.js';
  import { log } from '../lib/logger.js';
  import { shouldSkipColumn } from '../lib/columns.js';
- import { loadConfig, loadAppConfig, loadSynchronize, saveSynchronize, loadAppJsonBaseline, saveAppJsonBaseline, hasBaseline, loadScripts, loadScriptsLocal, addDeleteEntry, loadRootContentFiles } from '../lib/config.js';
+ import { loadConfig, loadAppConfig, loadSynchronize, saveSynchronize, loadAppJsonBaseline, saveAppJsonBaseline, hasBaseline, loadScripts, loadScriptsLocal, addDeleteEntry, loadRootContentFiles, loadRepositoryIntegrationID } from '../lib/config.js';
  import { mergeScriptsConfig, resolveHooks, buildHookEnv, runBuildLifecycle, runPushLifecycle } from '../lib/scripts.js';
- import { checkStoredTicket, applyStoredTicketToSubmission, clearRecordTicket, clearGlobalTicket } from '../lib/ticketing.js';
+ import { checkStoredTicket, applyStoredTicketToSubmission, clearRecordTicket, clearGlobalTicket, fetchAndCacheRepositoryIntegration } from '../lib/ticketing.js';
  import { checkModifyKey, isModifyKeyError, handleModifyKeyError } from '../lib/modify-key.js';
  import { resolveTransactionKey } from '../lib/transaction-key.js';
  import { setFileTimestamps, parseServerDate } from '../lib/timestamps.js';
@@ -380,8 +380,8 @@ async function pushSingleFile(filePath, client, options, modifyKey = null, trans
  }
  }

- // Toe-stepping check for single-file push
- if (isToeStepping(options) && meta.UID) {
+ // Toe-stepping check for single-file push (only for records confirmed on server)
+ if (isToeStepping(options) && (meta._CreatedOn || meta._LastUpdated)) {
  const baseline = await loadAppJsonBaseline();
  if (baseline) {
  const appConfig = await loadAppConfig();
@@ -426,7 +426,13 @@ async function pushSingleFile(filePath, client, options, modifyKey = null, trans
  }
  // ── End script hooks ────────────────────────────────────────────────

- const success = await pushFromMetadata(meta, metaPath, client, options, null, modifyKey, transactionKey);
+ const isNewRecord = !meta._CreatedOn && !meta._LastUpdated;
+ let success;
+ if (isNewRecord) {
+ success = await addFromMetadata(meta, metaPath, client, options, modifyKey);
+ } else {
+ success = await pushFromMetadata(meta, metaPath, client, options, null, modifyKey, transactionKey);
+ }
  if (success) {
  const baseline = await loadAppJsonBaseline();
  if (baseline) {
@@ -736,7 +742,7 @@ async function pushDirectory(dirPath, client, options, modifyKey = null, transac
  continue;
  }

- const isNewRecord = !meta.UID && !meta._id;
+ const isNewRecord = !meta._CreatedOn && !meta._LastUpdated;

  // Verify @file references exist
  const contentCols = meta._companionReferenceColumns || meta._contentColumns || [];
@@ -873,19 +879,50 @@ async function pushDirectory(dirPath, client, options, modifyKey = null, transac
  // Pre-flight ticket validation (only if no --ticket flag)
  const totalRecords = toPush.length + outputsWithChanges.length + binPushItems.length;
  if (!options.ticket && totalRecords > 0) {
- const recordSummary = [
- ...toPush.map(r => { const p = parseMetaFilename(basename(r.metaPath)); return p ? p.naturalBase : basename(r.metaPath, '.metadata.json'); }),
- ...outputsWithChanges.map(r => basename(r.metaPath, '.json')),
- ...binPushItems.map(r => `bin:${r.meta.Name}`),
- ].join(', ');
- const ticketCheck = await checkStoredTicket(options, `${totalRecords} record(s): ${recordSummary}`);
- if (ticketCheck.cancel) {
- log.info('Submission cancelled');
- return;
+ // Proactive check: fetch RepositoryIntegrationID from the server before prompting.
+ // Uses UpdatedAfter=<today> to keep the response small; the top-level app record is always returned.
+ // Result is cached in .app/config.json so subsequent fetches can fall back to it.
+ let ticketingNeeded = null; // null = unknown (fetch failed)
+ const appConfig = await loadAppConfig();
+ const appShortNameForTicket = appConfig?.AppShortName;
+ if (appShortNameForTicket) {
+ const { id, fetched } = await fetchAndCacheRepositoryIntegration(client, appShortNameForTicket);
+ if (fetched) {
+ // Server answered: null means no RepositoryIntegration configured → skip ticketing
+ ticketingNeeded = (id != null);
+ }
+ }
+
+ // Fallback chain when the server fetch failed or no AppShortName:
+ // 1. Stored RepositoryIntegrationID in .app/config.json (from last successful fetch)
+ // 2. ticketing_required flag in ticketing.local.json (set reactively on first ticket_error)
+ if (ticketingNeeded === null) {
+ const storedId = await loadRepositoryIntegrationID();
+ if (storedId != null) {
+ ticketingNeeded = true;
+ }
+ // If storedId is also null, leave ticketingNeeded as null — checkStoredTicket will
+ // decide based on ticketing_required in ticketing.local.json (reactive fallback).
  }
- if (ticketCheck.clearTicket) {
- await clearGlobalTicket();
- log.dim(' Cleared stored ticket');
+
+ // Skip ticketing entirely when we have a confirmed negative signal
+ if (ticketingNeeded === false) {
+ // RepositoryIntegrationID is null on the server — no ticket needed
+ } else {
+ const recordSummary = [
+ ...toPush.map(r => { const p = parseMetaFilename(basename(r.metaPath)); return p ? p.naturalBase : basename(r.metaPath, '.metadata.json'); }),
+ ...outputsWithChanges.map(r => basename(r.metaPath, '.json')),
+ ...binPushItems.map(r => `bin:${r.meta.Name}`),
+ ].join(', ');
+ const ticketCheck = await checkStoredTicket(options, `${totalRecords} record(s): ${recordSummary}`);
+ if (ticketCheck.cancel) {
+ log.info('Submission cancelled');
+ return;
+ }
+ if (ticketCheck.clearTicket) {
+ await clearGlobalTicket();
+ log.dim(' Cleared stored ticket');
+ }
  }
  }

@@ -1181,8 +1218,10 @@ async function pushByUIDs(uids, client, options, modifyKey = null, transactionKe
  }
  }

  /**
- * Submit a new record (add) from metadata that has no UID yet.
- * Builds RowID:add1 expressions, submits, then renames files with the returned ~UID.
+ * Submit a new record (insert) from metadata that has no _CreatedOn/_LastUpdated yet.
+ * Builds RowID:add1 expressions and submits. A manually-specified UID (if present in
+ * metadata, placed there by the developer) is included so the server uses that UID.
+ * The server assigns _CreatedOn/_LastUpdated on success, which are written back.
  */
  async function addFromMetadata(meta, metaPath, client, options, modifyKey = null) {
  const entity = meta._entity;
@@ -1194,7 +1233,8 @@ async function addFromMetadata(meta, metaPath, client, options, modifyKey = null

  for (const [key, value] of Object.entries(meta)) {
  if (shouldSkipColumn(key)) continue;
- if (key === 'UID') continue;
+ // UID is included only if the developer manually placed it in the metadata file.
+ // The CLI never auto-generates UIDs — the server assigns them on insert.
  if (value === null || value === undefined) continue;

  const strValue = String(value);
@@ -1266,7 +1306,7 @@ async function addFromMetadata(meta, metaPath, client, options, modifyKey = null
  return false;
  }

- // Extract UID from response and rename metadata to ~uid convention
+ // Extract UID and server-populated fields from response, write back to metadata
  const addResults = result.payload?.Results?.Add || result.data?.Payload?.Results?.Add || [];
  if (addResults.length > 0) {
  const returnedUID = addResults[0].UID;
@@ -1286,7 +1326,7 @@ async function addFromMetadata(meta, metaPath, client, options, modifyKey = null
   const config = await loadConfig();
   const serverTz = config.ServerTimezone;
 
-  // Rename metadata file to ~UID convention; companions keep natural names
+  // Write UID and server timestamps back to metadata file; filenames are never renamed
   const renameResult = await renameToUidConvention(meta, metaPath, returnedUID, returnedLastUpdated, serverTz);
 
   // Propagate updated meta back (renameToUidConvention creates a new object)
@@ -1505,7 +1545,7 @@ async function pushFromMetadata(meta, metaPath, client, options, changedColumns
   // Clean up per-record ticket on success
   await clearRecordTicket(uid || id);
 
-  // Post-insert UID write: if the record lacked a UID and the server returned one
+  // Post-push UID write: if the record lacked a UID (data record) and the server returned one
   try {
     const editResults2 = result.payload?.Results?.Edit || result.data?.Payload?.Results?.Edit || [];
     const addResults2 = result.payload?.Results?.Add || result.data?.Payload?.Results?.Add || [];
@@ -1638,8 +1678,7 @@ async function checkPathMismatch(meta, metaPath, entity, options) {
   if (!contentFileName) return;
 
   // Compute the current path based on where the file actually is.
-  // Strip the ~UID from the filename — the metadata Path is the canonical
-  // server path and never contains the local ~UID suffix.
+  // Strip any legacy ~UID suffix — the metadata Path is the canonical server path.
   const uid = meta.UID;
   const serverFileName = uid ? stripUidFromFilename(contentFileName, uid) : contentFileName;
   const currentFilePath = join(metaDir, serverFileName);
package/src/lib/config.js CHANGED
@@ -794,6 +794,32 @@ export async function loadTicketSuggestionOutput() {
   } catch { return null; }
 }
 
+/**
+ * Save RepositoryIntegrationID to .app/config.json.
+ * Stores the value fetched from the server's app object.
+ * Pass null to clear (ticketing not required for this app).
+ */
+export async function saveRepositoryIntegrationID(value) {
+  await mkdir(projectDir(), { recursive: true });
+  let existing = {};
+  try { existing = JSON.parse(await readFile(configPath(), 'utf8')); } catch {}
+  if (value != null) existing.RepositoryIntegrationID = value;
+  else delete existing.RepositoryIntegrationID;
+  await writeFile(configPath(), JSON.stringify(existing, null, 2) + '\n');
+}
+
+/**
+ * Load RepositoryIntegrationID from .app/config.json.
+ * Returns the stored value or null if not set.
+ */
+export async function loadRepositoryIntegrationID() {
+  try {
+    const raw = await readFile(configPath(), 'utf8');
+    const val = JSON.parse(raw).RepositoryIntegrationID;
+    return (val != null && val !== '') ? val : null;
+  } catch { return null; }
+}
+
 // ─── Gitignore ────────────────────────────────────────────────────────────
 
 /**
@@ -289,15 +289,14 @@ export async function syncDependencies(options = {}) {
   await symlinkCredentials(parentProjectDir, checkoutProjectDir);
 
   // 3. Staleness check (unless --force or --schema)
-  if (!forceAll) {
-    // Also check if the checkout is essentially empty (only .app/ exists) —
-    // a previous clone may have failed or been cleaned up, leaving just config
-    let checkoutEmpty = true;
-    try {
-      const entries = await readdir(checkoutDir);
-      checkoutEmpty = entries.every(e => e === '.app' || e.startsWith('.'));
-    } catch { /* dir doesn't exist yet — treat as empty */ }
+  // Track checkoutEmpty here so step 4 can decide whether to use the local schema.
+  let checkoutEmpty = true;
+  try {
+    const entries = await readdir(checkoutDir);
+    checkoutEmpty = entries.every(e => e === '.app' || e.startsWith('.'));
+  } catch { /* dir doesn't exist yet — treat as empty */ }
 
+  if (!forceAll) {
     if (!checkoutEmpty) {
       let isStale = true;
       try {
@@ -314,7 +313,10 @@ export async function syncDependencies(options = {}) {
   }
 
   // 4. Run the clone (quiet — suppress child process output)
-  if (shortname === '_system' && options.systemSchemaPath) {
+  // Use the local schema file only for the very first (empty) checkout — it's a fast path
+  // for pre-bundled schemas. For stale checkouts always fetch from the server so that
+  // newly-added records (e.g. docs added after the schema was saved) are included.
+  if (shortname === '_system' && options.systemSchemaPath && checkoutEmpty) {
    const relPath = relative(checkoutDir, options.systemSchemaPath);
    await execFn(checkoutDir, ['clone', relPath, '--force', '--yes', '--no-deps'], { quiet: true });
   } else {
@@ -37,12 +37,8 @@ export async function buildInputBody(dataExpressions, extraParams = {}) {
   }
 
   for (const expr of dataExpressions) {
-    // Split by & to handle multiple ops in one -d string
-    const ops = expr.split('&');
-    for (const op of ops) {
-      const encoded = await encodeInputExpression(op.trim());
-      parts.push(encoded);
-    }
+    const encoded = await encodeInputExpression(expr.trim());
+    parts.push(encoded);
   }
 
   return parts.join('&');
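Dropping the `&`-split means each `-d` expression is now one op, so a literal `&` inside a value is treated as data rather than an operator separator. A minimal sketch of the new behavior — `encodeInputExpression` here is a hypothetical stand-in that percent-encodes a single `key=value` pair, not the CLI's real implementation:

```javascript
// Hypothetical stand-in: percent-encode one key=value pair.
async function encodeInputExpression(expr) {
  const i = expr.indexOf('=');
  if (i < 0) return encodeURIComponent(expr);
  return `${encodeURIComponent(expr.slice(0, i))}=${encodeURIComponent(expr.slice(i + 1))}`;
}

// New behavior: each -d expression is encoded whole; '&' in a value survives.
async function buildInputBody(dataExpressions) {
  const parts = [];
  for (const expr of dataExpressions) {
    parts.push(await encodeInputExpression(expr.trim()));
  }
  return parts.join('&');
}

// buildInputBody(['Title=Fish & Chips', 'RowID=edit1'])
// → 'Title=Fish%20%26%20Chips&RowID=edit1'
```

Under the old splitting, `Title=Fish & Chips` would have been broken into two ops at the `&`; callers that want multiple ops now pass them as separate array entries.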
package/src/lib/insert.js CHANGED
@@ -283,7 +283,8 @@ export async function submitAdd(meta, metaPath, filePath, client, options) {
 
   for (const [key, value] of Object.entries(meta)) {
     if (shouldSkipColumn(key)) continue;
-    if (key === 'UID') continue; // Never submit UID on add — server assigns it
+    // UID is included only if the developer manually placed it in the metadata file.
+    // The CLI never auto-generates UIDs — the server assigns them on insert.
     if (value === null || value === undefined) continue;
 
     const strValue = String(value);
@@ -481,8 +482,8 @@ export async function findUnaddedFiles(dir, ig, referencedFiles) {
       const raw = await readFile(join(dir, entry.name), 'utf8');
       if (!raw.trim()) continue;
       const meta = JSON.parse(raw);
-      // Only count records that are on the server (have UID or _CreatedOn)
-      if (!meta.UID && !meta._CreatedOn) continue;
+      // Only count records that are on the server (have _CreatedOn or _LastUpdated)
+      if (!meta._CreatedOn && !meta._LastUpdated) continue;
       collectCompanionRefs(meta, localRefs);
     } catch { /* skip unreadable */ }
   }
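The revised condition matches the release's new UID semantics: a record counts as existing on the server once the server has stamped `_CreatedOn` or `_LastUpdated`, and a UID alone no longer qualifies, since developers may now pre-assign UIDs locally before the first push. As a one-line predicate — `isServerConfirmed` is a hypothetical name for illustration:

```javascript
// A record is server-confirmed only once it carries a server-written timestamp.
const isServerConfirmed = meta => Boolean(meta._CreatedOn || meta._LastUpdated);
```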
@@ -550,7 +551,7 @@ async function _scanMetadataRefs(dir, referenced) {
       const raw = await readFile(fullPath, 'utf8');
       if (!raw.trim()) continue;
       const meta = JSON.parse(raw);
-      if (!meta._CreatedOn && !meta.UID) continue; // only count server-confirmed records
+      if (!meta._CreatedOn && !meta._LastUpdated) continue; // only count server-confirmed records
 
       // Collect all @ references (including from inline output children)
       const allRefs = new Set();
@@ -1,7 +1,7 @@
 import { readFile, writeFile, mkdir } from 'fs/promises';
 import { join } from 'path';
 import { log } from './logger.js';
-import { projectDir } from './config.js';
+import { projectDir, saveRepositoryIntegrationID, loadRepositoryIntegrationID } from './config.js';
 
 const TICKETING_FILE = 'ticketing.local.json';
 
@@ -309,3 +309,59 @@ export async function applyStoredTicketToSubmission(dataExprs, entity, rowId, ui
   }
   return null;
 }
+
+/**
+ * Fetch the app object from the server and cache RepositoryIntegrationID in .app/config.json.
+ *
+ * Uses UpdatedAfter=<today> to keep the response small — children records are filtered
+ * but the top-level app record (including RepositoryIntegrationID) is always returned.
+ *
+ * @param {DboClient} client
+ * @param {string} appShortName
+ * @returns {Promise<{ id: string|null, fetched: boolean }>}
+ *   id: the RepositoryIntegrationID value (or null if not set)
+ *   fetched: true if the server responded successfully, false on network/parse failure
+ */
+export async function fetchAndCacheRepositoryIntegration(client, appShortName) {
+  if (!appShortName) return { id: null, fetched: false };
+
+  try {
+    const today = new Date().toISOString().substring(0, 10); // YYYY-MM-DD
+    const result = await client.get(
+      `/api/app/object/${encodeURIComponent(appShortName)}`,
+      { UpdatedAfter: today }
+    );
+
+    if (!result.ok && !result.successful) return { id: null, fetched: false };
+
+    const data = result.payload || result.data;
+    if (!data) return { id: null, fetched: false };
+
+    // Normalize response shape (array, Rows wrapper, or direct object)
+    let appRecord;
+    if (Array.isArray(data)) {
+      appRecord = data.length > 0 ? data[0] : null;
+    } else if (data?.Rows?.length > 0) {
+      appRecord = data.Rows[0];
+    } else if (data?.rows?.length > 0) {
+      appRecord = data.rows[0];
+    } else if (data && typeof data === 'object' && (data.UID || data.ShortName)) {
+      appRecord = data;
+    } else {
+      return { id: null, fetched: false };
+    }
+
+    if (!appRecord) return { id: null, fetched: false };
+
+    const id = (appRecord.RepositoryIntegrationID != null && appRecord.RepositoryIntegrationID !== '')
+      ? appRecord.RepositoryIntegrationID
+      : null;
+
+    // Persist so push can fall back to this if the next fetch fails
+    await saveRepositoryIntegrationID(id);
+
+    return { id, fetched: true };
+  } catch {
+    return { id: null, fetched: false };
+  }
+}
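The shape-normalization branch in the function above is the part most worth isolating: the server may answer with a bare array, a `Rows`/`rows` wrapper, or the app record directly. A standalone sketch of exactly that logic — `extractAppRecord` is a hypothetical name for the inline code:

```javascript
// Normalize the /api/app/object response: array, Rows wrapper, or direct object.
// Returns the app record, or null when the shape is unrecognized or empty.
function extractAppRecord(data) {
  if (Array.isArray(data)) return data.length > 0 ? data[0] : null;
  if (data?.Rows?.length > 0) return data.Rows[0];
  if (data?.rows?.length > 0) return data.rows[0];
  if (data && typeof data === 'object' && (data.UID || data.ShortName)) return data;
  return null;
}
```

Factoring it this way makes the fallthrough order explicit: array first, then either wrapper casing, and a direct object only when it carries an identifying `UID` or `ShortName`.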
@@ -1,11 +1,11 @@
 import chalk from 'chalk';
 import { dirname, basename, join } from 'path';
-import { readFile } from 'fs/promises';
-import { findBaselineEntry, shouldSkipColumn, normalizeValue, isReference, resolveReferencePath } from './delta.js';
+import { readFile, writeFile } from 'fs/promises';
+import { findBaselineEntry, shouldSkipColumn, normalizeValue, isReference, resolveReferencePath, saveBaseline } from './delta.js';
 import { resolveContentValue } from '../commands/clone.js';
 import { computeLineDiff, formatDiff } from './diff.js';
 import { parseMetaFilename } from './filenames.js';
-import { parseServerDate } from './timestamps.js';
+import { parseServerDate, setFileTimestamps } from './timestamps.js';
 import { log } from './logger.js';
 
 /**
@@ -335,6 +335,7 @@ export async function checkToeStepping(records, client, baseline, options, appSh
   let skippedUIDs = new Set();
   let bulkAction = null; // 'push_all' | 'skip_all'
   let hasConflicts = false;
+  let baselineModified = false;
 
   for (const { meta, metaPath } of records) {
     const uid = meta.UID;
@@ -381,6 +382,14 @@ export async function checkToeStepping(records, client, baseline, options, appSh
       if (options.yes || bulkAction === 'push_all') {
         continue; // push this record
       }
+      if (bulkAction === 'pull_all') {
+        await applyServerToLocal(serverEntry, meta, metaPath, serverTz);
+        _updateBaselineEntry(baseline, entity, uid, serverEntry);
+        baselineModified = true;
+        log.success(` Pulled server version of "${label}" to local`);
+        skippedUIDs.add(uid);
+        continue;
+      }
       if (bulkAction === 'skip_all') {
         skippedUIDs.add(uid);
         continue;
@@ -395,9 +404,11 @@ export async function checkToeStepping(records, client, baseline, options, appSh
       message: `"${label}" has server changes. How to proceed?`,
       choices: [
         { name: 'Push anyway (overwrite server changes)', value: 'push' },
+        { name: 'Pull from server (overwrite local changes)', value: 'pull' },
         { name: 'Compare differences', value: 'compare' },
         { name: 'Skip this record', value: 'skip' },
         { name: 'Push all remaining (overwrite all)', value: 'push_all' },
+        { name: 'Pull all remaining (overwrite all local)', value: 'pull_all' },
         { name: 'Skip all remaining', value: 'skip_all' },
         { name: 'Cancel entire push', value: 'cancel' },
       ],
@@ -414,6 +425,14 @@ export async function checkToeStepping(records, client, baseline, options, appSh
         log.info('Push cancelled. Run "dbo pull" to fetch server changes first.');
         return false;
       }
+      if (action === 'pull' || action === 'pull_all') {
+        await applyServerToLocal(serverEntry, meta, metaPath, serverTz);
+        _updateBaselineEntry(baseline, entity, uid, serverEntry);
+        baselineModified = true;
+        log.success(` Pulled server version of "${label}" to local`);
+        skippedUIDs.add(uid);
+        if (action === 'pull_all') bulkAction = 'pull_all';
+      }
       if (action === 'skip' || action === 'skip_all') {
         skippedUIDs.add(uid);
         if (action === 'skip_all') bulkAction = 'skip_all';
@@ -423,6 +442,12 @@ export async function checkToeStepping(records, client, baseline, options, appSh
     }
   }
 
+  if (baselineModified) {
+    try {
+      await saveBaseline(baseline);
+    } catch { /* non-critical — next push will re-detect */ }
+  }
+
   if (!hasConflicts) return true;
 
   // Return skipped UIDs so the caller can filter them out
  // Return skipped UIDs so the caller can filter them out
@@ -496,3 +521,78 @@ async function showPushDiff(serverEntry, localMeta, metaPath) {
496
521
 
497
522
  log.plain('');
498
523
  }
524
+
525
+ /**
526
+ * Update the in-memory baseline entry for a record with the server's current
527
+ * values. Called after pulling from server so the next toe-stepping check
528
+ * sees the pulled state as the new baseline and does not re-raise the conflict.
529
+ *
530
+ * @param {Object} baseline - Loaded baseline object (mutated in place)
531
+ * @param {string} entity - Entity type (e.g., "content")
532
+ * @param {string} uid - Record UID
533
+ * @param {Object} serverEntry - Live server record
534
+ */
535
+ function _updateBaselineEntry(baseline, entity, uid, serverEntry) {
536
+ if (!baseline?.children) return;
537
+ const arr = baseline.children[entity];
538
+ if (!Array.isArray(arr)) return;
539
+ const idx = arr.findIndex(e => e.UID === uid);
540
+ if (idx < 0) return;
541
+
542
+ const SKIP = new Set(['_entity', '_companionReferenceColumns', '_contentColumns',
543
+ '_mediaFile', '_pathConfirmed', 'children', '_id']);
544
+ for (const [col, rawVal] of Object.entries(serverEntry)) {
545
+ if (SKIP.has(col)) continue;
546
+ const decoded = resolveContentValue(rawVal);
547
+ arr[idx][col] = decoded !== null ? decoded : rawVal;
548
+ }
549
+ }
550
+
551
+ /**
552
+ * Overwrite local metadata and companion content files with server values.
553
+ *
554
+ * Called when the user chooses "Pull from server" during conflict resolution.
555
+ * Updates:
556
+ * - companion content files (columns listed in _companionReferenceColumns)
557
+ * - all non-system metadata fields
558
+ * - _LastUpdated / _CreatedOn timestamps in the metadata JSON
559
+ * - file timestamps on both the metadata file and any companion files
560
+ *
561
+ * @param {Object} serverEntry - Live server record (from fetchServerRecord*)
562
+ * @param {Object} localMeta - Currently loaded metadata object
563
+ * @param {string} metaPath - Absolute path to the .metadata.json file
564
+ * @param {string} [serverTz] - Server timezone string (e.g. "America/Chicago")
565
+ */
566
+ async function applyServerToLocal(serverEntry, localMeta, metaPath, serverTz) {
567
+ const metaDir = dirname(metaPath);
568
+ const companions = new Set(localMeta._companionReferenceColumns || []);
569
+
570
+ // Write companion content files from server values
571
+ for (const col of companions) {
572
+ const ref = localMeta[col];
573
+ if (!ref || !String(ref).startsWith('@')) continue;
574
+ const filePath = join(metaDir, String(ref).substring(1));
575
+ const serverValue = resolveContentValue(serverEntry[col]);
576
+ if (serverValue !== null) {
577
+ await writeFile(filePath, serverValue, 'utf8');
578
+ try {
579
+ await setFileTimestamps(filePath, serverEntry._CreatedOn, serverEntry._LastUpdated, serverTz);
580
+ } catch { /* non-critical */ }
581
+ }
582
+ }
583
+
584
+ // Merge non-system, non-companion server columns into localMeta
585
+ const skipMeta = new Set(['_entity', '_companionReferenceColumns', '_contentColumns', '_mediaFile',
586
+ '_pathConfirmed', 'children', '_id']);
587
+ for (const [col, rawVal] of Object.entries(serverEntry)) {
588
+ if (skipMeta.has(col)) continue;
589
+ if (companions.has(col)) continue; // companion already handled as a file
590
+ const decoded = resolveContentValue(rawVal);
591
+ localMeta[col] = decoded !== null ? decoded : rawVal;
592
+ }
593
+
594
+ await writeFile(metaPath, JSON.stringify(localMeta, null, 2) + '\n');
595
+ try {
596
+ await setFileTimestamps(metaPath, serverEntry._CreatedOn, serverEntry._LastUpdated, serverTz);
597
+ } catch { /* non-critical */ }
598
+ }
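The baseline-merge half of the pull flow can be exercised in isolation. A standalone sketch of `_updateBaselineEntry`'s column merge — here `resolveContentValue` is stubbed as identity (the real one in `commands/clone.js` decodes content references), and the skip set is trimmed to a representative subset:

```javascript
// Stub: the real resolveContentValue decodes @file content references.
const resolveContentValue = v => v;

// Merge live server columns into the matching baseline entry, in place,
// skipping local bookkeeping keys so they never leak into the baseline.
function updateBaselineEntry(baseline, entity, uid, serverEntry) {
  if (!baseline?.children) return;
  const arr = baseline.children[entity];
  if (!Array.isArray(arr)) return;
  const idx = arr.findIndex(e => e.UID === uid);
  if (idx < 0) return;
  const SKIP = new Set(['_entity', 'children', '_id']);
  for (const [col, rawVal] of Object.entries(serverEntry)) {
    if (SKIP.has(col)) continue;
    const decoded = resolveContentValue(rawVal);
    arr[idx][col] = decoded !== null ? decoded : rawVal;
  }
}
```

Mutating the baseline in place is what lets a single `saveBaseline(baseline)` at the end of the conflict loop persist every pull at once, instead of one write per record.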