@fuzzle/opencode-accountant 0.4.6 → 0.5.0-next.1

# Import Context Architecture

The **Import Context** is a shared state system that tracks bank statement CSV files throughout the import pipeline. Each context is a JSON file stored in `.memory/{uuid}.json` that accumulates metadata and results as the file progresses through classification, import, and reconciliation.

## Table of Contents

- [Overview](#overview)
- [The Multi-Account Problem](#the-multi-account-problem)
- [Context Schema](#context-schema)
- [Context Lifecycle](#context-lifecycle)
- [Context Flow](#context-flow)
- [File Storage](#file-storage)
- [Design Decisions](#design-decisions)
- [Real-World Examples](#real-world-examples)
- [Troubleshooting](#troubleshooting)

## Overview

### What is an Import Context?

An import context is a persistent JSON document that:

- **Tracks a single CSV file** through the entire import pipeline
- **Stores all discovered metadata** (provider, currency, account number, dates, balances)
- **Records progress** (which steps completed, results from each step)
- **Enables step coordination** (later steps access data from earlier steps)
- **Provides an audit trail** (complete history of what happened to each file)

### Why Context Files?

Before contexts, the pipeline had a critical limitation: with multiple accounts from the same provider and currency (e.g., UBS checking account `.0` and UBS savings account `.1`), the reconcile step couldn't distinguish between them. It selected CSVs by modification time, often picking the wrong file.

**Context files solve this by**:

- Giving each CSV a unique identifier (UUID)
- Storing the account number extracted during classification
- Allowing reconcile to find exactly the right CSV using the account number

## The Multi-Account Problem

### Before Import Context

```
Scenario: You have two UBS CHF accounts
- Checking: 0235-90250546.0
- Savings:  0235-90250546.1

Both CSVs end up in: import/done/ubs/chf/

reconcile-statement tool behavior:
❌ Searches for: provider=ubs, currency=chf
❌ Finds: BOTH CSVs
❌ Selects: Most recently modified file
❌ Result: Wrong CSV reconciled against wrong account
```

### After Import Context

```
Scenario: Same two UBS CHF accounts

classify-statements creates:
- Context UUID-A for checking (accountNumber: "0235-90250546.0")
- Context UUID-B for savings (accountNumber: "0235-90250546.1")

reconcile-statement behavior:
✓ Receives: contextId (e.g., UUID-A)
✓ Loads: Context with accountNumber "0235-90250546.0"
✓ Filters: CSVs by account number in filename
✓ Finds: Exact CSV for checking account
✓ Result: Correct CSV reconciled against correct account
```

**Key insight**: The context's `accountNumber` field is the critical piece of data that makes multi-account imports work reliably.

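The before/after selection logic can be sketched in a few lines of TypeScript (helper names are illustrative, not the package's actual API):

```typescript
interface CsvCandidate {
  path: string;
  mtimeMs: number;
}

// Before contexts: pick the most recently modified CSV (ambiguous with two accounts).
function pickByMtime(files: CsvCandidate[]): string | undefined {
  return [...files].sort((a, b) => b.mtimeMs - a.mtimeMs)[0]?.path;
}

// After contexts: filter by the accountNumber stored in the import context.
function pickByAccountNumber(
  files: CsvCandidate[],
  accountNumber: string
): string | undefined {
  return files.find((f) => f.path.includes(accountNumber))?.path;
}

const files: CsvCandidate[] = [
  { path: "import/done/ubs/chf/ubs-0235-90250546.0-transactions.csv", mtimeMs: 100 },
  { path: "import/done/ubs/chf/ubs-0235-90250546.1-transactions.csv", mtimeMs: 200 },
];

// mtime picks the savings CSV even when we meant to reconcile checking:
console.log(pickByMtime(files)); // the .1 file (wrong file for checking)
// the context's accountNumber disambiguates:
console.log(pickByAccountNumber(files, "0235-90250546.0")); // the .0 file
```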
## Context Schema

### ImportContext Interface

```typescript
interface ImportContext {
  // === Identity ===
  id: string; // UUID (e.g., "f47ac10b-58cc-4372-a567-0e02b2c3d479")
  createdAt: string; // ISO 8601 timestamp
  updatedAt: string; // ISO 8601 timestamp (changes with each update)

  // === File Tracking ===
  filename: string; // Current filename
  filePath: string; // Current path (updates as file moves through pipeline)
  originalFilename?: string; // Original name before any renaming

  // === Provider Detection (set by classify-statements) ===
  provider: string; // e.g., "ubs", "revolut"
  currency: string; // e.g., "chf", "eur"
  accountNumber?: string; // e.g., "0235-90250546.1" (critical for multi-account)

  // === CSV Metadata (extracted by classify-statements) ===
  fromDate?: string; // Transaction date range start
  untilDate?: string; // Transaction date range end
  openingBalance?: string; // Opening balance value
  closingBalance?: string; // Closing balance value

  // === User Overrides (optional manual corrections) ===
  account?: string; // Manual hledger account override

  // === Import Results (set by import-statements) ===
  rulesFile?: string; // Path to rules file used
  yearJournal?: string; // Year journal file written to
  transactionCount?: number; // Number of transactions imported

  // === Reconciliation (set by reconcile-statement) ===
  reconciledAccount?: string; // Account that was reconciled
  actualBalance?: string; // Actual balance from hledger
  lastTransactionDate?: string; // Date of last transaction
  reconciled?: boolean; // true if balances matched
}
```

### Design Principles

1. **Flat Structure**: No nested objects, no arrays - easy to read and update
2. **No Duplication**: Each piece of data appears exactly once
3. **Progressive Enhancement**: Fields populate as the pipeline progresses
4. **Mostly Optional**: Only `id`, `createdAt`, `updatedAt`, `filename`, `filePath`, `provider`, and `currency` are required

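The flat schema makes updates trivial: a shallow object spread is a complete merge. A minimal persistence sketch, with hypothetical helper names (the package's real code may differ):

```typescript
import { mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";

const MEMORY_DIR = ".memory";

// Required fields are typed strictly; optional fields accumulate as the
// pipeline progresses (see the interface above).
interface ImportContext {
  id: string;
  createdAt: string;
  updatedAt: string;
  filename: string;
  filePath: string;
  provider: string;
  currency: string;
  accountNumber?: string;
  transactionCount?: number;
  [field: string]: unknown; // remaining optional fields
}

// Pure merge: because the schema is flat, a shallow spread is sufficient,
// and updatedAt is refreshed on every change.
function applyUpdate(ctx: ImportContext, patch: Partial<ImportContext>): ImportContext {
  return { ...ctx, ...patch, updatedAt: new Date().toISOString() };
}

function saveContext(ctx: ImportContext): void {
  mkdirSync(MEMORY_DIR, { recursive: true });
  writeFileSync(join(MEMORY_DIR, `${ctx.id}.json`), JSON.stringify(ctx, null, 2));
}

function loadContext(id: string): ImportContext {
  return JSON.parse(readFileSync(join(MEMORY_DIR, `${id}.json`), "utf8"));
}
```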
## Context Lifecycle

### Stage 1: Creation (classify-statements)

When a CSV is classified, a new context is created:

```json
{
  "id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
  "createdAt": "2026-02-28T10:30:00.000Z",
  "updatedAt": "2026-02-28T10:30:00.000Z",

  "filename": "ubs-0235-90250546.1-transactions-2026-02-22-to-2026-02-24.csv",
  "filePath": "import/pending/ubs/chf/ubs-0235-90250546.1-transactions-2026-02-22-to-2026-02-24.csv",
  "originalFilename": "transactions-0235-90250546.1-2026-02.csv",

  "provider": "ubs",
  "currency": "chf",
  "accountNumber": "0235-90250546.1",

  "fromDate": "2026-02-22",
  "untilDate": "2026-02-24",
  "openingBalance": "4001.55",
  "closingBalance": "9001.55"
}
```

**What happens**:

- UUID generated for unique identification
- File tracked at its current location (`import/pending/...`)
- Provider/currency detected from the CSV
- Metadata extracted from the CSV headers
- Context saved to `.memory/{uuid}.json`

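Stage 1 boils down to: generate a UUID, stamp timestamps, and attach the extracted metadata. A sketch, assuming the metadata has already been parsed from the CSV (names are illustrative):

```typescript
import { randomUUID } from "node:crypto";

// Metadata already extracted by classification (hypothetical shape).
interface ClassifiedCsv {
  filename: string;
  filePath: string;
  originalFilename?: string;
  provider: string;
  currency: string;
  accountNumber?: string;
  fromDate?: string;
  untilDate?: string;
  openingBalance?: string;
  closingBalance?: string;
}

// Build a fresh context; createdAt and updatedAt start out identical.
function createContext(meta: ClassifiedCsv) {
  const now = new Date().toISOString();
  return { id: randomUUID(), createdAt: now, updatedAt: now, ...meta };
}

const ctx = createContext({
  filename: "ubs-0235-90250546.1-transactions-2026-02-22-to-2026-02-24.csv",
  filePath: "import/pending/ubs/chf/ubs-0235-90250546.1-transactions-2026-02-22-to-2026-02-24.csv",
  provider: "ubs",
  currency: "chf",
  accountNumber: "0235-90250546.1",
});
// ctx would then be serialized to `.memory/${ctx.id}.json`
```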
### Stage 2: Import (import-statements)

After transactions are imported, the context is updated:

```json
{
  "id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
  "createdAt": "2026-02-28T10:30:00.000Z",
  "updatedAt": "2026-02-28T10:31:15.000Z", // ← updated

  // ... previous fields unchanged ...

  "filePath": "import/done/ubs/chf/ubs-0235-90250546.1-transactions-2026-02-22-to-2026-02-24.csv", // ← moved

  "rulesFile": "ledger/rules/ubs-0235-90250546.1.rules", // ← added
  "yearJournal": "ledger/2026.journal", // ← added
  "transactionCount": 2 // ← added
}
```

**What happens**:

- `filePath` updated to reflect the move from `pending/` to `done/`
- `rulesFile` records which rules file was used
- `yearJournal` records which journal received the transactions
- `transactionCount` records how many transactions were imported
- `updatedAt` timestamp refreshed

### Stage 3: Reconciliation (reconcile-statement)

After balance reconciliation, the context is updated:

```json
{
  "id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
  "createdAt": "2026-02-28T10:30:00.000Z",
  "updatedAt": "2026-02-28T10:31:45.000Z", // ← updated again

  // ... all previous fields unchanged ...

  "reconciledAccount": "assets:bank:ubs:savings", // ← added
  "actualBalance": "CHF 9001.55", // ← added
  "lastTransactionDate": "2026-02-24", // ← added
  "reconciled": true // ← added
}
```

**What happens**:

- `reconciledAccount` records which hledger account was checked
- `actualBalance` records the balance hledger calculated
- `lastTransactionDate` records the cutoff date for the balance calculation
- `reconciled` flag indicates success/failure
- `updatedAt` timestamp refreshed

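The success test in this stage is a balance comparison. A hypothetical sketch of the check, assuming amounts arrive as decimal strings (optionally with a currency prefix) and are compared in integer cents to avoid floating-point drift:

```typescript
// Not the package's exact implementation; illustrative only.

// "CHF 9001.55" or "9001.55" -> 900155 (integer cents)
function toCents(amount: string): number {
  const numeric = amount.replace(/[^0-9.+-]/g, "");
  return Math.round(parseFloat(numeric) * 100);
}

// Compare the statement's closing balance against hledger's computed balance.
function checkBalance(expectedClosing: string, actualBalance: string) {
  return {
    actualBalance,
    reconciled: toCents(expectedClosing) === toCents(actualBalance),
  };
}

console.log(checkBalance("9001.55", "CHF 9001.55").reconciled); // true
console.log(checkBalance("9001.55", "CHF 9001.60").reconciled); // false
```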
### Stage 4: Persistence

**Context files are kept permanently** in `.memory/` for:

- Audit trail (what happened to each import)
- Debugging (inspect the context when issues occur)
- Re-running steps (a contextId can be passed to tools manually)

## Context Flow

### Pipeline Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                     USER DROPS CSV FILES                        │
│                     into import/incoming/                       │
└─────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│ STEP 1: classify-statements                                     │
│ ┌──────────────────────────────────────────────────────────┐    │
│ │ For each CSV file:                                       │    │
│ │ 1. Detect provider/currency from header                  │    │
│ │ 2. Extract metadata (account#, dates, balances)          │    │
│ │ 3. Generate UUID                                         │    │
│ │ 4. Create .memory/{uuid}.json context file               │    │
│ │ 5. Move CSV to import/pending/{provider}/{currency}/     │    │
│ └──────────────────────────────────────────────────────────┘    │
│                                                                 │
│ OUTPUT: Array of contextIds                                     │
│         ["uuid-1", "uuid-2", ...]                               │
└─────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│ STEPS 2-5: Process Each Context                                 │
│            (sequential, fail-fast)                              │
└─────────────────────────────────────────────────────────────────┘
                                │
        ┌───────────────────────┴───────────────────────┐
        │                                               │
        ▼                                               ▼
┌──────────────────┐                          ┌──────────────────┐
│  Context UUID-1  │                          │  Context UUID-2  │
│  (checking .0)   │                          │  (savings .1)    │
└──────────────────┘                          └──────────────────┘
        │                                               │
        ▼                                               ▼
┌─────────────────────────────────────────────────────────────────┐
│ STEP 2: Account Declarations (per context)                      │
│ - Load context to find CSV file                                 │
│ - Find matching rules file                                      │
│ - Ensure accounts declared in year journal                      │
└─────────────────────────────────────────────────────────────────┘
        │                                               │
        ▼                                               ▼
┌─────────────────────────────────────────────────────────────────┐
│ STEP 3: Dry Run (per context)                                   │
│ - Load context to find CSV file                                 │
│ - Run hledger print to preview transactions                     │
│ - Check for unknown accounts                                    │
│ - Abort if unknowns found                                       │
└─────────────────────────────────────────────────────────────────┘
        │                                               │
        ▼                                               ▼
┌─────────────────────────────────────────────────────────────────┐
│ STEP 4: Import (per context)                                    │
│ - Load context to find CSV file                                 │
│ - Import transactions to year journal                           │
│ - Move CSV from pending/ to done/                               │
│ - Update context: rulesFile, yearJournal, count                 │
└─────────────────────────────────────────────────────────────────┘
        │                                               │
        ▼                                               ▼
┌─────────────────────────────────────────────────────────────────┐
│ STEP 5: Reconcile (per context)                                 │
│ - Load context to get accountNumber                             │
│ - Find CSV by accountNumber (not by timestamp!)                 │
│ - Compare expected vs actual balance                            │
│ - Update context: reconciliation results                        │
└─────────────────────────────────────────────────────────────────┘
        │                                               │
        ▼                                               ▼
┌──────────────────┐                          ┌──────────────────┐
│   ✓ Complete     │                          │   ✓ Complete     │
│   Context in     │                          │   Context in     │
│  .memory/uuid-1  │                          │  .memory/uuid-2  │
└──────────────────┘                          └──────────────────┘
```

### Key Concepts

1. **One Context Per File**: Each CSV gets its own UUID and context
2. **Sequential Processing**: Contexts are processed one at a time (fail-fast on error)
3. **Progressive Updates**: Each step adds data to the context
4. **Isolated State**: Each context is independent (checking vs savings don't interfere)

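The sequential, fail-fast behavior in the diagram amounts to two nested loops. A minimal sketch (the step functions are placeholders for the real tools):

```typescript
type Step = (contextId: string) => Promise<void>;

// Outer loop: one context at a time. Inner loop: steps 2-5 in order.
// Any thrown error propagates immediately and aborts the whole run.
async function runPipeline(contextIds: string[], steps: Step[]): Promise<void> {
  for (const contextId of contextIds) {
    for (const step of steps) {
      await step(contextId);
    }
  }
}

// Usage, with the real tools wired in as steps (names illustrative):
// await runPipeline(["uuid-1", "uuid-2"], [declareAccounts, dryRun, importCsv, reconcile]);
```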
## File Storage

### Directory Structure

```
your-project/
├── .memory/                                      # Import contexts
│   ├── f47ac10b-58cc-4372-a567-0e02b2c3d479.json # Checking account context
│   ├── 8b3e9c21-1a4f-4d89-b123-9f8e7d6c5b4a.json # Savings account context
│   └── a1b2c3d4-5e6f-4a8b-9c0d-e1f2a3b4c5d6.json # Another import
├── import/
│   ├── incoming/                                 # Drop CSV files here
│   ├── pending/                                  # Classified, awaiting import
│   │   └── ubs/
│   │       └── chf/
│   │           ├── ubs-0235-90250546.0-transactions-2026-02-22-to-2026-02-24.csv
│   │           └── ubs-0235-90250546.1-transactions-2026-02-22-to-2026-02-24.csv
│   └── done/                                     # Imported successfully
│       └── ubs/
│           └── chf/
│               ├── ubs-0235-90250546.0-transactions-2026-02-22-to-2026-02-24.csv
│               └── ubs-0235-90250546.1-transactions-2026-02-22-to-2026-02-24.csv
└── ledger/
    ├── .hledger.journal                          # Main journal
    ├── 2026.journal                              # Year journal
    └── rules/
        ├── ubs-0235-90250546.0.rules             # Checking account rules
        └── ubs-0235-90250546.1.rules             # Savings account rules
```

### Context File Naming

**Format**: `.memory/{uuid}.json`

**Example**: `.memory/f47ac10b-58cc-4372-a567-0e02b2c3d479.json`

**Why UUIDs?**

- Guaranteed unique (no collisions)
- No dependency on the filename (which may change)
- Portable (can be referenced from any tool/script)
- Time-independent (not based on timestamps)

### Gitignore

The `.memory/` directory is **gitignored** by default. Context files are:

- Temporary/transient data
- Specific to local import runs
- Not part of the accounting data (which lives in `ledger/`)

However, they persist locally for audit/debugging purposes.

## Design Decisions

### 1. One Context Per File (Not Per Import Batch)

**Decision**: Each CSV gets its own context, even if multiple CSVs are imported together.

**Rationale**:

- Enables independent tracking of each account
- Supports sequential processing (a failure in one doesn't corrupt the others)
- Simplifies state management (no arrays, no nested structures)

**Alternative considered**: Single context for the entire batch

- **Rejected**: Complicates multi-account scenarios, harder to debug partial failures

### 2. UUID-Based Naming

**Decision**: Use random UUIDs for context IDs, not timestamps or sequential numbers.

**Rationale**:

- No collisions (multiple imports can run concurrently in theory)
- No ordering assumptions (contexts are independent)
- Portable (context IDs can be shared across systems)

**Alternative considered**: Timestamp-based IDs (e.g., `2026-02-28-103000.json`)

- **Rejected**: Collisions possible, implies temporal ordering

### 3. Flat Schema (No Nesting)

**Decision**: All fields at the top level, no nested objects or arrays.

**Rationale**:

- Easy to read/debug (inspect any field with `jq .fieldName`)
- Simple updates (no need to preserve nested structure)
- No data duplication (currency appears once, not in multiple places)

**Alternative considered**: Nested structure (e.g., `{ classification: {...}, import: {...}, reconcile: {...} }`)

- **Rejected**: More complex to update, harder to query individual fields

### 4. Sequential Processing (Fail-Fast)

**Decision**: Process contexts one at a time. If one fails, stop immediately.

**Rationale**:

- Easier to debug (failure is isolated to a specific context)
- Prevents partial imports (all-or-nothing per context)
- Simpler error handling (no need to track multiple failures)

**Alternative considered**: Parallel processing with rollback

- **Rejected**: Complex error handling, risk of data corruption, unnecessary optimization

### 5. Persistent Context Files

**Decision**: Keep context files in `.memory/` indefinitely (don't auto-delete).

**Rationale**:

- Audit trail (what happened during each import)
- Debugging aid (inspect the context when issues occur)
- Enables re-running steps (a contextId can be passed to tools manually)

**Alternative considered**: Delete after successful pipeline completion

- **Rejected**: Loses the audit trail, harder to debug issues after the fact

## Real-World Examples

### Example 1: Single Account Import

**Scenario**: Import one UBS checking account statement

**Step 1**: Drop the CSV in `import/incoming/`

```
transactions-export.csv
```

**Step 2**: Run `classify-statements`

```json
{
  "classified": [
    {
      "filename": "ubs-0235-90250546.0-transactions-2026-02-01-to-2026-02-28.csv",
      "contextId": "abc123-...",
      "provider": "ubs",
      "currency": "chf"
    }
  ]
}
```

**Context created**: `.memory/abc123-....json`

```json
{
  "id": "abc123-...",
  "filename": "ubs-0235-90250546.0-transactions-2026-02-01-to-2026-02-28.csv",
  "filePath": "import/pending/ubs/chf/ubs-0235-90250546.0-transactions-2026-02-01-to-2026-02-28.csv",
  "provider": "ubs",
  "currency": "chf",
  "accountNumber": "0235-90250546.0",
  "fromDate": "2026-02-01",
  "untilDate": "2026-02-28",
  "closingBalance": "5432.10"
}
```

**Step 3**: Run `import-pipeline` (or the individual steps)

- Pipeline receives `["abc123-..."]`
- Processes context abc123
- Updates the context with import/reconcile results

**Final context**: `.memory/abc123-....json`

```json
{
  "id": "abc123-...",
  "filename": "ubs-0235-90250546.0-transactions-2026-02-01-to-2026-02-28.csv",
  "filePath": "import/done/ubs/chf/ubs-0235-90250546.0-transactions-2026-02-01-to-2026-02-28.csv",
  "provider": "ubs",
  "currency": "chf",
  "accountNumber": "0235-90250546.0",
  "fromDate": "2026-02-01",
  "untilDate": "2026-02-28",
  "closingBalance": "5432.10",
  "rulesFile": "ledger/rules/ubs-0235-90250546.0.rules",
  "yearJournal": "ledger/2026.journal",
  "transactionCount": 42,
  "reconciledAccount": "assets:bank:ubs:checking",
  "actualBalance": "CHF 5432.10",
  "lastTransactionDate": "2026-02-28",
  "reconciled": true
}
```

### Example 2: Multi-Account Import (The Critical Use Case)

**Scenario**: Import both UBS checking (`.0`) and savings (`.1`) for the same month

**Step 1**: Drop both CSVs in `import/incoming/`

```
transactions-checking-2026-02.csv
transactions-savings-2026-02.csv
```

**Step 2**: Run `classify-statements`

```json
{
  "classified": [
    {
      "filename": "ubs-0235-90250546.0-transactions-2026-02-01-to-2026-02-28.csv",
      "contextId": "checking-uuid",
      "accountNumber": "0235-90250546.0"
    },
    {
      "filename": "ubs-0235-90250546.1-transactions-2026-02-01-to-2026-02-28.csv",
      "contextId": "savings-uuid",
      "accountNumber": "0235-90250546.1"
    }
  ]
}
```

**Two contexts created**:

`.memory/checking-uuid.json`:

```json
{
  "id": "checking-uuid",
  "accountNumber": "0235-90250546.0",
  "filePath": "import/pending/ubs/chf/ubs-0235-90250546.0-transactions-2026-02-01-to-2026-02-28.csv",
  "closingBalance": "5432.10"
}
```

`.memory/savings-uuid.json`:

```json
{
  "id": "savings-uuid",
  "accountNumber": "0235-90250546.1",
  "filePath": "import/pending/ubs/chf/ubs-0235-90250546.1-transactions-2026-02-01-to-2026-02-28.csv",
  "closingBalance": "12500.00"
}
```

**Step 3**: Run `import-pipeline`

- Pipeline receives `["checking-uuid", "savings-uuid"]`
- **Processes checking-uuid first**:
  - import-statements loads the context, finds the CSV by filePath
  - reconcile-statement loads the context, uses accountNumber to find the CSV
  - ✓ Reconciles the checking account
- **Then processes savings-uuid**:
  - import-statements loads the context, finds the CSV by filePath
  - reconcile-statement loads the context, uses accountNumber to find the CSV
  - ✓ Reconciles the savings account

**Critical point**: Because each context has its own `accountNumber`, reconcile-statement can filter CSVs correctly:

```typescript
// reconcile-statement for checking-uuid
const csvFiles = findCsvFiles(doneDir, 'ubs', 'chf').filter((file) =>
  file.includes('0235-90250546.0') // ← from context
);

// reconcile-statement for savings-uuid
const csvFiles = findCsvFiles(doneDir, 'ubs', 'chf').filter((file) =>
  file.includes('0235-90250546.1') // ← from context
);
```

**Without contexts**: Both would just pick the most recent CSV → **wrong account reconciled** ❌

**With contexts**: Each picks the correct CSV → **right account reconciled** ✓

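The hard-coded filters above generalize naturally. A sketch of a defensive version (`pickCsvForContext` is a hypothetical helper) that refuses to guess when the match is ambiguous:

```typescript
// Given candidate CSV paths for a provider/currency, narrow by the
// context's accountNumber and insist on exactly one match.
function pickCsvForContext(candidates: string[], accountNumber?: string): string {
  const matches = accountNumber
    ? candidates.filter((file) => file.includes(accountNumber))
    : candidates;
  if (matches.length !== 1) {
    throw new Error(
      `Expected exactly one CSV, found ${matches.length}` +
        (accountNumber ? ` for account ${accountNumber}` : "")
    );
  }
  return matches[0];
}
```

With two UBS CHF files on disk, passing `"0235-90250546.1"` returns the savings CSV, while omitting the account number throws instead of silently picking one.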
## Troubleshooting

### Context Not Found

**Error**: `Failed to load import context: abc123-...`

**Cause**: The context file doesn't exist in `.memory/`

**Solutions**:

1. Check whether the context file exists: `ls .memory/abc123-*.json`
2. Verify the contextId is correct (check the classify-statements output)
3. Re-run classify-statements if the context was accidentally deleted

### Wrong CSV Reconciled

**Error**: `Balance mismatch` or `Reconciled wrong account`

**Diagnosis**:

1. Check the context file: `cat .memory/{contextId}.json`
2. Verify the `accountNumber` field is set correctly
3. Check that the CSV filename contains the account number: `ls import/done/ubs/chf/`

**Cause**: Usually means `accountNumber` wasn't extracted during classification

**Solutions**:

1. Update the provider config to extract the account number via `renamePattern`
2. Check the CSV metadata extraction in `config/import/providers.yaml`

### Multiple Contexts for Same File

**Symptom**: Multiple UUIDs for what seems like the same CSV

**Cause**: This is normal if you've re-run classify-statements multiple times

**Impact**: No problem - each context is independent. The pipeline will process each one separately.

**Cleanup**: You can manually delete old context files if desired: `rm .memory/old-uuid.json`

### Context Out of Sync with File Location

**Error**: `CSV file not found: import/pending/...` (but the file is in `import/done/`)

**Cause**: The context's `filePath` is stale (the file was moved manually without updating the context)

**Solutions**:

1. Don't move CSV files manually - let the tools do it
2. If already moved, update the context: `jq '.filePath = "new/path"' .memory/{uuid}.json > temp && mv temp .memory/{uuid}.json`
3. Or delete the context and re-classify

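Stale `filePath` values can also be detected mechanically. A small audit sketch (illustrative; in practice you would parse `.memory/*.json` with `fs` and pass `existsSync` as the predicate):

```typescript
// A parsed context file paired with the path it was loaded from.
interface ContextRecord {
  contextFile: string; // e.g. ".memory/abc.json"
  filePath?: string;   // path recorded inside the context
}

// Report contexts whose recorded filePath no longer exists on disk.
function findStaleContexts(
  records: ContextRecord[],
  exists: (path: string) => boolean
): string[] {
  return records
    .filter((r) => r.filePath !== undefined && !exists(r.filePath))
    .map((r) => `${r.contextFile} -> ${r.filePath}`);
}

// Example: one context still points at pending/ although the CSV moved to done/.
const onDisk = new Set(["import/done/ubs/chf/a.csv"]);
const stale = findStaleContexts(
  [{ contextFile: ".memory/abc.json", filePath: "import/pending/ubs/chf/a.csv" }],
  (p) => onDisk.has(p)
);
console.log(stale); // reports the abc context as stale
```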
### Inspect Context Easily

**Tip**: Use `jq` to pretty-print and query contexts

```bash
# Pretty-print entire context
jq . .memory/abc123-....json

# Get specific field
jq .accountNumber .memory/abc123-....json

# Find all contexts for UBS
jq 'select(.provider == "ubs")' .memory/*.json

# Find contexts with failed reconciliation
jq 'select(.reconciled == false)' .memory/*.json
```

## See Also

- [classify-statements Tool](../tools/classify-statements.md) - Creates contexts
- [import-statements Tool](../tools/import-statements.md) - Updates contexts with import results
- [reconcile-statement Tool](../tools/reconcile-statement.md) - Updates contexts with reconciliation results
- [import-pipeline Tool](../tools/import-pipeline.md) - Orchestrates context flow through all steps